Is Science Slowing Down?

[This post was up a few weeks ago before getting taken down for complicated reasons. They have been sorted out and I’m trying again.]

Is scientific progress slowing down? I recently got a chance to attend a conference on this topic, centered around a paper by Bloom, Jones, Van Reenen & Webb (2018).

BJRW identify areas where technological progress is easy to measure – for example, the number of transistors on a chip. They measure the rate of progress over the past century or so, and the number of researchers in the field over the same period. For example, here’s the transistor data:

This is the standard presentation of Moore’s Law – the number of transistors you can fit on a chip doubles about every two years (i.e. it grows by about 35% per year). This is usually presented as an amazing example of modern science getting things right, and no wonder – it means you can go from a few thousand transistors per chip in 1971 to billions today, with the corresponding increase in computing power.

But BJRW have a pessimistic take. There are eighteen times more people involved in transistor-related research today than in 1971. So if in 1971 it took 1000 scientists to increase transistor density 35% per year, today it takes 18,000 scientists to do the same task. So apparently the average transistor scientist is eighteen times less productive today than fifty years ago. That should be surprising and scary.

But isn’t it unfair to compare percent increase in transistors with absolute increase in transistor scientists? That is, a graph comparing absolute number of transistors per chip vs. absolute number of transistor scientists would show two similar exponential trends. Or a graph comparing percent change in transistors per year vs. percent change in number of transistor scientists per year would show two similar linear trends. Either way, there would be no problem and productivity would appear constant since 1971. Isn’t that a better way to do things?

A lot of people asked paper author Michael Webb this at the conference, and his answer was no. He thinks that intuitively, each “discovery” should decrease transistor size by a certain amount. For example, if you discover a new material that allows transistors to be 5% smaller along one dimension, then you can fit 5% more transistors on your chip whether there were a hundred there before or a million. Since the relevant factor is discoveries per researcher, and each discovery is represented as a percent change in transistor size, it makes sense to compare percent change in transistor size with absolute number of researchers.
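
To make the bookkeeping concrete, here is a minimal sketch of the productivity measure this argument implies – proportional progress divided by headcount – using the round illustrative numbers above rather than BJRW’s actual data:

    # Research productivity as BJRW-style bookkeeping: proportional progress
    # per year divided by number of researchers.  The 1,000 / 18,000 figures
    # are the post's illustrative round numbers, not the paper's data.
    growth_rate = 0.35          # ~35% per year transistor density growth, in both eras

    researchers_1971 = 1_000
    researchers_today = 18_000

    productivity_1971 = growth_rate / researchers_1971
    productivity_today = growth_rate / researchers_today

    print(f"1971:  {productivity_1971:.2e} growth per researcher per year")
    print(f"Today: {productivity_today:.2e} growth per researcher per year")
    print(f"Decline factor: {productivity_1971 / productivity_today:.0f}x")
    # The decline factor comes out to 18x, i.e. exactly the growth in headcount,
    # because the numerator (the growth rate) stayed flat.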

Anyway, most other measurable fields show the same pattern of constant progress in the face of exponentially increasing number of researchers. Here’s BJRW’s data on crop yield:

The solid and dashed lines are two different measures of crop-related research. Even though the crop-related research increases by a factor of 6-24x (depending on how it’s measured), crop yields grow at a relatively constant 1% rate for soybeans, and an apparently declining rate of roughly 3% for corn.

BJRW go on to prove the same is true for whatever other scientific fields they care to measure. Measuring scientific progress is inherently difficult, but their finding of constant or log-constant progress in most areas accords with Nintil’s overview of the same topic, which gives us graphs like

…and dozens more like it. And even when we use data that are easy to measure and hard to fake, like number of chemical elements discovered, we get the same linearity:

Meanwhile, the increase in researchers is obvious. Not only is the population increasing (by a factor of about 2.5x in the US since 1930), but the percent of people with college degrees has quintupled over the same period. The exact numbers differ from field to field, but orders of magnitude increases are the norm. For example, the number of people publishing astronomy papers seems to have dectupled over the past fifty years or so.

BJRW put all of this together into total number of researchers vs. total factor productivity of the economy, and find…

…about the same as with transistors, soybeans, and everything else. So if you take their methodology seriously, over the past ninety years, each researcher has become about 25x less productive in making discoveries that translate into economic growth.

Participants at the conference had some explanations for this, of which the ones I remember best are:

1. Only the best researchers in a field actually make progress, and the best researchers are already in a field, and probably couldn’t be kept out of the field with barbed wire and attack dogs. If you expand a field, you will get a bunch of merely competent careerists who treat it as a 9-to-5 job. A field of 5 truly inspired geniuses and 5 competent careerists will make X progress. A field of 5 truly inspired geniuses and 500,000 competent careerists will make the same X progress. Adding further competent careerists is useless for doing anything except making graphs look more exponential, and we should stop doing it. See also Price’s Law Of Scientific Contributions.

2. Certain features of the modern academic system, like underpaid PhDs, interminably long postdocs, endless grant-writing drudgery, and clueless funders have lowered productivity. The 1930s academic system was indeed 25x more effective at getting researchers to actually do good research.

3. All the low-hanging fruit has already been picked. For example, element 117 was discovered by an international collaboration who got an unstable isotope of berkelium from the single accelerator in Tennessee capable of synthesizing it, shipped it to a nuclear reactor in Russia where it was attached to a titanium film, brought it to a particle accelerator in a different Russian city where it was bombarded with a custom-made exotic isotope of calcium, sent the resulting data to a global team of theorists, and eventually found a signature indicating that element 117 had existed for a few milliseconds. Meanwhile, the first modern element discovery, that of phosphorus in the 1670s, came from a guy looking at his own piss. We should not be surprised that discovering element 117 needed more people than discovering phosphorus.

Needless to say, my sympathies lean towards explanation number 3. But I worry even this isn’t dismissive enough. My real objection is that constant progress in science in response to exponential increases in inputs ought to be our null hypothesis, and that it’s almost inconceivable that it could ever be otherwise.

Consider a case in which we extend these graphs back to the beginning of a field. For example, psychology started with Wilhelm Wundt and a few of his friends playing around with stimulus perception. Let’s say there were ten of them working for one generation, and they discovered ten revolutionary insights worthy of their own page in Intro Psychology textbooks. Okay. But now there are about a hundred thousand experimental psychologists. Should we expect them to discover a hundred thousand revolutionary insights per generation?

Or: the economic growth rate in 1930 was 2% or so. If it scaled with number of researchers, it ought to be about 50% per year today with our 25x increase in researcher number. That kind of growth would mean that the average person who made $30,000 a year in 2000 should make $50 million a year in 2018.
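
A quick compounding check of that arithmetic (the $30,000 starting income and the 2%-vs-50% rates are the post’s illustrative figures):

    # Compound a $30,000 income from 2000 to 2018 at the actual ~2% growth
    # rate versus the hypothetical 50% rate implied by scaling with researchers.
    income_2000 = 30_000
    years = 18

    at_2_percent = income_2000 * 1.02 ** years
    at_50_percent = income_2000 * 1.50 ** years

    print(f"At 2% per year:  ${at_2_percent:,.0f}")    # ~$43,000
    print(f"At 50% per year: ${at_50_percent:,.0f}")   # ~$44 million, the "$50 million" ballpark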

Or: in 1930, life expectancy at 65 was increasing by about two years per decade. But if that scaled with number of biomedicine researchers, that should have increased to ten years per decade by about 1955, which would mean everyone would have become immortal starting sometime during the Baby Boom, and we would currently be ruled by a deathless God-Emperor Eisenhower.

Or: the ancient Greek world had about 1% the population of the current Western world, so if the average Greek was only 10% as likely to be a scientist as the average modern, there were only 1/1000th as many Greek scientists as modern ones. But the Greeks made such great discoveries as the size of the Earth, the distance of the Earth to the sun, the prediction of eclipses, the heliocentric theory, Euclid’s geometry, the nervous system, the cardiovascular system, etc, and brought technology up from the Bronze Age to the Antikythera mechanism. Even adjusting for the long time scale to which “ancient Greece” refers, are we sure that we’re producing 1000x as many great discoveries as they did? If we extended BJRW’s graph all the way back to Ancient Greece, adjusting for the change in researchers as civilizations rise and fall, wouldn’t it keep the same shape as it does for this century? Isn’t the real question not “Why isn’t Dwight Eisenhower immortal god-emperor of Earth?” but “Why isn’t Marcus Aurelius immortal god-emperor of Earth?”

Or: what about human excellence in other fields? Shakespearean England had 1% of the population of the modern Anglosphere, and presumably even fewer than 1% of the artists. Yet it gave us Shakespeare. Are there a hundred Shakespeare-equivalents around today? This is a harder problem than it seems – Shakespeare has become so venerable with historical hindsight that maybe nobody would acknowledge a Shakespeare-level master today even if they existed – but still, a hundred Shakespeares? If we look at some measure of great works of art per era, we find past eras giving us far more than we would predict from their population relative to our own. This is very hard to judge, and I would hate to be the guy who has to decide whether Harry Potter is better or worse than the Aeneid. But still? A hundred Shakespeares?

Or: what about sports? Here’s marathon records for the past hundred years or so:

In 1900, there were only two local marathons (eg the Boston Marathon) in the world. Today there are over 800. Also, the world population has increased by a factor of five (more than that in the East African countries that give us literally 100% of top male marathoners). Despite that, progress in marathon records has been steady or declining. Most other Olympic sports show the same pattern.

All of these lines of evidence lead me to the same conclusion: constant growth in response to exponentially increasing inputs is the null hypothesis. If it weren’t, we should be expecting 50% year-on-year GDP growth, easily discovered immortality, and the like. Nobody expected that before reading BJRW, so we shouldn’t be surprised when BJRW provide a data-driven model showing it isn’t happening. I realize this in itself isn’t an explanation; it doesn’t tell us why researchers can’t maintain a constant level of output as measured in discoveries. It sounds a little like “God wouldn’t design the universe that way”, which is a kind of suspicious line of argument, especially for atheists. But it at least shifts us from a lens where we view the problem as “What three tweaks should we make to the graduate education system to fix this problem right now?” to one where we view it as “Why isn’t Marcus Aurelius immortal?”

And through such a lens, only the “low-hanging fruit” explanation makes sense. Explanation 1 – that progress depends only on a few geniuses – isn’t enough. After all, the Greece-today difference is partly based on population growth, and population growth should have produced proportionately more geniuses. Explanation 2 – that PhD programs have gotten worse – isn’t enough. There would have to be a worldwide monotonic decline in every field (including sports and art) from Athens to the present day. Only Explanation 3 holds water.

I brought this up at the conference, and somebody reasonably objected – doesn’t that mean science will stagnate soon? After all, we can’t keep feeding it an exponentially increasing number of researchers forever. If nothing else stops us, then at some point, 100% (or the highest plausible amount) of the human population will be researchers, we can only increase as fast as population growth, and then the scientific enterprise collapses.

I answered that the Gods Of Straight Lines are more powerful than the Gods Of The Copybook Headings, so if you try to use common sense on this problem you will fail.

Imagine being a futurist in 1970 presented with Moore’s Law. You scoff: “If this were to continue only 20 more years, it would mean a million transistors on a single chip! You would be able to fit an entire supercomputer in a shoebox!” But common sense was wrong and the trendline was right.

“If this were to continue only 40 more years, it would mean ten billion transistors per chip! You would need more transistors on a single chip than there are humans in the world! You could have computers more powerful than any today, that are too small to even see with the naked eye! You would have transistors with like a double-digit number of atoms!” But common sense was wrong and the trendline was right.

Or imagine being a futurist in ancient Greece presented with world GDP doubling time. Take the trend seriously, and in two thousand years, the future would be fifty thousand times richer. Every man would live better than the Shah of Persia! There would have to be so many people in the world you would need to tile entire countries with cityscape, or build structures higher than the hills just to house all of them. Just to sustain itself, the world would need transportation networks orders of magnitude faster than the fastest horse. But common sense was wrong and the trendline was right.
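
For what it’s worth, here is the arithmetic implicit in that last vignette, as a small sketch (the “fifty thousand times richer over two thousand years” figure is the post’s illustration, not a measured series):

    import math

    # If taking the ancient Greek trend seriously means being 50,000x richer
    # after 2,000 years, what growth rate does that imply?
    wealth_multiple = 50_000
    years = 2_000

    doubling_time = years / math.log2(wealth_multiple)
    annual_growth = wealth_multiple ** (1 / years) - 1

    print(f"Implied doubling time: ~{doubling_time:.0f} years")   # ~128 years
    print(f"Implied annual growth: ~{annual_growth:.2%}")         # ~0.54% per year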

I’m not saying that no trendline has ever changed. Moore’s Law seems to be legitimately slowing down these days. The Dark Ages shifted every macrohistorical indicator for the worse, and the Industrial Revolution shifted every macrohistorical indicator for the better. Any of these sorts of things could happen again, easily. I’m just saying that “Oh, that exponential trend can’t possibly continue” has a really bad track record. I do not understand the Gods Of Straight Lines, and honestly they creep me out. But I would not want to bet against them.

Grace et al’s survey of AI researchers shows they predict that AIs will start being able to do science in about thirty years, and will exceed the productivity of human researchers in every field shortly afterwards. Suddenly “there aren’t enough humans in the entire world to do the amount of research necessary to continue this trend line” stops sounding so compelling.

At the end of the conference, the moderator asked how many people thought that it was possible for a concerted effort by ourselves and our institutions to “fix” the “problem” indicated by BJRW’s trends. Almost the entire room raised their hands. Everyone there was smarter and more prestigious than I was (also richer, and in many cases way more attractive), but with all due respect I worry they are insane. This is kind of how I imagine their worldview looking:

I realize I’m being fatalistic here. Doesn’t my position imply that the scientists at Intel should give up and let the Gods Of Straight Lines do the work? Or at least that the head of the National Academy of Sciences should do something like that? That Francis Bacon was wasting his time by inventing the scientific method, and Fred Terman was wasting his time by organizing Silicon Valley? Or perhaps that the Gods Of Straight Lines were acting through Bacon and Terman, and they had no choice in their actions? How do we know that the Gods aren’t acting through our conference? Or that our studying these things isn’t the only thing that keeps the straight lines going?

I don’t know. I can think of some interesting models – one made up of a thousand random coin flips a year has some nice qualities – but I don’t know.

I do know you should be careful what you wish for. If you “solved” this “problem” in classical Athens, Attila the Hun would have had nukes. Remember Yudkowsky’s Law of Mad Science: “Every eighteen months, the minimum IQ necessary to destroy the world drops by one point.” Do you really want to make that number ten points? A hundred? I am kind of okay with the function mapping number of researchers to output that we have right now, thank you very much.

The conference was organized by Patrick Collison and Michael Nielsen; they have written up some of their thoughts here.


283 Responses to Is Science Slowing Down?

  1. Well... says:

    Scientific progress doesn’t slow down, it goes “boink”.

    Sorry, somebody had to say it.

  2. amaranth says:

    > Shakespearean England had 1% of the population of the modern Anglosphere, and presumably even fewer than 1% of the artists.

    seriously? how are you so completely out of touch with human nature? i don’t understand what it takes to result in you confidently denying extremely basic facts about our history.

    here’s a small excerpt from the book Impro, which you certainly sound like you could never possibly understand

    • The Pachyderminator says:

      Please don’t make comments like this, this kind of hostile condescension has zero benefit and we need less of it, kthx

      • amaranth says:

        it would be good if you didn’t make comments like this. we need less of your kind of hostile condescension, kthx

        • albatross11 says:

          amaranth:

          Do you have anything interesting to say, or are you just here to waste everyone’s time?

    • Aapje says:

      @amaranth

      You are reading Scott in bad faith. He could very well have meant professional artists, who surely were more rare back then than now.

    • thomasthethinkengine says:

      I’m a firm believer in an upvote, downvote system like on Reddit. There’s no reason to prioritise early comments over later ones. Stuff like this should move down and the better comments that came in later should be elevated. I bet if you asked, Scott, a dozen people who read this blog could swiftly sort out a system like that for these comments.

      • silver_swift says:

        I would upvote this comment.

        • Elementaldex says:

          So would I. Upvote Downvote is the thing I really like about Reddit. It prioritizes by importance/interest to the relevant community.

      • Meister says:

        One of the downsides of reddit’s upvote system is that it also privileges early comments, due to the snowball effect of initial upvotes -> more visibility -> more upvotes -> and so on.

        Vote systems also promote snarky/low-value comments.

        I support research into a system that filters out the low quality comments without promoting other kinds of low quality comments, such as

        -upvotes_up_to_a_cap (something low, like 50 votes) or perhaps
        -a simple vote-threshold system, like hiding any comment that receives >90% downvotes.

        Naturally, all vote counts should be hidden to disincentivise vote seeking.

        • Aapje says:

          Reddit actually has multiple sorting options. ‘Top’ sorts merely by upvotes minus downvotes, but ‘best’ considers sample size.

        • cryptoshill says:

          Hiding comments with low votes is just a way of saying “if I say anything the community disagrees with strongly enough, the argument doesn’t have to be had”.

          Vote scoring in general inherently increases the prevalence of groupthink, and turns a lively discussion forum into a place that HAS a lowest common denominator, where posts that don’t appeal to it die off.

      • AG says:

        The proof is in the pudding: the SSC reddit is not higher quality than the SSC commentariat.

        • Lambert says:

          Selection effects.
          The best commenters here are the kind that wouldn’t be on reddit.

          • HeelBearCub says:

            But one reason I am not on reddit is that I do not like systems that muck with comment order. I could perhaps see as workable a voluntary system that allowed one to vote up/down and also hide comments below a certain threshold. But I wouldn’t use it.

          • AG says:

            As HBC says. The selection effects are themselves evidence of the inferiority of the reddit comment system.

            (See also previous testimony where SSC commenters admitted that they’re more abrasive on reddit.)

          • Dan L says:

            @ HeelBearCub:

            Reddit has options to sort by chronological order, if the user so chooses. Or are you talking about a system that forces users to use such sorting? There’s a place for deliberate social engineering to shape an online community, but it’s not clear to me that that particular change would be an improvement.

            @ AG:

            An alternate selection effect: people who preferentially engage on platform X rather than Y will tend to be those that believe X is superior to Y. What metrics are we grading on? (Why did we choose those metrics?)

            And, uh, I think I interpret “SSC regulars keep getting banned off the subreddit” a little differently. There’s a longer discussion about people’s increased willingness to shit where other people eat, but I don’t think it’s too much to argue that the effect may be symmetrical.

          • AG says:

            @Dan L
            The implication of SSC regulars being banned from the subreddit depends: do you believe that the subreddit has higher comment quality than the blog comments?

          • Dan L says:

            I just explicitly asked you what metrics you were using. That wasn’t rhetorical – I can think of a few where the subreddit comes out ahead, and a few where these comments win out, and a few more where it’s competitive.

            And I can think of a few mechanisms completely orthogonal to comment quality that might render the userbases mutually unpalatable. That’s the part that actually annoys me, I think – the jump from “I don’t like that kind of discussion” to “therefore they’re terrible” without even a veneer of objectivity.

      • Nornagest says:

        I used to agree with you, but after spending a lot of time with the Reddit-like Less Wrong system and also a lot of time with the voteless SSC system, I’ve found nothing to privilege the former over the latter. Sure, we get a few comments like the ancestor, but so did Less Wrong, and a bunch of vote manipulation drama on top of it.

        The systems I really hate, though, are the ones with upvotes but no downvotes. Those encourage groupthink while doing nothing to discourage being an asshole, so they produce a lot of groupthinking assholes.

    • DS says:

      Scott is correct. In Shakespeare’s day, 80% of the population had to farm for food, less than half were literate, women weren’t allowed in most literate professions, and “writes for publication” generally meant “either noble or personally sponsored by nobility.” (Shakespeare was part of “The King’s Men”, for example, an acting company with King James I himself as their sponsor.)

      So far fewer of England’s population then had any chance to work full-time making publishable or permanent art.

      I understand you prefer a different definition of “artist,” where anyone who makes up a new bit of song or changes a story when they tell it to their kids is an artist. But every famous Elizabethan writer came from a background far wealthier than ordinary farmers; the vocabulary alone guaranteed that. Ordinary folk didn’t get the education or the access to write for national posterity.

      So no, you were never going to get the works of Shakespeare from some village’s best farmer-singer. Not because she wasn’t capable of it, but because the economy and prejudice of the time was never going to give her the chance to find out.

      • Tarpitz says:

        You also weren’t going to get it because she wasn’t capable of it. Shakespeare is a ludicrous, ludicrous outlier. I have no time for conspiracy theories along the lines of “the works of Shakespeare were written by the Earl of Oxford” because they don’t explain anything. A conspiracy theory along the lines of “Shakespeare was an alien cyborg from the future”, on the other hand, might have some merit.

        I’d also add that “writes for publication” is a bit of a stretch, as applied to Shakespeare. He was certainly aware that his work was being/would be published (“So long lives this”, “states unborn and accents yet unknown” etc.) but the publication was a mix of sketchy pirate versions during his lifetime and pretty good versions edited by his friends and colleagues after his death. He made his money – very good money for the time – from the live performance of his work, probably mostly in his capacity as what we would now call a producer.

      • amaranth says:

        have you heard of material shaping lol ink and paper is one of many materials

    • Murphy says:

      >small excerpt

      That’s just standard elderly griping. And yes that is a basic element of human nature.

      In ancient times it was conceivable for one bright individual to learn everything of note up to the edge of human knowledge across most fields.

      Now, even if I wanted to spend all day every day reading research papers I’d have trouble keeping up with the cutting edge in just one field like chemistry. To really keep up I’d need to focus on one tiny area of chemistry.

      Shakespeare plagiarized from previous works and his contemporaries. 99.something% of the stuff written in Shakespearean England was shit and nobody remembers it. We remember a few dozen authors of note from an entire generation. It’s like pointing to that one remaining lightbulb still going after 50 years and saying “oh, they don’t make them like that any more” without realising the sampling bias. That one lightbulb is the one survivor from hundreds of thousands, the last remnant of a long tail hovering just above zero.

      There’s now hundreds of thousands of kids writing stories. Most of them are shit. A couple hundred thousand of the stories are harry potter fanfics. But it doesn’t really matter. Some really good authors end up standing out and in 300 years elderly curmudgeons will be griping that the year 2318 doesn’t produce anything on a par with the hundreds of great authors spawned from the millennial generation.

      • Doug S. says:

        Engineers these days are perfectly capable of designing a lightbulb that would last fifty years; it just wouldn’t be one you’d want to buy for your house, because it would be both expensive and substantially dimmer than the lights you normally use. It’s all about the trade-offs…

        • Furrfu says:

          If you want a regular incandescent lightbulb to last 50 years, like the Centennial Light, which is 110 years old, the recipe is fairly well-known. Run it at a low temperature, ideally cherry-red, and make sure it’s not a halogen type. Leave it turned on all the time, so it doesn’t experience thermal shocks from being turned on and off. Using a carbon filament rather than a tungsten filament would help (carbon’s vapor pressure is a lot lower than tungsten’s) but is not necessary. Using a thicker filament, like the ones used in low-voltage lightbulbs, will also help. To lower the temperature, a garden-variety leading-edge dimmer is probably adequate, but a transformer to lower the voltage might be more reliable than the triac in a dimmer. (It doesn’t really count as “lasting 50 years” if you have to replace the dimmer four times in that time, does it?)

          And, yes, it’s going to be very dim and energy-inefficient.

          • The Nybbler says:

            A triac dimmer will probably reduce bulb lifetime compared to using lower voltage. The transients make the filament vibrate. DC would be best, but smoothly varying AC is better than the chopped-up stuff from a triac.

        • kaleberg says:

          LED light bulbs last about 30,000 hours as opposed to 1,000 hours for incandescent bulbs. That’s over eight years at ten hours a day. Each bulb has a solid state rectifier to turn the AC into DC, a switching regulator to dial down the voltage for the LED, and the LED itself. It’s an impressive piece of work. Modern semiconductors can deal with power levels that would have been considered insane 50 years ago, and let’s not forget the challenge of building a diode that emits white light in sufficient quantities. The choice was red, green and amber until the early 1990s.

      • I upvote this comment.

        Also I’m old enough to remember when we knew how to cherry-pick anecdotes with which to bash the kids on our own, and didn’t have to use some other whiny old fart’s book to do it for us.

      • Tarpitz says:

        I think Shakespeare is also just a special case. There really is no comparable outlier I can think of in any other field; the closest might be Don Bradman.

        • Frederic Mari says:

          As a non-native English speaker, why the emphasis on Shakespeare as the writer/dramatist to end all writers/dramatists?

          He was late 16thC. In France, early 17thC, we have at least 3-4 literary giants: Corneille, Moliere, Racine and Jean de la Fontaine. Plus a fair amount of second tier guys, on par with, say, Marlowe or Middleton (in terms of modern name recognition, though I might be biased coz no one ever taught me English literature and, for example, I only know Marlowe b/c of the Shakespeare authorship conspiracy theories and his murder/untimely death).

          • johan_larson says:

            I more than half suspect it’s just a case of Shakespeare having a well-organized group of fanboys. Since I was educated in English, I had to read and watch and analyze quite a bit of Shakespeare over the years, and if there’s genius there, I must have missed it. Reactions from my fellow students were similar. I’m willing to grant him some respect for having legs, as they say; it’s impressive that his works are still being performed hundreds of years after they were written. But my best guess is that he is merely a capable professional artist with an influential and well-organized group of fans, rather than some sort of otherworldly genius.

          • Murphy says:

            I’m reminded of an old story about an automated system being used to grade the quality of kids’ writing.

            The idea was simple: the system graded each student’s work, a human graded it, and if there was more than a certain percentage divergence it was graded by a panel and then put into the training data set. Some interesting points were that the system could output the reasons and highlight problem text etc.

            So of course someone had some fun feeding it the “classics”

            Which it pretty universally graded as being poorly written.

            But of course that doesn’t count because the classics are transcendent and anything that says they’re poor is by definition wrong.

            Never mind that the more systematic you make your assessment method, the worse the classics do.

            It’s pure fashion. It’s not the done thing in the English department to say “oh, ya, most of those ‘classics’ aren’t all that great”… no, that would be terribly crass. So you have a culture of, to put it utterly crudely, people constantly jacking off over the classics. Criticizing them is like turning up on /r/lotr/ and saying bad things about Tolkien’s work or /r/mlp and stating that it’s not all that.

            They’re the equivalent of the few works that managed to make it into top 10 popularity rankings back when the field was being defined and so the nostalgia for them got encoded in the culture like how people view games they played as kids through rose tinted glasses.

          • The Nybbler says:

            There’s probably a lot of dreck called “classic”, but I don’t think an automated grading system for kids’ writing thinking it’s dreck means anything. First of all, I wouldn’t trust such a system anyway; it’s as likely to seize on superficial features (correlated to what you want) in a training set as it is to go for what you’re looking for. Second, it’s operating way out of its domain when grading the classics; English has changed a bit since Shakespeare’s day, for instance.

            Shakespeare’s work has maintained its popularity for centuries, and in multiple languages; I think it’s beyond any accusation of faddishness.

            To name another work that a kids’ grading system would hurt, there’s Moby Dick. It’s IMO (and not just mine) nigh-unreadable, and there’s that crack about the odd chapters being an adventure story and the even chapters a treatise on 19th century whaling. Yet the story keeps getting re-written and re-adapted. That, IMO, probably means its ‘classic’ reputation is deserved.

            For another take on automatic grading, there’s a story called “The Monkey’s Finger” Asimov wrote about his own story “C-Chute”, based on an argument he had with his editor about it. Asimov obviously took the author’s side.

          • Chevalier Mal Fet says:

            Count me as one of his fanboys. I wish I were better able to articulate why I think he’s a genius – maybe I’ll chew it over and try to come up with something.

            But yes, in English writing at least, there’s no one who comes anywhere near Shakespeare.

          • Tarpitz says:

            French literature was actually (one half of) my degree subject, so I’ve read a fair amount of Racine, Molière and the rest in the original, and I just can’t agree. Are they great writers? Sure (Molière the best, for my money). I actually prefer Musset to any of them, but I grant that’s an idiosyncratic matter of taste.

            Taken as a poet, I don’t think Shakespeare is in a noticeably different class to them, or to a number of other English language poets. As a playwright, however, he’s completely unique. The scope and depth of his insight into a vast range of characters has no parallel (Chekhov and Sondheim the closest, for my money, and neither very close).

            The best comparison in French literature isn’t a playwright at all; it’s Proust. Both have (different kinds of) exceptional insight about the world and the ability to elucidate those insights on the page so as to make them comprehensible to the rest of us clowns, to a degree that others don’t. I just think Shakespeare’s significantly further out ahead of other playwrights in that respect than Proust is ahead of other novelists.

          • benjdenny says:

            I’ve read the entirety of Shakespeare (as defined as “complete collection of…” type entirety) a couple of times now trying to figure this out. I find he’s occasionally very good, often drags, that type of thing.

            I think it’s partially big for the same reason chess persists throughout the ages, in that it got filed into a “smart people do this, so if I do this, I must be smart” box with a few other things (classical music, watching indie film) and thus is much, much bigger than I’d expect if everyone was unable to use it as a flag.

            This isn’t to say some people don’t absolutely, genuinely love these things, just that most of the people who talk about the genius of them probably don’t genuinely enjoy them as much as they do a well made piece of contemporary mainstream art, outside of the status effect.

        • AG says:

          Shakespeare’s influence is just a result of the social butterfly effect, not intrinsic to his works: https://lasvegasweekly.com/news/archive/2007/oct/04/the-rules-of-the-game-no-18-the-social-butterfly-e/

          In another world, someone else with comparable quality had the same amount of influence, while Shakespeare became the forgotten.

          • kaleberg says:

            Don’t forget that Shakespeare was a Tudor apologist. It was a new dynasty, and England as a whole was getting wealthier. Shakespeare rose with the tide.

        • Conrad Honcho says:

          John von Neumann?

          Then again, von Neumann had contemporaries like Edward Teller that he was only maybe an order of magnitude smarter than. But none of Shakespeare’s contemporaries even comes close to Shakespeare.

      • Nancy Lebovitz says:

        This isn’t the best place to sneer at Harry Potter fanfic.

        • Murphy says:

          That wasn’t really meant as a sneer, I think it’s wonderful to see so many people getting into writing that way.

          The vast… vast…. vast majority of fanfic is crap… but then the majority of everything is crap and there’s some real nuggets in among the dross.

  3. chaosmage says:

    When Weber, Fechner and Wundt created psychology, they were not only three very smart guys converging on a new idea. They also did so specifically inside academia, inside the most progressive city of their entire country (which was also the center of the book industry and made it really easy for them to publish a lot), all while their entire country was going through incredible boom times.

    Imagine that at the same time as them, a philosopher, a physicist and a physiologist had made the exact same discoveries in Kazakhstan. It is quite possible. They could just as easily read Darwin in Kazakhstan, they would know about atomic theory and could make the same basic realization that the psyche has to be a material thing that can be studied experimentally like other material things. But we would never know, because who cares what happened in the 1870s in Kazakhstan.

    Or go to Ancient Greece. Sure Epicurus was a very smart guy. But did it take a very smart guy to come up with the Principle of Multiple Explanations or with the Epicurean paradox? I think a clever tribesman in the Amazon could have come up with those, given some spare time. But Epicurus happened to live during the rapid rise of Hellenic culture that was then cemented by the Romans, so he got a huge signal boost that no tribesman in the Amazon got.

    Was Shakespeare a literary genius? Sure! And additionally, he was a firmly English artist competing with classical art in a time when the English were rapidly expanding and seeking to distinguish themselves from the Catholics who were really into classical art. Maybe someone else in the same era wrote even better, but was less well-connected, or too Catholic, or a woman, or lived in Lithuania and wrote in Lithuanian. We wouldn’t know.

    All three examples of excellence in creating novel knowledge were in the right place at the right time: in places where history was being made, places and times that coming generations would be highly likely to learn about. And all of them stand out much more in the simplified and compressed view of historical hindsight than they stood out to their contemporaries. I don’t think this is a coincidence.

    So sure, there are indeed low-hanging fruit, but maybe they get picked all the time. Maybe what starts a new scientific field, or a philosophical school, or a literary classic has less to do with being another one of the guys who pick those fruit – and more with having the clout to make people listen when you say there are more fruit further up?

    • Emperor Aristidus says:

      Maybe someone else in the same era wrote even better, but was less well-connected, or too Catholic, or a woman, or lived in Lithuania and wrote in Lithuanian. We wouldn’t know.

      That reminds me of Mark Twain’s Captain Stormfield’s Visit to Heaven, which features a 16th-century shoemaker who, in his spare time, wrote sonnets that he never dared show anyone, but which are acknowledged in paradise as the greatest poetry known to man.

      • Nancy Lebovitz says:

        I don’t remember that, but Captain Stormfield did include mention of a man who would have been a great general (the world’s greatest) but he never got into the military because he didn’t have thumbs.

        • benjdenny says:

          I’ve read that as a boy and as a (sort of) man, and I’ve come to the conclusion that the cranberry farmer in that story didn’t get enough of my bandwidth at the time – like overall Twain seems to be saying that finding the thing you are supposed to do/made to do is the thing, and that the cranberry farmer is lucky to have his new profession even if he could have the General’s job. I missed that for a long time.

    • mikks says:

      Maybe someone else in the same era wrote even better, but was less well-connected, or too Catholic, or a woman, or lived in Lithuania and wrote in Lithuanian. We wouldn’t know.

      There were a few texts translated into Lithuanian, but in practice, written Lithuanian barely existed in Shakespeare’s day. You cannot write (even better) if you do not have the tools for it.

      • quaelegit says:

        s/Lithuanian/Polish/

        (At least I think Polish was the primary literary language of the region in Shakespeare’s time? Not sure. It definitely was in the 18th and 19th century. I think Adam Mickiewicz might be a good case in point — he was born in what is now Lithuania and is considered Poland’s greatest poet. Is he better than Shakespeare? I have no idea, because I’ve never read any of his work or seen anyone discuss its literary merit.)

    • benquo says:

      or lived in Lithuania and wrote in Lithuanian

      The evidence for Shakespeare’s greatness is better than this implies; Schiller’s German translations of Shakespeare’s plays are considered classics of German literature. Verdi wrote operas based on Shakespeare’s plays. Language barriers matter, but not so much that an English-language perspective is guaranteed parochial. (Likewise, Anglophone poetry students regularly read some of the best French and Hispanic work, and recognize Goethe and Schiller and the Greeks as important.)

    • fr8train_ssc says:

      I feel like Scott talked about this briefly before, and I’m slightly surprised he didn’t look back on it, unless I’m misreading him.

      There’s probably plenty of “low-hanging fruit” but as you said, the problem isn’t just identifying a low-hanging fruit. Genius is 99% perspiration; you may still need a lot of resources to bring an idea to fruition. Not every child can be Taylor Wilson. It may not even be present circumstances but historical ones as well. Wilson wouldn’t have come up with his idea if Farnsworth and Hirsch hadn’t made their advancements.

      I feel like an additional point, maybe a #4 related to #3, is that there’s less time on average for innovation from any individual. Scott wrote an interesting parable on how expanding the cutting edge of a field requires growing not only knowledge on the field, but also pedagogy in that field, pedagogy in general, knowledge in related fields. A lot of this may be that one has to “connect more dots” to come up with some type of innovation. Yudkowsky had to connect DIY/Maker knowledge with building a better light box, along with knowledge of the lightbox/treatment for SAD. Even if exponentially more people were going into makerspace/DIY, we would expect that the number who were good enough or familiar with the components to build the box would be smaller, and the number also knowledgeable about SAD and/or psychology would be drastically smaller. Same thing with the psychology side of things. There may be exponentially more psychologists, but smaller the number that deal with depressive issues and are motivated to find solutions, and smaller still the number that are also familiar enough with modern electronics. This comes back to there being less time on average for being able to connect the dots, because more hours per year are being spent schooling, and more years (a masters is the new bachelors). In doing so, we’re eating up valuable time when people are at the prime of their divergent thought, able to consider “connecting dots” that people ossified in their field wouldn’t think about.

      Scott seems optimistic that AI will overcome this barrier by basically exponentiating on exponential scientists. However, The Economist recently published an interesting article on AIs attempting to accomplish their objective functions the way corporations do. Corporations kill creativity. We may be putting blinders on these AIs depending on the training sets we give them.

      This is why Taleb encourages Bricolage for knowledge generation as opposed to the systems we have in place now. Maybe we need to focus our efforts less on defining “the right” objective functions, and instead try to come up with an AI that behaves more like a polymath and less like a corporation.

      • Ghillie Dhu says:

        On the connecting-the-dots point, while resources are growing exponentially the threshold for breakthroughs is growing factorially?

        • fr8train_ssc says:

          Partially yes. It’s a very good and succinct way of putting it. I also want to emphasize that we’re not focusing on divergent thinking as the necessary resource for generating solutions. I linked to the results of the paperclip creativity exercise, because it’s illuminating to see results where young children can easily come up with hundreds of answers, and older adults in the 10s, maybe 20s.

          Like, if you told a child to ask engineers in the 1940s “What if we could use a phonograph with vinyl records to read/write entry data instead of punchcards?” some would be amused, but most would think the child was being silly. Children are, however, the most likely candidates to ask this sort of question, and so one that did, and then got lucky and learned a bit about how magnetic tapes worked, could end up developing the hard drive.

          So if a new innovation required, say, four dots to connect (vinyl record, magnetic tape, punch cards, storage application) you may need to sift through twenty-four different ideas before you say “eureka!”. This could be handled by an adult that still had a fair amount of creative and divergent thinking. As soon as a new innovation required five dots to connect, though, you may be looking at a hundred and twenty different possible ideas to sift through to get that “eureka!” That may be possible if you have a group of geniuses working together that can put the knowledge together, but it may also be possible for a single child prodigy that has been exposed to a lot of different things to come up with a significant innovation.

          Once you have ideas that need six or seven dots to connect, then no number of adults can possibly connect them, as you’re basically approaching the Dunbar number. A group of children though, could manage it. Force children to waste time doing homework instead of playing and socializing and self-directed learning and they no longer have those opportunities.
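
          A tiny sketch of the counting behind this, treating each insight as one particular ordering of its dots (which is the implicit assumption above); both the factorials and the ~150-person Dunbar figure are illustrative only:

              import math

              # Orderings to sift through if an insight needs N dots connected,
              # compared against a roughly Dunbar-sized group (~150 people).
              DUNBAR = 150

              for dots in range(4, 8):
                  orderings = math.factorial(dots)
                  tag = "within" if orderings <= DUNBAR else "beyond"
                  print(f"{dots} dots: {orderings:>5} orderings ({tag} a ~Dunbar-sized group)")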

          • cryptoshill says:

            I think your example there of the 19-year-old is a just-so story. The more important part there wasn’t the child or his divergent thinking (adults can and do come up with innovative ways to clean the ocean on a fairly regular basis) – it was the capital to actually build the thing to show to a VC firm.

      • Tenacious D says:

        Scott wrote an interesting parable on how expanding the cutting edge of a field requires growing not only knowledge on the field, but also pedagogy in that field, pedagogy in general, knowledge in related fields.

        I was also reminded of “Ars longa, Vita brevis” by this post.

      • Frederic Mari says:

        That.

        But also b/c I’d like to push back on the idea of low-hanging fruits.

      Sure, the theorems of Pythagoras and Thales are now taught to kids less than 15 years old, and most of them get those and can use them to solve geometric problems.

      But, for Pythagoras or Thales to come up with such stuff… Boy, they must have been some number of SDs away from the norm of their times.

        Basically – low hanging fruits aren’t hanging low when your arms are that much shorter than what they are now…

    • akarlin says:

      Probably not, because so far as innovation goes, “it takes a village” – people bounce ideas off each other, and the best rises to the top.

      No people of similar quality to engage with, and you go nowhere.

      There is also a startlingly logical pattern to where human accomplishment has historically been centered. For instance, the cultural and scientific explosion in Classical Greece was hitherto unprecedented in world history. But did you know that it was also the world’s first society to transition from “priestly literacy” (1-2%) to “craftsman literacy” (~10%)? This not only dectupled (thanks SA for improving my vocab) the literate population, but created previously unheard of concentrations of literate smart people. Add in the fact that the Greek world’s population was quite respectable for its time period – up to 10 million people, or not much less than that of contemporary China (!) – and things start to really make sense.

      • fr8train_ssc says:

        I remember thinking about that concept in Scott’s Steelmanning of NIMBYs, which is why I linked to Kleiber’s law as a power law that says innovation scales at the 5/4 rate instead of linearly (“10x size => ~17x more innovation”) as a potential reason, so there is something to be said about that.

        It’s also why I phrased my more general comment around the interrelation of previous ideas as a generator for new ones. Have a big city, and more people who know the right components can bounce ideas off each other. Have access to capital interested in investing in adjacent technology, and you have the resources to put that idea into fruition. Fewer resources needed per person for sustenance also means more surplus for research or creative endeavors.

        I still insist though, that childhood creativity still plays a major part. Scott’s previous posts: Book Review: Raise a Genius and the atomic bomb as Hungarian Science Fair Project also shed some light. If one looks at (what I’ll call for lack of a better term) the “20th century Golden Age of Hungarian Mathematics” it didn’t just come out of a highly localized area, but could perhaps be linked to a good number of creative children having the opportunity to learn well (great institutions) with community support (not just in the same city, but also within the Jewish community) and possibly bouncing ideas off each other (not in a “hey let’s build an atom bomb” sense but more of a “bet you can’t figure this out” sense)

        I feel like my responses have also been a synthesis of multiple previous Scott posts put together. I think I’ve linked to five different ones now?

  4. paulm says:

    Moore’s Law is not an academic “research” program really. It is a (mostly) industry-based engineering program driving semiconductors down the learning curve. Every doubling of the number of transistors manufactured results in the same decrease in cost. These learning curves are very general and seem to apply to almost everything. But for sure it is true that the number of scientists involved in making this happen is increasing dramatically.

    Here is a post from a couple of years ago on my blog on Cadence Design Systems (we produce the software for designing the transistors/chips). It turns out almost all transistors are memory (99.7%), and due to the transition to 3D NAND that is still improving at an amazing rate. Logic not so much, and analog maybe not at all.

    This sort of learning curve, I’m sure, applies to corn and soybeans. The relevant driver is almost certainly not the number of scientists but the tons of beans/corn produced, the value of which goes to pay for an increasing number of scientists. It is a total guess, but I wouldn’t be surprised if you plotted marathon times against the total number of marathon entries ever run, you’d get a perfect straight line.
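
    A minimal sketch of the learning-curve relationship being described here – each doubling of cumulative production cutting unit cost by a fixed fraction – with synthetic numbers purely for illustration:

        import numpy as np

        # Wright's-law-style learning curve: cost = a * (cumulative units)^(-b).
        # Synthetic data; the ~20%-per-doubling figure is illustrative, not measured.
        rng = np.random.default_rng(0)
        cumulative_units = np.logspace(3, 9, 30)                    # 1e3 .. 1e9 units
        unit_cost = 100 * cumulative_units ** -0.32                 # ~20% drop per doubling
        unit_cost *= np.exp(rng.normal(0, 0.05, unit_cost.size))    # measurement noise

        # On log-log axes this is a straight line, so fit one and read off the slope.
        slope, intercept = np.polyfit(np.log2(cumulative_units), np.log2(unit_cost), 1)
        print(f"Estimated cost reduction per doubling: {1 - 2 ** slope:.1%}")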

    For research, I would guess that number of researcher-years will give the same straight line. Every doubling of the number of researchers in a field produces the same “cost reduction” which I guess would be significant papers or something. Not quite sure how to make the learning curve fit into this, but it is clearly relevant. Obviously Edward Teller learned from everyone on the Manhattan project in a way that Attila the Hun could not, which has to partially explain why Attila didn’t come up with the H-bomb (let’s just assume he had the same IQ).

    There are also things that suffer from Baumol’s cost disease which don’t fit the model. The archetypal example is that it still takes 30 minutes and 4 players to play a string quartet. In your (Scott’s) field, it still takes 15 minutes to have a 15-minute conversation with a patient (although the drugs probably follow something more like Moore’s Law).

    • benquo says:

      I think that if it really mattered humanity could easily figure out how to write shorter string quartets or play them faster.

    • Doug S. says:

      These days, that one string quartet performance can be recorded and replayed endlessly, so we really do get a lot more music for a given amount of labor…

  5. Brett says:

    The bit about the God of Straight Lines reminded me of Tyler Cowen on Denmark’s economic prosperity. 1.9% real GDP per capita growth per year isn’t particularly remarkable, but do that for the better part of 120 years and it really adds up.

    Suddenly “there aren’t enough humans in the entire world to do the amount of research necessary to continue this trend line” stops sounding so compelling.

    Especially if you have some big efficiency gains on running each AI. Imagine throwing the equivalent of a trillion AIs at a problem.

  6. User_Riottt says:

    number of chemical elements discovered

    This is not a good scale for this. Almost all elements that haven’t been ‘discovered’ are radioactive with half-lives of seconds, making their discovery essentially trivial.
    https://en.wikipedia.org/wiki/Island_of_stability

  7. Steve Sailer says:

    Is Moore’s Law still in effect? Apple went almost 3.5 years between its early 2015 and late 2018 upgrades to its MacBook Air laptop. A big reason for that is because CPU chips from Intel (Moore’s company) aren’t improving as fast as they used to.

    • Acedia says:

      As a fan of computer games since the early nineties, I’m torn between feeling glad that I no longer have to buy expensive new hardware every year to run new games smoothly, and worried about what this (obvious, despite denials from some quarters) stagnation portends.

      • Conrad Honcho says:

        On the other hand, we’re also pretty darn close to photorealism, so game designers can focus on crafting excellent games instead of constantly chasing technological advancement. Personally I think we’re living in a golden age of video games. I’ve been playing games since the early 80s and am suffering from an abundance of extremely high quality games I do not have time to play. In the past two years I have played several games I consider “among the best game I’ve ever played.”

    • Doug says:

      Moore’s law, as it’s explicitly stated (density of transistors), still seems to be in effect. Commercial CPUs in 2014 used 14nm dies, and the latest CPUs in 2018 now use 7nm dies. Since density scales with the inverse square of the feature size, that’s a quadrupling in four years. 5nm dies come out in 2020, which means that there’s at least one more doubling cycle left.

      Experimental labs have prototyped sub-1nm transistors, so there may be as many as six doubling cycles left. Then past that, we can start thinking about layering 3D chips instead of 2D dies, spintronics, memristors, replacing silicon with graphene, and other exotic approaches. My guess is that Moore’s law lasts way past what anyone expects. It’s not inconceivable for it to run thirty more years.

      The big asterisk with Moore’s law from 2008 onwards is that it doesn’t pay off in terms of faster CPUs. Because of fundamental physical limits, computers hit their max clock speeds a long time ago. Nowadays increased transistor density is used to add more cores, rather than faster cores. So the same task pretty much takes the same time as ten years ago, but you can do more in parallel.

      Simply upgrading to a new CPU doesn’t give you the automatic speedup that it did in the 90s. You have to effectively utilize the multicore facilities, which from a software perspective involves a lot of tricky engineering pitfalls. That’s the big reason why device makers don’t rush out to upgrade CPUs at every upgrade cycle.
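
      A back-of-the-envelope version of the density arithmetic above, treating the node names as literal feature sizes (in practice they are marketing labels, so this only illustrates the reasoning):

          import math

          def density_gain(from_nm, to_nm):
              # Transistor density scales roughly with 1 / (feature size)^2.
              return (from_nm / to_nm) ** 2

          print(density_gain(14, 7))              # 4.0  -> the "quadrupling in four years"
          print(math.log2(density_gain(7, 1)))    # ~5.6 -> the "as many as six doubling cycles left"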

    • Aftagley says:

      AFAIK the law still holds, but the time between advances seems to be lengthening. So, if you’ve got a long enough time horizon, you’ll still see roughly double the number of transistors every two years, but now it might be four years before the next generation (with 4x the number of transistors) comes online.

    • vV_Vv says:

      Is Moore’s Law still in effect?

      It’s still in effect in terms of transistor density, not in terms of core clock frequency, which is why the number of cores is now increasing exponentially.

    • Douglas Knight says:

      Measuring a single product is a really bad idea. The main thing that you are observing is that Apple wanted to phase out the “Air” brand name! They made new laptops during that time, just not “Air.” You can tell that they wanted to dump the name because the new laptops were lighter than the “Air.” They failed to phase out the Air because Intel stumbled, but that doesn’t mean that Intel lost 5 years, or that anyone else lost any time.

      1. Moore’s law for speed broke down 10 or more years ago, not 5 years ago.

      2. Recently Intel’s lead over the competition fell maybe from 2 years to 1 year. But both Intel and the competition are still chugging along making transistors cheaper.

      3. Intel is worse than the rest of the industry at making use of more transistors, especially along the metrics Apple cares about (eg, power consumption). The iPhone is pretty much as fast as the fastest laptop Apple sells.

      4. Apple makes just-in-time designs. If Intel fails to meet their timeline or specs, it messes up their whole vibe and they abandon laptops and concentrate on phones. But small errors by Intel have small effects on the product lines of other companies.

      From 3+4 it’s pretty clear that Apple is going to switch away from Intel. For years there have been low-end Windows laptops that don’t run Intel. But Apple doesn’t want to fragment its line that way. When it switches, it will switch the whole line, including desktops.

  8. phoenixy says:

    As someone who studied Shakespearean Lit in college, it would not at ALL surprise me to learn that there were 100 writers just as good as Shakespeare living today. Shakespeare isn’t as famous as he is simply because he was the very, very best; he’s famous because he was so good *and* was the first to get to that level at a time when the playing field was a lot less crowded than it is nowadays (and also he got lucky on the publicity front). I think there are many writers of similar caliber around today; they’re the folks in the writers’ rooms creating the new golden age of television. There is so much excellent fiction and drama around right now that we are massively spoiled for choice! That was not true in Shakespeare’s day.

    • James says:

      I’m not so sure. I won’t speak for plotting or characterisation, but purely at the level of the language, reading Shakespeare really is quite incredible. There’s no-one writing at that level today. (OK, maybe that’s partly because our priorities have changed and ‘elevated, poetic diction’ is no longer something our writers are interested in, but still.)

      There’s also the question of how much credit we give for doing something first. If Shakespeare invented some particular idiom, and nowadays dozens of writers are merely competently using it, do we give Shakespeare more credit for that? Even if the experience is the same for a (naive) audience, it seems that in some way we should.

      • Doug S. says:

        The closest thing to a Shakespeare-style play written by a contemporary artist is “Hamilton” by Lin-Manuel Miranda. Honestly, I think it holds up pretty well in comparison.

        • James says:

          I haven’t seen it, but from what I know I suspect Hamilton is actually a really good fit. I’ve long held that part of the reason poetry is no good nowadays is that all the people who would be writing good poetry go into songwriting (or rap) instead. So it stands to reason that the closest analogue for the language of Shakespearean plays would be in a rap-based musical.

          • Tarpitz says:

            Except for the fact that Miranda is a terrible lyricist.

            The closest living writer to Shakespeare is Sondheim, though I’m not sure you could count him as still active.

      • phoenixy says:

        Shakespeare’s plotting is legitimately not that great. Only one of his plots is one that he created, several have impossible inconsistencies or terribly convenient coincidences, and many are resolved in ways that we would consider ridiculous and laughable by modern standards, including a literal deus ex machina in one play and “everybody dies!” or “everybody gets married!” in many others. And of course order and goodness are always promised to be restored at the end. Those were the genre conventions at the time, but we’ve left those genres behind and would probably consider such resolutions excessively simplistic in modern writing.

        Likewise I think you are correct that the language is really a matter of style and that elevated poetic diction is no longer valued in theater. Shakespeare’s language was absolutely excellent, but the style these days in drama is for more naturalistic writing. I have no doubt many people could write similar plays, but Elizabethan-style prosody is not what the market demands in movies and theater.

      • AG says:

        Yeah, and dril is creating new memes that get warped into slang all of the time. There are plenty of people online who are influencing language as much as Shakespeare presumably did. The difference is that fiction is now more subservient to reflecting the culture than defining it, but that’s more a side-effect of globalization and increased wealth, allowing people to develop their culture by millions more avenues than from only the local theater.

        For that matter, aren’t Yudkowsky and Scott at Shakespeare levels of slang creation, for this community?

        And to really support phoenixy’s point, there’s literally a trope called “Buffy Speak,” named for the way YA dialogue style was so indelibly defined by Joss Whedon. And hey, there’s even a Shakespeare connection here!

        • James says:

          Who cares about influence? I’m talking about beauty.

          • AG says:

            Beauty is utterly subjective. In addition, the reason we find certain things beautiful is their influence: the way they’ve influenced culture primes us to find certain things beautiful (while someone from another culture may not).

            People find dril tweets have an ethereal charisma to them. Shitposting is an art.

          • James Reed says:

            Beauty is utterly subjective.

            If you think beauty is utterly subjective, you really aren’t the sort of person to be commenting on literary merits. Just like someone who thought science was utterly subjective shouldn’t be evaluating the claims of the most recent psychological studies.

          • AG says:

            Someone believing that aesthetics can be objective doesn’t give them any more qualification in judging artistic merit, either. Whole swaths of people with fancy degrees in it, curating for modern art museums, prove my point.

            My stance is that Shakespeare is good, great even, but that his greatness is not unique. Our veneration of him is influenced more strongly by factors outside his work’s quality than by anything else, and the perception that Shakespeare was/is uniquely great isn’t true. There are many works of comparable quality that are merely unrecognized, because gaining popularity is both hard and mostly based on luck.
            “There’s no-one writing at that level today” is a function of your not being able to find them, not of their not existing.

            For example, one of James’ hypotheses about why that is, is that our priorities have changed. I agree, but in the direction that our priorities now take much more of the visual dimension into account, so that people with Shakespeare levels of talent may have become directors rather than writers, working in cinematography instead of prose.
            See also how new adaptations of Shakespeare to audio-visual mediums convert some of the text into visuals, and so remove the need for parts of the text. When you have a closeup on an actor’s face conveying their intricate emotions, you don’t need a monologue from the days when people could barely see their heads from the theater balcony, much less facial expressions. Consider the adaptations that remove the text entirely, especially Kurosawa’s. You don’t need a monologue about the spirits in the forest when you can have a shot of the fog in the trees.
            (For that matter, any good translation of Shakespeare into a foreign language implies a translator as good as the man himself, such is the amount of interpretation needed in translation.)

            As such, prose has also evolved to serve visuals, even in text forms, as they prioritize evoking an experience in the brain similar to watching a film or TV. It takes skill to do that, skill that could have the same magnitude as Shakespeare’s but in an orthogonal direction.

            Like, say, crafting a perfect shitpost tweet. “I will face God and walk backwards into Hell,” a nice complement to “Villain, I have done thy mother.”

          • James Reed says:

            The argument for Shakespeare’s merit rests not on the fact that he influenced the English language (the influence pales in comparison to, say, William the Conqueror’s) or that his work required skill (so does farting the alphabet backwards), but on the claim that he wrote some of the most beautiful poetry ever written. If you think that such a claim can’t be adjudicated on account of beauty being utterly subjective, you really don’t have any business debating his poetry’s merit. Just as I, since I deny the ability of astrology to predict the future, don’t have any business debating whether the sidereal or tropical horoscope is more accurate. I deny the claims that are a pre-requisite to debate.

            For instance, you say:

            When you have a closeup on an actor’s face conveying their intricate emotions, you don’t need a monologue from the days when people could barely see their heads from the theater balcony, much less facial expressions

            But, to the person who believes in the beauty of poetry, the monologue is not a means but an end. It’s the whole point. We go to the theater (or open the book) to hear poetry, because it’s beautiful. Claiming it’s no longer “needed” due to cameras is like claiming we no longer need ribeye steaks because of Soylent. Someone who would say such a thing obviously has no taste for ribeye steaks, and probably has no business discussing whether they should be preferred to filet mignon.

            You are right that our culture, like the centuries after the fall of Rome, has moved away from text back toward visual media. Neil Postman is good on this.

          • AG says:

            The point is that the beauty of prose is operating off of different standards, and therefore people with the same level of talent produce drastically different results. One of them produces Shakespeare’s poetry. The other writes the script to Throne of Blood.

            Secondly, the argument for Shakespeare’s greatness is commonly rooted in the claim that no one else achieves the same level. Therefore, it is a valid counterargument to point out that “same level” is a meaningless measure, and so “Shakespeare’s greatness” as defined by that measure is also invalid. There is nothing wrong with saying that neither horoscope is more accurate because all horoscopes are bunk; invalidating our “right” to evaluate sidereal vs. tropical does not make those who do debate it any more correct.
            Shakespeare is not necessarily greater on content quality, because measuring content quality is not really possible, and so we can only evaluate his greatness by other measures, such as influence or popularity. We cannot measure whether various horoscopes are accurate, but we can measure how often they are read or followed.

    • akarlin says:

      Strongly agreed.

      Even truer for the visual arts. This guy is more technically adept than probably anyone even a century ago, let alone five.

      Eminence is precedence.

  9. Sniffnoy says:

    Or: the ancient Greek world had about 1% the population of the current Western world, so if the average Greek was only 10% as likely to be a scientist as the average modern, there were only 1/1000th as many Greek scientists as modern ones. But the Greeks made such great discoveries as the size of the Earth, the distance of the Earth to the sun, the prediction of eclipses, the heliocentric theory, Euclid’s geometry, the nervous system, the cardiovascular system, etc, and brought technology up from the Bronze Age to the Antikythera mechanism. Even adjusting for the long time scale to which “ancient Greece” refers, are we sure that we’re producing 1000x as many great discoveries as they are? If we extended BJRW’s graph all the way back to Ancient Greece, adjusting for the change in researchers as civilizations rise and fall, wouldn’t it keep the same shape as does for this century? Isn’t the real question not “Why isn’t Dwight Eisenhower immortal god-emperor of Earth?” but “Why isn’t Marcus Aurelius immortal god-emperor of Earth?”

    This is perhaps nitpicking, but I don’t think you want to go all the way back to the Greeks; you probably only want to go back to the scientific revolution. Of course, this only changes the question a little.

    Also I’m not sure it makes sense to credit the Greeks for the cardiovascular system? I mean, they didn’t even know that arteries carried blood, IINM.

    • Douglas Knight says:

      Are you distinguishing Galen from “the Greeks”? Some people claim that he showed that the arteries contained blood, not air. It’s hard to imagine that anyone ever believed that they didn’t contain blood. It’s probably a misinterpretation of Erasistratus, who said that arteries carried pneuma, but he probably meant in the blood, not in place of the blood. And he talked about vacuum pumps, but he probably meant that the same applied to liquids as to gas.

      Anyhow, Herophilos wrote a book “On Pulses” about how the heart pushes blood through the arteries.

  10. Steve Sailer says:

    Gordon Moore’s publication of what came to be called Moore’s Law was intended to be a self-fulfilling prophecy to help his firms – first Fairchild, and from 1968 onward Intel – raise the capital and recruit the talent to make it come true.

  11. elitistprick says:

    You might be right that 3 is the dominant factor, but I want to make a case for 1 (lots of essentially not useful researchers).

    If you Google for top machine learning journals (http://www.guide2research.com/journals/machine-learning), basically… none of these matter. There are ~5 top ML conferences (and a bunch of area-focused ones, e.g., CVPR, but ~1-3 per area). If you look at the top 4 CS schools and look where the best professors publish, you’ll find that this is roughly true.

    I don’t know how to write this without sounding like an elitist prick, but most of the progress is made by very few researchers. For example, Kaiming He (http://kaiminghe.com/ , I have no relation to him) has published some of the most impactful papers in computer vision (ResNet, ResNeXt, Faster R-CNN, Mask R-CNN, etc). Every top entry in MS-COCO used Faster R-CNN or Mask R-CNN.

    By extension, I’m not entirely sure how productive the rest of the researchers are relative to these standouts. I’m also not sure if I have a solution to this problem.

    • Scott Alexander says:

      If the number of ML researchers has recently dectupled, and there are 50 good researchers and 950 crappy ones, that’s consistent with either:

      1. In the past there were 50 good researchers and 50 crappy ones
      2. In the past there were 5 good researchers and 95 crappy ones.

      Which do you think is true?

      • Aapje says:

        Most likely something in between, of course. The question is which end it’s closer to, though (I suspect closer to 1).

      • elitistprick says:

        [epistemic status: guessing based on my inside view of academics]

        That’s a great question. I think the number of good academic researchers grows very sub-linearly (logarithmically? I’m not sure) with the size of the field.

        Here’s a few reasons I believe it:
        – Roughly, the majority of work in a subfield (e.g., ML) is done in 1-10 schools. These schools are expanding their faculty far more slowly than the number of researchers in the field is growing.
        – Where does all the talent go, since presumably there are good people? Well, if you get paid $500k to be a Tesla ML engineer, that’s pretty attractive. Related: a striking number of IMO gold medalists work in finance.
        – PhDs are essentially apprenticeships. It’s hard to learn all the nuances of being a good researcher (doing the experiments / proving stuff is not the hard part; it’s choosing the problems and effectively communicating them) without being mentored for years. Mediocre researchers probably don’t produce world-class researchers.
        – What about the good people who go to not-top-tier schools? It’s significantly harder to do research there: funding is harder, finding good students is harder, you’re likely tasked with service, etc. Thus, I believe their productivity would go down.

        Unfortunately I don’t have hard numbers.

      • Mr. Doolittle says:

        I would think almost certainly closer to #1.

        A new field with uncertainty about stability and growth will tend to attract only those people who are really interested in that field (as you mention in the post), or those who happen to be nearby and looking for a job (nepotism/connections come to mind, or some version thereof).

        As has been the history of new companies and startups, most of these fields fail, leaving the people in them scrambling for new positions and sometimes entirely new professions. That’s not something a lot of your 9-5 group is interested in doing. They will wait until the field is more secure and lucrative before diving in.

        • Steve Sailer says:

          The pretty good 2011 movie “X-Men: First Class” reminded me that there really does sometimes exist a “first class” phenomenon where the first year of students at a new elite academy or program goes on to great success. This is most notable in Hollywood, where the first class of the American Film Institute school included Terrence Malick, David Lynch, Paul Schrader, and Caleb Deschanel. Similarly, the first students of CalArts in Valencia in Character Animation in the 1970s included Brad Bird, Tim Burton, John Lasseter, and John Musker.

          Congregation of talent or old boys’ network?

          • Aapje says:

            Or first mover effect? Perhaps they got taught something revolutionary and this allowed them to take advantage of previously untapped opportunity. Later classes then found those niches already filled.

          • Watchman says:

            My dad was in the first class at a new grammar school (selective state schools, now mostly phased out). It’s striking how successful his contemporaries (and he) have been relative to what you’d expect from a group of working- and lower-middle-class kids in the 1950s. He puts this down to their always being the top kids in the school, and so never having any sense of being obliged to kowtow to others.

        • cryptoshill says:

          I think this is first-mover effect.

          Consider – when “machine learning” first existed in the popular press – the projects were done in DARPA labs or were moonshot Google projects made up of people building their own tools and kind of making stuff up as they went. If the field grows in stature and economic opportunity, the people who published the early tools get a lot of prestige (and thus funding, decent undergrad students to order around, fancy laboratories, etc.) and become considered the “good” ones by default.

          This effect gets worse as time goes on (see: low-hanging fruit) and leads to us thinking that we have lots and lots of bad researchers and only a few good ones – when in fact we have lots more people publishing much smaller pieces of the same puzzle, but the first guy who moved in got entire field-changing innovations named after him, specifically.

    • benwave says:

      I think Scott is trying to separate out the effects of ‘intrinsically good researcher’ (factor 1) from environmental factors such as ‘with the best equipment’, ‘has the best networks’, and ‘has a head start on everyone else’. The data that most of the progress is made by very few researchers is consistent with there being 10x more good researchers now, but still only few with the best labs, networks, and head start.

  12. Maxander says:

    My pet theory for the slowdown of science, which I think is distinct from the three possibilities you discuss, is that it has to do with negative network effects. If there are ten scientists in ancient Greece, they’re probably either working on entirely separate fields that have no potential overlap, or else all hanging out at the forum together, aware of every detail of each other’s work through extensive interlocution. For the umpteen million scientists today, though, everyone’s work intersects with or involves the work of hundreds of other experts, who are all publishing a constant stream of scientific literature and conference proceedings. Our grand communication infrastructures mitigate the problem, but we run up against the sheer limitations of human bandwidth; you can only read so many papers in a day, particularly if you want to get other work done. The key result of this isn’t that work winds up being duplicated (although that does also happen), but that it winds up being irrelevant to the greater course of science.

    A large fraction of scientific papers are never cited, or cited only a handful of times (and that, often, by the same scientist). Any given morsel of scientific progress usually solves a very small part of a larger whole, but the larger whole may be discontinued, shown to be impossible, or superseded by entirely different approaches. For instance, in the far future, people will know that the best technology for powering cars is either hydrogen, synthetic hydrocarbon fuels, some battery chemistry, or some other exotic approach; each of these has small armies of scientists working on them, but only one will “win,” and the work of the others will be largely wasted. If you want to know what’s slowing science down, a force that can nullify entire subfields sounds like a likely candidate.

    I guess a count against this theory is that it doesn’t address the hundred-Shakespeares problem. A new innovation in transistors is mostly only worthwhile if it happens to work in the same context as the hundred other most popular new transistor innovations; but theoretically, a really good book is inherently worthwhile, independently of what other authors are doing. Or perhaps that’s not true either?

    Unrelated point; I would also argue that the slowdown of science is a bit overstated, since, though you have all these charts showing mere linear increases over time, you have more charts over time. You can’t extend a chart about transistor size to the 1500s. Coming decades will likely see the rise of linear increases in things we can now barely imagine.

    • ruelian says:

      This theory makes the most sense to me, as well. I’d venture to add some more things to your model: hyperspecialization and interdisciplinary researchers.

      For the umpteen million scientists today, everyone’s work intersects with or involves the work of hundreds of other experts, who are all publishing a constant stream of scientific literature and conference proceedings.

      Our grand communication infrastructures mitigate the problem, but we run up against the sheer limitations of human bandwidth; you can only read so many papers in a day, particularly if you want to get other work done.

      I think one result of this is that it generates the need for more types of researchers, kind of bypassing the human bandwidth limitation via parallel processing 😉

      On the one hand, making progress inside of a particular field tends to require years of studying previous research (although hopefully not as bad as this). On the other hand, a lot of scientific and technological discoveries lean on tools and research from multiple fields, which necessitates people who understand a bunch of different things with various levels of detail.

      If you look at two fields A and B, you can imagine a continuum between them, with experts in just A or just B at the two ends, and a range in between: people who do A and dabble in B, people who are equally into both, etc. The larger the inferential distance between A and B, the more points on the continuum you have to sample to bridge the gap, and thus the more interdisciplinary researchers you need.

      This is then compounded by adding a field C that has to interact with both A and B in the same way, etc. Assuming (extremely pessimistically) that every new field has to be fully bridged with every other field, and that the inferential distance between fields keeps expanding, this blows up quickly: at least quadratically in the number of fields from the pairings alone, and faster still as the bridges get longer. I don’t think this is exactly the case, because it’s pretty unlikely that we’ll ever have to connect, say, marine biology to microchip design (although that does sound pretty cool). But it’s a first approximation.
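
      Here’s a toy version of that worst case (all numbers invented purely for illustration):

      def bridge_researchers(n_fields, distance_per_pair=1):
          """Toy model: every pair of fields needs a chain of interdisciplinary
          researchers whose length is proportional to the inferential distance
          between the two fields."""
          n_pairs = n_fields * (n_fields - 1) // 2
          return n_pairs * distance_per_pair

      # Quadratic in the number of fields from the pairings alone; if inferential
      # distance also grows with the number of fields, it blows up even faster.
      for n in (5, 10, 20, 40):
          print(n, bridge_researchers(n, distance_per_pair=n))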

      As a concrete example, my dad makes a living by being that one guy who knows everything about how to set up design software environments for analog VLSI engineers. (I think. If I understand him correctly. Also he does do other things, but you get the point.) So he’s not exactly a software guy, and not an engineer or a designer either, but to do his job he has to be somewhere in between those, and be able to talk to both. Another good example is applied ML researchers in a particular domain, like computer vision for analyzing crop quality using hyperspectral imaging. (I swear this is a thing.) They’re not ML researchers working on entirely new algorithms, not exactly domain experts in crops, and they’re not developing new hyperspectral cameras or researching how they work. They just have to know a lot about each of them to do their jobs. And they have to be able to read through new ML research to stay on top of their field, and talk about crops with the crop people and camera design with the people designing their cameras.

      On a more abstract level, I think what’s going on is that as our body of knowledge grows, the search space we can access also grows. As the domain-specific problems we’re trying to solve get harder because we exhaust the low-hanging fruit inside the domain, we have to look at more of the search space for tools we can use. This ends up requiring more researchers just to be able to look at all the things.

      • Aapje says:

        It’s not just research either: people have been studying for longer, so they only start working later in life.

      • Basil Elton says:

        >it’s pretty unlikely that we’ll ever have to connect, say, marine biology to microchip design

        Unless we want to design a cyborg-fish or something. That doesn’t sound like such an urgent civilizational need, but I’m not so eager to say we’re never likely to do it. And thinking about it, it doesn’t even have to be a fish – a biologically and cybernetically enhanced human adapted for living in the sea would do just as well.

    • JohnofCharleston says:

      That all sounds right, but isn’t this just a formalization of what we mean by “using up all the low-hanging fruit”? As the frontier of knowledge gets further and further away, it’s harder to get to the frontier, harder to push against it, and harder to effectively train others when you do make progress. This is what we mean by diminishing returns to scale.

      Your “more charts” point isn’t unrelated at all; it’s the exception that proves the rule. When we stumble upon a new field of knowledge (say computer science), we’re able to make faster gains because there’s low-hanging fruit around. After a few decades, returns to scale in the core field rapidly decrease, meaning the field slows even as researchers increase, but sub-fields (say Machine Learning) show faster gains.

      You’re right, but this all supports #3, it doesn’t contradict.

      • Maxander says:

        It’s related, but I think you’re missing the distinction. Suppose in 500 BC, with all the low-hanging fruit still readily available but only ~10 dedicated scientist-equivalents, there were suddenly ten million scientist-equivalents. All of them have the opportunity to make grand strides in human knowledge, but the number of strides available is limited. Maybe a thousand of them will independently hit upon Democritus’ theory of atoms; another thousand will all come up with slightly different formulations of geometry, but 999 of those will be immediately forgotten in favor of the most elegant formulation; millions of Archimedes-style machines could be built, but perhaps only a thousand people will spend time closely studying these and formalizing how they work, and there’s only a hundred or so reusable insights to be gleaned from them. And so on. The ancient world would see much faster progress than it actually did, but not a million times more.

    • googolplexbyte says:

      So would this hypothesis predict that fields that aren’t increasing the number of researchers they have should still maintain the rate of productivity increase?

      • Maxander says:

        I don’t think so. Presumably, there’s still some boost from a good researcher, even if only in terms of expected value; after all, each one has a chance of turning out to have worked on something really important, even if this couldn’t have been reliably predicted ahead of time. It merely predicts that the boost becomes smaller as there are more researchers already in a given field. (Perhaps there’s some asymptote where the boost becomes negligible.)

    • moridinamael says:

      I’m reminded of Peter Thiel’s remarks to the effect that competition is inherently wasteful. More researchers working on a finite number of problems means more competition which means more waste.

      • Garrett says:

        I think that only applies as long as the researchers are completely and totally motivated by the research itself.

        In practice, I suspect that this will run into the same problem the Soviets ran into when it came to work efficiency. If you know that you are the only person assigned to a problem (and thus irreplaceable), it becomes a lot more tempting to take Fridays off, because: why not?

      • Salem says:

        I am familiar with Thiel’s argument that competition is destructive for the parties involved – that they would be better off being a monopoly. He expresses this e.g. in Zero To One. I am not familiar with his argument that it is wasteful in a societal sense. Could you point me at it, or summarise it yourself?

        • moridinamael says:

          The argument you refer to is the one I am thinking of. I think the idea extends to the functioning of science. Science is, if anything, more winner-take-all than business, because in a pinch businesses can try to cut expenses and eke out a bit of profit, while important scientific discoveries are much more binary. If two researchers compete to solve a problem, one of them will win, and the efforts of the other will have been completely wasted. If they collaborate to solve the problem, they will not solve the problem twice as fast, or maybe not any faster at all, and they will have to split the credit. Whether the incentives motivate scientists to compete or to collaborate, effort is wasted in proportion to how many “excess” scientists there are above the threshold needed to solve the problem at a reasonable speed.

      • david stone says:

        This seems to ignore two important effects:

        1) Motivation. Knowing that there is another project out there motivates people to work more efficiently. To use a technology example, for a long time there was essentially one C++ compiler for Linux: gcc. Certain parts of the compiler advanced, but the feeling people had overall (myself included) was that development was stagnant. Then clang + llvm comes along and now gcc is racing ahead as fast as it can. This is despite both of these projects being free and open source, so there is no “benefit” to the authors other than pride, glory, and a belief that their way is useful.

        2) Diversity of technique. When there is only one person working on a project, the solutions will be the solutions they think of. When there is collaboration, differences are often abandoned early. With competition, differences are emphasized. To some extent, innovation is a hill climbing algorithm, and one strength of great thinkers is being able to guess which hill will be the tallest before you climb it, but no one is perfect. Climbing a bunch of hills simultaneously is more likely to find the tallest.
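
        A minimal sketch of that last point (a made-up one-dimensional “innovation landscape”; ten independent greedy climbers usually end up higher than one):

        import math
        import random

        def landscape(x):
            # A bumpy, made-up objective with several hills of different heights.
            return math.sin(x) + 0.6 * math.sin(3.1 * x) + 0.3 * math.sin(7.3 * x)

        def hill_climb(x, step=0.05, iters=5000):
            # Greedy local search: only accept moves that go uphill.
            for _ in range(iters):
                candidate = x + random.uniform(-step, step)
                if landscape(candidate) > landscape(x):
                    x = candidate
            return landscape(x)

        random.seed(0)
        single = hill_climb(random.uniform(0, 10))
        best_of_ten = max(hill_climb(random.uniform(0, 10)) for _ in range(10))
        print(single, best_of_ten)  # the best of ten is usually the taller hill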

    • Leah Velleman says:

      Additionally, the more people there are in a field, the more of their time is spent on intra-subject rivalries and disagreements rather than on new work.

      If one person working alone generates Hypothesis X, and then finds a really solid counterexample, they modify it or abandon it and get on with things.

      But now imagine a subfield of 100 people, with its own conferences and its own informal power structure, working based on Hypothesis X. One of their rivals finds a really solid counterexample. In that case, they often end up spending the next few decades egging each other on to stay the course and defend Hypothesis X against the outsiders — occupying a bunch of brains and resources, and forcing their opponents to occupy a bunch of their brains and resources in return.

  13. aNeopuritan says:

    To those actually willing to consider that science might slow down to a crawl and wondering how we’d need to proceed from *that*, I recommend this and this.

  14. VNodosaurus says:

    I think the trust in exponential growth shown here is excessive (especially when applied to preindustrial times).

    It’s true that for a field experiencing a bloom, you get exponential growth; Moore’s Law. But exponential growth, intuitively, doesn’t last forever. I suspect that the long-term trend with constant population is a power law; dx/dt ~ x^p, where p is between zero and one. Every discovery does not increase crop yields by a fixed percentage, but proportionally to, say, the square root of current yields. Then sustaining exponential growth requires an increasing rate of discoveries, and eventually something breaks – though it takes longer for it to break if population growth is ongoing (well, so long as population growth is exponential, even with coordination difficulties, progress should also be exponential for a power law, unless coordination difficulties get logarithmic; but population growth has been slowing down anyway, and coordination problems may well be logarithmic, so that isn’t saying much). As to GDP growth, which seems to be basically the only exception, well, cost disease is probably an explanation, along with economic policies to raise GDP above all and old-fashioned number massaging. But there’s also the enduring legacy of industrialization, especially in developing countries. That doesn’t mean it’ll continue forever.
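
    A quick numerical sketch of that distinction (toy constants; p = 1 recovers exponential growth from a constant research input, while p < 1 gives the power-law regime described above):

    def grow(p, years=200, rate=0.02, x0=1.0):
        """Euler-step the toy model dx/dt = rate * x**p, one step per year."""
        x = x0
        for _ in range(years):
            x += rate * x ** p
        return x

    print(grow(p=1.0))  # exponential: roughly 52x after 200 years
    print(grow(p=0.5))  # power law: sqrt(x) grows linearly, so x ends near 9x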

    This is all separate from the question of whether scientific productivity is slowing down. I think it likely is, at least in most fields, but I believe an exponential plot requires scientific productivity times (number of researchers to the power of a coordination trouble constant that’s less than one) to increase over time, not merely stay constant.

    As to art, I think that’s a separate issue altogether. I think we absolutely have a hundred Shakespeares and more; about the only pre-industrial works I really can’t see an analogue of being created today are, well, anything involving poetry, but that’s because poetry is in a horrible state as a medium.

  15. Rich Rostrom says:

    Improving “the economy” (the set of activities people do to obtain goods) has become a broader task as “the economy” itself has gotten broader and more complex. A 2018 automobile has far more components (physical and technological) than a 1918 automobile, even in the core elements common to both.

    Accordingly, there are a lot more scientists and engineers at work on automotive development and design. The same would be true for construction, agriculture, telecommunications. There are whole areas that didn’t exist in the past.

    In the biological field, the targets are no more complex than they were, but we know vastly more about the complexity within living organisms; there are lots of things to work on.

  16. Douglas Knight says:

    How about the efficiency of rediscovering the experience curve model?
    Is Henderson’s law optimistic or pessimistic? Or is this question a distraction from fields that aren’t following it?

    • VNodosaurus says:

      If we treat efficiency (1/cost) as equivalent to a measure of value like the inverse size of a transistor, then Henderson’s law is a power law – more pessimistic than Scott. However, if we treat cost as reflecting cost in time as well, and production speeds up in conjunction with the gain of experience, then Henderson’s law can be exponential or a singularity, so long as cost (in time) halves or better with a doubling of cumulative output. That does not, however, seem to be the case empirically. So as I understand it, Henderson’s Law is more pessimistic either way than the model discussed in Scott’s post.
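
      For reference, a minimal sketch of the experience-curve form being discussed (an illustrative 80% learning curve, i.e. unit cost falls about 20% per doubling of cumulative output; the constants are made up):

      import math

      def unit_cost(cumulative_units, first_unit_cost=100.0, learning_rate=0.80):
          """Experience-curve power law: every doubling of cumulative output
          multiplies unit cost by learning_rate."""
          b = math.log2(learning_rate)  # about -0.32 for an 80% curve
          return first_unit_cost * cumulative_units ** b

      for n in (1, 2, 4, 8, 1_000_000):
          print(n, round(unit_cost(n), 1))
      # Cost falls as a power of cumulative output, so exponential cost declines
      # require exponential growth of cumulative production.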

      • Douglas Knight says:

        Are you saying that a power law is less optimistic than an exponential, even if they involve different variables? Surely you wouldn’t say that Henderson’s law is more optimistic than the ideal gas law?

        Henderson’s law is extremely good at identifying which industries have exponential cost declines: exactly the ones with exponential growth of production.

        Henderson’s law is extremely close to the BJRW’s model. It’s not so much that they are competing, as that BJRW (and Nagy et al that Nintil cites) is a mechanistic explanation of Henderson’s law.

        • VNodosaurus says:

          My understanding is as follows:

          Scenario 1 (BJRW, apparently) – With constant input, we should expect exponential growth. With exponential input, we should expect superexponential growth.

          Scenario 2 – We should expect exponential growth no matter what.

          Scenario 3 – With constant input, we should expect power-law growth. With exponential input, we should expect exponential growth.

          (‘Input’ referring to, basically, the rate of research being done, or number of researchers times productivity.)

          Henderson’s law is dealing with a slightly different topic, yes: instead of having ‘number of researchers’ as external input, it takes ‘units produced’ as input. But if we assume the rate of producing units is either constant in cost or grows as the inverse of cost, to get a direct comparison (and is also proportional to the research rate input), we get scenario 3 in both cases.

          • Douglas Knight says:

            It’s pretty weird to define “input” as researchers*productivity when productivity is not observable, let alone controllable.

  17. Eratudo says:

    Maybe this is also an artifact caused by only looking at outcomes that are measured as an increase in some quantity over time. Meanwhile a lot of innovation is in the form of doing things that could not be done before. Like when people created the first free-electron laser: we went from not being able to build it to being able to build it. And the factor describing innovation is often not making something better in general, but making it good enough to be used for certain applications (this is related to what paulm said). Even with smaller chips the relevant outcome is having chips small enough for the PC or the smartphone, and only to a lesser extent having small chips in general.
    I also think that the slowdown in the increase in crop yield is not that surprising, since we have heavy regulation in place to discourage the use of GM techniques. So most of this innovation is done using old techniques.

  18. AppetSci says:

    I didn’t really see the question of Hard Limits being brought up here. There are hard limits to athletic performance: a marathon runner, for example, will never complete a marathon in 1 hour, because the human body simply cannot store and release enough energy to move its own weight that distance that fast. There are also hard limits to technology, where it is impossible, for example, to reduce chip feature size beyond a certain limit because of the physics of the electrons moving in the circuits.

    Hard limits can be overcome by technological step changes (quantum computing, or artificial selection, or genetic modification), but these require time, which is a major factor, not just intelligence: you still need a genius to think up the increasingly complex ideas or theories, but the R&D resources now needed to test these massively complex ideas are also massive in terms of man-hours.
    Chips are so complex today that apparently no one person fully understands how every part of the chip functions. Specialist teams are responsible for different parts, but as no one has a complete picture of the whole, we have reached the Hard Limit on complexity that humans can handle. Now, even if AI becomes the omniscient chip genius, will it still require hordes of scientists to carry out R&D?

    • Conrad Honcho says:

      Chips are so complex today that apparently no one person fully understands how every part of the chip functions.

      I do not believe this is true. Most of a chip is memory, which is just a very simple transistor configuration repeated over and over again. Chips today have multiple cores, so you really just need to understand one core which is then copied and pasted (also understand the interconnects). I did my graduate research in computer architecture, designing and simulating my own multicore processors about 15 years ago, and I sure thought I understood it. It’s more complicated today, but not that much more complicated.

      It’s not like biology, where researchers are looking at this incredibly complex cell evolved over billions of years that they couldn’t possibly construct on their own. A microprocessor is an entirely human-conceived and designed object, made with extremely simple components arranged and connected in perfectly logical ways, building on top of each other in increasing layers of abstraction. “Complex,” sure, but definitely understandable to someone familiar with the art. And every semester at every major university in the world a few dozen more electrical engineering students take the Computer Architecture courses, design some simple microprocessors by themselves, are exposed to the more advanced concepts, and could certainly go on from there.

      • david stone says:

        (also understand the interconnects)

        There is quite a bit of complexity buried in that parenthetical, especially when you consider it in the context of this thread, which is about scientific understanding in general (as opposed to a single implementation). The interconnects give us the concept of memory orderings, which is to say, on some architectures you can start with x and y equal to 0, but have “you write 5 into x, then you read 3 out of y” on one thread and “you write 3 into y, then read 5 out of x” on the other. In other words, on one thread the write into x happens before the write into y, and on the other thread the write into y happens before the write into x. The general concept here is cache coherency, and it comes up pretty regularly when designing highly efficient multiprocessor systems, especially since you also need a way to prevent it in order to actually write correct multi-threaded programs.

        Or consider something much simpler. How many people do you think understand how a branch predictor works? You can read this well-written StackOverflow answer for a more thorough explanation, but the short version is that when code says if (b) { do_something(); } else { do_something_else(); }, it can be really slow to wait until you know whether b is true before you start executing something. So instead, the processor guesses which way the branch will go and starts executing some code speculatively. You can have a 0-bit branch predictor (assume the branch is always taken, for instance), or a 1-bit branch predictor (assume it does whatever it did last time, for instance), and so on. At some point, you’re willing to give several bits to your branch predictor, but how do you actually decide what their states mean? Machine learning, of course! There are processors which decide that determining how best to predict the branch manually is too complicated, so they have a neural-net branch predictor that tries to “learn” from your execution which way to take.
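
        As a toy illustration of the n-bit idea (the textbook two-bit saturating counter, not how any real chip’s predictor is actually built):

        def two_bit_predictor_accuracy(outcomes):
            """Toy 2-bit saturating counter for a single branch: states 0-1
            predict "not taken", states 2-3 predict "taken"; each actual outcome
            nudges the counter one step toward that outcome."""
            state, correct = 2, 0
            for taken in outcomes:
                prediction = state >= 2
                correct += (prediction == taken)
                state = min(state + 1, 3) if taken else max(state - 1, 0)
            return correct / len(outcomes)

        # A loop branch taken nine times and then falling through, repeated:
        pattern = ([True] * 9 + [False]) * 100
        print(two_bit_predictor_accuracy(pattern))  # 0.9: only the loop exits miss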

        I easily believe that a PhD or master’s student could learn the fundamentals, but when it comes to doing it efficiently (which is what it’s all about), there are only a handful of people who even legally have access to know the cutting edge, let alone understand it.

  19. thomasthethinkengine says:

    Progress within fields slows.

    New fields spring into existence.

    • Lambert says:

      To extend upon this, I suspect that progress in a field is an ogive.
      Due to some insight or technological advancement, a new avenue of discovery opens up. Researchers clumsily grasp at the lowest hanging fruit but quickly figure out how to get things done. Then there’s a golden age as they churn out loads of research before progress slows down again.
      But it looks like progress is always slowing down because the ones that are growing at an increasing rate are too new and small for most people to have heard of them. That, or they are not yet regarded as fields in their own right.
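
      A quick sketch of how that can look in aggregate (made-up numbers: each field’s progress is a logistic ogive, new fields open at regular intervals, and the sum keeps climbing steadily even though every individual field flattens out):

      import math

      def field_progress(t, start, speed=0.5):
          """One field's cumulative progress: an ogive that takes off around `start`."""
          return 1.0 / (1.0 + math.exp(-speed * (t - start - 10)))

      def total_progress(t, n_fields=20, spacing=10):
          """Sum over fields that open up every `spacing` years."""
          return sum(field_progress(t, start=i * spacing) for i in range(n_fields))

      for year in range(0, 201, 40):
          print(year, round(total_progress(year), 2))
      # Each individual field saturates, but the aggregate keeps rising at a
      # roughly constant rate for as long as new fields keep appearing.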

    • janrandom says:

      And maybe new fields are springing up at an exponential rate (or proportional to population size)?

      Seems hard to test this. Maybe look at year numbers in Wikipedia scientific field categories?

  20. christopher hodge says:

    You’ve buoyed my spirits Scott; I eagerly await the future utopia wherein there are more transistors in my computer than there are atoms in the sun.

    • Freddie deBoer says:

      This is a succinct observation. The question is not whether there are physical realities that will stop linear progress, but whether we will recognize them when we approach them.

    • Michael Watts says:

      Given the statement of Moore’s Law used here, 235 years from the time when your computer had 10 billion transistors. 😀

    • david stone says:

      Clearly, we will move to sub-atomic transistors.

  21. DS says:

    Hypothesis 4: the bottleneck is industry, because science waits on tools.

    We already know many discoveries had to wait for new tools. Why did Galileo discover the moons of Jupiter when the ancient Greeks didn’t? Was he so much smarter? Probably not. He had something better than smarts: he had a telescope.

    It just didn’t matter how smart the Greeks were; it didn’t matter how many of them there were. You could simulate their lives a million times and they would never, ever discover the moons of Jupiter, or work out orbits well enough to see they were ellipses. Because they didn’t have telescopes.

    X-ray crystallography had to wait for X-rays. Theories of superconductivity depend on experiments with superconductors.

    What if this is actually true of everything? What if the real bottleneck to science is not the cool ideas, but the tools that let us find data to prompt and test cool ideas?

    Then we’d find that science depends not on geniuses, but on technological industry that makes their tools. And technological industry depends not only on prior science. Industry also depends on simple economic growth.

    The model here is: Industry -> Tools -> Data -> Science.

    If “industry” is necessary, then all science will get bottlenecked by the rate of productivity growth. As long as productivity growth is close to linear (and that’s its own mystery!), science will follow.

    Does it sound strange, to say that science depends on industry? Well, there’s a lie they tell in school. They tell us our tools arrive as “inventions.”

    But when was steam power invented? About 50AD, in Roman Egypt.

    When did it become a useful tool? In 1700s England, when there were enough coal mines to justify its development.

    When were convex lenses invented? Ancient Greece.

    When were telescopes good enough to see the moons of Jupiter? In 1600s Europe, when there was enough glassblowing industry to make quality telescopes practical.

    That’s your straight line: the straight line of economic growth. You can have as many geniuses as you want, but they’re all dependent on the society around them to improve its tools.

    So I think the right place to ask, “why the straight line?” is economic growth, not science. Once you’ve locked in yesterday’s productivity growth to a straight line, tomorrow’s science will also be a straight line.

    Remember, after AD 1300, China was not a major source of science, despite having more people than anybody. Why? Because its economy stayed rural. All those geniuses, and no tools to give them data.

    So that’s my hypothesis: science is bottlenecked by industrial progress.

    Because even a genius needs tools to do science.

    • DS says:

      “The bottleneck is industry” makes an interesting testable prediction.

      Sciences that have been deluged with new tools from industry should be the ones with the most excitement about new discoveries. Where tools have improved incrementally or not at all, sciences really should show a “low-hanging fruit” problem.

      For example, we should expect pure mathematicians to be more likely than chemists or biologists to rank 1900-1930 discoveries ahead of those in 1970-2000.

      At the other end, climate science and machine learning researchers should be distinctively impressed by their peers’ most recent work. Those are two fields where investigation becomes massively more feasible every time computers get more powerful; it’s like industry gives them a whole new set of tools every five years.

      • Autolykos says:

        FWIW, I work in machine learning, and am equal parts amazed and worried at the current rate of progress. Five years ago I thought Eliezer was way premature with thinking about AI risk, now I’d give a single-digit percent chance that one really dumb decision in the next decade by a guy working at Google or Facebook could doom us all.

    • Doug S. says:

      Kepler relied on Tycho Brahe’s extremely precise measurements for concluding that orbits must be elliptical and not circular. Tycho Brahe didn’t have a telescope, but he did make many improvements in the accuracy of instruments used in naked-eye astronomy.

    • woah77 says:

      I agree that this is true, but it seems to me to be another way of stating hypothesis 3. That is to say “All low hanging fruit available with the tools we have right now is easily captured.” With a sudden invention of a new tool, you can see exponential growth because new low hanging fruit is available. So while I think you’re right about what the limiting factor is, that’s more of an explanation for hypothesis 3 rather than a competing hypothesis.

    • Cerastes says:

      My field is a literal example of this. The first person to talk about my area was Aristotle, but it was all very hand-wavey and not quantitative. The field basically languished until the start of the 20th century (when motion pictures became possible) and even then it was highly limited until a series of technical advances in the 1950’s allowing us to measure the key variables of interest. There was a subsequent “early burst” where a lot of low-hanging fruit was picked, but, lucky for me, the tail end of that is still occurring.

      I apologize for not giving the field or more details, but I try to hide my identity online and the field is small enough that disclosing it could compromise that.

    • Mr. Doolittle says:

      I might suggest an addition here. Engineering and Science (practice versus theory) can be seen as a cycle. Science hits a wall where theory is too far ahead of our ability to test it, and needs to wait for practice to catch up. That seems to be your point. You could also say that some of our current physics theories are at an extreme version of that point; I’m thinking of the origin of the universe, multiple universes, etc. There isn’t much point in pushing for new theories that are currently untestable while existing theories remain untestable. This can also be true, as you mention, with specific sub-fields and inventions. In fields where practice is hitting walls (we can’t improve on the current technology incrementally anymore), science has all the tools and is given emphasis on moving those walls further back.

      I also happen to think that a lot of the scientists working on many fields (the excess Scott talks about when 100,000 people don’t have more discoveries than 20 used to) are working on Engineering problems. It takes much more time to develop and introduce a new infrastructure than it does to think it up in the first place. Even if the thought is genius and revolutionary, implementing it takes a lot of time. Cell phones and smart phones are amazing tech, but it has taken years to extend networks and towers to make them functional outside of a few major cities.

    • Enkidum says:

      Related:

      Greenwald, A. G. (2012). There Is Nothing So Theoretical as a Good Method. Perspectives on Psychological Science, 7(2), 99-108. doi:10.1177/1745691611434210

      Abstract: This article documents two facts that are provocative in juxtaposition. First: There is multidecade durability of theory controversies in psychology, demonstrated here in the subdisciplines of cognitive and social psychology. Second: There is a much greater frequency of Nobel science awards for contributions to method than for contributions to theory, shown here in an analysis of the last two decades of Nobel awards in physics, chemistry, and medicine. The available documentation of Nobel awards reveals two forms of method–theory synergy: (a) existing theories were often essential in enabling development of awarded methods, and (b) award-receiving methods often generated previously inconceivable data, which in turn inspired previously inconceivable theories. It is easy to find illustrations of these same synergies also in psychology. Perhaps greater recognition of the value of method in advancing theory can help to achieve resolutions of psychology’s persistent theory controversies.

    • ChrisA says:

      My view is similar, in that the way most technical progress works is that it is an evolutionary process, where numerous small innovations are tried on existing artefacts (an artefact being a recognisable discrete technology or process, like, say, aeroplanes, a shopping method, or a woodworking technique). Most of these innovations or ideas are detrimental or not an improvement, but a few work; these are then incorporated into the next generation, and the process repeats. Anyone trying something new knows this process – most of what you try just doesn’t work, and it is really hard to predict ahead of time what will work.

      So innovations don’t really come from great insights, they are basically the survivors of lots of insights most of which didn’t work.

      I think this model explains the steady growth rates of industrial economies at the innovation frontier. You are incrementally improving things via evolutionary processes; the amount of innovation occurring is a function of the number of artefacts in existence at any one time, and each step can’t deviate too far from the current artefact. It’s sort of like how it is theoretically possible for a camel to evolve into a whale in a small number of stages, but that would require a whole lot of very unlikely mutations happening very quickly. Evolutionary growth, in other words, occurs in small steps, not large ones, like the 1 to 2% growth rate in our economy.

  22. Jack V says:

    Hm, interesting. Thank you for summarising.

    I’m not quite sure what I think.

    Someone upthread talked about coordination problems, and I think that must be part of it. If you’ve read Mythical Man Month (or, like me, read a summary), you’ll recognise the idea that 10 programmers do much less than 10 times the work of one programmer, because of the overhead of getting everyone familiar with what everyone else is doing. And if you have 100 or 1000 programmers, you have whole teams dedicated to writing documentation, designing APIs, and other vital work that just wasn’t needed when it was one person working alone. What you’re really doing is designing a system made out of programmers which writes programs, and (a) that system needs to be grown over time, you can’t easily just conjure it into being, and (b) its efficiency depends on the system, not just the programmers within it.

    This sounds a lot like science and engineering too. It might explain why more people seem to help, but it’s hard to measure the difference between 1000 and 5000.
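
    A toy way to see that overhead (the pairwise-channel count is the standard Brooks observation; the cost number is made up):

    ```python
    # Illustrative sketch only: assume each programmer adds 1 unit of work per week,
    # and each pairwise communication channel eats a small fixed fraction of a unit.
    def effective_output(n, channel_cost=0.01):
        channels = n * (n - 1) / 2   # every pair has to stay in sync
        return n - channel_cost * channels

    for n in (1, 10, 100, 1000):
        print(n, effective_output(n))
    # Output per head falls as n grows, and past roughly n ~ 2/channel_cost the naive
    # team produces less than nothing, which is why real organisations split into
    # sub-teams, write documentation, define APIs, and so on.
    ```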

    OTOH, I also wonder if it depends what’s driving the progress. It’s hard to demand progress on a particular question directly, you always need background research to advance the field at the same time. But I wonder if there were real rewards for improving transport speed and transistor density, which only now is slacking off, and that led to enough investment to produce progress, however many people were needed. Whereas, say, fusion power, kept getting “we don’t REALLY commit to it” funding?

    I’m really not sure what I think. I think you can’t expect a neat relationship between number of people and results. And it’s hard to imagine what makes science a lot faster, we are still making progress on things we think are important. But maybe sustained investment would produce a faster trendline. Sustained, because it may need, for instance, a revolution in scientific publishing or similar, not just more people doing the same thing.

    • eigenmoon says:

      > getting everyone familiar with what everyone else is doing.

      Exactly this. Perhaps science works just like Bitcoin mining. Every day the discovery of the day goes to a randomly selected researcher, and then everybody else has to study it for one day. Thus the number of researchers doesn’t even matter and is determined by how much the investors value their chance of gaining a competitive advantage.
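
      A minimal simulation of that toy model (my own sketch of the setup above, parameters invented):

      ```python
      import random

      # Toy model from the comment above: one discovery per day, awarded to a random
      # researcher; everyone else spends the day absorbing it. Total progress is
      # therefore independent of headcount.
      def simulate(num_researchers, days=365, seed=0):
          rng = random.Random(seed)
          wins = [0] * num_researchers
          for _ in range(days):
              wins[rng.randrange(num_researchers)] += 1
          return sum(wins), max(wins)

      for n in (20, 1000, 100000):
          total, best = simulate(n)
          print(f"{n:>6} researchers: {total} discoveries/year, best individual: {best}")
      # The field always logs 365 discoveries/year; only the expected haul per
      # researcher shrinks, which is what ties headcount to how much investors value
      # a shot at a competitive edge rather than to the rate of progress.
      ```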

  23. ana53294 says:

    There are many things that define a researcher’s productivity. Their IQ is just one of them, especially in the more experiment-heavy fields.

    30 years ago, grad students were saved from the unending drudgery of shaking trays by this really useful and simple machine. A machine like this costs around $1,000 and saves lots of hours of mindless work. Modern inventions that also save endless hours of mindless tasks cost a lot more (I wasn’t able to find a price, because you need a quote; my guess is something upwards of $100,000).

    For $100,000, you can hire a grad student full-time for 4 years, buy chemicals, and the University gets their share of overhead. If you buy the robot, you just get the robot. You still have to spend $$ to hire the grad student who will actually figure out how to make use of the robot.

    The shaker is cheaper than the grad student; the grad student is cheaper than an automated workstation. Besides, Universities get a cut of the funding every time a grad student is given a grant (while they only get the robot cut once, if at all). There is no incentive to increase research per man-hour, because that would mean increasing $$ per man-hour.

  24. joncb says:

    A lot of people asked paper author Michael Webb this at the conference, and his answer was no. He thinks that intuitively, each “discovery” should decrease transistor size by a certain amount. For example, if you discover a new material that allows transistors to be 5% smaller along one dimension, then you can fit 5% more transistors on your chip whether there were a hundred there before or a million. Since the relevant factor is discoveries per researcher, and each discovery is represented as a percent change in transistor size, it makes sense to compare percent change in transistor size with absolute number of researchers.

    I don’t find this intuitive at all. There are three reasons why I question whether this is true.

    Firstly, I can’t understand what model would result in each discovery decreasing transistor size by a certain amount (even on average or by order of magnitude). At the very least, every discovery you implement closes off potential design space for further discoveries, making your cost/benefit ratio worse by default. Moreover, most discoveries make the whole system more complex as a result. The best example of this in action is the whole speculative execution/Spectre kerfuffle. When you make things more complex you obviously make things harder to improve.

    Secondly, CPU transistors are starting to hit hard electrical limits – make them much smaller and it’s hard to stop voltage going where it’s not supposed to go. Expecting that process to continue forever seems shortsighted.

    Thirdly, his formulation of “fit 5% more transistors on your chip whether there were a hundred there before or a million” is probably not 100% true either. When you’re first finding things they apply generally. Then you have to start moving into special cases which only apply to a certain subset of chips. Every now and then you come out with a corker like speculative execution which is a 100% game changer, but as I mentioned before, with the solution space shrinking with each innovation we should expect game changers less and less often.

    The idea that the resources you need to keep improving grow over time isn’t even particularly new, since Moore originally offered this flip side to Moore’s law: “This had led to the formulation of Moore’s second law, also called Rock’s law, which is that the capital cost of a semiconductor fab also increases exponentially over time.” There’s some argument about the applicability of Rock’s law, but in terms of intuition I think it’s closer to the truth than the linear-discovery model.

    As far as the graph goes… I don’t have solid evidence here, but I’m always suspicious about believing any information about CPUs from about the 2000s onwards. CPU manufacturers have gotten a bit fast and loose with definitions at times, switching from measuring “thing” to measuring “thing equivalents” (clock cycles is the example I specifically remember). Perhaps it’s on the up and up, but I’ll always have my doubts.

    • joncb says:

      Also to add, I’m definitely on board with your low-hanging fruit theory. There was an old comment I remember (but can’t find at the moment) that went something like “The last real discoveries in computer science were made 40 years ago by Edsger Dijkstra; everything afterwards has been merely commentary”.

  25. Hackworth says:

    > I’m just saying that “Oh, that exponential trend can’t possibly continue” has a really bad track record.

    I’m not sure I really need to say this, but every trendline will be wrong eventually. Making fun of any era’s futurists for misjudging the exact point in time when that happens is a cheap shot, especially in hindsight, but it doesn’t invalidate the overall point. Moore’s Law for electronic transistors will come to a dead halt, because at some point in the not-too-distant future, electron tunneling effects will make greater transistor density useless. Transportation networks of any kind will find their limit at the speed of light. Population growth will find its limit in the amount of energy the sun puts out, or in how much waste heat we can dissipate into space, whichever is smaller. And so on.

  26. Cheese says:

    A perspective from having worked (PhD and also paid before and after) for a reasonable time in mol biol associated fields. 2 & 3 ring true to me.

    2 (systemic problems) is a problem in biology research in my opinion. It’s difficult to secure large, long-term funding for any project, really, outside of a top few labs. I have had multiple PIs who are essentially forced to game the grant system – go as general as possible, try to draw tenuous links to big ticket problems, and then use a large amount of the money to generate ‘safe’ papers while running high risk, high gain projects on the side. Failure risk is too high when you have a house and a couple of kids and 10 people depending on you for work. This is not to say that the careerist perspective is true. The particular PIs I’m thinking of had some huge ideas, and one basically had the idea that a large lab in the US (I’m Aus) used to develop a particular CRISPR/Cas9 tool – they simply beat us on a resources basis.

    In fact, point 2 renders the careerist point a bit moot I feel. If you want a cushy career, get out of academia now (in biology at least). It’s why I left. It’s why many I know who are more intelligent and insightful than I left. There are some who will plug away at basic science questions that many might consider to be a careerist view, but I don’t agree. The job/student ratio and the pay scale drives a lot off quickly. Yes, you have to sometimes go ‘safe’, but that barely makes it less difficult and stressful. 2 is also a big problem in creating walled gardens. I left one lab out of frustration that the whole field was doing the same bloody thing with slightly different tools and in different ways independently. Because that was what the system rewarded.

    3 is interesting. I think it’s largely true. Someone else in the thread mentions hard limits, which I also think may become a problem sooner than a lot of people think. That may be a function of complexity though, as someone else mentions – will AI solve it? I don’t know.

    • March says:

      Also a former molecular biologist (who didn’t leave to pursue a cushy career, but out of total fed-upness with the brokenness of that particular island in the science stream), can agree with that assessment. Especially the grant system gaming, ha.

      Perhaps the ‘careerist’ thing is also explained by the fact that in earlier centuries, many scientists were independently wealthy or sponsored by wealthy families. Now, PhD students, postdocs and even pre-tenure professors are in many ways a precariat. (A relative of mine worked at UCSF for a couple of years and told me that one of his PhD students couldn’t afford rent at a reasonable distance from work, so he decided to be homeless instead, take showers at work and live out of a locker.) Even if a scientist only becomes ‘valuable’ with 15 years of experience under his or her belt, the weeding out that happens at earlier stages isn’t a great filter for excellence.

      • cryptoshill says:

        The weeding out that happens at earlier stages is a great filter for “cynical careerism” though.

  27. Murphy says:

    Something that feels left out: the spawning of new fields.

    I work in a field that couldn’t have really been described in 1930.

    They didn’t really know that DNA was a thing that mattered, and electronics and computers weren’t even a glimmer in the inventors’ eyes. There’s no exponential curve between then and now; the curve got created from whole cloth decades later so that it could even start.

    An old guy at a conference marveled that he spent his entire PhD studying 186 bases that were part of a sequence coding for part of a single protein. Decoding their order and function was a monumental task.

    His own PhD students had just finished a project looking at 190 whole bacterial genomes. That was ~11 years ago.

    Now I’m working on tens of thousands of whole human exomes and hundreds of genomes routinely.

    Faced with a novel organism with millions of bases of DNA code, I can bung its genome through some predictive code and get a somewhat decent prediction of the shape of various proteins and identify likely important sections, all while I sip my morning coffee.

    My own field has itself spawned new fields that feed back and forth with it.

    My own field has fed back results to its own parents that they never would have reached on their own.

    Sometimes a discovery means you can produce twice as many horseshoes.

    Sometimes it means that the horseshoe productivity graph no longer matters or means anything because you’ve created a new field with a new graph that gets to start at the bottom again.

  28. Emperor Aristidus says:

    I read this article, not without confusion, with the belief that “Gods of Straight Lines” was something from an older post, so I then checked the archive. But… no. Ergo I still have no idea what it means exactly. Could someone help out a new reader?

    • Walter says:

      I think it means what he says in this article: trends want to keep on going. The Gods of the Copybook Headings, by contrast, are common sense. Straight Lines > Copybook Headings is the Moore’s law situation, where the trend Just Keeps Going while the naysayers repeat that surely this year it will all come crashing down.

      • Emperor Aristidus says:

        Ah, alright. Thank you. That is… basically what I figured, but since the G.O.T.C.H.A. are, of course, this established *thing* with a complex background, I sort of assumed the same had to be true of their straightliney counterparts, regardless of whether Alexander had used them before.

    • jaimeastorga2000 says:

      The Gods of the Straight Lines come from Scott’s old post “Book Review: The Great Stagnation”. It doesn’t really make much sense now that Scott has deleted his LiveJournal, though.

    • AnnaR says:

      But “Gods of the Copybook Headings” is, I believe, from a poem by Kipling.

  29. Mr Mind says:

    Many many years ago, during the first day of the Digital Electronics course at my university, the professor showed a similar graph and told us something like “See? Engineers working in this field will be more and more sought after!”
    I remember thinking: “and in fifty years there won’t be enough humans on Earth to make progress in the field”.
    This has always been a subconscious thread in my mind, and I’m glad it has gained visibility.
    I agree with Scott and I think there’s a trivial explanation for the slow-down: entropy.
    If you take systems of complexity, say, n, then there is an optimal one for your task. If you have it already, you have exhausted the low-hanging fruit of complexity n.
    There are always more powerful systems, but they are also more complex (by Gödel’s speed-up theorem). Say that the added complexity is k: there are thus 2^(n+k) – 2^n new systems, most of which are opaque to the old system (by Chaitin’s theorem).
    Since it’s not yet possible to exponentially increase the intelligence of human beings, what’s left is increasing the number of people working on a problem, which contributes only linearly; so you’re left throwing exponentially more scientists at exploring the new complexity space, only a tiny fraction of which is going to be of interest.
    Thus, like entropy, the probability of finding a useful fruit decreases exponentially, and to make constant progress you need an exponential increase in exploration of the systems’ space.
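
    To put rough numbers on the counting argument (a toy calculation that just treats “systems of complexity n” as n-bit descriptions and ignores the Gödel/Chaitin subtleties):

    ```python
    # Toy counting argument: going from complexity n to n+k opens up
    # 2**(n+k) - 2**n new candidate systems to sift through.
    def new_systems(n, k):
        return 2 ** (n + k) - 2 ** n

    n = 40
    for k in (1, 5, 10):
        print(k, new_systems(n, k))
    # Each added bit of complexity roughly doubles the search space, so if only a
    # tiny, fixed fraction of systems is useful, a constant rate of useful finds
    # requires exploration effort that grows exponentially with k.
    ```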

  30. RalMirrorAd says:

    Have the implications of a finite # of smart people existing been handled here? I hesitate to bring it up because depending on your assumptions it enters CW territory.

    There probably *are* a lot more geniuses alive today than in the past, but the geniuses of today are working on harder problems and so appear to be less productive than in times past [as has been argued by others] — moreover, there appear to be more geniuses alive who can be pulled out of rural areas into spaces where they can be useful for research, which otherwise might not have been the case.

    But what happens when populations stop growing [as they probably should] and you’ve meritocratically sucked out all of the smart people from the remote corners of the world? You suddenly can’t keep throwing exponentially more researchers at a problem.

    If ‘smart person’ is simply something that is manufactured in a school or university then you’re probably not concerned until close to 100% of the population is a scientist. You simply automate the lower skilled jobs and build more PhDs.

    But I suspect people who read this blog are at least sympathetic to the idea that manufacturing genius is not so easy and intelligence is heritable. Combine this with the fact that universities by their nature take society’s most intelligent people and keep them occupied and in debt during their most fertile years; the thought of there being fewer available researchers 50-100 years from now seems plausible.

    Somewhat related is the tendency of the FIRE sector to pull talent away with the lure of high salaries.

    The only saving grace I can see in that scenario is if gene modification becomes a practical option for a significant portion of the population.

    Am I missing something?

    • Murphy says:

      Combine this with the fact that universities by their nature take society’s most intelligent people and keep them occupied and in debt during their most fertile years; the thought of there being fewer available researchers 50-100 years from now seems plausible.

      It also puts them in constant contact with similarly bright people. We have a lot of great PhD students whose parents are both professors or similar.

      It’s like cramming all the top male and female basketball players of the world into a few hundred institutions together: you end up with a lot of pairings and a lot of very tall kids.

      • RalMirrorAd says:

        It allows for the concentration of intelligence [which has its own associated social problems, see “coming apart”] But where in the developed world are people with bachelor’s degrees or higher having children at or above replacement levels? And how do these levels compare with the population as a whole?

    • AnnaR says:

      It also takes intelligent people out of trades, manufacturing, farming, child rearing, and other less socially remunerative activities and pushes them into a narrow funnel of majors, occupations, and geographies.

      Although (as an aside) many of the “sure moneymakers” (law and academia being prime examples) for smart people are becoming too saturated, so you might see a flow of intelligent people back into more diverse occupations and even back to their home countries and communities.

      Actually, as an even further tangent, as people are more plagued with loneliness and isolation and the grind of living in soulless metropolises away from families, perhaps there will be a slow trickle of people back to their communities of origin and to more diverse career and life pursuits, which may re-establish the distribution of intelligent people across various sectors of society.

  31. synthmeat says:

    #4 We’ve passed the One-Man Limit.

    a) Up to the 19th century, given a good tutor/colleagues and source material, a curious individual with some aptitude could basically grok all of mathematics. This peaks with Gauss, one of the last to know essentially all the math preceding him. Even savants like von Neumann couldn’t get there anymore.

    b) For the physical sciences, due to the nature of the slow interplay with mathematics, this extended a bit further, up to the 20th century. I’m not sure which poster boy I would use here.

    c) Because of the natural inertia of communication and humans in general, you see all this coalescing and peaking (results-wise) in the 1930s or so, give or take a decade.

    If this is valid, the corollary is that we’re stuck here unless we extend the reach of One-Man and/or fundamentally reinterpret essentially everything in a more elegant (dare I say, more beautiful) model, grokkable by One-Man.

    I’m aware this is not far off from #3, depending on interpretation. But I do believe it’s more precise.

    • Murphy says:

      60 years ago “programming” was done with a soldering iron and a complete understanding of the computer in question.

      Today I can download a gigabyte of library code that will do most of the hard work, such that I can code up an app in a few pages of code that doesn’t take spectacular intelligence to write.

      I don’t need to know how every byte of library code works. I don’t even need to know the physical basis of the computer it’s running on.

      Back during my undergrad I remember spending a few weeks digging into my chosen final-year project. A fairly specific set of problems. Researching it was one of the first times I realized that I’d reached the far edge of human knowledge on an academic subject.

      It was a very specific issue but the point is that once you have a modest base, you can power out along a needle-thin course to the very edge of human knowledge on a subject in a fairly short time.

      We’ve not reached the point where humans can’t find the edges and even push them further outwards in a reasonable time. It’s merely that we can’t expect to do so across many fields.

      • synthmeat says:

        But you cannot integrate your needle into the whole, a reasonable requirement for something described as a “breakthrough”. You’re relying on the assumption that you have a good system that, if your work is anything significant, will percolate your research around and transform what it touches.

        In that context, I’m arguing that one very good example of such a system was One-Man, and that has been fundamentally exhausted.

        • Murphy says:

          Sure you can. Someone pushes to the edge through their needle. They push the border just a little bit and they publish a paper that perhaps a couple dozen people actually use.

          One of them adds the new discovery in the form of a function to a software library and 6 months later people who aren’t even aware of the field are using the work wrapped up in an iphone app to overlay puppy faces on their video streams in real time.

          • synthmeat says:

            I actually was throwing 3D models over people’s faces in real-time video conferences in browser, a decade ago. There’s nothing fundamentally different between that and puppy filters, except better cameras and more transistors. It’s a technological evolution based on projective geometry and slapping whatever statistical method happens to work at a point in time, given resources. Not a scientific breakthrough.

          • Murphy says:

            I think you’re kinda missing the point and taking issue with a hypothetical example rather than the central idea: that you don’t need to be Newton for some small breakthrough to reach the world.

            Especially since I didn’t specify what advance someone might have actually made related to that example.

            The point is that even trivial “breakthroughs” can percolate out nowadays.

  32. JPNunez says:

    I think that some of the arts are kind of too easy.

    When they invented the piano, Beethoven came along almost immediately and dominated the field so thoroughly that we are still venerating him. Same with the electric guitar and Hendrix. I guess theater was invented in Greece (and had some of its masters there) but had to wait until it arrived in England to get Shakespeare, which makes Shakespeare a latecomer. Cinema had to wait less than a century to get masters of the craft, but it is too early to say whether they dominate as much as Shakespeare. Probably not, due to costs.

    Regular literature, discounting theater, has had a harder time getting undoubted masters. Maybe regular literature is more flexible than theater, and thus harder to master, or maybe it is so flexible that we have a hundred masters in different subfields of literature. We still have the modern novel dominated by Don Quixote, but that’s more an issue of it inventing the field than of its quality surpassing everyone else forever. Miguel de Cervantes is not as well regarded as Shakespeare in their respective fields, but he is still famous.

    Epic poetry has two of the oldest examples we preserve completely in the Iliad and the Odyssey, and they are still considered masterpieces. They were probably a group effort, but it is still impressive that they mastered the form so early.

    So it is not rare that the early efforts dominate the field and the latecomers have to make monumental efforts to get the same notoriety. The early low-hanging fruit is picked within about a century of a form being created. If anything, Shakespeare is the odd one, waiting 2000 years to own a field.

    • JPNunez says:

      I think that, at least for musical instruments, this is by design. No luthier sets out to build an instrument that nobody will master for 500 years. They want something that can be played _now_. There are no luthiers building visi-sonors that no one but crazy mutant geniuses can play.

      So they build easy-to-play instruments, and so a guy in the next 100 years learns all the ins and outs of it completely.

      • AG says:

        Music innovation is one of the things I’ve mused over quite a bit, and whether or not the next one will happen in our lifetimes.

        1) In our age of massively increased networking, people spend more time learning about the millions of different styles out there that already exist, instead of developing a new one. The last few trends in pop music have all been nostalgia-based, and new developments have basically been about how many different influences you can cram into a fusion.
        2) New music genres are actually technology driven, but we’ve perhaps reached peak music technology, in the sense that we can already generate any sound waveform we want. There is no new instrument that will create a unique timbre that we cannot discover now with synths. Indeed, other than the genre fusion approach above, modern music development has basically been rooted in DJs creating novel synths for their EDM.

        In comparison, I can still imagine the next frontiers of technology for storytelling.

    • The Pachyderminator says:

      “Mastering” a field of artistic endeavor doesn’t mean exhausting the possibilities, though. Liszt and Chopin developed the art of piano composition and performance beyond where Beethoven left it. Other musicians in the twentieth century developed it beyond that. You may or may not value them more than Beethoven – but they did things that Beethoven didn’t do, and thereby expanded the possibilities of the piano. Also, the piano itself was a development of earlier instruments. It would have been a big mistake for someone to think that J.S. Bach had mastered the harpsichord so thoroughly that keyboard music had no more progress to make. Similarly, Vergil developed the “epic” form far beyond Homer.

      • Yes. Sticking just to the piano, two new sets of masters during the 20th century expanded the known range of art on that instrument in genuinely new directions. One was the early-20th-century Russians (Shostakovich, Horowitz, Stravinsky) and the other was the midcentury jazz geniuses (Tatum, Peterson, Garner). Each took the art of piano performance beyond where the earlier greats had left it. (And those two groups knew of each other and listened to and admired each other.)

        • AG says:

          Jury’s still out on rubbing the strings with cellophane.

          More seriously, the modern innovation in keyboard performance is the use of synths to recreate pop songs live, switching back and forth between various instruments with a 2-4 keyboard setup. (Though some of that was developed with organ playing already.)

    • Winter Shaker says:

      it is still impressive that they mastered [epic poetry] so early

      How much of that is actually the result of the Greeks already having a long tradition of oral poetry recital, memorised by highly trained performers, which could then be written down more-or-less as soon as they developed literacy?

  33. jstorrshall says:

    The most common mistake in thinking about science and technology is to conflate them. But there is a hypothesis 4, the Machiavelli effect— see Chapter 5 of Where is my Flying Car.

    • Murphy says:

      Where is my Flying Car

      I mean if you reeeealy want one and don’t mind shelling out a huge pile of money you can get one.

      It just turns out flying cars are uneconomical and impractical for most uses.

      • jstorrshall says:

        The book has a substantial analysis of the economics of flying cars and of the potential of current technology. So far no one at any of the approximately 25 Flying Car startups appears to have read it.

      • The Nybbler says:

        If they’d read about flying cars, they’d know not to try to build one. First of all, the technological barriers to making a practical flying car are formidable. Second, and probably more significantly (after all, cheap antigravity could be invented tomorrow), neither the FAA nor any similar agency is going to let the ordinary man fly; the sky is an exclusive club by design.

        • Murphy says:

          All the rules re: flying are part of the impracticality.

          I vaguely remember an old comedy sketch about someone turning up with a flying car design and trying to sell it to a company. They agree that he’s solved the technical issues, but still: nobody wants the average driver whizzing above their heads riding a load of jet fuel. They feel much better knowing that most of the people in charge of the flying machines are mentally stable people with nice hats.

  34. Doug S. says:

    I’d actually be fine with the “hundred Shakespeares” estimate. Do you think there are 100 marathon runners today that can match the record set in 1900?

    • Murphy says:

      Well, the Olympic sprinting records from back then are now regularly beaten by high school sports teams… so….

    • vV_Vv says:

      There are probably hundreds of people in the Anglosphere with Shakespeare’s literary skills, but there do not seem to be a hundred times more Shakespeare-level works being published.

      But this might be actually a bad example, as literary quality at a high level is positional.

      It’s more plausible, however, to claim that there aren’t works a hundred times better than Shakespeare’s works.

      • 6jfvkd8lu7cc says:

        >But this might be actually a bad example, as literary quality at a high level is positional.

        On the other hand, the «low hanging fruit» argument is also a positional argument, so maybe it is indeed a good example.

        Of course, there are other positional aspects to the scientific question — important progress is measured by importance (which is positional), but the amount of progress in adjacent areas that is necessary (and achieved) is hard to quantify objectively.

        I would say that a hundred times more Shakespeare-level works _written_ (and probably even published, if we include online self-publication) is completely plausible — it’s just that nobody notices. I would even say there are proportionally more scientific breakthroughs on the same level of inventiveness as in the golden times; of course, even in the areas considered «important» it is a complicated question which subareas are «important» etc.

        • AG says:

          Ozy wrote this post about the value of new stories:
          https://thingofthings.wordpress.com/2016/06/15/against-gwern-on-stories/

          Where is my Juliet Capulet and the Methods of Rationality, Eliezer! (inb4 “Hamlet is Shakespeare Methods of Rationality That Tragically Still Don’t Work Because Tragedy”)

          What does it even mean for a work to be a hundred times better than Shakespeare? By many numerical measures (dollars earned, quantity sold/read, transformative works produced, subsequent original works influenced by it), one might argue that Harry Potter or Star Wars meet the requirements.

          And as per Ozy’s post, “great” is just so subjective here. There are people who don’t think Shakespeare is great, and any definition that tries to get around this returns to numerical measures, as above, which some modern works meet but which many would be loath to admit therefore make them a hundred times better than Shakespeare.

    • Enkidum says:

      I’d be very surprised if there are fewer than 10,000.

    • Fossegrimen says:

      Not sure there was a record set in 1900, but IIRC the record of 1908 was around 2:54.

      The qualifying requirement for the Boston Marathon is 3:00 so essentially the entire field can finish around that record and I would think most can beat it given an easier course than Boston. IDK how many spots there are but it sure looks like a lot. 25000 maybe?

      Sports pre-WW2 were mostly done by rich kids who could afford them, not by the fastest or most skilled.

      • Chevalier Mal Fet says:

        It’s worth noting that the Boston qualifying time is adjusted so as to keep the field roughly the same size – they can only accommodate so many runners. And the qualifying time has been steadily dropping since it was first introduced. When the standard was first introduced in 1970 (after more than 70 years without one), a 4:00 marathon was required to qualify. Now it’s been slashed all the way to 3:00, and I expect it will continue to fall.

      • Aapje says:

        One caveat is that marathons are in a race to lower their times, because faster run marathons have higher status. So a lot of marathons have optimized the course for fast times during the last decades.

        So the 1908 winner might have run a faster time on today’s course.

  35. Doug says:

    I’m not sure how much this changes the numbers and trends, but I suspect that over time in any field there’s mission creep about what constitutes a “researcher”. At Fairchild, I’d expect almost all the researchers spent most of their time figuring out ways to cram more transistors onto chips. But at Intel, I’d be surprised if any more than a small minority do.

    You have all kinds of researchers focused on things like compilers, pipelining, branch prediction, caching, power usage, making sure major software is supported and optimized for, security (hello Spectre), Q/A (hello FDIV), jiggering with instruction sets, networking, microcode, virtualization, multicore communication, device support, tuning recommendations, and all kinds of downstream applications like gaming or machine learning.

    This is mostly a function of the sheer success of microchips in the first place. The original researchers made computers so good and so fast, that we use them for a bajillion different things and every single person on Earth touches a CPU in some way multiple times a day. That means microchips have to be robust, optimized, stable, and secure in a dizzying array of environments and use cases that the original Fairchild researchers could never dream of.

    Effectively that means a whole lot more researchers working on all kinds of ancillary concerns that aren’t directly related to Moore’s law. The generalization would be that as a field becomes successful, its focus tends to shift more towards the “applied” end and away from the “pure” core.

    If you sell ten billion CPUs a year, it certainly makes sense to put two dozen PhDs onto figuring out how to shave 15 watts off electricity consumption when idling. That doesn’t make sense in a world where computers are so big and expensive that only the five richest kings in Europe can afford them.

    This kind of clouds the original question: how much human effort does it actually take to keep Moore’s law itself humming along? I suspect the general gist is still correct – a (low growth-rate) exponential increase in required manpower to achieve the same gains. But I think that without correcting for this issue, BJRW are overestimating the impact.

    • vV_Vv says:

      (disclaimer: I’m not a physicist; mine is an outside view)

      I think she’s right that hep-th physics is stagnating, but she’s fundamentally wrong to blame it on institutional problems. The institutional problems she talks about exist, but they are more a symptom than a cause. The real cause is the depletion of low-hanging fruit, due to the energy desert (it’s not technologically possible to investigate the energy levels required to evaluate a theory of everything) and possibly a cognitive desert (humans may just not be smart enough to come up with a candidate theory of everything in the first place).

      • Lambert says:

        Wow. You’d have thought at GUT scales they’d just start using joules.
        Because that’s the energy of a small car in a single particle interaction.
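
        Back-of-the-envelope, assuming the usual ~10^16 GeV figure for the GUT scale (my number, not from the thread):

        ```python
        # Rough check of the "small car" comparison, assuming a GUT scale of ~1e16 GeV.
        GEV_IN_JOULES = 1.602e-10
        gut_scale_j = 1e16 * GEV_IN_JOULES        # about 1.6e6 J per interaction

        # Illustrative car: ~1.2 tonnes doing ~180 km/h (50 m/s).
        car_kinetic_j = 0.5 * 1200 * 50 ** 2      # about 1.5e6 J

        print(gut_scale_j, car_kinetic_j)         # same order of magnitude
        ```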

      • Aron Wall says:

        Well I am a theoretical fundamental physicist, and I agree with you that it’s OBVIOUSLY the low-hanging fruits problem. We are rapidly approaching learning, not everything there is to know experimentally about fundamental physics, but everything which is feasible to know. (At least from accelerators; cosmology probably has a bit of a ways to go still.)

        I don’t think you’re using the phrase “energy desert” quite right though.
        What you should say is that there is a “large hierarchy” between the LHC and these other energy scales. We don’t know for sure there’s an energy desert; the only main reason to believe in it is if you are sure that GUTs are true, and worry that too many additional other things in between might spoil the coincidence of the 3 coupling constants lining up (which is now a bit off experimentally anyway.) The problem is that we built the LHC and there was nothing new there.

        The last really major change to the Standard Model of particle physics was in the 70’s, although in the 90’s we learned that two kinds of parameters that seemed to be 0 (neutrino masses and the cosmological constant) were actually not zero. Since about 1980, most progress in fundamental theoretical physics (outside cosmology) has come from exploring a) theoretical consequences of existing models or b) new models like string theory that have remained speculative. I do think we have learned quite a few important things in both categories though!

        I can sympathize with worrying about the domination of string theory preventing work on competing ideas, but the idea that somehow not funding string theory or worrying about naturalness would have led theorists to discover some quite different amazing new theory when we don’t have much experimental data, is really quite odd. Part of the reason for the bandwagon effect around string theory is that people actually got (mathematically and physically interesting) returns on their investment there. It doesn’t usually work to just set out to revolutionize physics from scratch; you usually have to work incrementally on something that already “works” at some level.

  36. EnviroEngineer says:

    A major factor in driving progress and growth that I don’t think was touched on here is the energy made available by fossil fuels. For example, for the past 100 or so years we’ve been turning fossil fuels into food and people through synthetic fertilizers, pesticides, and mechanization.

    Fossil fuels represent half a billion years of concentrated and stored solar energy. Liquid hydrocarbons are needed to run our transportation system. If we were to subsist on the annual flow of energy from the sun we’d have to massively reduce population, socio-technical complexity, or both, and the economy would be much smaller than the current one that runs on fossil fuels.

    Ultimately, energy and resource limits, and limits on the biosphere’s waste assimilation capacity (e.g., CO2 buildup in the atmosphere) will limit “progress” and growth.

    The decline in science productivity shown in the data in this article is an expression of diminishing returns on complexity, coupled with resource depletion (e.g., fracked shale oil has a much lower energy return on energy investment than the conventional oil it’s replacing).

    We tend to give all the credit for “progress” to the human intellect, and fail to realize the magnitude of the contribution of high net-energy fossil fuels to the equation.

    • Murphy says:

      “If we were to subsist on the annual flow of energy from the sun we’d have to massively reduce population”

      That’s sort of a function of solar panel prices. Total available energy from the sun isn’t so much the issue, more the economic practicalities of collecting it.

      If we don’t sprint to the point where we can make other energy sources work we basically just die.

  37. benquo says:

    This seems like it’s mixing together some extremely different things.

    Fitting transistors onto a microchip is an engineering process with a straightforward outcome metric. Smoothly diminishing returns is the null hypothesis there, also known as the experience curve.
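
    To be concrete about the experience-curve null hypothesis (the standard formulation, sketched here with made-up parameters): unit cost falls by a fixed percentage every time cumulative output doubles.

    ```python
    import math

    # Standard experience-curve form: cost per unit falls by a fixed fraction with
    # every doubling of cumulative production. Parameters here are illustrative,
    # not fitted to any real transistor data.
    def unit_cost(cumulative_units, first_unit_cost=100.0, retention=0.8):
        b = math.log(retention, 2)   # retention=0.8 keeps 80% of cost per doubling
        return first_unit_cost * cumulative_units ** b

    for units in (1, 2, 4, 1024):
        print(units, round(unit_cost(units), 2))
    # 1 -> 100, 2 -> 80, 4 -> 64, 1024 -> ~10.7: constant percentage improvement
    # per doubling, i.e. smoothly diminishing absolute returns.
    ```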

    Shakespeare and Socrates, Newton and Descartes, are something more like heroes. They harvested a particular potential at a particular time, doing work that was integrated enough that it had to fit into a single person’s head.

    This kind of work can’t happen until enough prep work has been done to make it tractable for a single human. Newton benefited from Ptolemy, Copernicus, Kepler, and Galileo, giving him nice partial abstractions to integrate (as well as people coming up with precursors to the Calculus). He also benefited from the analytic geometry and algebra paradigm popularized by people like Descartes and Viete. The reason he’s impressive is that he harvested the result of an unified mathematical theory of celestial and earthly mechanics well before most smart people could have.

    At best, the exponentially exploding number of people trying to get credit for doing vaguely physicsy things just doesn’t have much to do with the process by which we get a Newton.

    But my best guess is that there’s active interference, for two reasons. First, Newton had an environment where he could tune out all the uninteresting people and just listen to the interesting natural philosophers, while contemporary culture creates a huge amount of noisy imitation around any actual progress. Likewise, Shakespeare was in an environment where being a playwright was at least a little shady, while we live in a culture that valorizes Shakespeare.

    Second, the Athenians got even better results than the Londoners (Aeschylus, Sophocles, Euripides, Aristophanes all overlapped IIRC) by making playwriting highly prestigious but only open to the privileged few with the right connections. This strongly suggests that the limiting factor isn’t actually the number of people trying. Instead, it depends strongly on society’s ability to notice and promote the best work, which quite likely gets a lot worse once any particular field “takes off” and invites a deluge of imitators. Exponentially increasing effort actively interferes with heroic results, even as it provides more raw talent as potential heroes.

    This suggests that a single global media market is terrible for intellectual progress.

  38. Robert Jones says:

    I find your position plausible, but there’s something weird going on here. How come, across all these fields, we increase the number of researchers at just the right rate to keep the line straight? Is there some mechanism which causes inputs to grow exponentially? Maybe because globally total inputs = total outputs, and constant growth implies exponentially increasing outputs?
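
    A toy version of that last mechanism, with made-up numbers, just to see the shape of it:

    ```python
    # Illustrative only: output grows 2%/year and researchers are a fixed 1% share
    # of output, so researcher numbers grow exponentially "for free".
    growth_rate = 0.02
    output0, researcher_share = 100.0, 0.01

    for year in (0, 25, 50, 75, 100):
        output = output0 * (1 + growth_rate) ** year
        print(year, round(output, 1), round(output * researcher_share, 2))
    # Constant-percentage growth in output delivers exponentially growing research
    # inputs by construction, which would keep the straight lines straight.
    ```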

    • Aftagley says:

      This question only applies if you consider the number of researchers to be a necessary factor in the pace of discoveries. If you hold the position that the majority of the discoveries are made by a core group of actual limit-pushers, then it’s almost irrelevant how many total people enter the field.

  39. vV_Vv says:

    A lot of people asked paper author Michael Webb this at the conference, and his answer was no. He thinks that intuitively, each “discovery” should decrease transistor size by a certain amount. For example, if you discover a new material that allows transistors to be 5% smaller along one dimension, then you can fit 5% more transistors on your chip whether there were a hundred there before or a million. Since the relevant factor is discoveries per researcher, and each discovery is represented as a percent change in transistor size, it makes sense to compare percent change in transistor size with absolute number of researchers.

    I don’t buy this.

    Let’s say that a new type of plastic allows us to make toy cars 5% larger at the same cost. Does it imply that we can also make real cars 5% larger? A toy car is one order of magnitude smaller than a real car; a modern IC transistor is a thousand times smaller than an IC transistor from the 1970s. I’d expect that lots of technical innovations at a certain scale become obsolete at a different scale. Going further back, technical innovations for vacuum tubes were made obsolete by transistors, technical innovations for electromechanical relays were made obsolete by vacuum tubes, and so on.

    More recently, in the last 5 years, the resurgence of neural networks made lots of innovations in computer vision, speech processing, machine translation, automated game playing, etc., obsolete.

    I think it’s improper to compare absolute numbers of researchers with growth rates. If you compare absolute numbers of researchers with absolute achievements (e.g. number of transistors, number of chemical elements, human lifespan) you’ll find that certain fields are still immature and have lots of low-hanging fruits, while others are mature and have run out of low-hanging fruits.

    • moridinamael says:

      Yeah, this hasn’t been my experience in R&D at all. It’s not even coherent to talk about a “single discovery” improving any metric by a “certain amount”. All design optimizations, from new chip designs to new wing, drill bit and boat hull designs, are complex, multivariate changes that jump around the design space in sometimes very surprising ways.

      When they started incorporating modern composite materials in airplane design, they didn’t just swap out the old parts with new composite parts, dust off their hands, and enjoy the fixed 10% fuel efficiency improvement or whatever. They had to subtly redesign every aspect of the aircraft to take advantage of the new materials. The same is even more true if you look further back and consider the introduction of jet engines.

      Perhaps the simplest things with the fewest variables do lend themselves to this kind of “fixed improvement” model. Like, what are the optimum quantities of other metals to introduce into a steel alloy to obtain an alloy with properties XYZ.

      Maybe I’m wrong. Maybe he could give me a list of examples of specific discoveries that led to a flat, comprehensive 5% improvement in {metric} across the board. That’s not my sense of how engineering works, though.

  40. Doug S. says:

    Logistic curves (S-curves) look exponential, until they don’t. As the saying goes, if something cannot continue forever, it will stop.
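
    A quick numerical illustration (arbitrary parameters, my own sketch): for most of its early life a logistic curve is nearly indistinguishable from the exponential it shadows.

    ```python
    import math

    # Logistic vs. pure exponential with the same growth rate r: early on they track
    # each other closely; the logistic then bends over as it nears its ceiling K.
    def exponential(t, x0=1.0, r=0.35):
        return x0 * math.exp(r * t)

    def logistic(t, x0=1.0, r=0.35, K=1e6):
        return K / (1 + (K / x0 - 1) * math.exp(-r * t))

    for t in (0, 10, 20, 40, 60):
        print(t, round(exponential(t)), round(logistic(t)))
    # Up to t ~ 20 the two are nearly identical; by t ~ 40 the exponential has shot
    # past the ceiling while the logistic is bending over, and by t ~ 60 the
    # logistic has flattened out just below K.
    ```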

  41. The Nybbler says:

    I tend to think #1 is a large factor, and it combines with #2 to make a worse one: A lot of these new researchers are probably actually counterproductive, creating and operating the #2-related barriers (someone’s gotta run the IRBs!) which slow the good researchers down, publishing nonsense or banal papers and wasting people’s time that way, etc.

    On the other hand, there’s also the factor that the number of “researchers” may be miscounted. If you look at computer science, you can see there are a LOT more people in the field than there were in Dijkstra’s time. But counting all of them as researchers who should be expected to move the field forward is obviously wrong. Even counting all the Ph.Ds is wrong. A lot of them aren’t doing cutting-edge work, but working on applications and expansions of existing work. These may be practically useful but they don’t drive the field “forward” in the same sense. (On the other hand, there are also a lot of CS Ph.Ds writing bad papers describing well-known or obvious techniques in novel terms.)

    • Murphy says:

      Relevant xkcd: https://xkcd.com/664/

      There’s also the support staff: Bob, destined to be the ultimate genius of the field, now doesn’t need to worry about gluing together his own hacked-together software to handle his data, because a team of a dozen little worker-bee types have written a nice suite of software tools to do it for him, and he gets to spend extra months of his life working on his actual area of interest.

      I remember an academic who had great metrics. Most of his work was middle of the road.

      But at one point he made a little program that converted between 2 finicky file formats where the conversion was a common issue in the field.

      Literally everyone in his field used his program and he got cited ~30,000 times.

      Was he a genius who changed the world? no.

      Was his work valuable? Absolutely. The lives of every researcher in the field were made a little easier thanks to him and likely errors were avoided in thousands of papers.

      • arbitraryvalue says:

        I agree with you. (But I object to being described as a worker bee!) The difficulty of discovery scales with how much has already been discovered, so making the existing knowledge useful is a major endeavor.

        For example, my boss’s job is (among other things) to keep up to date with cutting edge developments in our field. Then, when he sees that one of these developments is gaining widespread acceptance (so it isn’t really cutting edge by that time) he will tell me to implement it in a way people unfamiliar with our field can use. So I’m not doing cutting edge work. I don’t even usually know what the cutting edge work is; I’m busy doing my job. But I help make it so that discoveries in my field are put to good use in other fields. The researchers in those fields don’t cite me (they can’t – I don’t publish papers) but they pay my company for the software I write, and I go home feeling satisfied that I have contributed to scientific progress.

  42. Jiro says:

    As I posted the first time, there was an article in Analog Science Fiction in the 1960s which extrapolated the speed at which men were able to travel. It unironically predicted faster than light travel by the 1990s or so.

  43. Ketil says:

    This seems to be about the same thing? Apologies if it was already mentioned.

    https://www.theatlantic.com/science/archive/2018/11/diminishing-returns-science/575665/

  44. gbdub says:

    What percentage of “researchers” are actually working on the specific problem of “make transistors smaller”? Or “produce more tons of soybeans per acre”? The thing is, “growth” in these fields isn’t one-dimensional – the fields get broader too.

    “Transistors are too big” may have been a key problem 30 years ago, but the smaller they get, the less practical benefit you get from making them incrementally smaller again, and the more you get from things like decreasing power usage, or improving chip cooling, or integrating the chips into new things like Echo Dots and toasters or whatever. The old problem still exists, but as the field broadens there are more directions in which to progress.

    “Too few soybeans” may have been the key problem in the past, but now the problem is “how do I produce the same soybeans with less water and fertilizer and pesticides?” Again, new researchers are pushing down all these new branching paths and “soybeans per acre” no longer adequately describes growth in the field (ha).

    Another simple example: Cell phones have stopped getting smaller, and are now getting bigger again. Does this mean “scientific progress” in the portable telephone field has reversed? Of course not – it just turned out that there was no practical benefit to going much below “pocket sized” and now the measure of growth has become how powerful they are, how long their batteries last, how good their cameras are, etc.

    Another: Passenger airplanes aren’t any faster than they were 40 years ago. Some of this really is “low hanging fruit” effects. But some of it is just that going supersonic breaks windows and anyway isn’t very cost effective, people aren’t willing to pay exponentially more to shave a couple hours of time off the London-NYC route. Does this mean airplanes aren’t progressing? No, they are safer and lighter and go longer between breakdowns and are much more fuel efficient. New metrics are the important growth markers.

    • LesHapablap says:

      Agreed. The graphs above are hardly measuring progress in a consistent way. Even if they were, they tell us nothing about the total number of ‘graphs,’ which is likely increasing exponentially.

      This is a bit like claiming that the home-cooking industry has stagnated because ribeye steaks, blenders and forks have had little innovation in the last few decades despite all the R+D spending at KitchenAid. It ignores the hundreds of new appliances and innovations that get designed, manufactured and improved, each with their own ‘diminishing’ log graph.

      • The Nybbler says:

        It ignores the hundreds of new appliances and innovations that get designed, manufactured and improved, each with their own ‘diminishing’ log graph.

        Most of them are useless or marginally useful and quickly fall by the wayside. The Instant Pot and the sous-vide are the new hotness, but I expect they’ll vanish too when the fad ends. There’s little in my kitchen that you couldn’t have found in the 1980s. I have a convection oven (rarely use the convection part), an over-the-stove microwave instead of one stuck in the wall, and the gas oven is nicer (those old quarter-turn controls were terrible). The non-stick cookware has been through a few incremental improvements as well. But nothing really revolutionary; the last of that was probably the microwave making it to homes. So I’d say the home cooking industry has pretty much stagnated.

  45. Majuscule says:

    I agree with Scott, and this whole debate reminds me of the concept of “Cliometrics”, which I encountered during a graduate historiography course. The basic idea is that you can look at measurable things in history for a more complete interpretation of events. My immediate reaction was that this was usually going to be absurd, since the data was so incomplete, inconsistent, impossible to interpret or just plain bad that whatever conclusions you reached would be profoundly flawed and full of hazards. Nonetheless, cliometrics got lots of interest, support and money, and the guys who started it got the Nobel Prize. Which is really what history departments crave more than anything: the same respect and cash that STEM fields have been getting. Ironically, the most lasting impact seems to have been to move the economic historians out of history departments and into the economics department. Which makes a bit more sense to me; historical conclusions are never going to be served as well by incomplete economic data, but economic theories might benefit more from being informed by historical accounts. Both can and should serve one another, but the expectations for each line of inquiry are somewhat different, in my own view.

    I was also reminded of the recent post describing how raising salaries for a new employee was unattractive because it implicitly forced wages up for every other employee, so offering one raise represents x times the investment for the employer. If you have one lab of 5 people producing impactful discoveries, you go “this is great! I will open 10 labs like this.” Never mind that “discoveries” is not really a meaningful or measurable thing, but you also need other specialists and staff; postdocs and secretaries and bottle washers and security guards. The relative complexity of new discoveries, often involving sophisticated equipment and maintenance, really represents the contributions of hundreds if not thousands of people. Scott gets at this point, but I want to emphasize how apples-to-oranges it is to compare what the Curies were doing basically in a shed 100 years ago to something like CERN.

    And how many of these people are being counted in the “meaningless” or “unproductive” scientist number, either because the “discoveries” don’t get ascribed to them or because their individual contributions were minimal or obscure? Or because their output failed to meet some (perhaps arbitrary) qualifying threshold for importance? I’m a taxonomist and thus I always want more specific definitions, especially where big fuzzy ideas are aspiring to be precise data points. I myself am a tech-adjacent person who sometimes got counted as IT, but on the company org chart I was under something like marketing or sales, as were dozens of other people key to the functioning of our tech stack. Without definitions for concepts like “scientist” or “discovery”, the terms and the metrics here are just too fuzzy for me to panic over.

    • Urstoff says:

      Moving economic history into the economics department seems to have led to the rise of a less rigorous (perhaps rival?) movement in the “history of capitalism” done by historians in history departments.

  46. Cerastes says:

    What strikes me is how similar a lot of this is to what’s discussed in the evolutionary history of life on Earth, especially multicellular species. Something new shows up (e.g. land tetrapods, flowering plants, jaws, insect flight) or some disaster blasts a huge hole that’s now open, and you have an explosive burst of speciation followed by a slowdown. Sometimes this is followed by continued dominance (Teleosts, birds); other times the lineage seems to “wind down” and what was formerly diverse and dominant is now represented by only a handful of taxa (elephants, crocodilians, coelacanths) or has died out entirely (Temnospondyls, Ichthyosaurs). Of course, mass extinctions polished some off, and those don’t seem to have much analogy in science fields.

    Many early diversifications are thought to be fueled by “key innovations” (powered flight, colonizing land, suction feeding, neural crest cells, eusociality), allowing the group to suddenly have a path to new resources or niche spaces and/or a leg up on previously dominant competitors. Eventually, though, they saturate the potential of this key innovation and are competing now with a mix of a) other species which weren’t vulnerable to being outcompeted once the key innovation occurred and b) their own sister taxa who also have the key innovation. Thus begins the long, slow trudge of less “sexy” evolutionary processes.

    Maybe the analogy is too clunky, but the idea of a “key innovation” followed by rapid diversification and eventual niche saturation seems like a good match for many fields within science.

  47. glorkvorn says:

    This makes me think of The Unreasonable Effectiveness of Mathematics in the Natural Sciences. We’ve had a huge amount of success using math to write down scientific principles. And it turns out that, not only does math work well for that, but that most of them can be described in relatively *simple* math, like a linear equation. Math was pretty well studied before the enlightenment and “age of science” really got started, so we had a lot of tools built up ready to be applied.

    Some science equations might look intimidating when you see them as a layman, but if it’s simple enough to write down on paper and for a (brilliant and well-trained) human mind to do calculations with by hand, then it’s still relatively simple math. We’ve been able to stretch that for a while now by using computers to handle calculations that are too hard for a human, but you can’t stretch that forever. Once you move past the problems that merely require a computer, you pretty quickly run into NP-hard problems that even a computer can’t solve in any reasonable amount of time. So we may just be stuck – if we can’t use *math* to make scientific progress, then I don’t know what we’ll do to go forward.

  48. Squirrel of Doom says:

    My hunch is that science has been homogenized and bureaucratized and is now, even world wide, to a first approximation, a department of the US Federal Government.

    As such, its productivity and innovation characteristics are asymptotically converging with those of the Post Office and the TSA.

    Yes, I’m exaggerating to make my point, but I think/hope the point is clear.

  49. Elisha says:

    As part of “the problem” (I’m a chemistry PhD student, adding to the exponential, and no world-changing contribution yet), I may have an inside view.
    A lot of the big research steps taken in the early 20th century were leaps into unknown and unfathomed fields. For example, the understanding of the quantum nature of atomic energy states was a key insight behind many, many discoveries – but since then, quantum researchers have used the tools made available to build calculations and models of more and more specific molecules. Understanding the orbital structure of anthracene could be really important (or not), but the splash will be smaller.
    Another example is polymers. The first discovery was that it’s even possible to make molecules so large. Then the first ones were made, and everyone went bananas over Bakelite. Nylon and Teflon still made an impressive splash (though they were non-academic discoveries made by DuPont), but polymer discoveries now are about more obscure phenomena like conducting polymers (in your phone’s screen), isotactic polypropylene (in your microwaveable plastic box), or stuff that may have an application if the next two PhDs on the subject go well… which of course they never do, because making discoveries is actually pretty darn hard.

    • Shion Arita says:

      As a recent chemistry phd, I’ll add a bit of inside view of my own.

      I agree that most things do follow that pattern of an initial discovery and then diminishing returns in the follow-up projects.

      What interests me the most about it is that the big initial discoveries aren’t really harder to do than the follow ups in terms of analytical, logical, and procedural work. But they do take a lot more creativity, which to me seems to be the limiting factor for a lot of things.

      So essentially, I think that it can be an interesting combination of 1 and 2 from the article. It requires an exceptional amount of creativity to do the next big thing, and the current system is not supporting that. There are more people creative enough, thanks to population growth, but most of them aren’t able to really do what they’d need to do:

      Essentially, it’s happening because while we keep having more and more people, we don’t really have more Lord Kelvins who are able to play with state-of-the-art technology and just try to figure things out and let their thoughts carry them where they may. This is because the tools you need to look at the things that are on the forefront of knowledge are a lot more specialized and expensive. Kelvin learning about temperature took barrels of ice in his back yard. Looking at the orbital structure of anthracene requires spectrophotometers and NMR machines. I bet if everyone who wanted to could have a fully equipped chemistry lab in their house and free rein to do whatever they wanted with it, there would be a hell of a lot more progress.

      Basically, the subtle way I disagree with 3 is that I don’t think the ‘low-hanging fruit’ has all been picked in an intellectual sense – it’s just that it takes more economic resources to pick it now, not intellectual ones.

      That’s why we’re seeing insane explosive growth in the arts: look at how many works of literature, music, art, etc. are coming out now. I bet the growth there is still positive, or at the very least neutral, even when you control for the number of people. And the reason is that the tools required to do it are still available to the individual.

  50. John Schilling says:

    Does it matter whether you have five or fifty or five hundred true first-rate researchers working on a problem, if it’s highly likely that the next step will come from one of the five obvious paths forward from the status quo and each of those paths takes two years to cogitate and calculate and test? You get the next step in two years regardless, the only difference is how many also-rans don’t get their name on the paper.

    Or maybe they do get their name on the paper, if your model is an institute where ten people do the same thing at the same time and any benefits from parallelization are consumed by the latency of cross-communication.

  51. Ghillie Dhu says:

    It doesn’t really fit into any of 1-3, but something like Amdahl’s Law supports Scott’s null hypothesis of diminishing returns.
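
    A rough sketch of what that looks like (toy numbers of my own, not from the paper or the post): if only a fraction p of a field’s work can be split across researchers, Amdahl’s Law caps the total speedup at 1/(1-p), no matter how many people are added.

        def amdahl_speedup(p, n):
            # p: fraction of the work that parallelizes across researchers
            # n: number of researchers working on it
            return 1.0 / ((1.0 - p) + p / n)

        for n in (1, 10, 100, 10_000):
            # with p = 0.9 the speedup tops out near 1 / (1 - 0.9) = 10
            print(n, round(amdahl_speedup(0.9, n), 2))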

  52. John Schilling says:

    Also, the problem / opportunity of Big Science may be involved here, though not sufficient to explain the whole set of observations. From Manhattan to Apollo to the Human Genome Project, and every generation of chip design at Intel et al, we’ve learned that you can force a scientific or technological advance to manifest ten or twenty years before its time by throwing a gobsmackingly huge amount of money at it – e.g. hiring lots of scientists in related fields and redirecting their work towards very specialized tool development supporting the work you’re really interested in.

    You’d get those advances in due time anyway, when the necessary advances are made in other fields without being forced and when the necessary tools reach the market at non-exorbitant prices, see again the Human Genome Project vs modern genetic sequencing. But once you’ve pushed into Big Science territory in a field, you’re stuck with having to pay gobsmackingly huge amounts of money for every new development, or having to accept ten or twenty years with no new developments at all.

    If we are seeing fewer advances and more $$$/advance, that would be consistent with running up against that wall and pulling back somewhat. And from the POV of the human race as a whole, that might be a good thing – the results of Big Science are often exceedingly difficult or expensive to turn to practical advantage in the short term (see again Manhattan, Apollo, and the HGP) when we could get to the same place in the long term at less cost. From my own personal POV, there are some advances that I’d really like to see in the next twenty years or so and am therefore not favorably inclined to a generation of stagnation.

    • Rob K says:

      My impression was that the huge investment of the human genome project was in some significant part responsible for getting affordable gene sequencing to market on the timeline that we’ve seen. Is that not the case?

      • Andkat says:

        The HGP was to my understanding the driving force and incubator for the technological innovations that have enabled the dramatic drops in sequencing costs that have in turn paid off enormously for the wider practice of molecular biology, for which sequencing is now a perfunctory and relatively inexpensive task.

        Moreover, the reason why ‘Big Science’ exists is that fundamental problems tend to get harder the deeper you go, in terms of the nature and volume of data necessary to make any sort of theoretical advance and refine applications into a coherent science. Identifying that (archetypal) enzymes are proteins, showing that organic molecules can be produced independent of biochemistry, and characterizing the base ratios of DNA were foundational breakthroughs half a century to a century and a half ago, yet they required indescribably less precision, resolution, and complexity of data than, say, solving the protein folding problem, generating catalysts that compete favorably with their biological counterparts, or accurately modelling the transcriptional programs of entire cells; very often even the nature of the data truly necessary for a good empirical model is ambiguous. Advances in and applications of technology have often largely served just to keep pace with (and have in many cases been driven by the need to handle) the growing complexity of the problems whose solution would constitute a real advance in understanding the bigger picture. At the end of the day, what is gating high-energy physics is likewise less a lack of creativity or theoretical insight than the fact that the energy regimes necessary to answer questions about more fundamental processes tend to require multibillion-dollar instrumentation and collaborations spanning hundreds of scientists (biology and many other fields also have ample issues with competitive redundancies, as previously addressed – although the example of the Superconducting Super Collider and the LHC shows that colossal waste from petty competition is not a problem high-energy physics has managed to avoid…).

        Indeed, the internet, automated paper and citation handling software, and word processors allow me to collate and analyze papers vastly more readily than the professors who trained me, who had to go down to the basement of the library, painstakingly trace back references, and pull volumes of publications out of dusty shelves and organize all of their notes and conclusions manually. Unfortunately, the sheer volume of information available to me both directly in my field and in the fields that serve as fundaments for and applications from it has vastly increased as has the amount of background needed to effectively plan new approaches, the number of distinct specialized experimental methods available, etc., so I am left little better off in terms of my state of mastery over the literature.

  53. dark orchid says:

    I agree with the gist of the post, but some of the graphs are hiding what’s actually going on. If you measure progress in an area by the way progress was defined when the area started out, you won’t notice if it goes off in a different direction.

    Take the trains graph: the French got a TGV up to something like 520 km/h on a test run once. But in regular service they run at “only” 300-ish km/h, because above a certain speed everything from energy consumption to wear on the rails, wheels, wires and everything else increases superlinearly and it becomes uneconomical.

    Trains have however continued to produce more “people-kilometers per day”, to the point that the plans in some countries (above all Switzerland) expect an exponential increase in this transport metric to continue, even without trains getting any faster. But there’ll be more of them, closer together.

    The metric of “fatalities per person-kilometer” has also gone down massively, although it’s an engineers’ truism that getting something from 90% reliable to 99% reliable is about the same effort as from 99% to 99.9%, or if anything the latter is harder.

    Cars too – fuel consumption increases quadratically with speed, all else being equal I think? But they’ve become a lot safer, more reliable and especially, more energy efficient. There’s very much still progress in these industries, but most of the researchers are not working on “make them go faster” – in Britain at least, the speed limit is 70 mph and the average car can easily do 100 mph, so that’s a stupid thing to optimise for.

  54. HeelBearCub says:

    Is some of this just nomenclature and classification?

    John Harrison (inventor of the first practical marine chronometer) was a carpenter. William Smith (who created the modern field of geology) was essentially a civil engineer who specialized in clearing water from coal mines. Lewis and Clark were “explorers”, not “scientists”.

    How many of the people classified as “engineers” at Intel are actually intent on discovering the next chip design? How many employees of Thomas Edison would we now count as scientists?

  55. sandoratthezoo says:

    It’s kind of baffling to me to see someone make a pretty compelling case for “science inevitably slows down after an initial burst because the low-hanging fruit gets picked,” and then go on to say, “But we’ll probably have genius-level AGI in 30 years, so we’ll be able to instantiate 100 billion geniuses!”

    You don’t think that maybe statement A casts doubt on statement B?

    • Alex says:

      Yes. At around that point in the article I thought “oh well this started out interesting but it is going to turn into one of these ‘my man Eliezer is really smart’-pieces, isn’t it?” And I was not disappointed, token-quote and all.

  56. benwave says:

    “Doesn’t my position imply that the scientists at Intel should give up and let the Gods Of Straight Lines do the work?”

    I would argue no, because Intel’s goal is not to advance integrated circuit technology, but to capture the value from that advancement. Even if its effort does not result in faster progress, its effort does result in Intel capturing more of that value.

  57. tscharf says:

    AMD’s Ryzen-based Epyc processor had 19.2 billion transistors in 2017. We are already at that threshold. A trillion isn’t very far away given the trend: a few decades.
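
    A back-of-the-envelope check on that extrapolation (the doubling period here is an assumption; the historical ~2-year cadence has arguably been stretching out lately):

        from math import log2

        current = 19.2e9   # transistors in a 2017 Epyc package
        target = 1e12      # one trillion
        doublings = log2(target / current)   # about 5.7 doublings
        for years_per_doubling in (2, 3, 4):
            print(f"{years_per_doubling} yr/doubling -> "
                  f"~{doublings * years_per_doubling:.0f} years to a trillion")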

  58. akarlin says:

    3. All the low-hanging fruit has already been picked.

    This is the mechanism I describe in Apollo’s Ascent, my theory of technological progress.

    TLDR: The rate of technological progress depends on the number of literate people above some IQ threshold (with the highest-IQ people being most productive); plus you need some lesser number of smart people just to maintain technology at a given level. Consequently, as technology improves, this threshold keeps getting pushed further to the right of the bell curve, which – all else equal – results in the technological growth rate slowing down to near zero. (This is what happened by the Hellenistic period in the Classical Mediterranean world).

    However, there are several ways in which this “slowing down” pressure can be countered: (1) Development of technology that can be used to generate more technology with the same investment of human capital – for instance, paper, or eyeglasses – this is just a specific subset of technology; (2) Increasing the literacy rate, which increases the pool of people able to engage in technology production – below some low level, there’s very little even the world’s smartest subsistence peasant can achieve; (3) Increasing the population and number of potential innovators, which happens by increasing the carrying capacity of the land – which is itself also a function of technology; (4) Increasing the average IQ of the population through eugenic reproduction, e.g. through Clarkian-Unzian selection (which happens by default in many Malthusian societies), or more targeted means; (5) Improving institutions for technology production; (6) Relevant to today – it is more profitable for “laggard” countries that have scope for catching up, such as China, to focus intellectual energy on technological convergence as opposed to innovation; hence all the copying and industrial espionage they do – as China converges to First World status, we can expect its per capita science generation to converge upwards to the level of Japan/South Korea.

    I am significantly more skeptical about the technological outlook today than many transhumanists. “Technology of technology” is progressing slowly – perhaps Sci-Hub was one of the biggest real innovations in this sphere in the past decade, LOL. Though AI may yet create radical improvements in this sphere. Literacy is maxed out in the areas of the world where it is relevant (namely, the high-IQ countries). As mentioned above, Chinese economic convergence will inject a great deal more high-IQ innovators into the global scientific pool, though I wouldn’t expect a miracle from them – East Asian countries, despite higher average IQs, are 2-3x less scientifically productive than Anglo-Germanic ones. Institutional quality is largely maxed out. Meanwhile, global IQ isn’t rising, at least in the countries that matter – to the contrary, it is falling fast due to dysgenic reproduction rates (as well as Third World immigration, though perhaps that’s not so critical, because it doesn’t have a direct effect on the quantity of the smart fraction). Meanwhile, the problems that have to be solved keep getting harder and harder.

    This is why I have predicted that if there are no “transformational” technologies that massively augment global civilization’s elite intellectual potential during this century, such as machine superintelligence or a “biosingularity” such as widespread genetic editing for IQ (I am a fan of the latter since it carries less existential risk), then technological progress will grind to a halt, “breeders” will come to dominate the population pool, and there will be a renewed demographic explosion to whatever the limits of the industrial economy are. This will usher in what I call the Age of Malthusian Industrialism.

    • Slicer says:

      We’ve needed a massive civilization-wide focus on engineering smarter people (as well as keeping alive and productive the ones we have) for quite some time now, but the will just isn’t there. Too much Hollywood, too many “ethicists”, too few people actually willing to acknowledge that there’s a problem and that we ought to do something about it. Have you seen what colleges have started becoming famous for these days?

      We’ve got the keys to paradise at our fingertips, and we’re choosing Hell.

      To the X-arians and Y-ocrats reading this, save your “ism”s; if we treated stupidity like a genetic disease, we’d have something resembling utopia in a hundred years.

      • Nornagest says:

        I’ve met too many smart people to believe that engineering out the dumb ones would give us utopia.

        • Edward Scizorhands says:

          Usually it starts with “get rid of the people who aren’t like me.”

          • Slicer says:

            There’s stupidity as defined by belief structures and life priorities, and then there’s stupidity as defined by neurogenesis, brain wrinkles, and total brain weight/volume.

            I propose increasing the second, and if that leads to the first turning out utterly unlike what I had expected and wouldn’t like, so be it.

      • Mark V Anderson says:

        We’ve got the keys to paradise at our fingertips, and we’re choosing Hell.

        Do we know how? Or are you just talking about selective breeding? I suppose if we took a very totalitarian position on breeding, we could probably raise IQ a std dev in a generation or two. Although I can’t imagine the benefits of higher intelligence would outweigh the costs of living in an unfree society.

        Edit: And of course the risk of the powers that be doing much more destructive things once they control breeding is also pretty scary. One can look at the great things one could do on a collective basis; but that doesn’t mean we’d get anywhere near that vision if we tried to do it. We have the Soviet Union as an example of such a vanguard.

        • Edward Scizorhands says:

          Even though I’m a big believer in a lot of un-PC genetic theories, if society were to undertake such efforts through policy, we would do it horribly wrong.

          This is true even if we were to block the traditional racists (as defined by, say, The Atlantic) from having any say in the process.

        • MTSowbug says:

          It is not necessary to control people, through selective breeding or otherwise, to engineer smarter people. Given the technology and given the choice, many people will choose by themselves to engineer their children to become smarter. Many parents already seek to give their children every possible advantage, whether that be the best schools or the best nutrition. Genetic engineering is just one more such advantage, just with higher risk and higher benefit.

          As long as society doesn’t actively choose dysgenics, genetically engineered intelligence should creep upwards over time. Right now, each and every genetic change to a human is an unknown risk, but the risk decreases with every success story. As risk drops, the number of parents choosing genetic engineering should rise. Furthermore, germline changes persist across generations, allowing the small number of risk-tolerant early adopter parents to have cascading consequences on the population.

  59. Error says:

    Perhaps most fields’ tech trees are narrow and deep.

    If discovery C depends on discovery B, which depends on discovery A, and each link requires a minimum of X hours of thought from a really good researcher to figure out, then you’re not going to get through N links in less than X·N hours, no matter how many researchers you add or how good they are. The process isn’t parallelizable, and you can’t make brains run any faster. The best you can do by adding researchers is to get the average discovery time closer to that X·N floor (by increasing the probability that some good researcher has exactly the right set of thoughts right out of the gate).

    If most discoveries in a field have this character — if there’s a serial chain of important discoveries that “unlock” new tiers of Stuff — then there would be an upper bound on progress in the whole field. Progress would be linear because the speed of serial thought is constant.

    Whether that’s true or not I have no idea. If it is, I think it’s probably also an argument in favor of a hard AI takeoff (via progress suddenly getting unmoored from the limitations of human processing speed).
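
    A toy simulation of that serial-chain picture (all the numbers here are invented for illustration): each link has a hard minimum time X, extra researchers only help by making it more likely that someone hits that minimum, and the chain as a whole can never finish faster than X·N.

        import random

        def link_time(k, floor=2.0, extra_mean=3.0):
            # k researchers attack the same link in parallel; the link is done
            # when the fastest of them finishes. Each needs at least `floor`
            # time units plus some random extra thinking time.
            return min(floor + random.expovariate(1.0 / extra_mean) for _ in range(k))

        def chain_time(n_links, k):
            # Links are strictly serial: link i+1 can't start until link i is done.
            return sum(link_time(k) for _ in range(n_links))

        random.seed(0)
        for k in (1, 10, 100, 1000):
            avg = sum(chain_time(10, k) for _ in range(200)) / 200
            print(f"{k:5d} researchers per link -> ~{avg:.1f} time units "
                  f"for a 10-link chain (floor is 20)")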

  60. benjdenny says:

    This is a quibbly question: can “all the top whatever” literally be all of anything? As far as I know, you mean either the single fastest marathoner (he’s the top, after all), or every runner on earth, or just every East African until you hit the first white guy.

  61. Maganik says:

    Personally, I don’t think the answer is (1), (2), or (3), but I think (1) is closest. I think it’s something much more fundamental to discovery and learning that effectively locks everyone into a linear rate of progress: in order to make a discovery or progress in a field, you have to have a model, a system of priors, which is good enough to work around the edges and discover new things, but those very discoveries and advances disrupt the model everyone is working from. Human corporate knowledge can only assimilate change at a certain rate regardless of how much money or how many scientists you throw at it, and pretty much all these straight line graphs hint at this regardless of whether the field of knowledge in question is making microchips, trains, new elements, or money.

    In other words, it’s not the fault of the parts, it’s the system, and the system seems pretty robust.

  62. JASSCC says:

    I have another suggestion: when a general trend of improvement becomes obvious, it attracts more people to a field even though more people often cannot accelerate the trend, even in theory: the tempo of improvement may be due more to extrinsic factors that delay implementation of the next step than a want of ideas as to what to do next.

    So what do the other newcomers to the field do? They prove their value somehow by taking on ancillary tasks that, while also valuable, do not directly contribute to the trend of improvement. For example, they can work not directly on doubling the number of transistors but on producing chips from novel materials that may save costs. Or that better dissipate heat or run with less power, which might eventually contribute to doubling, but which has immediate energy savings benefits. Or on producing particular kinds of improvements that better allow for special-purpose chips, or chips that are backward compatible in some important way with older designs. Or on making chips with fewer defects, or that can be manufactured with less complicated machines, or in less stringent environments. All such improvements could improve the bottom line for the manufacturer *without helping to shrink the feature size*.
    But such improvements become ever more appealing the more advanced chip-making technology is, as computers grow asymptotically towards ubiquity.
    In other words, the measurement of progress by benchmarks of achievement blinds us to less salient forms of achievement, and makes them seem like a distraction.

    • AG says:

      Yeah, materials science is almost uniformly that. Someone above talked about how designs had to change to accommodate composite materials in airplanes. But also, what quantitative measure can we look at to see the impact of including composite materials in airplanes? There’s no huge overhaul in the various engineering fields involved (Materials, Aerospace, Mechanical), because the actual consequence is in the money saved on fuel efficiency, which perhaps trickles down to ticket prices.

      • JASSCC says:

        Hence the usefulness of using money as the measure of the value produced: by design, revenues should capture any and all improvements, not just those you happen to be measuring for. Of course it’s highly imperfect, but I think it has to be better than picking particular technical benchmarks.

        Also, something I should have mentioned earlier — any particular exponential-looking technical improvement is often the steep part of a sigmoid heading to saturation or an asymptote for some other reason. But improvement of the material conditions of human existence consists of the collection of innumerable such individual types of improvements. So at any given time, some of them are taking off, and some are slowing down. If we only look at those that have been famously successful in the recent past, we are apt to catch them as they are finally slowing down.

  63. knockknock says:

    People here are looking at Scott’s question from a vantage point of impressive expertise in many cutting-edge fields. But how about looking at it for a moment from the viewpoint of the typical average man-in-the-street?

    Suddenly transport the typical 1880s American to 1950. Take the typical 1950s man or woman to 2018. Which one would be more shocked and amazed? Even with all our chips and devices, I’m guessing the 1950s person would take things mostly in stride today while the 1880s person would doubt his sanity at the sight of circa-1950 electricity, phones, cars, planes, movies, TV, radio and the A-bomb.

    The 1950s guy could probably get in my 2017 Subaru and figure it out soon enough (except for re-setting the damn clock). Give the 1880 man-on-the-street a 1950 Chevy and he’d have to wait for the next streetcar. But you could explain how he could take the subway — opened in the early 20th century.

    This of course speaks to the most obvious applied technology that an average consumer can see, instead of the deep and quiet ongoing explosion in human knowledge, so make of it what you will. If one of these guys needs medical treatment or wants to start a blog he’s clearly better off here in 2018 than 1950.

    So a sidebar question to “Is science slowing down or speeding up?” would be, “Are improvements in the general standard of living slowing down or speeding up?”

    • Lambert says:

      A car is a pretty specific example of a technology where we optimised ‘how to drive’ pretty early on.
      Telephony, OTOH…

    • That’s an excellent point. Reminds me of something I’ve learned of in my current work which has brought with it a lot of fresh learning about modern developed-world agriculture.

      Up in Scott’s post there are two graphs showing improvements in crop yields since 1960 in corn and soybeans. But if you zoom out a bit the picture changes in a pretty astonishing way. Your 1880s farmer could walk right in and take charge of a 1950s farm. From then to now though — not so much!

      It turns out that from the establishment of row-crop corn farming in the Midwest, which also happens to be when decent general statistics began (the Civil War era, plus or minus), a solid yield was in the ballpark of 30 bushels per acre. That didn’t materially change for several generations: the Iowa or Illinois farmer in 1940 was getting about the same yield that his father and grandfather and great-grandfather had been used to.

      But then look what’s happened:
      https://www.agry.purdue.edu/ext/corn/news/timeless/yieldtrends.html

      Today’s Midwestern farmers don’t even start bragging to each other unless they’re getting 200 bushels an acre. I personally know a couple who have hit 250/acre in recent years. So they’ve multiplied their productivity, from the exact soil that their great-great-grandfathers tilled, times 6 or 7 in the space of basically two generations. There are similar charts for other crops.

      There are several specific changes of varying levels of tech which account for that modern miracle. Some of them have unwelcome side effects that some of us are trying to find practical ways to un-do. [Google “Gulf of Mexico dead zone”.] And there are other tradeoffs too, which my friends and colleagues doing “food system change” policy work will be glad to explain for as long as is necessary to make you want to kill yourself in shame. [I kid, I kid…partly.] None of that detracts from the astonishing fact of just how much more productive this basic human endeavor which is thousands of years old has been made in an insanely short time.

      Will that burst of productivity growth continue on the farms? Could it, should it? Damfino really, but simply extending that recent curve outward as a standard that should be maintained seems arbitrary at best. Insert here some analogy to “punctuated equilibrium” perhaps?

      • acymetric says:

        Before reading this post my intuition already pointed this way; this just tips the scales further.

        I think the person from the 1880s would be more initially shocked by new things, but also more able to adapt to the new lifestyle, as they probably have skills that could still be useful.

        The person from the 1950s would still be pretty surprised, but would probably also find that there was almost nothing from their 1950s life that would translate to a usable skill in 2018 without significant training. It would take some luck for them to fall into a situation where they could get a highly unskilled job, even moderately or low skilled trades like low end manufacturing work would be out of the question.

        • The Nybbler says:

          The person from the 1950s would still be pretty surprised, but would probably also find that there was almost nothing from their 1950s life that would translate to a usable skill in 2018 without significant training.

          The building trades haven’t changed THAT much, though some training would be required. Same goes for warehouse work and retail. I don’t know about factory work, but the biggest issue would be that there’s less of it. Don Draper, however, could slot himself right in. Sure, he couldn’t run the computers, but he’d have people for that.

          • acymetric says:

            What people? He would have to get the job first. Have you tried to get a job without using the Internet and with no personal/business connections lately? Tough row to hoe indeed.

            If we’re assuming the person is transported into a life with a cushy job then sure, they’ll be fine. I’m assuming these time-travelers have to start from scratch, or at least from whatever meager means they were able to travel with on their person.

    • The Nybbler says:

      Jules Verne was writing in 1880, and more to the point he was widely read, so I think you’re underestimating the 1880s citizen. At least if he’d read Verne.

  64. spineback says:

    Progress on steam power has slowed right down – but that doesn’t mean science is slowing down. Stone clubs also have remained fundamentally the same for several thousand years.

    Compare the rate of advancement in the initial periods of old industries (eg synthesising chemicals) with the rate of advancement in the current new industries (eg creating a decentralised currency)… They’ll be pretty similar.

    Science isn’t slowing down, it’s just working on different stuff.

    • The Nybbler says:

      Progress on steam power has slowed right down

      Hardly. Both temperatures and pressures have been going up, mostly because of better materials.

  65. deciusbrutus says:

    How much do you trust your metric? Is “Reciprocal of transistor size” really the most direct measurement of some kind of technology? Is “Reciprocal of number of acres needed to grow a fixed amount of crop” really the best measure of farming technology?

    Why not “R&D labor required to reduce labor requirements by 10%”, either in a given field or across all fields? Instead of looking at the speed of trains, look at the man-hours of labor required to cross the country on one (amortizing what capital costs would be over the lifetime of the improvement, using some asspull math).

  66. mrthecabinet says:

    Not only is the population increasing (by a factor of about 2.5x in the US since 1930), but the percent of people with college degrees has quintupled over the same period.

    The number of people with college degrees is a completely meaningless statistic. It’s like comparing the rate of progress to the percentage of people with good hair. College degrees are not fungible. A person who gets a degree in (whatever field you think is stupid bullshit) is not interchangeable with a person who gets a degree in (a field that is difficult). No number of English majors will ever generate a microprocessor.

    • acymetric says:

      I’m not sure that is necessarily true. I can certainly imagine that someone without a “formal” engineering education could learn enough to build microprocessors. It isn’t a stretch to imagine some of those people having degrees in other fields. Of course, engineering has some self-imposed restrictions with PE licenses and such which complicates it a bit. I certainly know people in other engineering fields that have a liberal arts degree rather than an engineering one, and it seems almost common for software developers.

      I don’t disagree with your general point (that number of college degrees doesn’t mean much) but that is because it is true for all degrees (there are plenty of people with engineering degrees who would not be able to develop a microprocessor too).

      • Mark V Anderson says:

        Yes his general point is correct, even if his specific one not so much. Does anyone need a college degree to do anything? It just takes more training if you don’t have the pertinent education. And I think we’ve all heard that lots of college educated folks get jobs pretty far from their college field. Usually I hear it isn’t that much harder for them to succeed than those that get jobs in their college field. I think that’s the point.

        • idontknow131647093 says:

          I think it isn’t just about what degrees people get (although I think that does matter), it is also WHO is getting a degree. Just because a schoolteacher in 2010 has a BA & MA and a schoolteacher in 1930 might have had just a HS education doesn’t mean they are different. The 1930 teacher was probably about as good and came from a similar percentile on the bell curve; he/she just had fewer credentials.

          The increase in degrees is more about credential creep than an indication that more intelligence is being generated. It’s true that it’s good to get every genius-level kid out of a sweatshop and into a place where they can be more productive and creative, but giving a janitor a college degree generates no value for the world, and it doesn’t qualify him for being anything more than a janitor, even if you make him do coursework.

  67. knockknock says:

    Here’s another way to measure progress: look around your typical middle-class home. Take someone from 1880 to 1950 and they’d be astounded by electricity and all its appliances, the heating, plumbing, the phone, TV and radio, not to mention the car out in the garage. The face of daily life changed vastly in those 70 years.

    Take someone from 1950 to 2018 and aside from the wifi it’s not that much of a change, just a matter of degree. Of course, your Alexa might creep them out a little.

    The amazing scientific progress of today (e.g. mapping the genome) is more subtle and in the background. Not “Holy s###!!” moments like the 1880 guy seeing a 1950 Lockheed Constellation cruise overhead.

    One little thing that might knock the socks off a 1950 person: the ATM

    • woah77 says:

      I don’t know, Cell Phones probably would knock the socks off anyone from even 30 years ago. “I hold in my hand a computer more powerful than the Apollo Project computers, with data speeds in the megabytes a second, able to access the internet from most anywhere I could care to. I use it to talk to my friends in other countries and look at funny pictures.”

      • Lambert says:

        >“I hold in my hand a computer more powerful than the Apollo Project computers”

        According to some Fermi calculations, it shouldn’t take more than two hands to hold more computing power than the entire world had in 1969.
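
        One very rough version of that Fermi estimate; every figure below is my own order-of-magnitude guess rather than a sourced number:

            # All inputs are guessed orders of magnitude, not sourced figures.
            computers_1969 = 50_000        # installed computers worldwide, give or take
            ops_per_machine = 1e6          # ~1 MIPS for a typical late-60s machine
            world_compute_1969 = computers_1969 * ops_per_machine   # ~5e10 ops/s

            phone_today = 1e12             # a modern phone's CPU+GPU, order of magnitude

            print(f"1969 world total: ~{world_compute_1969:.0e} ops/s")
            print(f"one phone today : ~{phone_today:.0e} ops/s, "
                  f"about {phone_today / world_compute_1969:.0f}x the 1969 total")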

        • woah77 says:

          That’s what I’m getting at. An adult from 1988 suddenly in 2018 would be shocked and dysfunctional at what computers do today. Going back another 40 years and the person would be hard pressed to even understand how to parse their experience. That said, I don’t think people of today would function especially well if they were suddenly displaced backwards a few decades.

        • deciusbrutus says:

          “A computer more powerful than the Apollo Project computers, and a telecommunications device with more bandwidth than the Apollo Project”.

    • acymetric says:

      I think people are drastically underestimating the difficulty of a transition from 1950 to 2018 and overestimating the transition from 1880 to 1950. I think a person from 1880 could carve out a pretty good life in 1950 without too much outside intervention. I think it would be virtually impossible for a person from 1950 to carve out a life in 2018 without significant outside intervention or a large amount of money at the start.

      If we assume both start out with no assets after time-traveling, I’d give life outcome of 1880s guy at 10/1 odds for any takers.

      You could argue 1880s guy has a slightly lower floor (he’s more likely to die prematurely than 1950s guy), but 1950s guy is much more likely to spend the bulk of his life homeless or in jail (which is a better outcome than death, but not by enough to outweigh the increased likelihood).

    • ana53294 says:

      North Korea is probably stuck somewhere in the 50s.

      Other than very dysfunctional ideas about how the world works, and their blind adoration of their leader, they probably have more or less the same skills that a person in an equivalent job had in the 50s.

      Do you think that North Koreans would be able to get jobs and function in South Korea?

      From everything I have heard about the people who manage to escape, even those who have university degrees have knowledge that is half a century behind. While some of the skills may be more transferable (nurses, taking care of kids), I am pretty sure that a North Korean engineer would be unable to get a job as an engineer without serious retraining.

    • david stone says:

      Fortunately, I have a related experience: old people. People who were born in 1950 or earlier tend to have a hard time adjusting to using technology today, and they were alive for the whole transition. This is such a common experience that it is just assumed knowledge that “old people are bad at technology”. From talking to people in their 70s and 80s, they don’t feel like their grandparents had quite such a hard time adjusting to life when they were kids, except for maybe some social changes.

      In fact, you could argue that your examples provide evidence for the opposite. The person from 1880 understands 1950 well enough to be shocked. The person from 1950 doesn’t even understand the common knowledge we all share well enough to realize that they should be shocked.

  68. KieferO says:

    My computer science professor back when I was in college wrote up his thoughts on “why the marathon record looks roughly like a straight line” (http://allendowney.blogspot.com/2011/04/two-hour-marathon-in-2045.html). I’m pretty convinced that his model explains most of marathon performance, but it’s tough to say how much bearing it has on the progress of science, given that the explanatory model focuses entirely on the individual record holder rather than the collective ability of marathoners.

  69. Shion Arita says:

    I’m still pondering how to evaluate the rest of it, but I’m pretty sure that for art it’s not the case; there are easily at least 100 Shakespeare equivalents around today. Fiction, art, music, etc. are really enjoying an immense number of extremely talented individuals right now.

  70. Bugmaster says:

    If you “solved” this “problem” in classical Athens, Attila the Hun would have had nukes.

    Yes, and the world would’ve likely been better off for it.

    Yes, nukes are awful, and potentially an existential threat. But I doubt that Attila the Hun was any more evil than Stalin or even Kim Jong Il. They had nukes, and the world is still here. And the modern world contains things other than nukes: modern electronics, computers, the Internet, and yes, even nuclear power plants.

    There is no such thing as standalone “nukes”. There’s only the set of scientific models (nuclear physics, quantum physics, etc.) which allows the creation of fission/fusion bombs as a byproduct. If you want to prevent nukes from ever existing, you basically have to freeze humanity’s scientific and technological development at 19th century levels (if not earlier). Is that a sacrifice you’re willing to make ?

  71. Maxwell says:

    Another rhetorical question you could ask is, “why didn’t R&D go to 0%?”

    With ever greater economies of scale, constant progress per researcher would imply that R&D should shrink toward 0% as a share of sales.

    Nobody expects that – we expect R&D to increase proportionally with sales.
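
    A toy illustration of that logic (numbers invented): if a constant rate of progress only required a constant research headcount, R&D’s share of a growing firm’s sales would drift toward zero instead of staying roughly proportional.

        researchers = 1_000               # fixed headcount for "constant progress"
        cost_per_researcher = 200_000     # fixed R&D budget per head
        sales = 1e9                       # year-0 sales
        for year in (0, 25, 50):
            grown_sales = sales * 1.03 ** year        # sales growing 3%/yr
            share = researchers * cost_per_researcher / grown_sales
            print(f"year {year:2d}: R&D share of sales = {share:.1%}")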

  72. anonymousskimmer says:

    How much time do people spend reinventing the wheel? Not just in terms of having to understand and learn how to use the wheels that previous people invented, but literally having to reinvent it because someone failed to communicate properly (or at all).

  73. anonymousskimmer says:

    Something you’re missing on crop yields is that we’re now using far less toxic chemicals to get those yields (bee health notwithstanding). Neonics and Roundup are better for the environment than nicotine and arsenic.

    I’ve actually seen the charts that plot the logarithmic decrease in pesticide toxicity. But if all you’re looking at is the chart for crop yields, you’ll miss it.

    So, seconding what other people have said with respect to there now being far more charts.

  74. kaleberg says:

    I think some of the metrics are a bit simplistic. Moore’s law is about transistor count. That’s a useful metric for homogeneous devices like memory chips, but a modern processor chip packs an entire supercomputer facility onto a single chip. The typical “processor” chip includes several layers of memory cache, an array of processors, an input output system, a SIMD graphics processor and a host of supporting hardware. These units were once separate chips, but now all reside in a single package. Even the processors have gotten more complex with instruction pipelines, multiple functional units, predictive execution, legacy instruction set support and a host of other features that would have involved multiple chips not that long ago. This type of device requires more engineers per transistor since there are more engineers working on each chip.

    Many gadgets are a lot more complex than they were in the mid-20th century. A light bulb used to be a tungsten coil suspended in an evacuated vessel. A modern light bulb has a diode-based rectifier, a transistor-and-capacitor-based regulator, and then a diode-based light source that might or might not need an optical frequency shift. Some of them have networked computers on board. Automobile engines were 19th-century Otto-cycle devices, but they now have computer-controlled timing, advanced chemical processing to optimize combustion, and a high-performance turbocharger. They are harder to understand, but they need less maintenance, use less fuel, produce more horsepower per weight, and emit fewer pollutants.

    A lot of metrics are tempting because they provide an easy way of quantifying things. It’s relatively easy to count elements or species or compounds, but most of modern progress is invisible. Modern technologies tend to involve a lot more science and are not as intuitive as older ones. It is like the jump in the late 19th century. Before then, a grade school graduate could improve a steam engine. After a certain point, though, one needed a college-level education in the new-fangled science of thermodynamics. The Otto engine started as a refinement of an older gasoline engine, which in turn was based on steam engine and cannon technology. The diesel engine started as a theory that optimized the Carnot cycle.

    Another factor is that companies are less likely than ever to tout their new technologies to the general public. Some of this is about maintaining proprietary advantage. Some of this is about public attention. You can clickbait an article about a new smartphone, but who is going to click on an article on geared jet engines? Yes, the internet makes it easier for the few interested in that kind of thing, but advertisers want millions of clicks, not hundreds. Worse, modern machines look boring. An old fashioned steel furnace with flying sparks, glowing metal and hundreds of men hauling heavy stuff around in their shirt sleeves is much more photogenic than a closed process and some guy sitting at a console. Even at Bunnie Huang’s blog, the gadget nerds were hard pressed to tell a tampon making machine from a diaper making machine. The important parts of the technology, the sizing and precise materials, were hidden inside.

    We should expect a logistic curve for just about any simple metric. Scientific knowledge tends to grow in jumps as new observational and classification tools become available and then reach their limits. There were a lot of subatomic particles discovered in the 20th century. The Standard Model actually cut the number of basic particles. The Human Genome project cut the cost of sequencing DNA, so we had a burst in the collection of gene sequences and the discovery of new genes, but we know that at some point we will stop finding novelty. We knew of no exoplanets until the 1990s. The count has soared, but we know that our current detection system has limits. We expect the genome count and the exoplanet count to rise much more slowly. This doesn’t mean we will stop learning about genetics or planets. It doesn’t mean that our scientists will have grown stupid or lazy. It just means that the metric is less useful.

  75. mmerrill says:

    Somebody else commented today, so maybe I’m not screaming into the wind.

    I’m not sure that saying that one of the 3 hypotheses is correct is the right way of thinking about it. All 3 hypotheses are rational ideas based off of observation.

    The first one would have a lot to do with the Pareto Distribution, which is an observable phenomenon. The second one is probably the most anecdotal, but I feel like anecdote is massively underrated (not hard to do when the value you place on anecdote is 0). The argument for the value of anecdote would go something like this: “I have observed this thing. When I tried to find studies on this thing, they don’t exist. What I have left to go off of is what I have observed. Therefore, it’s more likely that what I have observed is correct than that it isn’t. If anyone decides that what I have observed is worth doing a study on, and they do it well, I will change my opinion. Until then, I can only go off of what I have observed.”

    The particular argument for the second one is this: “I have experienced the bureaucracy of collegiate life today. It has affected my ability to conduct research. It is almost certain that other people have been affected the same way that I have. Therefore, this is part of what is causing the problem, assuming that it is a problem.”

    The third argument is almost self-evident. It’s pretty clear that it will get harder to answer questions as the questions that were easier to answer get answered. As I said before, all three are good arguments, which most likely means it’s a combination of the three, plus things that haven’t been accounted for.

    I’m not going to touch on art, because people have already said the things I would have said about it.

    I’m a little bit surprised that nobody commented on diminishing returns when it comes to athletics. It’s probably the most clear and obvious example of diminishing returns you get when it comes to what we know about the world. So, we’ll take weight-lifting and running as examples. This is something that becomes immediately clear if you’ve ever tried it.

    Say you’re trying to improve your bench press. Let’s say you start at 100 pounds, and you worked to get yourself to 150. That’s not that hard to do, if you felt it was worth your time and effort. Say you’re at 250 pounds, and you’re trying to get to 260. The amount of effort you have to put in to gain that 10 pounds is vastly greater than the effort it took to gain 50 pounds starting from 100.

    With running, say you start at being able to run a mile in 15 minutes. It would be pretty easy, again if you felt like it was worth doing, to get it down to 14 minutes. When you get to around a 5-minute mile, you’ll be putting in vastly greater effort to shave seconds off your time, as opposed to a whole minute.

    It’s possible that science works along the same lines in terms of diminishing returns, but I don’t think the factors involved in science, art, or physical activity are even remotely the same. I think they’re so far removed that most analogies made between the three are kind of useless, and the ones that aren’t are very subjective and metaphorical, which has its own problems. What was done here wasn’t even an analogy, it was a direct comparison between the three, which might be why I felt it necessary to respond to this one. They’re not the same thing, and treating them as the same thing could potentially be a mistake.

  76. albatross11 says:

    Two things that come to mind:

    a. Some areas of science/math/etc. are stuck with the same basic set of problems they had 300 years ago. Congratulations! You are now in a who’s-smarter competition with Gauss. (Alternatively, you’re competing with Beethoven for who can write a better symphony.)

    b. Other areas have new problems that have arisen from the original studies. This can get you explosive growth in a field, where people are absolutely not all concentrated on the same narrow area of inquiry that opened up the field.

    As an example, something recognizable as a modern computer only got built in 1948 or so. Most of the problems of computer science were questions that hadn’t yet been asked before then. Nobody needed to work on AI algorithms or compiler design or how to make efficient distributed databases work in 1900. But by the 1950s, all kinds of stuff like that was opening up. The number of people in computer science went way up, because instead of a handful of people trying to break German ciphers or build mechanical calculating machines, there were thousands solving all kinds of new and different problems that had arisen.

    Another example is in biology. Before Darwin, evolutionary biology wasn’t really a field. (Or at least not one that was going to lead anywhere useful.) To really make progress, you needed to know a lot about genes (mostly learned from fruit flies and bacteria), and a fair bit of probability theory. But it wasn’t even a field open for exploration till Darwin (and kinda Wallace, though it seems like he had a flash of insight rather than having pieced together a huge case for evolution). Once that field was opened up, there was lots of newly-available fruit to be picked. Perhaps the number of naturalists and insect collectors and microscope enthusiasts went way up, but it’s not like there was suddenly an exponential number of people taking voyages to the Galapagos islands to formulate some notion of how things may have evolved. Instead, there were people selectively breeding worms and flies and bacteria and figuring out the molecules behind heredity and doing math to model it all and finding places the math didn’t work and needed a patch.

    Still another: There were people doing research into astronomy in 1800. But nobody knew what powered the stars till around 1920, radio astronomy wasn’t available until the 1930s, etc. It was a much narrower field with a lot less to consider in 1800. People were peering through telescopes at stars and doing some mechanics to work out orbits and infer missing things, but later there were far more people using totally different techniques that hadn’t been invented in 1800 to learn about the stars and planets and such.

    So it looks to me like one thing that happens is that we find a place with lots of low-hanging fruit and crowd in until all the low-hanging fruit is picked (what people in my field call an area being “mined out.”). But another thing that happens is that in picking some reachable fruit, we open up a whole new orchard, so that where previously “computer science” was some boffin talking about imaginary machines with infinite-length paper rolls, it became an area with hundreds of sub-areas of study, each of which have plenty of fruit to be picked.

  77. Pingback: 74 – Adventures in Feedback! | The Bayesian Conspiracy

  78. Andrew Cady says:

    Maybe the “transistor-related research” people found some ways to improve the semiconductors besides making them smaller.