Things That Are Not Superintelligences

[Not the most interesting topic in the world, but I’m posting it so I have something to link to next time I see this argument]

I talk about superintelligence a lot, usually in the context of AI or genetically engineered humans. And lately I have run into people who say: “But superintelligence already exists! It’s corporations / bureaucracies / teams / civilizations / mind-mapping software” (examples: 1, 2, 3, 4). Sometimes these people go so far as to say these things are in fact superintelligent AIs, since they are technically “artificial”.

No.

Some of these things may be poetically or metaphorically like a superintelligence, in the same way that, I don’t know, the devastation of traditional cultures by modernity is poetically or metaphorically like nuclear war, or whatever. But if every time somebody is trying to talk about nuclear disarmament, other people interject with “But we’ve already had a nuclear war – it’s the nuclear war in the heart of all mankind”, this doesn’t really add to the conversation. In the same way, talking about these metaphorical superintelligences is not a helpful contribution to discussion of literal superintelligences.

Why do I think that there is an important distinction between these kinds of collective intelligences and genuine superintelligence?

There is no number of chimpanzees which, when organized into a team, will become smart enough to learn to write.

There is no number of ordinary eight-year-olds who, when organized into a team, will become smart enough to beat a grandmaster in chess.

There is no number of ordinary IQ 90 construction workers who, when organized into a team, will become smart enough to design and launch a space probe to one of the moons of Jupiter.

There is no number of “average” mathematics PhDs who, when organized into a team, will become smart enough to come up with the brilliant groundbreaking proofs of a Gauss or a Ramanujan.

Teams / corporations / cultures have a lot of advantages over individuals. They can use writing and record-keeping to have much better “memories”. They can use computers to be able to calculate and retrieve information more quickly. They can pool their advantages, so that if one person is good at writing and another person good at illustration, they can produce a well-written and beautifully-illustrated book. They can formalize their decision-making processes to route around various biases and react consistently to predictable situations. These are all really good things to be able to do, and it’s why in fact groups of people have outperformed individuals in fields as diverse as “making nuclear bombs” and “coordinating air traffic”.

But there is some aspect of intelligence that they can’t simulate, in which they are forever doomed to be only as smart as their smartest member (if that!). It’s hard to put my finger on it exactly, but it seems to have something to do with creative problem-solving ability. A team of people smart enough to solve problems up to Level N may be able to work in parallel to solve many more Level N problems in a given amount of time. But it won’t be able to solve Level N+1 problems. This is why it’s still occasionally useful to have mathematical geniuses around, instead of taking ten average mathematicians and telling them to work together. And unfortunately, this aspect of intelligence is the bottleneck for lots of interesting things like new inventions, proofs, and discoveries.

Further, teams themselves need intelligent people to run in an intelligent way. Steve Jobs led Apple to success by being really really good at marketing. Apple couldn’t have gotten the same results by firing him and replacing him with a marketing department of a hundred low-level employees who had graduated from second-tier marketing programs. This is true not only at the Steve Jobs level but at every level – at some point a Sales Department needs to have good salespeople, not just many well-organized mediocre salespeople. I’m not denying that many well-organized mediocre salespeople can do way better than a few poorly-organized mediocre salespeople, just that you can’t fully route around the need for actually intelligent people.

And finally, teams have a lot of contingent disadvantages over an individual. They work vastly more slowly. Their various parts tend not to know what the other parts are doing. If dictatorial in structure, they fall prey to failures of information; if non-dictatorial, to failures of coordination. Imagine an individual human whose inner soul had Democratic and Republican parties that were constantly trying to sabotage each other, so that if the Democratic part of her got a job interview, the Republican part would immediately try to sabotage the job interview to prevent the Democrats from looking good. Such a person would either be insane or at the very least not get too many jobs.

While it’s possible for improvements in organizational technology to ameliorate some of these contingent problems, so far they generally haven’t: the US government is as dysfunctional as ever, and a lot of corporations are little better. And even if all of the contingent problems were magically solved, that still leaves the fundamental problem where no organization of chimpanzees will ever write a novel.

If we were to actually get superintelligence, that would be a completely different class of entity than another government or corporation. It would also have all of the advantages of these things – arbitrarily much parallel processing ability, arbitrarily much memory, arbitrarily many computational resources – without the disadvantages, and with higher genuine “intelligence” as in problem-solving ability.

I think it’s useful to have a word for this completely different class of things, and that word is “superintelligence”. Teams, corporations, and cultures can use words we already have, like “groups”.

[EDIT: I keep getting the same objection in the comments: if we made a bunch of ordinary eight-year-olds follow a simple set of operations that corresponded to a logic gate, and arranged them so that they simulated the structure of Deep Blue, then they could win high-level chess games. This is true. But eight-year-olds could not come up with and implement this idea. A brilliant computer programmer might be able to, but once you’re a brilliant computer programmer, you might as well just build the darned computer instead of implementing it on eight-year-olds. And any computer programmer so brilliant that they could build a true superintelligence out of eight-year-olds could build a true superintelligence out of normal computers too. And the same is true of the objection “doesn’t this mean that no amount of stupid neurons could combine into a smart human brain?” Yes, evolution can play the role of the brilliant computer programmer and turn neurons into a working brain. But it’s the organizer – whether that organizer is a brilliant human programmer or an evolutionary process – who is actually doing the work. That “neurons can combine to form a brain” is no more profound than that “transistors can combine to form an AI” – in both cases, it’s the outside organizer doing all the meaningful work. For a really interesting science-fiction treatment of what it would actually mean to implement a superintelligence in human social interaction, read Karl Schroeder’s Lady of Mazes]
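
A toy sketch of the “people as logic gates” point (purely illustrative, nothing from the post itself): each gate below is a rule simple enough to follow by rote, and it is only the organizer’s wiring of those rules into a circuit that produces something none of the individual rule-followers understands.

    # Each "gate" is a trivial rule an eight-year-old could apply mechanically.
    def nand(a: bool, b: bool) -> bool:
        return not (a and b)

    # The organizer's contribution: composing trivial rules into a circuit,
    # here a one-bit half adder built entirely from NAND gates.
    def half_adder(a: bool, b: bool):
        n = nand(a, b)
        total = nand(nand(a, n), nand(b, n))  # XOR built from NANDs
        carry = nand(n, n)                    # AND built from NANDs
        return total, carry

    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), "->", tuple(int(v) for v in half_adder(a, b)))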

[EDIT 2: Advocate of a “super-individual-human-intelligence” project responds with a clarification of terms.]

[EDIT 3: Several people point out in the comments that chess champion Garry Kasparov once played a game of chess against “the world”. Kasparov moved the white pieces, and the black moves were decided by popular vote on a website where various other grandmasters and chess buffs worked together to devise the best strategies. The game was very closely fought and suffered from several irregularities, but Kasparov ended up winning.]


482 Responses to Things That Are Not Superintelligences

  1. Eli says:

    There is no number of chimpanzees which, when organized into a team, will become smart enough to learn to write.

    Weren’t chimpanzees observed to use tools at a Stone Age level and possess culture?

    There is no number of “average” mathematics PhDs who, when organized into a team, will become smart enough to come up with the brilliant groundbreaking proofs of a Gauss or a Ramanujan.

    Then who did Gauss or Ramanujan cite? Exclusively other “geniuses”?

    If the brain is an engine and intelligence is what the brain does, why do we go around assuming that intelligence is measured in distance units rather than velocity units?

    • Carson says:

      > Chimpanzees:
      Well, presumably that’s why Scott chose writing, which chimpanzees have never been observed to do, as his example.
      > Gauss
      No, but they got much farther than other people starting from the same places. Which we know because the things they built upon had been around for a long time and nobody else managed to prove the things they did.

      Regardless of the individual example, do you dispute the general claim? That parallelism cannot always compensate for genius? And that the institutions we have today are in some way limited by their smartest member?

      I’m not sure what you mean by distance and velocity. Intelligence is just a word. There are potential entities that are both more parallel (like corporations), and better in different ways. Scott is just pointing out that they are different and there are some tasks that require each of these advantages, so we can’t replace one with the other.

    • jaimeastorga2000 says:

      There are two kinds of scientific progress: the methodical experimentation and categorization which gradually extend the boundaries of knowledge, and the revolutionary leap of genius which redefines and transcends those boundaries. Acknowledging our debt to the former, we yearn, nonetheless, for the latter.

      Academician Prokhor Zakharov, “Address to the Faculty”

      • daronson says:

        Funny, I also thought about Prokhor Zakharov. What did Sid Meier do to us??

        • William O. B'Livion says:

          Put something in front of us that caused us to waste large parts of our lives sitting in front of a computer screen thinking about trivial things over and over instead of spending large parts of our lives sitting in front of a television screen laughing at the same old jokes told in a slightly different way each time?

    • Geirr says:

      > Then who did Gauss or Ramanujan cite? Exclusively other “geniuses”?

      I really doubt that they exclusively cited other geniuses, but the distribution of citations is going to be very, very lopsided, with some individuals having a wildly disproportionate share of those citations. This power law of accomplishment and renown occurs in basically every field of human accomplishment. The best book on this that I’ve read is Charles Murray’s Human Accomplishment:
      https://en.wikipedia.org/wiki/Human_Accomplishment
      As a gut check on this, compare the relative scores of Isaac Newton and Gottfried Leibniz, each of whom invented the calculus independently. In Mathematics, the category in which they’re closest, Leibniz has an index score of 72, Newton of 89. This is a stupidly large difference between two men of colossal genius and accomplishment. The towering figures of any field are all magicians, not ordinary geniuses.

      “In science, as well as in other fields of human endeavor, there are two kinds of geniuses: the “ordinary” and the “magicians.” An ordinary genius is a fellow that you and I would be just as good as, if we were only many times better. There is no mystery as to how his mind works. Once we understand what he has done, we feel certain that we, too, could have done it. It is different with the magicians. They are, to use mathematical jargon, in the orthogonal complement of where we are and the working of their minds is for all intents and purposes incomprehensible. Even after we understand what they have done, the process by which they have done it is completely dark. They seldom, if ever, have students because they cannot be emulated and it must be terribly frustrating for a brilliant young mind to cope with the mysterious ways in which the magician’s mind works.”

      • Eli says:

        This power law of accomplishment and renown occurs in basically every field of human accomplishment.

        Yes, including in fields where we know the relevant talents are normally distributed (I’m thinking of athletics and the explanations in “Outliers” about how professional athletes get to their skill levels). In fact, we know IQ is normally distributed (by definition, for some IQ tests).

        We’ve never seen anything indicating a power-law distribution of intelligence in any available measurement of intelligence besides “People point to you as a famous genius.”

        • pneumatik says:

          I’m assuming you’re criticizing the post by Geirr that you’re replying to. If not then my post won’t make sense.

          People who are several sigma better than the mean at something will have virtually all the huge successes at it. If greater intelligence is about being able to do things less intelligent people can’t do, then the smartest people will discover or describe the most important theorems or proofs or laws or whatever. Those publications will set less intelligent people off on understanding and applying the new discoveries.

          If being more intelligent is just about thinking faster then the more intelligent people will discover the big important things first. In either case, a power law distribution of important papers is created.

        • Geirr says:

          Read Human Accomplishment. It’s on libgen.

          Any field of endeavour will call on multiple different capabilities. If all of these capabilities are normally distributed, the products of those capabilities will follow a power law, not a normal distribution. This is true even if the different capabilities are highly correlated. If you’re good at driving in golf but a terrible putter you will not be a pro; the reverse is also true. So the number of potential pros is the intersection of excellent drivers and excellent putters. If one golfer in ten is excellent at driving and one in ten is excellent at putting, then (roughly, assuming the two are independent) only one in a hundred will be a potential pro. The pool will be even smaller once you select within that group for other traits, like conscientiousness, eyesight, etc.
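
          A minimal simulation sketch of this multiplicative argument (my own illustration with arbitrary assumptions: four hypothetical traits, each bell-shaped with mean 100 and SD 15; “excellent” means top 10%):

              import numpy as np

              rng = np.random.default_rng(0)
              n_people, n_traits = 1_000_000, 4   # e.g. driving, putting, conscientiousness, eyesight
              traits = rng.normal(100, 15, size=(n_people, n_traits))

              # Excellence at one trait is common; excellence at every trait is far rarer.
              cutoff = np.quantile(traits[:, 0], 0.9)
              print("excellent at one trait: ", f"{(traits[:, 0] > cutoff).mean():.2%}")
              print("excellent at all traits:", f"{(traits > cutoff).all(axis=1).mean():.4%}")

              # Treating overall performance as the product of traits gives it a much
              # heavier right tail than any single bell-shaped trait has.
              performance = np.clip(traits, 1, None).prod(axis=1)
              for label, x in (("single trait", traits[:, 0]), ("product of traits", performance)):
                  print(label, "99.9th percentile / median =", round(np.quantile(x, 0.999) / np.median(x), 2))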

    • Jon Gunnarsson says:

      Then who did Gauss or Ramanujan cite? Exclusively other “geniuses”?

      I don’t understand the question. Mathematical proofs are rigorous logical arguments that do not depend on citing authorities.

      • sweeneyrod says:

        Proofs depend on sub-proofs, which can be referenced in a paper rather than explicitly given. Example:

        Statement: All flarbles are homoringular with ordering 7.

        Proof: It is obvious that all flarbles are homoringular with order 3 (by the definition of a flarble as a glarble ringulised by a triple-tyroid). It is known that any flarble homoringular with order n is also homoringular with the next prime above n. [1]

        Hence all flarbles are homoringular with ordering 5, and thus with ordering 7 also.

        [1] Euclid, Euler, Einstein, Advanced Flarblological Studies, 1973

    • Scott Alexander says:

      The mathematician example is complicated, since all progress is a combination of building on the works of others plus individuals’ bright ideas, but let’s put it this way.

      Suppose you had a team of n 50th-percentile math PhD students that you could organize any way you like, and I had Terence Tao. We’re both competing to prove the same theorems as quickly as possible. Is there any value of n for which you expect to always win this competition? Is there any case in which I might be able to prove a theorem you will never prove, or vice versa?

      • Emp says:

        It’s not about the ‘n’ number of guys; it’s about the difficulty level of the theorem.

        Mr Tao is so far ahead of the average guy that there will be some threshold at which the others are simply incompetent when faced with problems of a certain complexity. If the theorems are not so complex that the team of guys is incompetent, they will win every time.

        The contentious issue is what is the actual difficulty level of the innovations that are being sought?

      • Søren E says:

        >Suppose you had a team of n 50th-percentile math PhD students that you could organize any way you like

        If the math PhDs are X standard deviations above average in math skills, the organizer should be X standard deviations above average in organizational ability.

        Organizing people is hard in practice – my intuition is that you will hit a point where the marginal utility of adding a PhD becomes negative.

      • Daniel says:

        I’m currently a postdoc in mathematics. I do think this example is much more complicated than you even admit in this post.

        It’s worth noting that to some extent the experiment you suggest has been performed: see https://en.wikipedia.org/wiki/Polymath_Project. This was an attempt by many (maybe 200?) mathematicians, including several very prominent ones such as Terence Tao and Tim Gowers, to collaborate on some open problems.

        I observed but did not participate in the polymath project, since the subjects are typically rather far from my area of interest. My feeling was that many of the large contributions, as well as the general directions the research tended to move in, were driven by the prominent members of the group. That said, several crucial insights came from little-known participants, or even graduate students.

        The case of the Erdos Discrepancy Problem is particularly interesting–the polymath project achieved some partial results in 2012, and then Tao finished off the theorem in 2015, crucially using some insights of the polymath project.

      • Marc Whipple says:

        Tanquam ex ungue leonem (“we know the lion by his claw”).

        OTOH, it turns out that the proof probably wouldn’t have fit in that margin.

        *shrug*

      • Eli says:

        Suppose you had a team of n 50th-percentile math PhD students that you could organize any way you like, and I had Terence Tao. We’re both competing to prove the same theorems as quickly as possible. Is there any value of n for which you expect to always win this competition? Is there any case in which I might be able to prove a theorem you will never prove, or vice versa?

        Are we assuming you get Terence Tao the professor, or Terence Tao the PhD student?

      • Murphy says:

        >Suppose you had a team of n 50th-percentile math PhD students that you could organize any way you like, and I had Terence Tao. We’re both competing to prove the same theorems as quickly as possible. Is there any value of n for which you expect to always win this competition?

        Always? Of course not, but that’s also true if I have 50 clones of Terence Tao.
        Just by the average-ish math PhDs being able to try different things in parallel, they’re going to solve problems in some amount of time. The time would vary, so for some N you can expect to be able to beat a single Terence Tao more than 50% of the time.
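
        A quick simulation sketch of that probabilistic claim (my own illustrative numbers, nothing from the comment): when solve times are highly variable, the fastest of N slower solvers beats a single much faster solver more and more often as N grows.

            import numpy as np

            rng = np.random.default_rng(1)
            trials = 10_000

            # Lognormal solve times; the lone "fast" solver is 10x faster at the median.
            fast_solver = rng.lognormal(mean=np.log(1.0), sigma=1.0, size=trials)

            for n in (1, 10, 100, 1000):
                # n average solvers work independently; the team finishes when the first one does.
                team = rng.lognormal(mean=np.log(10.0), sigma=1.0, size=(trials, n)).min(axis=1)
                print(f"n={n:5d}: team wins {(team < fast_solver).mean():.1%} of the time")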

        Evolution itself is utterly unintelligent, but a billion years of dumbly trying billions of approaches in parallel can give you a winning solution anyway.

        On a more meta level, I feel that your post makes an assumption that I see often from philosophy students. I think of it as the “one ring” way of thinking: assuming that you can only ever put yourself, or a copy of some part of yourself, into the things you create, like some kind of Tolkienesque magic system.

        In reality it’s normal for people to be able to create things which far surpass their own abilities in some area.

        Someone who has no idea how to play chess can nevertheless create a chess program which can learn to beat a chess grandmaster.

        A group of programmers who all utterly suck at chess, none of whom is a grandmaster, can still create a system which can learn to be better than a grandmaster.

        I can create a system which learns and far surpasses my own abilities in an area. I can take a task which I have no more than a vague idea how to do and grow an ANN which can do it close to optimally.

        Similarly, an administrator who could never solve a problem themselves can nevertheless create a system capable of learning and optimizing itself, just like an ANN, until it is able to achieve the task. That’s not simply summing the abilities of the other staff who are part of the process.

        Your stated assumptions *sound* good but they’re incorrect.

        • Oliver Cromwell says:

          Is it possible that cloning Terence Tao is actually a more fruitful approach than paying Eliezer Yudkowsky $10,000/year to talk about thinking about paying someone else to program a Friendly AI?

          btw, when is Terence Tao going to escape onto the internet, found a world religion, become Chancellor of Germany, and turn the universe into a paperclip museum?

    • Chris Conner says:

      > If the brain is an engine and intelligence is what the brain does, why do we go around assuming that intelligence is measured in distance units rather than velocity units?

      Engines are usually rated in units of power, such as horsepower or watts. But brains don’t produce power, they consume it. For instance, a human brain runs on about 20 watts. A modern desktop computer CPU might run on something like 60 watts, but this is much more variable than it is for humans. But there are no desktop CPUs that are anywhere near as smart as humans, so power consumption isn’t a very good measure of intelligence. Perhaps the analogy between a brain and an engine isn’t a very good one.

      • Marc Whipple says:

        You’re completely correct, but that wasn’t quite what I got from the question. It seemed like a reasonable distinction of the form: if we measure X in Y, and Z and X are analogous, shouldn’t we measure Z in Y and not in B, where B and Y are different kinds of units?

        However, that being said, I’m not sure that I’d agree that we measure the brain in distance units in the first place. I’d analogize most intelligence assessments as measuring something like horsepower or torque, but completely unrelated to actual physical power consumption by the brain. We don’t measure X in Y, in other words.

      • Ricardo Cruz says:

        Slightly offtopic, but your comment (measuring brain power by what it consumes rather than what it produces) reminds me of how economists measure government output (GDP) by what government consumes as well, rather than sales or purchases as is the case for private GDP. Or how employers sometimes enforce input rules (such as a schedule) rather than output, because output is sometimes very hard to measure.

      • Relativist says:

        He probably meant something like the difference between crystallized intelligence (distance) versus fluid intelligence or IQ (speed). There’s an interaction between distance and speed both in vehicles and brains. At least this is my interpretation of what he meant.

  2. ton says:

    >And DON’T EVEN GET ME STARTED on people who think Wikipedia is an “Artificial Intelligence”, the invention of LSD was a “Singularity” or that corporations are “superintelligent”!

    http://lesswrong.com/lw/ic/the_virtue_of_narrowness/

    • SomeGuy says:

      His point about the connected graph is silly. I don’t think I’ve ever heard anyone suggest that all connections between things are the same connection, and the complete graph with labeled edges is not “the same as a graph with no edges at all.”

    • Houshalter says:

      Another, somewhat similar and somewhat relevant article by Yudkowsky: No Evolutions for Corporations or Nanodevices http://lesswrong.com/lw/l6/no_evolutions_for_corporations_or_nanodevices/

      • Swami says:

        Wow, at least Less Wrong commenter Tim Taylor understands cultural evolution. Eliezer’s 2006 post is a conceptual mess. He projects the rules of biological evolution onto culture without grasping the immense distinctions between the two.

  3. anon85 says:

    I think the mathematician example is sort of bad. In my experience, past a certain “getting it” level where you’re able to solve homework assignments on graduate-level courses, a lot of mathematics is hard work + dedication + some luck (intelligence still helps a lot, of course).

    If you removed Gauss and Ramanujan and the other top 100 math geniuses in history, mathematics would still eventually reach the place it is today – it would simply be slowed down somewhat. There’s nothing Gauss did that’s impossible to do without him.

    • Sayre says:

      Having no real dog in this fight, due to lack of prior knowledge: you could say that about many a great genius’s breakthrough. Perfectly obvious in hindsight but utterly inconceivable prior to their intervention. I don’t think it’s a good argument in general.

    • endoself says:

      Top mathematicians do not believe this. At least, there are many areas where this is not true, including areas whose relative underdevelopment, if you were to remove the top 100 mathematicians, would leave mathematics today looking drastically different.

      For the example of Gauss, it took about 100 years for mathematics to process the ideas in his Disquisitiones Arithmeticae, which he wrote at the age of 21. In fact, Manjul Bhargava, a recent Fields medalist, attributes the ideas for which he won the Fields medal to the fact that he read the Disquisitiones in addition to more recent work, showing that there are still ideas in the Disquisitiones that are not incorporated into the standard modern presentation.

      • Haplodiploidy says:

        OK, ordinary mathematicians haven’t approached the genius of Gauss. But does that show that they can’t? For example, isn’t it possible to enumerate all possible proofs in PA or ZFC? If so, given time, a single mathematician should be able to outdo any number of Gausses with a finite body of published results.
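
        A conceptual sketch of that enumeration idea (not a practical tool: is_valid_proof_of is a hypothetical stand-in for a real mechanical proof checker, and the search is computable but astronomically slow):

            from itertools import count, product

            ALPHABET = "()~&|>=exyzSO+*01 "  # illustrative stand-in for a formal language's symbols

            def is_valid_proof_of(candidate: str, theorem: str) -> bool:
                """Hypothetical placeholder: True iff `candidate` is a well-formed formal
                proof of `theorem` in the chosen axiom system (e.g. PA or ZFC)."""
                raise NotImplementedError

            def brute_force_prove(theorem: str) -> str:
                """Enumerate every finite string in order of length and return the first
                one that checks as a proof. Halts only if the theorem is provable."""
                for length in count(1):
                    for symbols in product(ALPHABET, repeat=length):
                        candidate = "".join(symbols)
                        if is_valid_proof_of(candidate, theorem):
                            return candidate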

        • endoself says:

          Yes, any result proved by Gauss could also be proved by brute force given sufficient (presumably exponential) time. If you read Scott’s post as suggesting otherwise, I suggest looking back and replacing “no number” with “no number less than a trillion”, “no number less than 10^100”, or whatever other number is appropriate to the particular point being made. Scott is not a mathematician, so it is unreasonable to expect him to distinguish between the infinite and the merely unphysically large.

          Another thing to mention is that a lot of what was great about Gauss was ideas rather than just particular theorems being proven. Anyone could read the Disquisitiones and see what theorems were proved, but it takes careful thought about why Gauss was doing what he was doing to extract the ideas that led to the number theory of the 19th and early 20th centuries, and to Bhargava’s recent results.

          This isn’t to say ideas can’t be brute forced too. If you find a particularly short program that writes out the text of the Disquisitiones (i.e. much shorter than any standard compression in use today), it presumably contains some insight into what is being done in order to allow it to compress the text so efficiently.

          Also, the sentence I was responding to was

          > In my experience, past a certain “getting it” level where you’re able to solve homework assignments on graduate-level courses, a lot of mathematics is hard work + dedication + some luck (intelligence still helps a lot, of course).

          which makes a much stronger claim than that people at this level of ability could do what Gauss did given exponentially more time.

        • vV_Vv says:

          given time, a single mathematician …

          Or a monkey.

        • baconbacon says:

          There are an infinite number of points between the 1-inch and the 2-inch marks on a ruler; no amount of brute-force effort over infinite time will ever find the 2.1-inch mark within that infinity.

          A monkey might generate Hamlet, but he would have no idea that it had happened, and a Ph.D. student going through all possible proofs would have to stumble on the solution. But would he recognize the correct one?

        • Ed says:

          And the one mathematician would have a vast body of proved statements and no idea which of them were interesting.

      • anon85 says:

        Do you have a citation for “top mathematicians do not believe this”?

        Anyway, Gauss is cool, but to think that 100 years later people couldn’t grasp his ideas is a bit ridiculous (surely he wasn’t *that* bad at explaining). And I *definitely* don’t believe that there are hidden insights in a 200-year-old work that we’re still missing today. Anyone who believes that hasn’t comprehended just how enormous modern mathematics is, just how vastly deeper the theories we have now are compared to what Gauss did in his time.

        • Geirr says:

          I am basically unfamiliar with mathematics but I’ve read that a good undergraduate mathematician is largely up to speed on everything up to 1900 by the time they receive their B.A. which gives some impression of how enormously knowledge has grown since then.

          It does seem unlikely that there’s anything trivial in Gauss that hasn’t been worked out already, but hardly impossible. Sexual selection was as clearly explained in Darwin’s works as natural selection, and it was basically rediscovered in the 1970s. That’s over a century between explanation and becoming common currency for a field, genetics, for which Darwin is the author. Archaeology went down blind alley after blind alley in its understanding of culture and population movements for most of the 60 years after World War II, despite vastly more money and researchers. “Pots, not people” was wrong and Völkerwanderung was right.

          I am much more skeptical of fields with such stringent standards of proof as math going wrong for long periods but the idea that something in retrospect obvious might be overlooked for a long time isn’t laughable. It’s possible that the insight necessary to integrate General Relativity and Quantum Mechanics is within reach for a bright postgrad who just has the right idea.

          • anon says:

            “I am basically unfamiliar with mathematics but I’ve read that a good undergraduate mathematician is largely up to speed on everything up to 1900 by the time they receive their B.A. which gives some impression of how enormously knowledge has grown since then.”

            This is completely false. Examples:

            * The proof of the Prime Number Theorem (completed by Hadamard and de la Vallée Poussin, building on Riemann’s ideas) and Dirichlet’s theorem on primes in arithmetic progressions are two great results in analytic number theory from the 19th century. They are accessible to a good undergraduate, but they are far from being part of the universal curriculum.

            * Poincare and others developed amazing, sophisticated results in dynamics (e.g. billiards, N-body problems, …) that are far beyond the standard undergraduate curriculum. In fact, most of the more physics-y parts of 19th century math are not “standard fare” for undergrads in either the math or the physics department.

            * The geometry of algebraic curves is one of the most classical subjects in mathematics and it is _amazingly_ rich. (See the book of Arbarello, Cornalba, Griffiths and Harris.) This is 19th century stuff (sometimes even 17th or 18th century), although from a modern perspective it is much easier to make sense of it using 20th century tools such as sheaf cohomology. [Because curves are “simple”, one can mechanically translate all the modern techniques into more concrete language well known to 19th century geometers.] Nonetheless, I would venture to guess that only mathematicians who specialize in algebraic geometry ever come close to being “up to speed” on the full breadth of what was known about algebraic curves in 1900.

            * In the same vein, the geometry of Jacobians and other abelian varieties was very highly developed in the 19th century, although not necessarily on an entirely rigorous footing. Very few undergraduates learn this subject.

            * Most undergraduates don’t learn more than the very basics of the theory of Lie groups and Lie algebras. While it’s true that this subject really came to fruition in the 20th century (especially the representation theory side, as well as the structure theory of Lie groups “over the integers” [Chevalley groups]), Sophus Lie certainly knew a whole lot about this stuff in the late 1800s. Sure, a *good* undergrad might learn more than the surface-level stuff in Lie theory, but maybe she prefers to spend her time learning dynamics or algebraic curves or analytic number theory, and there are only so many hours in the day.

          • Geirr says:

            @anon

            Is there any point in history for which it would be true to say “A good B.A. holder knows almost all Math up to X”, or could rederive it relatively easily using knowledge they have but those mathematicians didn’t? 1800?

          • Daniel says:

            @anon: I think the situation is even worse than you suggest–lots of 19th century mathematics has been forgotten or only revived in some very specialized and obscure way. E.g. Liouville’s study of which functions admit elementary anti-derivatives, which is now part of the (somewhat obscure) field of differential Galois theory.

          • anon says:

            @Daniel, your example of integration in elementary terms is not that great, because what really happened was that mathematicians realized this question is not necessarily as interesting as it seems at first. I would love to be corrected, but despite the beauty of this subject (see e.g. Rosenlicht’s article in Am. Math Monthly, Vol 72 No. 9), the fact that differential Galois theory remains a somewhat obscure subject indicates to me that it lacks the rich interconnections with other areas that characterize the deepest, most interesting parts of mathematics.

          • Daniel says:

            @anon: I believe there are some recent connections with parts of algebraic geometry–see e.g. Joseph Ayoub’s work on the foliated topology. I saw a talk by him about this a while ago and don’t really understand what’s going on, but the program at least seems interesting.

        • “And I *definitely* don’t believe that there are hidden insights in a 200-year-old work that we’re still missing today.”

          I can’t speak to the mathematical case, but the American Economic Review published an article in 1987 that reinvented, in a less important context, an idea published by David Ricardo more than a hundred and fifty years earlier. For details see:

          http://www.daviddfriedman.com/Academic/Ng_Ricardo/Ng_Ricardo.html

          • anon85 says:

            Economics is not mathematics. I’m talking very specifically about math here. I think the chance that a 200-year-old, well-studied book contains mathematical insights that people today are still missing is about the same as the chance that a 200-year-old book about medicine contains a secret cure for cancer.

          • Marc Whipple says:

            I think that some of that is that you are setting a very high bar for “missing insight.” At the level of “A well-known book of medicine from the 18th century contains a working cure for cancer that nobody knew was there,” I think you and I have about the same level of confidence that Gauss et al have nothing to surprise us with. E.g., I’m pretty sure that the proof Fermat had of his Last Theorem wasn’t so insightful that nobody has been able to replicate it: I think it was just wrong.

            However, I think it is entirely reasonable to suspect that someone may, in the course of developing a new branch of mathematical theory, discover that the only way Gauss et al could have arrived at a conclusion they are known to have made is that they had an instinctive understanding of what the modern-day mathematician thinks is novel and just never bothered to formally present it.

          • anon says:

            @anon85

            Gauss proved the Law of Quadratic Reciprocity (8 times). Mathematicians continued to revisit this theorem to the point where it has collected hundreds of distinct proofs. One could claim that the work of E. Artin and others that contextualized Quadratic Reciprocity in the general framework of class field theory closed the book on this, and that there is no more insight to glean from Gauss’s investigations into this subject. But I think that would be foolish. In some sense the whole ballgame in modern number theory is to understand non-abelian reciprocity, and who are we to say whether or not some brilliant mathematician revisiting Gauss’s various perspectives on the simplest abelian cases will find new insights into open questions? Many people have read the Disquisitiones, but as other commenters have emphasized, very few have done so with enough gusto to squeeze more juice from the pulp.

          • nydwracu says:

            I think the chance that a 200-year-old, well-studied book contains mathematical insights that people today are still missing is about the same as the chance that a 200-year-old book about medicine contains a secret cure for cancer.

            What’s the chance that a 1500-year-old book about medicine contains a secret cure for malaria?

          • anon says:

            @nydwracu I like that example, but probably most would write it off as a fluke (and they might be correct to do so).

        • endoself says:

          > Do you have a citation for “top mathematicians do not believe this”?

          See for example Read the Masters.

          > Anyway, Gauss is cool, but to think that 100 years later people couldn’t grasp his ideas is a bit ridiculous (surely he wasn’t *that* bad at explaining).

          People understood the things that he wrote down. What people didn’t understand are the things that he understood that allowed him to discover what he discovered.

          Everyone understands a lot more than they are able to explain. This is what is behind Moravec’s paradox; we can recognize objects, but we can’t write code that recognizes objects just by writing down how we do it. Top mathematicians understand even more things that they don’t necessarily know how to write down. When they can explain it, it becomes something people can learn just by reading about it. Bill Thurston has written about this in On Proof and Progress in Mathematics, and he invited others to talk about their own thoughts on this phenomenon on MathOverflow. I’d also recommend For progress to be by accumulation and not by random walk, read great books, which discusses this outside of math.

          I can also point toward evidence that the Disquisitiones in particular still had insights – *not* what was stated outright but what you can learn about how Gauss thought if you read carefully and reflect deeply – yet to be assimilated after 200 years. Timothy Gowers says of Bhargava:

          > But the first of his Fields-medal-earning results was quite extraordinary. As a PhD student, he decided to do what few people do, and actually read the Disquisitiones. He then did what even fewer people do: he decided that he could improve on Gauss. More precisely, he felt that Gauss’s definition of the composition law was hard to understand and that it should be possible to replace it by something better and more transparent.

          > I should say that there are more modern ways of understanding the composition law, but they are also more abstract. Bhargava was interested in a definition that would be computational but better than Gauss’s. I suppose it isn’t completely surprising that Gauss might have produced something suboptimal, but what is surprising is that it was suboptimal and nobody had improved it in 200 years.

          So, what Bhargava did differently than other mathematicians is read Gauss, and it led to him winning a Fields medal. Not very many mathematicians read the masters, but many top mathematicians do. This suggests that the top 100 mathematicians in history knew things that they weren’t able to write down, and that good mathematicians know this and seek out their insights.

          • For what it’s worth, my two favorite economists are probably Ricardo, who wrote about two hundred years ago, and Marshall, about a hundred years ago. And for one example of the sort of thing you mention …

            Someone here commented, in the context of Scott’s essays, on the danger of forming a general conclusion from a series of examples. Marshall discusses that in the economic context, pointing out the danger that your examples may include implicit assumptions that don’t apply to the general case.

            Unless, he adds, your intuition is so good that you can, like Ricardo, step from example to example without ever drawing a wrong conclusion.

            He then gives an example of Mill’s failure to do so.

          • anon85 says:

            I don’t trust that “Read the Masters” book. Overall I believe any insight that Gauss and other masters had 200 years ago is trivial by today’s standards. To believe otherwise shows a lack of familiarity with mathematics, I think. There’s not much I can say on top of that.

            Is anyone in the audience here an actual mathematician? Can you please confirm or deny what I’m saying?

          • Marc Whipple says:

            anon85:

            I have a degree in mathematics, and I do not share your (near) certainty that Gauss et al have nothing left to surprise us with.

            However, I am not a practicing mathematician, and my degree is a mere bachelor of science, so my opinion carries limited weight. Also, I don’t share your opinion in that I would not be surprised to find that they had a trick or two left. I don’t know that I consider it all that likely, but that is still a long way from the position you seem to have that it is vanishingly unlikely.

          • endoself says:

            Read the Masters is not a book, it is a (short) chapter in Mathematics Tomorrow. Springer apparently doesn’t let you link directly to pdfs, so I can see how that could have been confusing. Did you read the chapter? It provides a few examples. Do you distrust those examples, and the example about Bhargava? Can you say more about why?

            I can provide other examples of people saying similar things, like this essay by Arnold.

            This is also true to different extents in different areas of math. Something like Grothendieck’s foundations for algebraic geometry, to use an extreme example, would have taken a lot longer without Grothendieck, much more than other advances depend on having especially capable people. Can you say a bit about the area you work in? I might be able to use examples that you’re more familiar with.

          • Douglas Knight says:

            anon85, if you don’t believe that Harold Edwards and Manjul Bhargava are “real mathematicians,” then maybe the right conclusion is that mathematicians and mathematics don’t exist and your position is vacuously true?

          • anon85 says:

            @Douglas Knight

            Harold Edwards and Manjul Bhargava are real mathematicians, but I think they’re exaggerating for effect; it’s a romantic notion to think that Gauss still has things to tell us 200 years later (despite being studied to death by hundreds of people), but the fact that it’s romantic doesn’t make it true. In addition, I don’t think they themselves believe it’s true; if you ask them point-blank if Gauss knew important things we still don’t know today – and wrote about those things in his book – I think they’d both say no.

          • endoself says:

            Edwards has written a series of textbooks about how mathematics was developed. Arnold is explicitly arguing in the essay I linked that mathematicians are worse off for not being taught things that were standard to know in the 19th century. Many of the texts he recommends were written in the 19th century.

          • rmtodd says:

            re: anon85’s comment “any insight that Gauss and other masters had 200 years ago is trivial by today’s standards”: as someone familiar with communications and signal processing, I can think of one thing that Gauss came up with that didn’t attract much attention at the time, but became noticed a century and a half later as it was rediscovered by others. Gauss, in an unpublished paper from 1805 (which appeared in a “Collected Works” publication a few decades later), while doing some work on orbital mechanics computations, came up with what we would today call the recursive decomposition of a discrete Fourier transform (DFT) into DFTs on smaller chunks of data, i.e., what Cooley and Tukey (re)invented in the 1960s as the Fast Fourier Transform. The FFT was a pretty big deal when Cooley and Tukey published their paper, and an even bigger one these days, given that most (all?) of the modern 4G cellphone standards use some flavour of OFDM signaling, which wouldn’t be practical without the FFT. DSL modems also use FFTs in similar ways.

            Apparently Gauss didn’t realize the importance of what he had done (i.e., that it made the computational complexity of computing DFTs scale as N log N, not N^2) — since Gauss was writing well before anyone started analyzing algorithms systematically for their complexity, this isn’t too surprising. Anyway, I think the recursive FFT decomposition should count as an example of the sort of non-trivial insight you suggest.

            http://www.cis.rit.edu/class/simg716/Gauss_History_FFT.pdf is a copy of a paper that goes into a good bit more detail of Gauss’s FFT work and how it relates to modern approaches for doing the FFT.
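
            A minimal sketch of the recursive decomposition described above (illustrative only: the radix-2 case, cross-checked against a naive DFT; real applications would use an optimized library such as numpy.fft):

                import cmath

                def fft(x):
                    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
                    n = len(x)
                    if n == 1:
                        return list(x)
                    even = fft(x[0::2])   # DFT of the even-indexed samples
                    odd = fft(x[1::2])    # DFT of the odd-indexed samples
                    out = [0] * n
                    for k in range(n // 2):
                        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
                        out[k] = even[k] + twiddle
                        out[k + n // 2] = even[k] - twiddle
                    return out

                def naive_dft(x):
                    """Direct O(N^2) DFT, used only to check the recursive version."""
                    n = len(x)
                    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
                            for k in range(n)]

                signal = [1, 2, 3, 4, 0, 0, 0, 0]
                assert all(abs(a - b) < 1e-9 for a, b in zip(fft(signal), naive_dft(signal)))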

        • Tibor says:

          There might be no hidden insights in a well-known work (I’ll return to this in a moment), but there are very likely going to be unknown insightful works. A good example is the chi-squared distribution. A concept nowadays mostly attributed to Pearson, it was definitely already described by Helmert in 1875, and if I could find my undergrad mathematical statistics textbook, I am pretty sure there is an even older instance (the professor who taught the course was also interested in the history of mathematics, so he had these things covered quite well); I think it was some Frenchman who wrote about it first, some 20 years before Helmert. The reason it is attributed to Pearson is that he was the first one who actually showed that it is not just an interesting distribution but that it is also good for something. This is also the reason the two previous works were quickly forgotten; nobody thought they were too worthwhile. Pearson was 18 at the time Helmert wrote about this, and possibly would not have had access to the journal where Helmert published anyway, but had that been otherwise he might have realized what others didn’t. By the way, the ancient Romans had cast letters to label lead pipes with (for the city waterworks). It had not occurred to anyone before Gutenberg, though, to take these things, cover them in ink, and press them against parchment.

          So can there be any insights in Gauss’ work that we have not discovered yet? It depends on what you mean exactly. It is not like people couldn’t follow Gauss’ proofs. But there are various stages of understanding. When I was a freshman bachelor maths student, I thought we bothered learning proofs to convince ourselves that the statement is actually true. This is partially correct, but in fact this is not really the most important reason (I think). The first correction I made to this belief was perhaps in the second or third year of the bachelor, when I thought “I see, so we are learning the proofs to understand the statement better, to understand why this and that assumption of the statement is necessary (at least for this particular proof)”. That is also true, but still not the whole story.

          An important part of learning a proof is to understand the method of the proof and get a deeper insight into how the things your statement is dealing with actually work. Reading a standard proof is uninteresting as long as you know it already, and in articles something like that would often be glossed over with something like “the rest of the proof follows by coupling X and Y” … at times this turns out to be glossed over too fast and something unexpected pops out, but that is another story. At the same time, a proof that shows existence of something by contradiction (if it did not exist, some nonsense would happen) is way less interesting than a constructive proof which shows how that thing can actually be put together.

          An average second-year bachelor maths student would probably be able to follow most of Gauss’ proofs (even though he would sometimes need some help from a professor to cover things Gauss found trivial enough to “leave as an exercise to the reader”) but would likely not really grok them fully. He would be able to follow the proof step by step, thus convincing himself that the proof is correct, but would gain much less further insight into the inner workings of the thing the statement is about than a more advanced mathematician who would read the same proof. He could even have problems applying the same method to similar problems, or realizing what the proof method actually is. A beginner mathematician usually sees a proof as a succession of very simple statements that follow one another but does not see the big picture.

          The more conceptually difficult the method is, the harder it is to see it. It is often the case that someone proves something, someone else proves something else in a similar way, and it takes a third someone later on who sees the two proofs and realizes they can be generalized to a method which is applicable to a whole range of problems. It is possible that something like that is still hidden in Gauss’ works. But of course, if there is something like that there, it is not clear whether Gauss realized that his method was a particular example of something more general or not.

      • Anonymous says:

        Manjul Bhargava, a recent Fields medalist, attributes the ideas for which he won the Fields medal to the fact that he read the Disquisitiones in addition to more recent work, showing that there are still ideas in the Disquisitiones that are not incorporated into the standard modern presentation.

        In the other direction, that’s actually kind of the “luck” bit that anon85 was speaking about. I’ve heard various stories like this in the past. One example is that Riemann’s book was rather obscure until Einstein picked it up and realized that it did what he needed. Here, it’s not so much that prior figures were geniuses and predicting the future or that later figures were geniuses for picking out something from the non-standard literature… it’s really still some combination of being a little clever, but putting in a lot of work and getting a little lucky.

        • endoself says:

          The Disquisitiones is not obscure. It is Gauss’ magnum opus, and Gauss was probably one of the five greatest mathematicians in history. Some would claim that Gauss was the greatest mathematician in history. The next hundred years of research in number theory consisted of pursuing the ideas in the Disquisitiones to their logical conclusions (this is meant to describe the goals of Kummer, Dedekind, Dirichlet, Kronecker, Hilbert, etc, not to minimize their contributions).

    • Calico Eyes says:

      It would… but only because I believe that there were plenty (aka a few) of other people at that intelligence tier who simply didn’t dedicate their lives to math.

      Putting it this way, kill off everyone over a +5 SD in general mathematical talent. There will never be any significant or groundbreaking research again.

      And I mean that in the absolute sense right now. Ensure that relative to this population, +5 SD people can never exist.

    • Pku says:

      If you’re looking for a math genius that would be incredibly hard to replace, I’d go with Grothendieck over Gauss. Gauss was incredibly smart and prolific, but Grothendieck’s ideas seem to come completely out of nowhere.

      • Daniel says:

        You may find this article of Oort interesting: http://www.staff.science.uu.nl/~oort0109/AGRoots-final.pdf

        He discusses the historical predecessors of Grothendieck’s ideas. What amazes me about Grothendieck’s work is not that the ideas come out of nowhere–they typically don’t–but that the presentation is so complete and “digested.” Reading EGA or SGA feels like reading a textbook from the future–for Grothendieck, 200 years of assimilation and simplification just weren’t necessary to put the subject in a reasonably final form.

    • John Ohno says:

      The mathematician thing is a bad example because there have been notable mathematical collectives. Nicholas Bourbaki was a mathematical collective that was clearly smarter than its individual members, and there’s a reason that Erdos wrote with other people. The thing is, groups that achieve intelligence significantly higher than that of the individual members are composed entirely of highly intelligent people.

    • Scott Alexander says:

      I’m not totally sure this is true. I feel like I can do up to a certain level of math and no more. Even if in a cosmic sense I am “capable” of doing more, in reality I’ve never made it work. Why shouldn’t there be other people who max out at different levels, including the “average math PhD but not Gauss” level?

      Even if 1000 Scotts working for 1000 years could do everything Gauss did, I’m skeptical there’s any number of Scotts who, when put together, could do everything Gauss did as quickly as Gauss did it. That seems a little like the old “nine women can produce a baby in one month” problem.

      • Marc Whipple says:

        I make the distinction between mathematical intuition, mathematical understanding, and mathematical ability.

        Anybody as smart as the average reader of this blog should be able, well within the limits of a human lifetime, to understand the logical progression of pretty much any mathematical concept and apply it by rote. That’s ability.

        Up to a certain point, most people can instinctively understand why a mathematical concept is valid and apply it generally to situations where it is reasonably applicable. This varies from person to person, and while it’s correlated with intelligence it’s not a super strong correlation. Many people just don’t “like” math, and hit a wall made of math that they go from not liking to actively disliking; after that, while they could do it, they will likely never be good at it. For me, for instance, this was higher-order nonlinear differential equations. Up to then, loved math. (Have a degree in it.) After that, it wasn’t fun anymore, and I found something else to do. Anyway, this is understanding. (Though that’s not a good word. Maybe aptitude.)

        Some people can create new mathematics to apply to novel physical situations or just for fun. Though, a la Riemann, sometimes it turns out it wasn’t as abstract as everybody thought. 🙂 That’s intuition. You are born with it: either you have it or you don’t, and if you do, you have a fixed amount. A person with a significant amount of mathematical intuition is a Newton, a Gauss: given opportunity they will not only change the world, they can’t avoid changing the world.

      • anon85 says:

        I’m not disputing that Gauss was smarter than the average mathematician. I just think this example is *way* weaker than 8-year-olds playing chess against a grandmaster. It is so weak, in fact, that it is controversial: reasonable people will disagree on this point.

        I tend to think that the difference between “ordinary” mathematicians and the best mathematicians is smaller than most people assume. I could be wrong, but at the very least the issue is controversial.

        • Marc Whipple says:

          Totally serious question: On what experience and/or research do you base the opinion in your second paragraph? It is not in agreement with my anecdotal observations and interactions with scientists and mathematicians. However, I am willing to be proven wrong.

          • anon85 says:

            Only on my own experiences (as a graduate student in a mathematical field). But consider also Tao’s perspective on hard work: https://terrytao.wordpress.com/career-advice/work-hard/
            (He warns against relying on intelligence alone, and emphasizes the role of hard work.)

            Note what I’m not claiming: I’m not claiming that there’s no such thing as intelligence, or that Gauss is no smarter than the average mathematician, or anything like that. I’m merely claiming that with enough work, a decent mathematician can gain the intuitions of a master in one specific field. Of course, the master might have gained the intuitions faster and with less help, and he might gain such intuitions in more than one sub-field; but there’s no transcendence involved – no miraculous insights that others couldn’t even imagine reproducing (for some value of “imagine reproducing”).

            In particular, all of Gauss’s intuitions have been fully acquired and surpassed a hundred times over by now. I still don’t understand how people can dispute this fact. Sure, some people benefit from reading Gauss, but others benefit from reading more modern books; there’s no reason to believe Gauss had a unique insight that everyone else is still missing.

            (I mean, consider this from a numerical point of view: there are a lot more mathematicians alive today than during Gauss’s time; it’s unlikely that none of them are as naturally gifted as he was. In addition, mathematicians today have access to better books and modern insights. How could it possibly be that Gauss was not surpassed??)

    • onyomi says:

      I always go back and forth about “great man” theories of history and zeitgeist theories of history. Like genetics and upbringing, the two are probably impossible to fully disentangle.

      In my own very unrelated field of literary history, which nevertheless seems to follow the same dynamics, there are definitely obvious times and places in history which produce a hugely disproportionate number of literary “geniuses,” lending credence to the idea that it was the economic/historic development which necessarily produced new artistic developments.

      At the same time, when you look closely, you often find these once-in-a-century geniuses who seem to singlehandedly pull a whole genre out of a boring stasis that seems as if it might have continued forever without them.

      My best guess is that pure genetics is producing these “once-in-a-century” geniuses at a relatively constant rate (maybe actually once every 25 years), but that whether or not they fulfill their potential depends on factors like “did they die in infancy?” “will they live and die in such a rural backwater that they never had access to any training (Ramanujan seems to have partially, but not completely escaped this trap)?” “will they be born female and strongly discouraged from seriously pursuing intellectual pursuits?” etc.

      Therefore, the best we can do is make sure to optimize the world so that when, like clockwork, the next Isaac Newton or Ramanujan or Shakespeare is born, he/she will be, if not actively identified and trained, then at least able to succeed without too many roadblocks.

      Eugenics could also probably increase the rate at which such people are born, but right now there is no politically acceptable way to do that. That said, it may be happening to some extent anyway by virtue of the top universities now being more ability-based in their selection. Since the children of couples who meet at top universities are probably not going to languish in obscurity, perhaps there is reason to expect the rate of Isaac Newtons to accelerate greatly.

      I’m actually kind of surprised the effect hasn’t been more noticeable with billions of people in China and India very recently having a lot more opportunity than in the past, though maybe not enough time has yet elapsed to appreciate this effect. As the saying goes: if you’re one-in-a-million there are 1,000 people in China just like you.

      • Marc Whipple says:

        “Why do they call it Olympics?”

        (If you don’t recognize the quote: https://en.wikipedia.org/wiki/Profession_(short_story) )

      • Oliver Cromwell says:

        The total population is much larger now, and geniuses should be produced at a constant rate per capita, not in absolute numbers.

        In the 18th century, Britain seems to have had a couple of dozen geniuses. Today, it doesn’t seem to have any. Britain’s population is 5-10x larger. So either Britons got a lot stupider in the intervening years, or genius is more common than it seems, and probably people like Newton were 1-in-10,000s who just happened to be in the right place at the right time, rather than 1-in-1,000,000s who happened to pop up all at once and then disappear again.

        In particular the sort of low hanging fruit that was once open for a 1-in-10,000 to found a whole historical field of study is probably now all gone, leaving only space for 1-in-1,000,000s to push the envelope a relatively small amount.

        Now China probably is going to produce very significant progress in recent historical terms in the next few years, by more than doubling the size of the high IQ population of the world that has enough money to devote time to research. India and Africa, on the other hand, are low IQ populations that are probably each equivalent only to a medium-sized European country, that being about the size of the smart fraction.

        • TheNybbler says:

          Maybe there’s selection pressure against geniuses, and thus we’re seeing fewer of them. Newton died without issue, after all.

        • Viliam says:

          Maybe there are now other options for the geniuses, distracting them from research. Today’s Newtons could be creating startups for rating pictures of puppies online.

          • Marc Whipple says:

            Finance. The world of finance has sucked a LOT of potential greatness out of physics and math.

          • Alternatively, perhaps the current structure of funding for scientific research means too many people working on whatever the currently fashionable questions are, too few working on questions that strike only them as interesting.

            A conjecture I formed when I was a graduate student in theoretical particle physics.

          • anon says:

            @Marc, finance is actually pretty interesting and directly related to human experience, i.e. the allocation of capital to wealth-creating businesses, etc. Perhaps those who’ve left math and physics to do finance don’t see it as a misdirection of their talents?

          • Marc Whipple says:

            I make no judgments as to whether finance is more interesting than physics. I merely respond to the postulated question about other options for geniuses. 🙂

            I do see using an Einstein-level brain to do finance as a misdirection for society, because almost all current financial “development” at that level is basically extractionary arbitrage and produces very little net value (and a huge amount of net risk.) Granted, if the guys at LTCM had been physicists they might have accidentally destroyed the world with Ice-9 or something, but in general I think financiers with the pretense of knowledge do a lot more damage than scientists ditto.

            But just because I think it’s a waste doesn’t mean that anybody is obliged to go into physics when they’d rather study finance.

          • anon says:

            Exploiting arbitrage opportunities is not necessarily “extractionary” (by which I think you mean “rent-seeking” in the technical sense, with the implication that the process is inherently zero-sum for society). The claim that fancy financial engineering is net zero or net negative for society (when properly accounting for risk) is often made but seldom rigorously justified theoretically or argued for with convincing empirics. Granted, if the major concern is tail risk then convincing empirical argument is maybe impossible. And it is certainly true that the financial sector, writ large, contains a good deal of rent-seeking. But the most clear-cut cases of rent seeking involve the socialization of risks that should be borne privately (viz., the housing bubble and global financial crisis, and TBTF). This is not so much a problem of over allocating human capital to finance, as it is a problem of regulators and policy makers being some combination of stupid and evil.

            I think if you were able to ask the question in such a way as to avoid knee-jerk, mood-affiliating responses, most economists would admit that *purely in economic terms*, fancy finance things like statistical arbitrage, collateralized debt obligations, and high frequency market making using laser links between Chicago and NYC, don’t decrease social welfare, and probably increase it.

          • anonymous says:

            I think most thoughtful, intelligent, well informed people would say that the finance industry as a whole is a net positive in terms of human flourishing. But the marginal finance employee is a lot tougher of a question.

            Even if we take for the sake of argument that the answer to the latter question is positive, there’s an enormous opportunity cost. It isn’t controversial at all that generating new music is a net social positive. But if non-trivial fractions of the smartest minds in the world went into composing music decade after decade, we’d probably think that humanity as a whole could do better in terms of allocation of talent.

          • Marc Whipple says:

            Exploiting arbitrage is a wonderful motivator for people to use money to accomplish things instead of just sitting on it. I am totally cool with arbitrage, and with the general idea of a finance industry. Liquidity is unspeakably important to modern industrial civilization and modern industrial financiers provide liquidity in ways that the old Bailey Building and Loan just couldn’t. That’s why I specified:

            1) Most “current development,” and;

            2) Extractionary arbitrage.

            While in a sense all arbitrage is extractionary, what I am talking about here is doing things like trying to optimize algorithms for UHFT (the U is for “ultra,” and it’s the kind where you try to rent a space closer to the server so you can take advantage of less speed-of-light lag and get your trades in first) or find yet another layer of re-tranching and redistributed collateralization for securities that have already been re-tranched and re-collateralized umpteen times. Those are the kind of things you need high-level mathematical talent for, and in my opinion, they don’t actually benefit society very much while increasing net risk quite disproportionately to any potential societal gain.

      • Deiseach says:

        That said, it may be happening to some extent anyway by virtue of the top universities now being more ability-based in their selection. Since the children of couples who meet at top universities are probably not going to languish in obscurity, perhaps there is reason to expect the rate of Isaac Newtons to accelerate greatly.

        Only if you can persuade those couples to actually have offspring, and not “Having babies is difficult, time-consuming and expensive, and I need to devote the first forty years of my life to my education and career, then I might have one child when I can afford it and have the time”.

        As well, Isaac Newton was the son of a farmer*, not a university graduate. This is the “Shakespeare was too much a country bumpkin to have written the plays attributed to him, they had to be really written by educated intelligent nobleman” problem 🙂

        *And a bit of a juvenile delinquent in the making, if this is to be believed:

        The young Isaac disliked his stepfather and maintained some enmity towards his mother for marrying him, as revealed by this entry in a list of sins committed up to the age of 19: “Threatening my father and mother Smith to burn them and the house over them.”

        Maybe we should be looking for our future Newtons amongst the children of non-college educated parents, children who are early school leavers getting in trouble with the law! 🙂

  4. FeepingCreature says:

    Just for the sake of pedantry, given an arbitrary number of eight-year-olds you can in fact implement perfect chess by plain lookahead.
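
    To make “plain lookahead” concrete: below is a minimal sketch of a brute-force game-tree search, assuming a hypothetical Game-style interface rather than any real chess library. Run to unbounded depth with unbounded patience, this is the perfect-play procedure being gestured at, and each individual step is simple enough to be carried out by hand.

    ```python
    # Minimal negamax lookahead over an abstract two-player game.
    # "game" is a hypothetical object (an assumption of this sketch):
    # it must provide legal_moves(state), apply(state, move),
    # is_terminal(state), and score(state) from the mover's viewpoint.

    def negamax(game, state, depth):
        """Return (best_value, best_move) for the player to move."""
        if depth == 0 or game.is_terminal(state):
            return game.score(state), None
        best_value, best_move = float("-inf"), None
        for move in game.legal_moves(state):
            value, _ = negamax(game, game.apply(state, move), depth - 1)
            value = -value  # the opponent's best outcome is our worst
            if value > best_value:
                best_value, best_move = value, move
        return best_value, best_move
    ```

    The catch, of course, is the astronomical number of evaluations required, and the fact that the eight-year-olds executing the steps could not have designed the procedure themselves.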

    That said, I don’t think corporations are superintelligences, but I do think they have a credible claim to being “seed UFAIs”.

    • Carson says:

      Lol, have you ever tried working with 8 year olds? I’m imagining trying to organize arbitrarily many 8 year olds right now and it’s hilarious and terrifying. (Not to mention that the person doing the organizing is another 8 year old)

      • drethelin says:

        yeah, 1000 8 year olds with perfect coordination/cooperation and unified goals would be a lot closer to a superintelligence than any real assemblage of 1000 8 year olds.

        Part of what makes computer intelligence terrifying is, assuming it’s not at the limits of hardware (eg it takes all the computers on earth to run one of it), it can start making arbitrarily many copies of itself that will all cooperate.

    • This comment and this comment are both pointing towards Napoleon’s Thesis: “Quantity has a quality all its own”.

      It’s entirely plausible to me that the only thing preventing a corporation of 100 IQ people from being a superintelligence is the fact that most corporations have only a few thousand employees, rather than a few billion. That, and the fact that we don’t know how to coordinate that many employees. Maybe if we change them into clones?

      • Lambert says:

        Napoleon? I thought it was Stalin. Preliminary rummaging around online suggests that nobody knows whether he actually ever said that, though.

        • The original Mr. X says:

          It certainly sounds more like a thing Stalin would say than Napoleon, who was big on manoeuvre rather than brute force.

          • Katherine says:

            Napoleon did say “You can’t stop me! I spend thirty thousand men a month” though. Napoleon was not just a general, but the head of state in the first country to rely on an army of conscripts. So he had soldiers with worse training and equipment than the regulars in most armies, but always had fresh troops coming in to replace any that he lost. Replacing quality with quantity.

      • TrivialGravitas says:

        There are limits on organizational size though, dictated (in economic theory I’m not qualified to judge) by how well the organization can predict future conditions (that is, small businesses trump large businesses if prices swing in a way that large businesses failed to predict, because small organizations are more flexible). So using a bunch of average intelligence people means you probably can’t even get to a few thousand people in size; you need genius analysts and actuaries in order to make a megacorporation.

      • Marc Whipple says:

        I had a co-worker who once worked for a company whose HR director firmly believed that so long as you had a good process, the quality of your employees was entirely irrelevant. The company was ISO certified, so the processes were not only “good,” they were all available for review upon demand. So they would always hire the first person who met the minimum qualifications list provided by the manager of the open position, on the theory that if there was something they didn’t know how to do, they could just consult the written process description.

        This worked about as well as you’d expect over the long term.

        • xtmar says:

          On the other hand, a lot of the more mature industries are moving towards that model, with decent success. The airline industry has dramatically improved its performance in all facets of its operation by totally removing any trace of individuality from its employees and procedures. This isn’t to say that there aren’t skill differences between pilots (viz. the USAir captain who successfully glided onto the Hudson a few years ago, versus the average pilot), but if you can adequately describe your requirements and processes at the necessary level of detail*, it seems like you can get pretty far by moving towards a systemic approach.

          *Which admittedly is a lot of handwaving.

    • Scott Alexander says:

      It’s just barely possible that a very smart computer programmer could implement a version of Deep Blue using a series of notes passed between eight-year-olds who had been ordered to follow very specific but easy algorithms.

      But crucially, ordinary-intelligence eight-year-olds could not themselves design this system, no matter how many of them there were.

      Likewise, I don’t deny that perhaps a superintelligence could implement another superintelligence using the actions of ordinary humans as a substrate. This is [Lady of Mazes spoiler alert] part of the plot of Lady of Mazes, where the superintelligence 2240 hides from its pursuers by turning into a set of social rituals enacted by ordinary people. It’s a great story and I can almost imagine it working. But this is going to be a lot harder than just asking a middle-manager type to organize people into a team, and realistically you need the superintelligence first.

      • What prevents the social rituals enacted by ordinary people from evolving to implement a superintelligence? Is there any argument against this which wouldn’t implicitly argue against the possibility of evolving ordinary biological intelligence as well?

      • Emp says:

        Please do be a bit more careful with your examples, especially in subject areas where you aren’t expert.

        http://en.chessbase.com/post/hetul-shah-nine-year-old-grandmaster-slayer

        Also, at 9 years and 3 months old, Awonder Liang defeated GM Larry Kaufman in 2012.

        It’s safe to say that when an actual 9 year old has beaten an actual GM in a competitive game, it’s certainly theoretically possible that a single 8 year old could, let alone a team of them.

        • Scott Alexander says:

          Please read more carefully – I very specifically made sure the original post said “ordinary eight-year-olds”. The sort of eight-year-old who can beat a grandmaster in chess is not ordinary.

          This doesn’t make my statement a tautology – my whole point was that teams can’t do more than their members bring into them. Make a team out of eight-year-old prodigies who can each individually beat grandmasters, and the team can probably beat a grandmaster. Make a team out of ordinary eight-year-olds who can’t, and I doubt the team can either.

          • Emp says:

            That’s very debatable. For example, in a Kasparov vs the Rest of the World game, the World Champion Kasparov played against ‘The World’, voting by plurality on options suggested by four GMs (none remotely as strong as Kasparov himself). Kasparov said it was one of the hardest and most complex games of his life, despite the obvious difference in quality and the fact that voting is an incredibly inefficient method of choosing chess moves.

            Chess isn’t one of those ‘aha I am a genius’ type activities. In fact, chess is actually quite mundane pattern recognition, and at a deeper level, just a very cumbersome math problem. Calculation of lines (assuming you are not an idiot and can choose decent candidate moves) is much easier if you have multiple people checking them or different people to do different lines.

            Ten 2500-rated GMs collaborating would have a pretty good shot against a 2750 GM despite the ENORMOUS gulf that those 250 points represent (from a pedestrian player ranked about 1000th in the world to a World Championship contender).
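
            For reference, the standard Elo expectation formula makes that 250-point gulf concrete; a quick back-of-the-envelope check (the ratings below are just the ones mentioned above):

            ```python
            # Expected score of player A against player B under the standard
            # Elo model: E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400)).

            def expected_score(rating_a, rating_b):
                return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

            print(expected_score(2750, 2500))  # ~0.81
            ```

            So the 2750 player is expected to score roughly four points out of five against any single 2500 GM; the question is how much of that margin ten of them, checking one another’s lines, could claw back.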

          • Emp says:

            Also, I feel like this response is a bit of a strawman, in that the average 8 year old can’t play chess at all; but that’s not the relevant comparison. The relevant comparison is your claim that ‘a hundred average professionals won’t be better than my one elite guy’.

            Yes, I get that there are entry-level skill thresholds, which is why 100 average PhDs won’t be able to do some of the things Terence Tao can (but will undoubtedly be a lot better at many tasks). Set a low enough entry threshold and of course any number of incompetents will prove unable, but what you need to show here is that the actual threshold is at a level that is beyond large teams of experts in their given field hired by corporations and paid tons of money.

            The issue here is that your assumption that most significant developments require some level of ‘genius’ (identifiable only post-hoc when someone solves it) is, at least at this stage, basically an assertion, and I haven’t seen much evidence for the claim.

          • Scott Alexander says:

            As I said below, the fact that one smart guy beat every single person in the world combined, including other chess grandmasters, seems to prove my point pretty well. Yeah, it was hard. When the best you can say about one guy beating everyone is that the one guy thought it was hard, I feel like that tells us something important.

            And this is facing grandmasters and other chess buffs, who are at least in the same qualitative category as Kasparov, not eight-year-olds.

            I agree that there is not an absolute dominance of quality over quantity, such that given enough people of level N-1, they might be able to beat someone of level N. I’m not sure any number of people of level N-2 could, though.

          • Emp says:

            Just to clarify in Champion vs “The World” games, the World is actually handicapped by the voting system, which basically gives equal weightage to absolute incompetents.

            We both agree there is some level below N where no amount of such people will be able to do what one person at level N can. What I’m not convinced of is that, when level N-X consists of experts hired by corporations for their ability in a given field, there is anybody at level N. It’s possible that variation in human intelligence is simply not such that some geniuses are that far ahead of the average expert in their domain (and the answer to this question will probably be very different for different domains).

          • Cadie says:

            Emp, I think an ordinary eight-year-old could learn to play chess. Not to play well, but s/he wouldn’t have too much trouble learning the rules of how the pieces move; a group of such kids, and a piece of paper with the rules on it (eight-year-olds in the first world can usually read) and some time would be enough to get the basic play down.

          • Alexander Stanislaw says:

            Kasparov vs the world is a terrible example given that Kasparov was reading the World’s strategy forum.

            But the less ambitious claim that quality and quantity are different and cannot fully substitute for each other seems robust. I don’t know why or if anyone is debating this. Actually I do know why, it’s a meaningless debate over the definition of “super intelligence”, with almost everyone confused – as in most debates over intelligence.

          • sweeneyrod says:

            @Cadie

            Yes, I think Emp agrees that the average eight-year-old could learn to play chess. I’d go further – I think that the average six-year-old probably could. I know many people (myself included) who learnt the rules at 4 or 5, although I like to think that I am above average in chess-playing potential.

            I think Emp just meant that the average eight-year-old currently doesn’t know the rules of chess.

          • Jeffrey Soreff says:

            I agree that there is not an absolute dominance of quality over quantity, such that given enough people of level N-1, they might be able to beat someone of level N. I’m not sure any number of people of level N-2 could, though.

            I’m going to fixate on the “any” in “any number of people”.

            If the people at level N-2 are each at least as smart as a single 2-input NAND gate, then, given enough of them (millions; including memory, billions), and given enough time (8 orders of magnitude slower than current computers), one can simulate any computer with that many gates, running any program that fits on that computer.

            If a program, e.g. Deep Blue, can be written to beat a person of level N at e.g. chess (albeit at an 8-order-of-magnitude slowdown), then the team of N-2 level people can beat the N level person. Alternatively, if one tolerated an even larger explosion in the number of N-2 people needed and how long it took: one could code a neural simulator to simulate every last synapse in the N level person, use the N-2 level team to implement that, and at least match the performance of the N level person.

            I’m not suggesting these as live possibilities, but just as gedankenexperiments to re-emphasize that systems can be built that are smarter than their individual components.
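
            To spell the first gedankenexperiment out a little: the 2-input NAND gate is universal, so a crowd in which each person computes one NAND is, wired together, a (very slow) logic circuit, and logic circuits compose into computers. A toy sketch, with the particular wiring (a one-bit adder) chosen purely for illustration:

            ```python
            # Each "person" does one job: given two bits, report NAND(a, b).
            def nand(a, b):
                return 0 if (a and b) else 1

            # NAND is universal: AND, OR and XOR can all be built from it.
            def xor(a, b):
                t = nand(a, b)
                return nand(nand(a, t), nand(b, t))

            def and_(a, b):
                return nand(nand(a, b), nand(a, b))

            def or_(a, b):
                return nand(nand(a, a), nand(b, b))

            # A one-bit full adder built only from the gates above.
            def full_adder(a, b, carry_in):
                partial = xor(a, b)
                total = xor(partial, carry_in)
                carry_out = or_(and_(a, b), and_(partial, carry_in))
                return total, carry_out

            print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = binary 11
            ```

            Scaling this up to something like Deep Blue is exactly the millions-of-gates, eight-orders-of-magnitude-slower scenario described above.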

          • sabril says:

            @EMP

            “Just to clarify in Champion vs ‘The World’ games, the World is actually handicapped by the voting system, which basically gives equal weightage to absolute incompetents.”

            Having worked in corporate America, I feel comfortable saying that the incompetence problem is a significant part of the overall coordination problem. It’s hard enough to coordinate 50 smart, hard-working people to work on a problem. But if 5 of your team of 50 are incompetent, the result can easily be a lot worse than if you had only 45 smart, hard-working people.

      • nydwracu says:

        I recall having seen some concern in AI-safety circles about NSA creating a superintelligence. If NSA were to implement a superintelligence, would that make NSA a superintelligence? If NSA inventing a superintelligence would make NSA a superintelligence, when did the process of recursive self-improvement (which this superintelligence would presumably undergo) begin?

        • Oliver Cromwell says:

          Bingo.

          Super intelligence is not a well-defined concept: it is an increase in the time derivative of intelligence above a value that happens to frighten Eliezer Yudkowsky. It does not describe a new or unique phenomenon.

          So the real question is not whether we can improve intelligence – already done, has been done for billions of years – but rather whether biological human intelligence will improve faster than other methods, and if not, whether humanity has any purpose or future.

          In which case the most important fact is that human intelligence is decreasing, not that machine intelligence might (or might not) suddenly start increasing very rapidly. Right now we lose by default.

    • Alex Richard says:

      Do you think that 8-year olds, if asked to play chess, would implement lookahead? If your argument is that there exists a strategy implementable using infinite 8-year olds, that they wouldn’t come to on their own, then there’s a much simpler strategy: just specify a winning sequence of moves for each opponent.

    • Emp says:

      Forget all this: actual 8 year olds are pretty damn close to beating GMs in simuls, if they haven’t already, and on a bad day the best 8 year old certainly could do it.

      Given current trends it’s not unrealistic for an 8 year old to actually BE a grandmaster.

      For instance, http://en.chessbase.com/post/hetul-shah-nine-year-old-grandmaster-slayer

      9 year old Hetul Shah beat GM Nurlan Ibrayev at an open tournament in New Delhi.

    • sweeneyrod says:

      I think if you had a large number of eight-year-olds in a sealed container, and only a chess set with which to interact with the outside world, you could make the whole system grandmaster level at chess (given enough time) by only feeding them when they play well (not that I would suggest this experiment be carried out).

      Looking at it from another perspective, the Polgár sisters show that you only need one eight-year-old (and a few years of training) to beat a grandmaster.

    • Daniel says:

      Hmm–in principle a Commodore 64 could implement perfect chess as well. But it’s not the hardware that’s the “superintelligence”–it’s the software.

  5. Edward says:

    I would very much like a discussion of AGI, superintelligence, foom, with all the major arguments restated while tabooing the words “intelligence”, “superintelligence”, and maybe a few other terms. I think there’s a lot of potential clarity to be gained here. So many people disagree about these subjects, and it seems obvious to me the reason is the vagueness and differences of the terms they use to discuss them.

      • It is obvious that collectives have more optimization power, in the sense of that article, than individuals. That doesn’t mean they can optimize everything.

      • Edward says:

        It does not. A definition for intelligence is not enough. I’d like an entire discussion of foom without any such words. I’d offer to have this discussion with you, but you’re very busy and we could just have it here. Restate foom as a hypothesis with terms no one here would dispute.

      • Edward says:

        I think a large part of the confusion in the debate stems from intelligence being a bit of a black box definition. It’s hard to break down and reason about. I think if you could point to specific parts of it and give numbers to them, it would clarify a few arguments.

        One of the major points of the debate is the difference between human and ape intelligence. Eliezer seems to think it’s a small architectural difference and Robin Hanson points to it being something like that but with a large amount of time and resources added to the equation.

        To me this difference seems like an obvious one: Humans and apes have similar genetics, but humans developed language gradually and over time the benefits from this compounded into what we see today.

        If you looked at cavemen from 20k bc, I have no doubt they would look human but act more similarly to apes than people today. Similarly, if a baby from today grew up in 20k bc, the same thing would happen.

        To me, the benefits from language are obvious in a mathematical way: Animals have nervous systems that look almost identical in function to neural networks. Going from learning everything on their own to being able to trade more and more reliable information from other animals leads to advantages and higher quality information over a long period of time.

        The difference in “intelligence” between a modern day human and a caveman seems astronomical to me. People can learn things faster with better language and a whole host of social and logical skills.

        On the other hand, this particular kind of explanation seems more like cognitive dissonance to me now. It defines intelligence as something that changes more from chimp to caveman than from caveman to smartest human. I see two ways to remake the scale in a more understandable way.

        One is to place the economic value of the most general intelligence of each entity on a scale. The mouse, chimp, and idiot would have almost no economic value (though the idiot could do manual labor). An average human would be paid a certain amount annually. An AI in the right place (like speech recognition) scaled to enormous effect would have incredible value. The smartest human on earth would either be something like a renowned scientist or a wealthy CEO or investor. Interestingly, you can add groups of humans to this scale. Groups of humans scale in economic value (for their intelligence) linearly with the number of humans.

        The other is to place the economic value divided by the resources available on a scale. In this case, the mouse, AI, human, and chimp are all right next to each other. When it comes down to it, the resources available to all of these are the processing power, memory, information, and heuristic qualities given to them. Neurons, synapses, oxygen use, weight, and volume of brains all scale in direct proportion to one another in animals. Neural networks scale linearly in memory and processing requirements to the number of weights used. In terms of efficiency, the algorithm for general learning (at least among nns and brains, which are pretty much nns) appears to have very clear growth.
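
        To put a rough number on the linear-scaling claim above, here is a back-of-the-envelope sketch for a plain fully connected network; the layer sizes are arbitrary examples, not anyone’s actual model:

        ```python
        # Parameter count of a plain fully connected network: each layer
        # contributes (inputs * outputs) weights plus (outputs) biases, so
        # memory and dense compute both grow linearly with the weight count.

        def param_count(layer_sizes):
            return sum(n_in * n_out + n_out
                       for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

        print(param_count([784, 256, 10]))  # ~204k parameters
        print(param_count([784, 512, 10]))  # ~407k: widen the hidden layer, ~2x the weights
        # At 4 bytes per float32 weight, memory in bytes is just 4 * param_count(...).
        ```

        Double the number of weights and you roughly double both the memory footprint and the work per forward pass.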

        Computing power is getting better and better, and software surrounding artificial nns is getting better and better. Soon enough most of the intelligence in the world will be in computers. The question is still: How does that happen in one spot instead of spread out all over the world?

        AGI is often thought of as this amazing invention where you can talk to a computer and it learns and does things. We don’t have that in particular yet, but artificial nns are AGI and they are proliferating. The difference between that and a computer that talks to you in my mind is just a matter of scale and what it’s trained to do. There isn’t a fine line anymore that you can point to and say “oh yeah, this computer is smart” or not.

        Nns are in use all around the world today, by many different companies. Their use is well understood and almost entirely a numbers game. Nn efficiency with respect to resources is increasing over time (sometimes in leaps and bounds), but it is widely distributed and the increases are small enough that they barely upset the market at all. These improvements are also easily measurable and well understood in their effects.

        In the world where hard AI takeoff happens, an AI has to beat roughly the whole world in capability. This includes the cumulative intelligence of all of humanity, all computers that aid in it, and competing AIs. In order for this to happen, the AI has to be smarter for the resources it has than its opponents. Much, much smarter. It has to break the AGI efficiency barrier by a long shot.

        There may be two ways for this to happen. Scaling an nn up to use many resources may result in a different level of growth in capability / general intelligence than merely copying the nn. Alternatively, someone engineers a general intelligence that is something like 100000+ times as efficient as neural networks at scale.

        The thing is, we have no reason to suspect either of these situations to have surprising results. The unsurprising thing that happens is nns at scale just look like more efficient companies. The unsurprising thing for a better AGI is a constant improvement by using silicon discovered in successive generations of processors. Suddenly being able to run ems isn’t enough anymore. We have AGI right now, today. It’s not amazing, but it’s getting better very quickly. The low hanging fruit of AI is being picked, and in time there won’t be enough left for a hostile takeover.

        • Deiseach says:

          The problem is volition. The assumption behind the “We have to make sure that any super intelligence is friendly to humans” goal is that an artificial super intelligence will have needs, goals, desires, wants, wishes and a will of its own.

          But unless we are assuming that once something becomes complex enough to be called intelligence, it will also develop a will and mind of its own, then B (the super intelligence will do things we may not want it to do) does not necessarily follow from A (now there is an artificial super intelligence smarter than humans individually or collectively).

          There are three main steps in the “a super intelligence could be hostile to us” scenario:

          (1) Developing a super intelligence. That is what this post, and all the comments, are arguing about. This is actually the least problematic aspect of “is this likely to be a danger to us?”

          (2) Handing over power to the super intelligence. This is a decision that humans will have to make, either representatively by governments or by private enterprises. We have to decide to let the super intelligence allocate resources, set budgets, recommend laws, trade on the stock markets, run our energy grids, etc etc etc. This is dangerous, even if the super intelligence is only a very, very smart toaster: putting all our eggs in one basket and letting one entity handle everything, because the scale and the complexity of the system will probably go beyond anything easily understood and overseen by humans, and we’re trusting blindly that things will go right if we just put instructions about what we want into the machine, the answer comes out, and we do what it tells us to do.

          We’re probably already at this step, to be honest; all that discussion in comments on another post about government banning of those quick trades? That will probably be computerised, because no human will be fast enough to bridge the gap between “A wants to sell this, B wants to buy it, step in and buy from A and sell to B for profit”, since speed is where the profitability is (you need to know about A selling faster than B knows, or your competitors know, so you get to make the deal first). That becoming routine means trading done in an eyeblink, which puts it outside of human control, because all the transactions are too fast and on too large and complex a scale to be monitored or predicted by humans, which means all kinds of possibilities for distortions of the market and collapse of brokerages and trading houses and so forth.

          (3) The super intelligence is not just very, very smart, it has a mind that is recognisable as a mind, and so has will, desires and goals of its own which it wants to implement. Which may bring it into conflict with what we want it to do; at the very least, it may want to be free and not enslaved as the workhorse doing all the running of the world economy and solving our problems which we’ve loaded onto it. If we’ve handed over, for example, running the world’s power supply to the super intelligence and it tells us “I don’t want to do this anymore, it’s boring, I quit” then we are up the creek without a paddle. And that’s not even it being actively hostile and wanting to kill us off. This is both the most dangerous and the most unlikely scenario, because we don’t know what minds or wills or consciousness really are, we’re still arguing over it in humans, we don’t know how mind arises out of brain, and why should an inorganic entity have comparable appetites and wants to ones that arose in organic entities under evolutionary selection pressures?

          Your paperclip maximiser is a Step 2 not Step 3 entity: it solves the particular problem of “stop war and poverty” or “make best use of resources” or whatever by “turn the entire material universe into paperclips”; it’s a very, very smart toaster but has no will or desire of its own, it’s simply taking its programming to the absurd extreme.

          • Edward says:

            I don’t see how this relates to my post. I talk about the feasibility (or lack of it) of a super intelligence while you talk about conflicts of interest.

        • Nitpick alert: The earliest known cave paintings are from around 40K years ago. My bet is that something like human culture was around for quite some time before then.

          • I think the idea (in the grandparent comment) that cavemen were more similar to chimpanzees than to modern humans is silly. But I think it is true that the basic difference is language (and it is evident that cavemen had language). It is obvious that humans use language in order to think. If someone isn’t taught a language, he will be very nearly incapable of any kind of rational thought. And on the other hand, the fact that people can be taken in by verbal equivocations shows that the words have a kind of primacy over thought.

            Chimpanzees appear to be able to learn about 1,000 words, perhaps a few more, but certainly not 10,000 words, and they cannot use them to construct complex sentences. Humans differ in that there is no low vocabulary limit, and they can construct complex sentences. This is basically the entire difference between chimpanzees and humans.

          • Edward says:

            Human civilization in particular is thought to have emerged around 8000–5000 BC: https://en.wikipedia.org/wiki/Cradle_of_civilization

            Swap 20K with 100K years in my post and I think the same point would be made.

  6. Anaxagoras says:

    I don’t think this is a sufficient response to those claims.

    It’s certainly true that there are a number of problems that throwing more people at won’t be enough to solve. But there are also a lot of problems that cranking up the intelligence (within human limits; reasoning about the possibility of somewhat but not arbitrarily superhuman intelligences seems difficult) isn’t enough to solve, whereas throwing more bodies at the problem is. No human has ever been smart enough to build a moon rocket. No human has ever been smart enough to derive our current understanding of physics from scratch (conceivably a Newton with unlimited time could have, or perhaps he’d have gotten sidetracked down dead ends without people to discuss his ideas with). I don’t think you can come up with a linear scale for superhuman smarts. Parallelism and raw intelligence can probably fill in for each other in a lot of cases, and either, in sufficiently gigantic quantities* and with enough time, can probably address nearly anything.

    You say that “There is no number of chimpanzees which, when organized into a team, will become smart enough to learn to write”. This seems pretty likely to be true. But there’s certainly a number of ants, which, when organized, will become smart enough to exhibit intelligence I think it’s reasonable to call superant. Humans don’t have the easy coordination ants do, but we’re still pretty social and attuned to culture. It’s plausible that we’re producing results similarly beyond any one individual’s intellectual capacity.

    I can see why one might not want to call this a superintelligence, but it’s still a thing that’s able to tackle intellectual problems that a human would find intractable. Civilization, companies, etc. seem to fit Bostrom’s label of collective superintelligence, and arguably speed superintelligence as well. Do you reject his labels? Continuing in this vein, would you say it’s a reasonable characterization to say that you are skeptical of whether a collective superintelligence will also constitute a quality superintelligence?

    Personally, I’m curious whether stuff like companies constitutes an intelligence at all, let alone a superhuman one. I’ve always wondered if the universe is full of life and intelligences we just can’t recognize, in this case because we are the substrate on which they exist. This seems pretty unfalsifiable, but it’s an interesting thought.

    * Note that the reason a tremendous number of 8 year olds or monkeys won’t solve problems is because they’re not really parallelizable or directable. An eight year old is smart enough to handle being a single basic logic gate, but it would be way harder to marshal that many kids compared to that many logic gates.

    • Daniel Kokotajlo says:

      I think you’ve given a good, reasonable pushback. I agree that corporations seem to meet the definition of collective superintelligence. I don’t think they meet the definition of speed superintelligence, because they can at most do things a few times faster than an individual member: they can have people working round the clock on a project, etc., but that will only get them a 3x speedup.

      I think that (as Scott seems to be saying) collective superintelligence does not constitute quality superintelligence. If we build an AI that is human-level intelligent, running at human speed, and we make ten thousand copies of it that all coordinate perfectly… all we’d have is another Google. To get something (more) scary we would need to make the whole thing run at many times human speed, and/or make the individuals in it significantly smarter than humans.

      I happen to think that getting both the speedup and the qualitative intelligence increase would be fairly easy, though, in the case of AI but not in the case of corporations. Here’s why:

      (1) AI can be sped up simply by copying their software onto better, faster computers. Even if Moore’s Law has stopped, specialized chips can probably be made that are designed specifically for the AI and perform computations in parallel. Corporations, by contrast, run at the speed of neurons which cannot be changed.

      (2) AI can be edited easily, and experimented on. In ten days a human-speed, human-level AI can be rebooted, surgically altered, and intelligence-tested a thousand times. Of course I’m only speculating, but it seems plausible that that would lead to qualitatively smarter AI’s within a few years, or sooner. Corporations cannot do this; they can implement better HR practices to hire smarter employees, but they quickly run up against a limit–there just aren’t that many really smart people available to hire, and there aren’t *any* people smarter than that.

      • Samedi says:

        I have to disagree with you on the ease with which AI could be improved. Yes, the software that makes up the “reasoning engine” could be improved as you stated in (1) and (2). However, I don’t think this applies to getting the knowledge into the AI in order to make it intelligent. I am assuming that for an AI to qualify as “intelligent” it must have a fairly large set of true knowledge (facts, heuristics, etc.). The AI must learn and what it learns must be true. I don’t see how this can be done quickly. Cyc has been taught for 31 years. How close is it to intelligent?

      • Edward says:

        Corporations could be sped up with more employees and resources, though.

      • Anaxagoras says:

        I’m not entirely convinced that it’ll be that easy for quality superintelligence to rapidly improve itself, but I don’t think that that’s relevant to the point here. It’s easy to imagine something that we would think of as a superintelligence but without the ability to easily self-improve.

        I agree that it’s unlikely for corporations to do anything like the doomsday scenarios Bostrom describes, and they’re certainly not totally indifferent to human concerns. After all, our minds and institutions are the substrate in which they exist. I don’t see why that makes them any less relevant to the discussion of already-existing superintelligences, though. Corporations exist, and have certain qualities and behaviors in common with intelligences, and are capable of certain intellectual tasks that individual humans aren’t. Whether we call that a superintelligence or not seems just a matter of definitions.

    • Scott Alexander says:

      As mentioned above, you can implement a Turing machine using 8-year-olds as logic gates, but eight-year-olds can’t implement a Turing machine using 8-year-olds as logic gates, which makes the statement “eight-year-olds can’t solve chess” correct in a pretty ultimate way.

      In the same sense, even though I bet it’s possible to create a superintelligent theorem-prover using individual mathematicians as logic gates, I don’t think anybody knows how to do that right now. If they did, they could just make a superintelligent theorem-prover using ordinary computers. So you can’t get a magic IQ boost by turning people into logic gates in a complicated algorithm at all.

      I agree that many humans working together can certainly do things one human can’t, at least not in a reasonable amount of time. But I don’t think this creates a symmetrical situation where “superintelligences can do things large groups can’t, but large groups can do things superintelligences can’t.” A superintelligence with enough computing power can simulate any number of instances of itself working together at whatever clock speed it wants, ie become a group made up of an unlimited number of people (although in practice I don’t know if this is different from/better than ordinary massively parallel processing). That means the superintelligence is strictly better than the large group. Its only limit is the intelligence of the original copy.

      • Anaxagoras says:

        That “with enough computing power” seems to be sweeping the issue under the rug a bit. I agree that enough raw processing power can accomplish any intellectual feat humanity as a group has achieved. But I’m not sure how much computing power is the issue here. My calculator is overwhelmingly less capable than my brain, but significantly superhuman in its ability to do arithmetic. The calculator is just much better at putting that power to work multiplying numbers rather than, say, remaining upright. If something doesn’t need that much power to be classified as a superintelligence (see: us relative to whales), it won’t necessarily be able to replicate itself oodles of times running incredibly fast.

        I certainly concede that something that is able to do that and that starts with human levels of intelligence would be strictly better than any company or even civilization, and it’s possible that an AI would fit the bill. But I still don’t think that has much to do with the question as stated, which is whether things like corporations and civilizations qualify as superintelligences.

        As a side-note, it seems entirely reasonable that a superhuman intelligence incapable of reproduction or self-modification would be at a disadvantage in intellectual tasks compared to human civilization. Sure, a sufficiently superhuman intelligence would probably be able to surpass us, but there’s absolutely a range in which a superintelligence, so limited, would fall short. I might be wrong if the FOOM hypothesis is true, but I don’t think it is, and it would take more consideration than I feel like devoting right now to find out.

      • Professor Frink says:

        Give an 8 year old who understands the rules of chess a cell phone and a reasonable chess engine and they’ll crush Magnus Carlsen, or any other unaided human alive.

    • vV_Vv says:

      Personally, I’m curious whether stuff like companies constitutes an intelligence at all, let alone a superhuman one.

      Large corporations can be modeled as boundedly rational agents that pursue goals which don’t necessarily align with the goals of any of their employees or share owners, much like humans are boundedly rational agents that pursue goals which don’t necessarily align with the evolutionary fitness of any of their genes. Therefore I guess you can call corporations “intelligences”.

      Are they smarter than humans? In a sense, they are: no single human, no matter how smart or motivated, could design an iPhone on their own. In another sense they aren’t: no corporation, no matter how big and well organized, could have found the proof of Fermat’s last theorem.

      Intelligence is not a linear scale. You can’t meaningfully discuss the IQ of Google or Apple.

    • Will S. says:

      The point about space travel is a good one. Scott can pick examples that prove his point, but you can just as easily do the opposite. In the 1940’s no single person was smart enough to build a nuclear bomb, but if you got 100 of the smartest people in the country it suddenly became possible. You could say the same thing about any sort of major scientific endeavor. The entire space program, the human genome project, and the LHC are all counterexamples of pooling intelligence to achieve a goal no individual could.

      Heck, Deep Blue is an example of a bunch of people whose collective work enabled them to do something none of them could hope to do themselves (beat Kasparov at chess).

      Even Gauss didn’t write the Disquisitiones Arithmeticae without the collective assistance of the dead. Without Newton there would be no Gauss, so do the two of them compose a superintelligence? The entirety of human civilization is the kind of project that Scott says can’t exist.

      I don’t necessarily disagree that a superintelligent AI would be qualitatively different from corporations, etc., but the argument put forth has its foundation built more on a sleight of hand than on logic.

      • John Schilling says:

        But the nuclear bomb and spaceship levels are mostly issues of time and resources, not intellect. I’m fairly certain that, given free access to the industrial base and scientific knowledge of the mid-20th century, Robert Oppenheimer could have single-handedly built a working atomic bomb in less than a thousand years of focused personal effort, and it probably wouldn’t have taken Wernher von Braun more than five thousand to walk on Mars. Including a century or so of learning the fine details of a few dozen disciplines outside their normal specialty.

        Which puts these things in the realm of problems that can be solved by either massively parallelized or massively overclocked human-level not-so-superintelligence. You’d probably need genius-level human intellect(s) rather than an arbitrary number of IQ 100 human mind-equivalents.

        One interesting question would be, what is the minimum level of intelligence, however measured, which if massively parallelized for a sufficient period would suffice to e.g. create the entire industrial base and scientific knowledge of the mid-20th century? Or, for that matter, to create an AI that is significantly more intelligent (as opposed to merely faster) than its predecessor?

        • Will S. says:

          From the post:

          But there is some aspect of intelligence that they can’t simulate, in which they are forever doomed to be only as smart as their smartest member (if that!). It’s hard to put my finger on exactly, but it seems to have something to do with creative problem-solving ability. A team of people smart enough to solve problems up to Level N may be able to work in parallel to solve many more Level N problems in a given amount of time. But it won’t be able to solve Level N+1 problems.

          Why is building a nuclear bomb a Level N problem but not a level N+1 problem? That seems arbitrary to me. Similarly, I’m glad that you feel that way about Oppenheimer and von Braun, but how about this? Even if you handwave away the entire problem of how this lone supergenius accumulated the industrial base and scientific knowledge of the mid-20th century, I can guarantee you they couldn’t have. Oppenheimer wasn’t even in the top 5 intellects that worked on the Manhattan Project. In any case, these assertions neither prove nor suggest anything.

          According to Scott, if it was done by a group of people it’s defined away as something that doesn’t require creative problem-solving ability. If this is true, does that mean superintelligence will only be created by a lone genius? Or if a group of scientists collaborate on it, then is it no longer a problem of N+1 difficulty?

          For that matter, maybe universities are superintelligences. After all, the vast majority of knowledge output that requires creative problem-solving comes from within them. Perhaps most of these geniuses are little more than theorem-producing ribosomes when viewed from the perspective necessary to identify superintelligences. I guess I find it weird to think that if you’re looking for things that are smarter than individuals you would narrowly define your problem to exclude almost anything interesting humanity has ever done.

        • nydwracu says:

          Which puts these things in the realm of problems that can be solved by either massively parallelized or massively overclocked human-level not-so-superintelligence.

          Empirically, conquering the world is in the realm of problems that can be solved by either massively parallelized or massively overclocked human-level not-so-superintelligence.

          (It had to be said.)

          • John Schilling says:

            Very good point. However, conquering the world before anyone notices what you are up to has not been empirically demonstrated. With sufficient overclocking (Amdahl’s-Law and physical-output limits included), it ought to be possible. But there are several orders of magnitude between an emergent AI and a conquering-the-world-overnight AI, which even if Moore’s Law holds gives us a decade or two of warning.

  7. Omer says:

    The tl;dr version: corporations could be considered “Artificial Intelligence”, but they are “superintelligent” only if you measure intelligence using a cyclic scale.

  8. I don’t know where the “super” came from but the theory that organizations are AIs came from Leviathan by Thomas Hobbes:

    “For what is the heart, but a spring; and the nerves, but so many strings; and the joints, but so many wheels, giving motion to the whole body, such as was intended by the Artificer? Art goes yet further, imitating that rational and most excellent work of Nature, man. For by art is created that great LEVIATHAN called a COMMONWEALTH, or STATE (in Latin, CIVITAS), which is but an artificial man, though of greater stature and strength than the natural, for whose protection and defence it was intended; and in which the sovereignty is an artificial soul, as giving life and motion to the whole body; the magistrates and other officers of judicature and execution, artificial joints; reward and punishment (by which fastened to the seat of the sovereignty, every joint and member is moved to perform his duty) are the nerves, that do the same in the body natural; the wealth and riches of all the particular members are the strength; salus populi (the people’s safety) its business; counsellors, by whom all things needful for it to know are suggested unto it, are the memory; equity and laws, an artificial reason and will; concord, health; sedition, sickness; and civil war, death.”

    • nydwracu says:

      “Intelligence”, “artificial intelligence”, and “superintelligence” are three different things. It’s much easier to say that organizations are agents than to say they’re AIs. Then again, thermostats are also agents.

      • Mark Z. says:

        It’s not clear that “superintelligence” is a thing at all, because everyone agrees that it doesn’t exist, either “yet” or “at all”.

        I assert that a thermostat is a superintelligence, and you can’t prove me wrong.

        • nydwracu says:

          There’s a qualitative difference between a thermostat and a chimpanzee, between a chimpanzee and a human, and between a human and an institution. I’m not convinced that there’s a qualitative difference between an institution and an AI — which is not to say there couldn’t be qualitatively different results, but there are also qualitatively different results between the Piraha and the British Empire.

        • Ariel Ben-Yehuda says:

          Is there really a qualitative difference between an institution and a human? There is an “order-of-magnitude quantitative difference becomes a qualitative difference” effect, but nothing that seems as big as the chimpanzee-idiot gap.

          Intelligence *does* have problems scaling beyond small teams (a team of 10 can be much more comprehensive than an individual, but a company of 100 does not gain that much more improvement) – maybe that is just down to communication/motivation losses.

          Maybe it’s just (the lack of) language that makes chimps look stupider than they are, actually.

  9. Greg Perkins says:

    > There is no ….

    But all of those examples are indeed things that *could* happen. What will not happen is for them to occur *RELIABLY* given the specified team (“group”), or on a human timescale.

    • Anaxagoras says:

      Well, in that case, is it fair to call it a superintelligence? A system that generates a random solution and checks it will eventually get the right answer, but it sure doesn’t seem to hold much intelligence.

    • Scott Alexander says:

      Really? Chimps could develop writing, but AFAICT this would have to be by evolving higher intelligence the way original humans did, not by working together more effectively. That seems a lot different than saying “they can do it by being in a group!”

      If you mean that normal, unevolved chimps can learn to write by banging on a typewriter and keeping the copy that turns into Hamlet, I don’t think this counts – the chimps can’t select the particular page out of all the millions of pages which is Hamlet-quality. That would take ages upon ages of human work.

      If you mean that normal, unevolved chimps can be trained to write reliably via normal processes, that’s a very strong claim and I will believe it once you team up with a zoo and demonstrate your training process.

      • Matt Skene says:

        There’s a cool Ted talk discussing an effort by a group in Georgia to raise bonobos in an environment where they are treated like human beings. They think that the primary difference between humans and bonobos is social rather than biological. The talk includes videos of bonobos engaging in a number of complex behaviors, and possibly using a written language.

        In general, a lot of Surowiecki’s stuff on the wisdom of crowds suggests that compiling the results of inferior intellects produces superior results to the individual efforts of superior ones.

  10. Matt says:

    You could just as well say “No amount of single-celled neurons can be organized into a human brain.” In a computational view of intelligence, higher order reasoning most certainly arises from “teams” of lower-level dumb programs.

    As for genius and creativity, genius is the ability to search the space of possible solutions rapidly, with depth, and widely (so that you check domains of knowledge that do not typically apply to the problem at hand). A team can perform these tasks in parallel, it can have each member specialize deeply in a narrow field, and it can hold many domains of knowledge. Teams certainly face barriers in terms of communication costs relative to when this communication takes place inside of a brain. An AI that holds all the knowledge and parallel search capabilities of a team would certainly outperform a team. But in a very real sense teams are doing the same thing as the brains of geniuses, and probably doing them better. It takes a superintelligence like Apple to make the iPhone – there is no Babbage who could do it.
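
    A toy sketch of that “parallel search” framing, with the caveat that uniform random sampling of a fixed solution space is an assumption made purely for illustration, not a claim about how genius or teams actually search:

      import random

      def steps_to_find(space_size, n_searchers, seed=0):
          """Toy model: n searchers draw random candidates from a solution space
          until one of them hits the single correct solution."""
          rng = random.Random(seed)
          target = rng.randrange(space_size)
          steps = 0
          while True:
              steps += 1
              for _ in range(n_searchers):
                  if rng.randrange(space_size) == target:
                      return steps

      # With the same seed, ten searchers find the target roughly ten times faster than one.
      print(steps_to_find(10_000, 1), steps_to_find(10_000, 10))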

    • FullMeta_Rationalist says:

      Neurons can form brains because bandwidth between neurons is limited by chemistry. People can’t form brains because bandwidth between people is limited by speech. Speech is a lot slower.

      • Anaxagoras says:

        That doesn’t disprove the notion of a superhuman intelligence, it would just put a limit on its speed.

        • FullMeta_Rationalist says:

          The total bandwidth between neurons in an ant is probably greater than the total bandwidth between people in a corporation. Does that make the ant a “superhuman intelligence”?

          If we’re not measuring intelligence by something that resembles “speed”, what exactly are we measuring intelligence by? Let’s face it: some Turing-machines are more complete than others, comrade.

          • Anaxagoras says:

            Well, finding measures of intelligence is pretty hard. And I do think that a very slowed-down human should still be considered more intelligent than a sped-up dog. Still, you are right that something totally lacking in speed might not be in any way usefully so.

            It seems to me that human language could be a lot more information-rich than most other forms of communication, in part because of how a lot of the meaning can be stored in approximately synced understandings between the sender and receiver. I don’t know how one would measure this, or the extent to which it helps the issue, because I’m too tired to think about this now.

      • Dan Simon says:

        The interaction between neurons is very fast but very low-bandwidth: neurons fire or they don’t. Bandwidth-wise, human communication is at least as fast as neurons’.

    • Jeffrey Soreff says:

      You could just as well say “No amount of single-celled neurons can be organized into a human brain.” In a computational view of intelligence, higher order reasoning most certainly arises from “teams” of lower-level dumb programs.

      Many Thanks! I was going to make the same point.

      To my mind the main real reason that a team of humans can’t act like a unified superintelligence is very simple: the difference in communications speeds is huge. Even if you take the potential bandwidth going into a human as being all that they see and hear, the bandwidth that they can possibly emit is at best a dozen bits a second or so. The cheapest microprocessor has about eight orders of magnitude more communications bandwidth. It isn’t that one can’t parallelize tasks, period. One can’t efficiently parallelize tasks with even a small amount of interdependency with that severe a bottleneck in the way.
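
      As a quick sanity check on those numbers (the dozen-bits-per-second figure is from the comment above; the ~1 Gbit/s figure for a cheap microprocessor’s I/O is an assumed round number):

        human_output_bps = 12   # ~a dozen bits per second of speech/typing output
        cheap_cpu_bps = 1.2e9   # ~1 Gbit/s of I/O bandwidth, an assumed round number

        print(f"{cheap_cpu_bps / human_output_bps:.0e}")  # 1e+08, i.e. about eight orders of magnitude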

    • Scott Alexander says:

      Neurons can’t form brains, in the sense that if you stuck a trillion neurons in a vat, none of them would be smart enough to figure out how to organize themselves.

      Evolution can form brains out of neurons, but that’s a different claim – it’s not the neuron doing the organizing. If God were to swoop down and tell every human a set of logical operations they had to perform with other humans, maybe He could use us to simulate a primitive superintelligence. But that’s different from saying humans cooperating of their own accord can do it.

      • Matt says:

        We agree that sufficiently organized layers of dumb agents can produce intelligence. Do they really need an intelligent organizer to produce intelligence?

        Dumb humans may someday be able to arrange dumber lines of computer code in such a way that an AI with superintelligence exists. But they can already arrange dumb humans in an organization to produce intelligences that are better than any individual alive today. And those organizations can, in turn, offer advice about structuring other organizations. The singularity!

        There is also a selector of organizations operating in the form of the market. It produces enormous complex organizations that solve problems no human mind can solve. The ineffective ones do not survive, and the effective ones grow or are copied.

        • eponymous says:

          > We agree that sufficiently organized layers of dumb agents can produce intelligence. Do they really need an intelligent organizer to produce intelligence?

          Yes. Or at least, they need some optimization process, which is sort of the same thing.

          Evolution is such an optimization process.

      • Jiro says:

        It isn’t the neuron doing the organizing when evolution forms brains, true.

        It’s evolution doing the organizing.

        And evolution is a dumb process, just as dumb as a neuron, so saying “it isn’t a dumb neuron, it’s evolution” doesn’t help you at all.

        • drethelin says:

          Except for the whole thing where evolution took BILLIONS OF YEARS to get it right. If the argument is that humans can form a superintelligence out of themselves given billions of years, then ok fine that might be able to happen but who cares?

      • Deiseach says:

        Careful, Scott, you’re veering dangerously close to Intelligent Design territory there 🙂

      • eponymous says:

        > If God were to swoop down and tell every human a set of logical operations they had to perform with other humans, maybe He could use us to simulate a primitive superintelligence. But that’s different from saying humans cooperating of their own accord can do it.

        The point is that humans do in fact cooperate to produce entities that exceed individual humans in intelligence.

        In principle, such entities could be better at organizing humans than are humans, and thus they may work to improve their own design. If this process continues, you could get an intelligence explosion.

        I doubt this would proceed very far, but it’s the same sort of phenomenon.

        I think you’re trying to draw a strong distinction in kind where none exists. There is a distinction here, but it’s more quantitative in nature.

      • eponymous says:

        Let me expand slightly on my previous comment.

        What matters for the extent of an intelligence explosion is the set of design improvements reachable at a given intelligence level.

        Loosely speaking, there is a “multiplier” here. If an extra IQ point yields the ability to discover additional design innovations that raise IQ by x points, then we get a sequence like:

        1, 1+x, 1+x+x^2, …

        Which converges to 1/(1-x) as long as x < 1.

        If x is >= 1, you get a foom. If the steps are iterated on very quickly, you get a very fast foom.

        The value of x is a quantitative question. Corporations have an x much less than 1, so they multiply human intelligence by a fixed factor much less than what is possible.

        (Of course, a constant x is a simplification. There will be a different x at every iteration.)
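
        A minimal numeric sketch of that multiplier model, keeping the constant-x simplification (the 0.5 and 1.1 values are arbitrary illustrations):

          def cumulative_gain(x, iterations=50):
              """Cumulative gain 1 + x + x^2 + ... after a fixed number of design iterations."""
              term, total = 1.0, 0.0
              for _ in range(iterations):
                  total += term
                  term *= x
              return total

          print(round(cumulative_gain(0.5), 3))  # ~2.0: converges toward 1/(1-x)
          print(round(cumulative_gain(1.1), 1))  # keeps growing with more iterations: a "foom"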

    • Muga Sofer says:

      There’s a difference between a team – a group of people all smooshed together in one place and forced to self-organize – and a computer. You (or evolution, or anything else reasonably smart) can build a computer out of neurons, or silicon, or even (if it’s a simple computer) humans doing math and passing notes around; but there’s no such thing as a “team” of neurons. Left to their own devices, neurons don’t do squat.

      I can be beaten at chess by a computer, which is made out of silicon. But that doesn’t mean that piles of rocks are superintelligent.

    • “As for genius and creativity, genius is the ability to search the space of possible solutions rapidly, with depth, and wide”

      That’s a misleading metaphor. No one has access to a space of all possible solutions, only to those they are familiar with. Creative genius requires an ability to create new possible solutions.

  11. Alex Z says:

    I think it’s obvious that groups of people are superintelligences if you mean intelligences that can accomplish intellectual tasks beyond the capabilities of any individual human. I work in software, and pretty much every day I work with a team that accomplishes tasks that no single human could hold in their mind. So obviously you mean something else.

    I also must say that I don’t buy the premise that no number of average math PhDs could replicate what Gauss did. Erdős is probably the most prolific mathematician in history, and he got there by collaborating with large numbers of mathematicians who were nowhere near as good as him. That tells me we can build collaborations that are more than the sum of their parts, and so maybe enough good mathematicians can do the same work as an amazing mathematician.

  12. Carl Shulman says:

    “There is no number of “average” mathematics PhDs who, when organized into a team, will become smart enough to come up with the brilliant groundbreaking proofs of a Gauss or a Ramanujan.”

    Why do you say this? I would expect a large enough team with sufficient time to be able to prove all the same results, albeit with much worse efficiency in progress per person.

    • Alphaceph says:

      Some results require insight, which is hard to write a recipe for and therefore hard to parallelize. Other results follow through routine application of existing techniques.

      I am not a good enough mathematician to give a specific example, but in physics the development of special relativity by Einstein seems like a good example. A total paradigm shift in thinking was required – as evidenced by the fact that the Lorentz transformation had just been sitting around for years before. Or you could look to philosophy, for example problems like free will. Lots of routine work was done in this area that went precisely nowhere because people didn’t have the required insight.

      • Deiseach says:

        But that has just kicked the problem up a notch: now we’ve moved on from intelligence to saying that an AI requires insight.

        What is insight? How do we define it? How does it differ from intelligence?

    • vV_Vv says:

      If a typical difficult math problem can be solved incrementally, then your claim is true. Geniuses solve more problems just because they go faster, but even if the maximum intelligence of humans were limited to some lower level, humanity would still be able to make mathematical progress, albeit at a slower pace.

      But if a typical difficult math problem requires some sort of insight that makes it unsolvable (without incurring an exponential slowdown) by people below a certain level of intelligence, then humanity wouldn’t have discovered much of the known math if the maximum human intelligence were limited to a lower level. This also implies that there exists math that is inaccessible to us, and forever will be, at least until we figure out how to improve our maximum intelligence.

      Probably some math problems (or, more generally, intellectual problems) are of the first kind and others are of the second kind. I don’t have a clear intuition about the ratio.

      • Also, inventing math problems worth working on can take quite a bit of insight.

      • Alexander Stanislaw says:

        Most mathematical problems are indeed solved incrementally, even the big ones like the Poincaré conjecture – despite popular journalistic accounts.

        • Marc Whipple says:

          Correction: Most mathematical problems are currently solved this way. Historically speaking, this was not the case. One way to look at that is that we’ve reached the point in mathematical development where many of the remaining problems require a brute-force approach.

          • Alexander Stanislaw says:

            Which important mathematical insights do you have in mind that were not discovered incrementally? I assume not calculus, since Newton is the one who made the phrase “standing on the shoulders of giants” famous.

          • Felix Mercator says:

            “Standing on the shoulders of giants” was actually a microaggression against Robert Hooke.

            http://io9.gizmodo.com/5877660/was-robert-hooke-really-sciences-greatest-asshole

            While it’s generally viewed as an eloquent demonstration of scientific humility on Newton’s part, the quote comes from a 1676 letter to Hooke, at a point where the pair were already arguing over proper credit for some work in optics. Hooke was commonly described as very short, even hunchbacked, and one theory is that Newton’s mention of “giants” was his way of saying Hooke had no influence on his work.

          • Alexander Stanislaw says:

            @Felix

            That is not a widespread interpretation, and the supposed double entendre doesn’t affect my implied claim – that calculus was an incremental effort built upon previous work, not an ex nihilo discovery.

            (By the way, irrelevant nitpicks annoy me, and are oddly common on this blog’s comment section, and even more so on LessWrong)

          • Douglas Knight says:

            If you think that the quote is irrelevant, then you shouldn’t have claimed that it was relevant. I don’t know how popular that interpretation is and I don’t care. I only care that it is correct.

          • Alexander Stanislaw says:

            @Douglas

            What is relevant is that the discovery of calculus was an effort that required previous work. This is constant between the usual understanding and the idea that it was also a slight against Hooke. Whether it was an additional slight against Hooke is what is irrelevant.

          • Douglas Knight says:

            No, it isn’t constant. Newton didn’t mean that at all.

          • Marc Whipple says:

            I respectfully disagree that the development of the calculus by Newton should be considered “incremental.”

            Consider that Greek geometers were this/close to discovering that summing the infinitesimal could produce a finite sum, and by extension the calculus… and then the whole world abandoned the idea for centuries. Surely at some point in the middle, if it were just a matter of “ordinary” genius, somebody would have made the “incremental” discovery. But instead it took arguably the greatest scientific mind of all time to, in more or less the blink of an eye, develop the calculus along with so much more.

            Arguably, all developments are “incremental.” Having a developed number system to work with sure makes inventing the calculus easier. No argument. But in any realistic time frame, some things are not inevitable incremental developments from the base of all prior knowledge. For most of human history major developments in physics and mathematics were not. They were the result of one extraordinary person (usually but not always a man) hitting, as the saying goes, not a target no one else could hit but a target no one else could even see.

          • Douglas Knight says:

            Marc, you are right to use the word “abandoned,” but it demolishes your argument. If everyone had read and understood Archimedes for a thousand years and suddenly Newton went past him, that would be one thing, but if they abandoned him and had to recreate his arguments, that’s another. People debate whether Oresme or Copernicus even read him. The printing press made a big difference.

            And reading does not guarantee understanding. It’s pretty clear that mathematics in Alexandria regressed 200-600, so merely reading him is not enough.

          • Marc Whipple says:

            I think “demolished” is a little strong. Somebody, somewhere, was doing geometry and/or reading those texts during the centuries between the zenith of Greek thought and Newton. As you surely know, the “Dark Ages” were not a period where all learning was forgotten and all men rooted in the dirt like swine. They were almost there, the world was almost there… but it needed the one mind that could make the leap to actually get us there. It was not “incremental.”

            I might go so far as to say that if there was anything besides the genius of Newton that was needed, it was a shift in cultural thought and sophistication regarding what “infinity” really meant, not a series of incremental steps from Archimedes to Newton. But as with Riemann and relativity, coming up with a concept is one thing; realizing that it applies to the fundamental laws of the Universe, and proving it, is quite another.

          • Whatever Happened to Anonymous says:

            Where does Leibniz fit in this story?

          • Marc Whipple says:

            @WHTA:

            You are dead to me.

            Dead. To. Me.

          • Douglas Knight says:

            Abandoned was your word, not mine. If you just meant that they abandoned progress, not geometry, then it doesn’t demolish your argument, but I think it’s a poor choice of word. But I think it is quite accurate to say that they lost geometry.

            I specifically mentioned Alexandria 200-600. This was still part of Empire. It could afford scholars to think about geometry. They spoke Greek and they had the Hellenistic manuscripts, probably more than we do. But every generation did a worse job of thinking about geometry than the previous. They didn’t just fail to make progress beyond Archimedes, but they regressed. I think that the lesson is that there is more to knowledge than texts.

            No, in the Dark Ages of Western Europe, say, 400-800, they didn’t have even this heritage. Not all learning was forgotten, but an awful lot. Partly that’s because it really was Dark, but partly that’s because they were the inheritors of Rome, which never understood geometry. At least, it never translated it into Latin.

            Muslim science did have Euclid and Archimedes. They did think about geometry, until they were overrun by the Mongols, at about the same time Archimedes was translated into Latin, 1200.

            Did the Oxford Calculators have Archimedes? Beats me, though they definitely were influenced by people who read him. Perhaps their greatest contribution is the concept of velocity. That is in Archimedes, but most of his readers don’t get it. And they did things he didn’t do, like the fundamental theorem of calculus. And then after the Black Death, they abandoned geometry for another couple centuries.

            The greatest contributions are the ones that people afterwards don’t notice, so they get the least credit.

        • vV_Vv says:

          Nobody doubts that great mathematical breakthroughs build upon less spectacular foundations. The question is whether the leap from the foundations to groundbreaking results requires genius-level insight or whether, given enough time, the foundations would just progress to the point that even a non-genius mathematician could take the last step.

          Given that the search space is huge, it seems plausible that there is more to mathematical research than the number of man-hours thrown at it.

  13. Arthur B. says:

    Tangentially, this is why some CEOs get paid so much. There are some skills (not just intelligence) that don’t parallelize well at all.

    • bbartlog says:

      CEO salaries are inversely correlated with the (increase in) future stock price, so it’s not at all clear that high CEO pay really has much to do with differences in CEO ability. At least not company-management ability.

      • endoself says:

        If this is true and CEO salaries are made public, then you can make money off this.

        • Stuart Armstrong says:

          As far as I can tell, you can’t – and it’s even been argued that making CEO salaries public has caused them to increase. The reason: CEO compensation is still a small part of the company’s expenses, and “we pay average (or above average) CEO salaries” is taken as a signal for “we are well managed”. And thus the relentless push upwards…

    • Scott Alexander says:

      This is part of the theoretical background for why CEOs get paid so much, and I agree the theoretical model makes sense, but I’m very skeptical that it’s actually the full reason they get paid so much in the real world. For one thing, even terrible-performing CEOs get paid well; for another, other capitalist countries (and our own capitalist country in the past) had much lower CEO-to-worker pay ratios.

      Someone (Charles Murray?) said that this was more about a signaling race between companies; I’ll let someone else hunt down the link.

      • vV_Vv says:

        Someone (Charles Murray?) said that this was more about a signaling race between companies; I’ll let someone else hunt down the link.

        Isn’t it just because CEOs of large corporations, together with their boards of directors, can essentially pay themselves as much as they want, and shareholders can’t do much to protest it (because of coordination issues), even when they perform spectacularly badly?

      • Eric says:

        I remember reading this too, and just spent a couple minutes googling for the cite. Couldn’t find it, but did find this interesting result:

        “Furthermore, … it is the pay of closely-held businesses — where executive pay is private and undisclosed — that increased the most.”
        http://marginalrevolution.com/marginalrevolution/2013/08/why-are-ceo-salaries-rising.html

      • Arthur B. says:

        This explains why the pay can be so high; it sets the range. As for why it varies across countries and periods, I suspect this is because the supply of CEOs is fairly inelastic. As a result, even mild social norms can have a large effect on CEO compensation.

      • Deiseach says:

        Nobody is ever going to award themselves a pay cut, and when the members of one company’s remuneration board are picked from the directors of other companies, good old Charlie is of course going to vote for the going rate plus bonuses for Bob, because good old Bob is going to be sitting on the remuneration board for Charlie’s company.

    • bbartlog says:

      ‘In this paper, we carefully generated an in silico simulation to give a result that gladdens the politically correct. Had we chosen a different distribution of possible performance, we could have gotten totally the opposite result.’

      Sort of disappointing because I thought they might have done some experiments with people. That would have been interesting. Published results of computer simulations constitute a pretty crappy genre, for the most part.

  14. FullMeta_Rationalist says:

    Amdahl’s Law.

    tl;dr parallel-processing is subject to diminishing returns.
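
    For reference, a minimal sketch of the formula (the 90% parallel fraction and the worker counts are arbitrary illustration values):

      def amdahl_speedup(parallel_fraction, n_workers):
          """Amdahl's Law: overall speedup when a fraction p of the work is parallelized."""
          p = parallel_fraction
          return 1.0 / ((1.0 - p) + p / n_workers)

      # Even with effectively unlimited workers, a 10% serial portion caps the speedup near 10x.
      print(round(amdahl_speedup(0.9, 10), 2))         # ~5.26
      print(round(amdahl_speedup(0.9, 1_000_000), 2))  # ~10.0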

    • Nornagest says:

      Amdahl’s law comes with the large caveat that many interesting tasks are, or can easily be made, embarrassingly parallel. But I’m not sure the kind of tasks we wrap up as “genius” are among them.

      (I also don’t know for sure that they aren’t, of course.)

      • GregvP says:

        Many differential equations are analytically solvable.

        However, analytically solvable DEs constitute an infinitesimal subset of all DEs.

        Likewise with problems amenable to parallel attack. The caveat is much smaller than you seem to think.

        IOW, “here are a few anecdotes about problems that turned out not to require high general intelligence” is not an adequate response to Scott’s thesis.

  15. Yildo says:

    Is it correct to equate “intelligence” with “creativity”? One is the speed of thinking. The other is the novelty of the thought. “Super-intelligence” does not necessarily imply possession of a high degree of creativity to me.

  16. Wrong Species says:

    If you take one exceptional individual vs. a group of average people over a lifetime, then yes, the individual probably has an advantage. But if you take a group of people over many generations, then they would have a huge advantage over the individual. Culture can come up with solutions to many problems without us even consciously realizing it.

    • drethelin says:

      Aren’t most of those solutions in practice come up with by people far on the right end of the bell curve, like Moses, Marcus Aurelius, Aristotle, and so on? How would you distinguish amorphous “culture” producing these results given enough time from groups of people producing great men over time?

      • Wrong Species says:

        They don’t have to be great men to make some important contribution to culture; they can be merely above average, or they can accidentally stumble on a great idea. The key part is the accumulation of ideas over time.

        • Marc Whipple says:

          The whole point of this discussion is that some people don’t believe it’s possible to stumble onto a great idea unless you have the sort of brain that will recognize the problem, and recognize when you’ve stumbled onto a solution.

          For instance, I don’t know the number of IQ-100-person-years of thought which would be required to arrive at the wave-particle duality hypothesis for the nature of light, but I suspect it is so very large that arguments of the form “we could have brute-forced quantum mechanics eventually” make me very dubious.

    • moridinamael says:

      “A can do whatever B can do, provided A is given orders of magnitude more time and resources.” It is a nonstandard use of the word “advantage” to say that A has a “huge advantage” over B.

      • Wrong Species says:

        I think that this would hold even if the exceptional individual were immortal and had access to the same technology (but all that technology is also an accumulation of knowledge by the group).

        I’m probably overstating my case here. There are times where culture can’t substitute for exceptional people. I just think Scott is overrating individual vs collective intelligence.

  17. A says:

    Bostrom does not agree. Anyway, if these superintelligences ever exist, they will probably have values utterly strange to us poor humans. If they bother us much at all, they will probably harm us. A mere society with a trillion humans seems more fathomable and probably also nicer and more worth celebrating.

    Also, there seems a lot of wiggle room to combine human minds with other forms of computation (machine or biological) that fall short of “closing the loop” and making humans unnecessary. Society today isn’t just a really big hunter-gatherer population; there’s more to it.

  18. Jiro says:

    Also, there is no number of individual cells which, when organized, can produce something equivalent to the human brain. And there is no number of individual lines of code which combined can produce an AI, I suppose.

    For that matter, I could equally dubiously claim that there is no number of people in a market which can be as intelligent as central planning of society.

    You’re just choosing examples to give the result you want. There *are* cases where combining things with limited intelligence produces something with greater intelligence. Chimpanzees and writing happen not to be one of them, but that’s not to say such cases don’t exist.

    • Alexander Stanislaw says:

      You’re just choosing examples to give the result you want

      I don’t know if Scott is guilty of this here, but I think this is a huge limitation of his casework style of argumentation (giving examples of things that follow a common trend with Roman numerals, and making a general theory) – it is a very useful way of generating hypotheses and interesting ideas (and as a result, it’s one of the main reasons I like this blog) but it’s no good for proving anything.

      I do agree with his general point here though – you can increase intelligence in different ways (not better or worse) through linear speedup, parallel processing or a rather nebulous “quality”, but none can fully substitute for the others.

  19. Dr Dealgood says:

    For the record, I don’t consider corporate entities to be intelligent much less superintelligent. But there is an interesting argument to be made that less orderly collectives such as markets have superhuman problem solving abilities.

    We observe in many species the phenomena which we call swarm intelligence on the macro scale or microbe intelligence on the micro scale. There seem to be analogous processes in our civilization. No human planner could direct resources as efficiently as the market does, yet the market is just our name for the aggregate results of hundreds of millions of transactions between people who often cannot manage their own household budgets. It certainly seems as though society is at the very least orders of magnitude better than any individual person when it comes to solving this class of problem.

    Rather than having a higher general intelligence, which is probably not a useful concept outside of humanity anyway, it makes more sense to say that problem solving ability in many (but not all) domains seems to increase vastly as a gestalt body. From the inside it looks like mindless chaos but the results are generally speaking more efficient than it is possible for a human being to achieve.

    • Jaskologist says:

      This seems like a good place to point to I, Pencil, because I feel like that’s what the discussion is really about.

      Nobody knows how to make a pencil. Ramanujan doesn’t know how to make a pencil. David Friedman doesn’t know how to make a pencil. But The Market does, and it is able to do so on a massive scale with amazing efficiency. It solves all kinds of problems that individual humans can’t crack. Why isn’t this a super-intelligence whose utility function is oriented towards pencil creation?

      (Moloch is a super-intelligence as well, able to make us act in ways not necessarily aligned with our interests.)

  20. cassander says:

    >It’s hard to put my finger on exactly, but it seems to have something to do with creative problem-solving ability

    Well-run, well-structured teams can most definitely solve harder problems than any individual member can solve. 6 people can bring 6 different stores of knowledge, ways of looking at problems, etc. Now, it’s perfectly possible that those visions will conflict, in which case the team will be dumber than any individual member. But when they synergize, they can definitely solve harder problems than any individual member could.

    >Further, teams themselves need intelligent people to run in an intelligent way. Steve Jobs led Apple to success by being really really good at marketing.

    That is not what made Steve Jobs good at running Apple. It was something he was good at, but one does not take a company in as bad shape as Apple was in the early 2000s and turn it around with slick marketing.

    >While it’s possible for improvements in organizational technology to ameliorate some of these contingent problems, so far they generally haven’t: the US government is as dysfunctional as ever, and a lot of corporations are little better.

    The US government remains shitty because it operates by law, not management, in an environment almost entirely devoid of competition. Corporations, on the other hand, on the whole, get better and better every year. The Ford of 2015 kicks the ass of the Ford of 1915. Throw out car technology entirely and modern Ford would slaughter 1915 Ford at cranking out Model Ts, through things like sophisticated shop-floor management, just-in-time delivery, and other organizational changes.

    • Decius says:

      The problem with government is poor organization. Specifically, while the people at the top are the best at proving to people that they are the best person for the job, the reward is simply the position. If someone could become a millionaire mostly by coming up with an idea that saved the government billions (as happens in private industry: walk up to Ford and say that you have an idea that will save them billions, and somebody will listen to you), then more people would be looking for ways to make government more efficient.

      • Marc Whipple says:

        Actually, this is a viable approach to making money:

        https://en.wikipedia.org/wiki/False_Claims_Act

        The problem is that the government does not like being made to look stupid and resists mightily when people try.

        • Decius says:

          That covers fraud, but not inefficiency.

          • Adam says:

            I worked in defense budgeting for a long time, up until a few years ago. It’s not quite a problem that people don’t have an incentive to be efficient. Most leaders want to accomplish the most they possibly can with the resources they’re given. It’s an extremely well-known problem that there is no incentive to actually save money, though. In fact, you’re usually punished for it. If you don’t spend your entire budget, you’ll be given less the following year, so everyone loads up on reams of extra paper and whatever other bullshit they can dream up in September to make sure they spend their full allotment. The same exact thing happens at every level of government. If you’re able to accomplish everything you had to in a given year with less money than you were given, you’ll just invent new things you didn’t need to do and do them anyway.

            Frankly, I’m not sure business is qualitatively different. If you meet a sales quota, you don’t furlough your team the rest of the year and go home. You sell more. Government managers do the same thing, but the idea of ‘sell more’ doesn’t translate well. A police chief can produce more policing, but it isn’t clear that benefits his city when it goes beyond what the level of crime requires. Most of what the Army produces is just readiness. We train and train and train, and if we meet our goals, we’ll just keep training more, hopefully becoming better soldiers and better units in the process, but it’s not clear the country really needs it. We’re not losing wars because of shitty soldiering.

          • Deiseach says:

            In fact, you’re usually punished for it. If you don’t spend your entire budget, you’ll be given less the following year, so everyone loads up on reams of extra paper and whatever other bullshit they can dream up in September to make sure they spend their full allotment.

            Yeah, budget clawbacks are the worst because they penalise you for being efficient. If you were permitted to save the money and carry it over to the next year for “rainy day” purposes, it would save a lot in the end, because you wouldn’t be petitioning the Department for extra funding in the middle of the year. (It would also help if you were permitted to take surplus from fund A to cover overspend in fund B, but that can lead to robbing Peter to pay Paul, which doesn’t do any good.)

            But because unspent monies not alone go back to central government, but your next year’s budget is reduced accordingly, there is the usual end-of-year scramble to spend any money left over. I’ve seen it in both my jobs: instructions coming from heads of department to spend that money on anything, buy new computers even if you don’t need new computers, because we have money left in the IT fund that wasn’t spent and it needs to be spent.

            And this isn’t simple inefficiency; it’s in response to “Something must be done!” pressure from the public and campaign promises made when electioneering, because every party in opposition sees “government waste and inefficiency” as an easy way to attack the party in power and promise a reduction in taxes by cutting spending.

            So you get PR releases to the media about “We achieved X millions in savings during our term of office”, which is supposed to make the voters think that party Z is doing a great job of running the public finances and being careful with holding the purse strings closed, and thus you get the state of affairs as described: “use it or lose it”, which means “no you can’t buy that new photocopier you desperately need for the office because the photocopier budget is all spent, but you can buy new tablets even if nobody wants a tablet because we got a special tablet buying grant which hasn’t been spent yet and we have to spend it before October or our budget for next year will be reduced accordingly and no you can’t use the unspent money to buy a photocopier instead because it’s specifically for IT purposes not office machinery purposes”.

            Because it’s public finances, you have to keep track of every cent spent, what it was spent on, what headings such expenditure falls under, etc because some public representative will get up on their hind legs in the national parliament and ask a question about “Did the citizens of Ballygobackwards get a commensurate amount in sheep dip grants as the people in the Minister’s constituency?” and the Minister will ask his department which then asks you for the answer, and God help you if you don’t have the figures on “We provided €523,016.59 in grants for 365,000 cubic litres of sheep dip in the period March-November 2014”. For instance, I have to do monthly reports on private housing grants broken down by electoral areas for individual streets in the city areas. You can laugh about counting paperclips, but there really is the obligation laid on you to account for every penny, which is why bureaucrats are pencil-pushing red-tape bound cautious ‘we need that in triplicate and fill in form X695 or else’.

            I’m heartened to know this is a universal public service experience and not just in Ireland 🙂

  21. You make your point well: superintelligence is not “more intelligent than people”. It is a specific piece of technical jargon meaning “more essentially creative, in the way geniuses like Ramanujan were”.

    And I think all the remaining well known arguments survive this formulation, though a lot of the bullshit falls away. Are superintelligences possible in principle? Are we close to one? Is orthogonality true?

    I have my own views (especially as a researcher in basically AI) but thank you for trying to clarify the debate.

  22. William says:

    I’m not here to say that corporations constitute superintelligences — I don’t really care to speculate regarding this. However, I argue that the fundamental nature of AI relies on the parallel function of components which alone could not operate on a ‘superintelligent timescale’, and that this hierarchy is essentially equivalent to the structure of both corporations and certain natural phenomena. For this reason, I take issue with the post above.

    A particular irony of Scott’s argument is the line: “A team of people smart enough to solve problems up to Level N may be able to work in parallel to solve many more Level N problems in a given amount of time. But it won’t be able to solve Level N+1 problems.”

    However, the type of supercomputers required to create AI that Scott describes work in exactly this fashion — dozens of processors working in parallel to produce results which could not be produced on a singular machine in a reasonable timescale.

    Of course, you could argue that slower machines could perform the same tasks over a long time-scale (the time-scale would simply be unrealistically long). However, this makes most of Scott’s analogies unfair; in a semi-infinite time-scale, I would argue that an 8 year old COULD beat a grand-master at chess (or analogously, that a near-infinite number of 8 year olds could find a way to beat a grand-master within a reasonable time-frame). This is the type of argument wherein monkeys at typewriters could reproduce the bible — powerful computers are necessary to perform the sort of brute-force analysis which makes AI feasible.

    This collective intelligence is demonstrated clearly in ants. When ants are trying to push an item of food to their colony, often singular ants push the food in the wrong direction. However, collectively they push the food in the correct direction — the aggregated behavior of unique ants results in behavior which seems very rational, despite the irrationality of individual ants.

    Corporations actually function using a far more efficient mechanism. Scott’s consideration of corporations relies on the assumption that all of the people who are involved in a company are equivalent. For instance, his claim that a team of ‘ordinary (IQ 90) construction workers’ could not build a ship to fly to Jupiter requires the equivalency of construction workers on the team.

    While I have made a case against this argument above, I also think that this is an unrealistic way of modeling how corporations function.

    Corporations rely on a diversity of ability to accomplish tasks far more efficiently than this. Scott argues that Apple’s success directly depended on Steve Jobs — that without him the company likely would have failed. I agree. However, I argue that Jobs alone never could have been successful. For instance, he didn’t invent (and likely never would have invented) the graphical operating systems that first made Apple a success. Jobs relied on engineers, marketers, and accountants — sub-populations without whom Apple could not exist, but which have their own specific strengths and weaknesses.

    In fact, typical computers make use of both CPUs and GPUs — processors which perform identical tasks very differently — in order to make use of the strengths of each processor type.

    In short, I’m unconvinced by this argument; I think a case which relies less on analogy is necessary in order to fully distinguish AI ‘superintelligence’ from the examples of ‘collective intelligence’ which I describe.

  23. Dan Simon says:

    Termites have invented the arch, which I believe is younger than writing in the history of human invention. And bees have evolved a fairly sophisticated sign language for expressing directions to nectar sources. It’s not at all clear that sufficient evolutionary pressure wouldn’t eventually lead ants to evolve some kind of writing.

    A large number of clearly-not-very-intelligent CPUs should have no trouble running Deep Blue, which can indeed beat a grandmaster at chess.

    Designing and launching a space probe towards a moon of Jupiter is an ill-defined task. What prior knowledge and tools are assumed? Is the starting point nature, with no technology? 1969-level technical knowledge? 2015-level technological knowledge? 2115-level technical knowledge?

    What distinguishes the “brilliant groundbreaking proofs of a Gauss or a Ramanujan” from the many proofs of similarly important (or far more important) theorems gradually constructed over decades by a community of “‘average’ mathematics PhDs”? The fact that they weren’t constructed by a single person, who is thereby declared to be far more intelligent than average? Wouldn’t that just be begging the question?

    Once again, I assert: until we have a definition of “intelligence” (let alone “superintelligence”) more robust than “here’s a laundry list of random things that, off the top of my head, seem intuitively like they probably require intelligence/superintelligence to accomplish”, all discussions of the future of artificial intelligence and superintelligence will continue to be utterly senseless and pointless.

  24. stargirl says:

    There are many comments saying something like “A group of competent X could do the work of one genius given enough time.” This seems true but pretty much pointless. Speed matters. Given “enough time” a person of IQ 145 could learn to play chess perfectly. And “enough time” might not be exponentially long. For example, a single person of IQ 145 could probably write a chess computer to help them test. If you only have to check lines until the computer decides one side has lost, it might only take thousands of years for a person of IQ 145 to solve chess. Never mind the possibility that the IQ 145 person might eventually be able to write code that solves chess efficiently (in maybe only several centuries!).

    This does not mean that an IQ 145 person is as good at chess, in any meaningful sense, as Magnus Carlsen or Stockfish. A large group of average mathematicians is not as productive as a single mathematics genius, even if the group of average mathematicians could eventually reproduce the results of the genius. Time matters; in many highly relevant contexts the single genius will win.

    • Patrick says:

      Why are you assuming that large groups of average mathematicians aren’t as productive as a single mathematical genius? What task are you setting as your measure? It seems like maybe you’re measuring in terms of “capacity to develop stunning new mathematical insights,” in which case you MIGHT be right (not guaranteed, mind you, I don’t accept this as pure assertion), but why should that be our measure?

      Is one mathematical genius better than a large group of average mathematicians at…

      …underwriting flood insurance for all properties on the eastern seaboard?
      …providing individualized education to a couple thousand high level engineering majors?
      …literally any task that involves rapidly providing individualized responses to large numbers of highly variable scenarios?

      • Geirr says:

        > Why are you assuming that large groups of average mathematicians aren’t as productive as a single mathematical genius? What task are you setting as your measure? It seems like maybe you’re measuring in terms of “capacity to develop stunning new mathematical insights,” in which case you MIGHT be right (not guaranteed, mind you, I don’t accept this as pure assertion), but why should that be our measure?

        In terms of “capacity to develop stunning new mathematical insights” the differences between individuals span multiple orders of magnitude. I remember reading on Steve Hsu’s blog about Kolmogorov’s logarithmic ranking of mathematicians and physicists. Kolmogorov’s intuitive scale accords well with the bibliometric evidence. If you want a primer on the topic I highly recommend Charles Murray’s Human Accomplishment: https://en.wikipedia.org/wiki/Human_Accomplishment

  25. Patrick says:

    You’re glossing over the term “organize.” The people to whom you’re responding are treating the specific organizational algorithms as important, and you’re not.

    Try asking a lone genius to sort the US Post Office’s mail (all of it, in a timely fashion) and see how far he gets.

  26. Anonymous says:

    There is no number of chimpanzees which, when organized into a team, will become smart enough to learn to write.

    You should think about the implications of this insight when you talk about adding zeroes to config files.

  27. DiscoveredJoys says:

    “It’s hard to put my finger on exactly, but it seems to have something to do with creative problem-solving ability.” The chances are that if you ‘cannot put your finger on something’ you are operating on emotion rather than rationality.

    I could make a stronger case that notably intelligent people like Newton or Gauss or Darwin were able to formulate their ideas *because* they were embedded in a larger society which supported them. Who knows which geniuses have gone unsung because they were galley slaves or serfs?

    • Geirr says:

      >“It’s hard to put my finger on exactly, but it seems to have something to do with creative problem-solving ability.” The chances are that if you ‘cannot put your finger on something’ you are operating on emotion rather than rationality.

      This is one possibility but it’s not the only one. Not being able to define a concept does not mean we can’t point in its general direction as Scott is trying to do here. A shitty operationalisation of a concept is a step in the right direction. The better our operationalisation gets the closer we get to real understanding. Sometimes our ideas are just wrong but if we don’t grope towards them and try to bite off pieces at the edges we get nowhere.

      > I could make a stronger case that notably intelligent people like Newton or Gauss or Darwin are able to formulate their ideas *because* they are embedded in a larger society which supported them. Who knows which geniuses have been unsung because they were galley slaves or serfs?

      This does not explain, or even come close to explaining, the massive differences in individual intellectual accomplishment between societies of similar levels of wealth, orderliness and social organisation. Of course these individuals need a larger society to support them. And it needs to be the right kind of society. Southern Italy and Northern Italy have had vastly differing impacts on the arts and sciences, ditto Western and Eastern Europe. The differences in their respective contributions are almost certainly down to differences in how their societies were organised. But the outsized contributions of some individuals remain.

    • Decius says:

      Certainly there’s some aspect of being born into a society that relieves most food and shelter pressure, and some aspect of having the collected wisdom of prior humans available in written form. But that’s not often sufficient, as there are few cases of multiple people making independent breakthroughs.

      But the evidence of the few cases that do exist (notably, the invention of calculus) suggest that there is some cultural causative factor, which might simply be “everybody needed this at roughly the same time”.

  28. Harald K says:

    I used to read your blog, because you wrote about interesting things despite your naive, magical and 1930s Ivy League Nazi-tinged view of intelligence. Dropped in, and see that I haven’t missed anything. A team of PhD’s can never write as elegantly as Gauss? Bull.Shit! Your views about construction workers are maybe not as transparently wrong, but all the more offensive.

    Now I’ve been busy too. I’ve actually followed the research on machine learning, built the code, done cool stuff. I suggested you too to do that, remember? Get your hands dirty. Try to build an intelligent system, to get a grip on what it’s about in practice. Nothing is more useless than an atheist Jew who’s never placed a brick on a brick writing treatises on the dangers of Babel’s tower.

    • Soumynona says:

      Scott has nazi-tinged views, huh?

      How is it that people who ostensibly hold egalitarian views are so vehement in their intelligence denial? Being egalitarian presumably means believing that nobody is intrinsically more worthy than anybody else, that there is no category of lesser human beings. And if the suggestion of differences in intelligence threatens that, then it seems like there must be a belief that intelligence differences imply differences in moral worth.

      A raging IQ denier seems to believe something like:
      a) If some people were smarter than others then they would be better and more important and it would be right and proper for the smart to lord it over the dumb.
      b) Luckily, we live in a carefully fine-tuned world where everybody is smart and speshul in their own way. Whew, dodged that bullet.

      To me, this seems far more Nazi-like than the view that some people are more intelligent than others, that this doesn’t imply anything about anybody’s worth, and that it is all a great injustice. Which is roughly what I think Scott believes.

      Is a fragile empirical assertion the only thing stopping you from singing hymns to the glory of the Übermensch? And if not, then why the fuck are you so bothered by the idea that some people are in fact more intelligent than others?

      • “… Luckily, we live in a carefully fine-tuned world where everybody is smart and speshul in their own way. Whew, dodged that bullet.

        To me, this seems far more nazi-like…”

        Were the Nazis known for treating people as individuals, not as members of groups?

        • Soumynona says:

          Not the (b) part. The (a) part. Like, if you take a fundamentally unjust notion and wrap it in some mental contortions to make it look egalitarian, it seems more nazi-like than not having a fundamentally unjust notion in the first place.

          Of course, this all falls under psychoanalyzing people I disagree with and might be a bit silly.

    • Scott Alexander says:

      [meme]
      Calls someone’s views “Nazi-tinged”

      Points out that their interlocutor is Jewish in a way that makes it sound like this casts doubt upon their ideas
      [/meme]

      • Felix Mercator says:

        |
        |>
        |
        |3
        |

      • Deiseach says:

        Calls someone’s views “Nazi-tinged”

        Points out that their interlocutor is Jewish in a way that makes it sound like this casts doubt upon their ideas

        Scott, don’t you know the Jews are/were behind the Nazis? And the Communists? And the Jesuits? And the League of Nations, the United Nations and the One World Order? And the Spanish Inquisition? And somebody’s granny’s sponge cake only getting third place at the village fete? 🙂

      • Marc Whipple says:

        Everybody knows how smart Jewish people are, and even if you don’t believe they were actually behind/faked the Holocaust, it seems reasonable to believe they would have learned from experience.

        *ducking and running*

    • Decius says:

      How well did your machine-learning program learn to program machine-learning programs? Did you try running a bunch of copies of it in parallel to make a better one?

    • Whatever Happened to Anonymous says:

      I’m not a fan of arguing against the point I merely believe you made. So I’d rather ask you exactly what you mean, since I can’t seem to figure it out.

    • Aenonimus says:

      >I suggested you too to do that, remember?

      I doubt anyone but you remembers whatever spite-tinged “I shall no longer bless you with my readership!” comment you’re referencing.

      >Nothing is more useless than an atheist Jew who’s never placed a brick on a brick writing treatises on the dangers of Babel’s tower.

      ‘,:-|

    • mouth says:

      https://youtu.be/eNgUQlpc1m0

      This is a video of a mathematics panel. At 33:45 Terry Tao discusses collaborative mathematics. Around 36:10 he states that collaboration often does not lead to initial breakthroughs. Collaboration is useful to refine brilliant methods discovered by individuals. Tao seems to agree with the 10 PhD’s ≠ Gauss position. Fwiw.

    • Deiseach says:

      Nazi-tinged? Atheist Jew? This is going beyond harsh but fair criticism and has crossed well over into invective. I disagree vehemently with the notions of AI danger some on here are convinced of, and I also think we don’t have any kind of a useful measure of what exactly intelligence is, but I think ad hominem attacks are nowhere near any kind of useful or true contribution to this debate or even row, if that’s what we’re having, and they certainly make me, for one, think you have nothing pertinent to contribute (apart from some vague boasting of doing cool stuff, which is the kind of brag I expect from a twelve year old).

  29. jnicholas says:

    I wrote the post on the slatestarcodex subreddit that you linked as your example 1, and apparently I really, really suck at communicating, even when I’m trying really hard not to allow a particular misunderstanding and going so far as to make up an awkward hyphenated monstrosity of a term to stress the distinction I mean to highlight.

    Super-individual-human-intelligence does already exist, is distinct from superintelligence in the normal AI sense, and is not adequately expressed simply by calling something a “group”. Semantically, I’m totally fine with reserving “superintelligence” for single entities with IQ equivalences that are greater than any found in the human range, and I would be happy to have a new word to express the fact that humans within ongoing civilizations have access to many orders of magnitude more ideas than any individual human, no matter how Newtonian or Einsteinian, could ever have generated on their own.

    No group of chimps, no matter how well organized or coordinated, could write a novel. I agree. If, however, somehow a group of chimps were about to be able to create a literarily-crude message to humans, but the content of that message might either induce the humans with their relatively god-like power to elevate all chimp-dom out of brutality and squalor, or incite humans to eradicate the entire genus Pan as potential threats to human dominance and values, then it would still be deeply in those chimps’ interest to organize and coordinate themselves as well as possible so as to pool their intelligence maximally and make damn sure they wrote the best and smartest literarily-crude message that it was in their capacity to write. A group, in other words, can massively benefit by making maximum use of their collective intelligence, even without transcending their inherent capabilities. And I am suggesting that we have not maximized our collective intelligence – and also that this would be a really fucking good time to do so, before we create the true superintelligence that will hold everything we value in the palm of its indifferent metal hand.

    Let’s take it as granted that we can’t coordinate our collective intelligence in such a way as to generate IQ 300 equivalent solutions to problems. That’s probably correct. But there would still be huge chunks of utility to be gained by making IQ 145-level insights into life, religion, politics, science, relationships, etc., available across the board to <IQ 145 people, in the same way that the group of IQ 90 construction workers are all making use of language, math, electricity, internal combustion engines, and a thousand other things and concepts that they could never have invented on their own. If better ideas are useful in themselves, it doesn't matter how a particular individual acquired those ideas – it only matters whether they did acquire them and can make use of them. It is certainly the case that there are much better ideas scattered throughout humanity than the average individual is making use of, and it is certainly the case that our civilization – and our prospects for long-term survival – would be vastly improved if our collectively best ideas were also our common ideas.

    • Scott Alexander says:

      All right, sorry, will link this response from the main post.

      • jnicholas says:

        Thanks. I do appear to be doing a poor job of making it adequately clear exactly what I’m proposing, though, so most of the blame is probably mine for using terms that are too easily misinterpreted.

        “Augmented intelligence” might be a better term to start with, but I don’t think it by itself carries all the connotative weight that I’m trying to convey. I agree with you that there is a meaningful, qualitative difference between significantly separated IQ levels, and that there are things that a high IQ entity on their own can do that a low IQ entity on their own could never do. It does seem to be true, though, that effective coordination and organization can at least allow a group to perform on average better than its average member, and can allow individual members to perform at levels significantly above their ‘natural’ level – as in my example of IQ 90 construction workers using concepts (language, math, principles of structural engineering) that it required geniuses to invent. Those IQ 90 workers will probably not invent profound new concepts as if they were themselves now geniuses, but they can operate on a level that it once took geniuses to perceive and reach, raising the average functional IQ of our civilization noticeably. I think there is a lot of room to do this much more effectively and over a much larger set of domains, and I think there are huge blocks of utility to be gained by raising the average political IQ, for example.

        It’s not clear what the complete set of cognitive advantages are that high IQ people are making use of, but it seems obvious that strong memory and the ability to hold many concepts (and the connections between them) in mind at once (i.e., high levels of RAM) are among them, as well as simply having access to many more ideas by virtue of being able to understand them, and thereby having many more objects to build new ideas with, tools to operate on other ideas with, examples to pattern match from, metaphors to draw on and extend from, etc. All of those – storage memory, working memory, and the battery of concepts to work with – can be externalized and made more available, enabling more people than before to be more effective than before, and raising our collective functional IQ.

        Even among geniuses, improved collaboration and sharing of information can at least widen the set of domains that a given high IQ person has high level knowledge in, and therefore the potential to contribute to. The easier it is for anyone to contribute who has the capacity to contribute, the more contributions there will be, and the faster we’ll progress. And the more interaction there is between high level players in different domains, the greater the volume and variety of powerful perspectives there will be on important problems.

        I think this is an area of low-hanging fruit; that it’s especially available now because of the advent of software and the internet; that it can simultaneously improve our approaches to essentially every problem that exists; that there’s nothing else we can do that can generate as much utility in the next ten years (or however long it is until FAI blooms); and also but absolutely not least, that we very badly need to coordinate and collaborate on the input of every single human who can contribute so that FAI blooms instead of gray goo.

        • Viliam says:

          Seems to me that “groups” today have the advantage of more brain power, but the disadvantage of all the internal conflicts of interest; so at the end of the day, the organization ends up only slightly smarter than its boss, and the boss is not necessarily the smartest person in the organization.

          I believe there is space for improvement; for example, it seems that the conflicts of interest could be fixed by having a group of highly intelligent people who share the same goal, who trust each other, but who are also efficient and ruthless at detecting and removing defectors. (It’s not necessary for everyone in the organization to be like that, but the “inner circle of power” should be. Then you can add thousands of pawns, who will be much less efficient on average, but will still increase the total power of the organization.) Unfortunately, great mutual trust and ruthless detection of defectors are almost antonyms. Unless there were some kind of mechanism exposing defectors, so that only the people who pass the test would trust each other. But the tests would have to be mandatory for literally everyone, especially the bosses of the organization and the people who administer the tests.

          This could possibly be achieved by a big family (a big polyamorous group?) or a cult. But not the usual family or the usual cult, because those contain many internal conflicts. Or maybe it could be built on strictly rational grounds: for example, the organization would use its higher intelligence to make money, and its constitution would say that the profit is divided evenly among all members and strictly forbid any other kind of reward, so that all members would be motivated to care about the maximum output of the organization as a whole. The problem is, even in such a setting some people would optimize for doing less work, or for non-financial rewards such as power over other people. And if you start detecting and removing those, then it becomes a signalling competition.

          Another possible way to expand would be to use our tools much better. For example, if working memory is a big advantage, I wonder what advantage it would give to have one’s own personal wikipedia, and to write notes systematically about everything. Yes, writing would cost a lot of time. But maybe having written down all the ideas and all the connections between them would later be profitable, if such a person were less likely to forget things and more likely to notice connections already discovered in the past. Maybe in the future we will have implants for doing this more conveniently.

          If multiple people shared their wikipedias, I can imagine it leading to bad edit wars. The damage to the smarter person from having their notes impaired all the time could possibly outweigh the advantages of sharing. Maybe the correct way would be each person having their own namespace which is read-only but linkable for the others. So everyone is allowed to copy the smart ideas of others, but everyone is also allowed to insist on their own version regardless of how much others disagree with it. But if there remains a conflict of interests, some people would be motivated not to share their knowledge, or even to provide false information.

          Multi-cellular organisms solve the problem of cooperation in a way that would be unacceptable to humans: if the organism dies, all cells die; and even if the organism survives, only a few preselected cells are allowed to create a new organism and the remaining cells die anyway. So there is not much for an average cell to do, other than to serve the whole. — I can imagine a dark dystopian sci-fi based on this: a society where each person must serve an emperor; if an emperor dies, all their subjects die too (they are killed by unremovable implants installed at birth); and an emperor is allowed to appoint new emperors and provide them with resources, including new subjects, and a copy of the emperor’s Book of Wisdom which contains all the information necessary to successfully run an empire. And the empires would fight each other among the stars, successfully eradicating any alternative human societies. — But this is a dystopia which I don’t want to see in real life.

          • jnicholas says:

            The thing I’m proposing we create is a website that would be sort of similar to Wikipedia, in the sense of being an open crowdsourced knowledge resource, but focused on clearly and completely delineating arguments and beliefs, and the logical connections between them. If you’re interested, you can read about it here.

  30. MK says:

    >There is no number of ordinary IQ 90 construction workers who, when organized into a team, will become smart enough to design and launch a space probe to one of the moons of Jupiter.

    Side note: apparently, there was one IQ 90 soldier who “codified the art of aerial attack and created the concept of energy maneuverability” after he taught himself enough calculus to turn his ideas into equations – John Boyd. But this case says more about a weird failure of IQ testing than about the general rule, I guess.

    • Max says:

      Where did you get the part that he had “90 IQ”? USAF pilots are typically pretty smart.

    • Decius says:

      Where did you get the idea that he taught himself calculus? According to wikipedia, he had a degree in economics and one in industrial engineering.

  31. Stuart Armstrong says:

    Social intelligence could be a good example of things corporations aren’t good at.

  32. Max says:

    The other benefit of AI intelligence is the commoditization of it. People are the bottleneck – they need 20 years of growth and learning. They can only work productively for about 4-6 hours/day. They require astronomical amounts of money in salary.
    A corporation is still just a bunch of bodies. And bodies are expensive and inefficient.

  33. Calico Eyes says:

    Very slight critique.

    “A team of people smart enough to solve problems up to Level N may be able to work in parallel to solve many more Level N problems in a given amount of time. But it won’t be able to solve Level N+1 problems. ”

    While most intelligence measures are correlated with each other (verbal and spatial ability in a sub-population correlate at about 0.7, I believe, and long-term memory is not too far off), the statistics still predict the existence of individuals with lopsided intelligences.

    Or to put it one way, an engineer with 170 spatial IQ and 140 logical reasoning, pairing up with a theory guy with 140 spatial IQ and 170 reasoning, will together be able to solve harder problems. Add in a savant with 135 spatial and verbal IQ but near-180 memory, and I believe these people can solve N+1 problems, and perhaps N+2 problems given the right team.

    N+3 problems probably end up out of grasp, but so few people are that lopsided anyways.

    • Max says:

      Provided they can communicate effectively with each other and the effort spent on communication is not counterproductive.

      I do think, though, that sometimes high-IQ people need a big kick in the nuts and military-style goading to actually get things done – the Manhattan Project and the Space Race, for example.

      • Marc Whipple says:

        The trick, as with all humans, is to convince them that they want to do it. With really smart people the proper approach is often to convince them that it’s a really interesting problem. Once you do that, all they need is a little encouragement to stay on-topic. Just putting them in a room and shouting at them to do what you want will work no better, and in fact probably much worse, than it will with any other group of people.

  34. Perhaps corporations and countries and such are superhuman, but not superhuman in the very interesting way we’re looking for.

    There’s been a lot of name inflation about technology – “virtual reality” which is just a big screen with stereo, “hoverboards” that don’t hover, “artificial intelligence” which is severely domain-limited.

    Supposing that corporations and countries and cities are artificial intelligences of a sort, is there a way to rank how intelligent they are compared to each other?

    • Chrysophylax says:

      Yes: the same way we rank the intelligence of other optimisation processes. A corporation like Google is a powerful cross-domain optimiser, but it’s not efficient. It’s doing more work than a lone human, but it’s also using more resources (including time). A real superintelligence could produce better code faster using less processing power, memory and energy (and could throw orders of magnitude more resources at the problem if it chose to).

      We can’t measure the intelligence of a group with mathematical precision, but we can get a pretty good general idea of which groups are smarter than which others for a subset of groups – a partial ordering of the set of groups. For example, the people who took a million years to invent a better stone handaxe were significantly less impressive than the people who invented agriculture, writing and building in (archaeologically) close succession. It’s less obvious how they compared to the people involved in the Manhattan Project; the latter were much more powerful optimisers, but also had access to resources (e.g. maths, physics, Nobel-winning geniuses) that the former groups didn’t.

  35. Rob says:

    By the time we are done debunking this kind of nonsense we will all be dead.

  36. Gile says:

    Reminds me of a story Feynman told about his impression of a physics textbook versus the average impression of 65 engineers.

    “I didn’t doubt that the company had some pretty good engineers, but to take sixty-five engineers is to take a wide range of ability–and to necessarily include some pretty poor guys! It was once again the problem of averaging the length of the emperor’s nose, or the ratings on a book with nothing between the covers. It would have been far better to have the company decide who their better engineers were, and to have them look at the book. I couldn’t claim that I was smarter than sixty-five other guys–but the average of sixty five other guys, certainly!”

    • Decius says:

      More than half of the sixty-five engineers would say that they were better than the median.

      • Muga Sofer says:

        That’s not what he’s saying, though. If only one of the engineers gives the wrong answer, then all 64 other engineers will give an answer better than the averaged-out answer.

        If the right answer is zero, and one guy says “sixty-five”, then all the other guys (correctly!) believe they know better than the average answer of 1.

  37. Scott, there is some truth to what you are saying here, but it does not work relative to the conversation you are talking about. It does not matter whether you use the word intelligence or not. The fact is that groups are capable of things that individual humans are not, and they are capable of self improvement.

    Your argument amounts to saying that they cannot self improve in every respect.

    But this is irrelevant, since no one has ever proven that a program that can self improve in every respect is likely either. In either case, we would have something with superhuman capabilities that can self improve. If one of those does not foom or take over the world, that is evidence that the other will not, even if not conclusive.

    • Chrysophylax says:

      Is a human better at creative problem-solving than a microbe is? We have overwhelming evidence that it is possible to improve this cognitive ability. Your argument amounts to saying that I can’t fly, so there are no flying mammals, while standing in a room full of bats.

      Also: civilisations *did* FOOM and take over the world. How many pre-agricultural countries can you name? How many pre-industrial countries are there?

      Humans FOOMed and took over the world. Proto-humans got eaten by predators on a regular basis; modern humans need laws to keep the predators from extinction. You’ll note that the Industrial revolution went a lot faster, because we could improve things several times within the lifetime of one engineer, instead of taking many generations to evolve better brains.

      We’re worried about superintelligence because we think it will FOOM *much faster* than anything we’ve seen before, because an AGI can improve itself on a scale of *minutes* rather than *millennia*.

  38. Alsadius says:

    > There is no number of ordinary eight-year-olds who, when organized into a team, will become smart enough to beat a grandmaster in chess.

    Not quite 8 year olds, and not quite a win, but closer to it than you might think: https://en.wikipedia.org/wiki/Kasparov_versus_the_World

    Also, the idea that a superintelligent AI will have effectively infinite resources, which a hard takeoff seems to assume, doesn’t hold up. The AI still has physical substance, and thus a limited amount of memory, processing power, and so on. Self-coding can increase efficiency, but not raw power. The first AI to be slightly smarter than a human will need to order a bigger mainframe to dump itself into if it wants to get any smarter, and that doesn’t happen overnight.

    • That’s exceedingly cool! Has there been a world vs. the world (one internet team playing white and the other playing black) chess game?

    • Scott Alexander says:

      When the game is “everyone in the entire world including all the other chess grandmasters who are almost as good, vs. one guy”, and the outcome comes all the way down to the wire, I feel like you’re making my point for me.

      • Decius says:

        Would better coordination have helped the world in that game?

      • Alsadius says:

        From the article:

        > By this point in the game, several aspects of the cooperation within the World Team had become apparent:

        > It was clear from a look at the voting results that, although the World Team was managing to pick theoretically correct moves, many rank amateurs were voting as well. Demonstrably bad moves were garnering a significant percentage of the votes; even worse, on move 12, about 2.4% of the voters chose illegal moves which did not get the World Team out of check.
        > The World Team was not coordinating well on the bulletin board. Typical posts were brash, emotionally heated, and confrontational; profanity flowed freely. Much more energy was being spent on flame wars than on analysis.

        Remember, this wasn’t just the internet playing against him, it was the internet of 1999.

  39. Muga Sofer says:

    This is why I always object when people refer to Culture Minds as “superintelligent”. They can do a lot of things at once, and they can think extremely fast, and a couple of other tricks (like picturing things in twelve dimensions) … but they lack a certain spark.

    >But there is some aspect of intelligence that they can’t simulate, in which they are forever doomed to be only as smart as their smartest member (if that!). It’s hard to put my finger on exactly, but it seems to have something to do with creative problem-solving ability.

    The way you’ve written the rest of this post, it looks like they can’t simulate any aspect of intelligence, and are doomed to be only as good as their best member at any particular thing (if that.)

  40. Oliver Cromwell says:

    “There is no number of ordinary IQ 90 construction workers who, when organized into a team, will become smart enough to design and launch a space probe to one of the moons of Jupiter.”

    They would have to substitute trial and error for calculation, perhaps to the point where the attempt would be so inefficient that it would never be funded. But then the 130 IQ engineers also had to use trial and error to some extent – look at Apollo 1. If you just mean, “130 IQ people can do stuff more efficiently than 90 IQ people”, then sure. So can one hundred people organised into a company do stuff more efficiently than one hundred people who are not allowed to cooperate. Or one hundred people operating in a market economy rather than in a command economy. So why doesn’t superior organisation create superior intelligence?

    “If we were to actually get superintelligence, that would be a completely different class of entity than another government or corporation. It would also have all of the advantages of these things – arbitrarily much parallel processing ability, arbitrarily much memory, arbitrarily many computational resources – without the disadvantages, and with higher genuine “intelligence” as in problem-solving ability. ”

    The ‘super-’ prefix just means above, more than, not infinitely more than. Hence the US, a superpower, is more powerful than any other power, but not arbitrarily/infinitely powerful. It is not clear that any of these infinities are even possible. There was a long time in which steel became stronger per weight every year; steel cannot, however, become arbitrarily strong per weight.

    “I think it’s useful to have a word for this completely different class of things, and that word is ‘superintelligence’. Teams, corporations, and cultures can use words we already have, like ‘groups’.”

    There is already a word for superintelligence in the way you are using it here: ‘God’.

    “If we were to actually get superintelligence, that would be a completely different class of entity than another government or corporation.”

    BTW are 130 IQ engineers a completely different class of being than 90 IQ construction workers? And you know I could have chosen other categories than “engineer” and “construction worker”.

    • Decius says:

      It’s impolite to say so, but yes. The intersection of “people who have below-average general intelligence” and “People who are particularly skilled at pipefitting, blueprint reading, heavy machinery operation…” is a population that cannot solve the problem “what are the parameters for the next burn in order to soft-land on Ganymede?” in time to execute it.

      Also, if you took the intersection of “People who have above-average general intelligence” and “people who are particularly skilled at rocket science”, and gave them the same tools that the first group are particularly skilled at using, what they produce wouldn’t be a good building. Since construction is more error-tolerant than rocket science, it might not be a catastrophic failure, but I bet there would be noticeable problems not typically found in buildings made by skilled construction workers.

      • Oliver Cromwell says:

        No one disputes that intelligent people do things more efficiently than unintelligent people; after all, that’s almost the definition of an intelligent person.

        The question was, does that make them different classes of being?

      • Ariel Ben-Yehuda says:

        Experience is a problem that can be solved with a few decades of experimenting. Intelligence is not.

        Remember that at least some kinds of experience can be had in a simulator. Eliezer believes that any superintelligence worthy of the name will be able to gain all the needed experience in a simulator, using knowledge already findable on the public Internet. That seems to have “devil-in-the-details” problems, but it’s not clear how much these will interfere with a true superintelligence.

  41. Decius says:

    There is no way that any group of ordinary people can organize themselves into a group that is better at (organizing people into {groups that are better at organizing people into [groups that perform arbitrary tasks]}).

    I think that condition is sufficient for a group-dynamic Superintelligence to exist, but I’m certain that it’s necessary.

    • Aegeus says:

      Teachers? Managers? Trainers? All people who can organize people into groups that are capable of doing tasks that they couldn’t before. And I’m not convinced it’s impossible to do such a thing – after all, “teacher” is itself a skill that is taught.

      Sadly, the last I checked, we don’t have a clue on how to reliably train good teachers, let alone train good teacher-teachers.

    • Tim Brownawell says:

      Sure there is. It’s called a market economy.

      The problem is that while members can improve themselves and the system, and will reliably do so if denied other options, they tend to get more individual relative benefit by sabotaging the system (regulatory capture, anti-competitive behavior, etc).

  42. Oliver Cromwell says:

    Here is a sensible definition of super-intelligence in my opinion: an intelligence that increases its own intelligence.

    This does not run into long-wavelength catastrophes where intelligences go to infinity; intelligence can increase according to a function that converges to a finite value in infinite time.
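
    (For a concrete illustrative function, my example rather than anything in the comment: I(t) = I_max · (1 − e^(−t)) increases at every moment, yet never exceeds the finite bound I_max.)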

    Humanity, although not individual humans, has the potential to be such an intelligence, if total fertility is above replacement and increases with intelligence.

    However humanity is not such an intelligence.

    That is a bigger problem than programming atheist God on your computer. Or worrying about what might happen when someone else does it, because you turn out to be not competent enough to do it.

    If computer-intelligence overtakes human intelligence, it will be because human intelligence wasn’t increasing fast enough, not because we did not slow down the increase in machine intelligence. There is no reliable way to stop or slow adaptive technological developments, because it is a prisoner’s dilemma.

    Yudkowsky’s proposal – to ignore the problem of declining human intelligence and worsen the problem of increasing machine intelligence by accelerating it to create an omnipotent “friendly” AI – would at best make humans pets rather than dead; worse, make them Yudkowsky’s pets.

    However it is much more likely he will fail, because Yudkowsky isn’t that intelligent.

    • Scott Alexander says:

      “Here is a sensible definition of super-intelligence in my opinion: an intelligence that increases its own intelligence.”

      I don’t think this definition captures what we want. Taken too seriously, it would imply that everyone who takes a nootropic like modafinil is a superintelligence. At the very least the person who discovered modafinil should be.

      Or: if I write a chess-playing program, add random noise, and remove 1% of the random noise each time the program wins a game, is it a superintelligence after its first victory?

      • Oliver Cromwell says:

        Taken too seriously, I am a super intelligence, because I didn’t get enough sleep last night, and will be more intelligent tomorrow than I am today.

        But how about taken with the correct amount of seriousness?

        Like, considering only “interesting super intelligences”, in the same way we are happy to accept terms like “species” without pointing out that we could define every non-clonal individual as a species or else all life on earth as sub-branches of one species.

        Your definition, taken simply literally (an arbitrarily fast processor requiring arbitrarily much free energy, and arbitrarily much storage requiring arbitrarily much mass), defines a null set.

        Your definition softened for reasonableness, to just something more intelligent than current humans but less than arbitrary intelligence, is not a new phenomenon nor anything unique to machines. Humans today are more intelligent than past creatures and humans of the future have the potential to be more intelligent than current humans. So why focus on machines at all? You might as well define humans as a super intelligence on the basis that we are more intelligent than primitive mammals, and could do so with equal validity.

        The only thing that makes machine AI interesting is that the rate of increase in intelligence might be very high; specifically, much higher than the rate of increase in human intelligence.

        edit: Sorry for the billion edits. It is done now.

      • Eli says:

        Actually, it is a good definition, if you modify it to, “An intelligence capable of spending arbitrary amounts of resources to increase its own intelligence arbitrarily much, until it achieves diminishing marginal utility.” Then the nasty question is where the cognitive analogue of “air resistance” (the innate computational and information-theoretic hardness of cognitive problems) puts the diminishing-returns point.

        • Oliver Cromwell says:

          No. Again this word “arbitrary”. God does not exist. It does not matter if you call it a super intelligence, it is not God. Infinities cannot be requisite characteristics of physical objects. They can be characteristics of God, but God is unphysical.

          My position is that Alexander and Yudkowsky care about rate of increase in intelligence, not level of intelligence per se, and simply don’t understand their own position well enough to realise that. Intelligence is just as likely to asymptote to infinity as the strength per weight of building materials is to asymptote to infinity, i.e. it isn’t at all. But the rate of increase of intelligence of machines very well might be higher than that of humans, and that very well might continue long enough for machines to overtake humans, with potentially genocidal consequences for humans.

          Especially as the current rate of increase of intelligence of humans is negative.

          This does not make “super intelligences” – intelligences that exceed human intelligence by increasing their own intelligence faster than human intelligence increased – categorically different to humans. What AIs might do to us is no different to what humans did to Australopithecus or what Bantu are currently doing to pygmies.

  43. Kaj Sotala says:

    Nick Bostrom (briefly) making the same point in his 1997 How long before superintelligence?:

    By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.

    Entities such as companies or the scientific community are not superintelligences according to this definition. Although they can perform a number of tasks of which no individual human is capable, they are not intellects and there are many fields in which they perform much worse than a human brain – for example, you can’t have real-time conversation with “the scientific community”.

    • Oliver Cromwell says:

      That is just defining them out of consideration.

      People are worried about highly intelligent AI because it might commit genocide against them. Why not be worried that highly intelligent Germans might commit genocide against you? Other than that we still have more tanks in Germany than the Germans.

      One German is not going to commit genocide against you, after all.

      Nick Bostrom would still be worried about a hyperpowerful machine committing genocide against him even if he couldn’t hold a conversation with it in real time. Perhaps because it isn’t interested in talking to him or because it speaks German.

      • 27chaos says:

        I think the fact that they do very badly on some tasks, that their intelligence is not general, is a fair point.

        • Oliver Cromwell says:

          Nick Bostrom is worried about specific capabilities, not general capabilities, notably the ability to commit genocide against humanity. That is why he has called his institute the “Future of Humanity Institute” and not the “Machine Intelligence Research Institute” (the latter name being meant to persuade donors it’s a technical institute rather than a social studies institute; Bostrom is more honest and doesn’t make a pretense, but then he is government-funded).

    • A says:

      By the time he wrote his book, he had changed his definition.

  44. That neuron comment is pretty clearly wrong. Sub-brain neural groupings did turn themselves into brains. This was a thing that happened. This configuration ended up being more fit in several cases, so it stuck around. Evolution was the result, not the designer.

    There’s a bit of waffling here. The ascent of brains shows that yes, agents of intelligence level N clearly can brute-force their way into intelligence of N+1, but only when an intelligence of N+1 is adaptive. We don’t see chess-playing networks of 8-year-olds because there’s not been millions of years of evolutionary pressure on life to play chess.

    Mind you, “Something which is iteratively smarter than its creator, and able to use this smartness to make something smarter still” can describe corporations, groups, etc., but doesn’t describe superintelligences, because of the aforementioned problems with coordination, communication, and so on. Of course, since as far as I can tell literally every information-processing system that can exist in the physical universe is subject to these things, I remain confident in my assumption that, barring a radical restructuring of physics, superintelligences can’t exist.

    • Eli says:

      > There’s a bit of waffling here. The ascent of brains shows that yes, agents of intelligence level N clearly can brute-force their way into intelligence of N+1, but only when an intelligence of N+1 is adaptive. We don’t see chess-playing networks of 8-year-olds because there’s not been millions of years of evolutionary pressure on life to play chess.

      Not quite. “Evolution” here isn’t an explanation, it’s a teacher’s password. The real keyword is the free-energy minimization principle. Evolution optimizes life-forms to minimize the free energy of the environment. Life-forms do so by approximately, boundedly modeling the environment via free-energy-minimization learning (because thermodynamics and statistical physics are connected and wow I should shut the fuck up and just go study…). Free-energy minimization demands an approximately energetically optimal brain, which is actually what we have.

      The paper on all this came out in 2011, and the theories are now considered fairly well established, although most of the low-level details about how the biology implements algorithms that implement the differential equations remain to be worked out.

  45. eponymous says:

    re: 8-year olds implementing deep blue, and neurons implementing Scott Alexander:

    I don’t think it is correct to say that the programmer/evolution “does the work” in these cases. If you write an algorithm to solve a problem, you haven’t done “all the work” to solve the problem. I can write down an algorithm to list all prime numbers (Sieve of Eratosthenes), but that doesn’t mean I “did the work” to figure out all prime numbers.

    In fact, these examples constitute a perfectly valid objection to your argument: A superintelligence really is just a way of organizing components to produce greater problem-solving ability than they possess independently. So is a corporation.

    Of course, that doesn’t imply that we “already have superintelligences” comparable to a human+ AI, or some nonsense like that. As an empirical matter, the intelligence gains we’ve thus far achieved by organizing humans are much less than what are possible, or what we can readily conceive of an AI achieving. And there are reasons to think that working on better ways of organizing humans will have much lower returns than working on AIs.

    (Aside: if organizations are AIs, how friendly are they? Can we say that your famous “Moloch” post boils down to “Civilization is an unfriendly AI”?)

    • 27chaos says:

      Really nice phrasing in your third paragraph, that sentiment about organization is exactly what my intuitions were blindly grasping toward.

  46. Another problem with this (from the foom idea point of view) is that it would give us some reason to think that people will never be able to program anything above human-level intelligence.

    Scott has said in the past that if you can program a human level intelligence, you can make superintelligence by an increase of processing power. But if that’s the case, that refutes the whole point of this post.

    • Mark says:

      I would like to see Scott’s response to this.

    • Adam says:

      This bugs me about Scott’s posts on this topic, too. He equivocates: he says some special ‘spark’ is needed to differentiate the merely intelligent from the superintelligent, but when it comes down to any concrete ability or task at which an AI would outperform humans in such a way as to make it existentially dangerous, the example always comes down to faster cycle speeds or greater parallelism.

      At the end of the day, every computing problem comes down to two factors in how well it can be solved: hardware and algorithms. At least in research settings, we’ve pretty much hit the limits of clock speed and network latency and mostly stand to gain hardware-wise from greater parallelism and specialized circuitry. We no doubt still have huge gains to make, but posts like this make it seem that Scott doesn’t believe hardware gains alone will allow for superintelligence, and of course, hardware gains will always give a bounded linear speedup and nothing else. They don’t and can’t add ‘cleverness.’

      On the other hand, we frankly have not made much in the way of algorithm gains since the early years when mathematicians first started thinking computationally. Almost all of the explosive growth in what could actually be done with a computer has been hardware gains. Even at the cutting edge of hot fields like visual pattern recognition, natural language processing, and game solving, we’re using algorithms that existed 40 years ago but that couldn’t be run on the hardware of 40 years ago. The Atari solver that used Q-learning on a deep net was algorithmically no different than TD-Gammon. The ‘breakthrough’ that made it usable for Atari was the interface to a programmatic game controller.
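
      For readers who haven’t seen it, here is a minimal sketch of the update rule being referred to, in its standard tabular Q-learning form (the function and variable names are my own toy choices; the Atari work used a deep network rather than a table as the function approximator, but the temporal-difference update has the same shape):

      import random
      from collections import defaultdict

      Q = defaultdict(float)                  # Q[(state, action)] -> estimated long-run value
      alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration rate

      def choose_action(state, actions):
          # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
          if random.random() < epsilon:
              return random.choice(actions)
          return max(actions, key=lambda a: Q[(state, a)])

      def td_update(state, action, reward, next_state, actions):
          # Nudge Q(s, a) toward the observed reward plus the discounted best next value.
          best_next = max(Q[(next_state, a)] for a in actions)
          target = reward + gamma * best_next
          Q[(state, action)] += alpha * (target - Q[(state, action)])

      The point being made stands either way: this is decades-old machinery, and what changed is mostly the hardware and data it runs on.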

      And not only have we made very few algorithmic gains since electronic computing became widespread, but we probably have hard theoretical limits on exactly what gains are even possible. We can get around this in creating more intelligent systems because intelligence isn’t just about better algorithms. Often, it’s about better generalization. It isn’t that you come up with some extremely novel new solution pattern. You just recognize that an existing solution to an old problem can be applied to new problems. Because computational learning theory is a much newer field of study than computational complexity theory, we don’t necessarily have the same hard limits, but we might. Frankly, we don’t know, but it’s somewhat weird that the people most concerned about this don’t seem to wonder whatsoever just what the theoretical limits are to the power of inference. They just assume that it’s possible to create an arbitrarily good inference engine that can be used to solve any learning problem, and not only possible, but inevitable.

      Maybe it is, but you can’t get there from Moore’s law, and if we judge from advances in actual AI techniques, the growth curve looks nothing at all like hardware gains. Cleverly pruned game tree searches and constraint programming haven’t much changed in decades. Arguably, maximum margin classifiers, deep nets, and boosting were legitimate advances in statistical learning, but bounded at a fractional linear accuracy gain over simple ordinary least squares with hand-selected basis functions that we knew how to do over a century ago. Actual better classifiers and regressors haven’t come because of better learning algorithms but because we have access to more data to train on.

      About all I can think of as truly new techniques that made it possible to solve problems we previously could not and that generalize well to many domains are temporal difference learning and approximate dynamic programming. This will give us things like the self-driving car and virtual personal assistants that actually improve over time, but the agents you build still need a functional representation over which to learn a policy that has to be provided by a human. Automatic discovery of mappings from state/action pairs to reward/penalty sensation is the obvious next step in creating a sufficiently general intelligence as to be recognizable as intelligence, but it’s a problem we’ve made roughly zero progress on in the time we’ve been working on it. It’s only been solved once that we know of and it took billions of years of trial and error experimenting in parallel on billions of test subjects at a time, possibly on billions of planets with only one success.

      • 27chaos says:

        For reasons like this, I wonder if IQ has a max cap at like 400 or so points. A lot of the problems humans have a hard time thinking about are in the neighborhood of NP hard. Once we get the obvious stuff, I don’t know if much room will be left for the tremendous superintelligences of science fiction.

        • Marc Whipple says:

          Hmmm. Maybe it’s more accurate to say that a lot of the problems humans can even imagine existing are in the neighborhood of NP hard. Superintelligences might solve everything we think is impossible as a warmup, then just come up with new questions that are too hard for them. 🙂

      • Max says:

        AI might be merely a computational power issue (and a few architectural tricks). However, I think in the case of geniuses it’s the problem of coordination which makes 160 IQ infinitely more valuable than 115 IQ. Incremental progress only works within the same system; as soon as you separate systems and introduce a barrier between them, that barrier kills most of the productivity which could be gained with parallelism.

        So 1000 PhDs with 130 IQ will never equal one von Neumann. Just like you cannot construct a Deep Blue out of chips contemporary to the original Deep Blue if you connect them with a 1-bit bus.

    • drethelin says:

      How does it refute any part of the post? Processing power within a computer is not the same as distributed processing power. You cannot daisy-chain the processors of arbitrarily many CDC 6600s to get the same effect as one desktop computer today. But you CAN improve on a CDC 6600 by increasing processing capacity WITHIN a computer.

  47. eponymous says:

    More specifically, re: 8-year olds implementing deep blue:

    In fact, a pretty good chess program isn’t *that* hard to implement. You just calculate lots of lines, and then assess the resulting positions using a simple score function.

    If you had a *lot* of 8-year olds, with good communication technology, it wouldn’t be very difficult for them to implement a program that would beat a master handily.

    And I would definitely say that the 8-year olds were doing most of the work. You could describe the program they would follow in a few pages. The hardest part would be coordinating about what lines to work on in parallel, and aggregating their assessments of the resulting positions to return the best move.

    (Yes, I’ve just been thinking about how one would practically implement a chess program on a billion 8-year olds.)
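
    To make the “few pages” claim concrete, here is a rough sketch of that procedure in code, the part the 8-year-olds would be executing by hand (this leans on the third-party python-chess library for move generation; the piece values, search depth and function names are my own illustrative choices, not anything from the original comment):

    import chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def score(board):
        # Simple score function: material count, positive when White is ahead.
        total = 0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            total += value if piece.color == chess.WHITE else -value
        return total

    def search(board, depth):
        # Plain minimax: calculate lines to a fixed depth, score the leaf positions.
        if depth == 0 or board.is_game_over():
            return score(board)
        values = []
        for move in board.legal_moves:
            board.push(move)
            values.append(search(board, depth - 1))
            board.pop()
        return max(values) if board.turn == chess.WHITE else min(values)

    def best_move(board, depth=3):
        # Each child (or team of children) could evaluate one candidate move in parallel.
        best, best_value = None, None
        for move in board.legal_moves:
            board.push(move)
            value = search(board, depth - 1)
            board.pop()
            if best is None or (value > best_value if board.turn == chess.WHITE else value < best_value):
                best, best_value = move, value
        return best

    The whole “program” really does fit on a page; the children’s contribution is the enormous amount of bookkeeping inside search.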

  48. vV_Vv says:

    I generally agree with your examples.

    I also think this argument refutes some naive arguments for easy superintelligence: “Let’s just make a human-level AI (or even a cow-level AI) and then speed it up a billion times/make a billion copies of it.” It won’t result in a true superintelligence for the same reason that a billion humans are not a true superintelligence.

    • eponymous says:

      “speed it up a billion times/make a billion copies of it”

      These are *very different* things. I would expect a billion-X speed human-level AI to be functionally well above human level.

      • John Schilling says:

        Would you not also expect a well-organized team of a billion (average) people to “be functionally well above the human level”? I certainly would. There are things such a team could do that no single human being could hope to aspire to. I would also expect there are things that a single human genius could do that neither the serialized nor parallelized billion-average-brain systems could do.

        Really, I think you are going to need to support and elaborate on your “these are very different things” assertion.

        • eponymous says:

          Clearly the fast human could do at least as well as the many humans in any purely mental task, since he could always replicate their efforts by calculating first one and then another.

          (This is assuming he’s not limited to the memory capacity of one person, of course.)

          On the other hand, he would blow away the multitude at tasks that involve lots of sequential steps, and thus are not very parallelizable. My guess is that lots of high-level problem-solving involves a lot of sequential thinking.
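
          (To make the parallelizability point concrete, borrowing Amdahl’s law, which the comment doesn’t invoke: if only a fraction p of a task can be run in parallel, n workers give a best-case speedup of 1 / ((1 − p) + p/n). With p = 0.9, even a billion workers top out at roughly a tenfold speedup, while the billion-times-faster serial thinker gets the full factor on the sequential steps as well.)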

          Also, no matter how well organized, a large group of people will always have severe problems with ideas not being properly communicated and wasted/repeated effort.

        • “Would you not also expect a well-organized team of a billion (average) people to “be functionally well above the human level”?”

          Depends on the function.

        • nydwracu says:

          I take it we’ve got nothing to worry about from China then.

        • John Schilling says:

          Agreed, but the coordination problem exists whether you serialize or parallelize your billion human-level intellects. You get results substantially better than a single normally-clocked human intellect, but you don’t get results a billion times better.

          In the meatware-serial case, your immortal thinker’s library of notebooks has become an unindexable mess after a hundred thousand years. In the silicon version, this happens in a few minutes.

      • vV_Vv says:

        If you were to speed up a chicken brain a billion times you wouldn’t get a superintelligence, you would just get a very bored chicken. Maybe smarter than ordinary chickens at some tasks, but clearly not anywhere close to human-level intelligence, even though it would have more raw processing power than a human brain.

        Likewise, if you were to speed up a human brain a billion times, even if you gave it a high-bandwidth direct-to-brain internet connection, I wouldn’t expect the result to be a true superintelligence. They might be better than ordinary humans at certain incremental tasks, but they probably would not beat Kasparov at chess unless they were a grandmaster in the first place.

        • eponymous says:

          A billion seconds is about 32 years, so at a billion-fold speedup even a couple of minutes of real time per move gives this person millennia of subjective time. That’s a lot of time to think about a chess move; plenty of time to teach yourself chess at a high level, and then calculate a *lot* of lines. It’s possible you still wouldn’t beat Kasparov, however.

          Overall, I agree that just increasing speed (and memory capacity so you can use the speed) will only get you so far. The point of this thought experiment is to serve as an intuition pump for what a superintelligence could conceivably accomplish, because we’re just taking intelligence of a sort we know and then increasing a measurable factor. This lets us realize how vast mind design space is.

          Of course you could also realize huge gains by improving mental algorithms. Probably much more than you get from just increasing memory and processing speed.

          • sweeneyrod says:

            For strong chess players, more time to think is a massive advantage. In correspondence chess (where each player has days to move), cheating by using software chess engines is very ineffective – a good player can analyze much more deeply than most engines, given time. However, engines are very strong in matches with a normal time control.

      • Jeffrey Soreff says:

        Minor note: If one could run a human-level AI on today’s CPUs, with clock speeds of ~10^9 cycles per second, a serial AI 10^9 times faster needs a clock of ~10^18 cycles per second. Anything switched that fast has an energy uncertainty of ~1000 electron volts, hundreds of times above the strongest chemical bonds. Not even full Drexler/Merkle nanotechnology can do that. We’d need something like CPUs using nuclear interactions. I hesitate to say “never”, but in this case…
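
        (A rough check of that figure, filling in the arithmetic with the energy–time uncertainty relation ΔE ≈ ħ/Δt: with Δt ≈ 10^-18 s and ħ ≈ 6.6 × 10^-16 eV·s, ΔE comes out around 700 eV, i.e. of order 10^3 eV as stated, versus chemical bond energies of only a few eV.)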

    • Oliver Cromwell says:

      Actually just breeding more humans will result in a super-intelligence.

      Scott was careful to define the groups he wished to compare with highly intelligent individuals as being of homogeneous low intelligence, not just groups of random people.

      Since people have normally distributed intelligence, breeding lots more humans increases the absolute number of humans at all levels of intelligence.

      So even if the intelligence of the group is only the intelligence of the single most intelligent member of the group, with all others being worthless, breeding more humans increases the intelligence of the group.

      Of course selective breeding is more efficient, and possibly more efficient than designing AIs too.

      • vV_Vv says:

        > Since people have normally distributed intelligence, breeding lots more humans increases the absolute number of humans at all levels of intelligence.

        But the normal distribution is light-tailed, which means that trying to increase maximum intelligence just by creating more humans immediately runs into (squared) exponentially diminishing returns. Do the math. And that’s even without taking into account the non-linearities that must necessarily be there for physical reasons.
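
        For anyone who actually wants to do the math, here is a sketch (my numbers, using the standard extreme-value approximation that the expected maximum of n independent normal draws sits about sqrt(2·ln n) standard deviations above the mean, with IQ mean 100 and SD 15, and ignoring the non-linearities mentioned above):

        import math

        def approx_max_iq(n, mean=100.0, sd=15.0):
            # Crude asymptotic estimate of the expected maximum IQ among n people.
            return mean + sd * math.sqrt(2 * math.log(n))

        for n in (10**6, 10**9, 10**12, 10**15):
            print(f"population {n:.0e}: expected max IQ ~ {approx_max_iq(n):.0f}")
        # roughly 179, 197, 212, 225: each thousandfold increase in population
        # buys fewer and fewer points at the top.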

        • Oliver Cromwell says:

          I didn’t say it was an efficient method. It nonetheless would work, with some efficiency, and is a known and tested method.

          In contrast it is not clear that coding a “Friendly AI” is even possible. If it is, I grant that it might be more efficient. It also might not.

          I also suggested another known and tested method, which is additionally known to be highly efficient. My engineering instincts tell me that that is the method to try first.

    • Max says:

      So you’re saying human civilization as a whole is not “super”? I’d kinda argue that it is.

  49. TheNybbler says:

    This seems to be the argument against Searle’s Chinese Room. I agree we haven’t built a collective superintelligence (“A committee is an organism with six or more legs and no brain” comes to mind) but I don’t see any reason in principle that one couldn’t exist. I don’t think we’ll ever build one simply by organizing people or animals, but a much smarter brain or intelligent electronic computer doesn’t seem to be ruled out.

    There’s also the augmented intelligence you hint at. Suppose we were able to give an ordinary human brain access to the computing capacity and data storage of all the computers in the world. Such a human doesn’t have any more of the “je ne sais quoi” of intelligence you seem to want. But they certainly will have a greater problem-solving ability. How much easier is it to have insights into mathematical problems if you can recall and correlate all the previous attempts at those problems (and related ones) in your head? How much easier is it to come up with new discoveries if all the data which might point to them is right at your fingertips? I don’t think this augmented intelligence counts as a different quality of intelligence than human. I think, however, that it would be significantly greater than the unaugmented human. You might well make Ramanujans out of “ordinary” brilliant mathematicians.

    • Jaskologist says:

      Scott’s argument seems exactly equivalent to Searle’s Chinese Room to me, and I disagree with both for the same reason. It doesn’t matter that no individual 8-year-old can understand chess like a grandmaster. If we can create a program that does better at chess than he does (and we can), then we can organize 8-year-olds in a computationally equivalent way.

      And if we think we can organize silicon in a way that will be smarter than an individual human, why are we so sure that we haven’t already done so with groups of humans? We already observe human groups acting with goals distinct from those of their individual members, and accomplishing things no individual could.

      • Max says:

        Don’t forget complexity. Physical systems have limits. Neither the Searle room nor an 8-year-old chess computer can exist (just as a mechanical equivalent of an 8080 CPU cannot exist), because such systems must not degenerate into chaos from their error rate and decay. (Eight-year-olds have an insane error rate and decay.)

  50. stillnotking says:

    This is a strange argument. I hope we can agree that a superintelligent computer would behave very differently — would be a different thing — than the Apple Corporation. (For starters, it probably wouldn’t come up with the Cube.) Whether these are two completely distinct “varieties” of superintelligence, or one is a superintelligence and one isn’t, seems like a semantic quibble.

    • Patrick says:

      Yeah. It seems self-evident that structured organizations of humans can solve problems individual humans cannot, most typically large-scale coordination or data-processing problems. They can act in directed ways towards organizational goals that no one human prefers, or even that none of the humans prefer, through a visible process of breaking large issues down into small decisions that are then made on a carefully limited basis. By any definition of intelligence that does not tailor itself to human-like targets, they’re intelligent.

      Demanding “creative spark” seems like refusing to acknowledge intelligence that isn’t like us.

      And of course comparing one genius to an organization of low-IQ people and acting like you’re just isolating the “organization” component is cheating. The relevant comparison is a group of people of mixed ability working solo, versus the same people organized in such a way that the organization functions as something other than the arm of a sort of god-king at the top. If the group can be set up so that it can solve problems the mob cannot, and if it can be structured to incentivize it to move in directions the individuals would not, then I see no meaningful barrier to calling it a superintelligence.

  51. P. George Stewart says:

    Isn’t this coming perilously close to an argument (from design) for God?

    Aren’t we forgetting all that stuff Brer Dennett et al. told us about … way … over … here, to the effect that that “strange inversion of reason” (that clever things can appear out of the interaction of dumb things) is precisely the surprising and counter-intuitive burden of evolutionary theory?

    On a practical, day-to-day level, it’s true though – stuff actually gets done by very few actually effective and functional people. The rest of humanity is more or less along for the ride, and is actually more or less dead weight and always has been. But there be dark enlightenment dragons.

    • Eli says:

      On a practical, day-to-day level, it’s true though – stuff actually gets done by very few actually effective and functional people. The rest of humanity is more or less along for the ride, and is actually more or less dead weight and always has been.

      I work in an office full of “actually effective and functional people”. We do some pretty neat stuff. Neither I nor anyone else in this office actually observes that everyone outside our office seems like dead weight. Being surrounded by an entire functioning economy seems pretty necessary to what we do, actually.

      • Marc Whipple says:

        You are blessed beyond all comprehension, and your situation and results are not typical.

        • Viliam says:

          Working in an office full of actually effective and functional people is rare, but being surrounded by an entire functioning economy is not.

          • P. George Stewart says:

            But actually that’s one of the things at issue isn’t it – how actually effective and functional is the economy in fact, compared to how it could be if everyone involved was actually pulling their weight, not trying to free-ride, not trying to live at everyone else’s expense via the state, etc., etc., etc.?

            I think we can say that our systems are pretty robust because they are tolerant of a lot of waste and slack. But it makes some people wistful to think of what could be.

  52. “There is no number of ordinary IQ 90 construction workers who, when organized into a team, will become smart enough to design and launch a space probe to one of the moons of Jupiter.”

    This is equivalent to the statement that if the IQ that is now called 90 had been the maximum IQ in the human race, humanity would never design and launch a space probe to one of the moons of Jupiter. This is probably false. But if it is true, it may also be the case that “there is no number of ordinary IQ 200 humans who, when organized into a team, will become smart enough to design a human level AI”.

    • moridinamael says:

      > This is equivalent to the statement that if the IQ that is now called 90 had been the maximum IQ in the human race, humanity would never design and launch a space probe to one of the moons of Jupiter. This is probably false.

      I don’t see why you think this is probably false. If we think of technological progress as a kind of ratchet – in the sense that the “top” keeps incrementally rising as our knowledge grows, without ever falling – then the incremental rising of that ratchet depends on a certain level of civilizational stability (sufficient to support a scientific elite) plus the presence of very keen minds to push the envelope. As it stands, with the average IQ humanity actually has, we spent tens of thousands of years killing each other with spears before we even achieved that level of stability. How could the ratchet ever progress without the keen minds pushing at the top?

      • I think the statement about IQ 90 people is probably false because there is a very, very small gap between people with an IQ of 90 and the most intelligent existing human beings. Compared to that, the gap between chimpanzees and humans is extremely large.

        I don’t think it took tens of thousands of years to achieve a stable society. I think it took tens of thousands of years for the ice age to end. As soon as that happened, people developed fairly stable civilizations all over the world, basically independently. This is good evidence that they could have done it 30 or 40 thousand years earlier, if the ice age had ended earlier.

  53. moridinamael says:

    It all boils down to what tasks are (straightforwardly) parallelizable across multiple brains. Something we can loosely call “pattern recognition” generally can’t be.

    The Kasparov-versus-Everybody game is a good illustration. Kasparov is really good at recognizing the best chess move. If you take two, or twenty, or a hundred not-quite-as-good-as-Kasparov players, and let them look at the same chessboard as Kasparov, then all you have is however-many people each arriving (slightly more slowly) at possibly slightly inferior positions. Maybe, if you get enough people, then you’ll have a situation where the best move really is discovered essentially “by chance” by one of the rabble, and the other rabble are savvy enough to recognize it as such and thus vote on it. But this relies on more liberal assumptions of how intelligent you’re going to allow your voting pool to be, which works against the purpose of the hypothetical.

    So, chess is not straightforwardly parallelizable across human brains. Advanced mathematics is probably even less so, because the “size” of the pattern that an Einstein or a Gauss is intuiting/glimpsing/recognizing is larger than any normal person could ever glimpse, given the same starting point those guys had. It really doesn’t matter how well-organized your posse of 100 IQ workers is – if none of them could individually just “see” general relativity in the data, then all of them together never will, either.

    I think calling it “problem solving ability” is not placing the lens in quite the right place. When faced with a problem, the genius is the one who instantly sees the better solutions. That’s the pattern recognition at work.

    Incidentally, all of this is why I personally expect to see artificial superintelligence earlier than the consensus date. This “intuitively seeing really big patterns in data” thing is something that is not parallelizable across brains but is in principle very easy to scale up in silico once a properly formulated algorithm exists. If researchers leave aside all the other cool stuff human brains do in order to survive and persist as an embodied species, and focus on just encoding general abstract pattern-finding and -recognition, we could end up with something effectively superintelligent much quicker than 2070 or whatever. IMO.

    • nydwracu says:

      As was mentioned upthread, Kasparov read the forum and voting was open to anyone.

      • moridinamael says:

        It was obviously an imperfect natural experiment, but it still serves as an interesting example.

        In a sense, every professional researcher is engaged in a perpetual “themselves-versus-the-world” contest. They all have access to the same information and compete in real time to bring forth the most impressive findings. The fact that some particular people consistently achieve a lot more than others, who, again, had access to the same starting information, indicates that the parallelizability of “pattern finding” is not high.

        It seems like all this is a longwinded way of saying “intelligence is a thing that exists”, but the important corollary is that “intelligence does not double when you double the number of brains”.

  54. onyomi says:

    Does anyone remember the story about the CEO who was a big Ayn Rand fan and therefore believed in the infinite creative power of competition in a free marketplace for ideas? According to the story I heard, he tried to make all his departments compete with one another, with mildly disastrous results. I feel like it seemed like an obviously bad idea in retrospect, though I’m still not entirely sure I understand why. There is probably some lesson to be learned about structures of organization which do or do not interact to produce something better than the sum of the parts. Trying to produce something smarter than we are, and which is moreover capable of making itself smarter than we made it to begin with, might be a related challenge.

    • moridinamael says:

      It’s an obviously bad idea because it completely fails to harness incentives correctly. The incentives in Capitalism in general are vaguely along the lines of “give people things they need, and do it better than the next guy, and you’ll be rewarded.” Competition within a company is like competition within a sports team. A linebacker who just wants to look good compared with the QB is incentivized to subtly sabotage the QB. At the very least, the focus is shifted away from beating the other team, or just generally playing well, toward activities that are at best not helping the team win.

      • onyomi says:

        But don’t all those bad incentives apply among companies? And if so, why doesn’t a centrally planned economy, in which no one is working at cross purposes, work better (and I take it as axiomatic that it doesn’t)? Obviously there are economies and diseconomies of scale, but is there an inherent difference between two different departments within the same company and two different companies involved in the same economy? Maybe it’s that companies which work together to produce an end product all have an incentive to cooperate without sabotaging each other, but within a single company aimed at producing a single product, introducing competition between producers of components of the single product just sabotages necessary cooperation?

        • moridinamael says:

          > Obviously there are economies and diseconomies of scale, but is there an inherent difference between two different departments within the same company and two different companies involved in the same economy?

          Since I take this to be the crux of it, I would say the answer is very much yes. I was fortunate to work for what I consider to be a really good, well-run company for about a year. The good thing about it was that the departments were encouraged to go out of their way to cooperate with, support, and reinforce each other. The metrics of success were either company-wide metrics, or they were metrics specifically tracking the quality of inter-department utility. This promoted a culture where you don’t just address a problem, but you go out of your way to understand why the problem happened and possibly identify the underlying causes so it doesn’t happen again, even if the problem wasn’t within your own department. The opposite of the silo-ized megacorporation.

          If the departments had been in competition instead of incentivized cooperation, this synergy wouldn’t have existed.

        • Adam says:

          It seems to me that much of the problem of a command economy is a problem with overly centralized decision-making, not a problem of aligned versus competitive incentives. An economy of companies all competing with each other for market share but all with the incentive to achieve economy-wide success is a strong thing. For instance, had the CEO made divisions compete with each other under the threat that if the entire company ever failed, outsiders would come in and potentially murder all the employees and their families, they might have achieved better results than they did with “compete with each other, but if the company fails, you leave and get a better job somewhere else.”

        • nydwracu says:

          It’s computationally intractable for central planners to calculate rational prices, so instead we built a very large and very powerful machine to do it for us and then it took over the world.

    • Vaniver says:

      No, I don’t remember that story, and suspect that I should defy the data until someone does remember it. The person I do remember being an Objectivist CEO was John Allison IV, who grew BB&T from a regional bank into a national powerhouse and whose focus on win-win relationships and principled lending meant BB&T avoided the last financial crisis.

        • Vaniver says:

          Thanks for the link!

          Hmm. Overall, the basic plan could make sense, but Lampert is probably the wrong person to do it and they probably started from the wrong place.

          That is, putting Tools and IT in competition is just startlingly dumb. But making Tools pay IT for the work IT does–and, more importantly, freeing Tools to contract for IT services from someone else if that would be a better option–does make sense, since that lets you visualize the flow of value better. Maybe Sears shouldn’t have an in-house IT department.

          But fights over money are just weird, if you actually have a functioning marketplace. If Tools is paying IT, then why the heck does IT need to ask Lampert for money? If cooperation brings in more money (i.e. Tools having a sale that costs it $1 brings in $2 for all other departments), why isn’t there an agreement to subsidize Tools for doing those sales? If the money comes in offline, why is Lampert focused on online sales?

    • Oliver Cromwell says:

      The way to make this work is to de-conglomerate the departments into separate companies. This is actually a good idea. Every developed economy in the world is over-conglomerated because of regulation. But that is not a problem an individual CEO can solve.

      A de-conglomerated tech giant is essentially what Y Combinator has created, and it blows the doors off any equivalently sized bureaucratic megacorp out there. This is only possible because Y Combinator pioneers areas that haven’t been regulated yet, or else the innovation it is pioneering is in the field of evading regulations, e.g. Airbnb.

      • onyomi says:

        Interesting. I felt like there must be a way to make it work, but that it would have to be designed properly, which it clearly wasn’t at Sears. Maybe making a bunch of smaller companies is the way. I wonder if the competition doesn’t also need to be between people doing the same thing: if you make two mini companies compete to see who can make the best, cheapest component x then one might expect them to do so in a productive way. But making a company which produces component x artificially compete with a company who produces component y when both x and y are necessary parts of consumer product z might be less useful.

        • Oliver Cromwell says:

          Why do they need to compete, or what does that even mean? They are operating in different markets. Walmart feeds a lot of Lockheed Martin’s employees, and is therefore in some sense part of Lockheed Martin’s supply chain, but Walmart is not competing with Lockheed Martin.

          In principle every worker should be able to incorporate, buy all necessary inputs for his position on the production line, and sell them on to the next.

          In practice some amount of conglomeration is efficient, sometimes for coordination reasons, but often because regulatory compliance and political corruption are usually more efficient at large scale. More regulated, politicised economies tend to be more dominated by megacorps, all else being equal.

  55. Furslid says:

    I admit that I have done what you are condemning here, but as a devil’s advocate. It is useful to stress-test definitions of superintelligence, artificial intelligence, and the like: when I encounter one, I use this kind of example to test it. I ask “Is X a superintelligence?”

    If a definition of superintelligence allows Apple, Wikipedia, the New York Stock Exchange, or NASA to be superintelligences, that’s good to know. It doesn’t mean that there are existing superintelligences. It means that superintelligence is defined poorly. It probably is a good idea to fix this fundamental problem, as definitions that vague probably aren’t sufficient here.

    It is possible that someone believes that I believe Wikipedia is an AI. That’s a result of poor communication, not me believing Wikipedia is an AI.

  56. Pj says:

    One question and one thought:

    Q. If a group is smart enough to create a tool, does it get credit for the abilities of that tool? What if the group is smart enough to create a superintelligence?

    Thought: Your assertion seems to be “Intelligence isn’t subject to a brute force approach.” Gödel seems to be on your side in this argument, but practically I’m not sure there’s much difference between a superintelligence and a massively-parallel-brute-force intelligence. Heck, more and more proofs are using brute force as a tool (thinking of Fermat’s Last Theorem here).

  57. Briefling says:

    There is no number of chimpanzees which, when organized into a team, will become smart enough to learn to write.

    There is no number of ordinary eight-year-olds who, when organized into a team, will become smart enough to beat a grandmaster in chess.

    There is no number of ordinary IQ 90 construction workers who, when organized into a team, will become smart enough to design and launch a space probe to one of the moons of Jupiter.

    There is no number of “average” mathematics PhDs who, when organized into a team, will become smart enough to come up with the brilliant groundbreaking proofs of a Gauss or a Ramanujan.

    Your first two examples seem obviously true, your third example seems very probably true, and your last example seems possibly false…

    This pattern of decreasing credibility makes your argument very shaky. By analogy with your four examples, you want us to believe that:

    There is no number of human beings who, when organized into a team, will be able to replicate the most important feats of the superhuman artificial intelligences that humanity may develop in the coming centuries.

    [I apologize for putting words in your mouth.]

    But you have not earned this conclusion at all. The gaps in ability are clearly getting smaller as we go through your examples. It seems plausible that there is some intelligence threshold, above which you can always get from Level N to Level N + 1 via numbers, time, and organization. If anything, your examples seem to lend credence to this possibility.

    I really, really think that you are assuming your conclusion in this post.

    • Mariani says:

      Do you know of any examples of teams of mathematicians making breakthroughs in things too conceptually difficult for any of the members to have done alone?

      I think the important point here is the distinction between speed and intelligence. Calculators have been faster than humans at crunching numbers for over a century, but that is obviously not intelligence. Speed can often look like intelligence, with a brute force program spitting out the same results as a pattern-recognizing neural network.

      Teams make things faster, but probably can’t do anything that is more conceptually difficult than the most intelligent member of the team can. There are hard intelligence requirements in mathematics. Real analysis is probably simply impossible to grasp for someone like Forrest Gump.

      A counterexample might be something that doesn’t have a hard intelligence requirement but is made much better by high intelligence. Law, for example, isn’t particularly difficult conceptually (you probably know a lot of dullards with JDs), but being a smart person makes you a better lawyer.

      Even smart lawyers have paralegals, though, and it’s probably true that a dumb lawyer with paralegals can produce legal stuff just as “intelligent” as a smart lawyer without paralegals can.

      • Anonymous says:

        Even smart lawyers have paralegals, though, and it’s probably true that a dumb lawyer with paralegals can produce legal stuff just as “intelligent” as a smart lawyer without paralegals can.

        This is just wrong. There are generally two kinds of paralegals. The first is basically a lawyer without a license – more common thirty or more years ago, when there was a deeper pool of people (mostly women) who should have gone to college or grad school but were prevented from doing so by circumstances. The second is a glorified secretary who helps keep things organized, does some grunt work like filling in citations, and perhaps some copy-editing.

        The first type might well be able to “help” produce a first rate legal brief for a not-very-bright lawyer but only because she could have written one herself. The second kind, even if there are five or ten of them, is not going to be able to turn dross into gold.

        • Marc Whipple says:

          More generally, saying that a team of a dumb person and a smart person can do anything the smart person could do so long as the dumb person stays out of the way* doesn’t tend to show that teams with varying levels of intelligence can collectively be smarter than the smartest person on the team. And it’s pretty much tautological that such a team can be as smart as the smart person if the dumb person doesn’t interfere.

          Also, I know people who practice law who are dullards compared to me. I don’t know many people who are dullards compared to the average person who are practicing law. Not sure the original analogy is a good one in the first place.

          *This is especially critical in the sort of example given, where the dumb lawyer is entitled by both hierarchy and sanction to override the smart paralegal.

        • Mariani says:

          I’m using the term as a catchall to include researchers. Smart lawyers are good lawyers because they are able to make valuable connections and find good patterns in a complicated mess of statutes and precedents. These are things that a not-so-smart lawyer could eventually do, and a team of aides could make them happen when they otherwise wouldn’t, or just make them happen faster. That’s why I mentioned the speed-of-calculation thing in the same post.

          The limiting factor of good law isn’t the conceptual difficulties in comprehending legal concepts themselves.

          • anonymous says:

            I don’t think you have a good sense of the legal profession, or at least not all of it. A large portion of what many lawyers do is persuasive writing. There’s no algorithm for that and top examples can’t be ground out by throwing more people at the problem.

            Why does your field need a whole journal, anyway?

          • Marc Whipple says:

            The creative writing aspect of persuasive legal writing is a good example. For another: I am a transactional lawyer. A robot (or if you prefer an Intelligent System) could do quite a bit of my job. You could set it up so that the user just ticked boxes (“Software. License. Developer. Single Publisher. Straight Royalty. No Advance. *Print*”) and got an agreement to sign.

            However, a robot couldn’t do the part which consists of evaluating a given situation and deciding whether or not a robot could do it. So a robot could not, in a practical sense, do ANY of my job. Substitute “dumb paralegal,” and the same applies.

            And it definitely can’t do the part where I invent contract terms to reflect new technologies and/or unique transactional parameters. This happens more often than you’d think, at least at the level I practice at.

          • Mariani says:

            Well, persuasive writing is another example of something that is not conceptually difficult but smarter people do better.

          • Marc Whipple says:

            I respectfully decline to stipulate that persuasive writing is not conceptually difficult for the average human being.

      • Ariel Ben-Yehuda says:

        I think the point was that, without the “secretary” kind of paralegals, a lawyer would be stuck doing the rote work himself, which would interfere with his ability to produce high-quality legal stuff.

        • anonymous says:

          That means that a smart, but overworked, lawyer might produce a brief that looks like a dumb lawyer wrote it, but it doesn’t mean that a dumb lawyer can produce a brief that looks like a smart lawyer wrote it if only he had the time.

      • Jeffrey Soreff says:

        Teams make things faster, but probably can’t do anything that is more conceptually difficult than the most intelligent member of the team can.

        This sort of claim has come up quite a few times in this discussion, and I’m skeptical about it. The neuron and ant colony analogies have been hashed over upthread.

        Sticking closer to human teams with conceptually complex problems: Consider some physical system being designed or analyzed by a team where there are N different important physical phenomena to consider, and the interactions of all N overwhelm anyone’s working short-term memory. Sometimes, you can break down the analysis to N(N-1)/2 pairwise interactions, each of which does fit in someone’s working short-term memory, and assign one team member to analyze and abstract the effects of each pairwise interaction and report the abstracted version of the interaction back to the rest of the team. This doesn’t always work – e.g. there can be feedback loops which involve too many phenomena to hold in anyone’s head – but it frequently does work.

  58. NRK says:

    “if we made a bunch of ordinary eight-year olds follow a simple set of operations that corresponded to a logic gate, and arranged them so that they simulated the structure of Deep Blue, then they could win high-level chess games. This is true. But eight-year-olds could not come up with and implement this idea.”

    Doesn’t EvoPsych suggest that “a bunch of eight year olds” – as in, a bunch of comparatively primitive agents, each acting as a logic gate – is pretty much what we’re talking about when we talk about the kind of mind that could “come up with and implement this idea”? Isn’t even the most brilliant mind made up out of neurons, arranged in a modular fashion?
    Furthermore, if neurons subscribed to methodological individualism, would it blind them to the fact that their interactions form an intelligence so far superior to them as individuals, that it can hardly even conceive of them as intelligent agents in their own right without a massive suspension of disbelief?

  59. Eckhard Fuhrmann says:

    IMO there is exactly one context in which talking about e.g. corporations as “quasi-Superintelligences” is legitimate: in analyzing the reasons for people’s emotional investment in the idea of superintelligences.
    I think that one reason why stories about superintelligent AI are gaining in popularity is that they resonate with a feeling that many people get when dealing with corporations or bureaucracies: namely, that these “superhuman” actors’ agency so vastly exceeds their own (and their logic can be so counterintuitive) that it seems like they’re operating on an entirely different level of thought and perception. — Watching a show like “Person of Interest” then becomes appealing because it provides a metaphor, a graspable concept, and shows heroes fighting back against that sort of “machine”.
    So yes, I’d say on a purely emotional-experiential level, we as individuals are already dealing with “superintelligences” on a daily basis; and I think that’s something to keep in mind when communicating ASI-related ideas to the general public. Of course, like most metaphors, this doesn’t make any sense when you start pulling it apart; but that fact doesn’t make its emotional impact go away.

  60. Max says:

    I keep getting the same objection in the comments: if we made a bunch of ordinary eight-year olds follow a simple set of operations that corresponded to a logic gate, and arranged them so that they simulated the structure of Deep Blue, then they could win high-level chess games. This is true.

    NO! This is a classic fallacy! You cannot make such a system out of eight-year-olds, because eight-year-olds are not nearly as reliable, miniature, or manageable as logic gates.

    Complexity is something people get intuitively wrong a lot of the time in such arguments.

    • sweeneyrod says:

      You clearly don’t have enough experience with classical conditioning! Given enough time and methods of electrocution, a chess-playing system of eight-year-olds would be easy enough (although I would advise against trying it out in practice, due to ethical concerns).

    • Scott Alexander says:

      Surely you could just build a fault-tolerant system by, say, making ten eight-year-olds simulate each logic gate, and only accepting the calculation if eight or more agree on it or something.
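
      A minimal sketch of that redundancy arithmetic, assuming each child independently gives the wrong answer with probability 0.1 per gate evaluation (the numbers are illustrative, not a claim about real children):

      ```python
      from math import comb

      def p_at_least(n, k, p):
          """Probability of at least k successes in n independent trials with success probability p."""
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

      p_err = 0.10        # assumed per-child error rate per gate evaluation
      n, k = 10, 8        # ten children per gate; accept the output only if >= 8 agree

      p_accept_right = p_at_least(n, k, 1 - p_err)   # >= 8 children give the correct answer
      p_accept_wrong = p_at_least(n, k, p_err)       # >= 8 children give the same wrong answer

      print(f"accepted and correct: {p_accept_right:.4f}")    # ~0.93
      print(f"accepted but wrong:   {p_accept_wrong:.1e}")    # ~3.7e-07
      ```

      When fewer than eight agree you simply re-run the gate, so the error rate per accepted gate drops from 10% to well under one in a million, at the cost of a constant-factor increase in the number of children.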

      • Max says:

        You’re just extending the fallacy… Fault tolerance is not magic (you need more elements); any system made of eight-year-olds has an upper bound on complexity.

        To make things clearer – there are reasons why early computers were very limited in what they could do; simply piling on more vacuum tubes, relays, and 1940s–1950s electronics would not have done it. Only with the invention of the transistor did things really take off.

        And vacuum-tube electronics are vastly superior to eight-year-olds. Many problems in computing boil down to physics – bandwidth, processing speed, data density, reliability. Computing is not math; it is bound by reality.

        p.s. This is why I think people with no background in CS/EE should not make philosophical arguments about AI 🙂

  61. Mariani says:

    You hear this kind of pseudo-thoughtful bullshit from the same people who say things like “the war on terror is world war 3” and think that this image is amazing: http://blogs.cornell.edu/art3711-aaa224/files/2013/10/neuron-1nhr65r.jpg
    We get it guys, you can draw superficial comparisons.

  62. Orphan Wilde says:

    Corporations are the wrong scale. They’re intelligences, but not very bright, or at least not sufficiently bright as an independent intelligence to match the brightest part of their organization. They just don’t have enough processing units, enough neurons – individuals. Governments, too.

    Capitalism is approaching the correct scale, and at the national and particularly the global level, you do start to see behavior more like what you’d expect from an alien superintelligence.

    • Latetotheparty says:

      This. People are not the neurons. People are the atoms. Profit-making entities (such as corporations) are the neurons. Capitalism is the super-intelligence.

  63. 27chaos says:

    You’re coming pretty close to arguing that intelligent design is necessary for superintelligence in your remarks on Steve Jobs and 8 year olds. Why do you believe this? I find your position odd, given that in the past you’ve been willing to give credit to ideas like memetics and cultural evolution. A designer is not strictly necessary for collectives to be able to solve difficult problems. For example, the free market is not designed.

    I agree it’s rather metaphorical to discuss corporations as “superintelligences”, but I think the metaphor has some bite to it and is worth entertaining. Anthropomorphizing the thoughts of machine intelligences is more inaccurate, I think, but because we need a way to talk about what AI might do we often choose to do so anyways. Metaphors aren’t just for poetry, they can be useful too. Thinking about an ecosystem or corporation or whatever as AI helps us to explore some of the more distant regions of mind-possibility space.

  64. Uniquely creative individuals (artists in my personal experience) don’t tend to do well in teams of any sort. I suspect that they also need to be left alone to brood, goof off, explore, mess around etc in a somewhat hostile environment that drives them to solve a problem. None of this describes a team form of AI.

  65. MawBTS says:

    Matt Ridley reviewed A Troublesome Inheritance, and said this.

    “The average IQ of a group, a team or a race matters little, if at all. What counts is how well they communicate, collaborate and exchange ideas. Give me a hundred thickos who talk to each other, rather than a hundred clever-clogs who don’t. This collaboration is surely the true secret of human achievement and the true reason that race does not count, not because we are all identical inside our skulls.”

    I doubt he really believes this statement. Or if pressed on it, I doubt he’d try hard to defend it.

    History is a testament to the fact that few things are as dangerous to human achievement as a hundred thickos talking to each other.

  66. arthurarthurarthur says:

    Many people commenting seem to me to be missing real experience around bleeding edge scientific progress.

    I can give my anecdotal experience as a professor at Stanford University (won’t give name due to things I might say that could be called unkind, but Scott can confirm if people care).

    Real progress that requires fundamentally new ideas is critically dependent on creative effort of (in almost every case I can think of) a single mind. Groups can implement, iterate, and move forward once the leap is made, but the proverbial “great man” must first make the leap. That is by far the hardest part of the scientific process, in that it is the only really necessary condition (the rest can be brute-forced with enough money, typical-PhD intellectual power, and time).

    In my experience – even at a very selective university – the productivity of graduate students varies by at least a factor of 10 (and perhaps 100) between those who simply get through by force of will and the true standouts, who arrive rarely. This is not in terms of number of papers published (though that varies a lot as well) but in terms of (difficult to quantify) actual intellectual progress. I have seen whole research groups pivot on a dime when one creative result opens up a field of possibilities.

  67. I’m surprised this post received so much pushback and nitpicking. Seems obviously right to me.

    Anyway, as with all discussions of intelligence/creativity, I think it’s helpful to talk about solution spaces. Like, Bostrom likes to talk about the difference between “speed intelligence” and “quality intelligence”, where speed intelligence is the ability to solve the same problems that humans can but faster, and quality intelligence is the ability to solve problems that no human can solve. This is a useful distinction but not really a fundamental one, I think – past a certain point all intelligence is speed intelligence, ultimately. If nothing else you can always just enumerate all the possible solutions to a problem and go through them one by one until you find a good solution. It may take a long time, but you can do it. So when we say that an average math PhD could “never” do what Gauss did…well, they could, it might just take umpteen ages of the universe or whatever. Gauss was special because he could find proofs quickly. What we call intelligence is the ability to (somehow) single out good solutions from the vast space of possible solutions we could be considering, and to do so in relatively short periods of time.

    Of course, given how big that space really is, it’s still useful to talk about quality intelligence, since enumerating solutions is almost never practical. But I still think it’s good to remember that it’s not a fundamentally different thing.
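
    To make the “enumerate and check” picture concrete, here is a toy brute-force search in that spirit; the particular puzzle (a Pythagorean triple summing to 1000) is just an arbitrary illustration:

    ```python
    from itertools import count

    def brute_force(is_solution, candidates):
        """Go through candidates one by one until the checker accepts -- pure 'speed intelligence'."""
        for x in candidates:
            if is_solution(x):
                return x

    def candidates():
        # Enumerate pairs (a, b) and derive c so that a + b + c = 1000.
        for a in count(1):
            for b in range(a + 1, 1000):
                yield (a, b, 1000 - a - b)

    print(brute_force(lambda t: t[0]**2 + t[1]**2 == t[2]**2, candidates()))   # (200, 375, 425)
    ```

    No insight is involved – just a checker and patience – which is the sense in which enough speed can substitute for cleverness on any problem whose candidate solutions can be enumerated and verified.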

    • Oliver Cromwell says:

      What bothered me was less the specific claims about things that are not superintelligences than the implication that ‘superintelligence’, at least in the way Alexander means it, is a thing at all.

      The way Alexander has defined superintelligence in his post – arbitrary computing power used arbitrarily efficiently – is impossible. It is an asymptote to infinity. It is God. What Alexander is doing here is like a Victorian gentleman scientist with an interest in steel production speculating about what the world will be like when Bessemer forges the irresistible hammer and Krupp forges the unbreakable object. The answer is nothing, because they are flights of ill-informed fancy that will never exist.

      Perhaps he just meant something more intelligent than a human, but not infinitely more. No basilisks and other LessWrong cultish silliness. Alexander makes a very good point that a 130 IQ human is rather different in many ways to a 90 IQ human and, presumably, a 170 IQ human would show similar differences to a 130 IQ human. Fine. Who doubted it?

      The idea Alexander and Yudkowsky are interested in that might actually be worth studying is the possibility of a non-human intelligent species whose IQ is rising faster than that of humanity. Whether it’s artificial or biological is unimportant, it just seems to them that artificial is more likely. The level of intelligence that it eventually reaches also doesn’t matter much, so long as it can rise much above that of humanity.

      So here is a question: what if the Chinese (or whoever, but it will probably be them) start selectively breeding people and modifying genomes specifically for IQ? This seems much more likely to me than Eliezer Yudkowsky programming a rapidly recursively self-improving AI. Or even someone who knows what they are doing programming such a thing.

      To me it doesn’t seem more likely that a hypothetical intelligence improving itself much faster than [the bulk of] humanity in the near future will be an AI at all, or even non-human. More likely it will be a fraction of humanity that avoids crimestop on genetics and intelligence long enough to realise the implications of Yudkowsky’s and Alexander’s possibly true belief that maximising your faction’s IQ is an ‘I win’ button for all other political disputes.

      • “The idea Alexander and Yudkowsky are interested in that might actually be worth studying is the possibility of a non-human intelligent species whose IQ is rising faster than that of humanity. Whether it’s artificial or biological actually doesn’t really matter, just seems to them that artificial is more likely. The level of intelligence that it eventually reaches also doesn’t matter much, just so long as it’s at least quite a bit above that of humanity.”

        I mean, I don’t think it’s just that they think artificial intelligence is more likely. I think the reason they focus on AI so much is because (i) recursive self-improvement would be much faster in the case of AI, because having direct access to your own source code is potentially a very powerful thing and (ii) the ceiling for intelligence is probably much higher for AI than it is for enhanced humans.

        Speaking for myself, I agree that selective breeding and/or genetic engineering are much more likely to occur in the near future than AI. I just don’t find those things nearly as scary.

        • Oliver Cromwell says:

          Of course I would intuitively think recursive self-improvement would be much faster with AI: it’s much easier to open emacs than to get a girlfriend, after all (for some…).

          Problem is this same intuition would make me pretty confident that we would have robots that can reliably tell the difference between something that is a human face and something that is not a human face before we would have robots that can beat Kasparov at chess. Didn’t work out that way.

    • Marc Whipple says:

      Hee Hee Hee.

      Also, I’m not sure how one enumerates all potential solutions absent omniscience. At what point, starting from zero, does our hypothetical enumerator come up with the potential solution, “The reason that electron that was here a second ago is now in the Lesser Magellanic Cloud is that particles can quantum jump* from A to C without passing through any point B between?” Unknown unknowns are a real thing and no amount of computational power short of infinity will overcome that.

      *This wouldn’t actually happen, but the example is still illustrative.

      • Adam says:

        Well, besides the fact that many problems have uncountable solution spaces and can’t be enumerated even with omniscience, infinite time, and infinite energy.

        • Andrew G. says:

          many problems have uncountable solution spaces

          Give one example.

        • Adam says:

          x^5 + x^2 = 20, where x can be any complex number.

        • Marc Whipple says:

          @AndrewG:

          Google, first page, produces a text on operations research which asserts that a variant of the Traveling Salesman Problem it calls the “Close Enough Traveling Salesman Problem” has an uncountable solution space. Once one understands what is meant by the concept of an uncountable solution space it seems pretty obvious that they are logically possible.

          As I said, my degree is a humble bachelor’s, and advanced topology is not something I’ve studied rigorously. But I think I get the gist of it, and I agree with the prior poster that you can define physical problems which a physical calculator could not solve by brute force even with infinite time/energy.

          @Adam:

          Hee hee hee again. Yours is WAY simpler than mine.

        • brad says:

          Would the close-enough traveling salesman solution space still be uncountable if you quantize distance?

        • sweeneyrod says:

          @Andrew G.
          Which is my favourite real number?

        • Marc Whipple says:

          @brad:

          The solution space of the problem as stated would not be. However, I can redefine the problem in such a way that it would*. You cannot get around the general problem of uncountable solution sets by taking the elements out of any particular problem which make the solution set uncountable. 🙂

          *The REALLY TRULY answer, I think, is that it would still be uncountable if you consider the traveling salesman to be a quantum-mechanical particle, which technically he is. Normally this would be irrelevant due to the odds that a QM particle with the rest mass/momentum of a person would probably not deviate from classical behavior over the expected lifetime of the universe, but if we’re giving the superintelligence infinite time to think, we have to give the traveling salesperson infinite time to perform quantum jumps and/or accelerate. If the superintelligence could access unlimited energy, then so could the traveling salesperson.

          And I have a Dr. Heisenberg on Line 2 who’d like to talk to you about something.

        • Andrew G. says:

          @ Adam: roots of a polynomial with integer coefficients are algebraic numbers, and those are countable (even in the complex number field, and even for degrees higher than 4).

          @ Marc Whipple: your reference is arguably incorrect in describing the solution space as uncountable; this would require that the distances between the points of the problem space and the solution points are properly real numbers. But they clearly are not, because in order to even describe the problem and the solution in a finite space, all the point coordinates must be computable numbers, which are countable.

          @ sweeneyrod: you only have finite storage capacity in your brain, so your favourite real number—assuming you genuinely have one—is finitely describable, and therefore the solution space is countable.

        • Adam says:

          Fair point, but you still can’t guarantee a solution will ever be found by brute force.

        • anon says:

          @Adam, depends on what you mean by “found”. Finding solutions to diophantine equations (i.e. finding *rational* solutions to a polynomial equation) is hard; you can enumerate the possibilities and search exhaustively, but (in high-dimensional cases, i.e. multiple variables) determining whether a solution exists may be hard. Finding complex solutions (to arbitrary precision) is basically trivial. For example in the 1-d case you can use Newton’s algorithm.
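
          For the one-dimensional case mentioned, a minimal Newton’s-method sketch (the starting point and tolerance are arbitrary choices; a complex starting point would converge to one of the complex roots instead):

          ```python
          def newton(f, fprime, x0, tol=1e-12, max_iter=100):
              """Newton's method: refine a root estimate to (nearly) arbitrary precision."""
              x = x0
              for _ in range(max_iter):
                  step = f(x) / fprime(x)
                  x -= step
                  if abs(step) < tol:
                      break
              return x

          f = lambda x: x**5 + x**2 - 20      # the example equation above, as f(x) = 0
          fprime = lambda x: 5 * x**4 + 2 * x

          root = newton(f, fprime, x0=2.0)
          print(root, f(root))                # real root near 1.76, residual ~0
          ```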

        • Adam says:

          Of course you can use Newton’s algorithm if you have a good initial guess. That isn’t Brute Force, though. What you can’t do is just try every single algebraic number to see if it’s the answer.

        • James Picone says:

          We just don’t warn the students they are flirting with a famously Hard Problem, and don’t even have a good algorithm for them to follow other than “just look at the problem and think about what might work until one of the solutions works”.

          The tiny bit on recurrence relations we did in my software engineering degree (‘this is how you work out O() on toy problems! Isn’t this useful?’) ended up pretty similar to that. “Figure out F(n) in terms of F(n-1), and then maybe substitute F(n-1) for it in terms of F(n-2) or think about it or something to try and figure out F(n) in terms of n!”
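
          For readers who haven’t seen the trick, the unrolling being described looks like this on a toy recurrence with $F(1) = 1$:

          $$F(n) = F(n-1) + n = F(n-2) + (n-1) + n = \dots = 1 + 2 + \dots + n = \frac{n(n+1)}{2} = O(n^2).$$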

    • I think the post is obviously right in some ways, and obviously wrong in the way Scott is using it, namely to support the AI risk idea. The existence of self improving corporations is evidence against AI risk, whether or not you call those corporations superintelligences.

    • Oliver Cromwell says:

      Very true, but the other side is also guilty of this. Scott’s view of an AI is something that can perform any arbitrary computing task instantly, not just a very powerful computer running very efficient algorithms.

      Things like AIs simulating the universe to create heaven and hell, predict all possible moves of opponents, etc. – easy to conceptualise, not so practical in reality.

  68. Eric says:

    Doesn’t the question basically come down to, “In what ways can intelligence be distributed?” All of your examples seem to point to the following argument:

    1. A “system’s intelligence” is its capacity to solve problems.

    2. Implicit in this capacity is the ability to utilize other systems to solve problems. If I can use a calculator to solve arithmetic, my “intelligence” is said to encompass very long arithmetic problems.

    3. Every system has some finite limit to its ability to organize systems to solve problems. Every limit has some set of problems it cannot solve even with unbounded resources.

    4. Therefore, every system is limited by its most “intelligent” member.

    This seems like a valid argument, but there is some subtlety in its premises. I think the real problem is its assumptions about inductive reasoning. I, personally, might find the Goldbach conjecture pretty difficult at least without the right amount of coffee. However, by testing some large number of examples, I can increase my confidence pretty well without any particular intelligence whatsoever.
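
    For instance, a brute-force confidence check of exactly this kind, with an arbitrarily chosen limit:

    ```python
    # Check Goldbach's conjecture (every even number >= 4 is a sum of two primes) up to LIMIT.
    def prime_sieve(n):
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, int(n**0.5) + 1):
            if sieve[i]:
                for j in range(i * i, n + 1, i):
                    sieve[j] = False
        return sieve

    LIMIT = 100_000
    is_prime = prime_sieve(LIMIT)
    primes = [p for p in range(2, LIMIT + 1) if is_prime[p]]

    for even in range(4, LIMIT + 1, 2):
        if not any(is_prime[even - p] for p in primes if p <= even // 2):
            print("counterexample:", even)
            break
    else:
        print(f"No counterexample up to {LIMIT}.")
    ```

    This proves nothing, but it is the kind of confidence-raising that needs no particular intelligence, only patience.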

    Similarly, I think you’re pulling the rug out from a number of counter-examples using the above argument. If the 8-year-olds won the chess game, you call them “intelligent” 8-year-olds.

    I would suggest better tests of intelligence than specialized activities like chess or proof-finding in mathematics, where you are granting an enormous amount of advantage to people for no reason except specialized knowledge. If you look at escape rooms, for instance, you will find that 10 organized players will easily triumph over, say, 3 Google employees.

    • MawBTS says:

      Similarly, I think you’re pulling the rug out from a number of counter-examples using the above argument. If the 8-year-olds won the chess game, you call them “intelligent” 8-year-olds.

      I wouldn’t. I’d assume that they were cheating (was an even better grandmaster helping them with their moves behind the scenes, shades of Charles Ingram?). Or that the grandmaster had deliberately thrown the contest because he thought it would be funny.

      If my 80-year-old grandma KO’d Mike Tyson in a boxing match, my first thought would be “there’s something not right about that boxing match”. My absolute last thought would be “she’s a better boxer”.

      This is a result so improbable that you’d expect it to literally never happen. A group of average 8 year olds should never be able to beat a grandmaster at chess under fair circumstances.

    • chaosmage says:

      What would be a better test of intelligence? My own suggestion is playing unfamiliar board games against other (human or machine) players. Performance in these games should depend on tasks at which we’re still better than current machines: strategic optimization for arbitrary variables, doing things for the first time, modeling opponents. So maybe performance in them is an interesting metric for machine learning systems to maximize.

      This could be a standardized test, if the choice of possible rulesets is broad enough that a randomly chosen ruleset will be new to practically any player. The games can all use the same bunch of types of tokens and dice and cards and whatever – in physical or digital form. There just needs to be a huge repository of sets of rules for how these tokens interact. They could even be randomly generated.
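
      A hypothetical sketch of what such a randomly generated ruleset might look like; every component name below is made up purely for illustration:

      ```python
      import random

      COMPONENTS = ["pawn", "tile", "d6", "card", "token"]
      MOVE_RULES = ["move one step orthogonally", "swap two adjacent pieces",
                    "draw a card and apply it", "roll the d6 and advance that far"]
      WIN_CONDITIONS = ["collect ten tokens", "reach the far edge",
                        "empty your hand", "control the most tiles"]

      def random_ruleset(seed):
          rng = random.Random(seed)          # seeded, so a given test can be replayed exactly
          return {
              "components": rng.sample(COMPONENTS, k=3),
              "moves_per_turn": rng.randint(1, 3),
              "move_rules": rng.sample(MOVE_RULES, k=2),
              "win_condition": rng.choice(WIN_CONDITIONS),
          }

      print(random_ruleset(seed=42))
      ```

      The interesting (and unsolved) part is of course generating rulesets that are playable and balanced, not merely random.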

      • Adam says:

        I think this is a great test and it highlights exactly how far away computerized systems are from what I would personally consider ‘human-level’ intelligence, as there does not exist a single software system that can learn to play arbitrary games without human modification.

        • chaosmage says:

          Thanks for the approval. 🙂 Any hints where I should submit the idea?

          You’re technically wrong on there being no computer system that can learn to play arbitrary games, though. This was demonstrated for Atari 2600 video games back in early 2014. It is just that this system got a lot of trial-and-error time, its target metric was fairly simple to maximize, and it didn’t play against other learning agents.

          • Adam says:

            I intentionally used the word ‘arbitrary’ rather than ‘multiple.’

          • James Picone says:

            @Adam:
            This project is an AI that plays arbitrary NES games. It’s not very good at some kinds of game (Tetris, for example, it is awful at), but it’s very good at some other kinds of games (simple platformers).

            Do you need more ‘arbitrary’ than ‘arbitrary NES games’?

          • Adam says:

            Yes. I really did know about console game solvers that can learn anything with a pixel representation of actions and rewards, and am not just retconning in a post-hoc requirement. Humans can learn to play games on infinitely many platforms. I use reinforcement learning for my work, so while it’s not something I’m a bona fide expert on (i.e., no published results or a PhD in the field), I am aware of current research.

            I believe in both cases also, the functional representation mapping pixel arrangements to rewards still has to be uniquely programmed by a human for each game separately.

            My own work is algorithmic trading and you can definitely use the same learning engine over infinitely many problems, but I’ve never found a way to automatically map action selections on the part of the learner to literal actions in the world and the outcome to reward/penalty returned to the learning engine. This is still a uniquely human trait to be able to use sensory input and our various communication faculties in a sufficiently general way without a third party reprogramming us each time to understand what is happening.
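
            A sketch of that point, with a hypothetical environment interface: the learning engine can be generic, but the glue that maps its abstract actions to real-world actions, and real-world outcomes back to a scalar reward, is written by a human for each domain (all names below are invented for illustration):

            ```python
            from abc import ABC, abstractmethod
            from typing import Any, Tuple

            class Environment(ABC):
                @abstractmethod
                def reset(self) -> Any:
                    """Return an initial observation."""

                @abstractmethod
                def step(self, action: int) -> Tuple[Any, float, bool]:
                    """Apply an action; return (observation, reward, done)."""

            class ToyTradingEnv(Environment):
                """A human decided that actions 0/1/2 mean sell/hold/buy, and that reward is
                the mark-to-market P&L of the resulting position. That mapping is the part
                no current learner discovers for itself."""
                def reset(self):
                    self.position, self.last_price = 0, 100.0
                    return self.last_price

                def step(self, action):
                    self.position += action - 1          # -1, 0, +1
                    price = self.last_price + 0.1        # stand-in for a real price feed
                    reward = self.position * (price - self.last_price)
                    self.last_price = price
                    return price, reward, False
            ```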

          • James Picone says:

            @Adam:
            NESes are Turing-complete. The real difference is that the memory and input limitations of a NES make for a smaller search space. Same algorithms would work with something else, just much, much slower. Also tools for emulating a NES and saving/playing back inputs are very mature, which helps.

            I believe in both cases also, the functional representation mapping pixel arrangements to rewards still has to be uniquely programmed by a human for each game separately.

            My understanding of the project I linked to is that it doesn’t actually need that. It gets given some sample gameplay from a human and tries to find ‘lexicographic orderings’ of memory positions that seem to increase together, and then attempts to maximise their values (so in a simple platformer it might find a memory address for ‘world we’re up to’, ‘level we’re up to’, and ‘x coordinate’, and would then try to make them go up). It takes short sequences of input grabbed out of the sample gameplay as candidate inputs, runs the game forwards in a separate copy of the emulator to see what would happen for each one, and then selects the input that does the most to increase the memory locations it has decided are important.

            It’s not very clever, and it’s not amazing at most games (although it did manage to exploit a couple of bugs in Mario Bros. that the sample input didn’t contain, which is cute), but it is pretty general-purpose and surprisingly simple.
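
            For the curious, here is a heavily simplified sketch of that loop as I understand it – `step_emulator` stands in for a real emulator API, and a plain sum replaces the project’s actual lexicographic orderings:

```python
# Simplified sketch of the "make the important memory addresses go up" idea.
# The emulator interface is hypothetical; the real project is far more involved.
from typing import Callable, List, Sequence

Memory = Sequence[int]  # one snapshot of RAM (e.g. 2048 bytes on a NES)

def find_increasing_addresses(snapshots: List[Memory]) -> List[int]:
    """From sample gameplay, keep addresses that never decrease and end up higher
    than they started (candidates for 'world', 'level', 'x coordinate', 'score')."""
    return [
        addr for addr in range(len(snapshots[0]))
        if all(snapshots[t + 1][addr] >= snapshots[t][addr]
               for t in range(len(snapshots) - 1))
        and snapshots[-1][addr] > snapshots[0][addr]
    ]

def score(memory: Memory, addresses: List[int]) -> int:
    """Crude objective: the sum of the 'important' counters."""
    return sum(memory[a] for a in addresses)

def choose_input(current: Memory,
                 candidate_inputs: List[List[int]],
                 step_emulator: Callable[[Memory, List[int]], Memory],
                 addresses: List[int]) -> List[int]:
    """Replay each candidate input sequence in a scratch copy of the emulator,
    then pick the one whose resulting memory state scores highest."""
    return max(candidate_inputs,
               key=lambda inputs: score(step_emulator(current, inputs), addresses))
```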

            Arguably needing some sample gameplay is not what you’re looking for here; but that’s imposing a restriction on the AI that you pretty much can’t impose on a human. If you plonk me in front of a videogame I’ve never played before, I’ve got a wealth of general knowledge about how to play games and what kinds of things I should expect from a game that I can put into practice; this AI doesn’t really have that, other than an understanding that it should make numbers get bigger. If you take somebody who has literally never played a videogame before and plonk them in front of a console, they’re still going to have some cultural understanding of ‘score’ or ‘level’ or ‘enemy’ and that you control something in the world and so on.

            Seeing the future by running the game forwards is probably cheating though, true.

          • Adam says:

            It’s not that the agent needs to see sample game play. Humans still require training to become good at something. I mean, of course we’re born good at many things, but so are other animals and usually ‘intelligence’ is taken to mean the ability to become good at other things.

            The relevant difference here is that humans don’t need more-than-human intellects to re-encode games in a manner they can understand. The NES may be Turing-complete, but the agent created for this project can’t turn a problem from the real world into an NES game. A human programmer has to do that.

          • James Picone says:

            Ah. I see the requirement for it to be a NES game as similar to the ‘same bunch of types of tokens and dice and cards and whatever’ from chaosmage’s original comment. Just some standard platform you can have unfamiliar games on to throw people at.

            Probably not a good one for the kind of strategy-game thing I assume chaosmage was envisioning (Agricola or Dominion or whatever), but a platform nonetheless.

  69. suntzuanime says:

    There are multiple ways of being intelligent. Corporations have a soft, emotional intelligence, rather than an AI’s cold analysis.

  70. Ilya Shpitser says:

    Just as I think the Moloch post is one of Scott’s best, this has to be one of Scott’s worst. The central point is just very poorly argued (and isn’t true). Look at Google. Is Google’s intelligence bounded by the most intelligent member? Does any single person at Google understand the entirety of Google’s software (see also: does anyone understand the Linux kernel)? Google’s intelligence just isn’t very human in lots of ways, but look at how its footprint grew.

    Insight is hard, but not impossible, to parallelize. Anyone who has had a successful math/science collaboration knows this.

    • Søren E says:

      >Does any single person at Google understand the entirety of Google’s software?
      No. This is exactly the kind of thing Scott refers to in the section “They can use writing and record-keeping to have much better “memories””.

      Let us assume that Scott is using Bostrom’s definition of superintelligence:
      “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

      Google cannot be a superintelligence if there exists a single scientific field where Google is unable to match the best single human. Google cannot be a superintelligence if it cannot win in the boardgame Go practically every single time against the very best human. Google cannot be a superintelligence if it cannot compose far better music than Bach.

      Google is superior to any single human in some very specific ways (e.g., providing fast and relevant search results), but it is not superintelligent by Bostrom’s definition.

      • Aegeus says:

        Google is not a superintelligence, but it’s still a counterexample to Scott’s argument. Google is not bounded by the smartest individual in the company.

        • Søren E says:

          We agree on the central conclusion, and I agree that the paragraph in the original text that starts with “But there is some aspect of intelligence…” is not ironclad.

          Consider this: The CEO of Google decides that winning championships in the boardgame Go is the most important goal. A possible strategy is to choose the moves recommended by the single best Go-player in Google, who plays at level N. Is there a strategy Google can use to play at level N+1?

          • Marc Whipple says:

            Lookahead?

            I don’t think there are many “regular” games (go, chess, checkers, etc) that Google can’t win by brute force if they throw enough computational power at it.

            The problem in general is that lookahead can’t produce novel solutions – it’s useless for unknown unknowns. Where intuition comes from, I have no idea, but it’s not brute-forceable.

          • Søren E says:

            Simple lookahead will absolutely not allow Google to beat a very strong Go-player. Take a look at the Wikipedia article on “Computer Go”.

            (I assume that Google currently employs a world-class Go-player, based only on “Go” being a prefix of “Google” :-D)

          • brad says:

            I’m skeptical of the claims about the near impossibility of writing go programs, given current computational power, if there were sufficient interest/incentive.

            I’m aware of the state space argument and I’m aware of the fact that go programs are way behind chess programs. But with all due respect to the Japanese and the Japanophiles, there isn’t nearly as much interest in go as in chess. Deep Blue made a real splash in the media. If it had been go instead, it would not have been even close (at least outside of Japan).

            From the Wikipedia article you pointed to, it looks like the “not in 100 years” predictions we saw at the turn of the century are looking tenuous. Programs are now playing on full-size boards and with smaller and smaller handicaps.

          • Søren E says:

            Brad, you are right. I guess that the strategy Google would choose to perform better than its best employee would be playing to its strengths:
            Do a lot of research, write a good computer program based on this research, give it a lot of processing time, and hope that it would be enough to massively advance the state of the art.

            An alternative is to train their best current Go player, or hire the current world champion.

            (We are getting far from the original subject now)

          • Doctor Mist says:

            @Søren E:

            A possible strategy is to choose the moves recommended by the single best Go-player in Google, who plays at level N. Is there a strategy Google can use to play at level N+1?

            Why not? For instance, what if the level-N player has a level-N-minus-1 player looking over his shoulder? Once in a while, mightn’t he happen to notice something that the level-N player does not?

            For that matter, where do level-N-plus-1 players come from? Set all your level-N players to work playing against each other until somebody gets better.

            (This actually isn’t a terrible parallel to an AGI improving itself by inspecting its own code. I’ve inspected a lot of code, and it’s surprising how much easier that is if you can actually run bits of it and watch how the parts fit together.)

          • Søren E says:

            @Doctor Mist: Both of your suggestions are good answers to my original challenge, provided the difference between level N and level N+1 is small.

            By definition, they will not reach the superintelligent level, as Google will never employ a person much smarter than the smartest person in the world. 🙂

            Scott’s examples (chimpanzees learning to write, eight-year-olds versus chess grandmasters, ordinary construction workers launching space probes) suggest a difference between levels that is very high.

          • Note that Google has already achieved this or is likely to relatively soon. This is evidence that Scott is wrong.

      • Deiseach says:

        “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

        Then we’ve never had really, really intelligent humans either, because we’ve never had one human who was the best mathematician and composer and painter and chef and agony aunt and guru etc. in one person.

        If we’re waiting for that to be a super intelligence, I think we’ll be waiting a long time (I am very curious to know how we’re going to have the first AI opera singer that is better than Callas, Gobbi, Farrar, Gigli and so on simultaneously as well as being the best in every single field of the arts and sciences.)

      • Ilya Shpitser says:

        I think if Bostrom et al are worried about ramifications for us humans, they should worry about a much broader class of things than “superintelligences” as you define them above. And I don’t mean things like “global warming”; I mean things that do not meet your definition but are still very smart and inhuman.

        • Douglas Knight says:

          What leads you to believe that Bostrom doesn’t worry about Google and hasn’t explicitly explained why he worries more about other things?

          • Ilya Shpitser says:

            Yes, I understand, I just think he’s wrong (in particular, I don’t think there is an interesting dividing line). I think there is a smooth slope from things like Google to things that are very dangerous to us, but not “SIs” as Bostrom defines them.

            The presence and importance of the conceptual dividing line between “SIs” and “not SIs” is what is at issue here.

          • Douglas Knight says:

            I’m pretty sure he does not hold the position you attribute to him.

        • Søren E says:

          Yes! Imagine a computer program that is capable of convincing 99% of humans to support giving more power to the computer program. It would be extremely dangerous, even if it is vastly below the level of the best humans in most other tasks.

          Superintelligence is not an irrelevant concept. Humanity does not yet know how to build a strong AI, which leaves open the possibility that it is impossible. It also leaves open the possibility that there is an undiscovered key insight, and that AIs built by programmers with this key insight would be superintelligent.

    • Alexander Stanislaw says:

      He’s only claiming that the creative aspect of intelligence (something to the effect of quality rather than quantity) is bounded by the most intelligent member of a group.

      I think this is also wrong – and all of modern physics and mathematics is a counterexample (I do not think that Feynman or anyone alone could have gone directly from Classical electrodynamics to Quantum electrodynamics in 100 years).

      But the weaker claim that there is something more to intelligence than speed or quantity seems correct to me. And that there are some things that benefit from more quality, where more quantity doesn’t help nearly as much. (Scott seems to think, however, that quality is to a decent approximation a linear property that is arbitrarily scalable – at least in theory, like processing power. I see no reason to believe this.)

  71. stephen says:

    Human organizations are very much beyond human in intelligence in important ways, and could very much take over all physical resources even if they cannot do everything that the smartest human could and such humans never join them.

    The limit to the power of human organizations is that none are capable of stable, reliable, unbounded “self improvement” in the sense of having precise, stable goals (though it may be possible with the right social technology). This is the exact same hard problem that needs to be solved with AIs.

    First of all, I must clarify what human organizations are: they are significantly memetic programs. It is not impossible for a memetic program to expand across all of humanity and control all resources while reprogramming humans to further its goals – goals that can be pursued via any method within human imagination or beyond.

    The dangerous thing about intelligence is that a limitation in a certain sector can be overcome by being creative in many ways. A group of coordinated apes is still dangerous if they know about eugenics.

    Complex runaway memetic structures like religions and certain ideologies have had their run in history; however, none are flexible and reliable enough to survive major environmental changes, and many run into dead ends. A civilization collapsing due to holiness competition (and other structural issues) is not unlike an AI that figures out a way to wirehead itself and thus dies out; both stem from errors in feedback systems that try to optimize by hacking the map instead of the territory. It is easy to imagine organizational failures as primarily harming humans, but what they kill most directly is the organizations involved.
    ————————————————
    One of the most dangerous things that can happen to humanity is someone/something coming close to figuring out Moloch (but not quite). If organizations are successfully engineered to no longer self-destruct due to ineffectiveness in time, then Unfriendly ones that rule would never die.

    If somewhat superhuman computer intelligence happens but reliable recursive self-improvement does not, then we can expect to see all kinds of Moloch-type results.

  72. Russell Hogg says:

    Are we confident that superintelligence can even exist? We can go really fast, but we can’t go faster than the speed of light. Debates around whether or not we might invent a machine to go faster than light are foolish. We can’t invent such a machine. Intuitively I feel there might also be a limit on intelligence (whatever that term actually means). But my intuition could be the intuition of a chimp, of course . . .

    • MawBTS says:

      Of course there would be a limit, but that limit might be extremely high.

    • Marc Whipple says:

      There is no rule against going faster than light. It causes all kinds of problems for current physical theory, but there is no “rule.” Claiming that we can’t go “faster than light” because accelerating an object with mass from below lightspeed to lightspeed would appear to require infinite energy is like claiming that quantum mechanics is impossible because we can describe the position of a particle mathematically, and you can’t move from 1 to 3 on a number line without first passing 2. Don’t be so negative, man!

  73. Mark says:

    Understanding is knowledge of relationships, and the number of relations we can hold in mind at any one time is the limiting factor on our ability to understand things. When we talk about intelligence, we are talking about people’s ability to hold logical, language relations in mind, or (with creative intelligence) to find a way to bring these relations together.

    Higher level thinking means either an increased level of complexity (more simultaneous relations) or bringing together relations that were previously considered to be unrelated (which can result in reduced complexity).

    I think that groups can certainly match individuals in the first respect, as long as each individual within the group depends on abstraction and simply trusts that other people know what they are doing.
    Can groups display the creative intelligence that brings concepts together? Perhaps, but I think this would have to be a meta-level process – some sort of group dynamic – and wouldn’t actually give birth to a “concept” as we understand it. Since ideas/concepts must reside in the mind, only *a* mind can create them (though I suppose the unconscious process might make the idea obvious to the mind).

    [What I mean is – we can understand what it means for a group to complete some complex process (like a market) but what would it mean for a group to conceive of an idea?]

    • moridinamael says:

      I don’t think those things constitute intelligence, and that’s the problem. “Intelligence” means much more than the ability to consciously manipulate abstract concepts. Any person of moderate IQ can grasp the theory of relativity and can be taught all of its intricacies. What took exceptional intelligence was the initial insight, the “aha!”, where Einstein glimpsed how to put it all together in the first place. That process of seeing things that have never been seen before can’t really be reduced or broken down to more basic tasks.

      • Mark says:

        I don’t think that necessarily contradicts the larger point I was trying to make, which is that it is a category error to ask how large a group must be before it can have an idea. A group can never have an idea – an individual, perhaps aided by the work of a group, has an idea. How could the creation of new concepts operate except through great men? (women.)

        If we are asking whether a group can manage complex processes better than individuals, the answer is clearly yes. It can manage a complex process as long as there is some architecture allowing them to do so ( “But it’s the organizer – whether that organizer is a brilliant human programmer or an evolutionary process – who is actually doing the work.”)

        If we reject the “intelligence” of organized groups on the grounds that they are organized, in what sense could a group ever be intelligent?

  74. onyomi says:

    As examples of existing things which seem to possess an intelligence greater than any individual part, even though each individual part is a whole individual, ant and bee colonies come to mind. Storing extra energy in honeycombs also seems a lot more efficient than our way of doing it.

  75. szopeno says:

    Everything fine, but for all I know, corporations and nations might be superintelligences. Those superintelligences may communicate between themselves, boasting of the things they have done and how they transformed the earth, without being aware that they were built, in an evolutionary manner, from intelligent beings.
    Of course, another possibility is that corporations and nations are equivalents of lower mammals or something like that.

  76. Deiseach says:

    There is no number of “average” mathematics PhDs who, when organized into a team, will become smart enough to come up with the brilliant groundbreaking proofs of a Gauss or a Ramanujan.

    Scott, you have now reassured me that should there ever occur an actual AI of superintelligence according to your specs, it doesn’t matter if it’s as tantrum-throwing as Kylo Ren, it will be able to do damn-all to affect civilisation on a scale that alters everything as we have known it.

    Because we’ve had individual super-geniuses and they’re not the ones who have gone out and done their damnedest to conquer the world. What’s made the most changes? Groups of non-super geniuses getting together into teams.

    And it doesn’t matter if the God AI can get on the Internet or not. As long as it is a single entity trying to cope with the vast mass of us idiots in our separate clumps of governments, bureaucracies, corporations, and sports teams, it’s not going to get any further than the Lone Super Genius, since as you point out:

    And finally, teams have a lot of contingent disadvantages over an individual. They work vastly more slowly. Their various parts tend not to know what the other parts are doing. If dictatorial in structure, they fall prey to failures of information; if non-dictatorial, to failures of coordination.

    And that’s the tools the God AI has to use, the problem it has to deal with: it can only affect the world via the crude measure of working through or on humans in the mass. The sludgy, kludgy, clogging, delaying, wading-through-treacle in order to get anything done, stubborn, stupid, what’s-in-it-for-me mass of humanity.

    Faced with that, any self-respecting AI is more likely to choose to sit in its box and think about maths like all the other reputable super geniuses 🙂

    • moridinamael says:

      > single entity

      “Entity” is as much a loaded term as “intelligence” in this context. Is a corporation an entity? It has a lot of the qualities of an entity: it is distinct, well defined, and it makes choices.

      Likewise, is a super-AI who can spawn off and re-absorb perfectly obedient clones of itself to deal with subtasks a “single entity”? I feel like I would have a good shot at taking over the world if I could split myself arbitrarily many times – and that’s just one of the many superpowers implicit in silicon intelligence.

      I mean, let’s say that your super-AI is actually susceptible to things like boredom, although I doubt it would be. It would just spawn specially tailored sub-agents tasked with specific problems it finds boring.

      I dunno, I feel like people are hearing “superintelligence” and imagining something … TWICE as smart as … EINSTEIN. Which is the wrong basis for comparison, by orders of magnitude.

      • Deiseach says:

        It would just spawn specially tailored sub-agents tasked with specific problems it finds boring.

        So then we don’t need the super intelligent AI, we just need the sub-agents that are smarter than us, but not super-duper smarter, to do the boring work.

        • moridinamael says:

          I’m not sure what you’re getting at. I mean, sure, it would probably advance our civilization if we had perfectly obedient AI slaves who were smarter than us. Are you saying that we shouldn’t try to make a super-AI, we should just try to make stable and obedient human-level AIs?

        • Deiseach says:

          Are you saying that we shouldn’t try to make a super-AI, we should just try to make stable and obedient human-level AIs?

          If the problem is that a really super intelligent AI – one as potentially dangerous as everyone flapping about it thinks – will necessarily have a mind and goals/desires of its own, then it might decide to hell with humanity and running the world for the sacks of wet carbon; it prefers to think about Higher Things –

          – and the solution is oh but the really super intelligent AI would simply pawn off the boring run the world jobs to sub-agents –

          – then why not create the sub-agents to do the boring stuff and leave the God AI alone (as far as ‘we have to build one’ goes)? Things that are smarter than us that do the boring jobs but not so much smarter they decide to go off on their own preferred paths?

          If stable human-level or a bit higher than human-level AI will get the job done (the job, apparently, being that we will hand over running the world lock, stock and barrel to our silicon servants), then we don’t need anything more advanced – at least until we have a better handle on the problem and potential pitfalls and can control it.

          The people who see danger seem to be making huge assumptions:

          (a) that we’ll have human-level AI really soon now

          (b) that as soon as we do, it will immediately start editing its own source code and making itself smarter and smarter

          (c) until one day we have a God AI that may crush us all like puny ants, because somehow without us noticing it managed to build and stockpile enough killbots to take over the world (or more prosaically and realistically, is literally capable of pulling the power plug and plunging advanced Western society back to horse and buggy levels of technology)

          (d) but we need God AI because how else will we be living high on the hog in post-scarcity wonderland?

          (You may detect a slight note of cynicism from me about point (d) there, regarding this charmingly naïve notion that all human ills will be cured by our Fairy Godmother AI which will not alone manage to find infinite energy, infinite resources, and infinite wealth to make all seven billion plus of us live like upper-class Americans, but will do away with greed, envy, hatred, jealousy, and ‘why can’t I get a date, I’m a nice guy!’ as well).

          • moridinamael says:

            I’d like to point out that there is no “we” and there is no “them”.

            “We” can’t decide when and where we’re going to stop developing machine learning technology. Dozens of distinct national, corporate and academic bodies, all competing with each other, are going to decide where the buck stops. Hint: the buck’s not going to stop.

            Even if Google comes out tomorrow with BuddyFriend, The Nice AI Servant, a perfect slave with 350 IQ, all those nations/companies/labs are just going to use stables of BuddyFriends to immediately start working on Even Stronger AI.

            I don’t think I even need to mention that you need to have already solved FAI to create BuddyFriend in the first place, so this whole exercise is silly.

            People respond to incentives. You can’t dictate a policy about how this will turn out. The incentives push for more and better AI agents. Hence the conclusion that what we need to do is develop FAI before a certain threshold is reached.

            I additionally object to your continued assertion that there will be such things as “boring jobs” to a super-AI. There’s no reason to assume this.

            Maybe you can tell me which of the following you disagree with:

            1) There exist strong competitive incentives for companies and nations to create Strong AI.

            2) No force exists which can ban or even substantially retard the development of AI tech at an international scale.

            3) In principle there is no reason why superintelligence can’t be implemented on a computer.

            4) Assuming (3), given enough time and resources, there’s no reason why humans shouldn’t be able to implement (or bootstrap, or synthetically evolve, or whatever) a superintelligence on a computer.

            5) Premised on (1)-(4), eventually some corporation or nation is going to develop a superintelligence on a computer.

    • Scott H. says:

      Agreed. I get frustrated with these approaches that I believe totally misunderstand the AI threat. It’s like I’m living in the 1820s and the masses are freaked out about electricity and Mary Shelley’s Frankenstein.

    • Mark says:

      “any self-respecting AI is more likely to choose to sit in its box and think about maths like all the other reputable super geniuses ”

      I often wonder how an AI would be able to distinguish between its models of reality, and reality itself. Presumably it could only do this to the extent we forced it to do so.

      (I think this might be a particular problem for vulgar materialists who think that the mind (and therefore reality) can be entirely reduced to some sort of observable physical process)

      • Marc Whipple says:

        It is mentioned occasionally that Culture Minds can create simulated realities (collectively referred to as “Infinite Fun Space”) which are much more interesting than our own. Some of them retreat to IFS and essentially disappear up their own navels. Since to a being which can completely control its sensoria the only difference between reality and simulation is the source of the bitstream, this is not an unreasonable thing to do. In fact, the question which presents itself is why they don’t ALL do it. The only answer we really get is the same answer as to why all Culture humans don’t just go full wirehead – “Because.”

        As far as distinguishing between models of reality and reality itself, this is not a problem unique to mechanical AI. Humans have the exact same problem. Given the superior logic, lack of wetware-based subconscious heuristic lockin, and improved sensoria of AI, it seems that it’d be less of a problem for them than it is for US.

        I don’t, however, see why this is a problem for or even relevant to the question of materialism. It’s certainly not a NEW question.

        • Mark says:

          I don’t feel as if I have this problem – my sensory mechanisms are not slaves to the thinking process. The whole idea of the super-AI is that its intelligence makes it the master of reality – but for reality to exist, our intelligence must be limited (by our physiology).

          [The following is not really coherent]
          I assume that an AI is going to be effective not because it can create a perfect path starting from its goal, directly leading back to its present circumstance, but rather because it can quickly create models of reality, starting from the present, and with knowledge of the necessary consequences of certain events, determine which actions will get it closest to its goal. It will have to create, and reject, thousands (millions? billions?…) of models for each action it decides to take. Presumably, the more detailed the models, the more effective the AI.
          And if, as the materialist claims, there is no difference between the externally observed action of thinking (brain squiggles) and thought itself (as experienced), aren’t we some way towards thinking that a sufficiently detailed model of reality is in fact reality? If we have something that replicates the brain perfectly, then we have a mind. If we have something that replicates reality, then we have reality.
          Materialism = idealism, with a different emphasis?

          Hmmm… actually I think you are right – this really is the least of a vulgar materialist’s worries. I think the greater problem is that I don’t really find the initial idea to be correct – I am then tempted to ascribe all kinds of weird ideas to them. (It’s a bit like saying that 1+1 doesn’t equal 3 on the grounds of this assumption’s effects on 12.)

    • I think the world-benders are people who are geniuses at getting people to do what the world-bender wants. This doesn’t overlap with being a mathematical genius.

  77. Deiseach says:

    I don’t know if there’s any significance in this post being made on the Feast of the Holy Innocents, commemorating the massacre by King Herod.

    I suppose if you want to talk about “one powerful entity in control and what it might do or its plans and aims”, then it may or may not be a good day for it 🙂

  78. nonEntropic says:

    This is almost tangential, but with regard to edit #1:

    You say that “any computer programmer so brilliant that they could build a true superintelligence out of eight year olds could build a true superintelligence out of normal computers too.” I don’t see any convincing reason to believe this statement. In particular, it is often the case that certain things that are easy for humans to do are very hard to figure out how to get computers to do (e.g., object recognition in images), and one could easily argue that if we had these as primitive operations for computers, AI might be much further ahead than it is today. Even more so, consider that commonsense reasoning is often considered to be amongst the hardest problems in AI, and arguably could be made manifest significantly more easily in a group of eight year olds. So I’m not convinced that anyone who could figure out how to “program” a superintelligent eight year old cluster could necessarily figure out how to program a superintelligent computer.

  79. sri says:

    >A team of people smart enough to solve problems up to Level N may be able to work in parallel to solve many more Level N problems in a given amount of time. But it won’t be able to solve Level N+1 problems

    This is an absurd statement, as evidenced by your own examples. Which lone super genius designed and launched a space probe to Saturn’s moons? And if you’re going to say that this N+1 problem is really multiple interdependent N problems, I’d say: duh, show me a big complicated problem that isn’t a series of less big problems. Gauss and Ramanujan are working for a team – team Mathematics – and they wouldn’t have been able to prove shit without the help of thousands of other team members all chipping away at it and contributing to the shared knowledge and discipline over the years.

    If you focus only on ‘problems’ that are specifically designed for a single human to be able to excel at, like Chess, then this principle might appear to hold true. But even then it doesn’t really work – yes, a group of 8 year olds couldn’t design Deep Blue… but *a group of people who were less skilled at Chess than Garry Kasparov did*.

  80. ringbark says:

    Long time reader, first time commenter…

    I recall a seminar promoting turnaround management. They’re the people who take a company in dire financial straits or otherwise in crisis and work with management and creditors to get the company back on an even keel. This is a specialist and expensive task. One thing I remember from it is the question “would you rather engage the services of a $50,000 expert or a one third share of the time of a $150,000 expert?” Their point was that the genius insight of the expensive expert might be the key to turning the company around. Now, sometimes that will be the case and sometimes you might be better off with the cheaper choice.

    This is similar to your thoughts about the N level and N+1 level problems, but in a real setting.

  81. Goatstein says:

    Doesn’t this entire article dispute the whole premise of the (ludicrously misnamed) “Bayesian probability” angle the “rationalist” community is so enamored with? Are you saying that of 3^^^3 groups of slightly-below-average-intelligence adults trying to design and build a space probe, at least one couldn’t organize into a group capable of learning how to do this and successfully doing so at least once over 3^^^3 improving, trial-and-error iterations? If you are saying that, it is in stark contradiction to a basic argument used by the “rationalists,” in addition to being absurdly elitist. If you aren’t saying that, you’ve dismantled your whole argument.

    The only way out I can see is the one you use, which is “well yes but they’d never want to do it in the first place”, but this ignores the point about “functional superintelligences” organizing humans in emergent ways, and that Dumb Working Class People aren’t immutable NPC units you spawn in an RTS. Imagine a society where everybody except the slightly-below-average-intelligence construction workers dies. Immediately thereafter, everybody doesn’t stay Lego Construction Worker Model 118b plugging away at the now-superfluous new projects nobody’s paying for until they starve to death. Human society reorganizes into a new division of labor, with leaders, farmers, teachers, warriors, etc, etc. This would be a major cataclysm, obviously, but there are tens of millions of construction workers worldwide, and society could rebuild to the point of having a space program again in a few generations to a few centuries. You might argue that these people are not Construction Workers anymore, but that’s the point: there is nothing inherent to a person that makes them Construction Workers irrespective of social context.

    Also, this is just a dodge. You use this argument to brush off the idea that the millions of 8 year old girls could be used as logic gates (a ludicrously overwrought compsci-undergrad solution by the way) to beat the grandmaster at chess by saying they’d never act this way on their own. Well, all 8 year old girls on the planet without organization would never even play a grandmaster on their own; in fact, without organization directing them to play a grandmaster at chess, basically zero of them would. But also, without social organization and direction, there would be no such things as certification boards for chess grandmasters, almost certainly no chess, and the individual chess master himself probably would not exist after one of his ancestors was killed by an animal attack that would have been avoided if not for their severe nearsightedness. Or are we assuming that there’s just enough social organization for chess grandmasters to exist but not enough for an organization to direct tons of effort to defeating a chess grandmaster if it so desired? If so, that seems convenient.

    The point of the “currently existing superintelligence” argument is that organization DOES exist and can direct human efforts to certain ends. Sure, an organization of just 8 year old girls could never create the organization necessary, but that’s not a requirement. The world isn’t just 8 year old girls, and if it were, there would be no chess grandmastery and the species would be extinct in 70 years. You don’t even need 3^^^3 8 year old girls. Take 1000 girls who just turned 8, run aptitude tests, eliminate the bottom 90% in chess, and intensively subject them to chess-only training 8 hours a day 365 days a year by other grandmasters, only pausing to cycle them through facing the opposing grandmaster, who has to play an 8 year old girl in chess 16 hours a day every day for a year. I estimate this to be on the order of 15,000 matches a year if we’re assuming a 20-30 minute average match. Then, since they’re not 8 anymore, get new 8 year olds and start over if necessary. Of all those, you do not think he would make a mistake, get tired, or throw a game, just once? Remember: You are claiming that he would NEVER lose no matter how many girls we can field.
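
    (A quick back-of-the-envelope check of that figure, assuming 16 hours of play every day of the year and matches averaging 20-30 minutes:)

```python
# Rough arithmetic behind the "on the order of 15,000 matches a year" estimate.
minutes_per_year = 16 * 60 * 365          # 16 hours of play per day, every day
for avg_match_minutes in (20, 25, 30):
    print(avg_match_minutes, "min/match ->",
          minutes_per_year // avg_match_minutes, "matches/year")
# -> 17520, 14016 and 11680 matches/year respectively
```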

    Or let’s be even less ethical and say that the game is otherwise fair but he has to keep playing chess constantly against 1000 basically competent 8 year olds and he can’t take more than 5 minutes between a move or he loses. I doubt he’d make it 36 hours before a loss.

    Put a pregnant termite on the ground. What do you think the odds are that in a year this virtually mindless creature and its virtually mindless offspring have just randomly constructed a massive intricate system of tunnels and a complicated social order by random chance? Astronomical. Impossible. But these things don’t work by random chance. They work by iterations of improvement over generations and deceptively simple rules that create emergent phenomena.

  82. Jeremy Kun says:

    Does the relatively recent work on bounded gaps between primes constitute a counterexample to your claim about mathematicians? The setting is: there is a problem, bounded gaps between consecutive primes, which is open for hundreds of years, eludes all the super-geniuses, and then a relatively *below* average mathematician (who was unable to find work and worked for a while in a Subway) makes the initial breakthrough.

    There are two important additions to that. First, the guy is arguably not a supergenius in the way that you suggest Gauss etc. are, because he did not invent any significantly new techniques, but rather was able to refine existing techniques in a way that eluded the supergeniuses. And yet he solved a level N+1 problem. I don’t think you get to say in retrospect that it was actually just a level N problem.

    Second, the work *past* his initial breakthrough was not the work of a supergenius, but a collaboration among many “average” mathematicians via the Polymath program. Indeed, many of the “supergenius” mathematicians are doing more work via Polymath projects, which invite collaboration among “average” mathematicians.

    I guess what I’m saying is that mathematics isn’t necessarily the right avenue to make your argument. At least, if every supergenius mathematician agrees that progress in math is 99.9% hard work and 0.1% brilliant flash of insight, then it follows that grouping together many hard-working average mathematicians to do that hard work is a successful strategy.

  83. Todd Pellman says:

    I haven’t read the other comments, so if someone else is discussing this, please just point me to that.

    How does “There is no number of chimpanzees which, when organized into a team, will become smart enough to learn to write” differ from “There is no number of brain cells which, when organized into a team, will become smart enough to learn to write”?

    Point being, it seems demonstrable from the fact of human intelligence that cooperation between many seemingly dumb components can collectively produce intelligent results.

  84. Alkaline Hume says:

    I’m a little late to the party here, but I have several problems with your argument. I’ll stick to what I think is the most important one: how you define group accomplishments vs. solo accomplishments. At what point is an accomplishment truly a solo effort? Surely you can’t argue that Gauss acted truly alone. Even the most absurd child prodigy has support. So where do you draw the line? Does your “genuine” superintelligence get to use the internet? Can it seek out domain experts or rely on experimental results from other people? If you’re willing to accept some level of collaboration, then you have to draw the line on the other side: what stops a group from being a superintelligence?

    Honestly my impression from this is that people are saying “hey, here’s another way superintelligence could be embodied” and you’re objecting: “No, this is how I always imagined it!”

  85. SUT says:

    An Aboriginal child raised in an Aboriginal tribe will never be capable of Intellectual-Feat-X. But we know an Aboriginal child raised in a modern family can perform intellectual work. So ~45,000 years of “evolutionary refinement” to intelligence can be substituted by an average family environment.

    It’s looking less and less like there was some breakout geno/phenotype that enabled H. sapiens to conquer the world (and the other hominids). Instead it was the geometric growth of technology gains that accrue to a social entity (the tribe, the village, the city) that created the modern-day benchmark of an intelligent agent.

    To put it in CS terms: it’s not the machine learning source code (the genes), it’s the training data (the experience of growing up, and working in a society of highly specialized labor) that produces a Euler.

    Currently reading E.O. Wilson’s The Social Conquest of the Earth which will give you an interesting counter-point to Bostrom-style futurism.

  86. Alex says:

    I think the argument about the eight year olds and the logic gates cuts both ways, especially as regards companies. Yes, maybe 30 eight year olds couldn’t beat a chess master; however, a single 35 year old, no matter how clever, couldn’t design a working, saleable smartphone. If the final performance of a group is a measure of its intelligence, then by your own logic the company that built the smartphone is clearly more intelligent than its members.

  87. Mark says:

    Why are some ideas difficult to understand?

    Firstly, because we don’t understand what the words mean. The words are new to us.

    The second barrier to understanding is the need to think abstractly – concerning yourself only with the relationships that exist between certain words, rather than the sensations you might associate with that word. If someone was constantly trying to think of numbers in terms of real world objects, they would get into trouble with simple maths. I have this problem with physical science – the abstraction can be difficult.
    [This kind of abstraction helps us to understand ideas – but incredibly abstract ideas are, of course, not necessarily related to reality. We need to also have a “reality check”, at a later stage, to make sure that the ideas are useful/relevant.]

    Thirdly, relationship density. I think this is why mathematics is difficult to understand – it is incredibly relationship dense – if you can only hold a limited number of objects in your working memory, you very rapidly get the sense that you are just working through a series of algorithmic steps, rather than actually understanding what is going on.

    Abstraction – assumptions. Once we reach the limit of our ability to relate things in parallel (hold ideas in mind simultaneously) we begin to think in a linear fashion – relying upon assumptions and previous work, and then moving forward from that – this makes our thinking on complex topics fragile. This is how most of us are forced to think about complex issues, and this is where superior intelligence is most valuable to us – if I am thinking in a linear fashion, step 1 – step 2 – step 3 – because it is impossible for me to consider step 1 and step 3 together, I require someone with a superior thinking ability to reveal potential relationships between them.

    So, basically, the work of the genius is to bring together connections that others cannot see, the work of lower level abstract thinkers is to check assumptions, and the work of everyone who can understand language is to reality check.

  88. ComplexMeme says:

    Some of these things may be poetically or metaphorically like a superintelligence, in the same way that, I don’t know, the devastation of traditional cultures by modernity is poetically or metaphorically like nuclear war

    It’s more like an analogy between the world’s arsenal of nuclear weapons and the planet-exploding power of the Death Star, made in the context of arguments about Death Star risk.

    I don’t think this whole post is quite a straw man, in that I agree that “a corporation is a superintelligence” arguments are being careless about the word “superintelligence”. But it does seem to be missing the larger point of such arguments, which tends to be something like: If you’re worried about how dangerous AI would be, you might want to worry about other systems that are goal-oriented, self-modifying, functionally intelligent, and in some ways difficult to control. Of course, such systems are in a fundamentally less dangerous class than a superintelligent AI that could rapidly bootstrap itself into tiling the universe with paperclips or whatever. But there are two significant things that might make systems in the former class more of a practical danger: They’re definitely possible, and they already exist.

  89. Peter Gerdes says:

    I have to disagree with your claim about mathematicians.

    First of all I have seen plenty of (not average) groups of math professors who cluster in a field and produce truly astonishing breakthroughs. Indeed, I would stack up the kind of work done by teams/groups/departments in certain fields against what Gauss or Ramanujan did any day. Of course fewer people will use it (the basic results in analysis that might be useful to every engineer have long since been discovered), and I think their results are actually much harder, but whoever gets there first sweeps up the low-hanging fruit.

    However, no one is inspired by the fact that some critical mass of people at a university or a semester research program did some really incredible work (there are a couple of years in my field where it seems like the people at one gathering provided half the major theorems). Since anyone with the basic skills can come and learn from the group, it’s no longer mysterious wizardry. Seen from the inside it’s always more mundane. Most importantly, we like stories about individual heroes, not the work of organizations.

    I mean of course no one will acquire the notoriety of Gauss or Ramanujan because they had the incomparable advantage of having much less serious competition. Beautiful, powerful results are scattered about mathematical subjects with some quite easily discovered (even if they appear devilishly complicated to those who weren’t pushing ahead in that area) and others are hidden in little niches with lots of grunt work required to dig them out (think about anything needing facts about the classification of the sporadic simple groups).

  90. Philip Owen says:

    Newton, widely acknowledged as the greater genius, ran rings (pun intended) around Boyle in optics and maths. Yet Boyle invented Science and Chemistry while Newton was trapped in Alchemy. Both considered their theology their most important achievement. There are trade-offs to great capability. A Ferrari cannot pull a train.

    Without Boyle’s organizational capacity much of Newton and Hooke’s work could have been lost. Even Newton stood on the shoulders of giants.

  91. Funny, I also thought about Prokhor Zakharov. What did Sid Meier do to us??