[Not the most interesting topic in the world, but I’m posting it so I have something to link to next time I see this argument]
I talk about superintelligence a lot, usually in the context of AI or genetically engineered humans. And lately I have run into people who say: “But superintelligence already exists! It’s corporations / bureaucracies / teams / civilizations / mind-mapping software” (examples: 1, 2, 3, 4). Sometimes these people go so far as to say these things are in fact superintelligent AIs, since they are technically “artificial”.
Some of these things may be poetically or metaphorically like a superintelligence, in the same way that, I don’t know, the devastation of traditional cultures by modernity is poetically or metaphorically like nuclear war, or whatever. But if every time somebody is trying to talk about nuclear disarmament, other people interject with “But we’ve already had a nuclear war – it’s the nuclear war in the heart of all mankind“, this doesn’t really add to the conversation. In the same way, talking about these metaphorical superintelligences is not a helpful contribution to discussion of literal superintelligences.
Why do I think that there is an important distinction between these kinds of collective intelligences and genuine superintelligence?
There is no number of chimpanzees which, when organized into a team, will become smart enough to learn to write.
There is no number of ordinary eight-year-olds who, when organized into a team, will become smart enough to beat a grandmaster in chess.
There is no number of ordinary IQ 90 construction workers who, when organized into a team, will become smart enough to design and launch a space probe to one of the moons of Jupiter.
There is no number of “average” mathematics PhDs who, when organized into a team, will become smart enough to come up with the brilliant groundbreaking proofs of a Gauss or a Ramanujan.
Teams / corporations / cultures have a lot of advantages over individuals. They can use writing and record-keeping to have much better “memories”. They can use computers to be able to calculate and retrieve information more quickly. They can pool their advantages, so that if one person is good at writing and another person good at illustration, they can produce a well-written and beautifully-illustrated book. They can formalize their decision-making processes to route around various biases and react consistently to predictable situations. These are all really good things to be able to do, and it’s why in fact groups of people have outperformed individuals in fields as diverse as “making nuclear bombs” and “coordinating air traffic”.
But there is some aspect of intelligence that they can’t simulate, in which they are forever doomed to be only as smart as their smartest member (if that!). It’s hard to put my finger on exactly what it is, but it seems to have something to do with creative problem-solving ability. A team of people smart enough to solve problems up to Level N may be able to work in parallel to solve many more Level N problems in a given amount of time. But it won’t be able to solve Level N+1 problems. This is why it’s still occasionally useful to have mathematical geniuses around, instead of taking ten average mathematicians and telling them to work together. And unfortunately, this aspect of intelligence is the bottleneck for lots of interesting things like new inventions, proofs, and discoveries.
Further, teams themselves need intelligent people to run in an intelligent way. Steve Jobs led Apple to success by being really really good at marketing. Apple couldn’t have gotten the same results by firing him and replacing him with a marketing department of a hundred low-level employees who had graduated from second-tier marketing programs. This is true not only at the Steve Jobs level but at every level – at some point a Sales Department needs to have good salespeople, not just many well-organized mediocre salespeople. I’m not denying that many well-organized mediocre salespeople can do way better than a few poorly-organized mediocre salespeople, just that you can’t fully route around the need for actually intelligent people.
And finally, teams have a lot of contingent disadvantages over an individual. They work vastly more slowly. Their various parts tend not to know what the other parts are doing. If dictatorial in structure, they fall prey to failures of information; if non-dictatorial, to failures of coordination. Imagine an individual human whose inner soul had Democratic and Republican parties that were constantly trying to sabotage each other, so that if the Democratic part of her got a job interview, the Republican part would immediately try to sabotage the job interview to prevent the Democrats from looking good. Such a person would either be insane or at the very least not get too many jobs.
While it’s possible for improvements in organizational technology to ameliorate some of these contingent problems, so far they generally haven’t: the US government is as dysfunctional as ever, and a lot of corporations are little better. And even if all of the contingent problems were magically solved, that still leaves the fundamental problem where no organization of chimpanzees will ever write a novel.
If we were to actually get superintelligence, that would be a completely different class of entity than another government or corporation. It would also have all of the advantages of these things – arbitrarily much parallel processing ability, arbitrarily much memory, arbitrarily many computational resources – without the disadvantages, and with higher genuine “intelligence” as in problem-solving ability.
I think it’s useful to have a word for this completely different class of things, and that word is “superintelligence”. Teams, corporations, and cultures can use words we already have, like “groups”.
[EDIT: I keep getting the same objection in the comments: if we made a bunch of ordinary eight-year-olds follow a simple set of operations that corresponded to a logic gate, and arranged them so that they simulated the structure of Deep Blue, then they could win high-level chess games. This is true. But eight-year-olds could not come up with and implement this idea. A brilliant computer programmer might be able to, but once you’re a brilliant computer programmer, you might as well just build the darned computer instead of implementing it on eight-year-olds. And any computer programmer so brilliant that they could build a true superintelligence out of eight-year-olds could build a true superintelligence out of normal computers too. And the same is true of the objection “doesn’t this mean that no amount of stupid neurons could combine into a smart human brain?” Yes, evolution can play the role of the brilliant computer programmer and turn neurons into a working brain. But it’s the organizer – whether that organizer is a brilliant human programmer or an evolutionary process – who is actually doing the work. That “neurons can combine to form a brain” is no more profound than that “transistors can combine to form an AI” – in both cases, it’s the outside organizer doing all the meaningful work. For a really interesting science-fiction treatment of what it would actually mean to implement a superintelligence in human social interaction, read Karl Schroeder’s Lady of Mazes.]
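[The logic-gate point can be made concrete with a toy sketch (the construction and names here are my own illustration, not anything from the objectors). Each “gate” below is a rule so simple that any eight-year-old could follow it; all of the actual adding lives in the wiring diagram, which has to come from an outside organizer:]

```python
def nand(a, b):
    # The only rule each child-as-gate needs: answer 0 only when both inputs are 1.
    return 0 if (a and b) else 1

# Everything past this point is pure organization. NAND is functionally
# complete, so an organizer can wire these trivial rule-followers into
# any computation at all -- here, a one-bit adder.

def xor(a, b):
    # Standard four-NAND construction of exclusive-or.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a, b):
    # Adds two one-bit numbers, returning (sum bit, carry bit).
    # Carry is AND(a, b), built as NAND followed by NAND-as-NOT.
    return xor(a, b), nand(nand(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        # The gates never "know" they are adding; the wiring does the work.
        assert s + 2 * c == a + b
```

[None of the individual gates is doing anything an eight-year-old couldn’t; the arithmetic is entirely a property of how the organizer connected them.]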
[EDIT 3: Several people point out in the comments that chess champion Garry Kasparov once played a game of chess against “the world”. Kasparov moved the white pieces, and the black moves were decided by popular vote on a website where various other grandmasters and chess buffs worked together to devise the best strategies. The game was very closely fought and suffered from several irregularities, but Kasparov ended up winning.]