Financial Times: What We Get Wrong About Technology. It cites unglamorous advances like barbed wire and shipping containers to argue that some of the most transformative inventions are not products of complicated high technology but simple, clever hacks that end up revolutionizing everyday life. Throughout, it uses AI as a foil, starting with Rachael the android from Blade Runner and going on to people concerned about superintelligent AI:
Economists Erik Brynjolfsson and Andrew McAfee write of “the second machine age”, while the World Economic Forum’s Klaus Schwab favours the term “fourth industrial revolution”, following the upheavals of steam, electricity and computers. This coming revolution will be built on advances in artificial intelligence, robotics, virtual reality, nanotech, biotech, neurotech and a variety of other fields currently exciting venture capitalists.
Forecasting the future of technology has always been an entertaining but fruitless game. Nothing looks more dated than yesterday’s edition of Tomorrow’s World. But history can teach us something useful: not to fixate on the idea of the next big thing, the isolated technological miracle that utterly transforms some part of economic life with barely a ripple elsewhere.
If the fourth industrial revolution delivers on its promise, what lies ahead? Super-intelligent AI, perhaps? Killer robots? Telepathy: Elon Musk’s company, Neuralink, is on the case. Nanobots that live in our blood, zapping tumours? Perhaps, finally, Rachael?
The toilet-paper principle suggests that we should be paying as much attention to the cheapest technologies as to the most sophisticated. One candidate: cheap sensors and cheap internet connections. There are multiple sensors in every smartphone, but increasingly they’re everywhere, from jet engines to the soil of Californian almond farms — spotting patterns, fixing problems and eking out efficiency gains. They are also a potential privacy and security nightmare, as we’re dimly starting to realise.
Like paper, [mildly interesting warehouse management program] Jennifer is inexpensive and easy to overlook. And like the electric dynamo, the technologies in Jennifer are having an impact because they enable managers to reshape the workplace. Science fiction has taught us to fear superhuman robots such as Rachael; perhaps we should be more afraid of Jennifer.
I agree with the gist of this article. It’s correct to say that we often overlook less glorious technologies, and it’s entirely right to point to things like barbed wire as good examples.
Also, it was written on a digital brain made of rare-earth metals consisting of billions of tiny circuits crammed into a couple of cubic inches, connected to millions of other such brains by underwater fiber optic cables that connect entire continents with one another at an appreciable fraction of the speed of light.
What I’m saying is, sometimes the exciting cool technologies are pretty great too.
I realize this isn’t a brilliant or controversial insight. Exciting-looking technologies that everybody agrees will be exciting turn out to be exciting, breaking news, more at eleven.
But then what am I to make of the original article? It points out some cases where simple boring technologies proved to be pretty important. In one or two cases, it describes a field where a simple boring technology proved to be more important than a flashier and superficially-much-more-promising technology. Then it concludes that “perhaps” we should be more afraid of simple voice recognition programs than of superintelligent AI.
I can come up with equally compelling anecdotes proving the opposite. For example, the humble stirrup was one of the most disruptive and important innovations in world history – read about the Great Stirrup Controversy sometime. Imagine a society of horses in 1890, where some especially wise horse relates the story, and concludes with “So perhaps we should be more concerned about simple innovations like new stirrups and more efficient reins than about the motorcar.” Nice try, A+ for effort, you’re still going to end up as glue.
I don’t want to claim that flashy paradigm-shifting technologies are always more disruptive than simple boring technologies, or that technologies always deploy quickly. I do want to claim that the article hasn’t even tried to prove the opposite. So when it says “perhaps we should be more worried about warehouse management programs than superintelligent AIs”, it means “perhaps” in the weaselly sense, like “perhaps we should be more worried about a massive worldwide snake infestation than global warming. I have no evidence for this, but perhaps it is true.”
Part of me wants to let this pass. It’s obviously a throwaway line, not really meant to be a strong argument. But another part of me thinks that’s exactly the problem. There are so many good throwaway lines you could use to end a piece. If you have to halfheartedly make a not-strong argument for something, why would you choose the one where you randomly dismiss an impending threat that already has way too few people willing to pay any attention to it?
I worry there’s a general undersupply of meta-contrarianism. You have an obvious point (exciting technologies are exciting). You have a counternarrative that offers a subtle but useful correction (there are also some occasional exceptions where the supposedly-unexciting technologies can be more exciting than the supposedly-exciting ones). Sophisticated people jump onto the counternarrative to show their sophistication and prove that they understand the subtle points it makes. Then everyone gets so obsessed with the counternarrative that anyone who makes the obvious point gets shouted down (“What? Exciting technologies are exciting? Do you even read Financial Times? It’s the unexciting technologies that are truly exciting!”). And only rarely does anyone take a step back and remind everyone that the obviously-true thing is still true and the exceptions are still just exceptions.
And for some reason, any discussion of AI risk dials this up to eleven. It seems pretty obvious that smarter-than-human AI could be dangerous for humans. For a hundred years, every scientist and science fiction writer who’s considered the problem has concluded that smarter-than-human AI could be dangerous for humans. And so we get these constant hot takes, “Oh, you’re afraid of superintelligent AI? What if the real superintelligent AI was capitalism?” Or “What if the real superintelligent AI was the superintelligent AI in the heart of all humanity?” Or just “What if superintelligent AI turns out to be less important than a bunch of small humble technologies that don’t look like anything much?” And so I feel like I have to do the boring work of saying “hey, by the way, 10-20% of AI researchers believe their field will end in an ‘existential catastrophe’ for the human race, and this number is growing every year, Stephen Hawking is a pretty smart guy and he says we could all die, and Nick Bostrom is an Oxford professor and he says we could all die, and Elon Musk is Elon Musk and he says we could all die, and this isn’t actually a metaphor for anything, we are actually seriously worried that we could all die here”.
But I worry even more that this isn’t an attempt to sound sophisticated. I worry that it’s trying to sound cautious. Like, “ah, yes, some firebrands and agitators say that we could all die here, but I think more sober souls can get together and say that probably things will continue much as they always have, or else be different in unpredictable ways because history is always inherently unpredictable”, or something like that.
I worry that people don’t adequately separate two kinds of caution. Call them local caution and global caution. Suppose some new spacecraft is about to be launched. A hundred experts have evaluated it and determined that it’s safe. But some low-ranking engineer at NASA who happens to have some personal familiarity with the components involved looks at the schematics and just has a really bad feeling. It’s not that there’s any specific glaring flaw. It’s not any of the known problems that have ever led to spacecraft failure before. Just that a lot of the parts weren’t quite designed to go together in exactly that way, and that without being entirely able to explain his reasoning, he would not be the least bit surprised if that spacecraft exploded.
What is the cautious thing to do? The locally cautious response is for the engineer to accept that a hundred experts probably know better than he does. To cautiously remind himself that it’s unlikely he would discover a new spacecraft failure mode unlike any before. To cautiously admit that grounding a spacecraft on an intuition would be crazy. But the globally cautious response is to run screaming into the NASA director’s office, demanding that he stop the launch immediately until there can be a full review of everything. There’s a sense in which this is rash and ignores all sorts of generally wise and time-tested heuristics like the ones above. But if by “caution” you mean you want as few astronauts as possible to end up as smithereens, it’s the way to go.
And part of me gets really happy when people say that we should avoid jumping to conclusions about AI being dangerous, because the future often confounds our expectations, and shocking discontinuous changes are less likely than gradual changes based on a bunch of little things, or any of a dozen other wise and entirely correct maxims. These are the principles of rationality that people should consider when making predictions, the epistemic caution that forms a rare and valuable virtue.
But this is the wrong kind of caution for this situation. It’s assuming that there’s some sort of mad rush to worrying about AI, and people need to remember that it might not be so bad. That’s the opposite of reality. As a society, we spend about $9 million yearly looking into AI safety, including the blue-sky and strategy research intended to figure out whether there’s other research we should be doing. This is good, but it’s about one percent of the amount that we spend on simulated online farming games. This isn’t epistemic caution. It’s insanity. It’s like a general who refuses to post sentries, because we can’t be certain of anything in this world, so therefore we can’t be certain the enemy will launch an attack tonight. The general isn’t being skeptical and hard-headed. He’s just being insane.
And I worry this is the kind of mindset that leads to throwaway phrases like “perhaps we should be more worried about this new warehouse management program than about superintelligent AI”. Sure, perhaps this is true. But perhaps it isn’t. “Perhaps” is a commutative term. So: perhaps we should be more worried about superintelligent AI than about a new warehouse management program. And the warehouse management company earns more each year than the entire annual budget of the AI safety field.
Perhaps we should spend more time worrying about this, and less time thinking of clever reasons why our inaction might turn out to be okay after all.