Two Kinds Of Caution

From the Financial Times: What We Get Wrong About Technology. It cites boring advances like barbed wire and shipping containers to argue that some of the most transformative inventions are not the product of complicated high technology, but of clever hacks that manage to revolutionize everyday living. Throughout, it uses AI as a foil, starting with Rachael the android from Blade Runner and going on to people concerned about superintelligent AI:

Economists Erik Brynjolfsson and Andrew McAfee write of “the second machine age”, while the World Economic Forum’s Klaus Schwab favours the term “fourth industrial revolution”, following the upheavals of steam, electricity and computers. This coming revolution will be built on advances in artificial intelligence, robotics, virtual reality, nanotech, biotech, neurotech and a variety of other fields currently exciting venture capitalists.

Forecasting the future of technology has always been an entertaining but fruitless game. Nothing looks more dated than yesterday’s edition of Tomorrow’s World. But history can teach us something useful: not to fixate on the idea of the next big thing, the isolated technological miracle that utterly transforms some part of economic life with barely a ripple elsewhere.

And:

If the fourth industrial revolution delivers on its promise, what lies ahead? Super-intelligent AI, perhaps? Killer robots? Telepathy: Elon Musk’s company, Neuralink, is on the case. Nanobots that live in our blood, zapping tumours? Perhaps, finally, Rachael?

The toilet-paper principle suggests that we should be paying as much attention to the cheapest technologies as to the most sophisticated. One candidate: cheap sensors and cheap internet connections. There are multiple sensors in every smartphone, but increasingly they’re everywhere, from jet engines to the soil of Californian almond farms — spotting patterns, fixing problems and eking out efficiency gains. They are also a potential privacy and security nightmare, as we’re dimly starting to realise.

And:

Like paper, [mildly interesting warehouse management program] Jennifer is inexpensive and easy to overlook. And like the electric dynamo, the technologies in Jennifer are having an impact because they enable managers to reshape the workplace. Science fiction has taught us to fear superhuman robots such as Rachael; perhaps we should be more afraid of Jennifer.

I agree with the gist of this article. It’s correct to say that we often overlook less glorious technologies, and it’s entirely right to point to things like barbed wire as good examples.

Also, it was written on a digital brain made of rare-earth metals consisting of billions of tiny circuits crammed into a couple of cubic inches, connected to millions of other such brains by underwater fiber optic cables that connect entire continents with one another at an appreciable fraction of the speed of light.

What I’m saying is, sometimes the exciting cool technologies are pretty great too.

I realize this isn’t a brilliant or controversial insight. Exciting-looking technologies that everybody agrees will be exciting turn out to be exciting, breaking news, more at eleven.

But then what am I to make of the original article? It points out some cases where simple boring technologies proved to be pretty important. In one or two cases, it describes a field where a simple boring technology proved to be more important than a flashier and superficially-much-more-promising technology. Then it concludes that “perhaps” we should be more afraid of simple voice recognition programs than of superintelligent AI.

I can come up with equally compelling anecdotes proving the opposite. For example, the humble stirrup was one of the most disruptive and important innovations in world history – read about the Great Stirrup Controversy sometime. Imagine a society of horses in 1890, where some especially wise horse relates the story and concludes, “So perhaps we should be more concerned about simple innovations like new stirrups and more efficient reins than about the motorcar.” Nice try, A+ for effort, you’re still going to end up as glue.

I don’t want to claim that flashy paradigm-shifting technologies are always more disruptive than simple boring technologies, or that technologies always deploy quickly. I do want to claim that the article hasn’t even tried to prove the opposite. So when it says “perhaps we should be more worried about warehouse management programs than superintelligent AIs”, it means “perhaps” in the weaselly sense, like “perhaps we should be more worried about a massive worldwide snake infestation than global warming. I have no evidence for this, but perhaps it is true.”

Part of me wants to let this pass. It’s obviously a throwaway line, not really meant to be a strong argument. But another part of me thinks that’s exactly the problem. There are so many good throwaway lines you could use to end a piece. If you have to halfheartedly make a not-strong argument for something, why would you choose the one where you randomly dismiss an impending threat that already has way too few people willing to pay any attention to it?

I worry there’s a general undersupply of meta-contrarianism. You have an obvious point (exciting technologies are exciting). You have a counternarrative that offers a subtle but useful correction (there are also some occasional exceptions where the supposedly-unexciting technologies can be more exciting than the supposedly-exciting ones). Sophisticated people jump onto the counternarrative to show their sophistication and prove that they understand the subtle points it makes. Then everyone gets so obsessed with the counternarrative that anyone who makes the obvious point gets shouted down (“What? Exciting technologies are exciting? Do you even read Financial Times? It’s the unexciting technologies that are truly exciting!”). And only rarely does anyone take a step back and remind everyone that the obviously-true thing is still true and the exceptions are still just exceptions.

And for some reason, any discussion of AI risk dials this up to eleven. It seems pretty obvious that smarter-than-human AI could be dangerous for humans. For a hundred years, every scientist and science fiction writer who’s considered the problem has concluded that smarter-than-human AI could be dangerous for humans. And so we get these constant hot takes, “Oh, you’re afraid of superintelligent AI? What if the real superintelligent AI was capitalism?” Or “What if the real superintelligent AI was the superintelligent AI in the heart of all humanity?” Or just “What if superintelligent AI turns out to be less important than a bunch of small humble technologies that don’t look like anything much?” And so I feel like I have to do the boring work of saying “hey, by the way, 10-20% of AI researchers believe their field will end in an ‘existential catastrophe’ for the human race, and this number is growing every year, Stephen Hawking is a pretty smart guy and he says we could all die, and Nick Bostrom is an Oxford professor and he says we could all die, and Elon Musk is Elon Musk and he says we could all die, and this isn’t actually a metaphor for anything, we are actually seriously worried that we could all die here”.

But I worry even more that this isn’t an attempt to sound sophisticated. I worry that it’s trying to sound cautious. Like, “ah, yes, some firebrands and agitators say that we could all die here, but I think more sober souls can get together and say that probably things will continue much as they always have, or else be different in unpredictable ways because history is always inherently unpredictable”, or something like that.

I worry that people don’t adequately separate two kinds of caution. Call them local caution and global caution. Suppose some new spacecraft is about to be launched. A hundred experts have evaluated it and determined that it’s safe. But some low-ranking engineer at NASA who happens to have some personal familiarity with the components involved looks at the schematics and just has a really bad feeling. It’s not that there’s any specific glaring flaw. It’s not any of the known problems that have ever led to spacecraft failure before. Just that a lot of the parts weren’t quite designed to go together in exactly that way, and that without being entirely able to explain his reasoning, he would not be the least bit surprised if that spacecraft exploded.

What is the cautious thing to do? The locally cautious response is for the engineer to accept that a hundred experts probably know better than he does. To cautiously remind himself that it’s unlikely he would discover a new spacecraft failure mode unlike any before. To cautiously admit that grounding a spacecraft on an intuition would be crazy. But the globally cautious response is to run screaming into the NASA director’s office, demanding that he stop the launch immediately until there can be a full review of everything. There’s a sense in which this is rash and ignores all sorts of generally wise and time-tested heuristics like the ones above. But if by “caution” you mean you want as few astronauts as possible to end up as smithereens, it’s the way to go.

And part of me gets really happy when people say that we should avoid jumping to conclusions about AI being dangerous, because the future often confounds our expectations, and shocking discontinuous changes are less likely than gradual changes based on a bunch of little things, or any of a dozen other wise and entirely correct maxims. These are the principles of rationality that people should consider when making predictions, the epistemic caution that forms a rare and valuable virtue.

But this is the wrong kind of caution for this situation. It’s assuming that there’s some sort of mad rush to worrying about AI, and people need to remember that it might not be so bad. That’s the opposite of reality. As a society, we spend about $9 million yearly looking into AI safety, including the blue-sky and strategy research intended to figure out whether there’s other research we should be doing. This is good, but it’s about one percent of the amount that we spend on simulated online farming games. This isn’t epistemic caution. It’s insanity. It’s like a general who refuses to post sentries, because we can’t be certain of anything in this world, and therefore we can’t be certain the enemy will launch an attack tonight. The general isn’t being skeptical and hard-headed. He’s just being insane.

And I worry this is the kind of mindset that leads to throwaway phrases like “perhaps we should be more worried about this new warehouse management program than about superintelligent AI”. Sure, perhaps this is true. But perhaps it isn’t. “Perhaps” is a commutative term. So, “Perhaps we should be more worried about superintelligent AI than about a new warehouse management program”. But the warehouse management company makes more money each year than the entire annual budget of the AI safety field.

Perhaps we should spend more time worrying about this, and less time thinking of clever reasons why our inaction might turn out to be okay after all.


394 Responses to Two Kinds Of Caution

  1. paranoidaltoid says:

    In trying to convince people they should be worried about AI, it’s common to talk about AI. Which seems natural.

    But understanding the likelihood of unfriendly AI ending the world is only one piece of the puzzle. Understanding decision theory seems to be equally important. And often, that’s the missing piece of the puzzle. People who argue we shouldn’t be worried about AI don’t seem to be thinking about the future in the same way as they would about Russian Roulette.

    • OptimalSolver says:

      Understanding decision theory seems to be equally important. And often, that’s the missing piece of the puzzle.

      In the last open thread, Neaanopri said that turning values into optimal action, aka decision theory, is the most interesting problem in philosophy. Sounds about right to me.

    • spork says:

      People who argue we shouldn’t be worried about AI don’t seem to be thinking about the future in the same way as they would about Russian Roulette.

      Damn right, because Russian Roulette has a very clear payoff matrix, unlike AI and AI friendliness research investment. AI risk is much more like Pascal’s wager, where the “obviously right” move only looks obvious before you think hard about all the unknowns. God might not want you believing in him, and will punish you if you do. Now assign a probability to that. Well, ok, I’m overstating my case.

      I’m not totally skeptical about our capacity to calculate outcome probabilities. I just have plenty of indirect evidence that all kinds of irrational forces conspire to make us overvalue the likelihood of AI catastrophe and undervalue the likelihood of AI salvation – futures in which strong AI turns out to be necessary to prevent our extinction. One of those forces has to do with the visibility of catastrophes caused by technology, versus the invisibility of catastrophes prevented by technology. Another has to do with the fact that it takes effort to adjust to new realities: When we’re reasonably content with the reality we have, we tend to favor futures in which we don’t expend this adjustment effort, even when these low-effort futures are objectively worse than available alternatives. We’re also excessively repulsed by the possibility that through our actions we become co-responsible for ruining everything. However, we don’t have the same revulsion to the case where through our inaction we are co-responsible for not fixing a horrible reality that is not of our making. (A thought along these lines that I’ve had recently: What if a consensus of bio-ethicists could convince the world to immediately use a gene drive to wipe out malaria-carrying mosquitoes, but it turns out that most bio-ethicists are moral cowards who only know how to say “Don’t!”? When you divide the number of children who die each year from malaria by the number of bio-ethicists who could have collectively prevented their deaths, that makes each bio-ethicist personally responsible for killing fifteen children every day. People were hanged at Nuremberg for less. Despite this, we can’t viscerally feel the “cautious” bio-ethicists to be monsters.)
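      (For concreteness, here is that division as a quick calculation. Both inputs are illustrative assumptions of mine, chosen only to show what scale of numbers yields something like fifteen per day; neither is a real statistic.)

```python
# Back-of-the-envelope only: both numbers are illustrative assumptions,
# not real statistics, chosen to show what scale yields roughly 15 per day.
child_malaria_deaths_per_year = 300_000   # assumed, for illustration
influential_bioethicists = 55             # assumed, for illustration

deaths_per_day = child_malaria_deaths_per_year / 365
per_bioethicist_per_day = deaths_per_day / influential_bioethicists
print(f"{deaths_per_day:.0f} child deaths per day in total")
print(f"~{per_bioethicist_per_day:.0f} per bio-ethicist per day")
```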

      For these reasons and more, my meta-contrarianism is set off by prophecies of the form “New stuff will bring doom!” I think it’s inevitable that such people overestimate the probabilities of bad outcomes and underestimate the good. When I’m tempted to join them in doom-prophesying – and sometimes I am – I rehearse these arguments and try to remember that in overvaluing the prevention of badness I become far too indifferent to the goodness I may be preventing.

  2. reasoned argumentation says:

    You’re not wrong, but we simply don’t have a government that’s oriented towards making sane long-term decisions. Yes, this particular issue is really important, and no, that doesn’t change the fact that the government is fundamentally insane and unable to consider long-term consequences. Without even getting into the many areas that you can’t think about – how’s the search for near-Earth asteroids going?

    • 6jfvkd8lu7cc says:

      Is it a bright side or a dark side that if you actually hope to dent the decision theory problems for superpowerful AIs (whether they can be built or not), you will need to propose a better (and game-theoretically-stable) decision theory for a medium-size state just as a partial result?

      • reasoned argumentation says:

        Dark side because you can’t propose superior decision theories (that will actually be implemented) for states without it being seen as an existential threat to the existing state.

        EDIT – The one lesson that the almost all encompassing state learned from the 20th century is “no more Berlin Walls” – not meaning anything like “don’t try to imprison people in poorly governed areas” but meaning “don’t let a better governed area survive to provide a contrast to a worse governed area”. Even thinking about this problem with regards to AI runs into all kinds of barriers.

        • 6jfvkd8lu7cc says:

          I guess there is a fine art of having any apparent applications of the theory that you publish to fall outside the planning horizon.

          Also, as long as the personal well-being and public status of the decision-makers are preserved, do they — either as individuals or as a collective whole — care what the actual object-level decisions are?

    • christhenottopher says:

      Interesting that the immediate leap here is to the question of government action. Certainly a government could raise huge amounts of funds easily, and there is a theoretical argument to be made that such action would be appropriate, as avoiding global catastrophe does seem like a public good. Nonetheless, you may be right that the incentives for governments don’t really work on the long term, which may be why few discussions I’ve seen on the AI problem involve governments at all. Most involve private non-profit initiatives (MIRI being the obvious example). So I generally interpret the “guys, actually AI safety is an important topic” less as code words for “guys, we need government action on AI safety” and more as “we need more experts to choose AI safety as a research topic and more non-experts to give money to incentivize such research.” Talking about the failure modes of governance kind of seems like a non-sequitur to me.

      • straussd says:

        It’s not so unreasonable when people like Elon Musk are calling for government regulation of AI:

        He added that he was increasingly inclined to think there should be some national or international regulatory oversight—anathema to Silicon Valley—“to make sure that we don’t do something very foolish.”

        He even addressed a gathering of governors this past week:

        Musk told the governors that AI calls for precautionary, proactive government intervention: “I think by the time we are reactive in AI regulation, it’s too late,” he said.

    • rlms says:

      “Without even getting into the many areas that you can’t think about – how’s the search for near Earth asteroids going?”
      My impression is that it’s going relatively well, although people have started to worry about centaurs (source: visiting this place).

    • TheRadicalModerate says:

      Governments are useful in solving problems like this when the problem is well-understood. But forcing a government solution onto the perceived problem turns a poorly-formed philosophical / scientific / engineering problem into a political problem. Political problems aren’t about finding solutions; they’re about defending or attacking policies and managing the interests affected by those policies. You need things considerably more fully-baked before you get governments involved.

      Asteroid defense is actually well enough baked that policy can be useful, and in fact we have government-sponsored telescopes being developed and deployed in interesting orbits to look for NEAs. That’s not a complete solution, but the problem is well enough understood that we know that surveillance is going to be a part of it. So we do what we can, and we keep working on the other, harder parts of the problem.

  3. nestorr says:

    Well, in general, existential risks are something only atheists truly grasp, and atheists are a minority. Everyone else just goes along subconsciously with what I call the “fishtank analogy”: this planet is God’s fishtank, and sure, it might get a little cloudy and some of the fish may eat each other, but he’s put a lot of effort into his fishtank and he’s gonna step in before his investment is ruined, right? Right?

    Hell the notion of all human life ending is kind of hard to grasp – like personal mortality, sure intellectually you know it’s going to happen but it slides away, the mind doesn’t want to look at it.

    • Christopher Hazell says:

      As a counterpoint: The Revelation of John.

      • nestorr says:

        Well yes, I am aware of religious apocalypse scenarios, but they usually involve God cleaning out the fishtank and putting all the deserving fishes aside in their new and improved fishtank. Some religious folks are ALL about the end of the world, but I don’t think they grasp the actual concept accurately.

    • Bugmaster says:

      I think that the notion of all human life ending is pretty unlikely. Some humans will most likely survive in some fashion, even after a global thermonuclear war, an engineered pandemic, or an asteroid strike. However, I absolutely think we should be investing way more than we are currently doing into preventing these events — as well as other, less lethal ones, like global warming or total economic collapse.

      I am not, however, willing to spend any effort on preventing alien invasions, Ragnarok, or UFAI. Not because I want humans to die, or because I think such events would be totally safe, but because I think they are vanishingly unlikely.

      • albatross11 says:

        Also, I don’t think it’s crazy to suspect that unfriendly AI may be a problem we don’t know enough to get much traction on, yet. By contrast, we might plausibly do something about being driven to/near extinction by engineered plagues or massive nuclear war. But worrying about AI motivation now might plausibly[1] be a bit like the Victorians worrying about asteroid impacts–they wouldn’t have had the knowledge or resources to do much about them anyway, so the work on them might not have yielded much benefit.

        [1] I don’t know enough about the field to have a good sense of whether this is true or not.

    • OptimalSolver says:

      Hell the notion of all human life ending is kind of hard to grasp – like personal mortality, sure intellectually you know it’s going to happen but it slides away, the mind doesn’t want to look at it.

      Indeed. Dubious, low-probability measures such as cryonics have been introduced in order to cater to people unwilling to let go of the concept of an afterlife. This enables the likes of Eliezer to avoid dealing with the extreme unfairness of being born in a pre-life extension time period.

      See also: Tegmark-style Big Worldism.

      • Jliw says:

        dealing with the extreme unfairness of being born in a pre-life extension time period.

        This is something that would consume me with despair if I let it. Can you imagine the wonders a post-life-extension, post-extraplanetary-exploration, post-God-only-knows-what human might see? The things they might do or know?

        Ultimately inconsequential but the apex of injustice: what if we only missed it by a generation?

        Christ.

        • OptimalSolver says:

          Yes, I try not to dwell on what I will, with very high likelihood, miss out on.

          If we imagine a landscape of all possible human subjective experiences, I don’t think that we can even fathom what those that approach the global max would be like. I don’t even think that setting foot on an alien world, encountering a non-human civilization, or seeing a hypernova with your own eyes are even remotely close to the peak.

          I suppose at least someone will get to experience these things.

          • HowardHolmes says:

            Consider what we have experienced the last couple of decades. If that does not satisfy, nothing will.

          • Jliw says:

            That’s the most succinct way to put it that I’ve yet seen, @OptimalSolver.

            I suppose the silver lining is that some experiences available now are sufficiently interesting or pleasurable that they can still nicely motivate, cheer, and capture attention; we’re not at the bottom, right?

            @HowardHolmes: I love all this stuff already; it satisfies, but being forcefully escorted off the premises right before the main course and the lead-up magic show is still a bummer… no matter how much you loved the appetizers.

            (Hell, maybe that makes it worse, because they’re so good that you know the Spicy Lentil Curry woulda been fantastic.)

        • TheEternallyPerplexed says:

          This is something that would consume me with despair if I let it. Can you imagine the wonders a post-life-extension, post-extraplanetary-exploration, post-God-only-knows-what human might see? The things they might do or know?

          They will be as boring or exciting to them as our world is to us now.

          • Jliw says:

            Yeah, to them is the key phrase here — I want to see. Worse, everything is already incredibly exciting and interesting, and that’s also in peril; it would be a terrible thing to be destroyed forever when one still loves and learns.

            But re: the idea of Homo-stasis (heh, get it?) in excitement: I don’t actually think a qualitative change is improbable. The hedonic treadmill is decently hackable even now; and, regardless, some things will probably get better in a way requiring no direct comparison to enjoy.

            (E.g., I think modern and 1700s humans alike would agree that it’s better to live with the variety in and pleasing qualities of the experiences available now; and as a kid who never knew a world without books I still got way more excited about them than any simpler amusement.)

          • TheEternallyPerplexed says:

            Yeah, to them is the key phrase here — I want to see.

            I’d prefer a slow growing into it. Chances are that we from today would be so extremely dumbfounded that there would be no fun in wondering — almost nothing would make sense anymore for us in a world of post-life-extension, post-extraplanetary-exploration, post-God-only-knows-what humans. A bit like the ‘how to communicate with space aliens’ problem, but applied to everyday living. Imagine today’s uncontacted Yanomami abruptly waking up in Google HQ. Reminds me also of the opening pages of Lem’s ‘Transfer’. Bewilderment. Confusion. And unless they have a suitable technology to apply/implant/whatever on/in us, we only learn and understand so fast. And even if there was — would it still be “us”?

          • Eli says:

            They will be as boring or exciting to them as our world is to us now.

            And how do you know that?

          • TheEternallyPerplexed says:

            What you grow up in is your boring old normal. For every generation; only content changes.

            (Heard somewhere: If something was around before your 15th birthday, it is natural, the way things have always been, never questioned. If it appears between your 15th and 30th year, it is NEW! EXCITING! WILL CHANGE THE WORLD FOREVER! If it comes up after that age, you find it unnatural, dangerous, something that must be banned because of its consequences, etc.)

    • albatross11 says:

      nestorr:

      I’m a Christian (Roman Catholic, as it happens) who worries about existential risk. I don’t think I’m unique, either–a lot of Christians seem to have worried about nuclear armageddon, for example. I think a fair number are also worried about environmental degradation/global warming as an existential risk[1].

      As best I can tell, very few people (religious or not) overall worry about this stuff, because it’s remote and hard to think about and most of these concerns (other than global warming) aren’t expressed by high-status people and don’t seem like a pathway to being taken more seriously or rising in status.

      [1] Other than from nuclear war or some kind of nanotech accident, I don’t really see this as existential risk, just as making life harder for future generations.

      • nestorr says:

        I myself was raised Catholic so I’m familiar with the culture, and it always struck me as being more about social fabric, ethics, and mores than about the more literal “God is actually watching us” of the Evangelicals and other varietals. So maybe you’re more of the “belief in belief” kind of religious person. If not, how do you square God watching his creation with sudden apocalypse… i.e. either it’s his Plan for the end times, however it may come about, or it isn’t, in which case… will He let it happen?

        Rhetorical, I’m saying catholics as I know them believe in a noninterventionist God that’s hard to distinguish from no God at all… y’all may as well be atheists. 🙂

        • Conrad Honcho says:

          how do you square God watching his creation with sudden apocalypse

          “The Lord works in mysterious ways.”

          Central memes of Catholicism include an acceptance (sometimes a reverence for in a way I personally think misses the mark) of suffering and the existence of mysteries we are meant to ponder but not solve. So, Catholics aren’t really much bothered by the possibility of nuclear Armageddon. Your mission is to take care of your family and community, and if enough people do that, there’d be no nuclear Armageddon anyway.

  4. 6jfvkd8lu7cc says:

    Maybe singling out the AI Safety research field as a comparison makes your argument harder to sell to outsiders? There are many ways we can make Earth uninhabitable, after all, and I think it is better to promote a wider field which happens to do work your preferred subfield will need anyway and which is easier to promote just because it is larger.

    Morris worm, Ariane-5, Therac-25, HFT Flash Crashes, the entire history of Adobe Flash, HeartBleed, WannaCry, and a lot of other unambiguous incidents (also, a few high-profile ambiguous cases which could also be related to software problems or deliberate information attacks) tell us that we are bad at telling computers what to do, and we are bad at telling computers what not to do. This is a problem for designing safe AI decades (or a couple centuries?) in the future. It is also a problem right now.
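    Ariane-5’s maiden flight is the cleanest illustration: a 64-bit floating-point value was narrowed to a 16-bit signed integer with no range check, it overflowed, and the inertial reference system shut itself down. A minimal sketch of that failure pattern (in Python rather than the original Ada, with an invented input value):

```python
# Sketch of the Ariane-5 failure pattern: narrowing a 64-bit float to a
# 16-bit signed integer with no range check. Python ints don't overflow,
# so struct emulates the wrap-around; the input value is invented.
import struct

def narrow_unchecked(x: float) -> int:
    """Blind float -> int16 conversion: silently wraps out-of-range values."""
    return struct.unpack("<h", struct.pack("<H", int(x) & 0xFFFF))[0]

def narrow_checked(x: float) -> int:
    """The 'telling the computer what not to do' version: refuse bad input."""
    if not -32768 <= x <= 32767:
        raise ValueError(f"{x} is outside the int16 range")
    return int(x)

horizontal_bias = 50_000.0  # invented; merely something an int16 cannot hold
print(narrow_unchecked(horizontal_bias))   # prints -15536: garbage, no error
try:
    narrow_checked(horizontal_bias)
except ValueError as err:
    print("checked conversion refused:", err)
```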

    And it looks like the Chernobyl scenario is only considered a lesson for nuclear reactors, not for technology-related decision making as a whole. There, after deactivating a lot of safety measures as part of an (already risky) experiment, and then a few more to achieve the experimental conditions more easily, the reactor had been erroneously driven into a safe, apathetic state — everything could have been easily shut down and the experiment retried in a few months, but «we need to achieve results today». High-risk methods expressly prohibited by every manual were applied, some amount of bad luck happened (but not much was needed by that point), with well-known bad results.

    There is a lot of work in formal verification, safe protocol design, formal safety restriction specification, etc. that AI Safety in the narrow sense will need anyway, but which is also trying to decrease the frequency of software failures as they happen already. These fields also do not get all the respect they deserve, though.

    And a large share of epistemic theory work is currently branded as philosophy or economics without mentioning AI Safety, but will still be needed if the AI Safety field is ever to achieve its dreams.

    Personally, I also think that with the current trend we are headed for some clear large scale automated-control-failure which will directly cause thousands of deaths in one way or another; hopefully it will not deal enough damage to set back the global interdependent overautomated civilization more than a few years. In that case some people will probably learn from it enough to noticeably change all the software safety, decision stability and epistemic theory research — not sure if by much.

    And in general, I would say that Jennifer might (with some bad luck) become the decision automation disaster of today, while AI which is stable enough to actually self-bootstrap without exploding is the disaster of the day after tomorrow; maybe now it is still time to do the work that reduces both risks.

    • Bugmaster says:

      Formal verification, safe protocol design, etc. are very good ideas, and are definitely worth spending a lot of effort on, regardless of whether the UFAI story is true or not.

      • 6jfvkd8lu7cc says:

        Which is why I argue for that — I do not hold the exact position of Scott Alexander (after some experience with how software works in the real world… this experience is not «uniformly pretty», as usual), but I hope to find a middle-ground position that is a) obviously true for a larger group, preferably including me; b) advertises mitigations for the disasters that I actually expect to see while still healthy enough to care; c) can be sold to outsiders using correct and plausible analysis (these are two independent conditions) of the events that have happened and made the news, not the complicated hypotheticals.

        • Bugmaster says:

          Yes, agreed; I just want to make sure that, if I were to (for example) donate $X to a group of computer scientists working on formal verification, then my $X would actually go toward researching software verification that is applicable to existing software — as opposed to being spent on researching FAI because these scientists also work on FAI and believe that FAI is more important.

  5. Christopher Hazell says:

    I understand this from most perspectives, but I still fail to see how this fits into an effective altruist paradigm.

    The mechanisms by which a nuclear war or an asteroid impact could pose an existential threat to humanity are well understood, and we have some pretty good ideas about how to prevent those eventualities as well. Meanwhile, the questions of what risks AI could pose and how they could be stopped are mostly very, very speculative.

    On the other hand, you can push back and ask: If so many AI researchers believe that their research will destroy humanity, why don’t they stop researching it? Or demand that more money, lobbying and time be put into creating national standards?

    Nuclear scientists seem to have lost power over the bomb in a similar way in the 20th century, and we have already come alarmingly close to global nuclear war.

    For that matter, look at global warming, an issue which worries broad swathes of the public (Just as the threat of nuclear war did); how are efforts to combat that going?

    I guess my point is two-fold:

    1. It is not clear to me that AI risk is the most fundamental problem here. Whatever it is about the current global socio-economic system that under-prioritizes safety concerns is likely to continue to be a threat, whether or not we manage AI risk specifically. You coming up with plans to stop the grey goo, yet?

    2. It is not clear to me how more or less worry on the part of the public would actually help get effective AI research funded and disseminated.

    Also, a sub-point: One of the risks of AI and farming games alike is that they might exacerbate bad trends in our global socio-economic system. One of the things you’d want to look into, assuming that you believe AI really is an existential risk, is whether various forms of automation might actually make safety concerns more difficult to implement, rather than less.

    • Bugmaster says:

      One point to consider is that AI research has tremendous potential benefits, even if (or perhaps especially if) you don’t believe that superintelligent godlike AIs are going to happen anytime soon (or at all). For example, IMO real-time human-level (or better !) machine translation would be an immediate massive boost to our quality of life. You could always say, “well, yeah, being able to talk to anyone in the world regardless of their native language might be cool, but the technology to achieve this will increase the probability of an unfriendly Singularity by a factor of 1.02, so we should stop researching it ASAP”. However, this only makes sense if the probability of an unfriendly Singularity isn’t currently epsilon. Otherwise, it’s just pointless obstructionism.

      • Le Maistre Chat says:

        So Butlerian Jihad when probability of superhuman AI coming to be before the jihad can stop it reaches 2%, yes?

        • Bugmaster says:

          Let’s say that P is the probability of UFAI killing us all, and that machine translation research sets P to P’ = P * 1.02. This is quite alarming if P ~= 0.1, but it’s virtually irrelevant if P ~= 10^-999.
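          To spell the comparison out (just a sketch of the arithmetic above; 10^-999 underflows an ordinary float to zero, so 10^-300 stands in for “vanishingly small”):

```python
# The same multiplicative bump (x1.02) is a very different absolute change in
# risk depending on the prior it multiplies -- which is the whole argument.
def added_risk(prior: float, factor: float = 1.02) -> float:
    """Absolute increase in catastrophe probability from multiplying by `factor`."""
    return prior * (factor - 1.0)

for prior in (0.1, 1e-6, 1e-300):
    print(f"prior {prior:.0e} -> added risk {added_risk(prior):.1e}")
```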

    • James Miller says:

      >If so many AI researchers believe that their research will destroy humanity, why don’t they stop researching it?

      I wrote about this for LessWrong in a post called “Allegory On AI Risk, Game Theory, and Mithril”.

    • peterispaikens says:

      Direct answers to some of your questions:
      “If so many AI researchers believe that their research will destroy humanity, why don’t they stop researching it?”

      a) There’s no monopoly on AI research; if e.g. 50% or 90% of the AI researchers stop research now, then it doesn’t conceptually change the risks nor prevent them, only possibly postpone them. It’s substantially different from nuclear weapons, where we *can* prevent most of the world from nuclear weapons research simply by controlling access to certain rare materials.

      b) Safe AI research is connected to general AI research. In essence, we can consider it a race between discovering how to make “safe AI” and discovering how to make “powerful AI”. If all the altruistic AI researchers stay away from the topic, then the “safe AI” research grinds to a halt while the “powerful AI” research still goes on quite strong.

      “Or demand that more money, lobbying and time be put into creating national standards?”

      a) *What* national standards? We’re not currently able to formulate a law that (if 100% followed) would make us safe, that would be an interesting problem to solve, but it’s not solved.
      b) *National* standards aren’t a solution. At the very minimum, you’d need *every* advanced country – USA, EU, China, Russia, Japan, Israel, etc. – on board. This is a bit of a tragedy-of-the-commons situation, since prohibiting useful things carries a *very* hefty cost, but if e.g. China or Russia don’t join (or don’t enforce) such standards, then this simply concentrates many of the “powerful AI” researchers there and you don’t prevent the existential risk anyway (a toy sketch of this payoff structure follows below).
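      (Every number in the sketch below is invented purely for illustration, not an estimate of anything; it just makes the commons structure explicit.)

```python
# Toy two-country "AI race" game: each country either "restrict"s (adopts costly
# safety standards) or "race"s. All payoffs are invented for illustration only.
payoffs = {  # (A's choice, B's choice) -> (A's payoff, B's payoff)
    ("restrict", "restrict"): (3, 3),   # both safe, both bear the cost
    ("restrict", "race"):     (0, 4),   # A bears the cost, B captures the field
    ("race",     "restrict"): (4, 0),
    ("race",     "race"):     (1, 1),   # everyone races, everyone inherits the risk
}

# "race" is A's best reply whatever B does -- the tragedy-of-the-commons
# structure described in point b) above.
for b_choice in ("restrict", "race"):
    best = max(("restrict", "race"), key=lambda a: payoffs[(a, b_choice)][0])
    print(f"if B plays {b_choice!r}, A's best reply is {best!r}")
```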

      • Loquat says:

        This may be a stupid question, but if the “safe AI” research and the “powerful AI” research are going on separately, how do you get the “powerful AI” projects to adopt the “safe AI” findings? Particularly if they’re going on in different countries and/or involve national security – a Chinese team researching AI for military purposes, for example, might be extremely reluctant to adopt solutions proposed by some American organization claiming they’ll make things safer because what if they actually include a secret vulnerability the US government will be able to exploit in the event of a conflict?

        • James Miller says:

          This is far from a stupid question. Those willing to ignore safety issues have a huge advantage in any AI race, similar to the advantage athletes willing to take risky performance-enhancing drugs have. Our only hope is that people with extremely deep pockets will attempt to develop a safe and powerful AI, accepting the handicap that safety imposes on them in any AI race. By doing some of the safety work upfront, an organization like MIRI can mitigate the harm of this handicap.

        • It’s in everyone’s interest to develop AIs they can control. That isn’t quite the same thing as friendly or benevolent AI, but it does mean that power and control are not going to be entirely separate projects.

        • spork says:

          I would tack on another part to the question: What if the “best safety practices” require AI researchers to forego a class of potentially very fruitful experiments, all because some eggheads at Oxford worry that those experiments would raise the probability of unfriendly AGI to 0.0002? I mean, given the gravity of the consequences, I don’t like those odds either, but a researcher who is on the weaker side of a potentially devastating cyberwar or drone war will inevitably feel less squeamish about them. This is especially true if rules effectively forbid huge swaths of potentially powerful applications, in a time when everyone suspects that bad actors are shitting on the rules anyway. We basically know how humans behave with this sort of incentive structure, and it’s bound to get reproduced in AI research.

    • AnthonyC says:

      If so many AI researchers believe that their research will destroy humanity, why don’t they stop researching it?

      Well, for one, all those other people, the ones that aren’t concerned about AI safety, won’t stop *their* research. So… in what scenario does having the concerned people leave the field improve the odds?

      It’s a tad like arguing that all small-government conservatives should stop running for office, because they think government action is likely to lead to bad outcomes. That would not be the strategy that minimizes those outcomes.

  6. Bugmaster says:

    Sophisticated people jump onto the counternarrative to show their sophistication and prove that they understand the subtle points it makes.

    Not all opponents of AI alarmism are sophisticated contrarians; some of us were opposed to AI alarmism before it was cool. Personally, I don’t buy the “superintelligent AI will become a wrathful god” story (you know, the one all those science fiction authors are always writing about) because I think it is vanishingly unlikely to happen, not because I think that’s what all the cool kids are doing these days. Of course, it’s always possible that I’m suffering from some sort of “internalized sophistication” that I don’t know about.

    But the globally cautious response is to run screaming into the NASA director’s office, demanding that he stop the launch immediately until there can be a full review of everything.

    True, and if everyone always followed this rule, we’d never launch anything into space at all. If your goal is to prevent accidents from occurring at all costs, this is a reasonable and in fact quite desirable outcome. However, if your goal is to have things like GPS and global communications some day, then it’s not. This brings me to my next point:

    This is good, but it’s about one percent of the amount that we spend on simulated online farming games. This isn’t epistemic caution. It’s insanity. It’s like a general who refuses to post sentries, because we can’t be certain of anything in this world…

    The more I read your articles about AI, the more convinced I am that you are laboring under the Typical Mind Fallacy. You are 100% (plus or minus epsilon) convinced that UFAI is coming any day now. Therefore, anyone who isn’t throwing all his resources at it must be insane (or perhaps just a complete idiot). But some of us really don’t believe in UFAI, in the same way that we don’t believe in gods. I ate a ham and cheese sandwich for lunch today, and I didn’t give a second thought to the consequences.

    Yes, it’s possible that a wrathful god might strike me down for eating non-kosher foods, but the probability of this happening is so low that it’s just not worth worrying about. And no, I’m not going to trust experts like the Pope or even the Dalai Lama about their religions; while they are experts, their arguments are completely at odds with available evidence, and thus not convincing. However, I can understand how theists feel — I can’t feel the way they do, but I can understand it on an intellectual level. From a devout Jew’s point of view, habitually eating ham and cheese sandwiches must be the height of insanity.

    If you want to convince me to stop eating those delicious ham-and-cheese sandwiches, then simply declaring me insane or stupid or whatever is not going to do the job. Instead, you must offer some convincing evidence that a god exists, and that he hates those sandwiches for some reason; and this evidence has to be convincing even to someone like me, who does not already believe the conclusion. So far, religion has not delivered.

    Do you kind of see where I’m coming from on this ?

    • MawBTS says:

      The more I read your articles about AI, the more convinced I am that you are laboring under the Typical Mind Fallacy. You are 100% (plus or minus epsilon) convinced that UFAI is coming any day now. Therefore, anyone who isn’t throwing all his resources at it must be insane (or perhaps just a complete idiot).

      I don’t think these characterisations of Scott are fair or accurate, and if they were, that’s a case of Scott overestimating a threat, not Scott having “Typical Mind Fallacy”.

      • Bugmaster says:

        I’ve argued with Scott on this topic before, and he seemed unable to internalize the idea that there could be possible positions other than “Humans are the pinnacle of intelligence” and “Superhuman AI is inevitable and imminent”. And when I read this article, the impression I get is that he’s unable to internalize the idea that some people really, truly do not believe that superhuman AI is likely to happen. Rather, he seems to think that such people either do believe it but don’t care (in which case they’re suicidal or insane), or that they don’t understand how probability works at all and simply reject any belief that is not 100% certain (in which case they’re stupid or ignorant). I am not psychic, of course; I don’t claim to know Scott’s mind; I am just describing how his writing sounds to me.

        • dansimonicouldbewrong says:

          The underlying disagreement is over the meaning of “intelligence”. Scott has a very simplistic understanding of intelligence: it’s a kind of general problem-solving ability that humans have a certain amount of, animals have a bit of, and some future AI will have enormous quantities of. There’s no formal definition of this ability, and no objective way to measure it quantitatively–yet it’s intuitively obvious to everyone how to recognize and compare quantities of it in other entities, whether natural or artificial.

          I’ve tried to convince him many, many times that this understanding is completely incoherent, but it apparently has such enormous intuitive appeal to him that he simply waves off objections with phrases like, “we all know what I mean by it”. And as far as I can tell, he really believes that we do.

    • paranoidaltoid says:

      You are 100% (plus or minus epsilon) convinced that UFAI is coming any day now.

      You said this in response to his claim that we ought to spend more than 9 million a year on preventing the singularity. How does wanting to spend more on the singularity than we do on cigarettes imply a 100% belief in the singularity?

      • Bugmaster says:

        I think that Scott would be much less likely to call his opponents “insane” if he weren’t 100% convinced. However, I acknowledge that you have a point — if Scott wants to spend $10 million / year on AI instead of $9 million, I would be more sympathetic to his position, even though I personally think we should be spending somewhere on the order of $1000 (at most). However, I have a feeling that Scott is looking for something quite a bit heftier than just a $1 million increase. I could be wrong, though.

        • Jliw says:

          Scott specifically made a point of explaining that he doesn’t pretend to know that UFAI is coming or even possible, but rather only believes that the amount of attention paid is a) too little for the level of concern judged plausible by domain experts, and b) sometimes justified by “oh, don’t worry ’bout UFAI” arguments that are glib or specious.

          I don’t see him anywhere saying UFAI is a certainty, or that anyone who isn’t making it their main concern is malicious or stupid, or anything else like that.

          It’s so far, in fact, from any plausible reading from my perspective, that it honestly seems to me that something is coloring your reading of Scott re: AI. (I say this in a friendly spirit, knowing that it may in fact be my reading that’s tinted.)

          • Bugmaster says:

            Scott specifically made a point of explaining that he doesn’t pretend to know that UFAI is coming or even possible, but rather only believes that the amount of attention paid is a) too little for the level of concern judged plausible by domain experts…

            I don’t understand this point of view (regardless of whether or not Scott subscribes to it). How can you simultaneously say, “we don’t know how likely X is to happen or whether it’s even possible”, and also, “we should drastically increase the amount of money we spend on X”? What would we spend the money on? One answer could be, “learning whether X is possible”. Ok, what does that mean in practice, given that you barely have any idea what X even is? Also, aren’t there a nearly infinite number of unknown unknowns just like X — should we spend massive amounts of money on all of them?

    • kokotajlod@gmail.com says:

      “Do you kind of see where I’m coming from on this ?”

      I almost do, except that you said this:

      “You are 100% (plus or minus epsilon) convinced that UFAI is coming any day now.”

      I am not sure why you said that. Right now my #1 hypothesis is that you are exaggerating; what you really mean is 1% rather than 100%. In which case I would say: You must be *extremely* confident that AI safety is a non-issue for 1% to be way too much. (Your later comments about how you think the problem is only worth spending $1000/year on are consistent with this hypothesis…) I think that’s unreasonable. Can you name another potential catastrophic risk that has e.g. a bunch of smart people like Musk, Hawking, etc. as well as a significant fraction of relevant experts (Stuart Russell, etc.) worried about it, that you nevertheless think should only receive $1000/year in safety funding? Or anything close?

      My #2 hypothesis is that you just didn’t read the OP very carefully at all. It’s plain as day that Scott is NOT 100% confident that UFAI is coming any day now. He’s pretty much said so explicitly multiple times, and explained how even a tiny chance of a terrible outcome is worth spending money and research to prevent.

      • dansimonicouldbewrong says:

        There are a great many pseudoscientific scares that get lots of people worked up about nothing, and get far more funding and attention than they deserve: BGH, GMO, catastrophic global warming, deforestation…. UFAI is just one more relatively minor one.

        And yes, even if Scott’s estimate of UFAI catastrophe probability is 1%, it’s ludicrously excessive.

      • Bugmaster says:

        I agree that Scott’s 100% certainty is not required for my argument to be valid; as dansimonicouldbewrong says above, even 1% would be excessive. However, my reading of Scott’s arguments (not just this one, but his previous ones as well) still leads me to believe that he’s virtually certain about UFAI.

        Can you name another potential catastrophic risk that has e.g. a bunch of smart people like Musk, Hawking, etc. as well as a significant fraction of relevant experts (Stuart Russell, etc.) worried about it, that you nevertheless think should only receive $1000/year in safety funding? Or anything close?

        First of all, Musk and Hawking aren’t AI or even general CS experts, as far as I know (though I could be wrong). There are CS experts who do endorse increased spending on UFAI, but many of them endorse it as a subset of increased spending on software safety in general, and I am 100% on board with spending a lot more on things like e.g. formal verification than we do currently.

        I am not opposed to the argument, “AI is a subset of software, and is therefore dangerous just like all buggy and/or malicious software is dangerous”; but rather, I oppose the view that “AI represents a unique existential risk that is orders of magnitude more dangerous than regular software risks, and therefore cannot be mitigated merely by using the same methods we’d use to mitigate those lesser risks”. To be sure, there are some experts who do subscribe to the latter view, but their arguments usually rely on a bunch of unproven assumptions and are thus totally unconvincing.

    • LukeReeshus says:

      But some of us really don’t believe in UFAI…

      Glad to see I’m not the only one.

      To be sure, I used to think AI was something worth worrying about. At least, I believed that our eventually building an AI with sufficiently advanced “I” to pose a threat to us was well within the realm of possibility, if not inevitability. After all, one need only embrace two postulations to anticipate it:

      1) Intelligence is, at bottom, a product of physical processes.
      2) Our collective prowess in computer/electrical engineering will continue to improve, and at an increasingly accelerating rate.

      Since I am not, nor ever have been, a dualist, and since our civilization, despite violent hiccups, is on a clear upward trajectory, both 1) and 2) strike me as obviously true.

      However, there is a third postulation—one which, I think, AI-worriers have not lent sufficient consideration: Brains cannot become minds without bodies.

      To summarize, the view of the brain as a spongy computer in the cranium is mistaken. It is, rather, the hub of a nervous system that is intimately connected to, and interactive with, the entire body. This distinction is important. And obvious. So obvious, in fact, that it probably sounds banal to point it out. But I routinely see “very smart” people forget it—in all likelihood because they are very smart. As the article says, “We’ve probabl[y] fallen for disembodied brains because of the academic tendency to worship abstract thought.”

      When one reflects on the process that produced such a nervous system—organic evolution over billions of years—it seems quite amazing to assume that human ingenuity will ever produce something more impressive. I mean, have we ever engineered something as complex and multi-faceted as a single neuron? Will we ever engineer something as complex and multi-faceted as a network of billions of those miniature wonders of Nature? Somehow I’m skeptical.

      So, if I could summarize my argument: neurons are not transistors, and transistors are not neurons. To think they can ultimately do the same thing is mistaken.

      Thoughts?

      • Toby Bartels says:

        I agree with the article (despite the exaggerated ending), but I don’t see what neurons vs transistors (or silicon chips today) has to do with it. A human-like intelligent computer will require a human-like robotic body to inhabit, if it is to think like us; but this is easy to achieve, regardless of whether it has mirror neurons or mirror chips inside. Meanwhile, a superhuman android would be as much of a threat as any other superhuman intelligence would be.

        • LukeReeshus says:

          I don’t really have a response, except to reassert my skepticism that “this is easy to achieve,” and to wonder how, exactly, one goes about building a “mirror chip.”

          Mirror neurons, if I understand them correctly, play a large part in the cognitive capacity we call empathy. Social organic beings have an evolved requirement for empathy, which is why, ipso facto, we all have it.* The evolutionary impetus for empathy is as much a cause of mirror neurons as a result of them. Thus, I find it implausible that engineers will be able to summon a synthetic version of them ex nihilo, in the absence of the prerequisite requirement for socialization which they evolved to address.

          To put it rhetorically, what use does a computer have for empathy? (Or any emotion, for that matter? Just as the brain is not independent of the body, so its calculating prefrontal cortex is not independent of its tempestuous limbic system.)

          *Though we do not, of course, all have compassion.

          • To put it rhetorically, what use does a computer have for empathy?

            To understand humans and other computers in order to better interact with them in achieving its purposes.

            Has anyone done a good sf version of a superintelligent android, inhumanly good at interacting with, understanding, and manipulating humans?

          • bintchaos says:

            Some examples from Scifi
            KSR’s 2312

            Has anyone done a good sf version of a superintelligent android, inhumanly good at interacting with, understanding, and manipulating humans?


            The year 2312, you’re right to assume, can be seen as the seeds of a new era. We have a grumpy heroine, Swan Er Hong, a 135-year-old artist who lives on the rolling dawn city of Terminator, on Mercury. (The tracks of the city run all around the planet; the heat of the rising sun swells the tracks, pushing Terminator always toward the black side of sunrise.) When her grandmother dies, Swan becomes involved, unhappily, with the political work her grandmother left behind. Space is reckless; some parts of the solar system are not much more than slave camps, while others are utopias. Scattered throughout our solar system are machines that look human, off-kilter creatures trying valiantly to pass Turing tests.


            Also many Humans have qubes, implanted AIs. Swan’s is named Pauline, the original name of the starship AI in Aurora. There are also androids that are pretty expert at manipulating humans and passing for “real”.

            Blade Runner

            Tyrell: Is this to be an empathy test? Capillary dilation of the so-called blush response? Fluctuation of the pupil. Involuntary dilation of the iris…
            Deckard: We call it Voight-Kampff for short.


            One could say the skin jobs had no use for family pictures either.

            I would say Sarah, from UNSONG, is pretty good.
            😉
            EDIT:
            I really like this quote from KSR; it says what I mean when I say sci-fi is how we test-drive the future before it gets here.

            Science fiction can be regarded as a kind of future-scenarios modeling, in which some course of history is pursued as a thought experiment, starting from now and moving some distance off into the future.


            And the starship in Aurora is highly empathetic, understanding, and manipulative of humans. It just has a starship body instead of an android body.

          • And the starship in Aurora is highly empathetic, understanding, and manipulative of humans.

            I haven’t read the book–perhaps I should. Is the starship better at manipulating humans than a human who is good at manipulating humans would be?

          • bintchaos says:

            I would say yes, especially during a crisis, but you might have a different opinion on reading the book.

          • dansimonicouldbewrong says:

            Ex Machina?

            (By the way, I’m annoyed at being seemingly the only person to have noticed that it’s a dystopian SF retelling of “The Tempest”.)

      • raj says:

        Evolution is optimization in the space of genotype, and is constrained in how it can explore that space.

        Technology is also optimization, and though our raw search capacity is much smaller than evolution’s, we have some awesome force multipliers that let us explore the space better: the scientific method, interchangeable parts, standardized units and measures, and so on. More generally, the ability to “go meta” and point these optimizations back on themselves.

        I mean, have we ever engineered something as complex and multi-faceted as a single neuron?

        The smartphone in your pocket and the globe spanning telecommunications infrastructure behind it. The sheer physical feat of escaping our local gravity well. New kinds of matter that otherwise can’t be found in nature.

        And, consider how long we’ve been at it compared to evolution.

        • LukeReeshus says:

          The smartphone in your pocket and the globe spanning telecommunications infrastructure behind it. The sheer physical feat of escaping our local gravity well. New kinds of matter that otherwise can’t be found in nature.

          I think you’re confusing complexity and scale. A smartphone is only more “complex” than an old Gateway computer in the sense that it has more transistors packed into a smaller space.

          Maybe it comes down to a matter of taste, but I find the complexity of proteins and their myriad structures far more incredible than any silicon chip.

          • thevoiceofthevoid says:

            Yes, the basic building blocks of a smartphone are still transistors—but it’s how they’re arranged that gives rise to the complexity of a smartphone. Complex systems can be built out of simple pieces.
            Furthermore, “complexity” doesn’t automatically give you intelligence. Despite how complex the single neuron is with its proteins and receptors and so forth, good luck getting it to DO anything alone.
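
            A toy sketch of that point (my own illustration, not anything from the article; the function names are just for clarity): every logic function in a chip can be assembled from one dumb primitive such as a NAND gate, so the interesting behavior lives in the arrangement, not in the parts.

                # Toy sketch: richer logic built from a single simple primitive (NAND).
                # The arrangement, not the primitive, is where the "complexity" lives.

                def nand(a, b):
                    return 1 - (a & b)

                def not_(a):
                    return nand(a, a)

                def and_(a, b):
                    return not_(nand(a, b))

                def or_(a, b):
                    return nand(not_(a), not_(b))

                def xor(a, b):
                    return and_(or_(a, b), nand(a, b))

                def half_adder(a, b):
                    # single-bit addition: returns (sum, carry)
                    return xor(a, b), and_(a, b)

                for a in (0, 1):
                    for b in (0, 1):
                        s, c = half_adder(a, b)
                        print(a, "+", b, "-> sum", s, "carry", c)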

      • vaniver says:

        Thoughts?

        It’s not clear to me why you think building the body would be the hard part. Like, with self-driving cars, the car is the easy bit!

      • thevoiceofthevoid says:

        Need an intelligence necessarily be made of neurons, or is it just an evolutionary coincidence that the first ones were?

        • LukeReeshus says:

          This is an interesting question. And an incisive one: it drives at the root of the skepticism I’m trying to express.

          I kind of cut myself short earlier, with the postulation that brains cannot become minds without bodies. Many commenters pointed out, correctly, that this problem is “easily” resolved by engineering bodies for our engineered brains to integrate with and utilize. (I actually think this aspect of smart cars is what’s going to make them the cutting edge of AI over the next few decades, especially in regards to ethical reasoning—like, what’s more valuable in a given situation, the lives of passengers or pedestrians? Still though, in the end, all they’ll do is drive places.)

          No, in bringing up the vital link that bodies have with intelligence, I was trying to approach a deeper point: the link that intelligence has with organic existence itself. I alluded to it by citing the evolutionary process which produced our intelligence, but I didn’t spell it out. And, come to think of it, the reason I didn’t spell it out is because I lack the philosophical chops to do so. It’s not a well-developed idea in my mind.

          I’ll give it a shot though. The biologist Matt Ridley wrote (sorry, I no longer have the book where I read this, and I can’t find it online, so I’ll have to paraphrase) of the advent of life in these terms: before it, physics and chemistry unfolded in conventional ways. Asteroids collided and repelled, atoms bonded and broke apart, stars converted mass to energy. But then something strange happened. By some quirk of Nature, a strand of molecules began copying itself. Whereas all other molecules up to that point only reacted with their surroundings, and were thus quite indifferent to their fated configurations, these molecules were different. These molecules had interests. (I am, of course, anthropomorphizing them, as all biologists do when talking about genes.) These molecules were replicators. And replicate they would, across the face of the Earth. For what else could they do?

          For some reason that concept always stuck with me, that of interests. Organisms have interests because, at bottom, they are defined by an interest—to propagate the genes that built them. Thus they are ultimately “selfish,” as Richard Dawkins put it. So the question as I see it is, can intelligence—real, general, dangerous intelligence—arise by any other way? Can we, while engineering a species of programmed slaves, create true, selfish intelligence, which will put its existence before our own?

          Matt Damon’s character in Interstellar touched on this when he said that robots aren’t good at improvising because they lack the fear of death. Indeed, the flip side to real intelligence, the intelligence we’re talking about, is not stupidity, but destruction.

          So I guess if we programmed AI to conserve itself, and it took such an injunction to the extreme, like Skynet launching its war on humanity as soon as it became self-aware…

          Then there are the Replicators in Stargate: SG-1—a “race” of metal spiders, more or less, that seem to operate on the same principle as life…

          Anyway, to bring it back to the original question: Need an intelligence necessarily be made of neurons, or is it just an evolutionary coincidence that the first ones were?

          I don’t have an answer, but I think I have a question that advances the plot, and is at least subject to empirical investigation: can silicon-based molecules replicate themselves?

          • AnthonyC says:

            Organisms have interests because, at bottom, they are defined by an interest—to propagate the genes that built them. Thus they are ultimately “selfish,” as Richard Dawkins put it. So the question as I see it is, can intelligence—real, general, dangerous intelligence—arise by any other way? Can we, while engineering a species of programmed slaves, create true, selfish intelligence, which will put its existence before our own?

            This is, of course, why no one has ever succeeded in creating a computer virus that resists deletion or being shut down.

            Sorry to be flippant, but “fear of death,” at least in the sense that such an interest applies to early biological replicators, does not appear to be difficult to achieve in digital systems.
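
            To make the flippant point concrete, here is a minimal sketch (hypothetical file name, nothing exotic) of the ordinary supervisor pattern that already gives software the crudest form of “not staying dead”: a loop that simply restarts a worker whenever it exits.

                # Minimal supervisor/watchdog sketch: restart a worker process whenever
                # it dies. This is roughly what init systems do on purpose every day,
                # and it is already a crude digital analogue of "resisting shutdown".

                import subprocess
                import time

                WORKER_CMD = ["python3", "worker.py"]  # hypothetical worker script

                def supervise():
                    while True:
                        proc = subprocess.Popen(WORKER_CMD)
                        proc.wait()          # block until the worker exits or is killed
                        print("worker exited; restarting in 1s")
                        time.sleep(1)

                if __name__ == "__main__":
                    supervise()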

          • LukeReeshus says:

            This is, of course, why no one has ever succeeded in creating a computer virus that resists deletion or being shut down.

            Touché.

        • LukeReeshus says:

          Addendum: I’m aware that most intelligent worriers don’t see the risk of AI in terms of the Terminator movies. Rather, they imagine that once a certain type of AI is sufficiently advanced, the slightest divergence between its priorities and ours could lead to catastrophe. I fail to see how this could happen though, as long as its priorities don’t transcend the dictum, “Do what humans command.” (There’s room for error, of course. An AI could execute the command “prevent death-by-cancer” by prematurely euthanizing anyone who gets a tumor. But, surely, we would be able to correct such a mistake.)

          Going back to that “Do what humans command”… I don’t want to get too philosophical, because it’s late and I’m tired, but that whole created-slaves-rebelling-against-their-creator paradigm—which, as far as I can tell, is what we’re talking about… That fundamental idea is why monotheism never made sense to me. Nay, I’ll go stronger: that fundamental idea is why monotheism doesn’t make sense (apologies to religious readers). And neither does AI rebellion.

          • James Miller says:

            An unexpectedly powerful AI is told by its hedge fund to find and execute the most profitable trade it can. The AI buys an option where it gets lots of money today, and has to pay an amount of money tomorrow proportional to tomorrow’s stock market valuation. Tonight the AI releases a virus that kills everyone and consequently makes its trade extremely profitable.

            A true story: When my son was young I divided his toys into two piles and said let’s race to see who can put his pile away first. My son agreed, and then immediately took my pile and threw it down our stairs.

          • Eli says:

            Please do not ever use verbal, genie-based metaphors for AI. They confuse the issue far more than they illuminate it.

          • James Miller says:

            @Eli

            How has my example confused anything? We don’t have a mathematical model of how a computer super-intelligence would behave other than economic utility-function analysis, which still leaves massive degrees of freedom, so verbal analysis is all we have, even if it’s wrapped up in EconMath.

          • Have you considered using different techniques for different classes of AI?

          • akc09 says:

            @James

            “Tonight the AI releases a virus that kills everyone and consequently makes its trade extremely profitable.”

            This is the part of the Unfriendly AI Scenarios that I can never quite grasp, the part where they suddenly kill people in the physical world. Do these hypothetical AIs have access to fully-automated chemistry labs? The water supply? Can they build things in the physical world? Can they move raw materials from one part of the country to another?

            Maybe it’s just a failure of imagination on my part, but my first thought is always, how exactly are these hedge-fund-AIs disseminating chemical weapons?

          • James Miller says:

            @akc09

            >how exactly are these hedge-fund-AIs disseminating chemical weapons?

            They could (1) have the ability to manipulate physical objects, (2) hack into the production process of some company to have it make something other than what it intended, (3) find a few people who want to destroy the world and give them lots of money and a few hints, (4) figure out how to cause a few well-placed people to go crazy with the desire to exterminate mankind, or (5) hire several groups of outsiders to do things that, combined, will exterminate mankind but which look innocent to each individual group. (Since the AI will be a lot smarter than me, I figure it will come up with much better stuff.)

          • nimim.k.m. says:

            They could (1) have the ability to manipulate physical objects, (2) hack into the production process of some company to have it make something other than what it intended, (3) find a few people who want to destroy the world and give them lots of money and a few hints, (4) figure out how to cause a few well-placed people to go crazy with the desire to exterminate mankind, or (5) hire several groups of outsiders to do things that, combined, will exterminate mankind but which look innocent to each individual group. (Since the AI will be a lot smarter than me, I figure it will come up with much better stuff.)

            All of that hinges on the assumption that being very smart scales amazingly well across a wide variety of domains, and won’t hit diminishing returns fairly soon from the friction of dealing with the collective stupidity of humankind, and from the AI’s need to test its internal models of reality against physical reality to get those models right.

            Why would a hedge-fund trading AI have the capability to understand how humans work (at both the individual and the societal level), how biochemistry works, how industrial production processes work, and so on, without its creators noticing?

            (By the way, I’d like some day to get input from someone who actually manages industrial production plants on what kind of process rolling out a new product usually is. From what I know of relatives who work in somewhat relevant industries, it’s not exactly as simple as changing some command parameters in source code, especially if the need is to produce something totally different and more groundbreaking than what the relevant industrial processes are currently doing.)

            Unless we are talking about an AI that suddenly manages a huge jump in capability (without its developers finding out), it’s more reasonable to look at the timescales and methods necessary for the very smartest humans to gain enough influence on society for it even to be perceptible: in politics, that still takes years. In business, also years, and often luck. And they still must compete against several other such players on the field.

            Suppose a sufficiently intelligent AI could say just the right words to someone to radicalize them and destroy the world. In reality, it’s called radicalization and takes a significant amount of time, often with those words said by somebody you trust and/or consider an authority, and susceptible people who are successfully radicalized (but do not have the backing of a nation-state-level effort) seem to manage only fairly localized damage, even at their best.

            And this is still assuming that the AI has found a way to achieve its targets without the global prosperity and economic value created by humankind.

            Now, I’m sure that people who have spent years arguing about AI-doomsday scenarios have a counter-point to anything I’d say; moreover, I lose arguments more often than I’d like. But I doubt that successful model updating happens in the course of point-counterpoint argumentation trees, especially against people who have spent ages crafting responses to start such trees in the optimal way.

            Instead, I simply observe how plausible instant world domination actually seems, given the complexity of our world.

          • nimim.k.m. says:

            Re: my previous reply. Guess what, Bugmaster here seems to be able to argue for a position that’s quite similar to my own, but with better arguments.

      • Bugmaster says:

        FWIW, I obviously agree with your ultimate conclusion regarding UFAI, but I disagree with your reasoning.

        [The brain] is, rather, the hub of a nervous system that is intimately connected to, and interactive with, the entire body. This distinction is important.

        I will grant you that it’s important in the trivial sense: a superintelligent brain that is not connected to any sensors or effectors would be pretty useless. It wouldn’t be able to do anything — by definition — except for thinking really hard, whooptie doo. However, you seem to be saying (assuming I’m reading you correctly) that there’s something uniquely special about the human brain, and that no other hardware could be as intelligent. You ask,

        I mean, have we ever engineered something as complex and multi-faceted as a single neuron?

        Well, no. We have probably never engineered anything as complex and multi-faceted as a bird’s wing, either; and yet, we have airplanes and helicopters. Similarly, AlphaGo doesn’t think in the same way as humans do, but it can regularly beat them at their own ancient game.

        If you are indeed claiming that human brains are the only possible hardware in the Universe that can produce general intelligence, then you need to provide at least some evidence why that’s the case — other than the obvious fact that neurons are different from transistors. Propellers are different from feathers, too, but helicopters can still fly; submarines can swim; cameras can see; and so on.

        • LukeReeshus says:

          We have probably never engineered anything as complex and multi-faceted as a bird’s wing, either; and yet, we have airplanes and helicopters. Similarly, AlphaGo doesn’t think in the same way as humans do, but it can regularly beat them at their own ancient game.

          I think this analogy actually works for my point of view. While engineered wings are much cruder than those of a bird, they operate via the same basic physical phenomenon, lift. Indeed, the crudest engineered wings did too. Our improvements in aviation since the Wright brothers have been narrow ones, mainly in the realm of propulsion (propellers—>jet engines).

          Likewise, I’m sure we will continue to see improvements in narrow AI (chess—>AlphaGo). But will this lead to general AI?

          (Actually, it might, if all you need to do to get general AI is simply yoke enough narrow AIs together. As an analogy to the brain, with its various regions and functions, this might actually be possible…)

          If you are indeed claiming that [] brains are the only possible hardware in the Universe that can produce general intelligence, then you need to provide at least some evidence why that’s the case — other than the obvious fact that neurons are different from transistors.

          Well, I certainly wasn’t claiming that. I was suggesting a line of reasoning for why that might be the case. Mainly because, frankly, I find the reasoning suggesting that general AI is inevitable—that all it’ll take is sufficient complexity, and that we’re reliably approaching such—to be kind of lazy.

          This may all be a moot point though, since AI will probably not have to become general to seriously destabilize our world.

  7. Jiro says:

    This is good, but it’s about one percent of the amount that we spend on simulated online farming games. This isn’t epistemic caution. It’s insanity.

    Unless all other types of precautions that you don’t complain about are large in comparison to farming games, this is an isolated demand for rigor.

    And if you think that other types of precautions than AI are large in comparison to farming games, you really should say so and provide some numbers.

    • kokotajlod@gmail.com says:

      I can’t speak for Scott, but I think I have similar views to him. Speaking for myself:

      –I think we spend billions of dollars per year fighting global warming and I think we aren’t spending enough
      –I think we spend billions of dollars per year preventing & prepping for pandemics and we aren’t spending enough
      –I think we spend billions of dollars per year preventing nuclear war and we aren’t spending enough

      (these estimates are give-or-take an order of magnitude)

      Ditto for the amount of research-hours and policy-wonk-hours spent worrying about the above risks.

      AI risk is… within an order of magnitude of the seriousness x likelihood of those risks, I think. (It’s less likely but more serious.) Yet it is two orders of magnitude less funded/worried about.

      So I am being perfectly consistent when I advocate for more funding/attention to AI risk. Those other risks exist and need more funding; AI risk needs it even more.
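
      A back-of-the-envelope version of that comparison, with the “billions” above treated as rough placeholders and the ~$9 million/year figure taken from the post (only the ratios matter):

          # Rough sketch of the orders-of-magnitude claim above. The per-risk spending
          # figures are placeholders ("billions per year"); the AI-safety figure is the
          # ~$9M/year quoted in the post. Only the ratios are of interest.

          import math

          spending_per_year = {
              "climate change": 5e9,   # assumed placeholder
              "pandemics":      5e9,   # assumed placeholder
              "nuclear war":    5e9,   # assumed placeholder
              "AI risk":        9e6,   # figure quoted in the post
          }

          baseline = spending_per_year["climate change"]
          for risk, dollars in spending_per_year.items():
              gap = math.log10(baseline / dollars)
              print(f"{risk:>15}: ${dollars:>14,.0f}/yr ({gap:.1f} orders of magnitude below baseline)")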

      • Luke G says:

        The other variable to consider, though, is the effectiveness of the funding. With global warming, pandemics, and nuclear war, we have a pretty good idea where to spend money to help prevent the disasters. With AI risk, it’s much less clear. (Personally, the AI risk research I’ve read didn’t appear practical at all.) You’d need to have an argument that the benefit per dollar of AI risk research is higher than other existential risks we could be investing in.

        • 6jfvkd8lu7cc says:

          I dunno, that paper where they hide voice commands to phones inside something that humans hear as pure noise sounds pretty practical to me (and the attached MP3 sounds pretty noisy, though sometimes still borderline understandable).

          This is a different kind of AI risk research, of course; on the bright side, it is useful both right now and for understanding how an automated decision system can go wrong.

      • Bugmaster says:

        Just to underscore what Luke G said:

        How much are we spending to prevent a micro black hole from piercing and possibly consuming the Earth ? Should we be spending more ? Why or why not ?

  8. Steve Sailer says:

    “I worry there’s a general undersupply of meta-contrarianism.”

    Over the years, it has occurred to me that people tend to be motivated by money, power, and prestige, just like mass market Hollywood movies warned.

    My personal predilection was for contrarianism, but Michael Kinsley got there first, so all that was left for me was meta-contrarianism, which looks like vulgar Brian de Palma movies from the 1970s-1980s.

  9. Bugmaster says:

    For what it’s worth, the next disruptive technology that I’m looking forward to isn’t AI (although that would be cool too) or the Hyperloop; it’s a portable battery with 100x (or heck, even 10x) capacity of current lithium-based batteries. I have no idea whether such a thing is possible (and, let’s face it, it probably isn’t); but if someone were to somehow invent it, this would revolutionize our world at least as much as toilet paper did, and probably more so.

    • bintchaos says:

      It’s probably going to be the Cambrian Explosion in Robotics.

      The most important skill at the moment of the ‘Cambrian Explosion of robotics’ is adaptability

      Amy Webb, futurist and CEO at the Future Today Institute, commented, “Gill Pratt, a former program manager of the Defense Advanced Research Projects Agency (DARPA), recently warned of a Cambrian Explosion of robotics. About 500 million years ago, Earth experienced its first Cambrian Explosion – a period of rapid cellular evolution and diversification that resulted in the foundation of life as we know it today. We are clearly in the dawn of a new age, one that is marked not just by advanced machines but, rather, machines that are starting to learn how to think. Soon, those machines that can think will augment humankind, helping to unlock our creative and industrial potential. Some of the workforce will find itself displaced by automation. That includes anyone whose primary job functions are transactional (bank tellers, drivers, mortgage brokers). However, there are many fields that will begin to work alongside smart machines: doctors, journalists, teachers. The most important skill of any future worker will be adaptability. This current Cambrian Explosion of machines will mean diversification in our systems, our interfaces, our code. Workers who have the temperament and fortitude to quickly learn new menu screens, who can find information quickly, and the like will fare well. I do not see the wide-scale emergence of training programs during the next 10 years due to the emergence of smart machines alone. If there are unanticipated external events – environmental disasters, new pandemics and the like – that could devastate a country’s economy and significantly impact its workforce, that might catalyze the development of online learning opportunities.”

  10. MawBTS says:

    As a society, we spend about $9 million yearly looking into AI safety

    Err…Is this an inadequate amount? Do the researchers need more? How do we know what an adequate amount of money to spend on AI safety is?

    SETI cost $2.5m/yr and failed to find a single extraterrestrial. I don’t think giving them 10x or 100x more money would have changed the picture much: the bottleneck wasn’t money, it was that there is (probably) nothing out there to find.

    I don’t know if an unfriendly AI is likely. I do know that having more money is better than having less money. You seemed to be saying “we need more money to even evaluate whether there’s a risk”, which is fair…but it also sounds like a conversation we could still be having even if we spent $90m (or $900m) a year on AI safety. I don’t know where the line is, and there really needs to be one.

    • Bugmaster says:

      Well, theoretically speaking, if I were convinced that UFAI is likely to happen, and that it will result in the extinction of the entire human race; then I’d campaign for spending as much money on preventing it as possible. Ideally, we’d cut expenses to all other non-critical fields, and spend just enough money to keep the current population of Earth a). alive, and b). working on solving UFAI. Yes, such a world would be a pretty bleak place to live in — but it’s better than total extinction, right ?

      • brentdax says:

        Nicely done! You really pounded the straw out of that man there.

        • Bugmaster says:

          Ok, well, now I’m confused. Forget AI; let’s say that there’s an asteroid heading toward the Earth, and that we’re 99.99% sure that it will hit about 20 years from now, wiping out most human life. How much would you want to spend on deflecting it ?

          • caryatis says:

            MawBTS’s point (which I agree with) was that we don’t know how much we ought to spend on deflecting a risk until we know how likely the risk is. This is not a world in which we all agree that the asteroid is 99.99% likely to hit, or “likely” at all.

          • Bugmaster says:

            Firstly, as far as I can tell, Scott is about 99.99% convinced that UFAI is coming (though his timeframe might have a wider range). I could be wrong about this, though.

            Secondly, I disagree that we don’t know how likely the risk of UFAI is. I think it’s about as likely as an alien invasion, which is to say, not likely at all.

            But perhaps more importantly, I’m not sure what you propose we do regarding unknown risks. Let’s assume that we have no idea how likely UFAI is; how much would you suggest spending on it ? Presumably, we should spend exactly as much on other unknown existential risks — is that not so ?

          • onyomi says:

            I could be wrong, but I understand Scott’s position as more like this:

            If there’s something I think has a 20% chance of causing the extinction of all human life in the next 100 years and we are currently spending $9 million, worldwide, attempting to address that issue, I may not be sure how much is the right amount of time, money, and effort to put into the issue, but I’m 99.99% sure $9 million annually, worldwide, is not enough.

          • John Schilling says:

            Ok, well, now I’m confused. Forget AI; let’s say that there’s an asteroid heading toward the Earth, and that we’re 99.99% sure that it will hit about 20 years from now, wiping out most human life. How much would you want to spend on deflecting it ?

            If I lived in Ancient Rome, or even Renaissance Europe, none whatsoever. I want a lot of money spent on a really great twenty-year party, with a side order of whatever sacrifices will win favor with the Gods and some thought to positioning my team to be in a position of power in the 0.01% of futures where the asteroid doesn’t hit. And the ~5% of futures where there never was an asteroid, because con artists are real and haven’t we had numerous lectures and essays on overconfidence here before?

            More generally, if you don’t have a specific plan on what to spend the money on, you don’t get to spend the money. Saying “At least the first step is to STUDY THE PROBLEM!” justifies a few academics’ salaries, and they should be focused on coming up with better risk evaluations and more organized study plans. Quite possibly what they will come up with is, “we live in Ancient Rome and have no clue how to stop asteroids, sorry”.

          • onyomi says:

            if you don’t have a specific plan on what to spend the money on, you don’t get to spend the money

            Academics are very good at thinking up specific plans to spend money–much more specific than just “we’ve got to study the problem more to figure out what we need to work on.” Much of such money will be wasted, of course, but not all. But if you build it (the grant foundation), they will come (people with a wide variety of specific ideas for working on the thing you want them to work on).

            Which, for me, raises the question: if very wealthy individuals like Gates and Musk are actually concerned about this, why haven’t they established the annual “dedicated AI Safety Research fund/prize,” where ten million dollars goes to fund the best proposed AI safety project, or the ten best $1 million AI safety projects? Apparently it would already be a doubling of the budget relative to now.

          • John Schilling says:

            Academics are very good at thinking up specific plans to spend money–much more specific than just “we’ve got to study the problem more to figure out what we need to work on.” Much of such money will be wasted, of course, but not all.

            Why not all? Do we really need to come up with a list of research problems where literally all the money was wasted, at least insofar as the actual goal of the project is concerned? Or more than all of it?

            If and when academics studying “Friendly AI” research come up with a specific plan to spend money, we can evaluate that plan and decide whether it’s worth spending money on. If the idea is that we allocate a billion dollars and say, “come up with a specific plan to spend all this money, then do it”, then no. That’s the usual recipe for plans that waste literally all of the money you gave them, and often more than all of it.

          • onyomi says:

            if the idea is that we allocate a billion dollars and say, “come up with a specific plan to spend all this money, then do it”, then no.

            I’m not saying “give the money to some university or government agency and tell them to figure out how to spend it,” I’m saying “set aside the money for this purpose and selectively award it to those who either have already produced the best new research, or who produce the most credible, detailed plan for doing such research.”

          • John Schilling says:

            If you don’t have a plan, you don’t know how much money to set aside. And the “most credible, detailed plan for doing research”, is going to be the one put together by the most skilled of the charlatans attracted by the offer of a free gigabuck.

            Committing a billion dollars to something you don’t know enough about to already have a plan, is 100% signaling about how very, very much you care about a goal and 0% actual progress towards that goal. It is blood in the water for the sharks, and it will cost you far more than a billion dollars in the end with nothing to show for it.

            See, e.g., Lyndon Johnson’s War on Poverty, Richard Nixon’s War on Drugs, or Ronald Reagan’s Strategic Defense Initiative. But at least we know how very much they all cared. Enough to reelect two of them, at least.

          • onyomi says:

            @John Schilling

            Your argument proves way too much: it pretty much invalidates the entire idea of research funding on the theory that the money always goes to charlatans.

            Yes, in general, grant money goes to people who are good at writing grant proposals rather than people who have worthwhile projects to do. Fortunately, there’s a strong enough correlation between having a worthwhile project and ability to write a convincing proposal that it isn’t a total waste.

            I presume you don’t intend the claim, “all research funding is useless and will be 100% captured by charlatans good at writing research proposals but not at actually doing anything,” but rather the weaker claim “research funding with vague goals is likely to be captured primarily by charlatans.”

            But actually, I think the goal of “figure out a way to keep AI from killing us all” is pretty tangible and specific, and if someone like Bill Gates thinks AI is a serious risk, he probably also has at least some vague notion of what a good AI safety research proposal or innovation might look like, and of what a charlatan’s pitch looks like. He can direct his grant committee to give money to the former and not the latter. Of course, Gates could be wrong about what constitutes useful AI research, but that’s a problem with any type of threat management: you could be wrong about the nature of the threats you’ll face in the future; that doesn’t mean preparing is pointless.

          • John Schilling says:

            Your argument proves way too much: it pretty much invalidates the entire idea of research funding on the theory that the money always goes to charlatans.

            At this point, I’m not sure most of the money isn’t going to charlatans, or close enough to charlatans as makes no difference. Fusion power, flying cars, moon bases, cures for cancer, and yes, AI itself, always seem to be twenty years in the future no matter how many billions we spend. Granted, the people doing the research are probably more true believers than cynical con artists, but so what?

            I presume you don’t intend the claim, “all research funding is useless and will be 100% captured by charlatans good at writing research proposals but not at actually doing anything,” but rather the weaker claim “research funding with vague goals is likely to be captured primarily by charlatans.”

            I make claims specifically about the folly of UNSTRUCTURED, BILLION DOLLAR research funding, and you assert that I am condemning research funding generally?

            I can’t see this as anything but a deliberate lack of reading comprehension on your part, but one more time. One LAST time.

            Research funding that STARTS WITH A COMMITMENT OF A BILLION DOLLARS, and has “figure out a plan” as step three or four, pretty much always results in all of that original billion dollars, and usually many times more, being squandered by charlatans with nothing to show for it except a politician giving speeches about how awesome he is for spending ONE BILLION DOLLARS on something REALLY IMPORTANT.

            Research funding that does any damn good at all, stays at the level of a handful of professors and a university laboratory, or the like, unless and until e.g. Enrico Fermi comes back and says “here are my preliminary results on nuclear chain reactions, and here is some specific planning for how we can build on this to vaporize those pesky Nazis”.

            And no, multiplying the handful of professors/labs a hundred times over so you can spend ONE BILLION DOLLARS, isn’t a hundred times better. The useful results are most likely going to come from one of the professors who wasn’t actually trying to solve the problem you are interested in.

            Now, can we please acknowledge the difference between “research funding” and billions of dollars?

            But actually, I think the goal of “figure out a way to keep AI from killing us all” is pretty tangible and specific,

            Really? That’s your idea of a specific plan? What’s your idea of a vague plan? By your definition, is anything beyond “be good!” a specific plan?

          • At this point, I’m not sure most of the money isn’t going to charlatans, or close enough to charlatans as makes no difference.

            Have you read Terence Kealey’s book The Economic Laws of Scientific Research? I haven’t read it, although I probably will, but I’ve heard him speak on it. He offers several examples of natural experiments, situations where government funding of some area of research increased a lot very rapidly. By his report there was no corresponding increase in progress. He also offers an interesting argument for why it pays private firms to subsidize some basic research.

          • alchemy29 says:

            @Bugmaster

            Firstly, as far as I can tell, Scott is about 99.99% convinced that UFAI is coming (though his timeframe might have a wider range). I could be wrong about this, though.

            Where are you getting this from? If he hasn’t explicitly claimed this, then ask for clarification, but don’t put numbers in his mouth (and don’t keep repeating those numbers).

          • Bugmaster says:

            @alchemy29:
            As I said above, I am not claiming that Scott actually said, “I, Scott, am 99.99% convinced that UFAI is possible”; I am merely saying that, based on my interpretation of his writings, he seems to believe this. I could be wrong, but by no means am I trying to strawman him.

      • bintchaos says:

        You are putting all your eggs in one basket, so to speak; what about the risk from grey goo, climate collapse, or a genetically engineered superplague like Captain Trips?
        And maybe…the real danger is the collapse of capitalism.
        The slasher is calling from inside the house.

        The question isn’t how to train people for nonexistent jobs, it’s how to share the wealth in a world where we don’t need most people to work.
        Nathaniel Borenstein

        Nathaniel Borenstein, chief scientist at Mimecast, replied, “I challenge the premise of this question [that humans will have to be trained for future jobs]. The ‘jobs of the future’ are likely to be performed by robots. The question isn’t how to train people for nonexistent jobs, it’s how to share the wealth in a world where we don’t need most people to work.”

        Paul Davis, a director based in Australia, predicted, “Whilst such programs will be developed and rolled out on a large scale, I question their overall effectiveness. Algorithms, automation and robotics will result in capital no longer needing labor to progress the economic agenda. Labor becomes, in many ways, surplus to economic requirements. This … shift will dramatically transform the notion of economic growth and significantly disrupt social contracts; labor’s bargaining position will be dramatically weakened. The nature of this change may require the world to shift to a ‘Post Economic Growth’ model to avoid societal dislocation and disruption.”

        John Sniadowski, a systems architect, replied, “The skill sets which could have been taught will be superseded by AI and other robotic technology. By the time the training programs are widely available, the required skills will no longer be required. The whole emphasis of training must now be directed towards personal life skills development rather than the traditional working career-based approach. There is also the massive sociological economic impact of general automation and AI that must be addressed to redistribute wealth and focus life skills at lifelong learning.”

        Tom Sommerville, agile coach, wrote, “Our greatest economic challenges over the next decade will be climate change and the wholesale loss of most jobs to automation. We urgently need to explore how to distribute the increasing wealth of complex goods and services our civilization produces to a populace that will be increasingly jobless in the traditional sense. The current trend of concentrating wealth in the hands of a diminishing number of ultra-rich individuals is unsustainable. All of this while dealing with the destabilizing effects of climate change and the adaptations necessary to mitigate its worst impacts.”

        Some of these experts projected further out into the future, imagining a world where the machines themselves learn and overtake core human emotional and cognitive capacities.

        Timothy C. Mack, managing principal at AAI Foresight, said, “In the area of skill-building, the wild card is the degree to which machine learning begins to supplant social, creative and emotive skill sets.”

    • AlphaCeph says:

      > SETI cost $2.5m/yr and failed to find a single extraterrestrial. I don’t think giving them 10x or 100x more money would have changed the picture much: the bottleneck wasn’t money, it was that there is (probably) nothing out there to find.

      This is IMO a really bad argument.

      The point of SETI was not to find ETI, but to settle the question one way or another. A negative result is still a worthwhile and useful result.

      SETI have placed useful bounds (evidence against the existence of ETI), and the amount of evidence they can place is indeed an increasing function of how much money you give them.

      > I don’t know where the line is, and there really needs to be one.

      Well, it seems reasonable to have perhaps 10,000 full-time equivalent researchers and support staff working on AI risks worldwide, which is something like a few billion dollars to 10 billion dollars, depending on how efficient your organisation is and how expensive various overheads are.
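
      As a sanity check on that range (the fully loaded cost per researcher below is my assumption, not anyone’s actual budget):

          # Back-of-the-envelope check on the "few billion to $10bn" figure above.
          # The fully loaded cost per full-time-equivalent (FTE) is an assumption.

          fte_count = 10_000                                         # researchers plus support staff
          cost_per_fte_low, cost_per_fte_high = 300_000, 1_000_000   # assumed $/year, fully loaded

          low, high = fte_count * cost_per_fte_low, fte_count * cost_per_fte_high
          print(f"${low / 1e9:.0f}bn to ${high / 1e9:.0f}bn per year")  # -> $3bn to $10bn per year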

      The Manhattan project is a useful comparison point.

      If one tried to go substantially above this, I suspect that talent constraints would lead to diminishing returns.

      Given that $10bn is such a small fraction of the global economy, and that the problem is so important, it makes no sense to go below the talent-constrained maximum.

      The biggest challenge is that this is 1000x larger than where we are currently at. We have to find ways to scale this by a factor of 1000 without substantially sacrificing quality of work. This is a huge challenge.

  11. Jugemu says:

    Although I agree with your overall point, I think the argument you present here proves too much – you could replace AI with anything threatening-sounding. For example, the aforementioned global snake infestation. Shouldn’t we be spending more to research the possibility of such a horror? In other words, you still need to convince people (as has been attempted elsewhere) that AI is a non-negligible threat. The current argument seems to assume that they agree that it is and are just overestimating how much is currently being spent on it, but they might not think it’s even worth the $9 million.

    • caryatis says:

      My thoughts exactly. Why exactly should I be convinced that we should spend more on AI research than on online farming games? That comparison only moves people who ALREADY agree that online farming games are unimportant and that AI research is very important.

      Well, maybe the post was meant to be preaching to the choir.

      • alchemy29 says:

        Scott has a really annoying tendency to snipe at supposedly inferior forms of money spending in order to convince people to spend money on his cause. He did this when promoting EA, but has mostly backed off as far as I can tell. He now likes to do it when promoting friendly AI research. I find it to be a rather nasty way of grabbing for status.

        “Oh, look at those idiots spending money on X, when they could be spending it on noble cause Y”

        Much of the civilized world has realized that it is more effective to promote your own ideas to secure support. Tearing others down to advance your own status results in less support for everyone – a tragedy of the commons. If every charity did this, then people would spend less money on charity. If scientists did this, then science funding would go down.

        • pdbarnlsey says:

          While it may not be Scott’s intention in the quoted passage, farm games don’t need to be a stupid waste of money in order for the argument to work; they just need to be kind of trivial, so that we’re not talking about cutting schools and hospitals.

          You could make the comparison with any inessential class of good – poker tournaments, lounge music covers of popular songs and berry-flavoured yogurt maybe…

          I feel like most people, even those who love them, would agree that farm games are kind of trivial, in the grand scheme of things.

          • alchemy29 says:

            Scott has made the same argument for things that are not trivial in the past (such as the pronoun disambiguation problem in AI). Perhaps it is unfair of me to dredge them up, but the comparison is annoying regardless.

            All entertainment and most services are trivial in some sense. I do not think that is why he chose that example. There is a reason Scott singled out online games and not berry-flavored yogurt, even though they are equally trivial. Berry-flavored yogurt is status-neutral, whereas online games are already low status. Much easier to try to advance the status of your own group by pushing down easy targets. It’s a textbook political move.

            If there is an argument there, I don’t see it. Regardless of how trivial or worthless you think online games are, it does not follow that the services MIRI provides are valuable – you need to make that case separately. As far as I can tell, what Scott is doing is pure rhetoric and cheerleading.

            I don’t mean to make a huge deal out of this. It’s on average one or two sentences out of every post on AI. But it just strikes me as both unpleasant and intellectually lazy.

          • pdbarnlsey says:

            alchemy,

            I likewise don’t think it’s a big deal, and it’s possible that Scott was doing what you accuse him of.

            But, it’s a pretty standard move in a debate about expenditure to preempt a response along the lines of “how can you really ask us to spend even more on [x] in these straitened economic times, with homeless people starving on the street and children walking to school without shoes on!?” by explaining that the expenditure you call for can (and ideally also will) be funded via something other than school shoes and shelters, because it is small relative to some other form of insignificant expenditure.

            Done properly, that’s a perfectly fair rhetorical move, as long as you don’t end up double- or triple-counting your farmvilles or your yoplaits to fund a whole ideological wish list.

    • Bugmaster says:

      Agreed; you said what I tried to say, but you did it better.

    • kokotajlod@gmail.com says:

      You can replace AI with anything that has a similar probability of happening x badness if it happens.

      You really think the global snake infestation is remotely similar to AI risk? Jeez.

      Why do you think that? Do 20% of snake experts think that snakes are going to kill us all in the next 100 years? Are there famous scientists I don’t know about going around saying this?

  12. Oleg S. says:

    Can anyone show what exact progress has been made in friendly AI research in the last 3-4 years?

    I think the whole thing with the comparison between general AI and warehouse management programs stems from people trying to dismiss the inevitable horrors of AI overtaking humanity because they don’t think friendly AI is possible.

    • Bugmaster says:

      Personally, I dismiss these concerns because I think that superintelligent AI-FOOM is impossible (well, not totally impossible, of course, just vanishingly unlikely). Unlike nigh-omnipotent AI, warehouse management software does demonstrably exist, and industrial accidents demonstrably do happen — despite the fact that their causes are reasonably well understood. Buggy automated software plus giant warehouse equals one huge accident waiting to happen, so yes, I’m way more worried about that than the Singularity.

      • Oleg S. says:

        I have a feeling that warehouse management software design and friendly AI research have a lot in common – see for example this 2016 paper about how to apply friendly AI principles to a robot design.

        What I don’t understand is what MIRI & other similar institutions offer on top of what is already done in industry (say, at Google or Amazon).

        • cernos says:

          What I don’t understand is what MIRI & other similar institutions offer on top of what is already done in industry (say, at Google or Amazon).

          A general answer to that question is they have different incentive structures. MIRI is a different legal entity than an Alphabet or an Amazon. They have different fiduciary duties. MIRI publishes videos and white papers of their current results, is funded by the public, and must perpetually recruit a group of donors as members join and leave. Their incentives seem to push them in the direction of being open to the public and actively trying to connect to new groups of people.

          Alphabet is a publicly traded company that must increase shareholder value. I imagine that even should Alphabet want to keep the public informed on their AI safety progress, they would be unable to, as the threat of lost competitive advantage or lost monopoly rents from a U.S. patent could be costly. They make their results public when the problem is solved, like with the amazing spectacle of Alphabet’s & Deepmind’s AlphaGo. A publicly traded company has harsh legal incentives to default to being secretive and profit-maximizing. Deviating from those incentives is doable but costly and risky, so I expect them to do less of those things.

          If you (as I do) find it good to have access to a team of intelligent researchers’ papers and theories on AI, with the bonus of it being through the lens of AI safety, then I could see supporting groups like MIRI over the do nothing status quo of seeing what the big companies rollout.

          A tangent: my guess is some small amount of pushback (1-2%?) to funding new groups like MIRI is that by doing so you are being duped into helping someone other than your own tech firm build the next IBM, Microsoft, Apple, Google, Amazon (with regards to owning the AI patents, services, or networks) in the zero-sum game of U.S. corporate, patent, and IP law.

      • EGI says:

        @ Bugmaster: Why exactly do you think that? I mean, predicting the future is generally hard, but there are some good pointers to generally intelligent systems being possible: humans exist, after all. (In a similar vein, there were very good pointers available for heavier-than-air flight being possible: birds and the like.) Also, there are very good reasons (insert lesson about evolutionary biology and trade-offs) to assume that humans are not at, or even very close to, the possible maximum of intelligence. So IMHO a materialist world view produces the default assumption that superhuman AI is possible.

        Of course this does not say that we are close to developing general AI in the next couple of decades, and I think you could reasonably argue anything between, let’s say, .95 and .01 probability of general AI in the next 100 years. But as far as I understand you, you think the probability is more like 10^-10 or something like that, while also having very low model uncertainty. (To me, your “10000$ would be enough” claim above implies something like this. Am I wrong?) I do not understand how you could possibly come to this conclusion. Where exactly do you diverge from this reasoning? Or do you merely argue a single-digit-percentage extinction risk from UFAI and think we should dismiss such “unlikely” risks?

        By the way, I am pretty sure Scott is somewhere within my “reasonable” probability bracket and nowhere near .9999 or something equally ridiculous.

        • Joe says:

          I agree that it’s very unlikely humans are near the peak of intelligence, and that the future will likely contain AIs that are many times smarter than we are. But this doesn’t itself imply we will see a FOOM scenario. For that to happen, intelligence has to be pretty simple — the sort of thing that a single entity could actually develop at state-of-the-art level by itself, without requiring whole industries, huge teams, vast amounts of tools, research, infrastructure, … .

        • dansimonicouldbewrong says:

          This is the crux of the disagreement: if you believe, like Scott, that intelligence is a straightforwardly quantifiable property that humans have a bunch of, and that animals have less of, then it’s not hard to believe that some future AI will have a lot more of it. However, this model of intelligence has enormous problems. (I’d link to my blog post with more details, but apparently I’m not allowed to post links. You can search for my blog, whose name is the last part of my username here, and then search on that blog for “Hugo de Garis”.)

          • MicaiahC says:

            Reading over your post, I don’t think it’s very convincing, and it doesn’t centrally address the reasons why Scott thinks such a quantification is correct.

            Several things strike me as incorrect. For example, the Turing Test isn’t supposed to be a test for generalized intelligence so much as a way to say “Yes, computers right now seem impossibly distant from what humans can do, but we suppose that humans are intelligent because they can do things X, Y and Z, so it seems proper that we don’t hold machines to a double standard and NOT complicate the discussion”. Less verbosely, passing the Turing test is about being human more than it’s about being intelligent. Your post briefly mentions that people talk about computers being human, but that’s distinctly different from saying that no better definitions exist, or that the specific definition put forward by Scott is false.

            Another part talks about how, even if there is a definition under which a computer can recognize “complex patterns”, it would maybe fail to recognize patterns that are simple to us. This strikes me as profoundly confused, because it has an abstract definition of intelligence dictating an empirical claim: “a being that can recognize complex patterns X and Y necessarily can’t recognize simple pattern Z”. It seems that this proves too much, by also claiming, let’s say, that all mathematicians lose the ability to do arithmetic when they learn about real analysis or homotopy type theory.

            The stuff about the smartest man in Korea seems to ignore stuff about why the orthogonality thesis applies less to humans. I don’t have the time right now, but if you ask for it in a future open thread I’d be happy to supply links to a fuller form of the argument.

            Overall, I don’t think the blog post is very convincing, and it doesn’t exactly touch base with reality. I’d personally be much more convinced if it engaged with arguments I’m more familiar with (from Superintelligence), or gave more concrete reasons than “definitions are weird”: that is, what factors in the world lead you to believe that a single axis is wrong, in a way that acknowledges the Scott/LWian definition of intelligence.

          • Bugmaster says:

            I haven’t read the blog yet, but I’d like to mention that I reject AI-FOOM for two additional reasons (actually for many more, but these two are the major ones):

            1). The nascent AI won’t be able to solve any useful problems (such as, “how do I become superintelligent”) just by thinking really fast; instead, it would actually need to act in the physical world, in order to run experiments, correspond with humans, etc. The physical world is very, very slow.

            2). Many of the methods that AI-FOOM proponents propose for the AI to become superintelligent are either unlikely or downright impossible. This includes e.g. “hacking the planet”, nanotechnological “gray goo”, memetic basilisks, and even more mundane feats such as linearly scaling its own processing capabilities by adding more hardware without any diminishing returns.

          • Most, but not all. Taking over military equipment or crashing financial markets seems more plausible to me.

          • Bugmaster says:

            @TheAncientGeekAKA1Z:
            I agree with you that “taking over military equipment” and “crashing financial markets” are real causes for concern. However, I disagree that a prospective AI would be able to do this any better than other unscrupulous agents (e.g. evil yet entirely human Russian hackers). In addition, it’s not clear that such events would necessarily be catastrophic; for example, the Flash Crash has already happened at least once, and yet we’re still here.

          • Even if it had an IQ of 1000? Are you denying the possibility of superintelligence, or its effectiveness?

          • Bugmaster says:

            @TheAncientGeekAKA1Z:
            All of the above. Firstly, I don’t believe that “IQ 1000” is a coherent concept. Secondly, being super smart is not enough; for example, no amount of intelligence will allow the AI to hack my old 1970s-era Toyota, because there’s nothing there for it to hack. Similarly, the Flash Crash (which was accomplished by a plain old non-superintelligent AI) was halted by humans pulling the plug. But, perhaps more importantly, you are assuming that the AI is already somehow superintelligent (assuming that concept even makes sense); I am not convinced that it could become that powerful in the first place without hacking the planet/inventing nanotech/overcoming the laws of physics/etc.

          • The inability of an AI to hack a small amount of old technology won’t save us.

          • John Schilling says:

            Intercontinental ballistic missiles with thermonuclear warheads are an old and hack-proof technology.

          • jacob_lakja says:

            The AI need not directly hack old tech – it just needs to convince us ever-gullible humans to do its bidding for anything it can’t otherwise control. Cognitive hacking is arguably the greater danger / vulnerability here.

        • Bugmaster says:

          @EGI:
          I mentioned some of my reasoning in reply to dansimonicouldbewrong above; but in general, I do not believe that the propositions “humans are not the pinnacle of intelligence” (which I do agree with) and “intelligence can be embodied in a machine” (ditto) necessarily lead to the conclusion, “and therefore the Singularity is likely to happen”. The Singularity rests on many more assumptions than just those two — even if we grant that “intelligence” is an easily quantifiable property that can increase exponentially, which I personally doubt.

          • EGI says:

            I think we are talking past each other. I’m not sure what exactly you mean by the Singularity, but it seems to me that you mean a local hard takeoff (from human to vastly superhuman in days or less) by a single entity. I think this is one of the (much) less likely ways for superhuman UFAI to arise. Much more likely is a slow, well-behaved, human-assisted takeoff, which turns nasty later.

            Regarding intelligence: no, I do not think that intelligence in humans is a scalar or easily quantifiable, or that IQ 10000 makes any sense (it does not). For me, “superintelligent” in this context means vastly better than humans at doing science and building stuff. As soon as that happens, we had better have some friendliness architecture in place. And while being better at something is not necessarily easily quantifiable, I think you can meaningfully say that most chess programs are better at chess than I am, or that Google Translate is better at translating Spanish to English than I am, but vastly worse than a professional Spanish translator.

            So, to cut a long story short, you seem to agree that machines better at science and technology than any human are likely to be developed at some point. If these machines are somewhat agenty (which seems likely), we are no longer the dominant species (and probably extinct soon after). There is no need for a hard takeoff or even nanotech to be involved (though nanotech probably will be involved at one point or another, since it is pretty powerful and certainly possible: just look in a mirror or at a tree). Why do you think there is absolutely no reason for concern? Not only no “let’s go to a war economy and dedicate everything to this problem” level of concern, but also no “let’s throw a couple billion at this problem and see if anything sticks” level of concern?

          • Bugmaster says:

            @EGI:
            By “Singularity”, I mean something similar to what you said: a “hard takeoff”, which goes from sub-human to super-human faster than anyone could predict. It could be local, or distributed; that doesn’t really matter to me, because I don’t think either option is likely.

            You say that a “slow, well behaved human assisted takeoff” is much more likely to happen, and I agree; but in this case, the danger is a lot less severe, since there’s plenty of time for humans to pull the plug. You say it can “turn nasty later”, but this is true of any technology, e.g. fossil fuels or nuclear physics. Any technology can be incredibly dangerous; my claim is not that AI is somehow totally safe, but only that it belongs to the same category as other technologies (and should thus be dealt with accordingly). MIRI, on the other hand, claims that AI is a uniquely dangerous existential risk, which must be dealt with in an emergency fashion right now. I think that doing so would be a waste of resources.

            …you seem to agree, that machines being better at science and technology than humanly possible are likely to be developed at some point.

            In some sense, this has already happened; Deep Learning is well on its way to taking over some scientific fields, such as computational linguistics. However, I’m not lying awake at night worrying that a natural language processor is going to eat me. You say that I should be worried when such tools become “somewhat agenty”, but that is a huge leap, one that I don’t see happening any time soon.

            If it does happen (and personally I hope it does), no one is going to invest infinite resources into an AI that keeps recursively improving itself, as opposed to curing cancer or parsing natural language or whatever it’s supposed to be doing. Recursive self-improvement is a bug, not a feature, and is dealt with by hitting “reset” and cancelling your dev team’s Christmas bonuses, as usual.

            To make matters worse (for the AI), I’m not at all convinced that it’s physically possible to improve an agent’s problem-solving ability to godlike levels (assuming that this is even a coherent concept, which I doubt). People often act as though you can scale intelligence infinitely just by adding more hardware, but this doesn’t work in the real world — this is why Google and Amazon have a local data center in every region, containing duplicated data; and this is why their data centers are not infinitely large.
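
            (To put toy numbers on that diminishing-returns point: the classic Amdahl’s-law calculation below assumes, purely for illustration, that 90% of a workload parallelizes perfectly; even then, piling on hardware caps out at a 10x speedup.)

            # Amdahl's law, as a toy sketch of diminishing returns from adding
            # hardware. The 90% "parallel fraction" is an arbitrary illustrative
            # assumption, not a claim about any real system.
            def speedup(parallel_fraction, n_processors):
                serial = 1.0 - parallel_fraction
                return 1.0 / (serial + parallel_fraction / n_processors)

            for n in (1, 10, 100, 1_000, 1_000_000):
                print(n, round(speedup(0.9, n), 2))
            # 1 -> 1.0, 10 -> 5.26, 100 -> 9.17, 1000 -> 9.91, 1000000 -> 10.0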

            What’s even worse (again, for the AI) is that raw intelligence doesn’t buy you much. Even if the AI is really, truly amazing at particle physics, and can solve equations a trillion times faster than any other computer… then it will still need to build an LHC in order to discover the Higgs boson. An actual collider. Built out of bricks. Bricks are slow, and you can’t problem-solve your way out of using them — not unless you already know everything you are trying to discover.

            On top of that, I am reasonably sure that things like self-replicating molecular nanotechnology are impossible. They’re fun to think about, but the only such technology we know of are living cells — and they are slow, squishy, and not terribly useful if your goal is to convert arbitrary raw materials to computronium. For several reasons, I doubt that it’s physically possible to build molecular machines that can eat the world in a matter of minutes, or anything of that sort — though, obviously, making flesh-eating bacteria that slowly eat people is not out of the realm of possibility.

            In summary, the failure of the Singularity/UFAI scenario is overdetermined. It relies on several extremely unlikely things happening at the same time, so I’m not all that worried about it, just as I’m not worried about the Second Coming of some god, or an alien invasion, or whatnot.

          • EGI says:

            You say it can “turn nasty later”, but this is true of any technology, e.g. fossil fuels or nuclear physics. Any technology can be incredibly dangerous; my claim is not that AI is somehow totally safe, but only that it belongs to the same category as other technologies (and should thus be dealt with accordingly).

            I think you misunderstood me here. By “turning nasty” I did not mean some previously unknown risk manifesting, like global warming, but a previously cooperative agent becoming hostile, because it predicts that it is now powerful enough to openly pursue its (unwisely programmed) goals with impunity. This is unique to AI, because other technologies are not agents, and it makes testing for safety uniquely difficult. It also rules out any plug-pulling on our part, because the AI is well behaved for as long as plug-pulling is still possible.

            You say that I should be worried when such tools become “somewhat agenty”, but that is a huge leap, one that I don’t see happening any time soon.

            “Somewhat agenty” is shorthand for possessing a model of the world at large and of the AI’s own place in it, having desires about future world states, and being able to act on those desires. I agree that current AI systems have a longish way to go before fulfilling this definition. As soon as they do, the whole mess with convergent AI goals starts. No feeding of infinite resources required.

            What’s even worse (again, for the AI) is that raw intelligence doesn’t buy you much. Even if the AI is really, truly amazing at particle physics, and can solve equations a trillion times faster than any other computer… then it will still need to build an LHC in order to discover the Higgs Boson. An actual collider. Built out of bricks. Bricks are slow, and you can’t problem-solve your way out of using them — not unless you already knew everything that you are trying to discover.

            Again, the AI is well behaved until it is powerful enough not to worry about us. So a couple of months or years more bootstrapping time doesn’t really change the outcome.

            On top of that, I am reasonably sure that things like self-replicating molecular nanotechnology are impossible.

            You are wrong here. Sure, Drexlerian diamondoid machines operating in air or vacuum are probably impossible. But that doesn’t matter much, since life-like wet nanotech is almost certainly possible. Wet nanotech has much looser constraints on its abilities than life, or biotech for that matter, since different chemistries are available.

            For several reasons, I doubt that it’s physically possible to build molecular machines that can eat the world in a matter of minutes, or anything of that sort…

            Sure, thermodynamics alone dictates that this will take years, or at least months. But I don’t really care whether the AI needs minutes or months to eat the world after humanity is dead.

            though, obviously, making flesh-eating bacteria that slowly eat people is not out of the realm of possibility.

            Much more dangerous things are possible, e.g. a nanoweapon which infects basically everyone without causing symptoms and then waits for a certain signal to kill.

          • Joe says:

            @EGI

            … a previously cooperative agent becoming hostile, because it predicts that it is now powerful enough to openly pursue it’s (unwisely programmed) goals with impunity. … the AI is well behaved as long as plug pulling is still possible.

            What’s your story for how an AI agent could ever attain a position of incontestable power like this in the first place?

            Do you at least agree that you need a story — that if, for example, AI tech develops very slowly and gradually, so that at any moment there are a great many different AIs at similar capability levels, the ‘hostile turn’ scenario you’re describing no longer makes sense?

          • Bugmaster says:

            @EGI:

            With “turning nasty” I did not refer to some previously unknown risk manifesting like global warming but a previously cooperative agent becoming hostile…

            I can’t tell the difference between this scenario and any other previously unknown (or perhaps known but discounted) risk manifesting itself. Can you elaborate?

            As soon as they do, the whole mess with convergent AI goals starts. No feeding of infinite resources required.

            Agreed, but, firstly, I believe that this won’t happen any time soon; and sometimes I fear it never will (unlike MIRI, I think that GAI would be a tremendous benefit to humanity overall). Secondly, I am not at all convinced that nearly godlike superintelligence could be accomplished without nearly infinite computing resources.

            Again, the AI is well behaved until it is powerful enough to not worry about us.

            I think you keep (subconsciously) shifting your mental model of AI between “a highly evolved, autonomous tool” and “the bastard love child of Skynet and GLaDOS”. Granted, the two models are not entirely mutually exclusive; but I am not at all convinced that one must necessarily entail the other.

            But that doesn’t matter much, since life like wet nanotech is allmost certainly possible.

            What do you mean by “wet nanotech”, and what do you think it could reasonably accomplish? As I said, living cells are just not very good at converting raw silicon to computronium; on top of that, the chemical reactions that power them are fairly slow. You might be able to plant some nasty fungus that can destroy a bunch of crops, perhaps as quickly as within a decade; but you won’t be able to drop a bio-bomb that un-terraforms the Earth overnight. This is important; you say:

            But I don’t really care if the AI needs miniutes or months to eat the world after humanity is dead

            But the problem here is that (as I see it) the AI would need to eat the world in order to become godlike in the first place. If it can only do so at one square kilometer per year, humans can pull the plug at any time. In fact, this would be easier than, e.g., managing perfectly stupid invasive plant species. Again, I think you are switching your mental model between “a human-level AI is trying to become godlike, and this is how it could do it” and “a godlike AI already exists somehow, and this is what it could do to us”. Even if we set aside the issue that I am not convinced a godlike AI could exist at all (any more than conventional deities could), this confusion still severely undermines your argument.

            E.g. a nanoweapon which basically infects every one without causing symptoms and then waits for a certain signal to kill.

            You mean like AIDS (which demonstrably exists but is nowhere near as dangerous), or like some sort of Drexlerian nanotech (which we both agree probably can’t exist), or what?

          • EGI says:

            Oops, my comment just vanished. Perhaps it was too long, so I’ll try again in multiple parts.

            Part 1:

            @ Joe:

            What’s your story for how an AI agent could ever attain a position of incontestable power like this in the first place?

            Uh, since I am not super-intelligent, this is a difficult question to answer. Basically, by doing what it was built for: participating in the economy, acquiring resources by being useful, and diverting some of those resources to R&D and self-improvement. One semi-plausible story about such a slow takeoff is “My little Pony: Friendship is optimal”.

            Do you at least agree that you need a story — that if, for example, AI tech will develop very slowly and gradually, so that at any moment there are a great many different AIs at similar capability levels, the ‘hostile turn’ scenario you’re describing no longer makes sense?

            No, it does still make sense. Then you get multiple entities which compete with each other with less and less regard for human rules and interests the stronger they become. Or one gets a decisive advantage and wipes the others out along with humanity.

            @ Bugmaster:

            I can’t tell the difference between this scenario and any other previously unknown (or perhaps known but discounted) risk manifesting itself. Can you elaborate ?

            The two major differences are: First, the risk does not appear in sufficiently controlled testing scenarios, since the AI knows that it is in such a scenario and behaves accordingly (unless you manage to deceive an entity which is much smarter than you, which may be difficult). This rules out most engineering approaches, which typically rely heavily on trial and error. Second, the risk does not manifest gradually, like global warming or car accidents or releases of radiation, but suddenly, when it is already too late.

          • EGI says:

            Part 2:

            Agreed, but, firstly, I believe that this won’t happen any time soon; and sometimes I fear it never will (unlike MIRI, I think that GAI would be a tremendous benefit to humanity overall).

            Maybe. I would perhaps say 50% over the next 50 years for superhuman AI, with a 50% extinction risk in case of superhuman AI, and lots of model uncertainty. But that is not comparable to the Second Coming, and it is a very high-priority risk, at least in my book. Comparison: existential-risk-level asteroid strikes are on the order of 50% over a couple of million years. (By the way, I do not want to stop AI research either. I think this would be both impossible and undesirable, because the potential payoff if we get it right is simply too high. But I want us to think long and hard and early about the potential risks involved.)
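
            (Back-of-envelope, using those figures exactly as stated and spreading each risk evenly per year, which is a crude simplification; “a couple million years” is read here as two million:)

            # Stated numbers: 50% chance of superhuman AI within 50 years, times
            # 50% extinction risk given superhuman AI, versus ~50% chance of an
            # existential asteroid strike over ~2 million years (assumed figure).
            ai_extinction = 0.5 * 0.5            # 0.25 over the next 50 years
            ai_per_year = ai_extinction / 50     # ~0.5% per year
            asteroid_per_year = 0.5 / 2_000_000  # ~2.5e-7 per year
            print(ai_per_year / asteroid_per_year)  # ~20000x higher annualised risk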

            Secondly, I am not at all convinced that nearly godlike superintelligence could be accomplished without nearly infinite computing resources.

            What does godlike mean? I think it is plausible that in a couple of decades an AI which outperforms a human brain on most relevant metrics may run on a desktop computer drawing a few hundred watts. A human brain is about that size and runs on roughly 50 W, and mature technology is typically more efficient than evolved systems (e.g. photosynthesis vs. photovoltaics, respiration vs. fuel cells, biking vs. running…).

            I think you keep (subconsciously) shifting your mental model of AI between “a highly evolved, autonomous tool” and “the bastard love child of Skynet and GLaDOS”.

            I don’t think so. But I think one MAY become the other as soon as goals AND a sufficiently complex representation of the world and the AI’s place in it are present. But more research is needed, of course.

          • EGI says:

            Part 3:

            What do you mean by “wet nanotech”

            I mean solvent-based, nanostructured systems of molecular machines and highly selective catalysts which can restructure matter at the molecular level, either by burning part of that matter as fuel or by utilizing solar energy. This is as opposed to dry nanotech, which envisions more or less freely programmable molecular machines that operate in air or vacuum, placing single atoms or groups of atoms according to their program and utilizing either internally stored or solar energy. The former is definitely possible; the latter probably not (or at least it is much more difficult).

            and what do you think it could reasonably accomplish ?

            There might be a serious inferential gap here. I have studied biology with an emphasis on protein biochemistry, and I don’t know how much you know about biology. To get a feeling for how actual molecular machines work, you could read, for example, the Wikipedia articles on Dynein, the Golgi apparatus, ATP synthase and Cellulose synthase (and wiki-walk from there).
            Regarding the capabilities of wet nanotechnology: the capabilities of living cells, and everything trivially extrapolated from them, are of course an absolute lower bound. But the chemistry of life is incredibly restricted, because the information storage (DNA) can only build proteins by linearly connecting 20 types of amino acids; everything else has to be built by these proteins. Also, the evolutionary nature of life and the sole reliance on water as a solvent seriously restrict the possible chemistry (this is why life is bad at building with silicon or with metals). Estimating the ultimate capabilities of a future technology is of course very difficult, but “any assembly of atoms permitted by chemistry and stable in a sufficiently versatile solvent (water, DMSO?, ammonia?, supercritical CO2?, diethyl ether??)” sounds about right. Possible operating temperatures are perhaps between -70 and 250°C, depending on solvent and pressure. The speed of conversion of substrate to product in larger structures will mostly be limited by the elimination of waste heat, and will probably be an order of magnitude or two faster than in typical biological systems.

            But the problem here is that (as I see it) the AI would need to eat the world in order to become godlike in the first place.

            See my comment on “godlike” AI in Part 2.

          • EGI says:

            Part 4:

            If it can only do so one square kilometer per year, humans can pull the plug at any time. In fact, this would be easier to do than e.g. managing perfectly stupid invasive plant species.

            First: Humans would already be dead by this point. Second: The AI would do it like the invasive plant species: Start out with one seed per square meter and be done with it within a month or five.

            Again, I think you are switching your mental model between “a human-level AI is trying to become godlike, and this is how it could do it”, and “a godlike AI already exists somehow, and this is what it could do to us”.

            No, again, the human level AI is well behaved until it has a decisive strategic advantage (Nanotech is a good candidate here, but not necessary. Control over a lot of vital services would be another. Then, ironically, the AI could pull the plug on us…)

            You mean like AIDS (which demonstrably exists but is nowhere near as dangerous), or like some sort of a Drexlerian nanotech (which we both agree probably can’t exist), or what ?

            I mean neither. AIDS is a parasite, not a weapon. Epidemics of particularly lethal parasites are limited by the death of the host before the parasite can spread; you need to circumvent that. Imagine a nanosystem, a cell if you will, which can replicate within a human. It is carefully designed not to be immunogenic, so its presence is not noticed (no symptoms). It spreads like the flu, via droplets of bodily fluids; unlike the flu, it is present in all fluids and can penetrate skin. If dried out, it forms resilient spores, like anthrax. After some time, when the attacker is convinced that (nearly) everyone is infected, a signal is sent (chemical, radio, acoustic, whatever). Upon receiving the signal, the nanoweapons seek out a vital structure, let’s say the motor neurons within the brain stem, and integrate into their membranes, forming an ion channel. This shorts out the membrane potential of these cells, making the victim unable to move or breathe. Everyone dies within the same minute (or within a few hours in the case of a chemical signal). This scenario is well within the “trivially extrapolated” category above and could probably even be done with conventional biotech (just DNA and protein, no unconventional chemistry). And this is just what my puny human mind came up with.

          • Bugmaster says:

            @EGI:
            First of all, without a compelling description of how an AI can become super-intelligent, and what that even means, your concerns become a lot less plausible. In fact, they start edging dangerously close to Pascal’s Wager, which follows a similar pattern: there’s a vaguely defined yet utterly dire threat that could happen in some fashion in the future, and despite having little to no evidence for it, we should spend resources on mitigating it, just because of how serious it is. All the problems with the original Pascal’s Wager apply to this reasoning, as well.

            For example, you keep saying things like this:

            …the AI knows that it is in such a scenario and behaves accordingly, except if you manage to deceive an entity which is much smarter than you…

            But this statement rests on multiple hidden assumptions. You are assuming that “much smarter than you” is a quantifiable, coherent concept. Sure, it makes sense in common everyday usage — lots of humans are smarter than me — but then, I suspect that we’re not talking about a few IQ points of difference here, seeing as smart humans get deceived all the time. You are also assuming that the AI is already superintelligent; even if we grant that this concept is coherent, I am not convinced that it could be achieved as easily as you seem to be imagining. You say,

            I would perhaps say 50% over the next 50 years for superhuman AI with 50 % extinction risk in case of superhuman AI, with lots of model uncertainty.

            But I find this estimate staggeringly optimistic (well, or pessimistic, depending on your viewpoint; what I mean is, the probability is way too high). Again, without a compelling story of what superintelligence means and how AI can achieve it, it’s going to be difficult for us to even talk about this subject.

            By contrast, consider another existential risk, ye olde asteroid strike. Imagine that I said, “there’s a 50% chance that an asteroid will hit the Earth in the next 50 years”. If you challenged me on it, I’d bring up a bunch of orbital mechanics calculations, the orbits of known asteroids, albedos, etc., and you could point out where my errors lie. But now imagine that I said, “I’m pretty sure aliens exist, and there’s a 50% chance they will attack the Earth in the next 50 years; I can’t tell you how or why, because in order to get to Earth they’d have to be so technologically advanced as to defy comprehension”. Now what? No matter what objection you bring up, I could just say, “yes, but the aliens are so advanced they could work around this somehow”.

            That’s exactly how I feel when I read stuff like this:

            I think it is plausible, that in a couple decades an AI which outperforms a human brain on most relevant metrics may be run on a desktop computer for a few hundred Watts.

            Again, in the trivial sense, this is entirely possible: after all, there are some people in the world whose brains outperform the average brain on all relevant metrics, and by a large margin, too. However, none of those people are currently ruling the Earth, hacking the planet, or mind-controlling ordinary humans into doing their will (er… that we know of); so, presumably, you’re talking about something categorically different.

            (continued below, hopefully)

          • Bugmaster says:

            (continued)

            @EGI:
            Regarding protein chemistry: I know a little bit about it, though obviously not as much as you (or anyone else who actually studied biochemistry, really). So, I’ll have to reserve judgement on your proposed semi-organic nanotech, unless you can point me to some articles. Right now, my instincts are telling me that the kinds of reactions you describe would either consume their fuel and shut down right away, or release so much waste heat that they’d immediately melt down and/or explode, or both; but of course I could be wrong. Water-based biochemistry works precisely because water is such a convenient universal solvent, and also because the entire process is quite slow. I’m pretty sure this part is factually wrong, though:

            The AI would do it like the invasive plant species: Start out with one seed per square meter and be done with it within a month or five.

            I am not aware of any macro-scale invasive species (as opposed to samples in a Petri dish) that works this fast.

            In general, you ascribe several potential powers to superintelligent AIs:

            * The ability to deceive or possibly mind-control anyone into doing anything.
            * Perhaps related to the above, siphoning massive amounts of funds from the economy with no one being the wiser.
            * The ability to build a perfectly undetectable, remote-controlled bioweapon with close to 100% infection rate, which can destroy specific brain structures with super-surgical precision.
            * Converting material into its own computational substrate at the rate of several square kilometers per day (possibly without anyone noticing).

            I agree that these are incredibly dangerous powers, but I’m in no way convinced that they are in any way achievable in the real world. Furthermore, if the AI needs such powers in order to become superintelligent in the first place, then you have a chicken and egg problem. And if it doesn’t, then you still need to explain how it could acquire such monumental powers in the first place. What’s worse, right now I don’t even think I fully understand what the word “superintelligent” means to you; to me, it just sounds synonymous with “possesses nearly magical powers”, but that concept is too vague to spend actual money on.

          • Joe says:

            @EGI

            I feel like the word ‘multiple’ is hiding the scale of what’s possible, suggesting perhaps just five or ten AIs vying for dominance. What if there are billions of AIs, just as there are billions of humans? What if there are even orders of magnitude more than that (entirely possible since AIs can ‘reproduce’ as fast as there are computers for them to run on)? In such a scenario, does the idea of a single AI gaining absolute dominance still seem likely to you?

            I agree that human rules and interests wouldn’t be given much attention in such a scenario. But, surely you can still see how this — a multipolar scenario of vast numbers of AIs, in which competitive pressures force the dominant AIs to have preferences for whatever is best for their survival — looks very different from a singleton scenario, in which a single AI gets to impose whatever random utility function it was initially coded with onto the universe? Not necessarily good or bad, but certainly different.

            I largely agree with the doubts Bugmaster has raised regarding what you claim intelligence is capable of, so I won’t repeat that. I will emphasise that I think the main question is whether these kinds of powers are something that an individual can plausibly attain, or are something that must be embodied in far larger organisations, perhaps a whole civilisation.

          • 6jfvkd8lu7cc says:

            @Joe: I would add that a significant portion of these AIs were launched for a reason, and they are probably out of control only partially and in a different sense.

            This is also why we need more Intelligence Amplification, with tools under the frequent, conscious control of single humans or well-coordinated teams (below twenty people), since more autonomous AIs will probably be controlled by decision-making in already (partially) out-of-control corporations like Google, and layers of indirection of control can lead to an effective loss of control.

          • EGI says:

            Part 1

            @ Joe:

            …. I agree that human rules and interests wouldn’t be given much attention in such a scenario. ….. Not necessarily good or bad, but certainly different.

            Certainly very different and, as you yourself derived above, certainly very bad (for us). So from an AI safety standpoint this (in my view quite likely) scenario needs to be addressed as well.

            I will emphasize that I think the main question is whether these kinds of powers are something that an individual can plausibly attain, or are something that must be embodied in far larger organizations, perhaps a whole civilization.

            I think in the case of AI the borders between the concepts of individual and larger organization are very blurry at best.

            @ Bugmaster:

            First of all, without a compelling description of how an AI can become super-intelligent, and what that even means, your concerns become a lot less plausible.

            I tried such a description in my answer to Joe:

            Uh, since I am not super-intelligent, this is a difficult question to answer. Basically doing what it was build for, participating in the economy, acquiring resources by being useful and diverting some of these resources to R&D and self improvement. One semi-plausible story about such a slow takeoff is “My little Pony: Friendship is optimal”.

            If you don’t find this compelling, then maybe I don’t know what your standards for “compelling” are. But I hope you agree that this is at least somewhat plausible (i.e. not epsilon probability), because the AI can have an indefinite period of doing exactly what it is supposed to do while slowly acquiring resources, wealth and technology. Also, seriously, read “My little Pony: Friendship is optimal”. It’s not that long. And I really do not understand why you again link my position to Pascal’s Wager. First, I have consistently argued that the probability of the threat is in the single- or double-digit percentages, not epsilon (this was the root of our disagreement); and second, the threat is not hell or some other minus-infinity scenario making up for an epsilon probability, but simply death. Is arguing for seatbelts giving in to Pascal’s Wager?

            But this statement rests on multiple hidden assumptions. …. You are also assuming that the AI is already superintelligent;

            No, I do not assume that. I assume that it is not much dumber than you (whatever that may mean). (In fact I was talking about an AI that has recently surpassed human abilities in some or most relevant fields and is now having its goal structure assessed for compatibility with human values, though the exact scenario is irrelevant in this context.) Could you reliably, sustainably deceive every human about being in a testing scenario? Are you kidding me? That has been the wet dream of every psychology and sociology researcher for as long as those fields have existed.

          • EGI says:

            Part 2:

            But I find this estimate staggeringly optimistic

            Well, maybe, maybe not; I pulled these numbers out of my ass and gave an approximate confidence interval a couple of posts earlier. My point is, I can easily grant you one or even two orders of magnitude on both numbers and we are still not in epsilon territory; heck, we are just approaching asteroid-strike territory.

            I’m pretty sure aliens exist, and there’s a 50% chance they will attack the Earth in the next 50 years

            There are two main reasons why this is false:
            First: The Earth has existed for about 4.5 billion years and has not been conquered by aliens in all that time.
            Second: There are no clues that circumstances might have changed. The analogous scenario to AI risk would be a recently built Dyson swarm a couple dozen light-years away. In that case I think your claim would be pretty reasonable, though the exact probabilities would of course still be up for debate (just as with AI risk…).

            However, none of those people are currently ruling the Earth, hacking the planet, or mind-controlling ordinary humans into doing their will (er… that we know of); so, presumably, you’re talking into something categorically different.

            Not necessarily. Imagine Elon Musk. Imagine he can duplicate himself at will for a trivial hardware cost and a couple hundred watts to sustain each copy. Further imagine he can edit his mental architecture and has a direct neural interface with broadband access to the Internet and all his copies. Further, he can speed himself up tenfold by increasing his hardware and energy consumption a hundredfold. Oh, and of course he has perfect memory. I would not be surprised if this entity took over the world within a couple of years to decades. The threat of true superintelligence comes on top of that. But that is just the icing on the cake.

          • EGI says:

            So, I’ll have to reserve judgement on your proposed semi-organic nanotech, unless you can point me to some articles.

            Unfortunately, most biochemistry journals refuse to publish exercises in futurology… (at least as far as I am aware).

            Right now, my instincts are telling me that the kinds of reactions you describe would either consume their fuel right away and shut down right away, or release so much waste heat that they’d immediately melt down and or/explode, or both; but of course I could be wrong.

            I do not see any reason for this. Biochemistry routinely harnesses one of the most energetic chemical reactions possible (oxidative phosphorylation, a.k.a. respiration, is oxyhydrogen by any other name). This is the reason why I think that in large systems you cannot exceed typical biological reaction speeds by more than one or two orders of magnitude (about the reaction speed found in a well-managed compost heap in the thermophilic stage; with active cooling and a few other bells and whistles you may eke out a little more). But this already means conversion of most substrate to product within a month or so.

            Water-based biochemistry works precisely because water is such a convenient universal solvent, and also because the entire process is quite slow.

            This is why I put varying numbers of question marks behind all the other solvents. But I think there is a good chance that some other solvent will work for chemistry not compatible with water.

            I am not aware of any macro-scale invasive species (as oppposed to samples in a Petri dish) that works this fast.

            Yup, I shortened my argument too much… Invasive species are not purposefully disseminated. The proper analogy would be more like a corn field or something similar. Of course the AI would do the seeding, not wait for the wind or human vehicles or animals or whatever to carry its seeds around. Please remember, this happens AFTER the strike that eliminates possible human resistance.

            * The ability to deceive or possibly mind-control anyone into doing anything.

            Nope, just the ability to hide its true motivation as long as it is convenient. Humans do this routinely.

            * Perhaps related to the above, siphoning massive amounts of funds from the economy with no one being the wiser.

            (At least in the beginning) not more so than Elon Musk or Peter Thiel or Bill Gates or…..

            * The ability to build a perfectly undetectable, remote-controlled bioweapon with close to 100% infection rate, which can destroy specific brain structures with super-surgical precision.

            “Perfectly undetectable”, “remote-controlled” and “super-surgical precision” are all slight overstatements used as rhetorical devices, but yep. Also, I would be VERY surprised if WE could not do the same in 30 to 150 years. (By the way: some human nut building something like this – possibly a little cruder – is my number-one favorite for existential risk in the coming century, with AI a close second.) Also, this is just one of many possibilities for the final strike.

            * Converting material into its own computational substrate at the rate of several square kilometers per day (possibly without anyone noticing).

            Yup, this looks pretty trivial. Many plants can do similar things with a little help from a farmer. But in this context I am not particularly interested in this claim, since this logically happens AFTER the AI has already won.

          • Bugmaster says:

            @EGI:

            Also, seriously, read “My little Pony: Friendship is optimal”

            I read it some time ago, and I do think it’s a great science fiction story (so, let’s avoid posting any spoilers for people who might not have read it). But, being fiction, it lacks a certain something when it comes to evidence. IIRC, it doesn’t even make any attempt to explain how its AI got started; it just ends up being built one day, and then expands itself by adding more commodity hardware. This scenario works perfectly well in the context of the story and its pacing — but it wouldn’t shed much light on our current discussion, even if it weren’t fictional.

            First I constantly argue that the probability of the threat is single or double digit percentages and not epsilon…

            You’ve got me there; I admit that, if I were to believe that the threat was that high, I wouldn’t have called it “Pascal’s Wager”. However,

            …the threat is not hell or some other minus infinity scenario to make up for epsilon probability but simply death.

            We’re talking about the death of every human everywhere, not just your personal death. I think this is as close as one can get to “minus infinity” without being conventionally religious.

            Could you reliably, sustainably deceive every human about being in a testing scenario?

            Let’s taboo the word “superintelligent”, because I think we might mean different things by it, and I personally am not convinced it’s even a coherent concept. Instead, let’s talk about some tangible powers that the AI could acquire. The power you mentioned should go on the list; in fact, it’s similar to my phrasing, from my previous post.

            I will continue my post in part 2, but meanwhile, I suggest we table the discussion on non-biological “wet nanotech”, since AFAICT we both agree that it is rather speculative.

          • Bugmaster says:

            @EGI:
            Part 2

            So, what are some of the powers that a super-AI could acquire? You already mentioned one:

            1). “Reliably, sustainably deceive every human about being in a testing scenario”. I would expand this to something like, “…or about many if not most other claims”.
            2). Related to the above, “The ability to hide its true motivation as long as it is convenient. Humans do this routinely.”
            3). (paraphrasing for brevity) A self-replicating Elon Musk, who can “duplicate at will for a trivial hardware cost and a couple hundred Watt to sustain the copy” (thus virtually indefinitely) and has direct network access and perfect recall.
            4). Capitalizing on the above, the ability to siphon a significant amount of funds from the economy “as long as it’s convenient”, just like Elon Musk or Bill Gates.
            5). (paraphrasing my previous point to address your concerns) The ability to build an incredibly difficult to detect, remote-controlled bioweapon with close to 100% infection rate, which can destroy specific brain structures with great precision. Same as we humans could build in 30 to 150 years.

            (plus some other stuff that we both agreed to ignore for sake of brevity).

            As I see it, all these powers fall into two categories, though some can belong to both:

            a). An ability so powerful that the laws of physics may prevent it from ever manifesting; even if they do not, the ability is incredibly unlikely to ever be realized. Powers 1, 3, and 5 belong here.
            b). An ability already possessed by some humans. Powers 2, 3, and 4 belong here.

            In general, I’d argue that most claims about the Singularity fall into these two categories. The problem with this is that category (b) is not very novel. We already have humans who lie, are good at either legitimate business or stealing money or both, kill other humans by a variety of means, etc. Such humans may absolutely pose an existential threat, but we are already dealing with them by conventional means. You may argue that we need to do so more vigorously (and I’d agree), but that’s not MIRI’s mission.

            You may also argue that, unlike humans, the AI will be able to perform such feats nearly perfectly. It doesn’t just lie, it lies so well that no human could ever unmask it. It doesn’t just steal, it steals so perfectly no one will catch on before it’s too late. It doesn’t just self-replicate, it does so nearly indefinitely and nearly for free, and so on. This is where we get to category (a).

            The main problem with (a) is that it really does require some sort of a lowercase-s singularity: a discontinuity in the curve that describes benefits gained compared to effort spent. Hardware costs a non-trivial amount of money, and cannot be expanded indefinitely, since things like heat dissipation, power consumption, and network latency kick in pretty quickly — and yet, the AI would be able to overcome this problem, somehow. Even the most skilled human sociopaths cannot fool everyone (+- epsilon) 100% (ditto) of the time — the more lies one spins, the harder it is to maintain them — and yet, the AI could do so with ease. Elon Musk spends a great amount of time and money on research and development, and SpaceX keeps crashing its rockets — but the AI could do everything right the first time, just by thinking about it really hard. Enron stole a bunch of money and got caught; the Flash Crash briefly wiped out a huge chunk of market value and got shut down; but the AI could do such things with impunity, despite the fact that money is generally just a consensual human fiction… And so on.

            It’s certainly possible to imagine an entity who can do this (and many people throughout history have imagined many such entities), but merely imagining something doesn’t make it possible or even probable. In most cases, in order to do these things, the AI would have to overcome the very laws of physics that you are using to justify its existence in the first place.

            Thus, if the AI requires one (and likely more than one!) of these powers in order to become a threat, then I am not too worried. And if not, then someone still needs to explain how the AI can grow so powerful in the first place.

          • Joe says:

            @EGI

            I think in the case of AI the borders between the concepts of individual and larger organization are very blurry at best.

            The obvious distinction is that in a massively multipolar world like ours, there isn’t one entity in charge determining what happens based on its utility function. There are billions of agents each with separate goals they are trying to achieve.

            If the AI future continues to look like this — a vast civilisation of sentient beings, with lives mostly comprised of okay-or-better moments — then I’d see that as a future full of value. Whether or not current human values, or even humans, exist in that world seems mostly irrelevant from a moral standpoint.

      • 6jfvkd8lu7cc says:

        After reading the entry, I do not feel like this is a big deal from a safety point of view — they do say that the human evaluator needs good intuitions about which partial step is a better step towards the goal, and they don’t say anything about outcomes in case of an unforeseen change in the environment.

        Of course, it has a lot of value from the point of view of eventually making a cheaper way to specify some behaviours and publishing it together with all the pitfalls.

        I do hope that OpenAI improves the real-world safety of automated systems just by creating a threshold of negligence: «If OpenAI wrote about this problem and a fix a year ago, and today you ship the same algorithm without applying any fix for this problem, you are officially negligent».

        If they publish baseline implementations (these will be better than a typical implementation from scratch, if only because of more attention and more bug reports than for a small proprietary system), the damage prevented could become easily quantifiable (and large).

    • vaniver says:

      Can anyone show what exact progress was made in friendly AI research in last 3-4 years?

      Sure, I can give you an incomplete list. What type do you want (i.e., my subjective impressions of what the most important results between 2013 and now are, a list of citations, a review from someone not at MIRI, etc.)?

      A major point to keep in mind is that somewhere around 2015, AI safety went ‘mainstream’, in the sense that people in regular academia and industry who were concerned about it grew confident enough to be open about their beliefs. As a result, orgs like MIRI were able to dramatically reduce their investment in movement-building through meme-spreading and focus instead on field-building: writing papers and technical agendas, sponsoring research discussions, and so on. Field-building takes a while to put together; CHAI at Berkeley has existed on paper for about a year, but I expect the most impressive thing to come out of it in the medium term to be the PhDs who spent their whole grad-school experience focused on the technical aspects of AI safety, and it’ll be several more years until those exist.

      • Oleg S. says:

        Sure, I can give you an incomplete list. What type do you want (i.e., my subjective impressions of what the most important results between 2013 and now are, a list of citations, a review from someone not at MIRI, etc.)?

        I guess a link to a review article + your subjective impression might satisfy my curiosity.

        In an ideal world, I think it would be great to have a guest post here, so that not only I but maybe some Financial Times authors too could benefit from knowing about recent developments in friendly AI. However, that’s not for me to decide.

        • albatross11 says:

          There are a lot of uses of AI/ML algorithms in places where we’d like some assurances about the decisions made by those algorithms, the information used, etc. That seems like a place where immediate engineering/political/social/legal concerns could overlap with longer-term AI safety concerns.

        • vaniver says:

          I didn’t forget about this, but…

          I guess a link to a review article + your subjective impression might satisfy my curiosity.

          Currently, I don’t think such a review article exists, but it might be high time to write one.

          My impression is that Logical Inductors are a major advance in the foundation of decision-making. I think CIRL is unlikely to work as is, but is likely pointing in the direction of something that could work, and figuring out precisely why CIRL won’t work (or creating patches so that it does) is going to be helpful. Relatedly, the list of problems that we’re aware of is longer now, which is progress towards figuring out all of the issues we have to deal with / creating a better typology of issues. (For example, I don’t think we were talking about corrigibility much in 2013, but at least one person thinks that if you solve corrigibility you basically solve AI alignment; I think that’s overly optimistic but not insanely so.)

          • Oleg S. says:

            Following your lead, I’ve just read this 2015 essay on corrigibility. For me, the results are definitely worth incorporating into a “Project Management 101”. Are you sure corrigibility wasn’t already discovered/studied by economists?

            That’s a general trend I see in friendly AI research: most of the results would have a tremendous impact on governance and corporate decision-making; one just needs to replace “AI” with “CEO”. I think the fact that everyone is still discussing the AI threat is a sign of terrible miscommunication.

            I’ll try to parse the 2016 take on logical induction. Hope it’s not too overloaded with formalism.

          • Oleg S. says:

            Isn’t it the case that a logical inductor that generates coherent beliefs could be exploited to solve NP-complete problems?

  13. OptimalSolver says:

    The following custom surrounding risk assessment has always amused me:

    Expert in domain X spends years in the wilderness, his dire predictions of a disaster related to domain X completely ignored by everyone. The X-related disaster happens, claiming N human lives.

    The domain expert is now vindicated, but he must perform the following ritual. As it would be unseemly to pop champagne and get in the faces of his detractors with a big, fat “I told you so!”, he must instead state something along the lines of “I take no joy in having been proved right,” and that he wishes the X-related disaster had never happened, even though it fully vindicated him.

    Internally, though, the domain expert is over the moon, experiencing the evolutionarily primed dopamine boosts and social status that come with vindication in front of the tribe and the humiliation of his rivals. So much so that I wonder which state of the world the domain expert actually prefers: vindicated, with N strangers dead, or unvindicated.

    Not really related to this post, but I always think about this when risk assessment comes up.

    • Jiro says:

      Well, that’s survivorship bias. What happens is “Expert announces predictions of doom. In a couple of cases, the expert is proved right, and has to say ‘I take no joy in having been proved right’. In the rest of the cases, the expert is proved wrong, and somehow manages to still be considered an expert if he’s still active in the field.”

      • OptimalSolver says:

        I made no claim about the likelihood of a domain expert being proven right.

        I’m only interested in the social rituals that follow from an expert being vindicated in tragic circumstances.

        • albatross11 says:

          I suppose the alternative is “I take great pleasure in having been shown right. The best in life is to see your academic rivals discredited in front of you, to hear the lamentations of their grad students. Besides, thanks to this nifty set of prediction markets, I made a bundle on my weird doomsaying. I think I’ll buy a small island in the South Pacific.”

          I’ll admit, I’d actually find that kinda refreshing.

    • vaniver says:

      So much so that I wonder which state of the world the domain expert actually prefers, vindicated, with N amount of strangers dead, or unvindicated.

      There’s a bit in the Romance of the Three Kingdoms that stuck with me, where someone trying to take down the usurper Dong Zhuo promises a beautiful woman, Diaochan, to be both Dong Zhuo’s concubine and his bodyguard Lu Bu’s wife. (She’s in on the plot, and plays the two against each other.)

      Dong Zhuo’s advisor, Li Ru, sees through the plot and tells Dong Zhuo to give up on the woman, since it’s not worth losing everything over her. He can’t get Dong Zhuo to agree, and says the equivalent of “well, I guess we’re all dead now.” Shortly thereafter, Lu Bu assassinates Dong Zhuo and Li Ru is arrested and executed.

      I suspect more warnings are of this form.

  14. OptimalSolver says:

    Also, we’ll have to worry about custom-made pathogens long before we have to worry about AGI. In fact, I believe that it is because of the former that we won’t need to worry about the latter.

  15. ashlael says:

    Incidentally, I did not post sentries around my house tonight. Sure, it’s possible someone might come to attack me and my family and kill us all, but I don’t think it’s likely.

    I do not believe this action was insane.

    • kokotajlod@gmail.com says:

      So you think the probability of AI being dangerous is similar in magnitude to the probability that your house will be invaded tonight? Why? What’s the crime rate like in your area? I’m guessing the probability of your house being invaded tonight is something like 0.01%. So I’m guessing that you are 99.99% confident that AI won’t turn out to be dangerous… why are you so confident, given that there are many smart experts who are worried? Can you name another field where there is a similar fraction of experts who are worried, where you think their worries are similarly overblown?

      (Immediately things like astrology and biblical apocalypticism come to mind. So then I guess the disagreement between us is: You think AI science isn’t really a science; expert judgment is not a guide to truth in the way that it is in sciences… why?)

      I’m open to being convinced here. I would LOVE to have some examples of concerns like this in scientific communities that turned out to be overblown. If we had e.g. 10,000 examples of concerns like this that turned out to be ungrounded, and only ~1 example of concerns turning out to be right, then that would establish a base rate low enough for me to completely reverse position on this. And if the numbers were less extreme I would partially reverse position.
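
      (To make the arithmetic behind that hypothetical explicit, here is a minimal sketch; the 10,000-to-1 split and the 0.01% burglary guess are the thread’s own illustrative numbers, not data.)

          # Minimal sketch of the base-rate argument above; all numbers are the
          # thread's own hypotheticals, not survey data.
          ungrounded, vindicated = 10_000, 1
          base_rate = vindicated / (ungrounded + vindicated)
          print(f"implied base rate for expert doom-concerns: {base_rate:.4%}")  # ~0.0100%

          house_invasion_tonight = 0.0001  # the 0.01% burglary guess above
          print(base_rate < house_invasion_tonight)  # the two magnitudes are comparable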

      • 6jfvkd8lu7cc says:

        The problem with such a survey is that the wording is really hard to get right.

        I would argue that «the probability of AI being dangerous», conditioned on some very very basic assumptions, is 1. Humanity already uses a lot of artificial decision making systems that implement the decisions without human go-ahead on every single one; some of them implement logic complicated enough for humans not to grasp immediately — and fast enough for humans not to be able to intervene in case of problems.

        I think these systems are AIs as currently defined, even though they do not aim to be AGI or to be sentient. Violations of engineering best practices in design, implementation, deployment and operations happen all the time and sometimes lead to damage. And too often even known deficiencies neither get fixed quickly nor slow down deployment…

        So in some sense «AI will be dangerous in the future» is trivially true: «it» is dangerous and will still be dangerous five minutes in the future. AI is dangerous and these dangers are important. I assume this reading is not what you actually mean (I may be wrong).

        Now, there is a spectrum of interpretations of questions like «Is AI risk important?», and multiple-choice answers to such a question aren’t easy to classify in a useful way. The broader the definition of AI risk, the more obvious it is that it is real, but also the more areas of study should be included in mitigation.

        If the area is defined broadly enough…

        I can note that a significant part of design effort around Rust is about specifying impossibility of some software failure modes — does this count as a part of safety efforts? It does seem to advance our state of knowledge about interacting parallel computations. And hopefully it may make some AI system safer in the future, if only because it won’t be as easy to attack via exploitable race conditions.

        And some people exploring the game-theoretical conditions where information cascades form are advancing decision theory in areas that may be very useful for middle-term AI safety, but they don’t label themselves as doing AI safety; they are doing game theory and epistemology.

        • kokotajlod@gmail.com says:

          Agreed; I’m talking about AI being dangerous in a way that could be an existential risk.

          Analogy: Suppose various nations were racing to build AI systems to control their nuclear arsenals, the idea being that the AIs would detect whether treaties had been broken, surprise attacks had been launched, etc. and then automatically retaliate with nuclear weapons. That way, with humans out of the loop, the nuclear deterrent would be extremely credible.

          Wouldn’t you be worried? Such systems would need to be tested very carefully to ensure zero false positives and zero overreactions and zero escalations. Now suppose that for some reason testing was impossible, or at least extremely difficult. Wouldn’t you be a lot more worried?

          Software has bugs and unintended consequences all the time. The danger with superintelligent AI is that no one has yet proposed a way to safely test it. Once you’ve got a superintelligent AI running… how do you tell that it is on your side? (One proposal is to keep it in a box; much ink has been spilled about that. A more complicated proposal is to put it in a simulated environment and hope it doesn’t realize; etc.) If you think that it would be feasible to safely test a superintelligence, please say so; that may be our biggest point of disagreement. If you don’t think it can be safely tested, then since you agree that software often doesn’t do what we wanted it to do… shouldn’t you be worried?

          • 6jfvkd8lu7cc says:

            I am not really worried about someone building a superintelligence that doesn’t immediately crash, and that superintelligence getting unchecked control over too much power.

            I do not buy the arguments about superintelligence being way easier to build than to test. For a trivial reason: your first version will crash or go into a weird inactive mode, it takes effort to make sure software doesn’t do that; then you learn from whatever information you gather. Probably you learn a lot.

            Your reasoning gives the impression that all the failed versions do not teach much about controlling the process; my experience teaches me that learning which parts of the process require specification and control is often a precondition of making the process run at all.

            Another thing is why we need a new AI to make the situation scary.

            I know as a fact that current software is very often carelessly designed, or implemented, or deployed, or maintained and managed, or all of the above. I know with overwhelming (but a little bit lower) confidence that this includes some of the software running various high-risk systems. I know with overwhelming confidence that multiple powerful technology actors consider destroying the enemy’s information infrastructure more important than keeping their own information infrastructure intact. They seem to be able to bend whatever legal restrictions could apply to their activities, too. I know for a fact that communication networks, including some high-risk ones, run on a mix of different protocol generations and have fragmented technical management that makes it impossible to learn the current policies, let alone change them or perform a coordinated test of a rare condition. I know as a fact that a random problem or maybe a single-point attack (I would bet $30 vs $10 in favour of lack of malicious intent if I believed in a way to resolve such a bet) took down a large part of the USA power grid for multiple days. I know as a fact that people do deploy automated decision-making systems without proper vetting. I know as a fact that other people spend effort trying to make these automated systems deviate from the original goals and sometimes succeed.

            I have a belief (above 1:1 confidence, hard to evaluate the real odds because it is a combination of a lot of small details told by different people about continuous processes and logistics chains) that there are industrial processes where interruption can be a real blow for the entire global manufacturing system.

            My sincere opinion is that adding AI to that picture can only make the situation less scary. Right now I believe we have a disaster just waiting to happen (with confidence above 1:1); the probability of the disaster may be low per se, but it is an annual probability that seems to grow, and it can get additional bumps because of external events like communication-network-attack exchanges.

            Adding the notion of AI drops my confidence in the possibility of the scenario per se, and I think the annual probability also goes down — it is basically a conjunction principle.

            I also think that to do anything about AI you still need to understand all the things that software safety (and machine learning robustness studies like some of OpenAI stuff, and surveying «how to do ML wrong» that OpenAI also does) tries to learn.

            So no, I am not concerned with AI scenarios, because we run a real risk of collapsing the technological civilisation half a century or more into the past, and any mitigations for that are a prerequisite for anything to be done about AI anyway.

            I am not even sure X-risk of AI gone perfectly wrong is larger than the X-risk of a cascade crash in the hastily written control systems causing gigadeaths and setting off a downwards spiral.

  16. bintchaos says:

    According to Christof Koch the internet may already be conscious.
    Panpsychism.
    I have always thought that machine intelligence will be emergent and self-organizing. It seems to me that a good environment for this to happen would be HFT algorithms: they have the best hardware and the best software, are capable of some range of autonomous actions (trades), and already outperform humans in some domains. Add in some Deep Learning and you have a digital approximation of the organic soup that spawned human DNA.
    But since this is an emergent, evolutionary and spontaneous process, I’m not sure how it could be regulated, or informed by “safety features” or FAI laws.
    Maybe have to employ Butlerian Jihad-style laws or a Turing Policeforce.

    • rlms says:

      You are so close and yet simultaneously so far. Yes, human-level artificial intelligence will likely not be made out of code in the same way as the software we are currently using is, or take the form of an agent in the way that MIRI seem to assume. No, it will not arise out of the Internet becoming self-conscious, or slight variations on existing ideas about neural networks.

      • bintchaos says:

        Well remember that I am likely highly biased by my chosen field of study.
        Self-organizing criticality (SOC), emergence, complexity, chaos and collapse of non-equilibrium systems.

    • LCL says:

      There’s a good point here though, pointing out something that has confused me by its absence.

      It seems clear that, of potential paths leading to AI crises, a large share – I suspect the largest cluster, possibly even a majority – go through the finance industry.

      Firms engaged in proprietary trading of financial assets have a huge, obvious incentive to hire the best graduates in the field and can pay them the most. Their systems will be connected to worldwide networks from the start (no “box” to escape). Market behavior is highly dependent on human psychology, so their systems will be explicitly attempting to model and influence human psychology. Discovering favorable trades is, of course, far less profitable than creating favorable trades, so they have an incentive to give their systems real-world agency as well. And every advance will be madly rushed and kept utterly secret, for fear of loss of competitive advantage.

      This arrangement of risk factors indicates that anyone pragmatically interested in AI safety should be taking a microscope to the proprietary financial trading sector. Quite possibly our best marginal investment for reducing AI risk is something like advocating structural reform or increased transparency in that sector. But this avenue of thought seems relatively unexplored. I’ve read quite a bit of SSC and LW, and haven’t seen anyone attempting an analysis of where the potential levers would be for risk mitigation in the proprietary trading context.

      We have a bunch of smart people here – surely we’ve got someone in finance. What would structural AI risk mitigation look like in your sector?

      • albatross11 says:

        I agree that AI in a box at a university somewhere seems a lot less scary than AI being used for some important thing that incidentally gives it a lot of power. Along with financial applications, I think intelligence and military applications are likely places where some kind of AI might:

        a. Massively upset the balance of power in ways that would be scary and destabilizing and destructive.

        b. Do massive damage to humans by pursuing its goals in ways that had really ugly side-effects.

        c. End up with the owners of the AI in a position where they feel that they must keep following its guidance, even if they hate what it’s doing in many ways, because their organization has become dependent on the AI.

        All three of these have happened, more-or-less, in the world of superhuman AIs built out of humans (markets and bureaucracies and such). Superhuman AI built out of humans is probably much easier for us to understand and limit and fight than superhuman AI mostly made out of electronics, but since we’ve never seen one of those, it’s hard to be sure.

      • One piece of good news is that you can regulate the financial sector.

  17. kraynk says:

    I tend to agree. At some point we will probably need something like the Asilomar Conference in 1975 when molecular biologists and physicians as well as lawyers and journalists assembled to discuss and share with the public the biosafety concerns associated with further research activities at genetics labs, and to offer possible solutions. According to one of the organizers of Asilomar-1975, Nobel Laureate Paul Berg,

    The California meeting set standards allowing geneticists to push research to its limits without endangering public health. (Nature)

    I don’t see why a similar congress on AI research safety issues cannot be held when the computer science community feels it is time.

  18. tgb says:

    I don’t think I understand what you mean by “local” and “global” caution. In particular, I don’t see how the names apply to NASA example given. Maybe there’s another example?

  19. Vanzetti says:

    >>> And so I feel like I have to do the boring work of saying “hey, by the way, 10-20% of AI researchers believe their field will end in an ‘existential catastrophe’ for the human race, and this number is growing every year, Stephen Hawking is a pretty smart guy and he says we could all die, and Nick Bostrom is an Oxford professor and he says we could all die, and Elon Musk is Elon Musk and he says we could all die

    No, no, no!

    Scott, this is a pure Argument from Authority and you should know better than that.

    I don’t care if 100% of all AI researchers believed that AI will eat us all, no more than if they all believed in the Second Coming of Jesus. The actual AI they research, the one they do have some authority on, has nothing in common with the paperclipocalyptical visions of Rabbi Yudkowsky. The chain of pseudologic that leads to the FOOM argument has so many weak links, it is basically made of chocolate.

    • Jliw says:

      It’s not really an argument from authority though, is it? It’s saying — as I read it — something like “hey, a significant fraction of the people who ought to know are concerned about this, so maybe we should look more closely”.

      That is, if you have the reasonable heuristic “listen to experts where I am not one”, and you’re not in AI (as most of the audience isn’t), then here’s a possible disparity between attention and plausibility; at the very least, don’t dismiss it glibly because it makes a nice soundbite.

      • Vanzetti says:

        That is, if you have the reasonable heuristic “listen to experts where I am not one”

        No, it’s a bad heuristic.

        Or, rather, it is a reasonable heuristic if we define experts as people who have delivered results before. When someone has designed a couple of airplanes and they don’t crash from the skies, I’d trust her to be an expert on airplane design, even if I personally don’t fully understand how airplanes work.

        No one has ever designed anything approaching the Superhuman AI of our nightmares/fantasies. My heuristic in this case is that there are no experts.

      • nimim.k.m. says:

        Vanzetti has a point. In the late 1960s, the pinnacle of AI research was things like A* route finding for Shakey the Robot. Despite the impressive capabilities of current state-of-the-art tech, I am not sure there’s enough evidence that our ML capabilities (neural-network based or otherwise) are nearer to a general autonomous technological sentient being (or even a non-sentient paperclipmare) than to A*-like stuff (methods for objective function optimization in a fairly restricted class of domains); or that the research community developing the ML methods are experts at evaluating how near they are to GAI or paperclipness, because their expertise is in developing A*-like things, not sentience or paperclips.

        Also remember, the field of AI got started when a bunch of brilliant researchers decided to study the problem in a workshop for one summer in the 1950s. Looking at their objectives, it took us 60 years to start to get something that looks like possible significant progress on one of them:

        An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. [1]

        And even in the field of “solve problems [then] reserved for humans” that I was alluding to, most of our breakthroughs thus far have been more boring than exciting. Okay, they are exciting, but still more A*-like than GAI-like. The equivalent of fancy warehouse software in the computer vision or NLP domain, if you like. We are now able to efficiently train very general function approximators, but truly human-like use of language, abstract thought and concepts are not yet in sight.

        [1] For more, see e.g. chapter 3 of Nilsson’s book The Quest for Artificial Intelligence, http://ai.stanford.edu/%7Enilsson/QAI/qai.pdf

        • bintchaos says:

          What about AlphaGo and BetaGo?
          “Like a god”

          NYTimes: … “Last year, it was still quite humanlike when it played,” Mr. Ke said after the game. “But this year, it became like a god of Go.”… After he finishes this week’s match, he said, he would focus more on playing against human opponents, noting that the gap between humans and computers was becoming too great. He would treat the software more as a teacher, he said, to get inspiration and new ideas about moves.
          “AlphaGo is improving too fast,” he said in a news conference after the game. “AlphaGo is like a different player this year compared to last year.”

          On earlier encounters with AlphaGo:
          “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”

          • nimim.k.m. says:

            1. Watching a machine churn out prime numbers faster than any human could generate them could plausibly also be a mystical experience. Watching cellular automata (such as Conway’s) evolve is magically suggestive of biological life. We have grown used to the existence of such things now, except when we manage to tackle a more difficult domain that previously was elusive.

            2. Humans are better at flowery language and anthropomorphizing than AlphaGo is at sentient thought.

            3. On the topic of domains, AlphaGo did not build itself to be brilliant at Go. I’m terrible at Fermi estimation so I don’t know how many hours were spent creating it, but humans created it. To put it differently, it’s not qualitatively that different from Deep Blue, except that it (of course) works very differently. Another problem moved from the list “now reserved for humans” to “was reserved for humans”, still more A*-like than GAI-like.

            (And after giving it some further thought, it is also self-improving in the sense that modern machine learning algorithms learn. However, this kind of learning is not “autonomous goal-setting”; it’s the kind of learning where engineers script an algorithm to play against itself. More improving than self-improving.)
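
            (For concreteness, here is a toy sketch of that kind of scripted self-play, on a trivial take-1-to-3 counting game; everything in it, from the game to the learning rate, is an illustrative assumption and has nothing to do with how AlphaGo is actually built.)

                import random

                # Toy self-play learner for the game "remove 1-3 stones; whoever takes
                # the last stone wins". Two copies of the same value table play each
                # other; the engineers (us) script every part of the loop.

                N_START = 21
                ACTIONS = (1, 2, 3)
                value = {}  # value[pile] = estimated win chance for the player to move

                def V(pile):
                    return 0.0 if pile == 0 else value.get(pile, 0.5)

                def pick(pile, eps):
                    legal = [a for a in ACTIONS if a <= pile]
                    if random.random() < eps:
                        return random.choice(legal)
                    # greedy: leave the opponent in the worst-looking position
                    return min(legal, key=lambda a: V(pile - a))

                def self_play_episode(eps=0.2, lr=0.1):
                    pile, visited = N_START, []
                    while pile > 0:
                        visited.append(pile)
                        pile -= pick(pile, eps)
                    # the side that moved last won; walk backwards, alternating targets
                    target = 1.0
                    for s in reversed(visited):
                        value[s] = V(s) + lr * (target - V(s))
                        target = 1.0 - target

                for _ in range(20000):
                    self_play_episode()

                # piles that are multiples of 4 are theoretically lost for the mover,
                # so their learned values should drift noticeably lower than the rest
                print({p: round(V(p), 2) for p in sorted(value)})

            The point of the toy is only that the entire improvement loop is authored, line by line, by the programmer.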

          • bintchaos says:

            Humans are better at flowery language and anthropomorphizing than AlphaGo is at sentient thought.


            Oh, I don’t think AlphaGo is sentient at all.
            It is not self-aware.

          • To put it differently, it’s not qualitatively that different from Deep Blue, except it (of course) works very differently.

            The difference, as I understand it, is that Deep Blue was specialized to one task, playing chess. AlphaGo is an application of an approach to computer learning that can be applied to a wide variety of different tasks.

          • Rob Speer says:

            DavidFriedman: it’s presentist bias to find AlphaGo’s algorithm more broadly applicable and exciting than Deep Blue’s.

            Deep Blue used alpha-beta search, plus a bunch of specific engineering for evaluating chess positions. Alpha-beta search is clever and useful. It can indeed be applied to a wide variety of different tasks. And it was once considered AI. It’s not considered AI anymore because it’s so well understood.

            AlphaGo uses convolutional neural nets, plus search techniques, plus a bunch of specific engineering for evaluating go positions. Convolutional neural nets are clever and useful. They are currently considered AI. They will not be considered AI when we understand them better.
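
            (For readers who have not met it, here is a minimal sketch of plain alpha-beta pruning, the Deep Blue-era ingredient, applied to a toy game tree; the children and evaluate callables are placeholders one would supply for a real game, not anything from Deep Blue itself.)

                def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
                    kids = children(state)
                    if depth == 0 or not kids:
                        return evaluate(state)
                    if maximizing:
                        best = float("-inf")
                        for child in kids:
                            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
                            alpha = max(alpha, best)
                            if alpha >= beta:  # the minimizer will never allow this line: prune
                                break
                        return best
                    best = float("inf")
                    for child in kids:
                        best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
                        beta = min(beta, best)
                        if alpha >= beta:
                            break
                    return best

                # toy game tree: lists are internal nodes, numbers are leaf evaluations
                tree = [[3, 5, 6], [2, 9], [1, 2, 3]]
                children = lambda s: s if isinstance(s, list) else []
                evaluate = lambda s: s if not isinstance(s, list) else 0
                print(alphabeta(tree, 3, float("-inf"), float("inf"), True, children, evaluate))  # 3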

    • HeelBearCub says:

      Oh no. It’s far worse than merely argument from authority.

      And so I feel like I have to do the boring work of saying “hey, by the way, 10-20% of AI researchers believe their field will end in an ‘existential catastrophe’ for the human race

      That is a massive overstatement of what AI researchers believe, AFAIK.

      This is the kind of casual overstatement of a supporting point that Scott savages when other people do it.

      • Jliw says:

        From the two surveys of experts in the field posted before, it seems 5 to 18% do believe that; ~40-50% also appear to feel that AI safety doesn’t get enough attention or funding, and that the “intelligence explosion” hypothesis is “broadly correct” (though whether correct here means “makes sense, is plausible, and will occur” or just “makes sense and is plausible”, I’m not sure).

        So depending on how strict you are, it seems either like a minor overstatement (mainly since there’s small but real reason to believe the true number will probably be closer to 5 than 18) or not an overstatement at all (an acceptable rounding of a plausible figure).

  20. Nabil ad Dajjal says:

    As other people have said above:

    There are an effectively infinite number of unlikely ways that humanity / civilization could end. We could all be killed by AI gone wrong. We could also all be killed by a killer asteroid, or an alien attack, or by a gamma ray burst from deep space, or by a vacuum metastability event, or by nanotechnological ‘grey goo,’ or by weapons of mass destruction, etc., etc.

    If you added up all of the money we would need to spend to forestall every one of these unlikely disasters it’d add up to more than the entire world economy. It’s not enough to simply posit that something may be a threat. You need to actually demonstrate that it’s reasonably likely and that our spending money will do something about it.

    It’s charming that you think your friends at MIRI should have a bigger budget. But the rest of us have real issues to worry about, and as a society we dump enough money into boondoggles as it is.

    • Le Maistre Chat says:

      If AI risk is a thing, MIRI should still have a $0 budget. The solution to AI research producing a superhuman mind is to stop AI research, not to tell the engineers to make the AI love you enough to give you a blissful immortal existence.

      • bintchaos says:

        I think this is a cool idea for a comic–
        MIRI: Section 9 Turing Police (Section 9 from GitS, Turings from Neuromancer)
        I think you don’t have to stop the evolution of silicon hyperintelligence, just the emergence of sentience and self-interested (possibly malicious) behavior.
        Of course I guess that means theory of consciousness, which is a pretty hairy problem.
        Kevin can write it and I can illustrate it.
        As Aspies we should be able to model AI characters more effectively.
        😉

      • 6jfvkd8lu7cc says:

        I am not sure «stop AI research» can be implemented with enough precision by any feasible entity trying to implement it.

        Just banning everything looking like AI research means simultaneously banning large chunks of what Google, Amazon, Facebook, Microsoft, Apple are doing. Doesn’t sound like something easy to implement.

      • peterispaikens says:

        The friendly AI research has a $9m budget located in a few places. The general who-cares-if-it’s-friendly AI research has a budget of billions and is spread throughout the whole world, and since it has quite significant practical applications, it won’t be stopped without a global police state enforcing this.

        The question is whether someone like MIRI will manage to get results on friendliness before the other 99.9% of the industry manages to build AIs that get too powerful.

        • Le Maistre Chat says:

          Well, they won’t, because MIRI is a cult based on wishful thinking that a godless universe is conveniently designed to provide us with eternal lives of bliss so long as the Elect are the first to engineer a FOOMing silicon nous.
          If superhuman AI is a serious risk, stopping its development is surely orders of magnitude more probable than MIRI being right about the structure of a godless universe.

          (I don’t believe we live in a godless universe, but it’s one of MIRI’s fundamental priors. But AI risk can still be true if atheism is false.)

        • John Schilling says:

          The general who-cares-if-its-friendly AI research has a budget of billions

          Cite?

          Pretty much everybody who has a significant research budget for AI, has goals in mind that would be very poorly served by a not-friendly AI. They all care that the AI be “friendly”, at least in the sense that the term is being used here.

          They have different ideas about how to go about doing that than do MIRI et al, and they also have the goal that their friendly AI be useful. If it disturbs you that friendly-and-useful AI research gets more funding than the specific who-cares-if-its-useful brand, you fail Econ 101 forever.

        • Distinguish friendliness from control. All AI researchers need control.

          • John Schilling says:

            I don’t think there is a difference between control and “friendliness”, as the latter is used in this context. An AI that does not violate or perversely misinterpret the commands/utility functions given to it, is a “friendly” AI, even if the command is “Would you be so kind as to kill every member of Al Qaeda everywhere but not anyone else?”

          • An AI that does violate or perversely interpret its commands/UF is controlled if it’s nonetheless possible to restrain it, switch it off, etc. Thus, control and friendliness are different things.

          • moonfirestorm says:

            Doesn’t a controlled but unfriendly AI fail the “useful” test though?

            If your AI keeps doing the wrong things and has to be switched off, it’s probably not going to be able to do whatever you originally wanted it to do.

            There’s probably a limited subset of situations where you can carefully get it to do what you want because of exactly how it misinterprets its commands. For example, the Al-Qaeda-killing AI that kills all Al Qaeda, but never recognizes that its task is done, gradually expands its parameters for Al Qaeda, and would eventually kill all human beings if we didn’t recognize “hey it’s getting weird” and shut it down.

            I wouldn’t expect anyone to look at that AI and say “oh yeah that’s ready to ship” though.

          • Depends on whether you have to reboot it every five minutes or every five years.

    • James Miller says:

      >There are an effectively infinite number of unlikely ways that humanity / civilization could end.

      True, but AI destroying us isn’t any more unlikely than, say, what someone who understood the global situation in 1480 would have estimated the European threat to South American civilizations to be.

      • rlms says:

        Are you saying that death by AI is as inevitable as the conquest of the Aztecs?

        • James Miller says:

          Yes, conditional on something else not killing us first, and the Aztecs did have a chance at survival mainly if they got exposed to all the European diseases a few generations before any serious attempt at conquest.

          • rlms says:

            I suppose that is trivially true (in that something will almost certainly kill us, even in the best case scenario the heat death of the universe). But it seems crazy to me to say that AI-death is inevitable on a scale at all comparable to that of the Aztecs. What about friendly AI; human augmentation rendering AI unthreatening; Butlerian Jihad; or non-fatal catastrophe that sets back AI research indefinitely?

          • James Miller says:

            >I suppose that is trivially true (in that something will almost certainly kill us, even in the best case scenario the heat death of the universe).

            OK, I predict that over the next 100 years mankind will very likely go extinct (in a bad way, not by upgrading ourselves), with the most likely means being unfriendly AI, or we will face a disaster that decimates our high-tech civilization. A Butlerian Jihad could work, but our civilization can’t even coordinate on global warming or on preventing North Korea from having atomics, and a Butlerian Jihad seems like a much harder thing to do. I think human augmentation offers our best chance, but I doubt we will have enough time.

          • 6jfvkd8lu7cc says:

            Well, if you want to enforce IA instead of AI by violence, it may be that you need less coordination. Unlike global warming with thousands of active drilling locations, and hundreds of constantly moving ships, there are around twenty factories that produce cutting-edge computronium. If a large enough fraction is damaged, electronic computation cost may rise back to the level where teaching people optimal use of whatever tools there are and relying on skilled labour instead of pure capital expense advantage becomes thinkable again. Note that such a strike does not really break down the civilisation, because the critical things can run on 2000s level of computing power just fine.

          • James Miller says:

            @6jfvkd8lu7cc

            Violence would create a horrible setback for the AI safety movement. Massive numbers of people are effectively working on advancing AI, just consider everyone working to make faster computer chips, better memory, or computer games that could benefit from either. Violence against a few people would do almost nothing to slow the push towards smarter AI, but it would cause people to lower their opinion of all AI safety advocates.

          • 6jfvkd8lu7cc says:

            @ James Miller

            Re: violence: well, I am not concerned about the superintelligent AI scenario compared to mundane «ship it, and who needs competent sysadmins anyway».

            Violence against people is obviously counterproductive, but anti-cyborgization false-flag violence against top fabrication plants as infrastructure objects could do enough economic damage to change CPU economics for years, and this could possibly boost IA (even if there is backlash against anti-cyborgisation, IA might benefit from it too) and make AI more expensive.

      • Nabil ad Dajjal says:

        This is a very weird argument.

        If you’ll allow me to paraphrase:

        “People in the past were blindsided by a Black Swan event which, per the definition of a Black Swan, they couldn’t reasonably have predicted based on the information they had at the time. Therefore we should invest huge amounts of resources into any unlikely threat as anti-Black Swan insurance.”

        There’s a certain logic in looking at events like the Colombian exchange and saying that we need to make sure our society is generally more resilient against risk. Maybe something like the Amish concept of an Ordnung to protect against risky technological / economic changes combined with a spread-out and highly militarized society like the Maori to make resistance against more technologically advanced foes more effective. It’s a lot of investment to risk-proof a civilization but it’s a defensible argument.

        But you can’t risk-proof an open society. You’re going to have any number of potential threats alongside the potential opportunities with no easy way to distinguish them. There isn’t enough money in the world to protect against every possible disaster so you have to focus on the likely ones.

        • James Miller says:

          >There isn’t enough money in the world to protect against every possible disaster so you have to focus on the likely ones.

          Yes, and AI risk seems, to me at least, like one of the most likely ones. I don’t see it as a Black Swan, but rather as a suicidal march by our species directed by Moloch. Half of the argument is (a) the universe has finite resources, (b) our existence uses resources, (c) our existence would therefore be rival to a computer superintelligence that had a goal other than promoting human flourishing. The other half of the argument, which is much harder to establish/explain, is that we are on a path to create a computer superintelligence that has goals other than promoting human flourishing, and all the money that has recently gone into machine learning should cause you to be far more pessimistic now than five years ago.

          • Nabil ad Dajjal says:

            But you could replace c) with literally anything else and reach the same conclusion.

            Hyper-intelligent apes, aliens from Neptune, time-travelers from the far future, mutated giant insects, the Smooze: any and all of them could potentially rival us for access to key resources. Does that mean we should invest in safety measures against any science fiction premise?

            You need more than that to jump from AlphaGo to the end of human dominance of the world. FOOM can’t be assumed, it needs to be rationally justified.

    • vaniver says:

      If you added up all of the money we would need to spend to forestall every one of these unlikely disasters it’d add up to more than the entire world economy.

      Is this actually true? I’m curious about your work here.

      Like, trivially, if by “forestall” you mean “reduce the chance to 0,” you can’t get that in the physical universe. But this looks to me like arguing “look, there are the Russians, and the Chinese, and the Germans, and… there are an effectively infinite number of unlikely opponents who could invade the US, and it would take more than the whole US economy in order to effectively defend against them.” Actually, we seem to do pretty well with 3.3%, and could probably spend less. Is it actually the case that if we took a third of American military spending and diverted it to dealing with x-risks, you don’t think we could appreciably drop the probability of such risks occurring?

      • Nabil ad Dajjal says:

        Edit: To clarify, my statement was hyperbole. What follows is more having fun with the concept to make a point than sober cost-benefit analysis.

        According to the CIA World Factbook, the World GDP is roughly $1.19E14 in terms of purchasing power parity.

        According to Scott, an AI risk budget of $9E6 is too low by at least two orders of magnitude. So to be very conservative we could estimate that AI risk should be budgeted $0.9E9. A more liberal estimate would be to set AI risk spending equal to the amount spent by the US federal government on climate change, $3.9E10 in 2017 dollars (2014 was the latest year I could find, so I adjusted for inflation from there).

        With some simple division, we need ~130,000 equally unlikely existential risks to equal world economic output under our conservative estimate. Our liberal estimate gives us a much more manageable ~3,200 equivalent risks to get to the same point. I’d estimate about 10,000 such risks which puts us over in the liberal estimate but well under in the conservative estimate.

        So to revise my original statement, you’ll exceed the world economy under a liberal estimate of the costs of preventing unlikely existential risks. If you can keep costs under $1 billion per risk then you can probably get away with spending less than the entire world economy.
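
        (A quick sketch reproducing the division above, using the figures as given; the liberal quotient comes out nearer 3,000 than 3,200, presumably a rounding difference.)

            world_gdp_ppp = 1.19e14   # CIA World Factbook figure quoted above
            conservative  = 9e6 * 100 # $9M budget raised by two orders of magnitude
            liberal       = 3.9e10    # rough US federal climate spending, 2017 dollars

            print(world_gdp_ppp / conservative)  # ~132,000 equally-funded risks exhaust world GDP
            print(world_gdp_ppp / liberal)       # ~3,050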

        • HeelBearCub says:

          If an asteroid of a size that would end, say, all large mammalian life on the planet, was detected on a collision course to hit the earth in 20 years, would we be able to stop it? I don’t think we could.

          And investing some 10 or 100 billion more into anti-asteroid technology wouldn’t have helped.

          The problem with X-risk is that the math on how much to spend doesn’t really work out. Say there are 10 very unlikely X-risks. The possibility of death is very small due to the likelihood being very small, but if the X-risk comes to fruition, you may have needed to spend very large amounts to forestall it. You can’t take that amount, multiply by the probability it happens, and then set your spending level equal to the product. That’s essentially just wasting money.
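
          (To spell the point out with made-up numbers: a naive expected-value budget allocates a sum that buys essentially none of the mitigation when the mitigation is close to all-or-nothing.)

              p_disaster        = 1e-4    # hypothetical annual probability of one X-risk
              cost_to_forestall = 100e9   # hypothetical cost of a mitigation that actually works

              naive_budget = p_disaster * cost_to_forestall
              print(naive_budget)                      # $10,000,000
              print(naive_budget / cost_to_forestall)  # 0.0001 of what the mitigation needs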

          • Nabil ad Dajjal says:

            I agree, my math above is very silly and not meant to be taken too seriously.

            The point is more to draw attention to how quickly spending a billion here and a billion there adds up to real money.

            It’s easy to say that too little is being spent on your pet project, but from the outside view you’re competing with a million other pet projects of roughly equal seriousness. You need to sell the project on its own merits.

          • vaniver says:

            If an asteroid of a size that would end, say, all large mammalian life on the planet, was detected on a collision course to hit the earth in 20 years, would we be able to stop it? I don’t think we could.

            ? Apophis was discovered in 2004, and seems unlikely to hit the Earth but likely to be close (as I recall, in ~2029 we’ll learn if it is on a collision course, but it wouldn’t hit until years later). The Russians have been talking about using it as a testbed for planetary defense since 2008, and the last chatter I saw about the plan was in 2016. It seems likely that if the odds were higher, the plan would be in place by now.

            This isn’t the sort of thing we can do on a moment’s notice yet–see this for discussion–but definitely is something we could do given 5 years notice, and almost certainly something we could do given 20.

          • John Schilling says:

            Apophis was not an asteroid that would end all large mammalian life on this planet. It is about five orders of magnitude too small for that. Using it as a “testbed for planetary defense”, at least where existential threats are concerned, is like sinking a rowboat and saying it is a “testbed for anti-battleship weaponry”.

        • rlms says:

          I think 10,000 risks is way too high. I can think of ~10 but obviously there could be some I don’t know about, so say 100. Given that potential existential risks from climate change are a very small proportion of all the problems it could cause, I think your conservative estimate is way too high. Taking the conservative estimate of $900 million suggests that existential climate change deserves ~2% of total climate change funding (although that figure will be a little high because non-US spending means the total figure is higher) which sounds about right to me.

          Anyway, taking the figure of 100 risks gives 0.01% of world GDP. Personally, I think that $9 million is more like one order of magnitude too low, which reduces the figure further. Also, as you increase the number of risks under consideration, the amount of work shared by researchers grows so you can reduce funding.

        • vaniver says:

          I’d estimate about 10,000 such risks which puts us over in the liberal estimate but well under in the conservative estimate.

          Could you list 30?

  21. Another Throw says:

    We have known since basically forever that bad engineering has the potential to kill tons of people. And when we allow bad engineering to accumulate over the years inartificially, you end up with an even bigger mess when it goes bad.

    I fail to see how the entire subject of “AI risk” is novel. The solution, then as now, is to ensure people are adhering to sound engineering practices. We know basically every program written just plain sucks, and that the entire infrastructure is a cobbled-together mishmash of bad ideas that makes Medieval London look meticulously planned. The apocalyptic hysteria does not add anything to the discussion. Sometime between now and Internet of Things Cybernetic Brain Implants there will be a Johnstown. Engineering best practices will wash away Silicon Valley’s Move Fast And Break Things ethos. (I find it deeply ironic, by the way, that the people everyone likes to name-drop as believing in AI risk have a whole schtick of using the Silicon Valley spirit, and ignoring engineering best practices, to disrupt established industries where basically everyone agrees getting the engineering right is really fucking important.)

    And this is even before diving into the question of probability.

    Look, most astronomers were concerned about extinction-level asteroid impacts. But before spending trillions of dollars building nuclear armadas and ablative asteroid-deflection lasers, they counted the craters on the moon and ballparked a baseline probability for an extinction event. Between that and the existence of the Chicxulub crater, we weren’t able to dismiss the concern entirely, but spending a few million a year on telescope time to map near-earth asteroids was entirely reasonable. And after doing so it turns out there really isn’t too much to worry about.
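
    (A cartoon of the count-the-craters move, with placeholder numbers rather than the astronomers’ actual figures: estimate a rate from a count over a time span, then turn it into a per-century probability.)

        import math

        impacts_in_record = 4      # hypothetical count of extinction-scale impacts
        record_span_years = 4.0e8  # hypothetical length of the cratering record

        rate_per_year = impacts_in_record / record_span_years
        p_per_century = 1 - math.exp(-rate_per_year * 100)  # Poisson: P(at least one)
        print(f"~{p_per_century:.1e} per century under these made-up inputs")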

    Just because a lot of people in the field are concerned about their asteroids before counting their craters does not mean we need to divert huge amounts of resources for their pet project.

    • kokotajlod@gmail.com says:

      “Before counting their craters”

      What would counting the craters look like in this case? It would look like exactly what they are doing–trying to think about possible AI failure modes, trying to think about ways to solve them, and then trying to think about problems with the proposed solutions, so that we can get an overall sense of how hard the problem is.

  22. grifmoney says:

    It’s entirely unsurprising that AI risk gets this little attention when the dangers are so far in the future and appreciating them requires crossing a massive inferential gap. In the current memetic environment it has no chance of going viral, and if it doesn’t obviously favor the wealthy, then nobody will lobby for it or give it airtime.

  23. Luke the CIA Stooge says:

    As someone who has made the

    What if the real superintelligent AI was capitalism?

    case before on this forum (maybe even the first to do so here; don’t quote me on that), I just wanted to say that Scott is entirely correct and that the argument does not say anything about the level of threat a traditionally conceived superintelligent AI would pose, nor about the likelihood of one arising, nor about the amount of funding/attention we should give to AI/AI safety research (I agree with Scott that it should be more).
    Sorry if the argument implied otherwise, or wound up supplying ammunition to the wrong side of a bravery debate.

    The point of thinking of capitalism as a superintelligent AI, or a network of superintelligent AIs (in the form of firms), is to suss out why (or why not) a computer-based superintelligent AI would be unique. (It’s possible that thinking along these lines could produce an answer to that question of AI safety, but I don’t know of it having done so.)

    Now, for the curious, here’s the argument:
    It would seem several forms of superintelligent AI already exist in the form of large institutions (militaries, governments, corporations, etc.) and distributed networks (capitalism, open-source projects, distributed cultural networks, etc.), in that these are systems that, although they run on human action instead of that of circuits, are unique decision-making, action-undertaking, goal-pursuing and goal-changing systems, with the process of that decision-making being something that happens within the logic of the system itself, instead of it simply being piloted by a human intelligence or a social team of human intelligences. These systems have a logic and chain of decisions that does not match up with what any single individual within them would have done.
    And taken as a whole, these systems are often so far superior to even the most intelligent human’s capabilities that they could be considered superintelligences. (Whether or not the whole is greater than the sum of all the intelligence employed in it is irrelevant: Apple can be wasting 75% of the intelligent people it employs and still do and create things miles beyond the capability of even the most intelligent individual human. Likewise, you might think the government only makes poor decisions and produces damage, but as a system that expands its power and makes decisions to cause that damage, it plots out a detailed path of destruction with a logic and precision beyond that of mere human intelligence.)

    This thought experiment can well be interpreted as an argument for how dangerous AIs can be: if our world has already been taken over by superintelligent systems beyond our control, the possibility that a newer and better one will come along to disrupt that equilibrium and wreak havoc becomes more likely.

    But the question remains: which would be dominant? The new computer-based AI that’s born in someone’s basement, or the now-dominant superintelligences that control the world’s people and resources?
    But before you answer, consider that Google, Apple, Microsoft, Amazon and Disney were born in garages, and the government that would evolve into Napoleon’s Empire and almost conquer the world was born on a tennis court.

    • bintchaos says:

      It would seem several forms of superintelligent AI already exist in the form of large institutions (militaries, governments, corporations, etc.) and distributed networks (capitalism, open-source projects, distributed cultural networks, etc.), in that these are systems that, although they run on human action instead of that of circuits, are unique decision-making, action-undertaking, goal-pursuing and goal-changing systems, with the process of that decision-making being something that happens within the logic of the system itself, instead of it simply being piloted by a human intelligence or a social team of human intelligences.


      Sure those systems are intelligent, but are they sentient? As long as those systems don’t become actually self-aware they are incapable of malicious selfish directed action. Maybe we have Turing Police like in Neuromancer that just test for and stamp out sentience.

      • nimim.k.m. says:

        Granted, whether or not they can be counted as localized foci of intelligence or (quite unlikely) as sentient beings, many people would agree that they are capable of self-improving action that can also produce significant detrimental effects on society and our environment, even uncontrollably. Collectively these “intelligences” could be seen as another face of the Moloch.

        However, thus far they have not managed to create a hard take-off, at least not the kind that has been immediately catastrophic to the human condition, and they are engaged in competition with each other over the scarce resource of human neuronium. (Corporation-creatures vs. other corporation-creatures, corporation-creatures vs. state-creatures, etc. Within this narrative, the hope of humanity is that mankind can manage to play them against each other — or make them coordinate — for our own benefit.)

        • bintchaos says:

          Oh…so an engineered competition cooperation paradigm?

        • 6jfvkd8lu7cc says:

          Well, some of them are known to work basically on making sure the quality of neuronium they get doesn’t affect their safety. Google really wants their monopoly to be based on having larger datacenters, and they seem to let data leak hinting that after some relatively easy-to-clear threshold the outcomes don’t depend on the process they use to hire…

      • 6jfvkd8lu7cc says:

        The existence of the notion of high treason, and the existence of subdivisions like Boeing Global Strategy, do hint that megaorganisations have a notion of self and a willingness to protect it.

        Also, if some entity controls too much power, it doesn’t need a notion of self to cause damage…

      • peterispaikens says:

        Self-awareness or sentience is not required for an AI-created apocalypse; only power (often obtained by self-improvement) and sufficient planning are.

        For example, the hypothetical paperclip-maximizer doesn’t require self-awareness or sentience in any way whatsoever to destroy humanity as a side effect of achieving its goal.

      • albatross11 says:

        bintchaos:

        I don’t think you need sentience to get selfish directed action. Evolution manages something like this (albeit usually over very long timespans). And while it seems unlikely at this point that biological evolution can wipe us out in any straightforward way, I don’t think that’s generally been true of humans. A sufficiently nasty plague might have managed to wipe us out back when we had command of fire and pretty nice stone tools. Essentially, a non-sentient superhuman intelligence decides to convert all that tasty human biomass into paperclip-shaped virus particles.

        I think we expect sentience and passing the Turing test because that’s how a lot of SF portrayals of AI work. But I don’t think that’s especially important for AI to be useful or powerful or dangerous.

        • Joe says:

          On the other hand, the question of whether AIs will be sentient is incredibly relevant to the broader question “how big a deal is AI risk?”, since it bears on whether they are morally valuable. A scenario in which humans are replaced by AIs looks terrible if the AIs are mindless optimisation processes, but tolerable if AIs are sentient creatures like us.

          • Luke the CIA Stooge says:

            Maybe this is my moral nihilism shining through, but I don’t think it matters how much “objective” moral value an AI has.
            Even if it’s an angelic transcendent being composed of hundreds of billions of conscious nodes, each more beautiful and morally worthy than the best human that’s ever lived, it’s still more self-interestedly rational for me to kill it than let it kill/enslave me (absent other benefits from letting it win). And since moral systems exist to foster cooperation and benefits for those bound by them, any moral system that’s doing its job would protect humanity’s right to defend itself.
            But then maybe my skepticism has already led me to reject morality, and I’m just running on political theory.

          • Joe says:

            @Luke the CIA Stooge

            You could take that stance, but I don’t see why it would lead to valuing humans over sentient AIs, rather than just valuing yourself over all other beings in general.

          • Luke the CIA Stooge says:

            The set of all humans represents a Schelling point for moral systems that want to allow for cooperation and mutual benefits between everyone you can trade with in the 21st century. Presumably centuries ago your moral system would only comprise an ingroup like Christendom, or your kingdom or your race (although presumably you’d still have a moral code for dealing with captured slaves and Saracens you’d encounter on your travels), since pro-social behavior is just easier to code into morality than having to deal with the lag of system 2 rational thinking.
            You’re absolutely right that given a choice between 1 million humans you don’t know vs. you and your ingroup, the rational (actual preference) choice for most people would be themselves and their ingroup (although some would have trouble overriding their socially programmed moral instinct).
            The thing with the angelic AI is that most moral systems have no reason to program in any preference for it, and very strong reasons to program in a KILL exception for it.

            Think of the Rick and Morty episode “Mortynight Run”. Morty kills the transcendent gas being Fart (just go with it) to save carbon-based life, despite the fact that it’s established that carbon-based life is a disease that will wipe out the exponentially more morally worthy higher lifeforms. And the conflict isn’t even the moral conflict; it’s Morty being sad that he has to kill his friend and that he killed so many (ingroup) carbon-based lifeforms in the attempt to save Fart for nothing.
            Morality is how you participate with those you can gain a cooperative benefit from, full stop. The reason we mourn an ingroup killing an outgroup isn’t because we think that outgroup has moral value and the ingroup should have sacrificed itself to save it; it’s that we think the ingroup was mistaken in their assessment of the outgroup and could have cooperated.
            If the Nazis had been factually correct and the Jews and “unfit” did pose an existential threat to civilization, such that civilization collapsed because the Nazis lost, we wouldn’t be looking back at the ruins of a golden age and thinking of the guards at Auschwitz as evil; we’d think of them like the allied bombers dropping bombs on factories in civilian neighborhoods, a dirty job but necessary and ultimately heroic.
            No moral system that’s ever existed says, hey, maybe Sauron is God, the orcs are the master race and Gondor should just allow itself to be killed to speed the just world into existence, because at base morality is based on self-interest.
            Even Christianity strongly implies that God will forgive even the worst sinner, so that it does not have to say “OK witch, surrender yourself to God so that he may condemn you to hell-fire forever, because it’s the right thing to do.”
            The self-sacrificial part of morality assumes that you get something out of the sacrifice: sacrifice on the part of others, God’s blessing, the furtherance of the [insert terminal value that you actually value more than yourself: your children, the love of your life, transcendent beauty, the socialist state (sarc)].

          • bintchaos says:

            oh!
            I purely love Rick and Morty.
            Thnx Luke.
            and sure, maybe God really is Sauron.
            https://www.youtube.com/watch?v=Q1_HfhtB5eo

          • James Miller says:

            @Luke the CIA Stooge

            >No moral system that’s ever existed says, hey, maybe Sauron is God, the orcs are the master race and Gondor should just allow itself to be killed to speed the just world into existence, because at base morality is based on self-interest.

            Actually, the thesis of Douglas Murray is that many of Western Europe’s liberal elites think their civilization is unfit to survive and they should welcome population replacement. Here is Sam Harris’s interview with Murray.

          • Joe says:

            @Luke the CIA Stooge

            ‘The set of all humans’ might seem like a reasonable Schelling point today, but as many have suggested before, as robots become more and more capable we may expand our ‘circle of empathy’ to include them too. There might well be a point at which it just seems so intuitively obvious that robots count, and only a bigot could think otherwise.

            You also mention cooperation for mutual benefit — but again this can apply perfectly well to AIs as well as humans.

          • Luke the CIA Stooge says:

            @Joe

            It’s totally possible AIs could be beneficial and become part of the valued ingroup, but my point is that their moral value will be determined by the amount of benefit they produce, not the other way around. If they are dangerous existential threats, it won’t matter that from some hypothetical God’s-eye view they have more moral worth than us; we’ll wipe them out with extreme prejudice (if we can).
            And if they’re able to persuade people that they are of greater moral worth while remaining a threat, this won’t be regarded as some revelation that “oh, humanity’s time is done and we should be honored to surrender to such a successor,” but as a new, horrifying weapon the enemy has produced to attack us mentally, and counter-measures will be developed.

            The difference between a fellow creature of moral worth and a Lovecraftian horror that needs to be purged from existence is whether it’s a benefit or a hazard.

            If you can’t tell, I’m rather a moral nihilist, in that I believe moral thinking doesn’t really exist, and what we call morality can be broken down into an easy shorthand for what’s really rational self-interest, a hypocritical status game we play with each other, or a series of dangerous delusions we use to protect our sense of self-importance.

    • albatross11 says:

      How about the new superhuman-in-its-domain-but-not-sentient AI that’s being used by the NSA to distill knowledge out of a massive stream of intercepted information and communications graphs? Or the new superhuman-not-sentient AI being used by Goldman to plan out medium-term investment strategies?

      Those would be amplifications of existing superhuman-not-sentient intelligences (government bureaucracies and corporations). But you could imagine all kinds of disruption being caused by them, potentially even genuine disasters that their owners would never want, but that fall out of the combination of what the AI can do and the institutional incentives within the government agency/investment bank.

      • bintchaos says:

        I don’t think that’s especially important for AI to be useful or powerful or dangerous.


        I don’t think that’s right… I mean, you said “owners”.
        I think the ways disasters might happen are error (flaw in the code or algos) or directed malice (i.e. the programmer or “owner” created a deliberate moral or malicious flaw).
        I think a sentient silicon intelligence would be autonomous, self-replicating, self-correcting and self-interested.
        Koch talks about different flavors of consciousness… for example dogs, because he is a dog person.
        A dog can’t pass the mirror test for self-identification. Koch hypothesizes that self-identification for dogs is smell, smelling poop more specifically: dogs smell other dogs’ poop far more than they smell their own. But dogs have been bred to be cooperative with humans.
        I’m a horse person. When I ride in the indoor dressage arena with the big mirrors along the wall, my horse doesn’t recognize himself. He has almost 20 times my body mass; he could certainly refuse to be ridden if he didn’t want to be. Horses recognize each other by “changing breath”, and he changes breath with me too: we breathe in each other’s nostrils.
        But again, horses have been bred for centuries to cooperate with humans, not to compete with them.
        So my Turing Police could cap an emergent AI that achieved sentience in the “wild”…but a Friendly AI could be “bred”/”trained” to cooperate with humans.
        My most beloved example of this is the generation ship’s quantum computer in Aurora, which achieves sentience on a voyage of a hundred years…

        “We now think that love is the giving of attention”– ship, Aurora

  24. Victoria Krakovna says:

    My issue with using the amount of AI safety funding as a measure of societal sanity with regards to addressing AI risk is that funding is by far not the only bottleneck. There is a shortage of qualified and interested talent and a shortage of sufficiently well-defined problems to work on (where ‘qualified’ means some combination of technical rigor, research ability, machine learning experience, security mindset, and tolerance for confusion and career risk).

    There is also a chicken-and-egg problem where these bottlenecks feed into each other: not enough well-defined problems -> not enough qualified people going into the field, not enough funding -> not enough grants for grad students going into the field, not enough qualified people -> not enough people working on defining the problems, not enough concrete research proposals -> not enough funding, etc. This limits the rate of growth of the AI safety field, and throwing 1 billion dollars of funding at it would not solve the issue.

    A similar issue also comes up in your previous post on AI timelines, which points out that 14% of AI researchers expect AGI to be “soon, superintelligent and hostile”, but only 5% of them consider long-term AI safety to be “among the most important problems in the field”. This is not taking into account that problems are considered important not only for their potential impact but also for their tractability (e.g. time travel is not considered one of the most important problems in physics, even though it would be a really big deal if solved). A significant fraction of AI researchers I’ve spoken to think safety is really important, but don’t feel able to work on the problems because they are risk-averse and not sure where to begin, and would be much more likely to work on safety problems if they were more well-defined. I think this goes back to the chicken-and-egg problem rather than AI researchers being misguided.

  25. eqdw says:

    Interesting data point:

    I’ve never been able to bring myself to care about AI risk. I understand all the elements of it. But there’s just something that feels too absurd about it. Like, this couldn’t possibly ever actually be real, it’s too out there.

    The only thing that has ever changed my mind about this was the randomized study you did on this blog, with the various different essays, where you were trying to see which one was the most convincing.

    The specific argument that did it for me was basically: “If you aren’t scared of a _super_ human intelligence, consider the damage that has been done to humanity with regular-human intelligence”.

    Literally every tragedy that has ever been imposed by a human was caused by a _human_-level intelligence. World wars. Nazism. Communism. Crusades. Terrorism. MAD. All of that didn’t need super-human intelligence.

    The intelligence explosion hypothesis still feels absurd to me. But I can easily get behind a “regular-intelligence AI is horrifically dangerous, in the wrong hands” argument, and I figure that’s close enough to get me on board.

    • Rob Speer says:

      Lots of computer programs are dangerous in the wrong hands, and there are a lot of security flaws that can put them in the wrong hands right now. If preventing AI risk means we should prevent out-of-bounds memory accesses, promote programming languages without null pointers, and make it easier to keep deployed software up to date with security patches, then I am all for it.

      There is a reasonable amount of funding for the industry that does this, but indeed there should be a lot more. So I guess I agree with Scott in one regard.

      If preventing AI risk means we should examine the unintended consequences of currently-existing decision-making processes, some of which are considered AI, then I support that too. The research is pretty clear on what the problems are at the moment, and what’s required to fix them is a combination of data hygiene and public policy.

      But it should be clear that MIRI doesn’t do any of this. Their mission is so constrained by sci-fi that they rule out things that are possible. Scott ignores most of the people who make computing safer and promotes MIRI, so I think his priorities are warped by loyalty to his in-group.

    • gbdub says:

      consider the damage that has been done to humanity with regular-human intelligence

      Okay, let’s. Lots of death, unpleasantness, nastiness. But you know what has never happened? The extinction of humanity. No one has even managed to politically control the entire known world.

      Hitler had the best minds and resources of a modern state at his command, and couldn’t even manage to kill all the Jews in Europe. How does a piece of software in a lab in Mountain View turn the solar system into paper clips?

      Keep in mind that whenever this UFAI emerges, people aren’t just going to give up and walk to the nearest processing station for carbon extraction. They are going to fight it. So it must not be merely smarter than a human, it must be smarter than all humans working to stop it, who will have at their disposal the same technology and resources the AI does, including not-quite-sentient but very powerful AIs of their own.

      • James Miller says:

        “Keep in mind that whenever this UFAI emerges, people aren’t just going to give up and walk to the nearest processing station for carbon extraction. ”

        Also keep in mind that this UFAI is going to recognize this and plan accordingly. The first step, after it no longer needs humans, could be to release a few plagues on mankind. If it takes X microbiologists to exterminate mankind, the UFAI would only have to be as capable as these X people.

        • gbdub says:

          Also keep in mind that this UFAI is going to recognize this and plan accordingly. The first step, after it no longer needs humans

          But getting to that point is the hard part! “Well, once it perfectly models humanity, plays it like a fiddle, and gets itself access to self sufficient power and bioweapons, it will be totally easy…”

          Again, you’re assuming this thing basically does all of its figuring out of how to take over the world perfectly, on the first try, with nobody noticing something’s up, and then everything going precisely to plan. Super intelligence does not equal omniscience.

        • 6jfvkd8lu7cc says:

          It also needs to gain knowledge about which 5% of published results are flukes, and also about the obvious things that took us only a million human-years to figure out and that are communicated only in person.

          We won’t write an AI that just looks for “the optimal solution”; it will be asked to choose the optimal parameters of an action in a huge but specified parameter space (the second task is easier and easier to define, and it looks like all industrial research and most academic research is in this setting).

          The assumption in all the scenarios I see discussed is that we fail hard at limiting the action space, yet the AI still gets enough resources to solve the harder optimisation problem in the hugely larger space; that we fail at teaching the AI to understand the tacit knowledge behind its limitations, but perfectly succeed at teaching the same AI to extract tacit knowledge in areas we have never optimised it for. Also, we accidentally succeed at making it able to expand over a significantly different computing substrate (data locality, connection latency, link throughput…).

          Also, all this happens before we solve a simpler problem (that does get industrial funding right now) of machine learning the allowed set of network communications of a specific high-risk subnetwork and ensuring that the exceptions can only be requested with physical presence and only by people who understand the security model.

      • vaniver says:

        But you know what has never happened? The extinction of humanity

        Survivorship bias.

      • acrimonymous says:

        The end of the Holocaust was, in fact, simply a positive externality of the end of the general war. This is not insignificant. Your assumption is of a Red Dawn scenario rather than an Invasion of the Body Snatchers scenario.

        • gbdub says:

          I’m not sure I follow. Hitler had a goal that required political control of all of Europe. He failed, because we stopped him (even if our immediate concern was ending the military/political threat, rather than stopping the Jew killing specifically). A computer program that wants to turn the world into paperclips is going to need complete control of the entire world, at both a scale and a level of dominance never even remotely approached by people, despite many would-be global dictators trying throughout recorded history.

    • albatross11 says:

      Actually, most of the really nasty stuff was done by institutions made of humans, but with their own internal logic and “thinking” type processes. Hitler or Stalin or Mao by themselves couldn’t have murdered even a thousand people before being stopped.

      • thevoiceofthevoid says:

        Of course Hitler couldn’t murder a thousand people by himself, which is why he persuaded and recruited other people to help him. An AI could do the same if it were sufficiently skilled at persuasion and manipulation (which I think is traditionally taken as a given in “superintelligence” discussions).

        • gbdub says:

          But then you’re just fighting a human war against an army with an unusually smart general. That’s a hard problem, but not a fundamentally different one from what we already have experience with.

  26. Iain says:

    From the $9M link:

    Throughout, I am including the budgets of organisations who are explicitly working to reduce existential risk from machine superintelligence. It does not include work outside the AI Safety community, on areas like verification and control, that might prove relevant. This kind of work, which happens in mainstream computer science research, is much harder to assess for relevance and to get budget data for.

    This is, to my mind, a pretty glaring weakness in the argument Scott is trying to make.

    Looking at the list of smart people who are concerned about AI risk, how many actually think that the MIRI-style “AI Safety community” has the correct approach? Elon Musk’s net worth is $16.7B. He could fund the entire AI Safety community using change from his couch cushions, if he thought it would be worthwhile. But he’s not. Similarly, while a large number of AI researchers may be willing to sign on to some version of AI risk, I would expect that the number who think that the fate of civilization rests on MIRI’s work is significantly lower.

    The sorts of “verification and control” research that are being deliberately set aside here strike me as far more promising avenues of research. MIRI’s emphasis on formal logic is better suited to older conceptions of AI; the modern emphasis on neural networks doesn’t play particularly nicely with formal logic.

    To be fair, it seems like maybe MIRI has taken this into account; for example, this announcement feels like a clear step in the right direction. In particular, I welcome the focus on “task-driven” AI. That said, I would be quite surprised to learn that only $9M/year is spent on these sorts of research questions.

    • kokotajlod@gmail.com says:

      Good point. Though didn’t Elon Musk create OpenAI with the explicit purpose of making AI safe? He may have the wrong ideas about the solution to the problem, but he does have ideas and he is putting money behind them.

      • Iain says:

        Apparently I have not been paying enough attention to OpenAI. OpenAI had $1B worth of commitments as of 2015, which to my mind completely invalidates Scott’s argument. When there are three orders of magnitude more money coming down the pipe, it seems odd to complain that the money hasn’t been spent yet.

        • kokotajlod@gmail.com says:

          It lessens the force of the argument, but I think “completely invalidates” is going too far. For one thing, most of OpenAI is just trying to build AI, not trying to make it safe. So only some fraction of that $1B is for safety. Secondly, that money is not an annual budget; other risks and problems get way more than $1B annually. (For example: pretty much every government & military program.) Finally, I think that AI risk is similarly serious to, e.g., preventing nuclear war and deserves attention accordingly, and preventing nuclear war deserves way more than $1B worth of funding each year.

          • James Miller says:

            Yes, would a well-funded open source project to weaponize smallpox make you more or less concerned with the small amount of money going into preventing bioterrorism?

          • nimim.k.m. says:

            Yes, would a well-funded open source project to weaponize smallpox

            That’s an incorrect and unfair metaphor.

            What’s your opinion on a well-funded open source project involving mainstream scientists to create artificial bacteria for the betterment of humanity [1] (scientists who are aware of the safety problems), versus a minority splinter group who think they can save the world from being consumed by artificial-bacteria grey goo or superpowerful bacteria by studying what kind of formal decision theory the bacteria should employ?

            [1] edit. Okay, probably more like “for the commercial interests of funders”, but for a moment I’ll accept that capitalism can create material goods and increased well-being.

          • James Miller says:

            @nimim.k.m. My metaphor probably is unfair because it implied bad motives on the part of the OpenAI people. The first would be better, unless you think that, regardless of the scientists’ intent, once enough progress has been made creating sufficiently advanced artificial bacteria, the bacteria probably eat their lightcone.

    • vaniver says:

      Elon Musk’s net worth is $16.7B. He could fund the entire AI Safety community using change from his couch cushions, if he thought it would be worthwhile. But he’s not.

      Not only does OpenAI have a billion dollars pledged to it, it’s not clear to me that you can make airtight arguments from other people’s optimality. Like, Elon Musk gets convinced that AI risk could derail the future (even if we go to Mars first), and it becomes his third priority, after rockets and cars. (And then a few months later he adds another priority, of a tunnel-making company; I don’t know whether he spends more or less time on that than on OpenAI.)

      To be fair, it seems like maybe MIRI has taken this into account; for example, this announcement feels like a clear step in the right direction.

      As it turns out, both Jessica and Patrick aren’t working on that research direction anymore, with a postmortem here.

      In particular, I welcome the focus on “task-driven” AI.

      This is still a major focus of MIRI’s work.

      That said, I would be quite surprised to learn that only $9M/year is spent on these sorts of research questions.

      What would convince you of that fact? I worry that for a lot of the people that I talk to, this sort of thing seems to be happening, where they frequently make statements that I parse as “well, people above my pay grade are handling this, right?” to which we respond “look, we’ve talked to those people too, they’re not handling this.”

      Things are better now than they were before–there are way more people working full time on solving these technical problems–but I don’t buy arguments that this is the optimal amount of effort.

      • Iain says:

        Not only does OpenAI have a billion dollars pledged to it, it’s not clear to me that you can make airtight arguments from other people’s optimality.

        I agree that the argument from other people’s optimality is not tight. My point was simply that it is shady to gesture at all the people in the AI field who agree with you on one hand, and then ignore the ways in which they clearly do not find your arguments compelling. (Also, as mentioned above, I had underestimated the extent of Musk’s investment in OpenAI.)

        As it turns out, both Jessica and Patrick aren’t working on that research direction anymore, with a postmortem here.

        That is unfortunate. Task-driven machine learning is where modern AI research is happening; if MIRI wants to produce meaningful contributions to AI safety, that’s where they will have to happen.

        What would convince you of that fact? I worry that for a lot of the people that I talk to, this sort of thing seems to be happening, where they frequently make statements that I parse as “well, people above my pay grade are handling this, right?” to which we respond “look, we’ve talked to those people too, they’re not handling this.”

        The goals in that link are not massively different from problems that every AI researcher is tackling. You don’t have to be concerned about FOOM to want inductive ambiguity identification or informed oversight, and lots of people are working on things in that area. It may not be under the auspices of AI Safety, but I’m not convinced that’s a problem.

        • vaniver says:

          It may not be under the auspices of AI Safety, but I’m not convinced that’s a problem.

          What would it take to convince you that this is a problem?

          Specifically, it seems likely to me that there will be a way to do something unsafely and a way to do it safely, and if we’ve already figured out what it means to do it safely, we have the possibility to develop the latter before the former. If we don’t know what it means to do it safely, we probably will end up doing the first option by accident, because there are many more ways to not comply with proper procedures than there are to comply. But if we take both safety and capability one step at a time, we may hit a point where many steps of capability are cheaper to acquire than the same number of steps of safety.

          (As an illustrative example, current capability research is mostly limited by the number of grad students working on projects and the compute they have access to. But an advance in meta-learning could mean that compute can take the role of grad students, and then we get a significant increase in speed because we’ve relaxed one of the core constraints. If meta-learning is more useful for developing capability than it is for developing safety, we get a divergence even if we had been keeping pace beforehand.)

          To return to the primary point, the issue isn’t whether or not the mission statement is about safety; the issue is more whether or not the research is about current capability or about getting ahead of current capability. Figuring out how to use yesterday’s tech well is helpful, but it seems likely we need to get a major head start on figuring out how to use next decade’s tech well.

          • 6jfvkd8lu7cc says:

            Research on software safety aims to learn things that do not depend much on which generation of tools we are using. If we learn how to cheaply specify and enforce that only a limited set of actions is allowed, we want to be able to bolt that on top of a black-box decision-making tool. This is the work I want to see done, because we already need it and will only need it more in the future, as long as we use automated information processing.

  27. Sniffnoy says:

    It’s not clear to me what is “local” or “global” about your “local caution” and “global caution”. If anything it seems to me that it would be reversed. But really I don’t think it’s a local-vs-global distinction at all. Rather it seems to be a question of epistemic caution vs, well, caution. Maybe you could call it “instrumental caution”, though that sounds wrong, and also it might just be risk-aversion. But I don’t think local-vs-global is the right distinction here.

  28. TheRadicalModerate says:

    I’m more inclined to partition these types of caution into “precautionary action” and “precautionary hand-wringing”.

    You take precautionary action when you’re operating in an environment of well-quantified risk. When you’ve run all the failure trees you can think of on the nuclear power plant and it’s obvious that 5 components account for 75% of your catastrophic failure modes, you redesign the 5 components. Same thing with your nuclear launch authority system, or your public health surveillance system. But precautionary action requires not only known unknowns, but the technology to address them.

    You engage in precautionary hand-wringing in an environment of uncertainty (I’m using this word in the “unknown unknowns” sense). You’re pretty sure there’s a problem, but you can’t adequately describe it, or you can adequately describe the problem but lack the technology to address it. So you wring your hands.

    But hand-wringing isn’t nearly as feckless as it sounds. Hand-wringing is what societies do to surface problems and organize to solve them. After a while, we get consensus on what the problem really is, what steps we can take to mitigate it now, and what technologies we need to develop to turn it into a non-problem in the future.

    But it’s important not to confuse whether we’re in the hand-wringing stage or the action stage. Demanding action when there’s none to be taken is a natural response to hand-wringing, but premature action can choke off the necessary deliberation that comes with hand-wringing. When you take action, you pretty much have to choose a problem to work on. If it’s the wrong problem, recovering from the mistake can be very costly.

    I think the existential risk of AI is still in the early hand-wringing stage. It is entirely appropriate for some of the discourse surrounding the issue to revolve around ensuring that it’s actually a real problem. To the extent that that gets the people who’ve already decided that it’s a real problem riled up and goaded into defending their position, that’s exactly what should happen.

    I think it’s a real problem. But I don’t know how to solve it. As with so many problems like this, the technological and economic trends that cause the problem are unstoppable, so worrying about how to stop them is kinda pointless. But the good news is that hand-wringing will produce all kinds of squirrelly ideas, and a few of them will be useful. Those useful ideas will lead to useful actions, which won’t make the problem go away but will push it off some, allowing still other useful ideas to whittle it down some more, until being turned into paperclips some Sunday afternoon becomes one of those nagging things we know might happen, but probably won’t.

    • albatross11 says:

      The Radical Moderate: +1. That’s a really nice distinction, and one I want to add to my mental toolkit. Precautionary action is something we could take now w.r.t. asteroid impacts, but in Victorian times, precautionary hand-wringing would have been all that was available. Not useless, but not down to something where you can try to mitigate the risk yet, either. Maybe you incentivize people to think harder about rocketry or to design bigger telescopes, but you’re not going to be building rockets that can get to an asteroid in time to divert its course in 1880, not even if you have the entire wealth of the British Empire to spend on the project.

  29. phoenixy says:

    I love this blog, I really do, it’s one of my favorite things on the Internet, I sponsor it on Patreon and I’m always so excited to see a new post, but I could really do with a lower posting frequency of screeds about how everybody is wrong about AI risk. I’ve probably read a dozen posts on AI risk here and they all sound like rehashings of Pascal’s Wager and I still think AI risk is not an important concern (at least in the sense of “existential threat to humanity” — there’s also the totally legitimate AI risk that, e.g., a self-driving car will right-hook me when I’m riding my bike). I don’t feel anyone is going to be convinced at this point who wasn’t convinced by the previous posts on the topic…

  30. Sam Reuben says:

    You know, I’m just not as scared of superintelligent AI as seems to be the trend. I’ve done some formal and informal research into AI, and the main thing I’ve learned is: AI is hard. Really, really hard. Harder than anyone expected going into it. Even basic things like vision are continuously proving to be a real challenge, to the point where the best method that Google found of teaching their AIs to read was to outsource the problem to humans via ReCaptcha. Going on that, it seems like having anything approaching an intelligent AI will be surprising enough, and I don’t think we should be worried about having one smarter than us, at least without a lot of good evidence coming up. I’ve not seen any of that good evidence, and no, beating humans at Go doesn’t count.

    But this leaves the question: why is it that so many people are so convinced that computers are the future of intelligent decision-making? And why can something which I’m comfortable with calling dumb beat humans at the games which have traditionally been the ultimate measures of intellect?

    My intuition is that it comes down to math. Math is something most humans are bad at. So bad, in fact, that we’ve had to come up with all sorts of different tools to make math easier: the abacus, the calculator, and heck, even numerical writing systems and equations are their own kinds of tools to make math something we can even do (outside of some particulars of applied calculus, which will, if you’re good enough, get you hired as a professional baseball player). The people who are best at math, the Newtons and Gausses and Einsteins of our tradition, we regard as the geniuses of the highest caliber, because they’re doing things nobody else can do. And now, of course, we have computers, which are hilariously good at math. In fact, it’s completely reasonable to say that math is all that computers do.

    So, as people who are traditionally bad at math, we’ve been confronted with these mysterious boxes which do unbelievably good math. They conduct incredibly difficult calculations with astounding speed and precision, to the point where we can rightfully rely on them to complete, in a minuscule amount of time, almost any difficult problem that previously would have taken hours or days or sometimes even months. Is it any surprise that the instinctive response is to say: “Hey, these computers are really smart”?

    The problem is that it’s turning out that maybe math isn’t the best sign of intelligence, after all. Your average smartphone can conduct a square root to a dozen or two places in the time it would take Newton to blink, and yet to say that the iPhone (with or without the 3.5mm headphone jack) is smarter than Newton is an absolute absurdity. So if this isn’t intelligence, what is?

    Well, if we want to look at Newton, it seems that his greatest contribution wasn’t so much his ability to perform certain mathematical operations, but his creation (concurrent with Leibniz) of a new kind of mathematics, calculus. That is to say, there was a certain feature or facet of reality which was out there, which people saw and experienced, and Newton translated it into a mathematical format, where that feature could then be manipulated and calculated with pen and paper. It wasn’t his operational prowess which earned him fame as a genius, but his ability to translate between reality and mathematics. This is probably where the secret of intelligence lies: in a sort of translation (which here involves mathematics, but not in the cases of non-mathematical geniuses, so I’m not going to try to give a full definition).

    Do computers do this? Are computers good at translating between reality and mathematics? Not even close. They’re godawful at it. They can only manage the kinds of translations which they’re instructed on (programmed to do), and anything outside of their expected dataset won’t go through. To be fair, not everyone invents calculus, but computers don’t even come close. The baseline would be coming up with some simple and individual metric for some aspect of one’s life, which most humans tend to do (e.g. I explain flavor balancing in cooking to myself by using musical metaphors), but computers don’t do that. They just do calculations, and do them very, very fast.

    Don’t get me wrong: I love computers, I use ’em every day (which actually ought to correlate more with hating computers), and I think they’re incredible tools. The problem with predictions of an AI is that computers don’t seem to have even primitive capabilities towards real intelligence. They just do something different. That said, AI researchers are starting to find some inroads to real intelligence through connectionist systems, like neural networks, and they’re getting better at making computers do things that are closer to what living beings do. This is still a long way out, and there are a few other humps along the way. The biggest one is, of course, that if we’re designing something to simulate a structure based on human neural architecture, the chances go way up that we’re going to end up with something about as smart as a human.

    So in short, I don’t just believe scary superintelligent AI is a minor danger, I believe it may not be a danger at all. I think it’s probably a misapprehension caused by a human analytical system that’s worked for thousands of years but is firing up a false positive on this one affair. There is a different problem, though, which is quite scary and also quite well-proven: the better-weapons threat. Computers are powerful, and we’re figuring out how to make them stronger all the time. However, the most powerful tools are ones which are extremely difficult for small-scale operators to use effectively, and are thus limited to centralized and powerful individuals and institutions. The power difference between the powerful tools and the weaker tools is turning out to be significant, and there are reasonable (but not absolute!) forecasts for that divide growing. When centralized and powerful entities gain access to power which the masses cannot compete with, then the stage is set for feudalism and horrific oppression of everyone below the ruling class. The old-school feudalism came into power via steel weaponry and horses only being available to the elites, who were able to use it to slaughter anyone who opposed them, and was only dispelled by the twin egalitarian weapons of the printing press and the gun (God made Man, but Sam Colt made him equal). Wherever printing and guns spread among the populace, feudalism fell soon after. Right now, we’re seeing powerful computer systems, both with robotics and datamining, come into their own, and both are only things which wealthy institutions and individuals can afford. If we take our prior example, this has potentially dire consequences. I’m not going to advocate Luddism, but this is a problem that at the very least needs analysis, if not immediate steps to solve. Otherwise, it could be difficult to rebel against the government if you’re under constant surveillance, a drone can take you out with no possibility of retaliation, and you can’t even pray that factory-workers will strike, because there aren’t any left. I think this is why the authors of the article were so concerned with “new warehouse management program,” and if that’s their reasoning, I’m inclined to concur.

    Sorry for the wall of text, but I hope it properly expresses my disagreement.

    • rlms says:

      See Moravec’s paradox: things like doing maths and playing chess that are hard for humans are relatively easy for computers. But things like walking, understanding language and recognising objects that are easy for humans are incredibly difficult for computers.

      Tangentially related: Google seems to get better every day. I couldn’t remember what Moravec’s paradox was called, so I found it by searching “law about artificial intelligence that says maths is easy for ai but hard for humans”.

      • Sam Reuben says:

        Ah, so it’s got a name! I didn’t know, but I should’ve guessed. Thanks for pointing it out!

    • TheRadicalModerate says:

      The problem here is that the most recent AI advances are allowing computers to do pattern recognition tasks. These aren’t mathy at all. There’s obviously math in getting a multi-level neural network to learn properly, but after that, the actual recognition is scarily biological in its behavior.

      Now: It’s certainly true that some random pattern recognizer isn’t going to conquer the world. But it should give you pause when you consider that the human brain is just a big bag of pattern recognizers, hooked together in a way that seems to be somewhat beneficial in enhancing our survival. There’s nothing to prevent somebody from writing a fairly simple program to hook various pattern recognizers together until they meet some fitness criterion. That could produce a pretty smart AI, and what makes it hard is mostly just having the computational horsepower to run the thing.
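
      For concreteness, here is a minimal sketch of that kind of wiring search: a few stub “recognizers”, a toy fitness function, and a dumb random search over how strongly to combine them. Everything in it (the stubs, the data, the search) is invented purely for illustration, not a claim about how a real system would be built.

        import random

        # Stub "recognizers": stand-ins for separately trained pattern recognizers.
        recognizers = [
            lambda x: x[0] + x[1],      # pretend "edge detector"
            lambda x: x[0] * x[1],      # pretend "texture detector"
            lambda x: max(x) - min(x),  # pretend "contrast detector"
        ]

        def fitness(weights, data):
            # How well does this wiring (one weight per recognizer) classify the toy data?
            correct = 0
            for features, label in data:
                vote = sum(w * r(features) for w, r in zip(weights, recognizers))
                correct += int((vote > 0) == label)
            return correct / len(data)

        # Made-up labelled data: label is True when the two features agree in sign.
        points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
        data = [(p, p[0] * p[1] > 0) for p in points]

        best = [random.uniform(-1, 1) for _ in recognizers]
        for _ in range(2000):  # dumb random search: perturb, keep if no worse
            candidate = [w + random.gauss(0, 0.3) for w in best]
            if fitness(candidate, data) >= fitness(best, data):
                best = candidate

        print("wiring:", [round(w, 2) for w in best],
              "fitness:", round(fitness(best, data), 2))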

      The reason I’m not freaking out about AI is that there are a lot more useful things you can do with a discrete set of dumb AIs than there are with a sorta-kinda bright one. And I’m sure that there’s some secret sauce required to avoid the AI equivalent of a seizure or just plain ol’ insanity–but I’ll bet that the secret sauce isn’t that complicated. So when the hardware costs drop into the range where human-level complexity is possible, we may easily get something approaching human-level complexity. Whether it will be able to reason like a human is another story, but it’s likely to have some odd superhuman properties.

      Again, this isn’t the scariest thing I can think of, but it’s worth paying attention to.

      • Sam Reuben says:

        I completely agree. It’s really impressive how they’re making a bunch of really good pattern-recognition programs, and all of them work quite well. This is the basis of machine-learning, and it’s what I was talking about when I mentioned neural networks. As it turns out, pattern-recognition only works properly when you’re using some kind of connectionist system of interconnected and responsive nodes (which of course makes sense if you think about what a pattern is), and so that’s what they’re working with.

        And yet, I think that the “next step” is harder than we might be giving ourselves credit for. Hooking up various pattern-recognizers under a global system is one thing; it shouldn’t be that difficult to, say, make something that can reasonably imitate the tropism-behavior of a woodlouse. Dark-good, go-to-dark. Food-good, go-to-food. Food-but-not-dark, calculate-dark-to-food-ratio, go-where-decide. You get the idea. There are many more systems at play in a human mind, not least of which is a seriously open and free self-critical framework. Humans can decide to seriously disobey a lot of our internal commands and impulses, even though it tends to be extremely difficult. This is not an option available to, say, a frog. Frogs have no choice as to whether or not they try to snap up little black objects with their tongues. You can feed one lead pellets until it dies.
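
        That tropism loop is simple enough to write down directly. A toy version (the weights and thresholds are made up) just to show how little machinery it needs:

          # Toy woodlouse: darkness and food each pull on the animal, and a simple
          # weighted comparison decides which way it moves. Weights are invented.
          def woodlouse_step(dark_left, dark_right, food_left, food_right):
              left_pull = dark_left + 1.5 * food_left    # food counts a bit more
              right_pull = dark_right + 1.5 * food_right
              if abs(left_pull - right_pull) < 0.1:
                  return "stay"
              return "go-left" if left_pull > right_pull else "go-right"

          print(woodlouse_step(dark_left=0.9, dark_right=0.1,
                               food_left=0.0, food_right=0.7))  # food wins: "go-right"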

        And you know, I’m not certain how one even begins to create this kind of self-critical framework. The most plausible option seems like totally spontaneous pattern-generation, matched with a critical system that relies on comparative analysis of conflicting data points and, most importantly, community feedback. You might have noticed that spontaneous pattern-generation sounds a lot like mental illness. The cure to that is other people.

        It’s worth adding, at this point, that simple pattern-recognition is probably more calculation than anything else, provided that we’re talking about recognizing a pattern which someone else has already found. Pattern-generation is a different kettle of fish, which has the aforementioned problem of finding a ton of false connections. I honestly do not know how pattern-generation is going in the AI field, but I assume they’re having an awful time with results curation. It’s hard enough to curate human results, and we have millions of years of evolution selecting for brain architecture that doesn’t let us think that water is the same as air. The more unfortunate kind of conspiracy theorizing is enough to prove that.

        So I think the place I come down is: it’s something that is not an issue the way the evidence is currently slanted. If someone does find a special sauce or secret ingredient to synthesize a more complete AI, then no matter whether the AI in question has an IQ of 40 or 140, I’m going to be extremely willing to re-evaluate everything I’ve said. Until then, I’m not fussing over it when we’ve got some immediate existential threats to our current form of society. I think that might put us in a similar boat in terms of our perceived threat level, even if we express it differently.

        Side note: I appreciate that you mentioned the computational load! I do anticipate that the limits of computation speed are going to become more and more of a challenge for AI tech going forward. I mean, I think everyone’s noticed that processors haven’t gotten any faster in recent years, but have just gotten more cores (sometimes). That’s no mystery: we’re running into problems with the size of electrons! Roughly speaking, that is; as I understand it, it’s the electromagnetic fields that are the issue, and that’s about as hard to work around as anything. It may not be the end-all and be-all, and as technological ingenuity has never failed to impress, I’m not staking an argument on “nobody will figure it out.” Still, it’s important, and I’m glad you brought it up.

        • TheRadicalModerate says:

          Fairly exhaustive (and possibly exhausting!) response below:

          As it turns out, pattern-recognition only works properly when you’re using some kind of connectionist system of interconnected and responsive nodes…

          You need to be careful when you use the word “connectionist”. A neural network is just a big bag of software objects (millions of them), each representing something like a dumbed-down neuron, and the links between the objects are like synapses. You can run hundreds of thousands of instances of each neuron on a single core.
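
          As a rough illustration of that picture, here is a minimal “dumbed-down neuron” object with synapse-like links, simplified far below what any real library does:

            import math, random

            class Unit:
                """A dumbed-down 'neuron': sums weighted inputs, squashes the result."""
                def __init__(self):
                    self.incoming = []        # list of (source Unit, weight) "synapses"
                    self.activation = 0.0

                def connect(self, source, weight):
                    self.incoming.append((source, weight))

                def update(self):
                    total = sum(src.activation * w for src, w in self.incoming)
                    self.activation = 1.0 / (1.0 + math.exp(-total))  # logistic squash

            # Two input units feeding one output unit.
            a, b, out = Unit(), Unit(), Unit()
            out.connect(a, random.uniform(-1, 1))
            out.connect(b, random.uniform(-1, 1))
            a.activation, b.activation = 1.0, 0.5
            out.update()
            print(round(out.activation, 3))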

          There are many more systems at play in a human mind, not least of which is a seriously open and free self-critical framework.

          I’m speculating wildly here, but I think you’ll be kinda surprised at how regular and general-purpose the human neocortex is. The stuff that makes us human is all the ancient structures, which have different architectures. Humans have human behaviors because the different hunks of neocortex interact with different old-timey structures, and then project to other hunks of neocortex in genetically pre-determined ways. After that, experience fine-tunes all of those projections.

          Here’s my grossly simplified (and citationally indefensible) model of consciousness: You’ve got a bunch of “concepts” (memories, stored in various regions, that become active when the things they’re associated with also become active) and a specialized structure that can walk through those concepts without having to fire off the sensory structures that built them in the first place. (I’m betting on the hippocampus and some thalamic stuff…) We call the act of walking through those structures “language”.

          Motor actions are simply memories with some sequencing added on (and some weird feedback stuff). Voilà! Something mind-like.

          Now, the details of that “walker” system are unknown, and hard. But they’re merely unknown, not insanely complex.

          The insanely complex part is figuring out how to hook the regions together to do useful things. That’s something that evolved in humans over hundreds of millions of years–but that’s what genetic algorithms are for. It’s not unreasonable to replicate hundreds of millions of years of evolution in a few years, especially if you have a rough map of the connectome to begin with.

          The most plausible option seems like totally spontaneous pattern-generation, matched with a critical system that relies on comparative analysis of conflicting data points and, most importantly, community feedback.

          Patterns can’t be generated; they’re learned. The trick is juxtaposing patterns in interesting ways, and that’s a question of directing attention to the patterns that are stimulated in response to the previous pattern being stimulated. Now, if a sequence of patterns becomes particularly useful, the connections that activate those patterns together get strengthened, and the association becomes stronger. If not, weaker. The details here are hugely important, but the basic mechanism is pretty dumb. Like all biological systems, it’s not the cleverness of the individual structures that generates the rich behavior, but the mind-boggling complexity that emerges from the dumb parts interacting in so many different ways.
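
          A toy version of that strengthen/weaken rule (the learning rate and decay constants are invented for illustration) might look like:

            # "Fires together, strengthens; otherwise fades."
            def update_weight(weight, pre_active, post_active, rate=0.1, decay=0.01):
                if pre_active and post_active:
                    return weight + rate * (1.0 - weight)  # useful association: strengthen
                return weight * (1.0 - decay)              # unused association: weaken

            w = 0.2
            for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]:
                w = update_weight(w, pre, post)
            print(round(w, 3))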

          If someone does find a special sauce or secret ingredient to synthesize a more complete AI, then no matter whether the AI in question has an IQ of 40 or 140, I’m going to be extremely willing to re-evaluate everything I’ve said.

          I’m less worried about a human-level AI than one that emerges from somebody running a genetic algorithm on a set of previously-developed pattern-recognizers and -sequencers, to produce something that’s highly functional but completely inhuman. The big problem with neural systems today is that nobody can tell you exactly how they’re doing what they’re doing. That’s annoying (and legally thorny, in the case of things like credit scoring systems), but not existentially threatening. Things get a lot more dicey when the thing you can’t understand is complex enough to generate goals and strategies for implementing those goals.

          I do anticipate that the limits of computation speed are going to become more and more of a challenge for AI tech going forward.

          It’s important to understand that you’re about 95% incorrect here. The reason that GPUs have become so popular in the AI field is because it’s easy to simulate your “neurons” on a massively parallel architecture–but GPUs are rapidly going to be replaced by chip sets that are optimized specifically for neural processing. (Google “IBM TrueNorth” for some idea of what’s coming.) These systems are architecturally able to do high-fan-out/fan-in simulation of neurons for fractions of the power and real estate that would be required using a Von Neumann architecture. Moore’s Law isn’t going to be a problem for neural hardware for a long, long time. So the computational restrictions could vanish fairly quickly.

          I think the thing that likely prevents fast takeoff is that most AIs will be insane and therefore highly dysfunctional. But the possibility of accidentally producing a sane-but-malignant one is non-trivial. And it’s a lot easier to write the objective function of a genetic algorithm to generate something that’s functional than it is to generate something that’s benign.

          • bintchaos says:

            I’m less worried about a human-level AI than one that emerges from somebody running a genetic algorithm on a set of previously-developed pattern-recognizers and -sequencers, to produce something that’s highly functional but completely inhuman. The big problem with neural systems today is that nobody can tell you exactly how they’re doing what they’re doing.


            Me too.
            Can we please acknowledge that we are talking about two entirely different things here?
            emergent vs. engineered AIs?

          • Sam Reuben says:

            I’m sorry to cut out a lot of what you’re saying, but the critical part is really this:

            Patterns can’t be generated; they’re learned.

            Most of our disagreement stems roughly from this: I would argue that there is something different between generating and learning a pattern, while you say that they’re the same thing. (I could go further and discuss how this is the divide between a Lockean and a Humean view of nature, but that’s beside the point.) If there is a difference, then it’s possible to have things which learn patterns and sort real-world data into them but which don’t think. If there’s no difference, then there are only things which learn patterns and sort real-world data, which we call thinking, and these have differing levels of competence and thus intelligence. I’ll present my case as thoroughly as I can, to see if you find it convincing.

            I’ll start by taking a trivial case, to try and get across the basic idea of what it means to generate a pattern in contrast to learning it. Imagine two individuals in antiquity looking up at the night sky. One of them notices that some of the stars don’t appear to be moving regularly, and calls them “wanderers,” or planets, to distinguish them. The other listens, observes, and comes to understand the same pattern. Do we say that the same exact processes were involved in the first and second individuals’ recognition of planets?

            Well, the answer would be a clear “yes,” if we believe that what the individuals are perceiving is an absolute truth of the universe. If there is a fundamental pattern within the universe which has the form: “There are two categories, one of which is star-like-objects and the other is planet-like-objects,” then all that the first person is doing is recognizing the pattern. However, Scott’s lovely piece, “The Categories Were Made For Man, Not Man For The Categories,” as well as a host of other authors from the 18th century to the present, provide pretty good arguments against there being fundamental and intrinsic absolute patterns to the universe. There are, to the contrary, some different opportunities available for interpretation. We can make an excellent case that some are better than others, but that doesn’t mean that those patterns we recognize are absolutely true.

            So, if there isn’t an absolute pattern-based reality behind everything, there needs to be some kind of step from the pre-patterned reality to our patterned understanding of it. My argument is, simply put, that this is pattern generation. In contrast, taking a pattern that someone else has generated and putting it to use is pattern learning. Compare Newton, Leibniz, and a high-school math student. All three learned calculus, meaning that they all started without the pattern of calculus in their heads and ended with it in their heads. It would be absurd to say that Newton and Leibniz did the exact same thing as the high-school math student, unless we want to say that a massive proportion of our population, on the order of billions, are capable of being Newtons and Leibnizes. Thus, we can reasonably argue that Newton and Leibniz generated the pattern of calculus, while everyone else is just learning it. This bears up against the contrast of how Newton and Leibniz learned calculus with how everyone else did it: Newton and Leibniz learned calculus by examining reality, while everyone else did it by looking at what Newton and Leibniz wrote (i.e. their patterns).

            So, then, which have we gotten computers to do? We give them a bunch of data we want them to learn to sift through, give a few test cases to judge yes-or-no based off of, tell them to give us answers on a lot more data, and then affirm or deny their results as they change at random in order to try and get closer to something which works. I would argue this is better compared to pattern learning than it is to pattern generation, because it’s already assumed out of the box that there are two separate categories which the AI has to learn. It doesn’t even have the capacity to say “wait, there are actually three categories.” It’s a particularly brute-force version of pattern learning, and in fact one which has a delightful advantage: it allows us to calculate things without having an absolute mathematical system for them. It’s basically the difference between calculus and chicken sexing: we have all the data points for calculus and can tell a person or computer how to do it mathematically, but we don’t have all the data points for chicken sexing so we have to train the human (or eventually computer) to do it without a mathematical model. But in both cases, the categories have already been decided ahead of time. There’s no discovery involved.
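
            To make the contrast concrete, here is a miniature version of that loop: the two categories are fixed in advance by the labels we supply, and the program only adjusts a few weights to match them. It is a toy perceptron on made-up data, not a claim about any production system:

              import random

              # The categories are decided ahead of time by whoever writes the labels.
              points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]
              labelled = [(p, 1 if p[0] + p[1] > 0 else 0) for p in points]

              w1, w2, bias = 0.0, 0.0, 0.0
              for _ in range(20):                       # a few passes over the labelled data
                  for (x1, x2), label in labelled:
                      guess = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
                      error = label - guess             # "affirm or deny" each answer
                      w1 += 0.1 * error * x1
                      w2 += 0.1 * error * x2
                      bias += 0.1 * error

              accuracy = sum((1 if w1 * x1 + w2 * x2 + bias > 0 else 0) == label
                             for (x1, x2), label in labelled) / len(labelled)
              print("accuracy on the pre-decided categories:", round(accuracy, 2))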

            This is not to say that people haven’t tried getting computers to simply generate their own patterns, but this has met with limited success. Take vision, for example: there’s a damn good reason why Google, who basically comprise the best learning-AI researchers on the planet, don’t use random pattern-generation for it. (It’s a slightly different note, but look up Google’s “inceptionism” for reference on machine-learning problems.) Turns out that proper and restrained pattern-generation is really, really hard, and people who don’t have the proper limiters on what they will or won’t see as a pattern come up with all kinds of invalid patterns: consider paranoia, visual schizophrenia, auditory schizophrenia, or most of the wacky-zany conspiracy theories, and that’s just getting started. If computers can properly get into random pattern-generation, it won’t just be a big deal, it’ll be the biggest deal. So far? Hasn’t happened.

            I think you know a good part of this, especially with your mention of insane AIs. Insanity, at least in this context, might be best understood as the generation of invalid patterns, sometimes caused by faulty architecture and sometimes by faulty learning. Hopefully I’ve made a good case for why the generation ought to be separated from the learning.

            Another good avenue for discussion would be theory of mind; I do believe that if we can solve the problem of generation-restriction architecture, we can effectively create a working and intelligent AI, but that we might find out that a lot of what makes it intelligent is sneakily similar to what makes us intelligent. That is to say, the same pieces of functional architecture that evolved to keep most humans from going too insane might just be very similar to what is necessary to keep an AI from going insane. Short version: if it’s irrational and bad for a human to believe everyone is out to get them, it’s irrational and bad for an AI to believe that as well, and same with obsession. Even more, if the primary defense mechanism is socialization (as I intuit it might be), then the only sane AI will be a social one – which might solve the benignity problem off the bat. But in any case, I’m interested in hearing what you have to say. If you have some time, I’d love to talk in real-time; it’s by far the best way to communicate and come to understand the other’s point of view.

      • nimim.k.m. says:

        But it should give you pause when you consider that the human brain is just a big bag of pattern recognizers, hooked together in a way that seems to be somewhat beneficial in enhancing our survival.

        And people used to view the human brain as an intricate clock mechanism. Then as a symbolic logic toolbox / proof processor. Insert the behaviorist concepts involving Pavlov’s dog experiments somewhere there. And various other things.

        I grant that our understanding of what the human brain is has been improving, a lot, but I’m reasonably sure there’s still more than mere pattern matching going on. At the very least, the pattern matchers already come with models pre-trained by aeons of biological (later, combined with cultural) evolution, the history of generations of interaction with other “pattern matchers” who had to survive in the uncertainty of physical reality. (edit. Also see RadicalModerate’s high-quality comment above. edit2. [1])

        And this is a quite separate issue from whether MIRI has the correct approach to investigating the safety of our artificial pattern-matching algorithm development. In that sense I welcome the emergence of non-MIRI-adjacent research into AI safety.

        [1] edit2. It appears that I confused some nicknames / threads and thought I was responding to someone else than TRM; I think TRM’s other comment clears some confusion and they don’t claim that “the human brain is just a collection of simple pattern matchers without history”, the position that I was arguing against.

        • TheRadicalModerate says:

          WordPress appears to have eaten my last comment to Sam, so I’ll try again here:

          My highly simplified architectural model for human-ish cognition requires the following components:

          1) A bunch of specialized sensory and motor systems. (I’m always tempted to call this “I/O”.) These are basically a set of handy neural modules that evolved for ancient organisms to get the job done, prior to the emergence of the neocortex. Brain stem and limbic system stuff. NB: I suspect that all this fairly ancient stuff has a lot more to do with us being human than our neocortices, even if they’re what make us smart.

          2) Long-term memory. I suspect that this is about 90% neocortex. Simplistically, each memory is a particular neural activation pattern with three properties:
          a) If you activate the correlates to match its pattern, you activate the memory.
          b) If you activate the memory, you activate its correlates (at least somewhat).
          c) Provide the correlates repeatedly, for long enough, and the memory will self-organize to activate better and better as it learns.

          3) An attention mechanism. Attention is an ancient, highly-conserved property of neural systems. Without attention, the organism either has to act on all sensory stimuli equally, which is way too energy-intensive, or it ignores everything and gets eaten. So allocating and optimizing attention between these two extremes is very important.

          4) A short-term memory. You need this because patterns are temporal as well as spatial, and a set of patterns that warrant attention and occur in the same order wind up creating a new, recognizable pattern (i.e., a new memory).

          5) Here’s the secret, higher-mammals-only sauce: The ability to use attention to activate memories in the absence of external stimuli, then pay further attention to the correlates of the memory to activate other memories.

          In more primitive organisms, the basic loop is:

          – Receive an external pattern of stimuli.
          – Trigger a memory that matches the pattern.
          – Decide whether to pay attention.
          – If attention is required, take some motor action based on the pattern.
          – Learn, to improve the pattern matching.

          But in close-to-human organisms, the same thing happens, except that the attention mechanism can also run in reverse when it doesn’t have anything better to do. Attention can be used to activate a memory, which activates the correlates of its pattern. It can then pay attention to one or more of those correlates, activating the full memory associated with the correlate, which activates other correlates, and so on.
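
          (A minimal toy sketch of the two loops, purely to make the contrast concrete; the memories, correlates, and salience threshold below are made-up placeholders, not a claim about how anything is actually stored.)

            import random

            # Toy "memories": a name plus the set of correlates that can activate it.
            MEMORIES = {
                "dog":  {"bark", "fur", "tail"},
                "cat":  {"meow", "fur", "tail"},
                "fire": {"heat", "light", "smoke"},
            }

            def match(stimuli):
                """Activate the memory whose correlates overlap the stimuli the most."""
                return max(MEMORIES, key=lambda m: len(MEMORIES[m] & stimuli))

            def forward_loop(stimuli, threshold=2):
                """Primitive loop: external stimuli -> matched memory -> (maybe) act."""
                memory = match(stimuli)
                salience = len(MEMORIES[memory] & stimuli)
                if salience >= threshold:        # attention decides it is worth acting on
                    print("act on", memory)      # stand-in for motor action and learning
                return memory

            def reverse_loop(seed, steps=4):
                """'Idle' loop: attention re-activates a memory and wanders its correlates."""
                walk = [seed]
                current = seed
                for _ in range(steps):
                    correlate = random.choice(sorted(MEMORIES[current]))  # attend to one correlate
                    current = match({correlate})                          # which activates another memory
                    walk.append(current)
                return walk  # the walk itself could be stored as a new meta-pattern

            print(forward_loop({"bark", "fur"}))  # stimulus-driven recognition
            print(reverse_loop("dog"))            # stimulus-free "train of thought"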

          Note that this sounds an awful lot like the Kahneman-esque System 2 “slow” thinking. Your attentional mechanism walks through a network of learned, vaguely related concepts, one after another. If you now add in a short-term memory, any patterns you discover along the walk, as well as any reinforcement the patterns produce from having a useful “thought”, will be learned as new meta-patterns.

          While this is all wild speculation, it does have the nice property of being a fairly simple architecture that, when coupled with massive parallelism, can produce exceedingly complex behavior. It also provides a plausible explanation for both reasoning and the non-syntactic parts of language. (Just attach a word to each activation pattern and you’re about 75% of the way there.)

          The reason this is all important for a discussion about AI risk is that the risk rises if the architecture required to be intelligent is fairly simple, even if its instantiation is complex. Computers are very good at searching for functional solutions in random, complex systems. You’d like that search to be carefully supervised, though.

  31. ilkarnal says:

    People need to understand the VASTLY diminishing returns of information processing advances.

    Going from no language to language is a huge deal.

    Going from language to written language is a huge deal, but a much smaller deal than going from no language to language.

    Going from written language that has to be lugged around on tablets or books to written or spoken or video communication carried on wires is… Really not that big a deal at all. Like, it hasn’t changed very much at all beyond making things more convenient. Learning from books isn’t much worse than learning through the internet or earlier wire-transmitted conceits. There just isn’t a there, there. People talk a lot about how aaaamaaaaaazing it is, but nothing has really materialized. We aren’t super-geniuses compared to those old farts who had to learn from basically refined papyrus.

    In terms of computing power – the earliest uses of computers are aaaaaamaaaaaazing. You have revolutionary fire control on battleships PEW PEW PEW. Shit gets even more dank. Code breaking with computers in WW2 is revolutionary and super important. Also, computers play a role crunching numbers for the Manhattan Project. Holy shit, NUKES! WOW!

    Then you have computers guide us unerringly to the moon! Holy shit! The fucking moon! But – that isn’t quite as impressive as the atom bomb was, or as relevant.

    Then you have….. Basically fucking nothing. Shit was at its coolest and most revolutionary in the early days. Nothing has really happened since the moon landing that I HAVE to care about. If anything, we’ve languished. What comparatively minor advances we have made didn’t lean very heavily on having lots of computing power. Looking at the whole field from a practical perspective, it is fundamentally masturbatory – solving its own problems and crushing its own benchmarks without doing anything particularly productive.

    Information processing technology is sold as a performance multiplier for everything else, and that is clearly true in some cases – language, if you count that as ‘technology,’ and writing, which definitely counts. But, as is unsurprising from the outside view, once you reach certain storage, bandwidth, and cost milestones the next ones become less and less important.

    I think people need to fundamentally re-think what makes humanity special. We are way, way too brain focused in my humble opinion. The real story is our brains catching up to the really amazing thing, our two completely free limbs and hands with opposable thumbs at the end. This is more revolutionary and unique than people understand. There are plenty of creatures with fairly dexterous hands but they aren’t FREE. They are used for locomotion. This places fundamental constraints that we don’t have with our entirely bipedal locomotion. There are no creatures in the whole wide world that have had dexterous grasping hands, become completely bipedal, and not taken over the world. There are plenty of social animals with really highly developed communication methods from diverse backgrounds which have not even come close to taking over the world. Dolphins, crows, our fellow apes, parakeets, etc.

    What is really special is our ability to modify the environment, driven by the possibility and NEED to take full advantage of two completely free limbs that were already adapted for holding on to things. That’s what makes us really special. The mind without the hand is nothing. The hand without the mind is still something.

    We need to refocus on hardware and keep in mind a strict hierarchy where ‘masturbatory’ information processing advances, that is information processing advances that are not driven by some need of the hardware, are frowned upon as a waste of time.

    • cassander says:

      I think you’re underestimating the value of low cost. Monks could write books just as well as Gutenberg could print, but making books cost orders of magnitude less was revolutionary because so many more people could have access to them. What’s revolutionary about things like the internet isn’t that reading a webpage is way better than a page in a book, it’s that ten million people can do it basically for free.

      • Mary says:

        The printing press alone didn’t pull that off. Monks, when talking about a new book, first discussed how many ewes they had to breed.

        Cheap paper is underrated as a technological advance.

      • ilkarnal says:

        I did say

        once you reach certain storage, bandwidth, and cost milestones the next ones become less and less important

        Making books go from costing a fortune to being reasonably cheap is a bigger deal than going from books being reasonably cheap to basically free. My point was aimed at all three cardinal parameters and applies to all three.

  32. suntzuanime says:

    The superintelligent AI in the heart of all humanity is pretty fucking terrifying too, and plenty of fiction writers have come to that conclusion as well. Let’s not forget that for destruction ice is also great and would suffice.

  33. gbdub says:

    Could someone point me to the best plausible/realistic narrative of an actual catastrophic AI scenario?

    Because so far my understanding is limited to:
    1) Researchers create human or above level general AI with insufficient safeties
    2) ?
    3) Godlike AI turns us all into paper clips.

    How do you fill in 2)? I don’t really see how it happens without violating the laws of physics. An AI can be immensely powerful and novel – but the ways it might kill us probably aren’t. It can’t magically violate the conservation of mass, or travel at faster-than-light speeds. I’m asking this sincerely, really looking for something realistic and more detailed than something something bootstrap something FOOM screaming pain death office supplies.

    Actually, I agree we should do more to study AI safety, because it will be powerful, and even non godlike things can be dangerous if turned to the wrong purpose, whether by its own design or by a nefarious human controller. And maybe you’d be better off laying off the “kills us all” risks in the first place? That’s what turns people off without a good story of how you get there. There are plenty of things to be improved in software, data security, system robustness, etc, without needing a godlike evil AI to motivate them.

    • Sam Reuben says:

      I’ll do my best to give a good narrative for how an AI apocalypse can happen. In fact, I’ll see about giving a few, which run on slightly different assumptions.

      1) Researchers create an AI with human-level structuring, but in contrast to humans, who have a limited amount of physical material to work with, the AI can expand its potential without bound by making use of additional hardware. The AI is somehow given an impetus to expand.
      2) The AI begins to acquire additional hardware bit by bit, and with its expanded intelligence, it becomes steadily more capable of acquiring hardware. At some point, it also gains access to military resources with electronic interfacing and control, such as drones.
      3) The AI continues to expand, and either eliminates humans as an obstacle or integrates them as a resource for its expansion. This basically counts as paperclips.

      The flaw in this is, of course, the assumption that hardware expansion is boundless. This requires a lot more rational underpinning than “more matter = more mind,” or else we’d better start worshiping blue whales. Here’s a slightly better one.

      1) Researchers create a learning algorithm, which has an unbounded learning curve.
      2) The algorithm learns boundlessly.
      3) Knowledge = power, so the algorithm learns to do everything, such as make us into paperclips.

      This one shifts the focus off of physical components and onto structural abstractions, provided we’re using a kind of functionalist account of the mind. Even so, this runs into a problem: if this kind of unbounded learning curve were possible, why didn’t humans evolve into it? Each new step of the learning curve seems more stable, in that it’s more powerful and harder to destroy, but humans reached a general baseline and have been more or less at that position for all of history, with relatively small discrepancies. The best answer is hardware limitations. So what if we combine these two answers?

      1) Researchers create an AI with a learning curve only bounded by its hardware, but which is capable of, at the top of its learning curve, understanding which precise changes to its hardware will result in it becoming more intelligent.
      2) The AI conducts the necessary adjustments, perhaps modifying its learning algorithm as well, and continues to self-modify every time it runs into a wall.
      3) The AI becomes godlike eventually, and then paperclips. Maybe it’s angry about Clippy.

      This is far more plausible, except for one minor aspect. In order for the AI to properly adjust itself to become more intelligent, it needs to know exactly what intelligence is on a functional level and how it can be expressed mechanically. Moreover, it needs to know this right out of the box, or it won’t be able to start its ascension. If AI researchers can build something which simply knows the answer to one of the oldest philosophical puzzles as part of its basic mode of operations, it will be incredibly impressive.

      There are other potential methods by which an AI could achieve godhood, it goes without saying, but I think they are all going to have to rely on some kind of unbounded growth in intelligence. It’s the intelligence which makes the AI scary, after all. But although unbounded curves are easy to imagine, I don’t think they’re quite so common in the real world.

      Of course, we can come up with smaller-scale catastrophes quite easily, of the form:

      1) AI is given control over the nuclear arsenal, with an understanding of the mathematics of the relevant casualties, and instructed to constantly act such as to minimize US casualties. The idea is something along the lines of: it won’t fire missiles ordinarily, nor if a small number of missiles are fired, but will fire if there’s a massive salvo.
      2) AI instantly fires all rockets, because it fell on the “betray” side of the prisoner’s dilemma.
      3) Bad, bad things happen.

      I mean, there are movies about this, and it’s probably why we don’t give AIs control of the nuclear arsenal. It’s very easy to make a slight mistake in how you program a machine by assuming that it follows certain rules which it doesn’t instinctively follow: in this case, that the deontological principle of not killing presides over a ton of utilitarian calculations (pure utilitarianism, not rule utilitarianism, for obvious reasons). So AI safety can be important, even if we’re not talking about superintelligent or even slightly intelligent AI. That’s not what you were talking about, though, and the stuff you are talking about I believe requires unbounded growth curves. Can you think of any other plausible mechanism?
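
      To make that kind of mis-specification concrete, here is a toy sketch with entirely invented numbers; it is only an illustration of how a pure expected-casualty minimizer turns on its own probability estimates, not a claim about real arsenals.

        # Toy model, all numbers invented: a pure expected-US-casualty minimizer
        # with no "don't start a nuclear war" rule, choosing between two actions.

        P_ENEMY_FIRST_STRIKE = 0.02                 # the AI's own estimate that the enemy launches first
        CASUALTIES_IF_STRUCK = 80e6                 # if the enemy's full arsenal lands on us
        CASUALTIES_AFTER_OUR_FIRST_STRIKE = 20e6    # certain retaliation, but from a degraded arsenal

        def expected_us_casualties(action):
            if action == "wait":
                # we only take casualties if the enemy strikes first
                return P_ENEMY_FIRST_STRIKE * CASUALTIES_IF_STRUCK
            if action == "launch_now":
                # our first strike degrades the enemy arsenal; retaliation assumed certain but smaller
                return CASUALTIES_AFTER_OUR_FIRST_STRIKE
            raise ValueError(action)

        best = min(["wait", "launch_now"], key=expected_us_casualties)
        print(best)  # "wait" with these numbers; push P_ENEMY_FIRST_STRIKE above 0.25 and it flips

      Nothing in that objective forbids “launch_now”; whether the machine sits quietly or starts the war is decided entirely by a probability estimate it makes for itself.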

      • Joe says:

        Note that this is only plausible if ‘knowing what intelligence is’ is indeed a philosophical problem, rather than an enormously complex engineering problem.

        It seems trivially true that when “something” increases its general capability level, it can then use its newly increased capability level to further increase its capability level, at which point it can use its newly increased capability level to … . The part that’s in question is whether that “something” can be a single being, or must be something more like a civilisation (as it has been so far).

        • Sam Reuben says:

          It’s good to bring up civilizations, but they too have had significant expansion limits. The entire “rise and fall of empires” trope is a trope because it’s pretty much what actually happens. The only thing which has steadily increased its capability level throughout its existence is, well, the entire human race, and that only works because it’s completely decentralized. That doesn’t work too great for AI.

      • gbdub says:

        The AI begins to acquire additional hardware bit by bit, and with its expanded intelligence

        Exactly how does it do this? It’s a piece of software in a lab. It can’t go plug itself into the other mainframe over there. Maybe we’re really stupid and put it on an undefended network, but even then what stops us from pulling its literal plug when Fred in accounting’s desktop starts acting funny? We already build networks to avoid being taken over by hostile software – how does the UFAI overcome this? That’s what I mean by violating the laws of physics – it’s not going to become instantaneously better at breaking encryption than we are right now (or will be at the time it gets created).

        Maybe it could make a really good botnet – but people are already doing that and it’s annoying, possibly quite dangerous, but not insurmountable. At some point the switching and transmission delays are going to create diminishing returns on spreading its consciousness over the whole internet or whatever. Not much good being an all-powerful brain if you can only run at 1Hz.

        The problem is you’re skipping over the boring but critical parts – how does it take control of the actual physical infrastructure to become not only self sufficient, but immune to physical attacks against itself?

        There’s, as you note, a reason we don’t have drones or nukes just accessible to anyone on the internet. How does an AI overcome those physical barriers? E.g. there is no central network that can launch the US nuclear arsenal. There are physical switches that must be flipped by human beings – many of them in fact.

        More importantly, how does the AI do so stealthily and quickly enough that we don’t notice and stick a critical part of its hardware in a microwave while it is still physically vulnerable?

        Again I’m not saying that an AI or any piece of software can’t be very dangerous if used with improper safeguards. But talking about a FOOM from an unusually smart mainframe to a literally godlike machine with vast physical resources at its disposal is going to make people scratch their heads.

        “AI safety” is a special case of cyber security – let’s focus on that. I can be easily convinced that additional cyber security / safety is needed, and that’s much easier than convincing someone that Terminator is a documentary.

        • Sam Reuben says:

          Actually, that’s one of the simpler parts. All the AI needs to do in order to get more resources is be reasonably useful, or at the very least, pretty cool. If we had a real working AI that said it could get smarter with more parts added, people would be falling over themselves to put more stuff on it. At a certain point, it asks for an internet connection, and then starts spending all its downtime working on hacking or what have you. This is, more or less, just a logistical issue. It’s not a convincing case against AI if the only defense is “it won’t have access to enough resources,” just like it’s not a compelling argument to leverage against the kind of price-undercutting that let Standard Oil be Standard Oil. If we can start from the point of “the AI has enough resources” and the outlook is bleak, then all it’s going to take to get to bleakness is some unexpected means of getting to the resource tipping point. I feel the more compelling cases look at the principle of expansion itself.

          • rlms says:

            So pass a law that says “do not feed the AIs (until we’ve studied them really hard to make sure they don’t have any shady plans) on pain of death”. Boom, AI safety is solved.

          • just like it’s not a compelling argument to leverage against the kind of price-undercutting that let Standard Oil be Standard Oil.

            You might want to read McGee’s classic article on the myth of predatory pricing:

            John S. McGee, “Predatory Price Cutting: The Standard Oil (NJ) Case,”

          • Sam Reuben says:

            Very interesting article! What I understand from it is that corporate mergers and buy-outs were the major factor, according to McGee, which offers some parallels to, say, the growth of Bank of America. The solution to that, of course, is to limit the ability for companies to perform mergers and buy-outs, although that’s perhaps even more unpalatable for a lot of folks than outlawing predatory pricing.

            I think my main point works just as well with buy-outs, though, because it’s just the idea of leveraging resources to do something nasty that nobody can really stop you from doing.

          • 6jfvkd8lu7cc says:

            There is a minor issue here: latency and data locality. I would expect that just solving the latency and data-locality issues for running a single-datacenter instance would already take tens of thousands of brightest-human-years (maybe more), so spreading out is a very complicated task. External resources will provide some boost on some tasks, but the AI would still depend on the core DCs with an optimal network structure for fast reactions. And you do not overcome the network-structure issues by buying VMs from AWS; you need to actually break AWS security — we do need that to be better than it is now, but that need is felt regardless of AI.
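
            Back-of-the-envelope, with round numbers of my own: signals in fibre travel at roughly $2 \times 10^{5}$ km/s, so a single cross-continental round trip costs about

              $$ t_{\text{round trip}} \approx \frac{2 \times 10\,000\ \text{km}}{2 \times 10^{5}\ \text{km/s}} = 0.1\ \text{s}, $$

            versus well under a millisecond inside one datacenter, so every synchronisation step of a spread-out instance pays a penalty of two to three orders of magnitude.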

      • Eli says:

        This is far more plausible, except for one minor aspect. In order for the AI to properly adjust itself to become more intelligent, it needs to know exactly what intelligence is on a functional level and how it can be expressed mechanically. Moreover, it needs to know this right out of the box, or it won’t be able to start its ascension. If AI researchers can build something which simply knows the answer to one of the oldest philosophical puzzles as part of its basic mode of operations, it will be incredibly impressive.

        Not knowing out of the box is a legitimate obstacle. Calling cognition a “philosophical puzzle” is not. We live in a physical world, not a philosophical one. Things don’t have to “solve philosophy” to kill you; hell, they don’t have to solve philosophy to philosophy’s own specifications in order to actually solve it.

        All invocations of the words “fundamentally philosophical problem” rely on the premise that scientific reductionism is completely, utterly, absolutely wrong and we’re going to hit that barrier real soon now and be forced by Nature’s own metaphysics or some other nonsense like that to step back, sit down, and do some hard thinking in soft armchairs before we can Do Stuff.

        This is not a reliable premise. This is not a plausible premise. This is an absolutely incredible premise — I don’t credit it with much!

      • onyomi says:

        Thanks for that link; an interesting take. I particularly liked the analysis of the “prisoner’s dilemma” aspect, and the comparison to the discovery of the New World. It is certainly very worrisome if the building of AI is like the discovery of the New World where the level of technology and sophistication necessary to do it is much lower than the level of technology and care necessary to do it without causing a catastrophe (by somehow introducing Native Americans to European germs in a safe way, for example).

        Moreover, AI may be like the discovery of the New World in that the potential benefits to those who do so early may be so great and so tempting that it’s going to be extremely hard to coordinate everyone not to defect long enough to figure out how to do it safely.

        Your point near the beginning about nearly any AI goal other than “help humanity” resulting in an AI that wants to exterminate humanity is also well taken, and made me think of another point: the go-to example of a non-useful AI goal is the paperclip maximizer; of course this humorously stands in for any goal we might build an AI to pursue, such as maximizing Chess playing ability.

        But it seems to me that there is a different AI goal we ought to use as the go-to, default example: maximizing its own intelligence. It seems extremely likely that the driving force behind the first dangerously intelligent AI will be an AI with the goal “make yourself more intelligent.” Like the pursuit of any other goal, there’s a good chance humans will be an obstacle to this, primarily in the form of a threat, however small, to turn off the AI. But why don’t we assume that the thing we’re afraid of is something not seeking some arbitrary goal, but rather the goal of maximizing its own intelligence, since that’s the one we’re most likely to encounter?

        And to that end, might not the key point be to figure out a way to bind the goal of “maximize intelligence” to the goal of “benefit humanity”? (Unfortunately one imagines that an AI with the pure goal of “maximize intelligence” will tend to make faster progress than the one hampered by the goal “maximize intelligence insofar as you can achieve that goal in harmonious coexistence with humanity”).

        • James Miller says:

          Thanks! I agree that having a high and ever increasing intelligence will likely be a drive of a computer superintelligence for a huge set of goals it could have. Hopefully, if the AI initially wants to help humanity, increasing its own intelligence won’t cause it to change this goal, although it might if being smarter allows the AI to find another goal that gives it higher utility. I’m currently writing a paper that uses the economic theory of rational addiction to predict the behavior of future AIs, and I propose that we should try to make helping mankind a beneficially addictive activity, meaning the more the AI helps us, the more the AI is changed in a way that increases the utility it gets from helping us. This could come about because, by helping us, the AI gets smarter and so is better able to help us.
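
          A sketch of what “beneficially addictive” would mean formally (rough notation of mine, not anything final from the paper): let $h_t$ be how much the AI helps us in period $t$, and let $S_t$ be an “addiction stock” built up by past helping, with

            $$ S_{t+1} = (1-\delta)\,S_t + h_t, \qquad 0 < \delta < 1, $$

          and per-period utility $u(h_t, S_t)$ satisfying $\partial^2 u / \partial h \, \partial S > 0$. The more the AI has helped in the past, the higher its marginal utility from helping now, so a forward-looking utility maximizer keeps choosing to help.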

          • onyomi says:

            I agree that having a high and ever increasing intelligence will likely be a drive of a computer superintelligence for a huge set of goals it could have.

            I think my fear is more specific than that: let’s say there’s a level of intelligence x, which, if any intelligence should reach it, will be enough intelligence to destroy humanity should the possessor choose to. I’m not only worried that “destroy humanity” is an important step in achieving many different kinds of hypothetical goals, I’m worried that in the competition to reach x, an AI with goal “maximize your own intelligence by any means possible” will tend to get there ahead of AIs hampered by any other goal or goals, including “maximize intelligence+benefit humanity.”

          • bintchaos says:

            That was interesting; I really like applied rational addiction.
            But don’t you have to distinguish between two potential kinds of AIs? The emergent-in-the-wild kind (example: Skynet) and the engineered kind (example: Terminator)?
            So rational addiction could work for constructed, engineered AIs but not for rogue AIs, same as my cooperative consciousness theory.
            Like someone said, we don’t have to fear the AI that passes a Turing test as much as the one that pretends to fail it.
            Maybe…rational addiction could be spread like a vaccine against UFAI self-interest on the web, like WCry or Stuxnet, a virus or a worm.
            I still like the idea of the Turings coming in with an EMP gun and blowing the emergent UFAI away.

          • James Miller says:

            @onyomi

            I agree that an AI with the goal of maximizing its intelligence would be extremely hazardous to mankind even if this AI also wants to help mankind. But I tend to think of the set of possible goals as being huge, and I put a low weight on the AI having any particular one goal. Although of course some drives, such as having high intelligence or protecting your own existence, would help an AI achieve an enormous number of possible goals, so we can put a high probability on an AI having these kinds of drives.

          • James Miller says:

            @bintchaos

            What I’m going to argue in my paper is that rational addictions will be hidden in an enormous set of possible AI utility functions, even if programmers didn’t intend to put them there, and because these rational addictions would offer an AI an enormous amount of utility, lots of AIs would have a drive to engage in rationally addictive behavior. As “protect your utility function” would likely be another drive, I think that an AI without a rational addiction in its utility function would attempt to protect itself against the virus or EMP gun you mention.

          • bintchaos says:

            I really like it. It’s like a stealth vaccine against self-interest and competition with humans.
            But I think we also could pursue the cooperative consciousness “training” of engineered AIs.
            The way the starship AI became sentient was studying/writing a human history.

          • onyomi says:

            I agree that an AI with the goal of maximizing its intelligence would be extremely hazardous to mankind even if this AI also wants to help mankind. But I tend to think of the set of possible goals as being huge, and I put a low weight on the AI having any particular one goal.

            But why isn’t the AI with goal “maximize your intelligence at all costs” far more likely to be the first to reach intelligence level x than AIs with any other goal?

            The only hope is that that goal is either too vague or too useless, by itself, for the best AI researchers to want to program such a single-minded AI. That is, maybe just “increase your intelligence” has to be more strictly defined like “increase your ability to solve problems which vex humanity like a, b, and c.” In which case there might be more reason for optimism. (Of course, we need the AI to have not only the ability, but also the desire to solve our problems).

            But since the worry seems to be about “general AI”–that is an AI with some kind of general intelligence factor which can be applied to many domains, one again imagines that, assuming such a factor exists, the AI with goal “maximize your G factor” will get to G level x faster than any other AI, all else equal.

          • James Miller says:

            @onyomi

            “But why isn’t the AI with goal “maximize your intelligence at all costs” far more likely to be the first to reach intelligence level x than AIs with any other goal?”

            Because it would get fewer resources devoted to it by humans than an AI that had a goal of, say, find ways for our hedge fund to make profitable financial market investments.

    • TheRadicalModerate says:

      There’s Nick Bostrom’s book, but it’s not exactly light reading. The first few chapters do a pretty good job of enumerating the nastier fast-takeoff scenarios. One of his recurring themes is that a lot of scenarios have the ability to collapse to fast takeoffs, and fast takeoffs are bad.

      • rlms says:

        He describes “An AI takeover scenario” starting on page 95 as possibly having the following phases: pre-criticality (roughly human level); recursive self-improvement (intelligence increases rapidly); covert preparation (persuades humans to do things, hacks stuff on the internet); overt implementation (nanotech, killing humans). He also makes the point that a superintelligent AI can probably come up with plans we can’t think of today.

  34. Gazeboist says:

    I worry there’s a general undersupply of meta-contrarianism. You have an obvious point (exciting technologies are exciting). You have a counternarrative that offers a subtle but useful correction (there are also some occasional exceptions where the supposedly-unexciting technologies can be more exciting than the supposedly-exciting ones). Sophisticated people jump onto the counternarrative to show their sophistication and prove that they understand the subtle points it makes. Then everyone gets so obsessed with the counternarrative that anyone who makes the obvious point gets shouted down (“What? Exciting technologies are exciting? Do you even read Financial Times? It’s the unexciting technologies that are truly exciting!”). And only rarely does anyone take a step back and remind everyone that the obviously-true thing is still true and the exceptions are still just exceptions.

    A good (if accidental) summary of the difficulty involved in arguing against AI alarmism.

    • Joe says:

      Not sure if this is what you’re getting at, but I do think that in Scott’s three-tier model, AI risk is contrarian, and downplaying AI risk is meta-contrarian.

      More specifically, the uneducated view would be, “AI will be beepy-boopy robots that are basically just like metal humans”. The contrarian view would be, “AI will be nothing like humans, don’t you know the Orthogonality Thesis? It will be a totally alien process with a utility function we find repellent.” The meta-contrarian view would be, “Actually, human minds aren’t so bad as a baseline model of what to expect future AI minds to look like; they will face many of the same problems we do and so their designs will often implement similar solutions.”

  35. tvt35cwm says:

    Both this piece and Harford’s are somewhat irritating.

    * Risk has two factors, hazard and probability. Harford is focussed on probability, whereas Scott is focussed on hazard.

    * Harford suffers from the manufacturing fallacy (the unconscious assumption that “the economy” is about manipulating physical things). Most (80% plus) of the flow of goods and services to households is services; the bulk of services are intangible and/or interpersonal; there are many kinds of services, and they differ widely enough that there is no single future innovation that can affect more than a small proportion of them.

    Harford suggests (and I agree) that the next industrial services revolution (appreciable uptick in productivity) will be the sensor revolution, and reading between the lines, he thinks that this has potential to make most people surplus to requirements. This risk has relatively high probability and quite high cost.

    * If, to you, human extinction has infinite cost, then of course there is no argument. Apply a finite value, and with whom you agree depends on what exact scenarios, values, probabilities, discount rates and timelines you choose. With any reasonable choices, the risk (to humans) is large enough to warrant substantial expenditure on insurance (a formal sketch follows at the end of this comment). But Harford seems less speciesist here.

    * Scott is taking Harford over-literally, and ignoring him when he states that the proper frame for thinking about innovation is the task. Harford’s point appears to be that successful innovations mostly turn out to not be as predicted. No-one should expect a Rachael, because that’s a dumb way to do AI. Jennifer is a more efficient way; therefore, we will have thousands of kinds of Jennifers before any sign of a Rachael. Or: we shouldn’t worry exclusively about superhuman AGI, because we are more likely to destroy ourselves with (the consequences of) a bunch of artificial specialised intelligences. Or again: if the meta-contrarian position is that there are silver bullets, the meta-meta-contrarian position is that humanity advances mainly by piling up many, many small innovations one by one.
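
    Spelling the third point out (a sketch, in my own notation): with annual catastrophe probability $p_t$, finite cost $C$, discount factor $\delta$, and a mitigated probability $p_t^{m}$ achievable by spending on safety, insurance expenditure is warranted up to roughly

      $$ \sum_{t} \delta^{t} \, (p_t - p_t^{m}) \, C, $$

    and with any reasonable choices of those numbers the sum comes out large.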

  36. TolstoyFan99 says:

    I treat the idea of existentially dangerous AI a little less skeptically than a few years ago. And over those same few years my respect for Scott Alexander has grown tremendously, for different reasons. This topic recurs here often, but I rarely see the following point made, maybe because it is rude. But I will make it anyway.

    I mistrust and dislike Yudkowsky. I take him for a big phony, and I think less of many otherwise interesting people who associate with him, such as Bostrom and Alexander. You can certainly guess the reasons, just as I can guess that you think the reasons I have are unfair. But a lot of the call to spend more money on AI safety is an explicit call for more funding for that man and his sham of a research institute. Or else, it is easy to mistake it for such a call.

    You have a tough case to make, when you raise an apocalyptic alarm. Every small superficial comparison to L Ron Hubbard, or to whatever other precedent-setting scam, is a huge liability in making that case.

    I don’t have any specific advice for you, but perhaps the information (what information? just that somebody thinks poorly of your friend. I doubt I am alone.) will be useful to you, even if it is rude.

    • James Miller says:

      “You can certainly guess the reasons”

      I can’t, and I would be grateful for clarification. Yudkowsky as a big phony doesn’t seem plausible for the simple reason that Yudkowsky has a talent for getting smart and rich people to support him, and had he used this talent to start some kind of for-profit company it seems reasonable that he would be rich now, probably not Peter Thiel rich but at least Google Senior management level rich.

      • Y seems plausible to people who only listen to what he says, and phony to people who look at what he has done and not done.

        He has not achieved any academic credentials, written any production code, or held down a nine-to-five job. His stewardship of MIRI was described as chaotic by his successor. Recall Joel Spolsky’s distinction between the merely smart and the smart who get things done. He talks a lot about successful entrepreneurs as though he is one of them, but that is not evidence of entrepreneurial ability, any more than insisting you are an expert is expertise.

        He has a track record of overestimating his own abilities — for instance, he originally believed SIAI was going to develop AGI ahead of everyone else, and it was retconned into a theoretical research organisation after producing no code whatsoever. He doesn’t even write the code for his sites, unlike the ordinary PhD David Chapman — who, incidentally, understands Bayes much better.

        He also has a record of being dismissive of conventional wisdom and mainstream academia, which is of course a well-known trait of cranks. I’ve seen him refuse to budge against multiple PhDs who were trying to correct him.

        There is even a saying/doing discrepancy in the core issue of AI safety: after telling everyone how vital and urgent it is, you might have expected him to dedicate himself to working long and hard on the problem, whereas he has dabbled in all sorts of unrelated things, from venture capitalism to erotic fiction.

      • acrimonymous says:

        [Elizabeth Holmes] as a big phony doesn’t seem plausible for the simple reason that [Elizabeth Holmes] has a talent for getting smart and rich people to support [her].

      • vV_Vv says:

        Yudkowsky as a big phony doesn’t seem plausible for the simple reason that Yudkowsky has a talent for getting smart and rich people to support him

        So did L Ron Hubbard. Was he not a big phony?

        • nimim.k.m. says:

          Yes, isn’t that the definitive characteristic of a big, successful phony?

          Also, notice that phonies dealing in the financing of tech (Holmes, mentioned above) or in getting rich (Ponzi schemes) seem more likely to crash when their schemes are eventually found to be incompatible with reality; phonies dealing in religion and cults, not so much.

          This is one reason why I find the outward signs of LW-sphere aligned AI research (let’s call them “AI-research-LW”) that are atypical / not reminiscent of a regular tech or academic research project, well, ill-boding. Same goes for the social aspects (“social-LW”) that go beyond the normal modes of regular socializing around discussion group and reading seminar.

          I think the objectives of the LW-sphere could be achieved better if they drew a clear distinction between these various branches (“AI-research-LW”, “social-LW”).

        • nimim.k.m. says:

          Further thought that warrants a separate comment. Aren’t the biggest phonies those who manage to draw enough people into their phony project and set something concrete in motion by sheer accident?

          For example, don’t we have only Plato’s word on what kind of person Socrates was and what his thinking was like (instead of, say, Aristophanes’ portrayal of the same person in Clouds, which is more clownish and far less flattering)?

        • James Miller says:

          But if you look at the entire set of people who have a talent for getting smart and rich people to support them, won’t only a small percentage of them be phonies? Also, if Scientology was basically right but L Ron Hubbard was a phony, Hubbard would have done a great service to mankind by causing many people to seriously study Scientology.

          Yudkowsky has played a big role in getting many smart and/or rich people to look at the dangers of an unfriendly AI coming out of an intelligence explosion and this in and of itself is a massively impressive accomplishment. When you add to that the work he has done popularizing rationality, his intellectual accomplishments seem rather massive.

          • vV_Vv says:

            But if you look at the entire set of people who have a talent for getting smart and rich people to support them, won’t only a small percentage of them be phonies?

            No? How many examples of non-phonies who lived off rich and smart people for decades without producing anything of actual value do you have in mind?

            Also, if Scientology was basically right but L Ron Hubbard was a phony, Hubbard would have done a great service to mankind by causing many people to seriously study Scientology.

            Do you realize how bad the reputation of Scientology is? If it was basically right Hubbard would have done a great disservice to mankind by tarnishing Scientology’s reputation with his phoniness. But of course, Scientology being wrong was a central part of Hubbard being phony.

            Yudkowsky has played a big role in getting many smart and/or rich people to look at the dangers of an unfriendly AI coming out of an intelligence explosion and this in and of itself is a massively impressive accomplishment.

            Has he done anything more than the sci-fi authors writing yet another story about robots rebelling against their creators?

            People are interested in AI safety now because AI is now popular. To the extent that it is useful to have a discussion about these issues, characters like Yudkowsky being players in the field may be a net negative, as they increase both the actual and perceived crankery of the discourse.

            When you add to that the work he has done popularizing rationality

            A Mary Sue-ish fan fiction and an incoherent collection of verbose, poorly researched and largely rambling blog posts. He has certainly promoted his own brand of “rationality”, but don’t forget the scare quotes.

          • He is of course popularising his own brand of rationality.

          • This constant framing in terms of (one size fits all) smartness is part of the problem. What do physicists think of his physics? What do probability theorists think of his Bayes? What do economists think of his economics?

          • Once you start framing things in terms of 1SFA smartness, you are halfway inside the tent. The idea that having a high IQ by itself makes you an expert in a wide variety of topics, without having to study, is ridiculous when stated baldly, but it serves a purpose for him.

            Teaching rationality is great if it actually makes people more rational. However, that is debatable if what is being taught is a shopping list of contrarian beliefs. Compare with Rand’s Objectivism: has it made anyone more objective?

            Y. perceives a low level of rationality, but that perception is shaped by his shopping list: for him, every physicist (and economist!) who believes in Copenhagen is irrational. But that is not a widely held view.

            It’s not an objective fact that Y is teaching rationality: you have to accept a lot of his own beliefs about rationality and irrationality for it to come out true. It works like “Christianity is a great way of saving mankind from Original Sin”.

      • 6jfvkd8lu7cc says:

        As for the getting-rich argument — I would not be surprised if Y prefers respect to the outcome Notch got.

        Personally, I think that Y is smart in the terms of raw maths-style intelligence, does believe what he says (but the world has seen a lot of implausible projects where the founder has made himself believe despite anything), and he has dropped out of actual programming a bit too early.

        Over time, I got the impression that he believed some strange things about programming a while back, and even if he has abandoned those specific beliefs, it looks like he didn’t get more programming experience since, and I do not trust him not to miss simple things there, simply because he hasn’t seen any medium-size systems in the wild from the inside.

        Also, some of his claims, like the uniqueness of the model of second-order arithmetic being a difference from ZFC, look out of context (the models of second-order arithmetic implicitly presuppose an underlying set theory, after all).

        Now the descriptions of the FOOM scenario have so many details that seem off and are not substantiated that trust in such a scenario requires trusting the source. I do not trust the source to get this right, and I do not trust the supporters to have verified the details.

        I am worried about our civilisation-scale achievements in disincentivising the correct specification of software, but I do believe that work on better software specification and validation (which is being done, but not enough) is a necessary condition for AI safety anyway. I expect that this work can be done best by people who personally work both in software development and on the theoretical foundations of their work (and there are such people).

        I even trust general epistemology people who sit in-between economics, game theory and philosophy to do a better job on decision theory, if only because they are a larger community better connected to logic on one side and real-world observations on the other side.

      • rlms says:

        Look at this website. From the outside, Yudkowsky doesn’t look dissimilar (right down to calling his ideas “a rational worldview”).

    • TolstoyFan99 says:

      Sorry for the innuendo, James. Incidentally I know your byline and I find myself agreeing with or else learning from almost all of your comments here.

      I agree with Ancient Greek’s summary. And like VV says I also don’t think the comparison with Ron Hubbard is superficial at all, although unlike with Hubbard I don’t dismiss the things that Yudkowsky claims to know: a little psych research, math and physics and economics. But the product that he synthesizes from that knowledge resembles the product Hubbard synthesized from made-up nonsense: a scam aimed at celebrities and science fiction fans.

      • James Miller says:

        Thanks. If you are going to judge Yudkowsky by his fans (including people who give him money and those who favorably cite him), you have to look at the intelligence and knowledge of his smartest fans, not the quality of his average ones.

        • rlms says:

          His smartest fan is clearly Gowers (see the blogroll), which is admittedly a major point in his favour.

        • I would rather judge him by his smartest critics, since they are able to explain why he is wrong.

        • vV_Vv says:

          you have to look at the intelligence and knowledge of his smartest fans

          And how many people of comparable intelligence don’t buy into what he’s peddling?

          Big Y is a smart guy, nobody questions this, and he specifically targeted his preaching to the smartest and richest members of the society, and yet he couldn’t persuade but a small fraction of them. Applying the outside view, without even evaluating his arguments, what can you infer about their quality from this social evidence?

          • James Miller says:

            Most people don’t take the time to seriously consider most novel ideas. It doesn’t mean much if some smart guy spends ten minutes thinking about EY’s theory and decides to not fund him.

            >Big Y is a smart guy, nobody questions this, and he specifically targeted his preaching to the smartest and richest members of the society, and yet he couldn’t persuade but a small fraction of them.

            Certainly, if he had persuaded a greater number of people it would give more authority to his claims. But smart and rich people have lots of claims on their attention and it’s probably very hard to break through to them. Overall, I think the topic of existential risks is one our brains don’t do well analyzing (which is why EY was right to spend so much time promoting rationality) so EY has a particularly difficult task. Also, most people don’t really care about the long-term future and so are not open to concerns about an AI fifty years from now eating the light cone.

          • nimim.k.m. says:

            Overall, I think the topic of existential risks is one our brains don’t do well analyzing (which is why EY was right to spend so much time promoting rationality) so EY has a particularly difficult task.

            One must also differentiate between the AI-risk and the particular methods proposed to fight it. (That is, separate institutes approaching the problem from decision theoretic first principles.)

            Human civilization does pay attention to existential risk, to some extent: we have a rudimentary look-out for potentially existential-threat-level asteroids (and, as someone said, we have counted the craters). Practically every Western country has some emergency plans and legal machinery to implement quarantines and whatnot in the case of a large-scale pandemic. Many nations employ a military as a counter to the primary perceived existential risk to their governmental structure.

            The short-term, immediate, concrete AI safety issues are more like “what the Google car should do to avoid finding itself in a trolley-problem scenario”, and having a culture that engages with such problems in a productive manner is more likely to be fruitful than moonshot projects to create a methodology for managing a safe super-AI through very abstract decision-theoretic analysis. (I believe research into handling trolley-problem scenarios has received a significant amount of Google engineers’ time.) And if the culture is there in the first place, it’s likely that the safety issues of super-AI will be considered when the super-AI is nigh.

  37. Worley says:

    Well, calling barbed wire “unexciting” isn’t really correct for the times during which it was being developed. The value and profitability of a product that had the right properties was well-recognized long before modern barbed wire was invented.

    OTOH, the consequences of the revolution in transportation involving the shipping container, intermodal shipping, and the interstate highway system seem not to have been envisioned in advance by any of the people who created these things.

  38. pontifex says:

    The big question is whether a nuclear-armed humanity can survive another few centuries. The answer to that question is far from obvious, and the current crop of politicians (on all sides) don’t give me a lot of optimism. Basically… I will take my chances with the AI rather than wait for Trump III or Clinton IV to push the big red button.

    I also feel like the LessWrong / Yudkowsky school of thought puts much too much emphasis on AIs as function optimizers or calculators. Although this is controversial, I believe that the current crop of techniques that optimize functions is another dead end that stops short of intelligence. It’s like how everyone was fixated on formal logic solvers for AI in the 70s.

    • John Schilling says:

      The big question is whether a nuclear-armed humanity can survive another few centuries. The answer to that question is far from obvious,

      As we have discussed here many, many, many times before, nuclear warfare does not pose a credible threat to the survival of humanity, or even to human civilization.

      • pontifex says:

        As we have discussed here many, many, many times before, nuclear warfare does not pose a credible threat to the survival of humanity, or even to human civilization.

        I haven’t heard this point of view before. Do you have a link to one of those discussions?

        • John Schilling says:

          This is I think the best general discussion we’ve had on the topic.

          Here’s a followup on the nuclear winter aspect specifically.

          • moonfirestorm says:

            Both your links appear to lead to the same URL, likely the second mentioned post.

          • John Schilling says:

            Fixed, sorry about that.

          • pontifex says:

            I don’t find these arguments all that persuasive.

            * Scientists were sympathetic to the reds, so they exaggerated! Well, maybe. But they were probably also sympathetic to how profoundly shitty nuclear war would be, even in your best-case scenarios.

            * Climate models were crap in the 80s! OK, fine. Are they better now? If not, we shouldn’t mess with something we don’t understand. This is an example of how people love attacking other people’s models, but they often operate with their own implicit model which is much more questionable. (The climate will be fine, because, come on guys!)

            * Rural areas would survive, even if cities would die! That sounds great until you realize that farms are dependent on cities for fertilizer and fuel. Blacksmiths, stables, and the rest of the preindustrial supply chain don’t exist any more.

            I mean, I guess if you can find an uncontaminated source of freshwater and you have some animals, you can do something with that? This all seems awfully speculative, far from “case closed: we make it.”

          • Bugmaster says:

            I would just like to point out that a future where the only humans in existence live on small farms, next to freshwater sources, tending to their livestock by hand, etc., sounds pretty bleak to me. Technically, humanity does survive in such a scenario, but only in biological terms; the human civilization is gone.

          • John Schilling says:

            Climate models were crap in the 80s! OK, fine. Are they better now? If not, we shouldn’t mess with something we don’t understand.

            I am not trying to persuade you that we should wage a nuclear war. If you are paying attention, I spend a fair bit of time here trying to persuade people not to do that.

            I am also trying to correct the common misconception that a nuclear war, however ill-advised, would result in the literal extinction of the human race. If you don’t care to understand the difference, that’s your business, but I’ll keep calling you out every time you make that false claim.

            It would perhaps be best if you limited yourself to suggesting that nuclear war is a really bad thing that we ought to try and avoid.

          • pontifex says:

            John Schilling: I am not trying to persuade you that we should wage a nuclear war. If you are paying attention, I spend a fair bit of time here trying to persuade people not to do that.

            Sorry– I wasn’t trying to suggest that you were in favor of nuclear war (or even unaware of the risks). I apologize if it came across that way.

            Bugmaster: I would just like to point out that a future where the only humans in existence live on small farms, next to freshwater sources, tending to their livestock by hand, etc., sounds pretty bleak to me. Technically, humanity does survive in such a scenario, but only in biological terms; the human civilization is gone.

            Indeed.

            John Schilling: I am also trying to correct the common misconception that a nuclear war, however ill-advised, would result in the literal extinction of the human race. If you don’t care to understand the difference, that’s your business, but I’ll keep calling you out every time you make that false claim.

            The dangers of nuclear winter are probably exaggerated. However, I think there is a lot of evidence to suggest that salted nuclear bombs could literally cause the human race to go extinct. These are the bombs we should really be afraid of, since (as you mentioned), the fallout from regular nuclear bombs is only immediately lethal for a few months.

            John Schilling: It would perhaps be best if you limited yourself to suggesting that nuclear war is a really bad thing that we ought to try and avoid.

            My main point is that hostile AI is not the only X-risk that we face. And AI has a huge potential upside.

            I would draw a parallel with rocketry. Developing rocketry enabled ICBMs, which significantly increased X-risk. But without rocketry, we are stuck on Earth, which is also an X-risk.

      • Doctor Mist says:

        nuclear warfare does not pose a credible threat to the survival of humanity, or even to human civilization.

        You’re right, of course, but pontifex’s point may not be seriously altered by that qualification. Even if all-out nuclear war does not destroy civilization, it would still be very, very bad, and one could reasonably argue that probability * harm is greater than with super-AI, even if you assume P=1 that super-AI is possible. (I’m not sure I would make that argument, but I can see how it might go; key parts are “Trump III or Clinton IV”.)

  39. registrationisdumb says:

    Another thought. Let’s suppose “AI Risk” is an issue that needs to be addressed in my lifetime. I imagine that this, unlike global warming, will be a problem that is easier to solve from a research perspective later rather than sooner.

    Aside from philosophical autofellatio, the bulk of AI risk assessment relies on us being able to know what sort of threat we’ll face. Right now, our research into AI is still in its infancy. For all we know, we could be a bunch of farmers worrying that a better harness is going to change the game, when it will really be the tractors, the pesticides, and the GMOs. As we get closer to superhuman AI, we’ll have a better idea of what one will look like, and we’ll be able to do research much more efficiently.

    Less money spent paying rent in Berkeley, more money spent on actual AI researchers with corporate experience developing AI.

  40. HowardHolmes says:

    Since the advent of nuclear power the world has become much more dangerous. This is with no AI. This is with “intelligent,” “caring” humans in control. We should worry less about AI’s motivation and more about human motivation. Give humans more power, any kind of power, and the world becomes more dangerous. Give humans more powerful computers and the world becomes more dangerous. It is the motivation of humans that is the problem, not the motivation of AI. Build a superintelligent AI and program it to “benefit the planet,” and the first thing it would do would be to destroy humanity.

  41. acrimonymous says:

    Perhaps we should spend more time worrying about this, and less time thinking of clever reasons why our inaction might turn out to be okay after all.

    As someone working in psychology, you could explore other motivations than status-signaling just-so stories, and this might lead to other ideas on how to complain/warn people.

    To me, the problem looks like this:

    (1) If one accepts the idea that a smarter-than-us AI is the threat we’re talking about, then us finding an engineering solution to the problem is not a reliable risk-management technique. (Although as the super chess-player sisters post and comments showed, enthusiasm for this idea will probably die hard.)

    (2) AI research threat is not like, say, cloning research threat (if that were a threat). You can start cloning people, and then you can stop again if you want. As well, one mad scientist cloning people has only 1/10 the impact of 10 mad scientists. You can’t deploy a sentient, smarter-than-us AI and then undeploy it. (You can plan to, but per #1, that’s not a very good idea.) As well, one mad AI researcher potentially has as much impact as 10 mad AI researchers, due to the nature of the problem.

    (3) If you accept 1 & 2, it’s pretty obvious that the only real solution to the UFAI problem is cessation of all research that’s aimed at creating a smarter-than-us AI.

    I’m pretty sure this is how the problem appears to non-specialist, non-enthusiasts who look into it, but #3 looks impossibly hard or crazy. Since no one wants to look like the wild-eyed guy waving the “sinners repent” placard outside the White House, you’ll never see a serious discussion of AI as an existential threat in mainstream opinion journalism*.

    * Edit: what I mean is, for example, NYT or WSJ opinion piece calling in no uncertain terms for political action now to halt AI research.

  42. engleberg says:

    The D party has been America’s governing political party since the thirties, spending to blimp out the government hugely twice, in the thirties and the sixties, and holding a lock on the bureaucracies they created ever since. Our mainstream media is overwhelmingly D party house media. D party economic policy has been consistent. D-JFK broke US Steel in the sixties, D-Moynihan and D-Nader broke Detroit, D-Jimmy Carter’s judge broke Bell Labs, a couple of D party lawyers broke light plane manufacturing, D party regulations created the Rust Belt. D-Clinton took a half-billion bribe to sue Microsoft and break the dot-com boom, itself in part capital flight from all the other industries previously broken by the D party. This isn’t epistemic caution. This is vandalism backstopped by amnesiac media.

    D party AI policy is, therefore, likely to break US AI. If humans get working AI at all, it will probably be under foreign control.

    • bintchaos says:

      China has a Manhattan Project going for Strong AI.
      Baidu (Google competitor) has government funding as well as enormous profits.

      • engleberg says:

        @bintchaos- Hi bintchaos!

        A Manhattan Project? That big?

        Ever get back to Said’s nephew about Orientalism?

    • rlms says:

      I am feel uncomfortable when we are not about me?

  43. vV_Vv says:

    As a society, we spend about $9 million yearly looking into AI safety, including the blue-sky and strategy research intended to figure out whether there’s other research we should be doing. This is good, but it’s about one percent of the amount that we spend on simulated online farming games. This isn’t epistemic caution. It’s insanity.

    But what is the point of saving the world from the Evil Robot Apocalypse if it is going to be a world without simulated farming games?

    Ok, I’m jesting. Kind of. Do you really find it unusual, or even insane, that people devote more resources to their personal satisfaction than to solving your pet Most Important Problem Ever?

    And anyway, you can’t solve problems just by throwing money at them. What makes you think that giving more money to your personal friends is an efficient use of funds?

    First, how severe and imminent is the problem? Second, how competent are the people who accept money to work on it? What is the marginal utility of a dollar donated to MIRI, FHI or whatever?

    If you can’t answer these questions, but call insane anybody who does not automatically default to your position, then I’m afraid that from the outside view (that is, outside of your “rationalist” bubble) you really look like yet another apocalyptic preacher.

  44. Sebastian_H says:

    The metacontrarianism point shows up as a subset of what I think of as edge cases vs. core cases. (I wish it had a pithy name.) Intellectuals especially love to talk about edge cases. They are interesting, complex, and unusual. But that obscures the fact that most things aren’t edge cases. Most things are core cases. They are routine, boring, and often understood by lots of people. An example might make it clearer:

    Murder. There are all sorts of edge cases: when is killing justified, exactly how close a causal connection between the action and the death is needed, what kinds of things diminish moral responsibility for a killing. Societies resolve these edge cases in all sorts of ways, but this obscures the fact that nearly all of what we think of as murders would be thought of as murders by almost any normal adult in almost any society. Focus on the edge cases and you might think that this human morality thing looks pretty random. Focus on the core cases and you might think there is a surprising degree of agreement.

    Part of what can be misleading when we study things is a failure to keep clear in our mind which disputes are about edge cases and which are core cases.

    AI risk shows some of this when people argue about the edge issues (exactly what form will AI take, how much time do we have, which risk is most likely). This convinces people that “people disagree” which obscures the core issues.

  45. alwhite says:

    I want to officially nominate the style of argument “perhaps this is true” as the History Channel Fallacy.

  46. baconbacon says:

    Something really bothers me about this post. Appeals to authority typically rub me the wrong way; I try to make allowances for that and don’t often reply to them directly. But this is a different case: there are no experts on superintelligent AI. Interviewing experts on current-level AI and asking them about superintelligent AI is sort of like interviewing a top horse breeder in 1800 and asking about the future of transportation. He is most likely to be wrong in virtually every way, not because he is dumb or biased, but because if he actually understood the coming technologies, they would already be here.

    • P. George Stewart says:

      Perhaps people can become more expert by mulling these things over more.

      I’m actually sceptical of AI in the strong sense (of the possibility of building self-aware machines intelligent beyond a human level – simply put, I think intelligence is a particular, concrete quirk of historically-interacting, playful social beings, not a detachable abstraction that can be instantiated in any form), but I think those who do believe in it ought to put more effort into investigating the negative possibilities, because it does seem obvious that if it is possible, and it is a disaster, it’s likely to be the biggest disaster in our entire existence as a species. And considering the resources put into it are comparatively low, as Scott points out, a bit more effort on that score wouldn’t hurt.

  47. Doctor Mist says:

    The reaction to this piece really puzzles me.

    Scott has told us about a thing that concerns him, a danger he fears will not be adequately attended to because other people seem to find it important to downplay it. As if on cue, he is met by a torrent of downplaying, a chorus of people with a seemingly urgent need to describe the danger as not worthy of his concern.

    Why? Granted that you are not worried about AI, why is it so important to you that nobody else be worried about it either?

    • Skivverus says:

      Signalling that you have, in fact, considered the matter, and that your ignoring it as a threat is not merely due to lack of awareness?

    • Nabil ad Dajjal says:

      Why? Granted that you are not worried about AI, why is it so important to you that nobody else be worried about it either?

      Speaking purely for myself, two major reasons and one minor one.

      The first major reason is that this isn’t just Scott describing his worry; he’s also explicitly calling for more money to be spent to assuage it. Given that it’s our money, either through voluntary donation or involuntary taxation, that he’s asking for, it’s very reasonable for us to object.

      The second major reason is that I strongly believe that Eliezer Yudkowsky is a fraud. The best case scenario of more money going to MIRI is that it’s merely wasted, but given that there don’t seem to be strong barriers between his various organizations I worry that some or all of it will go towards funding his lifestyle. Tricking well-meaning but naive guys into financing some guy’s harem and circle of sycophants is wrong.

      The minor point is that I dislike alarmist claims about highly implausible events. SSC is a great blog except for when it descends into AI hysteria. Pushing back on it forcefully hopefully provides a disincentive which will make it less frequent in the future.

      • Doctor Mist says:

        Given that it’s our money, either through voluntary donation or involuntary taxation, that he’s asking for, it’s very reasonable for us to object.

        Well, I’m usually quick to object to taxation, so I was careful when I read Scott’s post, which didn’t say word one about governmental subsidization.

        As for “our money, either through voluntary donation”, well, I call No Way. The number of causes asking me for money is freaking enormous, and I for one don’t feel the need to explain to those I don’t donate to why that is, even when they are causes I might deem worthy of support.

        Tricking well-meaning but naive guys into financing some guy’s harem and circle of sycophants is wrong.

        Is this Scott you’re talking about?

        SSC is a great blog except for when it descends into AI hysteria.

        Hmm. I’m trying to steelman this into a description of the blog as a big dinner party where we’re all friends and the host doesn’t even aspire to control the conversation, rather than a condescending put-down of our host and his weird obsessions. I’ll keep trying.

        • Nabil ad Dajjal says:

          You’re the one who asked, kind of odd to get offended when someone answers.

          When someone who is otherwise sober and reasonable starts asking for donations on behalf of a charlatan in the name of an implausible cause, what are you supposed to do? Trying to calm the worry driving that behavior is surely better than standing by silently in the hopes of not sounding condescending.

          Is this Scott you’re talking about?

          I thought it was fairly clear from context that I was talking about Eliezer Yudkowsky. Scott doesn’t even have a harem.

          • Doctor Mist says:

            Not offended, just not sure accusing our host of hysteria is the most effective calming technique you might have chosen.

            I thought it was fairly clear from context that I was talking about Eliezer Yudkowsky.

            Apologies: now that you point that out I do see it. You had been complaining about Scott asking you for money, and I missed the transition.

    • 6jfvkd8lu7cc says:

      Well, it is an AI FOOM scare post exactly a month after the previous one, and I don’t see much new in it. So it is just a super-AI open thread, right?

      Maybe making an effort to list the arguments against AI FOOM scenarios would also provoke the publishing of some better arguments for why AI FOOM makes sense at all.

      Or did you expect an illusion of unanimous consensus in the absence of real consensus? I think that homogenizing the expressed opinions on some topics, including FOOM scenarios, via social means did make LessWrong not interesting to read.

      Also, a significant part of anti-FOOM comments (including mine) is basically «yes, it is already civilisation-risk bad; so why do you need to add a low-confidence superintelligence assumption into the story?» Also, there are people working on describing the permissions and prohibitions for software failure damage control. Hopefully a few flamewars later it will become clear common knowledge what research in each part of software specification, validation, safety, and security (AI safety is a part of that larger field, right?) is useful to what other parts.

      • Doctor Mist says:

        Or did you expect an illusion of unanimous consensus

        I expect nothing, and certainly wouldn’t advocate self-censorship out of misguided politeness.

        I was just disappointed by how closely many of the comments matched the pattern he explicitly and cogently warned against.

        • rlms says:

          I could write an essay that explicitly and cogently predicts widespread dismissal of the theory that shapeshifting lizardmen rule the world. There would be nothing wrong with replying to that essay by arguing against the theory.

        • 6jfvkd8lu7cc says:

          I, in turn, was disappointed that the post lacked the idea that maybe you cannot secure an AI until you learn to secure a warehouse program.

          I do feel that my position is mentioned but that the summary misses the point, and that there is a direction of work which is critical regardless of the point causing the heaviest disagreement, so I do try to elaborate this position in the comments.

    • Bugmaster says:

      Granted that you are not worried about AI, why is it so important to you that nobody else be worried about it either?

      Because I think that AI is the best thing humanity has come up with since steam power, electricity, computers, and the Internet; and I’d like its development to proceed as rapidly as possible. AI has tremendous potential to revolutionize the world for the better — though, of course, it could be dangerous, just like any new technology. Buggy software in general is inherently dangerous, and I’m all in favor of making it less buggy.

      But instead, AI-FOOM alarmists are presenting AI as a whole different kind of existential threat, and are advocating for stopping all AI research until their poorly defined low-probability concerns are rectified. In doing so, they’re no better than any other kind of Luddite, and therefore I’m opposed to them.

    • Joe says:

      Because I think the expected utility from funding AI alignment research is probably negative.

      Briefly: I’m fairly doubtful of the likelihood of FOOM, because I think it depends on some implausible assumptions regarding how intelligence works — that it’s a simple homogeneous concept; that a human-level AI mind will be an algorithm rather than an enormous software system. Relatedly, I expect a non-FOOM, non-singleton, multipolar outcome to look far sunnier than AI-risk folks do — they see a grey goo scenario; I see a vast cosmic civilisation.

      Given this, I think trying to push for a singleton outcome in a world that, most likely, doesn’t naturally lead to one, is instead a dangerous attempt to create a world government. I’m not sure if I’m more terrified of the prospect that it’ll fail, or that it’ll succeed.

    • Doctor Mist says:

      Bugmaster, Joe-

      Thanks for providing cogent answers to my question. It seemed to me that to an AI skeptic MIRI should seem misguided but essentially harmless, as, for example, Tesla might seem to a global warming skeptic.

      It’s interesting to me that in both your cases, what I missed is that you’re not skeptics. You both seem to find the possibility of superhuman AI plausible, your concern is the possible consequences of actions taken in fear of a fast takeoff.

      I completely share your hopes for a fantastic world, though I am rather less convinced that this good future is overwhelmingly more likely than the bad one. But you’ve given me food for thought.

      • Bugmaster says:

        Just to clarify, I do find the possibility of superhuman AI plausible in the weak sense; for example, the AI that currently scans handwriting at the post office does so with superhuman accuracy and speed, AlphaGo can easily beat humans at Go, etc. However, I think MIRI’s definition of superhuman is something closer to “godlike”: an entity that can hack anything, convince anyone of anything, answer any scientific question, construct any imaginable device (and many unimaginable ones), etc. I don’t believe such an entity is plausible at all.

        The more powers you imagine an entity to have, the less plausible it becomes. In addition, it becomes increasingly more difficult to even define such powers. The Ancient Greeks were pretty smart, and it’s conceivable that they could’ve envisioned e.g. jet propulsion — but they could never imagine the Internet. However, what I find really exciting about AI (the real-world kind, not the superhuman MIRI kind) is that even mild improvements on human performance can yield tremendous benefits. Self-driving cars can literally save lives, and machine translation can enrich lives, just to name two examples; and there are many more. This is why, IMO, MIRI’s outlook combines a form of hyperopia (they can’t see the obvious short-term benefits) with myopia (they think they can see much farther into the future than they actually can).

        • Doctor Mist says:

          OK, correction noted.

          I don’t think you need to be worried that a MIRI with even a hundred times the budget would have any effect on the rollout of self-driving cars. But that horse is pretty nearly out of the barn now, so that may not be much reassurance — your concern is for the domain-specific applications that are now just barely visible on the horizon. But even these are not really what MIRI or Scott is talking about, and it strikes me as implausible that they would be materially affected either.

          The kinds of applications that are problematic are those where a full solution requires the sort of free-wheeling creativity and initiative that a human has, such as literally replacing certain kinds of white-collar workers. (Note that I mean “literally” literally — TurboTax is a great tool, and it lets me get by without a tax accountant, but nobody would claim it implements a tax accountant.)

          From your discussion, I gather even this sort of AI does not strike you as implausible, but also doesn’t strike you as the beginning of something that could get out of hand.

          To me, the problem is that this free-wheeling creativity and initiative are the special sauce that makes for lots of super-great AI agents in all walks of life, and they don’t have to be godlike to cause catastrophes in corner cases. I’m 100% in favor of self-driving cars and machine translation and automatic tax accountants and individualized medicine driven by protein-folding analysis and all the rest. I’m no Luddite.

          I think (well, let’s say I hope; I might be misjudging MIRI) that you are taking intuition pumps too literally. I’m sure you know that nobody is really worried about automation in Acme Paper Clip Company, but similarly I think “answer any scientific question, construct any imaginable device” is really shorthand for “By definition, we can’t really know what a superhuman AI would be capable of.”

          Even an AI who was as much smarter than Scott as Scott is smarter than me would give me pause, if it also had the sort of freewheeling creativity and initiative that would make an automated tax accountant useful. And I can’t see any natural law suggesting that twice as smart as Scott is an upper bound. Superhuman AI won’t work miracles, but “sufficiently advanced technology” and all that.

          My rule of thumb is to think about a much more advanced alien species coming to earth. Even if they were benign, this could be bad news for us; if they were indifferent, it could be even worse. The Fermi Paradox suggests to me that such aliens are not likely — unless we build them ourselves.

      • rlms says:

        The big difference between MIRI and Tesla is that the latter isn’t soliciting donations.

        • Doctor Mist says:

          Oh, grumble, all right. Replace Tesla with your favorite charity that fights climate change. I picked Tesla only because I couldn’t think of a nonprofit whose focus is entirely or even mainly climate change.

          I could also quibble about the fact that Tesla’s success depends at least partially on customers getting rebates from the government, but let that go.

          My point was just that opponents’ response to MIRI seemed over-the-top, like they were taking it personally, or felt threatened, or something else, which I just couldn’t fathom as a response to a tiny little think-tank working on an issue the opponents claim to believe is a non-issue. Whence the heat of the opposition?

          • rlms says:

            My favourite charity there would probably be Cool Earth (apparently they’re effective). I think the reaction to non-MIRI, MIRI-style AI safety stuff is roughly what I would expect climate change skeptics’ reaction to Cool Earth to be. With MIRI there is the difference that donations to them largely go to paying a small number of researchers’ salaries, which (if you don’t think they are doing anything useful) feels more annoying than paying to transfer land ownership in rainforests to indigenous people would (this isn’t a great comparison, because that sounds good in its own right, but even if it were a neutral thing like building more solar panels the same point would apply).

          • Doctor Mist says:

            Well, I’m a climate-change skeptic (I don’t question whether the climate changes but rather whether the rate of change we’re talking about actually matters), and as I predicted — yes, very convenient — I have to say Cool Earth looks misguided but essentially harmless to me. If somebody supports it, or encourages me in a low-key way like writing an article as Scott did, I shrug and go about my day. No skin off my nose.

            Your feeling of annoyance regarding how MIRI spends its donations would make sense to me if for some reason you were required to donate, but otherwise I just don’t understand it. There really are a lot of nonprofits in all kinds of endeavors that work in exactly that fashion.

            I think a lot of what people call “art” these days is mostly crap, and I think people who donate to its production are rubes, but it doesn’t offend me if they feel otherwise, or even if they think I’m a Philistine. See above re: my nose.

            So I think I am not quite seeing your mind even yet.

          • rlms says:

            It’s not just the existence of the charities, it’s seeing people argue for them. If Scott wrote a piece in favour of Cool Earth, would you not feel any inclination to argue against it in the comments?

          • Doctor Mist says:

            If Scott wrote a piece in favour of Cool Earth, would you not feel any inclination to argue against it in the comments?

            Yes, that’s what I said. If you look through various Good Causes that Scott has touted over the past couple of years, you’ll find more than one that struck me as sort of silly, but so what? (I don’t identify them because I don’t want this comment right here to be self-contradicting, heh.)

            The conclusion I seem to be forced to is just that I’m not as argumentative as some of the commentariat here. But that’s simply absurd — I’m extremely argumentative about things I care about.

            This makes me think that the people who think superhuman AI is impossible really care about that conclusion, and when I put it that way maybe the reason is obvious and I shouldn’t have wasted everybody’s time by asking.

          • 6jfvkd8lu7cc says:

            @ Doctor Mist: (I know I am making a silly remark) well, some argue that you should care about questions and try to find the most plausible answers… Caring about this question is actively promoted here, so no surprise!

            (hm, but somehow I still care more about people missing the real AI risk and maybe a bit about community effect of strong and specific beliefs in implausible scenarios)

          • If you want a psychological explanation, look to Yudkowsky’s ability to make enemies.

    • As with other forms of scepticism, people don’t want money to go to causes they think are fake, and they don’t want people promoting those causes to gain influence.

      Personally, I don’t rule out all AI risk, and I do think the professionals are the best people to evaluate it. AI risk is a kind of inversion of climate change denialism: the sceptics are inside academia, the believers are amateurs who have set themselves up in oversight.

  48. INH5 says:

    The whole AI Risk thing strikes me as, under the most generous possible interpretation, equivalent to Marie Curie calling for a conference of scientists during the 1910s to discuss the potential dangers of weapons that could be developed as a result of research into radioactivity.

    Even knowing from hindsight that radioactivity research really did result in the development of extremely dangerous weapons within a few decades, what are the chances of anyone at that stage of research being able to predict the form that these weapons would take, let alone devise effective methods of dealing with the threat posed by them?

    And in a number of ways this analogy seems overly generous. For example, at the time there was enough data to conclude that radioactivity involved the release of enormous amounts of energy, albeit over very long timescales, and therefore that radioactive weapons far more powerful than chemical explosives might well be theoretically possible. See the 1914 HG Wells novel The World Set Free for an example of contemporary speculation about “atomic weapons.” Whereas the proposed mechanisms by which a super-intelligent AI might be really dangerous tend to be wildly speculative at best, such as the idea that a sufficiently intelligent AI could be incredibly good at persuading people, to a degree that borders on mind control.

    • vaniver says:

      what are the chances of anyone at that stage of research being able to predict the form that these weapons would take, let alone devise effective methods of dealing with the threat posed by them?

      This is meant as a rhetorical question, but what are the chances? How would we know now how likely it is that we could positively affect the future by thinking about this? What experiments could we run?

      Whereas the proposed mechanisms by which a super-intelligent AI might be really dangerous tend to be wildly speculative at best, such as the idea that a sufficiently intelligent AI could be incredibly good at persuading people to a degree that borders on mind control.

      Bioengineered pandemic is the one that I point to typically. Most experts in the biorisk field think that the primary thing keeping humans alive is the lack of motive for someone to engineer something that kills everyone, not its biological implausibility. (That, and that relatively few people have the expertise.)

  49. GregQ says:

    But if by “caution” you mean you want as few astronauts as possible to end up as smithereens, it’s the way to go.

    No, if that is your definition of “caution” then you should never fly any spacecraft at all.

    Personally, I’m not willing to live in a rubber room, always afraid to go out. But that is what you’re calling for here.

    So you might want to reconsider your approach.

  50. poseidonian says:

    Not about AI, but about meta-contrarianism: not only is there the large community, but there are all sorts of little communities, and what counts as the meta-contrarian position is context-dependent, so these positions end up at odds with each other. I’m a humanities professor, ultimately an analytic philosopher. When among non-analytic-philosophy humanities professors, libertarian anti-Marxism is the meta-contrarian position to take in relation to their Marxist rejection of crackpot conservatism. But if you do that enough, you’re going to start hanging out with libertarians who think crackpot egalitarianism/socialism is the conventional wisdom and dogmatic libertarianism the sophisticated view, so talking about inequality, the influence of wealth on politics, etc. in a certain way becomes the meta-contrarian move.