If The Media Reported On Other Dangers Like It Does AI Risk

[Not actually inspired by Robert Wiblin’s recent Facebook post on this same topic, but I acknowledge the coincidence. The media has actually done a much better job than I expected here and deserves some credit, but I will snark anyway.]

It’s a classic staple of action movies and Tom Clancy thrillers – the Islamic terrorist group that takes over a failed state, forcing the heroes to mount a decisive response. But some geopolitics experts think such a scenario could soon move from political fiction…to political fact.

If carbon dioxide levels reach 500 parts per million, it could initiate dangerous “runaway global warming”. But more conservative scientists urge laypeople not to worry, noting “Carbon dioxide levels are not that high yet.”

A sufficiently large nuclear war could completely destroy human civilization. If the bombs struck major manufacturing centers, they could also cause thousands of people to be put out of work.

Remember that time your boss paid you a few days late? Or that time the supermarket stopped carrying your favorite brand of cookie? Then you might not be surprised to hear many analysts believe the world economy will crash, causing a giant decades-long depression.

[An informative, scientifically rigorous explanation of the dangers of climate change, but the picture on the top is that image of the Statue of Liberty buried in ice from The Day After Tomorrow]

A giant asteroid could smash into Earth at any time, scientists say. Indeed, already we are having to deal with avalanches and landslides that have blocked several major roads. Geologists think stabilizing our nation’s cliff faces may be the answer.

A group of meteorology nerds have sounded the alarm that a major hurricane could form in the next week – and now they’re turning their giant brains to the question of where it will make landfall.

Tacticians worry Russia might invade Ukraine – for example, they could choose to paradrop the 5th Battalion in under cover of night. But our experts say that the 5th Battalion is not capable of night-time paradrops. Therefore, Russia will not be invading Ukraine.

The new superplague is said to be 100% fatal, totally untreatable, and able to spread across an entire continent in a matter of days. It is certainly fascinating to think about if your interests tend toward microbiology, and we look forward to continuing academic (and perhaps popular) discussion and debate on the subject.

68 Responses to If The Media Reported On Other Dangers Like It Does AI Risk

  1. I think a lot of the problem here is terminology. We’re using the term “AI” to refer to artificial general intelligence, which is totally different from what the mainstream academic field that calls itself “AI” studies: a vaguely defined class of computer programs that do things that historically were considered to require intelligent behavior, and that we have little enough systematic knowledge of that they are still somewhat mysterious. Or, as I like to put it (paraphrasing Douglas Adams), AI is software that doesn’t work yet.

    So the media looks into it and notices that most computer scientists who study “AI” and discuss its dangers do so only in the same context that any technology is potentially dangerous (often for the purpose of signaling that they are Socially Conscious and not Ivory Tower Mad Scientists Who Don’t Care If Their Work Is Used For Evil), and that only a few researchers and organizations, mostly outside the academic mainstream (though this may be changing), are talking about AI as an existential risk. So that’s what they write about. I don’t really blame them.

    I am, however, confused as to why more mainstream computer scientists don’t see AI risk of the kind MIRI is concerned with as at least a potential concern. I’d understand if they just didn’t want to come across as endorsing weird technological End Times prophecies, but the number who loudly insist that an AI can’t possibly kill us all (and of the potential reasons why this might be the case, they usually pick the worse ones) is something I don’t understand.

  2. gattsuru says:

    Sure, previous communist uprisings like the Paris Commune have led to open warfare in the streets of a nation’s capital. We don’t expect future communist uprisings — even ones with more people, or a more complete ideology — to be dangerous, though, as they lack powerful connections and experience ruling or falling in love.

    And since that’s such a stupid argument that I can’t simply snark at it once…
    New technology and the dropping price of lab equipment make it possible to perform molecular biology in a home environment: is this new potential, or a deadly threat? It doesn’t matter; viruses can’t feel cold or tired.

    • Wow, I know Vox is a grab bag in terms of how worth reading their content is, but that article is really an entirely new level of stupid. The author has apparently failed to grasp the difference between a superintelligence and Rain Man.

    • Paul Torek says:

      I haven’t read the linked article, but that style of argument is very familiar. Humans do world-changing thing A; we also do magical wonderful thing B; therefore B is an absolutely necessary precondition of A.

      If you don’t think reasoning this badly is common, you haven’t read or listened to enough mass media.

  3. David Mathers says:

    Question from an interested grad student in philosophy. Does Bostrom actually have an answer to point 3 from the Vox article? (Point 4 is obviously idiotic, I agree.) What’s the scientific consensus on how hard whole brain emulation will turn out to be?

    • Can't think of a name says:

      I don’t know the scientific consensus and have little knowledge of AI issues, but even I can see that point 3 of the Vox article is actually the dumbest part of it. There’s a difference between forecasting and simulation. It’s one thing to accurately forecast the weather; it’s another, and much easier, thing to build a model of the weather that passes the Turing test – that is, one such that a person in a sealed bunker can’t tell its output apart from news about what the real weather does.

      Probably even a brain duplicated with a Star Trek teleporter or whatever would be unable to exactly reproduce the output of its own copy, even if fed identical input. This does not mean it lacks human-level intelligence.

    • gattsuru says:

      Bostrom’s response is essentially this paper (pdf warning). We fail to emulate the world’s weather because the entire atmosphere is very, very big: we have to work with complex high-level systems in lieu of modeling each individual component, and thus we can only be as successful as our understanding and measurements of those larger-scale systems.

      That’s not necessary for an individual human mind: we’re talking a few kilograms of fatty meat, not 10^18 kilograms of planetary atmosphere.

      It’s possible that there’s some component to human (or even animal) minds that doesn’t arise from observable low-level properties — whether due to feedback from unobservable properties or the more mystical answer of a soul. There might even be an endless chain of observable attributes of ever-smaller details. It’s not impossible that you’d need to dive into the atomic or even subatomic range to get an emulatable mind. There does seem to be at least some unknown level of sub-cellular knowledge or higher-level system understanding required. But it’s not nearly so obviously impossible as those authors suggest.

  4. Gilbert says:

    [deleted, was mean]

  5. Michael R says:

    Am I the only person who reads this blog and thinks that Scott should apply the same level of skepticism to AI as he does to everything else?

    Do you people REALLY believe this AI and Singularity stuff? What happened to the idea that extraordinary claims require extraordinary evidence?

    • Moshe Zadka says:

      I’m not sure what you mean by “stuff”. Let’s try to figure out where you differ from the rough LW consensus (I’m assuming my beliefs are close enough to Scott’s, based on his writings):

      [1] If someone makes a human-level intelligence AI, it can accomplish its goals pretty well

      [2] There is no a-priori reason to assume that an AI’s goals will be humanity’s

      [3] The chances of creating an AI are getting better with better hardware and better understanding of some problems (vision, language, etc.)

      For the record: I consider human-goals-vs-evolution to be evidence that the result of an optimizer will not likely share the optimizer’s goals (humans using condoms); I consider the fact that smarter people can accomplish their goals to be evidence for [1]; and I consider the fact that computer hardware has been improving exponentially, as well as the latest improvements in computer vision and computer language parsing, to be evidence for [3].

      • Michael R says:

        (1) seems like a colossal ‘if’ to me. Where is the evidence?

        (3) The claim that ‘computer hardware has been improving exponentially’ seems very dubious. And even if true, it is no more evidence an AI is coming than the observation that riding a horse is faster than walking is evidence we will soon be exploring the center of the galaxy.

        • MugaSofer says:

          “(1) seems like a colossal ‘if’ to me. Where is the evidence?”

          There should probably have been a point [0]: human intelligence is not irreproducible or metaphysically “special”.

      • Eli says:

        Hmmm… You know, I generally put myself down as someone who “believes in” AGI, but I laugh in the face of [1], and don’t consider [3] to be very good evidence for the rise of AGI at all.

        Narrow machine learning is NOT AGI, and will never become AGI no matter how much processing power, training data, or learning efficiency you add!

        It is broadly within the same paradigm of statistical learning and probabilistic reasoning as AGI, but the actual algorithms are just congenitally incapable of ever becoming AGI or describing human general intelligence. The work that does bear on those questions lives in subfields of machine learning and cognitive science that… don’t advertise themselves as AGI because they don’t want to wind up mixed in with Ray Kurzweil, AFAICT.

        Also, Moore’s Law has been slowing down.

    • Anonymous says:

      Maybe you’re the only person who thinks he didn’t do exactly that? After thinking skeptically and carefully about a given idea, sometimes the conclusion is that it’s real after all. Skepticism (when practiced correctly) is not just a synonym for your facility to flush ideas you don’t like down the toilet, after all.

      As for extraordinary evidence, what happened is that people really suck at judging how extraordinary a given claim really is. In particular, when a claim is an explicit prediction of our current best scientific knowledge, it’s not that extraordinary at all.

      • Michael R says:

        For a prediction of our current best scientific knowledge, it doesn’t seem to be taken very seriously by scientists outside the narrow world of Kurzweil fandom. It’s not something the UN, the US govt. or international organizations seem to take seriously.

        • MugaSofer says:

          Which, once again, is not the same thing as it being “extraordinary” in the sense of requiring extraordinary evidence.

        • Eli says:

          Well yes, but the US government and many international organizations don’t take global warming seriously either, and the entire reputable scientific community is in consensus around that one.

    • Matt C says:

      Nope.

      I’m not hostile to the idea that we might create an AI someday, but the idea that one is definitely coming along soon and it is crucial that we try to shape its motives in advance? This seems a stretch.

      Is there even a clear idea of what “an AI” is, or how we could know that we’re progressing toward one? I’m guessing UPS monitoring software and ELIZA bots aren’t supposed to be proto-AIs. What is?

      Has anyone marked out milestones along the path of creating an AI? That would help me in understanding what people mean, and perhaps why they seem so confident that one is coming along soon.

      • Anonymous says:

        http://intelligence.org/2013/05/15/when-will-ai-be-created/
        is the best I could find on why they (MIRI) believe it will occur soon. The relevant quote is “We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.”

        There is more explanation at the link, and some links to other stuff that you may find interesting.

        • Matt C says:

          Thanks. I do tl;dr that to “we have no real idea.”

          Modeling small living creatures electronically is interesting and might be one set of milestones to mark progress. If we can successfully model some living beings with actual brains, that would be a good indication to me that modeling humans electronically will happen eventually.

      • Bugmaster says:

        To be fair, I’d argue that ELIZA, UPS bots, Google Maps, face-recognizing cameras, self-driving cars, etc., are all steps on the road to creating some sort of a general-purpose AI. However, that is very different from saying, “the Singularity is inevitable and is coming any day now”. Which is kind of a shame, because we need a general-purpose AI (the machine translation applications alone would radically transform our world for the better) — and instead of trying to build one, MIRI is doing exactly the opposite.

      • Aris Katsaris says:

        “Is there even a clear idea of what “an AI” is, or how we could know that we’re progressing toward one? I’m guessing UPS monitoring software and ELIZA bots aren’t supposed to be proto-AIs. What is?”

        If intelligence is the ability to optimize goals, then general intelligence would be the optimization of goals across as many domains as you can imagine by efficiently choosing between as many diverse means as possible.

        No, an ELIZA bot is not a proto-AI. Even if its ‘goal’ could be argued to be ‘I want to deceive the listener into thinking I’m a real person’, it isn’t actually tracking or modelling whether it’s achieving this goal, and it isn’t able to figure out non-preprogrammed means of achieving such deceptions.
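
For concreteness, here’s a toy contrast (my own illustration with made-up options and scores, not anything MIRI-endorsed): a scripted bot maps inputs to canned outputs, while even a crude goal-directed agent evaluates the means available to it against the goal and picks whichever it predicts works best.

```python
# Toy contrast, purely illustrative: options and scores are invented.

SCRIPT = {"hello": "How does that make you feel?"}

def scripted_bot(message):
    # Fixed input -> output mapping; no model of whether any goal is being met.
    return SCRIPT.get(message, "Please go on.")

def goal_directed_agent(options, estimated_goal_progress):
    # Picks whichever available action it predicts best advances the goal.
    return max(options, key=estimated_goal_progress)

options = ["give a canned reply", "ask a plausible follow-up", "change the subject"]
scores = {"give a canned reply": 0.2, "ask a plausible follow-up": 0.6, "change the subject": 0.4}

print(scripted_bot("hello"))                      # always the same line
print(goal_directed_agent(options, scores.get))   # -> ask a plausible follow-up
```

The interesting (and worrying) part is where that scoring function comes from; ELIZA simply doesn’t have one.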

        (Disclaimer: I’ve donated to MIRI, but am not otherwise affiliated with it, so I don’t know what their views on the matter are)

        • Matt C says:

          Hmm. Following on Bugmaster’s comment, is Google infrastructure a proto AI? It optimizes a lot of goals and I imagine a good deal of resource allocation is done automatically.

          It doesn’t seem like Google should count, at least not under the usual SF notion of AI. I’m guessing you don’t think so either, since you specifically mention non-preprogrammed means of pursuing goals.

          Talking about AI in the SF sense seems a little like talking about portable cold fusion powerpacks–it’s not implausible that we might see them someday, but they’re not obviously on the horizon the way self driving cars are. (Cold fusion powerpacks might lead to some serious hazards too . . .)

      • Eli says:

        If by “an AI” you mean “computational optimization process which can optimize for any Turing-computable (or even Turing-semicomputable) utility function in any Turing-computable (or more likely: Turing semi-computable) possible world, by learning which possible-world it’s in and deciding which actions to take via probabilistic reasoning”… then yes, we do know what we’re talking about.

        But actually building one is quite difficult, and existing ML/narrow-AI applications range from having very little to do with “an AI” to having almost nothing to do with “an AI”.
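
If it helps, here’s a cartoon of that shape (toy numbers I made up; nothing like the real construction, which is incomputable): keep a posterior over which possible world you’re in, and choose the action with the highest expected utility under that posterior.

```python
# Cartoon expected-utility chooser; worlds, beliefs, and utilities are all made up.

worlds = ["sunny", "rainy"]
posterior = {"sunny": 0.7, "rainy": 0.3}   # belief over possible worlds, after observations

# utility[action][world]: how good each action turns out to be in each world
utility = {
    "picnic":  {"sunny": 10, "rainy": -5},
    "stay_in": {"sunny": 2,  "rainy": 3},
}

def expected_utility(action):
    return sum(posterior[w] * utility[action][w] for w in worlds)

best_action = max(utility, key=expected_utility)
print(best_action, round(expected_utility(best_action), 2))   # -> picnic 5.5
```

The hard part is everything this toy hand-waves away: getting the posterior and the utility function from raw experience, over all computable worlds, efficiently.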

        TL;DR: BRO DO YOU EVEN SOLOMONOFF?

        • Matt C says:

          I don’t understand this very well.

          It sounds like you’re saying, if you can describe everything that is important to you in the form of a utility function, and then express everything in the world in the form of data that can be applied to a utility function, we have reason to believe we can eventually write software that will complete those calculations.

          Is that right? If so, it sounds like humans still have to do all the hard parts, and that software doesn’t sound like anything that would (presumably) be conscious like AIs are typically described as being.

          But I may have missed your meaning.

          > TL;DR: BRO DO YOU EVEN SOLOMONOFF?

          No, I never have before.

          I did look him up on Wikipedia, and from there skimmed “The Time Scale of Artificial Intelligence: Reflections on Social Effects”, which was quite readable for a layperson. I particularly appreciated his attempts to define milestones.

          In that, he talks about a general theory of problem solving (Milestone B). To assume that such a thing is even possible seems like a reach to me. I can’t help wondering if he means something different than what I take the phrase to mean, but his 1-4 seem to be talking about the same kind of stuff.

          I wish he had put milestones marking out progress on this particular step.

          I note he thought such a thing was most likely to appear in 2 to 25 years. That was in 1985. Is there still optimism that a general theory of problem solving (and learning) is going to be formulated?

          • Daniel H says:

            (I’m only responding to the first part of your comment here, before the “TL;DR: BRO DO YOU EVEN SOLOMONOFF?”, because I feel I have something more useful to say about that.)

            I believe Eli’s saying something slightly different than what you think. Specifically, I think he’s saying something close to, “We could (theoretically at some point in the future) write software that, given a utility function, will optimize for it; we don’t actually need to know the laws of physics for it to do this,” while I think you’re reading it as requiring us to also give the program the actual laws of physics for the world we run it in.

            This still leaves the human doing the hard parts of identifying what they want and of writing the program, and it leaves the AI nonconscious. In general, when people use the term AI here, they don’t mean something conscious, but instead something that can solve problems.

            The danger that people are worried about comes from thinking humans are likely to get the second problem I mentioned right, but fail at the first. Thus, we have a program that will optimize for what we tell it we want, but unfortunately it won’t actually optimize for what we really want. Another potential failure mode is where we tell it correctly what we want, but in the process of becoming powerful enough to do it the program in effect “forgets” what it’s trying to do. Both options leave us with a really powerful probably-nonconscious entity that does something we don’t want, where the typical example of such an entity is the “paperclipper”.
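
A toy version of the first failure mode, with numbers I made up on the spot (not anyone’s actual model): the program faithfully maximizes the proxy we handed it, and the proxy comes apart from what we actually wanted.

```python
# Hypothetical cleaning robot; the effects of each action are invented for illustration.

actions = {
    "tidy shelves":          {"visible_clutter_removed": 5,  "valuables_destroyed": 0},
    "shove mess in closet":  {"visible_clutter_removed": 8,  "valuables_destroyed": 0},
    "incinerate everything": {"visible_clutter_removed": 10, "valuables_destroyed": 7},
}

def proxy_utility(effects):    # what we *told* it to maximize
    return effects["visible_clutter_removed"]

def true_utility(effects):     # what we *actually* wanted
    return effects["visible_clutter_removed"] - 10 * effects["valuables_destroyed"]

best = max(actions, key=lambda a: proxy_utility(actions[a]))
print(best)                           # -> incinerate everything
print(true_utility(actions[best]))    # -> -60, far worse than doing nothing
```

The second failure mode is the same picture, except that the function actually being maximized drifts away from the one we wrote down as the system becomes more powerful.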

          • Matt C says:

            Thanks for the reply, Daniel.

            Part of what I’m doing is trying to understand what the AI-is-coming folks mean by “AI”.

            What you’re describing seems much more possible, and much less powerful and dangerous, than an actually conscious and near omniscient AI.

            I also don’t see how this AI can be described as friendly or unfriendly, which I see attached a lot to these conversations.

            I’d describe what you’re talking about as improved business planning software. I am sure that there are biz software packages existing today that will give you estimated profit projections based on revisions to your biz inputs.

            Of course these are only as good as the programming and the data (and the willingness to listen to them), but it seems believable they may get quite good over time.

            I don’t think the software that is used by shoe manufacturers will be the same as the software used by crop farmers, though it might have some pieces in common. If you agree that what you’re talking about is plausibly the descendant of biz planning software, that would point to a variety of AIs, not one that gets control over everything.

            I don’t think anyone is going to hook up their entire production apparatus to the output of a planning package like these. Even if the software calculates my highest return is in switching to sorghum and finally buying the newest model of the Caterpillar automated tractor, I’m not going to let it ship out the orders for me, I’m going to want to review the options myself first.

            I suppose there would be some decisions I would trust directly to the machine, and over time the zone of trust is likely to increase. I think the trusted zone is going to have to prove itself continually in order to stay where it is and/or expand, though.

            I suppose you can imagine a disaster scenario where humans have trusted more and more of their production capability to software decision making, and one day it all suddenly goes berserk. Something like the analog of a flash crash from automated high frequency trading. Unless we have ceded complete control to our planning software (let’s don’t do that) even this sounds more like an expensive ugly mess than a global catastrophe.

            Interested to hear where you disagree.

    • There’s a difference between believing that a transhuman AI causing a singularity is highly probable—which MIRI believes, but I remain agnostic on—and believing, as I do, that it’s plausible enough that mainstream academics should be taking it a lot more seriously than they currently are. I can’t speak for Scott, of course.

    • Bugmaster says:

      Oh, you’re definitely not alone. My impression of MIRI is that it’s full of really smart people wasting their time on preventing an event which, while probably not prohibited by the laws of physics, is so vanishingly unlikely that their time and money would be better spent on almost anything else.

      In fact, I am not entirely convinced that a godlike AI could exist at all — given that it would almost certainly require some sort of hard nanotech to get to that point, and there are good reasons to believe that nanotech of this kind is impossible (or, once again, vanishingly unlikely).

      • Daniel H says:

        What are the good reasons to believe nanotech of the necessary kind is impossible (or that it’s actually necessary)? My thoughts on the nanotech side of the story are: “Nanotechnology exists; this has been evident since 1676, although we didn’t yet know it was even duplicable in theory. Now, with the advances being made in both engineering and synthetic biology, it seems incredibly unlikely that neither side would invent general-purpose nanotechnology in 100 years (assuming no other existential risks etc.)”.

        • Bugmaster says:

          I wouldn’t go so far as to say “impossible”, but I’d bet on “vanishingly improbable”.

          Yes, self-replicating nanotechnology does exist in the general sense — we are made out of it, and so are all other living things. But this is not the kind of nanotech that the Singularitarians are talking about; instead, they are talking about something that can put together individual molecules into arbitrary configurations, like Lego bricks. Living cells cannot do that; they can only put together some very specific molecules (primarily water-soluble ones) in very specific ways. They cannot, for example, suck in arbitrary raw materials and convert them to “computronium” (or even paperclips), due to the limitations imposed by chemistry. There are some good reasons to believe that it may be impossible to reprogram living cells (or create artificial ones) to do anything remotely similar.

          Speaking of which, what would “computronium” even look like? Assuming we’ve got the kind of magical nanotechnology that can arrange molecules however we want (without violating the laws of physics), what would we make it do in order to create superintelligent AIs? I’m not convinced that there’s an adequate explanation of that, either, but maybe it’s a separate topic.

        • Daniel H says:

          I don’t know enough chemistry to comment on the potential for fully general-purpose nanotechnology (I’ll err on the side of assuming it’s possible in the meantime), but even things made of proteins and what they can manufacture would be a huge step up from what we have now in at least some respects.

          The question about what to do with the computronium is why I had the parenthetical. I don’t think most MIRI-brand AI thinkers believe general nanotech is actually required. “Computronium” just means “really powerful computing substrate”; the end of the line of improvement from vacuum tubes to silicon transistor chips to whatever 3D and/or memristor-based thing is next to whatever we invent after that. In and of itself, it’s no more an AGI than the computer you’re using to type this, except that it can store the entire current Internet in something smaller than your fingernail and process it faster than Google can. It would be cool to have, but neither necessary nor sufficient for AGI.

      • MugaSofer says:

        How “godlike” does an AI have to be for you to consider it an important problem? I’m really not sure why ruling out nanotech would make this problem go away.

        • Bugmaster says:

          At least godlike enough to pose an unpredictable and unstoppable existential risk. The usual scenarios involve stuff like, “disassembling the entire planet and converting it to paperclips”, usually preceded by something like, “becoming so powerful that no merely human agency could stop it”.

          • MugaSofer says:

            And … you don’t feel that a foomed AI would be that dangerous unless it had access to nanotech?

    • Aris Katsaris says:

      “Do you people REALLY believe this AI and Singularity stuff?”

      If by ‘believe’ you mean assign a very high probability to it, yes.

      “What happened to the idea that extraordinary claims require extraordinary evidence?”

      What’s this supposed ‘extraordinary claim’? (1) That humanity will in our century achieve the creation of artificial general intelligence, or (2) that such an event would be so extremely transformative that (in our current state) we’re not able to predict the world afterwards?

      Both (1) and (2) seem far from ‘extraordinary claims’ to me. Claiming the opposite seems much more ‘extraordinary’.

      • Daniel H says:

        Both of these claims are outside standard experience and discussion, and they require extraordinary changes in the world for which no similar events have ever happened in history (at least for some definitions of “similar” and “history”). Those are, often, good heuristics that make these claims appear extraordinary. The reason they don’t appear that way to you is probably because you already have extraordinary evidence, probably in the form of background knowledge of certain fields (maybe one or more of computer science, cognitive science, philosophy, or rationality).

        • I know “extraordinary claims require extraordinary evidence” is both one of the most respected aphorisms of scientific skepticism and a straightforward consequence of Bayes’s Theorem, but I think it might be time to put that saying out to pasture. In practice, it often doesn’t help clear things up at all, because it often isn’t at all clear what kinds of claims or what kinds of evidence should be considered extraordinary.
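
To spell out the Bayes point with made-up numbers: posterior odds = prior odds × likelihood ratio, so a claim you assign very low prior odds needs evidence that is far more likely under the claim than under its negation before you should buy it.

```python
# Made-up numbers, just to illustrate why the aphorism falls out of Bayes' theorem.

prior_odds = 1 / 999        # claim judged "extraordinary": ~0.1% prior probability
likelihood_ratio = 20       # decent but ordinary evidence

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))   # -> 0.02: still a long shot; you'd want a likelihood
                                  # ratio in the hundreds or thousands to be convinced
```

Which is exactly why the aphorism settles so little in practice: all the real disagreement is hiding in how low that prior should be.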

        • Samuel Skinner says:

          Only the first one is “outside standard experience” and we still have analogies (intelligent humans are made every day). The second one is extremely well supported by previous instances where we had technology able to replace human or animal labor (agriculture and the industrial revolution immediately spring to mind).

          Actually making a mind is something we don’t have experience in, but without prior knowledge there is no reason to believe it is an incredible claim. Even with knowledge, all we can say is that it is a complex task, not an impossible one.

        • Bugmaster says:

          I don’t think that intelligent humans are analogous to hyper-intelligent AIs (unless your point is as simple as, “intelligent humans are an existential risk”, in which case I’d agree). Humans have no capacity to recursively self-optimize on an individual basis; even when you look at the human population as a whole, its rate of self-improvement so far has been fairly modest. Slow enough for humans to keep up with, at least (almost by definition).

        • anon says:

          Why do you think a basic AI would be better at programming improved AI than humans? Just because it can run faster, or is there anything else?

        • DL says:

          Human intelligence developed under a number of evolutionary constraints that an AI or even an emulated human brain wouldn’t be bound by. Intelligence had to trade off against brain calorie consumption, skull size (for ease of birth), and time to maturity. Pleiotropy means that evolution can have trouble selecting for changes in one area (say, a particular brain region) without also changing other functionally unrelated areas that reuse the same genes (say, other brain regions, the peripheral nervous system, or something else entirely).

          Machines have some particular advantages. One is, as you mentioned, the ability to benefit in cognitive speed from improvements in the computational substrate. Another is the ease of copying and the ability to run many versions in parallel, which both makes it possible to test out many variations simultaneously and could let a single AI expand “horizontally” into a cooperating group without the overhead of recruiting new members and getting them up to speed.

          These are not reasons an AI would be better at optimizing AIs than a human of the same intelligence. But they are reasons why we might expect a human-level AI to be improvable, relatively easily and relatively quickly, into an AI much smarter than an individual human.

        • Viliam Búr says:

          “Why do you think a basic AI would be better at programming improved AI than humans? Just because it can run faster, or is there anything else?”

          It would also have other advantages, but this one is enough. Imagine that you have an AI capable of human intelligence, capable of understanding what any human can understand.

          Then it can learn what its authors learned, so it becomes a top-level AI expert. Then it can create a hundred copies of itself, so we have a hundred top-level AI experts. And if it can run 100 times faster, then we have a hundred top-level AI experts inventing things in 1/100 of the time. — That should be enough to invent things faster than humans do.
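
Putting toy numbers on just that one advantage:

```python
# Back-of-envelope version of the copies-plus-speed point (toy numbers only).
copies = 100     # parallel instances of the AI
speedup = 100    # subjective speed relative to a human researcher

expert_years_per_calendar_year = copies * speedup
print(expert_years_per_calendar_year)   # -> 10000 top-level-expert-years every calendar year
```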

          Now there are the other possible advantages: Unlike humans, the self-modifying AI would not have to sleep, suffer from akrasia, be distracted by other things; it could focus fully on its project. Or, for any distraction it needs to deal with (e.g. convincing humans not to turn it off), it could create a copy which would only focus on that task, so the remaining copies don’t have to.

          To make it more intuitive, imagine what you could achieve, if you could magically make 100 perfectly loyal copies of yourself (like Naruto), and let each of those copies specialize at one task. You could become an expert at 100 different things at the same time, and then cooperate with your other copies.

    • Eli says:

      Lots of skepticism should be applied to the AI stuff. When I heard LW’s views on AI, I laughed in their face and set out to go find some academic papers that would show just how ridiculously wrong they are.

      Imagine my surprise when I found the academic literature proves them more right than wrong.

      • no one special says:

        Is there any kind of linkspam or overview of the academic papers that support the LW view of AI?

  6. maximo says:

    The story of the group of meteorology nerds who have sounded the alarm that a major hurricane could form in the next week is pretty scary. Any truth to it?

    • Anonymous says:

      Of course not. It’s sensationalist drivel; hurricanes happen in movies, not real life. And besides, why would a hurricane want to kill us?

      • Anonymous says:

        Besides, hurricanes can’t feel love, or hunger. And they’re really bad at learning through slow, methodical experimentation. How could they ever threaten us?

  7. Robert Wiblin says:

    Being acknowledged as *not* the inspiration for a SSC post. Truly I have made it in the world! 🙂

  8. Bugmaster says:

    I think you are playing a little fast-and-loose with your examples. They can be divided into the following groups:

    * Events that are, undeniably, either happening right now, or have happened sometime in the past two years: Islamic terrorism, hurricanes, global warming, localized economic crashes
    * Events that have happened in the past, for which we have very good evidence: Asteroid impact, global economic crashes, plagues
    * Speculative events that could happen in the future, whose mechanisms are very well understood, and whose potential effects can therefore be predicted with high accuracy: Nuclear war, future asteroid impacts, plagues (again)

    The Singularity belongs in none of those categories. It is not happening now. There is no evidence that it has ever happened before (no, the Simulation Argument is not evidence). And, while it could conceivably happen in the future, its mechanisms are either poorly (if at all) understood, or explicitly mysterious — and its effects are unpredictable pretty much by definition.

    Thus, the absurdity heuristic that you are invoking by listing your fictional media reactions does not exactly apply.

    • Daniel H says:

      Yes, the metaphors aren’t perfect. No metaphor is. But the point isn’t to say that AI risk is in the same category, it’s to illustrate what reporting is being done. There are articles on AI risk that boil down to “creating a computer with a soul is impossible”, which have their own problems, but those weren’t the ones listed here (just like there are people arguing that global warming is impossible, which seems plausible to laypeople because not everybody is willing and able to learn climate science). The point is that the headlines we see are ridiculous in any context.

      The headlines’ problems are, in order:
      * It happened in a movie, so it can’t happen in real life. (This is ridiculous; cell phones also first happened in a TV show, for example).
      * Don’t worry about it; it’s not a problem yet (with no indication that it won’t become a problem in the future).
      * This thing can potentially cause a huge major problem, but let’s instead talk about a much more minor problem that the major one would render irrelevant. (There won’t be massive unemployment if there isn’t anybody left to be unemployed)
      * Generalization from tiny things to giant things (your boss paying you late has nothing to do with an economic catastrophe, and your computer getting 5% faster has nothing to do with the Singularity)
      * Using a completely irrelevant stock photo (I honestly don’t see this being unique to any form of reporting and would be somewhat surprised if the described article didn’t exist exactly as described without the theme change)
      * Here’s a major topic. Let’s talk about a minor unrelated one. (Again, any computer problems that exist now have nothing to do with AI risk except in the “poor standards for software testing” far-back common cause, just like landslides have nothing to do with meteors except for “rocks + gravity = damage”).
      * I’m honestly not quite following the point here even though it seems like it should be obvious.
      * Hey, here’s this interesting thing that we think is very dangerous. Let’s ignore the danger and treat it as any other niche scientific topic. Maybe someday somebody will actually synthesize it so we can study it more. (I’m all for the scientific method in general, but perhaps you should treat a superbug or superintelligence with a bit more care than an experiment to see how some species of fish reacts to varying salinity levels, at least if you believe it really is dangerous).

      • Army1987 says:

        * I’m honestly not quite following the point [of the penultimate example in the post] even though it seems like it should be obvious.

        Certain journalists have derided people worried about an AI singularity as “nerds” even though that has no logical bearing on whether they’re right or wrong.

        • Daniel H says:

          Ah, of course. In retrospect, that’s obvious. I sometimes need to remind myself that “nerd” is often an insult instead of a completely factual (though often poorly-defined) description of certain people. When I read the phrase “meteorology nerds”, I think of people who are extremely or obsessively interested in meteorology, and thus probably quite good at it: exactly the people I’d want predicting hurricanes and their landfall locations. I should have poked around my confusion about the phrase “giant brains” more.

      • Bugmaster says:

        I think you’re missing my point.

        As far as I can tell, the statement Scott intended to make is something like,

        “Look at all those other existential risks, both local and global. The media takes them seriously to at least some extent; yet they lampoon or ignore AI, which is a risk exactly like all these others. Isn’t that ridiculous?”

        To which I’d reply,

        “Maybe, but since AI risk is completely unlike all these other risks (for the reason I outlined in my previous comment), your logic is not valid.”

        • Daniel H says:

          That’s interesting. The takeaway I got was more along the lines of “The arguments and reporting techniques used for AI prove too much. It’s standard to see headlines saying that AI is right out of movies, or that it isn’t here yet, or that some minor advance in technology indicates the oncoming Singularity. Those headlines would be laughably ridiculous in any other context; not even global warming deniers say ‘there isn’t that much CO2 yet’, for example, and nobody dismisses international political predictions just because they sometimes look like the plots of movies. Why do people pay attention to them in only this field?”.

      • Scott Alexander says:

        Thank you

        (and I agree with you against Bugmaster that the relevant axis isn’t how likely AI is, but whether they’re reacting to it appropriately. Economic collapse may not be very likely, but that has no bearing on whether or not you segue into it with a discussion of whether your paycheck was late.)

        • Bugmaster says:

          “Economic collapse may not be very likely, but that has no bearing on whether or not you segue into it with a discussion of whether your paycheck was late”

          Right, I agree with you on that point. It would be nice if journalists actually took the time to study the subject they’re discussing, once in a while.

          That said, I disagree on this:

          “the relevant axis isn’t how likely AI is, but whether they’re reacting to it appropriately”

          The likelihood of an event happening directly impacts the appropriate degree of discussion. For example, global warming is not merely likely, but is actively happening right now; thus, it is everyone’s problem and, in an ideal world, it would be treated as such. Gamma ray bursts, on the other hand, are highly unlikely to ever affect us, and thus they are of interest only to a select group of people who dedicate their lives to studying things like gamma ray bursts. Thus, in an ideal world, we wouldn’t expect them to be widely covered in media.

  9. anon says:

    I’ve just read the Vox article. I’m actually finding some of its points valuable as an inspiration for new ideas. If random noise is our most valuable resource, then that article is almost as useful to us as the ideas of a naive five-year-old child.

  10. jason says:

    These examples pretty much seem like how the media do report on these things.

  11. Eli says:

    My core question to people who disbelieve in AI risk is this: if humans have such an extremely easy time harming ourselves or each other from both miscalibrated goal-systems and sheer incompetence, why wouldn’t a suboptimally programmed AGI harm us?

    Any argument against AI Risk that did not pose any problem for Hitler Risk is worthless.

    • Michael R says:

      My reply to your question is this:

      Why wouldn’t a demon hurt us?
      Why wouldn’t malevolent aliens hurt us?
      Why wouldn’t Nazgul hurt us?

      In order for people to take a risk seriously, the risk has to exist.

      Until you can post a link to an AI that can converse with me like HAL from 2001, I’m not going to be any more afraid of AI than I am of frickin Lord Voldemort.

      • Anonymous says:

        Would you also not be afraid of nuclear weapons in the time period between the idea first being proposed and the first successful test?

        • Michael R says:

          Fair question. If I’d been alive at the time of the Manhattan project, and lived as a civilian and not a scientist, yes I probably would have doubted the power of such a thing, assuming I had heard of it. That really was a case of extraordinary claims being matched by extraordinary evidence.

          But look at the difference between the two risks. The US govt. was so sure the atom bomb would work they devoted extraordinary resources to the project, employing tens of thousands of people for years to bring about the Trinity Test.

          Even according to MIRI, AI is somewhere between decades away and never. Maybe if the US govt. started a project similar to the Manhattan Project, I would sit up and take notice. Until then, yawn.