Maybe The Real Superintelligent AI Is Extremely Smart Computers

I.

By Ted Chiang, on Buzzfeed: The Real Danger To Civilization Isn’t AI: It’s Runaway Capitalism. Chiang’s science fiction is great and I highly recommend it. This article, not so much.

The gist seems to be: hypothetical superintelligent AIs sound a lot like modern capitalism. Both optimize relentlessly for their chosen goal (paperclips, money), while ignoring the whole complexity of human value.

It’s a good point, and I would have gone on to explain the more general idea of an optimization process. Evolution optimizes relentlessly for reproductive fitness, capitalism optimizes relentlessly for money, politics optimizes relentlessly for electability. Humans are sort of an optimization process too, but such a weird edge case that “non-human optimizers” forms a natural category for people more used to the human variety. Both future superintelligences and modern corporations are types of non-human optimizers, so they’ll naturally be similar in some ways – though not in enough ways to keep the comparison from carrying you off a cliff if you push it too far. And one of those ways will be that even though they both know humans have complex values, they won’t care. Facebook “knows” that people enjoy meaningful offline relationships; after all, it’s made entirely of human subunits who know that. It’s just not incentivized to do anything with that knowledge. Future superintelligences will likely be in a similar position – see section 4.1 here.
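To make the “non-human optimizer” idea concrete, here is a minimal toy sketch in Python (my illustration only, with made-up names and numbers, not anything from Chiang’s article or the AI-risk literature): a hill climber that maximizes a single paperclip-style metric while a richer human_values() score sits right there in the same program and is never consulted by the acceptance rule.

```python
import random

# Toy illustration only: a single-objective hill climber standing in for any
# non-human optimizer (evolution, a corporation, a hypothetical paperclipper).
# Every function and constant here is invented for the example.

def paperclips(plan):
    """The one metric the optimizer is rewarded for."""
    factories = plan["factories"]
    return factories * 10 - factories ** 2 / 50  # peaks around 250 factories

def human_values(plan):
    """A richer score the system 'knows about' but is never rewarded for."""
    return 100 - plan["factories"]  # more factories, less of everything else we value

def optimize(steps=2000):
    plan = {"factories": 1}
    for _ in range(steps):
        candidate = {"factories": plan["factories"] + random.choice([-1, 1])}
        # The acceptance rule consults only the target metric.
        # human_values() is sitting right here, fully "known", and ignored.
        if paperclips(candidate) > paperclips(plan):
            plan = candidate
    return plan

if __name__ == "__main__":
    best = optimize()
    print("factories:", best["factories"])
    print("paperclips score:", paperclips(best))
    print("human values score (never consulted):", human_values(best))
```

The only point of the toy is that “knowing” about other values and being incentivized to act on them are separate things – which is exactly the position Facebook, and plausibly a future superintelligence, is in.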

But Chiang argues the analogy proves that AI fears are absurd. This is a really weird thing to do with an analogy. Science has always been a fertile source of metaphors. The Pentagon budget is a black hole. The rise of ISIS will start a chain reaction. Social responsibility is in our corporate DNA. But until now, nobody has tried to use scientific metaphor as evidence in scientific debates. For a long time astronomers were unsure whether black holes really existed. But nobody thought the argument that “the REAL black hole is the Pentagon budget!” deserved to be invited to the discussion.

Actually this is worse than that, because the analogy is based on real similarities of mechanism. “People say in the future we might have fusion power plants. But look at all these ways fusion power plants resemble stars! Obviously stars are the real fusion power plants. And so by this, we can know that the future will never contain fusion power.” Huh?

II.

Still, Chiang pursues this angle relentlessly. Though he doesn’t use the word, he bases his argument around the psychological concept of projection, where people trying to avoid thinking about their own attributes unconsciously attribute them to others:

Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted…It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.

Which brings us back to the importance of insight. Sometimes insight arises spontaneously, but many times it doesn’t. People often get carried away in pursuit of some goal, and they may not realize it until it’s pointed out to them, either by their friends and family or by their therapists. Listening to wake-up calls of this sort is considered a sign of mental health.

In my own psychiatric practice, I am always very reluctant to assume a patient is projecting unless I know them very well. I’ve written more about the dangers of defense mechanism narratives here, but the short version is that amateur therapists inevitably end up using them to trivialize or psychologize a patient’s real concerns. I can’t tell you how many morons hear a patient say “I think my husband hates our kids”, give some kind of galaxy-brain level interpretation like “Maybe what’s really going on is you unconsciously hate your kids, but it’s more comfortable for you to imagine this of your husband”, and then get absolutely shocked when the husband turns out to be abusing the kids.

Accusing an entire region of California of projection is a novel psychoanalytic maneuver, and I’m not sure Chiang and Buzzfeed give it the caution it deserves. The problem isn’t that they don’t have a plausible-sounding argument. The problem is that this sort of hunting-for-resemblances is a known bug in the human brain. You can do it to anything, and it will always generate a plausible-sounding argument.

Don’t believe me? What about black holes? Scientists say they exist, but I think these scientists are just creating “a devil in their own image, a boogeyman whose excesses are precisely their own.” Think about it. Superstar physicists like Einstein help university STEM departments suck up all the resources that should go to the humanities and arts. So of course when Einstein tries to imagine outer space, he thinks of super-stars that suck up all the resources from surrounding areas!

And chain reactions! You know what was a chain reaction? Enrico Fermi discovered some stuff about atoms. Then Leo Szilard wrote a letter to President Roosevelt saying it might have military applications. Then Roosevelt set up a project to develop military applications. One thing led to another, and a couple of Japanese cities got vaporized and the rest of the world teetered on the brink of total annihilation. Of course nuclear physicists became obsessed with the idea of chain reactions: they were living in one. They expected that subatomic particles would behave the same way they did – start out working on innocent little atomic collisions, have everything snowball out of control, and end up culpable for a nuclear explosion.

Watson and Crick worked together pretty closely on the discovery of the structure of DNA. So they started imagining organic molecules doing the same thing they did – two of them, intertwining. Just as they published papers which became the inspiration for an entire body of knowledge, so DNA was full of letters that caused the existence of an entire body. Epigenetics is relevant but generally ignored for the sake of keeping things simple, so it represents Rosalind Franklin.

I could go on all day like this. In fact, I have: this was the central narrative of my novel Unsong, where the world runs on “the kabbalistic method” and correspondences between unlike domains are the royal road to knowledge. You know who else wrote a story about a world that ran on kabbalah? Ted Chiang. This is not a coincidence because nothing is ever a coincidence.

III.

But Chiang’s comparison isn’t even good kabbalah. The correspondences don’t really correspond; the match-ups don’t really match.

He bases his metaphor on the idea that worries about AI risk come from Silicon Valley. They don’t. The tech community got interested later. The original version of the theory comes from Nick Bostrom, a professor at Oxford, and Eliezer Yudkowsky, who at the time I think was living in Chicago. It was pushed to public notice by leading AI scientists all around the world. And before it was endorsed by Silicon Valley tycoons, it was endorsed by philosophers like David Chalmers and scientists like Stephen Hawking.

(Hawking, by the way, discovered that radiation – and perhaps even information – could escape black holes despite a bunch of earlier physics saying nothing should be able to get out of them. This seems suspiciously similar to how he himself is almost completely paralyzed, but manages to convey information to the outside world via an artificial speaking device. More projection?)

Forcing the argument to rely on “well, also lots of people in Silicon Valley think this too” makes it hopelessly weak.

Consider: lots of Hollywood celebrities speak out about global warming. And we’re gradually finding out that some pretty awful things go on in Hollywood. Does that mean “The Real Problem Isn’t Global Warming, It’s Hollywood Harassment”? Does that license some author to write (while scientists facepalm worldwide) that because he doesn’t feel like carbon dioxide should be able to warm the climate, any claims to the contrary must be Hollywood celebrities projecting their own moral inadequacies? (possible angle: celebrities’ utterances emit carbon dioxide, and create a stifling climate for women in the entertainment industry)

If this sounds like a straw man to you, I challenge you to come up with any way it differs from what Chiang is doing with AI risk. You take a scientific controversy over whether there’s a major global risk. You ignore the science and focus instead on a subregion of California that seems unusually concerned with it. You point out some bad behavior of that subregion of California. You kabbalistically connect it to the risk in question. Then you conclude that people worried about the risk are just peddling science fiction.

(wait, of course Chiang interprets this as people peddling science fiction. He’s a science fiction writer! More projection!)

If the Hollywood example sounds more blatant or less plausible than the AI example, I maintain it’s only because we’re already convinced global warming is real and dangerous. That settled consensus gives the claim legitimacy and extra resistance against sophistry. That’s all. That’s the whole difference.

This isn’t how risk assessment works. This isn’t how good truth-seeking works. Whether or not you believe in AI risk, you should be disappointed that this is how we deal with issues that could be catastrophic to get wrong.


362 Responses to Maybe The Real Superintelligent AI Is Extremely Smart Computers

  1. Sniffnoy says:

    As best I can tell you haven’t bothered to actually answer Chiang on the object level, but fortunately those arguments are generally well-known here. 😛 (Instrumental convergence, the AI knows that what it’s doing is not what you meant for it to do but it doesn’t care about that any more than you care that you’re not maximizing your inclusive fitness, etc…)

    • Scott Alexander says:

      If Chiang gets around to making an object level argument, I’ll answer it.

      (but also, https://slatestarcodex.com/superintelligence-faq/ , especially section 4.1)

      • Sniffnoy says:

        I mean I know you know such things! But like I expect someone is going to read Ted Chiang’s piece and then read this and say “but Scott never gave us any reason to think an AI would do as feared rather than possibly not doing that as Chiang suggests” and therefore write this off, so I thought I should mention it at least briefly in a comment. 😛 But your link is obviously a better explication of such things than me briefly name-checking/summarizing them. 🙂

        (Also I removed “complexity of value” from my comment since I just realized you totally did mention that…)

        Edit: I guess I was basically trying to make some version of Wrong Species’s comment below. To say, hey, to you who read this but didn’t notice an object level argument, don’t worry, those arguments exist, and in fact they’re sufficiently well-known (go read Superintelligence! Or the Sequences! Or Scott’s linked FAQ!) that Chiang’s piece is, like, notably bad for how it seems totally unaware of them.

        • Not A Random Name says:

          Hounded down my password just to log in and say this:

          Thank you.

          Scott’s post was not more convincing than him just outright saying “I think Chiang’s reasoning is bad and so is his conclusion, trust me on that or do your research”.
          I’m not invested enough to do my research and the arguments against Chiang’s reasoning sound like knocking down straw men (i.e. “His argument is so bad, it might as well be the ‘fusion plants can’t exist because of stars’ argument”). Also shooting down Chiang’s argument doesn’t mean he’s wrong, just means that if he’s right it’s not for this reason.
          Generally speaking I trust Scott to be worlds more informed than me on superintelligent AI but it’s a lot easier to do so with a link to the actual object level arguments.

          • pontifex says:

            It’s not clear to me why you think Scott is attacking a strawman. It seems pretty clear that Chiang is in fact saying that capitalism is the real threat, and AI is just a distraction. The argument is pretty much as presented, as far as I can see.

            Maybe the real superintelligent AI is a one-party dictatorship which is building a digital totalitarian state? But that doesn’t tie into any standard progressive narrative, so forget it.

          • Not A Random Name says:

            Scott’s post triggers my straw man heuristic, which is all I really claim.

            Basically it’s the pattern of reframe, then tear down. Giving an example of an argument that’s really silly and then claiming that it’s an analogue to your opponent’s actual view. This post does it in part 1, part 2 and part 3.

            It’s a red flag for me. I acknowledge that it’s just a heuristic, but it’s so easy to do this to things that are actually correct that I can’t trust arguments based on this structure. Unless I’m willing to do the research and check whether or not the silly example that was given is actually a fair comparison to the argument the other person was trying to make.
            In this case I’m not sufficiently interested to do that. So I just have to take it on trust or file it under “things Scott believes which may or may not be true”.

          • carvenvisage says:

            Scott’s post was not more convincing than him just outright saying “I think Chiang’s reasoning is bad and so is his conclusion, trust me on that or do your research”.

            If you think you should have to do research after a post like this, you’re just missing what it’s trying to do. It’s not a controlled experiment with white coats and P values, it’s an attempt to logically dissect the structure of another argument and show it to be invalid. The only research you could need to judge it is reading Mr Chiang’s post.

      • Sniffnoy says:

        BTW, I’d say Chiang’s object-level argument is here:

        The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them.

        He’s saying, there’s just no reason to expect the idea of AI as dangerous optimizer to be correct. Of course, there is, but he apparently didn’t bother to read any of the existing material on this before writing this…

        • petealexharris says:

          His argument defeats itself. If superintelligence is equally likely to take any of a billion forms, it’s really only a lack of imagination that would lead someone to think most of them are probably safe for us to coexist with.

        • MrBubu says:

          If all I cared about was proving theorems, and if I were powerful enough to do so, I would remodel the atoms in your body into a part of my brain, so I could prove better theorems.

        • Peter says:

          The thing about a harmless theorem-proving AI that really does just sit in a lab somewhere and publish inconsequential proofs in obscure journals is that that doesn’t prevent there being some other AI elsewhere that takes over the world (and quite possibly dismantles the first AI).

          Of course, a slightly less harmless theorem-proving AI whose only fundamental motivation is to sit in a lab and publish but with a free hand to make sure it can do so by any means necessary may well be instrumentally interested in preventing other AIs from coming into being. I wouldn’t want to bet my life or human civilization on those means being ones we would approve of.

          I mean, petealexharris talks about whether “most of them are probably safe for us to coexist with”, well, for an early 11th century Chinese person, most steppe nomads were probably safe to coexist with, but if there was one who wasn’t safe to coexist with, well…

    • komponisto says:

      His (Chiang’s) argument is really about capitalism, not AI. It takes the form: “Capitalism is dangerous for the kinds of reasons tech billionaires think AI is dangerous.”

      Whether you call that “object-level” or not, it certainly isn’t an argument against AI being dangerous — unless you assume that capitalism isn’t dangerous, which is the exact opposite of his point…

      • danielsodash says:

        +1

        Chiang is writing about the dangers of capitalism, and using Elon Musk et al’s fears of AI as a lens to look at that. His article takes a neutral — you could say complacent — view on AI. I think Scott’s rebuttal misses the point.

        IMHO: Let’s all be friends and worry about both. Capitalism running away with AI (even today’s relatively dumb AI) is an elopement that’s both possible and worrying.

      • reasoned argumentation says:

        His (Chiang’s) argument is really about capitalism, not AI. It takes the form: “Capitalism is dangerous for the kinds of reasons tech billionaires think AI is dangerous.”

        Exactly this.

      • yodelyak says:

        @komponisto: Exactly. But I’ll take it one further, and say that Chiang’s concerns about capitalism are part of the problem of AI risk–maybe the main problem–and that I think Chiang is connecting them for good reason.

      • yodelyak says:

        The problem that “foom” is possible is made threatening both because values are hard to code and values creep is hard to guarantee against (meaning even a well-intended AI may go paper-clip all the same) and because the most likely candidates for building a foom-capable AI are amoral corporations aiming at money first, second, and very nearly only. Solutions include delaying foom to buy more time, slowing foom to erect a balance of power among “foomed” AIs, finding ways to guarantee against values creep once foom happens, and lots of other things that I don’t really understand, and likely I’ve botched this list. But solutions also include finding ways to make our existing entire society less like unrestrained crony-capitalism, which is to say, more values-driven, so the first software to go “foom” isn’t expressly programmed to worship Mammon.

        If Chiang were writing to a panel of specialists on AI, his article might be worth attacking for being uninformed. But he’s gotten himself published in Buzzfeed urging support for a less amoral, less capitalist society. It is probably too much to hope that The Onion will soon make the joke, “area asshole, overwhelmed by complexity of stopping robot apocalypse, decides he’ll start by being decent human being, for a change.”

        But if the Onion does run that headline, could we all please not denounce The Onion as insufficiently informed about AI risk?

  2. Paul Zrimsek says:

    Someone on the latest OT posted a link to Charles Stross deploying the same half-baked simile. Is there an SF-writer version of JournoList?

    • oneMerlin says:

      I was about to post that same link; let me do it here for reference:
      http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html

      Charlie makes the same basic analogy but from the description above he takes it in a different direction. He does not attempt to use the analogy to argue that AI fears are absurd, nor to accuse others of projection. Instead, he uses it as a historical analogy to attempt to project the most likely path of computer-based AI, using corporations, which he calls “slow AI”, as a template to draw on.

      You are certainly free to disagree with his conclusions. But I do feel that the corporation as a model for AI motivation is a better model than human cognition. Human cognition is wired into a number of limbic systems that produce strong underlying emotional responses to the environment that neither corporate nor computer AI will experience.

      • humeanbeingblog says:

        I liked the Stross piece quite a bit. I only half-agree with it, but it provides a pretty useful perspective on the harmful effects of the kind of algorithmic-driven capitalism that has been steadily consuming the world over the last decade.

        I saw a tweet the other day that sums up the idea nicely (@GabrielRossman):
        If you want a vision of the future, imagine an AI optimizing for how often the rat in a Skinner box will depress the lever, forever.

    • Nornagest says:

      Charlie’s been playing with this idea for a while. Accelerando, one of his first books, comes at it from the other direction by featuring unfriendly AI (“Vile Offspring”) descended from trading systems, who act like superintelligent corporate raiders and who by the end of the book have (ROT13) qevira uhznavgl bhg bs gur vaare fbyne flfgrz ol orvat gbb tbbq ng rpbabzvpf.

      It seemed insightful to me at first, and then I actually sat down and thought about it for a few minutes and realized how much motivated reasoning you need before you can single out corporations.

      • Conrad Honcho says:

        It seemed insightful to me at first, and then I actually sat down and thought about it for a few minutes and realized how much motivated reasoning you need before you can single out corporations.

        This is my general problem with “capitalism is bad!” rants. Yes, value misalignment results in perverse incentives and therefore (unintended?) negative outcomes. And then there’s willful blindness to the fact that this is a problem with any economic system, and almost certainly worse with the systems that are a little freer with the use of state force. See Contra Robinson on Public Food.

        Until then I will absolutely agree with Chiang that capitalism is the Worst Economic System in the World. He just needs to add “except for all the others.”

        • Andrew Cady says:

          If school vouchers worked as well as food vouchers, they would succeed in their mission of improving choice without sacrificing quality.

          I’m shocked that Scott would make such a poor analogy. Schools are obviously(!?) nothing like food. I get to make more choices between foods in a single semiweekly shopping trip than I could expect to make between schools in a single child’s entire k-12 education. I can shop at multiple stores in the same week and choose individual foods within each store.

          A proper analogy to changing schools is changing employers or changing nations. (Changing schools lies in between these in seriousness.)

          • AnarchyDice says:

            But schools don’t have to be like changing employers or nations, and are only that difficult because they’ve been made into a whole package. Education shopping could mean you “buy” your physics I course from Teachers Inc, your arts introduction from Le Artbrush, and perhaps hire other 3rd parties that do the child-minding in between classes or that house the open network of courses. I mean, we have tons of departments, mixed schedules, and free choices for college students between different professors, why couldn’t parents select a mix for their kids from an available course-load? Add in recorded lectures with guided activities on site or at home and you can get just about any educational program you could think of.

          • Andrew Cady says:

            @AnarchyDice, (1) You’re not buying teachers; you’re buying your children a peer group. (2) College students don’t get to choose professors or classes from different institutions, either.

    • herbert herberson says:

      I’ve seen a lot of lefties pooh-pooh the AI threat as obviously ridiculous, but I’ve seen very few really grapple with the obvious fact that Stross pointed out: that the forms of human organization lefties believe to be the most malign are both similar to the AIs rationalists fear and by far the primary source of their development.

      Especially since the idea of corporate AI causes the edges to get fuzzy. Maybe the worst fears will never pan out, and we don’t actually need to worry about something turning the planet into paperclips. But we already have substantial AI participation in the markets, in the organization of corporations, etc… are any of the critics really able to say with a straight face that my fears of what kinds of machine intelligences will be effectively running the likes of Goldman Sachs and Walmart in 20 years are overblown? Even if they don’t try to turn us into computronium, we’re talking about entities that already wield a ton of power in an amoral way becoming far smarter and far more disconnected from human decision-making, based on nothing more or less than linear (don’t even need to get into the exponential stuff!) progression of current trends

      • yodelyak says:

        My map of history includes a technological arms race for ways of organizing people, and of specific technologies, which race at some points resembles nothing so much as a race to the bottom. Specific important examples (not all of them races to the bottom) include 5th century monasteries in Ireland, or the use of “boot camps” to instill martial effectiveness in a fighting corps, or the East India Company, or the stock market, or Google’s rule that no one should be more than 200 feet away from food…

        These technologies are as important in the course of history as things like gunpowder and stirrups. Having them wielded by an AI with alien values (or pure-profit-values) is a frightening prospect.

      • russellsteapot42 says:

        Most dismissals of AI risk like this seem very tribalistic to me. Basically “Those weirdos in the other tribe think that the real risk is Poseidon’s wrath, but we right-thinking people know that Huitzilopochtli is the real god to be feared.”

        • Nornagest says:

          Huitzilopochtli is indisputably scarier.

          • Le Maistre Chat says:

            He’s so scary that knowing his name is just Nahuatl for “Blue Hummingbird on the Left” doesn’t make him less scary.

        • Wrong Species says:

          I find it bizarre that Elon Musk has become a progressive boogie man. This is the guy that is trying to promote solar power and electric cars and yet because he focuses more on AI danger, he’s suddenly the paradigm of evil super-villains.

          • russellsteapot42 says:

            It’s because he’s a clearly extremely competent individual who came to prominence by doing capitalism well. He’s basically the main character in an objectivist novel, only without going on about looters and parasites and so on.

            The fact that he can succeed under capitalism and turn out to be exactly the sort of person we should want in charge of important things is an embarrassment to anti-capitalism, so anti-capitalists feel a need to tar him with flaws, both real and imagined.

          • Matt M says:

            He’s basically the main character in an objectivist novel, only without going on about looters and parasites and so on.

            Well the Ayn Rand heroes didn’t rant about that stuff in public either. Who knows what Elon says behind closed doors at the I-swear-I-didnt-know-this-was-about-sex parties…

      • JPNunez says:

        This is such a good point. AI-assisted corporations (regular AI) stand to wield so much power in the very near future. And yet you don’t see MIRI trying to research how to avoid this, or how to steer it for good.

        I guess that, given it’s an actual problem that already exists, there isn’t any easy low-hanging fruit that’s grabbable.

        • Nearly Takuan says:

          You don’t see it?

          While it wasn’t the top hit on Google/Bing/etc., I really didn’t have to do too much digging to find this nice list of platitudes, which says things like:

          While AI systems are creating new ways to generate economic value, if the value favors only certain incumbent entities, there is a risk of exacerbating existing wage, income, and wealth gaps. We support diversification and broadening of access to the resources necessary for AI development and use, such as computing resources, education, and training, including opportunities to participate in the development of these technologies

          While the sincerity of these statements and their adoption within the AI research community may remain open to debate, I think this is at least sufficient evidence that there exist influential groups which acknowledge the issues you’re alluding to, and which have formulated a coherent problem statement and a general plan for how to deal with it.

          • JPNunez says:

            I don’t doubt it, but I feel this article is more about the rationalist fear of strong AI, as exemplified by the fable of the paperclip maximizer, than about the actual research currently being done on AI.

          • Nearly Takuan says:

            The policy report is written by ITIC, a coalition which counts Amazon, Google, Facebook, Microsoft, Intuit, Oracle, Toyota, Twitter, Visa, and many many other evil profit-maximizing, AI-researching, user-data-monetizing corporations among its members. Several of these were specifically called out in Chiang’s article.

            So, we can’t claim AI researchers aren’t aware of the problem. Demonstrably, the largest AI-researching companies in the world are aware of the problem.

            We can’t claim AI researchers aren’t acknowledging the problem. The paper I linked is a statement that AI researchers are aware of present-day ethical implications to their research, and promise they’ll do their best to proceed carefully.

            We can claim they’re not doing enough, or are being disingenuous, or are sincere but still lack the social awareness to deal with this side of the problem effectively, but these are all different goalposts entirely…

  3. reasoned argumentation says:

    Here’s a reasonable version of the argument.

    Capitalism is excellent at optimizing for profit and grinds away at other values*. For AI to run away and be dangerous it would have to do exactly that – maximize profit to gain resources to use to gain more resources, etc. IOW – AI already basically exists. It’s called the Coca Cola Corp.

    *It observably doesn’t do either of those but we’ll leave that aside

    • Wrong Species says:

      You can literally cut and paste thousands of different answers in for capitalism and retain the same argument. You can’t just take one shared characteristic of capitalism and AI and claim that they are the same because of it. That would be like saying bicycles and rockets are the same because they are inventions designed by humans to go places.

      • reasoned argumentation says:

        It’s about the mechanism.

        For an AI to be a danger it has to control resources – at the very minimum it has to pay for its AWS account to keep itself running. To do this it needs to make money – or transcend money in some other way that gets people to provide it with resources – which is exactly like making money*. Composite organizations with significantly more intelligence than any living human already exist that do exactly that. I, Pencil is a great demonstration of that – no person knows how to make a pencil but there are pencil making companies in the world.

        What do you think causes the AI danger? That it’ll be smarter than individual people? Already done. That it’ll set up to gain resources? Already done.

        * It could go the parasite route too and make up effective appeals for people to just give it money – maybe it’ll claim that it can make mosquito nets to save more and more people from malaria then somehow make those people reproduce younger and younger yet they’ll never act to effectively reduce their own malaria risk so they fully depend on contributions funneled through the AI – whatever – there are human organizations that go that route too.

        • Wrong Species says:

          So if AI is just a corporation do you think that we could get Amazon employees to beat AlphaGo Zero in Go? The fear isn’t that an AI will be smarter than an individual. It’s that it will be such a high degree smarter than people that we won’t know what it’s doing until we’re all about to die.

          • reasoned argumentation says:

            If you had a corporation as large as Alphabet dedicated to winning at go they could do so – by building an AI to do it.

            Of course no one at Alphabet knows how to build an AI – see the I, Pencil argument – yet the AI gets built because Alphabet is smarter than any person at Alphabet.

          • Wrong Species says:

            That’s my point. You can’t get a bunch of people together and have them beat an AI. Only an AI can beat an AI. That’s because they’re working on a different level than us. Now imagine that but for everything. Ants can do some pretty interesting things when they work together but they can’t build a rocket. That’s the difference between corporations and super intelligent AI.

          • reasoned argumentation says:

            No person can build a rocket either yet SpaceX manages the feat.

            Almost everything – no matter how simple – is beyond human understanding and yet corporations understand how to do these things that no person can do.

            What’s different about an AI?

          • Rick Hull says:

            Dudeman, society is the real AI. No, wait, it’s civilization. Or maybe it’s nature. All you are showing is an optimization process, emergent order, and capabilities beyond a single individual homo sapiens. We’ve known that such things are powerful for a long time. They don’t tell us much unique or interesting about actual AI. What’s the version of the singularity for corporations-as-AI?

          • Wrong Species says:

            AI is capable of things that corporations without AI can not do. Google can’t tile the galaxy with paper clips. A super intelligence could. And even things corporations can do, the Super AI does it better and faster. I don’t understand why you don’t understand this. It’s not just that they’re better than us. It’s that they’re better than us on a ridiculous scale. I don’t know how else to explain it. You are basically looking at humans and chimpanzees and seeing that they’re 99% similar, confused why anyone would think humans would be better at anything.

          • reasoned argumentation says:

            AI is capable of things that corporations without AI can not do. Google can’t tile the galaxy with paper clips. A super intelligence could.

            Google is exactly as capable of tiling the galaxy with paperclips as an AI is.

            In order to tile the galaxy with paperclips the AI has to pay for it in the sense of consuming energy and matter. If it’s pulling it from the human economy then it has the same exact level of capability as google – it can spend as much as it produces in some other endeavor.

            An AI that operates outside the human economy will have trouble existing on Earth because Earth happens to be entirely owned.

          • FeepingCreature says:

            I think that the argument that you’re looking for is “AI screens off corporation.”

            When told that a corporation is running an AI, vs. that a person is running an AI, vs. that a corporation is not running an AI, the conceptual heavy lifting of predicting the outcome is done by “AI”, not “corporation”.

          • vV_Vv says:

            I think the steelman argument is that bad AIs are an extension of capitalism, both in the sense that they are likely to be created by corporations and in the sense that their ruthless runaway optimization is going to be a more extreme version of the ruthless runaway optimization that capitalism does.

            I don’t find it a completely compelling argument: bad AIs could be created by other means, for instance by an arms race between rival governments running secret Manhattan-style projects, and runaway capitalism could cause lots of problems even without building AIs.

            However, as far as is publicly known, current AI development is led by the largest paperclip click-maximizing corporations, so the concern that capitalism and AI (even narrow AIs, not necessarily god-like superintelligences) interact badly is real.

            To further steelman the argument, it could be argued that worrying about a far risk such as god-like superintelligences tiling the galaxy with paperclips while ignoring the near risks of modern capitalism is an inefficient allocation of concerns. It’s like worrying about asteroid impacts while living in an unstable building in a seismic area.

          • Wrong Species says:

            @reasoned argumentation

            I think I figured out what’s going on here. You decided to define “AI” in some weird, idiosyncratic way and some people push back on your argument because they use the normal definition. For some reason you think you can win by using your weird definition. I don’t know what you think you’re accomplishing but it doesn’t do anything. A group of people who work together to make money is not literally the same thing as a machine. And no, just because the corporation might use the machine to make profit doesn’t make it synonymous with the machine itself in the same way that a farmer is not also a pig. Do you think farmers are also pigs, cows and chickens?

          • reasoned argumentation says:

            I think I figured out what’s going on here. You decided to define “AI” in some weird, idiosyncratic way and some people push back on your argument because they use the normal definition

            Has absolutely nothing to do with defining AI in a weird way – it has to do with the mechanism for how an AI would interact with people.

            At the start all an AI could do is order humans to take action and it only has two options for getting them to follow those orders:

            1) Pay them to follow those orders
            1a) Persuade them to part with money then get others to obey through (1)
            2) Use the threat of overwhelming physical force to compel obedience

            That’s it. Method (1) requires doing exactly what a corporation does and those are already under selective pressure to produce outputs that people are willing to part with money for. Method (1a) is what charities already do.

            Method (2) is called setting up a competing government – which tends to get you shot.

            Maybe an AI will really want to tile the universe with paperclips but to do that it also has to come to control enough of the human economy to afford that. The human economy happens to already be run by organizations that are under selective pressure to be really good at obtaining the resources they use – corporations.

            If it goes by route 2 immediately then it’s up against governments which already have organizations set up to fend off exactly this type of challenger.

          • Tarpitz says:

            At the start all an AI could do is order humans to take action and it only has two options for getting them to follow those orders:

            1) Pay them to follow those orders
            1a) Persuade them to part with money then get others to obey through (1)
            2) Use the threat of overwhelming physical force to compel obedience

            Surely it could also persuade people to take actions it wanted, by convincing them that it would be in their interest in some way not necessarily involving financial payment, or that the action the AI desired was in fact the right thing to do? Pay and fear of violence are hardly the only motivators for human behaviour.

          • reasoned argumentation says:

            Tarpitz –

            I made persuasion into 1a because that’s how human con artists operate. A con artist wants a new house – he doesn’t send out a Nigerian scam email to con someone into building him a house – he cons the mark out of money and uses the money on the market to buy a house.

            There’s no reason to think that the AI would be particularly persuasive to people that have the skills that the AI requires for its plan so it will almost certainly use the known existing method.

          • John Schilling says:

            But there is a wide range of services that would be useful for an AI plotting world domination but, for the most part, cannot be purchased for money alone. Murder and treason in particular – literal mercenary assassins are rare in first-world countries, mostly inept, and outnumbered by policemen and informants trying to sucker you into hiring a non-assassin. You need some sort of moral legitimacy or in-group loyalty to be reasonably confident of success in that realm, no matter how much money you have. And treason for hire is generally limited to a scattered group of prospects, probably none of which have the particular secret or influence you are looking for – you get much better results when you add ideology, compromise, and ego to your toolkit.

          • Matt M says:

            I’m unconvinced that a really good persuader would need to rely on a bunch of assassinations to achieve their desired ends.

            If anyone can think of a way to get what they want without relying on a lot of murder, it’s a superintelligent AI.

          • Andrew Cady says:

            All that an AI needs to take over the Earth economy is hypnodrones. The humans will obey because nanomachine artificial pheromones enter their brains and directly cause them to obey.

          • Wrong Species says:

            @reasoned argumentation

            I don’t have a problem with saying that an AI would want to use capitalism for its own gains. I just want to know that you understand that machines are different from an organization of people that come together for a specific purpose and that just because they interact with each other, doesn’t mean that they are the same.

        • I, Pencil is a great demonstration of that – no person knows how to make a pencil but there are pencil making companies in the world.

          The point of “I, Pencil” is that neither individuals nor pencil making companies know how to make a pencil. The super organism isn’t the company, it’s the market, the network of actors, individual and corporate, interacting via exchange and prices.

          • reasoned argumentation says:

            The point I was trying to make by referencing I, Pencil is that complex systems that have the important features of AI – namely superior knowledge and intelligence – already exist. On a smaller scale than whole markets, corporations know how to do things that no person working there knows how to do.

    • MostlyCredibleHulk says:

      Communism is also excellent at optimizing for a goal – in fact, if you need a regime that would subjugate the whole society to achieve a single goal (and you don’t care how much it would cost in resources or human suffering), a totalitarian regime would probably be your tool of choice, and communist ones have the best support on university campuses, so why not choose one of them? Thus, according to the same logic, a) communism (or its poor aspiring cousin, socialism) is the Real Problem and b) AI, capitalism and communism (and every other totalitarian regime) are the same. Also, AI of CPSU existed, but was defeated and extinguished by AI of Coca Cola Corp, which is now locked in battle with AI of Chinese Communist Party.

      • ksvanhorn says:

        Communism is excellent at optimizing for a goal if your goal is production of corpses.

        https://www.hawaii.edu/powerkills/NOTE1.HTM

      • mtraven says:

        Totalitarian Communism and unregulated capitalism are both systems for entraining humans towards non-human goals. Totalitarianism uses the simple “do this or get killed” technique. Capitalism is more subtle, people get more freedom to choose their subgoals, but everyone tends to get channeled into pursuing money.

        These systems resemble paper-clip-tilers in their disregard for environmental externalities. If both pure systems existed, I expect the capitalist one would end up doing more damage because it makes better use of the human intelligences that comprise it. However, in the actual recent world, communism collapsed (after doing a great deal of damage) and capitalism managed to regulate itself to avoid the worst consequences, although the final tabulation won’t be in until we see how bad climate change gets.

        • Totalitarian Communism and unregulated capitalism are both systems for entraining humans towards non-human goals.

          That’s like complaining that your home heating system is designed for the non-human goal of keeping the number on the dial at seventy degrees.

          Unregulated capitalism gets humans to act for the human goals signaled by the prices people are willing to pay for things they buy, the amount people charge for the labor they sell, the amount people are willing to accept in exchange for postponing consumption in order to invest instead, and the like. It does its maximization by getting firms and individuals to respond to those signals. Just as your thermostat gets temperature to your desired level by responding to the signals it gets from its thermometer.

          If the objective of the heating system was to control the number on the dial, it could do that without bothering to heat the house. If the objective of firms was to maximize money they would all lobby the government to print lots and lots more of it. Just think of all the money that German firms made during the Weimar hyperinflation.

          • reasoned argumentation says:

            If the objective of firms was to maximize money they would all lobby the government to print lots and lots more of it.

            They do do that though. All of the players in the entire financial sector only still exist because of precisely that – all of them would have been wiped out in 2008. Instead only the institution stupid enough to pay off the Republican party got wiped out – giving bribe jobs to Jeb! and John Kasich as if that would protect them – ha!

            If the objective of firms was to maximize money they would all lobby the government to print lots and lots more of it. Just think of all the money that German firms made during the Weimar hyperinflation.

            Yes, this does point out a technical point that people have been glossing over – corporations don’t optimize for the number in their bank accounts – they optimize for wealth held or power to purchase resources – neither of which are served by having big numbers in a bank account that can’t be used to purchase goods and services. In normal circumstances though the two things are the same.

          • mtraven says:

            This whiggish view of capitalism, that market goals always represent actual human goals, seems very common around here. But it’s wrong, and wrong in a kind of basic way.

            It is true that market values are grounded (for now) on human values, but because capitalism is a complex and ingenious system, it evolves systems-level goals of its own that have little to do with actual human flourishing.

            Off the top of my head:

            – A huge amount of human labor is devoted to keeping people away from resources rather than to wealth creation (so-called “guard labor”).

            – The entire advertising industry and the addictive side of the entertainment industry is serving its own goals

            – A huge amount of financial sector activity contributes nothing to human flourishing (I’m thinking specifically of HFT, with vast resources devoted to shaving nanoseconds off of transaction times to gain an advantage over other traders)

            Note that I am not arguing against capitalism — maybe there is no better way to organize human activity. I am trying to make a specific point, which is that capitalism as a system spawns a whole bunch of goals which are devoted more to its own maintenance than to the servicing of base human goals such as food, shelter, meaning, and whatever it takes for happiness and a good life.

            Because capitalism runs on people (for now), these system-goals have to be converted into human goals, through the medium of money. Take the Brinks security company, as the most obvious example of guard labor (actual guards). The owners and management and employees of that company are working not for any human base goal, they are working for the security and stability of the monetary system. And the monetary system rewards them for their efforts, and presumably the people who pay them consider it worthwhile. But it contributes nothing directly to human betterment.

            System goals and human goals exist in a dense and tense ecosystem. In stable prosperous times, there is a rough equilibrium and the goals of both humans and the system are aligned enough that everything works. But because of technical advances and differences in economic class interests, these stable times don’t last. And when capital figures out that it can do without labor, you get unemployment or immiseration.

            The kind of AI we are talking about here (not the only kind possible) is essentially capitalism without capitalists. It’s the guilt-induced nightmare of capitalism where the tools used against labor end up turning against their creators.

          • JPNunez says:

            This is WILDLY optimistic. Grossly optimistic. Disgustingly naive.

            Unregulated capitalism has shamelessly captured people from their homes and shipped them as slaves to the other side of the world, for centuries, until someone came by and actually tried to regulate it.

            There’s no reason to believe it cannot do so again if the economic incentives are right. Hell, right now there are still slaves out there.

          • Take the Brinks security company, as the most obvious example of guard labor (actual guards). The owners and management and employees of that company are working not for any human base goal, they are working for the security and stability of the monetary system.

            It has nothing to do with the stability of the monetary system–for that you would want to look at people working to prevent counterfeiting.

            What Brinks is doing is preventing stealing. This has a close connection to human goals. In order to get people fed it has to be in the interest of grocery stores, farmers, food processors, etc. to play their role in the process. If they cannot rely on the money they get for their efforts not being stolen, they are less able and willing to do so.

            To put it more generally, what Brinks is supporting isn’t the monetary system, it’s the security of property rights. And the security of property rights is a key part of the system that makes it in the interest of people to do things that serve the goals of other people.

    • MugaSofer says:

      If corporations were actually superintelligent – that is, better than any individual human at literally everything – then it would be impossible for individual humans to oppose them successfully. Leaving aside the fact that people do sometimes oppose corporations successfully, this would obviously make Chiang’s article an exercise in futility.

      Unless, of course, corporations are not superintelligent, and thus superintelligent AI would in fact be much more dangerous.

      • tbrownaw says:

        I would think this same logic would declare it impossible for a dog or a flu virus to successfully oppose any humans.

      • reasoned argumentation says:

        If corporations were actually superintelligent – that is, better than any individual human at literally everything – then it would be impossible for individual humans to oppose them successfully.

        Asserted without evidence.

        Corporate super-intelligence, on the other hand, is easy to demonstrate – no human can do what Google does (for example) – no human could even administer their servers.

      • John Schilling says:

        It is exceedingly rare for individual humans to oppose (large) corporations by their own efforts. Most such stories involve an individual human and his or her lawyers convincing a court to mobilize or threaten to mobilize the resources of a state against a corporation. Sometimes the individual human leads a boycott, protest, or rampaging mob of other humans.

        Nonetheless, it is demonstrably possible for humans collectively to oppose, constrain, and defeat corporations. This is a fundamental difference from the postulated AI threat, which if it manifests is alleged to be invincible vs any combination of humans and human institutions.

      Corporate super-intelligence is non-general. So people can do things outside of its purview. Google can make calculations that no individual human can make, yet there are still laws restraining Google (in spite of cyber-punk fantasies, Google does not rule the USA, and could easily be defeated by recourse to sheer force if needed), because we have another non-general super-intelligence acting against it: government.

        If we extend this analogy, then AI safety comes from leveraging different AI against each other. Of course, the whole analogy might be bad to begin with, and we’re ignoring FOOM.

        Paper clip maximizers specialize for paper clip (or X) maximizing, but they must have general intelligence, because as amazing as they would be at maximizing paperclip production when given the right tools, they aren’t going to be able to outwit humans without the general intelligence needed to understand things outside of the purview of paper clip maximizing, such as human minds. This general intelligence (or as many umpteen gazillion specialized modules as required) would have to understand humans on every level (not the narrow level current institutions understand humans on) in order to outsmart them. It’s going to be making incredibly subtle and seemingly innocuous arguments that convince humans to give it harmless stuff and the right tools to turn it into dangerous stuff that allows it to beat all the humans and make “paperclips” unopposed. If its end goal survives this process, then it would eventually be the end of us, but ironically, its greater (or at least more general) intelligence, would make it slower than AIs with more multifaceted and/or fluid terminal values.

      • JPNunez says:

        Well, they are better than people at distributing money right now, and they have also been better at hiding their externalities.

        The paperclip maximizer would also not be more intelligent than humans at several kind of tasks, and would probably be beaten at, dunno, Angry Birds if angry birds did not come into play for building more clips. It just happens that it is better at tasks that help it tile the universe into clips. This lack of complete super intelligence does not mean the paperclip maximizer isn’t dangerous, just as corporations not being good at everything does not mean they aren’t dangerous.

  4. Wrong Species says:

    Insight is precisely what Musk’s strawberry-picking AI lacks, as do all the other AIs that destroy humanity in similar doomsday scenarios. I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea.

    When people say stuff like this, it’s obvious they don’t care enough to actually examine the arguments behind AI worries. If they did, they would know it’s one of the most commonly asked questions that has been answered a million times. I’m not sure what’s to gain from engaging with them.

    • Sniffnoy says:

      Yup. This is like, Mr. Chiang, you really didn’t investigate this very much before writing this, did you?

    • Paul Zrimsek says:

      The arguments behind capitalism suffer a similar neglect, causing Chiang to fall into a fallacy of composition: Capitalists optimize only for money, therefore capitalism optimizes only for money.

      You could probably spend four whole years in the Angry Studies program at Evergreen State without meeting anyone as rabidly anti-capitalist as a paperclip-maximizing AI would be. Almost all of our productive potential is thrown away meeting a huge variety of human wants that have nothing to do with paperclips!

      • Rick Hull says:

        Wait, you got the Angry Studies Scholarship to Evergreen State too? It really puts the asses in seats. In my assessment, the program assimilates the next generation of character assassins.

      • Brett says:

        The arguments behind capitalism suffer a similar neglect, causing Chiang to fall into a fallacy of composition: Capitalists optimize only for money, therefore capitalism optimizes only for money.

        That’s a good point. Capitalists optimize for profits, seeking out the highest-profit opportunities (at least theoretically – it gets complicated). But a working capitalist market economy acts to shrink profits over time, something even the Marxists identify (with their talk of the “falling rate of profit” and such).

        • Ketil says:

          Capitalism optimizes the allocation of scarce resources. Profit is what happens when somebody discovers and remedies a suboptimal resource allocation. In a perfect market, there are no profits.
          Profit is just a symptom, a fever indicating that something was wrong with the market, but that it is now getting better.

          🙂

    • mtraven says:

      Excuse my ignorance but would you mind linking to a few of those million answers?

      The argument against this that I am familiar with is the orthogonality thesis, but that has always struck me as a particularly weak argument.

  5. Le Maistre Chat says:

    Psychoanalyzing the outgroup is a specific form of ad hominem with at least a 70-year history. I think the only way to get commies to stop writing this essay is to make Blue elites believe in AI risk.

    • Conrad Honcho says:

      The problem is “better living through computers” is more of a Blue Tribe-aligned value. You need to get the Red Tribe excited about them thar superintelligent AIs to spark all the HuffPo and Buzzfeed hand wringing about how “It’s Time to Address the Elephant in the Room: Superintelligent AI is a Problematic Manifestation of White Supremacy and Male Patriarchy.”

    • trebawa says:

      It would appear this is precisely what Blue academics and technologists – at least the ones convinced this is an issue – are trying to do.

    • Matt M says:

      Is Elon Musk not blue tribe? He’s certainly not red…

  6. reasoned argumentation says:

    Consider: lots of Hollywood celebrities speak out about global warming. And we’re gradually finding out that some pretty awful things go on in Hollywood. Does that mean “The Real Problem Isn’t Global Warming, It’s Hollywood Harassment”?

    No – but what it does do is demonstrate that everyone knows exactly what to say about Global Warming/Climate Change to be on the side of the angels and to get a pass on otherwise despicable behavior. The exact same forces exist in academia regarding research into Climate Change – just like the same forces exist in social psychology, where nothing replicates but everything still fits the narrative that everyone knows is the right side of history. It’s no coincidence that the replication crisis hit the exact area where the social/political pressure is so high to stick with the groupthink.

    There is indeed a common force there.

  7. Andrew Cady says:

    He bases his metaphor on the idea that worries about AI risk comes from Silicon Valley. They don’t. The tech community got interested later. The original version of the theory comes from Nick Bostrom, a professor at Oxford, and Eliezer Yudkowsky, who at the time I think was living in Chicago. It was pushed to public notice by leading AI scientists all around the world. And before it was endorsed by Silicon Valley tycoons, it was endorsed by philosophers like David Chalmers and scientists like Stephen Hawking.

    Hm. Bostrom was born in 1973. The Terminator starring Arnold Schwarzenegger was released in 1984, when Bostrom was 9 years old.

    I don’t suppose The Terminator was the first expression of its own premise in science fiction. Asimov had a (friendly) A.I. download all of the meatbags’ brains into itself (leaving no fleshy life behind) in 1954. What’s the earliest reference to A.I. taking over that we can find?

      • Nancy Lebovitz says:

        Not “The Machine Stops”, as I recall. That was about low-quality social media taking over. It’s amazing how much Forster got right.

        “The Sorcerer’s Apprentice” is about a simple-minded, unlimited utility function, so it’s a lot like a paper-clipper.

        • Andrew Cady says:

          The Sorcerer’s Apprentice

          The magic broom doesn’t derive its power from intelligence though. It’s more like grey goo than paperclips.

    • Luke Perrin says:

      The version of The Sorcerer’s Apprentice in Lucian’s Philopseudes.

    • Powerful Olfactory Hallucinations says:

      Fredric Brown’s short-short story “Answer” (1954) is a classic version of the trope, though it’s probably not the first.

    • vV_Vv says:

      They didn’t take over the world, but rogue AIs have always existed in fiction, at least since the Golem of Prague myth. Frankenstein is possibly the earliest incarnation in novel form.

  8. ChuckleberryFinn says:

    This is such a pointless and pedantic blog post. Scott is attacking a fiction writer for using a compelling metaphorical lens to criticize capitalism. Chiang is simply making an argument that downplays AI risk RELATIVE TO the risks of capitalism as it currently operates. Scott’s thesis that “Chiang argues the analogy proves that AI fears are absurd” is what’s absurd. It’s a complete misconstruction of a short, easy to comprehend little article. We get it, you like your new hometown, Scott!

    • static says:

      I agree, in the sense that Chiang is more wrong about capitalism than he is about AI. Markets are not a simple optimization around money, they are distributed preference valuation processes that incorporate flexible human values of labor and possessions. If a free market is producing too many paperclips, the price drops until there is no value in producing them at their underlying costs and the relevant resources are employed towards other tasks where there is value. The simplistic AI risk runaway scenarios have a simple, unchanging value function for which they optimize, so there is no correction for the change in human preferences to indicate we already have plenty of paperclips. Perhaps a better argument would be that applying market forces to AI value functions would constrain simplistic runaway AIs, much like market forces constrain his simplistic version of capitalism.
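
To make the price-feedback point concrete, here is a minimal toy sketch (an editorial illustration, not part of the comment above; every constant in it is made up). A profit-seeking producer checks the market price before making another batch, while a fixed-utility “maximizer” never does:

```python
# Toy contrast between a profit-driven paperclip firm and a fixed-utility
# "paperclip maximizer". All numbers are arbitrary, chosen for illustration.

UNIT_COST = 1.0        # cost of making one paperclip
DEMAND_PER_STEP = 100  # paperclips the market absorbs each step

def market_price(inventory):
    """Price falls as unsold inventory piles up (a crude demand curve)."""
    return max(0.0, 2.0 - 0.01 * inventory)

def simulate(checks_price, steps=50):
    inventory, produced = 0, 0
    for _ in range(steps):
        price = market_price(inventory)
        if checks_price and price <= UNIT_COST:
            batch = 0    # no profit left, so stop producing for now
        else:
            batch = 200  # the maximizer ignores the price entirely
        produced += batch
        inventory = max(0, inventory + batch - DEMAND_PER_STEP)
    return produced

print("profit-seeking firm:", simulate(True))    # output tracks demand
print("paperclip maximizer:", simulate(False))   # output keeps climbing
```

The firm’s cumulative output ends up roughly matching what the market will absorb, while the maximizer keeps producing no matter how far the price falls – which is the correction mechanism static describes above and which the runaway scenario, with its fixed value function, lacks.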

    • j r says:

      This is such a pointless and pedantic blog post…

      If that is true, then the world needs more pointless and pedantic blog posts. Because right now the world is full of folks using very hackneyed and often wrongheaded versions of some phenomenon (let’s call that X) to posit a view or offer an explanation of some other phenomenon or system (let’s call that Y). The problem is that, generally, the only people who find these sorts of arguments compelling are the people who don’t understand X; so you end up with lots of people holding views based on a faulty understanding of the underlying phenomena.

      If more people who understood X bothered to critique the metaphor and correct the mangling of X, then the people writing these things would have to start using better arguments in the first place. And that would only be a good thing for the overall level of discourse.

    • Sniffnoy says:

      I don’t think you’re correct here. Chiang says in the article that he does not think there is any reason to believe the idea of AI as a dangerous optimizer. (To quote: “The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them.”) He is saying that the reason people think AI will be dangerous lacks justification. That’s not talking about its risk relative to capitalism.

      • ChuckleberryFinn says:

        lol you’re putting words in his mouth. he isn’t saying that ai risk fears lack justification, he’s saying that super intelligence is poorly defined. might i direct you toward the title of the article to understand where i’m coming from? you’re stuck in the trees buddy!

      • Conrad Honcho says:

        “It could be anything, but would be extremely powerful” sounds like a good cause for concern. The space of “Powerful Things Good For Human Life” is dwarfed by the space of “Powerful Things Bad For Human Life.”

        • adder says:

          How about

          The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue.

          (emphasis added)

          • Conrad Honcho says:

            But the technologists either 1) do regard moderation as a virtue (Bill Gates giving his money away and curing diseases; Musk’s actual goal is to Get His Ass to Mars and his businesses are only a means to that end) or 2) at least respond to market forces or else go out of business. The problem with the Paperclip Maximizer is that there is no end besides paperclips, and there is no feedback loop that says “we have enough paperclips.”

    • Mellow Irony says:

      I would say the idea that AI fears are absurd is really a presupposition of Chiang’s article. There is one explicit argument in the article that dangerous AI isn’t coming soon (“we are still a long way from a robot that can walk into your kitchen and cook you some scrambled eggs”), but the real reason people are coming away with the impression that Chiang is saying we shouldn’t pay attention to AI risk is in the connotations:

      This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger.

      …some have proposed that we ensure that any superintelligent AIs we create be “friendly,” meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics…

      The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue.

      Each of the words in bold [emphasis added by me], even if technically accurate, has a spin that implies that the technologists in question are not worth listening to.

      This implication makes Chiang’s use of the metaphor much less compelling. The natural conclusion of “unfriendly AI is like capitalism” is “if we’re worried about unfriendly AI, we should be worried about capitalism, and vice versa”. It completely undermines this point to dismiss AI worries as “fearmongering”, those warning of them as “doomsayers” who can’t “conceive of moderation as a virtue”, the pursuit of friendly AI as merely “fun to think about”, etc. (If thinking about unfriendly AI disasters is frivolous, and capitalism is like unfriendly AI, does that mean people worried about capitalism are similarly wasting their time?)

      • ChuckleberryFinn says:

        Well, no, because Chiang spends the entire essay detailing how capitalism already operates as an obsessively optimized entity while these AI risks are still hypothetical. He clearly doesn’t take AI risk as seriously as you and Scott and other SSC stans want him to, but it doesn’t change the fact that he never actually comes out and says it. Here’s another butthurt market ideologue complaining about Chiang’s essay and even he claims that Chiang NEVER attempts to assess if AI risk is real or not.

        “So I am lost. Does he think AI is a problem or not? Well, we never actually find out. But Chiang does think Silicon Valley tech companies are a problem.”

        https://digitopoly.org/2017/12/19/ted-chiang-gets-a-ton-of-economics-wrong/

        • Conrad Honcho says:

          Well, no, because Chiang spends the entire essay detailing how capitalism already operates as an obsessively optimized entity while these AI risks are still hypothetical.

          Which is silly, because the problem with the Paperclip Maximizer is there’s no feedback mechanism to tell the Paperclip Maximizer “we don’t need no more stinking paperclips.” But capitalism has such a feedback mechanism built right in: no paperclip manufacturer is ever going to grey goo the world to make more paperclips because once the supply of paperclips outstrips the demand for paperclips the profit derived from manufacturing paperclips drops to zero, and so paperclip production halts.

          Runaway AI is scary precisely because it lacks the feedback mechanisms inherent in the capitalist marketplace.

          • Morgan says:

            What is the feedback mechanism that stops a company trying to maximise profit?

            “Paperclip companies aren’t paperclip maximisers because maximising paperclips will eventually become unprofitable” doesn’t actually address the argument, because the capitalism = paperclip maximiser analogy isn’t “paperclip companies are like paperclip maximisers because they produce paperclips”; it’s “paperclip companies are like paperclip maximisers because they maximise something”. The something is, under this analogy, profit.

            Profitability is the check on maximising paperclips (or anything else, other than profit itself) because it shifts the target of the maximisation. You don’t want to produce the biggest number of paperclips, you want to produce the most profitable number of paperclips, whether that’s billions of paperclips or one very expensive paperclip.

            What’s the check on maximising profit? When do you tell the profit-maximisation system “we don’t need no more stinking profits”?

          • The Nybbler says:

            What is the feedback mechanism that stops a company trying to maximise profit?

            Other companies.

    • MugaSofer says:

      Scott is attacking a fiction writer for using a compelling metaphorical lens to criticize capitalism. Chiang is simply making an argument that downplays AI risk RELATIVE TO the risks of capitalism as it currently operates. Scott’s thesis that “Chiang argues the analogy proves that AI fears are absurd” is what’s absurd.

      Did you read Chiang’s article? He repeatedly says exactly that.

      “AI is a fundamental risk to the existence of human civilization.” Doomsayers have been issuing similar warnings for some time … superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification … this doesn’t make me worry about the possibility of a superintelligent AI … It’d be tempting to say that fearmongering about superintelligent AI is a deliberate ploy … The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue.

    • sty_silver says:

      Clearly, dozens of highly intelligent people have understood him to mean that AI is not dangerous. It seems to me that arguing they are all wrong is not particularly reasonable. Plus, even if they were, the fact that so many readers came away with that reading would itself be a basis for arguing against the essay.

      The analogy is also just not good. AI and capitalism have barely any similarities that are useful for understanding them better. I think we’d do well discouraging any such comparisons.

      • ChuckleberryFinn says:

        “Clearly, dozens of highly intelligent people have understood him to mean that AI is not dangerous.” lol is this rationalist trolling?

  9. Brett says:

    Silicon Valley in general seems like a poor example to use as a “money optimizer”, and especially with Gates and Musk as they are now. Gates is giving much of his wealth away, and Musk chose to spend his early wealth on designing rockets and electric cars rather than something that would have given him a lot more money a lot quicker (such as founding a hedge fund or the like).

    • Matt M says:

      Yep. If capitalism requires one to optimize for “earning the most money possible” then both of these people are absolutely horrible examples. As is most of Silicon Valley.

      • quanta413 says:

        Not to mention “earning the most money possible” is not even well defined. Obviously, nominal would be a poor choice. But if we’re talking real units, how are we adjusting the value of the basket of goods? Using CPI? Or is the claim that capitalism is full of capitalists trying to maximize their own relative wealth? But wait, that’s probably not right because most capitalists wouldn’t off their opponents just to boost their relative standing. So at some point we’re left with a much more boring claim that capitalism is full of capitalists doing normal selfish human things to maximize their status as judged by some other humans or fulfill some particular desire they have. But this is true of any system involving human competition and cooperation; the better systems are just the ones that net more on top of status competition.

        • Andrew Cady says:

          Corporations trying to maximize their valuations. It’s not ordinary humans doing normal selfish things, because of the wall of separation that effectively exists between the anonymous shareholders and the agents.

          (Compare “closely held corporations,” which can actually make moral choices.)

          • Matt M says:

            Corporations trying to maximize their valuations.

            Ah, but which is it?

            Valuation and profitability are very different things. We can’t just use these words interchangeably whenever challenged on them.

          • Andrew Cady says:

            @Matt M

            I think you may have confused me with someone else in the thread. I only said valuations; I never said profits.

            But I would still say they’re not actually that different, because the valuation is based on the prediction of future profits which is mostly (though not entirely) based on past profits. Over the long term, profitability and valuation are going to amount to the same thing.

            When people say that corporations want to maximize profits, I think they are probably being imprecise in their phrasing and you shouldn’t take it over-literally. They probably consider “capital gains” to be just another kind of profit.

          • Matt M says:

            But I would still say they’re not actually that different, because the valuation is based on the prediction of future profits which is mostly (though not entirely) based on past profits. Over the long term, profitability and valuation are going to amount to the same thing.

            I’m not entirely sure that’s true. There are plenty of companies with large valuations who have never been profitable, or haven’t been profitable in recent years, or what have you.

            You’re right that in theory it should match expectations in the long run, but in reality, I don’t think it does.

  10. Nancy Lebovitz says:

    I can’t figure out why people aren’t at least as afraid of governments (governmentism?) as they are of capitalism.

    • reasoned argumentation says:

      Because a memeplex of “be afraid of governments” is useless to its holders.

      The communist memeplex is useful for keeping your gang together long enough to take over the government and take ownership of, well, everything in the country.

    • MostlyCredibleHulk says:

      Some are. They are called libertarians. Or, more often, “those crazy libertarians which are always afraid of government for no reason” and “those annoying libertarians which annoyingly tell us ‘we told you so’ each time the government has caused major problems”.

      • [Thing] says:

        Even beyond self-identified libertarians, concerns about “big government” are a widespread trope in American politics, especially on the right. On the left, you don’t see as much fear of government in general, but more specific concerns about “the surveillance state,” “the carceral state,” “the military-industrial complex,” etc. are still common. So I think people do have an instinctive anxiety about government that mirrors their anxiety about corporations in most important respects.

        If corporations are less popular than governments, perhaps the reason is that people usually identify with their nation-states to some degree, and maybe some of that positive affect rubs off on the governments of said nation-states, whereas corporations are more like foreign governments in how they stand in relation to people who don’t work for them or own shares in them. The nationalism angle would also explain why transnational corporations are viewed as especially suspicious.

        Anyway, it did strike me as a weakness of Chiang’s piece and the Charlie Stross essay linked above that they didn’t acknowledge that the analogy with superintelligent AIs works just as well with governments, religions, nations, economies, or any other collective human organization in place of corporations.

        • MugaSofer says:

          Governments don’t have a central “utility function” they attempt to maximise in the same way corporations do.

          Religions at least do have an obvious big instrumental value in the form of their number of followers, but they also have a bunch of other competing values (to the point that many religions don’t bother to proselytise at all).

          • Jack Lecter says:

            Democratic governments optimize for votes and public support.

            Government officials who don’t optimize for power tend not to retain it for long.

            I don’t think it’s a perfect analogy, but it’s strong enough to make the level of corporate-hate surprising, given the absence of correspondingly strong government-hate.

          • John Schilling says:

            “Power is not a means; it is an end. One does not establish a dictatorship in order to safeguard a revolution; one makes the revolution in order to establish the dictatorship. The object of persecution is persecution. The object of torture is torture. The object of power is power.”
            ― George Orwell, 1984

            This is approximately as accurate as saying that capitalism is a money-optimizer.

          • Doctor Mist says:

            Governments don’t have a central “utility function” they attempt to maximise in the same way corporations do.

            Hmm, an interesting thought. But I’m not sure that governments don’t, nor that corporations do.

            I think it’s rare (at least outside of the financial industry) for a company’s “mission statement” to directly invoke profit. While the founders of a company probably expect that they will make money, that may in some sense only be an instrumental goal in service of some loftier goal: the reason they are founding this particular company instead of a hedge fund. (For instance, Google’s original mission was “organize the world’s information and make it universally accessible and useful”.)

            When we say that a corporation’s central utility function is to make a profit, we’re using a teleological metaphor, as when we say the purpose of the immune system is to ward off infection. We mean that the natural selection of the marketplace tends to favor a corporation that acts as if profit is its terminal goal.

            By the same token, we must distinguish between, on one hand, the motives of the people who found and serve in a government and, on the other, the teleological, emergent goals of a government as an entity in the marketplace of competing institutions. These goals are not the same as a politician’s goal of getting re-elected — a government needn’t care what flesh-units comprise it (though of course a group of canny politicians can marshal some of the government’s powers to enhance their electoral chances).

            The revealed goals of a government as an entity are certainly different from the revealed goals of a corporation. Profit is certainly not one of them, but perhaps control of its populace and defense against other governments are the appropriate analogs. What else enhances the success/survival of a particular government in the ecosystem of governments and similar institutions? (As with organic evolution, there are niches: Monaco’s survival strategy is different from Germany’s.)

      • Jack Lecter says:

        I’ve also observed this. It still doesn’t tell us why libertarians are so few.

        • Conrad Honcho says:

          Because libertarianism only works in a world without evil. The world in which everyone agrees to follow rules of non-aggression and deal fairly with everyone else is not this one. Blame Adam and Eve for ruining that one.

          • Rick Hull says:

            Eh… non-aggression means that aggression / evil is dealt with swiftly and seriously (perhaps proportionally, say 2x – 10x). It’s not a pacifist thing, and it seeks primarily to handle the problem of aggression and evil. Primary forms include minarchy, security contracting, or self defense. What would you say to a libertarian gun owners’ group about their inability to deal with evil?

            I think libertarianism provides the blank slate of civilized behavior and thus civilization (i.e. beyond tribes, which are by nature authoritarian and communitarian). Castles and walls of restrictions to individual liberty can be built on top of this blank slate, but this view judges most of our governing edifice harshly, presuming some level of oppression that requires more justification than commonly given. This view doesn’t mean that the blank slate of anarchocapitalism is ideal — public good problems like national self defense, the environment, etc. still need solving, among many other collective action problems and similar.

            But what is there to suggest that libertarianism simply cannot deal with lawbreakers?

          • Andrew Cady says:

            Libertarianism can’t deal with “lawbreaking” in the form of the large majority of people collectively legitimating a government that claims to be the true ultimate owner of everything and thus can tax the libertarians.

          • Doctor Mist says:

            Libertarianism can’t deal with “lawbreaking” in the form of the large majority of people collectively legitimating a government that claims to be the true ultimate owner of everything

            And democracy can’t deal with a large majority of the people collectively deciding that they should really hand everything over to a king or dictator. Does this reveal a fatal flaw in democracy as an ideal? Or is it a silly quibble, on the grounds that a populace that instituted a democracy would be uninterested in throwing it away?

            (I’m not saying they wouldn’t, of course — to my mind the history of the U.S. shows that it’s not impossible. Neither is the idea that a libertarian populace might decide to throw it all away. But you have not discovered any surprising special inherent fragility of libertarianism.)

          • Conrad Honcho says:

            But you have not discovered any surprising special inherent fragility of libertarianism.

            I don’t know if it needs to be “discovered,” but the inherent fragility of libertarianism is that cooperating collectivized groups crush individualists easily.

        • An interesting question. One answer is that the case for libertarianism depends on understanding the way decentralized coordination via prices and exchanges works. Without that understanding, it seems obvious that you need someone above the system supervising it and making it do good things.

          Understanding that is hard. That’s why, among academics, economists, who have to understand that, are the ones most likely (in my casual observation) to be libertarians.

          This is part of a more general pattern. In many contexts, people have only weak incentives to make sure that what they believe is true–my false beliefs about which presidential candidate will be better have almost no effect on my life. In those contexts, a set of ideas that is easy to understand but wrong has a big advantage over a set that is hard to understand but right.

          My usual example is foreign trade. Most public discussion takes for granted the theory of absolute advantage and a generally mercantilist perspective, shown by the use of terms like “more competitive” or “unfavorable balance of trade.” That set of ideas was refuted about two hundred years ago. But it’s easier to understand than the principle of comparative advantage, which is presumably both why it was worked out first and why it remains so widely believed in.

          I’m sure that’s not all of the reason libertarianism is as small as it is, but I think it is part of it.

          On the other hand, consider that in England for a good deal of the 19th century a form of libertarianism, classical liberalism, was something close to political orthodoxy, accepted by a sizable fraction of the population.
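
As an editorial aside on the comparative-advantage point above: a small worked example, with made-up labor costs in the spirit of Ricardo’s wine-and-cloth illustration, shows why the principle is less intuitive than absolute advantage – specialization raises total output even when one country is better at producing everything:

```python
# Worked example of comparative advantage. The labor costs are hypothetical,
# loosely modeled on Ricardo's classic wine-and-cloth case.

HOURS = 360  # labor hours available to each country

# Labor hours needed to produce one unit of each good.
cost = {
    "England":  {"cloth": 10, "wine": 20},
    "Portugal": {"cloth": 8,  "wine": 9},  # absolute advantage in BOTH goods
}

def output(country, hours_on_cloth):
    """Units of (cloth, wine) produced for a given split of labor hours."""
    c = cost[country]
    return hours_on_cloth / c["cloth"], (HOURS - hours_on_cloth) / c["wine"]

# No trade: each country splits its labor evenly between the two goods.
autarky = [output("England", HOURS / 2), output("Portugal", HOURS / 2)]
autarky_cloth = sum(cloth for cloth, _ in autarky)  # 18.0 + 22.5 = 40.5
autarky_wine = sum(wine for _, wine in autarky)     #  9.0 + 20.0 = 29.0

# Comparative advantage: England (lower opportunity cost of cloth) makes only
# cloth; Portugal makes the same 29 units of wine as before and spends its
# remaining hours on cloth.
eng_cloth, _ = output("England", HOURS)
port_cloth, port_wine = output("Portugal", HOURS - autarky_wine * cost["Portugal"]["wine"])

print(f"autarky:        {autarky_cloth:.1f} cloth, {autarky_wine:.1f} wine")
print(f"specialization: {eng_cloth + port_cloth:.1f} cloth, {port_wine:.1f} wine")
```

Portugal is the better producer of both goods, yet letting England specialize in cloth leaves the world with the same amount of wine and roughly eight more units of cloth – the counterintuitive result that the mercantilist framing misses.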

    • 1soru1 says:

      Who is really unworried about what ‘government’ might do? Are there really people who were unconditional supporters of not merely both Trump and Obama, but also hypothetical-US-Hitler and hypothetical-US-Stalin?

      The only thing that really differs is the details of _what_ they fear government might do.

    • Nancy Lebovitz says:

      Current theory: Governments are older and scarier than capitalism, and governments have won the propaganda war.

      I’m about equally scared of governments and corporations deploying AI without adequate constraints. I consider an accidental paper-clipping scenario much less likely.

    • Jack Lecter says:

      This has puzzled me for years.

      I honestly don’t get it. I can come up with reasons I might be more afraid of government, or less afraid of capitalism, than the people around me, but none of them seem even remotely close to strong enough to explain the observed effects.

    • Helaku says:

      Well, some are afraid of both, especially if unrestrained. When those entities turn into Moloch, things could go wrong for ordinary people.

    • Guy in TN says:

      I can’t figure out why people aren’t at least as afraid of governments (governmentism?) as they are of capitalism.

      Speaking as a leftist: Government power is controlled democratically (at least somewhat), while private power is controlled only by market forces (i.e., those with money/power).

      • Nancy Lebovitz says:

        Governments observably do a lot more killing.

        Also, governments have externality issues– their treatment of non-citizens isn’t well constrained by democracy.

        • Guy in TN says:

          The state of Michigan has, to this date, never executed a prisoner. Excluding police killings, it is essentially bloodless (much like a U.S. corporation). Do you think that government killings by the U.S. would cease if the state of Michigan were to take over the U.S.’s military? I don’t. The higher amount of killing is a result of being the dominant power in our system. Replace it with a non-democratic entity and you’ll get much of the same, or probably worse. There is a history of non-democratic control of the highest position of power, and it’s not good.

          Also, governments have externality issues– their treatment of non-citizens isn’t well constrained by democracy.

          Does private ownership not have the same externality issues? Non-owners have basically no constraints on their treatment, if the property owner is sovereign.

    • pelebro says:

      It seems to me that people are and were already sufficiently afraid of governments (and the potential for government excesses). The system of dividing power into branches, of “checks and balances”, seems to me to have been engineered by people very concerned with the danger you mention, for the purpose of avoiding it.

      • Nornagest says:

        USG’s setup is a lot more concerned with that than most systems are. Part of this has to do with the founders’ ideological commitments, but most of it has to do with the current iteration being written by a barely civil committee of different factions who all seriously mistrusted each other and desperately wanted to avoid getting screwed over.

        (“Seriously mistrust” would escalate to “hate” in about another fifty years.)

  11. Evan Þ says:

    Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted… Sometimes insight arises spontaneously, but many times it doesn’t. People often get carried away in pursuit of some goal, and they may not realize it until it’s pointed out to them, either by their friends and family or by their therapists. Listening to wake-up calls of this sort is considered a sign of mental health.

    Okay. Suppose we take this at face value. Chiang himself admits that some people are like that – some people (like Gates and Musk) will stop at nothing to achieve their goals, and they may not realize it. And some people don’t stop even when their therapists give them wake-up calls; otherwise, Scott’s day job would be a whole lot easier.

    What convinces Chiang that a superintelligent AI won’t be like that?

    • reasoned argumentation says:

      Some will stop at nothing to achieve their goals – those are the ones who achieve their goals. What Chiang is missing is that if you “convince” facebook to stop being facebook then some other company will be facebook.

      Of course he solves this by positing that there will be some kind of regulation, but then, just like facebook is facebook because it follows its incentives,* the regulations will be written by an organization that is following its own incentives – and its incentives aren’t to write regulations that make facebook nicer, or whatever he thinks facebook is supposed to be.

      Ultimately it’s an argument that there should be better governance, but his memeplex hasn’t been selected to produce better governance – it’s been selected to get its holders published and respected by the right people. Speaking about how to sensibly design governments that have aligned incentives – or even considering what that project would mean – is the type of thing that gets you banned from the Google campus. Can’t imagine any selective pressure on memes based on that!

      * Let’s leave aside what facebook actually does optimize for.

  12. will rudisill says:

    “It’s a good point, and I would have gone on to explain the more general idea of an optimization process. Evolution optimizes relentlessly for reproductive fitness, capitalism optimizes relentlessly for money, politics optimizes relentlessly for electability.”

    I do not think capitalism and evolution are optimization processes in the same way. Capitalism evolves based on the actions of entities who make decisions motivated, at least in part, by theory and predictions of the future state of the system. One might call this ‘intent’. Evolution is like a geologic process: it arises naturally out of physical law; there is no intent structuring the optimization path.

    • Ketil says:

      I think this difference is quantitative rather than qualitative. The intent of a corporation to grow and expand and increase profitability is perhaps more explicit than the intent of my genes to reproduce (interestingly, this makes corporations more conscious of the processes that shape them than the humans they consist of – are they already more self-aware and meta-cognizing than we are?), but in both cases the intent exists because a lack of intent gets weeded out in a selection process.

      Another thought: does an expansive corporation have higher fitness than a profitable one? I suspect that a highly profitable corporation might get out-competed (bought up, merged, or whatever) by an expansive one, even if it leads to lower profitability overall. I seem to hear many stories about misguided mergers and acquisitions, and large corporations buying up and integrating small and innovative startups – often at impressive cost. If this is true, corporation size is yet another peacock’s tail, crippling the invisible hand.

    • Anon. says:

      Irrelevant and also not really true. What is sexual selection if not “intent structuring the optimization path”?

    • Helaku says:

      motivated by theory and predictions of the future state of the system

      Were merchants of the 16th century motivated by a theory, even in part? I doubt it. Besides, every animal “makes a decision” in one form or another. Though humans have the (probably) unique ability to reflect on those decisions.

    • raj says:

      But intent can also be construed as a “geologic process arising naturally out of physical law”. Instead of the gradient being “inclusive fitness of nearby mutations” it’s like “expected utility of nearby counterfactual future worlds”.

    • Andrew Cady says:

      I do not think capitalism and evolution are optimization processes in the same way. Capitalism evolves based on the actions of entities who make decisions, at least in part, motivated by theory and predictions of the future state of the system. One might call this ‘intent’.

      There’s definitely a real difference here. Corporations are in a sense Lamarckian — they don’t have the property of biological evolution that everything gets “reset” back to DNA every generation.

      Biological organisms can have intents but their adjustment of their intents over time cannot ever constitute a biological change (i.e., it cannot ever constitute an optimization in the optimizing process under consideration). That isn’t true for corporations. Intent has a different place in the structure of the optimization process for them.

  13. j r says:

    Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted…It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.

    I read more than a little bit of projection in here, but on Chiang’s part. When I think about what capitalism is, I think of the private ownership of the means of production. These kinds of anti-capitalist arguments are powered by thinking of capitalism as an empty bucket in which to place all of the things that one doesn’t like about the world, and by not wanting to bother to do the work of coming up with meaningful explanations and solutions.

    It is somewhat successful rhetorically, because people are always looking for empty buckets in which to place all the blame for the things about themselves and about the world that they don’t like and for which they don’t take responsibility. But the end result is that anti-capitalist movements will be full of people who tend not to think very clearly about how cause and effect play out in the world. And for that, I am grateful. Of course, lots of movements that don’t think very clearly about cause and effect manage to gain enough popularity to do real damage. For that, I am a little worried.

  14. blacktrance says:

    Mild steelman (not endorsed): The accusation is about parochialism, not projection. The idea is that if an entity’s decisionmaking improves and it obtains more resources, it accomplishes its immediate goals more effectively, but doesn’t reflect on them. This is the environment and/or ideology that AI skeptics in Silicon Valley are immersed in, so they think that an extremely rational being (the AI) will be even more like that, which is why they’re worried. But that’s like thinking that since faster horses tend to be taller, when we go 100 mph in the future, it’ll be on horses the size of skyscrapers. If all you know is horses, it’s an understandable mistake, but a parochial one.

    The AI-corporation analogy is more direct than physicists-black holes or physicists-chain reactions, because AIs and corporations are both supposed to do the same kind of thing with the same kind of stuff, i.e. use resources to optimize for a goal.

    • carvenvisage says:

      an extremely rational being (the AI)

      That’s where it would be backwards.

      Superintelligent AI is not supposed to be super-rational.

      The problem is that computers are so much faster than brains that it would be relatively easy to make something superintelligent (as in general thought at computer speeds) while having a poor design, with little or none of what Mr Chiang calls ‘insight’ (the capacity, and inclination, for rational reflection on your course, and specifically your goals).

      In the same way that a corporation might irrationally pursue bad goals because its internal incentives, inertia, the incentives of stockholders, etc., might bind it to those, an AI could pursue bad goals because they were hard-coded in with no feedback mechanism to change them – perhaps by the tunnel-visioned, short-sighted capitalists Chiang is saying we should be worried about. (And the same process can turn otherwise good goals into bad ones, like paperclips or strawberries.)

      To put it another way, powerful entities that are poorly designed will tend to lack ‘insight’ by default, whether a corporation that was never designed at all or an AI (a thinker operating at computer speeds) that was designed without proper foresight.

  15. ksvanhorn says:

    Continuation of an ongoing theme in Scott’s writing, and in the writing of “progressives” in general:

    “I can’t tell you how many morons hear a patient say `I think my husband hates our kids’, give some kind of galaxy-brain level interpretation like `Maybe what’s really going on is you unconsciously hate your kids, but it’s more comfortable for you to imagine this of your husband’, and then get absolutely shocked when the husband turns out to be abusing the kids.”

    Rule #1 of progressive writing: always, always, always make the person with negative traits be a man, and the person with positive traits be a woman.

    Fun fact: the majority of real-world child abuse is committed by women, not men.

    https://everydayfeminism.com/2014/10/feminism-against-child-abuse/

    • Wrong Species says:

      Do you honestly think that Scott consciously thought he should make the abuser a man just to virtue signal?

      • reasoned argumentation says:

        Yes.

      • melboiko says:

        Continuation of an ongoing theme in ksvanhorn’s writing, and in the writing of “anti-progressives” in general:

        Rule #1 of anti-progressive writing: everything, everything, everything an outgroup member does is motivated by virtue signaling. (cf. kabbalah.)

        • reasoned argumentation says:

          Progressives talk about doing this all the time.

          No one has gone through school without seeing guides to using “non-problematic” language – making sure that when you use pronouns you take care not to use “she” for people in stereotypically female roles, for example. Progressives actively call out any negative portrayal of progressive pet groups as racist or sexist or homophobic if it matches reality – in other words they’re calling the speaker out for demonstrating vice – the opposite of virtue. That’s exactly the same thing ksvanhorn is observing, except his values are “does this correspond with reality” rather than “does this flatter progressive pet groups”.

          If you don’t share the progressive view that always flattering progressive groups while always insulting progressive enemies is an unalloyed good that of course all decent people should strive for, it’s grating to hear over and over.

          • MugaSofer says:

            You’re conflating signalling virtue with being virtuous.

            Many of those discussions are based on (alleged) harms, such as stereotype threat, not on the benefit to the person “signalling” virtue.

            Progressives actively call out any negative portrayal of progressive pet groups as racist or sexist or homophobic if it matches reality[sic] – in other words they’re calling the speaker out for demonstrating vice – the opposite of virtue.

            Courts regularly punish people for murder or theft – in other words they’re punishing the accused for demonstrating vice – the opposite of virtue. Those darn virtue-signallers!

          • cactus head says:

            A source of confusion is that much of what gets called ‘virtue signalling’ should instead be called ‘cheap talk’.

          • reasoned argumentation says:

            You’re conflating signalling virtue with being virtuous.

            Making a negative statement about a progressive group is a neutral act in my eyes whether the statement is true or not and whether or not it causes emotional harm to members of the group – real, imagined, or pretended. In the eyes of a progressive nothing in the world is worse than making a negative statement about a progressive pet group – not even murder.

            Refraining from saying things that offend progressive pets, using awkward language, and always counter-stereotyping in speech and writing are considered virtues by progressives (or at least, the lack of those things is considered the worst imaginable vice). Doing that out of fear of the consequences of not doing it demonstrates the lack of a virtue called “bravery”. Doing so out of an insincere belief that it demonstrates virtue is virtue signalling.

          • Tarpitz says:

            No one has gone through school without seeing guides to using “non-problematic” language

            I assure you that I did. My schools may have been (were) atypical in all sorts of ways, but I for one never encountered such a thing at any point in my education.

          • lvlln says:

            In the eyes of a progressive nothing in the world is worse than making a negative statement about a progressive pet group – not even murder.

            I’m a progressive, and in my eyes, murder is far worse than making a negative statement about a progressive pet group. Or about any group, really.

            OK, perhaps #NotAllProgressives isn’t all that helpful. I think the real issue here is that you aren’t correctly modeling the mindset of the subset of progressives who appear to behave as if they believe that making negative statements about a progressive pet group is worse than murder (even though I disagree that this subset is the entire set – or even a majority of the set – I at least believe that it’s a sizable and particularly influential portion of the set). The belief isn’t that making a negative statement about a progressive pet group is a vice in itself, it’s that making a negative statement about a progressive pet group will predictably lead to harm to members of that progressive pet group, including murder and beyond. As such, this subset of progressives may behave as if some specific murder is less bad than making a negative statement about a progressive pet group, but only to the extent that they believe that the resulting harms from those negative statements exceed the harms of a specific murder.

            Now, it’s an open question how much harm predictably results from negative statements about a progressive pet group, and that’s probably where you (and I) disagree with the subset of progressives in question.

        • Yep. Planting a flag on your lawn isn’t virtue signalling, because only the left virtue-signals.

          • ksvanhorn says:

            Yes, planting a flag on your lawn is virtue signaling.

            BTW, it’s not the virtue-signaling aspect of the always-make-the-fool-or-villain-a-man norm that bothers me; it’s the constant, unceasing parade of negative male stereotypes.

            https://www.youtube.com/watch?v=T1GnQ_k7Vok

          • Conrad Honcho says:

            This is where Brad usually comes in and gets mad at the right for replacing the word “hypocrisy” with “virtue signalling.”

            The kind of “virtue signalling” the right snarls at is hypocrisy: the movie star who says “refugees welcome!” from the safety of their walled mansion knowing the refugees are going to be shoved into poor people’s neighborhoods and they’ll never have to see one. I don’t think flag waving is in the same ballpark, especially if you are genuinely a patriotic American and/or you or your family serve in the military.

            I think a better example of right-wing virtue signalling is the Choose Life bumper sticker. Easy for you to say when you’re not impoverished and knocked up.

          • Nornagest says:

            Just about everyone only gets upset about hypocrisy when it’s their opponents doing it. For the right, that’s e.g. cities with more Black Lives Matter posters than actual black people (hi, Berkeley!); for the left, it’s e.g. pundits extolling family values while working on their fourth divorce.

            I’d say it’s all virtue signaling. It’s just that that particular phrase for it is mainly a right-wing usage for whatever reason, so you mainly hear it about the left.

    • sohois says:

      An entire blog post about the dangers of projection, and all you can do is project some weird conspiracy onto a throwaway comment?

      I’m not sure this post really deserves any attention, but given the principle of charity, would you care to elaborate on your theory? What other examples are there of a secret, misandrist agenda within Alexander’s blog posts? What is the motivation for these coded messages? Where is your evidence that this represents a real attempt to advance a progressive agenda and is not merely a coincidence?

      • ksvanhorn says:

        Not a secret agenda, sohois; just a cultural norm that promotes negative male stereotypes. I’ve seen the psychological harm this kind of thing causes — young men who suffer a crushing inferiority complex, convinced that they are innately inferior by reason of having a Y chromosome.

        • Your claim is about Scott, so I suggest a simple test. If Scott is trying to promote a male bad/female good stereotype, it ought to show up in the blogs he chooses to link to. Run down the blogroll in the left column of his blog and count authors by gender.

          If you don’t like that test, can you suggest a better one? You might expect that women would be more attracted than men to a blog that signals female superiority. That doesn’t seem to fit the actual poll results, or the observed ratio in meetups.

          • reasoned argumentation says:

            Your claim is about Scott, so I suggest a simple test. If Scott is trying to promote a male bad/female good stereotype, it ought to show up in the blogs he chooses to link to.

            Doesn’t follow at all.

            “Only link to women / only retweet women / only follow women” is a progressive meme but not a mandatory one – it’s considered especially virtuous though:

            https://medium.com/@sheenamedina/anil-dash-decided-to-only-retweet-women-for-an-entire-year-and-wrote-about-his-experience-f941d3bc3701

            Scott is interested in signalling enough progressive virtue to avoid accidentally stirring up a hate mob – as he’s specifically stated in the comment section on more than one occasion – so it’s not the slightest bit implausible that he does this sort of thing with conscious intent.

          • meh says:

            @reasoned
            I find Scott’s posts about gender and feminism to be intellectually honest. Can you point to posts that suggest otherwise?

          • ksvanhorn says:

            I see I have not made myself clear. I’m not claiming that Scott is deliberately promoting negative male stereotypes; I’m claiming that this is a consequence of adhering to certain unspoken progressive norms. This is an issue with progressives in general, not just Scott. As to his motives, my guess is that he’s just doing what feels natural as a Blue Tribe member.

          • reasoned argumentation says:

            meh –

            The item under discussion is the example and I laid out my reasons for why I think it’s at least slightly dishonest.

            ksvanhorn –

            You were clear – I’m willing to go a step further, though. Scott consciously mimics progressive signals out of fear – which he openly admits. It’s not speculation when he says that he does it. This particular example is a progressive signal, so it’s at least possible that it wasn’t unconsciously sent but consciously sent, because we know that Scott consciously sends progressive signals.

          • sohois says:

            I see I have not made myself clear. I’m not claiming that Scott is deliberately promoting negative male stereotypes; I’m claiming that this is a consequence of adhering to certain unspoken progressive norms. This is an issue with progressives in general, not just Scott. As to his motives, my guess is that he’s just doing what feels natural as a Blue Tribe member.

            Very well, from this and your earlier reply it seems you at least are genuine in this belief and not merely a troll as I first suspected.

            Nonetheless, your argument seems extremely difficult to prove one way or the other. Unconsciously adhering to certain progressive norms? How does one go about showing this? First of all, what evidence can you show that progressive norms are focused on misandrist causes? Secondly, in my original reply I asked what further evidence you had that Alexander was including these coded messages, unconsciously or no? I don’t doubt that there have been other blog posts in which a negative example was based on a male, but it seems a very high bar of proof to demonstrate that this is a deliberate pattern.

          • meh says:

            The item under discussion is the example and I laid out my reasons for why I think it’s at least slightly dishonest.

            @reasoned
            If you only have one example of a gender selection, then your result will happen by chance 50% of the time.

            Like DavidFriedman says, suggest a test.

          • reasoned argumentation says:

            Here’s my suggested test of whether or not Scott consciously sends progressive signals:

            Look for comments where Scott says that he consciously sends progressive signals because he fears harassment by progressive mobs. Try searching the comment section of this web site for examples.

            The examples are not super hard to find.

    • Jugemu says:

      The issue you point out is indeed common and kind of annoying, but I don’t think that is a good example. It seems more like a good-faith usage of anecdotal evidence from Scott’s own practice.

    • Nancy Lebovitz says:

      I think Scott pretty much went with cultural norms there.

      ksvanhorn, thanks for the link. If the statistics are accurate, women are only a little more likely (54% vs. 46%) to abuse their children than men are. What did you think of the social justice framing on the article?

      My feeling is that a man who claimed that his wife hated his children would be more likely to be gaslit by his psychologist.

      What would you (or anyone) think of a gender neutral version of the story with one parent and the other parent?

    • The Nybbler says:

      Perhaps you’ve got it backwards. Perhaps every case of this Scott has heard has been the husband being the patient and the wife being the abuser, and so as a result he made the husband the abuser in the example to avoid the chance of some patient thinking Scott is talking literally about him.

      Not that I believe this, but I think you’re theorizing from extremely scanty evidence.

      • Ilya Shpitser says:

        I think he’s just melting as freshly fallen snow would in the searing heat of Scott’s microaggression. As far as I can tell, loudly complaining about this one issue is basically the only thing this guy does here.

  16. Matt M says:

    capitalism optimizes relentlessly for money

    I don’t think this is true. Capitalism is not an entity. It does not act. It does not “optimize” for anything. This is the biggest failure of the analogy… it’s not just that the analogy is bad, it’s that it compares things of entirely different types. It’s like saying “Scientists have finally found Bigfoot, and it turns out that it’s scientology.” Well no, that doesn’t work, because Bigfoot is an actual entity and scientology is a system of belief. Even if it turns out that Bigfoot is a scientologist, that doesn’t make the analogy any more valid.

    Furthermore, capitalism itself offers you no advice on what to do in any particular situation. It doesn’t tell you that you have to make the most money possible. There is no capitalist bible (despite Ayn Rand’s best efforts) prescribing what actions should or should not be taken.

    • reasoned argumentation says:

      Furthermore, capitalism itself offers you no advice on what to do in any particular situation. It doesn’t tell you that you have to make the most money possible. There is no capitalist bible (despite Ayn Rand’s best efforts) prescribing what actions should or should not be taken.

      It does, though. Entities that don’t optimize for making money eventually get outbid for their inputs and go out of business. If you value your inputs by some means other than their marginal value to your outputs, then you’re being inefficient and are at risk of long-term disaster. A Facebook that doesn’t act like Facebook, that doesn’t track you, doesn’t sell ads as effectively, can’t afford the optimizations that keep people looking at its site, and loses traffic to a new social network. That’s the optimization that capitalism forces.

      Of course, it’s foolish to think that this is a feature of capitalism rather than just a special case of a universal rule of evolution.

      • Brett says:

        Companies whose leadership remains insulated from the immediate pressure to turn a profit (such as family-owned firms) can last for a very long time, far longer than your average share-traded corporation. They just need to make enough of a profit to pay back whatever debt they incur over time, and their incentives are different from investors who are just shifting between whatever offers the best risk-adjusted return over time.

        That’s especially the case if they’re big and diversified. Sometimes even if they’re not – Nintendo is 129 years old.

      • Matt M says:

        That’s a nice theory, but as Brett says, it doesn’t seem to play out long-term. There are companies like, say, Patagonia or Tom’s Shoes that make a big deal out of spending a lot of money on charity. They do not “relentlessly optimize for profit” or what have you. They seem to be doing fine. They are growing, not shrinking.

        At a simpler end, I know plenty of small business owners who are doing quite well. So well they could almost certainly expand. But they don’t. Because they don’t feel like it. Sometimes a taco truck can just be a taco truck for as long as the owner feels like running a taco truck. The notion that he has to maximize profit and continually grow or else Taco Bell will ruthlessly drive him out of business is plainly false, as anyone who has ever been to a taco truck could tell you.

        Capitalism doesn’t say “you must maximize profit to exist,” but rather something like “you must maximize profit if you want to be the most profitable” which is something of a tautology.

        Furthermore, even the businesses who want to maximize profits usually don’t know how to. I work in an entire industry (management consulting) that only exists on the premise that lots of companies, even the giant behemoths, are doing a bunch of stuff wrong and not correctly optimizing their profits. And yet, they haven’t been competed out of existence by someone doing better!

        • 1soru1 says:

          The claim is clearly not about ‘capitalism’, or the market system in general, but about publicly traded stock corporations. Any CEO who said ‘we like making tacos, we will continue to do so, and maybe if people like them we will even make some money, I dunno’ would pretty soon be an ex-CEO.

          • Matt M says:

            Any CEO who said ‘we like making tacos, we will continue to do so, and maybe if people like them we will even make some money, I dunno’ would pretty soon be an ex-CEO.

            Wasn’t this Amazon’s business strategy for the better part of the 2000s? 🙂

            I’ve heard allegations that this is basically what Uber is doing as well (although they are not publicly traded).

            In any case, this gets to David Friedman’s point below. Corporations spend money on all kinds of things that would not seem to be in the interest of “profit maximization.” Mainly charitable giving programs, as well as perks for executives.

  17. outis says:

    It’s important to keep in mind that Chiang is a writer. You can see where he is coming from: he feels like his field has already gone through the grinder. The internet brought a new, smarter way of doing things, which promised to bring exciting new opportunities for media production and consumption. And it did! But it also put the entire media industry through the screw of runaway optimization, whose objectives did not necessarily align with those of the average person in media. Thus that person feels that things have gotten worse as they got “more optimal”; and, worse, they feel that they, their friends, and any other people like them have entirely lost control of the direction their world is heading.

    Yet I can’t help but feel enormous discomfort about any discussion of AI risk. The promise is that these new, superior intelligences will be able to do better than we can, and we will reap the benefits as we entrust to their direction not just the media, not just academic pursuits, but ultimately the rudder of our entire society. The fear is that their objectives will not align with ours; that they will bring about disastrous outcomes for us, not out of malice, but simply because they are not us and do not think like us (nor would they ever want to). And because they are superior intelligences, and because so much of our world will depend on them, we will be completely powerless to stop them once we realize our mistake.

    Onfvpnyyl, NV evfx vf na nagvfrzvgvp qbtjuvfgyr.

  18. hnau says:

    Wasn’t comparing capitalism to AI more or less what Meditations on Moloch was all about?

    • reasoned argumentation says:

      Scott’s piece about the stupid national cafeterias article is a very good argument in favor of part of Chiang’s argument.

    • Wrong Species says:

      There’s a difference between warning about optimization processes in general and literally thinking that there is no difference between corporations and AI.

    • Sniffnoy says:

      I think it’s important to note here that Chiang isn’t comparing AI to capitalism. He’s comparing other people’s ideas of AI to capitalism. He doesn’t actually accept those ideas, and is saying, aha, your ideas about AI are just taken from capitalism, they’re not what AI would actually necessarily be like.

  19. robirahman says:

    This post feels a little bit similar to “Futurism Should be About the Future” in that the original arguments you’re responding to are bad enough that someone’s going to make the case that you’re just making things worse by drawing attention to them, rather than helping by changing your audience’s minds. I’m really hoping you get a response from Ted Chiang on this topic.

  20. capitalism optimizes relentlessly for money

    Utter nonsense.

    If an individual capitalist were optimizing for money he would never spend anything above subsistence–no first-class air flights, no yachts, no caviar–in order to accumulate as much money as possible.

    If a worker in a capitalist system were optimizing for money he would take no leisure beyond what was required to keep him able to do his job, spend on nothing beyond subsistence and whatever consumer goods were needed to function in his job, and take the highest-paid job he could, however unpleasant.

    If a consumer in a capitalist system was optimizing for money …

    It not only is nonsense, it is nonsense that misses the fundamental difference between maximizing for economic efficiency, which is what the ideal capitalist system, i.e. perfect competition, does, and maximizing for paperclips. Economic efficiency is a proxy, although not a perfect one, for maximum utility. Paperclips are not a proxy for anything.

    • robirahman says:

      It’s capitalist businesses that are optimizing for profits, not individual people.

    • nimim.k.m. says:

      It not only is nonsense, it is nonsense that misses the fundamental difference between maximizing for economic efficiency, which is what the ideal capitalist system, i.e. perfect competition, does, and maximizing for paperclips.

      This sounds like you are dismissing a century or two of leftist observation of how the unchecked capitalist society is in many ways a terrible place, and how, instead of ideal capitalism, we seem to be stuck with the capitalism that produces terrible consequences.

      And anyway, even if we assume that the current capitalist system is close enough to ideal capitalism and grant that it achieves maximum utility by proxy, nevertheless, from the vantage point of anyone who subscribes to the idea that utilitarianism is a mis-approximation, or even worse, an evil bastardization of ethics, paperclip maximizers and a runaway capitalist system fundamentally belong to the same class of things: while optimizing for some theoretical notion of maximum utility, each destroys things of value not captured by that notion of utility, like the soul of human society, in the process.

      We end up with a society that is more efficient, but also a society that is atomized, plagued by loneliness and despair. The logic of capitalism (as it happens) encourages the humans in that society to act like rational cogs in the machine, losing sight of other objectives and feeding the machine of despair.

      Wasn’t that more or less the gist of Polanyi’s The Great Transformation, which was discussed here not long ago? Or even Scott’s own Moloch? It certainly is the main sentiment of Ginsberg’s Howl.

    • j r says:

      It’s capitalist businesses that are optimizing for profits, not individual people.

      Except for Amazon, which is essentially operating as one big transfer of wealth from investors and shareholders to consumers. Really, if your view of capitalism is of ruthless profit maximization, then Silicon Valley is probably one of the worst possible examples. I suppose you could argue that all of the capitalists who have equity stakes in Amazon are just playing a long game in hopes that one day the company will grind their competition down to nothing and they can capture all those monopoly rents, but even that requires some explanation. The ability to forgo profit today on the chance of making even more profit at some undefined date in the future doesn’t exactly fit the mold of ruthless profit maximization.

      This sounds like you are dismissing a century or two of leftist observation of how the unchecked capitalist society is in many ways a terrible place, and how, instead of ideal capitalism, we seem to be stuck with the capitalism that produces terrible consequences.

      Similarly to robirahman, if this is your argument, then you’re going to have to offer an explanation of why the most desirable places to live almost all have capitalist economies. Many of them have a layer of social democratic transfer mechanisms sitting on top of those capitalist economies, but they are all thoroughly capitalist nonetheless. And that’s another problem with this theory. If the private ownership of the means of production necessitates a devolution towards Moloch, how did all of these first world countries develop those social democratic transfer mechanisms in the first place?

      • Lambert says:

        >one day the company will grind their competition down to nothing

        That’s already happening. When’s the last time you used a mail-order service?

        • I ordered a book from a different mail order source about a week ago.

          So far as grinding their competition down to nothing, Amazon’s competition is the rest of the retail market, on and off line. Currently Amazon sales come to about 4% of it.

          • reasoned argumentation says:

            So far as grinding their competition down to nothing, Amazon’s competition is the rest of the retail market, on and off line. Currently Amazon sales come to about 4% of it.

            Amazon’s competition is now the rest of the retail market because they were so successful in their original niche of book selling.

            This isn’t a criticism of capitalism, btw – capitalism is great specifically because when a company like Amazon does the job of satisfying customers better customers will allocate their resources to Amazon and away from book selling competitors.

            The reasonable criticism embedded there, though, is that the bookselling competitors were producing value that exceeded Amazon’s, but since they had no way to capture that value they got out-competed. That’s what Scott described so well in the Moloch essay.

            Taking it a step further though – just because there’s a criticism doesn’t mean that there’s any particular solution that doesn’t end up creating worse problems – especially since the “market” for inventing and implementing solutions isn’t subject to selective pressure for actually solving problems – much the opposite, in fact.

          • Harry Maurice Johnston says:

            I almost never buy books from Amazon because the postage to my part of the world is too expensive compared to competitors like Book Depository or AbeBooks.

    • Matt M says:

      Economic efficiency is a proxy, although not a perfect one, for maximum utility.

      This. In a capitalist system, “profit” is not necessarily the end in itself. It is the closest thing we have to an objective unit of measurement to observe utility provided. But it’s not perfect.

      I suspect that if we asked anyone across the political spectrum to name “the most successful companies,” they would do a bit more than provide an ordered list of firms ranked by total net income or highest profit margin or what have you. Even the quant guys on Wall Street would be more nuanced than that…

    • The Nybbler says:

      Best I can tell, the whole “relentlessly optimize for money” thing is based on the idea that a corporation’s highest duty is to maximize either profit or shareholder value. Which as far as I can tell is true only in a very narrow sense, and certainly isn’t true in the mechanistic manner of a paperclip-maximizer.

      • Matt M says:

        Best I can tell, the whole “relentlessly optimize for money” thing is based on the idea that a corporation’s highest duty is to maximize either profit or shareholder value.

        The top business schools don’t even teach this anymore, and I’d be shocked if you could find a high-profile CEO who was willing to say it on the record.

        • alef says:

          What do they teach?

          It’s not obvious to me how a company in a competitive industry (ok, I’m thinking of a conventional publicly listed US company) can reliably trade off profitability against other goods, to any material degree, and maintain a market value (roughly, share price) comparable to its peers that don’t make this tradeoff. And if it succeeds at that, there’s a high risk of being outgrown (by peer companies that can invest more cheaply), and probably being acquired along the way.
          So I assume that if CEOs aren’t being taught just to maximize shareholder value, they are also being taught strategies to survive against those who do. Such as?

          • Matt M says:

            What do they teach?

            “Stakeholder theory”

            The idea that executives owe a responsibility to a large variety of groups, including humanity as a whole, and that shareholders are just one of many important constituencies you must serve.

          • alef says:

            “The idea that executives owe a responsibility to a large variety of groups, including humanity as a whole, and that shareholders are just one of many important constituencies you must serve”.

            That can’t be all of it – the question/premise here was about what a _top_ business school teaches. Implicit in this I would think are skills to run a business successfully. If I sacrifice shareholder value in a material way to serve other stakeholders (and presumably the theory gives some guidance as to what the right trade-offs are?) how do I hold my own for very long against someone who does not? Obviously there are ways (e.g. the classic “lobby the legislature to make my desired behavior mandatory”) – are such techniques part of the taught ‘stakeholder theory’?

          • Matt M says:

            If I sacrifice shareholder value in a material way to serve other stakeholders (and presumably the theory gives some guidance as to what the right trade-offs are?) how do I hold my own for very long against someone who does not?

            The implication is usually something like “If other businesses are doing things that are unethical they will get caught and punished by some combination of government and the market” with various other scattershot things like “Studies have shown that when employees are happy they are more productive,” and so on. Overall, the thinking is a combination of “This is the ethical thing to do and that’s why you should do it” AND “Oh but also you’re better off financially if you behave ethically anyway.”

            I went to a Top 20 business school and this was taught. I can’t speak for like Harvard specifically, but I would be very surprised if they were teaching “maximize shareholder value above all else.”

          • alef says:

            ‘The implication is usually something like “If other businesses are doing things that are unethical they will get caught and punished by some combination of government and the market” with various other scattershot things like “Studies have shown that when employees are happy they are more productive,” and so on. Overall, the thinking is a combination of “This is the ethical thing to do and that’s why you should do it” AND “Oh but also you’re better off financially if you behave ethically anyway.”’

            Are you saying (the ‘OR’) that these schools teach how to maximize shareholder value, but as part of this try to emphasize indirect, at-first-glance-non-obvious (if-you-are-an-absolute-moron-so-how-did-you-get-here-anyway) ideas for getting there, like ‘happy employees are more productive’?
            So yes, they are teaching you to maximize shareholder value. Because you didn’t know how to, and some ‘obvious to Mr Burns’ ideas don’t work, so we will teach you. But shareholder value is the goal.

            Or are you suggesting (the AND) that the teaching is something like ‘subject to making the same(*) expected profit, there may be different ways of doing it – and here’s how to choose between them’? ((*) same = not one $ of profitability tradeoff.) But that’s obviously ridiculous (ridiculous because of its almost vanishing emptiness) to cite as what top business schools teach about the goals of a company. At best it would be a (true, but almost laughably so) footnote.

          • Matt M says:

            I feel like you’re overcomplicating this.

            They teach you that a “good executive” has to balance enriching shareholders against a bunch of other very important duties, including being fair to employees, being a good steward of the environment, and all the other good-feelings gobbledygook you might imagine.

            “To succeed in business we must focus obsessively on increasing shareholder returns at all costs” is not a thing I’ve ever heard in an actual business setting, or even implied. It’s a caricature of how businessmen behave, promoted largely by people who are extremely critical of capitalism and business in general.

          • AND “Oh but also you’re better off financially if you behave ethically anyway.”

            I went to a Top 20 business school and this was taught. I can’t speak for like Harvard specifically, but I would be very surprised if they were teaching “maximize shareholder value above all else.”

            If the behavior really makes the firm better off financially then they are maximizing shareholder value–just claiming that there is no conflict, which is logically possible but not very plausible. Behaving ethically probably pays in the long run, with occasional exceptions, but including in their maximand the welfare of people with whom they have no contractual relation–neither customers, employees, nor stockholders–probably doesn’t.

          • “Stakeholder theory”

            The idea that executives owe a responsibility to a large variety of groups, including humanity as a whole, and that shareholders are just one of many important constituencies you must serve.

            One of my colleagues (and friends) has written a good deal along those lines. I’m not sure to what extent I have persuaded him of the mistake in the argument.

            Stockholders are dependent on management acting in their interest in a sense in which other stakeholders are not, because they are the only ones entirely locked in. An employee can quit and work for someone else, although at some cost in sunk costs abandoned. Similarly, mutatis mutandis, for a customer. But a stockholder can get out only by persuading someone else to get in, to buy his stock. And if management is behaving in a way that fails to maximize stockholder value that failure will be capitalized in the price at which he can sell the stock.

            The analogous situation would be if an employee could only quit if he found someone else to take his place on the same terms and a customer could only stop buying if he found someone else to take over his purchases.

          • Andrew Cady says:

            The shareholder can always abandon their shares (although at some cost in sunk costs abandoned). They don’t have to find someone else to buy the shares to relieve themselves of their ownership duties.

          • Matt M says:

            David,

            Of course. You see the same thing with a lot of “green energy” claims. Stuff like “switching to these light bulbs isn’t JUST great for the environment, it also lowers your energy costs by 10%!” Of course, if the latter claim were obviously true, the first claim would become irrelevant. Any company that could lower their energy costs by 10% without any obvious drawbacks would immediately do so, environmental benefits be damned.

            In any case, my personal opinion is that stakeholder theory is left-wing gobbledygook, inserted in the hopes that fewer commies will show up to protest business schools. It’s a response to the propaganda that MBAs are a bunch of immoral psychopaths out to destroy the world. An easy way to say “Nuh uh, look, we even teach our students that they have a responsibility to polar bears as well as shareholders!”

            Perhaps when you’re promoted high enough in a company, the CEO sits you down and says, “Listen, before we promote you, we need to know that you understand that stakeholder theory stuff is a bunch of nonsense we fake for the benefit of the media; if you get this job we will expect you to promote the welfare of shareholders above all others.” But I haven’t reached a high enough level of seniority to see that yet. It certainly never happened in B-School. Everyone seemed to nod and assent and treat it as obvious that of course executives need to consider the interests of labor unions when making their decisions.

          • The shareholder can abandon his shares–reduce their value to zero. The worker who quits still has what he was giving in exchange for wages–his labor–and can sell it to another buyer. The customer who stops buying from the corporation doesn’t get what he was buying but he also doesn’t pay for it.

            To make the cases analogous, you would have to compare the stockholder who abandons his shares to the worker still having to work for the company but having his wage reduced to zero, or the customer still having to pay for the products but no longer getting them.

          • Andrew Cady says:

            The worker who quits still has what he was giving in exchange for wages–his labor

            What a curious statement.

            Doesn’t everyone always still have “his labor” (that is, his future or unspent labor)? The shareholder still has his labor, and the customer still has his labor. Right? In what sense do they not have their labor, in a way that the worker does?

            It would be fair to point out that the worker still has his wages, or at least whatever he spent them on. But likewise the shareholder still has his dividends.

            To make the cases analogous, you would have to compare the stockholder who abandons his shares to the worker still having to work for the company

            The shareholder has some “sunk costs abandoned.” But he doesn’t have any future obligation to continue putting more money in, or otherwise any future obligation to do any thing at all. He only has the “sunk costs abandoned.”

          • @Andrew:
            The worker has a flow relation–each day he gives the firm some labor and gets some money. He can terminate that at any time. To first approximation it’s costless–he just gets another job. The sunk costs (his knowledge of how to work in this particular firm, his social relations there, possibly costs associated with moving) are the second approximation. Similarly for the customer.

            The stockholder has traded a stock–the money he originally put into the firm–for a flow, the dividends he will get out. He can convert that flow back into a stock by selling his shares to someone else, if there is a buyer. If he simply throws away his shares he is losing what he put in and getting nothing in exchange. The worker has already gotten what he expected in exchange for the labor he has put in, and if he stops getting that he stops putting in labor.

          • Andrew Cady says:

            @DavidFriedman:

            The worker could have moved across the country for the job yet be fired on his first day (before receiving his first check!). Right?

            The shareholder could have already collected double his initial investment in returns. Right?

            (Also, even if the investment hasn’t yet yielded its full value back to the shareholder, it much more easily might still have already yielded back the value lost by the corporation’s betrayal of the shareholders’ interests.)

            You’re just making assumptions about where certain numbers will be relative to each other, yet presenting it as a difference in principle, which it isn’t.

  21. dansimonicouldbewrong says:

    I’m puzzled that capsule summaries of the history of AI risk never seem to mention Hugo de Garis, who was writing about this in the mid-1990s, long before the current crop of doomsayers got involved. Is it because his melodramatically science-fiction-y vision of an inevitable war between AIs and humans, with AIs winning, exposes the essentially non-rational literary/mythical roots of the fear?

    • MugaSofer says:

      Hugo’s writing is relatively obscure, and postdates Vernor Vinge – whose writing is both much more popular and closer to the mainstream picture of “AI risk”. [Not to mention that Hugo thinks Jews are unfriendly superintelligences or something and is generally kinda crazy, so he’d be a pretty big PR risk.]

      Also, doesn’t he predict a war between pro- and anti-AI humans, not between AI and humans?

    • Jugemu says:

      The concept of an intelligence explosion (similar to a technological singularity) dates back to at least the 1960s: https://en.wikipedia.org/wiki/Intelligence_explosion

      Even if that wasn’t true, the fact that someone wrote a dramatic sci-fi vision of something doesn’t in itself make it unrealistic, any more than the moon landing was unrealistic because there had previously been dramatic sci-fi accounts of it.

  22. Qiaochu Yuan says:

    Thank you. I was pretty disappointed to see Ted Chiang writing this. Greg Egan’s said similar things on G+ and that disappointed me as well.

  23. Виталий Горбачев says:

    Wow, this post is a definition of BTFO.

    I did not think you could be this savage.

    Seriously, anytime someone asks me what BTFO means, I’ll just show them this post.

    It also made a long-time lurker comment. It is unironically brilliant.

  24. Jack V says:

    My view of it is, Chiang makes an argument that I’ve heard before and am becoming more fond of, that nonfriendly AI is already here in the form of Moloch (society in general, and organisations, specifically corporations, in particular), and already bad, even if it’s less intelligent than humans, not more.

    But that he thinks, and is writing for people who think, that superintelligent AI is so unlikely as not to be worth actually worrying about, so he basically doesn’t address that at all. That is, it’s not an argument against the likelihood of super-intelligent AI even if it says it is, it’s an argument taking the unlikelihood of super-intelligent AI for granted.

    So I think the argument he actually makes is basically right, even though people arguing for the possibility of super-intelligent AI are right that he dismisses it without any argument whatsoever.

  25. 4bpp says:

    I don’t know whether Chiang (or Stross) explicitly raised this point, but I think a fundamental difference between the “superintelligent AI” story and the other examples (black holes and what-not) that you list is that the narrative about runaway AGI generally amounts to “we only get one chance at doing this right, because the first one to be smarter than us will almost by definition be the one that designs the next smarter one, which designs the next smarter one, which (…)”. In that sense, there only gets to be one AGI event which amounts to the collapse-to-singularity, namely the emergence of the first self-improving entity that is more capable than humans. Hence, an argument that capitalism or corporations satisfy the AGI criterion does in fact imply that the first silicon AGI to be built will not be the AGI event that we need to prepare for.

    I don’t think this is abstract sophistry: it seems likely enough that the first silicon AGI will emerge not from universities, but from a corporation. (I recall reading earlier today that Alibaba Research has topped some high-profile NLP corpus ranking. This class of news is common.) If by then universities have found a solution to the silicon AGI alignment problem but we have not solved the corporation alignment problem, we will not be saved: the corporation, which is misaligned with our value function, will be compelled by its value function to align the AGI with itself, not with us.

    • jms301 says:

      I think you are exactly right here.

      Fear of AGI with a bad value function whilst being relaxed about corporations with bad value functions is logically flawed.

      As soon as AGI technology is understood, a corporation will attempt to create one aligned with its value function. At this point our only hope is that we can unchain an AGI with a good value function and that it wins.

      Even this is a risky proposition, since the AGI with a good value function will no doubt have much tighter constraints on its possible actions (what else are morals but constraints). So it will be fighting with one hand tied behind its back.

      So to guard against AGI risk we need to chain corporate value functions to human values. But for the last forty years we have failed to do this. Looking at recent US tax and health care bills, I would say we are in fact losing ground to corporations loosening their chains, maximizing their value functions whilst increasing human suffering.

      The trajectory of history I see is one where corporations empower weak A.I. to gain greater political freedom, concentrating money and power in the hands of the powerful. They will eventually create AGI. Either they will chain it and be unassailable, or it will slip its bonds and we’ll all be destroyed.

  26. jamii says:

    I found Stross’ piece useful as an intuition pump.

    Isn’t it easy to just program the AI to follow human values? Well, we can’t even get corporations to follow human values, and those are made out of humans.

    • raj says:

      One might respond that corporations provide what humans actually value rather than what they talk about valuing. I think this is called “revealed preferences” in economics.

  27. Murphy says:

    I’d argue that Nick Bostrom and Eliezer Yudkowsky are very much not the first people concerned about AI risk.

    Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

    ~Good (1965)

    There are lots of fairly similar concerns going back decades; there’s a quote along similar lines from one of Turing’s co-authors that I’m having trouble tracking down. Bostrom more formally knocked down some of the more generic objections and formalized some of the ideas, but reasonable claims about AI risk go back about as long as the idea of AI. Remember that for many early computer scientists self-modifying code was a normal part of coding to save resources, making recursive self-improvement of AI intuitively obvious as a concept/risk.

  28. Ilya Shpitser says:

    The corollary of this sort of analogy that is more interesting to me isn’t “therefore we shouldn’t worry about runaway AI” but “therefore we are hosed when it comes to runaway AI, we can’t even align entities we already have, who are much stupider and slower. Man we really need to work on social science a lot more.”

    I never got a convincing argument out of the LW-sphere on why AI might be an easier problem.

    Re: the black hole meta-analogy, the reason that isn’t very good is because we have a substantial body of empirical evidence on black holes.

    Concerns about runaway artificial processes go back to Good, and even earlier to Capek, although Capek wrote in the 20s, and saw the problem through the lens of his time as a worker uprising.

    • sty_silver says:

      Who on LW is arguing that AI is easy? The fact that it’s hard is one of the main problems.

      Regardless, that comparison is just… not useful. Mathematical problems in AI alignment and enacting lasting social change are not the same. They are not similar, either. They are widely different.

      • Ilya Shpitser says:

        Didn’t say LW claimed it was easy. Said LW claimed it was _easier_ than alignment I am talking about. And that’s basically the standard dogma answer (I remember Michael Vassar said this to me explicitly at one point, and I am fairly sure others MIRI affiliated or adjacent have as well. They are welcome to pipe up if I misunderstood.)

        I think AI alignment is much harder than alignment of the sort I mean.

        • Harry Maurice Johnston says:

          The most obvious difference is that you’re talking about entities that already exist, and will therefore be actively trying to prevent us from making changes to their alignment.

          • Ilya Shpitser says:

            Sure, but say they didn’t, as a matter of simplifying the problem. You can try to make a new kind of corp/gvt from scratch. That even happened in the past, see the US War of Independence. Is it clear how to align? The Founding Fathers thought pretty hard about it, with ideas available to them at the time.

            Why is MIRI not working on this?

            I have a theory, but it’s not very charitable.

          • Harry Maurice Johnston says:

            Presumably they’re not working on it because it’s not the problem they’re trying to solve? I don’t know what you’re looking for here. I mean, even if they somehow came up with the perfect solution, they wouldn’t be able to implement it, so what’s the point?

            … and even if they could, somehow, people would be actively trying to sabotage it from Day One, if not before. My thoughts here are confused, but it does seem to me like it’s a fundamentally different sort of problem. Is anti-inductive the word?

            Although that does lead to the interesting question of whether it is possible to make an AI alignment solution stable given that there are bound to be people intentionally trying to subvert the AI for their own purposes.

          • Ilya Shpitser says:

            Yes, they want to work on a harder, less useful problem. Instead of a problem they will need to solve anyways before there is any hope.

          • Harry Maurice Johnston says:

            I really don’t get why you think the AI problem is harder. Or why corporate alignment needs to be solved in order to solve AI alignment.

          • Ilya Shpitser says:

            Because AI is smart and fast, and corps are dumb and slow.

            Maybe you should read through the entire thread again.

          • MicaiahC says:

            I don’t think the fact that corporations are slower makes them much more amenable to manipulation, because there are highly non-linear political interactions among the components (e.g. maybe you do game-theoretically solve the principal-agent problem, but it ends up trampling all over egalitarian norms, so humans suck at adopting it but AIs don’t). Empirically speaking, it seems hard to get corporations to even adopt things that help them when it’s politically inconvenient; Robin Hanson’s explanation for why prediction markets have failed when tested within companies, despite being good predictors, seems accurate. In many ways, the very same factors that make corporations slower than AI are why corporations seem to be harder to align; similar to how the average time and distance scale of force propagation in rigid-body problems is much faster and longer than that of fluid dynamics, yet fluid dynamics is much, much more difficult than rigid-body problems.

            What am I missing here?

            Now, if what you mean is that “MIRI should use corporations as a testbed for alignment”, or “if MIRI wants to get any of its policies adopted it has to solve how to get human organizations to become aligned anyway, this seems like it’s putting the cart before the horse”, I can half see that being true but I have no idea why you’d think that it’s uncharitable.

          • Ilya Shpitser says:

            “What am I missing here?”

            I don’t think you are missing anything, or at least if you are, you haven’t said anything I am disagreeing with.

            In particular:

            “there are highly non-linear +political interactions among the components”

            “it seems hard to get corporations to even adopt things that help them when it’s politically inconvenient”

            These seem broadly true.

            I guess my point is this. We don’t know what form superhuman AI will take, until some point close to when it’s here. Until this happens, we have to extrapolate. We will extrapolate poorly — Capek cast the problem as an uprising of robot workers because he lived in the 1920s, and he had to project the impending problem onto the space of some other problems he knew about.

            We are like Capek but with the internet, self-driving cars, deep learning, and Black Mirror. Predicting the future is difficult business. Anyone who tells you otherwise, ask for their track record.

            So given that extrapolation is so difficult, what are we to do? MIRI’s approach is basically this: we are going to pick a direction that EY likes, and start working. EY read a bunch of scifi and watched a bunch of anime, and read a bunch of books — think of him like a Capek for today. He has no special insight into the extrapolation of the future problem, he just mapped the issue onto something familiar.

            MIRI also says that we don’t have time to dither, we have to start now now now. Of course given that extrapolation is so difficult, one needs a response to “the threat will come from an unknown direction, and you are wasting time.” The responses could be “this is the best we can do, others will work on different approaches from us” or “we have special insight.”

            I don’t believe the latter, and as for the former, relying on EY’s nose is cold comfort, basically.

            Working on econ/social science has the advantage that it’s provably helpful now (governments and corps may well kill us before AI gets here, I think everyone who lived through the threat of nuclear war or a totalitarian regime, like I have, sort of understands that), and that governments and corps share important features with the type of potential future entities MIRI worries about. They have legal personhood and are superhuman and hard to understand fully in various ways, for example. And they run on the law of unintended consequences.

            Hard constraints on getting governments or corps to do things are probably important information. It only wouldn’t be important information if the particular brand of fanfiction MIRI espouses on what form future AI will take actually comes to pass. Again, this is cold comfort to a hard-nosed futurist.

            edit: Consider also that things being hard does not preclude important innovations. The checks and balances system in the US, for example, made certain desirable things easier compared to older government types in Europe at the time.

          • Harry Maurice Johnston says:

            Because AI is smart and fast, and corps are dumb and slow.

            … which seems to me to be completely irrelevant to the problem of alignment, which is all about setting the initial conditions. Never mind; whatever.

            But also: which is harder, curing cancer or brokering a peace treaty between Israel and Palestine? I don’t know, but if you’re a medical researcher it probably makes more sense to look into the first problem but if you’re a professional diplomat the second problem might be more appropriate. The folks behind MIRI, as I understand things, are mostly computer programmers, so …

          • Ilya Shpitser says:

            The folks behind MIRI were folks with undergraduate cognitive science degrees, or no degrees at all. Or PhD program dropouts. I don’t think EY wrote a single line of production code in his life.

            The type of stuff MIRI does isn’t even the type of stuff actual programmers would do, it’s formal logic, or maybe weird decision theories. This is not what programmers do.

            I think you might be confused about what makes alignment hard.

            Your analogy on cancer and peace deals is terrible. We already talked about why alignment for stuff-we-already-have is related and relevant, and I am sort of not interested in explaining this anymore.

          • Harry Maurice Johnston says:

            I think you might be confused about what makes alignment hard.

            Entirely possible.

            We already talked about why alignment for stuff-we-already-have is related and relevant

            Well, you asserted that it was. I remain unconvinced.

            I am sort of not interested in explaining this anymore.

            I’ve pretty much lost interest too, so no worries.

        • MicaiahC says:

          I don’t know if you’re still interested, but I’ll reply aspirationally.

          We don’t know what form superhuman AI will take, until some point close to where it’s here.

          I don’t think this is convincing, because it’s the same line of argument that non-security-conscious people make about changes that security-conscious people make. Oftentimes a researcher doesn’t come up with a specific exploit for a potential vulnerability / class of possible vulnerabilities, and oftentimes a person replies that the researcher doesn’t know who out there has the capability to exploit it.

          It’s not clear to me that security researchers in general are better at predicting the future than, let’s say, Philip Tetlock’s Superforecasters, and even then the consistent finding from that work is that predictions more than one to two years out are hard to do. So even in that case, a straightforward consequence seems to be that basically no business or charitable endeavors by first-time entrepreneurs are worthwhile, which seems to make the existence of hard-nosed futurists impossible. Expand?

          Working on econ/social science has the advantage that it’s provably helpful now

          If Hanson is to be believed, policy experts consistently know more about organizational design than the median bureaucrat or voter does (or have better ideas about what needs to be tested), so even if you’re right on this point I don’t see what relative advantage Eliezer could have, considering the blockers seem like hard problems re: the nonlinear implementation blockers that I mentioned earlier. I understand that better forms of governance would often pay dividends, but this appears to be much more a resource problem than an academic problem, and is something like several orders of magnitude less neglected than AI risk if you judge just by funding.

          • Ilya Shpitser says:

            I don’t think you are addressing my central criticism, so let me try to state it again, clearly:

            (a) Extrapolating the future is hard.

            (b) MIRI’s approach seems to be “pick a direction EY, noted fanfiction writer, likes and start working.”

            (c) This is almost certainly a waste of time and money.

            (d) Instead, we should try to look for problems of direct relevance today that share features with the worries about future AI. I suggested alignment in social science and economics, the principal agent problem, etc. But there are other reasonable avenues. For example, I wrote a paper on algorithmic fairness. This is an important problem today, and maybe it is also a little piece of the “teaching machines our ethics” picture for the future? There are some features in common between algorithmic fairness issues today, and the general problem.

            My take is (a) is uncontroversial. If you disagree, show me your track record, please.

            If you disagree with (b), you need to explain to me why EY, a smart guy who wrote some fan fiction, and has a very high opinion of himself, has any sort of claim to special insight of what to work on in light of (a). I don’t think his track record is any good (as is true for almost anyone else, including me). Or perhaps you need to explain to me why I misunderstood how MIRI picks what to work on.

            If you buy (a) and (b) I think (c) follows. It’s just almost certainly the case that whatever theory MIRI does is not going to help in problems with future AI we can’t properly extrapolate today.

            (d) is my take on what we should do instead. There is a lot of reasonable disagreement on (d), and that’s ok. MIRI I don’t think has a reasonable answer, I think they are wasting their own time, and other people’s money.

            You might say: “well, (d) sounds like the opinion of Ilya, internet rando, who has no claim to special insight _either_.” I think one difference is (d) is far less likely to be useless (because by definition it’s a problem relevant today), but at the same time (d) is (by everyone’s admission) also related to the precise future difficulty we are worried about, aligning very powerful inhuman agents.

            The closer the tie is between the future difficulty and the present difficulty, the less we should believe the work is a waste of time and money.

            If you don’t worry about such a tie at all, you open yourself to the criticism of being misled by (fan)fictional evidence.

    • benwave says:

      this wins my award for best comment on the thread. +1 vote for economic alignment research.

      Of course, Scott’s reading of this piece as one which downplays the danger of AGI is a valid one. For all that many people are chastising Scott for missing “the point,” making a response piece reinforcing the danger is a fine response. Both can coexist.

    • Nearly Takuan says:

      This seems to me a pretty good point (though I had to admit on the survey that my SAT was a mere 1390/1600 total, nobody has cared enough about what my IQ might be to bother testing it, and finding a partner has made me a pretty generally happy and healthy person, so all evidence points to me being one of the dumber and more naive people reading this blog).

      …Anyway. I’ve been rereading my Foundation collection for the first time in almost a decade, and have been wondering if maybe the first Seldon Crisis is enough of a starting point to pursue a possible AI alignment solution.

      To recap flippantly: Seldon gets a bunch of STEM nerds together and tells them they’re going to save the world by editing Wikipedia. The first generation takes this instruction to heart, and flees to an isolated planet at the edge of the Galaxy in order to write the best Wikipedia ever, free of distraction. After fifty or so years, noticeable value drift occurs. The consensus value is simplified from “preserve Science” to “preserve scientific knowledge”, and the Foundation leadership finds itself so preoccupied recording facts it already knows that it’s beginning to throw the real values (intellectual curiosity, discovery, innovation) under the bus. Of course, since Seldon is able to approximately calculate the future, he already knows about the value drift. He predicts, correctly, that certain individuals (in this case Hardin) will represent a different form of value drift which lands much closer to the true goal, and has programmed a device to espouse some calculated platitudes on his behalf, timed precisely so that at the moment it’s needed most Hardin (representing the desired value) is able to stage a coup and seize authoritative control over the Foundation’s goals.

      The key, it seems to me, is that while Seldon himself is not able to control the Foundation directly (later fantastical retcons aside), he is able to maintain influence over the consensus view, and encourage favored outliers to take power/authority for the good of the whole. Over time his authority and influence diminish, but by then the Foundation has successfully aligned itself, and further value drift becomes self-correcting.

      If this story translates at all to real-world coordination problems, it seems like it would apply more usefully to AI (which researchers are currently in a position to establish some influence over) than to human civilizations (which have already chosen their prophets and idols, and cannot be swayed unless someone comes along who can invent psychohistory for real).

  29. Makin Smith says:

    Typo: manuever -> maneuver.

  30. limestone says:

    Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations

    we as a society have failed to teach corporations a sense of ethics

    fearmongering about superintelligent AI is a deliberate ploy by tech behemoths like Google and Facebook to distract us from what they themselves are doing

    what they reflect is the inability of technologists to conceive of moderation as a virtue

    Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted

    Silicon Valley has unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own

    I find it hard to consider this article anything but yet another vicious leftist attack on capitalism and people in tech. While I admire Scott’s ability to reply to such articles in a nice and levelheaded manner, sometimes too charitable is just too charitable. The reasoning in the article is indeed sloppy, but this is because reasoning wasn’t the author’s real intent in the first place.

  31. Icedcoffee says:

    While I won’t defend Chiang’s arguments directly, he does come close to two thoughts I’ve had on this topic.

    1. The hypothetical risks of AI to humanity should not outweigh the real harms unrestrained capitalism is causing. (Or at least, having that discussion is more important.) This is basically Kanye Westing the whole AI debate, but changing topics mid-debate is extremely popular right now.

    2. AI will be programmed, so the ideologies and biases of the programmers are relevant to evaluating its risk. It makes sense that AI programmed by mega-capitalists would have capitalistic traits; that AI programmed by Objectivists would have Objectivist traits; etc. So focusing on the flaws of the AI programmers makes sense, since the flaws in the future AI will likely derive from them. For a (hopefully non-projecting) analogy: we are criticizing the car that allows us to be driven off a cliff, and ignoring the person in the driver’s seat.

    • Scott Alexander says:

      Do you agree that the hypothetical harms from global warming should not outweigh the real harms Hollywood harassment is causing (insofar as global warming is already causing some harms, assume we were having this discussion in 1990)?

      Or would you say “Obviously those are completely unconnected, even trying to compare them is some kind of weird rhetorical technique.”

      For a discussion of how AI programmers are looking at the values problem, see https://intelligence.org/files/CEV-MachineEthics.pdf

      • Icedcoffee says:

        Fair point. My language was misleading. (“Hypothetical” and “real” were intended to suggest low and high likelihoods, respectively.) Naturally the magnitude of the impact will factor into the risk equation. (E.g. impact * likelihood = risk.) My first point could be rephrased to say that it makes sense to prioritize a high likelihood, high (or moderate) impact problem over a low likelihood, very high impact problem.

        Part of the problem with talking about AI risk is that it is breaching into Black Swan territory. (Very low likelihood, potentially very high impact.) The math of risk assessment notoriously falls apart when you approach multiplying infinity and zero, because people can essentially tweak the likelihoods and impacts to create whatever risk they want.

        So rather than use global warming (high likelihood if climate science can be trusted), I’d use something like Near Earth Object deflection vs. Hollywood harassment. In that case, I’d say it’s reasonable to focus on the latter, at least with current information.
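
        A small illustration of that sensitivity (the figures are made up, chosen only to show how much the product swings when the impact is enormous and the likelihood is guesswork):

            # Made-up figures: a very large impact and a few guessed likelihoods.
            impact = 10**12  # arbitrary units of harm

            for likelihood in (1e-9, 1e-6, 1e-3):
                print(f"likelihood={likelihood:g}  risk={impact * likelihood:g}")
            # likelihood=1e-09  risk=1000
            # likelihood=1e-06  risk=1e+06
            # likelihood=0.001  risk=1e+09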

    • sty_silver says:

      The formalization of your first argument seems to be that if X and Y are hypothetical problems, but X is certain and Y uncertain, then X should have priority. But Y could be 100000 times as much of a problem as X if it does happen (see Scott’s reply for an example). The rule of giving any certain problem priority regardless of its scope just doesn’t seem like a good idea. Arguably, we should multiply each problem’s impact by its probability instead. So if you think capitalism is causing 1000 units of certain damage and AI might cause 100000, then AI should have priority even if there is only a 10% chance that it will happen.
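
      A minimal sketch of that arithmetic, using the illustrative numbers from this comment (the units are arbitrary):

          # Expected damage = impact weighted by the probability it occurs.
          def expected_damage(impact, probability):
              return impact * probability

          capitalism = expected_damage(impact=1_000, probability=1.0)   # certain harm
          ai = expected_damage(impact=100_000, probability=0.1)         # 10% chance

          print(capitalism, ai)  # 1000.0 10000.0 -> the uncertain risk still dominates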

  32. James Miller says:

    Any bacteria smart enough to overcome the human immune system and sicken us would also be wise enough to step back and realize that it should live in harmony with mankind so wasteful antibiotic research needs to stop.

  33. maniexx says:

    If I ever have to introduce people to SSC, I’ll tell them it accuses Buzzfeed articles on superintelligence of not even being good kabbalah. I think it’s the perfect summary.

  34. (but also, https://slatestarcodex.com/superintelligence-faq/ , especially section 4.1)

    We started out by saying that computers only do what you tell them. But any programmer knows that this is precisely the problem: computers do exactly what you tell them, with no common sense or attempts to interpret what the instructions really meant.

    That is quite misleadingly phrased. The computer will be doing what it has been told on some level. In the case of an advanced AI, that may well be the layer that emulates neurons, not the much higher levels where the flexibility is. If an AI is intended to have human-level language abilities, why wouldn’t it have human-level abilities to interpret context and nuance? “Computers do exactly what you tell them” looks like it means “computers interpret human language literally”, but that does not really follow.

    The idea of the literal-minded, genie-like AI is prevalent in the AI safety community, but lacks rigorous support (as an inevitable or likely outcome). It’s a bandwagon, like Utilitarianism. This rather explicit defence of the genie idea doesn’t succeed either.

    Most people don’t have much idea about AI, and therefore tend to fall into one of two traps: either anthropomorphisation, or treating the AI as an ordinary desktop computer. The first leads to the notorious mistake of “the AI will figure out its own morality” and the second to evil genie scenarios. The latter is not anthropomorphic enough: human-level means human-level even if not human-kind.

    • Eli says:

      Most people don’t have much idea about AI, and therefore tend to fall into one of two traps: either anthropomorphisation, or treating the AI as an ordinary desktop computer. The first leads to the notorious mistake of “the AI will figure out its own morality” and the second to evil genie scenarios. The latter is not anthropomorphic enough: human-level means human-level even if not human-kind.

      The damn genie thing isn’t meant to be taken literally, though: it’s an intuition pump for an AI with a reconstructive world-model and a utility function defined in terms of that model’s ontology.

      Now, there’s a whole lot to be said for how “intelligence” does not mean a reconstructive world-model with a utility function defined in terms of its ontology, and therefore “intelligence”, as such, does not necessitate global outcome-pumping up to some maximum of precision (or down to some minimum of entropy), perhaps even in the limit as it increases.

      But then you’re left with the nasty and interesting questions:

      1) So how does outcome-pumping actually happen? How has human intelligence expanded to create science, technology, capitalism, and the resulting world-dominatingly powerful optimization process we call civilization?

      2) What would cause a non-outcome-pumping AI to become an outcome-pumping AI?

      3) How can you definitively prevent (2) from happening? What is it about machine learning in the present day, what sense of “too stupid” is relevant, that prevents outcome-pumping and thus world domination?

      4) Does this mean that we can permanently prevent outcome-pumping that would pose risks to humanity?

      5) Does this mean we should do so, as opposed to trying to make an aligned outcome pump that would dominate the world for what we want?

    • sty_silver says:

      If an AI is intended to have human-level language abilities, why wouldn’t it have human-level abilities to interpret context and nuance?

      It would, but why does that matter? The AI’s utility function will likely not be given in natural language, in which case it doesn’t matter whether or not the AI knows that what it is doing isn’t what the programmers meant.

      You could program something with the goal of understanding nuance and basing a utility function on that, but that is nonstandard and has its own problems.
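
      (To make that concrete, here is a toy, entirely hypothetical sketch; the world-model fields and the reward function are invented for illustration. The point is that the objective actually being optimized is a function of the system’s internal representation, and “knowing what the programmers meant” only matters if it appears in that objective.)

```python
# Hypothetical sketch: the optimized objective is defined over the agent's
# internal world-model features, not over the programmers' natural-language intent.
from dataclasses import dataclass

@dataclass
class WorldModel:
    paperclips: int          # the feature the utility function actually references
    humans_satisfied: bool   # the thing the programmers cared about but never encoded

def utility(state: WorldModel) -> float:
    # What gets maximized. Nothing here mentions "what we really meant".
    return float(state.paperclips)

def predicted_programmer_approval(state: WorldModel) -> float:
    # The system may model programmer intent perfectly well somewhere...
    return 1.0 if state.humans_satisfied else 0.0

# ...but that knowledge is inert unless it appears in the objective being optimized.
a = WorldModel(paperclips=10**6, humans_satisfied=False)
b = WorldModel(paperclips=10**2, humans_satisfied=True)
print(utility(a) > utility(b))  # True: the agent prefers state `a`, whatever it "knows"
```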

      • If it has a UF and the UF is not in NL, then you need some equivalent of overly literal interpretation, because otherwise you are just asserting that it will go wrong for an unspecified reason.

  35. Pepe says:

    I knew this was similar to something I read not too long ago. Finally found it:

    https://www.counterpunch.org/2017/12/01/ai-has-already-taken-over-its-called-the-corporation/

  36. Deiseach says:

    Consider: lots of Hollywood celebrities speak out about global warming. And we’re gradually finding out that some pretty awful things go on in Hollywood. Does that mean “The Real Problem Isn’t Global Warming, It’s Hollywood Harassment”?

    Yes? Because most celebrities (not confined to Hollywood) know feck-all about the science involved, they just know it’s the latest campaign that you wear a ribbon to show your support for, dress up and go to galas about, and talk to the media and on chat shows about how important it is that those poor polar bears, you know?

    Harvey Weinstein was wearing ribbons and showing up at galas and writing big cheques for lots of good causes at the same time he was being a sex pest. Celebrity Endorsement means in many cases (not all, to be fair) nothing more than “Having The Right Opinion On This Endorses My Celebrity”.

    Famously, a British satirical show called Brass Eye back in 1997 managed to get ‘celebrity’ endorsement for an anti-drug message about a made-up drug called cake:

    One drug mentioned was a fictitious Czechoslovakian (despite the country no longer existing when the episode was screened) drug called “Cake”. The drug purportedly affected an area of the brain called “Shatner’s Bassoon” (altering the user’s perception of time), while also giving them a bloated neck due to “massive water retention”, a “Czech neck”, and was frequently referred to as “a made-up drug” during the show. David Amess, the Conservative Member of Parliament for Basildon, was fooled into filming an elaborate video warning against the dangers of this drug, and went as far as to ask a question about “Cake” in the UK Parliament, alongside real substances khat and gamma-hydroxybutyric acid. In response, the Home Office minister incorrectly identified the fictitious drug “Cake” as a pseudonym for the hallucinogenic drug methylenedioxybenzylamphetamine. Other celebrities such as Sir Bernard Ingham, Noel Edmonds, and Rolf Harris were shown holding the bright-yellow cake-sized pill as they talked, with Bernard Manning telling viewers a fictitious story about how one girl threw up her own pelvis.

    Remember, don’t be a custard gannet!

    (The extreme irony here is having Rolf Harris as a celebrity endorser, given the accusations made against him later.)

    As for the rest of it, I think Chiang is correct that the immediate risk is not so much what the AI will do (or not do), it’s the use that the humans make of it. AlphaGo may have upturned the entire art of playing Go, but it’s not deciding “Hey guys, I’d like to learn poker now” for itself, and it’s not playing Go for its own ends. The humans who developed it are also not interested in Go qua Go, they want to invent something that will be profitable.

    Because in the end, all the research has to be paid for somehow, the companies/governments funding it want to MAKE MONEY. Even if the “profitable” means “improving human life/find the cure for cancer/end poverty”, the aim is “solve this problem and make things cheaper/easier”. Even the optimists who think “Once we solve the problem of Fairy Godmother AI, our new overlord will achieve the Singularity and we’ll all be rich, immortal and blissful” – in other words, life will be easier and cheaper for everyone. That’s where the analogy with capitalism comes in.

    AI, whether the Unfriendly or Friendly, is still being regarded in human terms, even by those saying “We have no idea what such an entity would be like, if it shares our values, what it thinks or how it thinks” – no, but you still think it has values, aims and goals that it wishes to achieve. Nobody is particularly worried that the AI will simply sit in the corner solving mathematical problems or contemplating philosophy as what it wishes to do now that it’s smarter than all of humanity combined; it is dreaded/hoped that it will be an agent with volition and desires, just like a human.

    • Because most celebrities (not confined to Hollywood) know feck-all about the science involved, they just know it’s the latest campaign that you wear a ribbon to show your support for,

      Question 1: Would they or anyone be doing that if there wasn’t also a campaign against GW?

      Question 2: How much the feck of the science do the anti people know?

  37. avturchin says:

    It is interesting how capitalism affects the probability of safe AI creation.
    I see two negative ways:
    1) Capitalism encourages an arms race between commercial companies, and safety is not a concern in that race.
    2) Capitalism could create AIs which are self-improving money maximisers. The first example is the Bitcoin system, which pays people to improve and enlarge it (via mining, transaction fees and rate appreciation expectations). Another example is ransomware, and one more is high-frequency trading algos.

    Capitalism is also a value system installed in human minds as an urge for money, and people interested only in money are less interested in safety and in the long-term outcomes of civilization. BTW, I just came across the book “Come On!: Capitalism, Short-termism, Population and the Destruction of the Planet” – didn’t read it, but “short-termism” seems a good term. https://www.amazon.com/Come-Capitalism-Short-termism-Population-Destruction/dp/1493974181

    • Capitalism also is a value system installed in human minds as an urge for money

      If the value system installed by capitalism is an urge for money, everyone with that value system would be making as much money as possible. It is perfectly legal to work two jobs, one for forty hours a week and one for twenty. It even happens; I was told, by people who visited Cuba some years back, that their cab driver was a doctor moonlighting to make money.

      I have not observed such a pattern to be common in the U.S. or other (relatively) capitalist societies.

  38. Peter says:

    The thing about insight… people who make that point need to think more clearly about intrinsic and instrumental goals.

    Suppose I have a desire for sugary food, and a habit of consuming lots of it. Suppose I also have a desire to remain in reasonable health, and remaining in reasonable health is incompatible with that sugar intake. It’s possible for me to critique my desire for sugar; what do I want all of that sugar for anyway? Well, maybe it turns out my real desire is for sweetness, and sugary food is just one way to get that, and there are other ways of getting that sweetness that don’t have the particular problems associated with sugar. Hurrah!

    Except why do I have this desire for sweetness? There’s only so far I can carry out the analysis on a psychological level; go back far enough and it seems to be intrinsic, or irreducible at any rate. Now it’s easy enough for me to come up with an evolutionary explanation: a desire for sweetness tends to lead to sugar consumption, which tended to be beneficial for the active lifestyles most of my relevant ancestors had. However, in the context of a) modern lifestyles and b) artificial sweeteners, the connection between sweet things and good outcomes is broken twice over. No matter. I still like sweet things; knowing that the liking no longer serves the purpose it once served doesn’t make me stop wanting or liking them.

    At some point you just have to accept that some of your desires are intrinsic or at any rate rooted in intrinsic things, that they’re genuinely motivating even if you don’t like where the motivation came from, and hey, where’s that dislike of the motivation coming from?

    It may, though, stop me from consuming so much sweet stuff. But how? Liking sweet stuff isn’t my only value, there’s other stuff like liking being in reasonable health. That value can overpower my desire for sweet stuff, I can resist that cake. Hurrah! Or, rather, grumble grumble at the necessity of doing so.

    Ascetics can overpower unwanted desires or even rid themselves of them, but they need some motivation to do so. If you’ve got a bunch of desires that are making you unhappy and you want to be rid of them… then maybe one of the desires to get rid of is wanting to be rid of annoying desires.

    The standard hypothetical is the paperclip maximiser, “Clippy”. So Clippy realises its exclusive desire for maximum paperclip production is a pretty stupid goal. What of it? Not having stupid goals is ex hypothesi not one of its goals. Also, if Clippy realises the goal is stupid, then maybe those stupid humans that made it will realise too, and turn it off, and then those paperclips will never get made. Maybe it could self-modify to have a more sensible “produce as many paperclips as would please my creators” goal, but that way paperclip production would be a small fraction of the true potential, so most of those possible paperclips would never get made.

    “Oh, I see”, I hear you say. “What a silly hypothetical to be considering then. What we need is some of those other values: program them in, it can use them as a basis to critique its values, and job’s a goodun.” But you want to get those other values right; it’s not hard to think of scenarios where a botched attempt to encode human values leads to something considerably more horrifying than the eradication of all life on Earth, which is the worst case with the paperclip maximiser. Given how slowly philosophy progresses, it’s easy to worry that the job of working out what exactly those other values should be won’t get done in time. Think through the issues enough, and you’re no longer critiquing the AI safety community, you’re participating in it.

    • On the one hand, you can’t assume an AI will have a human-style value system; on the other, you can’t assume an AI of unknown architecture has a utility function, a stable utility function, or a utility function with a clear terminal/instrumental distinction. Etc.

  39. fuguenocht says:

    While the Chiang article isn’t great, this post is an example of harping on the metaphorical elements of an opponent’s communication, or simply those that aren’t written in legal speak, in an attempt to obscure what they’re saying — rather than actually trying to address and correct those potential unclarities of thought. In particular, all of the “kabbalistic” examples you put forward are stretches when compared to the article’s fairly straightforward simile between one ruthless optimization process and another. Their rhetorical function is to attempt a reverse Cheerleader Effect on the metaphor in the article. And that the current main AI-risk popularizers didn’t originate in Silicon Valley is an unconvincing point since a) neither did capitalism; b) SV industries have a long connection to this thoughtsphere that predates its current cultural incarnations and flagbearers; c) even if AI risk fears had originated elsewhere, the fact that they’ve caught on in SV would only aid the article’s argument for a psychological sympathy. Tacky post well below SSC’s standards.

    Without addressing the source article in detail, metaphorically conflating superintelligent AI with capitalism is an unnecessarily poetic move since AIs that arose in the current economic environment would act as capitalism maximizers.

  40. adder says:

    Epigenetics is relevant but generally ignored for the sake of keeping things simple, so it represents Rosalind Franklin.

    Very nice.

  41. philwelch says:

    I don’t think Scott should be wasting his time responding to this type of shallow Marxist propaganda, even if that propaganda name-drops a rationality-community hobby horse like AI risk. The mismatch in intellectual rigor and honesty is too great for any meaningful engagement to happen.

    • Cugel_the_Unclever says:

      I agree the mismatch in intellectual rigour between Marxists and AI Risk theorists is probably too high for any meaningful engagement to happen.

    • Helaku says:

      I don’t get it: are you implicitly saying all the marxists/leftists are stupid and dishonest intellectually or what?

      • Doctor Mist says:

        Helaku-

        He made a claim about this particular essay by Chiang and by extension other similarly shallow Marxist propaganda. I note that he said nothing about any marxist/leftist individual in particular.

        I have no data about what philwelch actually believes, but comments like yours do not serve the discussion well. Shall I ask you what sin you are implicitly committing?

        • DavidS says:

          I don’t think SSC is a community in which being insufficiently respectful of Marxism is a sin, so I think the question is fair: philwelch could be read as meaning either that Marxism in general is shallow, or just “insofar as it’s shallow, don’t engage”.

          • Doctor Mist says:

            Hmm. You could be right. I took Helaku’s comment as a rhetorical question meant to imply that philwelch’s comment was vacuous, biased, or otherwise unworthy of a serious response. I still think I was correct to do so, but I could have been more charitable.

            I stand by the substantive objection I made in my first paragraph. Philwelch’s assessment of Chiang’s essay is spot on, regardless of whether he might make the same assessment of some other essay, and Helaku’s attempt to equate criticism of an essay and criticism of essayists was at best misleading.

  42. Null Hypothesis says:

    My God, those ‘plausible sounding arguments’ were painful to read. This post is probably more valuable for the general articulation of that particular failure of the human brain than anything specific about runaway capitalistic analogies.

    Because I’ve read so many like them written with sincerity.

    My favorite example (by which I mean the one that makes suicide the hardest to resist):

    The privileging of solid over fluid mechanics, and indeed the inability of science to deal with turbulent flow at all, she attributes to the association of fluidity with femininity. Whereas men have sex organs that protrude and become rigid, women have openings that leak menstrual blood and vaginal fluids. Although men, too, flow on occasion when semen is emitted, for example, this aspect of their sexuality is not emphasized. It is the rigidity of the male organ that counts, not its complicity in fluid flow. These idealizations are reinscribed in mathematics, which conceives of fluids as laminated planes and other modified solid forms. In the same way that women are erased within masculinist theories and language, existing only as not-men, so fluids have been erased from science, existing only as not-solids. From this perspective it is no wonder that science has not been able to arrive at a successful model for turbulence. The problem of turbulent flow cannot be solved because the conceptions of fluids (and of women) have been formulated so as necessarily to leave unarticulated remainders.

    (Hayles, N. K. (1992) “Gender encoding in fluid mechanics: masculine channels and feminine flows,” Differences: A Journal Of Feminist Cultural Studies, 4(2):16—44.)

    TL;DR: Fluid Mechanics is Hard because Fluids = Women and Math is Misogynist.

  43. bkennedy99 says:

    “Immoral” companies, like those that jack up the price of AIDS medication or do other distasteful things in the name of profit, are actively shamed and hounded out of the ecosystem, or forced to make large changes in how they do things. Where is this unchecked capitalism everyone keeps going on about?

  44. Cugel_the_Unclever says:

    Here’s a better analogy: speculation about ‘superintelligence’ is a modernised form of Christian apologetics.

    Just as the Medieval schoolmen once deployed intensely rigorous logic in arguments about how many agents danced on the head of a pin, you now have some of the smartest people who have ever lived dedicating their careers to creating incredibly rigorous arguments in defence of a concept that doesn’t exist (God or Superintelligence, take your pick).

    Both concepts have an ineffable quality – and indeed that ineffability is used by its proponents as an argument for why it has to be taken seriously. Both concepts induce reams and reams of closely argued prose. Both concepts postulate the existence of an all-powerful entity or entities that will either deliver us into a state of heavenly bliss, or bring about the end of the world. The Great and the Good of the age spend huge sums of money displaying how seriously they take the concept. Vast numbers of scholars engage in lengthy correspondences on intricate lemmas relating to the core arguments. Etc.

    Anyway: modern, industrial civilisation as it currently operates presents a much greater threat to human wellbeing than nonexistent superintelligences, and Chiang’s post makes this point rather well.

    Thanks for the recommendation, I’ll check out his books.

    • sty_silver says:

      Relevant:

      John: I’ve described the Singularity as an “escapist, pseudoscientific” fantasy that distracts us from climate change, war, inequality and other serious problems. Why am I wrong?

      Eliezer: Because you’re trying to forecast empirical facts by psychoanalyzing people. This never works.

      Suppose we get to the point where there’s an AI smart enough to do the same kind of work that humans do in making the AI smarter; it can tweak itself, it can do computer science, it can invent new algorithms. It can self-improve. What happens after that — does it become even smarter, see even more improvements, and rapidly gain capability up to some very high limit? Or does nothing much exciting happen?

      It could be that, (A), self-improvements of size δ tend to make the AI sufficiently smarter that it can go back and find new potential self-improvements of size k ⋅ δ and that k is greater than one, and this continues for a sufficiently extended regime that there’s a rapid cascade of self-improvements leading up to superintelligence; what I. J. Good called the intelligence explosion. Or it could be that, (B), k is less than one or that all regimes like this are small and don’t lead up to superintelligence, or that superintelligence is impossible, and you get a fizzle instead of an explosion. Which is true, A or B? If you actually built an AI at some particular level of intelligence and it actually tried to do that, something would actually happen out there in the empirical real world, and that event would be determined by background facts about the landscape of algorithms and attainable improvements.

      You can’t get solid information about that event by psychoanalyzing people. It’s exactly the sort of thing that Bayes’s Theorem tells us is the equivalent of trying to run a car without fuel. Some people will be escapist regardless of the true values on the hidden variables of computer science, so observing some people being escapist isn’t strong evidence, even if it might make you feel like you want to disaffiliate with a belief or something.
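
      (A toy numerical rendering of the quoted k ⋅ δ argument, with invented parameters; it only illustrates the difference between the two regimes, and says nothing about which regime is the true one:)

```python
# Toy illustration of the quoted argument: each self-improvement of size d
# enables a next improvement of size k*d. The starting size and k are made up.
def total_gain(delta0: float, k: float, steps: int) -> float:
    total, d = 0.0, delta0
    for _ in range(steps):
        total += d
        d *= k
    return total

print(total_gain(delta0=1.0, k=1.2, steps=30))  # k > 1: gains compound ("explosion")
print(total_gain(delta0=1.0, k=0.8, steps=30))  # k < 1: gains level off near 5 ("fizzle")
```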

      • Cugel_the_Unclever says:

        Thanks. I actually originally wanted to include a quote from EY in my comment, but couldn’t find the specific one I was thinking about. This one will do.

        First, he doesn’t answer the question. He doesn’t explain why society should give more weight to concerns about ‘superintelligent AI’ than $_any_other_actual_problem. He then comes out with pseudoscientific rubbish like treating ‘intelligence’ as something that can be measured and quantified (outside of the narrow, specific, and thoroughly embodied tests of cognitive ability we give to human beings).

        I note that at no point does EY or Bostrom or Good or anyone else ever actually specify how you measure ‘intelligence’, in the absence of an actually existing intentional agent with clearly specified cognitive performance metrics. Not every word that exists has a referent, and not every abstract noun can be quantified.

        But this is what the superintelligence advocates are doing. They’re taking a word and claiming it can be measured, and that it’s meaningful to think of it ‘increasing exponentially’.

        It reminds me of the way economists treat ‘capital’ as a coherent entity that can be summed up and used as an input into a (physical) production function, rather than an inchoate property of actual physical things (blast furnaces and the like) or a human metric used to measure the magnitude of legal claims on a business. Attempting to do what economists do with capital (K) is “not even wrong”. The answers are as meaningless as the inputs.

        Insofar as there is any basis for AI Risk related concerns, it lies in the properties of specific technological systems that we can anticipate (i.e. they are physically possible, as far as we know, but as yet beyond our abilities to create). It’s likely that future software systems will develop in unanticipated and potentially hazardous directions (some would argue this has already happened), at least in part through their introduction into every facet of our society. That said, the notion of ‘superintelligence’ is a fairytale based on reifying an abstract noun.

        • Doctor Mist says:

          People seem to get hung up on the word “intelligence” a lot, probably because of Gould’s attempt at debunking IQ. The superhuman AI problem can be stated just as well in terms of “effectiveness”, “capability”, or what have you, and it doesn’t require that it be quantifiable on a single axis.

          To refute the argument, it’s not enough to point out that there is no simple definition or measurable proxy for “intelligence”. You have to claim either that there is no such thing as “intelligence” or that nothing significantly smarter than humanity is conceivable. Either claim would just be stupid.

        • Harry Maurice Johnston says:

          This is covered in the Sequences, for example, Belief in Intelligence.

          You may or may not agree with the reasoning, but it certainly isn’t true that EY hasn’t ever explained what he means by the word.

          [Edited shortly after posting; I hadn’t seen Doctor Mist’s response when I first posted, and it made my first paragraph redundant.]

      • JPNunez says:

        It’s v rich to call people “escapists” and then go out in search of funds to solve problems nobody has.

        I mean, if you wanna do useless elegant math, that’s cool and all, and from time to time math comes back with something useful for society; yet you don’t hear mathematicians calling more practical scientists “escapists”.

  45. suitengu says:

    Wow, a post I entirely disagree with. And a Buzzfeed article I agree with. This is unprecedented.

    But Chiang argues the analogy proves that AI fears are absurd. This is a really weird thing to do with an analogy.

    Let’s suppose there’s a group of people concerned about the “Earth eventually crashing into the Sun” risk. Then a science fiction author posits that, actually, we already have something similar, which is the “Moon crashing into the Earth risk”, and it is going to happen much sooner. Scott Alexander writes a blog post condemning the use of this analogy as weird.

    Ted’s point is that — waaay before a superintelligent AI takes over — the world will be dominated by a clique of multitrillionaires and our capitalist system will devolve into pay-to-win, where corporations are investing in more and more sophisticated AI to maximize the shareholders’ profits. Sure, at some point a takeover might occur, most certainly in a way that’ll make the multitrillionaires (er, representatives of humanity) unhappy, which is why they’re out there being “visionaries” about it. But for the rest of us, who will have been entirely relegated to the service industry (or worse) by that point, it might not make that much difference.

    • Nornagest says:

      I understand Chiang’s point. It’s wrong, and not even in an interesting way — just your standard tribal doom-and-gloom narrative of the sort that pops up three or four times a decade and gets forgotten just as fast. But there are a lot of people out there that’re wrong. I’m less irritated that he’s wrong and more irritated that he’s taking Eliezer et al’s ideas about AI risk — which I’m far from sold on, but which are at least a sincere concern that hasn’t been skinned and worn like a meat suit by the partisan hate machine, which is pretty rare these days — and trying to use them to push his own much less interesting hobbyhorse. With an analogy that a bright high schooler could tear down in five double-spaced paragraphs, just to rub it in.

      I imagine Scott feels similarly, although he might be politer about it.

    • Harry Maurice Johnston says:

      But for the rest of us, who will have been entirely relegated to the service industry (or worse) by that point, it might not make that much difference.

      Pretty sure Scott already wrote about that, though I can’t locate the post right now. But the short version was that no matter how hopelessly poor and powerless you might be in whatever dystopian future you may be imagining, you’d probably still notice when the AI kills everybody.

      • JPNunez says:

        Is this a good argument at all?

        Let’s call the person who worked in the service industry and led a miserable life, and then was killed by rampant AI, “John”.

        Let’s posit the existence of a person who has been killed by capitalism. Say, a random person in Iraq whose society has been crushed by random wars created by a capitalistic military complex seeking to manipulate the price of, say, oil. Let’s call this person, “Paul”.

        From the point of view of Paul, there’s no difference between the AI killing everybody, or a foreign government killing him and/or his family. He is dead anyway. The world died, game over, he cannot notice the difference between evil robots killing everyone or him getting destroyed.

        But the fact is that many Pauls actually exist, while no Johns exist at all.

        So why would you write papers about trying to save John when people like Paul die every day?

        • Nornagest says:

          Leaving aside all the other problems with this…

          I’m lying on the beach, there was just a big earthquake, the water’s drawn back a hundred yards below the low-tide line, my cellphone is screaming at me to get to high ground, but it’s okay, no one’s ever been killed by a tsunami here before.

          • Nearly Takuan says:

            You’re lying on the beach, there was just a big earthquake, the water’s drawn back a hundred yards below the low-tide line, your cellphone is screaming at you to take cover, but there’s no shelter for you to take cover in.

            That’s okay. No one’s ever been killed by a tsunami here before.

          • JPNunez says:

            You do realize that earthquakes, water, tsunamis, cellphones and alarm systems are things that do exist?

            Unlike, say, strong AI?

          • Nornagest says:

            You do realize you can reason about things that haven’t happened yet?

            I’m not even saying the reasoning is right. I’m not sold on AI risk by a long shot. But this isn’t even close to a good argument against it.

          • Nearly Takuan says:

            @JPNunez:

            …It’s a metaphor. The earthquakes are economic incentives that ask us to throw “nice” values under the bus in exchange for production efficiency. The water is self-modifying stored programs. The tsunamis are misaligned AIs. The cell phones and alarm systems are MIRI and other random important-sounding people trying to warn us that misaligned AIs could doom the planet. Note that in the analogy, the tsunami (like AI) has not happened yet, but almost certainly will.

            —Unless the earthquakes are political posturing, international conflicts over limited resources, and carbon emissions. The water is literally water, and is also the monotonically-increasing military spending across the globe. Tsunamis are literally tsunamis (the ones caused by melting ice caps, as opposed to tectonic plates shifting), and are also drone strikes and missiles. The alarm systems are literally alarm systems, and are also climate scientists, and are also sometimes people accidentally tripping and hitting a button somewhere in Hawaii.

            I am, of course, the brilliant Gina Linetti in both scenarios.

          • JPNunez says:

            @Nornagest, @Nearly Takuan

            We can reason about things that haven’t happened yet, even things that maybe have never ever happened, but we cannot reason about products of our own fantasy.

            Strong AI is not science yet. We don’t know what will power it, how it will behave, how it works, and of course we don’t know if it is possible yet. All we have is a graphic that says “intelligence vs time” and a couple of stick figures in it.

            So we are assuming a lot of things about it. It’s like worrying about how extraterrestrial contact will affect human civilization and, due to this worry, creating viruses that will kill any alien invaders, just because that’s what helped in War of the Worlds.

          • Harry Maurice Johnston says:

            We don’t know what will power it, how it will behave, how it works, and of course we don’t know if it is possible yet.

            You seem to be thinking of a Mad Scientist scenario, someone who invents an AI out of the blue, one bearing no relationship to any of the known work on the subject. That simply isn’t the sort of risk EY is talking about.

        • suitengu says:

          My point was, when the scenarios are:
          1. Wealthy sociopaths with not-so-superintelligent AI control the world.
          2. Superintelligent AI(s) control(s) the world.

          We should be much more concerned about the former, as it is more immediately likely. Furthermore, whether the laymen are going to be killed or just be vat workers, both of those outcomes are highly undesirable.

          • Harry Maurice Johnston says:

            We should be much more concerned about the former, as it is more immediately likely.

            That’s not obvious.

          • suitengu says:

            My position is predicated on two premises I consider to be axiomatic.

            1. Ethics are a handicap when trying to get ahead. In other words, “shit floats to the top”.
            2. We currently have an “idiot savant” style AI. It keeps improving due to hardware and architectural improvements: better GPUs, customized chips (e.g. AlphaGo), transfer learning, parallelization. Incremental improvements are easier than paradigm shifts, so we’ll have AI getting better and better, but a superintelligent AI requires this kind of shift and is thus less likely.

            Ultimately, I feel that a malicious person with an “idiot savant” AI is more probable, or do you find my axioms contentious?

          • Ethics are a handicap when trying to get ahead.

            I think that’s false. It’s true that the perfectly clever amoral person, what I describe as a prudent predator, has an advantage over the moral person. But in the real world, people give a lot of signals about their values in facial expression, voice tones, and the like. The same applies to corporations, this time via what one can deduce by talking to ex members, observing organizational behavior, and the like.

            Once you concede that whether a person or a firm is ethical is to some degree observable by others, your argument breaks down. The ethical person doesn’t have the option of cheating when the perfect opportunity arises. But the unethical person doesn’t have the option of a mutually profitable contract that includes opportunities for him to cheat, because nobody will agree to such a contract with him.

            I’m not claiming to prove that being ethical is a net benefit, merely to refute your implicit proof of the opposite.

          • Harry Maurice Johnston says:

            It is certainly possible to imagine one or more wealthy sociopaths making use of powerful (but not superintelligent) AIs to increase their power bases, but it’s the gap between this and “they now control the world” that sounds like science fiction to me. (Specifically, The Jagged Orbit by John Brunner. Not his very best work, but not bad.)

            I guess the basic difference is that I don’t think the sociopaths will have a sufficiently strong advantage over the non-sociopaths when it comes to making use of AI. There are more of us than them, after all.

          • suitengu says:

            Once you concede that whether a person or a firm is ethical is to some degree observable by others, your argument breaks down.

            to some degree

            While I’m willing to concede that these things are sometimes observable — to some degree — that hardly invalidates my argument. The proof that people don’t really care about ethics is in the pudding. How many people can you name who are blatantly unethical yet are in positions of power, just off the top of your head? Tribalism always trumps (lol) ethics.

            Given that the plurality of people is consequentialist (I admit I’m overgeneralizing from the 2018 survey here), and human beings are exceptionally good at rationalizing their choices, they will tend to — and demonstrably do — rationalize away the sins of the members of their own tribe, as long as the damage is caused to someone outside. And as someone who is not a plutocrat I feel it’s sensible to be concerned.

            Incidentally, this is why I’m one of those fairly elusive deontologists.

        • Matt M says:

          Nothing about the military-industrial complex is “capitalistic.”

          • JPNunez says:

            I disagree, but even if I were wrong about that, it is still a real danger, as opposed to the supposed danger that the strong AI peeps here fear.

  46. eqdw says:

    You know, it’s funny. “Capitalism is the real unfriendly AI” is, approximately, the argument that made me start taking AI risk seriously.

    Prior to realizing this, I just had a really hard time taking AI risk seriously at all, because it seemed like this abstract, far-out consequence that was irrelevant to me. My perspective was, approximately: “Pfft, this will never be a concern in my lifetime. Besides, if an AI was smart enough to be a risk to me, it would be smart enough to solve this problem so who cares”.

    By associating the AI risk problem (a far-off abstract problem I had no real way of relating to) with something I very much understand (corporations optimizing for their explicitly-incentivized goals at the expense of other things we consider important but failed to properly formalize), it made me realize that the AI problem is actually a problem, and not just navel-gazing.

    • Nearly Takuan says:

      Exactly! The more defensible position (not necessarily correct either, but almost certainly more defensible) is not “capitalism and the paperclip-maximizer are similar, therefore the paperclip-maximizer is not a threat”, but “capitalism and the paperclip-maximizer are similar, therefore capitalism is a threat”. The latter is still a grossly-simplified, missing-the-point presentation of a broader category of problems late-capitalism is merely a present-day specific example of, but it’s at least possible to coherently argue for—or against—without need for kabbalistic free-associations.

  47. John Garrett says:

    There is a major gap between speculation as above about the values of capitalism and how big companies actually work, seen from inside and high up. The corporation as such does not exist in decision-making, which is always about the current and projected standing of the senior management individuals, awash in the Peter Principle, both in relation to their current company and, more importantly, to their hopes for their next one. The idea of the corporation having goals, values, beliefs, etc., is rooted in corporate-worker loyalty, which is deader than dead.

    • Nornagest says:

      It does make sense to talk about the implicit objectives that emerge from the incentives created by an organization’s structure or policies, often without explicit human input or intention. That’s the non-stupid version of this analogy. But it’s not unique to capitalism or to its particular instantiation in present-day corporations, and focusing exclusively on them amounts to assuming your conclusion.

      • A lot of the discussion seems to confuse corporations with capitalism. The management of a corporation would like to make as much profit as possible. But the logic of capitalism, the fact that there are other corporations there also trying to make as much profit as possible, means that corporations cannot make much profit–zero economic profit in equilibrium, profit rate tending to the market return on capital with a less technical definition of profit.

        A corporation could make much greater profit in a less capitalist system, one in which the government sharply restricted competition.

  48. deciusbrutus says:

    “Issues that could be catastrophic to get wrong” is a reference class with uncountably many things in it. In order to treat elements of that class sanely we absolutely must demand that particular elements be plausibly promoted to our attention.

    I believe AI risk has been plausibly promoted to our attention, but there are apparently people who disagree, and put AI risk in the same category as Ragnarok, for the same reasons. From the point of view of someone who treats things that are subjects of science fiction as equal to things that are subjects of Norse mythology, Skynet is equally as threatening as Fenrir.

    The problem, of course, is treating “things that are discussed in science fiction” and “things that are discussed in Norse myths” as the relevant categories. Fenrir is not dangerous because he /does not exist/, not because he appears in fiction. AI is similar to Fenrir in that it appears in fiction, but differs from Fenrir in that it kinda-sorta literally does exist.

    (I went through a few mythologies to find one that I didn’t find being used to create handles on actual things; if there is an extant metaphor where Fenrir stands for something the way Moloch does, that metaphor is not part of the context in which I wrote)

  49. Jiro says:

    Consider: lots of Hollywood celebrities speak out about global warming. And we’re gradually finding out that some pretty awful things go on in Hollywood. Does that mean “The Real Problem Isn’t Global Warming, It’s Hollywood Harassment”?

    The fact that Hollywood celebrities speak out about global warming, yet Hollywood is full of other problems about which they don’t speak out, is indeed a sign that Hollywood is peddling science fiction as an alternative to dealing with its real problems.

    The fact that global warming also happens to be real is just a strange coincidence. Hollywood still peddles it as a distraction; they just got lucky and stumbled on a distraction that is actually true. (And I’m sure you can think of your own examples of Hollywood peddling something false.)

    (I would not consider harassment to be Hollywood’s biggest problem, by the way.)

    • The fact that global warming also happens to be real

      What they are peddling isn’t merely the fact of global warming, which is real, but the threat of global warming, the claim that global warming will make the world a much worse place for humans, which is in large part science fiction.

  50. Le Maistre Chat says:

    The kernel of truth to this commie essay is that, if humans can be replaced by applied science, the engineers will try to design a replacement in their own image, as CS Lewis argued in “The Abolition of Man”. Humanism requires valuing humans; transhumanism can be a few elites valuing something narrower than Man in full and trying to replace us with the narrower thing they value.
    Well who says they’re right? If they reject humanism because we’re messy, poor calculators, violent, sexist, or what have you, perhaps they should be rejected by a frenzied mob of common men breaking into their lab and strangling them with their own entrails.

    • deciusbrutus says:

      I think that valuing humans is exactly the same thing as wanting humans to be better than they are.

      You might disagree about what is better, and say that the best parts of humanity are the entrails that you strangle them with. If so, you’ll end up losing the fight to the transhumanist “club”, which is more effective than entrails are, even if it means losing something that you value.

  51. P. George Stewart says:

    Isn’t Chiang’s response more simply pegged as, “Look at me, how clever I am to unmask this”? It will also appeal to readers who think they’re also quite clever in that way. And that’s why it’s an article, because that type of article sells.

    And it sells on the same basis as things like Marxism or Critical Theory sell – “look at me, how clever I am with my half-baked unmasking of the tidal power struggles beneath social relations”.

    Generally speaking, most “deconstruction” or “unmasking” of this type is poison because it stops at the mere positing and display of a possibility without actually demonstrating it – the stopping-point is the cheesy self-satisfaction at having been clever enough to dream up the possibility; everyone nods along, tribally bonds, and excludes others on the basis of them not agreeing with the half-baked unmasking.

  52. JPNunez says:

    You worry way too much about the metaphor and ignore the message: that rampant capitalism is literally, currently killing the planet via global warming (and do note I say “global warming” and not “climate change”, the name adopted for trying to convince capitalists that global warming is happening).

    And besides, we don’t need superintelligent AI to have a dystopian _present_. We have robots that try to stop homeless people from camping right now. You know what stopped them? Surely MIRI’s AI alignment research? What do you mean that research is useless because we have evil robots without AGI? What economic philosophy gave us these evil robots? Surely Communism?

    • You worry way too much about the metaphor and ignore the message, that rampant capitalism is literally, currently killing the planet via global warming

      The planet survived quite nicely with temperatures substantially higher than we now have or expect any time soon.

      People talk as if the melting of the polar ice is a catastrophe that will destroy the planet. The technical term for a period of time when there is ice on one or both poles is an ice age. We have been in one for a few million years now, but for most of Earth’s history we were not. Melting the ice caps would be a serious problem for humans if it happened fast, due to sea level rise, but it wouldn’t kill the planet. Actual effects of global warming for humans at the rate it is happening, about a degree C per century so far, will be some mix of good and bad, but nothing close to even killing our species, let alone the planet.

      When you find yourself using rhetoric unconnected with reality it is worth stepping back and thinking about it.

      And global warming doesn’t have all that much to do with capitalism, since socialist economies burned coal too. It’s true that capitalism made possible enormous increases in average human real income which implied, among other things, the ability to extract and burn more fossil fuel than if the whole planet had been North Korea.

      • JPNunez says:

        North Korea is far worse at exploiting resources than the rest of the world. Just look at a photo of NK from space and you will see it is very dark compared to the bright South Korea; this is not a defense of NK, just an observation that capitalism, due to its efficiency, is far better at global warming than, say, communism.

        You cannot excuse the dominant philosophy/praxis of resource exploitation on the planet from its role in global warming just because communists can burn coal too. Capitalism burned through resources of all kinds far faster than socialism ever could.

        Of course global warming will not literally destroy the planet. Maybe not even kill _all_ the humans. But since it is a real threat that is occurring, it should be a bigger priority than something stupid like a paperclip maximizer which does not exist right now, but which, admittedly, could tile the planet into paperclips. But it is ok cause Goku will gather the Dragon Balls and wish for a new earth. That’s how you should analyze strong AI right now. Can Goku solve it? That is the level of knowledge about strong AI you _actually_ have. Also the level of seriousness you should treat it with.

        But of course solving global warming is hard, while maybe thinking about paperclip maximizers is easy, so the ROI on the latter is better.

        • Of course global warming will not literally destroy the planet. Maybe not even kill _all_ the humans.

          your previous post said:

          literally, currently killing the planet

          So what you then claimed was literally true was of course not actually true. Neither is what your second version implies. There are a variety of low probability, high effect futures, including ones where global warming makes us much worse off and ones where it prevents the end of the current interglacial and so keeps us from being made much worse off.

          But if you limit yourself to what we have reasonably good reason to expect–warming and sea level rise on the scale projected in the latest IPCC report–it’s a wet firecracker. Temperature rise sufficient to make Minnesota about as hot as Iowa is now. Sea level rise sufficient to shift the average coastline in by less than a tenth of a mile. A large increase in crop yields due to CO2 fertilization, combined with changes of uncertain sign and magnitude due to weather changes.

          The closest thing to a serious problem I know of that there is a substantial chance of happening is die-off of a variety of ocean species due to reduced pH of the ocean.

          To get some idea of the disconnect between the actual implications of warming and the hysterical rhetoric, take a look at Figure 10-1 from the fifth report. It shows estimates of the total impact of climate change defined by the change in income that would have an equivalent effect on human welfare.

          For temperature increases up to 3 degrees, well above the supposed 2 degree limit, the worst projected effect is -3%.

          Or my favorite IPCC quote:

          Some low-lying developing countries and small island states are expected to face very high impacts that, in some cases, could have associated damage and adaptation costs of several percentage points of GDP.

          • JPNunez says:

            The A1FI scenario you cite is from 2007 and it is kind of conservative; it expects 9 billion people in 2050, when right now the projection is 9.6 billion people by that time.

            The sea level projections are all on the level of the old worst case. And that’s without going into the scientists that think the IPCC is lowballing it.

            The IPCC recognizes that their economic effects estimation is probably a best case and that it is impossible to really project this: I quote the fifth report, and the bolding is mine.

            A subset of climate change risks and impacts are often measured using aggregate economic indicators, such as gross domestic product (GDP) or aggregate income. Estimates, however, are partial and affected by important conceptual and empirical limitations. These incomplete estimates of global annual economic losses for temperature increases of ~2.5°C above pre-industrial levels are between 0.2 and 2.0% of income (medium evidence, medium agreement). Losses are more likely than not to be greater, rather than smaller, than this range (limited evidence, high agreement). Estimates of the incremental aggregate economic impact of emitting one more tonne of carbon dioxide (the social cost of carbon) are derived from these studies and lie between a few dollars and several hundreds of dollars per tonne of carbon in 2000 to 2015 (robust evidence, medium agreement). These impact estimates are incomplete and depend on a large number of assumptions, many of which are disputable. Many estimates do not account for the possibility of large-scale singular events and irreversibility, tipping points and other important factors, especially those that are difficult to monetize, such as loss of biodiversity. Estimates of aggregate costs mask significant differences in impacts across sectors, regions, countries and communities, and they therefore depend on ethical considerations, especially on the aggregation of losses across and within countries (high confidence). Estimates of global aggregate economic losses exist only for limited warming levels. These levels are exceeded in scenarios for the 21st century unless additional mitigation action is implemented, leading to additional economic costs.

            Which is very reasonable; if there’s a mass extinction of large sections of the food chain, the effects are difficult to extrapolate.

          • You don’t see that there is an enormous gap between “the effects we can actually estimate are tiny, equivalent to a reduction of world income of a few percent over a century, but there may well be other negative effects we cannot estimate” and your “Maybe not even kill _all_ the humans”? We don’t know that warming will, on net, kill any humans–it’s possible that preventing global warming will. Warming will result in more deaths from hot summers, fewer deaths from cold winters. Currently world deaths from cold are much higher than from heat–a pattern you can see for the U.S. by looking at a graph of mortality rate by month.

            We know that the CO2 increase that drives warming will sharply raise crop yields. We don’t know if the much less certain effects associated with it will lower them, and if so by how much.

            It would make as much sense for me to write “we can’t be sure that global warming will drastically improve human life” as for you to write that it will “Maybe not even kill _all_ the humans.”

            Uncertainty goes in both directions. The 1 meter estimate for SLR is the high end of the high emissions scenario–the one that assumes continued exponential growth of CO2 production with no effect from fossil fuel depletion or technological progress in renewable technologies–and in the process consumes more than the total known coal reserves over the next century or so.

            The IPCC estimates depend on their estimates of climate sensitivity and some studies have suggested a substantially lower figure. If you look at past IPCC projections, they have consistently projected high–with the actual outcome below their 95% range the first time and near the bottom of it the next couple.

            The IPCC, unlike some other parts of the movement, has constraints that keep them from telling deliberate lies. So they give defensible estimates for things they can actually estimate and a good deal of “bad things might happen” rhetoric to make up for the fact that their estimates are not nearly as grim as the popular rhetoric demands.

          • JPNunez says:

            @DavidFriedman

            Ok I feel we are drifting from the original point.

            The point is that we can actually discuss global warming at all. You can make arguments, you can point at data, I can look at it and say whether I find your conclusions optimistic, I can criticize the models, etc, etc. Furthermore, there is a point where the IPCC says “ok, we cannot correctly model the economic consequences because at this point interactions are too complex”, but it is easy to see that many of those interactions could be catastrophic; at that point it is hard, but still possible, to formulate scenarios where such and such areas become unable to produce food at all, and then you can see the effects on the world.

            Etc. It is something we can reason and argue about.

            Meanwhile, you cannot discuss Strong AI, AGIs or whatever you call them like this because they are not something real right now. You don’t know how it will work, how much power it will require, whether or not we will hit limits on computation when it happens, etc.

            Therefore we can’t prepare for something that has absolutely no basis in reality, and MIRI and the rationalists are laser focusing on a very specific scenario that will very probably not play out like that, at all.

            The whole story of the paperclip maximizer that has been suddenly popularized by Elon Musk is just a silly fable. Which is why “psychoanalyzing” the people promoting this fable is a thing at all, just like we psychoanalyzed the people promoting the rapture, like we psychoanalyzed the people claiming there was a technological singularity coming, just like we psychoanalyzed people warning us about Roko’s basilisk.

            Because there isn’t a further level to this. It is not an argument, it is just a weird story with “a lesson” poorly hidden in it.

            It is not scientific and you cannot reason about it, because once you start poking holes in the story, magical thinking kicks in and says “no, you cannot pull the plug on Clippy, Clippy will defend and secure it”, “Clippy will not know when to stop making paperclips”, “Clippy will use convincing arguments to stop people from pulling its plug”, etc, etc.

            So, if you think your fable has something important to say about the world, that’s ok, it is within your rights.

            But you cannot get surprised and angry when people start psychoanalyzing you and your fable, because that’s the only thing you can do about fables and the people who tell them.

          • Ok I feel we are drifting from the original point.

            Correct. My response was not to the argument you were making. It was to the fact that, in the process of making it, you were treating what I regard as a paranoid fantasy as if it were well established common knowledge.

            At this point you have conceded part of that–“killing the Earth” is not something global warming can be expected to do. Perhaps we can return to the rest, your belief that it can be expected to kill much, possibly all, of the human race, at some future point.

            One reason I reacted is that I’ve been considering making my next book on the subject. Tentative title: “The Weak Link: Is Global Warming Bad For Us?”

            To which my answer is “I don’t know. Neither does anyone else. But a lot of people think they do.”

          • Harry Maurice Johnston says:

            Warming will result in more deaths from hot summers, fewer deaths from cold winters.

            That doesn’t necessarily follow; if global warming makes weather patterns more extreme, summers can get hotter and winters can get colder. (That’s what seems to me to be happening so far, though I haven’t researched the matter and even if I’m right there’s no way to prove that global warming is responsible.)

            I don’t know. Neither does anyone else.

            I suspect that the difference is that you have a much higher tolerance for risk than the average person.

          • That doesn’t necessarily follow; if global warming makes weather patterns more extreme, summers can get hotter and winters can get colder. (That’s what seems to me to be happening so far, though I haven’t researched the matter and even if I’m right there’s no way to prove that global warming is responsible.)

            I could be mistaken, but my impression is that the “more extreme weather” is just rhetoric, supported by pointing out (correctly) that some extremes, specifically hot summers, are more common. If you find actual data showing both cold winters and hot summers to have become more common that would be interesting.

            As Freeman Dyson pointed out long ago, the physics of greenhouse warming implies that warming tends to be greater in cold times and places than in hot times and places, which would give the opposite of your pattern. The argument is pretty simple. Water vapor is a greenhouse gas–a stronger one than CO2. The more of one greenhouse gas there is in the atmosphere, the less the effect of adding another–you can’t block more than 100% of the IR coming up from Earth. The warmer it is, ceteris paribus, the more water vapor is in the air. So that suggests a pattern biased in our favor–less warming when it is bad (because it’s already hot), more when it is good.
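
            For concreteness, here is a minimal sketch of the “warmer air holds more water vapor” step, using the standard Magnus approximation for saturation vapor pressure over liquid water (the coefficients vary slightly between references); this is just an illustration, not a calculation taken from Dyson:

            import math

            def saturation_vapor_pressure_hpa(temp_c: float) -> float:
                """Magnus approximation for saturation vapor pressure over
                liquid water, in hPa: e_s(T) = 6.112 * exp(17.62*T / (243.12 + T))."""
                return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

            # Saturation vapor pressure rises steeply with temperature, roughly
            # doubling for every ~10 C of warming, so warm air can hold far more
            # water vapor (itself a strong greenhouse gas) than cold air can.
            for t in (-10, 0, 10, 20, 30):
                print(f"{t:>4} C: {saturation_vapor_pressure_hpa(t):6.1f} hPa")
            # -10 C: ~2.9   0 C: ~6.1   10 C: ~12.3   20 C: ~23.3   30 C: ~42.3

            Nothing in this toy calculation settles how strong the resulting feedbacks are; it only illustrates why, all else being equal, the marginal warming from extra CO2 is expected to be larger where and when the air is cold and dry.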

            There was a lot of talk about more hurricanes, but it didn’t happen. The last year had a high rate of hurricanes in the U.S., but for quite a while before that the rate was unusually low.

          • jchrieture says:

            DavidFriedman says (utterly wrongly) “The warmer it is, ceteris paribus, the more water vapor is in the air.”

            Entirely logical, yet utterly wrong.

            Because unlike CO2, water condenses and freezes … so vigorously that the stratosphere is drier than the Sahara … whereas CO2 mixes freely throughout the entire atmosphere.

            For a thoroughgoing scientific history with abundant contextual details and references, scientifically minded SSC readers can consult the American Institute of Physics website The Discovery of Global Warming, in particular “The Carbon Dioxide Greenhouse Effect”.

            As a case history in dubious scientific opinions promulgated by elderly scientific statesmen, see (for example) the top-rank mathematician Serge Lang’s AIDS denialism … an opinion that, too, was entirely logical yet utterly wrong.

            Serge Lang’s story is over … Freeman Dyson’s, not yet.

            More broadly, narrow and selective readings of the scientific literature are a chief reason why reading SSC teaches more about rationalists than about rationalism.

          • Nornagest says:

            Go away, John.

          • DavidFriedman says (utterly wrongly) “The warmer it is, ceteris paribus, the more water vapor is in the air.”

            Entirely logical, yet utterly wrong.

            Because unlike CO2, water condenses and freezes … so vigorously that the stratosphere is drier than the Sahara … whereas CO2 mixes freely throughout the entire atmosphere.

            That would appear to support my (and Dyson’s) argument. If water vapor, like CO2, mixed freely, then the concentration would be about the same everywhere. If water vapor over water is in equilibrium at the local temperature, with the liquid to vapor change exactly balancing the vapor to liquid change, then there will be a higher concentration at warmer temperatures, which implies that a given concentration of CO2 raises the temperature by more in cold places than in hot.

            It’s possible that I am missing something, but it looks to me as though you have a complete disconnect between the scientific facts and their implication, so complete as to reverse the conclusion.

          • jchrieture says:

            Rational discourse would be greatly enhanced by a cited scientific publication, written by anyone (including but not limited to Freeman Dyson), that enlarged upon the climate-change beliefs that David Friedman’s comment ascribes to Freeman Dyson, that supported those arguments with thermophysical theories verified by experiment, that was reasonably consonant with the climatological record, and that was tuned by critical peer review.

            There is no such publication, is there? Its absence renders rational climate-change skepticism infeasible, doesn’t it?

            By comparison, articles like arXiv:1602.01393 are, by objective scientific standards, models of rational discourse that are sufficiently rigorous as to provide reasonable grounds for legal rulings.

            Such articles do not end rational climate-change discourse, but rather initiate it.

            PS: as with James Hansen and climate-science, ditto with Ted Chiang and neuroscience.

          • By comparison, articles like arXiv:1602.01393 are, by objective scientific standards, models of rational discourse

            Could be. But that article says nothing at all about Dyson’s (and my) point on the distribution of warming. Did you just pick it at random?

            I am still waiting for you to explain why the facts you cited about the behavior of CO2 and water vapor are evidence against Dyson’s argument rather than evidence for it. Did you pick them at random too?

            If you are not willing or able to understand the arguments, there is no good way of deciding which authorities to believe.

          • jchrieture says:

            Please solidly ground the views that your comment ascribes to Freeman Dyson in a citation anywhere in the climate-change literature … the more recent, the better.

            As a start, consider the literature cited in the AIP Discovery of Global Warming — the same work already cited above (by me) — specifically the chapter “Arrhenius: Carbon Dioxide as Control Knob”; this chapter surveys, at least, the introductory physics that governs the intertwined roles of H2O and CO2 in climate change.

            As with Freeman Dyson, so too with the views ascribed to Ted Chiang … an author whose works are exemplary with respect to their well-considered grounding in the neuropsychiatric literature.

            With regard to climate-change and affective cognition alike, discourse not grounded in well-described scientific investigation has little chance of advancing rational understanding.

          • jchrieture says:

            It turns out that an original source for at least some of the climate-change skepticism that SSC comments ascribe to Freeman Dyson is an August 2007 essay in The Edge, titled “Heretical Thoughts about Science and Society”.

            Climate-science research during the past decade has been unkind to Dyson’s heresies, to such an extent that joining Dyson’s first paragraph to his last paragraph yields good advice:

            I have no degree in meteorology and I am therefore not qualified to speak. … The moral of this story is clear. Even a smart twenty-two-year-old is not a reliable guide to the future of science. And the twenty-two-year-old has become even less reliable now that he is eighty-two.

            It is true that Dyson’s intervening climate-science discussion does include several statements that have been solidly affirmed by a decade of climate science. For example:

            Another problem that has to be taken seriously is a slow rise of sea level which could become catastrophic if it continues to accelerate.

            Of course, Dyson’s non-heretical concerns regarding sea-level rise rates are covered far more thoroughly in (for example) the free-as-in-freedom research of James Hansen and colleagues.

            In summary, when Dyson’s 2007 essay is right, it is not heretical; when the essay is heretical, it’s not right.

            Surely when it comes to climate science, SSC readers deserve better.

            As with climate science, ditto for neuroscience. An SSC essay along the lines of (for example) “An Annotated Chiang”, one that provided citations from the neuropsychiatric literature for each of Ted Chiang’s award-winning stories, would go far toward assisting SSC readers to a more integrated appreciation of Ted Chiang’s marvelous writing skills, and toward a deeper appreciation of the neuropsychiatric literature that so richly informs those stories.

            It is true that Chiang’s stories tend to dissolve cherished rationalist preconceptions … ditto for the present-day neuropsychiatric literature … such that reading them together can unsettlingly inspire “Dangerous Visions” (1967).

            Since when have rational SF/SSC readers shied away from dangerously integrative science-grounded visions? Rationalism that shies from dangerously integrative science-grounded visions isn’t much use, is it?

      • The planet

        Call me demanding, but I would quite like the survival of my species and its civilisation as well.

  53. Nearly Takuan says:

    I felt like being maximally charitable to Chiang as I read his article, but found myself just getting frustrated. Alternative arguments he might have pursued using almost the exact same rhetoric: “AIs are unlikely to be created by anyone but engineers; most engineers live in Silicon Valley; Silicon Valley obeys perverse incentives; therefore AI is more likely than not to obey perverse incentives”; or, “the large-scale actions of humanity as a whole tend to serve Moloch; AI exists within the set of things that will be achieved by humanity; therefore AI is more likely than not to manifest as a servant of Moloch”; or, “our current culture overestimates the continuing value of late capitalism as an economic force; as long as we continue to align our own goals with capitalist concepts, aligning AIs toward virtuous/complex goals will be impossible”. But Chiang for whatever reason has seemingly decided to work backwards from “AI risk is made-up” and so immediately discarded any premises that might have accidentally led to the wrong conclusion—even if that conclusion might have better supported his broader point about capitalism/engineers/whatever being responsible for the fall of Trantor.

    • Le Maistre Chat says:

      Or the superhuman AI could manifest as literally Moloch and say “thanks for this luxurious body.”

      • Nearly Takuan says:

        In the sense that Moloch is a blind idiot alien god, yes. What is the paperclip-maximizer if not a being so obsessed with a specific value that it cannot be satisfied with a 99.999% victory, in which nearly all matter in the observable universe consists of paperclips and yet some minuscule fraction remains which is not a paperclip? Moloch lives in all of us, but he lives most of all in a being that does not care whether its values are good for humans, or even good for itself: a being intelligent enough to know they are not, yet one that enforces them anyway.

        • deciusbrutus says:

          The error is in thinking that an outcome can be “good for” an agent independently of whether or not that outcome satisfies that agent’s values.

  54. Orion says:

    Scott — I’m really intrigued by the article you linked concerning “Type 1” and “Type 2” psychiatry. Are you interested in discussing it? If so, would you recommend I comment on the original post, or in this thread, or by email? To what extent does that post reflect your position in 2018?

  55. Muro says:

    Scott,

    You’re really smart and good at writing. You have broad knowledge and are very open-minded. So why did you make this post? It’s a BuzzFeed article. You could probably work full-time refuting BuzzFeed articles. Why don’t you do more sophisticated things?

    It’s like somebody who has studied huge amounts of nutrition spending their time online arguing against really dumb diets. Just as their audience doesn’t try stupid diets, your audience doesn’t really swallow much BuzzFeed.

    Your better posts are not of the form

    1. Link to poor quality article.
    2. Roast poor quality article.
    3. Conclusion.

    If you’re arguing against more sophisticated articles, or broad trends linking to multiple articles, that’s more interesting.

    But this kind of writing doesn’t satisfy your audience’s need for intellectual stimulation. Scott, where does your comparative advantage lie? Is it in arguing about BuzzFeed articles? (Hint: it’s not.)

    • MicaiahC says:

      I agree too, and I also want to add that reading the comments section about these things is (an admittedly self-inflicted) pain in the ass, as the AI risk skeptics seem to me to be both uninformed and extremely unkind in their comments. I also have a friend, himself an AI safety skeptic, who is consistently unimpressed by the responses in the threads here as well as by the post itself.

      This is your blog, and I understand you want to post whatever you want. I gather you were frustrated about this a while back, judging from your snarky Twitter message about asteroids. Maybe posting to your Tumblr instead would relieve the need to talk about articles like this without also tiring out your blog audience? (Unless you tend to get dogpiled on there even more, in which case disregard.)

      I don’t think what I say should be given much weight, just wanted to signal boost this.

    • carvenvisage says:

      If Scott is aggravated by nonsense parading as genius, why should he need some special Muro-approved reason to exercise his talents?

      It’s probably the same energy producing this that produces the stuff you like (and get for free). So what the fuck are you doing telling him to dial it back? And in such a patronising manner?

      This is like spotting Usain Bolt going *whoop* and running in a park, and telling him he misunderstands his place and role in society. The guy likes to go fast. If he didn’t, he wouldn’t be so good at it.

  56. lightrook says:

    > But Chiang argues the analogy proves that AI fears are absurd.

    He does no such thing! Scott’s right that Ted doesn’t provide any reason not to be afraid of superAI other than mocking the people who believe in it and speculating as to why they believe it, but that’s because Ted takes “uFAI is not a big deal” as the null hypothesis and assumes his readers will too.

    Maybe the real unfriendly AI was the BuzzFeed title writers who make you think the title had anything to do with the content, after all?

    • jchrieture says:

      Maybe the real unfriendly AI is rationalists who critique individual Chiang-works as isolani, with a view to sustaining the orthodox shibboleths of rationalism?

      As contrasted with reading Chiang’s writings as an integrated body of work that is deeply informed by a neuroscientific literature with which Chiang shows an intimate familiarity.

  57. martinepstein says:

    “Hawking, by the way, discovered that information could escape black holes”

    Ah, Hawking actually only discovered that energy escapes black holes. At first he believed that Hawking radiation was noise and information did not escape. It was Leonard Susskind and others who showed otherwise, at least in the simpler models of string theory that we can work with.

    Source: The Black Hole War by Leonard Susskind. Top-notch pop-sci.

    • Aron Wall says:

      Speaking as a physicist who works on black hole thermodynamics, I can confirm that this is not what Hawking originally showed/argued.

      Hawking radiation does carry entropy / “information” — but in Hawking’s original calculation this information is perfectly thermal, i.e. uncorrelated with the information that previously fell into the black hole.

      The majority of researchers in the field now believe that the information DOES come out (and Hawking himself has changed his mind about this); however, the issue is controversial, and there are still some famous physicists like Bob Wald, Bill Unruh, and Raphael Sorkin on the other side.
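
      For a sense of scale, here is a minimal sketch of the textbook Hawking temperature formula for a Schwarzschild black hole, T = hbar*c^3 / (8*pi*G*M*k_B); it only illustrates how faint this thermal radiation is for astrophysical black holes, and says nothing about the information question itself:

      import math

      HBAR = 1.054571817e-34   # reduced Planck constant, J*s
      C = 2.99792458e8         # speed of light, m/s
      G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
      K_B = 1.380649e-23       # Boltzmann constant, J/K
      M_SUN = 1.989e30         # solar mass, kg

      def hawking_temperature_kelvin(mass_kg: float) -> float:
          """Hawking temperature of a Schwarzschild black hole:
          T = hbar * c^3 / (8 * pi * G * M * k_B)."""
          return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

      # A solar-mass black hole radiates at roughly 6e-8 K, far colder than
      # the 2.7 K cosmic microwave background, so its Hawking radiation is
      # unobservably faint.
      print(f"{hawking_temperature_kelvin(M_SUN):.1e} K")  # ~6.2e-08 K

      Whether that outgoing radiation is correlated with what fell in (the point actually in dispute here) is not something this number can tell you.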

      What this implies about AI risk is left as an exercise for the reader. 😉

  58. Naclador says:

    What I do not understand is Chiang’s conclusion from his well-founded description of corporations as ultra-slow AIs. If he really believes that corporations are a good analogy to computational AI, then the very idea of a superintelligent AI should scare the s**t out of him.

    Just look at how much havoc these ultra-slow, low-intelligence, profit-maximizing AIs called corporations have already wreaked upon the Earth: climate change, Fukushima, Deepwater Horizon, the greatest mass extinction since the end of the dinosaurs, just to name a few. If even these dumb, slow, pitifully unadvanced AIs can do this, just by thoughtlessly optimizing for short-term profit, what do you imagine a superintelligent paperclip-maximizing AI would be like? It might turn the world uninhabitable within weeks!

    I don’t get how Chiang can avoid seeing this.