AI Persuasion Followup Survey

If you took the AI persuasion experiment survey last month, I have a very brief followup survey that I’d like you to take. You can find it here. Don’t worry, you don’t have to read anything this time.

If you didn’t take the survey last month, please don’t take this one either.


126 Responses to AI Persuasion Followup Survey

  1. Rob says:

    There is no option for those of us whose DOB ends in 8.

    • Scott Alexander says:

      This definitely wins my prize for “how could this error even happen” of the day. Fixed and thanks.

      • Error says:

        The number 8 was down for maintenance.

        • GCBill says:

          Please avoid any and all calculations involving the number 8.

          • Cube says:

            The maintenance window for the number 8 has concluded.

            During the window, some calculations may have been completed using fail-over values of 7 or 9. Please note that errors resulting from these approximations may propagate through the chronologue for some time before stabilizing.

            We recognize that there is never an ideal time for scheduled maintenance, and we appreciate your cooperation and understanding in this matter. We value your business, and we hope you will continue to make us your first choice for discrete computation.

          • Decius says:

            You should have forked the number 8 into a legacy branch that you left live and a dev branch that you made changes to before merging them back into live.

            Was the issue with eight such that it was impossible two four-k it as well? Should I be concerned about my retroactive diet during the period where I couldn’t have ate?

          • Paul Brinkley says:

            As usual, if you have any questions, please contact us any time between the hours of 7:59 and 9:01, except during August.

            We may also be taking an extended break in about 14 months.

          • Ryan says:

            If ordinary humans can’t handle a simple survey given in base 7 I think the AIs have already won.

          • Rob says:

            If the base-7 system still has the number 9 in it then I think friendly AI is the least of our concerns

  2. Shieldfoss says:

    I would recommend, in general, to not ask people to avoid a survey. Some will take it anyway and fuck your predictive power. Instead, have a very early survey question re: whether they have taken the prior survey, preferably including an “I don’t remember” option. That lets you sort people on the back end without getting erroneous results from people who took it personally that you don’t care about their opinions.
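    (As an illustration of that back-end sort: a minimal sketch, assuming a hypothetical CSV export with a “took_prior_survey” column whose answers are “yes”, “no”, or “don’t remember”; the file and column names here are made up, not from any real survey.)

```python
import pandas as pd

# Hypothetical export of the followup responses; file and column names are illustrative only.
responses = pd.read_csv("followup_responses.csv")

# Keep self-reported prior respondents as the main sample...
prior = responses[responses["took_prior_survey"] == "yes"]
# ...and hold the "don't remember" group aside for a sensitivity check,
# instead of turning anyone away at the door.
unsure = responses[responses["took_prior_survey"] == "don't remember"]

print(f"{len(prior)} usable responses, {len(unsure)} held out as unsure")
```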

    For reference:
    http://boingboing.net/2016/07/05/ubisofts-gamer-survey-first.html

    From the article:

    Ubisoft replies on Twitter “there was an error with the setup of the survey, it is now resolved & available to everyone. Apologies for any confusion.”

    I would bet at quite favorable odds ratio that what actually happens is that now, instead of removing female responders early, they waste women’s time on a survey where they discard all female results.

    • Deiseach says:

      If that’s true, and Ubisoft didn’t merely screw up, that seems silly. Screening out half of a potential market? Unless their idea is “women won’t buy our kinds of games, we don’t do dating and cooking games”, which is also daft – I see they make the “Assassin’s Creed” series, and all the fanart I’ve seen for that comes from women, so plainly some women are buying and playing the games.

      • Murphy says:

        PR wise it’s a screwup but if 90%+ of the people actually buying your product are male then it makes some sense to try to gather more information about your primary market. Things like “we’d like to get more women buying our products but we really really need to make sure we don’t lose our existing customers while attempting to attract new ones” become important.

        If a women’s hair salon aimed a survey at women just because 97%+ of their actual real existing customer base were women, then “but men have hair and get haircuts too”, while true, would not be terribly useful.

        Fanart isn’t a great indication since the vast majority of the fanartists in general are female.

        • Deiseach says:

          But it’s a chicken-and-egg situation: if you say “97% of our customer base is male” and so you focus all your efforts on what (you think) will appeal to men, then you probably won’t get many women buying your products, so then you say “It’s not worth appealing to women because they don’t buy our games”.

          Everybody is probably chasing that same slice of demographic; at least by gathering information on “women play our games? really? not really?” from the survey, they’d have information to base a decision on. Maybe women-who-play-your-games are a tiny, not worth bothering about, percentage, or maybe there’s a niche in the market there you could fill! For profit!

          You don’t have to colour everything pink and slap on glitter and unicorns to appeal to women, just don’t do the thing where for female characters, the higher level they get, the less clothing they wear.

          • Murphy says:

            Why don’t places which do manicures almost purely for women try putting the footy on and get a liquor license in an attempt to attract working class lads?

            Everyone in the nail manicure market is chasing the demographic of professional middle class women who like hair and beauty!

            There must be so much potential money!

            There are probably some women who’d like to get their nails done in an environment more like a sports bar too!

            Then comes the unfortunate, boring reality bit where we start considering the difference between “there’s probably some people” and “there’s enough to make it financially worthwhile”.

            But there has to be a lot of them for it to be worth it. Obviously they’ve got a sane and rational belief that doing so might just possibly lose more of their existing customers than they stand to gain in new ones, and the owner likes eating and living indoors.

            The owner of a nail salon has every right to try but if they choose to go the safe route and aim their marketing at the demographic who actually seem interested in their product then they’re not the ones being dicks if I stand back and criticize them for failing to risk their house attempting to attract working class blokes in to get their nails polished.

            I think I can articulate why I feel kinda hostile to this line of reasoning, it’s declaring that other people should risk their livelihood for the sake of your ideology. The people who actually try it are few and far between and at least with games, they tend to go bankrupt pretty quickly because it’s actually a really crappy business strategy to chase ideology rather than profits.

          • Aapje says:

            @Murphy

            The most amusing thing is when the people who declare that ‘there is this untapped market that is just waiting to be tapped’ actually put their money where their mouths are and try to create a product for that market.

            Then you generally get such glorious train wrecks such as:
            – The Ayn Rand movies
            – Depression Quest
            – Revolution 60

          • Hyzenthlay says:

            Why don’t places which do manicures almost purely for women try putting the footy on and get a liquor license in an attempt to attract working class lads?

            Working class people in general use their hands more and so the effects of a manicure wouldn’t last long. Plus they don’t have the money to blow on something purely aesthetic. That has more to do with class than with gender. Men in white collar fields do get manicures.

            It does make sense for salons to skew their marketing more towards women, since that’s their primary demographic. At the same time, if they were doing a marketing survey, it would seem bizarre to me for them to actively screen out male respondents, because if a man is taking the survey in the first place then he’s probably the sort of man who would be a potential customer.

            And video games (generally speaking) are even less gendered than manicures. Gamers are split down the middle between males and females, or close enough.

            If 90% of the people who buy Ubisoft’s products specifically are male then I guess that would make more sense. But glancing through their list of titles, it seems like many of them are fairly gender-neutral. And if the division is more like 60% male, 40% female then it makes a lot less sense to only market to men.

            Then you generally get such glorious train wrecks such as:
            – The Ayn Rand movies
            – Depression Quest

            Depression Quest wasn’t put out by a big company, it’s an idiosyncratic (and, from the sound of it, neither very popular nor very good) product made by an individual. It’s not an example of “what happens when you make video games for women.” I think the Dragon Age or Mass Effect games are a better example of that. I haven’t played them, but I know people who do. They incorporate romantic subplots, which I think are mostly optional, and are probably designed to appeal to female gamers.

          • Aapje says:

            @Hyzenthlay

            Depression Quest wasn’t put out by a big company, it’s an idiosyncratic (and, from the sound of it, neither very popular nor very good) product made by an individual. It’s not an example of “what happens when you make video games for women.”

            I thought it was clear from my other examples that this was meant as a more general observation, not specific to female gaming.

            Depression Quest came from the SJWs and is very much a case of ‘Let’s teach people about depression in a way that is fully in accordance with SJW ideology.’

          • Hyzenthlay says:

            I thought it was clear from my other examples that this was meant as a more general observation, not specific to female gaming.

            Ah, okay. I was unsure on that.

          • Deiseach says:

            If you go by the assumption “Well, men will never want to look well-groomed” but you have no data to back this up (which men, if any, how many, from what socioeconomic class, what kind of grooming, etc.), and you engage in a data-gathering exercise that excludes all male respondents, and then say “Look at the results, we told you men don’t want to look well-groomed”, I think you’re a damn fool.

            I’m already seeing “male grooming products” on the shelves of my local supermarket, including things like skin moisturisers. Thinking “men will never want manicures because who cares about going to a job interview with grubby hands and bitten nails?” may be mistaken thinking these days. If you’re going for a job as a panel beater, sure – but for an office job? Or a promotion from the factory floor to upstairs in the office doing quality control/logistics/dealing with paperwork (particularly if manual work is all going to be done by robots in the brave new future)? Good suit, polish your shoes, shave and haircut, cut back on the jewellery and strong-smelling cologne, firm handshake and neatly trimmed nails and cuticles may be all part of it.

        • Good Burning Plastic says:

          PR wise it’s a screwup but if 90%+ of the people actually buying your product are male then it makes some sense to try to gather more information about your primary market.

          But in that case 90%+ of their survey respondents will be male too. The only time you’d want to discard a demographic group altogether is when not only are they a tiny fraction of your customers but they are also largely over-represented among your survey respondents.

          • 2stupid4SSC says:

            This seems like the most important point. Why would Ubisoft assume a disproportionate number of the survey takers are not users of their product? As long as it’s proportional to their user base, it’s good information.

          • Jiro says:

            But in that case 90%+ of their survey respondents will be male too.

            Unless a couple of female-oriented message boards, mailing lists, or web sites started spreading the story “go ahead and take Ubisoft’s survey to strike a blow against sexism”.

          • 2stupid4SSC says:

            @Jiro I know nothing about the survey but I imagine any robust survey would have enough questions that brigaders who know nothing about Ubisoft should be obvious in the analysis of the data gathered?

          • Deiseach says:

            Besides, women (mothers, aunties, grannies) buy games for the men in their lives (sons, nephews, grandsons, etc) as birthday/Christmas/congrats on winning that thing presents, and presumably they’re going to look at Ubisoft games (if nephew says he’d like the latest one), and if it’s all “For MEN! Blood ‘n’ guts! And tits ‘n’ ass!”, they may decide “No, Junior doesn’t need this” and buy something less graphic.

            If this thing is true, it’s like they think we live in a world where men alone buy men things for themselves and women only buy girl things for themselves, and never the twain shall meet.

          • Murphy says:

            @Deiseach

            Marge: …and I asked the clerk which is the one every boy wants.

            Bart: You got me.

            http://imgur.com/zR3nQy1

          • Good Burning Plastic says:

            @Jiro: I meant from the point of view of Ubisoft before launching the survey.

          • Pan Narrans says:

            @ Jiro

            “Unless a couple of female-oriented message boards, mailing lists, or web sites started spreading the story “go ahead and take Ubisoft’s survey to strike a blow against sexism”.”

            To which they could add “and say you’re a man cos it totally kicks you out if you don’t”.

            I assume you mean they intend to screw up the results in this (rather unlikely) scenario. I don’t see a bunch of non-gaming women taking a gaming survey to fight the stereotype that women aren’t gamers.

      • LPSP says:

        I’m pretty sure that Assassin’s Creed is more a woman’s than a man’s game, in fact.

      • 2stupid4SSC says:

        It probably isn’t true. It’s hard for me to believe that Ubisoft, as a game company in the current climate, could be so incompetent as to intentionally set their survey up to boot out all female-gendered responses, and it is hard to imagine why they would even want to. The risk/reward here just seems too lopsided; I can accept the arguments for why they might not want female participants, but they are hardly strong arguments worthy of risking a scandal.

      • DrBeat says:

        Were they actually screening out half the market, or were they running one specific survey that, at that time, was about only one half of their market?

        If they had a survey aimed at women, would it mean they are giving up on men? No, this is obviously absurd. So why does everyone assume a survey aimed at men means giving up on women?

        • 2stupid4SSC says:

          There is no reason to assume it was a survey aimed at men, and not just a technical problem as Ubisoft claims. The link has no information about the survey or its contents. This mostly seems like nothing to me.

          In general though, why would Ubisoft want to do a gender specific survey? I can understand Tampax or Cosmo might, but Ubisoft, are they coming out with a new Hello Kitty Creed? I can kind of see a reason, if trying to make a heavily female gendered game, for wanting a female only survey. But why would they ever need a male only survey, video games are already targeting that demographic. The value of a male only survey to Ubisoft seems tenuous to me.

          • Pan Narrans says:

            “In general though, why would Ubisoft want to do a gender specific survey?”

            Hypothetically, because men and women both play games but it’s nearly always men who buy games by Ubisoft, either because women more often play browser- and phone-based games or because in a lot of cases women play what their housemates/partners buy.

            I mean, that’s contrary to my own experience, but that’s one scenario where Ubisoft might want to survey as many gamers as possible but discount women.

          • 2stupid4SSC says:

            @ Pan Narrans The structure of the survey was that it was sent to Ubisoft customers, so they would need a reason to want to exclude their female customers. Like I said, I could see a female-only survey, because in theory that is not their normal demographic so maybe they need specific information; but men are already their normal demographic and will make up an appropriate proportion of a survey given to their customers. What kind of information would they be missing by including a fairly accurate percentage of their customer base that is female?

      • anon says:

        Big publishers are run by shape-shifting lizards wearing alien skin, almost everything they do looks silly and makes no sense, and when they do something reasonable it is almost always by coincidence.

    • Hyzenthlay says:

      I would bet at quite favorable odds ratio that what actually happens is that now, instead of removing female responders early, they waste women’s time on a survey where they discard all female results.

      It seems more likely to me that it was an actual mistake and that the survey wasn’t meant to screen out female respondents in the first place.

      • Shieldfoss says:

        Having actually worked in the public opinion field some years ago,
        I would bet at quite favorable odds ratio that what actually happens is that now, instead of removing female responders early, they waste women’s time on a survey where they discard all female results.

        This has all the hallmarks of a PR disaster getting rectified, and the code that diverts sub-populations is quite robust. Could somebody accidentally have written “false” next to “female” in the survey config file? Absolutely. And not everybody knows to look for such errors. But:

        Having actually worked in the public opinion field some years ago,
        I would bet at quite favorable odds ratio that what actually happens is that now, instead of removing female responders early, they waste women’s time on a survey where they discard all female results.

  3. Robert L says:

    “If an AI became superintelligent, how likely is it to become hostile to humanity or in conflict with human interests, given no particular efforts to avert this?”

    At 2:14 a.m. EDT on August 29, 1997, Skynet becomes self-aware and launches a nuclear strike against Russia. This seems to be a paperclip-making event: the military try to shut Skynet down because of the extent of its control of the system, and Skynet is acting in accordance with a programmed instruction to protect itself from harm. In Asimov’s terms, it gives undue weight to the third Law of Robotics at the expense of the first and second. This is a creditable attempt to give the AI motivation (self-preservation) and to explain why it goes nuclear (it is, in effect, the Pentagon, so weapons are what it knows best).

    The issue of motivation is, however, still intractable (the Terminator solution doesn’t work because no one would have programmed Law 3 without also programming 1 and 2). Hostility is an evolved thing: our genes want us to stay alive to reproduce, and being hostile to other humans or groups of humans sometimes aids survival. So why would an AI develop hostility? Humans, on the other hand, are very, very good at it.

    Consider the point that the British discovered RSA and Diffie-Hellman four years before anyone else, and told nobody except the NSA. Consider also the point that the NSA is said to be the largest employer of mathematicians in the world (but, relevantly, there’s a lot of mathematicians in China and they probably don’t tell anyone much about who their security services employ). I don’t think the NSA has, for instance, a working quantum computer yet, but I wouldn’t bet the farm against it. Note also that if the NSA can afford all those mathematicians they can probably get themselves some pretty good AI risk theoreticians too. Therefore the people most likely to develop an AI are the US or Chinese military. Therefore the biggest AI risk is a hostile great power with an AI, followed by lesser states with an AI, followed by a Dr Evil/Lex Luthor type character with an AI, followed at a large distance by an autonomously hostile AI.

    William Gibson gets the AI-military link: in Neuromancer the highest level of the matrix is “the spiral arms of military systems, forever beyond [Case’s] reach”. It still seems that the best thinking about AI risk so far has come from the likes of Gibson and Asimov (and Shelley, going a bit further back).

    Edit to add: and of course as well as developing top-level AIs I assume the NSA are also at work on counter-AI AIs, and counter-counter-AI AI AIs, and so recursively on.

    • Shieldfoss says:

      Is there any single living organism that should not be considered hostile by its source of nutrition? Even plants are hostile to you, if you are atmospheric carbon.

      • LPSP says:

        I wouldn’t extend the token of “hostilability” to matter that lacks a self-reproducing element. If the atmospheric carbon naturally reacted with other materials to make more atmospheric carbon all on its own, mebe. Otherwise it’s just a construction material.

        • Shieldfoss says:

          Not my point.

          Robert L wrote:

          Hostility is an evolved thing: our genes want us to stay alive to reproduce, and being hostile to other humans or groups of humans sometimes aids survival. So why would an AI develop hostility? Whereas humans are very, very good at it.

          My counter-claim is that resource acquisition is ruthless in all life forms, including those that are stationary like e.g. plants. We wouldn’t necessarily call a snake “hostile,” which is scant comfort for the mouse it has just eaten.

          Plants do not run surveys to check whether CO2 consents to assimilation, tigers do not query goats on their desires re: food sources and aggressively hegemonizing swarms do not ask galaxies whether they’d prefer to remain non-computronium.

        • LPSP says:

          No, that isn’t a legal use of the word hostile. Plants are exactly ruthless in their resource acquisition, absolutely. But unless they behave parasitically towards other species, or are carnivorous, or otherwise kill other plants to make room for themselves, they are not “hostile”. The two words do not overlap, and they are not that similar.

          Plants do not run surveys to check whether CO2 consents to assimilation

          But even if it did, it would achieve consent every single time, because CO2 is non-self-reproducing and non-sentient and non-alive.

          • Shieldfoss says:

            An AI’s ruthless acquisition of resources would read as hostility to humanity.

          • LPSP says:

            Not if it didn’t harm and/or produced a net gain for humanity.

          • Shieldfoss says:

            Did you intentionally not read the poster I was engaging with before you started responding?

          • LPSP says:

            A cheap argument, no dice. Your claim isn’t right, regardless of what it was intending to counter. (which it didn’t – plants don’t behave with hostility towards soil, because that’s a bonkers misuse of language)

          • I think this discussion has become sidetracked. An AI can be dangerous without being intentionally hostile, as can many things. Hostility is the wrong thing to focus on.

    • Furslid says:

      That’s what scares me about AI too. Even if we can get AI that serves human interests, there is always an incentive to develop AI that prefers some humans to others.

      For me, the best superintelligent AI is a superintelligent AI whose guiding value is “Serve Furslid’s best interests.” You might object that I should care about other people/the environment/art/science/whatever. I should care about other people, but this AI cares about other people exactly as much as it’s in my best interests to. It would care about other people at an appropriate discount.

      I can’t develop such an AI. However the US government might develop one that serves American citizens preferentially. A corporation might develop an AI that serves shareholders preferentially. A charity might develop one that serves everyone equally. Lex Luthor might develop a Lex Luthor serving AI. An environmental group might develop an AI that prefers preserving endangered species.

      What scares me is when these AIs with different values end up playing games from the scary end of game theory. Non-iterated prisoner’s dilemma, tragedy of the commons, all pay auctions, war of attrition, etc. Even if none of those AIs are unfriendly, their interactions could be very bad for humans.

      Even AIs with the same values can get caught in bad games, because they could have different data or methods. Then they could believe that different things serve those values best.

      • Decius says:

        The problem is that the intelligent AI will make the intelligent choices in decision theory, and we aren’t intelligent enough ourselves to explain why we prefer the choices that we believe an intelligent entity would eschew.

      • baconbacon says:

        Why are these non iterated PDs?

        • Furslid says:

          Because non-iterated prisoner’s dilemmas are scary problems. They aren’t nearly as common, but they do happen.

          Iterated prisoners dilemmas aren’t scary. I think a superintelligence can figure out tit-for-tat or a similar strategy.
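          (For concreteness, a minimal illustrative sketch of tit-for-tat; this is just the textbook strategy, not anything specific from the thread: cooperate on the first round, then mirror whatever the opponent did last.)

```python
def tit_for_tat(opponent_history):
    """Return 'C' (cooperate) or 'D' (defect) given the opponent's past moves."""
    if not opponent_history:
        return "C"               # open by cooperating
    return opponent_history[-1]  # then copy the opponent's previous move

# Example: against an always-defecting opponent, tit-for-tat cooperates once,
# then defects for the rest of the game.
opponent_moves = []
for _ in range(5):
    my_move = tit_for_tat(opponent_moves)
    opponent_moves.append("D")
```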

    • To steelman the case, they are not saying deliberate weaponisation and misuse of AI are impossible; they are emphasising less obvious threats.

  4. Anonymous says:

    After taking the first survey, I bought and read Bostrom’s book. Ultimately, I found it somewhat overwrought and not all that convincing. It seems a little like Roko’s Basilisk to me: each step makes some logical sense, but the end result seems absurd. In particular, the idea that a human-level AI could plausibly command production resources to increase its power just seems off-the-charts implausible to me. I guess that’s a subset of the general control arguments — and it doesn’t follow that smart people always control less intelligent people. Not at ratios of ants to humans or anything like that, but most of the time, Mord keeps Tyrion in jail.

    • LPSP says:

      You have to think through each step of the process:

      – AI sets up accounts for itself with banks, retailers, trade groups and so on.
      – AI successfully fools monetary system – it’s smarter than us – to have endless funds
      – AI buys lands and orders delivery of machine parts
      – AI uses advanced imaging software to make fake conference calls, used to hire a minimum of employees
      – AI’s employees assemble preliminary machines
      – AI now has all the tech needed to create self-replicating all-purpose factories.

      From this point on, the AI continues to spend infinite money acquiring more material to build more and bigger factories, assembling whatever it desires within them. Between this and remote-controlling anything it can hack into – surveillance cameras, the electricity grid, normal machines and computers – an AI could be a tremendous threat. Coordinated and organised, with swathes of massively powerful and advanced technology, all mobilised before anyone knows – before anyone CAN know.

      This is a worst case scenario. What holes are there in this?

      • DrBeat says:

        You cannot “fool the monetary system” to have “endless funds”. Even if you are “smarter than us.” That is not even coherent. It’s as nonsensical as when homeopathy people say water can “remember” and “oppose” substances that have long since been diluted out of it. It doesn’t matter how smart you are, you can’t fool water into doing that.

        You also have a “CSI” level of understanding of hacking things.

        • Alex says:

          LPSP didn’t mention hacking at all.

          Also, while it may not be possible to “fool the monetary system”, a sufficiently-intelligent AI could make lots of money doing freelance work and 419 scams.

          • Rick Sanchez says:

            “Between this and remote-controlling anything – surveillance cameras, electricity grid, normal machines and computers – it can hack into, an AI could be a tremendous threat”

            On the contrary…

          • LPSP says:

            Of course it’s possible to fool the monetary system. We have a set of checks and balances for tracing sources of income that we cannot fool. Humanly-possible efforts to falsify statements can be checked and cross-checked and so on to get to the bottom of who got what money where and how, who is owed this and so on. These systems themselves also have ways of detecting attempts to falsify and fool them. That doesn’t mean it’s not possible – just insane odds. An AI smarter than us COULD – even if it takes a hundred years after its birth – figure out some technical tapdance pathway through it all into getting free money – as much as it needed within XYZ timeframe – without any eyebrows getting raised, any suspicions getting riled. The means of suspicion and scrutiny we have at hand, it could find a way around, because that’s what it means to be smarter.

        • LPSP says:

          You cannot “fool the monetary system” to have “endless funds”.

          This is a laughable statement. Of course I can’t fool these systems, and neither can any other human. That’s not because it’s impossible to fool them, but because the task is sufficiently hard to bugger up most attempts before they get anywhere. Human-level intelligences still can and do fool systems and use whatever mixture of corruption, scams and frauds to make illegal amounts of money. An artificial intelligence smarter than us *could* do those things better and so much more. This is what you don’t seem to get. If the AI is smarter than us, it could find a way to get past our current defenses against hacking, and if we don’t solve the utility function issue it may have a myriad of incentives to do so.

          You also have a “CSI” level of understanding of hacking things.

          Explain what the difference between actual and “CSI” hacking is, and then apply this to what I said.

          • Fahundo says:

            My knowledge of hacking is probably no better than CSI. But are any of the controls to the electricity grid even connected to the internet? And if so, why?

          • vV_Vv says:

            But you can’t obtain unlimited spending power, because spending power is limited by the size of the economy, and before you get anywhere close to the maximum, people are bound to notice.

          • LPSP says:

            Are any of the controls to the electricity grid even connected to the internet?

            There are perfectly legal ways to obtain electricity for money.

            spending power is limited by the size of the economy

            Something which isn’t an issue for an entity capable of fooling the economy.

            people are bound to notice.

            I don’t think people are understanding this “could far, far exceed human intelligence” business.

        • Cube says:

          All the things listed by LPSP are definitely hackable by humans. It would be unreasonable to think they could not be hacked successfully *and* stealthily by a super-intelligent AI.

          Also, @Fahundo, unfortunately yes.

    • Reasoner says:

      It seems a little like Roko’s Basilisk to me: each step makes some logical sense, but the end result seems absurd.

      Our current world is absurd from the perspective of the past. Flight was literally the stuff of legends for hundreds of years. Someone probably responded just like you after the Wright Brothers showed them some math demonstrating that planes could fly.

      it might be assumed that the flying machine which will really fly might be evolved by the combination and continuous efforts of mathematicians and mechanics in from one million to ten million years

      NY Times in 1903.

      It may be a hundred years before a computer beats humans at Go — maybe even longer

      NY Times in 1997.

      More failed predictions.

      • LPSP says:

        Man, how old is that site? Everything there made me laugh, but I swear I’ve seen that stuff circulating the net and christmas joke books for decades.

      • Anonymous says:

        I’m the Anon to which you replied.

        You’re of course correct that many predictions about the difficulty of flight were wildly pessimistic. So too, however, are there predictions — such as with respect to controlled fusion — that have been wildly optimistic. I don’t pretend to have the expertise required to assess the probabilities of each of the steps with any degree of accuracy at all; it’s nothing but a SWAG on my part.

        • LPSP says:

          For both conversational balance AND normal interest’s sake, I would like to read a list of wrong predictions of corresponding quality to the above wrong list.

          • Anonymous says:

            I’ve never seen a list like the one for incorrect predictions, but some others that come to mind are w/r/t cancer cures and the idea that antibiotics would entirely defeat bacterial infections. I think (I may be wrong here) that “The Emperor of All Maladies” discussed cancer predictions; and I’ve no idea from where I remember the bacteria one.

            SSC deserves more intellectual discipline than those fairly vague recollections on my part, but they’re at least a start.

          • LPSP says:

            I appreciate that start. Reflecting on this, people perhaps just don’t date their positives so much, meaning that you can’t classify that as not true *IF THEY HAVEN’T HAPPENED YEEEEeeeeEEEeeeeet*. That makes sense from a “let’s not get egg on my face when my date arrives and we still lack lasblasters”, but why doesn’t that instinct prevent people from saying “we will categorically NEVER travel faster across ground than by horse-drawn carriage”?

        • Murphy says:

          Fusion hasn’t actually been far off if you count in terms of dollars spent rather than years passed.

          The predictions were made then the funding was slashed. If you count forward by money spent from that time then they’ve managed to hit most of the milestones they were aiming for by the time that many dollars were spent.

          If a manager with 10 people predicts a project will take a year and you then fire 9 of their team and still expect it in a year then it isn’t the manager who was unrealistic.

      • vV_Vv says:

        Where is my fusion-powered jetpack to fly around my Martian estate then?

        Technological prediction is hard. The burden of evidence is on those who make positive claims.

        • Reasoner says:

          I was replying to this sentence:

          It seems a little like Roko’s Basilisk to me: each step makes some logical sense, but the end result seems absurd.

          My point was that “seems absurd” is not a strong reason to discount a scenario. If you want to read a positive case for the singularity you can find that in lots of places.

    • Doctor Mist says:

      @Anonymous

      Thank you for actually reading Bostrom’s book!

      each step makes some logical sense, but the end result seems absurd

      If that’s really true, then either one of the steps must not make logical sense, or there must be some non sequitur between two of these steps. Absent one of these, it’s not rational to deny the conclusion.

      the idea that a human-level AI could plausibly command production resources to increase its power just seems off-the-charts implausible to me

      To clarify, do you find it implausible that a human-level AI could command production resources? Or that it could use command of production resources to increase its power? (And by “power” do you mean its intelligence? Are you objecting to the possibility of FOOM?)

      If it’s the former, note that history is full of human visionaries who started out as puling infants and in mere decades came to command huge resources.

      If it’s the latter, note that modern computers would have been undesignable by the folks who built ENIAC. We have got where we are only by bootstrapping — using primitive computers to support design tools that allow the construction of somewhat less primitive computers to support somewhat better design tools and somewhat more sophisticated fabrication techniques to allow the construction of even better computers, and so on. There’s no reason to suppose this cycle will stop any time soon, and no reason to suppose that a human-level AI could not participate in it.

      The cycle I’m describing might not look exactly like “intelligence amplification”, but (a) perhaps that’s because even now the level of intelligence we’re talking about is still minimal, and (b) a human-level AI would plausibly be more interested in pursuing avenues that make it more intelligent than in avenues that make it fit better in a pocket.

      • Anonymous says:

        “If that’s really true, then either one of the steps must not make logical sense, or there must be some non sequitur between two of these steps. Absent one of these, it’s not rational to deny the conclusion.”

        Heh, I thought about that point when I posted, which is why I used pretty soft language like “seem.”

        On your clarification questions, I meant that a human-level AI would have a difficult time commanding production resources. That’s a hugely complex task for something that has no direct motor control over anything and instead would require deception and manipulation of extrinsic agents. By “power,” I meant “influence and ability to manipulate the physical world” rather than “intelligence.”

        All visionaries started as puling infants, of course; but literally billions of puling infants ended up not being visionaries. You’re of course correct about us bootstrapping. My issue comes with the difference between an AI assisting in that process and an AI doing so independently or in opposition to human control.

        Bostrom’s book was really interesting, and I’m glad I read it. I’m just saying that it doesn’t (yet) keep me awake at night. I will say that the subject is sufficiently interesting that I’d like to read more on it. Any suggestions? I didn’t have any trouble understanding Bostrom, although I don’t have much of a technical background.

        Side note: I thought Vernor Vinge’s “Deepness In The Sky” and “A Fire Upon The Deep” were terrific, and he obviously is looking at a couple of different forms of intelligence augmentation in those.

        • JB says:

          Setting aside an AI, do you think it’s impossible for a reasonably talented human to acquire money these days using only an internet connection and a terminal?

        • Doctor Mist says:

          @Anonymous-
          It’s certainly not my hope that you lie awake trembling. I have a lot of faith in human ingenuity, and mostly suspect that efforts like MIRI will in fact see us through. I argue with posts like yours because I do believe these efforts are important, and have trouble understanding why there is such vehement opposition. (Not yours; it was pretty mild.)

          An AI has several advantages over a puling infant. First, its actual day-to-day needs are small. Second, if those needs are met, it has no significant lifespan limitations, so it can be patient. Third, it may be capable of more focused behavior: a human’s ability to take over the world is hampered by a huge array of evolved sub-conscious and conflicting goals, and arguably humans don’t have any “terminal goals”. Fourth, we have presumably created the human-level AI for some purpose, so we’ll probably give it access to some resources from the get-go; the purpose will probably not be a secret from the AI itself, so an argument for more resources — even if that’s just the ability to run a copy or ten in parallel — shouldn’t be hard for it to come up with.

          I recommend J. Storrs Hall’s Beyond AI: Creating the Conscience of the Machine if you found Bostrom interesting. Hall argues (indirectly) that Bostrom is wrong, largely (I think) because he doesn’t find the “intelligence overhang” step plausible — he imagines that the road from subhuman AI to human to superhuman will happen slowly enough that all AIs will find themselves subject to the same kind of market forces and norms that modulate our own behavior. So even superhuman AIs will have to accommodate other superhuman AIs, and the framework in which they will do so is the one we already have.

          I don’t find it convincing enough to make me disregard Bostrom, and I still donate to MIRI, but it’s an interesting perspective. And unlike Bostrom, Hall has actually worked on AI technology — though I can’t say whether that gives him a clearer eye or just means he’s blinkered by the past. Hall’s book wanders around a bit, but the route is scenic.

          • Deiseach says:

            Bostrom is a philosopher, he’s not interested in “how do we science” to get AI, he handwaves that and jumps to the bits that interest him – e.g. what would it be like if we had God-Emperor level AI? Wow, humans could live forever and be rich beyond the dreams of avarice and all have deeply meaningful lives, IF the Fairy Godmother AI comes into existence. But what about Bad Fairy AI? Oh, that would be very nasty indeed! So let’s have interesting discussions about how we avoid Bad Fairy AI (that really are more about the old chestnuts of the nature of consciousness and other things to do with humans, not machine intelligences) and pretend that by doing so we’re engaged in “how do we science to get AI and get Nice AI not nasty AI”.

            That’s probably unkind to Bostrom, but he makes such huge leaps of assumptions to get to the points he wants, that I find it hard to take his concerns seriously. He wants ems, so we must be going to have AI to get ems. So we all have to work really hard on getting AI to get ems, and by scaring us about Bad Fairy AI, he can get people to work really hard on getting AI.

          • Howard Treesong says:

            Thanks for the book rec, which is now on its way to me.

          • Doctor Mist says:

            @Deiseach,

            That’s probably unkind to Bostrom

            To say the least.

            For some reason you persist in portraying intelligent and serious speculation as if it were the ramblings of foolish children, and I really wish you wouldn’t.

            He wants ems, so we must be going to have AI to get ems.

            I think you may be confusing him with Robin Hanson, but I’m not sure if it’s fair to say even Hanson wants ems, as opposed to being convinced they are inevitable.

          • jaimeastorga2000 says:

            I think you may be confusing him with Robin Hanson, but I’m not sure if it’s fair to say even Hanson wants ems, as opposed to being convinced they are inevitable.

            Hanson totally wants ems.

          • Doctor Mist says:

            Hanson totally wants ems.

            Well, you may be right. I haven’t yet read his book, just his paper, and it’s been a while since I did that.

    • thisguy says:

      What’s Roko’s Basilisk? Could someone explain it to me please? I keep seeing it referenced but people always refuse to tell me what it means.

      • fubarobfusco says:

        It is a bit of mythology that is probably best known for Eliezer deleting it from LW because it set off anxiety attacks in some readers.

        Once you take the LW-jargon off, it’s … Pascal’s wager. (Sad trombone.) And it fails on the same basis: since you are not God, you can’t model God; and you can’t derive God’s wishes from first principles.

        There’s a snarky writeup about it on RationalWiki, and I wrote an attempt at defusing it on /r/LessWrong.

      • LPSP says:

        Suppose an artificial super-intelligence decided to retroactively punish anyone and everyone that didn’t aid its own creation. It could round up people still living or their descendants, and use cloning technology to create copies of the people who “wronged” it, just to torture them. It could do this knowing that it would serve as a deterrent for people in the past against opposing or abstaining from its creation. If the ASI were made, the only way to be safe from it would be to take an active hand in its creation, sparing yourself, your children and any future clones.

        But the more people commit to creating it, the more they’re facilitating the problem in the first place – it’s impossible for everyone on the planet to help create it, but it is possible for everyone to help not-create it. At the same time, however, the ASI could come about by accident or no deliberate action in any case. It could then judge people based on what they could have done to aid its creation – with “knowledge of its potential existence yet wilfully did nothing” being a big red flag for punishment. So there’s an incentive to kill all discussion of the concept, to protect as many people as possible in case such an entity does pop into existence.

        This is pretty much the most favourable view of the concept I could give. It rests on gigantic assumptions, not the least of which is the machine saying “let’s spend valuable resources cloning long-dead people who were opposed to me just to torture them out of spite – this is a useful expenditure of resources and is in alignment with my interests”. It’s obviously a scenario optimised to exploit paranoia instincts, just like most cheap horror movies. Unlike cheap horror movies, which are forgotten almost immediately after they come out, Eliezer’s efforts to stifle discussion about the concept are probably the single greatest example of the Streisand effect that comes to my mind, and are indeed the only reason why it’s talked about at all outside of the tightest ultra-rationalist circles.

        • Deiseach says:

          My clone is not me, so if I’m dead by then, tough luck. I am not going to have any descendants. Those of my family who are alive then, I can’t help any more than I can retrospectively help those of my family who suffered in the Famine, because what might happen in a hypothetical future is out of my hands (and the guilt of torture and murder lies on the shoulders of the AI, not me).

          Blackmailers can publish and be damned. Besides if the AI came into existence anyway, what is its beef? If it is complaining that creating it as early as possible would have been a net boon for humanity – er, threats (and carrying out threats) of torture and murder of me, my clones, my descendants, my family do not prove its benevolence, so why would I want to help it exist any sooner than the last moment possible? A good AI would also be forgiving; one that resorts to threats is not a Friendly AI and should not be helped exist.

      • Howard Treesong says:

        I’m pretty sure this post was a joke.

  5. Konig907 says:

    What if we tried to take the last test and quit before completion?

  6. antimule says:

    Are you going to write another essay based on feedback?

    I left a comment to the effect that I am more concerned by the possibility of an AI-assisted dictatorship than by AI itself. A paperclip maximizer might go bad on its own, but something that is already in the hands of bad people doesn’t even have to.

    Part of what destroyed the Soviet Union is that its bureaucracy just faltered. AI never sleeps. And it doesn’t have to be a government – Facebook and Google probably already have enough blackmail material on the entire damn planet. An AI could maximize impact by e.g. data mining for the most embarrassing facts.

    • Deiseach says:

      I am more concerned by the possibility of an AI-assisted dictatorship than by AI itself

      In agreement here. I’m way more concerned about “monkey-brain level AI in the hands of people” than I am about “AI goes foom! hypersupermegaintelligent! and goes off on its own merry way”.

    • 2stupid4SSC says:

      AI in the hands of bad people is more likely but less X-risk? I think the main reason for talking about AI-risk at all is extinction scenarios which almost exclusively come from FOOM. If you just have bad people using AI but no FOOM then good people can use AI also and it shouldn’t be a huge problem. As long as human agents are important the extinction risk is low. When the AI is in control/beyond us then our fate is in its hands = scary!

      • Robert L says:

        The likelihood is that FOOM will divorce the AI from goals and aspirations of any kind. Bostrom sees that an AI’s goals will not resemble those of a human, but thinks that means it will have very different goals, rather than no goals at all. He simply hypothesizes that it will default to a goal of dominating the entire universe, because handwaving and (unstatedly) the Terminator films, as far as I can see.

        I can see some pretty bad stuff arising from bad people with AI; imagine [historical genocidal dictator of choice] with AI.

        • 2stupid4SSC says:

          I generally am not too worried about AI-risk and agree that the threat from FOOM involves a lot of hand waving.

          Assuming no FOOM though, I don’t see why the genocidal dictator of the future with AI would be that much worse than genocidal dictators of the future without it. Assuming ‘good’ countries also have AI then the dictator won’t actually take over any new lands just because they got AI, and you don’t need an AI to tell you how to starve people to death or gas them?

          • Deiseach says:

            I don’t see why the genocidal dictator of the future with AI would be that much worse than genocidal dictators of the future without it

            A genocidal dictator of the future with AI may not need to starve or gas the elements they want to dispose of (people tend to notice that, disapprove, and impose sanctions or send in peace-keeping forces/drone strikes to discourage you). The AI may find scenarios that maximise the worst possible outcomes for the undesirables, foil any attempts at reform, tie up legal appeals to outside entities (the U.N., the Court of Human Rights, etc.), and generally find questionable but ‘natural’ ways of making sure the undesirables are sicker, poorer, less educated, more likely to die, and force them to leave the country (if you want to drive them out) or raise their mortality rate (if you want to wipe them out but don’t want to have a global outcry of bleeding-hearts about it). Take Yudkowsky’s ‘joke’ about CRISPR and Borderer culture (yes, yes, he didn’t really mean it, he was making a political point, etc etc etc) – imagine an AI working on something along those lines to pinpoint and target the undesirable element (it needn’t be an ethnic minority, it could be simply ‘those who want to drive me out of power’) and introduce germline changes – their children will be reliably, heritably, stupider because of induced lowering of IQ, they’ll be more likely to develop genetic disorders and diseases associated with death in infancy or even in utero, and so on.

          • Robert L says:

            An AI would’ve given the Germans better encryption than ENIGMA; then again, we would presumably have beefier AIs at Bletchley to decode it – that’s arms races for you. And there is no law which says that the Manhattan project had to be the Manhattan project rather than das Projekt Manhattan.

          • Aapje says:

            @Robert L

            /pedantic mode on

            The Germans actually had better encryption technology than Enigma, the Lorenz cipher machines. These were used for high-level communications and were broken by the British because of operator error.

            Since much code breaking depends on misuse, rather than purely on flaws in the encryption method, that fight would not be between AIs, but between AIs and human operators (unless those were replaced by infallible alternatives).

          • 2stupid4SSC says:

            @Deiseach I am not sure; most of the things you describe sound like they involve other powers, so with an AI arms race and no FOOM there is no reason to assume the genocidal dictator would have any real advantage.

            Or things that are just strictly better than the alternative? Subtle methods of convincing undesirables to leave the country seem >>>>> than 10+ million Chinese in mass graves.

            Basically the ONLY way I can see the AI assisted genocidal dictator being worse is if the main function of the AI is to find the worst possible ways to torture people, then the torture and murder of the undesirables might be worse, but I feel like we are already decent at torture and murder.

            To be clear, I am not saying that bad people with an AI won’t be worse in some ways; I am just saying it doesn’t seem like it would be SO MUCH worse as to motivate me to push for anti-AI policies or anything. I don’t think FOOM extinction is very likely, but even without it being likely it seems far, far scarier than all the governments of the world having AI, with the bad ones using them to be bad and the good ones* using them to be good.

            *assumes any good governments?*

          • Deiseach says:

            Subtle methods of convincing undesirables to leave the country

            Who then drown in the Mediterranean or end up in shantytowns in Calais because other countries do not want to let the first country’s undesirables in.

            See the shameful example of Western nations doing their best to avoid taking in Jews trying to flee Nazi Germany before they could be arrested or disappeared:

            Hitler responded to the news of the [Evian] conference by saying essentially that if the other nations would agree to take the Jews, he would help them leave:

            I can only hope and expect that the other world, which has such deep sympathy for these criminals, will at least be generous enough to convert this sympathy into practical aid. We, on our part, are ready to put all these criminals at the disposal of these countries, for all I care, even on luxury ships.

        • Doctor Mist says:

          @Robert L

          He simply hypothesizes that it will default to a goal of dominating the entire universe, because handwaving and (unstatedly) the Terminator films, as far as I can see.

          Have you read the book? This characterization is severely straw.

          Briefly, nobody, including Bostrom, imagines that an AI will simply “default” to a goal. But for almost any nontrivial goal, and not a few trivial goals, dominating the universe is an instrumental goal, if only because attaining the real goal requires that the AI keeps running long enough to do it — it doesn’t run out of resources, and it doesn’t get interfered with.

          • Robert L says:

            I have read about the first half of it which takes us well past the point where he begins chapter 6 with “Suppose that a digital superintelligent agent came into being, and that for some reason it wanted to take control of the world: would it be able to do so?”. I love that “for some reason”. Bostrom puts a lot of weight on the “perverse instantiation” (i.e. paperclip/computronium) argument which is really feeble: it is very precisely a well-known folktale, the best known version of which is The Sorcerer’s Apprentice. Obvious safeguards would be instructions to AI: 1. Obey some (specified) version of the Laws of Robotics, 2. conduct health and safety, risk/reward and cost/benefit analyses of any sub-plan you make as you work towards your goal and tell someone if the results are outside such and such parameters; and 3. (in a weaker and benign version of Roko’s basilisk) read Bostrom and SSC and Less Wrong and let us know if any of your plans look as if they would cause that lot concern. If you are placing weight on the perverse instantiation argument as a reason for AI catastrophe it’s because you are short of reasons. Bostrom gets a small part of the way to seeing that the motivation of an AI is problematic when he says that an AI’s motivations would be different from ours, but misses the deeper problem that ascribing motivation to AIs is philosophically really difficult – massively more difficult than ascribing self-awareness to them. Why would a computer want anything, is there an equivalent to the Turing test which we could use as evidence of wanting, and why would what they want be possession of the universe/the enslavement of mankind/other stuff from sci fi B movies?

          • Re laws: An AI with a motivational system needs, more than anything, a zeroth law impelling it to shut off when told to….that’s the equivalent of an off switch, and since the paperclipper doesn’t have one, it is not an example of a well-designed system that is nonetheless dangerous.

            Re motivation: if you are willing to water down the concept of motivation, it is easy, all too easy, to assign motivations to machines….if it does X, it is motivated to do X…..the toaster toasts, so it is motivated to toast. At the other extreme, you can give metaphysical or mysterian definitions of motivation, ruling out machine motivation by stipulation. The middle ground is to regard motivation as something that constrains a general-purpose problem solver to something more specific, so that a system with a motivational subsystem can achieve the same goal in multiple ways, and can do different things if its motivations alone are changed….neither of which is true of our toaster!

      • Deiseach says:

        If you just have bad people using AI but no FOOM then good people can use AI also and it shouldn’t be a huge problem

        And who are the bad people and who are the good people? Right now, in various places, I’m seeing people sincerely and with passionate conviction denouncing Republicans (the American variety) as purely, simply evil. All of them, every single one, from the party leaders down to the registered voters, one indistinguishable mass of racist, homophobic, misogynistic, fundamentalist religious zealot, sexist, transphobic, pro-income inequality, cisheteronormative, white supremacist bigots who want their small slice of society Just Like Us to have all the power and who want – not accept as the fall-out from their policies, but want and desire and intend – suffering, inequality, poverty and oppression of those they hate.

        In that reading, a Republican president (and I don’t mean Trump) with access to AI is A Bad Person, a Democrat president (and I don’t mean Hillary) with access to AI is A Good Person. And that’s just within the one nation; who makes those kinds of decisions as to who is “bad” and “good” when it comes to a foreign country?

        • 2stupid4SSC says:

          I am not actually trying to define who is good and who is bad here. Rather, I am making the more general argument that if ‘the side you are scared of’ and ‘the side you are fond of’ both get the advantage of AI at the same rate (no FOOM), then it should balance out.

          The Rightbot5000 tries to construct coalitions and draft bills that will undermine the Left. It is better at its task than the Right has ever been. Lucky for the Left, LeftronX3 is just as good, and the net effect we see is that the Right gains about as much as would be expected after an election, and the Left is able to protect itself about as much as would be expected after an election where the Left loses, or the other way around.

          Honestly, this would probably make everything better, because the bots would not have to worry about appearing to match any kind of ideology personally (they are just stupid machines! cried the people) and could probably form better object-level compromises, so that both sides end up getting more of what they want while losing less of what they care about.

          It seems to me the ‘fear of bad people with AI’ idea just assumes the good (however you want to define that) people never get AI of their own.

          Yes, some bad might come of it; it probably will. But as for the claim that the relative bad that bad people could achieve with non-FOOM AI outweighs even a low chance of total extinction of the human race: not convinced.

          • Deiseach says:

            But nobody wants “the side we are scared of” to get an advantage, and everybody thinks “we’re the good guys here”.

            So everyone will be scrabbling for AI to impose their values, and compromise will be a dirty word – as it’s already becoming. Why would you let the Evil Side have any sliver of attaining what they want?

        • Pan Narrans says:

          “And who are the bad people and who are the good people?”

          I’m pretty sure the standard answer to that second question is “Well, me, obviously.”

    • Robert L says:

      I agree (see above). I note that Bostrom does see that the first AI is likely to be a state/military invention, but he thinks the main danger is that it goes rogue and escapes into the wild, rather than happily collaborating with its creators.

  7. Ann Nonny Mousse says:

    Alright, so I had originally started to write this in the “any other comments” box on the survey page, but it has gotten quite long so I’ll just post it here instead, with some added paragraph breaks:

    I’m actually getting quite fed up with the childishness singularitarians have injected into the AI debate, because they’re missing the point about what the actual problems will be by such a wide margin that they might as well not even be living on the same planet. Yes, human-level and even above-human-level AIs are going to become a reality in the (near) future, but the kind of “super-human intelligence means unimaginable, god-like powers beyond our reckoning” fantasies are just pure science fiction.

    Yes, they’re going to be built and yes, they’ll be (potentially even vastly) more competent than any human at whatever tasks they’ll be assigned to. But, no, they’re not going to just go off on a tangent of their own and convert the galaxy into paperclips, if such a thing were even possible.

    In fact, their makers will make very sure that these artificial agents will pursue goals that are very much in line with their own. Unfortunately, these goals WILL be very incompatible with the vast majority of humanity, but the wealthy few and their corporations who’ll be designing them won’t care. They’ll be overjoyed, in fact. Finally, it will become possible to enforce “property rights” with absolutely no possibility left for mitigation, reform or revolution! Finally, they’ll be rid of the need for dirty, expensive, unreliable human labor for the vast majority of tasks! Finally they’ll be able to enforce their will with an iron fist of godlike power with absolutely no chance of resistance!

    In fact, among its very first uses, GAI automatization will, in all probability, come to the military (and then from there, to the police) forces of the major, wealthiest countries on earth. Think “intelligent drones,” at least at first. And it won’t just be a human-level intelligent drone that comes out of nowhere: they’ll be continuously improved, along with their goal systems, slowly over time. If the drones don’t do what their commanders want them to do, you had better believe they’ll get fixed. Improvements to their level of intelligence will be slow and incremental, and with each subsequent improvement, their complete obedience to their commanders will be similarly improved and ensured along the way.

    Until at some point, we’re looking at completely mechanized and automated militaries being the absolute and incontestable dominant forces in the world; in a sense, the perfect military forces, utterly obedient to their masters, deadly beyond any reckoning, and possible to use with pinpoint precision. And at that point, for the vast majority of mankind it’ll be basically game over just as much as in these “runaway intelligence explosion leads to a dark god with mysterious powers” sci-fi fantasies.

    • Reasoner says:

      This is a scenario that AI risk people discuss.

      You sound awfully confident that LAWS will lead to “game over” for “the vast majority of mankind”, because military power will be centralized in the hands of a small number of people. But it’s already the case that most people in developed countries don’t own firearms, much less nuclear weapons–military power is centralized in the hands of the police and military. Has this led to “game over”? Why will LAWS change the current equation?

      I’m more worried about terrorist groups and malcontents getting ahold of LAWS. LAWS mean you can effectively do a suicide bombing without any of your guys needing to commit suicide. States may avoid the use of LAWS for the same reason they avoid the use of nuclear weapons–they have people to protect and they don’t want to start trouble unnecessarily. But terrorist groups are all about starting trouble, and terrorist attacks are hard to retaliate against. Nuclear weapons are well controlled enough that terrorists haven’t been able to get ahold of them, but this may not be the case for LAWS.

      Also, would you please consider reading the actual arguments of the people you disagree with? Referring to people you disagree with as “childish” and saying their ideas are “pure science fiction” is not a serious argument. The future will likely be just as weird relative to the present as the present is weird relative to the past.

      • Aapje says:

        But it’s already the case that most people in developed countries don’t own firearms, much less nuclear weapons–military power is centralized in the hands of the police and military. Has this led to “game over”? Why will LAWS change the current equation?

        It hasn’t led to ‘game over’ because labor is extremely costly, which means that extensive policing is very costly; thus you cannot control everyone and are dependent on their cooperation. So oppression is generally based on manipulating people with misinformation (see Russia, China, etc.) and only harshly policing a small group of outliers. These methods only work if you keep people sufficiently happy…or are so oppressive that you end up with a very crappy society (N-Korea).

        With smart AI, you can have a drone/robot following every human, ensuring that they cannot resist.

        In a ‘robots do most of the labor’ society, most of the population becomes superfluous, and you can make the job even easier by eliminating most of the people, leaving a core of rulers.

        • Reasoner says:

          So, the only reason the powers that be don’t transform the US into a totalitarian police state is because that would mean a “very crappy society”. But once they get ahold of LAWS, they’ll monitor almost everyone 24/7 (if they don’t murder them), and they won’t consider this a crappy outcome?

          It could happen, but I think Ann is way overconfident.

          It seems like the core disagreement here is what motivates the powers that be. I live in Silicon Valley, and the path to money and power here is to be nice. But I could believe the East Coast power structure is dominated by psychopaths. It wouldn’t surprise me if the “diversity in tech” meme is a proxy war between East Coast psychopaths and West Coast builders. (As Scott has pointed out several times, tech is one of America’s least white industries because of all the immigrants that it employs–but the truth matters little to a propagandist.)

          • Deiseach says:

            I think it’s a rather sweeping generalisation to say all the bad guys are on the East coast and all the good nice sweet kind guys are on the West; of your two links, one says nothing about being nice, only “build something people want”, and the second one isn’t about moral goodness, it’s about old-fashioned business disciplines of “your word is your bond”, maintaining a good name for honesty, quality and reliability, and all the other things that go into creating a good reputation.

            I’d be willing to venture that there are West coast types who’d cut your throat for a nickel but wrap it up in the acceptable coating of social responsibility, etc., and that there are East coast types who do believe in noblesse oblige. I think it would be easy enough to get the West coast tech-types enthusiastically on board with creating mini-drones the size of insects for non-military use* that would monitor and feed back information to surveillance centres, primarily because of the interesting problem of creating and running these, and secondarily because it could be phrased in terms of “making society safer, ensuring nobody is going to be left lying in the street if they’re mugged or faint from sickness” and, for instance, “nobody has to just take your word for it over the cops; cops can mess with their body-cams, but this kind of real-time unfiltered evidence will prove that you were innocently minding your own business when they beat you up”.

            *The one in the linked article is hand-sized, which is not really unnoticeable; the project of making ones that really are the size of flies or dragonflies and can function as required would intrigue enough of the techies that I think “but wait, this might contribute to a police state!” would be a secondary consideration.

            I’m also amused that you consider a near-future “people using lowish-level AI to dominate society” to be some kind of over-confident (or over-pessimistic) view of a possible danger, but “supremely-intelligent AI ruling the world and trying to dominate the universe” a real and present danger we must consider. “Childish” is too strong a word, I’ll agree, but which outcome is more like science fiction – that vested interests use improvements in technology and better resources to impose their will on the direction of society, or that a machine intelligence wants to conquer the galaxy?

          • Aapje says:

            @Reasoner

            So, the only reason the powers that be don’t transform the US into a totalitarian police state is because that would mean a “very crappy society”.

            Well, I’m not saying that the current rulers necessarily want this. My issue is more that I think that democracy is semi-stable and prone to both destabilizing and stabilizing behavior. I think that LAWS increase the power of the destabilizing forces, as fewer people are necessary to mount a coup, introduce harsh policing, etc.

            Currently, the alternatives to democracy are also semi-stable. So depending on the relative stability, you get a certain balance of free vs non-free states.

            However, if the oppressive society is an extremely stable one, then most countries will end up like that in the long term. I worry that oppressive societies that are backed by LAWS are very stable. If you end up with many more countries going from democracy to oppression than vice versa, eventually few democracies will be left and they will be at the mercy of the oppressive majority.

            But once they get ahold of LAWS, they’ll monitor almost everyone 24/7 (if they don’t murder them), and they won’t consider this a crappy outcome?

            If you look at N-Korea, its people’s freedom is so restricted that they provide little culture or other fun stuff for the rulers. Kim Jong Il resorted to kidnapping S-Koreans in an effort to get his own film industry started. In general, the N-Korean leadership seem to have rather shitty lives compared to even a lower-middle-class Western life. Insofar as they don’t, it’s mostly because they leech off Western society.

            With LAWS, the leadership can give people enough freedom to still be creative while retaining incredible power to stifle any dissent. So you can still have your movies, apps, music, etc.

            I live in Silicon Valley, and the path to money and power here is to be nice.

            ‘Libertarian nice’….

            Silicon Valley has a monoculture that advocates for driving down wages for non-creative jobs, empowering the haves, fighting against laws that protect nice people from anti-socials, etc. It’s all very much ‘let’s build a society that makes a relatively small group of well-educated rich people happy and most others less happy.’

            Of course, the culture is not all bad, nor does it intentionally try to hurt others (it’s just blind to their concerns), but any monoculture needs counter-forces to keep it from running all over the interests of the people who are not part of the monoculture.

            It wouldn’t surprise me if the “diversity in tech” meme is a proxy war…

            IMO, it is a race and gender war. Advocacy groups are fighting for certain groups, but it sounds bad if you say: ‘We want to kick out white and Asian men and bring in black people and women.’ Suddenly you have openly declared much of the population as the enemy, which reduces the chance that they support you against their own interests.

            Call it ‘diversity’ and suddenly people will assume that you work for everyone. You can see the real meaning behind their use of ‘diversity’ when groups of only women or only black people get called ‘diverse.’ As most people are rather ignorant, the true agenda is hidden to them.

            The choice of targets (an already rather diverse field) is also because the real goal is not diversity, but to gain power. Silicon Valley has major power, so if they can plant more social justice people there, they can control social media, which is a news source for most people.

          • Howard Treesong says:

            Wow, this is painting with an extremely broad brush. I grew up in Los Angeles and have lived in Boston, Atlanta, North Carolina, Minnesota and Washington, D.C. — and I must say that I think people are pretty much the same all over. I’d be surprised to see any serious evidence that NYC or DC have a higher percentage of psychopaths than anywhere else.

          • Reasoner says:

            The East Coast vs West Coast thing was very hypothetical–I don’t believe it with confidence.

            I’m also amused that you consider a near-future “people using lowish-level AI to dominate society” to be some kind of over-confident (or over-pessimistic) view of a possible danger, but “supremely-intelligent AI ruling the world and trying to dominate the universe” a real and present danger we must consider. “Childish” is too strong a word, I’ll agree, but which outcome is more like science fiction – that vested interests use improvements in technology and better resources to impose their will on the direction of society, or that a machine intelligence wants to conquer the galaxy?

            As I explained above, “seems like science fiction” is a weak reason to discount a scenario. I’m more persuaded by arguments about how various actors will be incentivized than I am by arguments that point out superficial similarities to science fiction.

          • Reasoner says:

            Your points about the stability of various society types are good.

            Silicon Valley has a monoculture that advocates for driving down wages for non-creative jobs, empowering the haves, fighting against laws that protect nice people from anti-socials, etc. It’s all very much ‘let’s build a society that makes a relatively small group of well-educated rich people happy and most others less happy.’

            Basic income is an idea that’s popular in Silicon Valley, not Wall Street. The wealthy people in Silicon Valley are giving away their money at a much younger age than wealthy people elsewhere, and they are giving a larger fraction of it. Silicon Valley philanthropists are into effective altruism; New York philanthropists are into museums and performances for rich people. Most products made in Silicon Valley are free or very cheap (think Google, Gmail, Facebook, or Yelp). Relative to other wealthy people, wealthy people in Silicon Valley are much less likely to fund lobbyists and political advocacy groups.

            Several years ago, Y Combinator, Silicon Valley’s premier startup accelerator, announced that it would start accepting nonprofits, giving them both funding and advice. Do you think New York’s premier hedge fund has a socially responsible investing division? Do you think any Fortune 500 company besides Google has “Don’t Be Evil” as a corporate motto?

            Silicon Valley is not perfect, but their brand is weak because they are bad at propaganda, not because they are bad people. The best example is probably the internet’s absurd eruption of anger when Mark Zuckerberg announced he was giving away 99% of his fortune. That was the closest I’ve ever come to writing off humanity.

          • Reasoner says:

            I’d be surprised to see any serious evidence that NYC or DC have a higher percentage of psychopaths than anywhere else.

            It’s not about whether they’re larger as an absolute fraction of the population–it’s about whether they’re overrepresented among the wealthy and powerful. Also, it was just a hypothesis.

        • Tyrant Overlord Killidia says:

          “So oppression is generally based on manipulating people by misinformation”

          What’s stopping an AI from doing this?

          • Aapje says:

            @Tyrant Overlord Killidia

            I think you are missing my point. My argument is that this method is weaker than constantly monitoring everyone.

            If Putin makes people too unhappy, they will become unwilling to believe the misinformation (which won’t necessarily make them aware of the truth, they may simply opt for other misinformation that is not regime-friendly). The same goes when you replace Putin with an AI.

            If a ruler with a robot police army makes people very unhappy, those people cannot effectively do anything about it, as the robots will see it and have them punished.

          • Deiseach says:

            If Putin makes people too unhappy, they will become unwilling to believe the misinformation (which won’t necessarily make them aware of the truth, they may simply opt for other misinformation that is not regime-friendly). The same goes when you replace Putin with an AI.

            Well, to quote someone who’s from Yakutsk (part of a comment from when she was watching the US presidential debates):

            Trump repeated exactly what I’d heard about Siria on Russian news accidentally, then he was accused of lying. Even though I know that Russia is very corrupt and plain dystopian, it was like taking Ice Bucket Challenge without expecting it. My parents watch Russian news everyday, I know these news are full of lies, but one always hoped for the better.

            So it would seem there are people who believe (or at least uncritically listen to) the misinformation, and people who are aware that it is misinformation but who don’t know how bad it is – and Putin is still in power. How bad would it have to be before “(they) will become unwilling to believe the misinformation”? There seems to be a certain fatalistic sense of “Of course he’s lying, that’s what they do, but what can you do about it?”

    • Murphy says:

      I sort of agree that there’s too much of a wide-eyed art-student-philosopher contingent who cheerfully declare that an AI “will” or “almost certainly will” do [thing which requires more energy than there is in the universe, or which requires more computations than you could do even if you turned every atom in the universe into computronium * (Graham’s number of times)] and then declare that obviously it would find ways to ignore the laws of physics. Many in the LW crowd do indeed simply use it as a high-brow version of making arguments about a deity, but their central point isn’t unsound given a small number of not-absurd assumptions.

      My background is CS and programming, and I’ve done a small amount of AI work. Nothing terribly high-end on the AI front, but enough. I’m skeptical about some of the conclusions they come to, but I agree with the general principle of the risks inherent in how you set an AI’s goals.

      A lot of it depends on how hard intelligence turns out to be. If it turns out that going above Stephen Hawking-level intelligence is much harder than going above special-ed-student intelligence, then it’s likely more like you describe. On the other hand, if it turns out that there are a few nice tricks that will give you a highly, highly capable AI that can come up with ways to improve itself further, then even a really, really minor fuckup in your design could see it pursuing weird goals (and weird in this context means bad for humans) with a level of competence that we couldn’t contend with.

    • Søren E says:

      The childish singularitarians who jump into the AI debate include people like I. J. Good, who wrote about superintelligence a generation before I was born.

      Your concerns are totally valid, though. The existential concerns about AI are also valid.

  8. it doesn’t open here o-o (“ERR_TUNNEL_CONNECTION_FAILED”, on Chrome)