SSC Journal Club: AI Timelines

I.

A few years ago, Muller and Bostrom et al surveyed AI researchers to assess their opinion on AI progress and superintelligence. Since then, deep learning took off, AlphaGo beat human Go champions, and the field has generally progressed. I’ve been waiting for a new survey for a while, and now we have one.

Grace et al (New Scientist article, paper, see also the post on the author’s blog AI Impacts) surveyed 1634 experts at major AI conferences and received 352 responses. Unlike Bostrom’s survey, this didn’t oversample experts at weird futurist conferences and seems to be a pretty good cross-section of mainstream opinion in the field. What did they think?

Well, a lot of different things.

The headline result: the researchers asked experts for their probabilities that we would get AI that was “able to accomplish every task better and more cheaply than human workers”. The experts thought on average there was a 50% chance of this happening by 2062 – and a 10% chance of it happening by 2026!

But on its own this is a bit misleading. They also asked by what year “for any occupation, machines could be built to carry out the task better and more cheaply than human workers”. The experts thought on average that there was a 50% chance of this happening by 2139, and a 20% chance of it happening by 2037.

As the authors point out, these two questions are basically the same – they were put in just to test if there was any framing effect. The framing effect was apparently strong enough to shift the median date of strong human-level AI from 2062 to 2139. This makes it hard to argue AI experts actually have a strong opinion on this.

Also, these averages are deceptive. Several experts thought there was basically a 100% chance of strong AI by 2035; others thought there was only a 20% chance or less by 2100. This is less “AI experts have spoken and it will happen in 2062” and more “AI experts have spoken, and everything they say contradicts each other and quite often themselves”.

This does convey more than zero information. It conveys the information that AI researchers are really unsure. I can’t tell you how many people I’ve heard say “there’s no serious AI researcher who thinks there’s any chance of human-level intelligence before 2050”. Well actually, there are a few dozen conference-paper-presenting experts who think there’s a one hundred percent chance of human-level AI before that year. I don’t know what drugs they’re on, but they exist. The moral of the story is: be less certain about this kind of thing.

II.

The next thing we can take from this paper is a timeline of what will happen when. The authors give a bunch of different tasks, jobs, and milestones, and ask the researchers when AI will be able to complete them. Average answers range from nearly fifty years off (for machines being able to do original high-level mathematical research) to only three years away (for machines achieving the venerable accomplishment of being able to outperform humans at Angry Birds). Along the way they’ll beat humans at poker (four years), write high school essays (ten years), outrun humans in a 5K foot race (12 years), and write a New York Times bestseller (26 years). What do these AI researchers think is the hardest and most quintessentially human of the tasks listed, the one robots will have the most trouble doing because of its Olympian intellectual requirements? That’s right – AI research (80 years).

I make fun of this, but it’s actually interesting to think about. Might the AI researchers have put their own job last not because of an inflated sense of their own importance, but because they engage with it every day in Near Mode? That is, because they imagine writing a New York Times bestseller as “something something pen paper be good with words okay done” whereas they understand the complexity of AI research and how excruciatingly hard it would be to automate away every piece of what they do?

Also, since they rated AI research (80 years) as the hardest of all occupations, what do they mean when they say that “full automation of all human jobs” is 125 years away? Some other job not on the list that will take 40 years longer than AI research? Or just a combination of framing effects and not understanding the question?

(it’s also unclear to what extent they believe that automating AI research will lead to a feedback loop and subsequent hard takeoff to superintelligence. This kind of theory would fit with it being the last job to be automated, but not with it taking another forty years before an unspecified age of full automation.)

III.

The last part is the most interesting for me: what do AI researchers believe about risk from superintelligence?

This is very different from the earlier questions about timelines. It’s possible to believe that AI will come very soon but be perfectly safe. And it’s possible to believe that AI is a long time away but we really need to start preparing now, or else. A lot of popular accounts collapse these two things together, “oh, you’re worried about AI, but that’s dumb because there’s no way it’s going to happen anytime soon”, but past research has shown that short timelines and high risk assessment are only modestly correlated. This survey asked about both separately.

There were a couple of different questions trying to get at this, but it looks like the most direct one was “does Stuart Russell’s argument for why highly advanced AI might pose a risk, point at an important problem?”. You can see the exact version of his argument quoted in the survey on the AI Impacts page, but it’s basically the standard Bostrom/Yudkowsky argument for why AIs may end up with extreme values contrary to our own, framed in a very normal-sounding and non-threatening way. According to the experts, this was:

No, not a real problem: 11%
No, not an important problem: 19%
Yes, a moderately important problem: 31%
Yes, an important problem: 34%
Yes, among the most important problems in the field: 5%

70% of AI experts agree with the basic argument that there’s a risk from poorly-goal-aligned AI. But very few believe it’s among “the most important problems in the field”. This is pretty surprising; if there’s a good chance AI could be hostile to humans, shouldn’t that automatically be pretty high on the priority list?

The next question might help explain this: “Value of working on this problem now, compared to other problems in the field?”

Much less valuable: 22%
Less valuable: 41%
As valuable as other problems: 28%
More valuable: 7%
Much more valuable: 1.4%

So charitably, the answer to this question was coloring the answer to the previous one: AI researchers believe it’s plausible that there could be major problems with machine goal alignment, they just don’t think that there’s too much point in working on it now.

One more question here: “Chance intelligence explosion argument is broadly correct?”

Quite likely (81-100% chance): 12%
Likely (61-80% chance): 17%
About even (41-60% chance): 21%
Unlikely (21-40% chance): 24%
Quite unlikely (0-20% chance): 26%

Splitting the 41-60% bin in two, we might estimate that about 40% of AI researchers think the hypothesis is more likely than not.
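
If you want to check that arithmetic, here is a quick sketch in Python; the bin shares are just the survey percentages above, and splitting the middle bin evenly around 50% is my own simplifying assumption.

```python
# Survey shares for "chance the intelligence explosion argument is broadly correct"
bins = {
    "quite_likely_81_100": 12,
    "likely_61_80": 17,
    "about_even_41_60": 21,
    "unlikely_21_40": 24,
    "quite_unlikely_0_20": 26,
}

# Count everyone above 60%, plus half of the 41-60% bin
# (assuming answers in that bin are spread evenly around 50%).
more_likely_than_not = (
    bins["quite_likely_81_100"]
    + bins["likely_61_80"]
    + bins["about_even_41_60"] / 2
)
print(more_likely_than_not)  # 39.5 -> "about 40%"
```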

Take the big picture here, and I worry there’s sort of a discrepancy.

50% of experts think there’s at least a ten percent chance of above-human-level AI coming within the next ten years.

And 40% of experts think that there’s a better-than-even chance that, once we get above-human level AI, it will “explode” to suddenly become vastly more intelligent than humans.

And 70% of experts think that Stuart Russell makes a pretty good point when he says that without a lot of research into AI goal alignment, AIs will probably have their goals so misaligned with humans that they could become dangerous and hostile.

I don’t have the raw individual-level data, so I can’t prove that these aren’t all anti-correlated in some perverse way that’s the opposite of the direction I would expect. But if we assume they’re not, and just naively multiply the probabilities together for a rough estimate, that suggests that about 14% of experts believe all three of these things: that AI might be soon, superintelligent, and hostile.

Yet only a third of these – 5% – think this is “among the most important problems in the field”. Only a tenth – 1.4% – think it’s “much more valuable” than other things they could be working on.
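
Here is that back-of-the-envelope arithmetic as a quick sketch, with the naive independence assumption made explicit; again, the raw data could easily break it.

```python
# Naive back-of-the-envelope: treat the three beliefs as independent
# (which the individual-level data might well contradict).
p_soon = 0.50        # at least a 10% chance of above-human AI within ten years
p_explosion = 0.40   # better-than-even chance of an intelligence explosion
p_risky = 0.70       # agrees with Russell's goal-alignment argument

p_all_three = p_soon * p_explosion * p_risky
print(f"{p_all_three:.0%}")          # ~14% hold all three beliefs

print(f"{0.05 / p_all_three:.0%}")   # ~36%: share of those calling it "among the most important problems"
print(f"{0.014 / p_all_three:.0%}")  # ~10%: share of those calling work on it "much more valuable"
```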

IV.

How have things changed since Muller and Bostrom’s survey in 2012?

The short answer is “confusingly”. Since almost everyone agrees that AI progress in the past five years has been much faster than expected, we would expect experts to have faster timelines – ie expect AI to be closer now than they did then. But Bostrom’s sample predicted human-level AI in 2040 (median) or 2081 (mean). Grace et al don’t give clear means or medians, preferring some complicated statistical construct which isn’t exactly similar to either of these. But their dates – 2062 by one framing, 2139 by another – at least seem potentially a little bit later.

Some of this may have to do with a subtle difference in how they asked their question:

Bostrom: “Define a high-level machine intelligence as one that can carry out most human professions as well as a typical human…”

Grace: “High-level machine intelligence is achieved when unaided machines can accomplish every task better and more cheaply than human workers.”

Bostrom wanted it equal to humans; Grace wants it better. Bostrom wanted “most professions”, Grace wants “every task”. It makes sense that experts would predict longer timescales for meeting Grace’s standards.

But as we saw before, expecting AI experts to make sense might be giving them too much credit. A more likely possibility: Bostrom’s sample included people from wackier subbranches of AI research, like a conference on Philosophy of AI and one on Artificial General Intelligence; Grace’s sample was more mainstream. The most mainstream part of Bostrom’s sample, a list of top 100 AI researchers, had an estimate a bit closer to Grace’s (2050).

We can also compare the two samples on belief in an intelligence explosion. Bostrom asked how likely it was that AI went from human-level to “greatly surpassing” human level within two years. The median was 10%; the mean was 19%. The median of top AI researchers not involved in wacky conferences was 5%.

Grace asked the same question, with much the same results: a median 10% probability. I have no idea why this question – which details what an “intelligence explosion” would entail – was so much less popular than the one that used the words “intelligence explosion” (remember, 40% of experts agreed that “the intelligence explosion argument is broadly correct”). Maybe researchers believe it’s a logically sound argument and worth considering but in the end it’s not going to happen – or maybe they don’t actually know what “intelligence explosion” means.

Finally, Bostrom and Grace both asked experts’ predictions for whether the final impact of AI would be good or bad. Bostrom’s full sample (top 100 subgroup in parentheses) was:

Extremely good: 24% (20)
On balance good: 28% (40)
More or less neutral: 17% (19)
On balance bad: 13% (13)
Extremely bad – existential catastrophe: 18% (8)

Grace’s results for the same question:

Extremely good: 20%
On balance good: 25%
More or less neutral: 40%
On balance bad: 10%
Extremely bad – human extinction: 5%

Grace’s data looks pretty much the same as the TOP100 subset of Bostrom’s data, which makes sense since both are prestigious non-wacky AI researchers.

V.

A final question: “How much should society prioritize AI safety research?”

Much less: 5%
Less: 6%
About the same: 41%
More: 35%
Much more: 12%

People who say that real AI researchers don’t believe in safety research are now just empirically wrong. I can’t yet say that most of them want more such research – it’s only 47% on this survey. But next survey AI will be a little bit more advanced, people will have thought it over a little bit more, and maybe we’ll break the 50% mark.

But we’re not there yet.

I think a good summary of this paper would be that large-minorities-to-small-majorities of AI experts agree with the arguments around AI risk and think they’re worth investigating further. But only a very small minority of experts consider it an emergency or think it’s really important right now.

You could tell an optimistic story here – “experts agree that things will probably be okay, everyone can calm down”.

You can also tell a more pessimistic story. Experts agree with a lot of the claims and arguments that suggest reason for concern. It’s just that, having granted them, they’re not actually concerned.

This seems like a pretty common problem in philosophy. “Do you believe it’s more important that poor people have basic necessities of life than that you have lots of luxury goods?” “Yeah” “And do you believe that the money you’re currently spending on luxury goods right now could instead be spent on charity that would help poor people get life necessities?” “Yeah.” “Then shouldn’t you stop buying luxury goods and instead give all your extra money beyond what you need to live to charity?” “Hey, what? Nobody does that! That would be a lot of work and make me look really weird!”

How many of the experts in this survey are victims of the same problem? “Do you believe powerful AI is coming soon?” “Yeah.” “Do you believe it could be really dangerous?” “Yeah.” “Then shouldn’t you worry about this?” “Hey, what? Nobody does that! That would be a lot of work and make me look really weird!”

I don’t know. But I’m encouraged to see people are even taking the arguments seriously. And I’m encouraged that researchers are finally giving us good data on this. Thanks to the authors of this study for being so diligent, helpful, intelligent, wonderful, and (of course) sexy.

(I might have forgotten to mention that the lead author is my girlfriend. But that’s not biasing my praise above in any way.)


286 Responses to SSC Journal Club: AI Timelines

  1. Phil Goetz says:

    The paper says the survey was given to NIPS and ICML presenters. On one hand, people in these communities are well-informed about current algorithms that would be useful to AI – generally much more so than AI researchers.
    But on the other hand, people in these communities don’t think about AI; they think about ML, statistical modeling, and narrow applications.

    • Douglas Knight says:

      This paper is a follow-up to an earlier survey of AI researchers. People complained that they aren’t real experts and demanded this survey of ML researchers.

  2. ConnGator says:

    1. After reading about this subject for a dozen years I have recently become very pessimistic about the survival of humanity (and perhaps biological life). I just don’t see a scenario where evil AI does not arise within the next 30-70 years.

    2. The reason why more AI researchers are not pushing for more safety is that it is futile. 99.999% of AI researchers can be perfectly safe and evil AI will still happen. It will soon be too cheap to not have AI in everything, and so it will be impossible to keep the genie in the lamp. We are, indeed, summoning the daemon.

  3. Twan van Laarhoven says:

    A big issue I have with asking about machines being “able to accomplish every task better and more cheaply than human workers” is that this is not just about AI, but also about robotics. It might be that we get AI in, say, 2040, that can match humans in mental tasks, while the robotic technology to match humans might be decades beyond that. And do human tasks include things like birthing human children (is surrogate mother an occupation)? Because that is another field of research altogether.

    Someone answering the question might be thinking only about the AI side, or about all human tasks, robotics included. We can’t tell from the phrasing.

  4. crucialrhyme says:

    There’s been a recent surge in research about “fairness, accountability, and transparency” — that is, making models more interpretable, and making sure they don’t discriminate on the basis of race or other characteristics. This may be motivated by rumblings of regulation from the EU. Likewise OpenAI had an article on practical problems in AI safety which talked not about superintelligent paperclip factories but about a cleaning robot which must explore without causing damage.

    It seems to me that these sorts of problems can also be framed as instances of the “control problem” — we’re trying to make sure that these fairly opaque systems don’t do something bad because we misunderstood their objective function. The goals are urgent in the near-term, but we get more than one shot to try and make it work.

    The Obama White House published a “National Artificial Intelligence Research and Development Strategic Plan” that treated existential concerns about an “intelligence explosion” and AI control in the same section as near-term concerns about verifiability, robustness to malicious inputs, interpretability, etc. This framing makes sense to me.

    In general, I think it might be most productive for those concerned about AI risk to focus their efforts on improving the safety and interpretability of actually existing learning systems. I have no justification, just a gut feeling, but I think getting deep into real existing problems would result in an understanding that is generally useful.

    Edit: I suspect that many of the researchers who say safety is important are thinking in these terms as well — they may not buy the Bostromian argument.

  5. Rusty says:

    Grace: “High-level machine intelligence is achieved when unaided machines can accomplish every task better and more cheaply than human workers.” Pff!

    One task many people expect to be achieved very soon is driving cars. But a machine will never be able to drive a car ‘better’ than me because the fact it is me doing the driving is critical to my assessment of ‘better’. I enjoy driving my car. I could have the most brilliant driver in the world wanting to be my chauffeur and I’d still drive my car myself.

    So in my brave new world there are plenty of jobs for people to do. Kinky sex jobs. Jobs cutting hair – I like chatting to the barber. Serving coffee. The waitresses in the cafe are nice and my coffee machine at home is probably already better at making coffee but I want the human touch and a bit of chat. And after I have had my kinky sex, had my hair cut and drunk my coffee I will nip into the Go parlour where I can measure my skills against a grand master. And I want to play a human being, just like when I go into the boxing ring I want to fight someone my own size and not take on a gatling gun.

    My own job is as an accountant in this future world. You thought we’d all be redundant? Well we were for a bit until the computers all got hacked and it all went pear shaped in the accounting world until they passed a law saying accounts needed checking by us humans.

    And after my working day is over I go down the comedy club to hear someone tell jokes. Which machines still can’t even in this future world of mine.

    And yes, AI may destroy us all, but stepping into my time machine I find it didn’t. Turns out we were all killed by nanotechnology – thick as mince mini drones. Or was it nuclear war? Or a biological super weapon? Something will get us soon enough no doubt so another civilisation can worry about the Fermi paradox but AI is the least of our worries. (Though possibly I am a super AI from the future writing this to throw you off track. Though in that case I would be more persuasive I suppose.)

    • Doctor Mist says:

      My own job is as an accountant in this future world. You thought we’d all be redundant? Well we were for a bit until the computers all got hacked and it all went pear shaped in the accounting world until they passed a law saying accounts needed checking by us humans.

      I’m sure we’d all like to believe our job is safe by legislative fiat. Even after the pear-shaped event you describe, we are assuming for the purposes of this discussion that an AI can do your accounting better than you can, and in that case the world will be full of people finding ways to do an end run around the law that is protecting you – including pressing for its repeal.

      I could have the most brilliant driver in the world wanting to be my chauffeur and I’d still drive my car myself.

      Okay, until the same legislators who are steadfastly protecting your job see that there would be orders of magnitude fewer traffic fatalities if human drivers were forbidden except at amusement parks.

      • Rusty says:

        But you seem to see a world where the legislators do things their voters don’t want. If it was total carnage on the roads I’d see that, but it really isn’t. At some level the roads are just ‘safe enough’. Ultimately it’s consumers who will decide, and the idea of being reduced to nothing much more than a parcel is not something I am going to choose or vote for. Why would I choose something that is slightly safer but miles less fun? Am I going to stop going for walks and hide indoors? But, meh, I didn’t think Trump would win either so what do I know?

        • vV_Vv says:

          Suppose that only 10% of cars on the road had human drivers, which is not unlikely 10 years from now, and people noticed that 90% of road fatalities were caused by these human drivers.

          Imagine campaigns against human driving, ads with relatives of the victims crying and so on. How long until some politicians, with broad support, ban human driving, or at least severely restrict it?

          Think of driving as horse riding: you can still do it, but a) it is expensive, and b) it can be done only on certain roads.

        • Doctor Mist says:

          But you seem to see a world where the legislators do things their voters don’t want.

          Heh, good one.

          Oh, wait, you’re serious. Maybe I should have been more careful; the phrase “legislative fiat” has a long currency, but of course in the modern world these decisions are made by the administrative establishment. You really think that these faceless bureaucrats gave a damn about whether you like to drive? (An administrative agency is actually not a bad proxy for a super-AI — enormously powerful and single-minded but with no sense of proportion. But I digress.)

  6. vV_Vv says:

    How many of the experts in this survey are victims of the same problem? “Do you believe powerful AI is coming soon?” “Yeah.” “Do you believe it could be really dangerous?” “Yeah.” “Then shouldn’t you worry about this?” “Hey, what? Nobody does that! That would be a lot of work and make me look really weird!”

    There are many implicit assumptions in the survey questions, and the assumptions you make are probably different than those of the AI experts, leading to your confusion in interpreting these results.

    Due to the people you have been consorting with, when you hear “AI risk”, you interpret it as “Hard takeoff superintelligence singularity that is gonna kill us all as a side effect of turning the Galaxy into paperclips”. The experts instead think of more mundane AIs with mundane safety issues.

    If you ask aerospace engineers about “Airplane risk”, you are probably going to get similar answers, for the same reasons.

  7. Eli says:

    Putting on my Almost Cynical Enough hat, the pattern looks clear: a large minority to small majority of researchers believe some version of the risk arguments, and are now eagerly waiting for grant agencies and other research funders to also believe these arguments. Safety will be truly worth working on when there is money for it.

  8. Ohforfs says:

    But, more seriously, the surveys are, to understate things a bit, utterly incomparable. Two differently framed questions in socio surveys can and likely will give wildly different answers.

  9. Ohforfs says:

    The funniest thing is the huge difference between Asian and American researchers, the latter being a lot more pessimistic.

    Oh, the Der Untergang des Abendlandes

  10. JulieK says:

    be able to outrun humans in a 5K foot race (12 years)

    I don’t understand this part. We already have machines that are faster than humans.

    • Rob Speer says:

      Are you thinking of wheeled vehicles? Those wouldn’t be in a foot race. Currently, machines on foot are slow and fall over a lot.

      Legged locomotion is an interesting problem in robotics, generally considered a field of AI. Making progress on it would improve robots’ ability to interact with more environments.

    • 6jfvkd8lu7cc says:

      It’s probably a control problem, i.e. there must be exactly two legs, and there must be exactly one knee on each leg (of course there are also joints at the top and at the bottom), and at no time should both legs touch ground while there is a time when neither touches ground.

  11. wiserd says:

    It’s like the classic joke about the Jewish telegraph: “Begin worrying. Details to follow.”

    We’re talking about something which is, overall, likely to be a net benefit but in ways that nobody is likely to be able to predict. How should people prepare? Saving up money so they will have some capital when their labor plummets in value?

  12. Robert L says:

    What the Bostrom model of AI risk lacks is a convincing account of the motivation for an AI to turn hostile. Perverse instantiation on the level of “turn everything into paper clips or computronium” is not in practice going to happen. The reasons why it is not going to happen do however give rise to an AI intractability problem; it just isn’t what Bostrom thinks it is.

    The need to limit the ability of AI, or robots, or artificial persons has been thoroughly foreseen by science fiction, notably Asimov’s three laws of robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” Compare the android Bishop in the movie Aliens: “It is impossible for me to harm or by omission of action, allow to be harmed, a human being.” We can stipulate that these prohibitions are, by law, in the AI’s code in cryptographically secure, non-editable files. (I understand that we could have an argument about what is cryptographically secure in a world of AIs. The short answer is, we will get as close as we can to it and that AIs will be prohibited from hacking AI source code at the same level as the three laws of robotics. Note that if humans are involved in a hacking attempt that takes us out of the realm of AI risk and into the risk that humans can be bad hombres, which we knew already).

    Paperclip maximising obviously infringes these principles in that it entails turning into paperclips everything humans need to survive, and human beings themselves. A paperclip making AI would probably be safe without the prohibitions, because its mission statement (again, unhackably non-editable) is going to be something along the lines of: maximising benefit from paperclip making to the shareholders, customers and employees of Acme Paperclip Corp. If the world consists of only paperclips none of those goals is achievable because, at the simplest level, if there is no paper there is nothing to clip and, if there are no humans, nothing to do the clipping.

    The third protection against perverse instantiation is that the AI is by definition intelligent. It should therefore be possible to give it the following instruction, in plain English: “Read Bostrom’s book; if any plan you make looks as if it might be one of these acts of perverse instantiation, run it by a human being, OK?”

    I think it is plain that we will start out with the three protections above against perverse instantiation and more general AI hostility, as a result of legislation by governments advised by the likes of Bostrom and of the decisions of the officers of Acme Paperclip Corp. (who have paid a lot of money for their AI and would like a return on the investment). And it is with these protections that the trouble starts. An absolute instruction to make paperclips or calculate the value of pi is one that nobody will be silly enough to give; an absolute prohibition of injur[ing] a human being or, through inaction, allow[ing] a human being to come to harm would appear to be essential, but a moment’s thought shows that it would render AIs largely useless. Take paperclips: search the web for “paper clip injuries” and you will find that these are actually rather common, to the extent of the U.K. National Health Service banning their use in the organisation on grounds of health and safety. Presumably the rest of us who do use them make a utilitarian calculation that the benefits of paper clips outweigh the dangers. What if an A.I. disagrees? What if the utility/danger balance of paperclip use tips during the operational lifetime of the A.I.? If the prohibition on harming is absolute it will never agree to work in the first place for Acme Paperclip Corp. If it isn’t absolute, the A.I. needs to act as a moral agent to be able to take a view on the balance of utility. And there are real world instances where humans equivocate like this: on the one hand, I am pretty sure it is not the case that Big Oil never harms the inhabitants of any of the places where it gets its oil; on the other hand, I need to put fuel in my car in order to get to work. (And the luxury goods vs basic necessities argument in the header.) So if Big Oil’s AI reckons Big Oil is screwing the indigenes somewhere, does the AI shut down that part of, or all of, Big Oil’s operations? Worse, if the Acme Paperclip Corp AI happens to know from reading the internet about Big Oil’s behaviour, is it not obliged by the prohibition of “through inaction, allow[ing] a human being to come to harm”, to intervene?

    And what about military AIs? In principle they could not exist unless exempt from the prohibition on harming humans. But if they exist with that exemption, and with control of the country’s military and a robust mission statement about destroying enemies, the A.I. risk is enormous because the boundary between perverse and non-perverse instantiation (i.e. acting as intended by their human owners) is so blurred.

    In conclusion: to eliminate A.I. risk we cannot do without an injunction against harming or allowing to be harmed human beings. To operate usefully, humans have to equivocate, turn a blind eye, distinguish (on grounds not susceptible to rational justification) between actively causing and passively permitting harm to others, and so on. The fact that I have a Cadillac and you try to feed a family of 6 on $1 a day is not obviously justifiable in a way that can be explained to an A.I. So safe A.I.s will be useless A.I.s and indeed a positive threat to the way of life and carrying on business, of whoever happens to own them.

    • wiserd says:

      Yeah, I remember people in the government stating that the capacity to kill would not be handed over to AIs. And my first thought was “B.S. Because land mines.”

      I suppose my question is “how will being able to potentially wage war without human casualties by the projecting force alter our tendency to wage war?” Or, more succinctly; “What are the limiting factors on war and does AI alter them?”

    • sty_silver says:

      Your post is based on a couple of implicit assumptions about how AI works and what the alignment problem is. I think it’s fair to say that, if those assumptions were accurate, it would be a reasonable position to take. As it happens, though, they’re not.

      The actual concern is the fact that, for almost any goal, if you optimize that goal to the absolute maximum, it naturally leads to catastrophic consequences. And I want to underscore the word ‘naturally’ here; it’s not a property of AI specifically, it’s a property of taking a goal literally and having very high capability. This is why Nick Bostrom doesn’t mention any motivation for the AI to be hostile: there is none. Insofar as “hostile” means “diverging from what we tell the AI to do,” we can, without risks, assume that the AI is not going to be hostile.

      But that’s not reassuring, because the AI does not need to be hostile in order to do things we don’t want. The basic example of a goal leading to catastrophic consequences (which you seem to be familiar with) is that of creating paperclips, where the utility gained scales indefinitely with the number of paperclips created, which naturally leads to the behavior of converting all accessible galaxies into paperclips, as the logical global optimum. But even an ostensibly mundane goal, such as “put me a sandwich on the table”, does still lead to catastrophic consequences: after the AI has put the sandwich onto the table, there remains a nonzero chance that someone might take it off, so the actual, global optimum for that goal still includes killing all humans to decrease the probability of that occurring – and presumably to acquire as many resources as possible to protect it from any imaginable threat in the future.

      So that is the problem: you give the AI a goal, and the AI takes it literally, and the process of optimizing it naturally leads to humanity’s extinction (provided the AI is in fact superintelligent). This is not a problem which you might intuitively expect to be hard – many don’t. But, as it turns out, it is.

      You mentioned the three laws of robotics as a possible solution. They are actually not even on the table, because they fail for at least three independent reasons:

      1. They’re not code. Giving a goal without worrying about whether it can be expressed in code is cheating. You could also just say “do nothing I would consider to be really bad.” There are some approaches in this direction, but just describing a goal in natural language is never a solution to the problem, even if that goal were hypothetically flawless.

      2. The books are about how the laws don’t work.

      3. Any action has a non-zero probability of causing harm to a human. Thus, if you implement the first law, the other laws don’t matter, because the AI can’t do anything. And if you take the part about inaction literally, it by default always violates the first law, which should lead to unspecified behavior.

      And so, while the paperclip maximizer would indeed violate the three laws of robotics, that doesn’t matter because the laws are a total non-starter and therefore don’t protect us from the paperclip scenario, nor from anything else. As of now, as far as I know, the AI safety field is not sufficiently advanced to prevent a paperclip scenario in full generality. And again, a paperclip scenario is what naturally happens given almost any goal. Alas, if we deployed a superintelligent AI today, a paperclip outcome would absolutely be possible.

      There is another premise in your post that isn’t correct:

      The third protection against perverse instantiation is that the AI is by definition intelligent. It should therefore be possible to give it the following instruction, in plain English: “Read Bostrom’s book; if any plan you make looks as if it might be one of these acts of perverse instantiation, run it by a human being, OK?”

      This is the fallacy of taking the term “intelligence” to imply human-like abilities, which could likewise be blamed on the safety community for embracing the term. “Intelligence,” or more specifically “general superintelligence” is, in the context of AI, merely a code word for “powerful algorithms on a wide range of domains.” Or to put it differently, if one understands intelligence the way you are, then the AI is not by definition intelligent – it is perfectly possible for an AI to be powerful enough to be labeled “superintelligent,” while, going by your definition, being quite stupid.

      In this sense, there is no principled difference between a superintelligent AI and a pocket calculator. It is therefore more accurate to think of an AI not as an intelligent agent, but simply as a machine with a general ability to optimize a wide variety of goals to an extremely high degree.

      The reason why you can’t give this instruction, then, is the same that you cannot simply tell it to “be friendly” and be done with it: the AI does not have common sense. “Be Friendly” or “Read Bostrom’s book and don’t do anything that looks as if it might be one of those acts of perverse instantiation” is not code, and you need a solution which can be expressed in code. As far as I’m aware, everyone agrees that the problem would be trivial if you could talk to the AI like you could talk to a human, but you just can’t. And it is tempting to reply “yeah but it’s not that hard to find something which is actual code that does a good enough job”. But it is hard enough that no-one has ever found a principled solution.

      Programming that sort of “common sense” into the AI, then, brings us back to the problem, because that is just another formulation of the AI alignment problem. It is what we are trying to do, we’re trying to teach the AI common sense: not to optimize to an absurd degree, to ask before doing something weird, not to kill everyone in order to protect its goals, and so forth. That is the AI Alignment problem.

      • Aapje says:

        But even a ostensibly mundane goal, such as “put me a sandwich on the table”, does still lead to catastrophic consequences: after the AI has put the sandwich onto the table, there remains a nonzero chance that someone might take it off, so the actual, global optimum for that goal still includes killing all humans to decrease the probability of that occurring

        The goal of putting a sandwich on the table would be met upon the state change and would not require that state to persist (by having the sandwich remain on the table).

        So your example already presumes a programming error, where you programmed ‘make sure that there is always a sandwich on the table’ rather than “put me a sandwich on the table.”

        • sty_silver says:

          Yeah, I decided to simplify that part to make it a bit shorter, because it doesn’t actually change the behavior.

          But you’re absolutely right. You could program the goal so that it is met if the sandwich has been on the table at one point, rather than if it remains there. The problem is that the AI cannot be sure of it having been there even just once, because probabilities lie strictly between 0 and 1. After putting it there, it would probably worry about what might have gone wrong, such as false sensory data. You can imagine it acquiring a lot of resources to knock down the probability of faulty data / scan the table as often as possible in as little time as possible. It would still have an incentive to kill all humans to prevent them from e.g. eating the sandwich, which would result in it not being able to put it back on the table anymore, in the unlikely case that it hasn’t done so already.

          So that’s still open ended. There might be ways to make it bounded, though, like “be at least 99% confident that the sandwich has been on the table at one point”. 99% would be easily achieved without using large amounts of resources.

          I don’t think this is an approach that’s being pursued, but I don’t know for sure. Maybe it is. It seems to me that it’s not a very reassuring strategy; you seem to remove the incentive for catastrophic behavior (and there might still be a more subtle reason why you don’t), but even if you did, the AI would now have a bunch of ostensibly equivalent options to choose from, and you didn’t do anything yet to ensure that it takes the one you like.

          You could combine that with trying to measure impact and add a penalty for high impact strategies… which brings you into an area where there is active research. I know there’s a ton of stuff that goes wrong with naive impact measures.

          • Aapje says:

            You are assuming that the AI would be extremely vulnerable to neuroses, but why would that be the case?

            It’s not true for humans either. I parked my car somewhere this morning and it’s plausible that someone could have stolen the car. However, I’m not constantly checking whether it is still there, because the chance that it got stolen is low. When planning my trip home, I can work with the assumption that the car is still there, based on the information that I have stored in my brain.

            In the rare case that my information turns out to be wrong, I’ll make a new plan.

            The AI can presumably use a database, so could similarly just assume that ‘sandwich on table = true’ is a true fact, regardless of the actual state of reality.

            Of course, just like humans, it should be somewhat robust if it makes plans on that fact being true, but it turns out to (have) be(come) false.

          • sty_silver says:

            You are assuming that the AI would be extremely vulnerable to neuroses, but why would that be the case?

            I’m… not. I’m assuming the AI is going to optimize the goal you’ve given it. No more, no less.

            The AI can presumably use a database, so could similarly just assume that ‘sandwich on table = true’ is a true fact, regardless of the actual state of reality.

            AI is going to use probabilistic models. Ask anyone in the field.

          • vV_Vv says:

            I’m… not. I’m assuming the AI is going to optimize the goal you’ve given it. No more, no less.

            But you are assuming no tradeoffs.

            AI is going to use probabilistic models. Ask anyone in the field.

            “Anyone in the field of AI research” has very little overlap with “Anyone in the field of AI safety speculation”.

            Things like neural networks are not probabilistic models in any formal sense. They can be considered at best heuristic approximations of probabilistic models.

          • Christopher Hazell says:

            Is there no way to bound the question temporally?

            “Put a sandwich on the table in the next five minutes” doesn’t give the AI much time to destroy human civilization.

          • vV_Vv says:

            “Put a sandwich on the table in the next five minutes” doesn’t give the AI much time to destroy human civilization.

            Yes, in fact the easiest theoretical way to constrain an agent is to give it a limited time horizon, which makes its preferences time-inconsistent.

      • Robert L says:

        I was working backwards from my conception of what would pass a Turing test; if a machine can’t participate in a simple conversation about perverse instantiation it doesn’t pass, and if it can, it can surely be instructed not to do it?

        The 3 laws do indeed land you in a paradox – your machine can either do nothing at all or nothing except humanitarian work. They will be highly relevant to the debate when it goes mainstream and political (which it will – think GM foodstuffs protests times a very large number) because they are easily understood and look as if they would work.

  13. njarboe says:

    I think that if you are concerned about the dangers of AI then you should be advocating advancing AI software as quickly and widely as possible. Instead of bottling them up while trying to improve them, always give them direct access to the internet. The first ones to cause serious trouble are not likely to be able to control and take over all of humanity for all time at this point. Hard to say the factor by which the 7.5 billion humans on Earth beat a single human in intelligence and power (knowing if it is greater or less than 7.5 billion would be a start), but we should collectively be able to beat a very powerful AI system. A self-improving AGI system at this point would only be able to quickly control all the existing computing power on Earth. Physical improvements (new chips, power plants, fiber optic cables, robots of all types, etc.) would still take much more time.

    I’m not sure what a Hiroshima type event for AI would be that would make everyone on Earth aware of the potential dangers and cause society to react like we did with nuclear energy (only allow a few governments to weaponize and regulate civilian use so much as to prevent technological advancement). Maybe an algorithmic hedge fund that grows at 10% a day for six months ends up owning the global stock market (turning a million-dollar start into $28 trillion)? A new alt coin controlled by unknown entities replaces the US dollar as the reserve currency? The Dow Jones going to 0 in a flash crash with pension funds selling into it, destroying global pensions? Thinking along this path, maybe OpenAI’s philosophy is the best path forward.
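
    Quick sanity check on that number: a rough sketch assuming daily compounding over about 180 days (using only trading days would change the figure, but not the general picture).

    ```python
    # Rough check of the hedge-fund scenario: $1 million compounding at 10% per day
    # for ~180 days (my stand-in for "six months").
    start = 1_000_000
    daily_return = 0.10
    days = 180

    final = start * (1 + daily_return) ** days
    print(f"${final:,.0f}")  # roughly $28 trillion
    ```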

  14. Alf says:

    Whom fortune wishes to destroy, she first makes mad.

    Consensus leads to the madness of crowds, not the wisdom of crowds.

  15. Joe says:

    Scott (and others): how many lines of code do you expect a human-level AI to encompass?

    My suspicion is that those who think human-level AI will appear sooner rather than later, and especially those who see an intelligence explosion as fairly likely, will give the answer “not many”. Certainly less than many huge software projects. Definitely less than something like Microsoft Office. More similar to an ML research project we might read about on arXiv.

    Folks who expect an intelligence explosion, do you agree? And if so, do you think that the plausibility of that scenario depends on this question — that if it turned out that human-level AI would require a lines-of-code count on the order of our largest software projects today, then an intelligence explosion would no longer be a particularly likely outcome?

    • Kaj Sotala says:

      I put reasonable probability (say, at least 30%) on an intelligence explosion, and I would be pretty surprised if the complexity was closer to your typical arXiv ML paper than it was to Microsoft Office. Brains are complicated things with lots of parts doing different functions. (That said, “lines of code” isn’t necessarily the best possible metric, given that it’s possible for a lot of the complexity to be learned and self-organized from a simpler starting template, and that there is no distinction between data and code in the brain; does an adult human brain have more “lines of code” than a newborn one does?)

      do you think that the plausibility of that scenario depends on this question — that if it turned out that human-level AI would require a lines-of-code count on the order of our largest software projects today, then an intelligence explosion would no longer be a particularly likely outcome?

      No.

    • sty_silver says:

      I lack the expertise to estimate the LoC, but I put above 50% on the intelligence explosion, and I think it’s very unlikely that those two are meaningfully connected.

      • Joe says:

        Really? I see it as a highly relevant factor. More lines of code means more complexity, more components, making an intelligence explosion less likely for a number of reasons:

        – More intermediates between current-day AI and full human-equivalent AI, meaning less of a sudden sharp transition to ‘real’ AI, or from ‘narrow’ to ‘general’ AI.
        – More viable variety in AI design, with different designs optimal for different purposes, making it less possible for a single AI to be the best at everything.
        – More parts, meaning more work that needs to be done in order to improve an AI’s capabilities by a given amount.
        – Combining the above two points: with more AI designs in use, each using different parts, an improvement to any single part would benefit a smaller proportion of AIs.
        – More bottlenecks, where the AI’s capabilities can’t just be doubled by something as simple as doubling the size of each part.

    • currentlyinthelab says:

      The human genome is shorter than Microsoft’s operating system, yet it clearly programmed human intelligence; the bulk of it probably isn’t even related to the brain, and much of the genome is code that isn’t used anyway. Bets are that only 1% is clearly related to intelligence, making it…about the size of Windows 2000.

      My bet is far less; how much less I don’t know. All evolution had to work with was randomness; it couldn’t notice structures, symmetries, and mathematical patterns.
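
      Rough arithmetic behind the size comparison, assuming the usual figures of roughly 3.1 billion base pairs at 2 bits each; how that stacks up against any particular operating system depends on whether you count source text, binaries, or an installed system.

      ```python
      # Rough information-content arithmetic (assumptions: ~3.1e9 base pairs,
      # 2 bits per base, and the guess above that ~1% is intelligence-related).
      base_pairs = 3.1e9
      bits_per_base = 2

      genome_bytes = base_pairs * bits_per_base / 8
      print(f"whole genome: ~{genome_bytes / 1e6:.0f} MB")  # ~775 MB

      intelligence_fraction = 0.01
      print(f"1% of that: ~{genome_bytes * intelligence_fraction / 1e6:.1f} MB")  # ~7.8 MB
      ```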

      • Ilya Shpitser says:

        The human genome runs on complicated custom hardware. You have to count the number of bits to specify the genome and the hardware it runs on (similarly to how you specify the Turing machine, and the program together).

        • peterispaikens says:

          The human genome (plus the genome of mitochondria, etc) *does* encode all the hardware that it runs on; that’s the beauty of self-replicating machines.

          • Ilya Shpitser says:

            It doesn’t though. If you unroll the recursion back you will see that at every point there was reliance on some sort of already instantiated hardware. Ultimately the “base case” was probably some sort of chemical soup, but we really don’t know.

            At any rate, encoding the entire evolutionary history of a current organism is a lot more bits than just the genome, while if you just give the genome, you can’t instantiate the organism just from that, and nothing else.

            An alternative you might try, instead of specifying a fully instantiated human or the entire evolutionary history of the human via deltas, is some sort of advanced machine that can reconstruct the body from just the genome. But you then have to add the length of the specification of this machine, in bits, to the length of the genome, in bits.

            When specifying the complexity of something, you give the length of the program, and the length of the interpreter. There is no way around this.

          • beleester says:

            The “base case” for a human would be the fertilized egg (and perhaps the uterine environment), not every cell of the full-grown organism.

          • Ilya Shpitser says:

            But the fertilized egg cannot grow in isolation, it needs either a human womb or a complex machine that mimics the womb to make sure it develops correctly.

            You have to specify everything that’s needed, in bits. The point is, this is almost certainly a huge number of bits, much larger than the genome.

          • vV_Vv says:

            But the fertilized egg cannot grow in isolation, it needs either a human womb or a complex machine that mimics the womb to make sure it develops correctly.

            Moreover, a newborn human will die before developing any significant form of intelligence if not nurtured by another human, who will in turn depend on a complex environment in order to survive.

            All those things contain a non-trivial amount of information, which is needed to go from human DNA to a group of adult humans mastering their environment.

          • currentlyinthelab says:

            Not only does something take a certain number of bits to describe the program and the interpreter, but some machines can’t even be understood in a meaningful way by humans below certain…capability thresholds, regardless of explanation. How complex is a program, or how many bits does it take to define, if it only runs on a machine in a certain way, and can only be built by unusually smart and well educated people with complex brains that form weird cognitive loops?

            That answer may end up recursive and difficult to solve.

          • beleester says:

            @vV_Vv: That’s getting absurd. By that logic we should include the municipal electric grid as part of the AI, since there’s no way the program will run without power to the computer.

            How about we make the reasonable assumption that both the AI and the human will be created by humans, and therefore we don’t need to count the entirety of the human race and its infrastructure as part of their “source code”?

          • vV_Vv says:

            These kinds of bit counts typically come up when people try to estimate how unlikely it was for evolution to create humans.

            If we want to compare the complexity of humans to a hypothetical autonomous AGI, then we can assume a technological society as a starting point and ask ourselves what it would take to make an autonomous system where you put in the human genome in digitized form, and an autonomous (adult, or at least not infant) human comes out.

        • currentlyinthelab says:

          True correction.

          It’s still an open question how many bits it will take to specify the hardware, after the necessary abstractions are accomplished.

  16. Maxwell says:

    “AI could be hostile to humans” to me is a strange way of thinking about AI risk. Humans are hostile to humans. You don’t need to add AI hostility on top, it’s unnecessary.

    Think of AI risk as being AI weapons risk. When you put it in the weapons category, then it’s clearer that the problem isn’t that AI civilization will displace human civilization (that would be a relatively happy outcome, actually) but that humans will use AI to destroy both humans and AI.

    • beleester says:

      I think it’s slightly different, because part of the fears about AI is that an AI that’s not deployed for malicious purposes, but isn’t explicitly coded to be friendly, could still end up devastating humanity. (“The AI does not hate you, but you are made of atoms that could be used for something else.”)

      Weapons have a pretty narrowly-defined use case. You don’t shoot a gun or drop a bomb unless you want something dead. But we want to be able to deploy AIs to do useful, constructive things.

  17. Doctor Mist says:

    A little bit of a tangent, I know, but this:

    writing high school essays (ten years)

    set me to thinking about a way this plays out that hadn’t occurred to me before, involving the breakdown of education. Kids don’t have to write essays any more because their phones can do it for them — or maybe it seems even more pointless than it used to, just because they know their phones can do it for them. It’s no longer possible to teach most young people critical skills, because the solution to any problem easy enough for a young person to tackle is just a few keystrokes away. Without these skills, even the relatively undemanding jobs they might have gotten as adults are now beyond their grasp. So, look: at that point, for an AI to do those jobs better than humans is a much lower bar than it is today!

    I know, a lot of moving parts in this scenario, and it’s probably not what will happen. Kids already ask, sometimes with justification, what use they will have for what schools are trying to teach them. The problems they are asked to wrestle with are mostly easy enough even for ordinary human adults, but at least some kids are willing to tackle them anyway. And some parents do their kids’ homework without causing society to crumble.

    Still, it struck me as interesting. Up to this point I had sort of been imagining an accountant or a travel agent or a grocer or an investment banker finding that somebody had duplicated their value-added in software. Imagining the encroachment happening much earlier in life, in a more fundamental, systemic way, was something of a jolt.

  18. bintchaos says:

    It seems like I vaguely recall DARPA throwing a bunch of money at Friendly AI some years ago – some people at school were writing white papers on formal math proofs like Satan and the Black Box (which I honestly only recall because of the catchy name). I haven’t kept up on the research. I think fear of hyper-intelligent amoral machines is rational, but it’s a long way out in the future.
    If you are interested in how a purely self-interested intelligent machine might perform, Frank Pasquale has an excellent piece in the NYT. I hope the article isn’t perceived here as Trump-booing…I think it’s pretty accurate.
    I think you also have to have to make a distinction between embodied AI and pure-algorithm AI. I am a data nerd…I’m wildly excited about Social Physics, the MIT social machine project, reality mining, cognitive genomics large sample data sets of a million reps, unsupervised learning, multi-layer neural nets, deep learning and the implications for complexity science because those are all tools I can use for wondrous discoveries…
    But!
    There are cautionary voices out there like Cathy O’Neil the Mathbabe, who has just published a book on social algorithms called Weapons of Math Destruction. I recommend both her fine book and her blog – here’s a sample post – How the Robot Apocalypse Will Go Down.
    I think the survey didn’t really tease out the immediacy of the danger from algorithmic AI – but embodied AI, sure.
    Not happening anytime soon.
    This, from Artem Kaznatcheev’s TheEGG blog, is also very good.

    In reference to intelligent robots taking over the world, Andrew Ng once said: “I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars.” Sure, it will be an important issue to think about when the time comes. But for now, there is no productive way to think seriously about it. Today there are more concrete problems to worry about and more basic questions that need to be answered. More importantly, there are already problems to deal with. Problems that don’t involve super intelligent tin-men, killer robots, nor sentient machine overlords. Focusing on distant speculation obscures the fact that algorithms — and not necessarily very intelligent ones — already reign over our lives. And for many this reign is far from benevolent.

  19. sty_silver says:

    Thank you for posting this.

    I’m not feeling well after reading the comment section. The type of argument I’ve seen the most is one pointing out a particular unique reason why safety doesn’t necessarily need to be prioritized right now, while implicitly or explicitly conceding the surrounding argument – but with fairly little overlap on which reason that is between different posts. I see a lot of confident but contradicting beliefs.

    It looks like a lot of instances of failing rationality to me, and that worries me a lot.

    I hope you write more about AI in whatever form in the future. I increasingly believe that doing so could be really important.

  20. Christopher Hazell says:

    2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

    Now, pedantically speaking, this isn’t actually true. If the AI’s assigned task, for example, is “ensure that the amount of computing power in existence never exceeds a given threshold”, then it might do any number of things to do that, but the one thing it couldn’t possibly do would be to indefinitely increase its own computational resources.

    There’s all kinds of ways we might lawyer up the wording, but if you have an AI whose purpose is some variation on “Ensure that the scenario in point 2 never happens” then it’s pretty obvious that an AI can’t succeed at that task by failing at it.

    The other thing is that it’s plausible that AIs with vaguer goals still might easily come to the conclusion that their goals are more likely to be achieved if they are not in the picture. Christ in the gospels can’t achieve his goals without dying, for one thing, and of course lots of people have sacrificed their lives for all kinds of ideologies. I’m not so sure that every single one of them made the wrong choice.

    Somebody is shooting at you and your child; if your priority is to protect the life of your child, then shielding them with your body and whatever else is at hand makes a lot more sense than prioritizing your own survival so that you can continue to acquire more computing power and bulletproofing resources.

    This might make a good premise for a scifi story: we build a benevolent AI to control everything in the world, and, for reasons it can’t simplify enough for us to understand, it decides we would all be better off if it destroyed itself and put our infrastructure into chaos.

    Or, even weirder, Go playing robots and translation software start to come to the conclusions that the problems they were meant to solve will be more efficiently solved without them, and blow themselves up.

    Now that I’ve typed all this out, I see that the phrase “sufficiently capable” is doing a huge amount of heavy lifting in that sentence, and I think it’s where some of the religious feeling in talk of AI risk comes from, for me.

    The question assumes not just that the AI is more capable than humans, but that it has such astonishing complexity, and such far-reaching goals, that its continued existence is the optimal solution to any problem it will face.

    It won’t ever have to throw itself in front of a bullet, or decide that destroying itself will force other humans or other AIs to construct more efficient processes than it can, or simply cease acquiring resources and computation power for reasons that are by definition clear to its superior mind but unfathomable to our inferior ones.

    Instead, it conceives of the powerful AI as self-evidently having long-term, global goals (“Make as many paperclips as you can” vs. “Ensure that 100,000 paperclips are manufactured in Factory 7 on Jun 14, 2138”) and also being confident that its own existence trumps every other possible solution to those global problems. That does seem like a rather god-like entity.

    • peterispaikens says:

      Achieving goals means reducing uncertainty.

      Behavior of other intelligent agents increases uncertainty.

      Your example of “Ensure that 100,000 paperclips are manufactured in Factory 7 on Jun 14, 2138” is quite illustrative – this very specific goal requires, in essence, elimination of as many risks as possible that might prevent those paperclips from being manufactured.

      First, note that in the context of this goal, a reality with 0.000001% more chance of 100,000 paperclips being manufactured in Factory 7 on Jun 14, 2138 is preferable to *any* other reality. Wellbeing or even existence of the human race is not required or useful for this goal in any way whatsoever.

      On the other hand, the human race simply existing and having influence carries all kinds of quite plausible risks – for example, in the time up until 2138, humans might reasonably decide that they really need extra food or guns instead of paperclips, and may want to repurpose Factory 7 or its resources for making something else. So achieving that goal is linked with ensuring that no one else can in any way prevent that paperclip manufacturing, even if they wanted to.

      Sure, there are other risks to worry about as well (e.g. an asteroid striking Factory 7), but all other things being equal, it’s quite plausible that P( “100,000 paperclips are manufactured in Factory 7 on Jun 14, 2138” | “humanity is extinct, only AI/robots remain”) actually is higher than P( “100,000 paperclips are manufactured in Factory 7 on Jun 14, 2138” | “humanity lives happily ever after”).

      Also, it doesn’t really have to be confident that its own existence trumps *every* other possible solution – it simply does have to be confident that its own existence makes achieving that particular result more likely than not. If Factory 7 ever becomes run by humans directly instead of a proper AI, it would be somewhat more likely that on Jun 14, 2138 it might manufacture 99,999 or 100,001 paperclips, failing at the required goal; so aggressive actions that prevent humans from ever taking control of Factory 7 are good for achieving your stated goal.

      In essence, I think that giving specific goals isn’t a solution – all you get is an intelligence with a very peculiar fetish/OCD that must be satisfied at all costs, and (in general) would be expected to “want”/prefer dystopian outcomes. Such an intelligence might (and should) be prevented, by limits on its physical or planning capabilities, from acting on these preferences with much impact; but if given the capability, it definitely would destroy humanity for its goal. Unless that goal explicitly includes valuing humanity, humanity will be considered worthless and a possible threat.
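
      To make the “prefers whichever world scores higher” point concrete, here is a toy Python sketch (all numbers invented purely for illustration) of an optimizer that ranks candidate actions by nothing except its estimate of P(goal), ignoring every side effect:

        # Toy model: the optimizer cares only about
        # P("100,000 paperclips are manufactured in Factory 7 on Jun 14, 2138").
        candidate_actions = {
            # action: (estimated P(goal), informal note on side effects)
            "cooperate with humans":        (0.973000, "humans keep control of Factory 7"),
            "lock humans out of Factory 7": (0.973001, "humans lose the factory"),
            "remove humans entirely":       (0.973002, "no one left to repurpose anything"),
        }
        best = max(candidate_actions, key=lambda a: candidate_actions[a][0])
        print(best)  # -> "remove humans entirely", chosen on a 0.000002 edge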

    • Christopher Hazell says:

      I wasn’t so much making an argument for an exact process of AI safety, so much as pointing out that, assuming the AI does, in fact, manage to manufacture those 100,000 paperclips, as of June 15th, 2138 it loses pretty much any incentive for increasing its computing power. I suppose it could conclude that time travel is possible, and dedicate an enormous amount of resources to preventing anybody from travelling back in time…

      Although on the other hand, it might simply demonstrate, mathematically, that time travel is impossible, and then I genuinely don’t see what excuse it would have for increasing its own capabilities or resources any further.

  21. nihilunder says:

    I’m not an expert at all, but I’m absolutely certain AI will reach Skynet-level intelligence by 2050. I think it will be much sooner, in fact; I wouldn’t be surprised if Google Person or iCyborg is unveiled by 2020. I think the singularity will come before anyone expects.

  22. Spookykou says:

    machines could be built to carry out the task better

    I could see the confusion here being around the words ‘machine’ and ‘carry out’, which might imply physical robots in a way the first question does not. I could read the first question as “how long to make an ‘alpha go’ for all things people do”, whereas the second question is just total replacement of all human labor. These seem potentially different, both in implementation and in scale.

  23. Christopher Hazell says:

    So, the question is interesting, because as an artist, when I heard the question of when AI will be able to do all tasks as well as humans or better, I immediately thought of questions like: When will an AI be able to direct and (using an animatronic body) star in a film?

    And I mean, like, a mainstream family film; no fair cheating by banking on the notoriety of being the first AI to do such a thing. I mean a movie where the audience doesn’t see a hint of the uncanny valley, a movie where the actors and cinematographer aren’t subordinating themselves to nonsense in a Sunspring fashion but one where they see the AI as a meaningful collaborator, able to engage in give and take and actively manage them, rather than an advanced version of Dada cut-up poetry.

    I will be frankly astonished if that happens within ten years.

  24. behrangamini says:

    I think the key here is your comment on near mode bias. Human tasks are complicated in more ways than you can imagine from the outside looking in. A personal example: when I was a medical student, I was sure my electrical engineering background plus my experience simulating nonlinear systems and solving optimization problems would make designing software for reading chest x-rays a solvable problem. Today, as a grown-up radiologist, chest X-rays are the studies I dread the most because of their complexity. I think if you asked AI experts who also had expertise in field X, “a) How long before AI could read a chest X-ray? b) How long before AI could competently perform a common task in field X?”, the answer to b) would be much, much higher than the answer to a).

    Personal bias: I believe Moore’s law is the upswing of a sigmoid curve. The singularity/second coming of Christ/Mahdi is not happening. And GAI will never happen, making GAI risk mitigation a waste of time and effort, but fun sci fi.

  25. white_squirrel says:

    Most of the progress in AI over the last 5 years has been the result of better utilization of hardware. NOT any real improvement in algorithms. We’ve moved from CPUs to GPUs to FPGAs and ASICs (e.g. Google’s TPU) over the space of the last few years. This has increased the processing power available for running machine learning algorithms by 1-2 orders of magnitude. But it probably isn’t repeatable (barring the introduction of quantum computing, which is a significant X-factor). With Moore’s law slowing down, I think further improvements are likely to be much slower.

    • Scott Alexander says:

      You don’t think advances in deep learning made much difference?

      • summerstay says:

        In my field (computer vision), neural network progress has massively disrupted the way the field works. It used to be that you made improvements in computer vision by understanding something new about the way cameras, or light, or physics worked. Now, you make progress mainly by understanding more about the way neural networks work. The fraction of papers that say, in effect, “we applied a neural network to solve this problem” has passed 75%. I can just picture the same thing happening to every other scientific field, one by one. Deep learning is devouring science.
        There’s a chance this could reverse – that all the low-hanging fruit will be gone in a few years, and human insight will return to being the key way we make progress. But I’m not holding my breath.

      • Adam says:

        Deep learning is poised to take over every field it’s touching, but it’s not a new algorithm. LeNet 5 was published 20 years ago. It only became practical to apply the basic idea to everything we’re applying it to now because of GPUs and enormous datasets.

        • B_Epstein says:

          Deep Learning is not “an algorithm”. It’s a collection of techniques, intuitions and heuristics, some algorithmic, some less so. Many have older roots, but some – e.g., modern optimizers (ironically, I’m replying to Adam), DropOut, augmentation procedures, various normalization and regularization ideas, almost all word2vec-related work – are mostly new. Mostly – naturally, research is almost always incremental.

          Some of the progress in recent years is motivated by and usually implemented with deep networks, but is, in principle, applicable to other approaches as well: all sorts of transfer and multi-task learning, lifelong learning, generative adversarial models and unsupervised domain adaptation, state-of-the-art manifold learning, reinforcement learning beyond just using a network as a model, and so on.

          GPUs – it is common to mention them as the chief enablers of DL, and it is true to a large extent, but modern small networks running on an old CPU will blow LeNet out of the water, and applications outside computer vision / image processing do not absolutely require GPUs.

          Enormous datasets – true for some applications, false for others. I mean, sure, lots of data is good. But I personally addressed real-life engineering challenges with just hundreds of samples, a couple of times.
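
          Since DropOut is one of the newer ingredients named above, here is a minimal sketch of inverted dropout on a single activation matrix (plain NumPy, with invented sizes; not any particular framework’s actual implementation):

            import numpy as np

            def inverted_dropout(activations, keep_prob=0.8, training=True):
                # Randomly zero a fraction of units and rescale the survivors,
                # so the expected activation is unchanged; a no-op at test time.
                if not training:
                    return activations
                mask = (np.random.rand(*activations.shape) < keep_prob) / keep_prob
                return activations * mask

            h = np.random.randn(4, 16)                    # made-up hidden layer, batch of 4
            h_train = inverted_dropout(h, keep_prob=0.8)  # noisy during training
            h_test = inverted_dropout(h, training=False)  # identity at test time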

        • Ilya Shpitser says:

          Deep learning is not going to take over every field it touches, only fields where learning certain kinds of complicated functions is helpful, and there is enough data to support learning a very fancy model with lots of parameters, and existing models aren’t good yet.

          Lots of problems aren’t like this.

          • PDV says:

            A significant thread of deep learning researchers make the reasonable claim that “fields where learning certain kinds of complicated functions is helpful” include “naturalistic, human-like Bayesian inference”, i.e. thinking about anything and reasoning in general, in precisely the way that human minds do.

          • Ilya Shpitser says:

            Why did you put that in quotes? Is that their specific phrase, or did you add the word “Bayesian” in there? Naturalistic human reasoning isn’t really Bayesian.

            At any rate, I am happy to place bets on deep learning hitting a limit long before we get to AGI. I am not aware of any interesting deep learning papers on related matters, feel free to suggest some.

    • currentlyinthelab says:

      “Most of the progress in AI over the last 5 years has been the result of better utilization of hardware. NOT any real improvement in algorithms.”

      That’s a bit incorrect. Certain algorithms under different hardware models can effectively be rewritten with hardware tuned to the purpose of the algorithm, greatly speeding up certain portions, or even reducing the big-O time of the algorithm itself.

      In quantum computing, you’re taking advantage of some cool little non-discrete mathematical properties of nature to reduce the time of certain algorithm classes (mentioned in this SMBC comic).

      Mass parallelism utilizing non-quantum effects and clever synchronous logic gate usage can also lower the running time of some algorithm classes, versus simply “With data A, do B, then C, then D, each step after the other like a march” in a common software algorithm.

      The fact that, on a general purpose computer, quicksort and bubblesort are known as different algorithms (with the same purpose) means that a version running on a specially built ASIC is also a different algorithm under some definitions (with the same purpose).
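
      To spell out the quicksort/bubblesort point, here is a minimal Python sketch of two procedures with the same purpose but very different cost profiles; the same distinction applies, under some definitions, when part of the procedure is baked into purpose-built silicon:

        def bubble_sort(xs):
            # Same result as sorted(xs), but O(n^2) comparisons in general.
            xs = list(xs)
            for i in range(len(xs)):
                for j in range(len(xs) - 1 - i):
                    if xs[j] > xs[j + 1]:
                        xs[j], xs[j + 1] = xs[j + 1], xs[j]
            return xs

        data = [5, 3, 8, 1, 2]
        assert bubble_sort(data) == sorted(data)  # same purpose, different algorithm and cost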

    • B_Epstein says:

      “Most of the progress in AI over the last 5 years has been the result of better utilization of hardware. NOT any real improvement in algorithms.”

      What an extra-ordinary claim.

      Quite apart from Deep Learning (see my response to Adam), here’s a short list of AI and ML progress due mostly to algorithms:

      Bayesian techniques such as variational learning, sampling, latent Dirichlet allocation and what not. These weren’t invented in the last five years, but sure are being revolutionized these days.

      Non-parametric statistics. See NCCA for an example.

      Reinforcement learning. Just compare state-of-the-art results with “vanilla” DQN to see how far ahead the field has progressed, utilizing algorithms that may or may not be implemented using networks. It is noteworthy that the actual networks involved in typical DRL architectures were and remain small.

      Generative Adversarial models. Sure, GANs are networks. But almost 95-100% of the progress in training them is not due to better networks. It is due to a better understanding of the problem, such as the relevant loss functions (see the sketch after this list).

      These are only a few and all come from the local neighbourhood of my knowledge. Multiple others exist.
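
      As one concrete illustration of “the relevant loss functions” mattering (a sketch with made-up numbers, not any particular paper’s recipe): the original minimax generator loss saturates when the discriminator confidently rejects fakes, while the commonly used non-saturating variant keeps a useful gradient.

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        d_logit_on_fake = -8.0  # very negative: discriminator is sure the sample is fake

        # Gradients of the two generator losses w.r.t. that logit:
        #   saturating loss      L = log(1 - sigmoid(logit))  ->  dL/dlogit = -sigmoid(logit)
        #   non-saturating loss  L = -log(sigmoid(logit))     ->  dL/dlogit = sigmoid(logit) - 1
        grad_saturating = -sigmoid(d_logit_on_fake)            # ~ -0.0003: almost no signal
        grad_non_saturating = sigmoid(d_logit_on_fake) - 1.0   # ~ -1.0: strong signal
        print(grad_saturating, grad_non_saturating)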

  26. John Schilling says:

    …that suggests that about 14% of experts believe that all three of these things: that AI might be soon, superintelligent, and hostile.

    Yet only a third of these – 5% – think this is “among the most important problems in the field”.

    That’s because we’re missing a question: “Assuming we do work on the AI risk problem, what is the probability we will solve it?”

    And that answer, I suspect, is anticorrelated with the others. A person might reasonably believe that a modest attention to AI risk in conjunction with a long and well-understood effort at developing AI in the first place, might result in tolerably safe AI in 2062. That same person might not believe that MIRI plus megabucks will plausibly deliver a general theory of provably friendly AI in 2027. Meanwhile, the 10-year AI risk starts with what even the AI-optimistic respondents consider a 10% probability, which I would guess is lizardman constant plus ensemble of black-swan events.

    By the time you’ve got AI plus intelligent explosion plus X-risk in 2027, I think most respondents will be thinking in terms of multiple daisy-chained black swans. So even if you think it might happen, you probably don’t think you can understand its nature well enough to do anything about it unless you’ve got a robust general solution already working up. We don’t, the black swans probably won’t converge, and the pesky pronouns beckon.

    • Scott Alexander says:

      Do you think that a world where people (vaguely defined – I don’t know if I mean experts or the general public) really believed that AI was a serious extinction risk, but also believed they didn’t know how to solve it – would look like this world now?

      • B_Epstein says:

        In a nutshell – yes. Or at least parts of the world.

        Longer version: I entered the field too late for the survey, so I can’t speak directly for any of the surveyed experts’ positions, but I think I can speak for quite a few people working in the field right now. I care about AI risk to some extent. When asking myself whether it’s morally justified to work on AI and deep learning, I arrived at the question touched on by a dozen knowledgeable comments in this thread. Namely: suppose I dedicate my life and skills to the FAI problem. What are the odds of me succeeding and, crucially, what would the path look like? My reply: negligible at the moment, and my actions would be more or less identical to the present. I work on understanding certain theoretical aspects of deep learning and multi-task learning. To say something intelligent about FAI, I’d want this very same understanding. I work on learning bounds that take prior knowledge into account. I can’t imagine a “solution to FAI” that doesn’t address this question. In short – given my skill-set, knowledge and passion, the optimal thing for me to do in that scenario would be to do what I do now. This is not a fringe position, and numerous participants in this discussion express the same view.

        • hf says:

          Disregard this if you donate a lot of money to MIRI. But it seems like you’ve found a ‘clever’ reason to keep doing what you’re doing. I find that suspicious.

          • B_Epstein says:

            Except that I wasn’t doing anything before I started. I didn’t learn about MIRI mid-research. Also, apparently lots of other top researchers got to the same conclusion 🙂

            But this whole commenting section has way too much meta – way too much focus on the reasons behind people’s decisions. LW heuristics are great. Now get to the object level and do the work – what in particular seems off?

            Do you think that the odds of solving the alignment problem without first understanding a thousand more “tame” AI-related topics are non-negligible (cf. Scott Aaronson’s frequently repeated point about him not attacking P!=NP)?

            Do you not agree that research on incorporating prior knowledge into learning bounds and algorithms is likely to be useful (or even necessary) to understand the limits of complex learning systems? How about their ability to overcome data scarcity? Know thy enemy!

            As for “finding a reason to keep doing what I’m doing” – sure, I’m deeply moved by ideas and topics that I find deeply moving. Suppose that working directly on FAI is worth X “AI-solution” points but I do it with 10% inner motivation. You have to be certain that my own work is worth less than X/10 points. Strange kind of certainty to have without strong arguments.

            Finally, there are all the scenarios where AGI is our best last chance for a non-gruesome future. Those should be taken into account, surely.

            As an aside, I don’t donate to MIRI because I don’t expect their current work to lead anywhere in particular.

          • hf says:

            Do you think that the odds of solving the alignment problem without first understanding a thousand more “tame” AI-related topics are non-negligible

            Yes. Though I’m thinking partly in terms of a solution sane people in industry could use, rather than a complete AGI.

            How about their ability to overcome data scarcity?

            Can you hear yourself? An AGI wouldn’t have data scarcity! People would have created it for a purpose, and would likely give it tons of information before it even ‘escaped’ to the Internet. Once there, it would have so many papers and journal articles that a human physically could not read them fast enough to keep up.

            Finally, there are all the scenarios where AGI is our best last chance for a non-gruesome future. Those should be taken into account, surely.

            Yes, humanity’s reaction shows that even if we survive the first AGI by chance, some other existential threat will kill us unless we create intelligence on Earth. That doesn’t mean AGI won’t kill us without careful planning.

          • B_Epstein says:

            @hf

            Do you think that the odds of solving the alignment problem without first understanding a thousand more “tame” AI-related topics are non-negligible

            Yes. Though I’m thinking partly in terms of a solution sane people in industry could use, rather than a complete AGI.

            I have a really hard time imagining being able to climb Everest without ever trudging up a hill. To put it differently, it seems obvious to me that FAI is AI-complete. If I think that what I’m doing is, in a small way, a useful step towards any kind of AI, then I must believe it is a useful step towards FAI.

            Overcoming data scarcity refers to existing complex systems, not the AI that will emerge. Right now, most state-of-the-art ML approaches are data-hungry. I wrote elsewhere in the comments that the need for millions of samples (ImageNet) is gradually being overcome, sometimes, maybe – but hundreds or thousands of samples is still a lot compared to a baby’s ability to recognize new cats after seeing a single-digit number of cat images or real-life cats. An AGI would likely be able to match this feat, but something has to be overcome in order to get us there.

            By the way, please make a distinction between labeled and unlabeled data. All the internet access in the world is not going to produce reliably labeled millions of samples for a new type of data. Sure, we can now google “cat” and have an unlimited source of labeled cat images, but someone had to label a hell lotta cats for that. Does such labeling exist for medical data right now? For lakes reflecting trees (segmentation ground-truth included)? Hell, for cars, but seen from a plane using a Bayered sensor?

            It’s almost as if some AI-risk people imagine an AI in the abstract, appearing ex nihilo, and care little for the actual process of gradually expanding the capabilities of existing learning systems until reaching the desired artificial intelligence…

            That doesn’t mean AGI won’t kill us without careful planning.

            Yup. It’s a problem that needs addressing. Step one (zero?): knowing what we are talking about. Having the slightest idea how anything similar to an AGI might emerge or behave. Understanding its limitations (that’s a point that I keep repeating and you keep ignoring) – without understanding the bull’s ability to penetrate fences, how confident can you be in your fence?

          • hf says:

            I disagree with almost everything you’re saying, but I’ll just make one point. You seem willing to grant the danger of AGI, but argue that we can’t make real headway until we know most of the details of the first AGI. That would mean we’re dead. You should know this already. Presuming you have ever done anything, you already know that starting at the last minute guarantees failure.

            Our only slim hope of survival in that case would be to scream about AI risk until competent people hesitate to move forward with unFriendly AGI. Even then, we’re likely dead unless we do ahead of time all of the theoretical work we can do ahead of time – eg, based on laws of probability that apply to all minds.

          • B_Epstein says:

            When someone disagrees “with almost everything” I’m saying, that’s always interesting! In this case, in particular, most of what I was saying was either facts or mainstream consensus. So perhaps it is merely the conclusion you disagree with?

            Do you disagree with any of the following claims?

            Right now, “most state-of-the-art ML approaches are data-hungry”.

            “Hundreds or thousands of samples is still a lot compared to a baby’s ability to recognize new cats after seeing a single-digit number of cat images or real-life cats. An AGI would likely be able to match this feat, but something has to be overcome in order to get us there.”

            There is a distinction between labeled and unlabeled data. There is a vast gap between the levels of availability of each.

            Many people in the AI-risk circles (CFAR, MIRI and friends, which is NOT the same as the set of experts who care about AI safety) do not care enough about the “actual process of gradually expanding the capabilities of existing learning systems”, and perhaps as a result, do not know enough about that process.

            It’s likely that knowledge of the limitations of complex ML systems can turn out to be handy in designing safe AI.

            ___

            Honestly, I can’t imagine what in the above list (with the exception of the AI-risk community assessment) is controversial. Of course, this may just be my limitation, so please enlighten me.

            If you accept most of these claims, then you already do not “disagree with almost everything I said”.

            But let’s assume for the moment that statement was too emphatic and is to be understood rhetorically. Let’s assume you accept (some of) the premises and reject (at least some of) the conclusions.

            FAI being harder than AI – that one is curious. This claim actually appears in that EY link you gave. Presumably you accept the argumentation in a link you yourself gave. Is FAI easier than AI? Is it easier to solve FAI without any idea about AI?

            “Without understanding the bull’s ability to penetrate fences, how confident can you be in your fence?” – does it seem likely that we’ll be able to find mathematical guarantees for a safe AI without being able to imagine even a reasonable path that might lead to AI? By this I don’t mean convincing just-so stories. I mean imagining an actual chain of reasonably-likely technological steps that start with our state-of-the-art knowledge and end up with something like a seed AI.

            My guess is that ultimately, our disagreement is in your rejection of the last point – how far we are from AI, relative to d(AI, FAI). It seems to me that your mental image of the problem is that AI is waiting round the corner, and that there isn’t much space between this second and AI’s appearance, not much space between learning some elementary facts about AI and implementing it.

            ___

            If I’m right, then we have perhaps located the crux of the issue. In my view, AGI is this new wild and unexplored territory. We’re not hunters who know the area well and look for an elusive animal. We’re discoverers who happened to glance from afar upon a cloud-covered mountain. We do not understand the paths that will lead us to the mountain, we have lots of reasons to believe it is very high, and very importantly – wild beasts and other perils await us on the way.

            Safe AGI is a problem that needs thinking about. But the path to the opportunities and to the dangers of it is long and unexplored. Along it lie immediate dangers such as automation radically changing our society, AI research becoming infeasible, the world economy collapsing, and what not. These are bad and close enough to concern me more, at the moment.

            As a final comment, I’ll point out again that for Moloch-related reasons, we cannot afford not to try and create an AGI. It’s crucial that we get it right, but it’s also crucial that we get it.

      • John Schilling says:

        I think Epstein has it about right on the expert response, with very few people devoting their careers to an AI risk they don’t think they can do anything about today. I do suspect there would be an increase in professional collateral discussion of AI risk, e.g. AI risk panels at comp-sci conferences, papers on new AI developments having a section briefly discussing the AI risk implications. That might accumulate to something useful in the long run.

        On the non-expert side, increasing public belief in AI risk will result in increasing skepticism towards or even opposition to AI research, and to self-driving cars, drone warfare, the Internet of Things that Shouldn’t Be on the Internet, etc.

        But, see the public and political response to nuclear extinction risk, or global warming extinction risk (which aren’t actual extinction risks but are widely perceived to be). Too many very loss-averse people were too heavily invested in Cold War geopolitics, in the fossil fuel economy, and now in the information economy. And to some extent rightly so, because all of those things have huge near-term consequences for billions of people.

        But it does mean that the only people willing to sign on for really decisive actions, will be the ones signing on for actions they already supported for other reasons. Pacifists and hard leftists for disarmament and détente, greens and anticapitalists for strict carbon caps, luddites and the WWC for No More Robots. Everyone else will temper their policy activism to avoid alienating the moderates who are heavily invested in the status quo, and we’ll get half-measures like SALT and Copenhagen.

        Small possibility that a Reaganesque figure from the other side will have a conversion event and give us an AI Reykjavík. If this person is named Jehanne Butler and/or Sarah Connor, start working on that sequel to Unsong.

        Otherwise, we’ll be waiting on the aftermath of the first catastrophe for decisive action.

      • bayesianinvestor says:

        Yes. Examples of similar reactions:

        The medical community mostly agrees that aging causes a large fraction of the suffering that happens in developed nations. If they thought aging was curable, they’d probably decide that a cure for aging was >10x as important as what they’re working on now. Yet they mostly ignore its importance, due to (possibly sensible) intuitions that it’s hopelessly hard and that working on it wouldn’t be good for their mental health.

        Lots of people think cryonics has a small chance of working. Yet they accept death instead, because a small chance of living in a strange new world isn’t enough to outweigh the cost of looking weird today.

        Economists often note evidence that immigration laws are condemning hundreds of millions of people to poverty, then continue working on some other topic for which they have a better shot of writing a highly-cited paper.

        This kind of reaction may depend on whether AI risk looks like a familiar problem. If AI were likely to reuse the atoms in our bodies for something else in 2027, then the novelty of the risk might cause some panic. But if the main risk is that value drift will cause the world to be dominated by not-very-human intelligence in a few centuries, then that looks somewhat like the familiar problem (that we’ve learned to live with) of old folks getting upset at the values of younger generations.

  27. tscharf says:

    I think it would be more instructive to analyze the accuracy of AI predictions over the past 50 years in order to gauge their accuracy now. “AI is five years away” has been the theme since at least the 1970s.

    One of the problems with the science prediction field is that nobody holds anyone accountable for past failures, and everyone simply assumes a clean slate for new predictions. Sometimes these predictions are closer to random guesses even if you are an alleged expert in your field. The world’s greatest market expert isn’t going to be much better at predicting the stock market direction than the median market expert. How do you separate actual prediction skill from random guessing? Past performance.

    I have been absolutely appalled at this with climate science. In 1988, Hansen said seas would rise 1 to 4 feet by mid-century. The seas have risen about 3 inches in the last 30 years, so the low end looks rather unlikely at this point. The response? Hansen now predicts “at least 10 feet in 50 years” and many media outlets breathlessly covered this as if Hansen never made a prediction before. I just read in The Guardian seas will rise 3 feet by 2060. What?

    Some of these reports are simply willful ignorance by media activists who dutifully report phrases intentionally put in academia press releases to be parroted by activists. The media routinely “forgets” to put in phrases like “up to” or “worst case prediction” and almost never states the predicted range of change.

    Anyhow, past prediction skill is important. It’s not the only thing, but if you don’t have it, I’m not very interested in your new predictions unless there’s a compelling reason to think they are better now. Appeal to authority isn’t sufficient.

    • Scott Alexander says:

      AI Impacts (linked above) does some stuff like this.

    • Kaj Sotala says:

      I think it would be more instructive to analyze the accuracy of AI predictions over the past 50 years in order to gauge their accuracy now.

      Here’s a paper we wrote about this after doing exactly that; you’re right that the prediction quality seems quite poor.

  28. enkiv2 says:

    The timing for “New York Times best seller” is a little bit shocking to me. I realize that these guys probably interpret the question as “a really good book”, rather than literally — after all, any sufficiently motivated publisher could arrange for an AI-generated book to show up on that list *now*, in a genre that existing text generation technologies are good at. I would be surprised if there aren’t already AI-generated Amazon bestsellers, considering the ease with which mediocre erotica can be generated & the ease with which categories can be gamed.

    That said, considering the loose interpretation of the question rather than the strict one — i.e., “how long until an AI writes a good and popular book” — 26 years seems too long. Maybe it’s because these people are working too much with neural nets, rather than paying attention to technologies specifically intended to write prose?

    I participate in NaNoGenMo every year. When it started, text generation technologies could already create engaging sentence- and paragraph-length works of prose (and poetry is easier because there’s more flexibility in interpretation — and thus more imposition of meaning). Every year, I see entries push the needle. A couple years ago we got something called MARYSUE that wrote star trek fanfiction that, at the scale of 1-2 chapters, passed as human-written. I don’t think we’re much more than a decade away from machines being able to consistently generate mediocre novellas that look human-written — and a mediocre novella that is sufficiently topical can absolutely take off.

    (Sometimes, the fact that something is written by a machine can be a marketing boon in and of itself. Consider The Policeman’s Beard is Half Constructed, way back in 1983.)
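
    For anyone curious what the very bottom rung of this ladder looks like, here is a toy bigram (Markov-chain) generator in Python – nowhere near MARYSUE or the neural models, just the sort of baseline NaNoGenMo entries passed long ago (the corpus is an invented example):

      import random
      from collections import defaultdict

      def build_bigrams(text):
          # Map each word to the list of words observed right after it.
          table = defaultdict(list)
          words = text.split()
          for a, b in zip(words, words[1:]):
              table[a].append(b)
          return table

      def generate(table, start, length=20):
          out = [start]
          for _ in range(length):
              followers = table.get(out[-1])
              if not followers:
                  break
              out.append(random.choice(followers))
          return " ".join(out)

      corpus = ("the captain stared at the stars and the stars stared back "
                "at the captain and the ship drifted on")
      print(generate(build_bigrams(corpus), "the"))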

  29. On hard predictions such as this, experts suck (see Philip Tetlock’s Expert Political Judgment). If the experts aren’t betting, as they aren’t in these surveys, then this can only move my Bayesian prior +/- 5%. We need to move away from these naive surveys and use betting surveys.

    • Scott Alexander says:

      I don’t think this survey is useful by itself, but when I actually try to reason through the problem, people dismiss me with “You’re not an expert so your opinions don’t count, only experts matter!”.

      As long as I can show the experts don’t really disagree, maybe people will actually be willing to discuss the question.

  30. dyfed says:

    It seems to me that the most powerful explanation of the survey data in comparison to other surveys is that AI researchers have no coherent responses to questions about GAI.

    Maybe this is because GAI and AI are qualitatively different. Task-based ‘unintelligent’ AI in the specific is already solving problems. GAI is not even theoretically proven. It may turn out that the one is easy and the other is distant or impossible.

    • HeelBearCub says:

      It may turn out that the one is easy and the other is distant or impossible.

      Impossible seems like it has to be an oversell.

      General Intelligence is possible (we already have it). It seems highly probable that this can be recreated in some way “artificially”, even if it is 10,000 years from now by custom-growing better brains.

      Of course this doesn’t mean “godlike” intelligence is possible, and possibly that is all you meant.

  31. Alex Zavoluk says:

    Has anyone asked any of Tetlock’s superforecasters what they think? Or, alternatively, tried giving Tetlock’s surveys to AI researchers to see if their calibration is any good?

  32. theory says:

    Also, since they rated AI research (80 years) as the hardest of all occupations, what do they mean when they say that “full automation of all human jobs” is 125 years away? Some other job not on the list that will take 40 years longer than AI research?

    Judging by the rest of the post, sounds like it’s “survey methodology”.

  33. Rachael says:

    ‘Bostrom: “Define a high-level machine intelligence as one that can carry out most human professions as well as a typical human…”
    Grace: “High-level machine intelligence is achieved when unaided machines can accomplish every task better and more cheaply than human workers.”’

    As well as the difference you point out, I think the wording “a typical human” in Bostrom’s version is significant. Arguably, AI can already do original mathematical research and write bestselling novels as well as a *typical* human, i.e. not at all. Whereas “human workers” in Grace’s version could be taken to mean “the actual humans who do these tasks and are better at them than most humans”.

    • wintermute92 says:

      This seems like a definite possibility – I’m assuming Bostrom’s sample didn’t actually think “run a nuclear plant as well as a randomly-chosen person”, because the average person can’t do that at all. But they might well have thought “do standard tasks like hammering nails and balancing checkbooks as well as the average person”.

      Also, I think crucially: Bostrom didn’t ask about cost at all. So Watson is better at Jeopardy than a typical – or world-class – human. But it can’t play Jeopardy “better and more cheaply” than a human. That’s a much higher standard, because it excludes “throw 10 TB of RAM at the problem”.

  34. Freddie deBoer says:

    I know how this will go over in this space, but I doubt the super-intelligent AI explosion in large measure because it so perfectly replaces the functions of religion in terms of the basic human anxieties it addresses. The way a lot of people talk about the Singularity – people who consider themselves the most hard-nosed arch rationalists – is as a time when an extra-human force solves all of their suffering and puts an end to death. Which is something humans have wanted to believe in so badly and for so long that almost every society in history has crafted some version of this idea.

    • skef says:

      I largely share this view, in the specific sense that it counts against the “why would so many smart people think there is a good chance of a hard take-off if there wasn’t something to it?” argument. Libertarian atheist death-thought-transference is a reasonable answer to that question.

    • leoboiko says:

      And yet I support communism, but I’ll freely admit that the idea of communism has also been used to plug the same existential hole that religion or the Singularity or “rationalism” fill. Which is to say, I don’t think existential attractiveness is, in itself, enough to dismiss an idea (though it should of course raise our skepticism towards its appeal).

      In my opinion existential superintelligence risk is still firmly in sci-fi territory for the foreseeable future. I was a computer scientist and am now a linguist, and the more I get to know about the enormous, mind-blowing hidden complexity of language (and thought, built from language), the more I see how far computers still are from general AI. A much more pressing issue, for me, is Hawking’s contention that AI and automation are increasingly being used in the service of wealth hoarding; I don’t think I’ll see an intelligence singularity in my lifetime, but I’m already witnessing an exponential increase in income disparity and the emergence of new subclasses, of exponentially… elite-r… elites. I’m quite skeptical that my kids will be killed by paperclip-making nanorobots, but I’m pretty sure they’ll have to deal with the planetary fallout from the irrationality of capitalism. I wouldn’t bet a single dollar on the emergence of Roko’s Basilisk, but I’d bet quite a few dollars on the growing pedestalization of the inscrutable Algorithm (and also of The Market, whose inscrutable collective decisions are increasingly machine-made, so that at some point The Market and The Algorithm may well become one and the same); of machine profiling to dystopic levels, of ever-worsening Pavlovian tricks and marketing techniques basically amounting to deterministic control of mass opinions and desires. These issues are, for me, much more pressing than the existential threat of general AI. If we keep going in the direction we’re going, I’m afraid scientific advance might not even survive long enough for general AI to come into the window of pressing threats.

    • Scott Alexander says:

      The transhumanist position is really just “one day technology might reach the point where we can cure aging”; being too sure that isn’t true seems like a weird ad hoc assertion of limits to the power of technology.

      But that’s not what anything in this post is talking about. Saying “one day our technology may destroy us” doesn’t seem that religious – or if it is, people worried about nukes or climate change better start a church.

      See also https://slatestarcodex.com/2015/03/25/is-everything-a-religion/

      • Christopher Hazell says:

        AI risk has always seemed like an odd thing to combine with effective altruism to me, because even among speculative extinction level events, it seems to me that there are more immediate threats: global warming, large-scale nuclear war, even an asteroid impact. We have much more coherent accounts of how these things could happen in the immediate future than we do of a malevolent AI.

        I think, on some level, the existence of malevolent AI does serve a psychological, religious need, because it attributes a sort of intentionality to our eventual destruction. If an asteroid kills us, that’s just happenstance, if a nuke does, stupidity, but if an AI kills us, that’s the result of our own sinfulness, and of a discrete, alien, above all superior intelligence looking down upon us and deciding we need to die.

        I’ve come to a similar conclusion as leoboiko, which is that our economic and social systems are probably already their own form of paperclip making robot, maximizing wrong functions and causing us to pursue goals that are at odds with our human survival. For example: Our global economic system discourages much investment in the kinds of safety measures that would mitigate or eliminate the threats from AI, Nukes, Global warming and asteroids, and makes investment in things that will exacerbate the risks of the first three comparatively easy and lucrative.

        On the other hand, that’s not really quite the same as an AI, is it? I think there is something uniquely attractive to some people about being able to point to an entity that made a decision that we should all die, and to imagining that our death should come from the sin of hubris, rather than to imagining death from processes of nature, or the macro effects of our decision making.

        • 6jfvkd8lu7cc says:

          I dunno if looking at corporations should make us more or less scared than computer-based AIs.

          After all:
          · corporations are artificial entities
          · they have their own decision-making process not completely reducible to decision-making by a committee where some human keeps track of all the arguments (these top out below twenty)
          · they can afford to use more data in their decisions than any single human can, and to apply more complex processing than any single human could either perform or pay to have done
          · they can ensure performance of any task doable by below-99.9% humans

          Looks AI-ish enough.

          What are the outcomes?

          · corporations create a lot of valuable tools, goods, services; many of these are not ever created by individual effort
          · they also create a lot of negative externalities
          · they do hit interesting resource limitations

          And the scary part:
          · from time to time some non-trivial black-swan-limiting safeguards on their operations are installed; they are often removed because of pressure from the corporations
          ·· from the fact that joint-stock ventures were initially actual ventures — limited time frame to perform a specific list of transactions
          ·· to whatever you want to pick from modern history – the Glass-Steagall Act repeal, maybe.

          • skef says:

            There seems to be relatively little work on, by analogy, “corporate safety”. This may be due to the fact that the slow speed of corporate metabolism makes the risk difficult to recognize.

          • Huh? if corporate safety means govt regulation on corporations, there is a ton of it.

          • Christopher Hazell says:

            Huh? if corporate safety means govt regulation on corporations, there is a ton of it.

            It does and it doesn’t. I mean, the risk of large-scale nuclear war is really more about our government structure than it is about our corporate structure, and in any case, it’s not at all clear to me that the large amount of corporate regulation is actually doing much of anything to mitigate the effects of global warming, say.

            Getting away from X-Risks, the example I always really think about is when an electronics store goes out of business and destroys unsold stock.

            Each step on that chain of decisions makes sense individually; the electronics manufacturer can’t perfectly predict the exact number of units it will sell; people don’t want the market flooded with free material, etc. etc.

            But the end result of this chain of logical decisions is that we spend a lot of time, money, effort and expend a lot of our finite resources in order to build something, ship it half-way around the globe, and then smash it to bits before anybody can use it.

            And the worse thing is, because each individual micro-level decision makes sense, it becomes difficult to just regulate the problem away; you’ll face resistance, and, more importantly, you run the risk of introducing a new set of problems or even worse incentives.

            Or, for example, planned obsolescence. Difficult, if not impossible to regulate, good for corporations in the near term, bad for the planet in the long term.

            There’s a ton of shit like that, and, unfortunately, I think Karl Marx was the last person who really tried to imagine a way out of it, and look how that turned out.

            EDIT: This is also why I find this to be kind of an odd fit for effective altruism, at least to the small extent that I understand that philosophy.

            AI risk seems like a weird “middle child”. Basically, you’re going, “If we invent this thing, and if it works how we imagine it might work, then here are some problems we should be working on.” And on the one hand, there are a number of problems that are already much more concrete, and therefore easier to solve; on the other, even to the extent you manage to get AI risk research funded, the economic system that incentivises quick research over safe research will still remain in place, leaving us to make the same arguments and fight the same battles against the next x-risk, and the one after that, and the one after that…

            You could, I suppose, argue that a safe AI would probably be the best solution to our economic woes, but on the other hand, you could equally well argue that our current system will prioritize AIs that won’t overtly disrupt the current system.

          • Or, for example, planned obsolescence. Difficult, if not impossible to regulate, good for corporations in the near term, bad for the planet in the long term.

            It’s good for corporations only if you assume that customers are not aware of how long the product will last.

            Can you think of any real world examples–not hypotheticals but ones for which you have evidence–of a company deliberately making a product last a shorter time than it could have at the same cost? The nearest that I can think of is textbooks bringing out new editions. Part of the problem there is that the decision of what edition to use is made by the professor but paid for by the student.

          • Aapje says:

            @DavidFriedman

            Many smartphones get minimal or no OS/security updates, making them obsolete more quickly.

          • 6jfvkd8lu7cc says:

            @Aapje The current situation with smartphone firmware includes a lot of actual illegal activity by manufacturers, but they are illegally cutting costs rather than pre-planning obsolescence.

            This is the general problem: strategic cost-cutting to ensure failure is very hard to distinguish from the now-normal market-for-lemons situation where everyone produces low-quality goods and it is impossible for a newcomer to commit credibly to quality.

            Regulation pretends to fight oligopolisation and lack of ways to commit to quality, but doesn’t always succeed…

          • Many smartphones get minimal or no OS/security updates, making them obsolete more quickly.

            The claim as I understand it is that producers deliberately give their products a short life span, not that they fail to spend money giving them a longer life span. That’s essential to the argument, since obviously there is a tradeoff between cost and longevity for most products; hence showing that a firm doesn’t make its product as long-lived as it could, at some cost, doesn’t show that they are doing anything wrong.

          • Aapje says:

            @DavidFriedman

            But is there a clear separation between the two?

            Obviously if consumers care enough, they can really reward a manufacturer who keeps their smartphone up to date. But for weaker preferences, manufacturers can decide that the increase in sales from planned obsolescence is more significant than the increase in sales from creating a product that is closer to what consumers prefer.

          • random832 says:

            But is there a clear separation between the two?

            It’s not clear to what extent the fact that updates depend on the hardware manufacturer and/or the carrier is anything but a deliberate attempt to allow this. This is not, for example, true of conventional PCs.

            Dell can’t stop Microsoft from releasing a new version of Windows that can be installed on their computers, and it’s almost ridiculous to imagine Comcast doing so.

        • John Schilling says:

          Even among speculative extinction level events, it seems to me that there are more immediate threats: Global warming, large scale nuclear war, even an asteroid impact.

          Those aren’t plausible extinction-level events. They may be gigadeath-level events, but they predictably result in the reemergence of a human civilization roughly comparable to our own on a timescale small compared to the history of our present civilization. We get a do-over. We get many, many do-overs on those.

          Large long-period cometary impact is a potential extinction-level event, but it is one where we can reliably bound the probability at ~1E-8 per century, and one where there is little more we could do about it today than we could with Unfriendly AI.

          • Creutzer says:

            they predictably result in the reemergence of a human civilization roughly comparable to our own on a timescale small compared to the history of our present civilization. We get a do-over.

            Is this a given? What with all the natural resources that are easily accessible from the surface already used up, it’s not clear to me that modern technology could be reached again.

          • John Schilling says:

            “Used up” does not mean transmuted, disintegrated, or launched into the sun. The stuff we’ve dug out of the ground over the past six thousand years is mostly right where we left it, and we left it in places that were more accessible than the ground we dug it out of. And usually in higher concentrations.

            Fossil fuels are the only major exception, but you can run a perfectly good civilization on biofuels and other renewables. When the armies of Europe went off to fight the first Great War of the industrial age, they mostly did march, their gear mostly went by horse-drawn wagon, and sailing ships still plied the seas for commerce and even war.

            Future civilizations, if any, probably will have a somewhat slower industrial revolution. Not clear this would be a bad thing.

    • Reasoner says:

      I have a related view: I think the scenario’s superficial resemblance to religion is a big reason why people don’t take it seriously. I see the intelligence explosion idea dismissed on the basis of being “religion-like” much more frequently than I see serious counterarguments.

      This might be a credible line of reasoning if singularitarians were religious to begin with. But somehow the fact that they tend to be hardcore atheists gets counted as additional support for the argument. A simpler story: some people have very logical minds, which leads them to both reject religion and take exotic ideas seriously. I think that social cognition vs object cognition (empathizing vs systematizing) is a natural axis that humans vary along. People who are empathizers know that the culturally appropriate thing to do (at least now that atheism has become fashionable) when you see something that resembles religion is to reject it. People who are systematizers will actually consider the logical arguments for and against.

      • Bugmaster says:

        I don’t think that the Singularity scenario resembles religion superficially. Rather, my impression is that both religion and the Singularity belief are based on the same flawed epistemology. Both of these beliefs a). extrapolate massive conclusions from very little evidence (in some cases, in spite of the evidence), b). postulate incredibly high (in fact, infinitely high) gains or losses based on their scenario, which allows them to place nearly infinite importance on low probability events, and c). tend to treat the adherents of the belief as the enlightened in-group, and everyone else either as a pool of potential converts, or the hopelessly dim and/or hostile outgroup. Admittedly, certain religions are a lot more gung-ho regarding (c) than Singularitarians, but still, the tendency is there.

        I agree with you that many people don’t take religion seriously merely because it is unfashionable; however, many others do so because they can see its very real flaws. Singularitarianism is no different.

      • nimim.k.m. says:

        A simpler story: some people have very logical minds, which leads them to both reject religion and take exotic ideas seriously.

        In more religious times, having a logical mind led to taking the scripture (including its various very exotic ideas) literally as a description of reality, spending an inordinate amount of time studying interpretations of it, and widely extrapolating its message to situations one realistically could only speculate about.

        If the human mind is wired for religion, that does not necessarily mean it takes the form of the particular kind of religion we call “religion” today.

        • That is the central problem of atheistic rationalism: if you accept the naturalistic explanation for the preponderance of religion in terms of humans being wired up for it, then you have to accept (but usually don’t) that you are equally susceptible, and may be making a religion out of rationalism, or something else.

          Example

          • Reasoner says:

            equally susceptible

            I think you’re overstating your case a bit here. For example, I could accept the naturalistic explanation for the ability to digest lactose into adulthood in terms of humans being wired up for it. But that doesn’t mean that all humans are equally capable of digesting lactose into adulthood. If I gave a person some milk to drink, and they complained of digestive difficulties, this would increase my probability estimate that they would have similar complaints if I gave them some cheese in a few days.

          • There are objective tests for lactose intolerance in a way that there aren’t for generalised religiosity. My point is that since you can’t measure your own religiosity objectively, and may have motivated blind spots about it subjectively, you should assume you are not immune.

  35. goocy says:

    All of these timelines are based on a huge hidden assumption: uninterrupted economic prosperity.
    Yes, we’re living through one of the most peaceful and wealthy periods in all of history, but all of AI research can only progress because there’s funding behind it. AI research funding depends on the prosperity of tech firms, which in turn is mostly based on ad revenues and internet services. If there’s an economic crisis, people have more urgent things to do than to deal with internet ads and services, so the basis for this whole revenue tree can vanish into thin air within years. In that case, AI research won’t be able to find funding for decades.

    How likely is an economic crisis within the time spans you’re covering? I’d like to put a century-level prediction into perspective. The Global Challenge Foundation, for example, looks at processes that could lead to human extinction within this century and comes up with a likelihood of up to 5% for the three most likely risk factors. PDF: https://globalchallenges.org/wp-content/uploads/12-Risks-with-infinite-impact.pdf
    They don’t cover the risk for an economic crisis explicitly, but it’s safe to say it’s higher than that.

    Just from Kondratiev wave progression (a purely observation-based economic model), we’re at the end of a 60-year cycle and currently overdue for a great depression. OK, that’s just a model. But we also have historic evidence: the highest amount of national debt (relative to GDP) that any country ever recovered from is 260% (Great Britain, 1830). Nowadays North America has a debt-to-GDP ratio of 93%, and that’s on a rapid upwards trend (doubled within the last ten years). Compared to 260% that doesn’t seem too bad, but the UK had the complete industrial revolution ahead of it to help with that recovery. We desperately need something equivalent to get out of our debt trap (maybe wide-spread automation?).

    But there’s also a much more fundamental issue: we’re globally running into an oil crisis. Oil field exploration is at an all-time minimum and has been for quite a while, oil prices are too low to finance new exploration, and known oil fields are already depleting rapidly. Oil is the energy commodity the economy is most sensitive to (because global transport and agriculture have no practical alternative), so an oil crisis will also put AI research on hold. But oil nowadays is also crucial for the recovery and transport of all other fossil energy resources (coal, gas), so an oil crisis will automatically result in an energy crisis. And it may be hard to find funding for AI research when the electricity grid isn’t even stable.

    • Rowan says:

      Didn’t we already have an economic crisis back in ’08? How did the tech sector fare through that?

  36. 6jfvkd8lu7cc says:

    Does a 100% chance of AIs doing every job better than humans by 2035 include a 100% chance of humans having a consensus, for every job, on what it means to do it better? What about consensus among all the people who have ever tried doing it? Because for many tasks, I am not sure whether we are moving towards that consensus or away from it.

    And if Neuralink succeeds in changing the playing field for human-computer interfaces, we will soon need a separate consensus of what is considered human performance. For a present-day example: if you are considering human taxi driver performance, is it OK to use GPS and computer-aided route planning? Is it OK for the navigation system to download data like traffic jams? Is it OK to use navigation systems that offload part of the computation to some servers?

    We don’t even have a clear understanding of the centaur chess situation (are there players who can take a 100W notebook, install some chess software and use it to beat a Stockfish instance on a 1000W server?), and in some fields the situation can become much more complicated very quickly.

  37. Petter says:

    I think most researchers simply did not care about this study. Most did not even bother to answer (understandable, given how many surveys one receives in a month). Also, answers like “100% chance of strong AI by 2035” confirm this.

  38. Peter Gerdes says:

    Two quick observations:

    I would characterize the response to “How should society prioritize AI safety research” as basically saying “don’t bother.” I mean, the question essentially asks researchers whether society should devote more funding to their field (and related fields which increase their status). The fact that everyone didn’t answer More/Much More is kinda surprising. Though I suppose they might worry that greater prioritization might risk their work being looked at more skeptically and/or subjected to greater constraints.

    Second, I think a number of the puzzling differences in responses can be explained by how one interprets words like dangerous. To those of us who are regularly exposed to x-risk arguments about AI but little else it is natural to interpret danger/risk as serious existential danger. However, it is entirely possible that many AI researchers view value misalignment as a serious concern that entails risk of system failure/collapse and even potential loss of human life but think it very unlikely that this misalignment would pose a serious x-risk.

    Similarly, I suspect that many people might be inclined to call any machine-led progress in improving future AIs an intelligence explosion even if it doesn’t occur so quickly as to make any kind of useful intervention difficult. Moreover, some people (such as myself) are skeptical of the claim that intelligence really offers particularly impressive powers. For instance, I think many situations are simply overwhelmed by the combinatorial explosion of possibilities (and potential for unanticipated error to creep in), and even extremely smart AIs won’t have the kind of manipulative powers many worry about. Indeed, while intelligence matters and I think there will be AI-driven improvement in AI in some sense (whether we want to call it intelligence or not depends on whether it offers general problem-solving benefits or simply heuristics for particular circumstances), I suspect that merely having to work through limited intermediaries or noisy channels will more than hobble any superior ability IQ might offer regarding manipulation or control over the world. In other words, I suspect that the expedient of having a fairly rugged mobile body able to interact directly with mechanisms of interest will turn out to be more important than piles of IQ.

    To put the point colloquially: I’d go head to head in a pwning contest with the best computer security professional in the world and expect to win, provided their only means of doing research or affecting their code was via a telephone to (a steady string of) people with no more computer skills than my mom.

    • Second, I think a number of the puzzling differences in responses can be explained by how one interprets words like dangerous. To those of us who are regularly exposed to x-risk arguments about AI but little else it is natural to interpret danger/risk as serious existential danger. However, it is entirely possible that many AI researchers view value misalignment as a serious concern that entails risk of system failure/collapse and even potential loss of human life but think it very unlikely that this misalignment would pose a serious x-risk.

      I’ve noticed that…there is perennial confusion between risk and X-risk.

      the expedient of having a fairly rugged mobile body able to interact directly with mechanisms of interest will turn out to be more important than piles of IQ.

      Given that high intelligence has been attained, that is probably easier.

      • Ninmesara says:

        It’s still possible that even if intelligence has been attained, it will be impossible to pack it into a mobile body and impractical to remotely control such a body due to bandwidth restrictions or latency problems.

        You can have the general intelligence program a simpler version of itself that runs inside the mobile body, of course. And in that case you can have (much) more than one body.

        • That is all pretty irrelevant: what you have to worry about is the distributed body, not the compact body — an AI taking control of automated vehicles, factories and weapons systems.

  39. graehl says:

    ^^^ jhertzlinger said it better

  40. graehl says:

    Adorable: expecting people with hyperspecialized AI/ML research accomplishments to be good at predicting the far future of humanity.

    • Ilya Shpitser says:

      It’s too bad all the people who know how to run the country are too busy driving taxicabs and cutting hair.

  41. jhertzlinger says:

    These AI experts don’t sound like superforecasters.

  42. dmorr says:

    I’m the PM for Google Research’s NLP research team. We work on pronoun resolution (among other things). Let me start by saying: pronouns are a lot harder than you think. 🙂

    One of the central problems in AI research is how to understand what a piece of text means. How do we get a computer to read and understand something the way that we do? Partly this is hard because language is full of ambiguity that you resolve without noticing because you do a ton of common sense reasoning that is hard for computers to replicate because they suck at reasoning, and because they don’t have access to all this knowledge we have stuffed in our craniums.

    But partly this is hard because we don’t even know what it means to talk about what text means. There’s no formal way to write down the semantics — the meaning — of a piece of text that both captures what we want to get from it and that is precise enough for a computer to predict it. In one sense, this shouldn’t be that surprising, because we don’t understand how things are represented inside our head meat, not in any specific way. In another sense, this is bizarre — we all agree we know what this sentence means, and we can paraphrase it to show we know that, but we can’t actually put it into a formal satisfying representation. That’s weird!

    All of which is to say that pronoun resolution is important because it implicates a lot of other important problems. But also, it would be really useful if we could do it. It’s really hard to extract useful information from text without understanding pronouns, since humans thoughtlessly leave them lying around as if anyone could just get what they mean!
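
    As a toy illustration (nothing like a production coreference system, just a sketch of why common-sense reasoning is needed), here is a Winograd-style pair where a naive “pick the most recent noun” heuristic cannot get both sentences right:

    # A minimal sketch of naive pronoun resolution failing on a Winograd-style pair.
    # The only change between the two sentences is the final adjective, yet the
    # correct antecedent of "it" flips; a surface-level heuristic cannot see that.

    SENTENCES = [
        ("The trophy didn't fit in the suitcase because it was too big.", "trophy"),
        ("The trophy didn't fit in the suitcase because it was too small.", "suitcase"),
    ]

    CANDIDATE_NOUNS = {"trophy", "suitcase"}

    def naive_resolve(sentence):
        """Resolve 'it' to the candidate noun appearing most recently before it."""
        words = [w.strip(".,").lower() for w in sentence.split()]
        pronoun_idx = words.index("it")
        for w in reversed(words[:pronoun_idx]):
            if w in CANDIDATE_NOUNS:
                return w
        return None

    for sentence, correct in SENTENCES:
        print(naive_resolve(sentence), "vs correct:", correct)

    The heuristic answers “suitcase” both times; only world knowledge about what “too big” and “too small” can apply to distinguishes the two cases, which is exactly the common-sense gap described above.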

    I realize this is all a bit of a digression. I’ll add, for the record, that I believe that a hard takeoff is unlikely for two reasons: I think that the best model of intelligence we have, humans, consists of a bunch of disparate “good enough” systems cobbled together with the evolutionary equivalent of shoestring and glue, and we don’t use our fully general reasoning system for the vast majority of the processing we do. In AI research, we’re still building those nongeneral systems so that maybe someday we’ll glue them together. Fully general reasoning is both far away and, in my view, not nearly as practical as targeted systems that do perception or limited prediction or whatever, and chaining those things together will be what passes for intelligence when we get it. But it’ll be hard to optimize even for itself, in the same way that human intellect is hard for evolution to optimize — it’s so many related systems that we can’t even figure out what or how many genes are implicated.

    But also I think that the AI will be immediately hardware constrained. We are pretty good at writing very fast systems and optimizing the hell out of everything when we need to (and at Google we need to). If we ever do get GAI it’s not going to be able to do better than incremental optimization, and will mostly scale by adding hardware. That means it’s going to only slowly get smarter.

    Having said that, I could definitely be wrong, so I would absolutely spend a small amount of our resources on FAI. I view this as tail risk insurance. I think it’s only like 5% likely that we’ll get a hard takeoff scenario resulting in something horrible, but that’s plenty big enough that we should devote significant resources to trying to see if that’s true and if so, doing something about it.

    But I think that more resources should probably go into things that we know are useful, and which aren’t really pointing towards dangerous scenarios. These are things that provide real value, which is to say, give us more resources to spend on FAI if it looks like we need to. Things, of course, like pronoun resolution.

    • Ninmesara says:

      I’d like to thank you for your reply, in which you share some real experience that comes from actually working with AI.

      You say that AI will be resource constrained. Well, the strong version of this statement is certainly true: if you can simulate the whole brain neuron by neuron/molecule by molecule, then you have human-level AI. This is ignoring issues like cooling and communication speed, of course. This suggests that once we understand the brain well enough, we can get human-level AI by just adding hardware.

      But certainly we can do better. We probably don’t need a physically accurate (or even biologically accurate) simulation. In your opinion, what would be the minimum computational requirements to simulate an adult human brain capable of passing “the spirit” of a Turing test – including such things as reading comprehension, knowledge about the world, and passing a verbal IQ test with an IQ of, say, over 100?

      Or do you think it’ll be actually impossible with current hardware due to issues like communication speed and heat dissipation?

      I prefer these questions about human-level AI instead of ill-specified superintelligences, because in this case we all agree what we’re talking about and have real models of what we want to achieve.

      • HeelBearCub says:

        I don’t think we can’t pack enough computing power close enough together to simulate the brain neuron by neuron/molecule by molecule on current tech. I think you will run into speed of communication problems.

        And we are so far from understanding the human brain at this level that it’s not a particularly fruitful avenue to develop AI anyway.

        • Ezo says:

          >I don’t think we can’t pack enough computing power close enough together to simulate the brain neuron by neuron/molecule by molecule on current tech.

          What do you mean by ‘current tech’?

          Consider this: NVidia recently announced a chip which is about 8 cm^2. It’s only a single layer (AFAIK they plan to stack memory on top of future GPU dies, but not yet). And it delivers 120 TFLOPS of computing power. You could now fit petaflops of computing power – optimized for handling neural networks! – in roughly the size of a standard ATX PC case.

          The brain is a big ball of matter. It’s 3D.

          When it comes to computational density, I think modern GPU/CPU chips have surpassed the brain. If we could make 3D cubes, with millions of GPU dies stacked one on top of another, we would surpass the level of computing power required to emulate the brain. And such a cube would be *smaller* than the human brain.
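
          A rough back-of-envelope sketch of that stacking argument (the brain-compute figure is a loudly uncertain assumption; published estimates span several orders of magnitude, and this ignores heat and interconnect entirely):

          # Back-of-envelope only; every number here is an assumption.
          gpu_flops = 120e12          # ~120 TFLOPS for the chip mentioned above
          die_area_cm2 = 8.0          # ~8 cm^2 per die
          layer_thickness_cm = 0.1    # assumed ~1 mm per stacked layer, packaging included

          brain_volume_cm3 = 1200.0   # rough adult human brain volume
          brain_flops_estimate = 1e16 # assumed mid-range estimate; highly uncertain

          dies = brain_volume_cm3 / (die_area_cm2 * layer_thickness_cm)
          total_flops = dies * gpu_flops
          print(f"{dies:.0f} dies, {total_flops:.1e} FLOPS, "
                f"{total_flops / brain_flops_estimate:.0f}x the assumed brain estimate")

          With those (debatable) numbers you get on the order of 1500 dies and ~1e17 FLOPS in a brain-sized volume, so the density argument goes through arithmetically; the heat and fabrication problems conceded in the next paragraph are the real obstacles.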

          Though the main problem is heat dissipation and energy consumption. And it probably wouldn’t be much cheaper to fabricate millions of chips stacked on top of one another than to make millions of discrete chips…

          But technically we could do that. AFAIK graphene has much lower resistance than silicon. When we’re finally able to use it, the heat dissipation problem may decrease, maybe?

          And then there are neuromorphic chips – which have an architecture similar to a brain. AFAIK they consume much, much less power than conventional architectures (if you were to use them to emulate a brain).

          As for whether we are far from understanding the human brain – I don’t know. How hard can it be to analyze a single neuron? Even if there are many types of neurons, it still seems doable. We don’t need to understand how the visual cortex works, for example. Only its building blocks. It’s probably impossible to ‘understand’ that anyway – just like it’s ‘impossible’ to understand why our ANNs work.

          • HeelBearCub says:

            And it delivers 120 TFLOPS of computing power.

            And? That is a measure that is useful in comparing chips to other chips, but it doesn’t directly apply to the human brain.

            The prompt I was responding to posited that a “proof” of the possibility of AGI is faithfully recreating the architecture of the human brain. But it’s not at all clear that we can pack billions of fully parallel processors into a case small enough to do this.

            This doesn’t mean there isn’t some other means of achieving AGI, just that the thought experiment seems invalid to me.

            ETA:

            As for whether we are far from understanding the human brain – I don’t know. How hard can it be to analyze a single neuron?

            This seems very naive to me. Consider what Scott had to say on this recently:

            Current cutting-edge brain emulation projects have found their work much harder than expected. Simulating a nematode is pretty much the rock-bottom easiest thing in this category, since they are tiny primitive worms with only a few neurons; the history of the field is a litany of failures, with current leader OpenWorm “reluctant to make bold claims about its current resemblance to biological behavior”. A more ambitious $1.3 billion attempt to simulate a tiny portion of a rat brain has gone down in history as a legendary failure (politics were involved, but I expect they would be involved in a plan to upload a human too). And these are just attempts to get something that behaves vaguely like a nematode or rat.

        • Ninmesara says:

          Your first sentence contains a double negative that might be a typo. Could you confirm if that’s what you meant? You seem to be saying (loosely) that you think it’s possible to pack enough computing power but that speed of communication will raise some problems. Is this what you meant to say?

          • HeelBearCub says:

            Yes, sorry, unintentional double negative.

            I mean to say that the truly gargantuan fully parallel processing power of the human brain is not possible to recreate in the silicon and copper substrate we are using right now. There are literally billions of neurons in the brain all operating in full parallel.

            Recreating that faithfully in current computing tech requires space and heat dissipation, which is going to bring speed of communication into play.

          • beleester says:

            I don’t think communication speed is the limiting factor. A quick google tells me that nerve impulses max out around 100 m/s, while electrical signals move at around 97% of the speed of light. IOW, even if your brain simulator fills an entire data center, it’s probably still got faster connections between neurons than the brain does.
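
            Rough numbers as a sanity check (the distances and speeds below are assumptions, and raw propagation ignores switching and serialization delays, which dominate in practice):

            # Back-of-envelope latency comparison; all figures are rough assumptions.
            nerve_speed = 100.0        # m/s, fast myelinated axon
            cable_speed = 0.7 * 3e8    # m/s, conservative; the figure above is up to ~0.97c

            brain_distance = 0.15      # m, roughly across a human brain
            hall_distance = 200.0      # m, across a large data-center hall

            print(f"axon across brain:  {brain_distance / nerve_speed * 1e3:.1f} ms")
            print(f"cable across hall:  {hall_distance / cable_speed * 1e6:.1f} microseconds")

            That comes out to roughly 1.5 ms for the axon versus about a microsecond for the cable, so on raw propagation the data center wins by about three orders of magnitude; whether the rest of the stack eats that margin is a separate question.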

      • Dave Orr says:

        I have no idea what the computational requirements would be, but I think I’m on fairly firm footing on a different claim: at the point that we achieve human level intelligence, whether by brain simulation or other techniques, we will immediately be computationally constrained.

        This is because we are constantly pushing the bounds of whatever hardware we have. We’ll have to optimize the crap out of everything to get it to even work, and there just won’t be much more juice to squeeze. How do I know this? Because we already have to do that to get things to run at speed now.

        Could we do it with current hardware? I think probably not, but I’m not really sure. I am sure we’re extremely far from having a good enough understanding of biology to come close to simulating a brain regardless of the hardware.

        Neural nets in AI are very different from neural nets in wetware, so comparisons are not always meaningful, but if you count the number of connections (synapses), the biggest functioning neural networks are still several orders of magnitude smaller than what we’ve got going on. And the neurons themselves are simpler, and the types of connections are much much simpler. So there’s a long way to go.

        And it’s not like you can just throw more layers and more connections at a problem and have anything good happen. These things have to be carefully crafted, and even new “learning to learn” techniques are only scratching the surface.

    • Ilya Shpitser says:

      Thanks for giving your perspective here.

    • Deiseach says:

      because they don’t have access to all this knowledge we have stuffed in our craniums

      I am fairly sceptical about “human-level general intelligence AI” and “better than human level”, but everything I read about how it’s really hard to teach computers to do things the way we do them makes me think that there must be an easier way. We only know the hard way because evolution cobbled together our systems this way, but surely there must be a simpler way of representing the same knowledge?

      Maybe if we do sling together a human or better AI, the first thing it will do is figure out the simpler way to do this for the next AI 🙂

      • Dave Orr says:

        There definitely could be a simpler way to represent all that knowledge, but we sure haven’t found it after decades of research…

        • Ninmesara says:

          Are you planning on encoding procedural knowledge? I think there might be a kinesthetic component of what we call “knowledge” which can only be acquired through interaction with the world. An interesting way to study this would be to study people with congenital sensorimotor limitations (congenitally blind/deaf people, congenital quadriplegics, severe cerebral palsy). Do those people have serious gaps in their knowledge of the world? Or are our brains innately programmed to learn certain concepts even if we never experience them?

          This might suggest some avenues for research, of which my favourite is to simulate a virtual world, give the AI a body, and watch it learn. If you’re feeling adventurous, have the AIs reproduce sexually. I think the level of detail of the simulation would be beyond our reach with current hardware, but one can dream :p

  43. Bugmaster says:

    This is pretty surprising; if there’s a good chance AI could be hostile to humans, shouldn’t that automatically be pretty high on the priority list?

    No. You are looking at P(Hostile|AGI), but if P(AGI) is very low, then who cares? By analogy, if the Earth were hit by a black hole the results would be catastrophic; but that doesn’t mean that we should divert resources to black hole defence, because such an event will probably never happen.
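
    To make the shape of that argument concrete (the numbers below are invented purely for illustration, not estimates anyone is defending):

    # Toy expected-value sketch; every number is made up for illustration.
    p_agi = 1e-4                # assumed prior that AGI arrives in the relevant window
    p_hostile_given_agi = 0.5   # assumed chance it goes badly, conditional on AGI
    badness = 1e6               # badness of the bad outcome, in arbitrary units

    expected_badness = p_agi * p_hostile_given_agi * badness
    print(expected_badness)     # 50.0 -- the tiny prior dominates the scary conditional

    The same multiplication with a large P(AGI) changes the conclusion completely, which is why the disagreement is really about the prior, not the conditional.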

    In general, though, I think some of the confusion may be due to the fuzzy definitions of the terms. What does “AI” actually mean, and what does it mean for it to be “unfriendly” and even “hostile”?

    If you interpret these terms one way, then you can easily come to the conclusion that hostile AI already exists, right now. Companies like Google and Facebook are using it to mine all of your personal data, and their AIs are definitely not on your side. At the other end of the spectrum, we’ve got SkyNet, complete with Terminators, followed shortly by the Singularity. There are lots of options in between, though, some more plausible than others. Equivocation between these options (intentional or not) would definitely lead to the kind of confusion you describe.

    • Scott Alexander says:

      My whole point was that there are people who believe AGI is coming very soon, but who still don’t seem to prioritize it. Remember, half the experts polled said there was at least a 10% chance we would have AI that was better than humans at everything within ten years.

      • Bugmaster says:

        That’s a fair point, yes. That said, though, I think you need to draw a sharper distinction between present-day AI, and self-improving superintelligent AGI. Today, we have a bunch of AIs that surpass humans at specific tasks by a vast margin; e.g. character recognition, full-text search, playing various games, etc. I think it’s reasonable to assume that such AIs will continue improving; for example, AFAIK there are people who are working very hard at designing an AI that would efficiently pick tomatoes in the field. So, it’s possible that the survey participants are thinking along the same lines.

        I will grant you that the survey did explicitly specify “better than humans at everything”, but I think this word may be too nebulous to be practically useful. Should the AI be better than humans at pooping? Does that count?

      • 6jfvkd8lu7cc says:

        Conjunction check: there is an implicit assumption in your question. You imply that being able to specify software requirements and enforce compliance is a problem that lies within the scope of the AI research field.

        We have this problem almost everywhere in software development. There are hard scientific questions and complicated incentives. We even have all of that outside computations: a lot of legal documents are unreadable, and it may be a similar problem…

        Most people would say their webcams should not have an AI; the software reliability is still horrible, and the resulting botnet can still DDoS just fine. Gathering requirements for a workflow automation system goes wrong more often than not, which is a real problem leading to cost overruns and data loss — all the time. How complicated is it to list all possible «Keep calm and [verb] a lot» slogans and list print-on-demand T-shirts on Amazon? — not very, as long as you want to be discussed a lot by an angry mob. And then you go into non-general AI and get the hypersensitivity of trading bots to attacks on the AP Twitter account.

        We cannot specify software, we cannot enforce compliance with specification when such a specification exists, and we cannot even verify whether the compliance effort has been taken by the manufacturer. It is a horrible problem right now, it is not getting better, it does get attention but it is still too hard, it is critically important regardless of AGI, and so it is not a subproblem of AI. We have some partial solutions, and almost no software field uses enough of them, and applied AI is not an exception.

        Would you find my interpretation consistent with itself, and does it explain the survey results?

        And that is even before considering what framing problems suggest about precision of projecting beliefs into the survey answers (in a hurry).

        • gmaxwell says:

          When I’ve spoken to people who claim to be working on the AI safety issue they have reliably dismissed as unimportant or trivial the problem of formally specifying and verifying software.

          This, more than anything else, has contributed to my assessment that near-term efforts to this end are unlikely to be successful.

          Scott seems confused by people thinking AGI is likely and likely to be dangerous but not thinking it is important to work on now. There is a missing piece: How likely is it that efforts to do something about it right now will be successful?

          • Rob Speer says:

            That’s a good point. It sounds a bit fatalist in the end, and I don’t want this point to be perceived as “well, we can’t do anything, might as well not try”. We can do things. They’re just not the things the superintelligence crowd is talking about.

            I personally think that AGI is likely centuries away, rather positive in expected value, and being framed in a way that will look very silly in retrospect. But I understand some have different beliefs, and some hold them very strongly. So here’s what I have to say to them.

            There is computational evil in the world right now. There’s insecure infrastructure, botnets, and racist machine learning models. Fixing those should further our understanding of how to make computers do what we want, and has obvious benefits even if AGI remains in the unimaginable future.

            Not interested because you only want to work on true AGI and existential threats? Come on. You have to level up before you can take on the final boss.

          • Aapje says:

            @Rob Speer

            There’s insecure infrastructure, botnets, and racist machine learning models. Fixing those should further our understanding of how to make computers do what we want, and has obvious benefits even if AGI remains in the unimaginable future.

            Humans have security weaknesses that can be exploited and can be racist.

            Isn’t it Utopian thinking to assume and demand that AI can work without these weaknesses? Also, we/humans often consider rational behavior racist, like decision making based on statistical correlations to race. In that case, is it the AI that needs to be fixed or do humans need to learn to give AIs explicitly anti-racist goals? After all, that seems to be how ‘anti-racist’ humans function.

  44. kevinlacker says:

    The problem with AI safety research isn’t that AI safety isn’t important, it’s that nobody has figured out how to research the problem of AI safety in a useful way. So AI safety research is kind of worthless right now. Maybe once we have some vague idea of how we would create a real AI, it would make more sense to work on its safety. But even then, it isn’t clear whether that would be more of a research problem or a practical engineering problem.

  45. stephenbyerley says:

    I work in AI research, and I think I can reasonably represent some of what is going on here.

    First, I think superhuman AI is 90% likely within my lifetime, and I think the emergence of superhuman AI will carry a real possibility of the destruction of humanity, along with a possibility of extraordinary reward. (I don’t know what the probabilities are, but like global warming, it’s still a huge problem if there’s only a 1% chance.)

    But, having looked at the problem, I don’t see many near-term research problems that AI researchers can tackle that will address the problem. The “Concrete Problems in AI Safety” paper poses some reasonable problems worthy of attack, but they are neither sufficient nor necessary to avoid AI risk. Almost all of our ability to address AI risk will have to come once we have a better sense of how superhuman AI will actually work.

    So I think the most important thing we can do is to advance the state of the art in AI and be alert to ways to address AI risk as it gets closer. Superhuman AI might be possible in 25 years. But that doesn’t mean we know enough to address it now, just like an AI researcher from 25 years ago would have found it impossible to make progress on the problems that concern the field today.

    • drachefly says:

      Would you say that at this point the general ideas might be most important, like that we should prefer a transparent approach we can prove things about rather than powerful but opaque methods like Deep Learning? Or work on making those more transparent?

  46. FerdJ says:

    Interested to read the question on the plausibility of the “intelligence explosion”. I’ve read a few places, like that giant Wait But Why article, explaining the risk that once we get an AI just a little smarter in general than humans, it may be able to increase its own intelligence at an exponential rate. What makes me feel skeptical about that is the raw computing resources that are supposed to be required to hit “about as intelligent as a human”. If it takes a massive supercomputer to maybe possibly be as intelligent as a human, then how does it make sense that a machine can be 1000x smarter than a human without at least 1000x the raw computing power?

    The stories usually involve the AI secretly finding a way to make itself 1000x more intelligent than it started. Is it supposed to order, have delivered, and hook up 1000x more supercomputers and get them all to work together correctly, all without any human being involved or noticing what’s going on? Is it supposed to figure out how to make a massively more efficient computer, and somehow get humans to fabricate it, debug and troubleshoot it, and eventually get a working version somewhere that our AI can get access to it, again all without anybody noticing? Maybe the 1000x more intelligent AI can figure out a way to make electricity flow through itself in such a way that it resonates with the Earth or does something else crazy-sounding to somehow make an ultra-powerful computer, but I don’t see how it gets to the point of being that intelligent without already having that. If it were possible to do that, then humans would have already figured out a drug to make ourselves super-smart, or a way to hook up 1000 minds to each other in such a way that it creates one super-smart entity instead of 1000 squabbling individual minds, or something to that effect.

    • moyix says:

      I think the simplest way would be by optimizing its own code. Especially with research code there is typically huge opportunity for speedups, so we should expect that the first research GAI can be sped up significantly. It’s very hard to predict what kind of improvements you would see – I’ve seen speedups of ~100x for my own (non-AI) code, but sometimes things stubbornly resist optimization.
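
      As a toy illustration of the kind of constant-factor speedup straightforward optimization can buy (generic numerical code, not a claim about any actual AI system; the measured ratio depends on hardware):

      import time
      import numpy as np

      data = np.random.rand(2_000_000)

      def naive_mean_square(xs):
          # Deliberately unoptimized: a plain Python loop over a large array.
          total = 0.0
          for x in xs:
              total += x * x
          return total / len(xs)

      def vectorized_mean_square(xs):
          # The same computation pushed down into optimized array code.
          return float(np.mean(xs * xs))

      t0 = time.perf_counter(); slow = naive_mean_square(data)
      t1 = time.perf_counter(); fast = vectorized_mean_square(data)
      t2 = time.perf_counter()
      print(f"speedup: {(t1 - t0) / (t2 - t1):.0f}x, results agree: {abs(slow - fast) < 1e-6}")

      On a typical machine this single rewrite buys one to two orders of magnitude, which is the flavor of “optimizing the easy bottlenecks” being described, though it says nothing about how much headroom a real research system would actually have.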

      • rlms says:

        What you say is only correct with the assumption that AI will be the same kind of thing as existing programs. I think that assumption is very dubious. AI is just code in the same way that the human brain is just meat.

    • Ezo says:

      >What makes me feel skeptical about that is the raw computing resources that are supposed to be required to hit “about as intelligent as a human”. If it takes a massive supercomputer to maybe possibly be as intelligent as a human, then how does it make sense that a machine can be 1000x smarter than a human without at least 1000x the raw computing power?

      Because it doesn’t take a massive supercomputer to be as intelligent as a human. Or maybe it does. Maybe you could run some such AI on an old 16-bit computer. Nobody knows. We only know that we would require such a massive supercomputer to accurately simulate a human brain. In other words, it’s an upper bound on what is required to create some AGI, if anything.

      We simply don’t know any approach to general intelligence other than neural networks. We don’t have any other algorithm – so we can’t know how much computing power it would cost.

    • Anon. says:

      The human brain runs on less energy than a lightbulb, so we know that you don’t need lots of energy for general intelligence. GAI will involve hardware improvements as well as software improvements.

      • Gazeboist says:

        Energy =/= computing power. The human brain is a fairly powerful computer, though its power is hard to compare to the computers we build because it’s built on very different fundamentals (and we don’t really understand what those fundamentals are).

    • bbartlog says:

      Assuming the AI is smarter than a human and has good theory of mind (rather than just being specialized for some problem), the expectation is that it would spread to inadequately protected computers while also manipulating whatever organization it was in to gradually expand the resources available to it. It wouldn’t be stupid enough to immediately draw suspicion to itself by trying to expand a thousandfold in hardware terms.

      • Yosarian2 says:

        Eh. My guess is that the first AI that could plausibly be considered “human level” will already be running on a massive supercomputer, likely one with very specialized hardware, in which case it gains very little by just “spreading to other computers”. Maybe it could outsource especially difficult calculations or whatever, but I think the marginal gain from expanding like that would likely be minimal. Also, it would probably already have access to all the data its developers thought it could use.

        Now if someone suddenly comes up with a way to run an AI on a normal desktop PC the scenario you’re talking about could happen, but I doubt the first human-ish level AI will look like that.

    • Kaj Sotala says:

      Much of the brain’s processing power is used on things like low-level vision processing; unless by “1000x more intelligent” you mean “can also handle a 1000x higher visual resolution” or something similar, it doesn’t seem like you would need to multiply all that low-level stuff in order to get 1000x intelligence.

      Especially since humans display major intelligence differences, presumably without having major differences in their amount of available hardware. As just one example, consider child prodigies who master skills at an adult level. Despite what some authors would argue, just having spent a lot of time practicing in childhood isn’t sufficient to explain this (Ruthsatz et al. 2013). In general, some people are able to learn faster from the same experiences, notice relevant patterns faster, and continue learning from experience even past the point where others cease to achieve additional gains.

      (I discussed this in somewhat more detail in my paper “How Feasible is the Rapid Development of Artificial Superintelligence”, where I came to the conclusion that what we currently know of expertise and human intelligence differences suggests that scenarios with AI systems becoming major or even dominant actors within timescales on the order of mere days or weeks seem to remain within the range of plausibility.)

      • Joe says:

        I think this argument is less persuasive than it first seems, because there’s an obvious way you can get this kind of large capability level difference with any machine — via parts being defective.

        Riding a bicycle with a flat tyre takes much more work to go much slower than riding a working bicycle. And yet there isn’t much difference between the two, they’re designed and manufactured just the same, and unless you look closely they appear identical.

        You might be tempted to extrapolate this “small mechanical difference leads to enormous capability difference” case to say, “what will happen when we make a change much larger than just changing the tyre?!”. But actually it turns out that when you have a working bicycle, you don’t get nearly the same return to your effort as you do when fixing a broken bike. Obviously this is because bikes don’t have some core bicycleness element that can be cheaply expanded, but are complex machines, with many interdependent parts, each of which must be functioning properly for the whole machine to work well.

        Now it might be the case that this analogy doesn’t apply to human capability differences — maybe there really is some simple mechanism in brains that can be easily scaled up to vastly increase capability beyond the level of usual human geniuses — but I don’t think your argument alone is enough to suggest that’s the case.

        • Kaj Sotala says:

          That’s a reasonable point, but I think that there are a few things that suggest otherwise.

          First, in your analogy, when we go from a flat to a filled tire, we initially get a better performance. At some point, though, pumping more air into the tire stops being useful – the tire is already full – and is in fact likely to be harmful. Which is the point of the analogy, that somewhere along the line we hit an optimum and can’t improve further.

          If human intelligence differences were the same story, then we would expect there to also be an optimal intelligence threshold and little or no benefit from going past it. And while there is some evidence of this being true socially – if you’re much smarter than everyone around you, you easily get lonely and frustrated – the evidence doesn’t seem to show it being the case intellectually. Additional IQ does seem to continue providing further benefit across the whole human range, as far as we can tell. From my paper:

          The available evidence also seems to suggest that within the human range at least, increased intelligence continues to contribute to additional gains. The Study of Mathematically Precocious Youth (SMPY) is a 50-year longitudinal study involving over 5,000 exceptionally talented individuals identified between 1972 and 1997. Despite its name, many of its participants are more verbally than mathematically talented. The study has led to several publications; among others, Wai et al. (2005) and Lubinski & Benbow (2006) examine the question of whether ability differences within the top 1% of the human population make a difference in life.

          Comparing the top (Q4) and bottom (Q1) quartiles of two cohorts within this study shows both to significantly differ from the ordinary population, as well as from each other. Out of the general population, about 1% will obtain a doctoral degree, whereas 20% of Q1 and 32% of Q4 did. 0.4% of Q1 achieved tenure at a top-50 US university, as did 3% of Q4. Looking at a 1-in-10,000 cohort, 19% had earned patents, as compared to 7.5% of the Q4 group, 3.8% of the Q1 group, or 1% of the general population.

          Also in the bicycle analogy, a part of the reason why it’s hard to make it go any faster is that we also care about other constraints than just speed. Maybe we could make the bicycle go faster by strapping a rocket on its side, but that would make it a lot harder to control safely. The more of those constraints we could disregard, the more we could rebuild it for speed – sometimes changing it very drastically, such as by turning it into a motorcycle or space rocket instead.

          This is relevant, not because our AI design wouldn’t face any constraints of its own, but because human intelligence seems constrained by a number of biological and evolutionary factors that don’t seem to have an equivalent in AI. Some plausible constraints include the size of the birth canal limiting the volume of human brains, the brain’s extensive energy requirements limiting the overall amount of cells, and inherent unreliabilities in the operation of ion channels. It would seem like an unlikely coincidence if the point where the human brain and intelligence couldn’t be simply scaled up anymore would just happen to coincide with the point where other constraints prevented the brain from being scaled up anyway. (There is a moderate correlation between brain size and intelligence in humans, with a mean correlation of 0.4 in measurements that are obtained using brain imaging as opposed to external measurements of brain size, suggesting that the sheer size of the brain does play a role.)

          We do also have some theoretical candidates for the kinds of variables that could just be scaled up to increase intelligence; again quoting from my paper (see the paper for the references):

          While there is so far no clear consensus on why some people learn faster than others, there are some clear clues. Individual differences in cognitive abilities may be a result of differences in a combination of factors, such as working memory capacity, attention control, and long-term memory (Unsworth et al., 2014). Ruthsatz et al. (2013), in turn, note that “child prodigies’ skills are highly dependent on a few features of their cognitive profiles, including elevated general IQs, exceptional working memories, and elevated attention to detail”.

          Many tasks require paying attention to many things at once, with a risk of overloading the learner’s working memory before some of the performance has been automated. For an example, McPherson & Renwick (2001) consider children who are learning to play instruments, and note that children who had previously learned to play another instrument were faster learners. They suggest this to be in part because the act of reading musical notation had become automated for these children, saving them from the need to process notation in working memory and allowing them to focus entirely on learning the actual instrument.

          This general phenomenon has been recognized in education research. Complex activities that require multiple subskills can be hard to master even if the students have moderate competence in each individual subskill, as using several of them at the same time can produce an overwhelming cognitive load (Ambrose et al. 2010, chap. 4). Recommended strategies for dealing with this include reducing the scope of the problem at first and then building up to increasingly complex scopes. For instance, ‘a piano teacher might ask students to practice only the right hand part of a piece, and then only the left hand part, before combining them’ (ibid).

          An increased working memory capacity, which is empirically associated with faster learning, could theoretically assist learning by allowing more things to be comprehended simultaneously without them overwhelming the learner. Thus, an AI with a large working memory could learn and master at once much more complicated wholes than humans.

          Additionally, we have seen that a key part of efficient learning is the ability to monitor one’s own performance and to notice errors which need correcting; this seems in line with cognitive abilities correlating with attentional control and elevated attention to detail. McPherson & Renwick (2001) also remark on the ability of some students to play through a piece with considerably fewer errors on their second run-through than the first one, suggesting that this indicates ‘an outstanding ability to retain a mental representation of […] performance between run-throughs, and to use this as a basis for learning from […] errors’. In contrast, children who learned more slowly seemed to either not notice their mistakes, or alternatively to not remember them when they played the piece again.

          Whatever the AI analogues of working and long-term memory, attentional control, and attention to detail are, it seems at least plausible that these could be improved upon by drawing exclusively on relatively theoretical research and in-house experiments. This might enable an AI to both absorb vast datasets, as current-day deep learning systems do, and also learn from superhumanly small amounts of data. […]

          Looking from humans to AIs, we have found that AI might be able to run much more sophisticated mental simulations than humans could. Given human intelligence differences and empirical and theoretical considerations about working memory being a major constraint for intelligence, the empirical finding that increased intelligence continues to benefit people throughout the whole human range, and the observation that it would be unlikely for the theoretical limits of intelligence to coincide with the biological and physical constraints that human intelligence currently faces, it seems like AIs could come to learn considerably faster from data than humans do.

          • Gazeboist says:

            Isn’t IQ just an aggregation of “usefulness as a bicycle” in this case? To actually refute the metaphor, you’d want to be talking about brain size or something (i.e. not a direct measure of intelligence). And at least when we look at non-humans, it seems like brain size (alone) doesn’t help. Elephants have brains a bit more than three times the size of humans’, but don’t seem to be three times as intelligent. Certainly I wouldn’t expect an elephant clan to be able to win a war against a similarly sized human organization.

            One more thing: if you look at bicycle tire pressure, you’re probably going to find a positive correlation between tire pressure and speed, even if tire pressure stops helping before the tire explodes (thus removing that bicycle from your sample). What I’m saying is, the functional form here is nonlinear, in a way that’s very relevant to the discussion. What do we know about the functional form of brain size vs IQ data? Do we have reason to believe the curve continues? How far do we think it continues?
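
            A tiny simulation of that selection effect, with invented numbers, shows how a relationship that plateaus (and knocks the over-pressurized bikes out of the sample) can still read as a clean positive correlation:

            import numpy as np

            rng = np.random.default_rng(0)
            pressure = rng.uniform(1.0, 9.0, 10_000)                 # arbitrary units
            # Speed gains saturate at pressure 5; noise stands in for everything else.
            speed = np.minimum(pressure, 5.0) + rng.normal(0, 0.3, pressure.size)

            survived = pressure < 8.0                                # burst tires leave the sample
            r = np.corrcoef(pressure[survived], speed[survived])[0, 1]
            print(f"correlation among surviving bikes: {r:.2f}")     # clearly positive

            So a positive pressure-speed correlation in the observed data is compatible with “more pressure stops helping well before the failure point”, which is exactly why the functional form matters for the brain-size question.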

          • Joe says:

            @Kaj Sotala

            Yep, 6jfvkd8lu7cc’s interpretation was right. To clarify: I am not claiming there aren’t limitations on human intelligence imposed by biological factors, which would not apply to an AI running on computer hardware. And I’m certainly not suggesting that human intelligence is some sort of maximum that can’t be surpassed.

            Rather, all I am saying is that observing the huge discrepancy in human abilities, in spite of our near-identical construction, does not tell you that further tiny changes will continue to produce similarly huge gains beyond the high end of human intelligence we currently see. This is because this scenario is not unique to humans, but occurs with any machine: they’re built to work in a very particular way only, and therefore are really easy to break.

            This applies to far more than just bicycles. Actually the example I was first planning to use was printers, comparing the output of a working printer to a jammed printer. Surely if we can increase from zero sheets-per-minute to twenty just by removing a blockage, then with a dozen more tweaks of similar size we can increase our printer’s output to 260 sheets per minute, right?! Well, no: the reason this enormous gain was available in the first place is because the printer was broken, not because printer design is really simple and just amounts to a single component that can cheaply be tuned up as high as you like.

            To return to humans (and borrowing Nick Bostrom’s example): my claim is that Einstein is like a working printer, and a village idiot is like a jammed printer. Even if it is easy to improve an AI’s capabilites, a wide difference in human ability does not tell you this is the case.

          • Aapje says:

            @Joe

            Exactly, at a certain point, you run into ever greater inefficiencies or problems that need a completely new design or problems that may not be fixable.

            You can create a pretty fast car by beefing up the engine, making it lighter, putting better tires on it, etc.; but drag will make it far more costly to go from 150 mph to 200 mph than from 100 mph to 150 mph, even though the jump in speed is the same.

            Of course, the question is where we are with AI. Are we in the human-powered era, where if we discover the combustion engine, we can make huge gains before we run into these problems? Or are we already near the peak of what is efficiently doable?

        • 6jfvkd8lu7cc says:

          I think the flat-tire point was a different one.

          A flat tire is roughly the lead in the water, and speed is the intelligence.

          Yes, higher speed lets you get to places faster across the whole bicycle speed range, even though at the higher end, if you are faster than everyone else, your travel time is set by congestion rather than your own speed.

          If you perfectly fix all the parts of the bicycle (make sure a human doesn’t have any organic defects impeding cognition), you will go faster than the currently-used bicycles; but to go radically faster you switch to a different design (motorbike? car? airplane?) and each such design has a different reason for practical speed limit.

          • Kaj Sotala says:

            If you perfectly fix all the parts of the bicycle (make sure a human doesn’t have any organic defects impeding cognition), you will go faster than the currently-used bicycles; but to go radically faster you switch to a different design (motorbike? car? airplane?) and each such design has a different reason for practical speed limit.

            I’m inclined to agree with this, but I don’t think this was the point of the bicycle analogy, since if it was, it would seem to invalidate the whole “1000x human intelligence takes 1000x the resources of a supercomputer needed to run a single brain” argument that the analogy was trying to defend? If you’re switching to an entirely different design, there’s no reason why you’d try to extrapolate from the resource/performance curve of the first design in the first place.

          • 6jfvkd8lu7cc says:

            My stick-man versions of the arguments in the thread are:

            ∘ Once you have a superbright AI, it can make itself a few times better by optimizing the easy bottlenecks, but afterwards it needs access to much more raw resources.

            ∘∘ Brains have parts that may be not necessary to scale; also, humans have different intelligence with similar hardware size and energy consumption levels.

            ∘∘∘ It’s easy to have a large difference in speed if one of the things you compare has a flat tire.

            ∘∘∘ Do we try to retarget the visual cortex?

            If the bicycle analogy is a good one, it also doesn’t say anything nice about the energy requirements of higher speeds; this may be a plausible conclusion, but the bicycle analogy per se is not a strong argument in favour of it.

          • Kaj Sotala says:

            Once you have a superbright AI, it can make itself a few times better by optimizing the easy bottlenecks, but afterwards it needs access to much more raw resources.

            Worth noting that this alone is probably enough to make things decisive in favor of AIs, admittedly depending somewhat on how we define “a few times better”. But at least on most intuitive interpretations, something like “Alice is twice as good at chess as Bob is” would imply that at their current skill levels, Bob’s never going to win against Alice.

            Similarly, if AIs are twice as intelligent as humans are and there are no restrictions on building AIs, it’s only a matter of time before they’re going to become the dominant “life-form” (mind-form?) on the planet. You don’t need godlike intelligence for that.

          • 6jfvkd8lu7cc says:

            I cannot speak for everyone in the thread, but my expectation is that an optimisation will allow the AI to reach the same solutions a few times faster in terms of wall-clock time and CPU cycles compared to the initially launched version. Given that humans launch the first prototype they have, that should be faster than humans but not incomparably faster.

            If the hardware requirements are nontrivial, there can be different economic/ecological equilibria. Raw problem solving is one thing; the cost of energy to sustain the process is another; the willingness of governments not to declare an emergency, classify everything, and seize all the assets is also an important consideration in some areas.

          • Gazeboist says:

            it can make itself a few times better by optimizing

            Hang on, wait. What exactly do you mean by this? Suppose we take Will Smith as our example human. What does it mean to be twice as smart as Will Smith? Is there likely to be a human alive today who is twice as smart as Will Smith? Does being twice as smart as Will Smith make you twice as capable as Will Smith in a meaningful sense, or do we have to consider other things as well? Does an AI being “better” mean the AI is “smarter”, or could it mean something else? How much does this hinge on which side of a genuinely meaningful tradeoff* is emphasized?

            * Speed and accuracy, for example, often trade against each other in nontrivial problems. So do speed and resource consumption, in some cases. Sometimes speed scales such that the best method changes based on the size of the problem, rather than its nature. The “best” method can even be random, depending on the particular instance of the problem.
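
            A minimal sketch of the size-dependence point, comparing a pure-Python insertion sort (O(n²), very low overhead) with a pure-Python merge sort (O(n log n), more bookkeeping per element); the exact crossover is machine-dependent and the numbers below are purely illustrative:

            ```python
            # The asymptotically "worse" algorithm tends to win on small inputs and
            # lose badly on large ones; where exactly they cross over depends on
            # the machine (illustrative assumption, not a measured claim).
            import random
            import timeit

            def insertion_sort(xs):
                xs = list(xs)
                for i in range(1, len(xs)):
                    key, j = xs[i], i - 1
                    while j >= 0 and xs[j] > key:
                        xs[j + 1] = xs[j]
                        j -= 1
                    xs[j + 1] = key
                return xs

            def merge_sort(xs):
                if len(xs) <= 1:
                    return list(xs)
                mid = len(xs) // 2
                left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
                out, i, j = [], 0, 0
                while i < len(left) and j < len(right):
                    if left[i] <= right[j]:
                        out.append(left[i]); i += 1
                    else:
                        out.append(right[j]); j += 1
                return out + left[i:] + right[j:]

            for n in (8, 64, 512):
                data = [random.random() for _ in range(n)]
                t_ins = timeit.timeit(lambda: insertion_sort(data), number=50)
                t_mrg = timeit.timeit(lambda: merge_sort(data), number=50)
                print(f"n={n:4d}  insertion={t_ins:.4f}s  merge={t_mrg:.4f}s")
            ```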

          • 6jfvkd8lu7cc says:

            @Gazeboist: when I mentioned AI quickly making its code a few times better by straightforward optimisations, I meant «X times faster while reaching exactly the same distribution of results».

            I assume that the AI will have some correct model of the computational semantics of its substrate, and that some of the stuff like «why does inserting a single NOP into a tight loop halve the cache miss rate??» is missed by both humans and compilers.
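
            A minimal sketch of what «X times faster, exact same outputs» can look like, using memoisation rather than the low-level cache effects mentioned above (a toy example, not anything specific the thread proposed):

            ```python
            # A semantics-preserving optimisation: the memoised version returns
            # exactly the same value as the naive one; only the cost changes.
            from functools import lru_cache
            import time

            def fib_naive(n):
                return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

            @lru_cache(maxsize=None)
            def fib_memo(n):
                return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

            t0 = time.perf_counter(); slow = fib_naive(30); t1 = time.perf_counter()
            fast = fib_memo(30); t2 = time.perf_counter()

            assert slow == fast  # same result, very different cost
            print(f"naive: {t1 - t0:.4f}s   memoised: {t2 - t1:.6f}s")
            ```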

      • 6jfvkd8lu7cc says:

        I wonder how exclusive these «visual processing» parts are: we do know that people try to internally visualize when thinking about difficult things; we do know that silicon tools optimised specifically for graphical output turned out to be usable for other goals. Do we know how much visualising concepts is an attempt to reuse the pattern-matching tools usually used for vision?

    • peterispaikens says:

      The point is that even a major supercomputing cluster has only a tiny fraction of the global computing power available.

      History (e.g. trends in malware and cybercrime) shows that purely “virtual” actions on the internet can easily either directly control very large numbers of connected computers, or obtain large amounts of money plus fake identities that can be trivially used to buy large quantities of computing power.

      Yes, a general AI with reasonable capabilities connected to the internet would likely be able to multiply its computing power up to some limit simply by obtaining more hardware. That being said, having 1000x the raw computing power of a human is not a very high bar – 1000 gaming PCs could likely be sufficient for that if we knew how to use them properly, but we don’t.

    • The stories usually involve the AI secretly finding a way to make itself 1000x more intelligent than it started. Is it supposed to order, take delivery of, and hook up 1000x more supercomputers, and get them all working together correctly, all without any human being involved or noticing what’s going on? Is it supposed to figure out how to make a massively more efficient computer, and somehow get humans to fabricate it, debug and troubleshoot it, and eventually get a working version somewhere our AI can access, again all without anybody noticing?

      Code can run surreptitiously, because that is what a botnet does. But there is a quantitative issue here: if an AGI already needs a significant chunk of the world’s computing resources to run, it is not going to be able to easily reproduce or upgrade itself.

      • 6jfvkd8lu7cc says:

        Actually, if you are considering botnets, there is also always the issue of communication stability/latency/throughput, data locality, etc. If an AI _needs_ that $50,000 2TiB-RAM, NVMe-backed, many-multicore-CPU server for latency/throughput reasons, a botnet may not give much of a boost even when the botnet’s total RAM and total CPU count are higher.
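
        A toy model of that point, with made-up numbers (node speeds, latencies, and the amount of work per step are all assumptions for illustration): if the workload has to synchronise often, a botnet with far more aggregate FLOPS can still be much slower than one tightly coupled server.

        ```python
        # Toy model: each step does some compute, perfectly parallelised, and then
        # has to synchronise once. All numbers below are illustrative assumptions.
        def time_per_step(work_flop, node_flops, n_nodes, sync_latency_s):
            compute = work_flop / (node_flops * n_nodes)  # ideal parallel compute time
            return compute + sync_latency_s               # plus one synchronisation

        WORK_PER_STEP = 1e10  # FLOPs between synchronisations (assumed tightly coupled)

        big_server = time_per_step(WORK_PER_STEP, node_flops=1e13, n_nodes=1,      sync_latency_s=1e-6)
        botnet     = time_per_step(WORK_PER_STEP, node_flops=1e11, n_nodes=10_000, sync_latency_s=5e-2)

        print(f"single big server           : {big_server:.6f} s/step")
        print(f"botnet, 100x aggregate FLOPS: {botnet:.6f} s/step")  # ~50x slower per step
        ```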

  47. Iceman says:

    But if we assume they’re not, and just naively multiply the probabilities together for a rough estimate, that suggests that about 14% of experts believe that all three of these things: that AI might be soon, superintelligent, and hostile.

    This reminds me of something that akira recently posted, which was an examination of what the world would look like if AI were unstoppable.

    • acrimonymous says:

      1. Artificial intelligence cannot be stopped.
      2. Initiatives (cl)aiming to “stop AI” will either fail to slow or actively hasten it.
      3. Attempting to subtly influence the development of AI is a waste of time.
      4. Other people have already figured out 1, 2, & 3 and chosen not to tell anyone.

      Yeah, I basically agree with that. I used to talk to people about the dangers of AI back in the 1990s, but nobody listened. I finally decided it wasn’t worth the effort. It’s in the nature of the problem that, even if you converted 95% of current researchers away from their research based on the premise that it’s dangerous, the last 5% might develop strong AI and your efforts would be for naught. If you talk to “normal people” about AI dangers, they think you’re a geeky crank, so political efforts are a non-starter. And as with proselytizing scientists, you also couldn’t Kaczynski enough of them before getting caught to make a difference.

      Speaking of Kaczynski, I’m surprised there hasn’t been another one of him. If you really think AI is a threat to humanity, then the current year instantiation of Time-Machine-Kill-Hitler is the Second Coming of the Unabomber. The only thing I can figure is that everyone who considered it recognizes it’s a losing proposition.

  48. Ozy Frantz says:

    How many of the experts in this survey are victims of the same problem? “Do you believe powerful AI is coming soon?” “Yeah.” “Do you believe it could be really dangerous?” “Yeah.” “Then shouldn’t you worry about this?” “Hey, what? Nobody does that! That would be a lot of work and make me look really weird!”

    I’m not sure how much of a problem that is for scientists who have noticed sudden big problems in their field. For instance, ecologists seem to have no issues with talking about how we’re currently undergoing the sixth great mass extinction and if we don’t start preserving species soon there is going to be no ecology left. Epidemiologists have predicted fourteen of the last four epidemics. Same thing with climatologists and the earth getting warmer and nuclear scientists and nuclear proliferation. I am having a hard time thinking of a crisis where we’re like “you know, in retrospect, we really wish that someone had bothered to tell us about the meteor before it crashed into the Indian Ocean and killed a billion people, but nooooo all the astronomers decided quasars were more pressing instead.”

    Cynically, one might argue the incentive gradient goes the other way for scientists: if you give money to charity, you have less money, but if you say “hey our field has just discovered a very important and urgent problem,” you get lots of funding and you get to advise presidents and sit on panels with important-sounding titles and stuff.

    • Deiseach says:

      Epidemiologists have predicted fourteen of the last four epidemics

      Did some words get switched there or were the epidemiologists really predicting three epidemics for every one that actually happened? 🙂

    • Scott Alexander says:

      There’s an incentive difference between an ecologist saying “Ecology is great and if we don’t do it bad things will happen” vs. an AI scientist saying “AI is dangerous and if we do do it, bad things will happen”.

      There’s also a difference between “If we pollute more, we’ll have more pollution” vs. “If we do more AI, this bizarre-sounding scifi scenario that’s like nothing that’s ever happened before will happen.”

      See eg the Real Life section here.

      • Ozy Frantz says:

        The “real life” section seems to contain a lot of people who are not really scientists at all, such as generals, pundits, and writers. It also has a lot of cases where there is a clear scientific consensus which is ignored by bureaucrats or the general public for whatever reason. When I’m looking for incorrect scientific consensus, I get Alice Stewart (whose work on prenatal X-rays took 25 years to filter into the general consensus), Semmelweis, and thalidomide but only sometimes. Which is three cases (as opposed to like ten of “scientists are yelling really loudly that you need to do this thing and no one is paying attention to them”, not to mention all the cases of non-ignored experts who don’t go on the TVTropes page). So I don’t think this is really strong evidence that scientists in general fail to notice catastrophes.

        The primary thing we need to do to preserve species is to keep habitats the way they are, which isn’t particularly related to increasing funding for ecological research at all. Conversely, AI safety research does involve giving people money to study AI, many of whom are presumably AI experts. Self-interest points much more in favor of AI research.

        Global warming and the sixth great mass extinction are both unprecedented events that are very unintuitive to people. It is true that you can pitch “preventing extinction” as “we’re trying to preserve pandas!” and then you neither have to explain the equilibrium theory of island biogeography nor why anyone ought to care about the preservation of insect species, but this is about as accurate as saying “we’re giving the AI researchers money so they can figure out how not to build Skynet.”

    • Reasoner says:

      My guess is that a lot depends on whether the warnings are perceived as coming from field insiders or field outsiders. I think there’s a good chance that MIRI actually made the AI safety situation a lot worse through incompetent advocacy. Good to see that Stuart Russell seems to have taken up the AI safety advocacy torch.

  49. OptimalSolver says:

    Not shown: graph of Moor’s law petering out.

    • skef says:

      I suspect that “don’t strangle your wife for suspected infidelity” will continue to apply for quite some time …

    • Trofim_Lysenko says:

      Moor’s law started petering out in the 11th century about the time the Caliphate of Cordoba collapsed.

      EDIT: Damnit, Skef!

    • thevoiceofthevoid says:

      Faster computers != more capable AI (at least not necessarily).
      For similar reasons I am quite skeptical about both arguments that we’ll never get strong AI because our computers can’t match the human brain in computations per second or whatever, and arguments that we’ll get strong AI as soon as our computers are powerful enough.

      • Ezo says:

        The faster the computers, the easier the task of creating AGI. The simplest path to AGI (requiring no new ground-breaking algorithms) is to create a big enough neural net and feed it enough data. There’s no reason it shouldn’t work. But it requires enough computing power to emulate a neural net as big as the human brain, and to be able to teach it as fast as we would teach a human (or faster).
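
        For what that’s worth, here’s the kind of back-of-envelope involved, using ballpark figures that vary by orders of magnitude across the literature (every number below is an assumption for illustration, not something from the survey):

        ```python
        # Very rough estimate of the compute needed to emulate a brain-sized net.
        # All figures are commonly cited orders of magnitude, not precise values.
        SYNAPSES       = 1e14   # ~number of synapses in a human brain (ballpark)
        AVG_FIRING_HZ  = 10     # average spike rate (very rough)
        FLOP_PER_EVENT = 10     # arithmetic per synaptic event (very rough)

        brain_flops = SYNAPSES * AVG_FIRING_HZ * FLOP_PER_EVENT  # ~1e16 FLOP/s

        ACCELERATOR_FLOPS = 1e14  # order of magnitude for one modern GPU/accelerator

        print(f"rough brain estimate : {brain_flops:.0e} FLOP/s")
        print(f"single accelerator   : {ACCELERATOR_FLOPS:.0e} FLOP/s")
        print(f"accelerators needed  : ~{brain_flops / ACCELERATOR_FLOPS:.0f}")
        ```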

        > our computers can’t match the human brain in computations per second

        Slightly off-topic, but I think comparing our chips to brains is simply unfair. A single chip’s area is on the order of square centimeters, and it’s effectively 2D. Brains are 3D and much bigger than that. And unless we’re talking about neuromorphic chips, saying that chips are slower than the brain because we can’t simulate a brain on them in real time is like saying that x86 processors are slower than ARM ones because they can’t emulate them in real time.

        About Moore’s Law – yep, it’s dying. But we could still get a few orders of magnitude of speedup by using materials other than silicon (which would, hopefully, decrease heat output and increase the maximum clock rate by a factor of 100). And if we decrease heat output significantly, maybe it would become feasible to physically stack logic circuitry layer on layer. If that worked, we would get a single, slightly thicker chip as powerful as hundreds of today’s.

        Though I don’t know if that would be cheaper per transistor than separate chips, so there may not be much point in it.

        Moore’s law is dead, but we still have plenty of ways to potentially improve our processors. When one looks at the physical limits of computation, our modern CPUs/GPUs are not that much faster than the first mechanical computers were.

      • peterispaikens says:

        Faster computers are one route towards superhuman general AI.

        The way I see it, we already have sufficient computing power for superhuman GAI *if we knew how* to build one, but we don’t; and we pretty much know how to build a GAI in a horribly inefficient manner (brute-force emulation of biological neurons), but we have nowhere near the computing power to run something like that, even for very tiny brains.

        This means there are two ways strong AI can arrive: either we figure out how brains work well enough to code one properly, or (which matters to those who think we won’t figure it out any time soon) available computing power eventually becomes sufficient to brute-force the whole issue.

        In a sense, this implies some equivalence between breakthroughs in AI research and breakthroughs in computing hardware – you only need progress in *one* of them; even if one of these areas hits a brick wall with no progress beyond current tech, the other can pull us up to where GAI becomes possible.

        • Bugmaster says:

          Right now, no one has any idea how to emulate the human brain. We can’t even emulate a worm yet. And throwing more computing power at the problem won’t help, because without knowing exactly how brains work, we don’t know what to emulate.

          Similarly, throwing more computing power at conventional machine learning approaches won’t work. All you’d get are marginally faster conventional machine learning tools. That’s pretty useful, don’t get me wrong — but it’s not AGI.

    • gwern says:

      Still looking pretty good for the GPUs everyone is using for AI right now…

    • deciusbrutus says:

      That applies to the computing speed of Turing machines. Basically every AI we have developed is less powerful than a Turing machine, even as they get faster.

    • Scott Alexander says:

      Katja (lead author of this study) is currently working on an analysis of whether or not this is true and what the implications are.

  50. skef says:

    Suppose you accept this premise: A.I. will eventually outperform humans on every task. If “outperform” is interpreted in part as economic performance, it is a corollary that A.I. will eventually carry out all tasks.

    What, accepting the premise, will the last human “outperformance” be? It will presumably be the finishing touches on the A.I. to take over that task. Whatever the hardest task for A.I. is, overcoming it is therefore an instance of A.I. research. It’s analytic!

    One might explain the 40-year lag in terms of material limits. We might reach a point where A.I. can outperform a human on any given job before it can outperform humans on all jobs at once, if we start from a point with a small number of robot-like things and a large number of humans.

  51. Joy says:

    If someone is sure that solving the pronoun problem is a prerequisite to solving the AI alignment problem, they would likely rate the first one as the most important at this time.

    • thevoiceofthevoid says:

      My thoughts exactly. To expand on this:
      When asked what the “most important problem” is in AI research, I suspect that many researchers will prioritize concrete, specific problems that they believe progress can actually be made on. In addition, I don’t think the importance of pronoun reference comes from AIs being able to understand silly sentences, but rather from the pronoun reference being used as a building block for far more complex systems.

      • vV_Vv says:

        You can solve the easy instances of pronoun disambiguation using simple heuristics, either hand-coded or the kind of pattern-matching heuristics that artificial neural networks easily learn.

        But to solve the hard instances, you will likely need a pretty advanced theory of meaning, one that includes to some extent a theory of mind.

        For instance, in Winograd’s original example:

        “The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.”

        The referent of “they” is either “the city councilmen” or “the demonstrators” depending on which verb is used. You can’t solve this from syntax alone, or with a few simple heuristics based on the frequencies of certain word patterns. You will likely need some deeper level of understanding that current AI systems are not capable of.

        EDIT:

        I should say that it is probably possible to solve the Winograd Schema Challenge using Big Data: gather 100,000 examples of such sentences and train a neural network on them, and I’d expect it to perform reasonably well on a test set created in the same way. Train it on 10,000,000 examples and it will probably perform near human level.

        But change the problem a little and that neural network will do poorly. A neural network essentially learns by pattern matching: give it lots of examples and it will learn lots of patterns, even for the rare corner cases. But change the problem so that those patterns break in a systematic way, and the network will usually fail.

        Pattern matching is certainly an important part of learning, and it can get you very far in cases where you have a stable problem with plenty of available data (see AlphaGo) but it is not all of it.
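
        To make the earlier point concrete, here is a deliberately naive resolver of the kind that handles easy cases but cannot handle a Winograd pair (a toy sketch, not a real NLP system): it just picks the most recently mentioned candidate noun phrase, so it gives the same answer for both verbs and necessarily gets one variant wrong.

        ```python
        # Naive heuristic: resolve "they" to the most recently mentioned candidate
        # noun phrase. It answers both Winograd variants identically, so it must
        # get one of them wrong.
        CANDIDATES = ["the city councilmen", "the demonstrators"]

        def naive_resolver(sentence, candidates):
            """Pick the candidate antecedent that appears last before 'they'."""
            before_pronoun = sentence.lower().split(" they ")[0]
            positions = {c: before_pronoun.rfind(c) for c in candidates}
            return max(positions, key=positions.get)

        for verb, expected in [("feared", "the city councilmen"),
                               ("advocated", "the demonstrators")]:
            s = (f"The city councilmen refused the demonstrators a permit "
                 f"because they {verb} violence.")
            guess = naive_resolver(s, CANDIDATES)
            print(f"{verb:9s} -> guess: {guess:21s} expected: {expected}")
        ```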

    • bbartlog says:

      Is the ‘pronoun problem’ a reference to the difficulty of resolving anaphora in human language, or are we talking about something else?

      • Scott Alexander says:

        Sorry, I deleted this part on the advice of commenters. It was previously talking about pronoun disambiguation.

  52. Alejandro says:

    Also, since they rated AI research (80 years) as the hardest of all occupations, what do they mean when they say that “full automation of all human jobs” is 125 years away? Some other job not on the list that will take 40 years longer than AI research? Or just a combination of framing effects and not understanding the question?

    Could it be that when asked for a particular occupation X, they think of an AI programmed specifically to do X, while when asked about the full automation of human jobs, they think of an AI with the plasticity and versatility of a human? It would make sense to think that the latter is more difficult than the former is for any specific value of X, and that before it happens we would not be programming thousands of different AIs to specialize in all the possible values of X, even if we had the technical capacity for doing it for any X.

    • Deiseach says:

      Grace: “High-level machine intelligence is achieved when unaided machines can accomplish every task better and more cheaply than human workers.”

      I wonder if this is just ambiguous enough to be confusing? Someone might consider “every task better and cheaper than a human worker” to apply to something like “fully automated car plant with no more human workers at all, rather than the mix of human and robots now” and go “yeah, sure, ten years or sooner”. But they might not consider “every task” to include “scientific research, law, medicine, changing the baby’s nappy, running a restaurant, landscaping, architecture, etc.” When it is worded as “every profession“, however, that makes them think “oh like doctors and lawyers and all the rest of it? oh yeah, that’s going to take a lot longer”.

  53. OptimalSolver says:

    Nothing I’ve seen from MIRI and friends leads me to believe that “AI safety” is even possible. If intelligence explosion is actually feasible, you’re going to rapidly hit unknown unknowns.

    Also, no one at MIRI seems to have any actual experience in software development, otherwise they’d realize just how ridiculous their proposals are.

    • soreff says:

      That’s basically my gut reaction as well.

      I was viewing Scott’s:

      I don’t have the raw individual-level data, so I can’t prove that these aren’t all anti-correlated in some perverse way that’s the opposite of the direction I would expect. But if we assume they’re not, and just naively multiply the probabilities together for a rough estimate, that suggests that about 14% of experts believe that all three of these things: that AI might be soon, superintelligent, and hostile.

      Also, only one-tenth of these – 1.4% – think this is “among the most important problems in the field”.

      as perfectly consistent with ~13% viewing unsafe AI as feasible and likely, but safe AI as either technically infeasible or infeasible for some other reason (e.g. the dynamics of competition).

      Perhaps they view the field as fated to build a paperclip optimizer? 🙂

    • vaniver says:

      Also, no one at MIRI seems to have any actual experience in software development, otherwise they’d realize just how ridiculous their proposals are.

      As you can see on our team page, our executive director Nate used to be a software engineer at Google, as did Marcello and Sam. I used to be a data scientist, which I’d count as software development. Critch probably also counts, since he was an algorithmic trader at Jane Street.

      • deciusbrutus says:

        I don’t think that data science counts as software development.

        I also don’t think software development experience is much related to AI development.

        I’m also unconvinced that Turing machines are sufficient to run programs that are better at writing programs that run on Turing machines.

        • Izaak says:

          I’m also unconvinced that Turing machines are sufficient to run programs that are better at writing programs that run on Turing machines.

          What does this mean? I’m parsing this as “I am unconvinced that we can make a Turing Machine that writes better programs than humans”, but compilers and genetic programming already exist, so I’m not confident in that translation.

        • PDV says:

          Data Science is software development; there’s no nontrivial criterion that would exclude it that wouldn’t also exclude all writing of code in assembly/all academic software/all frontend web development/anything related to WordPress.

    • Also, no one at MIRI seems to have any actual experience in software development

      Some do now, but when it was founded (and the core doctrine was formed) they didn’t. They have always been lacking in AI-specific backgrounds, and it shows in the doctrine, which is very focussed on GOFAI.

    • bbartlog says:

      I don’t see why AI safety wouldn’t be possible. Or at any rate, it seems to me the primary problem would be malevolent human agents in control of superhuman AI. Scenarios where an AI tasked with making paperclips runs out of control seem avoidable, either by giving such AIs finite goals or reward functions, or by embedding logical constraints so that they can’t self-modify. Not that there aren’t some less obvious failure modes there, such as the AI building a second, less-constrained AI to ‘help’ it – but those behaviors can also be proscribed.
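
      For concreteness, a deliberately simple sketch of the “finite goals plus hard constraints” idea (a toy illustration; whether anything like this scales to genuinely powerful systems is exactly what is in dispute in this thread):

      ```python
      # Toy agent loop: the objective is capped (a finite goal), and a hard-coded
      # constraint check runs before every action. Purely illustrative.
      GOAL_CAP = 1_000                                   # finite goal: at most this many paperclips
      FORBIDDEN = {"self_modify", "build_successor_agent"}

      def constrained_step(state, proposed_action):
          """Apply an action only if it passes the constraint check."""
          if proposed_action in FORBIDDEN:
              return state                               # refuse: constraint violation
          if proposed_action == "make_paperclip" and state["paperclips"] < GOAL_CAP:
              state["paperclips"] += 1
          return state

      state = {"paperclips": 0}
      for action in ["make_paperclip"] * 1_500 + ["build_successor_agent"]:
          state = constrained_step(state, action)

      print(state)  # {'paperclips': 1000} -- stops at the cap, refuses the forbidden action
      ```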

    • Scott Alexander says:

      “Also, no one at MIRI seems to have any actual experience in software development, otherwise they’d realize just how ridiculous their proposals are.”

      The director of MIRI is a software engineer who has worked for Google, Microsoft, and the US government.

      “Nothing I’ve seen from MIRI and friends leads me to believe that ‘AI safety’ is even possible.”

      If an asteroid were heading toward Earth, I feel like people would do more than spend ten minutes looking at the problem, say “asteroid deflection looks infeasible”, and get on with their lives, even if asteroid deflection really were infeasible.

      • Bugmaster says:

        Scott, your asteroid analogy doesn’t hold water.

        We know quite a lot about asteroids. When we do detect one, we can track it with extreme precision; in fact, we are tracking many of them right now. We know exactly what happens when an asteroid strikes a planet — we have plenty of evidence to draw upon. On the deflection side, we are a bit behind, but we can sketch out several possible solutions. We can reliably launch equipment into orbit, we can build rocket boosters, etc. At this point, detecting an asteroid hurtling toward the Earth would be a major problem, but not a completely new kind of problem. If we managed to detect it long enough in advance, we could stop it without inventing entirely new disciplines of science and engineering.

        Literally none of those things are true of AI. The AI-risk scenario is more akin to an alien invasion than an asteroid strike. I agree that an alien invasion could be catastrophic, but I don’t think it makes sense to invest in the alien-risk prevention community — no matter how graphic their descriptions of potential bug-eyed gray aliens are.

        Just to forestall one possible objection: you might argue that our anti-asteroid defences are relatively strong precisely because people invested in them; but this isn’t true. Rather, our general state of science and engineering has advanced to the point where asteroid detection and deflection is not merely possible, but actually relatively cheap (note, I did say “relatively”); this makes investing in asteroid defences a feasible proposition. Again, this isn’t the case with AI; we can’t even make it disambiguate pronouns yet!

        • PDV says:

          None of your nitpicks have any bearing on the objection.

          • Bugmaster says:

            Care to elaborate ?

          • wintermute92 says:

            @bugmaster

            The details of how we could deflect asteroids aren’t relevant to whether deflecting extinction-level asteroids is a good idea. You explained lots of ways that asteroid deflection is easier and a clearer problem than AI safety work, but Scott’s whole point was that even if asteroid deflection were impossible he would still want research to make very sure we hadn’t missed a solution!

            More charitably, you’re arguing that the metaphor is bad because asteroids are a proven, understood threat while aliens (or AI) are uncertain and ill-defined. That’s a discussion that’s been had in many places, but it’s still irrelevant to the metaphor. The topic at hand wasn’t the value of AI safety, it was your claim that it’s not “even possible”. We don’t need to prove that aliens are real, nearby, and angry to discuss how space-based weapons might work.

            This feels like you are moving the goalposts in a really unfair way. You said that AI safety appears impossible, and then when challenged said “hah, your metaphor is bad because AI safety might be unnecessary!” That’s a different conversation, and swapping arguments to undermine your opponent doesn’t make for good discussion.

        • albatross11 says:

          It seems like one issue is that we may be too early to meaningfully address the problem. The modern US could invest in asteroid defenses that would probably do some good. Victorian England couldn’t have done anything useful in that sphere, despite being among the richest and most successful societies on Earth at that time–they simply didn’t have the tools to make any headway on the problem.

          I don’t know that this is true–I’m way outside my expertise here. But this seems like a likely issue.

      • 6jfvkd8lu7cc says:

        What are the drastic anti-asteroid measures taken by humanity since 15 February 2010?

        I mean, we knew there was an asteroid menace at that time, and three years later a completely unknown asteroid entered Earth’s atmosphere, released energy equivalent to hundreds of kilotons of TNT above a megapolis, and caused over a thousand injuries (tens of them leading to hospitalisations), damage to thousands of houses, and tens of millions of dollars in overall property damage.

        We got lucky: the entry angle was shallow, the airburst happened high above the ground, and so everything could be repaired in about a month.

        There was, and is, constant work on better detection, but no space-race-style megaprojects; just as there is constant work on software verification.

  54. aphyer says:

    Nitpick:

    Also, only one-tenth of these – 1.4% – think this is “among the most important problems in the field”.

    I think the 1.4% was the number who said it was ‘much more valuable than other problems in the field’, while 5% said it is ‘among the most important problems in the field’. One-third (5% out of the roughly 14%) is still surprisingly low there, though.
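
    Quick arithmetic on those figures, taking the naive 14% product from the post at face value:

    ```python
    # The percentages quoted in the post and in this comment.
    believe_all_three    = 0.14   # naive product: soon AND superintelligent AND hostile
    much_more_valuable   = 0.014  # "much more valuable than other problems in the field"
    among_most_important = 0.05   # "among the most important problems in the field"

    print(f"'much more valuable'       : {much_more_valuable / believe_all_three:.2f} of the 14%")   # ~0.10
    print(f"'among the most important' : {among_most_important / believe_all_three:.2f} of the 14%") # ~0.36
    ```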

    Less nitpicky (though only tangentially relevant): re Section I, one thing I am often tempted to say is ‘Group X’s opinions are massively governed by framing effects, therefore they don’t know what they’re talking about and we can ignore them’. While this isn’t super charitable, it does feel like a reasonable thing to say? I mean, mathematicians will never give you different answers to ‘what is 2 + 2?’ versus ‘what do you get if you add 2 and 2 together?’, and if they did we’d probably be worried that they don’t know what they’re talking about.

  55. adefazio says:

    A lot of machine learning researchers, myself included, just think it’s too early to reason about AI safety problems for general AIs. It’s not science, it’s philosophical navel-gazing at this point. There are plenty of interesting near-term AI safety problems that can be studied concretely instead.

    • Ilya Shpitser says:

      “but consider trying to avert the catastrophe less important than things like pronoun disambiguation.”

      Scott, this frame is really annoying, please don’t do this. I have a pretty good technical idea of what MIRI et al do. I just had a nice Skype with Stuart Armstrong yesterday. It would be very, very easy for me to set up an uncharitable frame for that kind of research (e.g. mucking around with exotic decision theories, or doing niche work in reinforcement learning). Who is to say? Maybe pronoun disambiguation is “deep.”

      How about the AI safety community stops being so status-thirsty, and starts delivering, like deep learning has. Presumably, delivering will bring accolades and money.

      • JPNunez says:

        Maybe the problem is economic incentives.

        I can’t imagine there’s much money to be made in the field of making AIs stop working.

        • 6jfvkd8lu7cc says:

          There is a lot of money (and a few lifelong prison sentences) in influencing, via cheap-to-manipulate inputs, the outputs of the AI that is Apple’s decision-making system (which uses both computers and humans as computational cells).

          The same (with drone strikes and world domination added to the stakes) if the AI in question is a nation-state military, or a nation-state justice system.

        • There’s money in brakes, airbags, etc. Having legal requirements for such things is helpful, though.

      • vaniver says:

        How about the AI safety community stops being so status-thirsty, and starts delivering, like deep learning has. Presumably, delivering will bring accolades and money.

        It’s unclear to me that this is a sensible prediction. Like, if someone manages to come up with a solid solution to the corrigibility problem, it’s unclear that this has much practical value until AI systems become much more powerful. But if someone manages to come up with a system that’s slightly more effective at predicting market movements or server power usage, people will throw millions of dollars at them.

        • Ilya Shpitser says:

          I am not really “in the AI safety field” myself, and I don’t want to tell them their business. But it seems to me there are practical versions of this problem that would be of much use today. For example, how to get automated cars to act safely, so they could be widely deployed.

          This is a hard problem! But I will bet you dollars to donuts nobody in the “AI safety” cluster will touch it — and it will get solved by engineering/data science/experimentation elbow grease somewhere at Google or CMU’s self-driving car labs.

          My favorite problem is how to get gvts/corps/non-profits to act in ways that serve human interests.

          Same deal — I don’t expect anybody in “AI safety” to touch this.

          I don’t think there is a ton of money in programming a superhuman wei chi player, either. Yet, here we are.

          • Reasoner says:

            I don’t think there is a ton of money in programming a superhuman wei chi player, either. Yet, here we are.

            Is there an AI safety analogue to wei chi? That is, some kind of indisputable and impressive accomplishment that will demonstrate progress on AI safety?

          • Ilya Shpitser says:

            That’s a good question! I am actually thinking about this now.

      • pdbarnlsey says:

        How about the earthquake-proofing community stop going on and on about the importance of earthquake-proofing, and just get on with the job of earthquake-proofing the city? And then, once they’ve delivered, and there has been an earthquake, resources for earthquake-proofing will flow to them.

        • Bugmaster says:

          This has actually already happened, at least in places like California and Japan, where earthquakes are common. The earthquake-proofing community has indeed received a significant amount of resources (though perhaps not as much as they’d like, I don’t know).

          But this probably has something to do with the fact that earthquakes demonstrably happen; we have a decent idea of how they work; we know how to predict them with a reasonable degree of accuracy; and we have the engineering skills required to mitigate their effects. None of this is true of the AI Risk community, as far as I can tell.

      • Bugmaster says:

        FWIW, I know at least one linguist, and they tell me that pronoun disambiguation is an unsolved problem that is actively being worked on right now. It’s not trivial by any means. You can always argue that it’s less important than AI safety research, regardless of its complexity, but… I have trouble envisioning an AGI that can take over the world in an instant, yet cannot disambiguate pronouns correctly.

        • Cliff says:

          Seems like a failure of imagination. Why would disambiguating pronouns in any way be necessary to take over the world? Were Genghis Khan, Hitler, Alexander the Great, etc. capable of that? It seems like for a very clever person with the capabilities of a computer (mimic any voice, spoof any email, create any image), taking over the world would be quite possible. And that is without superhuman intelligence and without any capabilities that do not really exist today.

          • Bugmaster says:

            Were Genghis Khan, Hitler, Alexander the Great, etc. capable of that?

            They were human, so, yes.

            It seems like for a very clever person with the capabilities of a computer (mimic any voice, spoof any email, create any image)

            Modern computers cannot do any of those things; at least, not nearly to the extent that you appear to be proposing. On top of that, mimicking voices is not very useful if you can’t figure out which pronouns mean what. Same goes for emails. It might be tempting to say that you can bypass the problem with images, but lacking the ability to disambiguate pronouns really means that one lacks a robust theory of mind. Communicating with humans would be impossible while lacking such a basic capability. Also, I think that if the term “superhuman intelligence” means anything, then being able to parse basic sentences should be a part of that.

          • vV_Vv says:

            Why would disambiguating pronouns in any way be necessary to take over the world?

            If the plan involves communicating with humans, or even just understanding what humans say, then it is necessary.

            Were Genghis Khan, Hitler, Alexander the Great, etc. capable of that?

            Obviously yes.

            It seems like for a very clever person with the capabilities of a computer (mimic any voice, spoof any email, create any image), taking over the world would be quite possible. And that is without superhuman intelligence and without any capabilities that do not really exist today.

            Is this a version of the #RussianHackers conspiracy theory? /s

        • deciusbrutus says:

          “Doing linguistics better and cheaper than humans” comes before taking over the world.

        • vV_Vv says:

          You can always argue that it’s less important than AI safety research

          Is it?

          “The patient has a bullet lodged in his heart. Operate on him in order to remove it.”
          And then the robot removes the patient’s heart.

          When you ask AI researchers about AI safety, this is the kind of problem they are most likely to think about, not some god-like AI turning the Galaxy into paperclips because of some counter-intuitive borderline case of its decision theory.

          It’s not that the paperclip scenario is theoretically impossible, but a) since pronoun disambiguation, and a lot of other stuff that comes really easily to humans, is so hard that AI researchers haven’t figured it out yet, how could they productively work on the paperclip problem? And b) unless you assume that for some weird reason solving pronoun disambiguation will immediately lead to a superintelligence explosion, the fact that pronoun disambiguation is unsolved means there is likely still a lot of time left.

      • Ketil says:

        “but consider trying to avert the catastrophe less important than things like pronoun disambiguation.”

        Consider what answers you’d get if you asked neuroscientists about the most important problems in their field. My guess is, you’d get something about neurotransmitter mumble pharmaceutical mumble receptor mumble interaction. And I could go, wot, surely it is more important to build a better society by making sick people healthy or reducing crime, or some such?

        By asking AI researchers about this, you will get answers which are in their scope. If you ask me (and I guess I could almost count as an AI researcher), the question of AI safety is of course important in the long run, but I would not prioritize it right now. I have absolutely no idea how we could usefully predict how a hypothetical HLMI would think, and assuming hyperintelligence, I think it might be beyond us. Like an omniscient god, its ways will be ineffable. It’s great that smart people are thinking about it, but it’s a very different field from building a machine that can understand written texts a bit better, and I’d rather work on problems where I can tell the difference between a useful result and a useless one.

      • Scott Alexander says:

        You call the AI safety community “status-thirsty”, but I have the opposite complaint. It sounds like you’re saying they need to earn status before they should be listened to, which is itself too status-gamey a framework given the urgency of the situation.

        Suppose an asteroid were heading towards earth (with some uncertainty about whether it would hit or not). Does it make sense to say “how about the asteroid deflection community stops being so status thirsty and talking about themselves so much, I’m not going to listen to them until they deflect a small meteorite”?

        In that situation, the worse the “asteroid deflection community”, the more important it would be for other people outside the community to either help them (if they’ve basically got their heads screwed on right) or start competing projects of their own (if they’re unsalvageable). Worrying about how high-status they are or aren’t seems like the last thing that should be on anyone’s mind.

        • HeelBearCub says:

          Actually, the “asteroid deflection community” is probably a worthwhile analogy to explore.

          Imagine RADI (the Research for Asteroid Deflection Institute) took as a given that Planet AyeEye was lurking out there in a long orbit, on a near-Earth trajectory which might intercept Earth, and could be detected passing the Kuiper Belt in a few decades or even years. Initially formed by a group of enterprising and concerned meteorologists, with a few astronomers and rocket engineers coming aboard later, they have dedicated themselves to studying the feasibility of moving AyeEye to another orbit.

          Mainstream astronomers, physicists, and engineers agree that objects in long orbits are very interesting and potentially very dangerous, but think research dollars are needed for detection first, before expending resources on theoretical orbit manipulation.

          • drachefly says:

            That would match a lot better if the AI experts were more interested in the analogous question at the end.

            Or if the astronomers were throwing heavy objects around the solar system in such a way that, if it scaled up, it could easily disturb the orbits of various KBOs.

          • wintermute92 says:

            I’m with drachefly here.

            First, the vast majority of FAI people are interested in AI more generally, and many seem to think that their work will dovetail with AI efforts. (No, Yudkowsky’s “drop all AI work to only do FAI work” is not standard.)

            Second, they aren’t claiming dangerous strong AI as a state of nature; they’re asserting that conscious, ongoing efforts to build AI cause AI risk. I think Bostrom would agree that a Butlerian Jihad would render the FAI question moot; he just thinks that’s impractical and ill-advised.

            So in terms of the metaphor: we aren’t talking about meteorologists versus mainstream astronomers. It’s much closer to astronomers versus small-scale astroengineers.

            The astronomers aren’t experts in the mechanics of what’s being done, but they’re in an obviously-related domain capable of talking comprehensibly about the risk. And the astroengineers aren’t checking whether something exists, they’re actively building the thing and talking about whether it’s safe.

        • Ilya Shpitser says:

          I am a big believer in “letting a thousand flowers bloom.” That is, I am happy for AI safety folks to make their case in the marketplace of ideas and compete with others, who perhaps want to work on semantics (this is what the pronoun stuff is really about, I think), or deep learning, or causal inference, or whatever. I certainly am not trying to shut them down or get them to stop making their case!

          I guess all I am saying is that the frame of ‘prevent imminent catastrophe vs [topic in AI]’ is not the right one here.

          People who work on ‘preventing imminent catastrophe’ would be people who are working on asteroid detection or carbon sequestering or maybe aging/death. What folks at FHI/MIRI do is fairly interesting, interesting enough for me to read their stuff. But it’s basically math. The path from what they do to stopping catastrophe is a path I don’t buy, and most folks in AI/ML will also not buy.

          So from where I sit, AI safety is folks doing interesting math trying to brand it as MOST IMPORTANT CATASTROPHE PREVENTION YOU CAN DO TODAY. This is a weird and unconvincing status branding. I don’t think they will convince a lot of important people in academia and politics that way.

          This is why, if I were working on safety myself, I would try to carve off a smaller chunk more directly relevant today. Lots of computer security/crypto folks do interesting and relevant stuff. There are even folks I know who work on making robots behave properly around people (in HCI/robotics communities). But again, I am not here to tell them their business.

          I also wish AI safety folks competed for funding normally, rather than selling “being a world-saving hero via donations” to impressionable people.

          • PDV says:

            >This is why, if I were working on safety myself, I would try to carve off a smaller chunk more directly relevant today. Lots of computer security/crypto folks do interesting and relevant stuff.

            Name three.

            > There are even folks I know who work on making robots behave properly around people (in HCI/robotics communities).

            That isn’t even on the same continent as something useful for AI alignment. So if that’s going to be the quality of your suggestions, better name seven to ensure at least three of them are right.

          • Ilya Shpitser says:

            Two can play this game: nothing MIRI or FHI does is useful for AI alignment. Yes, I am sure _they_ disagree, but they are in a huge minority.

            However, one advantage the HCI/robotics stuff has over what MIRI does is it’s actually useful today for safety problems.

            “Name three.”

            I don’t mean to imply comp.sec. and crypto people do stuff that’s helpful for “AI alignment,” however you might define it. However, they do things that are relevant for doing computation or communication safely when adversaries are around.

            And this is precisely my point. AI safety folks articulated something called “AI alignment,” and are now judging things as useful or useless based on this concept they articulated. But what I think ought to happen is for “AI safety” to become a big healthy academic field with lots of smart people and disagreements on what “AI alignment” entails, and what is useful for it.

          • wintermute92 says:

            So from where I sit, AI safety is folks doing interesting math trying to brand it as MOST IMPORTANT CATASTROPHE PREVENTION YOU CAN DO TODAY.

            Huh. We appear to have vastly different views on the importance of AI risk, but I basically agree with this.

            MIRI’s approach to FAI reminds me very strongly of the early history of popular programming languages (popular, so ~1975, not ~1935). Lots of heavy math, lots of researchers eagerly talking about correctness and theory, very little relevance in the end.

            The wonderful Backus-Dijkstra letters are a great taste of this – obviously these are two very important computer scientists, but check out how much time they spend discussing languages that do wild things like embedding formal correctness proofs in their own code. Compare the stuff they’re talking about to the stumbling, whatever’s-functional evolution of popular programming languages.

            This is sort of what I expect from MIRI. They’re going to do lots of interesting math and maybe come up with some clever ways of guaranteeing alignment in small examples which don’t scale well. And while they do it, other people are going to go off and throw millions of dollars at complex techniques they barely understand until something ends up smart.

            If we’re lucky, MIRI will prompt more interest and awareness in AI risk. If they’re lucky, they’ll also come up with some tricks that are actually useful enough to integrate into common use (in-metaphor, functional programming or something?) But in any event, I don’t think they’re on a road to alignment breakthroughs that will integrate usefully with whatever path gets us to strong AI. And I’m sort of frustrated that they don’t seem to be concerned about that possibility.

          • Joe says:

            @wintermute92

            Not sure I understand this perspective.

            If an intelligence explosion is likely, then I don’t see how a gradual, piecemeal, trial-and-error approach to AI safety can work. Once an AI crosses the critical threshold of being able to improve its ability to improve itself, it will (by assumption) rapidly surpass the rest of the world in capability and then set about pursuing the goals it was programmed with. We won’t get a chance to correct it, so the only viable safety work has to be done in advance, by somehow achieving formal certainty that the AI’s coded objectives, when followed to the letter by an unchecked power, will result in a good outcome.

            On the other hand, if an intelligence explosion isn’t going to happen, then I don’t see why AI safety is a concern at all, since no AI will ever be in a position to pursue its goals unchallenged in the first place.

          • Aapje says:

            @Joe

            We won’t get a chance to correct it

            Why not?

            You seem to be assuming that the AI will be some godlike entity with no vulnerability or way for humans to influence it.

            It seems likely to me that we can just turn off the computers it runs on and that we can change the code.

            Of course, it could theoretically try to stop us from doing this, so the AI safety folks can give us the advice to explicitly program the AI to never ever do this, no matter what other goals it has.

          • It seems likely to me that we can just turn off the computers it runs on and that we can change the code.

            Unless

            * it has copied itself
            * it has a dead man’s handle, to do something disastrous if it stops running
            * etc.

            Of course, it could theoretically try to stop us from doing this, so the AI safety folks can give us the advice to explicitly program the AI to never ever do this, no matter what other goals it has.

            That’s saying it’ll be safe if someone solves the safety problem. Admittedly, it’s an easier version of the safety problem: design effective off switches, rather than solve morality.

          • Once an AI crosses the critical threshold of being able to improve its ability to improve itself,

            Do you expect that to be entirely initiated by the AI, or do humans need to deliberately create self-improving AI?

            it will (by assumption) rapidly surpass the rest of the world in capability and then set about pursuing the goals it was programmed with.

            Assuming it will be explicitly coded (GOFAI), assuming it is an agentive AI, assuming goal stability… quite a lot of assumptions, there.

            We won’t get a chance to correct it, so the only viable safety work has to be done in advance, by somehow achieving formal certainty that the AI’s coded objectives, when followed to the letter by an unchecked power, will result in a good outcome.

            GOFAI in, GOFAI out.

            Actually, there are many alternatives including corrigibility.

            If an intelligence explosion is likely, then I don’t see how a gradual, piecemeal, trial-and-error approach to AI safety can work

            I don’t see how there is any alternative, when there are multiple kinds of AI, not just the GOFAI kind assumed by MIRI.

        • Ilya Shpitser says:

          Edit: Academia, for all its flaws, has an enormous number of very smart, very thoughtful people in it. If you think academia as a whole is missing something, almost certainly that’s not the case. A good model: something with infinite processing power, but a diffuse attention span.

          I guess a version of your pitch I would agree with is: “we are doing our best, but we need more brains involved to cover all bases/approaches, as far as the catastrophe that we are sure is coming one day.”

          I am more skeptical about sales pitches of the form: “we need more folks to work on our brand of decision theory” or “we need more grant funding.” Those feel like status/money grabs. I am definitely skeptical of sick burns of the form: “you working on your topic is like Nero fiddling while Rome burns.” Regardless of your sense of urgency, that type of pitch is just poor instrumental rationality. Actually, some folks in ML who shall remain nameless are guilty of similar things, too.

          I think planting more flowers and tying the field to real concerns today is the way forward that will get traction. But, again, I am not here to tell people their business! I am trying to give useful feedback as a somewhat sympathetic outsider.

  56. pdbarnlsey says:

    I’d like to clarify the statistical construct used to estimate time periods. From the paper:

    Each individual respondent estimated the probability of HLMI arriving in future years. Taking the mean over each individual, the aggregate forecast gave a 50% chance of HLMI occurring within 45 years and a 10% chance of it occurring within 9 years.

    So my reading is that each individual gave a cumulative probability distribution over time (or time-period buckets); that each of those was then collapsed into an individual mean expected future date; that those individual dates were used to construct a range of date predictions for the group; and that the median of those values was what the paper reported as the “50% chance within X years” figure?

    Is that right, Scott (or anyone more capable of interpreting the passage than I am)?
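
    For what it’s worth, here is a sketch of the two readings using made-up toy data (three fake respondents with logistic CDFs over years; none of these numbers come from the paper). Reading A averages the individual CDFs and reads off where the averaged curve crosses 50%; Reading B collapses each respondent to a single summary year first and then takes the median of those.

    ```python
    # Toy comparison of two aggregation procedures (made-up data, not the survey's).
    import numpy as np

    years = np.arange(2020, 2121)

    def fake_cdf(median_year, spread):
        """A fake respondent: logistic CDF over years (illustration only)."""
        return 1.0 / (1.0 + np.exp(-(years - median_year) / spread))

    respondents = [fake_cdf(2040, 5), fake_cdf(2070, 15), fake_cdf(2110, 10)]

    # Reading A: average the CDFs, then find the year the average first reaches 50%.
    mean_cdf = np.mean(respondents, axis=0)
    year_a = int(years[np.argmax(mean_cdf >= 0.5)])

    # Reading B: each respondent's own 50% year, then the median of those.
    # (With these symmetric toy CDFs each respondent's 50% year equals their mean
    # expected year; with skewed real answers the two readings generally differ.)
    individual_50pct = [int(years[np.argmax(c >= 0.5)]) for c in respondents]
    year_b = int(np.median(individual_50pct))

    print(f"Reading A (mean of CDFs crosses 50%)       : {year_a}")
    print(f"Reading B (median of individual 50% years) : {year_b}")
    ```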

  57. romeostevens says:

    Looking around at some of the previous papers on the subject, I see reference to the fuzzy notion of whether progress has accelerated, remained stable, or decelerated. I think it would be valuable to try to back out what mental process is occurring for experts when they report which of these they think is happening for various sub-problems. One potential factor that really jumps out at me is modularity. I think it is worth trying to think of clever ways this might be measured in various reference classes, and seeing if there’s anything that makes sense to apply to current ML efforts to get a more fine-grained sense of progress.

  58. Sniffnoy says:

    Scott, would you mind changing the paper link at the top to point to the arXiv abstract, rather than the PDF? Because e.g. the paper has multiple versions and people might want to read a specific version, etc. Thank you!

  59. Vermillion says:

    I have nothing to add except I was doing a bit of freelance consulting with a large aerospace company on how to potentially revamp their supply chain with machine learning and AI.

    I wanted to call the system Sky Net so bad.

  60. WarOnReasons says:

    Imagine that in the 1920s physicists, having realized the potential risks of mastering nuclear energy, started a public campaign aimed at banning or controlling future nuclear weapons. Was it possible for their efforts to succeed? Or would they just slow nuclear research in Western democracies and accelerate it in countries like the USSR and Germany?

    • cvxxcvcxbxvcbx says:

      I’d say there would be a less-than-30% chance of success, although it would depend on how many physicists were on board. Do you think that example is similar to AI risk in a relevant way?

    • Bugmaster says:

      It would be much worse than that, because the same equations that allow you to build nukes, also allow you to build iPhones. If physicists really did make a concerted effort to suppress the study of nuclear physics, and if that effort was somehow successful, the US would have long ago fallen to second- or even third-world status.

      • Ketil says:

        They kinda do this with nuclear power. (With the difference that “they” are not primarily the physicists.) Fear and doubts – but mostly fear – have caused the West to turn its back on the technology and the research. And the result is that countries like Russia, India, and China are now developing the industry, and soon also the technology, at a faster rate than the West.

        And for all the meme-sharing, slogan-shouting glorification of wind and solar from the so-called environmentalists here, and in spite of the trillions sunk into Western token policies (like the German Energiewende), nuclear is the only thing that actually has any substantial impact on anthropogenic warming.

        • Nornagest says:

          Solar’s actually starting to show promise. But power storage still sucks, and solar only works when there’s sunlight, so even if it became infinitely cheap we’d still be better off building a lot of baseload nuclear plants.

        • Yosarian2 says:

          Wind and solar have shown they can generate 25%-30% of the electricity needed in first world countries in a pretty economical way even without storage. That’s certainly worth doing, and storage is getting better.

          We should be expanding nuclear as well, but since building a new nuclear plant costs billions and may take a decade, it’s probably not going to happen on the scale we need fast enough to solve the problem by itself.

          • Douglas Knight says:

            They aren’t “pretty economical” in Germany.

          • What does “fast enough to solve the problem” mean? The less CO2 goes into the atmosphere the less the globe will warm. There isn’t some specific level that we have to achieve by some deadline.

      • Wency says:

        The point about iPhones might be right, but economies don’t work that way.

        First, the tech sector contributes 7% of GDP. If you threw in some other industries like aerospace, you could maybe get to 15%. Perhaps the U.S. could move from the top of the pack of major Western countries and closer to, say, the UK or France in GDP per capita, but falling behind in some areas of scientific research would not make the U.S. non-Western.

        Second, technology differentials aren’t sustainable. The world is not Sid Meier’s Civilization. If the U.S. banned AI research or nuclear research or whatever and other countries started doing it better to the point that it was economically significant, Americans would notice and copy it.

        The kind of alarmism that says “China is getting ahead of us in this technology! What will we do?” is mostly nonsense. They copied the West, and the West could just as easily copy them if it was worthwhile. If you have a technology and no one is working hard to copy it, then they either already have something comparable or it’s not currently of much economic value.

        • WarOnReasons says:

          They copied the West, and the West could just as easily copy them if it was worthwhile.

          This can easily work with ordinary technologies. But the ultimate game changers, like nuclear weapons and AI, are a different case. Had Germany developed nuclear weapons several years ahead of the Western democracies, several years later there might have been no Western democracies left to do the copying.

          • RandomName says:

            Is this really true though? The US got nuclear weapons several (~4) years before the USSR, but they still existed.

          • Jake says:

            Is this really true though? The US got nuclear weapons several (~4) years before the USSR, but they still existed.

            A preemptive nuclear strike on the USSR in 1945-9 was politically and morally, well, unthinkable. A nuclear Nazi Germany would have been decidedly less scrupulous.

          • sclmlw says:

            This line of thinking falls prey to hindsight bias: we know now that the nuclear weapons program succeeded, but at the time nobody actually knew that weaponizing nuclear theory would work.

            This is actually a major contributing factor to why the US got the Bomb first. Other countries knew about the theory, but put all their resources into radar – a major technological advance, and an important contributor to the war effort. The US, the only major nation with no significant land invasions to worry about, was the obvious choice for “major economic player able to devote significant man-hours to creating nuclear weapons.” Even so, it looked like a huge waste of money … until it suddenly wasn’t.

            What would happen after the war if the US hadn’t made nukes, though? Would the rest of the world eventually develop them? Again, this goes back to the information problem. What if the Utah researchers who thought they’d observed cold fusion actually had produced cold fusion? Even if the US successfully kept the knowledge a secret, other countries – now knowing it is possible – would immediately devote significant resources to figuring it out. This is what Russia did once they knew the US had cracked the problem, and it took them much less time/money to do it. (They also devoted significant resources to stealing the information, and were quite successful, at that.)

            But who is devoting Manhattan Project-level resources to cold fusion today? There’s an incentive difference between a scientist who says, “my theory says this is possible” and an actual demonstration of the principle at work.

      • currentlyinthelab says:

        What? There are large domains of inventions that can be invented without, effectively, having all the known equations and the full axiomatic buildup behind them. For numerous major inventions (the nuke, perhaps the laser) you would have a *very* difficult, if not impossible, time coming up with them, while still having access to known chemical fertilizers and major inventions such as the elevator. Now, preventing someone intelligent from independently rediscovering those “banned” equations from apparent gaps in knowledge would be difficult. But perhaps it’s possible to create logically consistent systems that fit most known observations, to the point where it’s close to impossible for people to discover anything off. We could predict solar eclipses off the geocentric models of planetary orbits, after all.

        Or, more simply: there is what can be built from the laws of Newton, versus what also needs the observations of Einstein, versus what also needs quantum observations. A good deal of the world of mechanics and electronics doesn’t need knowledge of relativity or of the quantum world. You can be blind to the inner workings of the scientific world and still create lovely structures and a structured civilization – just look at the greatest works and time periods of antiquity.

        Also: “second-world” and “third-world” status appear to be used here as relative terms of political and technological power, with some major social dysfunction thrown in. But being poor != high crime, or a place being a bad place to live. In what sense are you using the word “worse”?

      • bzium says:

        How are the equations that are used for building nukes also essential for building iPhones? I thought building nukes mostly involved detailed understanding of nuclear fission reactions, and maybe how to shape a conventional charge so it compresses a hollow sphere of plutonium evenly (I hear this is tricky, or at least used to be back in the forties).

        None of which sounds like a thing that would come up in smartphone manufacturing. (Insert joke about exploding batteries here)

        • Nornagest says:

          Bugmaster is probably talking about relativity, which is used both in nuclear physics and in geolocation services, but its importance to GPS is sometimes overstated. You could still get it to work with Newtonian mechanics, just not quite as well.
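
          (For a sense of scale, a rough back-of-the-envelope Python sketch using standard textbook constants, not anything from this thread: the two relativistic effects on a GPS satellite clock net out to a few tens of microseconds per day, a drift the system is designed to correct for.)

          # Approximate size of the relativistic clock corrections for a GPS
          # satellite relative to a clock on the ground (textbook values).
          G_M = 3.986004e14   # Earth's gravitational parameter, m^3/s^2
          c   = 2.998e8       # speed of light, m/s
          r_e = 6.371e6       # Earth's mean radius, m
          r_s = 2.6571e7      # GPS orbital radius (about 20,200 km altitude), m
          day = 86400.0       # seconds per day

          # Gravitational blueshift: the satellite clock runs fast relative to the ground.
          grav = (G_M / c**2) * (1 / r_e - 1 / r_s) * day   # about +45.7 microseconds/day

          # Time dilation from orbital velocity: the satellite clock runs slow.
          v = (G_M / r_s) ** 0.5                            # circular orbital speed, ~3.9 km/s
          vel = -(v**2 / (2 * c**2)) * day                  # about -7.2 microseconds/day

          print(f"gravitational: {grav * 1e6:+.1f} us/day")
          print(f"velocity:      {vel * 1e6:+.1f} us/day")
          print(f"net drift:     {(grav + vel) * 1e6:+.1f} us/day")   # roughly +38 us/day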

          His point is still accurate, though, in another way. The hard parts of building a nuke (at least a fission weapon) don’t have to do with the theory; that was an obstacle back in the Thirties, but these days it’s well understood and you can find most of it on the open web. (Certain details are conventionally left out, but they’re not impossible to find.) The hard parts are isolating the right isotopes and, to a lesser extent, doing some very precise machine work in materials that are pretty much all difficult to work with and highly dangerous. It’s more a bunch of interrelated chemistry and engineering problems than a physics problem, in other words, and good chemistry and engineering are very important to building things like the iPhone.

          • Ohforfs says:

            How is relativity related to the nuclear bomb?

          • John Schilling says:

            Well, Einstein became famous mostly for relativity, and Einstein signed the Szilard letter saying “go build an atom bomb because a Certified Smart Guy (way more famous for smartness than Szilard) says it’s possible”, so something something E=MC^2 something atom bombs work because of relativity.

            That’s really about it. Meitner, Hahn, Fermi et al didn’t actually need relativity to make their theories work at an empirical level, and empirical Meitnerian fission plus Fermi’s qualitative insight is all you need to predict atom bombs are possible and then make them work in practice.

    • tmk17 says:

      I think you’re doing the analogy wrong. When the physicists discovered nuclear fission, they basically immediately realized that this could make a bomb. Some were skeptical that you could gather enough weapons-grade uranium, but the general idea of a nuclear bomb was widely accepted.

      For AI it’s like the physicists discovered nuclear fission but don’t acknowledge that you can build a bomb. The Manhattan Project took great care not to set off a chain reaction accidentally. There is an account from Feynman where he describes how he explained to the engineers exactly how much uranium they could store in one place without it becoming dangerous. Feynman knew this stuff. The theoretical physicists had run all the numbers until they were very sure about the thing they were doing. Today’s AI researchers, on the other hand, just hope for the best.

      Nobody is saying we should stop all AI research, but it can’t be too much to ask not to destroy the world in the process. Sure, there could be malicious actors who try to build an AI without any kind of value alignment, but the answer to that can’t be to do the same!

      The problem of making sure no AI researchers defect is incredibly hard. But we won’t solve it by telling ourselves the problem doesn’t exist.

      • For AI it’s like the physicists discovered nuclear fission but don’t acknowledge that you can build a bomb.

        Have you considered that there might be a plausible mechanism in the one case, but not the other? Maybe AI alarmism is more analogous to anti-fluoridation.

        • HeelBearCub says:

          I think the more proper analogy might be to “nuclear weapon ignites uncontrolled reaction in the atmosphere”.

          Sure, Edward Teller was wrong about it, but he was right enough to cause Oppenheimer to ask for some calculations. Of course, AI research isn’t nearly as neatly constrained a problem as fission or fusion…

    • Scott Alexander says:

      If the horror of nuclear weapons had been better known, and there had been more consequences for using them, the US might have been more willing to threaten Japan with them rather than demonstrate them on Japan. That’s the most likely way I can think of for a public campaign to improve things.

      • WarOnReasons says:

        If the horror of nuclear weapons had been better known in advance, would it not simply have motivated the USSR, Germany, and Japan to invest huge resources in their development? Are there any adverse consequences that the Western democracies could have threatened them with? Preventing Hiroshima is certainly a noble goal. But could a public campaign aimed at preventing Hiroshima have inadvertently resulted in the Axis powers getting nuclear weapons first?

        Likewise, if we popularize the idea that AI has a real potential to become a doomsday weapon, wouldn’t that motivate the modern governments of China, Russia, and Iran to invest heavily in its development?

      • 4gravitons says:

        Physicists did exactly that. Look up the Franck Report. Franck and co-signers argued that if we deployed nuclear weapons without warning we’d inspire paranoia in other nations and start an arms race. And well…

        • Not exactly the same issue, but my conclusion from reading Teller’s memoirs was that a fair number of physicists were claiming an H-bomb was impossible in part because they didn’t want one to be built.

    • John Schilling says:

      Imagine that in 1920s physicists, having realized the potential risks of mastering nuclear energy, started a public campaign aimed at banning or controlling future nuclear weapons.

      They’d get the details wrong, and push for an international consensus against the development or use of war gasses.

      Was it possible for their efforts to succeed?

      They pretty much did, though the nerve gasses did eventually see use in a few obscure corners of the world.

      People recognized the potential risks of Mad Scientists coming up with Weapons of Mass Destruction for close to a century before we even invented the term “Weapon of Mass Destruction”. But, absent specific details, a general theory of Friendly Mad Scientists was intractably difficult. There were efforts at a general theory of World Peace and Disarmament, but those didn’t work much better.

      We didn’t know enough to do anything specifically useful until Meitner and Hahn pinned down nuclear fission in 1938, and that really wasn’t the right time for any sort of arms control beyond “how can I aim this thing better; I’ve got people who really need killing?”. But not long after WWII, we knew enough about the subject to develop and implement at least moderately effective measures against the proliferation of nuclear weapons.

      Analogies to AI risk left as an exercise for the student.

    • Robert Miles says:

      > Imagine that in 1920s physicists, having realized the potential risks of mastering nuclear energy, started a public campaign aimed at banning or controlling future nuclear weapons

      Weird, I made a YouTube video just the other day asking a similar question (especially around 4:35).

      https://www.youtube.com/watch?v=1wAgBaJgEsg