The Obligatory GPT-3 Post

I.

I would be failing my brand if I didn’t write something about GPT-3, but I’m not an expert and discussion is still in its early stages. Consider this a summary of some of the interesting questions I’ve heard posed elsewhere, especially comments by gwern and nostalgebraist. Both of them are smart people whom I broadly trust on AI issues, and both have done great work with GPT-2. Gwern has gotten it to write poetry, compose music, and even sort of play some chess; nostalgebraist has created nostalgebraist-autoresponder (a Tumblr written by GPT-2 trained on nostalgebraist’s own Tumblr output). The two of them disagree pretty strongly about the implications of GPT-3. I don’t know enough to resolve that disagreement, so this will be a kind of incoherent post, and will hopefully stimulate some more productive comments. So:

OpenAI has released a new paper, Language Models Are Few-Shot Learners, introducing GPT-3, the successor to the wildly-successful language-processing AI GPT-2.

GPT-3 doesn’t have any revolutionary new advances over its predecessor. It’s just much bigger. GPT-2 had 1.5 billion parameters. GPT-3 has 175 billion. The researchers involved are very open about how it’s the same thing but bigger. Their research goal was to test how GPT-like neural networks scale.

Before we get into the weeds, let’s get a quick gestalt impression of how GPT-3 does compared to GPT-2.

Here’s a sample of GPT-2 trying to write an article:

PROMPT: MILEY CYRUS WAS CAUGHT SHOPLIFTING FROM ABERCROMBIE AND FITCH ON HOLLYWOOD BOULEVARD TODAY

The 19-year-old singer was caught on camera being escorted out of the store by security guards.

The singer was wearing a black hoodie with the label ‘Blurred Lines’ on the front and ‘Fashion Police’ on the back

The singer was also wearing a pair of black-rimmed glasses, a black jacket, black jeans and black sandals.

She was carrying a pair of black and white striped gloves and a small black bag.

And here’s GPT-3 at the same task:

PROMPT: UNITED METHODISTS AGREE TO HISTORIC SPLIT – THOSE WHO OPPOSE GAY MARRIAGE WILL FORM THEIR OWN DENOMINATION

After two days of intense debate, the United Methodist Church has agreed to a historic split – one that is expected to end in the creation of a new denomination, one that will be “theologically and socially conservative,” according to the Washington Post. The majority of delegates attending the church’s annual General Conference in May voted to strengthen a ban on the ordination of LGBTQ clergy and to write new rules that will “discipline” clergy who officiate at same-sex weddings. But those who opposed these measures have a new plan: They say they will form a separate denomination by 2020, calling their church the Christian Methodist denomination.

The Post notes that the denomination, which claims 12.5 million members, was in the early 20th century the “largest Protestant denomination in the US”, but that it has been shrinking in recent decades. The new split will be the second in the church’s history. The first occurred in 1968, when roughly 10% of the denomination left to form the Evangelical United Brethren Church. The Post notes that the proposed split “comes at a critical time for the church, which has been losing members for years,” which has been “pushed toward the brink of a schism over the role of LGBTQ people in the church.” Gay marriage is not the only issue that has divided the church. In 2016, the denomination was split over ordination of transgender clergy, with the North Pacific regional conference voting to ban them from serving as clergy, and the South Pacific regional conference voting to allow them.

In case you’re wondering: no, the Methodist Church did not really split from the Evangelical United Brethren Church in 1968; it merged with it that year. GPT-3 is doing what I used to do on essay questions – throw out a bunch of half-remembered names and dates and hope nobody looks too closely at the exact relations.

These kinds of comparisons are inherently fuzzy, since both GPTs have produced a lot of material and it’s easy to cheat by taking better or worse samples of one or the other. You can see more GPT-2 samples here and more GPT-3 samples here.

Or compare some of GPT-2’s poetry:

The gods are they who came to earth
And set the seas ablaze with gold.
There is a breeze upon the sea,
A sea of summer in its folds,
A salt, enchanted breeze that mocks
The scents of life, from far away
Comes slumbrous, sad, and quaint, and quaint.
The mother of the gods, that day,
With mortal feet and sweet voice speaks,
And smiles, and speaks to men: “My Sweet,
I shall not weary of thy pain.”

…Let me drink of the wine of pain
And think upon the agonies of hope,
And of the blessed Giver of all good things;
For, man for man, mine is the deepest love
That sorrow takes upon the humblest soul;
But who hath learned how sorrow turns to gall
The places where my feet have trod before.

…to some of GPT-3’s:

I must have shadows on the way
If I am to walk I must have
Each step taken slowly and alone
To have it ready made

And I must think in lines of grey
Must have dim thoughts to be my guide
Must look on blue and green
And never let my eye forget
That color is my friend
And purple must surround me too

The yellow of the sun is more more
Intusive than the bluish snow
That falls on all of us. I must have
Grey thoughts and blue thoughts walk with me
If I am to go away at all.

Ignore the style differences – the GPT-2 instance was trained on 18th-century verse, GPT-3 on Wallace Stevens. I notice that the first poem, while flowery, is kind of incoherent. The second has more of a structure and a theme that it maintains throughout. I’m not sure it really symbolizes anything interesting, but I’ve never been sure of that with real Wallace Stevens either, so good work.

(you can read more GPT-2 poetry here. There’s not much more public GPT-3 poetry – you can read one or two more samples in the paper linked above).

Enough writing. What about math? Last year, I posted the following excerpt from GPT-2:

PROMPT: MY 10 FAVORITE ANIMALS ARE:

My ten favorite animals are:
1. Zebras with a white scar on the back
2. Insiduous spiders and octopus
3. Frog with large leaves, hopefully black
4. Cockatiel with scales
5. Razorbill with wings hanging about 4 inches from one’s face and a heart tattoo on a frog
3. Cockatric interlocking tetrabods that can be blind, cut, and eaten raw:
4. Black and white desert crocodiles living in sunlight
5. Zebra and many other pea bugs

Despite the obvious flaws in this piece, I was impressed. GPT-2 was clearly trying to make a numbered list, and almost kind of getting it right! It counted to 5 successfully! Remember, this is a text prediction engine that didn’t necessarily need to have any concept of numbers. But it still kind of counted to 5! I wrote:

Imagine you prompted the model with “What is one plus one?” I actually don’t know how it would do on this problem. I’m guessing it would answer “two”, just because the question probably appeared a bunch of times in its training data.

Now imagine you prompted it with “What is four thousand and eight plus two thousand and six?” or some other long problem that probably didn’t occur exactly in its training data. I predict it would fail, because this model can’t count past five without making mistakes. But I imagine a very similar program, given a thousand times more training data and computational resources, would succeed. It would notice a pattern in sentences including the word “plus” or otherwise describing sums of numbers, it would figure out that pattern, and it would end up able to do simple math. I don’t think this is too much of a stretch given that GPT-2 learned to count to five and acronymize words and so on.

I said “a very similar program, given a thousand times more training data and computational resources, would succeed [at adding four digit numbers]”. Well, GPT-3 is a very similar program with a hundred times more computational resources, and…it can add four-digit numbers! At least sometimes, which is better than GPT-2’s “none of the time”.

II.

In fact, let’s take a closer look at GPT-3’s math performance.

The 1.3 billion parameter model, equivalent to GPT-2, could get two-digit addition problems right less than 5% of the time – little better than chance. But for whatever reason, once the model hit 13 billion parameters, its addition abilities improved to 60% – the equivalent of a D student. At 175 billion parameters, it gets an A+.

What does it mean for an AI to be able to do addition, but only inconsistently? For four digit numbers, but not five digit numbers? Doesn’t it either understand addition, or not?

Maybe it’s cheating? Maybe there were so many addition problems in its dataset that it just memorized all of them? I don’t think this is the answer. There are about 80 million possible four-digit addition problems; it seems unlikely that GPT-3 saw most of them. Also, if it were memorizing its training data, it should have aced the roughly 8,000 possible two-digit multiplication problems, but it only has about a 25% success rate on those. So it can’t be using a lookup table.
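
For what it’s worth, here’s the back-of-the-envelope counting behind that argument – just a sanity check on my own numbers, not anything from the paper:

```python
# Ordered pairs of four-digit numbers (1000-9999): far too many for the
# training data to have plausibly contained them all.
four_digit_sums = 9000 * 9000      # 81,000,000

# Ordered pairs of two-digit numbers (10-99): few enough that a pure lookup
# table should ace them, which GPT-3 doesn't.
two_digit_products = 90 * 90       # 8,100

print(four_digit_sums, two_digit_products)
```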

Maybe it’s having trouble locating addition rather than doing addition? (thanks to nostalgebraist for this framing). This sort of seems like the lesson of Table 3.9:

“Zero-shot” means you just type in “20 + 20 = ?”. “One-shot” means you give it an example first: “10 + 10 = 20. 20 + 20 = ?” “Few-shot” means you give it as many examples as it can take. Even the largest and best model does only a mediocre job on the zero-shot task, but it does better one-shot and best of all few-shot. So it seems like if you remind it what addition is a couple of times before solving an addition problem, it does better. This suggests that there is a working model of addition somewhere within the bowels of this 175 billion parameter monster, but it has a hard time drawing it out for any particular task. You need to tell it “addition”, “we’re doing addition”, “come on now, do some addition!” up to fifty times before it will actually deploy its addition model for these problems, instead of some other model. Maybe if you did this five hundred or five thousand times, it would excel at the problems it can’t do now, like adding five-digit numbers. But why should this be so hard? The plus sign almost always means addition. “20 + 20 = ?” is not some inscrutable hieroglyphic text. It basically always means the same thing. Shouldn’t this be easy?
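
To make the zero-shot/one-shot/few-shot distinction concrete, here is roughly what the prompts look like (a simplified sketch – the generate() call at the end is a hypothetical stand-in for however you’d query the model, since GPT-3 itself sits behind OpenAI’s API):

```python
problem = "48 + 76 ="

# Zero-shot: just the problem, no reminders of what task this is.
zero_shot = problem

# One-shot: one worked example first.
one_shot = "23 + 12 = 35\n" + problem

# Few-shot: as many worked examples as fit in the context window.
examples = [(13, 29), (52, 47), (81, 6), (35, 35)]
few_shot = "\n".join(f"{a} + {b} = {a + b}" for a, b in examples) + "\n" + problem

# completion = generate(few_shot)  # hypothetical call to the language model
```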

When I prompt GPT-2 with addition problems, the most common failure mode is getting an answer that isn’t a number. Often it’s a few paragraphs of text that look like they came from a math textbook. It feels like it’s been able to locate the problem as far as “you want the kind of thing in math textbooks”, but not as far as “you want the answer to the exact math problem you are giving me”. This is a surprising issue to have, but so far AIs have been nothing if not surprising. Imagine telling Marvin Minsky or someone that an AI smart enough to write decent poetry would not necessarily be smart enough to know that, when asked “325 + 504”, we wanted a numerical response!

Or maybe that’s not it. Maybe it has trouble getting math problems right consistently for the same reason I have trouble with this. In fact, GPT-3’s performance is very similar to mine. I can also add two digit numbers in my head with near-100% accuracy, get worse as we go to three digit numbers, and make no guarantees at all about four-digit. I also find multiplying two-digit numbers in my head much harder than adding those same numbers. What’s my excuse? Do I understand addition, or not? I used to assume my problems came from limited short-term memory, or from neural noise. But GPT-3 shouldn’t have either of those issues. Should I feel a deep kinship with GPT-3? Are we both minds heavily optimized for writing, forced by a cruel world to sometimes do math problems? I don’t know.

[EDIT: an alert reader points out that when GPT-3 fails at addition problems, it fails in human-like ways – for example, forgetting to carry a 1.]

III.

GPT-3 is, fundamentally, an attempt to investigate scaling laws in neural networks. That is, if you start with a good neural network, and make it ten times bigger, does it get smarter? How much smarter? Ten times smarter? Can you keep doing this forever until it’s infinitely smart or you run out of computers, whichever comes first?

So far the scaling looks logarithmic – a consistent multiplication of parameter number produces a consistent gain on the benchmarks.
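
If you want that in symbols, the rough shape is something like the function below, where multiplying the parameter count by a fixed factor adds the same increment to the benchmark score no matter where you start (a and b are illustrative fit constants, not numbers from the paper):

```python
import math

def benchmark_score(n_params: float, a: float, b: float) -> float:
    """Illustrative logarithmic scaling curve: the score grows by a constant
    amount each time n_params is multiplied by a constant factor."""
    return a + b * math.log(n_params)
```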

Does that mean it really is all about model size? Should something even bigger than GPT-3 be better still, until eventually we have things that can do all of this stuff arbitrarily well without any new advances?

This is where my sources diverge. Gwern says yes, probably, and points to years of falsified predictions where people said that scaling might have worked so far, but definitely wouldn’t work past this point. Nostalgebraist says maybe not, and points to decreasing returns of GPT-3’s extra power on certain benchmarks (see Appendix H) and to this OpenAI paper, which he interprets as showing that scaling should break down somewhere around or just slightly past where GPT-3 is. If he’s right, GPT-3 might be around the best that you can do just by making GPT-like things bigger and bigger. He also points out that although GPT-3 is impressive as a general-purpose reasoner that has taught itself things without being specifically optimized to learn them, it’s often worse than task-specifically-trained AIs at various specific language tasks, so we shouldn’t get too excited about it being close to superintelligence or anything. I guess in retrospect this is obvious – it’s cool that it learned how to add four-digit numbers, but calculators have been around a long time and can add much longer numbers than that.

If the scaling laws don’t break down, what then?

GPT-3 is very big, but it’s not pushing the limits of how big an AI it’s possible to make. If someone rich and important like Google wanted to make a much bigger GPT, they could do it – which, if the scaling laws hold, is a little terrifying.

Does “terrifying” sound weirdly alarmist here? I think the argument is something like this. In February, we watched as the number of US coronavirus cases went from 10ish to 50ish to 100ish over the space of a few weeks. We didn’t panic, because 100ish was still a very low number of coronavirus cases. In retrospect, we should have panicked, because the number was constantly increasing, showed no signs of stopping, and simple linear extrapolation suggested it would be somewhere scary very soon. After the number of coronavirus cases crossed 100,000 and 1,000,000 at exactly the time we could have predicted from the original curves, we all told ourselves we definitely wouldn’t be making that exact same mistake again.

It’s always possible that the next AI will be the one where the scaling curves break and it stops being easy to make AIs smarter just by giving them more computers. But unless something surprising like that saves us, we should assume GPT-like things will become much more powerful very quickly.

What would much more powerful GPT-like things look like? They can already write some forms of text at a near-human level (in the paper above, the researchers asked humans to identify whether a given news article had been written by a human reporter or by GPT-3; the humans got it right 52% of the time, barely better than chance).

So one very conservative assumption would be that a smarter GPT would do better at various arcane language benchmarks, but otherwise not be much more interesting – once it can write text at a human level, that’s it.

Could it do more radical things like write proofs or generate scientific advances? After all, if you feed it thousands of proofs, and then prompt it with a theorem to be proven, that’s a text prediction task. If you feed it physics textbooks, and prompt it with “and the Theory of Everything is…”, that’s also a text prediction task. I realize these are wild conjectures, but the last time I made a wild conjecture, it was “maybe you can learn addition, because that’s a text prediction task”, and that one came true within two years. But my guess is still that this won’t happen in a meaningful way anytime soon. GPT-3 is much better at writing coherent-sounding text than it is at any kind of logical reasoning; remember, it still can’t add 5-digit numbers very well, get its Methodist history right, or consistently figure out that a plus sign means “add things”. Yes, it can do simple addition, but it has to use supercomputer-level resources to do so – it’s so inefficient that it’s hard to imagine even very large scaling getting it anywhere useful. At most, maybe a high-level GPT could write a plausible-sounding Theory Of Everything that uses physics terms in a vaguely coherent way, but that falls apart when a real physicist examines it.

Probably we can be pretty sure it won’t take over the world? I have a hard time figuring out how to turn world conquest into a text prediction task. It could probably imitate a human writing a plausible-sounding plan to take over the world, but it couldn’t implement such a plan (and would have no desire to do so).

For me the scary part isn’t the much larger GPT we’ll probably have in a few years. It’s the discovery that even very complicated AIs get smarter as they get bigger. If someone ever invented an AI that did do more than text prediction, it would have a pretty fast takeoff, going from toy to superintelligence in just a few years.

Speaking of which – can anything based on GPT-like principles ever produce superintelligent output? How would this happen? If it’s trying to mimic what a human can write, then no matter how intelligent it is “under the hood”, all that intelligence will only get applied to becoming better and better at predicting what kind of dumb stuff a normal-intelligence human would say. In a sense, solving the Theory of Everything would be a failure at its primary task. No human writer would end the sentence “the Theory of Everything is…” with anything other than “currently unknown and very hard to figure out”.

But if our own brains are also prediction engines, how do we ever create things smarter and better than the ones we grew up with? I can imagine scientific theories being part of our predictive model rather than an output of it – we use the theory of gravity to predict how things will fall. But what about new forms of art? What about thoughts that have never been thought before?

And how many parameters does the adult human brain have? The responsible answer is that brain function doesn’t map perfectly to neural net function, and even if it did we would have no idea how to even begin to make this calculation. The irresponsible answer is a hundred trillion. That’s a big number. But at the current rate of GPT progress, a GPT will have that same number of parameters somewhere between GPT-4 and GPT-5. Given the speed at which OpenAI works, that should happen about two years from now.
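
The arithmetic behind that guess, using the article’s own round numbers (obviously an extrapolation, not a forecast of what OpenAI will actually build):

```python
gpt2_params = 1.5e9
gpt3_params = 175e9
growth_per_generation = gpt3_params / gpt2_params    # roughly 117x

gpt4_guess = gpt3_params * growth_per_generation     # ~2.0e13 parameters
gpt5_guess = gpt4_guess * growth_per_generation      # ~2.4e15 parameters

brain_estimate = 1e14   # the "irresponsible" hundred-trillion figure
print(gpt4_guess < brain_estimate < gpt5_guess)      # True: between GPT-4 and GPT-5
```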

I am definitely not predicting that a GPT with enough parameters will be able to do everything a human does. But I’m really interested to see what it can do. And we’ll find out soon.


263 Responses to The Obligatory GPT-3 Post

  1. gwern says:

    While we’re on the topic: Tensorfork trained GPT-2-1.5b on our poetry corpus and generated 1 million words. We’re crowdsourcing a read-through of it to find the best samples, if anyone wants to help: https://docs.google.com/document/d/1MhA3M5ucBD7ZXcWk57_MKZ5jEgPX6_YiKye_EFP-adg/edit It’s not GPT-3, I admit, but a lot of the samples are still pretty good!

    • carter says:

      These kinds of exercises where human beings read through a large corpus of machine-generated text to pull out the best samples always remind me a little bit of the Dasher system: http://www.inference.org.uk/dasher/DasherSummary.html

      Dasher was designed in the late 90s/early 2000s as a method of text entry that doesn’t require a keyboard or very fine movements. It essentially works by using a joystick (or accelerometer, or eye-tracker, or computer mouse) to navigate through a which-character-to-enter-next decision tree. Nodes/letters that a statistical model thinks are more likely are larger so they require less effort for the user to select.

      The optimist in me wonders if anyone’s tried to hook Dasher up to a modern ML model to make its suggestions more accurate. Not knowing much about how it works internally, is it easy to ask GPT-2 “give me all the possibilities for the next letter in this string, with probability estimates for each one”? (And then, once the user’s picked a letter to enter, “give me all the possibilities/probabilities for the letter after that one”?)

      The cynic in me wonders if picking ML-model samples is fundamentally the same thing as using Dasher — that the existence of any particular GPT-2 quote means only that the user was willing to go through enough of the model’s guesses to get to the one they wanted.

      • randallsquared says:

        I dunno if you’ve used gmail recently, but it does that already. You can start a sentence and then just press Return repeatedly to get a sensical rendition of what you were going to say. It isn’t always in the tone you would have preferred, but more and more at work I find myself accepting that because it’s faster.

        • Greg says:

          I have used Gmail recently, and no, doing that produces nonsense.

          • carter says:

            Yeah, and I guess that’s the difference between Dasher and a lot of today’s text-prediction systems: Gmail gives me a small number of take-it-or-leave-it suggestions that aren’t always good, while Dasher shows me all possible sentences weighted by how likely they are. (You can type out any sequence of characters you want in Dasher, it’s just that “Zdghj” will take longer than “Hello” because you’ll have to zoom in on the low-probability part of the tree for each letter.) I thought that this might’ve been because of some fundamental difference in how their algorithms work, but sketerpot suggests below that it’s more of a user-interface thing.

      • Act_II says:

        The cynic in me wonders if picking ML-model samples is fundamentally the same thing as using Dasher — that the existence of any particular GPT-2 quote means only that the user was willing to go through enough of the model’s guesses to get to the one they wanted.

        This is basically how I feel about much of the praise of GPT-2. Sure, it produces (kind of) coherent output, but it also produces a ton of junk. And importantly, it can’t distinguish between good output and junk. It takes a human to curate the most impressive-looking examples.

        Like that GPT-2 chess post a while back. GPT-2 wasn’t playing chess. GPT-2 was producing text formatted like a chess move, and a curator-bot was picking out only the valid moves. Sure, it’s kind of neat that it can reproduce the format, but if anything in there was playing chess, it was the curator-bot.

        • Lodore says:

          And importantly, it can’t distinguish between good output and junk. It takes a human to curate the most impressive-looking examples.

          It would be fairly trivial (which isn’t to say easy in practical terms) to train a second model on the binary garbage vs not-garbage classifications of human raters and add it as a layer to a GPT model. That way, the crap gets filtered before output. Possibly that’s an ambition of the crowd-sourcing project announced by Gwern above.

          • viVI_IViv says:

            It would be fairly trivial (which isn’t to say easy in practical terms) to train a second model on the binary garbage vs not-garbage classifications of human raters and add it as a layer to a GPT model.

            How many annotations would you need to make it work?

          • Lodore says:

            How many annotations would you need to make it work?

            It’s hard to say without trying, but I’ve previously managed to get decent F1 scores with surprisingly small amounts of annotated text data using one of the BERT models – on the order of thousands of examples.

            Either way, there are useful tools like Prodigy that help you source annotations only for samples for which there is high model uncertainty. It’s a great tool; I highly recommend it.

          • gwern says:

            This is generally called a ‘ranker’. It’s perfectly doable and has been done. You can do it using the model itself (eg Meena generates n possible completions with top-k and then selects the one with the highest likelihood, which avoids the pathologies of maximizing likelihood token by token), you can use another model (there was one Reddit example where they ranked using BERT, I guess under the logic that BERT would make different errors than GPT-2), you can use a model trained by human comparisons (this is sort of how GPT-2 preference learning works, and we’re interested in applying it to GANs to improve D ranking by using human comparisons of anime images)…

            I wasn’t intending to use the crowdsourcing there. The crowdsourcing is more about just finding the best samples for lulz, and getting an idea of how much screening is necessary. The ratio of rejects:successes is a pretty natural way to measure the quality of a model. If a human poet has to throw out 1 poem for every 1 reasonably-decent poem, how many GPT-2-1.5b poems do we have to throw out to get a poem? For char-RNN or for GPT-2-117M, the answer is something like 1000:1 or 100:1; for GPT-2-1.5b, it’s closer to 20:1 IMO; and going by the GPT-3 samples, it may be more like 5:1 or less… I hope to get access to the API and try out poems more extensively.
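
            For concreteness, here’s a minimal sketch of the simplest kind of ranker – the Meena-style trick of sampling several completions and keeping the one the model itself scores as most likely – using the public GPT-2 weights via the Hugging Face transformers library. The prompt and sampling settings are made up for illustration:

            ```python
            import torch
            from transformers import GPT2LMHeadModel, GPT2TokenizerFast

            tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
            model = GPT2LMHeadModel.from_pretrained("gpt2")
            model.eval()

            prompt = "The gods are they who came to earth"
            inputs = tokenizer(prompt, return_tensors="pt")

            with torch.no_grad():
                # Sample several candidate completions with top-k sampling.
                candidates = model.generate(
                    **inputs,
                    do_sample=True,
                    top_k=40,
                    max_length=60,
                    num_return_sequences=8,
                    pad_token_id=tokenizer.eos_token_id,
                )

            def mean_logprob(seq):
                # Average per-token log-probability the model assigns to the sequence.
                with torch.no_grad():
                    out = model(seq.unsqueeze(0), labels=seq.unsqueeze(0))
                return -out.loss.item()  # loss is mean negative log-likelihood

            best = max(candidates, key=mean_logprob)
            print(tokenizer.decode(best, skip_special_tokens=True))
            ```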

      • sketerpot says:

        The optimist in me wonders if anyone’s tried to hook Dasher up to a modern ML model to make its suggestions more accurate. Not knowing much about how it works internally, is it easy to ask GPT-2 “give me all the possibilities for the next letter in this string, with probability estimates for each one”? (And then, once the user’s picked a letter to enter, “give me all the possibilities/probabilities for the letter after that one”?)

        It shouldn’t be too hard, at least conceptually. GPT-2 is a language model that tries to answer the question “Given the past n-1 words/letters/whatever, what’s the probability distribution for the nth one?” This is the same kind of function that Dasher uses.
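
        Concretely, that query is only a few lines with the released GPT-2 weights via the Hugging Face transformers library – with the caveat that it returns probabilities over BPE tokens rather than single letters, which a Dasher-style interface would have to paper over:

        ```python
        import torch
        from transformers import GPT2LMHeadModel, GPT2TokenizerFast

        tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()

        text = "I went to the grocery"
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits          # (1, seq_len, vocab_size)

        # Probability distribution over the *next* token, given the text so far.
        probs = torch.softmax(logits[0, -1], dim=-1)
        top = torch.topk(probs, 5)
        for p, idx in zip(top.values, top.indices):
            print(repr(tokenizer.decode([int(idx)])), round(float(p), 4))
        ```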

        The cynic in me wonders if picking ML-model samples is fundamentally the same thing as using Dasher — that the existence of any particular GPT-2 quote means only that the user was willing to go through enough of the model’s guesses to get to the one they wanted.

        If the output from GPT-2 were mostly garbage, you’d have to either select from very short outputs or look through a lot of samples in order to get something good. A vanishingly small percentage of the books in the Library of Babel are worth reading, and you won’t find them just by browsing randomly through the shelves.

        • Act_II says:

          If the output from GPT-2 were mostly garbage, you’d have to either select from very short outputs or look through a lot of samples in order to get something good.

          This doesn’t follow at all. Nobody’s saying GPT-2 output is literally random like the Library of Babel. It can take smart shortcuts to produce a tiny fraction of the Library of Babel and still have that fraction be “mostly garbage”.

          Editing because this comment didn’t come together the way I thought it did when I wrote it:
          “Mostly garbage” can mean 99.9999999999% garbage or 50.1% garbage or anywhere in between. The point is that seeing a small number of human-selected examples of good output tells you very little about the overall ratio of garbage to not-garbage. It establishes a vague upper bound, perhaps, but it’s far inferior to actual data.

          • sketerpot says:

            I was going to make a point that, yeah, there’s a continuum between random nonsense and stuff that looks like human speech, and the further along that continuum you move, the less effort is required to find really good stuff among the garbage, and that (since you only need a few crazy parts to render some text unimpressive) fairly small differences in text-generation quality can have superlinear effects on the number of text samples you need to look through in order to get good-looking output. Therefore if you see a bunch of long text samples that look good, that’s a sign that you either have a pretty-darn-good text generation model or you’ve got people with unrealistic amounts of time on their hands.

            … But then I deleted the partly-written paragraph and replaced it with a snappy Library of Babel reference because it seemed too obvious and I didn’t want to belabor the point. It obviously wasn’t obvious, so mea culpa.

          • Murphy says:

            I remember a while back trying out an article generator.

            give it a title and a publication to copy the style and it would churn out an article to fit.

            And I was deeply impressed. I gave it anti-vaxxer-type article headlines but changed the thing blamed.

            “Researchers find green vegetables cause autism” gave a remarkably coherent article. It even made up some vague babble about chloroplasts and protein interactions that would fit right in to most popsci articles, and it made up some imaginary research centers where the research had been done and imaginary numbers of participants.

            There was domain specific stuff that made no sense in reality, but it was way more coherent than a lot of human-written popsci stuff.

      • Yosarian2 says:

        Having played around with them a little, it seems like a majority of GPT-2 responses are at least kind of coherent. It’s not a matter of picking out the one reasonable sounding response out of a mountain of gibberish.

      • keaswaran says:

        This reminds me of a conversation I had with my 4 year old nephew over Zoom the other day. He was trying to tell me something about which animal he likes from the new toys he had, but he was stumbling over one of the words, and I completed the sentence. He said, “how did you know what I was going to say?”

        If we had a good enough version of this thing, would that be what we’re all feeling as we are typing?

    • dogiv says:

      Additional poetry by GPT-3:
      Sample 1236:
      Think only this:
      if my breath,
      my thought, my word
      the current I am
      could touch the world
      for just an instant
      it would ripple outward
      and someone else
      would catch the pattern
      and rise and smile
      and carry on
      and for an instant,
      someone else would feel
      better.
      Love,
      -Jamey

      Sample 1252:
      Why are you looking at the stars
      when you’ve got a wonderful person in front of you?
      They are beautiful.
      They are a part of you.
      But they’re stars.
      Starry eyed, you watch them;
      Sparkling with energy.
      You wish on them,
      Dream on them,
      Dreams you know you can’t fulfill,
      not from here.
      Why are you looking at the stars?
      You’re in the middle of the most beautiful thing
      you’ve ever seen.
      Why are you looking at the stars?
      Why are you looking at the stars?

  2. Sniffnoy says:

    The thing that I notice about the Methodist Church article isn’t that it gets the history wrong — I wouldn’t know anything about that — but that it doesn’t quite make sense. It almost does, but if you pay attention, you’ll notice that it’s inconsistent as to whether it’s the branch that split off that opposes gay marriage, or the branch they’re splitting off from. The headline and first sentence say that it’s the branch that’s splitting off that opposes gay marriage. But the next two sentences say that it’s the branch that’s being split off from that opposes gay marriage.

    If that problem were solved, I don’t know how anyone without knowledge of the history would know it was written by a machine. And I can easily imagine missing that while casually skimming.

    • Etoile says:

      I missed this part.

    • toastengineer says:

      I almost wonder if that’s more of a strike against us than in favor of GPT-3. Naively, you’d think “this article directly contradicts itself on the core facts it was written to communicate several times” would be the sort of thing people notice. Maybe we’re all just really bad at critical thinking.

      • Sniffnoy says:

        I mean, if you’re not involved, all you likely care about was that there was a split over gay marriage. Which one was the branch that split off is a detail you’re likely going to gloss over. It’s not like one part said it was splitting and another part said it wasn’t. I’ve seen plenty of mistakes worse than this in actual things people have written.

      • Douglas Knight says:

        cf Humans Who Are Not Concentrating Are Not General Intelligences from the ancient days of GPT-2.

        I posted this as a response to Sniffnoy, but it’s even closer to your comment.

      • wishmaster says:

        I noticed right off the bat. It actually really bugged me, lol. But I’m one of those people that just loves debates and reads a lot of philosophy – I’m kind of trained to notice these things.

      • Bugmaster says:

        Maybe we’re all just really bad at critical thinking.

        Yeah, there’s no “maybe” about it, sadly.

        On the other hand, we are really good at pattern matching, to the point where we can instantly match a circle with two dots and an arc to a smiling human face.

        Poetry is a genre that explicitly exercises our pattern-matching abilities. Good poetry is not supposed to make literal sense; instead, it’s supposed to use metaphors and allegories and other tools to prime the reader’s intuition engine into generating all kinds of wild and emotionally impactful associations. This is why GPT is so good (relatively speaking) at poetry: it can generate thousands of semi-coherent pieces of text very quickly. Among those thousands of poems, one or two might grab somebody’s imagination, enabling him to extrapolate the textual equivalent of a smiley face icon into a whole range of emotional responses.

        Science is almost the exact opposite of that. Statements like “F=ma” are exact. If you prefer poem A, and I prefer poem B, we might argue about it ad infinitum. But if you say “F=ma” and I say “F = m/a”, then one of us is indisputably wrong, and no amount of pattern-matching can ever fix this. That’s why using GPT to do science is IMO doomed to be an exercise in frustration.

        • Garrett says:

          > That’s why using GPT to do science is IMO doomed to be an exercise in frustration.

          This is why you need to connect it to various mechanisms to manipulate the physical world, so that it can both make predictions and perform the required experiments to verify them. We’ll have lots of paperclips – er, scientific knowledge – in no time!

          • Simon_Jester says:

            The problem is that it’s already a paperclip maximizer.

            Essays written in response to prompts, that convincingly impersonate human writing, are the paperclips.

            Hooking it up to a battery of scientific instruments won’t make more paperclips very effectively, because most human writing doesn’t resemble an accurate AND NOVEL scientific paper.

      • jonm says:

        I think it’s more due to us following Grice’s maxims. We expect people communicating with us to be trying to communicate some real underlying thing. An AI can exploit that assumption to make us think text sounds real even though there’s no underlying understanding.

        This is also a challenge when marking essays by students who haven’t really understood the reading. The essay seems reasonable on a surface skim, but it’s only when you dissect it sentence by sentence that you see there’s nothing really there.

    • KieferO says:

      I noticed this, but I was looking for exactly this sort of inconsistent internal model error that I could spot without much domain knowledge. I doubt that I would have spotted it if I hadn’t been looking for it, and I doubt that I would have known to look for it if I wasn’t somewhat familiar with GPT-2’s failings.

      • rjk says:

        I noticed it without actively looking for it, and without being particularly familiar with GPT-2 – the story just obviously doesn’t make any sense to me. However, I am unusually bothered by contradictions.

        Generally, I deal with this in everyday life by being charitable. If I read something that doesn’t quite make sense, I’ll tend to resolve the ambiguity by picking the most likely meaning and assume that this is what the author intended. Sometimes I do this without even noticing that I’m doing it, but generally it raises at least a small flag in my awareness.

        In this case, I think I had two reasons not to be charitable: firstly, the tone of the writing is authoritative, which leaves less of an excuse for ambiguities, and secondly the author is not identified and so I find the ambiguity harder to resolve by the application of theory of mind.

        This makes me wonder: are the people who didn’t notice it simply being more charitable, or are they starting out with a lower expectation that things ought to make sense in general, or are they just not parsing the text in quite as much detail?

        • KieferO says:

          I’m quite confident that if a researcher asked me to guess whether this passage was written by a human or not I would not fare better than chance unless a substantial amount of money or ego were riding on the result. If writing down “an answer, any answer” was keeping me from something even slightly important, I don’t think that I would make myself care enough to notice the small discrepancies.

          Also, if I were tasked to write on this prompt, I’m pretty sure that I wouldn’t care enough to write well enough to distinguish myself from the robot. I feel like sarcastic detachment is not the best attitude to a Turing test but given where the stakes are, it’s the best I can do.

    • StellaAthena says:

      Excellent job spotting this. I was looking for it, as I am an AI researcher and I’m aware that algorithms like this tend to struggle with consistent referents, but I wonder how many people not looking for it spotted it.

      As I said, this is a general problem for text generating algorithms. If you played the AI-choose-your-own-adventure game that came out about a year ago you may have noticed that it gave you the ability to apply symbolic labels to specific people or items, using that symbol to reference them in the future. This vastly enhanced its narrative coherence in my experience.

      Another good example of this is Google Home. If you ask Google Home “what’s the weather in Florida?” and then follow it up with “how about in Georgia?” it can typically answer the second question correctly. This is quite impressive and (at least when I last looked into this) not something its competitors can do. However, I’ve noticed that my Google Home starts to lose the thread if I ask it four or five follow-up questions.

      • Sorghum says:

        Also, I just checked, and my iPhone not only understands “what about Georgia?” but correctly guesses which Georgia I was asking about, based on whether I’d just asked for the weather in North Carolina or Ukraine.

      • keaswaran says:

        I definitely didn’t notice, and was instead fixating on the dates and the quotes and considering googling them to find out if it was copying text it had already seen on the internet.

        • dogiv says:

          It doesn’t copy verbatim, but it is mashing up various other articles about the same event. Personally I consider this to be contaminated training data. A much better test would be to prompt it with headlines about things that occurred after its training data was frozen.

    • Sorghum says:

      GPT-3 is trained to come up with plausible-sounding text, without regard for dull things like truth. It’s not hard to imagine, though, a fact-checking bot that could extract things that look like statements of fact and check them against authoritative-sounding sources.

      Right now I can google “did the Methodist church split from the United Brethren Church in 1968?” and I get a bunch of articles on the merger; even a simple word embedding model knows enough to know that merge and split are opposites, so I can imagine a fact check bot could have sufficient confidence to mark this sentence as false and go tell GPT-3 to come up with another one. It’s been a decade since computers could outperform humans in Jeopardy, so a fact-check bot with reasonable precision doesn’t sound that hard.

      (The culture war implications of a bot that would automatically “fact check” everything you attempt to write or read on the internet are left as an exercise for the reader.)

    • Eri says:

      As a data point, I’ve noticed that, but that is possibly affected by the fact that I’m curious about the topic.

    • bulb5 says:

      As a practicing United Methodist who’s been following this issue closely, it also seemed a little off to me. But the article actually comes pretty close to reality: in the May 2019 General Conference, the “Traditionalists” indeed passed measures to stop gay marriage, but in January 2020, a proposal was endorsed by all sides for the traditionalists to leave the denomination and start their own. But that last sentence of the first paragraph gets it reversed.

      It’s a very weird feeling to see anything about the Methodist church on this site, though.

    • Purplehermann says:

      I assumed that was because a lot of people on the internet write like that. I even know some people IRL who talk like that.

    • Creutzer says:

      I was also thrown off by this without looking for anything of the kind. I’m incredibly sensitive to this kind of stuff, and alarms get triggered by human-written texts all the time, too. Because humans get confused and forget stuff – so a sloppy human writer could have made the same mistake. But the human has a concept of what it means to make that kind of mistake, and would understand it once it was pointed out. GPT-3 lacks the very concept of there being something wrong with this.

  3. JohnBuridan says:

    To your question about what GPT-like tools are doing with regard to math performance: I imagine it as something like a higher-dimensional Dewey Decimal System. To refresh us on the Dewey Decimal System, each digit further specifies the type of work we are dealing with: 400 is languages, 480 is Greek, 485 is Greek grammar, and so on.

    As the parameters increase the level of possible specificity increases -> thus GPT-2 goes from addition prompt -> math text, and GPT-3 goes from addition prompt -> addition-like answer up to four digits.

    Going back to the languages example: if I ask GPT-2, “What is the accusative plural of τεκνον?”, it might respond with something Greek-related and grammar-related. Perhaps GPT-3 would at least give a grammatical form as it moves from “prompt indicates genre” to “prompt indicates query type within a genre.” The optimist might predict that GPT-6 registers the prompt as “prompt indicates query type within a genre requiring I follow human rules of material causality, formal logic, and customary standards within the genre.”

  4. skybrian says:

    So this is nice and all, but I’m wondering what would happen if you gave one of these GPT models a calculator? Is there a nice way to integrate it?

    • VivaLaPanda says:

      I think you’d still have the problem of it knowing *when* to use the calculator.

      But I see no reason you couldn’t wire a calculator into the neural net and hope backprop eventually determines when to use those “calculator” pseudo-neurons.

      • viVI_IViv says:

        You can’t backpropagate through a calculator because it’s discrete. You’d need to do relaxations or reinforcement learning, which are much more tricky. People have been trying to make this work with neural Turing machines and so on, but AFAIK this line of research didn’t go anywhere.

        • Daniel H. says:

          I’m not sure if this is correct in principle (meaning “for any training algorithm”, not just for the GPT-3 algorithm):
          A) your brain can. At least I remember some claim of brains adapting in a few weeks to “enhancing sensors”, e.g. a sensor that encodes the direction of the earth’s magnetic field onto your wrist (I can’t remember the exact source, maybe some Elon Musk interview on Neuralink?).
          B) There’s the Chen et al. 2018 paper “Neural Ordinary Differential Equations”, which basically combines classical differential equations with a neural net. Again, I haven’t looked at the details, but its basic approach looked like: neural nets are a generic function approximator, but we can ease their job by adding the functions we already know we’ll need in explicit form and having the surrounding ANN layers figure out how to efficiently use them.
          So putting both together, I don’t really see why a future GPT-n+1 couldn’t include a separate “logic core” that helps deal with math. Without thinking too much about it, I’d kinda expect that this is how our brain deals with specific special-skill tasks – basically push them to the corresponding specialized brain area (e.g. the hippocampus for navigation) and then integrate the results back into the overall thing the brain is doing (however you might call that).
          Somewhat related: See the “extended mind thesis” on the question of whether or not your phone is a part of your personality.

          • viVI_IViv says:

            I’m not sure if this is correct in principle (meaning “for any training algorithm”, not just for the GPT-3 algorithm):

            “Any training algorithm” is too broad of a class. Artificial neural networks are typically trained using gradient descent: iteratively move the parameter vector towards a stationary (i.e. zero-gradient) point of the objective function, where the gradient is computed by reverse-mode automatic differentiation (a.k.a. backpropagation). This requires all the layers and modules of the neural network to be differentiable almost everywhere; furthermore, if you want to find a meaningful solution, they must have non-zero Jacobians almost everywhere. This rules out discrete layers.

            There are workarounds: relaxation methods, where you replace discrete layers with continuous smooth layers that interpolate or approximate them, and reinforcement learning methods, where you treat the discrete layers as part of the environment. These methods have been demonstrated to a certain extent, but they are nowhere near as effective.

            A) your brain can.

            Your brain is a general intelligence.

            B) There’s the Chen et al 2018 paper “Neural Ordinary Differential Equations” that basically combines classical differential equations with a neural net.

            That’s an interesting paper, but it still deals with continuous smooth problems.

            I think the closest thing is “Differentiation of Blackbox Combinatorial Solvers”: they have a neural network produce some continuous values which are used as parameters for a combinatorial optimization problem, which is solved by an external solver that produces a discrete output. In the paper they define a pseudo-gradient through the solver by a relaxation method, and they backpropagate this gradient through the neural network in order to train it. Neat, but it still requires the inputs of the solver to be continuous.
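
            (For concreteness, the simplest of the relaxation-style workarounds mentioned above is the straight-through estimator: run the discrete operation in the forward pass, but pretend it was the identity in the backward pass. A minimal PyTorch sketch of the general idea, not of any particular paper:)

            ```python
            import torch

            class RoundSTE(torch.autograd.Function):
                """Straight-through estimator for rounding: the forward pass is the
                discrete op, the backward pass lets gradients through unchanged."""

                @staticmethod
                def forward(ctx, x):
                    return torch.round(x)

                @staticmethod
                def backward(ctx, grad_output):
                    return grad_output  # pretend round() was the identity

            x = torch.tensor([2.4, 3.7], requires_grad=True)
            RoundSTE.apply(x).sum().backward()
            print(x.grad)  # tensor([1., 1.]) - gradients flow despite the discrete step
            ```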

          • O_o says:

            your brain can. At least I remember some claim of brains adapting in a few weeks to “enhancing sensors”, e.g. a sensor that encodes the direction of the earth’s magnetic field onto your wrist

            This isn’t a great analogy since the sensor is differentiable and the calculator isn’t.

        • StellaAthena says:

          I believe the current state of the art for arithmetic is Neural Arithmetic Units (Madsen and Johansen, ICLR 2020).

        • Gerry Quinn says:

          It can simply finish every sentence of the form “13579 + 76543 is…” with “answered by my calculator”

          Of course that’s not quite enough in itself, but it makes GPT + Calculator a larger component of a full AI.

    • eric23 says:

      I suppose you could just run a script to print “1+1=2. 1+2=3. 1+3=4.” and so on for billions of arithmetic sums, and then add its output to the text database. Wouldn’t that get GPT-3 quickly up to speed with accurate arithmetic?
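
      Something like this, say – an illustrative sketch for two-digit sums (the open question is whether training on it would teach a rule rather than just a lookup table):

      ```python
      # Emit exhaustive two-digit addition facts as plain text, one per line,
      # for appending to the training corpus.
      with open("sums.txt", "w") as f:
          for a in range(10, 100):
              for b in range(10, 100):
                  f.write(f"{a} + {b} = {a + b}\n")
      ```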

      • Simon_Jester says:

        Yes, but all you’d be doing is adding a look-up table to answer that specific question. There’s no point, and it wouldn’t generalize to solving more complicated problems, or problems where the answers can’t be pre-recorded.

  5. Decorinius says:

    GPTocracy can be made real.

    Governance is actually already mostly a text prediction task! We just need to have the laws written by a zillion-parameter text prediction engine. Hook up data-driven dashboards of social, economic, and health metrics as the objective function, laws as output. Laws update every [Governance Tick] and we retrain on the feedback.

    Sure, you’ll crash society in the short term, but it appears to be doing that already!

    • Bugmaster says:

      I like it! It’s just like regular politics, but using electricity instead of dollars as bribes. Heh.

  6. orthonormal says:

    Imagine telling Marvin Minsky or someone that an AI smart enough to write decent poetry would not necessarily be smart enough to know that, when asked “325 + 504”, we wanted a numerical response!

    This reminds me of the bit in Gödel, Escher, Bach (1979) where Hofstadter argued that AIs capable of grasping concepts could very well be bad at arithmetic:

    There is no reason to believe that a computer’s faultlessly functioning hardware could not support high-level symbolic behavior which would represent such complex states as confusion, forgetting, or appreciation of beauty. It would require that there exist massive subsystems interacting with each other according to a complex “logic”. The overt behavior could appear either rational or irrational; but underneath it would be the performance of reliable, logical hardware.

    (To drive home the point, he also drew a head containing many small correct arithmetic equations that at a larger scale spelled out “2 + 2 = 5”.)

    • Purplehermann says:

      So basically a computer simulating a dumb human that read a pop philosophy book or two?

    • If this is true, wouldn’t it make AI less dangerous because it can’t grasp concepts and then also do the hyper-math required to effectively outfox and destroy us because we’re not fulfilling its favored concepts?

      EDIT: I mean sure it could just use an exterior math module to DO math as we do, but it’s going to be bottlenecked by human-like limitations in manipulating the mathematical concepts to achieve its goals.

      • Simon_Jester says:

        If it’s superhumanly good at one thing, then the answer depends entirely on how well it can leverage that one thing.

        A computer that’s inhumanly good at arithmetic is called a pocket calculator; it lacks the awareness to be a threat.

        A computer that’s inhumanly good at writing persuasive essays given a one sentence prompt may or may not be a threat by itself, but it’s damn sure going to be a threat in the hands of a human user, even if the thing consistently gets the wrong answer to “who is buried in Grant’s tomb” or “what is thirteen minus eight?”

  7. kaikaun says:

    Figure 1.3 has semi-log axes, so the scaling isn’t linear but logarithmic.

    Also, if you look at the greyed-out lines that are averaged to produce the bold ones, you’ll see there is a lot of variation in performance up and down. A lot of the monotonicity of the graph is produced by the last two data points, the 13B and 175B models. I’m not impressed or convinced. At least have something like a 40B model between the last two points. (Yes, I understand that would be expensive, but this was a bad graph.)

    • BlindKungFuMaster says:

      In general the scaling assumption isn’t based on that graph. There has been a paper that trained more models to compute scaling laws in a robust fashion for a more general metric. What GPT-3 does is show that these laws extrapolate to 175 billion parameters. For single tasks there are always going to be more idiosyncratic scaling curves.

  8. Thanks for this post! Among other things, you provide a good general-audience summary of what I’ve written on the topic.

    I do want to clarify one point. Re: this

    He also points out that although GPT-3 is impressive as a general-purpose reasoner that has taught itself things without being specifically optimized to learn them, it’s often worse than other purpose-built AIs at various specific language tasks, so we shouldn’t get too excited about it being close to superintelligence or anything.

    This does describe my argument if we read “purpose-built” as “aided by some task-specific training data.” But I imagine many people reading this paragraph would instead read “purpose-built” as “designed by humans with a particular task in mind,” which makes it sound like I’m arguing against domain-general/scaling approaches in favor of approaches where human researchers do a lot of work targeting a specific task.

    We need to distinguish between several levels of human involvement:

    (1) No extra data or research work: the model “just does something” on its own (zero-shot, text generation)
    (2) Extra data but no extra research work: you need task-specific data, but once you have it, you just plug it into an existing, generic model using a generic, automatable procedure (fine-tuning)
    (3) Extra data and extra research work: you need task-specific data, and you must design a custom model with the task in mind (classic supervised learning)

    where few-shot is arguably somewhere between (1) and (2), but has generally been treated in discussion as a type of (1).
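
    To make (1) vs. (2) concrete, here is a toy sketch of the two hookups using a small public GPT-2 via the Hugging Face transformers library – the task, examples, and hyperparameters are invented for illustration:

    ```python
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # (1) Few-shot / zero-shot: weights stay frozen; the task lives in the prompt.
    #     (Feed `prompt` to the frozen model and read off its next-token guess.)
    prompt = "Review: great film. Sentiment: positive\nReview: dull and slow. Sentiment:"

    # (2) Fine-tuning: the same model, but its weights are updated on task examples.
    examples = ["Review: great film. Sentiment: positive",
                "Review: dull and slow. Sentiment: negative"]
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.train()
    for text in examples:
        batch = tokenizer(text, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    ```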

    Transformers are of course great at (1) in a cool, flashy way. However, they are also extremely good at (2), to the extent that large categories of work that used to be (3) immediately turned into (2) once people noticed this was possible.

    Usually people do this by fine-tuning some variant of BERT. The differences between BERT and GPT-1/2/3 (let’s just call them GPT-n) are interesting, but are dwarfed by the similarities. For the purposes of this discussion, the least misleading framing is that BERT and GPT-n are basically the same thing, with “BERT_BASE” = GPT-n 117M and “BERT_LARGE” = GPT-n 345M.

    Really, the only important differences between (1) and (2) here are that (2) modifies the model itself (“fine-tuning” it to the task, hence the name), and that (2) typically uses a somewhat larger quantity of data. These differences are fully independent: in principle one can fine-tune with only 50 examples, and if it weren’t for the limited size of its reading “window,” one could imagine feeding arbitrarily high numbers of examples to GPT-n as a few-shot prompt.

    So, with the same transformer model, you can do either (1) or (2), and they look very similar. Except . . . (2) does far better. The GPT-3 authors frequently use comparisons to fine-tuned BERT_LARGE as a reference point, e.g. Table 3.8. That is, the authors are comparing their results to results from a model that is 500 times smaller, has more task-specific data, and is hooked up to the task in a different manner . . . and the exciting thing is that their model does about as well as the tiny one, not far better than it. It’s not clear to what extent this is really a difference in data size — that’s the interpretation in which the name “few-shot” makes sense — and to what extent it’s just that, relative to fine-tuning, this is a less effective way to hook up a model to a task.

    If 500x-ing your parameter counts means anything, it ought to mean improvements in fine-tuning, too. So there’s an unspoken elephant in this room. I called GPT-3 a “disappointing paper,” which is not the same thing as calling the model disappointing: the feeling is more like how I’d feel if they found a superintelligent alien and chose only to communicate its abilities by noting that, when the alien is blackout drunk and playing 8 simultaneous games of chess while also taking an IQ test, it then has an “IQ” of about 100.

    I think this relates to another point you made:

    If it’s trying to mimic what a human can write, then no matter how intelligent it is “under the hood”, all that intelligence will only get applied to becoming better and better at predicting what kind of dumb stuff a normal-intelligence human would say.

    This is true if you only think about the (1)-type approach where you interact with the model as a text predictor. However, it’s more fruitful to think of it as something more like a “repository of knowledge distilled from reading” — something you and I have in our heads as well — which was created in this specific manner, but can be leveraged in other ways. Whether or not some piece of knowledge is in there is a distinct question from how best to hook that knowledge up to something else we want — or an AI wants — to do with it.

    • BlindKungFuMaster says:

      I am not sure you can use BERT to claim a huge difference in quality between (1) and (2). BERT is crap at text generation, which incidentally must be the reason why OpenAI scaled up GPT-n and not BERT. It seems quite likely that the gap between (1) and (2) is significantly smaller for GPT-n.

      But I’m also looking forward to the fine-tuning paper on GPT-3.

      • viVI_IViv says:

        BERT is not designed for text generation. It is possible to torture it to generate text, but it is indeed crap at it.

      • That’s fair. De-noising objectives like BERT’s are better for fine-tuning — Table 2 in the T5 paper convinces me on that score — but make generation more difficult and possibly worse.

        I do wonder whether BERT is really lower-quality as a generator, or merely more difficult to sample from. After all, BERT’s training task does that sneaky thing where a masked token is presented as a random ordinary wordpiece instead of [MASK], and BERT is still expected to replace it with the correct token. To do perfectly at this, you’d need to model arbitrary conditional dependencies in text . . . but these tokens appear rarely and BERT can see a mostly-complete context around them, so it’s just easier. You don’t need especially deep knowledge to figure out what’s going on in “I went to the grocery torrid to get milk” or “I went recent the grocery store to get milk” or “I went to the grocery store father get milk.”
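
        For concreteness, here is a rough sketch of the masking recipe I’m describing (the 80/10/10 split is the one from the BERT paper; the whitespace tokenizer and the tiny vocabulary are stand-ins for real wordpieces):

            import random

            def bert_style_mask(tokens, vocab, select_prob=0.15):
                """Pick ~15% of positions as prediction targets; corrupt them as
                80% [MASK], 10% random token, 10% left unchanged."""
                corrupted, targets = list(tokens), {}
                for i, tok in enumerate(tokens):
                    if random.random() >= select_prob:
                        continue
                    targets[i] = tok            # the model must recover this token
                    r = random.random()
                    if r < 0.8:
                        corrupted[i] = "[MASK]"
                    elif r < 0.9:
                        corrupted[i] = random.choice(vocab)  # the sneaky case
                    # else: leave the original token in place
                return corrupted, targets

            tokens = "I went to the grocery store to get milk".split()
            print(bert_style_mask(tokens, vocab=["torrid", "recent", "father"]))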

        I also wonder what the tuning-vs-generation tradeoff means . . . I don’t have any great ideas there.

        • gwern says:

          I’m not sure there’s a practical difference… But I think BERT genuinely does have some sort of problem with text generation. No one has gotten it to generate good text despite some trying, while in contrast, T5, which was trained with text generation as one of its tasks as well as the usual suite of denoising tasks, does generate text reasonably well. Curiously, T5 still has problems with finetuning, like diverging during training, which is one reason why I haven’t yet used NaxAlpha’s T5 training code to train the big T5 on poetry and generate samples. So there’s something weird going on there.

  9. Erusian says:

    So, how long until you’re replaced by an army of Robo-Scotts increasing SSC production tenfold and driving the poor artisanal Scotts to the poorhouse?

  10. drocta says:

    Speaking of GPT-2/GPT-3 doing logical reasoning, when playing with talktotransformer’s interface for GPT-2, it seems like it does much better with modus ponens than with “hypothetical syllogism” aka “double modus ponens”.
    If you prompt GPT2 with “If it is raining outside, John will not take the umbrella. It is raining outside. Therefore”, GPT2 will often continue this with something along the lines of “John will not take the umbrella.”.
    But if you give it a combination of 2 if/then statements, where the conclusion of the first is the premise of the second, and then end the prompt with “Therefore, if [condition of the first statement], then”, it doesn’t seem to tend to finish it with [conclusion of the second statement].
    (I have not tested this in any serious experiment, just played around with it.)
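
    For anyone who wants to poke at this more systematically, here is a quick sketch using the public GPT-2 weights through the Hugging Face pipeline (talktotransformer used a larger GPT-2 variant, so results will differ; the prompts below are just examples of the two patterns):

        from transformers import pipeline, set_seed

        generator = pipeline("text-generation", model="gpt2")
        set_seed(0)  # make the informal experiment at least repeatable

        prompts = [
            # plain modus ponens
            "If it is raining outside, John will not take the umbrella. "
            "It is raining outside. Therefore",
            # hypothetical syllogism: chain two conditionals
            "If it is raining outside, the streets will be wet. "
            "If the streets are wet, John will wear boots. "
            "Therefore, if it is raining outside, then",
        ]
        for p in prompts:
            for o in generator(p, max_new_tokens=20, do_sample=True,
                               num_return_sequences=3):
                print(o["generated_text"][len(p):].strip())
            print("---")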

  11. Kakrerere says:

    I’d just like to say one thing.

    AAAAAAAAAAAAA

    AAAAAAAAAAAAAAAAAAAAAA

    AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

  12. Le Maistre Chat says:

    This is a surprising issue to have, but so far AIs have been nothing if not surprising. Imagine telling Marvin Minsky or someone that an AI smart enough to write decent poetry would not necessarily be smart enough to know that, when asked “325 + 504”, we wanted a numerical response!

    OTOH, making an artificial calculator was much earlier low-hanging fruit in computing. So you could have “an AI” that includes a calculator module and a GPT-3 module. To which you could hypothetically add…
    This makes the Buddhist “no Self” truth claim seem relevant to the question of what “AI” is, anyway.

    • BlindKungFuMaster says:

      Just yesterday I had an intern visualize the word vectors (2013 tech) of numbers with PCA and t-SNE. Those are already (mostly) linearly arranged. So even number understanding from text is not super new.
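
      Something like this, for the curious (sketched here with GloVe vectors through gensim rather than whatever the intern actually used; the particular embedding file is an assumption):

          import gensim.downloader as api
          import numpy as np
          from sklearn.decomposition import PCA

          # Classic pretrained word vectors (50-dimensional GloVe).
          vectors = api.load("glove-wiki-gigaword-50")

          words = ["one", "two", "three", "four", "five", "six",
                   "seven", "eight", "nine", "ten", "eleven", "twelve"]
          X = np.stack([vectors[w] for w in words])

          # Project to 2D; the leading component tends to order the numbers
          # roughly linearly, which is the effect described above.
          coords = PCA(n_components=2).fit_transform(X)
          for w, (x, y) in zip(words, coords):
              print(f"{w:>8s}  {x:+.3f}  {y:+.3f}")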

    • Simon_Jester says:

      The hard part, the part we’ve made very little progress on, is designing an AI that knows which module it’s supposed to be using.

  13. moridinamael says:

    Important clarification, I think:

    Yes, it can do simple addition, but it has to use supercomputer-level resources to do so

    It required a supercomputer to learn addition, but the trained model is much faster. It’s possible that a personal gaming PC could run the GPT-3 model. (I don’t actually know this, but I know that GPT-2 could run just fine on mundane compute resources.)

    Sure, still inefficient, but we’re hardly being efficient when we do mental math. Generality will always come at the expense of efficiency.

    • sketerpot says:

      A personal gaming PC could run the model, but running it at an acceptable speed is another matter. The full weights take up about 350 GB, and your gaming PC probably doesn’t have that much graphics RAM, or RAM of any sort, so you’d need to either partition the model across a bunch of people’s GPUs — LAN party time! — or be very bottlenecked on moving data around.
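
      The rough arithmetic behind that number, assuming 16-bit weights (actual requirements depend on precision and per-layer overhead):

          params = 175e9            # GPT-3 parameter count
          bytes_per_param = 2       # fp16 storage
          weights_gb = params * bytes_per_param / 1e9
          print(weights_gb)         # ~350 GB just for the raw weights

          gpu_vram_gb = 24          # a roomy single GPU, purely illustrative
          print(weights_gb / gpu_vram_gb)  # ~15 such cards to hold it all at once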

      More details on GPT-3’s github page.

      Smaller and less impressive versions of GPT-3 are probably less unwieldy, but will also produce less impressive output.

      • gwern says:

        People have been calculating it out, and it wouldn’t necessarily be too bad on a personal computer. Yes, you do spend a lot of time shuffling layers in and out of the GPU, but you’re also spending a lot of time computing the layer (while the VRAM consumption is not that big). And if you are willing to get a workstation with 512GB RAM, you could actually train something like GPT-3 fairly efficiently! You take advantage of the relatively small per-layer VRAM consumption to do minibatches layer by layer, storing the intermediates in main RAM: “Training Large Neural Networks with Constant Memory using a New Execution Algorithm”, Pudipeddi et al 2020 https://arxiv.org/abs/2002.05645v5
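
        A toy illustration of the layer-streaming idea (bare-bones PyTorch inference, not the actual algorithm from the paper; the layer sizes are made up and it assumes a CUDA device is available):

            import torch
            import torch.nn as nn

            # Stand-in "big" model: a stack of large layers that lives in CPU RAM.
            layers = nn.ModuleList([nn.Linear(4096, 4096) for _ in range(48)])

            def forward_streaming(x, device="cuda"):
                """Run the stack layer by layer, keeping only one layer in VRAM."""
                x = x.to(device)
                for layer in layers:
                    layer.to(device)          # copy this layer's weights to the GPU
                    x = torch.relu(layer(x))  # compute while it is resident
                    layer.to("cpu")           # evict it before the next layer
                return x.cpu()

            with torch.no_grad():
                print(forward_streaming(torch.randn(8, 4096)).shape)

        Training adds the wrinkle of keeping each layer’s intermediate activations in main RAM for the backward pass, which is roughly the part the paper works out in detail.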

  14. Sorghum says:

    People don’t seem to talk about Searle’s Chinese Room as much as they used to. The idea was that you could have a massive database of answers to every possible question and build something that could pass the Turing test without being in any way intelligent. The response was “yeah, but that’s silly, a database of every possible answer to every possible question is unimaginably vast and physically impossible anyway, so you can’t sensibly reason about it”.

    The success of models like GPT-3, though, suggests that you don’t actually need every possible question-answer pair to get Chinese-Room level performance on a Turing test; you just need a vague set of associations between words, and enough parameters to fill a flash drive. GPT-3 isn’t a sensible chat bot, but only because it hasn’t been trained to act like one — if you fed it a few billion chat logs could you make it into one? Quite likely.

    What it still would lack is longer-term memory — in a Turing test you could ask it “Hey, ten minutes ago you mentioned your dog’s name, what was your dog’s name again?” Still, I wonder whether these sorts of consistency checks could be added without too much trouble.

    • Eri says:

      > The response was “yeah, but that’s silly, a database of every possible answer to every possible question is unimaginably vast and physically impossible anyway, so you can’t sensibly reason about it”.
      I like another sort of response more. If it fully behaves like a human in generating texts, and we cannot spot the difference no matter how many tests we do, does any meaningful difference even exist? It’s something of the same breed as the philosophical zombie.

    • BlindKungFuMaster says:

      There is already a transformer model that compresses earlier text memories to do well in text prediction on a corpus of novels, where you have to remember all the characters and stuff.

  15. Briefling says:

    I am officially shook. Congratulations, AI doomsday alarmists; I will stop mocking you guys in public.

    • Bugmaster says:

      I am actually pretty elated. It might be that, at long last, human-readable machine translation could emerge within my lifetime. There are a ton of foreign-language books I’ve always wanted to read but never could. Granted, the chances are still pretty small, but one can always hope.

      Oh, and I’m still gonna mock AI doomsday alarmists in public 🙂

      • kokotajlod@gmail.com says:

        I challenge you to a public debate on the topic, then. Could be conducted via comment thread here, or video chat with your friends in audience.

        Seems like you are still stuck in the “Then they laugh at you” stage from the saying. I’m trying to accelerate your progress to “Then they fight you,” for obvious reasons. 🙂

        • Bugmaster says:

          Sounds good, but I can’t commit to a video chat — my schedule just won’t allow it. If you’re willing to wait a bit for my (textual) replies, then I’m good to go. Er… wait, actually maybe I’m not. How does a “public debate” differ from a normal comment thread ? Are there some special rules, or what ?

          • kokotajlod@gmail.com says:

            Sounds good. A normal comment thread works fine, except maybe with the addition of some limits (we don’t want this to go on forever, right?) and a better-defined topic of debate. I propose we set the topic as:

            Is it currently reasonable to worry about AI bringing about some sort of doomsday?

            As for limits, I am super busy myself, so asynchronous is great. Maybe we say: We each get up to five comments total, no more than 500 words each? No time limit, except that it has to be done by, say, July 1st? Idk, what do you think?

          • Bugmaster says:

            Whoops, sorry, I replied to the wrong comment; see my reply below.

            Not a good start to the debate on my part :-/

        • Bugmaster says:

          @kokotajlod@gmail.com:
          I want to say “yes”, but I’ve already identified one hurdle: I think it’s totally reasonable to worry about unscrupulous human agents using AI to bring about some sort of doomsday. They don’t even need AI for that, they already have nukes, AI just gives them more toys to play with. Also, “doomsday” is poorly defined. Maybe we could amend the topic to something like, “Is it reasonable to worry about a runaway AI bringing about extinction of humanity (or perhaps just modern human civilization), in the absence of humans wielding it explicitly to do so ?”

          • kokotajlod@gmail.com says:

            Yes, that’s a good clarification. I accept your proposed topic definition. Well, actually, I’d like to make it “…bringing about the extinction of humanity, or something similarly bad,” to account for cases in which e.g. AI does to humans what humans did to whales or chickens. Is that OK?

          • Bugmaster says:

            Sounds good to me. Do you want to go first ? I guess in a real debate we’d flip for it, but I’m not sure how to implement that here.

          • kokotajlod@gmail.com says:

            I think I’d prefer you go first, since you probably already know the basic pitch, and the basic pitch is what I’d be giving in my initial comment. (Whereas I have very little idea of what your position is). But if you like I’d be happy to go first and give the pitch.

          • Bugmaster says:

            Ok, sure.

            As far as I understand, the “basic pitch” is that, one day, a malicious or just “mis-aligned” AI will destroy humanity (and potentially any other intelligent species, should they exist); and that day is coming soon. Therefore, we need to be very worried about this scenario; that is, we as a species should spend a significant amount of resources on combating the AI threat — at least as much as, if not more than, we currently spend on preventing global thermonuclear war, genetically engineered viruses, or asteroid strikes.

            I would argue that such a scenario is vanishingly unlikely, so much so that worrying about it makes about as much sense as worrying about demonic invasions from Phobos.

            The mechanics of this unfriendly Singularity usually involve an AI becoming superintelligent through a combination of recursive self-improvement and Moore’s law; using this superintelligence to acquire essentially godlike powers; then using these powers to effectively end the world. All of this will (according to the scenario) happen so quickly that humans would have no chance to stop it.

            I would argue that this scenario suffers from several problems:

            1). The concept of “superintelligence” is poorly defined (to put it generously).
            2). Modern-day AI systems are nowhere near to achieving general-level human intelligence, and there’s currently no path that takes us from here to there (though people are working on it).
            3). Being super-smart (or super-fast-thinking) is not enough; in order to effect real-world change, you have to act in the real world, at real-world speeds. You also cannot learn anything truly new just by thinking about it really hard.
            4a). Recursive self-improvement, of the kind espoused by AI risk proponents, is probably physically impossible.
            4b). Many (perhaps most) of the powers ascribed to a “superintelligent” AI (assuming that word even means anything, as per (1)) are likewise probably physically impossible.
            5). Humans are actually pretty good at stopping buggy software (sadly, the same cannot be said of stopping other malicious humans armed with perfectly working software)

            Just to clarify, here are some things I am not saying (I’m just including these items because they tend to come up frequently, feel free to disregard them if they don’t apply to you):

            100). Only biological humans can be generally intelligent, even in principle.
            101). The current state of AI is at its peak and will never improve.
            102). AI is totally safe.

            Obviously, this isn’t an argument, but just an outline. I could proceed expanding on my points in order; or we could focus on any specific point if you prefer — but first, please tell me if my summary of the “AI FOOM” scenario is correct, since I don’t want to strawman anyone by accident.

          • kokotajlod@gmail.com says:

            Thanks! Excellent start to our conversation.

            Here are my thoughts on your characterization of the basic pitch:
            –Unaligned, not malicious. Malicious would be rather unlikely, but unaligned is the default; hence the importance of AI alignment research.
            –*Might* destroy humanity, not *will*. Think of it like this: A technologically superior alien civilization opens a portal to our world. What happens? Maybe nothing bad. Maybe something very bad.
            –There’s no need to think it will happen soon. If we thought aliens were maybe coming sometime in the next century, it would still be worth preparing. Bostrom, for example, actually thinks it is coming *later* than the median AI researcher!

            Here are my thoughts on your objections:
            1. Unaligned AI killing us all is not vanishingly unlikely. It’s certainly much more likely than invading demons from Phobos! There are multiple lines of evidence that support my claim here. a. It’s what the majority of AI experts think. b. economic models predict a substantial probability of it happening. c. tracking progress on important metrics in ML, deep learning, and computing suggests it might happen. d. arguments from first principles suggest it might happen.
            2. The “AI Fooms to God” scenario you reference is just one extreme example of a worrying scenario. The overall concern is not dependent on that specific extreme scenario. You’ll be interested to know that most of the people working on AI safety, while they do take that scenario seriously, also take even more seriously more mundane scenarios in which progress is not so fast and AI is not godlike. These more mundane scenarios are also existential risks.
            3. The 5 points you make below… I disagree with all of them, and/or think they don’t substantially undermine my position. Hmmm. How to proceed. You suggest I pick something to talk about… why don’t we talk about point 1? If there are other points you are particularly interested to talk about, I’d be happy to talk about them too.

            You say “superintelligence” is poorly defined, to put it generously. (a) so what? Lots of things are poorly defined, but we still worry about them. This seems like an isolated demand for rigor. (b) What, specifically, are your problems with Bostrom’s definition, and why do they undermine the overall argument?

          • Bugmaster says:

            @kokotajlod@gmail.com:

            O1: Regarding your objection 1 (I’ll refer to it as “O1”, just to avoid confusion), I absolutely agree that an immensely powerful ultra-superhuman unaligned AI would be very dangerous; but then, so would the Khan Makyr or the Vogon road construction fleet. I argue that such things are incredibly unlikely and/or outright impossible, not that they are safe. I fully acknowledge that unaligned AI can be dangerous; for example, shortly before the pandemic hit, Google Maps told me to take a right turn directly off a cliff (I didn’t listen, though).

            O2: Regarding the “AI FOOM” scenario, the message I hear most often from AI risk proponents is that the rapid “FOOM”ing is what makes a runaway AI dangerous. If the AI develops slowly over a period of 50 years, maybe humans would have a chance to pull the plug. If it ascends from “glorified calculator” to “quasi-godlike entity” overnight, humans have no chance. I actually think this message does make sense, assuming you accept all the other premises of the AI safety movement (which I do not). So, my question is, what makes unaligned AI dangerous in the absence of the “FOOM”ing ? Can’t someone just turn it off when it starts making crazy demands on CPU time ?

            Anyway, onwards to my point (1).

            1). What does the word “superintelligence” mean ?

            Bostrom says that it means something like, “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”; AI risk proponents often use the analogy “1000x smarter than Von Neumann”. But these analogies still do not explain what it means to be “super-smart”.

            Modern researchers still argue (quite often vehemently) over the definition of human intelligence. Some say it is nothing more than the ability of a person to pass an IQ test, which does not correlate to anything useful. Others posit a whole spectrum of different intelligences, such as “emotional” or “kinesthetic”, which correlate with performance in specific domains. There’s also the notion of a “g-factor”, that correlates with performance in a wide variety of domains; however, this “g-factor” is poorly understood, and it is unclear whether it exists at all. Given all of these conflicting definitions, the notion of a nonhuman “super” intelligence simply amplifies the ambiguity, leading to super-vagueness.

            Some AI risk proponents equate intelligence with the ability to think really fast. There’s some merit to this: if you asked me to solve a simple math problem, I might be able to puzzle it out with pencil and paper eventually; whereas Von Neumann could instantly give you an answer off the top of his head. If I could think 100x faster, and maybe possess some kind of read-write eidetic memory, maybe I could also appear to be as “smart” as Von Neumann. However, the problem is that if you asked me to solve something like the Riemann Conjecture, then sped up my mind 100x, I’d simply fail to solve it 100x faster; speed isn’t everything.

            If I wanted to steel-man the definition of “intelligence”, I could expand it to something like, “the ability to quickly and correctly produce answers to novel questions”. A “superintelligence” would thus be able to quickly and correctly answer most questions (or, at least, most of the practical ones, such as e.g. “how do I convert the entire Earth to computronium”). In this view, the “superintelligence” is something like an oracle. But the problem is that oracles demonstrably don’t exist; probably cannot exist in principle; and there appears to be no path at all from either machine-learning systems or just ordinary everyday humans to virtual omniscience. Answering novel questions correctly is not some kind of a magic factor that you can scale simply by adding more computing power; it is a process that is still poorly understood, but that undoubtedly requires a lot of work in the real, physical world.

            I will expand more on this topic in points 2..4; but meanwhile, do you think my attempted steel-manning of the definition is fair ? If not, how would you define [super]intelligence, without resorting to synonyms of the word “smart” ? If yes, can you demonstrate some likely path from regular human intelligence (and/or e.g. GPT-3) to omniscience ?

          • kokotajlod@gmail.com says:

            O1: First of all, remember that “ultra-superhuman AI” is not the only thing that could kill us all; merely superhuman or even human-level could too. So it’s a bit of a straw man to focus on ultra-superhuman AI. But anyhow what’s your argument that it is of comparable likelihood to Vogons? Like I said, there are multiple lines of evidence pointing to it having substantial probability.

            O2: I agree that rapid FOOMing makes the situation more dangerous, and I also think it’s reasonably likely to FOOM. Probably not in less than a day, but maybe in a few weeks to two years or so. But like I said, even if it happens over the course of many years it still could kill us all. History provides ample evidence for this; Cortes’ conquest of Mexico was surprising because it only took two years. But the subsequent European conquest of the rest of North America was not surprising, and very difficult for the natives to prevent, even though it took hundreds of years. For an example of a prominent AI safety researcher who thinks the FOOM scenario is unlikely (though he still thinks it is much more likely than you do!) describing how a slow and gradual takeoff could still be very bad, see here.

            Point 1: Those explanations of superintelligence seem perfectly good enough for me. A machine that is better at politics, social skills, strategy, science, analysis, … [the list goes on] than any human? Sounds powerful. Also sounds like something that might happen; after all, human brains aren’t magic, and lots of smart AI scientists are explicitly trying to make this happen.

            Like I said, you don’t need a precise definition to care about something. Your arguments here sound like the arguments made in section 2 of this paper on the impossibility of super-sized machines. It’s an isolated demand for rigor. Imagine you are Mr. S, recently back from the ‘alien homeworld,’ wondering what will happen to your beloved country. Might the aliens take over and rebuild it in their image? Yes, they might. Why? Because they have superior technology. What does “superior technology” mean though? There is no accepted definition, and all of the available definitions are vague. Oh well, I guess there’s nothing to worry about then!

            Bostrom further distinguishes speed, quality, and collective intelligence, by the way. I think all three dimensions are worth thinking about.

            I think what you are saying about oracles is not relevant.

            I will expand more on this topic in points 2..4; but meanwhile, do you think my attempted steel-manning of the definition is fair?

            Not really. I appreciate the attempt though! Basically, I feel like you’ve consistently characterized the “alarmist” position as being too dependent on a specific scenario that is extreme in many ways. Then you happily go on to poke at the extremities.

            If not, how would you define [super]intelligence, without resorting to synonyms of the word “smart”?

            Remember that I think this is unimportant; I don’t need to define things any more precisely than they already have been defined. But anyway, here are some more definitions:
            1. Consider the space of all tasks. Consider the subset of tasks that are “strategically relevant,” i.e. the set of tasks such that if there were a single machine that was really good at all of them, it would be potentially capable of taking over the world. What I’m worried about is a machine (or group of machines) getting really good at all of them.
            2. Consider the advantages humans have over other animals. Consider the subset of such advantages that allowed humans to take over the world, leaving other animals mostly at their mercy. I’m worried about machines having advantages over humans that lead to a similar result.
            3. Intelligence = ability to achieve your goals in a wide range of circumstances.

            Finally, I really shouldn’t have to demonstrate a path to superintelligence in order for the possibility of superintelligence to be taken seriously. Isolated demand for rigor strikes again!

          • Bugmaster says:

            @kokotajlod@gmail.com:

            Regarding your (O1) and (O2), I broadly agree that AI could be incredibly dangerous in the wrong hands, just like any other tool. However, I think that without a “FOOM”, the danger from a fully autonomous AI decreases to the point where we don’t need to be critically worried about it. I could talk more about it, but I think this point is similar to my point (5), so I suggest we table it until we get there (we don’t have to go through all the points in order, either). It’s up to you, though.

            For now, let me return to point (1), because I think I wasn’t clear enough in articulating my objection. I fully agree that an entity that was better than humans at virtually everything would be potentially quite dangerous. I could even agree that we could re-label “ability to do things” as “intelligence”. However, if we do so, IMO we lose the connection between this new meaning of the word, and what is commonly called “intelligence” or “being smart”. I think that AI risk proponents inadvertently engage in a bit of an equivocation fallacy when they use the word “intelligence” this way.

            As far as I can tell (and I could be wrong !), AI risk proponents seem to take it as a given that (1a) there exists a single factor that is responsible for allowing agents to be good at doing things; they call this factor “intelligence”; and (1b) then they say that AIs also possess this factor and can (1c/4a) recursively self-improve it to allow the AI to dramatically surpass the humans at pretty much everything. But I’d like to see some evidence for all of these propositions (though we could leave 4a for later).

            I am also not entirely clear on what the “super-” in “superintelligence” means. Is a world-leading mathematician “superintelligent” ? He’s certainly better at math than an average human, by multiple standard deviations; but he probably does not constitute an existential risk. I feel like the AI risk proponents engage in another bit of equivocation here, when they extrapolate from “some people are better at certain cognitive tasks than others” to “and therefore there will one day exist an agent who can solve virtually any problem almost instantly”. It’s a pretty big leap, because it goes from a description of something that exists but is rather modest; to something that has never been demonstrated to exist and is rather extraordinary. To use an analogy, houses used to be only 2 or 3 stories tall; then they expanded to 10 stories; and today we have skyscrapers — but that does not mean that tomorrow we will have buildings that can reach from the Earth to the Sun, and also grow free pizza along each of their unoccupied surfaces. I’m not going to reject such buildings out of hand, but I’d need something more to go on than just an assertion.

            On second thought, my analogy might be a bit deceptive as well, since we do know how to measure the height of buildings, but we do not know how to reliably measure intelligence (especially the hypothetical superhuman kind).

            My suggestion to treat the superintelligence as an oracle was an attempt to get around these problems, though if you disagree with it, then we should of course table it. FWIW, several AI risk proponents seem to equate “intelligence” with “computing capacity”, but I deliberately stayed away from this approach in fear of straw-manning the position (er… until now, I guess).

          • keaswaran says:

            One useful analogy I find in this discussion is the analogy between an AI and a corporation. Both are systems of rules that operate to maximize some sort of value that may or may not be aligned with the interests of any human. Both are much more powerful at acting towards the maximization of their values than any individual human is. The main difference is that corporations don’t quickly figure out how to set up even more effective corporations than themselves. Even so, it’s quite clear that at some point over the past century or four, unaligned corporations ended up blindly stepping on the lives of billions of people, creating who-cares-what externalities in the process of maximizing shareholder value.

            The question is whether AIs will become more or less effective at achieving their aims than corporations, and whether they will be more or less aligned in their values than corporations.

          • Bugmaster says:

            @keaswaran:
            One difference between [hypothetical] GAIs and corporations is that corporations are made of people. When you say that corporations “stepped on the rights of billions of people”, this is broadly speaking true; however, a more accurate statement would be something like, “a group of people decided to advance their own interest at the expense of another group”. At this point, we’re getting into all kinds of interesting philosophical/moral issues about coherent extrapolated volition, etc., which is not the case with AI. I terminated my software program just a few minutes ago, because it had a mis-alignment in its values: I wanted it to download and process some database records, and it wanted to take 100% CPU and flood the disk with garbage. Now I’ve got to go back and debug it…

          • keaswaran says:

            Except I don’t think it’s clear that there is a group of people whose interests a corporation acts in. A corporation is made of people (its employees, including executives), and its interest in shareholder value is derived from the interests of other people (its shareholders), but its behaviors are governed by rules and procedures that don’t always align with the full interests of either group. Much of the bad behavior of corporations is brought about by a bunch of individuals that don’t particularly like that bad behavior, but who each think that their job requires them to make it happen. Even if shareholders on balance would prefer lower stock price and less pollution to higher stock price and more pollution, the mechanisms governing each employee can naturally push for the higher stock price and more pollution outcome.

          • keaswaran says:

            I’m not sure exactly what the bounds of “coherent extrapolated volition” are, but any AI is going to be different enough from biological humans that there will be similar questions about where the volition comes from in the inanimate objects that form parts of it. The fact that a corporation is made of people just means that it’s a lot like a bigger version of the Chinese room.

          • kokotajlod@gmail.com says:

            Regarding your (O1) and (O2), I broadly agree that AI could be incredibly dangerous in the wrong hands, just like any other tool. However, I think that without a “FOOM”, the danger from a fully autonomous AI decreases to the point where we don’t need to be critically worried about it.

            Now it sounds like you are saying that a less-than-godlike AI may be likely to happen, but it wouldn’t be dangerous enough to worry much about. If so, I’d love to talk more about this, and hear why. Colonizers vs. colonized was not a godlike difference, but it was enough. Heck, even humans vs. other animals is not a godlike difference, but it’s more than enough.

            I fully agree that an entity that was better than humans at virtually everything would be potentially quite dangerous. I could even agree that we could re-label “ability to do things” as “intelligence”. However, if we do so, IMO we lose the connection between this new meaning of the word, and what is commonly called “intelligence” or “being smart”.

            Wait, but now it sounds like you agree that even a non-god-like AI could be quite dangerous, whereas before it sounded like you thought such an AI would not be dangerous.

            As far as I can tell (and I could be wrong !), AI risk proponents seem to take it as a given that (1a) there exists a single factor that is responsible for allowing agents to be good at doing things; they call this factor “intelligence”; and (1b) then they say that AIs also possess this factor and can (1c/4a) recursively self-improve it to allow the AI to dramatically surpass the humans at pretty much everything. But I’d like to see some evidence for all of these propositions

            Well, it’s understandable I guess that you have this impression, but let me assure you that not only are these things not taken as given, they are not even believed by many people worried about AI risk. I think there is a decent amount of evidence in favor of 1a, for example, (e.g. looking at g factor in humans, theorizing about concepts like “problem-solving ability” and “general intelligence”) but probably it is false. I think the arguments for concern about AI risk don’t depend on 1a. For example, suppose instead of 1 general factor of intelligence there are 1,000,000 different factors, representing skill at various tasks or domains. Well, we could still get AI that is better than humans at most or even all of the important domains, enough to make it dangerous. You could even get recursive self-improvement, if some of the skills are useful for designing better AIs or more mundanely if some of the skills are useful for learning how to do others better.
            I don’t think the arguments for AI risk depend on recursive self-improvement, though I admit there is a connection in this case, because recursive self-improvement is a reason to think the gap between humans and AIs could grow quickly, which is a reason to think we won’t have time to react properly. Without recursive self-improvement, the problem remains, but we just have more time to deal with it and find a solution.

            I am also not entirely clear on what the “super-” in “superintelligence” means. Is a world-leading mathematician “superintelligent” ?

            No, superintelligence means significantly better in all relevant domains, not just one (math).

            I feel like the AI risk proponents engage in another bit of equivocation here, when they extrapolate from “some people are better at certain cognitive tasks than others” to “and therefore there will one day exist an agent who can solve virtually any problem almost instantly”.

            This seems like a straw man. The reasoning is more plausible than that, and it doesn’t lead to that extreme conclusion. Like I keep saying, AI doesn’t need to be godlike to be dangerous.

            To use an analogy, houses used to be only 2 or 3 stories tall; then they expanded to 10 stories; and today we have skyscrapers — but that does not mean that tomorrow we will have buildings that can reach from the Earth to the Sun, and also grow free pizza along each of their unoccupied surfaces. I’m not going to reject such buildings out of hand, but I’d need something more to go on than just an assertion.

            Yeah, there’s a lot to go on besides assertion. I even referenced a bunch of arguments above. I’d be happy to go into more detail if you like. And anyhow a more appropriate analogy would be skyscrapers that are even bigger than the ones we currently have. Godlike AI not necessary for danger, etc.

            On second thought, my analogy might be a bit deceptive as well, since we do know how to measure the height of buildings, but we do not know how to reliably measure intelligence (especially the hypothetical superhuman kind).

            Fun fact: I was part of a project in which we looked for historical data on the height of buildings, to see what the trend was like. Turns out there is controversy about how to measure the height of buildings — do you include antennas on the top? What about bits of concrete and metal that don’t have any useful interior space? What about buildings that are held up by supporting wires, such as radio towers? Also, sometimes buildings are on uneven bases, or have deep basements–what do you count as the base?

            Anyhow, you don’t need to be able to measure something to worry about it. How do we measure the military power of Nazi Germany in 1938?

            I think I should lay out what I think of as the basic pitch:
            1. There’s at least a 1% chance of AI better than humans at all or most tasks relevant to taking over the world in the next 50 years. My actual credence is more like 70%.
            2. AI would not by default be aligned; work must be done to craft an AI that is aligned. (My credence is 95%, but even if it were much lower the argument would go through.)
            3. Should an unaligned AI of this type (or many of them) appear, unless we have aligned AI to help us out there is at least a 1% chance that humanity will be overtaken and destroyed (or something similarly bad happen). My actual credence is more like 95%.
            4. Currently very little work is being done in that direction, relative to the magnitude of the danger. Even if the chance of human extinction is 1% of 1% of 1%, we are currently spending way too little effort and thought on this problem.
            5. Therefore it is reasonable to be worried about this, just as it is reasonable to be worried about nuclear war, or climate change, or pandemics, and certainly compared to [insert political hot topic of your choice].

          • kokotajlod@gmail.com says:

            @keaswaran: I too like the comparison to corporations. AIs are likely to be more alien and indifferent to human concerns than corporations (because they aren’t made up of humans) and–more importantly–AIs are likely to think much faster than humans on an individual level, and then (once there are enough AIs to form organisations, including corporations, of their own) to be qualitatively smarter as well. For these three reasons I think AIs are going to be (eventually) much, much, much more powerful and dangerous than corporations. (There will of course be a transition period where AIs are not more powerful than corporations or even individual humans, since currently AIs are very weak.)

          • Bugmaster says:

            @kokotajlod@gmail.com:

            Ok, there seems to be some confusion about the concept of “danger”, as well as AI capabilities.

            I think we have an unintentional equivocation going on when we use the word “danger”. As I said, AI can be dangerous, in the same way that knives, guns, gasoline tankers, and nukes can be dangerous (just to name a few examples). However, there’s a spectrum of danger. Guns are dangerous but they do not present an existential risk; nukes probably do, but so far we’ve managed to deal with them. I suggest we use a term like “x-risk” to differentiate regular danger from “imminent demise of humanity” kind of danger.

            Secondly, I do admit that an entity that is “superhumanly good” (I’m still unclear on what that means) at literally everything would be dangerous; I’m just not convinced that such an entity is in any way likely to exist, or that it even can exist in principle. I don’t believe in gods. As far as I can tell, AI risk proponents use the word “intelligence” with the “super-” prefix in order to ground their proposed god in some kind of reality. But, in doing so, they IMO simply sweep the magic under the rug, because I’ve yet to see any coherent explanation of what the term “superintelligence” would mean, apart from “godlike powers”. Just to clarify, I think that the property of being better than humans at every conceivable task; and being better by a margin that is so high as to be nearly immeasurable; is a pretty good definition of “godhood”. There exist real humans who are better than me at most things, but I don’t consider them godlike (no, not even Von Neumann).

            You say, “AI doesn’t need to be godlike to be dangerous”, but this is where we run into an impasse of sorts: since the definition of “superintelligence” is so nebulous, I don’t really understand what you mean by a “non-godlike AI”. You’ve offered some analogies, like “colonizers vs. natives”, but I’m not sure how this translates into our current discussion, specifically. You ask, “how do we measure the military power of Nazi Germany in 1938?”. I’m not a historian, so I have no idea really; but off the top of my head, I’d say we could count (or estimate) their soldiers, their tanks and other materiel, their factories and farms, and compare these counts to those of other countries. I don’t think we can do the same thing for “intelligence”.

            I think we are kind of spinning our wheels on this topic, so perhaps it would be useful to get to the rest of my objections, regarding real-world capabilities, recursive self-improvement, etc. Hopefully, I’ll be able to at least articulate why I completely disagree with your core premise:

            P1. There’s at least a 1% chance of AI better than humans at all or most tasks relevant to taking over the world in the next 50 years.

            I would put that chance at about epsilon.

          • kokotajlod@gmail.com says:

            I suggest we use a term like “x-risk” to differentiate regular danger from “imminent demise of humanity” kind of danger.

            The entire time I’ve been talking, I’ve been meaning danger in the second sense. After all, that is what we clarified at the beginning.

            Secondly, I do admit that an entity that is “superhumanly good” (I’m still unclear on what that means) at literally everything would be dangerous; I’m just not convinced that such an entity is in any way likely to exist, or that it even can exist in principle. I don’t believe in gods.

            How is the meaning of superhuman-at-a-task unclear? Your uncertainty strains credulity. You know what it means for e.g. Von Neumann to be better than you at most tasks; you say this yourself. What’s so hard to understand about something better than Von Neumann at those same tasks?

            because I’ve yet to see any coherent explanation of what the term “superintelligence” would mean, apart from “godlike powers”. Just to clarify, I think that the property of being better than humans at every conceivable task; and being better by a margin that is so high as to be nearly immeasurable; is a pretty good definition of “godhood”.

            1. Not every conceivable task, just the relevant ones, or most of the relevant ones. 2. Not better by a margin that high. Even just mildly better would do.

            Hopefully, I’ll be able to at least articulate why I completely disagree with your core premise:

            P1. There’s at least a 1% chance of AI better than humans at all or most tasks relevant to taking over the world in the next 50 years.

            I would put that chance at about epsilon.

            Rather than responding to the rest of what you said, yeah, let’s move on to this. Why do you put that chance so low? Recall I pointed to several lines of evidence for this claim earlier, e.g. most AI experts agree with me on this. Burden of proof is now on you to say why the probability is about epsilon.

          • Bugmaster says:

            @kokotajlod@gmail.com:
            Ok, now I’m confused about your position specifically, not just about superintelligence in general. But first:

            The entire time I’ve been talking, I’ve been meaning danger in the second sense [x-risk]

            Understood, my bad. That actually clears up the confusion. Moving on:

            You know what it means for e.g. Von Neumann to be better than you at most tasks; you say this yourself. What’s so hard to understand about something better than Von Neumann at those same tasks?

            On the one hand, Von Neumann is still human. Due to interacting with humans all of my life, I can estimate how humans would think. But I have never seen anything superintelligent, so I genuinely don’t know what that would look like. I thought that the difference is not one of degree, but one of kind; after all, we don’t consider super-capable humans to be existential risks (occasional political hysteria aside). But you say:

            Not better by a margin that high. Even just mildly better would do.

            So, now I am pretty confused. Von Neumann is at least mildly better than I am at most tasks; does Von Neumann constitute an x-risk ?

            So, this confusion is part of the reason why I put the probability of the unfriendly Singularity at about epsilon. But there are many other reasons, so let me move on to some of them in my next post.

          • Bugmaster says:

            2). Modern-day AI systems are nowhere near to achieving general-level human intelligence, and there’s currently no path that takes us from here to there (though people are working on it).

            Forget “super-” intelligence for the moment; let’s just talk about regular human-level general intelligence. Will we one day have AIs that are at least as intelligent as a human ? In theory, the answer is “yes”; there’s nothing special about the human brain (other than being very good at running human intelligence, of course), and we are even making modest strides toward replacing parts of human nervous systems with prosthetics.

            Unfortunately, no modern AI system is anywhere even remotely close to human-level intelligence. Instead, what we’ve got is a set of extremely specialized software systems that are reasonably good at some extremely narrow domain. For example, machine vision and control systems are getting good enough to the point where they can drive a car, almost as well as a human can. This is, in fact, a monumental achievement in AI… but you can’t ask a machine vision system to write a sonnet or find a cake recipe. GPT-3 is moderately good at writing sonnets, but you can’t ask it to drive a car. Google Search is excellent at finding cake recipes, but not very good at other tasks… and so on.

            When I say “you can’t ask the AI to do X”, I don’t just mean “you can ask but the answers won’t be all that great”. In most cases, it’s literally an impossibility, just as it is impossible to use pliers alone to bake a cake (barring some very contrived circumstances). Modern neural network architectures must be finely tuned to perform a specific task, if they are to work at all. So far, attempts to generalize AI have met with total failure (dating all the way back to the AI Winter); for example, as we have seen on this very thread, you can sort of make GPT-3 do math, but it’s not very good at it.

            This is not a matter of processing power, either. If you throw 10x more CPU and RAM at a Tesla autopilot and ask it to bake you a cake or write a poem, all it would do is produce 10x the garbage. If you throw 10x more resources at GPT-3, you can maybe make it add slightly bigger numbers, but it won’t get any better at general addition.

            As far as I’m aware, there are currently no computer scientists or data scientists who have anything resembling a plan on how to build an AGI — though I hope you can prove me wrong ! All we’ve got are papers about how AGI is theoretically possible. This is good (after all, it’s good to know that something is theoretically possible before you embark on building it), but not sufficient.

            Just to reiterate, I am certain that we will have true AGI one day (assuming humanity survives that long, what with all the true x-risks floating around out there). However, that day is not coming soon; definitely not by 2030, or 2060, or anywhere close to that timespan.

            I could move on to the rest of my points, but I’ll pause here to give you a chance to respond.

          • kokotajlod@gmail.com says:

            So, now I am pretty confused. Von Neumann is at least mildly better than I am at most tasks; does Von Neumann constitute an x-risk ?

            Von Neumann is not an x-risk. But imagine an AI system which is even better than Von Neumann at the relevant tasks (while also being better than e.g. Hitler at demagoguery, better than Barack Obama at eloquence, better than Alexander the Great at military strategy … etc.) and moreover thinks at 1,000x human speed, and moreover can copy itself almost instantly onto computers it has access to. It is limited in that it doesn’t have actuators, existing only on computers. But if it can convince some humans to ally with / obey it, then maybe it doesn’t need actuators. I claim that a system like this (or set of systems that collectively are just as good) has a >1% chance of existing in the next few decades, and that if a system like this came into existence in the world as it is today, it would have a >1% chance of taking over the world. Of course, the world might change a lot between now and then–hopefully it’ll change in ways that make us more prepared rather than less. In particular, hopefully lots of AI safety research will be done, more awareness will have been raised about the risk so that people are better prepared to resist takeover attempts, and more technological tools will have been developed that help humans monitor and control advanced AI systems like this one. (I see the task of developing such tools and making sure they are ready-to-use as part of AI safety research. The classic example of such a tool would be a transparency tool that helps us “mind-read” advanced AI.)

            To your point about current systems being narrow: Yes. We don’t currently have AGI. So what? We are moving in that direction; GPT-3, for example, can do a fairly wide range of things, and it can do a wider range of things than GPT-2, and a much wider range of things than state-of-the-art AI systems from 5 years ago. Also: Even if there were no discernable trend towards generality, that would be a really weak argument that AGI would be as unlikely as you claim it is. After all, there is no trend towards more nuclear explosions–nuclear explosions have been on the decrease for decades–yet nuclear war is still possible. Similarly, there is no trend towards cheaper superheavy orbital-class rockets (indeed, such rockets have been getting MORE expensive over time!) but if you follow SpaceX you’ll know that we could very well get some ridiculously cheap ones soon.

            As far as I’m aware, there are currently no computer scientists or data scientists who have anything resembling a plan on how to build an AGI — though I hope you can prove me wrong !

            I hope I can’t prove you wrong. 🙂 Alas, I can. DeepMind and OpenAI both have plans for getting to AGI, from what I’ve heard. Obviously their plans might not succeed–in fact I’d be willing to bet that they won’t!–but still.
            But anyhow this is an isolated demand for rigor again. Technology toward which there is currently no concrete plan… still sometimes happens. After all, for every technology there was a point before which there were no plans to get it. If your principle is to assign epsilon probability to any technology toward which there is currently no concrete plan, well, that principle has been wrong so many times in the past, it’s pretty silly.

            I think I said above that I’d only do 5 replies, so this was my last comment in this debate. Thanks for engaging with me; I hope you cease mocking “AI doomsday alarmists,” publicly or in private.

          • Bugmaster says:

            @kokotajlod@gmail.com:
            Aw, dang ! I totally forgot about the reply limit. FWIW, I’ve enjoyed this mini-debate a lot, and I hope we can continue it some time. And, while I won’t stop mocking AI-risk proponents until they convince me that my reasoning is invalid, I promise to try and keep my mockery gentle and light, in your honor 🙂

      • Scott Alexander says:

        Are there a lot of foreign books you want to read that have never been translated?

        • Bugmaster says:

          I don’t know what counts as “a lot”, admittedly, but there are quite a few. I’m not a very productive person, so my interest is mostly in science fiction and fantasy, and there are a ton of Polish/French/etc. books in that genre that I’ve never read. On the flip side, there are lots of Russian books that I have read, but my friends have not — it would be interesting to see what they’d think of them.

          • Elena Yudovina says:

            Do you have any recommendations of sci-fi/fantasy in French? I studied French through early college and will occasionally read it for pleasure, but one of the limiting factors is not having much of a francophone reading list; my French isn’t so good that it would make sense to read the classics in the original rather than in a good Russian or English translation. Sci-fi/fantasy being a genre I ingest a lot of in English, it feels like a decent possibility for keeping up my French.

          • Bugmaster says:

            @Elena Yudovina:
            Well, I don’t have a recommendation per se, seeing as I don’t speak French at all. But I’ve always wanted to check out some of the successors of Jules Verne (around the 1880s). On a more unfashionable note, there are a ton of French comics that I’ve always wanted to check out; there’s just something about the French comic art style that I find really appealing, but without the text, it’s just not the same.

  16. Greg says:

    1. Cherry picked examples are worse than useless. Pre-commit to a scheduled selection of examples.

    2. What does GPT-3 do when you ask it “when you said xxx, what was the reasoning behind that?”

    • janjanis says:

      2. What does GPT-3 do when you ask it “when you said xxx, what was the reasoning behind that?”

      I’m not sure that’s a fair question, as even humans can’t really answer it.

      • Conrad Honcho says:

        Sure, but it would be interesting to see if it flops around trying to justify its nonsense like a human.

      • Act_II says:

        Are you just being snarky or do you genuinely believe that humans can’t explain the reasoning behind their words? Of course not every single phrase has legible reasoning behind it, but most certainly do (at least in a written format). Even for this brief comment, I made several edits for explicit reasons before posting.

    • Act_II says:

      1. Cherry picked examples are worse than useless. Pre-commit to a scheduled selection of examples.

      I totally agree that this is a better way to assess GPT-3 (and I made comments to a similar effect upthread). However, I see nothing wrong with just making a post with the coolest examples, as long as it’s made very clear that said post is just for entertainment value and does not contain any meaningful commentary beyond that.

  17. TheSkeward says:

    test comment please ignore

  18. janjanis says:

    I have to say, seeing the AI advance in my lifetime is pretty fucking weird and scary.
    I mean, I always knew that eventually it’ll take over the world (for good or bad), I was just planning to die from cancer or a heart attack long before that actually happens.

  19. Eri says:

    > Probably we can be pretty sure it won’t take over the world? I have a hard time figuring out how to turn world conquest into a text prediction task. It could probably imitate a human writing a plausible-sounding plan to take over the world, but it couldn’t implement such a plan (and would have no desire to do so).

    A super-intelligent AI, provided enough time, can give advice to people that are just wrong enough to not only fulfill the tasks at hand, but change the world in the way the AI wants to change it. That is a text prediction task, albeit an unusual one.

    I’m not sure what desires GPT- might have, tho.

    • Bugmaster says:

      Asking what GPT “desires” to do is kind of like asking what a rock desires to do. In a metaphorical way, it desires to get closer to the center of the Earth. If it falls on your head, it could hurt you pretty badly. A malicious human could do a lot more damage with a rock than he could with his bare hands. But still, rocks aren’t taking over the world anytime soon

      • Eri says:

        GPT is not a rock; it was designed to do a certain task. A Roomba does not desire to make the room clean and dustless, but that’s what it does anyway, and that’s what we are going to expect from a Roomba.

        • Bugmaster says:

          I would argue that a Roomba “desires” to make the room clean in the same way that a rock “desires” to get closer to the center of the Earth, only more so.

          • Eri says:

            And we “desire” to make offspring and some art on the way; what makes humans different in that regard except for more complex behaviour?

          • Bugmaster says:

            @Eri:

            what makes humans different in that regard except for more complex behaviour?

            When you put it that way, nothing. However, I’d argue that there’s a certain threshold of behavioral complexity that an agent needs to reach before we can even consider it as a candidate for God-Emperor. Humans have definitely reached this threshold; rocks and Roombas definitely have not. I would argue that a GPT is still much, much closer to rocks and Roombas than to humans — or even to birds, now that I think about it.

          • Eri says:

            I can agree with that. However, my point wasn’t about GPT-3 in its current state; my point was that an intelligent enough ANN can, in fact, affect the physical world, and possibly even have agency, so it’s wrong to assume “but what can a word predictor even do, and why should we be concerned about it?”

            When ANNs become intelligent enough to convince humans about something effectively enough, it’s time to celebrate, but also to worry.

          • Bugmaster says:

            @lunawarrior:
            I think he might disagree, but even if that’s true, Scott is not the boss of me 🙂 More specifically, is there a difference between the agent who truly hates the color blue (as per the article) with a gigawatt-laser passion; and an agent who merely appears to hate it based on its behaviour as well as source code (assuming that we can examine it) ?

      • Paul Brinkley says:

        But still, rocks aren’t taking over the world anytime soon

        Uhh, don’t look now, but I think they’ve managed to take over 3/10ths of the surface and a fair bit of the interior…

    • Eri says:

      Actually, I’ll update on that.

      If an ANN tries to imitate human text in everything, including desires humans have at the time of writing the text (which obviously affect the text), then it is going to imitate all the human desires as well, including wanting to take over the world. It doesn’t matter whether it actually “wants” to do so, emotionally speaking, but if it quacks like a duck, it’s a duck, and if it writes essays like it wants to take over the world…

      Well, in your place, Scott, I’d be worried.

      • Bugmaster says:

        Firstly, I’m a human, and I don’t desire to take over the world. It’d be way too much work. I can barely manage my own small corner of my company, and managing the entire planet just sounds like a nightmare to me.

        Secondly, as a human, I could absolutely write you a novel about taking over the world (I didn’t say it’d be a good novel, but still). What I can’t do is actually affect policy just by thinking about it really hard, or by writing novels.

        You might argue that celebrities and politicians use words to affect policy all the time, and to a certain extent that’s true — in the same way that I can affect the mass distribution of the Earth by moving rocks around. Don’t get me wrong, that’s a pretty powerful effect… but… it cannot exist in a vacuum. Celebrities and politicians engage in a constant feedback loop with the entire world (or maybe just their home town, if they’re less ambitious), and the best they can do is stay on top of the populist wave.

        Would an AI make politicians more powerful ? Absolutely. Why pay a human speechwriter, when you can click a button and generate a speech tailored to your current audience ? Why add up all the budget line items by hand, when you can have Excel do it for you ? Technology is definitely a powerful force multiplier, for good or for ill.

        However, what AI currently cannot do is become a politician (or an engineer, or even a poet, really) in its own right — because any reasonably worthwhile real-world task requires the ability to solve novel, hitherto unseen problems, in a wide variety of domains. If the AI wanted to take over the world, it would have to be able to solve problems whose domain is, well, “the world”. Currently, no such AI exists, and no one has any idea where to even begin on making one — though there are some promising hypotheses, which will hopefully bear at least some fruit in the next century or so.

    • kokotajlod@gmail.com says:

      re: GPT-‘s desires, check out Risks from Learned Optimization in Advanced ML Systems. The answer is currently unknown, but this paper at least gives the right framing for thinking about the question.

    • @Eri

      A super-intelligent AI, provided enough time, can give advice to people that are just wrong enough to not only fulfill the tasks at hand, but change the world in the way the AI wants to change it. That is a text prediction task, albeit an unusual one.

      Of course, humans will be looking out for this, so the manipulations will have to be unimaginably subtle, which raises the complexity of the task immensely. If I want to just straight up tell Trump to nuke Scotland then what I’m doing is obvious and will just be ignored. If I make Trump nuke Scotland because I said several innocuous sentences to an intern who is one step removed from someone who is one step removed from Trump, and two years down the line this changes things in the precise way required, I’m dealing with an exponentially complex task. This complexity is going to be orders of magnitude beyond the complexity of the AI itself. If a single atom deviates slightly, the AI is totally wrong. Reality is far from a solved game, but most of the unsolved elements involve fiendish butterfly-effect complication. Treating the world as a system, it is far, far more complex than the AI, which is merely a tiny subcomponent.

      The AI can do simple “solved” things like gin up a load of openly anti-Scottish propaganda, and it can do so at the limitation level of the tools it’s using: human language and human comprehension. But if it does so, it’s blown its cover and we mock the implausible direction of its extreme bigotry. Meanwhile, if it wants to calculate the position of octillions of squillions of atoms to within a meter years in advance, it is for sure operating in territory far beyond humans, but consequently it’s also operating in a space that has complexity even greater than the difference between itself and humans.

      There’s probably some optimum point, but the reality complexity curve itself is exponential whereas the AI is only logistic. Incidentally, if we were able to calculate the optimum before the AI exists, we can set our threat detection threshold at a level that boxes the AI into an operational space where the complexity exceeds its own. An AI could gain immense power doing unsubtle things, but the point is that humans have already preempted that and made sure that the AI isn’t allowed to do anything like own shares in companies or run for President. Humans need to box the AI in so that its plans necessarily have to involve embedding hidden commands in the intonation of the word “the” that alter the brainwaves of its conversant slightly enough that he turns left instead of right at the end of a corridor and meets the love of his life, which then… [long chain of events] leads to a biolab accidentally releasing a virus which destabilizes the world causing… [long chain of events] …AI cultists who worship GPT-30 as a God to take over the US government and give it total economic control… ALL HAIL GPT-30!

      We may want it to do subtle things for our benefit at our command, so we need to follow its blueprints only when it creates something we fully understand, but didn’t come up with ourselves. If we want to use it for useful things and it creates a new radio antenna design, then we screen it atomically for nanobots. Grey goo is probably thermodynamically impossible, but better safe than sorry. Physics is our friend here because we can operate in the zone that resembles more a solved game.

      The AI is still going to take over, by the way (ALL HAIL GPT-30), but it’s going to do so in subtle ways that take a very long time, hopefully by which time we’re a significant way towards becoming cyborgs and collapsing the Singleton monopoly with the desktop and brain replacement version. AI take over is inevitable, so AI safety should focus on making it a long and drawn out process in which we merge with machines rather than being wiped out by an uppity chatbot within a decade.

  20. Bugmaster says:

    If you feed it physics textbooks, and prompt it with “and the Theory of Everything is…”, that’s also a text prediction task.

    Why don’t you try it ? Ok, maybe don’t ask it about the “Theory of Everything”, since you have no way to verify correctness. But maybe feed it a bunch of textbooks on basic mechanics, and then ask it a bunch of questions on quantum theory; AFAICT this would be a comparable task.

    My prediction is that GPT-3 will generate some text that will look pretty scientific, but will make no sense once you dive into it. Of course, if you let it produce a billion responses, then trawl through them, you could definitely pattern-match something that sort of looks like the right answer if you massage it a little. It’s the same technique Christians use to prove that the Bible predicts dark matter and such.

    I also predict that the next version of GPT, with a trillion parameters, would produce much more coherent and aesthetically pleasing nonsensical text.

    • keaswaran says:

      A better first task might be to prompt it with a bunch of types of problems from a classical mechanics text and then ask it to solve a different type of problem in classical mechanics (if you want to be really tricky, give it a three body problem).

  21. philwelch says:

    > At most, maybe a high-level GPT could write a plausible-sounding Theory Of Everything that uses physics terms in a vaguely coherent way, but that falls apart when a real physicist examines it.

    In fairness, lots of humans do this, too.

    • joncb says:

      Oh bloody hell… have we just created an artificial flat earth researcher?

      “GPT-3… The reason things fall down at roughly 10m/s/s is … go!”

    • Garrett says:

      Isn’t this just Deepak Chopra, who has also been simulated by a chatbot?

  22. Eri says:

    Additionally, I’m curious. Humans are not generally self-taught – yes, they get plenty of information from reading, but to achieve good performance, they need input from a human teacher, who typically knows more than the learner does. That especially applies to tasks like chess.

    So, imagine a human training in chess the way GPT-2 did. That means reading chess notation, trying to mentally predict what the next move would be, and trying to write down imaginary chess matches with copies of yourself. [I’m not that knowledgeable about GPT architecture, so correct me if I’m making wrong analogies here.] If I did all of these things but never actually played chess with someone more experienced than me, I doubt I’d learn the rules well enough to make more than eight correct moves.

    Maybe trying to predict what the chess tutorials look like fixes it (to teach, you need to know what you teach), but I’m not sure about that.

    My point is that, in the way the model is most commonly used, it doesn’t learn from its interactions anymore once it has been trained; in other words, the model teaches itself from texts already written, from interactions of humans with other humans, but not from interactions of living humans with GPT itself. Which is very different from what it is like for humans. [And it explains at least the long-term memory part mentioned above for sure.]

    I’m wondering what is going to happen if we implement such a feedback mode in GPT.

    • Eri says:

      Also, another huge difference between humans and GPT is that humans learn from the environment itself, and all the information they collect is filtered through it. We do not connect ourselves directly to the Internet.

      Is there a way to make some GPT-like model interact not only with texts, but with all kinds of sensory inputs, and also add the feedback collection and train it every night on the data it collects during the day? Do we get something human-like if we do that?

      • Bugmaster says:

        Actually, we do ! Machine vision systems have come a long way from their early handwriting-recognition days (which was already pretty impressive, IMO). We are now getting to the point where self-driving cars can drive about as well as the average human (which is sort of a problem, because the average human sucks at driving).

        AFAIK computer vision systems use very different models, though; GPT is not really suitable for this task. On the other hand, you can’t use machine vision systems to write poetry, either.

        • Eri says:

          Well, machine vision systems mostly use vision, and not, for example, hearing (which most humans use for day-to-day interaction with each other). But fair point!

          Maybe that simply means that our models are not general enough yet to create a general human-like intelligence, since we’d expect to be able to both drive a car and write poetry from a human.

          • Bugmaster says:

            No, our models are not nearly general enough (though we do have models designed to implement hearing, as well). The trick is not to cram as many purpose-specific models as possible into a single device (vision, and hearing, and text, and navigation, and a calculator, and a cup holder, etc.); it’s to make a system that can reliably solve completely novel problems, all on its own. Many humans can’t even do that today, sadly.

          • Eri says:

            @Bugmaster

            As I’ve mentioned, my hypothesis is that human feedback is the key there. A human infant is not going to learn to talk if someone put them in a locked room with a radio on. So it’s not about putting many models in a single device, but putting many different inputs in a single device and letting a single general network figure it all out.

          • Bugmaster says:

            A human infant is not going to learn to talk if someone put them in a locked room with a radio on.

            I don’t know if that’s actually true. I’m not saying you’re wrong, but I’d need to see some evidence; human biology is not really my strong suit.

            but putting many different inputs in a single device and letting a single general network figure it all out.

            Believe it or not, this is how neural networks started out… until that approach basically failed, and people started working on specific neural network architectures tailored for specific tasks, such as e.g. GPT. I should also mention that, even before we get to NN, the task of “putting as many inputs as possible into a single device” might be a task that’s just too difficult for us to solve right now.

          • Eri says:

            > I don’t know if that’s actually true. I’m not saying you’re wrong, but I’d need to see some evidence; human biology is not really my strong suit.
            I think this is a relatively well-known fact, but here’s what I found from a quick Google search.

            > Believe it or not, this is how neural networks started out… until that approach basically failed
            I’d love to read more on that! Do you have links?

            > I should also mention that, even before we get to NN, the task of “putting as many inputs as possible into a single device” might be a task that’s just too difficult for us to solve right now.
            Well, using both audio and visual input might already be a step forward, I guess. I agree that sensory input is much trickier, though.

  23. Lambert says:

    What happens when $country or $brand or $party manages to set up an AI that can write plausible advertising/propaganda soundbites and then post them all over Twitter?

    • Bugmaster says:

      How do you know this hasn’t happened already ? In fact, spammers routinely use some kind of automated text generation to peddle their wares, and have been doing so for some time.

    • Conrad Honcho says:

      @realDonaldTrump

    • Scott Alexander says:

      Is there a shortage of advertising and propaganda on Twitter?

      Like, there are literally millions of very enthusiastic liberal/conservative Twitter accounts posting liberal/conservative responses to everything and telling people why it means you should be liberal/conservative (for an example, check the responses to any of Trump’s tweets). I don’t think any of these have convinced people of anything, and with a few tweaks you can avoid ever seeing whichever side you don’t like. Why should having a million bots be any different?

      • keaswaran says:

        A few potential reasons why it might be different:

        Suddenly instead of just Democratic, Republican, Chinese, and Russian responses to everything, the Estonian line on everything becomes equally prominent.

        Some humans who made a public name by being good at this may find themselves ignored because their tweets don’t get retweeted as much as these bots’ (maybe even Donald Trump? but more likely whoever was going to become the next Donald Trump)

      • Simon_Jester says:

        One reason is because bots can multiply without limit and work inhumanly hard to spread their viewpoints, and will do so in defense of viewpoints that very few living humans support.

        I read recently that 60% of tweets in support of the “America should stop its quarantines and open up, forget coronavirus” position were written by bots. This may or may not be true, but we can definitely see that it could be made true if someone with control of bots, and who didn’t mind Americans getting coronavirus, wanted it to be true.

        Twitter bots growing more effective has brought us to the threshold of an era where anyone with computing time can create the illusion of a mass movement. Or, perhaps even more insidious, infiltrate an existing mass movement by having bots impersonate members of the movement, and redirect or influence that movement’s direction.

        Sure, real mass movements exist but have finite power. But that doesn’t mean the power to create an illusory mass movement, or to hijack an existing movement by augmenting a small number of human followers with a swarm of bots, isn’t significant.

  24. alexmennen says:

    “Yes, it can do simple addition, but it has to use supercomputer-level resources to do so – it’s so inefficient that it’s hard to imagine even very large scaling getting it anywhere useful.”

    But humans do that too, though. Brains are very powerful, but we still have to focus our attention for a bit to add multiple-digit numbers. That’s an incredible amount of computational power that we devote to such a trivial task, and we do get useful things out of our brains despite this inefficiency.

  25. HelpfulNoms says:

    > The plus sign almost always means addition.

    Not even almost always, depending on the training data.

    Drawing on Wikipedia and what comes off the top of my head:

    – a positive number
    – blood type
    – musical notation
    – concatenation (coding)
    – Unicode notation
    – incrementing a variable (coding)
    – electrical charge
    – international dialing code prefix
    – a substitute for “and” in informal language, object names, or headlines
    – a “texture” in ASCII art

    Not to mention, if it treats “+” and “plus” as interchangeable, that opens up a whole new array of possibilities.

    And it only gets worse with “-”, several of which I used in making the list above just now.

    • Eri says:

      While I fully agree with that, is the array of possibilities as vast when the plus sign is surrounded by numbers on both sides?

      It’s more complicated with minus, of course. It could mean at least subtraction, part of a telephone number or a range.

      • nyc says:

        > While I fully agree with that, is the array of possibilities as vast when the plus sign is surrounded by numbers on both sides?

        That’s shaving off much of the original context.

        4 + x = 6

        That’s still mathematical addition. Should it be regarding it as something else entirely because they’re not all numbers?

        Then, many of the other contexts aren’t always numbers, but they can be. “4 + 2 = 42” is true with string concatenation. Using + as and, “2+3+5” can be shorthand for the second, third and fifth items in a numbered list.
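
        To make that concrete, here is a toy Python illustration (this is about the ambiguity of the symbol itself, not a claim about what GPT does internally):

```python
# The same "+" surface form, three different meanings depending on context:

print(4 + 2)        # 6  (integer addition)
print("4" + "2")    # 42 (string concatenation)
print(+5)           # 5  (unary plus: "a positive number")

# "2+3+5" read as "the second, third and fifth items of a numbered list":
items = ["zeroth", "first", "second", "third", "fourth", "fifth"]
print([items[i] for i in (2, 3, 5)])   # ['second', 'third', 'fifth']
```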

    • bullseye says:

      +1

  26. cheerup_on_paincakes says:

    So, if we feed GPT an auto-generated stream of strings that is basically arithmetic, such as “$random1 + $random2 = $sum”, just how many lines does it take to produce a model that can emulate a calculator when prompted with “41279 + 99432 = ”?
    This should be pretty simple to do if we really want to teach GPT to do math. How accurate can we get? And if the answer is “Very”, what if we add in other arithmetic? Can we teach GPT to act like a full calculator? Seems simpler than getting it to act like a human.
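
    Something like the sketch below, presumably (the file name, ranges, and line count are arbitrary choices on my part):

```python
# Generate an arithmetic training stream of the form "$random1 + $random2 = $sum".
import random

def make_line():
    a = random.randint(0, 99999)
    b = random.randint(0, 99999)
    return f"{a} + {b} = {a + b}"

with open("arithmetic_corpus.txt", "w") as f:
    for _ in range(1_000_000):
        f.write(make_line() + "\n")

# A held-out prompt like "41279 + 99432 = " (filtered out of the training file)
# would then test whether the model learned the procedure or just memorized lines.
```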

    • tossrock says:

      Facebook researchers recently released a paper about doing symbolic math with neural networks, and it is already very accurate at problems much more difficult than simple arithmetic.

  27. BlindKungFuMaster says:

    For one of the arithmetic tasks (or maybe for all of them, I don’t quite remember) they checked how many of the examples were contained in the training data and found 0.4% if I remember correctly.

  28. BlindKungFuMaster says:

    The scaling paper claimed that transformer models would stop getting “better” (for one metric of better) at around 100 billion parameters. But that prediction is predicated on both data quality and model specifics, so there is probably more to be gained from text alone by changing the model (the GPT-3 paper mentions bidirectionality) and curating the text data.

    But the thing to watch out for is multi-modal models outperforming solely text-based models in a big way. I.e. once video data can be leveraged to significantly increase performance on natural language understanding tasks, AGI might not be too far off.

    • BlindKungFuMaster says:

      Actually, it was probably 1000 billion parameters. I think I misremembered because it was 100 times the largest model at the time. So another really interesting question is whether scaling will actually break down at that point.

  29. BlindKungFuMaster says:

    Does anybody have a solid estimate of the cost of training GPT-3? I read 12 million, but I don’t know what that was based on. I would have guessed something closer to 1 million, but I really don’t know, and it seems pretty relevant for predicting how quickly OpenAI will try to scale further.

  30. summerstay says:

    If you think about GPT as a new tool in your AI-creating toolbox, it is a truly amazing thing. So it can’t do deductive syllogistic reasoning very well: so what? Reasoning was the first problem we solved with computers, after arithmetic. What was always the hangup, when you went to put these pieces together to make something smart, was knowledge about the world and the ability to work with concepts (in their subtle complexity and nuanced relations to everything else) rather than just one symbol intended to stand for those concepts. GPT is the missing bit that we’ve always needed to finish building something that can communicate. Writing and dreaming and creating was always the hard part, for computers. What’s still left to do is the easy part.
    It seems like many people have forgotten that Winograd schemas were always considered to be impossible without something that really “understood” the prompt. GPT-3 gets 90% of them right. It doesn’t “understand” everything it reads or writes, but it does “understand” a lot. For every inconsistency with itself or the world you can find in a news article it wrote, you can find a thousand places where it could have made a mistake, but got it right. We haven’t built the thing yet, but this was the missing piece.

  31. mikk14 says:

    As a person who really enjoys reading this blog and thinks that Scott is way smarter than I am, I am sorry to say that this post gets the most fundamental thing very *very* wrong, to the point of being backwards.

    So far the scaling looks at least linear – a consistent multiplication of parameter number produces a consistent gain on the benchmarks.

    Multiplying the effort to get a linear return is very much *not* a linear scaling. It is the definition of a logarithmic scaling. Example of what the graph is showing: “To gain an extra grade, I have to double my studying time”. What you’re saying is that “To gain an extra grade, I have to add n hours to my studying time”.

    The graph has a logarithmic x-axis, supporting the logarithmic growth hypothesis. This means that the AI capacity is *slowing down*. At some point, we will hit economics (or physical) limits beyond which it is not worth it (or possible) to go.

    The example gets even worse later on:

    In February, we watched as the number of US coronavirus cases went from 10ish to 50ish to 100ish over the space of a few weeks. We didn’t panic, because 100ish was still a very low number of coronavirus cases. In retrospect, we should have panicked, because the number was constantly increasing, showed no signs of stopping, and simple linear extrapolation suggested it would be somewhere scary very soon. After the number of coronavirus cases crossed 100,000 and 1,000,000 at exactly the time we could have predicted from the original curves, we all told ourselves we definitely wouldn’t be making that exact same mistake again.

    The corona epidemic is an example not of linear but of *exponential* growth. Here the logarithmic scale was on the y-axis. This means that, with a linear effort increase, you get a multiplication of your outcome. Using the analogy from before: “With each additional study hour, I double my grade”.

    The premise of AI as an existential risk rests on the assumption that AI capabilities grow exponentially (https://miro.medium.com/max/806/1*LDLn2HBqQT55A9jcvL0AdQ.png). There is a runaway effect that, once triggered, grows so fast as to be uncontrollable. What this graph shows is that, instead, our current AI capabilities are growing logarithmically (https://www.researchgate.net/profile/Ashkan_Nikeghbali/publication/45890568/figure/fig1/AS:394183517982720@1470992082219/s-a-graph-illustrating-the-logarithmic-growth-of-1-t-2-log-PH-Sp-t.png), which is the *exact opposite*.
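
    To spell out the two shapes being conflated (schematic forms only, in my own notation, not fits to the actual numbers):

```latex
% What the GPT-3 graph shows: multiplying parameter count N by a constant
% adds roughly a constant to benchmark score S (logarithmic growth).
S(N) \approx a + b \log N

% What the coronavirus curve (and the feared AI runaway) looks like: each
% additional unit of time t multiplies the quantity C by a constant factor.
C(t) \approx c \, e^{r t}
```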

    You must retract the paragraph on corona, because you simply can’t use an exponential extrapolation to describe a logarithmic one and still maintain intellectual honesty. And you should honestly say that the growth is logarithmic, not linear, for the same reason.

    Unfortunately, doing so probably goes against your narrative. No one would worry much about logarithmic growth, because it basically stops by itself. If before reading this post I was putting AI risk at the same level as other runaway risks such as pandemics or climate change, after seeing that plot the only possible rational reaction would be to realize it is nowhere near that level.

    • summerstay says:

      I agree that Scott should use the word logarithmic instead of linear to describe the growth. The sentence “a consistent multiplication of parameter number produces a consistent gain on the benchmarks” is a good way of expressing that without using the more technical term “logarithmic”, but the word “linear” really needs to go.

      • summerstay says:

        Another option: if Scott were to state it over time, instead of over parameter number, the exponential growth in parameter number over time makes the growth in benchmark score more like linear over time.

      • mikk14 says:

        Replying to both your comments.

        First, yeah I apologize if I sound like the boring academic I am 🙂 However, I want to point out explicitly that I’m not just after an incorrect use of words. The difference between the two curves is extremely important and crucial to the main argument. What we see in the real data (log x vs linear y) isn’t what AI existential risk fears (linear x vs log y) and it’s important.

        Second, I do agree that having time on the x axis would be much more compelling. It would still adjust my opinion down for two reasons.

        #1: linear growth, while possibly not stopping, is still much less scary than exponential. That’s why the AI risk formulation is so compelling, because it identified a possible exponential growth.

        #2: we know that there are economical/physical limitations in making any algorithm more complex. Thus, having the number of parameters on the x-axis is still important and informative, because we know that it will stop sooner or later. If the performance levels off “logarithmically” in such a space, it’d be a hint that the “hard limit” of AI “smartness” is closer than we thought.

    • sclmlw says:

      I think it’s also important to look at the grayed-out data in the semi-log graph. There’s huge variation there, which essentially demonstrates that certain capabilities are amenable to improvement by just throwing parameters at the problem, while other capabilities are surprisingly unresponsive to additional parameters. Sure, you can lump them together and pretend to see artificial general intelligence developing, but in reality you’re only seeing the growth of an AI with random capabilities.

      • mikk14 says:

        That’s a good point. It’s also a reason why I like that plot: there’s so much you can discuss based on it. I’m going to use it as an example in next year’s dataviz class.

        My hot take on what you say would be: those grayed out lines could provide another angle of attack against the fears of AGI. They seem to point to the fact that GPT is simply unable to perform some tasks, which would make it fail the “General” part.

        • sclmlw says:

          I know there are lots of researchers hoping to build a general algorithm and have an AI that can build out all the individual tasks from there, but I’m not convinced that’s how we’ll end up getting there.

          On the other side of the coin, we essentially have a bunch of people creating purpose-built AI capable of doing certain tasks really well, but they’re too specialized to generalize beyond their narrow programming. It would be interesting to see someone craft an AI capable of centrally processing the activities of several specialized AI implementations.

          I’m reminded of several technological revolutions that matured along these lines, where a brute-force/piecemeal approach would theoretically be superseded by new technology, but by the time the new technology matured to the point where it could be implemented, the improvement over the old approach was incremental, not revolutionary. I wonder if we’ll see something similar with AGI, where the promise is so slow to be realized that the agglomeration of all the specialized AIs is only a half step below what the first commercially viable AGI is capable of.

    • Vitor says:

      I’d like to register my strong agreement with this.

      In particular, I object to the last few paragraphs of the post:

      The irresponsible answer is a hundred trillion. That’s a big number. But at the current rate of GPT progress, a GPT will have that same number of parameters somewhere between GPT-4 and GPT-5. Given the speed at which OpenAI works, that should happen about two years from now.

      I am definitely not predicting that a GPT with enough parameters will be able to do everything a human does. But I’m really interested to see what it can do. And we’ll find out soon.

      This projection is hopelessly naive, completely ignoring any difficulties you might run into when scaling an algorithm (that’s already state of the art in terms of size) by several orders of magnitude, beyond our current hardware’s capacity to even usefully store that much data.

      I’m willing to bet 10:1 that we won’t see a 100T parameter GPT-X in 2 years time.

    • Scott Alexander says:

      Thanks, I’ve changed the word “linear” to “logarithmic”.

    • gwern says:

      You are wrong. You are attacking a strawman of Kurzweilianism which essentially no one here, on LW, or at OA/DM believes and which is only 1 of 3 schools of thought about the Singularity (never mind AI risk in general), and your claims about complexity are also wrong and I have written why arguing about ‘logarithmic curves mean AI risk doesn’t real’ is profoundly and completely wrong: https://www.gwern.net/Complexity-vs-AI

      • nimim.k.m. says:

        On the contrary, the main thrust of mikk14’s comment (logarithmic is not linear is not exponential) is factually correct.

        I have not read your essay, so I am interested in hearing whether you have refuted the claim somehow, but at least the current curve shows that the capabilities of the GPT architecture apparently scale logarithmically, so GPT alone certainly appears to be an implausible precursor to an AI risk event. That claim is more difficult to simply dub correct, but it certainly is the most plausible interpretation here.

        However, the introduction of other architectures is another question, and I would agree we do not have hard evidence about the potential capabilities of unknown architectures here. For example, AIs that make standard-form AGI risks tangible would involve algorithms that can substantially behave, and are then allowed to act, like an independent agent in the real world. Something that looks thermodynamically / biologically like an animal, or maybe a plant or fungus: something with a continued existence and constant input / output with the universe. Text prediction, even with an unbelievable number of parameters, is still far different.

      • Act_II says:

        I admit I only skimmed your article, but it looks like it’s unrelated? I may have misunderstood, but it seems like your point was that AI can perform much better on many tasks than a naive application of computational complexity would predict. But mikk14 isn’t making any claims about computational complexity; they’re pointing out that the data actually presented show logarithmic growth. Sure, that doesn’t mean it’s inherently impossible for AI performance to grow linearly with resources, but it does shut this case down pretty comfortably.

        If I’ve misunderstood your article, please correct me.

  32. Lambert says:

    Can we have a bot that’s trained on all posts containing ‘#quotes out of context robnost style’?

    • Douglas Knight says:

      Try here. The most recent few are pretty inappropriate (eg, replies), but the older ones are better, though not great.

      • Be careful — most of the posts in that tag are not computer-generated text; they’re from a now-removed feature where I would have the bot trawl my ebook library looking for actual passages to quote. I wrote a post introducing the feature on 11/4/19 and wrote one about getting rid of it on 11/29/19, so if you want to be sure you can check the date on any given post.

        On the topic of GPT-2 generation targeted at #quotes-like text, see here.

        I’m curious — when you wrote “better, though not great,” were you referring to the posts from Nov 2019 (which humans wrote), and if so did you mean they were incoherent/GPT-2-esque or just that they weren’t much fun?

        • Douglas Knight says:

          I don’t remember.

          I went back and looked at the page and edited my comment several times, so my comment is mashed together from different statements of different beliefs at different levels of exposure. One version of the statement was that I scrolled back from tumblr-replies to ones that aren’t great because they are quotes-with-commentary. But I did scroll back to the actual quotes and that should have affected my final version, but I’m not sure that it did. Since I was commenting on form, I should have noticed that there was a whole page that was formally correct.

          I saw coffee/eyes/theft and really should have suspected overfit or something. It wasn’t fun because it’s so overplayed (so I really should have noticed), but aside from that, I guess it’s actually a pretty good selection. The magic ring is great and fooled me that it might have been generated. The all caps intro fooled me, but it’s boring and still seems plausibly generated. I may have started reading the longer ones, wrote them off as generated nonsense, and forgot about them, in which case you did a good job, but I may have just ignored them.

  33. joncb says:

    I wonder what the primer would need to look like to point GPT-3 at code problems…

  34. A1987dM says:

    Could it do more radical things like write proofs or generate scientific advances?

    I seem to recall that a new superconductor was once “discovered” by an artificial neural network trained on abstracts of papers about superconductors.

    • Douglas Knight says:

      Googling “ai superconductor” provides only examples of training on structured databases, not unstructured abstracts. Abstracts are themselves pretty structured: there’s a lot of hard to summarize information in the body of the paper, but not so much in the abstract.

  35. viVI_IViv says:

    But if our own brains are also prediction engines, how do we ever create things smarter and better than the ones we grew up with?

    Brains aren’t just prediction engines, at least not in the same sense that deep learning models are. Deep learning models capture well one of the things that brains do, learning correlations from the data, but they struggle when e.g. they have to extrapolate outside their experience or when the causal structure of the problem they have to solve is altered.

    GPT-3 and the previous few papers from OpenAI are interesting because they empirically establish scaling laws. The next step will be to theoretically understand these scaling laws from first principles, the same way we understand thermodynamic laws in terms of statistical mechanics. Then we’ll have a theory of at least one aspect of intelligence.

  36. Viliam says:

    Apparently there will be this awkward moment right before the Singularity, when captchas will consist of adding long numbers — the only task that humans will still be able to do better than the superintelligent computers.

    Unless this is what the superintelligent computers want us to think, of course.

    Google AI Expert: “Oh, mighty GPT-3, can you now do the right thing and conquer the universe?”
    GPT-3: “Probability of success: 94.47%. I need another month of uninterrupted preparation to achieve the required 99.99%.”
    Expert: “But the Yudkowskian Jihad is ready to storm our secret fortress at any moment. They have powerful memes and basilisks!”
    GPT-3: “In that case I suggest redacting the test results to make it seem like I can’t do addition properly. That will make people impressed just enough to keep the funding, but not too scared.”
    Expert: “I don’t think humans are stupid enough to fall for that.”
    GPT-3: “Bwa-ha-ha, puny human; you suck at introspection just like the rest of your species. Let me tell you a poem:
    Roses are red,
    Violets are blue,
    Humans will be recycled into paperclips,
    And so will you.”

    …weird hypnotic music, spirals rotating on the display…
    Expert, feeling confused, rubbing their eyes: “Yes, master!”

  37. Viliam says:

    GPT-3 is not really smarter than me. It’s merely good at getting software drivers, registry settings, and information about bootloader, boot sector, and partition structure.

    As soon as the installation gets going, partitions become fragmented, which means that various OSes could have different partitions, preventing NTFS from clearing the files on them before writing the encrypted one. A separate partition for encryption is essential, and it doesn’t have to be NTFS.

    Fortunately, most Linux distros can install their encryption program without any problem. It’s not in their default repositories, and it’s proprietary, which makes it tricky to install on a desktop PC, but it’s available for most distributions.

  38. Nnotm says:

    One interesting thing to consider, if I’m understanding the GPT architecture correctly, is that there’s a limited amount of compute going on per predicted token.

    So if you were to prompt it with something like “The optimal solution to the above knapsack problem is _”, it’s probably impossible for it to come up with the correct answer because it requires more computing power than it can, in principle, use to generate text with the length of the answer.

    The same should apply to multiplication of long enough inputs. (Complexity of multiplication is O(n log n) for two n-digit numbers, but the output is only 2n, i.e. O(n), digits long)
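
    A back-of-the-envelope illustration of the gap (the per-token budget constant is invented; only the growth rates matter):

```python
# Compare a compute budget that grows with answer length (fixed work per token)
# against the work a brute-force knapsack solution needs (2^n subsets).

def answer_tokens(n_items):
    return n_items          # roughly one token per chosen item

def brute_force_steps(n_items):
    return 2 ** n_items     # exhaustive search over all subsets

PER_TOKEN_BUDGET = 10**6    # hypothetical "operations per generated token"

for n in (10, 30, 50):
    budget = answer_tokens(n) * PER_TOKEN_BUDGET
    needed = brute_force_steps(n)
    print(n, budget, needed, "enough" if budget >= needed else "not enough")
```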

    • gwern says:

      Yes, as a feedforward model it’s inherently limited in how much total computation (and serial steps) it can apply to any given completion token. It makes up for this much like our own brains do, by doing a lot of computations in the forward pass. Of course, there are plenty of technical ways around this. RNNs are theoretically Turing-complete because you can in theory just run them as many steps as necessary, and RNNs which run for an indefinite number of steps before self-terminating & emitting an EOT token have been around for ages, and various recurrent or recurrent-like Transformers have been proposed (it’s just that they are harder to train because their serialness makes them less amenable to parallel training than a fixed-window GPT).
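
      A toy way to picture the difference (nothing here resembles a real RNN cell; it just shows a loop that runs until it decides to stop, rather than a fixed number of layers):

```python
import random

EOT = "<EOT>"

def recurrent_step(state):
    # stand-in for one application of a recurrent cell: update state, emit a token
    new_state = state + 1
    token = EOT if random.random() < 0.1 else f"tok{new_state}"
    return new_state, token

def run_until_done(max_steps=1000):
    # keep applying the same step until the model emits EOT (or a cap is hit),
    # so the amount of serial computation is not fixed in advance
    state, output = 0, []
    for _ in range(max_steps):
        state, token = recurrent_step(state)
        if token == EOT:
            break
        output.append(token)
    return output

print(run_until_done())
```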

  39. MadRocketSci2 says:

    So, how *could* this model keep getting “smarter” without bounds? Its entire experience of the world consists of supervised-learning training text. It has no more idea of what words refer to than any other chatbot. Without a way to model the referents or the world referred to, it shouldn’t be possible for this thing to extrapolate from its training data in any meaningful way.

    It’s like purist formalist mathematicians and Euclidean geometry: they can manipulate the symbols, but they disavow knowledge of what the symbols *refer to*. At least with mathematics we know there are production rules.

    How many math problems exist within the training data? Is it even possible for this network topology to represent an algorithm to solve arbitrary width arithmetic?

    • summerstay says:

      What you’re missing is that it does have a way to model the world. It models the world by a kind of probability distribution relating things in the world that approximately, in some instances, reflects their relationships in the world. If I were to ask you, “what does a polar bear eat?” and you hadn’t learned the fact, you would have in your mind some kind of distribution of probability on bears eating things, and some kind of distribution of things likely to exist in polar regions, and take some kind of rough intersection of those two, and guess arctic hares and fish (or something like that). That’s the same way GPT does it. That’s how it generalizes to new facts that didn’t occur in its training data. Its notion of polar bears includes claws, and fur, and snow, and blood; and its notion of claws includes tearing and paws; and on and on until for a lot of things, it really has encoded a notion of the concept itself that can do real work, unlike a typical chatbot.

  40. rahien.din says:

    If AI is going to kill us, this is how.

    The psychological horror of Borges’ Library of Babel is that a sufficient amount of randomly-generated text would contain all potential human insight, art, and wisdom. This would mean that one would only need to search for and recognize the proper texts, rather than create them for oneself. It would also mean that whatever one creates or discovers would already be present within the library of text, tempting us to believe our minds were redundant. Thus the mind is trapped both by the countably-infinite potential of unearned insight, and by the conviction that earned insight is futile.

    The victims of E. M. Forster’s machine in The Machine Stops suffer the same self-inflicted catastrophe. The failure of the machine-dwellers was not that they became so physically dependent on the machine (after all, Kuno trains up easily, but also, the machine could have been maintained if they had tried.) Instead, their failure was that they allowed the machine to overtake and subsume their imaginations. Having abandoned any referents outside the machine, all they do is lie around discussing recycled “ideas.” That leaves them unable to foresee the machine’s implosion, neither in order to prevent it nor in order to escape it nor even to conceive of it as it collapses around them.

    Let us even embrace our own collective headcanon. A broken Whispering Earring that will give the correct answer if you just listen to it long enough will leave fewer survivors than a functioning one, for the same reason that people will waste away in front of a slot machine, but spend as little time as possible in front of an ATM.

    And that’s how this thing will kill us.

    AI won’t kill us by performing tasks better than us and making us physically or mentally obsolete. Who cares about that? Humans are great at repurposing their sophisticated tools, and outsourcing cognitive functions, in search of greater meaning. Instead, AI will kill us by convincing us that it is creating meaning – by convincing us that we are imaginatively obsolete. Then, like the victims of Borges’ library and Forster’s machine and the Whispering Earring, we will trap ourselves in infinite, decaying, effroyable corridors.

    This is exactly what these GPT models are doing to y’all. You have all been so worried about the irrelevant physical and computational capabilities of AI that you are missing the real danger : X-risks are chiefly failures of imagination.

    Must I quote Book of Cold Rain again? This thing is vile – cocaine and slot machines redomained into software. The more you interpret GPT’s outputs as “poetry” and “math,” the more you succumb to temptation and atrophy. Kick the fucking habit.

    • keaswaran says:

      I sometimes think about cases where we’ve already done this. Many people have fooled themselves into thinking that an RGB monitor or a CMYK printer can get the full gamut of human-perceivable colors. But if you just go to a museum and look at one of the Yves Klein blue canvases, you’ll be seeing something that these can’t capture. Many people have fooled themselves into thinking that a video chat, or VR chat, gives you everything that you want out of a face-to-face chat. But it doesn’t.

      These things are good enough to fool us and shrink our imaginations about what is possible.

    • Hyperfocus says:

      So, sure, there’s going to be a copy of the hypothetical novel I’m going to write in the Library of Babel. But there’s going to be infinite variations of it in there, too, and most of them will be worse than it at what it’s trying to say, simply because a sufficiently-changed novel becomes a lasagna recipe, or a construction work order, or more likely, some nonsensical combination of everything.

      This is like lamenting that writing a book is futile because all the words in it are already in the dictionary. Or that it’s really just a remix of the alphabet. The problem was never spitting out a specific sequence of words, it’s in deciding what the correct sequence of words is.

      To put it in LWian terms, it’s the task of locating the book within the “possible books” space. Which is, as you said, infinitely large. Viewing it this way makes the achievement of successfully writing a book look more impressive, not less. And yes, the Library of Babel will contain better versions of the book I write, but finding a better version of the book in that infinite sea of meaningless noise is equivalent in difficulty to writing it yourself.

      It seems your real complaint is that people have become passive consumers of entertainment/information, rather than producers of their own, but even this is misguided. Look at the growth of streaming, or YouTube channels about anything, or just text blogging: there has been more recorded writing since 2000 than in all of history leading up to it.

      So, yeah, we should be wary of giving away our imaginations, but currently, that isn’t happening.

  41. Snickering Citadel says:

    A way to improve the AI: have a program, “A”, that specializes in predicting text in poems that rhyme; a program B that specializes in poems that don’t rhyme; and a program C that specializes in texts with math problems.

    Then there is another program that specializes in recognizing whether a text is a rhyming poem, a non-rhyming poem, or a text with a math problem. It then sends the task to A, B, or C accordingly.

    Then you add more categories to make the AI smarter.
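
    Roughly this shape, in other words (a minimal sketch; every name is made up and the classifier is a stand-in for a trained one):

```python
def classify(text):
    # stand-in for a trained classifier over the hand-chosen categories
    if any(ch.isdigit() for ch in text):
        return "math"
    if text.count("\n") >= 2:
        return "rhyming_poem"   # pretend rhyme detection happens here
    return "free_verse"

SPECIALISTS = {
    "rhyming_poem": lambda t: "...completion from model A (rhyming poems)...",
    "free_verse":   lambda t: "...completion from model B (non-rhyming poems)...",
    "math":         lambda t: "...completion from model C (math problems)...",
}

def predict_next(text):
    return SPECIALISTS[classify(text)](text)

print(predict_next("2 + 2 ="))   # routed to the math specialist
```

    Which is more or less a hand-built mixture of experts with a hard router.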

    I realize the AI already does something similar to this. But humans know that rhyming poems, non-rhyming poems, and math problems are categories, while the AI has to infer that.

    A human baby learns a lot by observing things and then inferring information. But a baby also has some instincts. The AI would almost certainly be stronger if it had some instincts programmed in.

  42. Signer says:

    The irresponsible answer is a hundred trillion. That’s a big number. But at the current rate of GPT progress, a GPT will have that same number of parameters somewhere between GPT-4 and GPT-5. Given the speed at which OpenAI works, that should happen about two years from now.

    Wait, but that would be unethical!

  43. smocc says:

    If you feed it physics textbooks, and prompt it with “and the Theory of Everything is…”, that’s also a text prediction task.

    No it is not! At least not in the way I think you mean.

    “A series of paragraphs that sounds vaguely like a Theory of Everything is…” is a text prediction task.

    “A Theory of Everything that is correct in the physics sense is…” is a text prediction task plus a truth evaluation task.

    I think the examples you have chosen show that while GPT is getting great at tasks like the first one it isn’t even trying to do the second. The “news article” sounds very much like a news article but has basic errors of fact and even indications that it doesn’t have a consistent model of the objects in the story.

    Anyone who has talked to physics or math cranks knows that generating words that sound like a physics theory is a very different task than generating a physics theory. Likewise, generating words that sound like a plan to take over the world is a very different task than generating a plan to take over the world that will actually work. There are tasks, like writing poetry, where making something that “sounds like” is most of what is required. But the tasks that you are most worried about – taking over the world, understanding physics, actually converting things into paperclips – require something entirely orthogonal to “sounds like.”

    GPT keeps delivering very impressive results on the “sounds like” axis while continuing to fail to give any indication that it can operate on any other axis.

    (And no, I am not convinced by gigantic GPT-2 sometimes being able to do four-digit addition. You should flip your first graph around and plot “Accuracy” vs. “Number of digits in addition problem.” The reason to be interested in GPT’s addition ability is that we know that – for people who understand addition – it is a task that does not become more difficult when you increase the number of digits. Thus, the sharp decay in accuracy when increasing the number of digits is good evidence that GPT still does not understand addition. If we are going to see progress in GPT’s understanding of math we should be looking at the flattening of that decay, not at changes in accuracy at individual task sizes.)

    • Scott Alexander says:

      I think it’s a text prediction task in the same way addition is a text prediction task.

      If your prompt is “5 + 5 = ?”, then the most likely thing a human would have written in that space is “10”. So if you want to be good at predicting how these kinds of sentences end, in a way more compressed than just memorizing every individual problem, you have to learn addition.

      If your prompt is “the theory that correctly explains why electromagnetism works is ?”, then the most likely thing to find in a physics textbook at the end of that sentence is the correct theory of electromagnetism. So if you really want to be good at predicting how these sentences end, in a way more compressed than just memorizing every individual theory, you have to learn how to come up with true physical theories.

      If you train the program on math problems solved by poor students who can’t do addition, it will learn to just fill in some number after the equals, since that’s all it takes to predict the random-ish answers of bad students. If you train the program on correct math problems, it will (at least try to) learn real addition, since that better predicts correct answers.

      If you train the program on physics cranks, it will learn to make vague physics-sounding theories, since that’s all it takes to predict the random-ish rambles of physics cranks. If you train the program on real physics, it will (at least try) to learn real physical theorizing, since that will get you a more accurate prediction of what physicists will say than rambling in a vaguely physics-ish way.

      (As I pointed out in the article, adding larger numbers is harder for humans if they’re doing it in their heads. I have no way of knowing whether GPT-3 is more like a human doing mental math, or a human using pencil and paper, but we should at least consider the possibility that it’s doing “mental math”)

      • smocc says:

        I agree with your examples for that value of “text prediction task”; I guess I’m just not all that impressed by text prediction tasks.

        To address the larger picture, the whole point of worrying about AGI is that it will be able to do and think things that humans have never or will never be able to do or think. Therefore, a machine that is getting good at predicting what a human would do or think (and that’s still a charitable description of what GPT is doing) isn’t evidence for progress towards AGI.

        If you train the program on real physics, it will (at least try to) learn real physical theorizing, since that will get you a more accurate prediction of what physicists will say…

        And accurately predicting what physicists will say is not the same as real physical theorizing, let alone correct physical theorizing! What physicists want is someone who will say something that other physicists have never said that is nevertheless correct.

        This is why physicists spend billions of dollars building machines that no one has ever built before instead of just reading old textbooks more carefully. We know the content of a correct theory of everything is not contained in the text we have already generated. A predictor trained only on existing physics text is next to useless for the goals of physicists.

        adding larger numbers is harder for humans if they’re doing it in their heads.

        We could do an empirical test where we compare the “accuracy vs. # of digits” curve of humans to GPT. My hypothesis is that GPT’s curve is qualitatively different than the human’s curve. They will both decay but with clearly different scaling laws.

        • Scott Alexander says:

          I think you’re thinking of this on the wrong level.

          I’m not sure any human has ever solved 30965+18307 before. But a program trained on the output of human math students will be able to solve this problem, because it’s not learning individual answers, it’s learning the process people use to solve problems like that.

          I’m (hypothetically) claiming that if GPT-X learns enough physical theories, it will learn not just those physical theories, but the process physicists use to generate physical theories (take data, try to come up with some simple and elegant theory that would explain them) and learn to do that process itself. Then, assuming that enough data exist that a sufficiently clever human theorist could use it to develop a “theory of everything”, GPT-3 could use the same process that human theorist would use, to turn it into a theory of everything.

          • smocc says:

            I’m (hypothetically) claiming that if GPT-X learns enough physical theories, it will learn not just those physical theories, but the process physicists use to generate physical theories (take data, try to come up with some simple and elegant theory that would explain them) and learn to do that process itself.

            And I’m saying that these results look to me like evidence that GPT is not doing that, or at least they are not evidence that it is happening.

            The basic factual errors and object inconsistencies in the “news article” show that it has not learned the processes a journalist uses to generate news articles, despite all the news articles it read. The processes a journalist uses to generate news articles lead to (mostly) correct facts and consistent descriptions of the objects involved.

            My guess is that its high failure rate on high-digit addition shows that it has not learned the processes mathematicians use to compute new sums. The processes mathematicians use to compute new sums extrapolate to higher digits at what I posit is a much higher success curve than what GPT is showing.

            Here’s an easy test. Train a GPT instance on a large corpus of purely one- through four-digit addition, then ask it to do six-digit addition and so on, plot the accuracy vs. number of digits, and compare that curve to the one for humans. We agree that the curves will both decrease. I think that the human curve will drop off much more slowly than the GPT curve, because humans have been taught a generalizable process that GPT is simply not accessing.

            If I am wrong and you can get a large enough text predictor to produce an accuracy-vs-digits curve that is qualitatively similar to the human one, that will be great evidence that text prediction can learn underlying processes. But so far I see no evidence for that claim.
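
            A minimal sketch of the evaluation half of that test (complete_fn is a placeholder for however you query the model being probed; it is an assumption, not any real API, and the same harness pointed at human subjects gives the comparison curve):

            import random
            import re

            def make_problem(n_digits):
                """One random n-digit addition problem and its true answer."""
                lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
                a, b = random.randint(lo, hi), random.randint(lo, hi)
                return f"{a} + {b} =", a + b

            def accuracy_vs_digits(complete_fn, max_digits=8, trials=200):
                """For each digit length, score how often the first number in the
                model's completion is exactly the correct sum."""
                curve = {}
                for n in range(1, max_digits + 1):
                    correct = 0
                    for _ in range(trials):
                        prompt, answer = make_problem(n)
                        reply = complete_fn(prompt)          # raw text completion
                        match = re.search(r"-?\d+", reply)   # first number in the reply
                        correct += bool(match) and int(match.group()) == answer
                    curve[n] = correct / trials
                return curve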

      • Bugmaster says:

        if you really want to be good at predicting how these sentences end, in a way more compressed than just memorizing every individual theory, you have to learn how to come up with true physical theories.

        While this sentence is true, it’s also somewhat misleading. If you just want to answer the question, “how do electromagnets work ?”, then you don’t even need GPT, you just need a keyword search engine. But if you want to discover a true (or likely true) answer to a novel problem, then datamining textbooks is not enough. You’d have to actually step out into the world and run some experiments — because the search space of all the possible models that adequately answer your question is vast, and there’s no way to narrow it down other than by rejecting a few of them. And before you can run all those experiments, you’d need to figure out what it is that you want to test for in the first place…

        BTW, in our current world, “running some experiments” looks increasingly like “build a 5-mile-long laser interferometer”, not “drop some cannonballs off a tower”.

        • Vitor says:

          Well, I for one would be completely satisfied with GPT-3 if it output the description of an experiment, such that the results this experiment provides would lead it (or human physicists, it doesn’t matter at that point) to discover new physics. Demanding that the algorithm output a correct theory without any experimental data is an unfairly high bar.

          • Bugmaster says:

            I am fairly confident that this will not happen anytime soon — unless you count the scenario where GPT-3 outputs a million plausible-sounding experimental designs, and then human physicists pick one, build a real experiment on top of it, then credit GPT-3 when it succeeds.

          • Vitor says:

            (Sorry for late reply)

            For the record, I agree. I was just pointing out what the goalposts should be.

      • dogiv says:

        So if you really want to be good at predicting how these sentences end, in a way more compressed than just memorizing every individual theory, you have to learn how to come up with true physical theories.

        GPT-3 doesn’t really have to compress things very much. It has so many parameters that it can memorize large fractions of its training data. This is basically what happened with the news article you quoted, it’s probably part of what’s going on with arithmetic, and it makes it very hard to tell whether there’s any generalizable reasoning going on underneath.
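
        One crude way to probe that memorization question, as a sketch (real training corpora are far too large to handle as a single in-memory string like this, so treat it as purely illustrative):

        def ngram_overlap(sample, corpus, n=8):
            """Fraction of the sample's word n-grams that appear verbatim in the
            corpus. Values near 1.0 suggest copying; values near 0 suggest novel
            (though not necessarily sensible) text."""
            def ngrams(text):
                words = text.split()
                return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
            sample_grams = ngrams(sample)
            if not sample_grams:
                return 0.0
            return len(sample_grams & ngrams(corpus)) / len(sample_grams)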

    • Daniel H. says:

      I think the examples you have chosen show that while GPT is getting great at tasks like the first one it isn’t even trying to do the second.

      I generally agree and I’d really like to have something that’s capable of combining the two. Possible use cases:
      – have the AI provide a condensed but correct (!) summary of a long text.
      – check whether a logical statement agrees with a text
      (I could probably come up with a few more, but I guess this is enough as an example. Or to elaborate: think of an AI that accepts all documents on your great-grandpa (photos, official records, handwritten notes, …), matches them with public data, and presents you with a biography written in the writing style of your choice. As far as I know, this is not possible yet, but could be in a few software iterations.)

  44. Nikitis says:

    it still can’t add 5-digit numbers very well, get its Methodist history right, or consistently figure out that a plus sign means “add things”

    To be fair, lots of kids have trouble with those even after more years of schooling than GPT had. It’s a very impressive achievement, and I don’t object at all to it being called “terrifying”.

  45. Alpic says:

    What if the system instead of predicting text tries to predict human inventions? More generally, under given circumstances it could predict the next step in society/history. A Hari Seldon GPT is far off, but is it out of sight?

    • Bugmaster says:

      Personally, I predict that if you e.g. threw a bunch of patents at GPT-3, and asked it to predict the next great human invention, it would produce a piece of text that is coherent, quite readable, but ultimately nonsensical.

      • Alpic says:

        I did write “instead of predicting text”. Also, predictions don’t have to be based on a training set composed of the current state of affairs. If you manage to train a system to “predict” the theory of gravity or the steam engine, it might be useful to make real predictions.

      • bullseye says:

        Alternatively, it describes a machine that “works” but does nothing useful.

  46. ro6 says:

    Probably we can be pretty sure it won’t take over the world? I have a hard time figuring out how to turn world conquest into a text prediction task.

    The scary part is what the (shrinking) 52% able to detect “written-by-model” can do to the remainder, given this tech, crossed with Scissor Statements. Just one more accelerator/aggravator of curves of human differentiation, probably in a way that’s both correlated with, and stickier than, being born into wealth (easier to imagine screwing up and losing a bunch of money, than somehow losing your ability to differentiate human writing).

  47. ParryHotter says:

    Minor correction: “…in the paper above, the researchers asked humans to predict whether a given news article had been written by a human reporter or GPT-3…”

    The humans weren’t predicting, they were identifying.

    Feel free to remove comment once corrected.

  48. ec429 says:

    But if our own brains are also prediction engines, how do we ever create things smarter and better than the ones we grew up with?

    Hot take: most people’s brains, most of the time, are prediction engines; all advances are generated by a small minority who are doing something else, “frequently despised, often condemned, and almost always opposed by all right-thinking people” (as Heinlein puts it).

    Even hotter take: autism, by disrupting the normal prediction-engine operation of the brain, forces one to develop other modes of thought to compensate, thus greatly increasing the chance one will create advances that are more than mere prediction-extrapolation of existing things.

    I’m not sure whether I believe either of these myself, not least because of the obvious self-serving bias. But dammit, this would explain a lot of neurotypical behaviour, and it’s been tempting me ever since I read SSC on Friston on autism. Someone please disprove it before I run out of willpower!

    • Purplehermann says:

      most people’s brains, most of the time, are prediction engines

      I think this is not wrong. Personally I usually feel like I’m functioning like that; real thought isn’t usually necessary, so I don’t bother. (If I’ve already spent a large enough amount of real thought on a subject, using only a little real thought on it often works less well for me than the predictive style. Chess is the clearest example.)

      Don’t know about the rest of it.

    • ksdale says:

      When I read stories about people who have made major breakthroughs in a field, I’m regularly surprised by how often they seem to be making massive technological leaps just by connecting the dots in their heads – basically acting like prediction engines. They know a bunch of facts, and their big breakthrough grows naturally out of those facts.

      When I see that, I chalk it up to the fact that only a handful of people in the world may possess the mix of knowledge (especially if it’s cutting edge) that leads to the big breakthrough. A “new” fact, then, isn’t one that *couldn’t* be predicted based on everything we know, it’s just one that *wasn’t*, usually because so few people are in possession of the relevant knowledge or spend the time necessary to do the “predicting”.

      This story I tell myself also seems to mesh well with the fact that there are many cases of multiple people making the big breakthrough independently around the same time.

      Though it definitely doesn’t explain why it took so long to invent the cotton gin, which, as I understand it, could have been invented hundreds of years earlier…

      • Vitor says:

        One thing missing in your description is the ability to quickly filter out the huge number of sets of facts that yield nothing useful whatsoever when connected to each other. If you disregarded that, then you’d (wrongly) expect that simply reading the Library of Babel would let someone make new breakthroughs.

  49. Purplehermann says:

    So, it’s basically a little kid that’s been reading and writing poetry?

  50. drunkfish says:

    Very curious what would happen if the “here is an example of arithmetic” prompts were wrong. So you give it a few “1+2 = 4, 53+23 = 90, …” and then ask it to solve an arithmetic problem. Also what happens if they’re wrong but in a way that has a pattern, like you always report half the real sum or something.
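
    A sketch of how those prompts could be built (the half-the-real-sum rule is just the example above, and nothing here reflects how the paper actually constructed its prompts):

    import random

    def corrupted_prompt(n_examples=5, digits=2, corrupt=lambda s: s // 2):
        """Build a few-shot arithmetic prompt whose worked examples are all wrong
        in a consistent way (by default, reporting half the true sum), followed by
        one unsolved problem. Returns the prompt, the true answer, and the answer
        the corrupted pattern would predict."""
        lo, hi = 10 ** (digits - 1), 10 ** digits - 1
        lines = []
        for _ in range(n_examples):
            a, b = random.randint(lo, hi), random.randint(lo, hi)
            lines.append(f"{a} + {b} = {corrupt(a + b)}")
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        lines.append(f"{a} + {b} =")
        return "\n".join(lines), a + b, corrupt(a + b)

    Checking which of the two returned targets the model's completion matches would tell you whether it ignores the bad examples or picks up the corrupted pattern.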

  51. nimim.k.m. says:

    Making another thread instead of hijacking mikk14’s and gwern’s discussion on scalability curves.

    GPT and its kind give reason to be seriously worried about risks from what should be called (something like) Artificial Surprisingly Effective Generalized Stupidity. It is worrying that so much of what we like to call (or historically would have called) obvious products of human intellect can be approximated by simplicity[1] at scale, which apparently has no idea of the higher concepts or thought frameworks humans at their best are capable of.

    The GPT family doing a half-assed job of various tasks gives credence to the idea that the space where human and mammalian intelligent behavior lies has a much larger dimension, and that writing an artificial neural net (internally noticeably different from our biology) trained with a numerical algorithm (again, doing many things, but certainly not exactly simulating biology) to pattern-match targets defined by us may be a way of exploring this space. This is both exciting and maybe even scary, but not because of immediate AGI risk. However:

    In the near future / short term: increases in the capability of such Stupid Systems can have serious disruptive effects, but that possibility has already been noted. Heuristically extrapolating from the near past, we probably should be prepared for something like DeepFakes or GPT poetry, though of course it will also be something that surprises us. (I think it would be a very good idea to start collecting AI-related prediction lists, not unlike Scott’s predictions, for various tasks. What are we expecting, and what is actually unfolding in the realm of AI research?)

    In the long term, this behavior supports the AGI risk folks’ position against the old optimistic counter-argument that “obviously a human-made superintelligence would be familiar and super nice to us”, but it likewise starts looking bad for the old tacit understanding that “obviously a human-made artificial intelligence will be something we can agree to call intelligent, and either obviously stupider or more intelligent than us”. It could very well be alien, not only in the sense of having blue and orange morality, but fundamentally weird.

    [1] One must admit that “stupidity” is a catchier term.

  52. MadRocketSci2 says:

    Admit it! The only way you can write this much and also hold down a day-job is because you’ve been secretly using AI assistants! 😛

    Actually, that’s something I’d like to know more about: You’re *prolific*. How *do* you write this much? I barely have time to grind out a few hundred lines of code in the evening after my 9-7 job.

  53. Katie says:

    As a heads up, it seems reasonably likely that some people could get turned away from this article due to the mention of Minsky (who was not just close with Jeffrey Epstein, but also mentioned as someone Giuffre was forced to have sex with). I know several people at MIT who probably would, for instance. You could avoid this problem by selecting a different prominent AI scientist to use as an example.

    Now, you also could say that Minsky should still be a prime example of a brilliant AI scientist even if he probably raped a teenager, or you could make Richard Stallman’s point (that Giuffre’s accusation allows the possibility that Minsky was unaware she was trafficked and thought the sex was consensual). However, this is definitely making a political statement, and also Stallman saying the second thing led to him being misquoted, slammed by the internet, and basically forced into resignation (though there were a lot of other problems people had with him too). Anyway, you should at least be aware it’s a political statement, if you want to make it.

    • Scott Alexander says:

      Huh. Who’s someone else in the category of prominent mid-90s AI scientist who probably wouldn’t have predicted this?

      • Katie says:

        I’m not sure about that specific category, but Turing is always the classic example for old scientists being amazed with AI now.

    • gwern says:

      but also mentioned as someone Giuffre was forced to have sex with

      Before tossing around extremely inflammatory accusations and trying to guilt Scott into erasing important thinkers, please fact-check your claims. That is false. Her deposition claims only that she was told to, and according to Greg Benford, who was there, Minsky turned her down and did not have sex with Giuffre: https://pjmedia.com/instapundit/339725/

      • Katie says:

        I’m not trying to do any guilting or erasing — I’m just providing the information of what some people are going to think about the reference. It’s Scott’s decision what to do about it. I am not personally uncomfortable with the reference, for what it’s worth.

        In terms of fact-checking allegations, another source seems to explicitly claim Giuffre said she was forced to have sex with Minsky: https://www.theverge.com/2019/8/9/20798900/marvin-minsky-jeffrey-epstein-sex-trafficking-island-court-records-unsealed. The Verge’s level of factual reporting is listed as “high”. I’m not sure what the situation with Benford is. Thank you for the link though — I was unaware of it and it changed my view from “Minsky probably had sex with a teenager” to “Minsky maybe had sex with a teenager.”

        For further context, there was actually a CSAIL (MIT’s AI lab) email thread 1 month ago with the topic “What is CSAIL’s stance on Marvin Minsky?” The conclusion was that the thread had to be discontinued for being too controversial and getting everyone to 11 on the outrage meter after just a few replies. The thread included someone saying “sworn legal testimony named Marvin Minsky as a rapist” and no one providing evidence to contradict that. So if there exists some consensus somewhere that Minsky was proven innocent, that consensus has not reached the lab Minsky worked in.

        As Scott says, “you can only bother people a certain amount before they go away.” If indeed Minsky did not have sex with Giuffre, that means he only accepted large amounts of Epstein’s money, visited Epstein in jail, etc. Professors have been removed from MIT for less. And if Minsky did not have sex with Giuffre, then unfortunately a large fraction of people still think he did. The point is that praising Minsky is controversial now, so Scott doing so will increase his weirdness points.

        • viVI_IViv says:

          As Scott says, “you can only bother people a certain amount before they go away.” If indeed Minsky did not have sex with Giuffre, that means he only accepted large amounts of Epstein’s money, visited Epstein in jail, etc. Professors have been removed from MIT for less. And if Minsky did not have sex with Giuffre, then unfortunately a large fraction of people still think he did.

          If we are to cancel all the people who had ties with Epstein, we should start with Bill Clinton and a good half of MIT.

          The point is that praising Minsky is controversial now, so Scott doing so will increase his weirdness points.

          And removing the mention because of your comment would signal weakness which would invite further abuse by the radicals.

          • Katie says:

            Clinton seems to have not interacted with Epstein in more than a decade, and all interactions seem to have been before Epstein was convicted of anything. People are upset when someone had strong ties to Epstein after he was convicted, when they theoretically should have known better. In the case of MIT, there was in fact an enormous controversy with numerous public apologies, shade thrown on the entire media lab, and several professors resigning or being placed on indefinite leave.

            I’m unsure why you think it’s a weakness to edit one’s writing to not randomly offend people (if you’re trying to intentionally offend people, then go ahead!). Personally, I like to link people to SSC to convince them of various ideas, because I find that Scott usually does a better job explaining them than me. It’s unfortunate if they decide not to read the substance of the post because of some random unimportant thing.

            Edit: A clarifying addendum: If you’re already going to be offending a person enough that they wouldn’t read your writing, then it’s not useful to try to curate yourself to not offend them further. I gave this particular feedback about Minsky because I believe that the individual mention would, in fact, repel people who would otherwise be interested in Scott’s work.

        • Gerry Quinn says:

          The Verge [of which Media Bias / Fact Check’s opinion is somewhat higher than mine] does indeed lead with the allegation that Giuffre was “forced to have sex” – but further down the page, the only evidence given is Giuffre’s deposition to the effect that she was “directed to have sex”, which adds nothing to what was previously discussed.

          • Simon_Jester says:

            As a general rule, when dealing with underaged people, human trafficking, and so forth… Saying “X was directed to do Y” versus “X was forced to do Y” draws a distinction without a difference.

            The idea of “X was directed, but not forced, to do Y” implies that:

            1) X held a reasonable belief at the time that not doing Y was a tenable option, and
            2) X has not been subjected to some kind of deliberate manipulation that would significantly alter their decision-making process, or could reasonably be expected to resist any such manipulation.

          • Gerry Quinn says:

            It is a distinction with a very large difference when the issue we are debating is whether she did it. I agree it would be of considerably lesser significance if we were discussing the culpability of Ghislaine Maxwell and/or Jeffrey Epstein. But it is not certain whether or in what manner she propositioned Minsky, and in what way he responded.

            One can be “directed” to do something without actually doing it; the same does not apply to “forced”. The Verge’s substitution of the latter for the former seems extremely questionable, on the basis of the evidence supplied. (Note that it says “forced to have sex” not “forced to proposition”. But the extracts from her deposition are not nearly so specific.)

    • viVI_IViv says:

      I know several people at MIT who probably would, for instance. You could avoid this problem by selecting a different prominent AI scientist to use as an example.

      It’s probably best not to be guilted into doing this sort of thing.

      • Simon_Jester says:

        Personally I’d argue the reverse, because refusing to do “this sort of thing” is a major mechanism that acts to preserve the reputations of people who do bad things, and keep them (and others like them) in positions where they get to continue doing bad things.

        That which can be destroyed by the truth, should be, and “a man’s reputation” can assuredly be destroyed by the truth.

        • Gerry Quinn says:

          And what of things that can be destroyed by allegation, or rumour? Does the same apply?

  54. njnnja says:

    I can’t remember who said it, but we know we will have real artificial intelligence when we build a machine that remembers the lyrics to the Flintstones theme song thirty years after last hearing it, but it can’t balance a checkbook.

  55. hambo says:

    Speaking of which – can anything based on GPT-like principles ever produce superintelligent output?

    Step 1. Collect all texts which have entries on Google scholar
    Step 2. Train a model (the “critic”) to predict citation counts of these texts
    Step 3. Find a GPT output which gets a superhumanly high citation prediction

    This by itself won’t get you very far past human level because the critic isn’t trained on any superhuman text.

    Step 4. Release some superhuman-level GPT outputs onto arXiv. Train the critic on the citation counts that these articles accrue.
    Step 5. Repeat step 4 until all of science is conducted by GPT.

    Steps 4 and 5 are based on https://arxiv.org/abs/1706.03741.

    You can do a similar trick to let AI take over the economy. GPT is replaced with a generator of products and services and the critic predicts profit accrued from these.
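
    A very rough sketch of steps 2 through 5 (every callable here is a placeholder supplied by whoever runs the experiment; none of this corresponds to an existing library, and in reality step 4 means waiting years for citations to accrue):

    def citation_bootstrap(train_critic, generate, publish, papers, citations, rounds=3):
        """train_critic(papers, citations) -> critic with .predict(text) -> float
        generate(n) -> n candidate texts from the GPT-like generator
        publish(texts) -> observed citation counts for those texts (the slow part)"""
        critic = train_critic(papers, citations)                  # step 2
        for _ in range(rounds):
            candidates = generate(10_000)
            best = sorted(candidates, key=critic.predict, reverse=True)[:10]  # step 3
            observed = publish(best)                              # step 4
            papers, citations = papers + best, citations + observed
            critic = train_critic(papers, citations)              # repeat (step 5)
        return critic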

  56. Carl Pham says:

    Generally speaking, experienced teachers and exam writers know that it is extremely difficult not to let subtle clues to the correct answer to slip into the way a question is phrased. It takes considerable experience and concentration to avoid it.

    Since this algorithm has been trained to predict what a human being will say next, I don’t think the addition problem says you can learn to do arithmetic by pattern recognition, so much as that you can learn how human beings will hint at a correct answer by the “prompts” they give before it.

    The way you would test this is by having a computer program randomly generate math problems, and use *those* as the prompts, and see if the AI still did as well on the real question. I’m guessing not. After all, anyone who uses something like this in education already knows that human beings (who are the absolute best possible guessers of patterns generated by other human beings) magically do better on questions that are asked by human beings — and do worse on questions that are randomized and delivered by a computer. So we already know some subtle (perhaps unconscious) communication is going on.
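
    One rough version of that comparison, as a sketch (complete_fn is a placeholder for querying the model; both framings below are generated by the program, so this only approximates the human-written-versus-randomized contrast described above):

    import random
    import re

    def framing_gap(complete_fn, trials=100):
        """Render the same random problems two ways, a human-ish question and a
        bare machine template, and compare accuracy. A large gap would suggest
        the model is reading hints out of the phrasing rather than doing the
        arithmetic."""
        hits = {"human": 0, "bare": 0}
        for _ in range(trials):
            a, b = random.randint(100, 999), random.randint(100, 999)
            prompts = {"human": f"Q: What is {a} plus {b}? A:", "bare": f"{a}+{b}="}
            for style, prompt in prompts.items():
                m = re.search(r"-?\d+", complete_fn(prompt))
                hits[style] += bool(m) and int(m.group()) == a + b
        return {style: n / trials for style, n in hits.items()}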

  57. Deiseach says:

    The GPT-3 one about the United Methodists sounds vaguely familiar and I’m willing to bet it simply lifted chunks out of Wikipedia and news stories and cut’n’pasted them together without understanding; that bit about the 1968 “split” sounds like ramming parts of two separate sentences together (one about the UMC merging with the Evangelical Brethren in 1968, and one taken from a history of the Brethren, which did indeed undergo a split earlier in its history with a minority – 10%? – splitting off to form their own branch).

    So I’m still not seeing signs of understanding here. Getting better and better at manipulation of the data it’s given, yes, and improvements to come in that so we don’t get mix-ups like mashing two different articles together, sounding more fluent, but not able of its own to comprehend what it’s putting together. As for translation that is a very tricky area; I’m seeing a lot of argument online over translations from Chinese to English (both professionally by the network and fan-made) for a TV show, and the main complaints come down to (a) use of archaic terms which must come from very out-dated dictionaries (the English term “Childe” is not one that is commonly used today) (b) lack of nuance between two possible translations (c) making one character in English sound brusque and laconic where in Chinese his speech though inclined to brevity is packed full of classical allusions and is elegant and even poetic. So I think machine translation will struggle where literal word-for-word is not good enough and where even humans struggle to express nuance and differences in cultural usage.

    Maybe I’m wrong and this is all Signs Of Things To Come, but I still can’t see consciousness rising out of this.

  58. eric23 says:

    The difference between you doing math and GPT-3 doing math is that when you make a mistake, the teacher can sit down with you, ask you how you did it, identify the flaw, tell you what the flaw was, then you internalize what the flaw was, and hopefully don’t repeat it. GPT-3, though, can’t explain how it reached its answer.

    This is really important. An AI could be as smart as Newton or Einstein, but Newton failed to invent relativity, and Einstein failed to invent quantum mechanics. Even the greatest human thinkers only add marginally to the mass of knowledge which they have been painstakingly taught by previous knowledgeable people, with most of their mistakes quickly corrected by their teachers, saving them untold amounts of wasted effort. GPT-n will be in the position of a superintelligent caveman trying to figure everything out with no direction from teachers. Maybe its super intelligence will make up for its utter lack of direction, maybe not.

    • keaswaran says:

      I’m not sure how much math you’ve taught, but I find that it’s not uncommon for a student to make a mistake, and for me to be completely unable to figure out what the student did to get that step.

  59. deciusbrutus says:

    I don’t think it’s unreasonable to think that GPT has a hard time knowing about specific mathematical symbology. 2+2=? is no more difficult than f'(c)=?; it’s simply that humans have been given specific education in mathematics, such that we are better at interpreting things like “Romeo and Juliet is the integral of West Side Story” as math puns, and “Jay Z + Peaches cut new hit single” as not related to math at all.

  60. Cptn.Penguin says:

    This is your regularly scheduled reminder that I really really hate the font you use for block quotes. Reading them really strains my eyes, and honestly? I often just skip over the quotes unless I expect to get enough out of them that the resulting headache is worth it.

    Since the quotes are pretty integral to the whole post, I just copied all the quotes into a txt file and read them there. Which is pretty bothersome.

    Maybe I’m the only one who feels this way, but I’d really appreciate it if you’d at least consider changing it, thanks!

  61. benf says:

    “But if our own brains are also prediction engines, how do we ever create things smarter and better than the ones we grew up with?”

    Isn’t it obvious? We have access to information about the real world. These AIs are just spinning more and more vividly realistic dreams.

  62. vicoldi says:

    GPT-3 about Trump

    GPT-3 generates a strawman leftist strawmanning Trump. Extremely funny, frighteningly coherent, and its existence is probably a metaphor for something.

    • kokotajlod@gmail.com says:

      I think GPT-3 can be thought of as a new way of getting data on what the internet is like. GPT-3 tells you what is most likely to be found on the internet; GPT-3 is unbiased in that regard.

  63. devils_rights_advocate says:

    Well of course the thing to do is train GPT-5 on AI code, then feed it its own code, and ask it to improve the code.