There are two major brain areas involved in language. To oversimplify, Wernicke’s area in the superior temporal gyrus handles meaning; Broca’s area in the inferior frontal gyrus handles structure and flow.
If a stroke or other brain injury damages Broca’s area but leaves Wernicke’s area intact, you get language which is meaningful, but not very structured or fluid. You sound like a caveman: “Want food!”
If it damages Wernicke’s area but leaves Broca’s area intact, you get speech which has normal structure and flow, but is meaningless. I’d read about this pattern in books, but I still wasn’t prepared the first time I saw a video of a Wernicke’s aphasia patient (source).
During yesterday’s discussion of GPT-3, a commenter mentioned how alien it felt to watch something use language perfectly without quite making sense. I agree it’s eerie, but it isn’t some kind of inhuman robot weirdness. Any one of us is a railroad-spike-through-the-head away from doing the same.
Does this teach us anything useful about GPT-3 or neural networks? I lean towards no. GPT-3 already makes more sense than a Wernicke’s aphasic. Whatever it’s doing is on a higher level than the Broca’s/Wernicke’s dichotomy. Still, it would be interesting to learn what kind of computational considerations caused the split, and whether there’s any microstructural difference between the areas that reflects it. I don’t know enough neuroscience to have an educated opinion on this.