Slate Star Codex

"Talks a good game about freedom when out of power, but once he’s in – bam! Everyone's enslaved in the human-flourishing mines."

OT87: Ulpian Thread

This is the bi-weekly visible open thread. Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. The New York Solstice celebration will be on December 9 this year, and has a Kickstarter campaign to raise the necessary funds. Bay Area, Seattle, and other versions probably coming soon.

2. Frequent SSC commenter JRM has thrown his hat into the ring in a local district attorney campaign. He’s looking for “campaign donations, quality political advice, and graphic artists”. If interested, check his website or just comment here and he’ll find you.

3. Some later Dark Age comments that didn’t make it into the original highlights: Watchman on population swings, Tim O’Neill disputing the whole thesis.

4. Bean’s posts about naval warfare in SSC Open Threads have moved to their own blog, Naval Gazing.


Highlights From The Comments On Dark Ages

Thanks to everyone who made interesting comments on yesterday’s post about Dark Ages.

Several people challenged the matching of the economic/population decline to the “fall of Rome”. For example, from David Friedman:

On the graph you are citing, 36 million is the population in 200 A.D. The fall of the Western Empire is commonly dated to about 450 A.D. By 400 A.D., on the same graph, population is down to 31 million–say 30 million by 450.

So a more accurate statement would be “The late Roman Empire caused a population decrease of about six million. Population continued to fall for another hundred and fifty years before it started back up. It passed its Roman high in about 1000 A.D. and continued growing for the next three hundred years.”

My rule of thumb for very poor societies is that the growth rate of population is a proxy for the average standard of living. That growth rate, the slope of the line on Figure 1.2 of the Atlas of World Population History, starts up in about 450 A.D. and continues increasing until about 1300.

From ksvanhorn:

The graph of lead production doesn’t really jive with the idea that the Dark Ages were a result of Rome falling — production had declined sharply centuries before the fall of Rome. It suggests that maybe we should count the Dark Ages as beginning considerably *before* the fall of Rome.

Dissonant Cognizance:

I was thinking throughout reading the post that you could put the start date of the Dark Ages around the Crisis of the Third Century, where Rome probably would have ceased to exist right then had Aurelian not managed to contain the damage. That let things stay superficially stable while the Western Empire cannibalized its outlying provinces for a couple more centuries.

And ctj09 agrees:

I’d actually move up the first date [for the start of the Dark Ages] to around the time of the Crisis of the Third Century, a period in which the Roman Empire very nearly collapsed and from which it never really recovered. Especially because Manorialism and the explicit Dominate-style hierarchy that typified the early Middle Ages were first really developed during this period. Not to mention that just after the Crisis, the Emperor Diocletian laid the groundwork for a lot of what would become institutions and norms in the early Middle Ages.

Other people thought the end date of the Dark Ages could also be earlier. Many brought up the Carolingian Renaissance. For example, Krill12:

1000AD is ridiculously late…I have no problem with pointing out that from about 450AD to 600AD there was very little going on. That is probably a real dark age. It’s also nothing like what people mean when they say “The Dark Ages.” The people who use that term might have forgotten the Carolingian Renaissance happened before 1000AD.

From RIP_Finnegan:

In my opinion the Carolingian Renaissance is pretty much proof positive that people even at the time saw part of their task as recovering old glories. I don’t have a source book to hand, but that’s the impression I got from Einhard and the RFA. There are those who argue the Carolingian Renaissance was mostly hot air, but I think it’s fair to say there’s a reliable middle ground between that and what they teach French schoolkids. Also, even if the Pirenne Thesis is no longer good money, it’s clear that there was economic continuity from Rome later than once thought. I would say that the popular conception of the Dark Ages probably owes more to the period after Charlemagne, when the Papacy was in disarray, Europe was fragmented, Vikings were on the loose, and the polities we know from the Middle Ages were just finding their feet.

However, that’s not the whole story, of course. The Carolingians saw themselves as superior to Rome in one very clear way – they were Christian! In the Cathedral in Aachen, among all the re-used Roman architecture, there is a plain throne made of materials from the Holy Land. I think perhaps this ties into the James Burnham theory of Whig history – that no matter how bad things get in reality for us, we’ll construct a worldview that makes civilizational defeat a victory for Truth and Justice.

Lillian disagrees:

The Carolingian Renaissance kind of fizzled out with the breakup of the Carolingian Empire after the death of Louis the Pious though. High culture and learning did not take for good until the Renaissance of the 12th century. Hell, the Holy Roman Empire was really founded by the Ottonian Dynasty, who were crowned Emperors over a century after the Carolingian realm started collapsing under its own weight. Certainly the Carolingian Renaissance laid the groundwork for what came later, but in all it was a false dawn.

So “300 – 800 AD” might be as good a five hundred year interval to call “the Dark Ages” as 500 – 1000. I think this is true of a lot of historical periods – depending on what artists or scientists you think are most important “the Scientific Revolution” or “the Renaissance” can have pretty fluid boundaries – but it’s worth noticing the fuzziness.

I had briefly noted that scrolls might be shorter than codices, but felt okay dismissing this because they would have to be something like two orders of magnitude shorter for it to make much difference. Well…here’s Caf1815:

On the number of books at the library of Alexandria vs. at the university of Paris: the fact that scrolls vs. codices is apples vs. oranges is duly noted, but let me impress on you just how vast the difference is (full disclosure, this is my field of study, so the following may turn into a bit of a rant). At Alexandria, you’re counting scrolls; the length of a typical scroll translates to about 20-30 printed pages in a modern book. This is why works by ancient writers are divided into several “books”, each of which would take up one scroll: the City of God is 22 scrolls, the Republic of Plato is 10 scrolls, etc. At the University of Paris in 1300, they had codices, and these were huge; for example, I’m in the final stages of publishing a work by a 4th century writer; the book, my editor tells me, will be 550 pages long (admittedly counting an introduction, critical apparatus, etc.). But in its 13th century codex form, this work takes up folios 396 to 407 of a huge doorstop comprising 487 folios total; the whole codex, as was the norm at the time, features dozens of works by various authors in the same broad category. So if the library of Alexandria only had 40,000 scrolls (okay, that’s a pretty low estimate), it would have had less text than the Sorbonne in 1300 by an order of magnitude.

If this is at all right, then mea culpa.
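To get a feel for the scale involved, here is a rough back-of-the-envelope check using Caf1815’s figures (the per-scroll page count, the 12-folio/550-page equivalence, the 487-folio codex, and the 1,017-volume Paris figure all come from this post and its comments; treating that one quoted codex as a typical codex is my own simplifying assumption):

```python
# Back-of-the-envelope comparison: Library of Alexandria (scrolls) vs.
# the university of Paris library circa 1289-1300 (codices).
# Inputs are the figures quoted in the post; assuming the 487-folio
# "doorstop" is a typical codex is purely illustrative.

pages_per_scroll = 25                   # "about 20-30 printed pages" per scroll
alexandria_scrolls = 40_000             # deliberately low estimate

modern_pages_per_folio = 550 / 12       # a ~550-page edition fits in 12 folios
folios_per_codex = 487                  # the codex described above
pages_per_codex = modern_pages_per_folio * folios_per_codex   # ~22,000

paris_codices = 1_017                   # Paris library holdings in 1289

alexandria_pages = alexandria_scrolls * pages_per_scroll      # ~1,000,000
paris_pages = paris_codices * pages_per_codex                 # ~23,000,000

print(f"Alexandria: ~{alexandria_pages / 1e6:.1f} million page-equivalents")
print(f"Paris:      ~{paris_pages / 1e6:.1f} million page-equivalents")
print(f"Ratio:      ~{paris_pages / alexandria_pages:.0f}x in Paris's favor")
```

Under those assumptions the Paris collection works out to roughly twenty times the text of the (low-balled) Alexandria figure, which is consistent with Caf1815’s “less text … by an order of magnitude.”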

A lot of people don’t like the idea of Dark Ages because they think it underestimates the continuity between the classical and medieval world; John of Salisbury argues that, if anything, it overestimates it:

I sympathise with those who have brought up Ireland and Scandinavia. My concern is that ‘dark ages’ talk implies too much continuity: there is the grand narrative of Western Civi, in which ‘we’ (those of European descent, white people) flourished in antiquity, flailed in the dark, and then triumphed in modernity. My gripe is that there isn’t a ‘we’ that is the subject of this story: the ancient world was a Mediterranean world, and it is only in the middle ages we see a civilisation that looks like modern Europe. The Mediterranean peoples (olive oil people, as Taleb calls them) who went on to share in later European civilisation go from light to dark, but the northern Europeans, the butter people, are only stepping into the light. Take Britain, which has a dramatic, well-defined dark age that is also a lot shorter than the ordinarily cited 500 year span. The period between the Roman departure and the Saxon conversion is pretty much pitch black by all the relevant measures. After conversion, butter people who call the island home (even if they were rather recent arrivals) enter history on their own terms, as something more than a mere foil to Roman grandeur. Sure, the Saxons look undistinguished compared to the Athens of Pericles, but that’s hardly a fair comparison. They do pretty well considering that they had been tribal illiterates just a few generations earlier. For much of the conventional dark ages, the Saxons enjoy what is, by their standards and those of their northern European neighbours, an age of light.

Similar concerns from georgioz:

At one point Scott says that “by mid Dark Ages, there was no city in Christian Western Europe larger than about 50,000 people.”

What geographic area are we looking at here? It has to be western Europe but not Spain since Scott correctly declares Spain as part of Al-Andalus and therefore not subject to the western christian definition. So it leaves us with northern Italy – not Sicily or southern Italy as those were basically muslim Abbasid and/or Byzantine respectively until Norman conquest of Sicily in 11th century.

So what does that leave us with? Basically Roman provinces of Gaul, Britannia and Northern Italy. We cannot speak about Germany or Scandinavia as it was beyond Limes Romanus – the northern Roman border. And one can arguably say that non-Italian parts of what we have here actually flourished during 500-1000 compared to the tribal past. I am not sure if one can say that what is basically Frankish empire plus Britannia felt some sort of Dark Ages. To the contrary. So it basically all boils down to decline of the city of Rome itself. City of Rome and northern Italy went through a very rough period. But given that the empire they ruled disintegrated it is to be expected. Other cities like Constantinople or Baghdad or Cordoba flourished instead.

This is why I think historians want to have broader view. If one actually takes the area of whole Roman empire at its regional peak under Trajan then one can definitely see scientific and cultural progress in that area during 500-1000. That would be fair comparison with classical Roman times. Not only do we have now actual states in northern and western Europe such as Frankish empire or slavic protostates. But the rest of former Roman Empire did quite well under Byzantine rule and rule of Abbasid / Córdoba caliphs.

offwo2000 makes a fascinating claim about the year 1000 I’d never heard before:

It’s funny because Christianity actually caused the recovery about AD 1000. People were genuinely convinced that the world would end in AD 1000 and so the papacy started the “Peace of God” movements where rulers would stop fighting in the hope that they would be rewarded in heaven after the apocalypse. This became the catalyst for the papacy taking more control of international affairs, which enabled Western Europe to stop fighting between itself so much, focus on rebuilding the economy (there’s a surge in water-mill building, for example, since peace meant that they would actually gain their return before being destroyed) and allow uniting against common enemies.

(I’m not sure how seriously to take this since the wikipedia article on the Peace of God movement doesn’t really mention the “year 1000” thing, which I found the most interesting part).

On the philosophical implications of saying the Dark Ages “were real”, John Nerst:

Consider what it means to say that these things do/don’t exist or are/aren’t real:

global warming, a solution to global warming, race, infinitely many prime numbers, the patriarchy, Kurdistan, devil worshippers, free will, schizophrenia, Esperanto, Black English, white privilege, the Friendzone, meritocracy, fate, luck, sin, seasons, personality types, learning styles, genders other than male and female, the color purple, the word ‘cromulent’, the word ‘irregardless’, the War on Christmas, property rights, fiat money, a chance of rain, a meaning of life, the meaning of life, God

And on the political implications, the anonymouse:

The lesson here is not whether the Dark Ages were X% grimdark vs. Y% grimdark. (Although, I would suggest that the Byzantine world was still rolling 4d6-drop-lowest while western Europe had regressed to 3d6 in order.)

The lesson is that civilization is fragile. It’s easy to sit in Periclean Athens (or 1891 Paris, or 2017 Seattle) and think “wow, the march of progress is inexorable!” But it isn’t. Civilization is something to be lovingly nurtured and ferociously guarded; the wolves at the door haven’t gone away just because your lamp burns too bright for you to see out into the dark.

(fortaleza84 adds “Debate about the Dark Ages, to an extent, is a proxy for anxiety over the present-day West.”)

Finally, I talked a bit about the Dark Ages bringing up two axes of “civilizational moral goodness” vs. “civilizational impressiveness”. Cernos quotes Will Durant on a different way of judging the “impressiveness” of the Dark Ages:

The Dark Ages are not a period upon which the scholar can look with superior scorn. He no longer denounces their ignorance and superstitions, their political disintegration, their economic and cultural poverty; he marvels, rather, that Europe ever recovered from the successive blows of Goths, Huns, Vandals, Moslems, Magyars, and Norse, and preserved through the turmoil and tragedy so much of ancient letters and techniques. He can feel only admiration for Charlemagnes, Alfreds, Olafs and Ottos who forced an order upon this chaos; for the Benedicts, Gregorys, Bonifaces, Columbas, Alcuins, Brunos, who so patiently resurrected morals and letters out of the wilderness of their times; for prelates and artisans that could raise cathedrals, and the nameless poets that could sing, between one war of terror and the next. State and Church had to begin again at the bottom, as Romulus and Numa had done a thousand years before; and the courage required to build cities out of jungles, and citizens out of savages, was greater than that which would raise Chartres, Amiens, and Reims or cool Dante’s vengeful fever into measured verse.


Were There Dark Ages?

[Warning: non-historian arguing about history, which is always dangerous and sometimes awful. I will say in my defense that I’m drawing off the work of plenty of good historians like Bryan Ward-Perkins and Angus Maddison whom I interpret as agreeing with me. And that the people I am disagreeing with are not historians themselves, but other non-historians trying to interpret historians’ work in a popular way that I interpret as wrong. And that as far as I know no historian believes non-historians should never be allowed to talk about history if they try to be careful and cite their sources. Read at your own risk anyway.]

Cracked offers Five Ridiculous Myths You Probably Believe About The Dark Ages; number one is “The Dark Ages Were A Real Thing”:

The Dark Ages were never a thing. The entire concept is complete and utter horseshit cobbled together by a deluded writer. The term “Dark Ages” was first used in the 14th century by Petrarch, an Italian poet with a penchant for Roman nostalgia. Petrarch used it to describe, well, every single thing that had happened since the fall of Rome. He didn’t rain dark judgment over hundreds of years of human achievement because of historical evidence of any kind, by the way; his entire argument was based on the general feeling that life sucked absolute weasel scrotum ever since Rome went belly-up.

Likewise There Were No European Dark Ages, The Myth Of The Dark Ages, The Myth Of The “Dark Ages”, Medieval Europe: The Myth Of The Dark Ages, Busting The “Dark Ages” Myth, and of course smug Tumblr posts.

This isn’t coming out of nowhere. Many people’s idea of medieval times is exaggerated. Not every scientist was burned at the stake, not everyone thought the world was flat and surrounded by space dragons, and the High Middle Ages were notable for impressive levels of material progress which in some cases outpaced the Classical World and which set the stage for the upcoming Renaissance (the continuity thesis). Granted.

But I worry that as usual, this corrective to an overblown narrative of darkness has itself been overblown. People are now talking about how you’re a gullible rube if you still believe in a so-called “Dark Age”, and how all the real intellectuals know that this was a time of flourishing civilization every bit as good as the Romans or the Renaissance.

Bulls**t. The period from about 500 to about 1000 in Christian Western Europe was marked by profound economic and intellectual decline and stagnation relative to the periods that came before and after it. This is incompatible with the “no such thing as the Dark Ages” claim except by a bunch of tortured logic, isolated demands for rigor, and historical ignorance.

To go through the arguments one by one:

1. The “Dark Ages” were only dark in Europe. And not even all of Europe – not in the Eastern Roman Empire, not in al-Andalus…

I wonder if these people interrupt anyone who talks about the Warring States period with “actually, there were only warring states in China. Many other areas during this period had no warring states at all! Guess you fell victim to the Myth Of The Warring States Period.”

What about the Bronze Age? There wasn’t any bronze in Australia. The Hellenistic period? Huge swathes of the Earth’s land area remained un-Hellenized. The Time of Troubles? Actually, outside of Russia there were no more troubles than usual. The Era of Good Feelings? Maybe there were a bunch of bad feelings outside the US.

Every other historical age name is instantly understood by everyone to refer to both a time and a place. The only time anyone ever gives anybody else grief over this is when they talk about the Dark Ages. This is an isolated demand for rigor. And if this is really your true objection, let’s just agree to call it the Western European Dark Ages, as long as we can also agree it existed and was bad.

2. What about all the great stuff in the Dark Ages? Thomas Aquinas! Gothic cathedrals! Dante! Troubadours! The Song of Roland! Roger Bacon! Musical notation! Surely no period that produced all that can be called ‘dark’!

All of those are from after the period 500 – 1000 AD.

Suppose someone tells you that the middle of America contains the Great Plains, a very flat region. But you know that actually there are lots of tall mountains, like the Rockies. Have you debunked the so-called Great Plains narrative and proven that its believers are credulous morons? Or have you just missed that there’s a natural and well-delineated area suitable to be called “Great Plains” that doesn’t include your supposed counterexamples?

The period after 1000 AD did indeed have lots of great accomplishments. That’s because Europe at that time had 500 years to recover from the civilizational collapse that demolished its economic and intellectual capacity – a collapse whose immediate aftermath we call “the Dark Ages”. I agree there are some concepts of the Dark Ages that mistakenly include some of the time after the recovery, and that Petrarch’s original version commits this error. But I think that there’s also a five hundred year period – more than long enough to count as a real historical age – that absolutely fits the bill.

3. The term “Dark Ages” was invented by Petrarch – who wasn’t even a real historian – based only on his personal opinion.

The term “World War I” was invented by Ernst Haeckel, who was not a historian, based on his personal opinion that it seemed to be a war, and involve the whole world, and be the first one to do so.

The term “Cold War” was invented by George Orwell, who was not a historian, based only on his personal opinion that it seemed conflict-y but without much actual fighting.

Very few of the historical terms we use were invented by professional historians, and they are all necessarily based on that person’s opinion that it correctly describes the thing being described. I await people admitting that there was no Cold War, because who is George Orwell to think he can just name an era based on what he feels it was like?

This is another isolated demand for rigor. Historical periods get their names from random individuals reflecting on them; the names catch on if people agree that they fit.

4. The term “Dark Ages” was originally just supposed to mean that there aren’t many sources describing it, not that the era was bad

Nope, wrong. Some people have used it this way, but this is neither how the term’s original inventors intended it, nor how a majority of modern people (historian or otherwise) think of it.

As mentioned above, the idea of a Dark Age was first developed by the late medieval/early Renaissance thinker Petrarch. As per Wikipedia:

The idea of a Dark Age originated with the Tuscan scholar Petrarch in the 1330s. Writing of the past, he said: “Amidst the errors there shone forth men of genius; no less keen were their eyes, although they were surrounded by darkness and dense gloom”. Christian writers, including Petrarch himself, had long used traditional metaphors of ‘light versus darkness’ to describe ‘good versus evil’. Petrarch was the first to give the metaphor secular meaning by reversing its application. He now saw Classical Antiquity, so long considered a ‘dark’ age for its lack of Christianity, in the ‘light’ of its cultural achievements, while Petrarch’s own time, allegedly lacking such cultural achievements, was seen as the age of darkness. […]

Petrarch wrote that history had two periods: the classic period of Greeks and Romans, followed by a time of darkness in which he saw himself living. In around 1343, in the conclusion of his epic Africa, he wrote: “My fate is to live among varied and confusing storms. But for you perhaps, if as I hope and wish you will live long after me, there will follow a better age. This sleep of forgetfulness will not last for ever. When the darkness has been dispersed, our descendants can come again in the former pure radiance.”

Petrarch can’t just be referring to an absence of good historical sources – he’s talking about his own era!

Part of the evidence for the “absence of sources” claim is that the first use of the exact term “Dark Age” may come from the 16th-century writer Caesar Baronius, who had a more specific time in mind, 888 – 1046. He wrote:

The new age (saeculum) which was beginning, for its harshness and barrenness of good could well be called iron, for its baseness and abounding evil leaden, and moreover for its lack of writers dark.

But Baronius was writing well after Petrarch, his “Dark Age” was very different from the one we know today (only used to refer to a 150-year period in the Church), and in the same sentence that he mentioned dark = few writers, he also calls it “harsh”, “barren of good”, “base”, and full of “abounding evil”. This is not exactly a resounding victory for people claiming that the Dark Age had nothing wrong with it except slightly fewer records.

5a. It’s historical malpractice to call something “The Dark Ages”. The job of historians is to record, not to judge.

So I assume you also raise a fuss whenever someone talks about Alexander the Great? The Golden Age of Athens? The Five Good Emperors? The Enlightenment? Ivan the Terrible? The Belle Époque? I S O L A T E D . D E M A N D . F O R . R I G O R.

I agree there’s some level on which all of these are a sort of boundary-crossing in the ethics of historiography. And I agree that maybe very responsible historians want to avoid this and come up with more neutral names for very official work – I’ve seen some people talk about “Alexander III of Macedon”. Well, okay. The “Periclean Age Of Athens”. Fine. The “Time There Were Five Whole Emperors In A Row, None Of Whom Were Sadistic, Perverted, Or Insane, Which As Responsible Historians We Cannot Officially Call “Good”, But Which By The Standards Of Ancient Rome Is Seriously Super Impressive”. Whatever.

But if you only challenge the term “Dark Ages”, I feel like you’re doing the opposite of this suspension-of-judgment. If you say “The Dark Ages weren’t really dark!” you’re putting yourself in a position to judge historical eras, saying that maybe some of them were dark and others weren’t, but this particular one wasn’t. In this case you’re not responsibly abdicating historical judgment. You’re making a historical judgment, and getting it wrong.

5b. The Dark Ages were only “dark” if you like big centralized states with powerful economies. There were lots of ways they might have been good. For example, ancient Rome had slavery, and most Dark Age societies didn’t. That seems pretty light-side to me!

And Alexander the Great was only “great” if you like killing a lot of people and conquering their lands.

Look, a lot of history sucked, and moral judgments are hard. Jared Diamond thinks hunter-gatherers were freer and happier than anyone since. Maybe the real Golden Age of Athens was in 40,000 BC, when Neanderthals on the rocky plain that would one day become Athens hunted mammoths in carefree abandon, loving life and being at one with nature and the changing seasons. Maybe the title “Alexander the Great” should really go to Alexander IV of Macedon, who was killed at age 14 and so never conquered, murdered, or oppressed anyone – truly an outstanding achievement matched by approximately zero other kings of the era.

In order to avoid this kind of speculation, I think of history as being along at least two axes: goodness and impressiveness. Alexander may or may not have been a good person, but he was certainly an impressive one. Periclean Athens might not have been the most virtuous city, but it was certainly one with lasting accomplishments. Since it is so hard to judge the goodness or badness of historical figures, most of our claims of greatness are claims about impressiveness. And compared to the periods before or after, Dark Ages Europe was unimpressive.

I’m probably an overly literal person, but whenever I think about dark ages, I think of the modern (and anachronistic for the period in question) association between light, population density, and economic activity – the kind made vivid by satellite photos of the Korean peninsula at night.

The Dark Ages in Europe were a time when things would have been more towards the North Korean end of that picture. In fact, you probably could have taken a similar picture at the time, with an east/west instead of north/south axis. From The Muslims of Andalusia:

[In medieval times], Europe was darkened at sunset, Al-Andalus shone with public lamps; Europe was dirty, Al-Andalus built a thousand baths; Europe lay in mud, Al-Andalus’ streets were paved.

I get that this is just a pun I’m taking too seriously. If you don’t like the term Dark Ages, I am happy to use the term “Unimpressive Ages”, “Disappointing Ages”, or “Pathetic Ages”. My point is that there is some axis, not the same as morality but involving economic and intellectual activity, in which the period 500 – 1000 AD was uniquely sucky.

6. Okay, forget disputes about the meanings of words or how to do history. On the object level, using normal meanings of the word “bad”, the Dark Ages were not that bad.

Wrong.

It’s hard to prove this is wrong, because there weren’t great statistics back then to compare Classical, Dark Age, and High Medieval societies on. As far as I know, only two groups – Angus Maddison’s project, and Lo Cascio and Malanima – have dared try to estimate Western European GDP for these eras; their estimates are summarized on Wikipedia.

Both groups find that GDP declined from 1 AD (classical era) to 1000 AD (late Dark Age / medieval era). 1 AD was not the height of Rome, and 1000 AD was well into the recovery from the Dark Ages, so we expect the difference between the Roman peak and the Dark Age nadir to be even more profound than this. But even these attenuated numbers tell the story of an entire millennium when human economic progress across an entire continent went backwards.

Although these numbers are inherently sketchy, the few real pieces of evidence we have seem to back them up. Arctic ice cores preserve a record of how much lead pollution was in the air, probably linked to human lead-mining activities. This allows us a pretty good look at how much lead-mining various European civilizations were doing: production peaks in the Roman era, collapses, and does not recover for many centuries afterwards.

And granted, the Romans were a little more obsessed with lead than could possibly have been healthy. But these data are supported by reconstructions of silver mining, copper mining, and iron mining. All of these are easily quantifiable activities that reinforce Maddison, Lo Cascio, and Malanima’s picture of economic decline between the fall of Rome and 1000.

We see a similar decline in population. The Atlas of World Population History thinks that continental Europe had a population of 36 million people at its peak in 200 AD, falling to 26 million at a nadir in 600 AD, and gradually recovering back to 36 million or so around 1000 AD. Various other estimates for the population of the Roman Empire and medieval Europe broadly support this picture (though remember that the Roman Empire didn’t occupy the same space as medieval Europe and so comparisons have to be more complicated than just comparing two sets of numbers). If this is true, the Classical to Dark Age transition caused a population decrease of about 10 million, or 30% of the population (though some of this happened in Late Antiquity). These are the sorts of numbers usually only associated with the worst plagues and genocides.

Classical Rome had a population of between 500,000 and a million. Even classical Athens had a population of over 100,000. By mid Dark Ages, there was no city in Christian Western Europe larger than about 50,000 people. The infrastructure for maintaining large urban populations had fallen apart.

And true, a lot of this is sparse and reconstructed. My usual go-to for economic history questions, Tumblr user xhxhxhx, was able to get me a bunch of excellent graphs comparing classical Rome to the High Middle Ages, classical Rome to the Golden Age of Islam, High Middle Ages to the Golden Age of Islam, etc. When I complained that none of them compared anything to the Dark Ages which was the whole point of my question, he answered that the data were worse quality, because “civilization collapsed, so fewer people were tracking wages and prices”.

So yes, I agree that there’s only a limited amount of data proving that the Dark Ages sucked. That’s because civilization collapsed, so people weren’t keeping great records. I don’t think this is a strong argument against the Dark Ages being bad.

7. But aside from the economy, there was still lots of great culture and intellectual advancements

If I ask Google for a list of the hundred greatest philosophers of all time, it brings up http://www.listal.com/list/100-greatest-philosophers. It doesn’t seem especially professional or official, but it’s a decent-looking list and because it’s the top Google result I can prove I wasn’t biased by selecting it.

Here’s a graph of number of European philosophers on the list per 500 year period:

The giant pit from 500 to 1000 where there was not a single European philosopher worthy of inclusion on the list corresponds to the traditional concept of a Dark Age without very impressive intellectual output.

Harold Bloom has a list of great books in ‘the Western Canon’. Once again separating them into 500 year intervals and graphing:

Again, we see a giant pit from 500 to 1000 AD (though this time it is not completely empty – Beowulf is the sole qualifying work).
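For anyone who wants to reproduce this kind of tally, the binning is simple enough – here’s a minimal sketch, with a made-up placeholder list of (author, approximate year) pairs standing in for the actual Listal and Bloom data:

```python
from collections import Counter

# Placeholder data only: a few (author, approximate year) pairs.
# The real exercise uses the Listal "100 greatest philosophers" list and
# Bloom's Western Canon, restricted to European authors.
authors = [
    ("Plato", -370), ("Aristotle", -340), ("Augustine", 410),
    ("Boethius", 520), ("Aquinas", 1270), ("Descartes", 1640),
]

# Count authors per 500-year interval, keyed by the interval's start year.
counts = Counter((year // 500) * 500 for _, year in authors)

for start in sorted(counts):
    print(f"{start} to {start + 500}: {counts[start]} author(s)")
```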

Here’s a map (admittedly a later reproduction, since the originals are lost) by the greatest classical geographer Ptolemy:

And here’s an 8th-century map by Beatus of Liebana:

I’m not cheating here by taking the worst-quality Dark Age map (that would be one of these). If you can find a better Christian Western European map from 500 – 1000, tell me and I’ll replace this one with it. But as far as I can tell, this really was state-of-the-art.

The decreased quality of intellectual output seems to have been matched by a decline in quantity. I can’t find any great sources quantifying the number of books written in the classical world, but there are a few semi-reliable numbers about library size. The Ulpian Library of Emperor Trajan seems to have had tens of thousands of scrolls, and it was only one of as many as 28 libraries in Rome. Estimates of the number of volumes in the Library of Alexandria range from 40,000 to 400,000. Archaeologists studying the Villa of the Papyri in Herculaneum, a private residence in a medium-sized town, have found a private library of almost 2,000 scrolls.

Medieval libraries seem to be much smaller. From Oxford Bibliographies:

It follows from this that the wealth and fame of any institution that required books would inevitably affect the size of its library, and, given the fact that books were always expensive, medieval libraries were, from a modern point of view, not large. The largest Anglo-Saxon libraries may have contained about two hundred books. In 1331 the collection at Christ Church, Canterbury, numbered 1,850, which may well have been the biggest collection in England and Wales. In 1289 the library of the university of Paris contained 1,017 volumes which, by 1338, had increased to 1,722—an increase of about 70 percent.

This might not be entirely fair – Roman scrolls were smaller than medieval books, so a work that took up one medieval book might have occupied several Roman scrolls, inflating the size of Roman libraries. But there still seems to be a pretty big gap between the tens to hundreds of thousands of volumes in classical libraries and the few hundred to few thousand in libraries all the way up until the High Middle Ages.

[EDIT: This might not be true – see here]

In a lot of cases, the people of the Dark Ages (and the High Middle Ages afterwards) themselves acknowledged this. The Roman author Vitruvius was the gold standard for architecture up to the Renaissance, and Brunelleschi became famous for creating a dome that surpassed the Roman domes made 1300 years earlier. Roman doctors like Galen and Celsus were semi-worshipped by medieval doctors; when the 16th century (!) doctor Theophrastus von Hohenheim became known as “Paracelsus” (meaning “equal to or better than Celsus”), it was taken as an outrageous boast of ability despite his having the benefit of 1500 extra years of medical science.

8. The Dark Ages weren’t all bad. There were still a few important accomplishments. Therefore, they cannot truly be called “dark”.

The night includes several bright things, such as the moon, the stars, and streetlights. But it’s still fair to call the night “dark”. You don’t have to prove that 100% of something fits a description at 100% of times to use the description.

One of the links from the top of the post says:

If the “dark ages” were so unproductive and backwards, how does one explain the proliferation of inventions and developments during this time period? A simple listing of inventions, discoveries and developments demonstrates that the Middle Ages were anything but dark.

…then goes on to give various inventions, the only ones of which from 500 AD – 1000 AD are “collar and harness for horses and oxen”, “iron horseshoes”, and “the swivel axle”.

Look. I am sure that horseshoes were a revolutionary advance in equine footwear. But the ancient Greeks gave us geometry, history, cartography, the screw, the water wheel, gears, cranes, lighthouses, and fricking analog computers. If you want to stake your claim to be more than a miserable failure as a historical age, you are going to have to do better than horseshoes.

(also, maybe the Romans invented iron horseshoes first anyway?)

9. I still think the term “Dark Ages” could possibly lead to misconceptions.

Yeah.

I like this debate because it’s so pointless, but also reveals some of the basic structure of these kinds of arguments. Like most language questions, we act like we’re debating facts, when in fact we’re debating fuzzy category boundaries that are underdetermined by facts. See previous work on is Pluto a planet?, is obesity a disease?, are transgender people their chosen gender?, etc.

There are no strict criteria for what makes something a Dark Age, or for whether the term should be used at all. We’re left to wonder whether using it conveys more useful information than it does misinformation.

There are many interpretations of “The Dark Ages happened” that might be wrong, like:

1. There was darkness everywhere, not just in Europe
2. There was darkness in Europe all the way until the Renaissance, and the High Middle Ages sucked
3. Every single person in this era was an illiterate superstitious peasant covered in filth, and not one good thing ever happened
4. Greco-Roman civilization was better in every way than the period that followed it, including morally

On the other hand, there are many interpretations of “the Dark Ages didn’t happen” that might also be wrong, like:

1. The fall of Rome was not associated with a decline in wealth and population.
2. The fall of Rome was not associated with a loss of capacity for things like urban living or large-scale infrastructure
3. The intellectual output of the period was exactly as high in quality and quantity as the intellectual outputs of other periods
4. Civilization always proceeds in a nice Whig History straight upward line with no risk of catastrophic collapses

Surely people can get caught in different bravery debates here. If they live in a bubble where everyone falls prey to the first set of misconceptions, it can be tempting to try to rectify that by saying the Dark Ages never happened. If they (like me) live in a bubble where everyone seems to fall prey to the second, it’s tempting to…well, write a post like this one.

And then there are political implications that will work for the benefit of one group or another. If there was a Dark Age:

1. …maybe it casts Catholicism or Christianity in a bad light, since this was also the age when they rose to be a major power
2. …maybe it points to a broader conflict between science and religion, since this was in many ways a very religious age
3. …maybe it suggests that civilization is more fragile than we think, and since it collapsed once it can collapse again
4. …maybe it makes Greece and Rome look extra good, since they were ahead of the curve in terms of civilizational greatness

Pictured: one way to politicize this discussion; not recommended

And finally, there are signaling aspects. Since everybody hears a vague Monty-Python-And-The-Holy-Grail-esque conception of the Dark Ages (“He must be a king…he doesn’t have shit all over him”), but only people who take a history class in college hear about the Continuity Thesis, loudly proclaiming that there was never a Dark Age is one way to signal education and intellectualism (I dare you to tell me that isn’t what’s going on in this Tumblr post). On the other hand, if you’re one of those people who rails against “postmodernism” and “cultural relativity” and wants a reputation for “calling a spade a spade”, it might be gratifying to get to say that actually, that one historical era that seems kind of sucky (but fancy college professors keep insisting otherwise) does, in fact, suck.

I think I know why this question bothers me so much, and it’s because I hate when faux-intellectuals give stupid black-and-white narratives that are the tiniest sliver more sophisticated than the stupid black-and-white narratives that the general population believes, then demand to be celebrated for their genius and have everyone who disagrees with them shunned as gullible science-denying fools.

(I know a lot of people accuse me and this blog of doing exactly this, and I’m sorry. All I can say is that I’m at the odd-numbered levels of some signaling game you’re at the even-numbered levels of, and it sucks for all of us.)

For other people, maybe it’s something different. Maybe a Chinese historian doesn’t like the term “Dark Ages” because she sees too many people think Europe-specific terms apply to the whole world, and for her the tiny number of people who do this are so annoying that it overwhelms any possible advantage the idea might have. Maybe a Muslim likes it because it helps contrast the poverty of Christendom with the glory of al-Andalus, and shake the myth that Europe has always been on top. I don’t know.

10. So you’re saying both positions are true and everyone is equally right?

No. Although I sympathize with the feelings behind both positions, I say the Dark Ages happened. I think the best evidence we have suggests the fall of Rome (and the period just before) was associated with several centuries of economic and demographic decline, only reaching back to their classical level around 1000 AD. I think it was also associated with a broader intellectual and infrastructure decline, which in some specific ways and some specific fields didn’t reach back up to its Roman level until the Renaissance. I think that common sense – the sense you get when you treat the question of the Dark Age the same as any other question, and try to avoid isolated demands for rigor – says that qualifies for the phrase “Dark Age”.

[see also: Highlights From The Comments On Dark Ages]


SSC Meetup: Bay Area 10/14

WHEN: 3 PM on Saturday, October 14

HOW: We haven’t done well with cafes or other more traditional meetup spaces in the past, so we’ll probably just meet outside and sit on the grass. Bring blankets / refreshments iff you want them.

WHERE: Berkeley campus, meet at the open space beside the intersection of West and Free Speech. Please disregard any kabbalistic implications of the meetup cross-streets.

WHO: Special guest Scott Aaronson from Shtetl-Optimized. Also me, Katja Grace, possibly David Friedman, hopefully other people.

WHY: Because Professor Aaronson will be giving a lecture on Black Holes, Firewalls, And The Limits Of Quantum Computers (to which you’re all invited) at Berkeley later in the week and kindly agreed to hang out with us while he was in town.

See you there!


SSC Journal Club: Serotonin Receptors

Pop science likes to dub dopamine “the reward chemical” and serotonin “the happiness chemical”. God only knows what norepinephrine is, but I’m sure it’s cutesy.

In real life, all of this is much more complicated. Dopamine might be “the surprisal in a hierarchical predictive model chemical”, but even that can’t be more than a gross oversimplification. As for serotonin, people have studied it for seventy years and the best they can come up with is “uh, something to do with stress”.

Serotonin and brain function: a tale of two receptors by Robin Carhart-Harris and David Nutt tries to cut through the mystery. Both authors are suitably important to attempt such an undertaking. Carhart-Harris is a neuropsychopharmacologist and one of the top psychedelic researchers in the world. Nutt was previously the British drug czar but missed the memo saying drug czars were actually supposed to be against drugs; after using his position to tell everyone drugs were pretty great, he was summarily fired. Now he’s another neuropsychopharmacology professor, though with cool side projects like inventing magical side-effect-free alcohol. These are good people.

And they have a good theory. One stumbling block in past attempts to understand serotonin was the brain’s dozen or so different types of serotonin receptors, all of which seem to do kind of different things. Carhart-Harris and Nutt (subsequently: CH&N) focus on two of these which show up again and again in psychiatry: 5-HT1A and 5-HT2A. Past studies had always shown these two receptors having kind of opposite effects, which confused things pretty thoroughly: why would you want a chemical that does two opposite things?

5-HT1A is the most common serotonin receptor in the brain. When SSRI antidepressants like Prozac, Zoloft and Celexa increase serotonin, this is the receptor most of that serotonin goes to. Some other antidepressants and antianxiety medications like BuSpar, Viibryd and Trintellix just stimulate this receptor directly. So it looks like this receptor does something like “reduce depression and anxiety”. But this falls afoul of a version of Algernon’s Law: there shouldn’t be any switch in the brain which is 100% good or 100% bad. Why have a receptor for treating depression and anxiety, rather than just always keep the receptor at maximum so you’re never depressed or anxious?

5-HT2A is another pretty common receptor. Most new antipsychotics like Seroquel and Abilify block this receptor. And most psychedelic drugs like LSD and magic mushrooms stimulate it really hard. Since psychedelics make you kind of crazy, and antipsychotics make you stop being crazy, 5-HT2A must have something to do with psychosis. Of course, this is another Algernon’s Law violation: why is there a receptor just to make you psychotic?

1A and 2A seem to “fight” each other. The more you activate 1A, the quieter 2A becomes – this is why people on SSRIs get less effect from psychedelics. And all the drugs that block 2A are also decent antidepressants – this is why people recommend Seroquel for depression even though it’s an antipsychotic – and this seems to work because blocking 2A increases 1A.

On the other hand, there also seems to be some deeper unity. 1A makes you less depressed. 2A – well, we keep hearing all these studies, some of them from Dr. Carhart-Harris himself, showing that magic mushrooms treat depression really well. Not just as a once daily medication, but in the sense that one trip on mushrooms can make you long-term – maybe permanently – less depressed. This is pretty weird. Blocking 2A makes you less depressed? But stimulating 2A also makes you less depressed, in a different and more permanent way? What’s going on?

CH&N argue: both 1A and 2A promote coping with stress. 1A promotes “passive coping”. 2A promotes “active coping”.

Passive coping is basically being stoic, having a stiff upper lip, and waiting it out. Imagine you’re at some kind of terrible job and your boss is bullying you all the time and you can’t stand it and you get depressed and anxious. Your psychiatrist gives you an SSRI (or BuSpar, or Viibryd, or some other 1A stimulator) and now, you can stand it. Your boss is still just as mean. Your life is still just as bad. But you sort of shrug, think “what can I do?” and get back to work. This isn’t the most inspiring story, but it’s better than alternatives like “being a wreck” or “snapping and attacking your boss”. Did I mention that 1A is known to decrease impulsivity and aggression? Makes sense.

Active coping is…uh…sort of unclear from the paper. It sounds like it should mean working to solve the problem – quitting your job, finding a way to stand up for yourself. Heck, even snapping and attacking your boss would tie in with the psychosis angle. This is…not exactly where CH&N go, as far as I can tell. Active coping is like…an LSD trip? It’s some kind of grabbing the brain and shaking it, in the hopes that maybe when it settles it will be in a state that’s better able to deal with whatever’s going on. This sort of makes sense, insofar as big steps like quitting your job might require a lot of mental shake-up to consider. It seems to have something to do with a process of increased plasticity, becoming bolder to avoid getting trapped at local minima, and increasing the information-theoretic entropy of brain states. This definitely sounds like the sort of thing that can cause psychosis, and maybe it sounds like the sort of thing that might help?

MDMA, a strong 2A agonist, is currently in Phase III trials as a treatment for post-traumatic stress disorder. It looks really promising. Under CH&N’s theory, this makes a lot of sense. If you have trauma, your thoughts get stuck in some pattern which is useful for dealing with or avoiding the traumatic situation – for example, an abused child learns to be suspicious and afraid of everybody. People do therapy for years trying to cast off these thought patterns; they know they’re no longer adaptive, but they just can’t get rid of them. On MDMA – and especially in MDMA-assisted therapy – people find it easy; the usual metaphor is calcified thought patterns suddenly become fluid and re-writable. Is this the sort of “increased plasticity” that CH&N describe?

This theory gives an explanation of how 1A and 2A can have such a complex – and sometimes antagonistic – relationship. When a person undergoes adversity, their brain releases serotonin, which starts by hitting the 1A receptors. They bear it stoically and hopefully soldier through. But if the adversity gets really bad and the serotonin release passes some threshold, it starts hitting the 2A receptors instead. Now their brain realizes things are pretty bad, it’s got to try high-variance strategies, and so it increases its randomness in the hope of stumbling across a way-out-there solution to the problem.
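Read very literally, the mechanism being described here is just a threshold switch. Here’s a toy sketch of that reading – the numbers and the threshold are invented for illustration and come from me, not from the paper:

```python
def coping_mode(serotonin_release: float, threshold: float = 0.7) -> str:
    """Toy version of the CH&N picture: moderate serotonin release mostly
    engages 5-HT1A ("passive coping"); release past some threshold starts
    engaging 5-HT2A ("active coping"). All numbers are made up."""
    if serotonin_release < threshold:
        return "1A-dominant: passive coping - endure the stressor stoically"
    return "2A-dominant: active coping - increase plasticity, try high-variance strategies"

for level in (0.2, 0.5, 0.9):
    print(f"serotonin release {level:.1f}: {coping_mode(level)}")
```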

(not super-clear what problem John Lilly thought he was solving by accusing space aliens of orchestrating a massive conspiracy to manipulate the world’s coincidences, but it’s a pretty safe bet the 2A receptor was involved somehow.)

I find the whole thing pretty plausible. But as written, it doesn’t entirely answer the Algernon’s Law questions. Why doesn’t everyone just have 1A and 2A functions set to max all the time? What’s the tradeoff?

There are some obvious possibilities. Too much 2A stimulation makes you psychotic. This puts the efficacy of atypical antipsychotics like Seroquel in a new light: they’re saying something like “keep your thoughts very careful and low-risk, this isn’t a good time to be deviating from normal patterns”. And so maybe someone who otherwise would have believed the space aliens were putting a transmitter in his teeth will decide not to think that. Is there a shade of Bayesian brain theory here? Is the phrase we’re looking for “strength of priors”? I don’t know.

Likewise obvious: if 1A promotes stoic coping, then too much of it prevents you from actively making your life better. One can imagine how this would be more relevant in the environment of evolutionary adaptedness than today. Back then stressors could have been some specific person whose skull you could bash in with a rock. Nowadays they tend to be things like corporations, national governments, and groups of people with terrible politics on Twitter; attempted skull-bashing, as satisfying as it might feel, is highly disrecommended.

I don’t know if these stories are true. They don’t really explain why 1A and 2A function seem inversely related. Is this just a wiring issue? Or is there some fundamental reason why ability to passively cope can’t coexist with creative outside-the-box problem-solving? Maybe the coping involves some sort of mental resolution not to let all the stress change the brain at all, and the problem-solving involves the brain becoming superplastic and really easily influenced by external events. But it’s not really clear why either of those things should be necessary.

Also, we should remember that although CH&N’s theory explains a lot, we’re reading the case they’re presenting, and there’s a lot they leave out. Some might complain that calling 2A the “active coping receptor” is as reductionistic as the whole “dopamine is the reward chemical” thing – 2A is also involved in obesity, sexual dysfunction, some forms of insomnia, possibly chronic fatigue syndrome, platelet clumping, et cetera. All of these psychedelics do opposite things acutely and chronically – something CH&N acknowledge – so you have to be really careful with time course in order to figure out whether your acid trip is treating depression due to acutely increased 2A stimulation or chronically decreased number of 2A receptors. Both Carhart-Harris and Nutt have spent big parts of their careers advocating more use of psychedelics, so them coming up with a theory of why psychedelics are really good is both reasonable and suspicious.

Still, this is as good a theory of serotonin function as anything else I’ve seen, and it will be exciting to see if it suggests any avenues for experimental research to confirm or refute it.

In Favor Of Futurism Being About The Future

From Boston Review: Know Thy Futurist. It’s an attempt to classify and analyze various types of futurism, in much the same way that a Jack Chick tract could be described as “an attempt to classify and analyze various types of religion”.

I have more disagreements with it than can fit in a blog post, but let’s stick with the top five.

First, it purports to explain what we should think about the future, but never makes a real argument for it. It starts by suggesting there are two important axes on which futurists can differ: optimism vs. pessimism, and belief in a singularity. So you can end up with utopian singularitarians, dystopian singularitarians, utopian incrementalists, and dystopian incrementalists. We know the first three groups are wrong, because many of their members are “young or middle-age white men” who “have never been oppressed”. On the other hand, the last group contains “majority women, gay men, and people of color”. Therefore, the last group is right, there will be no singularity, and the future will be bad.

You’re going to protest that there has to be something more than that. Read the article. There really isn’t. The author ignores the future almost completely, in favor of having very strong opinions on which futurist movements include the right or wrong sorts of people. AI risk researchers are “majority men, although more women than in the previous group”; techno-utopians are “more women still…but in the end that does not denote progress”. All singularitarians were “sex-starved teenagers” and they all “wax eloquent about meritocracy over expensive wine” in a “super-rich bubble”. The lovingly detailed descriptions of everyone’s social class, racial breakdown, gender ratio, what politics the author imagines they have, and what sexual insecurities she thinks produced their opinions contrast markedly with a total lack of concern for any of their beliefs or opinions about the future, their justifications for those beliefs, or whether those justifications are true or false. Literally the only future-related thing we know about the article’s third quadrant is that they may be involved in Bitcoin or something.

The author never even begins to give any argument about why the future will be good or bad, or why a singularity might or might not happen. I’m not sure she even realizes this is an option, or the sort of thing some people might think relevant.

Second, the article’s section on singularitarianism never mentions anything about the Singularity and doesn’t really seem to understand what the Singularity is. Its example of Singularity technologies are “augmenting intelligence through robotics”, “better quality of life through medical breakthroughs”, “cryogenics” (I assume it’s confusing this with cryonics), “medical strategies for living forever”, and “possibly even the blood of young people.”

None of these (except maybe the first) relate to the Singularity, which is defined as a point at which the rate of technological advance reaches near-infinity and it’s impossible to predict what happens afterwards. The article seems to use “singularitarianism” to mean “cool near-future technologies”, which is kind of the opposite of its real meaning. This is a fatal error for an article proposing a system classifying all futurists as “singularitarian” vs. “nonsingularitarian”.

It makes sense only in the context of the author having no interest in futurist movements at all, and indeed she later more-or-less admits that by ‘singularitarian optimists’ she means ‘rich white people she doesn’t like’. When discussing Elon Musk, whom some might call a pessimist based on his belief that the Singularity will destroy the world and doom humanity, she says that “being an enormously rich and powerful entrepreneur, he probably belongs in the first [Singularity optimist] group”.

Third, the article wants to classify some technologies as inextricably associated with privilege, but it has a pretty weird conception of which ones they are. It gives five examples of technologies that it’s possible to worry about without being a privileged white man, and every one of them is a different form of algorithmic bias. Really? That’s the only future technology it’s okay to care about? So much so that of five slots for potentially worrying technology, you filled all five with the same one?

Likewise, when the author discusses bad “singularity” technologies that only white men could want, she includes “better quality of life through medical breakthroughs”. I’m sure this just slipped in by accident. I’m sure (pretty sure?) if we pointed her to someone with chronic pain who hasn’t been able to leave the house in years and asked whether it might be good to have technology that could help this person, she would say yes. But it’s a really interesting slip-up to make. I’ve written hundreds of articles during my lifetime and I don’t think I’ve ever mistakenly said that only privileged white men could care about not being sick.

Again, this would make sense if the author doesn’t really believe in futurology except as a way of sending the right class signals. Helping sick people improve their quality of life? Do gross male nerds from the outgroup support that or oppose that? Okay, sold. I’m sure if her mental editor had caught it, she’d have realized that she was supposed to support that kind of thing, but it would be a post-processing addition to her thought stream rather than a natural component of it.

Fourth, the article presupposes a bitter conflict between the four quadrants, whereas actually people tend to be a lot more on the same side than she expects.

Her pessimists are concerned about algorithmic bias making banks less likely to extend credit to poor people. But her optimists just care about flashy new things like cryptocurrency. Okay. But one possible application for cryptocurrency is peer-to-peer microfinance via smart contracts – ie one of the most promising solutions to bias in big financial institutions. You don’t have to agree this is a good solution. But cryptocurrency enthusiasts are working on it, and it seems weird to deny that this matters, or that the whole reason behind developing some of these flashy new technologies is to solve recognized societal problems.

And her singularitarians are strategizing how to deal with far-future advanced AI algorithms, while her nonsingularitarians are strategizing how to deal with near-future primitive AI algorithms. These seem like…not entirely the opposite of each other? Imagine you were writing an article on the different kinds of climatologists studying global warming. There’s the kind who indulge in crazy sci-fi scenarios where entire cities flood and the Earth becomes uninhabitable. And then there’s the kind dealing with important real-world problems like increased frequency of hurricanes and creeping desertification. Is this a reasonable distinction? Which kind should you be?

(Boston Review readers: “How should I know? You didn’t tell me what ethnicity they are!”)

Most people concerned about climate change are concerned about both those things. Maybe there’s a little room for disagreement on the best way to balance long-term versus short-term goals – should we build seawalls to protect our cities today, or start a program of power plant retrofitting which will pay off in twenty years? But to try to turn these two positions into arch-enemies would be ridiculous and destructive. The scientists involved may have different research interests and skillsets, but not necessarily different opinions. Obviously we should have some people working on near-term problems and other people laying the groundwork to work on long-term problems.

In real life, this is what futurists are doing too. The Asilomar Conference on Beneficial AI was organized by people whose main interest was far-future Singularity scenarios, but it included some of the top experts on algorithmic bias, gave the subject a lot of airtime, and ended up with all participants signing onto a set of principles urging more work both on near-term AI problems like algorithmic bias and long-term AI problems like the development of superintelligence. Jed McCaleb, founder of Bitcoin exchange Mt. Gox, donated $500,000 of his profits to the Machine Intelligence Research Institute, which deals with long-term concerns about the Singularity. In the real world, people from all four “quadrants” of futurism are either allies, or the same people.

Again, I feel like this is the kind of error you could only make if you totally missed that futurism was a real subject, and you just wanted to make it into a morality play for your particular political opinions.

Fifth, another quote from the article:

In the end my taxonomy (as amusing as I find it) doesn’t really matter to the average person. For the average person there is no difference between the singularity as imagined by futurists in Q1 or Q2 and a world in which they are already consistently and secretly shunted to the “loser” side of each automated decision.

I already posited that the author doesn’t understand “Singularity”, but this is something beyond that. This is horrifying. There will be no difference for the average person between a (positive or negative) post-singularity world and the world now? What?

Listen up, average person. If there’s a negative singularity you will notice. Because you will be very, very dead. So will all the rest of us, rich and poor, old and young, black and white.

And if there’s a positive singularity, you will also notice. I would promise you infinite wealth, but that sort of thing kind of loses its meaning in a post-scarcity society. I would promise you immortality, but who knows if we’ll even have individual consciousnesses at that point? I would promise you bread and roses, but they would be made of hyperintelligent super-wheat and fractal eleven-dimensional time blossoms.

I don’t care if you think this vision is stupid. We’re not arguing about whether this vision is stupid. We’re arguing about whether, if this vision were 100% true, it would make a difference in the life of the average person. The Boston Review is saying it wouldn’t. I’m sitting here with my mouth gaping open so hard I’m worried about permanent jaw damage.

A Singularity that doesn’t make a difference in the life of the average person isn’t a Singularity worth the bits it’s programmed on. And the triumphs of science have always been triumphs for common people, whether it was the Green Revolution saving hundreds of millions of lives in the Third World, or the advent of antiparasitic drugs that are wiping malaria from Africa. When Ray Kurzweil says that the future is exponential, he’s not just talking about the number of transistors per square inch, he’s talking about this (and note the green line representing “percent of people not living in extreme poverty”):

The Singularity is already here, it’s just unevenly distributed across various scales of x-axis

This is what everyone in whatever school or quadrant of futurism you care to name is thinking about. This is the only true thing. Drones, Bitcoin, Uber, superintelligence, whatever, these are part of it, but they’re not the goal in itself. We are going to fight our hardest to end poverty, disease, death, and suffering, and we’re going to do it in spite of petty Boston Review articles telling us we should stop doing it so we can focus on hating each other for stupid reasons.

So here’s my division of futurists into two groups: shining examples, and terrible warnings. And the patron saint of the latter category is Samuel Madden.

Madden was an Anglican clergyman in 18th-century Ireland, and maybe the first futurist. In 1733, he published Memoirs of the Twentieth Century, a novel about people in 1999 sending letters back through time to tell their 18th-century predecessors what the future would hold.

How did the prognosticators of 1733 imagine the future? Was it utopian? Decadent? Miserable? Beautiful? Incomprehensible?

Actually, it was none of those things. It was exactly like 1733 in every way, and the future people were just writing back to remind everyone how much Catholics sucked.

I am serious about this. Book-World-1999 had no technological advances over 1733. The political situation was more or less the same, although the Wikipedia review mentions that “Tatars” had taken Constantinople at some point. The important thing, the thing that they invented time travel to tell the past, was that Catholics were still bad. Really, really bad. The people of 1733 really needed to know just how amazingly bad Catholics were and would continue to be.

The problem here isn’t just that Catholics aren’t really that bad. I feel like even if Catholics were exactly as bad as Samuel Madden thought, there would still be an unforgivable pettiness here. If we could show Samuel Madden the real future of his world, I hope he would be awed and horrified beyond words. The hope and heartbreak of the French Revolution, the lightning-fast transformations of industrialization, the slow march of atheism through previously Christian Europe, the otherworldly horror of the atom bomb, the glory of the moon landing, and then a 1999 poised on the edge between a Fukuyaman end of history and collapse into environmental disaster and dystopia – nobody could write a book as grand as this, but surely one could win eternal renown just by making the feeblest attempt. And instead, we get “EVERYTHING THE SAME; ALSO, HATE CATHOLICS”. The only emotion I can muster is a sort of profound disgust.

And I can’t help but feel the same disgust when I read “Know Thy Futurist”. I don’t know whether the future will be better or worse than the past, but I feel pretty sure it will be grander. Either we will perish in nuclear apocalypse or manage to avert nuclear apocalypse; either one will be history’s greatest story. Either we will discover intelligent alien life or find ourselves alone in the universe; either way would be terrifying. Either we will suppress AI research with a ferocity that puts the Inquisition to shame, or we will turn into gods creating life in our own image; either way the future will be not quite human. And faced with all of the immensity and danger of the coming age, the best the Boston Review can pull off is “HAVE YOU CONSIDERED THAT SOME OF THE PEOPLE SPECULATING ABOUT THIS MIGHT BE IN YOUR (((OUTGROUP)))?”

There’s a Deeply Wise Saying that all science/prediction/philosophy/theology/whatever will inevitably reflect the parochial conditions of the writer’s own time. Maybe so. But I feel like it doesn’t have to be quite as parochial as Samuel Madden. If the people of 1733 had thought about things really hard, tried to transcend the feuds of their local time and place, might they have predicted the Industrial Revolution? Might they have been able to accelerate it, delay it, send it along a different track that ameliorated some of the displacement and poverty it caused in reality? I don’t know. But it would have been a pretty amazing attempt. What would it look like to try to do something like that today? Is “Know Thy Futurist” making it more or less likely that will happen?

In the grand scheme of things, it’s probably dumb for me to be so angry about this one article. I guess what bothers me is that it’s not just one article. Probably a majority of the stuff I see written evaluating the future, or technology, or Silicon Valley these days seems to take basically this perspective. I was really mad at Maciej Ceglowski a few months ago because his anti-singularity screed was about half this kind of thing, but by this point 50%-real-argument is looking pretty good. More and more people are dropping the 50%-real-argument veneer and just admitting that stereotypes and ad hominems are the way they want to conduct everything. Do we really need to turn our hopes and dreams about the world to come into yet another domain where white people accuse other white people of whiteness and are accused of whiteness in turn until everyone hates each other and anything good and real gets buried in an endless heap of bullshit and 140-character brutal owns?

I wish ignoring this kind of thing were an option, but this is how our culture relates to things now. It seems important to mention that, to have it out in the open, so that people who turn up their noses at responding to this kind of thing don’t wake up one morning and find themselves boxed in. And if you’ve got to call out crappy non-reasoning sometime, then meh, this article seems as good an example as any.

If we get very lucky, there will actually be a future. Some of the people in it will probably read the stuff we write. They’ll judge us. I assume most of that judgment will involve laughing hysterically. But we can at least aim for laughter that’s good-natured instead of scornful. Sub specie aeternitatis, how much of what we do today is going to look to them the way Samuel Madden does to us?

OT86: Utopen Thread

This is the bi-weekly visible open thread. Post about anything you want, ask random questions, whatever. You can also talk at the SSC subreddit or the SSC Discord server. Also:

1. There’s a public beta of Less Wrong 2.0 up at lesserwrong.com. See also the overview of what’s going on and why one might want such a thing.

2. Probably there will be an SSC meetup in Berkeley on October 14. I’ll post more details later, but if it’s important to have a little advance warning, now you have it.

Posted in Uncategorized | Tagged | 873 Comments

SSC Survey Results On Trust

Last post talked about individual differences in whether people found others basically friendly or hostile. The SSC survey included a sort of related question: “Are people basically trustworthy?”

The exact phrasing asked respondents to rate other people from 1 (“basically trustworthy”) to 5 (“basically untrustworthy”). 4853 people answered. The average was 2.49 – so skewed a bit towards higher trust. The overall pattern looked like this:

Trust didn’t differ by gender. Women averaged 2.56 and men 2.48, not a significant difference even with our fantastic sample size.

And I couldn’t detect it differing by race. Whites averaged 2.47 and nonwhites 2.59, which was nonsignificant despite decent sample size. Blacks were 2.81 and trended toward significance vs. whites, but the sample size was too small to be sure.

And I couldn’t detect it differing by religiosity. Committed theists (n = 506) had a trust level of 2.54, no more or less trusting than the average (and mostly atheist) SSC population.

And I couldn’t detect it differing by intelligence. There was no correlation between trust and either IQ or SAT score. There was a significant difference (p = 0.001) based on education level, all the way from PhDs at 2.35 to those with only a high school education at 2.62.

There were some decent-sized differences among different US states, with more urban and liberal states being more trusting. Among the states with decent sample size, California (n = 608) was 2.43, New York (n = 298) was 2.47, and Texas (n = 143) was 2.75; the California/Texas difference was significant at p = 0.001. I was only able to eyeball rather than actually significance-test the urban/liberal correlation, but it looked pretty strong.

There were similar differences between countries. Germany (n = 192) at 2.35, the UK (n = 353) at 2.37, and Canada at 2.39 were all significantly more trusting than the US (n = 3124) at 2.53 – but obviously the effect wasn’t too impressive. There were only two non-western countries with remotely usable sample sizes. Brazil (n = 28) was 2.89, and India (n = 27) was 2.81. The non-western/Anglosphere difference was significant even with the low sample size. The most trusting city in the world was Toronto (n = 60), at 2.23.

There were so many professions, with such small sample sizes, that I wasn’t really confident any of them were much more or less trusting than others. But for what it’s worth, the number one least trusting profession, at 3.00, was mental health, and I am 100% not at all surprised. Otherwise there seemed to be a weak trend for nerdier and math-ier professions to be more trusting than others.

By politics, the ranking looked like this:

Social democratic: 2.38
Liberal: 2.40
Libertarian: 2.48
Conservative: 2.67
Communist: 2.80
Neoreactionary: 2.97
Alt-right: 2.97

Looks like the same trend of conservative = less trusting. Harder to figure out what to think about left vs. liberal, given the social democrats in the lead vs. communists very far behind.

Effective altruists who had taken the GWWC Pledge were much more trusting – 2.19 – than any other natural group I could find in this survey, more trusting even than Torontonians. After some work, I managed to find an unnatural group that beat them. Polyamorous Less Wrong readers from California – my proxy for the real-life Bay Area rationalist community – had a trust score of 2.13.

I had wondered if more trusting people would want less strict moderation, but the opposite was actually true – less trusting people wanted weaker moderation. Maybe this is mediated by conservatism – or maybe they just don’t trust me to moderate!

Autism, anxiety, OCD, and drug use all lowered trust; depression and bipolar disorder did not.

People who put down “Other” in any category were always much less trusting than any of the options, almost as if they didn’t trust people to accurately interpret the binned choices.

If you want to look into this yourself, you can find the data publicly available here, but be warned – predictably, the people who agreed to let me make their data public were systematically more trusting (2.49) than the people who refused (2.70).
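
For readers who want to check comparisons like the California/Texas one against that public file, here is a minimal sketch of the sort of test involved. The file name and column names below are placeholders rather than whatever the actual survey export uses, so adjust them before running anything.

```python
import pandas as pd
from scipy import stats

# Placeholder file name and column names -- check the public survey
# export for the real ones before running this.
df = pd.read_csv("ssc_survey_public.csv")

ca = df.loc[df["State"] == "California", "Trust"].dropna()
tx = df.loc[df["State"] == "Texas", "Trust"].dropna()

print(f"California: n = {len(ca)}, mean = {ca.mean():.2f}")
print(f"Texas:      n = {len(tx)}, mean = {tx.mean():.2f}")

# Welch's t-test: doesn't assume equal variances or equal sample sizes.
t, p = stats.ttest_ind(ca, tx, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Welch’s version of the t-test is the natural choice here because the state subsamples differ a lot in size.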

Different Worlds

I.

A few years ago I had lunch with another psychiatrist-in-training and realized we had totally different experiences with psychotherapy.

We both got the same types of cases. We were both practicing the same kinds of therapy. We were both in the same training program, studying under the same teachers. But our experiences were totally different. In particular, all her patients had dramatic emotional meltdowns, and all my patients gave calm and considered analyses of their problems, as if they were lecturing on a particularly boring episode from 19th-century Norwegian history.

I’m not bragging here. I wish I could get my patients to have dramatic emotional meltdowns. As per the textbooks, there should be a climactic moment where the patient identifies me with their father, then screams at me that I ruined their childhood, then breaks down crying and realizes that she loved her father all along, then ???, and then their depression is cured. I never got that. I tried, I even dropped some hints, like “Maybe this reminds you of your father?” or “Maybe you feel like screaming at me right now?”, but they never took the bait. So I figured the textbooks were misleading, or that this was some kind of super-advanced technique, or that this was among the approximately 100% of things that Freud just pulled out of his ass.

And then I had lunch with my friend, and she was like “It’s so stressful when all of your patients identify you with their parents and break down crying, isn’t it? Don’t you wish you could just go one day without that happening?”

And later, my supervisor was reviewing one of my therapy sessions, and I was surprised to hear him comment that I “seemed uncomfortable with dramatic expressions of emotion”. I mean, I am uncomfortable with dramatic expressions of emotion. I was just surprised he noticed it. As a therapist, I’m supposed to be quiet and encouraging and not show discomfort at anything, and I was trying to do that, and I’d thought I was succeeding. But apparently I was unconsciously projecting some kind of “I don’t like strong emotions, you’d better avoid those” field, and my patients were unconsciously complying.

I wish I could say my supervisor’s guidance fixed the problem and I learned to encourage emotional openness just as well as my colleague. But any improvement I made was incremental at best. My colleague is a bubbly extravert who gets very excited about everything; I worry that to match her results, I would have to somehow copy her entire personality.

But all was not lost. I found myself doing well with overly emotional patients, the sort who had too many dramatic meltdowns to do therapy with anybody else. With me, they tended to give calm and considered analyses of their problems, as if they were lecturing on a particularly boring episode from 19th-century Norwegian history. Everyone assumed that meant I was good at dealing with difficult cases, and must have read a bunch of books about how to defuse crises. I did nothing to disabuse them of this.

Then a few days ago I stumbled across the Reddit thread Has Anyone Here Ever Been To An LW/SSC Meetup Or Otherwise Met A Rationalist IRL? User dgerard wrote about meeting me in 2011, saying:

His superpower is that he projects a Niceness Field, where people talking to him face to face want to be more polite and civil. The only person I’ve met with a similar Niceness Field is Jimmy Wales from Wikipedia…when people are around [Jimmy] talking to him they feel a sort of urge to be civil and polite in discourse 🙂 I’ve seen people visibly trying to be very precise and polite talking to him about stuff even when they’re quite upset about whatever it is. Scott has this too. It’s an interesting superpower to observe.

I should admit nobody else has mentioned anything like this, and that narcissism biases me toward believing anyone who says I have a superpower. Still, it would explain a lot. And not necessarily in a good way. I’ve always believed psychodynamic therapies are mostly ineffective, and cognitive-behavioral therapies very effective, because all my patients seem to defy the psychodynamic mode of having weird but emotionally dramatic reactions to things in their past, but conform effortlessly to the cognitive-behavioral mode of being able to understand and rationally discuss their problems. And the more I examine this, the more I realize that my results are pretty atypical for psychiatrists. There’s something I’m doing – totally by accident – to produce those results. This is worrying not just as a psychiatrist, but as someone who wants to know anything about other people at all.

II.

New topic: paranoia and Williams Syndrome.

Paranoia is a common symptom of various psychiatric disorders – most famously schizophrenia, but also paranoid personality disorder, delusional disorder, sometimes bipolar disorder. You can also get it from abusing certain drugs – marijuana, LSD, cocaine, and even prescription drugs like Adderall and Ritalin. The fun thing about paranoia is how gradual it is. Sure, if you abuse every single drug at once you’ll think the CIA is after you with their mind-lasers. But if you just take a little more Adderall than you were supposed to, you’ll be 1% paranoid. You’ll have a very mild tendency to interpret ambiguous social signals just a little bit more negatively than usual. If a friend leaves without saying goodbye, and you would normally think “Oh, I guess she had a train to catch”, instead you think “Hm, I wonder what she meant by that”. There are a bunch of good stimulant abuse cases in the literature that present as “patient’s boss said she was unusually standoffish and wanted her to get psychiatric evaluation”, show up in the office as “well of course I’m standoffish, everyone in my office excludes me from everything and is rude in a thousand little ways throughout the day”, and end up as “cut your Adderall dosage in half, please”.

(“Why is that psychiatrist telling me to cut my Adderall in half? Does he think I’m lying about having ADHD? Is he calling me a liar? These doctors have always treated me like garbage. I HAVE RIGHTS, YOU KNOW!”)

Williams Syndrome is much rarer – only about 1/10,000 people, and most of them die before reaching adulthood. It’s marked by a sort of anti-paranoia; Williams patients are incapable of distrusting anyone. NPR has a good article, A Life Without Fear, describing some of what they go through:

Kids and adults with Williams love people, and they are literally pathologically trusting. They have no social fear. Researchers theorize that this is probably because of a problem in their limbic system, the part of the brain that regulates emotion. There appears to be a dysregulation in one of the chemicals (oxytocin) that signals when to trust and when to distrust. This means that it is essentially biologically impossible for [them] to distrust.

The results are less than heartwarming:

As Isabelle got older, the negative side of her trusting nature began to play a larger role. A typical example happened a couple of years ago, when Jessica and her family were spending the day at the beach. Isabelle had been begging Jessica to go to Dairy Queen, and Jessica had been putting her off. Then Isabelle overheard a lady just down the beach.

“She was telling her kids, ‘OK, let’s go to the Dairy Queen,’ ” Jessica says. “And so Isabelle went over and got into the lady’s van, got in the back seat, buckled up and was waiting to be taken to Dairy Queen with that family.”

Jessica had no idea what had happened to Isabelle and was frantically searching for her when the driver of the van approached her and explained that she had been starting her car when she looked up and saw Isabelle’s face in the rearview mirror.

The woman, Jessica says, was incredibly angry.

“She said, ‘I am a stranger, you know!’ ” Jessica says. Essentially, the woman blamed Jessica for not keeping closer watch on her daughter — for neglecting to teach her the importance of not getting into a car with someone she didn’t know. But the reality could not be more different. “It’s like, ‘My friend, you have no idea,’ ” Jessica says.

In fact, because of Isabelle, Jessica has had to rethink even the most basic elements of her day-to-day life. She can not take Isabelle to the dog park. She tries not to take Isabelle to the store. And when the doorbell rings, Jessica will leap over a coffee table to intercept her.

It’s not just Jessica and her family who must be vigilant. Every teacher at Isabelle’s public school has been warned. Isabelle is not allowed to tell them that she loves them. Isabelle is not supposed to tell other schoolchildren that she loves them. And there are other restrictions.

“She’s not allowed to go to the bathroom alone at her school, because there have been numerous instances of girls with Williams syndrome being molested at school when they were alone in the hallway,” Jessica says. “And these are like middle class type schools. So it’s a very real problem. And, you know, I’d rather her be overly safe than be on CNN.”

Some of the research on these kids is fascinating – I’m not sure I believe the study finding that they’re incapable of racism, but the one finding a deficit detecting anger in faces seems pretty plausible.

Williams Syndrome usually involves mental retardation, but not always. Some of these people have normal IQ. It doesn’t really help. Threat-detection seems to be an automated process not totally susceptible to System II control. Maybe it’s like face-blindness. Intelligence can help a face-blind person come up with some systems to reduce the impact of their condition, but in the end it’s just not going to help that much.

Psychiatric disorders are often at the extremes of natural variation in human traits. For every intellectually disabled person, there are a dozen who are just kind of dumb. For every autistic person, there are a dozen who are just sort of nerdy. And so on. We naturally think of some people as more trusting than others, but maybe that isn’t the best frame. “Trusting” implies that we all receive the same information, and just choose how much risk we’re willing to tolerate. I don’t know if that’s true at all.

A recent theme here has been the ways that our sense-data is underdetermined. Each datum permits multiple possible explanations: this is true of visual and auditory perception, but also of the social world. A pretty girl laughs a little too long at a man’s joke; is she trying to flirt with him, or just friendly? A boss calls her subordinate’s work “okay” – did she mean to compliment him, or imply it was mediocre? A friend breaks off two appointments in a row, each time saying that something has come up – did something come up, or is he getting tired of the friendship? These are the sorts of questions everyone navigates all the time, usually with enough success that when autistic people screw them up, the rest of society nods sagely and says they need to learn to understand how to read context.

But “context” means “priors”, and priors can differ from person to person. There’s a lot of room for variation here before we get to the point where somebody will be so off-base that they end up excluded from society. Just as there’s a spectrum from smart to dumb, or from introverted to extraverted, so there’s a spectrum in people’s tendencies to interpret ambiguous situations in a positive or negative way. There are people walking around who are just short of clinically paranoid, or just shy of Williams Syndrome levels of trust. And this isn’t a value difference, it’s a perceptual one. These people aren’t bitter or risk-averse – or at least they don’t start off that way. They just notice how everyone’s hostile to them, all the time.
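
To put toy numbers on the “context means priors” idea: here is a sketch in which two observers see the same ambiguous event – a friend cancelling plans – but start from different priors about how hostile people generally are. Every probability in it is invented purely for illustration.

```python
def p_hostile_given_cancel(prior_hostile,
                           p_cancel_if_hostile=0.8,
                           p_cancel_if_friendly=0.5):
    """Bayes' rule: P(they're hostile | they cancelled on me)."""
    p_cancel = (p_cancel_if_hostile * prior_hostile
                + p_cancel_if_friendly * (1 - prior_hostile))
    return p_cancel_if_hostile * prior_hostile / p_cancel

# Same ambiguous evidence, two different priors about people in general.
for prior in (0.05, 0.50):
    post = p_hostile_given_cancel(prior)
    print(f"prior P(hostile) = {prior:.2f} -> posterior = {post:.2f}")
# The low-prior observer barely budges (~0.08); the high-prior observer
# now thinks hostility is more likely than not (~0.62).
```

Neither observer is reasoning badly; they just started from different places, and the ambiguous data can’t pull them together.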

III.

Another change in topic: bubbles.

I’ve written before about how 46% of Americans are young-earth creationists, and how strongly that fails to square with my personal experience. I’ve met young-earth creationists once or twice. But of my hundred closest friends/co-workers/acquaintances, I think zero percent of them fall in that category. I’m not intentionally selecting friends on the basis of politics, religion, or anything else. It just seems to have happened. Something about my personality, location, social class, et cetera has completely isolated me from one particular half of the US population; I’m living in a non-creationist bubble in the midst of a half-creationist country.

What other bubbles do I live in? A quick look over my Facebook and some SSC survey results finds that my friends are about twenty times more likely to be transgender than the general population. There are about twice as many Asians but less than half as many African-Americans. Rates of depression, OCD, and autism are sky-high; rates of drug addiction and alcoholism are very low. Programmers are overrepresented at about ten times the Bay Area average.

I didn’t intend any of these bubbles. For example, I’ve never done any programming myself, I’m not interested in it, and I try my best to avoid programmer-heavy places where I know all the conversations are going to be programming-related. Hasn’t helped. And I’m about as cisgender as can be, I have several Problematic opinions, and I still can’t keep track of which gender all of my various friends are on a month-to-month basis. Part of it is probably class-, race-, and location-based. And I have some speculative theories about the rest – I think I have a pretty thing-oriented/systematizing thinking style, and so probably I get along better with other groups disproportionately made up of people whose thoughts work the same way – but I didn’t understand any of this until a few years ago and there are still some parts that don’t make sense. For now I just have to accept it as a given.

There are other bubbles I understand much better. Most of my friends are pretty chill and conflict-averse. This is because I used to have scarier conflict-prone friends, and as soon as I got into conflicts with them, I broke off the friendship. I’m not super-proud of this and it’s probably one of those maladaptive coping styles you always hear about, and a lot of people have told me I’m really extreme on this axis and need to be better at tolerating aggressive people – but whenever I try, I find it unpleasant and stop. I know some other people who seem to actively seek out abrasive types so they can get in fun fights with them. I don’t understand these people at all – but whatever their thought processes, we have different bubbles.

All of this goes double or triple for people I’ve dated. I don’t think of myself as clearly having a “type”, but people I date tend to turn out similar in dimensions I didn’t expect when I first met them. I’m going to be ambiguous here because it’s a small enough sample that I don’t want to give away people’s private information, but it’s true.

I think about this a lot when I meet serial abuse victims.

These people are a heartbreaking psychiatric cliche. Abused by their parents, abused by their high school boyfriend, abused by their first husband, abused by their second husband, abused by the guy they cheated on their first husband with, abused by the friend they tried to go to for help dealing with all the abuse. The classic (though super offensive) explanation is that some people seek out abusers for some reason – maybe because they were abused as children and they’ve internalized that as the “correct” model of a relationship.

And maybe this is true for some people. I have a friend who admits it’s true of her – her current strategy is to try to find someone in the sweet spot between “jerkish/narcissistic enough to be interesting” and “jerkish/narcissistic enough to actually abuse her”, and she’s said so in so many words to people trying to matchmake. I guess all I can do is wish her luck.

But for a lot of people, this sort of claim is just as offensively wrong as it sounds. I know people who have tried really hard to avoid abusers, who have gone to therapy and asked their therapist for independent verification that their new partner doesn’t seem like the abusive type, who have pulled out all the stops – and who still end up with abusive new partners. These people are cursed through no fault of their own. All I can say is that whatever mysterious forces connect me to transgender pro-evolution programmers are connecting them to abusers. Something completely unintentional that they try their best to resist gives them a bubble of terrible people.

I want to emphasize as hard as I can that I’m not blaming them or saying there’s anything they can do about their situation, and I have no doubt that despite my emphasis people are still going to accuse me of saying this, and I apologize if any of this sounds at all like anything in this direction. But something has to be happening here.

IV.

Sometimes I write about discrimination, and people send me emails about their own experiences. Many sound like this real one (quoted here with permission) from a woman who studied computer science at MIT and now works in the tech industry:

In my life, I have never been catcalled, inappropriately hit on, body-shamed, unwantedly touched in a sexual way, discouraged from a male-dominated field, told I couldn’t do something because it was a boy thing, or suffered from many other experiences that have traditionally served as examples as ways that women are less privileged. I have also never been shamed for not following gender norms (e.g. doing a bunch of math/science/CS stuff); instead I get encouraged and told that I’m a role model. I’ve never had problems going around wearing no make-up, a t-shirt, and cargo pants; but on the rare occasion that I do wear make-up / wear a dress, that’s completely socially acceptable…Hopefully my thoughts/experiences are helpful for your future social justice based discussions.

Other times they sound like the opposite. I don’t have anyone in this category who’s given me permission to quote their email verbatim (consider ways this might not be a coincidence), but they’re pretty much what you’d expect – a litany of constantly being put down, discriminated against, harassed, et cetera, across multiple jobs, at multiple companies, to the point where they complain it’s “endemic” (I guess I can quote one word) and that we need to reject a narrative of “a few bad apples” because really it’s a problem with all men to one degree or another.

These dueling categories of emails have always confused me. At the risk of being exactly the sort of creepy person the second set of writers complain about, I hunted down some of these people’s Facebook profiles to see if one group was consistently more attractive than the other. They weren’t. Nor is there any clear pattern in what industries or companies they work at, what position they’re in, or anything else like that. There isn’t even a consistent pattern in their politics. The woman I quote above mentions that she’s a feminist who believes discrimination is a major problem – which has only made it extra confusing to her that she never experiences any of it personally.

These people don’t just show up in my inbox. Some of them write articles on Slate, Medium, even The New Yorker, discussing not just how they’ve never experienced discrimination, but how much anger and backlash they’ve received when they try to explain this to everyone else. And all of them acknowledge that they know other people whose experiences seem to be the direct opposite.

I used to think this was pretty much just luck of the draw – some people will end up with nice people at great companies, other people will end up with bigots at terrible companies. I no longer think this explains everybody. Take that New Yorker article, by a black person who grew up in the South and says she was never discriminated against even once. I assume in her childhood she met thousands of different white Southerners; that’s a pretty big lucky streak for none of them at all to be racists, especially when you consider all the people who report daily or near-daily harassment. Likewise, when you study computer science in college and then work in half a dozen tech companies over the space of decades and never encounter one sexist, that’s quite the record. Surely something else must be going on here.

V.

And I think this has to come back to the sorts of things discussed in Parts I, II, and III.

People self-select into bubbles along all sorts of axes. Some of these bubbles are obvious and easy to explain, like rich people mostly meeting other rich people at the country club. Others are more mysterious, like how some non-programmer ends up with mostly programmer friends. Still others are horrible and completely outside comprehension, like someone who tries very hard to avoid abusers but ends up in multiple abusive relationships anyway. Even two people living in the same country, city, and neighborhood can have “societies” made up of very different types of people.

People vary widely on the way they perceive social interaction. A paranoid schizophrenic will view every interaction as hostile; a Williams Syndrome kid will view every interaction as friendly. In between, there will be a whole range of healthy people without any psychiatric disorder who tend toward one side or the other. Only the most blatant data can be interpreted absent the priors that these dispositions provide; everything else will only get processed through preexisting assumptions about how people tend to act. Since things like racism rarely take the form of someone going up to you and saying “Hello, I am a racist and because of your skin color I plan to discriminate against you in the following ways…”, they’ll end up as ambiguous stimuli that everyone will interpret differently.

Finally, some people have personalities or styles of social interaction that unconsciously compel a certain response from their listeners. Call these “niceness fields” or “meanness fields” or whatever: some people are the sort who – if they became psychotherapists – would have patients who constantly suffered dramatic emotional meltdowns, and others’ patients would calmly discuss their problems.

The old question goes: are people basically good or basically evil? Different philosophers give different answers. But so do different random people I know who aren’t thinking philosophically at all. Some people describe a world of backstabbing Machiavellians, where everybody’s a shallow social climber who will kick down anyone it takes to get to the top. Other people describe a world where everyone is basically on the same page, trying to be nice to everyone else but getting stuck in communication difficulties and honest disagreements over values.

I think both groups are right. Some people experience worlds of basically-good people who treat them nicely. Other people experience worlds of awful hypocritical backstabbers. This can be true even if they live in the same area as each other, work the same job as each other, et cetera.

And it’s not just a basic good-evil axis. It can be about whether people are emotional/dramatic or calm/rational. It can be about whether people almost always discriminate or almost never do. It can be about whether they’re honest or liars, shun outsiders or accept them, welcome criticism or reject it. Some people think elites are incompetent parasites; others that they’re shockingly competent people who mean well and have interesting personalities. Some people think Silicon Valley is full of overpriced juicers, other people that it’s full of structured-light engines. And the people who say all these things are usually accurately reporting their own experiences.

Some people are vaguely aware of this in the form of “privilege”, which acknowledges different experiences at the cost of saying they have to line up exactly along special identity categories like race and gender. These certainly don’t help, but it’s not that simple – as proven by the article by that black Southerner who says she never once encountered discrimination. I’ve seen completely incomprehensible claims about human nature by people of precisely the same race, sex, class, orientation, etc as myself, and I have no doubt they’re trying to be truthful. The things that divide us are harder to see than we naively expect. Sometimes they’re completely invisible.

To return to a common theme: nothing makes sense except in light of inter-individual variation. Variation in people’s internal experience. Variation in people’s basic beliefs and assumptions. Variation in level of abstract thought. And to all of this I would add a variation in our experience of other people. Some of us are convinced, with reason, that humankind is basically good. Others start the day the same way Marcus Aurelius did:

When you wake up in the morning, tell yourself: the people I deal with today will be meddling, ungrateful, arrogant, dishonest, jealous and surly. They are like this because they cannot tell good from evil.

Notice this distinction, this way in which geographic neighbors can live in different worlds, and other people’s thoughts and behaviors get a little more comprehensible.

Posted in Uncategorized | Tagged | 785 Comments

Against Individual IQ Worries

[Related to: Attitude vs. Altitude]

I.

I write a lot about the importance of IQ research, and I try to debunk pseudoscientific claims that IQ “isn’t real” or “doesn’t matter” or “just shows how well you do on a test”. IQ is one of the best-studied ideas in psychology, one of our best predictors of job performance, future income, and various other forms of success, etc.

But every so often, I get comments/emails saying something like “Help! I just took an IQ test and learned that my IQ is x! This is much lower than I thought, and so obviously I will be a failure in everything I do in life. Can you direct me to the best cliff to jump off of?”

So I want to clarify: IQ is very useful and powerful for research purposes. It’s not nearly as interesting for you personally.

How can this be?

Consider something like income inequality: kids from rich families are at an advantage in life; kids from poor families are at a disadvantage.

From a research point of view, it’s really important to understand this is true. A scientific establishment in denial that having wealthy parents gave you a leg up in life would be an intellectual disgrace. Knowing that wealth runs in families is vital for even a minimal understanding of society, and anybody forced to deny that for political reasons would end up so hopelessly confused that they might as well just give up on having a coherent world-view.

From a personal point of view, coming from a poor family probably isn’t great but shouldn’t be infinitely discouraging. It doesn’t suggest that some kid should think to herself “I come from a family that only makes $30,000 per year, guess that means I’m doomed to be a failure forever, might as well not even try”. A poor kid is certainly at a disadvantage relative to a rich kid, but probably she knew that already long before any scientist came around to tell her. If she took the scientific study of intergenerational income transmission as something more official and final than her general sense that life was hard – if she obsessively recorded every raise and bonus her parents got on the grounds that it determined her own hope for the future – she would be giving the science more weight than it deserves.

So to the people who write me heartfelt letters complaining about their low IQs, I want to make two important points. First, we’re not that good at measuring individual IQs. Second, individual IQs aren’t that good at predicting things.

II.

Start with the measurement problems. People who complain about low IQs (not to mention people who boast about high IQs) are often wildly off about the number.

According to the official studies, IQ tests are rarely wrong. The standard error of measurement is somewhere between 3-7 points (1, 2, 3). Call it 5, and that means your tested IQ will only be off by 5+ points 32% of the time. It’ll only be off by 10+ points 5% of the time, and really big errors should be near impossible.
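
Those percentages fall straight out of treating the measurement error as roughly normal with a standard deviation equal to the SEM. Here is the arithmetic as a quick sketch, with the SEM of 5 assumed as above:

```python
from scipy.stats import norm

SEM = 5  # assumed standard error of measurement, as in the text

for points in (5, 10, 20, 30):
    # Two-tailed probability that the measured score misses the "true"
    # score by at least this many points, assuming normal error.
    p = 2 * norm.sf(points / SEM)
    print(f"off by {points}+ points: {p:.3%}  (about 1 in {1 / p:,.0f})")
```

The 20- and 30-point rows are where the “once in every 15,000 people” and “1/505 million” figures quoted further down come from.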

In reality, I constantly hear about people getting IQ scores that don’t make any sense.

Here’s a pretty standard entry in the “help my IQ is so low” genre – Grappling With The Reality Of Having A Below Average IQ:

When I was 16, as a part of an educational assessment, I took both the WAIS-IV and Woodcock Johnson Cognitive Batteries. My mother was curious as to why I struggled in certain subjects throughout my educational career, particularly in mathematical areas like geometry.

I never got a chance to have a discussion with the psychologist about the results, so I was left to interpret them with me, myself, and the big I known as the Internet – a dangerous activity, I know. This meant two years to date of armchair research, and subsequently, an incessant fear of the implications of my below-average IQ, which stands at a pitiful 94…I still struggle in certain areas of comprehension. I received a score of 1070 on the SAT, (540 Reading & 530 Math), and am barely scraping by in my college algebra class. Honestly, I would be ashamed if any of my coworkers knew I barely could do high school-level algebra.

This person thinks they’re reinforcing their point by listing two different tests, but actually a 1070 on the SAT corresponds to about 104, a full ten points higher. Based on other things in their post – their correct use of big words and complicated sentence structure, their mention that they work a successful job in cybersecurity, the fact that they read a philosophy/psychology subreddit for fun – I’m guessing the 104 is closer to the truth.
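
The SAT-to-IQ conversion is just percentile matching: see where a score sits in the SAT distribution and read off the IQ score at the same percentile. A rough sketch of the idea is below – the SAT mean and SD are stand-in figures rather than official norms, and real conversion tables also correct for SAT takers being a self-selected group.

```python
def sat_to_iq(sat, sat_mean=1020, sat_sd=210):
    """Map an SAT score onto the IQ scale (mean 100, SD 15) via its z-score.

    sat_mean and sat_sd are illustrative placeholders, not official norms;
    published conversion tables use cohort-specific figures.
    """
    z = (sat - sat_mean) / sat_sd
    return 100 + 15 * z

print(round(sat_to_iq(1070)))  # comes out around 104 with these placeholders
```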

From the comments on the same Reddit thread:

Interesting, I hope more people who have an avg. or low IQ post. Personally I had an IQ of 90 or so, but the day of the test I stayed up almost the entire night, slept maybe two hours and as a naive caffeine user I had around 500 mg caffeine. Maybe low IQ people do that.

I did IQTest.dk Raven’s test on impulse after seeing a video of Peterson’s regarding the importance of IQ, not in a very focused mode, almost ADHD like with rumination and I scored 108, but many claim low scores by around 0.5-1 SD, so that would put me in 115-123. I also am vegan, so creatine might increase my IQ by a few points. I think I am in the 120’s, but low IQ people tend to overestimate their IQ, but at least I am certainly 108 non-verbally, which is pretty average and low.

The commenter is right that IQtest.dk usually underestimates scores compared to other tests. But even if we take it at face value, his first score was almost twenty points off. By the official numbers, that should only happen once in every 15,000 people. In reality, someone posts a thread about it on Reddit and another person immediately shows up to say “Yeah, that happened to me”.

Nobel-winning physicist Richard Feynman famously scored “only” 124 on an IQ test in school – still bright, but nowhere near what you would expect of a Nobelist. Some people point out that it might have been biased towards measuring verbal rather than math abilities – then again, Feynman’s autobiography (admittedly edited and stitched together by a ghostwriter) sold 500,000 copies and made the New York Times bestseller list. So either his tested IQ was off by at least 30 points (supposed chance of this happening: 1/505 million), or IQ isn’t real and all of the studies showing that it is are made up by lizardmen to confuse us. In either case, you should be less concerned if your own school IQ tests seem kind of low.

I don’t know why there’s such a discrepancy between the official reliability numbers and the ones that anecdotally make sense. My guess is that the official studies give the tests better somehow. They use professional test administrators instead of overworked school counselors. They give them at a specific time of day instead of while the testee is half-asleep. They don’t let people take a bunch of caffeine before the test. They actually write the result down in a spreadsheet they have right there instead of trusting the testee to remember it accurately.

In my own field, official studies diagnose psychiatric diseases through beautiful Structured Clinical Interviews performed to exacting guidelines. Then real doctors diagnose them through checklists that say “DO NOT USE FOR DIAGNOSIS” in big letters on the top. If psychometrics is at all similar, the clashing numbers aren’t much of a mystery.

But here are two other points that might also be involved.

First, on a population level IQ is very stable with age. In a study of 87,498 Scottish children, age 11 IQ and adult IQ correlated at 0.66, about as strong and impressive a correlation as you’ll ever find in the social sciences. But “correlation of 0.66” is also known as “only predicts 44% of the variance”. On an individual level, it is totally possible and not even that surprising to have an IQ of 100 at age 11 but 120 at age 30, or vice versa. Any IQ score you got before high school should be considered a plausible prediction about your adult IQ and nothing more.
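
Here is a small sketch of what a 0.66 correlation does and doesn’t pin down, treating the two scores as bivariate normal on the usual mean-100, SD-15 scale. That is an idealization, but it makes the “44% of the variance” figure and the 100-at-age-11, 120-at-age-30 scenario concrete:

```python
from scipy.stats import norm

r, sd = 0.66, 15

print(f"variance explained: {r ** 2:.0%}")  # ~44%

# Under the bivariate-normal idealization, the adult IQ of someone who
# scored exactly 100 at age 11 is centered on 100 with a reduced SD.
cond_sd = sd * (1 - r ** 2) ** 0.5          # ~11 points
p_up_20 = norm.sf(120, loc=100, scale=cond_sd)
print(f"P(adult IQ >= 120 | age-11 IQ = 100) = {p_up_20:.1%}")
```

Under those assumptions a 20-point upward swing comes out to a few percent – uncommon, but nothing like the one-in-fifteen-thousand odds that a 20-point measurement error would imply.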

Second, the people who get low IQ scores, are shocked, find their whole world tumbling in on themselves, and desperately try to hold on to their dream of being an intellectual – are not a representative sample of the people who get low IQ scores. The average person who gets a low IQ score says “Yup, guess that would explain why I’m failing all my classes”, and then goes back to beating up nerds. When you see someone saying “Help, I got a low IQ score, I’ve double-checked the standard deviation of all of my subscores and found some slight discrepancy but I’m not sure if that counts as Bayesian evidence that the global value is erroneous”, then, well – look, I wouldn’t be making fun of these people if I didn’t constantly come across them. You know who you are.

Just for fun, I analyzed the lowest IQ scores in my collection of SSC/LW surveys. I was only able to find three people who claimed to have an IQ ≤ 100 plus gave SAT results. All three had SAT scores corresponding to IQs in the 120s.

I conclude that at least among the kind of people I encounter and who tend to send me these emails, IQ estimates are pretty terrible.

This is absolutely consistent with population averages of thousands of IQ estimates still being valuable and useful research tools. It just means you shouldn’t use it on yourself. Statistics is what tells us that almost everybody feels stimulated on amphetamines. Reality is my patient who consistently goes to sleep every time she takes Adderall. Neither the statistics nor the lived experience are wrong – but if you use one when you need the other, you’re going to have a bad time.

III.

The second problem is that even if you avoid the problems mentioned above and measure IQ 100% correctly, it’s just not that usefully predictive.

Isn’t that heresy?! Isn’t IQ the most predictive thing we have? Doesn’t it affect every life outcome as proven again and again in well-replicated experiments?

Yes! I’m not denying any of that. I’m saying that things that are statistically true aren’t always true for any individual.

Once again, consider the analogy to family transmission of income. Your parents’ socioeconomic status correlates with your own at about r = 0.2 to 0.3, depending on how you define “socioeconomic status”. By coincidence, this is pretty much the same correlation that Strenze (2006) found for IQ and socioeconomic status. Everyone knows that having rich parents is pretty useful if you want to succeed. But everyone also knows that rich parents aren’t the only thing that goes into success. Someone from a poor family who tries really hard and gets a lot of other advantages still has a chance to make it. A sociologist or economist should be very interested in parent-child success correlations; the average person trying to get ahead should just shrug, realize things are going to be a little easier/harder than they would have been otherwise, and get on with their life.

And this isn’t just about gaining success by becoming an athlete or musician or some other less-intellectual pursuit. Chess talent is correlated with IQ at 0.24, about the same as income. IQ is some complicated central phenomenon that contributes a little to every cognitive skill, but it doesn’t entirely determine any cognitive skill. It’s not just that you can have an average IQ and still be a great chess player if you work hard enough – that’s true, but it’s not just that. It’s that you can have an average IQ and still have high levels of innate talent in chess. It’s not quite as likely as if you have a high IQ, but it’s very much in the range of possibility. And then you add in the effects of working hard enough, and then you’re getting somewhere.

Here is a table of professions by IQ, a couple of decades out of date but probably not too far off (cf. discussion here):

I don’t know how better to demonstrate this idea of “statistically solid, individually shaky”. On a population level, we see that the average doctor is 30 IQ points higher than the average janitor, that college professors are overwhelmingly high-IQ, and we think yeah, this is about what we would hope for from a statistic measuring intelligence. But on an individual level, we see that below-average IQ people sometimes become scientists, professors, engineers, and almost anything else you could hope for.

IV.

I’m kind of annoyed I have to write this post. After investing so much work debunking IQ denialists, I feel like this is really – I don’t know – diluting the brand.

But I actually think it’s not as contradictory as it looks, that there’s some common thread between my posts arguing that no, IQ isn’t fake, and this one.

If you really understand the idea of a statistical predictor – if you have that gear in your brain at a fundamental level – then social science isn’t scary. You can read about IQ, or heredity, or stereotypes, or gender differences, or whatever, and you can say – ah, there’s a slight tendency for one thing to correlate with another thing. Then you can go have dinner.

If you don’t get that, then the world is terrifying. Someone’s said that IQ “correlates with” life outcomes? What the heck is “correlate with”? Did they say that only high-IQ people can be successful? That you’re doomed if you don’t get the right score on a test?

And then you can either resist that with every breath you have – deny all the data, picket the labs where it’s studied, make up silly theories about “emotional intelligence” and “grit” and what have you. Or you can surrender to the darkness, and at least have the comfort of knowing that you accept the grim reality as it is.

Imagine an American who somehow gets it into his head that the Communists are about to invade with overwhelming force. He might buy a bunch of guns, turn his house into a bunker, start agitating that Communist sympathizers be imprisoned to prevent them from betraying the country when the time came. Or he might hang a red flag from his house, wear a WELCOME COMMUNIST OVERLORDS tshirt, and start learning Russian. These seem like opposite responses, but they both come from the same fundamental misconception. A lot of the culture war – on both sides – seems like this. I don’t know how to solve this except to try, again and again, to install the necessary gear and convince people that correlations are neither meaningless nor always exactly 1.0.

So please: study the science of IQ. Use IQ to explain and predict social phenomena. Work on figuring out how to raise IQ. Assume that raising IQ will have far-ranging and powerful effects on a wide variety of social problems. Just don’t expect it to predict a single person’s individual achievement with any kind of reliability. Especially not yourself.

Posted in Uncategorized | Tagged , | 289 Comments