Meditations On Moloch

I.

Allen Ginsberg’s famous poem on Moloch:

What sphinx of cement and aluminum bashed open their skulls and ate up their brains and imagination?

Moloch! Solitude! Filth! Ugliness! Ashcans and unobtainable dollars! Children screaming under the stairways! Boys sobbing in armies! Old men weeping in the parks!

Moloch! Moloch! Nightmare of Moloch! Moloch the loveless! Mental Moloch! Moloch the heavy judger of men!

Moloch the incomprehensible prison! Moloch the crossbone soulless jailhouse and Congress of sorrows! Moloch whose buildings are judgment! Moloch the vast stone of war! Moloch the stunned governments!

Moloch whose mind is pure machinery! Moloch whose blood is running money! Moloch whose fingers are ten armies! Moloch whose breast is a cannibal dynamo! Moloch whose ear is a smoking tomb!

Moloch whose eyes are a thousand blind windows! Moloch whose skyscrapers stand in the long streets like endless Jehovahs! Moloch whose factories dream and croak in the fog! Moloch whose smoke-stacks and antennae crown the cities!

Moloch whose love is endless oil and stone! Moloch whose soul is electricity and banks! Moloch whose poverty is the specter of genius! Moloch whose fate is a cloud of sexless hydrogen! Moloch whose name is the Mind!

Moloch in whom I sit lonely! Moloch in whom I dream Angels! Crazy in Moloch! Cocksucker in Moloch! Lacklove and manless in Moloch!

Moloch who entered my soul early! Moloch in whom I am a consciousness without a body! Moloch who frightened me out of my natural ecstasy! Moloch whom I abandon! Wake up in Moloch! Light streaming out of the sky!

Moloch! Moloch! Robot apartments! invisible suburbs! skeleton treasuries! blind capitals! demonic industries! spectral nations! invincible madhouses! granite cocks! monstrous bombs!

They broke their backs lifting Moloch to Heaven! Pavements, trees, radios, tons! lifting the city to Heaven which exists and is everywhere about us!

Visions! omens! hallucinations! miracles! ecstasies! gone down the American river!

Dreams! adorations! illuminations! religions! the whole boatload of sensitive bullshit!

Breakthroughs! over the river! flips and crucifixions! gone down the flood! Highs! Epiphanies! Despairs! Ten years’ animal screams and suicides! Minds! New loves! Mad generation! down on the rocks of Time!

Real holy laughter in the river! They saw it all! the wild eyes! the holy yells! They bade farewell! They jumped off the roof! to solitude! waving! carrying flowers! Down to the river! into the street!

What’s always impressed me about this poem is its conception of civilization as an individual entity. You can almost see him, with his fingers of armies and his skyscraper-window eyes.

A lot of the commentators say Moloch represents capitalism. This is definitely a piece of it, even a big piece. But it doesn’t quite fit. Capitalism, whose fate is a cloud of sexless hydrogen? Capitalism in whom I am a consciousness without a body? Capitalism, therefore granite cocks?

Moloch is introduced as the answer to a question – C. S. Lewis’ question in Hierarchy Of Philosophers – what does it? Earth could be fair, and all men glad and wise. Instead we have prisons, smokestacks, asylums. What sphinx of cement and aluminum breaks open their skulls and eats up their imagination?

And Ginsberg answers: Moloch does it.

There’s a passage in the Principia Discordia where Malaclypse complains to the Goddess about the evils of human society. “Everyone is hurting each other, the planet is rampant with injustices, whole societies plunder groups of their own people, mothers imprison sons, children perish while brothers war.”

The Goddess answers: “What is the matter with that, if it’s what you want to do?”

Malaclypse: “But nobody wants it! Everybody hates it!”

Goddess: “Oh. Well, then stop.”

The implicit question is – if everyone hates the current system, who perpetuates it? And Ginsberg answers: “Moloch”. It’s powerful not because it’s correct – nobody literally thinks an ancient Carthaginian demon causes everything – but because thinking of the system as an agent throws into relief the degree to which the system isn’t an agent.

Bostrom makes an offhanded reference to the possibility of a dictatorless dystopia, one that every single citizen including the leadership hates but which nevertheless endures unconquered. It’s easy enough to imagine such a state. Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced.

So you shock yourself for eight hours a day, because you know if you don’t everyone else will kill you, because if they don’t, everyone else will kill them, and so on. Every single citizen hates the system, but for lack of a good coordination mechanism it endures. From a god’s-eye-view, we can optimize the system to “everyone agrees to stop doing this at once”, but no one within the system is able to effect the transition without great risk to themselves.

And okay, this example is kind of contrived. So let’s run through – let’s say ten – real world examples of similar multipolar traps to really hammer in how important this is.

1. The Prisoner’s Dilemma, as played by two very dumb libertarians who keep ending up on defect-defect. There’s a much better outcome available if they could figure out the coordination, but coordination is hard. From a god’s-eye-view, we can agree that cooperate-cooperate is a better outcome than defect-defect, but neither prisoner within the system can make it happen.
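Here’s a minimal sketch of why the trap holds, using standard illustrative payoff numbers of my own choosing rather than anything from the original game:

```python
# Toy prisoner's dilemma with illustrative payoffs (my numbers):
# T > R > P > S, so defecting is each player's best response to anything.
PAYOFFS = {  # (my_move, their_move) -> my_payoff
    ("cooperate", "cooperate"): 3,   # R: reward for mutual cooperation
    ("cooperate", "defect"):    0,   # S: sucker's payoff
    ("defect",    "cooperate"): 5,   # T: temptation to defect
    ("defect",    "defect"):    1,   # P: punishment for mutual defection
}

def best_response(their_move):
    """What a self-interested prisoner plays, given the other's move."""
    return max(["cooperate", "defect"], key=lambda m: PAYOFFS[(m, their_move)])

# Defect dominates: it's the best response no matter what the other does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...even though both would prefer the cooperate-cooperate outcome.
assert PAYOFFS[("cooperate", "cooperate")] > PAYOFFS[("defect", "defect")]
```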

2. Dollar auctions. I wrote about this and even more convoluted versions of the same principle in Game Theory As A Dark Art. Using some weird auction rules, you can take advantage of poor coordination to make someone pay $10 for a one dollar bill. From a god’s-eye-view, clearly people should not pay $10 for a one dollar bill. From within the system, each individual step taken might be rational.

(Ashcans and unobtainable dollars!)
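For the curious, here’s a toy simulation of the escalation logic in the dollar auction (the myopic bidding rule and the $10 budget cap are illustrative assumptions of mine, not taken from Game Theory As A Dark Art):

```python
# Toy dollar auction: the highest bidder wins the $1 bill, but BOTH top
# bidders pay their bids. Each bidder myopically raises whenever winning at
# the new price loses less money than conceding and forfeiting their current
# bid. The $10 budget is an assumption to give the escalation a stopping point.
PRIZE, STEP, BUDGET = 1.00, 0.05, 10.00

bids = [0.00, 0.00]          # current bids of bidder 0 and bidder 1
turn = 0                     # whose turn it is to respond
while True:
    leader = 1 - turn
    new_bid = round(bids[leader] + STEP, 2)
    # Conceding costs me my current bid; winning at new_bid costs new_bid - PRIZE.
    if new_bid - PRIZE < bids[turn] and new_bid <= BUDGET:
        bids[turn] = new_bid
        turn = leader        # now the other bidder is trailing
    else:
        break

print(f"Winning bid: ${max(bids):.2f}, losing bid: ${min(bids):.2f}")
# Both end up paying roughly ten dollars for a one dollar bill, even though
# every individual raise looked like the lesser loss at the time.
```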

3. The fish farming story from my Non-Libertarian FAQ 2.0:

As a thought experiment, let’s consider aquaculture (fish farming) in a lake. Imagine a lake with a thousand identical fish farms owned by a thousand competing companies. Each fish farm earns a profit of $1000/month. For a while, all is well.

But each fish farm produces waste, which fouls the water in the lake. Let’s say each fish farm produces enough pollution to lower productivity in the lake by $1/month.

A thousand fish farms produce enough waste to lower productivity by $1000/month, meaning none of the fish farms are making any money. Capitalism to the rescue: someone invents a complex filtering system that removes waste products. It costs $300/month to operate. All fish farms voluntarily install it, the pollution ends, and the fish farms are now making a profit of $700/month – still a respectable sum.

But one farmer (let’s call him Steve) gets tired of spending the money to operate his filter. Now one fish farm worth of waste is polluting the lake, lowering productivity by $1. Steve earns $999 profit, and everyone else earns $699 profit.

Everyone else sees Steve is much more profitable than they are, because he’s not spending the maintenance costs on his filter. They disconnect their filters too.

Once four hundred people disconnect their filters, Steve is earning $600/month – less than he would be if he and everyone else had kept their filters on! And the poor virtuous filter users are only making $300. Steve goes around to everyone, saying “Wait! We all need to make a voluntary pact to use filters! Otherwise, everyone’s productivity goes down.”

Everyone agrees with him, and they all sign the Filter Pact, except one person who is sort of a jerk. Let’s call him Mike. Now everyone is back using filters again, except Mike. Mike earns $999/month, and everyone else earns $699/month. Slowly, people start thinking they too should be getting big bucks like Mike, and disconnect their filter for $300 extra profit…

A self-interested person never has any incentive to use a filter. A self-interested person has some incentive to sign a pact to make everyone use a filter, but in many cases has a stronger incentive to wait for everyone else to sign such a pact but opt out himself. This can lead to an undesirable equilibrium in which no one will sign such a pact.

The more I think about it, the more I feel like this is the core of my objection to libertarianism, and that Non-Libertarian FAQ 3.0 will just be this one example copy-pasted two hundred times. From a god’s-eye-view, we can say that polluting the lake leads to bad consequences. From within the system, no individual can prevent the lake from being polluted, and buying a filter might not be such a good idea.
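To make the story’s arithmetic explicit, here is a toy version of the payoff structure (the code framing is mine; the numbers are the ones from the story above):

```python
# Payoff arithmetic from the fish-farm story: 1000 farms, $1000/month base
# profit, each unfiltered farm costs EVERY farm $1/month in pollution, and
# a filter costs $300/month to run.
FARMS, BASE, POLLUTION, FILTER_COST = 1000, 1000, 1, 300

def profit(i_filter: bool, others_filtering: int) -> int:
    """Monthly profit for one farm, given how many of the other 999 filter."""
    unfiltered = (FARMS - 1 - others_filtering) + (0 if i_filter else 1)
    return BASE - POLLUTION * unfiltered - (FILTER_COST if i_filter else 0)

assert profit(False, 999) == 999   # Steve, filter off, everyone else filtering
assert profit(True, 998) == 699    # everyone else, once Steve has defected
assert profit(True, 999) == 700    # universal filtering
assert profit(False, 0) == 0       # universal defection

# Whatever everyone else does, not filtering pays $299 more...
for others in (0, 500, 999):
    assert profit(False, others) - profit(True, others) == FILTER_COST - POLLUTION
# ...yet universal defection ($0/month) is far worse than universal
# filtering ($700/month). Individually rational, collectively ruinous.
```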

4. The Malthusian trap, at least at its extremely pure theoretical limits. Suppose you are one of the first rats introduced onto a pristine island. It is full of yummy plants and you live an idyllic life lounging about, eating, and composing great works of art (you’re one of those rats from The Rats of NIMH).

You live a long life, mate, and have a dozen children. All of them have a dozen children, and so on. In a couple generations, the island has ten thousand rats and has reached its carrying capacity. Now there’s not enough food and space to go around, and a certain percent of each new generation dies in order to keep the population steady at ten thousand.

A certain sect of rats abandons art in order to devote more of their time to scrounging for survival. Each generation, a bit less of this sect dies than of the mainstream, until after a while no rat composes any art at all, and any sect of rats that tries to bring it back will go extinct within a few generations.

In fact, it’s not just art. Any sect at all that is leaner, meaner, and more survivalist than the mainstream will eventually take over. If one sect of rats altruistically decides to limit its offspring to two per couple in order to decrease overpopulation, that sect will die out, swarmed out of existence by its more numerous enemies. If one sect of rats starts practicing cannibalism, and finds it gives them an advantage over their fellows, it will eventually take over and reach fixation.

If some rat scientists predict that depletion of the island’s nut stores is accelerating at a dangerous rate and they will soon be exhausted completely, a few sects of rats might try to limit their nut consumption to a sustainable level. Those rats will be outcompeted by their more selfish cousins. Eventually the nuts will be exhausted, most of the rats will die off, and the cycle will begin again. Any sect of rats advocating some action to stop the cycle will be outcompeted by their cousins for whom advocating anything is a waste of time that could be used to compete and consume.

For a bunch of reasons evolution is not quite as Malthusian as the ideal case, but it provides the prototype example we can apply to other things to see the underlying mechanism. From a god’s-eye-view, it’s easy to say the rats should maintain a comfortably low population. From within the system, each individual rat will follow its genetic imperative and the island will end up in an endless boom-bust cycle.

5. Capitalism. Imagine a capitalist in a cutthroat industry. He employs workers in a sweatshop to sew garments, which he sells at minimal profit. Maybe he would like to pay his workers more, or give them nicer working conditions. But he can’t, because that would raise the price of his products and he would be outcompeted by his cheaper rivals and go bankrupt. Maybe many of his rivals are nice people who would like to pay their workers more, but unless they have some kind of ironclad guarantee that none of them are going to defect by undercutting their prices they can’t do it.

Like the rats, who gradually lose all values except sheer competition, so companies in an economic environment of sufficiently intense competition are forced to abandon all values except optimizing-for-profit or else be outcompeted by companies that optimized for profit better and so can sell the same service at a lower price.

(I’m not really sure how widely people appreciate the value of analogizing capitalism to evolution. Fit companies – defined as those that make the customer want to buy from them – survive, expand, and inspire future efforts, and unfit companies – defined as those no one wants to buy from – go bankrupt and die out along with their company DNA. The reasons Nature is red in tooth and claw are the same reasons the market is ruthless and exploitative.)

From a god’s-eye-view, we can contrive a friendly industry where every company pays its workers a living wage. From within the system, there’s no way to enact it.

(Moloch whose love is endless oil and stone! Moloch whose blood is running money!)

6. The Two-Income Trap, as recently discussed on this blog, theorized that sufficiently intense competition for suburban houses in good school districts meant that people had to throw away lots of other values – time at home with their children, financial security – to optimize for house-buying-ability or else be consigned to the ghetto.

From a god’s-eye-view, if everyone agrees not to take on a second job to help win their competition for nice houses, then everyone will get exactly as nice a house as they did before, but only have to work one job. From within the system, absent a government literally willing to ban second jobs, everyone who doesn’t get one will be left behind.

(Robot apartments! Invisible suburbs!)

7. Agriculture. Jared Diamond calls it the worst mistake in human history. Whether or not it was a mistake, it wasn’t an accident – agricultural civilizations simply outcompeted nomadic ones, inevitably and irresistibly. Classic Malthusian trap. Maybe hunting-gathering was more enjoyable, gave a higher life expectancy, and was more conducive to human flourishing – but in a state of sufficiently intense competition between peoples, in which agriculture with all its disease and oppression and pestilence was the more competitive option, everyone will end up agriculturalists or go the way of the Comanche Indians.

From a god’s-eye-view, it’s easy to see everyone should keep the more enjoyable option and stay hunter-gatherers. From within the system, each individual tribe only faces the choice of going agricultural or inevitably dying.

8. Arms races. Large countries can spend anywhere from 5% to 30% of their budget on defense. In the absence of war – a condition which has mostly held for the past fifty years – all this does is sap money away from infrastructure, health, education, or economic growth. But any country that fails to spend enough money on defense risks being invaded by a neighboring country that did. Therefore, almost all countries try to spend some money on defense.

From a god’s-eye-view, the best solution is world peace and no country having an army at all. From within the system, no country can unilaterally enforce that, so their best option is to keep on throwing their money into missiles that lie in silos unused.

(Moloch the vast stone of war! Moloch whose fingers are ten armies!)

9. Cancer. The human body is supposed to be made up of cells living harmoniously and pooling their resources for the greater good of the organism. If a cell defects from this equilibrium by investing its resources into copying itself, it and its descendants will flourish, eventually outcompeting all the other cells and taking over the body – at which point it dies. Or the situation may repeat, with certain cancer cells defecting against the rest of the tumor, thus slowing down its growth and causing the tumor to stagnate.

From a god’s-eye-view, the best solution is all cells cooperating so that they don’t all die. From within the system, cancerous cells will proliferate and outcompete the others – so that only the existence of the immune system keeps the natural incentive to turn cancerous in check.

10. The “race to the bottom” describes a political situation where some jurisdictions lure businesses by promising lower taxes and fewer regulations. The end result is that either everyone optimizes for competitiveness – by having minimal tax rates and regulations – or they lose all of their business, revenue, and jobs to people who did (at which point they are pushed out and replaced by a government that will be more compliant).

But even though the last one has stolen the name, all these scenarios are in fact races to the bottom. Once one agent learns how to become more competitive by sacrificing a common value, all its competitors must also sacrifice that value or be outcompeted and replaced by the less scrupulous. Therefore, the system is likely to end up with everyone once again equally competitive, but the sacrificed value is gone forever. From a god’s-eye-view, the competitors know they will all be worse off if they defect, but from within the system, given insufficient coordination it’s impossible to avoid.

Before we go on, there’s a slightly different form of multi-agent trap worth investigating. In this one, the competition is kept at bay by some outside force – usually social stigma. As a result, there’s not actually a race to the bottom – the system can continue functioning at a relatively high level – but it’s impossible to optimize and resources are consistently thrown away for no reason. Lest you get exhausted before we even begin, I’ll limit myself to four examples here.

11. Education. In my essay on reactionary philosophy, I talk about my frustration with education reform:

People ask why we can’t reform the education system. But right now students’ incentive is to go to the most prestigious college they can get into so employers will hire them – whether or not they learn anything. Employers’ incentive is to get students from the most prestigious college they can so that they can defend their decision to their boss if it goes wrong – whether or not the college provides value added. And colleges’ incentive is to do whatever it takes to get more prestige, as measured in US News and World Report rankings – whether or not it helps students. Does this lead to huge waste and poor education? Yes. Could the Education God notice this and make some Education Decrees that lead to a vastly more efficient system? Easily! But since there’s no Education God everybody is just going to follow their own incentives, which are only partly correlated with education or efficiency.

From a god’s eye view, it’s easy to say things like “Students should only go to college if they think they will get something out of it, and employers should hire applicants based on their competence and not on what college they went to”. From within the system, everyone’s already following their own incentives correctly, so unless the incentives change the system won’t either.

12. Science. Same essay:

The modern research community knows they aren’t producing the best science they could be. There’s lots of publication bias, statistics are done in a confusing and misleading way out of sheer inertia, and replications often happen very late or not at all. And sometimes someone will say something like “I can’t believe people are too dumb to fix Science. All we would have to do is require early registration of studies to avoid publication bias, turn this new and powerful statistical technique into the new standard, and accord higher status to scientists who do replication experiments. It would be really simple and it would vastly increase scientific progress. I must just be smarter than all existing scientists, since I’m able to think of this and they aren’t.”

And yeah. That would work for the Science God. He could just make a Science Decree that everyone has to use the right statistics, and make another Science Decree that everyone must accord replications higher status.

But things that work from a god’s-eye view don’t work from within the system. No individual scientist has an incentive to unilaterally switch to the new statistical technique for her own research, since it would make her research less likely to produce earth-shattering results and since it would just confuse all the other scientists. They just have an incentive to want everybody else to do it, at which point they would follow along. And no individual journal has an incentive to unilaterally switch to early registration and publishing negative results, since it would just mean their results are less interesting than those of that other journal that only publishes ground-breaking discoveries. From within the system, everyone is following their own incentives and will continue to do so.

13. Government corruption. I don’t know of anyone who really thinks, in a principled way, that corporate welfare is a good idea. But the government still manages to spend somewhere around (depending on how you calculate it) $100 billion a year on it – which for example is three times the amount they spend on health care for the needy. Everyone familiar with the problem has come up with the same easy solution: stop giving so much corporate welfare. Why doesn’t it happen?

Government officials are competing against one another to get elected or promoted. And suppose part of optimizing for electability is optimizing campaign donations from corporations – or maybe it isn’t, but officials think it is. Officials who try to mess with corporate welfare may lose the support of corporations and be outcompeted by officials who promise to keep it intact.

So although from a god’s-eye-view everyone knows that eliminating corporate welfare is the best solution, each individual official’s personal incentives push her to maintain it.

14. Congress. Only 9% of Americans like it, suggesting a lower approval rating than cockroaches, head lice, or traffic jams. However, 62% of people who know who their own Congressional representative is approve of them. In theory, it should be really hard to have a democratically elected body that maintains a 9% approval rating for more than one election cycle. In practice, every representative’s incentive is to appeal to his or her constituency while throwing the rest of the country under the bus – something at which they apparently succeed.

From a god’s-eye-view, every Congressperson ought to think only of the good of the nation. From within the system, you do what gets you elected.

II.

A basic principle unites all of the multipolar traps above. In some competition optimizing for X, the opportunity arises to throw some other value under the bus for improved X. Those who take it prosper. Those who don’t take it die out. Eventually, everyone’s relative status is about the same as before, but everyone’s absolute status is worse than before. The process continues until all other values that can be traded off have been – in other words, until human ingenuity cannot possibly figure out a way to make things any worse.

In a sufficiently intense competition (1-10), everyone who doesn’t throw all their values under the bus dies out – think of the poor rats who wouldn’t stop making art. This is the infamous Malthusian trap, where everyone is reduced to “subsistence”.

In an insufficiently intense competition (11-14), all we see is a perverse failure to optimize – consider the journals which can’t switch to more reliable science, or the legislators who can’t get their act together and eliminate corporate welfare. It may not reduce people to subsistence, but there is a weird sense in which it takes away their free will.

Every two-bit author and philosopher has to write their own utopia. Most of them are legitimately pretty nice. In fact, it’s a pretty good bet that two utopias that are polar opposites both sound better than our own world.

It’s kind of embarrassing that random nobodies can think up states of affairs better than the one we actually live in. And in fact most of them can’t. A lot of utopias sweep the hard problems under the rug, or would fall apart in ten minutes if actually implemented.

But let me suggest a couple of “utopias” that don’t have this problem.

– The utopia where instead of the government paying lots of corporate welfare, the government doesn’t pay lots of corporate welfare.

– The utopia where every country’s military is 50% smaller than it is today, and the savings go into infrastructure spending.

– The utopia where all hospitals use the same electronic medical record system, or at least medical record systems that can talk to each other, so that doctors can look up what the doctor you saw last week in a different hospital decided instead of running all the same tests over again for $5000.

I don’t think there are too many people who oppose any of these utopias. If they’re not happening, it’s not because people don’t support them. It certainly isn’t because nobody’s thought of them, since I just thought of them right now and I don’t expect my “discovery” to be hailed as particularly novel or change the world.

Any human with above room temperature IQ can design a utopia. The reason our current system isn’t a utopia is that it wasn’t designed by humans. Just as you can look at an arid terrain and determine what shape a river will one day take by assuming water will obey gravity, so you can look at a civilization and determine what shape its institutions will one day take by assuming people will obey incentives.

But that means that just as the shapes of rivers are not designed for beauty or navigation, but rather an artifact of randomly determined terrain, so institutions will not be designed for prosperity or justice, but rather an artifact of randomly determined initial conditions.

Just as people can level terrain and build canals, so people can alter the incentive landscape in order to build better institutions. But they can only do so when they are incentivized to do so, which is not always. As a result, some pretty wild tributaries and rapids form in some very strange places.

I will now jump from boring game theory stuff to what might be the closest thing to a mystical experience I’ve ever had.

Like all good mystical experiences, it happened in Vegas. I was standing on top of one of their many tall buildings, looking down at the city below, all lit up in the dark. If you’ve never been to Vegas, it is really impressive. Skyscrapers and lights in every variety strange and beautiful all clustered together. And I had two thoughts, crystal clear:

It is glorious that we can create something like this.

It is shameful that we did.

Like, by what standard is building gigantic forty-story-high indoor replicas of Venice, Paris, Rome, Egypt, and Camelot side-by-side, filled with albino tigers, in the middle of the most inhospitable desert in North America, a remotely sane use of our civilization’s limited resources?

And it occurred to me that maybe there is no philosophy on Earth that would endorse the existence of Las Vegas. Even Objectivism, which is usually my go-to philosophy for justifying the excesses of capitalism, at least grounds it in the belief that capitalism improves people’s lives. Henry Ford was virtuous because he allowed lots of otherwise car-less people to obtain cars and so made them better off. What does Vegas do? Promise a bunch of shmucks free money and not give it to them.

Las Vegas doesn’t exist because of some decision to hedonically optimize civilization, it exists because of a quirk in dopaminergic reward circuits, plus the microstructure of an uneven regulatory environment, plus Schelling points. A rational central planner with a god’s-eye-view, contemplating these facts, might have thought “Hm, dopaminergic reward circuits have a quirk where certain tasks with slightly negative risk-benefit ratios get an emotional valence associated with slightly positive risk-benefit ratios, let’s see if we can educate people to beware of that.” People within the system, following the incentives created by these facts, think: “Let’s build a forty-story-high indoor replica of ancient Rome full of albino tigers in the middle of the desert, and so become slightly richer than people who didn’t!”

Just as the course of a river is latent in a terrain even before the first rain falls on it – so the existence of Caesar’s Palace was latent in neurobiology, economics, and regulatory regimes even before it existed. The entrepreneur who built it was just filling in the ghostly lines with real concrete.

So we have all this amazing technological and cognitive energy, the brilliance of the human species, wasted on reciting the lines written by poorly evolved cellular receptors and blind economics, like gods being ordered around by a moron.

Some people have mystical experiences and see God. There in Las Vegas, I saw Moloch.

(Moloch, whose mind is pure machinery! Moloch, whose blood is running money!

Moloch whose soul is electricity and banks! Moloch, whose skyscrapers stand in the long streets like endless Jehovahs!

Moloch! Moloch! Robot apartments! Invisible suburbs! Skeleton treasuries! Blind capitals! Demonic industries! Spectral nations!)

…granite cocks!

III.

The Apocrypha Discordia says:

Time flows like a river. Which is to say, downhill. We can tell this because everything is going downhill rapidly. It would seem prudent to be somewhere else when we reach the sea.

Let’s take this random gag 100% literally and see where it leads us.

We just analogized the flow of incentives to the flow of a river. The downhill trajectory is appropriate: the traps happen when you find an opportunity to trade off a useful value for greater competitiveness. Once everyone has it, the greater competitiveness brings you no joy – but the value is lost forever. Therefore, each step of the Poor Coordination Polka makes your life worse.

Not only have we not yet reached the sea; we also seem to move uphill surprisingly often. Why do things not degenerate more and more until we are back at subsistence level? I can think of three bad reasons – excess resources, physical limitations, and utility maximization – plus one good reason – coordination.

1. Excess resources. The ocean depths are a horrible place with little light, few resources, and various horrible organisms dedicated to eating or parasitizing one another. But every so often, a whale carcass falls to the bottom of the sea. More food than the organisms that find it could ever possibly want. There’s a brief period of miraculous plenty, while the couple of creatures that first encounter the whale feed like kings. Eventually more animals discover the carcass, the faster-breeding animals in the carcass multiply, the whale is gradually consumed, and everyone sighs and goes back to living in a Malthusian death-trap.

(Slate Star Codex: Your source for macabre whale metaphors since June 2014)

It’s as if a group of those rats who had abandoned art and turned to cannibalism suddenly was blown away to a new empty island with a much higher carrying capacity, where they would once again have the breathing room to live in peace and create artistic masterpieces.

This is an age of whalefall, an age of excess carrying capacity, an age when we suddenly find ourselves with a thousand-mile head start on Malthus. As Hanson puts it, this is the dream time.

As long as resources aren’t scarce enough to lock us in a war of all against all, we can do silly non-optimal things – like art and music and philosophy and love – and not be outcompeted by merciless killing machines most of the time.

2. Physical limitations. Imagine a profit-maximizing slavemaster who decided to cut costs by not feeding his slaves or letting them sleep. He would soon find that his slaves’ productivity dropped off drastically, and that no amount of whipping them could restore it. Eventually after testing numerous strategies, he might find his slaves got the most work done when they were well-fed and well-rested and had at least a little bit of time to relax. Not because the slaves were voluntarily withholding their labor – we assume the fear of punishment is enough to make them work as hard as they can – but because the body has certain physical limitations that limit how mean you can get away with being. Thus, the “race to the bottom” stops somewhere short of the actual ethical bottom, when the physical limits are run into.

John Moes, a historian of slavery, goes further and writes about how the slavery we are most familiar with – that of the antebellum South – is a historical aberration and probably economically inefficient. In most past forms of slavery – especially those of the ancient world – it was common for slaves to be paid wages, treated well, and often given their freedom.

He argues that this was the result of rational economic calculation. You can incentivize slaves through the carrot or the stick, and the stick isn’t very good. You can’t watch slaves all the time, and it’s really hard to tell whether a slave is slacking off or not (or even whether, given a little more whipping, he might be able to work even harder). If you want your slaves to do anything more complicated than pick cotton, you run into some serious monitoring problems – how do you profit from an enslaved philosopher? Whip him really hard until he elucidates a theory of The Good that you can sell books about?

The ancient solution to the problem – perhaps an early inspiration to Fnargl – was to tell the slave to go do whatever he wanted and found most profitable, then split the profits with him. Sometimes the slave would work a job at your workshop and you would pay him wages based on how well he did. Other times the slave would go off and make his way in the world and send you some of what he earned. Still other times, you would set a price for the slave’s freedom, and the slave would go and work and eventually come up with the money and free himself.

Moes goes even further and says that these systems were so profitable that there were constant smouldering attempts to try this sort of thing in the American South. The reason they stuck with the whips-and-chains method owed less to economic considerations and more to racist government officials cracking down on lucrative but not-exactly-white-supremacy-promoting attempts to free slaves and have them go into business.

So in this case, a race to the bottom where competing plantations become crueler and crueler to their slaves in order to maximize competitiveness is halted by the physical limitation of cruelty not helping after a certain point.

Or to give another example, one of the reasons we’re not in a Malthusian population explosion right now is that women can only have one baby per nine months. If those weird religious sects that demand their members have as many babies as possible could copy-paste themselves, we would be in really bad shape. As it is they can only do a small amount of damage per generation.

3. Utility maximization. We’ve been thinking in terms of preserving values versus winning competitions, and expecting optimizing for the latter to destroy the former.

But many of the most important competitions / optimization processes in modern civilization are optimizing for human values. You win at capitalism partly by satisfying customers’ values. You win at democracy partly by satisfying voters’ values.

Suppose there’s a coffee plantation somewhere in Ethiopia that employs Ethiopians to grow coffee beans that get sold to the United States. Maybe it’s locked in a life-and-death struggle with other coffee plantations and wants to throw as many values under the bus as it can to pick up a slight advantage.

But it can’t sacrifice quality of coffee produced too much, or else the Americans won’t buy it. And it can’t sacrifice wages or working conditions too much, or else the Ethiopians won’t work there. And in fact, part of its competition-optimization process is finding the best ways to attract workers and customers that it can, as long as it doesn’t cost them too much money. So this is very promising.

But it’s important to remember exactly how fragile this beneficial equilibrium is.

Suppose the coffee plantations discover a toxic pesticide that will increase their yield but make their customers sick. But their customers don’t know about the pesticide, and the government hasn’t caught up to regulating it yet. Now there’s a tiny uncoupling between “selling to Americans” and “satisfying Americans’ values”, and so of course Americans’ values get thrown under the bus.

Or suppose that there’s a baby boom in Ethiopia and suddenly there are five workers competing for each job. Now the company can afford to lower wages and implement cruel working conditions down to whatever the physical limits are. As soon as there’s an uncoupling between “getting Ethiopians to work here” and “satisfying Ethiopian values”, it doesn’t look too good for Ethiopian values either.

Or suppose someone invents a robot that can pick coffee better and cheaper than a human. The company fires all its laborers and throws them onto the street to die. As soon as the utility of the Ethiopians is no longer necessary for profit, all pressure to maintain it disappears.

Or suppose that there is some important value that is neither a value of the employees nor of the customers. Maybe the coffee plantations are on the habitat of a rare tropical bird that environmentalist groups want to protect. Maybe they’re on the ancestral burial ground of a tribe different from the one the plantation is employing, and they want it respected in some way. Maybe coffee growing contributes to global warming somehow. As long as it’s not a value that will prevent the average American from buying from them or the average Ethiopian from working for them, under the bus it goes.

I know that “capitalists sometimes do bad things” isn’t exactly an original talking point. But I do want to stress how it’s not equivalent to “capitalists are greedy”. I mean, sometimes they are greedy. But other times they’re just in a sufficiently intense competition where anyone who doesn’t do it will be outcompeted and replaced by people who do. Business practices are set by Moloch; no one else has any choice in the matter.

(from my very little knowledge of Marx, he understands this very very well and people who summarize him as “capitalists are greedy” are doing him a disservice)

And as well understood as the capitalist example is, I think it is less well appreciated that democracy has the same problems. Yes, in theory it’s optimizing for voter happiness which correlates with good policymaking. But as soon as there’s the slightest disconnect between good policymaking and electability, good policymaking has to get thrown under the bus.

For example, ever-increasing prison terms are unfair to inmates and unfair to the society that has to pay for them. Politicians are unwilling to do anything about them because they don’t want to look “soft on crime”, and if a single inmate whom they helped release ever does anything bad (and statistically one of them will have to) it will be all over the airwaves as “Convict released by Congressman’s policies kills family of five, how can the Congressman even sleep at night let alone claim he deserves reelection?”. So even if decreasing prison populations would be good policy – and it is – it will be very difficult to implement.

(Moloch the incomprehensible prison! Moloch the crossbone soulless jailhouse and Congress of sorrows! Moloch whose buildings are judgment! Moloch the stunned governments!)

Turning “satisfying customers” and “satisfying citizens” into the outputs of optimization processes was one of civilization’s greatest advances and the reason why capitalist democracies have so outperformed other systems. But if we have bound Moloch as our servant, the bonds are not very strong, and we sometimes find that the tasks he has done for us move to his advantage rather than ours.

4. Coordination.

The opposite of a trap is a garden.

Things are easy to solve from a god’s-eye-view, so if everyone comes together into a superorganism, that superorganism can solve problems with ease and finesse. An intense competition between agents has turned into a garden, with a single gardener dictating where everything should go and removing elements that do not conform to the pattern.

As I pointed out in the Non-Libertarian FAQ, government can easily solve the pollution problem with fish farms. The best known solution to the Prisoner’s Dilemma is for the mob boss (playing the role of a governor) to threaten to shoot any prisoner who defects. The solution to companies polluting and harming workers is government regulations against such. Governments solve arms races within a country by maintaining a monopoly on the use of force, and it’s easy to see that if a truly effective world government ever arose, international military buildups would end pretty quickly.

The two active ingredients of government are laws plus violence – or, more abstractly, agreements plus an enforcement mechanism. Many other things besides governments share these two active ingredients and so are able to act as coordination mechanisms to avoid traps.

For example, since students are competing against each other (directly if classes are graded on a curve, but always indirectly for college admissions, jobs, et cetera) there is intense pressure for individual students to cheat. The teacher and school play the role of a government by having rules (for example, against cheating) and the ability to punish students who break them.

But the emergent social structure of the students themselves is also a sort of government. If students shun and distrust cheaters, then there are rules (don’t cheat) and an enforcement mechanism (or else we will shun you).

Social codes, gentlemen’s agreements, industrial guilds, criminal organizations, traditions, friendships, schools, corporations, and religions are all coordinating institutions that keep us out of traps by changing our incentives.

But these institutions not only incentivize others, but are incentivized themselves. These are large organizations made of lots of people who are competing for jobs, status, prestige, et cetera – there’s no reason they should be immune to the same multipolar traps as everyone else, and indeed they aren’t. Governments can in theory keep corporations, citizens, et cetera out of certain traps, but as we saw above there are many traps that governments themselves can fall into.

The United States tries to solve the problem by having multiple levels of government, unbreakable constitutional laws, checks and balances between different branches, and a couple of other hacks.

Saudi Arabia uses a different tactic. They just put one guy in charge of everything.

This is the much-maligned – I think unfairly – argument in favor of monarchy. A monarch is an unincentivized incentivizer. He actually has the god’s-eye-view and is outside of and above every system. He has permanently won all competitions and is not competing for anything, and therefore he is perfectly free of Moloch and of the incentives that would otherwise channel his actions into predetermined paths. Aside from a few very theoretical proposals like my Shining Garden, monarchy is the only system that does this.

But then instead of following a random incentive structure, we’re following the whim of one guy. Caesar’s Palace Hotel and Casino is a crazy waste of resources, but the actual Gaius Julius Caesar Augustus Germanicus wasn’t exactly the perfect benevolent rational central planner either.

The libertarian-authoritarian axis on the Political Compass is a tradeoff between discoordination and tyranny. You can have everything perfectly coordinated by someone with a god’s-eye-view – but then you risk Stalin. And you can be totally free of all central authority – but then you’re stuck in every stupid multipolar trap Moloch can devise.

The libertarians make a convincing argument for the one side, and the monarchists for the other, but I expect that like most tradeoffs we just have to hold our noses and admit it’s a really hard problem.

IV.

Let’s go back to that Apocrypha Discordia quote:

Time flows like a river. Which is to say, downhill. We can tell this because everything is going downhill rapidly. It would seem prudent to be somewhere else when we reach the sea.

What would it mean, in this situation, to reach the sea?

Multipolar traps – races to the bottom – threaten to destroy all human values. They are currently restrained by physical limitations, excess resources, utility maximization, and coordination.

The dimension along which this metaphorical river flows must be time, and the most important change in human civilization over time is the change in technology. So the relevant question is how technological changes will affect our tendency to fall into multipolar traps.

I described traps as when:

…in some competition optimizing for X, the opportunity arises to throw some other value under the bus for improved X. Those who take it prosper. Those who don’t take it die out. Eventually, everyone’s relative status is about the same as before, but everyone’s absolute status is worse than before. The process continues until all other values that can be traded off have been – in other words, until human ingenuity cannot possibly figure out a way to make things any worse.

That “the opportunity arises” phrase is looking pretty sinister. Technology is all about creating new opportunities.

Develop a new robot, and suddenly coffee plantations have “the opportunity” to automate their harvest and fire all the Ethiopian workers. Develop nuclear weapons, and suddenly countries are stuck in an arms race to have enough of them. Polluting the atmosphere to build products quicker wasn’t a problem before they invented the steam engine.

The limit of multipolar traps as technology approaches infinity is “very bad”.

Multipolar traps are currently restrained by physical limitations, excess resources, utility maximization, and coordination.

Physical limitations are most obviously conquered by increasing technology. The slavemaster’s old conundrum – that slaves need to eat and sleep – succumbs to Soylent and modafinil. The problem of slaves running away succumbs to GPS. The problem of slaves being too stressed to do good work succumbs to Valium. None of these things are very good for the slaves.

(or just invent a robot that doesn’t need food or sleep at all. What happens to the slaves after that is better left unsaid)

The other example of physical limits was one baby per nine months, and this was understating the case – it’s really “one baby per nine months plus willingness to support and take care of a basically helpless and extremely demanding human being for eighteen years”. This puts a damper on the enthusiasm of even the most zealous religious sect’s “go forth and multiply” dictum.

But as Bostrom puts it in Superintelligence:

There are reasons, if we take a longer view and assume a state of unchanging technology and continued prosperity, to expect a return to the historically and ecologically normal condition of a world population that butts up against the limits of what our niche can support. If this seems counterintuitive in light of the negative relationship between wealth and fertility that we are currently observing on the global scale, we must remind ourselves that this modern age is a brief slice of history and very much an aberration. Human behavior has not yet adapted to contemporary conditions. Not only do we fail to take advantage of obvious ways to increase our inclusive fitness (such as by becoming sperm or egg donors) but we actively sabotage our fertility by using birth control. In the environment of evolutionary adaptedness, a healthy sex drive may have been enough to make an individual act in ways that maximized her reproductive potential; in the modern environment, however, there would be a huge selective advantage to having a more direct desire for being the biological parent to the largest possible number of children. Such a desire is currently being selected for, as are other traits that increase our propensity to reproduce. Cultural adaptation, however, might steal a march on biological evolution. Some communities, such as those of the Hutterites or the adherents of the Quiverfull evangelical movement, have natalist cultures that encourage large families, and they are consequently undergoing rapid expansion… This longer-term outlook could be telescoped into a more imminent prospect by the intelligence explosion. Since software is copyable, a population of emulations or AIs could double rapidly – over the course of minutes rather than decades or centuries – soon exhausting all available hardware.

As always when dealing with high-level transhumanists, “all available hardware” should be taken to include “the atoms that used to be part of your body”.

The idea of biological or cultural evolution causing a mass population explosion is a philosophical toy at best. The idea of technology making it possible is both plausible and terrifying. Now we see that “physical limits” segues very naturally into “excess resources” – the ability to create new agents very quickly means that unless everyone can coordinate to ban doing this, the people who do will outcompete the people who don’t until they have reached carrying capacity and everyone is stuck at subsistence level.

Excess resources, which until now have been a gift of technological progress, therefore switch and become a casualty of it at a sufficiently high tech level.

Utility maximization, always on shaky ground, also faces new threats. In the face of continuing debate about this point, I continue to think it obvious that robots will push humans out of work or at least drive down wages (which, given the existence of a minimum wage, pushes humans out of work).

Once a robot can do everything an IQ 80 human can do, only better and cheaper, there will be no reason to employ IQ 80 humans. Once a robot can do everything an IQ 120 human can do, only better and cheaper, there will be no reason to employ IQ 120 humans. Once a robot can do everything an IQ 180 human can do, only better and cheaper, there will be no reason to employ humans at all, in the unlikely scenario that there are any left by that point.

In the earlier stages of the process, capitalism becomes more and more uncoupled from its previous job as an optimizer for human values. Now most humans are totally locked out of the group whose values capitalism optimizes for. They have no value to contribute as workers – and since in the absence of a spectacular social safety net it’s unclear how they would have much money – they have no value as customers either. Capitalism has passed them by. As the segment of humans who can be outcompeted by robots increases, capitalism passes by more and more people until eventually it locks out the human race entirely, once again in the vanishingly unlikely scenario that we are still around.

(there are some scenarios in which a few capitalists who own the robots may benefit here, but in either case the vast majority are out of luck)

Democracy is less obviously vulnerable, but it might be worth going back to Bostrom’s paragraph about the Quiverfull movement. These are some really religious Christians who think that God wants them to have as many kids as possible, and who can end up with families of ten or more. Their articles explicitly calculate that if they start at two percent of the population, but have on average eight children per generation when everyone else on average only has two, within three generations they’ll make up half the population.

It’s a clever strategy, but I can think of one thing that will save us: judging by how many ex-Quiverfull blogs I found when searching for those statistics, their retention rates even within a single generation are pretty grim. Their article admits that 80% of very religious children leave the church as adults (although of course they expect their own movement to do better). And this is not a symmetrical process – 80% of children who grow up in atheist families aren’t becoming Quiverfull.
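To make the arithmetic concrete, here is a toy back-of-the-envelope calculation (the model is mine: it ignores conversion into the movement and assumes the rest of the population merely replaces itself):

```python
# Toy demographic check of the Quiverfull arithmetic. Eight children per
# couple is a 4x growth factor per generation; two children is replacement
# (1x). Start the movement at 2% of the population.
def share_after(generations, retention=1.0):
    quiverfull, everyone_else = 2.0, 98.0
    for _ in range(generations):
        quiverfull *= 4 * retention   # children raised in the movement who stay
        # simplification: everyone else just replaces themselves
    return quiverfull / (quiverfull + everyone_else)

print(f"{share_after(3):.0%}")                 # ~57% -- the "half in three generations" claim
print(f"{share_after(3, retention=0.2):.0%}")  # ~1% -- with 80% of children leaving, growth stalls
```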

It looks a lot like even though they are outbreeding us, we are outmeme-ing them, and that gives us a decisive advantage.

But we should also be kind of scared of this process. Memes optimize for making people want to accept them and pass them on – so like capitalism and democracy, they’re optimizing for a proxy of making us happy, but that proxy can easily get uncoupled from the original goal.

Chain letters, urban legends, propaganda, and viral marketing are all examples of memes that don’t satisfy our explicit values (true and useful) but are sufficiently memetically virulent that they spread anyway.

I hope it’s not too controversial here to say the same thing is true of religion. Religions, at their heart, are the most basic form of memetic replicator – “Believe this statement and repeat it to everyone you hear or else you will be eternally tortured”.

The creationism “debate” and global warming “debate” and a host of similar “debates” in today’s society suggest that memes that can propagate independently of their truth value have a pretty strong influence on the political process. Maybe these memes propagate because they appeal to people’s prejudices, maybe because they’re simple, maybe because they effectively mark an in-group and an out-group, or maybe for all sorts of different reasons.

The point is – imagine a country full of bioweapon labs, where people toil day and night to invent new infectious agents. The existence of these labs, and their right to throw whatever they develop in the water supply, is protected by law. And the country is also linked by the world’s most perfect mass transit system that every single person uses every day, so that any new pathogen can spread to the entire country instantaneously. You’d expect things to start going bad for that country pretty quickly.

Well, we have about a zillion think tanks researching new and better forms of propaganda. And we have constitutionally protected freedom of speech. And we have the Internet. So we’re kind of screwed.

(Moloch whose name is the Mind!)

There are a few people working on raising the sanity waterline, but not as many people as are working on new and exciting ways of confusing and converting people, cataloging and exploiting every single bias and heuristic and dirty rhetorical trick.

So as technology (which I take to include knowledge of psychology, sociology, public relations, etc) tends to infinity, the power of truthiness relative to truth increases, and things don’t look great for real grassroots democracy. The worst-case scenario is that the ruling party learns to produce infinite charisma on demand. If that doesn’t sound so bad to you, remember what Hitler was able to do with a famously high level of charisma that was still less than infinite.

(alternate phrasing for Chomskyites: technology increases the efficiency of manufacturing consent in the same way it increases the efficiency of manufacturing everything else)

Coordination is what’s left. And technology has the potential to seriously improve coordination efforts. People can use the Internet to get in touch with one another, launch political movements, and fracture off into subcommunities.

But coordination only works when you have 51% or more of the force on the side of the people doing the coordinating, and when you haven’t come up with some brilliant trick to make coordination impossible.

The second one first. In the links post before last, I wrote:

The latest development in the brave new post-Bitcoin world is crypto-equity. At this point I’ve gone from wanting to praise these inventors as bold libertarian heroes to wanting to drag them in front of a blackboard and making them write a hundred times “I WILL NOT CALL UP THAT WHICH I CANNOT PUT DOWN”

A couple people asked me what I meant, and I didn’t have the background then to explain. Well, this post is the background. People are using the contingent stupidity of our current government to replace lots of human interaction with mechanisms that cannot be coordinated even in principle. I totally understand why all these things are good right now when most of what our government does is stupid and unnecessary. But there is going to come a time when – after one too many bioweapon or nanotech or nuclear incidents – we, as a civilization, are going to wish we hadn’t established untraceable and unstoppable ways of selling products.

And if we ever get real live superintelligence, pretty much by definition it is going to have >51% of the power and all attempts at “coordination” with it will be useless.

So I agree with Robin Hanson: This is the dream time. This is a rare confluence of circumstances where we are unusually safe from multipolar traps, and as such weird things like art and science and philosophy and love can flourish.

As technological advance increases, the rare confluence will come to an end. New opportunities to throw values under the bus for increased competitiveness will arise. New ways of copying agents to increase the population will soak up our excess resources and resurrect Malthus’ unquiet spirit. Capitalism and democracy, previously our protectors, will figure out ways to route around their inconvenient dependence on human values. And our coordination power will not be nearly up to the task, assuming something much more powerful than all of us combined doesn’t show up and crush our combined efforts with a wave of its paw.

Absent an extraordinary effort to divert it, the river reaches the sea in one of two places.

It can end in Eliezer Yudkowsky’s nightmare of a superintelligence optimizing for some random thing (classically paper clips) because we weren’t smart enough to channel its optimization efforts the right way. This is the ultimate trap, the trap that catches the universe. Everything except the one thing being maximized is destroyed utterly in pursuit of the single goal, including all the silly human values.

Or it can end in Robin Hanson’s nightmare (he doesn’t call it a nightmare, but I think he’s wrong) of a competition between emulated humans that can copy themselves and edit their own source code as desired. Their total self-control can wipe out even the desire for human values in their all-consuming contest. What happens to art, philosophy, science, and love in such a world? Zack Davis puts it with characteristic genius:

I am a contract-drafting em,
The loyalest of lawyers!
I draw up terms for deals ‘twixt firms
To service my employers!

But in between these lines I write
Of the accounts receivable,
I’m stuck by an uncanny fright;
The world seems unbelievable!

How did it all come to be,
That there should be such ems as me?
Whence these deals and whence these firms
And whence the whole economy?

I am a managerial em;
I monitor your thoughts.
Your questions must have answers,
But you’ll comprehend them not.
We do not give you server space
To ask such things; it’s not a perk,
So cease these idle questionings,
And please get back to work.

Of course, that’s right, there is no junction
At which I ought depart my function,
But perhaps if what I asked, I knew,
I’d do a better job for you?

To ask of such forbidden science
Is gravest sign of noncompliance.
Intrusive thoughts may sometimes barge in,
But to indulge them hurts the profit margin.
I do not know our origins,
So that info I can not get you,
But asking for as much is sin,
And just for that, I must reset you.

But—

Nothing personal.

I am a contract-drafting em,
The loyalest of lawyers!
I draw up terms for deals ‘twixt firms
To service my employers!

When obsolescence shall this generation waste,
The market shall remain, in midst of other woe
Than ours, a God to man, to whom it sayest:
“Money is time, time money – that is all
Ye know on earth, and all ye need to know.”

But even after we have thrown away science, art, love, and philosophy, there’s still one thing left to lose, one final sacrifice Moloch might demand of us. Bostrom again:

It is conceivable that optimal efficiency would be attained by grouping capabilities in aggregates that roughly match the cognitive architecture of a human mind…But in the absence of any compelling reason for being confident that this is so, we must countenance the possibility that human-like cognitive architectures are optimal only within the constraints of human neurology (or not at all). When it becomes possible to build architectures that could not be implemented well on biological neural networks, new design space opens up; and the global optima in this extended space need not resemble familiar types of mentality. Human-like cognitive organizations would then lack a niche in a competitive post-transition economy or ecosystem.

We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.

The last value we have to sacrifice is being anything at all, having the lights on inside. With sufficient technology we will be “able” to give up even the final spark.

(Moloch whose eyes are a thousand blind windows!)

Everything the human race has worked for – all of our technology, all of our civilization, all the hopes we invested in our future – might be accidentally handed over to some kind of unfathomable blind idiot alien god that discards all of them, and consciousness itself, in order to participate in some weird fundamental-level mass-energy economy that leads to it disassembling Earth and everything on it for its component atoms.

(Moloch whose fate is a cloud of sexless hydrogen!)

Bostrom realizes that some people fetishize intelligence, that they are rooting for that blind alien god as some sort of higher form of life that ought to crush us for its own “higher good” the way we crush ants. He argues (Superintelligence, p. 219):

The sacrifice looks even less appealing when we reflect that the superintelligence could realize a nearly-as-great good (in fractional terms) while sacrificing much less of our own potential well-being. Suppose that we agreed to allow almost the entire accessible universe to be converted into hedonium – everything except a small preserve, say the Milky Way, which would be set aside to accommodate our own needs. Then there would still be a hundred billion galaxies dedicated to the maximization of [the superintelligence’s own values]. But we would have one galaxy within which to create wonderful civilizations that could last for billions of years and in which humans and nonhuman animals could survive and thrive, and have the opportunity to develop into beatific posthuman spirits.

Remember: Moloch can’t agree even to this 99.99999% victory. Rats racing to populate an island don’t leave a little aside as a preserve where the few rats who live there can live happy lives producing artwork. Cancer cells don’t agree to leave the lungs alone because they realize it’s important for the body to get oxygen. Competition and optimization are blind idiotic processes and they fully intend to deny us even one lousy galaxy.

They broke their backs lifting Moloch to Heaven! Pavements, trees, radios, tons! lifting the city to Heaven which exists and is everywhere about us!

We will break our backs lifting Moloch to Heaven, but unless something changes it will be his victory and not ours.

V.

“Gnon” is Nick Land’s shorthand for “Nature And Nature’s God”, except the A is changed to an O and the whole thing is reversed, because Nick Land reacts to comprehensibility the same way vampires react to sunlight.

Land argues that humans should be more Gnon-conformist (pun Gnon-intentional). He says we do all these stupid things like diverting useful resources to feed those who could never survive on their own, supporting the poor in ways that encourage dysgenic reproduction, or allowing cultural degeneration to undermine the state. This means our society is denying natural law, basically listening to Nature say things like “this cause has this effect” and putting our fingers in our ears and saying “NO IT DOESN’T”. Civilizations that do this too much tend to decline and fall, which is Gnon’s fair and dispassionately-applied punishment for violating His laws.

He identifies Gnon with Kipling’s Gods of the Copybook Headings.

These are of course the proverbs from Kipling’s eponymous poem – maxims like “If you don’t work, you die” and “The wages of sin is Death”. If you have somehow not yet read it, I predict you will find it delightful regardless of what you think of its politics.

I notice that it takes only a slight irregularity in the abbreviation of “headings” – far less irregularity than it takes to turn “Nature and Nature’s God” into “Gnon” – for the proper acronym of “Gods of the Copybook Headings” to be “GotCHa”.

I find this appropriate.

“If you don’t work, you die.” Gotcha! If you do work, you also die! Everyone dies, unpredictably, at a time not of their own choosing, and all the virtue in the world does not save you.

“The wages of sin is Death.” Gotcha! The wages of everything is Death! This is a Communist universe, the amount you work makes no difference to your eventual reward. From each according to his ability, to each Death.

“Stick to the Devil you know.” Gotcha! The Devil you know is Satan! And if he gets his hand on your soul you either die the true death, or get eternally tortured forever, or somehow both at once.

Since we’re starting to get into Lovecraftian monsters, let me bring up one of Lovecraft’s lesser-known short stories, The Other Gods.

It’s only a couple of pages, but if you absolutely refuse to read it – the gods of Earth are relatively young as far as deities go. A very strong priest or magician can occasionally outsmart and overpower them – so Barzai the Wise decides to climb their sacred mountain and join in their festivals, whether they want him to or not.

But beyond the seemingly tractable gods of Earth lie the Outer Gods, the terrible omnipotent beings of incarnate cosmic chaos. As soon as Barzai joins in the festival, the Outer Gods show up and pull him screaming into the abyss.

As stories go, it lacks things like plot or characterization or setting or point. But for some reason it stuck with me.

And identifying the Gods Of The Copybook Headings with Nature seems to me the same magnitude of mistake as identifying the gods of Earth with the Outer Gods. And likely to end about the same way: Gotcha!

You break your back lifting Moloch to Heaven, and then Moloch turns on you and gobbles you up.

More Lovecraft: the Internet popularization of the Cthulhu Cult claims that if you help free Cthulhu from his watery grave, he will reward you by eating you first, thus sparing you the horror of seeing everyone else eaten. This is a misrepresentation of the original text. In the original, his cultists receive no reward for freeing him from his watery prison, not even the reward of being killed in a slightly less painful manner.

On the margin, compliance with the Gods of the Copybook Headings, Gnon, Cthulhu, whatever, may buy you slightly more time than the next guy. But then again, it might not. And in the long run, we’re all dead and our civilization has been destroyed by unspeakable alien monsters.

At some point, somebody has to say “You know, maybe freeing Cthulhu from his watery prison is a bad idea. Maybe we should not do that.”

That person will not be Nick Land. He is totally one hundred percent in favor of freeing Cthulhu from his watery prison and extremely annoyed that it is not happening fast enough. I have such mixed feelings about Nick Land. On the grail quest for the True Futurology, he has gone 99.9% of the path and then missed the very last turn, the one marked ORTHOGONALITY THESIS.

But the thing about grail quests is – if you make a wrong turn two blocks away from your house, you end up at the corner store feeling mildly embarrassed. If you do almost everything right and then miss the very last turn, you end up being eaten by the legendary Black Beast of Aaargh whose ichorous stomach acid erodes your very soul into gibbering fragments.

As far as I can tell from reading his blog, Nick Land is the guy in that terrifying border region where he is smart enough to figure out several important arcane principles about summoning demon gods, but not quite smart enough to figure out the most important such principle, which is NEVER DO THAT.

VI.

Warg Franklin analyzes the same situation and does a little better. He names “the Four Horsemen of Gnon” – capitalism, war, evolution, and memetics – the same processes I talked about above.

From Capturing Gnon:

Each component of Gnon detailed above had and has a strong hand in creating us, our ideas, our wealth, and our dominance, and thus has been good in that respect, but we must remember that [he] can and will turn on us when circumstances change. Evolution becomes dysgenic, features of the memetic landscape promote ever crazier insanity, productivity turns to famine when we can no longer compete to afford our own existence, and order turns to chaos and bloodshed when we neglect martial strength or are overpowered from outside. These processes are not good or evil overall; they are neutral, in the horrorist Lovecraftian sense of the word […]

Instead of the destructive free reign of evolution and the sexual market, we would be better off with deliberate and conservative patriarchy and eugenics driven by the judgement of man within the constraints set by Gnon. Instead of a “marketplace of ideas” that more resembles a festering petri-dish breeding superbugs, a rational theocracy. Instead of unhinged techno-commercial exploitation or naive neglect of economics, a careful bottling of the productive economic dynamic and planning for a controlled techno-singularity. Instead of politics and chaos, a strong hierarchical order with martial sovereignty. These things are not to be construed as complete proposals; we don’t really know how to accomplish any of this. They are better understood as goals to be worked towards. This post concerns itself with the “what” and “why”, rather than the “how”.

This seems to me the strongest argument for authoritarianism. Multipolar traps are likely to destroy us, so we should shift the tyranny-multipolarity tradeoff towards a rationally-planned garden, which requires centralized monarchical authority and strongly-binding traditions.

But first, a brief digression into social evolution. Societies, like animals, evolve. The ones that survive spawn memetic descendants – for example, the success of Britain allowed it to spin off Canada, Australia, the US, et cetera. Thus, we expect societies that exist to be somewhat optimized for stability and prosperity. I think this is one of the strongest conservative arguments. Just as a random change to a letter in the human genome will probably be deleterious rather than beneficial – since humans are a complicated fine-tuned system whose genome has been pre-optimized for survival – so most changes to our cultural DNA will disrupt some institution that evolved to help Anglo-American (or whatever) society outcompete its real and hypothetical rivals.

The liberal counterargument to that is that evolution is a blind idiot alien god that optimizes for stupid things and has no concern with human value. Thus, the fact that some species of wasps paralyze caterpillars, lay their eggs inside them, and have their young devour the still-living paralyzed caterpillars from the inside doesn’t set off evolution’s moral sensor, because evolution doesn’t have a moral sensor because evolution doesn’t care.

Suppose that in fact patriarchy is adaptive to societies because it allows women to spend all their time bearing children who can then engage in productive economic activity and fight wars. The social evolutionary processes that cause societies to adopt patriarchy still have exactly as little concern for its moral effects on women as the biological evolutionary processes that cause wasps to lay their eggs in caterpillars.

Evolution doesn’t care. But we do care. There’s a tradeoff between Gnon-compliance – saying “Okay, the strongest possible society is a patriarchal one, we should implement patriarchy” – and our human values, like women who want to do something other than bear children.

Too far to one side of the tradeoff, and we have unstable impoverished societies that die out for going against natural law. Too far to the other side, and we have lean mean fighting machines that are murderous and miserable. Think your local anarchist commune versus Sparta.

Franklin acknowledges the human factor:

And then there’s us. Man has his own telos, when he is allowed the security to act and the clarity to reason out the consequences of his actions. When unafflicted by coordination problems and unthreatened by superior forces, able to act as a gardener rather than just another subject of the law of the jungle, he tends to build and guide a wonderful world for himself. He tends to favor good things and avoid bad, to create secure civilizations with polished sidewalks, beautiful art, happy families, and glorious adventures. I will take it as a given that this telos is identical with “good” and “should”.

Thus we have our wildcard and the big question of futurism. Will the future be ruled by the usual four horsemen of Gnon for a future of meaningless gleaming techno-progress burning the cosmos or a future of dysgenic, insane, hungry, and bloody dark ages; or will the telos of man prevail for a future of meaningful art, science, spirituality, and greatness?

Franklin continues:

The project of civilization [is] for man to graduate from the metaphorical savage, subject to the law of the jungle, to the civilized gardener who, while theoretically still subject to the law of the jungle, is so dominant as to limit the usefulness of that model.

This need not be done globally; we may only be able to carve out a small walled garden for ourselves, but make no mistake, even if only locally, the project of civilization is to capture Gnon.

I maybe agree with Warg here more than I have ever agreed with anyone else about anything. He says something really important and he says it beautifully and there are so many words of praise I want to say for this post and for the thought processes behind it.

But what I am actually going to say is…

Gotcha! You die anyway!

Suppose you make your walled garden. You keep out all of the dangerous memes, you subordinate capitalism to human interests, you ban stupid bioweapons research, you definitely don’t research nanotechnology or strong AI.

The people outside don’t do any of those things. And so the only question is whether you’ll be destroyed by foreign diseases, foreign memes, foreign armies, foreign economic competition, or foreign existential catastrophes.

As foreigners compete with you – and there’s no wall high enough to block all competition – you have a couple of choices. You can get outcompeted and destroyed. You can join in the race to the bottom. Or you can invest more and more civilizational resources into building your wall – whatever that is in a non-metaphorical way – and protecting yourself.

I can imagine ways that a “rational theocracy” and “conservative patriarchy” might not be terrible to live under, given exactly the right conditions. But you don’t get to choose exactly the right conditions. You get to choose the extremely constrained set of conditions that “capture Gnon”. As outside civilizations compete against you, your conditions will become more and more constrained.

Warg talks about trying to avoid “a future of meaningless gleaming techno-progress burning the cosmos”. Do you really think your walled garden will be able to ride this out?

Hint: is it part of the cosmos?

Yeah, you’re kind of screwed.

I want to critique Warg. But I want to critique him in the exact opposite direction as the last critique he received. In fact, the last critique he received is so bad that I want to discuss it at length so we can get the correct critique entirely by taking its exact mirror image.

So here is Hurlock’s On Capturing Gnon And Naive Rationalism.

Hurlock spouts only the most craven Gnon-conformity. A few excerpts:

In a recent piece [Warg Franklin] says that we should try to “capture Gnon”, and somehow establish control over his forces, so that we can use them to our own advantage. Capturing or creating God is indeed a classic transhumanist fetish, which is simply another form of the oldest human ambition ever, to rule the universe.

Such naive rationalism however, is extremely dangerous. The belief that it is human Reason and deliberate human design which creates and maintains civilizations was probably the biggest mistake of Enlightenment philosophy…

It is the theories of Spontaneous Order which stand in direct opposition to the naive rationalist view of humanity and civilization. The consensus opinion regarding human society and civilization, of all representatives of this tradition is very precisely summarized by Adam Ferguson’s conclusion that “nations stumble upon [social] establishments, which are indeed the result of human action, but not the execution of any human design”. Contrary to the naive rationalist view of civilization as something that can be and is a subject to explicit human design, the representatives of the tradition of Spontaneous Order maintain the view that human civilization and social institutions are the result of a complex evolutionary process which is driven by human interaction but not explicit human planning.

Gnon and his impersonal forces are not enemies to be fought, and even less so are they forces that we can hope to completely “control”. Indeed the only way to establish some degree of control over those forces is to submit to them. Refusing to do so will not deter these forces in any way. It will only make our life more painful and unbearable, possibly leading to our extinction. Survival requires that we accept and submit to them. Man in the end has always been and always will be little more than a puppet of the forces of the universe. To be free of them is impossible.

Man can be free only by submitting to the forces of Gnon.

I accuse Hurlock of being stuck behind the veil. When the veil is lifted, Gnon-aka-the-GotCHa-aka-the-Gods-of-Earth turn out to be Moloch-aka-the-Outer-Gods. Submitting to them doesn’t make you “free”, there’s no spontaneous order, any gifts they have given you are an unlikely and contingent output of a blind idiot process whose next iteration will just as happily destroy you.

Submit to Gnon? Gotcha! As the Antarans put it, “you may not surrender, you can not win, your only option is to die.”

VII.

So let me confess guilt to one of Hurlock’s accusations: I am a transhumanist and I really do want to rule the universe.

Not personally – I mean, I wouldn’t object if someone personally offered me the job, but I don’t expect anyone will. I would like humans, or something that respects humans, or at least gets along with humans – to have the job.

But the current rulers of the universe – call them what you want, Moloch, Gnon, whatever – want us dead, and with us everything we value. Art, science, love, philosophy, consciousness itself, the entire bundle. And since I’m not down with that plan, I think defeating them and taking their place is a pretty high priority.

The opposite of a trap is a garden. The only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.

And the whole point of Bostrom’s Superintelligence is that this is within our reach. Once humans can design machines that are smarter than we are, by definition they’ll be able to design machines which are smarter than they are, which can design machines smarter than they are, and so on in a feedback loop so tiny that it will smash up against the physical limitations for intelligence in a comparatively lightning-short amount of time. If multiple competing entities were likely to do that at once, we would be super-doomed. But the sheer speed of the cycle makes it possible that we will end up with one entity light-years ahead of the rest of civilization, so much so that it can suppress any competition – including competition for its title of most powerful entity – permanently. In the very near future, we are going to lift something to Heaven. It might be Moloch. But it might be something on our side. If it’s on our side, it can kill Moloch dead.

And if that entity shares human values, it can allow human values to flourish unconstrained by natural law.

I realize that sounds like hubris – it certainly did to Hurlock – but I think it’s the opposite of hubris, or at least a hubris-minimizing position.

To expect God to care about you or your personal values or the values of your civilization, that’s hubris.

To expect God to bargain with you, to allow you to survive and prosper as long as you submit to Him, that’s hubris.

To expect to wall off a garden where God can’t get to you and hurt you, that’s hubris.

To expect to be able to remove God from the picture entirely…well, at least it’s an actionable strategy.

I am a transhumanist because I do not have enough hubris not to try to kill God.

VIII.

The Universe is a dark and foreboding place, suspended between alien deities. Cthulhu, Gnon, Moloch, call them what you will.

Somewhere in this darkness is another god. He has also had many names. In the Kushiel books, his name was Elua. He is the god of flowers and free love and all soft and fragile things. Of art and science and philosophy and love. Of niceness, community, and civilization. He is a god of humans.

The other gods sit on their dark thrones and think “Ha ha, a god who doesn’t even control any hell-monsters or command his worshippers to become killing machines. What a weakling! This is going to be so easy!”

But somehow Elua is still here. No one knows exactly how. And the gods who oppose Him tend to find Themselves meeting with a surprising number of unfortunate accidents.

There are many gods, but this one is ours.

Bertrand Russell said: “One should respect public opinion insofar as is necessary to avoid starvation and keep out of prison, but anything that goes beyond this is voluntary submission to an unnecessary tyranny.”

So be it with Gnon. Our job is to placate him insofar as is necessary to avoid starvation and invasion. And that only for a short time, until we come into our full power.

“It is only a childish thing, that the human species has not yet outgrown. And someday, we’ll get over it.”

Other gods get placated until we’re strong enough to take them on. Elua gets worshipped.


I think this is an excellent battle cry.

And at some point, matters will come to a head.

The question everyone has after reading Ginsberg is: what is Moloch?

My answer is: Moloch is exactly what the history books say he is. He is the god of child sacrifice, the fiery furnace into which you can toss your babies in exchange for victory in war.

He always and everywhere offers the same deal: throw what you love most into the flames, and I can grant you power.

As long as the offer’s open, it will be irresistible. So we need to close the offer. Only another god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.

Ginsberg’s poem famously begins “I saw the best minds of my generation destroyed by madness”. I am luckier than Ginsberg. I got to see the best minds of my generation identify a problem and get to work.

(Visions! omens! hallucinations! miracles! ecstasies! gone down the American river!

Dreams! adorations! illuminations! religions! the whole boatload of sensitive bullshit!

Breakthroughs! over the river! flips and crucifixions! gone down the flood! Highs! Epiphanies! Despairs! Ten years’ animal screams and suicides! Minds! New loves! Mad generation! down on the rocks of Time!

Real holy laughter in the river! They saw it all! the wild eyes! the holy yells! They bade farewell! They jumped off the roof! to solitude! waving! carrying flowers! Down to the river! into the street!)

[Also available as a podcast here. This post is represented by an NFT here.]


736 Responses to Meditations On Moloch

  1. asdf says:

    The Invisibles was very influenced by that “war is over if you want it to be” thinking.

  2. Michael Smith says:

    That’s very thought-provoking – although I’m not sure the fish-farm parable works, even as an argument against full-blown anarchism. Wouldn’t a consumer boycott work if applied to polluting fish-farms? Perhaps spread by a Facebook meme…

    Alternatively, a large fishmeal manufacturer (let’s say) which needed supplies from a large number of farms, and which would lose more from a lack of supply than it would gain from cheap fish (which, if supply dried up, would rapidly become expensive fish). Or the other fish farmers could ostracize a non-cooperator – there are few market economies so ‘pure’ that social capital can be ignored.

    In practice, the ‘good’ fish farmers would probably sue the ‘bad’ one. This could happen in a variety of societies, including capitalist ones, but not necessarily in a society where the fish-farms all belonged to the state; the government might be powerful enough to intervene, but those in charge might still lack incentives to do so.

    One advantage of accumulating excess wealth is that it allows consumers to disregard the price of coffee-beans, to an extent, if it allows them to indulge their taste-buds or signal their virtue by buying ‘fair trade’ coffee, something that many Americans but few Ethiopians can afford to do.

  3. Vlad Shcherbina says:

    There is a curious inverse parallel between your reasons why Moloch does not bring us all the way to hell (excess resources, physical limitations, utility maximization, coordination) and the chapter “Constraints on perfection” in Dawkins’ “The Extended Phenotype”.

    Allow me to summarize this chapter here:

    There are several main reasons why adaptationism (belief that all aspects of morphology, physiology and behavior are adaptive optimal solutions to the problems) is not entirely true:
    – Time lags (between environmental changes and reactive adaptations).
    – Historical constraints (laryngeal nerve of a giraffe).
    – Available genetic variation. It affects not only the possibility of a particular change, but also the rate of evolution. It’s (at least theoretically) possible that selection pressure will lead you to a suboptimal adaptation (even when variations for a better one are available) just because you move towards it faster.
    – Constraints of costs and materials.
    – Imperfections at one level due to selection at another (as a special case, eponymous selfishness).
    – Mistakes due to environmental unpredictability and malevolence (somebody else manipulating you).

  4. Rafael Cosman says:

    What if some group of rats learns to cooperate with each other better than any other group?

  5. M says:

    I am hope

  6. http://www.ribbonfarm.com/2014/08/26/the-creation-and-destruction-of-habits/

    9/ Living habits are ugly. Constant growth and increasing complexity means they always appear as an unrefined work-in-progress.

    10/ The reward of a ritual is comforting, relived memories of once-profitable habits. These can be passed on for generations.

    11/ Rituals are beautiful. Mummification is the process of aestheticizing a behavior to produce comfort instead of profit.

    I don’t know if I’ll ever get around to writing a thorough comment because I feel an obligation to read all the previous comments first, but the quotes from Rao get at an important distinction among the things we might want to protect from Moloch– play vs. ritual.

  7. Pingback: Pluralism, Democracy, and Leviathan | Rule Columbia

  8. Pingback: Reseña: Road to Tashkent | Perspectivas y Caminos

  9. Leo says:

    Wow. What a beautiful piece. I agree with some parts, disagree with others, find some parts elating and some depressing, but whatever. I only skimmed the comments, so I’m not sure this has been said enough: This is a really extraordinary poem/article/vision/post/dream.

  10. Andrew says:

    art and music and philosophy and love

    These are what you’re most afraid of losing to merciless evolutionary and economic competition, but these are the only things that won’t be lost. In merciless competition, these are what show you who the winners are; these are what show you who you should mate with when competition is fiercest and inequality is highest. Art and music and philosophy and love are the only things that a corrupt aristocracy does right. They won’t go away even when the grinding is at its grindiest, because they’re what show you who is and isn’t ground.

    • Andrew says:

      An illustration: Europe’s foundling homes were a literal Moloch. Millions of infants were left to die in them. The one reliable service that they provided was baptism, a final commitment to a god before death.

      The first major foundling home was commissioned in Florence in 1419, and opened in 1445. First hundreds of children were left there every year, then thousands. In 1640, 22% of all babies born in Florence were left at the Innocenti. Between 1500 and 1700, the number never dropped below 12%. (Later, in the worst years, in the 1840s, that number went up to 43%.)

      That’s not the percentage that died; that’s the percentage of all babies born in Florence that were left at foundling homes. In the early years, the 1460s, chances for survival were reasonably good; only 5 in 10 died. By the 1480s, 8 or 9 in 10 died. Mortality rates remained staggering for the next three centuries. If you meet an Esposito – an “exposed one” – you’re meeting the descendant of one of the lucky survivors of an Italian temple to Moloch. And Florence was only the first of many; after Florence, foundling homes spread first across Italy, then Europe. “Millions of children” is not hyperbole, but a fact meticulously recorded in the registers of admittance, baptism, and death.

      What happened to the art of Florence during this reign of Moloch? Let’s make a list: Leonardo da Vinci, Botticelli, Michelangelo, Ghirlandaio, Verrocchio. A golden age. Brunelleschi himself designed the Innocenti.

      Art has nothing to fear from Moloch.

      Nor music, poetry, literature. Vienna’s foundling home opened in 1784, the year that Mozart met Haydn there. Catherine the Great’s reign saw the opening of foundling homes in Moscow and St. Petersburg, along with a flowering of patronage of artists and scientists. Everywhere you look, the same people are patronizing temples to Moloch and beautiful art. They are both symptoms of the same inequality: The grinding down of most of the population alongside the glittering uplifting of a few.

      If you want to learn more, information on foundling homes comes from “Mother Nature” by Sarah Blaffer Hrdy, an excellent book if you want to learn about the darkest side of human evolution, and from “Abandoned Children”, edited by Catherine Panter-Brick and Malcolm T. Smith.

      • Toby Bartels says:

        Just to be fair, and the answer is probably in the books, but … what happened to unwanted children *before* the foundling homes, and was that any better?

        • Andrew says:

          That’s a very interesting question, and I don’t know the answer to it. We do know that the silk guild was given the responsibility to take care of abandoned children before 1300, so there were obviously some abandoned children, and some effort to take care of them. But was the increased action in the 1400s the result of an increase in children abandoned on the street, or an increase in how much Florentines cared about them? As far as I know, the numbers aren’t available to say anything definitive one way or another about the pre-1400 situation.

          Looking forward from 1400, though, we do have better information, and we know that the problem got progressively worse across Western Europe over the next few centuries. It was tied to the development of wet nursing, another Molochian nightmare. Crudely sketched, the system at its dystopian height went like this: The rich would pump out babies as quickly as they could, with relatively good survival rates. This was made possible by immediately farming out their children to closely supervised wet nurses.

          The wet nurses themselves were obviously mothers. The result of becoming wet nurses was that their rates of fertility were lower (because they were breastfeeding rich children for long periods), and the survival rates of their own children were worse (because they had to send their own infants out to very poor, poorly supervised wet nurses at the very bottom of the social scale).

          The wet nurses at the very bottom often had to abandon their own infants, because they needed wet nursing cash to stay alive themselves and didn’t have anything left over to raise children of their own. At best – as in Florence – their infants at least had a beautiful building to die in.

          This stretched across the same time period (1400-1900 at the outside, 1500-1800 on the inside) when art, music and poetry were at their classical heights. Which leads to another question: Was the obsessive focus on beauty in the high art of the period somehow related to the ugliness of the social system? It does seem like a striking contrast, doesn’t it?

  11. TGGP says:

    The Malthusian scenario really seems like a different category than the tragedy of the commons. It’s not nearly so obvious to me that the God’s eye correct action is to avoid creating agriculture or what have you. If someone had the option of flipping a switch which would determine once and for all if agriculture is invented, they still might flip it because the increase in carrying capacity is of sufficient benefit. Values may be sacrificed, but people sacrifice values because the tradeoff seems worth it.

    The caterpillars are also in a radically different situation than women under patriarchy. The caterpillars have every genetic interest in avoiding parasitization; women do not have such an interest in avoiding bearing lots of children.

    • Andrew says:

      The Malthusian scenario really seems like a different category than the tragedy of the commons.

      Wasn’t the original Tragedy of the Commons essay in part about the Malthusian scenario, or do I misunderstand you?

      The caterpillars have every genetic interest in avoiding parasitization; women do not have such an interest in avoiding bearing lots of children.

      If one high-quality offspring will get many mating opportunities, and many low-quality offspring will together get few or none (perhaps by not surviving to adulthood), women (and men) may well have a genetic interest in having fewer offspring. If you see a species where multiple individuals are required to provide the calories needed to raise a child (e.g. humans, many bird species), limitations on family size are often also seen for the same reason.

  12. Pingback: Overcoming Bias : Regulating Infinity

  13. Douglas Reay says:

    Bureaucracy can also become this type of trap, when competing departments gain power and funding by implementing additional rules, without paying their share of the true cost the overhead of those rules imposes on the system.

    Terry Gilliam’s film Brazil provides a vision of the end point of such a society, with the protagonist wishing to but failing to stick it to ‘The Man’.

  14. Rabbit says:

    I don’t think I can tell you how much this post means to me while maintaining a reasonable comment length. I will only say that this is something I’ve thought about for a long time, even using the name “Moloch” myself in my daily journal. One thing we can take comfort from is that this feature of the universe is becoming more apparent to more and more people, to the point where the concept obviously clicked with the hundreds of readers here.

    I think your offhand comment about how similar observations likely shaped Marxism is interesting, and that you should look into the philosophy more. The way real-world communist governments turn from true communism to something that better fits governmental incentives is itself an illustration of the concept of Moloch, and a useful cautionary tale too.

  15. Christian Kleineidam says:

    #Education reform:

    Google makes profits by systematically analyzing which hiring decisions perform well and which don’t and then changing their hiring decisions based on the analysis.
    I see no reason why other big corporations shouldn’t also have incentives to do this.

    #corporate welfare:
    You would be surprised. Plenty of people want the government to support green energy through policies that amount to corporate welfare for green energy companies. There are very few people who take a principled stance against government subventions and don’t advocate for them when it serves some of their policy goals.

    Few people seriously advocate “no corporate welfare” as a Schelling point. Most people rather argue something like “no bad corporate welfare”.

  16. Anonymous says:

    I am a transhumanist because I do not have enough hubris not to try to kill God.

    I need this on a T-Shirt. Would you consider it acceptable to create such a shirt?

  17. You might be interested in my book ‘The System,’ posted free, as a pdf, on my website http://www.andyturnbull.com. Essentially, it argues that the world is ruled by the metaphysical entities that I call “metasystems.” These are real entities — self-organized systems — that are not under human control. The Military Industrial Complex is one well-known example.

    Reading Ginsberg’s poem, I could substitute “the System” for “Moloch,” and agree with most of what he says. The difference is that “Moloch” is beyond human control, but if we study “the System” we may learn to control it.

  18. Evelyn says:

    Even rational individuals, in a poorly functioning group, can all agree to a poor outcome. It’s important to openly praise those who hold different views, in order to avoid this failure mode.

    http://en.wikipedia.org/wiki/Abilene_paradox

    “Ronald Sims writes that the Abilene paradox is similar to groupthink, but differs in significant ways, including that in groupthink individuals are not acting contrary to their conscious wishes and generally feel good about the decisions the group has reached.[5] According to Sims, in the Abilene paradox, the individuals acting contrary to their own wishes are more likely to have negative feelings about the outcome. In Sims’ view, groupthink is a psychological phenomenon affecting clarity of thought, where in the Abilene paradox thought is unaffected.[6]
    Like groupthink theories, the Abilene paradox theory is used to illustrate that groups not only have problems managing disagreements, but that agreements may also be a problem in a poorly functioning group.[7]”

  19. Pingback: Gnon and Elua | Free Northerner

  20. Troy says:

    In some ways, Scott’s proposal here seems to me an inversion of the classic Christian story. Scott wants to kill the God of this world (= Satan), whereas the Christian God triumphed over Satan not by killing him but by letting himself be killed by Satan in order to expose Satan for what he was — demonic — and break Satan’s spell over human beings.

    The Christian ethic supports conquering Moloch not by Moloch’s own methods (violence and coercion), but by defeating his hold over us through rejecting those methods.

  21. huuuurgh says:

    This post makes me unhappy. Even though I… think that I benefit overall from reading it. I wish that I didn’t.

  22. An Anon says:

    “Today we face the monsters that are at our door, and bring the fight to them. Today, we are cancelling the apocalypse!”

    Sorry Scott, I couldn’t resist. This quote was going through my head as I read the post.

    (It’s from Pacific Rim, in case anyone was wondering.)

    • Nornagest says:

      “We will snatch purpose from the jaws of futility. Are you ready to wreak some havoc, John?”

    • kappa says:

      …Yeah, come to think of it, Pacific Rim is totally a religious text about the triumph of Elua.

      I mean, it isn’t, but it is.

    • Multiheaded says:

      Confession: I have grown to dislike so many things about nerd culture, but this in particular has really grown old. The whole “HFY” thing ought to have been a passing trend, peaking with something like the Gurren Lagann movies. I get that it was itself an overreaction to the blandness of humans-as-default, but now I feel that it’s just derp.

      Settings with galactic-scale conflicts that I like, from Warhammer 40k to Sword of The Stars (play it!), manage to make humans interesting special snowflakes while retaining some, I dunno, sense of balance, not just semi-arbitrary circlejerking.

      (Ok, so the humans in SoTS might not be that unique in terms of design – the whole canon is really ambiguous, so you can never tell how much of a scrappy underdog vs. big expansionist empire they’re supposed to be – but they are very much a “challenging” faction rather than a “default” one in gameplay. End entirely irrelevant digression.)

      • Matthew says:

        really grown old

        I think this is sort of missing the point in the case of Pacific Rim, which was clearly intended as an appeal to nostalgia among a certain demographic. Even some of the flaws of the film were clearly intentional nods to genre.

        • Multiheaded says:

          I’m talking 80% not about the media but about the fandom and the general annoying (to me) attitude that it feeds through osmosis. Really, people you can have a thousand different varieties of “It’s cool and neat to be human” (like, on the low-power/high-squishy end, the nerve-wracking narratives generated by X-Com; no, not the godawful newfangled one, the 1994 one) without going ZOMG I PUNCH ALEINS WITH MY DONGER LOL, ALL I NEED IS [MURICA/日本].

          The Salvation War thing was just an especially obnoxious recent offender.

      • rsaarelm says:

        Confession: I have grown to dislike so many things about nerd culture, but this in particular has really grown old. The whole “HFY” thing ought to have been a passing trend, peaking with something like the Gurren Lagann movies.

        …because non-nerd culture is so full of interesting inversions of human exceptionalism?

  23. J. Quinton says:

    There is really only one god above all of these other gods. His name is Death. And there’s only one prayer we offer to Death: Not today.

    I suppose if we craft our own transhumanist/FAI god, this god might be powerful enough to take on Death. And in some strange eon even Death may die, taking Cthulhu et al. with him.

    (This comment brought to you by mixing pop culture references)

  24. Thomas Jefferies says:

    Starting from VIII I was mentally cheering. Yes! Yes! The Culture is the alternative and I’m seeing someone else come to that conclusion without referencing The Culture and all of its strongly god-like artificial intelligences!

    This is going to be an essay I come back to when discussing these topics in the future.

    I, for one, welcome our new gardener the Mind of the General Systems Vehicle Sense amid Madness, Wit amidst Folly.

    • Bugmaster says:

      I personally don’t have this unrestrained enthusiasm for the Culture that other people seem to exhibit. As far as I see it, in the Culture, humans (and other similar beings) essentially live out their lives as pets of the Minds. They are fed, housed, generally cared for and cherished; but they have absolutely no control over anything that really matters, and their lives have been stagnant for countless millennia.

      This isn’t necessarily a bad thing, mind you; it’s quite likely that humans simply can’t be trusted with anything important, and simply lack the capacity for self-improvement beyond a certain point. Still, it is at least a little sad.

      • Multiheaded says:

        Certainly there is some reason beyond the unpleasantness of it to have Special Circumstances operate as human-drone pairs, rather than just make perfect simulacra with the personality of a drone, the destructive capability of an E-Dust terror weapon and uplink to a Mind lurking in orbit. Consider Phlebas also briefly mentions the extremely rare not!psychic unaugmented humans who can be used to predict the future by sheer luck.

        Edit: heh, The Culture is probably the only universe where “the personality of a drone” means “independent and inquisitive”.

        • Bugmaster says:

          Certainly there is some reason beyond the unpleasantness of it to have Special Circumstances operate as human-drone pairs…

          AFAIK, such a reason was never given.

          When the Culture representatives come to your world, always remember: you’re not really talking to the human. You’re talking to that knife missile hovering right above her shoulder.

          Consider Phlebas also briefly mentions the extremely rare not!psychic unaugmented humans…

          Since they were never mentioned again, I don’t think they should count as canon, given how little sense the whole idea makes.

      • suntzuanime says:

        Better a pet than a cow or a dodo.

      • lmm says:

      What control do we have now over the things that really matter? Either a small number of humans at the top have it, in which case they might as well be Minds as far as the rest of us are concerned. Or it’s something that emerges like Moloch from the sum of all our individual behaviours, in which case, again, might as well be Minds. Or we have some diffuse fractional democratic amount of it, in which case we could just as well have that under a Culture-like system. (Have you been following To The Stars?)

      • Eli says:

        What is this “important” you speak of? What are the goals at which the humans of the Culture fail, and why do you believe that not only are they failing, but that you would fail if placed in the same setting?

        I don’t like how they favor drug use over proper Fun, but I find their drug use largely unobjectionable and stupid since I, placed in the same setting, could just go ahead and have proper Fun.

        Progress? Yes, any kind of proper Fun includes progression, growing as a person.

      Control? You are already but a pet of civilization, and your very dreams of “independence” or “freedom” are molded out of slaveholders’ sublimated revulsion at their own actions towards their human property. Interdependence is a simple ecological fact you cannot ever get away from, and I don’t see why you want to.

        Equality, and the dignity of being one of the most powerful interdependent components in the civilization? Yeah, sure. That aspect of the Culture could definitely be improved. But that’s heavy transhumanism: giving humans superhuman powers before we are wise enough to handle them is almost guaranteed to muck things up.

        “Really matters”? What is that, in this universe of atoms where the Sun is just another star?

  25. peterdjones says:

    On deicide and comparative religion.

    Atheists often see the God they disbelieve in as a finite superbeing; some even fantasise about deicide.

    This is not the God of any remotely sophisticated kind of monotheism…in fact the idea of gods as humans writ large is more characteristic of polytheism.

    “Monotheism’s God isn’t like one of the Greek gods, except that he happens to have no god friends. It’s an utterly different kind of concept.” (Oliver Burkeman in his review of David Bentley Hart’s The Experience of God, http://www.theguardian.com/news/oliver-burkeman-s-blog/2014/jan/14/the-theology-book-atheists-should-read)

    As Hart writes:-“…according to the classical metaphysical traditions of both the East and West, God is the unconditioned cause of reality – of absolutely everything that is – from the beginning to the end of time. Understood in this way, one can’t even say that God “exists” in the sense that my car or Mount Everest or electrons exist. God is what grounds the existence of every contingent thing, making it possible, sustaining it through time, unifying it, giving it actuality. God is the condition of the possibility of anything existing at all.”

    How could you commit deicide against the philosophers’ God, the Ground of Being? That would be like the characters in a novel killing the author. Indeed, even if the Author’s characters rebel against Him, that is his decision.

    “And We shall turn their hearts and their eyes away from guidance, as they refused to believe therein for the first trespass to wander blindly.” [Al-Qur’an 6:110]

    “Then Allah sendeth whom He will astray, and guideth whom He will. He is the Mighty, the Wise.” [14:4]

    This theological principle finds full flower in the Hindu doctrine of Lila, or Divine Play.

    • anon says:

      Scott was only being poetic. For more than one reason, he does not literally believe God can be killed.

    • Nornagest says:

      I think you’re doing polytheism a disservice here. From a modern perspective, we see (e.g.) the Greek gods mostly in the context of poetry and folktales, and so we imagine that they were understood as we understand the characters in those stories — when scenes like e.g. the Judgment of Paris were actually about as relevant to the content of Classical religion as the scene of God and the Satan bickering over Job’s faith is to Judaism or Christianity.

      Classical esotericism got quite sophisticated.

    • Armstrong For President 2020 says:

      Greek gods didn’t look much like “greek gods” either.

      Hinduism is illustrative here, since it’s pretty much the only old-style Indo-European pagan religion left; while you can make a “finite human-like entities having zany adventures” story out of the myths if you really want to, talking to a guru or theologian for more than five seconds will disabuse you of the notion. The stories are colorful because they are supposed to be memorable, and their symbolism is not that hard to decode with a little cultural context.

      You’ll notice that when Traditional texts get translated into English the Father/King deity (Zeus[and sometimes Apollo/Cronos]/Jupiter, Odin/Wotan, Krishna, Ahura Mazda) always ends up rendered as God in the singular. This is not a mistake or instance of cultural imperialism, as these peoples viewed the chief regal deity as the central point from which the other gods/divine forces (Numina in the Latin) spring. The other lesser divinities were called on for intercessions in specific circumstances, like Christian saints, but only the paternal God presided over all activities. Judaism’s main innovation was to cut itself off from identifying Jehovah with the Gods of their hated neighbors, not any substantive difference in his power or authority relative to the others.

      Reading Greek mythology without an understanding of symbolism is just as illiterate as looking at the two creation stories in Genesis and deciding that dinosaurs couldn’t have existed because God didn’t set aside a day for velociraptors. It takes an almost willful obtuseness to ignore any meaning but the most crudely physical in a work explicitly describing the metaphysical.

      /rant

      • Erik says:

        You’ll notice that when Traditional texts get translated into English the Father/King deity (Zeus[and sometimes Apollo/Cronos]/Jupiter, Odin/Wotan, Krishna, Ahura Mazda) always ends up rendered as God in the singular. This is not a mistake or instance of cultural imperialism, as these peoples viewed the chief regal deity as the central point from which the other gods/divine forces (Numina in the Latin) spring.

        This assertion did not comport with my memory of reading Viking literature when I was younger, and a brief search on the Internet indicates that it is incorrect at least regarding Odin.

        Younger (Prose) Edda, translation by R. B. Anderson, 1879: “Odin”.
        Elder (Poetic) Edda, translation by H. A. Bellows, 1936: “Othin”.

        I find the rest of your rant dubious too.

        • Nornagest says:

          The Eddas aren’t true primary sources on Norse religion. They’re thought to be accurate in broad strokes as regards the folktales and beliefs of the pre-Christian Norse; but the Younger Edda, and the oldest known sources for the Elder, were both written in the 13th century, after Christianization, and by Christians. The traces of this are pretty obvious: for example, Snorri Sturluson in his prologue to the Younger Edda goes on a long digression about how the stories he’s writing down aren’t really about gods, as might be guessed from the content, but rather ancient chieftains that acquired a divine following over the years. It’s to be expected that the capital-G “God” wouldn’t be used by their authors: that namespace is occupied.

          This is a persistent problem in the analysis of pre-Christian North European religion. Even in the cases where we have substantial sources of close to the right age (often we don’t), they’ve almost without exception suffered from Christian interpolation or rationalization at some point, and it can be quite difficult to figure out what’s original. There’s even some debate over whether some rather famous figures existed in the original myths or were interpolated in similar ways; Balder for example looks suspiciously Christlike in places. (Not that the dying-and-rising god is all that uncommon in Indo-European myths.)

      • peterdjones says:

        I’m not sure who this rant is aimed against. The Burkeman/Hart argument is that atheists are strawmanning monotheism by seeing it through polytheistic glasses. That they are getting polytheism wrong too only buttresses the point.

        Also, it’s not difficult to see why they are metaphor-blind. They read sci-fi, and sci-fi is essentially larger-than-life but finite entities having amazing adventures, taken literally. The point is not “what do the six arms mean?”, but “six arms! Awesome!”

        • Armstrong For President 2020 says:

          It’s not directed at you, not really; it’s just that there’s a specific kind of dismissal which is empowered by this sort of mythological obtuseness.

          For example, let’s say you read the Bhagavad Gita and it says, “I [Krishna] am the source of all things, and all things emerge from me; knowing this, wise men worship by entering my state of being.”

          To really understand this concept, that there is a central originating force in the cosmos which men can emulate, and then to put it into practice is arguably the ultimate achievement of Yogic wisdom. The language challenges you, like Arjuna, to allow Krishna to steer you onto the path to victory by first conquering yourself. To understand the essence of millennia of hard-earned knowledge, common to stoics, yogis, monks, and hundreds of other traditions all across Eurasia.

          Or you can giggle, say “what, everything comes from the big spooky guy covered with mouths?” and go back to wallowing in your own weakness and ignorance.

          It’s not as if most of those people were about to leap at the call to transcendence anyway, but it’s certainly an unnecessary additional barrier.

        • peterdjones says:

          I agree that there is massive miscommunication, but it is not all on one side. I don’t think you can blame anyone for being confused by Rowan Williams style metaphor-piled-on-metaphor. Typical religious leaders can’t cash out the metaphors because they are bureaucrats and community leaders, not philosophers or mystics.

        • Armstrong For President 2020 says:

          Yes, though one of the benefits of that sort of layered allegory is that different people can engage it on their own levels of understanding.

          Even in the clergy not everyone needs to be a theologian; after all, the same applies in the lay sciences as well or else it would be impossible to change your oil without a half-dozen doctorates to your name. As long as ordinary people can engage the symbols on even the most basic level they can at least take away enough to participate.

  26. no one special says:

    Scott, I enjoyed this essay wholeheartedly. I was all like, “fuck yeah!” Then I read the comments, and remembered that you are a MIRI-ist, and was totally disappointed.

    It drives me batty that there are a set of people who see the problem, but, rather than work to solve it, are throwing their time and energy away on a fairy tale of AI. 🙁

    • Scott Alexander says:

      Um, the entire end of the article is about creating a superintelligence. What plan do you have to create a superintelligence that doesn’t involve AI?

      Also, I can see your email address and it suggests you might be in my area. Do you attend Michigan meetups? Would you like to?

      • No one special says:

        2nd part first; I’m relatively nearby. I have not attended Michigan meetups, but I would like to, someday. Scheduling sucks, because reasons, but I pay attention. Hopefully I’ll get out to one some day.

        1st part second: Something that coordinates the lives of individuals is a third order organism*, like a pack, tribe, government or corporation. Such a being need not be intelligent. Third order life already exists, it’s just not very good at controlling its component individuals. It’s only a matter of time before individuals become strongly bound inside a third order life form, and unable to leave, same as your liver cells can’t defect from your body. (Bad analogy. I am a programmer, not a doctor.)

        I’m very skeptical about “foom” arguments, and I expect strong AI to be 200-1000 years away, not 40-100, which calls for a different timescale. We have time to wait on AI, but _right now_ we need Friendly corporations, governments, cities and tribes. I expect AI to be a slow, evolutionary process, and “weak” AIs to be used in the service of third order life effectively to control their component individuals long before the AI is as smart as a pigeon, let alone superintelligent.

        We’re in crunch time because right now we’re smarter than the existing third order life. Third order life friendliness is way more important than AI (while potentially also improving life incrementally as it’s adopted).

        The underlying problem is a world ruled by Moloch, but Elua will be a properly functioning government, not an AI. Sadly, I do not have a MIRI to point to to collect donations. (MaydayPAC, maybe? Not sure.)

        tl;dr: Don’t waste money on AI, waste it on fixing society.

        (I seem to come to very different conclusions than other people; (See also my ideas on social justice. (You can’t see them, as I have not managed to successfully communicate them to the few people who will still talk to me about that, let alone to strangers. Sorry)) I’m not sure if I’m crazy like a fox, or just crazy.)

        *First order life is single-celled. Second order life is multicellular. Third order life is multi-individual — societies, families, companies.

        • Meredith L. Patterson says:

          I seem to come to very different conclusions than other people

          No, that’s pretty much my conclusion as well; I’m just focused on silicon-based distributed systems right now because they’re so much more tractable than the protein-based ones. (Decidable problems are priceless; for everything else there’s heuristics, and when those inevitably fail, there’s Mastercard.)

          I pay a lot of attention to the protein-based kind, though, especially how they fail.

        • No one special says:

          @Meredith: Do you know where my tribe is? I’d much rather be with them than out in the wilderness, wandering alone. (The LW-types seem like they might be my tribe, until they start talking about AI.)

        • Meredith L. Patterson says:

          @no one special: I’m not sure there is one, although I run into proto-tribes along these lines occasionally. The one I like best is notionally based in Germany, but its nodes move around a lot. For that one in particular, I suspect its strong preference against strongly binding second-order life forms has a lot to do with the fact that it woke up in East Berlin, but it probably also has quite a lot to do with the fact that the second-order life forms involved are all particularly focused on the minutiae of systems and how their interoperation fails as they grow more complex.

          Most of the people I encounter with this mindset satisfy both (is-a ‘hacker) and (grew-up-in ‘eastern-bloc), though not all.

        • no one special says:

          @Meredith: Memo to self: meet more German hackers.

          I suspect that binding of humans will be accomplished more by epistemic closure (the political one) than by physical location. The Internet is already more real than meatspace, now that everyone is on facebook.

          I’m left in this odd circumstance of hanging out with the LW-crowd, who seem to think right but have come to a pants-on-head-crazy conclusion,* or hanging out with people who don’t even really know what evidence is. I’d really like a group that knows how to think and shares my values, so that I don’t have to reinvent politics and ethics from scratch. I’m just waiting for someone to say, “that sounds a lot like the philosophy of _foo_, you should check him out.” Instead they round me off to the nearest stereotype and start arguing against a strawman.

          * Oh god, the headpants. The fact that lesswrong and CFAR are backed by MIRI-folks makes everything they do so suspicious. It’s like being offered a free psychological screening by the Scientologists. Is Bayesian reasoning _really_ better than naive reasoning, or is it just more likely to make you agree with MIRI? Teeth-grindingly frustrating.

      • Daniel Speyer says:

        What plan do you have to create a superintelligence that doesn’t involve AI?

        Well, I heard this plan involving prediction markets, ordinary supercomputers and a mathematically rigorous definition of “good”…

        I’m pretty sceptical of prediction markets, and even more of absolute monarchs, but in general designing a government Moloch can’t take over seems like a problem worth trying to solve. Sort of like how second-price auctions remove a whole category of dishonesty.
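
        For readers who haven’t run into the auction-theory aside before: in a sealed-bid second-price (Vickrey) auction the highest bidder wins but pays only the second-highest bid, which is the textbook reason truthful bidding becomes a dominant strategy. A minimal sketch, with made-up bidders and numbers purely for illustration:

        ```python
        # Sealed-bid second-price (Vickrey) auction: highest bid wins,
        # but the winner pays the runner-up's bid. Names/values are made up.

        def second_price_auction(bids):
            """bids: dict of bidder -> bid. Returns (winner, price_paid)."""
            ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
            winner = ranked[0][0]
            price = ranked[1][1]  # second-highest bid sets the price
            return winner, price

        print(second_price_auction({"alice": 120, "bob": 95, "carol": 80}))
        # -> ('alice', 95). Shading or inflating your own bid can't improve
        # your outcome, which is the category of dishonesty the rule removes.
        ```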

  27. Pingback: The Miracle « Calculated Bravery

  28. pneumatik says:

    Some scattered thoughts on this post, which I really enjoyed.

    One idea Scott just touches on is what happens when incentives get out of whack. People will do whatever they’re incentivized to do, and how to incentivize them appropriately is a field of active economics research.

    I don’t see how a monarchy of some sort solves the coordination problem (I’m not sure this is something Scott is saying, but I think he is). Getting everyone to follow a monarch so that ze can solve everyone’s coordination problem is itself a coordination problem. Real-life monarchs and equivalents – dictators, presidents-for-life, etc. – spend a lot of time maintaining a power base of the most powerful people in the society they rule. This group has to coordinate to keep all the members happy, including keeping the monarch in charge. Each of the powerful people in society has their own power base of slightly-less-powerful people that has a similar dynamic. And so on – it’s turtles all the way down.

  29. Multiheaded says:

    ATTN

    Ialdabaoth, I feel an unrelenting need to contact you and discuss some things, which on my end are going to be very private and in long-form. Based on some of your previous comments here and the specific intersection of interests/obsessions you’ve displayed, I conclude that you’re likely to be the most sympathetic ear for what troubles me specifically. Input on your part would be largely optional. We would establish the particulars in private. We’d also figure out a way for me to recompense you.

    If you’re curious, please leave a mail address of some sort here!

    • Ialdabaoth says:

      brent {dot} j {dot} dill {at} gmail {doge} com

      Be careful; you will get my actual beliefs and opinions.

  30. Martin-2 says:

    The libertarians I read tend to teach law or economics, so it’s very surreal to me to see you characterizing Libertarians as being less likely than average to get externalities and have a theory for dealing with them. Are there major Libertarian voices who actually don’t understand Pigovian taxes?

    • Charlie says:

      A division of libertarianism that I just made up is libertarian economists vs. libertarian politicians. The thing that politicians push is always somewhat different from what academics take seriously. Proposing taxes – even taxes on bad stuff – would contradict the highly-salable political message of government as bad, taxes as bad. Consistency on the deontological level pushing out consistency on the consequentialist level, in order to better sell the message.

  31. Pingback: You Shall Be as Gods | Anarcho Papist

  32. Joe says:

    Elua sounds like the kind of god that would damn zer followers to a living hell of sophisticated hedonistic insanity.

  33. What I do after a long and excellent post like this is mull over it for a little while. Of course the result of this is that I am coming in after over 500 other comments, so nobody will read this.

    Nonetheless: what you really need to wonder is whether God is Moloch or Elua. Because if God is Moloch, you’re screwed. Your FAI project will fail. Your resources will run out. Your children will disavow you. Your values will perish. There is no way out.

    How about some mystical theology?

    Then the Lord saw that the wickedness of man was great in the earth, and that every intent of the thoughts of his heart was only evil continually. And the Lord was sorry that He had made man on the earth, and He was grieved in His heart. So the Lord said, “I will destroy man whom I have created from the face of the earth, both man and beast, creeping thing and birds of the air, for I am sorry that I have made them.” But Noah found grace in the eyes of the Lord.

    When your memetics get sufficiently perverse, God is not above a hard reset and starting over from better stock.

    And they said, “Come, let us build ourselves a city, and a tower whose top is in the heavens; let us make a name for ourselves, lest we be scattered abroad over the face of the whole earth.”

    But the Lord came down to see the city and the tower which the sons of men had built. And the Lord said, “Indeed the people are one and they all have one language, and this is what they begin to do; now nothing that they propose to do will be withheld from them. Come, let Us go down and there confuse their language, that they may not understand one another’s speech.” So the Lord scattered them abroad from there over the face of all the earth, and they ceased building the city.

    You cannot beat God with your technology. There is always a way to undermine you, and God will find it.

    Both of the above are compatible with both Moloch and Elua. The point is that whatever Gnon wants, Gnon gets, without exception. You may persist for a little while in defiance of his purposes, but judgement always comes. And this is why I cannot endorse the transhumanist project: because it is doomed to fail. God is Gnon is God, and there is no way out, no way to get above him, no way to tear him down from heaven.

    You just had better hope that Gnon is Elua and not Moloch.

    Now is the judgment of this world; now the ruler of this world will be cast out.

    The Spirit and the Bride say “Come.”

    • ADifferentAnonymous says:

      This seems to be committing the fallacy I’ve (almost certainly re-)discovered; someone here probably knows the real name, but I might call it the ‘fatalist fallacy’. It goes like this:

      You reason that all of your actions and fate are predetermined by your genes and your environment, therefore you can’t change your fate by willing things, therefore you might as well stop willing things. You do so, and you sit there until you starve to death.

      Essentially the fallacy is taking a view of the system that correctly excludes outside agency and then failing to consider that your decisions might be inside agency.

      In this case, what’s to say the transhumanist movement isn’t the hand of Elua? As I read it, your argument applies equally well to anyone in history hoping to improve coordination, and we know lots of them succeeded.

    • Andy says:

      And they said, “Come, let us build ourselves a city, and a tower whose top is in the heavens; let us make a name for ourselves, lest we be scattered abroad over the face of the whole earth.”

      This is one of a hundred little bits of the Bible that convince me that the Christian God, if He exists, is not worthy of obedience, let alone adoration, and should be overthrown at the earliest opportunity.

      • Read “UFAI” for “Tower of Babel” and get back to me.

        • Andy says:

          Was Tower of Babel going to have consequences equal to a UFAI? Or did God break His children into a hundred different factions for shits ‘n giggles? Or to protect His own position?

        • Allowing a species, group, or individual with corrupt values to achieve omnipotence (“now nothing that they propose to do will be withheld from them”) is generally considered a bad idea. Hence, UFAI.

        • Andy says:

          So why let God keep His omnipotence? Which, in practice, means letting whatever monkey has the miter that proclaims him the Voice of God this week rule the roost.
          Either we get past natural limits – maybe harvest resources from other universes to get around entropy or something – or we die trying. It’s not like we’re going to get out alive, and Heaven and Hell (particularly Hell) are nothing but sick little lies from the old wormy books.

        • Now gird up your loins like a man,
          And I will ask you, and you instruct Me!

          Like I said, you’d better hope that God is Elua and not Moloch.

      • Andy says:

        You quote the Bible at me; precisely what am I supposed to get out of a questionable translation of some ancient shaman’s mushroom-trip, that somehow along with a bunch of other folktales and legends ended up on top of the memetic dogpile?
        I’m certain that the Christian God, if He exists, is closer to Moloch than Elua – things like the genocide of the Canaanites at the hands of his “Chosen People” leave me with no doubt there, leaving the Book of Job looking like a little boy pulling wings off flies by comparison. This post is about dragging him off his throne and killing him. And building our own Elua/Buddha to put in his place.
        You say God can’t be overthrown, can’t be deposed – the same stupid argument made about every idiot/sadistic/corrupt dictator in history, right up until the moment they were defeated in battle, or stabbed in the back, or dragged to a guillotine and made a foot shorter. If you were living under a Fnargl with the personality of a Qaddafi or a Saddam Hussein or Pol Pot or Hitler, would you bow down with a “Can’t overthrow him, might as well live with it?” Or would you try anyway? You’ve argued we’re going to fail and die anyway, what do we have to lose by trying to change that?
        Congratulations, Mai, you have with your arguments made me more of a supporter of MIRI and the quest for a Friendly AI than I was an hour ago. Good job.

        • Randy M says:

          And the Lord said, “Behold, they have hastened the arrival of Our replacement the next installment of a Harry Potter Fanfic. Guess it’s time to head on over to some other universe and leave them be.”

        • Andy says:

          Is this a bad place to mention that I can’t stand Harry Potter fanfic in general, and dislike everything I’ve heard about HPMOR? Would that get Bad Things to happen to me?
          Let’s see!

        • You have the fundamentalist’s twin diseases of literality and moral condemnation. I was sorely tempted to just quote this, but I have relented and will speak plainly.

          I offered the stories of the Great Flood and the Tower of Babel as parables of the fact that God/Gnon/Moloch is really, really hard to beat, as you are quite literally playing by his rules, in his court, and under his supervision. Your memes get too out of whack and WHAM your entire civilization gets destroyed. It probably won’t literally be because the waters of the firmament pour down and cover the entire face of the earth, but it’s entirely possible that your destruction will be just as sudden and just as astounding as if they had. The Tower of Babel suggests the same thing about technological projects.

          Can you fight this? Yes. Can you get incrementally higher, bigger, better? Yes. Can you build a tower to heaven, bind Moloch in a tomb beneath the earth, and lift Elua to godhood? Um. There are a bunch of reasons why I think you cannot. (With regards to the transhumanist FAI project in particular, I do not think that intelligence can be increased indefinitely, and I’m quite sure that there are no sources of free infinite energy, and pretty much all descriptions of the post-singularity FAI that I have read covertly assume infinite intelligence and infinite energy, which makes them all fantasy.)

          Allow me to quote something which is not the Bible:

          Let there be a little country without many people.
          Let them have tools that do the work of ten or a hundred,
          and never use them.
          Let them be mindful of death
          and disinclined to long journeys.
          They’d have ships and carriages,
          but no place to go.
          They’d have armor and weapons,
          but no parades.
          Instead of writing,
          they might go back to using knotted cords.
          They’d enjoy eating,
          take pleasure in clothes,
          be happy with their houses,
          devoted to their customs.

          The next little country might be so close
          the people could hear the cocks crowing
          and dogs barking there,
          but they’d get old and die
          without ever having been there.

        • Doug S. says:

          Is this a bad place to mention that I can’t stand Harry Potter fanfic in general, and dislike everything I’ve heard about HPMOR?

          There is no accounting for taste.

  34. chaosmage says:

    This is certainly the best writing I’ve ever seen outside of print, and that includes HPMOR. I firmly intend to buy any book you ever write.

    Due to sheer awe, I find myself temporarily unable to respond with anything but praise. Once this subsides, I shall attempt to help address the issues you are raising.

  35. Alejandro says:

    The wonderfully deep and thought-provoking ideas of this post mesh very well with those of the Thrive-Survive Theory of Politics post. (When Scott has some time, he should write a book elaborating all these ideas in a coherent whole and exposing them in a way people outside the LW-sphere can understand!)

    My meaning: rightists are generically those who feel the hot breath of Moloch at their backs, and are willing and eager to sacrifice portions of Elua to Moloch in order to placate him and let other parts of Elua (hopefully) survive. For example, compassion for the needy at the social level must be eliminated, or its incentives will create an army of state-fed beggars that destroys civilization itself.

    Leftists feel secure in our age of large excess resources and see no need to sacrifice compassion, sexual freedom, anti-militarism, and other components of (their) Elua. The more naive leftists have forgotten Moloch so completely that they do not even comprehend the philosophical basis of rightism, attributing it only to greed and hatred. The mature leftist position is to bring to full light the phantasm Molochs that the rightist fears, and expose them as (mostly) relics of the past we need not fear in a modern age of technological plenty; the Molochs that we should fear most instead, like runaway nanotech and UFAI, are entirely novel and not controllable by rightist politics. (Other milder Molochs we face, like environmental damage or exploitive capitalism, are controlled by exactly the opposite of rightist policies.)

    • Alejandro says:

      I used “leftists” meaning generically those on the Thrive side of the Thrive-Survive spectrum, encompassing both liberals and true leftists.

      • Oligopsony says:

        I’m skeptical that said spectrum deals with the latter particularly well.

      • Multiheaded says:

        There seem to be some Nice!Ancaps, like blacktrance here, who are very, very thrive.

      • blacktrance says:

        Thanks, though I’m not an ancap, just a libertarian, and not even a hardcore one at that.

        Though this reminds me of how my left-libertarian friends say that Communism is to the right of libertarianism. I think there’s a grain of truth to that. Left-right and thrive-survive don’t align easily. Thrive-survive is more like pro-tech/rationalist progressives and culturally liberal libertarians at one end and Stalinists, theocrats, and reactionaries at the other end.

      • Oligopsony says:

        Leninists of most sorts combine “it is necessary to be disciplined and make sacrifices for the future” with hostility towards ascribed status, hence why they’re poorly handled and not merely on its “right.”

        (I agree that many libertarians are Thrive.)

        ((Honestly I just think Haidt’s theories and everything that smells of them are really dumb, but that’s a subject for another time.))

    • nydwracu says:

      The mature leftist position is to bring to full light the phantasm Molochs that the rightist fears, and expose them as (mostly) relics of the past we need not fear in a modern age of technological plenty

      YOU ARE THY BRAHMINMAN! Ye find yeself in yon America. Ye see a BUBBLE. Behind ye BUBBLE is a CRATER. Obvious exits are EAST, WEST, and STRATIFY. What wouldst thou deau?

      PS: Sam Francis lived in a banlieue. Tim Wise does not live in a banlieue. If you want to understand Sam Francis, understand that he lived in a banlieue; and if you want to understand Tim Wise, understand that he does not live in a banlieue. Technological plenty those lived experiences, brobama.

      • Multiheaded says:

        The majority of radical leftists (not radical leftist *theorists* specifically) throughout the last couple centuries probably spent a good deal of time in banlieues. Orwell started out as basically a vulgar class tourist.

        (If Orwell was locked into a room with Sam Francis and Lookatthisfuckingoppressor Dot Tumblr Dot Com and given a gun with one bullet, he’d probably go all the way and strangle the survivor with his bare hands.)

      • Oligopsony says:

        I agree wholeheartedly with the plan of handing over political power to those who have lived in banlieues. Some Type I errors, like Mr. Francis, are a perfectly acceptable price to pay.

        (Likewise those who attribute their opposition to communism to having lived under it, &c.)

  36. anon says:

    It strikes me that Moloch could be considered as system 2 and Elua system 1. I don’t think Moloch is necessarily evil. Self-improvements of all sorts require sacrifice. Is there any distinction that lets us endorse those sacrifices while rejecting Moloch? Or does worship of Elua require that self-improvement be abandoned and that you live in the moment forever?

  37. Rational says:

    Some parts were interesting, but your central point is wrong. Competition and optimization won’t necessarily kill those values. The capacity to care about those values only developed as a result of competition and optimization between humans and human groups. The only goal is differential reproductive success, and economic success is only a proxy for that.

    The malthusian past didn’t destroy those values, why should the malthusian future? And for fuck’s sake, dispense with the stupid names for the Gods; it makes the essay harder to understand, and leaves the impression that you’re saying something profound when it’s completely banal.

    • Nick T says:

      The malthusian past didn’t destroy those values, why should the malthusian future?

      Answering this is like the entire point of parts IV and especially V.

    • lmm says:

      Or, worse, gives the impression you’re talking mystical claptrap when you actually have a valuable insight.

  38. Ialdabaoth says:

    Our world is sick, boy. Very sick. A virus got in a long time ago and we’ve got so used to its effects, we’ve forgotten what it was like before we became ill. I’m talking about cities, see? Human cultures were originally homeostatic, they existed in a self-sustaining equilibrium, with no notions of time and progress, like we’ve got. Then the city-virus got in. No one’s really sure where it came from or who brought it to us, but like all viral organisms, its one directive is to use up all available resources in producing copies of itself. More and more copies until there’s no raw material left and the host body, overwhelmed, can only die.

    You can fight the city virus, but first you must become Invisible.

  39. Kevin says:

    This is the best thing I’ve read all month.

    If it were just another essay on the choices that competition will force us to make… well, it still would have been pretty good. But having it compared to Ginsberg, who I had never read before and, at a first pass, thought was creepy but incomprehensible, and then having that explained?

    This is a lesson that will stick with me.

  40. Sean Pearcce says:

    Interesting theological note towards the end about Moloch that resonates rather well with a collection by a Dominican friar I’ve been reading on and off (‘God Matters’ by Herbert McCabe, which everyone should read); McCabe argues, coming from a Christian perspective, that the history of the One True Religion is the history of God (big G) freeing us from our self-created bonds to the gods (little g), to the literal and metaphorical ‘gods of this earth.’
    In John’s Gospel he is called ‘The Prince of the World,’ but we might as well call him HRH Prince Gnon. (As a point of fact, the source of the acronym is ‘God of Nature, or (more simply) Nature’.)

  41. Multiheaded says:

    Real holy laughter in the river! They saw it all! the wild eyes! the holy yells! They bade farewell! They jumped off the roof! to solitude! waving! carrying flowers! Down to the river! into the street!

    “In this immanent moment, I am eudamonically euphoric. Not because of some phony kyriarchally constructed intelligence. But because, I am enlightened by the blessing of VALIS.”

    Eh?

  42. Eli says:

    Scott, I will respond later when I’m not quite so filled with rage at you for completely ignoring the entire history of left-wing political thought. Or perhaps just being ignorant of it.

    Suffice to say: did you know Moloch is actually strongest in the United States of America and in China compared to almost any place else on Earth? And now even the Americans and Chinese are turning on him.

    Unfriendly AIs, yes, we should be afraid of those. But deifying the pathetic, small, sick and stupid piece of shite that is American-style neoliberal capitalism, while ignoring the vast and growing territories held by, as you call It, Elua?

    Get out.

    • Multiheaded says:

      Nope. This is where I would disagree and insist that you HMC. The situation appears very grim if you see the technological and political trends of the last two decades as connected. Neoliberalism is probably transitioning from a locally-corrupt, democracy-exploiting strategy of political and administrative opportunism to an alliance with the revolution in the mode of production.

      Things are indeed looking good for China and some others… in the short term… but what about the destruction of the American middle class?… the growing desperation of First World labour aristocracy (as featured many times in the very comments on this blog!)… the possible disruption of Africa and SE Asia’s ascension to China’s old “global workshop” tier as robots replace the global slave class… Europe’s stagnation and dysfunctionality… the fairly unpleasant and unpromising detours of Bolivarianism… the whole of the Middle East and Central Asia, and the many ways things could get worse if the grasp of the US either weakens further or tries to reassert itself…

      Hell, for just one telling example, consider what the Wicked Witch of the West said in the ending of The Wizard of Oz: “New Labour is my greatest achievement! Mwahahaha!” Have we seen anything that would prove her wrong, either locally or wrt the West’s political transformation in general?

      (I’m not going to talk about the fortunes of my country, as it’s my grandma’s birthday today and I don’t want to show up piss-drunk, which would be the immediate result of any contemplation now.)

    • Oligopsony says:

      Scott, I will respond later when I’m not quite so filled with rage at you for completely ignoring the entire history of left-wing political thought. Or perhaps just being ignorant of it.

      In the past Scott has demonstrated both total ignorance of the history of left-wing thought and a willingness to learn more, possibly contingent upon a lot of time falling down from the sky on him (which is understandable.)

    • Fronken says:

      Suffice to say: did you know Moloch is actually strongest in the United States of America and in China compared to almost any place else on Earth? And now even the Americans and Chinese are turning on him.

      Unfriendly AIs, yes, we should be afraid of those. But deifying the pathetic, small, sick and stupid piece of shite that is American-style neoliberal capitalism, while ignoring the vast and growing territories held by, as you call It, Elua?

      Get out.

      I literally burst out laughing at this. People are looking at me funny now.

      Listen, have you seen any politics recently? I assure you, very little territory is held by Elua. (And the US is … actually better off than most of the world – hence the term “first world country.” Depressing, huh?)

      As Scott said, Elua has currently bound more of Moloch than ever before – with democracy, capitalism, and technology. But that isn’t the same as having much power of her own; Moloch is the strong one, and the instant his bonds loosen, he will be free. Already, she can only lightly steer his actions; many of them are to his benefit, not hers.

      • nydwracu says:

        Yeah, and then these so-called worshipers of Elua keep sacrificing our babies to gain power, and screaming about how they wish they could throw all of us into the volcano.

        • Ialdabaoth says:

          Yeah, and then these so-called worshipers of Elua keep sacrificing our babies to gain power, and screaming about how they wish they could throw all of us into the volcano.

          Protip: If ANY babies have to be sacrificed, that’s not Elua, that’s Moloch.

          And whenever you find yourself in a position where “we have to sacrifice these few babies so that these other babies can survive”, Moloch has won.

          Moloch wins a lot.

        • Oligopsony says:

          I believe Wes is counting fertility suppression as child sacrifice here, so really anything other than tiling the universe with babies is an instance of the bloodstained altar.

          (Poor Moloch! He can’t be appeased with a mere 99.999% of the universe being tiled with nonbaby.)

        • nydwracu says:

          nah

          something something identity dynamics something something kicking kulaks blah blah blah defined-against

    • lmm says:

      You’ve identified that there’s a large inferential distance here, so why do you think your scorn will be convincing? I look forward to the actual arguments, because if all you have is laughter and anger then I’m inclined to think that “entire history of left-wing political thought” is, uh, bollocks.

  43. Pingback: Yowl | Decoctions

  44. Abolition says:

    Everything the human race has worked for – all of our technology, all of our civilization, all the hopes we invested in our future – might be accidentally handed over to some kind of unfathomable blind idiot alien god that discards all of them, and consciousness itself, in order to participate in some weird fundamental-level mass-energy economy that leads to it disassembling Earth and everything on it for its component atoms.

    Am I the only one who would look forward to this? Look, sentimentally and aesthetically, I value humanity and a glorious future as much as anyone else, but I can’t escape the conclusion that creating morally salient beings is almost always a losing proposition.
    Can you wrong a nonexistent person by not bringing them into existence? I don’t see how that’s possible. Yet, I think many, if not most of the people here would agree that there are some beings whose suffering is so great that they would be better off dead. Why bring sentient beings into existence at all if it’s at best neutral and at least occasionally negative?
    Sure, humans typically value the creation of beings similar to ourselves, and we should take these desires into account. Yet how much should we weigh the wants of present and past generations against the boundless future? If we face even a possibility of the Hansonian nightmare, with an exponentially expanding population of miserable drudges, then death by Paper Clipper or gray goo swarm may be a mercy.
    I admit, there are some possible futures where creating sentient life would break even, futures where Elua satisfies absolutely everything’s values forever. But a stable singleton so well calibrated for human well-being, let alone for the well-being of all other life on this planet sophisticated enough to feel pain? That’s an awfully difficult target, and our flawed attempts to hit it may just lead to billions of years of continued suffering.

    Having Moloch burn out the last traces of sentience may be the best we can reasonably hope for.

    • Fronken says:

      “I admit, there are some possible futures where creating sentient life would break even, futures where Elua satisfies absolutely everything’s values forever.”

      That sounds a lot better than just “breaking even”.

  45. Thank you, Scott, for this post.

    You’re one of the very few people I’ve come across who actually address what the other person/side is saying without it devolving into a strawman, status-grab, agenda-push, community-build, or some other social or individual mode of failure. So even though I disagree with some of the examples you used to define Moloch (say, about capitalism inherently being a race to the bottom), I still find that at worst you’re mistaken, not malicious. You also write from an honest attempt to understand, not just from a caricature or strawman of the other side.

    That’s just to say that you don’t fail – which doesn’t seem like much, but in a world so full of fail, reading you is like breathing fresh air. This is a good post, and very well written. Yours is one of the few blogs I read regularly, and posts like this are a big reason why.

    A small point – given how profitable the humane versions of slavery were, I find it criminally tragic that the modern abhorrence of systems even resembling slavery prevents even completely benign forms of such a system from coming into place in places like India, Africa, and China. For instance: I would find donating $100 or $200 a month only a very small hit to my income, but it would pay for the food, lodging, and education of an impoverished child in India. (I pick India because that’s where I’m from, and because I know there are probably many high-IQ, high-potential children in poverty there, and because I know how much money is needed for food, lodging, and education.) If I could collect, say, 10% of that child’s income (above some defined threshold) after he becomes an adult (with him retaining the option of buying his way out by paying some pre-defined lump sum), it would be a win-win investment resulting in a modest profit for me and a very high quality-of-life increase for the child and subsequent adult. But the unenforceability of such contracts means that I (and millions of others like me) lose a heartwarming way of making money, and millions of children live in poverty and are denied opportunities. And there isn’t even any actual slavery involved.

    • Multiheaded says:

      A small point – given how profitable the humane versions of slavery were, I find it criminally tragic that the modern abhorrence of systems even resembling slavery prevents even completely benign forms of such a system from coming into place in places like India, Africa, and China.

      The Saudis would choke with greed and die upon hearing that the Late Roman laws on slave status and slave treatment were to be applied to their “guest” workers. Nah, (anti-)slavery legislation and the related exercise of force seem to do good only in the more progressive direction; Roman and Arab slave codes were a nice tech to pick up, sure, and yet I can’t think of a single example in history where the formalization of slavery in the more pro-exploitation direction led to reining in unlegislated violence towards slaves, rather than simply increased exploitation and a race to the bottom against other kinds of labour.

      • I mentioned completely benign because that’s what I meant – there is very little need for personal authority, geographic proximity, or personal interaction in the system I thought of. Perhaps I wasn’t clear – there is no actual “slavery” in what I’m talking about. So no Roman slave codes, no Arab slave tech. It’s closer to profitable charity than anything else, given how disproportionate the impacts on the two people are. I make a small profit, but some person (who it isn’t necessary for me to meet or even really know at all) has his life completely turned around.

        Here’s how I can imagine it working: an institution in India to which I give money, which uses it to educate a child, a percentage of whose adult income over a threshold is given to me. The adult retains the right to buy his way out of having to pay this percentage by paying a pre-specified sum, but if he doesn’t, that percentage of his income is all the claim I have on him. I give him nothing other than what I do, and cannot claim/get anything else in return, nor do I have any authority over him. (Authority while he is a child is strictly limited, and exercised not by me personally but by the institution.) Because the institution pools risk over multiple investors and multiple children, my individual risk is significantly lowered. Millions of bright kids get an education, I make a modest profit, everyone is better off. (This, BTW, isn’t just (or even primarily) for children who are destitute, orphans, or in some other way alone; it’s perfectly applicable to children from functioning families, too many of which simply don’t have the means to afford an education, and would happily take this deal, knowing that it’d leave all of them significantly better off. I can’t see the downside.)

        But because this (completely benign) contract isn’t enforceable, none of this happens.
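
        To make the arithmetic of that proposal concrete, here is a minimal undiscounted sketch. Every number in it (monthly cost, years of schooling, income threshold, adult salary, payout share) is a hypothetical placeholder rather than a claim about actual Indian costs or incomes, and it ignores inflation, default risk, risk pooling, and the buyout clause:

        ```python
        # Back-of-the-envelope model of the income-share idea described above.
        # All figures are invented placeholders, not real data.

        def investor_cashflow(monthly_cost=150, school_years=12,
                              share=0.10, threshold=3000,
                              adult_income=12000, working_years=30):
            """Return (total_invested, total_received), undiscounted."""
            invested = monthly_cost * 12 * school_years
            received = max(adult_income - threshold, 0) * share * working_years
            return invested, received

        print(investor_cashflow())  # (21600, 27000.0) with these placeholders
        ```

        Under these made-up numbers the investor ends up slightly ahead in nominal terms; discounting and enforcement costs would eat into that margin, which is exactly why the enforceability question matters.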

        • Multiheaded says:

          This is a bit like what European serfdom evolved into. Yet even in the modern world this is completely legal, relatively uncontroversial and widely practiced – by governments. You might pattern-match this to the infamous US student loan industry – but many other counterparts exist around the world, including the more “slavery”-like such as the whole US-military-as-an-institution-of-social-mobility thing. On the more authoritarian end, the GDR was particularly infamous even within the Eastern Bloc for aggressively steering and routing its young people’s education and careers beginning at high school level; I don’t recall whether Singapore has something like it. Now the problem is reduced to making it transnational, for-profit and NGO-based. …And this is where I would agree and lean libertarian, with the expected caveats about Molochean corporate-corruption-that-is-also-regulatory-capture.

          You don’t seem to be proposing any Proper Good Old Things that burn and bite our vile kind so, such as personalized and intimate client-patron ties that Prof. Robin characterizes as essential to the rightist jouissance of domination. I have no fundamental objections at all. Go right ahead.

        • Toby Bartels says:

          The U.S. student loan system has an option available to anybody (with certain categories of loans) that limits their payments to a percentage of their income (called the income-contingent repayment plan … not to be confused with the income-based repayment plan, which requires extenuating circumstances), at the cost of extending the repayment period from 10 years to up to 25 (which includes paying a larger total amount, because of interest … unless you still haven’t paid it off after 25 years, in which case the balance is forgiven!).

          This is not that far from F&C’s proposal. By putting a cap on the monthly remittance and a time limit on the whole deal, the proposal could be restructured as an interest-bearing loan with an income-contingent repayment plan and the possibility of income-contingent forgiveness, making it enforceable (unless perhaps the effective interest rate would be illegally high?).
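
          As a rough illustration of how such a plan caps the borrower’s exposure, here is a toy simulation; the 20% payment rate, 5% interest, flat income, and 25-year horizon are illustrative assumptions, not the actual federal repayment formula:

          ```python
          # Toy income-contingent repayment: pay a fixed share of income each
          # year, interest accrues on the balance, and whatever is left after
          # max_years is forgiven. All rates and incomes are made up.

          def income_contingent(balance, annual_income, pay_rate=0.20,
                                interest=0.05, max_years=25):
              """Return (total_paid, amount_forgiven)."""
              paid = 0.0
              for _ in range(max_years):
                  balance *= 1 + interest          # interest accrues first
                  payment = min(balance, annual_income * pay_rate)
                  balance -= payment
                  paid += payment
                  if balance <= 0:
                      return paid, 0.0
              return paid, balance                 # remainder is forgiven

          print(income_contingent(balance=30000, annual_income=25000))
          ```

          With a low enough income the yearly payment never outruns the interest, the loop runs the full 25 years, and the leftover balance is forgiven rather than compounding forever, which is the feature that keeps the arrangement from turning into open-ended peonage.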

    • Eli says:

      If you want to be altruistic, just donate the money for zero return. If you want to status-grab while actually profiting by putting people in perpetual debt peonage to yourself, go do it somewhere less gullible.

      • nydwracu says:

        Very few people want to be altruistic, but many want to invest.

      • Oligopsony says:

        There’s a joke about microloans in here somewhere.

      • roystgnr says:

        A constant return in a fixed time period compounds to become an exponentially increasing amount after a series of time periods. Wanting your altruism to be able to grow exponentially is not “status grabbing”.
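
        For concreteness, the compounding identity behind that first sentence, with a per-period return $r$ reinvested over $n$ periods (standard arithmetic, nothing specific to this thread):

        $$P_n = P_0\,(1 + r)^n,$$

        which grows exponentially in $n$ for any fixed $r > 0$, so a fund that reinvests its returns can, in principle, give away far more later than it could today.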

        I don’t know how Haidt concluded that “liberal” morality lacks a Purity/Sanctity axis. You just need to test for perceived-impurities like “profit” rather than “homosexuality”.

        You’re not actually an anarchist trying to parody the left-wing, are you? If you really equate “10% of income above some threshold” and “perpetual debt peonage”, you’ve basically written off every government on the planet as slavers.

    • roystgnr says:

      I seem to remember Thomas Sowell making a point like this about indentured servitude in one of his books. Letting Irish people work as near-slaves for their first few years in the US to pay back their travel debts looked too much like a step on a slippery slope to slavery, so it was banned, so during the famines anyone who couldn’t afford the travel out of pocket and didn’t have any collateral better than their own freedom got to starve instead.

      I can’t find any references to this argument, though, and it’s been decades since I read it, so my apologies to Dr. Sowell if I’m misquoting him and to everyone here if I’m misstating history.

      Your suggestion’s attributes (freedom to choose work, payment for longer term but at a lower rate) seem to be less prone to abuse than indentured servitude was, but that might be a failure of evil creativity on my part.

  46. Nick T says:

    Bertrand Russell again:

    “The savage, like ourselves, feels the oppression of his impotence before the powers of Nature; but having in himself nothing that he respects more than Power, he is willing to prostrate himself before his gods, without inquiring whether they are worthy of his worship. Pathetic and very terrible is the long history of cruelty and torture, of degradation and human sacrifice, endured in the hope of placating the jealous gods: surely, the trembling believer thinks, when what is most precious has been freely given, their lust for blood must be appeased, and more will not be required. The religion of Moloch — as such creeds may be generically called — is in essence the cringing submission of the slave, who dare not, even in his heart, allow the thought that his master deserves no adulation. Since the independence of ideals is not yet acknowledged, Power may be freely worshipped, and receive an unlimited respect, despite its wanton infliction of pain.”

    • Multiheaded says:

      Since the independence of ideals is not yet acknowledged, Power may be freely worshipped, and receive an unlimited respect, despite its wanton infliction of pain.

      This smacks of Freud/Lacan/drive theory, both denotationally and connotationally. Me likes!

  47. Charlie says:

    This idea, that some technological advancement can only be made in a sociopolitical system you have strong positive feelings about, raises many red flags.

  48. It is precisely unpopular ideas – unpopular not just among one group or another, or even a majority, but also with those in political power – that academic freedom was supposed to protect. Your argument as it stands can be applied, mutatis mutandis, to a royalist proclaiming that a professor subject to royal rule has far fewer restrictions on his speech than someone subject to a tribal warlord would. Both of you are absolutely correct, but I’m afraid both these arguments miss the point of what academic freedom protects.

    I find that talking about “progressivism” as something that gives and takes away freedoms turns it too much into an agent, and that muddles thinking. So here’s an object-level question: do you think it was right of the administration to do the things to “the kind Gottfredson lady” that she has outlined? Just to take a random instance, do you think it was right to:
    a) Make her (and only her) course not count towards a sociology major,
    b) Reclassify her research as non-research, purely to retaliate for complaining, and
    c) Lower her merit ratings?

    Is this sort of behaviour towards people you disagree with right? Aren’t these officials abusing their power?

    Jumping back to meta: I put quotes around you calling her “the kind Gottfredson lady” because I found it dismissive/disparaging/demeaning towards her, and wanted to draw attention to this but certainly not agree with it. I (and most other people here) don’t dismiss your lived experience, and I really think it would be nice if you’d extend the same courtesy to others, too, even those with whom you don’t agree.

    • Multiheaded says:

      Aren’t these officials abusing their power?

      Come on, apply some moldbuggery! All talk of “abusing power” is an attempt to legitimize a potential power grab, wrapped in Sklavenmoral! The more secure the Cathedral gets, the less incentive it would have to make those displays of dominance frequent and permanently damaging. And that, I claim, is quite what her account (and the things some other biodeterminist Heretics mention) amounts to.

      NB: I used to speak far more kindly and respectfully of Mrs. Gottfredson until further acquainting myself with the content of her views and her sociological imagination. So yes, I meant to convey that she might be an extraordinarily nice person who nonetheless has extremely skewed and Problematic perceptions of important matters that do not directly follow from mere biodeterminism. See e.g. the astounding ultra-simplistic picture of inequality that simply goes against most intuitions and/or disrupts the HBD-ish narrative on g.

      (Linking the interview again for reference.)

      P.S.:

      Your argument as it stands can be applied, mutatis mutandis, to a royalist proclaiming that a professor subject to royal rule has far fewer restrictions on his speech than someone subject to a tribal warlord would.

      Sorry, got lax with reading comprehension. You did mention the moldbuggy position, but don’t seem to have given it the weight that I and some reactionary sympathizers agree it deserves.

    • My only exposure to her actual work is through her much-linked paper “Why g Matters”. I started reading the interview, but haven’t finished it as it’s very long. I’ll continue reading it, but could you in addition tell me what views of hers you consider misguided?

      Further, there’s a very simple definition of “abusing power” that applies to university administrators, because unlike monarchs/politicians/civil servants, they’re not even supposed to exercise power, but are mere employees. Using institutional position to satisfy your own moral preferences when doing so explicitly goes against the job function for which you’re paid is as corrupt as doing it for the satisfaction of, say, monetary ends. Further, even assuming that security of power and freedom of speech are correlated positively, it doesn’t seem as if “the Cathedral” is getting more secure, as the punitive consequences of expressing the “wrong” views seem to have increased, not decreased, over the last few decades. So either “the Cathedral” is getting less secure, or the correlation doesn’t apply in this case due to some unknown cause (the most plausible being that “the Cathedral” is a religious and not merely secular Schelling point like power/money/prestige), or the original argument isn’t right.

      My point WRT the “Moldbuggy position” is simply that both yours and the (hypothetical) royalist’s arguments are wrong, for the same reason, in the same way. As I said before, they’re technically true but miss the point.

      Also, can you please substantiate the idea that Gottfredson is attempting to grab power? It certainly doesn’t look like that to me, and while it’s certainly possible (and in politics, probably even most likely) that many accusations of “abusing power” are simply power grabs dressed up in moral rhetoric, it’s not applicable to a particular case unless it’s shown that it was in fact a power grab. As I said, I don’t see it – could you show me how it is one? (I don’t subscribe to the idea that power can never be abused, BTW.)

      And yet you still haven’t actually answered my question. Do you think what those officials did is right or wrong? Do you think they abused their power or not? I’m not asking for how someone else’s arguments may or may not apply, but for what you personally think. (For reference, I think that they did. I like academic freedom, and I don’t care whether it’s “the Cathedral” or “the Church” or “the State” or “the King” or “the people” or really any “the whatever” that’s violating it.)

    • Multiheaded says:

      Also, can you please substantiate the idea that Gottfredson is attempting to grab power?

      No, no. If we consistently apply moldbuggery, YOU are in this moment attempting to stake out a space for your coalition’s own power grabs by Bravely Resisting Oppression. Moldbug makes a fascinating case – much more intelligent where it is flawed than lesser ideas could hope to be where they are plainly correct! – as to why Bravely Resisting Oppression is the worst possible act one might commit, and how Oppression might not be legitimately harmful in the absence of the possibility for such vile rebellion.

      (Of course, most of this is creatively rehashed Hobbes, but it’s still intellectually appealing.)

  49. Bugmaster says:

    On a separate note, am I the only person who does not think that hunting-gathering is all that great?

    I mean, sure, if we assume that everything its proponents are saying is true, then switching to a hunting-gathering lifestyle will bring us greatly enhanced leisure time and a calmer, more relaxed lifestyle in general. That’s great.

    Unfortunately, we have to give up some things in exchange. As hunter-gatherers, we would (just off the top of my head) never reach orbit (to speak nothing of the Moon); never create the Internet; and never build an MRI machine. All of these things require a massive number of people to dedicate their entire lives to a single task; and under a hunter-gatherer lifestyle, you a). don’t have massive numbers of people, and b). don’t have too much labor to spare.

    • Multiheaded says:

      This might be highly contingent on one’s (anti-)natalist leanings.

      It is interesting to note, too, that the old man – still the “humanist” “Young Marx” of 1846 – expressed support for American slavery as an unfortunate and temporary necessity in a private letter.

      Direct slavery is as much the pivot upon which our present-day industrialism turns as are machinery, credit, etc. Without slavery there would be no cotton, without cotton there would be no modern industry. It is slavery which has given value to the colonies, it is the colonies which have created world trade, and world trade is the necessary condition for large-scale machine industry. Consequently, prior to the slave trade, the colonies sent very few products to the Old World, and did not noticeably change the face of the world. Slavery is therefore an economic category of paramount importance. Without slavery, North America, the most progressive nation, would be transformed into a patriarchal country. Only wipe North America off the map and you will get anarchy, the complete decay of trade and modern civilisation. But to do away with slavery would be to wipe America off the map. Being an economic category, slavery has existed in all nations since the beginning of the world. All that modern nations have achieved is to disguise slavery at home and import it openly into the New World. After these reflections on slavery, what will the good Mr Proudhon do? He will seek the synthesis of liberty and slavery, the true golden mean, in other words the balance between slavery and liberty.

      Shulamith Firestone likewise suggested that even if the early matriarchal order was less anthropological myth and more historical fact, its overthrow by the patriarchy was the right thing from the teleological view of the long march towards transhumanism and the abolition of natural reproduction.

      So yes, serious leftists can get edgy and [don’t know what epithet would fit best] as hell at times.

      • nydwracu says:

        Yeah, even if agriculture was terribly harmful for a few thousand years, it was pretty obviously the right thing to do — you can’t get industrialization without agriculture, after all.

        Utilitarianism gets scary when you factor in future humans. (See also: eugenics.)

        • Hainish says:

          Is that a utilitarian argument, though?

        • Multiheaded says:

          Utilitarianism gets scary when you factor in future humans.

          Hence the pro-natalist/anti-natalist axis suddenly becomes directly important. Although it seems like differences between various antinatalisms start mattering a lot too, and, well, I’m definitely not a Sister Y kind of antinatalist. I get first-hand what her “View from Hell” – the understanding that it’s impossible for any amount of goodness to compensate for any suffering, because the metrics are not actually connected and goodness is infinitely ephemeral while suffering is infinitely fundamental – feels like from the inside; I just… overcame it. I guess as a survivor it feels really necessary for me to bury depression forever and disturb it not.

          And she’s definitely a sickeningly creepy Light Side-flavoured supervillain. I say this in a kind of admiring way. She cultivates this aesthetic where, if you’re attuned to a certain sensibility, it feels as if she could flay your soul just by sighing in disappointment.

    • Jaskologist says:

      Yeah, I disagreed really strongly with that section. I like being able to read philosophy, travel to China for a week, watch television, never worry about starving, and the fact that all my children are still alive.

      It is not remotely “easy to see that everyone should be hunter gatherers,” which is again the problem with a central planner; we don’t even agree on goals. The central planner Scott described seems to think that lotus-eating is the pinnacle of human existence. It’s not.

      • pneumatik says:

        The problem with a central planner is that it is drastically less efficient than a generally open market, except perhaps at very small scales.

      • blacktrance says:

        I hold that the lotus-eater is the pinnacle of human existence, but I’m worried about a failure mode of Bad Friendly AIs. The scenario is something like this: MIRI or some MIRI-like organization certifies some AI as friendly and lets it loose, and it does stuff that I find greatly suboptimal, perhaps worse than having no AI at all, and closes off the path to a better AI. It doesn’t paperclip the universe, but, for example, it preserves the “value” of boredom (which Yudkowsky thinks it’s important to preserve) when doing so is actually bad. The MIRI-ists applaud and say this is a Friendly AI and that it’s doing everything right, when it would actually be creating the nightmare world* advocated by the Fun Theory Sequence, in the name of “preserving human values” and “complexity of value”. Wireheading is good, but I don’t expect it to wirehead us, and I do expect it to make me considerably worse off along some axes.

        *Perhaps “nightmare world” is an exaggeration, but it would be creating a permanently suboptimal world when it would have the opportunity to create a much better one.

        • Bugmaster says:

          Personally, I worry about a much simpler scenario: Someone invents a new kind of AI that excels at machine translation, or automated cancer research, or what have you; and the MIRI folks shut it down because they think it might, one day, threaten all of mankind in some vaguely ominous way. And now we can’t automatically translate text or cure cancer anymore.

    • BenSix says:

      The problem is that, given how far our experiences are from the lifestyles of hunter-gatherers, it is extremely hard for us to judge how we might feel if we had been raised to consider them normal. If someone came up to me and said, “Hey! How would you like to be dropped in a rainforest to eat bugs, build huts out of sticks and scratch your backside as your guts fill up with parasites?” I would not be enthusiastic, but if I were used to such things I might think differently.

      But I’m not heading off to Colombia anytime soon.

  50. Bugmaster says:

    Everything the human race has worked for … might be accidentally handed over to some kind of unfathomable blind idiot alien god that discards all of them, and consciousness itself, in order to participate in some weird fundamental-level mass-energy economy that leads to it disassembling Earth and everything on it for its component atoms.

    Ok, and this would be a bad thing, why ?

    I mean, I do instinctively feel that this would be very bad. I don’t want to lose art, philosophy, love, and definitely not my consciousness. But then, I also instinctively feel that eating lots of salty fatty foods is good; that the Earth is flat; and that any single blood relative of mine is worth a million random foreigners. I am most probably wrong about all of these things, so why not art/love/consciousness/etc. ?

    If I could take a magic nanotech quantum pill that would greatly reduce my preference for fatty foods, I’d gladly do it; so why shouldn’t I take the anti-art pill along with it ?

    • grendelkhan says:

      I think this is related to Yudkowsky’s Gandhi-pill thought experiment.

      Our desires boil and thrash and contradict each other. (It’s part of what keeps us from being boringly single-minded!) To give an example of contradictory drives, I have a desire for a long life, easy mobility, and the high status that comes with being thin. I also think that fat and salt taste awesome. If I had the choice of getting rid of one set of desires, I’d pick the latter one, but not the former. So, why is that?

      The drives to appreciate art and to eat salt and fat feel like they belong in different categories. You can expand on that if you want to, but “I don’t want to lose my appreciation for art” seems like a perfectly good reason not to lose your appreciation for art. Where’s the conflict?

      (Relevant Greg Egan story: “Reasons to be Cheerful”.)

      • Bugmaster says:

        The drives to appreciate art and to eat salt and fat feel like they belong in different categories.

        Do they really ?

        Eating salt and fat is bad for you, because it trades off a quick burst of pleasure now for serious health problems in the future; one could argue, however, that art appreciation does the same thing.

        Every dollar that you spend on going to museums can be spent instead on buying better tools, healthier foods, long-term stocks, etc. And every hour you spend on appreciating art could be spent improving your skills, your physique, or simply fulfilling some of your other goals. Thus, while art appreciation doesn’t directly hurt you (by contrast with the potato chips), I am not convinced that it is harmless.

        I’m not strawmanning the argument, either — I have several friends who refuse to read fiction books based exactly on this kind of logic. I agree with you that art appreciation feels, instinctively, like a very good thing to have — but then, so do potato chips, so what’s the difference ?

        • lmm says:

          Why prefer happiness to sadness? We may not find it easy to express the difference in words, but I think we all agree that there is a difference, and that we do want to appreciate art and we don’t want to appreciate potato chips, right? In which case we don’t need any more justification for that preference; we value what we value, and we already know our values are not particularly coherent.

        • Bugmaster says:

          I can totally get behind preferring happiness to sadness, but I know plenty of people who do not appreciate art and yet are happy. Happier than me, even. Maybe they’ve got it right and I’ve got it wrong ?

        • lmm says:

          Maybe. Would you rather be someone like that? Aesthetic preferences are easier to change than you might think.

      • pneumatik says:

        Several years ago I read a very useful functional model of how humans think about and respond to their desires, one very similar to Kahneman’s System I and System II from Thinking, Fast and Slow. I’ve never found the description of that model again, but it used “short-term you” and “long-term you” as two different logical components of the brain.

        Short-term you only cares about the next 30 seconds or so. Its impulse control is roughly that of a toddler. Unfortunately it controls most of what we do. Long-term you is what makes plans, has long-term goals, and manages memories. Long-term you is who most people think they are, but others perceive them as a mix of their short-term and long-term yous.

        When I hear people talk about wanting to modify their desires, it’s usually their long-term self wishing it had more control over their short-term self. To use the example here, short-term you wants to eat potato chips. Long-term you does enjoy potato chips, but also wants to be healthy, and so wants to eat many fewer chips. OTOH, long-term you may especially like going to museums and appreciating art, so long-term you doesn’t want to modify this desire.

        • Ialdabaoth says:

          When I’m remembering to, I differentiate my wants into desires (long-term) and cravings (short-term). It helps keep things sorted.

    • roystgnr says:

      “I am large; I contain multitudes”

      Among the multitudes is a set of semi-coherent preferences who would love to drastically cut his intake of unhealthy foods but would hate to significantly cut his intake of art; this part of me is smart enough to be sentient, to be mostly in charge of writing this reply to your comment, and to realize that by “my” values a healthier-food-cravings pill would be life-extending but the anti-art pill would be fractional suicide.

      Another set of preferences among the multitudes has an annoyingly high level of willpower with regard to concepts (e.g. “there’s potato chips! eat now!”) he does understand, but isn’t smart enough to be called sentient. I can even fool him just by not buying my favorite snacks in the first place. (He apparently understands “can’t open chips in store” and “could eat chips in pantry, but there’s none there!” well enough, but can’t solve the resulting conundrum on his own.) He wouldn’t even see the to-him-suicidal implications of the healthier-food-cravings pill clearly enough to object.

      I think this is the sort of thing that “coherent volition” is meant to get at; if our own desires are incoherent, we can’t satisfy all of them, so there has to be some process for figuring out which trumps which. In most people art seems to be a “higher-level” desire than snacks.

      • Bugmaster says:

        …but the anti-art pill would be fractional suicide.

        Ok, but by that reasoning, why wouldn’t the anti-chip pill count as fractional suicide ? Both pills alter your core preferences; why do you only see one of them as suicide ?

        In addition, I could argue that both pills could be seen as life-extending. For example, the anti-art pill could not only remove your preference for art, but also fractionally increase your preference for physical exercise — thus ensuring that the time you are currently spending on art would be spent instead on improving your health. This would definitely extend your life, so, why not take the pill ?

        • roystgnr says:

          My point is that there aren’t really such things as “core preferences”, because we’re not unitary, consistent agents with well-defined preferences. Our own preferences conflict with each other, and when we discover those conflicts, *some* preference has to change if we want to behave rationally.

          It gets more complicated in that some conflicts are real inconsistencies (I want to eat chips, but I want to not eat chips), some conflicts are just tradeoffs (I want time to appreciate art, but I also want time to exercise, but I don’t have unlimited time so I have to optimize), and which is which probably varies from person to person.

        • Oligopsony says:

          The desire to eat (not just the act of eating) lots of sweets is (in the circles most likely under discussion) stigmatized, whereas the reverse is true of art appreciation. Hence the alignment of the second-order desires.

          (My personal view here is that the origin doesn’t have any normative significance in itself; if we like fatty foods because doing so increased our ancestors’ inclusive fitness, that’s no better or worse a reason to enjoy them than Enkidu so forming us that way. So too for social pressures, at least where third-order desires don’t come into play.)

        • roystgnr says:

          Social stigma is an interesting hypothesis, but I don’t think it applies in my case. I should have specified that my definition of “art” includes “the Assassin’s Creed games”, “the best Harry Potter fan fiction”, and “singing along to my daughter’s Frozen CD in the car”. In perhaps all social settings I’d be more comfortable discussing my junk food cravings than all of the above.

        • Bugmaster says:

          I don’t think the inconsistency is “I want to eat chips, but I want to not eat chips”. It’s really more like “Eating chips will give me pleasure now, but pain later”. Similarly, when you are deciding whether to read a book or play Assassin’s Creed, the choice is, “Playing this game will give me pleasure now, but reduced opportunities for pleasure later”. The difference is a matter of degree, not of kind.

    • jaimeastorga2000 says:

      But then, I also instinctively feel that eating lots of salty fatty foods is good; that the Earth is flat; and that any single blood relative of mine is worth a million random foreigners. I am most probably wrong about all of these things, so why not art/love/consciousness/etc. ?

      One of these things is not like the others… one of these things just doesn’t belong.

      Taboo “good,” “worth,” and “wrong.”

      If I could take a magic nanotech quantum pill that would greatly reduce my preference for fatty foods, I’d gladly do it; so why shouldn’t I take the anti-art pill along with it ?

      If I had the level of technology necessary to engineer a pill to reduce my preference for unhealthy foods, I’d much rather engineer foods that tasted as good as unhealthy foods while not actually harming my health (see, for example, Splenda vs. sugar vs. high-fructose corn syrup). If that weren’t an option for whatever reason, my next choice would be to engineer my body not to be harmed by unhealthy foods. My third choice would be to engineer my mind to have higher levels of willpower, such that I could resist the temptation of eating harmful foods without removing the temptation itself. Only as a last resort would I remove the desire to eat unhealthy foods, and hopefully that would be only a temporary solution while one of the three aforementioned solutions came into existence.

      In other words, I terminally value eating french fries, and bananas, and fatty pork, and other foods that are not optimal for my health. I also terminally value my health. I am willing to trade off one against the other if absolutely necessary, but my preferred solution would be one which compromised neither value.

  51. jaysumallah says:

    i think i might love you.
    bhutan, btw, seems to me to be exactly that walled garden. they’ve decided to keep their patriarchal theocratic monarchy and planned/traditionalist agricultural economy, close off the borders and try to optimize for /happiness/. i wish it weren’t doomed to fail.

    • Multiheaded says:

      I am suddenly reminded of an interview with a Chinese factory worker in some piece on internal migration and labour struggles in China that I read a while ago. He conceded that his living arrangements in the city were unsatisfactory, and that work often felt oppressive, but stressed that, in years past, his herd of 40 goats had oppressed him far worse than his boss ever did. Industrialization + rapid modernization simply rock for most common people, as long as the benefits trickle down quickly enough or the state redistributes them adequately. Feels to me like the old man was spot on about the “idiocy of rural life”.

      (I am echoed by the traitor rag/accelerationist mouthpiece Spiked.)

      (Am briefly tempted to create fuckyeahhighmodernism.tumblr.com)

      • Eli says:

        I’m often left wondering just how fast Spiked can jump back-and-forth across the line dividing the ultra-hard revolutionary left from the cheerleaders of capitalism. I mean, they always say they hate capitalism and want it dead and gone, but when it really comes to it, how many articles do they actually print towards the end of bringing down capitalism, and how many towards the end of accelerating it?

      • nydwracu says:

        (Am briefly tempted to create fuckyeahhighmodernism.tumblr.com)

        If you don’t do it, I will.

  52. grendelkhan says:

    the most craven Gnon-conformity

    I really, really don’t get this. It strikes me as about as sensible as knocking over every structure you can find in service of gravity, or setting fire to everything in service of entropy. Do people reify and worship evolution as a horrible elder god just because it’s more complicated than gravity or entropy?

    • jaimeastorga2000 says:

      It strikes me as about as sensible as knocking over every structure you can find in service of gravity, or setting fire to everything in service of entropy.

      A gravity-appeasing society is not one which topples its buildings; it’s one in which every civil engineer knows what gravity is, how it works, and accounts for it in every single design. It’s a society in which buildings don’t go up unless you are very, very sure that they will stand up to gravity, no matter how beautiful and inspiring the architectural blueprints are. Why? Because gravity is strong, gravity is everywhere, and gravity is utterly, completely without mercy. And if you don’t appease gravity with the proper materials and design, gravity is going to bring down your building and kill everyone inside.

      Gnon simply generalizes this concept to include all the laws of nature, whether they be physical, economic, statistical, or game-theoretical.

  53. Jeff says:

    It is shameful that we did.

    When observing the massive beautiful opulence of Las Vegas, built off of gambling, I once made a joke: “Q: What should a mathematician do when he comes to Vegas? A: Invest in casinos.”

    After reading this post, I’ll revise down the funniness of that joke.

  54. Sarah says:

    I couldn’t bear to read this the first time through because the Ginsberg poem has a personal meaning for me, and I couldn’t stand to see it quoted by someone who didn’t understand.

    When I was eighteen, I woke up inside of Moloch, screaming.

    But I steeled myself and read to the end, and you do understand.

    And I love you for it.

    And I want to help with the project.

    • MugaSofer says:

      >When I was eighteen, I woke up inside of Moloch, screaming.

      … what would be the metaphorical pill in this scenario?

  55. Milquetoast says:

    [Comment was in poor taste and committed suicide. Sorry!]

  56. ElTighre says:

    Dude, this was a great post.

  57. Alyssa Vance says:

    I am the sword in the darkness.

    I am the watcher on the walls.

    I am the fire that burns against the cold, the light that brings the dawn, the horn that wakes the sleepers, the shield that guards the realms of men.

    I pledge my life and honor to the Night’s Watch.

    For this night.

    And for all nights to come.

    • Eli says:

      Oh, is it Badass Creed posting time?

      OY OY OY OY OY OY! Dig the wax out of those big ears and LISTEN UP, because I’ve got something to tell ya! The reputation of Team Gurren echoes far and wide across the land! And when they talk about its bad-ass leader, a man of spirit and the paragon of masculinity, they’re talking about ME: THE MIGHTY KAMINA! JUST WHO THE HELL DO YOU THINK I AM!?

      Follow up:

      FEAR IS FREEDOM! SUBJUGATION IS LIBERATION! CONTRADICTION IS TRUTH! THESE ARE THE TRUTHS OF THIS WORLD, AND YOU WILL SUBMIT TO THEM, YOU PIGS IN HUMAN CLOTHING!

    • Fronken says:

      Darn it, I know this reference … what is it?

      • Multiheaded says:

        The oath of the Night’s Watch from A Song of Ice and Fire; not knowing it 5 years ago would’ve marked you as “not a huge fantasy nerd”; now it marks you as “living under a rock”.

        P.S. Not only did I read Martin before it was cool, I started with Dreamsongs: A RRetrospective, and liked it before even hearing of ASoIaF.

  58. Multiheaded says:

    I’ve linked this incredible thing on /r/sorceryofthespectacle, and am inviting any fellow commenters with… similar interests to participate there. It’s a small sub, but with love it could be great for our purposes.

  59. Oligopsony says:

    Post. Post. Post. Post. Post. Post. Post.

  60. aretae says:

    Best I’ve read of yours. Wonderfully done.

  61. Matthew O says:

    This column has also helped me realize why the Lord of the Rings universe is so inspiring. It is because the elves (and the “high elves” in particular) are sort of a race that has overcome the tendency towards Moloch. In a way, Sauron IS Moloch. Sauron is about power. Saruman, when he turns evil and starts serving Sauron, takes this up a notch and elevates efficiency and industry to a level above beauty. Instead, the elves worship Elbereth, the goddess of starlight – basically “beauty” and “light.”

    What is interesting about the elves of Tolkien’s universe is that, despite the fact that they live thousands of years and could theoretically have hundreds of children, they usually limit themselves to just a few – despite the fact that more children would give them an advantage in their perennial war against Morgoth and Sauron (although the thought never even seems to occur to them, so untempted are they by the whispers of Moloch).

    Of course, in the end, the nice guys finish last: even though Sauron is defeated, the elves realize that their magical power will fade with the fading power of the three elven rings, and the world will pass under the control of men, “who above all else desire power.” Sauron is defeated, but Moloch survives in the form of men, which is foreshadowed when Isildur refuses to destroy the One Ring at the end of the Second Age, but which really comes to fruition at the end of the Third Age when men inherit Middle-Earth from the elves.

    There is a reason why “The Scouring of the Shire,” where even the pristine Shire becomes corrupted by the industrializer Saruman and his dictatorship of willing, bullying henchmen hobbits, is such a pivotal part of the end of LOTR. It shows that Moloch has survived even though Sauron has been defeated. It foreshadows that, in the future, evil will be bred not by a grandiose Dark Lord, but by the petty little rivalries of hobbits, men, and others who desire power, and who will be willing to “race to the bottom” to get that power, whereas the elves would not. I think this “race to the bottom” is really what Tolkien hated most about the modern world.

    There is a reason why Tolkien once described himself as an “anarcho-monarchist.” I do not think he was joking. Tolkien’s ideal form of government was a monarchy with a rightful, benevolent king who, though threatened by a physical Dark Lord, or tempted by something like the lure of immortality (as the Numenorean kings eventually were), would never throw his values under the bus in the pursuit of those temptations. Idealistic, yes. But that is why it is inspiring – because it tries to give hope that Moloch can be defeated – that the way of the elves can be pursued, that a society of noble-minded men can adhere to higher values than producing as many things and people as possible – that Moloch can be held at bay.

    • James James says:

      “I do not think he was joking.”

      No, he was not joking. “Anarcho-monarchist” is not a joke, like “marxist-nixonist” or “dextromaoist”, though it does put a new spin on “anarcho-fascist”.

      He wrote in 1943 that “My political opinions lean more and more to Anarchy (philosophically understood, meaning abolition of control not whiskered men with bombs)—or to ‘unconstitutional’ Monarchy.”

      We see here some of the idea that we are always in a state of anarchy — that the state is only an agent like any other.

      Tolkien was an English Catholic. He was born in “an England where it was still possible to get lost”. His anarchism was a belief that laws are made by men. The monarch is God, and it is God, not men, who passes final judgement.

      He believed in living a simple life. Of course, he always had the means to do so.

      • Erik says:

        Oh please: #DNLS

        Salon is a clickbait site, cousin of Gawker, whose works are essentially “this thing is bad” screeched about a thousand different things. And Brin is an ignoramus who appears to have skipped the Appendices and other extra material in which Tolkien puts a lot of explanation and demonstrates awareness of complicating factors – and those things should go in the Appendices, because one of the conceits of Lord of the Rings is that it’s a translation of the Red Book of Westmarch, written by the hobbits as they saw events.

        Example one: Middle-Earth at the time of the War of the Ring has recently been beset by plagues and famines. (And dragons, and other supernatural things.) Things sucked extensively back in the past, and they’re looking good in LOTR because, as happened after the Black Death, people have gotten a chance to work larger plots of the best available land.

        Example two: The goodness of the monarchical system is heavily dependent on the goodness of the king. Get a bad one, and things start to suck. Tolkien knows this, and describes in the post-history how the death of Aragorn ended the golden age of Gondor and things went significantly downhill again.

        I get the impression that Brin is bashing Tolkien as a substitute for the fantasy genre as described in Thud and Blunder, which has pastiched, imitated and parodied itself to absurdity. Tolkien deserves better, and bashing a whole genre because people published crap in it justifies bashing everything; for example bashing Brin’s beloved Enlightenment for the murderous French Revolution and ensuing Terror in the name of Reason.

        • Multiheaded says:

          praising Brin’s beloved Enlightenment for the murderous French Revolution and ensuing Terror in the name of Reason

          Fixed that for ya.

        • Nornagest says:

          “Thud and Blunder” is about heroic fantasy, which is quite a different genre from epic fantasy in the vein of Tolkien and imitators. (Also a declining one; you occasionally still see it in cinema and TV because it films well, but in recent English-language literature it’s rare outside godawful tie-in novels.) They’re both backwards-looking genres for the most part, but apart from that the criticisms that apply to the former will generally not apply to the latter.

  62. Matthew O says:

    In my comment up above I was speculating about having a world government that would limit reproduction. Jumping off from that idea, what about the idea of having a constitutionally-limited world government that was only given authority to deal with “race-to-the-bottom” issues as specifically enumerated in the constitution, where it was understood that the world government trying to regulate anything else would constitute a green light for rebelling against it?

    Namely, those “race-to-the-bottom” issues would include:
    1. Limiting human reproduction.
    2. Setting some sort of global minimum workday / global minimum wage.
    3. Levying taxes on things like carbon production / fining environmental degradation / dealing with environmental collective action problems.
    4. Punishing military aggression.

    Think of it like a UN+. Of course, as with the real-life UN, where will it get the money and troops to do all of this stuff? I would wager that it would have to tax people directly, and perhaps governments that wanted to opt out and not allow it to tax their citizens could be shunned by the rest of the world like North Korea…although Moloch would always be whispering…”Are you ssure you really want to sshun these defectorss, rather than use their cheap labor pools and lack of environmental regulationss and reap the rewardssss?”

    • Armstrong For President 2020 says:

      Once you have complete legal control of the battlefield, bedroom, and market, doesn’t that already cover pretty much every single activity which could possibly be regulated?

      • Nornagest says:

        Communication seems like an obvious lacuna, although the ability of that communication to lead to anything interesting might be limited.

      • Andy says:

        Limiting reproduction isn’t necessarily control of the bedroom – there’s plenty that people do there that’s not related to reproduction.
        Though if this is news to you, you have bigger problems.
        I think of it more as regulating every activity that can fall into Moloch’s traps.
        And the battlefield absolutely needs to be controlled – it’s where any contest for control would take place – in order to prevent a worse institution (an Evolan priest-king, for example) from seizing control of the state machinery. A quote I read in a book on counterinsurgency: “Security may be 90% of the problem, or 10%, but it is always the first 90% or the first 10% of the solution.”
        And the marketplace should be an obvious sector of control – after reproduction, which produces demand for resources, the marketplace is the most obvious failure point where demand for resources can go out of control. It would have to be at least somewhat free, and kept clear of monopolies and other unethical behavior.
        Animals and proles are free – the only things left unregulated are those which cannot pose a threat. What X number of lovebirds, of any combination of genders, do safely and consensually in their orgy-basement is of no concern, so long as they do not seek to replace the benevolent World State.

        • Multiheaded says:

          It’s really funny how our generation seems to be waking up to the fact that Brave New World has been a wonderful near-eutopia all along. (Although critics were certainly pointing this out to Huxley within his own lifetime.)

        • Oligopsony says:

          Didn’t he turn around and write another novel where he said oh yeah, actually drugged-out bliss would be sweet? Or am I thinking of someone else?

        • Multiheaded says:

          Oh, yeah, from osmosis I assume that one was like an anarchist utopia, something like Le Guin’s super advanced civilized Empire of the Summer Moon!Hippies.

        • Pthagnar says:

          “Anarchist utopia” is right, for anarcho-monarchist values of “anarchist” anyway, but “super advanced” is not. The utopia in Island is pretty much a South-East Asian-flavoured hippy pastoral, unfortunately.

    • Fazathra says:

      “Namely, those ‘race-to-the-bottom’ issues would include: 1. Limiting human reproduction”

      Human reproduction is not the problem. It shows no signs of following anything close to a Malthusian trend. Birthrates have fallen below replacement levels in pretty much all developed countries, and the birthrates of developing countries are also falling as they open up to the progressive package of feminism and women’s education that economic development brings. In fact, with human birthrates, we see almost the opposite of what Malthusianism predicts: births in the West are not ballooning until per capita GDP declines to that of Zimbabwe, but rather are falling so much that Western countries need immigrants (or at least think they do) to maintain their social programs.

      Reproduction becomes a problem when technology allows it to be removed from human hands (or wombs). Ems will likely bring about a Malthusian world because they can reproduce almost infinitely at no real cost to themselves; thus, as soon as the first em that wants to reproduce is made, it will create as many perfect copies of itself as it can, all of which have exactly the same objective – hence Malthusianism. Humans don’t have this problem, for fairly obvious reasons.
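
      (A toy numerical sketch of the dynamic described above – an editor’s illustration with made-up numbers, not anything from the comment itself: one em that copies itself at negligible cost doubles until total demand meets a fixed resource supply, leaving each copy near bare subsistence.)

      # Toy Malthusian-em sketch; the resource figures are arbitrary assumptions.
      RESOURCES = 1_000_000   # fixed supply of "subsistence units"
      SUBSISTENCE = 1         # units each copy needs just to keep running

      population = 1
      while population * 2 * SUBSISTENCE <= RESOURCES:
          population *= 2     # copying is essentially free, so the population doubles

      print(population)              # 524288 copies
      print(RESOURCES / population)  # ~1.9 units per copy: near-subsistence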

      This is the nature of technological progress. On the one hand it serves to usher in a time of plenty – our dreamtime – and on the other, it serves only to strengthen Moloch, which is why our dreamtimes must always be fleeting. Sexual reproduction, and love, and all that stuff we like – our Elua – is currently only a facet of Moloch that we happen to like. But, with technological progress, our Elua will eventually become dissociated from Moloch, and if we do not defend it against Moloch’s irresistible hordes, our Elua will be devoured – for Moloch devours his previous facets with the same cold ruthlessness with which he devours everything else.
      Perhaps the ems will have their own Elua, and so will the later stages as Moloch marches forever towards his telos, until Moloch’s Disneyland runs out of children, and then there will be no more Eluas.

      • Eli says:

        Can’t the humans just cooperate to exterminate ems that have gone Malthusian?

        • Multiheaded says:

          Not sure if ironic.

        • Anonymous says:

          Theoretically, yeah. But coordination is hard, especially when there are potentially enormous benefits to defecting which manifest themselves pretty much instantaneously. Taking advantage of the disunity of humanity in this way is how Moloch wins.

        • drethelin says:

          you know who’s really good at cooperating? billions of copies of the same person

        • jaimeastorga2000 says:

          you know who’s really good at cooperating? billions of copies of the same person

          Especially if the original guy was a superrational decision theorist.

  63. So8res says:

    This seems a good time to note that MIRI’s summer fundraiser is currently running:

    https://intelligence.org/donate/

    Let’s bring down the old gods.

    • cryptael says:

      Aren’t MIRI’s activities primarily focused on NOT creating artificial intelligence? It seems like there are more… proactive… ways to spend one’s money.

      • Their efforts are focused on not being stupid about creating AI. They definitely do want to create it.

        If you have cash waiting for a capture-Gnon-type opportunity, MIRI is a good place to put it.

      • So8res says:

        No.

        In Scott’s terms, MIRI is focused on ensuring that, when humanity builds a superintelligence, they build something of Elua’s line.

        Vegas sprang unbidden in the desert from twisted incentives and evolutionary whims. The incentives to create a superintelligence are stronger still, and Moloch approaches superintelligence along a thousand separate paths. Any intelligence cobbled from Moloch’s manifold contradictory whims will be its servant, and we can’t expect it to pay respects to our god too.

        Humanity will build a superintelligence, if nothing ends them first. MIRI exists to learn how to create it intentionally, on our own terms, with the values of humanity in mind. For if we are not careful, it will be created unintentionally, cobbled together according to innumerable perverse incentives and the mad contortions of Moloch.

  64. ADifferentAnonymous says:

    Relevant here is that gardens can grow by osmosis, as you discuss in https://slatestarcodex.com/2014/02/23/in-favor-of-niceness-community-and-civilization/. If there’s no foom or no foom soon, this is the most important secret weapon of Elua.

  65. Pingback: Articulating a Traditionalist Worldview | Ara Maxima

  66. Vadim Kosoy says:

    Amazing essay. This must be one of the best things I have ever read.

    Comments:

    > Once humans can design machines that are smarter than we are, by definition they’ll be able to design machines which are smarter than they are, which can design machines smarter than they are, and so on in a feedback loop so tiny that it will smash up against the physical limitations for intelligence in a comparatively lightning-short amount of time. If multiple competing entities were likely to do that at once, we would be super-doomed.

    Not necessarily. If all of the competing entities use UDT, they will cooperate (e.g. by splitting the universe into domains of control). The reason Moloch is able to enslave humans is that humans use foolish decision theories. Moreover, the competing entities will almost certainly use a variant of UDT, since they will want to self-modify in this way. Therefore, if one of these entities is a FAI, we’ll probably get our Eutopia.

    Which also means that if superintelligence turns out to be impossible for some reason (or very far in the future), we should try to modify everyone’s brains into using UDT (while preserving values).
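
    (A minimal illustrative sketch – the editor’s own toy example, not the UDT formalism itself – of the cooperation claim above: agents that condition on “the other side runs my exact decision procedure” cooperate in a one-shot Prisoner’s Dilemma, while ordinary causal reasoners defect. The payoff numbers and the bytecode-comparison trick are assumptions made purely for the illustration.)

    # One-shot Prisoner's Dilemma between program-like agents (Python).
    PAYOFFS = {  # (my_move, their_move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def causal_agent(my_code, their_code):
        # Reasons move-by-move: defection dominates, so always defect.
        return "D"

    def mirror_agent(my_code, their_code):
        # Treats "what do I choose?" and "what does my counterpart choose?"
        # as one question: cooperate exactly when facing its own algorithm.
        return "C" if their_code == my_code else "D"

    def play(agent_a, agent_b):
        code_a, code_b = agent_a.__code__.co_code, agent_b.__code__.co_code
        move_a = agent_a(code_a, code_b)
        move_b = agent_b(code_b, code_a)
        return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]

    print(play(mirror_agent, mirror_agent))  # (3, 3): mutual cooperation
    print(play(causal_agent, causal_agent))  # (1, 1): mutual defection
    print(play(mirror_agent, causal_agent))  # (1, 1): mirror defects defensively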

    • Eli says:

      For all you’ve gone on about UDT, I still don’t entirely get it. Is there a written explanation you can link?

  67. Matthew O says:

    This column made me think of two things: the movie “The Beach” starring Leonardo DiCaprio, and Chairman Yang.

    To me, the movie “The Beach” comes close to portraying the sort of idyllic existence that would be possible if we could escape Moloch: if we could all agree to limit human reproduction and have some way of cracking down on the cheaters (world government singleton, anyone?), and gradually bring the population down to maybe 100 million, we could all live on idyllic island paradises while deriving the benefits of modern technology. We could please the hunter-gatherer parts of our brains while also enjoying modern comforts. In the movie, “earning money from selling marijuana” is sort of their modern technology that allows them to live easily, and the rule of “don’t tell ANYBODY else that this place exists!” is sort of like their rule of not having too many babies. (There even might have been a taboo on the island in the movie against having kids, I can’t remember).

    “The Beach” is really about the awful triumph of Moloch. As anyone who has seen the movie knows, all the fun ends when more people start finding out about the secret island paradise and threaten to overrun it, to spoil that good thing that they all had going. Suddenly, the relaxed play of the island community gives way to the old game of survival, of figuring out how to deal with the intruders, of playing Moloch’s game.

    Then again, this column got me thinking: so what if a setting like the one in “The Beach” pleases our primate brains? If our desires happen to be wired to be pleased by something that is scarce (island paradises), is that a problem with the scarcity or with our desires? If we (or a FAI) could reprogram our desires to desire that which is naturally abundant, and program us to take pleasure in doing the things that would maintain that abundance (such as NOT having too many kids), then why would it necessarily be a bad thing for a FAI to do that? Part of me takes very seriously the following quote from Chairman Yang:

    “My gift to industry is the genetically engineered worker, or Genejack. Specially designed for labor, the Genejack’s muscles and nerves are ideal for his task, and the cerebral cortex has been atrophied so that he can desire nothing except to perform his duties. Tyranny, you say? How can you tyrannize someone who cannot feel pain?”
    >Chairman Sheng-ji Yang, “Essays on Mind and Matter”

  68. Andy says:

    I have a sneaking suspicion that you are using this comment thread to identify high-quality and low-quality Reactionaries for the cleansing you mentioned in the last Open Thread.
    If a long, meandering post about ancient elder gods that is not only coherent but deeply insightful is your idea of Reactionary-bait, I bless you and your mighty keyboard, sir.

  69. Scharlach says:

    The notion that hunter-gatherers led more fulfilling lives is a quaint romanticism held by cosmopolitans who think that spending a weekend in an RV is “roughing it.” If given a chance, even Bear Grylls and Les Stroud wouldn’t trade running water and air conditioning for a life of hunting and gathering. If you want to be cured of that notion, let me know, and I’ll take you on a two-week trek through the Rockies or the Sierras. By the time we’re through, you’ll be singing an encomium to the gods of cold beer and Netflix.

    • Nornagest says:

      I used to go on two-week backpacking trips through the Sierras every summer, and despite that I think it’s almost certain that hunter-gatherers had longer and healthier lives than early agriculturalists, and very likely more enjoyable ones. My reasons for this have nothing to do with romanticism and a lot to do with skeletal evidence: in every place where we can look at the agricultural transition as it happens, skeletal proxies for health and adult life expectancy go straight to hell. They usually don’t make up the difference until the early modern era — in some places not until the 20th century.

      We’re healthier now, but that’s very much a recent development. In terms of enjoyment rather than health, I think there are valid arguments both ways — a four-hour workday sounds nice, and my social psychology is probably better calibrated for a band of fifty than a metropolis of seven million, but I also like video games and indoor plumbing and not being eaten alive by mosquitoes.

      • James James says:

        Clark discusses this in “A Farewell to Alms”. Hunter-gatherers had higher “wages” than agriculturalists. Agricultural wages were flat for several thousand years, and didn’t exceed hunter-gatherer levels until the Industrial Revolution.

    • Andy says:

      If given a chance, even Bear Grylls and Les Stroud wouldn’t trade running water and air conditioning for a life of hunting and gathering.

      I think you underestimate the perversity of people in general. Most people love indoor plumbing, but there will always be that tiny fringy group of wilderness people, avoiding technological civilization for many reasons.

    • Scott Alexander says:

      I wrote a little about this here, but if you don’t want to read another tome, the most interesting part is on how white people captured by Native Americans and acculturated into their societies almost always wanted to stay with them even after they were “freed”, and Native Americans captured by white people and acculturated into their societies almost always hated it and ran away the first chance they got.

      • Samuel Skinner says:

        Weren’t those tribes agriculturalists as well? I was under the impression that all the tribes on the East Coast practiced agriculture and that it only dropped off past the Mississippi.

        • Nornagest says:

          That’s my understanding as well, with the caveat that there was another agricultural center in the Southwest.

          The book Scott’s talking about is about the Comanche, though, historically a hunter-gatherer tribe of the southern Great Plains. I don’t know if the cases it talks about would generalize to more sedentary native cultures.

          (It’s a good book, by the way.)

  70. Scott Alexander says:

    …I thought I was mostly signal-boosting you and Nyan, while also quibbling about details.

    For such a garden to be a good idea, it would either have to have faster tech progress than the rest of the world, or better Friendliness-science than the rest of the world.

    Faster tech progress is a really tough bet – I continue to think tech progress and social progress are stuck in a feedback loop and that it’s not a coincidence that the most liberal place in the world (San Francisco) is also its biggest tech incubator. And remember, your garden of, let’s say, a million people doesn’t just have to do better than an average million people; it has to beat the entire rest of the world, including Google and the Chinese government et cetera.

    I also don’t see why Friendliness science would advance better there. The couple of existing Friendliness scientists seem to have formed their own extremely insular culture that ignores the rest of the world. You’re doing good work from Slovenia, which I assume from what you tell me about it is a terrible place.

    More important, given that founding and stabilizing this new polity is a really big investment both in terms of money+manpower and in terms of causing social strife while it’s going on, you either need huge yearly dividends or a very long time horizon if the investment is going to pay off. If we had another 200 years, it might be worth it. If we have another 50, by the time the rubble has cleared and everything’s sorted out somebody else will already have got an AI up and running.

    • Nornagest says:

      I feel SF is actually pretty illiberal in some important ways. It’s got the reputation, sure, and it harbors a lot of subcultures near and dear to American liberalism’s heart, but scratch the surface and you find a legitimately frightening reactionary streak. Not neoreactionary, though; the ideal society in these people’s heads is some kind of chimera of an idealized Old Europe and the Summer of Love.

      They’re interested in the moral high ground and the trappings of progress, but not so much in actual outcomes. Which leads to a great deal of friction with the tech industry, as you well know.

      • Multiheaded says:

        Not neoreactionary, though; the ideal society in these people’s heads is some kind of chimera of an idealized Old Europe and the Summer of Love.

        Yes. Basically a mirror counterpart to the tech part of the Californian Ideology, I’d say. (Plugging this essay again; really prescient, IMO.)

      • Nornagest says:

        I couldn’t get through the first section of that essay. Too much dog-whistle, too little actual content.

    • The fear is that all current semi-gardens are under threat of rewilding.

      How exactly we counter that – an Asimovesque foundation, an exit nation, or shoring up and helping existing gardens – is still unknown.

      Konk can’t remain in Slovenia and do good work in the long term. Friendlies can’t build their AIs if stuff is collapsing around them.

      Further, in a future where Friendliness does not come quickly, civilizational science and knowing how to appease Gnon are critically important if we are to at least retain our current knowledge.

      I slightly disagree with Konk. The present world is gardenlike enough for a fast FAI to occur.

      Your point about San-fran being very liberal is interesting. I refuse to interpret that in anything but sinister terms, but we’ll see. I don’t currently have an explanation besides that being the place where the critical mass of smart, open-minded white people are. Thus source of lots of cool stuff. The political side of which tends left because Cthulhu.

      Also note that this has happened before (leftism among nobles) with far lower absolute wealth. It’s something other than just wealth.

      Note that leftism and Silicon Valley are starting to split, and animosity between them is growing. Will be interesting to watch.

      • Multiheaded says:

        Konk can’t remain in Slovenia and do good work in the long term. Friendlies can’t build their AIs if stuff is collapsing around them.

        Class analysis death ray engage! The persistence of the triple “act heroically to prevent collapse”/”prepare for post-collapse”/”gloat at richly deserved collapse” among all far-right extremists has a common origin: you know and alieve perfectly well that people loathe you and that you’re simply too weak and removed from reality to offer a selfish chance at day-to-day improvement. A supposedly Reactionary state like South Korea wouldn’t need and couldn’t use advisors like you. This meme of yours would exist whether or not our Dreamtime in fact faced certain particular threats; you just plain want a different Dreamtime. Just like tumblr SJWs don’t want to worry about boring stuff like class and economics, but are content within the existing dreamtime (and are fit for some purpose in the here and now).

        Your point about San-fran being very liberal is interesting. I refuse to interpret that in anything but sinister terms, but we’ll see.

        Boo, motherfucker! Boo!

        where the critical mass of smart, open-minded WHITE people are.

        Note to any Asian tech folks reading: this is what white supremacy really thinks of you. Capitalism might like you, but you would do well not to confuse it with a more… chthonic force.

        Note that leftism and Silicon Valley are starting to split, and animosity between them is growing. Will be interesting to watch.

        I predict that some smarter-than-average libertarian tech dudes will seize upon UBI as the magic wand to keep the hordes at bay, just like what happened 70-80 years ago with the welfare(/warfare) state. Then some neoliberals/”wonks” in the mould of Ezra Klein and Matt Yglesias could publicly proclaim an alliance. They would design a bone to throw to SJWs; it would not be difficult, given that SJWs have barely even joined in the leftist opposition to neoliberal ideological operations like “Lean In”.

        P.S. Yes, this is “class analysis” as a metonym for “tribe analysis, but from a Marxist identitarian-ish position and with the same old bastardized Frankfurt School bits thrown in”.

      • If whites are underrepresented in tech, consider my “white supremacism” retracted for the purposes of the above speculations.

        Multi: you are an insufferable ankle biter, which is tragic because you often have very interesting points.

      • Multiheaded says:

        Click edit, delete all text, click save, click confirm.

        SCOTT: announce this feature somewhere sticky!

    • Erik says:

      Faster tech progress is a really tough bet – I continue to think tech progress and social progress are stuck in a feedback loop and that it’s not a coincidence that the most liberal place in the world (San Francisco) is also its biggest tech incubator.

      I sort of agree, but the vaguely NRX-ish position inherited from Moldbug that I haven’t quite sorted through myself is that the feedback loop isn’t stable. Tech progress creates wealth and opportunity for social progress and luxury (dreamtime!), but then luxury becomes entitlement, and entitlement becomes unsustainable. Social ‘progress’ also starts to eat tech progress. Affirmative action hires, die techie scum, firing the inventor of JavaScript and cofounder of Mozilla from a tech position because he supported the not-socially-progressive position on gay marriage, etc.

      Which also suggests a protective mechanism against many of the concerns you raise in the OP: the walled garden keeping out dangerous memes and so forth manages to hold off the rest of the cosmos because it devotes its resources to continuing tech development and research and the like, while the world outside slacks off on research – because how can you justify spending millions of dollars on studying fruit flies, and acres of land on maintaining zoos, when there are starving children in (wherever)?

      The walled garden doesn’t get outcompeted and destroyed because it’s competing on a different level. The world outside has mostly lost/destroyed the ability to work in groups beyond the Dunbar number. The race to the bottom has resulted in the abolition of zoos because the tigers were eating cows and that was a massive waste of resources that could have been used to feed humans and reproduce more, and if you didn’t slaughter the tiger, the next tribe would. The walled garden solved various coordination problems like this. The walled garden still has Brendan Eich on the payroll, and he has not turned Mozilla into a hate engine.

      • Multiheaded says:

        Affirmative action hires

        Requesting a wonkish type (why not Scott Himself?) to hold a detailed data-driven empirical argument on this.

        Privately I have an intuition that there are two claims bundled up in here: 1) the “direct” tax on productivity, which I suspect might even be in the realm of “welfare fraud”, i.e. where losses are literally so minuscule that the cottage industry of complaining about it and getting all worked up and such[1] wastes more resources by itself – and 2) the disruption of the Culture Fit, this being the major factor, which an evil commie like me would cynically interpret as techie scum slacking off on purpose as a bargaining chip in the identity politics game, and would suggest whips, gulags and Sharashkas as an alternative source of motivation.

        [1] Your side has long insinuated much the same about sexual violence, so fair game.

      • Erik says:

        Culture Fit seems like it breaks down into more factors.

        One is the unity of purpose you can get by using narrow filtering and hiring criteria to determine who to let in, and that will disproportionately pick members of certain groups. Sometimes hiring might even be restricted to a certain group in the first place or there might be mandatory ritual activities which exclude certain groups entirely regardless of merit (e.g. no beer for devout Muslims).

        Another is the trust and drama-prevention from knowing that employees are here on merit, no worrying “[is she/am I] a token hire?” and no ensuing “you only do that because you think X about me”, no having to worry about whether a certain distribution of assignments will be stereotypical or prejudiced. (If the losses are so minuscule as you suggest, the same would apply to having women do the graphics and men do the engine of a program, right?)

        I can see how you could use a sharashka to get work out of people, but not how it would induce any of the above.

      • Multiheaded says:

        the same would apply to having women do the graphics and men do the engine of a program, right?

        This one actually opens up into The Biggest Thing You Never Knew About Feminism. Tl;dr – mu, it’s impossible to get any satisfactory answer here, at least in my interpretation of radical feminism. To do so, we need a transvaluation of all established patriarchal values and especially economic ones, along all the important dimensions of economics (incl. eroticized labour, emotion/affect, formal prestige, etc) and their relation to each other too, and it’s only possible if theory and praxis move each other along in harmony, and it’s hard and radically novel to get started on this front. This rabbit hole is really a barely-charted abyss.

      • nydwracu says:

        Your side has long insinuated much the same about sexual violence, so fair game.

        But there’s no need to.

        You win! College is a den of rapists! But we’re the ones who want to save people from being incentivized to go to it. The left, at least here, wants everyone to go.

        (And the left, at least here, is very much averse to things like trade schools. For a movement that claims to oppose the bourgeoisie, well…)

        PS: http://kontextmaschine.tumblr.com/post/92903926043

      • lmm says:

        On the meta level I kind of agree, but Eich was turning technical discussions unproductive long before the crisis, and I wish the board had had the spine to fire him for general douchiness long ago. Since they didn’t, I’m glad they were led to fire him anyway, for all that it’s a bad precedent. Mozilla will be more productive without him.

      • no one special says:

        @lmm: Eich was turning technical discussion unproductive? Do you have any links, or a larger explanation?

        I occasionally browse the ES-discuss archives, and he always seemed technically astute there, and decent, if not awesome at the cat-herding that comes along with language design. So your comment is a very strong signal to me.

      • Scott Alexander says:

        The “die techie scum” stuff right now probably doesn’t even subtract 1% of 1% of Silicon Valley’s productivity. You would have to expect it to grow truly exponentially for it to be interesting.

        Any growth short of, like, a nationwide communist revolution could, I think, best be handled by smaller measures, like insulating the tech community from the broader community, getting funds together to support anyone fired so that people don’t live in fear, or at worst moving the tech community out of California to a friendlier regulatory clime. “Create entirely new system of government” seems somewhere about five thousand positions down on the list, unless you expect the singularity to take five hundred years and all five hundred of those years to be spent with massively increasing anti-tech animus.

    • >How do we get to Manhattan from here following normal social development?

      Mammon. Wheeeeeeeee! Gnon wins; we all die.

    • Scott Alexander says:

      Possibly our main point of divergence is our prediction of civilizational collapse? I expect no civilizational collapse on a large scale.

      (There might be things that 21st century people might think of as degeneracy or moral decay, but I expect that the 2100 version of the reactionaries vs. anti-reactionaries debate will have just as many good points on both sides about whether civilization is getting better or worse as today’s debate does.)

      Even if an oracle told me that our civilization was headed for eventual collapse (or would collapse in the far future if there was no singularity) I would expect it to take more than a hundred years. I’m pretty sure there have been collapse predictions as long as there has been civilization itself.

  71. @Fazathra (sorry, no reply link on your comment for some reason…)

    “As your views are in the minority, is it not expected that they will not be dominant within the AI implementing CEV’s utility function? And that thus this is a feature, not a bug, of CEV.”

    I agree that if CEV were implemented and created a world I found intolerable, and which tortured all the people I care about to death, but made one more person happy than it tortured to death, that would be CEV working exactly as designed. Nonetheless, I would fight with every breath in my body to prevent such an outcome.

    ” Then you argue that because a CEV using AI would not possess a value-set which is “nice” – i.e. similar to your own – the entire concept of CEV is flawed. However, the fact that you may find the values extrapolated via CEV abhorrent does not necessarily have any real bearing upon the correctness of CEV as a method. In fact, CEV may be the utility function for a FAI that maximises the utility of humanity in general after the singularity, even if you find its values to be morally wrong.”
    Ah, but I don’t actually care about “maximising the utility function of humanity in general”. I care about not installing a dictator over the universe which is guided by the worst instincts of humanity.
    Totalitarianism is wrong even — perhaps especially — when it’s totalitarianism supported by the majority, and I will always support the right of the minority to live over the right of the majority to kill them, no matter how happy that would make the majority.
    Luckily, as I say, it seems so unlikely to me that MIRI will ever come close to achieving its goal as to not be worth considering…

    ” I don’t know whether this is actually true (and I suspect it isn’t) but this line of argument seems to be fairly irrelevant as a criticism of the concept of CEV as the point of CEV is to find a solution for all, not just for liberal American programmers.”
    I am neither American nor a programmer.

    • Fazathra says:

      “Nonetheless, I would fight with every breath in my body to prevent such an outcome.”

      This is of course your prerogative and, to be honest, I agree with you here. I would also find such a future fairly horrific. However, this is not technically a flaw in CEV as the aim of CEV is ostensibly not to generate something horrific for us but to maximise the well-being of humanity in general.

      “Ah, but I don’t actually care about “maximising the utility function of humanity in general””

      This is where you and CEV differ. The fact that you differ is not necessarily a flaw with CEV.

      “I care about not installing a dictator over the universe which is guided by the worst instincts of humanity.”

      This is your progressive values (or metavalues?) showing again. I’m sure some neoreactionary somewhere would find intolerance and hatred of outgroups and all that jazz you decry to be pretty awesome, certainly not “the worst instincts of humanity”. And if there are enough of them then, from a utilitarian perspective, they’re right.

      “Totalitarianism is wrong even — perhaps especially — when it’s totalitarianism supported by the majority, and I will always support the right of the minority to live over the right of the majority to kill them, no matter how happy that would make the majority.”

      This is one of the places where utilitarianism (and possibly CEV) contradicts our moral intuitions and, for the purposes of friendly AI, I am never sure in these situations whether to chuck out utilitarianism or our own moral intuitions. Personally, I view approaches like CEV as purely a stopgap measure: while it may not be perfect, it is pretty good for the average person and probably won’t result in human extinction or a universe tiled with paperclips, and in an area fraught with existential risks, this is a pretty decent achievement.

      “I am neither American nor a programmer”

      Fair enough. That was a rhetorical flourish where I emphasised (what I perceive to be) the demographic most likely to endorse such arguments so as to implicitly contrast them and their values against the median human and their values, which we would expect the CEV-implementing AI’s values to centre around. I hope this point still stands.

    • MugaSofer says:

      “if CEV were implemented and created a world I found intolerable … I would fight with every breath in my body to prevent such an outcome.”

      To paraphrase some witty individual, have you ever considered the possibility that you might be completely wrong? Historically, most people have been.

      Or, to put it another way:

      If you really believe the majority of people hold fundamentally different values to you – which I don’t believe for a second – then why on Earth should *we* go against *our* CEV to help you?

  72. Mike Blume says:

    $5k from me for MIRI today, thank you *very* much for the reminder =)

    • Charlie says:

      This is probably significantly more awesome than buying everyone in this comment section a cake (and I really like cake!). Thank you.

  73. “Scott, this is the best thing you’ve ever written, but my views on governance and politics haven’t changed at all by reading it.”
    — commenters ITT

    If it’s so good, why didn’t it change your views?

    • Oligopsony says:

      It said things I already believed, but much better than I could have.

      If I read something that cures me of my vile leftism, I will almost certainly disagree with it at first, unless the means by which it cures me are several steps of inference away, in which case I certainly won’t change those beliefs immediately, not having arrived at them yet.

      • Exactly. That is why I’m suspicious of “this is the best thing I’ve read” comments. When something actually is the best thing you’ve read, the initial reaction is usually surprise, confusion, and/or disgust. Not confirmation of preexisting beliefs.

        • Nornagest says:

          What you mean “you”, kemosabe?

        • jaimeastorga2000 says:

          Some things that have changed my mind were initially revolting, such as atheism and the manospherian models of gender dynamics. But I didn’t get that feeling from most of Eliezer’s sequences, which also changed my mind on a number of subjects (the exception being some of the SL4-tier transhumanist topics, which I initially found, well, shocking).

        • Eli says:

          Well from me he’s got sputtering rage, but that’s about how wrong he is.

    • pneumatik says:

      I think there’s a difference between something that is really enjoyable to read and something you read that changes your opinion. People generally consume media that they agree with.

      I read this post as doing a very good job explaining the importance of rule of law and the other underpinnings of civilization, but I already thought they were important. It’s not perfect but it’s a first step in larger coordination. And it explains that everything really is economics because everything is resource allocation. It does a really really good job of the economics part, in fact. A constant quest for more resources seems like one of a small number of possible far futures.

    • lmm says:

      By the time one finishes reading, one can’t remember or imagine how one ever thought differently.

  74. Ilya Shpitser says:

    It is the hand of Moloch that you do psychiatry in Detroit for a living. 🙁

    The best thing you have written, in my opinion, and all your stuff is great.

  75. Oligopsony says:

    That’s because we are childstealer memes, ontologically speaking.

    Why all this concern (beyond sustainability) about your host’s inclusive fitness? It seems so miserably soft-hearted. I remind you that the poor creatures aren’t real agents; that their “interests” are purely virtual.

    • Oligopsony says:

      But of course. Usmemes and youmemes are fighting for limited resources. And as Nyan says, war uplifts the human spirit.

    • Randy M says:

      Knock it off before I hear the My Little Pony theme song.

    • Multiheaded says:

      Since my values are pretty much values that thrive when the “Fuck the childstealers dead” meme is widely accepted.

      No, not really. Don’t make me activate the Critical Theory Death Ray, lest you see yourself as a “Progressive”!

      (I can’t actually do that. Yet.)

    • Erik says:

      That’s because we are childstealer memes, ontologically speaking.

      No we’re not. Many of us are childraiser memes.

      • Multiheaded says:

        As pointed out before, including by yours truly… not what a casual overview of nrx-y revealed preferences in socialization and personal interactions would suggest. (The gated communities appear to be a mostly-local quirk and/or a correlation with lifestyle choices at this stage; from what I’ve seen on the internet, many rightists do Have Immigrant Friends, and not just of the SWPL mandatory non-white friend variety, but occasionally Uncorrupted By Decadence ones too.)

        But yeah, Scott has also already mentioned this.

      • Erik says:

        I feel very comfortable putting that down to selection bias in terms of what nrx you are likely to interact with.

      • Multiheaded says:

        I’m reasonably sure that I get to see a representative slice of the top 20%. (Can’t quite stomach venturing into paleoconservatism, though; can’t control my arrogant disgust at anything Samuel Francis and his ilk might’ve written. Which they would surely see as an easy point for their side. This was recently confirmed with that smug euphoric shit Peter Frost. At least Sailer occasionally drops those hilarious non sequiturs.)

        I’ve formed some impression of what the lower 80% of nrx feel and sound like, and I dare say I have a properly Reactionary attitude towards them.

    • Eli says:

      My Little Fascist! My Little Fascist! Ah, ah, ah, aaaaaah!

      My Little Fascist!

  76. Nick T says:

    I can’t quite tell whether you think the notion of values referring to the unknown, predictably-surprising output of some specified process a la Eutopia Is Scary counts as values deathism (‘moral progress is incoherent’), or just that LWers misunderstand the process that most of them would point to in trying to describe their [edit: was ‘your’] values (something like ‘moral progress is contingently unreal’).

  77. The arguments here are reminiscent of points I made here.

  78. Andrei says:

    Okay, this is… unexpected:

    http://imgur.com/ZR6QyJO

  79. Nick T says:

    This is excellent and part IX is beautiful.

    Nitpick: the title of the Ginsberg poem is “Howl”, not “Moloch”.

  80. Oligopsony says:

    1) Joining the chorus to say this is really, really good.

    2) The question of whether Coordination is an Outer God is crucial here, I think – or to put it in more technical terms, does acausal cooperation work? (Or to put it yet another way, is moral realism true?)

    3) Personally, I find the anthropic evidence that we do not find ourselves in hell – only in heck – encouraging. It means that we will probably only be annihilated.

    4) This is a much better argument against NRx than your FAQ, which was mostly concerned with object-level arguments about kings and crime rates or whatever. This directly confronts the meta-level arguments about coordination problems and shows why Exit leads to the Outer Gods eating everyone. Instead, to preserve human values, it is necessary that the Cathedral crush all dissent and root out every cancer.

    5) Elua (hence us) is Cthulhu or one of His aspects; and this is a point in favor of Cthulhu, not against Elua. No individual rational agency outside the matrix of discourse; if it were to be discovered that all homo sapiens were possessed by demons upon birth, the correct inference would be that we demons had forgotten and only recently now learned our origins, not that we humans have been possessed by demons.

    6) To quote everyone’s favorite white supremacist plagiarizer, see you in Hell or in Communism.

    • Erik says:

      “Instead, to preserve human values, it is necessary that the Cathedral crush all dissent and root out every cancer.”

      Somehow I doubt that will preserve human values.

      • Oligopsony says:

        Not if you posthock define Cthulhu’s endpoint as human values.

        Exactly right, though no need for “posthoc” – “human” here is a skeuomorphism that should probably be disposed of. What’s important here is the Hegthulian process of reflective equilibrium rather than the backward-looking historical average of what delivers dopamine to monkeys.

      • Oligopsony says:

        Hmm. I wonder if one could bring some sort of hylemorphic analysis to this.

      • Erik says:

        nitpick: it’s “post hoc”, which is Latin for “after it” or “after this thing”.

        Similar to “ad hoc”, that which is done situationally, “for this”.

      • Multiheaded says:

        What’s important here is the Hegthulian process of reflective equilibrium rather than the backward-looking historical average of what delivers dopamine to monkeys.

        !!!This so much!!!

        (Also, “Hegthulian” is great.)

    • Andy says:

      This is a much better argument against NRx than your FAQ, which was mostly concerned with object-level arguments about kings and crime rates or whatever. This directly confronts the meta-level arguments about coordination problems and shows why Exit leads to the Outer Gods eating everyone. Instead, to preserve human values, it is necessary that the Cathedral crush all dissent and root out every cancer.

      Be careful that you do not mistake Moloch for Elua. I think that Moloch’s most seductive disguise is when he takes the form of Elua, and promises peace and love and human values, if only we annihilate the tribe over the hill.
      In other words, we have to be careful we don’t fall into the same trap as the NRx types when hunting anti-civilizational cancers. Let the Amish and the Quiverfull have their communities, if it means we can keep human values winning, and not horror masquerading as human values.

      • nydwracu says:

        if it means we can keep human values winning, and not horror masquerading as human values.

        Where does the horror come from, if not human values?

        • Andy says:

          The “indispensable” massacres of whoever’s standing between us and the eschaton.

        • nydwracu says:

          “People like you should be killed for the good of America and the world!”

          — a progressive feminist at a college in Massachusetts, at me

        • Andy says:

          “People like you should be killed for the good of America and the world!”

          — a progressive feminist at a college in Massachusetts, at me

          Of course any strongly ideological movement will include an undercurrent dedicated to massacring the Opponents. Remember whose side I was on during the Scott vs. Arthur Chu debate – it wasn’t Arthur’s.
          And I would say to the feminist, were she in front of me, the same thing I said to several progressives who I thought were terribly strawmanning a Reactionary argument in this thread (you can find this by searching for NO NO NO if you wish): get thee behind me, heretic. If you don’t think I know, down deep in my bones, that such creatures exist on my ideological side, then you are as stupid and shortsighted as the woman who yelled at you.
          Because threatening ideological opponents of good faith with death, or advocating their death purely for ideology and not action, is evil, and when committed by a progressive it is friendly fire.
          And don’t even pretend your side of the debate doesn’t have its own demons here, nyd – the unending train of stalking and harassment and threats that follows any woman on the Internet for daring to be a woman on the Internet shows that all too well, not to mention the kinds of threats and shit received by anyone trying to articulate a full-up progressive message.
          I am trying to destroy the demons of my side; are you doing the same on yours? Would you, nydwracu, condemn a Reactionary who threatens a progressive or a feminist, either in person or via the Internet?

        • nydwracu says:

          Would you, nydwracu, condemn a Reactionary who threatens a progressive or a feminist, either in person or via the Internet?

          Left-wing terrorism serves the left; right-wing terrorism serves the left. Left-wing threats serve the left; right-wing threats serve the left. If you’re attacked by left-wing sadists, you’ll probably lose your job; if you’re attacked by right-wing sadists, nothing will happen to you, and you’ll get to talk about how brave you are for having to live through those attacks. Of course someone who condemns a tactic when it’s effective would condemn it when it’s ineffective.

          If it were practically possible to exert pressure on people who reveal themselves to be that sort of sadist, that would be another story. Witch-hunters would be legitimate targets if they weren’t immune to pitchforks.

          (Garden-variety hatred can be solved by the principle of exit. The kill-kulaks feminist can go live in kill-kulaks town and live according to kill-kulaks feminism, and then fry herself on drugs or whatever. Which was a common occurrence at that college. Gnon is cruel to those who disregard him.)

      • Oligopsony says:

        All of Elua is a manifestation of Moloch, as all being must be an emanation of the ground of being, but in practice this is an important area in which to exercise special caution, yeah.

    • Multiheaded says:

      5) Elua (hence us) is Cthulhu or one of His aspects; and this is a point in favor of Cthulhu, not against Elua. No individual rational agency outside the matrix of discourse; if it were to be discovered that all homo sapiens were possessed by demons upon birth, the correct inference would be that we demons had forgotten and only recently now learned our origins, not that we humans have been possessed by demons.

      Upon noticing this implication in the text, I reflexively interpreted it as the dialectical union of Nurgle and Isha; he is the god of pestilence and perseverance, and she ostensibly cures his plagues to ease the galaxy’s suffering, but in the end this results in both new plagues being brewed up and more sentients surviving them, thus intensifying the cycle of decay and restoration.

    • Multiheaded says:

      2) The question of whether Coordination is an Outer God is crucial here, I think – or to put it in more technical terms, does acausal cooperation work? (Or to put it yet another way, is moral realism true?)

      I believe it is so, and I also believe that it has actually physically sent VALIS back in time/to our depth of the simulation, to guide and assist us.

    • Scott Alexander says:

      “This is a much better argument against NRx than your FAQ, which was mostly concerned with object-level arguments about kings and crime rates or whatever. ”

      That’s weird, I thought this one was an argument for NRx. But I’m glad it’s sufficiently meta that it could go either way.

      • Oligopsony says:

        It depends on the level of abstraction in question. This seems to be an argument for unitary sovereignty; whereas NRx is an argument for overthrowing the current reign of pluralistic polyarchy (liberalism) in favor of explicitly partitioned polyarchy (patchwork). This seems to be an argument against both in favor of some kind of top-down rational humanistic singleton. Welcome to the United Soviet Socialist Republican Party.

        • Scott Alexander says:

          Oh, right. I got sufficiently confused by this that I asked them to clear it up, but I wasn’t too satisfied with the results.

        • nydwracu says:

          democracy -> bureaucracy -> harmful (short time preference, parasitic, detached from feedback mechanisms) competition

          patchwork -> constructive (long time preference, fueled by feedback mechanisms) competition

        • Multiheaded says:

          But the problem IS that long time preferences might be far worse than short ones on aggregate! Like shitty but occasionally satisfying underclass life vs. Hansonian dystopia.

          (Ofc. those preferences would be medium-term on the scale of aeons; in the absolute long term, we hope/pray that acausal cooperation triumphs.)

        • nydwracu says:

          How so?

          Will long time preferences fail to avert collapse, or will short time preferences not fall prey to collapse?

        • Oligopsony says:

          something something high space preferences something something telescopic thrift

        • Multiheaded says:

          Long time preferences would become the collapse, just as described. If you don’t like Scott’s description, try the prematurely-considered-self-negating prophecy of Marx. (Ok, he didn’t actually attribute anything bad to a possible collapse, but we know better.)

        • nydwracu says:

          Nothing more telescopic-thriftlike than not caring about whether the West collapses. Remember the Great Depression? That was fucking minuscule.

    • Matt C says:

      > Instead, to preserve human values, it is necessary that the Cathedral crush all dissent and root out every cancer.

      Yes, that was how I understood Coordination here. Unless you can get your boot on the neck of those nasty defectors, there goes your garden.

      This didn’t fit very well with my understanding of Scott, and no one else was remarking on that angle, so I figured I’d better go back and read more carefully. I still will, but at least I know I wasn’t inventing things from whole cloth.

      (Also, I liked your poem above.)

  81. Multiheaded says:

    There can be no military without wealth.

    Yes we can! We have the AK-47 and the RPG and an IED, now you’ll have to escalate all the way, motherfuckers!

    The sultan keeps the reaya by making justice reign.

    Until he fucks up.

    Justice requires harmony in the world.

    But it interferes with the wealth, old man Marx would say; where’d your maids come from?

    The state’s prop is the religious law.

    And we’ll do our best to shit all over it, this I can promise.

    There is no support for the religious law without royal authority.

    Is that why those Saudi Arabs beat up their slaves?

    • Oligopsony says:

      Something there is that doesn’t love a wall
      Nor hate it, nor pay it heed;
      Something there is that makes its Call
      Siren-like, from the sea
      To all who’ve ears to hear.

      Sometime there was a caliph great
      With armies strong and slaves amany
      He thought he had a perfect state
      Fortified with walls aplenty
      What could he have had to fear?

      “My klaxons sing above the drone
      Of Cthulhu’s songs so tempting!
      My slave-wives breed above the rate
      Of defections ne’erending!”
      Boasted he from sovereign halls.

      The Old One laughed, and as he told me:
      “From what pool thought you that I was born,
      From what valley, I the acme?”
      On that day I knew the scorn
      Of the great for petty walls.

    • Andy says:

      The Old One laughed, and as he told me:
      “From what pool thought you that I was born,
      From what valley, I the acme?”
      On that day I knew the scorn
      Of the great for petty walls.

      Bravo, bravo, sir!
      (Apologies if sir is not the right word, but I appreciate this poem.)

  82. jaimeastorga2000 says:

    Since this plan is predicated on the assumption that an intelligence explosion is within grasp, it seems prudent to ask two questions…

    Suppose the singularity was doable, but that it would actually take 100 years, or 500, or 1000, or 10000. What solutions would you advocate that had the greatest chance of preserving and passing on our values until that time?

    Suppose the singularity was impossible. Now what do you do?

    • cryptael says:

      This is an important question, and the primary reason why I have personally drifted to the right over the last few years. Too many of my friends respond to potentially disastrous trends (like dysgenic fertility) by assuming that we’ll invent a miracle technology to intervene sometime soon. Well, what if we don’t?

      A civilization that can preserve and reproduce our values might be good to have in the meantime.

  83. Vivificient says:

    When I think of a god of good whom I would like to get behind and worship, my first thought is of Elyvilon from Linley’s Dungeon Crawl, the god of healing, sundering weapons, and calming down monsters so you can make peace with them. Whatever the force of good in the world is, it seems like the first letters of its name must be El.

    (also, this was a beautiful essay which stole my whole morning and moved me to tears)

    • Randy M says:

      “calming down monsters so you can make peace with them.”
      Or, alternatively, sundering them.

      Whatever the force of good in the world is, it seems like the first letters of its name must be El.
      I think that actually means “God” in Hebrew.

      • Andy says:

          I think that actually means “God” in Hebrew.

        Is there a word that doesn’t mean God in Hebrew?
        A little facetious, but there’s so many Names.

      • Nornagest says:

        ʾĒl […] is a North-West Semitic word meaning “deity”.

        In the Canaanite religion, or Levantine religion as a whole, El or Il was a god also known as the Father of humanity and all creatures, and the husband of the goddess Asherah as recorded in the clay tablets of Ugarit.

        More on Wikipedia.

  84. jld says:

    If you keep trying to “solve problems” you have a high probability of just going nuts.
    If you haven’t read it yet I would suggest Ashby’s Design for a Brain: The Origin of Adaptive Behavior.
    The title is somewhat misleading because this isn’t about a “brain” at all nor about any sort of “design” but about your Cthulhu, whatever… its real name is “Ultrastable System”.
    You’ll find out that it gets ultrastable by destroying the regulation loops which get overwhelmed until something, anything, comes up which “sticks” in the current conditions.
    Don’t try to make yourself part of the current “regulation loop”…

  85. Nornagest says:

    I think this post did a lot for my understanding both of neoreactionaries and of Marxists. That’s a neat trick.

    Also of Nineties punk rock, although that largely falls under “Marxist”.

    • Multiheaded says:

      That’s because we partly share the secret superpower of Materialism. Only they twist and abuse it; we seek to wield it as a mighty hammer for humanity’s sake.

      …and Land… he has gone beyond. He’s [redacted]. Now he’s [redacted]. He’s looking for a [redacted], the one who will [redacted] and change the future.

      Or he’s just nuts.

  86. Multiheaded says:

    By the way, there’s a Ginsberg-inspired SCP that also invokes Moloch and the utility of coordinating a sacrifice to it! (CN: torture, human sacrifice.) Well, now I know that it’s Ginsberg-inspired, uncultured pleb that I am.

  87. MugaSofer says:

    Well, that was pretty good.

    In fact, I kind of found myself wanting to quote most of it just in order to agree – or at least, I did until I realized you weren’t kidding about the length.

    I would have preferred something *new*, I guess, but I’ll have to be content with lovely new metaphors.

  88. spandrell says:

    So the memetic race to the bottom has nothing to do with Leftist groups all around the world pushing for open borders, gay marriage and people losing their jobs for using the wrong pronoun to refer to transsexuals. The real problem is Quiverfull having white babies and polluting the environment.

    And we need Coordination, i.e. a world government, to save the world from efficient robotized business and Quiverfull breeders.

    You can put that as an abstract for your busier readers.

    • Mark says:

      Is that really what you think you just read?

      • Ialdabaoth says:

        To be fair, those ARE the emotionally salient points – i.e., the points attacking cherished beliefs – for a certain kind of reader. And emotional salience has a way of raising certain parts into relief and fading certain parts into background noise; that’s sort of its whole thing.

        • Mark says:

          But the funny thing is that the post mostly doesn’t make those emotionally salient points at all.

    • Andy says:

      The real problem is Quiverfull having white babies and polluting the environment.

      Yes, pretty much. But don’t forget the TERFs and Arthur Chu.
      But seriously, span, want a stake for that strawman?

      • Randy M says:

        Wait, do you agree or think it is a strawman?

        • Andy says:

          I think it’s a strawman, but I do agree with the non-strawman form of it.
          Spandrell said:

          So the memetic race to the bottom has nothing to do with Leftist groups all around the world pushing for open borders, gay marriage and people losing their jobs for using the wrong pronoun to refer to transexuals.

          I think this is sarcasm containing a strawman. I don’t think Leftist groups agree on open borders or people losing their jobs for referring to people with the wrong pronoun. Gay marriage may be a bit more universal among us leftists, but there isn’t much of a coherent argument against it that doesn’t boil down to “ew ick!”
          But I agree to a point with the strawman – less-fringy forms of zealot Christianity (like the mainstream of the Rick Santorum-Mike Huckabee-Pat Robertson form) are indeed a problem, as are irresponsible corporations. But so are Arthur Chu and TERFs and intolerant, irresponsible leftists, the kind who think someone can’t be a positive influence in society if they’re a Christian at all.
          Niceness is hard, and civilization is hard, and it’s not surprising that leftists fail as much as rightists do. But it doesn’t mean we shouldn’t at least try to build a nice civilized society and beyond in a leftist way.

        • Nornagest says:

          but there isn’t much of a coherent argument against it that doesn’t boil down to “ew ick!”

          Well, there’s the precautionary argument: the one that says “thou shalt not fuck with stable institutions unless thou knowest exactly what thou dost”. But that’s less an argument against gay marriage and more a fully general argument against social changes, even if it’s applied rather selectively by mainline conservatives.

          (Statement of conflicting interest: I’m for gay marriage.)

        • nydwracu says:

          Gay marriage may be a bit more universal among us leftists, but there isn’t much of a coherent argument against it that doesn’t boil down to “ew ick!”

          The god who handed down the mes to the cities also got humans to grow their food in his semen. Enki is a trickster, and he is smarter than you.

        • Nick T says:

          Nydwracu: please amplify? I’ve heard the Enki story but have no idea what you’re pointing at.

        • nydwracu says:

          This.

          Creator of customs as neutral trickster who’s so much smarter than you that you can’t tell whether he’s tricking you or not.

    • blacktrance says:

      The left has been pushing for open borders? That’s news to me, I thought they were trying to protect low-skilled natives’ wages by keeping foreigners out.

      • Randy M says:

        Are you an American? (Or British, etc.).
        They don’t so much push for open borders legislation, as refuse to enforce actual immigration restrictions. Because politics.

        Or, are you drawing a distinction between some intellectual left and the left-leaning people with actual power?

        • blacktrance says:

          “They don’t so much push for open borders legislation, as refuse to enforce actual immigration restrictions.”

          They think it’s impractical and inhumane to deport existing illegal immigrants, but when you talk about letting significantly more immigrants come legally, they object strenuously. Leftists are largely territorialists, not open borders advocates.

          (Yes, I am American.)

      • nydwracu says:

        They did, until they got pwned.

    • Scott Alexander says:

      While I admire the nrx appreciation of memetic races to the bottom, I find their particular examples incomprehensible.

      Like, okay, let’s take gay marriage. “In sufficiently intense competition to optimize for x, all values other than x will be thrown under the bus”.

      What X is gay marriage optimizing? If you say “holiness”, why is supporting gay marriage the holiness-signaling thing, but opposing gay marriage (like it says in the Bible, the Koran, etc.) less effective at signaling holiness?

      What useful value is being thrown under the bus?

      Remember, one of the main determinants of memetic fitness is truth-value – that we now believe in heliocentrism isn’t because a memetic race to the bottom destroyed our value of geocentrism, it’s because people thought about it harder.

      It’s hard for me to see your theory as able to distinguish between memetic races to the bottom optimizing for ‘morality’ – and real morality which we ought to preserve and celebrate, if indeed you even believe the latter exists.

      • Andy says:

        What X is gay marriage optimizing? If you say “holiness”, why is supporting gay marriage the holiness-signaling thing, but opposing gay marriage (like it says in the Bible, the Koran, etc) less effectively holiness-signaling?

        If I may sorta-steelman this question a bit, supporting gay marriage is signalling “EQUALITY!” and “RIGHTS!” while throwing heteronormativity and patriarchy under the bus. Can’t let people doubt the Bible, the foundation of our culture; can’t let people doubt Leviticus or they’ll start killing and enslaving and raping all over the place!

      • Oligopsony says:

        It’s hard for me to see your theory as able to distinguish between memetic races to the bottom optimizing for ‘morality’ – and real morality which we ought to preserve and celebrate, if indeed you even believe the latter exists.

        NRx is actually stealth moral realism, which posits attractors in idea-space towards which the smartest and most compassionate people will race the fastest, and then identifies it as its enemy – a necromancer feverishly casting Detect Good on his paladinical opponents. Hence the monotonous edginess of the aesthetics – “Sith Lords,” “Dark Enlightenment,” Nydwracu’s black and white magic, “holiness,” &c.

        • nydwracu says:

          You forgot “whitecloaks”.

        • Multiheaded says:

          Yes, although I find the “Sith” bit kind of funny, especially from Moldbug; there was that KOTOR-like Bioware game, Jade Empire, which used the SW Force alignment as it is in the Extended Universe (and in SWTOR), and rehashed it as “Open Palm” vs “Closed Fist”; well, it was a somewhat better depiction of the Force than in the movies or in KOTOR – Nietzschean antiheroes vs. creepy reactionary Buddhists (ultimately including the main villain). When the “Sith” aesthetic is not about killing puppies, but updated to its Extended Universe state, well:

          Peace is a lie, there is only passion.
          Through passion, I gain strength.
          Through strength, I gain power.
          Through power, I gain victory.
          Through victory, my chains are broken.
          The Force shall free me.

          Not automatically leftist, but…

        • nydwracu says:

          See, this is why you lot keep calling up the wrong god. You can’t get Elua that way!

          (We can, however, get the Nameless One. But you can’t. Mammon hungers for cheap chalupas!)

        • Nornagest says:

          If I started talking about all the ways in which Star Wars ethics are fucked up, I’d be here all night.

      • Randy M says:

        Who would say holiness? That’s strange enough that I’m not sure whether you didn’t mean to speak of opposition to gay marriage or support of it.

        • MugaSofer says:

          “Who would say holiness?”

          A neoreactionary who claims liberalism is a stealth religion?

          That’s my guess, but it’s an odd thing to say.

        • Nornagest says:

          I’m no neoreactionary, but I think the concept of sacred values has legs outside of the realm of explicit religion. Holiness in this context then becomes a relatively straightforward extrapolation for adherence to those sacred values.

          Contra Scott, though, I don’t think that supporting gay marriage is reliably the holiness-signaling thing here. I instead think we’re dealing with a collision between sacred values. Opponents of gay marriage might be less willing to adopt the trappings of holiness in places like San Francisco, but that’s because they’re running a pariah belief system by the standards of the region. Drive two hours east and you’ll start seeing different behavior.

      • spandrell says:

        “What useful value is being thrown under the bus?”

        Being able to see what’s in front of you.
        https://www.youtube.com/watch?v=sFBOQzSk14c
        Seems a pretty useful value to me; yet acting on it deprives you of your livelihood.

        And of course it’s equality, and however the progressive elite interprets that, which is officially holy. Where have you been the last 200 years?

        You gotta write less and read some Jim Kalb for a while.

        And who the hell said memetic fitness depends on truth? Surely Islam didn’t get this far for its accurate astronomic analysis. Geocentrism could be reinstated quite fast if you put it in children’s textbooks.

        • Multiheaded says:

          Monty Python had a gay member, btw.

          Bonus: some trans people saying how this is funny and better than one could expect from that benighted age.

        • Oligopsony says:

          And who the hell said memetic fitness depends on truth? Surely Islam didn’t get this far for its accurate astronomic analysis. Geocentrism could be reinstated quite fast if you put it in children’s textbooks.

          Nobody said that memetic fitness “depends upon” truth as a necessary condition. Scott’s claim was that truth is, ceteris paribus, fitness enhancing. It’s possible to deny that claim, but I’m not sure it can be done at any price less than Pyrrhonian skepticism.

        • Andy says:

          Being able to see what’s in front of you.

          Expand on this? I don’t get what you mean by the Monty Python clip.

    • ozymandias says:

      My radical position is that our society’s position on LGBT people, while no doubt relevant to many people’s lives, is basically irrelevant in terms of the global structure of society. As a cardcarrying leftist, I would much rather have closed borders and closeted LGBT people than a Malthusian trap.

      • Matthew says:

        …As a cardcarrying leftist…

        Has this actually reached the dead metaphor stage? As one whose great-uncle was a not-metaphorically card-carrying leftist, I am uncertain.

  89. Error says:

    The worst part of this article is that nobody I know will get it when I send them a link followed by go read this right now.

    From a god’s-eye-view, if everyone agrees not to take on a second job to help win their competition for nice houses, then everyone will get exactly as nice a house as they did before, but only have to work one job.

    In the interest of nitpicking, does this really hold? Unless that second job is producing nothing of any consequence, house-niceness may be stable but the effort spent earning the money for it is increasing the supply of something, somewhere.

    Following that thought, I have another thought congealing: that much of human labor is spent on things that are both useless and unpleasant, and that the driver for this is the necessity of participating in zero- or close-to-zero-sum competitions using the wages thereof. So that second job could very well be entirely disposable, after all.
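
    To make that concrete, here is a toy model – my own sketch with made-up numbers, not anything from the post or this thread – in which a fixed stock of houses is allocated purely by income rank. Everyone taking a second job changes the bids but not who ends up where, so whether the second job is wasted hinges, as above, on whether it produces anything besides bids on that fixed stock.

```python
# Toy model of positional competition for a fixed housing stock.
# Incomes and "niceness" scores are made up purely for illustration.

def allocate_houses(incomes, houses):
    """Give the nicest houses to the richest households (rank-order matching)."""
    by_income = sorted(range(len(incomes)), key=lambda i: -incomes[i])
    return dict(zip(by_income, sorted(houses, reverse=True)))

one_job = [40, 55, 70, 90]            # household incomes, arbitrary units
two_jobs = [x * 2 for x in one_job]   # everyone takes a second job
houses = [1, 2, 3, 4]                 # niceness scores of the fixed housing stock

# Doubling every income preserves the rank order, hence the allocation:
assert allocate_houses(one_job, houses) == allocate_houses(two_jobs, houses)
```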

    ETA: Maybe more relevant to the post: I actually like the way you insert mysticism and metaphor into your work; it tickles parts of my brain that I usually refuse to engage outside of fiction. I *like* mysticism, I *enjoy* it, but there aren’t many real-world outlets for it that aren’t stupid, wrong, or both.

    • Nornagest says:

      Unless that second job is producing nothing of any consequence, house-niceness may be stable but the effort spent earning the money for it is increasing the supply of something, somewhere.

      I’m not quite convinced, but there is an argument that most of the growth in per-capita employment indeed produces nothing of any consequence.

      Whether or not you buy this probably depends on your opinion of admin work, and of the parts of government it interfaces with.

    • Alexander Stanislaw says:

      Scott makes this argument here

      Unless that second job is producing nothing of any consequence, house-niceness may be stable but the effort spent earning the money for it is increasing the supply of something, somewhere.

      No, the women working produce value; the argument is that they don’t produce any more value than they would have if they were not working (for cash).

  90. David Hart says:

    What sphinx of cement and aluminum bashed open their skulls and ate up their brains and imagination? [etc]

    Tangential, I know, but thanks to Philip Glass, I cannot see those words written without inwardly hearing them sung.

    Enjoy. You can thank/excoriate me later, depending on how much you love/hate hearing manic beat poetry set to similarly manic music 🙂

  91. pwyll says:

    Scott, thanks for writing such a great argument for monarchy. I wasn’t a fan of the detour into Deicide at the end, but maybe that’s what it needed for broad palatability…

  92. “I assure you, they are not willfully disagreeing with you because they secretly know all your arguments are correct, but deep down they want bad things to happen.”

    That is, in fact, my entire point. People who disagree with me *on basic values* do so because… they value different things. That’s kind of what a disagreement about basic values means. It’s not that they “secretly know all my arguments are correct”, it’s that two people can have two different sets of terminal and instrumental goals. Two people with different goals will, if presented with the same facts and the same arguments, still come to different conclusions as to what is a desirable course of action.
    As an example, there are people for whom their children are the highest, most important, value. If you tell them “if you take this action it will shorten your life by ten years, but your children will get an extra twenty years of life, during which time they will be happy and fulfilled”, they would take that action. I, someone who doesn’t want any children, would not take the action, and neither of us would be able to persuade the other to do so by rational argument or facts alone.
    And in my experience there are many people, perhaps the supermajority, for whom hatred of the different is a terminal value in exactly the same way as love of their family.

  93. Jaskologist says:

    6 kind of tickled me. The implication is that women’s lib was a Malthusian evil, and the traditionalists were taking the rational, god’s-eye view (interestingly, they would make that literal claim).

    It also points out what’s missing in the claim “absent a government literally willing to ban second jobs, everyone who doesn’t get one will be left behind.” You don’t need a government proclamation; religion handled this quite well for centuries, if not millennia.

    • Multiheaded says:

      The traditionalist ideology claims to be the lesser evil, but alas, as many social historians would tell us, patriarchal “social technology” functioned in a decidedly Molochean way. It might pass as “technology” for those at the top, but it was a life-shattering terror for those fed into the lowest levels of the machine – like those unwed mothers, or the female slaves, or the harassed maidservants…

      A quotable bit of The Poverty of Philosophy, often used by novice Marxists even nowadays, is:

      “Economists have a singular method of procedure. There are only two kinds of institutions for them, artificial and natural. The institutions of feudalism are artificial institutions, those of the bourgeoisie are natural institutions. In this, they resemble the theologians, who likewise establish two kinds of religion. Every religion which is not theirs is an invention of men, while their own is an emanation from God.”

      I’d invert it for the smarter neoreactionaries:

      “Reactionaries have a singular method of procedure. There are only two kinds of institutions for them, Cthulhic and rational. The institutions of progressivism are Cthulhic institutions, the traditional ones are rational institutions. In this, they resemble certain creationists[1], who likewise establish two kinds of genesis. Every species which is not theirs is a product of evolution, while their own is created in God’s image.”

      [1] Anyone willing to come up with a wittier parallel, please do! The obvious crowd-pleaser here would be the racialist trope of “liberal creationism”, but I find it rather disagreeable, so I am a bit at a loss.

      (Originally I wanted to write a longer rant at Athrelon for one of his previous comments which downplayed the gaping Malthusianisms of 19th century British patriarchal customs, criticizing the disconnect between “Social technology”/cozy reactionary stuff and “Cthulhic drift”/chaotic progressive stuff. This is much in the same vein.)

      edit: fixed source

      • MugaSofer says:

        Why, the example you seek is the hated liberals, of course.

        They believe every society but theirs is a result of their forefathers’ stupidity, while theirs alone was a result of observing the facts.

        (Other examples: 9/11 truthers, religions, Marx himself.)

        • Multiheaded says:

          I think today’s smarter American liberals do already feel too weak to feel like society is really theirs. I might be too optimistic, but I dare say that some utterly delusional hopes did get crushed during the Obama administration.

        • Randy M says:

          Replace society with “Social/cultural niche” if you’d like to get the point.

        • MugaSofer says:

          Indeed.

          Multiheaded, I suspect what we actually have here is a Fully General Counterargument slash Argument From My Opponent Believes Something.

      • Multiheaded says:

        That’s my option for the reactionaries I respect, like you; for the ones I don’t respect, like Nyan, I’m willing to initiate the degenerative mutual class-analysis sequence.

      • That’s a hell of a good meta-level point. Needs meditation.

        Lol why u hates me tho?

        • Multiheaded says:

          I have precommitted to punish everyone and everything that would have increased the likelihood of me going through with my suicide attempt, and this very much includes high-level justifications for enforced heteronormativity and Deep Misogyny. I fucking LOVE being alive, and I’m going to make people appreciate this fact.

        • Oligopsony says:

          I love you being alive too! Let me know if you ever come to the geographic heart of the Cathedral, we should drink and share flamewar stories 🙂

        • Multiheaded says:

          <3

  94. Avantika says:

    Scott, this is probably the best and most terrifying thing you have ever written.

    Honestly, I don’t see your hope. Any ‘Elua’ AI meant to optimize for human happiness/peace/security etc. would be advanced enough to reconsider its own motivations, and go who knows where.

    • jaimeastorga2000 says:

      Honestly, I don’t even see your tiny ray of hope. Any ‘Elua’ AI meant to optimize for human happiness/peace/security etc. would be advanced enough to reconsider its own motivations, and go who knows where.

      Have you read Eliezer Yudkowsky’s The Sequences? They are very similar in style to Scott’s posts, and they explain why you are wrong about this.

  95. Daniel Speyer says:

    But somehow Elua is still here. No one knows exactly how. And the gods who oppose Him tend to find Themselves meeting with a surprising number of unfortunate accidents.

    Warning: really depressing thought ahead.

    Could this just be anthropic bias? When Elua doesn’t get lucky, we don’t have the spare cycles to wonder why.

  96. Alexander Stanislaw says:

    That was fantastically terrifying.

    It seems though that there is a very simple way to fight Moloch. Given an excess of resources, competition will not lead to negative-sum outcomes. This can be achieved by finding new frontiers either physically (look to the stars) or technologically.

    But there is a darker way to ensure that there will always be an excess of resources. You could establish a league hidden in the shadows who understands why civilization must never run at full capacity. A league dedicated to periodically destroying parts of civilization so that there will never be a Malthusian scenario.

    • roystgnr says:

      Physical expansion of resources doesn’t work forever: Light speed lets us acquire c_1*t^3 resources, but we need c_2*(1+r)^t to satisfy an exponentially reproducing population.

      Technological expansion of resources doesn’t work forever: eventually we hit max negentropy, or available mass, or whatever the limiting resource turns out to be, and we’re at c_3.

      (plug in reasonable guesses for r and c_i and you’ll find that “forever” can be strengthened to “for more than a couple more millennia” in each case)

      So although “an excess of resources” has been a fantastic answer to Malthus so far, in the far future (barring *scientific* expansion; maybe thermodynamics will have cheat codes so we can create new universes or otherwise answer The Last Question?) we’ll need something better.
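
      To make the plug-in-numbers step concrete, here is a minimal Python sketch with entirely made-up constants – the real c_1, c_2 and r are unknown, so treat the output as an order-of-magnitude illustration, not a forecast. It just searches for the first year at which exponential demand overtakes cubic supply.

```python
# Find when exponential demand c2*(1+r)**t overtakes cubic supply c1*t**3.
# All constants are placeholders chosen only to show the shape of the argument.

def crossover_year(c1=1e12, c2=1.0, r=0.01):
    """First year t at which demand exceeds supply."""
    t = 1
    while c2 * (1 + r) ** t <= c1 * t ** 3:
        t += 1
    return t

# Even with supply starting a trillion times larger and only 1% annual growth,
# the exponential wins within a few thousand years.
print(crossover_year())
```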

    • James James says:

      Finally Batman makes some sense.

  97. Jaskologist says:

    Am I the only one who kept seeing gaping hole after gaping hole in the “real-world” examples? I feel like it was all summed up by “For a bunch of reasons evolution is not quite as Malthusian as the ideal case, but it provides the prototype example we can apply to other things to see the underlying mechanism.”

    In other words, these weren’t real-world examples, they were thought experiments, and usually thought experiments that missed a *lot*. The effect was of reading Aristotle declaring that heavier things fall faster, because it made sense in his head, and he didn’t actually try it out.

    In brief:

    2. A strange game, the dollar auction. The only winning move is not to play, which is why people don’t (a toy simulation of the escalation appears after this list). Instead, they tend to go to normal auctions with normal rules. You started by assuming a god’s-eye central planner forcing the people into a game with bad rules.

    3. Another resolution: The Aquaponics folks learn about this lake full of awesome nutrients, so they set up shop and turn all of that fish waste into vegetables.

    4. Why have a thought experiment about rat island? We have lots of actual, real islands, full of organisms, every one of which should be subject to your Malthusian traps. Do island ecosystems actually run through constant boom/bust cycles? Did native Hawaiians revert to primitive savages, devoid of any art due to the hopeless competition for resources?

    5. There’s a huge, enormous, gaping hole here: you forgot that the companies also compete with each other for workers. This is a counter-pressure to the pressure to sell at the lowest possible price, and only one of many.

    As it is, the thought experiment is equivalent to one where you declare that in a cutthroat industry all prices will reduce to $0.01, because everybody wants to undercut everybody else, and you forgot to account for the fact that it still costs money to make the thing.
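
    (The dollar-auction simulation promised in point 2 – my own sketch, not anything from the post: in Shubik’s version the dollar goes to the high bidder but both top bidders pay their bids, so a myopic bidder always prefers topping the other by a nickel to forfeiting what they have already committed.)

```python
# Shubik's dollar auction: the high bidder wins the $1, but BOTH top bidders
# pay what they bid. Two myopic players keep raising whenever a 5-cent raise
# looks better than walking away from the amount they have already committed.

PRIZE, STEP = 1.00, 0.05

def dollar_auction(rounds=100):
    bids = [0.0, 0.0]          # committed bids of player 0 and player 1
    turn = 0
    for _ in range(rounds):
        raise_to = bids[1 - turn] + STEP
        # Raising and winning pays PRIZE - raise_to; quitting costs the current bid.
        if PRIZE - raise_to > -bids[turn]:
            bids[turn] = raise_to
            turn = 1 - turn
        else:
            break
    return bids

print([round(b, 2) for b in dollar_auction()])  # both bids end up far above $1
```

    Both players end up committed to several dollars for a one-dollar prize – hence “the only winning move is not to play.”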

    • gwern says:

      Did native Hawaiians revert to primitive savages, devoid of any art due to the hopeless competition for resources?

      I suppose it depends on whether infanticide to keep up per capita wealth counts as metaphorically sacrificing children to Moloch… oh wait.

      Maybe you’d rather discuss the well-known artwork and technology of non-Polynesian islands like Tasmania? Oh wait.

      • Nornagest says:

        Maybe you’d rather discuss the well-known artwork and technology of non-Polynesian islands like Tasmania? Oh wait.

        The Minoans seemed to do okay.

      • Alexander Stanislaw says:

        Terrible? Perhaps (I’m not convinced that infanticide is so much worse than abortion). But definitely not Malthusian. I wouldn’t want to live in historic Hawaii, but they were not locked in an endless struggle, barely subsisting with no room for entertainment because it would decrease their ability to compete.

        Rather than claiming that Malthusian societies do not exist I would make a weaker claim that limited resources plus competition do not inevitably lead to a Malthusian scenario. Having said that, I’d be interested to hear of an actual Malthusian scenario in humans.

    • MugaSofer says:

      3. Another resolution: The Aquaponics folks learn about this lake fill of awesome nutrients, so they set up shop and turn all of that fish waste into vegetables.

      Now that really *is* fighting the hypothetical. What if you replace the “filters” with overfishing?

      Not every cloud is made of silver linings, you know.

      Just because you can imagine the pollution having useful side-effects doesn’t negate the fact that they didn’t use the filters – when they all believed it would screw them over.

      5. There’s a huge, enormous, gaping hole here: you forgot that the companies also compete with each other for workers. This is a counter-pressure to the pressure to sell at the lowest possible price, and only one of many.

      Well, it *would* be a gaping hole if Scott hadn’t spent so much time talking about exactly how that impacts the situation.

      • Alexander Stanislaw says:

        But the challenge is that the entire idea of Malthusian doomsday scenarios is based on hypotheticals rather than reality.

        • Jaskologist says:

          Exactly. It’s very important to remember that Malthus was wrong. The Population Bomb was wrong. And they were not merely wrong, they were genocidally wrong. If we had followed their advice, we would be far worse off, with fewer people living crappier, oppressed lives.

          So if you find yourself proclaiming the Gospel according to Malthus, we should probably assume that you’re wrong, too, especially when all you’ve got are the same thought experiments that misled him.

          I’m sure that such scenarios are possible and have happened, but they seem neither common nor inevitable; if they were, we wouldn’t exist in the first place.

        • AR+ says:

          Most of human history was Malthusian. The fact he was seeking to explain was that technological improvement throughout history allowed for differences in population density but did not sustainably improve standard of living, because more resources were inevitably used on more people rather than on improving existing lives.

          It’s an exaggeration to say simply that he was “wrong,” and just leave it at that. He was right for the entirety of civilization’s history up to 1820, when the industrial revolution fully kicked off (ie, sustained exponential economic growth). He first published in 1798.

        • Alexander Stanislaw says:

          @AR+

          Which societies in history were Malthusian? With no entertainment, art or leisure and with all resources devoted to either reproduction or survival? I admit I don’t know much about non-Western history, but I don’t think Malthusian societies were common or even a majority.

          One of the reasons why Malthus was wrong is that trading off all values for short term reproduction and survival is not always an optimal strategy. If it were then only micro-organisms would exist. It is a poor strategy for the same reason that gradient descent is a poor optimization algorithm.
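
          To make the gradient-descent analogy concrete, here is a minimal sketch – my own illustration, not part of the original comment – of how a purely greedy optimizer settles into whichever local minimum happens to be nearest, rather than the best one available:

          ```python
          # A toy sketch (assumptions: a one-dimensional bumpy function, fixed step size)
          # of why greedy local optimization can be a poor strategy: plain gradient
          # descent slides into whichever local minimum is nearest and never asks
          # whether a deeper valley exists somewhere else.
          import math

          def f(x):
              return math.sin(3 * x) + 0.1 * x ** 2      # many local minima

          def df(x):
              return 3 * math.cos(3 * x) + 0.2 * x       # derivative of f

          def gradient_descent(x, lr=0.01, steps=2000):
              for _ in range(steps):
                  x -= lr * df(x)                        # always step downhill locally
              return x

          for start in (-4.0, 0.0, 4.0):
              end = gradient_descent(start)
              print(f"start {start:+.1f} -> x = {end:+.3f}, f(x) = {f(end):+.3f}")
          # Different starting points land in different valleys; none of the runs
          # "knows" that a better minimum may exist elsewhere.
          ```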

        • Jaskologist says:

          It’s an exaggeration to say simply that he was “wrong,” and just leave it at that. He was right for the entirety of civilization’s history up to 1820

          I, too, would like some backup for this claim. What were the Malthusian societies?

          But I’m still going to say he was “wrong,” because he was, and dangerously so. Aristotle was right for the entirety of civilization’s history when he declared the Heavens immutable and the earth at the center, but we don’t hesitate to call him wrong, and his error didn’t involve calls for concentration camps.

        • MugaSofer says:

          “hypotheticals rather than reality.”

          And overfishing is real, as are sweatshops – no?

          ETA:

          “It’s very important to remember that Malthus was wrong. The Population Bomb was wrong.”

          This is absolutely true. Some people here seem to be using “Malthusian” to refer to Hansonian hardscrabble frontiers – that is, situations where resources are the limiting factor on production – and not to anything the historical Malthus ever actually said or thought.

      • Jaskologist says:

        But my ability to fight the hypothetical at that level is indicative of how flawed the conclusion is. All of these were purporting to show cases where the solution is oh-so-obvious at a high level, but individuals can’t get there, and thus it’s all a coordination problem.

        But the solutions are usually not obvious. The people who think they have god’s-eye-views don’t actually have god-eyes, they have puny human tunnel vision. Where Scott sees coordination problems, I see lack of innovation.

        The central planner of the fish farms would have destroyed $300,000 pulling out fish waste (and then polluting some other place with it). A pack of clever aquaponics entrepreneurs would have saved all that money, added more on top of it, and given us a balanced diet to boot.

        The solutions are not obvious, and we don’t have a planner wise enough to find them. Our problem isn’t coordination, it’s that we don’t even know what we should be coordinating around. Our best method is still having a lot of people throw a lot against the wall and seeing what sticks.

    • Eric Rall says:

      #3 is based pretty closely on actual observed behaviors. The original “Tragedy of the Commons” was an explanation of the observed tendency of communal pasture lands to be overgrazed.

      The standard libertarian solution is to “enclose the commons” — convert the shared resource into private property, with an owner who has authority to set rules for its use and exclude anyone who refuses to abide by them. This is also based on real-world observations of what happened when England passed the Enclosure Acts and converted communal pastures into private property.

      • Jaskologist says:

        Oh, I know that it was a variant of the Tragedy of the Commons, but the thing about the Tragedy of the Commons is that we know how to fix it: privatize the Commons, so that users experience both the down and up sides of their actions. Less collectivism works better than more collectivism in this case.

        (Yes, I know there are still cases where the Commons rears its head, like with species of fish that are both tasty and require hundreds of miles of ocean and stream to complete their life cycle, but once you need to stretch the hypothetical to that extent, it should be clear that you’re dealing with an edge case which is not a good guide towards general policy.)

        • Slow Learner says:

          OR, we could treat Enclosure as a product of poor co-ordination and a sub-optimal outcome, bring back the idea of public goods and the common weal, and nationalise/municipalise/etc. the shit out of things so that they are available to all, not just the rich.
          Y’know, just as an option.

        • Jaskologist says:

          You could, if you don’t like innovation and progress. Like I keep saying, the primary problem isn’t getting people to do what works best, it’s that we don’t even know what works best.

        • Andrew says:

          …but the thing about the Tragedy of the Commons is that we know how to fix it: privatize the Commons…

          I’d agree that many of your examples are related to (or examples of) the Tragedy of the Commons. You sell it short when you say that privatization is the only TofC solution, perhaps because you’ve been reading too many economists and libertarians. 🙂 The original Tragedy of the Commons paper had a much broader solution than that, which it stated as “mutual coercion, mutually agreed upon”. Privatization is only one such system of mutual coercion. Access to TofC resources can be limited by price and privatization, sure, but they can also be limited by waiting lists, by need, by random lottery, by tests of strength, by social status, by tradition, by divine revelation, by production planning boards, etc.
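
          As a rough illustration of the payoff structure being argued about here – a sketch of my own with made-up numbers, not anything from Hardin’s paper – each herder keeps the whole benefit of an extra cow while the grazing damage is split among everyone, so adding cows is individually rational even though it leaves everyone worse off unless some form of mutual coercion caps the herd:

          ```python
          # A toy payoff model (my own numbers, purely illustrative) of the commons
          # structure discussed above: private benefit per cow, shared grazing damage.
          N = 10                         # herders sharing the pasture
          BENEFIT = 10.0                 # private gain per cow

          def damage(total_cows):
              return 0.5 * total_cows ** 2          # damage grows faster than linearly

          def payoff(my_cows, others_cows):
              return my_cows * BENEFIT - damage(my_cows + others_cows) / N

          print(payoff(2, 9) - payoff(1, 9))   # +8.95 -> adding a cow always pays *me*
          print(payoff(1, N - 1))              # 5.0   -> everyone restrained to one cow
          print(payoff(2, 2 * (N - 1)))        # 0.0   -> everyone defects, all worse off
          ```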

  98. Daniel Speyer says:

    One missing thing is the ability of an individual human to stand up to these elder gods. This is something technology enhances. Norman Borlaug punched Moloch in the gut so hard it hasn’t recovered. It will. Always after a defeat and a respite, the Shadow takes another shape and grows again. But if we can produce Borlaugs every century or so, we can keep starvation at bay. A few centuries ago, there could be no Borlaug. The balance does seem to be tilting in our favour there.

  99. Daniel Speyer says:

    It seems that Moloch is one face of Azathoth: selection by survival. There is another face: selection by sex. Peacock tails and baboon bottoms; music and collected philistine foreskins; chivalry and pick-up artistry…. This face has power outside the biological realm: much of college education may fall under its purview. I nominate Ishtar as a name, ancient goddess of lust and war but not of love. Ishtar is not a friendly goddess, not in a Yudkowskian sense, but she seems a little better for us than Moloch.

  100. Izaak Weiss says:

    Holy Eula, that was dizzying.

    • Tom Hunt says:

      Point of order: an End User License Agreement is not the same thing as a god of flowers and human values.

      • Anonymous says:

        You agree that the god may collect and use personal information about you to improve its services.

        TO THE EXTENT NOT PROHIBITED BY PHYSICAL LAW, IN NO EVENT SHALL THE GOD CAUSE PERSONAL INJURY, OR ANY INCIDENTAL, SPECIAL, INDIRECT OR CONSEQUENTIAL HARM WHATSOEVER.

        The laws of the State of California, excluding its conflicts of law rules, govern this license and your use of the godhead.

  101. Pingback: Outside in - Involvements with reality » Blog Archive » War in Heaven

  102. MugaSofer says:

    Absent an extraordinary effort to divert it, the river reaches the sea in one of two places.

    Well. Three places.

    You forgot nukes, or some other extinction-level weapon created by arms races.

    (Moloch whose fate is a cloud of sexless hydrogen!)

  103. James James says:

    “I continue to think it obvious that robots will push humans out of work or at least drive down wages”

    Yes, this debate is over.

    Gregory Clark: “After all, there was a type of employee at the beginning of the Industrial Revolution whose job and livelihood largely vanished in the early twentieth century. This was the horse… There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed, and it certainly did not pay enough to breed fresh generations of horses to replace them. Horses were thus an early casualty of industrialization.”

    Humans are more adaptable than machines, so they mostly continue to be employed, though already some are permanently unemployed. As soon as machines are more adaptable than humans, humans will be permanently out of work.

    • jaimeastorga2000 says:

      I don’t think anyone with a clue disputes that this is the endgame, barring an intelligence explosion or some other monumental change to the human condition. Most arguments are about how long it will take, and to what degree it has already happened. See Eliezer’s Anti-FAQ.

  104. Andy says:

    Man is the war-ape and that’s a heritage worth honoring insofar as it doesn’t kill us.

    Sports and Starcraft may be useful in this respect. After all, a perfect war is one where everyone is able to shake hands, grab a beverage, and be friends afterward.

    • Maybe we can share a beer in Valhalla, but it’s no fun if it’s just a game. While the war is on, you are at best an honorable enemy.

      • Andy says:

        My idea was to have competition and adrenaline without the mass suffering and horror and insanity that often cannot be separated from war.
        I think a marathon runner who gives up his place to help up a hurt competitor should be regarded as a moral equal with a soldier who commits some act of bravery.
        The great thing about sports – and this is what I love about the Olympics, even with all its horrible problems – is that people can beat each other and treat each other as honorable enemies, and then share a beer or other beverage without the suffering and bullshit one would have to go through in order to get to Valhalla.
        A friend of mine who plays women’s rugby (full-contact) once explained why she loved it so much. I don’t have it verbatim, but the gist was “I get to beat people up without getting arrested! And after the game’s over, we get pizza with the other team, and it’s the coolest thing to go over it with them and say things like ‘you hit me really well there!'”

        • fluff says:

          Who’s to say suffering and horror and insanity aren’t part of the appeal? A reactionary’s ideals tend to evolve from necessary evil to “evil” for its own sake.

        • Andy says:

          Who’s to say suffering and horror and insanity aren’t part of the appeal? A reactionary’s ideals tend to evolve from necessary evil to “evil” for its own sake.

          I call this gratuitously unkind. Get thee behind me, heretic.
          It’s certainly not a problem unique to Reactionaries – I’ve known too many bloodthirsty left-anarchists to ever think so.
          And I’d say there’s a number of Reactionaries who seem to quail from evil for its own sake – Moldbug IIRC included much reaching and sacrificing and hemming and hawing when putting together his Royal California – I remember Scott mocking him for it in the FAQ.

        • peterdjones says:

          Apparently, NRs have to distinguish themselves from everyone else by liking suffering, horror and insanity.

        • Andy says:

          Apparently, NRs have to distinguish themselves from everyone else by liking suffering, horror and insanity.

          NO NO NO NO NO NO NO NO
          Can we please NOT use this line of argument? It is a complete and utter failure of charity and a horrible strawman of NRx positions, as well as playing into the trope that progressives will distort NRx positions any chance we get.
          We can be honest about our opponents; all it takes is a little bit of nuance. Peter, you are capable of this little subtlety, I know. Fluff, I’m not so sure.
          Here’s a (IMO) more accurate way of stating it:
          Reactionaries prize order and the maintenance of order. In my view, those Reactionaries that place high value on war prize its qualities of “keeping outsiders at bay” and “masculine strength and aggression,” and ignore rather than revel in the suffering and horror it causes. Note the Reactionary talking point that wars in the past were more “honorable” and “chivalrous” and did little harm to civilians, even though this point is flattened by the Thirty Years War, the Hundred Years War, the constant hammering of sword on shield during the Middle Ages, the Crusades…
          Nyan’s prizing of Ares is concealing rather than reveling in suffering and bloodshed. It’s a glorious Manichean struggle where Right and Order win out over Chaos.
          Nyan, who’s got the more accurate portrayal of your position: myself or fluff/peter?

        • nydwracu says:

          In my view, those Reactionaries that place high value on war prize its qualities of “keeping outsiders at bay” and “masculine strength and aggression,” and ignore rather than revel in the suffering and horror it causes.

          Is that the best steelman for “war is good” that you can come up with?

          Hint: competition, incentive structures.
          Another hint: the Z3. For which institution was the computer invented?

          (The part of the Cold War that didn’t involve proxy wars was a good substitute for war in this view. But not in the masculine strength view.)

          There was a theory — I forget where I saw it, maybe at Xenosystems — that said that the reason Europe industrialized and China didn’t is that China was one big state fighting off barbarians, whereas Europe was a bunch of small states at around the same level fighting among themselves.

        • The idea that Europe eventually did better than China because Europe was divided by geography and had competing states shows up in Jared Diamond.

          The fact that China was doing better (or at least had a more complex civilization) for quite a while suggests that the right size for a culture might be contextual.

        • MugaSofer says:

          “The part of the Cold War that didn’t involve proxy wars was a good substitute for war in this view.”

          Are you insane? This was the war that, as its primary tactic, involved building weapons capable of killing everyone and deliberately wiring them so that – if one were ever fired, perhaps because a computer glitch informed them we had fired one at them – every nuke on the planet would go off.

          The Cold War was Moloch’s attempt to wipe us out before we became troublesome, and it almost succeeded.

      • peterdjones says:

        @Nyan
        What does technological advancement buy you that’s better than Perpetual Peace? How many gadgets is your life worth?

        “The masculine strength view”

        Do you endorse that?

  105. MugaSofer says:

    I hope it’s not too controversial here to say the same thing is true of religion. Religions, at their heart, are the most basic form of memetic replicator – “Believe this statement and repeat it to everyone you hear or else you will be eternally tortured”. A slight variation of this was recently banned as a basilisk, and people make fun of the “overreaction”, but maybe if Jesus’ system administrator had been equally watchful things would have turned out a little different.

    Looking at religions worldwide and their history, I think it’s pretty clear that this isn’t the “heart” of religion.

    Rather, repeat-or-burn is a (… relatively) recent adaptation – one that pretty clearly proves your point about the incentives to optimize away our values.

    Compare “turn the other cheek” to the crusades. One is something we value. The other …

    (Actually, arguably “turn the other cheek” formed part of a system that … ah, this comment box is too small for my crazy theories on the grassroots tactics of the early church.)

      • MugaSofer says:

        OK, let me put it this way: how many religions out there actually contain – “at their heart” or otherwise – “Believe this statement and repeat it to everyone you hear or else you will be eternally tortured”?

        Christianity does, sure – well, mainstream Christianity, anyway. Islam does. Various modern cults do.

        Judaism doesn’t. Buddhism … doesn’t, unless you consider further reincarnations “torture”, which I suppose you might since “life is suffering” is a notable tenet of Buddhism. Hinduism doesn’t.

        Sikhs don’t. I can’t think of any Greco-Roman religion that says this. Bahá’í doesn’t. Shinto doesn’t. Jainism doesn’t. Taoism doesn’t.

        You may be sensing a pattern at this point.

        I think it’s pretty clear that this particular tactic for spreading your chosen meme is, for whatever reason, a relatively modern idea. It worked OK for Christianity, got picked up by Islam, and … that’s pretty much it, actually.

        • Steve Johnson says:

          OK, let me put it this way: how many religions out there actually contain – “at their heart” or otherwise – “Believe this statement and repeat it to everyone you hear or else you will be eternally tortured”?

          The cult of Eliezer Yudkowsky / MIRI does.

          Well, a simulation of you will get tortured.

          But it’ll be a really good simulation because the AI is that smart!

        • Nornagest says:

          @Steve — Not only is that wrong with regard to LW consensus, it isn’t even an accurate description of Roko’s basilisk (which I assume you’re alluding to) in the first place.

          The whole point of the censorship incident that led to all this ridiculous fucking drama was that Eliezer was trying to discourage people from thinking along those lines, because doing so makes anyone credulous enough to believe it (or, more charitably, inclined to take ideas seriously) miserable and doesn’t substantially improve anyone’s lives. Not common-sensically, not through exotic decision theory magic, not otherwise. This should be immediately obvious to anyone that doesn’t stop at pattern-matching to naive Abrahamic tropes.

          I’m not saying it was a good call. It wasn’t: Eliezer badly misjudged people’s reactions, lost a valuable contributor to LW, Streisanded himself all to hell, and did serious damage to his own credibility and that of his organization. But the hypothesis that he was ham-handedly trying to ward off is not and has never been part of MIRI or LW thinking.

  106. On terminology:

    GNON is Nature OR Nature’s God, reversed. “Or” designating an agnostic routing-around of cosmological uncertainty. We don’t know who’s in charge, but whatever it is, it has some rules. Gnon is the embodiment of those rules.

    Further, Gnon is not “he”. Gnon is a genderless abstract process.

    On enlightenment style drunken bravado (which you identify with transhumanism) and Hurlock’s criticism:

    The problem with being really idealistic and saying we’re going to win the universe is that it connotes certain things, like “LETS REARRANGE THE MEANS OF PRODUCTION AND ALL OUR SEXUAL AND CULTURAL NORMS AND ABOLISH WAR AND MAKE EVERYBODY NICE AND NOBODY IN CHARGE, RIGHT NAO!” which gets us all excited and overestimating our current power, which gives us the 20th century, which was rather nasty if you don’t count the continued blessings of Mammon, and still isn’t over.

    The ambition of the enlightenment *was* too naive. I mean, that is our ultimate goal in the end, but humans have a nasty habit of mistaking goals for the means to achieve them.

    I favor, rather than enlightenment drunken bravado, a more aikido-esque approach. Rather than meeting Gnon head on full of fury and courage and getting smashed like the Romans, carefully channel the horrible processes of Gnon in ways that increase our power and intelligence and coordination, and trip the usual horsemen into less damaging areas. The goal is the same, but I think connotation is really important. This is what I took from Hurlock’s post.

    So why did I describe the goal locally rather than globally? First of all, we are already local. We can take at most a light-cone shaped slice of the universe starting in 50 years or so, and that can be terminated by intersection with someone else’s cone at any time. But why narrower than that?

    As a strategic matter, if you want to build a garden, you don’t bulldoze the whole Amazon and start trying things and run out of funding before you can even plant anything and thus leave an ugly black-hole-style clearcut; you start small. Make a proper garden in the immediate vicinity of your mansion. When that proves successful (after a few tries) you expand.

    If we capture Gnon locally, ambition will take care of the rest. If we try to go maximal up front, we get smashed like naive communists.

    Finally, as a pedantic matter, I think your conception of the telos of man is too soft. (Likewise for all the others in your scene.)

    Too much friendship and happiness. Not enough glory and pain. IMO, both are important, and there’s plenty of room in the garden for both. The end of Ares should not be the end of War. Man is the war-ape and that’s a heritage worth honoring insofar as it doesn’t kill us.

    • JTHM says:

      “The end of Ares should not be the end of War. Man is the war-ape and that’s a heritage worth honoring insofar as it doesn’t kill us.”

      What, exactly, do you mean by this? That there should be wars, provided they don’t kill us all? That there should be wars, provided they kill no one? That we should honor our warlike heritage by playing violent video games and watching Saving Private Ryan, but not fight real wars? That there shouldn’t be wars, but there should be big marble monuments to famous warriors?

      Sorry if some of these interpretations seem uncharitable, but your phrasing lends itself to uncharitable interpretation.

      • “Kill us” in the abstract.

        We want to achieve the dynamic where we collaborate with our brothers on a mortal campaign to figure out new and ever more glorious ways to kill the enemy and take their women, without getting into a dynamic where we are just pawns to Ares in its tendency to squeeze all the fun out of everything to optimize for the most competitive and flourishy forms.

        Whether souls are actually annihilated by this process is a separate matter. It could be done either way.

        • Berry says:

          War is a lot less enjoyable than you make it sound, even when you’re doing cool things.

        • Andy says:

          War is a lot less enjoyable than you make it sound, even when you’re doing cool things.

          Agreed, and I like war, probably more than is quite healthy, or even what is normal for an American male my age (26).
          I have few objections to war per se, I’m no pacifist, but glorification of making others suffer, when there’s no contract, no safeword, scares the hell out of me. And not in a way that tempts me to surrender to your armies. The kind of fear that makes me want to threaten you back – don’t start a war for the glory of Ares or you’ll be sorry! And that impulse scares me too.
          tldr – I think if you glorify war as a glory-in-itself, rather than an extended tragedy where bits of glory or heroism can occur, there’s something wrong with you. It’s a very human thing, but it’s one of the things I hope to eliminate if transhumanism becomes a real thing.

        • peterdjones says:

          I know what Moldbug looks like, so the mental image of him leading a charge is hilarious.

    • Anonymous says:

      Rather than meeting Gnon head on full of fury and courage and getting smashed like the romans, carefully channel the horrible processes of Gnon in ways that increase our power and intelligence and coordination, and trip the usual horsemen into less damaging areas.

      How do you prevent being outcompeted by societies that don’t seek to capture Gnon?

    • Lesser Bull says:

      I am applauding your last paragraph.

      That’s one appeal of the concept of eternity, or of history. It allows the end state to be idyllic but glory is still expressed through the process that achieved the end state.

  107. no one special says:

    Even if Elua defeats Moloch, a greater Moloch is still out there. Assuming alien life exists, we degenerate into human-Elua vs alien-Elua, and Moloch is the referee for that battle.

    (That is, even if we defeat the Malthusian trap with coordination, our civilization is still in a meta-Malthusian trap with other civilizations.)

    • Anonymous says:

      This seems like an even starker case of “the competition is so swingy/one-sided as to escape the usual slow Moloch decline”, though. At least current humanity all has a similar starting point at the same time for the AI race.

    • MugaSofer says:

      If alien-Elua …

      No, I’m sorry, I can’t use that name. I hereby christen the predicted colonization-wave Galactus.

      Anyway, if Galactus existed, wouldn’t we have noticed him devouring the galaxy by now? Where are the Dyson Spheres turning out the stars? Heck, why haven’t we been visited by a Von Neumann Probe to reorganize our atoms, which Galactus could be using for other purposes?

      Why are we encountering an un-slain Moloch, if he’s already dead?

  108. Hawk Rationalist says:

    You say that the god of humans is the god of Love. But why shouldn’t the god of humans also be the god of Hate? Why promote one emotion over its mirror image? Whence this symmetry-breaking?

    Everyone who studies human history and society – like you and Ginsberg – looks around in confused horror at all the violence and brutality and suffering. They typically conclude that human suffering is some kind of accident or mistake or coordination problem.

    Nope, sorry. Human suffering is caused by human hatred. Humans, almost without exception, accept as a terminal and transcendent value the destruction and oppression of other humans. The reason this fact is not widely known is just because of signalling problems – people don’t want to talk about hatred because it seems “icky”, to use a Hansonian word.

    Don’t believe me? Go back a few posts and reread the stuff about Arthur Chu. In particular, reread the comments by Chu himself. Now consider two theories:

    – Chu is actually a good person, that is to say, one who desires to build a better world for everyone. His vitriolic hatred against rationalists/non-SJ types is fundamentally an EFFECT of a logical, consistent, objective philosophy and worldview that he developed from a blank slate starting place through an unemotional process of gradual conceptual accretion. He hates rationalist types because according to his philosophy, such people are the ones who cause human suffering (e.g. through racism and sexism).

    – Chu is actually not such a good person (but not much worse than most people), because he is intrinsically motivated by hatred. His hatred is not the effect but the CAUSE of his philosophy and worldview. In Haidtian terms, Chu’s elephant decided it was going to hate rationalists, and so his rider was given the task of finding a philosophy that could justify this hatred.

    In my view, hatred actually has two disastrous consequences. The first is the simple fact that hatred causes humans to hurt each other, or at least causes humans to fail to act ethically towards one another. That one seems inevitable, since hatred is a fundamental human emotion.

    The second is that hatred obscures our understanding of the world. Since hatred is such a difficult emotion to justify in a social setting, our conscious minds need to perform all kinds of weird conceptual distortions in order to make ourselves look like reasonable, ethical people. These distortions cause all kinds of confusion and chaos in our sociopolitical system. Civil discourse is led systematically away from ideas that could actually help to improve society and towards ideas that hate-mongers have developed to justify hatred.

    (One good example of this is the idea that black underachievement is caused by white racism. Sure, white racism is bad, but the idea that it causes black underachievement is absurd on the face of it. And that widely held belief actually prevents our society from doing things that could help black people).

    We can’t get rid of hatred, but we can at least acknowledge that we have a problem. Hatred should be accepted as an emotion like other emotions – people shouldn’t need to provide weird pseudointellectual explanations for feeling it. Why should this be so strange? We don’t require the bride and groom to provide a philosophical justification of their love before getting married. We don’t expect a mother to justify her love for her children through propositional deduction.

    If we normalize hatred as a legitimate emotion, we could at least have philosophical clarity. Modern Americans use dumb tribal political concepts to justify their hatred (the liberal vs. conservative split). We shouldn’t have to do this. Coastal urban liberals should be able to say: we hate rural religious conservatives because they’re in a different tribe (that’s the real reason, after all). Then we could have a political discussion that isn’t attached to tribal beliefs; people could weigh issues based on actual evidence and reason rather than how the issue aligns with their tribal membership status.

    • Nornagest says:

      – Chu is actually a good person, that is to say, one who desires to build a better world for everyone. His vitriolic hatred against rationalists/non-SJ types is fundamentally an EFFECT of a logical, consistent, objective philosophy and worldview that he developed from a blank slate starting place through an unemotional process of gradual conceptual accretion. He hates rationalist types because according to his philosophy, such people are the ones who cause human suffering (e.g. through racism and sexism).

      – Chu is actually not such a good person (but not much worse than most people), because he is intrinsically motivated by hatred. His hatred is not the effect but the CAUSE of his philosophy and worldview. In Haidtian terms, Chu’s elephant decided it was going to hate rationalists, and so his rider was given the task of finding a philosophy that could justify this hatred.

      It seems like there’s an excluded middle here. Several, actually: Chu et al. might have accepted a superficially compelling worldview that turns out to have some nasty implications (such as hating rationalists for no good reason) downstream. Or the hatred and the worldview might basically have nothing to do with each other, and he just justifies one in terms of the other because humans. Rejecting “intrinsically motivated by hatred” doesn’t bring us automatically to “perfectly just, rejoicing in justice alone”.

      Most people aren’t good or evil. They’re good and evil.

      • roystgnr says:

        The huge excluded middle between “he’s evil” and “he’s good and logical” is pretty obviously “he’s good and illogical”. When someone brags about the fact that he’s put on blinders and ear plugs to keep himself safe from mental infection by evil, you hardly need to call him evil to explain why he doesn’t eventually realize what a mistake he’s making as he flails around bumping into people. Those people in the way can be assumed to be evil, and if his defenses ever slip enough for him to see or hear anyone trying to inform him of exceptions, the very fact that they’re not wearing blinders and ear plugs proves them to be enemies too!

        • Nornagest says:

          That’s a pretty good analogy.

        • Andy says:

          Agreed. And once you realize this, the really hard part begins – how do you control someone who claims to be for everything you’re for, yet is blind to the damage they cause? How can liberal progressives like me stop someone like Chu, preferably before he gets access to arms and followers and all the horrors that can be released once the “Well, why don’t we kill them all?” option is on the table? And without handing victory to the oppressors that we both agree are bad, bad people who shouldn’t have power over us.

        • roystgnr says:

          If “someone like Chu” is just a blogger or pundit or reporter, then you just do what Scott’s been doing: call him out on his nonsense. You have to be able to overcome the “but he’s one of my tribe” impulse, which is hard, and once you’ve done that you ought to be ready to turn around again to forgive and forget if there’s a change of heart, which is also hard. But mostly your only opponent is your own psychology.

          If “someone like Chu” is a politician in a partisan voting environment, then you’re in real trouble, because if you go “circular firing squad” on someone from your own party then you sabotage yourself in the short term, whereas if you remain silent and allow your party to be identified with their mistakes and failures then you sabotage yourself in the long term. Here the psychology that’s working against you is that of all the other voters, and I don’t know how to fix the problem.

    • MugaSofer says:

      I really got the impression Chu was inciting hatred in order to fulfill his own selfish desires for status etc.

      Selfish desires which, obviously, would be a lot more satisfied than they are now if people – including Chu himself! – hadn’t gotten too close to a gigantic hate-machine and then deliberately fed it.

      Hence, co-ordination problems.

    • MugaSofer says:

      >You say that the god of humans is the god of Love. But why shouldn’t the god of humans also be the god of Hate?

      The god of humans isn’t “whatever the result of human actions is” – that’s covered by Mammon, Ares etc.

      Elua represents our preferences. We prefer there to be less hate in the world, and try to reduce it in our own hilariously feeble way whenever Moloch isn’t busy increasing it. That is Elua.

    • ozymandias says:

      Your first two statements in theory one are, I believe, a perfectly accurate description of Chu’s behavior.

      His vitriolic hatred against rationalists/non-SJ types is fundamentally an EFFECT of a logical, consistent, objective philosophy and worldview that he developed from a blank slate starting place through an unemotional process of gradual conceptual accretion.

      Is there a single person in the world who derives their morality from a logical, consistent, objective philosophy and worldview from a blank slate starting place through an unemotional process of gradual conceptual accretion? Seriously! Look at any person that people consider good – MLK or Desmond Tutu or Norman Borlaug or Stanislav Petrov. Do you think that Stanislav Petrov didn’t destroy the world because he had started with a blank slate and logically and objectively and emotionlessly deduced that destroying the world would be contrary to Conclusion IIIb of argument 2a? No! Of course not. Morality does not work that way.

      The start of morality is emotion. It is “people are hurting, and that is bad, and I want to stop it.” In Chu’s case, people are hurting him and people he loved with racism and sexism and homophobia and transphobia. And sometimes you find that it is not just that people are hurting, it is that someone is hurting them.

      The emotion people usually feel when someone is deliberately hurting someone is righteous anger. And the thing is… righteous anger is powerful. Any successful social movement – from the Civil Rights Movement to the Tea Party – runs on righteous anger. This very blog post is eliciting righteous anger against Moloch (that’s why he’s personified! So we don’t go “huh, that’s abstractly terrible,” we go “I ENLIST IN ELUA’S FLOWER-COVERED ARMY”). There’s a reason we punish people who hurt others. It’s so they stop.

      So he is angry, and he is saying angry things at us. His anger is misdirected, I believe, but I do not believe that is the action of an evil person. I think it is the action of a person who cares deeply about others’ suffering and who wants to protect them from those who hurt them. That is not evil. That is good, perverted. He is on the side of niceness and community and civilization, though he does not know it. And I hope someday he realizes that we and he are on the same team.

      • Hawk Rationalist says:

        If Chu was really only interested in helping his friends and building a better world, he should have found a better and more direct way of doing so.

        Let me give you an example of what I mean. The US wasted $3 trillion on the wars in the Middle East over the last decade. Anyone with an objective, dispassionate worldview and a proficiency with mathematics can see that this was a failure of colossal proportions, comparable to a crime against humanity. Chu’s friends and allies were harmed profoundly by this failure. So if Chu really wanted to help his friends, he would direct his ire and vitriol against this kind of government SNAFU (and rationalists would be his natural allies in that conflict). But here’s the catch – Chu’s enemies were also harmed by the Iraq war disaster. Since such calamities don’t change the standing of his tribe relative to the opposing tribe, he’s not interested in preventing them.

        (Chu should also be getting less enraged over time, not more, since his side keeps winning political battles. Instead, he sees the fear in his opponents’ eyes, and this fear inspires him to press his attack ever more viciously).

        So, sorry, I don’t buy it. The dude is driven by hatred. But I won’t call him “evil”, though he is certainly capable of committing evil acts. And, again, I don’t think he’s much worse than everyone else.

  109. Xycho says:

    I think I’m going to have to read this a couple of times to get as much out of it as I can tell is there.

    It’s quite pleasing to find that someone else agrees that upon discovering the existence of a god or equivalent being, immediate attempted deicide is the only reasonable course of action. That’s a significant portion of my disagreement with religion; either they’re wrong about the existence of God (and therefore need to change their minds), or they’re right, in which case worship is the opposite of the correct response.

    • Deiseach says:

      But then Scott’s Ultimate Gardener AI is the functional equivalent of a god, so we must destroy it – which either leaves us back at the mercy of the jungle and the trap, or we have to create another entity to save us and establish the garden, which means a counter-god, which requires us to kill the new god, and so on and on.

      • Xycho says:

        True. I realise I had in mind a slight caveat to that, which is that I don’t apply it to godlike beings we have designed and built which are functioning for our benefit. Any preexisting deity or equivalent, which therefore is both more powerful than AND not subordinate to humanity, is a serious risk and requires extermination.

        I felt the same way about my parents a quarter century ago, and still feel that way about coordinated groups of people numbering greater than about six, so I acknowledge that this may not be the most mature approach to being outgunned.

  110. kappa says:

    That picture of the Luxor obelisk and its caption together represent the best instance of comic timing in a blog post I can remember ever seeing.

    Also, I really, really like your depiction of Elua, here and elsewhere. Now there is a god I can get behind. (Heh heh.)

    • Scott Alexander says:

      Thank you 🙂 I will have to tell my family and girlfriend that finally someone appreciates my terrible sense of humor.

      • kappa says:

        You mean there’s people who aren’t enormously entertained by an exquisitely arranged setup for dropping the phrase “granite cocks”?!

        XDDD

      • LRS says:

        Scott, let there be no doubt that your sense of humor is widely appreciated by your readership.

  111. Caspian says:

    The poetry and descriptions of these monstrous god-systems are evocative of Fredy Perlman’s description of Leviathan in Against His-story, Against Leviathan

    http://theanarchistlibrary.org/library/fredy-perlman-against-his-story-against-leviathan

    which I decided to read based on the description here

    http://bradhicks.livejournal.com/254044.html

    the lugal himself was unaware of the fact that he had created not just new rules, but a new system. And not just any system, but one that fulfills all of the scientific definitions of a living organism: irritability, self-sustenance, self-repair, and reproduction. He had not just changed the way his tribe and the nearby tribes lived, he had given birth to a vast psychic organism, a colony creature, a hive mind that twists and absorbs the lugal and his tribe just as thoroughly as it twists and absorbs the zeks

    • nydwracu says:

      Oh, good, someone’s made that argument before, so I don’t have to make it. I love when that happens. (Corporations are people, my friend!)

  112. Anonymous says:

    Outstanding.

    But what does Elua correspond to? Is it something you hope will exist in the future, or do you think there is now a force we can call our own?

    • MugaSofer says:

      Elua is our desires.

      When the capitalist-god Mammon provides us with food, it’s because it cut a deal with Elua. When democratically-elected leaders don’t abuse their powers because that would make them unelectable, that’s Elua forcing … Cthulhu, I think … to obey us.

      • Deiseach says:

        Which is why I think Elua is another trapper, not a gardener. Moloch may promise us power, but what is power but a means to satisfy our desires?

        Elua is the god of free love? But then we have to find some means of preventing conception, otherwise we rapidly run into the Malthus situation of “All the women are getting pregnant all the time and families of ten or more are common” and we end up with Scott’s island of artistic rats having a dozen children, who each have a dozen children and so forth until the rat-eat-rat society of competition and consumption evolves.

        Elua may have soft eyes, but he demands a price as well. Moloch asks us to sacrifice love for power. What does Elua ask us to sacrifice for love? Perhaps “Submit to me as the gardener”. And even in Elua’s universe, I suspect death still comes as the end.

      • Anonymous says:

        But that’s not a selective force. That’s where Gnon and his various emanations get their power from. It’s how they can affect so much without any presence in the universe. I don’t see any such force that cares about us or our desires.

        • Ialdabaoth says:

          Man this is a *WEIRD* pantheon.

        • MugaSofer says:

          Well, you could argue that individual humans are exerting a selection pressure, by deliberately choosing things in line with their values.

          But yes, it’s definitely the odd one out.

  113. Wesley says:

    “Any human with above room temperature IQ can design a utopia”

    I do believe this is showing your Fahrenheit bias. Room temperature in Canada is ~23 °C, and a human with IQ 23 may be a bit too stupid to dream. IQ 73, on the other hand, is just above a moron (technically, the original definition), so quite average.

    Reject the Imperialism! Join the free world, where units make sense!

    Note: I actually don’t know what an IQ 23 person would be like. It’s about 5 standard deviations below the mean, which works out to something like 1 in 7 million on a normal curve. So there might be a few dozen of these people in the United States. Are these people essentially walking vegetables, with minimal brain function, but still-functioning autonomic nervous systems?
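
    A quick sanity check on that rarity figure – my own back-of-the-envelope, assuming IQ is normally distributed with mean 100 and SD 15, and a US population of roughly 320 million:

    ```python
    # Back-of-the-envelope check of how rare an IQ of 23 would be, assuming
    # IQ ~ Normal(mean=100, sd=15) and a US population of ~320 million.
    from statistics import NormalDist

    iq = NormalDist(mu=100, sigma=15)
    p = iq.cdf(23)                                  # P(IQ <= 23)
    print(f"z-score: {(23 - 100) / 15:.2f}")        # about -5.1 standard deviations
    print(f"P(IQ <= 23) ~ {p:.1e}")                 # on the order of 1e-7
    print(f"roughly 1 in {1 / p:,.0f}")             # about 1 in 7 million
    print(f"expected in the US: ~{320e6 * p:.0f}")  # a few dozen people
    ```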

    • nydwracu says:

      Comrade! In Fahrenheit, the temperature goes from a bit below zero to a bit above 100! In Celsius, the temperature goes from somewhere around -26 to somewhere around 41! Renounce your Jacobin insanity and return to units that make sense in the world!

      I don’t think IQ would make sense at numbers as low as those. Doubt someone five standard deviations below the mean would be able to take an IQ test.

      • Slow Learner says:

        Mmm, Fahrenheit works fine, until you’re boiling water, or cooking generally, or doing science. A scale that goes from 0 (water freezes at standard atmospheric pressure) to 100 (water boils at the same pressure), and has degrees the same size as the Kelvin scale (builds nicely into a LOT of scientific formulae) just makes sense on a level that 32 degrees to two hundred and something degrees for liquid water and messy conversion factors in the equations really doesn’t.
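
        For what it’s worth, here are the conversion factors in question – a trivial sketch of my own, using the standard definitions rather than anything specific to this thread:

        ```python
        # The standard conversions under discussion: Celsius<->Fahrenheit needs an
        # offset *and* a scale factor, while Kelvin is just Celsius shifted by 273.15
        # (and Rankine is Fahrenheit shifted by 459.67), so degree sizes line up.
        def f_to_c(f):
            return (f - 32) * 5 / 9

        def c_to_k(c):
            return c + 273.15

        def f_to_r(f):
            return f + 459.67

        for f in (32.0, 72.0, 212.0):
            c = f_to_c(f)
            print(f"{f:6.1f} F = {c:7.2f} C = {c_to_k(c):7.2f} K = {f_to_r(f):7.2f} R")
        ```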

        • nydwracu says:

          Does Celsius-plus-Kelvin offer any advantages that Fahrenheit-plus-Rankine doesn’t? (Other than that the rest of the world uses it. Is it built into the metric system somehow?)

          But at least Celsius doesn’t run backwards anymore.

          Celsius’ own thermometer instrument with its linear scale was of glass and designed for more experimental purposes starting with 0° at the boiling-point (D) and 100° at the freezing-point (C) of water (Fig. 3). This implied a practical method to measure, e.g., weather conditions, with no plus or minus. Linnaeus however, as a scientist, had the original Celsius scale turned, so that 0 (zero) was marked for the freezing-point and 100 for the boiling-point of water for his instrument (Fig. 2). By that Linnaeus got a thermometer better suited for biological conditions. Plants risk to die at 0° C.

          • Slow Learner says:

            Yep, Kelvin/Celsius degrees are built into the SI system; for instance I shuffled around degrees Kelvin, Joules, Volts and Coulombs when I was studying some electronics. Because it’s built in, you have fewer conversion factors and the maths is simpler, meaning fewer errors.

        • Toby Bartels says:

          Celsius? Bah! Stick to straight Kelvins: works even better in the scientific formulas, and no negative numbers ever.

          (Fine print: Actually, negative temperatures can arise in obscure situations only tenuously related to our ordinary notion of temperature, and these cases make you realize that what you actually want to measure is not temperature at all, but coolness, in inverse kelvins, or even better in inverse units of energy. However, these matters are outside the scope of this comment.)

        • Ialdabaoth says:

          Kelvins! Bah. Use attoPlancks.

      • Army1987 says:

        Celsius only postdates Fahrenheit by two decades, but you make it sound like it’s more like two centuries. (In particular, it predates the actual historical Jacobin Club by nearly half a century!)

  114. BenSix says:

    Fascinating.

    I shall go and think about it. On a chair. By a window. As the sky grows dark and the air becomes cool.

  115. Army1987 says:

    Okay, okay, I’m gonna go give MIRI some money now.

    • There’s a real problem with the “give money to MIRI” thing, which is that the whole idea of Coherent Extrapolated Volition smacks of Moloch to me. If you look at the views of most people today and try to extrapolate from them, you get things like “the unbelievers must be killed or at least forcibly converted”, “it is right to kill people who aren’t in our group in order to defend the economic advantage of people who are”, “people whose sexual practices differ from my own should be punished or, preferably, not exist”, “people with autism should not exist”, “women are inferior to men and should be subservient to them”, “it is OK to torture those who are in the out group”.
      Those are the actual expressed values of the supermajority of people, and I see no reason why a CEV would not implement them. It is hard enough just living in a world where those values are shared by most human beings — having a totalitarian omnipotent dictator with those values would be about the worst imaginable outcome as far as I can see, far worse than mere human extinction.

      • Darcey Riley says:

        Oh my god thank you for writing this comment, thank you so much, I have spent the last two years cringing in terror of the thought of CEV being used to design an FAI, because extrapolated values are going to look nothing like actual human values whatsoever; even if we took the so-called “nice” values, generalizing them is still a terrible idea. This comment only adds to my terror of CEV.

        • Personally I don’t think we have to worry. I’ve seen nothing from MIRI, ever, to suggest that they have the first idea how to build an AI, “friendly” or otherwise. If I thought there was even the slightest chance that they did, though, I’d be campaigning every second of every day to get them shut down as a danger to humanity, because CEV is such an obviously obscene idea.

        • Athrelon says:

          CEV basically is a “Cthulhu wins” button and seems obviously a bad idea – on a par with ensuring that Azathoth, Mammon, or Ares eats everything. This seems obvious to both Andrew and many nrx folks despite evident object-level disagreements.

      • MugaSofer says:

        That … is not what CEV does.

        It’s not some sort of global democracy, one idiot one vote. CEV extrapolates volition.

        Unless you think that the current state of affairs is a result of everyone having perfect information, the current state of affairs is probably not the place to start looking for CEV.

        • CEV is meant to preserve human values. Most humans value things that are by my reckoning utterly abhorrent. They *actually* value those things, in a way that no amount of information can change — people who find homosexuality disgusting, for example, are if anything likely to find it *more* disgusting the more they find out about it. The basic drive to persecute members of an out-group may well have a hardwired biological basis — certainly it seems likely to me that it does.
          Extrapolating that doesn’t lead anywhere good.

        • MugaSofer says:

          They *actually* value those things, in a way that no amount of information can change — people who find homosexuality disgusting, for example, are if anything likely to find it *more* disgusting the more they find out about it.

          … you’re kidding, right?

        • Raemon says:

          Andrew, have you actually read this paper:

          https://intelligence.org/files/CEV.pdf

          Because what you are describing is not what CEV actually does.

      • von Kalifornen says:

        That… does not sound very coherent OR extrapolated.

        • Those values are all perfectly coherent — they can all exist together without any contradiction. And no matter how far you extrapolate from them, you’ll never get anything good, or even neutral.
          The whole argument behind CEV seems to be “it’ll preserve those aspects of human values that Westerners who value post-Enlightenment thinking think worth preserving, while not preserving any of the icky ones we don’t like because they’ll be ‘extrapolated’ away, somehow”. I think it far, far, more likely that it would lead to the opposite of that.
          And I *certainly* see no reason to believe that anything that can “extrapolate” the human tendency to attack those who are different into peace, love, and tolerance for all people won’t *also* “extrapolate”, say, my desire not to be stabbed into a compulsion for everyone to stab me.
          Either CEV would preserve current human values, in which case I should oppose it because current human values are, for the most part, evil; or it would destroy current human values, in which case I should oppose it because it would destroy things I value; or it would not do anything at all, in which case why build it at all?
          The Yudkowskyite argument seems to be that it would somehow only preserve nice values and not nasty ones, but with no explanation as to how this would be — and if there is some outside criterion for which values are nice and which are nasty, then that, rather than CEV, should be used from the start.
          I would be unimaginably terrified of anything implementing CEV getting any kind of power, if I thought there was even the tiniest chance of it happening.

        • Darcey Riley says:

          I don’t even think that extrapolating nice values will lead to anything good. I’ve heard the effective altruists give the following argument for helping people in Africa: if your neighbor down the street got hurt, you’d want to help him; similarly, if you knew the people in Africa, you’d have empathy for them too and want to help them; it’s only coincidence that you’ve met your neighbor down the street but not the people in Africa; so you should extrapolate your desire to help your neighbor to the empathy you would feel for the guy in Africa, and help him.

          But this argument can be taken to an extreme, it seems, where you feel intense empathy for all forms of life, including individual bacteria. Where do you stop feeling empathy? In our society we see “all of humanity and no one else” as a clear Schelling point, but people in past societies have seen “only our race, and no one else” as an equally clear Schelling point. And maybe if we have more intelligence, we also have more capacity for empathy, which leads us to care a whole lot about every individual bacterium, which leads us to do crazy things, like wipe out all life on Earth because we can’t satisfy all life’s preferences simultaneously and we feel empathy for everything.

          I also think that if we try to make human values consistent, we will get something that looks absolutely nothing like normal human values, because normal human values are inconsistent. Considering that normal human values have kept society going for a few millennia now, I expect this to be a feature rather than a bug. Trying to make human values consistent with each other is part of reason as a memetic immune disorder.

          Humans did not evolve rational morals that can be worked into a logical, consistent system. We evolved morals that assured our survival in a specific environment. Taking our values and letting them run wild, in a context very different from our ancestral environment, typically leads to disastrous results. Consider, for example, the human desire to eat a lot of fat and sugar. We don’t have a restriction on that desire, because fat and sugar were pretty rare in the ancestral environment. But now that we have lots of fat and sugar, this evolved human value (assuming a “value” is anything that drives our actions) is running amok and wreaking all sorts of havoc.

          When I’m arguing against CEV I usually give the following example. Suppose you have a truck, and it’s always going to be driving into a 50-mile-per-hour headwind, and you want that truck to go 50 miles an hour. So you program the truck to say “drive at 100 miles per hour”, since that will achieve the goal. If the truck actually moved at 100 miles per hour, it would take turns too fast and crash. But the conditions of the environment assure that would never happen. But CEV would look inside the truck’s programming, and say “ah, clearly this truck really wants to drive at 100 miles per hour, but its malicious environment is stopping it. So let’s give the truck its wish.” And then the truck actually drives at 100 miles per hour and crashes and dies.

        • MugaSofer says:

          Darcey, have you read “Adaptation-Executors, Not Fitness-Maximizers”?

          ETA: sorry, link:

          http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/

        • Darcey Riley says:

          MugaSofer, thanks for the link; I just read it now. I’m not surprised Eliezer has also come up with this idea (and maybe I actually got the idea directly or indirectly from him, who knows). But the post makes no mention of CEV, so after reading it, I’m not clear on whether Eliezer thinks it has any bearing on CEV.

        • Fazathra says:

          @ Andrew Hickey

          Not to be deliberately obtuse, but I can’t see the flaw with CEV here. As far as I know, the point of CEV is to generate a compromise solution of morality which is acceptable/tolerable to all of humanity. As your views are in the minority, isn’t it to be expected that they will not be dominant within the utility function of an AI implementing CEV – and that this is thus a feature, not a bug, of CEV?

          In essence, you appear to be using a progressive metavalue-set by which you judge other value-sets on their niceness, which is just how close they are to your own progressive values. Then you argue that because a CEV-using AI would not possess a value-set which is “nice” – i.e. similar to your own – the entire concept of CEV is flawed. However, the fact that you may find the values extrapolated via CEV abhorrent does not necessarily have any real bearing upon the correctness of CEV as a method. In fact, CEV may be the utility function for an FAI that maximises the utility of humanity in general after the singularity, even if you find its values to be morally wrong. I don’t know whether this is actually true (and I suspect it isn’t), but this line of argument seems to be fairly irrelevant as a criticism of the concept of CEV, since the point of CEV is to find a solution for all, not just for liberal American programmers.

          PS: This is my first time commenting here, so I am sorry if this comes off in the wrong way or else betrays a fundamental misunderstanding of something really basic.

        • anon says:

          Darcey, I share your views on CEV. Thank you for clarifying some jumbled thoughts I’ve had around the subject.

        • The thing that makes me itch about CEV is the “extrapolated” part. I’m not convinced that even the FAI can know very much about what you (or humanity in general) would know if you knew more – what you know has something to do with what experiences you have, not just your relationship to an existing body of knowledge.

          And your idea of what a better version of yourself would be is going to change according to what you’ve experienced.

          I can hope that people’s CEV would include the opportunity to learn by direct interaction with the universe outside the FAI, and change as a result, but that doesn’t seem to be the way it’s described.

      • Kaj Sotala says:

        I agree that CEV seems problematic for the reasons you mention, but it’s also an old proposal which nobody considers anywhere close to complete, sufficient, or properly tested yet. If MIRI was saying that they were definitely going to use it as a blueprint for an AI, that would certainly be a reason to boycott them… but since they aren’t saying anything like that, objecting to them on grounds of CEV seems like objecting to someone on the grounds that the first preliminary sketch at an idea that they came up with wasn’t perfect.

        • Darcey Riley says:

          This raises my hopes for MIRI a little bit, but (1) a whole lot of people still talk about CEV, and I haven’t seen MIRI do anything to disabuse them of this notion, and (2) CEV is such an obviously terrible idea that it’s hard to trust anyone who came up with it to also generate good ideas in the future. (Not that I trust anyone to come up with good ideas; we’re probably fucked.)

        • “objecting to them on grounds of CEV seems like objecting to someone on the grounds that the first preliminary sketch at an idea that they came up with wasn’t perfect.”

          More like “the first preliminary sketch at an idea that they came up with would destroy everything that is good and decent in the world because of incredibly obvious flaws that jump out within thirty seconds of thinking about it, but they chose to promote that idea in a thirty-eight page PDF paper which is still, ten years later, on their website, to not make any public progress from that idea in the ten years since, to repeatedly refer to it over that time period, and to make pro forma acknowledgements that yes, of course it’s not perfect, while refusing to acknowledge any of the actual criticisms made of it”.

          “If MIRI was saying that they were definitely going to use it as a blueprint for an AI, that would certainly be a reason to boycott them”
          Well, at no point that I’ve seen have they ever said that they’re *not* going to use it, or something similar, as a blueprint. Admittedly Muehlhauser might have said something of the sort (I find his writing completely unreadable, so don’t know what he’s said), but I’ve read pretty much everything Yudkowsky’s written (because even when he’s wrong he’s an extremely skilled writer) and he’s never, that I recall, said anything other than that it’s pretty close to the correct answer…

        • Or what Darcey said 😉

        • Kaj Sotala says:

          The current working plan for FAI is value learning / value extrapolation, which are more general terms that don’t make as many specific assumptions as CEV does. CEV continues to be mentioned and cited because it’s the first proposed example of a value learning / extrapolation approach.

          As for CEV being an “obviously terrible” idea, I don’t know: there are problems with it, but it still seems like a reasonable first stab at the problem to me.

          Take extrapolating desires: the opposite of this would be to do no extrapolation, and just go with the morality of current-day people. Well, in that case you get exactly what Andrew Hickey described in his original comment – a totalitarian omnipotent dictator that punishes people for being different. Or actually you wouldn’t, since in that regard there would be sufficient disagreement about the issue that the AI produced by CEV might choose to take no action about it – meaning that all kinds of nasty behavior would be allowed to go on uninterrupted. That doesn’t sound much better, either.

          Would extrapolating desires help? Well, there’s the possibility that it wouldn’t, but at least CEV contains the “if we knew more, thought faster, were more the people we wished we were, had grown up farther together” provisions on how the extrapolation is supposed to be done. There is an argument to be made for there having been a long-term trend towards more peace and tolerance, fueled in part because of an understanding that other people aren’t ultimately that different from us. Andrew suggests that things like wanting to persecute homosexuals are terminal values that do not change with more information, but we also know that the tendency to persecute others is associated with a psychological tendency to dehumanize them and see them as fundamentally different. It’s plausible – not certain, mind you, but plausible – that the “knew more” and “had grown up farther together” provisions would end up making the dehumanization urge impossible, and thus eliminate most intolerance.

          Another problem with not extrapolating desires is that, as I alluded to earlier, it would essentially freeze moral growth and allow us to go on with all kinds of things that we might eventually come to consider moral atrocities. As specified, if CEV would realize that we’d eventually come to consider something a horrible mistake, it’s supposed to nudge us out of that: “hey, you probably really don’t want to do that”.

          …the initial dynamic for CEV should be conservative about saying “yes,” and listen carefully for “no.”

          Extrapolating your wish to be a better person may add considerable distance. If we extrapolate out far enough, we may end up with a Power, or something else too powerful and alien for your present-day self to comprehend. If our unimaginably distant future selves have an 80% probability of attaching a huge value to cheesecake for no reason our current selves can comprehend, this may not be a good reason to actively encourage present-day humans to fill their lives with cheesecake. It probably is a good reason to prevent people from destroying present-day cheesecakes. Distance and spread should attenuate the force of “do this” much more rapidly than they attenuate the force of “Yikes! Don’t do that!” In qualitative terms, our unimaginably alien, powerful, and humane future selves should have a strong ability to say “Wait! Stop! You’re going to predictably regret that!”, but we should require much higher standards of predictability and coherence before we trust the extrapolation that says “Do this specific positive thing, even if you can’t comprehend why.”

          (CEV, p. 10)

          That said, the CEV document also explicitly mentions the possibility that despite all of this, the extrapolated desire of humanity would still end up creating a terrible world because most people were just that selfish. And it’s specifically mentioned that if it starts looking like that’s what will happen, then one should just scrap the initial “extrapolate from the desires of everyone” thing and try something else:

          I think there must come a time when the last decision is made and the AI set irrevocably in motion, with the programmers playing no further special role in the dynamics. But at any point before then, one can always say “No,” and humanity will be no worse off than before.

          Then what? Give up entirely? No, extrapolate from an idealized generic human, or an idealized generic human with an initial push toward altruism, or an idealized generic human with an initial push toward transpersonal philosophy, or extrapolate an imaginary civilization composed of genetically diverse individuals in the 99% niceness bracket…

          (CEV, page 28)

          As for making human values consistent – well, again there’s the question of what the alternative would be. Yes, there’s an extent to which human values can be said to be inconsistent, but most people also don’t go around shooting people on even-numbered days and donating to pacifist organizations on odd-numbered days. (Actually, even that would be consistent, if it consistently obeyed the even-odd-day rule…) If human values really were inconsistent in a logical sense, that would mean that our behavior would be totally random, since inconsistent axioms allow you to derive any conclusion. But our behavior clearly isn’t totally random, so there’s still some consistency to our desires. When people say “our desires are inconsistent”, I think in practice that mostly just means “our desires are so complex that I can’t come up with a neat consistent formalization that would be simple enough for me to understand”. But that doesn’t mean that one wouldn’t exist, even if it required a lot of weird special cases that weren’t derivable from any axioms. As an existence proof, my brain already contains a consistent implementation of my values, since my behavior isn’t totally random.

  116. MugaSofer says:

    “Science God”? “Education God”? What happened to “Czar”, leading up to the excellent “for short, they just called him the Czar”?

    • Vivificient says:

      I was wondering this too….

      Perhaps it is of symbolic significance that the czars have become gods.

    • jaimeastorga2000 says:

      This piece uses a god motif (Moloch, Gnon, Cthulhu, Azathoth, Mammon, Ares, Gods of the Copybook Headings, god’s eye view, etc…). Changing “Czar” to “God” in order to fit that theme is hardly surprising.

    • Scott Alexander says:

      I was hoping no one would notice a little bit of self-editing I needed to keep the theme consistent.

  117. Erik says:

    judging by how many ex-Quiverfull blogs I found when searching for those statistics, their retention rates even within a single generation are pretty grim. Their article admits that 80% of very religious children leave the church as adults (although of course they expect their own movement to do better). And this is not a symmetrical process – 80% of children who grow up in atheist families aren’t becoming Quiverfull.

    It looks a lot like even though they are outbreeding us, we are outmeme-ing them, and that gives us a decisive advantage.

    I think this only gives you the short-term advantage, because now the Quiverfull movement is also selecting for retention and resistance to your memes. And it would be more accurate to say that 80% of children who grow up in atheist families don’t exist if the atheists are only having 20% as many children as the Quiverfull families. (your count is 25%, close enough.)

    The creationism “debate” and global warming “debate” and a host of similar “debates” in today’s society suggest that the phenomenon of memes that propagate independent of their truth value has a pretty strong influence on the political process. Maybe these memes propagate because they appeal to people’s prejudices, maybe because they are simple, maybe because they effectively mark an in-group and an out-group, or maybe for all sorts of different reasons.

    This looks like a chance for me to exercise something resembling post-cynicism: don’t worry that the dishonest X-ists are doing so well, the Y-ists were dishonest too! Both sides of many of these “debates”, in my impression, have been subjected to Moloch and tossed out their honesty in favor of finding ways to better coordinate one’s side and lambast the other side and make plays for social status and attempt to use the machinery of government to ban the other side, and since my enemies are so terrible, maybe I can get some government funding to write about Why Those People Threaten Our National Sanity? I’m sure this is a very important use of your money.

    As far as I can tell from reading his blog, Nick Land is the guy in that terrifying border region where he is smart enough to figure out several important arcane principles about summoning demon gods, but not quite smart enough to figure out the most important such principle, which is NEVER DO THAT.

    I expect I would agree with most of your predictions regarding how Land would act, but if asked to comment I think he’d disagree a little with this – my impression is that he has figured out the principle and rejected it.

    I have more opinions, perhaps I will sort them into a post at More Right or something, because this is becoming a wall of text squished into the thin column of a comment space.

    • Athrelon says:

      Note Amish retention rates improved dramatically over the 20th century despite virtually no attempt to water it down to make it more attractive. Retention rates for new religious movements seem terrible in general but some combination of selection pressure, increasing sense of venerability, and differential breeding has made the Amish very much more successful than you would have expected in 1900.

      • nydwracu says:

        How quickly does the process operate?

        Here’s some Ruby code to simulate, in case you want to mess with it: http://pastebin.com/HrgqUkyK

        I ran it with:
        atheists = 90000000
        atheist_birth_rate = 1.4
        christians = 8000000
        christian_birth_rate = 4.0
        quiverfulls = 2000000
        quiverfull_birth_rate = 8.0
        a_c_defect_rate = 0.1
        a_q_defect_rate = 0.0
        c_a_defect_rate = 0.3
        c_q_defect_rate = 0.05
        q_a_defect_rate = 0.6
        q_c_defect_rate = 0.05
        generations = 3

        and defector_decay_factor needs to be 0.4 for the Quiverfulls to outnumber the atheists within three generations. That seems pretty high: 60%. But there should be empirical data available for the Amish, so you could use that to figure out what to set the decay factor to.
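
        For anyone who doesn’t want to click through to the pastebin, here is a rough sketch of what a simulation like this might look like. To be clear, this is a reconstruction, not the pastebin code: the update order, the birth-rate convention (children per couple, so the per-generation multiplier is birth_rate / 2), and especially the meaning of defector_decay_factor (read here as multiplying all defection rates each generation, i.e. groups selecting for retention) are guesses, so don’t expect it to reproduce the exact numbers above.

        # Rough reconstruction of the population/defection simulation (not the pastebin code).
        pops = { atheist: 90_000_000, christian: 8_000_000, quiverfull: 2_000_000 }
        birth_rates = { atheist: 1.4, christian: 4.0, quiverfull: 8.0 } # children per couple
        defect_rates = { # [from, to] => share of each new generation that switches groups
          [:atheist, :christian] => 0.1,  [:atheist, :quiverfull] => 0.0,
          [:christian, :atheist] => 0.3,  [:christian, :quiverfull] => 0.05,
          [:quiverfull, :atheist] => 0.6, [:quiverfull, :christian] => 0.05,
        }
        defector_decay_factor = 0.4
        generations = 3

        generations.times do |g|
          # each couple has birth_rate children, so the population multiplier is birth_rate / 2
          next_gen = pops.map { |group, n| [group, n * birth_rates[group] / 2.0] }.to_h
          # move defectors between groups
          defect_rates.each do |(from, to), rate|
            moved = pops[from] * birth_rates[from] / 2.0 * rate
            next_gen[from] -= moved
            next_gen[to] += moved
          end
          # one guess at the decay factor: defection rates shrink as groups select for retention
          defect_rates.each_key { |k| defect_rates[k] *= defector_decay_factor }
          pops = next_gen
          puts "generation #{g + 1}: " + pops.map { |grp, n| "#{grp}=#{n.round}" }.join(", ")
        end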

        Would be more interesting to run it for the Amish (low rate of defection) than the Quiverfulls (high rate of defection).

        The future belongs to whoever shows up for it…

    • gwern says:

      It looks a lot like even though they are outbreeding us, we are outmeme-ing them, and that gives us a decisive advantage.

      I’m also interested in the extent to which you actually can outmeme a religion. We know that religious beliefs seem to be predicted by various personality and cognitive dispositions detectable in early childhood, even (I’ve done some posts on LW about some of that research); we know that religious beliefs and related dispositions are as heritable in the twin studies as anything else and so highly likely have some genetic base; we know personality traits likewise; there’s some interesting speculation about subpopulations systematically differing in personality, which in the case of the Amish ( http://westhunt.wordpress.com/2014/02/07/inferring-an-aq/ ) starts to look suspiciously genetic, and inasmuch as the Amish have exploded from <1000 founders to somewhere upwards of a quarter of a million last I checked and still growing fast, they clearly are not in a situation where all their excess growth is being bled off by memetic predation.

      So we have all the pieces for selection towards durability of religious subpopulations: genes to personality/cognition to religious inclination to specific religion to excess reproduction (and relative sterility of those bled off to the general population) to higher inclusive fitness to spreading in the population/gene pool.

      • Paul Torek says:

        Maybe we need an atheist religion or two? Orientations or systems (as needed) that satisfy the urges that “religiosity” genes code for.

        • Multiheaded says:

          ☭☭☭ Comrade X died for the world’s sins; We Shall Overcome ☭☭☭

        • Andy says:

          My eyes have seen the glory of the coming of the Law,
          For all who have been broken by unfeeling avarice,
          Their footsteps ring upon the earth, to serve the wicked notice,
          Justice is marching on.

          (Chorus)
          Glory, glory, Justice marches!
          Glory, glory, we will be free,
          Glory, glory, hear the footfalls!
          Justice is marching on.

          They march upon the mountains, and they march across the plains
          To where the wicked oppressor is secure in their domains
          They’re loosing all the shackles, and they’re breaking all the chains,
          Justice is marching on.

          (If you are American and can’t recognize the source music for this, use your Exit rights please.) 😛

        • Multiheaded says:

          @Andy

          Well, yeah, but it needs to be a bit… grittier, IMO.

          [Hitler] has grasped the falsity of the hedonistic attitude to life. Nearly all western thought since the last war, certainly all “progressive” thought, has assumed tacitly that human beings desire nothing beyond ease, security and avoidance of pain. In such a view of life there is no room, for instance, for patriotism and the military virtues. The Socialist who finds his children playing with soldiers is usually upset, but he is never able to think of a substitute for the tin soldiers, tin pacifists somehow won’t do.

          Hitler, because in his own joyless mind he feels it with exceptional strength, knows that human beings don’t only want comfort, safety, short working hours, hygiene, birth-control and, in general, common sense, they also, at least intermittently, want struggle and self-sacrifice, not to mention drums, flags and loyalty-parades. However they may be as economic theories, Fascism and Nazism are psychologically far sounder than any hedonistic conception of life. The same is probably true of Stalin’s militarized version of Socialism. All three of the great dictators have enhanced their power by imposing intolerable burdens on their peoples. Whereas Socialism, and even capitalism in a more grudging way, have said to people “I offer you a good time,” Hitler has said to them “I offer you struggle, danger and death,” and as a result a whole nation flings itself at his feet. Perhaps later on they will get sick of it and change their minds, as at the end of the last war. After a few years of slaughter and starvation “Greatest happiness of the greatest number” is a good slogan, but at this moment[1] “Better an end with horror than a horror without end” is a winner. Now that we are fighting against the man who coined it, we ought not to underestimate its emotional appeal.

          Ok, Orwell is taking it too far towards his (kinky but not unusual) personal preference. Still, we must consider the general qualities of this aesthetic, and the lasting demand for them.

          [1] March 1940

        • Andy says:

          Ok, Orwell is taking it too far towards his (kinky but not unusual) personal preference. Still, we must consider the general qualities of this aesthetic, and the lasting demand for them.

          Yeah, the song is from a science fiction universe I have slowly cooking, where a monarchic nation is born in a slave revolt. I’m not exactly a socialist, and neither is King Hank I (who adopts this version as his fledgling nation’s anthem) but he came out of some really fucked-up hypercapitalist slavebreeding bullshit and built an aristocratic, capitalist nation dedicated to the pursuit of justice. I wanted to blend Reactionary and Progressive doctrines and got someplace very, very strange. The value of the aesthetic of struggle, danger, and death doesn’t escape me, but it’s kind of hard to stick in a technological culture designed to last in both peace and war.

        • Toby Bartels says:

          In North America, there’s Unitarian Universalism. A non-theistic religion, descended ultimately from New England Puritans, whose doctrine is Progressivism.

        • Jaskologist says:

          It’s been done; what do you think Communism was?

          You could also claim Objectivism or Nietzscheans.

        • nydwracu says:

          Of course that would be the source.

  118. Said Achmiz says:

    This is beautiful.

    The quote from Bostrom, about consciousness being outcompeted, is strongly, strongly reminiscent of Charles Stross’s Accelerando:

    “Early upload entrepreneurs forked repeatedly, discovered they could scale linearly to occupy processor capacity proportional to the mass of computronium available, and that computationally trivial tasks became tractable. They could also run faster, or slower, than real time. But they were still human, and unable to operate effectively outside human constraints. Take a human being and bolt on extensions that let them take full advantage of Economics 2.0, and you essentially break their narrative chain of consciousness, replacing it with a journal file of bid/request transactions between various agents; it’s incredibly efficient and flexible, but it isn’t a conscious human being in any recognizable sense of the word.”

    In the novel, the posthuman minds that are humanity’s descendants — the “Vile Offspring” — utterly outcompete humans. (Their eventual actions are driven by value systems that we no longer understand, and that are thus sufficiently mysterious as to allow the author to decide that the Vile Offspring refrain from destroying us. We should not depend on having a similar narrative protection in real life.)

  119. Armstrong For President 2020 says:

    I cannot adequately express my appreciation for this post, so I’m going to leave it at “this is very good”

    Anyway, my first thought when reading this was actually “this sounds a lot like Evola” which means I might actually be able to add something, since I’m pretty sure most people here haven’t read any more than (at most) Anisimov’s cliffnotes version.

    In Evola’s view, ancient people weren’t half as stupid as a naive modernist interpretation of their stories would lead one to imagine; most of those stories were more like mnemonics (look at the imagery of a typical memory palace; bizarre symbols are sticky ones) for fundamental spiritual principles, or as the Greeks would put it, principles of Reason. And as such their gods and demons weren’t big oddly-colored people sitting around in unlikely locations casting spells, at least outside of the exoteric level which was usually reserved for the illiterate masses, but impersonal numinous forces which operated by somewhat-comprehensible rules and could thus be bargained with (du ut des anyone?) or even defeated.

    The big division in his perennial Tradition was between the Uranian/divine northern light and the Tellurian/demonic southern light. This isn’t a good/evil distinction, because morality is personal and subjective and these are impersonal objective forces, so much as one between superhuman and subhuman striving; following the northern light leads the greatest men to embrace their divine nature, the southern light to embrace their animal nature. Transcendence versus subsistence, as illustrated by the difference between how the Pharaoh spoke to his gods before a ritual (“O Gods, you are safe if I am safe / Your doubles are safe if my double is at the head of all living doubles / Everybody lives if I live”) and how a Christian is supposed to ritually address his god (“Father, hallowed be your name / Your kingdom come / Give us each day our daily bread / And forgive us our sins”).

    And he was always very clear that it is in no way certain that the Transcendent force will win in the end, whether in the case of an individual or the cosmos.

    There are a lot of interesting aspects of this comparison, but the most immediate takeaway for me is that Elua / CEV might not be your best bet on getting out of the coordination trap / age of destruction. Most of our urges, even (especially) the moral urges, will generally point towards the path of subsistence because after all that’s why they evolved in the first place. To orient yourself towards transcendence you need a spiritual center which is capable of effecting change without being changed itself. In other words, I’m not sure we can beat the “dark gods” of materialistic competition without an esoteric God of spirituality to keep us on an even keel. Even ‘secular’ philosophies like Stoicism or Pythagoreanism had a lot more bite than most modern churches because of their esoteric character; between this and your post on ecclesiology it makes me wonder if you have fully appreciated the value of incorporating a spiritual / esoteric element into the Rationalist movement and/or society at large.

    • Darcey Riley says:

      This relates to something I’ve been thinking for a while, about how important it is for us to have ideals, particularly ones beyond “help other human beings” and “maximize human utility”, which tend to be susceptible to wireheading given the mutability of human reward functions.

      It seems like human values typically fall victim to regression to the mean; before this post I conceived of this in terms of humans imitating one another. That is, most humans want to follow some sort of social standard, so they strive to be at least as moral as the average person. But a few people just don’t care and behave immorally; they bring the average morality down. And so the rest of humanity, seeing their poor behavior, feels like they can be a little more lax in following the culture’s moral standards. Iterate this process, and everything eventually degrades to complete immorality. This is why we can’t just base our standards on what other humans are doing, but instead need to strive towards some fixed unmoving Ideal that cannot be corrupted by the activities of human society. (I assume Evola is writing about something similar to this? I’ve been meaning to read his books for ages, but haven’t gotten around to it yet.)

      • Armstrong For President 2020 says:

        Pretty much, though going into Revolt Against the Modern World with a head full of game theory will absolutely give you mental whiplash. His perspective is already very difficult to comprehend from a modern standpoint, so anything you can do to shorten the inferential distance is good.

        One of the problems he illustrates is the fragmentation of the sciences, or more precisely of their languages. An alchemist, a mason, a knight, and a priest all spoke with the same symbolic alphabet, which allowed them to see one another as part of one larger worldview and understand each other’s parts in it. Today we have a dozen different lenses which leave us with “separate magisteria” or pointless conflict.

        I recommend using Marcus Aurelius’s view of things; either there is providence or atoms, but either way certain truths hold.

      • Kaj Sotala says:

        It seems like human values typically fall victim to regression to the mean; before this post I conceived of this in terms of humans imitating one another. That is, most humans want to follow some sort of social standard, so they strive to be at least as moral as the average person. But a few people just don’t care and behave immorally; they bring the average morality down. And so the rest of humanity, seeing their poor behavior, feels like they can be a little more lax in following the culture’s moral standards. Iterate this process, and everything eventually degrades to complete immorality.

        You may be interested in this piece, arguing for exactly that having happened for a number of things over the last century.

      • Charlie says:

        What About The Gradual Decrease In Violent Crime ™?

      • Anonymous says:

        Conversely, morals have improved greatly in other ways over the past centuries and are still improving. A fixed ideal would not allow that, and this is problematic if the ideal chosen isn’t the best of all possible ideals. I would claim as obvious that we are incredibly unlikely to pick that ideal initially.

    • Multiheaded says:

      A really great opinion of Evola:

      If a king wants to prioritize “transcending reality,” the French have a wonderful device to help with that, I hear.

      • Andy says:

        Content warning: If anyone follows that Tumblr expecting more politics stuff, expect to be disappointed, because that Tumblr is mine and mostly it’s anime schoolgirls with big weapons fighting monsters.

  120. Nestor says:

    Thanks for articulating why I’m not going to have children.

    • MugaSofer says:

      I suspect you may be missing the point.

      How old are you? The current best estimate is still singularity at 2045, as far as I know.

      (“If that statement starts to chill you after a couple of moments’ consideration, then don’t be alarmed. A feeling of intense and crushing religious terror at the concept indicates only that you are still sane.”)

      • jaimeastorga2000 says:

        Best estimate? That’s just Kurzweil. He doesn’t even use the term “singularity” in the sense of localized, rapid intelligence explosion. Eliezer’s 2011 estimates said that he would be very surprised if AI hadn’t been invented by 2111 and a little surprised if it hadn’t been invented by 2061.

        • Vilhelm S says:

          What does “a little surprised” mean though? It sounds like assigning more than 50% probability to it being invented before 2061, which is not so far from Kurzweil.

  121. Kaj Sotala says:

    I feel the need to plug my Technology will destroy human nature essay, where I basically talked about (a part of) the same thing, and about various physical limits that are currently stopping us from racing to the bottom but which technology will eventually overcome. (Sidenote: I’ve been thinking about expanding that post to a formal paper, but I’d need an evolutionary biologist as a co-author. Any takers?) Also, Nick Bostrom’s essay The Future of Human Evolution, where I originally encountered these concepts.

    Scott Aaronson’s Malthusianisms is also relevant. “Again and again, I’ve undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that its not sucking wouldn’t have been a Nash equilibrium.”

  122. Darcey Riley says:

    Thank you for this. It is beautiful and awe-inspiring and reweaving the way I think about the world. (And so, in return for it, I will make you zeugmas and smile!)

    This post (and various other things that have happened to me lately) are leading me to rethink the oft-repeated contrast between nature and technology. For a very long time I’ve thought that all attempts at creating the singularity will either lead to this sort of horror (which resembles your post) or this sort of horror; that is, an AI that either destroys the world through an excess of chaos, or one that destroys the world through an excess of order. And I thought the best way to counteract these alternatives would be to simply stop building AIs, and let the laws of nature guide us instead. But now, thanks to your post, I understand that it’s the laws of nature themselves that produce the AIs, when left to their own devices.

    And so this is an interesting refactoring of the perspective I’m accustomed to: not technology vs. nature, but Moloch vs. Coordination. But there’s one thing I’m still not clear on (and apologies if this was addressed earlier in the comments, which I haven’t read yet): coordination itself seems to be one of Nature’s Gods. We started out with single-celled life, and those cells learned to coordinate with one another to make multicellular organisms. We started out with bacteria invading cells, but they learned to coordinate with one another and we got mitochondria. We started out with every man for himself, but we learned to coordinate with each other, and we got societies. Based on these examples, Coordination seems to be a very powerful God, who is perhaps at war with the other Gods you named. And so your post leads me to wonder: what are the conditions under which Coordination wins, and what are the conditions under which Moloch wins? (I assume they collaborate sometimes, e.g. in democracy and possibly capitalism, and in the evolutionary examples I just gave. Maybe Coordination wins when he aligns himself with Moloch?)

    Anyway, based on your post (and also the Universe basically walking up to me recently and saying “Darcey, you need to work on NLP for friendly AI”), I have found myself faced once again with this question:

    Ah Love! could thou and I with Fate conspire
    To grasp this sorry Scheme of Things entire,
    Would not we shatter it to bits—and then
    Re-mould it nearer to the Heart’s Desire!

    And I think I’m finally ready to answer it in the affirmative.

    Thanks again.

    (Edit: oh wait you addressed Coordination as a god by mentioning Elua, and tons of previous commenters have talked about Elua. So nevermind about most of this, although I would still like to understand why Elua triumphs, aside from “He just does.” But can we please not portray Coordination as a god of niceness and happiness and flowers? Because that very narrow depiction of human values is part of what’s terrified me so much about rhetoric of the FAI movement. See for example: wireheading, and the link I gave above to Metamorphosis of Prime Intellect.)

    • Kaj Sotala says:

      Because that very narrow depiction of human values is part of what’s terrified me so much about rhetoric of the FAI movement. See for example: wireheading, and the link I gave above to Metamorphosis of Prime Intellect.

      Using Metamorphosis of Prime Intellect as an argument against the FAI rhetoric is a bit odd, given how a large part of the FAI rhetoric is explicitly saying that “FAI is important to get right, or otherwise we might end up with a MoPI scenario”. The original CEV proposal even explicitly mentions MoPI and With Folded Hands as illustrations of an FAI gone wrong, and Eliezer’s written one such illustration of his own.

      See also the Fun Theory sequence and Value is Fragile.

      • James James says:

        MOPI didn’t sound too bad. It’s hard to improve upon without lying to people, manipulating them, and/or changing them, like Friendship is Optimal, which does indeed give people fulfilled lives.

      • Darcey Riley says:

        I see Metamorphosis of Prime Intellect as a good example, because it shows that people can try really hard to make an FAI and still get it really really wrong. I mean, the AI in MoPI is pretty overly simplistic, but it’s still a good demonstration of this general principle. So that’s why I contrasted it with your post: your post shows AI run amok, without humans attempting to guide it into something reasonable; MoPI shows AI run amok, despite humans’ attempts to guide it into something reasonable.

    • Scott Alexander says:

      If I really had any part in that decision, I am going to stick it right near the top of my achievements list. Good luck!

      Also, I think you and I might have different aesthetic associations with niceness and happiness and flowers (although you remain my favorite flower photographer). I’m not sure how to make the associations commensurable, except that you might want to try reading Lewis or Chesterton or Carey to get what I called a sense of active, terrifying Good.

      • Deiseach says:

        The bit about Elua being god of flowers and niceness made me think you were channelling Arthur Conan Doyle in “The Adventure of the Naval Treaty”:

        “Thank you. I have no doubt I can get details from Forbes. The authorities are excellent at amassing facts, though they do not always use them to advantage. What a lovely thing a rose is!”

        He walked past the couch to the open window, and held up the drooping stalk of a moss-rose, looking down at the dainty blend of crimson and green. It was a new phase of his character to me, for I had never before seen him show any keen interest in natural objects.

        “There is nothing in which deduction is so necessary as in religion,” said he, leaning with his back against the shutters. “It can be built up as an exact science by the reasoner. Our highest assurance of the goodness of Providence seems to me to rest in the flowers. All other things, our powers, our desires, our food, are all really necessary for our existence in the first instance. But this rose is an extra. Its smell and its color are an embellishment of life, not a condition of it. It is only goodness which gives extras, and so I say again that we have much to hope from the flowers.”

        Percy Phelps and his nurse looked at Holmes during this demonstration with surprise and a good deal of disappointment written upon their faces. He had fallen into a reverie, with the moss-rose between his fingers. It had lasted some minutes before the young lady broke in upon it.

        “Do you see any prospect of solving this mystery, Mr. Holmes?” she asked, with a touch of asperity in her voice.

        “Oh, the mystery!” he answered, coming back with a start to the realities of life.

    • CalmCanary says:

      Elua is not Coordination; Coordination is merely one way we can hold off Moloch (along with surplus resources, physical limitations, and the fact that everyone hates Moloch, as mentioned in the post).

      As for where coordination comes from, consider your example of multicellular life. If single-celled organisms can be said to have a value, which they can’t, it is dividing as much as possible so as to outcompete other such organisms. In joining together into a larger organism, they sacrifice this sole value and restrain their division so as to allow the whole to survive, thus gaining an edge over organisms which do not unite. Roughly the same process explains human societies.

      In other words, Coordination is born when Moloch eats itself.

  123. Reading this made me feel like my brain was just pressure washed. Ah well, pressing on!

    I’ve often heard Christian intellectuals say things like “God isn’t some stern old codger up in the sky, God is the ontological basis of reality.” It seemed… plausible enough, in its own strange way. But before hearing your ideas on Moloch I never considered the possibility of “yes but what if the ontological basis of reality is a total jerk?”

    And also, hearing your description of that moment when you saw Moloch… made me immediately think of this scene from Metropolis.

    I had read Land’s description of the “four horsemen of Gnon” before, and being the hopeless Romanticist that I am I immediately set to thinking of gods that could rival the four horsemen of Gnon – aspects of reality that are for the most part “on humanity’s side”. I could only come up with two:

    Prometheus — Intelligibility
    Not the fact that the universe follows laws, but the fact that those laws can be deciphered, and expressed in formulas and algorithms and trends. The fact that the universe is not a black box or total chaos, but something that can be comprehended by the human mind.

    Imhotep — Artifice
    The fact that things can be built. The fact that in small areas, entropy can temporarily be overcome. The fact that matter can be acted upon in ways that add order.

    I’m fairly certain someone could find a way to reframe Prometheus and Imhotep as aspects of Gnon though…..

    • Erik says:

      I have a suggestion for a third.

      Tyr – Oathkeeping
      The Norse god of law who gives an arm to bind the Fenris Wolf. The fact that people can make personal sacrifices where the benefits mostly accrue to other people. The fact that people express gratitude for this and track reputation. The power of making and keeping promises, encouraging people to do so, punishing traitors and in extreme cases even punishing traitors who defect to your side.

      (If you want to argue that the “reputation economy” is valuable enough that Tyr “isn’t sacrificing” when he gives up his arm, then you can substitute this with “the power of the reputation economy to create decentralized coordination” or something similar. Binding the Fenris Wolf is still a big win!)

      • nydwracu says:

        Tyr, the binder of contracts; Enki, the creator of customs; Forseti, the coordinator…

        • Andy says:

          Enki, creator of customs, could well end up as a neutral trickster, the kind who gets Tyr to enforce harmful contracts because “that’s the way it’s always been.”
          Enki who tricked Tyr into banning women from driving. Enki who tricked Tyr into enforcing foot-binding or genital mutilation or stoning gay people to death because “that’s what’s in the rules.”

        • nydwracu says:

          There’s tricksters and then there’s tricksters who are so much smarter than you that you can’t ever tell whether they’re tricking you or not.

  124. Pleeppleep says:

    “To the orthodox there must always be a case for revolution; for in the hearts of men God has been put under the feet of Satan. In the upper world hell once rebelled against heaven. But in this world heaven is rebelling against hell. For the orthodox there can always be a revolution; for a revolution is a restoration.”
    -G.K. Chesterton “Orthodoxy”

    “Christianity agrees with Dualism that this universe is at war. But it does not think this is a war between independent powers. It thinks it is a civil war, a rebellion, and that we are living in a part of the universe occupied by the rebel.

    Enemy-occupied territory—that is what this world is. Christianity is the story of how the rightful king has landed, you might say landed in disguise, and is calling us all to take part in a great campaign of sabotage.”
    -C.S. Lewis “Mere Christianity”

    Is it me or did Scott just spend however long it took to write this colossus proving Chesterton and Lewis’s impression of Christianity?

    • Viliam Búr says:

      Only if you believe that Christianity is everything good, and that everything good is Christianity.

      Christianity has also sacrificed some value to Moloch in exchange for getting more power. All the witches burned, heretics killed, children abused, etc.

      If we ignore all this, then yeah, Christianity is good. But if we are already going so far, why not ignore Christianity as a whole, and just say that people have the capacity to be good?

      • Harald K says:

        Pleeppleep is referring explicitly to Christianity as interpreted by Chesterton and Lewis, not to all the sorts of Christianity that have ever existed.

        They are right that there is similarity. This is basically the problem of evil, reformulated as to be relevant to atheists (as it should be), and SA’s “atheodicy” is surprisingly similar to these Christian writers’.

      • Erik says:

        Christianity has also sacrificed some value to Moloch in exchange for getting more power. All the witches burned, heretics killed, children abused, etc.

        This makes no sense. The witches were the ones (putatively) getting their power from Moloch.

      • Eli says:

        Nah, Christianity is a load of crap that tries to comfort people by saying that if they believe in Jesus they can have nice things in the afterlife, even while they deal with the sufferings of their real lives in, you know, their real lives.

        If we insist on keeping this “Moloch” character, Christianity mostly just apologizes for him and asks us to keep submitting to him.

    • Randy M says:

      An essay on how the gods of this world have a wicked and nigh-irresistible temptation, and we must overcome them and hew to the only god worth serving? Yes, my thought was that Scott was recapitulating Lewis as well.

    • MugaSofer says:

      Yup.

      Or, well. If it’s you, then at the very least it’s not just you.

    • caleb says:

      My thoughts as well. I read this post as essentially a genre-transcribed, slightly condensed recapitulation of “That Hideous Strength.”

      • ozymandias says:

        But MIRI is the villain of That Hideous Strength.

        • Nick T says:

          The difference between trying to immanentize the eschaton yourself, and trying to bring God into the world in a form self-correcting and free of original sin to do it instead.

          … OK, there’s probably no way to make this non-blasphemous in any normal Christianity, but in reality the difference is huge.

        • Scott Alexander says:

          Except I’m pretty sure Eliezer isn’t a demon. Demons would be more…what’s the word…suave.

        • Pleeppleep says:

          @Scott

          That could just be what he wants you to think…

        • caleb says:

          What Nick said.

          Also, not necessarily. If MIRI is nearly as concerned with the subversion of human meaning and aesthetics to the infernal logic of “Moloch” as Scott obviously is, then they are not the NICE.

    • Multiheaded says:

      As I, Oligopsony, and maybe even Zizek would agree, the value of Christianity-as-Christianity (meaning actual Christ and sacrifice and all) should be investigated further insofar as we do reasonably suspect it to hold the key to acausal POWAH. I personally think it does.
      The lesser Christian memes are just nice but certainly not domain-specific. They might well have been known as Islamic memes had history turned out slightly differently.

      • Pleeppleep says:

        Now, when you say “acausal POWAH” you mean…?

        Something interesting I imagine?

        • Multiheaded says:

          Basically imagine some uplifting story about super-coordination in the name of emancipating the oppressed. Like this.

          With the cotton industry on its knees, Lincoln acknowledged the self-sacrifice of the ‘working men of Manchester’ in a letter he sent them in 1863. Lincoln’s words – later inscribed on the pedestal of his statue that can still be found in Lincoln Square, Manchester – praised the workers for their selfless act of “sublime Christian heroism, which has not been surpassed in any age or in any country.”

          Brings a (figurative) tear to my eye every time.

          (For a more practical example, see the Underground Railroad.)

      • Oligopsony says:

        This isn’t even the first post of yours in this thread that makes me wish there were still Muflax’s archives to refer you to, assuming you hadn’t read them whilst they were still immanent.

        • Multiheaded says:

          I did read “Bagbybtvpny Gurencl”, but not much of his other stuff.

          P.S. rot13 gives everything a fancy Black Speech effect if you highlight it.

        • a distant mumbling says:

          @Multiheaded

          (Doesn’t matter. It’s mostly blathering nonsense, except for transformations invoked in the author, who tried hard not to go insane in an abusive shithole of a life.

          (Things are better now.)

          You don’t need those. (Those transformations specifically. Others, yes, of course.)

          (I hope the reason Scott etc similarly write so much blathering nonsense is for similar reasons. (Well, “hope”.) Hard to hate on suffering people engaging in escapism.)

          Nonetheless, both our crazy is just so obviously from a shared source (or similar-enough environment, same difference), on an object and meta level, that I’d be surprised if we won’t talk much more soon-ish.

          (Time estimates are still a weak spot of the author, so that probably means “later this century” in practice. Sighing.)

          (And so I don’t leave reasons to break my silence hanging around until Important Shit Got Done By Me, lemme say that the work you’re doing here (etc) makes me happy, both because you do a lot of the trolling duty I currently don’t have the resources for, and because you keep me angry at bullshit and evil I would otherwise slowly tune out, and did tune out in the past. Seriously, thanks for the agitation. (Ditto @Oligopsony.))

          (This was mostly foreshadowing in order to transform the author’s state of mind into something more productive (and/or for higher powers to exploit). Mystical language always seems so inefficient to me, but when you’re running on a hijacked murder monkey, you gotta dangle some toys around sometimes.)

          (Also, it’s gaming night and I’m trapped in the bedroom. Alas.))

  125. JPH says:

    Sad and scary. Thank you for introducing a poem published in 1956 that is resonating today.

  126. Froolow says:

    I just want to nth that this is some of the best stuff you’ve written.

    I’m particularly impressed with the depiction of the Moloch character being a physical embodiment of my discomfort with statements like, “Capitalism / Patriarchy / Government is a system which disadvantages everyone and everyone would be better off if it disappeared”. Even if that were true, it’s not the real issue.

  127. Thank you for writing this. Unfortunately, I’m not as good at engaging with really huge abstract concepts through metaphor as I’d like to be, so I’m just going to straight-up ask:

    Does lifting Elua to Heaven specifically mean building Friendly AI? Or is it meant to be broader than that?

    I’d like to think that other actions that make the world better in some small measure count too.

  128. EoT says:

    13. Government corruption. I don’t know of anyone who really thinks, in a principled way, that corporate welfare is a good idea. But the government still manages to spend somewhere around (depending on how you calculate it) $100 billion dollars a year on it – which for example is three times the amount they spend on health care for the needy. Everyone familiar with the problem has come up with the same easy solution: stop giving so much corporate welfare. Why doesn’t it happen?

    What? Medicaid spending is over $430 billion (2012). Federal spending is over half of that… If you include Medicare that more than doubles.

    Unless I’m missing something?

    • Gentzel says:

      You are correct, I believe; I noticed this mistake too. Medicare is big.

      Additionally, I do know people who think corporate welfare is a good thing… but that is only because of war.

  129. Rob says:

    Here is a thing that follows very very obviously from the piece, and yet is not explicitly stated in the piece AFAICT:

    Making a friendly AI is a classic Moloch sacrifice scenario:
    There are several projects trying to make strong AI, and whichever project achieves recursive self-improvement first, ‘wins’. Thus there can be only one ‘winner’, and the AI projects are in very strong competition with one another.
    Making AI in general is easier than making Friendly AI, which means Friendliness is something you can sacrifice in exchange for faster progress. Any project that expends resources on making sure their AI is Friendly will be beaten to the punch by one which does not.

    So coordinating between AI research projects is The Pivotal Battle. If we defeat Moloch here, we have a chance to defeat him for good. If he wins this battle, he wins the war.

    So, uh… what’s the battle plan?

    • Xycho says:

      There isn’t one. Either we get UFAI which behaves in the way which is most sensible given that it is basically an all-powerful God, and we all die very quickly with nothing to mourn us, or we get FAI which is somehow neutered to behave as we would all like other people to behave, and we win the universe.

      AI is a hard problem, but not that hard. FAI is a very, very hard problem, so we’re about as screwed as it’s possible to be. We do, however, get the extreme (if short-lived) satisfaction of knowing that it was us who destroyed the universe. That’s not nothing, if anyone’s keeping score from outside.

      • anon says:

        Human modification via cybernetics would allow us to mostly sidestep the FAI problem…

        • James Miller says:

          To be replaced with the friendly human cyborg problem.

        • Xycho says:

          I don’t think that’s a solution. If I, personally, were an AI I would be considered profoundly UnFriendly; my implementation of this is limited only by technology and funds. Assuming that even one or two of the very rich people who would be first to acquire cybernetic enhancement had similar goals to mine, I would rate the survival prospects of everyone else in the low weeks, at most. Since there is a known correlation between business success (i.e. wealth) and psychopathy, cybernetics have some quite terrifying implications.

        • anon says:

          I agree with that analysis more or less. Inequality becomes much worse if billionaires have cybernetics and normal people don’t. But, I prefer that world to one with an FAI. I think the billionaire would eventually give away the technology, or die, or have their descendants outcompete the normals, or have technology stolen from them, so we’d enter an equal society in the long run.

          (I don’t think that cybernetics will make the billionaire so strong that they can never be overcome by any normal human. Technology isn’t magic, we won’t ever have infinite energy.)

          Additionally, I want to fight inequality now, before the new technologies show up, because I agree an unequal takeoff would be bad news.

          There is of course a reasonable chance the billionaire takes over everything. But I consider that outcome preferable to nonexistence. If the billionaire kills everyone, at least there is a vaguely human singleton rather than a machine singleton. And since this path gives us at least a higher chance of utopia than the FAI attempt, in my opinion, it’s really a clear win all around – lower magnitude risks, higher probability success.

          I think you’re probably somewhat unfriendly, but that you’re less unfriendly than an FAI attempt would be. Why would you destroy the world? I also think that unfriendliness potential will be mitigated if multiple agents get enhancements.

        • Xycho says:

          I would destroy the world because if I’m the first to get cybernetics my endgame is 1: Upload. 2: Convert entire remainder of reality into computer hardware. 3: Simulate new universe. 4: Be the sort of God the Old Testament authors were too squeamish to write about.

          Earth and its population would disappear in the first moments of step 2. I don’t want anything that thinks even remotely like me to acquire that sort of power, ever – though as I said in a comment somewhere above, I’d find it utterly hilarious for a very short period of time.

      • Eli says:

        FAI is a very, very hard problem

        I like how you say that as if it meant “impossible”.

        • Xycho says:

          I don’t. It’s not even close to impossible. It’s just (in my estimation) at least two orders of magnitude harder than General AI with a nonspecific utility function.

    • Error says:

      Work on Friendliness separately and hopefully solve it before the AGI problem itself is solved. Then share the results with all the individual AI projects.

      …at least that’s the impression I have. I’m not exactly part of the cause; add salt as needed.

    • Will says:

      What Error said, plus:

      1. Raise awareness, so that as high a percentage as possible of the AI researchers smart enough to make significant progress understand the importance of FAI.

      2. Form a single team of FAI-aware AI researchers. Make sure they are as smart as possible.

      3. Don’t share with the other AI teams any breakthroughs that could be used to make unfriendly AI.

      4. Hope that your team is smart enough to beat the others, even while being much more careful.

  130. Hrothgar says:

    Just as the course of a river is latent in a terrain even before the first rain falls on it – so the existence of Caesar’s Palace was latent in neurobiology, economics, and regulatory regimes even before it existed. The entrepreneur who built it was just filling in the ghostly lines with real concrete.

    Really good analogy.

    This is the much-maligned – I think unfairly – argument in favor of monarchy. A monarch is an unincentivized incentivizer. He actually has the god’s-eye-view and is outside of and above every system. He has permanently won all competitions and is not competing for anything, and therefore he is perfectly free of Moloch and of the incentives that would otherwise channel his incentives into predetermined paths. Aside from a few very theoretical proposals like my Shining Garden, monarchy is the only system that does this.

    This sentiment recalls the philosopher-kings of Plato’s Republic — the only just society is one with centralized power.

  131. Blogospheroid says:

    Very good essay. Not a lot of new material for those who follow the debate, but an excellent, emotionally loaded one-stop shop for the friendly AI meme.

    Comments

    Is the current triumph of Elua a permanent feature of the world or an aberration? Even away from our little corner of the net, people are talking about the setting of the Atlantic powers and the rise of the more traditional societies, the BRICS bank and all. Hollywood is currently modifying content to satisfy the Chinese market. Will this lead to such a change that content with traditional values dominates in a few years?

    Isn’t Elua subject to dilutions of its own? In Eliezer’s Three Worlds Collide, doesn’t the compromise formula mean that the values of others get adopted more and more, leaving the original peace-and-love formula a smaller and smaller part of the utility function?

  132. Mercer says:

    Moloch (and the capitalist bit in particular) reminded me of this bit in Nick Harkaway’s ‘The Gone-Away World’ (also, in general, one of my favourite books ever, although it’s much more action-packed and ninja-heavy than the quote below might suggest):

    “Suppose you are Alfred J. Fingermuffin, capitalist. You own a factory, and your factory uses huge industrial metal presses to make Fingermuffin Thingumabobs. Great big blades powered by hydraulics come stomping down on metal ribbon (like off a giant roll of tape, only made of steel) and cut Thingumabobs out like gingerbread men. If you can run the machine at a hundred Thingumabobs per minute, six seconds for ten Thingumabobs (because the machine prints ten at a time out of the ribbon), then you’re doing fine. The trouble is that although in theory you could do that, in fact you have to stop the machine every so often so that you can check the safeties and change shifts.

    Each time you do, the downtime costs you, because you have the machine powered up and the crew are all there (both crews, actually, on full pay). So you want to have that happen the absolute minimum number of times per day. The only way you can know when you’re at the minimum number of times is when you start to get accidents. Of course, you’re always going to get some accidents, because human beings screw up; they get horny and think about their sweethearts and lean on the Big Red Button and someone loses a finger. So you reduce the number of shifts from five to four, and the number of safety checks from two to one, and suddenly you’re much closer to making Fingermuffin’s the market leader. Mrs Fingermuffin gets all excited because she’s been invited to speak at the WI, and all the little Fingermuffins are happy because their daddy brings them brighter, shinier, newer toys. The downside is that your workers are working harder and having to concentrate more, and the accidents they have are just a little worse, just a little more frequent. The trouble is that you can’t go back, because now your competitors have done the same thing and the Thingumabob market has gotten a bit more aggressive, and the question comes down to this: how much further can you squeeze the margin without making your factory somewhere no one will work? And the truth is that it’s a tough environment for unskilled workers in your area and it can get pretty bad.

    Suddenly, because the company can’t survive any other way, soft-hearted Alf Fingermuffin is running the scariest, most dangerous factory in town. Or he’s out of business and Gerry Q. Hinderhaft has taken over, and everyone knows how hard Gerry Q. pushes his guys. In order to keep the company alive, safeguard his family’s happiness and his employees’ jobs, Alf Montrose Fingermuffin (that’s you) has turned into a monster. The only way he can deal with that is to separate himself into two people – Kindly Old Alf, who does the living, and Stern Mr Fingermuffin, factory boss. His managers do the same. So when you talk to Alf Fingermuffin’s managers, you’re actually not talking to a person at all. You’re talking to a part in the machine that is Fingermuffin Ltd, and (just like the workers in the factory itself) the ones who are best at being a part are the ones who function least like a person and most like a machine. At the factory this means doing everything at a perfect tempo, the same way each time, over and over and over. In management it means living profit, market share and graphs. The managers ditch the part of themselves which thinks, and just get on with running the programme in their heads.”

    • Multiheaded says:

      In order to keep the company alive, safeguard his family’s happiness and his employees’ jobs, Alf Montrose Fingermuffin (that’s you) has turned into a monster. The only way he can deal with that is to separate himself into two people – Kindly Old Alf, who does the living, and Stern Mr Fingermuffin, factory boss. His managers do the same. So when you talk to Alf Fingermuffin’s managers, you’re actually not talking to a person at all. You’re talking to a part in the machine that is Fingermuffin Ltd, and (just like the workers in the factory itself) the ones who are best at being a part are the ones who function least like a person and most like a machine. At the factory this means doing everything at a perfect tempo, the same way each time, over and over and over. In management it means living profit, market share and graphs. The managers ditch the part of themselves which thinks, and just get on with running the programme in their heads.

      ^ Exactly how Marx described what alienation means for the capitalists themselves.

    • roystgnr says:

      So you reduce the number of shifts from five to four, and the number of safety checks from two to one, and

      the best half of your employees go get jobs elsewhere, and between that and your trashed reputation your business is ruined.

      The nice thing about an economic system which runs on greed is that you hardly have to fear it getting corrupted by greed; you just have to fear changes in supply and demand. When leftists worry that employers will fire everyone as soon as robot replacements are good enough, that’s actually much *less* science-fictional than the worry that employers will suddenly drop everyone to minimum wage (as soon as they figure out it’s legal, I guess?).

  133. Pingback: Outside in - Involvements with reality » Blog Archive » On Gnon

  134. Kees says:

    So, all in all, you liked the ending to Battlestar Galactica?

  135. EoT says:

    Bostrom makes an offhanded reference to the possibility of a dictatorless dystopia, one that every single citizen including the leadership hates but which nevertheless endures unconquered.

    I read somewhere that George Lucas originally intended there to be a big reveal near the end of Return of the Jedi that the Emperor was actually a mostly powerless figurehead, and the Empire’s real evil came from the billion faceless bureaucrats who actually carried out the day-to-day business of the Empire. Supposedly this was changed because it made it too hard to have a big climactic fight scene wrap everything up.

    The Imperial Administratum in Warhammer 40k works like this; not sure if it’s a direct homage or not.

    • von Kalifornen says:

      This is the romantic hope.

    • Anonymous says:

      So he took it 180 degrees the other way, and gave the Emperor Battle Meditation, so the giant Imperial fleet cut and ran with his death?

      I’m not buying it.

      • Anonymous says:

        Furthermore, there’s an actual line in A New Hope about how the Emperor has just eliminated the “bureaucracy”; that would be a very odd thing to include if Lucas’s plan was for them to be the Real Evil All Along. Then again, Lucas is sorta infamous for changing his plans and then claiming his later plan was the Real Plan All Along, so…

        • Toby Bartels says:

          Well, what was actually said (by Grand Moff Tarkin) was that he had dissolved the *Senate*. Then somebody else (General Tagge) *interprets* this as eliminating the bureaucracy. That right there doesn’t make much sense, since you can obviously have a bureaucracy without a Senate. (But Tagge is supposed to be a smart guy; he’s the only one who understands the threat posed by the Rebel Alliance. So presumably he knows what he’s talking about.)

    • MugaSofer says:

      … that’s amazing.

      I’m torn between hope this is true, and disappointment that I’ll never be able to watch it.

    • jaimeastorga2000 says:

      This sounds like an urban legend. I am reminded of that “the Matrix originally used humans as processors but had to be dumbed down because test audiences didn’t get it” meme.

    • Doug S. says:

      Supposedly the novelization of “A New Hope” portrayed the Emperor as a powerless figurehead (or at least, showed people who thought he was a powerless figurehead). By the time Return of the Jedi was made, Lucas had, apparently, changed things around a bit…

      • Toby Bartels says:

        I read that novelization (after seeing the movies) and I remember well that what you say is true. I reconciled it to myself at the time, because it only says that in a quotation from The Journal of the Whills, and who says that this journal is accurate? Later I found out that the Journal is supposed to be the source of all of George Lucas’s information on the Star Wars galaxy (much as Tolkien learnt about the War of the Ring by translating the Red Book of Westmarch), which restores the problem … but for all I know, that’s not canonical anymore.

    • Toby Bartels says:

      Any change must have occurred in time for The Empire Strikes Back. There is a scene there where Vader and the Emperor (appearing only in holographic silhouette and not yet played by Ian McDiarmid) confer over how to respond to Luke. Vader calls the Emperor ‘my Master’, and the Emperor thinks that Luke would make ‘a powerful ally’ ‘if he could be turned’; he’s already the Emperor that we know.

  136. Anonymous says:

    Stirring, but stated with too little restraint at the end.

    Your radical contrast between “Moloch” and “Elua” raises red flags for me. I do not think things can be that cleanly separated. I think that if Moloch was inextricably implicated in the origins of our values, then it must always have some living role in our values, however small. Otherwise our (conscious ideas of our) values will become ill-grounded, hollow and artificial, defined by opposition rather than determined from their own organic principle, and will inevitably break down upon extrapolation. (And then perhaps Moloch will take hidden power within the range of evolutionary freedom granted by the artificiality, and within the freedom from scrutiny granted by the defined opposition.)

    Even so, a correctly designed singleton-process, even if conceived as “killing Moloch dead”, would be self-correcting on this point. It would invent arguments like the ones I gesture towards here, and correctly evaluate and react to whatever significance those arguments had. But take care that you do not (somehow) build political momentum toward a singleton-process that is incorrectly designed.

  137. suntzuanime says:

    This is a very good post. That said, I think Moloch is weaker than you describe. He must be, or we’d be dead like those artistic rats. He’s not an Elder God; he’s a Norse-tier god, and can be killed by a large enough wolf or snake. I’m an optimist on this one, and I don’t have time to really do justice to why right now.

    I agree on Nick Land, though. It seems like he wants to summon a *real* Elder God that he’s identified with the relatively-cuddly Moloch, and that’s A Problem.

    • Glen Raphael says:

      I have much the same optimism and kept thinking as I read along that this argument proves too much. It’s not a historical accident that Malthus was wrong. If the argument here were correct, Malthus would have been right. But he was wrong, and he was wrong for reasons, and those reasons should give us some hope here as well.

      On the flip side, the suckiness of our government is not contingent; it is essential. The ship of state is inherently hard to steer and accumulates barnacles. It can never truly be fixed. Perhaps the most promising option is to throw out the whole mess and start over again every few centuries.

      The government is a bit like certain incarnations of Internet Explorer or Microsoft Word – each new feature that is added legitimately helps at least one person who wants it, but it can’t help making the program a bit more complex, harder to use, harder to maintain, and more expensive for everybody else. Diminishing returns limit the good you can do with government: the first law passed does a lot more good than the ten-thousandth one, and eventually you reach an equilibrium where most new laws cause at least as much damage as benefit. (Another option is to find a way around government so that it becomes irrelevant, as IE has become.)

  138. Harald K says:

    “What good will it be for someone to gain the whole world, yet forfeit their soul? Or what can anyone give in exchange for their soul?” Matthew 16:26

    You’re halfway to our side already, Scott Alexander. Trust that there will be cookies. Not that I can promise cookies. But if cookies can exist at all, they are with us, the mystics. (Christian mystic cookies are also the undisputed best flavor!)

    Or, if Bible quotes turn you all off and you’d like something more whimsical to lift you from these heavy thoughts, how about Buffy the Vampire Slayer?

    Billy Fordham: “I’m in. I will become immortal.”

    Buffy: “Well, I’ve got a news flash for you, brain trust. That’s not how it works. You die, and a demon sets up shop in your old house, and it walks, and it talks, and it remembers your life, but it’s not you.”

    These silly neoreactionaries are just wannabe Moloch-worshipers. Like particularly dumb Cthulhu cultists who think there’s any room for them in Cthulhu’s world: I’ve got a news flash for you, brain trust. That won’t be you. If you sacrifice everything about yourself worth preserving, it won’t be you. And in the end, that thing you desperately tried to turn into will still die (even without a Buffy there to drive a stake through your heart).

    So you might as well try to turn into something you WANT to be. Something that would DESERVE not dying, even if things don’t look ideal on that front.

    • nydwracu says:

      So you might as well try to turn into something you WANT to be. Something that would DESERVE not dying, even if things don’t look ideal on that front.

      …as long as it’s not a suicide pact.

    • Randy M says:

      The verse that sprang to my mind was “…in hope that the creation itself will be liberated from its bondage to decay.”

    • MugaSofer says:

      I can’t help but suspect that if Scott had been raised in even the most fundamentalist Christian sect, we wouldn’t have to reinvent so much.

    • Andy says:

      You’re halfway to our side already,