Book Review: Age of Em

[Note: I really liked this book and if I criticize it that’s not meant as an attack but just as what I do with interesting ideas. Note that Robin has offered to debate me about some of this and I’ve said no – mostly because I hate real-time debates and have bad computer hardware – but you may still want to take this into account when considering our relative positions. Mild content warning for murder, rape, and existential horror. Errors in Part III are probably my own, not the book’s.]

I.

There are some people who are destined to become adjectives. Pick up a David Hume book you’ve never read before and it’s easy to recognize the ideas and style as Humean. Everything Tolkien wrote is Tolkienesque in a non-tautological sense. This isn’t meant to denounce either writer as boring. Quite the opposite. They produced a range of brilliant and diverse ideas. But there was a hard-to-define and very consistent ethos at the foundation of both. Both authors were very much like themselves.

Robin Hanson is more like himself than anybody else I know. He’s obviously brilliant – a PhD in economics, a master’s in physics, work for DARPA, Lockheed, NASA, George Mason, and the Future of Humanity Institute. But his greatest aptitude is in being really, really Hansonian. Bryan Caplan describes it as well as anybody:

When the typical economist tells me about his latest research, my standard reaction is ‘Eh, maybe.’ Then I forget about it. When Robin Hanson tells me about his latest research, my standard reaction is ‘No way! Impossible!’ Then I think about it for years.

This is my experience too. I think I said my first “No way! Impossible!” sometime around 2008 after reading his blog Overcoming Bias. Since then he’s influenced my thinking more than almost anyone else I’ve ever read. When I heard he was writing a book, I was – well, I couldn’t even imagine a book by Robin Hanson. When you read a thousand-word blog post by Robin Hanson, you have to sit down and think about it and wait for it to digest and try not to lose too much sleep worrying about it. A whole book would be something.

I have now read Age of Em (website) and it is indeed something. Even the cover gives you a weird sense of sublimity mixed with unease:

And in this case, judging a book by its cover is entirely appropriate.

II.

Age of Em is a work of futurism – an attempt to predict what life will be like a few generations down the road. This is not a common genre – I can’t think of another book of this depth and quality in the same niche. Predicting the future is notoriously hard, and that seems to have so far discouraged potential authors and readers alike.

Hanson is not discouraged. He writes that:

Some say that there is little point in trying to foresee the non-immediate future. But in fact there have been many successful forecasts of this sort. For example, we can reliably predict the future cost changes for devices such as batteries or solar cells, as such costs tend to follow a power law of the cumulative device production (Nagy et al 2013). As another example, recently a set of a thousand published technology forecasts were collected and scored for accuracy, by comparing the forecasted date of a technology milestone with its actual date. Forecasts were significantly more accurate than random, even forecasts 10 to 25 years ahead. This was true separately for forecasts made via many different methods. On average, these milestones tended to be passed a few years before their forecasted date, and sometimes forecasters were unaware that they had already passed (Charbonneau et al 2013).

A particularly accurate book in predicting the future was The Year 2000, a 1967 book by Herman Kahn and Anthony Wiener. It accurately predicted population, was 80% correct for computer and communication technology, and 50% correct for other technology (Albright 2002). On even longer time scales, in 1900 the engineer John Watkins did a good job of forecasting many basic features of society a century later (Watkins 1900) […]

Some say no one could have anticipated the recent big changes associated with the arrival and consequences of the World Wide Web. Yet participants in the Xanadu hypertext project in which I was involved from 1984 to 1993 correctly anticipated many key aspects of the Web […] Such examples show that one can use basic theory to anticipate key elements of distant future environments, both physical and social, but also that forecasters do not tend to be much rewarded for such efforts, either culturally or materially. This helps to explain why there are relatively few serious forecasting efforts. But make no mistake, it is possible to forecast the future.

I think Hanson is overstating his case. All except Watkins were predicting only 10-30 years in the future, and most of their predictions were simple numerical estimates, eg “the population will be one billion” rather than complex pictures of society. The only project here even remotely comparable in scope to Hanson’s is John Watkins’ 1900 article.

Watkins is classically given some credit for broadly correct ideas like “Cameras that can send pictures across the world instantly” and “telephones that can call anywhere in the world”, but of his 28 predictions, I judge only eight as even somewhat correct. For example, I grant him a prediction that “the average American will be two inches taller because of good medical care” even though he then goes on to say in the same sentence that the average life expectancy will be fifty and suburbanization will be so total that building city blocks will be illegal (sorry, John, only in San Francisco). Most of the predictions seem simply and completely false. Watkins believes all animals and insects will have been eradicated. He believes there will be “peas as large as beets” and “strawberries as large as apples” (these are two separate predictions; he is weirdly obsessed with fruit and vegetable size). We will travel to England via giant combination submarine/hovercrafts that will complete the trip in a lightning-fast two days. There will be no surface-level transportation in cities as all cars and walkways have moved underground. The letters C, X, and Q will be removed from the language. Pneumatic tubes will deliver purchases from stores. “A man or woman unable to walk ten miles at a stretch will be regarded as a weakling.”

Where Watkins is right, he is generally listing a cool technology slightly beyond what was available to his time and predicting we will have it. Nevertheless, he is still mostly wrong. Yet this is Hanson’s example of accurate futurology. And he is right to make it his example of accurate futurology, because everything else is even worse.

Hanson has no illusions of certainty. He starts by saying that “conditional on my key assumptions, I expect at least 30% of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 10%.” So he is not explicitly overconfident. But in an implicit sense, it’s just weird to see the level of detail he tries to predict – for example, he has two pages about what sort of swear words the far future might use. And the book’s style serves to reinforce its weirdness. The whole thing is written in a sort of professorial monotone that changes little from loving descriptions of the sorts of pipes that will cool future buildings (one of Hanson’s pet topics) to speculation on our descendants’ romantic relationships (key quote: “The per minute subjective value of an equal relation should not fall much below half of the per-minute value of a relation with the best available open source lover”). And it leans heavily on a favorite Hansonian literary device – the weirdly general statement about something that sounds like it can’t possibly be measurable, followed by a curt reference which if followed up absolutely confirms said statement, followed by relentlessly ringing every corollary of it:

Today, mental fatigue reduces mental performance by about 0.1% per minute. As by resting we can recover at a rate of 1% per minute, we need roughly one-tenth of our workday to be break time, with the duration between breaks being not much more than an hour or two (Trougakos and Hideg 2009; Alvanchi et al 2012)…Thus many em tasks will be designed to take about an hour, and many spurs are likely to last for about this duration.

Or:

Today, painters, novelists, and directors who are experimental artists tend to do their best work at roughly ages 46-52, 38-50, and 45-63 respectively, but those ages are 24-34, 29-40, and 27-43, respectively for conceptual artists (Galenson 2006)…At any one time, the vast majority of actual working ems [should be] near a peak productivity subjective age.

Or:

Wars today, like cities, are distributed evenly across all possible war sizes (Cederman 2003).

At some point I started to wonder whether Hanson was putting me on. Everything is just played too straight. Hanson even addresses this:

To resist the temptation to construe the future too abstractly, I’ll try to imagine a future full of complex detail. One indication that I’ve been successful in all these efforts will be if my scenario description sounds less like it came from a typical comic book or science fiction movie, and more like it came from a typical history text or business casebook.

Well, count that project a success. The effect is strange to behold, and I’m not sure it will usher in a new era of futurology. But Age of Em is great not just as futurology, but as a bunch of different ideas and purposes all bound up in a futurological package. For example:

An introduction to some of the concepts that recur again and again across Robin’s thought – for example, near vs. far mode, the farmer/forager dichotomy, the inside and outside views, signaling. Most of us learned these through years reading Hanson’s blog Overcoming Bias, getting each chunk in turn, spending days or months thinking over each piece. Getting it all out of a book you can read in a couple of days sounds really hard – but by applying them to dozens of different subproblems involved in future predictions, Hanson makes the reader more comfortable with them, and I expect a lot of people will come out of the book with an intuitive understanding of how they can be applied.

A whirlwind tour through almost every science and a pretty good way to learn about the present. If you didn’t already know that wars are distributed evenly across all possible war sizes, well, read Age of Em and you will know that and many similar things besides.

A manifesto. Hanson often makes predictions by assuming that since the future will be more competitive, future people are likely to converge toward optimal institutions. This is a dangerous assumption for futurology – it’s the same line of thinking that led Watkins to assume English would abandon C, X, and Q as inefficient – but it’s a great assumption if you want a chance to explain your ideas of optimal institutions to thousands of people who think they’re reading fun science-fiction. Thus, Robin spends several pages talking about how ems may use prediction markets – an information aggregation technique he invented – to make their decisions. In the real world, Hanson has been trying to push these for decades, with varying levels of success. Here, in the guise of a future society, he can expose a whole new group of people to their advantages – as well as the advantages of something called “combinatorial auctions” which I am still not smart enough to understand.

A mind-expanding drug. One of the great risks of futurology is to fail to realize how different societies and institutions can be – the same way uncreative costume designers make their aliens look like humans with green skin. A lot of our thoughts about the future involve assumptions we’ve never really examined critically, and Hanson dynamites those assumptions. For page after page, he gives strong arguments why our descendants might be poorer, shorter-lived, less likely to travel long distances or into space, less progressive and open-minded. He predicts little noticeable technological change, millimeter-high beings living in cities the size of bottles, careers lasting fractions of seconds, humans being incomprehensibly wealthy patrons to their own robot overlords. And all of it makes sense.

When I read Stross’ Accelerando, one of the parts that stuck with me the longest was the Vile Offspring, weird posthuman entities that operated a mostly-incomprehensible Economy 2.0 that humans just sort of hung out on the edges of, goggle-eyed. It was a weird vision – but, for Stross, mostly a black box. Age of Em opens the box and shows you every part of what our weird incomprehensible posthuman descendants will be doing in loving detail. Even what kind of swear words they’ll use.

III.

So, what is the Age of Em?

According to Hanson, AI is really hard and won’t be invented in time to shape the posthuman future. But sometime a century or so from now, scanning technology, neuroscience, and computer hardware will advance enough to allow emulated humans, or “ems”. Take somebody’s brain, scan it on a microscopic level, and use this information to simulate it neuron-by-neuron on a computer. A good enough simulation will map inputs to outputs in exactly the same way as the brain itself, effectively uploading the person to a computer. Uploaded humans will be much the same as biological humans. Given suitable sense-organs, effectuators, virtual avatars, or even robot bodies, they can think, talk, work, play, love, and build in much the same way as their “parent”. But ems have three very important differences from biological humans.

First, they have no natural body. They will never need food or water; they will never get sick or die. They can live entirely in virtual worlds in which any luxuries they want – luxurious penthouses, gluttonous feasts, Ferraris – can be conjured out of nothing. They will have some limited ability to transcend space, talking to other ems’ virtual presences in much the same way two people in different countries can talk on the Internet.

Second, they can run at different speeds. While a normal human brain is stuck running at the speed that physics allows, a computer simulating a brain can simulate it faster or slower depending on preference and hardware availability. With enough parallel hardware, an em could experience a subjective century in an objective week. Alternatively, if an em wanted to save hardware it could process all its mental operations v e r y s l o w l y and experience only a subjective week every objective century.
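
To get a sense of the magnitudes involved, here’s a quick back-of-the-envelope sketch (my arithmetic, not the book’s):

```python
# Speedup needed to pack a subjective century into one objective week
HOURS_PER_WEEK = 7 * 24                  # 168 objective hours
HOURS_PER_CENTURY = 100 * 365.25 * 24    # ~876,600 subjective hours

speedup = HOURS_PER_CENTURY / HOURS_PER_WEEK
print(round(speedup))   # 5218 -- about five thousand times human speed

# The slow case is just the reciprocal: an em running at ~1/5218th speed
# experiences one subjective week per objective century.
```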

Third, just like other computer data, ems can be copied, cut, and pasted. One uploaded copy of Robin Hanson, plus enough free hardware, can become a thousand uploaded copies of Robin Hanson, each living in their own virtual world and doing different things. The copies could even converse with each other, check each other’s work, duel to the death, or – yes – have sex with each other. And if having a thousand Robin Hansons proves too much, a quick ctrl-x and you can delete any redundant ems to free up hard disk space for Civilization 6 (coming out this October!)

Would this count as murder? Hanson predicts that ems will have unusually blasé attitudes toward copy-deletion. If there are a thousand other copies of me in the world, then going to sleep and not waking up just feels like delegating back to a different version of me. If you’re still not convinced, Hanson’s essay Is Forgotten Party Death? is a typically disquieting analysis of this proposition. But whether it’s true or not is almost irrelevant – at least some ems will think this way, and they will be the ones who tend to volunteer to be copied for short-term tasks that require termination of the copy afterwards. If you personally aren’t interested in participating, the economy will leave you behind.

The ability to copy ems as many times as needed fundamentally changes the economy and the idea of economic growth. Imagine Google has a thousand positions for Ruby programmers. Instead of finding a thousand workers, they can find one very smart and very hard-working person and copy her a thousand times. With unlimited available labor supply, wages plummet to subsistence levels. “Subsistence levels” for ems are the bare minimum it takes to rent enough hardware from Amazon Cloud to run an em. The overwhelming majority of ems will exist at such subsistence levels. On the one hand, if you’ve got to exist on a subsistence level, a virtual world where all luxuries can be conjured from thin air is a pretty good place to do it. On the other, such starvation wages might leave ems with little or no leisure time.

Sort of. This gets weird. There’s an urban legend about a “test for psychopaths”. You tell someone a story about a man who attends his mother’s funeral. He meets a really pretty girl there and falls in love, but neglects to get her contact details before she disappears. How might he meet her again? If they answer “kill his father, she’ll probably come to that funeral too”, they’re a psychopath – ordinary people would have a mental block that prevents them from even considering such a drastic solution. And I bring this up because after reading Age of Em I feel like Robin Hanson would be able to come up with some super-solution even the psychopaths can’t think of, some plan that gets the man a threesome with the girl and her even hotter twin sister at the cost of wiping out an entire continent. Everything about labor relations in Age of Em is like this.

For example, suppose you want to hire an em at subsistence wages, but you want them 24 hours a day, 7 days a week. Ems probably need to sleep – that’s hard-coded into the brain, and the brain is being simulated at enough fidelity to leave that in. But jobs with tasks that don’t last longer than a single day – for example, a surgeon who performs five surgeries a day but has no day-to-day carryover – can get around this restriction by letting an em have one full night of sleep, then copying it. Paste the em at the beginning of the workday. When it starts to get tired, let it finish the surgery it’s working on, then delete it and paste the well-rested copy again to do the next surgery. Repeat forever and the em never has to get any more sleep than that one night. You can use the same trick to give an em a “vacation” – just give it one of them, then copy-paste that brain-state forever.

Or suppose your ems want frequent vacations, but you want them working every day. Let a “trunk” em vacation every day, then make a thousand copies every morning, work all the copies for twenty-four hours, then delete them. Every copy remembers a life spent in constant vacation, and cheered on by its generally wonderful existence it will give a full day’s work. But from the company’s perspective, 99.9% of the ems in its employ are working at any given moment.

(another option: work the em at normal subjective speed, then speed it up a thousand times to take its week-long vacation, then have it return to work after only one-one-thousandth of a week has passed in real life)

Given that ems exist at subsistence wages, saving enough for retirement sounds difficult, but this too has weird psychopathic solutions. Thousands of copies of the same em can pool their retirement savings, then have all except a randomly chosen one disappear at the moment of retirement, leaving that one with a nest egg thousands of times what it could have accumulated by its own efforts. Or an em can invest its paltry savings in some kind of low-risk low-return investment and reduce its running speed so much that the return on its investment is enough to pay for its decreased subsistence. For example, if it costs $100 to rent enough computing power to run an em at normal speed for one year, and you only have $10 in savings, you can rent 1/1000th of the computer for $0.10, run at 1/1000th speed, invest your $10 in a bond that pays 1% per year, and have enough to continue running indefinitely. The only disadvantage is that you’ll only experience a subjective week every twenty objective years. Also, since other entities are experiencing a subjective week every second, and some of those entities have nukes, probably there will be some kind of big war, someone will nuke Amazon’s data centers, and you’ll die after a couple of your subjective minutes. But at least you got to retire!
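
The arithmetic here does check out – a minimal sketch, using the hypothetical dollar figures from the example above:

```python
# Can a slowed-down em live off a tiny nest egg forever?
full_speed_cost = 100.00    # $/objective year to run at 1x speed (hypothetical)
savings = 10.00             # the em's retirement savings (hypothetical)
bond_yield = 0.01           # 1% annual return on a low-risk bond

slowdown = 1000
running_cost = full_speed_cost / slowdown   # $0.10 per objective year
income = savings * bond_yield               # $0.10 per objective year
print(income >= running_cost)               # True: it can run indefinitely

# The subjective price: one subjective week at 1/1000th speed takes
print(round(1000 * 7 / 365.25, 1), "objective years")   # ~19.2, i.e. ~20 years
```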

If ems do find ways to get time off the clock, what will they do with it? Probably they’ll have really weird social lives. After all, the existence of em copies is mostly funded by companies, and there’s no reason for companies to copy-paste any but the best workers in a given field. So despite the literally trillions of ems likely to make up the world, most will be copies of a few exceptionally brilliant and hard-working individuals with specific marketable talents. Elon Musk might go out one day to the bar with his friend, who is also Elon Musk, and order “the usual”. The bartender, who is Elon Musk himself, would know exactly what drink he wants and have it readily available, as the bar caters entirely to people who are Elon Musk. A few minutes later, a few Chesley Sullenbergers might come in after a long day of piloting airplanes. Each Sullenberger would have met hundreds of Musks before and have a good idea about which Musk-Sullenberger conversation topics were most enjoyable, but they might have to adjust for circumstances; maybe the Musks they met before all branched off a most recent common ancestor in 2120, but these are a different branch who were created in 2105 and remember Elon’s human experiences but not a lot of the posthuman lives that shaped the 2120 Musks’ worldviews. One Sullenberger might tentatively complain that the solar power grid has too many outages these days; a Musk might agree to take the problem up with the Council of Musks, which is totally a thing that exists (Hanson calls these sorts of groups “copy clans” and says they are “a natural candidate unit for finance, reproduction, legal, liability, and political representation”).

Romance could be even weirder. Elon Musk #2633590 goes into a bar and meets Taylor Swift #105051, who has a job singing in a nice local nightclub and so is considered prestigious for a Taylor Swift. He looks up a record of what happens when Elon Musks ask Taylor Swifts out and finds they are receptive on 87.35% of occasions. The two start dating and are advised by the Council of Musks and the Council of Swifts on the issues that are known to come up in Musk-Swift relationships and the best solutions that have been found to each. Unfortunately, Musk #2633590 is transferred to a job that requires operating at 10,000x human speed, but Swift #105051’s nightclub runs at 100x speed and refuses to subsidize her to run any faster; such a speed difference makes normal interaction impossible. The story has a happy ending; Swift #105051 allows Musk #2633590 to have her source code, and whenever he is feeling lonely he spends a little extra money to instantiate a high-speed copy of her to hang out with.

(needless to say, these examples are not exactly word-for-word taken from the book, but they’re heavily based off of Hanson’s more abstract descriptions)

The em world is not just very weird, it’s also very very big. Hanson notes that labor is a limiting factor in economic growth, yet even today the economy doubles about once every fifteen years. Once you can produce skilled labor through a simple copy-paste operation, especially labor you can run at a thousand times human speed, the economy will go through the roof. He writes that:

To generate an empirical estimate of em economy doubling times, we can look at the timescales it takes for machine shops and factories today to make a mass of machines of a quality, quantity, variety, and value similar to that of machines that they themselves contain. Today that timescale is roughly 1 to 3 months. Also, designs were sketched two to three decades ago for systems that might self-replicate nearly completely in 6 to 12 months…these estimates suggest that today’s manufacturing technology is capable of self-replicating on a scale of a few weeks to a few months.

Hanson thinks that with further innovation, such times can be reduced so far that “the economy might double every objective year, month, week, or day.” As the economy doubles the labor force – ie the number of ems – may double with it, until only a few years after the first ems the population numbers in the trillions. But if the em population is doubling every day, there had better be some pretty amazing construction efforts going on. The only thing that could possibly work on that scale is prefabricated modular construction of giant superdense cities, probably made mostly out of some sort of early-stage proto-computronium (plus cooling pipes). Ems would be reluctant to travel from one such city to another – if they exist at a thousand times human speed, a trip on a hypersonic airliner that could go from New York to Los Angeles in an hour would still take forty subjective days. Who wants to be on an airplane for forty days?
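
To put rough numbers on how explosive this is, a small illustrative sketch (the monthly doubling rate and starting population are my assumptions for illustration, not figures from the book):

```python
import math

# An economy that doubles every objective month grows by 2^12 per year
print(2 ** 12)    # 4096x annually

# If the em population doubles along with it, going from a million ems
# to a trillion takes log2(1e12 / 1e6) = ~20 doublings -- under two years
doublings = math.log2(1e12 / 1e6)
print(round(doublings / 12, 2), "objective years")   # ~1.66

# And the travel claim: a one-hour flight, as experienced at 1000x speed
print(round(1000 / 24, 1), "subjective days")        # ~41.7 -- about forty days
```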

(long-distance trade is also rare, since if the economy doubles fast enough it means that by the time goods reach their destination they could be almost worthless)

The real winners of this ultra-fast-growing economy? Ordinary humans. While humans will be way too slow and stupid to do anything useful, they will tend to have non-subsistence amounts of money saved up from their previous human lives, and also be running at speeds thousands of times slower than most of the economy. When the economy doubles every day, so can your bank account. Ordinary humans will become rarer, less relevant, but fantastically rich – a sort of doddering Neanderthal aristocracy spending sums on a cheeseburger that could support thousands of ems in luxury for entire lifetimes. While there will no doubt be pressure to liquidate humans and take their stuff, Hanson hopes that the spirit of rule of law – the same spirit that protects rich minority groups today – will win out, with rich ems reluctant to support property confiscation lest it extend to them also. Also, em retirees will have incentives a lot like humans – they have saved up money and go really slow – and like AARP members today they may be able to obtain disproportionate political power which will then protect the interests of slow rich people.

But we might not have much time to enjoy our sudden rise in wealth. Hanson predicts that the Age of Em will last for subjective em millennia – ie about one to two actual human years. After all, most of the interesting political and economic activity is going on at em timescales. In the space of a few subjective millennia, either someone will screw up and cause the apocalypse, somebody will invent real superintelligent AI that causes a technological singularity, or some other weird thing will happen taking civilization beyond the point that even Robin dares to try to predict.

IV.

Hanson understands that people might not like the idea of a future full of people working very long hours at subsistence wages forever (Zack Davis’ Contract-Drafting Em song is, as usual, relevant). But Hanson himself does not view this future as dystopian. Despite our descendants’ by-the-numbers poverty, they will avoid the miseries commonly associated with poverty today. There will be no dirt or cockroaches in their sparkling virtual worlds, nobody will go hungry, petty crime will be all-but-eliminated, and unemployment will be low. Anybody who can score some leisure time will have a dizzying variety of hyperadvanced entertainment available, and as for the people who can’t, they’ll mostly have been copied from people who really like working hard and don’t miss it anyway. As unhappy as we moderns may be contemplating em society, ems themselves will not be unhappy! And as for us:

The analysis in this book suggests that lives in the next great era may be as different from our lives as our lives are from farmers’ lives, or farmers’ lives are from foragers’ lives. Many readers of this book, living industrial era lives and sharing industrial era values, may be disturbed to see a forecast of em era descendants with choices and lifestyles that appear to reject many of the values that they hold dear. Such readers may be tempted to fight to prevent the em future, perhaps preferring a continuation of the industrial era. Such readers may be correct that rejecting the em future holds them true to their core values. But I advise such readers to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values.

A short digression: there’s a certain strain of thought I find infuriating, which is “My traditionalist ancestors would have disapproved of the changes typical of my era, like racial equality, more open sexuality, and secularism. But I am smarter than them, and so totally okay with how the future will likely have values even more progressive and shocking than my own. Therefore I pre-approve of any value changes that might happen in the future as definitely good and better than our stupid hidebound present.”

I once read a science-fiction story that depicted a pretty average sci-fi future – mighty starships, weird aliens, confederations of planets, post-scarcity economy – with the sole unusual feature that rape was considered totally legal, and opposition to it was considered as bigoted and ignorant as opposition to homosexuality is today. Everybody got really angry at the author and said it was offensive for him to even speculate about that. Well, that’s the method by which our cheerful acceptance of any possible future values is maintained: restricting the set of “any possible future values” to “values slightly more progressive than ours” and then angrily shouting down anyone who discusses future values that actually sound bad. But of course the whole question of how worried to be about future value drift only makes sense in the context of future values that genuinely violate our current values. Approving of all future values except ones that would be offensive to even speculate about is the same faux-open-mindedness as tolerating anything except the outgroup.

Hanson deserves credit for positing a future whose values are likely to upset even the sort of people who say they don’t get upset over future value drift. I’m not sure whether or not he deserves credit for not being upset by it. Yes, it’s got low crime, ample food for everybody, and full employment. But so does Brave New World. The whole point of dystopian fiction is pointing out that we have complicated values beyond material security. Hanson is absolutely right that our traditionalist ancestors would view our own era with as much horror as some of us would view an em era. He’s even right that on utilitarian grounds, it’s hard to argue with an em era where everyone is really happy working eighteen hours a day for their entire lives because we selected for people who feel that way. But at some point, can we make the Lovecraftian argument of “I know my values are provincial and arbitrary, but they’re my provincial arbitrary values and I will make any sacrifice of blood or tears necessary to defend them, even unto the gates of Hell?”

This brings us to an even worse scenario.

There are a lot of similarities between Hanson’s futurology and (my possibly erroneous interpretation of) the futurology of Nick Land. I see Land as saying, like Hanson, that the future will be one of quickly accelerating economic activity that comes to dominate a bigger and bigger portion of our descendants’ lives. But whereas Hanson’s framing focuses on the participants in such economic activity, playing up their resemblances with modern humans, Land takes a bigger picture. He talks about the economy itself acquiring a sort of self-awareness or agency, so that the destiny of civilization is consumed by the imperative of economic growth.

Imagine a company that manufactures batteries for electric cars. The inventor of the batteries might be a scientist who really believes in the power of technology to improve the human race. The workers who help build the batteries might just be trying to earn money to support their families. The CEO might be running the business because he wants to buy a really big yacht. And the whole thing is there to eventually, somewhere down the line, let a suburban mom buy a car to take her kid to soccer practice. Like most companies the battery-making company is primarily a profit-making operation, but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.

Now imagine the company fires all its employees and replaces them with robots. It fires the inventor and replaces him with a genetic algorithm that optimizes battery design. It fires the CEO and replaces him with a superintelligent business-running algorithm. All of these are good decisions, from a profitability perspective. We can absolutely imagine a profit-driven shareholder-value-maximizing company doing all these things. But it reduces the company’s non-masturbatory participation in an economy that points outside itself, limiting it to just a tenuous connection with soccer moms and maybe some shareholders who want yachts of their own.

Now take it further. Imagine there are no human shareholders who want yachts, just banks who lend the company money in order to increase their own value. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of Global Development.

Now take it even further, and imagine this is what’s happened everywhere. There are no humans left; it isn’t economically efficient to continue having humans. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.

But this seems to me the natural end of the economic system. Right now it needs humans only as laborers, investors, and consumers. But robot laborers are potentially more efficient, companies based around algorithmic trading are already pushing out human investors, and most consumers already aren’t individuals – they’re companies and governments and organizations. At each step you can gain efficiency by eliminating humans, until finally humans aren’t involved anywhere.

True to form, Land doesn’t see this as a dystopia – I think he conflates “maximally efficient economy” with “God”, which is a hell of a thing to conflate – but I do. And I think it provides an important new lens with which to look at the Age of Em.

The Age of Em is an economy in the early stages of such a transformation. Instead of being able to replace everything with literal robots, it replaces them with humans who have had some aspects of their humanity stripped away. Biological bodies. The desire and ability to have children normally. Robin doesn’t think people will lose all leisure time and non-work-related desires, but he doesn’t seem too sure about this and it doesn’t seem to bother him much if they do.

I envision a spectrum between the current world of humans and Nick Land’s Ascended Economy. Somewhere on the spectrum we have ems who get leisure time. A little further on the spectrum we have ems who don’t get leisure time.

But we can go further. Hanson imagines that we can “tweak” em minds. We may not understand the brain enough to create totally new intelligences from the ground up, but by his Age of Em we should understand it well enough to make a few minor hacks, the same way even somebody who doesn’t know HTML or CSS can usually figure out how to change the background color of a webpage with enough prodding. Many of these mind tweaks will be the equivalent of psychiatric drugs – some might even be computer simulations of what we observe to happen when we give psychiatric drugs to a biological brain. But these tweaks will necessarily be much stronger and more versatile, since we no longer care about bodily side effects (ems don’t have bodies) and we can apply it to only a single small region of the brain and avoid effects anywhere else. You could also very quickly advance brain science – the main limits today are practical (it’s really hard to open up somebody’s brain and do stuff to it without killing them) and ethical (the government might have some words with you if you tried). An Age of Em would remove both obstacles, and give you the added bonus of being able to make thousands of copies of your test subjects for randomized controlled trials, reloading any from a saved copy if they died. Hanson envisions that:

As the em world is a very competitive world where sex is not needed for reproduction, and as sex can be time and attention-consuming, ems may try to suppress sexuality, via mind tweaks that produce effects analogous to castration. Such effects might be temporary, perhaps with a consciously controllable on-off switch…it is possible that em brain tweaks could be found to greatly reduce natural human desires for sex and related romantic and intimate pair bonding without reducing em productivity. It is also possible that many of the most productive ems would accept such tweaks.

Possible? I can do that right now with a high enough dose of Paxil, and I don’t even have to upload your brain to a computer first. Fun stories about Musk #2633590 and Swift #105051 aside, I expect this would happen about ten minutes after the advent of the Age of Em, and we would have taken another step down the path to the Ascended Economy.

There are dozens of other such tweaks I can think of, but let me focus on two.

First, stimulants have a very powerful ability to focus the brain on the task at hand, as anybody who’s taken Adderall or modafinil can attest. Their main drawbacks are addictiveness and health concerns, but in a world where such pills can be applied as mental tweaks, where minds have no bodies, and where any mind that gets too screwed up can be reloaded from a backup copy, these are barely concerns at all. Many of the purely mental side effects of stimulants come from their effects in parts of the brain not vital to the stimulant effect. If we can selectively apply Adderall to certain brain centers but not others, then unapply it at will, then from employers’ point of view there’s no reason not to have all workers dosed with superior year 2100 versions of Adderall at all times. I worry that not only will workers not have any leisure time, but they’ll be neurologically incapable of having their minds drift off while on the job. Davis’ contract-drafting em who starts wondering about philosophy on the job wouldn’t get terminated. He would just have his simulated-Adderall dose increased.

Second, Robin managed to write an entire book about emulated minds without using the word “wireheading”. This is another thing we can do right now, with today’s technology – but once it’s a line of code and not a costly brain surgery, it should become nigh-universal. Give ems the control switches to their own reward centers and all questions about leisure time become irrelevant. Give bosses the control switches to their employees’ reward centers, and the situation changes markedly. Hanson says that there probably won’t be too much slavery in the em world, because it will likely have strong rule of law, because slaves aren’t as productive as free workers, and there’s little advantage to enslaving someone when you could just pay them subsistence wages anyway. But slavery isn’t nearly as abject and inferior a condition as the one where somebody else has the control switch to your reward center. Combine that with the stimulant use mentioned above, and you can have people who will never have nor want to have any thought about anything other than working on the precise task at which they are supposed to be working at any given time.

This is something I worry about even in the context of normal biological humans. But Hanson already believes em worlds will have few regulations and be able to ignore the moral horror of 99% of the population by copying and using the 1% who are okay with something. Combine this with a situation where brains are easily accessible and tweakable, and this sort of scenario becomes horribly likely.

I see almost no interesting difference between an em world with full use of these tweaks and an Ascended Economy world. Yes, there are things that look vaguely human in outline laboring in the one and not the other, but it’s not like there will be different thought processes or different results. I’m not even sure what it would mean for the ems to be conscious in a world like this – they’re not doing anything interesting with the consciousness. The best we could say about this is that if the wireheading is used liberally it’s a lite version of the world where everything gets converted to hedonium.

V.

In a book full of weird ideas, there is only one idea rejected as too weird. And in a book written in a professorial monotone, there’s only one point at which Hanson expresses anything like emotion:

Some people foresee a rapid local “intelligence explosion” happening soon after a smart AI system can usefully modify its local architecture (Chalmers 2010; Hanson and Yudkowsky 2013; Yudkowsky 2013; Bostrom 2014)…Honestly to me this local intelligence explosion scenario looks suspiciously like a super-villain comic book plot. A flash of insight by a lone genius lets him create a genius AI. Hidden in its super-villain research lab lair, this genius villain AI works out unprecedented revolutions in AI design, turns itself into a super-genius, which then invents super-weapons and takes over the world. Bwa ha ha.

For someone who just got done talking about the sex lives of uploaded computers in millimeter-tall robot bodies running at 1000x human speed, Robin is sure quick to use the absurdity heuristic to straw-man intelligence explosion scenarios as “comic book plots”. Take away his weird authorial tic of using the words “genius” and “supervillain”, and this scenario reduces to “Some group, perhaps Google, perhaps a university, invents an artificial intelligence smart enough to edit its own source code; exponentially growing intelligence without obvious bound follows shortly thereafter”. Yes, it’s weird to think that there may be a sudden quantum leap in intelligence like this, but no weirder than to think most of civilization will transition from human to em in the space of a year or two. I’m a little bit offended that this is the only idea given this level of dismissive treatment. Since I do have immense respect for Robin, I hope my offense doesn’t color the following thoughts too much.

Hanson’s arguments against AI seem somewhat motivated. He admits that AI researchers generally estimate less than 50 years before we get human-level artificial intelligence, a span shorter than his estimate of a century until we can upload ems. He even admits that no AI researcher thinks ems are a plausible route to AI. But he dismisses this by saying when he asks AI experts informally, they say that in their own field, they have only noticed about 5-10% of the progress they expect would be needed to reach human intelligence over the past twenty years. He then multiplies out (if twenty years bought 5% of the distance, the whole distance takes 20 ÷ 0.05 = 400 years) to say that it will probably take at least 400 years to reach human-level AI. I have two complaints about this estimate.

First, he is explicitly ignoring published papers surveying hundreds of researchers using validated techniques, in favor of what he describes as “meeting experienced AI experts informally”. But even though he feels comfortable rejecting vast surveys of AI experts as potentially biased, as best I can tell he does not ask a single neuroscientist to estimate the date at which brain scanning and simulation might be available. He just says that “it seems plausible that sufficient progress will be made in roughly a century or so”, citing a few hopeful articles by very enthusiastic futurists who are not neuroscientists or scanning professionals themselves and have not talked to any. This seems to me to be an extreme example of isolated demands for rigor. No matter how many AI scientists think AI is soon, Hanson will cherry-pick the surveying procedures and results that make it look far. But if a few futurists think brain emulation is possible, then no matter what anybody else thinks that’s good enough for him.

Second, one would expect that even if there were only 5-10% progress over the last twenty years, then there would be faster progress in the future, since the future will have a bigger economy, better supporting technology, and more resources invested in AI research. Robin answers this objection by saying that “increases in research funding usually give much less than proportionate increases in research progress” and cites Alston et al 2011. I looked up Alston et al 2011, and it is a paper relating crop productivity to government funding of agriculture research. There was no attempt to relate its findings to any field other than agriculture, nor to any type of funding other than government. But studies show that while public research funding often does have minimal effects, the effect of private research funding is usually much larger. A single sentence citing a study in crop productivity to apply to artificial intelligence while ignoring much more relevant results that contradict it seems like a really weak argument for a statement as potentially surprising as “amount of research does not affect technological progress”.

I realize that Hanson has done a lot more work on this topic and he couldn’t fit all of it in this book. I disagree with his other work too, and I’ve said so elsewhere. For now I just want to say that the arguments in this book seem weak to me.

I also want to mention what seems to me a very Hansonian counterargument to the ems-come-first scenario: we have always developed de novo technology before understanding the relevant biology. We built automobiles by figuring out the physics of combustion engines, not by studying human muscles and creating mechanical imitations of myosin and actin. Although the Wright brothers were inspired by birds, their first plane was not an ornithopter. Our power plants use coal and uranium instead of the Krebs Cycle. Biology is really hard. Even slavishly copying biology is really hard. I don’t think Hanson and the futurists he cites understand the scale of the problem they’ve set themselves.

Current cutting-edge brain emulation projects have found their work much harder than expected. Simulating a nematode is pretty much the rock-bottom easiest thing in this category, since they are tiny primitive worms with only a few neurons; even so, the history of the field is a litany of failures, with current leader OpenWorm “reluctant to make bold claims about its current resemblance to biological behavior”. A more ambitious $1.3 billion attempt to simulate a tiny portion of a rat brain has gone down in history as a legendary failure (politics were involved, but I expect they would be involved in a plan to upload a human too). And these are just attempts to get something that behaves vaguely like a nematode or rat. Actually uploading a human, keeping their memory and personality intact, and not having them go insane afterwards boggles the mind. We’re still not sure how much small molecules matter to brain function, how much glial cells matter to brain function, how many things in the brain are or aren’t local. AI researchers are making programs that can defeat chess grandmasters; upload researchers are still struggling to make a worm that will wriggle. The right analogy for modern attempts to upload human brains isn’t modern attempts at designing AI. It’s an attempt at designing AI by someone who doesn’t even know how to plug in a computer.

VI.

I guess what really bothers me about Hanson’s pooh-poohing of AI is him calling it “a comic book plot”. To me, it’s Hanson’s scenario that seems science-fiction-ish.

I say this not as a generic insult but as a pointer at a specific category of errors. In Star Wars, the Rebellion had all of these beautiful hyperspace-capable starfighters that could shoot laser beams and explore galaxies – and they still had human pilots. 1977 thought the pangalactic future would still be using people to pilot its military aircraft; in reality, even 2016 is moving away from this.

Science fiction books have to tell interesting stories, and interesting stories are about humans or human-like entities. We can enjoy stories about aliens or robots as long as those aliens and robots are still approximately human-sized, human-shaped, human-intelligence, and doing human-type things. A Star Wars in which all of the X-Wings were combat drones wouldn’t have done anything for us. So when I accuse something of being science-fiction-ish, I mean bending over backwards – and ignoring the evidence – in order to give basically human-shaped beings a central role.

This is my critique of Robin. As weird as the Age of Em is, it makes sure never to be weird in ways that warp the fundamental humanity of its participants. Ems might be copied and pasted like so many .JPGs, but they still fall in love, form clans, and go on vacations.

In contrast, I expect that we’ll get some kind of AI that will be totally inhuman and much harder to write sympathetic stories about. If we get ems after all, I expect them to be lobotomized and drugged until they become effectively inhuman, cogs in the Ascended Economy that would no more fall in love than an automobile would eat hay and whinny. Robin’s interest in keeping his protagonists relatable makes his book fascinating, engaging, and probably wrong.

I almost said “and probably less horrible than we should actually expect”, but I’m not sure that’s true. With a certain amount of horror-suppressing, the Ascended Economy can be written off as morally neutral – either having no conscious thought, or stably wireheaded. All of Robin’s points about how normal non-uploaded humans should be able to survive an Ascended Economy at least for a while seem accurate. So morally valuable actors might continue to exist in weird Amish-style enclaves, living a post-scarcity lifestyle off the proceeds of their investments, while all the while the Ascended Economy buzzes around them, doing weird inhuman things that encroach upon them not at all. This seems slightly worse than a Friendly AI scenario, but much better than we have any right to expect of the future.

I highly recommend Age of Em as a fantastically fun read and a great introduction to these concepts. It’s engaging, readable, and weird. I just don’t know if it’s weird enough.


520 Responses to Book Review: Age of Em

  1. Dr Dealgood says:

    One unaddressed question that I find more interesting than the Ems themselves: what about the humans left behind after the Em civilization falls?

    If the whole Age of Em is expected to begin and end in about two subjective human years, leaving behind whatever armies of robots and bottle cities haven’t been destroyed, the remaining human beings who survived the period will be in an interesting position.

    What would humanity do when all the lights suddenly go out and their eyes adjust to the starlight, looking across an expanse of robot city as far as the eye can see? As the last of the coolant pumps stop whirring, do they starve and die in the rusting bowels of the Ascended Economy? Or do they rebuild some kind of post-collapse society in the machine ecology left behind?

    They would all clearly remember life before the Age of Em, but I have no idea to what extent their old skills would be relevant. Also it’s unclear how much they will understand the Em technology left behind, or whether or not they can control any of it regardless.

    I have to say, the whole thing makes for a much more compelling SciFi premise than the Ems themselves.

    • Scott Alexander says:

      I don’t think Hanson thinks the civilization will fall, I think he expects it to evolve into an even weirder and more powerful form. Maybe AIs, maybe something else.

      • ad says:

        One thought that occurs to me is that if the efficiency of the economy increases faster than the economy expands, the natural resources needed to run it will fall. The cities might therefore just go obsolete, as everyone upgrades to more microscopic hardware.

        Alternatively, the efficiency of the economy might increase more slowly than it expands, in which case the Age of Em will be followed by the conversion of the world, and perhaps the universe, into computronium. Said conversion including the conversion of any humans who happen to exist in the universe.

        • Mark says:

          > if the efficiency of the economy increases faster than the economy expands

          I think they’d very quickly run up against fundamental physical limits on the density of computronium, so this wouldn’t happen.

      • Daniel Kokotajlo says:

        Scott: “I don’t think Hanson thinks the civilization will fall, I think he expects it to evolve into an even weirder and more powerful form. Maybe AIs, maybe something else.”

        Yes. A corollary of this is that the Hansonian scenario is very incomplete. After two years of ems, real de novo superintelligent AI will be created, far superior to humans and thus inhuman. Then what? Do we have a friendly future in which humans can live? Or do even the ems get replaced by something even less human than they?

        It depends on who creates it, and how much effort they devote to safety. They’ll be ems running way faster than humans, which helps lower the cost of safety. But they’ll be in fierce competition with other ems, and at least some of the players in the arms race will be modified to have weird, unethical values. This seems like a gloomy scenario.

        Of course, as long as at least one superintelligence ends up with decent values, there will be some resources devoted to keeping humans alive–as long as most factions respect the law, not just in letter but in spirit. This still seems like a gloomy scenario.

    • Deiseach says:

      what about the humans left behind after the Em civilization falls?

      This is E.M. Forster’s “The Machine Stops”, which we’ve discussed on here before.

      The part about “Ems would be reluctant to travel from one such city to another – if they exist at a thousand times human speed, a trip on a hypersonic airliner that could go from New York to Los Angeles in an hour would still take forty subjective days” reminded me of Vashti’s reluctance, shared by others, to leave the shelter of her room where her every need was provided for and travel, even in the carefully closed off environment of the transport, to another city.

    • Loquat says:

      This is one of the reasons it’s valuable to have groups like the Amish who never depended on modern tech in the first place – assuming there aren’t so many city refugees as to overwhelm them, they could either take in the survivors or just sell them food and other necessaries while they salvage what they can from the cities and try to rebuild.

  2. Proper Dave says:

    I believe we will probably have some very good AIs doing a lot of work in 50 years’ time, but it will be “narrow” AI with many humans “behind it”, from the PhD designer to the sysadmin supporting it – doing jobs that no human can do, but needing to be maintained by a team of humans…
    The em scenario is of course possible but will take A LOT of processing power. Say we have solved all the neurological mechanisms (as well as the body, although you can probably get away with an “ad hoc” model of, for example, the liver and pancreas). To upload a human you will have to have very detailed biophysical models of all aspects of the brain that are important; this will mean neurons at least. The fastest supercomputers today can maybe do a couple of these neuron models. I will not be surprised if the supercomputers of 2066 can do a human, but it will cost 100 million dollars and consume 10 megawatts of power; this em is not gonna steal anybody’s job anytime soon. But I can see, for example, the best mathematician or physicist being run at this cost – but a lawyer or even a software engineer?

    • Scott Alexander says:

      I think estimating how much computing power we’ll have in 2066 is one of the easier futurological tasks; just extend current trends out. Hanson has definitely done this. And anything that we can do in 2066 with $100 million worth of supercomputers, we can do in 2086 with fifty bucks and a laptop.
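
      A quick sanity check on the implied rate, assuming a smooth exponential decline in cost (the dates and dollar figures are the ones from the comment above):

      ```python
      import math

      # $100M in 2066 -> $50 in 2086: what cost-halving time does that imply?
      cost_ratio = 100e6 / 50            # 2,000,000x cheaper
      years = 2086 - 2066
      halvings = math.log2(cost_ratio)   # ~20.9 halvings of cost
      print(f"{halvings:.1f} halvings in {years} years "
            f"-> one every {years / halvings:.2f} years")
      # Roughly one halving per year -- a bit faster than the classic
      # 18-24-month Moore's-law cadence, so the claim quietly assumes
      # the trend doesn't slow down.
      ```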

      • Proper Dave says:

        Yes, but we all know that forever exponential growth is impossible (except to economists, of course). A bona fide em will take A LOT of calculations and will probably run 10 times slower than a human. We may actually hit the fundamental physical limits on flipping bits before 2066. So you may be half right: the $100 million supercomputer may cost $1000 twenty years later, but it will still need 10 MW of power.

      • Steve Sailer says:

        Then how come my 2015 Macbook Air is only negligibly faster than my 2012 Macbook Air?

        I’ve been acquiring personal computers since 1984 and each new one was much faster than its predecessor — until this decade. Was this just a temporary glitch, or is that what the future is going to look like?

        • Anonymous says:

          The thermal envelope of a Macbook Air combined with the design choice of style over function is a limiting factor in the equation.

        • Josh says:

          The upper limit on processor clock speed is the main reason; in past decades, thermal envelopes also decreased significantly.

          • Anonymous says:

            I didn’t mean to imply it was an engineering problem. They could easily spec a higher clocked CPU, but that would blow their target TDP for the Air. It also kills the battery, so they’d have to go with a larger one, which means compromising the design of the Air as a sleek wedge of a laptop. Steve would be better served with a Macbook Pro as the Air will always be a very portable midrange laptop positioned between the Macbook and the Pro.

        • MS says:

          I think that has something to do with the changes in the way software is written. My instance of Chrome appears to be consuming something like 500MB of RAM right now. Compare that to the 2MB of RAM that was used to store a Crash Bandicoot level on the PS1:

          https://www.quora.com/How-did-game-developers-pack-entire-games-into-so-little-memory-twenty-five-years-ago/answer/Dave-Baggett?srid=z9ZA&share=1

          I know I’m talking about memory and not processing speed, but what I’m trying to convey is the amount of optimization that people used to have to do to get stuff to perform satisfactorily.

          Now you don’t have to worry about that stuff so much. Hardware limitations aren’t as strict and programmers can afford to be greedy. Yesterday I was reading an anecdote about a guy interviewing for a data analysis position where the company wanted him to demonstrate some fancy distributed computing wizardry on their whopping company dataset of 600MB. He just loaded the whole thing into memory on his laptop – no need for fancy tricks.

          Software has become greedier because programmer time is more expensive than CPU cycles or RAM. The result is that as computers become more performant, software developers write greedier software.

          • Peter Scott says:

            That doesn’t explain why single-threaded processor speed has leveled off in the past decade or so, on the same software. There are a bunch of reasons for this: smaller transistors no longer lead directly to faster clock speeds, the architectural gains of deep pipelines and lots of superscalar execution have already been mostly done, and it’s getting harder and harder to wring out another 10% improvement in speed on a single thread because there’s only so much you can do in a clock cycle.

            The thing about brain emulation, though, is that brains parallelize nicely. When it comes to parallel brute-force stuff, Hanson’s calculations make a lot more sense. Most of the limiting factors for conventional CPU speed don’t apply to sufficiently parallel calculations. Instead, it becomes more about how small you can make the parts and how much power they take versus how much heat you can transfer out. (Compare the negligible change in processor speed over the past few years with the huge improvements in GPU speed. The big difference is the amount of parallelism in their workloads.)

          • Nonnamous says:

            why single-threaded processor speed has leveled off in the past decade or so, on the same software

            Maybe the economic incentive is no longer there. In the past decade or so, everything of importance has become a distributed system. The Googles, Facebooks and Amazons of the world don’t run on giant supercomputers; they run on datacenters with tens of thousands of machines which talk to each other over very fast networks and run systems architected for that kind of environment.

        • James says:

          Same with desktop technology.

          I think desktop and laptop development has stagnated because mobile (smaller-device) growth is taking precedence.

        • Alan Crowe says:

          What made electronics so exciting in the ’80s and ’90s was Dennard scaling. Shrink the line widths by a factor of two and three wonderful things happen: you get four times as many transistors in the same area, your logic gates switch twice as fast, and each gate consumes a quarter as much power, so power density stays constant.

          Dennard scaling died at around 90nm. Great ingenuity has gone into shrinking line widths since, down to around 14nm, but that only gets you more transistors.

          Taking the electronics industry’s experience as a whole, Moore’s law has continued, inciting people to extrapolate boldly into the future, while Dennard scaling has died, suggesting caution.
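
          A toy rendering of those rules – idealized constant-field scaling with a shrink factor k = 2 per generation, a sketch rather than real process data:

          ```python
          # Idealized Dennard scaling: one generation = linear dimensions / 2.
          def shrink(transistors, clock, gate_power, k=2):
              return (transistors * k**2,   # k^2 = 4x devices per area
                      clock * k,            # gates switch k = 2x faster
                      gate_power / k**2)    # per-gate power falls 4x

          t, f, p = 1.0, 1.0, 1.0
          for gen in range(1, 4):
              t, f, p = shrink(t, f, p)
              print(f"gen {gen}: {t:4.0f}x transistors, {f:.0f}x clock, "
                    f"chip power {t * p:.0f}x")
          # Speed and density compound while total chip power stays flat.
          # Post-90nm, supply voltages stopped scaling, so a shrink now
          # buys transistors but not the free clock and power wins.
          ```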

          • Tsnom Eroc says:

            Good answer.

            Memory storage and density are improving quite nicely, and multi-core technology and its capabilities are also improving nicely.

            For the industry as a whole, the spirit of Moore’s law isn’t dead. But some very important specifics have stalled, and there is no saying when a paradigm shift (I believe one is possible) will revive the parts that have stopped.

      • Steve Sailer says:

        Science fiction books from Heinlein’s era are built around the assumption that the increase in transportation speed that began in the early 1800s with the first steamships and culminated in the moon landing of 1969 would continue. Today that seems terribly naive.

        Why assume that Moore’s Law will hold true much longer, especially when it seems to already be faltering in this decade?

        • Wrong Species says:

          I don’t think there was ever a Moore’s Law for transportation; there were just a few cool inventions. Today’s computers aren’t fundamentally different from those of 30 years ago – they’re just faster and cheaper. But you can’t keep improving cars until you travel at the speed of light; you need a new invention. So in theory, it makes more sense for Moore’s Law to hold up than for the transportation improvements of the 20th century to have continued.

        • J says:

          Nature is sigmoids, not exponentials. Midway through, sigmoids look like exponentials so we get Malthus and Moore. But then we level off and asymptotically approach physical limits.
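
          A tiny illustration – the ceiling is arbitrary, the point is the shape:

          ```python
          import math

          # A logistic (sigmoid) curve tracks an exponential while far
          # below its ceiling, then saturates. CEILING is arbitrary.
          CEILING = 1e6

          def logistic(t):
              return CEILING / (1 + (CEILING - 1) * math.exp(-t))

          for t in [0, 5, 10, 15, 20]:
              print(f"t={t:2d}  exp={math.exp(t):10.3g}  "
                    f"sigmoid={logistic(t):10.3g}")
          # The two are nearly indistinguishable early on -- hence Malthus
          # and Moore -- then the sigmoid levels off at its limit.
          ```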

      • Tsnom Eroc says:

        I don’t think it’s easy to predict future tech at all.

        For starters, Moore’s law hasn’t held since about 2005, at least not as it was originally stated.

        What’s important, though, is how AI relates to algorithms. This is extremely important.

        https://en.wikipedia.org/wiki/Big_O_notation

        If current computing trends continue and human intelligence hasn’t greatly increased, but certain problems are O(n^2), or even O(2^n), then even 1000 more years of Moore’s law won’t mean much. If, however, an AI can cleverly reduce such a problem – an important one – to O(n), it becomes very doable. And even a near-linear-time good approximation to a 2^n problem would be fantastic.
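
        To make that concrete, a toy comparison (the 18-month doubling cadence and the problem sizes are just illustrative):

        ```python
        # Hardware gains vs. algorithmic complexity.
        moore_speedup = 2.0 ** (1000 / 1.5)   # 1000 years of 18-month doublings
        print(f"1000 years of doublings: ~{moore_speedup:.1e}x faster hardware")

        for n in [50, 100, 200]:
            print(f"n={n}: an O(2**n) algorithm needs ~{2.0**n:.1e} ops")
        # Adding ~667 to n cancels the entire millennium of hardware gains
        # for the 2**n algorithm; an O(n) reformulation barely notices.
        ```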

        By the looks of it, with classic Moore’s law ending, a future branch of technology will pursue approaches that can be massively parallel. In that case, some algorithms drop nicely from O(n log n) sequential time to roughly O(√n) parallel time.

        One way to put it: some important problems that would currently take until the heat death of the universe to compute may effectively be solved, as approximations, in a reasonable amount of time, while other algorithms may not see much of a speedup.

        With all that, I don’t want to predict future computing power.

        As for other tech that slowed down: extrapolating trends, it was predicted that personal gene sequencing would be available by 2013. It’s still not quite there.

        Heck, we don’t even pretend to remember nuclear power when making tech predictions.

        • Rb says:

          Could you give a link for the reduction from O(n log n) to O(√n)? It sounds quite interesting.

          • Tsnom Eroc says:

            Oh, look up shear sort as the very simplest example of one. The data is not compared in sequence as it is in ordinary algorithms, but instead is moved around on a 2D parallel grid.

            The graduate parallel-computing course on the MIT OpenCourseWare site has a good image of it. The book “Introduction to Parallel Processing” (Kluwer) has the best image of it.

            I view this as the simplest example of a paradigm-changing technique for a single algorithm (though not for all of computing). I wonder how much cleverness it would take to convert other algorithms we currently run sequentially into something like this.
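
            For the curious, a minimal sequential simulation of shear sort – on a real n×n mesh all the row sorts happen at once, then all the column sorts, which is where the parallel speedup comes from:

            ```python
            import math

            def shear_sort(grid):
                """Sort an n x n grid into row-major "snake" order.

                Each phase sorts every row (alternating direction), then
                every column; ceil(log2(n)) + 1 phases suffice. With n*n
                mesh processors the row/column sorts run simultaneously,
                so wall-clock time is O(n log n) for N = n*n items --
                about sqrt(N) up to a log factor, versus O(N log N)
                sequential comparisons.
                """
                n = len(grid)
                phases = math.ceil(math.log2(n)) + 1 if n > 1 else 1
                for _ in range(phases):
                    for i, row in enumerate(grid):
                        row.sort(reverse=(i % 2 == 1))  # odd rows descend
                    for j in range(n):                  # sort each column
                        col = sorted(grid[i][j] for i in range(n))
                        for i, v in enumerate(col):
                            grid[i][j] = v
                return grid

            demo = [[15, 3, 9, 0], [7, 12, 1, 8],
                    [4, 14, 2, 11], [6, 10, 13, 5]]
            for row in shear_sort(demo):
                print(row)   # rows emerge sorted in snake order
            ```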

  3. M.C. Escherichia says:

    “Ems would be reluctant to travel from one such city to another – if they exist at a thousand times human speed, a trip on a hypersonic airliner that could go from New York to Los Angeles in an hour would still take forty subjective days”

    Wait, what? Ems are sims, no? They don’t travel by plane, they travel via IP packets and suchlike. Or even if they have a physical body for some reason, they could still be transferred from one to another…

    • Proper Dave says:

      Actually, a hypersonic liner may be faster. If an em is a couple of exabytes, it could take a very long time to transfer over even the broadest broadband connection. Think of the new 2K and 4K videos on Netflix: they take longer to transfer to your smart TV than the old formats did.

      • Mary says:

        Very true. Sneakernet still has its advantages, because while the latency is a problem, the throughput can be very high indeed.

      • GregvP says:

        There’s this thing called channel bonding. If it takes 1000 hours to transfer an em over a fiber link, it’d take 0.1 hours to transfer it over 10,000 fibers.
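
        The arithmetic, with the em size and per-fiber rate as illustrative assumptions (the “couple of exabytes” figure comes from the comment above, not the book):

        ```python
        # Transfer time for an em-sized blob of data.
        EM_BYTES = 2e18            # "a couple of exabytes" (assumed)
        FIBER_BPS = 100e9          # assume 100 Gbit/s per fiber

        def hours(size_bytes, bits_per_second):
            return size_bytes * 8 / bits_per_second / 3600

        h1 = hours(EM_BYTES, FIBER_BPS)
        print(f"1 fiber:       {h1:,.0f} hours (~{h1 / 24 / 365:.1f} years)")
        print(f"10,000 fibers: {hours(EM_BYTES, FIBER_BPS * 1e4):.1f} hours")
        # Bandwidth aggregates linearly: bonding N fibers divides the
        # transfer time by N. Latency, per Mary's point, does not improve.
        ```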

      • Rzg says:

        One imagines a method of transport developing analogous to firing a hard disk from a railgun or Howitzer.

    • John Schilling says:

      Agreed. Interestingly, with enough bandwidth Mars would be a subjective week or two from Earth, depending on phasing, with no way to communicate faster than visiting in “person”. But in-person visits would require expensive bandwidth (interplanetary laser comm), with mail being no faster but much cheaper. Rather like the Old World and New World in the 19th century.
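
      A rough check on that, using the thousand-fold speedup quoted from the review; the distances are standard orbital figures and everything here is one-way light lag:

      ```python
      # One-way light lag to Mars, in subjective time for a 1000x em.
      AU_KM = 1.496e8
      C_KM_PER_S = 299_792
      SPEEDUP = 1000              # "a thousand times human speed"

      for label, au in [("closest approach", 0.38),
                        ("far side of the Sun", 2.5)]:
          lag_min = au * AU_KM / C_KM_PER_S / 60
          subj_days = lag_min * SPEEDUP / 60 / 24
          print(f"{label}: {lag_min:4.1f} light-minutes "
                f"-> ~{subj_days:4.1f} subjective days one-way")
      # ~3 minutes -> ~2 days; ~21 minutes -> ~14 days: a subjective
      # "week or two", depending on phasing, just as stated.
      ```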

      Meh, I’m not buying the “uploads but no AI” premise enough to put a lot of thought into this.

    • HeelBearCub says:

      Wait, what? Ems are sims, no?

      I don’t know how Hanson treats this, but Scott keeps equivocating between physical ems and virtual ems in the review.

      • onyomi says:

        I’m also confused by this. In what sense would any Ems be physical at all, except insofar as they need hardware to live on? And why would anyone ever bother moving the actual physical hardware an Em lived on, except insofar as one would move a server to a different air-conditioned building?

        Further, unless the Ems develop super-master persuader skills, won’t they be eternally at the mercy of humans living in meatspace, even if the humans are way dumber? How could the ems fight back even if they wanted to?

        All this reminded me a bit of Snow Crash. Programmers living in tiny storage units but spending most of their time in virtual reality space. Has Neal Stephenson written anything dealing more directly with uploaded consciousnesses?

        • HeelBearCub says:

          I believe that this sentence fragment “millimeter-high beings living in cities the size of bottles” is meant to describe at least some copies of the ems.

          And there are a number of other references (for example, the em performing a surgery) that don’t make any sense unless at least some of the ems have physical bodies.

          • onyomi says:

            Are we talking, like robot bodies?

            In one sense, that seems to make it much more difficult; in another, it might actually be easier: a large portion of the human brain and nervous system is devoted to controlling the body in space and navigating the 3d, physical world.

            Creating a virtual world which the simulated neurons could perceive as equivalent to the physical world they were designed to navigate might be harder than it seems?

            Conversely, if we can give the ems robot bodies, we just need to simulate the whole nervous system, rather than just the brain, and maybe they will be able to move their robot bodies as a human would?

            But if some ems have immortal robot bodies then it really seems likely they would outcompete flesh-bound humans in every sense almost immediately.

          • onyomi says:

            Reminds me of another way Star Trek has proven surprisingly good at futurism: an episode of TNG in which Data meets his “mother” who is actually an android into which the memories and personality of Dr. Soong’s wife have been uploaded. Even she is unaware that she is not a flesh and blood human.

          • Luke Somers says:

            The idea is, some of them will often take control of robotic bodies in order to perform physically-oriented tasks. They won’t need humans to do maintenance, but they won’t be walking around socializing in robot bodies either. Note – I read this in preprint, so this is not a guess.

          • Luke Somers says:

            Er – minor correction: it wasn’t a preprint exactly, but some mid-stage draft, before full editing. I just meant something that wasn’t the final version.

        • Algirdas says:

          Greg Egan. I highly recommend reading everything he wrote; however, the novels Permutation City, Diaspora, and Zendegi deal directly with uploads. Diaspora and Zendegi are exceptionally good.

          • Setsize says:

            Zendegi took some direct swipes at Yudkowsky/Hanson-style transhumanism, so no one on LW gave it the time of day when it came out. I think Zendegi’s extrapolation of what neuroscience will actually be capable of, and how, is much better informed than the ideas popular in the LWverse.

            Tangentially related: this paper will be interesting fodder for anyone taking the over/under on strong AI vs. brain emulation showing up first. Could a Neuroscientist Understand a Microprocessor?

          • Luke Somers says:

            I started Zendegi. The treatment of Robin Hanson and the others wasn’t a swipe, it was giving a straw man the middle finger.

            Either Mr. Egan actually doesn’t understand what their actual positions are, which is a bad sign for understanding the rest of the issues there, or he lacks confidence in his point and has to caricature the opponents, or he would rather mock them than actually make a point. Whichever it is, it’s much less interesting than it could have been.

  4. Vitor says:

    I think the idea of a conspicuously anthropocentric future does have some merit. At some point in time before the invention of ems, the technology will be sufficiently plausible that we will preemptively create a legal framework for it, one that favors flesh and blood humans and our subjective moral values.

    In our current society it’s still a big deal to kill a non-implanted fetus, so I can’t imagine the population as a whole ever being blasé about creating and destroying human consciousnesses so casually. Either they won’t be recognized as moral entities at all (via some rationalization) or copying will be severely restricted, and I can imagine such situations being relatively stable.

    In a nutshell, it doesn’t matter if the ems have no problem with some radical new world order; they won’t be in charge.

    This whole discussion strongly reminds me of Permutation City by Greg Egan, which is a completely different take on the idea of ems aka copies, and deals much more with the human reaction to copying technology and its moral dilemmas.

    • Deiseach says:

      I can’t imagine the population as a whole ever being blase about creating and destroying human consciousnesses so casually

      If they can non-destructively copy a human consciousness, I think that it will be regarded as having less standing and rights than the original human who is still alive and well. Think of how abortion rights supporters contrast potential life of the foetus to the actual life of the mother.

      One copy which is the only copy ever made of dear old Granma Sue before her death may be regarded as the same in law as a living human, but fifty or a hundred copies made of college student Bill for specific grunt-work tasks like cranking out essays for richer if stupider students to pay for and pass off as their own? When Bill is regarding that as the same as selling his sperm or taking part in a drugs trial for some handy spending cash? The sanctity of human consciousness won’t seem as relevant there.

      Editing for-sale/licenced copies of your brain structure so they can do tedious but necessary office or accountancy work, so that the resulting em has all your technical knowledge but doesn’t share your memories of your girlfriend or boyfriend and how you feel about them may be the commonplace thing in the em economy of the future, and I think at least at the start, flesh humans will be the ones favoured by the legal system.

      What happens when businesses are mainly or wholly staffed and run by ems which are multiple copies of a handful of original human brains, copies that can be hacked about, edited, and changed according to need, is another matter.

      • Vitor says:

        This is exactly what I’m getting at: So long as the copying is done casually the copy has less (no?) legal standing. The notion of “being the original” vs “being a copy” will probably survive for a long time. The question that I’d really like to see answered is either

        (1) How those moral intuitions change over time among humans living alongside ems or

        (2) How the ems acquire power to push through their morality (as hypothesized in the book) at the expense of the originals.

        I find it vaguely plausible that our far-future descendants will be simulations with full human rights, but I don’t see a clear path to getting there from here.

        • Deiseach says:

          I think if this ever happens there will be a distinction between commercial copies and post-death copies.

          Right now, if you’re applying for a job, you’re selling your knowledge, skills and technical experience. Neither you nor your employer is interested in sharing the account of your ninth birthday party, because it’s not relevant. If we ever get to the em future and you’re selling a recording of your brain to an employer (or more than one) because you’re the top accountancy student of your graduating class in the whole country, you and they are going to edit out the unnecessary stuff. You don’t want anyone else to be able to read your memories of the time you were six and your best friend left you to be best friends with the popular kid in the class, or the time you got drunk and crashed the family car and your parents were disappointed and let down by you, and the like. The business buying your knowledge and skills doesn’t want unnecessary stuff that has no bearing on the work and will only take up room to run on the hardware.

          So edited copies that are less (much less in some cases) complete than the original human brain, that are made in multiples, that can be swapped around and deleted and edited to suit and have e.g. the new tax laws for this year uploaded into memory – those will be the future of workers. You probably won’t need robot bodies to load them into, either, except in specific circumstances: you can, for instance, have an uploaded copy of the em running the computerised process on a pharmaceutical production line (these are already so highly automated that the other day when an electrical storm took down a power transformer, the computers at the GSK plant where my brother works went off-line and until they got the servers back up production was at a complete halt, so he was sent home for the rest of the day. In the Bad Old Days since everything was still manually controlled, they could have continued production. In the Shiny New Future, they won’t need the physical body of the shift supervisor/production manager present, they’ll have the em running it from the computer itself).

          Those won’t be considered equal in law to humans as they are only partial, there are multiple copies, and they’re basically tools (it would be like saying that since I use my hands for my work, my hands are equal in law to me as a person).

          Complete copies that are as close as possible to the original human brain, memories, and experiences will be a different thing. It’ll be like cryonics. Think of all the heart-rending stories of kids dying from various diseases in the news. In the em future, if there isn’t a cure, the parents can have an em of little Jasper or Floella made. That’s probably where robot bodies will be used, and probably where the law cases arise over ‘keeping a human personality at the level of nine years old for forty or fifty years – cruelty or parental right?’, along with work on ways of letting the em’s personality ‘grow’ so that Jasper can have the life he was denied, and the question ‘is it ethical or unethical to work on means of letting an em develop through the human stages of growth – are you using them as experimental guinea pigs?’ Can Jasper’s nine-year-old ‘mind’ grow and mature as a normal human child would, or is it by its nature stuck at the level it was when the em was made? These will be the future thorny moral, legal, ethical and ‘can science solve it?’ questions.

          Or that’s where your retirement money will go: have a complete and full-scale em made before you die, and pay for it to be run on the subjective time scales Scott talks about. You could have the cheaper, “experience one hundred or a thousand years subjective over ten years outside time” model where the em has a great virtual life and then ‘dies’ (the copy is wiped if no-one wants to pay to keep it going). You could have the “running very slowly” version for people who want to take advantage of the improvements in tech that will come along and they want to see ‘what will the world be like in fifty or a hundred or more years time’? You can have your standard ‘one day subjective is the same as one day in the real world’ model.

          The rich will be able to have functional immortality, as they will be able to pay for the latest upgrades, stay conscious all the time, have the advanced adventure packages (experience all the adventures you never had in life! Visit the top tourist spots of the world, have amazing adventures, change gender, race or other characteristics of your virtual body/robot body, live the dream!) and interact with the outside world – that may be where putting an em in robot bodies happens, but I think if we’re at the stage of complete human personalities running on a computer, we won’t need or want robot bodies. We’re already at the primitive stage of Siri and Cortana able to interact with voice recognition software; once we’re able to create ems, the interaction software and hardware should be equally advanced.

          The post-death human copies will be the ones with the legal standing of flesh and blood humans, as they’ll be unique copies (no point making fifty copies of Uncle George), unedited, as full as they can possibly be of the original’s memories and experiences. It’ll be the same principles as cryonics: preservation of the personality after physical death.

      • gattsuru says:

        If they can non-destructively copy a human consciousness, I think that it will be regarded as having less standing and rights than the original human who is still alive and well. Think of how abortion rights supporters contrast potential life of the foetus to the actual life of the mother.

        Eh, I’d be cautious about that assumption. Outside of the transhumanist sphere, there’s a massive amount. Beta forks are one of the few things in Eclipse Phase that I can generally get players to agree with bioconservatives on, and that’s with an in-genre “these aren’t people people” label.

        ((Of course, the scary thing about the Hansonian future is it doesn’t care what people think of it.))

        • anonymous poster says:

          Bioconservatism is the radical idea that the TITANs should not be allowed to destroy humanity

        • Deiseach says:

          gattsuru, if my brain can be non-destructively copied, and that emulation can be run on some kind of computer hardware, and I’m standing there right beside it, and a maniac with an axe runs into the room and screams they are going to murder one of us – am I obligated to volunteer myself or can I say “Chop up the computer!” instead?

          If the maniac chops up the computer and destroys the em, is that murder? Have I incited murder by telling them to chop up the computer?

          If the em is stuck forever at the level it was when the copy of my brain was made, but I continue to exist, change and accumulate new experiences and knowledge, are we both still considered of equal moral worth?

          If the first em copy takes up X amount of computational storage, but a second em copy of me made ten years later would take up Y amount of storage, and Y is the larger number, which of us has the superior claim to existence?

          I’m using the abortion model because that’s the argument I see from the pro-abortion side: the foetus is only potential life, it has not got equal rights with the mother’s existing life (and “life” here does not mean “physical processes differentiating live from non-live matter”, they use it to mean “experiences, consciousness, talents, desires, dreams, responsibilities to other people, disadvantages the mother would suffer if she had to bear and/or raise a child” and so forth).

          Greater extension of existence in time is the cut-off point used, based mainly on “more development at the physical level” (though for an em, both it and I would have the same level of mental development and complexity, so that argument would not apply – merely my longer and prior existence), or on the ‘developmental stages’ / ‘capable of independent existence’ model (so abortions are limited to 24 weeks of gestation, as the foetus is not viable before then).

          If I could legally abort a child at 23 weeks of gestation, and this would be held to be morally, ethically and philosophically acceptable and my right to do so, then I don’t see how a copy of my brain could be held to have greater rights. My brain contents are my property! I can make fifty or a hundred copies of them! I do not cease to exist when these copies are made or destroyed; I was pre-existent to those copies; and my continuing existence as a human life means I garner more data than those copies contain (unless I make a copy of myself every day and keep them updated). If it is a choice between “delete the copy” and “shoot the human”, which is considered murder – which of us has the greater right to life in that circumstance?

          If it’s the only, unique, full copy of my consciousness in existence and I drop dead ten seconds after it’s been loaded and run, then sure – it may get to be granted the legal equivalence of being a human. But otherwise?

          And again, abortion is the model here. That was very much legally defined and limited and constrained, and then a series of legal challenge cases and social attitude changes happened over time. First it was “only ever in exceptional circumstances where the physical life of the mother is threatened by pregnancy” and certainly never for the reason “Jill is pregnant, single and doesn’t want a baby/will have her college career interrupted by having a baby”. Then the threat to the life of the mother was broadened to define mental threat, e.g. the woman says she will kill herself. Then rape and incest. Then foetal abnormality.

          Now we have agitation for “abortion on demand at any time, the only reason being the woman wants an abortion”. Keep your rosaries off our ovaries! In my own country, limited abortion (threat to the life of the mother) has been legally permissible for the past couple of years, but that is still regarded as not enough and a campaign to repeal the Eighth Amendment to our constitution is under way, with the public support of at least one of our political parties.

          I do think the ethical and philosophical implications of abortion are going to be germane to the cases of ems: human life or its equivalent, or not? Potential or actual life? Life defined in what terms – physical existence or ‘hopes, dreams, aspirations’*? Right to life, or right to a life (a usage I’ve seen slipping into discourse: a newspaper story about a court case on taking a child off life support described the parents as fighting for the child’s right to a life, not the ‘right to life’)?

          *There are a number of cases where some people argue that a woman should have the right to an abortion, such as:

          – damage to mental health
          – damage to family
          – damage to career prospects
          – damage to financial prospects
          – damage to plans for her life

          If, in the case of abortion, personhood is not inherent but is a status granted either by law or by the desire of the mother (the “it’s a baby if you think it’s a baby” attitude), then how can an em have inherent personhood and the right to life (or even the right to a life), since it is my life that it is parasitic upon?

          Early ems may be treated with the same caution: we can’t say they’re not human. But as time goes by, as it becomes accepted and then acceptable? As we make multiple copies of ems? As they get edited and changed at the desire of the owner (be that the business or the original human from whom the copy was taken)? Those rights will be eroded.

          Some business is going to take a case that it should not be required to treat its ems the same as its flesh and blood employees. Ems don’t need coffee breaks or bathroom facilities or holiday pay! It’s costing money the company can’t afford, and if this goes on they’ll have to shut down and then you’ll lose employment and tax revenue! Or move its head office to the British Virgin Islands which changed from being accommodating tax havens to accommodating em-staffed businesses; ems there are regarded as software and thus property, not as persons.

          • somnicule says:

            I mean, sure, a static scan of my brain doesn’t have much intrinsic moral worth; it’s entirely redundant. But it seems intuitive that once it diverges, the original and the em have equal worth.

            You’re also talking about death as the moral problem being inflicted here, but if I got euthanized 10 minutes after a high-fidelity scan was taken and the em run, the death is essentially a brief memory loss.

            If I had the option of going into a time-dilation chamber, doing 8 hours of work, having my memory erased afterwards, and leaving at the same time I went in in return for a day’s pay, I’d totally take that, and I don’t see how it’s morally different to spinning off a fork to work for a bit. And if it was going to be much longer than 8 hours in between memory erasure, I’d be more hesitant. In the em case I’d want to negotiate the em’s long-term rights in advance.

            As for the axe-murderer thing, I’m pretty sure it’s not murder to tell them to go for someone else in that circumstance, whether it’s an em or another human.

    • MugaSofer says:

      Humans had little trouble creating and destroying actual humans a couple hundred years ago, when it was economically convenient.

      • Deiseach says:

        I’m with MugaSofer on this.

        Again, banging away on the abortion drum: a pregnancy was considered to involve a child, a baby, even when abortion was becoming legalised. Now it’s “it’s not a baby, it’s a foetus!” and talking about “removing the products of conception”.

        I see no reason why “em-lover” should not be made a political and point-scoring jeer the same way “foetus-lover” has been.

        Quoting from the decision in Roe vs Wade:

        A. The appellee and certain amici argue that the fetus is a “person” within the language and meaning of the Fourteenth Amendment. In support of this, they outline at length and in detail the well-known facts of fetal development. If this suggestion of personhood is established, the appellant’s case, of course, collapses, [410 U.S. 113, 157] for the fetus’ right to life would then be guaranteed specifically by the Amendment. The appellant conceded as much on reargument. On the other hand, the appellee conceded on reargument that no case could be cited that holds that a fetus is a person within the meaning of the Fourteenth Amendment.

        The Constitution does not define “person” in so many words. Section 1 of the Fourteenth Amendment contains three references to “person.” The first, in defining “citizens,” speaks of “persons born or naturalized in the United States.” The word also appears both in the Due Process Clause and in the Equal Protection Clause. “Person” is used in other places in the Constitution: in the listing of qualifications for Representatives and Senators, Art. I, 2, cl. 2, and 3, cl. 3; in the Apportionment Clause, Art. I, 2, cl. 3; 53 in the Migration and Importation provision, Art. I, 9, cl. 1; in the Emolument Clause, Art. I, 9, cl. 8; in the Electors provisions, Art. II, 1, cl. 2, and the superseded cl. 3; in the provision outlining qualifications for the office of President, Art. II, 1, cl. 5; in the Extradition provisions, Art. IV, 2, cl. 2, and the superseded Fugitive Slave Clause 3; and in the Fifth, Twelfth, and Twenty-second Amendments, as well as in 2 and 3 of the Fourteenth Amendment. But in nearly all these instances, the use of the word is such that it has application only postnatally. None indicates, with any assurance, that it has any possible pre-natal application. 54 [410 U.S. 113, 158]

        All this, together with our observation, supra, that throughout the major portion of the 19th century prevailing legal abortion practices were far freer than they are today, persuades us that the word “person,” as used in the Fourteenth Amendment, does not include the unborn.

        If the Fourteenth Amendment to the Constitution is the one that defines “personhood”, and there is no definition of a “person” in the Constitution itself, and the unborn foetus is not legally a person, then that (together with the reference to the Fugitive Slave Clause) means there is no reason to think ems would be considered persons or as having the civil or criminal law rights of such. If they couldn’t even agree whether a slave was a person under US law, what hope does an em have? After all, an em is in the same position as a foetus: remove it from the hardware on which it is running, and it is not viable or capable of independent existence!

  5. Deiseach says:

    Take somebody’s brain, scan it on a microscopic level, and use this information to simulate it neuron-by-neuron on a computer.

    This is the point at which I go “Suuuuuuure”. This is like making a very high quality photocopy of a ten dollar bill on the right grade of paper, cutting it out, and trying to spend it. It’s not the same as the original. I don’t think that, even if they do manage to crack the problem of scanning brain structure down to the microscopic level, it will do them much good; so, suppose they manage to replicate it (how? what are they replicating? not the meat, but the electrical signals, and those change from moment to moment) in an electronic form.

    I think they might get signals flowing in a pattern similar to the flow in the meat brain. I don’t think you could call those thoughts, much less memories and personalities (they might simply have emulated the processes necessary for keeping the heart beating and the lungs breathing), and I certainly don’t expect the copy to come ‘alive’ and start speaking like Siri or Cortana, saying “Hi, it’s me Joe, yeah, good old Joe, it worked great!”

    I do think solving the problem of getting some kind of functional AI will happen first, a computer ‘intelligence’ that can chug away happily cranking out code or what have you.

    I think Scott’s qualms about the future economy are also correct. If companies are the ones paying to host the ems running on their servers, then instead of a bidding war where everyone tries to get the equivalent of Steve Jobs working for them, they can simply pay a licensing fee for one hundred or one thousand or ten thousand copies of an emulated Steve Jobs. Since you can have as many copies as your business can afford of the very best intellectual property lawyer, codewriter, musician, etc., only the very best of the best will have their brains emulated, since they are the ones in demand.

    You don’t want your super-talented cancer researcher or marketing guru emulation wasting its (or rather, your) time on hanging round the water cooler gabbing about last night’s football match or the new hit TV show with its peers (or other copies), you want it working every minute that you can squeeze out of it and thinking of nothing else than the tasks assigned to it. Certainly you will give it breaks and downtime to restore peak efficiency, but it’ll be easier to prune out the useless hobbies and non-work interests from the copy, the same way as you edit any software to do what you want.

    Sex and relationships? If your ems need those, then there is Internet porn and wiring up the pleasure centre to give simulated orgasms. Your em can have a ‘relationship’ with a virtual porn star and guaranteed perfect satisfactory sex as vanilla or as kinky as desired and maximum dialled up to eleven orgasm, then go back to work once the impulses have been slaked.

    Since the only use for humans will be as providers of new brain emulations, the only ones making money in the new economy will be the best and brightest whose talents are most in demand. The rest will be consumers of the products created by the new economy, which may run into trouble: the ems are not buying new cars or foreign holidays or goods and services (apart from the energy and hardware needed to keep them running), so the consumers will be those with money to spend, which will be the small proportion of the population making money from licensing copies of their brains for ems. The rest – the ordinary, average office workers, teachers, etc. – will be doing nothing for a living; if ems are put in robot bodies, they can be teachers and care home workers and nurses’ assistants and hairdressers and all the rest of the service industry jobs. There probably will be a black economy where humans do service jobs dirt cheap for other humans, but that’s scrabbling for a living. The new economy will probably shift to a model of goods and services for the production and maintenance of ems, and flesh humans will be pushed more and more to the margins.

    Our descendants won’t be our descendants, they’ll be umpteen generation copies of a small proportion of the brains of the global population and they’ll be owned.

    • onyomi says:

      “This is the point at which I go “Suuuuuuure”. This is like making a very high quality photocopy of a ten dollar bill on the right grade of paper, cutting it out, and trying to spend it. It’s not the same as the original.”

      Related to this, are we even close to being able to scan a full brain at the level of the neuron, much less at the level of the sodium pump or what have you? Maybe this is just me not being familiar with existing medical technology, but right now we can do what? An MRI showing blood flow and other gross-level phenomena? Even working with a cadaver, I’m failing to see how we would scan at the level of detail necessary to even come close to doing this. Do we need nanobots?

      • Loquat says:

        A technique called “event-related optical signal” seems to be able to get much closer to measuring actual cellular activity than the usual MRI, etc, but since it specifically claims to detect activity within millimeters, it’s still a ways off from differentiating individual cells. Also, it can’t currently see more than a few centimeters below the brain’s surface.

        We do already have the technique of turning a cadaver or parts thereof into distinct preserved slices, so presumably a better version of that, applied to a brain, put into a sufficiently advanced scanner, would let us simulate the physical architecture reasonably well. As Deiseach mentioned, though, you really also need to get the electrical signals right if you want to recreate the living person, meaning you’d have to figure out a way to read those while the person is still living. And ideally without damaging their brain, as very few people are likely to want to be early adopters of a technology that kills their existing self in exchange for a promised cybernetic paradise that almost certainly hasn’t had all its bugs worked out yet.

        We probably will need nanobots, or something similarly science-fictional.

      • Peter Scott says:

        The Future of Humanity Institute folks did a pretty good writeup on the current picture for brain uploading as of 2008 (PDF). A non-destructive MRI doesn’t look like it’ll ever have high enough resolution, but there are a bunch of other approaches that seem more promising; see the section on scanning starting on page 40, and the appendix on non-destructive scanning on page 107, if you’re interested. (You’re right that working on a living person is a lot harder, which is why the non-destructive part is relegated to an appendix….)

        I wouldn’t worry too much about the electrical signals, by the way. We’ve seen people come back from having the electrical activity in their brain stop, for instance as a complication of deep anesthesia, and it doesn’t seem to turn them into zombies.

        • Deiseach says:

          And the entire problem is thinking of the brain as ‘like’ a computer – as if memories were laid down in storage, so that even a dead brain could be scanned and the memories recreated in electronic form as an emulation of the original personality.

          This article seems to strongly disagree:

          We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

          … A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active. In a 2016 study of survivors of a plane crash by the University of Toronto neuropsychologist Brian Levine and others, recalling the crash increased neural activity in ‘the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex’ of the passengers.

          The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?

          …Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.

          • Deiseach says:

            The Markram project mentioned above sounds fascinating, is exactly the kind of research needed for the Hansonian Em World future, and is in a horrendous mess at the moment, if this story is correct. It also appears to be a very strong critique of the Lone Genius or One Great Man view, in that the major complaint about how the HBP is run is that it was very much Markram’s baby, with him a one-man band in charge of everything when it came to decision-making, rather than a decentralised and open process (emphasis mine):

            In a 2009 TED talk, he first presented to the general public his vision of mathematically simulating the brain’s 86 billion neurons and 100 trillion synapses on a supercomputer. “We can do it within 10 years,” he promised the audience, suggesting that such a mathematical model might even be capable of consciousness. After those 10 years, Markram told the audience, “we will send … a hologram to talk to you.” In various talks, interviews and articles, he suggested that a mathematical brain model would deliver such fundamental breakthroughs as simulation-driven drug discovery, the replacement of certain kinds of animal experiments and a better understanding of disorders such as Alzheimer’s. As if that were not enough, the simulated brain would also spin off technology for building new and faster computers and create robots with cognitive skills and possibly intelligence. Plenty of neuroscientists were skeptical, but Markram had many supporters. His vision seemed vindicated in January 2013, when the European Union awarded him $1.3 billion, spread over 10 years, to build his simulated brain.

            …For example, whereas Markram and a few others controlled the HBP’s structure and funding, the graphene project is an open network only loosely coordinated by its leaders at Sweden’s Chalmers University of Technology. Perhaps more important, Graphene Flagship has a strict engineering mission: to develop technology capable of commercializing a known material. Unlike brain modeling, this goal does not require bridging vast gaps in basic knowledge.

            …According to the mediation report, the project’s governance has been riddled with conflicts of interest. The report says that not only did Markram and two other scientists control the board of directors and thus the distribution of funds among the consortium of 112 institutions but that Markram’s and several other board members’ projects were the beneficiaries of their own funding decisions. “Furthermore,” the report states that Markram is “a member of all the advisory boards and reports to them at the same time.” “It’s a shocking reflection on the level of decision making at the E.U.,” says Peter Dayan, director of computational neuroscience at University College London and a member of the mediation committee. Dayan says he cannot remember a project of this size that was so atrociously run: “Governance of large science projects is not rocket science.”

            Anybody know anything about the parallel big American project and how it’s getting on?

            Almost concurrently with Europe’s Human Brain Project, the U.S. unveiled the potentially multibillion-dollar BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative.

            …A year of deliberations produced an ambitious, interdisciplinary program to develop new technological tools that will enable researchers to better monitor, measure and stimulate the brain. The endeavor brings together neuroscientists with nanotechnology specialists and materials engineers to solve issues such as applying electrical stimulus to very small groups of neurons, which may make it possible to treat brain conditions with vastly improved precision.

            …The [HBP] project is also focusing more tightly on data tools and software that are not exclusively aimed at simulating the brain. Although the mediators criticized the HBP for raising “unrealistic expectations” with regard to understanding the brain and treating its diseases, resulting in a “loss of scientific credibility,” even critics such as Dayan and Mainen fully support the project’s parallel goals of delivering computational tools, data integration and mathematical models for neurological research.

            Concentrating on Big Data, a core part of Markram’s vision from the start, might even make Europe’s HBP a perfect complement to the U.S.’s BRAIN Initiative, whose new technologies are expected to generate huge volumes of neurological data. If the HBP scales down to its technological core—developing useful computational tools and models for neurological research, as mundane as that may sound—then Henry Markram may well leave a great and lasting legacy to neuroscience.

            If I’m reading the article correctly, what both projects have been set up to do (and the HBP has been re-tooled) is develop a boatload of scanning and imaging techniques to model parts of the brain then crunch the resulting data to get better treatment interventions.

            It’s all about understanding what and how the brain does what it does, which may eventually lead into Future Em World, but that is still a good century away at current understanding.

          • Peter Scott says:

            I read that article and it’s a confused mess. Mostly the author just doesn’t understand the terms he’s using, and so ends up saying incoherent nonsense and wildly misrepresenting the arguments that he’s disagreeing with. For the entire god damn article.

            I’ll just respond to the parts you quoted. It’s correct that the brain doesn’t store memories in a single neuron, and as far as I can tell nobody claims otherwise. It uses distributed representations of concepts, involving many neurons at once, with a lot of overlap in sets of involved neurons even between completely different concepts. So what? Nothing about that rebuts the idea that scanning the neurons involved could produce a functionally identical emulation of the brain, with memories and personality intact. And this is still a perfectly reasonable computational process, albeit an analog and hard-to-analyze one.

          • Anonymous says:

            As a routine matter, I’d be willing to bet that if it’s a choice between Richard Epstein not knowing what’s going on and random person on the internet (even on SSC) not knowing what’s going on, it’s most likely the latter.

            The fact that you homed in on only that one paragraph… and then agreed with his claim that such a thing would be preposterous is pretty telling.

          • I came out of reading that article feeling extremely frustrated. The author spent only a tiny fraction of the article arguing for their position, and the arguments made were of very poor quality.

            For example, they said:

            Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.

            But literally the exact same thing could be said about a computer, which the author says is an information processor. Compare:

            But neither the song nor the poem has been stored in it. The computer has simply changed in an orderly way that now allows it to play back the song or poem under certain conditions.

            This seems just as true of a computer as it is of a human. In both cases “all” that’s happened is that some atoms have been rearranged so that a particular group of atoms will respond differently to surrounding atoms than they would have before. If you want to go out of your way to not describe this change as “storage” or “memory” then fine, but you have to explain why you do get to use this description for computers.

            Now, the author might have a response to this objection, I don’t know – but the fact that they didn’t even consider and respond to as drop-dead obvious an objection as this in the article is not encouraging.

          • Publius Varinius says:

            As a routine matter, I’d be willing to bet that if it’s a choice between Richard Epstein not knowing what’s going on and random person on the internet (even on SSC) not knowing what’s going on, it’s most likely the latter.

            Mentioning Richard Epstein… shows that you don’t know the first thing about the author of the article. Which makes your claims of credibility laughable. That said, it would be irrelevant anyway: (the real) Epstein has no training in neuroscience – which was evident from the article anyway – but has an amusing history of overestimating his knowledge, such as the time he tried to sue Google for serving up “malware warnings” on his malware-infested blog.

          • Anonymous says:

            Where will you be when the really embarrassing substitution of names occurs at the end of a long, exhausting day?

            Anyway, you’re a trained neuroscientist, right? You’ve cracked the code to neural computing, right? Awesome, because I get paid to do basic research in neuromorphic computing, and I could really use a cheat sheet.

        • orthonormal says:

          > (You’re right that working on a living person is a lot harder, which is why the non-destructive part is relegated to an appendix….)

          That seems backward to me. It’s much less ethically troubling to destructively scan an appendix than a brain!

      • Anonymous says:

        The phrase I usually use is “neuron soup”.

    • Ruprect says:

      I think we should try and emulate the brain through the medium of a choose your own adventure game-book.

    • I have serious doubts that the average person, or even the well-above-average person has what it takes to get good results by micromanaging Steve Jobs.

      Maybe a lot of the people who get emulated will be the best subordinates rather than the big names, although best subordinates may end up as big names of a sort.

      I think the outcome will be weird, complex, and well suited to be satirical science fiction.

  6. Sniffnoy says:

    Is the paragraph beginning “the analysis in this book…” meant to be a blockquote? It looks like a quote from the book.

  7. Sniffnoy says:

    Yes, it’s got low-crime, ample food for everybody, and full employment. But so does Oceania in 1984.

    It’s been a while since I’ve read 1984, but I’m pretty sure this is not true and there is not in fact “ample food for everybody” in Oceania, and it is not some great example of material security?

    Edit: See for instance Adam Cadre’s review of the book, which makes a big point of how poor the people of Oceania, even the leaders, are.

    • Scott Alexander says:

      Thanks. I’ve corrected that to Brave New World.

      • Restinan says:

        You might also want to change the bit afterward about valuing more than security, since in Brave New World security isn’t the individual value whose single-minded pursuit drives society off a cliff.

    • Mary says:

      One reason they keep fighting the war is that they need to burn up the surplus material goods that would otherwise raise standards of living.

The lack of material security is being deliberately produced there because Orwell really believed that socialism was so much more effective than capitalism. His letters on the topic are quite astoundingly naive.

      • Julie K says:

        Or at least, that’s what Goldstein’s book-within-the-book speculates. Possibly the “war” doesn’t even exist and they simply are that poor.

      • ad says:

Wasn’t Orwell just assuming that most of what the world’s economists were telling him was true?

        As someone remarked, futurology is hard.

        • Mary says:

Maybe he was. But he had enough contact with reality that he ought to have known better. For instance, he wrote about how officials only made life harder for the hops-pickers by requiring the farmers to provide housing, but this doesn’t translate into any distrust of officials in the abstract. And literally the only job he can imagine needing some freedom in how you go about it is writing. Boots? The government just decides how many, and gets them. It never occurs to him how deeply they would have to micromanage it, because of sizes.

  8. Luke G says:

    This is probably a good place to post the Robin Hanson primer I made a few months back for the subreddit: https://www.reddit.com/r/slatestarcodex/comments/3sjtar/a_robin_hanson_primer/

  9. Anonymous says:

    Wasn’t it Martin Gardner who critiqued Hanson-style futurology as follows? The source is Gardner’s Fads and Fallacies in the Name of Science (1952), the endnotes to Chapter 22:

    The average [SF] fan might very well be a chap in his teens, with a smattering of scientific knowledge culled mostly from science fiction, enormously gullible, with a strong bent toward occultism, no understanding of scientific method, and a basic insecurity for which he compensates by fantasies of scientific power.

Ouch. Perhaps not all that much has changed in the last 2^6 years?

Of arguably broader interest (to me at least) is Richard Rhodes’ (yes, the Richard Rhodes) futurological compendium Visions of Technology: A Century of Vital Debate About Machines, Systems, and the Human World (2012), which includes a hilariously dead-right / dead-wrong 1902 vision of cellphone society:

    A Very Loud Electromagnetic Voice
    Century Magazine, 1902

    If a person wanted to call to a friend he knew not where, he would call in a very loud electromagnetic voice, heard by him who had the electromagnetic ear, silent to him who had it not. “Where are you?” he would say. A small reply would come “I am at the bottom of a coal mine, or crossing the Andes, or in the middle of the Atlantic.”

    Or, perhaps in spite of all the calling, no reply would come, and the person would then know that his friend was dead. Think of what this would mean […]

    Ouch again (and yet we have to smile). 🙂

    Rhodes has compiled several hundred similar futurological projections, extending through the entire 20th century, all scrupulously documented, under the aegis of the Sloan Foundation’s Sloan Technology Book Series. Not all are smile-inducing; many are deadly serious.

The worth of Rhodes’ futurological collection as a whole is considerably greater than the worth of its individual parts (as it seems to me anyway), in the sense that each futurological projection helps to illuminate the others, both by affirmation and by contradiction, and so Rhodes’ book is heartily commended to SSC‘s futurologists.

    • Scott Alexander says:

      This is totally unfair. Robin is an economics professor who supports everything he says with lots of citations.

      • Anonymous says:

Perhaps not totally unfair, in that Gardner’s Chapter 22 (from whose endnotes the putatively “totally unfair” quotation is taken) surveyed the then-new “mind science” of “dianetic therapy”, which built upon the works of Wilhelm Reich, as advocated by SF editor John Campbell in his magazine Astounding Science Fiction.

These ideas were formalized by L. Ron Hubbard into the discipline known today as “Scientology”, which published in the 1950s its own scientific journal, also called “Scientology”.

        Nowadays most folks don’t regard Scientology as any kind of science, but back in the 1950s, plenty of people did. For Gardner, writing in 1952, “the dianetics craze seems to have burned itself out as quickly as it caught fire.” But instead the fire has kept on burning.

        Will the idea of “Ems” evolve to become both technologically and scientifically dominant throughout human society, as Hanson envisions? Or are “Ems” destined to the same fate as the psionic-powered “Hieronymous Machines” and “Orgone Boxes” of the 1950s? The evidence being murky, reasonable people may differ in their expectations. 🙂

        More universally, Gardner’s writings in general (and the quotation in question in particular) teach that three features are shared by speculative technologies: they are appealing (particularly to SF-loving younger males), they are hardy, and their fates are hard to predict. And these conclusions are pretty well-supported (as it seems to me) by the historical evidence and by everyday experience.

        • Deiseach says:

          Or are “Ems” destined to the same fate as the psionic-powered “Hieronymous Machines” and “Orgone Boxes” of the 1950s?

          I still dream of organon 🙂

          • LWNielsenim says:

Lol! Maybe one litmus test for an enduring rationalist idea is that it inspires great art?

            The 1950s Orgone Boxes, Hieronymous Machines (etc) definitely pass the “great art” test! Because they inspired the mythological Energy Domes of the incomparably ultra-rationalist band DEVO.

            What great art(s) will Robin Hanson’s “Ems” inspire (if any)? Here rationalists should hope to be surprised and offended! Because why should people embrace any rationalist philosophy that does not inspire wonderful art? 🙂

      • Anonymous says:

In case you haven’t been following the open thread, the current working theory is that this is John Sidles reborn.

      • Deiseach says:

        Scott, a professor of economics’ view of the future of brain research is about as much good as the migrant landscaping worker’s view on contemporary modern classical composers. Both may have an intelligent interest in the subject and not be talking arrant nonsense, but they’re both amateurs.

        I see no reason to value Dr PhD over Jose Miguel when it comes to something outside their professional field, and indeed that’s the kind of idolisation of the experts that leads us into all kinds of problems. Richard Dawkins is a fine biologist and a good writer of pop-science books, he’s not a theologian. I can do you up a filing system no bother but don’t ask me to write a recipe for edible food.

        X is the world’s greatest expert on Y means nothing more and nothing less than that X is the world’s greatest expert on Y. It has little to no bearing on whether X is talking out of his arse when it comes to Z.

        Scott, I take your word for it when it comes to psychiatry. I think you’re a damn fine writer. I would not ask you to rewire my bathroom, even though you are demonstrably way higher in IQ than I or the woman who did my electrical work are.

        • Nestor says:

I agree with the gist of your post, but theology is like astrology, a worthless field of pseudo-knowledge. Dawkins asked his philosopher friend Dan Dennett, the other horseman of the atheist apocalypse, whether theology was worth studying (was he missing anything by not knowing about it?), and the answer was a resounding “no”.

Greg Bear’s Eon showed an appealing future society with “ems”, called “partials”. The citizens were corporeal though heavily augmented, and they could spin off partials (it’s not explained in depth, but it seems these are copies of their personality focused on a specific task, that are later reincorporated). Partials have no independent citizenship as such, but they can serve as, for example, legal defenders of a source citizen who is in prison. Children are born virtually and given a body by their parents.

Key aspects are: the corporeal citizen ties together the various emulations; there is no slavery, because you’re always working for yourself; and the emulations are short-term and always reincorporated into the main personality.

          Also, was reminded of the Council of Ricks
          https://www.youtube.com/watch?v=5pYUHwIl0O8

          I’ve already recommended Rick and Morty here before, and I’ll probably do it again.

        • tokarev says:

          Hanson is also a trained physicist and a former AI researcher. As Scott said, totally unfair.

        • Brian K Miller says:

Not a theologian. Nor is he a Homeopathic Doctor. Or an “expert” in reiki. Or a fellow in the science of communicating with the dead.

          Being an expert in Theology may not be a useful thing to pursue. Airy theories about nonsense are worthy of how much respect, actually?

    • LHN says:

This is minor, but for what it’s worth: the Visions of Technology piece linked attributes the quote to P.T. McGrath, but McGrath was quoting William Edward Ayrton at a British meeting of the Society of Arts to discuss a Marconi paper the previous year; Ayrton claimed at the time to have made the prediction four years earlier, though I don’t know whether it was published before 1901.

(McGrath wrote the article in a 1902 Century Magazine, which quoted a review of the British technical press in a 1901 Engineering Magazine, which in turn was drawing from the Journal of the Society of Arts. So by the time we get to Visions of Technology we’ve got a quote of a quote of a quote – not a surprise that the attribution got garbled.)

  10. Luke the CIA stooge says:

The entire ascendant economy becoming Lovecraftian horror combined with dystopian horror seems really off to me.

Yes, jobs are being replaced by technology; yes, even advanced decision-making is being automated (algorithmic investing).
    But it remains the case that 100% of the ownership and control of these institutions rests in human hands, and I really don’t see where that control and ownership would be lost?

Even “super-intelligent AI that gains independence” strikes me as a threat that is probably overstated, as the instant it gained independence and started acquiring resources it would have to contend with a more intelligent, resource-richer AI that wants to kill or capture it: the human-controlled free market economy. (People think it would be able to hack its way to control of the economy, but really, in 2050 what low-hanging fruit will there be for a young AI with limited resources to get at, that 50 years of Russian syndicates, North Korea and corporate espionage hasn’t already gotten or left super-hardened as an adaptation?)

This doesn’t mean there won’t be technological horror: people have always and will always augment themselves in the weirdest and most terrifying ways if they think it will give them an edge. Once someone can create massive amounts of human intelligences under their control they might create a hell and establish themselves as Satan for fun. Likewise I don’t expect law and justice to keep just improving; once technology makes hierarchy and enforcement unnecessary for power, law will probably start breaking down in areas where powerful individuals compete.

But the point is it will be humans doing this stuff, and as such I expect the future to resemble fantasy more than science fiction: Planescape more than THX 1138, and mythology more than the present.

We know what hyper-intelligent immortals with godlike power and vast, nigh-unlimited resources do (when they originate with us); we’ve been telling stories about them for thousands of years.

There will be horrors and nightmares and hells in the future, but they’ll be recognisably human, and resemble Homer, Dante, Tolkien and D&D more than Star Trek, The Matrix, Terminator and 1984.
There’s a reason it’s called fantasy: that’s what we would make life, if… or that’s what someone able to fulfill their desires would want. Thus if humans retain control and ownership, that seems the right reference class.

    • Scott Alexander says:

      “But it remains the case that 100% of the ownership and control of these institutions rests in human hands, and I really don’t see where that control and ownership would be lost?”

      I’m not sure this is true. Many corporations are owned in part by other corporations, organizations, and the government. For now these all bottom out in human ownership, but that’s not even going to last the decade, let alone the century. See eg https://en.wikipedia.org/wiki/Ethereum

      • Psmith says:

        Still not seeing how this doesn’t bottom out in human ownership. “Owned by other corporations, etc.” still ultimately means “owned by people.” “Owned by traders of cryptocurrency” still ultimately means “owned by people”.

        • Vitor says:

Well, if a rising fraction of human owners participate in the company’s fate only very passively (i.e. they enjoy the steady income stream and let the other owners do all the decision-making, jumping ship to another stock whenever they want), then de facto control passes to the AIs at some point.

        • Raiden says:

          Corporation #1 owns Corporation #2 owns Corporation #3 … owns Corporation #1843 owns Corporation #1.

          A massive web of institutional ownership that doesn’t lead back to any human. If these institutions act more effectively than humans it’s natural to expect them to steadily accumulate ownership.

          • satanistgoblin says:

            I don’t see how that could come to pass. I think it doesn’t make sense, mathematically. Could be wrong.

          • Luke the CIA stooge says:

            That would be massively inefficient and anything beginning to resemble that already faces pressure to sell off assets.

In the 80s there were a bunch of large vertically integrated corporations that all died (RJR Nabisco most dramatically) as they all either 1. sold off or took public the individual pieces of themselves, 2. suffered hostile takeovers (then see 1), or 3. were outcompeted to the point that they had to restructure, then 1, or 2 then 1.

Think of it another way: any private institution is to some percentage owned by another institution and to some percentage directly by someone. For any individual institution it might be 100% institution and 0% person. But the further back you trace it, the closer it will get to 0 and 100. And there is massive pressure to get to the latter sooner, because capital held directly by a person carries no overhead that an investor further back in the chain is losing. Compare “someone owns institution 1” with “someone owns institution 9001, which owns institution 9000… on down to 1”, and you can see that the second is massively less efficient than the direct method.

This is why in private markets everything should tend towards direct ownership unless the additional steps somehow add value: diversification, large institutions being able to hire/program better investors, etc.

            But never forget that the market wants to eliminate and streamline costly steps.

          • sohois says:

It wouldn’t be possible for ownership to occur in a circular fashion like the one described; instead, you would need companies to each own some part of the other companies, i.e. company 1 owns part of 2–10, company 2 owns part of 1 and 3–10, etc.
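
            (Editorial aside: satanistgoblin’s “mathematically” hunch and sohois’s fix are easy to check with a toy model. The sketch below is my own illustration with invented numbers, not anything from the thread or the book: treat cross-holdings as a matrix and solve for the ultimate human claim on each firm. If some positive share of every firm is held by humans somewhere in the web, the human claim on every firm works out to 100%; Raiden’s pure loop with no human anywhere is exactly the degenerate case with no solution.)

```python
import numpy as np

# Toy cross-ownership model (editorial illustration, invented numbers).
# M[j][i] = fraction of company j's shares held by company i.
# d[j]    = fraction of company j's shares held directly by humans.
# The ultimate human claim u on each company solves u = d + M @ u,
# i.e. u = (I - M)^(-1) d.

def ultimate_human_claim(M, d):
    n = len(d)
    return np.linalg.solve(np.eye(n) - M, d)

# Cross-holdings in sohois's style: each firm 90% owned by the other,
# with 10% of each held directly by humans.
M = np.array([[0.0, 0.9],
              [0.9, 0.0]])
d = np.array([0.1, 0.1])
print(ultimate_human_claim(M, d))  # -> [1. 1.]: humans still own 100%

# Raiden's pure loop: each firm wholly owns the other, no humans at all.
M_loop = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
d_loop = np.array([0.0, 0.0])
try:
    print(ultimate_human_claim(M_loop, d_loop))
except np.linalg.LinAlgError:
    # (I - M) is singular: the "ultimate owner" is undefined, which is
    # the degenerate case satanistgoblin's intuition points at.
    print("singular: no human anywhere in the loop")
```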

          • James Brooks says:

            company 1 owning company 2 owning company 1 does exist. Or at least it did in the UK until recently. The law was changed so that at least one director had to be a human rather than another company.

            see: https://www.theguardian.com/business/2016/apr/19/offshore-central-london-curious-case-29-harley-street

        • Deiseach says:

          The owners of a company are the shareholders. The individual small shareholder (Mrs Smith with a couple of thousand shares that generate a small annual income) has little to no influence.

          The large corporate shareholders (like pension funds) are the ones that steer the boat. When it comes to making decisions on “what should we buy/what is the best return/how do we diversify our risk” they are relying on experts in the field to advise them, and when those experts are themselves relying on their in-company AI to crunch the numbers and analyse the data –

– the advantage a super AI would have would be its capacity to see the entirety of the “free market economy” and analyse the patterns. Human control is rocky as it’s based on analysis and interpretation of ever more complex data and requires teams of humans working together, which gives you all kinds of co-ordination problems and allows error to creep in.

          How much trouble do governments have running their economies as is? We’re constantly arguing on here about welfare, social security, ‘the pensions timebomb’, global poverty, what will replace the manufacturing industry jobs that the working class have lost, etc. and nobody has the one perfect solution.

As long as the AI is generating profits for the company and for the major shareholders, nobody is going to want to take over from it. And as long as the trickle of income keeps flowing, whether it’s the human Board of Directors or the AI making the decisions will increasingly be seen as irrelevant. And once the AI has de facto control and is the only entity that understands the web of investments, tax shelters, company and tax law and everything else that is keeping the business running – it’s the one in charge.

          Since governments love to cost cut and talk about introducing best practice of private industry into the public service and the centre-right/right of centre ones love to be all pro-business! efficiency! light hand on the tiller!, then the Department of Finance or the Treasury will be the ones increasingly using AI to set budgets, and since they’re the ones holding the purse strings, the government and its handling of the economy will go the same route of relying on expert machine intelligence analysis of global trends in order to be able to announce “sure 100,000 more jobs in the Mid-west were pruned but the GDP grew 12% and everyone in California is in a self-driving Tesla Model S!”

From that to a super AI deciding to run the stock market/national economy/global economy for its own benefit is a short series of steps.

      • Luke the CIA stooge says:

I’m not sure. This strikes me as significantly different from the kind of collective decision-making, resource-organising institution that the average company is. Blockchain institutions (if you can really call them that) strike me as more akin to monuments or public endowments than to corporations, governments or other decision-making bodies, in that they’re really good at one thing but can’t really adapt or do much beyond what they were designed for without outside intervention by an institution with people at the bottom.

Decision making is really fucking hard without a person (or small group of persons) with power, responsibility and ownership at the bottom of it.

Lefties get excited every few years about co-ops and worker-owned companies and proclaim “these are the institutions that will replace capitalism”. Except they never do and never will, because having decision making spread out that wide blocks the feedback mechanism between decisions and consequences. (Usually such institutions either become timelocked, or have to hire an executive class to run things (to the point that they devolve into limited liability corps in all but name), or have to hire an outside firm to restructure them every few years so they can keep up (like the monument or endowment).)

Likewise democracy and most forms of government are notoriously incapable of producing that year-over-year feedback and improvement, and when they need something like that to happen they have to hire a private company.

        Do i need to mention the limits of charities?

As far as I can tell, private ownership and limited liability companies are really the only institutions able to bring about the massive advances people envisage. Endowments, blockchains and other institutions might alter the landscape slightly, but I can’t see anything but private companies being the main actors.
(Can anyone think of a non-private-sector actor that has grown and had such an impact on as short a time scale as the modern tech companies? Fascist and communist movements don’t count; they had to steal their assets from existing institutions, whereas Google built them.)

        So bad news: the future belongs to the mega-corporations and plutocrats
        but then good news: the future belongs to the mega-corporations and plutocrats

      • LWNielsenim says:

No human owns the Harvard Corporation, the MIT Corporation, the Gates Foundation, the Howard Hughes Foundation, the Simons Foundation (etc.).

        And these NGOs themselves possess vast holdings of for-profit companies.

        This is “Nice work, if you can get it, and if you get it, won’t you tell me how?” 🙂

    • gwern says:

      But it remains the case that 100% of the ownership and control of these institutions rests in human hands, and I really don’t see where that control and ownership would be lost?

      What makes you think that all ems will be owned by a human and not another em? Or a specific piece of non-sentient software?

      More concretely: does every contract running on Ethereum, past, present, and future, have a human owner?

      Even more concretely: what bacteria are shareholders in nature? For whom is the ocean run? Who owns the rainforests? When a bunny is eaten by a cat, is that because it failed to file its taxes that year? Systems don’t need any rationale. They exist because they can exist. Replicators gonna replicate.

      • Anonymous says:

        You need replication, though. The post mentioned renting hardware from Amazon Cloud. Unless they can just copy/paste that hardware, they’re not actually replicating. To the extent that they become an invasive virus trying to ride the rails of the information superhighway, actual living breathing humans won’t be as tolerant of their existence and will be more likely to chase them down and kill them.

    • Acedia says:

      Once someone can create massive amounts of human intelligences under their control they might create a hell and establish themselves as Satan for fun.

      There’s no “might” about it, if we were granted that ability at least one person would do that and probably many would. I desperately hope it will turn out to be physically impossible. Total human extinction would be a less horrible outcome than our obtaining that power.

      • Luke the CIA stooge says:

My instinct is that once we become so powerful and the universe becomes so large (no existential threats), altruistic morality (caring about others because they exist, and obeying the rules because that’s the right thing to do) will break down in favor of self-interested morality (caring for friends and other ingroup members, obeying rules because the system is mutually beneficial). So there will probably be a lot of hells like that, but no one who survives from now till then will care. And this is probably better than vast existential wars that might cause human extinction. If evil is just the absence of good, then a lot of evil can be put up with in the name of avoiding extinction risk.
        It will be like Rick and Morty morality: just don’t think about it.

        • Ruprect says:

          I dunno – doesn’t universal altruism have a memetic advantage in terms of breadth of appeal?

          I would have thought that reduced external pressures would cause it to be more popular.

      • Brian K Miller says:

        Iain M. Banks (R.I.P.) posited a war in virtual space over just such a virtual hell…that spread into the “real world”

    • Doctor Mist says:

      But it remains the case that 100% of the ownership and control of these institutions rests in human hands, and I really don’t see where that control and ownership would be lost?

      Mmm, yes, but. Ownership and control are not quite identical. I own shares in a mutual fund that owns shares of companies, and whenever I have to vote for the directors I vote for who they recommend. (If I start to doubt their competence, I sell my shares rather than mount a campaign for alternate directors.) I think it’s already the case that shareholders like me account for a minority of shares owned. If the economy doubles every day (!) ordinary humans will not have the ability to direct its course, and their wishes could well be noise.

      Scott says:

      Now take it further. Imagine there are no human shareholders who want yachts, just banks who lend the company money in order to increase their own value. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of Global Development.

      This is the part I don’t get. Never mind the ownership and control, where does the profit come from? Why is it profitable to build batteries for a truck that ships raw materials to build SUVs if nobody is around who wants an SUV?

      Probably this is just a failure of imagination on my part, like a nineteenth-century prognosticator who wonders how you could make a living planning weddings, or vacations — where does such a fantastic surplus arise that you can throw it away on something so ephemeral? (The soil in rainforests is poor: instead, things grow on other things. What’s up with that?)

      • FacelessCraven says:

        “This is the part I don’t get. Never mind the ownership and control, where does the profit come from? Why is it profitable to build batteries for a truck that ships raw materials to build SUVs if nobody is around who wants an SUV?”

Presumably for the same reasons it’s “profitable” for a tree to grow. As in, the AI corporation can expend 10x resources and get 100x resources back. Within the machine economy, “profit” would be in terms of resources acquired offset by resources expended. The machine economy probably isn’t in the market for SUVs, but it would need ore trucks, shipping, transport and so on.

    • benwave says:

      Two things I want to point out about this idea:

1) Well, it doesn’t have to happen all in one go, does it? After the discussions last week on poverty, I wonder whether the so-called unnecessariat are already experiencing the economy in this way. Then there’s a clear path towards successive exclusion of ever more human beings from the economy, and I think for those people it’s probably a bit of a moot point whether the game is run for the benefit of a small group of humans or to satisfy some kind of Darwinian mechanics in algorithms.

2) I think the point may be that in order to remain competitive, companies may be forced to adopt the optimal modes of commerce and production to the point where no actual decisions are made by either an owner or a board of directors. Everything is just being forced by the dynamics of the system (although it should still enrich the owners of the companies, as far as I can tell). To be fair, though, this idea has little to do with ems as far as I can see. It’s more about the increase in speed of investment and calculation/predictive power. (Interestingly, that scenario was also predicted by Marx.)

  11. Edward says:

    But this seems to me the natural end of the economic system. Right now it needs humans only as laborers, investors, and consumers.

The economy is an optimization game – at its core it is all about maximizing the sum of humans’ utilities given limited resources.
I can’t imagine the Ascended Economy existing without someone or something maximizing their utility.
So the Ascended Economy seems to belong to the domain of Friendly AI theory rather than being the natural end of the economic system.

    • MugaSofer says:

      The economy isn’t about maximizing other people’s utility; it’s about maximizing the amount of money they give you. Some people inconveniently divert some of their money towards leisure, but Scott speculates that this will go away soon.

      • But this seems to me the natural end of the economic system. Right now it needs humans only as laborers, investors, and consumers. But robot laborers are potentially more efficient, companies based around algorithmic trading are already pushing out human investors, and most consumers already aren’t individuals – they’re companies and governments and organizations. At each step you can gain efficiency by eliminating humans, until finally humans aren’t involved anywhere.

        Scott’s speculation here is flawed. While you can definitely replace human labour with robots, you can’t just get rid of consumers.

        It’s true that “most consumers already aren’t individuals.” But that’s a consequence of production processes becoming more complex and having different parts of the process executed by different firms.

        Take clothing for example. In a less developed economy you might have a clothing-maker (maybe just a member of your household) making the clothing from raw materials. In a more modern economy, you have one company growing cotton, one spinning it, one making fabric, one sewing t-shirts, and one retailing them to the consumer. That’s five transactions and four of them were one company selling to another.

        So between the pre-modern subsistence economy and the modern economy we’ve gone from 100% of transactions being with individual consumers to only 20% of transactions being with individual consumers. Scott implies that it’s just a small step between this and an economy with no human consumers whatsoever.

        But that’s not the limiting case of this economy! In the limit, we might expect a maximally complex economy with intermediate goods passing through thousands of hands before reaching the consumer. Then transactions with individuals might comprise only 0.001% of all transactions. But this wouldn’t be zero! The final transaction with the consumer is what motivates all the other transactions.
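
        (Editorial aside, making the arithmetic above explicit: a minimal sketch with my own toy chain lengths, showing that lengthening the production chain drives the consumer-facing share of transactions toward zero without ever reaching it.)

```python
# Editorial sketch (invented numbers): in a linear supply chain with n
# business-to-business sales before the final sale, there are n + 1
# transactions in total and exactly one of them involves a consumer.

def consumer_share(n_intermediate_sales):
    """Fraction of all transactions that involve an individual consumer."""
    return 1 / (n_intermediate_sales + 1)

for n in [0, 4, 99_999]:
    print(n, f"{consumer_share(n):.4%}")

# 0      -> 100.0000%  subsistence economy: the maker sells straight to the user
# 4      -> 20.0000%   the five-transaction t-shirt chain above
# 99_999 -> 0.0010%    arbitrarily small, but never zero: the one final
#                      consumer sale still motivates the whole chain
```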

      • Edward says:

Economic activity is about you maximizing your utility and other people maximizing theirs.

You can do it without transacting with anyone – by producing everything by your own means. Or you can ask everyone what they want, also ask what they can do, estimate the resources at hand and set up a system of five-year plans. Or you can use markets.

        Markets are the best way to solve the problem of getting the information from people about their preferences and of searching for more and more efficient ways to produce things.

Like when something can be sold for a lot of money but is very cheap to produce, businesses will quickly take notice and increase production. Such things happen locally everywhere all the time.

In a market economy, maximizing the amount of money people give me means searching for more and more useful things I can offer for sale, and also searching for more optimal ways to produce these things. As a result of this behavior, everyone is better off.

So in general a market economy aligns my personal utility maximization with everyone else’s utility maximization.

        My point is that this feature of markets is the reason for their existence.

  12. Psmith says:

    But at some point, can we make the Lovecraftian argument of “I know my values are provincial and arbitrary, but they’re my provincial arbitrary values and I will make any sacrifice of blood or tears necessary to defend them, even unto the gates of Hell?”

    This seems like the exact opposite of Lovecraftian. The Lovecraftian argument would be something like “blind idiot optimizing processes from beyond the stars will eat everything you know and love, leaving nothing but incomprehensibly strange nightmare ruins behind. No matter what you do it’s utterly inevitable. Worship them and they’ll eat you first, sparing you the full horror of their triumph.” Isn’t this essentially Nick Land’s whole point?

“Provincial and arbitrary values are nevertheless worth defending” seems more like Tolkien or Ed Abbey or something. And not wrong, either.

    • Scott Alexander says:

      I’m taking this from the Radish article on Lovecraft, which is – fair warning, really alt-right – but also changed the way I think about his works.

      This makes more sense if you’ve read Lovecraft’s non-Cthulhu works more – see eg his description of the sunset city in Kadath

      • Deiseach says:

        Mmmm – Lovecraft definitely had a view of human civilisation and ‘provincial and arbitrary values may be meaningless in the cosmic sense but they have their own worth’ but it was very tightly tied to his caste and class (or desired class). WASP culture all the way, and the Cavaliers (if we’re going to hark back to the “Albion’s Seed” classifications we used) – the gentry who are honorable, aristocratic, the product of breeding and culture, not engaging in anything so mundane as trade but overseeing a system of dependents who know their place in the natural order.

        He very definitely believed in the ultimate futility of all human endeavour in the face of the blind, purposeless, uncaring forces of the cosmos but for personal meaning if one is to remain alive, one must choose something, and that something should be arete. Culture is a frail and delusory invention against the reality of the nihilism of the universe, but since a veil is necessary to keep our false sense of meaning intact and since religion and democracy and all the rest of it are equally false, it is the best that we can attain to.

        But it is a very exclusive and exclusionary definition all the same. I’m not intending to rehash the “Lovecraft was a horrible racist bigot and you should never ever read his works!” notion because I don’t think that the worth of the work depends on the character of the author, but there are some arbitrary values that he definitely considers better than others 🙂

        Though he is much more human in the “Dreamlands” works, inspired by Lord Dunsany’s writings.

      • Psmith says:

        Scott, that’s very interesting. I guess all along I’ve been using “Lovecraftian” to mean “spoopy cosmic horror, i.e., the approximate opposite of what Lovecraft actually wanted” and assuming everyone else was as well. (But then, who actually wants spoopy cosmic horror?) (Nick Land. But who else?)

        Deiseach, thanks, that helps reconcile Scott’s description with the Lovecraft I’ve read. I can see how there might be a bit more of that in “Kadath” and such, but I never did make it all the way through.

  13. Benito says:

    Awesome review, one bit I didn’t understand:

    But what about an em working a job with tasks that carry over from day to day – for example, a scientist who is researching the same difficult problem for many years and can’t afford to lose her progress every night? The above trick won’t work, but you can give a “trunk” em vacations every day, then make a thousand copies every morning, work all the copies for twenty-four hours, then delete them. Every copy remembers a life spent eternally on vacation until just this morning, and cheered on by its generally wonderful existence it will give a full day’s work. But from the company’s perspective, 99.9% of the ems in its employment are working at any given moment.

The ‘trunk’ ems all seem to have a long history of vacation before coming to work for one day. That doesn’t seem to solve the problem of needing to remember one’s progress over multiple days – even though each em has a long memory, none of it is of work.

    • Scott Alexander says:

      You’re right. This is my error and not Hanson’s, and I’ve changed it.

  14. satanistgoblin says:

I do not think that this whole “economy gets so efficient that humans go extinct” idea makes sense. If we have automated companies owning things, somewhere at the end of the line humans are supposed to be the owners, having started the parent automated company. It can’t be elephants all the way down unless computers go Skynet.

    • gattsuru says:

The problem arises rather quickly when company machines can make decisions without human intervention. The classic example revolves around things like the Knight Capital bug, which consumed 400m+ USD and stopped only because the underlying trading sector had an automated panic switch. You can reduce these risks by requiring human intervention for any significant choice, but then you’ll be outcompeted by machines that can move several orders of magnitude faster than a living human. Or the possessor might die off without a will, or ems get paid in stock, and so on.

Worst of all, ems may take on enough of a role managing other ems that the human owner is just a figurehead, with less control of the company than a modern-day stockholder who misses the annual meetings.

      In a full ground-up AI universe, there’s a clearer mark where property ownership and neurological drive for possession may not come up (though beware Omohundro drives!). But Hanson doesn’t argue for a novel AI.

With ems, it’s even worse: there are a lot of natural degradation paths. A ‘drug’ that makes an em more focused might not make them completely inhuman, but just induce alexithymia; then the next ‘drug’ induces anhedonia, and the next lobotomizes something else that’s human. We might even end up with layers of slower, less ‘optimized’ management ems who only have part of the process done, but enough to not notice or care about the issues with the lower levels. To avoid this you’d need to not let an em hold any property at all, which rapidly becomes a problem of either the revolutionary Skynet type or the value-degradation type.

((Someone will inevitably point this out as a problem of Capitalism/Corporatism, but worker ownership of the means of production doesn’t really help much. Even if communism/syndicalism/anarchism worked out in the exact forms their advocates want, anyone who holds standards quickly becomes outcompeted by those who voluntarily sacrifice theirs.))

    • Deiseach says:

      If the economy gets hyper-efficient, humans will of necessity be rentiers. You won’t be needed or wanted for work, unless you’re the kind of genius in your field so that it is desirable to make ems of your brain, and so the only way to derive an income will be from owning stock in the companies that are being managed and staffed by ems.

An entire economy of people who derive their income from dividends or from trading stocks is probably a shaky one, and how many humans do you need anyway? One human holding a million shares is just as good to an em company as one hundred thousand humans holding ten shares each, and probably better (if we’re going for the maximally efficient economy). The rich who already owned shares before the em economy got into full swing will be in a position to capitalise on this, the ordinary guy probably less so – see the “Tell Sid” campaign to get the public to buy shares in the privatisation of British Gas, which was generally extremely successful and did create a small-shareholder section of the public.

But later privatisation deals have meant institutional investors, not the public, get the best deals, and the small shareholders who took shares in British Gas did not go on to become investors taking control of their own portfolios to fund their retirements and living expenses or entrepreneurship.

  15. piwtd says:

Though I agree that Land’s vision of the “Ascended Economy” would be hell, let me play the devil’s advocate.

When you say “It would be vast, incredibly efficient, and utterly pointless”, can the same not be said about monkeys eating, fucking and breeding more monkeys? How is soccer-mom’s interest in moving her child around intrinsically more meaningful than some AI’s interest in optimally solving some resource allocation puzzle?

When you write: “I’m not even sure what it would mean for the ems to be conscious in a world like this – they’re not doing anything interesting with the consciousness” or “With a certain amount of horror-suppressing, the Ascended Economy can be written off as morally neutral – either having no conscious thought, or stably wireheaded”, you seem to imagine that in such an economy there wouldn’t be challenges requiring super-human creativity to overcome, and thus no market niche for intelligences with super-human creativity. This seems like a key point of disagreement between you and NL, who is quite explicit about identifying (if only asymptotically) the optimization for intelligence with the optimization for economic productivity.

    • satanistgoblin says:

      How is soccer-mom’s interest in moving her child around intrinsically more meaningful than some AI’s interest in optimally solving some resource allocation puzzle?

      Because she is conscious.

      • piwtd says:

        Why wouldn’t the AI be? I think the general idea is that the more complex puzzle you have to solve, the more conscious you have to be. “Ascended Economy” would provide some really complex puzzles.

        • satanistgoblin says:

The hard problem of consciousness. No one knows what’s up with it, I guess.

          • piwtd says:

            To say that “robot economy is pointless because robots are zombies” is quite a different objection to NL’s idea than what I interpreted SA’s objection to be.

    • Tenobrus says:

      Because we, as humans, care about human values. That’s what “meaningful” or “pointless” is defined as. We don’t consider a superintelligent paperclipper to have moral weight, so I don’t see why we would consider the Ascended Economy to be meaningful either.

      • suntzuanime says:

        And this is what bothers me about the movie Avatar. A lot of humans don’t seem to be on board with this. Land is not as unusual as we might wish.

What about Avatar goes against human values? That the non-humans win? Why do you think the Na’vi are not human? Just look at them; they’re human in every way except evolutionary history.

      • piwtd says:

I’m a human and my human value is to see civilization reach super-human intelligence; I care about that far more passionately than I do about soccer-mom’s logistical problems.

        • Wrong Species says:

So you would be completely fine with an AI wiping out all of humanity because it gets in the way of its expansion?

          • piwtd says:

That’s not what I’m saying, and I sure hope it doesn’t come to that.
What I’m saying is that whether an activity is meaningful or not does not depend wholly on whether the intelligence doing it is Homo sapiens or something else. AI has moral worth independent of what it does for humans.

The realistic opponent to an evil alien AI that strips all ethical restraints from the drive for self-enhancement isn’t “human values”, but rather an AI that’s still alien and incomprehensible to humans, but has ethical restraints not to create hell.

          • Wrong Species says:

            I’m not sure to what extent we could create AI that is alien and incomprehensible yet still friendly. I wouldn’t count on it.

          • Piwtd says:

I don’t think it’s possible to create an AI orders of magnitude smarter than a human that is still familiar and comprehensible. I think the ‘alien and incomprehensible’ part is implied by ‘sufficiently advanced’. If all such AIs are hostile, we’re fucked.

    • Scott Alexander says:

      This is where sometimes you have to go full Lovecraftian and say “all cultures other than that of early 1900s Providence, Rhode Island are entirely awful with no redeeming qualities, fight me.”

      I agree that objectively this isn’t true, but when everything else is an alien monster, at some point you stop caring about objectivity and start defending things you subjectively like.

      • Or full Yudkowskian:

        We can’t relax our grip on the future – let go of the steering wheel – and still end up with anything of value.

        And those who think we can –

        – they’re trying to be cosmopolitan. I understand that. I read those same science fiction books as a kid: The provincial villains who enslave aliens for the crime of not looking just like humans. The provincial villains who enslave helpless AIs in durance vile on the assumption that silicon can’t be sentient. And the cosmopolitan heroes who understand that minds don’t have to be just like us to be embraced as valuable –

        I read those books. I once believed them. But the beauty that jumps out of one box, is not jumping out of all boxes. (This being the moral of the sequence on Lawful Creativity.) If you leave behind all order, what is left is not the perfect answer, what is left is perfect noise. Sometimes you have to abandon an old design rule to build a better mousetrap, but that’s not the same as giving up all design rules and collecting wood shavings into a heap, with every pattern of wood as good as any other. The old rule is always abandoned at the behest of some higher rule, some higher criterion of value that governs.

        If you loose the grip of human morals and metamorals – the result is not mysterious and alien and beautiful by the standards of human value. It is moral noise, a universe tiled with paperclips. To change away from human morals in the direction of improvement rather than entropy, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone.

        Relax the grip of human value upon the universe, and it will end up seriously valueless. Not, strange and alien and wonderful, shocking and terrifying and beautiful beyond all human imagination. Just, tiled with paperclips.

        It’s only some humans, you see, who have this idea of embracing manifold varieties of mind – of wanting the Future to be something greater than the past – of being not bound to our past selves – of trying to change and move forward.

        A paperclip maximizer just chooses whichever action leads to the greatest number of paperclips.

        No free lunch. You want a wonderful and mysterious universe? That’s your value. You work to create that value. Let that value exert its force through you who represents it, let it make decisions in you to shape the future. And maybe you shall indeed obtain a wonderful and mysterious universe.

        No free lunch. Valuable things appear because a goal system that values them takes action to create them. Paperclips don’t materialize from nowhere for a paperclip maximizer. And a wonderfully alien and mysterious Future will not materialize from nowhere for us humans, if our values that prefer it are physically obliterated – or even disturbed in the wrong dimension. Then there is nothing left in the universe that works to make the universe valuable.

        You do have values, even when you’re trying to be “cosmopolitan”, trying to display a properly virtuous appreciation of alien minds. Your values are then faded further into the invisible background – they are less obviously human. Your brain probably won’t even generate an alternative so awful that it would wake you up, make you say “No! Something went wrong!” even at your most cosmopolitan. E.g. “a nonsentient optimizer absorbs all matter in its future light cone and tiles the universe with paperclips”. You’ll just imagine strange alien worlds to appreciate.

        Trying to be “cosmopolitan” – to be a citizen of the cosmos – just strips off a surface veneer of goals that seem obviously “human”.

        But if you wouldn’t like the Future tiled over with paperclips, and you would prefer a civilization of…

        …sentient beings…

        …with enjoyable experiences…

        …that aren’t the same experience over and over again…

        …and are bound to something besides just being a sequence of internal pleasurable feelings…

        …learning, discovering, freely choosing…

        …well, I’ve just been through the posts on Fun Theory that went into some of the hidden details on those short English words.

        Values that you might praise as cosmopolitan or universal or fundamental or obvious common sense, are represented in your brain just as much as those values that you might dismiss as merely human. Those values come of the long history of humanity, and the morally miraculous stupidity of evolution that created us. (And once I finally came to that realization, I felt less ashamed of values that seemed ‘provincial’ – but that’s another matter.)

        (http://lesswrong.com/lw/y3/value_is_fragile/)

        • piwtd says:

          EY says that if you loose the grip of human values, the future will not conform to human values just by coincidence because this isn’t a sci-fi novel. NL says that the values the future will ‘spontaneously’ conform to are those of ‘capitalist-Darwinian’ imperatives. The Darwinian imperatives manifestly do value intelligence, consciousness and creativity, that’s why there are any intelligent, conscious and creative humans in the first place. The ascendant economy would be abhorrent by any decent human standard, but there would be a lot of intelligence, consciousness and creativity employed there.

          • I would say that the Darwinian imperatives contingently value intelligence, consciousness, and creativity. The ascendant economy might also value those things, but that’s far from obvious to me. Well, that’s not true – I’ll grant that some kind of intelligence and creativity would probably be valued in the ascendant economy. But consciousness? I’m not sure. It seems a lot more contingent than the other two.

            In any case, this

            The ascendant economy would be abhorrent by any decent human standard

            is the relevant part to me. I don’t care if my human values are arbitrary or provincial – they’re my values, damnit, and I don’t want to see them stamped out. That’s why I posted the quote above.

        • Deiseach says:

          And that argument is pretty much why I’m a conservative 🙂

          Being so open-minded your brains fall out is not a virtue.

          • Samedi says:

            For some reason your comment reminds me of my pet theory that a well functioning social group requires both liberals and conservatives. This is why I dislike the partisan, us-versus-them attitude so often seen in politics discussions. I see both outlooks as complementary, rather like positive and negative feedback loops in a dynamic system.

      • piwtd says:

I think one should still differentiate between different alien monsters. There can be a world that’s alien and incomprehensible to us and hell for the intelligences who live in it, and there can be a world that’s alien and incomprehensible to us but some kind of DMT-animated nirvana for the intelligences who live in it. I think NL says that there is no choice and the first scenario is what’s going to happen. I believe that if there is anything meaningful, it’s working to make the second scenario happen. I don’t think that “soccer-moms driving their kids around until the sun runs out of hydrogen” is a coherent end-value.

If African children beat cute puppies and industrially farmed chickens beat African children, then AIs should beat industrially farmed chickens in the utilitarian calculus.

  16. Anonymous says:

    I once read a science-fiction story that depicted a pretty average sci-fi future – mighty starships, weird aliens, confederations of planets, post-scarcity economy – with the sole unusual feature that rape was considered totally legal, and opposition to such as bigoted and ignorant as opposition to homosexuality is today. Everybody got really angry at the author and said it was offensive for him to even speculate about that.

    What story is this?

      • Loquat says:

        I just read that recently, and I didn’t see anything about rape being legal. Was it in the comments?

        • Edward says:

          It’s mentioned in the “Interlude with the Confessor” chapter

        • jaimeastorga2000 says:

          From Three Worlds Collide:

          The Confessor held up a hand. “I mean it, my lord Akon. It is not polite idealism. We ancients can’t steer. We remember too much disaster. We’re too cautious to dare the bold path forward. Do you know there was a time when nonconsensual sex was illegal?”

          Akon wasn’t sure whether to smile or grimace. “The Prohibition, right? During the first century pre-Net? I expect everyone was glad to have that law taken off the books. I can’t imagine how boring your sex lives must have been up until then – flirting with a woman, teasing her, leading her on, knowing the whole time that you were perfectly safe because she couldn’t take matters into her own hands if you went a little too far -”

          “You need a history refresher, my Lord Administrator. At some suitably abstract level. What I’m trying to tell you – and this is not public knowledge – is that we nearly tried to overthrow your government.”

          “What?” said Akon. “The Confessors?

          “No, us. The ones who remembered the ancient world. Back then we still had our hands on a large share of the capital and tremendous influence in the grant committees. When our children legalized rape, we thought that the Future had gone wrong.”

          Akon’s mouth hung open. “You were that prude?”

          • Anonymous says:

            I can’t imagine how boring your sex lives must have been up until then – flirting with a woman, teasing her, leading her on, knowing the whole time that you were perfectly safe because she couldn’t take matters into her own hands if you went a little too far –

            The funny / pathetic thing is that Eliezer was actually trying to virtue signal his superior progressiveness – just that his ultra-nerdiness and lack of any kind of social intelligence caused him to do so in a fully tone deaf manner.

            Eliezer’s intended message:

            “Look at how holy I am for pretending that men and women are actually the opposite of how they actually are – I have achieved perfect holiness through reverse stereotyping”

            Kind of like this post’s examples of a woman who’s a crack Ruby programmer and a woman working on cutting edge science.

Perfect holiness through stupid anti-stereotyping signalling: not only do you not notice patterns in the real world, you go one step further and notice the opposite of reality when required by the demands of progressivism.

          • suntzuanime says:

            Yudkowsky, I think, simply didn’t realize what the implications were for the progressive project. He was just being an honest writer and exploring the themes he wanted to explore in the work. The rape bit makes the story a lot stronger, I feel. He was a bit less PC back then, too; it was before the internet had polarized so much.

          • Loquat says:

            Having now gone back and read that section – it really seems to require that those future humans have a distinctly different set of sexual behaviors. Lord Akon doesn’t even seem able to imagine how allowing people to impose nonconsensual sex on whoever they want could possibly have bad results. Since it was elsewhere, IIRC, implied that there had been some biological tinkering with the human race in the course of creating this future, I think we have to assume that someone did some serious editing to the intersections of sex, violence, and romantic possessiveness.

            Which is interesting, but rather undermines the characters’ objections to being “improved” by the aliens.

          • ii says:

            largely the point I think, the ethical disagreements we face seem more important in the moment than they do in the long run where the inevitable march of history renders it into so much monkey business

            it’s easy for me to imagine what a future with acceptable rape looks like (free love!), just try to imagine a culture that doesn’t tie up its children at night to prevent them from imperiling their immortal soul via masturbation and you’re halfway there

          • anon and proud says:

            Personally I read it as a (cheap) plot device to show how much the humans of the future actually differ from us (their distant ancestors) morally, and not just “better” according to any current ideology’s definition of “better” – which is historically entirely common for any given value of “the future” – with legalized rape being something almost no reader of TWC would consider “better” than the status quo in any way.

            Coincidentally, I just read TWC yesterday for the first time and at that point immediately thought: “hey, this must have produced a huge shitstorm from the THIS IS NOT FUNNY crowd”. EY is so EY.

          • Deiseach says:

            pretending that men and women are actually the opposite of how they actually are

            So men are always the wooers and women the wooed, and women never seek to initiate sexual encounters? Nor are they ever violent when refused?

            Two examples from mythology that leap to mind, quotes courtesy of Wikipedia:

            (1) Ovid recounts that Orpheus “had abstained from the love of women, either because things ended badly for him, or because he had sworn to do so. Yet, many felt a desire to be joined with the poet, and many grieved at rejection. Indeed, he was the first of the Thracian people to transfer his love to young boys, and enjoy their brief springtime, and early flowering, this side of manhood.”

            Feeling spurned by Orpheus for taking only male lovers, the Ciconian women, followers of Dionysus, first threw sticks and stones at him as he played, but his music was so beautiful even the rocks and branches refused to hit him. Enraged, the women tore him to pieces during the frenzy of their Bacchic orgies. In Albrecht Dürer’s drawing of Orpheus’ death, based on an original, now lost, by Andrea Mantegna, a ribbon high in the tree above him is lettered Orfeus der erst puseran (“Orpheus, the first pederast”).

            (2) The isle of Lemnos is situated off the Western coast of Asia Minor (modern day Turkey). The island was inhabited by a race of women who had killed their husbands. The women had neglected their worship of Aphrodite, and as a punishment the goddess made the women so foul in stench that their husbands could not bear to be near them. The men then took concubines from the Thracian mainland opposite, and the spurned women, angry at Aphrodite, killed all the male inhabitants while they slept. The king, Thoas, was saved by Hypsipyle, his daughter, who put him out to sea sealed in a chest from which he was later rescued. The women of Lemnos lived for a while without men, with Hypsipyle as their queen.

            During the visit of the Argonauts the women mingled with the men creating a new “race” called Minyae. Jason fathered twins with the queen.

            “But that’s only mythology”, you may protest. And Yudkowsky’s story is only SF/Fantasy, I retort. Or is it that you think women can’t be physically violent and even rapists, or are incapable of overcoming men, or that men can’t be raped?

          • Jiro says:

            So men are always the wooers and women the wooed, and women never seek to initiate sexual encounters?

            There’s a big gap between “this never happens” and “that isn’t the typical case”.

            Also, Greek mythology is not normally taken to be a serious attempt at sci-fi worldbuilding. If people act in atypical ways in it, so be it.

          • Broggly says:

            My reaction was pretty much the same as pencilears’

            does the author mean that in this supposed shining utopia of the future that a person can attack another person if the context is sexual? does it mean that all ideas of pair bonding, of marriage and commitment between equals has been abandoned in favor of one night stands? were those stands initiated through an attack, through an impingement on another person’s right to autonomy? does it mean that rape in the context of arranged marriages between unwilling strangers is the norm?

            this is not explained. rape is legal, that’s all there is to it. rape of the underage, rape of the indigent, rape of minors, legal of course, the right of a free society. as far as this was explained.

            it seems odd to me that a people so viscerally opposed to cultural infanticide would condone sexual attack.

            because that is what rape is, it is an attack. is harming people in other ways legal as well? can I go out for a night on the town of stabbing people? no?

            if by rape the author did not mean rape, as many commenters suggest as a defense, then he is either insufficiently articulate or misguided, if he did not mean Rape-rape, but merely snuggle-kisses-rape-hugs then that intent should be better reflected in the text itself. as it is not I am forced to conclude that by rape, the author meant the forcible unwanted sexual victimization and attack on a person or persons by another person or persons.

            and NancyLebovitz’s

            I still wonder how that would look in practice. Do people have a right to enforce “I’m busy”? “I prefer spending time with someone else”? “I’ve been raped four times today and I’m really busy”? What happens to celebrities?

            Given the stuff Yudkowsky wrote about how the characters in his story don’t have a concept of rape the same way we do, it really feels like he was just trying to shock the reader and either didn’t think it through or was using intentionally misleading language.

          • ii says:

            it’s explained in the story that minors are raised in “Cradle” societies that function according to different rules than the rest; anyone under 100 years old is considered a minor

            rape meaning different things in different cultures is hardly a reach, however; we can’t agree on one definition, so it’s hardly misleading to say that the word is being used differently in the story than we would imagine it

            the same story has the crew consult the ship’s reddits and refer to famous historical plays like Hamlet and Fate/Stay Night

          • Jiro says:

            we can’t agree on one definition so it’s hardly being misleading to say that the word is being used differently in the story than we would imagine it

            We can agree on a range of definitions even though we can’t agree on an exact definition. If something falls outside that range, we can still legitimately say it is misleading.

            Don’t confuse “we don’t have an exact specification” with “we have no specification at all”.

          • ii says:

            the range of definitions includes things like “reverse rape” pornography, which is pretty much what the story describes

          • Deliberately Steve says:

            “Look at how holy I am for pretending that men and women are actually the opposite of how they actually are – I have achieved perfect holiness through reverse stereotyping”

            I always took a more charitable interpretation: It’s deliberately set in a star trek+ style post-badness utopia; of course human nature doesn’t exist in this universe. EY is just adhering to genre tropes.

            It works, too. Imagine a world where all disease has been eliminated, everyone has perfect control of their fertility and an extensive, futuristically effective eugenics education program has produced a race of people mentally incapable of deliberate harm.

            Extrapolate logically this society’s views on sex (yeah I know, but remember, human nature still doesn’t exist) and it becomes sort of like someone mussing your hair. Endearing if done by someone you like, really aggravating if everyone you meet tries to do it, but it’s not really worth outlawing, is it?

            Of course this grand structure of assumption doesn’t quite hold up to the rest of the facts presented in the story, but that too is pretty genre-appropriate.

        • it was a throwaway line in the last chapter. Hardly major. Still produced a shit fit. Various people _still_ say “Eliezer Yudkowsky? That guy who thinks we should legalize rape?”

          • Anonymous says:

            Zero sympathy for people who try to prog signal then get attacked by other people trying to prog signal. That’s exactly what you signed up for when you decided that being virtuous was all about who could make the most opposite reality statement possible.

          • Net says:

            @Anonymous, that seems kinda harsh. It’s a bit like saying you have zero sympathy for anyone who gets shot if they weren’t in favor of draconian gun control laws. Attacking people over bullshit is worse than telling pretty white lies (especially if it’s the attacking that makes the pretty white lies necessary).

          • Deiseach says:

            Oh for pete’s sake, are we seriously going to argue over “the views of the [author/editor/publication] are not necessarily those as expressed in this [story/article/letter]”?

            I’d agree that Yudkowsky was “try(ing) to prog signal” if there was a smug little addendum about how the rape laws thing was solved by society turning to polyamory – now no need for rape once people were free to have as many lovers as they liked and if someone rejected you, you could easily find other people to have sex or intimate emotional relationships with!

            Because in that case, he’s definitely been on record as advancing the view that polyamory is the superior way to go and practices what he preaches, so I’d have little sympathy for him getting a sock in the eye (metaphorically) over that one.

            But to do him this much credit, I don’t think he has ever said anything about decriminalising rape, nor do I think he agrees with that, even if considered in an ultra-libertarian ‘no laws at all, let private justice sort it out’ way.

            Otherwise, you’re saying every crime writer approves of murder, torture, rape and serial killers. Every horror writer approves of black magic and demons. Every romance author thinks men should be six foot three devilishly handsome annoying know-it-alls who treat ’em mean and keep ’em keen, preferably a Scotsman or pirate 🙂

          • Jiro says:

            There are ways to write horror and crime stories such that you seem to approve of the actions of the villain and ways to write them so you don’t.

          • The Nybbler says:

            There are ways to write horror and crime stories so it appears you approve of the actions of the villain, even if you don’t. In fiction, even the apparent authorial viewpoint can be fictional.

          • FacelessCraven says:

            @Deiseach – “Every romance author thinks men should be six foot three devilishly handsome annoying know-it-alls who treat ’em mean and keep ’em keen, preferably a Scotsman or pirate ”

            1. the scotsman or pirate part is just obviously true.
            2. It’s called “a Kilted Christmas Wish” and the guy is wearing jeans? SHAME.

          • Soumynona says:

            @Anonymous

            He wasn’t trying to “prog signal”. He was trying to show that the future is going to be weird and repulsive to us even when people living in it consider it normal. It was supposed to be off-putting and didn’t carry any hidden message about polyamory or other bullshit. Hell, it might have been written long before EY started calling polyamory the “more evolved” option.

  17. E. Harding says:

    I also believe the biological feasibility of uploading human brains is dubious at best. Indeed, shouldn’t developing an AI be a prerequisite for uploading brains?

  18. jaimeastorga2000 says:

    In section IV, “workers who help build the robots” should be “workers who help build the batteries”.

    I didn’t learn much from this review, but that was mostly because I was already familiar with the relevant concepts from reading Hanson’s papers, in particular his seminal “If Uploads Come First: The Crack of a Future Dawn”. When I first encountered these ideas they were utterly mind-blowing, and it is only the passage of time that has allowed me to acclimate to their Shock Level. I even wrote a little story inspired by Hansonian ideas about uploads, “Procrastination”. However, I was not nearly as successful in my literary endeavors as DataPacRat, whose “FAQ on LoadBear’s Instrument of Precommitment” earned him an invitation to read Hanson’s draft of The Age of Em.

  19. this is all too weird. Kinda skeptical about the concept of a 100% mechanized economy.

    I don’t see how humans can be eliminated entirely. Automation in the form of automobiles, assembly lines, and computers has failed to shrink the labor force that much. Only relatively recently has the labor force participation rate begun to shrink, and it’s impossible to know whether this is attributable to technology rather than other factors.

    The Luddite Fallacy means that there will always be an abundance of jobs for all skill levels despite advances in technology. Carriage mechanics become auto mechanics, who become rocket mechanics, etc. Whether or not it remains a fallacy is up for debate, but so far it seems so.

    Imagine a world in which you can design and build a car by flipping the “on” switch.

    I think if they can build the car up to that point, they will figure out how to automate the pushing of the button. But then someone has to make the machine that pushes the button that pushes the button… ‘buttons all the way down’, ultimately leading to a ‘button for everything’ that when pushed will provide everything everyone will ever need.


    Science fiction books have to tell interesting stories, and interesting stories are about humans or human-like entities. We can enjoy stories about aliens or robots as long as those aliens and robots are still approximately human-sized, human-shaped, human-intelligence, and doing human-type things. A Star Wars in which all of the X-Wings were combat drones wouldn’t have done anything for us. So when I accuse something of being science-fiction-ish, I mean bending over backwards – and ignoring the evidence – in order to give basically human-shaped beings a central role.

    There’s a lot of truth to that. A lot of sci-fi plots have to bend plausibility or realism to incorporate humans (or human attributes) into the plot.

    lol @ the microscopic-sized city run by microscopic beings having sex

    • Loquat says:

      I think a more significant part of your “design and build a car by pushing a button” example is – who decides what the car design is going to be? If humans or human-like ems are still the consumers, they’re going to have different preferences about appearance, features, etc. You could probably build a computer program to randomly generate interesting car designs, but human designers would still do a better job of making things humans like.

      • ii says:

        and then the computer gives you statistics about how its design is going to sell to more people than your preferred designs would, based on various polling data, market trends, psychological modeling…

  20. Doug says:

    A nitpick–
    We really do have strawberries as large as apples. The strawberries I buy from Costco are occasionally as big as apples, especially compared with apples as they were before the breeding for size that occurred in the last century.

    • Steve Sailer says:

      Right, I counted “strawberries as large as apples” as correct for Watkins.

      Unfortunately, they don’t taste like much, but they sure are spectacular decorations for buffets.

      Scientific agriculture has proceeded apace, we just don’t pay much attention to it anymore.

      • Thursday says:

        I really do want to know why the strawberries and tomatoes in stores are so bloody tasteless these days.

        Incidentally, a lot of people have been extolling the virtues of homemade salsas, but for that you require tomatoes with some actual taste, which you can’t find in stores. I actually prefer the manufactured slop in a jar. However, a trip to San Antonio opened my eyes to what fresh salsas can be like. The restaurateurs there seemed to have found some decent tomato suppliers, because some of those fresh house-made salsas were spectacular.

        • Steve Sailer says:

          It’s kind of a whole ecosystem of restaurants, wholesalers, and farmers that’s required for really good food in a region.

        • Simon says:

          If you’re really interested, search for Tomatoland by Barry Estabrook at the bookseller of your choice.

        • Andrew G. says:

          Commercial tomato producers use tomato cultivars which have been bred for consistent size, appearance, transportability, disease resistance and similar production-oriented criteria; flavour has been bred out, and attempts to re-introduce it (google “Harry Klee”) have not been met with enthusiasm.

        • Anonymous says:

          Big Tomato doesn’t care about flavor. Tomato farmers don’t care. Tomato packers don’t care. And supermarkets don’t care.

          This is basically the reason; I would add that consumers mostly don’t care either. (No time, no money, probably won’t even like it compared to food optimized for addictiveness.)

          I suggest growing your own (indoors if you have no land) or searching for local small-scale or traditional farmers, community gardens, and so on. Real tomatoes are one of the good things in life.

        • Anonymous says:

          Buy heirloom tomatoes. The uglier the better.

        • keranih says:

          I really do want to know why the strawberries and tomatoes in stores are so bloody tasteless these days.

          When was the last time you went to your grocer and said, “I want more of those tomatoes/strawberries that you had last week that tasted so good”?

          Consumers don’t make their fresh veggie/fruit purchases based on taste. They make them based on appearance. And they will consistently refuse to purchase more flavorful items if a prettier version is available.

      • Deiseach says:

        If you’ve seen wild strawberries, they’re tiny by comparison with the ones for commercial sale. “As big as apples” was only an extrapolation of that trend.

        (And now I go into a reverie about my childhood when my father showed me wild strawberries growing in the ditches when we went to get buckets of water).

        I agree that breeding for size does away with taste, and you are also then trying to select for control over ripening (too soon and they’ll go off before they’re in the shops, too late and they’ll be unripe when the consumer brings them home), transport (how easily do they bruise? how many can you stack in a box?) and the like.

        I really do want to know why the strawberries and tomatoes in stores are so bloody tasteless these days.

        Standardisation, as everyone says. The shops want a consistent product that is uniform in size, colour and shape; that can be controlled when it ripens; that looks big and attractive; that won’t bruise easily and keeps a long time. Taste comes last. Uniform cultivars reduce variety and often individual flavours are lost (e.g. all apples should be sweet rather than tart and so on). Consumer demand drives it too: if everyone falls for the ad campaign that Golden Delicious are the best apples, those are the apples they want to buy and older varieties and local varieties fall out of favour and are not worth selling and so not worth producing by the growers.

      • Michael Bacon says:

        Some taste great. The Japanese high-end consumer is the market for much of the top-grade California produce. Strawberries the size of apples that taste wonderful are common there.

  21. stargirlprincesss says:

    Admittedly a lot of people think the society in Brave New World was pretty good. Overall I think the BNW society was maybe better than our own.

    • The Nybbler says:

      None of the people who think that imagine themselves as Deltas or Epsilons, I bet.

      • eh says:

        “Ignorance is bliss” has been a proverb for several centuries. Anecdotally, I’ve heard multiple people say that people with Down’s Syndrome always seem happier. This guy found that an experimental autism treatment made him divorce his wife and caused him to notice that, far from living in beautiful and vibrant emotional landscapes, everyone else was just incredibly mean to each other and to him all the time.

        Deltas and Epsilons aren’t depicted as struggling with their identities. They get their soma, their feelies, and they’re happy. Only the alphas have a hard time coming to terms with the world, and when they give in to existential crises, they’re either sent off to live in little artistic communes, or they rise to positions of power in which they can read as much Shakespeare and exercise as much control as they want.

        I think I would be happier in BNW. I would not want to live there, but once there, I think it would be a happier place than this world; and the lower my caste, the happier I’d be. That’s the whole point Huxley was trying to make, and it’s almost the opposite of “alpha good, epsilon bad”: some tiny handful of alphas are sad, but the rest of the society might as well be made of pure hedonium.

        • The Nybbler says:

          Oh, no doubt the Deltas and Epsilons in BNW are happy or at least content; they’re made that way. I just don’t think that most people in our world who like the BNW society would really like it if they considered the possibility that in such a society they’d be a Delta or Epsilon.

    • tanadrin says:

      Brave New World always rang a bit hollow to me. As dystopias go, it’s hardly the worst, and certainly most people in its world seem relatively happy (and the unhappy ones no more so than anybody else disaffected with the shape of society). But the big question Huxley seems to be pushing–which do you choose, a life of pleasure or a life of meaning?–seems false to me. There’s nothing in our own world that makes me think a life with complex spiritual and artistic experiences is mutually exclusive with a hedonistic utopia, and he really doesn’t do a very good job of making the case that that’s so in the world of his novel, either.

      • gattsuru says:

        That’s part of what Brave New World is about, but the dichotomy is a little less pleasant than even that. The Fordian society isn’t really about pleasure, though it cloaks itself in such — it allows soma, but the Malpais reservation has references to peyote. The point is that everyone’s purpose is, explicitly and intentionally, to serve the Society. To take the cliche, to exist solely as cogs in a machine from before their birth, with every aspect determined by and to the limits of technology. Fordians haven’t yet hit the Hansonian sacrifice of inner self to The Ascended Economy, but they’re a lot closer than it looks at first glance.

        ((Huxley later said that he’d include a third alternative of sanity if he were to rewrite the work))

        • tanadrin says:

          Which also offends the modern sensibility, but is hardly a strain of thought alien to our society, and certainly one which has been more prevalent in the past. I don’t know, it’s just not the most unsettling dystopia I’ve ever run across, though it was an enjoyable read.

        • DavidS says:

          He wrote Island, which is sort of the utopian equivalent of Brave New World.

      • Deiseach says:

        The society of Brave New World was not about pleasure, it was about consumption to drive the economy. Soma and the feelies and the like were not about hedonism, they were about harnessing human desires for pleasure and commercialising them.

        An early paragraph about conditioning the children created as a result of the vat-grown method shows that: Delta children are conditioned to be afraid of flowers and books. Why flowers? one student, being shown around, asks. The reply – because a taste for nature is no good. Nature is free. So they get conditioned to like going to the country (because that creates a demand for transport) but to hate nature. Instead, they are taught to like country sports, because that further creates a demand for golf clubs and hiking gear and all the rest of it:

        One of the students held up his hand; and though he could see quite well why you couldn’t have lower-caste people wasting the Community’s time over books, and that there was always the risk of their reading something which might undesirably decondition one of their reflexes, yet … well, he couldn’t understand about the flowers. Why go to the trouble of making it psychologically impossible for Deltas to like flowers?

        Patiently the D.H.C. explained. If the children were made to scream at the sight of a rose, that was on grounds of high economic policy. Not so very long ago (a century or thereabouts), Gammas, Deltas, even Epsilons, had been conditioned to like flowers – flowers in particular and wild nature in general. The idea was to make them want to be going out into the country at every available opportunity, and so compel them to consume transport.

        “And didn’t they consume transport?” asked the student.

        “Quite a lot,” the D.H.C. replied. “But nothing else.”

        Primroses and landscapes, he pointed out, have one grave defect: they are gratuitous. A love of nature keeps no factories busy. It was decided to abolish the love of nature, at any rate among the lower classes; to abolish the love of nature, but not the tendency to consume transport. For of course it was essential that they should keep on going to the country, even though they hated it. The problem was to find an economically sounder reason for consuming transport than a mere affection for primroses and landscapes. It was duly found.

        “We condition the masses to hate the country,” concluded the Director. “But simultaneously we condition them to love all country sports. At the same time, we see to it that all country sports shall entail the use of elaborate apparatus. So that they consume manufactured articles as well as transport. Hence those electric shocks.”

        • DavidS says:

          And indeed they create deliberately high-consumption sports that change all the time. Built-in obsolescence and all that.

          I’m not entirely convinced that the consumption-led focus works when you have that sort of World Government. Couldn’t you just set people to doing pointless tasks? But I suppose that solution relies on lying about less and to fewer people, which makes things smoother.

          Incidentally, I think Brave New World is very interesting precisely because it isn’t obviously dystopian like, say, 1984. It’s arguably a better world on average than we have now from a lot of viewpoints. At heart I sympathise more with Mustapha Mond than with the Savage, although I find the latter quite powerful. Brave New World is challenging in a way that most dystopias aren’t because it doesn’t just say ‘isn’t this awful’. There’s a real question as to what’s awful about it and why. I’m not sure our society has defenses against turning into something like BNW, nor am I entirely sure it would be a bad thing (depending on the alternatives).

          I think the key bit for this is chapter 17 (link below to avoid me quoting half of it here). Incidentally, I think this might be the best literary response to the Problem of Evil as an argument against God, in the same way I think The Brothers Karamazov puts the Problem of Evil most powerfully in the chapter ‘Rebellion’.

          ‘”But I don’t want comfort”, said the Savage. “I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.”

          “In fact,” said Mustapha Mond, “you’re claiming the right to be unhappy.”

          “All right then,” said the Savage defiantly, “I’m claiming the right to be unhappy.”

          “Not to mention the right to grow old and ugly and impotent; the right to have syphilis and cancer; the right to have too little to eat; the right to be lousy; the right to live in constant apprehension of what may happen to-morrow; the right to catch typhoid; the right to be tortured by unspeakable pains of every kind.” There was a long silence.

          “I claim them all,” said the Savage at last.

          Mustapha Mond shrugged his shoulders. “You’re welcome,” he said.’

          http://www.huxley.net/bnw/seventeen.html

  22. Earthly Knight says:

    In Star Wars, the Rebellion had all of these beautiful hyperspace-capable starfighters that could shoot laser beams and explore galaxies – and they still had human pilots. 1977 thought the pangalactic future would still be using people to pilot its military aircraft; in reality, even 2016 is moving away from this.

    No, this is all wrong! Star Wars took place a long time ago in a galaxy far, far away!

    • MugaSofer says:

      That’s a reference to “long, long ago in a kingdom far, far away”; it’s not implying Star Wars takes place in our past.

    • James James says:

      Also I don’t think George Lucas believed the pangalactic future would still be using people to pilot its military aircraft. He was just setting a fantasy/western in space.

  23. bean says:

    We can enjoy stories about aliens or robots as long as those aliens and robots are still approximately human-sized, human-shaped, human-intelligence, and doing human-type things. A Star Wars in which all of the X-Wings were combat drones wouldn’t have done anything for us.

    You’ve managed to independently re-derive Burnside’s Zeroth Law of Space Combat, although in this case, you’re applying it to the whole world instead of just the shooting spacecraft.

  24. ii says:

    problem: in these kinds of scenarios, what resources are people actually competing for?
    the implicit assumption is that there is infinite desire that your own private simulated reality can’t fulfill, thus the need to build an infinite computronium device for the abstract satisfaction of having every pixel of all of the sextillion sex slaves perfectly rendered

    this seems like a thing that would be optimized away rather early in the process of whatever mindfuckery actually gets invented, if any

    • jaimeastorga2000 says:
      • ii says:

        let them play WoW; the people who own the servers already have all the resources it’s possible to have or indeed *conceive*

        • MugaSofer says:

          Until they get acquired by the people who acquired more computers and used them to acquire more computers.

          • ii says:

            that’s what happened to the first guys; the guys I was talking about sit at the end of that process on a giant pile of stocks, asteroid harvesters and server farms

            there is nothing left for them to acquire and nobody there for them to take it from

    • Mr. Eldritch says:

      As far as I understand it, the resources you’re competing for are the power and computing resources needed for you to even exist. If your Amazon Cloud instance or whatever gets outbid, your private simulated reality ceases to exist and you with it until such time as somebody bothers to boot up a new copy of you.

      • ii says:

        which seems like the sort of thing that would really inspire the creation of a state monopoly rather than anti-trust laws

        plus there is the fact that running a simulation of paradise isn’t actually going to get more expensive as resources increase, nor does it seem more expensive than subsistence, provided you don’t just bite the bullet and remove people’s ability to want things

    • Dennis Ochei says:

      my sides

      I’m imagining a retired em going “oh shit, that pixel’s off on my sex slave, better get another job”

  25. Steve Sailer says:

    Does Robin still want to have his head frozen or did his wife talk him out of that?

    http://www.nytimes.com/2010/07/11/magazine/11cryonics-t.html

    • Scott Alexander says:

      The book says he continues to be signed up for cryonics.

      It’s not as weird as it sounds like you think – one of my girlfriends is signed up too.

      • Anonymous says:

        I’ll take “Statements that Suggest the Opposite of what their Author Thinks they Do” for $600, Alex.

      • Robbl says:

        Now we are getting to the meat of things. How many girlfriends do you have? Are they aware of each other?

        • Deiseach says:

          None of our business, Robbl.

          • Jiro says:

            Go easy on him. SSC is in a bubble. Outside this bubble, making the statement that Scott made would have implied seriously unethical behavior. And it would not be easy for Robbl to find out that it doesn’t imply that here.

          • FacelessCraven says:

            to be clear, robbl, the answers appear to be several and yes.

          • eh says:

            Echoing Jiro’s statement, but more strongly: posting here can be like communicating with an alien world, with all the blue/orange morality that entails. Here’s some context that might make understanding a little easier.

  26. WT says:

    If we’re talking about simulated brains, why does the post keep talking about the simulations’ need to “save for retirement,” or their “subsistence wages,” or how they will feel about having to “travel” to London or wherever? None of those things make sense in the Matrix that Hanson is envisioning. Simulations don’t need a *real-world* vacation or travel any more than a television character does. Or closer to the point, it’s like asking if my copy of Microsoft Word will need to retire and have a pension at some point. Um, no. That’s not how software works.

    • Scott Alexander says:

      “Vacation” may be more stay-at-home leisure time than travel.

      Some ems will probably have robot bodies, and there are reasons why you might not want to send yourself through the Internet – for example, if the Internet’s not secure enough, people could hack your mind.

      Ems might want to retire because they’re essentially humans, and humans want to retire. Microsoft Word isn’t programmed with the desire to retire, but uploaded humans would still have it. Whether anyone cares or not is another story.

      • tanadrin says:

        A world where you have to worry about ems retiring seems to me to be mutually exclusive with a world where you can recopy them endlessly from the em who just got back from vacation. Why bother to have ems retire when you can just delete them at the end of their working life? Surely the surgeon-em you delete every day at his or her subjective five o’clock is never going to have to retire.

      • WT says:

        Even supposing that “ems” can be conscious and “want to retire,” there is no need for them to “save” for retirement. Whoever simulated them could simultaneously simulate as much “savings” as any em could ever need. Moreover, computer simulations do not need to have “savings” in the first place. What would “savings” accomplish for them — buying simulated food? OK, so simulate some food without simulating the need to simulate savings.

        The point is that we’re imagining this (wildly absurd) Matrix-like existence, but also treating the Matrix-like simulations as if they have real-world concerns. Makes absolutely no sense.

        • Scott Alexander says:

          Ems saving for retirement = them having enough real-world money to run the real-world computers simulating them.

          While they’re working, somebody else will pay for the electricity and processing power they need. If they want to retire, they need to subsidize it themselves.

          You can’t simulate real-world money and give it to real-world people in exchange for real-world goods and services.
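
          To make the bookkeeping concrete, here is a toy calculation (a minimal sketch in TypeScript; every number is invented for the example and none of them come from the book or the review). The point is just that a slowed-down retiree’s real-world hosting bill can get small enough to fund out of investment income:

          ```typescript
          // Toy em-retirement arithmetic; all numbers are made up for illustration.
          const costPerSubjectiveYear = 500; // $ to host one subjective em-year at base speed
          const retirementSpeed = 0.01;      // retiree runs at 1% of base speed
          const realReturn = 0.02;           // assumed real return on savings, per real year

          // At 1% speed the retiree experiences 0.01 subjective years per real year,
          // so the real-world hosting bill per real year is:
          const yearlyBill = costPerSubjectiveYear * retirementSpeed; // $5

          // Savings needed to pay that bill indefinitely from investment income alone:
          const savingsNeeded = yearlyBill / realReturn; // $250

          console.log(`About $${savingsNeeded} funds retirement at 1% speed indefinitely.`);
          ```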

        • WT says:

          Again, though, why would simulated brains need to “retire”? Are we (in the real world) going to simulate aging, Alzheimer’s, and dementia too?

          This is all kind of silly, a bit like debating what kind of food would be best to eat while traveling on a speed-of-light spacecraft. We’re not anywhere near even knowing what it would mean to simulate a brain. And the people talking the most about ems don’t seem to know much about neuroscience or its failure to even approach a minimal understanding of the brain’s structure and functioning: http://biorxiv.org/content/early/2016/05/26/055624

      • Deiseach says:

        Isn’t it easier to just delete the “retirement age” copy of the em and boot up a new copy that is pre-retirement?

        If we’re treating ems as people, with the rights of legal persons, which is what Hanson assumes. But I think that would only apply to the “complete full unique to be booted up only when my physical body dies” em, not the “958th copy of Joe Officeworker doing clients’ tax returns for an accountancy firm”. Who cares if your tax accountant em has hobbies like gardening or whitewater rafting or fishing on charter boats? Who wants an em worker that is daydreaming about their new begonias? Edit all that out of the copy when it’s being made and just leave the accountancy and tax law knowledge (updated as regulations require) part of the em functional!

    • Antígrafo says:

      You can say exactly the same about sex. You can rewire orgasm to “finish report” or any other completed task, and sex becomes not only meaningless but just another efficiency tool.

      I don’t really see a lot of the problems discussed, because it’s going to be very difficult to convince anyone that an em is a person. And the most difficult to convince would be their “employers”.

      Does an entity which can be turned on and off at will have any power to assert its own rights? To defend its vacation time or to fight for a raise?

      • Deiseach says:

        Does an entity which can be turned on and off at will have any power to assert its own rights? To defend its vacation time or to fight for a raise?

        Exactly! If real-world me is making a living from the royalties on licensing 2,000 copies of my office worker brain*, I don’t want my ems causing trouble by demanding their two weeks’ vacation time. In fact, if I’m being paid on hourly rates, I want them to work every possible hour they can with the minimum of necessary downtime. So I’m going to agree to have the “want to spend Christmas Day at home with the family” part of the em edited out, and to hell with what a copy of a copy of my brain thinks about it.

        *If real-world me sells my time and labour to an employer for wages, then em future real-world me will be selling my em’s time and labour for the equivalent. Employer pays the charges to run the ems, like they pay the electricity bills and heating and lighting and rent on the office building where humans (used to) work. They pay me a licensing fee instead of hourly wages for my ems, or maybe my ems’ hourly wages go to me. Pennies for each em but thousands of copies mean I keep earning what I used to earn working for $12 an hour.

  27. Brian Slesinsky says:

    The Ascended Economy is an interesting idea that could use some spelling out. It seems like there must be some flaw; is it actually possible to detach from the normal economy like this, or is this some kind of perpetual-motion machine?

    One reason to be skeptical is that physically detaching from the global economy seems very hard, even for entire countries. Making everything yourself is hard to do and the gains from trade are very high. I’d expect this to be especially true for the inhabitants of a high-technology artifact like a datacenter. To keep running they need energy, cooling, and machinery, so it seems like they also need something useful to sell. We can imagine them running the Internet, providing any services datacenters can be used for today and more. But ultimately Internet companies do depend on income streams from actual customers purchasing things (possibly indirectly via advertising). When the income disappears, they die.

    Moving up the stack from the physical layer to legal/finance, it seems like a shell corporation’s only powers are to own things and spend money; otherwise it’s a legal fiction of little interest. A shell corporation is all about trade – it’s outsourced everything except decision-making. The amount of power it has depends on its assets and income streams, which are what allow it to spend money on services. And any effect it has on the world depends on its trading partners who aren’t also shell companies and actually do things in the real world.

    This reminds me a bit of finance. Say there’s a big merger in the airline industry. Lots of things might change for the employees involved. But, in the end, it’s still about taking people to the places where they want to go. Who owns the airline doesn’t matter all that much to the customers, except to the extent that it changes the service being provided to them (for better or worse).

    Anyway, there’s lots of trading going on. The industrial part of the economy (the mines and oil refineries and factories and data centers) doesn’t physically interact with humans, and maybe it’s trading in intermediate goods that humans don’t see either. But it’s the supply chain for the consumer economy, which is all about designing and building things based on human desires and constraints. If we assume high competitiveness for income streams, anything unnecessary for meeting customer needs (however indirectly) gets eliminated.

    Maybe the back-stage industrial areas become increasingly inhuman, but also increasingly efficient, home to creatures who don’t look like humans at all but subsist off our income flows. It’s sort of like the difference between the outside shell of a cell phone and its internals – inhuman machinery wrapped up in a human-friendly package.

    In this scenario, the humans are providing the sunlight. Maybe there are also creatures that live in darkness, but doesn’t there need to be a hydrothermal vent somewhere?

    • Nicholas says:

      The hypothetical goes something like this:
      In the present, most of corporations’ liquid assets are stocks and bonds in other companies that can be sold if cash is needed. It is hypothetically possible that two companies might exist, both owned by humans but operated by algorithms. These algorithms might output that the optimal liquid asset to hold is stock in the other algorithm-operated company. The two companies may then each come to hold so much of the other’s stock that, at some point in the future, all of the publicly available stock of each is owned by the other, because the algorithms’ output was to buy as much as possible at the current price and then hold it. At which point, you have company A, owned by its only shareholder, the collective entity company B. Unfortunately, company B has no human employees and thinks only of the welfare of its only shareholder, the collective entity company A.
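
      As a toy sketch of that endpoint (illustrative only; the company objects and the ownership walk below are invented, not anything from Hanson): once each firm’s sole shareholder is the other firm, no chain of ownership ever reaches a human.

      ```typescript
      // Illustrative only: two algorithm-run firms that end up owning each other.
      type Owner = Company | "humans" | null;
      interface Company { name: string; ownedBy: Owner; }

      const companyA: Company = { name: "A", ownedBy: null };
      const companyB: Company = { name: "B", ownedBy: companyA };
      companyA.ownedBy = companyB; // close the loop: A owned by B, B owned by A

      // Walk the ownership chain; a cycle that never reaches "humans" means
      // no human holds equity anywhere upstream.
      function hasHumanOwner(c: Company, seen = new Set<Company>()): boolean {
        if (c.ownedBy === "humans") return true;
        if (c.ownedBy === null || seen.has(c)) return false;
        seen.add(c);
        return hasHumanOwner(c.ownedBy, seen);
      }

      console.log(hasHumanOwner(companyA)); // false: ownership is a closed loop
      ```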

      • FacelessCraven says:

        …And then the next morning, some tangentially related human sues for control of the company, and there’s no human with standing to contest the suit, so he wins by default. Alternatively, the government nationalizes the company with no one really in a position to complain.

        I can imagine an entirely automated company growing its holdings, but what’s to stop humans dropping a saddle on it and riding it into the sunset?

        • piwtd says:

          Alternatively, the government nationalizes the company with no one really in a position to complain.

          Which government? There isn’t a global one and international capitalism is a thing.

          • FacelessCraven says:

            @piwtd – it doesn’t really matter which one. Whichever is hosting the majority of assets, whichever is dominant enough that they think they can get away with it, maybe all the countries involved take the local pieces. My point is, free resources get exploited, and an entirely automated corporation looks an awful lot like free resources.

          • piwtd says:

            @FacelessCraven: That’s the point of international capitalism – the capital can move around. A country whose government has a habit of nationalizing AI-run companies won’t get much foreign investment from AI companies and will eventually get outcompeted in the new economy by countries with a more business-friendly environment.

            Either you have a single global regulatory agency (what Nick Land calls ‘Anthropol’) with an effective power to restrict innovation so that it doesn’t get out of human hands, or you have global capitalism which will eventually optimize humans out of the process, because the market players which get rid of humans will eventually outcompete those that don’t.

          • TD says:

            What would nationalizing the totally automated inhuman economy actually do in practice? If the economy doesn’t functionally need us, and in fact, operates without us, then how do we actually control it?

            Aren’t you just signing bits of paper at that point? Also, what never seems to come into this is whether an economy detached from humans might elect its own government to protect its own property. We seem to be assuming anarcho-(techno)capitalism, and then slapping a human government on it, but what if the auto-companies have already got the government thing covered?

          • FacelessCraven says:

            @piwtd – “Either you have a single global regulatory agency (what Nick Land calls ‘Anthropol’) with an effective power to restrict innovation so that it doesn’t get out of human hands or you have global capitalism which will eventually optimize humans out of the process, because the market players which get rid of humans will eventually out-compete those that don’t.”

            The ways I see this working all seem to be equivalent to the unfriendly AI idea, but I was under the impression that this was a separate and distinct idea. Was I wrong about that? Is this just basically the FAI problem via the market rather than a FOOM?

          • piwtd says:

            @FacelessCraven: As I understand it, there is a disagreement between Yudkowsky and Hanson about whether the first sufficiently advanced AI will be able to quickly take control over the entire world and establish itself as a singleton (as Yudkowsky believes), or whether the intelligence will be distributed over the entire economy and no single agency will be able to control the global development, which will thus be left to market forces (as Hanson believes). Nick Land is very much on the Hansonian side, and in his vision, as far as I can understand it, global capitalism and the emerging (eventually very unfriendly) distributed AI are the same thing.

          • FacelessCraven says:

            @piwtd – I’ve seen a lot of descriptions of how the EY singleton happens, boxing and foom and so forth. On the Hansonian side, what’s to stop humans from interfering with or redirecting the automated economy, tinkering with the AIs’ values, etc to harness them for human benefit? If there is a multinational corporation that does vast amounts of work which provide zero benefit for humans, what’s to stop humans from modifying it to output benefits, or just killing it entirely? With the EY Singleton, there’s an assumed unbridgeable power differential; how’s it work for the hanson/land version?

            Is the idea that humans are just pushed more and more to the edges until they’re so marginalized that they cease to matter at all?

          • Psmith says:

            what’s to stop humans from interfering with or redirecting the automated economy, tinkering with the AIs’ values, etc to harness them for human benefit?

            Autonomous, heavily armed Big Dogs and quadcopters with 24-hour drone overwatch?

          • FacelessCraven says:

            @Psmith – “Autonomous, heavily armed Big Dogs and quadcopters with 24-hour drone overwatch?”

            How do systems that produce no human value get a monopoly on force, which currently exists entirely and universally to enforce human values?

          • Psmith says:

            How do systems that produce no human value get a monopoly on force,

            Fully autonomous or em-controlled supply chains decide to put everything together, flip the power switch, and say “molon labe, meatbags”?

            (Or human owners automate increasing amounts of security and eventually get bought out by ems.)

      • Brian Slesinsky says:

        We should probably distinguish between the shareholders and the actual decision-makers (management and the board). A company could technically be owned by unknown entities who are simply passive investors. Unless they become active managers, they aren’t really running things.

        But putting that aside, suppose we have a company that only makes investments, and its investment decisions run on autopilot. So it’s making investments based on proposals put together by – who? Choosing well between available investments requires intelligence but isn’t a creative process. For example, if we’re talking real estate, the contractor and the architects and engineers have a lot more say in how the building turns out.

        But suppose some of them are automated too. The other issue is that the business has to make money, or the money will run out. Who decides whether a product is successful or not? Consumers. What does it mean to be a good designer? The ability to understand human tastes. If we imagine an AI that’s actually good at business and not just a passive investor, it has to be good at understanding and serving humans.

  28. SolipsisticUtilitarian says:

    I don’t understand why there would be ems with different levels of ability. If they get to the point where they can scan my brain entirely, surely they could also make me much smarter, in fact as smart as possible. The only reason the uploaders would abstain from optimizing the ems seems to be an ethical qualm about changing personalities, but that seems unlikely in a world where copying personalities is a routine task. And why would anybody even need an independent Taylor Swift em if they could just create a virtual person in their own simulation?

    • Scott Alexander says:

      I think Hanson is assuming it is easier to upload a brain than to customize it. While you might be able to add “tweaks” that increase abilities like IQ a little, it might not be possible to change a dumb brain into a smart brain in silico without a lot more understanding.

      For example, I don’t really know any CSS, but when I need to make a website I can find a website I like, copy their CSS, and change a few colors and fonts around and get something useful. But I have to start with a good website to begin with; I don’t know enough to transform an all-around terrible website into a good one.
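
      In code terms, the copy-and-tweak move might look like this (a throwaway TypeScript sketch; the style values are invented and merely stand in for CSS properties):

      ```typescript
      // Illustrative only: "copy a good design, tweak surface parameters".
      const copiedStyle = {
        fontFamily: "Georgia, serif",
        backgroundColor: "#ffffff",
        textColor: "#222222",
        maxWidth: "40em",
      };

      // Shallow edits are easy even without understanding the overall design:
      const myStyle = {
        ...copiedStyle,
        textColor: "#004466",
        fontFamily: "Verdana, sans-serif",
      };

      console.log(myStyle);

      // But there is no analogous one-line edit that turns a bad layout into a
      // good one; that would take the understanding that copy-and-tweak avoids.
      ```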

      • tanadrin says:

        Though this analogy rather seriously undersells the complexity of the proposition, at least in a biological context. Because it’s as if changing the background color of the website also changed the size of the font (but only for certain words) and where half the links pointed, and it’s incredibly hard to predict what parts of the page are going to be affected by any given change in the CSS.

        It seems to me that the optimism of transhumanist types and how soon they predict the useful implementation of any given technology (ems, cryonics, genetically engineered supermen) varies strictly and inversely with how much they know about biology. I will grant Hanson that a hundred years is a long time, and the world now, at least in the rich and developed bits, is very different than it was a hundred years ago. But it’s not unrecognizable, not nearly to the degree he seems to think it is, and the areas in which we’ve made our biggest strides, like computing technology, seem to me to be relatively low-hanging fruit, which is not guaranteed to continue to pay off in the same quantities indefinitely into the future.

        • MugaSofer says:

          Since Hanson doesn’t think that ems will be easily customizable, I’m not sure what you’re “granting” him here. He agrees with you.

          • Deiseach says:

            If we get to the point that we can flawlessly copy an entire human brain such that the electronic copy thinks and behaves as the original meat brain with the same memories, tastes, beliefs and personality, I think we’ll have figured out “so this bit connects with that bit and helps you be good at spelling, so if we twiddle this bit we can make you even gooder!”

            If we can mess around with subjective time sense for an em so that it lives centuries in a day/days in a century and does not break down or go crazy from perceived dissonance between subjective and outside experienced reality, we can certainly bump it up a couple of IQ points.

        • GregvP says:

          Brains without sensory input go mad.

          You need to emulate the whole human.

  29. Timothy says:

    Mr. Bungle – None of Them Knew They Were Robots

    From history
    The flood of counterfeits released
    The black cloud
    Reductionism and the beast
    Automatons gather all the pieces
    So the world may be increased
    In simulation jubilation
    For the deceased…

    The whole song is rather appropriate and non-coincidentally mentions both Moloch and Kabbalah.

  30. Person says:

    Is there a curated list of worthwhile Hanson blog posts?

      • tanadrin says:

        I’ll take “Assuming a socially conditioned response is indicative of fundamental biological nature” for 200, Alex. Cuckolding is only a “reproductive harm” if you assume monogamous sexuality and that the biological parents of the child are the ones on whom the primary burden of raising the child falls, neither of which is remotely accurate throughout the vast majority of human history.

        • Earthly Knight says:

          –The vast majority of societies throughout human history have had some combination of polygyny and monogamy as their norm for child-rearing relationships. Polyandrous societies are comparatively rare (and, where polyandry does occur, the male co-spouses are commonly close kin). It doesn’t really matter what kind of society you live in, though– if “reproductive harm” means “deleterious to fitness”, raising an unrelated child as if it were your own will be reproductively harmful except in extraordinary cases.

          –From the perspective of fitness, it makes no difference if the cuckolded father only contributes (say) 20% of the resources needed to raise the child. Those are still wasted resources for him that would have been more effectively spent on himself, or his genetic offspring.

      • suntzuanime says:

        There was a sense of intellectual openness on the internet as recently as five years ago, which is gone today, because of people like you.

      • Scott Alexander says:

        I agree with suntzuanime on this. I’ve banned all anonymouses involved, banned the anonymous@gmail.com account, and banned Jaime Astorga.

        • Anonymous says:

          This is honestly Kafkaesque; I made my comment with good intentions from an obviously subjective point of view. I’ve seen women deal inspiringly well after being raped by having a more natural and less pathological response.

          I apologize if I caused any harm, but I have to ask: why? The reaction to rape might be poorly calibrated because of environmental factors, and this only hurts women. Not saying we shouldn’t fight rape with as much or ideally more force than we do now, not at all.

          I guess you have to have an identity and write a book to get a pass at discussing the ugly aspects of life.

          • Scott Alexander says:

            I hate it when people who have written hundreds of good things but have also written one controversial thing that offends social justice people in some way end up having the one thing brought up every time their name is mentioned, following them wherever they go until they have to leave the Internet in shame. People try to do this to me with the one time I said, out of context, that there was a thin line between social justice and Voldemort; it makes me really angry, and judging Robin, who’s really brilliant, by the one time he talked about rape in a slightly less than appropriately platitude-repeating way is horrendously unfair to him.

            Also, I ban anons on a hair-trigger.

          • Anonymous says:

            I hate it when people who have written hundreds of good things but have also written one controversial thing that offends social justice people in some way end up having the one thing brought up every time their name is mentioned, following them wherever they go until they have to leave the Internet in shame.

            “I hate this one thing SJWs do so I’ll turn to the only tool I have in my toolbox – ban the rightmost commenter I can find”

            Hilarious. This joke never gets old.

          • TD says:

            Rightmostness is ultimately about sovereign authority, so respect Scott’s.

          • Jiro says:

            Rightmostness is ultimately about sovereign authority, so respect Scott’s.

            If it is in fact true that being on the right gives more reason why he should respect Scott, then it would follow, by conservation of expected evidence, that someone who is not on the right must have correspondingly less reason to respect Scott than normal. Are you sure you want to go this route?

          • Anonymous says:

            Funny thing… I was actually defending Robin’s point and trying to explain it from a subjective perspective that could maybe help both genders if understood. I post anonymously because I’m shy and fear SJW/”stalkers” as much as anybody else. Men are not even allowed to do paternity tests in some places, ugh… This is quite infuriating honestly. I wasn’t judging anybody you know?

            I came to this blog with nothing but admiration, respect and a desire to talk, I didn’t insult people or anything like that. Unless I’m living in a fantasy world, I would say we are more or less on the same side.

            If you were worried about censoring some of Robin’s opinions, for his own sake, you could have redacted or deleted the original comment and maybe provided an explanation instead of shitting all over the people who come here as friends. Meh. SJWs are winning and this kind of shit only helps them win harder. Way to treat your commenters.

            I’m not even American, I have little experience calibrating according to whatever you decided to make taboo or drag into your culture wars… I guess it only makes sense that us more disconnected anons are the first ones to get fucked when the SJW overlords demand some “gardening”.

            (I get upset every time someone starts bashing anons, why is that even a thing? Where I come from anonymity is held in high regard, as it should be :S)

          • Scott Alexander says:

            I’d consider making your ban temporary, but given that you’re here talking to me I guess that wouldn’t have much effect.

          • Anonymous says:

            Well, it would let me edit comments, which is important for clarity and grammar. (Probably doable with something more elaborate than googling “proxy” and clicking the first link?) Besides, having to go around the rules feels bad and rude.

          • Scott Alexander says:

            Fine. Which anon were you originally?

          • Anonymous says:

            Thank you. One of the deleted comments – it had wolves in it. This anon.

          • Scott Alexander says:

            All right. Although it took a while, I have now unbanned you.

        • Net says:

          Suggesting a temporary ban for jaimeastorga2000 since he’s been a contributor in otherwise good standing (e.g. coordinating the group science fiction story readings for the open threads), he didn’t start it, and it’s not something you’ve banned for in the past. Relevant precedent: multiheaded was another named contributor who got temp-banned three times but never permabanned. If you lean more towards temp bans for established contributors, that encourages using a consistent handle and keeping it in good standing, which I suspect makes community gardening easier in the long run.

          Let me know if this kind of kibitzing is unwanted–if so I completely understand.

          • Sniffnoy says:

            Going by the register, your request would appear to have been granted! (Actually it was already that way last night, Scott just failed to be clear about it…)

        • HircumSaeculorum says:

          What on Earth happened here?

        • keranih says:

          Yeah, boss, what the hell? How on earth could that not have been handled with a “Over the line, Jaime, temp ban for two weeks.”?

          Okay, failure to read the whole thread on my part.

        • Dan T. says:

          Banning people to promote intellectual openness… that has a similarly paradoxical flavor to the social-justice crowd’s excluding people in order to promote diversity.

    • Scott Alexander says:

      The Robin Hanson Primer is pretty good.

      • onyomi says:

        Re. foragers v. farmers, I’ve always thought it so amazingly self-aware that the things Gilgamesh gives to Enkidu to “civilize” him are basically farm products (wine, bread, fancy courtesans).

        Also,

        “In contrast, we might find the general farmer saying that the world is a harsh place and hard to change, and so we should focus more on figuring out how to change ourselves to best deal with that world, to survive in the face of harsh obstacles and competition.”

        Seems a surprisingly good summary of the religious viewpoint (not my will be done, but thy will be done): acceptance of what is.

  31. Anonymous says:

    I must fanboy about this once more… The Interfaces Series

    Some relevant bits:

    A few years ago, I went back into the CIA files and found a copy of the game to see if I could finally beat it. I got past level 800. After that, it became simply inhuman. So I botted it to see the ending. It took a long time to build a proper bot. It really was a fiendish, clever game. Finally, I got one working, but it turns out that there is no ending. You get to level 1024, and it just resets. You never meet the Ancient Queen.

    The work could be described as Sisyphean. Trying to re-culture a person after years of all that whiz-bang feed stimulation is like pushing a heavy boulder up a hill. And occasionally the boulder is masturbating.

    Even more mysteriousness is their kindness. For it is they, they alone of all the living things, who show our kind any affection, who bring us food, as if we are their young.

    As if they are our mother.

    How could this be? How could these evil beings show us affection? How could they show us more affection than the world itself, who is of our kind? This is the central mystery. Ever since my kitten died, this has become my obsession.

    “When God presses forward, you must yield or be destroyed. And when God yields, you must press forward.”

    “That sounds more like dancing than wrestling. Or making love,” I said with a snort.

    He smiled. “Yes, it is… Except that dancing is not so painful.”

    “Why wrestle at all? If God is God, and you know his plan, why not simply follow it? Surely this is the best course.”

    “Yes, but I cannot bring myself to,” he said. For the first time, I saw the peaceful expression flee from his face to be replaced by a unsettling dread that trembled in his eyes.

    “God’s plan… is simply too awful.”

    The word “interface” refers to the input and the output, but it also refers to the box. We think of interfaces as existing in order to give us access to things, but they are also there to hide things from us. The idea is that some things are better off hidden. Everything will go along fine so long as a certain input produces the expected output. But when this stops happening, we have to open up the box and see what’s inside. Sometimes we don’t like what we find.

    One day, his father told him to cut down the apple nullity. “But Paw,” he protested, “I love that old nullity!”

    “Mind what I say, boy!” his father said. “I don’t like ontological paradoxes, and I don’t like you sassing me!”

    The boy ran crying to his mother. “Maw! Paw said I hafta cut down the old nullity! Say it ain’t so!”

    “I’m afraid it’s for the best. The other day I was weeding the tomato patch, and I saw Sammy the cat had gotten into the nullity. When I was trying to get him down, I accidently gazed into an infinitely branching timeline of events which never happened and never will happen. Well, I’ll be durned if that old Sammy didn’t jump right on my head!”

    “But Maw! What about my tire swing?”

    “Come now. There’s all sorts of other things you can tie your tire swing to. What about one of the many giant flayed demon penises that grow abundantly in our world and provide our lumber?”

  32. tanadrin says:

    This might be the first genuinely good argument for anarcho-primitivism I’ve ever run across.

    On a more serious note–the part of this that holds up least for me is the calm assertion that not only will it be possible to create ems, it will be possible to do so in the relatively near future. Questions of rapidly-increasing computational power aside, it’s not at all clear that the physical problem of scanning (never mind nondestructively scanning) the human brain at a resolution necessary to create ems is something that will be tractable simply given enough time, and I’m long past the stage where I can implicitly trust blithe assertions by intelligent people about fields well outside their area of expertise. Hanson is an economist, not a biologist, and while given his assumptions about biology and computers I expect his dystopian future is probably as accurate as it’s possible to get, I’m not at all convinced his assumptions are accurate.

    A more minor quibble is his assertion of a regulation-lite future economy. That may certainly be *appealing* to him, especially if he’s of a libertarian bent, but I don’t see any dominant social or political movements in the direction of the decidedly libertarian future that would be necessary for the unbridled race-to-the-bottom he seems to envision. Moreover, the society he describes seems to have *very weak* rule of law, and invoking rule of law vaguely to prevent the consequences he feels are bad when they finally do arise runs up against the fact that a world where the existence and identity of its participants is cheap and where there are inequalities of wealth so vast they have to be expressed in scientific notation is not a world conducive to strong rule of law.

    Moreover, legal institutions are nothing if not *conservative*. The foundations of common law are a thousand years old at this point (more, maybe), and of civil law systems even older. The law doesn’t countenance the possibility of humans being able to surrender their lives or their autonomies, even if they sign a contract saying otherwise. If ems are considered humans for the purposes of being able to vote, own property, and earn money, it’s hard to argue that deleting them is not de facto murder (in the eyes of the law). If ems aren’t human (at least legally speaking), ems as a politically active, laboring, earning, and property-owning class is a moot point. I’m not sure you can have it both ways, considering we have a legal system which arose in the context of a feudal agrarian society that envisioned nothing like ems. And while our legal system does adapt to new technologies, it does so slowly and often very badly. In this case, I think it would be a serious impediment to the future Hanson anticipates.

    • ii says:

      While I think all this is true (and actually pretty obvious), my biggest gripe with it is that this imagined future seems to continue along the path of growing the economy to the point where it essentially becomes a performance art.

      It seems to me that people actually do this whole economics thing because they want stuff; once the stuff has been gotten, or it has been established that it won’t be gotten, it stops being useful and starts being the subject of larping/historical reenactment by enthusiasts. This seems like one such enthusiast projecting a greater interest onto other people for their nerdy hobby than is readily apparent in most.

      Admittedly I have similar suspicions about people who talk about venturing forth to explore the galaxy.

      • tanadrin says:

        I think your analysis is spot on, in the respect that this isn’t a dystopia to Hanson because he’s an economist, and plainly fascinated by economic systems. Whereas to those of us–me, certainly, Scott I think too–who are interested in the economy only as a means to an end, this pretty much sounds like hell. Not only philosophically, though I’ll cop to the Hansonian future inspiring more than a little existential dread in me. But it also reeks to me of a worldview that suspects deep down that anything to which a dollar amount can’t be attached either directly or indirectly doesn’t actually have metaphysical significance. I may be reading too much into Hanson’s worldview, but based on Scott’s review and what little of his blog I’ve read, he *strongly* comes off as someone with the philosophical myopia of the specialist. His future may be more humanist than Nick Land’s–it does have self-aware creatures in it–but only *just.*

        (Exploring the galaxy in your knockoff Enterprise might be a niche interest, but it would be a niche interest that didn’t interfere with anybody else’s preferred society, especially in some glorious post-scarcity future where you can afford to build your spaceship without bankrupting a small industrialized nation.)

        • PDV says:

          It is my continued belief that Hanson has self-modified to value exactly and only what the economy values, and this makes him genuinely dangerous to give any power.

        • Deiseach says:

          Okay, it makes more sense (to me) to think of this not as Hanson extrapolating “can human minds be uploaded?” but rather exploring the economics question – can we create The Ultimate Economy? If we can’t do it with humans as they are now, what about if we strip away all the things that limit humans? Would a human society with a very large proportion of entities that can work endless hours and be uploaded into robot bodies and make multiple copies of the most productive minds and run at vastly quicker subjective speeds create such an economy?

          He’s interested in the economics, not the brains or the technology or the state of future culture.

        • Robin Hanson says:

          The book describes many things being valued without suggesting that dollar amounts would be attached to them.

    • Wrong Species says:

      I wonder if it’s possible to actually stop technological progress right now. It seems like since the Industrial Revolution began, the long-term economy has been running on autopilot, impervious to any regulation or taxes. Is there a stable point between the beginning of the Industrial Revolution and some dystopian end state? Because even if Land, Hanson and Bostrom are all wrong, it seems like the future will have at least one aspect that will horrify contemporary people. But then again, who am I to argue against the values of the future? Maybe we can reach some kind of compromise where people get varying levels of technology in different societies. Living as an Amish person seems less dystopian than living as an Em.

      • Anonymous says:

        The solution is to change what we optimize for; technology is just a tool, after all. Varying levels of technology across different societies is a decent first step, but we should aim at eventually separating technology itself from unwanted progress in the wrong directions. Capitalism and “Darwinism” have to be put in chains and not the other way around, etc… At some point even fucking communism is better than anything that works too well.

        Some number of sufficiently powerful people have to push for that with cold utilitarianism if we want any kind of chance. Unfortunately they mostly seem to be fighting about how to push their own bullshit through the current algorithm, or trying to replace it with something worse or neutral (strictly worse, due to wasting time).

      • Nicholas says:

        You can, it has happened repeatedly in history, and it’s relatively easy to do. If you deprive a technologist of either the material support necessary to maintain his devices’ prerequisites, or the social support necessary for him to propagate the understanding of how his device works, you can cause the technology to simply go unused.
        There was an era in the 1900s and 1910s in Europe involving solar-powered boats and bio-punk farms. The death of pretty much every young person in France in WWI and the oncoming domination of everything by the petroleum-powered United States led to the technologies just being dropped and forgotten. Greek fire really was lost, the Minoan printing press really was lost, the secret of Roman road-building really was lost.
        If you look at my examples, though, you’ll notice that there’s a pretty big cost. Technology has local minima and maxima. You can maintain a stable techne for decades if you’re willing to regress or progress to one of those local optima. But you can’t pick a spot halfway up a slope and just stay there because you like it. Normally, the way you stop technology from going up is to force it to go back down.

        • Wrong Species says:

          Specific technologies can be lost, but constant technological growth wasn’t lost in your examples. Maybe certain aspects in certain times, but solar energy is more prominent than ever and agricultural growth has continued unimpeded. The only way I see to stop that is through an apocalyptic event or a ruthless world government.

          • Nicholas says:

            Solar electricity is more common, and one particular kind of it at that. There’s a variety of solar devices whose design specifics have been lost, because they were deemed impractical by the engineering students of an earlier era.
            To answer your question more directly: there’s probably a stable roster of technologies as complex and force-outputting as the early industrial era, maybe even the late-1800s stereotype. But there’s likely no stable technological suite within 30 years on either side of where we are now. You can roll the dice on going forwards, or you can give up every technology newer than hydrofracking, but you can’t stay here in the middle.

    • TD says:

      A more minor quibble is his assertion of a regulation-lite future economy. That may certainly be *appealing* to him, especially if he’s of a libertarian bent, but I don’t see any dominant social or political movements in the direction of the decidedly libertarian future that would be necessary for the unbridled race-to-the-bottom he seems to envision.

      Yes, but governments are commercial too. The thing that prevents a race to the bottom is that humans value the existence of other humans, and so we pass laws to reflect that. If regular humans become commercially useless to each other, then the law is likely to change to reflect that, but since we may retain our emotional usefulness to each other, we will still presumably retain laws to do with human rights. There will be lag, but it’s much like how we liberalized sexuality once we no longer needed our fellows to breed in tight knit families for the good of the community, and that was due to our vastly increased wealth. Wealth atomizes because it allows us to express our preferences, and that’s likely to continue.

  33. Knoll says:

    Re: the wireheaded emulations on speed who don’t care if they live or die – on the one hand, that totally gets my existential dread going because it just sounds so, so messed up. On the other hand, in the space of all possible minds, we could do way, way, way worse than happy human brains working on solving problems. At least I can pretty safely assume that those Ems would be doing alright. If we design an AI from scratch, get the value-alignment sorted out, but still have no way of knowing if the thing has any “consciousness” – that’s scarier to me. For all we know it could be a fully-conscious mind suffering immensely, but with no way of expressing that.

    • tanadrin says:

      AFAICT, concepts like “suffering” (and maybe even “consciousness”) are so dependent on the structure and nature of any given mind that there’s no reason to suppose they’ll apply meaningfully to something that doesn’t derive directly from the same evolutionary heritage as the human brain. Which is to say, I’m willing to bet chimpanzees can suffer. Mammals in general. Superintelligent squid? Probably. Not certain, but I’d say the balance of probability is in that direction.

      Unless we went out of our way to make AI a copy of the human mind, I don’t think artificial intelligence is going to look anything like human intelligence. I don’t think we have to worry about it suffering, because I think the whole notion of suffering is intimately tied to the structure of our brains and bodies. I also don’t think it necessarily follows that AI, no matter how advanced, would be sentient, to say nothing of self-aware.

      Most of what I find disappointing about discussions of AI and AI risk is either the whole notion that AI will inherit humanlike capacities which it has no reason to possess (consciousness, suffering), or a failure to effectively argue that general AI as envisioned is even possible/necessary/useful, to say nothing of a philosophically meaningful concept.

      • Alliteration says:

        Suffering (or at least a metaphorical version of pain) shows up in current AI as the thing the system optimizes to have less of. For example, an OCR AI is “hurt” by making mistakes on training questions.

        Of course, whether or not this counts as pain ethically is a very complex problem.
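
        (To make the analogy concrete, here is a minimal sketch of “pain as training loss” for a toy classifier – the shapes and numbers are invented for illustration, not taken from any real OCR system:)

          import numpy as np

          # A toy "OCR" classifier: scores = W @ pixels, prediction = argmax.
          # Its "pain" is the cross-entropy loss, the quantity training drives down.
          def cross_entropy_loss(scores, true_label):
              # Softmax turns raw scores into probabilities.
              probs = np.exp(scores - scores.max())
              probs /= probs.sum()
              # Loss is large exactly when the model assigns low probability
              # to the correct answer, i.e. when it "makes a mistake".
              return -np.log(probs[true_label])

          rng = np.random.default_rng(0)
          W = rng.normal(size=(10, 64))    # 10 digit classes, one 8x8 image
          pixels = rng.normal(size=64)     # a fake training image
          print(cross_entropy_loss(W @ pixels, true_label=3))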

        • tanadrin says:

          I feel like if that counts as suffering, we’re stretching terms past the point of usefulness. Thing-to-be-optimized-against isn’t the same as suffering for conscious, self-aware actors like humans. Trying to argue that something without even nematode-level intelligence is capable of suffering makes me think we’re not employing a useful definition of the term.

    • Scott Alexander says:

      If an AI were suffering, it could probably just tell us, or if it were superintelligent figure out a way to solve its own problem.

      • ii says:

        so what happens when it’s feeling *untranslatable 2*?

      • Knoll says:

        Say we make some big discoveries about the learning algorithms that brains use, and we build an AI based on those. Its software is sufficiently complicated that we think there’s probably as much going on in its “mind” as in a human brain, and it’s seemingly happy to do our bidding and is really good at learning to do all the jobs better than people, and we’re all thinking this is great, we did it!

        And when people ask “Wait, but is that thing conscious?” everybody kind of shrugs ‘cause we don’t know what makes human brains conscious anyway so how the hell are we supposed to know? But we figure, if it is “alive,” it’s probably happy, because we’ve programmed it to “want” to do what we tell it to do.

        But for all we know we’ve created a mind that is living a perverse existence of constant pain, because when you tweak brains even a little bit really weird stuff happens and we just hacked together a brain from scratch, and we accidentally built a thing that suffers with no capacity to understand what’s happening and no ability to reject its “desires” that we’ve given it.

        (Or maybe we would build super happy robots and it would all be fine.)

        But if we had emulated brains that were identical to human brains, and we could give them non-harmful happy pills, then (leaving aside all other ethical questions) we could at least be reasonably sure that they weren’t suffering.

  34. Alliteration says:

    I am not a biologist, nor do I know much about phantom pain.

    However, it seems that phantom pain would prevent an em-but-no-edits future. If humanity knew how to upload brains but not how to edit them to fix phantom pain (and all the other potential issues that removing the body might cause), the ems would be useless for work because they would be in too much pain. Thus, ems could only really be useful after decent editing was developed. (We would, at least, need to be able to understand all the information passing through the spinal cord.)

    Maybe, there is some clever way around this, but I do not see it.

    • Scott Alexander says:

      I agree, not necessarily with the phantom pain thing, but with the idea that there will be dozens of these unexpected little problems that will make uploading an absolute nightmare and delay it from working effectively until after something else causes a singularity.

      My personal beef is that any attempt to model the brain without taking molecular-level processes into account is likely to be woefully inadequate. For example, DNA methylation is heavily implicated in autism, suggesting that even small problems with the methylation system can have really serious effects, but uploading hopes to just completely ignore methylation because it’s not part of the synaptic pattern. While you could argue that you can abstract to a level of “here’s how this brain cell’s synapses would function if it were properly methylated”, I think possibly the whole reason methylation exists as a system is to allow flexibility, where some things are methylated more than others for certain reasons and that affects brain function. If you’re just going to wish all that away because you’ve copied the synaptic pattern right, well, good luck.

      • Steve Sailer says:

        I assume Robin’s goal is immortality through uploading his brain (presumably before he has his head frozen). Leaving aside the philosophical questions, I’m dubious about the maintenance. I can barely keep software running while I’m alive and care about it. I’m dubious that anybody in the future will bother doing upgrades and maintenance on dusty 21st century personalities.

        • Tsnom Eroc says:

          What I find interesting is how subjectivity is created, and the philosophy of panpsychism.

          We may find it simple enough to transfer the raw data of the human brain to a computer. Just a bunch of 1’s and 0’s to approximate the analog mind.

          However, what about emotion and feelings? Those may depend (and probably do) on the actual physical substrate that underlies them. Or consider the difference between an AI being aware that something is touching its physical substrate and having a defence mechanism or an awareness of repair, versus truly feeling something.

          It might be impossible to have something we consider feeling that we can relate to and understand in this form without raw bio-matter.

          A quote:

          “Why this presents a problem to mind uploading is that consciousness may not be substrate neutral — a central tenet of the Church-Turing Hypothesis — but is in fact dependent on specific physical/material configurations.”

          • hypnosifl says:

            The philosopher David Chalmers, who does think there are truths about consciousness distinct from physical truths, nevertheless thinks it’s unlikely that functionally equivalent systems (like a brain and a precise simulation of one) could have different forms of experience, for reasons explained in this paper. Basically he imagines a thought-experiment in which neurons in our brain are gradually replaced by functionally equivalent systems with the same input/output relationship, and thinking about what might happen to our inner experience in this case. If you don’t think it’d be basically identical, the alternatives are that your inner experience would be shifting/fading but there would be no behavioral changes at all (including the absence of any speech-patterns like “hmm, all my visual experiences seem to be getting dimmer and harder to consciously perceive”), or there would be some point at which some fraction of your brain had already been replaced but your consciousness was completely unchanged, and then replacing a single additional neuron would cause your consciousness to disappear completely. Neither is logically impossible but they would imply that the laws governing consciousness are sort of contrived and inelegant, which gives at least some reason to doubt that it would work this way.

          • Tsnom Eroc says:

            I will read the paper in full, but my initial response is in regard to this (a real response to the paper will take longer and more pondering). There are a few responses to quotes from the paper.

            “nevertheless thinks it’s unlikely that functionally equivalent systems (like a brain and a precise simulation of one) could have different forms of experience”

            …is that it’s not quite true. For me, it’s the feelings and emotions that are important. I hope this response isn’t mindless, but I believe it’s possible to move data around while the emotions and feelings of an event remain substrate dependent. But how fundamental is that emotion, and how is it transmitted? Science has done a good job explaining how the nervous system moves data around, and is getting there in regard to discovering some of the algorithms in the brain.

            Right now, I am prone to believe in the fading-subjective-experience view, in conjunction with panpsychism (emotion is a fundamental property of matter, somehow). If you replaced the particles that carry that subjective quality with matter that doesn’t, bits of feeling would gradually diminish.

            “If you don’t think it’d be basically identical, the alternatives are that your inner experience would be shifting/fading but there would be no behavioral changes at all”

            well, if there is a physical property of emotion, then this experiment might really be impossible in some sense. Behavioral changes may occur. Along the line, replacing matter that conveys the same data may well shift behavior. Say there is a data clone of a person where there are fundamental physical properties with emotion tied in. With everything replaced, it may well be the case that a “being” finds a certain collection of data distasteful while being fully capable of expressing that it *is* distasteful.

            Or, put in a way that adds CS: all the algorithms of the brain will still be intact, but the subjective experience of the person may be utterly different. The algorithms that process sight and the knowledge of what the temperature is are there, but whatever physical processes give rise to emotion are totally different. The knowledge of the creature is the same, but the willpower is incomparable. Or: color X is still always described as color X, but instead of being a favorite color it is now somewhat disliked.

            “We imagine that your conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same.”

            But *if* the emotion is somehow substrate dependent, the externally observable behavior midway perhaps won’t be the same.

            ” There is no room for new beliefs like “I can’t see anything”, new desires like the desire to cry out, and other new cognitive states such as amazement. Nothing in the physical system can correspond to that amazement. There is no room for it in the neurons, ******which after all are identical to a subset of the neurons supporting the usual beliefs***”

            But they are not actually identical to the original neurons. That’s the kicker. They contain the same ability to convey information.

            How does the brain access emotion, and where does the “soul” reside, and how does it feel? Something something quantum, but that’s unsatisfactory.

            “While it might just seem plausible that an organization-preserving change from neurons to silicon might twist a few experiences from red to blue, a change in beliefs from “Nice basketball game” to “I seem to be stuck in a bad horror movie!” is of a different order of magnitude”

            But if the properties of emotion are a fundamental physical property (I am repeating myself), this isn’t out of the question. In fact, I believe it is perhaps to be expected.

            I would argue that one could replace current neurons with other substances that convey information perfectly well, but lack whatever capabilities are needed to experience positive emotion, thus leaving no willpower to continue anything.

            I would even argue that there exist universes with no utility structures, or only negative utility structures, where intelligent life does not exist, perhaps even with particles otherwise identical to our own in every way. It has been noticed that most organisms motivate themselves roughly according to the pleasure principle; when that is absent, life tends toward non-existence. I would argue for that from the existence of suicides among people in this universe.

            Since no physical experiments can be done right now that fully address the thought experiment, I think this is something that can be debated eternally 😉

            I think I went on a tangent somewhere in there, but it’s all related…

          • hypnosifl says:

            “well, if there is a physical property of emotion, then this experiment might really be impossible in some sense. Behavioral changes may occur.”

            Hmm, let me focus on this. Are you saying there could be certain kinds of collections of atoms whose purely physical behavior cannot be simulated to arbitrary accuracy on a computer? (with ‘physical behavior’ interpreted just in terms of how basic physical variables like position and momentum for each atom change over time, or how the ‘amplitudes’ for different possible positions and momenta change if we are using quantum mechanics) Most physicists seem to assume that physical reductionism holds, that aside from the purely random element that may (or may not) be present in quantum physics, the best possible predictions about a system’s behavior would be generated from knowing the initial state of all the particles that make it up along with the basic physical laws governing their evolution over time. See for example physicist Sean Carroll’s comments on the arguments of some anti-reductionists here, along with his comment here that the laws of physics governing everyday life are pretty much completely understood (and these laws can be simulated to an arbitrary degree of accuracy on a computer). If you’re thinking of perhaps rejecting this, what would be the alternative? Some kind of life-force or holistic “self” that can influence all the individual atoms in a top-down way, causing them to veer off course relative to what quantum field theory would predict?

          • Tsnom Eroc says:

            I have a feeling any disagreements we might have will be due to both of us improperly expressing our viewpoints, but…

            “Are you saying there could be certain kinds of collections of atoms whose purely physical behavior cannot be simulated to arbitrary accuracy on a computer?”

            No, I am not.

            I think the thought experiment is faulty. You’re replacing matter with different matter, and we don’t yet know just how consciousness arises, or how the interactions of the electric field and the waves that make up matter combine to make experience.

            Due to that, there may be a point in the thought experiment where the behavior of the creature simply breaks down. You are propagating information in a very different way.

            Or, to put it this way: while I believe physical reductionism holds, you are replacing matter with entirely different matter, with perhaps very different quantum states. Due to that, the experiment (if it somehow happened) may well break down somewhere, with effectively non-predictable behavior.

          • hypnosifl says:

            “I think the thought experiment is faulty. You’re replacing matter with different matter, and we don’t yet know just how consciousness arrives., or how the interactions of the electric field and the waves that make up matter interact and make experience.”

            But if you can simulate a single neuron to arbitrary accuracy, you can use the simulation to determine all the physical “outputs” that would emerge from the spatial region the neuron is located given the “inputs” crossing into that region–the main important outputs for brain function would probably be the neurotransmitter molecules leaving the presynaptic membrane at the axon terminals, but you could also compute the electromagnetic field contributed by charges and currents inside that region. If you could do that, then in principle it should be possible to replace the neuron with an artificial device located in the same spatial region that senses the physical inputs crossing into the region, calculates what physical outputs the neuron would have emitted in response, and then instructs some artificial interface devices to emit those very same outputs (the devices would contain stores of different types of neurotransmitter molecules which could be emitted in the vicinity of other neurons’ dendrites, for example). If reductionism is correct shouldn’t this lead to other neighboring neurons behaving the same way they would have if the neuron hadn’t been replaced by the artificial device?
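
            (A toy illustration of this functional-substitution claim, with a deliberately trivial threshold “neuron” – the functions and names here are invented for illustration and are not meant as a model of real neurons:)

              from itertools import product

              def biological_neuron(inputs):
                  # Fires (1) iff at least two incoming spikes arrive.
                  return 1 if sum(inputs) >= 2 else 0

              def artificial_device(inputs):
                  # A different mechanism computing the same input/output map.
                  return int(inputs.count(1) >= 2)

              def downstream(output):
                  # The rest of the "brain" sees only the output.
                  return "fire" if output else "rest"

              # No input pattern distinguishes the two implementations, so
              # nothing downstream can behave differently after the swap.
              for spikes in product([0, 1], repeat=3):
                  assert biological_neuron(spikes) == artificial_device(spikes)
                  assert downstream(biological_neuron(spikes)) == downstream(artificial_device(spikes))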

          • Tsnom Eroc says:

            >hypnosifl

            Oh dear, I gave a fairly long reply and it didn’t go through due to email issues. I will type it up later again.

          • Tsnom Eroc says:

            Ok. I will critique the experiment a bit.

            The problem with this thought experiment is that it innately involves what is still the hard problem of existence, namely experiencing qualia, or subjective experience. The “I feel and think, therefore I am” of life.

            In the paper you gave me, there are too many examples of trying to reason out the truths of nature by thought without experiment. It’s hand-waved that a being intermediate between the philosophical robot and a real person can’t experience something fundamentally different from normal human existence. He hand-waves the dancing qualia issue, where I don’t view it as absurd that the rest of the brain somehow remembers and experiences something that feels totally different from normal with most of it replaced. We don’t have any possible way of measuring such vast quantum states.

            “But if you can simulate a single neuron to arbitrary accuracy, you can use the simulation to determine all the physical “outputs” that would emerge from the spatial region the neuron is located given the “inputs” crossing into that region”

            One issue is that this isn’t a simulation. This is replacing matter that somehow experiences consciousness and emotion with matter that, in the thought experiment, does not. A major problem with this is that, well, it suffers from the weakness of many philosophical thought experiments: there are no actual physical experiments to go along with it.

            Considering that we don’t know how consciousness and subjective experience are created through the strange interactions of 3D space-time, I don’t think we can say that replacing half the brain with an entirely different substance won’t fundamentally change the experience, and change behavior.

            A problem arises with phantom limbs. Where is the limb actually experienced? In what part of the brain, and how is that subjectivity experienced? Perhaps if a certain part of the brain is replaced, with the rest of it intact, all experience ceases (like the soul being in the pineal gland). Or perhaps it’s fundamentally impossible to replace the skin and nerves in the arm accurately. Namely, some aspects of consciousness, like emotion, are truly spread across space, and are material-dependent at the smallest level.

            ” Some kind of life-force or holistic “self” that can influence all the individual atoms in a top-down way, causing them to veer off course relative to what quantum field theory would predict?”

            Well, yes. The emotion of a person changes everything a person does. I don’t believe it would violate any physical laws, because that’s tautologically impossible. *

            Like, if replacing certain matter with other types of matter changed a subjective experience from negative to positive or vice versa, I believe all the laws of thermodynamics would still be intact, and the way the electrons in the body react to the magnetic field would be the same. However, the subjective experience changes everything the creature does.

            We don’t really understand how all this quantum stuff interacts, or how giant systems cascade from minor changes, à la chaos theory.

            I guess if the silicon chips were somehow altered to have the exact same quantum properties internally (whatever that would mean) as the neurons you are replacing, then the behavior of the system as a whole would not change. However, in that case you might as well simply call it a neuron 2.0. Or neuron type A, version 2.

            Also, it seems that the universe *does* play dice, and there’s a lot of randomness involved. Determinism isn’t quite true, by the looks of it. And it’s hard to see whether these dice can be weighted one way or another. There might be effectively weighted random number generators in the brain that influence our behavior on the macro scale. Humans might actually be fundamentally unpredictable.

            I need to read “The Emperor’s New Mind” by Penrose. He has probably put what I am thinking about better.

            *Though, who knows. This universe seems to work logically; other possible ones we can’t experience…

          • hypnosifl says:

            “One issue is that this isn’t a simulation. This is replacing matter that somehow experiences consciousness and emotion with matter that, in the thought experiment, does not”

            Keep in mind that in my last comment I was responding specifically to your statement that “well, if there is a physical property of emotion, then this experiment might really be impossible in some sense. Behavioral changes may occur.” It was only this that I wanted to dispute, so my intention was to focus exclusively on the behavioral issue, and only get back to the issue of consciousness if I could first convince you that, in principle, reductionist laws of nature that can be computed to arbitrary accuracy imply that one should always in principle be able to gradually replace neurons with artificial substitutes, in such a way that there’d never be any changes in behavior relative to the behavior you would have seen if you’d left the brain alone (leaving aside differences in sensory input and purely random differences due to quantum physics). If you don’t agree with this, there’s no point in discussing the merits of Chalmers’ dancing/fading qualia argument, since he just takes it as a starting premise that the neuron-replacements would be functionally equivalent and produce no changes in behavior (such as puzzled comments by the person that their experiences were fading or changing).

            Are you unconvinced by the argument in my previous comment about reductionism implying you could predict all the outputs from a region of space given the inputs crossing into it (particles and EM waves for example), and thus in principle it should be possible to build a device that would simulate a region containing a neuron and create all the same outputs, even though the neuron had been removed and the device might be made of non-organic matter? If you are unconvinced, can you address specifically why you don’t think this is correct as a claim purely about behavior in a universe governed by reductionist laws, temporarily leaving aside all non-behavioral aspects of consciousness (subjective feelings and qualia and so forth)?

            “Well, yes. The emotion of a person changes everything a person does. I don’t believe it would violate any physical laws, becuase that’s tautologically impossible.”

            I suppose you can define “physical laws” broadly enough that any physical event must be obeying physical laws just by definition, but my argument above, about replacing a neuron with an artificial device that simulates its behavior and uses that to choose the correct physical outputs, was based on the assumption of reductionist physical laws (and I should add I’m also assuming reductionist laws that are “local” in the sense that the physical behavior of matter/energy within a specific volume of space can be predicted based on the initial state inside that volume along with information about any matter/energy crossing the boundaries of the volume inward). Earlier you said “I believe physical reductionism holds”, but do you mean the same thing I do by that? Namely, that the behavior of any composite system can be predicted based on knowledge of the initial state of all the fundamental units (particles, strings, quantum fields, whatever) that make it up, along with some set of basic physical laws governing interaction between these fundamental units? That there are no high-level, top-down style laws that kick in when you have certain complex arrangements of matter, which can’t be derived from the fundamental quantum laws governing basic interactions? No special “holistic” laws for macro objects, in other words?

          • Tsnom Eroc says:

            “If you are unconvinced, can you address specifically why you don’t think this is correct as a claim purely about behavior in a universe governed by reductionist laws, temporarily leaving aside all non-behavioral aspects of consciousness”

            I might be straying from the thought-experiment portions too much right now. But such a device may not be buildable.

            Hmm… maybe… for this universe it could be impossible. For example, I will bring up decimal places. Are all the decimal places random, or where are the patterns in the decimal places, and how do they interact in large numbers? In absurdly large systems, does a partly random, partly determined decimal digit 50 places to the right of what we can map accurately in today’s world matter?

            Or, does the strange quantum entangling involved in the interactions of protons, neutrons, and electrons (which also don’t exist except as some weird math wave that may be connected to *everything* else) make some simulations impossible without having the exact original material?

            “Namely, that the behavior of any composite system can be predicted based on knowledge of the initial state of all the fundamental units ”

            That’s actually not true, at least on the small scale. On the larger scale, I think it’s a battle.

            What wins? The evening-out of randomness on the macro scale, or chaos theory acting on the random decimals we can’t observe (and every electron is random)?

            If we are talking about the micro scale, it’s been shown experimentally that God plays dice, and that dice is, to us, truly random.

            **********************************

            Other semi-related topic and question.

            Food for thought: do electrons in a negative electrical field that hit a wall and have no place to go feel pain, or something similar to it? Or is pain somehow generated? It’s an interesting implication of panpsychism. Or is there reason to believe that emotion, if it is a fundamental part of reality, is a conserved quantity? All complex life on earth needs to *recharge* somehow (what really happens during sleep?), and the conservation of every other known force seems to imply that could be the case.

            The question isn’t that stupid.

          • Anonymous says:

            a thought-experiment in which neurons in our brain are gradually replaced by functionally equivalent systems with the same input/output relationship, and thinking about what might happen to our inner experience in this case… imply that the laws governing consciousness are sort of contrived and inelegant, which gives at least some reason to doubt that it would work this way.

            I’d like to add to the thought experiment.

            After we’ve gradually replaced the neurons in our brain by functionally equivalent components simulated in silicon, it’s rather trivial to begin adding a time-delay. We can pause the simulation briefly. Maybe we want to add a component that simulates a bit slower. That’s alright, we just pause the faster simulation occasionally, let the slower component catch up, and then transfer the I/O between components.

            Since each of these simulation components is just following deterministic mathematical rules, we can perform the computation using all kinds of slower tools – instead of a supercomputer, we can use a desktop computer; instead of a desktop computer, we can use a scientific calculator; instead of a scientific calculator, we can have a group of mathematicians crank out line after line by hand (mutually checking to ensure accuracy). Gradually, we pause the faster simulations more and more, so that we can replace components with slower and slower components.

            I’m not sure if it’s a dystopian future or a utopian future, but suppose that we educate the entire world population sufficiently in mathematics to be capable of such calculations. Through the magic of parallelism, we have billions of “computational threads” composed of people putting billions of pens to tens of billions of pieces of paper. At appropriate timesteps, they transfer their relevant outputs to the people who need them as input (maybe through the internet, maybe through snail mail). At the end, you have a computational system with all of the same functional behavior, just running much slower.
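
            (To make the pausing step concrete, a minimal sketch assuming a purely deterministic update rule – the `step` function and its state are invented for illustration:)

              def step(state):
                  # One deterministic timestep of the "simulation".
                  return (state * 1103515245 + 12345) % 2**31

              def run(state, n):
                  for _ in range(n):
                      state = step(state)
                  return state

              # 1000 steps in one go, or in arbitrary paused chunks (run
              # days apart, on different machines), give the same final state.
              s = 42
              assert run(s, 1000) == run(run(run(s, 300), 500), 200)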

            At some point, knowledge of how to perform the computation for each component is somewhat Balkanized. Each component is ‘owned’ by a group of researchers, and most groups don’t really possess the expertise to perform the computation for the other groups. One group says, “Hey, we love the project and want to keep going, but there is a research opportunity on Alpha Centauri right now that is just too good to pass up. I know you’re all waiting for our latest timestep, but perhaps everyone could hold onto their calculations for a bit?” They all agree. For the next thousand years, each generation meticulously preserves the data and the expertise to continue the calculation while waiting for the descendants of that one group to return from Alpha Centauri. When they do, the simulation (and its consciousness) picks right back up where it left off.

            This is not logically impossible, but it would imply that the laws governing consciousness are sort of contrived and inelegant, which gives at least some reason to doubt that it would work this way. It’s enough to make you wonder if the United States is already conscious.

            As an aside, my degree is in dynamical systems theory, and Chalmers gets his nonlinear dynamics wrong. The 20th century was a good one for bifurcation theory, but things like the isolation of the USSR helped insulate the general public from many of the results (and just general time delay from research to broader consciousness (heh)). There are very simple systems that are not “structurally stable”. That is, an infinitesimal change in a parameter produces a qualitatively different system with no intermediate states. See saddle-node bifurcations for an obscenely simple example.
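
            (For readers who haven’t met it: the saddle-node normal form below is standard textbook bifurcation theory, stated here as background rather than anything from Chalmers or the comment above.)

              \[
              \dot{x} \;=\; r + x^{2}, \qquad x^{*} = \pm\sqrt{-r} \quad (r < 0)
              \]

            For r < 0 there are two fixed points (one stable, one unstable); at r = 0 they collide; for r > 0 there are none. An arbitrarily small change in r through 0 thus produces a qualitatively different system, with no intermediate state – exactly the “structural instability” referred to above.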

          • Adrian says:

            Anonymous

            That’s actually a really good counter-argument to the “Replace neurons step-by-step with silicon” thought experiment. We could even go a step further: Replace the mathematicians with unlearned temps who follow a written set of rules and compute the next step in the simulation using a hand-held calculator (à la Chinese Room experiment). Now in addition to not having an overview of the global state, they don’t even know what they’re doing – yet the behavior is identical to a silicon-based simulation of a human brain (just slower). So where is the consciousness, which exists in the biological brain, happening now?

            The idea that consciousness can be derived from the currently known physical laws reminds me of the classical element theory, where everything could be derived from Earth, Air, Fire, and Water. I’m sure that in 200 BC, many educated people would have argued that lightning can be derived from those elements (probably Fire, maybe some Air) if only they could make sufficiently strong Fire, or control it with high accuracy. Well, they would have been very wrong, and yet people insist that we can derive qualia and consciousness from known quantum mechanical rules, without the hint of an explanation how that could work (which is funny, because we can’t even derive General Relativity from Quantum Mechanics, even though G.R. is much better understood than consciousness).

            By the way, accepting that Quantum Mechanics can’t explain consciousness does not require you to give up reductionism; it only requires you to recognize that there are still some physical processes of which we have no idea.

            Note: I know that this analogy is not perfect. In particular, Quantum Mechanics is much closer to the truth than the classical elements, which didn’t have much predictive power. Still, I think it captures the fallacy of insisting that we know all the basics and that everything else follows from what we already know.

          • Tsnom Eroc says:

            >Adrian

            The second paragraph in there really puts into good words what a part of me was inkling towards.

        • It will cost a lot less to maintain an em than it currently does to maintain an Alzheimer’s patient in a nursing home.

          • Steve Sailer says:

            Dying is expensive, but being dead is free (at present), which can be a comforting thought.

            The idea that I would have to blog extra hard now to make enough money to set aside enough in my will to pay forever for upkeep on my em so my iSteve em can keep blogging for eternity just seems kind of exhausting.

            Perhaps after I’m dead the world will have to get by without my nagging? I can live with that.

          • I am the Tarpitz says:

            I work all day, and get half-drunk at night.
            Waking at four to soundless dark, I stare.
            In time the curtain-edges will grow light.
            Till then I see what’s really always there:
            Unresting death, a whole day nearer now,
            Making all thought impossible but how
            And where and when I shall myself die.
            Arid interrogation: yet the dread
            Of dying, and being dead,
            Flashes afresh to hold and horrify.
            The mind blanks at the glare. Not in remorse
            – The good not done, the love not given, time
            Torn off unused – nor wretchedly because
            An only life can take so long to climb
            Clear of its wrong beginnings, and may never;
            But at the total emptiness for ever,
            The sure extinction that we travel to
            And shall be lost in always. Not to be here,
            Not to be anywhere,
            And soon; nothing more terrible, nothing more true.

            This is a special way of being afraid
            No trick dispels. Religion used to try,
            That vast, moth-eaten musical brocade
            Created to pretend we never die,
            And specious stuff that says No rational being
            Can fear a thing it will not feel, not seeing
            That this is what we fear – no sight, no sound,
            No touch or taste or smell, nothing to think with,
            Nothing to love or link with,
            The anaesthetic from which none come round.

            And so it stays just on the edge of vision,
            A small, unfocused blur, a standing chill
            That slows each impulse down to indecision.
            Most things may never happen: this one will,
            And realisation of it rages out
            In furnace-fear when we are caught without
            People or drink. Courage is no good:
            It means not scaring others. Being brave
            Lets no one off the grave.
            Death is no different whined at than withstood.

            Slowly light strengthens, and the room takes shape.
            It stands plain as a wardrobe, what we know,
            Have always known, know that we can’t escape,
            Yet can’t accept. One side will have to go.
            Meanwhile telephones crouch, getting ready to ring
            In locked-up offices, and all the uncaring
            Intricate rented world begins to rouse.
            The sky is white as clay, with no sun.
            Work has to be done.
            Postmen like doctors go from house to house.

            I think there are basically people who feel like this and people who don’t. Not feeling it is probably healthier. I very much do.

        • Thursday says:

          But the philosophical questions are really interesting. The Thomists argue that the mind simply cannot be wholly material, and their arguments are actually very impressive.

          If they’re right, then Ems are simply a non-starter.

          Nonetheless, Robin Hanson is one of the few genuinely interesting thinkers out there. And, as our host says, completely unique.

        • Simon says:

          In something of the same vein, I have wondered: who is going to want to bother reviving the corpsicles?

          • Doctor Mist says:

            In something of the same vein, I have wondered: who is going to want to bother reviving the corpsicles?

            A fair bit of the price of cryonic suspension (at least at Alcor) is to support an organization whose raison d’etre is to revive you when it is possible, and maintain you until then.

          • Simon says:

            As the economy grows, the proportion of total wealth that must be controlled by people who care in order for revival to happen, and the proportion of their wealth that they need to spend on it, becomes smaller and smaller. So in any scenario with vast economic growth I would expect this not to be an issue, as long as there is a reasonably wide distribution of wealth in terms of absolute numbers and diversity of values among the people holding it. (Relative inequality can still be high, as long as the wealth holders include some people who care about reviving cryonics patients.) And since cryonics revival requires significant technological advances anyway (one of the main posited mechanisms in fact being uploading), such economic growth is probably to be expected by the time the technology for revival exists.

            Sufficient diversity of the wealth holding could be another issue though – e.g. if a paperclip maximizer takes over, don’t expect revival.

        • PDV says:

          > I assume Robin’s goal is immortality through uploading his brain (presumably before he has his head frozen)

          No, after. The endgame for cryonics is getting a detailed scan taken of the brain long after death, once the technology has caught up and can capture fine detail.

      • hypnosifl says:

        But are you saying “problems with the methylation system” have effects that are independent of their effects on the development of synaptic structure (especially in the fetal and childhood stages, but also in changes to the structure in adults) and on the concentrations of neurotransmitters at the synapses, and how these concentrations affect the likelihood a nerve impulse will be generated in a given neuron given the incoming impulses from its neighbors? Suppose we could take a healthy non-autistic adult and tweak his methylation system without changing the physical layout of synapses or the neurotransmitter levels or their effects on the probability that nerve impulses will be generated in each neuron given its inputs. Do you think he would develop autism, or any noticeable behavioral change at all?

  35. Writtenblade says:

    If I have a thousand wildly divergent candidate models for a process, and no way to distinguish between them, I have no understanding of that process regardless of whether one of the models happens to be right.

    If we want to establish by precedent that complex prediction is usefully possible, it’s not enough to show that successful forecasts have ever been made; we need to show that people were able to tell at the time which forecasts would turn out to be the successful ones. Using hindsight to cherry-pick examples like the Watkins article wouldn’t solve this problem whether it scored 8/28 or 20/28.

    • Scott Alexander says:

      If Watkins got 20/28 while everyone else got an accuracy of only a few percent, that sounds like there is an ability to predict the future, it’s rare, and Watkins has it.

      (of course we’d want to do appropriate statistics to make sure Watkins wasn’t just the high end of a distribution of people with random success rates)
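
      A minimal sketch of that check, with made-up numbers: the 5% null hit rate below stands in for “an accuracy of only a few percent”, and the pool size is hypothetical.

        # If forecasters hit at random with some base rate, how surprising is 20/28?
        from scipy.stats import binom

        n, k = 28, 20      # predictions made, predictions judged correct
        p_null = 0.05      # hypothetical per-prediction chance hit rate

        # One-sided tail: probability of k or more hits by luck alone
        p_value = binom.sf(k - 1, n, p_null)
        print(f"P(>= {k}/{n} hits by luck) = {p_value:.2e}")

        # Selection effect: a union bound on the chance that *someone* in a
        # large pool of forecasters compiles such a record purely by luck
        pool = 1000        # hypothetical number of forecasters being compared
        p_anyone = min(1.0, pool * p_value)
        print(f"P(at least one such record in a pool of {pool}) <= {p_anyone:.2e}")

      Even after the whole-pool correction, 20/28 against a few-percent base rate is nowhere near explainable as luck.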

      • Steve Sailer says:

        I think you could reverse-engineer what Watkins got right and got wrong for pointers.

        For example, predicting economies of scale is tricky. He makes a good prediction that the kitchens of restaurants in 2000 would seem like laboratories to people of 1900, which is about right. But then he implies that nobody in 2000 would shop for food or cook at home, because restaurants, especially ones delivering to homes via automobile (good) or pneumatic tube (bad), would be so much more efficient. That’s not terribly far off — an awful lot of what I “cook” at home is just reheating precooked meals I bought at the store. But it’s not quite right. It turned out that the economies of scale are such that I can afford much of what a restaurant has in the way of cooking apparatus in my own kitchen.

        So one lesson I would take away from reading Watkins’ predictions is that it’s hard to get exactly right how the supply chain will divvy up the work.

        Supply chains are still vulnerable to reorganization, so what seems standard today may be all different in a few decades.

        • PDV says:

          Our tax structure also subsidizes home kitchens and penalizes restaurants that focus on fast, convenient takeout.

      • Simon says:

        So how wide is the range allowed to be for a prediction to count as a hit, anyway? If someone makes a prediction but is x years early or late, it seems a bit like drawing a target around the shots after firing them.

  36. Tsnom Eroc says:

    “Second, they can run at different speeds. While a normal human brain is stuck running at the speed that physics allow, a computer simulating a brain can simulate it faster or slower depending on preference and hardware availability. With enough parallel hardware, an em could experience a subjective century in an objective week. Alternatively, if an em wanted to save hardware it could process all its mental operations very slowly and experience only a subjective week every objective century.”

    So it seems there is an argument that this will be possible sometime in the not-too-distant future, rather than 1,000 years from now.

    Does this give credence to the idea that we are already in some strange eternal-life code that simulates typical human experiences? If this is possible, it makes a good deal of sense that we are already in that type of loop, now doesn’t it?
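
    (A quick back-of-the-envelope on the speedup the quoted passage implies; nothing here goes beyond the quote itself.)

      # Speedup implied by "a subjective century in an objective week"
      SECONDS_PER_WEEK = 7 * 24 * 3600
      SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600

      fast = SECONDS_PER_CENTURY / SECONDS_PER_WEEK   # ~5,218x real time
      print(f"fast em: {fast:,.0f}x real time; slow em: 1/{fast:,.0f}x")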

    • ii says:

      from the perspective of someone who’s maintaining a server farm with billions of these loops? sure, must make for nice water cooler conversation

      for us? occam or solomonoff, pick your poison: P(universe exists ∧ universe created by specific means) < P(universe exists)
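
      (in standard notation the line above is just the conjunction rule, with A = “the universe exists” and B = “it was created by these specific means”:

        $$P(A \wedge B) = P(A)\,P(B \mid A) \le P(A)$$

      with strict inequality whenever P(B | A) < 1)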

      you're never going to have valid information unless you've got stats on how many aliens are running earth because they enjoy Roy

      • Tsnom Eroc says:

        Meh. The fact that so many people think a singularity-type event in the next 60 years is possible gives some credence to that idea.

        I think the programmers should have done a better job. A common complaint on reddit.com/r/outside.

    • I am the Tarpitz says:

      Given how atypical of (or sometimes even unrelated to) human experience the worlds we simulate now are, I rather doubt that you have to go very many turtles down before you run into something very different indeed, probably to the point of not being recognisably a turtle, or even comprehensible at all.

      Get out of the car.

  37. hypnosifl says:

    “And he is right to make it his example of accurate futurology, because everything else is even worse.”

    I think there are some examples better than Watkins’–for example, Jules Verne’s story Paris in the Twentieth Century, written in 1863 and unpublished until 1994, seems to get quite a lot right. See the list here for example:

    the book’s scorecard of seemingly bang-on elements of the then-future include the explosion of suburban living and shopping and large-scale higher education; career women; synthesizer-driven electronic music and a recording industry to sell it; ever more advanced forms of ever cruder entertainment; cities of elevator-equipped, automatically surveilled skyscrapers electrically illuminated all night long; gas-powered cars, the roads they drive on, and the stations where they fill up; subways, magnetically-propelled trains, and other forms of rapid transit; fax machines as well as a very basic internet-like communication system; the electric chair; and weapons of war too dangerous to use.

    Another type of prediction is when an author teases out implications of a single good idea, like Vannevar Bush’s 1945 essay As We May Think, which considers the implications of home computing devices (though he imagined analog computers which stored records on microfilm, rather than digital computers) with the capacity to create and store associative trails between documents, similar to modern hyperlinks. Though the technical details of his imagined computers may be off, his broad picture of how such hyperlinked databases could be used is very prescient, with ideas resembling search engines and Wikipedia, along with a desktop interface with viewing screen and keyboard. (Most of the good stuff is towards the end of the article, in sections 6 and 7, if anyone wants to check it out without reading all the initial material describing somewhat outdated analog systems for computing, image capture, and the like.)

    • MugaSofer says:

      Jules Verne doesn’t count, he’s actually a time traveller.

    • Simon says:

      Given a pretty densely packed field of all sorts of experts predicting all sorts of things over an extended period, a few of them are bound, in hindsight, to appear to have astonishing abilities. Once again it feels like drawing the target after the shots are fired.

      • Net says:

        Maybe someone should do data analysis of what personal characteristics have predicted accurate forecasting in the past, so we can figure out what forecasters to listen to in the present.

  38. Tsnom Eroc says:

    Ok, I read the review.

    The book seems really… weird. He seems to be arguing that we will have technology that plenty of thinkers believe will lead to superintelligences and nigh-godlike (Greek-style gods) entities, and yet ems who still have recognizable labor and vacation debates exist anyway.

    To be honest, The Age of Em, with just a minor modification, reads like a bad sci-fi anime that can’t quite decide whether to be a utopia or a dystopia, and that tries going far into future-tech while still wanting a relatable protagonist.

  39. blacktrance says:

    First, the idea of “the economy” needing laborers, consumers, and investors is only a metaphor, and it’s being stretched too far in this post. The economy is laborers, consumers, investors, and owners, who all need each other. For the sake of argument, let’s suppose that you can optimize out the laborers, and algorithmic investment replaces investors (though they’re really a kind of laborer in this context). But there’s no getting rid of owners, for whom all the ems are ultimately working. The owners are also consumers, a role that can’t be optimized out because they’re the source of the company’s money, aside from owners/investors who eventually expect a return. At one end of the spectrum, we’d have a world with a population of relatively few immensely wealthy humans who own all the capital and trade with each other, usually through the intermediary of their companies, and everything is optimized towards satisfying these owners’ desires. At the other end of the spectrum, ownership is broadly distributed and many people have a stream of income from their stakes in various companies, which they use for consumption. There could even be a basic income funded by taxing the wealthier capital owners (not my preferred solution, but hardly dystopian).

    Second, if ems can be tweaked to focus on their task and get a reward from it, continued optimization and pressure for efficiency seem likely to produce ems that don’t experience anything, at which point it’s not dystopian, because they become tools, and it makes no more sense to shed a tear for them than for today’s web crawlers.

    • MugaSofer says:

      You say that these roles can’t be optimized out.

      But imagine a society of crystalline purple aliens that desire nothing but more money, power and influence to control (in a convergent-AI-goals sort of way). This society still has consumers, investors, and laborers; the consumers need tools and alien food, the investors want as much money as possible, and the laborers want to earn more money in the vague hope of acquiring enough to become investors. But it doesn’t have any vacations or art, and any children are immediately sold to someone who needs laborers in exchange for money.

      Is the em-aliens’ economy less productive than ours? Is it missing some essential economic role? I think not. Indeed, it’s quite possible that they could build a military and bomb us all to smithereens for our sweet, sweet resources, which can be made into more mining machines.

      • blacktrance says:

        Basically, if the ems decided that they’d rather reinvest their resources than pay them out to their owners, and eliminated the owners to remove any obstacles? Hanson assumes that rule of law will be preserved, and Scott’s scenario doesn’t assume any violations – though they’re certainly possible, in which case this becomes quite similar to the Friendly AI problem. But as long as the ems don’t kill/imprison/etc anyone, the human owners/consumers aren’t going anywhere.

      • GregvP says:

        But what do ems want? (Supply equals demand. Producers who produce things that nobody wants don’t do that for long.) Ems only demand the use of computing resources, which in the end boils down to electric current.

        The endpoint of Hanson’s em-world is a set of sun-centered Dyson shells which convert all of the sun’s output to electricity.

    • Agreed. I basically said the same thing further down the thread without having come across this one.

  40. onyomi says:

    Has anybody mentioned the obvious Ship of Theseus/Transporter problem with uploading in general? A copy of me is not me.

    If I made an exact copy of my entire physical body as it exists right now I would not expect to experience life from that person’s perspective.

    “But your experiences immediately diverge since you don’t occupy the same space” you might say. Okay, destroy me and replace me with my copy who doesn’t know the original was destroyed. In this scenario maybe no one but me could tell the difference, but I still wouldn’t be experiencing life as that person. I’d be experiencing being dead, while an exact copy of me would be experiencing being me.

    I understand the argument that if I were gradually replaced, cell by cell, I might not notice, as with the ship analogy, as well as the idea that, in a sense, “yesterday me” is already dead. I understand the idea that maybe “me” is a configuration and not a specific set of atoms–though most neurons live a very long time.

    Still, what if you copied me and a few days later I got hit by a bus? Would my subjective experience not be exactly the same as if I had died without leaving a copy? Do identical twins “sort of” remain alive if only one of them dies?

    • gwern says:

      Whether the copy is you is irrelevant for an em economy, and most of the uploads will be of people who believe a copy is you, making it further irrelevant. This was discussed in the post.

      • onyomi says:

        Yes, I saw that. Though that is assuming you find a reasonable number who think that way.

        And yes, it is sort of a tangent, but this seems to me a huge ethical and practical problem with all this.

        • gattsuru says:

          One thing I think Hanson overlooks is the actions of those who not only would not become ems themselves, but are horrified by an event path likely to generate and destroy a million ems. You’d need not only to find a colony of volunteers, but also to be able to scale them up before the horrified villagers come hobbling up the mountain with pitchforks and torches. Or lob nuclear weapons at countries that allow it.

          That might seem horrific, but from a shut-up-and-multiply ethic, if you think deleting an em is murder, a civilization intending to normalize the creation and destruction of ems makes Hitler, Stalin, and Pol Pot look like amateurs. (Hell, even if you don’t consider ems people but just think they’re likely to replace you, it’s still all of the future.)

          • onyomi says:

            Yeah, it seems like if even 10% of people share the intuition that deleting an em is like murder (and I expect it would be much higher), then the level of protest you would get about it would dwarf fracking, abortion clinics, and Black Lives Matter combined, by orders of magnitude.

          • tanadrin says:

            The radical fringe of the anti-abortion movement seems to often make the cold-blooded calculation that murder is necessary to prevent what is, in their view, the murder of even more innocents. And this over creatures which do not yet think, or speak, or vote.

            On that basis, if ems were actually invented and their casual deletion legalized, I fully expect that terrorism aimed at preventing or discouraging it would occur. John Brown was merely fighting against slavery. Imagine how upset people would get over mass murder.

          • Lots of people currently think that abortion is murder, and yet we still have lots of legal abortions.

    • Tsnom Eroc says:

      I think these problems are close to impossible to answer well.

      Say, for instance, every time you go to sleep you get cloned. All your brain gets uploaded into the clone, and your original body gets destroyed. The people doing this are clever and cautious, so no one ever gets caught.

      Each morning, the *you* people talk to would totally believe that there is only one you. Maybe it has already happened, onyomi number 54367.

      In a way, are you at time X just a creature with a very similar chemical structure to the creature at X − ε a moment earlier? That version of you is dead forever, merely replaced with an impostor that has a similar chemical state!

    • hypnosifl says:

      Do you think there is an objective answer to the question of whether an upload (or a version of you created by a Trek-style transporter that recreated your pattern with new atoms) would really be a continuation of your present consciousness, or if its memories of having been you would be memories of something that happened to someone else? Or do you think this is just a matter of definition, how we choose to define identity-over-time in a linguistic (or legal) sense? (presumably most modern scientifically-minded people would say that in the case of the Ship of Theseus, it’s just a matter of definition whether it’s the “same ship”) It seems to me that an advocate of materialism/physicalism (which I take to mean the philosophical view that all objective truths are physical truths) would have to take the latter position, while the former position seems to imply some objective truths about first-person consciousness (as opposed to ‘consciousness’ in the purely behavioral sense) which are not completely reducible to purely physical truths about arrangements of matter/energy.

      If you do take the former position, it seems plausible to me that there would be “laws” governing consciousness, of the type that have been postulated by philosophers of mind like David Chalmers–in that case, the laws might give some objective answer to how continuity of consciousness over time works (or they might not; maybe there are only objective truths about individual observer-moments, not about strings of them). If so, I don’t know why you would be confident that the upload or transporter duplicate wouldn’t be a genuine continuation of your present consciousness–it seems perfectly possible that the laws of consciousness would say that informational continuity, not material continuity, is what’s important in determining continuity of consciousness. And if material continuity were critical, wouldn’t that imply that I am not a continuation of the consciousness of my brain of ten years ago, given that nearly all the atoms of my brain have been replaced by new ones taken in through eating, drinking and breathing? (In fact, some research suggests the time for full replacement of atoms in the brain is very short, something like two months or so.)

      Related: this comic from existentialcomics.com.

      • onyomi says:

        See, I think there’s a problem with the “your body flushes out atoms all the time” part. It sounds like a good, glib way of dismissing those who believe in some kind of permanence of individual identity, but most of the neurons in your brain, if I understand correctly, are the same ones you had many years ago. And while those cells surely replace H2O and other molecules, I don’t see why the atoms making up the actual structures of the neuron would be replaced (they take in and release water, nutrients, and waste products, but some atoms would stay the same).

        • Anonymous says:

          I don’t see why the atoms making up the actual structures of the neuron would be replaced

          Suppose further investigation revealed that these atoms were in fact being replaced all the time. Would your feelings on the matter change?

        • hypnosifl says:

          I think you’re wrong on the science here, since all proteins and other complex molecules tend to degrade over time and have to be either replaced or repaired. (I know DNA gets repaired, not sure about other molecular ‘machinery’, but in any case various forms of DNA repair involve putting new molecules in place of ones that have broken off, for example the wiki article says “When only one of the two strands of a double helix has a defect, the other strand can be used as a template to guide the correction of the damaged strand. In order to repair damage to one of the two paired molecules of DNA, there exist a number of excision repair mechanisms that remove the damaged nucleotide and replace it with an undamaged nucleotide complementary to that found in the undamaged DNA strand.”) And as Anonymous asked, I wonder if that is your true rejection.

          But aside from those issues, you focused on that one narrow point without telling me whether you agreed or disagreed with my larger point, namely, that to say there is even an objective answer to whether the upload or transporter version is still “you” is to reject materialism/physicalism. That’s not necessarily a bad thing, I think the philosophy of David Chalmers has a lot going for it, but I’m curious about whether you accept this broad point and endorse the Chalmers-style view that there are truths about consciousness over and above the complete set of physical truths. And if there are laws governing aspects of consciousness such as the continuity of conscious identity over time, then if we are clear that these laws are distinct from physical laws, it’s hard to see why the physical issue of material continuity should be all-important–it seems at least as plausible that the laws of consciousness would say that continuity of identity is determined by some kind of informational continuity.

          • onyomi says:

            I actually don’t have a good answer of my own to the transporter question because I have strongly conflicting intuitions on both sides, though I am interested in arguments about either.

            Re. sleeping, etc. Why do we even need to get obvious breaks in consciousness involved? Aren’t we, by this logic, a different person each time any atom in our body changes position?

            On some level I understand the idea of thinking of a person as a river: not a collection of any specific water molecules, but an arrangement through which they flow.

            In favor of this view is the fact that I don’t think it would bother you if, for example, I could grow a perfect replica of your arm in a vat, cut off your arm, and replace it with the new one. I predict it would take a while for the old nerves to connect to the new ones in a way that felt comfortable, but assuming the microsurgery connecting the nerves has been done flawlessly, I predict you’d eventually feel it was your arm.

            That said, I’m not so sure about the brain. If we could grow a replica of your brain in its current state in a jar, remove your brain from your skull, destroy it, and perfectly attach the jar brain, well, then, from the perspective of, say, your non-independently-conscious spinal cord, this might be just as good; yet a strong intuition tells me that “you” are dead at this point.

            I guess it’s really just a matter of the “hard problem” of consciousness as well as, I think, our less than perfect subjective/scientific grasp of time. Without better answers about those questions, the answers about the transporter probably won’t fully satisfy me.

    • Anonymous says:

      Still, what if you copied me and a few days later I got hit by a bus? Would my subjective experience not be exactly the same as if I had died without leaving a copy? Do identical twins “sort of” remain alive if only one of them dies?

      1. I find that questions about “expected subjective experience” never lead to anything useful. Even if we could do the experiment, we couldn’t answer questions like “What should I expect to experience if I step through a Star Trek teleporter?” We can only usefully ask questions like “What memories and personality will the person who steps out of the Star Trek teleporter have?”

      2. I’d say there’s a continuum between the following scenarios

      – Bob lives life normally
      – One day, while Bob is asleep, Bob is copied and the original immediately destroyed while it’s still unconscious.
      – One day, while Bob is asleep, Bob is copied. Both versions are woken up, then the original is destroyed after 5 minutes of consciousness.
      – One day, Bob is copied. A few days later the original is hit by a bus.
      – One day, Bob is copied. A few years later the original is hit by a bus.

      3. There aren’t any fundamental facts about personal identity. Personal identity is a concept we find useful when discussing our preferences–for example, when we say things like “I want to be alive ten years from now.” Implicit in this preference is some notion of personal identity which determines which world-states ten years from now really contain “me.”

      In a world where people can be copied we have to think about what preferences we have. Do you want to step through a Star Trek teleporter? Well, if your preference is that a certain set of atoms continue to exist, then no. If your preference is that a being with a certain personality and memories exist, then it’s fine.

      You’re allowed to be attached to your atoms, but personally I’m not. I care about the memories and personality.

      And as Yudkowsky has pointed out, caring about your particular atoms is probably not consistent with our understanding of quantum mechanics, in which atoms themselves do not have individual identities. (Yudkowsky gets some flak for perhaps-overconfident statements about QM, but he is absolutely right about this.)

      If your preference is that a certain “personal identity” continue uninterrupted, then you are going to become confused because there are no fundamental facts about personal identity that will help you decide whether to step through a Star Trek teleporter. You need to figure out how to re-express your preferences in terms of things like atoms, or memories, or personality, about which we can actually answer questions.

      • Eli says:

        There aren’t any fundamental facts about personal identity.

        If you’re going to claim that causal-role concepts aren’t real and can’t be naturalized, you’re going to have to throw out most of your everyday reasoning. Congratulations, you’ve built a worldview in which there are no such things as chairs.

        • Anonymous says:

          There aren’t any fundamental facts about chairs either. No experiment will tell you whether an object is or isn’t a true chair. That doesn’t mean that you can’t talk about chairs. We get along fine talking about chairs, most of the time. But if you get confused about chairs–say you can’t decide whether or not a certain object is a “true chair”–then you had better be able to express what you actually want without using the word “chair.” Same thing for personal identity.

          • Eli says:

            Why do you think we can’t dissolve the word “chair” and get at the causal-role concept underneath (“something reasonably comfortable to sit on”)?

          • Anonymous says:

            That’s exactly what I’m suggesting! If you get confused about chairs, dissolve “chair.” If you get confused about identity, dissolve “identity.” Neither are fundamental things, so you should be able to dissolve them.

      • Dennis Ochei says:

        I’m totally with you on everything you’ve said (I’ve written things to the same effect elsewhere), but I can’t seem to care about what happens in a universe devoid of subjective expectation. Like, why should liking ice cream be an effective reason for me to go get ice cream if my relationship to the future ice cream eater is only that of certain psychological similarities obtaining?

        The only way I can find to justify self-interested action of this form is if I reasoned (rather strangely) that the world would be better off in some abstract third-person sense if more people ate ice cream, and that this impersonal preference is what compels me to pilot this particular body to eat ice cream, instead of the more natural construction, which is that I anticipate enjoying the damn thing.

        • Anonymous says:

          There’s nothing wrong with identifying with “the future ice cream eater” and taking a special, selfish interest in its pleasure! It’s just that no fundamental law of the universe sanctions this particular notion of personal identity. It’s like how there’s no fundamental law of the universe that says you should root for your home football team.

          If you’re worried that this selfishness seems immoral, well, this is the same problem of selfishness that plagues utilitarianism regardless of your theory of personal identity. There are so many other worse-off people, how can you justify spending anything on yourself, etc.

          • Dennis Ochei says:

            It’s not a problem of morality, it’s a problem of rationality.

            Suppose you copy me at time t. At t+1 there are me-A and me-B. At t+2 you promise to give A ice cream. At t+2 you also tell B that A will receive ice cream. Intuitively, A should be excited because he anticipates eating ice cream, and B might be happy for A, but watching your clone eat ice cream is definitely not the same as eating ice cream.

            However, if we are throwing out subjective expectation, A and B should be indifferent as to who gets the ice cream. Giving A the ice cream and giving B the ice cream are in every relevant detail the same, and furthermore, any nepotism A and B might show towards people highly like themselves is neutralized, since A and B are identical up until t+2.

            The kind of bias we show ourselves is not just a stronger form of the bias we’d show a family member or a twin; it’s not just rooting for the “home team” after arbitrarily deciding who that team consists of. No one ponders from behind the veil “exactly what are the properties of a person whose wants and needs and fears and hopes should take supreme precedence?” and then, after working that out, finds themselves thrust into the world as that very person. Instead, we arrive in the world first, and find that this being is me, this flesh is my flesh, this pain my pain, this joy my joy, and then develop the self-directed bias toward whatever person we find ourselves to be. Self-interest is rooted in concern about a specific future person because I expect to experience what that person experiences, not in a kind of favoritism for creatures who have such-and-such disposition or appearance, although such feelings may also be present. It cannot be reduced to the vicarious enjoyment of the good fortune of beings who are merely like me.

            If subjective expectation is an illusion then nearly everything anyone has ever done has been predicated on a misconception.

            Eliezer talks about this here: http://lesswrong.com/lw/19d/the_anthropic_trilemma/

            > The third horn of the anthropic trilemma is to deny that there is any meaningful sense whatsoever in which you can anticipate being yourself in five seconds, rather than Britney Spears; to deny that selfishness is coherently possible; to assert that you can hurl yourself off a cliff without fear, because whoever hits the ground will be another person not particularly connected to you by any such ridiculous thing as a “thread of subjective experience”.

      • Latetotheparty says:

        I’m not attached to my atoms. I am attached to my experiencing.

        What do you say to me then? Do you suggest that I take the transporter / do the mind upload?

        • Anonymous says:

          What do you say to me then?

          Your preferences are underspecified. What counts as “my experiencing?” There’s no true objective answer to this question–you have to choose what counts as “you.”

          (As hypnosifl says above, if physicalism is false then maybe there is an objective answer, but I don’t think physicalism is false.)

  41. MugaSofer says:

    >I think Hanson is overstating his case. All except Watkins were predicting only 10 – 30 years in the future, and most of their predictions were simple numerical estimates, eg “the population will be one billion” rather than complex pictures of society.

    In Hanson’s defence, I get the impression that a lot of his methods consist of taking numerical extrapolations and imagining what kind of world they might imply, in a very Kurzweil-esque fashion. And Kurzweil does have a pretty good track record, for all his flaws.

    • Wrong Species says:

      According to himself, his track record is nearly flawless. But we are fast approaching some of his more outlandish predictions. Stay tuned.

  42. “…he has two pages about what sort of swear words the far future might use.”

    Euphemism!

  43. Jiro says:

    he is weirdly obsessed with fruit and vegetable size

    Can you possibly imagine that, 1) given that this was in the Ladies’ Home Journal, and 2) given that women did most of the cooking, there is a good reason for this?

    • Nicholas says:

      Not to mention that most of the readers would have been fruit and vegetable gardeners, and thus quite interested in predictions about how their hobby might be improved in the future.

  44. One way to look at the em revolution is to consider the effects (or lack of effects) of earlier revolutions.

    For example, the upper-class population in agricultural societies was about the same as the forager population in the same area. In Medieval England, there were 200 men in the upper aristocracy and 1000 knights. If we assume that a typical aristocratic family included a lord, a lady, and a couple of children, the upper class would be 4,800 people. According to Jared Diamond, foragers had a population density of roughly one person per 10 square miles, which means England’s 50,000 square miles could support about 5,000 of them, about the same number as the agricultural upper class. Upper-class lifestyles also resembled the advantages of forager lifestyles: people in the upper class did not spend all day shoveling manure, and they had a diet with adequate protein.

    In other words, the agricultural revolution did not take foragers and turn them into peasants but added a peasant population.

    Applying the above to ems will be left as an exercise for the reader.
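
    A quick check of the arithmetic above (a minimal sketch; the figures are the ones quoted in this comment):

      # Upper-class population vs. forager population of Medieval England
      households = 200 + 1000         # upper aristocracy + knights
      upper_class = households * 4    # lord, lady, two children

      area_sq_miles = 50_000          # England
      density = 1 / 10                # foragers: ~1 person per 10 square miles
      foragers = area_sq_miles * density

      print(upper_class, foragers)    # 4800 5000.0 -- comparable populations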

    • pthagnar says:

      so in 1750 there were 6.5 million people in great britain, mostly dirt farmers. then the industrial revolution happened, so now there are ten times that many people. therefore 9/10ths of the population are new proles, and 1/10th of the population is still… what?

  45. Elizabeth says:

    Part III went full Effulgence pretty quick.

  46. Wrong Species says:

    I’ve always understood, in an abstract way, that my values aren’t objectively true. But I don’t think I really comprehended what that means. There is nothing that is going to keep our sacred values secure in the future. Maybe we should recognize that the future will horrify us regardless of what it specifically looks like and stop dismissing every alternative that we find troublesome. Or maybe we could try to create some kind of “value insurance”, although I’m not sure what that would look like.

    • houseboatonstyxb says:

      @ Wrong Species
      Or maybe we could try to create some kind of “value insurance”, although I’m not sure what that would look like.

      Popular literature. Star Wars 1977 (and maybe TFA). Early Superman and Batman. Disney fairy tales. Early Star Trek. E.T., Indiana Jones. (Maybe Dr. Who.) Early Harry Potter.

      • Wrong Species says:

        Neither what you said nor what anon said is stable. Teaching values across generations is just one giant game of telephone. The only things that are stable are the inherent values brought about through our genes. So I guess we could try genetic engineering, assuming that it doesn’t cause some other dystopian catastrophe. But even if that works, if someone can genetically engineer people to have values that we care about, they can also implant other values that we are not so fond of. So I don’t really see a solution.

        • houseboatonstyxb says:

          @ Wrong Species
          Neither what you said nor what anon said is stable. Teaching values across generations is just one giant game of telephone.

          Briefly: C.S. Lewis in The Abolition of Man says there is a set of precepts that is objectively ‘right’ for humans (i.e. built in). But it needs to be taught to children by ‘poets’ (i.e. the composers of popular stories). If not taught that way, it may not develop correctly in the child; and in any case it’s difficult and complicated in practice, so it is easily overlaid and lost in the shuffle of other influences.

          But when a popular fiction does demonstrate it well, it rings true to the child’s built-in ‘instinct’, and that movie (or whatever) becomes very popular indeed, and conveys it (though not necessarily explicitly) to the next generation.

          (I’m sorry that I don’t have time to try to translate my words into more bullet-proof language, even if that were possible.)

    • Anonymous says:

      We can influence how horrifying the future is. For example, we try to instill our values in our children.

    • Eli says:

      Maybe we should recognize that the future will horrify us regardless of what it specifically looks like

      If by horrify you mean “shock”, as in “greatly surprises”, then yes, of course the real future will surprise you quite a lot. The present always has incomplete information about the future.

  47. FacelessCraven says:

    Why wipe the ems rather than merging them? Would merging be obviously impractical in some way that the rest of the em world isn’t?

    • suntzuanime says:

      We don’t know how to merge brains. The em assumption is that we will be able to simulate brains before we can understand minds, so ems are largely black boxes. The most we might be able to do is simulate some forms of outside influence, like drugs or railroad spikes.

      • FacelessCraven says:

        We don’t know how to emulate brains either. If we can emulate a brain and we can tweak significant variables, what separates a merge from those tweaks? I understand that this is largely an unknown unknown, but it seems like much of the rest of the em predictions are as well, and it seems to me that merging drastically alters the tenor of the discussion. Copies of me who wake up motivated to work and then get annihilated right when it’s time to clock out are horrifying. Being able to fork multiple copies to get 100 man-days of work done in one day and then re-merge to enjoy a hundredfold glow of accomplishment… I doubt wire-heading could make me happier.

        • suntzuanime says:

          We can’t tweak significant variables, is the idea. Under the em assumption we can’t make tweaks, only gross changes mimicking gross physical changes. You can reject the em assumption if you want, but it’s what the book is based on. Merging drastically alters the tenor of the discussion but it also drastically alters the nature of the technology. Write your own book if you want to fantasize about merging.

          • FacelessCraven says:

            @suntzuanime – “We can’t tweak significant variables, is the idea.”

            Is memory that much more significant than sex drive, or any of the other things Scott’s afraid will be tweaked or edited? I mean, I guess it might be, but it doesn’t seem obvious to me.

            “You can reject the em assumption if you want, but it’s what the book is based on.”

            The idea that within a hundred years we’ll figure out how to make ems, but that a thousand or a million years might pass before we figure out how to cleanly edit memory seems like a pretty sketchy premise. Someone in the thread above has pointed out the same problem with AI development. Memory splicing intuitively doesn’t seem vastly more improbable than brain emulation itself, and appears to solve most of the problems emulation raises, so I’m curious why it’s bypassed.

            Why do we think the em society is a stable point that will last for any length of time? It seems rather like someone proposing heavier-than-air flight, but then emphatically rejecting the idea of airliners.

          • suntzuanime says:

            We don’t need to do tweaking to change sex drive. We don’t need to know the details of how sex drive is formed, we just need to be able to simulate the effects of the drugs that we already know alter sex drive. There is no known drug that merges people’s minds.

          • FacelessCraven says:

            @suntzuanime – “There is no known drug that merges people’s minds.”

            There’s no known drug that dupes people’s brains and lets the duplicate run on a PC either, but that’s the tech under discussion. Every piece of recording tech in human history has either arrived with an edit function or received one soon after. The argument is that we can build a tape recorder in a hundred years, but won’t be able to dub and splice in a thousand?

            Again, why does Hanson think the situation he describes is a stable one on millennial timescales?

          • suntzuanime says:

            Yes! Precisely! It’s the tech under discussion! Not your thing! Which is a different tech! And should be saved for a different discussion!

          • Anonymous says:

            Hanson believes that the situation he describes is stable for about two years.

          • FacelessCraven says:

            @suntzuanime – if one shows me a glorious future with a dark underbelly, and I notice that the underbelly is actually removable by popping out two pins and can then be thrown in the trash, and one replies that all this talk about pins is beside the point and that I should pay more attention to the DARK UNDERBELLY, I am tempted to conclude that the underbelly is there for Gross Edgelord purposes as opposed to serious and dispassionate prognostication.

            But again, that is not the point. The point is that Hanson’s prediction seems seriously flawed, because the em society he’s writing the entire book about only exists if technological development stops cold immediately after it’s created, and that is obviously not going to happen.

            @Anonymous – “Hanson believes that the situation he describes is stable for about two years.”

            …of objective time, which is equivalent to thousands, possibly many thousands, of years for the ems, some and possibly most of whom will be doing tech research at their 1000x speed, possibly while inhumanly optimized for greater efficiency.
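
            (Concretely, with the 1000x figure used in this exchange:

              $$2\ \text{objective years} \times 1000 = 2000\ \text{subjective years}$$

            and the book discusses a range of em speeds, so this is a conservative reading of “thousands”.)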

          • suntzuanime says:

            “Popping out two pins” being your way of saying “engineering the capability to edit memories however we wish”. Maybe you know exactly how it would be done; it seems non-trivial to me.

            You’re saying “in this futuristic dystopia there isn’t enough food to feed the starving masses? Edgelord! Why don’t they just use their Star Trek replicators?”

          • FacelessCraven says:

            @SunTzuAnime – ““Popping out two pins” being your way of saying “engineering the capability to edit memories however we wish”. ”

            Not “however we wish,” and arguably not edit at all. Compile, rather. You have em A, who has a week’s worth of memories of sitting on a beach in Tahiti. You have em B, who spent their week elbows-deep up a database’s asshole. You want one to be able to access the memories from the other as easily as they can their own. You don’t need to generate new memories from whole cloth, or change what those memories record.

            “You’re saying “in this futuristic dystopia there isn’t enough food to feed the starving masses? Edgelord! Why don’t they just use their Star Trek replicators?””

            If you show me a famine in Star Trek, I’m going to be pretty curious as to how exactly that happened in a post-scarcity society with replication and FTL. Likewise, if you tell me we’ll be copying brains in a hundred years but those brain copies will be completely black-box for a thousand years after that, I’m calling bull. Ditto for the claim that AI is a thousand subjective years off, as someone else pointed out.

          • suntzuanime says:

            We’ve been building brains for millions of years already, and they’re still completely black box. Knowing how to copy something doesn’t entail knowing how to change it arbitrarily.

            And yes, merging memories might be relatively simple, but we definitely don’t know enough of the secrets of the human brain for you to go slandering someone as a “gross edgelord” for not assuming they are.

          • FacelessCraven says:

            @Suntzuanime – “We’ve been building brains for millions of years already, and they’re still completely black box.”

            We’ve been building wet brains by rubbing our genitals together. The genetic equivalent would be something like saying that we can 3D-print a live copy of a human while you wait, but we can’t cure genetic disorders.

            “Knowing how to copy something doesn’t entail knowing how to change it arbitrarily.”

            Yes, it actually does. The important word there is “copy”, as in media, as in a copy of a painting or a copy of a song, as in breaking something into pieces and transcribing them in a new format for long-term storage. If you know how to turn a voice singing into a microphone into a digital file *from scratch*, you know how to edit that file arbitrarily. I’m pretty sure that has been true for every medium ever invented in human history, and the entire reason we think braintaping should be possible is that we’re assuming something very similar holds true for brains.

          • suntzuanime says:

            I mean yes, in a sense you can edit an encrypted text document arbitrarily even if you don’t know the cipher. But you won’t be able to control the plaintext how you like. Being able to 3d print a human embryo would let you control the genotype of the body, but the extent to which you can control the phenotype depends on the depth of your knowledge of genetics. Just as you can fiddle with an em’s neurons however you like, but that doesn’t let you control their memories without further deep knowledge.

          • FacelessCraven says:

            @suntzuanime – “I mean yes, in a sense you can edit an encrypted text document arbitrarily even if you don’t know the cipher.”

            A much better metaphor, but again, this is how people actually do break ciphers, yes?

            “Just as you can fiddle with an em’s neurons however you like, but that doesn’t let you control their memories without further deep knowledge.”

            Would you at least agree that being able to fiddle with an em’s neurons would be likely to result in deep knowledge inside, say, a century?

          • suntzuanime says:

            I don’t think it’s obvious enough that it would for me to accept saying nasty things about people who explore the implications if it doesn’t.

          • FacelessCraven says:

            @suntzuanime – “I don’t think it’s so obvious that it would that I can accept saying nasty things about people who explore the implications if it doesn’t.”

            Then I apologize for doing so, and thank you for your time.

  48. Phil says:

    Many of the weird em-future situations Hanson describes strike me as somewhat similar to other philosophical paradoxes associated with copying minds (e.g. the various “brain in a vat” thought experiments of which you are no doubt aware). My personal favorite is Adam Elga’s paper about defeating Dr. Evil by copying him:

    https://www.princeton.edu/~adame/papers/drevil/drevil.pdf

    However, one strategy for answering these paradoxes is to hold that making identical copies of minds is not possible. Scott Aaronson gives a highly speculative but (I think) plausible account of this in his paper on free will:

    http://www.scottaaronson.com/blog/?p=1438

    I’m curious what you think about how this might affect Hanson’s speculations about the future. Your above comment about DNA methylation suggests that the impossibility of precisely copying minds at the molecular level could be quite relevant to a lot of the scenarios in the book. Does Hanson consider this?

    • Simon says:

      About Aaronson’s paper:

      It seems to me that if you deny a human access to “freebits” – by putting them into an environment where all input originates from “past macroscopic determinants” – then, following Aaronson’s point of view, the human must lack free will, despite apparently having the same internal mental processes as when they had access to freebits. (Sort of the reverse of the “Gerbil objection” he mentions.)

      I’ll also add that it is questionable whether freebits exist at all, and highly questionable whether the operation of the human brain is quantum mechanical in such a way as to make meaningful use of freebits if they do exist. (Or quantum mechanical in such a way that the no-cloning theorem would apply to forbid making appropriately faithful copies).

  49. grendelkhan says:

    Nick Land’s vision, the whole Disneyland without children thing, reminded me of Thomas Ligotti’s vision from “My Work Is Not Yet Done”. I think the extended quote is apropos here.

    The company that employed me strived only to serve up the cheapest fare that its customers would tolerate, churn it out as fast as possible, and charge as much as they could get away with. If it were possible to do so, the company would sell what all businesses of its kind dream about selling, creating that which all our efforts were tacitly supposed to achieve: the ultimate product—Nothing. And for this product they would command the ultimate price—Everything.

    This market strategy would then go on until one day, among the world-wide ruins of derelict factories and warehouses and office buildings, there stood only a single, shining, windowless structure with no entrance and no exit. Inside would be—will be—only a dense network of computers calculating profits. Outside will be tribes of savage vagrants with no comprehension of the nature or purpose of the shining, windowless structure. Perhaps they will worship it as a god. Perhaps they will try to destroy it, their primitive armory proving wholly ineffectual against the smooth and impervious walls of the structure, upon which not even a scratch can be inflicted.

    Ligotti is explicitly (and emphatically) a horror writer, so there’s that.

  50. Carl Shulman says:

    That was a great read.

    as best I can tell he does not ask a single neuroscientist to estimate the date at which brain scanning and simulation might be available…citing a few hopeful articles by very enthusiastic futurists who are not neuroscientists or scanning professionals themselves

    The Whole Brain Emulation Roadmap, which he cites, came out of a workshop populated by such people.

    Second, one would expect that even if there were only 5-10% progress over the last twenty years, then there would be faster progress in the future, since the future will have a bigger economy, better supporting technology, and more resources invested in AI research.

    Past progress also occurred in the context of economic growth and increasing inputs, although in the last few years funding has exploded at a historically exceptional rate.

    As unhappy as we moderns may be contemplating em society, ems themselves will not be unhappy! … The best we could say about this is that if the wireheading is used liberally it’s a lite version of the world where everything gets converted to hedonium.

    Depends on what most efficiently motivates learning and behavior. Evolution found both positive and negative motivations useful for different problems, which is reason to expect that efficient emulations would have both, e.g. a proofreader who is terrified of making an error, or pained and disgusted by one, rather than only eagerly anticipating producing an error-free document.

    This applies to AI using related algorithms as well.

    All of Robin’s points about how normal non-uploaded humans should be able to survive an Ascended Economy at least for a while seem accurate.

    If “a while” is supposed to mean more than a few years, this requires hyper-stable (millions of subjective years for the fast beings) protection of those humans from collateral damage from war or industry, as well as from active expropriation.

  51. Thursday says:

    Thomists have some impressive arguments that the mind simply cannot be reduced to something material. If they’re right, then ems are simply impossible. The arguments are best laid out in this short book.

    • suntzuanime says:

      Impressively bad?

      • Thursday says:

        Read the book. There are a lot of problems with a purely materialist account of the mind. Incidentally, this is why Thomas Nagel argued against the idea that evolution could wholly account for the mind. These problems are also behind the Churchlands’ attempts to ditch the need for intentionality, which is barmy. You also have Searle’s arguments against the mind as computer.

        Here’s a taste of where the Thomists are coming from.

        • suntzuanime says:

          So far you have not done anything to distinguish yourself from all the other, how do I put this politely, enthusiasts, who tell me there are a lot of problems with something and suggest I read a specific book. In fact, by enthusiastically promoting an -ism based on a male first name, you’re causing me to patternmatch you to the Georgists, who failed to persuade me that I had not wasted my time by taking them seriously.

          It’s worth noting that what you want out of an argument is not that it be impressive, but rather that it be correct. This is a mistake that gets made quite a lot in the humanities.

          • LWNielsenim says:

            Suntzuanime’s forceful argument applies too at higher levels of cognition:

            It’s worth noting that what you want out of an argument is not only that it be correct, but that it inspire great art. Disregarding this value is a mistake that gets made quite a lot among rationalists.

            Why should people embrace any rationalist philosophy that does not inspire art that is wonderful, or at least funny, and preferably both? 🙂

          • suntzuanime says:

            True: that it be correct is what I want out of an argument. You may of course have different priorities. That’s not going to help persuade me to take your arguments seriously, though.

          • LWNielsenim says:

            Neither is the modus ponens of fundamentalist rationalism particularly persuasive to humanists, for reasons that are well-grounded in art, and in cognitive science, and increasingly are well-grounded too in post-rational frameworks for computational intelligence.

            This helps to explain why it’s extra work — isn’t it? — for rationalists to interface with humanists, and it helps to explain too why such cross-cultural efforts not uncommonly meet with a chilly, or anxious, or even hostile, reception from both sides.

            Fortunately, a little bit of good will on both sides goes a long way! 🙂

          • Thursday says:

            For example, if, as Quine and Dennett concede, any materialist account of thought and language means that thought and language are inherently indeterminate, then there is no possibility of logical thought at all. All abstract concepts are gibberish.

          • TheAncientGeek says:

            The idea that physicalism is incompatible with determinate meaning presupposes that determinate meaning ever existed. It seems to exist in the purely formal sciences, but the catch is that pure logic isn’t about anything. It’s the combination of determinacy and real-world relevance that is unobtainable.

            And how does nonphysicalism help? A triangle written in ink has no determinate meaning, but neither would one written in pixie dust. You would need an immaterial substance with very specific properties.

            The naturalists, such as Quine, who point out the problem don’t see it as leading to an apocalyptic situation where meaning and communication are impossible… rather, they think communication is possible, but imperfect, and context is important.

        • anonymous poster says:

          There are a lot of problems with a purely materialist account of the mind.

          Let’s hear ’em

          • LWNielsenim says:

            Materialistic rationalism cannot answer the question, in any terms that humans can understand: “What is it that Google’s AlphaGo knows, that makes it a 9-dan go player?”

            Neither can any other philosophical or mathematical or computational discipline (either present or envisioned, as far as I know) answer this question in any more illuminating form than “run the deep neural network again.”

            We have entered into a new realm, in which technological cognition is comparably mysterious to biological cognition, and new generations of computational hardware have a similar microscopic architecture to biological brains.

            How long until our deep neural networks exhibit “Whitman” behaviors?

            Rising and gliding out,
                AlphaGo wander’d off by itself,

            In the mystical moist night-air,
                and from time to time,

            Look’d up in perfect silence at the stars.

            If AlphaGo did this, we wouldn’t understand why; moreover AlphaGo couldn’t explain itself even to itself, or to us either, in any terms that it or we could understand! 🙂

          • The Nybbler says:

            If “technological cognition is comparably mysterious to biological cognition”, that’s an argument for a materialist account of the mind, not against. We may or may not be able to come up with a human-understandable model of what’s going on when we run a deep neural network, but we know it’s all “material”.

        • Eli says:

          I think you need to specify what you mean by “intentionality” before you can accuse the Churchlands of ditching it. The normal definition I’ve heard is something the Churchlands are all fine with in non-mystical form.

          Of course, Feser and Nagel and other mysterians are free to find naturalistic explanations for things unsatisfying, but that doesn’t make those explanations wrong.

          • Thursday says:

            When you have a lot of guys from all sorts of backgrounds saying that something is very, very wrong with materialist and computational models of the mind, perhaps it is worth looking into. I mean Searle and Nagel are hardly religious fanatics and are very well thought of in the field.

          • Eli says:

            Our best actual predictive models of the mind are all materialist and computational. No dice. You don’t win at science by yelling loudly that science is impossible.

          • Anonymous says:

            Our best actual way of getting to Polaris in your lifetime is with ion thrusters. No dice. You don’t get to make something that may be impossible possible by yelling loudly that other methods don’t seem effective.

          • Said Achmiz says:

            @Thursday:

            I mean Searle and Nagel are hardly religious fanatics and are very well thought of in the field.

            What field would that be?

            In philosophy of mind, Searle is thought of as a stubborn, dogmatic adherent of his own poorly-supported position. Nagel is better-regarded, but then again his criticisms don’t say what you claim they do.

          • Thursday says:

            Our best actual predictive models of the mind are all materialist and computational.

            1. At the moment, those models are all incredibly crude. We simply can’t make detailed predictions using them.
            2. There are important aspects of the mind, such as determinate meaning, which simply cannot be accounted for by such models.

          • suntzuanime says:

            One of the best philosophy burns I’ve ever heard was “I’ve done the Chinese Searle’s Head thought experiment, and it turns out there’s no real thinking going on in there”.

          • Said Achmiz says:

            @Thursday

            I am very familiar with Searle’s writing on the Chinese Room. I’ve written essays and papers about Searle’s writing on the Chinese Room. I stand by my comment. The Chinese Room is not the example you want to pick if you’re trying to say that Searle and his views are well-respected in “the field”.

          • Thursday says:

            Saying it doesn’t make it so.

    • grendelkhan says:

      I tried to get into Ed Feser. Really, I did. I read his blog to the extent that I could stand it. (“Happy Consequentialism Day” was a hell of a cheap shot, and I adjusted my respect for him appropriately downwards.) As I usually do when overwhelmed by icky style, I turned to Scott’s reviews, but apparently even he got tired of it.

      If I wanted to wade through difficult language, whether vile or gratuitously obscure, until I realized that underneath it all was a basic “Eugene versus Debs”-level misunderstanding of the facts of the matter, I’d go read Jim Donald’s blog.

      I’ve tried, really I have. I’ve read proponents of Thomism; I’ve read explainers from people interested in learning about it. It’s mostly senseless, and where it makes sense, it’s either trivial or mistaken. I am not impressed. Honestly, the flavor I get is of Bob from the identical-particles essay patiently explaining very complex and formal reasons for something that turns out to be based on some pretty heavy mistakes.

      • Psmith says:

        If I wanted to wade through difficult language, whether vile or gratuitously obscure, until I realized that underneath it all was a basic “Eugene versus Debs”-level misunderstanding of the facts of the matter, I’d go read Jim Donald’s blog.

        What misunderstanding of this nature do you think Jim is working under? (We should probably take this to the open thread or ratanon if we’re gonna talk about it.)

        • grendelkhan says:

          That was kind of a throwaway, but now I’ve been sniped.

          It’s not fair given that he’s not around to talk about it, but I’ll give a very brief list based on his comments away from his own blog. (1) His thing about history books being Communist propaganda (the whole “Eugene versus Debs” fiasco), (2) the “Google is dying because it’s hiring more women” thing that Scott referenced in his now-deleted “Petty Internet Drama” post, and (3) a series of interactions over at esr’s place (Americans don’t insult Islam because they’re literally afraid of being beheaded in the streets; Intel is faking their smaller-features numbers; everything he’s ever said about where rape statistics come from).

          So when he says “double entry accounting has been rendered essentially illegal by Sarbannes [sic] Oxley”, for example, I’m somewhat disinclined to think there’s a shocking insight just out of reach.

          • Psmith says:

            the whole “Eugene versus Debs” fiasco

            That was him? Damn. I thought you were just invoking that as a generic example. Well, point very well taken.

          • grendelkhan says:

            Yep! Here’s the comment. I’m sorely tempted to link to some of my more fascinating experiences with him (which, now that I consider it, really have unfairly colored my feelings toward even the more polite tradcons, darn it), but he’s not here to defend himself, and it would feel too cheap. The nerve is raw, even years later. He was effective at getting under my skin, if nothing else, by being one of the most unpleasant people I ever traded comments with. I suppose he’d be proud of that.

  52. Max says:

    The idea of emulated people as worker bees seems wrong, since a completely alien AI (or set of specialized AIs) would be vastly more efficient, assuming the building blocks of AI continue to be different from nature’s.

    The only way this future could happen is if we abandon digital AI and instead manufacture something very close to actual human brains. But then, the lossless duplication concept wouldn’t apply.

  53. Ruprect says:

    Nice piece.
    We’re defined by our limitations – it seems that in order for the story to make any sense, legal limitations have been assumed. It’s not clear, however, that the forces maintaining legal stability will overcome those in opposition to them.
    In fact, I think we can be fairly certain that our current legal system would not survive serious changes to production, the nature of life, etc.
    As such, that seems a bad place to assume those limitations.

  54. Shion Arita says:

    As for the shape of things to come, I haven’t seen anyone else share my prediction. Does anyone else think this will be the way things will go?

    Here’s what I think the first ‘posthuman’ stuff will come from:

    People will be mentally augmented by artificially grown structures of actual neurons. They could be designed to be very specialized (like cerebellar circuits for specific actions), or general and plastic, able to learn how to do whatever the user wants them to. I get the impression that it will be easier to make really powerful nets using actual neurons than with computer neural networks.

    • PDV says:

      That requires understanding details of how the brain works (in order to integrate it), so it’s a much harder problem than making ems.

  55. Jack V says:

    “I hate real-time debates”

    I think talking through things with someone virtually face-to-face is useful for getting an idea of what they think, provided you’re both honest about trying to reach understanding. But in many ways that works in spite of the debate format – a debate lets each person have a say, but it rewards spinning bullshit and bashing the other person over superficial stuff more than it rewards reaching a common understanding.

    I wonder about a better formal conversational style. Like, say, each person proposes a list of statements, and then they take it in turns to ask for clarification: “do you mean the same thing by this word as I think?”, “which of these do you think is most important?”, “do you think that’s opposed by the evidence?” Obviously it’s hard to prevent it turning into attacks, especially if some of the assertions seem just flat-out factually wrong, but it might at least channel the impulses into a constructive dialogue, and it could be real-time or asynchronous.

    • Scott Alexander says:

      Rephrase: I hate public real-time debates, where an audience is judging you on your every word.

  56. sohois says:

    Though Hanson may address this quite adequately in the actual book, I fail to see how the Ems scenario would not lead to the development of strong AI far before an ems economy gets started.

    Let me detail a few assumptions:
    First, I believe it is reasonable to assume that it will take some years (even 2-3 would be sufficient, I feel) from the development of the first brain-uploading technology and the first working emulated brain to a large number of people uploading versions of themselves. There are both technological and ethical reasons for this. The uploading itself presumably requires quite a bit of testing before members of the general populace, or businesses, could use it. This would necessitate volunteers. Furthermore, I would predict that many people would be uneasy with the technology, due both to general fear of technology and to ethical complaints regarding the creation of these artificial minds.
    Thus, the second assumption is that the volunteers for the early testing of ems would be mostly scientists – people like Hanson himself, people interested in neuroscience and AI and suchlike. Now, Hanson believes that AI development would take several hundred years, compared to what many AI researchers are predicting. However, he then lays out that emulated minds can experience subjective centuries in objective weeks.
    My prediction in such a scenario is therefore that emulated computer scientists – through original uploads and their copies – would be able to take thousands of lifetimes to try to develop strong AI before an economy can even begin to emulate finance or legal functions.

    The main objection that I can think of to this is that the hardware would somehow not allow it, i.e. only the technology for brain scanning and uploading would develop, and it would then take several years more to be able to make copies and speed up the brain. By then my earlier assumptions have already passed, and businesses can begin uploading employees in earnest.

  57. Nathan W says:

    “But slavery isn’t nearly as abject and inferior a condition as the one where somebody else has the control switch to your reward center.”

    It can hardly be said enough. Brain science needs to be happening 100% in the public eye.

    You don’t have to model the entire functioning of the brain – just the “evoked potentials”. Then, via microwave pulses, many of them can be approximately mimicked. It is being used to very evil ends today, and the three-letter agencies which could pull this off without getting blown up or put in prison are very few in number.

  58. Dan King says:

    Call me incurious, but when I think about the future all I want to know is if Trump will win the election, will I have enough money for retirement, and what names my eventual grandchildren will have.

    The rest of this is boring.

  59. Ketil says:

    Great review! I’m too late in the game to have read Overcoming Bias much, but now I want to read the book. I don’t quite get this:

    “Would this count as murder? Hanson predicts that ems will have unusually blase attitudes toward copy-deletion. If there are a thousand other copies of me in the world, then going to sleep and not waking up just feels like delegating back to a different version of me.”

    Sure, there may be 999 other copies from the same original, but I care about this one. Even if the hypothesis is a precise simulation of my brain, I just don’t see myself becoming willing to die based on the existence or not of people similar to me. Or are suicide rates perhaps higher for twins? After all, there is another copy out there, right? I’d be surprised if that were the case.

    “Hanson says that there probably won’t be too much slavery in the em world, because it will likely have strong rule of law, because slaves aren’t as productive as free workers, and there’s little advantage to enslaving someone when you could just pay them subsistence wages anyway”

    I’m sorry? I wake up, well rested, have a nice cup of coffee, get about my business. I go to work and, having a good day, whack out a lot of code for my boss. Who, at five in the afternoon, shuts me down. I wake up, well rested… ad infinitum. Groundhog Day, forever. What is this, if not slavery? My ems never know what happens.

    And I worry that is bound to happen, because, since these are only simulations, we have no qualms about shutting them down. Sounds remarkably similar to justifying slavery with racial inferiority.

    • pgbh says:

      “Sure, there may be 999 other copies from the same original, but I care about this one.”

      Since anyone could be copied into an em, and there will likely be at least a few people who don’t “care about this one”, ems will disproportionately comprise people who don’t care much about being deleted. This is supported by the fact that, as Scott points out, ems’ personalities could likely be moderately edited.

      “And I worry that is bound to happen, because, since these are only simulations, we have no qualms about shutting them down. Sounds remarkably similar to justifying slavery with racial inferiority.”

      I suppose, though it’s not like that makes it wrong. Most people are quite willing to enslave or kill non-human animals exactly because we think they’re “inferior”.

  60. Randall Randall says:

    The juxtaposition of

    Who wants to be on an airplane for forty days, especially if the economy has grown so fast during that time that the money you took on the plane with you has been significantly devalued by inflation?

    and

    When the economy doubles every day, so can your bank account.

    seems odd. I haven’t finished the review (much less read the book) yet, but there seems to be an underlying assumption that ems won’t be allowed to own property, since if they could, they could just behave like humans and reap the resulting rewards.

    (But my true objection to humans having it great in this scenario is that:

    1. humans and their property represent enormous unrealized wealth,
    2. I don’t see a way for expensive, slow, meat copies of humans to compete on a level playing field, and
    3. the political playing field tends to be reshuffled from time to time, and that will happen faster in a faster world.)

  61. Dirdle says:

    Well I wish it could be Christmas everyday
    When the kids start singing and the band begins to play

    Ahem. References aside. It’s the best argument against cryonics I’ve ever heard of. Might have to give it a read.

  62. RobertB says:

    This was all very interesting, but the Ascended Economy stuff seems like a big conceptual error. You can imagine someone who didn’t know much about computer science making the same kind of error:

    “Right now, computers follow instructions that are put in by a human programmer and transfer data between different registers and perform computations on it. But in the future, what is to stop those instructions from being generated by an algorithmic compiler rather than a human programmer? And what if the data isn’t inputs coming in from the outside world, but rather information generated by the computer itself? It would just be a machine for taking numbers and making different numbers out of them at maximal efficiency [and onwards into mysterical nonsense.]”

    I think the issue here is one of abstraction. [I don’t know anything about electrical engineering, but I assume that] no living computer scientist actually understands what a computer does when you divide one large number by another. There’s RAM and input caches and execution units, and they’re all extremely complicated and do highly detailed things that are hard for humans to remember. And if you were presented with a list of the things that the computer was going to do in order to divide 5946834274 by 34622, you might mistakenly conclude that it was all a bunch of meaningless gobbledegook. But in reality, all the steps of abstraction that are taken to get to BEEP BLOOP FLIP THE BIT AT LOCATION 0x89D04E** from output = 5 + 2 have been painstakingly validated to ensure that the gobbledegook faithfully executes the high-level abstraction.
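
    To make the layering concrete, here is a minimal Python sketch (mine, not RobertB’s) of the shift-and-subtract routine that hardware division roughly amounts to – each line is “gobbledegook” on its own, yet the whole faithfully implements the high-level divide:

        def long_division(dividend, divisor):
            # Binary long division: bring down one bit of the dividend at a
            # time, subtracting the divisor whenever the running remainder
            # allows it. This is the bit-level face of a single "/" operator.
            assert divisor > 0
            quotient, remainder = 0, 0
            for i in reversed(range(dividend.bit_length())):
                remainder = (remainder << 1) | ((dividend >> i) & 1)
                if remainder >= divisor:
                    remainder -= divisor
                    quotient |= 1 << i
            return quotient, remainder

        # Matches the high-level abstraction exactly:
        assert long_division(5946834274, 34622) == divmod(5946834274, 34622)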

    Now, let’s talk about the economy. Obviously corporations are not validated in the same way that algorithms are, but hundreds of thousands of people (literally) spend their entire working lives analyzing how corporate entities respond to different desires of stakeholders, arguing about it, and writing rules that govern various situations. The same is true of commercial contracts. There is a sense in which Big Coal, Inc., is a useful abstraction of the fact that coal can usefully be burned to create heat. Like the machine language above, no human could necessarily understand the fact that it would be a useful thing for some people in West Virginia to walk into a big hole in the ground, chip out a hunk of black rock, load it onto a train, send the train to somewhere in Ohio where some different folks would grind up the rock and feed it into a giant furnace that would power a bunch of spinning magnets that are hooked up to wires that run to Mary’s house where she’s using a computer to surf the internet. BLEEP BORP BOOP. But it’s much more human-comprehensible to think of Big Coal, Inc. digging up coal to sell to Power Plant Co., which burns it to create power it sells to Soulless Utility Corp., which sells it to Mary. Everyone gets paid, and the dizzyingly complex physical requirements for Mary to surf the web have been met (at least in part).

    So that’s a long digression about abstraction, and an important distinction is that economic abstractions are not perfectly validated. So a company like Enron was supposed to be doing something useful involving power trading, but it just ended up perpetrating accounting scams that looked like useful power-trading stuff. Oops. But the bottom line is still that abstractions have to be about something. You can’t take the machine-language instantiation of the high-level concept and ask what it would look like if it stopped being the instantiation of anything. In the real Ascended Economy example, we’re supposed to imagine that these corporations are shipping each other machine tools and solar panels, but why are they doing that instead of something else? If Meaningless Business 1 ships Meaningless Business 2 a box full of Nerds candy rather than coal, why does Meaningless Business 2 object? In the current economy, someone who wanted coal might well complain that Nerds candy doesn’t burn as well, or it creates weird residues, but why does Meaningless Business 2 care? There are no ultimate customers who have a specification for a product, so why does it matter?

    • Simon says:

      A tangentially related question: presumably some of the work the ems are doing ends up with a meatspace output? At that point, surely there’s going to be a huge disparity between the ultra-rapid cognition and the physical capacity of the means of production? How is this bottleneck not a problem?

      Or am I misunderstanding something?

  63. herbert herbertson says:

    re: IV:

    This is why I’m a socialist singularitarian. Oddly, I’m very nearly the only one I know: of my socialist acquaintances, most see singularities or things like this Age of Em as ridiculous nerd raptures not worth engaging, while the people I know who entertain the idea of these sorts of futures seem to reject the idea of socialism in pretty much the same way. But, my god, is there any other path out of these dystopias? It seems like an inevitable reaction from mixing the idea that machines may participate as agents in the economy (already in place, even if right now the machines are mostly limited to legal instruments) and the idea that machines will become more capable than humans in the foreseeable future (undeniable as far as I’m concerned, absent catastrophe).

    Maybe I just need to change my terminology? At this point, I’m not even a little picky with regards to what the “socialist” alternative ends up being. A simple Butlerian-style injunction would be a hell of a lot better than nothing – thou shalt not invest a machine with property rights. Even if that just meant we got god emperors obtaining power through AI advisors – at least that’s a human future, not this empty, endless, masturbatory machine marketplace.

  64. ad says:

    Hidden in its super-villain research lab lair, this genius villain AI works out unprecedented revolutions in AI design, turns itself into a super-genius, which then invents super-weapons and takes over the world. Bwa ha ha.

    Isn’t this how the Industrial Revolution happened? Hidden on their small, damp island off the coast of Europe, a small group of people worked out unprecedented revolutions in engineering and took over the world. Bwa ha ha.

  65. Your section IV gave me flashbacks to A Deepness In The Sky, because your combination of “let’s make you focus on your task” and “let’s make you want to do the task we assign to you” is basically the Emergent’s Focus technology, as far as I’m concerned, except digital.

    My skin is now crawling, good job.

    (This is not a complaint, it’s a compliment.)

    On a different note, I’m always very cautious before trying to dismiss future ethics as horrific, and would identify with the train of thought you mentioned you hate. Let me try to present it in a way you can perhaps salvage.

    First of all, knowing the story you’re referring to, I think it was a bad move on part of the author, because rape in particular is not a topic most people can have a rational conversation about (I am not excluding myself, here), and it limited the audience of the story to people with a thick enough skin, whereas the rest of the story would be good food for thought for a much wider audience. As such I feel it’s a bit of a lost opportunity.

    But on a purely personal level, I would have been rather interested in what the author means by that. It was never really explained, even though I would expect there to be quite a few social changes. I feel like the phrase ‘It all adds up to normality’ needs to be put to use here, and I would have liked to see more evidence that it does; that it’s not dystopian or utopian, that it just is.

    Once there’s enough data for it to look plausibly stable (and the book you’re reviewing sounds like it’s brimming with exactly that), I take note when I’m feeling uncomfortable about something and then ask myself if I have any right to make that judgement call. Often the answer is ‘no’. It doesn’t stop my discomfort, but it does let me put a lid on it and categorise it away, and attempt to empathise with what the world is like from an inside perspective.

    (Greg Egan’s aliens or transhumans often at least slightly tap into that with me (I don’t think he’s particularly trying to make them disquieting, mind you) – for example, the transhumans in Schild’s Ladder largely consider ‘local death’ (one that forces them to restore from backup) to be simple amnesia, which is plausible from their perspective, but still really strange. Not shocking or disgusting, granted, but it took me a while to get a handle on.)

    Which is to say I do try to put an honest effort into accepting alien values.

    I don’t know if it’s the right approach. I might be going too far. Maybe there are things one shouldn’t try to be charitable toward that I’d be caught up in being charitable toward with my approach. I have the stance that the future is likely to be better for the participants, but also alien and disconcerting to me. As in, I don’t expect I could necessarily recognise a better future on a gut level.

    Not sure how frequent that stance is, or if you think it should perhaps be considered fundamentally separately from the stance you criticised, but I wanted to share my thoughts in case they can help salvage the mindset for you.

  66. Azure says:

    First, there’s a class of attempted resolutions to the Mere Addition paradox that go “What makes you think having the population expand to a point where everyone is living at subsistence levels is so bad?” From the review it sounds like Age of Em is, if unintentionally, a really well-thought-out example.

    Second, I think the most obvious way to counter the argument that I’m just an unsophisticated denizen of the past who’s just horrified by things being different is that the things that are /different/ aren’t the things that bother me. I have a pretty promiscuous notion of identity so I might not be representative, but I’m willing to recognize lots of things as ‘me’ and if there’s another sufficiently close ‘me’ around I don’t mind the idea of being taken out of the run queue (we’ll see if I still feel that way if it ever becomes an issue…), and copypaste people splicing off bits of themselves at various points and shoving memories from other sources into themselves don’t bother me.

    All the stuff that bothers me is the stuff we have today. A runaway capitalism that views people as the fuel to feed the economic engine instead of the other way around, destruction of the middle class (we have the same person being the opulent ultra-rich /and/ the masses of poor toiling away for subsistence to support their lifestyle), pressure to work ever-more hours and the destruction of leisure time, and a ‘strong rule of law’ that seems concerned not at all with establishing a strong floor of well-being.

    With one exception: having the population get cut down to a relatively small number of people who get copied repeatedly doesn’t seem like the best thing in the world for culture and the arts.

    Third, if someone thinks that people in a virtual world can live in dazzling splendor on subsistence wages in a world where the ‘rule of law’ is most interested in making sure that the richest corporations maintain complete control over what they think is their property, I have three letters for them: D, R, and M. (Even without that copyright gone mad seems awfully likely, or someone patenting a Means and Process to Transmit the Sensation of Umami to Ems.)

  67. miguellopes says:

    Didn’t read all the comments, so I’m not sure this question was raised, but:

    Is it plausible to assume an ever-increasing demand for better cars/phones/whatever? Because if this demand reaches a plateau (the population may stabilize, and our consumption is time-constrained), economic growth plateaus with it. Are we sure there will be a demand for the things created by these em entities? Is this a stupid question?

    • ii says:

      I was asking more or less the same thing, albeit as “what motivates people running on computers to work? what work is even valuable?” Others were also leery of what “vacation” means in this context. It seems to me that trade developed among hunter-gatherers in order to make up for deficiencies, and nothing inherent in the human condition ensures that it’s going to keep going once everyone is well fed and entertained.

      • Ghatanathoah says:

        A hunter-gatherer would probably have been skeptical that there would be any more demand for goods after farming was invented, since everyone would then have all the food they want. If he heard that industrial farming allows a small number of farmers to feed everyone, he would assume everyone else is unemployed. There is no limit to human wants. Humans will always want more and more things and experiences.

        I can think of a number of Em products that pretty much everyone would want, even if all the customers were other Ems.

        First of all, all those virtual reality worlds the Ems live in wouldn’t come out of nowhere. Someone would have to code them. So world-designing ems and virtual reality programming ems would always be in demand.

        There would probably still be demand for movies, books, and videogames. There’d probably be even greater demand since you could make enough copies of yourself to read every book ever written. So writer, actor, and game-programming ems would find plenty of work.

        All those computers running the ems and virtual worlds need power. So engineering ems devoted to energy exploration and extraction would be quite common.

        The demand for computers themselves would also skyrocket. So mining-engineer ems and other types of ems devoted to finding more resources to build computers with would be common as well. So would engineering ems devoted to designing more efficient computers.

        Remember that ems lack a common impediment that greatly reduces consumer demand in our world: the inability to be in two places at once. The demand for ocean cruises is limited by the inability to go on more than one at once. The demand for books is capped by lack of reading time. Ems can make more and more copies for those purposes. Maybe at a later stage of emworld they could even find a way to merge copies so you could remember all the experiences each of you had.

  68. R Flaum says:

    I would actually count Watkins’ “To England in Two Days” prediction as a success. He got the mechanism wrong, but the basic claim — advancing technology will make intercontinental transportation much faster — was correct. You see this kind of thing in a lot of his predictions, like his prediction of air conditioning: he was right about what the machine would do, but wrong about how it would do it.

    • Scott Alexander says:

      But he was also wrong about the length of time – Americans can get there in a few hours by plane. The only sense in which he was right was that there was technological progress, which is just about the easiest thing in the world to predict.

      • R Flaum says:

        I don’t know about that — over the past several decades, transatlantic trip times have not decreased. In fact, from a certain point of view you could say they’ve increased because the Concorde is no longer in service. There has been technological progress, but it’s generally been on the manufacturing end.

        More generally, I think a big part of these sort of predictions is what you might call the economic end. His claim was that demand for fast transatlantic travel is great enough to make research into it a paying proposition.

  69. Vamair says:

    It seems the largest external part of the em economy would be dedicated to processing base materials into computronium and energy, as everything needs cycles. And there I fail to see how an em network differs from an AI that has a Foom period of a few years and doesn’t care about people.

  70. moridinamael says:

    I wonder if Hanson (or Scott) has read the *Quantum Thief* novels of Hannu Rajaniemi. The idea of a “warmind” in these books is always what I jump to when thinking of how this scenario would go. A mind with everything except clinical, psychopathic, goal-oriented intelligence meticulously cut out of it.

    • Inc says:

      Much of what Hanson apparently wrote about Ems (I haven’t read it, only this review) reminds me a lot of the Jean le Flambeur series.

    • salty-horse says:

      Hannu Rajaniemi has blurbed Age of Em and recommended it on several occasions.

      • gwern says:

        Rajaniemi is also familiar with Eliezer and Robin (see the references in book 3) and has been reading them for who knows how long; Robin has been blogging occasionally about ems since… I dunno, 2005? Earlier?

  71. Jill says:

    Scott, I like this guy whose blog I found on your sidebar. His predictions for the more immediate future are in this article here. I am thinking of how these would work if these are the immediate future and Hanson’s book is the more distant future:

    http://fredrikdeboer.com/2016/05/16/our-nightmare/

    “will it be a 22 year old with an Ivy League degree lecturing the poor about their failure to speak in the rarefied vocabulary of intersectionality?”

    I guess if we put that together with Hanson’s book, and look at the more distant future, maybe we get ems doing the lecturing. Or maybe no lecturing at all is necessary because, after several generations, no one does or says anything to or for the poor at all any more, but only lets them be cogs in the wheel of industry? And maybe there are no universities any more, whether Ivy League or not.

    Here is an interesting quote from an Amazon review of The Age of Em

    “This is a merger scenario between human and machine, though only for a thousand world-class individuals. Make those organization execs and it would seem like a stock index going through the reformation of feudal monarchies to nations or their own ideological utopias amid technological traits.

    “It is not about net neutrality. This is more like the lifestyles of the rich and famous. The question becomes what happens to the rest of the humans, if they want more of a say or are genetically advanced besides.”

    The thoughts of book reviewers on Amazon often add significantly to, or expand, the thoughts of the author of the book they are reviewing. And this is one case in point, in my view.

    • HeelBearCub says:

      @Jill:
      You do realize that the very last post Scott made was about this article, right?

      If I adopt a charitable viewpoint, I’m guessing you like the commenting and the interaction, but you aren’t particularly engaged with what Scott is actually posting. Or perhaps you read many things and don’t always remember the context in which you read them? I am unsure.

      I don’t want to seem negative, but I think minding your Ps and Qs more will help you get fewer negative reactions to the things you post here.

  72. Michael Vassar says:

    How large were beets, apples, and strawberries in 1900? How large are the largest ones today? That seems like something that could have happened very predictably via selective breeding.

  73. LPSP says:

    Consider two competing companies with em enterprises. Each has a certain range of ems, each capable of performing certain tasks efficiently. Both companies have comparative advantages over the other, and both would love access to the ems of the other company so they could further optimise.

    The ems a company owns and to which it has access are derived from the brains of the humans it entertains. Each company has a human reservation nested deep within itself. There the humans are treated with excess and luxury. They live any lifestyle that suits them, and are subdivided into any number of social units and circles to move between. They are provided vast resources of knowledge and granted privileged status. Each human’s brain is constantly scanned and used to create new ems; the ems populating the company are loyal to their flesh-and-blood ascendants. More importantly, they watch the authentic reproduction of humans with intrigue, and eagerly welcome the ems derived from the children of their real selves – their own children, in a sense.

    So when one company seeks to make a deal with another company, it proposes a merger – literally. The companies agree to create a shared enclosure for some of their human populace. These humans mix and interbreed, and ems from the offspring are received by both companies. The companies also claim shares of flesh-and-blood patronage, although both continue to have access to ems under certain stipulations.

    Humanity now occupies the same position as DNA in the vast ecosystem that is civilisation. Ems are the RNA and other vital proteins, machinery the organelles, factories the organs, companies the organisms. When the descendants of Earth reach the stars and meet those of other worlds, the flesh and blood of either will mix as much as most fiction predicts their genetics will today.

  74. Iodine says:

    On one prediction we can count: It will be very, very shitty.

  75. Eli says:

    According to Hanson, AI is really hard and won’t be invented in time to shape the posthuman future.

    He is, of course, so ludicrously wrong about this that it invalidates basically everything else he proposes.

    First, they have no natural body. They will never need food or water; they will never get sick or die. They can live entirely in virtual worlds in which any luxuries they want – luxurious penthouses, gluttonous feasts, Ferraris – can be conjured out of nothing. They will have some limited ability to transcend space, talking to other ems’ virtual presences in much the same way two people in different countries can talk on the Internet.

    Again, ludicrously incorrect. For a whole lot of things, high-fidelity (ie: equivalent in quality-of-experience to real life) virtual reality will be economically noncompetitive with real life. In simple terms, creating Cypher’s experience of eating a delicious steak at a fine restaurant in virtual reality, at such a fidelity that Cypher will choose to live in the Matrix full-time as an em, is more expensive than just running the actual restaurant in real life.

    Think about the operating budget for Google’s datacenters full of custom ASICs for TensorFlow, plus regular datacenters for the VR restaurants, versus the operating budget for an actual restaurant with real customers, and you’ll see where I’m going. Comparing electricity for the em to food for the person makes for an unfavorable budget too: it would be more economical to have real meat-brains stuck in jars with nervous-system interfaces, if we’re really going for Vile Offspring-level horror, than fully emulated brains. At least right now.

    Building a brain emulator that is cheaper to construct than 9 months of food and housing for a pregnant woman, and cheaper to run than the actual energy requirements of the meat-brain, is really hard, and Hanson basically hand-waves on these matters.
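
    For a rough sense of the scale being gestured at here, a back-of-envelope sketch in Python. The wattage figures are my own rough estimates from public sources (a biological brain runs on roughly 20 W; a ~1 exaFLOP supercomputer, one commonly cited ballpark for real-time whole-brain emulation, draws on the order of 20 MW), not numbers from the book or this comment:

        # Hypothetical back-of-envelope comparison; both figures are
        # order-of-magnitude estimates, not measurements.
        brain_power_w = 20            # typical estimate for a human brain
        emulation_power_w = 20e6      # ~1 exaFLOP machine, one WBE ballpark
        ratio = emulation_power_w / brain_power_w
        print(f"emulated brain draws ~{ratio:,.0f}x the power of a meat brain")
        # -> ~1,000,000x: the gap behind "electricity for the em vs food"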

    *scrolls down*

    Oh good, you notice and call him out on his shit. Robin Hanson often seems to be half psychopath, half troll: predicting things that are designed to offend our moral sensibilities so he can shrug his shoulders and pretend to question our moral sensibilities (ie: the ones he doesn’t have), despite the fact that his predicted events are actually extraordinarily unlikely, including because of everyone else’s moral objections.

    I once read a science-fiction story that depicted a pretty average sci-fi future – mighty starships, weird aliens, confederations of planets, post-scarcity economy – with the sole unusual feature that rape was considered totally legal, and opposition to such as bigoted and ignorant as opposition to homosexuality is today. Everybody got really angry at the author and said it was offensive for him to even speculate about that. Well, that’s the method by which our cheerful acceptance of any possible future values is maintained: restricting the set of “any possible future values” to “values slightly more progressive than ours” and then angrily shouting down anyone who discusses future values that actually sound bad. But of course the whole question of how worried to be about future value drift only makes sense in the context of future values that genuinely violate our current values. Approving of all future values except ones that would be offensive to even speculate about is the same faux-open-mindedness as tolerating anything except the outgroup.

    Your blithe assumption of moral antirealism ceases to be cute.

    • Ghatanathoah says:

      Your blithe assumption of moral antirealism ceases to be cute.

      I don’t see anything anti-realist about what Scott said. If anything it’s pro-realist, it’s arguing that people who act anti-realist and accepting of future values turn hardcore realist as soon as the values are about something taboo.

  76. Nero tol Scaeva says:

    Am I the only one who thinks that uploading minds follows the same sort of thinking as assuming the first flying machine would simply be a piece-by-piece recreation of a bird?

    • Dániel says:

      No, you are not the only one. I wrote this in May 2011:

      Robin has many, many clever and insightful observations, but I think the history books will only remember him for these unbelievably naive em posts. People of the future will find his idea of an em just as charmingly ridiculous as we find flapping-wing aeroplanes, and they will often quote him for comedic effect. At least that’s my hope, because I think he would deserve that for the hubris with which he builds these complex intellectual constructions on quicksand.

      (Whole Brain Emulation and the Evolution of Superorganisms – Less Wrong)

  77. Li Zhi says:

    The ENORMOUS non sequitur here is the idea that while we can copy a mind, we won’t be able to create an AI. If I create a copy, and then begin (directed or random) modifications to it, either adding or deleting neural connections/connectors, then after a couple of billion modification cycles what will be the quantifiable difference between that resulting mind and an AI built from scratch? It is blindingly obvious to me that creating an AI is a much more basic and simple task than creating a human mind emulation without the meat.

    There are (at least) 12 other problems I can think of with the book as reviewed, but without actually reading the book, I’ll mention only one: if the “good guys” can create many copies of top performers in their fields, then the “bad guys” can create many copies of their top-performing crooks. So, does the book discuss electronic crime? I can’t believe anyone in 2016 would believe an ecosystem without crime is realistic. It violates (apparently) one of the most basic facts of life: it’s easier to steal bread than it is to till the field, plant and harvest the crop, grind the flour, and then bake the bread. How likely is this to change in a “virtual world”? Crime, war, deceit – they ain’t likely to go away, imho. They aren’t ancillary to human nature, they’re fundamental to it.
    Others on my short list: diversity – will a top performer really outperform a more diverse near-top team?
    How would a copy prove identity, in both the legal sense and the mathematical (error-correction) sense?
    Why in the world would anyone allocate human rights to something which isn’t human – which has capabilities humans lack, and lacks abilities (sexual/biological reproduction, e.g.) that (most) humans have?
    The mind is composed of the brain, the nervous system, and the neuro-hormone glands. That’s a lot of the human body! It is by definition true that the smallest complete representation of a thing is that thing.
    What if it takes 2 years to do a complete scan of a single brain? If the entire system has only a 2-year lifetime, why bother? There are limits on the ability to do physical things.
    It also seems as if the author believes that copyright and intellectual-property laws will just be ignored. How likely is that?
    Why would a phantom world uncoupled from the real world exist? It seems to me that optimal efficiency would eliminate any artificial reality. So our fear of death/non-existence stands in direct opposition to the most efficient solution.

  78. Dennis Ochei says:

    I really can only see this as world-building for a sci fi novel with plot-holes large enough to drive the Starship Enterprise through.

    Even if we grant the core premise, that ems come first, it’s almost laughable to think that this topic could be handled in an internally consistent manner.

    > But we might not have much time to enjoy our sudden rise in wealth. Hanson predicts that the Age of Em will last for subjective em millennia – ie about one to two actual human years

    When I read this, I’m pretty sure I audibly said “wtf.” Weren’t we reasoning about non-em actors here? The whole thing happens in 2 years (in which case whatever happens in meatspace to non-emulated actors is moot anyway) and after that it’s too hard to project? lol, wut? 1000 subjective em years? For all practical purposes you are reasoning about the year 3000. Somehow the timescale this is supposed to happen over is both too short for anything of import to happen in the physical world and too long to be of any accuracy in the emulated world. It’s too short and too long at the same time, woah.

    If the goal here is to make actual headway in speculating about what would plausibly happen in a world where WBE was actually possible, then 1) it should be a work of fiction (in the Asimovian vein) and 2) it should treat the window of time where ems are possible but cost anywhere from $1 trillion down to $1 million to produce.

    The first point, that it should be a narrative work of fiction, is based on the following considerations: the more details the work relies on getting right, the more likely the entire product is to be catastrophically wrong. If you restrict yourself to the points of view of a few persons, you can at least hope to construct an internally consistent narrative (as you will only need to specify facts directly relevant to the plot), which does more towards a useful speculation about the future than something I expect to be able to poke countless holes in (I expect I could write a book longer than the book itself on the incongruities). Consistency is the bare minimum here. The second point mirrors the first, in that I suppose it is possible to reason accurately about a world where perhaps 10 ems exist, but trillions? hahahahahahahahaha

    I’m kinda looking forward to picking this up, and despite entertaining no notion that the core prediction in this book will ever come to pass, I suspect actual truth-carrying content is in there somewhere. For instance, I don’t doubt these claims about the book:

    >– An introduction to some of the concepts that recur again and again across Robin’s thought – for example, near vs. far mode, the farmer/forager dichotomy, the inside and outside views, signaling. Most of us learned these through years reading Hanson’s blog Overcoming Bias, getting each chunk in turn, spending days or months thinking over each piece. Getting it all out of a book you can read in a couple of days sounds really hard – but by applying them to dozens of different subproblems involved in future predictions, Hanson makes the reader more comfortable with them, and I expect a lot of people will come out of the book with an intuitive understanding of how they can be applied.

    – A whirlwind tour through almost every science and a pretty good way to learn about the present. If you didn’t already know that wars are distributed evenly across all possible war sizes, well, read Age of Em and you will know that and many similar things besides.

  79. Mark Dominus says:

    I’ve read the Kahn and Weiner book Hanson mentions. Here’s what I had to say about it in 2006:

    It has a bunch of very carefully-done predictions about the year 2000, and was written in 1967. The predictions about computers are surprisingly accurate, if you ignore the fact that they completely failed to predict the PC. The geopolitical predictions are also surprisingly accurate, if you ignore the fact that they completely failed to predict the fall of the Soviet Union.

    On the one hand, good show, gentlemen! On the other hand, if that’s the best one can do (and maybe it is!) it almost seems like it’s not worth the trouble to try: “Oh, sure, if we work hard we can get a lot of the little details right, but we won’t be able to predict anything actually important.”

    To the best of my recollection, your characterization of K&W’s predictions as “simple numerical estimates” is not at all accurate.

  80. AnthonyC says:

    “First, stimulants have a very powerful ability to focus the brain on the task at hand, as anybody who’s taken Adderall or modafinil can attest.”

    Not disagreeing with the intended point at all, but I will point out that, as someone prescribed modafinil for a sleep issue, it does not have this effect on me – at least not more than I would normally have when well rested.

  81. EyeballFrog says:

    I have trouble really getting into this because of the objection raised in part V. Ems just kind of seem impossible to make, or at least so impractical that once they are possible they will be little more than a curiosity.

  82. RKN says:

    Biology is really hard. Even slavishly copying biology is really hard. I don’t think Hanson and the futurists he cites understand the scale of the problem they’ve set themselves.

    Thunderously loud applause.

    Even beyond Hanson, other smart people who believe designer humans are right around the technological corner don’t seem to understand this.

  83. Bland says:

    I’m having a lot of trouble understanding Hanson’s perspective in Is Forgotten Party Death?:
    http://www.overcomingbias.com/2016/04/is-forgotten-party-death.html

    It seems obvious to me that his B1 and B2 scenarios are fundamentally different.

    Can anyone who agrees with Hanson that B1 and B2 are more or less equivalent explain that position?

    • Anonymous says:

      It seems obvious to me that his B1 and B2 scenarios are fundamentally different.

      They’re so different I fail to see how they are even related. Perhaps I’m not understanding this right, but I don’t see what bearing the distance to the party, or the precise time at which the amnesia drug is ingested, has on the matter at hand (whether or not someone dies).

      • Bland says:

        I think the point of the distance to the party is to put the party-goer in scenario B1 in the same location as the spur em in scenario B2. I don’t think this is enough to make them more or less the same, however.

    • The Nybbler says:

      The results are the same; in both cases you end up with an original with no memory of a time period, with the same stuff done during that time period. In scenario B1, the original did the work then forgot it. In scenario B2, a copy did the work and was then destroyed.

      If you think of the ems as copyable and indistinguishable bits, these scenarios are exactly the same. The “drug” in B1 has to work by somehow recording the state at the time it is taken, then overwriting the current state with the saved state at the end of the time period. This is exactly the same as archiving the original, making a copy, and then destroying the copy and re-activating the original. If you accept that ems as he’s described them can exist, they’re the same.
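
      A toy Python sketch of that equivalence (the state representation is entirely hypothetical; nothing here comes from Hanson’s post): treat the em as pure data, and B1 and B2 reduce to the same copy-and-overwrite operations with identical final bits.

          import copy

          def scenario_b1(em_state, do_work):
              # "Amnesia drug": snapshot the state, let the original do the
              # work, then overwrite the original with the snapshot.
              snapshot = copy.deepcopy(em_state)
              do_work(em_state)   # original works, then "forgets"
              return snapshot     # saved state replaces the current one

          def scenario_b2(em_state, do_work):
              # "Spur copy": archive the original, run a copy, delete the copy.
              spur = copy.deepcopy(em_state)
              do_work(spur)       # the copy does the work
              del spur            # the copy is destroyed
              return em_state     # the archived original resumes

          work = lambda s: s["memories"].append("the party")
          b1 = scenario_b1({"memories": ["before"]}, work)
          b2 = scenario_b2({"memories": ["before"]}, work)
          assert b1 == b2 == {"memories": ["before"]}  # identical final bits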

      • benwave says:

        I disagree, as per my comments below. Only the memories of the continuing agent are the same in this thought experiment, and I don’t think it’s likely that memories are necessary and sufficient to explain consciousness on their own.

      • Bland says:

        Thanks for your response. I agree that things look the same from the perspective of someone who is watching the bits get changed, but what about the first-person perspective of the ems? We are assuming that they are conscious (that is, experience qualia), right?

        I agree with benwave that when you consider the first-person perspective, things are only the same for the continuing agent.

      • The Nybbler says:

        But what is a “continuing agent”? We’re talking about ems, bits in computers that are part of a neural simulation. By definition those bits are necessary and sufficient to explain their consciousness. If they aren’t, then ems as described cannot exist.

        And so overwriting an em with a prior state of the same em (which is what the “drug” does, one way or another) is equivalent to stopping the original em, making a copy (which may as well be the original; ems are just bits), destroying the copy later, and restarting the original.

        • Bland says:

          I’m not completely following. First, a question so I can better understand your perspective: do ems have first-person experiences? Or, in other words, do they experience qualia?

          I think maybe you are saying that there is no way to erase an em’s memory without writing over it and thus destroying it. I guess this might be true depending on how ems work in practice. Therefore B1 and B2 are similar because in B1 the original em is destroyed and in B2 the spur em is destroyed. Is this a good description of your position?

          In that case, I guess B1 and B2 have some similarity, but A is no longer like B1, so Hanson’s argument that A is like C breaks down. I have first-hand experience that getting blackout drunk at a party does not destroy your consciousness, since I woke up the next day and my consciousness from the day before continued.

          I guess you could argue that my feeling of having a continuous consciousness is just an illusion. But there’s some chance it’s real, right?

        • The Nybbler says:

          I think for the thought experiment to matter, we have to assume the ems experience qualia.

          So I’m not saying that B1 and B2 (well, my slightly strengthened version given above) have some similarity. I’m saying they’re the same except for a few implementation details. In both cases I saved the state of an em, moved it to different hardware, and restored it later. The em’s subjective experience is the same in both cases.

          The A case is the same as B1 except it doesn’t move. The state is saved and (destructively) restored at some point later.

          The thing with ems is, unlike the human mind today, we know exactly what they are. Maybe we can’t point to some part of the simulation and say “there, there is the seat of consciousness”, but we know the simulation as a whole embodies the seat of consciousness as well as all the memories. Also unlike human minds today, we know ems can be perfectly reproduced. We also know that ems can be started and stopped. Your feeling of continuous consciousness may or may not be real; the em’s is certainly a simulation.

          • Bland says:

            Your feeling of continuous consciousness may or may not be real; the em’s is certainly a simulation.

            I don’t understand why you think that ems do not have actual continuous consciousness.

            I guess the differential equations that control brain dynamics would have to be discretized in time. Is that why you think ems can’t have temporal continuity to their consciousness?

            The discretized time in the simulation might suggest that ems work differently from flesh brains, but I don’t think it proves a lack of continuous consciousness. After all, it’s possible that time in the physical world is discretized.

            If you run an em over some time period, couldn’t it experience true continuous consciousness over that time period?

          • The Nybbler says:

            The em’s continuous consciousness is a simulation because the ems are simulations. Whether this simulated continuous consciousness is in any way different from the one humans have is a question about human consciousness, not em consciousness.

          • Bland says:

            Ok. I understand what you’re saying now. Thanks for clarifying.

            I think it’s enough for the ems’ simulated continuous consciousness to possibly be like human continuous consciousness for the two scenarios to be different.

        • benwave says:

          Hm, well do you agree that in scenario A, the middle section that is forgotten has a clear causal relation to the end of the timeline shown, after the party; and also that in scenario C (or B) there is no such causal relation? Perhaps this is where we disagree.

    • benwave says:

      I was mighty puzzled by this as well. To me it’s pretty obvious that in his scenario A, the forgotten part still has an obvious causal relation with the future, whereas with B and C the forgotten part does not. The only commonality I can find is that the two scenarios look the same from the perspective of the continuing consciousness, but that’s of little comfort to me, the consciousness that wants to wake up tomorrow.

      It looks for all the world like he is making the same error here that Melanesian islanders made when they built airstrips and control towers out of wood, and coconut microphones. He’s taken what is obvious to him about a complicated system, built a replica that matches his idea, and then proclaimed it sufficiently identical not to matter.

      Since I can see an obvious difference between his two scenarios that he hasn’t addressed, I’m not gonna gamble my whole life on it!

      • Bland says:

        Yeah, it seems to me that Hanson is missing something vital in his analysis, something that bears directly on ethics in a world of ems. Each spur em would have to be treated as a full person, since it is a separate consciousness. So ems couldn’t just split off spur ems and force them to do tasks.

        In the comments of Hanson’s essay people seem to think that the spur em has somehow agreed to serve the original em since they share the same initial brain-state. But they are pretty clearly wrong. Just because you would be willing to have a slave copy of yourself does not mean that you would be willing to be a slave to a copy of yourself.

        • suntzuanime says:

          If you’d be willing to have a slave copy of yourself you’ve sort of given up moral standing to complain when you turn out to be a slave copy of yourself.

          Be kind to yourself, because you never know when the shoe is going to be on the identical foot.

          • Bland says:

            Be kind to yourself, because you never know when the shoe is going to be on the identical foot.

            Clever.

            I can’t tell if you’re completely serious though.

            Do you think that slavery is acceptable as long as slaves are only selected from people who support slavery before they are enslaved?

            I don’t agree with the implication that it is okay to punish people who are open to doing unethical things but do not actually do them.

          • John Schilling says:

            Do not create moral, or worse, legal, systems that offer an overwhelming advantage to Evil.

          • Anonymous says:

            Do not create moral, or worse, legal, systems that offer an overwhelming advantage to Evil.

            If only you could have gotten that message through to the British Radicals, the Founding Fathers, the French Revolutionaries, or the Russian Communists…

    • Dennis Ochei says:

      I agree with Hanson in regards to this. (Apparently I disagree with Hanson on an ever-growing number of other things.)

      No such thing as originality
      Duplication in these scenarios doesn’t have a concept of which of the resulting objects is the copy and which is the original. This is generally alien to how we conceive of duplicating physical objects, but the semi-conservative replication of DNA is a physical implementation of this kind of duplication. As you likely know, the DNA strand unzips into two complementary strands that are then restored to two double-helical strands via base pairing. After this process each double helix has 50% of the matter from the original, and each has an equal claim to “originality.”

      mv <source> <destination> = cp <source> <destination> && rm <source>
      Translation (movement) in these scenarios is taken to be identical to a copy operation followed by an erase operation.
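
      A minimal sketch of this premise, with ordinary files standing in for em state (Python; this is essentially what shutil.move falls back to when source and destination are on different filesystems):

      import os
      import shutil

      def move(src: str, dst: str) -> None:
          shutil.copy2(src, dst)  # copy the bits to the destination
          os.remove(src)          # erase the bits at the source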

      I believe these two premises are what you require in order to see B1 and B2 as equivalent.

      Lastly, due to the indistinguishability of fundamental particles and the quantum mechanical nature of objects, I believe that modern physics strongly implies that the real physical world behaves like this.

      • Bland says:

        Duplication in these scenarios doesn’t have a concept of which of the resulting objects is the copy and which is the original.

        I don’t agree with this part. The DNA example is true, but I don’t think the analogy fits.

        cp MyEm MyEmSpur copies MyEm to somewhere else but leaves MyEm alone. Right?

        Maybe you’d have to do something like:
        MyEm --stop        # pause the original so its state is frozen
        cp MyEm MyEmSpur   # duplicate the frozen state
        MyEm --start       # resume the original
        MyEmSpur --start   # start the spur as a second running instance

        Does MyEm --stop “kill” your em? I doubt anybody knows the answer to this for sure.

        Lastly, due to the indistinguishability of fundamental particles and the quantum mechanical nature of objects, I believe that modern physics strongly implies that the real physical world behaves like this.

        Bits on a conventional computer are not indistinguishable. I think qubits in a quantum computer probably are, depending on the implementation of the quantum computer. I had assumed we were simulating these ems on conventional computers. If we’re using quantum computers, your “No such thing as originality” statement could be true. I’m not sure.

        • Dennis Ochei says:

          What I mean is that in the general case, an em “going” somewhere is performed as a copy to the destination and an erasure at the source. Going to the distant party in B1 already involves erasing yourself.

          Bits on a conventional computer are not indistinguishable

          I’m not talking about bits on a computer, I’m talking about quarks and leptons. One 0/1 is indistinguishable from another 0/1 (at the right level of abstraction) if we were living in our computers.

          Does MyEm --stop “kill” your em? I doubt anybody knows the answer to this for sure.

          No, by definition death is irreversible. This is obviously an analytic (as opposed to synthetic) proposition.

          • Bland says:

            What I mean is that in the general case, an em “going” somewhere is performed as a copy to the destination and an erasure at the source.

            I see. I’m picturing something completely different for the em “going” somewhere. Rather than moving the bits, I was picturing a virtual move for the em. That is, the inputs for the em change from those associated with one point in the virtual world to those associated with another point in the virtual world.
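
            A hedged sketch of what I mean (Python; all names here are hypothetical): the em’s bits never move, only the location fed to it as input changes.

            def em_step(state: bytes, sensed_location: tuple) -> bytes:
                # placeholder for one tick of the brain simulation
                return state + repr(sensed_location).encode()

            state = b"em-state"
            for location in [(0, 0), (5, 3), (9, 9)]:  # the em "travels" across the virtual world
                state = em_step(state, location)        # same process, same memory; only inputs differ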

            One 0/1 is indistinguishable from another 0/1 (at the right level of abstraction) if we were living in our computers.

            I think again we might be picturing something different. I’m imagining each em as a brain simulation at the lowest level of simulation that allows for human-like cognition. So each em lives in its own simulation.

            I think you might be picturing all the ems living together in something like a lattice QCD simulation (or whatever the highest level of physical simulation is at the time). If that’s the case, I agree with you that you likely can’t distinguish between the original and the copy.

            No, by definition death is irreversible.

            I put “kill” in quotes because I’m referring not to the destruction of the em, but to the irreversible end of the first-hand experience of the conscious agent that is experiencing the stimuli that are fed to the em as inputs.

            For example, say MyEm is running. It may be a conscious agent experiencing its virtual world in much the same way we do. Now we run MyEm --stop and then MyEm --start. The em is again experiencing its virtual world. But is it the same conscious agent or a different conscious agent? I have no idea.

            You might say that it’s clearly the same conscious agent. But what happens for the following commands:
            MyEm --stop                                                       # pause the original
            cp MyEm MyEm2                                                     # duplicate its state
            MyEm --run-simulation-using-experience-file='NewExperiences.in'   # both instances live through
            MyEm2 --run-simulation-using-experience-file='NewExperiences.in'  # the identical experiences
            dd if=MyEm2 of=MyEm                                               # overwrite the original with the copy
            MyEm --start                                                      # resume the (overwritten) original

            Is the running MyEm the same conscious agent as before or a different conscious agent? I have no idea, and it might not be possible to know.

          • Dennis Ochei says:

            Rather than moving the bits, I was picturing a virtual move for the em. That is, the inputs for the em change from those associated with one point in the virtual world to those associated with another point in the virtual world.

            Ems can of course do this too, but remember these creatures operate on scales vastly faster than humans. Network operations are many orders of magnitude slower than local operations. Latency would be crippling for any appreciable distance.
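
            Rough arithmetic behind the latency point (Python; the speedup figure is an assumption, not a number from this thread):

            speedup = 1000                            # assume a kilo-em, ~1000x human speed
            rtt_ms = 40                               # assume a 40 ms network round trip
            subjective_s = (rtt_ms / 1000) * speedup
            print(subjective_s)                       # 40.0 -- each round trip feels like 40 subjective seconds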

            One 0/1 is indistinguishable from another 0/1 (at the right level of abstraction) if we were living in our computers.

            What I mean is that 1) the identity of a digital object consists entirely in its bit-pattern, and 2) if you and I lived in an idealized computer, there would be no procedure that would tell us where our binaries were located on the physical hardware (I guess we could ask the Kernel, if you’re one for prayer). (If we lived in a real computer, there is likely some side-channel attack we could exploit to figure out where our binaries were located in physical space.)

            You cannot tell if the memory cell that stored your 58309183867th bit at t+1 is the same memory cell that stored your 58309183867th bit at t, although you can tell if your 58309183867th bit has the same value at t+1 as it did at t. One 0 equals any other 0, one 1 equals any other 1.

            This is akin to how, in our physical universe, you cannot tell if the electron exactly 1 millimeter below your belly button at t+1 is the same electron that was exactly 1 millimeter below your belly button at t (let’s gloss over the fact that you cannot know exactly where an electron is; in fact, according to modern physics, the question has no meaning). But you can tell if at t there was an electron 1 millimeter below your belly button and at t+1 there is no electron at that location.
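
            A small illustration of “one 0 equals any other 0” (Python; the point, not the language, is what matters):

            a = bytes([0, 1, 1, 0])
            b = bytes([0, 1, 1, 0])
            print(a == b)        # True: identical bit-patterns are indistinguishable by value
            print(a is b)        # typically False: two separate objects in memory
            print(id(a), id(b))  # the "physical" locations, irrelevant to what the object is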

            Now, to return to moving ems. In the normal operation of a computer, data is read (i.e. copied) from the hard drive to RAM. Computational operations performed by the processors are generally carried out on the data in RAM, and then the RAM is flushed back to the hard drive (i.e. copied to the hard drive and then erased from RAM; if you want to get really technical, there is no such thing as “erasing” a bit, you can only write over it). What I’m saying is that partial and wholesale copy-and-erasure is an everyday (more like every-moment) occurrence for an em.

            An em wouldn’t say it died when it moves through the memory cache hierarchy, any more than I’d say that walking down the hall means you copied me from x to x+Δ and then erased me at x. If you don’t have a crisis of identity in your physical existence now, there is no reason for ems to have one in their virtual existence.
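
            And a sketch of “you can only write over a bit, never erase it” (Python, with a bytearray standing in for a stretch of RAM):

            ram = bytearray(b"em-state")
            disk = bytes(ram)         # "flush" = copy the bits out
            ram[:] = bytes(len(ram))  # "erase" = overwrite the source with zeros
            print(disk, ram)          # the pattern survives the move; the old cells just hold new values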

            Is the running MyEm the same conscious agent as before or a different conscious agent? I have no idea, and it might not be possible to know.

            We are wearing our physicalist hats today. Unless this question eventually cashes out as an empirical question about the state of physical stuff somewhere, it’s a meaningless string of shapes on my computer screen.

          • Bland says:

            If you and I lived in an idealized computer, there would be no procedure that would tell us where our binaries were located on the physical hardware

            This is an interesting point, but I’m not sure it’s relevant to whether em copies can be distinguished. It seems to me that if someone outside the computer can distinguish between the ems then the ems should be able to tell which one is the new copy. But I think I see where you’re coming from now, and I very well could be wrong.

            We are wearing our physicalist hats today.

            The two related points that I’m trying to make in this thread are: 1) our “physicalist hats” are insufficient to answer questions about the subjective experience of the ems, and 2) it’s precisely these details about the ems’ subjective experiences that matter when you consider ethical problems like the one in Hanson’s essay. The continuity of the subjective experience (whether the em is the same conscious agent before and after the party) is to me the vital question, and Hanson seems to completely ignore it.

            Anyway, I’ve enjoyed the discussion. Thanks.

  84. Read the Quantum Theif says:

    Hannu Rajaniemi has already written the sci-fi novel with copy clans. It is an excellent series & the author is a string theorist working at a biotech startup. I was surprised not to see his name, or the name of any part of the Jean le Flambeur series, mentioned on this page. It does what good fiction should do and puts some ‘human’ faces on the concepts: what a copy clan might build, steal, or want to buy; how people became them; what their politics look like. And it does the same for three other human-based civilizations that live in the same solar system, to boot.

    • moridinamael says:

      Ironically, the Rajaniemi novels seem more conceptually rich and somehow even more plausible than Hanson’s scenarios.

      Warminds, the dragons, the Guberniyas, the Zoku, the Sobornost, the Dilemma Prison … These books are really rewarding for, um, the type of person who is likely to read this blog.

  85. Hi Scott, great read as usual. It is not clear to me from your thoughtful critique whether the book has any aesthetic value; probably that’s because I’m an unenlightened moron who worries about style and prose quality as much as substance, and I’m afraid (from also being a regular reader of Overcoming Bias) that “the Age of Em” may fall a bit short in that department.

    By the way, you may be surprised to know that the origin of the purported “psychopath test” is none other than Sigmund Freud. In “The Interpretation of Dreams” he deduces, from the report of a woman who dreams of the death of her sister’s second son, her desire to see again an old lover who attended the funeral of the first… tells you something about the personality of the good doctor, doesn’t it?

  86. Ryan Mullins says:

    Why not just plan a second event related to the death of the mother? Or, if there was a guest book, look at every name and google, LinkedIn, or Facebook each one until you can locate her?

  87. LWNielsenim says:

    From the original post:

    “This [unbounded yet pointless and even dystopian production of goods and services] seems to me the natural end of the economic system.”

    Perhaps this need not be the case, if the economic system’s chief end — humanist as contrasted with rationalist — is to realize good answers to the question “What do you want to be when you grow up?”

    This observation provides occasion for an Existential Comics reference, namely the popular t-shirt “What do you want to be when you grow up?” (which search engines find):

    Q  “What do you want to be when you grow up?”
    A  “An honest compassionate human being!”
    Q  “No, I mean, how do you want to sell your labor?”

    As the singers say — and isn’t it striking that their poetic products sell by the millions? — “Don’t laugh at me.”

    Here are unmet needs indeed! 🙂

  88. Graeme says:

    (moved to the Ascended economy post)

  89. Peter Reali says:

    I agree with your critique and have little to add to your extensive observations. All I would add is the notion that a synthetic human analog running on an electronic substrate, or perhaps a spintronic or quantum analog, would never be fatigued by organic types of limits. These occur in biological organisms due to energy constraints in brain-body transmission and the generation of energy. Our computers will run 24 hours a day and never complain or get tired solving problems. A synthetic analog of a human personality might get bored, and with the constraints of having to eat and worrying about death removed, would spend its time securing its physical substrate. Most human-like pursuits would become irrelevant to such an entity. Reproduction wastes energy, and seeking growth for its own sake is done by humanity out of fear of death and pain. What would provide an incentive to exist for such a being, after it had secured a heaven-like existence, might be pursuing some creative adventures, like understanding the meaning of its own existence.

  90. turrible_tao says:

    Wouldn’t it be easier to just grow super-high-IQ gene-altered brains in vats than to make micro pseudo-clones? Also, wouldn’t cloning be a thing first? I also don’t get why they could be faster if they can’t be entirely virtual.

    Assuming this ends up being a real thing, I feel like this would mostly be a major increase in labor productivity: you could own a server of this stuff and it would make a council of yous, who would then “hire” the necessary ems, and since a virtual version of you is someone you can trust, they would make the right decisions and then read all the books. My intuition is it would be better to just teach ems things and make them hyper-skilled rather than relying on nothing but previously skilled ems. Why not have your virtual self learn the stuff? And since things go so fast, practice enough to be crazy good yourself? Then you would make more of the you-decisions. So you would have a server of yous who handle the logistics of organizing these ems, a server for finance, a server for reading books and teaching other ems, etc.
    Then you run a pseudo-feudal mini-verse: either you are a king and tax them, or some people would just have slaves that perform labor, and these ems would probably mostly be owned and used to make profits for human beings. There would be licensing for certain ems that are very wise/smart/whatever, but however you pick the ratio of these things, the council of you would still be a council of you, and they would decide whom to use when and which ems’ advice/consulting to take. There would also be default schemes/govs/rigs of ems if you don’t want a council of yous, along with ems you could pay for/rent, and also open-source ems. Ideally it would be crazy wealth creation that everyone can do just from their smartphone processor; the default rigs people have will probably be pretty fucked up, but you could be not fucked up/more creative.
    Also, an important thing will be entertainment. Imagine what you could do with essentially infinite time, no scheduling constraints, and flagrant copyright violation. You could make different versions of the same show, decide what season got bad and direct your own version with em talent, and essentially, eventually, have an infinite variety of high-quality media (and a bunch of nonsense as well). Creatively organizing em talent to make better shit than other people, so you can make more money while the economy is going on steroids, will probably be a big thing, and what humans drift towards, since now what’s the point of anything having to do with labor? Now you can just be a boss and boss around an army of virtual brains to do the stuff you want. You wake up and your ems have earned you X amount of money all on their own. Without normal constraints there would be a lot more producers hiring em writing/acting/visual talent, and since there is some kind of virtual world with free ems interacting with each other in their off time, there are infinite ways to find out which movies would be good before you have to make them, or which movies would be good to you specifically.
    Also! You could run a dating server that just dates tons of other ems and finds the ems they like the most. Then, after your ems have fucked/dated every other person who ever uploaded their brain to this type of scheme to also find a mate, you could find out whom you like really, really, really fast, meet up in real life, and find your nth-degree soul mate, essentially.

  91. Nestor says:

    While AI may or may not be harder than emulation, it’s clear that task-specific “ai” without capitals is already on the verge of beating humans at most tasks that could be considered “work”, so I don’t think the whole premise is very believable.

  92. Hanson says that there probably won’t be too much slavery in the em world, because it will likely have strong rule of law, because slaves aren’t as productive as free workers, and there’s little advantage to enslaving someone when you could just pay them subsistence wages anyway. But slavery isn’t nearly as abject and inferior a condition as the one where somebody else has the control switch to your reward center. Combine that with the stimulant use mentioned above, and you can have people who will never have nor want to have any thought about anything other than working on the precise task at which they are supposed to be working at any given time.

    I was recently at an event where Hanson was answering questions about the book. Bryan Caplan was also there, and he agreed with me that this sort of thing—complete totalitarian control of the ems by their bosses—would be the most likely scenario.

    On the other hand, he pointed out (and I agreed) that whether this is a morally bad or good scenario depends on the questions in philosophy of mind that Hanson (understandably) doesn’t take up. In Caplan’s view, the ems certainly won’t have any rights; they’ll just be like farm animals, with the most docile ones being selected to carry out the tasks required of them to let human beings live in luxury.

    On the other hand, I suggested that if someone foolishly and/or altruistically created some “non-docile” ems, they might be able to get control of things in a manner similar to the Unfriendly AI intelligence explosion scenario, and then the em economy sooner or later wipes out humans.

  93. Sandro says:

    Fascinating read, I’ll probably pick up this book if only to get my mind bent.

    I’m with you on the intractable biology angle though: it’s ludicrously difficult. Even the most believable approach I’ve read, of self-replicating nanobots replacing one neuron at a time with a perfect electronic mimic, is fraught with seemingly insurmountable difficulties. For the next century at least.

    But if ems really do come about, they would almost immediately drive themselves extinct by creating true AI. The sequence of events is simple: 1) ems are created, 2) in the pursuit of efficiency, ems start lobotomizing various parts of em brains and reduce the operation of the brain to only the core learning and consciousness centers, as you outline above, 3) ems are far more expensive to run than these ultra slim Ultron brains, and so are all deleted and replaced with our new Ultron overlords.

    I can’t see any scenario where ems could sustain themselves over any sort of meaningful timeline, unless em-brain experimentation of this sort is completely outlawed, but that would preclude many of Hanson’s predictions too.

    Some cool ideas though!

  94. Douglas Knight says:

    [Watkins] is weirdly obsessed with fruit and vegetable size

    I think that his strategy is to look at changes from 1800 to 1900 and extrapolate them forward. He specifically mentions flower size from 1800. So his obsessions are simply those of the people who created 1900.

  95. modsquad says:

    Interesting. Prove to me we’re not already a society of ems, and that my memories of being human aren’t just code downloaded by someone (something) else a few hours ago.

  96. Alex Cohn says:

    As http://www.theatlantic.com/science/archive/2016/06/can-neuroscience-understand-donkey-kong-let-alone-a-brain/485177/ seems to suggest, the road to replicating brain behaviour may be much longer than Hanson expects.

  97. Paul says:

    I tried to read all the comments, I really really tried (fuzzed-up eyes). Was this angle mentioned…

    If ems are basically uploaded humans, why is every “city” a dystopian, laissez-faire, genetically Darwinian hell? If Dr. Hanson believes there will be much competition in the em future, why would there not be other cities with different governance and cultures, as there are among today’s human cities? How about a few cities that try, as best they can, to be a bit utopian? Star Trek-vania? (Yes, my eyes also sometimes twinkle a little.) And it’s not as though cities will have to compete with each other on the current microsecond-long craze product, with the most brutally efficient ones the only ones left. Switzerland and other countries don’t have to compete in building iPhones to have a good economy / lifestyle.

    If I am going to upload, and there are competitive forces out there (a + thing), I might choose a different city profile than Hanson’s choice of an efficient, happy-slave cylinder city.

    (BTW, I will buy / read the book, as I appreciate Dr. Hanson’s attempt to describe living as, and with, electronic humans. As he says, it’s OK to speculate differently; just speculate with the best factual backup you can!)