
Poor Folks Do Smile…For Now

I got the opportunity to talk to GMU professor and futurist Robin Hanson today, which I immediately seized upon to iron out the few disagreements I still have with someone so brilliant. The most pressing of these is his four-year-old post Poor Folks Do Smile, in which he envisions a grim Malthusian future of slavery and privation for humanity and then soundly endorses it. As he puts it:

Our robot descendants might actually be forced to slave near day and night, not to feed kids but just to pay for their body rent, their feed-stocks, their net connection, etc. Even so they’d be mostly happy.

Robin seems to be a total utilitarian who has no objections to the Repugnant Conclusion. It’s a consistent position on its own (though distasteful to me), and taken at face value it has something to recommend it. I’ve been to some horrible places like Haiti. The Haitians have it tough, but they still sing and dance, they still love each other, they still have hopes and dreams. If the far future is Haiti with better sanitation, it wouldn’t necessarily be the worst thing in the world.

But Robin has a slightly higher bar here. He believes that the near future promises advances in the uploading of human minds to computers, creating cyber-organisms he calls “ems” for “emulated humans”. Ems will have many advantages over biologicals – less need for space and resources, possible elimination of the biological need for sleep, and the ability to be copied and pasted at will. A future of zillions of Malthusian ems competing for hardware and computing power is a little different from zillions of biological humans competing for land and food.

So here is my dialogue with Robin as I remember it. I didn’t take notes, so it’s probably a bit off, and I’m rewriting me being confused and ummming and errrrring and meandering for a while as me having perfectly flowing rational arguments with carefully considered examples. I think I understood Robin well enough to put down what he said without accidentally strawmanning him, but I do notice he was much more convincing, and I was much more confused and challenged, in person than it looks in this transcript, so perhaps I failed. Nevertheless:

Scott: In “Poor Folks Do Smile”, you say that a future of intense competition and bare subsistence will be okay because we will still have the sorts of things that make life valuable. But in a future of ems, won’t there be competitive advantage to removing the things that make life valuable?

Robin: What do you mean?

Scott: Suppose you have some ems that are capable of falling in love and some that aren’t. The ones that fall in love spend some time swooning or writing poetry or talking to their lover or whatever, and the ones that don’t can keep working 24-7. Doesn’t that give the ones that can’t fall in love enough of a competitive advantage that the ones that can will be outcompeted and destroyed and eventually we’ll end up with only beings incapable of love?

Robin: You can’t just remove love from a human brain like that. There’s no one love module.

Scott: It’s probably very hard to remove love from a human brain without touching anything else. But given that the future is effectively infinitely long, and that in a world of perfect competition it would be advantageous to do this, surely someone will succeed eventually.

Robin: Yes, the future is infinitely long. But you’re speculating post-Singularity here, and the whole point of the Singularity is that it’s impossible to speculate on what will happen after it. I speculate on the near and medium term future, but trying to predict the very long-term future isn’t worth it.

Scott: I agree we can’t predict the far future, but this is less a prediction than an anti-prediction. An anti-prediction is…wait, am I doing that thing where I explain something you invented to you?

Robin: No, I didn’t invent anti-predictions. Go on.

Scott: An anti-prediction is…gah, I wish I could remember the canonical example…an anti-prediction is when you just avoid privileging a hypothesis, and this sounds like a bold prediction. For example, suppose I predict with 99%+ confidence that the first alien species we meet will not be practicing Christians. In a certain context, this might sound overconfident – aliens could be atheists or Christians or Muslims, we don’t really know, but since I don’t know anything at all about aliens it sounds overconfident to be so sure it won’t be the Christian one. But in fact this is justified, since Christianity is just a tiny section of possible-religion-space that only seems important to us because we know about it. The aliens’ likelihood of being Christian isn’t 1/3 (“either Christian, or atheist, or Muslim”) but more like 1 in a trillion (Christianity out of the space of all conceivably possible religions). The only way the aliens could be Christian is if it were for some reason correlated with our own civilization’s Christianity – like we went over there to convert them, or Christianity was true and both we and the aliens were truth-seekers. My point is that human values, like love, are a tiny fraction of mindspace. So saying that the far future won’t have them is an anti-prediction.

Robin: Values like love were selected by evolution. We can expect that similar selection pressures in the future will produce, if not the same values, ones that are similar enough to be recognizable.

Scott: The hypercompetitive marketplace of an advanced cybernetic civilization is different enough from an African savannah that I really don’t think that’s true. Love evolved in order to convince people to reproduce and raise children. If ems can reproduce by copy-pasting and end up with full adults, that’s not a society that will replicate the need for love.

Robin: Love is useful for a lot of other things. Probably the same mental circuitry that causes people to fall in love is the sort of thing you need to make people love their work and stay motivated.

Scott: Antiprediction! Most mind designs that can effectively perform tasks don’t need circuitry that also causes falling in love!

Robin: The trouble with this whole antiprediction concept is…so what if I told you that in the far future, people would travel much faster than light. Would that be an antiprediction? After all, most physical theories don’t include a hard light-speed limit.

Scott: The trouble with traveling faster than light is that it’s physically impossible. Are you trying to make the claim that a mind design that doesn’t include something like human love is physically impossible?

Robin: I’m trying to make the claim that it’s not something you can plausibly get to by modifying humans.

Scott: Fine. Forget modifying humans. People just try to build something new and more efficient from the ground up.

Robin: Maybe in the ridiculously far future…

Scott: But we both agree on a sort of singularitarian world-view where “history is speeding up”. The “ridiculously far future” could be twenty years from now if ten years from now they invent ems that can be run at a hundred times normal speed. If the ridiculously far future aka twenty years from now is one where human values like love are completely absent, that seems…really bad. And if we want to prevent it, it seems like that goes through trying to prevent a “merely” Malthusian medium-term future in which people are effective slaves but we haven’t quite figured out how to hack out love yet.

Robin: Attempting to influence the far future is very dangerous. In most cases we can’t predict the long-term consequences of our action. The near future will be in a much better position to influence the far future than we are. My claim, which you don’t seem to disagree with, is that the near future will be non-hellish and preserve human values like love. Let’s let this near future figure out whether the far future will be unacceptable. As time goes on, people gain better ability to coordinate, so the near future should be better at fixing our problems anyway.

Scott: As time goes on people gain better ability to coordinate?

Robin: Yes. In the old days, most decisions were made at the village or provincial level. Now we’re gradually centralizing decisions to the national and often even the supranational level. The modern world is much more effective at coordinating solutions to its problems than the past.

Scott glances at Michael Anissimov, probably the most vocal Reactionary in Berkeley, who has been standing there listening to the conversation. He looks skeptical.

Scott: But I know Michael over here has been writing a lot claiming the opposite. That the modern world is terrible at coordination problems, especially compared to the past. I’m somewhat sympathetic to that argument. In the old days, a king could just declare we were going to do something and it got done. Now we have nightmarish failures of coordination, like the Obamacare bill, where the leftists had a decent and coherent vision for how healthcare should work, the rightists had a reasonable and coherent vision for how healthcare should work, and we smashed them together until we got a Frankensteinian mashup of both visions that satisfied no one. Or how back in the old days, the Catholic Church pretty much controlled…

Robin: Kings and the Church were very good at acting, not at coordinating. They could enforce their choices, but those choices were often terrible and uncorrelated with what anyone else wanted. Modern institutions coordinate.

Michael: But modern coordination is just through increased bureaucracy.

Robin: Call it what you want, it’s still coordinating.

Michael: And the results are often terrible!

Robin: Yes, coordinating seems to divide into two subproblems. The first is getting everyone to agree on a solution. The second is making sure the solution is any good. I don’t claim we have solved the second subproblem, but we seem to be increasingly skilled at the first.

Michael: Really? Like the largest-scale world-coordinating organization we have right now seems to be the United Nations, and it’s famous for not getting anything done.

Robin: The thing with the UN is that at the beginning people expected it to be the umbrella organization under which all world affairs were conducted. But there are a host of other more or less associated organizations like the WTO that are actually doing a lot more.

Scott: You make an interesting case that future coordinating power will be better, but saying “let’s leave this to the future” only works if we know when the future is going to be and can prepare for it. In the case of what Eliezer calls a “foom” where an AI comes and causes a singularity almost out of nowhere – well, if we put off preparing for that for fifty years, and it happens in forty, that’s going to be really bad.

Robin: I think that scenario is very unlikely. In the scenario I believe in, an increase in technology led by emulated humans, change will occur on a predictable path. They will know if we’re on the path to eventual complete value deterioration.

Scott: That makes sense. So I guess that our real disagreement is only over the speed at which a singularity will happen, and whether we will know about it in time to protect our values.

Robin: Sort of. Although as I posted on my blog recently, I think “protecting values” is given too much importance as a concept. If any past civilization had succeeded in protecting its values, we’d be stuck with values that we would find horrible, mostly a mishmash of outdated and stupid norms about race and gender. So I say let future values drift by the same process our own values drifted. I don’t mind if future people have slightly weirder concepts of gender than I do.

Scott: I think that’s kind of unfair. You’re assuming the future will vary over certain dimensions where you find variation acceptable. But it might vary in much stranger and less desirable ways than that. Imagine an ancient Greek who said “I’m a cosmopolitan person…I don’t care whether the people of the future worship Zeus, or Poseidon, or even Apollo.” He doesn’t understand that the future also gets to vary in ways that are “outside his box”.

Robin: It’s possible. But like I said, I think we have a very long time before we have to worry about that. I would also suggest you look at the light speed limit. That means that there’s going to be inevitable “cultural variation” in the post-human world, since it will probably include a lot of semi-isolated star systems.

Scott: I still expect a lot of convergence. After all, if this is a hypercompetitive society, then they’ll be kind of forced into whatever social configuration leads to maximum military effectiveness or else be outcompeted by more militarily effective cultures.

Robin: No, not necessarily. There may be an advantage for the defender, such that it takes ten times the military might to attack as to defend. That would allow very large amounts of cultural deviation from the ideal military maximum.

After this the conversation moved on to other things and I don’t have as good a memory. But it was great to meet Robin in person and I highly recommend his blog to anyone with an interest in futurism or economics.


28 Responses to Poor Folks Do Smile…For Now

  1. Vanzetti says:

    I’m sorry, but once you start to discuss the consequences of removing one ill-defined concept (Love) from another ill-defined concept (Person) you might as well count angels on pinheads.

    • Fnord says:

      Forget love specifically. The point is that “AI optimized for productivity” might look rather different than “neurotypical 3rd world person” and any changes probably won’t be designed with the subjective well-being of the AI in mind.

    • gwern says:

      It isn’t even a good discussion, because we do have plenty of functioning institutions, places, corporations, and people without ‘love’. (Psychopathy comes to mind as being astonishingly common, for something that apparently is impossible pre-Singularity because it’s just sooooo hard.) What about the countless bizarre mental illnesses? What happened to the books by people like Sacks detailing how lesions cause almost arbitrarily complex aphasias and problems?

      Post-Singularity? ‘Ridiculously far future’? It’ll probably happen on day two of uploading!

      > Robin: I think that scenario is very unlikely. In the scenario I believe in, an increase in technology led by emulated humans, change will occur on a predictable path. They will know if we’re on the path to eventual complete value deterioration.

Yes, just like we are beginning to understand the changes in values caused by concentrations of values, have plenty of examples in both practice and research from the financial industry, and have Done Something About It.

      Oh wait. (Fox, meet henhouse. Do get along.)

  2. Daniel Armak says:

Robin suggests leaving decisions to the near future because it will be better informed and coordinated. But it will also have different values from our present, and a different distribution of values if, e.g., there is a huge number of ems derived from only a few human templates. So the near future will not just make better decisions; it will make different and hard-to-predict decisions. That’s a reason for the present to try to influence the far future directly, if we think it’s possible.

  3. Athrelon says:

    Robin: In the scenario I believe in, an increase in technology led by emulated humans, change will occur on a predictable path. They will know if we’re on the path to eventual complete value deterioration.

    Scott: That makes sense. So I guess that our real disagreement is only over the speed at which a singularity will happen, and whether we will know about it in time to protect our values.

This vastly overstates both our desire and our ability to prevent long-term value drift that we might find horrifying. Hanson bites the bullet when this is applied to the distant past, but seems to believe either that conditions will never support really bothersome value drift or that we’ll invent magic cultural tech to avoid it in the nick of time; both claims seem unsupported.

  4. Joe says:

    “But Robin has a slightly higher bar here. He believes that the near future promises advances in the uploading of human minds to computers, creating cyber-organisms he calls “ems” for “emulated humans”.”
This sounds awful. How are these “ems” supposed to sing, or even smile? I would much rather be a poor Haitian.

    • im says:

The experience of being an em would be… like the Matrix? Indistinguishable from normal reality, assuming somebody programmed a rich environment. Also, many people describe emulation without the disturbing human copypasta or other aspects.

      It could be just like running your mind in a virtual machine rather than physical hardware.

  5. James says:

    Hanson’s beliefs about cooperation are incoherent. He says,

    Now we’re gradually centralizing decisions to the national and often even the supranational level. The modern world is much more effective at coordinating solutions to its problems than the past.

    Human emulations are dangerous. There is bound to be some Veppers character, who would simulate people in most exquisite pain, even far in excess of our biological capacity. Uploads might also be used to conduct unsupervised AI research, political subversion and terrorism. The moral dilemmas might tear apart society. And the prospect is distasteful under the most optimistic assumptions.

    Why, if human coordination should increase, would any form of emulation be tolerated by the powers that be? I am no statist, but frankly I would endorse the most interventionist foreign policy, and invasive, even totalitarian police measures, to ensure that em-uploads are not liberally created; not until the rest of human nature, technology and society is transformed. And I am more tolerant of futurism than most.

    Might Hanson, in earlier decades, have discussed the economics of nuclear weapons: how the pocket nuke, and the community H-bomb might transform our lives?

  6. Max says:

    I think that the whole idea that we should defer problems to the future because it’ll be in a better position to deal with them is pretty ill-formed.

    Lots of problems become more and more difficult as the circumstances that formed them in the first place become more entrenched. It’s much easier to say “hey, this proposed government initiative implements some pretty perverse incentives and is likely to result in a self sustaining system that fails to achieve its intended purpose” than it is to scrap said self sustaining system and build something new from the ground up once it’s been operating for decades.

  7. Deiseach says:

    If we’re going to throw predictions out there –

    (1) I have to admit, I’m charmed by Mr. Hanson’s notion that amor vincit omnia and will continue to do so even when running on silicon rather than carbon machinery or in some quantum cranny of the foam of the multiverse. I don’t for one second believe it, but it’s rather sweet to find such a romantic view of human nature when one is blithely discoursing on turning human consciousnesses into some kind of inorganic life and/or creating artificial intelligences from same.

(2) Can’t do without love? Can’t wipe it out that easily? Has he not heard of aromantics? I will gladly testify that, though I don’t use that label myself or consider myself to be of that orientation, I have never been in love. No, not ever. Never. One mild crush but nothing more. Either falling in love is not a universal human experience, or I’m not human (take your pick). You want to talk about familial love, or friendship, or sublimation of eros and libidinal energy into other channels such as work, okay, that’s a discussion that could and can be had. But Valentine’s Day is forever, even for electronic minds? Excuse me while I roll around on the floor laughing.

    (3) Now – on to the meat of the matter. If “emulated humans” come about, I think it will be first (a) a rich man’s toy – something to play around with, to make copies of your mind and see what they’re like, to try and preserve your consciousness or at least a record of same after death; to take an example, say Steve Jobs (and I’m not using him because I have anything against him, just pulled his name out of the air as an example) and his death – wouldn’t it be the exact kind of temptation and appeal to preserve his mind and personality (or a version of it) so that he could continue to be creative, to achieve, not to be lost to an untimely death? The very wealthy, who can afford to indulge this kind of whim, and the very talented/successful, who may be considered worth spending the kind of money and effort to preserve, are the first few who will become “ems”, not a former coal-miner in Virginia who has nothing much past his physical labour to offer. Someone who made a fortune on the stock market or investments or invented wrinkle creams that really do what they say they do and is swimming in dough will be able to live in a virtual world and pay for its upkeep.

(b) Speaking of ex-coal miners and the like, the second thing I could see “ems” being used for, if or when the technology is stabilised and becomes cheap, replicable, and available (for certain values of cheap and available – again, how many Amazonian tribesmen have the latest iPhone – though I imagine a surprising lot more of them than I would imagine do!), then it will be for selling one’s labour or skills. Scott, let me use you as an example 🙂 You qualify as a psychiatrist. You have an “em” made, maybe even several “ems” made. You can sell/license/hire out these “ems” for the kind of work that we’re beginning to see done even now (e.g. encouraging sick people, or people who think they may be sick, not to go to the doctor but ring up a helpline where a nurse will diagnose whether they are really sick and warrant calling an ambulance, or should just take two aspirin and put a hot water bottle on it). Instead of inpatient stays or outpatient clinic visits, the customer goes online and your em reels off a script to see if they’re nuts, then legally prescribes anti-depressants if warranted. Real doctors are reserved for extreme cases, and with more and more ems, what’s the point in having a real flesh-and-blood doctor when forty years’ worth of experience can be cut’n’pasted into five hundred copies for online clinics? You may become obsolete as soon as you graduate from university, unless you’re some kind of prodigy or start up your own company or are the absolute genius TV doctors are portrayed as being.

    Which brings me to (c) the poor. So instead of working on a car assembly line, you’ve been replaced by a robot, and now one guy supervises the team of five robots that replaced the fifty men who used to work there. Now even the supervisor gets replaced – either by an “em” which is a copy of his skills and experience, or one that’s been designed to be able to run automated machinery and supervise the same.

    I don’t see “ems” being treated as a form of ‘new’ human life; I think full human consciousness in such a case will be restricted (to the rich/talented/exceptional as above) and that the commonest type will indeed be “ems” built to spec for particular niches – operating machinery, carrying out call centre and helpline duties, and the likes.

    Why should MediSureMentHeal, Inc. care tuppence about hiring or licensing an “em” that has your liking for chocolate icecream or love of Mozart when all it needs is that the skills and qualifications are copied exactly, the Turing Test measure for interaction with breathers is met (live people still like to at least pretend they’re talking to a real person, so the “em” needs to be able to at least fake ‘hey, how ’bout them Rams?’ type interactions) and that it’s up to date on its legal requirements for diagnosis and prescription (need to keep the liability and malpractice claims down as far as possible). If “Dr. Scott” can go through the checklist of “Is Mrs. Smith suffering from anxiety disorder or just boredom?” and do as well as a real old-time flesh-and-blood doctor on deciding if she is or isn’t mentally unwell, why add in any extras you don’t need?

    • ozymandias42 says:

      …And if we assume that the ems Deiseach talks about in their last paragraph matter morally, I don’t think that kind of em is *bad* from a utilitarian perspective. Why not have an em that’s adapted for and completely fulfilled by being a psychiatrist do psychiatry, rather than a human more-or-less adapted for the African savannah?

      Although my brain objects to getting rid of the emotion of love altogether. I’m not sure why.

      • Deiseach says:

        I could certainly see rich people or – let’s be fair, not all the rich are like this – vain people deciding to have “ems” of themselves rather than offspring. Propagating themselves without the whole messy business of having kids, and giving themselves moral points for not burdening the planet with extra warm bodies at the same time.

        I can’t see (although it could happen, I suppose) that a whole population of “em” humans will spring up alongside or replacing – well, what term should we use? flesh humans? At least, not complete whole personalities. Like I said, modified and specialised models made to be Dr. Bob or Sexy Sadie, but why go to all the trouble of having a Real Working Human in A Box when you can get any old one of these for sixpence the dozen?

      • Deiseach says:

        Regarding the moral question – oh, ho ho ho. Now we’re getting into the area of “What is a person or what defines an entity as such?” and since that’s a fraught topic when we talk about it in the context of abortion, I’ll let you guys do the hair-pulling over that one.

I think that only ‘full’ copy “ems” would be considered to matter morally (and the first inheritance court case where the family try to claim the estate that the “em” continuation of the deceased wants for his/her own possession is going to be fun to watch) but the stripped-down models like the “Dr. Scott, Psychiatrist” licensed copy will be something in between – not a ‘person’ but not quite a thing, either. Maybe on the level of an animal, so you can’t be needlessly cruel? If the level of sentience or self-consciousness or whatever seems to approach nearer to what we consider “human”, then it would be easier to treat it as having more rights than the model that’s only “Joe the Truckdriver’s road haulage skills” driving the fleet of transport lorries.

        • Mary says:

          To be sure, if you mean by “love” not merely lust but all affectionate relationships, we hit an age-old description:
“Man is by nature a social animal; an individual who is unsocial naturally and not accidentally is either beneath our notice or more than human. Society is something that precedes the individual. Anyone who either cannot lead the common life or is so self-sufficient as not to need to, and therefore does not partake of society, is either a beast or a god.”

  8. Mary says:

    For some notion of the problems that you may have attacking a star system, I recommend this:

  9. I’m puzzled about what most of this has to do with possible Malthusian features of the future.

    When people imagine “love” being eliminated due to competition, do they imagine minds with fewer positive emotions than we have? Or with emotions that are hard to compare to our current emotions? I find the former disturbing but far from an inevitable result of Malthusian competition, and I don’t find the latter disturbing.

    It’s odd that in the post to which you refer Robin speculated a bit on how many descendants we might have in a million years, yet you quote him as saying “But you’re speculating post-Singularity here, and the whole point of the Singularity is that it’s impossible to speculate on what will happen after it.” [I think I was listening to that part of the conversation, but my memory isn’t good enough to say how accurately you’ve quoted him.]

    • Scott Alexander says:

      I’m sure I didn’t quote the words accurately, but I thought I got the general idea right.

So my idea of Malthusian competition is that it limits your options. If there’s more than enough resources for everyone, you can devote your resources to “inefficient” uses – that is, to things other than succeeding at the competition. Lots of things we value, like love and music, are things other than efficiently competing for resources.

      When resources are scarce and competition becomes more important, there will be less opportunity to devote resources inefficiently without getting outcompeted and (in a Malthusian world) destroyed. Therefore, eventually all resources will be devoted to Malthusian competition.

      Except in very strange circumstances, love and music are not part of effective competitive strategies, so once it becomes possible to engineer beings that don’t waste time caring about love and music, those beings will outcompete beings that do.

      It’s hard to figure out exactly where this will end. Part of me wants to say “Well, music is still strategically useful for things like selling it to other people”. But this assumes other people can’t just engineer out their desire to listen to music, which is strategically more advantageous than buying music. It seems possible that all of this only ends with everyone devoting all of their resources to production of computronium and military assets to defend that computronium from people who would steal it. Anyone who deviates from that imperative even slightly gets outcompeted either economically or militarily and ceases to exist.

      In such a world no, I don’t anticipate any emotion that corresponds even slightly to our “love”.

      • I agree that music is mostly expensive signaling, and would become insignificant after long periods of unrestrained Malthusian competition.

        But emotions such as love consist of states of mind plus some signaling. It’s not obvious to me that simply having a state of mind corresponding to a positive emotion causes inefficiency.

        The signaling part of love serves a purpose related to exchange of information that results in descendants. I expect that the far future will contain exchanges of information that create minds that are sort of descendant-like, and that there will be something like signaling used to decide what to exchange. That will provide pressures which maintain something arguably similar to love, but not necessarily similar enough to satisfy what people want today.

Robin may believe in unrestrained competition, but that’s not the only way to reach the total utilitarian goal of maximizing the amount of experience in the universe. If we’re willing to accept some minor slowdown in the rate of expansion (no Burning the Cosmic Commons), it ought to be possible (but hardly easy) to coordinate so that those who spend 99% of their time working don’t get to colonize more solar systems than those who work 90% of the time. I approve of such coordination.

  10. Kaj Sotala says:

    You can’t just remove love from a human brain like that. There’s no one love module.

    Right, but even ordinary old-fashioned selective breeding is very successful in influencing various traits in a relatively quick time, as discussed e.g. here and here. Those breeders don’t even have access to any fancy self-modification techniques that uploading would enable. Robin himself has argued that a world of ems could lead to much stronger evolutionary pressures, and we already seem to have a bunch of existing variation in how likely it is for different people to experience love…

  11. Grognor says:

    Poor folks do NOT smile. I’ve seen them.

  12. jaimeastorga2000 says:

    I don’t believe in moral progress, and so I find myself disagreeing with Dr. Hanson about value drift. It was to the past’s perdition that they let their values drift into our values, and it will be just as much to our perdition if we let our current values drift into some weird future values none of us can currently imagine. It won’t be because the future is “better” in any meaningful sense; it will be just because it has gone wrong, from our perspective.

    Now, if you took a person from today and sent them into the future, they might end up agreeing with those values eventually, just as I agree with Eliezer in expecting Benjamin Franklin to come around to our way of life after a sufficiently long period of adjustment to living in the 21st century. However, such a change in values would be because of humans’ social nature, not because one set of values is objectively better; I expect that if you sent someone to the past, in such a way that he did not know it was the past and thus was not filled with the idea that it was an antiquated way of life his culture had grown out of, he would likewise come to accept its values eventually, provided his quality of life did not greatly diminish. So, for example, if someone today were sent to medieval Europe to live among the noble caste, surrounded by noble friends, and not already set on the idea that his native time was a huge improvement on his present one, I expect that eventually he would come around to the idea that feudalism and the lot of serfs is the appropriate way of things.

    • Ghatanathoah says:

      I’ve changed my mind about moral questions in the past. I’ve (somewhat more rarely) also changed my mind about questions of aesthetics. In these cases the reason for my change has not been conformity or sociability. It has been due to an increased understanding of what exactly my values are and what they imply. It has also been due to other people pointing out things I valued about something that I had not noticed before. Another big thing has been mistakenly believing a heuristic I used was a terminal value, and realizing my mistake.

      Some of the changes in the values of our society are probably due to fashion or drift. But others are due to genuine moral progress. Moral progress, in this case, is understood as having a more coherent, rigorous, and complete understanding of what exactly our values are.

      So I think you’re partly right. There may be some changes in the future that will horrify us, and we will be right to be horrified. But this is because these are changes to our fundamental values.

      However, there will also be changes that will horrify us, but for which we’ll be wrong to be horrified. These changes won’t be to our fundamental values. They will be the adoption of new heuristics to help implement those values. They will also be new, more clear and coherent understandings of our most fundamental values. These will appear horrifying, but that is because we mistake them for a change in Fundamental Values, when really they are just clarifications or changes in instrumental heuristics. These changes are “moral progress.”

  13. Deiseach says:

    It strikes me that in a way, we already have “ems”. This ‘me’, this ‘Deiseach’, that is typing right now exists only on the internet; here is where she lives and moves and has her being. Real-space or meat-space me is very, very different. There’s four (four and a half, if I stretch it a bit) “emulated mes” out there in the ether.

    So I’ve created a copy of ‘me’ with the bits I want or am willing to exhibit to other people, cranked up some attributes I hope will seem attractive or appealing, suppressed others (for this ’em’; other ’ems’ have some or all of the suppressed bits on show), and maybe faked having some that I don’t have in reality. I’ve done the kind of tweaking we have discussed with my “em”.

    Scott’s “em” on here may be the nearest to a full-copy Scott. I certainly do not consider that “Deiseach” is a separate life-form or successor to me, and if Scott or any other person decided to ban “Deiseach” from coming on here, or I decided to ‘pull the plug’ on her, I don’t and wouldn’t think of it as the destruction of a new type of human life.

    All of us on here are “ems” in some form or another; I would be very surprised if anyone said “No, this ‘me’ is the exact same as the ‘me’ you’d meet if you went round to my house right this minute”. Is it really so big a jump from this kind of “em” to the kind I’ve considered, the model tweaked to a certain spec for a certain job and hired out or sold by the flesh original? On the other hand, I think we’re still very far from “Deiseach” being a ‘real’ person and a New Human. I think we will continue to be a long way from those kinds of new persons, not just because of tech, but because of human attitudes.

  14. Anonymous says:

    I think the related conversation that would be more interesting is Robin Hanson’s pro-value-deathism stance vs. Vladimir Nesov’s.

  15. Robin Hanson says:

    Maybe you and I should do a podcast, and then we could get the transcript right. 🙂

    I’m happy to admit that things could get very strange on a timescale that is short, even when compared to ordinary human lives. If we try hard we might predict the next era. Even if that era lasts only a short absolute duration, it is the best basis for guessing the eras after that. And yes, we should probably let the next eras figure out how to deal with subsequent eras, and focus our efforts on dealing with the next era.

    We should have little confidence that very particular unusual features of our world will last to the very distant future, unless we try to coordinate to take control of the universe and change them. I don’t think we are up to doing that well now, but maybe someday, as we get better at coordinating.

    Knowing what is a particular unusual feature and what is a general robust feature requires knowing something about a natural distribution over possible features. Given such a distribution you can make predictions about what is unlikely or likely. Calling those “anti-predictions” seems a bit misleading to me.

    Human minds are now where most useful knowledge is held and reasoning is done. Future minds are quite unlikely to be designed from scratch without reference to humans. Instead, they will be designed initially to work well with humans and to inherit investments made in dealing with humans. Afterward, there will continue to be big gains from working within existing frameworks and standards rather than trying to start all over from scratch. In this way human mind design should have a lasting future legacy. And yes, that includes aspects of love.

  16. Pingback: Superintelligence » Bayesian Investor Blog