One vision of the far future is a “wirehead society”. Our posthuman descendants achieve technological omnipotence, making every activity so easy as to be boring and meaningless.
The pursuit of material goods becomes a waste. A nanofactory or a quick edit to a virtual world can already give you a mansion the size of a planet. Although economic activity may still exist in competition for computing resources, all beings in these competitions will be smart enough to behave perfectly optimally (and therefore in a way that makes even the illusion of free will impossible) and so first-mover advantages will be insurmountable. Economic differences will compound on the sub-second scale until different classes are so far apart that competition becomes impossible.
When sports risk becoming contests of who can enter the higher integer in the $athletic_ability variable of the computer that determines the universe, the World Anti-Doping Agency says everyone must compete using their original human bodies – assuming such things even exist at that point. But neither spectators nor athletes care about the result, since everyone is smart enough to simulate the game in their minds on a molecule-by-molecule basis long before it happens and determine the outcome with perfect accuracy – turning the actual competition into a meaningless formality.
Works of art become gradually less interesting; everyone can extrapolate back from the appearance of a painting to exactly what the mental state of the artist must have been at the time it was painted. Nor does the art enlighten, since the conceptual organization of everyone’s mind is already optimal and the only intellectual differences between entities are insurmountable ones of available computing resources.
As for Science, everything was discovered long ago. If it hasn’t been, discovering it is a brute-force application of the best-known Bayesian reasoning algorithms.
(And developing better algorithms is also a brute-force application of the best-known algorithm-discovery algorithms.)
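To make the “brute-force” picture concrete, here is a toy sketch (the coin example and every number in it are invented for illustration, not anything specified above): enumerate every candidate hypothesis, multiply in the likelihood of each observation, and renormalize. With unlimited compute, “discovery” reduces to this loop.

```python
# Illustrative sketch of "brute-force" Bayesian reasoning (the coin example
# and all numbers are invented for illustration): enumerate every hypothesis,
# multiply in the likelihood of each observation, and renormalize.

def brute_force_bayes(priors, likelihood, evidence):
    """Posterior over hypotheses after a sequence of observations.

    priors: dict mapping hypothesis -> prior probability
    likelihood: function (observation, hypothesis) -> P(obs | hypothesis)
    evidence: iterable of observations
    """
    posterior = dict(priors)
    for obs in evidence:
        for h in posterior:
            posterior[h] *= likelihood(obs, h)
        total = sum(posterior.values())
        posterior = {h: p / total for h, p in posterior.items()}
    return posterior

# Toy question: is a coin fair, or biased 90% toward heads, given three heads?
def coin_likelihood(obs, hyp):
    p_heads = 0.5 if hyp == "fair" else 0.9
    return p_heads if obs == "H" else 1 - p_heads

post = brute_force_bayes({"fair": 0.5, "biased": 0.5}, coin_likelihood, "HHH")
# After HHH the "biased" hypothesis dominates the posterior.
```

The point of the sketch is that nothing here requires insight; scaling the same loop over every hypothesis a mind can represent is exactly the joyless exhaustiveness the post describes.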
Even in the most utopian such world – one where the dominant minds are concerned with maximizing the happiness of everyone else – it sounds pretty boring.
One approach is the imposition of artificial limits. Entities can deliberately refuse to use their full cognitive capacity and so experience uncertainty, choice, and feelings other than that of algorithmically choosing purpose-appropriate algorithms. Maybe some entities will deliberately take on human brains and bodies, and interact with other such entities in a human-level world in order to operate at the level with which their value system is most comfortable. Maybe in order to avoid the temptation to call on their full omnipotence every time they experience a little pain or hardship, they will deliberately “forget” their posthuman status, living regular human lives utterly convinced that they are in fact regular humans.
(I assign moderate probability that this has already happened)
Other entities may have no time for such games. They may cope with the ennui of posthuman existence by reprogramming away their capacity for ennui, with the absence of aesthetic or scientific outlets by programming away their desires for such. Instead, they just reprogram their brains to be deliriously happy all the time no matter what, and spend their time sitting around enjoying this happiness.
The futurist community calls this “wireheading”, after an experiment in which rats had an electrode hooked up to the reward system of their brains, which they could stimulate by pressing a lever. The rats frantically pressed the lever as much as possible in preference to doing anything else, including eating or sleeping (they eventually died). Stimulating the reward center directly was much more attractive than other activities that might yield some indirect neural reward only after work.
The same pattern occurred in humans, specifically chronic pain patients who had similar wiring installed in their heads in the hopes that it might alleviate their problem:
At its most frequent, the patient self-stimulated throughout the day, neglecting personal hygiene and family commitments. A chronic ulceration developed at the tip of the finger used to adjust the amplitude dial and she frequently tampered with the device in an effort to increase the stimulation amplitude. At times, she implored her family to limit her access to the stimulator, each time demanding its return after a short hiatus. During the past two years, compulsive use has become associated with frequent attacks of anxiety, depersonalization, periods of psychogenic polydipsia and virtually complete inactivity.
It’s unclear to what degree these wires are making the subject so stupendously happy that she desires to maintain her bliss, or whether they’re instilling compulsive behavior. Likely there are some elements of both – just as in wireheading’s more prosaic younger sister, everyday drug use. But drug use is messy, and wireheading is perfect.
Wireheading is commonly considered an ignoble end for the human race – our posthuman descendants reduced to sitting in dingy rooms, taking never-ending hits of some ultra-super-drug, all their knowledge and power lying fallow except the tiny fraction necessary to retain delivery of the ultra-drug and pump nutrients into their veins.
On the one hand, it probably beats desperately trying to figure out something to do more interesting than setting your $athletic_ability statistic to 3^^^3 and playing sports. On the other, it’s hard not to feel contempt for beings that choose such a pathetic existence.
But I recently realized how unstable my contemptuous feelings are. Imagine instead our posthuman descendants taking the form of Buddhas sitting on vast lotus thrones in a state of blissful tranquility. Their minds contain perfect awareness of everything that goes on in the Universe and the reasons why it happens, yet to each happening, from the fall of a sparrow to the self-immolation of a galaxy, they react only with acceptance and equanimity. Suffering and death long since having been optimized away, they have no moral obligation beyond sitting and reflecting on their own perfection, omnipotence, and omniscience – at which they feel boundless joy.
Pictured: ultimate reality
I am pretty okay with this future. This okayness surprises me, because the lotus-god future seems a lot like the wirehead future. All you do is replace the dingy room with a lotus throne, and change your metaphor for their no-doubt indescribably intense feelings from “drug-addled pleasure” to “cosmic bliss”. It seems more like a change in decoration than a change in substance. Should I worry that the valence of a future shifts from “heavily dystopian” to “heavily utopian” with a simple change in decoration?
You should recognize that you don’t understand value, and that your clever logical deductions that rest on the notion of value are built on top of sand. That’s the lesson you should take from this.
How would one go about repairing such a deficiency as not understanding value?
If I figure it out I will shout it to the world.
In other words,
“Should I worry that the valence of a future shifts from “heavily dystopian” to “heavily utopian” with a simple change in decoration?”
Yes.
Relevant short story.
I’ve been trying to find the reference verifier in my blind spot all morning but I can’t find it. Something is wrong.
to me, your positive feeling for the lotus-god vs the wirehead future looks like a framing effect. i’ve never felt that pleasure mitigates boredom over the long term. one of the epithets some buddhist writings use to describe the experience of enlightenment is ‘the state of no more learning’. that is the perfect definition of hell.
If you’re still experiencing boredom in your wireheaded state, then you’ve done something wrong. In a world so advanced that you can rewrite your own cognitive architecture on a whim, erasing boredom and every other possible negative experience as a part of the wireheading process would be trivial. To truly reject wireheading is to accept a scenario in which the experience really is the most positive experience you could possibly undergo given your available resources (up to and including a constant feeling of maximal personal satisfaction and a permanent perception of novel experience if necessary), and to reject it anyway (which I do, incidentally).
I’m also quite OK with total wireheading even on an emotional level. Maybe having both experienced suicidal depression (in the past) and very good drug experiences in the midst of it has affected my attitude. Now I frankly don’t care too much for all the convoluted philosophies of value that can be applied to the problem; if we’re reasonably sure that deathism would look absurd to posthumans, why cling to anti-wireheading too?
After all, if we sort of theoretically value the existence of challenge and achievement in the universe, but don’t truly desire the associated downsides for ourselves, we could probably eventually create beings that would not experience dissatisfaction as disutility, and let them live in an “interesting” world. (Altering ourselves to dissociate those might be too radical a change for our subjective sense of continuity. Then again, if you believe that e.g. teleportation via destructive cut-and-paste is okay, you might also be okay with going from suffering distress to enjoying distress.)
At the risk of being flippant:
…nope, still sounds awful to me.
I totally agree. Still pretty darn awful, not much more to say.
Agreed. This has always been my main (though not only) issue with Buddhism.
I guess we could keep a few of these lotus gods around, y’know, as furniture, while the rest of us explore infinite video game worlds or build planet-sized works of art.
Eyup. That was my reaction too.
I think that’s a construal-level effect. You’re thinking about the culmination of a big, globally important trend at (what amounts to) the end of time. Whatever final state the universe ends up in is telos, the end it’s been striving toward since forever, and that knowledge casts a tint on everything that came before.
My post – the perils of perfect introspection – seems relevant here.
This post seems like a repeat of http://lesswrong.com/lw/xr/in_praise_of_boredom/. I’m not sure how much pseudomalice to attribute to this particular act, since it is indeed plausible you could have done it accidentally and without the accident being internally motivated.
> Maybe in order to avoid the temptation to call on their full omnipotence every time they experience a little pain or hardship, they will deliberately “forget” their posthuman status, living regular human lives utterly convinced that they are in fact regular humans. … (I assign moderate probability that this has already happened)
There’s not just a “little” pain and hardship in the world. There’s way more than would be necessary for interesting limits; in fact, I think the amount of suffering necessary for a life within interesting limits is precisely zero. For one, beings could be motivated by different gradients of bliss instead of by pain and pleasure. Another idea is that the world could be set up in advance so that suffering “could” happen, but with the initial conditions chosen so that no real being ever actually ends up suffering, thanks to “lucky” circumstances. The superhumanly intelligent Buddhas could think of a million more ways.
Oh, excellent question.
My instincts align with yours, and I’m not sure why either.
I can think of two reasons why I lean that way, one good, one bad.
The good reason is, the lotus-throne scenario implies a functioning infrastructure, even if it’s fully automated. All the “being gleaming and clean and not sitting in your own excrement” and “knowledge of the universe” implies that the infrastructure is still reliable. Whereas the dingy room suggests it’s going to break down soon, and then where will you be?
The bad reason is pure social respect and acceptability. Lots of things we want to do, or don’t want to do, because we expect other people to respect us for it, even if they’ll never know. To some extent this is rational (choosing a respected career gives you more options, because it’s easier to move into a different career if people think you’re awesome), but to a large extent it’s just automatic even when it’s not a good idea.
I imagine “wireheading” as a process that removes a capability of the person to think, to optimize, to even care about something else. My heuristics scream at me that this is an extremely dangerous state. The Buddha scenario supposes we could have the advantages of wireheading, without the disadvantages. My heuristics don’t object to that, if that would be possible.
Hmm, I share your okayness with the latter but not the former, but the salient fact for me is that I perceive a drug addict in a dingy room as low status, but a god on a throne as high status. I’m more worried the preference flip means that value is inextricably bound up in status competitions.
My automatic reaction to drug-addiction wireheading is negative. My automatic reaction to lotus gods is “well, I guess posthumans would know what the best thing to do with themselves is, I am not sure why I get an opinion” with a sprinkling of “why do I have to care if Nirvana is a good thing, neither I nor anyone I know is going there.”
I find myself okay with the *existence* of Lotus Gods, but they seem more like a nice thing to have in the *background* of “everything that goes on in the Universe … to each happening, from the fall of a sparrow to the self-immolation of a galaxy, they react only with acceptance and equanimity. Suffering and death long since having been optimized away…”
Whereas wireheads generally imply everything but the wireheader is just terrible (but they don’t care, because all that matters is them caring, not the things they care about.)
>Should I worry that the valence of a future shifts from “heavily dystopian” to “heavily utopian” with a simple change in decoration?
Your criteria for judging wireheading bad seems to have been “It makes me feel icky,” so of course using words with different connotations to describe more or less the same thing will give you different feelings.
Humans seem designed for a journey, not a destination. A future with nothing left to do is hard to imagine. Pleasure is a feedback mechanism that uses feelings to incentivize and reward certain behaviors (eat, mate, etc). Wireheading feels wrong, it seems to me, because it separates the reward from the goal. Presumably in the dingy drug addict scenario, those goals are still needed. In the nirvana scenario, those goals are not needed. But that doesn’t mean that to an observer – or likely to a participant – it won’t feel like cheating to artificially grant the reward-feelings without the goal-striving behavior.
I agree with you completely on this. (Hope it’s ok to post comments that are just agreement.)
I am a bearacrat: I try.
You are a serf: you try to try.
He is a lumpenprole: he tries our patience.
Them fucking reactionary bears, man.
…I am really wondering whether “bearacrat” is a typo for “bureaucrat” or whether there’s some reference I’m missing here.
I like this future better.
I share your intuitions on this. It doesn’t seem like a dilemma to me, since I’ve long been convinced that our aesthetic and value judgments on philosophical questions are easily altered by framing, and I guess I’m used to the idea by now, so its philosophical implications no longer bother me.
But anyway, I think this particular puzzle only arises in a consequentialist framework where utils quantify subjective happiness. The wireheading scenario seems unpleasant because we associate it with addiction, helplessness, and loss of control. The lotus Buddha scenario seems admirable because we associate it with hard work, discipline, accomplishment, and extreme self-control. The wirehead is just handed his pleasure on a silver platter; the lotus Buddha has to work for it before he is rewarded, at least in the non-posthuman setting. It seems like utilitarians would therefore actually prefer wireheading (assuming hard work has negative utility), while virtue ethicists would still prefer the lotus Buddha.
But in a posthuman setting where neither the wirehead nor the Buddha has to work for the pleasure, and it’s the exact same pleasure, then yeah, it really does seem like a matter of framing. And lest anyone think otherwise, our reliance on framing is not a cognitive bias that we can overcome through rationality; it’s a core aspect of how our cognition works. In order to reason about situations that we have no experience with (such as the far future), we have to think of them in terms of things we do have experience with, like drug addiction and lotus Buddhas. (Most of us have no direct experience with lotus Buddhas, so presumably our positive impression of them also arises due to framing.) But anyway, this means that the more abstract and hypothetical and distant something is, the more our aesthetic judgments of it will depend on framing. This is in contrast with things we can experience directly (or watch someone else experiencing), since then we know directly what kind of pleasure/pain is involved.
It’s also worth noting that the Lotus Buddha was originally based on a reaction against a very ugly world which is rapidly ceasing to exist.
>And lest anyone think otherwise, our reliance on framing is not a cognitive bias that we can overcome through rationality; it’s a core aspect of how our cognition works. In order to reason about situations that we have no experience with (such as the far future), we have to think of them in terms of things we do have experience with, like drug addiction and lotus Buddhas.
It is striking how much this resembles what many people would say about abstract mathematics. Nonetheless, at least in mathematics, it is not so bad. By thinking repeatedly about “situations that we have no experience with” we get some “modeling experience” and they become familiar, albeit in a non-direct way. So I don’t think the situation with reliance on framings is quite so hopeless as you suggest.
Oh! I actually don’t think that our reliance on framing and analogical reasoning is a flaw in how human cognition works, just a fact of life. My perspective is basically “Human reasoning is amazing, and we couldn’t do it without framing, so let’s embrace framing as one of the many things that gives rise to human-level cognition.” This is what I was trying to get across by pointing out that framing effects are not cognitive biases that we should try to overcome. I didn’t mean to say “Ha, humans are doomed to irrationality forever!”
But yeah, our capacity to abstract away from concrete situations is truly astounding! Abstract thought in general is. I wish I had a better understanding of the processes by which we form abstract concepts.
This blog has suddenly become *terrifying*.
I get the feeling David Chapman is tangentially related to this post.
That would probably make them a helluva lot more interesting to me, actually. YMMV.
But only for a moment. As opposed to now, when you can extrapolate back but not be certain or in agreement, so there is a lot of discussion about it, and after a time of contemplation or discussion you can realize new perspectives on it. Versus everyone seeing a painting and instantly realizing, “Yup, that shade of red in the sunset represents the subject’s anger at his father” and the artist saying “Well of course, R1263 is the optimum hue and intensity for paternal anger.”
I’d expect that there is still more to talk about even after the artist’s mental state is fully recovered; for example, one could compare it to the states of other artists who made other works, or to the viewer’s own state.
It’s important to distinguish between ideal wireheading and less pleasant scenarios. For example, a wirehead that makes us as happy as we can possibly be, while we are being taken care of by robots/nanotech run by an AI is one thing, but imperfect wireheading in the current world may not be that different from drugs and actually make some people who use it less happy in net. That said, I endorse wireheading, and think that opposition to it largely comes from how counterintuitive it is. As you said, it seems ignoble. But others have said that a peaceful rational commercial society is ignoble, because it doesn’t let man express his violent passions, and it seems that opposition based on something being “ignoble” is purely aesthetic. To the extent that aesthetic arguments help convince people, depicting people as gods in bliss is better than describing them as drug addicts with the perfect high.
So long as it’s not perfectly optimal to expend cognitive resources in piercing the illusion of free will, and why would it be, this Utopia seems to protect it rather than threatening it.
> Maybe some entities will deliberately take on human brains and bodies, and interact with other such entities in a human-level world in order to operate at the level with which their value system is most comfortable. Maybe in order to avoid the temptation to call on their full omnipotence every time they experience a little pain or hardship, they will deliberately “forget” their posthuman status, living regular human lives utterly convinced that they are in fact regular humans.
> (I assign moderate probability that this has already happened)
In a sense, this has definitely already happened. See: World of Warcraft. Somebody once commented that it’s telling that we can now create virtual worlds where almost anything could be possible, and by far the most popular one is one which puts strict limits on what you can do and forces you to spend considerable time grinding in order to get access to the various goodies whose availability is artificially restricted.
Indeed, some people define “game” by the presence of artificial limitations that force you to jump through unnecessary hoops in order to get what you want. (“Oh, chess is about capturing my opponent’s king? Well that’s easy, I can just grab it off the table with my hand. Whaddya mean I have to move all these pieces in order to do it?”) And of course, part of getting immersed in a game requires that you intentionally forget the fact that all of those rules are artificial and arbitrary, and take them as givens and utterly unbreakable.
Relative to the lifeless plastic or wooden figures in any board game, say, we are already omnipotent gods. And every day, countless gods step into the realms of these powerless beings, intentionally forgetting their divinity and becoming voluntarily bound by the laws of the lesser realms.
The continued existence of games is indeed one of my greatest hopes for our survival, and our wanting to survive, the lure of wireheading.
I don’t think people choose to play WoW from a position of perfect knowledge and self-awareness, as the “Lotus Buddhas” would. Games aren’t designed to be fulfilling, only to be popular – which they become by capitalizing on people’s natural compulsiveness, and by padding out the gameplay with lots of grinding. In other words, games are selfish like genes are selfish (or at least that’s my theory). I think an actual “Lotus Buddha” would be able to do better.
>Games aren’t designed to be fulfilling, only to be popular
I think you have a narrow experience with games.
What about art or story games? (Mass Effect might be a good commercial example.) Those are the only ones I’ve ever really lost myself in.
Buddhism presents itself as a global optimum – the legitimately happiest state.
Drug addiction usually gets portrayed as a local optimum – you only stay addicted because the pain of quitting is too much, so you content yourself with what happiness you can find within the addiction.
Since your brain probably can’t comprehend “wireheading” as a real concept, I’d expect it’s just extrapolating from a baseline. I don’t like drug addiction, so I really don’t like wireheading-as-extrapolated-from-drug-addiction. I like buddhism, so I’m fond of wireheading-as-extrapolated-from-buddhism.
For one thing, one form of wireheading suggests there’s a “real” me trapped inside, in a lot of pain, and just using drugs to try and hide from that. The buddhist wireheadnig suggests that I’m “genuinely” happy 🙂
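The local-versus-global-optimum framing can be sketched with a toy hill-climber (the happiness landscape and its two peaks are invented purely for illustration): greedy search started near the small peak stays stuck there, because every one-step move looks worse – the addiction story in miniature.

```python
# Toy hill-climber (the happiness landscape is invented for illustration):
# greedy search from the small peak stays stuck there, because every
# one-step move looks worse -- the local optimum of the addiction story.

def happiness(x):
    # Two peaks: a short one at x=2 ("addiction") and a tall one at x=8.
    return max(0, 3 - (x - 2) ** 2) + max(0, 10 - (x - 8) ** 2)

def hill_climb(x, step=1):
    """Greedily move to the best neighbor until no neighbor improves."""
    while True:
        best = max([x - step, x, x + step], key=happiness)
        if best == x:
            return x
        x = best

stuck = hill_climb(2)   # starts on the short peak and never leaves it
found = hill_climb(7)   # close enough to climb the tall peak instead
```

On this framing, Buddhism claims its peak is the tall one; the quitting-pain of addiction is just the valley between the two.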
It’s fine to find wireheading awful, as long as you keep your grubby monkey paws away from my switch.
> As for Science, everything was discovered long ago. If it hasn’t been, discovering it is a brute-force application of the best-known Bayesian reasoning algorithms.
Why would this be uninteresting? “Brute-force” sounds bad, but it wouldn’t have to feel the way that sounds from the inside. I only know that I’m not an optimal reasoning algorithm because I can see my reasoning fail. If it turned out that all my actions were ‘optimal’ in some sense (you have to be a bit careful here because of bounded resources), that wouldn’t affect how interesting it is to choose them.
Nice catch.
One plausible use for eternity and unlimited power: exploring the space of conscious experience. Experimenting with different sensory modalities and stimuli. Integrating the memories of a million lives.
What is it like to be a really *big* mind? Can’t call it boring until you’ve tried it.
Your latter scenario is very strongly reminiscent of the effects of Ayahuasca. Except the thing about Ayahuasca is that, while not wholly unpleasant, it is not the careless escape from the world that heroin seems to be. I also don’t know anyone who uses it who craves it. In fact, the times I’ve taken it, I’ve gone in with some trepidation and even resistance. It brings a sort of serenity that still leaves you fully aware of your surroundings and leads to deep and uncompromising introspection. And in my experience, rather than leading me to want that all the time, Ayahuasca has always pointed me back into the world, towards caring for and communing with others and towards doing my duty humbly and virtuously.
Which is to say, if this is any indication, I think there is a nontrivial, noncosmetic difference between the wireheads and the transhuman Buddhas.
The bliss state is not something you need to wirehead to get, it’s available from ordinary human level meditation practice.
I suggest Robert Thurman’s _The Jewel Tree of Tibet_ for those who are looking for an introduction to this. It includes meditation practice of several different types, emotionally charged visualizations, and a reframing of an internal practice as a universal good.
The sidebar on the history of monasticism is rather relevant to a post-job world.
The first salient difference between these scenarios that springs to mind is awareness, focus. With the Buddhas, you describe:
I think this is important because it’s clear that these wireheads are noticing everything that happens in the universe, grasping the underlying structure of it, seeing the whole web of cause and effect. These wireheads are infinitely more immersed in reality than you or me.
Compare this to the typical conception of wireheading as Addiction To Super Drugs, in which the person is in some featureless room, receiving incredible amounts of pleasure that don’t mean anything or even relate to anything, utterly sealed off from everything else happening in the universe. We imagine wireheads as being totally withdrawn from reality, the maximum exaggeration of the state of a drug user.
So it might be that part of what makes wireheading seem so ignoble is that the wirehead is totally out of touch with the rest of the universe, an existence that seems entirely pointless. It’s surprising that a one-way communication with the outside world (the Buddha wireheads don’t affect reality, they just perceive the hell out of it) takes such a large chunk out of wireheading’s ugliness.
Sorry, just to point out, you mean you fell for that Buddha crap in the first place ;-)?
Yes, those two situations are damn near identical. The problem appears to be that the Buddha situation presses the “divinity button” in people’s brains, that adds an instant veneer of Heavenliness to, well, anything. It’s not even that similar to the real Buddhist Nirvana, which is an upgraded Peace of the Grave that goes beyond even the capacity to be reincarnated into a total destruction of one’s self-awareness.
Which also sounds terrible, but must sound pretty great when you haven’t the bravery or the means to stand up and say, “I can and will make life better than it is right now!” Mind, I could spend a long time asleep, but I’d want to wake up eventually.
And why do the Buddhists make their semi-paradise sound so… dull, anyway? I mean, anyone who’s seen pagan, Islamic, or Jewish paradises can do better. Add some divine feasts, and orgies, and then intellectual puzzles while we’re at it! And that’s just a start! Let there be games, and sports, and plays!
I mean, honestly, why would you bother upgrading yourself to the point where all fun disappears from the world? That doesn’t sound very upgradey to me.
It’s worth noting that some traditional visions of Western paradises are *also* fairly dull.
Relevant.
Well yes, of course. Hence why I immediately started adding things as soon as I’d summoned one.
Also, I actually have a somewhat more cutting point to make about the difference between wireheaders and Buddhas. According to Buddhism, Buddhas are in touch with the Truth of Reality, whereas wireheaders are deliberately cut off from the Truth of Reality. So the Buddha scenario still doesn’t beat anyone’s lower bounds for good heavens to live in, but it does, in some sense, beat plain wireheading by placing value on material and spiritual/moral reality in the first place.
(In fact, deep readings of Buddhism come right out and say that this spiritual/moral reality is Emptiness, which is pretty accurate, empirically speaking. But Western philosophies are making an excellent point regarding our values when they say that we often/always desire to be entangled with an external spiritual universe (often labelled “God”) more than we desire a perfect but ontologically empty happiness.)
I’d tend to agree that in the utopian far future (whether achieved by scientific singularity or divine recreation-and-fulfillment of the world), satisfying/diverting/fun things to do will be created at a faster rate than individual beings are capable of exhausting them. How many hours of fun are there in, say, a strategic/tactical single-player game at the moment? And how many hours does it take to create such a game? Maybe a thousand times as many hours to create such a game as to play one to fulfillment. If our far-future wireheading lotus selves can experience that fun in a millionth of the time, well, they can also create things that will be (momentarily) diverting at a million times the speed. You only need more than one thousandth of the people whose needs are all met to spend time creating diversions in order to provide enough diversions to keep us all entertained forevermore.
If this falls down, it’s because (a) the ratio of time-to-create-diversion to time-spent-enjoying-diversion changes for the worse (I can’t see any reason to expect this rather than the opposite); or (b) the concept of diversion itself will cease to apply. I think you’re assuming (b), but I can’t see any reason to assume that. Diversions for the perfected future humanity will look very different to diversions for us, but I don’t see why they can’t exist, especially when so many billions of creative people will be turning their perfected minds to the task.
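The steady-state arithmetic behind this comment can be sketched numerically (the figures below are my own toy assumptions, not the commenter’s): because each finished diversion can be enjoyed by everyone, supply per person depends only on the number of creators, not on the size of the population.

```python
# Toy check of the steady-state argument above (all numbers are my own
# illustrative assumptions, not the commenter's): diversions are shared,
# so each finished one supplies its full fun-hours to every person.

def fun_supply_per_person(creators, hours_to_create, hours_of_fun):
    """Fun-hours available to each person per hour of elapsed time."""
    games_per_hour = creators / hours_to_create   # finished diversions per hour
    return games_per_hour * hours_of_fun          # each person can play every one

# A game offering 100 hours of fun that takes 1000x that long to build:
supply = fun_supply_per_person(creators=1_000,
                               hours_to_create=100_000,
                               hours_of_fun=100)
# 1000 creators finish 0.01 games per hour; each game gives every person
# 100 fun-hours, so supply is one fun-hour per person-hour -- enough to
# keep everyone else entertained full-time, whatever the population.
```

Under these assumptions a fixed pool of a thousand creators breaks even for any population, which if anything strengthens the comment’s “one thousandth” claim.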
Diversion is an ironic word to use when the pastime becomes all there is.
I realize this is beside your main point, but do you appreciate how wildly implausible either of these scenarios is? We have things like sensitive dependence on initial conditions and logical incompleteness to ensure that no finite beings will ever be perfect problem solvers or future predictors. (And we have infinite beings covered too, though the theory there is murkier.) Even something as simple as the numerous problems with exponential (or worse) worst-case complexity guarantees that there will always be (at least) abstract challenges that a mind can pose for itself.
Wireheading is still an issue, but for myself I know I don’t want it, and I’m confident enough people share this view that society won’t somehow decide to cut off support for all activity other than wireheading.
lotus eaters vs lotus thrones
“The pursuit of material goods becomes a waste. A nanofactory or a quick edit to a virtual world can already give you a mansion the size of a planet. ”
However, since it is cheap and easy, it will be low-status — a cubic zirconia, not a diamond. Unless people are somehow rewired to stop caring about status, the future will consist of people doing difficult, hard-to-fake things in order to get status. The present is already like that. There is status in playing the piano, none in owning a CD of piano music. There is status in being a racing driver, none in playing racing games. Etc.
This thread illustrates the distinctiveness of the LessWrong style of thought.
A commenter mentions planet-sized art and exploration of infinite video game worlds. Why should size of artwork matter? It doesn’t, but size is impressive in our experience. Big cars, big buildings…big art. Massive art. (Recursive art, or sentient art would at least fit the LessWrong genre.) “Video gaming”, on the other hand, is trivial: the content of future sentience, simulated or otherwise, is under debate.
Another commenter says that journeys are more important than destinations, failing to distinguish between experiences of future entities and present-day ethics.
These comments are similar in that although the authors nominally believe in the singularity, or some technologically transformed future, such beliefs don’t animate their world view. What motivates one’s peers, what is high status? These are motives for action. When discussing greyed-out asocial beliefs, the normal reasoning style is to relate these to one’s immediate context and its tropes.
Eliezer Yudkowsky, riding an historic wave, argues that this is archaic and irrational. Reality is not socially constructed, although social construction is an important feature of the ordinary man’s behaviour and sense of reality.
Yudkowsky, as he trumpeted his brand of rationality, did not realise that he had committed a progressive faux pas.
Consider this Mandela statue. Mandela’s arms are said to be outstretched in a gesture of universal brotherhood. However it looks to me, by virtue of a small rotation here and a few lines there, to have a posture of calming dominance. “Sit yo monkey ass down.” Similarly, SIAI was the Singularity Institute, a radical claim to importance, whereas MIRI is more euphonious yet bland.
GiveWell refused to list this institute, the definitive efficient charity, despite apparent agreement with the essence of LessWrong futurism, and Yudkowsky has been browbeaten about the importance of listing academic sources who touch on his subject matter.
“A commenter mentions planet-sized art and exploration of infinite video game worlds. Why should size of artwork matter? It doesn’t, but size is impressive in our experience.”
Hello!
Yeah, you got me!
Exploring infinite video-game worlds comes from an example I used to convince a friend living for millennia wouldn’t be so bad after all – it’s basically combining and , which we’re both fans of – while the planet-sized art comes from .
The general picture seems like a decent Good Future as these things go, but I was definitely going for stuff that sounds appealing to me – hopefully appropriate in the context of the post.
Argh, messed up the formatting. Those over-large links are to places I nicked the ideas from, anyway.
Did you have an actual point in that comment, or did you merely wish to take a bunch of cheap, snide shots at other people with whose politics you loosely imply you disagree?
Imagining myself-as-myself, somehow abstracted into these hypothetical futures, I do see various substantive differences. For instance, should I come across a habitual wireheader, I imagine (assuming that ‘wireheading’ has its ordinary connotation of normal human bodies and brains hacked minimally to be capable of feeling absolute wire-stimulated pleasure) that I-as-myself would, basically, be more ‘powerful’ than he. By which I mean, he would presumably be essentially unaware of my presence, and unable or ill-adapted (or both) to do much to me; whereas, retaining my ordinary capacities of perception and action, I would be capable of killing him, or shutting down his pleasure-wire or his life support, as a relatively ordinary action. (If the wireheader is kept in some form of automated system for his provision which is more robust and/or capable of defending him, the comparison still holds; here, he is merely borrowing the power of others, who are presumably in some sense more impressive. I suppose if the wireheader were himself in some significant degree responsible for the workings of his support system, the comparison would be altered; however, this seems unlikely.) Meanwhile, it is unclear whether I could perceive the existence of the posthuman Buddhas at all, and if I could it is spectacularly unlikely that I would be able to damage them or, indeed, affect them much at all. Meanwhile, they could presumably squish me like a bug as a sub-reflex action, should they ever be disturbed in their eternal serenity enough to care.
Thus, as myself, I perceive the Buddhas as being entities far beyond me, and whom I-as-myself am really not competent to make much judgement on whatsoever; conversely, the wireheader is perceived as an entity essentially comparable to myself, who has degenerated. This seems sufficient difference to justify the change in valence. (For my own judgement, I think a universe consisting only of infinitely serene Buddhas might be a little monotonous; however, I would not disdain their existence, nor strenuously object to becoming one myself, as I would both with regard to the habitual human-ish wireheader.)
>Should I worry that the valence of a future shifts from “heavily dystopian” to “heavily utopian” with a simple change in decoration?
I would be more worried that a future without any even vaguely human-like entities bothers you so little. These types of navel-gazing entities would simply not be human.
If you could see and understand everything from beginning to end, I am sure you would beg for the mere complexities of a dull human life.