Berenice: Michael Huemer thinks there are objective moral truths, because we’ve been moving toward a particular coherent ethical perspective for the past few centuries, and for all we know this could be because that ethical perspective is Objective Truth.
Achitophel: That’s a pretty uncharitable way of putting it.
Berenice: But does this view really deserve more charity? Suppose I said that in the past, almost nobody wore ties. Now lots of people do. This is probably because ties are the objectively correct fashion choice.
Achitophel: What if people in a dozen different civilizations independently converged on wearing ties? Wouldn’t that provide much stronger evidence?
Berenice: People in a dozen different civilizations have converged on wearing ties. Go to France, Russia, China, or Nigeria, and chances are that the most important people you meet there will be wearing ties. Sure, the convergence isn’t independent, but neither was the convergence in values. You don’t think that India becoming a bicameral parliamentary democracy with a bill of rights had anything to do with Britain being a bicameral parliamentary democracy with a bill of rights?!
Achitophel: You’re trying to make it sound like imperial Britain forced their values down India’s throat. And maybe they did. But how come things like representative government, human rights, and decreased torture took off in a bunch of countries that were never colonized at all?
Berenice: Which countries?
Achitophel: Japan? Russia? China?
Berenice: Japan requires an overly restrictive definition of “never colonized”. And China and Russia require a frankly insane definition of “representative government, human rights, and decreased torture taking off”.
Achitophel: Not in an absolute sense! Relative to before!
Berenice: Give me the Yongle Emperor over Mao any day of the week.
Achitophel: Mao was bad. But he pretended not to be. He didn’t say “Let’s go kill a bunch of people because killing is glorious.” He said “We shouldn’t kill people, but sometimes we have to.” He didn’t say “You’re all my slaves, because I have divine right.” He said “We’re all going to work towards freedom together, but the best way to do that is by doing what I say.” He still had more liberal values than the Yongle Emperor, he just did evil despite them.
Berenice: I feel like this is an odd distinction to insist upon when you are sitting atop a pile of skulls.
Achitophel: And Xi is better than Mao.
Berenice: Not too different from Yongle, honestly.
Achitophel: All right. Fine. Let’s forget about independent development by different civilizations. Let’s say we’re mostly talking about the West – which remember, is still a lot of different countries. Britain. France. Germany. Italy –
Berenice: I am aware which countries are in the West.
Achitophel: These countries all converged on the same couple of values. And those values were all coherent with one another. It seems pretty clear that “emancipation of slaves”, “freedom of speech”, “decolonization”…
Berenice: Wait a second. Sure, we’ve done a lot of decolonizing the past fifty years. But we did a lot of colonizing the five hundred years before that. In fact, around 1450 the West switched from barely colonizing at all, to colonizing lots of stuff all the time. If Huemer had lived in 1750, wouldn’t he have argued that the arc of the moral universe is long but it tends toward colonialism? And then declared colonialism an objectively correct moral truth?
Achitophel: Stop interrupting! “Emancipation”, “freedom of speech”, “decolonization”, “women’s rights”, and “democratic governance” are all kind of in the same moral direction, so to speak. Do you agree that Western values, today, not in 1750, TODAY, are all going in a certain coherent direction instead of varying randomly?
Berenice: You know, it’s not just ties.
Berenice: If you think about it, practically every item of clothing has become less ornate. Think of Louis XIV in his huge expensive wig, his shiny blue fleur-de-lis filled fur robes, his carefully sculpted gold cane, his bejeweled ceremonial sword, his shiny red heels encrusted with diamonds, his gigantic outrageous hat, all sorts of weird neckbands and armbands. The Yongle Emperor would have had a more Chinese style, but it wouldn’t have been so different in conception. But nowadays nobody does that, not even the rich people who could afford it. The only time you’ll get shiny jewel-filled robes and fifty different things going around your neck is when somebody wants to look old-fashioned and traditional, like a Pope or Cardinal. And this is true everywhere. De Gaulle dressed more simply than Louis, and Mao dressed more simply than the Yongle Emperor. And when we picture the future, everyone’s dressed in featureless skin-tight suits. Evidence for objectively correct fashion?
Achitophel: There’s probably some driving force that made simplicity of clothing desirable, and which applied equally everywhere. For example, ornate clothing was a good signal of wealth back in Louis’ time. But after the Industrial Revolution, anyone could wear ornate clothing. Once the middle class started showing up to their bear-baitings in ornate fleur-de-lis gowns, wearing them just meant you were too clueless to know they had no value anymore. So countersignaling took over – haven’t we talked about this before? The clothing thing isn’t because of some objectively correct fashion choice, it’s just a side effect of increasing wealth?
Berenice: Ding ding ding! Gold star for you! But why don’t you follow your theory to its logical conclusion and realize that the change in morality is also an effect of increasing wealth? Robin Hanson has just written about this in response to Huemer. Here, I’ll quote him for you:
One of the two main factors by which national values vary correlates strongly with average national wealth. At each point in time, richer nations have more of this factor, over time nations get more of it as they get richer, and when a nation has an unusual jump in wealth it gets an unusual jump in this factor. And this factor explains an awful lot of the value choices Huemer seeks to explain. All this even though people within a nation that have these values more are not richer on average.
The usual view in this field is that the direction of causation here is mostly from wealth to this value factor. This makes sense because this is the usual situation for variables that correlate with wealth. For example, if length of roads or number of TVs correlate with wealth, that is much more because wealth causes roads and TVs, and much less because roads and TV cause wealth. Since wealth is the main “power” factor of a society, this main factor tends to cause other small things more than they cause it.
This seems obviously correct to me and I don’t know why you and Huemer can’t see it.
Achitophel: You didn’t quote Huemer’s response! Here:
Perhaps there is a gene that inclines one toward illiberal beliefs if one’s society as a whole is primitive and poor, but inclines one toward liberal beliefs if one’s society is advanced and prosperous. Again, it is unclear why such a gene would be especially advantageous, as compared with a gene that causes one to be liberal in all conditions, or illiberal in all conditions. Even if such a gene would be advantageous, there has not been sufficient opportunity for it to be selected, since for almost all of the history of the species, human beings have lived in poor, primitive societies.
Berenice: Which gene is it that inclines us to take an airplane when we want to get somewhere quickly, but inclines us to take the bus if economy is more important? Is it DRD4 or SERT? I always forget that one.
Achitophel: You’re saying that it isn’t genetic.
Berenice: Or differently genetic, or complicatedly genetic, or gene-environmental-interactionic. This is what Robin Hanson says:
Well if you insist on explaining things in terms of genes, everything is “unclear”; we just don’t have good full explanations to take us all the way from genes to how values vary with cultural context. I’ve suggested that we industry folks are reverting to forager values in many ways with increasing wealth, because wealth cuts the fear that made foragers into farmers. But you don’t have to buy my story to find it plausible that humans are just built so that their values vary as their society gets rich.
Achitophel: That’s your argument? “We just don’t have good full explanations to take us all the way from genes to how values vary with cultural context?” Your whole point is just an argument from ignorance? Forgive me if I wait until you can come up with a plausible mechanism.
Berenice: You want plausible mechanisms? I’ve got your plausible mechanism RIGHT HERE. To put it in Haidtian terms, the Purity moral foundation, plus a sort of ethnocentrism that corresponds roughly to his Loyalty and Authority moral foundations, are carefully evolutionarily regulated by the prevalence of disease. Purity is the most obvious, given that the disgust reflex is obviously an evolutionary defense against pathogens. The reason you’re grossed out at the thought of touching feces, blood, or rats is that they’re full of plague; the reason you’re even more grossed out by the thought of eating them is that eating things is an even better way to get plague than touching things. Likewise, the best reason to avoid strangers is that they might have strange germs; some twenty million Native Americans learned that lesson the hard way. Humans have an evolved behavior of upping their levels of purity and ethnocentrism under germ threat. Invent sanitation and antibiotics, eliminate most germs, and people naturally tend toward lower purity-concern and ethnocentrism. You get less racism, more sex, nontraditional families, cultural mixing, and all that good stuff. That’s why you get great correlations between the levels of pathogens in a region and the modernity of their values. Go somewhere cold and lifeless like Sweden and you’ll get a liberal utopia. Go to a jungle in the Congo full of creepy-crawlies and everyone will be slashing everyone else with machetes. Really, read the article!
Achitophel: You think antebellum Southerners didn’t like black people because they thought they had cooties? Forgive me if the whole enslavement thing doesn’t seem to follow.
Berenice: I’m not saying that’s the only explanation or even the main explanation. You asked for a possible mechanism. I gave you one.
Achitophel: Fine. Give me a mechanism that explains slavery, then. And don’t you dare say it’s not the main explanation afterwards. Give me the best you’ve got.
Berenice: Have you ever noticed how much more virtuous rich people are than poor people? Poor people shoplift all the time, but rich people almost never do.
Achitophel: I don’t know where you’re going with this, but rich people commit white-collar crime and defraud people out of millions of dollars.
Berenice: Which just goes to show their moral superiority all the more! The poor person sells his principles for a dollar; the rich person holds fast until the temptation becomes absolutely overwhelming.
Achitophel: Shut up and make your point.
Berenice: A lot of moral decisions are a conflict between a principle and a temptation. People with fewer temptations have an easy time looking more principled. Not shoplifting is easy for a rich person, not because they’re more virtuous, but because they’re not in a position where they gain anything by doing so.
Achitophel: And this relates to slavery how?
Berenice: I would argue that we have many different drives and needs, some of which can be raw materials for making morality. Compassion is a drive. Xenophobia’s also a drive. Either one can be emphasized or deemphasized based on what’s useful or practical. If the most important thing for you is coming up with an excuse to enslave other people to make cotton, you might cultivate this primitive xenophobia into a complicated system of institutionalized racism that becomes the value system of your entire culture. If you’re not doing that, maybe compassion wins out. I mean, isn’t it interesting that all of the morally decent liberal people were north of a certain imaginary line, and all of the immoral bigoted people were south of it? And that imaginary line just happened to separate the climate where you could grow cotton from the one where you couldn’t? I’d argue instead that given a sufficiently lucrative deal with the Devil, the South took it. The Devil didn’t make the North an offer, and so they courageously refused to yield to this total absence of temptation.
Achitophel: You make the Southerners sound pretty Machiavellian.
Berenice: No more than the rest of us. I expect that once somebody invents vatburgers, we’ll all gain a sudden respect for animal rights, and recoil in horror that we ever engaged in factory farming. Until then, we come up with various moral justifications for the thing we’re not going to stop doing.
Achitophel: So liberal values are real morality, and older values are just excuses to justify greed?
Berenice: Not necessarily greed. “Necessity” is too strong, “convenience” is too weak, but somewhere in between the two. Back in the old days nobody really knew what STDs were. They just knew if you had sex too many times, you would break out in a horrible pox and die. And so would anyone else you had sex with, no matter how otherwise-pure they were themselves. Under those circumstances, having a very sex-negative morality where the promiscuous people are shunned and driven from society is a basic concession to the survival instinct. You’d be insane not to. But once we figured out testing and penicillin, the reasoning behind that morality died out and we stopped trying to cultivate those values. The sex-negative morality isn’t trying to justify greed. It’s making basic concessions necessary for survival. And you know what? If we suddenly had a zombie apocalypse and all of the gains of civilization evaporated, we’d be back to the old illiberal morality in the blink of an eye.
Achitophel: It still sounds kind of like liberal modern values are the real morality, and other values are just sort of necessary evils.
Berenice: I think it’s more symmetrical than that. A lot of modern values would disappear if we stopped facing modern problems. We worry a lot about racial sensitivity, but if we ever got a society where racism was as thoroughly neutralized as syphilis, we’d probably drop that value pretty quickly too. If we ever totally conquer poverty, so that everyone’s got more than enough, maybe we’ll even stop worrying about compassion and fairness. Likewise, a lot of the democratic values – freedom of speech, freedom from slavery, equality, etc – are based on most countries being democracies which in turn is based on the historical situation. One of the big shifts was from the medieval system of “mostly super-well-trained professional warriors ie knights matter in projecting military force” to “any warm body with a gun matters”. That gave the common people a new level of power and probably led to democracy and the democratic virtues of equality and freedom. Likewise, technology has connected the world to the degree where different races and cultures and ideas are frantically mixing and mutating, making things like tolerance and freedom of thought much more relevant.
Achitophel: What about not torturing people? What about trying to solve poverty?
Berenice: So we’re too egalitarian to worry much about Authority and Loyalty. We’ve got too many antibiotics and contraceptives to care about Purity. But Care/Harm and Fairness seem as relevant as ever. Maybe even more so. Given the advances in journalism, communication, and art, we have the ability to learn about and appreciate the struggles of others in a way we never have before.
Achitophel: That sounds a little forced. I could come up with a counter-story where given the worldwide increase in wealth and our lack of real-life exposure to any starving people or smallpox victims, the Care foundation atrophies away, but given our increasing crowding and exposure to superplagues like HIV and Ebola, Purity becomes obsessively important.
Berenice: *shrug* Maybe Care/Harm really is just the fundamental moral foundation, and the others are epiphenomena to be abandoned as we outgrow them. How does that saying go? – “The last enemy to be destroyed is submaximal global utility; destroying Death just buys us more time.”
Achitophel: So you kind of agree with Huemer after all?
Berenice: Perish the thought! Huemer thinks that this change in values proves there’s an objective morality and we’re moving toward it. The strongest claim I would dare is that one of these axes has always been the one that, all else being equal, would dominate the balance – and this is just the first time all else has been equal.
Ted Chiang’s The Merchant and the Alchemist’s Gate (2007) is an extended meditation on the role of morality in a world whose physicality is absolutely deterministic, such that free will’s role (if any) is restricted to internal cognition, and morality, or the lack of it, can exert no objective effects whatsoever.
Chiang’s point (at least one of them) is that there is still plenty of scope for moral and empathetic cognition … even in this tightly constrained universe.
A great and thought-provoking work that (as it seems to me) is richly deserving of its numerous awards.
My point in posting this is to suggest that Scott Alexander’s essay is perhaps too exclusively rational to encompass Chiang’s meditations.
It seems to me that a consequentialist wouldn’t recognize morality in a world entirely devoid of agency. Thoughts matter, empathy matters, because they compel action – actions which can impact other sentient agents. In a world where cognition is truly a spectator sport – a varied reflection upon programmed actions, which some suggest ours is – I would have trouble taking seriously a materialist suggesting that those thoughts have any moral importance, any more than a pleasant dream is a more just action than a nightmare.
How does Chiang’s work respond to such criticism?
A consequentialist, or at least a utilitarian consequentialist, is allowed to have values about whatever they like. So, in general, there is no reason for ‘thought’ or ‘empathy’ to be valued only because of what actions they produce, and even less reason for those actions to be the only fundamental values. Thus, in general, consequentialists may or may not recognize morality in a world devoid of agency; it depends on what consequences, exactly, they value. Now, you may only value thought and empathy insofar as they produce action, but your values are not the only possible values.
EDIT: misread your point.
I invoke the generalized anti-zombie principle on the idea of cognition as a purely spectator sport, and whatever the current favoured flavour of reasoning about logical counterfactuals is on the lack of meaningful decisionmaking in a programmatic universe.
Tl;dr: compatibilism ho!
Speaking as a consequentialist: the world devoid of agency, where cognition is a spectator sport – is not the world of determinism. The world of determinism and only one fixed outcome is the world where agency exists. The varied world of many (physically or indexically random) outcomes is the world where agency doesn’t exist, because there is no mechanism to correlate cognition with outcomes.
Don’t apply the question of motivation at the level of physics, apply it at the level of decisionmaking. Not “Could I have physically acted differently” but “did I consider other outcomes?” The rest of the logic falls out of plain iterated play, without even having to invoke acausal trade – I want my behavior to be predictable, so I consistently judge certain behaviors as morally bad and engage in some codified form of reciprocal punishment, not because I think I could have achieved any other outcome, but because my cognition thinks that’s the sort of thing to observe that will correlate with the virtuous options coming to the fore in other people’s decision loops in the future.
edit: But do I think it’s all completely fixed in stone anyway? Yeah, sure. But even in a function that’s completely deterministic you can say that some parts cause other parts – when they are computationally dependent, when the behavior of some agent at t=25 is calculated from the behavior of another agent at t=7, which that agent calculates using his cognition. Seen in those terms, the fact that my cognition still judges, even in total determinism, should not surprise you – it follows plainly from the fact that my model of the function we are in predicts good future outcomes are computationally depending on that.
(edit: “But isn’t that a lie? Your own behavior was never going to vary.” Yeah but to claim certain knowledge regarding my own behavior invites paradox, and any attempt to use a preevaluation of my own function in determining my own function is a self-referential loop, which is uncomputable. So I’m actually *logically* safe from that charge – I can always claim uncertainty regarding my own behavior from inside my decision loop.)
edit: oh, and: anybody who can make their mouth move, their fingers type, to detail how he is a spectator to his own cognition… isn’t one. The introspection is fed back into the cognitive system, that’s what it’s for, otherwise it would serve no purpose whatsoever. A true shut-in would be causally undetectable, like Chalmers, the true Chalmers, the one with all the mysterious phenomenal experiences, the one that our Chalmers only writes about because he’s a bit crazy.
edit note: I haven’t read the book though.
This is a very well written compatibilist view. I believe similar things, but I don’t think I could have articulated them as well as you do here. I hope I remember this so that I can copy some of it in future arguments.
Metaphysical libertarians do not believe that cognition is a “spectator sport.” Nor do they believe in a world of “random” outcomes.
Moreover, the world of determinism is most certainly not a world in which agency exists. The scholastics made a useful distinction between an “active power” and a “passive power”. An active power is able to cause the first motion of an object. A passive power causes the motion of an object only as an effect of a previous motion.
In a deterministic world, every cause is passive! (Except, presumably for the first motion in the universe; unless the universe has an infinite past.) There is no such thing as agency in a deterministic world. You can only get it by redefining “agency” to mean “passivity”.
Furthermore, scientific observations do not support the claim that determinism is true. (I will not go so far as to say that they absolutely rule it out in themselves, though.)
The middle part of your argument is sort of rambling and difficult to parse. But I will respond to this part:
There is no reason to claim that certain knowledge of your own behavior “invites paradox” unless your theory is not true. Free will is not an epistemological claim; it is a metaphysical claim. If I am Omega and tell you that you will raise your left arm in 30 seconds, you either can or can’t refrain from doing this. It is a very simple thought experiment.
Okay, but that’s not what metaphysical libertarians claim. They claim that the mind actually does interact with the body, somehow or other. You have the ability to think one thought or another thought, and as a result you will type one paragraph or another paragraph.
Chalmers is a property dualist, which is an odd position that I reject, but even he is not actually an epiphenomenalist. The point is more relevant as a thought experiment. That a “p-zombie” would be causally undetectable is the whole point. It is conceivable, yet physically indistinguishable from a conscious person. Therefore, materialism does not explain consciousness.
If you don’t like “p-zombies” (I myself think they cause people to get off track wasting their time arguing about whether it is physically possible, which it isn’t), think of an AI. It is perfectly conceivable to me that you could program an AI that would outwardly mimic human behavior—but which would actually have no subjective conscious experience. You torture it and it cries out, but it feels no pain. It writes down an argument, but it has no thoughts.
See, to me the scholastics are the one who are warping the definition of agency here. There seems to be an equivocation between an abstruse philosophical meaning more akin to “you must be as Gods” (prime movers all) and an intuitive meaning more akin to http://wiki.lesswrong.com/wiki/Screening_off .
I am studiously ignoring quantum physics, because I believe it’s a massive red herring. Cognition does not have any way to force quantum outcomes! Hence, QM is the “truly random” agency-less world. There is no hope for free will to be found in quantum.
It invites paradox because it allows me to behave in a way that is self-inconsistent. If I know how I am going to act, I can just define my behavior as “do the opposite of what I predict”. This demonstrates that no agent capable of doing the opposite of what they predict can know with certainty how they are going to act. This does not contradict determinism; it merely puts limits on the use an agent can make of the fact that the future is predictable.
It is not obvious to me that this is logically possible. Then again, it is not obvious to me that sufficiently large lookup tables aren’t conscious. [Go on! Ask me about my reasoning! :)]
I think the “abstruse philosophical meaning” is the one that people actually intuitively believe in! People do not believe that their choices are either random or necessitated. Therefore, they believe that they are little prime movers.
Of course, they also believe that they exercise this capacity of free will even in mundane choices. But these are called exercises of agency because they are thought to be instances of the “abstruse philosophical meaning”.
But it’s pointless to appeal to public opinion here.
I am not saying that quantum mechanics somehow proves or is evidence for free will. I am just saying that it doesn’t back up determinism, so it is silly to go on about how determinism is an obvious fact.
The agent is capable of doing the opposite of what it predicts precisely because its behavior is not determinate! Rather, it is free! You could very well program a computer to predict its behavior and then infallibly execute the prediction. Edit: and if you programmed the computer to predict what it will do and then do the opposite of whatever it predicts, you wouldn’t have a prediction—but you also wouldn’t have a behavior. It’s no different from giving it any other kind of contradictory order, like “Don’t obey my orders.”
Moreover, it doesn’t have to be you that predicts your future action. Imagine Omega (the LessWrong jargon for a practically omniscient being who knows everything there is to know about you) tells you that you will raise your left hand. In his prediction, he has taken account of your character, history, and what your reaction will be to being told that you will raise your hand. Or, if you are a strict materialist, he’s just extrapolated the motion of the atoms in your brain. Do you seriously believe that you couldn’t falsify the prediction?
If you do not raise your hand, the theory of determinism fails, since he predicted everything there was to predict and yet did not predict your movement.
I do not think your reasoning is correct.
For one thing, “not logically possible”?! Lots of crazy things that could never happen are logically possible.
Do you accept the reality of subjective conscious experience? For instance, the way pain feels to you? (Or perhaps you are a p-zombie? Maybe that explains the dispute in philosophy…)
If you accept that, you must believe that this is not something that could be spontaneously generated just by building a big enough lookup table. Not even “crazy” dualists think a conscious mind could be spontaneously generated like that! No amount of “If PUNCH, then FLINCH” can bring into being the experience of pain.
Consciousness and intelligence are not the same thing. It is perfectly conceivable (logically possible, surely) for a being to be intelligent but not conscious. A guided missile has a low level of intelligence. Yet I do not think there is any way it “feels” to be a guided missile.
I suppose a being could also be conscious but not intelligent, though that is less relevant.
@ Vox Imperators:
I don’t think that’s true. The correct prediction is merely uncomputable because the prediction itself alters the agent’s behaviour, resulting in an infinite loop.
Why wouldn’t you have a behaviour? The computer will choose to display “A” or “B”. Enter your prediction. If “A” then “B” else “A”.
If Omega tells you the prediction then yes you can falsify it, but so can a few lines of purely deterministic computer code.
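That “few lines of purely deterministic computer code” is easy to exhibit. A minimal sketch in Python (the function name is mine, purely illustrative):

```python
# A fully deterministic "contrarian" program: whatever output you
# predict for it, it produces the other one.
def contrarian(prediction: str) -> str:
    """Display "A" or "B" -- always the opposite of the announced prediction."""
    return "B" if prediction == "A" else "A"
```

No indeterminism is involved anywhere: the program falsifies the prediction only because the prediction was fed to it as input, which is exactly the situation of a person whom Omega tells the prediction.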
“If you do not raise your hand, the theory of determinism fails, since he predicted everything there was to predict and yet did not predict your movement.”
No, it proves that Omega doesn’t exist.
If there exists a being omega such that omega can guarantee to predict the future and is incapable of lying, and omega tells you you will raise your hand in 30 seconds, then that is what will happen.
You imagine yourself being told this, and then not raising your hand, and conclude that determinism is illusory. This seems very much like the modal ontological argument, in that it is a problem with your imagination rather than with determinism.
(As a side note, in the classical uses of Omega (in decision theory), no information is leaked about the future to the decision maker. There is no reason to believe any agent has future knowledge of his own actions (outside of his internal decision making process).)
There’s actually a formal disproof of Omega in halting theory, but the short version is this:
Omega does its scan of you to perform its calculation of what you’re going to do.
Omega determines that your actions change in response to your environment; it can’t predict whether or not you’re going to raise your hand without knowing if a volcano’s about to erupt under you, whether you’ll be transported to a football game, whether or not you’ll suddenly see your long-lost brother and be moved to a fit of joy, etc.
Omega then uses its magical determinism powers to calculate absolutely everything in the universe that could affect your decision to raise your hand on a non-quantum level.
Omega realizes that you’re an ornery bastard and that your decision to raise your hand will be influenced by it telling you whether or not it predicted you raising your hand, so Omega needs to pre-calculate whether simulated-Omega-1 calculated whether or not simulated-you-1 raised their hand. This requires simulated-Omega-1 to simulate another Omega for its model of simulated-you-1, which leads to more simulation, and so forth.
Omega suffers an OutOfMemoryException when it runs out of recursion layers (because of course Omega is programmed in Java).
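A toy version of that regress can be written in a few lines of Python rather than Java (the names are mine and purely illustrative):

```python
def ornery_you(omegas_announcement: bool) -> bool:
    # An ornery agent: do the opposite of whatever Omega announces.
    return not omegas_announcement

def omega() -> bool:
    # To predict the agent, Omega must first work out what it would
    # announce -- which requires simulating another Omega, and so on,
    # one simulation layer per call, forever.
    return ornery_you(omega())

try:
    omega()
    outcome = "predicted"
except RecursionError:  # Python's stand-in for the OutOfMemoryException
    outcome = "ran out of recursion layers"
```

The simulation tower never bottoms out, so `outcome` ends up as "ran out of recursion layers" every time.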
You can change the model. Omega can know that you’ll raise your hand because you’re a character in a fictional example and the author read ahead to what they wrote and adjusted Omega’s behavior accordingly. But this thought experiment doesn’t prove much outside of narratively-influenced universes, so I don’t know why Omega (or future-calculation rather than future-approximation in general) gets as much neural processor time in the rationalist community as it seems to.
This is obviously begging the question.
The agent might be capable of doing the opposite of what it predicts because its behavior is not determined, or the agent might be capable of doing the opposite of what it predicts because the prediction doesn’t take into account all possible causal factors. The question of which is the case is exactly what is at issue.
You could program a computer to predict the behavior and infallibly execute the prediction, but you cannot guarantee that the computer’s prediction is accurate.
I can imagine multiple scenarios here:
1) You are correct that there is such a thing as contracausal free will and I can falsify the prediction.
2) I can falsify the prediction, but only because Omega told me the prediction. If I had not been told that I would raise my left hand, it is 100% certain that I would have, and the reason I ultimately did not is because I processed the prediction before the event came to pass and I am an ornery bastard.
3) I am told the prediction and become resolute in my determination not to lift my left hand. Just then a rock comes hurtling through the window and I lift up my left hand to protect my face.
And that’s obviously not exhaustive. The point is that there are a lot of possible ways to resolve the thought experiment — the thought experiment really doesn’t drive me to believe contracausal free will is any more plausible than I already believed it to be.
Yes, but are there any things that can happen that are not logically possible? FeepingCreature seems to be arguing “logically impossible –> impossible”, not “impossible –> logically impossible”, so this is a bit of a non sequitur.
Can’t speak for FeepingCreature, but I certainly do despite being sympathetic to physicalism/naturalism/materialism! In fact, my only access to reality is through my subjective conscious experience. Physicalism can only be inferred as a good explanation for why my subjective conscious experience is the way that it is.
But the reality of subjective conscious experience has no bearing whatsoever on the question of determinism. Subjective conscious experience could be a completely deterministic phenomenon or it might not. The mere fact that it exists simply doesn’t give us any information either way.
Another note: the rhetorical trick where the anti-materialist accuses the materialist of being a zombie is a stupid and shitty one. It implies that your interlocutor is in some sense less than human because they disagree with you about a (so-far) unfalsifiable philosophical principle. It doesn’t add anything to the discussion. Stop it.
1. It doesn’t seem like you could build up consciousness from a big enough lookup table. However, since we really have no insights whatsoever into the nature of consciousness, we can’t conclude that just because it seems subjectively implausible that it’s definitely false.
2. Even if you cannot build up consciousness from a big enough lookup table, you cannot conclude that it is or is not deterministic based on that fact.
The “lookup table” comparison seems contrived as an unfairly dismissive response to materialist arguments as well. There’s any number of things we might compare consciousness to besides a lookup table, and many of those possibilities produce much less of a scoff reaction. (E.g. “consciousness is like a virtual machine running on a computer; it’s an information-based process that could hypothetically be emulated on any substrate, but nonetheless requires some kind of implementation in hardware; ultimately, it’s a thermodynamic process that orders information locally but disorders information globally through the expulsion of waste heat.”)
I don’t know why some people are so vehemently against the very idea that a materialist theory of mind is possible. It seems obvious to me that a world in which such possibilities are considered is a better world than one where the dualists run all the materialists right out of the discussion.
Concrete STEM support for (what might be called) “Feeping’s choice” comes from an article “Closed Timelike Curves Make Quantum and Classical Computing Equivalent” by Scott Aaronson and John Watrous (arXiv:0808.2669v1); this article establishes that (subject to certain technical assumptions which are themselves quite interesting) quantum and classical “Alchemist’s Gates” (Chiang’s name for them) have the same computational power.
It is quite an interesting exercise to traverse a closed timelike cognitive curve, in which one repetitively reads first Chiang’s story “The Merchant and the Alchemist’s Gate” (of 2007), then Aaronson-Watrous’ article “Closed Timelike Curves” (of August 2008), over and over again, until a fixed point of understanding is reached.
Indeed it was a regrettable lost opportunity for STEAM bridge-building that the latter much-cited work did not reference the former much-honored work … the present comment attempts to reinitialize this bridge-building opportunity.
One finds that numerous passages in “The Merchant and the Alchemist’s Gate” refer to physical and informatic principles of Closed Timelike Curves.
Despite their shared postulate of closed time-like curves, the language of the two works is distinctively different, as befits their (seemingly) very different purposes: the Aaronson/Watrous article restricts itself to descriptive “Pinkertonian” language, while Chiang’s tale restricts itself to performative “Forsterian” language … here “Pinkertonian” and “Forsterian” refer to another SSC comment.
Is there a conclusion here? Even a tentative one?
Hmmm … don’t ask me just yet … I’m still iteratively (re)reading these two fine works of Chiang and Aaronson / Watrous, using each as a STEAM-variety Rosetta Stone to illuminate the descriptive language and performative implications of the other.
A practice of which the authors hopefully would approve, and which these two mutually illuminating works irretrievably serve to inspire.
Is humility allowed here on SSC? ’Cuz I’m hoping someone else will answer Randy M’s question in light of Ted Chiang’s stories.
Uhhh … preferably someone who’s actually read these stories?
Three more Chiang-entangled stories/novellas (which are worth seeking via Google) are “Story of Your Life” (1998), “Seventy-Two Letters” (2000), and “The Truth of Fact, the Truth of Feeling” (2013).
Chiang openly discloses certain common themes in these much-honored works.
Here Chiang’s phrase “in a sense” can be read as either the descriptive sense (of science and philosophy) or the performative sense (of engineering and medicine).
It seems (to me) that many SSC readers — not all, to be sure! — are more strongly attracted to descriptively ideal languages (“Pinkeresque” languages), than to performatively ideal languages (that is, “Berryesque / Forsteresque” or even “LeGuinesque” languages).
In this light, it is natural to ask whether Chiang’s integrated narrative quest — to describe languages that are nearly ideal both descriptively and performatively — affords a more nearly universal venue for appreciating (descriptively) and realizing (performatively) moral actions, than either descriptive or performative languages can separately support.
In short, Chiang’s work suggests that human morality evolves (toward an optimum?) in parallel with the evolution (toward an optimum?) of the descriptive and performative capacities of our human language.
At a minimum, Chiang’s body of work exposes the nourishing roots of our shared experience: that people who use the same words aren’t necessarily speaking the same language!
Kudos to Scott Alexander too, for his sustained commitment and hard work, in hosting a weblog that is creatively hospitable — far more than most weblogs — to both descriptive and performative appreciations of tough subjects (like morality).
Ted Chiang is one of my favourite authors. His short story Exhalation (http://www.lightspeedmagazine.com/fiction/exhalation/) is one of the most beautiful metaphors for the terrible inevitability of entropy that I have ever come across, as well as containing a brilliant allegory for the fragility of consciousness.
His stories are the platonic form of what short science fiction should be, always incredibly original and thought provoking, and rarely outstaying their welcome.
Ildánach (great name!), your fine comment mentions two intertwined elements that “Exhalation” explicitly highlights: the inexorable increase of entropy and the dynamical fragility of consciousness.
A third intertwined element is the vivid contrast that Chiang’s story explicitly highlights between (rational) “Pinkeresque” cognition versus (performative) “Forsteresque” cognition … this is the same cognitive contrast that was mentioned above.
This cognitive contrast is portrayed explicitly from the very beginning of Chiang’s story:
Chiang-fans among SSC readers will be interested to learn that principal photography for the film adaptation of Chiang’s “Story of Your Life” (1998) — which similarly contrasts rational Pinkeresque cognition with performative Forsteresque cognition — has (apparently?) been completed, with the film’s release (tentatively?) scheduled for 2016. Detailed information regarding Story of Your Life is hard to come by, but what there is, is mighty encouraging!
Here the point is that reading Chiang’s works from an exclusively rational perspective, without attention to their Forsteresque themes, is like reading Huckleberry Finn strictly as a story about rafting techniques. Which is surely one way to read a story like Huckleberry Finn, but surely not the only way, and arguably, from a moral or even an informatic point of view, not objectively the best way.
The widening (“Chiangesque”) gyre encompasses both nominally descriptive (“Pinkeresque”) enterprises like mathematics and science and nominally performative (“Forsteresque”) enterprises like engineering and medicine. That’s why many folks (including me) perceive unbounded scope for future “Chiangesque” STEAM-works.
I think there’s a point that needs to be added. Even if morality is advancing because it’s approaching something and not just because wealth is increasing, that doesn’t mean that there’s an objective moral truth. That just means that there’s an objective stable point in human nature. It’s not really any different than the L4 and L5 Lagrange points being stable. It doesn’t mean that that morality is somehow inherently good.
So much this. The whole line of this rebuttal is unnecessary, because until someone either provides a reason to think “moral facts” are not epiphenomena, or provides a decent rebuttal of the general argument against realism about epiphenomena, the whole thing remains a silly waste of everyone’s time.
Super-short version: I don’t see any reason to imagine moral facts, should such things exist and whatever they might be, are capable of having physical effects. Hence no argument that can ever be made for some purported moral fact can actually be causally dependent on that fact – the same argument would still be made in physically identical possible worlds where different moral facts were true. So we can never have a good reason for believing any moral fact. (The argument is often applied pari passu to mind/consciousness, where I actually think it’s a mistake, but I believe it works exactly as intended here.)
This argument is an enthymeme without the further premise, “Only an argument for a fact which is causally dependent on that fact gives us good reason to believe in that fact.” To the extent that we can make sense of arguments being for and causally dependent on facts, this premise is false. My belief that the sun will rise tomorrow could not be causally dependent on the fact of the sun rising tomorrow, yet it is surely supported by good reasons. Similarly, my belief that thrice three is nine could not be causally dependent on any mathematical facts, these presumably not being physical features of our world, but again my reasons for belief are above reproach.
I endorse this 100%, and applaud your pithy presentation of it.
However, I’ll note that when this has come up before, some folks here have defended the possibly LW-endorsed (dunno if it is or not) idea that beliefs should be evaluated as budging our predictions about the world in some degree. Thus, we would believe (I think?) that math is a tool that lets us count sheep with a bucket of pebbles (e.g.) and thus use the pebbles to predict how many sheep ought to come back at the end of the day, but not that math facts are themselves true in some Platonic-type way. The general vibe seems to be a sort of logical positivist idea that beliefs ought to pay rent in expected experiences, or otherwise just amount to nonsense (like God, etc.).
I’m not endorsing that view (my view matches what you wrote above), but it’s far from contemptible, and I think it’s pretty common round here.
I think that view is just nominalism, a common view in the philosophy of math.
1. We can vary the counter-examples as needed until their import on future experiences is lost. It is true of planets in (appropriately-arranged) solar systems which no one will ever observe that the local sun(s) will rise tomorrow. Likewise, the recherché quarters of mathematics which have no application in natural science still contain many truths, their barrenness notwithstanding.
2. These are two importantly different claims which we had better not run together:
A. Mathematics is useful for predicting and controlling the flow of future events.
B. Mathematics is indispensable for predicting and controlling the flow of future events.
Mere usefulness will commit us to all sorts of odd entities– sakes and behalves, for instance– so the claim you are making had better not be the former. I agree that the latter claim, if true, plausibly gives (some) mathematical truths an edge over moral truths, but it is scarcely uncontroversial that it is true (and if that’s what you were going for the counting example was poorly chosen).
3. Surely a positivist would have no truck with causal dependence or possible worlds, both of which featured in Tom Richards’s argument. Better to say “naturalist” and be vague and right than “positivist” and be precise and wrong.
This is most assuredly the LW supported (or at least EY supported, but this one at least seems pretty uncontroversial) view. (for proof, see “Make your beliefs pay rent.”)
This is also the view supported directly by Bayes: if P(observation|theory) = P(observation|not theory), you have learned nothing. If this is true for all possible observations (this is what I personally mean by ‘epiphenomenon’), then no possible observation can ever serve as evidence for or against the theory, and you’re stuck with your prior (give or take logical reasoning, which for an idealized (in the sense of ‘spherical cow,’ not in the sense of ‘ideal’) reasoner doesn’t happen because it is built into its prior, but humans aren’t idealized reasoners).
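The Bayesian point here can be checked in a few lines. This is just Bayes’ rule with illustrative numbers of my own choosing, not anything from the thread:

```python
def posterior(prior, p_obs_given_theory, p_obs_given_not_theory):
    """Bayes' rule: P(theory | observation)."""
    numerator = p_obs_given_theory * prior
    evidence = numerator + p_obs_given_not_theory * (1 - prior)
    return numerator / evidence

# If the observation is equally likely under the theory and its negation,
# the posterior equals the prior: the observation carries no information.
prior = 0.3
assert abs(posterior(prior, 0.8, 0.8) - prior) < 1e-9

# By contrast, an observation more likely under the theory shifts belief toward it.
assert posterior(prior, 0.8, 0.2) > prior
```

When the two likelihoods match for every possible observation, the prior is all you ever have, which is exactly the “stuck with your prior” situation described above.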
You’re right; I expressed myself clumsily and lazily due to haste. I don’t really think a causal construction is necessary at all for some version of the argument to work; even if we do prefer a causal construction, clearly we don’t want one as rigid as “x is causally dependent on y” (looser forms of causal connection are clearly admissible).
The salient point remains: why on earth should anyone believe in anything which lacks explanatory power, and what on earth do purported moral facts explain?
So, what explanatory power does the proposition “One should only believe in things which have explanatory power” have?
“why on earth should anyone believe in anything which lacks explanatory power”
It could hypothetically explain why some people’s beliefs pay rent and other people’s beliefs live in their parents’ basement.
The behavior of human beings seems heavily constrained in non-obvious ways. I could hypothetically pick up a pen and start stabbing everyone sitting near me, for example. I could drive down the exact center of a road instead of keeping to the right-hand side.
You can argue I don’t do things like these because there are obvious negative consequences, and that the existence of those consequences is the causal factor — no need for moral facts to explain the mysterious constraints on my behavior. But if we look closer, we’re going to find more and more constraints that become harder and harder to explain away using external factors. It isn’t necessarily hard to shoplift without getting caught, but most people don’t do it. Picking pockets is a skill that can be acquired by pretty much anyone, but most people don’t bother.
Or even dieting — how do I explain the internal conflict between wanting to eat cake and wanting to lose weight? Why not just eat the cake?
You can view “moral facts” as a label for the traditional explanations for these behavioral constraints. From a physicalist point of view, it seems likely that these constraints will all either be fundamentally physiological or social rather than metaphysical — but there is no need to assume that “moral facts” necessarily refers to something metaphysical.
The diversity of life on earth was once thought to require a metaphysical explanation, but now we can explain it through common descent with variation and selection. Similarly, the morally constrained nature of human behavior might not need a metaphysical explanation, but we can still call the explanations “moral facts” for the sake of having a handle for them.
It could hypothetically explain why some people’s beliefs pay rent and other people’s beliefs live in their parents’ basement.
I would be interested in hearing your explanation of the matter.
You’re assuming that moral facts can be different without physical facts being different, which is a contentious assumption and not one I agree with. There can’t be a physically identical world to ours in which murder isn’t wrong, because everything that causes murder to be wrong is physical or an abstraction drawn from the physical. There’s nothing mysterious/non-natural about moral facts.
But if moral facts are fully determined by physical facts in the way you suggest – such that they not only are not but could not be different, what information do they contain? They don’t appear similar to – say – economic facts, or macro-physical facts which are epiphenomena we’re happy to engage with because they’re useful in synthesizing very unwieldy sets of lower order facts, but moral facts do not relate to physical facts in the same way. Moreover, if they really are so fully determined by the physical, why are so many of them controversial and how would one begin to discover which were true and which false?
You say that moral facts don’t relate to physical facts in the same way that macro-physical facts relate to physical facts, but that’s the point of contention – to put it broadly, something like “one ought to do X” means “one would do X if correctly motivated by the relevant reasons”. People disagree about which reasons are relevant and what correct motivation is, but it doesn’t mean there’s no fact of the matter, or that it’s something non-natural. Determining which moral facts are true and which are false requires investigation into what one should do. I realize this sounds cursory, but this is a matter for a whole field of philosophy and isn’t easily condensed. To skip ahead, I think moral facts are similar in nature to game-theoretic facts.
I’m aware that it’s matter for a whole field of philosophy; I just think that whole field of study is a complete waste of time, precisely because it is easily condensed, into “it’s all bunk”.
I don’t think “one ought to do X” consistently means anything. Sometimes it can reasonably be treated as simple approbation of X, sometimes it conveys a muddled belief in some non-sensical system which in some way recommends X (with approbation probably muddled in), sometimes it implies that X will further the achievement of some goal the agent is presumed to have…
The problem is not that people disagree about which reasons are relevant and what correct motivation is, the problem is that unless one is happy for moral facts to collapse entirely into game theoretic facts, rather than merely resembling them, one is forced to arbitrarily decide those things.
If your contention is that moral facts are of the nature “X is the best way to achieve Y”, then we don’t really have an argument. I agree that such facts exist (insofar as existing is the sort of thing facts do); I just don’t think they’re philosophically noteworthy, or germane to the present discussion. Leave them to the behavioural psychologists (or economists, or whatever similar bunch are suited to any particular one).
There are questions of whether moral statements have truth-value, whether any are true, what it means for a moral statement to be true, whether it’s about what you want or not, what importance (if any) moral intuitions have, the nature of moral motivation, whether categorical imperatives exist, and many others. You can come to a certain conclusion after having gotten some measure of the issues, and note that they’re still ongoing and more discoveries can still be made.
Disentangling the various possible meanings of an ambiguous phrase is something that Philosophy can usefully do, so , no , the whole is not bunk.
“Suppose the moral facts are different than they are” sure seems like it can be given a clear sense. That might need to be in terms of conceptual possibility or metaphysically impossible worlds, but it’s a bad result if “If it is always wrong to wear purple, it is wrong to wear purple on a Tuesday” and “If it is always wrong to wear purple, it is wrong not to wear purple on a Tuesday” both come out trivially true.
“Imagine the moral facts are different than they are” is certainly sensical, but “Imagine the moral facts are different than they are without physical facts being different” is more difficult. One could say that moral badness could be a fundamental property of the universe alongside physical facts, but even in a world physically identical to our own and with these additional properties, we would have no reason to pay attention to these properties, and ought to act as we do now, because everything that gives us the correct reasons to act would be the same. In that sense, if morality is about what we should do, independent facts of this kind can’t exist.
Like, what if we found a stone tablet on Mars, formed naturally through erosion, that said: “If a man also lie with mankind, as he lieth with a woman, both of them have committed an abomination: they shall surely be put to death; their blood shall be upon them”?
Unless this convinced us that God exists and would punish those who violated his commandments (in which case atheists and religious liberals are very mistaken about some non-moral metaphysical facts), why would this stone tablet make them want to change their behavior?
If moral facts are “intrinsic” and free-floating, they can have no relevance for human behavior.
I don’t think anyone here is suggesting that there actually are independent, free floating moral facts or would be inclined to take notice of them if there were and one somehow could. The argument is that given the impossibility of interacting with such hypothetical facts, I and others are inclined to favour telling all the moral philosophers to go and get real jobs.
Or giving them and everyone else a basic income and letting them do as they pleased, but that’s a different argument…
@ Tom Richards:
Perhaps no one here, but Huemer believes exactly that: there are free-floating moral facts, we have access to them via “intuition”, and these facts are sufficient reasons to motivate our behavior.
Sorry, I was being unclear. I’m talking about pure normative facts, say, “the principle of utility is false”. The view you’re defending, I gather, is that the moral facts at minimum supervene on the total physical state of the universe. How, then, can we suppose that the principle of utility is true? The only answer you’re allowed is that we must imagine reconfiguring the microphysical bits and pieces of the world until the principle of utility stops being everywhere false and starts being everywhere true. But this seems crazy. It’s very difficult to see how, by moving atoms around, we could change the truth value of any purely normative claim, because the purely normative claim is totally unmoored from particular states of the world.
This threatens to make whatever normative theory you take to be true true by metaphysical necessity, which leads in turn to the undesirable consequences pointed out above. Any conditional which takes “if the principle of utility were true” as its antecedent comes out trivially true by virtue of being counterpossible, regardless of whether the consequent is “we ought to maximize utility” or “2+2=5.” So I don’t think we should rest content with the claim that questions about what follows from the moral facts being different can be defeated by supervenience theses.
@Vox, though your point about nihilism is well-taken, I would like to suggest the following analogy contra-nihilism and pro-ethical “facts”:
Elsewhere (I forget where–if someone knows the link, I’d be grateful), Scott addresses the question of “objective” aesthetic judgments.
As nihilism forces us to accept the seemingly absurd conclusion that “Hitler was not a better guy, in any objective sense, than Mother Teresa,” aesthetic nihilism forces us to accept the equally absurd conclusion that “Macbeth is not, in any objective sense, a better piece of literature than Twilight.”
Scott basically points out that we can dismantle the notion of “good” literature until we reach a bunch of objective questions: “is the plot novel or hackneyed?” “are the characters fully realized or flat?” etc. Thus, we can ultimately say, through a preponderance of answers to low-level factual questions, that Macbeth is objectively better than Twilight.
But how can we say that a well-developed, novel plot is superior to a hackneyed one? Or that a complex chord progression is superior to my cat walking on a piano?
Well, here’s the point where “intuition” becomes key: our intuition that a creative, well-developed plot is superior to a hackneyed one is arguably as strong as our intuition that the sun is rising, or that we have actual bodies and are not simulations running on computers. That is, all perceptions depend on intuitions, and when we have an incredibly strong intuition, like we do about Twilight or Hitler, we ought to trust it in the absence of contravening evidence, since there is really no better option.
But isn’t our intuition just that “humans prefer well-developed plots” and “humans always like charity better than genocide”? Maybe there is an alien species who universally loves genocide? Maybe, but wouldn’t we still say it was wrong if said species committed genocide against us? Maybe “genocide against humans is wrong” is just a fact about the universe, as obvious to base-level perception as “I am not a computer program” (conceivably open to error, but we have to go with it for the time being)?
Consider the fact “one ought to defect in a one-shot prisoners’ dilemma”. It’s true, but one couldn’t reconfigure the microphysical bits of the universe to make it false, and yet the truth of that fact isn’t dependent on any non-natural properties.
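For what it’s worth, the dominance argument behind “defect in a one-shot prisoners’ dilemma” can be verified mechanically. The payoff numbers below are a conventional textbook choice, not anything from this thread:

```python
# My payoff indexed by (my_move, their_move); C = cooperate, D = defect.
# Conventional prisoner's-dilemma ordering: T(5) > R(3) > P(1) > S(0).
payoffs = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Defection strictly dominates: whatever the other player does,
# I score higher by defecting.
for their_move in ("C", "D"):
    assert payoffs[("D", their_move)] > payoffs[("C", their_move)]
print("defection strictly dominates cooperation")
```

No rearrangement of physical facts changes this: given the payoff ordering, the dominance is a matter of arithmetic, which is the sense in which the fact holds necessarily without invoking any non-natural properties.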
It’s not our intuition that makes a well-developed plot better than a hackneyed one, it’s the fact that we tend to like well-developed plots more. When I say that a book is good, I either mean that I liked it (i.e. that it caused me pleasure) or that a lot of other people (would) like it – I’m not getting at any intuitive independent notion of “good”. This produces objective answers in the sense that someone who says “X is bad” is wrong because a lot of people like X, but it’s ontologically subjective because it’s determined by people’s tastes.
I’m not worried about non-natural properties, I’m worried about the semantics. You are on board, I take it, that it is metaphysically impossible that one should cooperate in a one-shot prisoner’s dilemma, because there’s no possible way of reconfiguring any sort of physical stuff in the world to make that claim true. This example will do fine. Now, we need to assign truth-values to the following two statements:
1. If we should cooperate in a one-shot prisoner’s dilemma, we should sometimes act in ways that are not completely selfish.
2. If we should cooperate in a one-shot prisoner’s dilemma, we should all set our pants on fire.
On the standard Stalnaker-Lewis semantics, these will both come out vacuously true in virtue of having an impossible antecedent. But this seems wrong– (1) is intuitively non-vacuously true and (2) is intuitively bollocks. A natural way of repairing this defect is to understand these conditionals in terms of conceptual possibilities or metaphysically impossible worlds, but how exactly we do it is beside the point. The important thing is that we can make sense of claims which require us to contemplate worlds where the moral facts are different than they can possibly be. And if we can contemplate worlds where the moral facts are different than they can possibly be, your metaphysical qualms about Tom Richards’s conditional– paraphrase it as “If we lived in a world where all the physical facts are the same but the moral facts are different, we would still hold the same beliefs”– lose their traction.
But it’s conceptually impossible for it to be true that we should cooperate in a one-shot prisoner’s dilemma, so that doesn’t resolve the problem. A better way would be to say that any fact about how one should act in a game-theoretic scenario has no implication by itself on whether one should set one’s pants on fire, so 2 is false.
We have the appearance of being able to contemplate worlds in which moral facts are different without physical facts being different, but that’s not actually the case, because moral facts can’t be different without physical facts being different.
“Even if we should cooperate in one-shot prisoner’s dilemmas, humans still wouldn’t be able to breathe underwater” also rings true, although obviously there’s no connection between the antecedent and the consequent. So this won’t work. I suspect that either you’re going to be stuck with conceptual possibilities or impossible worlds, or your moral theory is going to commit you to denying a host of intuitively true claims (many having nothing to do with morality).
It’s not at all obvious to me that we can’t contemplate at least some impossible states of affairs, in fact, my instinct is to say the opposite. But let’s try impossible worlds instead. What’s wrong with “in the nearest impossible world where the physical facts are the same but the moral facts different, humans still have the same moral beliefs”? This seems to me a reasonable interpretation of the conditional, and true to boot.
Yes, and therefore the antecedent doesn’t affect the consequent, so if the consequent is true in our world it’s true in the hypothetical, assuming that hypotheticals with an impossible antecedent have a truth-value.
It’s an impossible world, so I can’t say anything about it. I could try to say something about it if I didn’t know that it’s impossible, but those statements would have questionable truth-value.
This will be enough for the conditional we were considering to come out true. In the same way that moral truths do not “affect” our ability to breathe underwater, they also do not “affect” our moral beliefs, because moral truths by themselves are causally inert. Consequently, if “even if we should cooperate in a one-shot prisoner’s dilemma, humans still could not breathe underwater” is a non-vacuously true counterpossible, so too is “even if the physical truths were the same but the moral truths different, humans would retain the same moral beliefs.”
“I can’t say anything about that world, it’s an impossible world”, he said about the impossible world.
It is, incidentally, highly suspicious that you seem to have been led to adopt a dogmatic view concerning impossible worlds to avoid facing unwanted meta-ethical consequences. This is very much letting the tail wag the dog.
@ Earthly Knight:
What exactly is your point? I don’t mean this aggressively, but if you make it clear what you are trying to establish, it will get rid of some of the confusion apparently existing between you and blacktrance.
Are game-theoretic truths causally inert? If not, why not, and what makes them different from moral truths?
If moral truths were not made true by physical truths, then humans would have the same moral beliefs and moral facts would be causally inert, as you say. But if moral truths are necessarily made true by physical truths, then “if the physical truths were the same but the moral truths different” is contradictory, even if spoken by someone who doesn’t realize that it is.
It would be more analogous to say “In a world physically identical to our own except that married bachelors are possible, would people still have the same beliefs about bachelors?”. I can’t say anything about that, because married bachelors are impossible. Or, I could say that people have the same beliefs because the world is physically the same, but that world would still not be one in which married bachelors can exist, because that’s just how the concept of “bachelor” works.
I don’t think it is an arbitrary matter of “intuition” that we prefer well-developed plots to derivative, sketchy plots. Well-developed plots better satisfy the purposes for which we read books. They offer a more stimulating world in which to immerse oneself, more grist for discussion, and they are likely to stick with you longer. All of this makes them more effective at contributing to one’s ultimate happiness.
As for the alien species that loves genocide, the question is whether they are right or wrong to love it. For example, are they making a mistake about the relevant facts, such that if their understanding were corrected, they would no longer love genocide? If so, then what they ought to do is correct their understanding, not commit genocide.
But suppose genocide is somehow central to their rational self-interest. If, all things considered, it is in the best interest of the Vogons to demolish Earth to make way for a hyperspace bypass—and for some reason going around or evacuating humans is out of the question—then it is good for the Vogons to kill all humans. (I don’t know how this rationally could be the case, but suppose.)
The question is not, “Is the death of all humans bad?” but “For whom is the death of all humans bad?”
Obviously, the death of all humans is bad for humans. It need not necessarily be bad for Vogons, though. All we have here is a conflict of interests.
“The question is not, “Is the death of all humans bad?” but “For whom is the death of all humans bad?”
Obviously, the death of all humans is bad for humans. It need not necessarily be bad for Vogons, though. All we have here is a conflict of interests.”
But, intuitively, that seems to be quite wrong to me. There seems to me to be a very real sense in which the Vogons destroying our planet for the sake of their space highway would just be wrong in a non-species specific sense, just as we can imagine it being wrong for us to eat an intelligent but unusually delicious alien species.
If we assume this alien species has at least as much capacity for deep feeling, rational thought, and suffering as we do, but also happens to be unbelievably delicious, we can see that it would be wrong for us to kill and eat them because our gustatory pleasure does not justify their deaths. We can understand this in a generalizable way.
If it were true that morality simply equaled what makes humans happy, then we couldn’t really imagine there being anything wrong with eating this alien species, or, for that matter, eating animals, which obviously some, if not most, humans perceive to be wrong (though arguably because animals remind us of humans: but we can imagine that even if the Vogons were very, very different from us in most ways, we would still be wrong to inflict suffering on them for our gustatory pleasure).
So the above description of the good as being just “what is good for humans” seems to be obviously wrong to me. Yes, it only “seems” (that is, I have a strong intuition about it), but then, it only seems to me that I am not a brain in a jar. For the time being I have to operate on the assumption that it is an objective fact that I’m not a brain in a jar, so why shouldn’t I also operate under the assumption that genocide for gustatory pleasure is objectively wrong, regardless of the perspective?
The reason you should not eat the delicious aliens is, presumably, the same as the reason you should not eat people (who, by all accounts, taste pretty good). Namely, that it is much more in your interest to interact with other people by peaceful trade than it is to set yourself at war with them for a little gustatory pleasure.
And, of course, once you have a set habit and attitude of general benevolence (which is very much in your self-interest), cannibalism won’t even be remotely tempting to you. If the delicious alien is a rational being, then the same principles that make trade preferable to war and slavery among humans will hold between humans and delicious aliens.
(Even if we grant that there is an innate psychological revulsion toward eating humans—and cannibal tribes make me doubt this—it still wouldn’t be in your interest to eat them. And the same goes for the delicious aliens.)
Now, if you postulate that eating the delicious aliens will bring the greatest happiness which it is possible for humans to attain…then so much for the aliens. But I hardly regard that as likely or relevant to the real world.
The question is really no different from that of slavery among humans. Slavery was wrong because it was ultimately neither to the interest of the slaves nor of the masters. In my opinion, this is the central amazing insight of classical liberalism and economics: that in order for one man (race, nation) to be rich, he does not need to make another man (race, nation) poor.
But suppose that slavery really were in the interest of the masters but not in the interest of the slaves (and again, I mean the ultimate interest, not merely the proximate interest. The ultimate interest includes all material, social, and psychological factors.) In that case, the slaves ought to rebel, but the masters also ought to subjugate them. Why should the slave sacrifice his interests for the masters, and why should the masters sacrifice their interests for the slaves?
However, if one considers what the empirical facts would have to look like for it to be true that slavery is in the ultimate interest of the masters, the slaves would have to be something on the order of brute beasts. Which…we do “enslave” for human amusement.
Also, to directly address your central “intuition”, I don’t know what it would mean for a thing to be good or bad in a way that is not agent-relative (or species-relative). I understand the concept…in the same way I vaguely understand what a round square would have to be, despite the fact that these qualities are incompatible. In any case, nothing seems to me to be good or bad in that way.
As an analogy: consider “useful” or “healthy”. How can a thing be useful or healthy but not useful for anything or healthy for someone?
I do not think there is any confusion. Blacktrance claimed that we could not make sense of the conditional “If the physical truths were the same and the moral truths different, we would still hold the same moral beliefs,” because it is metaphysically impossible that the antecedent could be true. I have pointed out that it does not in general seem to be the case that we cannot make sense of conditionals with metaphysically impossible antecedents (in fact we could not get by without them).
How is it that the negation of a metaphysically necessary truth is a contradiction? The analogy to bachelorhood suffers from the same problem: a bachelor being an unmarried man, if true, is true by analytic or logical necessity. So here is a better analogy. Suppose that all of the microphysical truths were the same but that gold was a hard, blue non-conductive metal. Would we then believe that gold is a hard, blue non-conductive metal? The answer seems to be clearly yes, because it is the chemical and not the microphysical properties of gold which are causally responsible for our gold-beliefs. So it seems as though we can indeed prize the supervening layer and the subvening layer apart, metaphysical necessity or no, and the moral conditional comes out non-vacuously true.
@ Earthly Knight:
The microphysical truths about gold entail that it is a soft, yellow, conductive metal. If you ask us to imagine that all these microphysical truths are the same but that gold is a hard, blue, non-conductive metal, you are demanding that we accept a contradiction. It’s just a less obvious contradiction than asking us to imagine a table which is all white and all black.
The denial of any necessary truth is a contradiction. It contradicts the facts that make the truth necessary!
As another example, imagine a human being whose biological and genetic makeup were exactly the same, except he had a third eye on the back of his head. This is also a contradiction! A biologically and genetically normal human cannot develop an eye on the back of his head. The only way you can entertain this possibility is because you are ignorant of (or mentally setting aside) the implicit facts that make it impossible.
Now, if you think that, given the microphysical properties of gold, it is still an open question whether it is soft or hard, your point makes sense. But that is absurd.
Well, I think I’ve smoked out the confusion. A contradiction is a statement which is false solely in virtue of its logical form, i.e. a statement of the form “p and not p”. “That table is at once white all over and black all over” is not a contradiction.
You might petition to have whatever tuition you paid for your introductory logic class reimbursed.
@ Earthly Knight:
You are incorrect (or trivially correct in a misleading way), and I do not appreciate the petty jibe.
Yes, obviously, a contradiction is a statement of the form “p and not-p”.
To say that a table is white all over and black all over is not superficially of this form. But the statement obviously relies on the unstated fact that if a table is all black, it must also be all non-white—and vice versa (since they are mutually exclusive).
Therefore, one implicitly asserts: “The table is white (and non-black) all over and black (and non-white) all over.” But as “non-white” is implicit in the meaning of “black”, it is unnecessary to be this explicit.
Your line of argument is about as relevant as arguing that “p and not-p” is not a contradiction, either; after all, perhaps we are actually talking about two different propositions which happen to be abbreviated with the same letter. You didn’t specify that we weren’t!
And if you recognize why saying that the table is both all white and all black is a contradiction, you will see why it is a contradiction to imagine a world in which the microphysical properties of gold are exactly the same but also gold is blue. The contradiction is implicit but no less real for that.
This is just how language works. If a wanted criminal is known to be grossly fat, my observation that a given person weighs only 100 lbs. suffices to prove that he must not be the criminal. Saying someone weighs only 100 lbs. entails that “he is not grossly fat.”
I repeat that it is not a contradiction to assert that a table is at once black all over and white all over, because the assertion is not false solely in virtue of its logical form, i.e. it is not of the form “p and not p”. If you like we can distinguish a third grade of necessity intermediate between metaphysical and logical, call it analytic necessity*, and define it as being necessarily true in virtue of logical form or the meaning of the terms involved. It is plausible that statements which assign an object each of two contrary color properties are analytically impossible. But note that supervenience claims are still not analytic– they are substantive metaphysical theses not true in virtue of meaning– so your and blacktrance’s analogies continue to miss the mark.
Your idea seems to be that a statement is a contradiction if it is incompatible with some set of facts. But this cannot be right: “my speakers are on the floor” is incompatible with another fact, namely, that my speakers are on my desk, but it remains in all senses a contingent proposition.
Carefully distinguishing different modalities is not a frivolous exercise. I don’t want to take on logically impossible worlds because of concerns about explosion, but these do not arise for merely metaphysically impossible worlds, which are happily devoid of contradictions.
I am also unsure how helpful it is to have this discussion when it requires me to deliver lectures on basic logical and metaphysical concepts. If you like, draw a point on a piece of paper and label it “our world.” Then describe a circle around this point and label the interior “nomologically possible worlds” (i.e. worlds compatible with our laws of nature), describe a second, larger circle around these and label the interior “metaphysically possible worlds”, and a third circle around these whose interior is the “logically possible worlds.”
*For reasons which are obscure to me but may be mostly historical, this is routinely conflated with logical necessity.
@ Earthly Knight:
Again, your condescending tone is quite uncalled-for. There is no need for you to “deliver lectures on basic logical and metaphysical concepts”. You are not imparting new information to me. Leave them out if you care to.
This is not the same type of situation as with the blue gold. I believe you are misunderstanding me in some fashion, because this point does not actually engage with what I have said.
That your speakers are on your desk is (we may suppose) a contingent fact. Therefore, it would be wrong—but not contradictory—to assert that your speakers are on the floor. However, it is a contradiction to assert that your speakers are simultaneously on your desk and on the floor. See the difference?
Now, yes, you could entertain fantasies about a “metaphysically possible” world in which speakers are capable of being located in more than one place at the same time. But in ordinary language, when we say “on the floor” we also mean “not on the desk”. Therefore, to say that your speakers are simultaneously on the desk and on the floor is a contradiction, since the unstated meaning is: “My speakers are on the desk (and not on the floor) and on the floor (and not on the desk).”
I don’t think your distinction between “logical necessity” and “analytic necessity” is useful. The statement above about the speakers is false in virtue of its form. It is just that we don’t talk in Bertrand Russell language, explicitly stating every proposition which we are conveying. If we expand the terms out (as I did), it is apparent that the statement is false in virtue of its form.
Furthermore, I suppose one could consider the relation between the microphysical properties of gold and its color, hardness, and conductivity to be only “nomologically necessary” (that is, if there is a meaningful distinction between the two; e.g. if physical laws are not metaphysically necessary facts). But you yourself referred to it as a metaphysically necessary fact, so I will continue to use it as an example of one.
To simultaneously assert that a fact is metaphysically necessary and to deny it is to utter a contradiction. To say that a fact is necessary is also to say that it is true. Therefore, you are saying that it is both true and not true.
Now, denying that a fact is metaphysically necessary is straightforward, unproblematic, and not a contradiction. In that case, one is just wrong. But this is not what you seem to be saying.
You are saying: “Consider a world in which a) it is metaphysically necessary (and therefore true) that you ought to defect in a one-shot prisoner’s dilemma and b) it is not true that you ought to defect in a one-shot prisoner’s dilemma.” That is a contradiction.
@ Vox Imperatoris
False. This is true of some types of formal logical systems, but not all. Your mistake lies in confusing the Law of Non-Contradiction for the Principle of Bivalence. Non-contradiction is mandatory, while Bivalence is optional. That the two concepts extensionally coincide in Boolean Logic does not generalize to Many Valued Logics. Much like how it’s true that “parallel lines will never cross”, only so long as we constrain our discussion to Euclidean Geometry.
This violates the Law of Identity, that “p = p”.
Alternatively, it can be compared to saying “2+2=100” because Binary(100) = Decimal(4). Technically, there’s no Ministry of Truth to punish you for saying that. But playing fast and loose with variable names doesn’t engender productive discussion.
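The numeral point is easy to check mechanically; a minimal sketch in Python (my illustration, not part of the original exchange):

```python
# "100" read as a base-2 numeral and "4" read as a base-10 numeral
# name the same number, so Binary(100) = Decimal(4), as stated above.
assert int("100", 2) == 4

# "2+2=100" comes out "true" only if we silently switch numeral
# systems on one side of the equation -- the fast-and-loose
# renaming of variables at issue.
assert 2 + 2 == int("100", 2)
print(format(2 + 2, "b"))  # prints "100": decimal 4 written in binary
```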
Okay, maybe this is the source of the confusion. I am not asking you to consider a world in which it is metaphysically necessary that one ought to defect in one-shot prisoner’s dilemmas, I am stipulating that it is metaphysically necessary relative to our world that one ought to defect in one-shot prisoner’s dilemmas and asking you to consider the metaphysically impossible but logically possible world where the physical facts are the same as here but it is not the case that one ought to defect in one-shot prisoner’s dilemmas. In the diagram, this world will fall into the region enclosed by the third circle but not the second. No contradictions are true at this world, that is, of the statements describing this world none are of the logical form “p and not p.”* Similarly, no statements which are false solely in virtue of meaning are true at this world. It’s just our world except that you ought to cooperate.
*You seem to mean something deeply idiolectic when you say “contradiction”, but I’m not talking about Vox-contradictions, I’m talking about contradictions with precisely the definition given here.
I’m not seeing how bivalence comes into this.
“Denial” of something seems to mean, at least asserting that it is not true. Even if this doesn’t also mean asserting that it is false.
To say something is necessary entails that it is true. To deny it is to assert that it is not true. Therefore, to deny a necessary fact is a contradiction.
Now, if you deny that a fact is necessary, that’s another question. Because then you are presumably denying the background facts that would make it necessary.
You believe that saying “gold is blue” is a contradiction. You also believe that “the table is all white and all black” is also a contradiction. Perhaps the existence of blue gold is ontologically impossible. Perhaps “gold which is both yellow and blue exists” is a falsehood. But that’s a semantic consequence, not a contradiction (contradictions are syntactic). You immediately follow the above quote with
Which (as you point out) is a true statement. But the statement clearly doesn’t mean what you think it means — since you believe that “gold is blue” is somehow a contradiction or that “gold is blue” logically denies the fact that gold is actually yellow.
You’re either confused over the definition of “contradiction”, or playing the motte.
@ Earthly Knight:
Alright, I think I see the problem here. The world you describe is not logically possible. It contains contradictions.
Is there a logically possible world in which one ought to cooperate in the one-shot prisoner’s dilemma? Yes, obviously. But is there a logically possible world in which all the physical facts are the same as this world, and nevertheless one ought to cooperate in the prisoner’s dilemma? Given your premises, no.
1) You hold that moral facts supervene on the physical—and moreover, that there is no possible arrangement of physical matter that could make it true that you ought to cooperate in the one-shot prisoner’s dilemma. I am not arguing for this; I am just taking it as a premise.
2) Therefore, the current arrangement of facts necessitates that you ought to defect in the one-shot prisoner’s dilemma. (Obviously true, if any arrangement of physical facts would necessitate this.)
3) Therefore, it is true that you ought to defect in the one-shot prisoner’s dilemma.
4) You ask us to imagine a world in which the physical facts are the same, yet in which it is not true that one ought to defect in the one-shot prisoner’s dilemma; indeed, one ought to cooperate in this world.
5) But the arrangement of physical facts in this imagined world necessitates that you ought to defect in the one-shot prisoner’s dilemma.
6) Therefore it is true in this world that you ought to defect in the one-shot prisoner’s dilemma.
7) And yet, in this world it is not true that you ought to defect in the one-shot prisoner’s dilemma.
8) Consequently, this world is logically impossible.
Presumably, you object to #5. But in what respect?
In what exact way do you think the physical facts in the (real) world necessitate that one ought to defect in the one-shot prisoner’s dilemma? It seems to me that you must mean something like, “Given the facts, this is true simply according to the meaning of the terms involved.” That is, if you analyze what you mean by “ought”, “defect”, etc. you are contradicting their meaning in the physical situation unless you use them in this way.
Do you mean that the physical facts necessitate the moral facts in some other way? The physical facts surely don’t cause the moral facts to be true in any literal, active sense, do they?
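For what it’s worth, the numbered argument above can be compressed into a modal schema, writing $P$ for the conjunction of physical facts and $D$ for “one ought to defect” (my rendering, treating supervenience as a strict conditional; neither commenter puts it this way):

```latex
\begin{align*}
1.\;& \Box(P \to D) && \text{premise: the physical facts necessitate defection} \\
2.\;& P \land \lnot D && \text{the imagined world} \\
3.\;& P \to D && \text{from 1, if the necessity in 1 covers the imagined world} \\
4.\;& D \land \lnot D && \text{from 2 and 3, by modus ponens}
\end{align*}
```

The disputed step is 3: whether the box in premise 1 ranges over the merely logically possible world described in step 2 is just the question of relativizing necessity to worlds.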
None of this is to suggest that there are no metaphysically impossible worlds which are logically possible. For example, the world in which water is not H2O is metaphysically impossible but logically possible. However, in such a (metaphysically impossible) world, the physical facts would have to be at least somewhat different from our own. The world in which water is not H2O but everything else is the same is logically impossible.
To give a final example, there is certainly a nomologically/metaphysically/logically possible world in which Obama is a bachelor (unless you believe in strict clockwork determinism). But there is no logically possible world in which all the physical/socio-conventional facts are the same but Obama is a bachelor!
The possible worlds in which Obama is a bachelor are those in which he is not married. But the physical/socio-conventional facts entail that he is married. Therefore, a world in which these facts obtain and yet he is a bachelor is logically impossible.
I do not believe that saying “gold is blue”, as such, is a contradiction. If my previous statements suggested that, they were phrased poorly.
It is not a contradiction to deny that a truth is necessary (even if it is). It is a contradiction to deny a truth which one simultaneously—implicitly or explicitly—holds to be necessary.
It is not contradictory for someone ignorant of gold, who has a confused recollection from a novel he once read, to assert that it is blue. But for someone to affirm the microphysical facts (and, obviously, the facts of human biology which I have previously not mentioned in order to keep things simple) that entail the yellowness of gold—and to simultaneously affirm that gold is blue—this is a contradiction.
Of course, as you say (and as I said before), this statement is only contradictory given the background assumption that the speaker intends by “yellow” to also mean “not, in the same respect, also blue” and by “blue” to also mean “not, in the same respect, also yellow”. It’s not a contradiction if this is not being implicitly expressed. But—it is being expressed in all normal talk about something being “blue” or “yellow” (or “black” or “white”).
Moreover, consider a world in which a) the microphysical facts are the same and entail that gold is yellow, b) gold is (in the same respect) blue, and c) the fact that a thing is one color entails that it is not also in the same respect a different color. This world is logically impossible.
A primer on modality:
A statement is false by logical necessity in the strict sense only if it contains a contradiction. Contradictions are statements which are false solely in virtue of their logical form or syntax. For a statement to be in any sense a contradiction we must be able to deduce from it by logically necessary steps (roughly, syntactic operations) a statement with the form “p and not-p.” Meanings are irrelevant here. “Tom is a married bachelor” is not a contradiction, because if we replace the predicates “bachelor” and “married” with the variables p and q to reveal the logical form– “Tom is p and q”– the apparent incompatibility vanishes. When considering logical form we are not allowed to inspect the semantic content or meaning of the predicates to see if they are mutually copredicable of the object in question.
Analytic falsehoods are the broader class of statements which are either false in virtue of being contradictions or false in virtue of semantic incoherence. The test of whether a statement is an analytic falsehood is if we can transform it into a contradiction by substitution of exact synonym for exact synonym.* “Tom is a married bachelor” is an analytic falsehood because (let’s falsely assume) “bachelor” is an exact synonym for “unmarried man”, which by substitution gives us “Tom is a married unmarried man,” which has the logical form “Tom is p and ~p and q”, which contains a contradiction.
A statement is false by metaphysical necessity if it must be false no matter the state of the world. I leave this relatively vague because there is much bickering about how metaphysical necessity should be understood and what exactly it encompasses. All analytic falsehoods are false by metaphysical necessity, and (by transitivity) so are all contradictions. A good, although not entirely uncontentious, example of a statement which is false by metaphysical necessity but not false analytically is “Tom went back in time and killed his own grandfather.” This is false no matter the state of the world, because if Tom had gone back in time and killed his own grandfather, he could never have been born to go back in time and kill his own grandfather. But it is not false solely in virtue of meaning or logical form, because we have to import unstated assumptions about time and causality to expose the paradox.
A statement is false by nomological or nomic necessity if it is false by metaphysical necessity (and so, by transitivity, if it is analytically false or a contradiction) or if it is incompatible with the laws of nature. “Tom is so fat he attracts objects inversely as the distance” is false by nomic necessity because the laws of nature are such that (let’s falsely assume) gravity attracts everywhere inversely as the square of the distance. But intuitively it is not metaphysically impossible that Tom should attract objects inversely as the distance– this seems like a way the world could be.
This is a tidy and fairly orthodox picture which is almost certainly wrong in many respects (the nesting of the categories seems especially dubious to me at present). But it is a good way to make sure we are all on the same page, and you have to know the rules before you can crawl.
*If this prompts you to ask how we determine which words are exact synonyms, you are very sharp. The answer is that they can necessarily be substituted for one another in any statement salva veritate, that is, with the truth value of the statement preserved. The “necessarily” in the preceding sentence gives rise to a famous narrow circle of definitions.
Necessity is relative to a world (this is often left implicit because for most purposes we are interested in possible worlds accessible to the actual world). It is (plausibly) metaphysically necessary with respect to our world that gold is yellow. Relative to the world where gold is blue, it may be metaphysically necessary that gold is blue.
Consider what would befall our notion of nomic necessity if we did not relativize modal operators to worlds. It is (suppose) nomically necessary that gravity attracts objects rather than repelling them. What are we to say of World G, the nomically impossible world where gravity repels and does not attract? If it is nomically necessary in all worlds that gravity attracts, it must be true in World G that gravity attracts, which means that our description of World G contains a contradiction, which means in turn that World G is in fact logically impossible. But this is absurd– it is not logically impossible that gravity should not attract. What’s worse, this argument generalizes to all nomically impossible worlds. As a result, if we do not relativize modal operators, all such worlds will turn out to be logically impossible, and we will no longer be able to distinguish any grades of modality save the one. This is, I think, intolerable.
So… if I say p is not p, that is a contradiction.
If I say p is q (where q is known to be not p) that is an analytic falsehood.
So, to say 4 is 5 is not a contradiction, but to say 4 is not 4 is?
I guess contradictions aren’t often seen in the wild?
Unhelpfully, mathematical statements are an exception. The natural numbers can readily be treated as strings of syntax, so we can express arithmetical truths either in first-order logic with the introduction of identity or in higher-order logic by quantifying over an equinumerosity relation. So “4 is 5”, suitably rendered, will come out a contradiction in virtue of its logical form alone.
“This dog is a cat” is not a contradiction. You are right that it is very rare for someone to contradict themselves in the strict sense. A child working at arithmetic is probably your best bet to see a contradiction live and up close.
How is that a useful definition of “contradiction” here? (Or anywhere?)
Contradictions are special because in classical logics they lead to explosion, which is to say that any statement whatever follows from a contradiction. This is, obviously, a bad thing. It is not all that useful to distinguish analytic falsehoods from contradictions except when constructing formal proofs*, and indeed the two are commonly run together. But it is quite important to distinguish analytic falsehoods from metaphysical impossibilities for just the reasons we are concerned with here. “If the physical truths were the same but the moral truths different, we would still hold the same moral beliefs” has a metaphysically impossible antecedent but does not contain a contradiction, which means we can make sense of it by considering metaphysically impossible but logically possible worlds. If the conditional contained a contradiction I would not bother defending it, because I don’t want the mess of an explosion on my hands.
“That dog is a cat”, perhaps contrary to appearances, is probably false by metaphysical necessity but not analytically, because “dog” and “cat” are translucent names for sets of organisms or branches on the tree of life which have no semantic content to speak of. It’s not hard to see how conditionals like “if dogs were cats, felis would be the largest of the mammalian genera” could occasionally crop up in biology, so it is good that the statement is not analytically false.
*Actually, it is also extremely important in computation theory and consequently in cognitive science, but that’s not my bag so you’ll have to ask someone else about it.
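The explosion worry mentioned above is the classical principle ex falso quodlibet; a two-line sketch in Lean 4 (my illustration, assuming nothing beyond the core library):

```lean
-- From a contradiction "p and not-p", any proposition q whatsoever follows.
theorem explosion (p q : Prop) (h : p ∧ ¬p) : q :=
  absurd h.1 h.2
```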
How can something be a metaphysical impossibility but not an analytic falsehood?
[Also, is it a metaphysical position to take a view on what is logical possible?]
“As an analogy: consider “useful” or “healthy”. How can a thing be useful or healthy but not useful for anything or healthy for someone?”
You actually seem to take a basically Randian view, which is that the only meaningful definition of “the good” or “ought” is what is in the rational self interest of life or human flourishing.
Two points: doesn’t this run counter to your nihilist view — i.e., that there really are no “oughts” whatsoever? I understand your position that for the nihilist, the only logical way to interpret statements like “one ought to do x” is as the first half of a statement like “one ought to do x IF one wants to enjoy a long, happy life.”
But, if you are a true nihilist, then you are left with no way of making any judgment calls about whether a “do whatever makes you happiest” standard is superior to a “do whatever causes the most misery” standard. They would both be equally arbitrary, no? Maybe you are willing to bite that bullet and are merely offering the “do what makes humanity happy” standard as one of an infinite possible number of standards, but the idea that a “do what makes humans happy” standard is, in no objective sense, any “better” than a “do what makes humans miserable” standard is a pretty tough bullet.
Second: I don’t think the “square circle” analogy is a good one, as that implies a definitional contradiction. Even if one supposes they do not exist, it does not seem hard to me to imagine that there could exist, somehow, objective moral laws of the universe, as there exist physical laws. Why is that hard to even imagine?
I’d submit it is not hard for most people to imagine given the way they use moral language: when most people say “doing x is wrong,” they don’t seem to mean it as the elliptical first half of a statement like, “doing x is wrong if you want to be happy.” They mean it as “doing x is wrong… period.”
Presumably you are objecting to non-reductive moral facts. Moral naturalism is a form of moral realism.
The Lagrange points image is perfect. Bravo.
In what sense is a peak in your utility function’s output not “inherently good”? Because it’s a “subjective” human brain doing the computing?
You might as well say nothing is inherently red, either; it’s just that there’s a band of wavelengths that forms a stable point in human perceptions of colour. That doesn’t mean that band of wavelengths is somehow inherently red, does it?
(Yes, yes it does. enough with the sophistry.)
Uh, no it doesn’t.
Nothing is “inherently red”. Redness is an aspect of sensory awareness. It’s a relation between the mind and the external world.
But the same wavelengths could easily produce the sensation of greyness—and in many people, they do. The color-blind are not wrong about the nature of things; they just perceive reality differently (and in a way that gives them less information directly).
The same wavelengths could also produce the sensation of blueness, or some sensation which is as impossible to describe to a normal human as is redness to the blind. Now, red-blue inversion is not a thing that appears actually to exist in human beings, but it is quite conceivable.
Color does not exist independently of the mind. Wavelengths of light, as such, have no color. But that doesn’t mean color is only in the mind, as some kind of hallucination. Color is a mode of awareness of light.
Now, you might say that certain wavelengths of light as perceived by human beings with senses of a certain type are “inherently red”. But then you’re just abusing the word “inherently”. “Inherently”, in this context, means something like “independently of how it is perceived”. If you want to call this “inherent”, what isn’t?
Stability perhaps implies a peak in some function, whether that function is a utility function is another question. In the classic prisoner’s dilemma, D-D is a Nash equilibrium and therefore stable, but it’s not a utility maximum – it isn’t even a Pareto optimum.
Also, do local optima count as “good”? One could imagine some tiny little optimum down at the bottom of a pit of despair, like those little peaks you sometimes get at the bottom of craters.
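The prisoner’s dilemma claim above can be checked mechanically. A minimal sketch in Python (assuming the standard illustrative payoff numbers T=5 > R=3 > P=1 > S=0) confirms that D-D is a Nash equilibrium but neither a utility maximum nor a Pareto optimum:

```python
# Classic prisoner's dilemma payoffs as (row player, column player).
# Illustrative numbers assumed here: T=5 > R=3 > P=1 > S=0.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(profile):
    """Nash equilibrium: no player gains by unilaterally switching strategy."""
    a, b = profile
    for alt in ("C", "D"):
        if payoffs[(alt, b)][0] > payoffs[profile][0]:
            return False  # row player would deviate
        if payoffs[(a, alt)][1] > payoffs[profile][1]:
            return False  # column player would deviate
    return True

def is_pareto_optimal(profile):
    """Pareto optimal: no other outcome is at least as good for both and better for one."""
    p = payoffs[profile]
    for q in payoffs.values():
        if q != p and q[0] >= p[0] and q[1] >= p[1]:
            return False
    return True

print(is_nash(("D", "D")))            # True: D-D is stable
print(is_pareto_optimal(("D", "D")))  # False: C-C makes both players better off
```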
Arguments about history work better when you get your facts straight:
– Slavery (yes, of blacks) in the US started in the North.
– Most Northerners during the Civil War weren’t especially moral or decent with regards to black people.
– Lincoln wanted to relocate freed slaves after the war.
– Slavery in the South wasn’t very good for most Southerners, save the 5% or so who owned slaves. The rest had to compete with slaves in the labor market. Oh, and it wasn’t that good for the 5% either: hired workers were much more productive than slaves, and not as much of a risk.
– The South fought for slavery because 1) the slave trade itself was lucrative for some very influential people and 2) slavery was part of their culture and “way of life” that they thought the North was intruding upon.
Okay. But while lots of Western regions tried slavery in early modernity, it seems to have been mostly colder places (England, New England) where what abolitionism there was really got anywhere, and mostly (sub)tropical places where thorough-going ideologues of slavery (e.g., Calhoun, Fitzhugh) really got traction. So sure, attitudes can’t be sorted into discrete geographical blocs, but there’s still a noticeable climatic cline, which I think is all Scott really needs to ground that part of his point?
Jared Diamond sort of made that point in his “Guns, Germs, and Steel” book, didn’t he? (I didn’t read it, only a bunch of commentary about it when it came out. Don’t remember well.) Basically he said that your culture/values are largely determined by your geographic latitude.
But then other people (like Steve Sailer) presented some good counterarguments. I can’t recall them now but that’s worth a DuckDuckGo search.
You’re summarizing Diamond and Sailer correctly, IIRC myself. But I think all Scott needs for his sub-argument there isn’t “geographically determined” (or “economically determined” or whatever) but just “influenced,” since all he’s looking for is a bias/confound to “the arc of history bends toward the values of the current intelligentsia, as demonstrated by the fact that we are the winners of the prior conflicts about morality, and hey, look, we won, which proves we were right.”
That’s fine, in fact I agree with that I think. I just couldn’t get past the one guy in the dialog talking about the line dividing moral decent northerners and evil slave-holding racist Southerners.
I think that line was meant more than a little ironically, NZ. I mean, the fundamental point is unaffected, so I think Scott was just presenting the civic-mythological narrative, with a wink and a nod, for fun.
In that case it went over my head.
Montesquieu made that argument in 1748. One of his more shocking examples was how people grow up faster and are lustier the warmer the climate is, so it was moral for Muhammad and Aisha to marry, and Middle Easterners are happier because Islam came along and confined Christianity to the North.
Didn’t the first slave ships arrive in Virginia? So your #1’s false.
I don’t know where ships arrived. I know that according to the best information on hand, there was slavery, of blacks by whites, in the North at least 20 years before it was in the South.
Edmund Burke was writing about mostly island slave plantations, like Barbados, so there is that too.
Arguments about history do indeed work better when you get facts straight, and the first slaves in the US were in Jamestown, Virginia. But that doesn’t matter much because obviously slavery took off in the South and was abolished much more quickly in the North.
The North wasn’t great about black people, but it was very clearly anti-slavery by Civil War times, which is what I said.
Yes, the South believed slavery was related to its culture and way of life, which was my whole point. You are throwing out irrelevant historical facts and saying they prove my totally different assertions wrong.
I’m pretty sure the first black slaves were in Massachusetts or New York. Jamestown may just have been where the first black people brought to the US arrived. I have a book at home called “Seeds of Racism” by Paul R. Griffin that spells out this history very well. I’ll check it tonight.
Anyway, I wasn’t ever really clear what part of your post was Scott Alexander talking, so don’t misconstrue that I meant to contradict your beliefs. I don’t know how much the facts I threw out would change the outcome of the dialog (it felt over my head), I just thought it was important to point out the facts.
OK, found it. It was on page 11 (first page of the first chapter). I’ll quote:
And the book starts by pointing out that the first governor of the Connecticut Colony, Theophilus Eaton, announced he was holding black slaves in 1637 and intended to continue doing so indefinitely.
That is a really stupid metric.
Admissions of slaveholding and laws supporting slaveholding in the North 20 years before slaveholding in Virginia are a stupid metric to show that there was slaveholding in the North before there was slaveholding in Virginia?
I think “stupid metric” referred to judging the attitude of northerners towards African slavery in the 1850’s according to the actions of northerners from more than 200 years earlier.
200 years is a long time.
I was responding to Scott who had said in a comment that the first slaves were in Virginia and that slavery “took off” in the South. I wasn’t making any judgments about Northerners when I corrected him, except perhaps to counter his judgments about Southerners.
Speaking of inappropriate standards of judgment though: in Scott’s original post, one character said that Northerners were “moral, decent, and liberal”–ostensibly only because they had outlawed slavery and were fighting a war against the Southerners who had not. In other words, that character was judging them according to attitudes from much later.
There were plenty of people at the time who considered slavery wrong; racial equality is anachronistic, gradual emancipation is not.
But there were plenty of Southerners who considered slavery wrong, too. Like for example Robert E. Lee.
And there were plenty of Northerners who did not really care about slavery or the welfare of black people. Like for example, most of the Irish immigrant population of New York City.
So, the Mason-Dixon line didn’t actually cleanly divide “decent moral and liberal” people from evil racist bigots.
Robert E. Lee was only barely a Southerner — he lived in Northern Virginia, a few miles from Washington, D.C. His home was on the grounds of what’s now Arlington National Cemetery (this is, of course, not a coincidence).
He seems to have been an exceptionally decent person, but his opinions should not be treated as representative of his country.
Are we getting into No True Scotsman territory here?
I named Robert E. Lee because he happens to be arguably the most famous person who fought for the South, whose name is now virtually synonymous with the Confederacy. But there was plenty of opposition to slavery in the South besides him of course, frequently from what you’d call “representative” Southerners. You can do the DuckDuckGo search yourself.
When you say something about Scots and your example is somebody from the Border, I think it’s reasonable to question how culturally Scottish they are.
But still, if that border Scotsman is Rob Roy, and if there are hordes of other Scotsmen up in the highlands with similar opinions on the issue at hand…
Did you do the DuckDuckGo search?
Also, black slavery is a result of disease burden. Once malaria made it to the New World, Europeans and Americans couldn’t usefully be enslaved to do the awful, awful work of tropical cash-crop agriculture. But Africans were much less susceptible to malaria, and thus could be usefully enslaved. The Spanish tried slavery with Indians, and it worked in climates where malaria didn’t spread (Potosi, etc), but didn’t on tropical plantations growing sugar cane, or indigo.
Can you link to evidence for this? Is the issue that being outside -> more mosquito bites, so whites could live in these areas but not farm in them?
I’ve heard that argument mentioned before too. I’m also curious about the evidence. I think it’s worth noting that while whites did live in the South, they were outnumbered something like 3-1 or 5-1 eventually by black people there. So, maybe white people could live in the South but not in as large numbers.
The book 1493 went into this in a lot of detail. Here’s a short explanation.
But slavery wasn’t invented in the US. Virtually every pre-modern civilization practiced slavery.
Slavery fell out of fashion in times and places where population density became sufficiently high (relative to the agricultural capacity of the land) and/or industrialization happened. High population density and industrialization reduced the economic incentive to employ imported slave labor, and also increased the germ pressure, which, according to the germ-xenophobia theory mentioned by Scott in this post, would have further decreased the propensity to import foreigners.
In continental Europe (and to a lesser extent, England) this happened by the year 1000, with chattel slavery being replaced by serfdom (a milder quasi-slavery of the native population), although various European countries continued to capture slaves to sell to the Muslim world.
In the Americas slavery was more massive and lasted longer because the Americas were more rural and much less densely populated: agriculture was the main industry, but there weren’t enough people to maximally exploit the land, and those who were there could command relatively high wages, therefore landowners had a high incentive to import foreign slaves, even if they looked weird and came from jungles infested with Ebola-carrying bats and other creepy stuff.
Eventually, the regions that were more densely populated and/or more industrialized abolished slavery first.
You are positing an economic reason for slavery: cheap labor to make maximum use of ample rural land. But this ignores the considerable cost of housing, feeding, and providing medical care for the slaves.
If there was an economic incentive to own slaves, it was that they were a renewable form of wealth–your slaves counted as your assets, increasing your net worth, and you could borrow against them. But of course that was diminished as the supply of slaves increased.
The main economic incentive was to slave traders, of course.
Hired workers also have these needs; their employer provides for them either directly, or indirectly through their salaries.
In fact, slaves are cheaper in this regard, since they can’t do much to object to the quality of housing, food, and medical care their owner provides, while hired workers can be picky if they have sufficient bargaining power, as they would if there were high demand for their labor.
I suspect in the 1800s the concept of paying a worker’s healthcare needs (the way progressive companies do today) wasn’t as big of a thing. Also, if that worker is a contractor, the employer doesn’t pay for ALL of his healthcare needs, only part of them–the rest is paid by other employers. The health of one’s slaves was a lot more visible and pressing on slaveowners.
North American slavery had mostly an ideological rather than economical basis.
I don’t see the relevance of this. The worker consumes health care, which has to be paid for. Whether the employer pays for it directly or the worker pays for it with part of the salary received from the employer, and whether the worker has a single stable employer or many employers on short-term contracts, the worker’s health care cost contributes to the labor cost of whoever hires that worker. The same applies to food, clothing, and housing.
I’m assuming no government redistribution (public health care, food stamps, etc.). With government redistribution things can get more complicated depending on tax policy. But I suppose that little or no government redistribution existed in 19th century US.
Slaves were given little more consideration than work animals. The health of a slave was a pressing concern to their owner only to the extent that the slave could be healed back to working condition and doing so would cost less than the expected discounted wealth that that slave would produce in their remaining lifespan.
What do you do to a horse with a broken leg?
Hired workers put more value in their own health than slave owners put in the health of their slaves, resulting in higher health care costs for hired workers.
And why did that ideology exist in that particular time and place?
My point was that big upfront costs, even if they are a better value over the long term, are generally not preferred because our brains are wired to interpret them as a worse value. That’s a big part of why credit card debt is so prevalent. Slaveowners basically had a kind of religious belief that holding slaves was this great thing to do, and so they paid the big upfront costs anyway.
Given the “12 Years a Slave”/”Django Unchained”-style history we all get taught about slavery and the South, you’d think slaveowners shooting their slaves for getting injured would come up more. I don’t know if it happened or not, but I doubt it happened much if it did.
I have, on the other hand, heard stories of slaveowners who did not let their slaves do extremely dangerous dock work and hired whites to do it instead because they didn’t want to risk losing a slave. Slaves were valuable assets.
This supports what I’ve been saying: that slavery existed because it was especially lucrative as a market to traders (but not as a labor source to farmers), and because of a sort of religious belief that holding slaves was a righteous thing to do.
That’s a good question, I hadn’t thought about it. I guess they brought it with them from the Old World? But (reductio warning) many peoples of the world had–and many still have–the belief that they ought to hold other peoples as slaves. I imagine being a non-slave class in a society that has slaves tends to boost your feeling of self-worth, natural dominance, and so forth. Human nature, etc.
“This supports what I’ve been saying: that slavery existed because it was especially lucrative as a market to traders (but not as a labor source to farmers),”
How would the traders be making money? If slavery wasn’t very profitable, their margins would be low.
“and because of a sort of religious belief that holding slaves was a righteous thing to do.”
That dates to the 1830s and John C Calhoun. Notably people before thought it was a necessary evil and many believed that afterwards including individuals who owned large numbers of slaves.
“I guess they brought it with them from the Old World?”
There were essentially no slaves in England in the 1600s. I’m not seeing how that would work.
Slave traders were middle-men. They made their money by facilitating exchanges.
The earliest American slaveholders believed that holding slaves was a good and righteous thing to do. See Theophilus Eaton, Cotton Mather, etc.
Like I said, I don’t really know where the slaveholding-as-virtue ethos came from. I don’t really believe that it was in the New World water and that early European settlers became infected with it when they got here. Then again, who knows.
“Slave traders were middle-men. They made their money by facilitating exchanges.”
They need surplus to live off of. If there is little profit for the buyer, there won’t be enough for the slave traders.
“The earliest American slaveholders believed that holding slaves was a good and righteous thing to do. See Theophilus Eaton, Cotton Mather, etc.”
That the first people who bought slaves believed this isn’t exactly a surprise (since they would be the people most likely to buy slaves). The attitude did not survive to Washington’s day.
“Like I said, I don’t really know where the slaveholding-as-virtue ethos came from. I don’t really believe that it was in the New World water and that early European settlers became infected with it when they got here. Then again, who knows.”
It looks like they believed it was a chance to bring more people to Christianity. I’m not seeing what is a big surprise about such an attitude.
It is always amazing to me that whenever Southerners get defensive about slavery, they never mention Cassius Clay or Levi Coffin or any of the other Southern anti-slavery figures. Come on people, focus on the positive rather than bitching about Lincoln.
I’m not a Southerner, just FYI. And I’m not being defensive–why would I be? I don’t own slaves, am not related to anyone who did (heck, my ancestry in this country doesn’t even go back much past 1900), and I think slavery is awful–but I do have some specialized knowledge about that part of history and like to see it represented accurately.
Maker’s breath… every time this topic comes up, I want to scream “TABOO ‘OBJECTIVE'”
Good one. Here, maybe “non-relative” or “true independent of personal viewpoint”? Jupiter is bigger than Mars regardless of my personal feelings about that. I think Huemer is arguing that, say, torture is wrong in the same personal-feeling-independent* way? (Personal feelings about the okayness of torturing people, I mean, not whether torture makes people feel bad, which is obviously an important part of the definition of torture.)
Even there, “relative”/”non-relative” can mean two very different things.
One meaning, which is typically what is attacked as “relativism”, is that “good” is whatever a particular individual, culture, or nation chooses to define as “good”. Therefore, polygamy is good for Muslims but bad for Christians. There are no actual relevant facts of the matter independent of particular beliefs. (I think the more proper word for this is “subjectivism”, but it is often called “relativism”.)
The other meaning is agent-relativism, which says that there are facts of the matter which are independent of anyone’s particular beliefs—but they are relevant to different agents in different ways. For example, egoism says that my having water is enormously good for me, but your having water is considerably less (though still somewhat) good for me. On the other hand, your having water is enormously good for you, and my having water is considerably less good for you.
In the first meaning of “relativism”, there is some kind of irresoluble dispute over the standard of what constitutes “good”.
In the second meaning, there is no such dispute. We agree on the standard, and we agree on how it is applied. It is just that “good” is in the same category as “healthy”: something can be healthy for me but unhealthy for you. In other words, it is a concept that relates one kind of thing (objects in the world) to another kind of thing (agents and their goals).
I don’t think that cuts to the main problem.
Suppose we were instead arguing over whether Jupiter is bigger than Saturn. Suppose I argued that Saturn is bigger because its radius is larger if you count its rings. This is a non-standard usage of the word “big” in astronomy. But is it “objectively” wrong? No, it’s just a different definition. Personal feelings aren’t the issue; the problem is that there are two competing notions of what the question means. One may be more useful than the other for some set of purposes, but not objectively correct.
I don’t see how making “objective” taboo would help. We all agree on the definition of objective being “true regardless of personal feelings” right? This isn’t like arguing over the definition of socialism or anything.
Subjectivity is the only objective fact.
If “objective” only means “true regardless of personal feelings’, then there’s a lot more room for objectivity than for the specific kind that Huemer and other substantive realists argue for. What they mean is that morality is ontologically mind-independent.
Huemer (and many others) think moral values are what Ayn Rand called “intrinsic” in her trichotomy of “subjective/objective/intrinsic”. Which, regardless of whether you agree with her on anything else, is a very useful categorization scheme in many fields.
The subjective is that which exists in the mind, completely independently of the external world. For example, delusional beliefs (and even then, they typically have some external cause, but let’s keep it simple).
The intrinsic is that which exists in the external world, completely independently of the mind. For example, the tree in the quad when you’re not looking at it.
The objective consists of a relation between the mind and the external world. For example, the sensation of redness: it is how light waves of a certain wavelength are interpreted by the mind.
However, outside of Rand and her followers, the word “objective” is usually interchangeable with “intrinsic”. The two are sort of packaged together. (Sometimes it is the other way around, with “objective” and “subjective” being packaged together and opposed to the “intrinsic”.)
In my view, it is not possible ever to have “intrinsic” knowledge, although it stands to reason that there is a something-I-know-not-what which exists “intrinsically”. But that free-floating categorical moral facts exist “intrinsically”, I find extremely implausible; moreover, one could not know them and they could not be relevant for human behavior.
However, if “objective” morality is meant in Rand’s sense (and as blacktrance is using it), it can certainly exist. It just names a relation between certain facts and human goals.
On a side note, a third thing is sometimes meant by “objective morality”, which is frankly stupid but widely believed. I don’t know if it has a proper name, but one might call it “deontological absolutism”. This is the belief that some actions are always right or always wrong, regardless of the context. I have observed people using “objective” to mean this.
“On a side note, a third thing is sometimes meant by “objective morality”, which is frankly stupid but widely believed. I don’t know if it has a proper name, but one might call it “deontological absolutism”. This is the belief that some actions are always right or always wrong, regardless of the context. I have observed people using “objective” to mean this.”
But that’s exactly how *legal* right and wrong work.
Right, and I think most people also recognize that we could come across some situations for which the law was not designed—and in those situations we could break the law.
And most people recognise that such justified law breaking needs to be exceptional.
Are you disagreeing with me?
If there are exceptional cases when an action is wrong, it is not always right. Now, there may be certain actions that happen in reality always to be right or always to be wrong, but the contrary is still conceivable.
In any case, it isn’t clear why having certain acts always be right or always be wrong is particularly desirable or necessary for morality to be “objective”. Perhaps only in the sense that it would be “desirable” in physics for all objects to be spheres.
The default that you shouldn’t break rules, even somewhat arbitrary ones, is desirable because it enables co-ordination. Consider driving on one particular side of the road.
There’s a kind of implicit point that the practical upshots of objective morality are to keep rules fairly fixed, which reduces defection, and to have the possibility of some sort of engagement with societies with different rules that doesn’t consist only of agreeing to differ. However, both can be achieved without full-strength objectivism.
Good point. How can values be objectively correct, as opposed to facts? Generally speaking, values are generated by this process: we take value V1 as the goal, apply facts F1…Fn about the best way to reach that goal, and that way is value V2, objectively correct _relative_to_goal_value_V1_. This is not a new idea; I think “a good knife is one that cuts well, one that is well suited for the purpose of cutting” is straight from Plato.
The tricky part is thus in V1, the “final goal”. The rest can be objective relative to it. CEV? Human flourishing / eudaimonia? God’s beatific vision? Minimal local entropy, at the price of maximizing the rest of the universe’s entropy? (As this is what life and technology are all about.)
Generally speaking we on the nnnn… _la droite_ often would be okay with a eudaimonic approach provided it were pessimistic enough: focus not so much on how to optimize everything as on how to prevent everything from crashing horribly, or better yet, let it crash, but in a low-harm way; not fail-safe but safe-fail. Imagine a society that survives collapses really well, and probably it will be something made of… clans? Another example is the utilitarian cascade much of the droitiste instincts reduce to: help people who are the most likely to help others, and in a way that you have an influence on; make sure your gifts go on giving, which quickly reduces to making kids, teaching work ethics, and having serious borders if you want anything like welfare. This is safe-fail policy.
So for us the pessimists, “safe-fail” would do as a V1 final goal, probably.
One thing I am going to blog about, and let me ask y’alls opinions about it now, is that getting richer is basically farmers moving from delivering with horse carts to delivering trucks which decreases local entropy, but getting poorer is not going back to horses, it is just having old rusty trucks that have the same capacity and practical speed as the good ones, just they are unreliable and fail all the time. In a downturn / collapse / getting poorer, both a theoretical move back to horses, and moving to old, rusty, unreliable trucks increase local entropy, but in different ways and I would like to understand how. Predictable and unpredictable entropy?
This relates to your taboo rather closely, as if my objective morality is fail-safety, if there is a collapse and downturn and I have to accept higher local entropy, I would rather have horses again than rusty old unreliable trucks, as it looks more resilient, but somehow it never happens.
Moral realists don’t have to attach truth to values; they can attach it to virtues, or rules, or actions, etc.
Sheesh. I actually believe morality is objective for other reasons, but Huemer’s argument (as presented) just seems like a really obvious argumentum ad populum with a relatively high proportion of chronological snobbery. Honestly, it seems so obviously wrong that if this wasn’t Scott summarizing (and possibly steelmanning?) it, I’d be really suspicious Huemer was getting strawmanned.
Huemer does have a whole book, Ethical Intuitionism, devoted to making this case, and I don’t think he makes this argument in that book, at least not in much detail, which is probably why he later decided to write an article on it. Presumably this article is just Huemer trying to buttress his existing position with one more argument in its favor.
Of course, asking Scott to read a book or two is a bigger ask than asking him to read an article, but I would be interested in Scott’s response to the book at some point, though I think Vox Imperatoris gives a pretty good one below.
I liked Ethical Intuitionism, but I think his Problem of Political Authority is even better. In the former he gives the most persuasive arguments for moral realism and intuitionism I’ve read, and reading it converted me enough that I have and probably will continue to describe myself as a “moral realist” and intuitionist, but I also have some doubts about it, mostly in the more nihilistic direction.
Political Authority, however makes what I consider to be knock-down arguments in favor of libertarianism with the advantage that they do not depend on ethical intuitionism or moral realism. Assuming Scott still mostly agrees with what he said in his anti-libertarian faq, I hope he reads and responds to it at some point.
And, interestingly, while rejecting evolutionary explanations for morality in his first book, Huemer lists some very convincing evolutionary explanations for statism in his second.
Yeah, as I was reading this I kept thinking, “But Huemer’s position doesn’t rely on historical movement towards liberal values. It’s just one piece of evidence that might convince some people that Ethical Intuitionism is correct.”
Yes, The Problem of Political Authority is very good.
And it doesn’t even show that you have to be an anarcho-capitalist, unless you accept the empirical facts he regards as relevant to the argument, showing that such a society would be tolerable.
All the main thrust of the book argues for is philosophical anarchism, which simply means that there is no such thing as political “authority” that gives someone the moral right to order people around, independently of being justified by some other moral standard. If you are a strict consequentialist, like a utilitarian or egoist, you already are a philosophical anarchist.
Ethical Intuitionism is pretty good, too. If nothing else, it absolutely shows how ridiculous non-cognitivism and subjectivism are. I didn’t quite explain this in my other post, but what Huemer really means to attack here is the view that “good” and “bad” really just mean “Yay X” or “Boo Y”, and that people are tricked by grammar into thinking they mean something else. Obviously, this is not the case.
When people say, “X is categorically good”, they are trying to get at some kind of coherent notion. “Error theory” (also called ethical “nihilism”) merely says that people are always wrong when they say “X is good” or “X is evil” because in fact nothing is either good or evil. It’s not a matter of opinion or feelings: they are asserting facts, and they are wrong about them. As I said below, I believe that this is true in regard to alleged categorical moral truths.
Huemer’s response to it is, “No: Hitler was evil. QED” (I am being flippant here, but that really is the thrust of it. See below for details.)
When people say “X is categorically good”, they are (often, not always) trying to get at some kind of coherent notion. And in those cases where they are, they are always wrong. And what it is reasonable for listeners or readers to conclude about the utterers almost always includes that they approve of X. The problems with non-cognitivism stem from regarding meaning as a property of statements, when in fact it’s a messy natural-language handwave in the direction of a number of different relational properties connecting utterances and people.
Are people always wrong when they categorize other things into manmade categories? Are statements like “this is well-made”, “this is a science-fiction story”, or “this is really fucking long” always wrong?
Well, that’s not particularly interesting. This is a particular subtype of non-cognitivism known as emotivism, generally considered to have died some time around the middle of the 20th century.
I wonder why philosophical anarchism really matters. Coercive authority is practically dominance, while the practical consequences of having moral justifications or not basically all reduce to prestige or eminence. The practical consequence of doing morally indefensible things, both for a random individual and for a ruler, if a higher coercive power that could punish it is absent, is basically prestige loss. Is it a question of whether we should give high prestige to our rulers or not?
But this problem was solved long ago: rulers borrow their ideas from high-prestige people, professors, journalists, activists and so on, thus the prestige rubs off on them and they get this kind of justification: they can say that what they are forcing you to do is what you – or the most prestigious people – agree with anyway. E.g. sure, taxes are coercive, but who could refuse to give to the poor without a serious prestige loss? In a purely voluntary ancap world, would you want to be That Guy in the local community who does not contribute to some popular bleeding-heart issue? The prestige loss would probably not be worth it.
Thus employing coercion to force people to do whatever is prestigious anyway acts exactly the same way as having a moral justification for coercion: in both cases most people / high-prestige people will say “Yeah, that punishment serves them well, they had it coming!”
This is a stable equilibrium. Hence the Cathedral.
A clever ancap would try to figure out how to make coercion itself extremely low-prestige. Make it stink. The problem is that coercion is probably so closely linked to dominant alpha-male attitudes that you would have to wimpify people a lot to make it stink, and if you succeed then rulers can just rule by fear, without the need for prestige. So the equilibrium really seems stable.
Consider the possibility that human psychology is a little more complicated than prestige and dominance. I mean, you may disagree. Nevertheless, it’s a common and defensible notion.
In any case, many people, from John Rawls to Martin Luther King, Jr., have defended the idea that people owe a certain allegiance to the law. This allegiance is supposed to be independent of the particular content of the law, at least to a degree.
Huemer rejects this. He says that people ought only to obey the law to the extent that the actions are independently justified.
If you can’t get people to do things by saying “You should do this because it is the right thing to do”, because people come back at you with “There is no such thing as ‘the right thing’”, then you need some other way to make them do things.
Dominance – “do this or I’ll kill you/beat you up and take all your stuff/put you in jail” – is one way of doing it. Prestige – “only not-cool losers do/don’t do that, you don’t want to be one of the not-cool losers who can’t get a lover and everyone laughs at you, do you?” – is another way of doing it.
I think even anarchists would concede the necessity of some means of getting everyone to pull together once the group exceeds a certain size; if Joe and Bob don’t get on, either (or both) of them can “light out for the Territories” and live as they like, but when you’ve got ten thousand Joes, Bobs, Sues, Janes, etc. all living in one area and the Territories are just as full, you need some way of getting people to put up with one another.
Prestige (only losers do/say/think that!) would certainly be one way for anarchists and libertarians to exert social pressure.
An important point: slavery has existed in all known civilizations, among many settled peoples without cities, and even among hunters and gatherers. Frequently without racial bias, as common sources were criminals, exposed infants, and debtors.
The South was not inventing an institution, it was making some modifications in the existing one. This will have its effects.
And the South got that institution from the North, where it existed first.
Yeah, but sometimes an institution takes off farther from its prophets’ home country because it’s better suited to someplace else, either because of something like climate (as here?), or because it’s an invasive species that the locals in the new spot haven’t got an evolved defense for, or whatever.
Nay; it began in Jamestown.
Nope, it began in the North. Check out “Seeds of Racism in the Soul of America” by Paul Griffin. Chapter 1, page 11. (I quoted it earlier for Scott, just do a search for “Theophilus”.)
OK, then say what you mean, and be specific.
Before there were slaves in the Northern colonies, there were slaves in the Caribbean and among the tribes in sub-Saharan Africa who sold them to the Portuguese for transport across the Middle Passage. Before that there were slaves among the Aztecs and Maya and Toltecs, and among the Ottomans, Spanish, Russians, and Anglo-Saxons (though they called them “thralls”). Before that there was slavery among the Romans and Greeks (whose practices and rhetoric the 18th and 19th century defenders of African-American chattel slavery in the U.S. cited heavily). Before that there were slaves in Egypt and Assyria. Your historical arguments need some depth.
@Schmendrick churls, not thralls.
Nope, thralls, not churls. A churl is of low birth, but not servile. Anglo-Saxon law distinguished between lords and churls, not freemen and churls.
Naw. It began prior to written history. Slavery is alluded to in the very oldest documents we have as an institution already in existence and taken for granted.
The institution of race-based cash crop slavery was well underway in the new world long before either Plymouth or Jamestown was founded. England was copying in North America what was already going on in Brazil and the Caribbean.
If slavery existed first in the North, that does not mean that the South got it from the North. Slavery existed in England and the Latin and Dutch Caribbean before there was a South.
I’m guessing that you’re talking about Slaves in New Netherlands in 1625. Does that count as “the North”? Well, then slaves in Spanish Florida in 1565 count as slaves in “the South.” It makes more sense to say that the Dutch brought slavery to the English. They famously brought a ship of African slaves to Jamestown in 1619. The settlers promoted them to temporary servitude on the grounds that they had been baptized, but seemed to endorse the institution of slavery. In the next couple of decades they came to accept the slavery of Christians, without any influence from the Dutch or anyone else.
African slave trade to English colonies probably got serious in Barbados in 1640. Jamestown probably had slaves, both black and white, before then, but the history is murky. Massachusetts received a shipment of slaves in 1638, or possibly much earlier.
Granting that northerners had laws about slaves and a few individual northerners had slaves before any southerners did…
It does not follow that the South “got that institution” from the North. In fact, it’s incredibly obvious that this isn’t the fact.
Was the character of northern slavery as outlined in your book very much at all like southern slavery? I.e. was it chattel slaves planting and harvesting cash crops on plantations of wealthy land owners?
Or was the character of southern slavery perhaps much more similar to the institution of slavery as it existed in the west indies and Spanish territories? It seems much more likely that the South “got that institution” from places with similar climates and economic situations (i.e. not the north).
Was the character of northern slavery as outlined in your book very much at all like southern slavery? I.e. was it chattel slaves planting and harvesting cash crops on plantations of wealthy land owners?
How “very much like” Southern slavery was to Northern slavery had a great deal to do with how one measured it. In comparison to slavery in the Middle East and Asia, there was very little difference between North and South. If the comparison was limited to the variety present in the New World, a bit more. Compared just to that practiced in French-controlled New Orleans, one might distinguish quite a bit more difference between Delaware and Georgia.
However, there was also a very broad spectrum of human bondage across the South. The median slave was working cash crops on a plantation with hundreds of other slaves. The median slave owner was working mostly subsistence (w/some cash crops) farming in the fields with his slaves. Broad generalizations would lead to mistakes.
According to my book, slavery in the North was different from slavery in the South in the following ways:
1) There were fewer slaves per slaveholders
2) They planted different crops/did different jobs.
It was similar in most other ways, including one that’s often overlooked by people today: its purpose was not primarily economical, since hired workers are more productive and less costly, but as a lifestyle/cultural choice.
“It was similar in most other ways, including one that’s often overlooked by people today: its purpose was not primarily economical, since hired workers are more productive and less costly, but as a lifestyle/cultural choice.”
I think you are overgeneralizing. Slavery was economic to begin with (because free labor simply wasn’t available), became a cost when the soil was over farmed and became economic again as the interior of the country opened up.
I don’t buy that slavery was ever very economical.
For one thing, slaves were expensive. I can’t remember the exact numbers, but a skilled slave in the mid-1800s would cost about $50,000 in today’s money, and a slave who couldn’t do much was like $10,000. Those numbers are probably wrong, but probably not by much. And keep in mind, that was when the supply of slaves was at its peak. Then those slaves also had to be housed, fed, clothed, provided with medical care–at all times. And on top of all that, slaves typically were not as productive as hired workers.
I remember reading somewhere–I think it was in David Horowitz’s heavily-footnoted book “Uncivil Wars”–that the additional value to slave owners from slave work over that of hired workers was something like 2%, and even that always struck me as way too high.
“For one thing, slaves were expensive. I can’t remember the exact numbers, but a skilled slave in the mid-1800s would cost about $50,000 in today’s money, and a slave who couldn’t do much was like $10,000. Those numbers are probably wrong, but probably not by much. And keep in mind, that was when the supply of slaves was at its peak. ”
You do realize the US banned the importation of slaves in 1808, right? (While imperfect, the addition of the British blockade ensured imports would not compare to natural increase.) Restricting supply drives up price.
Also a high price implies slaves were valuable because they were expected to pay for themselves. If they weren’t, then the plantation owners should be losing money.
“Then those slaves also had to be housed, fed, clothed, provided with medical care–at all times. ”
Or you have the slaves grow their own food and build their own shelter.
“And on top of all that, slaves typically were not as productive as hired workers.”
It is a good thing the majority of slaves were used for plantation based agriculture where they were more productive than hired workers (because you could work them like crazy for back breaking labor).
“I remember reading somewhere–I think it was in David Horowitz’s heavily-footnoted book “Uncivil Wars”–that the additional value to slave owners from slave work over that of hired workers was something like 2%, and even that always struck me as way too high.”
Why? If free labor was better, the slave owners would have freed their slaves and made more money. They were certainly aware of the existence of serfdom, sharecropping and other methods of extracting surplus from peasants. They went with slavery because it was the most economically efficient method for their situation.
Samuel Skinner, having worked in corporate America for some time, I have completely lost faith in “This large group of people would enact sweeping economic reforms just because it was blindingly obvious their current policies were causing them to be outcompeted.”
Once a society embraces a trade or a practice as part of its cultural identity, people defend it beyond any economic expected outcome calculation (as shown by the Civil War itself.) If there was a point when it was economically rational to advocate for slavery, it was well before the shooting started.
If the slave works for 40 years, the slave costs $1250 in today’s money per year. Minimum wage workers today are normally paid more than $1250 + the cost of food and shelter per year. Therefore slavery is plausibly economical (although you have to count the costs of keeping the slaves chained, the poorer quality work done by slaves, etc. as well.)
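The back-of-the-envelope arithmetic above can be checked in a few lines of Python. This is only a sketch of the commenter’s own comparison: the $50,000 purchase price, 40-year working life, and modern minimum-wage figures are rough assumptions from the thread and from current US law, not historical data.

```python
# Annualize the assumed purchase price over an assumed working lifetime.
purchase_price = 50_000   # assumed cost of a skilled slave, in today's dollars
working_years = 40        # assumed working lifetime

annual_capital_cost = purchase_price / working_years
print(f"Annualized purchase cost: ${annual_capital_cost:,.0f}/year")
# prints: Annualized purchase cost: $1,250/year

# Rough modern comparison: a minimum-wage worker at $7.25/hour,
# 2,000 hours/year (food and shelter excluded on both sides).
min_wage_annual = 7.25 * 2_000
print(f"Minimum-wage annual pay: ${min_wage_annual:,.0f}/year")
# prints: Minimum-wage annual pay: $14,500/year
```

As the original comment notes, the comparison only goes through if upkeep (food, shelter, supervision) and the lower quality of coerced work are also counted, which is exactly what the rest of the thread disputes.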
“Samuel Skinner, having worked in corporate America for some time, I have completely lost faith in “This large group of people would enact sweeping economic reforms just because it was blindingly obvious their current policies were causing them to be outcompeted.””
That doesn’t follow. Corporations suffer from diseconomies of scale: the owner and the manager are not the same, and responsibility is widely diffused.
384,000 owned slaves
10,780 owned fifty or more
88 per cent of America’s slave-owners owned twenty slaves or less.
The majority of slave owners were essentially small businesses where the finances were exceptionally straightforward. Why would they buy an additional slave when they could make more money hiring a white worker? They already hire white workers; what is stopping them from hiring more?
“Once a society embraces a trade or a practice as part of its cultural identity, people defend it beyond any economic expected outcome calculation (as shown by the Civil War itself.)”
That isn’t what the Civil War shows. They thought that
-the war would be quick (and then hit the sunk cost fallacy because treason is typically punishable by death)
-they were afraid that freed slaves would murder them. Haiti’s slaves killed the slave owners after they freed themselves. Then they proceeded to exterminate the white population (men, women, children, including whites that had helped blacks).
Also keep in mind the cost of the slave has to be paid in its entirety up front (or the buyer must take on debt to pay for it). Normally that would be a disincentive, especially to pre-industrial farmers whose living depends a lot on luck from year to year.
The Civil War can show all those things; they aren’t mutually exclusive. For the purposes of this side-discussion I’m more interested in what the institution of Southern slavery shows, which is that the basis of slavery was more ideological than economical, and that this was just as true in the North where English-speaking American slaveholding originated.
“which is that the basis of slavery was more ideological than economical, ”
You quoted someone saying slavery was more profitable but you insist it was ideological? How does that even follow?
Also, if slaves are worse, the people selling slaves would be better off than the people buying them.
Who did I quote and when did I quote someone saying that? Because I didn’t, but I want to see where you got confused so I can clarify.
“I remember reading somewhere–I think it was in David Horowitz’s heavily-footnoted book “Uncivil Wars”–that the additional value to slave owners from slave work over that of hired workers was something like 2%, and even that always struck me as way too high.”
Is that before or after factoring in the purchase price? If it’s before, then the whole trade doesn’t make sense, but if it’s after, then it just looks like the efficient markets hypothesis at work.
@Samuel Skinner: I said that 2% struck me as too high. I guess I should have specified, I thought it should be negative.
slavery has existed in all known civilizations
So how many slaves were there in England in 1607? My understanding of medieval England is that slavery died out sometime in the twelfth century. Was England not part of civilization?
And England is further north than The North.
“And England is further north than The North.”
By latitude or by climate?
England has (slightly: ~5C lower average summer daytime high temp) cooler summers and (much: ~20C higher average winter nighttime low temp) warmer winters than, say, Maine. Does that make it further north by climate, or not?
It makes it more surrounded by water than Maine.
A wide variety of practices are translated into English as “slavery.” The phrase “chattel slavery” exists because it is not the central example of slavery, unless you only study American history, as is true in, say, American high schools. Chattel slavery was rare in England after the Norman conquest, although it was picking up again with the African slave trade. Serfs were common, but people from other places with the same rights and obligations are often called slaves in translation. Serfdom died out largely for economic reasons after the Black Death, but there were some serfs left in England in 1600.
“Chattel slavery was rare in England after the Norman conquest, although it was picking up again with the African slave trade. Serfs were common, but people from other places with the same rights and obligations are often called slaves in translation.”
This is a good point. “Slave” is such a loaded term, and you can alter moral judgments of a society by how you translate the name of a social class. Take a class of feudal Japanese people who have land rights but no liberty to take a job other than farming, and who can be killed by the local bushi. Peasants, serfs, or slaves? Surely you wouldn’t give them a more positive name than European serfs, who had a right to life? But wait, Japanese farmers outranked craftsmen and merchants…
From what I know, slavery had a continuous history in Iberia (and some other parts of southern Europe) through the Middle Ages and into the colonial period, partly thanks to Muslim influence in that area, but had died out in England within a century or two after the Norman conquest; the Domesday Book lists a substantial slave population, but references get scarcer thereafter. English colonists presumably picked up the practice from their Spanish neighbors.
Wikipedia says that the first unambiguous slaves in English-speaking America were in the Virginia Colony.
More important than just a list of values is knowing where each value butts up against counterbalancing values. Value equilibria, if you like.
Everyone thinks “openness” is important, but you don’t want to have a chip implanted that lets your employer know exactly what you’re thinking all the time, and your employer doesn’t want to have to share their IP with competitors, and you don’t want guys in trenchcoats flashing you as you walk your kids home from school. Openness is counterbalanced by privacy, security, and decency.
Totally agree those qualities are in conflict. But if you’re thinking about openess as a value, specifically from the big five personality traits (the trait sometimes associated with liberalism to a degree), then it actually refers to openness to (new) experience, rather than living an “open” life without privacy. Maybe they correlate, but I don’t think its a one-to-one correlation. I think openness to lack of privacy probably correlates strongest with extraversion.
I was thinking about it as more like “transparency”.
Openness to new experiences of course also has counterbalancing values: you don’t want to be so open that you put yourself or your loved ones in certain danger, for example.
My point is that it’s not much use just to say “these values are important.” You also have to know their limits, i.e. where they are in equilibrium with other values.
It helps to focus on one thing at a time. Make it slavery. Via Wiki: Slavery was known in civilizations as old as Sumer, as well as almost every other ancient civilization. The Byzantine-Ottoman wars and the Ottoman wars in Europe resulted in the taking of large numbers of Christian slaves. Similarly, Christians sold Muslim slaves captured in war, and the Islamic world was engaged in slavery as well. Slavery became common within the British Isles during the Middle Ages. Britain played a prominent role in the Atlantic slave trade, especially after 1600. Slavery was a legal institution in all of the 13 American colonies and Canada (acquired by Britain in 1763). Slavery was endemic in Africa and part of the structure of everyday life. David P. Forsythe wrote: “The fact remained that at the beginning of the nineteenth century an estimated three-quarters of all people alive were trapped in bondage against their will either in some form of slavery or serfdom.”
But now, slavery is no longer legal anywhere in the world. And it began its rapid decline just as Enlightenment-spawned, science-led development took hold.
Similar stories can be told about the emancipation of women, reductions in individual violence, educational and medical progress, the decline of poverty, and the growth of science and knowledge.
These stories aren’t perfect; they’re not one-way streets; there are ebbs and flows. But as with slavery, how can one reasonably deny that its demise (if not total elimination) is real, actual, objective PROGRESS toward something better?
Humanity is learning more about how the world works, and in that sense it is becoming smarter. Wealth creation, scientific advances, medical achievements, expanding human rights: every aspect of life is affected and becomes a lever for this progress. No guarantee that it will continue — we have to keep learning and innovating, but I think it’s pretty clear that we’re making progress. 😉
Well, I think slavery is objectively wrong. But playing relativist’s advocate here, couldn’t you say that rather than discovering the objective immorality of slavery, the Brits and the Yanks just discovered the Industrial Revolution instead? I mean, the whole Marxian “ideas are no more than epiphenomena of economics” thing is IMHO false, but I don’t think it’s obviously false.
Yes, you can look to the Industrial Revolution as a cause. But it doesn’t answer the question simply to say that it was what transformed our “morality.” What caused the Industrial Revolution? Wasn’t it progress being made in a number of different but ultimately related areas of human endeavor? The progress that was made in all the various areas was part and parcel of the same swift changes that were occurring, and each affected and reinforced the other. I see no reason or basis to argue that morality is somehow separate and different, standing apart from the broader tide of progress.
Well, I’m sympathetic to morality as just another class of facts about the world. (I’m a virtue ethicist.) But I don’t think that technological or even scientific progress can adequately ground an account of moral progress. It’s like saying, hey, look, a bunch of us figured out how to build nukes, and wouldn’t y’know it, lots of major powers think it’s okay to have a nuclear arsenal. The stories that the “progress” either generated excuses (as with nukes and Truman) or made them a moot point (as with industrialization and slavery) are just too easy to tell. Again, I’m not saying “morality is objective” is unsound. I’m just saying “scientific progress -> morality is objective” is IMHO invalid.
Well, I wouldn’t claim it is any one thing. My view is simply that all of this progress is part and parcel of humans gaining more knowledge generally. I put morality in the same category. And I believe that they are all self-reinforcing.
Okay, but sometimes learning science and tech makes us forget moral knowledge, if you will.
E.g., we progress to the cotton gin, and forget that slavery is wrong. Or we discover the Indo-European languages and do really bad things with this “Aryan” idea. (Sorry if that was a Godwin?)
In particular, it’s relatively uncontroversial that civilizations rise and fall, go through dark ages, stuff like that. So survive/thrive farmer/forager kill-zombies/find-happiness values might wax and wane rather than being just on an upslope. In which case, it’s kinda question-begging to assume that we’re the one civilization that’s never gonna fall, the one boom that’ll never bust, and that in consequence present trends represent something real rather than just the present part of some survive/thrive farmer/forager cycle.
To clarify, I’m NOT arguing here that modern values are wrong. I’m making the much weaker claim that even if they are right, mere historical trends are just a sort of chronological version of argumentum ad populum, and thus an invalid form of argument even if the conclusion happens to be sound.
Adding on to the discussion…
Huemer is arguing specifically that it is liberal morality which is steadily rising to ascendancy. As I commented at Overcoming Bias, this doesn’t actually match up with the trend.
Boehm has clearly documented that liberal morality (he calls it an egalitarian ethos) dominates among nomadic, armed foragers. He also calls it a reverse hierarchy. This morality was lost with agriculture and the rise to dominance of exploitative alpha elites.
As Ober has documented, rule egalitarianism re-emerged as the dominant ethos of more than half of the hundreds of Hellenic city states, only to die out again for thousands of years.
It then re-emerges in Enlightenment Europe. (In both Classical Greece and Europe we see a condition of hundreds of years of hundreds of integrated but competing states, effectively selecting for improved internal-state cooperation)
What we see are phase transitions between a cooperative, egalitarian, liberal state on one extreme and an exploitative, hierarchical, game-theory-defector state on the other. Liberal values thrive in a cooperative environment. They are easy pickings in a noncooperative environment (in the game-theory sense). Saint in one mode is sucker in the other.
Be we then as wise as serpents and as innocent as children.
I think there is a wrinkle in your version of Boehm, in that modern liberal societies are only egalitarian in a fairly qualified sense, basically equality of opportunity but not of outcome. A Warren Buffett figure in a HG society, one who had loads more stuff than everyone else and didn’t share it, would not have been popular.
Everyone agrees there’s technological progress. If moral progress is just a result of technological progress, the mystery is explained.
I agree with Irenist. Ancient societies really really didn’t think in a capitalist way. Once you have a fully fungible economy, hiring workers beats having slaves for most things.
” If moral progress is just a result of technological progress, the mystery is explained.”
No argument there Scott. But not really my point.
I don’t think it’s moral progress just because technological progress has made it easier to be moral. It has long been noted that people tend to be good at the virtues that are cheap.
Foxconn? Congolese coltan miners? Bangladesh garment industry? Even if we don’t call it slavery any more, we’re certainly not averse to inflicting horrible suffering on the not-rich, not-educated, out-group members who provide cheap, cheap, cheap labor to make our stuff.
The problem with that assessment, Schmendrick, is that the people “we” “inflict horrible suffering” on in order to get cheap labor are beating down those sweatshop doors in order to get those jobs. Working at an exploitative third-world sweatshop or factory is so much better than any other option they have without the evil, wicked West that it’s silly. So how is it evil to present them with an option that beats the pants off of all their other options?
Assessments that America or the West is immoral always seem to ignore the agency and decisions of every other person or group in the entire world, ignore what happens to them and what they do, and conclude that those others are morally superior to, and being harmed by, America/the West, which is wicked and immoral and shameful for doing so.
Many ancient societies, up until the High Middle Ages, were labor-limited. There weren’t enough workers to hire to maximize the productivity of the capital (mostly land), so landowners (capitalists, if you want to call them that) imported slaves.
And with technological collapse, moral collapse? I’ll buy that, if the “Today we have contraception so everyone can have Free Love – and that’s wonderful, because Free Love is the only right way to have sexual relations!” crowd leave off the “Free Love is the only right way” and accept that if they end up back in a situation where sex = babies/STDs, Free Love is probably going to be reined back as well.
On the other hand, I don’t buy that “killing isn’t murder when everyone only has a wooden club but when we have guns then it’s murder but when we regress back to clubs then it’s not”.
Murder is murder, and murder is wrong, whether it’s the 12th, the 21st or the 42nd century. Making your morality dependent on your chronology is (to borrow and abuse a phrase Chesterton used in another context) “(L)ike seeing somebody commit a murder and then saying, ‘But this is the second Tuesday in August!’”
As a rule, it can be evil to present someone with an option that is better than all other options, if the existence of the option creates bad incentives.
The claim that the option is better than other options makes the assumption that you can just add the option and do nothing else. It may be that adding the option changes the incentives, and thus changes the existence of other options in future cases, even though it doesn’t change anything at the moment you add it.
Jiro: This is my view of most “let’s give out free clothes, food (and bibles) to people in Africa” charities. It is kind of ironic, because it seems to be the biggest anti-capitalist fear actually realized, but in an entirely unexpected (and much more tragic) way. Instead of an “evil monopolist” keeping everyone down, you have a group of well-meaning people who, at their own loss, seem to be doing the same (choking off the potential competition that could make T-shirts or food in Africa). The monopolist would not do it, or would not do it for long, because he would just keep losing money that way in the long term. But a charity is in a way exactly an institution in which people “lose” their money willingly (or where they buy good feelings about themselves), so it can work. In a way, it should still work out for the better: after all, if you get free food and clothes, you can simply concentrate on other industries instead. But I guess that if a sizeable proportion of the population is illiterate, there are not that many other things you can do. It seems like a rare example where protectionism could be a good idea (hmm, the more I think about this, the weirder and more complicated it seems to get 🙂).
However, you have not pointed out which bad incentives the sweatshops produce, exactly. They provide people with a higher income than they would otherwise have had a chance at, possibly letting them save some of it to improve the lives of their families, providing better education and material security, and allowing the next generation to start a bit higher up because of that. The reason we no longer have child labour and sweatshops in Europe is that the child labourers and sweatshop workers of the past allowed us to start a little bit higher up. The main difference between now and then is that then the sweatshops were everywhere (except for the places that were even less advanced…and even there there was child labour), whereas today they are only somewhere. Of course, you can make the sweatshop phase shorter by giving money to the sweatshop countries (in a smart way, though!). You can also encourage liberal (economically liberal, not social democratic) policies in the sweatshop countries. Had it not been for India’s socialist policies, it could have been close to where Hong Kong or Singapore are now (not quite the same, I do understand that a combination of a good natural harbour and being a city-state makes things easier, while the countryside is always poorer, if also cheaper, than the cities…although this is also a good argument for cutting big countries into smaller and more efficient pieces…I am pretty sure that a world of, say, 1000 states would be better than the current one of about 200). Chile is the most capitalist country in South America (and one of the most capitalist in the world), and while it is true that there is a lot of inequality there, the same is true of neighbouring countries, and Chile is the best off of all of South America (the only country there which is a member of the OECD).
Colombia is the second most capitalist country in the region, and despite having problems that are not really its own fault (like Venezuela subsidizing the communist guerrillas in parts of the country), it has managed to reduce absolute poverty from 60% of the population to just 20% in the last 15 years. China is still pretty socialist (well, it depends on the economic zone), but despite its many political problems, the rise of capitalism there cannot be called anything but a tremendous success, and it has helped many, many times more people than any fair-trade initiative.
I mean, it sure would be nice if we could, by some realistic means, turn India into Europe in terms of infrastructure, education and so on overnight. But we cannot, and while the free market in a place like that may (does) seem ruthless and cruel to us, it also seems to be the best way to eradicate poverty. If someone succeeded at eradicating sweatshops in the name of helping people, they could be unintentionally hurting them instead, prolonging the poverty.
“Murder is murder, and murder is wrong, ”
Is all killing murder? Is killing on the battlefield murder? Does Timeless Deontology imply absolute pacifism?
Tibor: To use another libertarian example, if you permit people to charge the price of a house for water in an emergency, you create incentives for those selling water to raise their prices to the price of a house, compared to only permitting a charge of, say, $10 per glass.
Setting the price too low is also bad, since people will not be incentivized to bring in water during emergencies, but there’s a big gap between “a price so low that there are no incentives to bring water in” and “a price so high that there are incentives to make people mortgage their houses for water”.
Jiro: I think that is a good example but it only seems to work in, as you stated, an emergency – that is a situation where a single water seller might have a de facto local monopoly since everything is in disarray, nobody else is likely to come in the next few days with fresh water and people die without drinking water for a few days. The higher price in this case also won’t work as an incentive to increase supply, because there is no supply. In a sense, this suggests that it might be a good idea to regulate natural monopolies. But they are not very common, really. In fact, the only case I can think of is exactly this – an emergency scenario where something that has no substitutes (water…and not many more things actually) but cannot be lived without is all under control of just one entity. Still a good point, though.
Slavery isn’t legal anywhere (unless you count some ISIS-controlled territory), but it’s hardly gone. I’m sure there’s much less slavery than there would be if it were legal.
The Islamic State controls most of Syria and a fifth of Iraq: I’d be careful about saying it is only “some” territory.
Leaving aside the fact that “some territory” adequately describes “most of Syria and a fifth of Iraq”, that’s not even what she said. She said that “some of the territory that ISIS controls” has slavery going on in it.
> Slavery isn’t legal anywhere (unless you count **some ISIS-controlled territory**)
Well, slavery is legal in all IS-controlled territory, not just some of it.
Just for these situations: http://smbc-comics.com/index.php?id=3907
It seems reasonable to say slavery is “gone” in the same way it seems reasonable to say horse-drawn carriages are “gone”. Sure, some still exist, but they’re not normal anywhere, and in the places where they do exist, there’s a special reason for it.
Philosophical note: since slavery is a status at law, how can it exist where illegal? Mind you, you can be treated like a slave, but if the law does not, in fact, recognize you as property, how can you be property?
You assume that the law is very powerful and can make final definitions. In a huge portion of the world, *custom* is still more powerful than law, and when the two clash, law gives way.
The prime example of this is Mauritania. Slavery was formally abolished in 1981, and the official government stance is that there are no slaves in the country. Yet something close to half a million people still live in bondage, as the central government simply isn’t strong enough to enforce its laws over the traditional customs.
+1. To say nothing of the black market in domestic or sex slaves all across the world…or as we like to euphemize it, “human trafficking.” As of 2013, the WP reported that there were ~30,000,000 slaves worldwide, with ~60,000 in the U.S. That’s not a lot per capita, but it sure ain’t nothing.
In which case, does it really make sense to say it’s against the law, given that custom is law there?
Depending on your definition of “law”, this is either true or false, but it doesn’t have any bearing on the question of whether slavery exists.
Same with slavery. You can quibble about the definition of slavery, but that doesn’t change anything about the question of whether (something that can be reasonably characterized as “slavery”) exists. At most it will affect what people call it, but it won’t change anyone’s minds as to whether it “really is” slavery.
Chattel slavery is the possession and trade of humans. When possession and trading rights are legally recognized by the state, they are called “property”, but clearly possession and trade can exist even without, and against, legal recognition; consider the illicit drug market, for instance.
Well, that is why the terms “de jure” and “de facto” exist. You can pass a law that says “bad things do not exist”, but it won’t make them go away. Still, people often think that this is a way to solve problems. The so-called “war on drugs” is one very unfortunate example of that kind.
If you taboo the word “slavery” and instead start talking about the legal bondage and forced labor of humans for the profit of others along with the transfer of those humans against their will as if they were property, you find that practice alive and legal in the USA.
And most everywhere else, too.
Slavery isn’t legal in Mauritania. Smoking weed isn’t legal in most of America.
> But as with slavery, how can one reasonably deny that its demise (if not total elimination) is real, actual, objective PROGRESS toward something better?
Slavery acts as an incentive against murdering people: it provides a basic economic reason for keeping someone alive whom you would otherwise want to kill, for whatever reason, such as ethnic hatred. When and if there is a lot of killing going on (see genocides), allowing slavery would be an improvement, although “allowing” is the wrong word, since killing is even less allowed, so legality does not matter much. Rather, we could say that if any place becomes really lawless and a lot of killing is done for a long time, slavery comes back inevitably. Not in the legal sense, as legality does not matter in lawless places, and perhaps not as chattel-trading, since that requires a working social order, but more as serfdom: being under the protection, and at the complete mercy, of the local gang boss, without any sort of enforceable rights.
Meta-level: be more pessimistic and suddenly this progress becomes far less clear.
My historical knowledge regarding slavery isn’t particularly good, but didn’t a lot of slavery involve slavers raiding coastal regions of Africa which were otherwise unimportant and shoving people into ships for “export”?
I am pretty sure that is a misconception. White slavers bought their slaves from African coastal settlements, and the slaves there were captured by other Africans to sell. Trans-Atlantic sea voyages were already a huge pain in the ass and very risky, why commit to do one of those in order to engage in some more risky behavior that might not get you what you came there for, when the alternative is just buying the thing you came for?
DrBeat has got it essentially right. It was never profitable to land a ship full of armed men in Africa and try to go round up future slaves. It was often profitable to land a ship full of cheap rum, obsolete guns, and other such trade goods in Africa and pay the more successful locals to sell you some slaves. The people thus enslaved were, at least initially, people who would otherwise have been simply killed in local wars because it was too dangerous to leave them alive.
Eventually, of course, wars that wouldn’t have occurred were started because “…and hey, we can make a profit selling the slaves” was enough to tilt the balance. Also, most of the slaves were exported to Caribbean islands where they died horribly over a couple of years rather than quickly at the outset. So you can perhaps justify an argument that the first slave ship into Jamestown was a net moral good, but not the general case.
Slavers raiding otherwise-unimportant coastal regions and shoving people into ships for export was occasionally profitable a couple of centuries before the classical African slave trade, but that version involved slave ships coming out of (northern) Africa and raiding Europe. Even then, it worked better if you established some sort of quid pro quo with the more powerful locals. And even then it’s hard to argue that it was a net moral improvement outside of very special cases.
It suspiciously fits a narrative of the strong dominating the weak being somehow good for the weak, but as I don’t know much about the topic and don’t want to do bulverism, OK.
I don’t follow that at all. What is it that “suspiciously fits a narrative”?
…Who said anything about it being good for the weak? Even the part about enslaving people instead of killing them, which was completely irrelevant to what American settlers or Europeans were doing, said that this led to wars being started because of the value they’d get from all the slaves. He ends by saying that it’s hard to argue it was a net moral good even with the fact that people were enslaved instead of killed.
Saying that bad actions weren’t maximally bad in every imaginable dimension isn’t the same thing as claiming they’re good.
It seems awfully convenient when people both ignore the Barbary slavers and transfer their MO to Europeans, which then lets them ignore that West African elites owned slaves.
One theory I heard, pre-Jared Diamond, is that the empires of the pre-Columbian Americas lagged behind Europe and Asia socially and technologically because they practiced widespread human sacrifice (mostly on captives of war), while the Eurasians practiced slavery. Converting a percentage of your defeated enemies into productive labor, instead of just cacking them all, must confer some benefit.
This is, of course, not to argue that slavery is a net moral good. “Better than human sacrifice” is not a high bar, and in both cases wars were being fought for the purpose. Anyway, I find the Diamond arguments about trade, technological catchment, and domestic animals much more compelling.
I don’t think that theory holds water. Human sacrifice was common in the pre-Columbian Americas, but not at scale; we all know about the Aztecs, but they were an outlier there. The Maya and Inca sacrificed individuals or small groups to consecrate temples or commemorate important dates (and mainly children, which are harder to acquire as prisoners of war), but the Aztecs often killed adults, and their appetite for sacrifice was so huge that big chunks of their politics revolved around it.
It seems to have been rarer still in North America; only a handful of tribes are known to have practiced it consistently.
stillnotking: I never thought about it too deeply, but my explanation for why Europe (eventually) developed faster than the rest of the world was that there has, for the most part, been more competition there, with many small nations and countries in a very densely populated continent (I’ve always known Europe was small, but when I played with that truesize app Scott posted a link to a while back, I realized how extremely tiny it really is compared to the rest of the world), whereas this was not the case in most of the rest of the world (but one would have to make a much more careful analysis to see if that is really true).
Of course, in the case of the Americas it could boil down to available resources, or maybe even (if that is the case, I don’t really know) to it taking longer for the nomad groups to get there and settle, while Northern Africa and the Middle East had already had a civilization (and at least Greece was in contact with it, spreading it further to southern Europe). If we just model the history of technology by having important discoveries made and spreading randomly, each after an exponentially distributed time following the prerequisite discovery (you first need to learn how to cast iron before you can make steel, and that before you can make locomotives), then a head start of a few hundred years is all it takes to explain a difference in technology of not much more than a few hundred years. Maybe you can explain it all with a slightly improved model, with discoveries popping up like that at an exponential rate on separate islands (roughly, different culture groups) with some (also random and not that large) migration of ideas between them. Probably not all of it, though. Parts of the world kept living in the stone age pretty much until the 20th century, so that model seems to be too simple.
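(A side note: the toy model sketched in that comment is easy to simulate. The sketch below is my own hypothetical illustration, not anything from the discussion: each “island” reaches its next tech level after an exponentially distributed wait, and with some probability a discovery migrates, instantly pulling a random other island up to the discoverer’s level. All function names and parameter values are made up for the example.)

```python
import random

def simulate(n_islands=5, n_techs=20, mean_wait=100.0,
             migration_prob=0.1, seed=0):
    """Toy model: islands make sequential discoveries after
    exponential waits; a discovery sometimes migrates to one
    random other island. Returns the time at which each island
    reaches the final tech level."""
    rng = random.Random(seed)
    level = [0] * n_islands                   # current tech level per island
    next_time = [rng.expovariate(1.0 / mean_wait) for _ in range(n_islands)]
    finish = [None] * n_islands               # time each island hits n_techs
    while any(f is None for f in finish):
        # unfinished island whose next discovery happens soonest
        i = min((j for j in range(n_islands) if finish[j] is None),
                key=lambda j: next_time[j])
        t = next_time[i]
        level[i] += 1
        # idea migration: the discovery may spread to one random island,
        # lifting it to the discoverer's level (a crude shortcut)
        if rng.random() < migration_prob:
            k = rng.randrange(n_islands)
            level[k] = max(level[k], level[i])
        for j in range(n_islands):
            if finish[j] is None and level[j] >= n_techs:
                finish[j] = t
        if finish[i] is None:
            next_time[i] = t + rng.expovariate(1.0 / mean_wait)
    return finish

times = simulate()
lag = max(times) - min(times)  # gap between first and last island to finish
```

In this toy setup the lag between the first and last island tends to shrink as `migration_prob` rises, which matches the comment’s intuition that a modest head start plus slow idea migration can produce technology gaps of a few hundred years without any deeper explanation.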
I don’t follow that at all. What is it that “suspiciously fits a narrative”?
There is a group of you guys that tend to put forth a series of historical claims that all happen to support a theme of more authoritarian right approaches to government. I agree on occasion with specific arguments you guys make, but it concerns me that your historical perspective might be constructed to suit the political theme, as my own historical reading tends to tell me there’s bits and pieces everywhere that either support or negate all political camps at some point or another, meaning things aren’t especially clear-cut. Again, I don’t hold this up as a refutation of what you say, that would be fallacious and I hate it when people do that. But I feel political groups with common historical narratives have trouble truth-seeking in areas like history, whether left or right. No offence intended, I just got the impression you were after why I felt this way.
For what it’s worth, I’m neither authoritarian nor particularly rightist, race politics bore and irritate me, and yet John Schilling’s account matches the history I know.
Europeans had very little luck maintaining any kind of long-term presence in sub-Saharan Africa until the late Victorian period, mostly for disease resistance reasons. There may have been a few European or European-led raids, especially early on, but it didn’t take long for the Atlantic slave trade to become institutionalized, with ports dedicated to it set up and run by the various West African kingdoms. And at that point, raiding the coastal settlements of those same kingdoms would have been counterproductive as well as dangerous.
“It was never profitable to land a ship full of armed men in Africa and try to go round up future slaves. It was often profitable to land a ship full of cheap rum, obsolete guns, and other such trade goods in Africa and pay the more successful locals to sell you some slaves. The people thus enslaved were, at least initially, people who would otherwise have been simply killed in local wars because it was too dangerous to leave them alive.”
Also note that criminals and prisoners of war continued to be killed as annual and funerary human sacrifices in states involved in the Atlantic Slave Trade, e.g. Dahomey. So it seems like the supply of people the locals saw as expendable exceeded the demand for slaves.
That wasn’t an answer to the question.
What part of that post was the part that did that? What is the specific thing that Schilling said that “suspiciously fits a narrative of the strong dominating the weak being somehow good for the weak”?
I thought it was obvious – the claim that slavery is usually the lesser of two evils, the other being straight-up killing. I think for immoral people there’s a fairly massive financial incentive for slavery, so the question for me is, if the broader public didn’t abhor the practice, why would immoral people, equipped with transport ships and superior firepower, actually refrain from such a profitable opportunity? That seems overly optimistic when it comes to human nature.
“It’s better to be enslaved than murdered, but this caused wars to happen that would not otherwise have happened, so it was not a net moral good anyway” is not “a narrative of the strong dominating the weak being somehow good for the weak” unless you change the definitions of half of those words, and if you do, then the observation that ANYTHING bad is still better than being murdered is “a narrative of the strong dominating the weak being somehow good for the weak”. And it is not by any means in any way a narrative that supports a more authoritarian right approach to government.
Nobody said that European slavers didn’t raid African coasts for slaves because they were so moral. They said the financial incentive for raiding was poor. Raiding is dangerous to your crew (who you would have to pay to replace) and might not get you any captives, or at least not enough to justify the expense of lost crew, expended resources, and opportunity cost of time you could have spent doing profitable things. It also made it harder to trade with the natives, who were willing to sell you slaves without you having to take on any risk at all!
It really seems like you’re applying emotional reasoning here, wherein white Europeans are Bad and thus all things they do are maximally Bad in intent and effect, and anyone who explains any of their actions as being due to any incentive beyond maximal Badness is a right-wing apologist who thinks racism and slavery were good.
Don’t call it “bad”, call it “brutal”, and then you’ve got a concept for the debut album of your new black metal band.
Better yet, “brvtal”.
Edit> Oh, I see where part of the confusion is: my focus is on arguing against the upstream comment that John and DrBeat seemed to be agreeing with, not on their comments alone, which, while I did not totally agree with them, were pretty reasonable.
I will try to clarify my position. I want to first of all suggest that you may be pattern-matching me to some lefty ideal type you’ve got, and I can see in this case why you’re doing that, but I don’t think it’s accurate. I aim for meritocracy, which I think would make history morally irrelevant. With a few caveats, I also happen to really like the Western world. I also think slavery was a common practice in many civilizations and that “white people” (whoever they are) don’t hold any unique historical burden in that regard. I just think slavery really, really sucks, and I think we need to be very cautious before making arguments with the effect of saying it doesn’t matter too much either way.
The original statement was “Slavery acts as an incentive against murdering people” and “so allowing does not matter much, rather, we could say if any place becomes really lawless and a lot of killing is done for a long time, slavery comes back inevitably”. The reason suggested was that slavery is generally associated with incidental wartime capture of opposition forces, and therefore is not the cause of significant net harm.
I feel this is likely to be false for several reasons: (1) creating a market demand for slaves will create a market supply, and the incentives involved are huge (decades of near-free labour); (2) wars are fought between armed people, usually with some knowledge of fighting, but soft civilians in isolated villages make easier slaving targets, so it seems unlikely to save lives in the way described; (3) even where slaves and war targets are the same, and where the slavers and slave owners are different parties, slavery incentivises war by increasing its profitability considerably. War is expensive and unpleasant, and without a strong financial incentive it is a lot less attractive.
I have also noticed that far-right arguments involve a general pattern of saying that domination-type harms aren’t that bad and that other harms are much, much worse and more urgent (i.e., a major misallocation of moral priorities). Because this issue is both important and fits that pattern, I am more skeptical than usual concerning the empirical claims. I also happen to agree with Mike’s assertion that Enlightenment principles may have contributed to a fairly amazing improvement in this area. I have tried to acknowledge that I am not an expert and that I do not consider the claims to be refuted by empirical evidence (as I didn’t put any forward), only that they are in conflict with what I feel are very reasonable principles of economics and human nature.
Killing is an improvement over slavery because the things you can get from killing someone for bad reasons are not as useful to you as the things you can get from enslaving them for bad reasons, so there are fewer incentives for killing random people than for enslaving random people.
Except for the case where you believe that human sacrifice wins you all sorts of favours from the gods.
By the way, there are documented cases of human sacrifice in ancient Rome; it was abolished in 97 BC (but had been practically nonexistent for much longer than that).
The Norse (and other Germanic peoples prior to Christianization) did it too, into the early Middle Ages, and they were a pretty successful civilization for their time. They also practiced slavery, though, and sacrifice was much rarer.
>But now, slavery is no longer legal anywhere in the world. And it began its rapid decline just as Enlightenment-spawned, science-led development took hold.
People in all societies still routinely rip off their workers and exploit vulnerable populations by offering them poor wages; human trafficking is still a thing, and slavery still exists, either overtly or de facto, in many places.
The presentation changes, but the substance does not.
This article touches on a lot of this (http://www.theguardian.com/books/2015/mar/13/john-gray-steven-pinker-wrong-violence-war-declining), but I find arguments that start with the premise “but everything’s BETTER now!” questionable.
“But as with slavery, how can one reasonably deny that its demise (if not total elimination) is real, actual, objective PROGRESS toward something better?”
*puts on devil’s advocate hat*
Well, there are plenty of people who lack the conscientiousness, impulse control, and general mental wherewithal to be orderly and productive on their own, or to pursue their own ends with any efficacy. Being incapable of self-support, they are ultimately dependent upon others to provide their basic needs, and when free, will turn to beggary and parasitism, or even to theft or banditry; and so they will ultimately find themselves the dependents of some other. Further, if their labor is to be put to effective and productive use, it will be under the close direction and oversight of some other as well, most likely the same one providing the basic needs. If these sorts are to work to any end, it will be to someone else’s ends, not their own. Thus, such a person, “while being human, is by nature not his own but of someone else”, to quote Aristotle, and is thus a “natural slave”.
With the abolition of legal slavery, such individuals still exist, and are still provided for by others; but now it is done via the state, rather than by individual owners. And their labor goes mostly unused, albeit mostly because technological advances have made the value of their labor too low to justify the costs of the close oversight, direction, and enforcement of discipline needed to make natural slaves productive. And what little discipline is provided, to prevent the destructive, short-sighted behavior arising from their free time to act on undirected impulses, is found mostly in the form of either mass or revolving-door imprisonment. Thus, one could say that slavery, in the Aristotelian sense, was not so much ended as nationalized, and so, like most nationalized industries, it is grossly inefficient and ineffective.
a) About Mao – Mao may have been much worse than Yongle, but he was worse for reasons that are much closer to our current values, even if by now they have led to worse results (some sort of morality uncanny valley).
b) There’s a difference between “there’s a morality that we converge to” and “there’s a universally true morality”, as you pointed out with the fashion example, but this leads to further questions. If there is a universally true morality, would we converge to it? And if there isn’t, why can we say things like “this value is bad”, instead of just “this value looks bad in our system”? And if there is, and we know it (even if we’re not aware of it), isn’t it just that how moral we are changes with the situation, rather than the actual morality changing?
An example of knowing morals without being aware of them: when I lived in Jerusalem, I’d occasionally see posters inviting people to pray to help raise some recently-deceased person’s soul to heaven. These people (claimed to) follow divine command morality – things are good because God approves, and you get closer to heaven by doing good things. But in this case, they prayed to get someone else into heaven, so they got no heaven points for it, which means it wasn’t a moral thing to do by their divine command theory – but they still believed that it was morally the right thing to do.
Well, for Jews who believe in an afterlife, I assume that praying for the departed would be a mitzvah (lit., commandment; broadly, good deed), and they’d get their own points (to adopt your framing) toward the world to come for that.
Yeah, I don’t think this is a problem with divine command theory.
The problem with divine command theory is the same as with deontology or any kind of impartialist ethics (such as utilitarianism): it doesn’t show why anyone is obliged to follow it.
Let’s grant the existence of the Christian God, with all the relevant perfections, etc. Still, he can make all the commands he wants, but there’s no reason why I ought to follow them. Unless God has something I want, or threatens me with something I don’t want. But then following the commands is only good because it gets me the thing I want, which is the real basis of my ethics.
For instance, would people follow God’s commandments if those who followed them went to Hell and those who sinned went to Heaven? Obviously not. If the Christian God exists, following his commandments is good because it is in your rational self-interest to attain the beatific vision, rather than to suffer eternally in Hell.
(For the atheists who think this is irrelevant, the exact same argument applies to utilitarianism.)
Divine Command is special because God is held to be both omnipotent and omniscient, so if God says that some impartial system is correct, that is necessarily true. It may seem incorrect to us, but God knows better.
In that case, it is of course a truth incomprehensible to human reason. And once you let those in, you might as well stop arguing and switch to hitting people with clubs.
Maybe He’s lying.
Taboo “obliged”. The actual reason people follow a moral code is that doing otherwise means a prestige loss. This is obvious from the moral language commonly used: “Am I a bad person for eating meat? Should I feel bad about myself if I eat meat?” What else could that mean but some kind of self-worth, i.e. either real or subjectively felt prestige?
Moral arguments reduce to convincing people to deduct prestige points from those who don’t act according to the principles proposed. It is really hard to see what, without a mechanism like prestige or coercion, an obligation would really mean, as in, how a moral obligation could actually incentivize people to follow it.
This can be external, real prestige loss, called shame, or internalized, subjective prestige loss, called guilt.
Divine Command is a really cleverly simple way to handle prestige-obligation. Prestige means respecting people who did something useful for the community. We propose the existence of a $deity who did basically everything for us, created the universe, etc., so he has infinite prestige. We are grateful to him for everything. Praise the Lord and all that, gigantic amounts of prestige heaped on him. And not obeying the commands of an infinitely high-prestige being is obviously a huge prestige loss.
Thus, it does work, in practice, as an obligation, as in, the consequences of not following divine commands result in prestige loss. But every other kind of obligation works the same way, too.
“The actual reason people follow a moral code is that doing otherwise means a prestige loss.”
You get a prestige loss for failing to follow hygiene codes, but that doesn’t mean the actual reason for hygiene is prestige: the actual reason for hygiene is health and wellbeing. Mechanisms aren’t reasons.
People think they mean something by “obligation”—and that thing is certainly not “avoid suffering a prestige loss”. They may be wrong as to whether there are any such things as categorically binding obligations, but the idea is not nonsense.
@TheAncientGeek @Vox Imperatoris
Explaining the mechanism of the Diesel engine or evolution is actually a good way to learn about them. Some things are just mechanisms and nothing more, or at least nothing much more of interest. And a mechanism is a nice empirical thing. See the Sequences, the classic blegg-thing, how an algorithm feels from the inside. Detecting something beyond a mechanism can be difficult and error-prone. Are we really sure morality actually has something beyond the social mechanism?
I mean, the reasons for the morality mechanism are rooted in sexual selection or something sufficiently close to that.
From a biologist’s viewpoint, it would be very tempting to reduce everything to these mechanisms. Are we really sure there is something non-reducible here that requires inviting a philosopher? Maybe a philosopher is just a clever, high-prestige arguer.
Vox, obligation is prestige loss in a shame culture. It is an empirical fact that this happens; of course that does not exclude that something more can happen. But in a shame culture an honest person will tell you “I am obliged to do this or else I lose face”. And it sufficiently explains the phenomenon. (I grew up in a mixed guilt-shame culture, and I remember that when I was a child my mother’s arguments about eating nicely, not making a scene in a restaurant, or the hundred other things parents tell kids basically all reduced to “what will people say if you do X”, and my counter-argument, namely “nothing, I did X before in public and nobody said a thing”, wasn’t working. My mother was the shame-mover, my father the guilt-mover, and his way of explaining moral things to me was clearly related to prestige: “only a coward attacks another boy from the back, issue an open challenge to him from the front instead”, “lying is something done by cowards who are afraid to face the consequences of their actions”, etc.)
In a guilt culture, this is shame internalized, as conscience; at least that is the simplest explanation, because otherwise it is really hard to gather data about it. As in the above example, I internalized in my childhood that cowardice isn’t merely something other people shame you for, it is also something you should feel bad about yourself for.
(Another aspect of guilt is empathy/sympathy, but only towards people who aren’t just a statistic. But don’t even try to reduce morality to empathy or compassion, because that covers only 1 or 2 of Haidt’s 5 axes.)
So, is there anything remotely empirical beyond the mechanism?
Vox Imperatoris: “(For the atheists who think this is irrelevant, the exact same argument applies to utilitarianism.)”
What? No! No, of course not. Utilitarianism is not in my rational self-interest; it requires me to give my money to people who will never know me and never reciprocate. The most benefit I gain from donations is acclaim on the internet when I brag about them, which impacts my real life a lot less than the fact that I now have less money.
Sorry, it seems my point was unclear.
I did not mean that utilitarianism is actually in your rational self-interest. You are, of course, right that it isn’t. I am saying that, unless it were, there is no reason for you to follow it.
It is therefore in exactly the same position as divine command theory: you should obey the commands (or maximize aggregate utility) only to the extent that it maximizes your self-interest. At best, they completely supervene on some deeper ethical theory.
(Also, many early utilitarians, including Mill, tried to claim that maximizing aggregate utility is in your self-interest “rightly understood”. This is crazy, but they claimed it.)
What! No! Of course not! “Utilitarianism” says nothing about /which/ utility function to use at all! Now, if it happens that your preferences include terms for the well-being of random Africans, we have some advice about how to serve those more effectively, but if your utility function does not include any such terms, your point about listening to a hypothetical god applies equally well to listening to us.
It depends on what you mean by “utilitarianism”. The classical utilitarians clearly meant that you ought to maximize the greatest good of the greatest number.
Contemporary utilitarians like Singer are much the same, clearly saying that you ought to maximize total utility in an impersonal way.
You seem to be interpreting it to be an extremely broad metaethical camp that includes all forms of consequentialism that involve maximizing pleasure, happiness, or preferences of some kind. You are welcome to use it like this, but other people most often do not use it this way.
For one, egoism is typically taken as an alternative to utilitarianism (as, I suppose, is strict altruism). Egoism is the theory that says you ought to maximize your utility function. Utilitarianism says you ought to maximize the aggregate of all utility functions (or the average, or some similar impartial measure).
It is in your rational self-interest. Altruism is a trade of utility against prestige: you give utility, you get prestige. Even if you do it anonymously, anonymous giving is not in our evolutionary past, so the brain still feels some tasty prestige coming.
>The problem with divine command theory is the same as with deontology or any kind of impartialist ethics (such as utilitarianism): it doesn’t show why anyone is obliged to follow it.
Same reason as any other system of ethics: it’s in your own best interests.
Any system of ethics is ultimately about how to correctly relate to other people. If we’re all made in God’s image, by and for God, then relating correctly to him is both in our own best interests (since it’s what we’re explicitly designed for) and it also explains why you ought to trust God (since he has intimate knowledge of you, your needs, what is most beneficent to you, etc.)
Now, of course, you can distrust God, take the position that God is a liar, etc. You *can* do that; but keep in mind it is impossible to please God without faith – and that another way of saying faith is “trust”:
>And without faith it is impossible to please God, because anyone who comes to him must believe that he exists and that he rewards those who earnestly seek him. – Hebrews 11:6.
At this point, it’s just a matter of having that trust – perhaps through experiential knowledge that leads you to either A: trust God directly or B: trust the message/messengers conveying information about God to you.
>Taste and see that the LORD is good; blessed is the one who takes refuge in him. – Psalm 34:8.
None of this is true in a utilitarian system; it might sometimes be suitable to act according to such principles, but sometimes it might *not* be; it’s kind of a wash. But in a Divine Command theory where God really is beneficent, there is no good reason to not obey God.
Exactly. So, in fact, egoism—the view that one ought to maximize one’s own self-interest—is true.
It just so happens, in the alleged Christian reality, that the best way to do this is to slavishly obey God. But if God’s commands were not in your interest to obey, there would be no reason to obey them.
Therefore, the fact that God has commanded you does not make the action moral. The fact that God promises to give you a big reward is what makes it moral.
God doesn’t get to legislate the standard of goodness, let alone create categorically binding commandments. The standard of goodness is knowable completely independently of him. He just has (enormous) non-moral powers, so if happiness is your goal, you’d better be on his side.
This is no different in principle from a billionaire promising to give you a million dollars if you go to church on Sunday. If you value having a million dollars more than you value not having to go to church, you should do it. But this doesn’t mean the billionaire has some kind of special moral powers.
>God doesn’t get to legislate the standard of goodness
Except he does in our specific case, because he made us.
All ethical judgments of good or bad are subjective because only subjects can interpret good or bad; it’s just in the case of DCT, there exists a person (God) who everyone must come to terms with and the subjective standard is universalizable.
God designed us in such a way that what is, in fact, most beneficent to us, is to relate to him correctly in faith, worship, love, and so on. Now, sure, hypothetically, we can imagine alternate states where God created us differently – but taking us, ourselves, at face value, we have our own nature, and barring God’s intervention, we cannot be happy without him.
So if egoistic happiness is a consequence and therefore a motive for right ethical action, and that’s only possible through correctly relating to God (and to other human beings, since, remember, human beings are made in God’s image)* – which necessarily entails sincere and real love, faith, and worship of God and loving our neighbors as ourselves – then it can be really said that God becomes the standard of ethical goodness, since any attempt to relate correctly to God (and other human beings) requires obeying God. (unless, again, you think God is some weird gnostic trickster or something – which I do not.) He designed the scenario this way.
>This is no different in principle from a billionaire promising to give you a million dollars if you go to church on Sunday. If you value a having a million dollars more than you value not having to go to church, you should do it. But this doesn’t mean the billionaire has some kind of special moral powers.
It’s more like if the billionaire was your parent, raised you, taught you, built your home, your town, your community, and all the technology you use from scratch, and then insisted on giving you your inheritance only if you lived and walked in a way that indicated a mindset of loving trust in him. See the difference?
The fact that God creates *literally everything that is created* probably informs the nature of our subjective ethical realities.
Now we’re pretty much just into a semantic debate. I agree with you on the object level.
All I will raise is G.E. Moore’s “open question argument”. Knowing that God commanded something, is it still an open question whether the thing is good? Yes. (We also have to know about the reward.) Therefore, the command is not the standard of the good.
You may think it is false that following God’s (alleged) commands is not the way to maximize your self-interest. But it is conceivable. (You must believe this; otherwise, all of God’s commandments, as well as the promise of salvation, could be shown through pure reason, and there would be no need for revelation.) If it is conceivable that something is not good, that thing is not the fundamental standard of the good.
It’s the same case as with the billionaire parent. It may actually be the case that it is in your self-interest to do what he says. But is it conceivable that matters might be otherwise? Yes.
(On the other hand, is it conceivable that whatever maximizes my self-interest also doesn’t maximize my self-interest? No.)
Also, I have been working on the assumption that following God’s commandments is a way to earn entry into heaven. You do what God says, he pats you on the back, and he lets you into heaven because you deserve it. But the most consistent kinds of Christian theology are not like this. Following God’s commandments to the best of one’s human ability is neither necessary nor sufficient for salvation. So now it’s totally mysterious why following them is good (unless following them brings the maximum earthly happiness, which the Bible implies is not true).
>Knowing that God commanded something, is it still an open question whether the thing is good?
Well, it depends: what’s “good”?
>But is it conceivable that matters might be otherwise?
Probably. Maybe. I don’t know, honestly. But why does it matter even if it were the case? is the utility function up for grabs?
>(On the other hand, is it conceivable that whatever maximizes my self-interest also doesn’t maximize my self-interest? No.)
Well, I can’t conceive (which I take to mean “think of or understand in some way that makes sense to me”) of it, but that doesn’t make it impossible. I have a hard time conceiving of a 3-in-1 God, but I believe that’s true too.
>Also, I have been working on the assumption that following God’s commandments is a way to earn entry into heaven. You do what God says, he pats you on the back, and he lets you into heaven because you deserve it. But the most consistent kinds of Christian theology are not like this. Following God’s commandments to the best of one’s human ability is neither necessary nor sufficient for salvation. So now it’s totally mysterious why following them is good (unless following them brings the maximum earthly happiness, which the Bible implies is not true).
I highly suggest you read Romans and Galatians (and the Sermon on the Mount too!) as they help with understanding these mysterious things:
>15 To give a human example, brothers: even with a man-made covenant, no one annuls it or adds to it once it has been ratified. 16 Now the promises were made to Abraham and to his offspring. It does not say, “And to offsprings,” referring to many, but referring to one, “And to your offspring,” who is Christ. 17 This is what I mean: the law, which came 430 years afterward, does not annul a covenant previously ratified by God, so as to make the promise void. 18 For if the inheritance comes by the law, it no longer comes by promise; but God gave it to Abraham by a promise.
>19 Why then the law? It was added because of transgressions, until the offspring should come to whom the promise had been made, and it was put in place through angels by an intermediary. 20 Now an intermediary implies more than one, but God is one.
>21 Is the law then contrary to the promises of God? Certainly not! For if a law had been given that could give life, then righteousness would indeed be by the law. 22 But the Scripture imprisoned everything under sin, so that the promise by faith in Jesus Christ might be given to those who believe.
>23 Now before faith came, we were held captive under the law, imprisoned until the coming faith would be revealed. 24 So then, the law was our guardian until Christ came, in order that we might be justified by faith. 25 But now that faith has come, we are no longer under a guardian, 26 for in Christ Jesus you are all sons of God, through faith. 27 For as many of you as were baptized into Christ have put on Christ. 28 There is neither Jew nor Greek, there is neither slave nor free, there is no male and female, for you are all one in Christ Jesus. 29 And if you are Christ’s, then you are Abraham’s offspring, heirs according to promise.
A vague term used to mean many different things. I propose that we ought to use it to mean that which maximizes one’s self-interest: that which attains the thing(s) which are most satisfying to him, such that if he had them, he would not want anything else.
I mean, you tell me whether it matters. In your worldview, it is a conceptual distinction, not a metaphysical one. That is, self-interest and the commandments of God demand the same actions.
In the same way, a whistle is in one sense made of tin and in another sense made of subatomic particles. Is it important to know which of the two is more fundamental?
As for your quotes from the Bible, first I must point out that the first paragraph is an obvious sophistical attempt to twist the meaning of God’s covenant with Abraham and the Jewish people to make it compatible with Jesus. In fact, the whole thing is an exercise in strained interpretations—or what is commonly described as “lawyering”.
Nevertheless, the final paragraph makes it pretty clear that man is “no longer under a guardian”, i.e. the law. This has had a big influence on Christian antinomianism. On the other hand, there are passages in the Bible that say the exact opposite: that man is still under the Mosaic law as strictly as ever. This makes sense under the interpretation that the Bible is a self-contradictory document.
@Vox I think what Brad is saying is that if you have valid reasons to be extremely grateful to a person, anything such a person commands you becomes good, as it is a way to repay your debt.
Now, of course, what makes the whole thing tricky is that if a millionaire saved your life a hundred times and as a repayment he commands you to murder an innocent person, that does not make it good.
Theists argue that god is a special case. For example, because god “owns” the innocent person, and also you. Destroying someone’s property is supposedly OK if he asks you to. Except that persons are inherently valuable. But for the Christian, persons are inherently valuable only because of god; besides, they wouldn’t even exist without god, so god commanding you to un-create a created person would be good.
It’s seriously weird, but that is sort of what it looks like. A theist would say that if the pope or even an angel commands you to murder an innocent person, that does not make it good. But god, yes. It simply seems that for the theist god is much more of a special case, a special category, than an atheist can imagine. It seems there is one set of rules for god, namely no rules, and a different set for everybody else, from archangels to maggots.
There are also consequentialist considerations to obeying God’s commands; I think there’s an open question in another part of this comment thread over whether doing some reprehensible (to our intuitions) thing at the command of a friendly AI is a good or bad idea; it seems that answering in the affirmative in that particular case becomes even more powerful when asking the same question of God. If you believe God ultimately has some beneficent end-game in mind and he understands the future better than you do (which, if he made the space-time continuum, he most certainly does), then he would have the best grasp on what actions lead to the best results on a strictly consequentialist basis.
This ties into virtue ethics too – if I murder someone on my own whims, I don’t know the future well enough to know anything except that I killed someone for my own petty reasons (and if I pretend that I can know with certainty that such a murder will make the world a better place, I am being pretty dishonest with myself about my own epistemic certainty in predicting the future), but God can permit me, a murderer, to exist because he understands my existence will lead to X, Y, and Z consequences down the road that are ultimately more beneficent than if I did not exist; hence when a man does evil, he does it from purely selfish motives, but if God permits evil to exist, he can do so from a pure motive. That sounds outrageous, to be sure, but this is the conclusion I have come to.
Paul is basically saying that the promise to Abraham of offspring was a reference to the protoevangelium (https://en.wikipedia.org/wiki/Protevangelium), the promise that the seed of the woman would crush the head of the serpent, that is, a reference to the messiah. This is an idea that Christian theology sees as highly influential in understanding Old Testament theology; the idea that any woman (and eventually and specifically, any *Hebrew* woman) could, in principle, give birth to the messiah is kind of a big deal; it’s why Noah has to be “perfect in his generations”; it’s why Pharaoh throws male children into the Nile but not female; it’s why the bible is so concerned with genealogies. I am, of course, taking the position that the bible is a document written by God and transmitted through men, however.
If you dispute Paul here, okay, sure, whatever. I still would consider Paul authoritative on the basis of reported miracles and on the overall testimony of the church, however.
As for antinomianism, it’s a thing, but it’s a heresy too. Go read 1 Corinthians 6:9-11, for example, or Ephesians 4:17-32.
As for the law, seriously – read the Sermon on the Mount! The law (and much of the Sermon on the Mount points to the law in a big way – c.f. Matthew 5:20, for example) is an insurmountable standard that indicates to sinful men their need to throw themselves on God’s mercy; it is a school teacher that leads to Christ; it is a means by which God reveals his attributes and character and, ultimately, points to Christ.
Like, for example, go to Leviticus 14 and read the ritual for the cleansing of a leper; the ritual prescribes killing one bird and dipping the other bird in its blood and letting the latter one go free; this is such a blatant picture of penal substitutionary atonement that I almost did a spit take when I read it.
As for a discussion of self-interest, I defer to Jaskologist’s excellent post below; it covers the same territory I wanted to, but in better words.
If the Christian God exists, following his commandments is good because it is in your rational self-interest to attain the beatific vision, rather than to suffer eternally in Hell.
On the one hand, this is clearly true. Righteousness is eternally rewarded, and indeed it would be unjust of God to do otherwise. On the other hand…
(and I realize that at this point I am turning to personal experience that most of you cannot relate to*)
As you develop your relationship with God, He will eventually put you in the position where you have to do the right thing even though you do not want to, and don’t really believe that it will ever pay off for you. I do not know why, but He clearly wants us to become the sorts of people who will do the right thing even if it doesn’t benefit us.
* FWIW, while this is indeed my personal experience, it is not unique to me. The same thing happened to CS Lewis, and I suspect most of the famous saints.
It’s all in Aristotle, all in Aristotle! Bless me, what do they teach them in these schools?
Or, to put it more clearly, virtuous actions, like any kind of action, become a habit if you do them long enough. Even if you start out because of some external compulsion/reward — fear of going to Hell, say — soon enough you’ll find virtuous actions becoming second nature, and you’ll do them naturally, without having to be threatened or bribed.
My point is that you’re praying for someone else (= to get someone else to heaven = to give someone else mitzvah points™), while also getting mitzvah points of your own for doing this. The models I can think of for this all seem kinda lacking – it feels like you’re laundering mitzvahs and trying to scam god here. (Though to be fair, “trying to scam god” is a pretty big part of Judaism.)
Far more people in China died from famine and disease in Yongle’s day than in Mao’s. Mao is only remembered as being a particularly bad character because the Chinese famine of 1960-61 (which was only a reversion to the death rate of late 1940s China) took place against a background of skyrocketing mainland and island Chinese life expectancy, nutrition, and health.
You must mean that people under Yongle died at a greater *rate*, because the population numbers in wikipedia can’t support a claim of absolute deaths.
In any case, my wife’s family is likely to claim that there are other reasons that Mao is remembered as a monster. Every member of her extended family spent at least some time in prison and work camps, some for decades.
There were ten times as many people towards the end of Mao’s reign as under Yongle’s, so you’re right about me confusing number with rate. Sorry for my careless wording. More realistically, a fourth to a fifth as many Chinese under Yongle died as under Mao from famine and disease. That’s still roughly comparable- Mao ~40 million, Yongle in the high single millions.
China’s total death rate fell more than two-thirds under Mao from 1949 to the late 1960s.
The ratio is probably correct, but my numbers were calculated wrong and are, thus, severe underestimates. More like 60-80 million Yongle, ~250 million Mao.
I think the idea of a ruler supreme to all others, whose bloodthirst is universally accepted without any justification, is a wrong impression of the past. Mao needed at least to pretend that he was doing something good for everyone (maybe he even believed it; Stalin certainly did not, though he also pretended, at least in public). So did most rulers in history. In no era could you get away with randomly murdering people with no justification unless you simultaneously made sure you had a group of powerful supporters (basically the army) behind you. Even so, it weakens your position with the people at no gain. Most of the more modern dictators realized this, hence propaganda and trying to conceal things that are hard to justify even with it. At “best”, a less efficient model of dictatorship has been gradually replaced by a more advanced one as dictators learned from the mistakes of the past. Some inventions, such as the printing press or radio, also increase both the possibilities of propaganda and the necessity to use it, as they make the transfer of both information and disinformation much faster.
But even those older princes (the successful ones anyway) knew that they had to do this with some people. The peasants were usually (if not always) quite easily subdued if you had all the nobles behind you, but amongst the nobles the prince had to tread carefully, otherwise risking a rebellion.
The severity of punishment decreased, with most of the world now having abolished the death penalty, for example, and even where it is retained, quartering by horses and similar things are gone. Again, I don’t think it has much to do with morality. If your chance of catching criminals is very low (as it was back then), you have to increase the penalty to get the same amount of deterrence. I think that if for some reason we could only catch one murderer in 100, quartering would come back.
I think the very basic moral foundations are the same everywhere and at every time, i.e. if you go into sufficient detail of what follows from what, people will end up agreeing on what is good and bad. Aztecs would have regarded human sacrifice as a moral good because it brought the favour of the gods to the rest and prevented their wrath, while the souls of those sacrificed would be immortal anyway. Now if I believed the same (the way I believe in Australia), I would not really be very opposed to human sacrifice either. At the same time, if you managed to convince an Aztec that what they are doing is just sending people to meaningless deaths after which there is a good chance of mere oblivion, they would probably reconsider. Similarly with everything else. What seems to change over time (and also space and political conviction) are the assumptions we make about the facts of the physical world rather than morals themselves.
Though, in a very trivial way, a constant converges to itself over time 🙂 I still think that “objective morals” is too strong a term. If we have some hard-coded underlying proto-morals, then it is because they evolved as an adaptation in our species. I would imagine a society of sentient spiders or ants (given the way they reproduce and care for their young, and in ants the very biological caste system of their society) to have morals very different from our own (good books by Heinlein that touch on this kind of idea are “Stranger in a Strange Land” and “Starship Troopers”… note that the film Starship Troopers sadly has very little to do with the book).
How about urbanization? Wealth and urbanization are highly confounded. But the causality makes more sense. Urbanization requires tolerance of other people, a reduction in conflicting purity norms. Whereas, as you say, wealth might allow the increase in purity norms, as a luxury.
I like this theory. I wonder how well population density correlates with scores on Haidt’s foundations, and whether the correlation would have been the same historically.
Brazil and Argentina have same-sex marriage; less than 10% of Russians support the idea. Both are roughly equally urbanized, with Russia being somewhat more so.
And Russia is intermediate in wealth (PPP) between the two, so these three countries and one issue doesn’t really distinguish the two theories. Not that singling out three places and one issue is ever a good test.
No, Russia is above the two. Look it up.
Pro tip: the phrase “look it up” always makes you sound like an asshole even when you don’t mean it to.
Per capita PPP GDP (according to the IMF, chosen arbitrarily):
Russia 50th, Argentina 55th, Brazil 74th
Perhaps Douglas Knight is thinking of:
Total nominal GDP (according to the IMF, chosen arbitrarily):
Russia 10th, Argentina 24th, Brazil 7th
In context, it was clear he meant “per capita”.
And stating something that false does merit a smidgen of assholishness.
That makes your example even worse, asshole.
Brazil got same-sex marriage through creative interpretation of the Constitution by the Supreme Court. There’s never been a vote about it, and there’s never been a survey that showed majority support for it (the usual result is a bit over 50% against, a bit under 40% for). Support is higher in the wealthiest regions and upper social classes, and I think large cities as well.
Quantitative measures like support are better than binary measures like legality, if only because they contain more information. But I bet the support is lower in Russia than Brazil.
The first measure of this I thought of before looking at data:
Take the top and bottom 5 states in the US for urban density, and the top and bottom 5 states for per-capita GDP. Total how many votes Obama got in 2008 in each of the 4 groups. If his lead in the urban areas over the rural ones is bigger than his lead in the rich over the poor, then this is worth thinking about properly.
Most rural: Maine, Vermont, West Virginia, Mississippi, Montana
Most urban: California, New Jersey, Nevada, Massachusetts, Hawaii
Poorest: Mississippi, West Virginia, Arkansas, South Carolina, Idaho
Richest: Delaware, Alaska, North Dakota, Connecticut, Wyoming
urban/rural = 307.3/258.1 =1.19
rich/poor = 237.5/205.5 = 1.15
Wow that was a way screwier metric than I expected. Someone should design something not based on “what can I wiki in 30 seconds”. But this theory seems to make some sense.
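The back-of-envelope comparison above can be sketched in a few lines; this is just a hypothetical restatement, using the vote totals quoted above (in whatever units they were tallied), and the function name is mine:

```python
# Sketch of the comparison above: Obama's 2008 lead in the most-urban
# states over the most-rural, vs. the richest states over the poorest.
# Totals are the ones quoted upthread (units as given there).

def lead_ratio(total_a: float, total_b: float) -> float:
    """Ratio of the vote total in one group of five states to another's."""
    return total_a / total_b

urban, rural = 307.3, 258.1  # top/bottom 5 states by urban density
rich, poor = 237.5, 205.5    # top/bottom 5 states by per-capita GDP

urban_lead = lead_ratio(urban, rural)   # ≈ 1.19
wealth_lead = lead_ratio(rich, poor)    # ≈ 1.16

# The urbanization theory predicts the first ratio exceeds the second.
print(f"urban/rural = {urban_lead:.3f}, rich/poor = {wealth_lead:.3f}")
print("urbanization wins" if urban_lead > wealth_lead else "wealth wins")
```

As the thread notes, this is a screwy metric — a proper test would use all fifty states and control for the urbanization/wealth confound rather than cherry-picking the extremes.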
See Steve Sailer’s “affordable family formation” idea. The correlation is good. There’s another good correlation I found once that I can’t recall now.
I expect taking into account all the states would show urbanness to be more correlated with voting for Obama than your initial figures.
How would you explain Japan? Isn’t Japan urban, xenophobic, and ocd-about-purity?
And rich. Japan is weird for both theories. Which is kind of the role of Japan in the world.
A possible explanation might be that cultures change slowly in this way, and that Japan was essentially feudal up until the end of WW2, and unindustrialised until the start of the twentieth century. Then after WW2 their economy was driven by large-scale, centralised industries like electronics and car manufacture, which had a corporate culture that provided a vehicle (hehe) to continue the tradition of strict hierarchy. I’m not sure about this though, because the small business stats don’t seem to back up this argument, and because Japan is plain hard to understand.
Plus the claim of slow cultural change seems to apply everywhere except Japan. One of the most militaristic and masculine cultures on earth has become the most pacifistic and feminine (to grossly exaggerate and oversimplify) inside the last half century.
Japan is just weird.
Japan is xenophobic and OCD about purity relative to what? Relative to the US, probably. Relative to a random third world or Arab country, probably not.
Also, Japan may just be evidence that some things only seem to be natural progressions as a side effect of how ideas spread. Japan still has the death penalty because it’s a lot harder for activists and ideologues in one country to influence another country that is on the other side of the globe and speaks a different language. Likewise, Japan has retained a lot of non-progressive elements of society because it’s really hard for someone in the US or Europe to shame Japan for having them.
And what about Poland? Is the weather there so much hotter and wetter than the weather in Germany?
“We worry a lot about racial sensitivity, but if we ever got a society where racism was as thoroughly neutralized as syphilis, we’d probably drop that value pretty quickly too. If we ever totally conquer poverty, so that everyone’s got more than enough, maybe we’ll even stop worrying about compassion and fairness.”
-This is not happening. Racism is conquered, yet there’s more concern about it than ever. Poverty is pretty much conquered in the U.S., yet I don’t see much sign that people are stopping worrying about compassion and fairness.
Racism = conquered & poverty = conquered aren’t uncontroversial enough IMHO to serve as handy premises. You’ll get stuck defending those instead of the argument you’re using them for.
How will we ever know racism and poverty have been conquered? The Catholic Church was always pagan-hunting even when there were no pagans to be found. Stalin always talked about foreign Capitalist/Trotskyist conspiracies even if such conspiracies, if real, were effectively harmless to the continuation of his rule.
Well to the degree that poverty is a statistical artifact – the bottom 20% are always the “poor” no matter what their absolute condition historically – then it will never, ever be conquered.
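A minimal sketch (the incomes and the 20% cutoff are invented for illustration, not from the thread) of why a purely relative definition of poverty can never be “conquered”: if “poor” means “in the bottom 20% of incomes”, then multiplying every income by any positive constant leaves exactly the same people below the cutoff.

```python
# Hypothetical household incomes, $k/year (illustrative numbers only).
incomes = [12, 18, 25, 33, 40, 55, 70, 90, 120, 200]

def relative_poor(xs, quantile=0.2):
    """Return the indices of households below the bottom-`quantile` cutoff."""
    cutoff = sorted(xs)[int(len(xs) * quantile)]
    return {i for i, x in enumerate(xs) if x < cutoff}

before = relative_poor(incomes)
after = relative_poor([x * 10 for x in incomes])  # everyone gets 10x richer
assert before == after  # the same households are "poor" either way
```

However much absolute conditions improve, the bottom quintile is still the bottom quintile, which is the point being made about poverty-as-statistical-artifact.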
“The Catholic Church was always pagan-hunting”
Catholic here. That *always* is kinda controversial, too. We’ve totally had other hobbies besides pagan-hunting, and pagan-hunting was arguably not really much of a thing in lots of places in lots of times.
“even when there were no pagans to be found.”
And the idea that paganism ever completely died out will get you in a tussle with at least some Wiccans.
“How will we ever know racism and poverty have been conquered?”
Well, absolute poverty just requires some economic data. But relative poverty ye shall always have with ye, what with Zipf’s Law and whatnot. (Basically what Dain said.)
As for racism, I dunno. I think “structural racism has not been conquered” is a really popular viewpoint among even moderate parts of the liberal political coalition, and while I suppose one could have a limit case where they poll all the people of color in the year 3015 and none of them feel oppressed, I think it’s kind of an idle question in our lifetimes?
If by paganism you mean “Wicca”, then I’m afraid this is wishful thinking. Certainly some folkways with religious overtones have pre-Christian origins, and genuine indigenous paganism held on in some obscure corners of Europe, but Wicca and modern Druidism both have mid-20th century origins and have a lot more to do with the anthropological theories of that era than they do with aboriginal European religion.
I agree. I was just pointing out that there’s still a dearth of noncontroversial premises here that detracts from what I presumed to be E. Harding’s point w/r/t whether the absence of racism/poverty would lead us to deemphasize harm/care values. I don’t agree with the Wiccan; I just note that the Wiccan disagrees with E. Harding.
I mean, the lack of people living in poverty might be a useful indicator? Like, right now, “homeless” and “starving to death” are things that really happen. I’d go so far as to argue that people working 80+ hours a week, suffering from malnutrition to make food go further, etc. are also pretty good indicators of poverty.
As far as racism goes: blacks have double the unemployment, and make less money when they do get a job. There’s reams of studies showing they get pulled over by cops more often. So it seems like either racism is alive and kicking, or the racists were right about black people being inferior -.^
“I’d go so far as to argue that people working 80+ hours a week, suffering from malnutrition to make food go further, etc. are also pretty good indicators of poverty.”
-This does not happen in the First World.
As there is much institutional anti-racism around (cf., Tokowitz, Bollea), I have to go with the latter explanation.
How many homeless people are homeless from primary poverty, rather than a preference for spending their money on heroin?
No, no, no. When the supply of pagans ran out, we switched to heretic-hunting! 🙂
Poverty is conquered in the US? I will inform the guys sleeping in the street who I pass every morning.
Most poor people in the U.S. are either disabled, taking care of a child, retired, or students. Last time I heard, being disabled severely hurts your productivity in any situation, and the other three aren’t mandatory for anyone. I got this from an AEI blog post:
I consider First-World poverty effectively and largely solved.
Sure, involuntary unemployment exists, but the poverty rate in 2009 didn’t skyrocket as compared with that in 2008.
E. Harding, those people are poor even if there is an explanation for their poverty. It is inescapable that “being disabled severely hurts your productivity”, but not inescapable that this requires you to be poor.
(It might be effectively inescapable for now if, e.g., the only possible measures that would stop those people’s lives being miserable would wreck the economy by destroying incentives to work or running out of resources or something. If so, that would mean we aren’t ready to conquer poverty yet, it wouldn’t mean it’s already conquered.)
Taking care of a child and being retired aren’t exactly mandatory. But some people conceive children when they have no good reason to expect that that will leave them in poverty, and then suffer misfortunes that have that effect; and many people find that beyond a certain age it gets very hard to find anyone willing to employ them.
It is in fact inescapable that being unproductive will cause you to be poor. American poverty is, by definition, low market wages. With a few exceptions, non-market income is mostly excluded from the definition of poverty.
Having low market wages does not imply any lack of consumption or disposable income. The stereotypical poor household, a single mother with child, will have consumption of at least $20k/year regardless of market income.
Chris, I think you are misunderstanding me (perhaps deliberately to make a point?).
Having low productivity doesn’t inescapably require you to be poor because there are ways of not being poor that don’t depend on your productivity.
If your family is very rich and has something like the usual attitude to inheritance, then you can be as unproductive as you please and you will not be poor.
If you are foolish enough to play the lottery, lucky enough to win the jackpot, and then sensible enough not to squander your winnings, then (at least for some lotteries) you can be as unproductive as you please but not poor.
If your unproductivity is the result of disability which is the result of an accident, and you got a big insurance payout or a big negligence lawsuit win or something, you need not be poor.
In some hypothetical post-scarcity society, no one need be poor (and perhaps human “productivity” will no longer matter at all).
In a society that isn’t post-scarcity but is much more redistributive than, e.g., that of the USA, prima facie “unproductive” people needn’t be poor (but one might argue that extremely redistributive societies can’t work well, hence my second paragraph above).
These scenarios, and doubtless others you can readily think up, are why I said that being “unproductive” because of disability doesn’t *inescapably* lead to poverty.
(I do not think that poverty is “by definition” low market wages, and if anyone has adopted such a definition then they have made a mistake. If you have, say, $10M in the bank then you are not poor even if you neither earn nor could earn any wages at all.)
@g, how do you define poor?
If you define it the way the US government does, then I’m right – by definition low productivity => poverty.
Now I guess you disagree with this definition. So how do you define poor? My attempt at a charitable reading of your post suggests something along the lines of low consumption, or perhaps the lack of some specific goods or services. Am I misreading?
If you define poverty this way, then the US has virtually no poor people. Again, read my John Cochrane link, or BLS consumption statistics, or the American Housing Survey, or any stats you like – the people with low market income aren’t materially suffering. The only thing they lack is status.
Well since I mentioned homelessness earlier, would homelessness be a sufficient criterion for poverty? Wikipedia has lots of conflicting estimates but it seems like 1.5-3.5 million people in the United States will experience homelessness for at least a few nights within a given year, so something in the rough neighborhood of 0.5-1% of the population. I wouldn’t call that “virtually no” people. (And when another recession comes the number will of course rise.)
That’s a very conservative definition of poverty; I think (hope?) that most people would define the poverty line considerably higher than that.
Homeless for a few nights is not necessarily poverty. Homeless for a few nights is a man choosing to live out of his car while pursuing job opportunities in a city he can’t presently afford to live in. Homeless for a few nights is a woman who needs a few days to set up alternate living arrangements after walking out on an abusive husband. These people are more likely to be poor than rich, but the most that can be said with any certainty is that they have poor credit.
Long-term homelessness is generally associated with poverty, but it is also generally associated with the sort of mental illness that prevents people from e.g. filling out the “give me free housing as I can’t afford to pay for housing” forms. So long as such people exist, it may be technically true that we haven’t conquered poverty but it is more informative to say that we haven’t conquered mental illness.
Yes, I will grant that not every single one of that population is “poor,” but it’s going to be a rare non-poor person who chooses a homeless shelter/the streets over a budget hotel when they have a temporary lack of housing.
@Mark Atwood I’m not sure if you’re trying to provide anecdata that this is a common thing that would skew the statistics, or just telling a story? I’d expect that someone in your situation wouldn’t have been counted at all. Some of the studies seem to be based on use of homeless services (people coming to homeless shelters and the like), and they would have properly ignored you. I guess it’s possible other methodologies might have picked you up as a false positive but I would hope not – beyond my google-fu in the time I’m willing to allot to it.
Chris, I don’t know that I have a precise definition of “poor” (or of any other word other than technical terms), but it would be something like this: You are poor in so far as shortage of money makes your life unpleasant relative to some contextually given baseline. In the present context the baseline might be something like the median US citizen in late 2015. The median US citizen in late 2015 has easy access to reasonably tasty and nutritious food, somewhere to live with no more than about two people per bedroom, a car to get around in, clothes for their children that won’t get them laughed at in school, etc. In this context, someone who is homeless because they can’t afford anywhere to live is “poor”; someone who misses meals because they can’t afford to eat regularly is “poor”; someone who lives in a place where it’s difficult to get by without a car but doesn’t have one is “poor”, etc.
A little over 5% of the US population (so, somewhere around 15 million) have “very low food security” as defined by the USDA; that means affirmative answers to >= 6 out of 10 questions like “In the last 12 months, were you ever hungry but didn’t eat because there wasn’t enough money for food?”. I don’t know exactly how well this definition captures actual hardship, and no doubt some of the people who answer yes to these questions will be comfortably off pedants who skipped lunch once because they didn’t have any cash in their wallets or something, but I’d be quite comfortable betting that at least, let’s say, 2 million of those 15 million people are in fact suffering (hunger, poor nutrition, anxiety about keeping their family fed, …) for want of money to spend on food.
There are apparently about 600k people homeless in the US on any given night. About 100k are “chronically homeless”. That’s a small fraction of the US population but it’s an awful lot of people. And, again, it doesn’t seem credible to me to say that these people are not suffering anything worse than low status on account of their lack of money.
“Poverty is conquered in the US? I will inform the guys sleeping in the street who I pass every morning.”
But the streets are paved.
And not literally full of horseshit
… and? They could be paved with solid slabs of diamond, it wouldn’t make the people sleeping on them any richer.
I’ve done work with the homeless. Most of them <a href="http://newyork.cbslocal.com/2013/03/26/man-who-claimed-to-be-homeless-defends-accepting-boots-from-cop/">have places to stay</a>, or did, and family to take care of them.
What largely happens – and this is also what happened to a couple relatives of mine – is that their dependence on alcohol/drugs and/or other mental health/behavioral issues steadily increases to the point where they are continually harming the people who have been supporting that person. Theft is fairly universal, as is failure to keep up even the most minor of contributions to the family group.
These people are not on the street because they are impoverished, because no amount of money could keep them from getting back on the street.
This is interesting. When you say “no amount of money could keep them from getting back on the street” were many of the people you worked with actually rich? I don’t see why someone with money in the bank would respond to the loss of their familial support by sleeping on the streets rather than in a hotel or something. Unless the alcohol/drug/etc. issue in these cases is always of the sort that also means they’re going to get turned away from a hotel too?
It feels to me that if impoverishment doesn’t force people onto the streets in itself because of available support etc., it’s still likely that living on the streets is caused by the combination of behavioural/addiction-type issues with lack of cash/credit. It’s telling when you say theft is universal – presumably there would be less incentive for theft if you were rich!
“Have you ever noticed how much more virtuous rich people are than poor people? Poor people shoplift all the time, but rich people almost never do.”
A contrary claim:
“Fourthly, thieves and non-thieves have similar earnings during the years of peak theft activity . . . Theft in the United States thus appears to be substantially a phenomenon of individuals entering a temporary period of intensified risk-taking in adolescence.”
I don’t want to overgeneralize, but this suggests to me that your dialogue ignores the possibility of malice as a motive for bad behavior.
[Edited to make more direct, polite, useful.]
Just last weekend I was again watching the episode of Planet Earth where Attenborough explains how a rainforest’s ecology is stabilized by disease: if any one insect species becomes too numerous then that species’ specific fungus is able to rapidly spread and bring the population back down.
He maintains a remarkably composed voice throughout; calmly explaining that “these attacks do have a positive effect on the jungle’s diversity”. I wonder what fraction of his audience is instead thinking about the last few centuries’ order of magnitude increase in the human population and wondering how expensive a biowarfare-proof bunker would be.
Why would a numerous species be more vulnerable to a species-specific fungus? The only thing I can think of is with more individuals, there are more opportunities for the fungus to mutate into something really awful.
Population density increases the number of available vectors for a disease to spread.
In re evolution, disgust, and food taboos: Many (most? all?) cultures have food taboos, but those taboos don’t converge on the same foods. I’m inclined to think food taboos exist at least as much for group bonding as for safety.
Can you elaborate on that? Because people often use “group bonding” as a mysterious answer which can “explain” any human behavior they can’t explain any other way. Examples: laughing, music, ritual/religion.
Can’t speak for Nancy Lebovitz, but the first example I can think of is kashrut. Plenty have argued that not eating pork, e.g., was adaptive b/c it prevented trichinosis, etc., but the “costly signaling leads to more cohesive in-group” aspect seems hard to deny, and indeed the more or less explicit purpose seems to have been to get the Israelites to be and to feel separate from neighboring pagans.
ETA: Or take table manners. Seems like they exist partly to signal social class in-group status. Our sort know which fork to use, a good Confucian doesn’t use a knife at table, etc.
Often people say that food taboos exist to discourage socializing with outsiders. That promotes group bonding by default.
By far the best take on that kind of theory I have seen is to be found on the Other Best Blog on the Internet.
Nice article. Although I’m not yet convinced by this particular theory, this is just the sort of thing I’m looking for when I ask for a more explicit explanation.
This is a good point.
I was just thinking about the ubiquity of suits and ties across many different cultures the other day. Now I’m wondering if “objective fashion” might be a thing.
I think you could make a legitimate case for it in the same way that it makes sense for us all to drive on one side of the road. It doesn’t matter which side you drive on; it does matter that everyone in the same area drives on the same side. There might be a larger category that includes ties, and it was important that we converged on the same member of the category, but which member doesn’t matter.
Anecdotally, suits are a bad example. They are really uncomfortable in warm climates, so their being the global norm is IMHO rather unfortunate. (I’m an attorney in Texas, and I’d be a lot more comfy in the summer if the globe had been conquered by natives of Hawai’i instead of natives of Britain.)
“Also, well, humans, ah… leak. It is much easier to do laundry, than it is to keep washing the upholstery. (Nudists carry towels around with them at their social venues because, well…)”
Really? That makes nudists look pretty irrational. If you freed up the arm you were carrying that textile with by wrapping it around your waist, you’d have a rational sarong.
Also, not every advanced culture with a deep history has preferred full-body clothing. Hindus going topless is associated with purity. Yes, sewn upper-body garments are now common, but they have some negative associations. Gandhi was pretty clearly deriving symbolic power from running around in just a dhoti.
Re: suits and ties as the global norm
That’s the contrary point to what Berenice says about “If you think about it, practically every item of clothing has become less ornate.”
Less ornate, perhaps, but still plenty of rules. “Call me Bob” CEO/co-founder of Silicon Valley Big Tech Beast and Joe “Code Monkey” may both be wearing T-shirts and jeans, but I am willing to guess that Bob’s T-shirt and jeans are not from (um, what’s your equivalent of Dunnes Stores? Wal-mart? Target?) anyway, not ‘cheap chain store off-the-peg’ gear.
Designer labels, cut of the jacket, material, etc. are all ways that ornament and colour may have been removed from men’s fashions, but signalling about wealth, taste and status still remain.
No more mysterious than the ubiquity of coca cola. The UK and US conquered the world and replaced their elites with people who wore suits.
Is there a list of newly banned/tabooed words posted, somewhere?
I feel like I have missed something when Scott and other people here use “taboo” in a way that seems to mean something related but specifically different to its everyday usage… isn’t taboo when it just becomes an unspoken dislike of a group for something to be mentioned, rather than something that’s explicitly not allowed?
There is a party game called Taboo, which consists of talking to people about something without using a set of predefined taboo words; your teammates have to guess what you are talking about.
The game encourages creativity and clarity of speech, and I at least found it very educational. It’s extra fun when played in a second language.
oooooooooooooohhhhhhhhhhhhhhh. Now I get it. Thanks.
Also Sprach Yudkowsky:
I like this.
But suppose that they both dereference their pointers before speaking
If a tree falls in a forest and the only person in the vicinity who sees the tree falling is a deaf person, does the tree make a sound? 🙂
Scott, have you read The Better Angels of Our Nature by Steven Pinker? It seems like he’s coming at the same question from the other direction.
I think he has:
Re: purity. A complication.
The left/Blue Tribe/whatever doesn’t have racial purity taboos (except against icky privileged people, maybe?) but there’s a lot of food taboos about things being organic, or local, or grass-fed, or fair trade, or vegan, or gluten-free, or raw, or….
I long ago read a book where this guy said that he knew a lot of ex-hippies who really noticed how a tiny quantity of drugs could radically affect them, and then went on to be really super-obsessed with the purity of their food and water and environment based on that experience.
I’m not sure how to project organic food onto the purity axis. Just looking at it it’s more disgusting, more likely to be misshapen, discolored, etc. It takes a bit of sophistication to think it’s more pure.
Recall that blue-tribe people are more likely to have “quirky” (aka visually unattractive) things in their homes, more likely to adopt 3 legged puppies etc.
Maybe this is a “blue tribe more comfortable with things s1 says are impure, even if they prefer things s2 says are pure” thing?
A hair shirt is more disgusting than a silk robe, but with the right mentality it’s purer.
“Maybe this is a “blue tribe more comfortable with things s1 says are impure, even if they prefer things s2 says are pure” thing?”
I assume that means the blue tribe is in favor of nuclear energy.
That’s just a different definition of “purity”, though: “it’s natural, it’s not loaded with artificial chemicals or sprayed with pesticides and herbicides; the unevenness and discoloration and so forth shows it has not been tampered with; what comes out of the earth is purer than what has been processed by industry”.
I’d also argue that there are a lot of linguistic purity mandates; using the right terms in the right way, using specific terms of disfavour (tell me how “homophobe” as a signifier of “this is someone who has disgusting attitudes and beliefs which are repugnant and repellent to all right-thinking people” is all about being concerned on the Fairness axis rather than the Purity axis?), tone policing and so forth.
> doesn’t have racial purity taboos (except against icky privileged people, maybe?)
Cf. The recent discussion of whether, if a political candidate mainly attracts the support of white voters, that proves that the candidate is racist. (On the other hand, if a candidate mainly was supported by blacks, presumably that would prove that the whites who failed to support him were racist, or something…)
… This is why I can never understand American race politics. If one group supports a candidate that proves the candidate is bad, if the other group supports him that means the supporters of other candidates are bad? Is that the thing?
Moral realists, here’s a question. Let’s say that we discovered aliens. Thousands of different intelligent species with morality that is completely different from the morality people on Earth follow. Would you consider that evidence against moral realism and change your views accordingly?
Or better yet, let’s say that all of these aliens had the exact same morality but it included the idea that slavery is not only acceptable, but mandatory. We’re the odd ones out so obviously they have a stronger claim to being ethically right. So should we bring back slavery because that is the morally required position of every other species?
Virtue ethicist here. I view moral goodness as an index of the conduciveness of habitual actions and character traits to being a flourishing member of one’s species. Eating meat makes a dog healthy, but it’s bad for cows, etc. So it would depend on the axes along which their morality differed. If they were antlike colonial eusocial sorts, then their valuing individualism less than we do wouldn’t trouble me. If they reproduced by having their eighth gender get killed by their sixth, their different attitudes toward murder would be understandable, too, and might be okay for their own species.
Now, if they differed on something I consider non-negotiable, and that I think should be a universal for the flourishing of any rational animal, I’d either change my mind about the non-negotiability, or just consider them culturally or congenitally depraved, like a militarist nation on earth or an ethnicity that was all sociopaths due to some mutation.
ETA: Missed the slavery question. I don’t think an argumentum ad numerum would be valid even if the numbered were species rather than individual humans. Back to the flourishing criterion: a majority of humans seem to flourish around “room temperature” (about 73F/23 C or thereabouts, although I like it a bit cooler), but we can assume for argument’s sake that Inuit folks flourish when it’s way colder than that. The fact that the Inuit condition for flourishing is a minority preference wouldn’t make it wrong.
Likewise, maybe all the other sentient species are antlike or something, so slavery works for them. We’re primates, and it makes us miserable. No real implications for moral objectivity there. Or for relativism. It’s just a pretty contingent difference is all.
“Virtue ethicist here. I view moral goodness as an index of the conduciveness of habitual actions and character traits to being a flourishing member of one’s species. ”
You’re sounding rather consequentialist, in fact, there.
No, he’s talking about habitual actions and character traits, not instances of the execution of an algorithm. The virtue-ethicist answer to the trolley problem is “What? Go outside.” (This is the Correct answer.)
What kind of behaviour does a virtue ethicist programme into a self driving car?
I’d lean towards harm minimization behaviors, the same as I’d do if I were driving the car and had to select between two bad choices (and could think fast enough/had enough time to make such a decision).
You could say I’m behaving like a consequentialist; I’d say this is missing the point in favor of trying to make a point, because internally, it might be represented as “Good people do as little harm as possible”. Yes, different ethical frameworks can come to the same conclusion, and even for similar reasons. The concerning cases are where they don’t.
>Moral realists, here’s a question. Lets say that we discovered aliens. Thousands of different intelligent species with morality that is completely different from the morality people on Earth follow. Would you consider that evidence against moral realism and change your views accordingly?
If I encountered thousands of people who believed 2+2=5, would that be evidence that 2+2=5?
>We’re the odd ones out so obviously they have a stronger claim to being ethically right.
Why does us being the odd ones out mean they have a stronger claim to being ethically right? Most moral realists don’t think the consensus of people is the determinant of morality.
You seem to be confusing the map and the territory. What people believe about morality =/= morality, just like what people believe about mathematics =/= mathematics, and what people believe about physics =/= physics. Most ethicists, I imagine, are interested in what morality is, not just what people believe about it (this is one reason why many philosophers look down on x-phi).
>If I encountered thousands of people who believed 2+2=5, would that be evidence that 2+2=5?
If we had no way of verifying the claim, then yeah. Luckily we have created various devices that perform and allow us to verify mathematical operations, from the abacus to the CPU.
No such device exists for moral facts, and moral realists are conspicuously not trying to build one of them.
2+2 = 5 is a statement to the effect that things are not themselves, or that x = not x.
Is that a statement for which empirical verification is necessary/possible?
This seems like a strange line of argument. Our knowledge of mathematics precedes our having built abacuses, calculators, or CPUs, and certainly such tools are not the truth makers of mathematical statements.
Indeed, if a calculator output 5 as the result of 2+2 we would say the calculator was defective, not that 2+2=5 was true. This seems to cast doubt on the idea that our tools verify our mathematical knowledge, we built them to do computations we already knew the answers to, they don’t generate new mathematical knowledge.
We seem to lack “verification” of mathematics the same way we lack verification of morals. Unless you want to argue mathematics isn’t real this argument doesn’t seem to get off the ground.
“We seem to lack ‘verification’ of mathematics the same way we lack verification of morals. Unless you want to argue mathematics isn’t real this argument doesn’t seem to get off the ground.”
This is a strong argument against materialism. Even Quine’s naturalism (“our best theories are our best scientific theories. If we want to obtain the best available answer to philosophical questions such as What do we know? and Which kinds of entities exist? … we should consult and analyze our best scientific theories. “) basically concedes that numbers are real entities, that Plato and Goedel were right.
If a calculator said 2+2 = 5, and then another, and then my colleague confirmed that indeed, 2+2 = 5, and then, upon opening it back up, my old math book from elementary told me that 2+2=5, I would probably stop thinking “2+2=4” and instead start thinking “I am wildly confused – do I have a head injury I am unaware of? I need some serious help, what is the phone number of a good therapist.”
“This is a strong argument against materialism.”
When the argument from abstract truth to immaterialism is fully spelt out, it involves a contestable claim that all truth is truth by correspondence.
If you’re a materialist what other theory of truth would you subscribe to? It seems like believing everything is material implies believing things are true in virtue of there being some material conditions that obtain.
@Shieldfoss: Have you read Descartes’s Meditations?
@AncientGeek: Surely correspondence theory of truth is the one most congenial to materialism? One type of objection supports certain forms of idealism. Then there’s more important objection, the vague-or-circular dichotomy:
“If no theory of the world is offered, the argument is so vague as to be useless or even unintelligible: truth would then be supposed to be correspondence to some undefined, unknown or ineffable world. It is difficult to see how a candidate truth could be more certain than the world we are to judge its degree of correspondence against.
On the other hand, immediately the defender of the correspondence theory of truth offers a theory of the world, he or she is operating in some specific ontological or scientific theory, which stands in need of justification. But the only way to support the truth of this theory of the world that is allowed by the correspondence theory of truth is correspondence to the real world. Hence the argument is circular.”
… condemns the scientific materialist at least as much as the proponent of any other theory.
@Le Maistre Chat:
I think I have, in philosophy class, but if I have, it has been over a decade.
“If you’re a materialist what other theory of truth would you subscribe to?”
I don’t have to subscribe to a one-size-fits-all theory. I can avoid immaterialism by adopting a formalist/conventionalist theory of truth about maths specifically, whilst retaining a correspondence theory about observation sentences.
@Shieldfoss: I was just thinking that Descartes’s demon functions very much like your hypothetical brain disorder.
“Is 2+2 4 or 5? I could swear it was 4, now everything says 5… what can I know if I have a delusion so basic?”
@TheAncientGeek: That’s an epicycle. (frowns)
How does this change if we consider the four colour theorem instead? We have a way of verifying that claim. But if I found many, many people who thought it wasn’t true, that would at least shift my opinion, because the verification contains the possibility of error (when humans do it).
But morality does depend on what people think, if you’re an ethical intuitionist. The idea is that morality is objectively real because we all have the same basic intuitions (psychopaths aside). If we’re the only species that has these intuitions, how could you possibly believe that our intuitions are superior to theirs?
Let me ask this more directly: What kind of evidence would it take for you to not believe in moral realism?
No, Wrong Species, that’s not what ethical intuitionists believe. Or, at the very least, it is not what Huemer believes.
I am not an ethical intuitionist, but they think the causation is the other way around. That is, we all have the same moral intuitions because of the moral facts being true independently. It’s the same reason we all have the same beliefs about whether you will die if you shoot yourself in the head: because the same reality is operating on all of us.
Your point still stands about other species having different intuitions. However, this doesn’t necessarily show that morality is not “objective” (in the sense Huemer means, which I would call “intrinsic”). It could also mean that morality is objectively agent-relative. What’s good for the space-goose might not be good for the space-gander, with both of them being able to discover and realize this without having any conflicting beliefs.
This is what’s bugging me though. There is basically nothing that will convince the moral realist that they are wrong. They believe that their intuitions are right, and nothing could conceivably convince them otherwise. Usually when someone makes an argument that can never be refuted no matter what, that means there is something wrong with it. And of course I can’t prove them wrong (in the same way I can’t prove God isn’t real), so there isn’t really any way to argue against them. It’s a very frustrating experience.
@ Wrong Species:
I agree. It is very frustrating.
Huemer doesn’t say he can’t be wrong, though. He does leave ways to argue against him. All he says is that it very much seems to him that ethical intuitionism is true. And he combines this with a principle of “phenomenal conservatism” that argues we ought to go with how things seem unless we have a reason not to.
His process is to refute the other theories of ethics, leaving ethical intuitionism as the most plausible.
His argument against “nihilism” is very weak, though. In a paper blacktrance links below, Richard Joyce challenges him. He essentially says, “All right, it seems to you that categorical morality is objectively true. But it honestly doesn’t seem that way to me. So now you have to produce an argument if you want me to believe it. Moreover, my theory explains why your intuitions would be strong, even if they are false.”
But are you willing to run the experiment in the opposite direction? What if we discover an alien species that has substantially the same morality as us?
(It’s a trap. This scenario has already played out many times in the past with different human cultures.)
As I said upthread I don’t think the experiment has any bearing on the truth of moral realism. I also don’t think the non-realists believe it has any bearing on it either.
I suspect if we did run the experiment the other way non-realists would not update in favor of moral realism, rather they would posit that the aliens were under evolutionary selection pressures substantially similar to ours and so developed a similar idea of morality.
“Evolutionary selection pressures did it” almost seems to have become a Fully General Counterargument at this point.
If thousands of intelligent alien species living in completely different environments with different evolutionary pressures managed to have the same intuitions as humans, I would probably consider that substantial evidence that moral realism is true. (Of course, it’s possible that some alien species actually seeded life billions of years ago to come up with the same ethical intuitions, but that doesn’t seem nearly as likely.)
>Or better yet, let’s say that all of these aliens had the exact same morality but it included the idea that slavery is not only acceptable, but mandatory. We’re the odd ones out so obviously they have a stronger claim to being ethically right. So should we bring back slavery because that is the morally required position of every other species?
I’d treat it the same way as any other situation where society demands I tolerate something that I personally find disgusting. Which is to say that I’d probably try to avoid it and get on with my life, or maybe rebel if I thought it important enough – but I’d accept that as evidence that maybe I was crazy.
(Of course in any realistic scenario the first thing I’d do is ask them to explain, and listen to and evaluate their arguments)
“Or better yet, let’s say that all of these aliens had the exact same morality but it included the idea that slavery is not only acceptable, but mandatory”
Please be more precise. Who are these slaves? Criminals whose price was used to compensate their victims? Under what circumstances did they come to be enslaved?
Heck, what if God showed up and said, “I want you to sacrifice babies to me on every full moon”? Would we be able to say he was wrong to ask this? I think we could. This is a strong argument against divine command theories of morality (without necessarily implying atheism: God may exist, but even if he does, the fact that we can imagine him asking us to do something immoral means our notion of moral isn’t coming exclusively from him), but I think it’s also an argument against morality being species or context dependent: we can imagine a being infinitely smarter and more powerful than us asking us to do something wrong, or, indeed, even doing something wrong himself (though monotheists might have a harder time with the latter).
How does one determine whether this being is God?
I’d assume God probably has a good reason for it. Y’know, since we’re assuming that I have evidence this is actually God.
(Although, if I wasn’t certain, that would be strong evidence that this isn’t God. There are several Bible passages about messages claiming to be from God that contradict His will, and the gist is “don’t follow ’em.”)
If a Friendly AI told you you needed to sacrifice a baby every full moon, would you do it?
I would assume that this is not God, as the “sacrifice babies” requirement was pretty much rejected in my faith a long time ago. So based on this prior evidence, and given the long history of false gods, it would be most rational to assume that this was yet another pretender whose instructions should be ignored.
But didn’t Abraham assume it was God asking him to sacrifice his son? He didn’t say “go away, foul demon, for my god would never ask me to do such a thing!” And the general tenor of the Old Testament is not such that it leads me to believe it would have been a good idea to do so.
Right off the bat, I need to ask: When you say “wrong”, what does that mean? I have my definition, but what do you mean?
>the fact that we can imagine him asking us to do something immoral means our notion of moral isn’t coming exclusively from him)
Right, your moral intuition tells you it’s wrong to sacrifice babies – but where did the moral intuition come from? If the bible is to be believed, it came from God as well! c.f. Romans 2:14-16.
>14 For when Gentiles, who do not have the law, by nature do what the law requires, they are a law to themselves, even though they do not have the law. 15 They show that the work of the law is written on their hearts, while their conscience also bears witness, and their conflicting thoughts accuse or even excuse them 16 on that day when, according to my gospel, God judges the secrets of men by Christ Jesus.
But if you do not believe this, my question is this: why do you trust your moral intuitions at all? Doesn’t it just boil down to some “bad feeling” you have? How would it be different from being squeamish at the sight of blood? Couldn’t the intuition be mistaken?
>Right off the bat, I need to ask: When you say “wrong”, what does that mean? I have my definition, but what do you mean?
Not onyomi, but I suspect wrong means something you shouldn’t, or ought not, do. If you want to taboo “ought” I’m not sure that’s possible. Prescriptive statements don’t seem reducible to descriptive ones, this is what the is/ought gap is all about.
>But if you do not believe this, my question is this: why do you trust your moral intuitions at all? Doesn’t it just boil down to some “bad feeling” you have? How would it be different from being squeamish at the sight of blood? Couldn’t the intuition be mistaken?
Sure, our moral intuitions can be mistaken just like any other intuition can be mistaken. The problem is to actually show that those intuitions are mistaken. Most of the arguments made against moral intuitions apply equally well to intuitions we don’t seem to want to abandon.
Take for instance the law of non-contradiction: P ∧ ¬P is always false. Presumably we all believe this is true, but what would verify that it was true? How do we know that it’s true? I would argue the best we can say is that it seems to us to be true.
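For what it’s worth, the point survives even a brute-force check. Here is the law as a two-line truth-table verification, which, of course, presupposes the very logic it is checking:

```python
# Brute-force truth-table check of the law of non-contradiction:
# "P and not-P" comes out false under every assignment of P.
for P in (True, False):
    assert (P and not P) == False

# Note the circularity: the check itself relies on the Boolean logic
# (and the reliability of the machine running it) whose correctness is
# at issue.  In the end, we accept the law because it *seems* true.
```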
This is the basis of Huemer’s phenomenal conservatism, that seemings provide (defeasible) justification for belief. As a defense of moral intuitions, the idea is that most/all arguments that would get us to jettison moral intuitions also work as arguments against epistemic or other intuitions that we don’t want to get rid of.
Is God just a being infinitely smarter and more powerful than us, or is God the Ideal of Correctness?
If sky-dude comes down and tells you how much a kilogram is, he might be wrong. If the cylinder by which the kilogram is defined tells you how much a kilogram is…
In your analogy, God would merely be the definition of some ethical standard, not the ethical standard. People are, after all, perfectly free to define a kilogram however we like, or use Imperial.
I don’t know if you’ve read it, but if you haven’t, Three Worlds Collide is EY’s take on a very similar scenario.
For that matter, what you’re describing is basically the same problem as unfriendly AI: you could just as easily have made the alien species’ value be paperclip-maximization instead of slavery. I think the correct response in both cases is to stick with human values, even if they’re on some level arbitrary.
I think you’re begging the question by referring to what the aliens believe in as “different morality.” By using the term “morality” to refer to whatever this different thing the aliens believe in is, you are smuggling in the implicit assumption that morality is relative and subjective, and that anything can be morality if someone believes it.
The latter case, where the majority of intelligent creatures that exist believe in slavery, sort of describes reality, if you take a “timeless” view of the term “exist.” The majority of people who have ever lived have considered slavery to be acceptable (at least since agriculture), and the dead far outnumber the living. I don’t think any realists consider all the dead pro-slavery people to be a good argument in favor of slavery. If you count dead people, we are the odd ones out, yet no one considers this a good pro-slavery argument.
Similarly, most people who have ever lived have had different moral beliefs from most moral realists. But most realists maintain that is because one of those sets of moral beliefs is mistaken.
“Thousands of different intelligent species with morality that is completely different from the morality people on Earth follow. Would you consider that evidence against moral realism and change your views accordingly?”
Objective morality can be a mapping from local conditions to moral truth, since local conditions are an objective fact. Also, realists are entitled to say that others are simply wrong, although there are problems in doing so too glibly.
Realist here: that’s a confused question. The natural properties mapped out by our moral theories, and the ways they provide us ethical motivation, can and will vary from species to species. Each of us, given full knowledge of the other’s psychology, would have to figure out what sort of relations between humans and aliens best serve our objective interests. If those sorts of relations differ and no compromise can be reached, there will be conflict.
Did you expect to find xenos who would be exactly like us? Who said that an accurate map should describe vastly different territories?
NOW SUBJUGATE THE FILTHY XENOS IN THE NAME OF OUR GLORIOUS GOD-EMPEROR OF MANKIND!!!!! /s
Depends where morality is “located”. Some MRs believe that it’s “located” in intelligence or “pure logic”, and any intelligent creature will converge on the correct morality after thinking about it for a bit; some believe it’s located in our common heritage (whatever they believe that heritage to be.)
Pretty much everyone agrees that evil people exist, though, so I don’t see why a whole planet of evil people would change that much.
I agree entirely with the general thrust of your objections to Huemer.
In general, I like Huemer’s (and Bryan Caplan’s: they are friends and generally think exactly alike on every issue) common-sense approach to philosophy. But he just doesn’t see how it completely fails when he wants to apply it to ethics.
His argument is really more basic than the derivative application you respond to above. He simply employs the G.E. Moore argument against skepticism, adapted to ethics. That is, take the proposition, “Genocide of 500 million people for no reason is evil.” No conclusion of any fancy argument you come up with will ever (allegedly) be as plausible as that. In order for an argument to refute a belief, the conclusion of the argument must be more plausible than the belief. Therefore, no argument will ever refute the objectivity of morality (and thus the statement that “Genocide of 500 million people for no reason is evil.”).
He then proceeds (in his book Ethical Intuitionism, which is very well-written) to demolish non-cognitivism, subjectivism (in the form that does not claim to be nihilistic), and reductionism considered as an attempt to establish objective morality. Since the other theories don’t work and since objective morality cannot be refuted, it must be the case that we “just know” certain moral beliefs. They are foundational and stand completely on their own. (And he is reasonable about this: he doesn’t say our “intuition” is always right. He says a particular intuition [e.g. killing is always wrong] can be refuted by countering it with a stronger one [e.g. the right to self-defense]; this explains how he thinks common sense defends the decidedly uncommon view that we should abolish the government. There is, furthermore, no single foundational intuition from which you derive everything else.)
The problem is that the theory to which he devotes a scant few pages is true: nihilism, or “error theory”. Or rather, in my view, nihilism is true in regard to categorical imperatives in ethics. (It is not true in regard to hypothetical imperatives.)
The G.E. Moore-style argument is refuted quite simply: by saying that it honestly doesn’t seem that way to me. It does not seem to me that “Genocide of 500 million people for no reason is evil”; at least, not in the sense Huemer means the word “evil” (as a violation of a binding categorical imperative). Like the concept of “God”, I have a good idea of what a categorical imperative would be, but there is no evidence that makes me believe such a thing exists in the world. And I can’t think of any evidence that would make me think so. (Miracles like the parting of the Red Sea would make me believe in the existence of a very powerful being, but not one capable of possessing the incompatible traits the Christian God is supposed to have; nor would it show that I ought to obey this being for any reason other than self-interest.)
“Genocide of 500 million people for no reason” seems evil to me in the sense that I do not believe it would further any goals I have (including most significantly the promotion of my own life and happiness); nor do I think it would further any reasonable goals that other people might have. Indeed, it would run powerfully against those goals. Anyone who wants to kill 500 million innocent people has pledged himself against all those people who value earthly life and happiness—and from that perspective is properly regarded as evil.
But unless one adopts a “perspective” by choosing a goal, there is no sense in which killing 500 million people is evil “from the point of view of the universe”. There may be (and I think there are) goals which are more compelling and choice-worthy than others, but there is no goal that is categorically and universally compelling—not even the choice to live. As Ayn Rand said, “Mister, there’s nothing I’ve got to do except die.”
By the way, I highly recommend this response to Huemer’s critique of the Objectivist formulation of egoism. It gets across well the completely arbitrary nature of Huemer’s viewpoint. (There is actually more in the old Usenet discussions, if you care to read them. The most prominent thing to take from them is that Huemer regards the evolution of human beings on Earth as an intrinsically good fact, while Lawrence—the Objectivist critic—takes the view which I consider obvious: that this is only so from the perspective of human beings.)
I partially agree with that, but I disagree with its determinism, materialism, and above all its general tone.
Life does inherently involve consumption and the continual input of resources. But life also is the wellspring of production and trade. Yes, life requires “aggression” against inanimate nature and against beings that threaten one’s existence—but this obscures the fact that human beings, as rational animals, are capable of developing civilizations in which their interests are more or less in harmony with one another.
And yes, the last paragraph expresses this, but it still seems to imply that there is something “crass” and “low” about all of it. The world and the people in it are nothing but “things”, etc. Obviously, everything is a thing, but not everything is “just a thing”.
Moreover, I fully agree with the Huemer/Caplanite position on metaphysical dualism and free will. Their arguments actually work when applied to that field. (The essential reason is that, there, their arguments are opposed to skepticism about all knowledge. Whereas “nihilism” about categorical imperatives is not a threat to other forms of knowledge.) For example, see Caplan’s knock-down argument for dualism as opposed to Searle-ism. Caplan terms his version of dualism (which, again, I think is the same as Huemer’s) “entity dualism”, and it is oddly underrepresented in philosophy (I can’t name anyone else who holds it). Yet it seems to be very sensible. At the very least, it is more sensible than substance dualism (as he says, typically motivated by a “hidden religious agenda”) and property dualism (which is thoroughly epiphenomenalistic).
I’m not usually a Caplan fan, but that’s a very nice little paper: crisp and clever.
The more insightful, current and academic part is at the end:
“I would be more than half-surprised if something like the true knowledge doesn’t develop, only I suspect the founders aren’t going to be Korean indentured laborers with nothing to read but Darwin, Marx and Stirner, but western academic evolutionary game theorists and behavioral economists.”
What has been shown by the game theorists is that to be evolutionarily stable – that is, dynamically persistent over time – societies require a composition of diverse strategies and temperaments. Some are, in the jargon, cooperators, some are defectors and some are punishers. It is the mix of those types that is stable, while any one or two of them is not. They do the math, and so conclude.
Behavioral economics is post-rational. It matters less that the medicine works than that the patient feels cared for. See http://bigthink.com/Mind-Matters/exploring-the-post-rational-21st-century for some intro to post-rationalism.
It is a leap to conclude that something like The True Knowledge will result, and yet I find it hard to simply dismiss.
Maybe morality is anti-inductive. Buy low and sell high. It sounds like Huemer’s personal ethics are currently overvalued. He’s gonna be hella disappointed when his credibility tanks during the next zeitgeist-crash.
Diversify your morality, kids.
It’s certainly interesting to think of morality in economic terms. You can use status as the currency and try to buy morality right before it takes off, to maximize status. More importantly, you need to sell before it goes out of fashion or risk being on the wrong side of history. I wonder what a “morality market” would look like. Maybe something similar to indulgences and ethical offsets?
In my head, a corporation maps to an ideology. So a market would map to the ethics literature.
If you think Huemer’s gonna rise in popularity, then you better take up his ideology quick — before it becomes too mainstream. Or if you think his argument’s fundamentals are flawed, it might be best to short him by passing him off as a clown.
Be sure to check the P/E ratio before making a hasty decision, i.e. the ratio between the price in weirdness and the dividends in savvy. And ask yourself how much volatility your identity can handle. Are you the contrarian looking for the next Liberal Democracy, or the blue-chip Golden Rule type?
If researching ethics consumes too much time and energy, one can always invest in an index fund like Common Sense, Family Values, or Best Practices. Just buy and hold. Or if you suspect a downturn, hedge in a little bit of Nihilism. It doesn’t manufacture any new insights, but it’s counter-cyclical and a safe bet.
I have radically different ethical views, but I nevertheless enjoyed and learned from this comment, which I thought was of very high quality. Kudos.
This paper by Joyce in response to Huemer may be of interest to you.
Thanks for the link. I read that one recently.
The best excerpt:
The G.E. Moore-style argument is refuted quite simply: by saying that it honestly doesn’t seem that way to me. It does not seem to me that “Genocide of 500 million people for no reason is evil”; at least, not in the sense Huemer means the word “evil” (as a violation of a binding categorical imperative). Like the concept of “God”, I have a good idea of what a categorical imperative would be, but there is no evidence that makes me believe such a thing exists in the world. And I can’t think of any evidence that would make me think so.
I would suggest that if you are committed to hypothetical imperatives being binding, you are committed to at least one categorical imperative being binding, namely one (along the lines of) ‘you ought to take the means to your ends,’ or ‘you ought to maximize expected utility.’ Otherwise you can’t get from the ‘is’ of ‘doing X will satisfy my ends’ to ‘I ought to do X’ or ‘it would be rational for me to do X.’
In general, I think that moral anti-realists, nihilists, etc. are too sanguine about the prospects of saving non-moral norms (e.g., epistemic, practical), given their professed reasons for rejecting the existence of moral norms.
Your point is well-taken, but I don’t disagree with it.
Yes, all you can show by hypothetical imperatives is that, “If you want X, it is rational to do Y”. Of course you can’t show that you ought to be rational. That would make everything categorical again, which can’t be done.
You choose to be rational—a reasonable choice—but not one that is absolutely compelling. And if you choose to be rational, then everything follows.
In any case, it is quite right that morality cannot become binding upon you without your making a freely chosen commitment. And hypothetical imperatives are, of course, only hypothetically binding.
I think it is very difficult to consistently maintain the nihilist position here, because so many terms of our language have normative content. For example, I don’t think you’re entitled to the term ‘rational’ if you reject categorical norms. To say that it is rational for A to X is a way of saying that A ought to X (or at least, the two are analytically connected). So I don’t think you’re even entitled to, “If you want X, it is rational to do Y”. If you want to avoid committing yourself to categorical norms altogether, all you can say is, “You want X. Y is a means to X. *nudge nudge wink wink*” and hope the agent “sees the connection.”
I don’t see the problem here.
Yes, the rational is always normative. It depends on goals that people have, including most basically: living. To say something is “rational” ultimately has meaning only in relation to getting something you want.
For example, I say it is irrational to hold that 2+2=5. Why? There’s nothing that absolutely forces me to conclude that 2+2=4. It violates the axiom of non-contradiction to hold that 2+2=5, but the axiom of non-contradiction isn’t the boss of me!
If you don’t choose to live, you need not think in accordance with logic (or think at all). But if you do choose to live, it is a brute fact that there is a certain reality independent of your will, and you must conform your thoughts and behavior to it. Reason is man’s mode of survival.
None of this means that reason’s conclusions are merely subjective, or that something could be rational for one being and irrational for another. If they are operating under the constraint of the same objective reality, conformity to this reality will demand they both hold that 2+2=4 (though it may, of course, be represented in many different ways).
Just to be clear, I’m not arguing that your view commits you to subjectivism about rationality; I’m arguing that it commits you to nihilism about it.
Let me try to make my argument more explicit.
(1) ‘If Y, it is rational to do X’ implies ‘If Y, you ought to do X.’
(2) ‘If Y, you ought to do X’ implies ‘Categorically, if Y, you ought to do X.’*
(3) ‘Categorically, if Y, you ought to do X’ is a categorical imperative.
(4) Therefore, if some sentence of the form ‘If Y, it is rational to do X’ is true, categorical imperatives exist.
(5) Hence, either categorical imperatives exist, or no sentences of the form ‘If Y, it is rational to do X’ are true. [from (1)-(4)]
This conclusion holds whether the conditions in Y describe external features of the world, someone’s desires, or anything else. So (I claim) one cannot consistently hold that no categorical imperatives are true but that some hypothetical imperatives are true, because if some hypothetical imperatives are true then some categorical imperatives are true.
Do you deny one of these premises? If so, which one?
*If there are scenarios in which ‘If Y, you ought to do X’ is false, then just put the negation of those into the antecedent, and the implication will go through. All I need is one such implication.
I reject premises #2 and/or #3, depending on how you interpret them.
All you are saying there is that a hypothetical imperative is a categorical imperative! But this is not true.
“If you desire X, you ought to do Y” can be looked at as a categorical statement, yes. But it is precisely that: a statement, not an imperative.
The form is [Categorically(If X, then Y)]; not [If X, then CategoricallyY]. There is a big difference.
The fallacy here is analogous to confusing the “necessity of the consequence” for the “necessity of the consequent”. For example, it is necessarily true that whatever happens, happens. Nevertheless, this does not show that whatever happens necessarily happens!
More formally, [Necessarily(If H, then H)] is not the same as [If H, NecessarilyH].
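The scope distinction can be put in standard modal notation (the formalization below is mine, not part of the original comment):

```latex
% Necessity of the consequence: a theorem of any normal modal logic.
\Box\,(H \rightarrow H)

% Necessity of the consequent: not a theorem; it fails at any world
% where H is true only contingently.
H \rightarrow \Box\, H
```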
And on a completely different note, even the “categorical” statement that “If you desire X, you ought to do Y” can be thought of as ultimately hypothetical. Because it ultimately goes back to, “If you accept the axioms of existence, consciousness, and identity, then you ought to believe XYZ.”
Thanks, that’s helpful. You are right that I am essentially saying that hypothetical imperatives are categorical imperatives. I don’t think there’s any interesting formal difference between the two categories, as opposed to a difference in content, namely, what people call ‘hypothetical imperatives’ usually mention desires.
One way to get a formal difference is, as you suggest, to define categorical imperatives as statements of the form
[If X, then Categorically(Y)]
and hypothetical imperatives as statements of the form
[Categorically(If X, then Y)].
The problem with this strategy is that then I can grant that there are no binding categorical imperatives, but say I don’t need them, because moral norms are all hypothetical imperatives. For example, the norm [If you promise to X, then X] is hypothetical in this formal sense. The only difference between that norm and [If you desire X and Y is a means to X, do Y] is that the latter mentions a desire and the former doesn’t. And what I’m denying is that that makes any difference to the philosophical respectability of the norms.
And on a completely different note, even the “categorical” statement that “If you desire X, you ought to do Y” can be thought of as ultimately hypothetical. Because it ultimately goes back to, “If you accept the axioms of existence, consciousness, and identity, then you ought to believe XYZ.”
I don’t see how this reduction goes, but at any rate this merely trades one “categorical” ought for another, in this case an epistemic one about what you ought to believe — which is no less problematic than “categorical” instrumental norms.
First of all, categorical imperatives need not be of the form, [If X, then CategoricallyY]. More typically, they can just be [CategoricallyY]. For example, “Thou shalt not kill”. It doesn’t say, “Don’t kill if you want to please God,” or “Don’t kill if you want to get into heaven.” It just says “Don’t kill, period.”
Well, yes. A “hypothetical imperative”, in the relevant philosophic usage, is not one that takes just any fact as an antecedent. It refers to one that takes a desire or goal as an antecedent. (You can call the other kind a “hypothetical imperative” if you want, but then we must say it’s just not the relevant kind of hypothetical imperative.)
The problem with an imperative on the order of [If you promise to X, then X] is that it provides no motive to act upon it. A desire, on the other hand, is ipso facto a motive.
To say this all more clearly, it is not the hypothetical form that makes the imperative binding. It is the fact that it hinges upon a desire that you already have. A categorical imperative is ruled out because it cannot hinge on such a desire, not because of its form per se.
It may be an epistemic imperative, but it is a hypothetical epistemic imperative. Your willing or desiring to think and to live comes first, and rationality consists of conforming your beliefs and actions to reality in such a way as to achieve your goals.
Think of it this way: forget “you ought”. Substitute: “it would be consistent to”. If you don’t want to be consistent, you don’t have to be consistent. Hell, if you want to be consistent, you don’t have (categorically) to be consistent. All the hypothetical reasoning shows is that you’re not going to get X unless you do Y. You are free to do with that knowledge as you please.
First of all, categorical imperatives need not be of the form, [If X, then Categorically Y]. More typically, they can just be [Categorically Y]. For example, “Thou shalt not kill”.
The pedant in me wants to reply that [Categorically Y] is equivalent to [If A = A, then Categorically Y], but sure, it’s helpful to make this explicit.
A “hypothetical imperative”, in the relevant philosophic usage, is not one that takes just any fact as an antecedent. It refers to one that takes a desire or goal as an antecedent. … The problem with an imperative on the order of [If you promise to X, then X] is that it provides no motive to act upon it. A desire, on the other hand, is ipso facto a motive.
So now we’re back to the difference between hypothetical and categorical imperatives being substantive and not merely formal, as you go on to point out:
To say this all more clearly, it is not the hypothetical form that makes the imperative binding. It is the fact that it hinges upon a desire that you already have. A categorical imperative is ruled out because it cannot hinge on such a desire, not because of its form per se.
Okay. So now we’ve got a principle according to which hypothetical imperatives are binding because of something special about their content — namely, they appeal to a ‘desire you already have’.
Here I have two worries. The first is that I don’t see why this matters. (I mean this in more than a devil’s advocate sense — I really don’t think you should always follow your desires, because sometimes you have the wrong desires.) How does the fact that doing Y will get you something you want suddenly attach this mysterious quality, to-be-doneness, to the act of Y-ing?
But second, it seems to me that appealing to a desire you have cannot in general be necessary for imperatives to be binding. For if it were, then ‘you ought to follow the hypothetical imperative’ would only be true if you have a desire to follow the hypothetical imperative. Not everyone has such a desire — I don’t, since I want to have what is really good, and not just what I believe to be good or what my desires represent as good: as Plato says, “Further, do we not see that many are willing to do or to have or to seem to be what is just and honourable without the reality; but no one is satisfied with the appearance of good – the reality is what they seek; in the case of the good, appearance is despised by every one.”
Even if you did have such a desire, this would quickly lead to a regress. But if ‘you ought to follow the hypothetical imperative’ can be true without you having a desire to follow it, then it seems there’s nothing mysterious about there being binding imperatives which do not appeal to a desire you already have.
Perhaps you think ‘you ought to follow the hypothetical imperative’ is false. But that looks inconsistent with thinking that in every instance in which the hypothetical imperative prescribes Y-ing, you ought to Y.
Hell, if you want to be consistent, you don’t have (categorically) to be consistent. All the hypothetical reasoning shows is that you’re not going to get X unless you do Y. You are free to do with that knowledge as you please.
This is the conclusion that I’m trying to get you to. I just want you to see that if you give up moral norms because they have some mysterious quality of applying to you in all circumstances, then you should give up epistemic and instrumental norms too.
“You ought to take the means to your ends” is sort of tautologically true – if you don’t, that just means that your supposed ends aren’t really your ends. The fact that you ought to maximize expected utility is baked into the concept of expected utility.
“You ought to take the means to your ends” is sort of tautologically true – if you don’t, that just means that your supposed ends aren’t really your ends.
First, people can be weak-willed. Introspectively, it seems that in such a case I have an end but do not seek it (or vice-versa). In addition, you can’t explain what’s wrong with this (even from a self-interested point of view) if you identify ends with what is actually pursued. Indeed, if people did necessarily take the means to their ends, then the advice to maximize expected utility would become useless as a guide to action, because it would be impossible to violate it.
Second, it is doubtful that any formalization of “you ought to take the means to your ends,” such as expected utility theory, can be used to empirically identify ends in functionalist fashion, because people often have, e.g., intransitive preferences, so that there is no set of preferences that can be attributed to them on which they maximize satisfaction of those preferences.
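The claim that intransitive preferences defeat any utility representation can be checked mechanically. Here is a minimal brute-force sketch in Python (the names `prefs`, `represents`, and `rankings_for` are mine, purely illustrative): a cyclic preference pattern over three options admits no utility ranking at all, while a transitive one admits exactly one.

```python
# Brute-force check: cyclic (intransitive) preferences cannot be
# represented by any utility ranking, while transitive ones can.
from itertools import permutations

def represents(ordering, prefs):
    """True if the strict ranking `ordering` (best first) agrees with
    every pairwise preference (better, worse) in `prefs`."""
    rank = {x: i for i, x in enumerate(ordering)}
    return all(rank[better] < rank[worse] for better, worse in prefs)

def rankings_for(prefs):
    """All strict rankings of A, B, C consistent with `prefs`."""
    return [p for p in permutations("ABC") if represents(p, prefs)]

cyclic = [("A", "B"), ("B", "C"), ("C", "A")]      # A > B > C > A
transitive = [("A", "B"), ("B", "C"), ("A", "C")]  # A > B > C

print(rankings_for(cyclic))      # [] -- no utility assignment fits
print(rankings_for(transitive))  # [('A', 'B', 'C')]
```

The empty result for the cyclic case is the point made above: there is no set of utilities the agent can be said to be maximizing, so their behavior cannot be used to read off their ends in functionalist fashion.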
Third, I don’t think “You ought to take the means to your ends” is tautologically true because I think it is false. For example, if you desire things that are bad for you, you ought not take the means to those ends. Perhaps you think I’m wrong, but do you think I’m wrong in virtue of not understanding the meaning of the word ‘ought’?
Weakness of will isn’t really a counterexample, because I’m not saying that people necessarily seek their ends, only that they necessarily will to seek their ends. I’m not identifying people’s ends with what they actually pursue, because what they pursue can be different from what they will to pursue. But it is really contradictory to say “X is my end, but I don’t will to achieve it” – if you don’t will to achieve it, in what sense is it your end?
As for whether you ought to take the means to your ends if your ends are bad for you, the analysis of that requires a somewhat more complicated conception of one’s ends. Rather than “you ought to take the means to your ends”, the true statement is “you ought to take the means that would be recommended to you by a rational version of yourself”. Presumably, your rational self wouldn’t want you to act in ways that are bad for you, and you can’t rationally reject seeking your rational version’s ends. And there’s a meaningful sense in which your rational self’s ends are your ends, though they aren’t necessarily what you currently desire. Also, if your rational self doesn’t will for you to act in ways that are good for yourself, then it really isn’t true that you ought to not act in a way that’s bad for you.
I’m not saying that people necessarily seek their ends, only that they necessarily will to seek their ends.
In that case this moves the problem back a step. ‘If you will to X, and Y is a means to X, then you ought to Y’ is a categorical imperative: it tells you what you always ought to do in the circumstances described. That’s not a strike against it, in my book, because I accept the existence of categorical imperatives. But I’m just arguing here that almost everyone else is committed to them too. The only way out is to be a complete normative nihilist, which I don’t think I’ve ever seen anyone actually endorse, because everyone I’ve talked to about this wants to at least keep instrumental normativity.
Rather than “you ought to take the means to your ends”, the true statement is “you ought to take the means that would be recommended to you by a rational version of yourself”.
I agree with this statement, but think it’s only true in virtue of the fact that there are objective rational norms, and the rational version of yourself would advise you to follow them. I’m skeptical that this kind of ideal observer theory can give us an analysis of practical normativity.
Presumably, your rational self wouldn’t want you to act in ways that are bad for you, and you can’t rationally reject seeking your rational version’s ends.
It seems to me the fact that Rational Me advises X only gets purchase because I take Rational Me to know objective normative facts. I’d follow his advice for the same reason I’d trust a scientist who I took to be in touch with the scientific facts. If Rational Me is spelled out in some other way, e.g., in purely formal terms of not being swayed by temporary emotions or something, it’s not clear to me why I should listen to him.
You could describe it that way, though that requires a rather broad view of categorical imperatives – it has nothing to do with universalizability or treating people as ends in themselves. But another way of looking at it that’s descriptive rather than prescriptive is “If you are rational, and you want X, and Y is a means to X, then you’ll do Y” combined with “If you understand the implications of wanting X, you will want to be rational in getting it”.
Not necessarily. Rational Me has a better understanding of how to achieve my ends, i.e. is better at understanding that “Y is a means to X”, which is not a normative fact. The fact that I should listen to my rational self stems from having the same ultimate ends, and from him having a better understanding of how to achieve them than I do.
You could describe it that way, though that requires a rather broad view of categorical imperatives – it has nothing to do with universalizability or treating people as ends in themselves.
It does have to do with universalizability inasmuch as anyone in exactly the circumstances described in the antecedent ought to do the consequent. My basic point is that there isn’t an interesting formal difference between what people call categorical and hypothetical imperatives: the difference is just in content.
But another way of looking at it that’s descriptive rather than prescriptive is “If you are rational, and you want X, and Y is a means to X, then you’ll do Y” combined with “If you understand the implications of wanting X, you will want to be rational in getting it”.
You can stipulate that by ‘rational’ you just mean ‘takes the means to one’s end.’ Then I’ll object that you’re stipulating a meaning for a word that has normative content in its ordinary use, and so being misleading at best.
Not necessarily. Rational Me has a better understanding of how to achieve my ends, i.e. is better at understanding that “Y is a means to X”, which is not a normative fact. The fact that I should listen to my rational self stems from having the same ultimate ends, and from him having a better understanding of how to achieve them than I do.
Granted — I feel the pull to follow Rational Me’s advice inasmuch as I share his ends, and he has more accurate empirical beliefs about how to realize them. I still don’t feel the pull of following his advice to change ends of mine which he doesn’t share, though, if such there be.
Traditionally, the categorical imperative is “Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction”. While it’s true that anyone in the circumstances of the antecedent ought to do the consequent, it’s not a command to will anything universal. And also unlike the categorical imperative, it can be formulated descriptively (as I’ve done above), without the “imperative” part.
The way I’m using it still has normative force, derived from the inherent normative force of having ends. Being rational follows from having ends, because if you reject being rational, you won’t achieve them as well, which is contradictory given that they’re your ends.
That’s why the advice is what your rational self would recommend to your actual self, which is not necessarily what he would do himself. To use an analogy, if you’re lost in a city of which you have some knowledge, he would tell you to get a map, even though he wouldn’t need one himself. Your rational self would also give you advice that would eventually (in the limit) make your ends identical to his, and improve your network of preferences according to your own standards.
I fear I may have caused the confusion here. I used the terminology “categorical imperatives.”
Kant thought he could prove the existence of a Categorical Imperative by reason alone. Yet the space of potential categorical imperatives (lowercase) is much broader than this.
A categorical imperative is any kind of absolute moral command. For example, “Treat others the way you want to be treated”, “Thou shalt not kill”, or “Obey the law.” They do not say, “Supposing you have some goal X, do Y in order to achieve it.”
In practical use, some apparently categorical imperatives (such as “Obey the law”) may suggest this, but strictly speaking they simply demand that you follow them, independently of your goals or desires.
It is these kinds of imperatives that I do not think are ever actually binding.
Sorry, I should have been clearer on ‘categorical imperative’ too; like Vox, I was using it in the generic sense, not to refer to Kant’s particular Categorical Imperative. (Where Vox and I disagree is that I think that “Supposing you have some goal X, do Y in order to achieve it” is a categorical imperative if ‘categorical imperative’ means ‘absolute moral command,’ inasmuch as, as I said, this norm always applies to you — if you’re ever in the circumstances mentioned in the antecedent, you ought to do the consequent. If ‘categorical imperative’ means something else, like ‘doesn’t mention a desire,’ then this is not a categorical imperative, but then my claim is that that’s not an interesting distinction.)
The way I’m using it still has normative force, derived from the inherent normative force of having ends.
What is this inherent normative force of having ends? It sounds to me no less mysterious than certain actions having inherent normative force because of their moral value.
Being rational follows from having ends, because if you reject being rational, you won’t achieve them as well, which is contradictory given that they’re your ends.
In whatever sense that it’s contradictory, why does that matter? From whence comes the ‘don’t be contradictory’ norm?
I haven’t read Ethical Intuitionism, but the secondary material seems to indicate that “The elephant in the room is Kantian constructivism”, which, as it happens, is my preferred theory.
Abortion has killed far more than 500 million people worldwide. Some people find this extremely troubling, others not so much. Obviously there is a lot of debate as to whether or not the 1-2 billion people “count” in a moral sense. Some people think the number of abortions is not evil – far more humans die due to things like spontaneous miscarriage, so the abortion numbers are really just a drop in the bucket and we really shouldn’t get too upset about it. Abortion is not murder because then ignoring deaths due to miscarriage would also be murder (and we clearly don’t act that way). On the other hand, others note that 1-2 billion people who otherwise would have been born have not been born, so really it’s just a form of pre-infanticide, so definitely evil – abortion is murder, doctors who perform abortion are murderers, and people that promote abortion rights aid and abet murder. Both positions are ethically plausible.
This is why moral realism is a non-starter for me – if reasonable people can’t agree on whether or not we can “just know” whether or not the deaths of *billions of humans* are evil, it calls into question (for me at least) the existence of moral facts.
The same reasoning should call into question the existence of physical facts. Reasonable people couldn’t even agree on what the elements were. (Is everything fire? Water? Fire, water, wind, earth? Fire, water, wind, metal, wood?)
When the Jesuits went to China, they discovered a clearly advanced society that didn’t even know that the earth orbited the sun, not unlike the hypothetical alien race with different morals.
Good point – you need some justification for even establishing physical facts, and hopefully something more robust than simply intuition
I think “all of this goes in a consistent direction” says little more than “all of this seems to be going the way I want it.”
I could just as well say that everything has been tending toward my personal morality, since as I look back more into the past, things seem to have been increasingly different from my personal morality.
If you think there is a general broadening of rights over time, what about the rights of God, the unborn, and dead people? None of them seem to have been doing too well lately.
I think we’ve discovered that God and dead people don’t exist, which overwhelms the general trends.
I would lay money that the rights of the unborn are going to increase in the next few decades. I think it’s just an accident of history that taking them away became seen as a feminist thing.
An increase in the recognition of the rights of the unborn might discredit any ideology associated with the lack of recognition of those rights.
Dead people certainly exist. Otherwise, their currently existing descendants wouldn’t have been born. They just don’t happen to exist at this particular point in time.
I’d actually argue that the rights of the dead are fairly well-protected, and continue to be so. Dead people’s property rights are protected in the form of inheritance (that’s taxed, but not that much more than the property of the living). Their rights to bodily integrity are protected, in that their opting out of organ donation is usually respected. The right of the dead to not be slandered and libeled is often more respected than the rights of the living.
Dead people most certainly do not exist. Ancient Rome does not exist. The future does not exist.
Ancient Rome existed, then it passed out of existence. The future will exist, but it does not exist.
In philosophy, a distinction is made between the “A-theory of time” and the “B-theory of time”. These extremely uncreative names refer to two fundamentally different conceptions of time, the most consistent versions of which are, respectively, “presentism” and “eternalism”.
Presentism says that our sense of time as moving forward constitutes veridical, objective knowledge of reality. The world is constantly changing, with new things always coming into existence and going out of existence. That which is past is gone; it is nothing. Time is a way of relating one of these changes to another, like the rotation of the Earth to its revolution around the Sun.
Eternalism says that our sense of time is somehow an illusion or trick. The world is actually a completely static and unchanging four-dimensional “block” of “space-time”. The past and the future are just as real as the present. Time is a dimension along which 4-D “worms” are located, like length, width, and height.
Eternalism was adored by early Christians like Augustine, who saw it as a fantastic way for God to be able to know the future, since he could see all parts of the “block” at once. Of course, it completely contradicts free will—a source of constant problems for Christians. (It wasn’t only Christians. Parmenides and Zeno were eternalists, and Zeno’s paradoxes were supposed to illustrate that change cannot occur, which is a corollary of eternalism.)
I believe that the truth of presentism is implied by many elements of human experience, free will not least among them.
The most common objection to presentism is Einstein’s theory of relativity. It is quite true that Einstein was a committed eternalist, and the idea of the universe as being composed of four-dimensional “space-time” is a thoroughly eternalistic doctrine. However, it is completely possible to reformulate the theory to account for all the empirical phenomena on the basis of presentism. (This is left as an exercise for the reader.)
I don’t know whether a presentist theory of relativity would make physics easier. There are many false simplifying assumptions in physics which are nevertheless useful.
“I believe that the truth of presentism is implied by many elements of human experience, free will not least among them.”
I have no certainty that I experience libertarian free will rather than compatibilist free will. What else you got?
Stipulations in wills are very frequently voided when they go against the wishes of the living, and many stipulations are simply forbidden in advance, even when they are things that people would be able to do when they are alive.
Most of the people in charge of making laws are not atheists, but this hasn’t changed the trend. That indicates that whether or not God exists is irrelevant.
“I think we’ve discovered that God and dead people don’t exist, which overwhelms the general trends.”
Really? We’ve discovered that materialism is absolutely true?
As true as the fact you aren’t a brain in a vat.
‘note- terminate subject 122’
That is very funny.
What is meant by materialism?
Let’s see … Materialism is a matter of rejecting supernatural explanations and supernatural explanations are those that reject materialism …
What is meant by materialism is the doctrine that there is no substance besides matter. No God, no mind that survives a body, no numbers… just material.
It’s not a doctrine I find very convincing.
Right. “Materialism” is not synonymous with “naturalism” or “atheism”. It does not mean rejecting the “supernatural”. Immaterial minds may be thought of as perfectly natural.
Whatever the “supernatural” is, is actually the unclear part. Basically, it only has meaning in a religious viewpoint. The natural is the created universe, and the supernatural lies outside it.
Sometimes, the word “physicalism” is used by “materialists” who are worried about the whole matter-energy conversion thing technically falsifying their theory.
@Vox: “Whatever the ‘supernatural’ is, is actually the unclear part.”
It is indeed an unclear term, outside of Scholasticism.
God is philosophically believed to have a nature, so naturalism != atheism.
The version I’d heard was that “physicalism” was coined by positivists to say “materialism is a metaphysical position and therefore literally meaningless, so we’re not going to say we believe in that, here’s some complicated thing that basically boils down to materialism/resembles materialism if you squint, but we’re not calling it materialism because that would be metaphysics” and at some point later, positivism (at least logical positivism) stopped being popular and metaphysics became respectable again, but “physicalism” had stuck.
Also (I’m speculating wildly here) “physicalism” has the advantage that it can’t be confused with consumerism or greed or other things to do with the other sense of “materialism”.
A lot of those logical positivists were also straight up phenomenalists, so they were both physicalists in the methodological sense (physics is the fundamental science) and phenomenalists in the ontological sense (statements refer to anticipations of sense experience).
I think “physicalism” and “materialism” are generally terribly underdefined, and their proponents really need to be clear on their implications for the relations between theories. Does physicalism require strict reductionism? If so, then it’s very likely false. Does it require mere supervenience? Etc.
You could try Richard Carrier’s definition of the supernatural as something which contains ontologically basic mental elements (also endorsed by Eliezer Yudkowsky here, so I guess it’s a candidate for the LessWrongosphere consensus position) – supernatural things have some sort of mindful property, consciousness, intentions etc, without being made up of smaller components that do not individually have mindful properties.
This seems to be true of humans, so far as we can tell – you can cut out specific bits of brain and remove specific parts of someone’s personality – and no one has yet demonstrated the existence of anything with mindful properties that does not depend on a complex physical structure of individually non-mindful (as far as we can tell) interacting parts.
Doesn’t Carrier’s definition make mathematical objects supernatural?
I’m not sure what you mean. Do you mean abstract mathematical objects, like ‘the number 3’ or ‘the Fibonacci sequence’? If so, those are abstractions about how reality works that run on human minds; I don’t think anyone with any credibility in maths is claiming that the Fibonacci sequence is itself mindful-despite-not-being-made-up-of-simpler-non-mindful-stuff.
But that seems so obvious that I’m sure I must have misunderstood you.
“I’m not sure what you mean. Do you mean abstract mathematical objects, like ‘the number 3’ or ‘the Fibonacci sequence’? If so, those are abstractions about how reality works that run on human minds; I don’t think anyone with any credibility in maths is claiming that the Fibonacci sequence is itself mindful-despite-not-being-made-up-of-simpler-non-mindful-stuff.”
I mean, what if they aren’t just correct abstractions about how the material universe works that run on human brains? If I can have three protons but not infinity protons, does “infinity” become a delusion in human brains?
If you look up section 4.3 in Stanford’s “Philosophy of Mathematics” article, it talks about the implications of trying to comprehend mathematics without abstract entities. My understanding is that Quine gave it up and tolerated Goedel’s Platonism as indispensable to scientific naturalism.
Sorry for the late reply – been away at a festival – but if anyone’s still reading, then… Okay, I guess I should have been more precise – mathematical concepts are abstractions about how some aspects of reality work, and/or abstractions about how other abstractions respond when you manipulate them according to certain sets of rules. So the concept of infinity may be a delusion when applied to protons, but still valid when you want to know how a hypothetical endless quantity behaves when you apply the same rules to it. I still don’t understand why that would make it supernatural in Carrier’s sense – no one is claiming that the mathematical concept of infinity is capable of having desires, intentions etc.
Firstly, it’s quite a bit of chutzpah to want to assert the rights of a god before it has been demonstrated to a sufficiently high standard that the god in question even exists. The very fact that you use the singular, capitalized form implies that you would prioritize the rights of (presumably) the god of Christianity over, say, Vishnu or Melek Taus if those gods came into conflict. And yet there is not any good evidence showing the god of Christianity more likely to exist than the gods of Hinduism or Yazdan (and the demands that different gods make are quite often incompatible, so we simply cannot just grant them all the same set of rights). Not to mention that gods which are omnipotent are by definition infinitely capable of defending their own interests and therefore cannot need us to vindicate them.
If someone could prove in a court of law that their god or gods actually existed, and actually had interests that humans are capable of violating, then anyone with the slightest concern for moral progress should be happy to include that god under the umbrella of rights-protecting legislation.
I mean, there is also a lot less concern these days for the rights of the fairy folk, but I think that that’s entirely reasonable given that we have had good reason over the last few centuries to downgrade our probability estimate of the fairy folk actually existing, and the same applies to gods. We have not disproven the existence of any gods, but we have come to a much improved understanding of how likely we were to have been mistaken to believe in them in the first place.
As regards the rights of the unborn, well, as long as you mean ‘future people’, I think there is a lot of concern. The environmental movement is (a fringe of human extinctionists notwithstanding) concerned with maximising the extent to which our planet remains habitable for future generations. And if you go down the transhumanist route, you will find people exquisitely concerned about making sure that we fill the galaxy with QALYs – both by engineering ourselves and the world to maximise enjoyable lives, and by colonising other planets to increase the available niches for future beings to have enjoyable lives. It’s only if you get into the area of asserting that specific zygotes have rights that trump the rights of people who do not want to carry a pregnancy to term that you can reasonably extrapolate a relative lack of concern for the unborn – and even then, it isn’t a total lack of concern, it’s just a position that holds that an individual hypothetical future person is of less moral concern than a currently living person (and that there is no good reason to pick the time of conception to elevate something all the way from hypothetical future person to full-subject-of-moral-concern-on-a-par-with-a-fully-conscious-person).
Regarding the dead … I’m not sure what rights you would want to extend to them. I mean, we do things with corpses that we didn’t used to do, like harvest the organs for transplant. But we are still pretty damn deferential to the dead person’s last known wishes, and to their surviving family. You might need to explain what rights you think we have stopped affording the dead, and why we should reinstate them.
Your object level arguments are irrelevant; my point is that rights do not simply increase over time. They also decrease. Maybe you have good reasons why that should happen; but then you support my point that rights also decrease.
Okay, but if they increase for beings that exist, but decrease for beings that either don’t exist at all, or have a much more tentative claim on existence, isn’t that relevant? Like, for instance, if we once afforded rights to Cthulhu, a being who I expect we can both agree doesn’t actually exist, but now no longer afford rights to Cthulhu (while holding everything else constant), then rights-that-actually-attach-to-things-that-it-is-meaningful-to-attach-rights-to have not actually decreased.
When your livelihood is dependent on your social capital (e.g. the approval of your tribe), thinking about whether your neighbors might be wrong is a luxury. When your children or siblings die at a young age, you’re unlikely to be unaffected. When, from a young age, you have it beaten into you that you have to respect authority and keep to your social sphere, you learn that going against that hurts. And most generally, when your overriding concern is survival, you don’t have much time for reflection.
So as those problems diminish, people become more capable of critical thinking. Just as a certain level of wealth is a prerequisite for scientific investigation, it could be argued that something similar is necessary for moral investigation. That doesn’t mean that all newly proposed moral theories are true (nor are all new scientific theories true), but that in the past two hundred years, the environment has become more conducive to moral discovery.
Nicely put. But I immediately think of survive/thrive farmer/forager. If thriver-forager comfort is necessary for sufficient moral self-awareness, then you’re obviously right. But if it’s just cossetted decadence that prevents us from seeing the bedrock reality of surviver-farmer values…. I guess what I’m wondering is if there isn’t a begged survive/thrive question here.
That would require us to be better at discovering morality when we’re not thinking critically about it, which seems highly implausible. Nothing else works like that.
This assumes morality is primarily about accurate theory rather than habitually practiced virtue. If the latter, a System 1 state of Flow might be expected to characterize the responses of the Taoist, Zen, or theist sage to whatever life serves up. (“You turned off your targeting computer, Luke.”) Such a second nature might be more rapidly and deeply acquired under survive-farmer sink or swim conditions? Maybe?
But surely we can theorize about what virtues are best to habitually practice? Why would this work in a way opposite to everything else?
I’m feeling strong parallels to “Universal Love, Said the Cactus Person”. Namely, the futility of arguing against (or for) mysticism. If it’s ineffable, it just can’t be effed.
You need theory to determine whether you need virtues, what they are, how to cultivate them, etc, even though you aren’t constantly thinking “Am I cultivating virtue X?” in practice.
I’m not sure if this is too far off-topic; if it is, Scott, I’m sorry, please delete.
If not, then…
To utilitarians: I’ve noticed an interesting implication of utilitarianism that I haven’t seen discussed. You believe, as I understand it, that one ought to act so as to maximize aggregate utility. From the perspective of most people posting here that means giving all your money to the Third World. But what if you are a child in the Third World, and you come across a rich Westerner? Is it your moral imperative to steal from them – not just acceptable, but obligatory, the same way it is our moral obligation to give our money to them? It seems this would be the action you could take, if this situation came up, that would maximize aggregate utility. So are you obliged to do it? Moreover, is a Third World child who has this opportunity but doesn’t take it committing just as immoral an act as the Westerner who doesn’t donate his money to the Third World – and if not, why not?
Not a utilitarian, but this one seems easy. Social norms against theft are vital to the greater good. Even assuming you could raise your personal utility by stealing from the rich tourist (i.e., you’re guaranteed not to get caught and jailed, etc.), that would be outweighed by the way you’ve weakened the norm.
As the Less Wrong crowd would say, assume the least convenient possible world. Handwave those problems away. Nobody will catch you, you have a plausible explanation for where the money came from if anyone asks. What now?
EDIT: ‘your personal utility’ is an understatement. The reason I made you a Third World child in this example is so that your stealing could raise the aggregate utility, which I think is a much more interesting prospect.
In the Least Convenient Possible World, I would say the child probably is acting immorally by not stealing. I don’t understand how you think this is a big deal. Poor people who steal are often portrayed sympathetically both in real life accounts, and in fiction. Most people consider Jean Valjean’s act of stealing bread to feed his sister’s children to be a good act, or at least an understandable one.
Of course, the world we live in doesn’t really resemble the LCPW. So I instead endorse Harean Two-Level Utilitarianism, which says don’t steal.
If your hypothetical involves the poor person stealing to give to others then it could certainly get a lot less convenient. Make it, as I implied but probably didn’t state clearly enough, such that the poor person is stealing entirely to give to themselves. Maybe they are the lowest utility person they know by a long way, maybe there is some practical reason why they can’t share the spoils.
> Make it, as I implied but probably didn’t state clearly enough, such that the poor person is stealing entirely to give to themselves.
That’s not clear at all. Usually giving money to someone in the Third World is good because it helps other people too.
They can go to school, which leads to them earning more money, which means they pay more taxes, improving their country; then when they have children they can afford to send them to school, and so on. That’s what it usually means for something to give more aggregate utility.
In the case that the person spends money only on himself and it benefits no one else and it still gives more aggregate utility, the situation is the same as the famous Utility Monster, right?
Sure, but I hadn’t heard it posed from the Utility Monster’s perspective before – that it would not only be required for someone else to donate to the monster, but also required for the monster to steal from others so as to gorge itself on utility.
By reasoning similar to superrationality, if it is the right thing for you to do, it is the right thing for everyone in a similar situation to do; they don’t need to be able to catch you or communicate with you in any way for your decision to be correlated with their decision. In other words, the social norm reasoning applies whether they can catch you or not. If everyone were to do it we’d be worse off. So it’s wrong.
If you think rules per se are a good thing, are you still a utilitarian or a utilitarian-deontologist hybrid?
> You believe, as I understand it, that one ought to act so as to maximize aggregate utility. From the perspective of most people posting here that means giving all your money to the Third World.
I prefer to think of actions as part of a scale from worst to “meh” to best – i.e. it would be better if I did this, though not being the most moral person I don’t actually. But some of the EA folks do about as well as they reasonably could.
> you come across a rich Westerner? Is it your moral imperative to steal from them – not just acceptable, but obligatory
With the above framework in mind, the world in which the terribly impoverished steal from the rich is probably better, yes (barring hypothetical scenarios in which theft leads to the downfall of civilisation etc, which may indeed be a valid objection).
> So are you obliged to do it?
It’s better if you do it (same caveat as above).
> Third World child who has this opportunity but doesn’t take it committing just as immoral an act
It’s a “meh” act of omission – neither especially praiseworthy nor especially blameworthy.
“It’s a “meh” act of omission – neither especially praiseworthy nor especially blameworthy.”
So, to be more precise, is an act that increases aggregate utility equally good whether or not the person whose utility you increase is yourself? There is no discount such that doing things for yourself is worth less than doing things for others – it’s entirely about the number of utiles involved?
If so, that’s interesting. Could there be a situation, then, in which a person forgoes an opportunity to raise their own utility by an enormous amount, and this act would be morally bad? Not just meh, but bad, if enough potential utiles are missed out on?
> There is no discount such that doing things for yourself is worth less
Hmm. In the end I subscribe to the view that there’s a person at time A, and there’s a person at time B with memories/personality etc inherited from the person at time A, and there’s no further truth of “being the same person”. Though I suppose I’m unusual in this.
(I can’t pretend I generally act as if the things I believe on a philosophical level were actually moving me.)
But yes, if I had a more common theory of personal identity, I think I would still not believe in any such discount in principle, though in practice praise for helping others is useful to society, I expect.
> forgoes an opportunity to raise their own utility by an enormous amount, and this act would be morally bad?
The situation is awkward because you must take people’s preferences into account. The hypothetical person forgoing some great good for themselves must have some reason for doing so. If that reason is because they subscribe to some other ethical theory, that is also awkward. I dunno.
But I prefer “better/worse” to “good/bad” (at least I do when I’m defending utilitarianism) and the world you describe seems worse than the alternative. One could argue that encouraging such self-denying virtues in people is bad. Uh, I mean… worse than not doing so.
This is, indeed, the case. The reason it seems intuitively better the other way around is that people are often untrustworthy. If your hypothetical thief steals for others, we can more easily trust the statement “I did it because I want to increase net utility, not because I am selfish.” If your hypothetical thief steals for their own good, they might be correct when they say “I stole to increase net utility,” but their excuse cannot separate them from the selfish thief, who would lie and offer the same excuse.
You are forgetting the Lucas Critique, Timeless Decision Theory, and that Newcomblike problems are the norm.
It may not be a good idea for the child to steal, since it will do enough damage to niceness/community to be net bad if they were caught. That’s the Lucas Critique: your choices affect incentives, which will alter behavior.
Now, what may not be clear is that this even applies if you aren’t caught. If you steal, then the model people have of Third World children is worse off due to Newcomblike considerations, and you still do damage to niceness/community. In addition, your decision can “cause” other minds pondering similar decisions to steal when they shouldn’t. Depending on how similar other minds are to yours, how much information you leak, and the “value” of niceness, it may or may not be better to steal.
To illustrate this, consider a true prisoner’s dilemma between yourself who values paperclips and yourself who values paper. Since you are identical in every way except your values you should choose to cooperate since they will as well, and you both will be better off than if you both had chosen defect.
Finally, judgement is another problem. Your moral model is somewhat of a negotiation. If you are too strict, as with pure utilitarianism, then nobody can live up to the model, everyone is shamed, and it doesn’t really work. It is likely better to dial back what one is “obliged” to do.
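The twin-agents argument above can be made concrete with a toy payoff table. This is a minimal sketch; the payoff numbers (3/5/1/0) are illustrative assumptions, chosen only to satisfy the standard prisoner’s-dilemma ordering, not figures from the thread:

```python
# Toy prisoner's dilemma between two agents running the *same*
# decision procedure (e.g. you-who-values-paperclips vs.
# you-who-values-paper). Payoffs are (row, column) utilities and
# satisfy the standard PD ordering T > R > P > S.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (reward R)
    ("C", "D"): (0, 5),  # sucker S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (punishment P)
}

def correlated_outcome(choice):
    """Identical agents necessarily make identical choices, so the
    only reachable outcomes lie on the diagonal of the payoff table."""
    return PAYOFFS[(choice, choice)]

print(correlated_outcome("C"))  # (3, 3)
print(correlated_outcome("D"))  # (1, 1)
# Cooperation is better for both, even though "D" would dominate
# against an opponent whose choice were independent of yours.
```

This is the same structure the earlier superrationality comment appeals to: when decisions are correlated, only the diagonal outcomes are reachable, so you compare those rather than reasoning about the full matrix.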
These all sound like conveniences to me… If we really have to go there then make it a magic button. The Third World child can press it, it takes 100 utiles from some rich Westerner and gives one million utiles to the child. The Westerner finds out nothing, maybe it makes him lose his wallet or scrape his car or something. Has a moral wrong been committed if the child does not press? Is there any number of utiles beyond which point not pressing would become a moral wrong?
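For what it’s worth, the arithmetic behind the button is simple. A sketch using only the utile figures given in the comment; treating “obligatory” as “the act with the highest available net change” is an act-utilitarian assumption of the sketch, not something the comment asserts:

```python
# Aggregate-utility bookkeeping for the hypothetical magic button.
westerner_change = -100    # utiles taken from the rich Westerner
child_change = 1_000_000   # utiles delivered to the Third World child

net_change_press = westerner_change + child_change
net_change_dont = 0        # not pressing changes nothing

print(net_change_press)  # 999900
# On a purely aggregative view, pressing is better by 999,900 utiles,
# and the gap (hence any "moral wrong" of not pressing) scales
# linearly with the number of utiles the button delivers.
```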
A button is probably sufficiently “disentangled” from other choices that I think the answer is yes.
Increasing net utility is always of the good, because it is the meaning of good.
Pose the question: Would the world be better if the $100 were transferred from the rich man to the poor child, absent other considerations? If yes, transferring the money is good. It matters not whether the transfer is accomplished by the rich man, the poor child, the invisible hand of the market, a magic button or the very nature of God Himself.
The difficulty arises in the complexity hidden behind “absent other considerations.”
Can I get “The last enemy to be destroyed is submaximal global utility; destroying Death just buys us more time” on a T-shirt? It may well end up as my forum signature.
>Achitophel: That sounds a little forced. I could come up with a counter-story where given the worldwide increase in wealth and our lack of real-life exposure to any starving people or smallpox victims, the Care foundation atrophies away, but given our increasing crowding and exposure to superplagues like HIV and Ebola, Purity becomes obsessively important.
This seems like an empirical question. Disease rates really have gone down in developed countries, but we still hear about plenty of tragedies, especially once scope insensitivity is factored in. “Human interest” is a concept for a reason!
Funny – I would question both whether destroying Death is necessary to maximize global utility (perhaps people have a finite lifetime utility or even just a diminishing ability to experience it, perhaps the optimal solution would be ‘fixing’ death such that we are perfectly happy to die once the utility we can each experience is less than a new person would experience in our place) and whether destroying Death really buys us more time (it buys each person more time, sure, but perhaps more people equals more unique insights on difficult problems equals maximal global utility attained sooner).
The problem with these debates is that no-one clearly states what it would mean for morality to be “objective”.
I think that for a moral rule to be objectively true, it must be true for any possible mind. I think that something that is true for any possible mind will be a tautology.
I can see how meta-ethical rules could take the form of tautologies – objectively determining the possible forms that moralities can take – but I don’t really see how rules determining the best specific course of action could operate independent of the specifics of the mind.
Perhaps there is some feature shared by all minds that necessarily makes a certain course of action appealing?
[[An example (probably a bad one) – treat others as you wish to be treated – this is a meta-ethical tautology – we have no moral duty to those who aren’t conscious, but since we have no direct experience ( and can have no direct experience) of other minds, the consciousness of others only has reality as a projection of (and within) our own minds. The pain of others only takes substance within our own minds, as our own pain (otherwise it is a pure abstraction.) The extent to which others exist is the extent to which they are a part of us – hence treat others as you wish to be treated. This doesn’t actually tell us what we should do, as it is perfectly permissible to not empathize with the things (humans) around you.]]
No, it isn’t. It’s not even close to a tautology.
This is absurd. One is quite well capable of recognizing that other mental beings exist without completely (or even slightly) empathizing with them. One simply observes one’s own case and appeals to the best explanation: I am conscious, and these other people appear to be built in the same way as I am, so they must be conscious, too.
The same goes for others’ pain. One feels pain and cries out when pricking oneself with a knife. When others get pricked by knives and cry out, one correctly infers (by appeal to the best explanation) that they also feel pain. This is not a “pure abstraction”; it is a fully concrete thing whose existence is inferred by a great deal of evidence.
You will never directly sense a neutrino. Does that mean they don’t exist?
Since no one actually fully feels the pain of all other people, this is everyone’s normal mode of relation to almost everyone else. Yet people are well-justified in believing that others exist.
“It’s not even close to a tautology.”
OK… well let me make a different claim: To the extent that other minds only exist as aspects of our own, “treat others as you would wish to be treated” is tautological.
“You will never directly sense a neutrino. Does that mean they don’t exist?”
Well, presumably, the extent to which they exist is exactly the extent to which the concept is tied on to some observed relationship between different bits of sense data.
I mean, you could say the same thing about a tree couldn’t you? It’s a concept to tie together different observations: sense data relationships as an object.
Let’s say that to be “abstract” means to be concerned with relationships, while to be “concrete” means to *be* sense-data/qualia.
You are saying that the word “mind”, when applied to others, is a word like “tree”, or “neutrino” – a way of describing relationships between pieces of sense data we have received. “Pleasure” of others, likewise, would be of a similar class. They are abstract in the sense that they describe relationships between sense data rather than sense data itself.
I would say that if this is what we mean by “minds of others”, then the “minds of others” are actually something entirely different to my own mind, which is concrete, in the sense that it *is* qualia.
“One is quite well capable of recognizing that other mental beings exist without completely (or even slightly) empathizing with them. One simply observes one’s own case and appeals to the best explanation: I am conscious, and these other people appear to be built in the same way as I am, so they must be conscious, too.”
But in this case, to say that “they are conscious”, is to say something very different to “I am conscious”, in the same way that observations of some machine that tell me green light has certain properties is different to the observation of the color green. What does it actually mean to say that someone is conscious (in the same sense as me) without empathizing with them?
I can (perhaps) say that “the leaf is green” without imagining a green leaf – but where the word “green” isn’t tied to sense data (either generated internally by my imagination, or as a brute fact of observed reality) then it becomes purely abstract – just a matter of relationships, not directly referring to anything concrete. Likewise, if the word “mind” isn’t tied to the experience of the mind (that is experience itself) then it becomes purely abstract (Experience itself cannot be observed, only generated internally.)
If I enjoy the color green, I might make a calculation about how to maximize green without imagining it, purely on the abstract level – but would this calculation *in and of itself* increase greenness? Don’t we have to do something, or imagine something to increase green?
I think you can ask the same question for mental states – and I suspect that since mental states can never be observed, they can only be meaningfully maximized internally.
(If I never observed or imagined green, but instead used information from machines to tell me that a certain wavelength of light existed, does that meaningfully maximize green?)
‘To the extent that other minds only exist as aspects of our own, “treat others as you would wish to be treated” is tautological.’
So, not remotely tautological then.
I – and I expect lots of people here – aren’t buying your (positivist? idealist? anti-realist?) account of existence. I think most of us are of the opinion that existence is independent of observation: there’s nothing logically impossible about something existing and having no sensory consequences, although for any given postulated thing, people will think it highly unlikely, due to Occam’s razor. Some people like to go on at great length about not mixing up the map and the territory and all that. Now as to knowledge of those things, that’s another matter.
” there’s nothing logically impossible about something existing and having no sensory consequences”
It might not be logically impossible, but I wouldn’t imagine it would be very compelling.
[I don’t think I’m really taking a metaphysical position here – I’m taking a meta-ethical one – why should we be compelled to do certain things]
Let’s see. We’re sort-of covering minds anyway, let’s think of something else.
Stuff beyond the observable universe. The observable universe is isotropic – so either we’re at the dead centre (seems unlikely due to the Copernican principle) or there’s more stuff beyond.
Of course lots of people are in the habit of conflating “universe” and “observable universe”, and some people who like to talk about “multiverses” are really talking about “mutually non-observable portions of a larger universe”, the larger universe having the usual 3+1 (or however many it is that the relevant varieties of string theory like to talk about) dimensions, rather than “different planes of existence”.
Question: why do we believe that we are conscious? If I can’t observe my own mental states, how do I know I have them? (This gets onto a related question – why is epiphenomenalism even remotely appealing – why do the people who like to (try to?) shoot it down find it even worth trying to shoot down?)
“If I can’t observe my own mental states, how do I know I have them?”
You say “experience itself cannot be observed”, you say, “mental states can never be observed”, so from that I conclude that by your definitions, axioms etc. I can’t observe my own experiences and mental states. So how do I know I have them? Where are these words, these thoughts about my own experiences, coming from? If I find myself with a thought, “I have experiences”, why should I trust that thought?
(My aim here isn’t to deny my own experiences by asking rhetorical questions. “What makes you so sure?” is usually a rhetorical question, roughly meaning, “I think you’re wrong, think about it”, but not here.)
Also, your definitions of “concrete” and “abstract” are weird. By your definitions a block of concrete, as in the building material, is abstract. The actual sense data/qualia/whatever are various shades of grey spatially arranged in a visual field, possibly some tactile sensations too.
“You say “experience itself cannot be observed”, you say, “mental states can never be observed”, so from that I conclude that by your definitions, axioms etc. I can’t observe my own experiences and mental states.”
Well, my point was that when we say “mind” in reference to ourselves, we mean experience (ours, obviously). If we use “mind” to refer to others, we either mean (1) some aspect of our own experience (2) an abstract concept that ties together our observed experience of “external” reality (other people’s smiley faces) (3) Something of which we can have absolutely no knowledge and cannot speak of meaningfully.
If we mean (2) then it means something very different than the word “mind” when used to refer to our own experience.
“So how do I know I have them? Where are these words, these thoughts about my own experiences, coming from? If I find myself with a thought, “I have experiences”, why should I trust that thought?”
Because the thought is itself an experience. It is a statement that by its very existence proves itself.
“Also, your definitions of “concrete” and “abstract” are weird. By your definitions a block of concrete, as in the building material, is abstract.”
Perhaps concrete and abstract are not the best words – I want to get across the sense that it is possible to think using words without having any internal representation of that thing in mind – like I can think about 6,000,000 things without actually having to have an image or whatever of 6,000,000 things in my mind, and the reason I can do that is because I can just think about numbers in terms of their relationships to other numbers. I can also do this for words in general.
The sensation of actually seeing, or of imagining, a load of things is distinct from talking about it – but ultimately we must be *motivated* by the actual sensation (either imposed by external reality, or generated internally) rather than an empty relationship – so for the purposes of ethics, it is the qualia that are important.
Ah, from (3) this is all looking like Kantian idealism – correct me if I’m wrong. (3) is something we can have beliefs about; whether those beliefs are justified and true enough to count as knowledge is another question, but belief I think is enough for meaningful discourse. One question of justification relies on the validity of the Copernican principle (which I think is a special case of the principle of parsimony), of course since Kant’s Ptolemaic counter-revolution this isn’t universally accepted. Ho hum.
(2) is a concept, not a mind. A concept of a mind is not a mind.
Most people in modern societies meet a lot of people, and must forget a lot of people. I must forget lots of people, yet I do not think of myself as obliterating lots of minds from existence, and lots of people must forget me and I think little of it.
Also: loads of people talk about observing their own thoughts, especially in the case of mindfulness, meditation etc. Observing my own thoughts has been a useful way to help mitigate an anxiety disorder. But if thoughts are experiences, and experiences can’t be observed, am I wrong about this?
Nice. And an absolutely correct characterization.
(Though I must concede that the sense Kant meant his theory to be a “Copernican revolution” was not in saying that the Earth moves around the Sun, but rather that it is the Earth which rotates and not the heavens.)
If I want to imagine that Russell’s teapot is beaming happy rays down to me, the key point is that I am *imagining* something. It has a reality as internally generated qualia (and if I enjoy doing this, why not.)
That is something very different from a thing which is neither imagined nor observed – can we have beliefs about something of which we know nothing, which has no properties? Surely unless our beliefs are entirely a matter of abstract connections that may as well consist of nonsense words, even the act of having a belief about it gives it some properties inside your mind ((1))
And then – even *if* you had some beliefs (unrelated to any base qualia) about such a thing, how on earth could it provide you with motivations?
“One question of justification relies on the validity of the Copernican principle (which I think is a special case of the principle of parsimony), of course since Kant’s Ptolemaic counter-revolution this isn’t universally accepted. Ho hum.”
I’m sorry, I didn’t understand this.
[edit – I checked Wikipedia; I get it now]
“(2) is a concept, not a mind. A concept of a mind is not a mind.”
“Most people in modern societies meet a lot of people, and must forget a lot of people. I must forget lots of people, yet I do not think of myself as obliterating lots of minds from existence, and lots of people must forget me and I think little of it.”
Are you motivated to act by the people who you do know, or by those who are nothing to you?
Even if we believe that a reality exists externally to us I think it’s a bit of a stretch to say that we can be *motivated* by things that we have no knowledge/imaginings of.
“loads of people talk about observing their own thoughts, especially in the case of mindfulness, meditation etc. Observing my own thoughts has been a useful way to help mitigate an anxiety disorder. But if thoughts are experiences, and experiences can’t be observed, am I wrong about this?”
What happens when you observe your own thoughts?
@Vox Imperatoris – it’s not original to me. I think I got it from Mario Bunge, but a little googling suggests it’s not original to him either; Quentin Meillassoux seems to be a good candidate for the originator.
Interesting how imaginings have now suddenly entered the picture.
What happens when I observe my own thoughts: well, you’d be better off getting someone who’s better at this than I am to explain. I’m struck by how much of what goes on inside our heads is so fragmented and fleeting, and how inarticulate we are in the face of it; how I can be so unsure of what’s going on in there, and how I can be so comparatively sure about what’s going on with tables, chairs, the basics of other people, etc. I suppose that on attending to your own thoughts in the right way, it’s easier to avoid being motivated by them; to notice an anxious thought and let it fade rather than having that anxious thought be something you feel you have to act on.
Motivation is one of those bits of human psychology that’s complicated and messy; layer upon layer of things that were bodged together as best as possible given the time and resources available to fulfil some objective. Thus I tend to be dismissive towards neat little theories of human motivation that assert (a priori) that people can’t possibly be motivated by such-and-such a thing. Anyway, your theory of motivation should leave room for the notion of duty, and of doing the right thing because it’s the right thing to do. Highly abstract, of course, but definitely motivation.
Motivation – consider charity. Each month a standing order goes out of my account, for the benefit of people I will never meet; I can’t even say precisely how many. There are loads and loads of people like me. People worry about the good of generations yet to come. This is especially relevant to environmental concerns, be it climate change or Yucca mountain. Or concerns about accident prevention. Yeah, I think I have some fairly vague and fragmentary imaginings of the sorts of people who might be affected. Those imaginings are neither people nor minds, and don’t correspond neatly 1:1 to anything or anyone that might exist; I think there are rather more actual people relevant to my concerns than my imaginings of those people.
In my personal experience, fairly abstract matters are associated with “mental abstract art”: there are sound images of my thoughts, mental images of graphs. Apparently people who use sign language often think in mental images of hands making signs.
 In a loose sense of the term.
“I suppose that on attending to your own thoughts in the right way, it’s easier to avoid being motivated by them; to notice an anxious thought and let it fade rather than having that anxious thought be something you feel you have to act on.”
So it is a sense of awareness of what is occurring in your mind, and the idea that you are somehow separate from them?
” Thus I tend to be dismissive towards neat little theories of human motivation that assert (a priori) that people can’t possibly be motivated by such-and-such a thing. ”
But presumably you would accept that whatever it is that motivates us, it can’t literally be nothing.
“Anyway, you theory of motivation should leave room for the notion of duty, and of doing the right thing because it’s the right thing to do. Highly abstract, of course, but definitely motivation.”
Could we say that it isn’t the abstract concept that provides motivation, but how we feel about it? It isn’t Russell’s teapot as an abstract idea that makes me happy, it is the image of it beaming happy rays at me, and the fact that this makes it easier for me to internally generate happiness.
“Yeah, I think I have some fairly vague and fragmentary imaginings of the sorts of people who might be affected. Those imaginings are neither people nor minds, and don’t correspond neatly 1:1 to anything or anyone that might exist; I think there are rather more actual people relevant to my concerns than my imaginings of those people.”
So, perhaps the question is: are our dealings with people motivated by our feelings about an abstract concept (of the form of Russell’s teapot), are they simply a reaction to how looking at smiley faces makes us feel (presumably not in this case, if you give charity to people you never see), or are they motivated by imagining the internal life of others?
Perhaps different people have different ways of looking at others.
“So it is a sense of awareness of what is occurring in your mind, and the idea that you are somehow separate from them?”
So my spleen and my liver are both a part of me, and yet distinct from each other. “I am touching my keyboard” and “my hands are touching my keyboard” are both valid ways of referring to the same thing. So, likewise, “I observe my thoughts” and “My [part of me that I can’t name] observes my thoughts” both refer to the same thing. Yet I suppose there are degrees of identification with parts of myself; I identify more with my heart (as in, the actual literal blood pump, rather than a metaphor for emotions) than with my hair or fingernails, and more with my brain (again, actual literal brain, not a metaphor for intellect) than with my heart. Hair and fingernails are routinely trimmed, and heart transplants and artificial hearts are totally a thing – but if you say “brain transplant” then you’re talking about something identity-changing.
Motivation – I suppose it’s like perception, which is a bit of a bucket brigade. When I say I can see my coffee cup, you could talk about seeing light rays coming from my coffee cup, or an image on my retina, or signals going down my optic nerve, or … well, my knowledge of the neurology of vision gets murky at this point but I’m sure there are several different parts to it. Note how the parts I’m least clear on are the parts most inside me. If I’m pleased to see my coffee cup, it’s the cup itself that’s motivated this – it’s the cup itself which is important to imbibing coffee; all of the visual stuff is secondary, and I could drink coffee with my eyes shut once I know where my cup is.
So the details about how the news about other minds reaches the parts of us that can start to think about doing something (or refraining from doing something) are of secondary importance; the details of the chain of processes and events can vary; what’s important is what’s going on at the far end of the chain – that’s what we refer to when we talk of other minds.
Actually, come to think of it, the very trimmability of hair and fingernails means that they can be quite important for self-expression and self-image. So it’s more complicated than that – but hair and fingernails are still trimmable, and the loss of the far ends of my hair and fingernails is no great loss.
This gets complicated in the case of charity, where you do something in the hope of good happening, and the specific news about your specific actions is unlikely to get back to you.
“So the details about how the news about other minds reaches the parts of us that can start to think about doing something (or refraining from doing something) are of secondary importance; the details of the chain of processes and events can vary; what’s important is what’s going on at the far end of the chain – that’s what we refer to when we talk of other minds.”
Hmmmm… we get to choose the metaphysical position we wish to adopt. I could choose to believe that everyone else is a video game character without any feelings whatsoever. I could choose to believe that everyone else is the same as me. These beliefs are likely to have a strong effect on my behavior.
If both of these beliefs are consistent with “what is going on at the other end of the chain” ie, reality, then surely it isn’t that end of things that is important with regard to moral behavior?
If P and not-P, then Q.
we get to choose the metaphysical position we wish to adopt. – I’m not sure that we actually do, maybe we just get convinced of one position or another. But if we do choose, we run the risk of choosing wrongly. If we choose wrongly, we get beliefs inconsistent with reality. If we choose carelessly, then we run the risk of doing harm in a morally culpable manner.
The risk of choosing wrongly, given at best incomplete information, may strike us as unfair, but who said life had to be fair?
“If P and not-P, then Q.”
I’m sorry, I don’t understand this. If I placed either a red or blue ball inside a box, such that you couldn’t see it, and would never see it, and then told you that if the ball was red you must kill me, and if it is blue you must not kill me – in what sense could you be basing your decision to kill me on the color of the ball? You might decide to *say* the ball was red (and this wouldn’t be inconsistent with any information we receive), but presumably your motivation for believing such a thing would be that you didn’t like me very much and that this belief gave you a pretext for killing me. Alternatively, you might just like the color red and be rather indifferent as to whether I live or die, and therefore like to imagine the ball is red. Either way it’s your feeling that is important rather than the actual color of the ball (which you can never know).
It might be a contradiction (analytical falsehood?) for me to believe that the ball was *both* red and blue, but is it a contradiction to believe that it could be either?
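The “If P and not-P, then Q” line above is the principle of explosion (ex falso quodlibet): in classical logic, a pair of contradictory premises entails any proposition whatsoever. A minimal sketch in Lean (the names are arbitrary):

```lean
-- Principle of explosion: from P together with ¬P, any Q follows.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```

Note the principle only bites if you hold both premises at once; believing merely that the ball *could* be either color commits you to no contradiction.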
“I’m not sure that we actually do, maybe we just get convinced of one position or another.”
I’m convinced on the basis of which ideas appeal to me – the brute facts of external reality certainly play a role in some of my beliefs, but there are other things that seem to me to be important, but not amenable to the scientific method (such as the consciousness of others.)
Either a belief that the ball is blue is inconsistent with reality, or a belief that the ball is red is inconsistent with reality. Both may be consistent with the portion of reality you’re able to observe.
Note that I’m carefully avoiding saying “external reality” – what’s external to you may be internal to me and vice versa.
One of the big philosophical debates seems to have been about the possibility or otherwise of a priori knowledge; in particular synthetic a priori (we’re back to Kant again…) seems to have been a particular question. I’ve mentioned Occam’s Razor and the Copernican principle; these seem to be a priori.
An observation is that attempts to get a priori knowledge with certainty don’t seem to work; one key part of this is the ever-present possibility of error. The theoretical infallibility of mathematics does you no good when you write down a “2” and read a “z” (this actually happened to me; it took me days to figure out why things were going weird). Kant had a scheme for getting “certainty” that involved some alarming metaphysical commitments, and some deeply alarming semantic tricks. That “certainty” included the “certainty” that reality conformed to Euclidean geometry (apparently he got that from Descartes). Of course he was writing before the discovery of non-Euclidean geometry and before Einstein, so in a sense Kant got unlucky – but the price worth paying for pseudo-certainty is surely less than the price worth paying for certainty (i.e. the sort where you’re actually right, rather than merely thinking you’re right), and Kant paid the higher price for the lesser good.
I’ve played around with the idea of “a priori error” before; that there are ideas that we have a predisposition to believe that are false. My line of work (which involves machine learning) from time to time leaves me pondering and reading up on the nature of induction, and has let me find out about inductive bias (there’s no induction without inductive bias), which sort-of looks like a priori knowledge if you squint.
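The “no induction without inductive bias” point can be made concrete with a toy sketch (my own illustration, not anything from the thread or from real ML practice): two learners that agree perfectly on the observed data can still generalize differently, because each brings a different built-in assumption.

```python
# Toy illustration: same data, different inductive biases, different predictions.
data = {0: 0, 1: 1, 2: 2}  # observed (x -> y) pairs

def linear_learner(x):
    # Bias: assume the pattern is linear, y = x.
    return x

def bounded_learner(x):
    # Bias: assume y never exceeds the largest value observed so far.
    return min(x, max(data.values()))

# Both learners fit the observations exactly...
assert all(linear_learner(x) == y for x, y in data.items())
assert all(bounded_learner(x) == y for x, y in data.items())

# ...but they disagree off the data; some bias had to break the tie.
print(linear_learner(10), bounded_learner(10))  # 10 vs 2
```

Neither learner is “wrong” on the evidence; the data alone cannot distinguish them, which is the sense in which induction can’t get off the ground without some bias.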
I mean, I don’t buy the classic rationalist project of finding nice neat little propositions you can write down with certainty just by racking your brain for innate knowledge, and using those to derive everything else. But I do think we come with predispositions to believe some things – I find it hard to get away from that, and we can’t help but use at least some of those predispositions at least somewhat. Even if we may profess blank skepticism, our actions belie us.
Thought for the day: Moore’s paradox. There’d be something deeply odd about me saying “It’s raining but I don’t think it’s raining”, but you could say “It’s raining but Peter doesn’t think it’s raining” and it wouldn’t be weird in the same way at all, despite both utterances standing for the same proposition (or at a later time I could say “It was raining but I didn’t think it was raining” – again, same proposition, but not weird). I can’t put my finger on why, but it’s something that’s kept feeling relevant during this exchange. Something to do with the possibility of error, I think.
 Now, how to use it responsibly, now there’s a question…
Well, yes. For me, the idea that the form of observed reality must be mind-dependent seems entirely correct – but at the same time the fact that seemingly fundamental aspects of reality are counter-intuitive seems to undermine this claim. Perhaps Kant was right that the form of reality is mind-dependent and that the details are not – but wrong to think that the relation between two objects moving in a straight line was a form rather than a detail.
(As far as I’m concerned, if something is a form of reality, we shouldn’t really be able to be surprised by it – it should be obvious to us whatever the intellectual firepower at our disposal, or our state of knowledge of the world.)
Whether we can then use the (common) knowledge of these forms to develop novel insights is another question.
(I think I have worked out one necessary feature of reality – the subjective experience of time is always the same for all possible minds. That is a necessarily shared experience.
I’m just not sure whether that has any implications, or can lead to any further insights.)
Whereas, for me, the possibility of surprise is not just a feature of reality, it’s getting on for being the key feature. I’m tempted to say “reality is that which can bite you in the back when you’re not looking”.
Objectivity isn’t universality. You can have objective facts that are local, like the acceleration due to gravity.
> I think that something that is true for any possible mind, will be a tautology.
I claim that it is impossible to get “shoulds”, values or morality out of pure logic without any input from outside of logic.
The closest you can come is the logic of cooperation which comes from game theory. The problem is that there are multiple solutions to building a cooperative society.
(if bees were as intelligent as us, we could not use logic to persuade them away from their solution where individual bees lack the independence and moral value of individual humans)
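The “multiple solutions” point can be sketched with a minimal coordination game (the payoff numbers are made up for illustration): a brute-force check finds two distinct pure-strategy Nash equilibria, i.e. two different self-enforcing “cooperative societies” that game theory alone cannot choose between.

```python
# A 2x2 coordination game: payoff[(row_move, col_move)] = (row payoff, col payoff).
payoff = {
    ("A", "A"): (3, 3),  # everyone follows convention A
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (2, 2),  # everyone follows convention B
}
moves = ["A", "B"]

def is_nash(r, c):
    # Pure-strategy Nash equilibrium: neither player gains by deviating unilaterally.
    row_ok = all(payoff[(r, c)][0] >= payoff[(alt, c)][0] for alt in moves)
    col_ok = all(payoff[(r, c)][1] >= payoff[(r, alt)][1] for alt in moves)
    return row_ok and col_ok

equilibria = [(r, c) for r in moves for c in moves if is_nash(r, c)]
print(equilibria)  # [('A', 'A'), ('B', 'B')]: two stable conventions
```

Both conventions are stable once reached, even though one has lower payoffs – so the bee society and the human society can each be “rational” from the inside.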
I tentatively agree… there might be some universal aspect of experience or thought from which you could derive a rule for ethics that are always correct for any mind, but I certainly can’t think of it.
The only truly universal aspects of an intelligent agent are those that are logically necessary for intelligence in the sense of map-territory rationality. You definitely can’t get morality, in the western 21st century human sense of that term, from map-territory rationality (having a model of the world which broadly attempts to be accurate).
Of course there are general principles like the golden rule (respect the utility functions of other agents), but the golden rule is most certainly *not* a tautology. Rather, it is a natural equilibrium into which a large homogeneous group of social agents can fall.
If an alien somehow developed intelligence without having any peers, it seems likely that it wouldn’t even bother to think of the “golden rule” until it met social agents like humans. But you can be damn sure it would invent the halting problem and prime numbers by itself.
“But you can be damn sure it would invent the halting problem and prime numbers by itself.”
I’m living proof that that isn’t true.
Objectivity can imply that agents out of contact with each other can converge on the same solutions, or that something is mind-independent. The two definitions are hard to bring together when considering morality, because mind-independent morality makes so little sense.
Societies and singletons out of contact with each other would simply not converge on the exact same morality.
However, certain common processes would lead to a landscape of more heavily occupied peaks in morality-space.
“Societies and singletons out of contact with each other would simply not converge on the exact same morality.” Even under exactly the same circumstances? Why?
Modern Russia is far from a liberal utopia, even though large parts of it are just as cold and lifeless as Sweden is. Cool and temperate Germany produced one of the most violent, purity-obsessed, and ethnocentric regimes in history well after the invention of modern medicine. Even sticking to Scandinavia, I’m not sure if I’d describe the cultures that existed there in the Middle Ages as “liberal” or “tolerant,” even in comparison to their contemporaries.
There’s also the fact that Alaska consistently votes for the Republican party in US federal elections, whereas Hawaii consistently votes for the Democratic party.
Counterexamples: Prince Dipendra, Caligula, Uday Hussein, Harold Shipman, too many historical monarchs and nobles to count…
The poor person sells his principles for a dollar; the rich person holds fast until the temptation becomes absolutely overwhelming.
No, that’s just haggling over the price. The person who would steal for a million is just as much a thief as the person who steals for a dollar.
As in the anecdote attributed to everyone from Shaw to Churchill, which I will reproduce as a poor précis:
[Famous Guy] is at a dinner party. A lady guest is making slightly flirtatious conversation with [Famous Guy].
[Famous Guy]: Tell me, madam, would you sleep with me for a thousand/hundred thousand/million (the amount varies in the telling) pounds?
Lady Guest: *coyly giggling* Oh, Mr [Famous Guy]!
[Famous Guy]: Would you sleep with me for one pound/five/ten pounds?
Lady Guest: *offended* Mr [Famous Guy]! What do you take me for?
[Famous Guy]: Oh, we’ve already settled that. We’re just haggling over the price now.
The man who stabs a guy because he roofied and abused his sister is as much a murderer as the one who stabs a guy because he spilled his beer, but I know which I’d rather have over for dinner.
What about moral rules against deception? I guess their universality could be argued for more easily than that of any of the values named here. After all, you talk to yourself pretty much like you talk to others, and if you don’t hesitate to deceive yourself, that has to cause problems for fairly fundamental reasons?
Interesting theory about an increase in sanitation resulting in a decrease in purity-obsessed values. In some sense, this is opposite to a proposed mechanism behind the correlation between good sanitation and allergies.
I’m in favor of more Socratic-dialogue-style posts like this one.
In contrast to fashion, where trends come, go, and reemerge, there do seem to be clear moral trends: once you realize slavery is bad, for example, you never go back. There are probably historical examples of morality going backwards, often but not always in response to worsened material circumstances (patriarchy seems to strengthen as wealth increases for most of history, which, depending on your viewpoint, is probably bad). But in fashion I don’t think there are really clear trends you can point to which don’t completely reverse themselves at various points: right now we’d say the trend is for clothing to become simpler and less modest, but for a long time it was to become fancier and more modest.
I think we’ve actually banned and reinvented slavery a number of times. Back in the Middle Ages, the Catholic church managed to ban enslaving other Christians, which in practice amounted to banning slavery*. But then we got more mobile, and had more contact with non-Christian, primitive societies, and so recreated slavery on a mass scale, this time with the racial/chattel components.
Eventually we did away with that. But then we invented Communism, which is slavery on an even more massive scale, and without even the profit incentive that an owner might have to take care of his slaves. This form of slavery persists in N Korea, and most citizens of former communist countries still aren’t exactly free.
*But then again, is serfdom slavery? It seems like it belongs on the scale.
An interesting point, though it could also point to the idea that we are, in fact, slowly but painfully, converging on better ethics by continually adding to the list of no-nos even as something in our nature keeps trying to sneak them in the back door.
Most of the problems with social justice, for example, seem basically to be intolerance sneaking in a back door labelled “don’t tolerate intolerance!” But fortunately we have people like Scott to point this out to us and hopefully eventually move closer to real tolerance.
I think it’s somewhat self-evident that moralities are the result of gene-culture evolution under selection pressure and competition. Liberal values are the result of the easing of formerly dominant selection pressures (war, famine, poverty, disease, etc.) and the addition of new pressures (free markets and free love, etc.).
Of course, having a serious, in-depth conversation on the whats and hows is a lot of work.
Forgive me if this was already pointed out, but I would argue that good roads cause a decrease of friction in trade, and therefore roads cause wealth. Savvy societies have invested in wheels and roads for a long time. Yes, there may be a minimum threshold of wealth required before roads are constructed, but I would submit that roads and wealth correlate the other way.
Does anyone else really dislike this snarky back-and-forth dialog format? I hate it when Popehat does it and I didn’t like it here either. It seems to me to add a lot of extra text that has very little substance.
Yeah, I pretty much agree.
I think it can be done better (and I think Scott has used it more effectively in the past), but it is often just a method of setting up straw men, or at least making an appeal to ridicule.
And I don’t even think Scott did that here. It was just…superfluous.
I found that when I was trying to write this, it turned into an “of course, you could argue this, but then you could counter-argue that. And that might be wrong because of this, but then you would have to say THIS” at which point I figured I might as well make it official.
It would be nice to see it written in a more Aumann-like way, where the points of view change like a random walk (as the other Scott A recently explained). Assuming your participants are following your mind.
I like it for a light, informal discussion of things I don’t intend to take too seriously. It would have to be done with a great deal of care for more serious subjects, though I think Scott could probably pull it off.
I agree. Dialogues are only annoying when one position is a straw man (e.g., Simplicio in Galileo’s dialogues), and Scott didn’t seem to be doing that here.
It’s very useful as a writing exercise to make sure you’re being as fair to the opposing side as possible.
Of course, it’s much more commonly used as an excuse to set up cheap strawmen to show off how obviously superior someone who agrees with the author is, and this has kind of tarnished its reputation as a rhetorical device somewhat.
If the stated moral values represent an Objective Truth, then they must have existed prior to the genesis of the species Homo Sapiens, and also would have existed had life not come into being on planet Earth. And should future sentient AI entities come into existence, then presumably they would also be influenced by this imperative.
Seems like overweening hubris to assert this degree of presumption given the sample size and short galactic time interval.
TL;DR: Increased liberalism results from two entirely different sources: 1. from cultural rationalization of locally optimal behavior in conditions of increased wealth (you have more to lose through violent conflict than to win, so you optimize how to get along), and 2. from a particular series of historical developments with many meaningful bifurcation points, which may have led to an entirely accidental and temporary strengthening of liberalism in the West, but could also have ended in fascism and still might.
I have a strong suspicion that separate factors are conflated in the discussion. Some values are rationalizations of optimized local utility, for instance, a nomadic tribe passing through the lands of a sessile tribe might not have strong prescriptions against theft, and a society where the poverty line is high enough to make the risk/benefit ratio unfavorable to small theft will have universal values against it, whereas a society with low poverty lines will encourage draconian punishments to shift the risk/benefit ratio.
But there is another source of values, which stems from social drives that evolution has probably installed as hooks to make individuals conform to the imperatives of group competition. It may simply work because adopting the right set of beliefs enables the proper social signaling to tell likely defectors from likely cooperators. If you have individuals that attempt to form their world models independently (nerds) competing with those that do so to optimize their signaling (jocks), the latter ones likely end up with more reproductive success. As a result, belief systems may become mind viruses, colonizing big populations of brains as state-building memeplexes. Populations that are in the thrall of competing memeplexes may start to reorganize and aggressively compete with each other, triggering a second-level evolution. Part of this evolution may be Lamarckian, because religions and ideologies can be designed for the purposes of making populations fitter for competition. For instance, we may see a rise in militant Islam because there is no communist ideology left to unify the resistance in the Western resource colonies.
Catholicism was certainly instrumental in optimizing Feudalism, but religious doctrines that cemented the rule of feudalist aristocracies became a hindrance at the brink of the industrial revolution, so the rationalist traditions of the Enlightenment were invented to immunize minds against the thrall of religions. Rationalism turned out to be incompatible with innate normative needs (basically, transcendence), which gave rise to romanticism and eventually humanism (injecting rationalism with the idea of human dignity). Humanism went on to become the dominant ideology among intellectuals in Middle and Eastern Europe, but it is not without alternatives. Humanism begets the illusion that the most rational world is also the most just and most kind one, but as the European Jews discovered, it makes alternatives unthinkable only for the humanists themselves. Rationalism without the fictions of human rights is entirely compatible with fascism. Humanists simply could not comprehend the inhumane rationality of Fascism and Stalinism as they happened in front of their eyes. The rise of liberalism after WW2 may well have been the result of a stalemate between competing power blocs, but that stalemate may have ended now. With increasing resource scarcity and food instability due to climate shift and weather instability, we may find that variants of fascism become the most successful memeplexes again.
Humanists simply could not comprehend the inhumane rationality of Fascism and Stalinism as they happened in front of their eyes.
This is just labeling of Fascism. They were not “inhumane”. The reason Hitler and Mussolini rose to power was precisely how humane and appealing their ideologies seemed. Or do you think the majority of the population of Germany and Italy at that time were alien inhuman monsters, and the “good” guys (AMERICA FCK YEA!) killed all the “nazis”, and that’s why Germany is such a happy place now!
And if liberal ideology is “humane”, that is precisely the ideology at the root of communist thinking. “Everything for the benefit of Man” – that is a real Soviet slogan. There was even a joke about it (“I went to the Red Square parade and saw this Man!”)
“Human” and “humanist” are very different things. You can kill your prisoner humanely, but not humanistically. The fascists were not all inhumane, but they were not humanist, i.e. the concept of human dignity did not reign supreme. In a non-humanist world, it becomes ok to euthanize the weak, the aberrant, the resistance.
Hitler rose to power because the economy was in shambles, and the centrist parties had no recipe to reduce inequality. Hitler did: he taxed the rich (few) and created well-paid employment for the poor (many). This made him very popular with those people that did not find themselves in concentration camps. Fascism is anti-liberal.
You are also mixing up communist, liberal, and Stalinist. Liberalism is originally the mindset of the Enlightenment. It attempts to liberate the individual from ideologies, oppression, and exploitation. In liberal thinking, individual freedom should be unconstrained, except where it interferes with the freedom of others. If we don’t want to exert more force than necessary to reduce that interference, we need people to act as responsibly as possible, which means that we need to encourage autonomy, rationality, and individual responsibility. You want to raise people to become responsible citizens. Ideological indoctrination of any kind reduces the inner freedom of the individual, and is thus not compatible with liberalism.
Communism is an attempt to remove exploitation and oppression by creating a classless society, and implement governance by participatory democracy (the US does not have a participatory democracy but a representative one). Communism limits the right to own factories etc. If economical instruments become too large and powerful, they need to be communally owned. Communism needs to constrain property rights to prevent power imbalances. Liberalism does not.
Communism is an ideal, there was never a society that called itself communist. Communist parties are those that attempt to introduce it, in a slow or rapid revolutionary process.
In the view of communist parties, it may be necessary to strongly indoctrinate people with the right mindset, to make them conform to the ideals of the society. This indoctrination would be incompatible with liberalism. But in principle, liberalism is compatible with socialism, communism and capitalism, because it is agnostic with respect to economic organization and administrative modes.
Scott might be a classic liberal.
Stalinism is a perversion of the communist ideal; it used the lofty goal of creating an ideal society to justify whatever atrocity seemed to be desirable to the dictators. Of course, everything in a communist society happens to the benefit of man. In Stalinism, this became a perverse slogan. Stalinism is anti-liberal.
Huemer seems like just another guy trying to label his personal preferences as The Objective Morality. Eliezer Yudkowsky is guilty of this too.
Personally I don’t think this subject is very interesting as all the relevant questions have already been dissolved in my mind.
There is a lot more history of moral change than just the last few hundred years. Recent change is a popular topic because it is still contentious, but that is exactly why it is a poor starting point. Also, globalization probably means that recent change is subject to greater common cause.
If you want to know whether thought leads to moral progress, you should study groups of thinkers that are isolated from each other. Maybe the Greeks and the Chinese. Similarly, if you want to know about wealth, or irrigation. Rome is wealthier and less thoughtful than Greece, perhaps an opportunity for distinguishing the two; perhaps the same with Chinese dynasties (a parallel in Song vs Han?). Many people claim that the Axial Age was convergent moral progress (whether from thought or wealth or density). However, others suggest that the common cause was Zoroaster. But he was just a starting point and different thinkers elaborated in different directions.
*shrug* Maybe Care/Harm really is just the fundamental moral foundation, and the others are epiphenomena to be abandoned as we outgrow them. How does that saying go? – “The last enemy to be destroyed is submaximal global utility; destroying Death just buys us more time.”
I’ve largely concluded this. Everything else is just a heuristic/hack that gets you there, or is useful for other reasons (Disgust preventing disease).
Except, how does this square with values being complex? Might values get more complex as we encounter things like wireheading?
One day, the LessWrong crowd will admit that wireheading is actually great. Scott basically admits this with his “Lotus God” thing.
The problem is that people confuse self-destructive wireheading—and perhaps wireheading that only causes compulsive behavior and not even pleasure—with a true “experience machine” that produces maximum bliss.
When I think about this sometimes I confuse myself, because the true bliss machine is simultaneously the least convenient world (with respect to the argument) and also the most convenient world (with respect to the counterfactual-world’s inhabitants).
The true bliss machine is a no-brainer in theory: it’s basically the same as heaven.
The problem is knowing in practice whether something really is a true bliss machine. You wouldn’t want to get plugged in, only to find that it is super-cocaine or some kind of Twilight Zone casino where you never lose.
Maybe a year ago, I had a brief exchange about the true bliss machine with a friend of a friend. He asked “If you had the choice to be put into a machine which guaranteed a life of maximum happiness, would you take it?” I responded “it depends, but I generally lean towards yes.” His own objection was “but the experiences in the machine aren’t really real. What about everyone else you leave behind?” (And then I started to mumble something about Bodhisattvas, but our conversation was cut short prematurely.)
One difference between Heaven and the Bliss Machine is that Heaven is assumed to be “real” (whatever that means) while the Bliss Machine is “merely” a simulation. I think this is kinda what EY was driving at in failed utopia 42.
>One difference between Heaven and the Bliss Machine is that Heaven is assumed to be “real” (whatever that means) while the Bliss Machine is “merely” a simulation.
Compare a situation where I live in bliss with my wife forever, happy and at peace, with one where I live in a simulation with a simulation of my wife, but my wife is actually suffering.
The difference is that in one scenario, someone is still getting hurt, and that’s *bad*, and far-away hypothetical aside, for practical day-to-day human living, the downside of giving into escapist delusions is that you can’t address real harms being done to other people.
Now, of course, if you don’t care about other people, jump in the lotus machine all day. That sounds harsh, but I’ve thought about this a lot and think that the big downside of delusion is it allows for others to be hurt. It’s not enough to have manufactured others when the _real_ others are being harmed. (This is probably the same reason I’d probably be against star-trek-style transporters.)
@Brad (the other one)
>(This is probably the same reason I’d probably be against star-trek-style transporters.)
But I don’t think there is any suffering there. In a transporter a copy of you is created quickly and your original copy is vaporized or something. I don’t think your copy suffers by being vaporized.
This naturally leads into the Bodhisattva line (that I never got to). There are lots of ways the thought experiment could go Faustian irl. But in the most convenient world (for me at least), everyone is allowed to enter their own machine, and things like poverty and disease have been eradicated in “reality”. Freed from real-world responsibilities and commitments, is the Bliss Machine still desirable?
The first contention is what Vox Imperatoris mentioned: “Is the Bliss Machine really just a heroin IV?” But I think the second contention is deeper and more interesting: Is the gut-aversion to the Bliss Machine contingent? or is our aversion founded on some inviolable ethical principle of “aversion to the illusory”.
My brain disagrees with itself. My system 2 says “In the most convenient world, I see nothing immediately wrong with the Bliss Machine.” But my system 1 says “Something still doesn’t feel right about this. I don’t know what it is exactly. It might or might not have anything to do with aversion to the illusory.”
Agreed. “The complexity of human value” and the memes it spawned are among my biggest problems with the standard LW philosophy.
Wireheading exposes issues with consciousness as part of morality. If your values are simply to maximize conscious experience, then you should tile the universe with copies of a small number of brain types experiencing bliss (since other brains will be less efficient at generating bliss). If I decompose a brain into just its pleasure center, is it still a brain / still conscious? If I just simulate a moment of sheer bliss, then is it really conscious?
1) I don’t aim to “maximize conscious experience” in general, and I don’t know why anyone would want to do this.
2) I’m not a materialist, and I don’t believe that a simulated brain would be conscious (unless it somehow also had a mind, in which case the mind would be conscious).
3) I want to maximize my own happiness, which I define as whatever psychological state(s) are most pleasing and fulfilling to me, such that if I had them I wouldn’t lack for anything.
Under the pathogen -> conservatism theory, the Middle East has the kind of culture that you would expect from a high-pathogen area… but are there really that many pathogens there? Maybe it had lots of pathogens before desertification? It was the cradle of civilization, so I imagine it was quite fertile pre-desertification. (BTW, I also heard a theory that desertification was a result of overgrazing by human-managed livestock.)
How much of the pathogen -> conservatism theory could simply be temperature? Maybe people in colder climates evolved to be friendly towards everyone because it’s much more difficult to survive on your own in a colder climate? I remember getting the impression that people whose ancestors came from colder climates are much more likely to commit suicide, which could be an adaptive response to resource scarcity in a cold climate (improving one’s inclusive genetic fitness). When I get suicidal thoughts, they tend to take the form of me thinking that I am not doing my part, not pulling my weight, etc. and others would be better off if I was gone (and my ancestors came from a pretty cold climate). (I also think internalizing this explanation of why I get suicidal and convincing myself that my community is very rich and I am not actually a significant resource drain has been pretty helpful.)
Speaking as a man, I also get suicidal thoughts when I start thinking that women will never love me, which also makes sense from an inclusive genetic fitness perspective: if I’m not going to reproduce, might as well stop sucking up resources from my family. BTW I wonder if the number of siblings one has would be an input here… I’m not an only child.
The Vikings were hardly “friendly towards everyone.” All of these theories about how modern Scandinavian liberalness/tolerance/peacefulness/whatever is a result of living in a cold climate seem to conspicuously ignore the fact that their ancestors were for many centuries possibly the single most fearsome and warlike people in Europe.
There is also plenty of archaeological evidence of warfare among Eskimos and Alaskan Natives.
It’s much more complicated than just “Vikings are violent.” The exact same guy who would burn your house down, then rape you and either kill you or sell you into slavery, would take you in as a guest for a couple of days if you showed up on his doorstep cold and hungry in the winter.
We got jury trials and voting by way of the Danelaw as well.
I’ve heard the theory that the geographic zone in which a culture resides affects its sense of time. In tropical zones, there’s a dry season and a monsoon season. The cut-off is super obvious, so agriculture doesn’t require a lot of planning. But temperate zones experience the four seasons. The change in seasons is subtle, so farming there requires planning with a farmer’s almanac. As a consequence, temperate-zone cultures are more future-oriented while tropical-zone cultures are more present-oriented.
Obligatory YouTube presentation. I give it points for style if nothing else.
To me, the significant issue is the relative payoff in terms of prepping for different seasons.
In Norway, it is cold, snowy, and ice-bound a lot of the year. And there is a relatively low burden of lumber-eating insects. In this environment, it makes total sense to have a stack of lumber to last 2–3 winters, just in case. You can also spread the harvesting of construction materials over several years, and so slowly build a quality structure. You can also easily store a lot of food – and you must, because there will be times of the year when nothing grows.
In (say) Honduras, the environment is largely moderate, and there are many many things that eat wood. Putting up more than a few days to a week of firewood is useless and a waste of energy, because it is just termite fodder. And it’s not just the lumber that rots – so does meat. And it’s nearly always planting season for something.
One environment rewards heavy pre-planning, the other penalizes it.
“How much of the pathogen -> conservatism theory could simply be temperature? Maybe people in colder climates evolved to be friendly towards everyone because it’s much more difficult to survive on your own in a colder climate?”
I’d expect it to work the other way: it’s much more difficult to survive, so you’d need to prioritise more to help yourself and your in-group, and tell any out-group members to sod off.
“One of the big shifts was from the medieval system of ‘mostly super-well-trained professional warriors ie knights matter in projecting military force’ to ‘any warm body with a gun matters’.”
Pretty soon it will be “only the most sophisticated combat bots matter.” Where will that leave “warm bodies” and “liberal values”? I would say the temptation for those in control of those bots will be high.
Combat bots and drones will matter in near future wars about as much as America’s tanks and jets mattered in Iraq and as much as Britain’s tanks and jets mattered in Northern Ireland. Which is to say, very little if at all. All of the fancy military hardware in the world won’t help one bit if the other side can smuggle a bomb into the hotel where your head of state is staying or make certain cities or neighborhoods too dangerous for your security forces to enforce your rule there.
The only conceivable technology that could significantly shift the balance of power away from insurgencies would be some kind of super sensitive “artificial nose” capable of reliably detecting explosives from great distances, which was also much cheaper than a bomb sniffing dog.
“All of the fancy military hardware in the world won’t help one bit if the other side can smuggle a bomb into the hotel where your head of state is staying or make certain cities or neighborhoods too dangerous for your security forces to enforce your rule there.”
You overestimate the actual efficiency of guerrilla/terrorist tactics when those who are affected are not the ones in control of the bots. Insurgencies are actually pretty trivial to suppress if there is a will to do so (for example, hostage-taking is pretty effective – this is what Britain did in the Boer War, the Nazis in Russia, and both Reds and Whites during the civil war).
And yet the Nazis were unable to suppress insurgencies in Yugoslavia, Poland, and France, among other places. Nor was the extremely ruthless Assad regime able to prevent insurgencies from taking over more than 60% of his country.
I thought exposure to Moldbug et al gave you the perspective to at least consider whether it is all about what groups have prestige and power. I mean, you are being a bit too charitable here, discussing whether civilizations move towards liberal morality because it is correct or because of wealth. In both cases you assume they just move on their own, as if by natural law, without intent and purpose. You are missing the perspective that they don’t move on their own, they GET MOVED by powerful and prestigious groups of people, with full intent and purpose.
You see something like “evolution” or “market” where “intervention” or “politics and power” or “prospiracy” would be more appropriate.
And your tie example would have been perfect for this. Ties became universal because everybody likes to imitate the powerful and the prestigious. The powerful and prestigious began to wear these less ornate clothes because in 19th-century nationalism imitating military uniforms was cool, and because an extremely prestigious dandy used simplicity to counter-signal: https://en.wikipedia.org/wiki/History_of_suits#Regency
It is not by wealth and not by correctness. It is because everybody wanted to be like Beau Brummell, because he had extremely high prestige, because he had a huge force of personality, basically inborn charisma. It was moved by a small elite group. It did not move on its own. And low-status non-Westerners wanted to be like high-status Westerners, and so on. (Example: Eastern Europe is full of businesses with English names. Except if it is food or fashion, then French. If you start an upmarket restaurant there or a men’s clothing line, just call it André or Henri. I wish I was joking.)
The funny thing is that you often tend to agree with this perspective, to an extent – how signalling plays a role, Zebra Stripes of status and all that. But somehow you don’t even consider the perspective that it is not a natural evolution but an intentional change pressed by high-status groups.
Of course, one level of abstraction further, it is perhaps a natural evolution of wealth that military generals lost prestige and professors gained prestige, and thus we get the kind of ideas pressed by professors, not generals.
I love quoting Orwell because despite his impeccably left-wing credentials he saw these things so clearly. The Road To Wigan Pier (full text: http://gutenberg.net.au/ebooks02/0200391.txt)
“The first thing that must strike any outside observer is that Socialism, in its developed form is a theory confined entirely to the middle classes. The typical Socialist is not, as tremulous old ladies imagine, a ferocious-looking working man with greasy overalls and a raucous voice. He is either a youthful snob-Bolshevik who in five years’ time will quite probably have made a wealthy marriage and been converted to Roman Catholicism; or, still more typically, a prim little man with a white-collar job, usually a secret teetotaller and often with vegetarian leanings, with a history of Nonconformity behind him, and, above all, with a social position which he has no intention of forfeiting. This last type is surprisingly common in Socialist parties of every shade; it has perhaps been taken over _en bloc_ from the old Liberal Party. (…) For instance, I have here a prospectus from another summer school which states its terms per week and then asks me to say ‘whether my diet is ordinary or vegetarian’. They take it for granted, you see, that it is necessary to ask this question. This kind of thing is by itself sufficient to alienate plenty of decent people. And their instinct is perfectly sound, for the food-crank is by definition a person willing to cut himself off from human society in hopes of adding five years on to the life of his carcase; that is, a person out of touch with common humanity. (…) The truth is that, to many people calling themselves Socialists, revolution does not mean a movement of the masses with which they hope to associate themselves; it means a set of reforms which ‘we’, the clever ones, are going to impose upon ‘them’, the Lower Orders.”
It is just so obviously a status game. But if status games are usually about imitating those who are the highest status, that is precisely the leverage through which small high status groups can intentionally change civs.
Also, perhaps it is not good form to copy-paste long quotes into a comment and thus make it a huge wall of text. I will refrain from this in the future, I just thought this one case is as acceptable exception as Orwell made the point just so perfectly.
“You are missing the perspective that they don’t move on their own, they GET MOVED by powerful and prestigious groups of people, with full intent and purpose.”
OK. And that’s bad? Why?
That is a different question than the original one, namely what explains this.
Scott’s question is not so much “what explains this” as “whether there are objective moral truths”.
Explaining increasing liberalism as the result of the prestige of liberal intellectuals doesn’t really explain anything anyway… why is it the liberal intellectual who is prestigious, and not someone else?
It’s great if you’re the mover. Not so great if you’re the movee.
Just ask the Yazidi.
You wrote more than you quoted, so I don’t think that’s a problem.
You should use blockquote, though. It makes long quotes much easier to parse.
My personal theory is that personal and cultural (but not the objective platonic ideal/spherical cow type) morality results from a mechanic similar to conservation of risk. Higher standards of morality (such as what our fairly recent ancestors followed) allowed for civilization to function in conditions of low safety.
Sort of like programming conventions in an unsafe language – by following the suggestions, at your personal cost in discipline and extra effort, you gain a reduced chance of your program segfaulting for unclear reasons. In a safe(r) language, however, you get many safeguards built in, so following the safety conventions – while still offering an advantage – is less of a relative advantage than in an unsafe language. So a new programmer brought up on a fare of safe languages, without excellent reasons to do otherwise, will eschew as many effort-expensive conventions as he can while maintaining acceptable levels of reliability from his program. His program could be better than it is if he took pains to follow safety conventions, but if it works 95 times out of 100, that’s good enough, right?
This is analogous to how technology increases the physical safety of dangerous behaviour. Since formerly dangerous behaviour is now not quite so dangerous, more people are inclined to engage in it, because they judge the risk acceptable. A fairly major problem with that is the perception of riskiness versus actual riskiness. For example, a lot of women appear to think fertility technology is much more advanced than it really is, and get a rude shock when it turns out that near-menopause childbearing is both dangerous and not anywhere near guaranteed.
“I think it’s more symmetrical than that. A lot of modern values would disappear if we stopped facing modern problems. We worry a lot about racial sensitivity, but if we ever got a society where racism was as thoroughly neutralized as syphilis, we’d probably drop that value pretty quickly too. If we ever totally conquer poverty, so that everyone’s got more than enough, maybe we’ll even stop worrying about compassion and fairness. Likewise, a lot of the democratic values – freedom of speech, freedom from slavery, equality, etc – are based on most countries being democracies which in turn is based on the historical situation. One of the big shifts was from the medieval system of “mostly super-well-trained professional warriors ie knights matter in projecting military force” to “any warm body with a gun matters”. That gave the common people a new level of power and probably led to democracy and the democratic virtues of equality and freedom. Likewise, technology has connected the world to the degree where different races and cultures and ideas are frantically mixing and mutating, making things like tolerance and freedom of thought much more relevant.”
Are you serious? Poverty and racism are as gone in the West as STIs are, while mass manpower armies have been irrelevant since the 1970s at the latest. George Orwell wrote an essay just a couple of weeks after the Hiroshima bombing saying that all that was over – http://orwell.ru/library/articles/ABomb/english/e_abomb – so how come we’re not just not living in the world of 1984, but in a world that looks even less like it than Orwell’s did?
Note that the places that have tons of STIs, poverty, and racism, but also conservative values have been massively out-breeding the West for some time and are presently colonising it. Yes the West has higher living standards but that doesn’t matter if it isn’t translated into higher fertility or greater application of military force. The Romans of 410 AD probably looked down their noses at the Visigoths too.
I don’t have any grand theory or any strong prediction, but there’s a good chance that non-conservative values are simply maladaptive and have a short half-life. The West certainly looks strong now but that is the legacy of more conservative times and a lot of luck originating the industrial revolution. The West will probably be culturally and genealogically gone within a hundred years unless current trends change. The liberal urban elite will hollow out more quickly than the conservative rural fringe.
Now no doubt current trends will in fact change in the next century, hence I make no strong prediction, but the current trajectory is not inevitable conquest, it is doom.
“Note that the places that have tons of STIs, poverty, and racism, but also conservative values have been massively out-breeding the West for some time and are presently colonising it. Yes the West has higher living standards but that doesn’t matter if it isn’t translated into higher fertility or greater application of military force. ”
Higher GDP can be translated into greater military force through technology.
“I don’t have any grand theory or any strong prediction, but there’s a good chance that non-conservative values are simply maladaptive and have a short half-life.”
If you assume that immigrants retain 100% of their values, then you might have a scenario where they dilute the native culture. If you make the opposite assumption, that they assimilate, then they make up for the West’s low native fecundity, and allow the West to continue. I am not sure which assumption is more correct, but I note that people who make this kind of argument hardly ever justify the assumptions behind it.
Hmm. What about 50/50?
Suppose that 50% of the colonists each generation are unassimilable to the native way of life. Suppose that unassimilated colonists have a fertility of 6.0 (tripling every generation; like, say, Amish or Orthodox Jews, or pious Muslims, etc.), and natives and assimilated colonists have a fertility of 1.0 (halving every generation; like non-religious western Europeans). Suppose that the initial population of colonists is 50 and the initial population of the natives is 1000.
Generation 0:
Unassimilated colonists: 50
Assimilated colonists: 0
Natives: 1000

Generation 1:
Unassimilated colonists: 75
Assimilated colonists: 75
Natives: 500

Generation 2:
Unassimilated colonists: 112.5
Assimilated colonists: 150
Natives: 250

Generation 3:
Unassimilated colonists: 168.75
Assimilated colonists: 243.75
Natives: 125

Generation 4:
Unassimilated colonists: 253.125
Assimilated colonists: 375
Natives: 62.5
So in four generations, the unassimilable colonists went from a minority of 4.7% to a minority of 36.7%, whereas the natives are now a minority of 9%. The majority is still culturally native, but genetically colonist.
Generation 5:
Unassimilated colonists: 379.6875
Assimilated colonists: 567.1875
Natives: 31.25

Generation 6:
Unassimilated colonists: 569.53125
Assimilated colonists: 853.125
Natives: 15.625
Does this converge on a 50/50 split of unassimilated colonists and assimilated colonists? I should make a spreadsheet.
It tends toward 25/75 with the assimilated forming the majority.
No – according to my spreadsheet, it’s 40/60 with assimilated majority.
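The generation-by-generation rules above are simple enough to sketch directly. This is my own reconstruction of the commenter’s “spreadsheet” (not their actual code), under the stated assumptions: unassimilated colonists triple each generation but half of each new cohort assimilates, while assimilated colonists and natives halve each generation.

```python
def step(unassimilated, assimilated, natives):
    """Advance the toy model by one generation."""
    new_cohort = unassimilated * 3            # fertility 6.0 => tripling
    unassimilated = new_cohort * 0.5          # half of each cohort stays unassimilated
    assimilated = assimilated * 0.5 + new_cohort * 0.5  # halving, plus assimilators
    natives = natives * 0.5                   # fertility 1.0 => halving
    return unassimilated, assimilated, natives

u, a, n = 50.0, 0.0, 1000.0   # initial populations from the setup above
for gen in range(30):
    u, a, n = step(u, a, n)

# The assimilated/unassimilated ratio r obeys r' = r/3 + 1, whose fixed
# point is 1.5, so the colonist population tends toward 40% unassimilated
# and 60% assimilated -- matching the corrected 40/60 figure.
print(round(u / (u + a), 4))  # -> 0.4
```

Running the first six iterations reproduces the numbers listed above (75/75, 112.5/150, …, 569.53125/853.125), and the long-run limit confirms the 40/60 split with an assimilated majority.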
Some more scenarios. The numbers suggest that if native/assimilated growth is substantially positive and assimilation is more than 50%, unassimilated colonists aren’t threatening an SK-class dominance shift. If the natives aren’t growing or are shrinking, they will rather quickly be replaced by the colonists.
My bad, you’re right! I got the ratio of 1.5 assimilated/unassimilated and swiftly declared 75% – way to go, maths. BTW, “genocide sim” is a rather interesting choice of name.
Well, open borders are genocide according to the definition co-authored by Uncle Stalin.