My name sounds a lot like Scott Aaronson’s and I get confused for him a lot. I try to encourage this confusion, since it can only increase people’s opinion of me. So let me propose a tool for investigating morality through algorithmic systems very similar to Aaronson’s recent post on eigenmorality.
I say we use DW-Nominate.
I wrote about DW-Nominate before. It’s the tool political scientists use to calculate Congresspeople’s position on the political spectrum. Whenever you hear an alarmed-sounding voice on a black-and-white attack ad say something like “Senator Schmendrick is the third most liberal senator in Congress,” chances are they used DW-Nominate to calculate “third most liberal”.
The system is beautifully elegant. They take a Congressperson’s votes on all the issues and compare them to other Congresspeople’s votes to find blocs of Congresspeople who tend to vote together. Then they do factor analysis stuff to see how many dimensions of “similar voting” there are. They end up with something that looks a lot like the traditional left-right dimension and occasionally a Northern-US-vs.-Southern-US-dimension that doesn’t always matter that much. Then they use Congresspeople with multi-decade careers to bridge the gap between current Congresses and past Congresses, and use dead Congresspeople with multi-decade careers to bridge the gap between past Congresses and even-further-in-the-past Congresses, so that they can compare any Congressperson, living or dead, to any other Congressperson, living or dead. They also get the opportunity to evaluate bills as liberal or conservative, based on whether liberal or conservative Congresspeople support them.
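The real DW-Nominate fits a probabilistic spatial model, but the core move – recover the dominant "dimension" hiding in a matrix of yea/nay votes – can be sketched with plain PCA via power iteration. Everything below (the legislators, the votes) is invented for illustration:

```python
# Toy stand-in for DW-NOMINATE's dimension extraction: take the top
# principal component of a mean-centered vote matrix by power iteration.
# NOT the real algorithm, which fits a probabilistic spatial model.

def first_factor_scores(rows, iters=200):
    """Score each row (legislator) on the top principal component."""
    n, m = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(m)]
    X = [[r[j] - means[j] for j in range(m)] for r in rows]
    v = [1.0] * m
    for _ in range(iters):
        Xv = [sum(x * c for x, c in zip(row, v)) for row in X]          # X v
        w = [sum(X[i][j] * Xv[i] for i in range(n)) for j in range(m)]  # X^T X v
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    # project each legislator onto the component -> one "ideology" score
    return [sum(x * c for x, c in zip(row, v)) for row in X]

# Five hypothetical legislators voting on six bills (+1 yea, -1 nay).
votes = [
    [+1, +1, +1, -1, -1, +1],
    [+1, +1, +1, -1, -1, -1],
    [+1, -1, +1, -1, +1, +1],   # a swing voter
    [-1, -1, -1, +1, +1, -1],
    [-1, -1, -1, +1, +1, +1],
]
scores = first_factor_scores(votes)
# Legislators who vote together land on the same end of the axis;
# which end gets called "left" is arbitrary, just as in the text.
```

Note that nothing here told the algorithm what the axis should mean – the two voting blocs fall out of the correlations alone, which is the property the paragraph above is pointing at.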
And the neat thing about it is that at no point did they enter into the system that it was supposed to give “left” vs. “right” – or even that it was supposed to come out with only one major grouping. It could have found that actually Democrats and Republicans vote much the same, but men always vote with other men and women with other women, or that the real difference was a religious worldview versus a secular worldview or whatever. Instead they found that our notion of left and right emerges naturally from the data, even if you’re not looking for it, and that this transcends party lines – ie some Democrats are further left than other Democrats, and this has consistent effects. If you really wanted, you could use this to rate whether, say, cutting carbon emissions or gun control is the more truly leftist cause, by seeing which bills get more heavily supported by leftists.
And I wonder what would happen if you tried DW-Nominate with moral decisions.
Now there’s a very boring interpretation of that proposal, which is that we hand a hundred people a hundred different multiple-choice questions on moral dilemmas, and then use factor analysis stuff to see if we can divide them into groups. I bet we’d come out with something a lot like “utilitarians vs. deontologists vs. virtue ethics”, or maybe something more like “religious vs. secular” or maybe “people who use all Haidtian foundations vs. people who just use care and fairness”. Actually, forget what I said before, this would already be quite an interesting thing to do, and somebody should do it.
But I would be much more interested in a (much harder) naturalistic experiment. What if we took the real decisions people engage in? I’m not even talking about obviously morally charged decisions like whether to have an abortion, I’m talking about things from “what college major should I have?” all the way down to “do I drink alcohol at age 17?” to “do I call my parents tonight like I promised I would, even though I’m very tired?”
A lot of people have to go through very similar decisions, which allows DW-Nominate style ranking. There would be some wiggle room in deciding which two decisions were equivalent (is the person who decides not to call loving parents making the same decision as the person who decides not to call abusive parents?), but let’s say we get a panel of raters to decide among themselves which decisions are equivalent and throw out any they can’t agree upon. This isn’t meant to be Pure And Objective here, only statistically useful.
In the same way that ten thousand Congressional votes, suitably analyzed, naturally group people into two categories that look to our trained eyes like Left and Right, would ten thousand little life decisions, suitably analyzed, naturally group people into two categories that look to our trained eyes like Good and Bad?
If so, it would be pretty easy to tell who the best person was, in the same way we can identify the most liberal member of Congress. We could give them a nice little award. Even better, it would be pretty easy to tell which option on each decision is more moral, for the same reason DW-Nominate can tell us that supporting gun control is more liberal than opposing it.
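The “which option is more moral” step is simple once everyone has a factor score: for each decision, compare the mean score of the people who took one side against the mean score of the people who took the other, just as DW-Nominate calls a bill “liberal” because liberal legislators support it. A minimal sketch, where the people, their choices, and their factor scores are all assumed for illustration:

```python
# Hypothetical sketch: classify each decision by which side of it
# the high-factor-score people tend to take.

def mean(xs):
    return sum(xs) / len(xs) if xs else 0.0

def side_taken_by_high_scorers(choices, scores):
    """For each decision (column), compare mean scores of the +1
    choosers against the -1 choosers."""
    verdicts = []
    for j in range(len(choices[0])):
        yes = [s for row, s in zip(choices, scores) if row[j] == +1]
        no = [s for row, s in zip(choices, scores) if row[j] == -1]
        verdicts.append("+1 side" if mean(yes) > mean(no) else "-1 side")
    return verdicts

# Five people, four decisions (+1 = did the thing, -1 = didn't).
choices = [
    [+1, +1, -1, +1],
    [+1, +1, -1, -1],
    [+1, -1, +1, +1],
    [-1, -1, +1, -1],
    [-1, -1, +1, +1],
]
# Assumed factor scores from some prior extraction step.
scores = [2.0, 1.5, 0.1, -1.4, -2.2]
verdicts = side_taken_by_high_scorers(choices, scores)
# -> ['+1 side', '+1 side', '-1 side', '-1 side']: the first two
# decisions cluster with the high end of the factor, the last two
# with the low end (the fourth only weakly).
```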
Suppose that we learned that one factor that naturally fell out of the data included giving money to the poor, supporting one’s aging parents, never committing violent crimes, avoiding ethnic slurs, conserving water and electricity, helping one’s friends when they were in trouble, and everything else we traditionally think of as good moral choices. And suppose this factor was heavily, heavily associated with being pro-life, to the same degree that the “liberal” factor in DW-Nominate is heavily associated with gun control. Would this provide some evidence in the debate over abortion? I’m not sure, but it would sure get me thinking long and hard about it.
I mean, we would probably also find some really silly things. Like that our moral factor loads on not getting tattoos of flaming skulls, ie the decision to get a tattoo of a flaming skull clusters with lots of immoral decisions. Presumably we would want to be able to say that getting a flaming skull tattoo is not itself immoral, but is correlated with immorality. But then we might as well say the same thing about being pro-life. Indeed, maybe everything religious will end out correlated with morality for religious reasons. We’d probably have to sort through this and fight a bunch of interminable correlation vs. causation debates.
But then there are areas where this could really shine.
I think this might solve a problem that Aaronson thought was unsolvable in his proposed algorithms. He said that in a world that was completely backwards – for example Nazi Germany – where everybody thought right was wrong and wrong was right, any moral sorting algorithm will give backwards results, because it has to start with majority opinion in some sense. His example was a PageRank-style algorithm, where the people believed to be moral are whoever the people believed to be moral believe are moral. It would fail because most Nazis would believe that the high-ranking Nazi authorities were moral, and then the circle would complete with the high-ranking Nazi authorities getting to determine who the moral people were.
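That circularity is easy to see in miniature. Below is a toy sketch of the PageRank-style scheme – power iteration on a normalized endorsement matrix, with the usual damping term – run on an invented population where a majority all endorses one “authority” and one dissident is endorsed by almost nobody. All names and numbers are made up:

```python
# Toy PageRank-style "eigenmorality": your score is the endorsement-
# weighted sum of the scores of everyone who endorses you.

def eigenmorality(endorse, iters=100, damping=0.85):
    """Power iteration on a row-normalized endorsement matrix."""
    n = len(endorse)
    out = [sum(row) for row in endorse]   # each person's endorsement count
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for j in range(n):
            inflow = sum(endorse[i][j] / out[i] * scores[i]
                         for i in range(n) if out[i])
            new.append((1 - damping) / n + damping * inflow)
        scores = new
    return scores

# endorse[i][j] = 1 means person i considers person j moral.
# Persons 0-3 are a majority endorsing person 4 (the "authority");
# person 5 is a dissident, endorsed only by person 3.
endorse = [
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 1],
    [1, 1, 1, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
]
scores = eigenmorality(endorse)
# The majority's favorite, person 4, tops the ranking regardless of
# what anyone actually did -- exactly the failure mode Aaronson raises.
```

The algorithm faithfully aggregates opinion; it has no purchase on whether the opinion itself is backwards, which is why the DW-Nominate-style approach below, grounded in concrete decisions rather than mutual endorsements, might do better.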
I think DW-Nominate might go part of the way toward solving that problem. Consider three different things we might find if we DW-Nominated Nazi Germany:
1. There is a General Factor of Morality, which includes giving to the poor, caring for your aged parents, cooperating with your neighbors, et cetera. People high in this General Factor of Morality are much more likely to oppose Nazi policies and hide Jews in their attics.
2. There is a General Factor of Morality, which includes giving to the poor, caring for your aged parents, cooperating with your neighbors, et cetera. People high in this General Factor of Morality are no more likely to hide Jews than anyone else, or maybe less likely to hide Jews.
3. There are multiple dimensions of morality. One dimension is something like “prosocial in-group patriotism” and captures things like paying your taxes on time, going without luxuries in order to help the war effort, and sending nice care packages to the troops. Another dimension is something like “willingness to go against consensus when it’s the right thing to do” and would include whistleblowing against corruption and being a passive resister to unjust wars. Hiding Jews in your attic might be negatively correlated with the first factor but positively correlated with the second factor. Universally beloved things like giving to the poor and caring for your aged parents might load about equally on both factors, or be a third factor, or whatever.
If Hypothesis 1 were true, that would be super interesting. It would suggest there’s something kind of objective about morality. Also, we could make it do work. Like we could go around to the Nazis, and say “Look, you agree that helping the poor is moral, right? And caring for your aged parents? Well, now that we’ve established what morality is, we have bad news for you. You don’t have it. Moral people are much more likely to oppose you. So stop doing what you’re doing.” This might actually work. Or if it didn’t, then once World War II ended and everyone agreed they should have listened to the General Factor Of Morality, maybe after ten or twenty iterations of this people would eventually start listening.
If Hypothesis 2 were true, that would also be super interesting, albeit disappointing. It would mean that morality probably isn’t very objective, and that our moral positions are a lot closer to random than we want to believe. If being moral in every other way we can think of had minimal correlation with being moral in the particular way of saving Jews from the Nazis, it would mean that there was no consistent basis to morality and it was just a hodgepodge of popular positions. Or that if there was a philosophically consistent basis, it has little to do with how it’s practiced in the real world.
If Hypothesis 3 were true, that would be very boring, but possibly still worthwhile. Like we could have debates on whether Factor I Morality is more important than Factor II Morality, and what to do when they contradict each other, and these debates would probably be more interesting than our current, vaguer debates on things like “what do you do when your duty and your moral intuitions conflict?”
I don’t really have some grand plan for how this could be used to solve everything or how a utopia could be created around it (although now that I mention it, if we can easily identify the most moral people in a population, they would make good candidates for judges and other high officials, though perhaps not legislators or executives).
I just think it would be fun to study.