[Epistemic status: something I’ve been thinking about recently. There’s a lot of complication around these issues and this is more to start a discussion than to present any settled solution]
There’s a scene in Fiddler on the Roof where Tevye is describing his peaceful little town. He says they never fight – except that one time about a horse some people thought was a mule. Someone interrupts him to say it was really a mule some people thought was a horse, and then everyone in town starts shouting “MULE!” or “HORSE!” at each other until they get drowned out by the chorus.
The town is happy and peaceful as long as nobody brings up the horse/mule thing. As soon as somebody brings it up all of the old rancor instantly resurfaces and everybody’s at each other’s throats. And the argument itself never gets more sophisticated than people yelling “HORSE!” or “MULE!” at each other. Maybe it would be worth it to create a norm around never bringing it up?
The rationalist/EA/etc community has a norm that people must be able to defend their beliefs with evidence, and a further norm that people shouldn’t be confident in their beliefs unless they’ve bounced them off others and sought out potential counterarguments. These are great norms. But their failure mode is a community where dredging up interminable horse/mule style arguments is seen as a virtue, and avoiding them is seen as a cowardly refusal to expose one’s own beliefs to challenge.
I’ve been thinking about this in the context of some arguments that keep cropping up on rationalist Tumblr. These have gotten so repetitive and annoying that I made a joke argument calendar to shame us for it; some of the people who replied were less able to see the humor in it and thought we really should be ashamed.
After some thought, I agree. Avoiding interminable arguments is an important social engineering problem we’re really bad at. Part of it is that we need a way to distinguish the baby from the bathwater. What does it mean to seek out productive discussion while avoiding interminable arguments?
A while back I wrote about memetic evolution for controversy. If an idea is outrageous, it’s likely to spread as people condemn it. If something is controversially outrageous, it’s likely to spread even further, as people argue against it and then other people counterargue against the arguments ad infinitum. This gives it visibility far beyond things that are objectively more important. The entire news media freaks out over BernieBros for weeks, and nobody ever hears about desalinization policy in drought-stricken nations.
Interminable arguments are the local version of the same process. For me, the biggest difference between a productive discussion and an interminable argument is simple. I’m participating in the productive discussion because I want to. I’m participating in the interminable argument because I have to.
I mean, obviously nobody’s holding a gun to my head and forcing me into any arguments. But there are a lot of reasons I might feel obligated to debate something I really don’t have time for.
First, there’s the feeling XKCD describes as “someone is wrong on the Internet”. When somebody makes a deeply flawed argument against a position you hold dear, and everyone else seems to believe them, it creates (at least for me) this weird irresistible urge to correct them.
Second, sometimes people are jerks. Nobody on Tumblr can just say “I don’t think AI is a big problem”. They have to say “I don’t think AI is a big problem, and the only reason some people worry about it is because they’re cultish sci-fi dorks who are too brainwashed to look up what real scientists have to say”. Nobody on Tumblr can just say they think feminism is important; they have to post comics like this. This is hard to just leave be, especially when it’s not just yourself but your friends who are being insulted, and especially when a lot of people are vocally agreeing with them and even some people you think should know better are being convinced. Letting jerks have the last word is really hard.
Third, sometimes there are actually things at stake. I’ve written before on how one of the main reasons I get defensive is because I think some groups actively strategize to push their opponents out of the Overton Window and turn them into despised laughingstocks. When it works, it means I either have to be a despised laughingstock or spend way too much mental energy hiding my true opinions. The alternative to letting these people have the final say is defending one’s self.
When I don’t want to argue but feel forced into it, I’m doing a very different thing than when I’m having a voluntary productive discussion. I’m a lot less likely to change my views or admit subtlety, because that contradicts my whole point in having the argument. And I’m a lot more likely to be hostile, because hostility is about making other people feel bad and disincentivizing their behavior, and in this case I really do believe their behavior needs disincentivizing.
But of course this just starts the cycle where people who disagree with me – both the original people I’m arguing against, and bystanders who just happen to hear – feel forced to write nasty replies of their own. And so on. It only dies down once everybody can tell themselves that they put enough effort into self-defense to acquit themselves well. And as soon as somebody challenges that – like Tevye in the Fiddler song – the whole thing starts up again as bad as ever.
So how does a community prevent this? Blocking jerks – the people who start the whole cycle by deliberately trolling others – is an obvious good start. But some people in the Tumblr discussion have mentioned some more subtle points worth thinking about.
First, the influx of newbies is a big driver of this dynamic. Newbies are less likely to know the relevant arguments, won’t be bored of them yet, won’t want to steer clear of them, and may mistake somebody’s unwillingness to engage for the 9000th time as unwillingness to engage at all. People should be more tolerant of newbies, and newbies should be more tolerant of “look in the archives for the last time we discussed this issue, but seriously, don’t start up about this again”. This is what I think social justice people mean when they talk about “this is not a 101 space”.
Second, be aware that some problems with interminable arguments might be asymmetrical. I’ve heard this most often in social justice contexts. For example, cis people might never have discussed trans issues much before and might find them really interesting and not particularly defensiveness-inducing. But consider what happens if there are a hundred times more cis people than trans people, plus trans people have spent a hundred times longer thinking about it, plus trans people have a lot less tolerance before they get annoyed. A cis person might innocently ask “Hey, isn’t using chromosomal sex as a proxy for gender a pretty elegant system which is much easier than all of this stuff about different identities?” because they really want to know and their local trans person seems like a good person to debate with. Meanwhile, the trans person might have had this exact debate two thousand times, find it personally insulting, but not know how to disengage politely without giving the impression that they’re too much of an intellectual lightweight to answer the simplest of questions about something very important to them.
(This isn’t limited to social justice or identity politics. I feel this same way sometimes on relatively transhumanist-free parts of the Internet. Someone will make a well-intentioned attempt to start a discussion, like “But what you people don’t get is that the AI will be smart enough to realize that paper clips are silly and compassion towards all living things is the best goal,” and I’ll want to say something like “Look, I promise that in fifty years of thinking about machine ethics somebody else has raised that point before, I think you’re so confused about things that correcting you wouldn’t be a very good use of my time, but if you google anything at all in this general area you’ll probably find an answer to your question.” But that would just be the “it’s not my job to educate you” or “read the Sequences” answer which so many people find annoying.)
But third, as I said before, make it absolutely super-duper crystal clear that there is not a community norm that everybody has to defend their positions every time they are asked; do not say anything like “I’ve never heard your response to Point X, so now I’m going to assume that you have no argument against it and are just a brainwashed cultist taking your position on faith”; and emphasize that although everybody who wants to be accurate should discuss and challenge their beliefs, nobody should have to do it to anyone else’s schedule.
The exact way this works is something I’m still working on. More and more I’m abandoning the idea of debating on social media/comments/forums entirely, and switching either to email or longer forms. Email is private and removes the performative factor; I can’t say how many times a previously terrible discussion has become manageable and productive as soon as it gets outside the public eye. And by longer forms, I mean things like books and (really good long form) blog posts. I would much rather read the best book by someone I disagree with, and hear all their best arguments laid out by a leading intellectual with a good editor, rather than have to listen to somebody taunt me on Tumblr. And if I don’t understand something about the book, or I still have questions, then I can pick one or two people I know and debate it with them privately.
Part of me thinks this is another point in favor of niceness. A community made mostly of nice people can probably hold more productive debates and have fewer interminable arguments than one that’s not as good at civility. On the other hand, I see these problems even among nice people. I think that the skill of structuring a point such that nobody feels attacked by it is complicated in ways beyond regular niceness, and that otherwise nice communities can sometimes make the right choice in deciding to avoid the whole issue.
Of course, the flip side of this is “I don’t have to respond to your argument unless you read this 100 page document first and understand what has already been said about the subject”. Which has its own problems.
That’s called academia, and it works far better than almost everything else.
Way more than 100 pages. I’ve spent a full third of my life studying economics and they’ve only just started taking me seriously.
The sickness of the Internet age is that we’ve started to think we deserve to be taken seriously about something without putting a third of our lives into it.
This!!
My rule of thumb these days is that any opinion, my own or others’, that hasn’t been empirically tested multiple times in the real world is basically empty garbage. Whereas many of these interminable subjects are by their nature not testable until too late, not testable by an individual, unclear what a test would even be, etc.
The opposite problem is at least as frequent: “Who are you to contradict this person who has spent 30 years studying X?”, where X is something that the critic obviously regards as fundamentally mistaken, or even fraudulent. I don’t need to spend 30 years studying astrology to know that astrology is bullshit.
I actually think most people who have studied astrology for 30 years probably know astrology is bullshit. Or, they have some highly complex metaphysics akin to Kabbalah where they’re really using astrology for purely spiritual purposes.
Why is that a sickness? I pay a lot of dollars to Uncle Sam, so I might just have a word or two when he starts throwing money at possibly stupid projects.
ADBG is right. This isn’t a sickness of the Internet. It’s a sickness of democracy.
Sort of. I’m sure back in the feudal era, the few hundred lords who provided the king with his funding probably thought they deserved input, just as the people who pay the most taxes and give the biggest campaign donations think today, but today is different because everybody thinks they deserve input. Probably why plunder is the best way to fund a government. Your dead enemies can’t insist on telling you how to run things.
Whether plunder is the best way of doing things depends on your goals. If your goal is to use the money any way you want without interference, plunder looks pretty good, but we normally don’t consider that to be the goal of government (however much it may end up being that in practice).
True. Plunder is the best way if you can otherwise use overwhelming force to keep your population in line, in which case you can just plunder them, but it’s desirable to plunder other places so your own country will prosper more (which is why every government out there tries to export as much tax burden as possible).
But the goal of democracy is explicitly not to be the best at doing any particular thing a government might be called to do. The goal is to present an air of legitimacy that maintains public support and peaceful transitions. A side effect is largely terrible policies, but it works as long as enough people feel sufficiently represented by it, because even terrible policies are usually not terrible enough to completely stop the rising tide of technological innovation making labor and capital both more productive, and the worst of it gets thrown on people no one really cares about, especially foreigners.
Actually, as you become more qualified in academia (grad student => post-doc => tenure-track professor => …) you have less and less time to spend on the relevant literature for an issue. This is why you often get intelligent, well-established professors (Dawkins, Lawrence Krauss) who proffer uninformed opinions about certain topics.
Within their subject.
Outside their subject, uninformed opinions are probably the problem diagnosed by Socrates, of not knowing where your ignorance begins.
That’s true, but increasingly there’s a serious problem with perverse incentives (e.g. publication bias and sensationalism). It’s probably better than most other subcultures/institutions, but I don’t think it’s functioning particularly well as a truth-seeking discussion club at the moment.
I thought a couple of the ideas about truth-machine websites people were constructing were promising, but I haven’t seen anything remotely workable. Organising the information in an accessible structure, then trying to keep the community focused on truth-seeking (with a dash of niceness) seems to be important. SSC is better than most places for the latter, but isn’t pursuing/trying to be the former.
What are you proposing? A rationalist movement to go improve Wikipedia?
If you change it to “build a better tool than Wikipedia”, this project is already underway.
I don’t have a proposal, just thinking aloud / adding my two cents. I’m hopeful someone smarter than I will come up with a workable solution.
http://arbital.com/
I actually wish there were some way I could bounce ideas around about my academic subject as frankly and with as little pressure as I discuss things on SSC. Though forums and facebook groups for discussion of my topic do exist, they are mostly for saying “here is a new book on…” “does anybody know where I can find…” not for really debating an issue.
The problem is, when it’s your profession you’re worried about being thought stupid for your bad ideas and having your good ideas stolen (or, at least, I would be).
I tend to agree with everything said here. Academia’s weakness is a lack of open, productive discussion, but its strength is an insistence on familiarity with the literature (at least within specific disciplinary boundaries, a limitation which is an instance of the general weakness).
It works pretty well in the STEM fields. In the social sciences, not so much; we’ve got the replication crisis our host has discussed in the past, and various sub-fields that have been pretty much taken over by pure political activism in the guise of scholarship.
The more frustrating thing is “So I read your 100 page article on privilege and I still think it’s wrong for the same reasons. I also did three google searches and found nothing new. Can you pretty please link something actually refuting my point?”
I actually think an indexed list of arguments (on both sides) would be an amazing resource for almost everyone to solve this problem. “Search the archives” is always a HUGE pain.
Even worse, there isn’t any particularly good way to approach this issue with a community or even an individual unless there is a VERY strong assumption of good faith in place, because otherwise you’ll just get pattern-matched.
The nearest thing to a convincing argument I’ve ever managed to construct has been “So, you understand how theologians have spent centuries constructing countless arguments, an entire body of thought that is based upon a singular premise? And said premise is one you disagree with, therefore rendering the entire school of thought pretty much meaningless? Well, that’s what I think about pretty much everything your community believes.”
Now imagine trying to have that discussion with someone used to postmodern standards of discourse. I’ve taken to just beating my head against a wall instead, it’s less painful and I’ve more chance of actually changing something, if only the shape of my own skull.
Not everyone is worth communicating with. Some (and I group most postmodernists and fascists in this category) are only worth telling off as rudely as possible.
If someone’s acting like a bigot *and* can’t be bothered to defend their position, there’s a time to stop acting like you’re talking to a reasonable person and cut your losses.
Are there actually postmodernists? Is that not just an art style / epoch?
@The Smoke, No, there are, or at least there were, postmodernists. Modernism wasn’t just an art movement; there were intellectual traditions associated with it. And fashions must change; people must find new banners to rally under, lest they appear unoriginal. And so eventually people who were still mostly modernists started calling themselves post-modernists, in order to appear to have made progress and to be doing something new. And they invented straw modernisms to criticize, in order to conceal as much as possible the extent to which they were still modernists themselves. And their deception sometimes worked on careless observers, who have often been misled by the rhetoric into thinking the post-modernists actually did represent radically new ideas (different groups of careless observers might react either with enthusiasm or with horror to the radicalism that the post-modernists pretended to but did not actually represent).
Post-modernism is largely out of favor these days; we seem to be in a phase where all of the cool kids are denying that they are part of any school or movement. That’s not a new position either, of course, and it should further go without saying that when it comes to actual issues of substance, there has been far less change than anybody is willing to admit.
Thanks for the explanation!
I think not wanting to associate with any particular movement should be the normal state in any healthy democracy and is neither original nor problematic. It’s of course different if you deny that many parts of your mindset come from some specific schools of thinking, which is probably the case if you live in any modern society. Still that doesn’t mean you have to pay them tribute for having influenced you.
I’ve been on both sides of the fence and it’s a pain either way.
I’ve been there when the innocent well-meaning newcomer brings up something they think is a stunningly original point, and you have to find a nice way of telling them this is the five hundred and ninety-sixth time you’ve heard it and you’re not interested in flogging the dead horse anymore. Handle it badly and you risk sending them off in a snit about “these twerps haven’t a clue what they’re doing and they have no good arguments, only slogans”.
I’ve also been on the other side (not directly) of the “I’m not here to educate you” thing, and that is frustrating when you’re going “Hang on, this is the first time I’ve ever heard of this thing, I don’t even know where the Idiot’s Guide To Understanding The Topic may be, you want me to take this seriously and use your terms and attitudes, but you won’t throw me a bone about why this is a stupid question or what the reasons behind ‘all X are scum’ are”. Then I’m the person in danger of heading off in a snit about the twerps who have no reasons, only slogans.
I actually *hate* being in the position where no matter how much I research something, I can’t find my argument in it. I always assume I’m not that original, and try to find someone who has already refuted or proved what I’m thinking.
Usually the problem is either that a) I don’t even have the right vocabulary/know the right resources to search or b) anyone writing on the topic in this community has very different base assumptions than I have, which means my argument is either dead on arrival or not thought of at all.
Which is of course where you get into the “echo chamber” debates.
I think resistance to actually providing the link is an indication that something is wrong, and that these concerns are not being addressed rationally but are instead just socially disapproved topics.
For example, consider an alternate field – high-frequency trading. This is a particular part of the stock market that attracts lots of uninformed media attention and conspiracy “theories” (Flash Boys, ZeroHedge, Nanex).
Some years back I wrote an introductory article on the field, basically “what’s a limit order” and “why does speed matter”. Since then I’ve observed numerous folks trying to rehash the issue and informed people just refer them back to my blog post.
I’ve seen similar patterns repeat in physics, math, economics, etc. For instance, I’ve seen Scott Alexander’s post criticizing Cosma Shalizi used at least twice.
When a field doesn’t post the links, it suggests to me that there isn’t really anything to go read. It’s a lot more satisfying to link to a detailed takedown that 100% proves your point than it is to say “the truth is out there”, so why would anyone do the latter when they can do the former?
Suspicion of Bad Faith. Reportedly “Hi, I’m new, please explain ____ to me like I’m five.” was a common sock-puppet troll in the 00’s, to the point where saying “Hi, I’m new, please explain ____ to me like I’m five.” is a little like saying “___ did nothing wrong.” on a history discussion: It signals “I am a troll who’s just fucking with you.” so hard that it can’t be counter-signaled.
Inquisitive five-year-old did nothing wrong.
I actually think an indexed list of arguments (on both sides) would be an amazing resource for almost everyone to solve this problem.
Agreed.
The more frustrating thing is “So I read your 100 page article on privilege and I still think it’s wrong for the same reasons…can you pretty please link something actually refuting my point?”
I’ve had that experience more than once; I make an argument, someone says, “you obviously need to research the subject” and gives me a few links, and nothing in the links actually addresses the specific point I made, it’s just some insultingly basic information that I already know.
On the other side, I’ve visited blogs where they frustratingly keep making (pop culture media) arguments based on just one or two examples, calling it a trope, apparently having never visited TVTropes for a full list of examples and counter-examples to check their argument.
Needless to say, their writing tends to come off as very naive. I feel like commenting on the majority of their posts with “DO MORE RESEARCH.”
One failure mode: said document is not (or only barely) relevant, but of course you need to read the whole thing before realizing that.
Worse is when they won’t admit it.
I once was in an online argument with someone who thought that fictional sorcerers and wizards were clearly different, and had a habit of pointing at things as if they would prove it.
Sounds like an Eddings fan.
Or a D&D player. There are all sorts of specific fantasy contexts in which the two are distinct. In Tolkien, wizards are angelic messengers (and no one who isn’t can be one), while sorcery is a morally dodgy art humans can practice, but one that’s at least flirting with coming under the sway of a Dark Lord. Lyndon Hardy’s Master of the Five Magics has its five titular schools, of which sorcery and wizardry are two. Steven Brust’s Dragaera series has “wizard” as a specialized high-level sorcerer (with witchcraft and psionics as separate magical disciplines).
But that’s different from the two having any clear distinction in common English usage. The fact that the nature of the difference (if any) varies from work to work is a strong sign that it’s contingent on authorial choice.
I’ve heard of D&D players arguing about what class and level Merlin was.
Not, would be if you tried to introduce him. Was.
It depends, it’s more about nuance really. “Sorcery” has the element of black magic/dark arts about it, so a sorcerer would be on morally dodgy ground by comparison to a wizard from the start.
“Wizardry” also has an implication of knowledge, study, learning, research to it. A wizard might have learned a new spell or potion from a grimoire or other work while a sorcerer learned it or acquired it from a spirit or by making a deal with a demon 🙂
Yeah, but if you read a fantasy story where a sorcerer compounds a potion from herbs, or a wizard conjures a demon, I think you would not have strong grounds to object that the author got it wrong.
(At least I hope not, since Eyes of the Sorceress features a sorceress heroine not meant to be dicey.)
If the question is frequent and you are going to respond in text, it should be feasible to respond with a link to a well-formed text that is close enough to the answer you would’ve given. That part is known as a “FAQ”.
How closely does this idea map to the social norm of never discussing politics or religion?
Not starting obviously tribal conflicts seems a bit different from having better arguments.
This seems to be one of those things that academic credentialism actually kind of solves. I remember one of my professors telling me “your math degree is a piece of paper saying you are allowed to discuss math at /this/ level”, and if you want your argument to be taken seriously without credentials, it needs to be well-written, rigorous, and public.
The ‘religion’ part of that phrase seems like a vestigial organ.
When’s the last time you saw a bar fight break out between some Mennonites badmouthing Baptists? Catholics conspiring to keep an Anglican’s paper out of peer review? Calvinists parading in the streets championing “grace” as a principle needing legal recognition?
It seems ridiculous today, but people used to get so worked up about that stuff. To a positivist like me, it seems like race/gender could be going the same way: a strong social consensus of agree to disagree, live and let live.
However if we think about why religion was such a flashpoint in 18th-century America, it’s because communities were founded on it. You needed the right religion to get certain jobs, join certain clubs, and being considered an outsider made you vulnerable, especially to legal exploitation (most famously, witch trials). This points to two choices: we can “secularize” or double down on identity politics as a foundational concept.
In my family, it is not a vestigial organ. You’re living in a bubble.
The social norm against discussing politics, religion, and sex applies to “polite company”, which may exclude families like yours and mine. 🙂 In a typical U.S. corporate workplace, or in a public space among strangers like a store, park, or sidewalk, the social norm is still very much in effect in much of the U.S. (I can’t comment on other locations.)
It may not be exactly along the same lines, but if I’m at a dinner party and someone starts in on her “spiritual but not religious” beliefs I consider it a faux pas. Similarly if someone spontaneously decides to mock people that are “spiritual but not religious”. Organized religion as a whole is a little bit of a safer ground, but not all that much. Better to just stay away from the whole topic. You never know who is or isn’t going to have strong feelings on the matter.
Now if you want to start talking filioque controversy, then yeah, that’s not going to cause any hard feelings or an uncomfortable argument to break out.
When’s the last time you saw a bar fight break out between some Mennonites badmouthing Baptists? Catholics conspiring to keep an Anglican’s paper out of peer review? Calvinists parading in the streets championing “grace” as a principle needing legal recognition?
Intradenominational Christian theological disputes represent a tiny fraction of possible religious arguments. If we’ve mostly (mostly) left those behind, it’s because there are so many other good ones to fight over now. Off the top of my head, and note that most of these would not have been debatable a century ago:
Atheists are/are not immoral nihilists
Christians are/are not delusional ignorant sky-fairy-worshippers
Christians do/don’t really have to go to church every week, 1-2 times a year is OK
Muslims are/are not just a bunch of terrorists and apologists for terrorists
The ethnic stereotypes around Jews are/are not mostly correct
Your religious beliefs must/must not drive your political beliefs w/re, e.g., abortion
Religion always/never motivates inquisitions, crusades, and other great evil
Religion always/never motivates charity, forgiveness, and other great good
Nondenominational spiritualism is the worst excuse for a/best possible “religion”
Wicca, neopaganism, et al are/are not “real religions”
The United States of America is/is not a Christian nation
The Amish are basically decent people/evil child abusers
Feel free to add more. And feel free to bring any of these up at your next family reunion or workplace social gathering, taking a contrarian position if there is a local consensus. Let us know how that turns out for you.
Or, don’t talk religion and politics in places where you want to keep things civil.
I very much enjoyed and agree with your comment, but shouldn’t it be “Interdenominational Christian”?
Yes, either “interdenominational” or “intrafaith”, I was waffling between the two and mangled them together in a bad way.
I think these are all a stretch.
Sure I could provoke a fight discussing the quintessential tepid topics:
Weather: “it must be global warming”
Movies – “yeah mad max shows the fate of humanity unless all power is handed to the matriarchy” (“ummm, I liked the special effects”)
Baseball – “I hate the Red Sox because they were the last team to racially integrate…”
What you see in my examples, and all of yours above, is *injecting politics* into a topic which is nominally about something else.
While there are small pockets of severe atheism, and severe antisemitism to be found, you’re not going to be able to avoid discussing them with people who form identities out of these beliefs.
I’d like to agree with that sentiment, but I pulled most of my examples from things that have been injected into otherwise polite, rational discussions here in this generally polite, rational forum by people who I don’t think intended to derail the discussion. Which is one of the things “don’t discuss politics or religion in polite company” is meant to guard against, the other being the nigh-universal (even here) tendency to be reliably baited into flamage by such passing mentions of political or religious matters.
Maybe in some parts of the United States and western Europe. Even so, consider how many live political issues in America (abortion, public funding of contraception, teaching of evolution, support for Israel at all costs) are based, in whole or in part, on religious differences.
That seems wildly optimistic. From my perspective it seems that the left has won most of these arguments among the political and intellectual elite, and its most extreme adherents are conducting a ruthless mopping-up operation against the substantial numbers of Americans who disagree with them. A lot of Trump supporters are motivated by this: one thing that attracts them to The Donald is that he can get away with saying things that would get them in trouble.
one thing that attracts them to The Donald is that he can get away with saying things that would get them in trouble.
Agreed, the all-too-cliché “he’s a racist!” “he’s a sexist!” barbs have been played too many times, and they seem only to fuel his popularity.
It seems like “he’s Hitler!” is the only one left. I saw the Rachel Maddow show last week (no sound) and I think they devoted a full hour to cutting between Trump and videos of Nazis. Maybe with sound it was insightful, but it just seems so Godwin-ly unoriginal.
Catholics conspiring to keep an Anglican’s paper out of peer review?
Hey, I’ll argue with any variety of Anglican about the Synod of Whitby! 🙂
Now if you want to start talking filioque controversy, then yeah, that’s not going to cause any hard feelings or an uncomfortable argument to break out.
I’m not sure if you mean that tongue-in-cheek or not. Western and Eastern Christianity are still very much divided on this, and I personally wasn’t very impressed with the Anglican decision to drop that from the Creed at personal discretion.
Then again, I have the feeling that the Holy Spirit is very much the neglected member of the Trinity when it comes to (Western) practice, and that if the “Filioque” distinction was not important, then it should not have been added in the first place. If it wasn’t worth the Great Schism, why did we bother about it?
Sorry if I wasn’t clear. The context was supposed to be at the sort of dinner party I attend. In that setting saying that people that believe in crystal healing spheres are obviously mentally deficient might spark an argument or leave someone with hurt feelings. Claiming that the Holy Spirit obviously proceeds only from the Father is overwhelmingly likely to just get you puzzled stares. On the off chance that anyone else has the faintest clue what you are talking about they’ll probably be delighted to have a kindred spirit at the party, even if they happen to be on the other side of the Great Schism.
If you properly attempt to be nice and use correct argument-making while discussing politics, you will find that it’s very hard to discuss, and most discussions tend to end with “this needs more data”.
And if that is made a norm then it quickly turns into a near-absence of political discussions due to lack of motivation to go and look up that data (especially when it’s not publicly available, such as detailed income distribution data for a particular country).
I think this is mostly true.
Eventually, though, I think that as the number of discussions terminating in “this needs more / inaccessible data” increases, the motivation to find / gain access to / generate that data will grow until it overcomes whatever obstacle was in the way.
This strikes me as a very good thing, relative to the drawbacks.
Some good norms:
(1) Always ask yourself what you are trying to accomplish in a debate or discussion. Choose your battles wisely.
(2) Approach conversations with a consensus-seeking, rather than adversarial, mentality. This is “arguments are not soldiers” writ large: it’s easy to get competitive about rationalism or epistemic virtue just as people do with “enlightenment” (on the left) and “virtue” (on the right). Having agreement as a goal will limit that, while still giving you many of the benefits of debate/exchange of ideas.
(3) Try to understand why the other person disagrees with you. Is it a difference of values? Temperament? Do they perceive the facts differently? Do they have a reason to distrust your methods? Your perspective? Do they think you have biases that you would do well to self-reflect on? Be charitable.
(4) Be open to compromise. Freely grant any parts of your opponents’ arguments that you can, be gracious about the ones you’re not very attached to, and offer them opportunities to agree with you on some point or other. Emphasizing points of agreement at least as much as points of disagreement will go a long way towards clarifying debates and pacifying the participants.
(5) “Winning” an argument is not the same thing as convincing your interlocutor.
Corollary: Under no circumstances should you allow yourself to feel good about “winning” an argument: if you’re correct and it matters that your opponents believe the truth, then it’s a tragedy if they don’t.
I’m sure there are more. The sad part is that doing these things (let alone knowing to do them in the first place) is a matter of emotional intelligence plus practice, not philosophy, with the perverse effect that people who care the most about “the truth” (the abstractly-minded) end up having the most (and most acrimonious) discussions, and learning the least from the ordeal.
TL;DR There can be no responsible truth-seeking without reckoning with all the uses humanity has for truth (and controlling access thereto).
““Winning” an argument is not the same thing as convincing your interlocutor.”
One of my father’s sayings was that the objective is not to convince the other person, it’s to give him the ideas with which he will later convince himself.
Arguments are more persuasive than they look. The trick is that it’s humiliating to be beaten in an argument, so you will not want to admit that you have been at the time. But, since it is humiliating to be beaten in an argument, you will also not want to hold an easily beatable position going forward. So people often claim to be unpersuaded but actually have changed their position.
Yeah. The one or two times when I’ve been well and truly convinced by an argument, it happened at a lag of about six months after I denounced my interlocutor in the harshest possible terms.
Much better to just demand the banning of the person who presents inconveniently good arguments.
Yes, if you’re determined to fuck up your intellectual environment, it’s not particularly hard.
I’ve been meaning to ask you how you manage to not appear impatient with people in discussions. Heck, you don’t even look particularly exercised.
I imagine it’s very easy to maintain that demeanor here, but you look calm even relative to many other SSC commenters. And meanwhile, I’ve also seen you exhibit that calm in various YouTube videos that don’t feature SSCers. I know you discuss things on Facebook, but I’ve never seen any of those discussions, so I don’t know how they go.
>>(5) “Winning” an argument is not the same thing as convincing your interlocutor.
Indeed not! One or both parties might not care to convince the other, but instead want to convince uninvolved spectators.
In online debates, the combatants are often talking past each other, both thinking of the Great Unseen Audience.
Ladies and gentlemen of the jury…
Maybe newbies can duke it out against other newbies? Like, when you get asked a question about AI you’ve heard 100 times, instead of explaining you can make a post saying “I don’t have time to debate, does anyone else want to field this question?” And maybe there will be some recent AI converts willing to engage.
Newbies who nominally hold the same position as you rarely understand the subject matter enough to lay out the best arguments for a position correctly. This can cause weak-personing of the position and can be net harmful. (Unjustified or poorly expressed justification for AI concern being a textbook example.)
I guess, but it should still be enough to counter anti-AI newbie’s claims if they’re really *that* stupid.
To some extent, if the AI critic argues so well that only you can counter them, then maybe you really ought to – in that case, they’re no longer an uninformed newbie.
Part of the problem is that AI risk is a fringe belief. Is a PhD in CS who’s never heard of it a “newbie”? I’m pretty sure you don’t want to ignore those people.
Yes, someone with a Ph.D. in computer science who has never heard of the AI value alignment problem is a newbie with respect to the subject. Why wouldn’t they be? CS is a big field, and getting a Ph.D. just requires that you have a high B.S.-level understanding plus deep knowledge in one specific specialty.
What about someone who has read all the literature on AI safety, but never heard of computational complexity… are they a newbie too?
@TheAncientGeek
It can be one of the annoying problems.
If you have a background in computation, some of the arguments put forward on LessWrong, even by some of the older members without a CS background, are pretty cringe-worthy, because while they’ve got the basic idea of value alignment, some of the things they say based on it are absurd on account of requiring more compute power than you could get from using all the energy in the universe for computation with perfect efficiency.
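(To put a rough number on “all the energy in the universe”: here is a minimal back-of-the-envelope sketch using the Landauer limit. The mass-energy figure is a standard order-of-magnitude estimate, and the 512-bit-key comparison is just an illustrative stand-in for the kind of claim at issue.)

```python
import math

# Landauer limit: erasing one bit costs at least k*T*ln(2) joules.
k = 1.38e-23                # Boltzmann constant, J/K
T = 3.0                     # roughly the cosmic microwave background temperature, K
joules_per_bit = k * T * math.log(2)   # ~2.9e-23 J per irreversible bit operation

# Order-of-magnitude mass-energy of the observable universe's ordinary matter.
universe_joules = 1e53 * (3e8) ** 2    # E = m*c^2, ~1e70 J

max_bit_ops = universe_joules / joules_per_bit
print(f"budget: ~{max_bit_ops:.0e} irreversible bit operations")   # ~3e92

# For scale: brute-forcing a 512-bit key needs ~2^512 ~ 1.3e154 operations,
# some sixty orders of magnitude beyond the budget above.
print(f"2^512 : ~{2.0 ** 512:.0e}")
```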
LessWrong has quite a lot of philosophers with no CS knowledge at all who’ll still talk down to people with practical CS and AI experience who don’t fully agree with everything.
And how good is LW at doing something about that sort of issue when it is pointed out?
@TheAncientGeek
What would “doing something” entail in such a case?
It’s a comment thread. Individual posters can be frustrating.
If you point out that a claimed ability they attribute to a powerful AI is physically impossible they can sometimes turn to the same tricks as religion and claim that since you aren’t a super-intelligent AI you probably just can’t understand how such an AI would access [practically infinite] computation.
@Tom D. Fitzgerald Jr.
Where has Yudkowsky ever said he believes P=NP? I found two links where he uses it as an example of something ridiculous, but can’t find anywhere where he’s proposed it confidently.
https://www.reddit.com/r/HPMOR/comments/3ikzva/hpmor_reading_companion/cuivbza
http://lesswrong.com/lw/sg/when_not_to_use_probabilities/
I’m about 80% sure that was sarcasm.
(Incidentally, even if we throw our straw Eliezer a bone and assume that P=NP, that still doesn’t guarantee that any particular problem in NP will end up being efficiently computable in practice. There is a habit of treating P as if it means “tractable”, but it’s really not well justified; there are plenty of algorithms with polynomial complexity that you don’t want anywhere near a large dataset. And machine-learning datasets tend to be huge.)
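(A quick toy calculation of that point; the billion-operations-per-second machine is just an invented reference figure:)

```python
# Toy illustration that "polynomial" does not mean "practical".
OPS_PER_SECOND = 1e9        # hypothetical machine speed
SECONDS_PER_YEAR = 3.15e7

def years_to_run(n, exponent):
    """Runtime in years of an O(n**exponent) algorithm on input size n."""
    return n ** exponent / OPS_PER_SECOND / SECONDS_PER_YEAR

print(years_to_run(1e9, 1))   # linear pass over a billion items: about a second
print(years_to_run(1e9, 3))   # O(n^3) on the same data: ~3e10 years,
                              # a couple of ages of the universe
```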
I’m 95% sure it’s sarcasm, or parody, or something else which is not intended to be taken literally.
Although I definitely understand the “I’m not going to answer these same basic questions over and over,” I also think the person thinking that should wonder to himself: “am I yet able to express my ideas clearly and succinctly enough that it doesn’t feel like a hassle for me to write them again and again?”
Of course, not everything can be explained in a few sentences, but a lot more than you think probably can. As Einstein supposedly said, “if you can’t explain it to a six-year-old [and, I would add, in 100 words], you don’t understand it yourself.”
This question alone consumes a full 10% of my clock cycles.
Our thought processes have been so similar lately, I’ve been wondering whether we’re twins.
Impossible. You’re not even from the same anime!
Edit: Yeah, go ahead and ruin my joke, anon know-it-all >:|
Hell, one of them is not even from an anime.
Between us two and suntzu, we’re the anime.
(I don’t recognize onyomi’s gravatar either. But upon eyeballing it, I feel like it counts.)
It’s from Persona 5, a videogame… which is probably the closest a videogame can get to being an anime without becoming a visual novel.
Persona 5? That isn’t even out yet! (Though your mention of it made me check, and I guess it’s due this year; hope it’s as good as 3 and 4!)
We generally consider Shakespeare the greatest writer of all time, even though playwrights were considered lowbrow during their own lifetimes. Similarly I have a theory that several decades from now, videogames (with exceptions) will be widely accepted as mainstream literature.
Naively, I want to give a prediction of like ~30 years. But then I remember Fusion Power and Dippin Dots, and extend my estimate to 50 – 100. But then I remember events like {arab spring; barack obama; donald trump} which remind me that the future is now, and therefore embolden me to dial it back down to ~30.
All it takes is for us millennials to get old and crotchety and decide what real art is, as opposed to whatever kids are doing right now.
I don’t know. There is a key difference between Shakespeare and video games: anyone can sit down and watch a performance of one of Shakespeare’s plays or read the script of one, but only certain people have the resources, skill, free time, and patience to play video games (that aren’t visual novels) to their conclusion. LP videos often lose a lot, and still take a long time to watch.
That’s without even getting into the fact that preserving video games for future generations is turning out to be much harder than it was to preserve Shakespeare’s plays, even taking piracy into account.
I think the proper equivalent of Shakespeare’s plays in today’s world is Hollywood blockbusters, given that he aimed to appeal to as broad an audience as possible. Indeed, I’ve read some very convincing essays that claim that this is precisely why Shakespeare’s work is still relevant today. Because he chose story themes that people from every social class of his day could relate to, such as young love (Romeo and Juliet), the temptations of power (Macbeth), and the desire for revenge (Hamlet), he ended up choosing story themes that would still be relevant to audiences hundreds of years later.
One can’t help but wonder if a similar thing will happen to movies, with future generations forgetting topical Oscar winners that appealed to the intelligentsia of their time in favor of broadly appealing blockbusters like Independence Day and The Avengers. Actually, it seems like such a thing is already happening. The Wizard of Oz is far better remembered than any of the movies that won Best Picture in the 1930s except Gone With The Wind. More recently, when was the last time that anyone considered American Beauty to be relevant, especially when compared to other 1999 films like The Matrix, Fight Club, and The Sixth Sense?
On the other hand, Avatar was completely forgotten within a few years despite being the most successful movie in history, so it’s clear that financial success is no guarantee of a lasting cultural impact.
To give you an idea of how far outside the mainstream even the most successful video games are: a few years ago, I met an American woman who couldn’t have been older than her 50s, was a counselor at an art college with a large Game Design program, and had no idea who Mario was. Even after someone drew a picture of Mario and showed it to her, she still didn’t recognize it at all.
@NN
This assumes that the artistic value of games lies in their narrative.
But the games that survive the test of time are more about their gameplay, particularly games that impart their theme/narrative through their mechanics. (Has “ludonarrative dissonance” stopped being a dirty word yet?)
I expect that games where losing is the point, like your classic arcade wallet-eaters, are going to be the long-term remembered media. Candy Crush, Donkey Kong, Angry Birds, Pac-Man, Super Hexagon.
As for artistic value in gaming now, isn’t that one of the sticking points in some communities right now? “X isn’t a real game!” opposition. Meanwhile, there’s already an indie-gaming exhibition at Seattle’s EMP museum.
And the Resident Evil movies, lol.
@NN
There’s a reason why I mentioned the old people thing in my earlier post. Video games are a fairly recent medium, and most people tend to prefer the forms of entertainment they were exposed to while younger. It’s probably going to take a while, but I suspect it’ll follow a similar sort of development to what comic books did/are doing.
True, but I was responding to FullMeta_Rationalist predicting that video games will be widely accepted as mainstream literature, so I naturally focused on the narrative aspects.
Comic books are actually a pretty instructive case. If you look at comic sales statistics, you’ll find that the 2000s-2010s comic book movie boom hasn’t made actual comic books much more popular. In 2001, at the nadir of the late 90s comics crash, the combined unit sales for the 300 best selling monthly comics in the US totaled 67 million copies. By 2015, that increased to 89 million copies, for an increase of 32% over a period of time where the US population increased by 13%. The total revenue of the entire North American comics market did more than double (after adjusting for inflation) over the same period of time, but most of that growth seems to have been driven by the sale of trade paperbacks, especially through Barnes and Noble, Amazon.com, and other avenues that aren’t dedicated comic book shops.
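(Worked out: unit sales grew by a factor of roughly 89/67 ≈ 1.33 while the population grew by a factor of 1.13, so per-capita unit sales grew by about 1.33/1.13 − 1 ≈ 18% over those fourteen years: real, but modest.)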
TLDR: Movies based on comic books may be ruling the box office, but comic books themselves are still only slightly less niche than they were 15 years ago.
So if things go for video games the same way that they went for comics, then we’ll eventually get movies based on video game franchises that achieve mainstream critical and commercial success, which will be accompanied by (probably mostly mobile based) tie in games that will be very successful. The core video game industry that originally created those franchises, meanwhile, will see at best modest growth as a result. Actually, I’d expect less spillover benefits for video games than for comics, since the cost of entry is so much higher (buying a several hundred dollar console or gaming PC and then paying $60 for a game vs. paying less than $5 for a comic book). Whether that counts as video games becoming part of the mainstream is ultimately a question of semantics, but it probably isn’t what most people imagine when they think of video games becoming mainstream.
It seems a little odd to be ignoring trade paperbacks, B&N, and Amazon if we’re trying to estimate the success of comics as an art form. The 22-page floppy comic is a delivery system for the art form, not an essential component, and so is the dedicated comic shop; saying that the comic market’s flatlined because trades and generalist stores are getting more popular is like saying that video game sales have flatlined because online purchases through platforms like Steam are starting to edge out the essential buyer experience of going to a Gamestop and spending a few minutes wading through clouds of gamer funk before the bored, pimply teenager behind the counter will give you your copy of Halo.
Comic shops were basically a creation of the collector market, by the way; they were at least as uncommon twenty years before the Nineties comic boom as they are now, twenty years after.
His point might be paperbacks are just more expensive or the same people who already own serials are buying them. The proper measure of reach is distinct consumers, not revenue.
I would agree that pure plot is not where the art of video games lies, as it is not where the art of movies lies. It always takes a while for each new medium to find what makes it unique. This is why I think Mad Max should have won Best Picture and is also one of the best movies I’ve ever seen: it does the most possible with the medium. That movie could never be a book (technically, of course, you could write a book with the same plot, but it would be a completely different creature).
For this reason, though I love JRPGs, I’m not a fan of the “video game as interactive movie” as represented by, for example, Final Fantasy XIII (which, in fairness, I never played because of how it looked and how it was described to me). This is not to say a good game can’t feel “cinematic,” of course, but the gameplay comes first, followed, imo, by the environment (combination of character design, backgrounds and soundtrack).
“Are video games art yet?” is probably an interminable argument. Well, maybe it’ll terminate when all the haters are dead or senile, but leaving that aside as a non-standard termination condition. Luckily enough, the groups with differing opinions have pretty much no need to interact and the issue is basically completely irrelevant. Though, it could be fun to write an FAQ for it. Might make a good toy-example of trying to improve Interminable Argumentation, since there’s not much at stake.
As far as I can tell, this is icycalm’s shtick, if you abstract away from the fact that he’d punch you in the face for using the word “gameplay”.
But the Intellectual types don’t see any value in that stuff. If arcade-style games came back, they’d probably Cow Clicker them. Games aren’t supposed to be about testing skill [and eating quarters]; they’re supposed to be about Imparting Valuable Messages, because the Intellectuals are the same sorts of people as the Victorians and the Communists. Games aren’t even supposed to be about providing enjoyable aesthetic experiences (which was the whole point of ‘art game’ at the beginning — see: Seiklus); they’re supposed to be about The Message. (And why shouldn’t developing competence at something like Pac-Man over time be considered an enjoyable aesthetic experience?)
Part of the problem, I bet, is that people want objective-sounding standards to judge things by, and in the absence of objective aesthetic standards they’ll come up with objective political standards instead. Beauty is in the eye of the beholder, but service to the Cause is not.
Another part of the problem is that objective political standards don’t require game developers to have anything recognizable as talent in order to make Great Things.
Anyway, “are video games art?” translates to “should some video games be given this form of establishment prestige, and what criteria should video games have to fulfill for them to earn this prestige?”, which is a stupid argument to be having in the first place, because Shakespeare. Death To Art. Art Is Dead. Marxism-Nixonism Will Win. We Will Destroy The Museums, Libraries, Academies Of Every Kind.
Actually, what people like to do with “real Art” these days is not rating it on an objective 1-10 scale, but commenting on it from various angles — yes, sometimes including political ones.
I looked up three reviews of the same book for the other thread. One said: from a feminist perspective, the female characters are poorly written. Another said: from a stylistic perspective, it’s a great homage to the 30s/40s aesthetic. And the third said: from a narrative technique perspective, it reflects the quirks of the protagonist’s artificially limited mind.
Art criticism is similar to philosophy in this regard — most people seem to have given up on objective standards and moved on to clever arguments (probably a good thing, in this case).
@nydwracu
Nah, there are leftist channels discussing the role of sticky friction in gameplay as a part of its narrative value:
https://www.youtube.com/watch?v=_Ux0-yYWTgM
https://www.youtube.com/watch?v=DY-45DjRk_E
It’s the “how does this media reflect our current state” school of artistic analysis, as well as just regular secular “what makes a story work?” lit analysis.
In The Fault in Our Stars, there’s this scene where a cranky, eccentric author (Peter) is trying to explain his novel to the fan-girling protagonist (Hazel). The author plays a loud rap/hip-hop song in Swedish(?) on a stereo for several seconds. The rapper (though incomprehensible) exhibits abundant confidence and attitude. After Hazel is sufficiently confused, Peter turns to her and says “It’s not about what the rapper said, it’s how he said it.”
Plot and theme are only 2 elements of literature. Plot and theme are the most salient, so that’s what people grab onto first. But there’s like 100 other elements. Irony, mood, tone, juxtaposition, foreshadowing, etc. A novel is more than the sum of its plot and theme. Plot and theme are just a few of many vehicles wielded to convey an experience.
Though some videogames don’t have a plot, they still have {literary, artistic, idk} value. And the reason they have value is that they immerse the audience in a unique experience.
Consider Katamari. You’re given this sticky-ball-thing the size of a balloon. And all you do is roll the ball over debris until the ball is the size of a continent. Did it have a plot as intricate as The Da Vinci Code? Nope. But it still won a bunch of awards somehow.
As far as I’m concerned, the primary measure of literary value is immersion. Any 5th grader can write a paragraph or two about boats. 5th grade writing won’t convince me to undertake a day’s journey 2000 leagues under the sea.
(In case you guys think I’m a plot-hater, I thought Fire Emblem’s plot was a gem.)
(Also, I don’t care whether any particular franchise reaches the popularity of Mario. I’m making a statement about the medium joining the Overton window, as opposed to being viewed as an activity performed by South Park’s Evil Griefer who lives in the basement.)
I was kind of shocked that Blizzard collaborated with them on that; I would think it would hurt the brand. I understand having a desire to roll with the punches or whatever, but it seems like insulting your customers is a bad business move.
Actually it seems like the South Park guys are generally kind of assholes and nobody seems to notice or care, they get a lot of leeway on every issue imaginable.
If you must conduct debates on controversial subjects in the privacy of your own email, please do post them publicly after. Being able to read good-faith debates between intelligent, informed people is likely a very good way to learn about these issues.
Both sides would have to know it was going to be public beforehand, else it would be very bad mannered (and no one would play anymore), and then you get all the problems of having a public debate. Vive la démocratie!
“Both sides would have to know it was going to be public beforehand”
Alternatively, you ask the other party’s permission. As I recently asked Scott’s permission to web our exchange over his old anti-libertarian faq (not done yet, but soon).
Oh now that sounds like something I would want to read.
Seconded!
Done.
http://www.daviddfriedman.com/Miscellaneous/My%20Response%20to%20a%20Non-Libertarian%20faq.html
༼ つ ◕_◕ ༽つ GIVE DIRETIDE ༼ つ ◕_◕ ༽つ
Meme concentration levels getting dangerously high in here.
Of course, I think this goes without saying. In fact, it would be really cool if, after the debate was done, the participants got the opportunity to polish/clarify their contributions (within the bounds of good faith, etc.) before the debate transcript went live.
Yes. Came here to make sure someone had posted this.
The benefit of a community with interminable arguments is that one rarely has to participate in any given argument oneself.
Also came here to make sure someone had posted this. Let people edit or redact parts of the email exchange, make sure both people sign off on a finished version they feel accurately represents their views, and then if everyone’s happy about it and it seems useful, post the thing. Seems like the best of both worlds, and dialogues are a fun read!
It seems to me that something like a “Rationalists Stack Exchange” would be a good way to mitigate most of the problems you’re describing, as it could serve as a cumulative one-stop shop for the “state-of-the-art” answers regarding well-defined and sufficiently resolved issues.
I think that the Q&A format, together with the scoring system, is well suited and rich enough to reflect nuances and different perspectives – provided that the subject matter is indeed “sufficiently resolved” (which is an underlying assumption in your post, if I understood you correctly).
Some potential problems with reputation-based systems might be that they’re vulnerable to demographic change, that random walks of opinion become self-reinforcing, or that small flaws within initial norms can quickly generate big problems. Care would have to be taken to nurture it as it grows, just as Jeff Atwood and co. have guided Stack Overflow/Exchange, and as Eliezer Yudkowsky managed Less Wrong, and the site might quickly turn into Rationalwiki with or without such care.
101 question: What is the problem with Rationalwiki?
This probably isn’t a definitive answer, but it’s at least an introduction to RationalWiki’s problems.
It said some mean things about Yudkowsky. Alternatively, it doesn’t use “rational” in the same way as LW.
Yudkowsky’s ideas deserve to be criticised, particularly the ones that are outright wrong, but also the ones that are kind-of-sort-of-right, and even the ones that are contextually right. I have heavy reservations about what he and Bostrom can achieve with philosophy, given that AI research is such a practical field, and he might even be doing harm by creating a false sense of security or by increasing the arrogance of AI researchers who are more hands-on. <previous unkindness removed>. Adulation of Yudkowsky is not why I used RationalWiki as an example.
The problem with RationalWiki is that it abandons rigor in favour of applause lights, and strawmans where it could steelman, and the overall effect is to make it more entertaining but less convincing to outsiders. Instead of saying “Yudkowsky’s estimates of when certain landmarks in AI research will be hit are considerably earlier than the estimates of a majority of other AI researchers, by decades according to some surveys” RationalWiki says “[d]espite glacial progress in AI so far, the AI-based Singularity will occur soon. Trust us!” The difference between the two statements is obvious: the former is an argument, the latter is ego-boosting snark. This tendency is displayed throughout the site.
SSC and LW have problems, in that one has to learn the local dialect and the indigenous taboos before commenting, and that the learning process filters out incompatible ideas and generates a bit of an echo chamber, but they make a good faith effort to engage almost any idea which makes it through. SSC even tries to do so with kindness. So yes, the problem with RationalWiki is that it says some mean things: not just about Yudkowsky, but about anyone swimming outside the mainstream.
This seems a very good idea. It can be infuriating to be dismissed with “go read $huge_body_of_knowledge”. Not so much if the dismissal is in the form of “your argument[link] is old, here are the classic rebuttals[1][2][3] with counter-rebuttals and counter-counter-rebuttals; you are welcome to continue this conversation if you have any original content to add”.
Yeah, part of the problem with “go read huge body of work first” is I don’t actually believe the person telling me I need to know all that before engaging on the topic. This is a very good aspect of the Sequences being divided into little bite-size chunks, and why it is much more helpful to say “read Eliezer’s piece on scope insensitivity” than just “read the sequences.” The former is a legit shortcut way of saying “the argument I would offer in this case is already written somewhere else.” The latter is just a way of dismissing someone as being totally unqualified to even engage on the subject, which, even if occasionally true, is also insulting to be told.
Sometimes, though, the only workable answer is “go read this huge chunk of work”. Sometimes it’s even “we’re still discussing that and there are various opinions”.
That last often gets taken as “Ha! So you admit you guys are just making it all up!” rather than “This is very complicated and there are valid, different opinions and we don’t yet have enough data to come down on one definitive solution”.
But 99% of the time, “go read this huge chunk of work” translates to either “I don’t know enough about this to be able to summarize the relevant parts but I’m still going around having opinions about it” or “fuck off”.
It’s very rare for there to be ideas so complex in their original presentation that they can’t be summarized.
+1. But it can also mean, “I don’t grok your view enough to both summarize my own AND perform the relevant conceptual translation”.
I have a dream for a website that is nothing but crowd-sourced arguments on controversial topics.
You’d have a section for “pro” and a section for “anti” for each topic. Each section is full of tweet-length summaries of an argument like “Studies show countries with more gun control have fewer murders” or “The 2nd amendment protects gun rights” with a link to a longer article expounding that argument.
Each of these gets two sets of upvote counters — one from people who are “pro” and one from people who are “anti.” You can sort by either, so you see which articles people think represent their best arguments, and which people from the other side find most convincing.
Most importantly, there are *no* comments, so no opportunity/motivation to be nasty to each other. If you want to say something new, write up a long-form post and link it, see if it gets voted up.
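A minimal sketch of that data model in Python — all names hypothetical, just to make the two-counter idea concrete:

```python
from dataclasses import dataclass

@dataclass
class Argument:
    summary: str        # tweet-length claim, e.g. "The 2nd amendment protects gun rights"
    link: str           # longer article expounding the argument
    side: str           # "pro" or "anti"
    pro_votes: int = 0  # upvotes from self-identified "pro" users
    anti_votes: int = 0 # upvotes from self-identified "anti" users

def ranked(arguments, side, voter_camp):
    """Arguments for one side, sorted by how one camp voted on them.

    Sorting a side's arguments by the *other* camp's votes shows which
    arguments opponents find most convincing, per the proposal above.
    """
    votes = (lambda a: a.pro_votes) if voter_camp == "pro" else (lambda a: a.anti_votes)
    return sorted((a for a in arguments if a.side == side), key=votes, reverse=True)
```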
You’re in luck — there’s not just one, but several projects in that vein.
Examples:
Debatabase – sample debate
ProCon – sample debate
Debate.org – sample debate
Ooh, exciting. There is nothing so wonderful as finding you didn’t have an original idea.
You assume there is one (1) pro side and one (1) anti side.
Quick, which of these is the AGW anti?
1. The earth is not warming.
2. The earth is in an interglacial period and would be warming even if humanity did not exist.
3. The non-human sources of warming dwarf the human ones.
4. Though human contribution to global warming is real, it’s mostly a done deal; the big effects were in the 19th century. The AGW activity is expensive and useless.
5. The harm done by the AGW activity is greater than AGW itself will produce.
6. While we could stop AGW, it is more useful to save the resources to mitigate its effects and improve people’s lives in other ways.
Funny enough, in college, I was once assigned the task of debating against AGW in a class where the students and professor were very hostile to my position. They were looking for a debate on #1, but the resolution was worded vaguely. Being an econ major, and not caring to play the fool, I pushed my team to concede #1 and focus entirely on #6.
The other team was blindsided. They had about 30 pages of tables and charts asserting that the earth really is warming, but no response to, “Yeah, so?”
7. AGW has both positive and negative effects, the size of both is uncertain, and we don’t know if the net is positive or negative.
Sure I did, as a simplification. But theoretically you could name any cluster of views and keep it as a separate column and sort the votes the same way.
Only if you split up the subject between AGW and the anti-AGW efforts, because 3-6 are actually pro-AGW in one sense.
Well, off you go to Area 51, then!
I like the idea, but it occurs to me that tech questions often have one clear best answer, and the stakes are less or at least very different from questions that are philosophical or especially political in nature. Rationalist topics at least border on politics some of the time. People already argue pointlessly over minor tech-preferences in many tech forums, so you need to have a way to manage people’s motivations and actions when many people involved literally think large numbers of people can die as a result of widespread belief in a particular answer.
That sounds negative, but I do also think it’s an interesting line of thought that could be developed!
The control problem subreddit https://www.reddit.com/r/ControlProblem sort of does that… it’s got the voting and the FAQ.
It’s no better than LW at updating core doctrine, though.
You seem like an intelligent person who disagrees with the standard Yudkowsky/Bostrom view of AI safety. I’ve seen you express this disagreement multiple times in a variety of fora, but I’m still not sure why exactly you disagree. Can you link to any kind of extended rebuttal you wrote (or a rebuttal someone else wrote that you endorse) that explains the key flaws in the argument?
Regarding the newbie problem – back in the early days of Usenet, the quality of discussion was cyclical. There was a large influx of newbies every September as freshmen arrived at college and got shiny new Unix accounts. For a few weeks they ran roughshod over Usenet, then gradually they came to learn how things worked and discussions returned to normal.
This cycle abruptly ended when AOL connected to Usenet, overwhelming every newsgroup with a massive flood of newbies who responded to every post with “me too!” It was known as the “Endless September” and I’m sure every Usenet regular would be singing that Green Day song to themselves if it hadn’t been written a decade later.
Well, the Internet now is far bigger than AOL in the ’90s, and far far bigger than Usenet ever was, and we still haven’t figured out how to manage newbies properly. Wake me up when September ends.
Oh, right. Here I was thinking it was Basket Case (“Do you have the time / To listen to me whine / About nothing and everything / All at once?”), then I get to the end of the post and yes, yours *does* make more sense, doesn’t it?
I came to Usenet in 1988, when I was in grad school at Cornell. I remember those discussions. When did “endless September” start?
EDIT: I looked it up. It was September 1993.
https://en.wikipedia.org/wiki/Eternal_September
This suggests the solution is for someone to create a new “walled garden” on the internet for having high quality discussions that’s invite-only, then gradually add new users to it at a rate that allows them to acclimatize.
The problem I see with this otherwise laudable idea is that the internet would perceive such high quality discussions as damage, and route around them.
In other words: how would we entice people to want to qualify for entrance into such gardens, or to refer to them as reliable sources?
Early 90s Usenet had the advantage of there being relatively few alternatives for people to discuss something esoteric. Now, if you want to talk about it, There’s Probably a 4chan For That.
Aren’t interminable arguments one of the draws for us (as commenters, Tumblr followers, etc)? Both to argue ourselves and to watch others do the same. Sometimes someone from on high (like our host) will toss us a new topic to gnaw at extensively, but until then we can entertain ourselves with Round 5000 of Should We Banish MIRI. Of course, those who don’t like it are free not to read or participate, and it would be good if there were a norm that people don’t have to debate when they don’t want to. But I doubt I’m the only one who would find having to use email to debate someone too high a barrier to entry (maybe I should carve my arguments on stone tablets and mail them to my opponents, and expect them to do the same?), and if that were the general norm, the Internet would be much less fun for me.
Yeah, I actually like rehashing some topics again and again precisely because they are the topics on which my thought is still developing. And the nice thing is I think the problem has something of a built-in dissolving mechanism: as real consensus is reached and/or peoples’ ideas on it come into sharper focus, debates should tend to become more and more succinct, as I can express my ideas on a topic I’ve thought about a lot much faster and more succinctly than a topic I’m still working through.
Of course, intractable differences may remain with two sides 100% convinced of their rightness on two different sides of an issue, but I think the SSC community is pretty decent about not just having the exact same argument over and over; we have the same argument with new wrinkles and new participants over and over.
There’s definitely value in debating old topics in order to improve one’s debate skills. At the same time, there’s value in getting the latest points made, in order to see the most developed state of an issue.
One could imagine a site where people retread discussions to hone themselves, and possibly getting to the edges of an issue and pushing further out, and the result is made available to all.
I keep wishing there was a resource for this, and above, I see links to three possible candidates, so I’ll be checking those out soon.
I have no idea how to solve this, but, a few quick notes:
I believe back on Usenet, this is what FAQs were for.
Hey, at least the Sequences are a specific reference! And a pretty good one, too. But probably not one well-organized for this purpose; for that you’d want something more like an FAQ with links to appropriate parts of the Sequences.
Anyway, when it comes to FAQs, here’s the failure mode I’m worried about. Because, well… in my experience, social justice FAQs/101s are just really bad — I say this not to pick on social justice once again, but because, well, if they’re making the mistake, that shows the mistake can be made, and needs at least some effort to avoid. I wrote a comment about this a while back over at Thing of Things, but, basically:
1. Their FAQs/101s fail to anticipate serious counterarguments; they appear to be aimed at people holding some barely-reflected-upon position, not somebody who’s thought about it but honestly disagrees.
2. Not a problem with the FAQs per se, but I’ve seen them referenced in entirely inappropriate ways — people saying “see the FAQ” when your objection is simply not addressed in there.
Basically these together come down to “nobody could possibly come up with an argument we haven’t seen before”, and if you do, well, it gets pattern-matched to one they have.
So why are the Sequences better? Because it sure seems to me problem #2 was generally avoided — when people were told to “read the Sequences”, generally what they were saying was already addressed in there. It was pretty rare that somebody disagreed with the Sequences in a novel way, and when they did, I don’t think they were generally dismissed with a “read the Sequences”. Some or all of that might be better community rather than a better reference (the Sequences are certainly not a convenient FAQ), but man, how on earth do you create that?
I guess one thing about the Sequences is just how all-encompassing the ideas put forth in them are. So a lot of times, even if an objection wasn’t explicitly addressed, often the response to an objection could simply be “But if you abstract a little, you’ll see that’s already accounted for.” But that’s not a feature of the Sequences per se — that’s a feature of the subject matter! In other areas you won’t be able to rely on that, you will indeed have to actually anticipate counterarguments, and be willing to engage if a new one comes up.
So if this was avoided on LW, I’m not sure it’s due to the Sequences being better-written somehow rather than due to other factors. In short: I have no idea, FAQs are hard, take appropriate care when writing them I guess? And remember to not make the mistake of thinking you must have answered every possible objection the first time.
The sequences are essentially infinite. I don’t know if anyone actually read them all. This makes “read the sequences” about as useful as “read the entire works of Marx”. Linking to a specific sequence post is often pretty good – the main remaining problem is the writing style, which is sometimes dismissive and off-putting to outsiders.
What?! The Sequences are a finite, strictly defined set of materials. I’ve read them all (except for the very last one, about epistemology, that’s much more recent than the rest).
What on earth could you possibly have meant…?
The sequences consist of over 300 essays that must total something like 400k words, the length of 4 to 5 books. Until recently, they were poorly organized, with no clear place to start or order to read them in. They refer to each other in maze-like ways. There are other sequences by other authors that may or may not count as part of “the sequences”.
Some rationalist people have previously tried and failed to read them all. Here’s Vika failing in 2014, for example:
https://vkrakovna.wordpress.com/2015/01/11/new-year-review/
I feel like many people have in fact successfully read four or even five books.
http://hacks.ciphergoth.org/lesswrong.epub
Here is the reading order: Start from page 1
WEB 2.0 had some great ideas.
“Let’s tag things,” people said, “So you can easily find related content.”
“Let’s use links within pages,” people said, “So you can find citations.”
“User-centered design,” people said, “Will let you quickly get everything by the content creator you like.”
“Let’s put the newest content first,” people said, “So you don’t have to skip past things you have already seen.”
“An archive of everything? In chronological order? From the start?” they said, “that sounds very web 1.0, we don’t need that.”
This is the bane of twitter, tumblr, and unfortunately also now of LW unless you go searching for the reading order. It wasn’t always like this – when I started reading, there was a very useful page that listed everything Yud had written, in order. SSC is thankfully not like this – the archive is very usefully organized, that is, chronologically (Still “newest first” but it is literally one button press to go to the beginning so Scott is forgiven.)
I’m currently about halfway through the Sequences. I find most of them easy enough reading, although from time to time I don’t really understand what Eliezer is getting at, and as my reading comprehension is bad in general I’m sure there’s a lot I’m not absorbing. I don’t remember exactly when I started on them (I took some long breaks), but at my current rate, I expect I’ll finish sometime this year.
If they hadn’t been organized into e-book format, I’m pretty sure there’s no way I would have gotten this far, though.
@Liskantope:
The sequences, while generally good, do have sections that are outright bad. As such, the proper spirit in which to read them is to find interesting what you find interesting, learn where you can, and disregard entirely such parts as you find uninteresting or poorly explained.
“The sequences, while generally good, do have sections that are outright bad. As such, the proper spirit in which to read them is to find interesting what you find interesting, learn where you can, and disregard entirely such parts as you find uninteresting or poorly explained.”
That’s the treatment you’d give a holy text if you are a moderate, modern sort of person. Ideally, the bad bits in the Sequences would be eliminated and the unclear bits would be improved.
If you think of the sequences that way, then “read the Sequences” is no longer really a useful answer in a debate.
@suntzuanime there are also people who read the entire works of Marx. That doesn’t make it a reasonable thing to ask someone to do.
(My comment saying *no one* read them is a hyperbole.)
@Nancy Lebovitz:
That would ruin their authenticity, but there is absolutely space for an edited work. I have heard that “Rationality: From AI to Zombies” is such a work, but I cannot confirm, as I have not read it. (It is available on a pay-what-you-want basis from https://intelligence.org/rationality-ai-zombies/)
@Jiro: Sure it is – I mean, not for the person being told to read the sequences, but immensely useful for somebody who doesn’t want to rehash the debate. I see it like theology: Step ONE is “read the Bible.” Yes, it is boring. Don’t care. If you want to discuss with me the true nature of the Trinity, first read the Bible.
@Shieldfoss, I don’t know if your Bible example is the best. Theological arguments among people all of whom have read the Bible don’t seem to be notably less interminable.
Also, what happens when I read it through and say, “What you are saying is in there is not,” and the person tries to send me back?
I had no idea that there was a significant set of rationalists who thought that reading 4-5 books was a reasonable test for debate.
1) I’m not saying they’re wrong – I have no idea, and it appeals to the part of me that used to fantasize about a Stranger in a Strange Land style societal shift – that once we all learned to think in Martian, we could communicate more effectively with one another, achieve more, etc.
2) On the other hand, I might worry that if you set reading 5 books worth of EY analysis as the price of debate, you’re screening your correspondents in a way that cuts off a lot of tests of your ideas. As much as I love steelmanning, and I really really love it, some contrary ideas aren’t going to make it to the debate because people holding those ideas won’t read 5 books worth of Yudkowsky essays.
3) Generally on philosophical debates, I find it a good exercise to summarize a complex idea. It’s true that Hume dealt with basically all freshman thoughts on induction long before any of us were born, but instead of saying “Read Hume,” I can break down what I see as Hume’s theory of induction, and see what people think of it.
And if I can’t explain and apply what I know about Hume, do I really know it, or am I just using “go read Hume” as a shield against introspection? I’m thinking generally of some SJ debates – when you dig all the way down, the more thoughtful members of the community have some very specific understandings of terms like “objectification” that are pretty helpful to know if you’re going to discuss it, but telling someone to “read Lacan” doesn’t help at all.
Re J Mann’s 2) I think that this is a really important point. As someone who’s studied Western philosophy in the academy extensively but never been part of the internet rationalist community, the importance of shared canon hits me very hard. There really should be a way to bridge this gap. Maybe if I ever get tenure it’s something I can work on.
3) After having a spectacular lack of success lately in teaching college freshmen Descartes as an introduction to philosophy, I think that this is also a really excellent point. Descartes, Hume, etc. wrote in styles and contexts that are very hard for people to leap into today; providing contemporary, synthetic accounts of philosophical positions is very important.
Of course, the price paid is always that people are then reliant on the interpretation of the person giving that contemporary account. This leads to an immense amount of strawmanning. (I can’t tell you how many times people have dismissed all things Hegelian to me because ‘Hegel said that all things have antitheses and there’s no antithesis to a rock’.)
@Orwell’s Ghost – I can still remember my freshman philosophy professor discussing some point on Descartes, with “read literally, this passage would mean X, but that would be stupid, and we know Descartes isn’t stupid, so let’s think about what he might mean.”
It was a frustrating and interesting point, and probably right, so I’m still thinking about it, many years later.
I echo Said Achmiz above – I have read the whole of the sequences, more than once. First on LW, then later again in .mobi format on my kindle.
They’re long if they don’t interest you – which might well be the case for somebody who was told “read the sequences,” or who has recently joined “rationalist tumblr” and is reading them not out of interest but out of duty. They’re a breeze if you find them interesting on their own.
I’m interested in most of the material contained in the sequences, but I’d call them anything but a breeze to read through.
At the very least, reading them critically and actually digesting their content fully is a significant commitment in both time and energy.
I second Shieldfoss: if you’re reading the Sequences because you have to, then sure, it may be a slog. I found them wonderfully written and enjoyable, and reading them to be relaxing — akin to reading a good essay or novel.
So: is there anything we can do to increase the probability that the Sequences will be interesting to a given newcomer?
Recently I prefer to point people towards the book form of the Sequences; specifically saying that it’s free to download. It’s finite and linearly ordered. Too bad it wasn’t made much sooner.
(I often wonder why some things take so much time. I translated the whole book into another language in a shorter time than the whole rationalist community needed to proofread the English original. Makes me wonder if there is a more general lesson about the community here. But I digress…)
The complaints about length are in my opinion mostly bullshit. I agree that for a newbie, the length is threatening (especially when it is not well-organized). But people who already spent a few months debating on LW — and in general people who spend a lot of time debating on internet — have probably read much larger quantities of text; they are merely not aware of it.
Try the following experiment: Take a piece of paper, and for every medium-length article you read, make a mark on the paper. How many marks would you get on the average day? If you are like me, it would be about 10 during a “mostly offline day”, and more than 100 on a “day spent mostly procrastinating”.
When you have about 400 marks, you have read approximately an equivalent of the Sequences book without the “Intuitive explanation of Bayes Theorem” and “Technical Explanation of Technical Explanation” chapters (yes, those two are insanely long, just skip them). I usually read that amount of text in a week. Slower readers who still procrastinate a lot probably read it in a month or two. So maybe just reducing Facebook usage to 50% and reading the Sequences in the remaining time could solve the problem. But even reading one blog article per day would make you read the Sequences in a year; so if you are regularly debating on LW for more than one year, what exactly is your excuse? You have already read more text on the LW website itself.
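To make that arithmetic explicit — same assumptions as above (~400 medium-length articles once the two long chapters are skipped; the reading rates are just the guesses from this comment):

```python
SEQUENCE_ARTICLES = 400  # rough count, minus the two insanely long chapters

rates = {"procrastinating day": 100, "mostly offline day": 10, "one article per day": 1}
for label, per_day in rates.items():
    print(f"{label}: ~{SEQUENCE_ARTICLES / per_day:.0f} days")
# procrastinating day: ~4 days
# mostly offline day: ~40 days
# one article per day: ~400 days (a bit over a year)
```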
Or the people who compare the length to Lord of the Rings or Harry Potter — but you have read LotR or HP, probably both, haven’t you? So I guess it’s about priorities. I am not saying that LW Sequences should be a priority for everyone, but if you spend a lot of time debating on LW but you don’t have time to read the LW Sequences, then you are simply using “rationalist community” as just another chatroom.
I am making the same mistake myself, with other books. I have dozens of downloaded cool books that I never have time to read, because I spend my free time on LessWrong, SSC, Reddit, Facebook, and other time sinks instead. And it’s not that I wouldn’t want to read the books… it’s just that I browse the web first, and then it’s suddenly midnight and I have to go sleep. This is how I “don’t have time” to read the interesting and potentially important stuff, but I always “have time” to read inane arguments on social networks. Addiction, low willpower, lack of strategic planning. At least I am not blaming Eliezer for that.
Or if you’ve read Worm, that’s like twice as long as the Sequences or more, isn’t it?
Reading Worm costs most people 2 or 3 weeks of productive time; and Worm is easy to read, since it contains no deep ideas and has some paragraphs that can be safely skipped/skimmed.
Again, compare asking people to read the sequences to asking them to read Marx.
On a Marxist website that tries to have a higher standard of debate than most of the internet, that would actually make sense.
On a website about rationality and cognitive biases and artificial intelligence… okay, I’d say that if you have read Kahneman’s “Thinking, Fast and Slow” and Bostrom’s “Superintelligence”, then you are excused and you don’t have to read the Sequences. Otherwise, you should get some background in the main topics of the website.
I agree that getting some background is desirable, but I think I was able to pick up almost all of it without reading either Kahneman or Bostrom. The amount of interesting things to say about cognitive biases is not that large, especially for someone that already knows Bayes’s theorem and some game theory.
The amount of interesting things to say about AGI is also not that large – paperclips, foom, and box experiments are probably the majority of it. Maybe there’s one or two things I’m forgetting (e.g. some hand wavey arguments about the evolution of intelligence, or some hand wavey arguments about AIXI). But probably it can be distilled down to a couple of essays rather than a whole book.
Not sure if “It’s only half as long as Worm” is really a defense. 🙂
Worm is insanely, ridiculously long. Might as well tell somebody that this rock you want them to move is only half as heavy as the Empire State Building.
> The complaints about length are in my opinion mostly bullshit.
My complaint about length is that it should not have included postings refuted in the comments, or postings with no discernible point.
If someone said “read Lord of the Rings, then come back to this discussion,” or “Read all of Harry Potter, that’ll answer your question,” I’d still think they were arguing in bad faith. Even though I’ve read both of them and they’re much more accessible than the Sequences. Yes, it might answer my questions about Voldemort, but surely you can make whatever point you’re making in less than seven novels of space.
The issue is not the length, the issue is the length compared to the length of the argument. You’re asking me to read several books of text in order to engage with a few paragraphs of internet argument. As you point out, there are a lot of arguments I could be getting into right now, so why am I going to put in several books worth of effort just to engage with this one in particular?
I don’t know if it’s quite comparable. If somebody says to me, “I don’t see why [complex mathematical truth] is true,” and I reply, “What is your background in [theory underlying complex mathematical truth]?” and they reply, “It’s nonexistent, but that shouldn’t matter!” then I’m going to request that they familiarize themselves with the theory, which may require quite a lot of reading.
The Sequences are sort of math, sort of philosophy, and generally not compressible to pithy explanations, not without destroying most of the nuance.
The sequences are about 80-90% not original.
@ moridinamael
If they care about the truth value of [complex mathematical claim] at all, they must have some comprehensible idea (and a comprehensible counterexample) in mind, even if it doesn’t really match the actual claim.
@Nita
They may not grasp that the claim they are making is complex. “A witch is causing our crops to die” sounds like a simple claim. The refutation to this claim is going to be pretty complicated, especially for any party who believes in witches.
Something like “This number keeps getting bigger and bigger, I don’t see why it doesn’t go to infinity!” is a claim that sort of makes sense in a common-sense mode of thinking. But depending on the details, you may need to understand calculus and at least the details of how limits work to show that the number doesn’t go to infinity.
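Concretely (a standard textbook example, not one from this thread):

\[
s_n = \sum_{k=1}^{n} \frac{1}{2^k} = 1 - \frac{1}{2^n}
\]

Each \(s_n\) is strictly bigger than the one before it, yet \(s_n < 1\) for every \(n\), and \(\lim_{n \to \infty} s_n = 1\). “Keeps getting bigger” does not imply “goes to infinity” — but seeing why requires the machinery of limits.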
“A witch is causing our crops to die” sounds like a simple claim. The refutation to this claim is going to be pretty complicated, especially for any party who believes in witches.
One notes that, historically, what brought down the witch hunts in Europe was not disbelief in witches, but applying standards of evidence to the charges. In those regions where they insisted on standards all along — Spain, Italy, England — witch hunts were rare or extralegal. (Things like an Inquisitor saying that if this confessed witch said she saw this woman at the sabbat when other people saw her asleep in bed, that only proved that one of them was illusionary, not which one. Or that they needed some evidence that the crop failure was not natural in origin, and then that the supernatural origin was actually linked to the accused.)
Controversial issues are almost fractal in nature: you can keep zooming in increasingly higher detail on something like gender equality until you’re reading an in-depth discussion of selection bias in a study on the pay ratio of male to female quants living in Manhattan. As a layman, some of Scott’s past FAQs have been really good at providing a simple summary with just enough added complexity to help further study, but the sequences are so detailed that they have a FAQ which explains the other available FAQs and summaries.
EDIT: this graphviz diagram is a particularly good example of why the sequences are better as a reference or an intermediate-level exhaustive look, rather than a guide for curious beginners.
I count, in order:
Link to a FAQ that is not a Sequences FAQ but an LW FAQ
Link to a FAQ that is not a Sequences FAQ but an LW FAQ
Link to what appears to be an index in chronological layout (That is, what you would find on the first page in a book)
Link to what appears to be an index in thematic layout (That is, what you would find on the last page in a book)
Link to somebody proposing a reading order for Less Wrong, not The Sequences
A reference guide to LW terminology
A reference guide to additional off-site reading.
…
…
You know, it’s not hard to be confused if you are deliberately trying to be confused.
May I suggest that you take the chronological index, start from the top, read to the bottom.
I find it amazing that you take the fact that somebody has decided to gather this information in easily read formats and use it as evidence that the subject is confusing. What would evidence of clarity be? An equal-length work without an index?
The specific, exact problem is that the sequences were written without a good index, referred to without a reading order, and are often indistinguishable from the rest of LW. As a result, they are suffering from a proliferation of different, competing indices. Evidence of clarity would be a number of choices that an ordinary person can reason about – say, one page in chronological order with a prominently displayed download link for an ebook, and one grouped by topic.
Navigating the sequences isn’t a problem for me because — while I’m not a mentat like many of the commenters here — I’ve been aware of Less Wrong for years, have a reasonably high level of education, and have an interest in rationality as a hobby rather than a tool. You are likely much the same. Someone without those advantages gets sent to the sequences and either gives up or tries to slog through and gets nowhere. For that reason, the sequences aren’t a good beginner-level introduction.
EDIT: To clarify, the reason I’m saying this is because I think there needs to be a better resource than the sequences or wikipedia for introducing beginners to these topics, and I couldn’t think of one. Arbital is very exciting for that reason.
The Sequences seem to replicate all the flaws you enumerate of SJ FAQs, as well as being much more extensive.
Critics of LW-space, certainly on Tumblr, are frequently dismissed unless they have read the Sequences in their entirety.
And you still have to read them all even if you have pre-existing knowledge of logic, maths, physics, CS, …
No, not really. If you actually have that preexisting knowledge, the Sequences mostly look like a mixed-up gumbo of uncontroversial obvious stuff, strange phrasings, complete insanity (though we do tend to overemphasize the examples of that), weirdly vehement opinions on matters where science has nothing to say (like interpreting quantum physics), and overly complicated and dramatic explanations for things that are actually very clear and intuitive (like Bayes’ Law).
For instance, Bayes’ Law in real life is just the trivial consequence of taking the definition of conditional probability and slicing the joint probability in the numerator into two conditionally-dependent parts. If you explain it from the measure-theoretic/geometric/frequentist point of view, it makes perfect sense, and its application in extending Boolean reasoning to continuous degrees of truth/belief comes as a neat afterthought, especially when you include the caveat that your sample space must be a Boolean algebra before you can use Bayesian reasoning on it.
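Written out, the derivation really is one line (the standard identity, spelled out here for concreteness): by definition \(P(A \mid B) = P(A \cap B)/P(B)\), and applying the same definition the other way gives \(P(A \cap B) = P(B \mid A)\,P(A)\), so

\[
P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{P(B \mid A)\,P(A)}{P(B)}.
\]

That second step is the “slicing the joint probability in the numerator” just mentioned.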
Oh, and a remaining portion of the Sequences consists in entirely justified vehemence about how metaphilosophical naturalism does in fact reals, guys.
I’ve seen Eliezer himself say that 85% of what he wrote in there simply was not original.
I do have the preexisting knowledge and that’s exactly what they look like to me.
One additional problem I see is that there’s an asymmetry about what someone’s argument depends on. You can be told to read the entire Sequences. But if someone’s argument really does depend on the entire Sequences, that dependency cuts both ways–it also means that I only need to refute one of that large collection of Sequences and I’ve refuted the argument.
Of course it never works that way. The whole Sequences is essential to understanding someone’s argument, except that parts that are refuted suddenly aren’t essential any more.
> entirely justified vehemence about metaphilosophical naturalism does in fact reals, guys.
Not all that justified, since they don’t address most of the hard problems, including The Hard problem.
I think this sums up why “read the Sequences!” is not a good counter-argument.
For instance, Bayes’ Law in real life is just the trivial consequence of taking the definition of conditional probability and slicing the joint probability in the numerator into two conditionally-dependent parts. If you explain it from the measure-theoretic/geometric/frequentist point of view, it makes perfect sense, and its application in extending Boolean reasoning to continuous degrees of truth/belief comes as a neat afterthought, especially when you include the caveat that your sample space must be a Boolean algebra before you can use Bayesian reasoning on it.
Makes note.
Then you have the unfortunate situation where the LW interlocutor has only read the sequences and thinks he knows logic, maths, physics, CS. When confronted with someone that actually does know one of these subjects, you have one side telling the other to read the sequences and the other saying to go read Tarski.
I think people upthread are getting at the other issue people have when you point at something as big and ranging as The Sequences: what happens when I reach a sequence that is Wrong or even just questionable?
If you tell me to “Read The Sequences” before we talk about AI risk, I’m going to come back with “Okay, I read up to the point where he proposes we judge complexity using a measure that is literally uncomputable. Now what?” The number of times that happens is going to increase proportional to how informed your newb already was and how meandering your reference text is.
Better yet is when the LW interlocutor has only read the sequences and thinks he knows philosophy. When confronted with someone that actually does know philosophy, you have one side telling the other to read the sequences and the other side saying to go read Kant.
Bayes means several things in LW land.
One of them is the probability calculating rule.
Another of them is “the only epistemology you’ll ever need”. The idea is to use Bayesian maths as an algorithm, turning the handle on it and cranking out Truth without any need of that pesky intuition or creativity.
A third is heuristic advice with no strong quantitative element. In the famous Science versus Bayes article… Bayes is supposed to be the good guy… Science is characterised as following the evidence wherever it leads you, whereas Bayes is characterised by valuing consilience with existing knowledge. I wasn’t previously aware that science ignores consilience. The biologist Edward O. Wilson wrote a book on the subject.
This is kind of what Eliezer is trying to solve with Arbital. A wiki-like project to explain complicated ideas for people at different levels of knowledge.
https://www.facebook.com/yudkowsky/posts/10153995446789228
To any readers: that Facebook post says Arbital is *not* ready to handle an influx of users at this time. Please don’t break it.
There are really two, quite distinct problems here:
1) Some complete idiot is wrong on the Internet about some important issue, and I am compelled for the sake of the general intellectual ecosystem to correct him or her.
2) Some complete idiot thinks *I’m* wrong on the Internet about some important issue, and feels compelled for the sake of the general intellectual ecosystem to correct *me*.
These are obviously very different cases, and should be treated entirely differently. In the first case, for example, it is very rude of said idiot to express such a stupid opinion so openly, in a way that might persuade uninformed or easily-swayed audiences that there’s actually some validity to his or her idiocy. In the second case, on the other hand, it is very rude of said idiot to challenge my obviously correct opinion so openly, in a way that might persuade uninformed or easily-swayed audiences that there’s actually some validity to his or her challenge.
Miraculously, though, I find I can use the same policy in both cases, and it works quite satisfactorily: Don’t try to persuade the other person–he or she is obviously too idiotic to change his or her mind. Instead, assume that someone else who’s uninformed on the issue but neither idiotic nor closed-minded is watching the debate, and say only what is necessary to persuade such a person that the other person is an idiot. If the other person then continues arguing after having been clearly demonstrated to be an idiot, then that’s his or her wasted energy, and not one’s own concern.
I think there is a good extent to which a toxic heated issue tends to drive out moderates from the debate. This is how I feel about AI risk in the rationalist community, or abortion in pretty much any political discussion, or any other interminable issue of your choice. I have an opinion on those things. But they aren’t big deals to me, so I’d rather just STFU than get a flamewar cooking.
Generally agree but there are topics where you don’t want or need moderates. Sometimes one side actually is right and one side actually is wrong. Who needs “moderates” on the question of whether the earth’s flat, or combustion is caused by phlogiston?
Surely the moderate position on whether the world is flat is “it’s round, but not worth arguing with complete idiots about”.
But round and flat are not mutually exclusive. 🙂
[See “flat torus”.]
Because people are pretty good at convincing themselves that all sorts of ideological claptrap is as settled as the spherical nature of the earth.
And even when we’re talking about things like the spherical nature of the earth, groups like the Flat Earth Society aren’t taking the hard and obviously wrong road because they think it is correct, but because it provides a better (to them) metric for understanding other things. People who argue about flat earth today are not talking about reality, but about intellectual rigor. It may be worth engaging in this for the sake of process alone.
Even well known facts can become mystical as “everyone knows this” becomes the justification, rather than the original method of discovery. This is how knowledge is lost.
“Who needs moderates in a question where there is only one obviously correct side?”
And we mop up the blood, generation upon generation, in answer to that.
A good point, and a poetically phrased one as well.
Who needs “moderates” on the question of whether the earth’s flat, or combustion is caused by phlogiston?
In those places where the flatness of the earth is non-material?
I agree that evaporative cooling is a big part of the problem. It’s not just “moderates” who get driven away – it’s also people who want to have a productive, respectful discussion.
People who want to have discussions in order to learn something new, and who approach disagreements with the idea that they might be missing something which the people on the other side understand, will be repelled from discussions of topics that tend towards interminable arguments and flamewars. People who want to fight the righteous battle against the idiocy of the people who disagree with them will be drawn to discussions of topics that tend towards interminable arguments and flamewars.
So there wind up being certain topics that respectable people just don’t talk about in public. And when the topic does come up on the internet, two packs of wolves predictably swoop in and turn the discussion into a familiar-looking frenzy.
From the perspective of some wolf packs, that might not be a bad thing.
There are plenty of groups who will tolerate being marginalized themselves as long as the other side is also thoroughly marginalized (a marginalized enemy can’t do anything meaningful to hurt you, after all) so if a social norm of “don’t talk about X because then these two bunches of idiots who are as bad as each other will show up and start fighting” is established they’re okay with it.
This also means that it’s very much in the group’s interest to seek out these fights in otherwise uninvolved fora and jump in.
In practice it seems difficult to distinguish the following types of arguments:
1. An argument where people are taking sides on a genuinely ambiguous issue (the horse/mule debate, the whitegold/blueblack dress to some extent)
2. An argument where people are failing to appreciate a well-understood body of knowledge that provides the correct answer (debating the Monty Hall Problem)
3. An argument where people disagree due to both parties holding to a different system of knowledge and assumptions that may or may not be internally consistent and comprehensive (a Christian and an atheist debating the existence of God, a Republican and Democrat debating politics). Both groups may use different vocabularies and have different assumptions at multiple levels that may take a while to challenge and untangle.
For #1, it seems like the answer is to operate with some amount of humility and realize that good people may disagree. For #2, if it truly is a solved problem, there should be enough reference points or a master thread to send people to. For #3, the challenge can be that both sides think they’re dealing in argument type #2, thinking that if the other side would just open their eyes, the whole thing could be settled. But if there is no genuine consensus among experts in that domain, resolving which set of assumptions is correct might be closer to an argument of type #1. Choosing between worldviews can itself be an ambiguous problem.
Answers I would suggest to repeated arguments of each type:
#1: “We’ve given up discussing this: ultimately, no one knows the answer.”
#2: “This has already been discussed and answered, here’s a link to the answer.”
#3: “We mostly engage in discussion with people who share certain assumptions in our community. If you’re interested in inter-community dialog which challenges those assumptions, there’s a separate space for that.”
What about
#4: arguments that one side thinks are settled, and the other side doesn’t…?
That’s basically 3, isn’t it?
So, for example, if I go onto Feministing tomorrow and loudly disagree that Bad Thing Du Jour Is Patriarchy, then I’m either gonna get banned or pointed at some Feminism 101 FAQ. But the problem is not necessarily going to be that I don’t understand the arguments put forward, since I do; the problem is going to be that all of those arguments are based on premises I reject. You can’t have a real dialogue if there is no fundamental agreement about reality.
From the perspective of Feministing it makes sense to exclude or box (ban or 101) me. What would be the point of having me around, when literally everything on that site is premised on ideas I reject? Where would be the benefit for their community?
So it’s not always really about Interminable Arguments. Sometimes it’s just that someone doesn’t agree that the sky is green and the grass is blue, or whatever combination happens to float your community’s particular boat. Far better to spend your effort on in-community bonding and campaigning against painters who insist on depicting the sky as being blue, those damn shitlords.
(The complication comes in, of course, when they’ve used premises you reject to come to conclusions with which you agree. That one’s stung me to the quick recently; one arrives, admires the wallpaper, then discovers that the floor is made of bees.)
Is that really so weird? There are lots of wrong beliefs that can lead to common conclusions; there’s even an SMBC comic about it. “There is a vengeful god who punishes sinners; therefore, cooperate in the prisoner’s dilemma” is a lot more fragile, in terms of evidence, than looking at the problem through game theory, but it leads to the same place.
Plus, motivated reasoning is a thing. People will ignore or look to explain away parts of their philosophy or group-identity that go against what they want, and most people want pretty similar things.
Oh, I wasn’t suggesting it was weird. Just deeply uncomfortable when you suddenly realise *just* how outgroup you are in that context. (To the point of outright dehumanisation on one occasion.)
It’s definitely where I’ve stumbled into being hated by the most people? And where I think the majority of the online fights surrounding, for example, feminism come from.
I kind of thought I *was* a feminist until I ran into modern feminism. But like almost all ideologies the assumptions matter much more than the conclusions.
If someone tells you the penultimate bottle of a sixpack is the fifth, because of “pen-“, since “penta” means five, the illogic does not change that it is the penultimate one.
I don’t like the term “101 space”. I think it sounds condescending and can insinuate academic legitimacy where none exists. This is worse when it’s paired with a fringe belief.
Agreed. Most of the time I see a “read $ideology 101” post, it’s of a nature that denies the possibility of legitimate disagreement. It pretty much says that anyone who understands the argument will agree with it. It’s a discussion shutdown.
Not at all. If you understand the argument but disagree, you will write differently from somebody who does not.
There is a difference between e.g. “But won’t the AI be smart enough to realize paperclips make no sense?” and “The Paperclip AI will be smart enough to independently figure out the iterated prisoner’s dilemma, realize that tit-for-tat wins and want to establish a reputation for cooperating early, with e.g. the local humans, so that other future, potentially-stronger-than-itself AIs it meets as it expands into space will cooperate with it rather than tear down its paperclip empire while they’re busy tiling the universe with whatever stupid thing their creators accidentally programmed into them.”
One of those gets the answer “Short answer: No. Slightly longer: Read the sequences.” The other will not.
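(Tit-for-tat itself is trivial to write down, for readers who haven’t seen it — a toy sketch with the standard prisoner’s dilemma payoffs, nothing from the actual debate:)

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

# Standard prisoner's dilemma payoffs, indexed by (my_move, their_move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(p1, p2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)  # each strategy sees the other's history
        score1 += PAYOFF[(m1, m2)]
        score2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return score1, score2

# play(tit_for_tat, tit_for_tat)   -> (600, 600): mutual cooperation
# play(tit_for_tat, always_defect) -> (199, 204): punished once, never exploited again
```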
It’s not a shutdown. Or well, it is, but what they are saying is not “There is no legit disagreement,” it is “While there may be controversy here, you are so uninformed as to not even be worth debating with.” And in my experience, this is usually correct. My first uninformed thought on any subject usually has a validity of approximately nil. This is why I read FAQs.
There are certainly cases where they may have meant the latter but said the former.
My experience matches yours. It probably depends on the particular ideology and also on the person making the “read 101” comment. I have definitely experienced the attitude that “if you disagree, you must be uninformed”.
Sometimes it’s appropriate, sometimes it’s not, and the main solution I can think of is for other experienced members of the community to loudly disapprove of any dishonest or unjustified use of “read the 101” statements. The 101 should be considered a community asset that loses its shine every time it’s wielded as a status weapon.
A community can self-police only if it is mostly honest.
I assume we were talking about a mostly honest community. I’m not totally sure what you mean specifically, but I also imagine a community wouldn’t exist long if most people in it were dishonest, at least with each other?
A dishonest community can survive just fine if they all agree on the same lie.
Same here. I have never seen a community up-end a 101 document that supported their culture because an outsider demonstrated it was factually wrong. I have never seen a 101 document with a Controversies tab, warning the community “Wait, there are these half-dozen new studies which cast doubt on our sacred theories; when newcomers bring them up criticizing our premises, they very well could be right! Tread carefully!”
They are not to educate or inform, but to proclaim the sacred values of the group.
And you damn well can discuss rationalism, UFAI, and math without reading the sequences, and if someone responds to “Wouldn’t AIs be too smart for this trap?” and “Wouldn’t AIs work through the Iterated Prisoner’s Dilemma and want to avoid this trap per their utility functions?” completely differently, the problem’s not the person asking the questions, it’s them replying entirely to signals and keywords, and not the substance of what’s being said.
I’ve seen 101 documents with Controversies tabs, they’re Wikipedia pages.
And even there, there is a HUGE movement against Controversies tabs on a whole lot of issue-related Wikipedia pages.
But then Wikipedia NPOV is a bad joke at this point anyway.
Great, then let’s make one.
I don’t care about the AI stuff one way or the other, but I do care about building communities with functional discourse norms — it’s a lot easier to embarrass factions of bullshitters if there’s something they look bad in comparison to.
(Another thing that could be useful is a tool to track stories/narratives in the media — combined with a fact checker and an index of every time the motherfuckers have lied. Steve Sailer as a service, without the stuff that puts normies off. On that note, is anyone keeping track of the darkside shit the Clinton campaign is up to? Check this out. If there’s one, there must be others. I don’t know Python or its ecosystem well enough to have any idea how one would go about implementing it, but it should be possible to build a news firehose and scan it for things like that ‘muscular’ line which are very unlikely to be coincidences, right? And remember: this would work just as well under a Republican president and his army of deep-state PR spooks, if that’s what gets you off…
(I was hoping Wesearchr was going to be that — let’s call it ‘metajournalism’, that’s a better name than ‘Steve Sailer as a service’ — but it’s not. As usual, I’m hoping someone else writes this thing so I don’t have to.))
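To make the “news firehose” idea slightly more concrete: here’s a minimal sketch of where one *might* start in Python, pulling headlines from a handful of RSS feeds and flagging distinctive phrases that surface across several outlets at once — the kind of coincidence the ‘muscular’ line represents. The feed URLs are placeholders and everything beyond the (real) feedparser library is an invented assumption, not a working metajournalism service.

```python
# Sketch: flag short phrases echoed verbatim across multiple news outlets.
from collections import defaultdict
import re

import feedparser  # real RSS-parsing library; pip install feedparser

FEEDS = [
    "https://example-outlet-one.com/rss",   # placeholder URLs
    "https://example-outlet-two.com/rss",
]

def ngrams(text, n=3):
    """All 3-word phrases in a piece of text, lowercased."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

# Map each 3-word phrase to the set of outlets using it.
phrase_sources = defaultdict(set)
for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        text = entry.get("title", "") + " " + entry.get("summary", "")
        for phrase in ngrams(text):
            phrase_sources[phrase].add(url)

# A phrase echoed by many outlets at once is a candidate talking point.
# (A real tool would filter out stock phrases against a baseline corpus,
# which this sketch omits.)
for phrase, sources in sorted(phrase_sources.items(), key=lambda kv: -len(kv[1])):
    if len(sources) >= 2:
        print(f"{len(sources)} outlets: {phrase!r}")
```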
You might want to check out the discourse in sports communities, which talk about data and narratives in a low-stakes environment, plus lots of betting and clear winners/losers.
I think the right way to handle it is either:
1. “We don’t want to debate this right now, but here’s a document that contains our responses to the objections we hear most often, see if yours is among them. This doesn’t imply you’re wrong, just that we’re not bored enough to talk about it.”
Or
2. “We do want to debate this right now, but only with people who are familiar with the following terms and arguments, look through this and make sure you’re one of them.”
This doesn’t have to be something everybody says/agrees on, but maybe it should be something some people have the option to say.
Also, as mentioned below it’s really important to frame it as “I don’t want to talk about this now” rather than “You are too wrong to be worth talking to”
I think that’s ok as long as the objection isn’t automatically dismissed. I’ve been hanging around Less Wrong stuff for a while and it’s worrying how often valid-seeming objections are put down this way. Just because an argument sounds like something that was settled doesn’t mean it was.
It also feels a little sketchy to make people learn nonstandard jargon when the objection can be stated plainly.
The ideal response might be:
“That may be true, but I think someone responded to it here. I’m too busy to look it up right now.”
It also feels a little sketchy to make people learn nonstandard jargon when the objection can be stated plainly.
Not when there’s an illusion of plainness. Oftentimes, at the depths these arguments come up from, “plain language” is talking about things that aren’t ontologically basic, but *appear* so due to quirks of human neural design. So the jargon is actually doing a clear and important function of preventing people from sticking to the ‘my sense impression is an accurate map of the territory’ fallacy. (This is one big reason for jargon to evolve – once enough people get a truly ‘intuitive’ understanding of your jargon that “everyone knows what this means”, they’re almost certainly wrong.)
Is the jargon still doing a good job when it is a reinvention of other, more mainstream jargon that does the same job? Your comment was quite condescending… it assumed that anyone who comes to LW is confused about map/territory issues, whatever their theory background.
Would it be useful to have special subfora for these discussions with more moderation and stricter rules?
I think we can divide issues into three types: frontier issues, factional issues and core issues. A frontier issue is something that a community has no entrenched positions on: no factions have yet formed. A factional issue has factions with different positions within a community, so that you can predict roughly who will say roughly what each time something related comes up. A core issue has a community position that is strongly dominant amongst its members and which may be a sacred value of that community. (Note that an issue can be frontier, or even undiscussed, in one place and core in another.) For example, racism in the American police is a frontier issue amongst econometricians (I hope!), a factional issue within the USA as a whole, and a core issue in the Black Lives Matter and All Lives Matter movements.
Suppose that I want to argue with Fooists about major tenets of Fooism, or to bring up an issue they’ve discussed a dozen times before. Wouldn’t it be helpful if there was a special subforum on Foo.com for me to do that in? This would allow people to avoid seeing the issue again if they were fed up with it, and would give people a place to ask their questions without getting yelled at for it. It might therefore make discussions more productive by reducing flaming.
The obvious problem is that it would create an echo chamber elsewhere, but I feel that “I raised my issue on the Foo Controversies board, following all the rules, and nobody gave me a decent answer after two weeks” is a moderately strong argument that the Fooists have no decent answer and a valid reason for asking elsewhere.
Another advantage is that the rules for posting on the Foo Controversies board could be stricter than for posting elsewhere. The moderators could insist that everyone have read the FAQ and punish people for blatant unfamiliarity with the main terms and positions. They could also enforce higher standards of discussion by insisting on citations, decent writing, not flaming people, not playing to the audience and so on.
The obvious problem here is moderator bias, but I feel there are hidden advantages. Firstly, biased moderation is the default position on controversial issues, and having a special place where standards are meant to be higher and everyone is watching might help. Secondly, it would mean less awfulness elsewhere on the site, so the moderators might have more time (and would know that the Foo Controversies board needs extra attention). Thirdly, it would be the obvious place to have anti-Foo moderators as well as pro-Foo ones.
In order for this to be workable in anything less than the most rigorous academic contexts, the document in question needs to be brief, or it needs to be well-indexed, or whoever is passing the buck needs to specify which chapter of the document is the place to start. Classical Usenet FAQs usually went for the “well-indexed” approach. This seems to have become a lost art in some quarters.
Also, the document needs to be written with an understanding and respect for the arguments that intelligent people will be making on both sides of any contentious issue.
Fail on either of those, and “go read the document” will properly be taken as an indication that you don’t think the newbie is worth talking to. Which is usually right, because newbie, but when it goes wrong it can go very badly wrong.
My problem with approach 2 is that a lot of jargon seems to come with embedded premises. And technical re-definitions often seem like an attempt to “claim” common, morally-loaded terms for a specific side.
It would be a lot easier for me to defend affirmative action* if I defined “discrimination” as “unjust differentiation between two groups.”
The problem is that when I say “unjust” I mean “unjust from my perspective”. So, it becomes impossible for any policy I like to be called discriminatory.
This would force an interlocutor to accept my implicit notions of ‘just’, sidetrack into a debate about language-use, or talk around a bunch of very common terms.
* Trying to pick an example that would be concrete without being too distracting.
A thousand times this. In so many arguments people use morally loaded jargon. Then they are often imprecise in its use. Sometimes they use a motte-and-bailey doctrine, so reading their definitions doesn’t even let you know what the term means.
My example is from economics. Free Markets. Freedom is a morally loaded term. It’s also used pretty vaguely. People call markets that don’t meet their own definitions free.
Personal anecdote: I got tired of telling LW “stop talking about Bayes, machine learning, and statistics, go read textbooks.” It doesn’t help. Basically your solution has to work on your own tribe.
In other words, when you say what you are suggesting against an interlocutor, it makes you feel better, but doesn’t solve the problem of interlocutors continuing with nonsense. No one is going to read any documents you suggest, because reading and comprehending and building understanding is difficult, while arguing on the internet is easy and fun.
The big difference, as mentioned above by John Schilling, is that saying “go read entire textbooks” is requesting an impractical amount of effort and time, especially without a very good demonstration of why a textbook is needed (and then you’ve presumably just gone ahead and explained anyway). It’s reminiscent of the Courtier’s Reply; that’s why it didn’t work, I believe.
What Scott and John have suggested makes a lot more sense to me: a well-indexed and easy-to-use FAQ that doesn’t make unreasonable demands of the reader’s time, and tries to remain divided into relevant, self-contained chunks.
I also tried things like “read this paper,” which also did not work. I think basically if you think papers and textbooks are too much work, maybe don’t have strong opinions on topics that require that amount of preparation — which of course is not going to stop anyone from having strong opinions.
—
A lot of the stuff LW people get wrong doesn’t require an FAQ, it’s just basic misunderstandings a class in probability would fix (like confusing “Bayesian” and “using probability theory.”) Or, God forbid, confusing causation and conditional probabilities. It’s a tiring, neverending, thankless slog to correct these things. So corrections stop getting made. So community consensus reaches some energetically favorable state which doesn’t necessarily have any relationship to a textbook-informed opinion. The whole process is not truth-producing — truth-production requires effort / energy input.
I also tried things like “read this paper,” which also did not work. I think basically if you think papers and textbooks are too much work, maybe don’t have strong opinions on topics that require that amount of preparation
You just glossed over an order of magnitude or two in effort there, lumping together “papers and textbooks”.
“Go read a paper” should work, provided the person you’re saying it to is experienced in reading academic papers and already understands the basics of the field at the textbook level and the paper is specifically on topic. The former limits you to a very small fraction of the population and I don’t think you can count on a majority even in college-educated rationalist circles. The latter requires that you have actually read the paper yourself, in its entirety, and unless you’ve read it recently or it is seminal in the field it likely calls on you to reread the abstract and conclusions before using it to deflect any particular argument.
““Go read a paper” should work, provided the person you’re saying it to is experienced in reading academic papers and already understands the basics of the field at the textbook level and the paper is specifically on topic. The former limits you to a very small fraction of the population and I don’t think you can count on a majority even in college-educated rationalist circles.”
I agree that I shouldn’t count on most college-educated rationalist circle people to understand graduate level stuff and above. That would be unfair. But I should count on them having the epistemic humility to just not wade into those waters very much then, right? That’s not what happens. A lot of LW is amateurs talking about graduate+ level topics. (Although there are grad students and professors who used to post there also, on occasion — I am not talking about them).
—
The “Bayesian/probability theory” confusion is not a graduate level confusion, though. And it (and other similar stuff) comes up over and over, and it is very tiring.
My probability theory class called the “probability space realist” approach “Bayesian” and the PAC-style approach “Frequentist”.
Eliezer is a hardcore realist, so it makes sense for him to call his approach “Bayesian”. Or is there a subtle point I am missing (links?)?
The “Bayesian/probability theory” confusion is not a graduate level confusion, though. And it (and other similar stuff) comes up over and over, and it is very tiring.
Perhaps this shouldn’t be a graduate level confusion, but there are many professors, not just graduate students, who are very confused about the fundamentals of probability theory. People who should know better — e.g., philosophers of science and statisticians. I don’t think there’s another area of mathematics or logic where conceptual confusion runs so rampant.
“Go read this paper” requires a level of trust that most people haven’t earned with me. If I do read the paper, I had better not get one of the following responses when I come back with objections.
“Go read this other paper or textbook. I’m not responding to your response to the first paper until you do.”
“Even if you pointed out a major flaw in the premise of the paper, you didn’t refute it point by point. Your critique is not sufficient.”
“Your response to the paper rests on different philosophical premises than the paper. Prove your premises are better. Do it using their premises.”
“You don’t have the academic standing to disagree with the prestigious author of that paper.”
“I can’t argue the points in that paper as well as the author. So, if you want to argue with the paper, find its author and argue with them. Even if that is impossible, I will remain convinced by that paper until you do and that author concedes defeat.”
“That paper was a good statement of the argument, but not the best possible statement. Let us continue with the argument ignoring that you read the paper.”
“That paper was an interesting argument, but not the only or most convincing argument. Let us continue with the argument ignoring that you read the paper.”
“I’ve moved on from that argument, so I’m not going to respond. Never mind that I got you to back off when everyone was paying attention.”
I’ve read papers when I got similar requests. I’ve sometimes crafted detailed responses refuting them point by point. And I’ve been burned again and again. Unless I already trust you, or you have made a good argument citing that paper as a better and more complete version, I don’t feel obligated to read the paper and respond.
And more and more organizations which value quality over quantity are abandoning the idea of comments, forums, etc, for reasons you outline.
It’s not even that the general problem of having an open forum with good content hasn’t been solved. It’s that overall there doesn’t seem to even have been progress. Arguably things have gone backwards over the years.
Note, this is not refuted by niche sites with very active moderators who have managed to keep from sinking into a morass by dint of constant bailing in their own boat. It’s about the rising tide itself.
I feel like this is part of the dynamic equilibrium that makes up the atmosphere of a community. Constant arguments breaking out about a thing X means that everybody can tell that X vs. ~X is still a live controversy, and when an anti-Xist talks as though X were false, bystanders can know that that’s not a consensus position. Basically it’s a means of keeping track of where the community as a whole is on these positions.
As I see it your options are:
1) Have constant arguments over X.
2) Don’t let anyone talk about X.
3) Accept that people who are not 100% following everything the community does, including its less-active members, are going to reach erroneous conclusions about community consensus on X.
4) Hold Ecumenical Councils to establish doctrine so that people can just look it up and not have to worry.
>>4) Hold Ecumenical Councils to establish doctrine so that people can just look it up and not have to worry.
That sort of requires someone to be officially at the helm, and a collection of official authorities. How organized are the Rationalists?
Scott is the Great Leader and the rest of us are all plebs. All hail Scott!
I guess Scott has come down in favor of “establish doctrine” before.
This fits what happened in the “horse/mule” narrative well, since what set it all off again was them describing it to an outsider, and someone overhearing deciding they’d told it inaccurately.
I wonder if there’s an option along the lines of “Explicitly acknowledge that your position on X is not consensus when talking about it”, which was also something violated in that narrative’s example, but I’m dubious of the ability of people to actually follow/judge the following of a norm like that.
It seems like asking someone to do background reading is open to abuse. You could pretend your argument was beyond reproach just by making huge demands of background reading, i.e. the Courtier’s Reply.
Telling someone to read the FAQ should be accompanied with a link to the specific section of the specific FAQ. That is a meaningful “your question is answered here, and you may want to read the rest of the document to get answers to your other questions more quickly”.
Maybe in future everyone could have their personal FAQ (“things I am frequently tempted to comment on”) and just link there. Something like Knol.
| Maybe in future everyone could have their personal FAQ (“things I am frequently tempted to comment on”) and just link there. Something like Knol.
…isn’t that what blogs are for? Certainly I’ve known people to write their ‘definitive answer to X’ and cite the reason for writing it to be so they could link others to it later.
This is so true. I was recently in a discussion with an anti-vax activist, and they eventually threw up their hands in frustration and emailed me a PDF with, quote, “over a thousand peer-reviewed studies refuting vaccine science”. The individual articles were things like “lead is a neurotoxin” or “herd immunity isn’t typically achieved using only infant vaccination for one particular disease.”
But hey, throw the 2500-page FAQ at people, it’s totally an argument.
I had a similar experience arguing about AGW on Facebook. I had pointed out that a particular claim was much more extreme than the position in the latest IPCC report. The person I was arguing with responded that there was newer evidence since that report was written. I asked for some. He gave me a link to a bunch of articles, probably from Google scholar. I looked at the first of them–which was published well before the latest IPCC report.
In my experience, at least of FB climate arguments, many people use cites to articles they have not read as arguments, possibly based on what someone else said was in those articles.
I’ll second the idea of a FAQ designed for linking to. Preferably coupled with a strong principle that you don’t link someone there unless you expect the answer to their specific question to really be there.
Yes, this is a problem that I too have grown concerned about recently. These days I just prefer to cut a discussion short if I can predict it’s likely to fall into a pattern of pointless interpersonal friction ostensibly about truth-seeking. It’s not just a time-saving habit change; I also have the problem of irrationally remembering my own contributions as much worse than they were a few hours down the line, which tends to cause me a lot of anxiety over the eventual fate of the argument. Unfortunately, I’ve noticed lately that people tend to interpret it as snubbing, or intellectual cowardice. Anticipation of, or reaction to, that impression is now why I still let myself get sucked into negative-value discussions. After all, you wouldn’t want to develop a reputation as some drive-by troll.
Also in agreement with the claim that the top way to keep your conversations productive is to switch to private communication and/or book-length. It’s much more difficult to obscure the gaping holes in an argument through careful omission when you have to stretch your argument out onto 450 pages; the length forces a much more rigorous presentation of the topic. And indeed a lot of people mellow down in private communication, as the only value they derive from the interaction is left to come from the interlocutor.
(As an aside: since when did “one’s self” replace the much more English-sounding “oneself”?)
On the aside: it seems that google thinks that “oneself” is still more popular than “one’s self”, and while I was there I found this which claims “The two-word phrase one’s self is only justifiable when self is used in a spiritual, philosophical, or psychological sense.”
Possibly this is one of those mistakes, like mixing up its and it’s, which is easy to make, especially if you’ve been reading lots of philosophy and therefore are used to seeing a free-floating “self” surrounded by whitespace on both sides. Also, why “its” but “one’s” rather than “ones”? English is weird.
Also, “oneself” may indeed be English-sounding but I find it awkward to write – I never know how many s’s to use and “oneself” doesn’t look quite right. Whereas “one’s self” looks like a more natural way of writing out the sound of “oneself”.
Well, perhaps beliefs like “UFAI risk should be taken seriously” and “giving money to MIRI is worthwhile” have become parts of Scott’s “self”, in a psychological sense.
In the “peaceful little town”, the horse/mule argument is probably part of the social glue that keeps the town peaceful. It’s a ritual. I’m not generally a fan of rituals, but I’ve witnessed this one in every community I’d call functional and frankly it is one I like to participate in. Horse/Mule-style rituals, that is; I can say nothing about the “rationalist/EA community”.
Horse/Mule works because everybody in a closed community is up to speed about every argument implied in “Horse” and every argument implied in “Mule” [There is an old joke where prisoners just yell numbers around because they have numbered all jokes known to the prison community. You get the idea.]. Beyond that point it is just (sub)group signalling. And, as the example also implies, usually of subgroups that otherwise get along rather well.
This all breaks down once outsiders come into play. Big surprise. Scott gets that point right in the second half of the essay. So I guess all I’m saying is the initial example is not a good example for the real problem™.
It’s a good point. A community may be held together by its favorite debates as much as by its points of agreement.
Intellectual blood sport!
The question at the heart: what is an online community? Is it a place to have conversations, or a repository of information? Put another way, is it a coffee klatsch, or a library?
At the “library” end you have places like LessWrong, where everything is archived and curated, even decade-old Overcoming Bias blog posts. All users have karma scores. The past haunts the present and people’s reputations follow them.
Of course a place like LessWrong wants to cut down on repetitive, dead-horse conversations. Otherwise their “library” ends up with 500 copies of Moby Dick. Their information must be as concise and compact as possible.
But then you have “coffee klatsch” places like GaiaOnline, which has such a volume of traffic that nobody remembers or cares about what anyone said yesterday. I was once on a forum that purged its database from time to time. The admin reasoned that it kept the place from becoming a warzone of drama and bitterness and old grudges. Spring cleaning, if you will.
(Some people find this type of environment freeing. It uninhibits you when you know that your posts won’t be documented (and held against you) for all eternity. That’s part of the reason Snapchat had a $16b valuation – “this message will self-destruct” social media is an attractive thing.)
So it’s a question of administrator philosophy. Do you let the past haunt the present? Or do you act like everyone’s a newbie?
I have posted a technological solution in another comment. But yours is another technological solution to this problem: karma points mitigate (in my experience) many of these problems.
Yeah, that’s a really good point.
And the “asymmetric” thing is a really good explanation for why some topics are taboo: not because there’s anything wrong with them in isolation, but because they force a minority of people to rehash an emotionally painful justification of their existence every time someone gets curious. I try to say, “that’s rude”, rather than “that’s wrong”, for that reason, and give some helpful pointers (eg. “google X” not just “find a 101 site”). But obv that can only ever help a tiny bit.
And good point about emailing people. It seems like part of the problem is that it depends very much on _whom_ these topics are settled or not settled with. If it’s within a fairly stable community of people you know fairly well, you can establish a norm of “don’t just keep bashing, but it’s good to revisit every so often and see if we can get a better understanding”. But if that’s in public, where lots and lots of people peripherally involved can see, SOMEONE is going to knee-jerk defend the other position and cause a spiral.
In fact, that’s a good description — the spiral can be caused by both people feeling, like you, that they have to defend themselves; but the more often they do it, the less time they have to do it methodically and clearly. I see it especially much where two communities overlap.
These have gotten so repetitive and annoying that I made a joke argument calendar to shame us for it
Maybe I shouldn’t ask this, but what’s the “N E O T E N Y” item? Don’t think I’m familiar with that one (well, I know what the dictionary meaning of neoteny is, but haven’t run into any endless debates related to it).
Discussions like this and this, most likely.
Thanks!
And this.
My favorite was that time when someone asked “Do these people who hate cupcakes and cuddles and curiosity and open borders and nonviolence and being nice also have glowing red eyes?”
I’d nominate The Offspring’s “The Kids Aren’t Alright”.
It says nothing about having or not having children, though. It only laments the fact that many of them fail to thrive — arguably, it calls for improving the world, which is relatively mature and not self-centered.
Adolescents – I Hate Children
Appropriate for so many reasons.
Scott, there is another (partial) solution to this problem. A technological solution. And with all the talk about AI in this blog I am surprised you have not considered anything along these lines. 🙂
Suppose there is an algorithm that detects when someone is rehashing some old flamewar that has been covered in another thread. At the end of each comment flagged by the algorithm, the algorithm could automatically reply with a link saying something like “isn’t this a rehash of this debate?” (Or before the person posts, or while the person is typing his or her post, the algorithm could notify them about duplicate discussions. This is how things operate on “ask any question” websites such as Stack Exchange.)
Now, I say this is a _partial_ solution because no algorithm is 100% accurate. But it could mitigate some of these problems. If the algorithm was accurate, other people would feel little pressure to reply as well.
(If your website team would like to incorporate a machine learning algorithm in your website, let me know. :))
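For concreteness, here is one minimal sketch of how such a rehash detector might work, using TF-IDF vectors and cosine similarity from scikit-learn. The archived threads and the similarity threshold below are invented for illustration; a real deployment would need tuning and a much richer notion of “same argument” than lexical overlap.

```python
# Sketch: flag a draft comment as a possible rehash of an archived thread.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented example threads standing in for a real comment archive.
archived_threads = {
    "thread_42": "Won't the AI be smart enough to realize paperclips make no sense?",
    "thread_77": "Is Bayesian reasoning just probability theory with a different name?",
}

draft = "Surely a superintelligent AI would see that maximizing paperclips is pointless."

# Vectorize the archive plus the draft in one shared vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(archived_threads.values()) + [draft])

# Similarity between the draft (last row) and every archived thread.
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
for (thread_id, _), score in zip(archived_threads.items(), scores):
    if score > 0.2:  # arbitrary threshold, would need tuning in practice
        print(f"Isn't this a rehash of {thread_id}? (similarity {score:.2f})")
```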
There is a subreddit with a definition bot, which replies to threads with a list of definitions for any terms found in the original post for which definitions exist in the FAQ. Using the terms in a different way without explicit justification is a moderation-worthy offense. While this mostly serves the function of minimizing motte-and-bailey, talking-past-each-other, and the need for tabooing terms, it seems like a start toward what you described.
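The actual bot’s code isn’t something I’ve seen, but the mechanism described is simple enough to sketch. Assuming a hand-maintained FAQ dictionary (the entries below are invented for illustration), the reply logic might look like:

```python
# Sketch: reply to a post with FAQ definitions for any terms it uses.
import re

FAQ_DEFINITIONS = {
    "motte and bailey": "retreating from a bold claim to a trivially defensible one when challenged",
    "steelman": "the strongest version of an opponent's argument, engaged in good faith",
}

def definition_reply(post_text):
    """Build a reply listing FAQ definitions for terms found in the post."""
    found = [term for term in FAQ_DEFINITIONS
             if re.search(r"\b" + re.escape(term) + r"\b", post_text.lower())]
    if not found:
        return None  # bot stays silent if no FAQ terms appear
    lines = ["Definitions from the FAQ for terms used in this post:"]
    lines += [f"* {term}: {FAQ_DEFINITIONS[term]}" for term in found]
    return "\n".join(lines)

print(definition_reply("I think your steelman of my position misses the point."))
```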
These arguments get stylized enough that they can be presented as a chart with a bunch of nodes. A program might be able to say “you’re at this node on the chart”. Humans certainly could.
These arguments get stylized enough that they can be presented as a chart with a bunch of nodes.
That sounds fascinating.
Here’s something relatively innocuous. I found it by searching on [debate flowchart images].
People are also developing software/spreadsheet templates for recording competitive debate rounds, (just look up “debate flow”) which could be applied to general discussion formats, or be used to better structure a discussion! Ensure that certain points aren’t forgotten due to a certain branch gaining all of the attention.
(So much nostalgia. I did Chinese-style right-to-left flow b/c I was left handed, hah. And I used to take lecture notes in the same format.)
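To make the node-chart idea concrete: a recurring debate could be encoded as a plain dictionary mapping each stock argument to its stock replies, so that a program (or a human) can say “you’re at this node.” The nodes below are invented stand-ins, not any community’s actual chart.

```python
# Sketch: a recurring debate as a directed graph of stock moves.
ARGUMENT_GRAPH = {
    "AI won't be dangerous": ["won't it be too smart for that?",
                              "won't we just turn it off?"],
    "won't it be too smart for that?": ["orthogonality thesis reply"],
    "won't we just turn it off?": ["instrumental convergence reply"],
}

def stock_responses(node):
    """Return the stock follow-ups recorded for a given argument node."""
    return ARGUMENT_GRAPH.get(node, [])

# "You're at this node on the chart":
print(stock_responses("won't we just turn it off?"))
```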
A lot of machine learning algorithms do not concern themselves with being able to explain their reasoning. They are black boxes.
But there is a line of machine learning research that does. For instance: https://en.wikipedia.org/wiki/Markov_logic_network
These Markov networks build hypotheses and probabilities around them. I think they could work in the way you describe, but I have never worked with them.
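For anyone curious what the linked formalism amounts to, here is a hand-rolled toy of the Markov logic network idea: weighted logical formulas over boolean atoms, with each possible world’s probability proportional to exp of the total weight of the formulas it satisfies. The atoms and weights are invented purely for illustration, and brute-force enumeration like this only scales to tiny domains.

```python
# Toy propositional Markov logic network, brute-forced over all worlds.
import itertools
import math

atoms = ["is_rehash", "is_long", "is_heated"]

# Each formula is (weight, predicate over a world). Higher weight means
# worlds satisfying the formula are exponentially more probable.
formulas = [
    (1.5, lambda w: (not w["is_rehash"]) or w["is_long"]),   # rehash -> long thread
    (0.8, lambda w: (not w["is_long"]) or w["is_heated"]),   # long thread -> heated
]

def weight(world):
    """Unnormalized weight of a world: exp(sum of satisfied formula weights)."""
    return math.exp(sum(wt for wt, f in formulas if f(world)))

worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]

# Conditional query P(is_rehash | is_heated), computed by summing world
# weights -- this is the (intractable in general) MLN inference step.
num = sum(weight(w) for w in worlds if w["is_rehash"] and w["is_heated"])
den = sum(weight(w) for w in worlds if w["is_heated"])
print(f"P(rehash | heated) = {num / den:.3f}")
```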
@ Ricardo Cruz
(Or before the person posts, or while the person is typing his or her post, the algorithm could notify them about duplicate discussions.)
I like this idea. A link to a recent good SSC discussion of the topic will show how the terms are used in context, what opinions are already familiar, and what tone is acceptable on the topic.
A thoughtful person can edit zis post accordingly, or a less thoughtful person can go ahead and Post it … on the old thread … via a very convenient link the algorithm offers.
The only workable answer to this problem I’ve ever encountered is the FAQ, or things that are equivalent to a FAQ. This is what academia is about, with the whole series of things you have verifiably been informed about before you get to debate the next level.
But creating a norm of “Don’t talk about issues where you haven’t read the FAQ” is really hard to enforce… and a lot of topics don’t have a proper authoritative FAQ that people agree on.
I agree. But I think this goes along the line I described in my post above. It is not necessary for this FAQ to be curated by humans. Something very close to a FAQ could be automated by current machine learning algorithms.
My impression is that the misuse of “this is not a 101 space” and of “as a [such-and-such] I find it hurtful to be asked to discuss this question” are far more prevalent and pernicious problems than the “intractable debate” problem. If we face a choice between promoting norms of accepting these vs promoting norms opposing these (e.g. always give reasons for positions you take), which we may, it seems to me like almost every space would benefit from more of the latter.
Some of it depends on what you’re talking about. If you’re arguing with creationists it helps to have a FAQ which says “this is a list of species that have been observed to evolve” for the next time a creationist tells you that species have never been observed to evolve.
In most less clear cut subjects, it doesn’t work so well.
I think the phrasing “this is not a 101 space” is pointlessly insulting, but the idea of “we’re kind of tired of this argument and feel like it’s already been addressed, please see our FAQ here and if you don’t like it go to our special Debate Forum to argue about it” is a useful idea in certain circles.
I agree that it’s useful in “certain circles,” but claim that it only works in an incredibly tiny minority of circles, comprised of the most epistemically virtuous and untroubled by partisan controversies. (I worry that it may be one of those solutions which, ironically, only works for those who don’t really need it). Outside of supremely well functioning virtuous circles, I think even polite 101 responses just lend themselves to anyone who disagrees being told that they are so obviously wrong that they don’t need to bother explaining why. Indeed, this is virtually the only use I see these kinds of responses ever being put to.
Having a dedicated forum to address and host such disagreements, with a lengthy description of your view in the FAQ which sincerely attempts to engage with the objections people have made and are making, sounds good.
But this is basically the obverse of the ‘101’ response!
Re: the newbie problem.
We’ve seen something similar come up in a hackspace that’s been running for a few years.
There’s a wiki and a mailing list and newbies often post questions on the list.
A while back some of the older members were gradually getting more and more ratty at newbies, and more and more often responding with things verging on abuse when newbies would ask [standard common question 7] or commit [minor common violation of the rules or norms 3]. Add in the oldsters complaining that there was a “serious problem” because people just weren’t learning or “reading the wiki”.
Of course the real problem was that they’d stopped seeing the newbies as individuals and started seeing them as a vague homogeneous group of strangers.
Happily, once it was explicitly pointed out that “no, this person has never before violated any rule in any way”, “this is literally this person’s first question ever to the group, stop being a colossal dickhead to them over it” or “It’s unreasonable to expect newbies to have read close to 500 wiki entries”, people looked at how they were acting and actually stopped being dicks to newbies.
In that kind of group that kind of intervention actually worked and for a long time the mailing list has been a far nicer and far more newbie-friendly place.
Part of the solution is to give the people being dicks a verbal kicking because “no, I don’t care how many times you’ve heard that, for them it’s the first and you asked the same question when you joined the community so suck it up and act like an adult” (I suspect this tactic would not work in any tumblr-style community)
Part of the solution is for the older members of the group to periodically step back and let the younger mid-level members take up the reins. If you’ve seen the same question two thousand times, let the person who’s only seen it 20 times answer it.
I would suggest that this is, in part, why teaching is a skill. Just being an adult doesn’t give you the infinite patience required to gently explain the same concepts forever to an ever-changing sea of nameless faces who don’t especially want to learn.
For any group who can be asked questions: create a separate -newbies subforum/newsgroup/mailing list. Make it opt-in—don’t automatically force everyone from the main place to hear these questions. Reward the people who choose to do so anyway. Limit tenure to avoid burnout, like with 911 dispatchers.
It can work on forums which have multiple boards, because you can use one board to contain all the interminable arguments.
I’ve seen this work well on the Ship of Fools, where I’ve participated on-and-off for 13 years. It’s “the magazine of Christian unrest”, frequented by Christians of all sorts of persuasions, some atheists, and some people of other faiths. Debate is very much encouraged. But topics that have been argued at great length and become very repetitive (homosexuality, evolution, biblical infallibility, etc) are confined to the Dead Horses board. Anyone who wants to avoid these topics can just avoid that board; anyone who wants to argue them can participate on that board. These topics are heavily policed on the other boards on the forum – but not to ban them, only to redirect them to Dead Horses. So discussion is never shut down, just contained in one place and prevented from derailing other debates. And the DH board is active and lively; it’s not just a place where threads go to die.
+1 for that solution.
This works on IRC as well.
Another example of the “Dead Horses” thread is Wikipedia’s “Perennial Proposals” page, which is where they’ll send you if you suggest things like “articles should only cite free online sources so that people can check the citations easily” or “anonymous editing should be banned”.
The point isn’t that disagreeing on these issues marks you as a bad community member; but rather that arguments posed as new are unlikely to actually be new. If someone wants to offer a genuinely new argument about whether the article “Penis” should have photographs, they’re welcome to do so; but P(novelty|penis argument) is very low.
https://en.wikipedia.org/wiki/Wikipedia:Perennial_proposals
I really like the format of that page.
This highlights the degree to which the Rationalist Community is not a community.
Back when 95% of the discourse actually took place on lesswrong.com, a solution like this might have worked. But now people are spread across their own blogs, Tumblr (which, as far as I can tell, is a mimetic weapon inflicted on us by an early-stage unfriendly AI), Twitter and Facebook. There can be no mimetic hygiene within the community while there is no central organized hub, and we have people hosting Catholic Rationalist blogs and still claiming to be part of the community. Put another way, in the current milieu there is absolutely no footing for situating Slate Star Codex relative to The Future Primaeval in terms of Rationalist Orthodoxy.
Ironically (?) I think the Eternal September effect has actually strangled Less Wrong, and I think it’s theoretically salvageable as a hub for Rationalist discussion, but there isn’t the will or vision to do so.
You know, I vaguely remember when tumblr was first created, and I’m reasonably certain the point was to be an easy place to host a photo blog for creative types, and still to this day all I read there are porn feeds. It wasn’t meant to be a place for discussion. Same with Twitter. It was meant for ‘hey small group of friends who care to know this, I’m getting tacos on 6th St right now, come join me if you’re in the area.’ It’s commentary in itself on something, though I don’t know what, that we use these (and worse yet, imgur) as media for debate.
People are lazy. Why bother going to another website to discuss things when you’re already on Facebook/Twitter/Tumblr?
Even the relative handful of active communities that aren’t on the “social” platforms tend to be blog comment sections (*cough*) where once they would have been web forums, or even earlier Usenet groups or mailing lists.
What sort of will or vision do you think is required?
*raises clenched fist to the sky* Mine.
But seriously, all I mean is that there is no actual person who is invested in establishing a good and lasting Rationalist Community, so one won’t be made.
For example, after reading this thread, I looked into how much effort it would take for me to just go ahead and create my own Rationalist Community Hub. You could say I had a vision for what I wanted to create. I determined that it would take a lot of my time and at least some of my money if I wanted to do a decent job at it, and I knew I wasn’t up to it, so I aborted. So, I lack the will to execute on this.
On Less Wrong there have recently been discussions about what could be done to “save” Less Wrong. Lots of people make lots of suggestions. Nobody has the means or motive to implement any of these suggestions. (The “vision” is vague and unfocused, the “will” is not aligned.) The discussion just sort of drops off the page. Maybe sometimes a minor tweak will come out of it, something like deactivating the Main page. There’s usually an agonizing need for consensus on Less Wrong, where tasks like bannings and site improvements are “put to the committee” instead of just being solved by the person who is most logically responsible for them. So when I say “will” I am also trying to capture a sense of “just do it”.
Then do it.
Fine.
He just said he doesn’t care enough to invest the required amount of time.
I’d prefer to report things anonymously, but you can’t provide a message when doing so. Whether you meant “memetic” or “mimetic”, it’s hard to read your statement that there can be no “rationalist community” with the filthy Catholics around as either kind or necessary.
That’s what I get for trusting my spellcheck.
So, part of me wants to apologize for hurting anybody’s feelings, but I can’t wholeheartedly do so. I feel like there are certain beliefs that are incompatible with the project of perfecting one’s rationality.
I didn’t say “filthy Catholics”. As a (former? lapsed?) Confirmed Catholic, I think I’m qualified to talk specifically about that specific sect. I would in fact double down on the assertion that making a “Catholic Rationalist” blog is probably not helping the Art of Rationality and it’s probably not helping Catholicism much either, beyond giving one more person one more tool to manage their cognitive dissonance.
I find it quite fine to say, though obviously utterly wrong, as long as you actually say it rather than assume everyone thinks it and talk past it. 🙂
I suspect that there’s a substantial audience for a ‘rationalist hub’ that is not LessWrong. I get the feeling that there are quite a few people (myself included) who read/comment on SSC, or Rationalist tumblr, but who are averse to the style/culture of LessWrong. I think the reason for this is that LessWrong is so strongly imbued with the personality/beliefs of one person (EY) that it cannot function well as a hub for people who react negatively to EY.
I don’t know what such a hub would look like, but reddit-style comment threading is an obvious must-have.
SCOTT IS ASLEEP
( ͡° ͜ʖ ͡°)
POST HORSES
You’re very naughty today.
There is another problem you don’t address… what happens when a community has long ago settled on THE ANSWER to a question… but it’s genuinely a bad answer.
It’s most obvious with some religions which encode the practice of not changing THE ANSWER after it’s been chosen by important people with special hats. Often cultural values have changed so much or knowledge about the world has changed so much since then that the answers they’ve settled on are now obviously crazy.
Communities can hit the same problem where a charismatic founding member has caused the membership of the community to be partly shaped by who agrees/disagrees with them. You can then end up with a big group who all see [ANSWER] as obvious with lots of outsiders who can see that [ANSWER] is obviously absurd… but if they enter the community to say so then they’ll be told that it’s a settled question, pointed to the relevant holy writ and then shouted down.
If the group has a tradition of allowing any argument then there’s at least a chance that eventually enough people will be convinced to question the holy writ. On the other hand if the community has a “this is not a 101 space” norm then awful [ANSWER] shall persist forever.
Your “bad answers” are my “things that are keeping my pretty good religious community together.” Unless these outsiders who see that our answers are clearly bad can either convince me that adopting their new answers won’t cause my community to collapse, or convince me that they can offer me something that meets my needs just as well, I have very good reasons for not wanting to listen to them.
I have no idea which religion you follow, but they all have a reasonably large selection of nonsensical rules, so I’ll go with a representative one which manifests both properties: being based on false information and being based on things which don’t make any sense.
I find it hard to see any way in which forbidding people from eating rabbit “because he cheweth the cud, but divideth not the hoof” achieves much functionally to keep a community together beyond being a crazy rule that your community followed in the past at cost to itself. The fact that it’s based on false premises (that rabbits chew the cud) and premises which don’t make sense (it mattering if something chews the cud) are just icing on the cake.
Following weird rules for no reason grants a certain level of cultural identity but if that’s the only argument then it would be nice to make sure those weird rules don’t have a significant cost in human suffering and pain.
The rule I mention above does have a very real cost in human suffering and pain because rabbit is a good source of food.
My point is that groups who have no mechanism for removing weird rules based on beliefs we now know to be false are going to end up with a lot of nonsensical rules based on fantasy with no way to change the situation.
Good example!
Have you looked at the divergent paths of Orthodox, Conservative, and Reform Judaism? Because it is difficult for me to believe that you have and that you are still rejecting the idea that these rules keep communities together. Or maybe you’re not saying that you reject this, but rather that you don’t see a value to my religious community existing. But if it’s the latter, I understand your case but I hope you understand that I’m not rejecting your views about what dietary restrictions my religion should have because I haven’t heard the point you’re making about rabbits.
Edit: You know we’re not saying other people shouldn’t eat rabbits, right? Do you really think Jews not eating rabbit is exerting a major cost in pain and suffering?
I’m not outright rejecting the idea that “rules keep communities together”.
I’m objecting to the idea that just because something is a rule it is de facto a good and positive rule purely because it is a rule that exists and rules keep communities together.
Rules can have negative effects and if you’re going to pick nonsense rules to keep your community together then they should ideally be actually helpful or at the very least not-harmful to the carriers.
If they are intended to be helpful at the start but are based on factual beliefs about the world then there should also be some mechanism for your community to update the rules if and when you learn that your original reason for creating them was based on incorrect factual beliefs.
Following their customs and rules allowed the group to survive to this day. It is difficult to determine which specific rules and customs were responsible, and in what measure, for the survival of the group. It may be that some of the utterly mystifying restrictions have some purpose which is obscure to us even today.
It is quite risky to start removing bricks from a building, not being extremely sure which are load-bearing, and which are not. Indeed – even if you are certain that one is not load-bearing, it may have some ancillary purpose, like keeping the heat in along with many of the others. Remove enough of even the non-load-bearing ones, and you’re left with a ruin that nobody can live in. Same with societies.
@Anonymous
Again though, that’s a fully general counterargument.
It assumes all rules are good/positive purely because they are rules that currently exist, even if they’re pretty obviously written by someone in the middle of a magic mushroom trip.
You can apply it to every rule followed by every human alive today. They’ve made it to their current age following the rules they follow, so you can’t say they wouldn’t have been hit by a bus if they hadn’t followed some rule; hence every single rule is considered positive by default.
We have mechanisms for updating rules, obviously. But one of the questions religions are faced with is “how easy should that mechanism be?” Again, based on looking at the experiences of the various denominations within Judaism, it is certainly possible to have a mechanism for changing rules that is so easy that you don’t have much of a religious community anymore pretty quickly.
So what exactly should those mechanisms be/to what extent should we be changing our rules? Those are difficult questions, but they’re the questions we’re faced with. It seems where we started originally was that you thought it was a problem that your (certainly correct!) belief on the digestive systems of animals was not going to lead to us updating our beliefs or practices. I do not think that is a problem, and if you think that is a problem, that seems like an illustration of why you probably don’t have much to add to the much more difficult, complicated discussions around to what extent we should be updating our rules.
@Emily
The dietary restrictions were just meant to be a basic example and it’s more than just religions that I was referring to in the first post.
Though from it I would take a couple of things.
When your community is choosing rules, including any factual basis or justification for the rule is probably a good step. If you say “don’t use wood for X because of woodworm” then in later years, when considering the rule, it’s easier to say “well, wood type Y is immune to woodworm so that rule shouldn’t apply to it”.
Without the justification included it’s easier to ignore any reason you had the rule and appeal to mysterious reasons.
Or put even more simply: if you feel you must build a Chesterton’s fence, then at least try to build a sign next to it explaining its reason for existing. If the sign says “beware of the bull” vs “beware of the unicorn”, it’s easier to make choices later about that fence.
My examples of harmful beliefs were probably too weaksauce, so let’s try a more severe one, stepping right away from Judaism.
If your religion calls for an infant from a foreign tribe to be cast into the holy lake at the foot of the pyramid steps once a year to make sure the harvest is good, then it doesn’t matter how much the yearly drowning brings everyone together and creates community harmony; the realities of meteorology, its lack of real effect on the harvest, and murder being bad mean the tradition should be stopped.
There’s also the question of what price for community harmony is acceptable, both to your own community and to the communities around you.
If your religion mandates everyone wearing silly hats 3 times a year it’s probably not hurting anyone. If your religion is anti-vaccination that can cause real death and suffering to people who aren’t part of your religion.
Sure, I’m sold on some religions having beliefs/practices that are harmful and not worth it.
What I’m not sold on is where we initially started, which is that it’s inherently a problem that if you come to my religious group (or, ok, non-religious group) and point out that one of our rules is “obviously absurd” that we’re not going to change it – and that’s even if you make a reasonably good case that, say, a justification that was given for it is factually wrong. We don’t change our rules based on that. That doesn’t mean we don’t have processes for changing our rules. For what you’re arguing, pointing out some examples of religious practices that would be better if “outsiders pointing out absurdities and getting people to change rules” were the decision rule process is not enough. As a general rule, would yours be a good one? I certainly don’t think it would be good for my religion, even though it undoubtedly has what you view (perhaps correctly!) as absurdities, and what some outsiders would argue are dangerous, harmful absurdities.
@Emily
For a group like the LessWrongians, who hold having factually correct beliefs about the universe as a terminal value, being corrected by anyone, internal or external to the group, is a positive for their terminal goal of being less wrong about things.
For groups who don’t particularly care about what’s true/real and what isn’t, but instead value tradition/culture/community in its own right, I can see where you’re coming from. If having the weird nonsensical rules as a form of cultural identity is a terminal goal in its own right, then resisting changes, even those based on simple reality, would be a net positive in terms of achieving those goals unless the disadvantage conferred is really massive.
You initially included religions as an example. You are modifying that?
I do not think you are doing a good job of characterizing the motivations of people who are in communities which seek out truths in different ways – or seek out different truths – than you do.
For instance, let’s say your options for communities were one which affirmed that having and raising children was a worthy activity and source of meaning, and also that you shouldn’t eat rabbit, vs. one which affirmed neither, and as a result the people in the first community had children and didn’t eat rabbit and the people in the second didn’t have kids and did eat rabbit. Which community is pursuing truth more? That should depend mainly on whether you think having and raising kids is a worthy activity and source of meaning. Is that true or not true? Because the rabbit thing is pretty minor in the scheme of how you live your life.
Of course, we really choose from many communities. Nonetheless, every bundle of beliefs is not available to us, and what is available is not random – certain beliefs go together.
At a slight tangent to the orthodox Jewish part of the discussion …
Jewish law does have ways of correcting rules that people think are wrong. So far as I know, no disobedient sons have been stoned to death in recent centuries (millenia?), and Maimonides (among others) reads enough restrictions into that rule to conclude that it will never happen.
The accepted view is that the law is according to the opinion of the latest authorities.
The interesting question might be why some apparently irrational rules get changed and some don’t. One possible answer is that irrational but harmless rules are kept, irrational but seriously damaging rules get interpreted around.
@Emily
“Which community is pursuing truth more?”
What number does a bag of jelly represent?
Are cupcakes true or false?
That’s a meaningless question.
As you lay it out neither are referencing “truth” in any way.
On the other hand.
If they both start off not eating rabbit because they believe rabbit is poisonous, and later it is confirmed that rabbit is not poisonous, and one community accepts that information and starts eating rabbit while the other maintains that it must be poisonous in some abstract way, then the former might be said to be more truth-seeking, since they are willing to change their beliefs and actions when faced with new information while the second just tries to justify its old position.
You appear to have got very very very hung up on the example. Rabbit is not the only random/crazy rule. I picked it because it’s simple, obviously nonsensical and based on false information. The level of harm is very very small. But it’s not the only cultural/religious belief that shares the quality of being obviously wrong, at least somewhat harmful and based on bad information.
Are you ok if we aim this away from your peer group? For the sake of argument, can we try pointing it at any sects of your religion which you don’t like?
Can you think of any extreme, unpleasant, ultra-orthodox etc sects who maintain traditions which harm children, which harm girls/women born into them (who get little or no choice in the matter), which restrict their freedoms or make their lives worse in some other significant ways?
Can you think of any southern churches where religious/cultural beliefs about the roles of women based on the Bible actually harm women born into them by restricting their access to education, because their parents’ religion has decreed that their duty in life is to be obedient to the men around them and as pregnant as possible?
Can you think of any theocracies which inflict their version of The Rules (no matter how crazy) on their citizens regardless of the citizens opinions on the matter?
It’s about bad rules, not rabbits. Rabbit is just a trivial example which ate the rest of the discussion.
I already agreed with you that of course there are examples of really terrible, destructive religious rules. But you were proposing a broad decision rule/problem, and you didn’t specify that you were just talking about the really terrible, destructive religious rules. In fact, you proposed rabbit as an example, and then justified this example by saying it caused significant pain! It seems to me you’re now retreating from that position to a more protected motte, without actually saying that you modified your initial position.
If I’m understanding you, you also don’t think that “having and raising children is a worthy activity and source of meaning” is something that has truth value? (Like, it’s not true or false, it’s something else entirely?) That would not surprise me, but that’s a pretty narrow way of looking at truth. It’s possible to get so caught up in the knowledge that (as an example, feel free to pick some other factual stuff that is more important to you personally) rabbit is not poisonous, and that other communities are wrong in thinking rabbits are poisonous, and wind up with bad answers on the more important stuff. Which is to say, I don’t belong to a religious community out of lack of interest in pursuing truth; I do it in part because there are particular truths I think it is important to be regularly reminded of, because otherwise it is so easy to lose sight of them.
Murphy suggests that communities with dangerously false ideas must be constantly challenged by outsiders.
Emily challenges Murphy’s dangerously false idea as an outsider, by saying that communities should be allowed to maintain dangerously false ideas.
I do believe it probably caused harm historically, but probably only “significant” in the statistical sense, where any additional suffering would only be visible if you could look at enough people: lots of people slightly poorer than they would have been had the Arbitrary Rule chosen to create cultural identity been something more neutral or net-positive, like “don’t eat piss-soaked snow” or “don’t paint pictures of cats on tents,” rather than “don’t allow your children to eat this perfectly good and safe protein source.”
In modern America it’s probably harm-irrelevant, since people are so rarely lacking protein sources, and in the modern American context it’s probably not worth the trouble to repeal.
I gave up on the harm angle because you made it clear that you don’t care about it and that if I did then it was an “illustration of why you probably don’t have much to add”.
You seem to use “truth” where I’d use the term “positive” or “good thing”.
Cuddling up with my family in front of a warm fire is a good thing but it isn’t a truth.
Knowing the best/safest material to build a fireplace is a truth which allows me to facilitate things I consider positive and good things.
Truth seeking and good-thing seeking are neither the same thing nor mutually exclusive.
For what it’s worth, I don’t think that not eating rabbit or pig has a significant cost in human suffering or pain. Probably a non-zero cost, but IMHO approaching zero.
I eat both, but I don’t object to vegetarians, even though I obviously disagree with enough of their priors not to become vegetarian myself.
And if a vegetarian’s belief that a rabbit has enough moral worth not to be eaten even if killed painlessly is enough to risk the human pain and suffering that comes from not eating the rabbit, I think a group’s desire to maintain solidarity is also sufficient.
(IMHO)
This century the cost is very low unless you’re in a third-world country, but historically, cultural/religious dietary restrictions can cause harm, particularly in times of famine, and sometimes simply because an industry which could have reduced harm from such events ends up underdeveloped.
When counting harm, don’t just think of modern Americans walking around Whole Foods. Think in terms of every child with stunted development who could have been fed X but wasn’t, because their local culture/religion had a nonsensical rule against eating X.
I’d also distinguish it from personal moral beliefs. Cultural/religious beliefs also tend to have an element of community enforcement. If you personally believe that eating meat is wrong then that’s a choice you’re making. If your neighbours and everyone around you believes eating rabbit is wrong then you can be forced/pressured into not allowing your children to eat it even if they’re hungry and even if you think it’s a bloody stupid rule.
Though I think my religious diet example is absorbing too much of my original point. There are non-food-related rules and beliefs which many groups settle upon which can also turn out to be harmful.
@Murphy – thanks, the historical context is a very good point.
@Murphy: Think in terms of every child with stunted development who could have been fed X but wasn’t because their local culture/religion had a nonsensical rule against eating X
Can you point some of them out to me?
Seriously, I consider myself fairly well-read in history, and I don’t think I’ve ever seen this mentioned as a problem anywhere. Nor does it pass the giggle test to me. First, every religion I know of with dietary restrictions includes an “unless there’s nothing else to eat” escape clause, and second what is the situation in which e.g. a bunch of hungry Jews have nothing to eat but pork? Nobody raises swine in Israel, and the Jews of Brooklyn understand the concept of trade with their non-kosher neighbors.
Aside from hunter-gatherers, humans eat food that humans have gone out of their way to cause to grow in their environment. Who is going out of their way to grow food they aren’t willing to eat?
Really? People going hungry because of bad random cultural food-rules doesn’t pass the giggle test for you?
Your giggle test appears wildly poorly calibrated, or you’re desperate to believe that bad religious rules can’t actually hurt people.
Let’s go for a slightly more complex example. For various reasons, fish was unpopular as a food in Ireland in the 1800s, including but not limited to cultural and religious beliefs that viewed fish as a penance.
Partly because of this, the fishing and preserving industries were woefully underdeveloped (other reasons included English economic policies).
When the staple crop failed, nobody within 10 miles of the coast should have starved. The country was sitting in the middle of some of the world’s richest fishing grounds. But you can’t ramp up an industry overnight, and far more people starved than would have had the fishing industry not been hamstrung previously.
Rabbit may be a good meat source, but it is not something to be relied upon exclusively. You make an argument that it’s a stupid rule if you can’t let your starving children eat rabbit.
If you have literally nothing else to eat, rabbit is better than nothing. But rabbit as a food without a supplementary source of dietary fat is not ideal. I had a hazy idea that rabbit was a good diet food because it takes more calories to digest than it provides (I don’t know where I picked that one up), but there are more definite drawbacks to relying on it as a sole or main meat source:
The “rabbit starvation” thing may be where the initial idea of not eating rabbit came from, though another source says a reason the hare (not rabbit as such) is treif is because:
Short conclusion from article quoted: if you’re thinking of going into animal husbandry in a small way, and you have the room to raise rabbits, raise chickens instead as they’re a better food source.
And now we get into “but battery farming is cruelty!” which brings us back to where we started.
> Short conclusion from article quoted: if you’re thinking of going into animal husbandry in a small way, and you have the room to raise rabbits, raise chickens instead as they’re a better food source.
Object-level disagreement: Chickens are noisy, more fragile, and carry more diseases shared with humans. They also reproduce more slowly and need more space. Prior to the development of dedicated (i.e., industrial) poultry raising, rabbits consistently outproduced chickens in the homestead.
Additionally, that source is based on wild-caught rabbits. Home raised rabbits do have a fair bit of fat.
(Rabbits are more heat sensitive, and need somewhat more specialized housing, but favoring chickens is a cultural thing, imo, not necessarily a dietary one. Although rabbit starvation is a real thing for people living off what they can catch.)
This makes perfect sense for a religion, where in many cases the entire point of there being a religion is to keep a group of people together and preserve their norms and traditions; circumspect explanations of difficult-to-understand worldly phenomena that prove to be wrong when subjected to any level of rigorous testing are incidental at best to that goal.
For a community whose stated purpose is to be wrong less often when trying to discover facts about the world, norms intended to preserve traditions and induce social cohesion, at the expense of being systematically wrong about some things, make a lot less sense.
Sure! But the comment I was responding to specifically mentioned religions.
I’m not sure if I have a larger point or not in there about the rationalist ‘community,’ as in, maybe it’d be better off if it wasn’t a community at all. At least, keep it a community of people who discuss things and engage in practices designed to induce being incorrect less often about important questions, but not necessarily friends. Reduce the natural inclination to defend your tribe by not making this your tribe. It has to get harder to be objective about how much of a point someone actually has once you’ve been in a cuddle party with that person.
Part of what annoys me is when there’s no clear policy relevance for an interminable argument. Nobody in that town cared if the animal sold five or ten years ago was a horse or a mule, the argument has become entirely about itself.
I’m more sympathetic to these kinds of arguments when there’s policy relevance, but even then I feel like relevance is limited outside of an organization. The most annoying part of the “should MIRI be kicked out of EA” arguments is that nobody has the power to kick MIRI out of EA, or even knows what it would mean to do so. Once people settle on a specific practical idea (like not including MIRI speeches as centerpieces of EA summits), the argument becomes much different and less interminable. (I agree there shouldn’t be MIRI speeches as centerpieces of EA summits; also, now the argument can be limited to convincing people in the specific summit organization, and to the specific summit time.)
Eventually every argument becomes a horse/mule argument.
Personally, I’m fine with debating the same topic over and over again. In fact, doing it with new people is even better, because they can offer new insight. And every time I get to do it, I get better at it.
I wonder if the interminable debates correlate positively with writing short texts.
Not sure if I can explain it well, but I have the feeling that when people write many short texts, pretty much everyone forgets what was written about a topic months ago, so they are having the debate again. Imagine Facebook: what was written two months ago might as well not exist because no one is going to find it anyway. As an opposite of that, imagine someone writing a popular book ten years ago; the book still remains known, and people can talk about it.
In the environment of short and short-lived texts, no debate is ever closed, because anyone can reopen it again, and the old arguments against are long gone, so the new ones need to be written. Even if old members remember the conclusion the community had years ago, the conclusion itself often has a form of hundred comments, reacting on each other, sometimes not making sense out of context.
That, and short texts are inherently more punchy, less nuanced, and more prone to being inflammatory (and to being seen by the sort of people likely to get inflamed by them – I don’t pick up Derrida books, but I have friends who’ll talk about postmodernism unironically).
I think a big part of the motivation for interminable arguments has gotten lost here, because the discussion is a bit too meta and/or not cynical enough, so here’s my two cents:
A big driver of interminable arguments is that in many of these cases, agreeing to a particular conclusion more-or-less commits you to an unpleasant course of action.
If you accept the LW cryonics argument you have basically agreed to blow a substantial sum of money on dodgy-looking ‘life insurance’ policies that will make you look like a selfish dupe to your friends and family. If you accept EA-adjacent animal suffering arguments then you’re a monster if you don’t at least become a vegetarian if not a full vegan. EA itself has a core argument which, if true, mandates forking over a substantial tithe to (capital-E) Effective charities. Agreeing with arguments in favor of polyamory can literally lead to your wife or girlfriend fucking other men.
Obviously this doesn’t mean that everyone who disagrees is a ‘denier’ who just wants to avoid work. For example, I vehemently disagree with all of the above arguments not because of their conclusions but because of their premises. But at the same time it’s clear that neither I nor a lot of other people would care about many of these topics one way or the other if the arguments weren’t structured as Calls to Action.
That people have a bias toward the values of their own friends and family and don’t wish to alienate them is a good thing. I would not characterize it as wanting to avoid work. Sure, there’s some balance here and I don’t know where it is – but particularly if you are someone who finds yourself relatively easily convinced of weird ideas, you should be aware of that tendency and be wary of following ideas to their logical, could-blow-up-your-life conclusion, no matter how intellectually convincing you find them.
“I can’t see where exactly this argument went wrong, but I think it went wrong somewhere because of what it concludes about what I should be doing” isn’t necessarily a bad thought.
I disagree with that notion; your dislike of the conclusion doesn’t say anything about the argument.
Possibly what you’re getting at is that if you don’t like to do something that would be more logically consistent with your beliefs, you don’t have to – and you don’t have to feel guilty about it.
It might, depending on what you mean by dislike.
Sanity-checking results is extremely important: while you might occasionally uncover an unexpected truth from a strange result, such as with the positron, the vast majority of the time it’s a sign that the argument isn’t sound. If your ethical theory tells you that some large number of termites are as valuable as a human being, or your decision theory says that you can ‘negotiate’ with counterfactual beings in the far future or other universes, that should be a red flag.
Emily’s position here is very reasonable practical advice. You might not be able to identify the error(s) precisely but you can certainly infer their presence.
You cannot treat ethics like science.
A scientific theory should be logically and mathematically consistent, yes, but that doesn’t have anything to do with ethics.
If you try to set up a rule system describing what is right and wrong, and it gives results that don’t agree with your intuition, the obvious conjecture would be that those rules are not well fit to deduce what you think is desirable.
I’d agree with Emily if this is just stated as an observation of actual human behavior as opposed to a prescription. Behavior is rarely motivated by following abstract reasoning to its logical conclusion. If I did that, I’d probably be moved to murder a significant percentage of currently living humans and then possibly myself. Doing that would make my life very uncomfortable, however, and my life is currently pretty comfortable, so I’m not moved to change very much if any of my present behavior. I guess I value comfort in life. Emily values social cohesion and the security of being part of a group that all identifies her as one of them. Whatever people value is what moves them to act. If reason tells you some of the core premises on which you’ve justified those actions are wrong, so what? It wasn’t actually those core premises driving your actions anyway.
In the same way that a proof that says “If A, then B; if A, then not-B” shows that A is an impossibility, “If A, then B; if B, then I will be unhappy” shows that A is a premise I don’t want to adopt. From that, the right thing to do is reject A, or at least adjust it until it no longer implies B. Or I can figure out a reason why being unhappy is acceptable in this case, but the other effects of A better be pretty damn good.
Most of life is about reasoning backward from outcomes to premises. We call it “learning from experience”.
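For what it’s worth, the two patterns can be put side by side (this formalization is mine, not anything stated above): the first really is a reductio, while the second only licenses a preference against the premise, not a disproof of it.

```latex
% Genuine reductio: contradictory consequences refute the premise.
(A \to B) \land (A \to \lnot B) \;\vdash\; \lnot A
% Practical analogue: an unwanted consequence gives a reason to go
% looking for a flawed premise, but is not itself a refutation.
(A \to B) \land (B \to \text{``I will be unhappy''}) \;\not\vdash\; \lnot A
```

That gap between the two is exactly where “learning from experience” lives: the bad outcome justifies the search for the broken premise, not the declaration that one has been found.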
This sounds like yet another argument in favour of having different types of rules/norms in different places, like the “safe spaces” argument, or the reason so many people like federalism. I want there to be some places I can go to debate issues I’m not too familiar with, and others want places where they won’t be bothered by it. If (to use your first example) a trans person wants to go on a cis forum and discuss the topic, they can, and if they want to hide out where they won’t be bothered, they can do that too. Let people make meaningful choices – it usually works out pretty well.
Same reason you have the “no gender in the open thread” rule, really – you know there’s enough other places people can go to discuss it, and that it leads to bad discussion, so you block it.
I actually liked the comic. It feels like there is more to it than a first look suggests.
So did I.
OK, I’m intrigued. What do you see in it?
I thought it was an artful decomposition of that “every non-feminist is either a patriarchal egomaniac or a nerd who is afraid of women” trope.
Judging from the comments of the author linked below, this seems not to be the case.
I can see why people would like it: it’s a big bingo board.
A collection of hostile, simplified versions of opponents.
I particularly like the feminist drawn in a pool of money being given a book deal for criticizing feminism. It allows the reader to pattern match any internal dissenter to the idea of a judas just selling them out for 30 pieces of silver.
It’s like the feminist version of Chick comics.
When a real, non-strawman person comes along and says something that sounds vaguely like what the cartoon characters are saying everyone gets reminded of the comic and can thus dismiss anything they say and proceed to pointing and laughing.
You mean it’s a *comic* that, unusually for the art form, wasn’t subtle and nuanced?
If what you’re implying is that political comics are, as a whole, the lowest form of humor, then I agree.
>I particularly like the feminist drawn in a pool of money being given a book deal for criticizing feminism. It allows the reader to pattern match any internal dissenter to the idea of a judas just selling them out for 30 pieces of silver.
To be fair (unfair?) to the artist, it seems to me that you’re reading into it much further than they are, and they just wanted to make a personal attack on a specific person.
I vividly remember this comic because I had been an avid reader/lurker of Alas, A Blog for ~years~, because I like to read blogs with a lot of ideas and extensive comments sections – and after I read that comic I just could not ever go back. …And I actually am a feminist! But I just thought… this is terrible, this is dishonest, and what is reading this possibly doing to me without me noticing? I was sad about it :/
I’ve never liked his cartoons. Some of his writing is good — but his cartoons are mostly just strawmanning and weakmanning. I see very little attempt in them to understand the arguments the other side is making. The cartoons also tend to emphasize the worst arguments of the other side instead of jousting with arguments worth listening to.
That is sort of the genre of political cartooning, it’s an awful thing that no decent person should get involved in. Our host had a great point when he lined up feminist caricatures of SCWMs next to Nazi caricatures of Jews, but I think it made a point against caricatures as much as against feminism.
One thing I don’t like about the “different types of x” comic format, even when it is making fun of people I disagree with, is that it attempts to match a political or philosophical view to a particular race/gender/physical appearance/fashion sense.
Now, anyone who didn’t make such connections in his mind would be lying: one is bound to notice that feminists are more likely to be of a particular sex and to look a certain way and ditto libertarians and most other groups. But I find it very icky to point to and ridicule that because humor and stereotyping can become powerful excuses for not engaging with an idea.
It seems to subtly encourage the type of thinking which goes: “that is a pimply, nerdy, fedora-wearing, ponytail-having, fat white guy idea which you hold because you are a pimply, nerdy, fedora-wearing, ponytail-having, fat white guy” or “that is a short-haircut-having, hairy-armpitted, pierced, tattooed, butch lesbian idea which you have because you are a short-haircut-having, hairy-armpitted, pierced, tattooed, butch lesbian.”
“I think it made a point against caricatures as much as against feminism.”
The problem may be, to some extent, with political cartoons in general, especially when they are attempting to depict groups rather than individuals. The Onion editorial cartoons parody this aspect of the genre brilliantly, of course, though, like Stephen Colbert, they are quite left-leaning for my taste by virtue of being a parody of a right-wing perspective.
@Onyomi:
This physical description more-or-less describes me and a lot of my friends.
It’s a little unclear if this is a criticism of the genre or of my “worst anti-feminists” cartoon. But if it is a criticism of my cartoon, it’s unfair. I consciously avoid using those stereotypes in my cartoon, and in a few panels consciously attempted to draw the characters as conventionally attractive people, to avoid the “he’s only saying/doing this because he can’t get laid” stereotype. (e.g., “the pick-up artist” and “going his own way.”)
In fact, the only character who is drawn as “nerdy… ponytail-having, fat white guy” is my self-portrait in the final panel.
And just to be clear, I absolutely deny that there’s anything wrong with being pimply, nerdy, having a ponytail, wearing a fedora, being fat, etc. Not a single one of those things is in any way a negative trait.
I even did a whole cartoon which was pretty much about why I don’t use that stereotype in my work.
I like how your cartoon 1) displays that no matter what kind of person you are, feminists will always try to guilt you (e.g. that “I don’t hate men” makes you an anti-feminist), and 2) shows that feminists often don’t take the person seriously but try to fit them into a bingo chart in their minds (your actual bingo chart being a physical manifestation of this).
@ The Smoke
Wait a minute. It’s a classification of already identified anti-feminists into various subtypes, not a guide to discerning who is or isn’t an anti-feminist.
I have, personally, seen people say “I’m an anti-feminist because I don’t hate men”, and generally play up the “dear men, those evil feminists are out to get you” angle. They seem similar to the male feminists who go “unlike most guys, I’m not a disgusting misogynist asshole with zero respect for you, ladies”.
>I have, personally, seen people say “I’m an anti-feminist because I don’t hate men”, and generally play up the “dear men, those evil feminists are out to get you” angle.
I’m sure you’re telling the truth, but that sounds like it’d be pretty rare, unless you consider “not a feminist” to be equal to “anti-feminist”.
>It’s a little unclear if this is a criticism of the genre or of my “worst anti-feminists” cartoon.
While I agree that your comic is not particularly bad, arguing that “all the others are awful too” is pretty much a concession.
Sure, it’s not common, but, just like Scott encounters a lot more MIRI criticism than the average person, Barry probably encounters a lot more anti-feminism.
“I even did a whole cartoon which was pretty much about why I don’t use that stereotype in my work.”
I do like that cartoon; especially the “stereotyping anti-feminists who look like this lets anti-feminists who like this off the hook” part.
The former criticism wasn’t aimed so much at your anti-feminism comic, but at the sort of thing you are criticizing in that part of the neckbeard comic, though maybe also to some extent against political cartoons depicting group stereotypes in general.
It’s a bingo card. Bingo cards are about sharing frustrations within a group. The targets of those frustrations often don’t consider them legitimate: “Feminists have no business being frustrated by my anti-feminist arguments, because my arguments are correct. They should just convert.”
(It’s kind of funny that argument bingo is mostly associated with progressive groups these days. It’s a descendant of “buzzword bingo” in corporate culture, as popularized by Scott Adams in Dilbert.)
I’m a fan of both Scott and Barry (the comic’s author), and I’d really like to see the two of them discuss it.
Not quite, but close? Scott and Barry
Scott writing this:
” I haven’t punched Mike Cernovich in the face yet”
was hilarious for obvious reasons.
I may regret asking this, but who’s Mike Cernovich?
They did on Tumblr several months ago, when the comic came out. I can’t find the posts due to Tumblr’s bad user interface, but it led to Scott removing Barry from his blogroll.
EDIT: Yep, Nita found it.
I saw “cartoon” and “Scott” and my brain went to Scott Adams.
I feel like a cartoon jointly made by Scott Adams and Barry Deutsch would be extremely amusing. Or, the “making of” would be extremely amusing. Or, the police report would be extremely amusing.
Isn’t Scott Adams a libertarian? If so, presumably we could pick a topic we agree on to do a comic strip about, and in that way get along well enough to avoid creating an amusing police report. (I wouldn’t be at all surprised if he and I agreed on issues like legalization of drugs, legalization of prostitution, problems with local regulations on who can braid hair, that sort of thing.)
Or we could do a “debate” format, where we alternate drawing panels and address each other. That might be better, since having Scott and me do a comic about something we agree on would create an “elephant in the room” problem.
In any case, I suspect that both Scott Adams and I are total wimps in real life, and so wouldn’t actually present a danger to each other. 🙂
I haven’t got a clue what Scott Adams personally believes, but in terms of his public-facing opinions I have a feeling it’s some version of “whatever’s funniest or most provocative at the time”.
It specifically names “Rationalists” as one of the groups deserving mockery. It’s clearly designed to amuse SJWs and rake in pageviews and is the exact opposite of something someone actually interested in a genuine dialogue or debate would have created.
Since when don’t rationalists deserve mockery?
There’s obviously nothing wrong with mocking the rationalist movement.
But that wasn’t my intention.
The panel I assume Anon was referring to was intended to make fun of the sort of anti-feminist who couches his every unsupported opinion as “objective” fact – second row across, third row down, if you’re interested in reading it – and constantly pats himself on the back for his superior rationalism.
I initially was going to title that panel “The Objectivist,” but didn’t, because people would assume it was about Ayn Rand followers in particular, which wasn’t what I intended. So I used “The Rationalist.”
It wasn’t until I read Anon’s comment on this thread that I realized that some people might have taken the panel as a comment on the Rationalism movement. I don’t think that’s how most people reading my comic strip would understand the phrase “the Rationalist” – the Rationalism movement that I’ve encountered on Tumblr is extremely obscure, and I doubt most people have even heard of it.
I don’t believe that for a second. You knew exactly what you were doing, and now you’re backpedaling after getting caught.
@anon
That’s too harsh. I see no particular reason to think he is acting in bad faith here.
To be completely honest, just as “The Anti-Feminist Feminist” is a super-obvious stand-in for CHS, I assumed “The Rationalist” was a stand-in for Thunderf00t.
I’m not sure if it was your intention but it certainly seemed like that to me.
Theo, thanks.
Anonymous, I honestly don’t know who Thunderf00t is. (It’s possible that I’ve encountered them and forgotten; among my friends, I’m infamous for how terrible my memory is.)
As a rule, it’s completely safe to assume people making bingo cards are acting in bad faith
(But what happens when “makes a bingo card” is itself a space on a bingo card?)
Thunderf00t is a blogger (screen name of Phil Mason), who was involved in the Atheism+ mess and the #GamerGate mess.
It’s hard to find a neutral source about him. Wikipedia has an article, but it’s mostly about his professional career.
https://en.wikipedia.org/wiki/Phil_Mason
RationalWiki has an article about him. But it’s RationalWiki.
http://rationalwiki.org/wiki/Thunderf00t
Thanks for the link! I don’t think I know Thunderf00t’s work at all. I tend to ignore video bloggers, because it takes too long to listen to an argument vs. reading one.
Talk about protesting too much!
I think the “rationalist” panel roughly refers to what I’ve called on my WordPress blog an “anti-emotion rationalist” (since then, I’ve seen “proto-rationalist”, which is perhaps a better term). I strongly believe there’s a need for attempting to attack gender issues with more rationality than is usually used in such discussions. But among those who I’ve seen making this attempt in my life, I’ve too often encountered the “anti-emotion” mentality (characterized by “see, my simple argument based on basic abstract principles, and not stupid nebulous things like how one’s emotional experiences can impact their well-being, proves my point! Now you’re getting visibly frustrated — I guess that means you’re just not able to think objectively about these things like I am, are you?”). It seems unfortunately much more common than the type of rationalism I see hanging out at SSC or rationalist Tumblr.
Chiming in to agree with Liskantope. When I see various leftist spaces use the word rationalist with mocking quotation marks, they have most likely never heard of Eliezer or Scott or LW. (They may have heard of HPMOR) And their concept of the scare-quotes “rationalist” is about the person who does their best Straw Vulcan impression, usually accompanied by the Tone Argument.
Most of those self-titled “rationalists” have often never heard of Eliezer/Scott/LW/EA, either. Hell, I went “yay rationality!” years ago way before I ever read HPMOR, or found this blog. (and I don’t consider myself a capital-R Rationalist) I was using the term in its “thinking clearly and methodically about something” definition.
I would like to see someone (besides me) say that both sides of a frequently repeated argument are generating bingo cards– it’s not some special stupidity of The Other Side.
>I would like to see someone (besides me) say that both sides of a frequently repeated argument are generating bingo cards– it’s not some special stupidity of The Other Side.
Does a parody count?
http://4.bp.blogspot.com/-Q-bz4A4DVAY/TwEDVjcS9ZI/AAAAAAAAA6Y/B-j4na9sLgA/s1600/24typesof.png
Scott didn’t seem to agree with you back when he first covered this:
When I Googled for good examples of those bingo games to post above, it was pretty hard to find the Zionism ones and so on. Almost every ideological bingo game out there was feminist. This is not a coincidence.
It may have gotten more popular since then. The Bingo gag is fairly recent, and I’m pretty sure it came out of the feminist blogosphere.
(I haven’t seen one for a while, feminist or not, but I might have just gotten better at curating my reading.)
I think the “bingo card” format is much more popular in the SJ side of the internet.
But it’s easy to find substantively similar approaches from the right.
It’s not what I’m looking for, but I did like the Sane Wing.
This is a “no true Scotsman” argument.
I am demonstrably interested in civil dialog and debate, and have spent an enormous amount of time engaging in it. So if you think that anyone who is interested in debate and dialog couldn’t also produce a comic strip satire of a political movement they disagree with, then I think you’re mistaken.
More generally, I think the “one strike and you’re out” approach is not the best approach.
So then, how many strikes are we supposed to give people? What is the rule, here? And if there’s not a rule, who decides, and why?
Nobody decides how many strikes you should personally give a person. That being said, this community was at some point proud that their answer was, in most cases, more than one.
If nobody decides, then there’s a rule. If nobody decides and there’s no rule, then someone’s deciding, and they don’t want to do it explicitly. Which means they have something to hide, which makes me think some people get more strikes than others.
I’m all for forgiveness. Seven, seventy times seven, the pope decides, whichever. Vagueness, though, is where the bias creeps in.
That’s not a “no true Scotsman” argument.
It isn’t a ‘one strike and you’re out’ approach. If you start the conversation with an insult, you shouldn’t be surprised if people dismiss you immediately.
Would you have wanted to reply to my post if I had started by describing you as a “worthless SJW piece of shit” or some other inane thing?
Give Barry Deutsch a break. Everyone makes dumb jokes that are offensive in retrospect. Let’s stay on the anti-witch-hunt side of the pro-witch-hunt/anti-witch-hunt divide.
Agreed. I’m not a fan of his cartoons (and I have a few doubts about whether cartooning is a particularly good medium for discussing complex ideas). But he doesn’t deserve the dogpile. I see no indication that he is acting in bad faith. He makes a better effort than most at taking opposing viewpoints into consideration, and if you look at the Tumblr thread mentioned upthread, he did make some changes to the “types of anti-feminist” cartoon in response to criticism.
“Everyone makes dumb jokes that are offensive in retrospect.”
Not everyone publishes them on the internet under their real name and leaves them up; the fact that you assume they do is rather telling. I understand there are certain classes of people one simply isn’t supposed to criticize, but let’s not pretend this is a principled position.
Hear, hear. This has been a depressing display of butthurt humourlessness.
“I am demonstrably interested in civil dialog and debate, and have spent an enormous amount of time engaging in it.”
In my experience you’re mostly known as the “Christians don’t really believe what they believe, but if they do, then it’s immoral to be Christian” guy outside your ideological circle.
So I’ve yet to see something like this that doesn’t become a way to block the outgroup. Having a basic FAQ is great, but you need to have everyone on both sides agree with it. If you have anything stated as true in your FAQ which is at all controversial, the whole thing loses all legitimacy. There’s also the fact that many of these arguments seem to crop up from people who have been engaged in them before, if not inflamed by those who constantly go in on them.
If you’re sick of dust specks, MIRI, EA, or whatever, just don’t engage with it at the time. It’s that simple. Saying “we already solved this and won’t hear your objections” just lends credence to the idea that you’re a cult.
Something that might work for a lesswrong style group but probably wouldn’t work for groups more inclined to believe that there is only one “correct” answer:
Perhaps instead of an FAQ have something like a flowchart of different coherent positions with paths depending on precepts/values.
The community can argue about the exact shape of the chart and its leaf nodes.
You then have a yearly poll where members of the community mark their path through the flowchart and the public version is shaded to show the positions of the community and their reasoning paths.
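To make the proposal concrete, here is a minimal Python sketch of how such a flowchart poll might be represented and tallied. The node names, questions, and answers are invented for illustration; this isn’t any existing community’s chart:

```python
from collections import Counter

# Hypothetical flowchart: each node is a precept/value question whose
# answer leads either to another node or to a leaf position.
FLOWCHART = {
    "animals_matter": {
        "question": "Do animal minds carry moral weight?",
        "yes": "weight_vs_convenience",
        "no": "POSITION: eating meat is unproblematic",
    },
    "weight_vs_convenience": {
        "question": "Enough weight to outweigh human convenience?",
        "yes": "POSITION: vegetarianism is obligatory",
        "no": "POSITION: meat is permissible, reduction is praiseworthy",
    },
}

def tally_edges(responses):
    """Count how often each (node, answer) edge was traversed, so the
    public chart can be shaded by how many members took each path."""
    counts = Counter()
    for path in responses:          # one path per poll respondent
        for node, answer in path:   # path = list of (node, answer) pairs
            counts[(node, answer)] += 1
    return counts

# Two example poll responses walking different paths through the chart.
poll = [
    [("animals_matter", "yes"), ("weight_vs_convenience", "no")],
    [("animals_matter", "no")],
]
print(tally_edges(poll))
```

The point of the shading is that disagreement shows up as a fork at a specific precept, rather than as two camps yelling their leaf positions at each other.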
There is a difference between telling someone “your beliefs are wrong and evil” and “your conclusions differ from ours because you believe X to be more important than Y, while we mostly believe the opposite.”
Rather than saying, “These are the uncontroversially true answers; don’t question them,” an FAQ can say something like —
These topics are the heavily trodden paths. There are well-known arguments about them that have been worked out in detail by proponents and opponents. Previous discussions have shown that naïve discussion of these topics almost always merely retreads the well-known arguments. So we are just going to take all of these arguments as read, and gently discourage new posters from bringing them up until they’ve familiarized themselves with the existing state of discussion.
Yes, I quite liked the Christian apologetics site I found a couple of years ago, which sorted essays by topic/common questions. The essays weren’t used as the be-all-end-all answer to arguments – they continued to update sections with follow-up essays – but as a pointer to where the conversation has already been.
(I mean, a lot of the essays didn’t satisfy my questions, but I liked the structuring of the site and their efforts to answer arguments directly.)
talk.origins was seriously the best example of this ever. Summary responses to just about anything a creationist of any stripe had ever brought up as a point against Darwinian evolution with links to much more detailed responses, hierarchically organized by argument type. I guess the downside is it didn’t present the ‘other side,’ but it had the advantage of being a controversy that is pretty heavily weighted in reality toward one side (not something like growth mindset that seems honestly up in the air even among experts).
The Effective Altruism community should be discussing the effectiveness of MIRI. If large numbers of EAs were giving decent fractions of their income to feminist causes, then the EA community would need to discuss feminism also. In addition, I disagree that these “recurring debates” are useless. The last round of MIRI discussions on tumblr had a lot of new substance, especially the post trying to establish what a “reasonable research output” would look like and how to “count” conference presentations.
My main takeaway from the MIRI arguments is that I actually learn a lot from each round! Granted, it’s mainly from the people who don’t see MIRI as effective, who tend to have much more convincing arguments, as they’ve either been in or are in a related field while most defenders aren’t.
So this kind of looks like defensive posturing rather than any sort of attempt at raising the bar. If people are bringing up objections and others are learning from them you should be encouraging them, not setting up walls and moats!
“If large numbers of EAs were giving decent fractions of their income to feminist causes, then the EA community would need to discuss feminism also.”
Implication: Large numbers of EAs are giving decent fractions of their income to MIRI as opposed to other EA groups/causes.
This is a claim that needs evidence, and by not providing the evidence I must assume you don’t have it, unless you can source me.
“The last round of MIRI discussions on tumblr had a lot of new substance. Especially the post trying to establish what a ‘reasonable research output’ would look like and how to ‘count’ conference presentations.”
Implication: The research output of MIRI is a novel discussion with respect to how to weight certain publications over others.
This also seems a dubious claim.
Make explicit claims!
“This is a claim that needs evidence, and by not providing the evidence I must assume you don’t have it, unless you can source me.”
Is this a joke? This is a ridiculously absurd and aggressive statement.
I should have said “giving a sizable fraction of their charitable giving to MIRI,” because the average “effective altruist” does not give much to charity.
But regardless, in the 2014 EA survey MIRI was the fourth most common donation target among EAs who took the community survey. The numbers donating to AMF/SCI/GiveDirectly/MIRI were 211/114/111/71.
I could not find hard numbers on the amounts given. But the evidence seems to suggest that MIRI is a major recipient of EA funds.
I assume you are either just an obnoxious jerk (high probability!) or just have no idea how the EA community operates.
You don’t care about making empirical claims; otherwise you would cite stuff without being asked.
“I should have said ‘giving a sizable fraction of their charitable giving to MIRI’ because the average ‘effective altruist’ does not give much to charity.”
Source?
“I could not find hard numbers on the amounts given. But the evidence seems to suggest that MIRI is a major recipient of EA funds.”
More sources? Or is just the one?
“I assume you are either just an obnoxious jerk (high probability!) or just have no idea how the EA community operates.”
This is your True Objection: you wanted to call people who disagree with you jerks or ignorant. Have a nice life.
From the 2014 EA Survey:
The median donation among self-identified EAs who took the survey is $450, or 3.2% of their income. 114/362 (31%) give at least some money to MIRI. So, whatever stargirlprincesss’s True Objection might be, her facts are essentially correct.
“More sources? Or is just the one?… This is your True Objection: you wanted to call people who disagree with you jerks or ignorant.”
Asking people to produce an infinite number of sources and then accusing them of having hidden motives is not a good way of arguing.
@Evan Þ
I actually agree with what (I think) stargirlprincess is claiming; I’m disagreeing with how she structured it. That’s basically the whole point of this post, lol.
@sweenwyrod
“Isolated Demands for Rigour” is, I think, the appropriate LW jargon term for this, though I am disappointed that all numbers bigger than 1 are infinity. And I don’t think the motives are hidden; I think they are explicit, given the way they structured the argument. That is The Point, I guess. If you want to write an argument, you write it like an argument; if you want to name-call, well…
> Second, sometimes people are jerks. Nobody on Tumblr can just say “I don’t think AI is a big problem.” They have to say “I don’t think AI is a big problem, and the only reason some people worry about it is because they’re cultish sci-fi dorks who are too brainwashed to look up what real scientists have to say.”
Tone aside, there is a substantive point there.
There’s evidence of them treating Real Scientists who disagree with them with derision. The fact that there is a list of scientists who do agree with them doesn’t change that. This is days after another messenger was shot.
If the point was that attacking the credentials and methodology of MIRI isn’t sufficient to refute AI doom, then the point is weak. There’s not a presumption in favour of AI doom. If a person starts with a low prior for it, then it is rational for them not to raise it if they see it (only) being defended with poor methodology.
Take the outside view:
1. X is a fantastic, science-fictional-sounding claim.
2. X was “noticed” by some amateurs, (who happen to be science fiction fans) whilst the experts in the field remained oblivious.
3. The amateurs who “noticed” X react with paranoid hostility whenever the experts try to correct them.
What does that add up to?
From John Horgan’s interview with Eliezer Yudkowsky:
“But enough about the stuff you’re scared about. What about the stuff I’m scared about?”
If the empirical facts in question depend on evidence that I can’t reasonably acquire firsthand (e.g. the alleged existence of the continent of Antarctica), then a critical step of the process is evaluating the trustworthiness of the people providing me with the evidence. Same deal if there is some particularly complex analysis that needs to be performed and the question isn’t worth enough of my time to do it all from scratch – at some point I’ve got to trust someone.
Is anyone here really going to say they’ve never been presented with an argument for, say, a perpetual-motion machine or a nefarious political conspiracy, noted that the speaker exhibits the usual markers for the usual sort of crackpot in the field, and decided on that basis alone that no further attention was warranted? Because I have, and I’m pretty sure it has worked well for me in the sense that I have saved myself time and aggravation without missing out on knowing of any perpetual motion machines or nefarious political conspiracies.
> Eliezer: Because you’re trying to forecast empirical facts by psychoanalyzing people. This never works.
Forecasting in general tends not to work. Prediction is difficult, especially about the future. The gold standard is a quantitative model based on well-established principles. Yudkowsky doesn’t have anything like that. His alternative to Horgan’s handwaving is handwaving of his own.
This is why I’ve never made a Twitter account. Because if I made an account to help me *read* on Twitter, eventually I would be tempted to use that account to *write* on Twitter, and when compared to their essays/blogposts/books, everybody seems to lose 30 IQ points when they write on Twitter. Geniuses sometimes just look like weirdos when they can’t take the time to explain themselves, and average people look like morons when they can’t show each step (including whichever slightly flawed step goes all Principle of Explosion by the end) in a train of thought.
The word for “unable to talk” would never have morphed to mean “dumb” if only we’d known just how much less intelligent “able to talk, but only 140 characters at a time” could be.
The Python community’s answer to this is to store all the arguments (distilled from community discussion) on both sides in PEPs (Python Enhancement Proposals), which serves a few purposes:
1. Preserve the historical record of the thinking behind the decision embodied in the PEP (the title of the pertinent section is ‘Rationale’)
2. Allow the shortcut in discussion of ‘see the PEP for arguments for/against’
3. Allow the addition of new arguments (in a revised PEP, probably) if something genuinely new comes along
I’m not claiming this is a perfect solution or anything, just noting that it’s a solution found and used by one particular community.
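For anyone who hasn’t read one: a PEP is a plain-text document with a standard header block and standard sections. This is a bare sketch of the layout described in PEP 1, with made-up contents rather than a real proposal:

```
PEP: 9999
Title: Example Proposal
Author: A. Contributor
Status: Draft
Type: Standards Track
Created: 01-Jan-2016

Abstract
========
One-paragraph summary of the proposal.

Rationale
=========
The arguments for and against, distilled from the mailing-list
discussion, recorded so later debates can start from here.

Rejected Ideas
==============
Alternatives that came up, and why they were not adopted.
```

The Rationale and Rejected Ideas sections are what make the “see the PEP” shortcut work: the losing arguments stay on the record instead of having to be re-litigated from memory.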
If you don’t like taunting and mockery, why like The Sequences? They’re full of taunting and mockery of people the author disagrees with. My least favorite parts are the short stories whose sole identifiable purpose is to cast Yudkowsky’s enemies as contemptible laughing-stocks. Did your edition of the Less Wrong archives not include those posts?
Eliezer has apologized for the virgin pregnancy one. I can’t think of any others like that right now. The zombies one is magnificent and above criticism.
This isn’t a short story, but I think http://lesswrong.com/lw/i5/bayesian_judo/ counts as an example of taunting and mockery of an actual human being, based on tricking him by claiming Aumann’s Agreement Theorem supports something that it really doesn’t.
I don’t know anything about Aumann’s theorem, but it sounds like Yudkowsky “won” the argument by making a false claim and expecting the other guy to accept what he was saying by the appeal to authority (both Aumann’s and Yudkowsky’s: ‘believe that this thing says what I say it says’).
Also definitely an element of peacocking going on there with the mention of the approving woman who stood by and witnessed; didn’t participate herself in the conversation, apparently, no, the woman’s part is to say to the man after the Trial of Strength “Oh, what a huge, massive, virile – brain – you have! Yes, that’s the organ I meant! I am so aroused – I mean, stimulated! By the amount of intellect you plainly possess!”
(If that sounds grumpy, it is; I’d love an anecdote by a man to finish without “and this hot chick thought I was the business!” sometimes)
Pretty much all male behavior is designed to impress women, though.
At least that’s what I learned from Dead Poets’ Society.
Do you have a link to the apology? Glancing through LW posts / comments I couldn’t find one.
Even Eliezer disapproves of mockery now (after being on the receiving end of it).
Looking at that short post, the rationale for “open contempt for religion” seems to have been based on the idea that “status loss causes loss of attachment to bad ideas.”
Which is a stunningly bad notion upon which to base a method of persuasion: “thinking/saying this will make my friends not like me!” Believe this not because it’s true, but because otherwise people will think you’re a square!
(a) “But mom, I have to wear the right brand of runners (the expensive Name Brand ones in a particular style) to school, otherwise my friends will laugh at me!”
“Honey, if they laugh at you, they’re not your friends.”
(b) If someone drops belief in religion solely because they want to impress the Cool Kids, then as soon as they meet an even Cooler Group who laugh at “rationalists? those dweebs?” they will also drop you and yours like a hot potato.
(c) If somebody is a convinced believer, then they’ll probably shrug it off with Matthew 5:11 “Blessed are you when people insult you, persecute you and falsely say all kinds of evil against you because of me.” If people saying “Ooooooh, you believe in an invisible sky fairy!” is the worst thing anyone says, you can consider yourself lucky. If you can drop a belief merely because of some light mockery, you never held it very firmly in the first place, and that applies to all beliefs.
(d) I know I’m not one to talk because I am a hermit crab in terms of sociability and interacting with Actual Real Humans, but what a wretchedly miserable life it seems like, to constantly have to be on the qui vive about censoring your tastes and likings in order to fit in with the approved mode du jour in order not to incur status loss from those you wish to impress and claim acquaintanceship (one cannot call it friendship) with, in order to maintain that position on the greasy pole of status; never able to relax and wear your ratty old comfortable favourite sweater while hanging out with your mates, always having to be bright and poised and keeping up with the intellectual Joneses!
(e) God is not an invisible sky fairy; those would be apsaras, and gods are not apsaras 🙂
The reason Rationalists have the same arguments over and over is that they’re (sort of) less likely than other ideologies to purge people for having differing views, or to break off and form their own movements. A lot of “settled questions” within groups, political ideologies, etc were settled by either driving out everyone who held minority opinions, or by making them unacceptable to express in public in order to marginalize dissenters.
So I guess this isn’t the place to rehash the oldies but goodies “Are SJWs literally the worst or only figuratively the worst?” or “So, who’s up for another 782 comments about libertarianism?”, huh?
Some people like to give a negative definition of the field of philosophy: it’s the field in which we study questions that we haven’t come up with some empirical method of answering that everyone can agree upon. In other words, it’s the study of interminable arguments. (Personally, I think non-philosophers tend to think that the foundations of their fields are more settled than they really are. So I suppose that by that definition, I think quite a lot of stuff is philosophy.)
At certain times and in certain places, you do get particular answers to particular questions being taken for granted. But settled answers tend to not remain settled for long. In the 1950s, hardly any philosophers took theism seriously as a philosophical position. Then a bunch of Christian philosophers came along and started actually arguing about it, and nowadays one can do work on, say, ethics, and bring in theistic resources without being badgered out of the philosophical community (even if most people will still disagree with your theistic claims).
Likewise, it might be that at your school everyone accepts functionalism about the philosophy of mind, and all the debates are about the form of functionalism that you should accept. But go to conferences and you’ll meet dualists, eliminativists, and all manner of others and have to justify yourself to them.
I think there are at least three reasons that professional philosophy is like this. One is that we just self-select for people who like to argue. A related reason is that we value contrarianism, and so being willing to question almost any premise often brings professional rewards. A third reason is that many debates about subject matter X ultimately reduce to debates about how to settle debates about subject matter X: and so no matter what it is we’re talking about, we often end up rehashing very old debates about what good arguments look like, whether we can trust our intuitions, and so on and so forth.
An option: reschedule.
I’ve had luck on Facebook and in chat rooms with “Tapping out,” or saying “I need to think/calm down/do other more important things before I keep engaging on this topic.” Most of the time, I get a positive response and the argument isn’t even brought up again.
I have no idea how to handle such things on Tumblr, Tumblr is the weirdest debate format I’ve ever seen.
This reminds me a lot of the issue of engaging in debate when your side requires a much greater level of expertise to argue than the other side does. For instance, many evolutionists refuse to debate creationists on the grounds that in a debate format for a non-biologist audience with equal time for both sides, creationist talking points will deliver relatively strongly (e.g. Richard Dawkins saying, “I refuse to debate an ignorant fool” [citation needed]). But this, of course, gives the creationists ammunition to say, “See, they’re afraid to debate with us on equal terms!” I’ve never been able to think of a good solution to this, but at least this post and ensuing discussion seems to be addressing it.
http://lesswrong.com/lw/17f/let_them_debate_college_students/
I’ll just leave this here:
“Everyone kills Hitler the first time”
http://www.tor.com/2011/08/31/wikihistory/
Beauty.
A huge problem is that when a somewhat insular community ‘settles’ an issue it has often come to an ideological consensus and exiled heretics, not actually demonstrated the truth of its viewpoint. This is probably the modal situation when an issue is rejected by a community. I wouldn’t bring up my belief in the centrality of chromosomal sex in a trans community, or my views on Judaism on Stormfront, but it’s not because I think I’m wrong. It’s just pointless. Issues that get raised repeatedly to the point that they become exasperating are often not because of newbie naivete, but because newbies have not been indoctrinated to the ideology.
This goes across numerous types of intellectual communities, including scientific ones. E.g. the economics establishment has agreed to proceed ‘as if’ a massively oversimplified view of human nature and social dynamics is correct, because it permits easier theory construction (many seem to also believe that the obvious flaws of the assumptions can somehow be adjusted for later using common sense). The assumptions are well understood to be dramatic oversimplifications by everyone in the community, but they continue to play an absolutely central role in theorizing. If you bring them up then there will often be precisely the ‘this again?’ reaction of exasperation, but not because the issue isn’t a serious one but just because the community doesn’t want to deal with it.
(Admittedly things are getting better post-financial crisis, in part because that crisis involved at the intellectual level such a clear demonstration of the practical problems with economic abstractions).
I don’t remember which post it was that talked about movement building and the tradeoff between groups with strict rules and an official ideology versus ecumenical groups with a wide variety of beliefs who can never coordinate on anything, but this is as good an example as any.
https://slatestarcodex.com/2014/07/14/ecclesiology-for-atheists/
That’s the one
This is true. People talk a lot about rationality and debate here, but if they were honest they’d realize they view those as a means to an end, tools that produce the right sort of person who happens to believe the ideology offered. When they realize that no, it doesn’t, then comes the desire for peaceful villages and rediscovering the joys of moderation and “shut up and read the FAQ.”
https://xkcd.com/386/ is called Cunningham’s Law: “The best way to get the right answer on the Internet is not to ask a question, but to post the wrong answer.” In my head canon, the urge is called Cunningham’s Syndrome.
But doesn’t that risk confusion with Chuck Cunningham Syndrome? (warning, TV Tropes to follow: http://tvtropes.org/pmwiki/pmwiki.php/Main/ChuckCunninghamSyndrome)
I’m going to start one. I’ve been, as a newbie, guilty of a variant of the “Let researchers worry about AI” argument. But the thing is, I think arguments like this are misinterpreted from both sides. Let me try to explain the two sides as I see them:
* The “I worry about AI” side, familiar to people here, is of the “two heads is better than one” variety. It’s something like this: “I’m not an AI researcher, but I consider myself intelligent and capable of coming up with/arguing between complex arguments in a way that may be original. If we have enough people doing this and talking about it, then we are more likely as a society to be prepared if and when the need to make AI-related decisions arises.”
* The “leave AI to academics” argument, in my view (and, as far as I understand, in the view of certain other people I know who criticize this aspect of rationalism), is not the patronizing “academics don’t worry about AI” or “academics know more than you do”. Rather, it’s the following long-winded idea: “general AI is an extremely complex and uncertain, potentially catastrophic scenario, on the order of nuclear war. Since we are not at a point where it is possible to make accurate and nontrivial predictions about it, our thinking is inevitably somewhat shaped by our emotions about it (whether they be fear or hope), which makes it less reliable. The best way we have nowadays of making non-emotional, informed judgments about things that invite being emotional is the formalized language and peer review of academia. Since such an academic dialogue is going on at the moment about AI, it can be suggested that the addition of well-meaning, but less scrupulously impartial, opinions by a large group of intelligent people hinders rather than advances the ongoing attempts at understanding how general AI is likely to occur and what to do about it. This point of view is supported by other cases where nonacademic discourse about highly contentious issues made the problem more, rather than less, difficult (viz. Marxism, identity theory, Darwinism, etc.)”
Hopefully this is a constructive clarification, and I’m curious if this point has been made/argued before somewhere.
AI researchers, imo, have bad incentives. “AI Safety” is a public good. The prestige from “AI progress” is a private good. This is a problem.
This idea that nobody who is an expert in the field can be trusted on account of the conflict of interest that comes from their seeking prestige, power, and funding – it seems like I’ve heard this someplace before.
I actually don’t know what you are referencing.
He’s probably talking about climate change, although I’ve seen the same general argument aimed at fields from economics to psychology to sociology to medicine. Sometimes it’s even accurate.
Got it in one, Nornagest – both the specific and the generic.
There’s a third possible party, which is “AI may or may not be the kind of existential risk you claim; if it is, however, the danger lies in a quite different direction than the one you are forecasting. It is less likely that something catastrophic will come about through a god-level intelligence AI making unilateral decisions and having the power to enforce them, and more likely that good old-fashioned human meddling and stupidity will abuse the tool of AI to make disastrous decisions that at the time looked good, profitable or necessary”.
Generally the best way to get the discussion under control is to make a wiki of arguments and counter-arguments and link people to it whenever they want to discuss instead of rehashing the same stuff over and over. Maybe there could be a convention that there are two pages on the wiki, pro and con, and people only edit the one they’re in favor of, to get rid of ridiculous edit wars.
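A toy sketch of that two-page convention, purely illustrative (the function names and data structures are invented; no real wiki software is assumed): each user declares a side once, and edits are only accepted on the page matching that side, which mechanically rules out cross-page edit wars.

```python
# Hypothetical sketch of the pro/con wiki convention described above.
declared_side = {}          # user -> "pro" or "con", fixed once declared
pages = {"pro": [], "con": []}

def declare(user, side):
    """Commit a user to one side; later declarations are ignored."""
    if side not in pages:
        raise ValueError("side must be 'pro' or 'con'")
    declared_side.setdefault(user, side)

def edit(user, side, text):
    """Accept an edit only on the page matching the user's declared side."""
    if declared_side.get(user) != side:
        raise PermissionError(f"{user} may only edit their declared side")
    pages[side].append(text)

declare("alice", "pro")
edit("alice", "pro", "Argument 1: ...")   # accepted
# edit("alice", "con", "Rebuttal: ...")   # would raise PermissionError
```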
Maybe a similar result can be obtained if certain users have the ability to close threads with a “see reference” response.
Technically you’re voluntarily entering arguments you feel you “have to” enter, too (unless a gun is being held to your head). We engage in debates for the same reason we do anything – because our constitution compels us to (either for pleasure or to satisfy some other purported motive). It’s worth thinking about why humans debate: status signaling/jockeying, finding like-minded teammates, swaying the prevailing viewpoints, and establishing norms for society. All these are especially true when the matters under discussion are normative rather than empirical, and the argument rests on the moral sensibilities of the participants.
I sometimes feel like huge invisible masses of people over the years have been finding Less Wrong, quickly learning the key lessons, and moving on with their lives, thenceforth making better decisions and holding incrementally more correct beliefs. If any of these people ever become aware of “Rationalist Tumblr”, they immediately identify it as a mess of costly signalling and tribal mentality and hypnotize themselves to be unable to hear or read the word “Tumblr.”
We’re just the conflict-seeking morons who stick around long after the party died, bickering pointlessly over gaffes Eliezer may have made ten years ago.
… Either that, or Rationalist Tumblr really is the vanguard of human rationality, in which case …
For myself, the commentary on this site is the closest I ever come to Tumblr or Twitter. I haven’t returned to LessWrong since reading most of the sequences and feeling like I was learning less per additional time spent on that as compared to time spent elsewhere. I get my light fluffy news from IFLscience and my real news from Google Scholar, and my entertainment from xkcd, smbc, and FreeFall.
When the commentary here starts down a chain of back and forth about an issue with no substantive new facts being added, I close the thread (e.g. the gun control stuff).
I don’t care if people are wrong on the internet. Most people seem to me to be quite dumb and quite wrong about most things. I like people anyway, like I like dogs and cats, but I don’t assume their yammering has any worthwhile meaning except under unusual circumstances.
With the exception of most of the commenters here, of course, which is why I’m here….
I read the sequences, noted that posting and even commenting on lesswrong is too high effort, and moved on.
There’s a useful distinction to be made about degrees of voluntariness. He’s talking about cases where he feels his long-term interests are being threatened. If he doesn’t intervene now, he thinks that eventually he’ll be “forced to” engage in the most literal sense. So instead he engages now, in light of that expected future. His response is thus less than completely forced, but also less than completely voluntary.
Off topic: getting by in this world is hard, and I am so often weaker of will than I wish. So many ideas for inventions and research, and so little power to act on them. Oh, for the will to work tirelessly on my goals!
Modafinil, we’ve already been over this.
(Not a doctor, this is not a recommendation)
I’ve been very tempted, and would’ve already if I weren’t so poor. Vicious cycle and all that….
Luckily, Scott’s last post was literally a review of nootropics. The guru you seek is Gwern.
I think the rationalist community, or at least SSC, could benefit from adapting a solution that’s popular in the church: small groups. The nice thing is that this can help solve several problems at once. First, my impression is that many people consider themselves rationalists but don’t know how to become more involved in the community. Successfully leading a small group would be a clear path to gaining status and recognition in the community. Next, small group leaders would often be people who are relatively new to the community, so they wouldn’t yet be burnt out on answering the same questions ad nauseam. Plus, by its very nature a small group would be small, so this would limit the number of times the same argument or question would pop up. If a small group leader finds they don’t have any good answer to something that’s brought up in the group discussion, they can bring the question to other leaders or more experienced people, and such questions would necessarily be somewhat interesting or different. Finally, this setup would be attractive to outsiders as well, since it would be easier for accountability to flow in both directions: a leader can say that they’ve done their duty to consider outside opinions even if they’ve only interacted with a dozen or so outsiders, but at the same time, they would no longer be able to tell people in their group that they’re inundated with the same old questions and can’t respond to them all.
Practically, I think this wouldn’t be too difficult to implement. Just close comments here, or at least restrict them to a few of the “top quality” commenters. Other people would comment on different forums, say a series of subreddits like r/SSC_group_1, r/SSC_group_2 &c. Each group would be limited to a dozen or so people, and we would just need a master list of groups that are looking for new members.
Long ago on LessWrong I proposed the same thing* and got downvoted to very negative oblivion. I interpret this as indicating that there is a large segment of ‘newbies’ who want very much to be able to communicate directly with the ‘oldies’ whom they have learned from and reasonably admire.
[*Well, a similar thing. A forum that automatically clustered people into small Dunbar-number-like groups, so that by default they’d only see (1) the posts from their own small group and (2) highly upvoted posts from other groups.]
At this point it is clear that most newbies are NOT actually able to meaningfully interact with the oldies. The oldies might read your comments but a real dialogue is not logistically going to work. There are too few “oldies” and they already have friends.
The consensus is that lesswrong.com has stopped functioning. Now people might be more receptive.
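For what it’s worth, a minimal sketch of the clustering mechanism I had in mind, assuming made-up numbers for the group size and upvote threshold (nothing here reflects any real forum’s code):

```python
# Hypothetical sketch: cluster users into Dunbar-sized groups and show each
# user their own group's posts plus highly upvoted posts from elsewhere.
import random

GROUP_SIZE = 12         # "dozen or so" people per group (assumed)
UPVOTE_THRESHOLD = 50   # posts above this score cross group boundaries (assumed)

def assign_groups(user_ids, group_size=GROUP_SIZE, seed=0):
    """Shuffle users and slice them into fixed-size groups."""
    users = list(user_ids)
    random.Random(seed).shuffle(users)
    return [users[i:i + group_size] for i in range(0, len(users), group_size)]

def visible_posts(user, groups, posts):
    """A user sees posts from their own group, plus any post upvoted past
    the threshold regardless of its author's group."""
    own_group = next(g for g in groups if user in g)
    return [p for p in posts
            if p["author"] in own_group or p["score"] >= UPVOTE_THRESHOLD]

groups = assign_groups(range(100))
posts = [{"author": 3, "score": 2}, {"author": 57, "score": 80}]
print(visible_posts(0, groups, posts))  # always includes the score-80 post
```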
I like this idea.
Looking forward to the inter-group Hunger Games.
Make well informed (ideologically pure) posts or risk relegation!
This seems like an extremely good idea.
However implementing this on a site wide scale without testing seems like a bad idea.
I think the best idea would be to create such a group (10-12 people+a leader) and see how it goes. I would be very happy to join such a group btw.
If there’s anything to make the blood run cold, it’s when it’s announced that there will be small groups for church study purposes! 🙂
I’m sure you’re speaking from experience, but I have no idea what that experience is. Details?
It’s not my job to educate people if I’m not asking for them to change or to do anything. If I’m asking for change, either donations or political action (and the personal is political) then it is my job to educate them. Anything else is open aggression.
Open aggression is bad?
Scott,
My apologies if this comment comes too late. I never get a comment in early enough to know that you’ve read it. But here it goes anyway…
Two things stood out to me from this post. The first one is that you say you cannot just walk away from these “interminable arguments,” something compels you to sit there and argue. The second is that your response to this is to attempt to manufacture new group norms to remedy the problem.
Together, these two things lead me to believe that you will not be successful at avoiding interminable arguments in the future, even if you manage to only debate in “long form” and private media. The reason I think so is because the core problem is not the set of group norms to which you’re exposed. The core problem seems to be your inability to just walk away. You don’t need a theory for walking away, and you don’t need a rule for when to walk away, and you don’t need to manufacture better group norms to prevent the problem from reoccurring. No, you just need to walk away from interminable arguments. Just go.
Just go. It’s the internet, you can just get up and leave. You can just stop replying. In some cases it might make it look like you “lost the debate” or “gave up and conceded victory to the other person,” or whatever else you might be afraid of. In some cases it might even make you look like a fool. But who cares? Internet communities do not define who you are. The outcome of a debate does not determine who you are. The judgment of internet onlookers doesn’t determine who you are. And none of these things determine whether or not your position is correct.
So why not give yourself permission to just walk away? Rather than enforcing a new group niceness norm, why not establish a new SCOTT ALEXANDER norm, which is “sometimes I get bored or annoyed by arguing and I go play with my kids or listen to music instead?”
I think you have to just let this stuff go. The internet can be a weird Sartre’s Gaze – you turn it off and walk away, and then because it’s etched in cyberspace forever, you can come back later, and The Other is still there, gazing back at you. You’re either going to let the Gaze get to you, or you’re going to say, “WTF am I doing? This is the internet!” I think the latter option is the more reasonable.
Best,
Ryan
I imagine Scott is aware of this, given the Right of Exit is the Archipelago’s sine qua non. I suspect his thought-process might more closely resemble “The most important rule of the garden is the freedom to leave the garden. That the first recourse should be to improve the garden is obvious. I wonder what the best means to improve the garden is.”
fwiw this is also how I frame the Euthanasia debate.
Exile, or self-exile, should probably be the harshest possible penalty from within a group, since from a group perspective it’s basically the death penalty. It shouldn’t be the first response, else you get escalating individualism/alienation which is very harmful.
Walking away from a specific argument is not necessarily exiling oneself from the group.
No, but I think if it becomes the first response, that’s where things are inevitably going to end up. I’d be happy to be proven wrong. And maybe I think it’s too much of a bad thing, too, I’d also like to be wrong about that.
Frog Do, nobody’s talking about “the first response.” SA has identified a personal problem in that he can’t seem to walk away from bad arguments. To remedy this, he is proposing anything from rational rules for conversational governance to invoking the power of peer pressure to shame bad conversationalists into getting in line. The problem here is that none of this addresses the real problem, which is SA’s compulsive need to engage in arguments even when (or especially when?) he knows that nothing good will come of them.
He needs to learn to walk away. Helping him craft group rules in order to prevent him from having to make a personal change and just walk away from a bad conversation is not going to result in a solution to Scott’s problem.
@RPLong
Important to point out is that it’s specifically an issue “with his in-group”, not a general problem with people being stupid on the internet. He probably doesn’t bother with RationalWiki, or comment sections of online newspapers, or Youtube comments, etc. It seems like this is a specific “how do we improve the in-group” discussion, so “just leave” isn’t helpful.
Frog Do, now I really don’t understand you. I’m clearly proposing that SA step away from his computer, not leave his in-group. Why do you keep insisting that what I really mean is the latter?
You’re solving a single problem; I’m trying to understand the problem generally. If a single person defects, he wins; if everyone does it, we all lose.
@ RPLong, Frog Do
I think Scott was using the ‘generic-I’ which is more polite than the ‘generic-you’.
In any case, as moderator here he has responsibilities to the group that don’t let him ‘just walk away’ from things that harm group discourse.
I agree with RPLong. There is no systemic or structural problem, just a bunch of individuals who can’t get the preferences they have in line with the preferences they want to have.
But we do care. The internet isn’t a natural environment for humans; Dunbar-sized groups in meatspace are. And in such groups, “losing” an argument, especially a hostile one, comes with consequences – losses in social if not in material status. We probably feel compelled to defend ourselves online for the same reasons we can’t help but wince at plastic vomit.
Right, but most of us aren’t slaves to our emotional impulses. Part of growing up is learning how to walk away from a stupid argument. Trying to figure out how to sculpt top-down in-group rules you can invoke on people instead of just walking away strikes me as being playground stuff. Grow up, walk away, maintain some shred of personal dignity that doesn’t live and die on your relative status with other people, that’s my advice.
One time, someone on LessWrong suggested a keyword to signify “I still disagree. But for now, let’s end this discussion since I don’t think its continuation would be productive”. I don’t remember what the keyword was. Its intent was to allow its invoker to amicably end the discussion while saving face.
Outside LW, I’m pretty sure the analog keyword is “agreeing to disagree”. Do you consider this immature also? Are Scott’s call-for-norms and your suggestion to walk-away really mutually exclusive?
Because if we were to implement “agreeing to disagree” in order to solve the problem of Interminable Debates, I believe your critique reifies to ” ‘agreeing to disagree’ as a {keyword, group-norm} is a childish instrument of Satan. Real adults are silent stoics, 100%”.
I mean, you’ve dressed the concept up in a way that sounds good, but think about it: You’re talking about making a safe word for conversations you have on the internet.
Yes, absolutely: My opinion is that that is childish.
It’s called “tapping out”. And it’s a good idea. And safe words are a good idea.
@RPLong: Pretty sure that’s a noncentral fallacy. The things we dislike about childishness* are things that are absent from having a community-widespread way to ‘officially’ disengage (ideally temporarily) without losing face or backing down, so arguing “but it’s childish” is just a distraction from the actual questions.
I have to say, your problems with the idea of engineering around emotions rather than just bulldozing through them – well, okay, you’ve found something that works for you, and there are plenty of people who could stand to try doing so a little harder. But suppose other people are allowed to be really, shockingly, unpleasantly unlike you in that regard? Then what? Okay, leaving aside the open or veiled invective about how they’re all pathetic slaves to their emotional impulses, what else? What actual use is responding to a description of a problem with “that’s not a problem for me”? Oh, it works on your machine? Great. Thanks. We’ll put your machine in every client’s office, then.
* Irresponsibility, recklessness, incompletely developed ideas, messy eating habits, etc
@ suntzuanime: I didn’t say safe words aren’t a good idea, I said I thought they were a childish idea. I think they are viable on the playground, but I wouldn’t want my adult conversations to feel immature. I could choose to attempt to modify other people’s behavior, or I could choose to modify my own. Which one do you think is the more mature choice?
By the way, the other context in which we typically encounter safe words used to defuse arguments is in marital counseling. This, coupled with SA’s admission that he cannot seem to walk away from these arguments, suggests a level of emotional over-investment, doesn’t it? All the more reason to step away.
@ dirdle: From what I can discern of your comment, you’re saying that even if I don’t have difficulty walking away from a stupid argument, some other people do, and so they need top-down rules. I agree that not everyone is good at just walking away, but I disagree that they need new community guidelines. Instead, I think they just need practice.
I once heard a weight loss expert suggest that people with an over-eating problem should just make a point to leave *some* amount of food on their plate, even if it’s only a trivial amount. The point is simply to practice the behavior until it becomes something more like a habit.
Might something similar work for people who have trouble walking away from stupid arguments? They don’t have to quit cold turkey, but maybe they could practice leaving *one* conversation and see how it goes. Or maybe they could give themselves a personal rule, like “I’m going to try to make my point in 3 total comments.” Just being aware of the 3-comment threshold would be something like leaving a single bite of apple pie on the plate.
Practice, that’s all. I’m not being smug, I’m just saying: sometimes the solution really is to walk away.
On the “safe words” subthread …
Early in their marriage, my parents invented a set of code words, numbers that represented some statement for some reason difficult to make. The one that survived in their family practice, and still survives in my family, is “number two,” which stands for “you were right and I was wrong.”
It works for two reasons. One is it is shorter and easier to say. The other is that using the code word reminds one that it is a hard thing to admit, hence that one is being virtuous by admitting it, which helps reduce the difficulty.
David Friedman: What a wonderful tradition!
I had wonderful parents.
@RPLong:”I could choose to attempt to modify other people’s behavior, or I could choose to modify my own. Which one do you think is the more mature choice?”
The mature choice is to figure out whose behavior you *can* change, then, of those, which change leads to better results, and then make *that* change. Defaulting to changing oneself, at the cost of not only failing to get what you want but *also* of harming the community, is not mature. It’s a knee-jerk reaction against immaturity that misses the wisdom exposed for the taking.
Sometimes the right choice is to walk away. When you can’t handle your shit and you know that your continued participation will make things worse, then the right choice is to walk away – “hard” or not, doesn’t matter. However, sometimes when you reflect on it, your actions – even emotionally invested and *passionate* actions – will make things better. In those cases, the right answer is to be a big kid and do what needs to be done. This is one of those cases.
@ RPLong
Grow up, walk away, maintain some shred of personal dignity that doesn’t live and die on your relative status with other people, that’s my advice.
Yes. A pattern of silence at such times also gives you status points with the best of the audience. 😉 And it defaults to your still having the option of nodding back in if a good idea occurs to you, or if someone else brings up a new idea you want to reply to.
@ FullMeta_Rationalist
One time, someone on LessWrong suggested a keyword to signify “I still disagree. But for now, let’s end this discussion since I don’t think its continuation would be productive”. I don’t remember what the keyword was. Its intent was to allow its invoker to amicably end the discussion while saving face.
Outside LW, I’m pretty sure the analog keyword is “agreeing to disagree”.
In either wording, I strongly object to that message. The speaker is not just withdrawing zimself, zie is also telling everyone else to STFU. That might be a legitimate Moderator power, but imo it’s an area-effect weapon too violent to be thrown around by just anyone, or even by a majority.
For an alternative, with AGW as an example: I wish someone would start a topic at /r/slatestarcodex/ that all new AGW comments can be gently shunted to — thus keeping the whole ongoing discussion in one place. The topic is worthwhile imo, but it’s spread over too many blog threads. For example, recently David Friedman posted a link to something from a beef producer source that I didn’t follow soon enough, and now that link is buried several threads deep.
@ jimmy – You *can’t* modify other people’s behavior, so the rest of your comment falls flat to me. At best, we can persuade people to voluntarily see things differently, but these situations fall far outside the context of the kind of stupid arguments to which SA refers in this post.
Creating new community standards and mod powers results in a change of outcomes at the expense of *no* change in behavior. It side-steps the problem of doing the mental work of change; the work to which, if you re-read my initial comment closely, I have been referring throughout this discussion.
@ houseboatonstyx – I don’t put much stock in things like status, but aside from that, we’re in agreement. 🙂
@Houseboat:
Possibly this:
http://www.omafra.gov.on.ca/english/livestock/beef/news/info_vbn0713a4.htm
It’s relevant to whether there is a benefit from the shift of temperature contours towards the poles due to AGW—whether some of the warmer land would be suitable for agriculture.
Which explains why I linked to a page with a picture of cattle.
@RPLong: Have you tried it? It’s not incredibly hard. While it’s also not relevant, it’s certainly not *necessary* to do things in a way that is easily and accurately framed as “voluntary”, but it’s nice.
In general, before declaring that someone has a mental problem that they should fix, it’s a good idea to check whether they see the problem as “I feel this need” or “someone is messing up my community” – that is, do *they* think it’s a problem.
If you disagree with them, before stating your view as The Right Answer, you should also probably check to see if you can pace why they see it as a problem and why they feel like they might actually be successful in changing others. And then provide some actual arguments that take this into account. If you don’t do these things first, it kinda conveys that you don’t think the person’s reasons for doing things their way are even worth addressing, and for patronizing approaches to be successful, you generally have to have a pretty cool relationship established already.
You’re free to do whatever you want, of course, since I can’t make you change your behavior – but if you want to voluntarily persuade someone to voluntarily see things differently, it tends to help.
@ David Friedman
Possibly this:
http://www.omafra.gov.on.ca/english/livestock/beef/news/info_vbn0713a4.htm
Ah, thank you. I have opened the site and saved it to disk. So far so good.
@ jimmy – The point of your most recent comment seems to be that if I want to persuade you, I should address SA’s reasoning point-by-point, rather than proposing a solution that fails to directly address those points.
Okay, I’ll try. Here goes:
SA says the primary defining feature of an interminable argument is that his participation is not voluntary. He gives three reasons for this: The first two reasons are variations of “it’s just hard to walk away,” and the third reason is that he doesn’t want to get pushed out of the Overton Window.
Taking his reasons at face-value, I propose the following solution: When you sense that you are getting sucked into a bad argument, try to just walk away, even though it’s difficult to do so. To Scott’s concern about the Overton Window, the surest way to avoid falling out of it is to avoid conversations when they get heated. My “3 comments” strategy offers a direct example of how one might achieve this. To Scott’s concern about the difficulty of walking away, I propose that practice makes perfect.
Finally, I concede that fostering a good community atmosphere is a laudable goal and humbly suggest that one important way to achieve this is to keep that community free from coercion, especially coercion surrounding discourse. While bad arguments will occasionally happen in any coercion-free community, I believe that the stultifying effect of conversational rules is a cost too great to bear, so long as walking away from a bad argument is possible. Since it is, and this is the internet, I don’t think such rules are justified, and it is the burden of those who feel otherwise to demonstrate that they are.
A major upshot of my proposal is that practicing this skill will not only help the community and your interactions within it, but it will also improve every relationship you have inside and outside the community, for the rest of your life.
How did I do?
That’s wonderful. Like “Solomon Isaacs” in Private Lives.
I’d love to adopt it, but it’s tough to establish just one in isolation. What were some of the other coded statements?
@RPLong: Yeah, that’s the kind of thing I’m talking about 🙂
FWIW I thought the rationalist tumblr crowd, and especially you, came away from the SU3 affair and the ensuing “now we can have a nice conversation about MIRI” worse off, not better.
In particular:
1) despite protestations to the contrary it looked like the doxing was semi-celebrated
2) a bunch of you seemed unnecessarily nasty and dismissive towards argumate
3) The way you set up the MIRI discussion, it looked like you only wanted to hear about why everyone else couldn’t see how great MIRI is, and weren’t at all open to the idea that it isn’t.
If the risk of not-engaging was that someone, somewhere would be unchallenged in claiming that MIRI is a scam or a cult or something I think that the cure was worse than the disease in this case.
Yes, I’d like to echo this. SU3 was the best thing that had happened to the MIRI debates. You had someone who actually knew their shit and could explain it! And when he gets run off you basically go “thank god.”
Did he do some shitty things? Yes, sockpuppeting to support arguments is bad. But his actual arguments were pretty damn good and nobody actually could refute them in any sort of meaningful fashion.
When you have people who aren’t directly connected to MIRI but have education and insight into the field, we should celebrate them. Most of the MIRI discussion I’ve seen from the Rationalist side looks a lot more like circling the wagons around friends you trust than solid analysis.
can someone link to a summary of the best SU3 arguments?
After the recent unpleasantness, the only place you might find them is the Internet Wayback Machine. I’m not willing to go digging through that.
What’s “the doxing”? In all the posts I saw, everybody was careful not to release any personal information.
I didn’t pay a whole lot of attention to the su3 thing, because I don’t care about MIRI or AI or whatever, but what I did see is that he consistently came off as a prick — a very specific kind of prick. I recognized the type and found his Something Awful account. He denied it was him, and then owned up to it a few months later.
Now, rationalist/critic interactions do tend to turn into emotionally-charged defect/defect scenes that end with the critic deleting, which is probably an argument for carrying out these interactions on platforms that aren’t so intrinsically friendly to shit-flinging (heuristic: anything with a repost function is of the devil), but su3 didn’t do himself any favors.
Emotionally charged defect/defect scenes are the norm. It takes effort from everyone involved to avoid them. Rationalists aren’t so great at this either, because nobody is, probably because of coordination problems — same principle as the ‘stranger danger’ stuff.
Context for people not on Tumblr: su3su2u1.tumblr.com was a pseudonymous blogger with a physics PhD working in data science. He had a largely negative, detailed chapter-by-chapter review of HPMOR that claimed it failed on the science and rationality fronts (most of it available archived). He sometimes criticized MIRI, claiming among other things that their scientific output was dismal, and citing friends in industry/academia for support. He wasn’t only MIRI-focused and wrote on physics, data science, and assorted other topics as well. He seemed an intelligent and interesting voice. A week ago another Tumblr user confronted him privately with evidence that he used sockpuppet accounts to bolster anti-MIRI arguments, and lied about various things claimed by these sockpuppet accounts, mostly claiming false credentials (having a math PhD, having attended EA events and reporting on MIRI-related activities at them). He suggested that su3su2u1 admit to this publicly or else the user would post about it publicly (but not doxx him). In response, su3su2u1 deleted his account. Soon after that, Scott posted a much more detailed list of sketchy activities of su3su2u1 and claimed that he’d known about most of them for about a year but hadn’t said anything publicly, being concerned that someone might try to doxx him because of, or with the help of, such accusations.
Tangential, given the above, your recap looks really biased to me. In particular:
1) there was no doxing, and the protestations of everyone against it hypothetically happening looked credible. Some people seemed relieved su3su2u1 is gone, but it’s really weird of you to call this ‘doxing’.
3) Scott seemed to go out of his way to invite serious anti-MIRI arguments re: their scientific output, and the discussion did in fact include such credible arguments.
How do people wind up getting doxxed for this? I read SU3 for a while and I never noticed him reveal anything that could pin down his identity. Is it just a matter of he’s a PhD who lives in city X, so we brute force LinkedIn?
It could be that easy. I don’t think Scott makes much of an effort to hide his identity, but I was curious and it was really easy.
There’s a person who lurks here but never comments that catalogs everything I say and tells my wife, but luckily, I already know that because she admits to doing this and my wife tells me, too. She might be doing that to others, and there may be others like her doing the same thing. Who knows? Anonymity is a myth, guys.
I have to say, Scott looks a lot like I expected him to. And while he might’ve not taken many precautions to hide his identity, I’m not sure making it easier for people to find out who he is will make him very happy.
How much of a principled difference is there between doxxing someone and providing a detailed description of the methods for doxxing them?
Fair point. I edited that out. I did warn him privately like six months ago.
Re: 1) Feel free to substitute “blackmail with doxxing” but I don’t think it changes much of anything.
Re: 3) Scott was willing to concede the bare possibility that MIRI’s output might be low but only because: ‘all those other academics are overpublishing due to silly publish or perish norms’ or ‘how important is publishing really’ or ‘they are doing the heroic work to start the field from the ground up’.
Those answers rather strike me as akin to claiming your biggest weakness as an employee is just caring too much about excellence.
I take it you agree with me re: 2?
I think it changes a whole lot. Using sockpuppets in arguments your main account participates in is seriously unethical behavior, and ought to be exposed. In this case the way to prove the unethical behavior involved, as an unfortunate side-effect, doxxing him. They were interested enough in not doxxing him that they let him admit his misdeeds rather than do so. You can spin this as “blackmailing him into admitting his misdeeds” but given that they had proof of these misdeeds anyway that they could have made public if they didn’t mind doxxing him, it seems like a pretty unfair characterization.
Revealing someone’s secrets is generally bad. Lying is also bad. Unfortunately, you can’t prove someone is lying without revealing their secrets. The punishment fits the crime.
The big issue with the sockpuppets is that they killed his credibility. But at the same time, actual people with expertise in the field were backing him up on the main points. So I think his criticisms still stand, and Scott handled this absolutely atrociously with his crooning over how SU3 had been driven out.
Overall, while SU3 ruined his own ability to really participate in a community based on trust, for me he completely ruined my ability to think that the Rationalist movement is within a lightyear of unbiased in regards to MIRI.
So in the end literally everyone lost from this situation.
Well, I’ve found all this recap riveting. So I’ll mark that as a win, if a small one.
No, the big issue is that it’s a scummy thing to do. I’m not an EA and I have no dog in the larger fight, but I do care about argumentative norms, and saying “oh well some better people agree with him” does not remotely absolve his sin.
The point of saying it was a full loss situation is that what he did was shitty and undermined the truth value of his statements.
I think it’s okay to want people out of your community when they do bad things to it.
I don’t think SU3 did bad things to the community though. I think he had done some bad actions behind the scenes which hurt his credibility and that alone, but he was great for getting people to actually examine their biases about MIRI and AI Risk.
It’s in incredibly bad faith of you to refer to Scott’s “crooning over how SU3 had been driven out”, when what Scott wrote on the subject is
“For the record, I didn’t expect/hope this would happen and I am disappointed. Su3su2u1 was an intelligent and interesting voice and counterbalance and I hope he returns soon under his own name or another.”
>I don’t think SU3 did bad things to the community though. I think he had done some bad actions behind the scenes which hurt his credibility and that alone, but he was great for getting people to actually examine their biases about MIRI and AI Risk.
I have to disagree. SU3 turned out to have been someone who’s obsessed enough with MIRI to create many sockpuppet accounts and lie creatively and repeatedly through them to bolster his arguments. This behavior throws into doubt *everything* he’s said, including things he said with the authority of a physics PhD, which were certainly not all of them vetted by other experts. It’s very inconvenient to have someone like this around in the community. People who are obsessed with winning on the Internet to this degree and with these means are a huge drain of energy on other participants, because nothing can be taken in good faith, everything needs to be examined and all pseudonyms need to be suspected. For example, it’s very likely that SU3 is following this thread, and because of his known pattern of behavior, I get to wonder now who of the participants in this subthread that are pushing very biased descriptions of what happened are actually SU3 again. I don’t want to think about it, and nobody should have to, it’s a drag.
In the sense that “you shouldn’t lie to someone unless you hate them enough that you would also slash their tires”, I think the presence (and perversely intense involvement) of people like su3 is damaging and poisonous.
However! This is roughly half our fault anyway, because we’re collectively far too willing to swallow information brought to us by anonymous people using pseudonyms. This is part of the reason why I have made zero effort to disconnect my real name from my Less Wrong handle.
If you don’t think repeatedly sockpuppeting in arguments within a community is a bad thing to do to a community, I’m going to look askance at anyone who agrees with you.
Scott said something along the lines of now being able to discuss MIRI in a civilized fashion after SU3 deleted his account. If you don’t see that as crooning I don’t know what to say.
And SU3 seems to have been backed up by people on the topic of AI who we know aren’t his sockpuppets, same with his physics knowledge. If he had a tendency to say things that other experts denied then sure, but his sockpuppeting really only should make us question the veracity of his “overheard at the coffee shop” tales rather than the actual bits about AI.
And SU3 only did damage to himself via the sockpuppeting; he didn’t damage the community in any meaningful fashion. Let’s assume nobody ever found out about the sockpuppeting and he just decided he’d had a nice run but it was time to leave. What damage would there have been? Otherwise we’re saying that the reveal of his sockpuppeting is what did the damage, insofar as people now realize that you shouldn’t trust non-verified sources on the internet.
Lots of really dumb arguments, for one thing.
I have nothing against criticism of Eliezer or of MIRI — I’m not a MIRI insider, nor do I care much about the AI topic, but I’ve criticized Eliezer myself a number of times — but the moment “no u” starts coming out, it’s safe to say that the discussion is no longer making anyone smarter. That’s every time I’ve seen su3 on Tumblr or anywhere else.
what’s the harm in robbery? it’s just a good reminder to the community to lock their doors. the real victim is the robber, who might get yelled at by the people whose things he stole
Can you actually point out the dumb arguments that SU3 made? He did seem to know his shit, and this was backed up by others who knew theirs.
And the robbery comparison is completely nonsensical. There’s an obvious harm there, in that one person got their shit jacked and was made to fear for their life. Nobody got robbed in this case, nor was any equivalent harm done.
you’re right, it’s more like embezzlement in that it might have gone unnoticed
Pick one. Seriously. Every argument I saw turned dumb sooner or later, whether or not it started with a good point.
su3 doesn’t bear sole responsibility for this, but when he’s turning to sockpuppets to keep himself in the game, I’m not gonna cut him any slack for it either.
Oh, you mean the arguments generally turned dumb. That I very much agree with. But that’s mainly because the people defending MIRI don’t actually have much experience in the field. You had a few people with actual experience and/or credentials talking about the actual issues at play, the MIRI researchers not chiming in (to their credit I might add), and the standard internet brigade defending MIRI. Is it really a judge’s fault that an argument turns dumb when engaging with a sovereign citizen?
Except embezzlement does real harm to the company at hand. Can you actually show where SU3 was wrong yet used his sockpuppets to claim he was right? I would say that any harm here comes from people being more paranoid in the future about sockpuppets, not harm to the actual discussion at hand.
Like I said, I’m not an EA, I don’t give a shit about the object level argument and who was right or wrong. I give a shit about upholding the norms of civilization.
Out of curiosity does your idea of civilization include a norm against shitposting?
I feel duty-bound to point out that the argument in Fiddler on the Roof centered on how old the horse in question was, not whether it was a horse or a mule. (https://www.youtube.com/watch?v=gRdfX7ut8gw) To someone who used a horse every day, the difference between a horse and a mule would be about as obvious as the difference between a horse and a frog.
Otherwise, very interesting . . .
In the stage show, the argument is horse vs. mule; they changed it for the movie.
No they didn’t, it was horse vs. mule in the movie and age in the stage show.
No, it was horse vs. mule in the stage show and age in the movie
(Mule!)
It was age! It was aaaage!
Horse! Mule!
Six! Twelve!
The Canon! THE CANON!
The horse was a patient of Scott’s, and some details about it have been changed to protect anonymity.
I don’t think there is any position that any internet person could take on anything that would be threatening enough to me that I would feel compelled to take a stand against it.
I think I finally found my cis white male privilege.
I think it might be a good idea to make intermittent open threads related to broad topics and delete anything that doesn’t fall into that category. There is a really severe problem of a number of topics popping up and immediately dominating the thread (the fact that content can only go a certain level of nodes deep makes this particularly annoying, since we immediately get a long string of replies that are hard to sort out).
The joke calendar might not be such a bad idea. Each month, a “dead horse” is proposed. Open Threads are encouraged (but not limited) to discuss the Dead Horse of the Month. If you squint, it’ll function as a State of the Union Address.
Is it really that hard to just ignore comments you find uninteresting?
I think it is important to lean heavily towards not creating a jury on what the “right arguments” are. Controversial subjects are the most popular areas where the shout down police form their posses and will abuse this authority. It is the most emotionally entangled people who will want to be the judges here.
If a trusted authority can really be found, maybe this could work. I’m a bit hesitant to believe the community will handle it appropriately. The closest thing is the upvote / downvote system and even that system gets weaponized quickly by advocates. I think just allowing a thread to be easily collapsed or hiding comments from known trolls is enough.
The larger a site gets, the less interesting the discussions get. There is probably a mathematical law there somewhere.
There are some real disadvantages to doing this. There’s not much difference between a pro-Skub community where people just ignore uninteresting anti-Skub comments, and an anti-Skub community with a few persistent pro-Skub trolls (who, it appears to the anti-Skubs, refuse to engage with any actual anti-Skub content). The situation is even worse if you want to be open to both sides without letting twice the amount of useless, been-over-this-five-times content occupy the majority of discussion: now you just have a battle of cliché views between those who don’t know better, and an exchange of awkward glances between those who do.*
The rest of your points are spot-on and I support them.
*It goes without saying that the Skub issue is far too contentious for a shared distaste for bad arguments to overcome the two sides’ mutual hate. It’s Skub, after all.
There is a great deal of difference between responses such as “STFU n00b” or “Read the Sequences”; and responses such as “This is one of the many common arguments that people bring up; it is covered in FAQ sections 10.2 and 11.3, here is the URL”. The first response makes you look like a cultist who is covering for his lack of evidence with bluster and faith. The second response is informative, and, at the very least, will make you look like someone who knows what he’s talking about.
The major problem with the second kind of response is that it takes more time to write. However, it also is exactly the kind of thing that machine learning was designed to automate. Seeing as everyone here (except for, sadly, myself) is an AI specialist, I think we should at least try to write a simple classifier that will automatically prompt the user with the relevant FAQ links as soon as he clicks the “post” button. Of course, someone still needs to write the FAQ…
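As a proof of concept, here is a minimal sketch of that auto-prompt, substituting plain TF-IDF similarity (via scikit-learn) for a trained classifier; the FAQ entries, URLs, and threshold are all placeholder assumptions:

```python
# Hypothetical sketch of the auto-FAQ prompt described above. Uses TF-IDF
# similarity between a draft comment and FAQ questions instead of a trained
# classifier; everything here is illustrative, not a real FAQ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

FAQ = [
    ("Why worry about AI at all?", "https://example.org/faq#10.2"),
    ("Isn't this just Pascal's Mugging?", "https://example.org/faq#11.3"),
    ("Why not leave AI to academics?", "https://example.org/faq#12.1"),
]

vectorizer = TfidfVectorizer(stop_words="english")
faq_matrix = vectorizer.fit_transform([q for q, _ in FAQ])

def suggest_faq_links(draft_comment, threshold=0.2):
    """Return FAQ URLs whose questions resemble the draft comment."""
    sims = cosine_similarity(vectorizer.transform([draft_comment]), faq_matrix)[0]
    return [url for (q, url), s in zip(FAQ, sims) if s >= threshold]

print(suggest_faq_links("Shouldn't we just leave AI safety to the academics?"))
```

A real version would hook into the comment form’s “post” button and be tuned on past threads, but even this crude matching covers the “it’s in FAQ sections 10.2 and 11.3, here’s the URL” use case.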
“Of course, someone still needs to write the FAQ…”
Maybe we could crowdsource it to the subreddit?
I think your article is somewhat inconsistent. At first, you say that you feel the need to defend yourself, since if you simply ignore the attempts to move the Overton Window, they will succeed, and you will become a social pariah. But later on, you write that the solution is to retreat from public view, into personal correspondence and academic publications (which are read by very few people). So, does this mean that you are giving up on that Overton Window thing, or what?
One explanation of what “read the sequences” means that I gave to someone in LW irc who was upset at being given the instruction repeatedly was the following:
There is a well written answer to your exact question that would fully explain the idea to you in The Sequences. However, the Sequences are very long, I don’t remember which one specific post it was, and I don’t have the time to go search through it all for you right now. Please go use the search tool or google to check for keywords so that you can read the relevant articles, get the answer to your question (which I promise is in there), and then come back.
That is a bad instruction because if you have to tell someone specifically where in the Sequences their answer is, and it turns out you made a mistake or are lying, this will quickly be discovered. If you tell another person to do the searching, you’re placing the responsibility for your mistakes or lies on him and have no incentive to get the reference right.
This reminds me of a long ago exchange on Usenet. Someone made some historical claims I thought implausible about women warriors. I asked for a source. She told me to research it myself. I did.
It turned out that one of the claims started as a period reference to a woman who, in the battle of Junain, “had a dagger which she carried about,” morphed to “tied a dagger around her waist above her pregnant belly and fought in the ranks of Mohammed and his followers” (modern Arabic speaking feminist), morphed to “with an armory of swords and daggers strapped around her pregnant belly fought in the ranks of Mohammed and his followers” (modern English speaking feminist) and ended up as “one of the Prophet’s wives was renowned for winning a cavalry charge when eight months pregnant, …” (Usenet poster).
All of which I put in my Usenet response. For some reason the poster never got back to me.
Keep in mind that it will be much harder for the newbie to find the specific post you’re thinking of than it is for you.
Obligatory link to the relevant bit of Shoggoth On The Roof: https://youtu.be/t0URLGsf1e0?t=5m51s
Oh, man, I’m getting nostalgia for my CX Debate days!
In many ways, I think the structure of competitive debate meets the description of interminable debate. There’s no clear answer to the proposed resolutions, because teams are required to play both affirmative and negative during the season. (Seasons with powerhouse plans or a dominating side are the result of poorly chosen resolutions.) The same resolution is debated every round, teams tend to stick to a single affirmative plan, the most successful plans and arguments narrow down over the season, and until championship tournaments, teams are debating the same local teams at each tournament.
So how has CX developed in the face of that dead-horse-ness? 14-gallon tubs of prepackaged arguments. (Most teams have at least six)
The first speech is the canned affirmative plan.
The second speech is the negative reading their canned argument modules.
The third speech is the affirmative reading their canned counter-arguments.
And then you have alternating rebuttals, finishing with final entreaties by each team.
Because teams are debating the same local teams during the season, and the same more effective arguments/strategies are proliferated over the season, each speech gets more and more and more pre-packaged. You get pre-written summaries to read for even the last speech.
The skills are primarily in the eponymous cross examinations, where you do your best to create openings for your subsequent arguments by avoiding getting pattern-matched to your future arguments, and thus getting stonewalled with canned rebuttals.
But the biggest skill in CX debate is, unfortunately, mastery of hedgehogging. The best teams aren’t the ones with the most number of tubs, but the teams carrying the one expando with their pet argument that they use for both negative and affirmative. Always abstract one level higher up and sidestep everything your opponent says about the object level! Framework, motherfucker!
And as someone who was pretty good at that strategy, yeah, I felt that same horror when Scott linked the article about the destruction of CX Nationals by that “critical pedagogy” team. I watched the documentary about it, too, and it was just as enraging. It’s really fun to hedgehog when debate is a game. It’s incredibly frustrating to see it shut down actual educating and productive discussion.
But competitive debate is just that, a game. It’s appealing to a judge, not convincing the other team. It’s limited time to prep and make speeches, with “dropped” arguments considered concessions. It’s a limited number of speeches. It’s a world where having an internet article with a timestamp five minutes more recent wins you evidential precedence.
Anyways, as far as interminable debates go, CX shows that canned responses are probably the way to go. Build those expandos and tubs. Define an acceptable stopping place, so that both sides can give final speeches. When you aren’t giving/receiving new material with one particular discussion partner, there’s nothing to be gained from further interaction with them. Don’t have the discussion with that person again, until new material surfaces. It’s time for the outside audience to judge, now.
But unlike competitive debate, shut that hedgehogging shit down. Give in to the wiki-walk, open up those millions of tabs, explore those branches without funnelling them towards an “endgame,” and then apply your nonfiction writing advice to map those branches into future canned arguments so that that exploration isn’t lost. Every time you link back to Moloch or Niceness or Whale Cancer is doing just that.
And then, let your partner take the next speech.
I have two or three positions that run counter to the ‘consensus’ of society, one of which has been born purely out of my actual academic work and the other two by engaging with academic/legal work outside of my area. As far as fundamental objections to an entire schema go, they’re pretty well thought out (could still be wrong, obviously).
That makes me a little sensitive to the responses, “Have you read X?” “Here’s a bunch of links that give the same conclusion as me.” “Why haven’t you published your idea in a peer-reviewed journal?” “How come a bunch of other people disagree with you?” On one issue, I hardly even get that; I often just get complete disbelief that I’m even taking the position I’m taking, as if it’s incomprehensible that someone could do that.
My strategy has kind of mirrored the “point noobs at FAQs” strategy. The times that I do engage on one of the topics, I find myself writing a mini-FAQ in advance… just to make them bother to engage me back. Basically, I anticipate their objections (because when you’re in a real minority position, you know the objections you’re going to see) and respond to them before they can dismiss me with them.
In addition, I try to point to anyone who might agree with me on a piece of the problem but who can’t be stereotyped away as “they just don’t understand the basics”. On the internet, it’s really difficult to wave your own credentials around and say, “No, on this particular subproblem, I’m actually an expert,” just because no one will believe you… but if you can use that expertise to frame an analogous problem in a way that lowers the temperature of the conversation and shows that you’re actually working from a position of expertise on one topic, you might be able to get someone to engage with you long enough to try to ‘help’ you map the analogy… until they realize that the analogy gores their sacred cow. At that point, a high percentage scurry off, and I never know if they’ve dug into it later or just plugged their ears and tried to forget that the conversation ever happened.
Would you be willing to state that position? I’m really curious what field this is about. (Except if it’s climate change.)
One of them is, “We have less evidence than you think that sexuality is biological/innate.” This one is aided by “point[ing] to anyone who might agree with me on a piece of the problem but who can’t be stereotyped away as “they just don’t understand the basics”.” In particular, when I wasn’t sure how to understand these issues, I went and took classes in the gender studies department of my graduate university. It helps a little bit when you can mention professors of gender studies who are (quote) “agnostic on biological determinism”. This is the position that is most often met with complete disbelief that anyone could possibly take such a position… so much so that they sometimes don’t even respond (I’ve seen this one in person by people I know… not just on the internet).
Another more broad topic that I had in mind is more just completely misunderstood on reddit – tech/national security law. This one is easier, because I’m often able to point to actual court decisions or actual parts of the law as evidence that, “No, that term is not so undefined that it could mean anything; Section X of Title Y gives the following definition.” This one doesn’t quite fit as well into the same dynamics, which is why I had said “two or three” and then didn’t come back to the fact that I really only had two good examples.
The final topic… as you guessed… falls under the broad house of climate change. This one is probably the most relevant for this post and what I had mentioned in my previous comment. Of course, my claim is nothing like “climate change doesn’t exist” or “climate change isn’t man-made”. I’m not a climate scientist, and I’m not qualified to critique that work.
For sexuality, people often don’t even try to point me in the direction of research; for tech/nat’l sec, people may point to shitty tech ‘journalism’; at least with climate change issues, people tend to point you in the general direction of the typically correct destination – academics. Nevertheless, it’s often, “Here’s a bunch of links that give the same conclusion as me.” “Why haven’t you published your idea in a peer-reviewed journal?” “How come a bunch of other people disagree with your conclusion?” It’s hard to get someone to pay attention long enough to get them to realize, “No, I’m not contradicting 99.9% of what you’re linking to. I have one particular concern that came directly out of my PhD study in mathematical, theoretical dynamics. There is at least one publication out there that is also pointing at this problem (besides the theory publications that form the basis of my issue, but those aren’t “publications on climate change”).” This one is very much the, “Actually, I am a credentialed expert on this particular subproblem,” type.
I considered bringing up the “topic that shall not be named” in my post as well. If you want to feel what it is like to have an abusive crowd shout you down no matter what you say, take a contrarian position on *any* aspect of climate change in many technical forums.
I will get downvoted 5:1 in some forums for quoting exact text from the IPCC without further comment.
The sad story is that it is in these types of forums that the desire to censor is most prevalent. They don’t want to shut down discussion only where the science is clear; they want to shut down discussion of any aspect of climate change orthodoxy. Bringing this subject up poisons the well of any discussion; you might as well say you don’t think Hitler was so bad.
I should note that the pitfall here tends to be that if your minority-position FAQ is too long, you can be pattern-matched to the crackpots who send handwritten copies of “108 reasons why the sun revolves around the earth” to professors in philosophy of mathematics (I had a prof show me some he received before; they’re incredible and also sad).
Cogency would help a lot with that…
…so long as they begin reading.
And that all makes sense if you’re talking about normal people going about their normal lives. Just because Steve is gay doesn’t mean it is his job to educate all his friends and co-workers about the subject. But on Tumblr, this gets mutated into the claim that activists don’t have a “responsibility to educate” you, which seems to rather miss the point of what activism actually consists of.
Perhaps the rule could be that I am not required to educate “the population at large” about XYZ, but when I want any portion of that population to change their actions, I am required to provide a persuasive (to them) justification for that change. If I’m okay with them going on doing as they were doing, then no action is required on my part.
(OTOH providing a justification – particularly one chosen to be persuasive to my audience – in order to bring about change is going to short circuit my desire to use the fierce moral urgency of now to lecture the unwashed masses from high atop my soapbox. This will decrease my happiness and so is a non-starter.)
Communications format is important.
Tumblr was designed as a platform for artists to share their work, not for debates.
And it’s really bad at debates:
There’s no comment section, so if you want to reply, you have to reply by making a blog-post.
So there’s no easy way to check other replies, the conversation is hard to follow because it’s spread all over the place, everything you say is treated as an official public statement and an attack, and everything turns into quoteception.
I can write for days on how awful Tumblr is for blogging, and how destroying it forever would only benefit society.
But we can’t close Tumblr, and the owners aren’t interested in changing it, so trying to move discussion somewhere else seems to be the best strategy available. Almost anything would be an improvement.
E-mail is a good choice, but private conversations aren’t as impactful as public ones.
Forums preserve conversations in a nice single “thread”, with an argument-counterargument structure that is easy to follow.
But if there are too many people participating, or many points being debated at once, forums tend to turn into a mess.
Imageboards improve on this by letting you link to the posts you are replying to (and anonymity adds the benefit of protecting your reputation, so you aren’t afraid to lose an argument, and makes it harder to filter out people you disagreed with in the past).
Tree-structured comments like on this site are the best, in my opinion. They separate conversations into distinct branches, but still preserve everything in a single place. The only downsides are that it’s hard to see new posts, and it’s a bit tedious to scroll past replies to comments you don’t care about. But those design problems can be solved with ease, I’m sure.
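To make the structure concrete, here’s a minimal sketch in Python of what tree-structured comments amount to as a data model. Everything in it (the Comment class, the render function, the sample thread) is invented for illustration, not any real site’s implementation.

```python
# A minimal sketch of tree-structured comments: each comment holds its
# replies, and rendering walks the tree so each branch stays together.
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    body: str
    replies: list["Comment"] = field(default_factory=list)

def render(comment: Comment, depth: int = 0) -> None:
    # Indentation separates the conversation into branches while still
    # keeping everything in a single place.
    print("  " * depth + f"{comment.author}: {comment.body}")
    for reply in comment.replies:
        render(reply, depth + 1)

root = Comment("anon1", "Tree-structured comments are the best.")
root.replies.append(Comment("anon2", "Hard to see new posts, though."))
root.replies[0].replies.append(Comment("anon1", "A ~new~ marker would fix that."))
render(root)
```

On this model, the “hard to see new posts” complaint is just a traversal problem: tag each comment with a timestamp and highlight the ones newer than your last visit.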
New comments are in the expandable list in the top right corner (note: it needs cookies for full functionality). Alternatively, Ctrl+F ~new~
You have materially improved my life.
I’m glad to hear that (also concerned about the apparently low discoverability of these features, and impressed with your ability to get by without them until now).
My previous policy was: if there were under a hundred or so new posts, click through them one by one on the right, breaking the sequence to read strings of replies. If there were more than a hundred new posts, just jump to the top of the page and scroll down, reading the green boxes.
The ~new~ search method catches all instances, jumps you to the new stuff immediately, and arranges itself in thread order, not chronological order. Best of all possible worlds.
But we can’t close Tumblr, and the owners aren’t interested in changing it
The owners are very interested in changing it, but to something that is profitable and can be “monetised”. There is constant grousing about changes, some of which is the kind of grousing that people do anytime anything they’re accustomed to is altered, but some of which is justified: e.g. taking away the “replies” function, seemingly because Tumblr staff/owners were trying to push the messaging service instead.
This was a dismal failure, and now they’re promising to bring back replies on the grounds that the old version never worked the way they wanted it to, but the new version will be great. Except we’re still waiting.
I agree that Tumblr was never intended as a blogging platform, but I think with the problems at LiveJournal, a lot of people moved from there to places like Tumblr and we brought our habits with us 🙂
The LiveJournal era skipped me; what problems did it have beyond spam-bots? (LJ still seems to be somewhat popular in Russia.)
LiveJournal also drove off users with its attempts to commercialize. Swaths of communities and accounts were deleted/purged without warning due to pressure from advertisers, which was the main reason people left.
LiveJournal didn’t host uploaded media, so users had to rely on things like ImageShack for their non-text reaction needs. Especially as the copyright banhammer came down harder and harder on video upload sites, this made media-sharing more difficult.
And finally, you shared someone else’s work with good ol’ fashioned hyperlinks; seeing the content itself instead of a link was what feeds were for.
Tumblr is like making your feed into your blog. The most popular tumblrs are those that are basically aggregate feeds, hundreds of queued reblogs a day, generating no original content of their own. Its upload system also means that you get a concentrated dose of non-text media in a single place, instead of having to trawl through lots of other sites to compile the same amount of pics, gifs, and videos.
Yes, Tumblr is a terrible site for text-based interactions. It’s meant to facilitate media consumption, not feedback. (So, ironically, it can be good for media creators in terms of pure promotion, but it still ultimately fails them because of the aforementioned poor feedback mechanisms.) Tumblr is shit as an archive, too, with how easy it is to scrub credit from re-uploads.
But Livejournal’s not any easier to search, and in a world where non-text media is the primary means of interaction, the platform that better facilitates non-text media sharing (even via non-unique uploads) is going to win. More bread, more circus, more users.
I’ve found that I can use google to search lj.
I can’t google “X character LJ” and get the sheer deluge of posts of various media types on X character that I can get by clicking a Tumblr tag. Searching LJ via Google usually meant I had to know specifically what I was looking for in the first place.
On the other hand, it is infinitely easier to find a specific post on LJ than Tumblr’s reblog black hole. (“Which one in this post of 100000 notes had that one amazing sequence of reblog replies?”)
Sorry, I meant that I can use google to search lj if I know which lj the article I want appeared in.
And if the LJ isn’t friends-only. Pretty much everyone I knew took theirs FO at some point during LJ’s heyday. (Except for a couple who were in creative fields and using it as a public-facing blog.)
Blog comment sections have an extreme “home court advantage” for the blogger, who gets to moderate the discussion as desired, and who tends to have an audience skewed toward whichever side of the issue the blogger favors and prone to giving support and deference to the blogger out of a visitor’s courtesy, even when they happen to disagree on some particular point. Thus, when a really divisive conflict comes up, you can often find highly distorted accounts and discussions of it from each side in the blogs of partisans, where position X (or position Y) is OBVIOUSLY the only sane and ethical position and proponents of position Y (or position X) are OBVIOUSLY the scum of the earth. Finding neutral ground on which to carry on a balanced discussion is much more difficult.
There’s just no perfect solution, sadly.
If I were limited to a single word comment it would be this:
Socrates.
Actually, I quite like the performance aspect of argumentation: the glorious glitter of a rhetorical flourish, hammering home a dialectical point. Both rhetoric and dialectic should have their place in the rationalist’s arsenal, even if the former is only learned in a ‘defense against the dark arts’ way.
My thinking is most often changed not by arguing, but by watching someone smarter than me who agrees with me argue with someone also smarter than me who disagrees. I can digest the argument and counter-argument without becoming too emotionally invested. I think the best way to do this is an exchange of emails between intellectual heavyweights which is then collated, edited and published, although I understand Moldbug spent some time thinking about a technical implementation of this.
As an example of both of these things, consider this exchange of letters between Luke of Common Sense Atheism and Vox Day.
Glorious.
Mule!
This post brings back Alexander the Great’s obsession with “niceness” in debate. Why is this the criterion he holds dear? It’s totally subjective, has nothing to do with the quality of an argument or with the quality of the truth, and is probably impossible to define consistently *even in a totally subjective way.*
I continue to suspect that what Scott means by this is “People should only argue with me on terms that I dictate, which are also the terms on which I am most likely to win, and if I don’t win, at least I won’t face consequences for pushing us slightly closer towards genocide.”
lol
I probably shouldn’t be responding to this, because it seems potentially flammable, but this raises some questions for me. I mean that literally: the following are not rhetorical questions, and I’m genuinely curious what your answers to them are.
1. How close do you think we are to genocide, and against whom?
2. What is Scott doing that pushes us closer to said genocide(s), and how hard is he pushing?
3. What sort of consequences are appropriate for that in your view?
I don’t think social media are a good place for having most serious debates, but I don’t think many books laying out the arguments for a position do that great a job either. The trouble with books written to argue a position is that they’re rarely made with the help of a person who’s seriously skeptical of the position and willing to probe, ask all the questions, and raise all the objections that a person with reservations about the position would actually have. Without a person who actually doubts or disagrees to work off of, it’s a lot harder for a writer to work out the level of evidence necessary to make a compelling case for someone who doesn’t already lean in the same direction they do.
The Atlantic has recently started a sort of FAQ section on their website with respect to some interminable arguments:
http://www.theatlantic.com/special-report/a-and-q/
They actually call it “answer and question”: they posit potential solutions to some social phenomenon (police violence, the gender pay gap, etc.) and then question those solutions. It’s… something.
I like it!
This description of the “ignorant newbie” problem reminds me of the Thomas Sowell line, “Each new generation born is in effect an invasion of civilization by little barbarians, who must be civilized before it is too late.” Children pose something of the same problem to society generally that newbies pose to a forum: if every conversation you had was interrupted by a constant stream of four-year-olds asking why the sky is blue, you would go insane. Society’s solution for this is to appoint a small number of guardians to each child at birth to answer the excruciatingly ignorant questions, then send the child off to professionals who are specially selected for (among other skills) tolerance for answering the same ignorant questions every day for an entire career. We mostly do *not* tell kids to go look up the answers on an FAQ, partly because they are mostly illiterate but also because kids don’t necessarily understand why their questions are annoying. Instead, they are forcibly sequestered with people who have volunteered to field their questions.
The analogue in a forum, I guess, would be that new users would *only* have the privilege of posting to a special newbie zone. They could ask their ignorant questions among themselves, and experienced users who wished could delve in and answer those questions. After a newbie passed some threshold (a quiz? a moderator’s certification?), she would be promoted and granted access to the other parts of the forum. Note that you wouldn’t necessarily have to *agree* with the group’s orthodoxy to graduate from Newbie School, but you would at least have to demonstrate familiarity with the standard arguments.
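To make that gating concrete, here’s a minimal sketch in Python of what such a scheme might look like; the board names, quiz threshold, and graduation rule are all invented for illustration, not a proposal for any real forum.

```python
# A hypothetical sketch of "Newbie School" gating: new users may only
# post to the newbie zone until they pass a quiz or get mod-certified.
from dataclasses import dataclass

NEWBIE_ZONE = "newbie-questions"
QUIZ_PASS_MARK = 0.8  # assumed threshold, not from the original comment

@dataclass
class User:
    name: str
    quiz_score: float = 0.0
    mod_certified: bool = False

    @property
    def graduated(self) -> bool:
        # Graduation doesn't require agreeing with the orthodoxy, only
        # demonstrating familiarity with the standard arguments.
        return self.mod_certified or self.quiz_score >= QUIZ_PASS_MARK

def can_post(user: User, board: str) -> bool:
    # Newbies are sequestered; graduates can post anywhere.
    return user.graduated or board == NEWBIE_ZONE

alice = User("alice", quiz_score=0.9)
bob = User("bob")
assert can_post(alice, "main-forum")
assert can_post(bob, NEWBIE_ZONE)
assert not can_post(bob, "main-forum")
```

The interesting design question is the threshold: a quiz tests familiarity with the standard arguments at scale, while a moderator’s certification trades scalability for judgment.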
As kids’ level of education increases, they do increasingly get expected to have some degree of ability to look things up on their own and not pester parents/teachers/etc. with questions about everything. Often homework assignments in middle school and later center around using reference works to find answers. (I’m of an old enough generation that in my day this involved going into the library and looking through books, but nowadays it’s likely the Internet is involved heavily.)
How do you feel about the idea of doing this in a genuine philosophical dialogue, where different parties work together to prepare a publishable document which represents a dialogue between them about the issue? Basically like an email conversation, but after a misunderstanding you can go back and say “After we reveal this difference in definitions, let’s pretend that a while ago I said X’ instead of X” or “So, I think your point would be more understandable if you phrased it as Y’ instead of Y; do you agree?”.
“Nobody on Tumblr can just say they think feminism is important; they have to post comics like this”
LOL you actually think that’s an example of people being jerks? You must be thin skinned if such harmless jokes offend you.
Late-to-the-party comments:
The best lists I have been on were heavily moderated.
The StackExchange sites are the best Q&A model, but then I am a computer guy, and anyway it’s easier to talk about testable answers.
It strikes me that the discussions here and at LW have the pros and cons of many academics present:
high-flying/pretentious; detailed/nit-picking; thoughtful/stentorian; kind/supercilious and so forth. Infamous academic pettiness may be required to get the hard thinking and study. If so, so be it.
The talk around systems of linking, FAQ building, and (I don’t quite know what to call it) less linear organization with fewer duplicates would be served by a wiki, at least in part. I’d like a sort of 3D wiki with threads of discussion and edited nodes that automatically link. Wikipedia has solved many of the problems of argument and consensus-building.
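The “edited nodes that automatically link” part is easy to sketch. Here’s a toy version in Python that scans each node’s text for the titles of other nodes and records the links; the node titles and the simple substring-matching rule are invented for illustration.

```python
# A toy auto-linking wiki: a node links to every other node whose
# title appears in its text (case-insensitive substring match).
import re

nodes = {
    "FAQ": "See the Interminable Arguments node before posting.",
    "Interminable Arguments": "Arguments that recur; collect them in the FAQ.",
    "Moderation": "Policies for keeping discussion threads productive.",
}

def auto_links(pages: dict[str, str]) -> dict[str, list[str]]:
    links: dict[str, list[str]] = {}
    for title, text in pages.items():
        links[title] = [
            other for other in pages
            if other != title and re.search(re.escape(other), text, re.IGNORECASE)
        ]
    return links

print(auto_links(nodes))
# {'FAQ': ['Interminable Arguments'], 'Interminable Arguments': ['FAQ'], 'Moderation': []}
```

A real version would want smarter matching (redirects, synonyms, stemming), but the automatic part is what would keep duplicate threads discoverable without anyone manually curating the links.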