In Mod We Trust

The Verge writes a story (an exposé?) on the Facebook-moderation industry.

It goes through the standard ways the industry maltreats its employees: low pay, limited bathroom breaks, awful managers – and then into some not-so-standard ones. Mods have to read (or watch) all of the worst things people post on Facebook, from conspiracy theories to snuff videos. The story talks about the psychological trauma this inflicts:

It’s an environment where workers cope by telling dark jokes about committing suicide, then smoke weed during breaks to numb their emotions…where employees, desperate for a dopamine rush amid the misery, have been found having sex inside stairwells and a room reserved for lactating mothers…

It’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”

One of the commenters on Reddit asked “Has this guy ever worked in a restaurant?” and, uh, fair. I don’t want to speculate on how much weed-smoking or sex-in-stairwell-having is due to a psychological reaction to the trauma of awful Facebook material vs. ordinary shenanigans. But it sure does seem traumatic.

Other than that, the article caught my attention for a few reasons.

First, because I recently wrote a post that was a little dismissive of moderators, and made it sound like an easy problem. I think the version I described – moderation of a single website’s text-only comment section – is an easier problem than moderating all of Facebook and whatever horrible snuff videos people post there. But if any Facebook moderators, or anyone else in a similar situation, read that post and thought I was selling them short, I’m sorry.

Second, because the article gives a good explanation of why Facebook moderators’ job is so much harder and more unpleasant than my job or the jobs of the mods I work with: they are asked to apply a set of rules so arcane that the article likens them to the Talmud, then have their decisions nitpicked to death – with career consequences for them if higher-ups think their judgment calls on edge cases were wrong.

While I was writing the article on the Culture War Thread, several of the CW moderators told me that the hard part of their job wasn’t keeping the Thread up and running and well-moderated, it was dealing with the constant hectoring that they had made the wrong decision. If they banned someone, people would say the ban was unfair and they were tyrants and they hated freedom of speech. If they didn’t ban someone, people would say they tolerated racism and bullying and abuse, or that they were biased and would have banned the person if they’d been on the other side.

Me, I handle that by not caring. I’ve made it clear that this blog is my own fiefdom to run the way I like, and that disagreeing with the way I want a comment section to look is a perfectly reasonable decision – which should be followed by going somewhere other than my blog’s comment section. Most of my commenters have been respectful of that, I think it’s worked out very well, and my experience moderating thousands of comments per week is basically a breeze.

Obviously this gets harder when you have hundreds of different moderators, none of whom are necessarily preselected for matching Facebook HQ’s vision of “good judgment”. It also gets harder when you’re a big company that wants to keep users, and your PR department warns you against telling malcontents to “go take a hike”. It gets harder still when you host X0% of all online discussion, you’re one step away from being a utility or a branch of government or something, and you have a moral responsibility to shape the world’s conversation in a responsible way – plus various Congressmen who will punish you if you don’t. The way Facebook handles moderation seems dehumanizing, but I don’t know what the alternative is, given the pressures they’re under.

(I don’t know if this excuses sites like the New York Times saying they can’t afford moderators; I would hope they would hire one or two trusted people, then stand by their decisions no matter what.)

Third, I felt there was a weird tension in this article, and after writing that last paragraph I think I know what it is. This was a good piece of investigative reporting, digging up many genuinely outrageous things. But most of them are necessary and unavoidable responses to the last good piece of investigative reporting, and all the outrageous things it dug up. Everything The Verge is complaining about is Facebook’s attempt to defend itself against publications like The Verge.

Take, for example, the ban on phones, writing utensils, and gum wrappers.

The Verge brings this up as an example of the totalitarian and dehumanizing environment that Facebook moderators experience. But I imagine that if an employee had written down (or used their phone to take a picture of) some personal details of a Facebook user, The Verge (or some identical publication) would have run a report on how Facebook hired contractors who didn’t even take basic precautions to protect user privacy.

And what about the absolutist, infinitely-nitpicky rules that every moderator has to follow (and be double- and triple-checked to have followed) on each decision? Again, totalitarian and dehumanizing, no argument there. But if a moderator screwed up – if one of them banned a breastfeeding picture as “explicit”, and the Facebook Talmud hadn’t included twelve pages of exceptions and counterexceptions for when breasts were and weren’t allowed – I imagine reporters would be on that story in a split second. They would be mocking Facebook’s “lame excuse” that it was just one moderator acting alone and not company policy, and leading the demands for Facebook to put “procedures” in place to ensure it never happens again.

If I sound a little bitter about this, it’s because I spent four years working at a psychiatric hospital, helping create the most dehumanizing and totalitarian environment possible. It wasn’t a lot of fun. But you could trace every single rule to somebody’s lawsuit or investigative report, and to some judge or jury or news-reading public that decided it was outrageous that a psychiatric hospital hadn’t had a procedure in place to prevent whatever occurred from occurring. Some patient in Florida hit another patient with their book and it caused brain damage? Well, that’s it, nobody in a psych hospital can ever have a hardcover book again. Some patient in Delaware used a computer to send a threatening email to his wife? That’s it, psych patients can never use the Internet unless supervised one-on-one by a trained Internet supervisor with a college degree in Psychiatric Internet Supervision, which your institution cannot afford. Some patient in Oregon managed to hang herself in the fifteen minute interval between nurses opening her door at night to ask “ARE YOU REALLY ASLEEP OR ARE YOU TRYING TO COMMIT SUICIDE IN THERE?” Guess nurses will have to do that every ten minutes now. It was all awful, and it all created a climate of constant misery, and it was all 100% understandable under the circumstances.

I’m not saying nobody should ever be allowed to do investigative reporting or complain about problems. But I would support some kind of anti-irony rule, where you’re not allowed to make extra money writing another outrage-bait article about the outrages your first outrage-bait article caused.

But maybe this is unfair. “Complete safety from scandal, or humanizing work environment – pick one” doesn’t seem quite right. High-paid workers sometimes manage to do sensitive jobs while still getting a little leeway. When I worked in the psychiatric hospital, I could occasionally use my personal authority to suspend the stupidest and most dehumanizing rules. I don’t know if they just figured that medical school selected for people who could be trusted with decision-making power, or if I was high-ranking enough that everyone figured my scalp would be enough to satisfy the hordes if I got it wrong. But it sometimes went okay.

And lawyers demonstrate a different way that strict rules can coexist with a humanizing environment; they have to navigate the most complicated law code there is, but I don’t get the impression that they feel dehumanized by their job.

(but maybe if the government put as much effort into preventing innocent people from going to jail as Facebook puts into preventing negative publicity, lawyers would be worse off too.)

It seems like The Verge’s preferred solution, a move away from “the call center model” of moderation, might have whatever anti-dehumanization virtue doctors and lawyers have. I’m not sure how this would work in practice, but it prevents me from being as snarky as I would be otherwise.

(except that I worry in practice this will look like “restrict the Facebook moderation industry to people with college degrees”, and we should think hard before we act like this is doing society any favors)

Last, I find this article interesting because it presents a pessimistic view of information spread. Normal people who are exposed to conspiracy theories – without any social connection to the person spouting them, or any pre-existing psychological vulnerabilities that make them seek the conspiracy theories out – end up believing them or at least suspecting. This surprises me a little. If it’s true, how come more people haven’t been infected? How come Facebook moderators don’t believe the debunking of the conspiracy theories instead? Is it just that nobody ever reports those for mod review? Or is this whole phenomenon just an artifact of every large workplace (the article says “hundreds” of people work at Cognizant) having one or two conspiracy buffs, and in this case the reporter hunted them down because it made a better story?

Just to be on the safe side, every time someone shares an SSC link, report it as violating the Facebook terms of service. We’ll make rationalists out of these people yet!

285 Responses to In Mod We Trust

  1. sharper13 says:

    They did a couple of documentaries on this. I imagine it’s even worse when the moderators are third world denizens who aren’t normally exposed to anything like the “stuff” out there in normal life and may not even have Internet access beyond perhaps a mobile phone with data.

    Thousands of people being exposed to child porn, snuff videos, etc., in order to try to protect the masses from them – what a thankless task. Anyone who lasts very long has got to become hardened to it, like a homicide cop or an inner city paramedic.

    • toastengineer says:

      I remember hearing that 4Chan-ers used to lace anything they didn’t want normal people to read with shock images, because the ingroup had already seen them a million times before and were desensitized to them, but that may just be a story.

      • albertborrow says:

        I think dismissing anything related to 4-chan as “just a story” is a recipe for disaster. It’s the internet. We have evidence. I’m not sure if 4chan laces normal posts with shock images to keep people away, but they definitely use them as weapons of indiscriminate terror. Like when they spammed tumblr with hardcore porn and gore during the tumblr raid. Beyond that, I can’t be bothered to research, but it wouldn’t surprise me. There’s something cathartic about seeing someone react to internet classics like goatse (wikipedia link, not shock image link, for historical context) for the first time, even if it’s objectively a horrible thing to ever do to someone.

        Incidentally, while searching for evidence on Google, I found this link (warning, /pol/) where they seem to be discussing whether they can spam Facebook enough to redpill its moderators, based on the same article this post is referencing.

      • POGtastic says:

        The top free PDF of Starship Troopers has a really bizarre image of a naked woman as the first page, and I could never figure out why. It was posted by someone on 8chan, so I assume he did it just to mess with people.

        • onyomi says:

          In China I once saw a bootleg DVD of the film Capote with this cover, but with flames and scantily clad babes. Seriously. I assume the bootlegger took one look at the original cover and said “this will never sell. Spice it up a bit!”

      • Gerry Quinn says:

        I think the whole of 4chan works like that.

    • Conrad Honcho says:

      I have heard, but don’t have a source and don’t really want to enter the search terms I would need into google to find one, that FBI child porn investigators are on something like an 18 month rotation, because nobody can handle looking at abuse images for that long.

      • albatross11 says:

        I know people who have done a lot of computer forensics work, and they talk about the emotional impact of having to look at tons of these images.

      • McMike says:

        In that sense, it’s not that different from being an ambulance driver, or on a Search and Rescue team. A substantial portion of SAR missions are of course body recoveries, not rescues as the name implies.

        • Conrad Honcho says:

          I think there is a huge difference between dealing with some blood and guts from a traffic accident and watching people deliberately abuse children day in and day out.

          • McMike says:

            I think there is a huge difference between dealing with some blood and guts from a traffic accident and watching people deliberately abuse children

            I think that may be a distinction without a difference. Except to the extent that first responders are better paid and treated as humans.

          • Mr. Doolittle says:

            @ Conrad

            Frequency appears to be the primary difference. Rural ambulance drivers might have the occasional blood and guts. City drivers might spend every night fighting for the lives of stabbing victims. That personal interaction of a dying person might very well be far worse for a person than seeing images online. Of course, that would depend on the individual seeing it.

          • deciusbrutus says:

            Occasionally dealing with child blood and gore caused by abuse or neglect in person is in the same category as regularly dealing with images of child blood and gore.

            Anyone who isn’t disturbed by that has no business being in that job.

        • MoebiusStreet says:

          I recall reading, after some disaster or other, that even the dogs working on S&R were getting burned out by the disappointment of locating victims for whom it’s already too late. Their handlers, the article said, will occasionally plant a healthy volunteer for the dog to “find”, making it feel more successful.

        • Thomas Jorgensen says:

          Search and rescue workers have the days where they actually do save someone to carry them through. And, also, recovering someone’s body is, however gruesome, a service to their family. Just constant exposure to the worst of humanity with no upside? Urgh.

      • ilikekittycat says:

        I wonder how much of the problem is the people they tend to employ having a strong purity component to their morality that leaves them feeling “poisoned” or “contaminated”

        If I were picking people to have to look at a lot of disgusting images in a row and have to evaluate them objectively, cops or FBI people (as they exist in America) would be some of my last picks.

  2. Bakkot says:

    Me, I handle that by not caring. I’ve made it clear that this blog is my own fiefdom to run the way I like, and that disagreeing with the way I want a comment section to look is a perfectly reasonable decision – which should be followed by going somewhere other than my blog’s comment section. Most of my commenters have been respectful of that, I think it’s worked out very well, and my experience moderating thousands of comments per week is basically a breeze.

    There’s another important difference between your experience moderating and Facebook’s: you can be consistent by just not changing your mind very often, whereas Facebook has to try to coordinate thousands of people with their pre-existing inclinations and biases. Even with just the half-dozen mods on the subreddit, one of the most common complaints we get is inconsistency. No one likes trying to abide by rules which aren’t even coherent.

    I gather that legal systems tend to solve this by having a strong culture of respecting precedent (and paying lawyers lots of money to know what precedent is). I don’t think that scales to Facebook.

    • Jiro says:

      Even with just the half-dozen mods on the subreddit, one of the most common complaints we get is inconsistency.

      That’s not just caused by there being more than one mod, though. It’s also caused by:

      1) mods’ reluctance to ever say that something is permitted and

      2) mods’ reluctance to ever admit that another mod made a mistake or to say “no, that wasn’t policy after all”.

    • po8crg says:

      The real way legal systems solve this is having formal appeal processes.

      In the US, courts at the same level are allowed to be inconsistent with each other. When they are, you can appeal to the level above and they will consider the question and decide which one was right (or compromise between the two positions).

      The highest level has only one court, so it can’t be inconsistent with itself. It can change its mind – the rules of precedent are that any court is bound by precedent from higher-level courts, not by their own precedent. Following your own precedents isn’t called “precedent”, but “stare decisis”, which basically just means that courts try not to overturn their own precedents very often. But the court can’t have two opposing decisions at the same time, whereas two lower courts clearly could.

      The Supreme Court of the United States can choose which cases it hears, but one case it will almost always hear is when there is a “circuit split”, ie when two of the courts at the next level down (the United States Circuit Courts of Appeal, known as the “circuit courts”) have issued contradictory rulings.

      One reason why, IMO, big moderation systems don’t work well is that they don’t have clear, public rules (that Facebook handbook should be public) or formal appeal processes for when a moderator makes a ruling. If there’s a formal appeal, then you do need a fast process for saying “yes, of course the original moderator was right, go away and stop trying to run a denial of service on the appeals process”. In real courts that’s called a “vexatious litigant”; also lawyers can be struck off for bringing pointless cases or appeals that have no chance of success.

  3. eterevsky says:

    I wonder if it is possible to replace the arcane rules with moderator discretion, provided you’ve picked your moderators appropriately. Say at a job interview you ask the candidate to rate 50 examples using their intuition, and just pick those candidates that were 95% correct.

    • brmic says:

      There is even such a thing as signal detection theory, which enables one to differentiate a mod’s detection ability (i.e. ability to make correct decisions) from their willingness to ban (i.e. tendency to be lenient/strict), and this in turn can be used to train people, especially their overall response tendency (willingness to ban). This works for training radiologists and similar personnel. I’d be surprised to hear Facebook isn’t using something along those lines. Straight up signal detection theory has the downside that it bluntly admits it’s impossible to only make correct decisions, and I’m not sure Facebook is capable of admitting that to itself and/or to the outside world.
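
      (For concreteness, a minimal sketch of the equal-variance signal detection calculation described above; the hit and false-alarm rates are invented for illustration, and this is not any actual Facebook tooling.)

      ```python
      # Separate a mod's detection ability (d') from their response bias (criterion).
      # hit_rate: fraction of genuinely rule-breaking posts the mod flagged.
      # false_alarm_rate: fraction of acceptable posts the mod flagged anyway.
      from statistics import NormalDist

      def sdt_measures(hit_rate, false_alarm_rate):
          z = NormalDist().inv_cdf
          d_prime = z(hit_rate) - z(false_alarm_rate)             # sensitivity
          criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # willingness to flag
          return d_prime, criterion

      # Two hypothetical mods: similar headline accuracy, very different styles.
      print(sdt_measures(0.90, 0.10))  # d' ~ 2.56, criterion ~ 0 (neutral)
      print(sdt_measures(0.98, 0.40))  # d' ~ 2.31, criterion ~ -0.90 (flags nearly everything)
      ```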

      • Simon_Jester says:

        Yeah.

        Radiologists have the huge advantage of being generally accepted as professionals rather than “I don’t pay you to think” style cogs in a corporate machine. And the health care industry is so accustomed to having to accept a certain level of wrong decisions as part of the cost of doing business that they have well-established systems to provide insurance to pay for the things that go wrong.

        Most other industries don’t seem to have that same attitude towards risk. The managerial-entrepreneurial culture, at least in the US, seems to hold that you can eliminate bad decisions and mistakes entirely, if you just micromanage and bully your employees hard enough. Which appears to be what Facebook is doing.

        • sclmlw says:

          Yes, but the problem is two-fold:
          1. You can’t eliminate risks entirely. In a country of 350 million (or billions worldwide) even a system that works 99.99% of the time will still produce negative results, and those results will become newsworthy. If we have a policy of hardening systems due to every newsworthy incident we will approach totalitarianism.
          2. You can’t stop people from responding to newsworthy items. It’s dumb to have policy driven by one-off incidents, but what’s the alternative? If you’re a terrorist and you want to make the US respond you can do this by kidnapping someone – and they know this. Rational policy would say, “Yes, we want to take reasonable steps to prevent this kind of thing, but in the end we have to accept this kind of thing will happen from time to time – and more so if we respond disproportionately to every kidnapping.” But as Dan Carlin once said, yes I know that intellectually but I have kids of my own and I have to tell you if it was one of my children I’d be ready to start dropping nukes.

          So how do you calibrate a system that allows an acceptable amount of error, when that error can never be accepted by the victims of the policy? And if you have victims on both sides of a policy, like Scott points out, you’ll tear yourself apart seeking some illusory perfect system.

          It’s easy to say, “Take it easy. These minor incidents are bound to happen.” But in practice we handle risk management poorly as a society. The best I can think of is that on an individual level I can remind myself that 90+% of all ‘news’ will be irrelevant by next week, so I don’t sweat the petty things.

    • McMike says:

      Of course FB picks its mods based on who will do this for $24k as a contract worker.

    • Ketil says:

      But 95% is way too low, isn’t it? This isn’t science, but journalism, or worse: social media mobs. It only takes a single mistake to stir up a lot of heat, and with centralized/federated censorship, everything crumbles on top of the mothership, every single time. The only defense is pointing to the relevant rule, and blaming the subcontractor.

      This amplification effect, that a single incident can be magnified up to continent-sized proportions, is a new feature of social media networks, and I don’t think we have found a good way to deal with it yet. How often do twitter storms rise from some completely unimportant person doing something moderately offensive, or from some twitter-famous person making some unfounded claim or other? Say what you will about journalists, but they did apply a modicum of standards – at least, if they worked for outlets with ambitions or pretense of being serious. Tabloids had fewer limits, of course, but social media is like tabloids squared, and with any and all accountability out the window.

      • McMike says:

        Somewhere between the indiscriminate disproportionate amplification of social media, and the de facto censorship of gatekeeping professional media, lies the sweet spot.

      • deciusbrutus says:

        You can square your error fraction (so e.g. a 1% error rate becomes 0.01%) by slightly more than doubling your budget and having two people review everything.

        Facebook already does a thing where if a small enough fraction of people report a post, they just get told that it didn’t violate the community guidelines with no investigation done. You can best see that effect by joining a closed group that is a full step away from a hate group and reporting the posts making calls to violence.
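
        (A back-of-the-envelope check on the double-review claim above, assuming the two reviewers make mistakes independently – which is optimistic, since they work from the same rulebook.)

        ```python
        # Toy model: an item is mishandled only if every reviewer misses it,
        # so with an independent per-reviewer error rate p the combined rate is p**n.
        def combined_error_rate(p, reviewers=2):
            return p ** reviewers

        p = 0.01  # hypothetical 1% per-reviewer error rate
        print(f"one reviewer:  {p:.4%}")                          # 1.0000%
        print(f"two reviewers: {combined_error_rate(p, 2):.4%}")  # 0.0100%
        ```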

    • oconnor663 says:

      A couple issues that I can think of:

      – The rules change over time. At one point the rule was “no nipples”, mainly to avoid exposing minors to inappropriate sexual content. Then breastfeeding mothers got more vocal and got a lot of press, and exceptions were made to protect them. You don’t want to be in a position where you need to hire a new workforce when that happens.

      – Relying on intuition doesn’t give the organization a good way to defend its decisions. Suppose it came to light that Facebook had a racial bias in its handling of pictures of people with guns. (Maybe one race was disproportionately classified as “promoting violence” or something.) If Facebook can explain away that effect as the result of a concrete rule like “pointing guns at the camera counts as promoting violence,” they might be able to defend themselves. But if they’re relying on “people’s intuition about guns and race” then they might not have a leg to stand on.

      • ilikekittycat says:

        The rules change over time. At one point the rule was “no nipples”, mainly to avoid exposing minors to inappropriate sexual content. Then breastfeeding mothers got more vocal and got a lot of press, and exceptions were made to protect them. You don’t want to be in a position where you need to hire a new workforce when that happens.

        IIRC it got even worse than that… there was a very poor tribe in Kenya that had to breastfeed young goats for part of the year to keep them alive through the drought, and because they are part of Facebook on cellphones like all of us, sharing pictures or discussion of the practice was banned. The result was that they thought Western morality judged them to be immoral people, because of the absolute anti-bestiality-and-nipples rule.

        Facebook’s scale is waaay beyond where they need “forum moderators generally applying California standards of discourse”, because things like “yea you should probably make an exception for not shaming very poor foreigners having to do a very weird thing” are beyond the moral imagination of a lot of people. It’s not quite a state, but a lot of “Seeing Like A State” type effects about The Right Way to Be Modern reinforce themselves through social media hegemony.

        • Gerry Quinn says:

          Not to mention that a lot of stuff in California would seem equally weird to this Kenyan tribe.

        • sclmlw says:

          I read this and thought, “Okay, so just have different moderator rules for Kenyans versus Californians and vice-versa.”

          Then I realized the consequence of this would be a bunch of Californians looking up Kenyan posts and vice-versa, because this is the internet and global sharing means it’ll get out there faster than you think.

          “Okay, wall off the Kenyans from the Californians.”

          That devolves into walling off the internet, which even FB would struggle to accomplish. And if they try the result will be outrage and a PR nightmare. “FB is totalitarian. They’re walling off the internet to control your mind! Etc.”

          “Okay… apply transnational filters? Only block the nipples/goat breastfeeding for some nations but not others?”

          There have to be a dozen reasons this will never work (half of them logistical).

  4. fbanon says:

    I just wanted to say thanks for writing this. I work at Facebook as an engineer, and we’re not as evil as the news makes us out to be. For example, when content is flagged as gore, we apply image matching (think TinEye or Google Reverse Image search) so other moderators don’t have to view the content again; re-uploads are flagged automatically. The moderators are hired from sourcing agencies, so we don’t have too much oversight into their working conditions, but we’re trying to improve this as well.
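
    (For readers curious what that looks like mechanically, here is a minimal sketch of near-duplicate flagging via perceptual hashing; the Pillow/imagehash packages and the distance threshold are illustrative assumptions, not Facebook’s actual pipeline.)

    ```python
    # Sketch: auto-flag re-uploads of content a human moderator already removed.
    from PIL import Image
    import imagehash

    HAMMING_THRESHOLD = 5    # how different two hashes may be and still count as a match
    known_bad_hashes = set() # perceptual hashes of already-removed content

    def record_removed_image(path):
        known_bad_hashes.add(imagehash.phash(Image.open(path)))

    def is_probable_reupload(path):
        h = imagehash.phash(Image.open(path))
        return any(h - bad <= HAMMING_THRESHOLD for bad in known_bad_hashes)

    # record_removed_image("reviewed_and_removed.jpg")
    # is_probable_reupload("new_upload.jpg")  # True for near-identical copies
    ```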

    • Reasoner says:

      Glad to hear it!

      Have you thought about trying to detect the behavioral signature of users interacting with the content you’d like to censor? Example: Maybe after seeing a snuff video, users are more likely than normal to close Facebook in disgust, or unfollow/defriend/block the person who shared it.

      Then even if no one reports a piece of content which has these properties, you could still use tricks to make it a bit harder for users to stumble across (suppress it in the newsfeed, say). So instead of a fixed boundary between censored content and non-censored content, you get more of a gradual transition.

      A gradual transition also lets you make use of methods which have a higher false positive rate. E.g. make it a bit harder for users to stumble across any piece of content which a machine learning algorithm believes is similar to a piece of content which we already know has the wrong behavioral signature.

      If you could find a smart way to do this kind of stuff, you could turn Facebook into a garden of thoughtful discussion. Imagine trying to detect and promote the behavioral signature of bitter enemies coming to understand each other as friends.
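
      (A toy sketch of what that kind of behavioral scoring might look like; the signal names, weights, and floor below are all invented for illustration, not anything Facebook is known to use.)

      ```python
      # Downrank content whose viewers react badly, instead of hard-censoring it.
      from dataclasses import dataclass

      @dataclass
      class ViewStats:
          views: int
          quick_exits: int       # sessions that ended right after viewing the post
          blocks_of_poster: int  # viewers who blocked/unfriended the poster afterwards

      def distress_score(s):
          if s.views == 0:
              return 0.0
          return 0.6 * (s.quick_exits / s.views) + 0.4 * (s.blocks_of_poster / s.views)

      def feed_rank_multiplier(s):
          # Gradual suppression: 1.0 = normal reach, floor of 0.1 instead of outright removal.
          return max(0.1, 1.0 - distress_score(s))

      # A post where 40% of viewers bail immediately and 10% block the poster
      # keeps 1 - (0.6*0.4 + 0.4*0.1) = 0.72 of its normal feed reach.
      print(feed_rank_multiplier(ViewStats(views=1000, quick_exits=400, blocks_of_poster=100)))
      ```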

      • Taradino C. says:

        I don’t think that approach would work when the base rate for that type of content is so low. The overwhelming majority of behavioral matches would be false positives.

        • Reasoner says:

          Sure, but nerfing the virality on that stuff makes Facebook a better platform anyway.

          • Taradino C. says:

            My point is that this method won’t detect “that stuff”, or any category of “stuff”.

            For example: people leave the news feed after viewing all kinds of content, simply because they have to stop somewhere. Since the news feed is already ranked by how much FB thinks you want to see each story, most of the time, your stopping point is just a function of how much time you’re willing to spend scrolling through the feed and how well the stories above that point held your interest.

            When people take action to hide the content itself, I believe that’s already tracked and used in the model. Page admins can see how many followers hid a post or unfollowed/unliked the page in response to it. But other than feeding that back into the ranking model, it’s hard to justify any more serious action, because most of that content will just be stuff people found uninteresting or disagreed with.

    • Winja says:

      I’m sure you’re a decent person and a good engineer, and congratulations on getting a job at a company that can afford to be super picky.

      But that doesn’t change the fact that Facebook is a cultural cancer and all of the C-suiters there are psychopaths.

      Sorry to have to state it like that. 🙁

  5. JulieK says:

    Normal people who are exposed to conspiracy theories – without any social connection to the person spouting them, or any pre-existing psychological vulnerabilities that make them seek the conspiracy theories out – end up believing them or at least suspecting. This surprises me a little.

    I don’t find it surprising. If there weren’t a human tendency to believe such things, they wouldn’t be so widely believed.

    It also demonstrates that censorship can be beneficial.

    • HaraldN says:

      I am not so sure about your last sentence. Censoring something can often have the opposite effect. Especially as a central pillar of most conspiracy theories is “this powerful group doesn’t want you to know”, so having a powerful group confirm that they don’t want you to know it will naturally lend credence to the other statements.

    • DutLinx says:

      I agree completely with your first sentence and, to me, that’s the strongest argument against the “free marketplace of ideas”-style philosophy of free speech. We’re not Homo Economicus or Homo Rationalis, willing to hear all arguments then forming a coherent conclusion from them. We’re full of biases, fears, insecurities and heuristics, all of which can be exploited by a sufficiently rhetorically-skilled person. Kinda worries me.

      • Virriman says:

        Aren’t there rhetorically-skilled people on every side of any issue though? Once the competing exploitation of biases cancel out, shouldn’t the weight of evidence be just as decisive as it would be in a world where no one was rhetorically-skilled?

        • Not as decisive. Some arguments are better suited for rhetorical exploitation than others.

          • albatross11 says:

            There are actually ways to proceed on questions of fact, even when they’re surrounded by a lot of culture war, tribalism, partisan screaming, etc. Those things make it harder to figure out the truth, but not impossible.

        • Yosarian2 says:

          We have cognitive biases though that leave us unusually vulnerable to certain kinds of false ideas. Anything involved with race or tribe or politics for example. Or even just “doctors sticking you with needles are scary” is an idea people may be especially vulnerable to for example, even if the better rational arguments are on the other side.

          • albatross11 says:

            But all humans are susceptible to those same biases, right? So who would you get to vet the ideas that needed to be suppressed to keep the proles from getting the wrong ideas? Why wouldn’t they be equally susceptible?

          • DeWitt says:

            This really shouldn’t need to be said, but humans differ from one another a hell of a whole lot and definitely aren’t all the same.

        • Pattern says:

          Aren’t there rhetorically-skilled people on every side of any issue though?

          No. (And not all issues have just 2 sides.)

    • toastengineer says:

      Meh, all it says to me is that the Mere-exposure effect still exists. Sure the “free marketplace of ideas” doesn’t work if the wackoes outnumber everyone else a hundred to one, but I don’t think anyone ever claimed it did.

      • Ketil says:

        The big advantage for non-wackoes is that they tend to agree. There might be more wackoes, but they all have different wackinesses. I’m not worried, except that with censorship, there is a good chance a wacko will be the one in charge.

        • albatross11 says:

          Ketil:

          Lots of non-wackos agree on young-Earth creationism, so this doesn’t necessarily track with being right, just with beliefs that are consistent with living a pretty normal and functional life. And that’s true of all kinds of oddball beliefs–I mean, some kinds of mental illness will make you paranoid or convince you the CIA is beaming thoughts into your head or something, but most people who have weird beliefs are otherwise functional and normal people who just believe some silly things.

          • brianmcbee says:

            Recently watched a documentary about flat-earthers. One of the flat-earth people mentioned that it seemed like a lot of her fellows started believing more and more conspiracy stuff over time that seemed obviously wrong to her. I would guess the motivation for this would be “what else are they lying to us about?”

            She also showed some self-awareness, in that she wondered if some of the things she believed might actually turn out to be wrong.

    • albatross11 says:

      Who chooses which things to censor, and how do they decide? Why are they any better at deciding what to believe than I am?

    • Winja says:

      There are often compelling arguments in favor of various conspiracy theories.

      If you aren’t the sort of person who error-checks that kind of thing by deliberately seeking out critical information on the conspiracy, enough exposure without getting anything to counteract it will sway your beliefs.

      • albatross11 says:

        Notably, the Madoff scam was reported to the SEC a couple times by serious folks, but the people at the SEC blew those warnings off.

        • Winja says:

          If you asked the SEC why they didn’t investigate, they’d probably claim they didn’t have enough money to properly look into it.

          • Edward Scizorhands says:

            Variation of the Mount Rushmore Syndrome?

            To know how bad the SEC messed up, I would need to know how many tips they get, and how well the details of this tip compared to others.

  6. brmic says:

    Normal people who are exposed to conspiracy theories – without any social connection to the person spouting them, or any pre-existing psychological vulnerabilities that make them seek the conspiracy theories out – end up believing them or at least suspecting. This surprises me a little.

    Have a chat with the guy who wrote

    your mom is a brute-force statistical pattern matcher which blends up the internet and gives you back a slightly unappetizing slurry of it when asked.

    Obviously spending 8 hours a day reading conspiracy theories has _some_ effect. The real problem here is the missing effect size. From the article we don’t know whether this job tips over the 1 in 100,000 who were at the margin anyway (your conspiracy buffs) or the 1 in 100 who initially were perhaps not particularly open to this stuff.

    • jumpinjacksplash says:

      I think this is probably it, especially as a moderator doing a six-hour[?] shift five days a week is going to see waaaay more conspiracy-related articles than a random Facebook user. I suspect there’s a big difference between occasionally seeing an Alex Jones video your weird friend sends you (probably less than 1 hour per month of exposure) and spending maybe 100 hours per month only reading conspiracy theories. That level of saturation would probably affect someone’s whole view of ‘what normal arguments/information look like.’

      • Pattern says:

        Additionally, if there’s a good arguer out there, maybe they’ll see their work as a result of seeing everything.

  7. TyphonBaalHammon says:

    You are not the first one to suggest that there’s a “contradiction” in people complaining both about Facebook having bad moderation and no respect for privacy AND ALSO about Facebook having a terrible way of dealing with moderation by using a hellish contractor that enforces authoritarian rules on its hapless distressed employees who get PTSD from watching all the crazy shit they have to moderate.

    Let’s not even take into account that the rules about not having paper or phones on-site are probably not applied to Facebook employees themselves, or that Facebook as a company is guilty of far worse than violating the privacy of only one of its users (it violates even the privacy of non-users).

    But there’s a simple solution, which is that Facebook should just stop existing. Kill it with fire, burn it down and pour salt on the ruins. We can manage without Facebook (which is why your comparison with psychiatric hospitals is inappropriate).

    Facebook and its ilk are structurally impossible to moderate because they try to swallow the world whole. They deserve to be held responsible for every bad thing that happens as a direct result of this hubristic imperialist project.

    The “archipelago” that you once spoke about would be a utopia IRL, but on the internet, it used to be the natural state of things. Let’s just go back to that. As you point out, it’s far easier to moderate even one big individual blog than the entire world.

    • eyeballfrog says:

      Few things would make me happier than to see social media die. Unfortunately I can’t see a way to make it happen.

      • Virriman says:

        Not sure if I want to see big social media die, but I am very curious about what it would take to make that happen if the government isn’t going to do it. More generally, what can break apart a large and powerful organization that benefits from network effects besides a larger and more powerful organization?

        • deciusbrutus says:

          A smaller, more powerful organization that has and is willing to use nuclear weapons indiscriminately?

    • onyomi says:

      “‘We Will not Repeat the Mistakes of the 2016 Election,’ Vows Nation Still Using Internet.”

    • vV_Vv says:

      Let’s not even take into account that the rules about not having paper or phones on-site is probably not applied to Facebook employees themselves, or that Facebook as a company is guilty of far worse than violating the privacy of only one of its users (it violates even the privacy of non-users) :

      Indeed. I’d bet that any Facebook sysadmin or cybersecurity engineer could easily steal petabytes of private user data if they wanted, yet we are supposed to believe that the company is concerned about minimum-wage contractor employees scribbling notes on gum wrappers? I think a better explanation is just managers at the contractor business power-tripping on their disposable employees.

      • stucchio says:

        I have worked with both devs and call center workers, and what I’ve learned is that the call center workers are higher risk. I know this on the basis of actual measurements; if you’ve hired 100 call center workers, you can expect 1-5 to misbehave.

        Misbehavior might involve things like falling in love with someone you called (“ma’am, your voice is so beautiful, now I’m going to call you 5x/day and tell you I love you”), becoming overzealous in their job role (“ma’am, I’ll kill you if you don’t pay the bill”, or cf. the Wells Fargo fake account scandal), as well as general office nonsense (sex in the lactation room, bathroom ‘accidents’, threats of workplace violence, sexual harassment).

        I have never encountered a dev team with anywhere near the same high level of bad behavior. I had a colleague who was convicted of rape but he was smart enough to do it far away from work.

        • albatross11 says:

          One plausible explanation for that (I don’t know how much it explains) is that the devs have a lot more to lose–if I’ve got a profession that pays well and gives me lots of interesting things to work on, screwing it up in a way that left me as a bottom-tier worker would really suck. OTOH, if I’m already a bottom-tier worker taking a crappy call-center job, getting canned from this job and maybe blacklisted from this company still leaves me in a pretty similar state.

          I have a friend who used to work for a major cable company overseeing some of their call-center people who handled major complaints (the stuff the bottom-tier call center people didn’t have a script for). Her opinion of her company’s contractors and bottom-tier employees was spectacularly low, after a few years of cleaning up after their disasters in various ways.

          • Ninety-Three says:

            There is also an obvious story where causation flows the other way. If we imagine that some people are born with an inclination towards bad behaviour while others have the foresight/willpower/whatever to stay saintly in the workplace, we should not be at all surprised that the well-paying, high status jobs filter out miscreants who then accrete at the bottom rung of the career ladder.

        • Simon_Jester says:

          There may also be a more ‘mercenary’ attitude coming out of that condition of poverty and willingness to take on a very punishing job for low wages.

          Compared to the Facebook engineer, the Facebook moderator is going to be more inclined to think, “I have nowhere near enough money to be comfortable, I don’t get enough endorphins or dopamine or whatever-the-happy-chemical is from my daily activities to feel anything other than gritty dull grayness, everything’s terrible, I might as well impulsively grab whatever little tidbits of money or happiness come my way.”

          Because nothing promotes short-term thinking faster than sheer, painful scarcity, of having too few places to count on for money or pleasure in one’s life.

      • Nornagest says:

        If I were an “investigative” “reporter” working for one of Valleywag’s furiously rebranding castoffs, and I had a budget of, say, $1000 for bribes, I bet I could get a lot more mileage out of offering that $1000 to contract moderators making $24K a year than to devs making $140K a year. They need the money more, they have less to lose, and they’re probably less conscientious as a population (since that correlates pretty well with income).

      • tossrock says:

        I’d bet that any Facebook sysadmin or cybersecurity engineer could easily steal petabytes of private user data if they wanted

        I doubt that. At a purely practical level, a petabyte is a lot of data. That’s 250 completely full 4 TB drives, and probably in the vicinity of 150 kilograms of data. It’s also at least $20,000 to buy all of them. I feel confident someone would notice if you showed up to work with a wheelbarrow full of HDDs and started methodically filling them one by one.

        “But what about the cloud?” you ask. Well, I am also confident that punching a hole through the ACLs separating Facebook’s internal network to S3 or whatever, and then initiating a petabyte worth of outbound traffic would ring serious alarm bells in their NOC – if it’s even possible from a network topology standpoint. And of course, a petabyte in S3 is also ~$20,000… per month.

        And from a privacy standpoint, having worked with petabyte scale volumes of PII in the past, I would expect that Facebook keeps most user data in a non-identifiable form for consumption by eg advertising pipelines. Joining that data to the actually identifiable user would probably require special privileges that I wouldn’t expect the average sysadmin to have. Then again, Facebook does play pretty fast and loose, so who knows.

        So, is it possible that someone from within Facebook could steal a bunch of user data? Yes, but it probably wouldn’t be a petabyte, and if it were, they would probably be at a level beyond just “average sysadmin”.
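
        (The back-of-the-envelope numbers above are easy to reproduce; the drive size, weight, and prices are rough assumptions in the same spirit as the comment, not exact figures.)

        ```python
        # Rough arithmetic behind "a petabyte is a lot of data" (all figures approximate).
        PETABYTE_TB = 1000           # 1 PB ~ 1000 TB
        DRIVE_TB = 4                 # assumed consumer drive size
        DRIVE_KG = 0.6               # rough weight of one 3.5" HDD
        DRIVE_USD = 80               # rough price of a 4 TB drive
        S3_USD_PER_GB_MONTH = 0.02   # rough standard-storage price

        drives = PETABYTE_TB / DRIVE_TB
        print(drives, "drives")                                              # 250
        print(drives * DRIVE_KG, "kg of drives")                             # ~150 kg
        print(drives * DRIVE_USD, "USD to buy them")                         # ~$20,000
        print(PETABYTE_TB * 1000 * S3_USD_PER_GB_MONTH, "USD/month in S3")   # ~$20,000
        ```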

        • vV_Vv says:

          “But what about the cloud?” you ask. Well, I am also confident that punching a hole through the ACLs separating Facebook’s internal network to S3 or whatever, and then initiating a petabyte worth of outbound traffic would ring serious alarm bells in their NOC – if it’s even possible from a network topology standpoint.

          Depending on how fast you go. Facebook servers are by their nature exposed to the public Internet, it doesn’t seem implausible that you could set up a side channel somewhere on the website, or the app API, or whatever obscure accessory service that we don’t know about, and have a bot outside Facebook’s internal network collect the data.

          And of course, a petabyte in S3 is also ~$20,000… per month.

          And how much could that data be worth?

          And from a privacy standpoint, having worked with petabyte scale volumes of PII in the past, I would expect that Facebook keeps most user data in a non-identifiable form for consumption by eg advertising pipelines.

          Given the regular data leaks, it doesn’t seem that user privacy is among their top concerns.

          • Majuscule says:

            I used to work in consumer data, and honestly you could probably corral the most lucrative data for a whole lot of people in a fairly modest amount of space. Personal info for the million-odd consumers registered by my old company fit on a reasonably beefy USB stick.

        • Dack says:

          I doubt that. At a purely practical level, a petabyte is a lot of data. That’s 250 completely full 4 TB drives, and probably in the vicinity of 150 kilograms of data. It’s also at least $20,000 to buy all of them. I feel confident someone would notice if you showed up to work with a wheelbarrow full of HDDs and started methodically filling them one by one.

          Alternatively, you could steal it gradually. If you fill (and empty) two 512 gig flash drives every day you’d have a petabyte in…under 3 years.

    • Walter says:

      “They try to swallow the world whole” is how you phrase “they show you stuff other people post”?

      ‘hubristic imperialist project’…of putting text, pictures and video on screens?

      Seriously, what is your gripe with social media? Nobody is forcing anyone to use it. People had the choice between the past and the present, and we can all see how that turned out. If you did restore the old internet it would just become the modern one again, much faster this time since we’d all know the way.

      • albatross11 says:

        I don’t particularly hate Facebook or Twitter, and certainly don’t want them banned or abolished, but I do think Twitter is probably making the world a worse place right now. The OODA loop of a lot of powerful people, especially media types, has been shortened to the point where people are routinely taking serious actions before they know what’s really going on, and that’s especially true among prominent media types. Reading comments from the blue-checkmark folks on Twitter is to consuming journalism from mainstream outlets as taking a tour of the slaughterhouse is to enjoying your breakfast sausage.

      • Seriously, what is your gripe with social media? Nobody is forcing anyone to use it.

        If you don’t use it, you are locked out of access to the “general will”, and this is important for democracy, hence the mainstream left’s concern about Russian bots, and the mainstream right’s concern about anti-conservative bias in removing posts. In a democracy, “the people” rule, but then you have to ask how the people know what to value and how to vote, and then we have to start being concerned about how “informed” the public are, and this concern has only been accelerated and magnified 1000x by social media.

        Democracy really sucks. It’s the least bad system, but it still really really sucks, and one of the ways in which it sucks is that it compels public opinion to be hyper-scrutinized to live up to some mythical idea of purity. If newspaper magnates were the old managers of democracy, then tech oligarchs are the new ones.

      No, when I say Facebook is trying to swallow the world whole, I’m saying that they currently have more than a billion users and are actively and aggressively trying to get more of them. Many of these users are unaware of what the internet is and don’t know that Facebook is a part of it, and it’s probably the only website they visit: https://qz.com/333313/milliions-of-facebook-users-have-no-idea-theyre-using-the-internet/ (these people didn’t have “the choice between the past and the present”)

        Facebook is not just trying to host “the national conversation” (to use the term Scott was recently using), Facebook is trying to host the global conversation. This is no small task, and they fall so spectacularly short of the job that they’ve been accused of being an accessory to genocide (see: https://www.theguardian.com/technology/2017/sep/20/facebook-rohingya-muslims-myanmar https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html).

        My gripe with social media is that they put themselves in the position of being structurally impossible to moderate and then their representatives whine about it being a difficult job. My gripe with social media companies is that they are dishonest weasels cynically exploiting their users and crying crocodile tears when people point this out.

        • Simon_Jester says:

          I think what’s being said here is that Facebook has a pretty clear aspirational goal of becoming One Platform To Rule Them All, where “Them” is pretty much every form of human interaction that can conceivably happen over the Internet. The only things they don’t show a sign of wanting to take over are the things that would be more trouble than they’re worth because of social mores (e.g. porn).

          And I can get behind the point that this is not an innocent motivation.

          There are some good reasons why many traditional philosophies call ‘greed’ a true and serious evil. Perhaps the best of those reasons is that greed can motivate a person to do things no sensible person would want done, but that are lucrative in the short term, or lucrative for a person who figures out how to profit from the advantages and turn the costs into externalities.

          So far, Facebook has been very, very good at profiting from the advantages of being the world’s largest medium of social interaction, and also very good at turning all the costs of its own operations into externalities. There doesn’t seem to be any upper limit on how much they’re willing to enable people to do, and there doesn’t seem to be any lower limit on how little accountability they expect us to put up with.

          Condemning this in flowery Chesterton-esque tones as “wanting to consume the world” seems fair to me, at least in a poetic sense.

      • PeterDonis says:

        “They try to swallow the world whole”? is how you phrase “they show you stuff other people post?”

        ‘hubristic imperialist project’…of putting text, pictures and video on screens?

        But Facebook doesn’t just do those things. They don’t just show you stuff. They set up a system where everybody goes to communicate with the people they want to communicate with, which just happens to have the teensy weensy side effect that now every one of billions of people is potentially exposed to everything every other one of billions of people is posting. That side effect is the cause of all these moderation woes they have.

        And they do it not to serve users, but to serve advertisers: users are the product, not the customer. Which means they have locked themselves out of the option of offering their users a way to communicate with the people they want to communicate with *without* the teensy weensy side effect–because their value to advertisers is their one global network.

    • theredsheep says:

      After thinking about it, it seems to me that FB is basically divided into three types of things. Two of those things mostly work, in my experience, and the third is a dumpster fire (and the dumpster is outside a poorly run seafood market and filled with a mixture of rotting fish, dead rats, and seagull poop). They are:

      1. Facebook groups, where people with a shared interest get together to discuss that interest, and nothing else. They can moderate their own groups, and if the shared interest is something terrible, it’s compartmentalized so random bystanders aren’t exposed to the group’s horrible opinions. You can leave at any time if you don’t like the way the group behaves. I like groups; they’re relatively low-drama and often quite helpful.

      2. Pages, which are like groups but more one-way. You like X’s page, and the person in charge of X gives you information about X as it becomes available, and Facebook shows you that information automatically as long as the person in charge keeps throwing money at them. In practice, you pretty much have to keep clicking on the page to guarantee you aren’t missing something, but again, it’s not hard to manage the grief here. You unfollow or unlike if the page bothers you or becomes boring, and you don’t have to see it.

      3. Facebook proper, where you “friend” distant family members, coworkers, old college roommates, etc. and discover that their interests are some mixture of boring, evil, and wrong. This is where basically all the drama happens, nothing of particular social utility occurs, and Facebook has to play moderator itself because it’s the only one in charge. Also, that creepy “Momo” hoax.

      So, basically, kill 3 with fire, or at least restrict it to people sharing cute pictures of their kids or things that are really inane. The parts of Facebook where you can arrange to buy stuff or discuss writers are fine.

      • Randy M says:

        Some of us actually do use facebook to exchange photos with friends and family.

        The mistake was dragging politics into that.

      • Politics is automatically dragged into everything. The only question is how to manage the flames. Maybe people should go back to using – email – to share things.

      • DinoNerd says:

        My suspicion – non-user – is that things go wrong when there’s an algorithm that decides what one actually sees, and the goal of that algorithm is to increase engagement.

      • PeterDonis says:

        They can’t kill 3 because it would eliminate, or at least drastically reduce, their value to advertisers. They have locked themselves out of the option to kill 3 and go with 1 and 2 by not selling their services directly to users, making them the customers instead of the product.

      • AnonYemous2 says:

        Thing is, 1 is Reddit, 2 is Twitter, and 3 is…only Facebook, basically. If they get rid of that, they’ll be directly competing with better services – maybe they’ll have some advantage, but the big advantage Facebook has is that it has all the people you know. If Facebook becomes mostly about meeting new people, why’s it better than Reddit or Twitter?

        • The 1,2,3 explanation sort of explains why I find Reddit and Twitter correspondingly less objectionable than Facebook. Even the name lets you know it’s evil.

          The internet should be for esoteric conversations with weirdos you’ll never meet. If grandma wants to share cat pictures she can do it in person.

        • theredsheep says:

          I think 3 is bad not because it’s people you know, but because it’s an utterly unstructured conversation and there are no supervisors save the company itself (which has no way to sift through the firehose torrent of new data it generates). I imagine you could fix it by somehow imposing a structure on it–making everything groups themed around a subject somehow, with moderation outsourced to the zookeeper for each group. That probably has lots of complications I haven’t thought of, though.

          I don’t use Twitter, but I’m given to understand that it has similar issues.

          • brianmcbee says:

            Twitter is even worse. In some ways people use it like Facebook number 3, except that everybody in the world can see what you posted. Your in-jokes with your friends and fellow tribespeople are not going to look good to outsiders when they’re stripped of all context.

      • 10240 says:

        3. can also be “moderated” as you can unfriend/unfollow people who post stuff you don’t want to see. Indeed, I find it weird if some people insist that the only way to deal with shit from people they personally know is to have Facebook remove their posts.

        • theredsheep says:

          It’s a politics thing. Especially when you get into friends-of-friends territory and you have your reactionary bigot friends tussling with your liberal zealot friends on your wall. And unfriending them has real-world consequences that complicate things. Facebook makes every day into an online Thanksgiving. The wonders of technology!

      • Taradino C. says:

        Some groups are low-drama. Others, not so much.

    • J Mann says:

      But there’s a simple solution, which is that Facebook should just stop existing.

      I think this is a big part of it. People aren’t going to have sympathy for Facebook being caught in a Catch-22 if they think Facebook is a net bad that should cease to exist.

      I’m not clear on how moderation or privacy would be better if we had a bunch of merely large services (Twitter, Tumblr, Instagram, LinkedIn, etc.) and people used aggregators to see what their contacts were up to, or if people used some kind of distributed P2P service that no one was in charge of, but I do think the basis of the Facebook coverage is that Facebook is bad enough that any criticism should stick.

    • But there’s a simple solution, which is that Facebook should just stop existing. Kill it with fire, burn it down and pour salt on the ruins. We can manage without Facebook (which is why your comparison with psychiatric hospitals is inappropriate).

      I’m beginning to agree with this a tiny little bit. As much as I hate Facebook, one thing I’m worried about is calls to make Facebook a public utility or even nationalize it to “fix” the problem, since that would freeze the status quo in place. However, I can live with government action here if the government behaves more like a warrior with a sword than a bureaucrat who wants to take over management.

      Maybe all large-scale social networks should be destroyed? Social media undermines the legitimacy of democracy, and in turn increases polarization by making the entire country fight over the bias of these gigantic centralized digital meeting places. I understand this would be drastic, the knock-on effects may be too large, and it may be impractical, but when I look at every negative effect produced by the centralized normie meme prisons we call social media, I begin to desire something this decisive.

      I don’t know how this could work in practice, however, other than to have a law that sets a maximum number of users for a social media platform. Of course, one of the problems is that this could work for Facebook and Twitter but wouldn’t work for e-commerce, as Amazon and eBay need economies of scale to function. You might then need another law governing how e-commerce platforms allow users to communicate, to avoid them becoming proxy social media.

      The “archipelago” you once spoke about would be a utopia IRL, but on the internet it used to be the natural state of things. Let’s just go back to that. As you point out, it’s far easier to moderate even one big individual blog than the entire world.

      It still is like that for us. All those tiny forums and communities from the 2000s? Most of the ones I visited still exist, and I still visit them. The real difference is for the normals, who, through the network effect, pile together into these huge mega-platforms. That means we have to fight over the political bias of those platforms and talk about managing them (which only threatens freedom itself due to knock-on effects), because if we don’t, whatever bias develops will lead to distortions of democracy, calls for elections to be undone, and the rise of tech oligarchs. We’re only seeing a tiny hint of that now.

      It’s really the normie problem.

      • ilikekittycat says:

        I’m not arguing one way or the other for the public utility/government takeover approach, but ironically, it would probably narrow the scope of what Facebook/Twitter/social media is. Having everyone understand “this is unquestionably under the thumb of the US government and their standards are being enforced” would give Facebook/Twitter/social media a lot less leeway over the global conversation & norms than the ad hoc “whatever pushes adlinks” corporate version.

        • brianmcbee says:

          I can’t imagine the public utility/nationalization thing could possibly work. Imagine these social networks with all moderation removed. I know there are some free-speech hardliners who would see this as a good thing, but to me this is prima facie bad.

        • po8crg says:

          It would badly damage them outside the US. I know people talk about “Russian bots” now, but imagine if Facebook was formally a part of a foreign government.

      • @ilikekittycat

        I’m not so sure. People already know corporations are incredibly biased, but that doesn’t change much, so I doubt the background information that the United States controls the internet would change much. It’s the hidden bias that moves the masses, which is why the conversation is all about bots and algorithms controlling feeds. It could possibly have even more leeway because there would be a unity between the physical force apparatus of the US government with the perfect psychological control tool. The folks at the CIA would have a field day with algorithmic manipulation when no property barriers stand in their way.

        Another issue is that government control would essentially freeze things where they are now. Facebook is a giant propped up by network effects, but that doesn’t mean that it will last forever. All network effects ensure is that only one social media network of that size will exist at any one time; they do not define which social network that will be. If Myspace could give way to Facebook then maybe Facebook can give way to something else. You could say it’s different now because Facebook is truly unique in its scope, but network effects do not ensure permanent dominance. They make it so that the barriers to competition are much higher, because anyone who trickles over to a competitor will quickly get bored without their friends, but that only means that you require a certain critical mass to get a torrent. When Facebook does something to piss off normal people as much as libertarian fringe bloggers then it will fall, and we’ll see whether the new giant that catches all of those users will fare any better in its policies.

        Or maybe not… but I’d rather take that chance. I’d rather not make it an institution that is propped up by government finances. I don’t want Facebook to become just another government agency that lasts a century or more.

        The worst thing the government could do to Facebook is run it. The best thing it could do is crush it and anything like it.

    • Anon256 says:

      If Facebook didn’t exist people would just post more of the same stuff on Youtube and Twitter instead, and their mods surely have similar problems. The same problem will arise if there’s any way for people to post stuff that other people can see at all.

      • Reasoner says:

        Arthur Chu, of all people, suggested an interesting way to solve this problem. It is perhaps the only thing that Chu and Robin Hanson would agree on. Here is Chu’s post:

        https://techcrunch.com/2015/09/29/mr-obama-tear-down-this-liability-shield/

        Short version: With a small legal tweak, we could create a financial incentive for social media websites to take proactive responsibility for what happens on their platform. (The kind of thing I encouraged a FB engineer to do upthread.)

        If they fail at that, they get sued into oblivion. Who knows what happens afterward. But one possible outcome is an archipelago-like internet of small blogs and discussion sites, operating on a shoestring, that lack the deep pockets necessary to be attractive to lawyers. Basically the “old blogosphere” described in this tweetstorm.

        • A1987dM says:

          A proposal I’ve seen (on Popehat IIRC, though I can’t seem to find it at the moment) is to keep that liability shield, but only for companies who agree to be bound by the First Amendment as though they were part of the government.

          • Reasoner says:

            When people say that “Facebook should just stop existing. Kill it with fire, burn it down and pour salt on the ruins”, their concerns are rarely motivated by insufficient First Amendment protections on the part of Facebook. The issue is that the marketplace of ideas has become dysfunctional.

            Consider Scott’s recent RIP Culture War Thread post. This post illustrates that the greatest threat to free speech today is not the government or social media platforms, it’s vigilantes using extralegal enforcement methods to try & suppress ideas they don’t like. Under Chu’s proposed regime, it’d be possible to profitably counter this vigilante action by suing social media websites for hosting defamatory statements about Scott. De facto freedom of speech increases and the marketplace of ideas gets a little more functional. Lawsuits against bad actors like Gawker improve de facto freedom of speech because without Gawker journalists finding creative ways to misinterpret everything everyone says, people feel more free to speak.

          • John Schilling says:

            Under Chu’s proposed regime, it’d be possible to profitably counter this vigilante action by suing social media websites for hosting defamatory statements about Scott.

            It would also be possible to profitably sue Scott for hosting defamatory statements about “vigilantes”.

            In any event, Scott almost certainly doesn’t have the bandwidth to sue every forum that would host the sort of vigilante that would defame Scott, or even to defend himself against baseless lawsuits by vigilantes. Meanwhile, places like Reddit will necessarily have large legal staffs with great expertise in deflecting lawsuits, and the marginal cost of one more letter saying “If you sue us we will ruin you” is going to be pretty negligible.

            It’s possible that someone like Peter Thiel might step in to finance Scott’s defense in a case like this. But in general, “Let’s make it so people can sue if they are the victims of injustice!”, rarely works out in favor of the little guy who has been on the wrong end of an injustice.

          • Reasoner says:

            Scott isn’t rich. Social media websites are.

          • Paul Zrimsek says:

            Scott isn’t rich. Social media websites are.

            As are, presumably, the easy-peasy software-filtering-plus-liability-insurance services which you’ve suggested elsewhere in the thread– which in any case will be dropping Scott like a live grenade once the vigilantes come after him. (At what point did the vigilantes’ motivation change from deplatforming to making money, anyway?)

            I’ve noticed a common foible among those who want to plant landmines in this or that corner of the public square to encourage us to stay off the grass: it never occurs to them that normal people will react not by watching their step and hoping for the best, but by avoiding the danger zone altogether. There might still be a few big platforms in this brave new world whose business models are profitable enough to let them write off the monster legal expenses– but the only small-timers who won’t flee for their lives are those who are too angry, addled, or immature to give a damn about consequences. Say hello to your new Curators of Quality Content.

          • moonfirestorm says:

            A proposal I’ve seen (on Popehat IIRC, though I can’t seem to find it at the moment) is to keep that liability shield, but only for companies who agree to be bound by the First Amendment as though they were part of the government.

            At first glance, wouldn’t this give the same legal trouble the liability shield is trying to avoid? Now, instead of defending against defamation claims, you have to defend against alleged First Amendment violations. There’s still a pretty easy mechanism to get a website owner in court any time someone wants him there.

            I don’t think you can have an effective moderation policy without at some point bumping up against the First Amendment, so this still creates plenty of cases that are plausibly court-worthy.

          • 10240 says:

            @moonfirestorm The proposal is that if they want the liability shield, then they should have no moderation at all (except perhaps to delete illegal content that gets reported). I presume the point is that the liability shield is based on the idea that they are an automated intermediary (like ISPs) rather than a publisher, and thus they obviously don’t have the capacity to check the content of the posts. Then they should act like it.

            This proposal would be problematic if it applied to (presently) moderated blogs, forums etc. It wouldn’t particularly hurt if Facebook wasn’t moderated at all at the site level (moderation would only be done at a lower level, by groups, pages etc.). However it would be a problem if forums couldn’t be moderated, and in the absence of a liability shield, even forum comments might create liability.

          • moonfirestorm says:

            @10240

            Ahh, that makes more sense.

            I would think you’d need moderation for spam at least in addition to illegal activities, but maybe that can be handled with lower levels of moderation. Is there a risk that those lower levels of moderation would become liable for the content instead, since they’re now the arbiters of what remains on the site?

            Is spam protected speech? I guess it would be commercial, but not all spam is advertisement, and it would at least need some interpretation to rule it commercial. I imagine the spam provider would have difficulty filing suit and getting any sort of serious consideration if they did, though.

          • Aapje says:

            I don’t see how such a law would necessarily ban moderation of legal content, rather than site-wide moderation by the social media company itself, which is not the same thing.

            The social media company could then still allow moderation by the users themselves in certain spaces, like on Reddit; or they could have optional opt-in moderation by the social media company for certain spaces.

            If the law is what seems most sensible to me, they just couldn’t moderate all spaces with a single policy that is stricter than what the law demands.

        • The Nybbler says:

          Removing the liability shield is “destroy the village in order to save it” stuff. If service providers are legally responsible for everything their users say on the service, they are basically required (at “lawyerpoint”) to censor with a very heavy hand, allowing only the most bland, anodyne, socially approved postings. And they probably can’t do it with after-the-fact moderation, either, because removing a defamatory post doesn’t make the defamation go away; the tort has already happened even if the removal limits the damage. That means posts will have to be pre-approved. I can’t see any large useful forum surviving such a regime.

          (and that includes this comments section)

          • Reasoner says:

            I think you are overstating your case. Consider the Gawker lawsuit. Gawker has no liability shield for their articles because those articles are written by employees of Gawker. But it still took years of journalistic misbehavior before Gawker got sued. And there are many misbehaving journalists who haven’t yet been sued, despite their lack of a liability shield.

            To see what happens when the liability shield is removed, all we have to do is look at other countries where the liability shield is not present. As Chu writes:

            Far from turning us into China or North Korea, it would bring the United States into line with every other developed country in the world, including our close allies in Canada and the UK. It would remove the competitive advantage that keeps most social media companies in the US, despite the talent and capital in other nations. This advantage is a law that makes us a liability haven.

            I don’t actually know much about the blogosphere in the UK or Canada, but I would be interested to learn about it.

            In terms of pre-approval, I contend there are many, many steps that social media platforms could take to improve the quality of discussion on their platform and thereby reduce their risk of a lawsuit under Chu’s proposed regime.

            A simple step would be to screen posts using a machine learning algorithm which attempts to predict the likelihood of a lawsuit. If the algorithm thinks the post is provocative and untruthful, it could go through a manual review, or the user could be charged a fee calculated based on the expected legal cost of publishing the post.
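
            To make that concrete, here is a rough sketch of what such a screening step could look like. Everything in it (the hypothetical risk model, the thresholds, the fee rule) is invented for illustration; it is not a description of anything any platform actually runs.

                # Rough sketch only. "risk_model" stands in for a hypothetical trained
                # classifier that scores a post's lawsuit risk; the thresholds and fee
                # formula are arbitrary placeholders, not anyone's real policy.
                def screen_post(post_text, risk_model, review_queue,
                                auto_publish_threshold=0.05):
                    """Decide whether to publish, charge a fee, or hold for manual review."""
                    lawsuit_probability, expected_damages = risk_model.score(post_text)
                    expected_legal_cost = lawsuit_probability * expected_damages

                    if lawsuit_probability < auto_publish_threshold:
                        return {"action": "publish", "fee": 0.0}
                    if expected_legal_cost < 50.0:
                        # Borderline: publish, but pass the expected cost on to the poster.
                        return {"action": "publish", "fee": expected_legal_cost}
                    # High risk: hold the post for human review instead of publishing it.
                    review_queue.append(post_text)
                    return {"action": "manual_review", "fee": 0.0}

            The specific numbers don’t matter; the point is that once liability is priced, the platform has a direct financial reason to route borderline posts to a human instead of simply maximizing engagement.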

            Platforms aren’t taking those steps because they’re supported by ads, and the more heavily users engage with the platform, the more ads they see. We need to change the incentives.

          • The Nybbler says:

            I think you are overstating your case. Consider the Gawker lawsuit. Gawker has no liability shield for their articles because those articles are written by employees of Gawker.

            And Gawker’s editors had control over what could be published, and lawyers to allow them to avoid torts. It wasn’t even a defamation tort that got them. If sites have liability for defamation posted by third parties, they need to control those posts, which means they must be not just moderated, but moderated before posts are made visible.

            To see what happens when the liability shield is removed, all we have to do is look at other countries where the liability shield is not present.

            OK, where’s the Reddit, Facebook, Medium, Twitter, Slashdot, WordPress, Livejournal, etc, based in those countries?

          • Reasoner says:

            And Gawker’s editors had control over what could be published, and lawyers to allow them to avoid torts.

            Can you give me any examples of Gawker refusing to publish something because it might make them liable? The stories I remember are just the opposite: Gawker’s editors blowing raspberries at people who asked them to take stuff down. Gawker was an extremely brazen publication, yet it still took a long time for their behavior to catch up to them.

            If sites have liability for defamation posted by third parties, they need to control those posts, which means they must be not just moderated, but moderated before posts are made visible.

            It would be good for the internet, and for society as a whole, if the internet was better moderated. And it could be done in a way that didn’t inconvenience small sites substantially. See the software as a service proposal in my response to MTSowbug.

            For a big company like reddit, Facebook, etc. there is a serious competitive advantage to being located in the US due to the volume of posts on their platform. That’s why big social media companies choose to locate in the US. That doesn’t mean that online communities located in other countries are unviable. For example, a quick Google search yields this discussion of popular forums based in the UK.

          • The Nybbler says:

            Can you give me any examples of Gawker refusing to publish something because it might make them liable?

            Since I am not and never was a Gawker insider, I would not know of such things.

            It would be good for the internet, and for society as a whole, if the internet was better moderated.

            Which is just saying you’re willing to accept the damage.

            And it could be done in a way that didn’t inconvenience small sites substantially.

            No, it cannot. Liability for third party posts is a massive burden. It means either the owner of the forum accepts being wide open to a lawsuit that they should lose, any time a user cares to make this so by defaming someone, or that they must have a human moderator, trained in libel law, moderate each and every user posting before it appears.

            The UK actually DOES have limitation for liability for operators of websites, but only for identifiable users. Which means you can’t have anonymous posting in the UK without exposing yourself to unlimited liability.

          • Reasoner says:

            The UK actually DOES have limitation for liability for operators of websites, but only for identifiable users. Which means you can’t have anonymous posting in the UK without exposing yourself to unlimited liability.

            A quick trip to thestudentroom suggests they allow pseudonyms. (I chose them because I’ve seen their site in Google search results before. BTW, I’ve never heard of any internet nastiness coming out of thestudentroom.) It appears pseudonymous forums are still viable in countries without a liability shield.

            Liability for third party posts is a massive burden. It means either the owner of the forum accepts being wide open to a lawsuit that they should lose, any time a user cares to make this so by defaming someone, or that they must have a human moderator, trained in libel law, moderate each and every user posting before it appears.

            I proposed a software as a service company to solve this problem in this comment.

            If we remove the liability shield on discussion platforms, then over the years that follow, we will start to accumulate case law regarding when damages will be awarded against discussion platforms in practice. Precedents may be set where if the site’s administrators can show they were making a reasonable effort at keeping defamatory content off their site, they won’t be liable for damages. Social media companies have deep pockets, and they’ll hire top lawyers and try to set precedents that are favorable to them. But this will still be an improvement, because now they have an incentive for quality, alongside existing incentives for engagement and ad revenue.

          • The Nybbler says:

            I proposed a software as a service company to solve this problem in this comment.

            Outsourcing doesn’t help; the cost is still there. Automatic review is not practical, certainly not against malicious humans. Slipping defamatory statements past an automated system will be trivial. (Google/Jigsaw’s analysis system for hate speech was an example of this sort of system, and it was a bad joke)

            If we remove the liability shield on discussion platforms, then over the years that follow, we will start to accumulate case law regarding when damages will be awarded against discussion platforms in practice.

            Case law is built on corpses. Not literal ones in this case, but platforms destroyed and people forced into bankruptcy as a result of lawsuits (both successful and unsuccessful, as the process is punishment). The pile of corpses serves as a lesson of its own, aside from the details of the case law: don’t get into this area unless you’re large enough to weather a lawsuit and can hire lawyers to minimize it. If you want internet speech limited to a few highly censored forums run by large corporations, this is great. Otherwise, not so much.

        • MTSowbug says:

          The Electronic Frontier Foundation argues that the liability shield is more valuable for small blogs and discussion sites than it is for megasites like Facebook. The financial incentive you describe would act as an upfront cost for starting a forum, which Facebook can afford and the individual cannot. Under current law, I can start a forum with merely the computer in my bedroom. Without the liability shield, I would require the time and/or money for a moderation team. Or maybe I’d need to take out some sort of insurance policy against lawsuits. Or maybe I’d need to host the forum decoupled from my identity, which would be illegal.

          Regardless of the particulars, the hypothesized result of removing the liability shield is a chilling effect that increases internet consolidation. It would raise the sea level for the archipelago.

          • Reasoner says:

            First, I think you’re overstating the threat of lawsuits. See the point I made upthread about just how far Gawker had to go before they finally got sued, despite their lack of a liability shield.

            Second, I don’t think this upfront cost would be significant. Someone could start a software as a service company where before a post is made on my forum, I send the content of the post to the company’s API. Then the company uses some combination of manual and automated review to estimate the likely legal cost of publishing the post, and sells me insurance. (One strategy would be to manually review a random sample of everything that’s written on my forum, in order to get a general sense of how likely discussions on my forum are to trigger lawsuits. Note how this incentivizes me to create a forum culture where defamatory discussions seem unlikely to occur.) The company could build a plugin for common discussion platforms like Discourse, meaning point-and-click installation. Given easy integration, my bill to the software as a service company will scale linearly with discussion volume of my forum. In economics terms, it’s not an upfront cost, it’s a variable cost. No advantage accrues to big players.
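
            As a sketch of how light the integration could be, the plugin might do something like the following before a post goes live. The endpoint, the response fields, and the premium threshold are all imaginary; the only claim is that the forum makes one API call per post and gets back a price.

                # Hypothetical forum-side plugin: send each post to an imaginary
                # moderation-insurance API and publish only if the quoted premium is
                # acceptable. The URL and response format are placeholder assumptions.
                import requests

                INSURANCE_API = "https://moderation-insurer.example/v1/assess"  # placeholder

                def submit_post(post_text, api_key, max_premium_cents=50):
                    response = requests.post(
                        INSURANCE_API,
                        json={"text": post_text},
                        headers={"Authorization": f"Bearer {api_key}"},
                        timeout=10,
                    )
                    response.raise_for_status()
                    quote = response.json()  # assumed shape: {"premium_cents": int, "hold_for_review": bool}

                    if quote.get("hold_for_review"):
                        return "held"      # the insurer wants a human to look at it first
                    if quote["premium_cents"] > max_premium_cents:
                        return "rejected"  # too expensive to insure this particular post
                    return "published"     # pay the per-post premium and let it through

            Since the bill scales with posting volume rather than requiring an in-house legal team, the cost stays variable rather than fixed, which is the point of the argument above.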

        • Anon256 says:

          Would Google still be shielded from liability for the small blogs and discussion sites it enabled you to find, or would it have to employ a mod team who read horrible things all day to keep them out of search results? Or would Google also get sued into oblivion and people just would never be able to find anything (except sites that could afford real-world advertising or guessable URLs)?

          At any rate, I don’t see how this addresses the fundamental dilemma. One of the following must happen:
          1) People will post horrible horrible things on the internet where they can be found, spreading harmful lies and traumatizing each other.
          2) Huge armies of people will have to read through lots of horrible stuff to censor it, and sometimes make the wrong decision, and get traumatized and occasionally end up believing harmful lies.
          3) It will be impossible for people to post things on the internet where they can be found at all.
          Any realistic solution is likely to involve some mix of all of the above that nobody will be entirely happy with.

          (I suppose when AI gets good enough it can replace the censors and maybe solve this problem, though that also sounds like a worrying outcome.)

          • Reasoner says:

            You’re missing solution 4, which is: run a forum where people say truthful and reasonable things because that is the culture of the forum. There are many ways this could be achieved. They are unexplored because social media companies prioritize engagement (ad revenue) over truthful and reasonable discussion.

    • Ninety-Three says:

      But there’s a simple solution, which is that Facebook should just stop existing. Kill it with fire, burn it down and pour salt on the ruins.

      Who exactly should do the killing and salting? I am entirely with you on the notion that it would be nice if Facebook fell off their horse and into a spiked pit, but I get very afraid when you start advocating for the existence of someone with both the power and inclination to give them a push.

    • j r says:

      But there’s a simple solution, which is that Facebook should just stop existing. Kill it with fire, burn it down and pour salt on the ruins. We can manage without Facebook (which is why your comparison with psychiatric hospitals is inappropriate).

      How is this a “simple solution?” It’s like saying that the simple solution to drug addiction is just to stop dealers from selling. Good luck with that.

      Moreover, any institution with the unilateral discretion and the ability to shut down Facebook, would by definition be more dangerous than Facebook ever was.

      • It’s simple in that it’s easy to formulate as an end goal. Similarly, although it is very hard to quit drugs, it’s pretty easy to understand quitting as an end to be achieved. I’m certainly not suggesting that the dismantlement of Facebook is something that can happen at the snap of a finger.

    • WashedOut says:

      Agree. When organisations get too big, too complex and too tyrannical they ought to collapse, and Facebook’s expiry date is well and truly past. The silver lining for society is that people are quitting by the millions; Zuck is drowning in lawsuits; and every month a new damning revelation about the company’s appalling behaviour emerges, forcing more people to quit.

      Facebook’s latest pathetic ruse is to develop a ‘cryptocurrency’. My only hope is that the company’s annihilation occurs before they have a chance to ruin yet another aspect of modern life.

  8. habu71 says:

    As for why the Facebook moderators don’t believe the anti-conspiracy theory information rather than the conspiracy theory information, my guess is that employee selection effects are a large factor. It seems quite plausible to me that the kind of individual who is more likely to get a job as a Facebook moderator is also more likely to believe in conspiracy theories. Now, is selection bias responsible for the entire observed effect? No idea. But I would be surprised if, assuming such an effect exists, employee selection bias wasn’t a noticeable chunk of the cause.

    • Saint Fiasco says:

      It could be something more simple than that. People don’t report the debunkings to the mods, so the mods don’t see them.

      Then again, it could also be something more insidious. If the moderators see lots of posts like “The evil MSM doesn’t want you to know this!!!11!” and are asked to evaluate whether they should be moderated, they can’t help but notice that they themselves are censors working for the media.

      It’s like how Chinese people sometimes fall for Western anti-vax conspiracies. Sure, the government says it’s bullshit, but they already know government hides important information from them all the time.

  9. Kuiperdolin says:

    Or is this whole phenomenon just an artifact of every large workplace (the article says “hundreds” of people work at Cognizant) having one or two conspiracy buffs

    In my experience there are some (more than one or two) at any workplace, but at the normal ones they won’t proclaim their theories out loud. It’s the kind of stuff you find out when there’s an out-of-office dinner and your teammate starts ranting about the ZOG after two drinks.

  10. johan_larson says:

    Much of the crappiness of that job seems to be caused by the drive to do the content moderation on the cheap. Presumably the job would be less crappy if the moderators had half the quota and double the budget, giving them more comfortable surroundings and work rules, and time to actually think about things and discuss issues among themselves. Quality would presumably improve that way too, particularly if you spent part of the budget on training. I wonder how much FB spends on content moderation yearly. An appreciable fraction of their income, or just noise?

    • JPNunez says:

      Yeah, it’s weird to me to try to blame The Verge for this, as Facebook and the subcontractors are the ones forcing these conditions on their own employees.

      What’s the Verge supposed to do? NOT report the subhuman conditions the mods work in?

    • ilikekittycat says:

      +1 to the general “unpaid/low paid moderation causes a lot of internet problems” intuition, although I don’t have any grand idea of how to fix things. The current model of “any IT worker/experienced computer-using normie contractor can be trained to fit the moderator cog” seems to have not worked out very well

      It seems like the extremely online rotten.com/somethingawful poster of yore who shares goatse, shrugs off ISIS beheading videos, and doesn’t flinch at gory horror movies has a “type” of brain made for specialized work that isn’t being exploited by the market yet. Between the moderation sort of PTSD and the unexpected drone-pilot PTSD, it seems like you need a certain kind of person who can compartmentalize awful things they see on a screen and keep going.

  11. Baeraad says:

    The Verge brings this up as an example of the totalitarian and dehumanizing environment that Facebook moderators experience. But I imagine that if an employee had written down (or used their phone to take a picture of) some personal details of a Facebook user, The Verge (or some identical publication) would have run a report on how Facebook hired contractors who didn’t even take basic precautions to protect user privacy, contractors who irresponsibly let employees keep cell phones around your precious private data.

    Even with my considerable cynicism I find it hard to imagine journalists drumming up outrage around allowing employees to use pen and paper. And even in my darkest moments I can’t picture articles arguing that Facebook practically invited employees to take down sensitive information by negligently failing to outlaw gum wrappers in the office.

    Admittedly, my cynicism is sufficient to imagine journalists drumming up outrage against employees making off with private data without offering any suggestions for how Facebook was supposed to stop them from doing that, exactly, and the Facebook management desperately trying to keep that situation from coming up in the only way they could think of.

    If it’s true, how come more people haven’t been infected?

    Because most people don’t sit around reading conspiracy theories all day, every day? (yeah, there were nine comments ahead of me when I started typing up this reply, and I think at least half of them mentioned this, but all the same…)

    How come Facebook moderators don’t believe the debunking of the conspiracy theories instead?

    Because putting an idea in someone’s head is easier than taking it back out? Again, not that I’m the first to note that even in the very short comment thread that has had time to form so far…

    Which does make me wonder, though, whether the best way to combat conspiracy theories might be to not dignify them with a response but rather to talk incessantly about how amazingly sane and normal everything is. The more people hear that the world makes sense and everything is pretty much the way it looks, the more firmly they’ll hold an idea that is incompatible with most conspiracy theories.

    (there is the problem that conspiracy theories are usually sexier than reality, though. And also, as Scott has noted before, on rare occasions conspiracy theories actually turn out to be correct. But still)

    • Conrad Honcho says:

      there is the problem that conspiracy theories are usually sexier than reality, though

      And it’s much easier to write about the conspiracy theories, because the space of fiction is much larger than the space of truth. My son, who was four at the time, and I were talking about the moon one day, and he was shocked to learn people had been there. So I went to YouTube and searched “moon landing” for a video to show him. One video of the moon landing. Pages and pages and pages of “the moon landing is a hoax” videos.

      • Walter says:

        Also, like…hrrngh, finding one fool is much more valuable than finding a hundred sensible folks, in terms of parting people from their money.

        It is why Nigerian scam emails are so obvious, right? Anyone who bails at the obvious errors in the first email would have aborted the process somewhere down the line anyway, they are better off not wasting the time.

        If you post the moon landing, scientific master achievement of the human race and blah blah blah, well, you’ve found a lot of vaguely patriotic and pro science people. Good for you. But if you find people who will share and like your debunking video…well, if they’ll do that, what else might they do? Seem like good addresses to keep/sell.

    • J Mann says:

      Even with my considerable cynicism I find it hard to imagine journalists drumming up outrage around allowing employees to use pen and paper. And even in my darkest moments I can’t picture articles arguing that Facebook practically invited employees to take down sensitive information by negligently failing to outlaw gum wrappers in the office.

      I think the point is that if it turned out there were several privacy violations by Facebook employees who were able to take out private information, journalists would pounce. They wouldn’t really care if Facebook said “we have effective but reasonable supervision; what were we supposed to do – outlaw post-it notes and gum wrappers?”

      • Tarpitz says:

        This is just absolutely bog-standard call centre practice. You’d see more or less exactly the same thing at the fundraising outfit I used to work for. I assumed it was mandated by regulations, to be honest, but if it’s the same on both sides of the Atlantic I guess that’s less likely.

    • Aapje says:

      Even with my considerable cynicism I find it hard to imagine journalists drumming up outrage around allowing employees to use pen and paper.

      Haven’t you noticed that journalists have a really strong tendency to want to blame systemic problems and/or demand systemic solutions when an incident occurs or things are imperfect?

      It’s also very typical that they will not actually point out what the systemic problem actually is or propose systemic solutions. If they actually did that more often, it would be a lot more obvious what kind of bureaucratic nightmare they are often pushing for.

      Right now they generate their own news:
      1. Incident or structurally bad thing happens
      2. Media outrage: “Do something”
      3. Something is done, but that solution is imperfect
      4. Incident or structurally bad thing happens due to the solution, continue with step 2

      • J Mann says:

        Yeah, the outrage wouldn’t be “you allowed your moderators to bring in gum wrappers,” it would be “your moderators violated member privacy, and you didn’t stop them.”

    • Randy M says:

      I wonder if it is related at all to the fact that moderators have to analyze the content to determine if it is acceptable. Like, maybe the ten pages that are obviously bad don’t have the effect, but then there are some that are borderline that they let through, which primes them to think they are harmless, and therefore true.

  12. hypnogoge says:

    And lawyers demonstrate a different way that strict rules can coexist with a humanizing environment; they have to navigate the most complicated law code there is, but I don’t get the impression that they feel dehumanized by their job.

    Perhaps people in the legal profession are biased towards making decisions that don’t make their working environment more dehumanizing and unpleasant. Equally, maybe legal firms are just very good at defending themselves against lawsuits.

    • McMike says:

      I dunno. I think lots of attorneys find out their jobs are mainly about protecting scumbags, inserting themselves between dysfunctional parties, and looking for ways to insert the knife with a smile on their face. My personal experience with attorneys is that many of them can lie effortlessly and then gaslight you until you give up and walk away.

    • aristides says:

      I went to law school and know enough lawyers to know that they do feel dehumanized by their jobs. I’ve heard lengthy rants about being forced to parrot asinine law codes. The benefits lawyers get, however, are large compensation, more prestige, and the potential to be promoted to a high enough level that they get to dehumanize others rather than be dehumanized themselves. Lawyers only complain about how much their job sucks to other lawyers, so they can keep up the prestige of the profession.

    • Simulated Knave says:

      …Dehumanization is rampant. Lawyers have an ungodly high suicide rate, are in many jurisdictions exempt from overtime laws, and you are often held personally responsible for things that aren’t your fault. Oh, and clients are unreasonable idiots who expect the impossible, don’t say thank you when you provide it, and will ask the same question over and over in the hope the answer changes. You are legally obligated to do everything possible for the client, and while legal organizations talk a good game about work-life balance they rarely adjust the rules to say “reasonable measures” re client service.

      This goes about quintuple for criminal law. Where, more than any other field, the rules you are enforcing were drafted by idiots responding to public outcry about a problem the public didn’t understand. Oh, and the clients are EXTRA difficult, and the consequences to them often even worse – so why aren’t you working more hours? You have an ethical obligation to do everything you can for them…

      Law is extremely dehumanizing, and anyone who tells you differently isn’t a lawyer. This doesn’t mean people can’t cope with it, but it’s not a mentally easy field.

      • Quixote says:

        This is one of the reasons lawyers are well paid. Because the job sucks and people wouldn’t do it without good reason.

  13. Somethatname says:

    In Facebook’s case I’m strongly inclined to see it as a leadership/executive problem. They intentionally grew too fast, which meant that things like moderation, which you generally want to expand in step with everything else, couldn’t keep up. So they went into an insane scramble, learned a lot of hard lessons all at once, overreacted, and came up with rules that actually make things worse.

    This might sound jaded of me, but at some level you have to accept that you’re going to have X% failure rate. If a patient can commit suicide in 15 minutes, I doubt a 10 minute rule will stop them from figuring out another way.

    I also suspect that a lot of these arbitrary rules, which are more about appearance than sanity, are made because incidents are only understood at a very superficial level. There’s a good chance that there were other factors which, if picked up on, would make life easier. For example, again in the suicide case, the patient was likely hiding something in their room, or opportunities for rapport were missed, etc. I understand why the system often takes the most literal and obvious interpretation of events, but it’s rarely the full picture. I know in the UK they’ve really cottoned on to narrative therapy, and I suspect that this is the reason.

  14. Applied Aspergers says:

    I wonder if any of them have had to watch this pizzagate debunking video: https://www.youtube.com/watch?v=8iTb0ta5_84

  15. Matthias says:

    Of course, in future this will all be done by perfect immortal machines.

    It should be possible to train neural networks to make fewer mistakes (though not zero) than humans in following Facebook policy.

    And I think that’s already mostly happening at YouTube. At least if I remember the outrage stories about YouTube moderation right.

    • MugaSofer says:

      Youtube’s attempts at automated moderation are widely hated by their users.

      • albertborrow says:

        Youtube’s attempts at automated moderation were created in response to advertiser preference, to avoid a situation like Disney pulling ads out of Youtube entirely. That’s a good reason for drastic action on Youtube’s part, but it doesn’t mean that drastic action is good for the platform – a large part of the most-viewed original content on Youtube is advertiser-unfriendly. The other auto-moderation problem Youtube has is copyright claims, which have gotten way worse over the last couple of months (for largely the same reasons). The problem is that Youtube’s copyright claim system works – it keeps the big corporations happy, which keeps the platform online. See the video by Hank Green (a VidCon founder) on why it’s the best of a lot of bad options, and on how Youtube is looking to improve the system.

        Basically, what I’m getting at here is that it’s not impossible to create an auto-moderation algorithm that does its job perfectly fine. Youtube’s does exactly what it’s supposed to. The problem is that the corporate definition of “supposed to” is not what the users on any internet platform want.

        If it were up to me, I’d tell all of the advertisers and old media to stuff it, and then find a way to destroy their economic and social capital as quickly as possible, so they don’t fuck up the (objectively superior) platform that Youtube has. Copyright is a dead system made to support publishers in an era when it actually cost more than a fraction of a cent to make copies of things. But I can understand why it hasn’t been done yet. Those institutions have power that Youtube can’t challenge, and I imagine it’s the same relationship that Facebook has with its detractors.

    • Simon_Jester says:

      This only helps reduce the difficulty of moderation, insofar as your big company lets the automated system delete people’s content without appeal.

      Thinking about the incentives from the company’s point of view, they have alarmingly good reasons to replace their moderators with an unreliable neural net that routinely flags innocuous content or is blatantly biased, then immediately fire most of their living moderators so that there is effectively no court of appeal. Plus, if the neural net starts getting gradually worse somehow, the incentive to keep it switched on and not replace it will be powerful.

      I think we’re a very very long way from the point where we should trust automated censorship on anything much more complicated than blocking out specific obscene words or specifically using image recognition to screen out dick pics.

    • albatross11 says:

      So, we’ve created a concentrated stream of the craziest, most horrible, and most offensive things anyone ever has said on the internet, and we’re going to use that to train our AI. Hard to see how *that* could go badly for us….

    • Bugmaster says:

      perfect immortal machines

      If it’s good enough for Citadel Station, it’s good enough for me !

    • theredsheep says:

      I have a hard time imagining that machines will be better at predicting what will upset people than people are. Granted, people are really bad at it, but we’ve been trying for a long, long time.

      • Simon_Jester says:

        Remember “scissor statements?”

        It would totally fail to surprise me if an effective online content moderator proved to be just one extra predictive module away from being able to generate scissor statements.

    • AG says:

      Tumblr laughing in the distance…

  16. P. George Stewart says:

    It’s not investigative journalism that’s the problem, the problem is grandstanding politicians piggybacking off of investigative journalism. We need more scepticism that politicians can or ought to “do something” about x,y,z, said something usually involving more artery-clogging legislation.

    In tandem with that, we need more vigour in retiring legislation that doesn’t do what it’s supposed to do, which in turn requires a more serious effort to find out if legislation actually does what it’s supposed to do.

    • Simon_Jester says:

      The problem is that if politicians don’t act on investigative journalism against someone like Facebook, then Facebook is effectively bulletproof against investigative journalism.

      Because Facebook has become a sort of weird upside-down “too big to fail.” It’s too big to boycott. A handful of individuals dropping it at the margins out of outrage isn’t going to affect it noticeably. The only thing that can even make it sit up and take notice anymore is a government, because governments have the power to deprive it of millions of potential sheep to fleece, not just a few here and there.

      Government regulation of megacorporations is the solution at least as much as it is the problem, because the megacorporations don’t magically become trustworthy just because no one bothers them.

  17. Garrett says:

    I think the version I described – moderation of a single website’s text-only comment section – is an easier problem than moderating all of Facebook and whatever horrible snuff videos people post there.

    I think this needs to be analyzed along different axes. There’s the question of “ease” of making the “right” decision – this falls into precision/recall issues. And then there’s the “ease” in terms of psychological harm.

    Reading a very polite, well-written, well-sourced text article in support of e.g. legalizing pederasty is a challenging issue to get right from a moderation standpoint, for all the reasons you outlined in your essay. But other than being “offensive”, I would hope it would have minimal impact on the people moderating it.

    At the same time, something like child porn might be very straight-forward to evaluate and determine that it violates the standards that you want in an environment. Fast and effortless to dispense with. But it can be soul-sucking. And non-stop dealing with this could quickly result in severe psychological effects.

  18. vV_Vv says:

    Can I put on my leftist hat and suggest that these working conditions are what you’d expect in a sweatshop employing masses of low-skilled, low-paid, non-unionized workers?
    Short bathroom breaks, callous micromanagement, arbitrary catch-22 rules, etc.: I can’t see much difference from accounts of working conditions in factories in 19th-century England or modern Vietnam or wherever, and it’s not like those businesses were particularly worried about the Verge writing hit pieces on them. It’s just that when your employees are cheap and replaceable you’ll tend to treat them as disposable commodities.

    • Ninety-Three says:

      On the one hand: eight-hour workdays in air-conditioned offices above modern minimum wage; on the other hand: your boss kind of sucks and there are limited bathroom breaks. I totally see the parallels to 19th-century English factories.

      • vV_Vv says:

        eight hour workdays in air conditioned offices above modern minimum wage

        Because laws.

        • Jiro says:

          By definition, laws do not require that you work above minimum wage.

          • The Nybbler says:

            They also don’t require air conditioned offices (OSHA has no temperature requirements). Nor 8-hour workdays (the unit is the 40-hour week, and required overtime at time and a half is not forbidden by Federal law). OSHA regulations are based on making factory work somewhat less hellish, not to make office workers comfortable (though amusingly Google managed to violate them at the main campus).

        • Ninety-Three says:

          Skipping past the fact that those things are not required by laws, I’m intrigued to hear how “there are laws requiring employers to provide all kinds of reasonable conditions” helps your Victorian factory comparison.

  19. Emily says:

    the hard part of their job wasn’t keep the Thread up and running and well-moderated, it was dealing with the constant hectoring that they had made the wrong decision

    I don’t remember getting hectored much. What stressed me out was the tension between my values about letting people say what they wanted so long as they were polite about it, and being appalled by the results of that* in terms of some of the content we were getting (and that I was having to read and allow, thus agreeing that it was within certain bounds). And I suppose also feeling like these rules I genuinely liked (and still like) were being gamed by people who were likely trolling – purposely being inflammatory in order to elicit an emotional response or maybe get the person arguing with them banned.

    *This was, in retrospect, naive.

  20. Normal people who are exposed to conspiracy theories – without any social connection to the person spouting them, or any pre-existing psychological vulnerabilities that make them seek the conspiracy theories out – end up believing them or at least suspecting. This surprises me a little. If it’s true, how come more people haven’t been infected?

    I’m pretty sure this is a matter of quantity. If you hear about one crime committed by a cardiologist you’re not going to start thinking all cardiologists are criminals. But if you hear about every crime ever committed by one, you’re going to start wondering what is wrong with these people. Likewise, being presented with enough (dubiously truthful) facts about the world that are most easily explained by conspiracy theories is going to cause you to start crediting conspiracy theories more. You’re just updating on the evidence: highly selected and one-sided evidence, but the evidence that goes past your eyes nonetheless.

    • albatross11 says:

      I think it’s pretty common to evaluate ideas partly based on how commonly you hear them. Stuff that you never hear anyone say, the first time it’s said, tends to trip your crazy/evil filter a lot more easily than stuff you hear people say all the time. Now, once it’s gotten past your crazy/evil filter, you might still think rationally about it. But you’re getting paid to read crazy conspiracy theories, not to read refutations of them, so probably you’re spending a lot of time reading 9/11 truther stuff or vaccines-cause-autism stuff or whatever, and none reading anyone trying to refute those.

      Another aspect of this: when some idea trips the crazy/evil filter in most peoples’ heads, the common arguments against that idea tend to be really low-quality. Everyone already knows the conclusion they’re supposed to reach, so the arguer can phone it in and nobody cares.

      And again, all this is content-neutral. It works the same whether the outside-the-Overton-window idea you’re seeing is nonsense (9/11 truthers) or true and well-documented (the CIA’s network of secret prisons/torture chambers during the early years of the war on terror). We’re talking about mechanisms that affect our willingness to consider an idea that are independent of the merits of the idea.

  21. McMike says:

    As a young manager at a small company I was shown a contract from a vendor at another small independent company. It was something like 10 pages of non sequiturs. The owner said to me: “THAT is a list of every way he’s been screwed in the past.”

    As an experienced manager who writes procedures and agreements, I must constantly find the balance between over-prescribing and ignoring hard lessons/avoiding predictable errors. In the end, we are forced to try to legislate common sense, shoehorn in institutional learning, and define a duck. It’s a difficult balance to say the least.

    Having been through some lawsuits, I can tell you this with certainty: EVERY SINGLE WORD MATTERS, THOSE SAID AND THOSE UNSAID. As do commas and word order.

    We all know that as institutions get larger they tend to become sociopathic. We also know that as they get larger, the people executing the agreements on the ground will not have access to the reasoning – the intent, the data gathering, the risk analysis, and institutional learning behind the policy. They just have this rule, and a job to do.

    We also all know that 90% of the annoying nonsense in this world exists because 10% of our fellow humans are evil/stupid/lazy. And also that they have a disproportionate ability to wreak havoc.

    Regarding moderation: it seems to me that one interesting solution is a single universal login. It can be anonymous. But it follows you to every board. Your history and the history of other people’s reactions to you follow you too. Every post is accompanied by an information measure of some sort. People can then set filters to block if they choose. I’m sure there are downsides. Maybe you get to reset your ID once every two years or something, to allow for changes in the human seasons.
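
    A minimal sketch of what such a persistent-identity layer might look like, assuming a central registry keyed by an anonymous but stable ID that accumulates cross-board history, plus reader-chosen filter thresholds (every name and number here is invented for illustration):

        from dataclasses import dataclass, field

        @dataclass
        class IdentityRecord:
            """One anonymous-but-persistent identity, shared across boards."""
            user_id: str
            posts: int = 0
            flags: int = 0                 # times other users reported this identity
            bans: int = 0                  # times any participating board banned it
            per_board: dict = field(default_factory=dict)

            def flag_rate(self):
                return self.flags / self.posts if self.posts else 0.0

        class UniversalRegistry:
            """Central registry that participating boards would consult."""
            def __init__(self):
                self._records = {}

            def _rec(self, user_id):
                return self._records.setdefault(user_id, IdentityRecord(user_id))

            def record_post(self, user_id, board):
                rec = self._rec(user_id)
                rec.posts += 1
                rec.per_board[board] = rec.per_board.get(board, 0) + 1

            def record_flag(self, user_id):
                self._rec(user_id).flags += 1

            def record_ban(self, user_id):
                self._rec(user_id).bans += 1

            def should_hide(self, user_id, max_flag_rate=0.2, max_bans=1):
                """A reader-chosen filter: hide posts from identities whose
                cross-board history exceeds the reader's own thresholds."""
                rec = self._records.get(user_id)
                if rec is None:
                    return False
                return rec.flag_rate() > max_flag_rate or rec.bans > max_bans

        # Example: a reader with moderately strict thresholds filters out
        # an identity that gets flagged on 40% of its posts.
        registry = UniversalRegistry()
        for _ in range(10):
            registry.record_post("anon-42", "some-board")
        for _ in range(4):
            registry.record_flag("anon-42")
        print(registry.should_hide("anon-42", max_flag_rate=0.3))   # True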

    • McMike says:

      Addendum: yes, once the media gets on a moralizing outrage lynch mob, their victims are like mice at the mercy of a malicious, playful cat. There is no way out.

      This has led to the style of media relations perfected by Trump. (1) control the narrative with distraction and changing the subject or changing the facts, (2) never, EVER, admit a mistake, (3) never back down, never get off the offensive.

      Non-pathological people think that if they’re caught in the tsunami of a major media frenzy, the truth will be on their side, their best strategy will be to refuse to play along and hope it blows over, and smart, decent people will know the difference. They were wrong about this in high school, and they are wrong about it now. And they are the ones who end up scapegoated, unemployed, divorced, forgotten, in jail, or dead.

      • albatross11 says:

        Having serious employment protections helps a lot with riding these out, though. Even if there’s an angry mob of Twitter users calling for your firing and Buzzfeed is reporting on your problematic Facebook posts from high school, if the boss can’t fire you without going through a months-long slog of procedural and legal stuff because of {the union, civil service protections, employment law, your contract, tenure, etc.}, then probably the outrage storm will blow over and you’ll still be employed. Over time, I expect that institutions will evolve antibodies to outrage storms, and in a decade, it will seem crazy that anyone got fired the day an outrage storm broke over them and their employer.

        • McMike says:

          Before #MeToo there were Breitbart and O’Keefe. It’s even crazier that people get fired based on attacks from known professional liars with dubious agendas.

          But hey, that climate scientist sent this email once…

    • TheFlyingFish says:

      It seems to me that one interesting solution is a single universal login.

      I’ve been saying this for years, but the problem is that for it to truly work there has to be some means of preventing the same person from re-registering as a different user every time they get banned from a given community. The only way I can think of to ensure this is to require some form of physical, biometric verification to create such an account. And the problem with that is that it will take massive resources to deploy such an institution at sufficient scale to ensure it gets adopted.

      Maybe the solution is some sort of open “identity provider service” standard, to which any company that wants to create identities along these lines must adhere. The standard would mandate things like “you must not allow people to create new accounts without biometric verification,” etc. Then all you need is some sort of oversight body that makes sure all the identity providers are toeing the line, and you’ve got yourself a workable system.
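
      As a rough illustration only, the minimum contract such an “identity provider” standard might impose, and the kind of check an oversight body could run, might look like the following (the class and method names are invented for this sketch):

          import hashlib
          from abc import ABC, abstractmethod

          class IdentityProvider(ABC):
              """Hypothetical minimum contract an accredited provider must meet."""

              @abstractmethod
              def verify_biometric(self, biometric_sample):
                  """Return a stable person-hash; the same sample must always
                  map to the same hash."""

              @abstractmethod
              def issue_account(self, person_hash):
                  """Return the single account ID bound to this person-hash;
                  re-registering must never create a second account."""

          class ToyProvider(IdentityProvider):
              def __init__(self):
                  self._accounts = {}   # person_hash -> account_id

              def verify_biometric(self, biometric_sample):
                  return hashlib.sha256(biometric_sample).hexdigest()

              def issue_account(self, person_hash):
                  # One person-hash, one account: a repeat signup gets the old ID back.
                  return self._accounts.setdefault(person_hash,
                                                   "acct-%d" % len(self._accounts))

          def audit(provider):
              """What the oversight body checks: the same 'person' can never
              end up holding two distinct accounts."""
              h = provider.verify_biometric(b"same fingerprint")
              return provider.issue_account(h) == provider.issue_account(h)

          print(audit(ToyProvider()))   # True for a compliant provider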

      Once it’s possible to truly permaban bad actors from your community, then eventually the bad actors will be filtered out from participating in much of anything. 4Chan might still exist, but their ability to impact the rest of the world will be greatly limited.

      • Conrad Honcho says:

        Can you compare and contrast this idea with China’s “social credit score” system?

        • McMike says:

          Can you compare and contrast

          Well, slippery slopes aside, it’s not as if our own spooks and retailers don’t already have a similar system in place; they just don’t share it with the masses.

          I think the difference is in [1] the intent of the system, and [2] its scope. The intent is to alert people to the past practices of people they are interacting with; the scope is limited to participation in discussion boards.

          That said, the data matrix that accompanies an identity would have to be highly transparent.

      • McMike says:

        I thought the sign-on could be controlled by paying a nominal fee with a credit card. An algorithm could review names and billing addresses and flag possible duplicates. It could then review ongoing activity for further evidence of duplicates.

        Not perfect, for sure (PO boxes, company cards, what if you are “Jr.” living with your dad, etc.). But it would certainly narrow the field to those willing to defraud credit card companies to game the system, and to the truly motivated, who could then be handled with backend analysis and banishment if caught.

        But on the pro side, it piggybacks on an existing system that has a reasonably high motivation to ferret out fraud and control identities for its own reasons.
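
        Purely as a sketch of how that flagging might work, assuming nothing more sophisticated than normalizing the name and billing address on each paid signup and flagging near-matches for backend review (the matching rule here is deliberately crude and hypothetical):

            import re

            def normalize(name, address):
                """Crude normalization: lowercase, strip punctuation, and drop
                suffixes like 'jr'/'sr' so 'John Smith Jr.' collides with 'john smith'."""
                def clean(s):
                    s = re.sub(r"[^a-z0-9 ]", "", s.lower())
                    return " ".join(w for w in s.split()
                                    if w not in {"jr", "sr", "ii", "iii"})
                return (clean(name), clean(address))

            class SignupScreener:
                def __init__(self):
                    self._seen = {}   # normalized (name, address) -> first account id

                def register(self, account_id, name, address):
                    """Return None if the signup looks new, or the ID of the earlier
                    account it appears to duplicate (queued for human review)."""
                    key = normalize(name, address)
                    if key in self._seen:
                        return self._seen[key]
                    self._seen[key] = account_id
                    return None

            screener = SignupScreener()
            print(screener.register("a1", "John Smith", "12 Oak St."))     # None: looks new
            print(screener.register("a2", "John Smith Jr.", "12 Oak St"))  # 'a1': flag for review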

        • roystgnr says:

          I thought the sign-on could be controlled by paying a nominal fee with a credit card.

          Replace “nominal” with “enough to pay for the hassle of repeat bans” and you could stop worrying about duplicates altogether. Heck, even “nominal” alone seems to be a massive deterrent to abuse. E.g. Metafilter has an awful system in a lot of ways but the hassle that a one-time $5 signup causes trolls seems to outweigh all the technical flaws and inadequacies put together.

          • McMike says:

            Perhaps paying with a CC automatically serves as a filter, because it links your ID to the real world in a direct way. Which can be a disincentive to misbehavior in a few ways, including simply as a wake-up reminder that the internet isn’t just a role-playing game.

          • roystgnr says:

            I hadn’t thought about it, but the implicit “links your ID” threat makes sense. That was how Scott Adams got shamed for breaking Metafilter rules.

            Edit: no, wait, that’s not really right:

            Let’s be super-duper clear here: we didn’t out Scott. We told him he need to decide between disclosing his identity on mefi or cutting it out with the Vehemently Defending Scott Adams as a purported third party. He chose to identify himself on the site; if he’d chosen to walk away, that’d have been fine too.

            We very explicitly did not make the decision of revealing his identity. His behavior was obnoxious in either case, but we went to considerable effort to make sure it was his call to make. It’s not the first time we’ve had to deal with something like this, and we care a hell of a lot about not casually compromising folks’ identities.

    • Ozy Frantz says:

      I, for one, appreciate the fact that it is somewhat difficult to connect the account with which I talk about effective altruism, the account with which I write trashy romance, and the account with which I ask questions about the color of my baby’s feces.

      • albatross11 says:

        +1

        Combine a true-names-only commenting policy on the internet with angry social media/traditional media mobs going around hassling people with the wrong opinions, and I think you guarantee that everyone will be very careful to say only the very most inoffensive and uncontroversial things on the internet.

        • McMike says:

            Ok. FWIW, I was not recommending true names. I was recommending a single persistent ID.

          • Jaskologist says:

            If everything you have ever said online can be connected, it’s pretty easy to draw up a profile of who you’re likely to be. And if you ever in your life slipped up and released some actual identifying info (for example, back when you joined the internet as a young kid who didn’t know any better), congratulations, you’re unmasked forever.

            All this accomplishes is making outrage targets very easy to dox. Personally, I’m more troubled by the prospect of CNN feeling justified in doxxing people who make fun of them than I am by randos saying dumb things.

          • acymetric says:

            @Jaskologist

            More generally, I’m surprised this is getting so much support here. I would not have expected SSC to be so positive on the idea of centralized identity tracking (which is essentially what this is), but maybe I’m calibrated wrong or the people who dissent are just ignoring it.

          • I’m very against it. I mostly like the anonymous internet, because the bad side comes with the good side of allowing me to talk about controversial stuff and keep it separate from an identity where I get to discuss more mundane things. Some identities I want to link together, and others I don’t, and that power is useful and protective. A single persistent ID would wreck that and would make it a lot easier to find your real name. It’s not worth it just to fight mobbing, because it ultimately makes the work of mobs a lot easier to accomplish.

          • McMike says:

            Well, the advantages and conveniences of multiple identities to reasonably responsible people are clear. However, I think we differ on two starting premises:

            [1] Trolls, creeps, criminals, and cyberbullies will eventually lead to the end of anonymous posting, one way or the other.

            [2] There is no anonymous posting anymore (if ever); and predictive profiling already exists (or soon will).

          • [1] Trolls, creeps, criminals, and cyberbullies will eventually lead to the end of anonymous posting, one way or the other.

            [2] There is no anonymous posting anymore (if ever); and predictive profiling already exists (or soon will).

            If we’re so sure of [2] then we can just wait for [2] to happen and organically solve the problem without the need to overhaul the entire internet. Even if something is inevitable I’d rather fight it for as long as possible rather than giving up and deliberately centralizing something that I already think is overcentralized.

            Personally, I think anonymous posting is what makes the internet great in the first place, and I’d probably stop using it for anything but commercial activity if anonymity were destroyed (a frequent discussion topic among me and my IRL friends is how to create an alternative, uncensorable internet where personal filters are the order of the day). I think [1] is massively overrated as a problem, and largely the problems arise (such as the one in the article) through a multi-step process caused by organizations reacting poorly and/or overreacting to this problem.

            Social media is an abomination anyway, and I don’t want to destroy the parts of the internet I enjoy just so that soccer moms can feel safe on Facebook. If a site gets too big to be efficiently moderated without forcing workers into a totalitarian corporate nightmare then that’s something to weigh against the benefits of having sites with so many users trapped in there with the trolls and creeps, and if we want a solution then the solution should come down hard on social media giants and not the internet as a whole.

          • albatross11 says:

            Yeah, a single linkable ID for all your online activities means you get identified with very little effort.

      • McMike says:

        I, for one, appreciate the fact

        Absolutely, but we have reached the point (I believe) that the bad actors have ruined that for the rest of us. You may have to return to that pre-internet era, where you were forced to live as one person for a lot of your activities.

        That said, one modification to my single log-in system would be to make it optional per site. The sites could decide if they need to control trolls or not. The baby poop board could allow ad hoc users if it wants to.

        • That said, one modification to my single log-in system would be to make it optional per site. The sites could decide if they need to control trolls or not. The baby poop board could allow ad hoc users if it wants to.

          I’m not so against this if you tweak things in this direction. We could go further. I’d honestly be okay with two separate internets: a bad boy internet with only personal filters, site-level moderation, and no universal identity, and then a good boy internet with copious government curation of content, and a single universal ID.

          To make this work, and give it teeth, have registering to vote, or registering the birth of a child tied to a legally binding contract to use the good boy internet, with ridiculously punitive and nasty penalties for not upholding this agreement. That way, the main issue of bad content (children’s sanity and people becoming conspiratorial radicals who vote wrong) is walled off on exactly that basis, and good citizens who want to live a productive life and vote for the mainstream ideologies of 2019 are affixed to the good boy internet on the basis of their life choices by clearly referenced documents, and those who do not sign these documents have the freedom to use the bad boy internet, but simultaneously miss out on the privilege of raising children and voting.

          If that were the Devil’s Compromise, and he upheld it in perpetuity lest God strike him from his hellthrone, then I would accept it instantly. That’s an ID system I can accept and live with.

          • Jiro says:

            Why don’t we just execute people who don’t sign up for the good boy Internet? Just like they miss out on the privilege of having children and voting under your concept, they instead miss out on the privilege of living.

          • PeterDonis says:

            a bad boy internet with only personal filters, site-level moderation, and no universal identity, and then a good boy internet with copious government curation of content, and a single universal ID.

            The names you have chosen for these two internets are very revealing of your underlying assumptions. To illustrate how a different set of underlying assumptions would lead to different naming choices: my choices for the names would be the “free person internet” for the one with personal filters, site level moderation, and no universal identity, and the “dictatorship internet” for the other one.

            To make this work, and give it teeth, have registering to vote, or registering the birth of a child tied to a legally binding contract to use the good boy internet, with ridiculously punitive and nasty penalties for not upholding this agreement.

            In other words, punish all the people who actually want to have reasonable discussions online, by forcing them to buy into an autocratic system designed to rein in the people who want to poison the discussion. I.e., punish the law-abiding for the misdeeds of the criminals.

          • PeterDonis says:

            the main issue of bad content (children’s sanity and people becoming conspiratorial radicals who vote wrong) is walled off on exactly that basis

            You’re missing the fact that the sites that are able to filter out bad content reasonably well now are the sites that have “only personal filters, site level moderation, and no universal identity”. For example, this site right here. The sites that have problems with filtering the bad content are the ones like Facebook, that have “copious curation of content, and a single universal ID”. So by your definitions, what you are calling the “good boy internet” would be the one that had all the problems of Facebook, but squared and cubed. Whereas what you are calling the “bad boy internet” would be the one that had sites like SSC.

    • j r says:

      Regarding moderation. It seems to me that one interesting solution is a single universal login.

      Suggestions like this make me think that we are all trapped in one big shell game.

      Someone already brought up the similarities to a social credit system. Beyond that, one of the biggest criticisms that I keep hearing of Big Tech is that they are relentlessly following us across multiple platforms and logging our behavior to create us-shaped profiles that can be modeled and used to predict and influence future behavior. Does a universal login get us closer to or farther from that end?

    • PeterDonis says:

      It seems to me that one interesting solution is a single universal login.

      The fix to too much centralization is not more centralization.

      • McMike says:

        @jack, forward, j r, peter

        No doubt my premise is that the profiling and tracking horse has left the barn. “More centralization” is redundant and a battle already lost.

        When I make up my little anonymous avatars I assume the only people who don’t know who I am is each other on the board. I even assume that a dedicated stalker could follow me long enough and probably deduce who I am with nothing more than google.

        I further assume that the revolution won’t be internetized. All our little political posturing on the web won’t mean diddly. And any true reform will come low-tech, across kitchen tables and barstools. And, in a throwback to our founding, it will be done furtively, secretly, and looking over our shoulders, more like the Taliban than millennial digital superheroes.

        But let’s circle back to the context of the problem statement, which is the issue of sick f**ks posting evil and disturbing stuff on the internet, to which I add trolls and the like on discussion boards, who ruin good conversations, bully people, poison dialog, and even cause young and fragile people to self-harm or hook up with predators.

        It sounds like you all enjoy lovely islands of healthy internet discussion like this. I won’t ask you to list the names of them here. But remember, Scott is a refuge from Slate, which was destroyed by trolls. As was the Gawker family (long before Hulk Hogan dropped his member on their assets). Many major media outlets have eliminated comments. And the only way to reach most authors and thinkers now is via Twitter.

        So that’s the problem I proposed a solution for via universal registration.

        • PeterDonis says:

          “More centralization” is redundant and a battle already lost.

          I don’t think this is true. But it could become true if enough people give up trying to fight it. I don’t want to give up.

          So that’s the problem I proposed a solution for via universal registration.

          How does universal registration solve the problem? Facebook already has it for the billions of people that are on Facebook, and it’s the cause of the problem, not the solution. How does conglomerating every place on the internet where anyone can post anything into one giant super-Facebook make things any better?

          • McMike says:

            Perhaps. Convincing us that the war is already lost is certainly part of the strategy.

            Yet between the NSA, Amazon, and the banks, what’s left to know about me? You are assuming, for example, that wordpress is not backdoor-compromised, and that everything you type here isn’t instantly accessible to some bunker in the suburbs of SLC.

            As for the login, please be clear: I am not proposing a single universal discussion board platform. I am proposing a single independent universal login, to be used by independent platforms if they choose to.

          • PeterDonis says:

            between the NSA, Amazon, and the banks, what’s left to know about me?

            About you? I couldn’t say. About me? Lots. There’s a lot about human beings that can’t be reduced to database entries.

            You are assuming, for example, that wordpress is not backdoor-compromised, and that everything you type here isn’t instantly accessible to some bunker in the suburbs of SLC.

            I’m making no such assumption. I think you and I mean different things by “centralization”.

            Entities collecting information by backdoors, profiling, etc. is just an unavoidable consequence of having computers be able to connect to other computers anywhere in the world. There would still be entities doing that even if Google, Facebook, Twitter, etc. never existed. That’s not “centralization” as I’m using the term. It’s just life in cyberspace.

            “Centralization” as I am using the term means things like: everybody goes to Facebook to connect with other people and see what’s going on; everybody goes to Twitter to emit their latest thoughts; everybody goes to Google to search; etc. You’re proposing to add that everybody goes to your universal login provider to identify themselves. I don’t see how that helps anything.

            I am proposing a single independent universal login, to be used by independent platforms if they choose to.

            But unless everybody else chooses to, it’s not universal. It’s just another OpenID.

          • albatross11 says:

            I think there’s a pretty big difference between “the NSA can probably link together most of your online presence if motivated to do so” and “your online presence is linked together under a single name which is held in some database by a private company and can be accessed by activists, lawsuits, court cases, new laws, etc.” I’d prefer a world where the NSA and the ad networks weren’t able to do that, either, but I think adding the centralized one-stop-shopping side of that tracking would be pretty bad overall.

          • McMike says:

            @peter, albatros

            Well, it seems that I have surrendered to the Borg. I hadn’t really fully realized it until this moment.

            I hope you folks are right and I’m wrong. Don’t get me wrong, I would prefer a world without FB or Twitter or the NSA. But I also wouldn’t miss television or the health care system. Or big banks.

            So I am used to disappointment, I guess.

            I also remember when Bill Gates scared me. I miss him now like some people miss Nixon (no, not Mr. Stone).

          • acymetric says:

            @albatross11

            Exactly. I don’t think the solution to “some entities can connect the dots of my online activities if motivated” is “make it trivially easy for nearly anyone to do so”.

          • McMike says:

            @acymetric

            I’m curious, though. Is the fact that you are currently in five or six databases rather than one, with varying levels of security and operational segregation from the NSA and the major data brokers, truly that far away from trivially easy either?

  22. belvarine says:

    Muckraking exposés and other investigative pieces designed to instill outrage/mass action wouldn’t be necessary if these places were publicly accountable for mistreatment and subject to regular audits. Can you imagine magazines devoting resources to reporting violations that will show up and be corrected during the regularly scheduled review process? If the public had faith in the company’s internal review process, would anyone find these reports interesting?

    Why do you think these pieces about oppressive work conditions resonate emotionally with so many people? You’d think if these workers had a choice in the matter they’d simply choose to work somewhere else. Then why are people outraged by these reports instead of pinning the blame on these workers for failing to improve their own lot?

    • J Mann says:

      Other than the fact that they have to look at awful content (which is a big difference), this looks like normal entry level temp work. As the article points out, actual Facebook employees get much more money and perks, but these are entry level positions at a contractor.

      As the article points out, the job pays McJob wages, but better than the alternatives:

      She had been frantic for a job when she applied, as a recent college graduate with no other immediate prospects. When she becomes a full-time moderator, Chloe will make $15 an hour — $4 more than the minimum wage in Arizona, where she lives, and better than she can expect from most retail jobs.

      • McMike says:

        Indeed. Someone whose previous career prospects were limited to retail clerking is not likely emotionally or professionally prepared to spend all day wallowing in the worst filth internet trolls and sick f**ks can dish out.

        FB deserves every bit of scorn it gets. But of course it’s the contract serfs who end up eating the turd sandwich.

    • raj says:

      why are people outraged by these reports

      Why indeed? What is facebook supposed to do here? They’re obviously expected to moderate content, but the only reliable way they currently have to do that is to have someone get eyes on, at some point.

      The public doesn’t care about the guy who has to slaughter 1000 pigs a day, or clean up medical waste. This is just a new shit-shoveling job someone has to do; the outrage will fade and nobody will care in time.

  23. John Schilling says:

    I find myself mildly surprised that there hasn’t been an instance of mass murder at a Facebook moderation sweatshop yet, and wonder how long that will last. The environment seems almost calculated to produce such a result, far more so than any 1980s mail-sorting room.

    • johan_larson says:

      Maybe they’re just quietly collecting perverts. I mean, where else can you get paid to look at seriously twisted shit all day long? The job doesn’t offer a lot of other reasons to stay. And if you’re there because you like the stuff you’re watching, why make trouble?

      • 10240 says:

        Being that sort of pervert may be correlated with being the sort who commits a mass shooting though.

        • McMike says:

          Being that sort of pervert may be correlated with being the sort who commits a mass shooting though.

          I don’t think so. I think many of them would be happy as pigs in slop being paid to do what they used to worry about being marginalized and getting arrested for. Ironically, it might even desensitize the thrill for some.

          It may not be unlike the attraction that positions of authority like cubmaster and priest bring to a certain type of pedophile.

          • albatross11 says:

            This is like the old Tom Lehrer line about the necrophiliac who finally achieved his life’s goal of becoming a coroner. (“And the rest of you can look it up when you get home.”)

    • Randy M says:

      There are a lot more mail carriers/sorters than facebook mods, surely.

  24. 10240 says:

    [I haven’t used Facebook for many years.] What does Facebook moderate? Only posts that everyone can see? Bigger private groups? Anything a user reports, even if it can only be seen by your friends?

    • A1987dM says:

      Rumour has it that anything that is reported by fewer than a handful of people and doesn’t contain blacklisted words is kept, anything that is reported by more than a few dozen people is banned, and human mods are only involved in between.
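
      If that rumour were even roughly right, the triage would amount to something like the sketch below (the thresholds and the blacklist are placeholders, not anything Facebook has published):

          BLACKLISTED_WORDS = {"exampleslur"}   # placeholder list
          AUTO_KEEP_BELOW = 5                   # "fewer than a handful" of reports
          AUTO_REMOVE_ABOVE = 30                # "more than a few dozen" reports

          def triage(text, report_count):
              """Return 'keep', 'remove', or 'human_review' per the rumoured rule."""
              if any(word in text.lower() for word in BLACKLISTED_WORDS):
                  return "remove"
              if report_count < AUTO_KEEP_BELOW:
                  return "keep"
              if report_count > AUTO_REMOVE_ABOVE:
                  return "remove"
              return "human_review"             # mods only ever see the middle band

          print(triage("perfectly ordinary post", report_count=2))    # keep
          print(triage("perfectly ordinary post", report_count=12))   # human_review
          print(triage("perfectly ordinary post", report_count=50))   # remove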

  25. As much as people don’t want algorithms to be reviewing and removing all the bad material, largely because they suck at it currently, the article gives a good ethical reason why this “job” should be automated out of existence as soon as possible for the sake of the humans having to do it.

    • McMike says:

      What about an opt-in warning over a gray screen? “Warning: a computer algorithm has determined this image might contain… XYZ.” Then let the user decide whether to override. Parents could set blocking filters, cops could get reports of flagged posts, etc.

      Is that naive?
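
      A toy version of that opt-in flow, assuming some upstream classifier has already attached labels and confidence scores to each image (the labels, scores, and thresholds below are invented for illustration):

          def render(post_labels, user_prefs):
              """Decide whether to show a post, gray it out behind a warning,
              or block it outright, based on the viewer's own settings.

              post_labels: e.g. {"graphic_violence": 0.87} from some classifier.
              user_prefs:  per-label choices, one of "warn", "block", "show".
              """
              decision = "show"
              for label, score in post_labels.items():
                  if score < 0.5:          # ignore low-confidence guesses
                      continue
                  pref = user_prefs.get(label, "warn")
                  if pref == "block":
                      return "blocked"     # e.g. a parental filter
                  if pref == "warn":
                      decision = ("warning: this image may contain " + label
                                  + " (click to view anyway)")
              return decision

          adult_prefs = {"graphic_violence": "warn"}
          parental_prefs = {"graphic_violence": "block", "nudity": "block"}
          labels = {"graphic_violence": 0.87}

          print(render(labels, adult_prefs))     # gray-screen warning, override allowed
          print(render(labels, parental_prefs))  # blocked outright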

    • I would prefer that, McMike, but one of the reasons social media won’t do this is that governments and the media pressure them to avoid letting people see certain things, and then other media groups and think tanks pressure them to let people see things, and then we get a haphazard hodgepodge, but certainly nothing like a universal opt-in solution.

      I prefer filters and opt-in solutions. Personal censorship is better than organizational censorship as it’s closer to the source, but it’s not just that individuals don’t want to see certain things, but that they don’t want others to see those things, be it members of the public who they believe will be brainwashed into wrongthink by this or that ideology, or children who will choose to opt-in to see things that it’s believed might warp them for life in some underspecified but still innately concerning way.

      • albatross11 says:

        It seems like most of the time, McMike’s proposal is what you see. There’s plenty of content on the internet that is NSFW in various ways, but you might have to seek it out a bit to find it.

        Facebook tends to spread some of this stuff to people who don’t want to see it, and to people who will get mad about it and start yelling at each other. Twitter, too. Youtube’s recommended next video and autoplay functions tend to give you “related” videos (defined as what the ML algorithm thinks is related), and often can take your child from watching a funny cartoon to watching something you probably didn’t want them watching. I’ve also read the claim that Youtube tends to reinforce and amplify your starting interests – start by watching a video that’s Republican, and it may keep feeding you recommendations until you’re watching white supremacist videos.

        I gather Youtube tends to leave things up but demonetize them (not run ads on them, so you don’t make any money off them). I assume they have a similar moderation problem, and also that they have their own political biases that apply when doing moderation. (Also, everyone tries to game the moderation system to screw over the other side, because people.)

        • Youtube’s recommended next video and autoplay functions tend to give you “related” videos (defined as what the ML algorithm thinks is related), and often can take your child from watching a funny cartoon to watching something you probably didn’t want them watching. I’ve also read the claim that Youtube tends to reinforce and amplify your starting interests – start by watching a video that’s Republican, and it may keep feeding you recommendations until you’re watching white supremacist videos.

          There’s a real vague complexity to all this though. We clearly don’t want kids to watch certain things, but then isn’t that the parents’ job? But then parents can’t watch their kids all the time, so who do we outsource that to? Do we have some official oversight committee governing how the related-video algorithm works… and then the same fight is fully politicized within government itself?

          Spitballing here: maybe the government should command Youtube to apply a different filter to everything in the politics & news category, one that randomizes the next video so that you get a non-curated look at the political idea space rather than being pushed deeper and deeper into the far right or far left by keyword matching. That way, watching Republican videos doesn’t start you on a climbing stairway of tags that the traditional algorithm would use to eventually lead you to white supremacy, and since it’s randomized rather than algorithmic, there’s no political bias in the correction. If you’re in the cat section of Youtube you get more precise algorithms that make related videos based on precise tags like “persian”, but if you’re in the political section you simply get randomization within that topic, so every political video is related only in the sense that it’s political, there’s no coherent bias towards a particular point of view except what happens to appear randomly, it will be different for different people, and the only large-scale trend will be based on the number of videos coming from a particular slant. (A rough sketch of the idea follows the lists below.)

          Problems it solves:
          1: Current extremism escalator
          2: The problem caused by trying to legislate what the algorithm should be biased towards instead

          Potential problems:
          1: If it’s randomized completely, then people who make more videos supporting a particular view have an advantage.
          2: People can choose what category their vids go in currently, so Youtube would still have to make judgments itself on what counts as political, and I can see this leading to problems.

          It might be that a solution like that would work, but it’s hard to tell because of the underspecified nature of the problem.
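
          As a rough illustration only, the “randomize within politics, keep the normal recommendation logic elsewhere” idea might look something like this sketch (the catalogue, categories, tags, and similarity function are all invented for the example):

              import random

              # Hypothetical catalogue; in reality this would be the platform's video index.
              CATALOGUE = [
                  {"id": 1, "category": "politics", "tags": {"republican"}},
                  {"id": 2, "category": "politics", "tags": {"democrat"}},
                  {"id": 3, "category": "politics", "tags": {"white_nationalism"}},
                  {"id": 4, "category": "cats", "tags": {"persian"}},
                  {"id": 5, "category": "cats", "tags": {"persian", "kittens"}},
              ]

              def similarity_pick(current, candidates):
                  # Stand-in for the usual engagement-driven "related video" logic:
                  # pick the candidate sharing the most tags with the current video.
                  return max(candidates, key=lambda v: len(v["tags"] & current["tags"]))

              def next_video(current):
                  candidates = [v for v in CATALOGUE
                                if v["id"] != current["id"]
                                and v["category"] == current["category"]]
                  if current["category"] == "politics":
                      # Uniform draw within the topic: no tag-chasing, so no
                      # escalation toward ever more extreme content.
                      return random.choice(candidates)
                  return similarity_pick(current, candidates)

              random.seed(0)
              print(next_video(CATALOGUE[0]))   # some politics video, chosen at random
              print(next_video(CATALOGUE[3]))   # the most tag-similar cat video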

          I’ve also read the claim that Youtube tends to reinforce and amplify your starting interests – start by watching a video that’s Republican, and it may keep feeding you recommendations until you’re watching white supremacist videos.

          But what’s the main reason we care about this? Isn’t it that we fear that they will then form a political power bloc to elect a white supremacist party? So the fear is that the algorithms ultimately dictate democratic outcomes, isn’t it? What if tomorrow the thing feared isn’t something like white supremacy, which 99% of people here agree is wrong, but some other ideology that perhaps people here believe in?

          In that vein, here’s an even more radical solution: you can choose to be registered to vote, and obviously you choose to live in a household with kids. If you are registered to vote or have kids then you use a special cuddle hug internet to make sure that the content you see is inoffensive and safe for democracy and childhood development.

          If instead you have no kids, or aren’t registered to vote, then you get to use the bad boy’s internet with only personal filters and regular site moderation, and no outside government curation. If you sign a birth certificate for your child, or adopt, then you are simultaneously signing a document that makes it ILLEGAL for you to use the internet without the safe-for-wee-bairns-and-good-democracy filter, and the same is true for signing up to register to vote; you are making a legally binding promise that you will NOT violate under EXTREMELY HARSH penalties.

          I’d be happy with this extremist compromise. Clear formal separation, cut like a knife, with freedom by birthright, and obligation contingent on choice. Solved in one fell swoop as far I’m concerned, but I don’t believe for a minute that I’m in good company here.

          • Simon_Jester says:

            So basically, we need to segregate the Internet so that voters are protected from politically demented content (if I understand you rightly). And so that kids are protected from graphic violence and other forms of obscenity.

            So…

            What about, say, people with weird sexual fetishes or who like to watch lots of gun videos, but who still want to be able to vote?

            Or, conversely, people who have kids but want to be able to participate meaningfully in intensive politics?

          • So basically, we need to segregate the Internet so that voters are protected from politically demented content (if I understand you rightly).

            Not quite. Ideally we don’t need to do this, and I don’t want this, but if the choice is between censoring everything, and walling off my freedom in exchange for some things, then at that point, I’d accept the bargain.

            What about, say, people with weird sexual fetishes or who like to watch lots of gun videos, but who still want to be able to vote?

            These are already supported by mainstream ideologies. Weird sexual fetishes are politically supported by mainstream progressivism, and gun videos are supported by mainstream conservatism. The good boy internet subject to heavy government control and ID would be able to put filters in place so that children didn’t see these things.

            If the mainstream good boy internet for normal people who have kids and vote is controlled by the government, then it will reflect a mainstream, normal person’s idea of censorship. This probably means that somewhat risqué and edgy things will not be banned, but the context in which these things are presented will be radically controlled. An ID system, as some have mentioned in this comment section, could provide this context control by ensuring a real identity, thereby factually determining the age of the viewer and never allowing the underaged to see anything they shouldn’t, to an absolutely perfect and previously impossible degree.

            Or, conversely, people who have kids but want to be able to participate meaningfully in intensive politics?

            It’s an even trade if you think about it. Your choices are pretty much always confined to voting for a mainstream right-wing party or a mainstream left-wing party, with the occasional moderate triangulating centrist thrown in. Radical parties get nothing in terms of votes outside of historical turning points, such as when your country is paying war reparations after a cataclysmic, newly modern war that you lost and took all the blame for, while diving into an economic crisis subject to the whims of international speculators, whom you racially cast, in the midst of rising nationalism, as the reaction; or when entire systems like traditional monarchies exhaust their economic and social basis entirely and fall to alliances between communists and peasants (see the 20th century).

            The point of this system wouldn’t be to ban such things anyway. The point is to corral the human herd and normalize it to what is already mainstream, so as to maintain a fixed point of existence. First past the post pretty much ensures this already, but people are still worried, so they want more. Then let’s give them more security.

            In theory you could start on the bad guy internet and absorb all the islamocommienaziscientologist propaganda and then register for the good boy internet by registering to have kids, but how many people would actually do that? Most people already fixedly vote as a great mass, merely bifurcated and nothing more. The extra and final layer of security would be to ensure that if you did so then you could never walk it back, and that if you tried to spread your propaganda among the normals and their normalized system, you would be normalized by your fixed identity, and quickly be banned and isolated.

            The radicals would then be ill equipped to try and play entryism with mainstream groups. They would have to be content to play in their own pond waiting vainly for society to collapse completely, as they usually do, a now ringfenced minority.

            This sounds horrible? Better than imagining that the entire internet, the whole thing, every space, be normalized.

          • Taradino C. says:

            if you are in the political section, you simply get randomization within that topic, so every political video is related only in the sense that it’s political, and there’s no coherent bias towards a particular point of view except what happens to appear randomly,

            The biggest problem with this is there’s also no coherent bias toward usefulness. YouTube has a massive amount of content, spanning many years and cultures, so a random “politics” or “news” video is unlikely to interest the viewer at all. When a local news report on a scandal at the mayor’s office is followed by 6 hours of raw C-SPAN footage from 2011, a red carpet interview from the Oscars, “Kids React To Newspaper Vending Machines”, and soccer highlights from a country that uses a different alphabet, the feature isn’t doing anyone any good, and people will just turn it off.

      • DeservingPorcupine says:

        Agree totally with FwdSynth here.

  26. eqdw says:

    One of my biggest gripes with that article was

    It seems like The Verge’s preferred solution, a move away from “the call center model” of moderation, might have whatever anti-dehumanization virtue doctors and lawyers have. Overall I’m not sure how this works, but it prevents me from being as snarky as I would be otherwise.

    Ok. I 100% buy that the call center model is bad, for all the reasons listed in the article. But what’s the alternative? The article spent a long time talking about how the current model is bad and should be moved away from, but unless I missed this detail, they didn’t spend any time at all talking about what a better model would look like.

    I don’t think it’s possible for facebook to achieve all the moderation goals they believe they need to, on any model other than the call center model.

    • bottlerocket says:

      unless I missed this detail, they didn’t spend any time at all talking about what a better model would look like.

      Of course not! Making an active recommendation would open them up to having disparaging articles written about it.

      To steelman their position a little, they do note that the average Facebook employee makes about 8 times what one of these moderators makes, and then point out a number of ills caused by squeezing for efficiency. There’s the undertone that Facebook should be using some of its profits to better compensate these workers for dealing with all of the issues portrayed in the article.

      Since Facebook wouldn’t do that out of the goodness of their hearts (and the article even notes that one of the people interviewed didn’t have better prospects), the Verge is taking the public shaming route to try to make being nicer to these moderators less costly than putting up with whatever bad PR they can drum up.

  27. C.H. says:

    And lawyers demonstrate a different way that strict rules can coexist with a humanizing environment; they have to navigate the most complicated law code there is, but I don’t get the impression that they feel dehumanized by their job.

    What does Scott mean by this? I’m a lawyer, and I wouldn’t call my environment a very humanizing one (nor is it dehumanizing), and I don’t feel like there are very strict rules in most cases.

    Then again, I’m not a family lawyer having to deal with child custody nastiness on a day to day basis, or a criminal lawyer dealing with terrible injustices.

    • Simulated Knave says:

      What DO you do, out of curiosity? Because the only things I can think of that might not be dehumanizing to some extent are wills and estate planning.

      • McMike says:

        I guess you haven’t been to a contested will involving warring siblings, evil step spouses, estranged children, secret love children….

        • Simulated Knave says:

          I had thought of that, but with wills at least that seems to be something of an exception. Whereas corporate law, criminal law, family law, etc suck day to day.

  28. FormerRanger says:

    The “maybe [crazy conspiracy theory] is true after all” effect reminds me a lot of Scott’s post about how weak-manning is a “superpower.” Each time you encounter a “weak man” it makes you a little more open to believing the whole thing.

    • albatross11 says:

      I wonder what fraction of the conspiracy theories circulating out there are substantially true. I mean, in a world without the media coverage it has received now, suppose I told you about the Catholic Church’s decades-long conspiracy of silence on child sexual abuse. It would have sounded like an anti-Catholic conspiracy theory. (“Yeah, right, and even a bunch of *bishops* and *cardinals* are serial sexual abusers? And the whole upper tier of the church hierarchy knows about it, but turns a blind eye to it to keep the scandal out of the newspapers? Yeah, go away, you lunatic.”)

      The true fraction is probably low, but not zero. (And the reality is likely not as sexy as the most effective conspiracy theory memes.)

      • Simon_Jester says:

        I’d rank conspiracies in three tiers, from highest to lowest order of a priori plausibility.

        1) Conspiracies to conceal criminal behavior.
        2) Conspiracies to falsify some publicly known fact.
        3) Conspiracies to control large swathes of society and the world.

        Conspiracies of the first type aren’t just plausible, they’re practically the default condition. Large organizations routinely shelter one or another form of criminality if not extensively audited. The only thing that makes such a crime implausible is if there’s some strong a priori reason to think the crime didn’t or couldn’t have occurred, or couldn’t be concealed at all. For instance, it’s plausible that the Catholic Church is concealing thousands of pedophiles, but not so plausible that they’re concealing thousands of axe murderers. Because the axe murderers would get caught a lot more easily.

        Conspiracies of the second type are hard, because a conspiracy is a type of interest group and for every interest group there is an opposite interest group. Your opponents will always have an incentive to either stop you or at least out your conspiracy to the public – at which point calling you a conspiracy is like calling the NRA a conspiracy to convince Americans that gun ownership should be legal. They’re not a conspiracy in that they’re not hiding.

        Conspiracies of the third type are like those of the second type, only exponentially harder still, because now you have not just one particular interest group but ALL the interest groups opposing you, because as a secret master of the world you have to tread on a lot of toes.

  29. blacktrance says:

    The problem with the anti-irony rule is that it’d ban a second complaint about an organization even if it responded poorly to the first complaint. Suppose a newspaper writes that there’s a lot of crime and police are doing nothing. The cops respond by harassing people and arresting them on minor pretexts. Complaining would be justified, and “Well, you wanted us to do something about it!” would obviously be an inadequate response.

  30. StevieT says:

    The biggest problem faced by Facebook (and Youtube, Twitter and anybody else who allows users to post large amounts of content) is that what they are doing (and what is expected of them) is not really content moderation any more. This has been true for around a decade. In today’s climate they have to do behavior moderation dressed up as content moderation.

    Seriously, read the Facebook Community Standards and ask yourself how many of the rules refer to the nature of the content vs. the behavior and intent of the user posting it. It is trivially easy to come up with counterexamples to every rule in the book where the content would be allowed in one context but banned in another. So content moderators, in the 30 seconds they have to examine a piece of content, are not really looking at the content itself; they are trying to work out, “who is this user and why are they doing this?” It is extremely unsurprising that the moderation guidelines by now are approaching the size and complexity of a legal code, because that is exactly what they are.

  31. Ghillie Dhu says:

    This situation reminds me, somewhat abstractly, of the story of the Radium girls; a low(ish)-skilled role with long-term dangers at best poorly understood by their employers and not at all by the workers themselves.

    • McMike says:

      From the perspective of FB, moderators are a regulatory/PR-driven pure cost center, with only downside potential; the best possible outcome is invisibility. A function they would rather not have to do, not have to deal with often, not have to think about, would rather no one talked about, and definitely not pay much money for.

      It serves only as a “yeronor” service. Yes, your Honor, we have a content moderator department.

  32. magnacarta says:

    The legal profession has its own issues to a lesser degree, including high anxiety rates. In some cases, criminal lawyers see/hear some awful content (including violence and child porn) and have to deal with guilty parties face-to-face. Obviously, not all clients are guilty.

    The legal profession is also uncertain about the consistent application of rules. Cases can be overruled by higher courts. More fundamentally, there is conflict between conservative and progressive perspectives – e.g., should judges abide by previous rulings, or be allowed to make new judgments because of nuance, changing social values, etc.? We have also seen significant pressure in recent years to judge different groups of people by different standards (differentiated by biology). I apologise for being aloof on that point (if you want to know, you’ll find it).

    I was also interested in Scott’s statement about the influence of conspiracies and how easily people are swayed. It’s been said many times… a lie told often enough becomes the truth. I recently read an article about Trayvon Martin raising some surprising observations. I assumed Trayvon was about 12 years of age when he was shot by George Zimmerman. I wasn’t aware that Trayvon had no signs of injury beyond a gunshot wound (vs. Mr Zimmerman, who had a broken nose, black eyes, and cuts/abrasions to the back of his head). Nor was I aware that the president became involved (“if he had a son, he’d look like Trayvon”). I confirmed each of these points through multiple sources. There are many deceptive and false narratives in the media. I openly admit I didn’t see this one at the time. The reporting of those aspects of Trayvon was a case of well-crafted and coordinated deceit from many groups. For example, I assumed Trayvon was about 12 because the media frequently displayed a young happy boy in a red t-shirt. He was a young adult, possibly 18 if my memory serves.

    My discussion of Trayvon might be fragmented (I’m not in an ideal location as I type this). The point is that our lives are full of deceit, much of it coordinated by multiple groups to give authenticity to the deceit. In the Trayvon Martin case, the misreporting came from many media outlets and from an ex-president whom I used to highly respect (for better or worse). Most people often don’t have the time, energy or interest to see through the deceit. Similar to Plato’s allegory of the cave, it’s not uncommon for people to be happy with the first story, or the first in-group story, they hear. I’d like to tweak Scott’s view and say there are many things that are either false or deceitful that aren’t common knowledge. For many reasons, we don’t see them or don’t want to see them.

    Edit: on re-reading my comment, it may be possible to assume I’m linking my biology statement with the Trayvon Martin ruling. That’s not the case. My concern is the pressure to legislate laws of the form “if you are in [category a] and you murder someone, you won’t be guilty because [category b] is different”. If you put this into law, every person in category a who behaves a certain way will not be guilty.

    • McMike says:

      Re: inconvenient data points.

      Exactly. I remain troubled by a few things about 9/11. What was up with that passport they supposedly found right away? (overeager reporting, or CIA psyops plant: both plausible and with precedent). What the heck WAS Bush doing that day? Why were they so eager to spirit the Saudis out? How can it be they never found the airline short sellers?

      And on and on. Being exposed to that all day sounds like cult brainwashing tactics.

      Having a government that is not above doing (or fantasizing about doing) many of the things it is accused of doesn’t help.

    • albatross11 says:

      As best I can tell:

      a. Respectable media organs get major things wrong in their stories all the time.

      b. When there are intense culture-war or activism or political pressures, they are even more likely to get things wrong via activism-induced blind spots.

      c. Once a narrative is established for some story, it’s *really hard* for it to change–this seems like a pathology of pretty much all media organs. By the 4th or 5th story, even pretty obviously contradictory facts will often not change the narrative. NPR will probably omit them; the NYT will stick them in the last paragraph of the story.

      d. There’s a kind of momentum here, so that eventually, anyone calling the narrative into question is quickly dismissed as a nut or some kind of bad actor.

      The Trayvon Martin and Michael Brown shootings are both good examples of this. So are the Duke Lacrosse Team rape hoax and the VA Tech rape hoax. So was the Planned Parenthood scandal about selling fetal tissue, and the Shirley Sherrod scandal. And for every case that falls apart like this[1], there must be dozens more where nobody looks closely into the story, and so the widely believed story is just massively and unfixably wrong. To the extent these are random errors, they probably more-or-less cancel out and leave us a fuzzier but still broadly sensible world picture; to the extent these errors are ideologically motivated or tend in a particular direction for narrative reasons (sensationalist, clear good and bad guys), we end up with a flawed picture of the world by reading them.

      The best way I know to proceed is to keep in mind that the real story may be wildly different from what got reported, to read with a fair bit of skepticism, and to maintain some epistemic humility about whether I really know the details of something I’ve only read about in the newspapers. And of course, to look for better sources of information where I can.

      [1] Often leaving most of the public believing the original narrative even when it turned out to be completely wrong.

  33. DeservingPorcupine says:

    It’s amazing how much trouble we’d save if we could just tell people who were offended by things to simply use the very convenient blocking/content-pruning tools that every major social media site provides and then STFU.

    Am I being too flippant here? I must be missing a gene or something, but I just fundamentally can’t understand why anybody would ever ask, say, Twitter to ban something I don’t like when I can simply “ban” it for myself. The trend seems to be the exact opposite of what you’d expect: the more individually curatable media consumption becomes, the louder the calls to curate stuff for everyone.

    • March says:

      The worst I’ve ever seen on FB are the sensational-type news photos (dead people, wounded kids, maimed animals) and off-putting, completely mistargeted ads. And weird conspiracy theories. Those I’m happy to curate myself, even though on bad days they definitely get to me.

      If the odds were good that on any given day I’d bump into child porn or snuff movies or animal-cruelty-for-entertainment, I’d just ditch the platform altogether.

      I’m pretty sure FB has a preference for which of those scenarios ends up happening. And I’m fairly liberal and would not care if I saw women’s breasts in sexy pics or in breastfeeding pics. But the anti-nipple-brigade are still valuable users.

    • albatross11 says:

      There is a huge thread of argument about whether some content is harmful to me, even if I don’t have to see it. That might be content that spreads wrong ideas, or encourages discrimination, or whatever.

      There’s a second huge thread about whether that’s good enough reason to ban some kinds of content from the internet formally (via hate speech laws) or informally (via boycotts, Twitter mobs, harrassment of moderators, etc.).

    • Hyzenthlay says:

      If we were talking about just “opinions that offend people” I’d agree. But a lot of the stuff they’re banning is pretty graphic and not properly flagged. I mean, they talk about kiddie porn, snuff, etc.

      If I’m unexpectedly exposed to a video of a dead, maggot-covered fetus, even if I quickly close the window, I’ve got that image in my head and I’m going to feel grossed out for a while. I can still try to screen for dead, maggot-covered fetuses in the future, but I’d rather not run into one while I’m randomly browsing.

      Granted, I don’t actually use Facebook so I don’t understand the mechanics that well. But I’ve encountered enough freaky images just Google-searching for innocuous terms that I don’t assume it’s easy to avoid this stuff.

      • DeservingPorcupine says:

        Given how FB is designed to show you things you want to see (so you can buy things you want), I find it extremely unlikely that a person who didn’t want to see such gross things would end up seeing them very often. I mean, why would your Aunt Maggie be posting such things? Or why would you “like” a dead-maggot-baby page, or anything like it? I just don’t think it’s a reasonable worry.

    • blacktrance says:

      Part of the problem is journalists/hate mobs making noise about that content existing at all (the dynamic Scott described in “RIP Culture War”) – Facebook doesn’t want to be tarred by association with porn or hate groups, even if its users can easily avoid that content. Another part is that advertisers and payment processors are skittish about appearing next to that stuff, and Facebook definitely wants to keep them around.

  34. JohnBuridan says:

    I find it totally unsurprising that these people become a little paranoid or conspiracy-theory-laden. I would guess that about 20% of people who are exposed to extensive conspiracy theory videos come to believe at least one theory. I pull this number out by considering the number of people I know who were extensively exposed to conspiracies 10 years ago and still believe in them today.

    There is little doubt in my mind that conspiracy theories are spreading very quickly. The number of people who believe the moon landing happened continues to decrease, and I’ve been watching this phenomenon spread for years. https://earthsky.org/space/apollo-and-the-moon-landing-hoax

    I keep being surprised when a university business professor makes a passing reference to the evil international banking associations, or a copyright lawyer talks about scientific collusion to hype global warming, or a school administrator mentions in a meeting that chemicals in our food are turning more people gay.

    High school kids are especially prone to believe some weird stuff. Pyramids in the Antarctic, no moon landing, and cryptozoology are all things I have personally heard from kids this year.

  35. Hyzenthlay says:

    Normal people who are exposed to conspiracy theories – without any social connection to the person spouting them, or any pre-existing psychological vulnerabilities that make them seek the conspiracy theories out – end up believing them or at least suspecting. This surprises me a little…Or is this whole phenomenon just an artifact of every large workplace (the article says “hundreds” of people work at Cognizant) having one or two conspiracy buffs, and in this case the reporter hunted them down because it made a better story?

    Probably that last thing. Even if only 5% of the people who are exposed to conspiracy theories end up believing them, those 5% are the ones who’ll get mentioned. And a dehumanizing environment probably primes people to believe in more pessimistic ideas as well. It’s not all that surprising to me that if people are trapped in a depressing, totalitarian office environment all day and their job is to basically read weird, paranoid stuff about how society is lying to them, a few of them will end up believing the weird, paranoid stuff.

  36. benjdenny says:

    I applied for a job that definitely wasn’t this job and is fictional at some point, actually – there weren’t a lot of high-paying secretary jobs available (still aren’t) and I was a little desperate.

    The interview process was 95% them showing you disturbing images and videos, extreme animal violence, gore, pornography of the worst sort that isn’t illegal, that sort of thing. The interviewer watches your face the entire time to see if you can fake being undisturbed well enough to make not enough money to feed yourself. Luckily, I was apparently not good at this.

  37. MartMart says:

    Last, I find this article interesting because it presents a pessimistic view of information spread. Normal people who are exposed to conspiracy theories – without any social connection to the person spouting them, or any pre-existing psychological vulnerabilities that make them seek the conspiracy theories out – end up believing them or at least suspecting. This surprises me a little. If it’s true, how come more people haven’t been infected? How come Facebook moderators don’t believe the debunking of the conspiracy theories instead?

    Is it really surprising? Hasn’t there been an influx of flat earthers/anti-vaxxers and other similarly dedicated groups?

    Conspiracy theories all have the basic appeal that they make the believer special. The rest of the sheeple go thru a boring world, but all you have to do is believe in lizard people, and just like that you get to wake up a brave hero fighting evil every morning.

    There should be (and almost certainly is) something providing people with some kind of immunity to this, but I’m not entirely sure what it is. It doesn’t seem to be just intelligence, or perhaps intelligence helps meme immunity the same way that being physically fit helps with disease immunity. Yes, you’re better off, but if a plague breaks out you aren’t safe either.
    Smart people become infected by weird ideas too, and then those ideas hijack that intelligence to spread even better.

    I think the whole problem can be greatly reduced by getting rid of the share button. Perhaps also changing the comment input field so that the user may not paste into it. If you want to say something, you have to go thru the trouble of typing it.
    It’s content-neutral, but it drastically reduces the rate at which memes can spread, should lower the rate of meme mutation, and should leave fewer people gravitating to the fringes.
    It does sound harmful to user engagement, so social media companies will likely view it as a bad thing, but I think it would be a great boon for society.
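
    A minimal browser-side sketch of the no-pasting idea (the element id “comment-box” and the snippet itself are hypothetical illustrations, not any real platform’s code):

    ```typescript
    // Block pasting into a hypothetical comment box so users have to type
    // what they post – the friction described above.
    const box = document.getElementById("comment-box") as HTMLTextAreaElement | null;

    box?.addEventListener("paste", (event: ClipboardEvent) => {
      event.preventDefault(); // cancel the paste; typed input still works
    });
    ```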

  38. Prussian says:

    “When you stare into the abyss, the abyss also stares into you” doesn’t even begin to cover those mods gradually succumbing to insanity.

    Just to be on the safe side, every time someone shares an SSC link, report it as violating the Facebook terms of service. We’ll make rationalists out of these people yet!

    Mass spamming this piece into Facebook’s moderation system sounds like a perfect job for 4chan. To work, people!

    If I sound a little bitter about this, it’s because I spent four years working at a psychiatric hospital, helping create the most dehumanizing and totalitarian environment possible. It wasn’t a lot of fun. But you could trace every single rule to somebody’s lawsuit or investigative report, and to some judge or jury or news-reading public that decided it was outrageous that a psychiatric hospital hadn’t had a procedure in place to prevent whatever occurred from occurring.

    We have met Moloch, and He is us.

    …just the sort of thing to start my day with. Thanks, Scott 🙂

  39. BBA says:

    I retain a bit of the mindset of the old, decentralized, volunteer-based internet, and it’s still a little weird to me that “moderator” is a paying job. A thankless, dehumanizing, and low-paying job, that much I get.

    So here’s how it is: the unmoderated spaces are hellscapes. A space small enough for “common sense” moderation is too small to be profitable. The large, profitable spaces end up with the problems described in this post.

    Trying to come up with a set of neutral rules that can be impartially enforced is folly. Take the fact that certain prominent figures have repeatedly violated Twitter’s terms of service, but have not lost their accounts because Twitter deems them “newsworthy.” Recently the Supreme Leader of Iran tweeted that the fatwa against Salman Rushdie was still in effect. This was reported as a death threat, and the Ayatollah’s account was suspended, despite the tweet certainly meeting the newsworthiness exception. Or consider how the neutral rules of the CW thread led to its domination by people who write multi-paragraph posts that calmly and logically argue for the inherent genetic superiority of the white race, and the exclusion of those who passionately dissent from that view for being too passionate. These perverse scenarios are the norm now.

    We’ve lost the internet of the ’90s and ’00s and we’ll never get it back. It wasn’t that great, but it’s so much better than what we have now. Hail Moloch.

    • WashedOut says:

      Or consider how the neutral rules of the CW thread led to its domination by people who write multi-paragraph posts that calmly and logically argue for the inherent genetic superiority of the white race, and the exclusion of those who passionately dissent from that view for being too passionate. These perverse scenarios are the norm now.

      Consider that moderation applies to standards of discourse as well as content, especially in the case of SSC. In your example, “calmly and logically” is a desirable and encouraged manner of conduct on this forum. If by “passionate dissent” you are referring to some of the snide ad-hominem bickering that was happening, then it is to be expected that those people would be moderated even if their viewpoint was sufficiently PC.

      If you want to find out what “perverse scenarios are the norm now”, look no further than the arbitrary and inconsistent enforcement of Terms of Use by Twitter and YouTube. SSC norms are a distant galaxy of an outlier by comparison.

    • 10240 says:

      One option would be to delegate moderation to smaller communities on a platform. Like reddit, where most of the moderation is done by the subreddit moderators (and sitewide moderation could be reduced even further, to only removing illegal content). Then the platform itself is large enough, but users only follow specific communities that are moderated at a lower level (so they are not hellscapes), on a volunteer basis.

  40. JenniferRM says:

    Normal people who are exposed to conspiracy theories – without any social connection to the person spouting them, or any pre-existing psychological vulnerabilities that make them seek the conspiracy theories out – end up believing them or at least suspecting. This surprises me a little. If it’s true, how come more people haven’t been infected? How come Facebook moderators don’t believe the debunking of the conspiracy theories instead? Is it just that nobody ever reports those for mod review? Or is this whole phenomenon just an artifact of every large workplace (the article says “hundreds” of people work at Cognizant) having one or two conspiracy buffs, and in this case the reporter hunted them down because it made a better story?

    Cognizant is vastly, vastly bigger than hundreds of employees. Checking Wikipedia for the latest stats, I see that it has over a quarter million employees. (Accenture is a similar company, and has roughly half a million employees.)

    I would not have recognized Cognizant, having never worked for them, except that I spent a long time analyzing giant databases of resume data.

    Once I got my hands dirty there, I found a bidirectional flow between IBM and Cognizant that was possibly the SINGLE BEST attested flow between any two white-collar companies (where people work at one, then the other, in numbers that are essentially impossible to explain by chance). My guess is that they outsourced to each other and/or were outsourced to for some of the same tasks, and perhaps good low-level workers switched from Cognizant to IBM for more security and opportunities, while managers often switched from IBM to Cognizant to escape IBM’s brutal “up or out” management culling system. My leading hypothesis for the main cause of the SPECIFIC link between THESE companies is that both IBM and Cognizant seem to make most of their money in essentially the same way, doing “technical services consulting” work.

    Basically, any time an executive (with an MBA instead of a CS degree?) has a serious budget and a serious technical problem, and needs to make something happen but doesn’t want to handle the task directly or expensively, they get bids from several “technical services companies” they can outsource the non-sexy job to. Apple, Facebook, Google, Amazon, and many, many other companies all do outsourcing like this, and generally have more of these “non-employee employees” than they have “real” employees.

    The whole thing is somewhat similar to the “adjunctification of the professoriat”, except of course that university managers have a centuries-long tradition of granting special respect to the technicians who teach there (so the loss of negotiating power by technically skilled teachers has a dramatic contrast object: compare modern university teachers to the very prestigious and secure tenured faculty of 100 years ago).

    Computer programming and the like hasn’t existed for more than maybe half a century, and there has never been any such thing as tenure or contractual respect for programmers that could function as a historical contrast object…

    For every full-time programmer that a big tech company has, I’d guess they have between one and three “technical services contractors”. Sometimes they will literally be shoulder to shoulder at neighboring desks (though this is more and more frowned upon), other times on the other side of the planet, coordinating via email and video calls. Often they are on different floors, with different security badges. Often meetings will be held where the contractors can’t hear what was said and have to pick it up via gossip from the “real employees”. There are various little dynamics that are pretty fucked up, and this comes up especially often in ML, where data labelers don’t need to be able to code to add significant value :-/

    (It used to not be quite so ignominious, but in Vizcaino v. Microsoft it was found that if you only have legal distinctions between people (rather than status distinctions and different reporting hierarchies and workspaces and so on) then the permatemps can claim they didn’t understand that they weren’t employees, and then they can sue for the difference in compensation between them and “real” employees.)

    Basically, the thing that surprised me about the Verge article was simply that they had people from the US doing the job. Most of the time if such work CAN be outsourced to India, it WILL be, and a huge number of Cognizant’s employees are, in fact, actually in the developing world. One hypothesis I have (P=15%) for why the Verge article is getting all this oxygen is that Facebook doesn’t really mind it existing and is helping push it, partly because this particular team (which apparently is in America) experiences dilemmas and work conditions that American readers will, by default, be sympathetic to. By not saying otherwise, the article implicitly suggests that all such teams are composed of sympathetic American residents.

    (By not saying otherwise, the article also suggests that these employees are empowered to decide specific cases (rather than to decide classes of similar cases by telling an ML model how that part of the space of possible moderation cases should work).)

    The article basically presents the decision-makers here as empowered but also very stressed out by their difficult job of telling Americans what to think about and how to think about it.

    As to the epistemology of the process: what the article called “Accuracy” sounds like it was actually “Inter-rater Reliability”, and for non-trivial semantic classifications by normal humans who have to look up the answers in a handbook for the first month of doing the task, 40% is lazy, 65% is decent, 85% is impressive, and 95% is both desirable and basically impossible unless the ontology is small, stable, and cuts reality at the joints.

    “Which of 12 clearly distinct barnyard mammals is being discussed?” is easy. Normal humans could probably get 95% on this task without cheating.

    If the question is “Which definitely existing psychiatric classification from the DSM III does the person who produced this text sample likely have?” the rates are going to be much much lower because the construct validity of any nosology of thought will be shit and key diagnostic features might be obscured or unavailable.

    To get 95% inter-rater reliability on an ethical content classification task like this, I would guesstimate that 50% of the cases would require that a “per-question conspiracy” be convened so that concordance goes up for reasons other than what is in the instruction manual… The people who don’t think they are participating in a conspiracy there are probably just the leaders of the conspiracies that other people are consulting with. On this reading, what they would really be hiring and firing on the basis of, when they cite the “accuracy” measure in a firing, is how well people play along with the fiction that the entire labeling process is actually detecting a real fact-of-the-matter about content acceptability rather than engaging in a collective ass-covering exercise.
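
    To make the “accuracy” vs. inter-rater reliability distinction concrete, here is a minimal sketch – invented labels and numbers, nothing from the article or from Cognizant’s actual process – of raw percent agreement versus a chance-corrected measure like Cohen’s kappa. The point is that a high raw “accuracy” score can mean very little when one label dominates:

    ```typescript
    // Two raters labeling the same posts; the data below is made up for illustration.
    type Label = "allow" | "remove";

    function percentAgreement(a: Label[], b: Label[]): number {
      const matches = a.filter((label, i) => label === b[i]).length;
      return matches / a.length;
    }

    // Cohen's kappa: agreement corrected for how often the raters would
    // agree by chance, given how often each of them uses each label.
    function cohensKappa(a: Label[], b: Label[]): number {
      const n = a.length;
      const observed = percentAgreement(a, b);
      let expected = 0;
      for (const label of ["allow", "remove"] as Label[]) {
        const pa = a.filter((x) => x === label).length / n;
        const pb = b.filter((x) => x === label).length / n;
        expected += pa * pb;
      }
      return (observed - expected) / (1 - expected);
    }

    // A moderator and an auditor who "agree" 90% of the time, mostly because
    // nearly everything gets labeled "allow" anyway.
    const moderator: Label[] = ["allow", "allow", "allow", "allow", "allow",
                                "allow", "allow", "allow", "remove", "allow"];
    const auditor: Label[]   = ["allow", "allow", "allow", "allow", "allow",
                                "allow", "allow", "allow", "allow", "allow"];

    console.log(percentAgreement(moderator, auditor)); // 0.9
    console.log(cohensKappa(moderator, auditor));      // 0 – no better than chance
    ```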

    As to the epistemology of the people: after exposure to a lot of crazy stuff in an environment like this, I would expect their belief sets to begin to diverge from normal. People are not good at original thinking, and make a lot of errors when they attempt it; when they make a lot of attempts, get intense feedback, and that feedback comes from a somewhat insane bureaucratic process… basically, I would expect them to end up a bit like pigeons in an experimental Skinner box?

    (I’ve heard rumors that people who helped build “expert systems” back in the 1980s often ended up, after round-tripping their living theories through technical systems that could detect inconsistencies, changing from verbally confused but pragmatically functional humans into verbally consistent incompetents.)

    Thus, it isn’t “Cognizant employees” in general who are becoming conspiracy-theory tolerant; it would instead be the specific Cognizant employees at this location, working on this project for this company, in a bubble whose existence is explained by many layers of bullshit and whose job is to detect certain kinds of bullshit.

    I mean… inside that bubble they are almost literally engaged in the maintenance of a censorious conspiracy themselves! They cause to be censored everything the (Top Secret?) data labeling handbook says should be censored, whether or not the handbook is internally coherent (which, Fermi estimate, it isn’t), thereby literally cybernetically controlling the discourse of maybe 20% of the English-speaking world, all while facing intense employer surveillance and “social pressure to lie to your family about your job” that would have been impressive for a CIA agent in the literal Cold War to seriously play along with. At least the CIA agents in the 1950s mostly knew they were CIA agents, and got pensions in exchange for loyalty!

    Seriously, is it any wonder that people in a bubble like that have begun to suspect that other conspiracies exist? 😛

  41. abystander says:

    An arrangement where the moderators do this work a couple of days a week and a different call-center job the other three days would be healthier and might reduce turnover.

  42. component.elements says:

    It wouldn’t surprise me even if a good percentage of them are embracing the conspiracies. You spend enough time around crazy, especially in high-pressure environments, and it tends to get in your head.

    And they have to engage with the crazy well enough to understand whether it breaks each of however many rules, so they have ample opportunity to be persuaded, but they don’t necessarily have the time or resources or inclination to check everything out or understand all the weird things that make implausible conspiracies seem plausible.

  43. Loriot says:

    This reminds me a lot of the “organizational scar-tissue” concept, where rules exist to prevent the recurrence of past issues at the organization. Except that in this case, the issues are externally imposed.

  44. benjdenny says:

    Two thoughts:

    1. Nobody should really be all that impressed that a flat-earther and a few other kinds of conspiracy theorists can be found in a staff of a hundred to several hundred low-paid workers stuck in a bad job in Arizona. It would be much weirder if there weren’t several.

    2. I don’t believe a guy who says “I don’t believe 9/11 was a terrorist attack, but I know this is weird and find it distressing that I believe this untrue thing that I brought up as an example of clearly untrue things I believe”. I don’t find this to be a particularly believable thing.

    • MostlyCredibleHulk says:

      If in some way I came to believe 9/11 was a government conspiracy, I would probably be depressed by this – not because it’s a “clearly untrue” thing that I believe in, but because of what it implies about the world I seemingly live in, and how mistaken I was previously about it. A lot of our specific beliefs rest on general beliefs about “how things work” – and if those change, it may be quite an overwhelming thing.

  45. wanda_tinasky says:

    Now I feel like I have a moral imperative to go through Facebook and ‘report’ posts about kittens playing with dogs or wholesome family vacations. Every non-horrible story I report will get seen by a moderator and possibly be the straw that keeps him sane that day.

    Also, does this make Facebook the real-life version of Omelas? Is this why I’ve walked away from it?

    • No. The people described working for FB are presumably better off than if FB didn’t hire them, since otherwise they wouldn’t take the job.

      That was not the case for the child in Omelas.

      • MostlyCredibleHulk says:

        “Better” is a tricky term here. For example, if I voluntarily smoke and eat unhealthy food, am I better off? Obviously, I am doing what I want, but it also ruins my health. Am I “better”?
        If I am offered a job requiring capabilities I thought I had but didn’t, and I end up hurting myself – am I better off? Nobody forced me to do anything, everything is completely voluntary – but does it always make me “better off”? That sounds questionable.

        • Hence “presumably” in my statement. People sometimes make mistakes. But the fact that someone chooses something is good evidence that it makes him better off–much better evidence than a stranger’s opinion that it makes him worse off.

          In this case, the only evidence that it makes them worse off is that someone ignorant of their lives who has not experienced what they experience and is probably a good deal richer than they are thinks their job looks terrible.

          And it isn’t even that strong evidence, since the writer has an incentive to make it sound terrible in order to make his story more interesting.

      • wanda_tinasky says:

        In a literal sense, of course you’re right. But surely people like the story because they feel that it’s a metaphor for something. This certainly seems like a good candidate for what it’s a metaphor for.

        • I don’t think so. The fact that it’s a helpless child who has had no choice at all matters. You wouldn’t get the same punch if it was a story about a mission where you knew some of the volunteers would end up dying horribly—and they knew it too.

          • Mr. Doolittle says:

            There’s a train of thought that thinks of wage laborers in certain positions as “wage slaves” who have no choice but to work some kind of soul crushing position (I’m not a proponent of such a line of thought, but I can understand the reasoning). Even with that it’s not quite the same as the Omelas story, but as far as metaphors go, it seems pretty close.

            It does seem that the needs of Facebook indicate that someone will be doing this work. Outsourcing it to a 3rd world country seems plausible, and makes the Omelas comparison more, rather than less, obvious.

            Agency is an important distinction here, but if you asked the child of Omelas the question of whether they would doom the rest of society in order to gain their own freedom, I’m not sure that the child would change their own fate. Giving the child agency (very dubious agency with that level of guilt attached) doesn’t really correct the moral dilemma.

          • wanda_tinasky says:

            Yes. Hence, ‘metaphor’. That’s how fiction works.

          • There’s a train of thought that thinks of wage laborers in certain positions as “wage slaves” who have no choice but to work some kind of soul crushing position (I’m not a proponent of such a line of thought, but I can understand the reasoning). Even with that it’s not quite the same as the Omelas story, but as far as metaphors go, it seems pretty close.

            There is a large difference between “in order for your society to work, someone must have a horrible life” and “in order for your society to work, someone who has a horrible life, and will anyway, has to be part of it.”

            The view you describe makes FB the second case – if it didn’t hire those people, they would have at least as bad a job somewhere else, which is why they agree to work for FB.

      • Protagoras says:

        It is not clear what would be the fate of the child in Omelas if the society were not set up as it is. It is implied that the alternative is being a society like ours, in which, to be sure, the overwhelming majority of people are better off than the child in Omelas, but there are some who are worse off than the child in Omelas. It’s thus possible, if not likely, that the child would have ended up being one of those worse off if they’d been in something more like our society. Plus what Wanda said about Omelas being a metaphor for a lot of real world trade-offs, which of course do not have exactly the same features but merely some interesting similarities.

      • J Mann says:

        That’s definitely a difference, but even if I learned that the sufferer in Omelas had volunteered for the position and preferred it to alternatives, I still might feel ethically compelled to help him or her if possible.

        • In the story, what you are doing is not helping the sufferer but dissociating yourself from the society whose functioning requires him to suffer.

  46. Clutzy says:

    My Facebook reporting shows that there must be a significant backlog, because only 1 of the 6 reports I have ever submitted has been reviewed, according to their own system.

  47. Pete Michaud says:

    This resonates with me. For various reasons I’ve been in the hotseat to make controversial decisions in public for a lot of my life, and what most often happens is that some faction of people will heckle me by either pointing out a downside to the final decision or a procedural violation they think I made. Invariably I’m like “…yes, I agree, and I thought about that downside. Did you think about the downsides of alternatives and notice that they are worse?”

    I’d like to think that just having the habit of thinking through those sides before heckling would solve the problem, but I actually think it’s more complex. First, you have to have some grounded sense of how tricky and political these things can be, and what realistic actions can actually be taken. Then you have to be able to empathize with the decision-maker in the abstract, because when someone is making a decision there is most often information they have that you do not and cannot have.

    I think most people lack both the experience of making tricky decisions in public, and the ability to empathize even when they’re not convinced on their own terms. Thus heckling.

  48. MostlyCredibleHulk says:

    If I sound a little bitter about this, it’s because I spent four years working at a psychiatric hospital,

    It is interesting that the comparison is with a facility that caters to people who are mentally unhealthy, and yet the similarities are indeed striking. Is Facebook – or the whole internet? – akin to a psych ward, to be managed in the same way, with responsibility for the inmates resting largely not on themselves but on whoever is running the asylum?

  49. po8crg says:

    The great advantage that a blog has, when moderating comments, is that the blog has a point of view – that of the author. It’s perfectly OK for a blog to remove someone for being rude about the author, even when “being rude” is otherwise acceptable behaviour on the blog. If you disagree with the consensus view on a blog, then you are – rightly – expected to be especially polite and to accept that others will not be polite to you.

    If you go to a left-blog as a right-winger, and they call you an idiot, and then ban you for calling them idiots, that’s normal. Exactly the same applies if you go to a right-blog as a left-winger, or to a vegan blog as a carnivore, or (etc).

    This is because it’s a community, and the community has standards. If you go against that, then that is already an abrasive thing to do, so you have to use very careful, polite language.

    But Facebook can’t do that. Most newspapers will struggle with justifying having a strong positive position. One option is reddit’s approach – divide into separate spaces, which are each a community with its own standards (and then reddit has the problem of deciding to ban whole communities occasionally, but that’s a high-level policy question, not a day-to-day moderation question). You can do that on a newspaper if you let individual columnists moderate their own comments sections (Ta-Nehisi Coates’ columns at The Atlantic had a famously successful and productive comments section, for example) but if your columnists disagree (as they do on good newspapers), then you’re going to get a brawl on general news articles.

    The only other alternative is to have a big and complex rulebook, to publish both the rules and the rulings, to have a formal appeals process, and to generally end up with something like the actual legal system.

  50. wanda_tinasky says:

    These folks should be treated at least as well as military enlisted personnel or medical folks

    Why? Security and health seem like MUCH more positive social externalities than slightly-less-annoying social media.

    • wanda_tinasky says:

      Right – strong negative effects. And zero evidence that self-enforced ‘quality’ has any bearing on those effects. But if you want to make the argument that social media is a social good – which I would oppose – then you don’t have to worry: the people most responsible for it (the engineers and product managers) are very well compensated.

      Common decency – or ‘being good’, or pious, or a good comrade, or other ill-defined subjective notions – has nothing to do with the functioning of labor markets. Such considerations bring nothing constructive to the discussion.

  51. Edward Scizorhands says:

    Facebook doesn’t think it’s mission critical. Facebook, I bet, resents that they are stuck with the job. That’s why they hired contractors to do it. You don’t outsource mission critical functions.

    You are probably right that they could easily be treated a lot better. They are a tiny part of the expense system.

  52. 10240 says:

    The working conditions and the salary are two different issues. The working conditions (at least the “no paper etc.” parts) are for ass-covering, rather than money-saving.

    As for the salaries, if unions or anything else forced companies to pay higher than market-clearing salaries for a particular kind of work, then some people who would like to get this kind of work even at the current wages (in the sense that they would be better off than in any other option available to them) couldn’t get the job.

    You talk as though there were something especially evil about Facebook preferring money to flow to it rather than to anyone else. Who doesn’t?

  53. benf says:

    There is very little wrong with Facebook that wouldn’t be solved by moving to a subscription model. Facebook doesn’t fix itself because bilking advertisers with fake impressions from fake profiles is stupidly profitable. It’s not sustainable but Facebook is not run by strategic thinkers.

  54. thetitaniumdragon says:

    They’re hiring people with no skills at the bottom of the SES ladder ($28.8k/year is a pretty paltry wage). These people are probably mostly not very well-educated and are probably at the bottom of the general intellectual/psychological spectrum.

    I wouldn’t be surprised if some people cracked under that stress, and let’s face it, a lot of this stuff is directed towards the bottom of society to begin with.

    I would wager they’d encounter far fewer problems if they had applicants undergo a battery of psychological tests and hired college graduates, but moderation “seems” like a low-end job.