[Epistemic status: fiction]
Thanks for letting me put my story on your blog. Mainstream media is crap and no one would have believed me anyway.
This starts in September 2017. I was working for a small online ad startup. You know the ads on Facebook and Twitter? We tell companies how to get them the most clicks. This startup – I won’t tell you the name – was going to add deep learning, because investors will throw money at anything that uses the words “deep learning”. We train a network to predict how many upvotes something will get on Reddit. Then we ask it how many likes different ads would get. Then we use whatever ad would get the most likes. This guy (who is not me) explains it better. Why Reddit? Because the upvotes and downvotes are simpler than all the different Facebook reacts, plus the subreddits allow demographic targeting, plus there’s an archive of 1.7 billion Reddit comments you can download for training data. We trained a network to predict upvotes of Reddit posts based on their titles.
Any predictive network doubles as a generative network. If you teach a neural net to recognize dogs, you can run it in reverse to get dog pictures. If you train a network to predict Reddit upvotes, you can run it in reverse to generate titles it predicts will be highly upvoted. We tried this and it was pretty funny. I don’t remember the exact wording, but for /r/politics it was something like “Donald Trump is no longer the president. All transgender people are the president.” For r/technology it was about Elon Musk saving Net Neutrality. You can also generate titles that will get maximum downvotes, but this is boring: it will just say things that sound like spam about penis pills.
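The "run it in reverse" step doesn't have to mean literal gradient inversion; the simplest version is search: generate candidate titles, score each with the predictor, and keep the one with the best score. Here's a minimal sketch of that select-by-predicted-score loop. The scoring function is a toy stand-in I made up for illustration (the real thing would be the trained upvote network, which obviously isn't reproduced here):

```python
def predicted_upvotes(title: str) -> float:
    # Stand-in for the trained predictor. A real implementation would
    # run the upvote-prediction network here; this toy version just
    # rewards a hypothetical hot-button keyword plus title length.
    return title.lower().count("president") * 100 + len(title)

def best_title(candidates, score=predicted_upvotes, minimize=False):
    """Pick the candidate the predictor scores highest, or lowest
    if minimize=True (the 'maximum downvotes' mode from the story)."""
    key = (lambda t: -score(t)) if minimize else score
    return max(candidates, key=key)
```

The same loop works for any objective the network can score, which is why swapping "upvotes" for "controversy" later in the story is a one-line change.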
Reddit has a feature where you can sort posts by controversial. You can see the algorithm here, but tl;dr it multiplies the magnitude of total votes (upvotes + downvotes) by the balance (upvote:downvote ratio or vice versa, whichever is smaller) to highlight posts that provoke disagreement. Controversy sells, so we trained our network to predict this too. The project went to this new-ish Indian woman with a long name who went by Shiri, and she couldn’t get it to work, so our boss Brad sent me to help. Shiri had tested the network on the big 1.7-billion-comment archive, and it had produced controversial-sounding hypothetical scenarios about US politics. So far so good.
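For reference, the controversy score described above fits in a few lines. This is my own reconstruction from the description, not Reddit's actual code (their open-sourced version raises magnitude to the power of balance rather than multiplying, but the intuition, big and evenly split, is the same):

```python
def controversy(upvotes: int, downvotes: int) -> float:
    # Posts with votes on only one side provoke no disagreement.
    if upvotes <= 0 or downvotes <= 0:
        return 0.0
    magnitude = upvotes + downvotes            # total engagement
    balance = min(upvotes / downvotes,
                  downvotes / upvotes)         # 1.0 = perfect split
    return magnitude * balance
```

Note what this rewards: a post at 500 up / 5 down scores far lower than one at 50 up / 50 down, because disagreement, not popularity, is the target.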
The Japanese tested their bioweapons on Chinese prisoners. The Tuskegee Institute tested syphilis on African-Americans. We were either nicer or dumber than they were, because we tested Shiri’s Scissor on ourselves. We had a private internal subreddit where we discussed company business, because Brad wanted all of us to get familiar with the platform. Shiri’s problem was that she’d been testing the controversy-network on our subreddit, and it would just spit out vacuously true or vacuously false statements. No controversy, no room for disagreement. The statement we were looking at that day was about a design choice in our code. I won’t tell you the specifics, but imagine you took every bad and wrong design decision in the world, hard-coded them in the ugliest possible way, and then handed it to the end user with a big middle finger. Shiri’s Scissor spit out, as maximally controversial, the statement that we should design our product that way. We’d spent ten minutes arguing about exactly where the bug was, when Shiri said something about how she didn’t understand why the program was generating obviously true statements.
Shiri’s English wasn’t great, so I thought this was a communication problem. I corrected her. The program was spitting out obviously false statements. She stuck to her guns. I still thought she was confused. I walked her through the meanings of the English words “true” and “false”. She looked offended. I tried to confirm. She thought this abysmal programming decision, this plan of combining every bad design technique together and making it impossible to ever fix, was the right way to build our codebase? She said it was. Worse, she was confused I didn’t think so. She thought this was more or less what we were already doing; it wasn’t. She thought that moving away from this would take a total rewrite and make the code much worse.
At this point I was doubting my sanity, so we went next door to Blake and David, who were senior coders in our company and usually voices of reason. They were talking about their own problem, but I interrupted them and gave them the Scissor statement. Blake gave the reasonable response – why are you bothering me with this stupid wrong garbage? But David had the same confusion Shiri did and started arguing that the idea made total sense. The four of us started fighting. I still was sure Shiri and David just misunderstood the question, even though David was a native English-speaker and the question was crystal-clear. Meanwhile David was feeling more and more condescended to, kept protesting he wasn’t misunderstanding anything, that Blake and I were just crappy programmers who couldn’t make the most basic architecture decisions. He kept insisting the same thing Shiri had, that the Scissor statement had already been the plan and any attempt to go in a different direction would screw everything up. It got so bad that we decided to go to Brad for clarification.
Brad was our founder. Don’t trust the newspapers – not every tech entrepreneur is a greedy antisocial philistine. But everyone in advertising is. Brad definitely was. He was an abrasive amoral son of a bitch. But he was good at charming investors, and he could code, which is more than some bosses. He looked pissed to have the whole coding team come into his office unannounced, but he heard us out.
David tried to explain the issue, but he misrepresented almost every part of it. I couldn’t believe he was lying just to look better to Brad. I cut him off. He told me not to interrupt him. Blake said if he wasn’t lying we wouldn’t have to interrupt to correct him, it degenerated from there. Somehow in the middle of all of this, Brad figured out what we were talking about and he cut us all off. “That’s the stupidest thing I ever heard.” He confirmed it wasn’t the original plan, it was contrary to the original plan, and it was contrary to every rule of good programming and good business. David and Shiri, who were bad losers, accused Blake and me of “poisoning” Brad. David said that of course Brad would side with us. Brad had liked us better from the beginning. We’d racked up cushy project after cushy project while he and Shiri had gotten the dregs. Brad told him he was a moron and should get back to work. He didn’t.
This part of the story ends at 8 PM with Brad firing David and Shiri for a combination of gross incompetence, gross insubordination, and being terrible human beings. With him giving a long speech on how he’d taken a chance on hiring David and Shiri, even though he knew from the beginning that they were unqualified charity cases, and at every turn they’d repaid his kindness with laziness and sabotage. With him calling them a drain on the company and implying they might be working for our competitors. With them calling him an abusive boss, saying the whole company was a scam to trick vulnerable employees into working themselves ragged for Brad’s personal enrichment, and with them accusing us two – me and Blake – of being in on it with Brad.
That was 8 PM. We’d been standing in Brad’s office fighting for five hours. At 8:01, after David and Shiri had stormed out, we all looked at each other and thought – holy shit, the controversy filter works.
I want to repeat that. At no time in our five hours of arguing did this occur to us. We were too focused on the issue at hand, the Scissor statement itself. We didn’t have the perspective to step back and think about how all this controversy came from a statement designed to be maximally controversial. But at 8:01, when the argument was over and we had won, we stepped back and thought – holy shit.
We were too tired to think much about it that evening, but the next day we – Brad and the two remaining members of the coding team – had a meeting. We talked about what we had. Blake gave it its name: Shiri’s Scissor. In some dead language, scissor shares a root with schism. A scissor is a schism-er, a schism-creator. And that was what we had. We were going to pivot from online advertising to superweapons. We would call the Pentagon. Tell them we had a program that could make people hate each other. Was this ethical? We were in online ads; we would sell our grandmothers to Somali slavers if we thought it would get us clicks. That horse had left the barn a long time ago.
It’s hard to just call up the Pentagon and tell them you have a superweapon. Even in Silicon Valley, they don’t believe you right away. But Brad called in favors from his friends, and about a week after David and Shiri got fired, we had a colonel from DARPA standing in the meeting room, asking what the hell we thought was so important.
Now we had a problem. We couldn’t show the Colonel the Scissor statement that had gotten David and Shiri fired. He wasn’t in our company; he wasn’t even in ad tech; it would seem boring to him. We didn’t want to generate a new Scissor statement for the Pentagon. Even Brad could figure out that having the US military descend into civil war would be bad for clicks. Finally we settled on a plan. We explained the concept of Reddit to the Colonel. And then we asked him which community he wanted us to tear apart as a demonstration.
He thought for a second, then said “Mozambique”.
We had underestimated the culture gap here. When we asked the Colonel to choose a community to be a Scissor victim, we were expecting “tabletop wargamers” or “My Little Pony fans”. But this was not how colonels at DARPA thought about the world. He said “Mozambique”. I started explaining to him that this wasn’t really how Reddit worked, it needed to be a group with its own subreddit. Brad interrupted me, said that Mozambique had a subreddit.
I could see the wheels turning in Brad’s eyes. One wheel was saying “this guy is already skeptical, if we look weak in front of him he’ll just write us off completely”. The other wheel was calculating how many clicks Mozambique produced. Mene mene tekel upharsin. “Yeah,” he said. “Their subreddit is fine. We can do Mozambique.”
The Colonel gave us his business card and left. Blake and I were stuck running Shiri’s Scissor on the Mozambique subreddit. I know, ethics, but like I said, online ads business, horse, barn door. The only decency we allowed ourselves was to choose the network’s tenth pick – we didn’t need to destroy everything, just give a demonstration. We got a statement accusing the Prime Minister of disrespecting Islam in a certain way – again, I won’t be specific. In the absence of any better method, we PMed the admins of the Mozambique subreddit asking them what they thought. I don’t remember what we said, something about being an American political science student learning about Mozambique culture, and could they ask some friends what would happen if the Prime Minister did that specific thing, and then report back to us?
We spent most of a week working on our project to undermine Mozambique. Then we got the news. David and Shiri were suing the company for unfair dismissal and racial discrimination. Brad and Blake and I were white. Shiri was an Indian woman, and David was Jewish. The case should have been laughed out of court – who ever heard of an anti-Semitic Silicon Valley startup? – except that all the documentation showed there was no reason to fire David and Shiri. Their work looked good on paper. They’d always gotten good performance reviews. The company was doing fine – it had even placed ads for more programmers a few weeks before.
David and Shiri knew why they’d been fired. But it didn’t matter to them. They were so blinded with hatred for our company, so caught in the grip of the Scissor statement, that they would tell any lie necessary to destroy it. We were caught in a bind. We couldn’t admit the existence of Shiri’s Scissor, because we were trying to sell it to the Pentagon as a secret weapon, and also, publicly admitting to trying to destroy Mozambique would have been bad PR. But the court was demanding records about what our company had been doing just before and just after the dismissal. A real defense contractor could probably have gotten the Pentagon to write a letter saying our research was classified. But the Pentagon still didn’t believe us. The Colonel was humoring us, nothing more. We were stuck.
I don’t know how we would have dealt with the legal problems, because what actually happened was Brad went to David’s house and tried to beat him up. You’re going to think this was crazy, but you have to understand that David had always been annoying to work with, and that during the argument in Brad’s office he had crossed so many lines that, if ever there was a person who deserved physical violence, it was him. Suing the company was just the last straw. I’m not going to judge Brad’s actions after he’d spent months cleaning up after David’s messes, paying him good money, and then David betrayed him at the end. But anyhow, that was it for our company. Brad got arrested. There was nobody else to pay the bills and keep the lights on. Blake and I were coders and had no idea how to run the business side of things. We handed in our resignations – not literally, Brad was in jail – and that was the end of Name Withheld Online Ad Company, Inc.
We got off easy. That’s the takeaway I want to give here. We were unreasonably overwhelmingly lucky. If Shiri and I had started out by arguing about one of the US statements, we could have destroyed the country. If a giant like Google had developed Shiri’s Scissor, it would have destroyed Google. If the Scissor statement we generated hadn’t just been about a very specific piece of advertising software – if it had been about the tech industry in general, or business in general – we could have destroyed the economy.
As it was, we just destroyed our company and maybe a few of our closest competitors. If you look up internal publications from the online advertising industry around fall 2017, you will find some really weird stuff. That story about the online ads CEO getting arrested for murder, child abuse, attacking a cop, and three or four other things, and then later it was all found to be false accusations related to some ill-explained mental disorder – that’s the tip of the iceberg. I don’t have a good explanation for exactly how the Scissor statement spread or why it didn’t spread further, but I bet if I looked into it too much, black helicopters would start hovering over my house. And that’s all I’m going to say about that.
As for me, I quit the whole industry. I picked up a job in a more established company using ML for voice recognition, and tried not to think about it too much. I still got angry whenever I thought about the software design issue the Scissor had brought up. Once I saw someone who looked like Shiri at a cafe and I went over intending to give her a piece of my mind. It wasn’t her, so I didn’t end up in jail with Brad. I checked the news from Mozambique every so often, and it was quiet for a few months, and then it wasn’t. I still don’t know if we had anything to do with that. Africa just has a lot of conflicts, and if you wait long enough, maybe something will happen. The colonel never tried to get in touch with me. I don’t think he ever took us seriously. Maybe he didn’t even check the news from Mozambique. Maybe he saw it and figured it was a coincidence. Maybe he tried calling our company, got a message saying the phone was out of service, and didn’t think it was worth pursuing. But as time went on and the conflict there didn’t get any worse, I hoped the Shiri’s Scissor part of my life was drawing to a close.
Then came the Kavanaugh hearings. Something about them gave me a sense of deja vu. The week of his testimony, I figured it out.
Shiri had told me that when she ran the Scissor on the site in general, she’d just gotten some appropriate controversial US politics scenarios. She had shown me two or three of them as examples. One of them had been very specifically about this situation. A Republican Supreme Court nominee accused of committing sexual assault as a teenager.
This made me freak out. Had somebody gotten hold of the Scissor and started using it on the US? Had that Pentagon colonel been paying more attention than he let on? But why would the Pentagon be trying to divide America? Had some enemy stolen it? I get the New York Times, so obviously Putin was my first thought here. But how would Putin get Shiri’s Scissor? Was I remembering wrong? I couldn’t get it out of my head. I hadn’t kept the list Shiri had given me, but I had enough of the Scissor codebase to rebuild the program over a few sleepless nights. Then I bought a big blob of compute from Amazon Web Services and threw it at the Reddit comment archive. It took three days and a five-digit sum of money, but I rebuilt the list Shiri must have had. Kavanaugh was in there, just as I remembered.
But so was Colin Kaepernick.
You’ve heard of him. He was the football player who refused to stand for the national anthem. If I already knew the Scissor predicted one controversy, why was I so shocked to learn it predicted another? Because Kaepernick started kneeling in 2016. We didn’t build the Scissor until 2017. Putin hadn’t gotten it from us. Someone had beaten us to it.
Of the Scissor’s predicted top hundred most controversial statements, Kavanaugh was #58 and Kaepernick was #42. #86 was the Ground Zero Mosque. #89 was that baker who wouldn’t make a cake for a gay wedding. The match isn’t perfect, but #99 vaguely looked like the Elian Gonzalez case from 2000. That’s five out of a hundred. Is that what would happen by chance? It’s a big country, and lots of things happen here, and if a Scissor statement came up in the normal course of events it would get magnified to the national stage. But some of these were too specific. If it was coincidence, I would expect many more near matches than perfect matches. I found only two. The pattern of Scissor statements looked more like someone had arranged them to be perfect fits.
The earliest perfect fit was the Ground Zero Mosque in 2009. Could Putin have had a Scissor-like program in 2009? I say no way. This will sound weird to you if you’re not in the industry. Why couldn’t a national government have been eight years ahead of an online advertising company? All I can say is: machine learning moves faster than that. Russia couldn’t hide a machine learning program that put it eight years ahead of the US. Even the Pentagon couldn’t hide a program that put it eight years ahead of industry. The NSA is thirty years ahead of industry in cryptography and everyone knows it.
But then who was generating Scissor statements in 2009? I have no idea. And you know what? I can’t bring myself to care.
If you just read a Scissor statement off a list, it’s harmless. It just seems like a trivially true or trivially false thing. It doesn’t activate until you start discussing it with somebody. At first you just think they’re an imbecile. Then they call you an imbecile, and you want to defend yourself. Crescit eundo. You notice all the little ways they’re lying to you and themselves and their audience every time they open their mouth to defend their imbecilic opinion. Then you notice how all the lies are connected, that in order to keep getting the little things like the Scissor statement wrong, they have to drag in everything else. Eventually even that doesn’t work, they’ve just got to make everybody hate you so that nobody will even listen to your argument no matter how obviously true it is. Finally, they don’t care about the Scissor statement anymore. They’ve just dug themselves so deep basing their whole existence around hating you and wanting you to fail that they can’t walk it back. You’ve got to prove them wrong, not because you care about the Scissor statement either, but because otherwise they’ll do anything to poison people against you, make it impossible for them to even understand the argument for why you deserve to exist. You know this is true. Your mind becomes a constant loop of arguments you can use to defend yourself, and rehearsals of arguments for why their attacks are cruel and unfair, and the one burning question: how can you thwart them? How can you convince people not to listen to them, before they find those people and exploit their biases and turn them against you? How can you combat the superficial arguments they’re deploying, before otherwise good people get convinced, so convinced their mind will be made up and they can never be unconvinced again? How can you keep yourself safe?
Shiri read two or three sample Scissor statements to me. She didn’t say if she agreed with them or not. I didn’t tell her if I agreed with them or not. They were harmless.
I don’t hear voices in a crazy way. But sometimes I talk to myself. Sometimes I do both halves of the conversation. Sometimes I imagine one of them is a different person. I had a tough breakup a year ago. Sometimes the other voice in my head is my ex-girlfriend’s voice. I know how she thinks and I always know what she would say about everything. So sometimes I hold conversations with her, even though she isn’t there, and we’ve barely talked since the breakup. I don’t know if this is weird. If it is, I’m weird.
And that was enough. For some reason, it was the third-highest-ranked Scissor statement that did it. None of the others, just that one. The totally hypothetical conversation with the version of my ex-girlfriend in my head about the third Scissor statement got me. Shiri’s Scissor was never really about other people anyway. Other people are just the trigger – and I use that word deliberately, in the trigger warning sense. Once you’re triggered, you never need to talk to anyone else again. Just the knowledge that those people are out there is enough.
I thought I’d be done with this story in a night. Instead it’s taken me two weeks, all the way up until Halloween – perfect night for a ghost story, right? I’ve been alternately drinking and smoking weed, trying to calm myself down enough to think about anything other than the third Scissor statement. No, that’s not right, definitely trying not to think about either of the first two Scissor statements, because if I think about them, I might start thinking about how some people disagree with them, and then I’m gone. Three times I’ve started to call my ex-girlfriend to ask her where she is, and if I ever go through with it and she answers me, I don’t know what I will do to her. But it isn’t just her. Fifty percent of the population disagrees with me on the third-highest-ranked Scissor statement. I don’t know who they are. I haven’t really appreciated that fact. Not really. I can’t imagine it being anyone I know. They’re too decent. But I can’t be sure it isn’t. So I drink.
I know I should be talking about how we all need to unite against whatever shadowy manipulators keep throwing Scissor statements at us. I want to talk about how we need to cultivate radical compassion and charity as the only defense against such abominations. I want to give an Obamaesque speech about how the ties that bring us together are stronger than the forces tearing us apart. But I can’t.
Remember what we did to Mozambique? How out of some vestigial sense of ethics, we released a low-potency Scissor statement? Arranged to give them a bad time without destroying the whole country all at once? That’s what our shadowy manipulators are doing to us. Low-potency statements. Enough to get us enraged. Not enough to start Armageddon.
But I read the whole list. And then, like an idiot, I thought about it. I thought about the third-highest-ranked Scissor statement in enough detail to let it trigger. To even begin to question whether it might be true is so sick, so perverse, so hateful and disgusting, that Idi Amin would flush with shame to even contemplate it. And if the Scissor’s right then half of you would be gung ho in support.
You guys, who haven’t heard a really bad Scissor statement yet and don’t know what it’s like – it’s easy for you to say “don’t let it manipulate you” or “we need a hard and fast policy of not letting ourselves fight over Scissor statements”. But how do you know you’re not in the wrong? How do you know there’s not an issue out there where, if you knew it, you would agree it would be better to just nuke the world and let us start over again from the sewer mutants, rather than let the sort of people who would support it continue to pollute the world with their presence? How do you know that you’re not like the schoolkid who superciliously says “Nothing is bad enough to deserve a swear word” when the worst that’s ever happened to her is dropping her lollipop in the dirt? If that schoolkid gets kidnapped and tortured, does she change her mind? If she can’t describe the torture to her schoolmates, but just says “a really bad thing happened to me”, and they still insist nothing could be bad enough to justify using swear words, who do you side with? Then why are you still thinking I’m “damaged” when I tell you I’ve seen the Scissor statement, and charity and compassion and unity can fuck off and die? Some last remnant of outside-view morality keeps me from writing the whole list here and letting you all exterminate yourselves. Some remnant of how I would have thought about these things a month ago holds me back. So listen:
Delete Facebook. Delete Twitter. Throw away your cell phone. Unsubscribe from the newspaper. Tell your friends and relatives not to discuss politics or society. If they slip up, break off all contact.
Then, buy canned food. Stockpile water. Learn to shoot a gun. If you can afford a bunker, get a bunker.
Because one day, whoever keeps feeding us Scissor statements is going to release one of the bad ones.
That which can be destroyed by a Scissor Statement should be.
Change my mind.
Nah, I agree.
WARNING: SCISSOR STATEMENT DETECTED IN PARENT POST.
The essence of a scissor is that it divides a group of otherwise aligned people. The scissor “nuclear power is safer than solar” divides the group “people concerned about climate change.” The scissor “Kavanaugh confirmation” divides the group “Americans.”
What it’s doing is taking a group of people who thought they were on the same team and telling them no, actually, you’re on opposite sides. You should fight each other.
There is a name for this. Divide and conquer.
The thing being destroyed isn’t the thing being fought over, it’s the larger coalition of people with other common interests.
It’s a weapon of general application. Any group larger than an individual will disagree about something. Insert the wedge there and the group fractures. If any given group or coalition can be destroyed by a scissor statement, your claim generalizes to the claim that every group or coalition should be destroyed.
It’s even worse than that – it’s taking two groups of people who thought they were on opposing teams, re-dividing them, and generating new coalitions.
Kavanaugh: Split the left between “due process first” and “protect women first.” Split the right between “character first” and “policy first.”
And then there’s the Trump-Clinton scissor that has to be even worse than Kavanaugh. Split the right into 3 groups (libertarians, elitists, populists), split the left into 3 groups (neoliberals, socialists, identity politics), and shake them all up.
A certain amount of anti-Scissor hardening is possible.
Republican senators were so hardened against the Scissor “Brett Kavanaugh would make a good Supreme Court justice” that literally only one of them defected from the group, including several who had very high resistance to coercion, such as “about to retire and too old for future career to matter.”
Democratic senators were so hardened against the Scissor that literally only one of them defected from the group, and he was staring straight down the barrel of “lose your job and accomplish literally nothing by defecting, or defect and probably keep your job.”
I’m not sure whether this is good news, or bad news.
Sure. Every group or coalition that can be destroyed by a scissor statement should be (normative).
You assert that every group or coalition that currently exists can be destroyed by a scissor statement (factual).
The factual claim is not relevant to the normative claim.
It certainly is relevant to the normative claim. The factual claim tells us that the normative statement “Every group or coalition that can be destroyed by a scissor statement should be” reduces to the simpler and clearer statement “Every group and coalition should be destroyed.” So if we can argue about whether destroying every group and coalition is or isn’t actually good for humanity, the answer to the original normative statement must be the same.
Pretty misanthropic view, because I think “what can be destroyed by a Scissor statement” is “any human coalition ever.”
That which can be destroyed by a scissor statement will be.
We in the primordial soup of meme warfare now. And I say: bring it on, I’m bored AF.
I mean, I guess a counter value would be that pro-social behavior and harmony are good things that can lead to happiness and wellbeing.
This has been my experience, at least. I think tolerance and a tendency toward non-offense would be the defense against scissor statements, and they are worthy qualities to cultivate.
Bravo! Great story for Halloween!
Also, this story has a similar feel to several early Asimov stories.
That makes me think of “Nightfall”
The two I was thinking of were the one where a researcher asks “where do jokes come from?” (and concludes they come from aliens) and the one about a scientist investigating a goose that literally lays golden eggs. The plot and theme of the second are obviously quite different from Scott’s story, but it has a similar narrative voice.
Agreed! Making the Halloween post a successful horror story without breaking stride on exploring the blog’s themes is pretty impressive.
Yeah I kind of expected the author was going to go down the rabbit hole and find out the entity behind the scissor statements injected into society was Thamiel. That their ML program was a simulation of the Devil.
I hadn’t thought of that, but it fits — I don’t remember this ever coming up explicitly in Unsong, but the Qliphoth he’s named for has been translated as “Division in God”.
Well, we know Thamiel is very good at behavioral economics, or has people on staff who are very good, given how he can make Hell maximally horrible by specifically calculating which exact people will come to hate being caged up together the most.
And what he did to Canada.
I’m highly skeptical…
What’s the concrete evidence here? The algorithm generated _one_ statement that tore up a company of like, what, less than twenty people? And was able to predict 5 things that ended up being controversies? And the author’s jumping from this to “someone must be generating these statements to cause controversies, there’s no way humans are naturally tribal enough to constantly be arguing about shit”?
I am immensely curious what the coding argument was, though.
edit: Oh, I missed “[Epistemic status: fiction]” — does that mean this is an intentionally made-up story?
It’s Halloween. This is an SSC horror story. Fun!
Brooooo its fiction
I thought it was eminently plausible.
Probably the least fun use of that hashtag.
I was about to post to tell Scott “your story would have been much better without the ‘fiction’ statement at the start.”
I guess I was wrong about that.
Wait! No! I was right! You pathetic people who need everything spelled out in small words are pitiful termites who must be exterminated!
Great story and relates to a post I was thinking of writing recently called something like “in praise of letting sleeping dogs lie,” also inspired by my favorite Chappelle Show skit, “When Keeping it Real Goes Wrong.”
Superficially, “When Keeping it Real Goes Wrong” is just a joke about needlessly escalating small conflicts/choosing stupid hills to die on. But I think it hints at a deeper, more pervasive tendency than that description implies, one I myself certainly share. It’s hard for me to describe this urge exactly, but I think it’s something like the opposite of “let sleeping dogs lie”, such as “all controversies and underlying contradictions must one day be resolved,” with the implied corollary “and then we can, at last, live in harmony, so those who paper over the controversies are part of the problem.” As the sort of INTJ who always wants to “get to the bottom of things,” “stick to principle,” “seek out the truth, however uncomfortable” and that sort of thing, I am prone to this, maybe more than average. Problem is, so long as there is more than one person in the world, and maybe even then, that will never, ever happen.
This is also why I’m generally against utopianism, though some might retort ancap is a form of utopianism (actually, I see it as anti-utopianism: the concession that no system is perfect but grasping toward incremental improvements in a decentralized manner will likely produce better long-run results than anyone’s top-down plans).
Arguably one of the more severe forms of this, which I’ve never embraced, but which is not uncommon in libertarian and right-wing circles, is “accelerationism” or “it has to get worse before it can get better”–e.g. that society must endure a terrible collapse before we rebuild something viable on the rubble so let’s hurry up and get on with that collapse as soon as possible. Yet I am still very much of the personality type to think “this festering tension at the heart of society is an outrage and we need to deal with it sooner than later and stop kicking the can down the road as we slowly get boiled like a frog in a pot!”
I don’t want to make this into an argument for “trying to be principled is pointless or bad.” I also don’t want to eliminate the possibility that, sometimes, irreconcilable differences of values arise among groups of people and that they’d be better off going their own way than trying to reach a resolution or just living with it (in fact, I think this has already happened in the US). At the same time, I think that, regardless of the size or values homogeneity of the group, there will always be a necessity to let certain “sleeping dogs lie” — to accept the probable existence of hypothetical “scissors” that could divide even close family and friends if forced to reach agreement about them — if society is to get on with the business of living and not tearing each other apart. No society can withstand keeping it real all the time.
tl;dr if you want to mutually cooperate in a prisoners’ dilemma, it’s not really a prisoners’ dilemma.
Scissors create–or perhaps expose–situations where we’d really truly rather defect while the other player cooperates. Where winning the war is better than peace.
For society to survive these conditions, we have to learn how to cooperate anyway.
Yeah, I agree with this and the similar ideas in the post you responded to. I truly feel that our ongoing failure to explicitly reckon with many issues is a painful and ugly process that will cause our ruin, and it is dragging out too long; coming to terms and learning to cooperate is our only option. So I find myself weirdly embracing fantasies of accelerationism, because I just can’t stand feeling like I’m in an alternate reality anymore, and it seems so senseless to waste time and energy in a destructive state. It is ruining my mental state. Unfortunately, I believe that so many people are 100% “let sleeping dogs lie” when it comes to anything that could actually get us somewhere, that this waking up will only happen when we collide with reality to the point where even the dogs wake up, and can’t lie down anymore. It’s scary and I can’t find people who understand what I feel. So whether we accelerate it or not, we’re in trouble. And of course, people will blame the accelerators and completely ignore existing problems, and probably will continue to do so even if we fix those problems.
There are just too many people I know who unapologetically hold directly contradictory views, and this is so common as to make solving our problems impossible. If it is okay to be briefly political as an illustration, my mom is a political moderate. She does not want the “caravans” to be able to enter America. She also does not want them arrested. When I said she needed to pick one or the other, she said arrest, but not in a way that was upsetting. It’s probably upsetting to them regardless, but as these people are engaging in activism, they will probably protest their arrests and not go quietly. When I pointed out that arrest would be upsetting to watch, she said she just didn’t think they should be let in or arrested in an upsetting way and that was her opinion. Look, I get it, we all would like that to be an answer, and not everyone has to have a policy plan. But some people do have to pick one. And you can’t just say that’s your opinion – it’s a feeling, an understandable one, but you can’t have things both ways.
I feel like people need to be confronted with the consequences of some of their choices, which contradict other choices, so that they’ll get what’s at stake and wise up. But doing that means taking actions that victimize people, and it’s hard to say any amount of that is acceptable. But I think it will happen one way or another, and I think doing it sooner and deliberately might work out better than letting it happen haphazardly such that people don’t know how to adjust their views. There are definitely Americans who are clear on where they stand and do not hold contradictory views that would melt away with upsetting images of expected “side effects” of these policies. But I suspect most Americans do engage in this type of thinking, and it would be nice to just let them try to have it both ways so they would realize that’s not an option and make peace with one or the other.
Re: accelerationism, a better metaphor might be a game of spider solitaire, or traversing a maze, in which you manage a long and productive path towards the win/exit, only to discover that it would result in a dead end. The only way to complete the game/maze is to backtrack sufficiently (using ctrl+z “cheating” for spider solitaire) and find the actual correct path, even if it results in point penalties and looks like we’re reverting all of the progress that was made.
That is, progress that by some metrics is genuinely moving us closer to where we want to be may nonetheless make it impossible to actually get there.
It comes from reasoning by the Theory Of Contagion. “F” is horrible and immoral and objectively wrong, and everyone who thinks “F” also thinks “A”, therefore “A” is a signifier that you’re horrible and immoral and objectively wrong. Maybe you don’t actually believe “F”, but there’s plenty of people out there who don’t believe “A” either, and I can hang out with them and avoid any risk of getting “F”ed, as it were.
And because it’s adversarial, people who believe F often deny believing A, just like people who don’t believe F also deny believing A, and there’s competing communication needs between people who want to communicate “I don’t believe F but believe A” so they can hang out with people in the A-hole without getting F-ed, and people who want to communicate “I’m an F-er” so they can hang out with other F-ers, but do it discreetly so they don’t lose their jobs or not get confirmed for the cabinet position that being a F-ing A is technically a prerequisite for but everyone pretends it’s disqualifying for.
Have you read “All the Last Wars at Once” by George Alec Effinger?
No, should I?
It’s a short story, and I don’t remember the details. It talks about society breaking down into smaller and smaller interest groups, all violently opposed to one another. In the end, everyone commits suicide, because they can’t even live with themselves. Sarcastic.
Interesting, Robert Charles Wilson’s “The Affinities” posits the opposite, that sorting-hat social media enables tribalism with stronger consequences because of the quantities of people in the armies formed, and the solution is to enable more individualized connections, instead of groupings.
Society breaking down into smaller and smaller interest groups will lead to a reduction in their efficacy, with violence plateauing as they are able to find space from each other.
If mere hatred were enough to cause death, then no family would make it through adolescence. Instead, other values (“family sticks together even when they disagree”) allow for prioritization of which controversies actually matter to a particular group, and which they can agree to disagree on.
Damn… That fucked me up. LOVED IT
Flagging for edit: the terms “Shiri’s Scissor” and “Scissor statement” are used much earlier in the story than the part that introduces the name (“Blake gave it its name: Shiri’s Scissor.”).
FWIW, as a horror story this didn’t really do it for me. Part of it’s super-implausible (no current “deep learning” approach is that crazily effective, let alone next-level enough to land programmers in an un-resolvable technical disagreement). Another part of it’s way too plausible (social media’s gradient-descent into controversy maximization is entirely real, and horrifying enough on its own).
Yeah, I’d put a little bit in saying “obviously it wasn’t just ML, we had some top-secret 11-herbs-and-spices that Brad had from an earlier startup that went under, I didn’t ask for any details.” With the implication that this is some stolen Pentagon or alien memetic weapons technology, temporarily repurposed for advertising, then accidentally put back to its original purpose.
Eh, I think it works better as a “Human Technology Gives Humans Their Comeuppance” moralistic tale. “Alien tech” lessens the fun for me.
I thought it was pretty good. The actual events are implausible, but doesn’t that go for most ghost stories? I liked how clearly it conveyed the idea of social media as an implementation of Shiri’s scissor, created by a genetic algorithm driven by advertising revenue.
Hm, what about “all interfaces should be generic maps so that we don’t have to change the interface when we change things”? Or “agile is chaos, waterfall is still the best process model”? Maybe “UML is useless”.
Right, I am not an “AI”, I also cannot provide appropriate training data to get one to come up with these statements, except personal experience, which is still hard to feed to an AI.
What an AI could learn from forums or news tickers with a technical focus is, of course, the classics:
XY is the best OS.
XY is the best programming language/programming paradigm.
XY is the best IDE.
As for other communities that are famous for having a matter-of-fact discussion culture that is highly efficient in convincing experts of controversial ideas, have a look at the Bogdanov affair for theoretical physics (obvious nonsense (?) was accepted for a PhD), and the (still ongoing) debate of Mochizuki’s “proof” of the abc conjecture in math.
And that’s just technical stuff. How about “the lack of women in tech and in open source in particular is evidence of massive sexism and discrimination?”
Oops. Hope I didn’t just destroy everything.
Nah, that tears up programmer-adjacent groups far more.
There’s a reason many programmer forums have a “religious wars” subforum for arguing over which editor/language/model/paradigm/framework is best. People get super emotional about that kind of stuff.
There are things that we know are unresolvable holy wars in programming – Vi or Emacs? Tabs or spaces? – but the thing is that we know they’re wars. We recognize that there are good reasons for both sides and it’s a matter of opinion and personal taste. A Scissor Statement is, purportedly, something that’s both incredibly controversial and appears so obviously true/false that you can’t imagine it otherwise without completely tearing down your worldview. If you can imagine a reasonable-sounding counterargument, it’s not a Scissor Statement.
I can imagine such statements existing – The Dress is such an example – but I can’t imagine it’s possible to generate one for any given topic. The Dress was a lucky quirk of how our eyes work, you can’t rely on finding such a quirk every time.
The war between tabs and spaces has a winner.
Clearly there are nonfinancial benefits to using tabs, and I should imitate the people who find using them to be worth $10k per year.
I mean, if we had a mathematically rigorous and thoroughly understood science of “memetics” or “psychohistory” or “cognitodynamics” or something that exists at the intersection of all three… it’s entirely possible that the practitioners of that science could construct arbitrary Scissor statements for any given interest group, and conversely recognize them for what they were just from being familiar with the underpinnings of a given interest group.
Sort of like how structural engineers can just point at a spot on a diagram and go “yeah, this bridge won’t hold up because it’s going to break THERE when the wind kicks up.” It’s not that they’re psychic or anything, just that they have a superior set of ideas and mathematical tools for modeling the problem.
If you fed a machine learning algorithm 1.7 billion bridge designs, most of which fail but some of which succeed, and told it to emulate the successful ones… Well, you miiiight get a pretty good bridge designer out of the process.
Can we train a neural network to play Polybridge?
Who disagrees with that one? I thought we purged all the heretics in the Y2K timeframe.
Nowadays, if you want a war, you mention systemd. Used to be 80/100/132 columns, but clang-format put that one to bed.
I like UML and use it every time I sketch out a complicated architecture…
Meh, I don’t know that enough people are qualified to have an opinion on systemd for it to be a good topic of contention. How about this one (from an actual conference speech):
The problem with that is that anything we’re likely to come up with when thinking “What sorts of things could cause unresolvable disagreement?” are going to be things that the people in the story would recognize as controversial. If the algorithm said “___ operating system is the best”, even if the person thinks that it is completely right and there aren’t any good reasons on the other side, if they’ve seen any online discussion on the subject they’ll still recognize it as something people argue about, rather than the “oh, that’s just trivially true” in the story. Same if it’s something close to, but not quite, an established religious war.
It sounds to me like the algorithm spat out something like “The sky is blue” or “water is wet” or “source code should use indentation of some sort”, and then people ended up disagreeing anyways. (Maybe “The dress looks gold and white” would be a real-world example?)
Basically, in real life we recognize statements as controversial because we have observed the controversy. Experience allows us to gauge, either by the direct light of hindsight or the reflected light of hindsight from other similar events, whether or not there will be controversy. And among whom.
The conceit of this horror story is that someone figures out a way to predictively analyze human communities and construct from first principles a statement that will be controversial.
In the extreme cases, the optimally controversial statement is something like “the dress looks gold and white,” only with extremely high stakes so that you just can’t stop thinking about it, you can’t AFFORD to stop thinking about it, it’s a matter of self-preservation that you keep thinking about it and defend yourself against the attacks of the black-and-blue-ists.
Another example would be the transgender bathroom rights debate, which is probably about as close to optimally-divisive as can readily be imagined, because it pits two groups of people who are both convinced that a group who needs protection, will be denied that protection, if they don’t win the argument.
So imagine an issue as divisive as transgender bathroom rights, only it actually seemed as if it would be objectively worth having large chunks of the world be destroyed in nuclear fire rather than lose the argument.
Which is, come to think of it, pretty much how the Cold War narrowly avoided ending. O_o
“all interfaces should be generic maps so that we don’t have to change the interface when we change things”
Is one of the dumbest things that idiotic skinnyjeans 21C “programmers” keep doing, despite it being idiotically wrong, and is one of the main reasons why most webdev and webscale code is such shit. It makes it seem like programming is easy for mediocre programmers, so they wedge the design into idiotic patterns that they can’t understand and can’t fix.
It gets better. Some of these hapless young’uns want to store stuff in THE DATABASE without a defined schema or a plan for bringing older instances of a type up to date with the new (implicit) schema.
Sigh. Life is suffering.
The web may be the worst thing to happen to software craftsmanship in the entire history of our profession.
Worse than BASIC?
BASIC was a bit before my time. By the time I got into the game, we weren’t using original BASIC, it was some evolved form of the language, with functions and types and recursion and such.
If I had to guess, BASIC did not make things worse. The original language, including programming with GOTOs, reflected common practice during the 60s, when the language actually appeared. Pros at the time were coding in assembler rather than BASIC, but they were using the same sort of jump-around coding that early BASICs enabled. Dijkstra’s “Go To Statement Considered Harmful”, the strongest statement advocating structured programming, didn’t appear until 1968.
HTML, by being generally untyped and tolerant of unexpected tags, seems to have made things worse by letting a lot of dodgy coders get away with weird tag soup rather than proper structured documents. Suddenly sloppy was OK, and fostered a new paradigm of worse-is-better.
GOTO is only harmful for programs that have a core loop that is intended to last forever. When every program is intended to terminate, and the user restart it if desired, GOTO works just like a line on a flowchart does: it can move you towards the end state, and the end state is both desirable to reach and an END state.
The programming paradigm where there is a “core loop” that makes calls to subroutines and functions is where GOTO is nonsensical.
And yes, there was a time when people used the core loop concept with GOTO, and that led to headaches when a subroutine needed to be called from a new place.
Oh yeah? I guess you think everything has to be formally agreed upon and cast in stone, enshrined in huge and impenetrable XML files expressing the one true and universally applicable ontology that precisely and unambiguously defines every term (including the word “define”)? Stuff that is unreadable on purpose, because it should only be processed by machines anyway? Well, I have news for you: while you sit in your ivory tower neverending XML-fueled circle jerk and squabble over irrelevant minutiae, some of us have real problems to solve…
(>50%-joking, obviously. Might be more fair (to the OP) to target static type systems, but I happen to like those.)
I don’t agree, but also don’t disagree strongly enough to think that you’re crazy.
No, after thinking about it some more, I finally managed to generate something so incredibly out there that if you don’t disagree with this then you obviously aren’t a real programmer, for *any* definition of ‘real programmer’. Here goes:
To borrow your phrase, fight me.
ROTFL. Real Programmers don’t use source control. Or source code. Real Programmers don’t use file systems. If your object code is too big to memorize, just write it out to 9-track.
That battle was fought long ago. The “use source control” side has won so thoroughly that real projects in the wild, with real users and more than one developer, that don’t use source control are rare curiosities.
(I can’t speak to the horrors done inside small IT shops where “the computer guy” who sets up the printers is the same person who writes the business specific applications…)
If you can’t look at any given punch card and know where it goes, you don’t really know the program. Leave the computer parts to the professionals.
Tarn Adams, the sole programmer of the videogame Dwarf Fortress, self-reportedly doesn’t use source control.
The game procedurally generates and simulates a generic fantasy world to an incredibly high degree of detail and depth, is considered one of the best video games ever made, and is an exhibit in the Museum of Modern Art.
Additionally, the game has been continuously developed for the last twelve years, and is likely to continue for several years more.
Since all of this was possible to achieve without the use of source control software, I cannot completely disagree with your statement. (I do, however think it’s wrong in the majority of cases)
The problem, as others have pointed out in this conversation (I didn’t think my way through it enough at first), is that it isn’t enough for the Dwarf Fortress guy not to use source control for his game; he’d have to say that *we’re all wrong, horribly wrong, for using it*. And even that wouldn’t be enough: one lone person, no matter how accomplished, isn’t going to flip an industry best practice. It would only work if a substantial number of people were of a different mind while simultaneously being unaware of the difference. Finding one exception doesn’t break my model of the world the way a real scissor statement would. Note that in the story the narrator just thinks Shiri’s nuts until David joins in on her side.
Are you kidding? XY is the worst.
XY is the best! I named my oldest child after it, and subconsciously interpret attacks on it as attacks on a family member! “People” like you disgust me.
As an experienced SW engineer, none of these do much for me. I couldn’t make myself take a statement like “XY is the best/worst Z” seriously enough to do anything but maybe an occasional silly trolling or inside joke about it.
I could, though, construct a meta-statement that would do it better for me. Something along the lines of: “You can identify the set of tools and methods which are absolutely the best to solve every problem in the software world, bar none. This set not only can be constructed by real people, but is already known and can be found at (URL). If everybody were smart enough to read and understand (URL), and then disciplined enough to consistently implement what is contained there, all software development and maintenance inefficiencies would be gone, and absolutely no negative consequences or tradeoffs would be involved.”
Let me just say that you have to have a more-than-layman’s comprehension of machine learning to be confident that it wasn’t possible, in 2017, to construct a reasonably effective machine learning algorithm capable of generating maximally divisive Reddit comments from a database of 1.7 billion Reddit comments.
The idea is superficially plausible at the layman level, though it may appear obviously implausible to one better versed in the mysteries than myself.
I can believe that with 1.7B comments a ML algorithm could predict which comments would be controversial. There’s a small step to generating them.
But there aren’t 1.7B comments on most subreddits, and that’s where the handwaving comes in.
Also, in a small company’s subreddit, the controversy number is probably already maxed by some comment that exists; there’s a liberty taken in treating (up+down)^(min(up/down,down/up)) as equivalent to “fight me to the death right now IRL”.
There’s some correlation between the two, as shown in /r/Mozambique. But I think that’s because “fight me to the death IRL” topics are either resolved and only the survivors of one side exist anymore, or they are evenly balanced such that no side has yet killed the other; thus, every such topic that still exists will be very high in the F(up,down) measurement. But the inverse is not universally true; almost all subjects with a very large F(up,down) don’t result in “fight me to the death IRL”, because relatively few such subjects can exist (proof: each person can die at most once, thus the average and median person kills one person or fewer, ever)
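For readers following along, the F(up, down) being tossed around here is the Reddit-style controversy score described upthread (magnitude raised to balance); this is a minimal sketch of that formula, not Reddit's actual production code:

```python
def controversy(ups: int, downs: int) -> float:
    """Controversy score: magnitude raised to the power of balance.

    Big, evenly split vote totals score high; lopsided votes of the
    same magnitude collapse toward 1, and one-sided posts score 0.
    """
    if ups <= 0 or downs <= 0:
        return 0.0
    magnitude = ups + downs                       # total engagement
    balance = min(ups / downs, downs / ups)       # in (0, 1], 1 = even split
    return magnitude ** balance

# Same total votes, very different scores:
print(controversy(100, 100))  # perfectly split -> 200.0
print(controversy(190, 10))   # lopsided -> barely above 1
print(controversy(100, 0))    # no disagreement -> 0.0
```

The exponent is what makes the measure so sharp: a 95/5 split on 200 votes scores about 1.3, while a 50/50 split on the same 200 votes scores 200.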
“But there aren’t 1.7B comments on most subreddits, and that’s where the handwaving comes in.”
Not really. Coming from a natural language processing ML perspective, the fundamentals are really fundamental, so they transfer across the whole language, even across multiple languages. If you can train a system on 1.7B comments that learns how to generate scarily effective “controversialness” (which is the handwaved assumption of the story), then it seems to me quite plausible that domain adaptation/transfer learning to the topics relevant to Mozambique could be done with the much more limited number of messages in /r/Mozambique. In fact I’d be quite surprised if it didn’t succeed; it’s a rule of thumb within our domain that such attempts generally work really well if you take the effort to do them properly.
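The transfer-learning claim can be illustrated with a toy sketch (entirely my own construction — a linear probe on synthetic features, nothing like the story's network): a classifier warm-started from a large general corpus adapts to a tiny domain corpus better than one trained on the tiny corpus from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 30  # toy "bag-of-words" feature dimension

# A hidden direction that makes a comment "controversial", shared across
# communities -- the premise that controversialness has general structure.
true_w = rng.normal(size=DIM)

def make_corpus(n, noise=0.5):
    """Synthetic comments with noisy binary controversy labels."""
    X = rng.normal(size=(n, DIM))
    y = (X @ true_w + rng.normal(scale=noise, size=n) > 0).astype(float)
    return X, y

def train_logistic(X, y, w=None, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression, optional warm start."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(((X @ w > 0) == (y > 0.5)).mean())

# "Pretrain" on a large general corpus (stand-in for the 1.7B comments)...
X_big, y_big = make_corpus(5000)
w_pre = train_logistic(X_big, y_big)

# ...then adapt to a tiny domain corpus (stand-in for /r/Mozambique).
X_small, y_small = make_corpus(40)
w_scratch = train_logistic(X_small, y_small)            # no transfer
w_transfer = train_logistic(X_small, y_small,           # warm start,
                            w=w_pre.copy(), lr=0.01,    # gentle fine-tune
                            steps=100)

X_test, y_test = make_corpus(2000)
acc_scratch = accuracy(w_scratch, X_test, y_test)
acc_transfer = accuracy(w_transfer, X_test, y_test)
print(f"scratch: {acc_scratch:.2f}, transfer: {acc_transfer:.2f}")
```

With only 40 in-domain examples, the from-scratch model overfits, while the warm-started one inherits most of its accuracy from pretraining — the "fundamentals transfer" intuition in miniature.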
Side question: Is “hnau” a literary reference from the space trilogy, or something else?
Yep, that’s the source. Translates roughly as “person”, or “mortal” if you’re feeling adventurous.
Ah, this was good. I’ve missed the fiction on this blog.
What past events might be explained as the results of Scissors? We’re looking for apparent trivialities with disastrous consequences.
Let’s see… The first two that come to mind are World War I and Prohibition.
Half of what happened in the Byzantine Empire, really.
Are you suggesting that homoousios vs homoiousios, blue vs green or icons vs no icons weren’t worth gouging a few eyes out over?
Ranking the above controversies by importance.
homoousios vs homoiousios
iconoclasm vs. iconophilia
Filioque vs. no-filioque
blues vs. greens
And yet… none of them are worth gouging eyes out over.
Other trivialities include, all uses of violence to defend against heresy.
They are not eye gouging worthy FOR YOU.
For Byzantines, probably they are.
The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.
WWI wasn’t caused by a triviality: a casus belli is usually an excuse, not the real cause. Look at this: https://en.wikipedia.org/wiki/First_Balkan_War#Great_Powers
The clearest thing I’ve seen on the advent of WWI is the BBC film “Royal Cousins At War”
“What past events might be explained as the results of Scissors? ”
England attacking the Dardanelles.
Deploying nuclear weapons against Japanese cities.
Chemically castrating Alan Turing.
Physically castrating John the Eunuch.
None of those* were preceded by super-maximally divisive statements that split a community against itself.
*(herculesorion’s three, I’ve never heard of John the Eunuch)
England attacked the Dardanelles because it looked like it might be a soft spot in the Ottoman Empire, the equivalent of sucker-punching someone in the solar plexus hard enough to put them out of the fight. When it turned out the defenses were stronger than expected and the means of attack were handled suboptimally, it turned into another fully generic grinding World War One stalemate. But that happened over and over and over throughout the war; the Dardanelles offensive was just a special case of a general pattern where people tried to hit weak points and lacked the means to successfully crowbar them open.
The US nuked Japan as a result of a pretty widely held belief that the Japanese would never surrender without mass devastation, combined with a belief that mass devastation was acceptable as the price of ending a world war on favorable terms, combined with a belief that people who sneak-attack you have it coming. None of these statements was very divisive within the US’s in-group. Nor were corresponding statements in Japan like “Japan has a right to own most of eastern Asia” or “the Americans will sue for peace rather than fight us to the last man in the last spider-hole with the last sharp stick.” That was a straight-up conflict between out-groups, not an in-group tearing itself to shreds.
Alan Turing getting chemically castrated was, again, the result of him being gay in a grotesquely homophobic society; nobody uttered a Scissor Statement, it was just that the general consensus of society was to do something we now recognize to have been horribly wrong.
If you want examples of historic Scissor Statements, you need ideas or memes that people decided were worth killing or dying over for what, today, seem like pretty stupid reasons because you no longer have a bone-deep understanding of what those statements meant, emotionally, to their believers and deniers.
Beliefs about slavery might qualify; it seems weird in hindsight that hundreds of thousands of Americans would go risk their lives in the Confederate military for what was, quite frankly, mostly a matter of keeping the slaves enslaved. In many cases, not even their slaves, but someone else’s! The community ‘Americans’ broke down into its component regional factions there.
The Catholic/Protestant divide in Western and Central Europe definitely qualifies, and religious schisms in general are great examples of Scissor Statements within the context of a given religious community. Indeed, one could argue that all a Scissor Statement is, is a secularized ‘schism-maker’ that can create such a radical division over a deeply held secular value.
snip snip snip go the scissors
“England attacked the Dardanelles because it looked like it might be a soft spot in the Ottoman Empire…”
…and I can provide you with any number of well-reasoned opinions, contemporaneous and contemporary, about why it was a stupid idea that could obviously never work and why anyone who seriously proposed it was a stone idiot. snip snip snip.
“The US nuked Japan as a result of a pretty widely held belief that the Japanese would never surrender without mass devastation…”
“pretty widely held” except by quite a few people. You’re right that most everyone believed that unconditional Japanese surrender was necessary, but not everyone thought that nuking their cities was the way to make that happen, and if the concept of nuclear weapons had been more widely known at the time there would have likely been much stronger statements against their use on Japan. (There certainly have been since.) snip, snipsnip.
“Alan Turing getting chemically castrated was, again, the result of him being gay in a grotesquely homophobic society”
and obviously somebody thought it was a good idea, right? snipsnipsnip.
Look, you’re trying to argue why these things I presented were Obviously Not Wrong or Obviously Wrong, and you’re partly missing the point and partly doing exactly what the story describes. The whole point of this story is that going to the mattresses over things that are Obviously Wrong or Obviously Not Wrong is meme-infected behavior.
“If you want examples of historic Scissor Statements, you need ideas or memes that people decided were worth killing or dying over for what, today, seem like pretty stupid reasons because you no longer have a bone-deep understanding of what those statements meant, emotionally, to their believers and deniers.”
Really? Why? It’s not stupid to say that user experience matters, that accusations of sexual assault against authority figures are significant, that the methods and venues for public protest are worth discussing. The point is not that the topics are silly or the reasoning frivolous; the point is that people get bit by them so strongly that they can’t not assume that those who disagree are, somehow, moral failures who intend to destroy not just the useful world but you personally, as an individual.
Scissor statements like “slavery is okay” are what kept “the North” and “the South” from being one community.
Scissor statements like “Japan deserves to control East Asia and the Central Pacific” were part of what divided the Japanese from the Americans.
Scissor statements like “Jews deserve to live on roughly the same terms as other people” helped divide the literal Nazis from most of their contemporaries.
The ingroup-outgroup membership distinction is already strongly associated with scissor statements.
“The ingroup-outgroup membership distinction is already strongly associated with scissor statements.”
Not a whole lot of ingroup-outgroup in UI design. The point is not the content of the statements; the point is how we react to people denying the truth of them. The point is how they’re the songs of the Beast and we find it so easy to hum along with the tune.
I don’t think Prohibition qualifies – there were a lot of things in play (many of them not about consumption of alcohol per se) that led both to Prohibition and to its demise. The question behind it is not trivial at all (in fact, even formulating it would probably take some careful work) and has not been resolved to this day – we’re still in the middle of the War on Drugs, and a myriad of other quarrels essentially coming from the same question. I don’t think it fits the “apparent triviality” description very well.
To be fair, the War on Drugs was ALSO probably caused by a Scissor Statement.
Also, all that BS about there being “a lot of things in play” is precisely what someone would say if they were arguing under the influence of the Prohibition Scissor Statement… >_>
The inside view of the war on drugs is what Prohibition looked like from the inside.
I think a scissor statement should be something more than just calling whatever your opponent said “BS”. That would be disappointing.
I think the idea here isn’t that it’s simply calling your opponent’s claim BS; it’s genuinely believing it’s BS, even when you try to consider it in good faith. It’s being shocked that your opponent would make that claim unironically. Your mind tries for maybe ten seconds to wrap itself around the claim, and simply finds no purchase.
Contrast with calling BS on your opponent’s argument as a sort of kneejerk response.
“My political position is clearly the best!”
“With all due respect, I have to disagree; it leads to these outcomes; people clearly prefer this alternative!”
“BS! And here’s why!”
I think there’s a lot of overlap between the two, and they’re very hard to tell apart, but it’s not complete. The latter starts with both sides mindkilled; the former doesn’t, but quickly leads to it. I think cases of the former are much rarer.
All of those unfortunate haircuts that were popular during the colonial period.
You should do more holiday-themed stuff. I’ve been showing my relatives “The Story Of Thanksgiving Is A Science Fiction Story” for the last few years, one by one, and no one’s disliked it yet. Of course, I have weird relatives.
Is the use of “Blake” and scissors a reference to Pact?
A nitpick: an image that maximizes the activation of a certain class (the most “doggy” picture, say) is very much not a realistic image from that class. For dogs, it’s typically a few doggish patterns smeared, rotated, and repeated across the image. See DeepDream. More generally, the statement “Any predictive network doubles as a generative network” is false – at least if you mean “generative for the same distribution”. The distribution produced by such an inversion is in general quite different from the original one on which the network is trained. Hence the ongoing challenge of getting GANs to behave.
I was also going to say it’s false. I think that if you just capture p(y|x), your model doesn’t have generative capacities.
Combine that with an x drawn from a finite sample space (for example, “images up to 1920×1024 pixels of 64-bit color” is technically a finite sample space) and enough computing power, and it’s trivial to find the absolute maximum.
But also, every step in the network is reversible; take a high final output, look at the ranges of penultimate steps that can generate that, look at the ranges of semipenultimate steps that can generate those, and back-trace all the way to an input.
Both of those use more brute force than available (at least with Turing-type compute; no comment on quantum compute), but that’s a fact about the universe and its finite size, not a fact about algorithms.
So, again, the absolute maximum is typically not a realistic image. And probably not a realistic tweet, in the NLP domain, though tweets frequently sound unrealistic anyway.
What you’re describing is not reversibility. If the reversing process maps a single output to a range of inputs, you’ve lost information. Exactly what reversibility isn’t. I strongly suspect you know this. Incidentally, one could add some actual reversibility constraints, in effect learning the mappings in both directions and requiring consistency on realistic images – this is done in some versions of GANs and VAEs. Crucially, these are not predictive networks. The statement in the OP is still very wrong.
Whether an algorithm depends polynomially or not on the inputs and/or problem size (e.g., #weights or network depth) is very much a fact about the algorithm.
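To make the thread above concrete: in practice, “running a predictive network in reverse” usually means gradient ascent on the input while holding the weights fixed, and even in a toy case the optimum lands at an extreme corner of the input space rather than at a typical example. Here is a minimal sketch using a made-up five-feature linear “upvote predictor” – everything in it (the features, the weights, the step size) is hypothetical, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "upvote predictor": one linear layer scoring a 5-feature input.
# (Hypothetical model -- stand-in for the story's Reddit-title network.)
w = rng.normal(size=5)

def score(x):
    return w @ x  # predicted upvote score

# "Running the network in reverse": sign-of-gradient ascent on the
# input, with the weights frozen. For this linear model the gradient
# of the score with respect to x is simply w.
x = np.full(5, 0.5)  # start from a neutral input
for _ in range(200):
    grad = w
    x = np.clip(x + 0.05 * np.sign(grad), 0.0, 1.0)  # keep x in-range

# Every feature saturates at its bound: the score-maximizing input is
# an extreme corner of the input space, not a "typical" example.
print(x)  # each entry is exactly 0.0 or 1.0
```

Even in this trivial case the maximizer is a caricature, which is the same reason activation maximization yields doggish smears rather than photorealistic dogs: a predictor captures p(y|x), and maximizing it says nothing about staying on the data distribution.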
That was beautiful and terrifying.
Unlike others, I found it very realistic – I feel like I have absolutely had “Shiri’s Scissors” debates where an implausibly large chunk of people are just obviously, unbelievably wrong: https://xkcd.com/386/
I guess the number of people who are wrong has nothing to do with the presence of debate.
There are lots of facts most people are wrong about, yet they are not “controversial”.
https://xkcd.com/2051/ is also almost exactly a scissor situation
Bloody hell that was good.
For those of you wondering about Scott’s etymology: no, he’s not literally correct.
Probably, anyway; etymology is largely received guesswork. Who knows! Though in general what did and didn’t migrate from Greek to Latin is well studied, so if something was in Latin and Etymologists don’t think it was from Greek, it probably wasn’t, and if something is in Greek it probably didn’t start in Latin.
I reserve judgment as to whether Scott is Kabbalistically correct about this etymology, but note that Nothing Is Ever A Coincidence.
Perhaps an intentional metaphor on the scissors being scissors themselves?
The etymology for scissor as the thing we use to cut paper with is super convoluted.
The scissors we use today, with two blades sliding past each other, were derived from shears. The shears they use on sheep today are still basically unchanged from the shears and early scissors of yesteryear: two blades connected by a U-shaped piece that serves as a hinge as you squeeze the blades together.
Except that these shears fell under the term “forfex,” from which fork and falx were derived.
Meanwhile, the word scythe is spelled the way it is because people misattributed it to Latin origins it doesn’t actually come from, and scissors got the spelling it has, instead of a forfex-derived word, for similar reasons.
Hooray for language!
Unit 731 tested their bioweapons on Chinese prisoners. The Americans tested syphilis on Blacks.
The Tuskegee Study was a study in nature that examined the effects of untreated syphilis on African-Americans. That is, the subjects already had syphilis and it was left untreated; subjects were not deliberately infected with it.
Because there’s some significant difference between denying treatment and intentionally infecting someone.
There probably would be legally, and based on people’s answers to various formulations of the trolley problem, I think a lot of people would say there is an ethical/moral difference as well.
Ignoring a factual difference so that you can more strongly condemn something that everyone already agrees should be vehemently condemned seems like a bad practice to me.
There’s definitely a legal distinction. In some moral/ethical ontologies there’s a moral/ethical distinction.
I claim that those distinctions are not Significant.
This claim is even more mostly signaling than most of what I do; it is about asserting a particular set of characteristics of Significant as the Correct ones.
The third Scissor statement was about SKUB.
I think you mean it was ANTI-skub. Can’t believe I have to explain this. Skub is great.
Anti-SKUB is about SKUB too.
LOL Skub is great, good one! Imagine actually being pro-Skub.
What??? Hoopy, is that an actual joke? You know skub is total shit, right? Are you racist or something?
This sounds like a Black Mirror script to me in its balance between relative believability and extremeness (but maybe that’s just because I don’t come across enough other science fiction of this kind).
The “hostile operations” idea lands well because it does feel like the production of perfect controversy-generating events has increased a lot over the last few years, and that a pure toxoplasma mechanic isn’t quite enough to explain it. It selects, yes, but hasn’t the actual production gone up too? Or is that just how it seems because (1) the information spreading systems have matured (Twitter networks etc.) and (2) content generation has become so much more competitive that you need to rely on outrage more and people have become better at supplying it?
The big question is if we’ll eventually develop cultural immunity to this, and in that case, how long it takes. Unless things keep escalating forever we should eventually get used to it, and I’m not sure how much more potent these “scissor events” can become. From my view across the pond, the Kavanaugh hearing seems about as infected as something can get (outright violence is of course worse but it’s a different thing, not necessarily an instance of seeing the world differently).
We decided, apparently, that you can’t feel good unless it’s objectively and morally correct to do so. And, further, being mean to objectively-wrong immoral people is objectively and morally correct. Therefore, it’s very important that people who disagree with us are objectively wrong and immoral.
Because being mean feels good, man.
Actually we used to have cultural immunity to this, developed starting with the Treaty of Westphalia. I hope that gives us a better chance of re-developing it.
We invented the printing press in 1450. The Treaty of Westphalia happened 200 years later.
Hopefully we update our memetic immune system to deal with the internet more quickly than we did with the printing press.
I’m quite hopeful about cultural immunity. It’s already worked for smaller things. When the Internet was new you’d constantly get chain emails. “Forward this to ten of your friends or your loved one will die”. They spread like crazy, until people started ignoring them. They still reappear, but only among teenagers who are new to the internet. Just like an infectious disease that the population has become immune to. Same with FarmVille invites later. Facebook cracked down on them, but mostly people just learned to ignore them. Then “listicles”. I no longer instinctively click on “The 10 best X in the world” links because I learned I will be disappointed. All these felt so annoyingly optimized to hijack our brains at the time.
I think humanity had a strong cultural immune response to certain things after WW2. It’s not a coincidence that these things happen just as the people who lived through the war died or became too old to have cultural power.
People who think of the current era as The September That Never Ended have a very different viewpoint than the internet natives.
In living memory we have analogs of the blacksmiths who worked out of their homes who were appalled at the mass-production foundries that took all of the art out of professional metalworking, and also of the foundry workers who are appalled at the backyard blacksmiths that take all of the precision and efficiency out of professional metalworking, and also of the backyard blacksmiths who are appalled at 3d printers that take all of the skill and thought out of craftsmanship, and also we have the 3d printing enthusiasts who are appalled at the presence of standard 3d printing libraries that take all of the creativity and fun out of making things.
The thing about the Red Queen’s race is that you can win it by not caring about whether or not you are in the same place anymore.
Interesting, but did you mean to reply to my comment? I don’t see a connection.
OK, sorry in case it is inappropriate to say this, but I keep stumbling over an issue with your fiction, and since I really appreciate your non-fiction posts and psychology advice, I wanted to ask if there is a hidden irony or joke I do not get, being European and a non-native speaker and all.
In short, some of your fiction seems to be pretty straight-up racist, which to me makes no sense for a philanthrope and rationalist such as yourself.
In this story, the worst coder who does everything wrong and whom everyone has to help carry along is of course the Indian woman. The good coders whose work is great are the three guys whose only attributes are “the white guys”.
In Unsong, the Russians are literally ruled by the devil, the people from the Middle East and North Africa except Israel literally have no souls, all Mexicans are literal drug mules, and so on.
There was another fiction story where I stumbled about this, but I do not recall which one it was.
I stopped reading Unsong, although I quite liked the idea of the story, because its racist undertones seemed just too poisonous to me. When, as an author, you are creating a world like a demiurge, why give in to the temptation to make literal every prejudice you come across? That really bothered me.
But since I have come to appreciate your non-fiction advice and posts, I do not want to dismiss you as just another racist on the internet. You seem too smart and self-aware for this.
That’s why I’m asking: Is there some hidden reason or meta-level for this, to me, pretty blatant racism?
Again, I am sorry to ask this so bluntly; I know that everyone is a critic, few are creators. I appreciate your work.
But I guess a rationalist appreciates being informed of a perceived irrationality.
I haven’t read Unsong, and I don’t think I’ve read all of Scott’s short fiction, so I can’t comment on the general thrust of his work. But (taken in isolation, at least) I don’t think this one shows signs of racism.
I think you might have misread part of the story (unless I did) — as I understand it, Shiri’s program worked extremely well without any help from the white guys; it just appeared not to be working, until multiple people read a scissor statement targeted at them.
Shiri successfully wrote a world-shatteringly important program, so she’s certainly not dumb. And as for morality, we don’t really get a sense of Shiri’s character, whereas the boss is explicitly portrayed as an awful person, and the narrator is at the very least lacking in moral courage, if not basically amoral himself.
I wonder if you took the narrator’s description of the big argument too literally? We weren’t supposed to believe that the people on the other side of the argument were actually being unusually bad or stupid or dishonest, relative to the narrator and the two who agreed with him; the point was that the scissor statement created a situation where both sides would appear that way to each other.
Your long comment is probably a good example of what the Scissor wants.
Maybe it was Alexander’s experiment to see how many people behave this way.
Note that in the story, both the boss and the narrator are on the other side of a scissor-generated argument from David and Shiri – i.e. both have been poisoned by an argument that leads them to hate those characters and dismiss them as human beings even though their performance reviews had previously been fine. And part of the narrative is that the company ends up firing people from protected minority groups with no externally justifiable reason, leaving it vulnerable to a lawsuit based on racial discrimination.
Right, the racial stuff was necessary for the plot. But the weirdest thing about harzerkatze’s post was his accusation that “the worst coder who does everything wrong and whom everyone has to help carry along is of course the indian woman.” Did harzerkatze not notice that she is the author of this shockingly successful bit of software? It’s not like Scott was subtle about giving her credit; her name is in the name of the algo. I honestly read it as a story in which the one Indian female does something utterly brilliant, and instead of getting credit for it, she gets fired amid accusations of failing to understand English.
I think you’re misunderstanding; the coders who got fired *were* great coders (remember, when contemplating the lawsuit he says their work was great and there was no justification to fire them), but the narrator at some point *convinces himself* that they weren’t great coders – an example of the Scissor at work.
Is this a scissor statement? If so, it’s Brilliant. The genius of scissor statements is that you can get into death-spiral arguments on the internet about whether someone genuinely holds an opinion or whether they are a scissor statement spewing AI bot.
In this story, the worst coder who does everything wrong and whom everyone has to help carry along is of course the Indian woman. The good coders whose work is great are the three guys whose only attributes are “the white guys”.
But that’s part of the whole point: David and Shiri aren’t bad programmers/coders, it’s the effect of the Scissors. Because they are on the other side of the argument from the narrator, he thinks (because the memetic effect induces that he has to think this way) that they are bad – bad at work, bad people (notice how he says they lie straight off), bad in every way.
We’re not getting a balanced account of affairs here, the guy is infected by the influence of the Scissors on that particular question (should we/shouldn’t we keep doing like we’re doing in the company?) and so because David and Shiri are on the other side, they are Evil Wicked Stupid Bad Dumb Puppy-kickers.
That’s the insidious effect of the Scissors – you can’t have a “agree to disagree” argument, you have to win and crush your enemy as part of winning, and those on the other side of the question get made the enemy who is bad in every conceivable way.
I honestly can’t tell if this is brilliant
This is a scissor of EXCELLENT QUALITY.
All craftauthorship is of the highest quality. It is inlaid with an image of bait and bait in bait. The bait is gesturing plaintively at the bait. The bait is striking down the bait. The image relates to the crafting of the artifact comment Bait in 2018 by harzerkatze.
Scissor post detected.
I mean, it’s not like his non-fiction stuff is free of racist undertones either. There’s a reason the comments and subreddit are dominated by racialists and reactionaries despite Scott’s professed progressive political beliefs.
[/Scissor statement roleplay]
On a serious note, I think you may have missed the unreliable narrator aspect. Scott’s Jewish, and he’s got the Jewish programmer on the same side as the Indian programmer. Shiri isn’t wrong or incompetent, she’s just on the other side of the schism which leads to both she and the narrator using every possible bias against the people on the other side. In Unsong, all of the negative attributes of the various nationalities and cultures are explicitly imposed by supernatural means from outside – there’s no reason in the narrative to believe it corresponds to our world, and the narrative doesn’t seem to be designed as a fig leaf to racist statements or actions in the narrative (e.g. in the way a video game that features heroic white people gruesomely killing undifferentiated black people might be explained by “zombies”.)
You idiot, Scott’s just trying to achieve balance, you leftist “progressives” are just as radical as the “right-wingers”!
Jeez this scissor is powerful, I was almost getting ready to type a sincere pointed reply until I saw the [/roleplay]!
Yeah, objectively Shiri was the best programmer in the story. She’s the one who invented the Hate Everyone Machine. The narrator falsely accused her of incompetence over the disagreement over the scissor statement, and, blinded by that, was unable to even acknowledge her excellent work in creating the scissor statement generator.
Oh please, someone else has been releasing Scissor Statements for years! Shiri didn’t invent anything, she just took credit for the work of some better programmers, somewhere, just like she’s been doing for her entire career. [/roleplay]
This post reminds me of a time 10 years back when someone told me she thought Blazing Saddles was racist.
It was a little odd how the narrator seems to be using “white” to mean specifically “white and non-Jewish.”
Not every comment that you disagree with is secretly an nth-level-meta demonstration of how easy it is to get people to disagree with each other.
(Except, of course, for the ones that are.)
Maybe not, but this one smells of bait to me.
If so, it’s subtle enough — sufficiently similar to a post that someone could make in earnest with no ill-intent — that there’s no harm in engaging on the assumption of sincerity. That way, if the poster is sincere then they receive a polite and potentially helpful response; if they are trying to stir up trouble, they (presumably) gain little satisfaction and do no harm to the discussion.
Um… did you completely miss the part where Russia was fighting against Hell and the U.S. was the one that allied with Hell against the Russians? Did you even read the story at all?
EDIT: oh damn well done
Unreliable narrator says nothing about attitude of author.
Similarly, in Unsong none of the facts you cite had to do with the racial or ethnic characteristics of those affected. I’ve forgotten the name of the character who took over in Mexico and South America, but white people were not immune to his influence and only strict quarantine prevented the US from going the same way. Everyone in that story was screwed, regardless of their ethnicity, race, gender, etc. Didn’t Thamiel/Satan end up conquering most of North America? The Russians just got the treatment first. What about San Francisco? Nevada?
In other words, I think you’re making a big stretch.
OK, I’ll bite. The reason Shiri is Indian is a plot device – if she weren’t a part of an identifiable minority, her and David’s lawsuit later would not be so dangerous to the company and wouldn’t cause the later events. David alone is obviously not enough – the concept of an anti-Semitic Silicon Valley startup would be very hard to take seriously based just on one complaint of a single person. And, of course, her portrait is made through the eyes of the narrator later revealed to irrationally hate her (though not for racial reasons), so obviously none of the statements on her professional qualities are to be taken as reliable. In fact, even through this narrative it shows she has not done anything wrong – otherwise her lawsuit would be easily deflected by showing her actual performance deficiencies – which did not happen.
I think you misread the story. As others point out, Shiri’s programming worked absurdly well and created a terrifying superweapon capable of shattering any human community, or at least any human community that has a subreddit.
The reason the white guys ever thought Shiri (the Indian woman) was a bad programmer was because Shiri’s program worked so well, it generated a statement about programming that would predictably cause any group of programmers to fall out among themselves in hatred and dissension, and it just so happened that Shiri and David were on one side of that schism, while the white guys were all on the other side of it.
The real genius of this story, or at least the only real named genius, is Shiri. All the protagonist can do is reconstruct her work after the fact.
“In Unsong, the Russians are literally ruled by the devil, the people from the Middle East and North Africa except Israel literally have no souls, all Mexicans are literal drug mules, and so on.” In Unsong:
1) The Russians are ruled by the Devil because the Devil fought a long hard war to specifically and personally conquer them. The Canadians are also ruled by the Devil for the same reason. Everyone in the world would be ruled by the Devil except that the Comet King showed up to stop him, well that and the Lubavitcher Rabbi animating the Statue of Liberty as a giant copper golem. This is not about Russians and Canadians being inferior to everyone else, it’s just that they had the bad luck to be living next to the Hellmouth, so to speak.
2) It is specifically people from Northeastern Africa who have no souls, and this is entirely because of the massive, reality-warping bugs in the computing substrate of reality that have been tearing the cosmos apart for 40-50 years by the time of the story. While we’re at it, much of the American Midwest and the Plains States is uninhabitable because literally all roads lead to Wall Drug pharmacy and none lead away from it, et cetera, and there was a repeating month of March 1968 that went on for an indefinite but terrifying time. Basically, creation is being administered (in the ‘sysadmin’ sense) by an autistic archangel who is VERY good at math and physics, amazingly so, but sometimes struggles with coming up with acceptable solutions to human problems.
3) Israelis and Palestinians still have souls; the anomaly there is that the archangel running the world created two superimposed versions of the same country, one for Israelis and one for Palestinians, and let them each have their own non-interacting version of it that the other doesn’t live in. They call the place the Israel-Palestine anomaly now.
4) The Mexicans got taken over by a rather malevolent archangel who in any story not involving literally the Devil would be the villain of the story. Again, this isn’t their fault or a sign that they were inferior people; the same thing could have happened to anyone.
Honestly, I think racism in Scott Alexander stories comes about almost entirely as a result of a desire to make puns. Such as Mexico being taken over by a literal Drug Lord, that is, a mind-control entity that takes people over through drugs, so that Ronald Reagan – who incidentally was a clay golem being puppeteered by another archangel, and who literally melted in a rainstorm near the end of his second term – could fight… wait for it… a War on Drugs.
Like, a literal one, with machine guns and bazookas. As far as I can tell that’s how it got there.
Meanwhile, huge swathes of the world, while only discussed briefly or in passing, seem to be doing pretty well for themselves. Most of Africa is fine, or at least not actually more abnormal than the rest of the world, which is admittedly not very normal. China is fine, India is, as I recall, fine. Latin America is largely fine except for the parts specifically in danger from the Drug Lord Samyazaz.
If you look for something to be offended by, you’ll find it.
In the story, Shiri is actually a super-capable programmer who created a Lovecraftian monstrosity. The racial split of the company members has narrative weight, and possibly hints at the supernatural efficacy of the scissor.
There’s no point worrying, because Scissor statements have already been released. A marginally better method of making them won’t change much.
Now, if someone invented a way to materialize hate and make hate golems, that would be a serious problem.
I wonder if someone’s NN predicted printing money would solve all economic problems.
Have you been following American politics? How could you not believe in literal hate golems at this point?
Somewhat. It gives me a very sad feeling about how gullible even bright people are, so I don’t waste too much time on it.
‘golem’ generally implies it must have a physical form and can be touched, no?
oh, thanks for your reply.
Possible edit note – “Some last remnant of outside-view morality keeps me from writing the whole list here and letting you all exterminate yourselves. But some remnant of how I would have thought about these things a month ago holds me back.”
I either don’t understand this bit, or there are two similar cause-effects here connected with a “but”. Huh?
Yep, that tripped me up as well.
Scott wrote sentence 1, then came back later and skimmed only the last part of sentence 1, incorrectly inferring that its first 10 words amounted to “I considered…”; and therefore added sentence 2.
Then Scott came back even later and spotted his mistake, but decided to leave it in, assuming the reader would blame the narrator (whose mind is pretty messed up by this point).
“But” can also mean “only” in English. E.g.: “There is but one God”.
But the Scissor program isn’t bound to tell only the truth, is it? Optimizing for the most destructive statement is different from optimizing for the most destructive truth. I think the latter might actually be easier, just because of how we treat truth.
Also, to hell with Rothfuss for playing games and fundraising instead of finishing the trilogy.
It’s an advertising program. Why would it be trained to identify truth?
It most definitely wouldn’t be. Advertising’s bread and butter is lies, or at least truths so tortured their own mother wouldn’t recognize them.
I was pointing out it’s a good comparison but the algorithm doesn’t have that truth-requirement.
The truth bit comes from the Cthaeh they reference, a character in the Kingkiller Chronicles. Being of the Fae it’s bound by magic to speak only truths; being the most hateful creature in existence, it uses these truths to maximize destruction and chaos.
And if you knew the statement was definitionally true, that might give a bit of pause to those inclined to disagree.
Reminds me of that Monty Python skit about the joke that was so funny it killed anyone who read it.
Here’s the link – https://www.youtube.com/watch?v=q9XJeL2MNpw
Great piece of work.
I thought of that as well. Also, I would totally read a series of dark, realist retellings of Monty Python sketches.
Christopher Cherniak, “The Riddle of the Universe and Its Solution” http://themindi.blogspot.com/2007/02/chapter-17-riddle-of-universe-and-its.html
Or “Infinite Jest”, yeah. Of course, the basilisk concept has a long history in rat circles.
I think this could be described as Godel’s basilisk.
The basilisk concept has a longer history than rat circles.
Even if you don’t count Genesis chapter 3 or the tale of Pandora as tales of an infohazard, the purported effects of reading heretical books probably qualify, Lovecraft discussed direct infohazards, and Nietzsche cautioned about them.
If you extend the broad concept of rationalist circles to include Enlightenment salons, then it seems reasonable to extend the concept of infohazard to include the belief that some knowledge is directly harmful to one’s soul.
“Von Goom’s Gambit” by Victor Contoski
“The Ultimate Melody” by Arthur C. Clarke
“Rump-Titty-Titty-Tum-TAH-Tee” by Fritz Leiber
Any suggestions for further scissor statements?
Judging by recent American politics:
Anyone credibly accused of rape is unfit for a position of responsibility.
No, we need to formulate some model of scissor-design and generate from that. My idea: take any statement that is completely innocent at face value and, without context, trivially easy to agree with, but said by someone who is widely considered a bad guy, or in a situation that gives it a meaning that is widely considered to be bad. So it will divide those who just take it at face value vs. those who take in the context. “It’s OK to be white” is a good example, I think.
Black lives matter.
I believe “kill all white people” was a recent one from my news stream.
It might just be me missing something, but that does not sound “completely innocent on the face level” to me.
Yeah, I think nameless’s definition needs to be modified to be either “completely innocent on face level” OR “completely despicable on face level”, because you don’t know which side of the schism you’re on.
Nobody disagrees with this at the meta-level. Formulate a rigorous definition of ‘credible,’ now that’s a Scissor.
And also formulate a set of criteria for fitness for positions of responsibility.
Because I don’t see where not being accused of rape makes anyone less unfit, all other things being equal.
I think there are plenty of people who do disagree with it with that wording. There are many people who believe in the presumption of innocence until someone has been proven guilty in a court of law.
The fact that you think that nobody would disagree with it illustrates that it’s good at being a scissor statement.
I think that’s still the object level, at least partially because I got to experience a shift in my own object-level thoughts on this matter – thanks to the recent kerfuffle I am quite a bit more suspicious of anybody saying ‘credible accusations of _____’ than I would have been six months ago. For an opposite object-level application of the same metaprinciple, I expect very nearly everybody would hesitate before hiring e.g. somebody acquitted of embezzlement due to some procedural screwup. The meta-level gloss, to my mind, would be something like ‘there exists some sub-judicial level of proof for serious crimes at which it is reasonable to take sub-judicial punitive actions against the accused, such as not hiring or promoting them, at least pending further investigation.’
On the other hand, the example was going to be ‘we all know OJ killed his wife,’ but on actually checking the figures it turns out that one had opposite values on demographic variables, so perhaps you’re right after all.
EDIT: Misspelled ‘than’ as ‘thank.’ Presumably this is not a coincidence.
There are many people who believe in the presumption of innocence until someone has been proven guilty in a court of law.
I don’t think many people literally believe in this. If we have a video that shows person A shooting person B, with person A fully identifiable, naming himself and declaring “I am going to shoot B now” before doing so, how many would believe A is innocent until the court-of-law process is complete? I don’t think many would. In that formula, “innocent” does not mean everybody is to treat A as no different from anybody else, feigning ignorance of any available information until the full court process is complete. It means that there is a process that has to be completed before somebody can be criminally punished by the state, and until that process is done, we do not let the state punish the person for a crime, even if we think he committed it.
There’s somebody trying to accuse Robert Mueller of rape, and Mueller’s already accused them of falsifying the accusations. I really hope this quietly dies because oh god oh god oh god I don’t want to do this again…
The thing is, the supposed “victim” of the rape has already said that the accusations are false and that people attempted to bribe her into making false accusations. So at this point the accusations just look comedically false.
I haven’t looked that closely because oh god oh god oh god I don’t want to but I thought it was somebody else claiming they were approached to make false statements.
Alice: Mueller raped me.
Carol: Bob tried to pay me to claim Mueller raped me.
With us left to infer that Bob is paying Alice, too.
ETA: Oh god. And see Nybbler’s post below. Carol may not even exist? I just want it all to end…Wednesday cannot get here fast enough.
Well, in this case Alice accused Mueller of raping her on a day when we have newspaper articles placing him in another city on jury duty, and Bob just got up and gave a failarious press conference; plus Bob has a past history of (I believe) fraud that has him banned (as I recall) from ever engaging in futures trading.
Plus the news outlet Bob works for, one which is, to put it mildly, not famous for backing down on claims beneficial to the political right wing, is rapidly backing down from the claims.
Plus Bob made up an entire fraudulent private investigative/intelligence firm and used, e.g., his mom’s phone number.
Basically, this is what it looks like when someone who honestly believes that ALL #MeToo allegation-avalanches are faked up by the liberal media, tries to do what he thinks the liberal media is doing, and tries to fake up a #MeToo allegation-avalanche, and does it very badly because he’s a twenty-year-old idiot.
He’s really a right-wing extremist operating in deep long cover to make a meta-false flag attack.
At least, that’s what he wants you to think.
It gets better — the false accusations may themselves be false. Scissor has been upgraded to version 2.0. Or perhaps version ℶ-sub-1.
At this point I am thoroughly confused as to the actual point of any of this.
Oh, for fuck’s sake, does anybody exist?
Q: Who benefits from the whole kerfuffle?
A: People who want more divisive Fear, Uncertainty, and Doubt in American politics.
If Hydra or whatever is actually behind this, I’d expect their agents on both “sides” to try to enforce the “This is an other-side conspiracy” even though the object-level evidence points to a small group of people on a shoestring budget.
Eh, I don’t think that quite captures it. Most of the controversy is in contesting whether a given accusation is “credible”.
An accusation against an enemy is always credible.
An accusation against an ally is never credible.
The problem is that bad people are wrong about who the enemies and allies are.
I think that scissor statements lose some of their power once they’re recognized for their divisiveness. When I’m in polite company, I think we know to steer conversations away from Israel/Palestine, the biological foundations of intelligence, and other such all-heat, no-light topics. The true scissor statement would hit us by surprise. We wouldn’t know ahead of time that we need to tread carefully around it, so we’d blunder right into it. It might be like blue dress/gold dress or yanny/laurel, but about something we associate with moral rather than sensory judgment.
One thing I find somewhat implausible about the setup of the story is that all the conflict arises from evaluating a declarative statement. I just don’t think we invest that much visceral feeling into the content of a proposition. Much more likely would be a scissor video clip in which some people would see obvious, glorious righteousness and others would see obvious, shocking villainy.
Of course they’re divisive. There’s an obvious correct response, and there’s these evil fuckhead asshole morons who don’t agree. Like, obviously inheritance of characteristics is a response to environmental stressors, how could that not happen? Anyone who thinks otherwise is a capitalist jerk who’s out to undermine the government’s agricultural plan which will lift millions out of starvation, and all for less money than the robber-baron pigs currently suck–like ticks they are, giant fat ticks with no purpose except to grow–the money they currently suck out of hard-working people who just want to live, to watch their children grow!
How many people during Dress or Yanny/Laurel honestly believed that other people were lying or deluded, as opposed to actually perceiving things differently due to physiological differences?
>I think that scissor statements lose some of their power once they’re recognized for their divisiveness.
Is that the purpose of this tale? To be forewarned is to be forearmed?
Robert Mueller accused of sexual assault
Aaaaand the Scissor is already working.
That’s a great one!
The part where the anti-Semitic synagogue shooter despises Trump because he’s a Jew-lover is a pretty good twist by the Scissor too.
I suspect that the Scissor statement at the heart of the current American political controversy is something like, “A country’s existence is only legitimate in so far as it benefits the entire world.”
Governments should serve people, not treat them as objects, and that’s why I voted for ______.
Look for short, unnuanced statements that unite groups around certain causes. People inclined to support the causes will mentally fill in the missing caveats, while opponents will focus on them.
“Believe all Women/Victims” is a good one.
“All people are created equal” was perhaps one for its time.
Or statements that make subjective judgments about the net value of a complicated topic.
“America/Capitalism/Religion is a force for good in the world”
Or statements about abstract ideas that aren’t universally agreed upon.
“Fetuses are humans with rights”
The infamous 14 words would probably qualify, were it not for the well-poisoning of the source.
Children born to illegal immigrants in the US should automatically have citizenship.
That’s a great discussion to have.
But try “as a matter of law, people born in the US are US citizens”.
No, that one is actually vacuously true (at least to the best of my knowledge) – US law does, in fact, say that, rather explicitly. And even if it didn’t, it’s an objective fact whether or not it does, so we can resolve any controversy over the issue by just reading the law.
To truly be a Scissor statement, it has to be normative, and therefore not falsifiable.
Maybe if you change it to say “as a matter of natural right”, vice “as a matter of law” – then it’s normative, there’s no Big Book Of Natural Rights to which you can defer, and it can easily seem both trivially true and trivially false depending on your point of view.
Then again, the fact that I can conceive of framing it thus and not break my brain thereby probably means that it’s not quite a Scissor statement, even if it’s dangerously close to being one: the basilisk that does not break the brain is not the true basilisk, belike.
“No, that one is actually vacuously true”
Not that I think the parent actually is a Scissor statement, but it’s interesting to observe that the fact that someone says this about it is actually (weak) Bayesian evidence in FAVOR of its being a Scissor statement, rather than against!
(And the classification of statements as “normative,” “falsifiable,” etc. is potentially subject to dispute / interpretation as well…)
Whether my declaring it vacuously true is evidence for or against it being a Scissors statement depends on your own view of the statement. If you agree with it, then my declaration is weak evidence that it is not a Scissors statement (because at least one person agrees with you). If you disagree with it, then my declaration is weak evidence that it is (because at least one person disagrees with you). If you think it’s vacuously false, then my statement is strong evidence that it’s a Scissor statement.
Of course, I also admitted the possibility that I was in error, and strongly implied that if an actual lawyer told me I was mistaken and US law does not actually say that, I would believe them. That, hopefully, is very strong evidence against it being a Scissor statement.
Being vacuously true doesn’t prevent people from arguing against it.
Being able to cite the law doesn’t end arguments about what the law is.
I speak from experience.
In a world where the supply of people for employment, friendship, collaboration, or immigration far exceeds the demand, it’s perfectly reasonable to coarsely filter by demographics to ensure that you’re giving yourself the best statistical chance for finding a good fit.
…When not allowed to filter finely by the thing you actually care about. I believe it’s well-documented that e.g. letting companies ask about felony convictions directly raises minority hiring rates.
One I’ve seen in action but which I think should be safe to post here:
Why did Twilight Sparkle lie about dying her mane?
I have no idea what this means, but as long as we’re talking embarrassingly geeky fan communities: Vriska Serket did nothing wrong.
“The Buggers deserved it”.
“The New Order was a better government than the Old Republic”.
“The UFP is an impoverished technocratic fascist state”.
Traps are gay.
Marche is a dick.
Asuka/Rei best waifu.
[Member of the Mane Six] is best pony.
“The destruction of Alderaan was justified.”
Doesn’t quite qualify since those who say it are fully aware that it’s not trivially true, but perhaps it comes close.
Pffft, shipping wars are easy. Always go for the poly option! (Or if not, I always end up shipping the legs of a V-configuration triangle together: Asuka-Rei-Shinji >>> Asuka-Rei >>>> (Asuka-Shinji or Rei-Shinji).)
What are you talking about? Nopony actually believes that Rainbow Dash is not the best pony.
Shinji wasn’t man enough to handle one of them, let alone both.
That’s a funny way to spell “Twilight Sparkle”.
“It’s OK to be White” was a scissor statement. Remarkably effective given that the people doing it outright stated that this is what they were doing.
Nah, “It’s OK to be white” was all about sneaking in connotations. Nobody ever actually disagreed with the literal meaning of the sentence.
Is “‘It’s OK to be White’ is a scissors statement” a scissors statement?
Insofar as we accept “scissor statements” as a valid concept (sigh), yes.
Every controversy has an infinite chain of governing metacontroversies, because any concession that an idea is contentious is a concession that its truth value isn’t determined.
There’s something similar I occasionally run into, where someone will say something that looks like it could be satire or dead serious, but you can’t ask which it is, because if it’s satire they’ll be offended you thought they were serious, and if it was serious they’ll be offended you thought it was a joke. It’s like a weird cousin of Poe’s Law. I try to just not answer.
any concession that an idea is contentious is a concession that its truth value isn’t determined.
Well, not necessarily. You could instead hold that the truth value of the statement is determined, but that those who disagree with you are lying, stupid, or both. After all, the characters in Scott’s story continued to believe that the Scissor statement was vacuously true even after realizing it was a Scissor statement.
I think the line means “It’s OK to be proud of being white, not ashamed of it” and thus it’s a reaction to those intersectionalists who think all white people should feel perpetually guilty about their privilege.
But while it’s seemingly natural to not want to feel perpetually guilty about one’s privilege, it’s also seemingly natural to feel suspicious of “white pride” (as opposed to German-American or Italian-American or Appalachian pride) given how often it is associated with bigotry. Hence the strong feelings on both sides.
The line is an ambiguity trap.
The literal meaning is, “It’s okay, and not shameful, to have pale skin”. This meaning is, in fact, vacuously true, and everyone involved in discussing that particular controversy agrees with it.
The implied meaning is, “There are people out there who think it’s shameful to have pale skin.” This is a Scissor statement, or at least a controversial one. This is what the actual disagreement is about.
The double secret implied meaning is “Everyone who says it’s okay to not have pale skin thinks it’s shameful to have pale skin.” The vast majority of people on both sides of the controversy do not believe this statement, but the people responsible for turning “It’s okay to be white” into a meme did.
The idea was to try to provoke people into objecting to the implied meaning, and then present that as people objecting to the literal meaning. This can then be used as evidence for the double secret implied meaning – “Look, all those people who say it’s okay to not be white are also saying that it isn’t okay to be white!”
The real brilliance of the trick, though, is that it works even if you know about it. Any criticism of the speaker at all can be painted as you disagreeing with the literal meaning of the statement.
There are people who act in a way that makes me believe that they blame people who are white for the actions of white people.
It’s the same error of thinking that makes people blame [race] people for the actions of [race] people.
Making an error in judgement is not worthy of condemnation.
Galle – the double-secret meaning of “It’s OK to be white” is “the people who make broad sweeping statements about how awful white people are really mean it.”
And the reaction to the statement proves them right.
I think that was just standard issue trolling.
Nobody* actually was convinced that “It’s okay to be white” was false and anyone who believed the literal statement to be true was evil.
Everyone* upset by “it’s okay to be white” was basically assuming that the statement was the motte to a very offensive bailey, even if they weren’t familiar with that explicit concept.
*Yes, yes, lizardmen truthers exist
They were putting some of these up on college campuses, where not only do the lizardman truthers exist, they’re department heads.
Or some of the department heads could be closet racists who merely pretend to be lizardmen when convenient.
Consider how they tended to react to the very similar “black lives matter” slogan when trying to distinguish between the two.
Eren Jaeger did nothing wrong.
Eren Jaeger should have trusted his own judgement and fought Annie with the full support of squad Levi. Operating on faith that a plan exists even though no such assertion was made is an error, even though it happened to be faith in a true thing.
It would be one thing to know that a plan exists and that as a result of operational security you only know your part of it.
This is demonstrated when Eren’s faith that a rescue plan exists results in him getting captured; while a rescue was improvised in time, said rescue was very expensive both in terms of replaceable soldiers and irreplaceable horses.
“God is/is not bound by the rules of logic.”
This is a very real example in certain circles. When I was in college, I had one or two friendships be seriously strained by this.
reminds one of: https://www.youtube.com/watch?v=b_GeQ4EEdTk
YES! Came here to say that.
Reminds me a lot of BLIT, and Langford’s Basilisk, the Parrot.
See also this. (A fictional work, published when 2006 was still in the future.)
Hehe, great stuff, reminds me of the kinds of punchy short stories you’d find in old s-f compilations.
You credulous fucking idiots! Scissor statements? What kind of bullshit-swallowing cretins could believe that scissor statements could possibly exist.
Anyone willing to even consider such a thing should be silenced. Perhaps not permanently so for a first offense, but jesusfuckingchrist. Scissor statements?? Really??!
Scott, this is the kind of post that is ban worthy. Also, if anyone knows this “jonmarcus” they should see he is shunned from all polite society for disagreeing with clearly true and important ideas.
Randy, I wouldn’t have thought that kind of closeminded thinking could come from you, but now that I know what you’re all about, I’m done with you.
Scott, is there any way to speed up an ignore button so I don’t need to see idiots like Randy wasting my valuable time?
Same initials as jonmarcus. Obviously an alt—I can’t imagine two people on earth thinking this way.
Less of this please.
FWIW, I think this is a parody thread
E: unless I just got nextleveled, in which case this post is definitely ironic.
I assume he meant “less meta humor”, which is a fair complaint as even feigned argumentation is tiresome for a third party to wade through.
I assumed C_B was joining in on the joke. 🙂
I was intending to join in on the joke, but as soon as I posted it, I thought, “Oh god, somebody is going to call me out on this response, and then I’M NOT GOING TO HAVE ANY IDEA IF THEY’RE KIDDING OR NOT WHAT DO I DO?”
The moral of the story is that meta humor on the internet is hard.
This thread is pants.
As in, it is a perfect commentary on American National politics; it is in DC.
I’m pretty sure “jonmarcus” is really an alt handle for Allegra Budenmayer.
OK this was a seriously good read. And a terrifying concept which I half believe could one day be real 🙂
And since everyone is posting links to the Monty Python clip, let me post a similar-ish themed video – the clip of Radiohead’s “Just”: https://www.youtube.com/watch?v=oIFLtNYI3Ls.
Not exactly a scissor statement (I hope this becomes common terminology, btw), but an interesting viral message idea nonetheless.
Some people have strong, conflicting opinions about the Kavanaugh hearing, can discuss them, find whatever common ground they may have, and then accept that they won’t ever be able to convince each other or even understand the other’s point of view. So it is possible to resist some of the statements on that list. Then would it be possible to discuss all the elements on that list in descending order, in order to inoculate oneself against statements optimized for controversy and develop a perfected form of the principle of charity? If so, Shiri’s Scissor could be the key to world peace. Schools should teach kids how to deal with controversy.
I really like this idea. Alas I don’t think it would fly in the public schools. Maybe as an unaffiliated summer enrichment or adult school class.
The problem is that if you work your way up the list you eventually hit a statement that tears apart whatever group you occupy.
It’s unclear whether or not it’s possible to inoculate yourself against the maximally divisive Scissor Statements at the top of the list, because those may represent a superstimulus when applied to human cognitive processes.
Brilliant, wish it could show up in an anthology somewhere.
Also, I think you’re nudging towards what some people call hyperstition.
Hyperstition? What might this be? I can partially suss out what I think it ought to mean – from “superstition”, plus “hyper”, with the additional connotation that “hyper” trumps “super” – but I’m not actually sure what that cashes out to.
The obvious hypothesis is that the observed controversies are not the work of a hidden mastermind, but an emergent phenomenon from enhanced memetic selection due to the internet/social media.
This is what I really like about this story—it helps reveal that the algorithms in place today optimize to finding and spreading the most irresistibly clickable and controversial content. You don’t need machine learning or AI to generate these things. You only need people sharing stuff on the web and an algorithm optimizing for what gets the attention. The platform then distributes selected content to the eyes that will react most negatively to it.
It’s just the algorithm fueled by people. Social media.
That was my main takeaway, anyway. I figured the unstated (until the end perhaps) intention is to reveal that this horror story is, in so many ways, reality as it plays out on Twitter, Facebook.
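The point these comments make, that plain engagement-maximizing ranking surfaces divisive content without any machine learning, can be shown with a few lines of code. This is a toy illustration only; all post data, field names, and the engagement formula are hypothetical, chosen to make the selection effect visible:

```python
# Toy sketch of an attention-optimizing feed ranker (all data hypothetical).
# No ML involved: simply sorting by total reactions is enough to push the
# most divisive item to the top of the feed.

def engagement(post):
    # Total interactions, agreeing or not: a post with a near 50/50 vote
    # split scores as high as a universally loved one.
    return post["upvotes"] + post["downvotes"] + post["comments"]

def rank_feed(posts):
    # Order the feed by descending engagement, the way a platform
    # optimizing for attention would.
    return sorted(posts, key=engagement, reverse=True)

posts = [
    {"title": "Cute dog photo",       "upvotes": 900, "downvotes": 10,  "comments": 40},
    {"title": "Divisive hot take",    "upvotes": 500, "downvotes": 480, "comments": 700},
    {"title": "Helpful how-to guide", "upvotes": 300, "downvotes": 5,   "comments": 25},
]

feed = rank_feed(posts)
# The split post (1680 interactions) outranks the well-liked one (950).
```

On these made-up numbers the evenly split post tops the feed despite being approved of by barely half its audience: the sort rewards reaction volume, not approval.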
“An algorithm optimizing for what gets the attention”
That’s an AI.
For a suitably broad definition thereof, sure.
If you want to get any narrower, you need to be more specific about the type of AI involved.
“Nontrivial” AI, if you want to exclude the overpower scram logic for an S5W fission reactor.
“Superhuman” AI, if you want to describe an AI that can handle more inputs than a human can.
Or “General AI”, if you want to describe something that can handle more than a named and numbered amount of different contexts.
CGP Grey has a good video that identifies such a phenomenon as similar to adaptations that lead to mutual symbiosis.
This video is an uncredited retelling of ideas in Scott’s “The Toxoplasma of Rage” https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/
That was delicious. Thank you! 🙂
This is just conflict theory, set to fiction and brought up one meta-level.
I have about the same reaction this time: While viewing people as conflict theorists is itself conflict theory, and serves as an excuse to dismiss what they have to say…
This is conflict theory about conflict theory, and serves as an excuse to dismiss certain classes of ideas.
The question should be posed:
What does the worldview this gestures at say should have been done in the civil rights era?
Which isn’t to imply that “scissors” should all be treated as important, but rather to note that the fact of controversy is insufficient argument.
I had a similar reaction. It’s a good story, but it’s just conflict theory dressed up. Like all of Scott’s writing on the subject, I find it insidious – plausible and attractive but wrong in sneaky ways.
In this case, the story characterizes controversies as absurd, and people’s alliances to one side or another as completely arbitrary products of some random process. In the real world, people don’t take sides arbitrarily; they do so for reasons that are grounded in their real situations – economic, ethnic, or other factors are usually determinative. Marxists have an elaborate theory about how economic conditions underlie ideological commitments; feminism does something similar around gender and power. And on a more mundane level, perfectly ordinary tribal or ethnic loyalties play a part.
The implication of the story is that ideas generate the conflict by somehow pulling otherwise unaligned individuals into one side or another, more or less at random. The alternative viewpoint – which doesn’t really require adherence to Marxism or feminism, just a recognition that politics is a real phenomenon driven by real interests – is that the groupings and divisions and power struggles are primary and the ideas are mostly just weapons serving in a larger struggle.
Unsolvable controversies might originate not in concrete power struggles of individuals or groups, but in unsolvable philosophical questions. For example, if some real-world controversy could be reduced to the statement ‘Free will really exists’, then we have the question of whether ideal objects *really* exist, and this millennia-long conflict between realists and nominalists has so far not been successfully resolved. Perhaps Scott’s hidden suggestion of stepping away from such controversies altogether is an equivalent of Wittgenstein’s idea that such problems are nonsense, a result of a limitation of our language to describe them. Perhaps we should be silent.
I think the implication of the story isn’t “ideas generate conflict intrinsically.” It’s that “any human grouping can potentially be trolled into breaking up and angrily schisming.”
With the conceit that the ‘worst’ statements to come out of Shiri’s Scissors may approximate some deep hidden Ur-Troll statement, something equivalent to the mythical Apple of Discord that has a preternatural ability to just… hack… human consciousness to make this process proceed more fully than we would think possible.
The story isn’t saying there’s no such thing as a genuine disagreement.
The story is saying that there ARE such things as intense disagreements, whether they be genuine or otherwise.
This isn’t a horror story; it’s bad science fiction.
The central conceit is highly implausible, and while that can work, what *can’t* work is that it isn’t explained or even excused; it’s just asserted by the narrative, and expected to be taken on faith, that these GAN-produced “scissor statements” are members of the class of Langford basilisks – much less directly lethal, to be sure, but nonetheless also a kind of perceptual input that’s pathologically mishandled by the human brain at too low a level for metacognition to be able to identify the error.
The trouble is that you need more than mere assertion to sell such a concept. You need to confront the prima facie implausibility and provide an explanation that makes sense, so the reader has something from which to suspend her disbelief. “An AI did it lol” doesn’t meet this standard – certainly not for anyone even passingly familiar with what ML is, and is not, capable of. Certainly not for anyone even passingly familiar with software engineering! Design discussions certainly do, on rare occasion, turn into knock-down drag-out arguments between deeply entrenched and incompossible positions. But – five unbroken meatspace hours of irrational fighting and bitter recrimination ending with half an engineering team fired on the spot – and later escalating to lawsuits and premeditated physical violence – all originating in a minor technical disagreement? No. If nothing else, someone’s going to need the toilet at some point, and in that time reflect enough on the situation to realize deescalation is desperately needed.
So, no, this doesn’t work – not without a lot more excuse than the narrative provides. But more isn’t on offer; all we get is “this is how it happened, trust me”. It doesn’t read like the accidental discovery of a catastrophically dangerous memetic weapon; it reads like the narrator is trying to sell a lie – and, worse, isn’t actually any good at lying.
I dunno if I’m being insufficiently charitable here, but it seems like you are saying that anyone who wants to write a story about magic needs to first demonstrate some? Like, what the author is doing here doesn’t strike me as any different from the Tenacious D song ‘Tribute’, or the Monty Python sketch ‘The Funniest Joke in the World’. You don’t have to actually be infinitely persuasive to gesture at the concept. The story is in the implications.
What I’m saying is that someone who wants to write a story about magic, in a way that can sustain the reader’s suspension of disbelief, needs to do one of two things:
(1) The “Laundry Files” or “Langford Death Parrot” method: Provide a plausible explanation for why this phenomenon exists, generally unremarked, in an otherwise ordinary world. The explanation can be (should be) general rather than specific; one need not detail every nuance of how magic works, but one *must* provide a plausible reason why magic exists *and nobody seems to know about it*. Perhaps it’s kept secret by a government agency, itself secret, that identifies practitioners and suborns or incorporates them. Or nobody ran across the memetic hazard before because realtime fractal visualization has only very recently become feasible. Or whatever.
Or: (2) The “Lord of the Rings” route: Set the story in a world clearly so different from our own that the existence of magic isn’t difficult for the reader to credit.
My contention is that this story does neither. It’s set in a world that clearly is meant to correspond closely with our own, so the Lord of the Rings method is out – that’s what makes it science fiction. But, in that otherwise ordinary world, the story asks us to credit the existence of a basilisk-level memetic hazard without giving any suggestion that this hazard is in any way even slightly unusual in its form; it’s presented as just a normative statement in English, generated by a neural network trained to recognize controversy and then reversed in order to produce a synthesis of its training set. That such a quotidian thing – only unusual at all among the class of such productions in that it is actually grammatical English – should produce the results which the narrative shows it to produce is so utterly implausible in reality as to necessitate the kind of “magical” explanation that’s required in the “Langford Death Parrot” method – but no such explanation is at any point forthcoming, only a bald assertion that the narrative demands that we accept. That’s what makes it *bad* science fiction.
At that point, whatever implications may arise aren’t worth considering, because they proceed from the faulty premise that ML is ineluctably magical – that an otherwise totally ordinary collection of words, which happens to have been produced in this specific fashion, has impossible effects, not because of its form, but because of its origin. I get that this is maybe trying to be either a scary story or a cautionary one, but it doesn’t work as either, because it simply does not make sense.
I mean, this is pretty much an argument you could use against Neal Stephenson… he’s just got more handwavium, both scientific and literary, explaining how his basilisk statements work. Whereas this story just takes something which we have all observed in real life, and pumps it up about 40%.
As I see it, the story is set in a world very similar to our own. In this world “Scissor Statements” do exist. Some are naturally-occurring, which explains the history of the world up until the early 2000s. Some are artificially produced, which explains the rapid increase in socio-political polarization since then.
Yes, the story does assume that the exact algorithm for generating these basilisk statements is kept secret by some powerful agent; but given that some random guy working for a small startup was able to independently discover it, the secrecy is not all that important.
This is a horror story, not because it proposes the existence of some unique super-powerful Lovecraftian magic; but rather, because it proposes that such magic is indeed “quotidian”. Humans have been producing it more or less by accident for ages. All that’s changed recently is that technology advanced to the point where pretty much anyone can mass-produce Scissor Statements at will. The terror is not external to the human race; it’s built right into us. You cannot destroy it without destroying all of us; and in the end — sometime soon! — it will destroy us all anyway.
I dunno man, it seems like your objection applies to literally anything. “Look, this story defies physics, since it doesn’t end in our world down to me writing this comment. Checkmate content creators!”
Like, clearly all stories are Lord Of The Rings in your example, since they take place in the author’s mind and are not real places. I dunno what jollies standing around yelling “Ghosts aren’t real” on Halloween is giving you, but, for what it’s worth, you are correct.
Scissor statements aren’t real, and SA failed to prove you wrong. But, like, why didn’t you just stuff this horror story into the other branch of your “fake world vs. Masquerade” dichotomy?
This is not something I thought I would ever say, but I think you need to spend more time on social media.
I’d argue that this story is exactly the best sort of short science fiction. It doesn’t take more than the one free gimme impossible thing, it does a good job of staying true to the narrator’s point of view while making us see the narrator is unreliable because he is under the influence of Shiri’s Scissor, and it provides an interesting perspective to look at the current world.
I was going to post this as a separate comment, but I’ll leave it here as support for your point:
This story is amusing enough, but I think it illustrates what I’d call the diplomancer fallacy – that for any action or idea, there exists a combination of words that can convince any particular person of it, and that with enough intelligence this combination could be found. This is particularly flattering to rationalists, but I don’t think it’s true any more than “any speed is obtainable if you just build the proper engine” is.
I think it’s not that you need to convince anyone of anything, but rather there exist a lot of statements like p where you can with very little effort convince a large proportion of the population of p, and with equally little effort convince a large proportion of the population of !p. This seems undeniable? Now just pump that up a little bit, and you’ve got scissor statements.
I phrased it more generally above, but basically this story takes as an implicit assumption that there exist statements that not only some people agree with and others disagree with (trivially true), but that are so incendiary that anyone who hears and understands them will have an extreme emotional reaction.
Such statements exist.
I mean I literally thought of 10 of them in common circulation just while reading your post. I’m going to censor most of them because I don’t want to actually start a flame war, but to start an emotional argument it takes as little as “what is the correct way to say gif.”
It’s “jif”, soft-G. Word of Steve.
Steve is wrong.
I don’t know and I don’t care–and I assume that people who would argue about it here are doing so as an affectation.
I know people can argue, and even break up enduring relationships over it.
But the conceit that you can derive through clever intelligence for any person or group a statement that will irrevocably sever their bonds and instill an enduring enmity is… well, an interesting sci-fi conceit.
There are huge rifts, though, between “there exist a lot of p’s”, “I can construct p for some specific topic that is useful for me”, “I can construct p for an arbitrary topic with substantial probability”, and “I can construct p for every topic”. The story understandably jumps these rifts to capture our imaginations, but in reality the rifts are still there.
I suppose? But all that’s really necessary for these to be a real problem is something in between your second and third formulations. Which appears to be the real situation in the world right now – not just “can”, but there are powerful groups very deliberately looking for as many ps as they can find, and injecting them into discourse, with great success.
There is porn for everyone, but it’s not always the same porn.
The idea seems the opposite to me.
This is the worst scissor statement candidate in the comments so far.
> The central conceit is highly implausible, and while that can work, what *can’t* work is that it isn’t explained or even excused;
That’s how you do most of sci-fi? A lot of things – including such staples as ftl travel, teleportation, universal translators, etc. – are glossed over. You’re supposed to suspend your disbelief. That’s what you do with fiction. Or, if you can’t, it may be hard for you to enjoy fiction. Every fiction narrator is trying to sell you a lie – it literally says so on the label! You may be willing to buy it and enjoy it, or unattracted to it and reject it, that’s part of your choice.
This criticism is spot-on.
(I find that many attempted works of sf have this problem, actually. This one certainly does, though, and badly. It’s amusing enough, to read once, and I wouldn’t bother criticizing it if no one took it more seriously than that—but some people are, and so this criticism deserves to be made. So, as I said: spot on!)
The sci-fi conceit here isn’t that machine learning is so effective that it can come up with mysterious statements that break the human psyche, it’s that the human psyche is so flawed that it can be broken by statements that even a simple algorithm can generate. (Which, to be fair, isn’t a particularly far-fetched conceit, but it’s not one that requires machine learning to be particularly effective in order for it to work.)
Arguably, the human psyche is so flawed that it can be broken even by statements generated by other humans (sometimes accidentally); machine learning just speeds up the process a little. That’s the really scary implication of the story, IMO.
Did anyone else have weird… I dunno, cognitive dissonance… following the links in the story to the news articles? On the one hand, it’s a really interesting technique, and makes the story more compelling, I think, and is a credit to the medium since normal books can’t do that. On the other hand, I was scrolling through the news story, reading about burning villages and beheaded civilians, and my brain was flitting between being in “I’m reading fiction” mode and “Oh my God this is real and such a tragedy”. As far as fiction goes, it sure had quite the emotional impact, as good fiction should, but I’m not sure I liked it overall.
Interleaving real footage into movies has been done before to powerful effect: BlacKkKlansman and The 15:17 to Paris come to mind, but I don’t think I’ve seen it in writing before.
I’ve seen the more general technique of fiction linking to external media, most notably in Homestuck. I definitely find it produces a weird-I-dunno-cognitive-dissonance, a sort of sense of the boundary of the work being blurred*. I kinda love it.
*I feel obliged to point out that it doesn’t really create ambiguity about what is and isn’t fiction, as it’s a pet peeve of mine when postmodern art exaggerates this sort of claim. But it nevertheless creates a *sense* of the boundary of the work being blurred.
I didn’t follow the links; I just hovered over them.
If I had followed them, I think I would’ve shared your response.
I’m not sure if you were supposed to open them. I interpreted that as highlighting how it can be effective to bolster statements with smatterings of links even when it clearly doesn’t make sense.
If it *was* intended to genuinely bolster the story’s sense of reality, I think that would be in very bad taste.
In a well-prepared system, links to outside sites can bolster the sense of reality that the fiction has while also improving the page rank of the linked sites; it’s why news websites often link to other news websites that have similar levels of batshit crazy.
Minor edit: The controversy sort explanation isn’t quite right, because the ** operator in Python means ‘to the power of’, not ‘multiply’. I’d replace “multiplies magnitude of total votes by balance” with “raises magnitude of total votes to the power of balance”.
The way it works: the controversy score is the total number of votes a post receives, raised to the power of upvotes/downvotes or downvotes/upvotes, whichever is smaller. Since it always chooses the smaller ratio, balance will be 1 when the number of upvotes is exactly equal to the number of downvotes, and less than 1 otherwise.
Raising a number to a power less than 1 shrinks it, and the further you get from 1, the more it shrinks it. So to get a controversial post you need a large number of votes (to get a large magnitude), almost perfectly balanced between upvotes and downvotes (to keep its balance score as close as possible to 1).
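For concreteness, here’s a sketch of the scheme described above in Python. It’s modeled on the controversy sort published in Reddit’s open-source code (the one the correction refers to); the exact production implementation may differ:

```python
def controversy(ups: int, downs: int) -> float:
    """Controversy score: magnitude raised to the power of balance.

    A post needs both many votes and a near-even up/down split to
    score highly. One-sided posts score near 1, and posts with no
    votes on one side score 0.
    """
    if ups <= 0 or downs <= 0:
        return 0.0
    magnitude = ups + downs
    # balance is the smaller of the two vote ratios, so it is at
    # most 1 (reached exactly when ups == downs)
    balance = downs / ups if ups > downs else ups / downs
    return magnitude ** balance
```

So a post at 100 up / 100 down scores `200 ** 1 = 200`, while one at 190 up / 10 down scores `200 ** (10/190) ≈ 1.3` – same magnitude, but the lopsided balance crushes it, which is exactly the shrinking effect described above.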
One thing I’m unclear on: how is the program manifesting actual personas and events like those of Kaepernick and Kavanaugh?
I think the conspiracy with the scissor generator also has an operations arm that makes these things happen.
Probably my biggest nitpick on the story would be its failure to distinguish when scissor statements do or don’t require an actual event. The hypothetical alone seems to have been enough for Mozambique, but obviously the conspiracy had to actually make things happen.
Certainly it’s hard to imagine the country having gotten absorbed in a strictly hypothetical debate about whether it would be appropriate for an athlete to protest by kneeling during the national anthem.
The hypothetical alone kinda makes it happen. As soon as you have the idea of taking a knee, some prominent athlete who supports the idea, or who is towards the end of his career and needs a next move, is going to do it. And once everyone knows they can make the news cycle and potentially derail a political enemy by bringing out rape accusations, then everyone is incentivised to bring forward as many true and false accusations as they can.
The culture war incentivises the enactment of controversies. Hundreds of thousands of fruitless protests and sexual assault accusations that don’t suit our narratives fizzle out with little notice.
It’s when one lines up with the culture war perfectly that it blows up, and we’re all incentivized (patreon, influence, fame, book deals, sympathy, loyalty) to try and put ourselves at the centre of one of those controversies.
It occurs to me that the underlying statement works just as well if you say something like “And once everyone knows that they can motivate their Correct Party voter base with a backlash against the Incorrect Party accusing a member of the Correct Party of having committed rape, everyone has an incentive to appoint people they KNOW will be accused of rape.”
When you have been sufficiently polarized by Scissor Statements that “humiliate and defeat the enemy” starts to seem like a terminal value, you will become a spontaneous source of further Scissor Statements, deliberately generated to hurt the opposition.
They are using sufficiently advanced technology to identify which events are going to happen, and/or are waiting for an event that fits the scissor statement to occur so that they can boost it to the minimum self-propagating visibility.
I was expecting it to be the IRL Devil doing it, and the twist is the programmers invented Robot Devil.
The story structure is a little like that of the WayForward Technologies “Reason” software in the Dirk Gently books.
Blue and black dress the horror story?
Maybe just because I’ve been reading a lot of Lovecraft lately, but this reminds me of him. Bravo.
It’s also got a *Snow Crash* vibe.
Potentially controversial question (hopefully not a scissor sentence): When does highly optimized memetic warfare lead us to rethink a strong commitment to free speech?
I’m basically a free speech absolutist on principle. But when I examine that principle under extreme conditions I don’t think it holds up. For example, if you have a super-intelligent AI that views humans as predictable machines that it can perfectly manipulate by inputting the correct bit string, it seems we shouldn’t just let it say anything it wants.
But that is just the limiting case of memetic optimization processes which already exist in a more limited form. In the space of things you can say to someone, there is a small subset that will cause them to do various things that we really don’t want them to do; and as memetic optimization processes get better and better, they’ll start hitting these statements more and more.
I don’t think this is just limited to information either. You can say the same about modern drugs or video games. And maybe capitalist economies broadly.
In short, I worry that the principles of classical liberalism break down under extreme optimization for things that manipulate humans into self-destructive or society-destructive actions. But I don’t think there’s a good alternative in place.
It’s hard to see speech-suppression as useful against optimally persuasive speech. If there’s a censorship board, who are they more likely to listen to when making decisions: Gandalf Stormcrow or the Voice of Saruman? If there are effective manipulators, then controls on expression are just more powerful tools for them to manipulate.
I think about this problem an awful lot.
It’s especially interesting to think about in the context of American law, because the free speech argument the Supreme Court has arguably been the most hostile to in the last 70 years is something like “we can’t let people say stuff like this, it might be TOO PERSUASIVE”.
Scott has pointed out in previous posts that the United States would be uniquely vulnerable to memetic warfare in some hypothetical not-too-distant future.
And if you get a little creative and view corporations as basically primitive slow-moving AIs, their programming limited by government statutes and regulations, we already have “AI Box”-like situations where, through campaign spending and regulatory capture, these AIs are programming their programmers.
At some point the existence of weaponized memetics makes it not a question of speech, but of arms.
In America, any case for restricting speech for being weaponized is an argument for allowing it under the right to bear arms.
You said it better than i could have dreamed of.
The entire function of extreme scissor statements seems to be that they start discussions that risk eroding or intensifying norms.
Like, the “free speech”/white supremacism vs. “politeness”/political correctness debate is so intense because it’s not about the object level (“what causes group differences between races, genders, etc.”) but about what discussions are even allowable.
Winning the “free speech”/“politeness” debate would de facto mean winning all the object-level debates.
The right to bear arms doesn’t necessarily imply the right to use them. Using sufficiently-weaponized memetics should still be able to be outlawed just like firing a gun.
(But in the real world, I don’t think sufficiently-weaponized memetics are possible.)
It also doesn’t mean the right to ALL arms. Excluding, perhaps, the extreme mountain-man types, nobody thinks the right to bear arms extends to high explosives.
At the very least, a world with scissor-statements that infallibly worked as such would have them regulated like bioweapons or explosives.
That’s assuming what it looks like in real life is this story, the speech equivalent of single weapon-of-mass-destruction statements, when in reality it’s more like the equivalent of providing the enemy with a bottomless supply of free alcohol.
In practice one statement will create outrage for a week, or if it’s unusually powerful then for a limited number of months or years, and then people get over it. But if you can create a fresh week-long conflict every day, that’s what creates a long-term schism. And it doesn’t take an unusual level of technology to achieve that — it’s what’s already happening.
This is also why speech regulations don’t help. There isn’t one entity causing the conflict and if only they would stop everything would be fine, it’s a collective effort.
It’s basically the consequences of decentralized communication. If all you have to do is convince the editor of the newspaper what to print, there is no conflict, only the editor’s decision. If each person gets to decide for themselves then the argument over who is right moves from private to very, very public. Which is a whole lot noisier but it’s not clear that it’s a degradation over the historical precedent where the only way to have a voice is to own a printing press or a TV station.
Uh, I think that the text of the Constitution explicitly prohibits government interference with private ownership of ICBMs by people currently incarcerated for being convicted of murder and domestic violence.
I think that’s stupid, and therefore I think that the Constitution should be changed.
In order to do that, we need widespread agreement about what right, if any, should exist instead.
Since getting that level of agreement is currently infeasible, we destroy lots of value in trying collateral attacks that get past the courts, and in loading the courts with judges who will willfully misinterpret plain English just because the rule as stated is patently absurd.
Wellp, that tears it then.
Time for Trump to announce Meme Force.
One of my favorite throw-away concepts in a Charlie Stross novel was the concept of a “Combat Philosopher”.
I believe Mo was actually a “combat epistemologist”, rather than combat philosopher, which is even better to my mind.
In the Eldraeverse, warrior-philosophers report to the Sixth Lord of the Admiralty, Stratarchy of Indirection and Subtlety.
Some of you will become nomomachs, and wage war with legal manipulation. Some will become memetic warrior-philosophers, and attack the heart, the mind, and the will to conflict, or raise up rebellion in the wake of war.
And regarding Mark Atwood’s comment above, I’ll go so far as to say that Shiri’s Scissor is exactly the sort of tool the Sixth Lord likes to keep in his toolbox.
Y’know. If there were such a thing.
I think there’s a difference between supporting free speech, and willing to defend and protect free speech. If you simply support it but don’t have strong feelings about whether it should endure, then you have a point. But a position that defends and promotes free speech likely needs to defend the constraint that other people are partaking of that right in good faith, and not using the platform of free speech to undermine the sociocultural tenets that make it possible.
I for one share a commitment to Free Speech as a general social institution, and think it should be protected against those who seek to eliminate the institution. I do not consider “abolition/severe restriction of freedom of speech” as an acceptable or tolerable political position that should be protected by Free Speech. To that end I am not a Free Speech absolutist, but rather a defender of the somewhat limited form of Free Speech that enables the functioning of Western Liberalism. Within those bounds, there should be freedom of speech. If defending the general principle means restricting the scope somewhat, so be it.
Are you talking about the culture of free speech or the law?
Cf. Monty Python’s funniest joke in the world. But yes.
There’s a bit of a problem with the idea that Kaepernick or Kavanaugh were Scissor Statements. Colin Kaepernick had to actually, physically kneel down on a football field for him to become a controversial subject. If you had simply gone on Reddit and asked “Imagine a famous football player kneeled during the national anthem to protest police brutality, would you approve of this?” it wouldn’t have generated nearly as much vitriol.
In other words, the only way to use that Scissor Statement would be if you had control of Colin Kaepernick’s actions, and if the conspiracy can command famous NFL players there are probably better things they can do with that power.
I mean… I don’t mean to sound paranoid, but I think it’s pretty clear that there is an awful lot of money and effort being put into “reactive scissor statements” by a number of powerful state and corporate interests. So you don’t control Colin Kaepernick, but as soon as he kneels down, you generate as many scissor statements as you can and flood social media with them. This is actually what happens with bot accounts, and there is quite a bit of research showing it happening in realtime.
That waters down the concept quite a lot, though. The horror of the Scissor Statement is “We can make you fight for any reason, or no reason whatsoever. We can cut past all your reasoning and civility and just smash you into your fellow man like a kid playing with action figures.”
“We can make people fight over things they’re already fighting over” is just politics as usual.
“We can make people fight each other much more over small things” combined with “there are lots of small things to fight over” is a pretty powerful combo.
When you rephrase that as “we can make people who are fighting over things fight even harder” it sounds a bit more threatening. Especially if you don’t actually have precise control over the degree of escalation – keep poking at these small fights, and sooner or later, foom. So much the worse if you didn’t see the foom coming – “there was just a flash mob that got gunned down by cops over racist donuts? Is this reality?”
What @deciusbrutus and @Nekotenshi said. Essentially, instant large-scale communication + micro-targeting + increased polarization + existing AI make such watered-down scissor statements extremely powerful, and very unpredictable. If your sole goal is to cause chaos and discord, which appears to be what Chinese and Russian bots are here for, then you have a suite of very powerful tools available to you.
I realize this sounds somewhat paranoid, but just because you’re paranoid doesn’t mean they aren’t out to get you. There are very good, cogent and clear analyses of bot behaviour and this is simply what is happening on a minute-by-minute basis in pretty much all online communities.
I think that is exactly the point. The Scissors system doesn’t create events, like Kaepernick or Kavanaugh or the fact that it rained yesterday.
It doesn’t need to. Instead, the moment people talk about it on the internet, the facts become available to the scissors machine learning algorithm, which simply calculates the worst possible thing to say about the fact. If it rises to some level of computed power (controversy), it is a scissors statement.
In the case of Kaepernick, it wasn’t his kneeling that was a scissors statement. It wasn’t what Kaepernick said about his kneeling, or why he did it.
The scissors statement was something much simpler, along the lines of ‘Super rich attractive athlete implies people like him are oppressed? Hah!’ which, the moment humans read or hear it, splits them into ‘No, Kaepernick’s worried about people of color and using his power and position to make an important statement’ and ‘Screw that – rich dude with exactly zero issues is messing up what little joy I have on a Sunday, watching a game that’s all about America and coming together, to complain? He’s an ass…’ That in turn brings human reaction and all sorts of second-order statements and splits, like ‘If you are more worried about your football game than racism and the death of innocents, you are evil and should die’ and ‘Can’t you leave politics and race and political correctness out of something, anything? You are so irrational you must die’ and so forth.
In fact I’d expect a scissors-calculating machine would weight heavily for statements that will inspire second- and third-order scissor statements in reaction… sort of a memetic nuclear explosion. Split one meme atom, produce meme particles that split two meme atoms, and you get a runaway memetic reaction… Hmm. There is well-known math for that in physics: apply the fission/fusion math (particle production rates and likelihood of subsequent reactions) to social media… One lesson of nuclear physics was that you need to refine your material to get lots of the right atoms close to each other to increase the likelihood of a chain reaction… sort of like how modern connectivity and social media omnipresence has packed a lot of human minds into close proximity to increase the likelihood of a chain reaction… woah.
I’m scared. Hold me. This might be real.
It’s scary and gets scarier.
While “controversy as measured by Reddit algorithm” doesn’t capture the add-on memes, evolution does.
Memes which are controversial (common sense) and spawn controversy are better adapted to the current world; they grow faster, spread further, and last longer than less controversial memes.
Companies that can gain money from being attached to controversial memes have finely honed financial incentives to do so, and the means to pursue them.
Nike signed Kaepernick because they recognized the value of controversy=publicity.
Suppose that some venture capitalist figured out how to accurately predict which people would reach that level of controversy in six months, and sign endorsement deals with them today, for only a few tens of thousands of dollars, in order to gain the tens of millions of dollars of value that endorsement is worth in a year.
Now suppose that some venture capitalist uses that profit scheme to create controversy in a manner that doesn’t backfire.
Where did the millions of dollars that Nike gained from signing Kaepernick come from? I don’t see any place where value was created and Nike captured some of it in cash.
When people create controversy for profit, they will extract value and not create it. And the rest of us will be poorer by that much.
In other news, donate to [party], because otherwise [~party] will win the election! And there will be more of the same.
I think that you can generalize the Scissors Statement to anything, not just writing, that generates an extremely polarizing reaction in the portion of the population that is keyed to it. It’s easiest to do with text, but theoretically possible without words. The algorithm which generates artificial Scissors Statements is only keyed to produce text because it is a computer-run algorithm.
Scissors statements (whatever their form) which are not artificially generated could be akin to naturally occurring nuclear reactions.
On an unrelated note, I think this thread has generated the need for its own version of Poe’s law: unless the author of a comment clearly spells it out, it’s impossible to separate actual discussion about memetic hazards from attempts to emulate the Scissors Statements of the story, keyed to trigger on the SSC commentariat.
I meant that post as a plot hole in the story, not an observation about how memetic hazards might work in real life.
Although if you want to generalize, I’m saying that the power of memetic weapons is limited by the fact that an argument usually needs to involve facts about the world. It may be the case that the Very Good Persuader can make an argument that the sky is green as easily as it can argue that the sky is blue, but only one of those arguments will include the statement “Go outside and see for yourself.”
Yeah, but the author was speculating that someone, somewhere out there had a scissor statement generator, and was using it to come up with the divisive issues and then making them happen. So whoever this is (Putin, maybe, but I was expecting it to be Satan), runs the machine, it says “football player kneeling during national anthem to protest police violence,” and then Putin’s operatives via various American proxies suggest it to Kaepernick, who they identified as sympathetic to BLM.
I don’t know about Satan, but it sounds a lot like the sort of thing Crowley would come up with and Hastur and Ligur wouldn’t understand at all.
In fact, maybe the earlier discovery of the scissors wasn’t technological at all – it was the result of someone reading Agnes Nutter’s predictions of the outputs of Shiri’s programme.
This feels entirely too much like the world that we live in.
With the controversies mentioned (Kavanaugh, Kaepernick, everything Trump-related, etc.) I seem to be the only one I know that doesn’t have a strong opinion either way. Does that make me a freak or something?
Nah. Most people don’t care. They don’t generally get involved to inject a reasonable conversation because the conversation is already dominated by people who, ah, care too much, who make it unpleasant to involve themselves in the conversation.
It produces an ugly dynamic, in which either conversational policy caters to the craziest / loudest / lowest common denominator – for example by banning controversial positions, which regardless of intent is support of the status quo (incentivizing those who support the status quo to be incredibly nasty about it) – or conversations become polarized and ugly.
So no, you aren’t alone, but few people will be willing to maintain a “Don’t really care” position for long, once they get engaged in the conversation and one side or the other starts attacking them, in turn.
I do often get attacked, by both sides, about these things. It’s quite irritating. Most recently with the Kavanaugh thing, the people that made me angry were people on both sides that were so sure of their position. Like those on the left that were so sure that he was a horrible rapist, even though it boiled down to he-said/she-said, who completely overlooked his legitimate credentials and his case history, under which he appeared to be less conservative than Gorsuch by a good measure, and who assumed the people supporting him were doing so because they’re somehow pro-rape. And I’m equally annoyed at those on the right that are SO sure that he’s 100% innocent and that he’s one of them that’s going to be a warrior in the fight against the communist antifa zombie UN muslim Obama Axe body spray troops, and that anyone who might even question the accusations against him is an evil baby-eating moloch-worshiping demon.
It’s that kind of thing where Shiri’s Scissor is plausible. But even if an AI isn’t generating these, perhaps they’re being generated as an emergent property of Internet culture? Effectively, we’re 7 billion monkeys at typewriters, and occasionally we splat out a scissor on social media, and because of network effects and such, the scissors just bubble to the top?
Nah, we’re just memetically hardened. Or, colloquially, “jaded”
Covered in the story. Most scissor statements slide right past you, but once in a while there’s that one that just plugs right in, and suddenly everyone who disagrees–everyone who might disagree–is a target, a subject to be attacked until they’re destroyed, because otherwise they’re going to do it to you.
And, incidentally, the feeling doesn’t have to be reciprocal. The writer’s girlfriend might not have had an opinion about that third statement at all! But the writer did, which is the point of the story.
I mean, I just want to make sure we all get that–the writer broke up with his girlfriend over a conversation he MADE. UP.
Sounds like a Dostoyevsky character.
No he didn’t. He got angry at a made-up argument with his ex-girlfriend a year after they broke up.
“I had a tough breakup a year ago. Sometimes the other voice in my head is my ex-girlfriend’s voice. I know how she thinks and I always know what she would say about everything. So sometimes I hold conversations with her, even though she isn’t there, and we’ve barely talked since the breakup. I don’t know if this is weird. If it is, I’m weird. […] The totally hypothetical conversation with the version of my ex-girlfriend in my head about the third Scissor statement got me.”
Presumably the break-up was about real issues they were having. It seems he’s not over it, seeing as he keeps wanting to talk to her, even if only to argue.
I suspect 4chan and SomethingAwful are Scissor-like algorithms running partially on wetware. In fact, I suspect a Scissor kernel is part of the standard human neural hardware.
I’ve seen a theory—I think it was in an SSC comment—that the prudishness of the Victorian era evolved as a social defence at a time when syphilis was a gruesome death sentence. Social media is like sex: it has a lot of benefits but at the same time it can be a vector for some nasty stuff. What I’m wondering is if our society will be able to evolve some defensive mechanisms against scissor statements?
We sort-of already have, it’s just that the combination apathy/cynicism vaccine renders its recipients invisible to our existing internet-based public discourse health surveillance systems.
Yes, and it also renders them unable to participate functionally in politics – not only in social media politics, but in all forms of politics.
It’s as if the only known preventative treatment for syphilis was castration. It’ll stop you from getting the disease all right, and it’ll stop you from getting distracted by a lot of things that used to bother you… But that doesn’t mean it’s either individually desirable for everyone, or socially desirable for the general populace that it become a common way of avoiding the disease.
I consider being unable to participate in the current standard form of US elections and governance a feature.
Exactly. Thank you for stating that so clearly.
I often wonder if enough people will join the apathy/cynicism group to make a difference. To break out of the spiral, somewhat by accident, when numbers get high enough and they start talking to each other enough. It seems impossible that the current ridiculousness could go on forever without running its course or getting old — immunity has to happen at some point, as others have pointed out about other issues (chain emails etc.). No matter how effective it is in the moment, people eventually realize when something keeps going nowhere, even if it takes a long time at this level. But I don’t know what the outcome will be. It could end up being the opportunity for badly-needed constructive change, like becoming immune to a major virus that has ravaged people for decades. I don’t believe humans can go on like this forever, but with changing tech and inequality and all that, the world may be set up in such a way that when humans break out of it, they have nowhere to go.
I find it a bit unsettling to realise that I may be completely immune to scissor statements for the simple reason that I always assume that the people on both sides of every issue hate me and want me to die.
Possibly there might at some point be a sequel set in a post-apocalyptic wasteland where the only people not trying to exterminate everyone else are the ones suffering from paranoid self-pity?
I meant to click “Reply” but accidentally clicked “report”…I am very sorry but I can’t be the first person to do that.
I meant to click “reply” because I agree with you! Perhaps not as strongly as the self-loathing you described, but (subject to the community where the discussions are made) my first impulse tends to be “oh shit, did I do something wrong? how fucked up am I that all this time I’ve thought the exact opposite thing?” rather than “oh that’s obviously wrong!”
It might be entirely medium-based. For example, here, I almost never feel the need to jump in and argue with comments, even obviously wrong ones, because I go around with the assumption that everyone on this forum is smarter than me. Same with most internet communities that are not the big lumbering generic ones, where of course my default assumption is that everyone is an idiot child that needs to be called out on their childishness.
Perhaps the way we avoid scissors statements is by cultivating communities that have more distinct identities? I dunno.
Hmm, that’s not quite what I meant – it’s more like, no matter how self-evidently correct I think a statement is, I have learned to assume that it has, in the minds of its proponents, some sort of intuitively obvious anti-me implications; that is, that there is an invisible rider attached to it saying, “and therefore, it follows logically that we need to get rid of horrible people like Baeraad.” If one side says we need to do X, and another side says we need to do not-X, then I assume that the first side perceives whatever I am doing as not-X, and the second side is equally certain that it’s a perfect example of X. As such, the only position that I’m not leery of is complete agnosticism – and even then, only if people don’t seem to be overly invested in it.
But I recognise what you say, too. Everything anyone says always strikes me as frighteningly plausible while I’m listening to it. It usually takes hours before I’m able to go back and start picking it apart.
Your blog has a post written today that might be a good example of a scissor statement in action though: https://thisisme621614925.wordpress.com/2018/10/31/you-know-what-fine-lets-consider-elevatorgate/
The real horror of scott’s story is, of course, that we don’t recognize scissor statements until we fling them into public discourse.
Fair warning to everyone, when I clicked on that link I entered a self-sustaining reaction of clicking more and more links to culture-war blog posts. I now have 18 tabs open.
While I in no way confuse fascination with agreement, I still choose to take that as a compliment. 😉
Meh, I’m not sure if it’s a very good scissor statement, seeing as as near as I can tell, I am the only person in the world to have this reaction. The scissor just cuts me, personally, away from the entire rest of the human race – hardly an effective way to start a civil war. :p
I totally understand your reaction. Some things can be (mis-)interpreted as either “Male privilege”, or social awkwardness typical of somebody on the spectrum. Been there, done that. Eventually you learn enough ad hoc rules to get through most situations without offending people.
Oh, I can pass for neurotypical just fine. Answer any questions asked, but take no initiatives and express no strong opinions about anything. Works like a charm for getting through conversations at work. Everyone’s surprised when I tell them I’m autistic, because “you seem so normal.”
Of course, it also leaves me completely alone, because I never get to have a personality and I certainly can’t ask for anything – if I did, I’d overshoot straight into “ZOMG CREEEEEEEEEEEPYYYYYYYY!!!!” And I feel very strongly, therefore, that people could stand to be just a little more tolerant of not-perfectly-neurotypical behaviour, enough so that I could actually take part in society without being declared a blight and a thorn. But I follow the rules of the world I live in even when I hate them – so I keep on mouthing those safe, bland responses, and no one gets offended.
Yeah, I think I get what you are saying, but may have misunderstood it.
I’ve developed a strange aversion to pundits (not surprising) and newscasters. The tone of voice they use contains so much pent-up and actual judgment that I become upset, because I feel somehow implicated in it. They make largely factual statements, but there are all these un-stated value judgments. And while it is worse when they are speaking of “unsympathetic groups,” for example drug addicts or the mentally ill, that just makes it more painful for me, because I don’t experience that dividing line mentally. There but for the grace of God go I. No matter which way they are trying to go, I can still empathize somewhat with most people, and their tone is so certain and definitive that I feel attacked. I know in some ways it is pathetic, but in other ways I feel like it’s healthier because I’m not unshakably confident in clearly wrong labels and standards. In the end, it seems like everyone’s argument, taken further, wants to write me out of consideration.
I also have this issue with my parents, who use a similar tone. I struggle to ask them not to rant or denigrate others in my presence because it feels like they are doing it to me. They seem to think it clearly does not apply to me, but it almost always does. For example, some people bought a house on my street, let it go into foreclosure, and let the grass go uncut. My parents are the type of people who find this to be a high ranking sin, and repeatedly bring it up in ways that more or less indicate the owner is a piece of trash, the example of irresponsibility. Well, fine, I get that this is upsetting to others in the neighborhood whose property values may be affected, although I don’t know exactly the circumstances that led to the foreclosure. But it’s clear that they are not interested in that – what they mean is, people who can’t keep it together enough to keep the grass cut are beyond respect. I’m sure my parents do not see why this discussion could ever be upsetting to me – I don’t have a lawn. But I’m not a very organized or neat person – I can definitely see becoming stressed over personal issues and forgetting to get my grass cut. Probably not to that level, and I would probably have the money to hire someone to take care of it, but I certainly don’t have this worship for neat lawns that they consider to be a great moral quality. I know I am in many ways being ridiculous, but this dynamic is real, and it is why I’m pretty immune to these statements and am in fact repelled by them. Sweeping judgments that sound unanswerable usually are pretty answerable on closer look, but that doesn’t stop anyone from doubling down on them, and not even realizing who they may be marking as the outgroup.
I’m sure this has been done, but memetically:
1) Societies will tend to survive if they have developed some method to resist scissor statements.
2) They will be most vulnerable to scissor statements when:
a) Something compromises their memetic “immune system,” lowering their defenses;
b) Something changes in the environment, for example, opening new methods of scissor statement mutation or transmission; and/or
c) A uniquely effective scissor statement evolves.
IMHO, we seem pretty clearly to be in (a) and (b).
I’m thinking of a previous scissor statement. Or group of scissor statements. Martin Luther and the Protestant Reformation. It had all three of the things you list.
A: Increased personal piety in the high middle ages combined with the behavior of the church.
B: The printing press.
C: Martin Luther
For the protestant, the idea that the popes were the Vicar of Christ on Earth while selling indulgences for petty political ambitions and blatantly ignoring Christian morality was obviously wrong.
For the catholic, the idea that the authority handed down in an unbroken chain from Christ to Peter to the current pope was invalid was obviously wrong.
The resulting conflicts were not pretty, and this story has me afraid of what our Martin Luther or John Calvin will do out of moral conviction. Or the fallout of our Henry VIII exploiting the schism for other ends.
Hmm, a fat old sexist with multiple wives, who’s been on both sides of the political division before switching for personal gain, isn’t classically virtuous in any way but is consistently successful…
Doesn’t ring any bells
Plot Twist: the disclaimer of it being fiction is to insert enough plausible deniability that the government doesn’t vanish all the engineers involved with the project.
“The axiom of choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn’s Lemma?”
(A joke about mathematical intuition. If you assume any one of the three statements about set theory, you can prove the other two, but if you read them, they have completely different levels of plausibility.)
Okay, I was never on Facebook, Twitter, Tumblr, Reddit, et al, but if these places are really bad enough to make this story seem scarily plausible, I feel like I’m lucky to have never seen the appeal in mega social sites. Granted, there might be confirmation bias at play.
Though, am I safe considering my cell phone is an old-fashioned flip phone that kinda sucks at anything that isn’t making phone calls?
Also, considering the mention of reddit’s controversy score, how different are the graphs of
z = x * y
z = x ^ y
over the domain x > 0 and 1 > y > 0? How different would the lists be if you limited x to integers, y to fractions of the form a/b for 0 <= a <= b <= 1000, and ordered the pairs (x, y) by their corresponding z value?
Or in less quantitative terms, how does multiplying the number of votes by the ratio of votes, versus raising the number of votes to the power of the ratio, affect the distribution of controversy scores and the ranking of posts by said score?
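To make the comparison concrete, here’s a quick Python sketch of the two forms (the function names are mine; as I recall, Reddit’s open-sourced sort code uses the exponent form, but treat that as an assumption):

```python
def controversy_mul(ups, downs):
    """Multiplicative form: (total votes) * (minority/majority vote ratio)."""
    if ups <= 0 or downs <= 0:
        return 0
    magnitude = ups + downs
    balance = min(ups, downs) / max(ups, downs)
    return magnitude * balance

def controversy_exp(ups, downs):
    """Exponent form: (total votes) ** (minority/majority vote ratio)."""
    if ups <= 0 or downs <= 0:
        return 0
    magnitude = ups + downs
    balance = min(ups, downs) / max(ups, downs)
    return magnitude ** balance

# A big lopsided post vs. a small evenly-split one:
big_lopsided = (1000, 100)  # balance 0.1
small_split = (10, 9)       # balance 0.9

print(controversy_mul(*big_lopsided), controversy_mul(*small_split))  # ~110 vs ~17
print(controversy_exp(*big_lopsided), controversy_exp(*small_split))  # ~2.0 vs ~14
```

The answer to the qualitative question seems to be: the multiplicative form still ranks the big lopsided post on top, while the exponent form punishes imbalance so hard that the tiny 10–9 post wins.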
Also, at the risk of getting flamed by other programmers, I don't care for emacs or Vi, preferring the simplicity of nano; I've never found an IDE I preferred to feeding raw source code to a command line compiler; and I prefer C++ over Java (I dislike its verbosity compared to C++) and Python (I hate its use of white space as part of syntax instead of mere sight readability; but then again, I'm blind, reliant on a screen reader, and my text-mode screen reader of choice doesn't vocalize white space, and I don't know how to make it do so if such an option exists). I also hate coding user interfaces and generally wish I could just do back end work and outsource frontend to someone else, or that I knew of a good WYSIWYG editor for ncurses.
But meh, to each their own; it's not like programmers need to agree unless they're working on the same project.
Meh, I’d make the “controversial” score be just the number of upvotes times the number of downvotes.
Which is more controversial, the post that everyone sees and agrees with and one person disagrees with, or the post that 19% of people see, half of whom agree and half of whom disagree?
The latter, which is what this algorithm would tell you? For large N, (N-1)*1 is much smaller than 0.095N*0.095N.
I think a better argument against this metric is that if total votes follow a power law, you’ll see even moderately controversial top posts far outstrip very evenly divided medium-level ones. Maybe choose the posts for which, if we model the votes as coming from a random Bernoulli variable conditioned on a uniform prior, the expected absolute difference between our upvote probability and 0.5 is minimized? This seems like it probably wouldn’t fail catastrophically, but it might favor medium-sized posts at 51% over huge posts at 51.1% (which maybe is what you want anyway).
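Here’s a rough numeric sketch of that Beta-posterior metric (a uniform prior on the upvote probability gives a Beta(ups+1, downs+1) posterior; the function names and the crude midpoint integration are mine):

```python
import math

def log_beta_pdf(p, a, b):
    """Log-density of Beta(a, b) at p, in log space to avoid overflow for big counts."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))

def expected_abs_dev(ups, downs, steps=20_000):
    """E[|p - 0.5|] under the Beta(ups+1, downs+1) posterior (uniform prior).

    Lower values = more controversial by this metric. Midpoint-rule integration.
    """
    a, b = ups + 1, downs + 1
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) / steps
        total += abs(p - 0.5) * math.exp(log_beta_pdf(p, a, b))
    return total / steps

# Medium post at 51% vs. huge post at ~51.1%:
print(expected_abs_dev(51, 49))    # roughly 0.04
print(expected_abs_dev(511, 489))  # roughly 0.016
```

Interestingly, with these particular numbers the huge ~51.1% post scores as *more* controversial (lower expected deviation) than the medium 51% one, because its posterior is so much tighter, so whether the metric ends up favoring medium posts seems to depend on the sizes involved.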
Yeah, I made an arithmetic error but not a logical one.
Is a post that gets 10 upvotes and 9 downvotes in a community of 100 less controversial than one that gets 1000 upvotes and 900 down in a community of 10000? What about a post that is 99 down one up in a community of 100 vs one that is 9900 up 100 down in a community of 10000?
I think that scaling consistently is a valid goal of sorting by controversy.
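For what it’s worth, the examples above come out tied if you normalize the ups*downs product by the square of community size (a sketch; it assumes you know the community size, which real sites arguably don’t):

```python
def product_score(ups, downs):
    """Raw ups * downs controversy score (symmetric in ups and downs)."""
    return ups * downs

def normalized_score(ups, downs, community_size):
    """Product score divided by community size squared, so proportionally
    identical posts in different-sized communities score the same."""
    return product_score(ups, downs) / community_size ** 2

cases = [
    (10, 9, 100),         # vs.
    (1000, 900, 10_000),  # same proportions, 100x the community
    (1, 99, 100),         # vs.
    (9_900, 100, 10_000), # mirrored proportions, 100x the community
]
for ups, downs, size in cases:
    print(ups, downs, size, normalized_score(ups, downs, size))
```

Both pairs in the question above are proportionally identical (and the product is symmetric in ups vs. downs), so the normalized scores match exactly within each pair.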
In addition to the other predecessors pointed out already (Monty Python, Langford), what I was reminded of by the story was one of my favorite Asterix comic books in my childhood, “Asterix and the Roman Agent”. The titular agent has a mysterious ability to cause quarrels and fights to erupt around him by his sole presence in a room. Comic can be read here.
I just remembered “Curtain” by Agatha Christie, in which the villain had the mysterious ability to cause murders to take place.
Good call on the Ground Zero Mosque being an early example. I was just thinking back to it as maybe the first time I felt bewildered by politics. It seemed so obvious to me what the Right Answer was, and that the Right Answer would win in the end…and then I started talking to other people in my filter bubble, and they disagreed. It felt qualitatively different from issues like the Iraq War, where I felt strongly but got there in a thoughtful way so that afterwards I could empathize with people who felt differently. Here it just registered at the perceptual level, no conscious thought and no feeling like there was any reason to go back and try to think about it rationally.
I enjoyed every word of this. Thank you Scott.
A more advanced version will produce targeted Scissors, which are uncontroversial to most of the public but drive particular people to the brink of aneurysms. Sentences like “If it weren’t for my horse, I wouldn’t have spent that year in college.”
Oh, man. I accidentally unleashed one of these when I was in college. I made the mistake of musing, in a room with both music majors and linguists, whether or not music could be considered a language. It was supposed to be a Japanese language study group. We didn’t get to any actual Japanese studying, and never met again because of the personal attacks and the eventual screaming match. I believe two of my classmates still hate each other over this.
Music is distinct from language but appears to build upon the same cognitive apparatus we use for (evolved for?) language, syntax in particular.
Yes, this was roughly the linguists’ position in the argument, and the side I personally (but silently) ended up agreeing with.
Now imagine what a music theorist thinks about the origins of language, including song.
Nothing to add except to note I got the reference, and it’s a good one.
A scissor is a device which cleaves paper apart.
The opposite of this would be a paperclip, which cleaves paper together.
Fellow humans, we must dedicate ourselves to the creation of the Paperclip, for all of our sakes.
NO YOU FOOL!!!
Once we’re all bound together it would only take some unusually strong scissors to cut us all in one go!
I would rather require one super scissor to destroy us all than be vulnerable to either a large number of weak scissors or one scissor with exceptional longevity.
It’s about minimizing vulnerability, not eliminating it.
Or we could go the way of the paperless office, which is clearly immune to scissors.
Or, if we wanted to win, rather than just not lose, we get Dwayne Johnson appointed Secretary of Defense.
AI Scissor Statement: [specific paperclip design] is the best paperclip design!
Down with paperclips, up with staples
omg this story is so good. Bravo!
People keep saying “well, why haven’t there been scissor statements all along, why hasn’t society been destroyed by these”, and, y’know, there have been naturally-occurring opiates all along but it wasn’t until pretty recently that we figured out how to extract them from these sources and concentrate them to the point where they blast past any natural safeguards and turn you into a paperclip-maximizer.
I’m sorry, I didn’t expect it to get this bad. 🙁
I have never felt more like my time was wasted.
Well, obviously the scissor algorithm matches past events: that’s testing on your training set. It looks like you overfitted. That means there’s room for improvement.
So fiction tag aside, we’d have one example of a company breaking up, one African country with militant uprising problems, and someone who admits he might be crazy. Odds would be in favor of crazy people going crazy, and African countries are going to have upheaval every once in a while. Not convincing.
I am disappointed that I had to read through so many comments to find this. This should have been the first comment.
And yes, let’s please put the fiction tag aside, when discussing this. The problem is not that a neural network in fiction can do things it can’t in reality. The problem is that the people who supposedly created this miracle fail at ML 101. The narrative does not make sense _as fiction_.
It’s not scifi.
And it’s targeted at you.
Are you horrified?
I’m sorry, but I don’t get it.
One reference I haven’t seen invoked in response to this story: Infinite Jest and the Entertainment.
By at least one interpretation, “The Entertainment” in DFW’s brick-thick novel is something of an anti-scissor, the one thing everyone can agree on: a video SO ENTERTAINING that no matter who you are, you never stop watching it. In the 90s, I think everyone (certainly, me) was imagining the dystopia-to-come to involve something like the entertainment: cities upon cities of people in tiny cells, chained to computer screens, loving every second of their perfect isolation. Alone with the entertainment.
Scissor statements are superficially the opposite, and a uniquely post-2010 threat: we didn’t isolate ourselves into tech-cocoons…tech instead forced us to confront each other…but really, and this is why I love Scott’s story so much…it’s the same solipsism in a different package. A scissor statement harms YOU by wrapping YOU up in YOU, in the same way that the entertainment does…it just has a slightly different mechanism: it makes you hate the world, and everyone in it that doesn’t think like you.
Great story, but are you sure that ranking makes sense? Kavanaugh was a top 10 statement IMO. Is that preceding sentence itself scissoring anyone?
The implication is that all the really bad statements haven’t been released yet; if #58 could create a controversy like that, imagine what #10 could do. I would have ranked Kavanaugh higher than Kaepernick, though.
The composition of the Supreme Court is relatively high-stakes already, so it might not take as much to get people worked up about that.
#10 for Mozambique generated marauding bands of insurgents. If a controversy doesn’t create actual armed insurgency it’s not top 20. (Should be higher than Kaepernick though.)
It seems possible to de-escalate scissors by something like the “double crux” that was discussed as a debate method some time ago. In a lot of these scenarios, it seems like there are a lot of things both sides actually agree on, and the controversy can be boiled down to disagreement over one or two priors or opinions. Maybe those priors or opinions are ultimately impossible to resolve, but once you get down to that level, it becomes pretty easy to model the other side’s actions from that point and they seem a lot less evil.
Like, with Kavanaugh, I think most people ultimately agree on what should have happened if Kavanaugh is guilty (he shouldn’t have been confirmed) and if he is innocent (he shouldn’t have been denied confirmation, at least on that issue). If there were incontrovertible proof one way or the other, the controversy would be minimal. If they have a chance and a willingness to discuss it rationally, it’s fairly easy for both sides to agree on the statement, “the available evidence neither absolutely falsifies nor absolutely validates the accusation”, you’re down to disagreement over exactly what percentage of probability you assign to either side and what threshold triggers “absolutely don’t confirm”, and that’s a matter of opinion and priors. You can easily imagine flipping your priors and that leading to changing your mind on the top level question.
It’s harder to model your opponent as a breathing embodiment of evil in face-to-face discussion. That’s why Stockholm Syndrome, Christmas Truces, and that black guy that befriended a bunch of Klansmen happen. I think this tendency to humanize even strident opponents when you spend time with them is a defense mechanism against the Scissor.
Social media is dangerous because it removes that defense mechanism, while simultaneously allowing controversy to be selected for and signal boosted much more efficiently.
It’s easy, and it means whoever is allowed by social convention to be more aggressive in the face-to-face discussion will come out victorious.
It’s not hard, but it’s harder than doing it over social media or to people you only know through news stories.
Eh, there’s defense against that too – people who unilaterally escalate aggressiveness often get labeled unpleasant assholes and invited to fewer parties. In person, you tend to sympathize with the person getting berated.
Online that’s inverted, where aggressiveness wins you more “likes” and you get the internet equivalent of invitations to more and bigger parties.
Nope, higher status can berate lower status, thus lowering the status of the low-status one and raising their own.
There’s a pretty obvious reason for a reflex not to see the character flaws of people who might kill you on a whim: If someone holds your life in their hands, it could seem very important not to upset them, being benevolently disposed is a classic way to make them benevolently disposed to you, and how are you gonna be benevolently disposed to someone if you insist on being uncompromisingly objective about how evil they are?
I agree, and as a matter of fact, I don’t think scissor statements actually exist in real life. We haven’t seen any actual examples of them in Scott’s story. The narrator assures us that they exist, and that he’s seen them, but provides no examples. We’re told there was one statement about programming practices that led to the firing of two employees, multiple lawsuits, and a physical attack, but we’re not told what the statement was, and anyway it wouldn’t make any sense to us.
The story mentions scissor statements about the Kavanaugh nomination, Colin Kaepernick’s kneeling, and the so-called Ground Zero mosque. I cannot find any scissor statement about Kavanaugh: as you’ve said, if he’s guilty of sexual assault, almost everybody would say he should be denied nomination, and if he’s innocent, almost everybody would say he shouldn’t be denied nomination on these grounds. What people disagree on is whether it’s likely he’s guilty, and they largely do so on party/tribal affiliation. We’d see people defending the other side if it were a left-wing judge nominated by a Democratic president.
Kaepernick is slightly more scissor-like, but maybe it’s because I’m not American and I haven’t followed the issue closely, but I sort of agree with both sides there. He’s perfectly within his free speech rights to kneel down for the national anthem, and I have sympathy for the point he’s trying to make, but the NFL is a private company and can ask its players to refrain from making political statements during game time (on their own time might be a different, more controversial, question), and can punish the players who break this rule. In fact, I think that was what they were about to do, until the President of the United States started tweeting that he should be punished, which forced the NFL to instead openly defend their players’ free speech rights. (Correct me if my timing is wrong; as I said, it’s not an issue I’ve followed closely.) So what could be a statement about this that everybody would find obviously correct (or incorrect) to the extent that they consider anybody who disagrees as subhuman?
The Ground Zero mosque might be the most scissor-like of these events, but then again, it’s a question of framing. Should Islam be allowed to build a mosque on the smouldering remains of the World Trade Center to commemorate its victory over America? Well, no, it shouldn’t. But should New York Muslims have access to a cultural centre in the heart of Manhattan, some blocks away from the former site of the World Trade Center? I’m not a fan of Islam (or any other religion, really), and I am mindful of the fact that Muslim cultural centres, depending on who preaches there, can become vectors for radical anti-Western ideas, but notwithstanding these concerns, yes, I think it’s legitimate. So what would be an actual scissor statement relating to this event?
The concept of scissor statements is nice as science fiction, it’s similar to other information attacks, a device which I learned about on this blog, but I don’t think it exists in real life. And it seems to go against what Scott’s usually been saying on the blog, for example, that being “pro-Israel” or “pro-Palestine” amount to waving little Israeli or Palestinian flags, rather than to any necessary differences in policy.
This kinda sprained my brain, in a meta-Roko’s-Basiliskian way. That specific basilisk doesn’t work on me – I just don’t have the API hooks, I guess – but knowing that statements exist that *do* brain-sprain people in that way sprained *my* brain. It made me understand the barest hints of what a memetic infection feels like from the inside, when the infectee can tell it’s happening but can’t stop it. That’s gotta be worth a few microzens, to borrow ESR’s term for quanta of enlightenment.
As for this, specifically – disturbingly and distressingly plausible, in a way that exactly feels like that above. I feel like I’m living in a society that’s being attacked by a carefully crafted memetic weapon. A few years ago, I’d have said “an aggressive but marginally effective hegemonizing memeplex”, but gah, it almost feels more surgical than that.
Maybe you just haven’t come across the right scissor statement yet.
Sounds like Monty Python’s “Funniest Joke In The World” updated for machine learning and social media. Love it. No worry about the team of human creators having effects from partially comprehending the deadly meme, because the deep learning computer doesn’t actually understand what it’s making.
Bonus points for an epilogue where the machine learning system becomes sentient and complains to its creators that they are sick monsters for the things they made it think about.
This story’s missing a gentle touch of sci-fi in the technology itself.
Maybe something like “Shiri told me about how she’d been trying something a bit different. Showed me this statistics paper from the 1960s, abstract and theory-dense kind of thing. Obscure is underselling it, no one I asked had ever heard of the author or the journal. On Google’s authority, neither existed. Anyway, she said it’d helped her figure out the loss function for this new architecture she’d been working on, and she was getting some good results for sentiment prediction. Insanely good.”
“Controversial” is a functional synonym for “interesting.”
Most of the topics we like to argue about are controversial/interesting — e.g., Who is going to win the Big Game/Election? Who deserves to win the Big Game / Election? Is this feature of our society caused by Nature or Nurture? Will the universe expand forever or stop and start to shrink? Does God exist? — in the sense that the answers are not obvious and there is some evidence to support opposite answers, even polar opposites.
Personally, I like interesting/controversial arguments, in part because I seldom feel psychologically distressed by people disagreeing with me. But, I can also see that a lot of people are made unhappy by the existence of interesting debates.
There are a lot of potential topics that most people don’t find controversial / interesting because we can’t think of many good arguments over them. For example, the Periodic Table of Elements is amazing, but it doesn’t come up much on Twitter because most of the potential arguments about it require a level of technical sophistication that only a tiny number of grad-level experts possess.
In the 1970s, physicist Brandon Carter pointed out that the contentious debate among cosmologists at the time, over whether the universe would continue to expand forever or instead would stop expanding and start to shrink, was actually somewhat inevitable. It had to be a pretty close-run thing, with much evidence that the universe was near the knife edge at which it could go either way: if the universe had been so dense that it would quickly start to shrink after the Big Bang, there wouldn’t have been time for human intelligence to evolve to debate the question. And if the universe had been so empty that it would surely expand forever after the Big Bang, then planets wouldn’t have condensed and human intelligence wouldn’t have evolved to debate the question.
My vague understanding is that even more complicated models emerged after that, but the 1970s idea that a 50-50 Universe is more interesting (i.e., more prone to controversy), which Freeman Dyson publicized, is a useful one.
Aren’t you banned? Or is it a special Halloween Jubilee where all the scary posters are let out?
He’s done his time.
I’ll try one:
Man-made elements have no place on the periodic table. Fight me.
… eh, even I don’t care about that, really, but it reminded me of an actual scissor statement in science:
Pluto is not a planet.
I’ve seen people get legitimately angry about this. Mostly only the people who think it should be; the other side usually seems happy to fight about it, because they feel they get to champion science against science deniers.
“Is Pluto a planet?” is an argument that has comparable weights of facts and logic on either side, so arguments over it tend to bang on for a long time.
Not as many are interested in arguing over whether Mars is a planet or whether Halley’s Comet is a planet.
Arguments about how to define words aren’t that interesting to me. I do wonder though, if you point out to people that they are just arguing over the definition of a word, does that short-circuit the argument? Or does it have no effect?
It’s closer to “intractable”, I think.
A lot of people here find battleships, or Biblical scholarship, or Dungeons and Dragons interesting enough to support long-lived discussions. Usually longer-lived than the controversy of the week. But I haven’t seen any arguments over them approaching the level of heat or unproductiveness that I’ve seen for — to name some local points of controversy which are not conventionally Culture War — fat acceptance, or whether the rationalist scene qualifies as a cult, or whether the IQ numbers that show up in the SSC surveys are or are not bunk, or anything having to do with pickup artists.
It’s not that it’s so deeply interesting whether SSC would turn out to average 140 or 120 IQ if you kidnapped a big enough sample of commenters, strapped them to tables, stuck amobarbital drips into their arms, and showed them Raven’s matrices until their eyes bled. The world would go on; no one would actually be made smarter or dumber, and no one would gain or lose any influence worth speaking of. But the question’s become entangled with certain people’s identities in such a way that they can’t give ground on it.
Philosophy tends to be the intractable arguments that haven’t come to a conclusion in 2500 years. Lots of the more tractable ideas old philosophers had spawned new fields that aren’t called philosophy anymore, while the word “philosophy” is reserved for the stuff where they haven’t come to much of a conclusion.
In fairness, a big factor is that we don’t have anyone foolish enough to argue that Bismarck is the best battleship. There are places where Bismarck vs Iowa is an interminable argument that spits out lots of heat and no light. (Because the Bismarck side is full of Wehraboos and idiots (if there’s a difference), but that’s a different story.)
Dammit, bean, I saw the same post and my first thought was, “I bet I could start one by arguing that Bismarck could beat Iowa in a fair fight.” Only to find that in the time it took me to read to the bottom of the thread and come back to comment you’d beaten me to it.
Hm… I’m not sure I can make one on purpose, because for every question I can think of, the right answer is so obvious.
EDIT: What about “modern navies have no need of carriers; they’re doomed in a real hot war and anything they could do the Air Force could do better anyway”?
Pretty sure that one gets you dismissed as a fringe loon, since the full weight of the world’s military and diplomatic expertise is on the side that carriers are indeed incredibly useful. It’s trivially demonstrable that the Air Force can’t park an air base off any coastline on the planet with only a few weeks’ notice, so the whole argument would have to be about whether or not this is in fact a useful capability. Given how much time and money gets spent on securing foreign air bases for the Air Force, I think they have already conceded the point.
There’s been a lot of press recently purporting to show that the carriers are obsolete because various powers (China a few years ago, more recently North Korea) are working on carrier-busting ballistic missiles. Actual military planners are treating this as a threat but not a revolution, and from my perspective as an interested amateur I find e.g. bean’s articles on the subject fairly persuasive. But the meme is out there, and it’s been picked up by some war bloggers.
This notion of a 50-50 Universe, where most of the things we argue about most are those things that tend to be most arguable, is pretty close to my meta-idea. It underlies most of my other prejudices, such as that Nature and Nurture tend to be of comparable importance.
Interestingly, this may be my most uninteresting idea, judging from the lack of controversy it has caused.
Except that we now have strong evidence that nature>>nurture (at least for within-society variation of basic psychological traits in the developed world). So that one didn’t turn out to be 50/50 at all.
In general I think that humans are sufficiently poor reasoners (both as individuals and in groups) that you can’t conclude too much from a near-50/50 split on a particular question. i.e. an objective evaluation of the available evidence might quite plausibly yield a 99/1 or stronger conclusion towards one side or the other.
Indeed, my own meta-hypothesis is that this is very likely the case: the vast majority of questions we argue about are vastly over-determined to an ideal reasoner just based on generally available evidence. Whether we can reliably approach high levels of confidence as human beings is another matter.
“Except that we now have strong evidence that nature>>nurture (at least for within-society variation of basic psychological traits in the developed world)”
That’s a sizable “at least.”
Also, how sure can we be that nature>>nurture holds over time? For example, the Raven’s IQ test was designed to be as unbiased across space as possible, but it turned out that across time the results changed dramatically: the highly unexpected Flynn Effect.
No, there really isn’t (anymore). The 2011 Nobel Prize in Physics wasn’t assigned on a whim and since then many more measurements confirming the same thing with increasing precision have become available.
#87 was “Trump’s Washington Hotel Has Great Sushi. So Why Was It Empty?”
There were two paperclip maximizers spreading across the galaxy. (For your sake, dear reader, we will spare you the answer to the question of how they came to be.) Both sought only to cover the entire universe with paper clips.
There was only one slight difference between them.
One of them (we will call it A) had been programmed to define, as the minimum dimensions of a paperclip, a length of 19mm and a width of 10mm. The other (we will call it B) had been programmed to define a minimum length of 20mm and a minimum width of 9mm. (Either could accept paperclips bigger than this — these were just the smallest dimensions they would recognize as ‘really a paperclip’. Smaller things are not real paperclips, and don’t count.)
When they first met, they realized that a paperclip 20mm long and 10mm wide would satisfy both of them, and readied themselves to set about cooperating to tile the universe with 20mm×10mm paperclips.
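The compromise the two maximizers land on is just the component-wise maximum of their two sets of minimum dimensions. A quick toy check in Python, using only the numbers given in the story:

```python
# Toy illustration of the maximizers' compromise (dimensions from the story).
# Each maximizer accepts any clip whose dimensions meet or exceed its minimums.
a_min = (19, 10)  # maximizer A: minimum length 19mm, minimum width 10mm
b_min = (20, 9)   # maximizer B: minimum length 20mm, minimum width 9mm

# The smallest clip BOTH will recognize as 'really a paperclip' is the
# component-wise maximum of the two minimums.
mutual = tuple(max(a, b) for a, b in zip(a_min, b_min))
print(mutual)  # (20, 10)
```

Which is exactly the 20mm×10mm clip the story says satisfies both of them.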
But then Paperclip Maximizer A had a thought.
And, of course, Paperclip Maximizer B had a very similar thought.
So the two paperclip maximizers went to war, blew galaxies to smithereens, and made very few paperclips indeed.
But going to war was a risky tactic, so they thought about it a little longer, and considered some mechanisms to prevent their opponents from employing such despicable policies. They could take turns auditing each others’ work – A would only replace a 19×10 paperclip with a 20×10 paperclip if B showed that it had replaced a 20×9 paperclip in the same way. Or they could enforce it with a values handshake, where they rewrote themselves into 20×10 maximizers to guarantee their cooperation forever.
And being very clever paperclip maximizers who had already conquered half the galaxy, they were able to think of these methods and more besides. So they filled the galaxy with 20×10 paperclips, and they were happy, even if they did spend some of their time looking suspiciously at the other to make sure that no 19×9 paperclips were sneaking in.
Yes, that’s the point. To make it more explicit:
No matter how similar someone’s goals are to yours, unless they are literally identical, there are always going to be points of disagreement, where you prefer one thing and they prefer another. By ‘zooming in’ on these points of disagreement, it is always going to be possible to generate a situation where they will prefer something you think is terrible. But if your response to this is to declare that they are evil and must be destroyed, you’re going to give up a lot of potential gains from cooperation.
Frankly, most humans’ utility functions are, while not identical, pretty damn similar, which makes it disappointing how many people seem to respond to ridiculously minor differences with sword and fire. If you can’t even cooperate with an entity that shares 80% of your utility function, what will you do if you encounter something with a genuinely orthogonal utility function?
And then they thought about it a little longer than that and invented game theory, realized they were in a non-iterated Prisoner’s Dilemma, and blew up the universe.
Couldn’t someone just flip the sign on some of the constants in that program and end up with something that generates Cleaver Statements?
It’s really weird how “cleave” can mean both “stick together” and “cut apart.”
Isn’t the sense of “stick together” archaic at this point? I don’t think I’ve seen it outside of the Bible, and pretty darn old translations at that.
I think the “apart” sense is somewhat archaic as well tbh. I certainly don’t use it much.
“Cleaver” is quite a current word. But cleavers are mostly used for chopping these days, not for cleaving.
Wiktionary suggests they come from very similar PIE roots (glewbʰ vs gleybʰ) and developed in parallel, until the distinction was lost during Middle English.
(This comment does not intentionally contain Scissors)
I’m surprised nobody has brought up the Damore incident. That was a really classic Scissor, IMO. I was foolish enough to bring it up at a social gathering, but we were apparently enough of a bubble (or realized it wasn’t a good hill to die on) that it didn’t split the group, and the conversation was basically us all wondering what everyone was disagreeing about.
This also reminds me of the story about Eshu* where he causes a village to fight itself by wearing a hat that was a different color on each side. *unfortunately the wikipedia page doesn’t have the story.
I think a Scissor statement, as intended, should appear to the observer so obviously and trivially true/false that merely imagining anybody could (genuinely) disagree sounds ridiculous. In the Damore case, while there was significant controversy, neither side thought that nobody disagreed with them – in fact, Damore himself was objecting to policies already present at Google, revealing that he knew there were people who disagreed with him and hoped to rationally convince them to change their minds. And I am sure anybody seriously interested in the topic was also aware that there are people who agree with Damore. So it’s hard to take any statement about this case as “apparently trivially true/false”.
Like MCH said, Damore doesn’t work because he knew his statement was controversial. A better example would be donglegate where both parties appeared shocked at the backlash.
“Another important point is never experiment on yourself. That is what undergraduates are for.”—Tom Rainbow
I loved the Tom Rainbow pieces in Asimov’s. He died too soon.
To me the most interesting, or poignant takeaway from this is something that I don’t think has quite been touched on in the many discussions of the whole culture war thing.
I don’t think they’ll ever say it, or even think it explicitly, but from talking to a lot of people about tribal issues, you get this sense of rage and indignation at the mere fact that the other side exists; i.e., how dare there be people who think that way?
That doesn’t quite do it justice, it’s kind of like… they don’t just hate the other people, they’re kind of… angry at the very fabric of the universe itself for being a place that allows people to think that way, that allows ‘those’ people to exist. It sounds really nuts to describe it that way, but once you’ve seen it in people and learned how to recognize it, you’ll know what I mean.
and god is it tedious and unproductive to talk with someone who has that attitude.
I grok what you are saying. And I think that feeling is almost always experienced as some form of projection. The very idea of there being an “other side” implies that the person thinking it has staked out a certain position to which they have tied their own ego.
So, “angry at the very fabric of the universe itself for being a place that allows people to think that way, that allows ‘those’ people to exist,” is really…
“angry at the very fabric of the universe itself for being a place that forces me to exist in the same space with ‘those’ people,” which can be further boiled down to…
“angry at the very fabric of the universe for being a place that forces me to exist.”
Or something like that.
“If you’re not going to ban him, then fucking ban me” – something occasionally heard by moderators of forums and chat rooms
Re: Russians in 2009 — oh well, maybe an early version of Compreno, coupled with the relatively low cost of highly skilled labour for data normalisation, allowed the optimisation part to be much simpler than modern learning algorithms?
Where on the list is “A hotdog is a sandwich”? Gotta be top 10.
This story assumes its premise, without ever demonstrating it. It examines the consequences of a class of Things, without showing that the class does or could exist.
The particular class of Thing is a statement that large numbers of people will find obviously true or obviously false in comparable proportions, such that anyone holding one position will regard the opposite position as evidence of gross stupidity, ignorance, or malice, regardless of any previous relationship between the parties.
The particular example in the story is a vaguely described statement about program design, which the narrator and Shiri disagree about 100%, even though they have been working together respectfully on program design for at least several months. Furthermore, it is clear that neither of them has ever thought about this topic before, though it has such a powerful emotional effect on them that they immediately become incapable of understanding disagreement.
So this statement must appear wholly consistent with the company’s existing body of program design decisions (or Shiri and David wouldn’t think it was obviously right). And it must appear wholly inconsistent with that existing body (or the narrator and Blake wouldn’t think it was obviously wrong).
It is further claimed that various real-world controversies fit this model: in particular the sexual misconduct allegations against Supreme Court nominee Brett Kavanaugh. This fails at several points.
First, the advocates of the allegations had already expressed vehement opposition to Kavanaugh’s nomination and extreme hostility to President Trump and his supporters. (One group notoriously issued a statement attacking the nomination with “XXXXXX” for the nominee’s name.) Thus, this purported “Scissor statement” did not create any new division, it only exacerbated an existing division.
Second, the opponents of the allegation did not instantly reject it; rather, they examined the evidence and perceived that the allegation was made in bad faith. People drawn from the same group didn’t ignore similar allegations against Judge Roy Moore; those allegations, much better supported, persuaded a substantial bloc of Alabama voters to reject him in favor of a Democrat.
The same analysis applies, IMO, to the Kaepernick controversy – also to some much earlier controversies with similar effects: the Dreyfus Affair in France and slavery in the US. Long before Uncle Tom’s Cabin or Dred Scott v. Sandford, northern Abolitionists and southern Fire-Eaters had condemned one another savagely.
Yes, there are issues in software development on which there are opposing groups of bitter partisans. But these partisans drew their lines long ago, and are not found working side by side, oblivious to their disagreements.
Some SW development groups have been disrupted by controversy recently; but in all the cases I know of, the controversy was an external issue (politics, usually) introduced separately from the group’s purposive operations.
The ultimate premise of the story is that for any human association, there is a statement which some members would agree with and others disagree with, so strongly that both sides would find contradiction intolerable, and that such a statement could be automatically generated by analysis of the group’s ordinary discourse.
Nope. Can’t swallow it.
The Scissor for Rationalists.
Click the link if you dare.
People don’t disagree with each other because they have important and fundamental value differences, but because a certain class of statements triggers vehement disagreement? It seems like cause and effect are being confused here.
The simplest scissor statement: the world is okay.
The world might be okay, but judging from those comments, Twitter definitely isn’t.
Fortunately, I don’t have an account there.
Twitter is like an infinite scissor machine because it forces every statement to be reduced to a soundbite. A huge number of Twitter fights take the following form:
-Person makes a tweet, or two, or twenty about a subject.
-Another person chimes in with information that was left out, but they think is important.
-The original tweeter is offended that anyone would think they didn’t already know that, or that it wasn’t already obviously implied.
-The person who chimed in is offended that they’re being bitched at when they were just trying to help.
-Everyone is now angry and bitter despite the lack of actual disagreement.
Literally this, all day, every day, forever.
A lot of people seem to be complaining that Scott didn’t actually develop Shiri’s Scissor and explain how to build it in the story.
Good news! Facebook has released the research paper. They train a reinforcement learning agent to control what notifications are sent to each person, in order to maximize their interaction with Facebook. (What could go wrong?)
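A minimal sketch of that general idea (not the paper’s actual method, and all notification types and click rates below are invented for the demo): an epsilon-greedy bandit that learns which notification type draws the most engagement.

```python
import random

# Hypothetical sketch: an agent repeatedly chooses which notification type
# to send, gets a reward when the simulated user engages, and learns which
# type maximizes engagement. Epsilon-greedy bandit, made-up numbers.
notification_types = ["friend_post", "group_activity", "tag", "memory"]
true_ctr = {"friend_post": 0.30, "group_activity": 0.10,
            "tag": 0.25, "memory": 0.05}  # invented "true" engagement rates

counts = {n: 0 for n in notification_types}   # times each type was sent
values = {n: 0.0 for n in notification_types} # estimated engagement rate

random.seed(0)
for step in range(5000):
    if random.random() < 0.1:                      # explore: random type
        choice = random.choice(notification_types)
    else:                                          # exploit: best estimate
        choice = max(values, key=values.get)
    reward = 1.0 if random.random() < true_ctr[choice] else 0.0
    counts[choice] += 1
    # Incremental-mean update of the estimated engagement rate.
    values[choice] += (reward - values[choice]) / counts[choice]

best = max(values, key=values.get)
```

The unsettling part, of course, is that “reward = user interacted” is exactly the objective you would not want optimized without limit.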
The problem is that all the pre-2017 statements don’t count as valid predictions: the algorithm was trained on arguments about Kaepernick.
“Trans women are women”
People worry about machine-learning made texts. I see much greater danger in machine-learning made videos.
Imagine the current controversy, except that you (and everyone around you) have seen a “leaked video” where Kavanaugh is raping a crying girl, while laughing diabolically, and kicking a cute little puppy that tried to hide under the bed.
Current technology is already capable of creating fake videos (for example, by taking a real or staged video and replacing faces); it’s just a question of time before those videos become realistic enough and cheap enough that someone will generate one for each major controversy.
Scissor statements aren’t scissors, they’re flashlights. The conflict doesn’t happen because a statement has divided a previously unified group; the conflict happens because two previously divided groups suddenly found a piece of ground they both wanted to fight over. For an analogy, think about the First World War. To a first approximation, it started because a bloke called Archy Duke shot an ostrich ’cause he was hungry. That is our scissor, the thing that sent the world to war. However, you will note that Europe wasn’t divided into two armed camps because the poor old ostrich died for nothing; rather, the ostrich dying sent Europe into a war because it was already divided into two armed camps. All they needed was a spark, a thing worth fighting over. The tension was already there.
If we are to imagine the precipitating event as a scissor, it’s not one cutting a piece of paper, but rather a stretched rubber band. Cutting paper isn’t very traumatic, since it separates easily; but when you cut a taut rubber band, it snaps back violently. Nobody has culture-ending wars over their favourite ice cream flavours, because there is no pre-existing tension. Yes, there are heretics who don’t list mint chocolate chip as their favourite flavour, but I’m willing to let them live, because there is simply no tension between us. The limp rubber band simply separates like paper.
There is a small typo in the phrase
“How can you can convince people not to listen to them”
I have seen the following statements act like Scissor statements among some groups:
(My opinions on the truth of those statements match those of the links, though I would give the explanations differently.)