[self plagiarism notice: this is mostly copied from last year’s contest announcement]
1. Announcing the second annual Adversarial Collaboration Contest
An adversarial collaboration is an effort by two people with opposing opinions on a topic to collaborate on a summary of the evidence. Just as we hope that a trial with both prosecutor and defense will give the jury a balanced view of the evidence for and against a suspect, so we hope an adversarial collaboration will give readers a balanced view of evidence for and against some thesis. It’s typically done for scientific papers, but I’m excited about the possibility of people applying the concept to less formal writeups as well.
For example, a pro-gun activist might collaborate with an anti-gun activist to write a joint article on the evidence for whether gun control saves lives. We trust each person to make sure the best evidence for their respective side is included. We also trust that they’ll fact-check each other and make sure there aren’t any errors or falsehoods in the final document. There might be a lot of debating, but it will happen on high-bandwidth informal channels behind the scenes, and nobody will feel like they have to tailor their debating to sound good for an audience.
Last year, SSC held an adversarial collaboration contest. You can see the entries here:
1. Does The Education System Adequately Serve Advanced Students?
2. Should Transgender Children Transition?
3. Should Childhood Vaccination Be Mandatory?
4. Are Islam And Liberal Democracy Compatible?
I want to repeat the contest this year. Prize money depends on how many people enter (see terms below) but will probably be around $2500 (thanks to people who support this blog on Patreon). All entries that meet a minimum level of quality will also be published on SSC.
2. How To Form A Team
Setting up teams was chaotic last year, so I’m going to try to be more organized about it. If you want to participate, please post a top-level comment on this post stating the topic you’re interested in, your position on it, any other relevant information, and an email address that would-be partners can contact you at. Make the comment in bold so that it stands out against the background of people idly discussing things. For example:
Hi, I want a partner for a collaboration on whether the moon is made of green cheese (I think yes). I am an astrogastronomy PhD student and would prefer to work with someone else who has at least degree-level knowledge in the field. Email me at fake[at]example[dot]com
If you’re interested in someone else’s topic, please send them an email. Don’t reply to their comment, since a) the person might not see it, and b) it might discourage other people from replying, when the proposer might want to get contacted by more prospective partners so they can see who would be the best fit. Once two people have agreed to be a team, the person with the top-level comment can edit it to clarify they’re not interested in taking more offers.
Once two people have agreed to be a team, please email me, scott[at]slatestarcodex[dot]com, with your names and the topic you’re working on.
3. Terms And Conditions
1. You should write an essay presenting your joint summary of the evidence regarding a controversial topic you disagree on. I strongly recommend that this be a single factual issue, like “Does gun control save lives on net?”, rather than a vaguer moral question like “Guns – good or bad?”, though it can still be a pretty broad topic – I would love to see people write about Caplan’s case against education, for example. Even though most of the examples here are political, this doesn’t have to be; it could involve controversial topics in medicine, history, religion, et cetera.
2. You will write the essay as a united front. Please don’t write “Alice says this study proves guns save lives, but Bob says it’s wrong and this other study proves guns are bad.” Instead you are going to have to come to an agreement on how to describe each study. For example “Here is a study purporting to show that guns save lives. It seems to accurately describe what is going on in rural areas, but it might be of limited applicability elsewhere.”
3. You will come to at least some sort of unified conclusion, even if that conclusion is “There’s not enough evidence in this field to be sure either way and we should default to our priors/biases”.
4. The essay should be similar in length, tone, and amount-of-research to one of my Much More Than You Wanted To Know essays, eg here and here.
5. By entering the contest, you are giving me permission to publish your essay on SSC (with full attribution to you, of course). You can also publish it wherever else you want. I will probably publish the winning essay, and I might or might not publish the others depending on how good they are.
6. Because of (5), please don’t research any topic that I would not be able to publish on SSC if you came to a taboo conclusion. If you want to do an adversarial collaboration on taboo topics, you can feel free to arrange it in the comments, but it won’t be considered an official entry, it won’t be eligible for prizes, and I probably won’t post it (I might link it if it’s posted somewhere else). If you’re wondering whether a specific topic is taboo, you can ask.
7. Due date is November 1.
8. I won’t hold the contest if fewer than five teams sign up. That’s “sign up”, not “complete their collaboration”; I realize many teams will drop out. I’ll let you know if I’m holding the contest or not within a week or two, before you waste too much time on it.
9. If I hold the contest at all, I’ll disburse $1000 in prize money. If there are at least four complete eligible entries, I’ll disburse $2500 in prize money. If there are at least ten, $5000. More than ten, I don’t know, but I’ll try to make it worth your time.
10. I’ll give the winning entry somewhere between 50% and 100% of the total prize money. If I don’t give it 100%, the rest will go to second place, third place, etc. I haven’t decided how/whether I will do this and it depends on how good the individual entries are.
11. If you win, I will pay through PayPal or online donations to the charity of your choice.
12. Winner will be determined by poll of SSC readers, plus my vote counting for 10 percentage points in the poll.
13. I reserve the right to change these conditions in minor ways that don’t significantly inconvenience contest participants.
14. I’ll give an update on the next visible Open Thread.
Did the winners/also-rans last time publish their papers anywhere?
Or would Scott’s publishing a paper here tend to disqualify entrants from trying to get their paper published in a journal?
(Genuinely asking)
From what I could tell, the essays were more on the “rhetorical synthesis” side of the scale than they were actual meta-studies. Same as Scott does. I don’t think publishing here was the main obstacle to landing in a journal, unless journals have suddenly started accepting verdicts like “it’s a complicated issue and we can’t describe mathematically whether it was resolved or by how much”.
I think most of us (surely me and my collaborator) were not professional researchers in those fields, so I don’t think we could have published the results in a peer-reviewed journal.
I thought for a while to write down my final opinion and try to publish it in Quillette, but I didn’t.
Would peer-reviewed journals refuse to consider an otherwise good paper just because the authors don’t have the “proper” credentials?
Lacking an academic affiliation is a bad sign in many fields. I’m sure it varies by discipline and journal, but I don’t think I really hear of major journals in physics and math publishing papers where no author belongs to a university or other research institution (though obviously if, say, Perelman sent in a manuscript from his mother’s attic they’d probably take a look at it).
This isn’t necessarily all bias (though I’m sure that bias exists) because frankly the majority of amateur papers (see vixra) are either of low merit or are difficult to assess on scientific merit because the authors seem to have worked in isolation from much of the literature and don’t communicate in a way conventional to the field. The latter doesn’t preclude merit, but it means the paper takes much more labor to review, and I don’t blame journals if they choose to focus their finite resources on credentialed submitters.
All that said, the arXiv’s endorsement model seems like a reasonable web-of-trust approach to disseminating research from outside the establishment. It doesn’t have the prestige of a major journal, but submission to arXiv is generally acknowledged for purposes of establishing precedence, and is much more open than a peer-reviewed journal without being, well, vixra.
You made me think about writing an article based on the research I did for my collaboration, for arXiv (or maybe for bioRxiv or medrxiv?) The problem is, as you said, being an amateur, I “don’t communicate in a way conventional to the field” and on top of that, I’m not even a native English speaker.
Assuming the authors are able to communicate/format in a way that is “conventional to the field” and one or more of the authors cite current relevant literature…what about journals for the types of subjects under which SSC adversarial collaborations would be more likely to fall? (Economics, psychology, philosophy, public policy, technology, etc.)
I’m sure there are some (plenty of?) fields where the answer is no. But I do know a lot of peer-reviewed journals refuse to consider an otherwise good paper if it’s already been published somewhere else. I’m trying to understand whether “somewhere else” likely includes places like SSC.
Well, “somewhere else” generally doesn’t include conference proceedings, for one (assuming that by “same paper” you mean same content, rather than same words), and I can’t imagine blog posts being considered more relevant than conference proceedings in this regard.
In some fields it is typical to anonymize papers for the review process, so this wouldn’t directly be an issue. But nevertheless, if you don’t have a *lot* of experience reading and writing papers within a given academic discipline, there are going to be a lot of stylistic features you’ll lack, and you’ll likely be asking and answering a different subset of questions than the ones that readers of the journal will want, so it’ll be a lot harder to convince them that your paper is “otherwise good”. It would be very natural, for instance, for a sociology journal to say “this seems like an interesting paper, but it’s probably a philosophy paper and not a sociology one” and a philosophy journal to say “this seems like an interesting paper, but it’s probably a sociology paper and not a philosophy one”. If you don’t have an established readership, it’s hard to get someone to use their journal space to publish your paper.
I have a subject that I’m totally unqualified to research, but I think might be interesting:
Is peer review, as practiced in scientific journals, an effective system at weeding out fraudulent or erroneous studies? There is almost no evidence on this as far as I can tell, and I wouldn’t even know how to set up a study. Also I am paywalled from a lot of the big science aggregation sites. But that is a question of interest to me.
One potential study design: different journals probably started using peer review at different times. Use the variation in that timing to do an event study, measuring differences in retraction/non-replication rates.
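To make that concrete, here is a rough sketch of the event-study regression, assuming a hypothetical dataset with one row per journal-year and columns journal, year, retraction_rate, and adopted_peer_review (all made-up names for illustration, not a real dataset):

    # Rough sketch, not a tested pipeline: two-way fixed-effects
    # difference-in-differences on retraction rates around the staggered
    # adoption of peer review. Column names and file name are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical input: one row per journal-year with columns
    # journal, year, retraction_rate, adopted_peer_review.
    df = pd.read_csv("journal_years.csv")

    # Journal fixed effects absorb time-invariant journal quality; year fixed
    # effects absorb field-wide trends in retractions. The coefficient on
    # adopted_peer_review is the event-study estimate of interest.
    model = smf.ols(
        "retraction_rate ~ adopted_peer_review + C(journal) + C(year)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["journal"]})

    print(model.params["adopted_peer_review"])
    print(model.conf_int().loc["adopted_peer_review"])

Standard errors are clustered by journal because retraction rates within a journal are probably serially correlated; the whole design obviously stands or falls on whether retraction/non-replication rates are a usable proxy for “erroneous studies” in the first place.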
These are probably not very reliable.
Are there any journals that existed in a period without peer review and then started using peer review, or vice versa? At the moment I’m not aware of any academic journals that don’t use peer review (or something similar, like editor invitation), other than predatory ones that will publish anything if you pay them $1000 (or whatever).
A minor point about peer review: its main purpose is to weed out the crazies. I don’t think peer review has a significant added bonus in improving or distinguishing between good papers. But it can sometimes improve or distinguish between medium papers, and it’s very good at weeding out terrible papers. You occasionally filter out new good ideas (eg plate tectonics, AI risk until recently), but by forcing everyone to a) become aware of the general results in the field, and b) use the standard language of the field, you filter out a lot of terrible stuff.
Why would peer review be better at this weeding out than simply trusting the judgment of the editor?
Mostly because judging the quality of a paper is hard. If you don’t work in the particular sub-sub-field that a paper addresses, you might not know if some of the assumptions have been shown to be invalid or that some technique is being misused. This is particularly hard for papers that deal with a variety of techniques. If a paper includes a complex experiment and a complex model, it is unlikely that any one person has enough specialized knowledge to evaluate both (and is sufficiently removed from the work at hand).
“Trusting the judgment of the editor” is one way of doing peer review, because the editor is a peer researcher in the field.
I think peer review does more than just weed out crazies. I think the right view is that it is a process to put a paper into one of a few bins:
Potentially paradigm-shifting and of interest to a broad area of the field (Science/Nature)
Very strong paper of interest to people in a subfield (Nature [Field])
Strong paper of interest to people who work on related topics (workhorse journals)
Paper with valid, but not terribly interesting results
Junk
I think the expectation of peer review is to allocate a manuscript into one of these bins with no more than one level difference from the “true” bin. I’m sure there are some edge cases where a paradigm-shifting paper will be binned as junk, but I think that’s going to be pretty rare.
Another effect of peer review is that it changes the behavior of researchers. In my field (materials science), there’s no real peer review for conference presentations (organizers decide whether or not to accept your talk based on a 150 word abstract). In my experience people are much looser with what they present in a conference than what they put in a paper. I think the threat of being called out by a reviewer makes people more conservative in a manuscript (in addition to the fact that as the permanent record of the work, getting it right is higher stakes).
I’m not suggesting that people commonly bring fraudulent results to conferences (I have no first-hand experience of that). Rather it’s that the results haven’t been tested as thoroughly for artifacts and the experiment design might be sub-ideal.
Indeed, in my field at least, you often see conference talks where all results not already published in peer-reviewed papers are marked as “PRELIMINARY”.
I believe there is plenty of anecdotal evidence on this, which points to an overwhelming no, peer review is not effective.
The replication crisis in the humanities is but one example; Sokal-squared is another, as is anything posted on Twitter by @RealPeerReview.
Peer review is obviously dependent on who the “peers” are, and what their integrity is. A more interesting question would be whether peer review is effective in some fields and not in others, and the extent to which peer review is used to gatekeep against competing paradigms instead of gatekeeping against bad science.
I’m very grateful for the peer review of mathematical literature and results. We make mistakes occasionally, but for major publications, they are notable and embarrassing.
Just a minor comment – the replication crisis isn’t in the humanities, but in particular areas of psychology/cognitive/behavioral sciences (and probably to a lesser extent in medicine/nutrition/etc.) Humanists aren’t doing randomized controlled trials of the sort that one could succeed or fail at replicating – they’re making arguments that the reader is supposed to find convincing or not (in some ways much more like mathematics, though with much less agreement on what types of argument should be convincing).
Am I the only one here who has actually been a peer reviewer? Because there seem to be a lot of misconceptions about how this works, at least in the hard sciences.
It isn’t primarily about weeding out the crackpots and other junk science. The editors can recognize crackpottery when they see it, and while they will pick a set of reviewers to dot the i’s and cross the t’s of “yes, this is junk science and no, we’re not going to publish it”, there was never much chance of it being published in the first place.
And it’s not about serving as ivory-tower gatekeepers to lock out anyone challenging orthodoxy by officially labeling their work as crackpottery.
A big part of it is improving the quality of the papers submitted. “Publish after revision” is a far more common outcome than “do not publish”. Sometimes authors make outright mistakes that can be caught and fixed. Sometimes they overlook things that would add to the value of their work. Very frequently, they assume that what was obvious to them is obvious to the reader when it really needs specific explanation.
As Steve notes, it is also helpful in ensuring that works are published in the right journal for their content and importance. A shortcoming in this area is that there isn’t always a clear path from the reviewer who says “no” to the right journal, and even less for “yes, but please publish in a higher-impact journal than ours!”. So it depends in large part on the authors making a fair assessment of their own work, with the reviewers acting as a deterrent to extreme overconfidence.
So, peer reviewers are mostly for keeping honest and diligent scientists, honest and diligent. And I have never seen them, from either side of the fence, reduce the quality of the work being published, nor block outright the publication of anything with real value.
I have not been a peer reviewer, only been peer reviewed. It did seem like a helpful way to increase the quality of the final product of my work in Bioengineering. However, law review, which has a different, but allegedly similar, process with student and advisor reviews of submissions, was useless from my insider POV.
But all these comments are why I asked the question and also why I don’t think I’m capable of being 1 of the 2 parties in this discussion. Also, I don’t have a strong stance.
Law review journals are notorious in academia for relying on first and second year grad students to do most of the reviewing. So they have a very cargo cult idea of what is needed – they are great at making sure that people provide appropriate citations for things, but not great at making sure the argument is actually clear and significant.
Can confirm. When I’ve done peer review, my comments were mostly along the lines of ‘explain this better’.
That said, one paper that was about some aspect of modelling automobile dynamics by some German team had awful English and I could make no sense of it at all. I think I suggested they take the entire paper to a German technical writer with good English skills and rewrite it completely. The phrase ‘about his emphasis’ kept appearing in the text of the paper and (according to a German grad student I tracked down) they probably meant something about moment of inertia by that.
All that said, it seems much more likely that peer reviewers might act as some sort of gatekeepers of orthodoxy in softer sciences. Much more likely to be political. If I think I have something interesting to say about model verification then maybe I’ll write a paper, and it’s not likely to step on any toes or trigger anyone. If random political science guy thinks he has something to say about history before vs. after women got suffrage, I could imagine peer reviewers… having an effect?
In my experience, the main way that reviewers can “reduce the quality of the work being published” is that they sometimes ask the authors to consider and respond to a few potential objections, and show how this relates to some particular existing theory (that may be less significant than the reviewer thinks it is), so the resulting paper is moderately more convoluted, and has some passages that the ordinary reader will consider irrelevant.
While I largely agree, I’d like to offer a partial counterexample. In my experiences as a reviewer, I have generally tried to provide some “value added” to papers that are well thought out and well written–correcting mistakes, suggesting ways to communicate better. I have also recommended not publishing papers that had too many mistakes or just not enough original content.
But I do think it’s possible for the peer review process to reduce quality. I wrote one paper in grad school that, in the end, I never submitted for publication–because my co-authors insisted on taking out all my caveats about not being sure if the results were reproducible. Of course, the reason they wanted the paper to put everything in the best possible light was because of the expectation that reviewers would not accept something wishy-washy. If I had been a little less hesitant about fudging things, I’d have submitted it anyway and it probably would have been accepted… the paper would have been somewhat misleading, and it wouldn’t have been the fault of the reviewers but I think some of the blame would lie with the peer review process in general.
Probably it’s better than nothing, but still not quite sufficient.
I work as an academic in machine learning (think like ICML/Neurips/ICLR) so I can briefly comment on this:
“Is peer review, as practiced in scientific journals, an effective system at weeding out fraudulent or erroneous studies?”
I think it’s an effective mechanism against certain types of erroneous research and I think it defends against some types of fraud. However, I’d caution that I think by far the most common type of rejected paper (speaking from a reviewer’s perspective) is just a paper that’s too boring, not novel, or not appropriately framed in terms of existing work. I haven’t seen that many papers that are just blatantly wrong or include fabrications.
—
If you wanted to study this rigorously, one thing you could do is try to measure impact and then see if there’s a correlation between conference/journal acceptance and long-term impact, but somehow try to disentangle the causal effects of being accepted itself.
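One hedged way to get at the causal part, assuming the venue uses a review-score cutoff and you could obtain per-paper scores plus later citation counts (mean_review_score, accepted, citations_5yr and the threshold are all made-up names/values here), is a regression-discontinuity comparison of papers just above and just below the acceptance threshold:

    # Rough sketch: regression discontinuity around a hypothetical acceptance
    # threshold. All column names, the file name, and the threshold value
    # are assumptions for illustration.
    import pandas as pd
    import statsmodels.formula.api as smf

    THRESHOLD = 5.0   # hypothetical score needed for acceptance
    BANDWIDTH = 0.5   # only compare papers near the cutoff

    df = pd.read_csv("submissions.csv")  # hypothetical file
    df["centered"] = df["mean_review_score"] - THRESHOLD
    near = df[df["centered"].abs() <= BANDWIDTH]

    # Local linear fit on each side of the cutoff; the coefficient on
    # `accepted` estimates the effect of acceptance itself on later
    # citations, for papers near the threshold.
    model = smf.ols(
        "citations_5yr ~ accepted + centered + accepted:centered",
        data=near,
    ).fit()
    print(model.summary().tables[1])

The idea is that papers just either side of the cutoff are roughly comparable in quality, so the jump at the threshold isolates the effect of being accepted rather than the effect of being a better paper; whether review scores are clean and available enough for this to work is, of course, its own question.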
There have been a couple recent papers on this topic in philosophy of science and sociology of science. As far as I know, none of them involve quantitative empirical research, but instead use mathematical models to show certain effects. I believe Liam Bright and Remco Heesen wrote one of the recent ones, and another was recently written up in Vox or FiveThirtyEight or something.
I think this is something I would really enjoy participating in, but even after thinking about it since the last one, I have no idea what I would propose as a topic. Something that fits the criteria of being controversial, being something I hold a strong personal belief about (IMO those two traits should be negatively correlated for pretty much all issues), being something I have more than passing knowledge of, and being important enough (not too obscure) to have a body of literature, is a very tall order. Here’s hoping someone else posts something I could jump on.
I’d consider working on a review of the evidence to determine whether phonics or whole-word teaching techniques are more effective at teaching children to read English. I have anecdotal evidence (my own two kids) that phonics is far superior, but the American educational establishment mostly comes down on the opposite side. I’m aware of a few studies that do show phonics to be superior, but perhaps they are only applicable for certain subgroups? I don’t have any specific educational expertise but do have a science degree and can read and summarize scientific/statistical papers.
Not meant as criticism, but I’m surprised this is considered controversial. In neither England nor the US have I encountered whole-word partisans. In general, what I’ve seen are curricula dominated by phonics, complemented with a small number of sight words.
However, I could easily be missing a large chunk of typical practice.
It would be interesting to see some sort of data on teaching practices. From what I can see, “Whole-word vs. phonics” doesn’t describe an existing dichotomy; rather, it seems like “Whole-word + phonics vs. just phonics” is the actual dichotomy, with most practicing teachers being in the former camp. There are very loud pro-phonics, anti-whole-word advocates, but I don’t know if I’ve ever seen someone argue that phonics shouldn’t be a part of reading instruction at all. But again, this is all subjective perception, and perhaps survey data would tell a different story.
I semi-follow the literature on schooling. From what I saw, the whole-word model was taught in the 70’s and 80’s, but teachers found that it didn’t work so started sneaking phonics into their whole-word lessons for better results by the 90’s. When I was attempting to become a teacher, the first grade teacher was definitely using a blended approach.
From personal anecdotes, I wasn’t taught phonics until I was in first grade. I was surrounded by books and a literate family (my biological father was an engineer, my biological mother was a mathematics professor until mental stuff happened), and I was greatly encouraged to attempt to read both at home and in church, both through verbal encouragement and modeling by everyone older than me (this is the whole-words approach). In first grade, once the phonics instruction started, I went from being able to read 0 words to devouring books meant for 3rd and 4th graders in a month.
So…
My understanding is that the scientific evidence is firmly on the side of phonics, but there is significant pushback against phonics because a generation of teachers didn’t learn it that way. Personally, I know as a young adult I thought the concept of teaching through phonics sounded ridiculous.
What you have is a lot of evidence that is just starting to trickle back into standard practice, whereas in a more malleable, results-driven industry/organizational design you would have seen almost all of the schools pile into phonics instruction in a very serious way.
Unfortunately, you also have parents who are not very good at identifying what good education looks like, and they do not help drive shifts to better methods, nor do they welcome ‘experimenting’ on their children.
I wish you luck, but am afraid that the paper might end up somewhat like the vaccine paper from last year: despite tons of evidence, little ability to actually drive to a scientifically backed conclusion.
JAC
Maybe this is because I did learn phonics as a kid, but the idea of not learning phonics seems ridiculous to me. It seems like not learning the radicals for Chinese characters, i.e. not learning a major component of the logic of their construction. I mean, English spelling may not perfectly reflect the pronunciation of modern English (probably doesn’t perfectly map to any one, real state of the language), but it’s also far from arbitrary?
That sounds like the same mechanism that is supposed to drive opposition to Common Core mathematics.
The actual implementation of Common Core math in schools is substantially worse than the design suggests, because the school system has no idea how to measure results.
I might be able to take you up on this, kind of. I don’t think whole-word is superior to phonics, but I do think we have reason to believe that the best approach would probably be classified as a “balanced” approach. There is substantial nuance to this position. Send me an email at {my username} at fastmail dot com dot au
I didn’t see this piece by NPR mentioned here and it’s fairly relevant:
https://www.apmreports.org/story/2018/09/10/hard-words-why-american-kids-arent-being-taught-to-read
It echoes what previous posters have said: so-called “balanced literacy” (whole words) was taught from the 70s onwards, but phonics is clearly superior and is now becoming the new standard in American schools.
I offer the following claim: The American criminal justice system has, by its own terms, never rightly convicted anyone.
That is, if we express “beyond reasonable doubt” in quantitative terms that seem sensible to us, and quantitatively express the standard our convictions in fact meet, then I believe we will find considerable daylight between the two.
Is this a Bayesian argument?
I suspect it could be presented entirely with frequentist methods, but it’s a statistical one, yes.
Never? Way too strong a claim. Sometimes people are just bad at crime and leave behind a lot of evidence (while also getting caught on a camera they didn’t see).
It sounds like your position is better stated as “the justice system has deeply entrenched norms at each stage of due process which distort outcomes towards conviction, and philosophically this delegitimizes even very clear cases”.
I think this is a bad candidate for adversarial collaboration. Scott is looking for cases where the binary can at least be clearly stated and I’m not convinced this thesis can be salvaged into that.
Never is a really strong claim. Like what about someone who turns themselves in and confesses?
Confession by itself is inadequate, but confession combined with video, eyewitness, and physical evidence is probably adequate.
Have you looked at stats on the rate of false confessions?
Never?
I’ll wager at five for four that for any widely accepted quantitative terms for “beyond reasonable doubt”, I can find at least one criminal conviction that is widely accepted as meeting that standard.
If you intended “not reliably”, I can’t disagree.
Apparently very clear-cut convictions with truly impressive amounts of evidence have sometimes proven faulty. Given this, you’d need to find factors about such a case which make it significantly less likely to be mistaken than the average conviction.
I’m making the strong claim that the rate of false convictions is so high as to render *convictions in general* untrustworthy, not merely the weaker claim that some errors fall through the cracks. The latter would be a pretty damn boring adversarial collaboration; the former, you can talk me out of with evidence but you’ll at least have to present *interesting* evidence. If we were to find a definable subcategory (large or small) of convictions that seem, statistically, to actually lack doubt we quantify as reasonable, that would be an important result. Do you believe you can demonstrate one? If so, prove it and win Scott’s money.
I propose the conviction of Colton Harris Moore on the charge of flying a plane without a pilot’s license as a small definable subcategory of convictions that seems to actually lack doubt that would generally be quantified as reasonable.
Now, did you mean ‘never’ or ‘not reliably’?
(As a preface to this comment: I believe with a high degree of certainty that the widely-reported account of Moore’s crime spree, including but not limited to the fact that he flew the stolen airplane without a license to do so, is correct in all relevant details.)
Are you saying Moore is in a reference class of one, or that convictions like his are in general very likely to be correct? If the former: what makes his case *uniquely* tractable by our trial process? If the latter: I have no definition of the category for which I have a researched claim that you’re wrong, and will gladly take part in trying to discover whether it is in fact a thing courts decide more effectively than crime in general, but my prior is on it being exactly like the general case and the stats on the general case are abysmal. Yes, I disagree that this case is an exception.
I mean something a lot stronger than “not reliably.” I mean “so unreliably even in apparently strong cases that it casts serious doubt on everything, even the cases that look rock-solid.” I mean “your confidence in the facts established by the American criminal justice system even in a rock-solid case should never rise to a high level of certainty unless you have additional insight (such as your own review of the evidence), simply because of the massive flaws in the source of your information.” I mean “the certainty claimed by the courts when they convict is not well-calibrated in any instance.” I think “never” is a very strong word and that any deviation from it that I intend is rounding error, but I hope those more verbose answers clarify the matter.
But do note what it is I’m saying never about.
I absolutely do not claim that evidence is incapable of rising to a standard sufficient that asserting somebody didn’t do a crime is unreasonable. I merely claim that the American criminal justice system is structurally unable to make that determination. That is, I doubt in all cases the legitimacy of *its finding* of guilt beyond reasonable doubt, not the applicability of such guilt as a concept in all cases that have gone to trial. I’m not making the absurd claim that guilt is never knowable, just denying the reasoning capabilities of American criminal courts as an institution.
Finding that there’s one instance to which the failings of the system don’t apply is a much weirder claim than saying across the board that it’s never good enough. I do not believe the weird claim to be well-founded (I invite you to demonstrate otherwise, and would likely enjoy the demonstrating whether or not I find it persuasive), and so I do believe the weaknesses of the system to be applicable across the board.
The Bible says the sun rises, that doesn’t mean the Bible is good at astronomy, even selectively.
tl;dr yes, Moore committed crimes; no, mentioning one guilty convict doesn’t prove that the system which convicted him was therefore valid.
That’s a lot of different, poorly defined things to mean at once. And most of them cash out to what I mean by ‘not reliable’, if you’re willing to round enough.
I’d be interested to read the collaboration, but my first suspicion is that the authors would agree on the features of the justice system and end up foundering on what “reasonable” doubt means.
If it is the case that you can convince people that the evidence in any given case has not been able to establish guilt beyond a reasonable doubt by the legal definitions…
you should become a lawyer.
You don’t fix a broken-down car by being a better driver.
What I should become is a politician, except I’d be terrible at it. So instead I support the ones who seem to be against mass incarceration and otherwise seem to at least not be actively making the problem worse. (As a note: if you’re eligible to vote in a Democratic primary election, there’s a candidate who has actively used governmental authority to keep probably-innocent people in prison, one who’s an architect of the current mandatory-minimum regime, and oodles with no such history. Vote for a better class of Democrat, whether or not you mean to vote for a Democrat in the general.)
I think helloo’s point is:
1) If 12 people in each conviction (so let’s spitball >100,000 unique people) concluded they were satisfied beyond a “reasonable doubt” but you think they’re not, then unless you could hypothetically convince many of those 100,000 people, the discussion is likely to result in the discovery that your private definition of reasonable doubt is different from the one held by the people who (a) created and (b) use the definition.
2) If you were capable of convincing at least one person in every 12 to adopt your private definition of “reasonable doubt,” you could get rich as a lawyer.
3) But you probably can’t, so this is a feesh problem.
Right – I’m saying that you should be taking advantage of the system to become rich/prevent bad convictions/show weaknesses in the system in a way that they are forced to fix it or never convict anyone.
And if every car was broken, you totally could make a living by being someone good enough at driving to overcome those difficulties.
I wouldn’t think “most people’s estimates of their own certainty are woefully ill-calibrated” would be controversial among SSC readers, but that’s really most of what I’m asserting when I say the average juror is wrong about when reasonable doubt exists.
Studies put the floor on the wrongful conviction rate for cases that put people on death row at 4%. In every one of those wrongful convictions, twelve people were carefully chosen by lawyers gaming the system, presented with evidence selected in many ways that have very little to do with anybody’s particular attempt to discover the facts, and concluded, based on what they were shown and the rhetorical skill of the lawyers, that there was no reasonable doubt. They probably did *not* say “yeah, there’s almost one chance in twenty that they didn’t do it but that’s good enough, kill them anyway” – subjectively, their probability estimates are likely far more confident. And yet, they’re wrong – at least one jury in 25 that sends somebody to their death is condemning an innocent.
I think it’s much easier to say “this system is in general not working as we wish” than “this specific case doesn’t merit exceptional certainty.” The case that the system itself is broken is much easier to demonstrate than the case for the potential innocence of any specific apparently-probable murderer – especially since saying “this court system needs to be torn down, root and branch, and replaced with one less pervasively rotten” *in court* is at best going to be construed as advocating jury nullification, which is a good way to be held in contempt and very possibly disbarred.
The court system itself can only be attacked from the outside, and even if I were able to make a killing working within it, I would not thereby bring about its utter overhaul. And if I gave up and made my fortune helping individuals instead, well, going to law school to profit off rampant injustice does not strike me as a moral course of action.
I could be wrong. I live in Virginia, a state which permits reading law as an alternative to traditional legal training. If a Virginia criminal attorney wants to mentor a decently smart person knowing that their goal is the utter destruction of the system in which you operate, I’m happy to talk.
Hi, I want a partner for a friendly adversarial collaboration on any of the claims in the book “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom. I generally agree with all the arguments and conclusion in the book, so you can choose the specific claim for the collaboration. Email me at soeren.elverlin@gmail.com
Hi, I want a partner for a collaboration on whether AI risk research is an effective cause for charitable giving, at the margin, in 2020. (I think no).
This isn’t a factual question but I think there is interest in answering it in as evidence-based of a way as possible. I’m a working data scientist and have done some second-tier machine learning research so I understand general developments in AI but am not a specialist.
I also unfortunately don’t have much of an internet paper trail but if you do and want to point me to it I may try to respond to something of yours and see if we are a good discursive pair.
Email: experimentalist@gmail.com
What is your reason for believing this?
Do you think that AGI is unlikely to happen soon (say, before 2050), or that even if it happens soon, current AI risk research is unlikely to significantly improve the outcome? (Or that AI risk research is not effective for another reason?)
I wonder if they mean that additional donations from the general public would not have any significant effect on the issue, relative to other causes, judged from an EA perspective?
I could see an argument (and I haven’t actively kept track of the developments, so I know mainly what I happened to absorb from general science and tech news and blogs like this that I’m following) that the huge increase in attention, publicity and support from high-profile individuals from the scientific, tech and super-wealthy fields has led to a point where marginal donations aren’t going to have a significant impact.
I do not hold a strong belief on this question, and currently do not feel informed enough to form one. Some years ago, the first couple of years after I got into the topic, and before the big shift in public awareness (or at least awareness among the people who have the most potential for impact here), I believed pretty strongly that it was a cause that should get a lot more support. If I remember correctly, I myself donated at least once, while being just a humble university student (Yes, I’m not entirely certain that I donated to AI risk, and yes, my memory is normally very good. One-off online donations charged to a credit card are so convenient, low-friction, and lacking in paper trails that they don’t necessarily leave much of a memory trace…)
Today, I can imagine a well-supported argument convincing me that criteria like “room for more funding” disqualify AI risk research from receiving the “cause where your donations will likely have a large positive impact per dollar” stamp. A major limiting factor is probably a scarcity of qualified people, AI risk research being a field where one or a few highly skilled and capable experts will have more impact than a hundred “meh, alright I guess”-ones. Experts in the relevant fields are extremely in demand and scarce right now, and with the increased awareness about the issue and the increased funding, it’s possible that pretty much all qualified people who are concerned about AI risk and have some desire to work on mitigating it are already working in the field.
Of course, with enough money you could draw away expert AI researchers from industry jobs – e.g. people who otherwise don’t particularly care about AI risk, and mainly respond to financial incentives – but this would require six-figure dollar amounts each year per person. I could see how a price tag like that for any meaningful additional impact could lead a person committed to EA to the conclusion – dependent on their attitude regarding future discounting – that AI risk is a cause where their donations would have very little impact relative to e.g. other causes common in EA discussions, even if they believe it’s an important and impactful cause in general.
I could, of course, be completely wrong, and experimentalist simply believes AI risk research is in some way inherently unlikely to have significant positive impact.
PS: I’d be very interested in any arguments for or against the “AI risk research is important and effective in addressing the serious problem of AI risk, but at present, donations are unlikely to have any significant impact at the margin” position I outlined above! And/or in information (articles? blog posts? people posting numbers on Twitter?) about the development of funding, coverage/interest, and people working in the field. As I said, I am a bit out of the loop on the topic, and don’t currently hold a strong belief for or against. I’m also not sure where to start looking for this information these days.
I want to go ahead and concede that AGI will happen soon because it seems very difficult to estimate the timing and the probability would have to be negligible, not just small, to really devalue current AI risk research.
Here are some of my arguments against donating to AI risk research — they are kind of random but all based in a pretty chaotic view of how research and history work:
(1) I do think that the question of “scientific time” rather than calendar time is on the table, though also hard to answer. I don’t think someone concerned, in 1920, with the increasing lethality of human weapons, and wanting to reduce the risk that catastrophically lethal weapons would ever be used, would be able to accomplish very much. The concepts leading to the atomic bomb reorient earlier thinking about the issue so as to make it largely irrelevant (one might make an analogy to an, um, singularity). And I think people underestimate the extent to which AGI will require deep conceptual developments rather than a few tricks and a lot of processing power.
(2) I think Pascal’s-wager type arguments about AGI underestimate the likelihood that AI risk research will make the problem worse, rather than better. One is dealing with a chaotic system with many of the actors beyond direct influence — why do we think we are likely to make a good outcome more probable? This isn’t just abstract — I think the consensus is that all of the RAND Corporation game-theorizing about nuclear war made everyone less safe (supposedly because the Soviets couldn’t be trusted, it was better to use our nukes to target their nukes, maintaining a limited first-strike capability, thus encouraging a first strike on their end). Similarly one can imagine the argument that one should rush to make a ‘good’ AGI, because soon someone will make an evil or self-serving one, leading to someone screwing up and wiping out humanity. This argument requires a pretty strong claim actually — the odds of making the problem worse have to be close to 50% — but I think it’s actually a quite plausible claim via the chaotic system analogy.
(3) In immature fields I think it’s really hard to fund the answers to specific questions. I think it’s less like ordering from Amazon and more like managing an ecosystem: people research the things that seem interesting x doable and then new things seem interesting x doable and then you iterate. To make computers better at Go, at one point at the margin it would definitely have made more sense to throw money at image classification versus “making computers better at games”.
(4) [less serious] A lot of AGI research is described, not incorrectly, as philosophy — a field in which it is notoriously difficult to make progress. As David Lewis once said:
“Can you tell them, with a straight face, to follow philosophical argument wherever it may lead? If they challenge your credentials, will you boast of philosophy’s other great discoveries: that motion is impossible, that a Being than which no greater can be conceived cannot be conceived not to exist, that it is unthinkable that anything exists outside the mind, that time is unreal, that no theory has ever been made at all probable by evidence (but on the other hand that an empirically adequate ideal theory cannot possibly be false), that it is a wide-open scientific question whether anyone has ever believed anything, and so on, and on, ad nauseam? Not me!”
Hi, I want a partner for a collaboration on whether Blanchard’s transsexuality typology is true, i.e. whether trans women end up gender dysphoric in one of two ways, either as a result of autogynephilia (a sexual interest in being female), or through a more-complicated process related to attraction to men and femininity (I think yes). I’ve spent some years informally trying to study the causes of gender dysphoria, but I’m open to a wide range of collaborators (mainly because last year I found it difficult to find anyone to discuss it with). Email me at tailcalled[at]gmail[dot]com, or contact me through Discord at tailcalled#7006, or on twitter @tailcalled.
Good luck! I hope you find a collaborator, because I’m very interested in reading this (as a trans woman self-identifying as autogynephilic, I think this empirical question has real consequences for the LGBT community). I also currently believe “yes” to the question, though, so I’m no good as adversary.
If you’re a trans woman who self-identifies as autogynephilic, you should consider joining our AGP trans discord server. Send me a message on discord if you’re interested. 🙂
I do think that some people are overstating the “implications” of Blanchard’s typology. Most trans women have been autogynephilic for decades, so we have a pretty good idea of what AGP transition means in practice, and so it doesn’t IMO mean reversing existing conclusions on how trans women should be treated.
(Some people argue that there are some dubious things about transition for reasons orthogonal to the typology, e.g. that research on transition outcomes isn’t very high quality. This is more ambiguous, but importantly it is a point orthogonal to the typology and may or may not hold regardless of whether the typology itself holds.)
Thank you for being so candid. I do have some questions though, that I believe reflect widespread views about trans issues.
First, is there a distinction between being autogynephilic and gender dysphoria? The former suggests being aroused at presenting as a woman, whereas the latter suggests having the mind of a woman in a man’s body. Those seem like different things to me, but I have never experienced any of this, so I realize that I can’t grasp the complexities of it. This may have implications for whether trans women are seen as women or as men presenting as women.
But generally, I agree that regardless of whether Blanchard is correct or not, my view on various trans controversies doesn’t change: trans women in sports, Brazilian waxing, etc.
I’d estimate that 15% of men (and 50% of male SlateStarCodex readers, but that’s another story) are at least a little (but usually only slightly) autogynephilic, but only 20% of these would rather be women than be men, and only 20% of those who would rather be women are seriously gender dysphoric and transition. Thus, there’s quite a big distinction between the two.
I’d say that “gender dysphoria” suggests being distressed by one’s lived gender role or biological sex, not “having the mind of a woman in a man’s body”. The phrase “woman in a man’s body” has been adopted by trans people, but was originally coined to describe feminine gay men, and I think logically speaking this is a more natural meaning of the term. Of course, some feminine gay men end up gender dysphoric and transitioning, but some do fine as feminine gay men. (I’m not sure how feminine gay men feel about having it used to refer to them; it was originally coined by a feminine gay man, but it might be unpopular in general, though I do know at least one feminine gay man who seems to think it’s a good term. If they’re uncomfortable about using this term to describe themselves, it’d probably make more sense to just go away from the term. But 🤷.)
(Incidentally, the phrase “gender dysphoria” was originally introduced so that it could be applied to other groups than those who “most convincingly” presented themselves as “women trapped in men’s bodies”.)
As I’m personally both autogynephilic and gender dysphoric, I can attest that there definitely is a difference between the two, but that it’s also possible to be both.
In practice, there are decades of accumulated experience with what trans women are like. It seems like this experience should mostly “screen off” questions of etiology.
Thanks, I really appreciate the answer.
15% x 20% x 20% = 0.6%. Seems slightly higher than the % of trans women but it’s the right order of magnitude.
Can I ask a really stupid question?
What does it mean to be a cis man? That you actively prefer to be male and seen as male? Or just that you don’t actively prefer to be female?
Let’s say someone doesn’t really think much about whether the word “male” or “female” applies to them, but they have certain genitals, and therefore everyone classes them a certain way, and they are indifferent to this, but go along with it because why not. What are they?
@eric23, The expression that has become popular for such people is “cis by default.” They seem to be relatively common.
Yeah, the estimates aren’t perfect. You’d need to get some high-quality samples to figure out the broken link, but it’d probably be silly to put too much work in when the exact figure can change over time. It’s also worth noting that 15% * 20% = 3%, and I’ve seen a study finding that 3% of adult men would rather be women than be men. The first 20% number also roughly matches the fraction of AGP men who’d rather be women in online surveys I’ve done, but internet communities appear to be unrepresentative, having AGP rates around 50%.
I’m not aware of any representative studies that estimate the AGP rate directly. There’s some that ask about transvestic fetishism and find various rates, from 3% to 7% IIRC, but IME there’s a lot of AGPs who aren’t into transvestic fetishism (one of my surveys suggested that a majority of AGPs aren’t into it, but this survey wasn’t representative), so I think that’s compatible with higher AGP rates. There’s some studies I’ve seen on Amazon Mechanical Turk that asked about proper AGP and found 15% rates there; that’s where this 15% number comes from. (Along with me doubling the 7% estimate, based on most AGPs not being into transvestic fetishism.)
@eric23: Personally, the definition I find most coherent is to define transness/cisness by transition status, rather than by gender feelings. Thus, under this definition, a cis man is someone who was born male and has not transitioned MtF. However, if one wants to use this definition in a non-confusing way, one has to acknowledge that “cis man” carries a connotation of “man who is fine with being male”, and probably avoid using it to describe gender dysphoric men.
As Protagoras said, “cis-by-default” is a commonly-used term in the rationalist community for cis men who don’t dislike being men but also don’t feel particularly attached to it.
I have a feeling that a lot of the people who IDed as cis-by-default when we talked about this before are showing a lack of reflection more than a lack of gender identification.
And what’s your basis for thinking you know them better than they know themselves? Are you sure you’re not just typical minding here?
Well, I seem to recall a couple of them coming out as trans, for one thing.
There’s a lot of people who identify as cis-by-default, so it’s not realistic to think that many of them “are trans”. (Scarequotes because I’m unsure that one can coherently define them as being trans without them transitioning.)
Didn’t say that many of them are trans. Said that many of them aren’t as lacking in gender identity as they thought they were, and offered as evidence that a couple of them came out as trans. That’s consistent with many others having an unexamined cis identity, but we’d never hear about it because that’s what they behaved as in the first place.
@Nornagest:
How does that distinction make sense? What IS a gender identification, if not something one is aware of? Isn’t the awareness the entirety of the thing? It seems to me the fact that someone doesn’t think about it – doesn’t actively notice it – suggests they don’t have one.
For that to be evidence you’d have to believe that “gender identity” is a constant unchanging attribute. Is that what you believe? My contrary explanation of the same data would be that those people didn’t have a gender identity at the time they answered the question but sometime later on developed one, or its valence/nature/magnitude/salience changed. (Or perhaps they still don’t have one but acquired a new dysphoria for unrelated reasons).
Currently I tend to think of “gender identity” as an intangible mythical attribute – much like a “soul” – that some people like to pretend exists. If I don’t notice that I have a soul, does that make me unreflective? If we assume not only that souls exist but that people can perceive them, perhaps it does. But shouldn’t we also consider the possibility that souls don’t exist, or that some people don’t have souls, or that some souls are (sometimes?) imperceptible to their hosts?
Suppose I claim not to perceive my soul but later I change my mind and say that now I can perceive my soul. We certainly might consider the possibility that the soul was there all along but I got more self-reflective later so I can see it now, but our differential diagnosis should also include some other possibilities like:
– Perhaps my soul developed later in life than most – it wasn’t around before.
– Perhaps my soul became perceptible later in life than usual – it wasn’t perceptible before.
– Perhaps my soul still doesn’t exist, but I became either more delusional or more agreeable later in life and am now somewhat more willing to claim it does.
Do those seem like good explanations in the case of a “soul”? If so, what makes them bad explanations in the case of a “gender identity”?
“Gender identification” might not have been the best choice of words there. What I was getting at is, cis-by-default describes people who see themselves as cisgender but believe this to be an accident of their upbringing rather than some kind of deep psychological structure. I’m suggesting that some structure along those lines does in fact exist in many cases but has gone unnoticed.
@Nornagest
My understanding is that the term is broader than that and merely reflects not having a (strong) sense of gender.
However, it seems to me that this can also result from having a good match between sex and gender, in which case those people could still experience gender dysphoria if they were made, or had been made, to live as the other gender.
I came across an idea I hadn’t heard before, connecting vitamin D, REM sleep deprivation, and abnormalities in brain development from improper growth during childhood to some of these issues. It goes along with Scott’s general bent towards reaching for biological explanations for why things happen, as opposed to cultural reasoning about identity, feelings, or psychological processes.
She found that as vitamin D levels went down, rates of trans and other sexual variation went up. The proposed mechanism has to do with a little-understood role of vitamin D in achieving proper sleep paralysis. Poor gut health and poor vitamin D levels, from everyone being indoors and wearing sunscreen, are the proposed reasons those systems are not working well. She then links this to sleep disorders and then to developmental issues, since the brain and body do most of their growth and repair work during sleep. The rise of other sleep disorders and related factors leads to an interesting hypothesis for why rates of these sexual dysmorphias have risen. Certainly not something I’d heard anywhere else in the mainstream or main-subculture debates on this topic.
About halfway through this rather long talk on sleep, the gut, and vitamin D, they get into it. I realise it is a big ask to watch, but hopefully it is of interest as a source of adversarial ideas.
https://youtu.be/74F22bjBmqE
I’m looking for a collaboration partner on the following: “There are critical learning periods in early childhood, after which it is impossible or near-impossible to learn some topics as effectively. As such, those interested in true mastery in many fields should start formal training early (<5 years old)." I'm interested in arguing in favor. I have no formal credentials, but have a persistent obsessive interest in expertise and learning. My essay with Michael Pershan in the last ACC should give you an idea of my work.
Failing that, again arguing in favor: “Conservative policies are more effective than progressive ones (liberal sexual norms with an emphasis on contraception and comprehensive sex ed) at reducing the absolute abortion rate, without a proportional increase in illegal abortions or unmarried/teen births.” Here as well, I am a layman with an interest in and some familiarity with research/data on the topic.
Those interested in collaborating on either topic can email me at tracingwoodgrains[at]gmail[dot]com.
I’m unlikely to be able to collaborate well on the former question, but am very likely to vote for such a collaboration.
Which topics? My feeling is that all the controversy is in the specific topics. (Nobody really disagrees that there is a critical period for learning how to walk, is my understanding.)
At least: foreign language, music (particularly absolute pitch), rock climbing, tennis, ballet. Likely: Chess, philosophy/logic, reading. Extrapolating: most likely the majority of fields.
For all these fields, five seems to be a very early cutoff.
I listed five to give an idea of the timescale being examined. My impression is not that there is a strict cut-off at five, but that certain parts of learning become progressively less effective from age 6-8 onwards, so five is a good bet to stay on the right side of whatever fuzzy lines there are.
The second proposition looks like it is equivalent to “conservative policies are better than progressive policies at reducing unwanted pregnancies”, which seems like a clearer statement and should at least have a bunch of data you can look at. Unless you’re specifically interested in unmarried or teen births and for some reason don’t want to consider unwanted married births.
It’s particularly inspired by things like the Unit of Caring on abortion, seen here. The focus is intended to be directly on abortion itself, with other statistics used as proxies for potential harm caused by policies and culture.
Unit of Caring says that nobody should be forced to endure a pregnancy because that’s a pretty extreme thing. But almost no women are forced to become pregnant. They become pregnant as a result of their own actions or inactions. Abortion is much more like donating a kidney and then demanding it back, and much less like being forced to donate a kidney.
I know you are just talking about the policy part of that post, but I needed to get that off my chest.
There’s also the question of whether the act is immoral independent of whether it should be legal.
For example, if you are a match to donate a kidney to your sick child, would it be immoral to refuse and let the child die? Likewise, would it be immoral to refuse to birth your child and kill them instead?
That, of course, presumes that “personhood” is granted to the fetus, which people like JJT do.
Children generally have special entitlements from their parents, in a way that people in general don’t have from other people. Nobody expects me to donate a kidney to a stranger, or feed them, or shelter them, though some might call me a bad person for not doing it under specific circumstances.
But a parent who refuses to feed or shelter their child is generally liable; at the very least they’ll have to deliver the baby to a safe place. So if we accept that an unborn child is still your child, we would have to attack one of two other pillars – consent to sex being consent to pregnancy (and thus abolish child support), or children having special entitlements (I actually met a libertarian who claimed a woman must carry a baby to term and is then free to starve it to death, but I think he was trolling).
That’s a great post, thanks for sharing it. I see why an adversarial collaboration on something like that would be interesting – although the critical learning question would probably be more useful, on the whole.
I’m unable to AC with (against?) you, but would love to see a thoughtful classification supported by data of which topics/skills fall into the “critical period” vs “anytime” categories. My bias is that most mental skills can be acquired late, but that childhood is advantaged by circumstances that give the learner (a) strong motivation and (b) available time.
TBH, because of my belief that sufficient motivation and time are necessary, it may be too easy for me to excuse failure in late learners, and I’m not really sure what evidence would convince me that a critical period exists.
I won’t take up either of these with you, but I think for the first question it would be interesting to identify an optimisation frontier. For example, earlier training runs a higher risk of learning something not-useful, but later training might yield a lower peak level of performance.
I found a collaborator. Thanks to those who emailed me expressing interest! I’m always keen to discuss things like this, so feel free to drop me a line regardless if you’re interested in the topics.
I’m looking for a collaboration partner on the following: “The Reproducibility and Data Transparency Crisis Also Extends to the Humanities”. I would like to argue that requiring open data publication and automating analytical processes (rather than keeping them in an unpublished spreadsheet) contribute more to the advancement of knowledge than the present article publication system does. I’m currently building a research integrity guideline based on the Australian National Statement (p. 35-36) mandating this sort of research transparency, and having a paper exploring this topic would be fascinating.
I would like to engage with fellow academics on this one, as my objective is a publishable paper in a research ethics journal or its equivalent. (If you do not get some sort of personal or professional reward or satisfaction out of publishing in a journal this collaboration will be problematic.) I would like to use any rewards we get towards paying open access fees. My ORCID is: https://orcid.org/0000-0003-4932-7912. I have a paper (still in preprint because the book is stalled) exploring this topic as well. I am brian[dot]ballsun[dash]stanton and my institution is mq.edu.au
Hey, I’m curious what you mean by both reproducibility and data transparency with regard to the humanities. In my experience, much or most of the humanities don’t have data per se, and many of them aren’t attempting to achieve empirically reproducible results in the same way that the social sciences or hard sciences are.
History most certainly has data.
I would like to participate. I will argue for the claim: we should not believe in IQ, and believing in IQ has bad effects and is an impediment to reasoning correctly.
I will not argue that:
– IQ tests only measure how well you do on IQ tests
– differences in cognitive function have nothing to do with differences in genes
My background: I publish a large number of psychometric tests on the internet, including ~10 “iq tests”. I have read widely in the field and am good at math, but have no relevant formal background (my degree was in Electrical Engineering).
@Eric
“We should not believe in IQ” seems like a way too vague claim. What do you mean by ‘believe’ in this context? Use it to judge people?
You didn’t include an email, but I might be interested in this. Could you clarify what you mean by “believe in IQ”?
I’m a maths PhD student, and I’m interested in psychology as a side project/potential future academic area. My email is emckernon at gmail dot com.
IQ tests are intended to predict later academic performance and metrics of same, such as years of education, and they do that quite well.
I’m curious whether you intend to dispute the claim that IQ tests are effective tools for predicting life outcomes other than educational attainment and/or future ability to do well on tests. That is, one could theoretically believe the above is true while simultaneously believing that “believing in,” talking about, or administering IQ tests is a social net negative, but I tend to assume there’s not a lot of overlap between those belief sets.
I’m not sure what you’re going to be arguing, but I’m interested in more information. Can you be more precise as to your claim?
I’m looking for a collaboration partner on the following: “Utilitarian movements such as effective altruism demand (at least) vegetarian diets from anyone seriously following them”.
I believe this is true – the best evidence we have is that animals suffer when we farm them, and utilitarian movements such as effective altruism should reduce suffering wherever the personal cost is low.
Email me at my throwaway account frankiehenshaw@gmail.com if you’re interested in collaborating
Minor nitpick. Virtually all Effective Altruist organizations specifically state that you do not have to be a utilitarian to be a part of effective altruism. Effective Altruists are definitely disproportionately utilitarians, but it’s not a requirement.
Given that farm animals can have a net-positive life and given that the only way to pay at scale for such net-positive lives is to eat the animals, I think that any utilitarian is required to eat as much non-factory-farmed meat as possible.
In principle, I agree farmed animals can have net positive lives, in which case, perhaps, there would be as strong a case for utilitarians to eat meat as there is for their having as many children as possible to increase total utility of humans in the world (ie realistically not a great deal, but at least a colourable philosophical point).
However in actual reality, farmed animals have net negative lives, and I don’t think we are even close to a point where the utilitarian calculus on this is in any doubt. This is an empirical point which I would seek to prove in the course of the advo collab. If I am right, the fact that some animals could have lives that are net positive doesn’t seem to give you the conclusion that you should behave as though they actually have lives that are net positive.
I’d be interested in more exploration of that if you do the collaboration. As an outsider, my intuition is that if you respect animal lives, the perspective from which we view whether the animal’s life is net positive or negative is the animal’s.* My first thought on this question is that revealed preference seems to show that farm animals attempt to continue living and therefore seem likely to prefer existing.
It’s possible that they do so based on mistaken information, but I question any ethical philosophy that lets us overrule millions of entities’ preferences about whether they should exist ‘for their own good.’
(Excepting the case where allowing A to live literally prevents B from living, in which case we would need also to judge the opportunity cost of preventing B from living, but IMHO any such argument requires some very unlikely assumptions under current conditions.)
This is a really interesting point – I’d never thought about it before.
From my understanding, humans are the only animals that will kill themselves because they are unhappy (examples of animal suicide kind of exist but not really in the context of their having unbearable lives). My initial response is that therefore it isn’t reasonable to describe a non-suicide as a ‘revealed preference’ for life, because animals are incapable of deliberately killing themselves for whatever reason. But I don’t know what that reason is!
The inability of animals to kill themselves isn’t surprising given that propensity to suicide is probably not a trait that has been strongly selected for by evolution.
@Hey
Oh yeah, well . . . your face has been strongly selected for by evolution! 😉
(By which I mean to say that I am very interested in your opinion, but IMHO, if we disregard preferences that have been strongly selected for by evolution, I’m not sure why I should respect *your* preferences. But as I said, I’d be interested in seeing the collaboration explore this further.
I’m still of the view that “we should prevent millions of entities that prefer living from doing so because in our personal moral calculus, their lives lower a Platonic quality we call ‘utility’” is something Dr. Doom or Galactus might say on a very bad day, and I’m intuitively skeptical of any moral philosophy that makes me sound like Dr. Doom.)
@Froolow. Animals don’t just not commit suicide, they actively try to continue living – they flee from threats, they respond to negative stimuli, they eat and drink, etc.
Granted, preferring to exist is obviously a trait that gets selected for, but even granting that, if the animal prefers to exist, why should we disrespect its preference? (Or hypothetical preference, in the case of hypothetical animals.)
I think the term “preference” creates some confusion here because of its multiple meanings: revealed preferences (a.k.a. behavior) are not obviously the same as ethical preferences (a.k.a. the things that are good to respect). For instance, I sometimes have a revealed preference for reading political articles I disagree with between 1 and 2 AM, but I think I’d be better off if I slept instead, and someone who prevented me from reading and got me to sleep instead would probably be doing something good from an ethical standpoint.
When separating the two kinds of preferences, “we should prevent millions of entities that prefer living from doing so because in our personal moral calculus, their lives lower a Platonic quality we call ‘utility’” becomes “we should prevent millions of entities that act in ways that keep them alive from doing so because in our personal moral calculus, their actions are not the best ones they could take”, which sounds less evil.
What Hey said. Eating is somewhat pleasurable (presumably for animals in a manner similar to humans), and thus eating is (locally) always better than not eating. Death does not occur immediately after not eating, so a suffering animal cannot escape such a local optimum, unlike a human, who can better conceptualize an indirect path towards less suffering. That doesn’t make the animal’s suffering any less real or important! I’m not a utilitarian, by the way, just framing my argument in those terms here for clarity.
@Froolow
There are actually some reported cases of animal death that we would unquestionably classify as suicide if a person did the same.
The reason seems obvious to me…? Suicide requires higher reasoning. An animal would need to understand death as an abstract concept, and that its consciousness would stop if it took certain (non-obvious, probably painful) actions. This is way beyond tool usage and the mirror test; no animal comes even close to this level of intelligence.
Unfortunately, regardless of how much it’s suffering, an animal doesn’t have the mental tools to escape. The universe is merciless.
As far as I’m aware, it’s rare for animals to deliberately commit suicide in response to any level of prolonged negative experience, so it would seem like this argument means that no level of suffering can ever be worse than nonexistence for an animal – that feels like an argument that proves too much to me.
My model is that animals don’t have anywhere near the level of preferences that humans do about things like their own existence, nor the reasoning capacity to anticipate future suffering and commit suicide (even if they were able to override instinctual responses), but they do feel pain and suffering, and so we should act in accordance with (our best guess of) their direct subjective experience rather than attempting to construct coherent preferences from their actions.
You have to distinguish between factory farmed animals and free ranging whatevers. When I walk half an hour from where I live, I see plenty of cows grazing on a meadow. If the farmer doesn’t torture them at night, I don’t see how they could not have a net-positive life.
@BlindKungFuMaster
Being indoors isn’t necessarily negative for animals, just like I’m not being tortured for working in an office.
Cows are actually a lot like humans, in that they like being outdoors, except when it rains or is too cold or too hot. Always being outdoors in a meadow with no cover from the sun or rain is pretty clearly not their preference, because they often seek cover when it is available.
Sure, chickens also dislike the open sky. My point was that those cows seem content as can be.
What is a net negative life?
It sounds like a concept from a dystopian future where we are ruled by demented authoritarians seeking to establish utopia. It sounds like animals (including humans) who experience more seconds of sadness than seconds of happiness should be eradicated to “reduce suffering”. I’m sure it’s not as straightforwardly evil, but the phrase sends chills down my spine.
Keep in mind, we can easily eradicate all suffering on earth by eradicating all life on earth. Life implies suffering.
Also, you are not merely advocating for ending the suffering of farm animals, but for the extinction of species which feed billions of people.
States worse than death (for humans) are a pretty standard part of health state evaluation. IIRC a plurality, but not a majority, of respondents across a range of cultures assign “worse than death” status to a number of described health states, while a significant minority refuse to engage with the notion that any conceivable state could be worse than death.
That doesn’t directly lead to policy conclusions, though “we, as a society, shouldn’t pay much, if anything, for a treatment which extends life only in a state which people consider, ex ante, to be worse than death” seems pretty persuasive to me.
“Actions which bring about time lived in states worse than death are morally disfavored, all else being equal” is probably another fairly easy sell on this basis, spine chills notwithstanding.
The real world application to animals, and vegetarianism, is much harder, IMO, but then I enjoy eating meat, so I would say that wouldn’t I?
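(For anyone who hasn’t met the health-economics jargon, here is a minimal sketch of the standard QALY convention that gives “worse than death” an operational meaning – the specific number below is invented purely for illustration and isn’t drawn from the surveys being recalled above:

\text{QALYs} = u \times t, \qquad u(\text{full health}) = 1, \quad u(\text{dead}) = 0, \quad u < 0 \text{ for states rated worse than death}

So, for example, t = 2 years spent in a state valued at u = -0.2 contributes -0.4 QALYs, i.e. it scores below dying immediately, which is all that “worse than death” means on this metric.)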
@pdbarnlsey
My understanding is that people often shift their preferences when they themselves are in a situation of poor health. In other words, when in good health they consider certain situations unlivable which they no longer consider unlivable when actually in that situation.
IMO, this should make one very wary of deciding that those without a voice suffer to such an extent that they should be killed or prevented from existing.
@Aapje,
Yes, there is evidence that reported preferences shift in favour of life with the disease/disability once this disability is being experienced. This is tough to deal with, and brings us to the same kind of “how do we treat seemingly nonsensical revealed preferences clearly designed by evolution?” question that’s being discussed upthread.
Obviously, actual sufferers have access to much better information about their life than someone who has heard a hypothetical description, but equally, desperate situations can impair decision making, and there’s something to the idea of precommitting to a course of action when faced with a change which dismantles your prior preferences.
But, “there’s no such thing as a period of negative utility” doesn’t fit well with what actual humans say when asked.
@pdbarnlsey
Perhaps many people simply shift their perspective. Like a person whose passion is cycling, who can’t imagine being happy without the use of their legs, but whose passion becomes hand cycling, once they do become paraplegic.
It’s dangerous to decide that these shifted preferences are nonsensical because the shift was caused by suffering. Suffering doesn’t automatically impair reasoning ability and people who experience very little hardship aren’t necessarily good at reasoning or happier than if they suffered more*.
* People often seem to become happier from ‘meaning,’ social contact or achievement than a lack of hardship. So it may be much more cruel to own a dog that is left alone for long periods than to put a lot of chickens in a shed.
Not all utilitarianism is total utilitarianism. If you consider that the utility of a group is the average utility of its members, avoiding the worst cases of suffering would be good, but you wouldn’t necessarily have to create new lives, unless you had good reason to think they would be better than average. Average utilitarianism avoids defining an arbitrary “net positive” utility threshold and does not fall into the repugnant conclusion, though it still needs a few epicycles to work properly.
Average utilitarianism suggests the only reason to kill off the lowest-utility person (and iterate until only the most satisfied person in the world is left) is that higher-utility people would rather you didn’t, which is also pretty repugnant.
Despite being some sort of utilitarian, I’m not aware of a formal statement of utilitarianism that doesn’t go one of those two directions.
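(To make the contrast concrete, here is a toy sketch in Python – the utility numbers are invented purely for illustration – showing how the two aggregation rules come apart: total utilitarianism endorses adding lives barely worth living, while average utilitarianism says the average improves if the least-satisfied member simply isn’t there:

# Toy comparison of total vs. average utilitarianism; numbers are invented.
happy_few = [9, 9, 9, 9]        # small population of very good lives
barely_ok_many = [1] * 100      # huge population of lives barely worth living

def total(utilities): return sum(utilities)
def average(utilities): return sum(utilities) / len(utilities)

# Total utilitarianism prefers the huge barely-happy population (100 > 36):
# the "repugnant conclusion".
print(total(barely_ok_many) > total(happy_few))   # True

# Average utilitarianism avoids that, but the average rises if the
# least-satisfied member is removed (9.0 > 7.25).
mixed = [9, 9, 9, 2]
print(average([9, 9, 9]) > average(mixed))        # True

Neither line of arithmetic settles anything by itself, of course; it’s just the calculation behind the two objections traded above.)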
You can fix that problem by considering that dead people still count and have very low utility (but people who don’t exist yet or will never exist don’t count at all).
Absent belief in an afterlife, it seems like a very ugly hack to assign utility to dead people. At that point you’re really abandoning utilitarianism in favour of just rigging the system to give you whatever outcomes seem right to you.
Even in the case where we value farm animal lives as net-positive, this argument would only apply if the cheapest way to maximize total happiness in the world would be to increase the number of animals produced by raising demand for meat products. In practice, this is a quite costly way of increasing happiness, and therefore I disagree.
Why does effective altruism, or utilitarianism, demand that we care about utility for all species and not just humans? Or why does it demand that we judge them on the same scale?
I don’t technically have a personal stake in this as I’m not a utilitarian, but it seems like you’re painting with too broad a brush here.
I wonder this same thing, and would take advantage of your use of the phrase “all species” to point out that this question should extend to all other eukarya.
My argument for why this is false is that some effective altruist causes are so important that spending time on your personal diet isn’t worth it. Or if anyone is in a position of power, their actions are so high impact that their personal diet becomes comparatively insignificant.
I’m mostly stating this because, if this line of argument is something you don’t want to deal with (since it doesn’t deny that vegetarianism is important), you might want to clarify that. On the other hand, if you’re fine confronting that position, then proceed.
Isn’t this disproven with a simple counterexample of a utilitarian who doesn’t put any value on animal suffering? I mean you can argue whether someone should put any utility on animal suffering. But it’s not inherent in the definition of utilitarianism.
Yes, exactly. Being a utilitarian doesn’t say anything about the set of beings whose utility you’re maximising, nor does it say anything about what weights to assign to these beings’ utility.
I would like the result of this to identify criteria for how to decide exactly how far “demands” imposed by utilitarian ethics extend. If they can demand anything at all (vegetarianism being a proof of concept), then I would expect they end up demanding vastly more than that.
“Why don’t you reduce the suffering of that homeless man by buying him a beer so he can forget about his troubles? The personal cost is low.”
My attitude towards veganism is similar. There is a cost. It may be “low”, but my resources are better spent elsewhere.
@Froolow
IMHO, ethical theories are models that are heavily parameterized and can therefore support pretty much any outcome depending on the parameters that you pick.
So a collaboration then either has to first define a specific variant of utilitarianism or describe for which variants of utilitarianism vegetarianism is ideologically mandatory.
As someone who considers myself utilitarian (although not completely), my general rule of thumb is to be skeptical whenever someone says that utilitarianism demands any particular action. It is extremely difficult to calculate the effects of following some policy, and it would seem downright suspicious to me if a dietary change were the best way of having an effect on the long-term future, or the multiverse.
I agree quite strongly that animal suffering is bad, but to me, this argument alone is insufficient to prove the case. The real world is complicated. Therefore, maximizing utility requires devising careful and precise strategies — not simple lifestyle changes.
Less abstractly, I think the evidence speaks for itself. There are many estimates out there which try to calculate the effect of becoming vegan as an individual. Although it is generally accepted that these estimates could be off by several orders of magnitude, I do not think that the value of becoming vegan is more than $1000 per year when compared to donating that money to an animal charity. This, in addition to the fact that becoming vegetarian or vegan adds quite significant social strain to one’s life, should cast doubt on the idea that it is required for a utilitarian to become vegetarian.
One obvious problem with this position is that “vegetarian diets” is undefined. Can vegetarians eat simple animals like bivalves or (theoretically) polyps? If so, then what qualifies their diet as “vegetarian”? If not, then why is it OK for vegetarians to eat relatively sophisticated organisms like certain plants and fungi?
Answering those questions will require you to say exactly which organisms can suffer and which can’t, which will require you to define suffering, and I don’t believe you can do that in a way everyone who might read your paper will agree on.
I’d be interested in taking you up on this.
Full disclosure: I’m more comfortable debating the question of whether they’re conscious or not as opposed to the degree to which they are suffering if they are conscious due to factory farming conditions, but I’d be happy to play devil’s advocate on that aspect too. If you’re only/more interested in the second part then it might be better to work with someone else.
If you’re keen, email me at zeleza – @ – gmail . com (no hyphens)
I’d like to see a writeup on what degree of caution, if any, is appropriate for the production and use of CRISPR and/or GMO food products. I tend not to be concerned about them at all myself, but every now and then someone asks me to consider “the hazards”, and I think “what hazards? I haven’t heard of anything serious yet.” But this may be just my ignorance.
The best argument against GMOs I’ve encountered is that the (current) main use case of GMOs is to help eliminate insects, which is done either by making the plants produce their own pesticides, or by making them resistant to an existing pesticide, and then spraying that pesticide over them. Given how harmful to health pesticides can be, it may be dangerous to allow GMOs without extensive testing to certify their safety and/or the safety of associated pesticides.
This argument seems to assume a) that ‘pesticides’ all have the same properties and are all equally bad (for human health), and b) that using pesticide on pesticide-resistant crops leads to more pesticide use than ‘standard’ practice.
Neither need be the case, and in reality (a) definitely isn’t, and my impression is that (b) generally isn’t.
Also, of course GMOs are safety tested. In the EU at least, GMOs are much more stringently regulated than non-GMO crops. Sometimes to a ridiculous extent.
Non-“GMO” tinkering with the genetics of the potato left a lot of people suffering from painful dermatitis, but wasn’t regulated to nearly the same degree as it would have been if it had met the definition of a GMO.
I do not offer to collaborate on this topic, but I think it’s pretty clear that messing about with food crops can in some instances have bad outcomes, but that banning GMOs is probably a net negative for humanity compared to current regulatory regimes.
GMOs generally result in greatly reduced pesticide use.
Insecticides are often harmful to humans. They are designed to kill animal life after all. Pesticides in general aren’t inherently dangerous.
I don’t think I’m really qualified to participate, but I second this. GMOs have become such a bogeyman of anti-science “they’re bad for you because they’re artificial” thinking that it feels like people ignore the fact that they have very real (potential) downsides. Personally, they legitimately concern me from an ecological perspective. It seems entirely possible that a company could genetically engineer a plant to be able to outcompete local plants, at which point you’re basically making artificial invasive species. I’d be surprised if humans weren’t able to make vastly more effective versions of invasive species that can destroy local ecologies just by existing. It also seems like we’d only notice this problem once it was already too late to stop, just like with many regular invasive species.
My impression is that this isn’t an issue yet because companies selling GMOs want to sell seeds so want their plants to be actively bad at reproducing. Still, there’s nothing fundamental about that being the most profitable angle to producing GMOs.
I’d love to see a discussion of the ecological risks of GMOs from the perspective of pro-science people who aren’t just afraid of new technologies because they’re new.
GMO is an unclear legal category that is based more on politics than science.
For example, FasTrack breeding produces trees that do not contain transgenes. We are still waiting on the EU Commission’s decision, which, as far as I know, has not yet arrived. Who knows what they will decide, but I expect the result to depend on lobbying and political compromises, not facts.
Like when France started banning GMO import to appease environmentalist about nuclear stations.
OK. If history is open as an area to look at, I’m definitely interested. I can think of a couple of topics, but in most cases I’m going to have trouble finding someone here to take the other side. Plausible options that spring to mind are (inspired by recent discussion on Naval Gazing) how good of an idea the Washington Naval Treaty was, or the utility of battleships during/after WWII. Anyone interested?
I’m not going to offer to collaborate on this since I haven’t done a lot of research on the particular issue and don’t expect to have enough time to work on any collaboration in the near future, but I’d be quite interested to read a collaboration on the proposition:
“Due to armed conflicts avoided by MAD (hard to measure, of course), and despite e.g. Hiroshima, Nagasaki, and various proxy wars and provocations, as well as the utility lost to preparing for and worrying about the horrors of nuclear annihilation, the invention of nuclear weapons has been, and can probably be expected to continue to be, a net positive for humanity–maybe even one of the biggest recent net positives ever, after vaccines and antibiotics.”
I tend to believe the above proposition; am not sure if that intuition is widely held here, or among the general public.
What does “can probably be expected to be a net positive” mean?
I think a global nuclear war is less likely than the alternative: no global nuclear war, and thus, due to nuclear weapons, no global war at all.
Does that satisfy “can probably be expected to be a net positive”?
This is the most difficult part of the question, as it involves predicting the future, but my intent is:
If you think the fact we’ve not yet had a global nuclear war is due to having been very lucky, then even if the net effect thus far is positive, one could nonetheless be pessimistic about the long-term cost-benefit proposition of the invention. If you think the fact we’ve not yet had a global nuclear war is not just good luck but a result of e.g. the incentives involved, then you can also be more sanguine about the net effect, I’d say.
That’s a good one, although I can’t do the adversarial collaboration with you because I also believe it.
What if perpetual peace is not net positive for humanity? How’s that for a hypothesis? War would prune unfit and incapable regimes and decadent empires from the globe; now they can fester forever.
Obviously, war sucks, but I wonder if, in the long run, knowing you’ll never have to go to a proper war can have a worse effect.
I tried posting this as a reply but it didn’t stick to the thread. Trying again, sorry for double post. None of you included a contact email but if you’d like to follow up, email me at tprismic@gmail.com.
I’m really interested in this one. I’d argue against it. It created a climate of fear that destroyed the possibility of real peace, created a generation that grew up in fear, strengthened nationalism, and caused wars and conflict that continue to this day, even beyond the fears of nuclear annihilation. I wrote several papers on related topics in college (including the effects on children in the 50s) and my thesis was on the effects of atomic espionage on international relations in the early Cold War.
Does it count as adversarial if you want to engage in inquiry over a specific topic, but are generally agnostic on the answer to the question or maybe just have mild biases? Or is the idea more for two people who have already looked into a topic deeply and came to different conclusions to engage in a dialogue and produce something they both agree on?
My personal opinion: you have to strongly believe your position at the outset, but you do not have to have already deeply researched it. Obviously the more you know the better, but doing the research can be part of the collaboration.
I second this question. I enjoy testing supernatural/pseudoscientific hypotheses and I feel like I could actually do a decent job arguing “For” astrology or something, if someone wanted to take the other side.
However, as a law student I am aware that the best arguments tend to be found where the ultimate motive force is coming from someone who actually has something to lose, and it’s possible I might not “try as hard” or make the “best arguments” if I’m just coming up with stuff for fun.
Hi, all!
Proposed question: Is there a strong case against abortion based solely on secular ethics?
My position: There is no strong secular ethical case against abortion.
Obviously, there are some complications in here ( abortion until what fetal stage, etc.), but that’s what makes it fun! We can work to define a more narrow question if necessary.
My background: Currently an MD/PhD student. I wrote my undergraduate thesis about the ethical debate surrounding abortion and have always wanted to revisit the question.
I have a lot of the well-known papers on this topic saved and annotated, as well as access to many journals by virtue of being a professional student!
Email at cjames (dot) gblock @ gmail.com
What is your argument in a nutshell?
If you’re going to be my adversary, I shouldn’t tip my hand right? 😛
In a nutshell – when I was writing my thesis, I found the arguments regarding “personhood” as the primary criterion for a right to life most compelling. While there can be some wiggle room, generally speaking we would not accord a fetus personhood, at least not until very late stages of development. Consequently, the mother’s rights to safety, freedom of action, etc. win out.
In general, I think all of the arguments on both sides have problems. Most of them are intuition-based, which is problematic due to the extremely personal and incidental characteristics of intuition. Also, many rely on absurd hypothetical situations (looking at you, Judith Jarvis Thomson! https://www.jstor.org/stable/2265091?seq=3#metadata_info_tab_contents)
For anybody interested in AI, futurism, etc., the strongest anti-abortion argument I found was based on the concept of “potentiality.” If we were to demonstrate that this criterion is sufficient to accord a fetus life rights, it could have implications for arguments about future persons/person-like entities.
Keep in mind that MANY abortion opponents are genuinely of the opinion that “secular ethics” just doesn’t make sense in and of itself. As other people have mentioned, your opponent might be able to just argue that there is no strong case for ANY ethical position based solely on “secular ethics”.
At the very least you’d have to define that term, and as a person with a secular ethical belief system myself, let me be the first to say that I don’t think anyone’s found a good, consistent one to date.
I think any ethical debate requires both parties to agree to certain baseline assumptions. I’d counter that the reason ethics seem to work better when religion-based is that all participants in the discussion share the same assumptions.
So, as an example, we would have to agree that a human life has intrinsic moral value and that value implies certain conclusions (killing people is wrong, you probably should at a minimum not hurt other people, one life is roughly equivalent to another, etc.) I think once a solid baseline is established, one that doesn’t make TOO many assumptions, a good discussion can be had.
Best of luck in establishing your secular moral axioms. I’m not sure how you will avoid the conclusion that we are just very complicated arrangements of molecules that produce consciousness through some yet undiscovered laws of physics.
Whatever our physical makeup may be, it appears you agree that consciousness is something valuable and special. So it seems reasonable that we’d want to figure out the best way to act while being alive (whatever that means).
That’s the nice thing about ethical discussions vs existential ones: You can start with priors that all parties agree on (human life is valuable), regardless of what is underlying those priors. So, I don’t think we will arrive at that conclusion should this collaboration go forward, seeing as it’s outside the range of the discussion!
Hmm. No offense, but it seems to me that if you:
a) Start your discussion with a set of agreed priors for “secular ethics” such as “human life is valuable” and
b) Demand that a “strong” secular ethical case is one that cannot be reasonably challenged by any rational person, then:
The outcome is going to depend almost entirely on the nature of the priors you agree to, which isn’t unreasonable, but from one perspective reduces the dispute to question begging.
When we agree to start the discussion with the assumption that human life has value,* whether or not we have a strong secular ethical case against abortion is likely to depend on whether we agree that fetal life is relevantly human life at various stages of development, and how much value we agree it has.
* I do not agree to this assumption under these circumstances, FWIW.
Yes, I do. It’s just that there are inevitable exceptions to the very reasonable-sounding “human life has value” axiom, and to the conclusion of “don’t kill people”. Abortion, euthanasia, war, self-defense, capital punishment, etc.
“Human life has value” is not going to help you in deciding any of these issues, because everybody agrees with it, yet nobody agrees on those issues.
You need to dig deeper and find the source of the value of human life. Religious people have no problem with this. Atheists have a massive problem. Because ultimately, they believe humans are just very complicated arrangements of molecules that produce consciousness through some yet undiscovered laws of physics. So, I would say that this question is not only very much inside the range of discussion, it is the central issue that you must address if you wish to succeed in establishing secular moral axioms.
By even admitting that human life is inherently valuable, you are severely undermining your abortion argument. You’re effectively saying that the foetus has inherent value, but not as much value as a woman’s career (unless you oppose abortions for career reasons).
J Mann- So, I don’t agree that we will end up question-begging. If we DO start with the assumption that human life is valuable, we have to start talking about how we assign value, what risks are entailed by pregnancy/childbearing, and to what degree the value of a life imposes a duty on other moral agents.
But you don’t agree with that assumption so, NEW collab proposal- “Human life is valuable.” I’ll take the affirmative side!
Jermo S- Well, fortunately for my future collaborator, I don’t mind taking the weak side of an argument! I’m more than willing to proceed with the assumption that human life is valuable left intact.
I really think ultimately this will just turn into an argument about what “secular” means.
You: “Well, it means things that aren’t religious”
A pro-lifer: “Your supposed values like ‘human freedom is an important end in itself’ are just another form of religion”
You: “No, because MY value is something everyone can agree on”
PL: “Well so’s mine, all I’m saying is that all human life is inherently valuable. All we disagree on is what a ‘human’ is.”
You: “Well yeah but you have a RELIGIOUS definition of when human life begins.”
PL: “What’s your definition?”
You: “Well when we can prove using science or something that the entity is conscious and has free will etc. ”
PL: “how do you know other humans are conscious and have free will?”
You: “Well at a certain point we just need baseline agreed-upon priors”
PL: “That sounds a lot like faith…”
Also potentially “a strong ethical case”
You: “But x secular case isn’t strong, it leads to the following absurd contradiction/doesn’t comport with actual human behavior!”
PL: “Oh darn you’ve got me. I guess I’ll have to use one of those ethical systems that doesn’t have absurd contradictions and perfectly comports with human behavior.”
Theodidactus: So, for me, a “secular” ethical position is one that does not rely on the particular assumptions of any one religion. For example, the notion that human life is valuable is shared by most religious people and people outside of religion. The difference lies in the justification for that belief. As a gross simplification, let’s say a Hindu values human life because it is an incarnation of the Godhead, while an atheist values human life for purely subjective or hedonistic reasons. Starting with the proposition “life is valuable” allows the Hindu and the atheist to skip the argument about “why” and proceed to a dialog between two people that can be conducted with mutual respect.
Since we (for better or worse) live in a secular society, where laws cannot be justified by the particularities of one faith, I think it’s important that we try to have these conversations. At a minimum, it allows us to have a common language to explore issues. At most, we might be able to come to reasonable conclusions that help make the world a better place to be. I’d rather try to have that conversation than give up prior to starting it.
Also, belief that life is valuable is a wager that’s just about as appealing as Pascal’s: if life isn’t valuable, it’s not like you can lose anything!
I think a starting point for “secular morality” would be that “in general a human life, including that of infants, is valuable, and the deliberate taking of human life is only acceptable in limited sets of cases roughly where not killing a human may result in the death of another human.”
The morality of Just War may overstep that boundary somewhat, but the criminal code of most places we consider civilized is pretty much an explication of what those limited sets of cases are, and what level of “may result in the death of another human” is sufficient justification.
Interesting that your starting point for secular morality would result in most abortions being banned.
“In a nutshell – when I was writing my thesis, I found the arguments regarding “personhood” as the primary criterion for a right to life most compelling. While there can be some wiggle room, generally speaking we would not accord a fetus personhood, at least not until very late stages of development. Consequently, the mother’s rights to safety, freedom of action, etc. win out.”
You could make a similar point to oppose the Endangered Species Act. After all, endangered species are not persons and landowners are, so shouldn’t the landowners win out?
And again I feel I must point out, I would question whether there are airtight ethical cases for rights even existing, so it’s not self-evidently true to me that the rights of x or y or z must win out if there’s not a strong secular ethical case for w
Of course, that depends on the definition of “personhood,” or “entity with inherent moral value.” I think that (at least some) animals have many of the criteria we associate with personhood, and thus should be accorded more moral standing than they currently are.
Even if the organism/thing isn’t itself a moral agent, it might be valued in a way that requires its preservation. Moral agents can value the same thing for different reasons. For example, a logging company might think a sequoia is valuable as a bunch of future tables. Conversely, a bunch of naturalists valued sequoias because they’re freaking amazing organisms! In that case, society decided that the (aesthetic, biologic, ineffable something) value of the sequoia exceeded its economic value and kept them from destruction. So, even if a tree (or some other thing) isn’t a moral agent itself, we might assign that thing values that necessitate its preservation.
Just spitballing, but here are some potential arguments that might qualify as “secular.” I’m not saying I believe any of these, but I think these are “strong cases” or at least as strong as the foundational case for a lot of human laws.
* Human life is valuable, in fact the most valuable thing there is. Great damage has been done historically by assuming that certain kinds of humans were worth killing, or at least less deserving of basic human dignity because they lacked certain capabilities. Its therefore better to err on the side of a BROAD classification of what “human” is for the purposes of a rights analysis.
* Many laws (speeding, age of consent, DWI) involve drawing an obviously absurd bright line. No one seriously believes something magically happens at 66 miles an hour, 18 years and one second of age, or exactly .08 BAC. These lines are not often drawn using rigorous science either; they are ALWAYS going to be arbitrary. Society has decided to arbitrarily set the threshold for human life at , and it has the right to set that threshold wherever it wants…it’s not like you can rigorously define the complex concepts at play like “consciousness” or “self-sufficiency”.
* Along with that, many laws involve balancing what may appear to be an irrational human impulse against a detached “far mode” analysis of the situation. My favorite example is assault. In most jurisdictions, spitting on someone is assault, in fact as serious as punching them in the face. It’s no defense to argue “well yeah but your honor, spitting doesn’t do any real/objective/scientific harm, it’s based on an irrational human aversion to being spit on. The victim should get over it. My rights should win out here.” Similarly, society, even pro-choice society, obviously recognizes the fetus as a human (when it’s in the womb, in face-to-face conversations with the mother-to-be, we refer to it as a “baby” not a “fetus”. It’s only a “fetus” if the mother doesn’t want it). We can’t help but think otherwise. It’s irrational, but law isn’t rational at rock bottom.
Good suggestions! One of the questions my collaborator and I will have to address is whether a “strong” secular ethical position is sufficient to justify a legal mandate.
I think you may actually have to go backwards. Can you supply a strong moral case for a law NOT being made?
Legal prohibitions without particularized strong secular ethical cases for their basis are made all the time. Take my “it’s illegal to spit on someone” example. When the first judge ever to adjudicate the question of whether spitting was a form of assault sat down to analyze that question, they didn’t go “Hmmm, well on one hand we have a spitter, and on the other hand we have a spittee, whose rights should win out? Let’s carefully balance how much both sides want their respective outcomes…”
Instead they thought “well, it’s perfectly clear the government has the power to punish assault, and assault is defined as [x]. Is Spitting a form of assault? Yes, because assault has these 6 elements and all are present in spitting on a random stranger.”
Note also that with the spitting example we don’t carefully (or objectively) analyze how much joy the spitter gets from their spitting, and how much of a burden it is for another to be spit upon. The “Strong Secular Argument” for spitting getting you a fine or some jail time is “Society wouldn’t work if you could just go around spitting on people with impunity. It bothers people…like, it REALLY bothers them.” We don’t objectively balance whether one side or another is justified in feeling a certain way…we simply accept that one side is ABSOLUTELY justified in feeling demeaned and hurt when they get spit on, and the other side should just please please stop doing that seriously why are you going around spitting on people?
A lot of laws work that way. I submit that you probably couldn’t work backwards from objective facts to “people have a good reason to not want to get spit on, so spitting on people should be illegal.” People get to be irrational about what hurts them.
Perhaps people get to be irrational about what counts as a person?
I think it may be difficult for you and your partner to define what constitutes a “strong” secular ethical case.
For what it’s worth, my first intuition is that the case against abortion at various stages of fetal development should be as strong as or stronger than the case for not eating animals at a certain stage of development. (Exempting the environmental case, which I think is not strictly necessary for an argument for vegetarianism.)
Sure, that’s something I’m leaving open to interpretation by potential collaborators. For me, “strong” means somewhere between “reasonable and compelling enough that people have a hard time challenging it” to “totally infallible and changes everyone’s mind once they hear it.”
I doubt we could arrive at the latter extreme! However, as my opinion is that NONE of the arguments currently floating around even meet my more modest definition of “strong,” I think my potential adversary has their work cut out for them!
Thanks, I had privately guessed that “strong” meant “convincing enough that a reasonable person might find it convincing.”
One challenge you have is that I personally don’t believe that any ethical argument for anything meets your definition of strong. 😉
There are arguments I find personally satisfactory on some points, but I have a hard time imagining an ethical argument that reasonable people wouldn’t be able to challenge.
“Hard” time challenging it, not “won’t” be able to challenge it. Basically, I would be happy with anything at least as compelling as the arguments which are pro-abortion.
As for a secular ethical position that (I think) meets my most extreme criteria, how about “Murder is morally wrong, because it deprives the victim of their life, their loved ones of their company, and society of their future contributions.”
Now we have to define “hard!!!!” 😉
Thanks for engaging! IMHO, and for what it’s worth, I think the question would benefit from the additional clarity if you changed it to something like
That’s a good idea! If Scott thinks this topic is not inside the taboo zone I think we will redefine it that way.
I can suggest two potential challenges to that intuition:
(1) The countervailing costs imposed on the mother by pregnancy, like the potential of dying or being debilitated, are much higher than the costs imposed by abstaining from meat (especially if eating meat is actually bad for you!).
(2) You get into the problem of imposing positive vs negative duties. To not eat animals, one must refrain from acting, while “carrying a child to term, and then taking care of it” is a requirement to act.
But if we were to fight through those questions, maybe that’s a path forward!
Um, what? No one argues that people have to take care of the baby, adoption is fine. Carrying a child to term is just refraining from having an abortion, seems like they’re obviously equivalent in that sense?
Well, nature of a baby is, it’s a baby. So somebody’s gonna have to take care of the baby, at least until it isn’t a baby anymore…
And as for acting vs refraining from action, do you know exactly how babies are born? 😀
Sorry, I already won: utilitarianism (any version where killing someone people don’t like is wrong).
Ok, well if you want to play that card, I’ll have to play Alasdair MacIntyre.
ALASDAIR MACINTYRE uses TELEOLOGICAL ACCOUNT OF THE GOOD!
CLIFF faints!
Seriously, I’m curious how you would actually respond to (maybe a better version of) what Cliff said.
I wasn’t sure if this was a serious comment, so I hedged my bets and went with absurdism… I would need some more detail to respond to it.
If you grant that a fertilized human egg implanted into the uterine wall is a human life, then abortion is murder. I don’t know how many lives you have to know you’re saving to make one murder OK under utilitarianism. Is it 5? 5,000? 5 million? In what circumstances does an abortion for sure save that many lives?
Just updating to say I have a collaborator! I don’t mind conversing about the subject on an individual level, and if I end up using any of your ideas I’ll be sure to credit in the final collaboration write-up!
I’d be interested in one of the following. In both cases, I’d really like to kick the tires on the evidence pro and con and get together a list of the evidence.
1) The allied intervention in the Libyan civil war was based on unjustified assertions of an impending massacre in Benghazi, and was illegal under international law. Whether through negligence or intention, allied leaders misled the public about the imminence of the threat and whether the mission was aimed at toppling Qaddafi and winning the civil war for the rebels.
Current belief – very likely true in all respects. Note – by “allied,” I’m intending focus mostly on Britain and the US, because I don’t speak French, but I’m open to expanding the scope.
2) Ilhan Omar very likely committed marriage fraud, and there is a substantial possibility that the person to whom she was married is her brother.
Current belief: At those probability estimates, true. Note: I’m not sure if this is a permissible topic. If not, sorry!
ETA: Email address is J dot Mann dot Corr and the email host is protonmail dot com
As a single datapoint, this question seems to me to be quite likely to produce far more heat than light.
I can’t collaborate, since I would be on the same side of both, but I would very much like to see an informed assessment of #2. I also feel like it’s an interesting question as to whether anyone’s opinion on anything meaningful would change even if they reluctantly conceded your #2 position was probably correct.
The second half of your comment is why I think this is a questionable topic. This question will drag up intense feelings related to a whole bunch of really controversial topics around Racism, Antisemitism, and MAGA vs. Woke-ness in general, but actually answering it won’t meaningfully address any of them.
I’m personally interested in knowing the quality of the evidence pro and con, and as a test of adversarial collaborations generally.
I can see an argument that if the evidence is still so sketchy that it’s mostly published in the alternative press, maybe it shouldn’t be signal-boosted at all, but in a world where people get on the front page of the NYT for sending or being the recipient of an angry tweet, I think that ship may have sailed.
Is there any evidence for (2) except a post on social media allegedly belonging to Ilhan Omar’s former husband, that calls her newborn his niece?
If that is the only evidence, that may be just a language confusion. In my language, Romanian, we use the same word, “nepot” (nephew) – feminine “nepoata” (niece), for the children of our siblings and for the children of our cousins (and also for grandsons/granddaughters). I think other Romance languages do the same, and maybe also some other languages. So maybe her former husband is actually her cousin? Cousin marriages, afaik, are not that rare among Muslims.
It’s that, but a whole bunch more of that. (The provenance of everything is a little suspect – it’s mostly provided by anonymous sources allegedly in the MN Somali-American community to some alternative news sites).
As I understand it, according to the sources and in many cases the screen caps they provide, husband 2 identified Omar’s father as her father when he was in high school, identified her daughters as his nieces a number of times, she identified him as her children’s uncle, and they both identify the same woman as their sister.
That’s definitely not conclusive proof (especially since a bunch of it rests on believing the anonymous sources and the alternative news sites), so an AC might result in the conclusion that it’s all likely a conspiracy theory.
Personally, my priors on “person X married a sibling for any reason” and “family X habitually used terms of familial relationships in a loose sense” are such that the latter dominates the former in cases where the evidence seems to demand that one of the two is true (and does not otherwise distinguish between the possibilities). Furthermore, I may be falling prey to the typical mind fallacy, but I think that most people’s priors on these two events would tend to exhibit this same property, in cases that were not hot buttons for political or other non-relevant reasons.
In other words, even if it weren’t for the potential legal liability of this issue, I’d think that this collaboration would be signal-boosting something that arose from bias if not bad faith, and thus would strongly advise against it.
@Jameson Quinn:
I agree with your priors, but feel completely the opposite about the appropriateness of this as an adversarial collaboration. Given that J Mann believes the non-obvious side, wouldn’t an adversarial collaboration be exactly the remedy that is needed? Either he comes up with strong enough evidence to overcome the unlikelihood of the situation and the collaborator is convinced, or it becomes painfully apparent that his reasoning is flawed and he appears recalcitrant and foolish, or, more likely, changes his mind.
> even if it weren’t for the potential legal liability of this issue
What legal issues are you referring to? My impression is that discussion of a sitting US representative is pretty well protected by the 1st amendment, and highly unlikely to result in legal prosecution, especially if it’s part of an attempt to find the truth. Are you aware of anyone who has been prosecuted — even unsuccessfully — for similar discussions?
Not knowing you, my impression from the outside is that you believe discussion of Omar’s marriage should be avoided for other reasons (appearance of impropriety, fear of where it may lead, possibly personal offense to her) and this leads you to believe that there might be legal consequences. Could you explain more why you feel this is an inappropriate topic for an adversarial collaboration?
@nkurz
FWIW, my guess is that we would agree on the evidence, and then have a separate session where we discussed the Bayesian case, and whether it supports my initial prediction of a “substantial possibility.”
I’ll leave it to others whether that would be valuable – I have some thoughts on the Bayesian case, but my math is limited, so at the end, it might end up with me being Eulered and still unconvinced. (That might be helpful enough to get a good comment thread going, though).
For what it’s worth, both my brother and I are referred to as “Uncle [x]” by the children of some friends we’re not related to at all (except in the “everyone’s descended from Ghengis Khan” sense). We don’t refer to them as nephews/nieces, but it doesn’t seem like a stretch to imagine similar relationships where the people involved did.
I think it’s very unlikely Scott would publish 2). Those kinds of accusations against a prominent politician reflect very badly on the blog.
And open one to *huge* amounts of liability.
I can’t imagine any circumstance under which accurately stating public evidence supports liability in a US court.
I’m not sure if this is the best place to post this, but can I suggest the use of approval voting to select the winner, rather than plurality (which I believe was used last time)?
Seconding this suggestion. I don’t think it is a very controversial opinion here (if so, I might consider an adversarial collaboration) that “first past the post” systems of voting can create very perverse results. Since we have the chance, we may as well try something else.
Thirded.
I approve of this suggestion
But mightn’t ranked preference work better? Actually, I think we ought to have a vote on how we ought to count the votes. 😛
But yeah definitely we should use approval voting.
I rank this suggestion above plurality voting, but below ranked voting.
At the risk of dividing the movement, I’d prefer we go full Condorcet, with an approval fallback.
The problem is that if you want to determine whether there’s a Condorcet winner, you need ranked preferences, not approval votes, as input. Your suggestion would require eliciting votes in both formats.
Seconded. (Score voting would be even better.)
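For what it’s worth, here’s a toy sketch of the ballot-format point above (entry names and ballots invented, purely illustrative): an approval tally only needs each voter’s set of approved entries, while checking for a Condorcet winner needs full rankings so the pairwise matchups can be counted.

```python
# Hypothetical ballots for three entries A, B, C.
approval_ballots = [{"A", "B"}, {"B"}, {"B", "C"}]                    # sets of approved entries
ranked_ballots = [["A", "B", "C"], ["B", "C", "A"], ["A", "C", "B"]]  # full orderings

# Approval winner: the entry approved on the most ballots.
approval_counts = {}
for ballot in approval_ballots:
    for entry in ballot:
        approval_counts[entry] = approval_counts.get(entry, 0) + 1
approval_winner = max(approval_counts, key=approval_counts.get)

def condorcet_winner(ballots):
    """Return the entry ranked above every rival on a strict majority of ballots, or None."""
    entries = set(ballots[0])
    for candidate in entries:
        if all(
            sum(b.index(candidate) < b.index(rival) for b in ballots) * 2 > len(ballots)
            for rival in entries - {candidate}
        ):
            return candidate
    return None  # no Condorcet winner (a cycle)

print(approval_winner)                   # "B" in this toy example
print(condorcet_winner(ranked_ballots))  # "A" in this toy example
```

The only point of the sketch is that the second function can’t be computed from approval sets alone, which is the format mismatch mentioned above.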
Anyone want to do an adversarial collaboration on routine infant circumcision with me (I’m pro)? If so, reply here or email JoelCollaboration@gmail
Hello, I’m interested in doing an adversarial collaboration on the topic of government nutritional recommendations, including optimum dietary carbohydrate intake. My current position is that the FDA’s recommended sugar and starch consumption values (300g = 1200 calories) are much too high. I’ve heard the current values were influenced by agribusiness industry lobbying and want to look into that and see if there’s any truth to it. The values were established a long time ago, and if I ran the agency, I’d direct its staff to update their recommendation based on modern research, and would expect the value for carbohydrates to come out substantially lower, maybe around 150g. I’d also establish a minimum recommended value for protein, and expect a higher maximum recommended value for fat intake, as it’s now known to be less harmful than previously believed.
I have a bachelor’s degree in chemistry but no expertise specific to nutrition or biochemistry. As a disclaimer I’ll note up-front that I once lost 30lbs on a low-carb diet and am aware I might be biased, but disregarding my personal experience I think the evidence is leaning in this direction anyway.
If you are interested in researching this topic with me, please email me at: ebovenuzna94@tznvy.pbz (decipher with rot13).
I attend an SSC meetup each month and will by default probably discuss the ongoing collaboration at some point. If you don’t want me to do this while we’re working on it, please include a note about that.
Note that in CRON carbs are very high and I believe this is an off-the-charts healthy diet. My own prior is that macronutrients hardly matter.
I’m not remotely qualified to comment, but I would find “macronutrients hardly matter” to be a truly fascinating topic to read an AC on.
I’d be willing to do:
–The biggest problem facing the American economy is the declining rate of high-quality new business formation, together with the fact that the new cohort of business owners is becoming wealthier, more geographically concentrated, better educated, and better connected, due to a variety of barriers.
–America is developing an Old China-style political/social elite based on an expensive course of education with a series of key examinations creating ranked tiers. Further, American politics is beginning to take on the character of a conflict between central bureaucratic nobility and provincial local nobility as was a repeated pattern under the system in East Asia.
–Automation/AI will not lead to a general, sustained economic crisis within our lifetimes or for the foreseeable future. Automation/AI’s future effects will be similar to technology’s effects in the past and, on the whole, follow the general trend.
–It would be advantageous to return to the pre-Progressive Era policy of letting in almost everybody who met some basic requirements and then granting them citizenship after a period of well-behaved residence.
–And if anyone’s interested in finishing up the old collaboration from the last contest: I was arguing that UBI is not only bad for general economic output but also inferior to more practical policy prescriptions for helping the lower classes. We got pretty far, but my partner dropped out and no one else came forward.
You didn’t leave an email, but I differ with you on at least two of those positions and would consider doing an adversarial collaboration.
Specifically
–Automation/AI: I think it’s possible but unlikely we’ll reach AGI in my lifetime. Regardless, efficiency gains in production (both proper “automation” and simply labor-saving innovations and shifts in global production priorities) are already causing major economic problems, but their effect on developed nations has been reduced, hidden, or delayed by two factors:
1. Various economic manipulations enabled by a global economy and asymmetric resources which will cease to have this effect as resource concentration equilibrates across said economy.
2. Efforts to decouple labor from subsistence in the form of social safety nets
I believe the question of basic income is intrinsically related to this, and so:
–UBI is useful if not necessary as a more permanent way to ensure efficiency gains in economic production do not lead to considerable economic strife (In the form of poverty, unemployment, and in the extreme case, instability or upheaval of the economy)
I believe they are two separate questions, though automation alarm often serves as a justification for UBI.
The UBI question is primarily one of policy: is UBI a good policy to achieve its stated aims and are its stated aims something we want to achieve? The Automation/AI question is one of economics, technology, and society: will technology create an unprecedented mass economic (and therefore social) disruption?
If you want to pick one, tell me and I’ll post my email.
“UBI is not only bad for general economic output but also inferior to more practical policy prescriptions”
Such as…
Oh, I’ve got loads of those that I’d be happy to elucidate in a collaboration. There was actually a lot of academic ink spilled over UBI in 2016 since Clinton was thinking of making it part of her platform. She ultimately decided other policies were more practical.
Erusian: I would be interested in taking on the opposing side for the question
–Automation/AI will not lead to a general, sustained economic crisis within our lifetimes or for the foreseeable future. Automation/AI’s future effects will be similar to technology’s effects in the past and, on the whole, follow the general trend.
I believe it will have profound economic effects within the foreseeable future, breaking the overall trend.
you can contact me at dsummerstay at gmail com or post your own email and I will contact you.
You should receive an email shortly.
I am fascinated by the Old China US analysis, and my Chinese wife likely would be intrigued by this as well, but I A) don’t know that I disagree with you enough to be a proper adversary and B) have no idea where I would go looking for sources to argue against you. I am an excellent copyeditor, though, if your team winds up wanting one of those.
If you send me your email, I’d think we could definitely use a copyeditor. Maybe I’ll produce the Chinese analysis on my own as a ‘more than you wanted to know’ type post and you could edit that?
I’m interested in the UBI question. My current positions are:
1. It isn’t bad for general economic output, at least not through employment/labour participation rate.
2. It may or may not be inferior to other policy proposals depending on a) the universe of policy proposals under consideration b) the concrete implementation of UBI e.g. what we mean by “universal”, whether it’s done through a negative income tax, etc.
You can drop me a line at javier[dot]prieto[dot]set[at]gmail[dot]com
How can the winner be decided by SSC vote if they’re not even being published per #5?
I think #5 just means that they won’t get their own proper SSC post where all text is actually on-site. There will however still be a SSC post which links to all entries.
Topic: Inequality has measurable impacts on economic growth
My current position: Income inequality does not have a significant/measurable impact on economic growth.
There are numerous side topics that would need to be discussed, but none of them would necessarily be the focus of this proposed topic. These include:
> Wealth Inequality as a separate idea
> Consumption Inequality
> Inequality as a social justice issue
> Inequality creating instability in society with attendant bad social/political outcomes.
My background: I have undergraduate economics, history, and mathematics degrees, as well as significant coursework in a master’s program in economics. Furthermore, I have worked in public policy areas as an economist since Nov 2007.
Current Biases/Viewpoints- White Male Protestant who is happily married with kids, and a family income that puts us in the top 10% give or take. I have a generally Libertarian view of politics, but this includes items like a social safety net and provision of public goods, not an anarcho-capitalistic outlook.
If interested, you can email me at CoachClary (at) gmail (dot) com
I’d be interested in your results, but would be inclined to agree with you, except for one caveat: I’d suggest income inequality sometimes correlates with economic growth. In other words, sometimes neither causes the other, but other things exist which cause both to occur.
So I wouldn’t make a good partner, but maybe someone else will take you up on the question, or a variation of it.
I think this is one of the most interesting topics suggested so far, I hope you find a partner!
I would be willing to partner with someone on a death penalty adversarial collaboration, but it might need a proponent willing to articulate specifics. My position on the death penalty would probably be the same even if the most frequently-stated rationales for its existence were accurate, but I also maintain that they are not. I would be willing to argue:
1) The death penalty does not meaningfully deter crime
2) Somewhat more fuzzy: the death penalty does not provide “closure” or vindication for the victim and their friends and family in most cases (I am not really sure this is testable but in principle it feels like it could at least theoretically be tested)
3) The death penalty does not provide a meaningful incentive in plea bargaining
I am a 3L law student specializing in criminal law who had a prior career in library/information studies, where I mostly taught social science research methods to college students. I think this is a fruitful area for adversarial collaboration because most proponents/opponents of the death penalty have non-utilitarian rationales as their primary motivator (i.e. retribution for the sake of retribution vs. the inherent dignity of human life); however, these curiously seem to correspond with utilitarian, and eminently testable, beliefs (i.e. the death penalty is right because it deters the worst criminals vs. the death penalty is wrong because it doesn’t). Why is that? One side must be walking on air.
I also would like to do this because it would be a way to learn this stuff on a more rigorous level.
My email is [Theodidactus] [at] [gmail] [dot] [com]
If you are bringing up “non-utilitarian” rationales you suppose your opponents to hold, doesn’t it make sense to include:
4) Execution, if those who oppose it allowed it to happen quickly and cheaply, is not a quick and cheap way to eliminate members of society with a reasonably well-proven, larger negative utility to society
Valid. I think this is perhaps harder to argue about, though? I think this argument would largely happen on the margins concerning “if allowed to happen cheaply and quickly.” It would mostly be about what that hypothetical system would look like.
For example, if we were to argue about Torture, one of the points probably couldn’t begin “given a sufficiently reliable due process apparatus making sure only the correct people are subject to torture…” because that’s giving away an awful lot. We’d mostly be arguing about the actual shape of the hypothetical government itself in that scenario.
My primary opposition to a point four as described would be that there’s no “fair” death penalty process that could really be “quick” or “cheap”. The numerous roadblocks to carrying out an execution exist because you can’t “take it back” if you get it wrong. Any imaginative new process would have to contend with human biology and due process as currently understood by the courts, neither of which is really likely to change.
The “due process” parts are largely because of opponents to the death penalty in the first place, I’d say. It’s not as if “due process means delaying execution for 20 years after a conviction, just in case” was always the way things were. I’m not saying “let’s get rid of all due process!” but it seems like there’s an element of “Well, the opposition to the death penalty has spent decades and decades engineering roadblocks to quick and cheap execution; that means that quick and cheap execution is impossible, even though it existed before”.
I say this being mostly OK with the N+1 appeals before execution regime. That being said, my main point was that your set of arguments and your comments on those arguments de facto say “There’s no utilitarian reason to like execution”, but that’s similar to me arguing “condoms don’t decrease STD rates” if I’m the mayor of a town who artificially priced all condoms at $10,000 per unit.
This might actually be a fruitful area for an adversarial collaboration. I could try to argue that these rules exist for “good reasons” besides just arbitrarily making the death penalty hard, and you could argue they mostly exist just to make the death penalty complex and messy.
I concede at the outset that some “roadblocking” might exist. I think most of what we need to settle early on, though, is how much we could change the existing legal infrastructure. There is no doubt that at one time the death penalty was actually remarkably efficient, but a lot has changed since the 1850s, even the 1950s, and I don’t know how easy it would be to change it back.
Putting it another way, we would also have to figure out what “artificially” means in this context. In the context of the condom example, we have a prior understanding of what “a fair market condom costs in 2019” and can talk about how our mayor has “artificially” jacked up the price. We don’t really have a prior understanding of what “a fair execution would require in 2019” because most of these onerous procedures were added by judges and legislatures opining what “a fair execution” even is, and part of their job is to define “fair.” One way to look at it might be to examine how other countries do it.
…but we’re generally comfortable saying “what’s fair in [x]istan is not fair in the United States,” so we probably are stuck with something close to the modern United States’ understanding of “fair.”
EDIT: As another way to put it I guess what I’d be saying in any argument on this is that the public distaste for certain efficient methods (IE hanging); the due process concerns pointed out by judges; and the numerous legal hurdles aren’t “artificial” in the sense that they cannot easily be changed. A hypothetical opponent (maybe you) would argue that these are actually really changeable or arbitrary, or that there’s some other efficient way to do this.
I wouldn’t mind taking a stab at this – on the condition that you understand I’m pretty fluid on the subject and not at all a subject matter expert yet. bendenny@gmail.com
Death penalty opposition in one country is not independent of death penalty opposition in another.
I suspect the reason why Japan managed to keep the death penalty is that it’s really hard to send your activists to a country that’s on the other side of the world and speaks Japanese.
> there’s no “fair” death penalty process that could really be “quick” or “cheap”
There *is* no such process, or there can *be* no such process. You might imagine a subsection of crimes wherein:
* The crime is defined as a capital crime and;
* There is video evidence capable of substantiating all of the physical elements of the crime and;
* There is separately independent physical evidence to substantiate the guilt and the identity of the criminal.
So to qualify for ExpressDeath, you’d need, say, a convenience store robbery in which the robber shoots and kills the store clerk: the clerk was visibly alive in the video in advance, there is video of the robber intentionally aiming a gun at the clerk and pulling the trigger, and the clerk dies from the gunshots sustained. Robber caught shortly thereafter with gun, gun matches to video, ballistics match bullet, gunshot residue on robber matching cartridges used, with multiple high-quality fingerprints on-scene, robber having cash and checks from convenience store on-person.
I have no idea how many murders would be able to qualify for such treatment, but that could be done fairly easily.
You can’t take back a life sentence after you’ve administered it, either. (You can take it back before you’ve administered it, but this is also true for the death penalty).
I also think that the disadvantages of execution as practiced in a system beset by death penalty opponents interested in messing up the system, should not count as the disadvantages of execution itself. The death penalty isn’t going to provide much closure if the family has to wait for years of delays for the execution to actually happen, but it seems unfair to do all you can to delay the process and then claim that the process is bad because of the result of the delays you instigated.
Seems to me that the death penalty severely limits recidivism (barring a zombie apocalypse down the line).
Presumably life without parole limits recidivism equally well?
Yes, that would be my argument. Now there’s a tiny chance they’ll escape and a small (still very small, I’d argue) chance they will be more likely to kill or otherwise harm their fellow inmates (keep in mind the chance would have to GREATLY EXCEED the corresponding chance for others unworthy of the death penalty).
One could perhaps argue that an “efficient/quick” death penalty would be cheaper than life without parole, but see my comments above on how “efficient” we could really make the death penalty in modern America, because I really feel like the inefficiency is baked in.
Something that’d be cool to see discussed, though perhaps too tangential, is an opt-in death penalty. Anyone sentenced to life can choose death instead. I think there are a lot of really complicated issues surrounding it, beginning with it being a recognition that our prison system sentences at least some people to a fate worse than death (for them). Personally at least, I’m strongly opposed to the death penalty in a normal sense, but have a really hard time not thinking we should allow people to choose death if they want it.
I had a fictional society in a story once where there was a different kind of “opt-in.” Basically, for future recidivists, as a condition of release.
“You are hereby sentenced to jail for 10 years for this assault. You must serve the whole time, or you may be released in two years, but if you commit another assault in a ten year period we can kill you.”
How is this different from saying – repeat offenders for assault (with definitions that previous convictions “expire” after a period) are eligible for the death-penalty but first time convictions aren’t.
I don’t feel this needs a fictional society, even if I’m not sure if any current jurisdiction has rules exactly like this.
The gimmick in the book was that you could OPT for the early release. It was basically your way of stepping up and saying “I’ve learned my lesson” and it gave society a plausible excuse to off you for being a “really bad person.”
This wasn’t advanced as a good idea, and in fact I think I was trying to show how it’s a bad idea given what we actually know about how humans do cost/benefit analysis. The idea that you’ve learned your lesson and you’ll never make a similar mistake again is a seductive one.
@Theodidactus – since you’re a 3L interested in criminal justice, let me strongly recommend “Torture and Plea Bargaining,” by John Langbein. (TLDR – in ye olden times, due process protections made trials unwieldy but torture was legal, so authorities tended to obtain confessions by torture. In modern times, due process protections make trials unwieldy, but plea bargaining is legal, so authorities tend to obtain confessions by overcharging, which makes the penalty for going to trial and losing substantial).
The relevance here is that one effect of the death penalty may be to encourage plea bargaining and/or cooperation with authorities in investigating others, for better or worse.
I mentioned that as one of the things I’d be willing to adversarially collaborate with people on. Being a lowly 3L, I have no firsthand experience with that kind of plea-bargaining, but my assumptions on the matter, informed only by my rough understanding of the process and my experience thus far, lead me to conclude that threatening death isn’t a super-effective tool for that.
Ironically, I believe it WOULD be a good tool if the general target of who is “deserving” of the death penalty were altered. I agree that overcharging works as a tool to induce plea bargaining (in fact, THE BEST tool for it), but not on the sort of ghoulish monsters beyond all hope of redemption that we tend to feel comfortable executing. You get your best returns in the “we caught you driving drunk, now we could give you 90 days for that, but if you admit you did it and promise not to do it again we’ll give you 3 days of community service” zone, and diminishing returns on up from there.
Here are some topics. The first one is more easily debatable. The second one is harder to debate in a vs. format, but it can be done (perhaps with a rephrasing).
Antidepressants, as a whole, do more harm than good and society would be better off if they were banned as a class of medications.
Caveats such as “well, maybe SSRIs work in some of the elderly, but for reasons that are more or less lies compared to the commercials” are little sub-exceptions to the general point that, as a whole, they do more harm than good.
Another possible debate:
This is almost certainly a simulation and that has meaningful theological implications, including on how to live your life.
Background: An autodidact with a lot of books who reads a lot of blogs, thus I’m basically Yudkowsky, I guess (kidding). I have a really high rank on this site; hopefully that gives me some social proof that I’d at least be an intelligent debate partner.
My email address is singularityentity343[at]gmail[dot]com
I’m curious about your rationale w.r.t. antidepressants. If I’m half a day late taking mine (Bupropion, for what it’s worth), I become too tired to function, and it can take a week or more to recover from missing a day.
The half-life of Bupropion is roughly 35 hours.
http://com.hemiola.com/half-life/
Plop in 99.9999 in “Percentage elimination to consider (below) close enough to 100”
and plop in 35 as the half-life.
Once steady state is reached, and assuming a 24-hour interval between dosages, your minimum-maximum ratio of the drug is 0.62121212121. An additional 12-hour interval, indicating “half a day late,” puts the min-max ratio at approximately 0.45. I do not believe such a severe withdrawal effect is due to the 0.17 difference.
Assuming a 12-hour gap between dosages, the drop per unit of time after a steady state is reached is nearly linear. 372/472 is going from 0.78 to 0.57 ratio. Perhaps noticeable, but still, taking a week to recover is probably a mental quirk.
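As a rough sanity check on those numbers (my own back-of-the-envelope single-compartment model with first-order elimination, not the linked calculator), the steady-state trough-to-peak ratio depends only on the dosing interval and the half-life:

```python
# Trough-to-peak concentration ratio at steady state under simple
# first-order (exponential) elimination: 2 ** (-interval / half_life).
HALF_LIFE_HOURS = 35.0  # approximate Bupropion half-life quoted above

def trough_to_peak(interval_hours, half_life_hours=HALF_LIFE_HOURS):
    return 2 ** (-interval_hours / half_life_hours)

print(round(trough_to_peak(24), 2))  # ~0.62 for on-time daily dosing
print(round(trough_to_peak(36), 2))  # ~0.49 when a dose is taken half a day late
```

This simple model lands in the same ballpark as the figures quoted above; the linked calculator presumably makes somewhat different assumptions.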
If you look at the large number of woo practitioners and their customers, it becomes apparent that a large percentage of the population can be convinced by placebo.
I want to argue further on here, but I want to leave my best sources for a real debate.
Hypersomnia is not a withdrawal effect; it’s the entire reason I medicate. And the thing is, depression itself is one heck of a “mental quirk.” It’s self-reinforcing, and it’s strong enough to work through the medication sometimes (especially during winter months).
As another data point, Bupropion is the third medication I’ve tried, and the other two did absolutely nothing. Placebo doesn’t explain that.
Do you offer an alternative to antidepressant medication? Because this honestly comes across as “have you tried not being depressed?”
I’m more than willing to have a full debate about the effects of the medications, that’s why I started the prompt!
I should have rephrased “mental quirk”
It’s odd you mentioned hypersomnia. Here is a quick look at what happens when people medicate for hypersomnia with caffeine, or attempt to gain the long-term benefit of a stimulant for another purpose, like Adderall.
Though may I point out: if someone gave you caffeine and spread it out throughout the day, you would feel quite horrible off of it.
And virtually the entirety of that is merely withdrawal reversal after a month of usage. And you will then believe that it’s what is keeping you up and feeling fine.
I feel like “this is almost certainly a simulation” and “if life is a simulation, this has meaningful theological implications” are both probably big enough to be a single discussion each. They also flow-chart out in such a way that the answer to the first sometimes precludes discussion of the second.
With regards to the simulation hypothesis, would a naturally occurring “simulation” count? (Such as the carpets discussed in Greg Egan’s story Wang’s Carpets) Or must a simulation be built by intelligent life forms of some kind? I would imagine that “intelligent life forms” would be required for most of the theological implications you have in mind. If so, I might be interested in taking the negative side (either the world is not a simulation, or if it is, it arose only by chance in the parent universe).
I’m not really sure how much empirical evidence there is to collect here, though. Discussing physics could be fairly fruitful. For example, quantum mechanics seems to suggest that if there is a parent universe, it either also follows quantum mechanics, or it has an obscenely huge amount of computational power, even by simulation hypothesis standards.
Also, which version of the simulation hypothesis are you focusing on? Ancestor simulations seem unlikely to me, since they would be considered ethically monstrous by any civilization that retained human values, while an unfriendly AI would most likely consider them pointless.
I am arguing for a simulation built by intelligent life forms. Perhaps not perfectly “rational” intelligent lifeforms, but a superintelligent life form nonetheless.
Beyond that, I will mention the various possibilities under that, such as an ancestor simulation, an AI testing area, a Borg-like agreement to simulate assimilated species, etc.
This is a bit disappointing because I believe your position is too broad to be defensible.
Now I consider starting a new prompt for somebody to simply steelman Bostrom against me.
Okay, I’m interested. I’ve sent you an email.
I am not interested in doing a collaboration on simulationism, but simply as a data point: I believe that quantum MWI, combined with the anthropic argument, is a strong argument against simulationism. Yes I’ve thought about this for more than 5 seconds so I probably have a counterargument to the first few responses you’d have, but as I said, I’m not interested in pursuing this right now.
I would be interested in collaborating with someone on the value of space exploration and specifically colonisation from an X-risk perspective.
My position is that it is important to establish multiple independently viable human colonies within the near future.
In terms of qualifications, I am a master’s-year aerospace student who has worked for a while in the space industry.
You are supposed to add an email address so that people can email you rather than apply by replying to this comment.
I hope you get a partner, I’d be interested to read this. But I wonder what you will be looking for in terms of research data?
Could you post an email? I have a masters in aerospace, currently work in the industry, and strongly believe that near-term efforts to establish colonies are not a meaningful way to reduce X-risk. I do worry that any evidence would be speculative, and neither position would have good (or really any) research to back their claims.
Do we have the technology to establish independently viable human colonies, in the first place?
I would really like to see this
Sorry, I forgot to add an email, I can be reached at njd38@bath.ac.uk
Hi, all! I’m seeking a partner for collaboration on whether the construction of new market-rate housing is better for low-income affordability than constructing no new housing, in a supply-constrained market. (Say, the Bay Area, or Seattle, or New York City.) I’m an engineer in my day job, and have no particular qualifications here apart from writing a lot of posts about housing on the subreddits, and doing a lot of reading. I’d like to work with someone who supports moratoria on market-rate construction, or believes that affordable-only construction is a better alternative. (Example here.) Email me at grendelkhan at gmail dot com.
Hi, I want a partner for a collaboration on whether usury laws are, in general, efficient (I think not).
Background: I have a law degree and I work in the Ministry of Finance of Colombia. Email me at jserrano27 at unab dot edu dot co.
One thing I’ve been thinking is, for future contests, could you allow teams that are bigger than two people? Assuming people can figure out how to arrange these things internally (which would probably involve electing a “leader” on each side to reach consensus more effectively?), this might lead to some collaborations that are more thorough and better informed. I don’t know how well it’d work in practice, but it’d be interesting to experiment with, I think.
Personally I’d worry that this would encourage people to enter who weren’t quite so dedicated, resulting in a feeling of “well it doesn’t matter too much if I take a week or two to focus on the rest of my life because the rest of my team will pick up the slack…”
some ideas:
– the evidence with respect to the use of sunscreen and exposure to the sun
– What factors contribute towards a country’s level of happiness
– a cross-country examination of ethnic group longevity in different countries (i.e. how long Nigerians or Japanese people live in countries like Canada, the USA, the UK, Germany, etc.)
I could do something in nutrition. Lotta controversial stuff these days. Potential topics: do carbs cause disease? does meat cause cancer? is saturated fat bad? are low carb diets safe? are low carb diets better than other diets? Assume I take the pro-carbohydrate anti-meat stance on all these.
Or, I would also be interested in discussing fat acceptance and health-at-every-size (me: pro).
email is jennytalia 9 @ gmail dot com
I might take you up on that offer. We could cover some combination of the nutrition ones. Would you care to include my claim above? (FDA recommended daily value for carbs is far too high)
Sure. I would prefer to focus more on “what is the healthiest diet” rather than getting into like the policy implications of the USDA food guide, which adds a whole history element to it that is not really researchable in medical journals. For example you could have a whole discussion about how much the recommendations even influence what people eat. The science covering the healthfulness of fats and carbs is already deep and complicated.
I am interested in collaborating on the following proposition: the proposed responses to climate change pooled across everyone who offers an opinion on the topic are bimodal in cost (i.e. there is a cluster of low-or-zero cost responses, and a cluster of very high cost responses), and this implies that at least one of the clusters of people is being unreasonable. In turn, this implies that refusal to engage with the other side of the debate (from either side) is not sufficient to show that the refusing side is being unreasonable (because it is not unreasonable to refuse to engage with people who are themselves being unreasonable).
Email is {my username} at fastmail dot com dot au
I feel like that logic is kind of twisted backwards around itself. If we’re looking at a high enough level of abstraction that we ignore particular evidence and only see two peaks, how do you know that your side is the reasonable one?
I’m not sure what claim you’re trying to make, but I think it’s either trivially true (“refusing to engage clear crazies is reasonable”, “if we here all agree on the basics, we don’t need to re-argue the opposing side every time we want to discuss details”) or clearly faulty/circular logic (“If I assume I’m reasonable, anyone who strongly disagrees with me must be unreasonable, thus it is reasonable for me to always ignore their arguments.”) Which are you trying to say?
Your claim is made somewhat more difficult to parse by the “not un-” construction; I suggest removing it.
I don’t know which side is reasonable; both sides might be unreasonable!
The controversial claim is that we can deduce (or at least find strong evidence for) the fact that at least one side of the debate is in fact crazy by looking at the aggregated distribution of positions advocated. It would be interesting if this were true, as we could then show the existence of crazy/unreasonable/bad faith arguments without becoming subject matter experts even if we can’t identify them without becoming subject matter experts.
Alas, the edit window has passed.
It is reasonable to refuse to engage with someone who is themself unreasonable.
Ah, fair enough. I think I understand you better.
However I don’t think that a bimodal distribution of opinion necessarily implies unreasonability. It would imply to me that there is perhaps a single binary crux that is responsible for most of the variation in people’s beliefs/opinions/recommendations. However, I think it doesn’t follow that just because there’s a binary, one side or the other is unreasonable.
For a fictional and politically non-charged example, let’s say we’re concerned about the atomic weight of the newly-discovered element, Unobtanium. Now, the equipment we used to discover it was imprecise, and our models are preliminary. However, most leading chemists and physicists believe that it has an atomic mass of either 42…or 43. Since neutrons and protons are discrete, there’s no chance of it being 42.5 – anyone claiming that mass would be laughed out of the room. But with this bimodal distribution…must half of the scientists be unreasonable? (Half of them are wrong, of course, but in this contrived case it’s pretty clearly an honest mistake.)
Or for another example: About half of the town wants to build a new high school, to reduce overcrowding at the existing school. The other half doesn’t want to build it; they claim that it would necessitate tax hikes or budget cuts that the townsfolk won’t easily be able to handle.
Does the lack of people advocating for building half of a new school (or some other fraction) mean that one side or the other must be completely unreasonable?
You might be shocked to learn that I too think there is room for doubt.
Wanna do an AC?
Alas, school’s starting up again soon and I don’t think I’ll be able to devote the necessary time/focus to a project like this that I presume is gonna take more than a week.
Sorry if I’ve come off as a bit flippant, I think we’re probably talking past each other to some extent.
This claim seems to not be the one you mean to argue about, but something you expect collaborators to agree with you on. Is it actually true? If so that’s pretty interesting.
No, I don’t know this to be true, we’d have to check. I agree it would be interesting. I wouldn’t be surprised if it turned out something like: weighted by people, it’s unimodal, weighted by audience it’s bimodal.
Is anyone interested in looking at routine infant circumcision? I’d be on the pro side, as a health intervention whose benefits outweigh the risks. I can be emailed at JoelCollaboration @ Gmail.com
ok!
I’m on that one.
Against infant circumcision, I actually feel pretty strongly on this. Or I think I do. Hopefully I can do two topics, this and the one I am starting.
Cool, I’m glad you guys are doing this. I hope the paper gets written. I’ll read it.
I believed in circumcision more strongly when my son was born than I do now. I had him circumcised and don’t feel too bad about it, but do sometimes wonder if it was unnecessary and philosophically immoral.
BTW, MissingNo, if it’s all just a simulation then who cares? 😛
The entity in charge of the simulation will get bored if we don’t act like it matters and shut it off I guess???
Anybody interested in gerrymandering? I’m not very confident and I have no expertise on this, but I think that while independent commissions might do a better job redistricting, ultimately this isn’t such a big deal for the health of democracy.
I would want to review empirical evidence on gerrymandering to try to understand how the preferences of voters are improved when gerrymandering is addressed. I’d also want to understand more clearly how much Republicans vs. Democrats benefit from gerrymandering.
I have no qualifications for this and I have weak beliefs. You should not expect any collaboration with me to be adversarial, except in the sense that I’ll probably be skeptical that gerrymandering is a big deal. I’d be totally fine changing my mind. If we collaborate, we would be researching, reading and writing together, not arguing. I don’t have studies to sling around yet. But I think we could write something cool about this.
Also, to be perfectly honest, this project would coincide with the beginning of the school year and the fall Jewish holidays and I might not be able to fully commit. We’d have to see.
mpershan at the gmail place is where you can reach me.
Michael, if you’re interested in gerrymandering and math (which I know you are), you might want to check out the Metric Geometry and Gerrymandering Group. If you really get into it, I have some friends who are involved and could connect you.
Yeah! I’m familiar with their work and have gone to a number of talks from people associated with the group. I come at this from the math world (I’m a math teacher) and while my understanding of this topic is rough, it seems to me that the math world hasn’t really tangled yet with a number of tricky empirical questions.
“People associated with the group” — does that include “formerly-associated”, by any chance?
I haven’t re-read it recently, but I liked this gerrymandering paper:
http://www-personal.umich.edu/~jowei/gerrymandering.pdf
I think one issue that’s going to plague any gerrymandering debate is that it is hard to determine what a “good” district is. Given a certain definition of what makes a district good one can determine a fair map that reflects that definition. But without a definition, any number of maps could be argued as fair. I think any productive discussion of gerrymandering would need to include at least a rank-ordered list of what makes a good district (and possibly more than that).
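To make the “you need a definition first” point concrete, here is a minimal sketch of one commonly proposed fairness metric, the efficiency gap (wasted-vote) measure, offered purely as an illustration of what such a definition can look like rather than as the linked paper’s method; the district results are invented.

```python
def efficiency_gap(district_results):
    """Efficiency gap for a two-party election.

    district_results: list of (votes_a, votes_b) tuples, one per district.
    Wasted votes = all votes cast for the loser, plus the winner's votes
    beyond the bare majority needed to win. Returns (wasted_a - wasted_b)
    divided by total votes; values far from zero suggest the map wastes one
    party's votes disproportionately.
    """
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in district_results:
        district_total = votes_a + votes_b
        needed = district_total // 2 + 1  # votes needed for a bare majority
        if votes_a > votes_b:
            wasted_a += votes_a - needed
            wasted_b += votes_b
        else:
            wasted_b += votes_b - needed
            wasted_a += votes_a
        total += district_total
    return (wasted_a - wasted_b) / total

# Hypothetical three-district map: party A wins two districts narrowly while
# party B piles up votes in one, so A's votes translate into seats more efficiently.
print(efficiency_gap([(60, 40), (60, 40), (20, 80)]))  # about -0.24, favoring A
```

Whether a number like that actually captures what makes a district “good” is exactly the kind of thing the rank-ordered list above would have to settle.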
I strongly dislike gerrymandering and was in favor of a recent initiative in my state to establish an independent commission. That said, the stakes are high and zero-sum so I don’t expect large changes to come from it.
Yeah, I also dislike extreme gerrymandering, but I think things get messy quickly and it’s hard to tell how much better democracy would be without gerrymandering.
@Steve?
I think that it’s more important to have a consistent way to draw districts. Then there still may be an overall bias, but then you won’t have people picking different methods based on what benefits them in the local conditions.
I believe that gerrymandering is a big deal, but I don’t believe that redistricting commissions are the solution. I’d be willing to do a collaboration on the subject of gerrymandering, but I think our levels of prior engagement with this topic are quite disparate, so I fear doing this with you would not end up being adversarial enough to fit in this contest.
I am interested in researching and analyzing the following topics, which I have vaguely right-wing intuitions about but little to no knowledge or expertise about. I currently support the following statements, and would be willing to research any one of them with a collaborating adversary:
1. Incarceration is an effective approach to reducing crime, and future deincarceration will have higher costs than benefits to society.
2. The War on Drugs was not (primarily or substantially) motivated by racism, and was beneficial to society.
3. The U.S. Government will go into a budget crisis and will have to default on >10% of its obligations (or inflate/regulate them away) within the next 50 years.
The ideal collaborator would be someone who disagrees with the statements, but doesn’t (yet) have an array of studies to cite in support of their belief. That way, we could be on a level playing field and both go through a symmetrical process of discovery.
Email me at kandh2o (shift-2) me (period) com.
As with some posts higher up, I can’t AC with you, but I would like to see these topics attacked. For what it is worth, my biases on each topic:
1a. Incarceration: yes, reduces crime through direct mechanism (reduces the free population of people likely to commit crimes) but probably not the most cost effective approach.
1b. Deincarceration: yes, will be costly, but this is largely because time in US prisons trains people to be more criminal, not more able to function in civil society.
2a. Racism: not explicitly/primarily driven by racism. I don’t know what evidence for/against this claim would be convincing.
2b. Beneficial to society: not beneficial.
3. Agree with your claim.
I have found a partner for topic #1 that I’ll be working with! Anyone else who was interested—hold off until next time, I guess.
I’m curious how you were planning to define “racism” in #2. It seems like for most conventional definitions you’d have a hard time proving it either way.
Hi, I want a partner for a collaboration on the following claim (with which I agree): virtually all secular ethics frameworks practiced in the Western world imply a vegan lifestyle in almost all circumstances, and almost everyone who consumes animal products is violating their own implicitly or explicitly held morals.
I realize there is a similar proposal already, but I would like to address a broader version of this claim, and discuss a vegan lifestyle (not diet) as opposed to a vegetarian diet.
I would like to cover all the common arguments for and against being vegan, and would prefer someone who has read and thought extensively on the topic. Email me at srlariosr(at)gmail(dot)com.
I doubt that there are any vegans that don’t violate their own implicitly or explicitly held morals.
Thanks for your comment 🙂 and of course I agree with you that all people at some point or another violate a few of their own morals (and I apologize if that line sounded a bit snobby/mean! It was not my intention – I think people do this often subconsciously). I should have been clearer: “almost everyone who consumes animal products is violating their own implicitly or explicitly held morals repeatedly, consistently, without recognizing what they are doing.” Most people violate their own morals by occasionally telling a harmful lie, littering, or whatever. But the consumption of animals happens on a large scale, consistently, repeatedly, and without any feelings of guilt or acknowledgement of wrongdoing. I believe this is due to a sort of social blindness or refusal to look at reality. If you disagree with me, email me so we can collaborate 🙂
I believe that something very similar to Gödel’s incompleteness theorems is true for secular ethics frameworks, and also that humanity is not designed around any ethics framework.
Then trying to achieve moral perfection is futile and any such attempts are inevitably going to run into:
– them being so unpalatable to real people that practically no one is willing to be pure and/or people being miserable if they try to live a very pure life
– incompatible demands
Then any attempt to achieve moral perfection seems destined to result in dangerous rationalizations, like absolute denial of the immorality of certain choices. In other words, an entire spectrum of grays gets called white, even if it is dark gray.
With immoral behavior often involving excesses and the most immoral behavior involving great excess, I think that the goal should be to reduce the worst excesses and thus the worst immoral behavior. Attempts to achieve more moral behavior will at a certain point probably become counterproductive.
So, specific to your case: it’s way more important to me that animals in human service are treated fairly decently than that we seek to be utterly just to them, especially as animals treat other animals in the most horrific ways. Great sacrifices that address relatively small injustices, while way greater injustices are still happening, seem rather pointless, as well as more narcissistic than altruistic.
So I don’t so much oppose your claim, but rather the entire meta-moral framework in which your claim is relevant.
Why do you think denying existence to animals is more ethical than letting them live a good life and then turning them into bacon and belts? To me that is the rub. To cows, veganism means the near-total extinction of their kind.
I proposed the similar animal rights collab. I’ve been getting some constructive criticism that the focus on utilitarianism specifically is distracting and unhelpful, so I’m likely to ask to work with my collaborators to fix that. Upshot is that I don’t think our two proposals are that similar to begin with, and are likely to get less similar as I refine mine, so I think we’ll end up covering different ground.
I’d be really interested to see the results of this!
I have similar questions for you as I asked the vegetarian: what exactly is veganism, both diet and lifestyle? Can a vegan eat or use products from very simple animals such as bivalves or polyps? If not, why is it OK for vegans to eat or use products from certain relatively sophisticated plants and fungi? If so, what exactly qualifies this diet/lifestyle as vegan?
This is a great question and it’s probably something I should have addressed in the proposal. Different people disagree on what a vegan “can” and “cannot” do, but the general consensus is to try to live your life harming as few living creatures as possible. There is some disagreement around exactly which species are sophisticated enough to be a “living creature”. PETA, I believe, draws the line at whatever has a nervous system (and bivalves do). There is no clear line on the spectrum of creatures between something that “lives” but clearly feels no pain, such as grass, and something that lives and clearly feels pain, such as humans. I have my own opinion on this issue and how to make decisions on what is harmful, and I think a portion of the paper would have to address that. However, since most people consume obviously intelligent animals, I don’t think it needs to be the main focus of the paper.
Seems to me it’s a central issue your paper has to tackle. Plants and fungi don’t have nervous systems but they have analogous systems for sensing and responding to their environment, often very sophisticated ones, and sometimes these are more sophisticated than in simple animals. If you are drawing the line based on what can or cannot clearly feel pain, you have to establish exactly what “clearly feel pain” means in a way everyone would agree on. I don’t know if that’s possible. If you choose a definition not everyone agrees on, then there’s no reason for those who disagree to accept any of your subsequent arguments.
I’m willing to possibly adversarially collaborate on some weird topics.
Here are a few I would like to look into
1. Do locks do anything other than keep honest people honest?
2. Is watching the news a better use of your time than watching porn? (or going to the amusement park, etc.; porn is just the most clickbaity headline)
sorry I forgot my email
enelson3 (at) horizon.csueastbay.edu
Begging the question?
You seem to define “honest people” to include people who violate the law/norms when given the opportunity.
Sorry, it’s mostly a phrase said by locksmiths (“locks are there to keep honest people honest”).
The argument being that locks are there to stop people who don’t intend on stealing stuff but would take something if it were literally lying there waiting to be taken.
Hence the phrase *keep* honest people honest (they don’t get tempted to steal something that’s right in front of them). Think of a teenager who normally wouldn’t commit a crime but sees something just lying there and assumes that it was abandoned.
I don’t steal things that I could steal. I think that makes me honest and that a person who would take something is not honest.
If it is unclear whether something is abandoned, then the issue is not a lock. Something without a lock can clearly belong to someone and something with a lock can look abandoned (like a bicycle wreck locked to a lamp post for many months).
As for the saying, it seems similar to “locks keep out only the honest” where the intent is more to communicate that one shouldn’t/can’t trust a lock too much, because a determined attacker can defeat or circumvent it. So then the casual thieves are called “honest,” in contrast to hardcore criminals.
Using this kind of language suggests to me that the people saying this have low trust in others.
Going to the amusement park is almost certainly the better use of time.
For 1, what about opportunist dishonest people?
Like, if they found an unmarked open envelope full of money, they’ll likely keep it, but would turn it in if it were sealed.
Not interested in a collab atm, sorry.
I really like #2 but a more salient alternative than porn or amusement parks might be “engaging in gossip with/about people you know”.
I am interested in collaborating on the following proposition (sorry if it sounds very CW): the main motives behind the immigration and AGW stances of the US left are political self-empowerment and virtue signaling rather than genuine concern about the welfare of immigrants or the negative impact of AGW.
Email: alcam719 at gmail dot com
“Is X self-aggrandizing virtue signaling” doesn’t sound like the kind of question you can do a productive AC on.
I think this is a falsifiable statement that is quite possible to test empirically.
Okay, then what would it take to disprove it by your standards? What does it look like to prove it to the satisfaction of the other side?
Bearing in mind that we’re talking about “the left” here, so showing it for any one individual — already a tall order — won’t be enough.
I would suggest examining situations with conflicting motives. For example, what happens with policies that promote the welfare of immigrants (or reduce AGW) but simultaneously hurt the left politically? If, in most such situations, left-wing politicians and media side with the immigrants, that would be a clear indication that my claim is false.
Personally I think it’s more that US politics has become more violent and less productive in general lately, rather than it being about some specific issues on the left, but I guess stuff like this could be a counterargument:
https://www.reddit.com/r/slatestarcodex/comments/9kdkzo/culture_war_roundup_for_the_week_of_october_01/e6zhmvt/?context=3&sort=top
I don’t think I have time during that time period to do it justice, but gradualism vs radicalism would be a good debate. I could probably take either side, although I’m dispositionally inclined toward gradualism.
Here’s a few claims on which I’d collaborate, each written so my position is “in favor”.
1. Essentially all (i.e. at least 80%, likely > 90%) of the growth in US healthcare expenditure over the past ~30 years is due to demographic change, specifically the aging population. The numbers are just to give a rough idea of what I currently think based on only cursory research; they could easily wiggle a bit.
2. More generally, US “economic stagnation”, to the extent that it exists, is due primarily to the aging population. The main underlying mechanism here is a reduction in the savings rate, as resources are instead consumed caring for old folks who don’t themselves work. Other, less direct mechanisms include a number of more political things – a glance at the US government budget shows healthcare & social security costs squeezing basically everything else, reducing expenditure (relative to GDP) on infrastructure, military, research, etc.
3. The US government should establish a sovereign wealth fund (similar to the Alaska permanent fund or Singapore’s fund), funded entirely by borrowing. By “should”, I mean this is just an obvious win with minimal downside. Reasoning: US borrowing costs, after adjusting for inflation, have been roughly zero over the past decade. Obviously a key prerequisite for this to work is to make sure that Congress has basically-zero input on how the funds are to be invested – presumably using a management structure similar to the Fed board.
4. Growth in US costs of housing, education and healthcare over the past ~30 years has basically nothing to do with Baumol-style cost disease. Specifically, healthcare is all about aging, education is all about smaller classes in a wider variety of subjects, and housing is all about reurbanization running up against NIMBYs. (Also, the difference between sticker price and actual expenditure is substantial in healthcare and education.) None of those mechanisms would mediate a Baumol effect.
The specific positions above probably give some idea of my wider views, and I’d be open to suggestions on those as well.
Email me at [my username][at]gmail[dot]com, or just reply here within the next week or so. (1) or (4) would be easiest, since I’ve done at least some research on those already; (2) or (3) would involve more legwork but I’d potentially learn more.
I don’t have a particularly strong interest in doing a collaboration on (1), but I’m very surprised by the claim. The US is very unusual in its healthcare spending as a per cent of GDP, but is not unusual in terms of its demographic profile (and in fact is a long way behind many countries such as Japan). To be plausible, such a claim needs an explanation for why demographics has had such a vastly different impact on US healthcare spending than it did on every other major country in the world.
If I’m remembering the charts correctly, US healthcare expenditure as a per cent of GDP is actually very typical. It’s absolute per-capita expenditure which is atypical, because US per-capita GDP is about 50% higher than most other first-world countries.
I’m pretty sure demographics did have a similar effect on other major first world countries.
You’ve got these back to front, which is why I think your premise in (1) doesn’t hold up.
US healthcare as a % of GDP isn’t typical in the slightest – at 17% it’s the highest of any western nation in the world. Every other country usually used as comparisons to the US (Canada, the UK, Australia, Germany, Sweden, Norway, Italy) spends between 7-11% of GDP on healthcare.
If you’re using the “last 30 years” as your frame of reference, life expectancy has only risen from 75 to 78 during that period. I don’t doubt that it’s had some impact, but 80-90% of the increase?
I’d surmise that the bulk of the increase is due to the influence of private health insurance companies, which is what happens when you don’t have socialised medicine. Somehow the US manages to spend almost double what any other country does on healthcare and still have the worst health outcomes.
The demographics of the US don’t differ markedly from the other countries I listed above. The healthcare systems they utilise do though.
(Background: I’m an Australian epidemiologist who has spent some time working in Canada and the UK looking at how health care delivery in different countries affects health outcomes.)
Ah, I was indeed misremembering. This was what I was thinking of.
Life expectancy isn’t that relevant here. We’re talking about demographics, e.g. what fraction of the population is over 50 or over 60 or whatever. Life expectancy is one piece of that, but the long-term decline in fertility is a bigger piece. (Life expectancy would be the main piece if the age distribution was roughly at equilibrium, but it’s decidedly not at equilibrium.)
As I said, US demographics are not atypical, and I expect demographic shifts in other countries had a similar effect. That doesn’t mean that they ended up at a similar level of overall spending; it means they saw similar growth rates in spending. I’d assume UK, Canada, etc had lower health expenditures 30 years ago, and they still have lower expenditure today, but presumably they’ve also seen healthcare expenditure grow rapidly as their populations age – and I’d guess that savings rates fell accordingly.
Anyway, if you want to get into the weeds on it, then email me and we can do an adversarial collaboration.
The data does not really show what you suggest, John. The WHO dataset shows that healthcare spending in the US went from 12.5% of GDP in 2000 to 17% in 2016.
In contrast, in the Euro area (with worse demographics than the US), spending went from 8.5% of GDP in 2000 to only 10.2% in 2016. In Canada the story is similar – spending went from 8.3% to just 10.5%.
You could probably find some counterexamples, but the simple point is that many countries – including many with worse demographic trends than the US – have not seen similarly rapid growth in healthcare spending.
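For what it’s worth, here’s a quick back-of-the-envelope comparison using only the figures quoted above (a rough sketch in Python; the numbers are just as stated in this thread, not independently re-verified):

# Healthcare spending as a share of GDP, 2000 -> 2016, per the figures quoted above.
spending = {
    "US": (12.5, 17.0),
    "Euro area": (8.5, 10.2),
    "Canada": (8.3, 10.5),
}

for region, (y2000, y2016) in spending.items():
    relative_growth = (y2016 / y2000 - 1) * 100
    print(f"{region}: {y2000}% -> {y2016}% of GDP, {relative_growth:.0f}% relative growth")

# Prints roughly: US 36%, Euro area 20%, Canada 27% relative growth -- so the US share
# grew noticeably faster, even though every region saw its share rise.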
Except that no other country had “similar growth rates in spending”:
https://www.healthsystemtracker.org/chart-collection/health-spending-u-s-compare-countries/#item-since-1980-the-gap-has-widened-between-u-s-health-spending-and-that-of-other-countries___2018
The US spends about the same share of GDP as comparable countries on public health spending – around 8-9%.
The US spends more than triple the share of GDP on private sector health spending compared to the average of comparable countries: 8.5% versus 2.7%.
Every country is dealing with an ageing population. It’s not the reason healthcare costs are twice as high in the US as anywhere else in the world.
My current understanding is based on directly looking at US healthcare expenditures over time, bucketed by age (this dataset). That’s a much more direct way of tackling the question than any international comparison. So I’m fairly confident that I’m at least not way off the target here.
If people think other countries have not seen similar expense growth as their populations age, then email me and we can look into that discrepancy as part of the adversarial collaboration.
I would be interested in doing a collaboration on empiricism as a form of understanding theism/spirituality. My essential concept is that personal spiritual experiences are empirical insofar as they are (subjectively) measurable perceptions of metaphysical phenomena. While such “observations” are not objective, perhaps they reach an acceptable level of objectivity when confirmed by others who observe the same things.
This may be more philosophical than the instructions allow, so I’m happy to be told that this topic doesn’t qualify.
I am a committed theist, but I don’t think my beliefs are irrational. I was struck by the SSC survey results that about 50% of readers are theists. Seems as though there may be some interest in this kind of topic.
If you disagree with me and want to collaborate (or if you are intrigued by this idea and want to talk more!) please email me at jeremiah [at] godexperiment [dot] org.
EDIT: I have a Ph.D. in Theology and think it would be good to partner with someone with a philosophy background. But I’m open to anything.
I would consider a collaboration on monetary policy. I would be interested in taking the side – broadly defined – that monetary policy is effective at stabilising nominal demand (inflation, nominal GDP etc) and (optionally) that it should do so. I would prefer to collaborate with someone with a background in economics (some higher education preferred).
If someone were interested, I would want them to sketch out their opposing view relatively clearly before we began, to ensure I think their opposing view is at least plausible.
I’m interested in a collaboration on standardized testing. My current position is that the use of standardized testing in admissions, hiring, etc. should be greatly expanded, since relative to alternatives (e.g. job interviews) tests are cheap to administer, fairer, and more accurate.
Email: [username]@gmail.com
I don’t think I have time for an adversarial collaboration, but I would question an increase in the use of standardized testing in the case I know something about: Swedish university admissions. Right now these admissions are based on GPA from upper secondary school (or equivalent) and, for those who take it, the Swedish SAT.
GPA works well and the SWESAT does not. Or, more to the point, the SWESAT does not add more information if you already have GPA, but GPA adds information on top of the SWESAT.
GPA reliance has serious issues, especially in the Swedish system with its weak control of differing grading practices and fully voucher-based school competition. Therefore a recurring suggestion is to move university admissions to some other kind of standardized test. Since the limitations of the SWESAT are well known and lie not in the craft of test-making but in its general-content nature, the route for admissions would be several subject-specific tests.
That would create many more problems than it would solve, in my eyes, at least if it were used for a high percentage of admissions.
I think a much more promising route would be better control of grading practices: partly with help from standardized testing already done during upper secondary school, partly through register-based penalties for upper secondary schools whose graduates perform badly at university (relative to their GPA), and partly through normal regulation.
I’m Robert McIntyre, CEO of Nectome where we’re building advanced brain banking technology. I believe that modern neuroscience overwhelmingly supports the idea that long-term memories can be preserved (in an information-theoretic sense) by chemical fixation. If you feel differently and have a PhD or equivalent experience in neuroscience, information theory, or biochemistry, or another related field, email me at r [at] nectome.com. I’d love to do an adversarial collaboration with you.
More info/background: https://nectome.com/the-case-for-glutaraldehyde-structural-encoding-and-preservation-of-long-term-memories/
How would the competition work if the essays aren’t published at slatestarcodex before the vote takes place but those voting should be slatestarcodex readers?
I assume there would be a post linking to all the essays, so the “not being published” part just means “not getting their own post with the full text on-site”.
Hi, I want a partner for a collaboration on voting methods (possible specific propositions below). I am a statistics PhD student (defending in late September 2019; obviously, serious work on this collaboration would only begin after that) and a known theorist/activist in the amateur election reform community. I would prefer to work with someone else who has enough experience in the field to have justified confidence that they disagree with one of my statements below, so that I won’t just steamroll you; I’d estimate there are only hundreds of such people in the world. Email me at firstname dot lastname on google’s public email platform.
The issue is that my views are nuanced, so it’s hard to make a single proposition on which a substantial amount hinges. I’ll pose several statements I agree with, followed by the initials of people I know who disagree with them and with whom I think I could do this collaboration. If you’re active in the voting methods theory community, you’ll probably recognize the initials.
1. Single-winner ranked choice voting (instant runoff) does not live up to the benefits claimed for it. Specifically, if implemented in the US, it would not favor centrists, substantially reduce polarization, or lead to third parties winning more than 10% of seats in Congress within 10 years (95% credibility; thus, my median prediction would be under 5%). People who disagree with me on this with whom I could collaborate: RR, DS, LD. Empirical basis: contemporary and historical uses of IRV/AV/RCV
2. Single-winner voting methods in general are less promising than proportional representation. We have reason to believe that a broad class of utilitarian agent-based models, which includes basically all realistic models that aren’t too complex to make any predictions about, will favor the latter. Possible collaborators: CS. Empirical basis: modeling, statistical proofs, and principled arguments about why those proofs do/don’t apply.
3. Looking to the future, multi-winner RCV (that is, STV) is not the most promising proportional representation reform for the US. Its substantial advantage in terms of mind-share among activists is cancelled out by disadvantages in terms of political viability. Possible collaborators: any of those whose initials are above, or AH, or others. Downside: this is the least-empirical, and that’s not because the others are particularly empirical. However, it’s also IMO the most important and the one on which I think my own views are in the smallest minority, so it’s the one I’d be most interested in pursuing.
I’d also be willing to workshop other propositions like the above.
(ps. I suppose I could work with JS or AH on whether federal reform should come before state reform. But that has the downsides of option 3 without its upsides; the only upside would be the delight of working with JS.)
I’d like to oppose Bostrom’s simulation argument, specifically his implicit assumption that if there are N substrates for your mind in the multiverse, then you’re a specific one of them with probability 1/N. My intuition says that the substrates should be weighted by entropy or occupied memory.
Perhaps the Everett worlds are the easiest example to see why Bostrom’s counting might have a problem. Assume that I toss a quantum coin; if it gives me 0, I give you a King card. If it gives me 1, I toss another one and give you a Queen for 0 and a Jack for 1. If you haven’t seen how many times I’ve tossed coins and you haven’t seen your card yet, what’s the probability that you have a King?
If you want to calculate it the Bostrom way, you see the timeline split into three Everett worlds, each of them containing a version of you, and only in one of them you’ve got a King. Thus it seems the answer should be 1/3. But of course it’s 1/2. (Is there somebody who really believes otherwise?)
If you want to channel your inner Bostrom and steelman the simulation argument to me, write me at eige[at]tutanota[you-know-what]com.
I suspect that you’re wrong, for an interesting reason.
First, my qualifications: I can’t currently do the math for general quantum mechanics, but I can do enough math to have understood Shor’s algorithm, and I think my math foundations are strong enough that I could learn general QM in about 6 months. In my experience, when I have mathematical intuitions that are approximately as well-founded as the one I’ll state below, and then look into it deeply enough to resolve the issue, the three outcomes “I was wrong”, “I was kinda right”, and “I was right” have roughly comparable empirical probability. (Note that by conditioning on me having resolved the issue, I may be introducing bias). In other words, from an external point of view, my intuition here should be counted as only very weak evidence, though not quite entirely useless.
So in this case, my intuition is that in general, it is not possible to “toss a quantum coin” without it being somehow entangled with outside-of-the-box quantum states, and that this means that the (countable but intractably large) integer number of distinct possible world-states where person B has a King is roughly equal to the sum of those states where they have a Queen or Jack. In other words, if it were possible to simply count distinct quantum states, you’d get the right answer.
Quantum states do not evolve independently until a measurement splits them up. It seems you’re saying that it’s a valid move to slice a brain into quantum states and count those instead even though the slices still affect each other.
But then I wonder how else you can slice a brain. What if we assign every second atom of a brain into slice 1, and the rest into slice 2? It would seem that a brain with twice the mass counts twice in the same sense as a brain with twice as many quantum states counts twice. Do you believe that though?
Is this really how Bostrom calculates it? You say “implicit assumption”, but how sure are you that he is actually assuming this? Because this seems to me to be the wrong way to do the calculation, even if you’re counting Everett branches.
It should be like this:
There are many Everett branches (not 2 or 3, but anywhere from 4 to much more than that). In half of them, I have a King. In a quarter of them, I have a Queen. In a quarter of them, I have a Jack.
Then you get p=0.5.
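If it helps to make the arithmetic concrete, here’s a minimal simulation of the card game as described (a sketch, nothing specific to Bostrom’s text; it just reproduces the ordinary probability weights):

import random

# Toss a coin: 0 -> King; 1 -> toss again, then 0 -> Queen, 1 -> Jack.
def deal():
    if random.random() < 0.5:
        return "King"
    return "Queen" if random.random() < 0.5 else "Jack"

trials = 1_000_000
kings = sum(deal() == "King" for _ in range(trials))
print(kings / trials)  # ~0.5: the Queen and Jack branches each carry weight 1/4, not 1/3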
Yeah, I also got some mail saying that Bostrom isn’t that stupid 🙂
I shouldn’t have used the term “implicit assumption” because Bostrom writes a lot and he could make that explicit somewhere.
I think you’re using the term “Everett branch” incorrectly; see also my answer to the previous comment. Worlds only branch when something thermodynamically irreversible such as measurement occurs. If the timeline would come pre-split into independent branches, there would be no probability amplitudes, just probabilities.
There are a couple of objections to your quantum mechanics, but (putting aside the question of whether or not they’re sound) they don’t seem to be objections to your argument about how different substrates should be weighted. That said, your intuition seems sound to me. If you’re getting emails saying Bostrom isn’t that stupid, I’d like to know what he actually thinks.
Since the mail was not from Bostrom, we might never know 🙂
But my guess is that his principles include a clause like “… as long as no other information is available” and situations like those are swept under this rug.
I’ve found a partner, so hopefully in November we’ll know what Bostrom thinks and how sound my intuition is.
I’m looking for a collaborator on the subject: Is large-scale Islamic immigration compatible with the maintenance of liberal democracies? Subsidiary questions under this involve:
1. What measures define liberal democracies? I am thinking of measures such as freedom of speech, equality of the sexes etc.
2. What defines ‘compatible’? For example, if women’s equality goes from an 8 to a 7, that’s bad, but not necessarily incompatible. If it goes from an 8 to a 2, that’s another matter.
3. To what extent does Islamic immigration import illiberal ideas?
4. To what extent does Islamic immigration empower native illiberal ideas? (here I’m thinking of the political changes throughout Europe at the moment).
In the last A. C. this was tackled only in Islamic countries. However, what concerns people mostly is whether or not Islam is compatible with liberalism _in the West_. So this is worth revisiting. I’m particularly interested in collaborating with someone who thinks Islamic immigration IS compatible with liberal democracy, and who can add the necessary rigour to keep my writing in check.
Scott, please feel free to shoot me down if this is taboo.
Were you one of the authors of the last Islam-related AC? I noticed when I was reading that paper that it really lacked any kind of control group for the study countries. It seemed other commenters felt the same way.
I wish you luck in finding a partner, but I hope you’ll reconsider your methodology this time around.
Nope, I wasn’t. That’s particularly why I want to do this – and I’ll take your advice on board. If you have more, keep it coming 🙂
Hi, I want a partner for a collaboration on whether or not to call American migrant detention centers concentration camps (I think yes). I write things online mostly anonymously and I am therefore in no place to put conditions on who you are; besides, I have contempt for you just because of your position, but I believe your idiocy is so farcically self evident that I can work with you to put your racist “still calling wolf” ass on display. I will work hard at making sure you get a platform provided I’m on that platform too: I will be serious about producing a decent essay. All you gotta do is not be a coward in the face of my actually held beliefs about your character, authoritarian temperament, and so on (if a racist is racist I want to believe the racist is racist, but I can work with racists.)
Hit me up at impassionata[at]protonmail[dot]com
I’m just curious, is your argument that we should call the detention centers concentration camps because (A) this is technically a more accurate term than detention centers, or (B) the term “concentration camps” is more politically useful than “detention centers” to some political goal you expect your collaborator to share?
Or do you plan to make some third argument I didn’t mention?
Mostly A, in the context of (extensively studied) slides into fascism over the course of history. What we’re seeing is not new or unique, but… rationalists have a special kind of arrogance.
I’m surprised. I didn’t expect you would argue anything so mundane as (A). Besides, your explanation sounds more like (B). But if you are really going with (A)…
“A facility into which people are concentrated” could describe lots of things — jails, rescue shelters, etc. Maybe even children’s daycares. So what?
The detention centers (sorry, “concentration camps”) themselves have nothing to do with race, and I’m pretty sure you can find people of all races in them. As I understand it, they’re holding facilities where the government houses people who’ve attempted to illegally cross the border, while they’re being processed by our immigration system to either be let in or sent back home.
The facilities are not always managed very well I suppose, but even if they’re consistently managed terribly you’re facing a serious uphill battle if you mean to prove that racism is the motivating factor for calling them by a different term than what most people associate with the extermination facilities used by Nazis during the Holocaust.
If saying “detention centers” proves you’re a racist (presumably against nonwhites), how would you respond to nonwhite people who deliberately use that term?
Am… am I being negged?
Okay, this might be a dangerous place to get into this one, but:
A hard-takeoff situation where an artificial general intelligence (AGI) becomes rapidly more “super-intelligent” to an exponential and uncontrollable scale in a period of minutes to months is almost certainly impossible. AGI in general may or may not be possible, but it is far enough away that “working on AGI so it can help us solve a difficult problem” is not an effective alternative to working on those problems directly.
My background – I have a math background (but only a bachelors) and work as a programmer, but I don’t have a specific computer science background. I don’t have access to paywalled research papers, and my arguments will generally not be mathematical ones, although some numeric insights from complexity science will likely be necessary. I would like to collaborate with anyone who would agree with the statement “I think a hard-takeoff situation is likely enough that it is worth expending significant resources to explore mitigation strategies”, especially someone who knows multiple sources that feel this way and feels comfortable synthesizing their arguments. (Oh, but I plan to just use Google Docs for the collaboration, so if you legitimately fear Roko’s Basilisk you would probably not feel safe working with me, since there will exist a digital record of your belief in hard-takeoff super-intelligence.) In addition to reading whatever summaries you produce, I am willing to read ~5 essays/articles/papers and ~2 books in favor of your position, and ask that you extend a reciprocal courtesy.
I don’t want to get into the argument in these comments, but it’s probably worth previewing where I’m coming from so you know if it feels “worth your time”. I largely agree with Adnan Darwiche’s paper on human-level intelligence vs. animal-like abilities, and I think that the jump from the latter to the former will be stymied by the fact that there are so many ways to be “intelligent” that it’s impossible to “optimize on intelligence” without invoking very domain-specific scoring functions, in the same way that a car that goes 10,000 mph on an open track would still take just about as long as any other car in going from DC to LA. I also think that the finding that our universe is inherently incompressible (we can’t simulate our universe within our universe), and the outsize impact of tail risks on so many domains in life, mean that there’s a hard limit on “how intelligent” it’s even possible to be. Essentially, I think that the world is just too complex, and machine intelligence will hit the same roadblocks humans do as it attempts to digest more and more of the world at once, instead of retaining the machine-like scales and efficiencies computers enjoy in bounded domains. Additionally, any argument on AI takeoff relies on self-modification of its own code, at which point it falls prey to the Santa Fe argument. :’)
The second statement is a lot less strong, because I do think AGI is probably possible at some point in the future. I do think that a number of breakthroughs in information encoding and scoring functions that properly account for tail risk are hard prerequisites for AGI, and I hope that this is a non-weaselly way to say “AGI is maybe possible, but currently still ‘far away’” by translating it into a hard boundary. There are probably eventual societal returns to researching AGI, but I think that currently, a statement like “I think climate change is an existential threat and I want to dedicate my time to researching mitigation strategies; additionally, I think that building an AGI and having it think about climate change is the most effective way to do this” is nonsensical.
collin dot lysford at gmail dot com.
I posted this last year, but I’d still be interested in reading an in depth, multi-perspective discussion on the US vs more liberal, larger safety net European countries in terms of social services, quality of life, economic well being etc.
I’d also be interested in hearing from people on opposite sides of the political spectrum talk about Venezuela, and how it’s either the inevitable progression of socialism or not really true socialism at all.
I think either of those could be good opportunities for people on opposite ends of the spectrum to collaborate on. They seem less “persuasive essay topic” and more “here’s a summary of the facts” type topics.
I am looking for a partner to collaborate on a paper on the topic of whether implicit racism/unconscious racism exists and whether it is a common phenomenon in the USA today. My position is that there is no evidence of unconscious racism among any group in the USA among people who do not consciously hold racist viewpoints.
In my experience, almost all research on the topic of implicit racism comes from the USA, but if I am incorrect about this I would be willing to expand the paper’s coverage based on the research that is found.
I have an M.S. in mathematics with strong background in statistics. As far as the social sciences I am strictly an amateur but I’ve read a lot of academic articles in psychology and related fields. I am willing to work with anyone regardless of qualifications.
Email: fargomath[at]gmail[dot]com
I’m late to the party, but I’m interested in collaborating on the following point:
Does the high-tuition, high-financial-aid model of college costs do a good job of helping students/families with limited resources afford college?
My opinion is no: between students being scared off from applying by the sticker price and the complexities of the financial aid system, this model hurts more than it helps. I’m mostly coming at this from the perspective of public universities, since I spent a long time in one for graduate school, but I’m interested in exploring how this model works for private universities as well. I’m potentially also interested in other topics about college/higher education. I have a background in environmental science and statistics, so I’m familiar with research and digging through the academic literature, but I have no professional experience working in the social sciences. I’m happy to work with anyone.
Email: pslyndon[at]gmail[dot]com.
I can’t sign up for this one, but here are the bones of the alternative argument. (Not sure if I agree, but I’d like to see you address it.)
1) High tuition, high financial aid is in line with what a successful college education is “worth” in terms of increased earning. It’s also in line with what a college education “costs” in terms of the resources required to provide the education under the current model.
2) It has the effects you might expect from a system that transmits accurate price info. It drives some students to lower cost alternatives such as community college/living at home, etc., and it discourages some students who have doubts that the college education will produce enough increased earning capacity to pay for the debt.
3) Without some strict gatekeeping policy like you hear about in Germany/Japan/etc., wouldn’t we expect that lowering the cost of college below its actual level would lead people who aren’t confident their college education will increase their earning capacity to select more costly (to the taxpayer) educations?
4) In a world where only half of students are going to college, subsidizing college means that you have less money available for services to the half that don’t go to college. Is it right to take services away from those people to encourage college education among people who don’t expect to put their college education to use earning money?
These are interesting points. Here’s how I’d respond at the moment – although I haven’t done a ton of research on this topic yet, so my responses might change as I look into this more:
0. I think I’m examining this from a different perspective. Most of these arguments are at a society-wide level, i.e., is the high-fee high-aid model good for society? The question I’ve been thinking of is: does the high-fee high-aid model do what colleges say that it does, of directing limited resources to those who need it most?
1. I think I’d push back on the second part of this argument, that high sticker prices are necessary because that’s what college costs to provide. First, if a large fraction of the student body is receiving financial aid, then the effective tuition per student is a lot lower than the sticker price. Second, and I know this is something that’s been discussed a lot on this blog, but I’m not convinced that the high costs of college are a cause of the high fees rather than the reverse.
2. I think I agree with this point, but I think it supports my larger argument. There will be some people who would not qualify for financial aid whom the high fees will drive to lower cost alternatives, but there will also be a number of people who would qualify for financial aid who are scared off by the sticker price. Financial aid decisions are made quite late in the college application process, and even then, generous help in year 1 doesn’t guarantee generous help in year 4.
3/4. Both of these questions presume that the only way to lower college costs is increased subsidies. I disagree with this premise. I think a combination of reducing/eliminating financial aid and reducing services would go a long way to lowering the cost of college.
Thanks, and very interesting.
I’m really interested in this one. I’d argue against it. It created a climate of fear that destroyed the possibility of real peace and created a generation that grew up in fear, strengthened nationalism, and caused wars and conflict to this day, even beyond the fears of nuclear annihilation. I wrote several papers on related topics in college (including the effects on children in the 50s) and my thesis was on the effects of atomic espionage on international relations in the early cold war.
I would like to do an adversarial collaboration where I argue that Climate Change is an urgent threat to human survival & thrival, as quantified by the following (falsifiable!) claim: By the end of the 21st century, climate change will rank among the top 10 causes of premature death globally. This would include deaths due to extreme heat, natural disasters, variable rainfall patterns (including the resulting malnutrition), and changing patterns of infection, all measured as increases over levels in 2000.
– Background: I have a master’s in a related topic. Would prefer at least degree-level knowledge.
– My availability is pretty limited, especially in August
– There’s plenty of ways people disagree with the above statement, and I’d be interested in hearing yours! I’ve thought reasonably carefully about my specific claim, though, and am probably not interested in sidetracks – e.g. costs vs benefits of climate change mitigation.
Email: Truculent.Hyacinth at the mailservice hosted by google.
“Among the top 10” seems really weak, and also really subject to lumper/splitter semantics. Why not name a percentage?
Fair enough. That might have been a better metric, because it reduces complications from changes in the distributions of all other causes.
Currently, it seems like the percentage equivalent would have to be around 2.4% of premature deaths for it to make the top 10.
Wouldn’t you need to bake in the probability of a technological breakthrough that allows massive geoengineering or carbon fixation, and is there any reliable methodology on that?
I’m also not super confident in predicting what the other causes of premature death are going to be in eighty years. (I’ll bet on suicide being one of them, but I’m not very confident in predictions that far out).
My personal gut feel is I’d say it’s likely to be in the top 10 by 2050, but I think it will be easy to argue for 2100 as a weaker case.
Yes, you need to deal with mitigation inasmuch as it may invalidate the potential severity of climate change, but I’m a pessimist on that front – I think we can and should do a lot, but a lot of effectively irreversible damage has already been done, in a way that has implications for mortality. I realize this is contentious, but that’s part of the challenge/fun for me!
I would like to collaborate on an essay about Venture Capital. Broadly speaking, my position is more positive than most articles I’ve seen. More specifically, I would like to argue that Venture Capital is aligned with social needs, and that claims about it being a “shell game” are overblown, or even more simply that the Social Capital Annual Letter is wrong.
I would be happy to take the opposite side of other common criticisms as well, but should note that I don’t have any interest in defending harassment or homogeneity.
[username] at gmail
Hi, I am looking for a partner for a collaboration on any of the following medical models: free radical theory of aging, plaque theory of Alzheimer’s, somatic mutation theory of cancer, link between sun exposure and skin cancer, assumption that only vertebrates have adaptive immunity. I would like to argue against all the aforementioned. My academic background is in biochemistry and molecular biology. Email me at nitajain8@gmail.com
Hi, I am looking for a partner to collaborate on the utility of gene editing technologies, such as CRISPR-Cas9. I would like to argue against the use of such technologies. My academic background is in biochemistry and molecular biology. Email me at nitajain8@gmail.com