[ACC] When During Fetal Development Does Abortion Become Morally Wrong?

[This is an entry to the 2019 Adversarial Collaboration Contest by BlockOfNihilism and Icerun]

Note: For simplicity, we have constrained our analysis of data about pregnancy and motherhood to the United States. We note that these data are largely dependent on the state of the medical and social support systems that are available in a particular region.

Introduction: Review of abortion and pregnancy data in the United States

We agreed that it was important to first reach an understanding about the general facts of abortion, pregnancy and motherhood in the US prior to making ethical assertions. To understand abortion rates and distributions, we reviewed data obtained by the CDC’s Abortion Surveillance System (1). The Pregnancy Risk Assessment Monitoring System (PRAMS), Pregnancy Mortality Surveillance System (PMSS) and National Vital Statistics datasets were used to evaluate the medical hazards imposed by pregnancy (2, 3, 4). Finally, we examined a number of studies performed on the Turnaway Study cohort, maintained by UCSF, to investigate the economic effects of denying wanted abortions to women (5, 6, 7, 13).

Abortion rates by trimester and maternal age: According to data collected by the CDC, 638,169 abortions were performed in the United States in 2015. Data were received from 49 of 52 reporting areas, suggesting that these figures are likely close to the true population rates. This was equivalent to 188 abortions per 1,000 live births, a 24% decline from 2006. Of these abortions, approximately 65% were performed before 8 weeks of development and 91% before 13 weeks; an additional 7.6% were performed between 14 and 20 weeks. Approximately 90% of abortions were performed on women older than 19, and among women 19 and younger, those aged 18-19 accounted for 67% of abortions. By race, non-Hispanic black women were most likely to undergo an abortion (25 per 1,000 women), while non-Hispanic white women were least likely (6.7 per 1,000). This translates to a rate of 390 abortions per 1,000 live births in non-Hispanic black women and 111 per 1,000 live births in non-Hispanic white women. (1) These data show that most abortions are undertaken before the end of the first trimester, that most women choosing an abortion are adults, and that non-Hispanic black women are disproportionately likely to choose an abortion.

Mortality and morbidity associated with abortion and pregnancy: On average, there were 0.62 fatalities per 100,000 legal abortions between 2008 and 2014 (six reported fatalities in 2014). For comparison, there were 17.2 pregnancy-related fatalities per 100,000 live births in 2014. These data suggest that an abortion is generally safer than attempting to carry a child to term. It is also important to consider the racial disparities within these data: for example, African-American women were roughly three times as likely to die as a result of pregnancy as non-Hispanic white women (42.8 vs. 13 per 100,000 live births). The reasons for these disparities are unclear. (3)
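
To make the mortality comparison concrete, here is a back-of-the-envelope calculation using only the figures cited above. Note that the two rates have different denominators (legal abortions vs. live births), so this is a rough comparison, not a precise one.

```python
# Rough comparison of the mortality figures cited above. The denominators
# differ (per 100,000 legal abortions vs. per 100,000 live births), so this
# is only a back-of-the-envelope estimate.
abortion_deaths_per_100k = 0.62   # deaths per 100,000 legal abortions, 2008-2014
pregnancy_deaths_per_100k = 17.2  # pregnancy-related deaths per 100,000 live births, 2014

risk_ratio = pregnancy_deaths_per_100k / abortion_deaths_per_100k
print(f"Carrying to term carries roughly {risk_ratio:.0f}x the mortality "
      f"risk of a legal abortion")  # -> roughly 28x
```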

Pregnant women are also at risk of severe morbidity associated with pregnancy and delivery, with approximately 50,000 women experiencing at least one severe complication in 2014, a rate of roughly 140 per 10,000 deliveries; approximately 1.2% of live births resulted in severe maternal complications. Women can also experience significant psychological morbidity after pregnancy: about 1 in 9 women who deliver a live infant develops postpartum depression. We were unable to find CDC data on morbidity resulting from abortion procedures; however, one publication reported that approximately 2% of abortions result in a medical complication. As these data did not discriminate between minor and severe complications, it is reasonable to assume that abortions result in a lower overall rate of severe complications than pregnancy. We will make the further assumption (an educated guess) that late-term abortions are riskier than early-term abortions. (2)

From these data, we conclude that pregnancy and delivery pose a significant risk to the mother’s health, and that these risks are greatest for African-American and Native American women. By comparison, abortion appears to pose a much lower risk of death, and probably a much lower risk of morbidity. Mothers, in short, face unique and substantial hazards imposed by pregnancy.

Comparison of pregnancy-associated risks and other common risk factors: It is difficult to compare the risks of pregnancy with other hazards because they are measured against different denominators (per live birth vs. per person). However, a naive reading of the available data suggests that, while pregnancy is relatively unlikely to lead to severe consequences, its risk is comparable to that of other common activities. For example, the mortality rate associated with motor vehicle accidents is 12.1 per 100,000 people, which is similar to the rate of maternal death per 100,000 live births for women in the US (16.7). (8)

An alternative approach is to examine how pregnancy, childbirth and post-pregnancy changes affect overall mortality. According to the National Vital Statistics Reports (Volume 68, covering 2016), pregnancy and childbirth was the 6th leading cause of death for women (all races and ethnicities) aged 20-24 and 25-29, accounting for 652 deaths in the two groups combined, and the 10th leading cause of death for women aged 15-19 (28 deaths). These data indicate that pregnancy is a leading cause of death among women of childbearing age. (4)

Socioeconomic costs of unwanted pregnancy: The socioeconomic effects of abortion denial have been studied extensively in the Turnaway Study cohort at the University of California, San Francisco. One study of this cohort found that mothers who were denied a wanted abortion due to gestational age were significantly more likely to be unemployed, to live in poverty, and to use public assistance programs like WIC. (6) Another study found that the already-born children of mothers denied an abortion were significantly more likely to live in poverty and to fail to meet developmental milestones. (5) Mothers who were denied abortions were also less likely to set and meet aspirational goals. (7) These data indicate that women who receive wanted abortions experience significantly less socioeconomic strain than women who are denied them.

Adoption vs abortion: Adoption is commonly suggested as an alternative to abortion, and it does eliminate the direct socioeconomic burdens of parenthood. In practice, however, adoption is rarely chosen: in the U.S., there were approximately 18,000 adoptions compared with nearly 1 million abortions. A recent article in The Atlantic did an excellent job of summarizing potential reasons for the discrepancy. Adoption obviously does not alleviate the physical burdens and hazards of pregnancy. Additionally, several studies suggest that women forgo adoption out of concern about the emotional toll of giving away a child. Pro-adoption groups also argue that both pro- and anti-abortion advocates fail to emphasize adoption or to properly counsel women on it as an alternative. (9)

Who are the stakeholders in the abortion question? The mother, the father, the fetus, and society at large. The mother’s unique interests are her safety and health, the development of a unique bond with a new human life, and the economic, emotional and physical burdens of motherhood. The father, if held responsible, shares the economic and emotional burdens of parenthood. The fetus, once it has developed the fundamental features of a human being, has at least a theoretical interest in preserving its life. Society at large has an interest in justice and in preserving the rights of its members, if only out of the self-interest of the individuals within that society. At some point, a fetus comes to be considered a member of that society, with the same rights as all other individuals. The point of conflict therefore arises when a mother (or both parents) desires to terminate a pregnancy prior to delivery.

The question: At what point during development does abortion become a moral wrong? 

Starting positions: At conception (icerun), At fetal viability/minimal neurological activity (BlockofNihilism)

icerun’s Position: A Future Like Ours: Conception

Many arguments for and against abortion pick out a characteristic of the fetus (its size, level of consciousness, ability to feel pain, etc.) and go on to argue why this characteristic, or its absence, gives or denies the fetus a right to life. Unfortunately, these characteristics tend to have accidental byproducts: they may extend the right to life to sheep or remove it from infants. The Future Like Ours argument instead begins by determining what best accounts for the wrongness of killing people like you and me (whom people on both sides of the abortion debate agree it is wrong to kill), and then uses this standard to determine whether it is wrong to kill a fetus (which is the contested case).

A Future Like Ours
In “Why Abortion is Immoral,” Marquis argues that killing someone like you or me is prima facie wrong because the deceased is robbed of a valued future like ours. (10) Killing most directly and significantly harms the one who is killed.

The harm to the deceased is the loss of her valued future, which would have included all of the experiences, relationships, and works that were valuable for their own sake or as means to something valuable. She loses not only those parts of her future that she valued in the moment but also those experiences, relationships, and works that she would have come to value as she grew older or is not currently aware of. A 16-year-old may not value parts of his future, whether a career, a family, or woodworking, but had he been allowed to develop, he may have come to value them.

In summary, it is wrong to kill somebody like you or me because it robs them of a future like ours. The value of a fetus’s future consists of the experiences, relationships, and works that the fetus values now and those that it would come to value. A typical fetus cannot currently value its experiences, relationships, and works, but as it develops it will come to have the same kinds of experiences, relationships, and works that we do. Therefore, a fetus has a future like ours, and by this standard it is wrong to kill a fetus from the point of conception. (For the record, Marquis does not claim it is wrong to kill a fetus from the point of conception; however, this seems to be the implication.)

Intuitions: The future like ours argument works from assumptions shared by pro- and anti-abortion proponents. In doing so it both avoids deriving an ought from an is and creates common ground. An account of the wrongness of killing humans must fit within these intuitions: it must explain why it is wrong to kill typical adult humans, infants, and those who are suicidal, but not wrong to kill typical sperm, eggs, and some animals. However, our intuitions differ on whether it is wrong to kill a typical single-cell zygote. We both intuit that it is not wrong to kill a typical zygote, but BlockofNihilism holds this intuition strongly while I hold it weakly. Many anti-abortion advocates have the opposite intuition.

For BlockofNihilism, the future like ours argument violates his strong intuition that it is not seriously wrong to kill a zygote, and so the argument fails. For me, it violates only a weak intuition; while that is not enough on its own for the argument to completely overcome the intuition, it holds the strongest sway over my view on abortion, as it offends the fewest intuitions and is more coherent than most other arguments.

BlockofNihilism’s Position: Conscious Perception and Viability

Abortion is morally acceptable until the fetus develops the structures required for perception of external stimuli, with exceptions for preserving the life and health of the mother. Abortion is acceptable because a fetus does not experience conscious suffering “like ours” and simultaneously imposes a significant physical, mental and economic burden on the mother. As the minimum requirements for conscious perception are actually met after fetal viability, I suggest we fall back on viability as a compromise ethical barrier to abortion.

When does the fetus develop “conscious” perception? By conscious perception, we mean perception that a human person would recognize as their own. Obviously, this question pushes the limits of our powers of description. As perception is a complex topic, I will use the perception of physical pain as an example of the requirements for conscious perception. Pain, too, is a complex psychological phenomenon that arises at the intersection of physical sensation and emotional constructs. At the minimum empirical level, certain neurological structures are necessary, but not sufficient, for the perception of pain. Thus, until these structures are present and active, perception (as we understand it) cannot occur. (11)

To experience pain, afferent nerves must synapse with spinothalamic nerves projecting to the thalamus, which in turn connect to thalamocortical neurons projecting to the cortex (the region of conscious experience). Thus, all three components (peripheral pain sensors, thalamic projections, and a functioning cortex) must be active for the perception of pain. Based upon multiple studies, nociceptive neurons develop around 19 weeks, thalamic afferents reach the cortex at 20-24 weeks, and somatosensory activity evoked by thalamic activity is detectable around 28-29 weeks. Several behavioral studies have found that at 29-30 weeks of development, fetal facial movements in response to pain resemble those of adults. However, these results have been contradicted by other studies, and these findings may represent involuntary, unconscious responses to stimuli rather than the conscious perception of pain. (11)

In any event, a fetus does not have the neurological structures required for what we would recognize as the conscious experience of pain until at least 29 weeks of development, three weeks into the third trimester. (11) Prior to full integration of the various components of the nervous system and the development of an active cortical system, the pain experience of a fetus would likely be akin to that of a comatose individual: no conscious experience at all.

As other types of experience require these same structures to be active, we can conclude that a fetus does not have the minimum capacity for conscious experience until approximately 29 weeks of development. Thus, when considering an abortion prior to this stage of development, we are balancing (1) the harms posed to the mother, a conscious agent, against (2) the interests of an entity that does not “experience” anything. To me, this suggests that abortion is permissible up to this point.

The fetus is truly viable at ~27 weeks: With intensive care, a preterm neonate can survive delivery as early as 24 weeks of gestation. However, survival rates at that point are approximately 50%, and these severely preterm neonates are at significantly increased risk of a variety of both short- and long-term complications. By 27 to 28 weeks, the fetus can be delivered and will survive in most cases without major interventions. So true fetal viability and the development of the fundamentals for conscious experience are roughly concurrent, with viability likely being reached slightly before conscious experience.

Potential harms and viability: Viability means that the fetus no longer requires the mother’s body to survive. Within the womb, the fetus imposes both a significant immediate burden and the potential for significant harms; once safely delivered, these harms are no longer present. While the mother is still on the hook for the economic and emotional burdens of motherhood, her life is no longer at risk. Also, while adoption is a possibility after birth, it is obviously not an option prior to delivery. Viability therefore represents a special moment in the development of a fetus: it can live without posing a significant hazard to the mother’s physical well-being. While we could not find solid evidence (likely due to the very low number of late-term abortions performed), my educated guess is that an abortion at this late stage is approximately as dangerous as a natural delivery or C-section. Consequently, at viability, it is reasonable to treat the fetus as having full human rights and to intercede to protect its life.

icerun’s rebuttal to BlockofNihilism:

Viability: The only difference between a viable fetus and an infant is location, which is not a moral distinction (except in cases of direct harm to the mother); therefore a viable fetus must be seen as having the same right to life as an infant. The chain would seem to continue: the primary distinction between a viable and a nonviable fetus is that a nonviable fetus’s survival depends solely on one person (the mother), whereas a viable fetus’s survival can depend on others. This does not appear to be a moral distinction either, so the viability argument is closely related to the argument that the fetus gains the right to life at birth or upon becoming an infant. A viable fetus, then, has the same right to life as a newborn; but without further reasoning, it seems likely that a fetus gains the right to life even earlier.

Experience: BlockofNihilism argues that since a fetus before 29 weeks is not capable of conscious experience, it is not capable of suffering, and therefore it is not wrong to abort it. However, there are times when adult humans are not conscious, and, in the case of the temporarily comatose, are even unable to achieve consciousness; because they are not conscious, they are not capable of suffering. This argument seems to allow the killing of sleeping and temporarily comatose humans, so long as they do not suffer, feel pain, or realize what is happening in the moment.

Further, an adult would likely not recognize the consciousness of a fetus as his own. It is unlikely that a fetus or infant has a sense of self, and both seem to operate at a significantly lower level of self-awareness. Though we do not have a good understanding of the level of consciousness a fetus holds, a dog appears to operate at a higher level of consciousness than a fetus, though this is very speculative.

For these reasons, the experience of suffering is not what makes it wrong to kill a fetus or human.

BlockofNihilism’s Rebuttal of the Future Like Ours Account:

Consciousness-based and FLV-based arguments arrive at the same place: For me, any ethical argument that places the interests of a non-conscious entity incapable of experience above the interests of a conscious agent capable of both rational decision-making and suffering is intuitively absurd. Prior to developing the basics of neurological experience, the fetus represents the potential for a future life of value, or the potential to be a conscious agent. In either case, I do not believe that the potential outweighs the present!

I understand how the future life-of-value (FLV) argument can seem to apply to a fetus: We imagine the entity that will come from the fetus, imagine its potential for an FLV, and extrapolate rights from there. However, a fetus represents the potential for having a life of value and cannot be said to currently possess that future in the way implied by Marquis. I believe the intuitive appeal of the future life-of-value argument arises from our experience and knowledge of what a “future” constitutes. However, fetuses prior to their development of the basic neurological structures required for experience cannot have or value their “future.” 

My interpretation is informed by Boonin’s famous critique of Marquis’ “future of value” argument. (12) According to Boonin, the intuitive value of the future can be found in the dispositional ideal present value of a future. A dispositional value or belief is one that someone holds but is not consciously thinking about; an ideal value or desire is one that would be held given full information about the situation. The dispositional ideal desire formulation is more parsimonious because it invokes only present desires, not potential ones. Thus, the wrongness of killing someone like you or me is the taking of a future like ours that they dispositionally, ideally, and presently value. Upon developing the neurological structures necessary for experience, a fetus can begin to (at least unconsciously) desire food, close touch, and a parent’s voice. The neurological structures necessary for these desires, and thus for meeting the minimum requirement for having an FLV, develop near or at the point of viability.

Consciousness-based accounts do not allow for murdering sleeping people! There is a clear distinction between an entity that has had the experience of consciousness (a sleeping or temporarily comatose individual) and an entity that has never been conscious. A sleeping person still has her memories, desires and agency encoded within her brain; the fact that she is temporarily unaware of those attributes does not mean they do not exist! Conversely, a fetus prior to developing consciousness has no memories, desires or agency; it cannot yet be said to be a person. My argument is simple: prior to having the minimum requirements for consciousness, there is absolutely no chance whatsoever that a fetus can experience any harm like we (persons) do.

Once these structures are developed and active, it becomes far more difficult to determine “when” a fetus or infant reaches consciousness. At this point, I become squeamish at the prospect of destroying something that potentially does have a conscious experience (including a “future of value” concept) like ours. The moral calculus changes: instead of balancing a person’s interests (the mother’s) against a nonperson’s interests (the fetus’s), we now have a person vs. (maybe a person?). This is where, to be safe and prevent potential harms, we can draw a clear ethical line.

Preventing abortion prior to viability will cause significant harms: As previously discussed, substantial scientific evidence suggests that preventing wanted abortions will lead to harm. First, there would be a significant increase in morbidity and mortality associated with pregnancy. This increase would disproportionately impact economically disadvantaged and minority women. Second, women denied wanted abortions are significantly more likely to suffer socially, economically and psychologically. Perhaps most importantly, women (or both parents) are denied agency and denied the ability to make the ethical decision for themselves according to their unique circumstances and beliefs. 

Location, location, location! Viability represents the best point for ethical compromise: Terminating a fetus after it is capable of living “on its own” is equivalent to infanticide. In the special case of a fetus, location does have moral significance. The fetus, living within and dependent upon the mother’s body, poses immediate and potential costs and hazards to her. By contrast, once delivery has taken place, the fetus/neonate no longer poses these threats. While the mother still bears the significant economic and social burdens of motherhood, these burdens are unlikely to lead to immediate physical harm. And for the mother unable to cope with these burdens, adoption or surrendering care of the infant to the state is an option once a viable neonate has been delivered.

icerun’s Defense of the Future Like Ours Account

Capacity of a fetus to have a future: The fetus does not have a merely potential future, nor is the fetus’s future simply a concept in its brain. The future of a fetus consists of those unrealized experiences the fetus will have if its development is not impeded. Likewise, a 20-year-old will be a 25-year-old with experiences, relationships, and works if his development is not impeded. Sometimes a human’s development is impeded by natural causes, in which case we mourn the loss of a future, or by conscious decisions, in which case we mourn and try to provide restitution as best we can.

In fact, one’s future is most certainly not located in, or dependent on, the brain. A 4-year-old does not have a good understanding of what it is like to be a 60-year-old, yet being a 60-year-old is still a part of his future. If the 4-year-old is killed, he loses not only the relationships he understands as a 4-year-old but also a future that includes a career, or children, or whatever he would have found valuable and meaningful as an adult.

Boonin’s present-desire account fails: Marquis and Boonin account in different ways for the value of those parts of our future that we do not yet know (e.g., our future in 20 years). Marquis includes both our present valuation of the future and our future valuation of it, while Boonin argues for a present ideal desire for it. However, to have an ideal desire, one must first have an actual desire (if an actual desire were not required, then one could say that zygotes or trees have ideal desires).

Though the fetus can be said to have desires, these desires are unconscious. A conscious desire is willed and chosen to a certain extent, whereas an unconscious desire is simply the body doing what the body does; calling it a “desire” is a personification that is often helpful but, in this case, not relevant. The unconscious desire for warmth is simply the brain releasing chemicals in response to external states. Similarly, the zygote begins to multiply in response to external states, stem cells divide into different cell types in response to external states, and the heart begins beating in response to external states. The beating of the heart or the dividing of the zygote would seem to fulfill the requirement of some unconscious desire, and the fact that it is the brain responding to outside stimuli is not morally relevant: the fetus is no more aware of its desire for warmth than it is of the heart’s desire to pump blood. If conscious desires are required instead, then newborns and possibly older infants likely do not have a right to life, as they do not appear to have conscious desires or a sense of self. Thus, though Boonin’s account is more parsimonious, it fails because it does not grant infants a right to life.

Conclusions

icerun’s conclusion: The point at which the fetus gains the right to life is rightly contested and debated, as I do not believe there is any completely coherent and consistent argument that defines the point of development at which the fetus gains the right to life.

The latest possible point at which abortion may be permissible appears to be viability, where the sole difference between an infant and a fetus is location (one is inside the womb and dependent on a specific person; the other is outside the womb and can be cared for by others). However, abortion may be impermissible at an earlier point, and the point of viability does not appear to carry a moral significance that makes the fetus seriously wrong to kill.

In the end, though, I have not come to a solid position on the point at which it becomes wrong to kill a typical fetus, and, it is important to note, I have failed to provide a fully coherent argument. In making my decision on abortion, three items weigh heaviest:

First, in cases of consensual sex (excluding rape), parents hold a strong positive obligation to provide for and protect a child once it gains the right to life. This obligation derives from the facts that children have a right to life, that they require support to survive, and that the parents engaged in activities known to create humans. Second, the future like ours argument points to the fetus gaining a right to life at conception; though this goes against my intuitions, it comes the closest to providing a coherent and consistent argument. It is a model for understanding why it is seriously wrong to kill humans, and thus points to an earlier rather than a later point in the fetus’s development. My choice of this argument is likely biased by various intuitions that I hold; others would no doubt come to favor other flawed arguments based on their own intuitions. Third, there are situations where bearing a child brings significant problems for either the mother or the fetus, in which abortion appears the best option.

A mesh of all three, in light of the uncertainty about when the right to life begins for a fetus, perhaps leads to the stance that abortion should be safe, legal, and rare, investigated on a case-by-case basis that attempts to balance the weightiness of aborting a fetus against the practical costs and difficulties imposed on parents.

BlockofNihilism’s conclusion: If my ethical standard were adopted and used to change current practice in the US, it would allow a few more elective early third-trimester abortions than are currently performed. Otherwise, it would have little to no effect on the current situation, as most abortions are performed well before viability. I believe that communicating our knowledge about the pre-viability fetus, including its lack of internal conscious experience, would significantly reduce the potential for psychological harm to women who choose abortions. In contrast, if abortion after conception were prevented, there would be several negative consequences. There would be a significant increase in pregnancy-related morbidity and mortality, which would disproportionately affect minority and socioeconomically distressed women. The likely uptick in illegal abortions would increase the number of unsafe abortions, further increasing the risk of morbidity and mortality. Finally, the denial of wanted abortions imposes pronounced social and economic strains on new mothers and their families. These consequences are, obviously, of significant moral concern.

I remain convinced that abortion is acceptable prior to fetal viability. I believe that the intuitive appeal of the FLV argument is, as suggested by Boonin, not applicable to a fetus prior to developing the fundamental requirements for neurological experience. Even if we decided that the FLV argument pertained to fetuses, the fact that abortion pre-viability cannot cause conscious harm outweighs any potential for FLV that could result from a fetus carried to term. I believe (like Aesop) that a bird in the hand (the mother’s rights, interests and potential for harm) far outweighs a bird in the bush (the non-conscious potential person represented by a fetus).

Shared conclusion: Abortion is never a happy choice. Regardless of our ethical positions on the abortion question, we agree that new people are of tremendous value! Improving the delivery and efficacy of birth control options, increasing social support for mothers and parents, reducing pregnancy-associated morbidity and mortality, and increasing access to alternatives like adoption are all essential to reducing the number of abortions and any potential harms that arise from them. By focusing on these issues rather than on preventing abortions directly through legal or ethical edicts, we can make having a child a more reasonable and safe option than it is at present.

Works cited:
1. https://www.cdc.gov/reproductivehealth/data_stats/abortion.htm
2. https://www.cdc.gov/prams/index.htm
3. https://www.cdc.gov/reproductivehealth/maternal-mortality/pregnancy-mortality-surveillance-system.htm
4. https://www.cdc.gov/nchs/data/nvsr/nvsr68/nvsr68_06-508.pdf
5. Foster DG, Raifman S, Gipson JD, Rocca CH, Biggs MA. Effects of Carrying an Unwanted Pregnancy to Term on Women’s Existing Children. February 2019. The Journal of Pediatrics, 205:183-189.e1.
6. Foster DG, Biggs MA, Ralph L, Gerdts C, Roberts SCM, Glymour MA. Socioeconomic Outcomes of Women Who Receive and Women Who Are Denied Wanted Abortions in the United States. January 2018. American Journal of Public Health, 108(3):407-413
7. Upadhyay UD, Biggs MA, Foster DG. The effect of abortion on having and achieving aspirational one-year plans. November 2015. BMC Women’s Health, 15:102.
8. https://www.cdc.gov/nchs/fastats/accidental-injury.htm
9. https://www.theatlantic.com/health/archive/2019/05/why-more-women-dont-choose-adoption/589759/
10. Marquis, Don. “Why Abortion Is Immoral.” The Journal of Philosophy, vol. 86, no. 4, 1989, pp. 183–202. JSTOR, www.jstor.org/stable/2026961.
11. Lee SJ, Ralston HJP, Drey EA, Partridge JC, Rosen MA. Fetal Pain: A Systematic Multidisciplinary Review of the Evidence. JAMA. 2005;294(8):947–954. doi:https://doi.org/10.1001/jama.294.8.947
12. Boonin, D. (2002). A Defense of Abortion (Cambridge Studies in Philosophy and Public Policy). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511610172
13. https://www.ansirh.org/research/turnaway-study

[ACC] Should Gene Editing Technologies Be Used In Humans?

[This is an entry to the 2019 Adversarial Collaboration Contest by Nita J and Patrick N.]

Introduction

In October 2018, the world’s first genetically edited babies were born, twin girls given the pseudonyms Lulu and Nana; Chinese scientist He Jiankui used CRISPR technology to edit the CCR5 gene in human embryos with the aim of conferring resistance to HIV. In response to the international furor, China began redrafting its civil code to include regulations that would hold scientists accountable for any adverse outcomes that occur as the result of genetic manipulation in human populations. Now, reproductive biologists at Weill Cornell Medicine in New York City are conducting their own experiment designed to target BRCA2, a gene associated with breast cancer, in sperm cells. While sometimes considered controversial, gene editing has been used as a last resort to cure some diseases. For example, a precursor of CRISPR was successfully used to cure leukemia in two young girls when all other treatment options had failed. Due to its convenience and efficiency, CRISPR offers the potential to fight cancer on an unprecedented level and tackle previously incurable genetic diseases. However, before we start reinventing ourselves and mapping out our genetic futures, maybe we should take a moment to reevaluate the risks and repercussions of gene editing and rethink our goals and motives.

How does CRISPR work?

CRISPR, which stands for clustered regularly interspaced short palindromic repeats, is an adaptive bacterial immune response that protects against repeat offenders. When exposed to a bacteriophage, a bacterium can store fragments of the phage’s DNA in its own genome as “spacers,” which function as genetic mug shots, allowing the bacterium to quickly mount a defense against future invasions. When necessary, the CRISPR defense system will slice up any DNA matching these genetic fingerprints. In 2012, Jennifer Doudna and Emmanuelle Charpentier demonstrated how CRISPR could be used to slice any DNA sequence of choice. The CRISPR-Cas9 system allows researchers not only to recognize and remove DNA sequences but also to modify them. The completion of the Human Genome Project in 2003 provided a copy of the genetic book of life; CRISPR offers a way to purportedly erase and “correct” certain words in that book.
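
As a toy illustration of the targeting step only (not the repair machinery): the commonly used SpCas9 enzyme requires a roughly 20-nucleotide match between its guide RNA “spacer” and the genomic sequence, plus an adjacent “NGG” PAM motif, and it cuts about 3 base pairs upstream of the PAM. The sketch below mimics that search logic; the sequences are invented for the example, and real targeting also tolerates some mismatches and scans both DNA strands.

```python
# Toy sketch of the CRISPR-Cas9 target search (illustration only).
# A ~20-nt guide "spacer" must match the genomic protospacer, followed
# immediately by an "NGG" PAM motif; SpCas9 cuts ~3 bp upstream of the PAM.

def find_cut_sites(genome: str, spacer: str) -> list[int]:
    """Return indices in `genome` where Cas9 would cut for this spacer."""
    sites = []
    n = len(spacer)
    for i in range(len(genome) - n - 2):
        protospacer = genome[i:i + n]
        pam = genome[i + n:i + n + 3]
        if protospacer == spacer and pam[1:] == "GG":  # PAM = NGG
            sites.append(i + n - 3)  # blunt cut ~3 bp upstream of the PAM
    return sites

# Hypothetical 28-bp "genome" and 20-nt spacer, invented for this example.
genome = "CCGATTACAGATTACAGATTACTGGAAA"
spacer = "GATTACAGATTACAGATTAC"
print(find_cut_sites(genome, spacer))  # -> [19]
```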

Of course, this newfound power raises several ethical concerns. The major worry among scientists revolves around the long-term consequences of germline modification, meaning genetic changes made in a human egg, sperm, or embryo. Edits made in the germline will affect every cell in an organism and will also be passed on to any offspring. If a mistake is made in the process and a new disease inadvertently introduced, these changes will persist for generations to come. Human germline modification could also theoretically allow for the installation of genes to confer protection against infections, Alzheimer’s, and even aging. For many, the thought of controlling our own genetic destinies seems to be a very slippery slope, conjuring up dystopian images of Frankenstein or Brave New World. For these reasons and more, in 2015, Doudna and other scientists proposed a moratorium on the use of CRISPR-Cas9 for human genome editing until safety and efficacy issues could be more thoroughly addressed.

How safe and efficient is gene editing?

CRISPR is currently being used in clinical trials for cancers and blood disorders; since these interventions won’t lead to heritable DNA changes, the trials don’t face the same ethical dilemmas as Dr. He’s experiment, but they may nevertheless carry risks. Doubts persist about the safety and efficacy of the CRISPR gene editing system, as many other initially promising technologies have failed. Conventional gene therapies, which attempt to insert healthy copies of genes into cells using viruses, faced many early setbacks, including the tragic death of 18-year-old Jesse Gelsinger in 1999 during a gene therapy trial for ornithine transcarbamylase deficiency. The causes of Gelsinger’s death may have included a systemic immune response triggered by the viral vector.

While the death of Jesse Gelsinger marked a somber moment for the field, gene therapy has also seen successes: researchers in Paris treated two young infants suffering from a fatal form of severe combined immunodeficiency disease (SCID), an inherited disorder characterized by low levels of T cells and natural killer cells that leaves affected patients incredibly susceptible to infection. In that case, viral gene therapy was able to reverse the disease symptoms. On the other hand, gene therapy trials using viral vectors were halted when 25-50 percent of patients developed leukemia resulting from the insertion of the gene-carrying virus near an oncogene (a gene with the potential to cause cancer). CRISPR need not rely on viral vectors and so can avoid this particular hurdle. Still, while more precise than traditional gene therapy, CRISPR sometimes produces unintended edits, which may be especially problematic for certain gene targets. Some pairs of genes are “linked” due to physical proximity on the same chromosome and are therefore almost always passed on together; any edit to a gene belonging to a linked pair may inadvertently affect its neighboring partner.

Even intended cuts can have unexpected consequences. Two separate 2018 studies published in Nature Medicine, one conducted by the Karolinska Institute in Sweden and the other by the Novartis Institutes for Biomedical Research, concluded that CRISPR edits might increase the risk of cancer via inhibition of the tumor suppressor gene p53, which has been described as “the guardian of the genome” due to its crucial role in maintaining genomic stability. Double-stranded DNA breaks made by CRISPR activate p53-mediated repair mechanisms that instruct the cell to either mend the damage or self-destruct. Making these types of edits successfully would therefore require inhibition of p53; however, cells could become more vulnerable to tumorigenic mutations and the development of cancer as a result. “We don’t always fully understand the changes we’re making,” says Alan Regenberg, a bioethicist at the Johns Hopkins Berman Institute of Bioethics. “Even if we do make the changes we want to make, there’s still question about whether it will do what we want and not do things we don’t want.”

Nevertheless, a slight increase in cancer risk might be a worthwhile trade-off for many patients with genetic diseases, such as the aforementioned SCID, which affects 1 in 50,000 people globally. Usually, the only cure for SCID is a bone marrow transplant, which requires a matched donor in order to avoid rejection by host immune cells or, with an unmatched donor, the depletion of T cells to avoid rejection. CRISPR offers a safer, more efficient way to treat genetic diseases such as SCID: a patient’s own bone marrow cells can be extracted and genetically modified, avoiding rejection by the host immune system. Preclinical trials in mice are already underway to test the safety and efficacy of this approach. Stanford scientist Dr. Matthew Porteus demonstrated the efficiency of this technique and said in an interview, “We don’t see any abnormalities in the mice that received the treatment. More specifically, we also performed genetic analysis to see if the CRISPR-Cas9 system made DNA breaks at places that it’s not supposed to, and we see no evidence of that.”

CRISPR also offers the possibility of removing parts of a gene, an advantage over standard viral gene therapy, which only allows the insertion of genes. This feature can be especially important for autosomal dominant genetic disorders, in which a single copy of a deleterious mutation is enough to produce disease. In her book A Crack in Creation, Jennifer Doudna speculates that as CRISPR becomes increasingly safe, the tool may be used to help people who weren’t fortunate enough to win the genetic lottery: “Someday we may consider it unethical not to use germline editing to alleviate human suffering.” What was unthinkable just a few years ago may soon enter clinical practice.

Are some genetic variants superior to others?

In biology, those organisms that are most suited to their environment exhibit the highest fitness, a measure that accounts for both survival and reproduction. The accumulation of mutations over time is thought to contribute to many disease processes, but genetic diversity can also be beneficial for an organism when faced with a changing environment or unanticipated stress, such as drought or illness. Discussions on rigid natural selection should give way to more nuanced conversations on “balancing selection, the evolutionary process that favors genetic diversification rather than the fixation of a single ‘best’ variant,” as described by Professor Maynard V. Olson at the University of Washington.

Evolution has allowed many potentially deleterious genes to remain in the gene pool due to their ability to impart a selective advantage to individuals with carrier status, a phenomenon referred to as heterozygote advantage. Sickle cell anemia is a disease inherited in an autosomal recessive pattern: two copies of the problematic gene variant are necessary for disease expression. However, having just one copy of that variant confers resistance to malaria, which may explain the increased prevalence of sickle cell anemia in areas where malaria is more common, namely India and many countries in Africa. In this manner, malaria acts as a selective evolutionary pressure maintaining the sickle cell variant in the gene pool.
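
The logic of heterozygote advantage can be made quantitative with the standard overdominance model from population genetics. The fitness values in the sketch below are hypothetical, chosen only to illustrate how selection can hold a harmful allele at a stable frequency.

```python
# Illustrative balancing-selection model for a sickle-cell-like allele S.
# Relative fitnesses (hypothetical values, for illustration only):
#   AA: no sickle allele, malaria-susceptible -> fitness 1 - s
#   AS: carrier, protected from malaria       -> fitness 1 (highest)
#   SS: sickle cell disease                   -> fitness 1 - t
# Standard result: selection holds S at an equilibrium frequency s / (s + t).
s, t = 0.15, 0.80  # hypothetical fitness costs of malaria and of the disease

q_eq = s / (s + t)
print(f"Equilibrium frequency of the S allele: {q_eq:.2f}")  # -> 0.16

# Where malaria is absent, s = 0 and the equilibrium frequency is 0; the
# allele is then removed only slowly, since selection rarely "sees" it
# in rare SS homozygotes.
```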

Nevertheless, sickle cell disease remains prevalent in countries currently unaffected by malaria. In the United States, approximately 100,000 people suffer from sickle cell disease, but therapeutic options remain limited. Researchers have been investigating the insertion of wild-type, “anti-sickling” genes into affected patients using viral vectors as a therapy. However, since the pathological mutation for sickle cell disease has already been clearly identified, correcting the mutated gene with CRISPR may offer a more straightforward approach. The biotech company CRISPR Therapeutics recently announced the results of a phase I clinical trial in which CRISPR technology was used to treat a patient with sickle cell disease, although the efficacy and safety of the intervention have not yet been evaluated.

Can gene editing eliminate disease?

To answer this question, we need to first evaluate our understanding of genetics and weigh the importance of genetics against environmental factors such as diet and lifestyle.

How reliable is our understanding of gene-disease links?

A mutation is usually defined as a genetic sequence that differs from the agreed-upon consensus or “wild-type” sequence. After the completion of the Human Genome Project in 2003, the arduous process of genome annotation began. Genome-wide association studies, or GWAS, began examining population data over time to look for possible associations between genetic variants, or genotypes, and physical traits and diseases, or phenotypes. Unfortunately, these studies often fail to employ random sampling, and 96 percent of subjects included in GWAS have been people of European descent. In fact, scientific disciplines frequently oversample from WEIRD (western, educated, industrialized, rich, democratic) populations, whether studying genetic diseases or the human gut microbiota.

Given the sources of genetic information used to determine “wild-type” sequences, we may be using information that is relevant to one demographic but not another. According to Maynard Olson, one of the founders of the Human Genome Project, the wild-type human simply doesn’t exist, and “genetics is unlikely to revolutionize medicine until we develop a better understanding of normal phenotypic variation.” These words seem to have fallen on deaf ears, however, as evidenced by the burgeoning number of genome-wide association studies conducted over the last 12 years. Most of the associations discovered thus far are merely correlative, and few studies have been conducted to determine whether the observed associations are indeed causal.

Closer examination of the relationship between gene variants and certain diseases reveals weak associations in many cases. For example, the APOE gene, which encodes the protein apolipoprotein E, comes in three forms: APOE2, APOE3, and APOE4, with the last associated with an increased risk of developing Alzheimer’s disease (AD). However, the correlation is not determinative: the Nigerian population exhibits a high frequency of the APOE4 allele but a low frequency of AD. Environment and nutrition also play significant roles in the disease’s pathophysiology, as illustrated by Dr. Dale Bredesen’s research demonstrating reversal of cognitive decline through a targeted dietary and lifestyle approach. In fact, the majority of afflictions commonly affecting the general population, such as type 2 diabetes, cardiovascular disease, cancer, Alzheimer’s, and Parkinson’s, are not caused solely by mutations.

How often does disease arise as the result of genetic mutation alone?

Chronic diseases are the result of a complex interplay between host genetics and the environment. A study conducted by the Wellcome Trust Sanger Institute in Cambridge, England, analyzed DNA sequencing data from 179 people of African, European, or East Asian origin as part of the 1000 Genomes Pilot Project. It found that healthy individuals carried an average of 400 mutations in their genes, including around 100 loss-of-function variants that completely inactivate about 20 protein-coding genes. These findings indicate that deleterious mutations, even those that damage proteins, do not invariably give rise to disease. As Professor James Evans of the University of North Carolina, who was not involved in the study, summarized in an NPR health blog, “We’re all mutants. The good news is that most of those mutations do not overtly cause disease, and we appear to have all kinds of redundancy and backup mechanisms to take care of that.” The authors hypothesize that healthy individuals can carry disadvantageous mutations without showing ill effects for a number of possible reasons: an individual may carry just one copy of a mutation for a recessive disorder that requires two copies to manifest; the disease may exhibit delayed onset or require additional environmental factors for expression; or the reference catalogs used to identify gene-disease links may be inaccurate. Indeed, one analysis found that 27 percent of database entries cited in the literature were incorrectly identified.

To account for the discrepancy between genetic predisposition and disease manifestation, cancer epidemiologist Dr. Christopher Wild proposed the concept of the exposome in 2005. The exposome encompasses “life-course environmental exposures (including lifestyle factors) from the prenatal period onwards” and accounts for factors such as socioeconomic status, chemical contaminants, and gut microflora. The risk of developing a chronic disease during one’s lifetime may then be modeled by G×E: the interaction between a person’s genetics (G) and lifetime exposures (the exposome, E). Identical twin studies reveal that genotype alone cannot determine whether a given phenotype will be expressed; the interaction between genes and the environment must be taken into account.
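
One conventional way to formalize a G×E interaction is a logistic risk model with an interaction term. The sketch below is purely illustrative; the coefficients are hypothetical, chosen only to show how risk can remain low unless a risk genotype and an exposure occur together.

```python
import math

# Minimal sketch of a gene-environment (GxE) interaction in a logistic
# risk model. All coefficients are hypothetical and purely illustrative.
def disease_risk(g: int, e: float) -> float:
    """g: risk-genotype indicator (0 or 1); e: exposure score in [0, 1]."""
    b0, b_g, b_e, b_ge = -4.0, 0.5, 1.0, 2.0  # hypothetical log-odds terms
    logit = b0 + b_g * g + b_e * e + b_ge * g * e
    return 1 / (1 + math.exp(-logit))

# Genotype or exposure alone barely moves the risk; together they do:
print(disease_risk(0, 0.0))  # ~0.018  baseline
print(disease_risk(1, 0.0))  # ~0.029  gene without exposure
print(disease_risk(0, 1.0))  # ~0.047  exposure without gene
print(disease_risk(1, 1.0))  # ~0.378  gene and exposure together
```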

In fact, the “genes load the gun, environment pulls the trigger” paradigm may itself be overly simplistic: Dr. Alessio Fasano at Harvard Medical School has shown that loss of intestinal barrier function is likely also necessary for the development of chronic inflammation, autoimmunity, and cancer. Two particular gene markers, HLA-DQ2 and HLA-DQ8, are observed in the vast majority of celiac disease cases. While over 30 percent of the U.S. population carries one or both of the necessary genes, only around one percent of Americans are affected by celiac disease. These data suggest that exposure to gluten through ingestion of wheat, barley, or rye is not sufficient to trigger the development of celiac disease even in individuals with a genetic predisposition; without the additional loss of intestinal tight-junction function, celiac disease does not manifest. Thus, factors besides genetics are necessary for the development of chronic disease.

How does gene expression contribute to disease risk?

The concept of genetic determinism purports that our genes are our destiny, but genes are not nearly as important as gene expression. When most people think of evolution, the first name that comes to mind is Charles Darwin, but an earlier French naturalist, Jean-Baptiste Lamarck, had proposed a theory of “acquired characteristics” by which individuals evolve certain traits within their lifetimes and pass them on. The most oft-cited example discrediting this theory is that of giraffes elongating their necks by stretching to reach the treetops and then passing on the trait of long necks to their progeny. In contrast, Darwin proposed that those giraffes that already had the longest necks went on to find food, survive, and reproduce. Eventually, Darwin’s theory of natural selection prevailed, but Lamarck may simply have foreseen the field of epigenetics, the study of those drivers of gene expression that occur without a change in DNA sequence. The prefix “epi-” means “above” in Greek, and epigenetic changes determine whether genes are switched on or off and also influence the production of proteins. If you imagine your genetic code as the hardware of a computer, epigenetics is the software that runs on top and controls the operation of the hardware. Epigenetic changes control the expression of genes through various mechanisms and are influenced by diet, exercise, lifestyle, sunlight exposure, circadian rhythms, stress, trauma, exposure to pollutants, and other environmental factors.

The epigenetic mechanism of DNA methylation involves tagging DNA bases with methyl groups, a process that tends to silence genes. DNA methylation is responsible for X-chromosome inactivation in females, a process necessary to ensure that females don’t produce twice the number of X-chromosome gene products as males. Methylation is also responsible for the normal suppression of many genes in somatic cells, allowing for cell differentiation. Every somatic cell in the human body contains nearly identical genetic material, but skin cells, muscle cells, bone cells, and nerve cells exhibit different properties due to different sets of genes being turned on or off. Dietary nutrients such as vitamin B12, folic acid, choline, and betaine double as methyl donors, so even small changes in nutritional status during gestation can result in markedly different effects on gene expression and varied physical characteristics in the offspring. If differential gene expression can produce such drastic changes, is genome rewriting really necessary? Perhaps the centrality of the gene in driving human health has been overstated. Indeed, why worry about a potentially pathogenic gene if it is never expressed?

Inappropriate DNA methylation has been referred to as a “hallmark of cancer,” along with uncontrolled cell growth and proliferation. Almost all types of human tumors are characterized by two distinct phenomena: global hypomethylation, which may result in the expression of normally suppressed oncogenes (genes that promote tumor formation), and regional hypermethylation near tumor suppressor genes. In other words, genes that promote tumor formation are turned on while genes that suppress it are turned off. Cigarette smoke has been shown to promote both demethylation of metastatic genes in lung cancer cells and regional methylation of other specific genes via modulation of enzymatic activities. In short, genes themselves are not driving tumor formation; rather, inappropriate gene expression is increasing the risk of tumor development.

Can gene editing treat cancer?

Cancers are front and center among the conditions that gene editing therapies aim to treat. To answer the question of whether CRISPR can be used to treat cancer, we need to first examine how cancer arises. Medical textbooks frequently attribute the development of cancer to the accumulation of mutations over time. However, the accumulation of genetic mutations is not sufficient to cause cancer; the tumor microenvironment must be taken into account. In other words, the same oncogenic mutation that is adaptive for cancer in altered tissue confers no advantage in healthy, homeostatic tissue.

James DeGregori at the University of Colorado School of Medicine offers the following analogy. When tackling drug dealing in the inner city, arresting all the drug dealers is unlikely to work; the ones left behind will be smarter and more conniving. Instead, one might focus on creating better jobs, schools, and infrastructure, so citizens won’t have to resort to crime as a means of survival. Addressing the environment that led to the problem in the first place provides a more stable long-term solution. Similarly, instead of simply targeting the cancer, altering the microenvironment to disfavor its proliferation may provide a more viable long-term strategy; attacking the tumor directly immediately selects for resistance, which accounts for the difficulty of keeping a patient in remission. Highlighting the importance of the microenvironment in regulating development, homeostasis, and cancer, biologist Mina Bissell writes, “The sequence of our genes are like the keys on the piano; it is the context that makes the music.” Cancer depends on context, as should our approach to treatment.

Nevertheless, despite recent medical advances, cancer treatment has not seen fundamental change in decades. Standard therapies rely on toxic chemotherapy, which destroys both cancerous and healthy tissue. Furthermore, cancerous cells often evade detection and destruction by host immune defenses by expressing cell-surface molecules that prevent killing by host T cells. A new and effective form of immunotherapy known as chimeric antigen receptor (CAR) T cell therapy attempts to harness the power of the human immune system to recognize and kill cancer cells. However, this method has several disadvantages: a patient must have a sufficient number of immune cells prior to beginning therapy, which may not be the case for patients who have already received chemotherapy; the process is time-consuming; and the use of viral vectors may increase the risk of developing other cancers.

To address the issues of T cell collection and manufacturing delays, researchers are now developing “off-the-shelf” CAR T cells, which use gene editing to prevent rejection by the host immune system and the development of graft-versus-host disease (GvHD), a condition in which foreign immune cells attack the recipient’s body. In 2017, two infants with relapsing leukemia were successfully treated with such “off-the-shelf” CAR T cells, modified using the genome editing tool TALEN. Short for transcription activator-like effector nucleases, TALEN can be considered a predecessor to CRISPR; it uses enzymes specifically engineered to guide a cut to a particular genomic sequence. However, designing these enzymes requires extensive work, making the process costly and time-consuming. Additionally, in vitro studies have demonstrated that CRISPR techniques achieve better correction efficiencies and fewer off-target effects than TALEN. Moreover, the use of CRISPR can speed up the manufacturing of CAR T cells and drive down the costs of such therapies from hundreds of thousands of dollars to a few hundred dollars.

Can gene editing prevent HIV?

Another prospective application of CRISPR technology is the treatment of HIV. Today, approximately 37 million people around the world live with HIV. The use of antiretroviral drugs has greatly reduced annual deaths, from 1.9 million in 2004 to less than one million in 2017. Challenges still exist: the virus inserts itself into the host genome and mutates rapidly, making complete eradication of the disease very difficult. About one percent of the population is naturally immune to HIV due to a CCR5 gene mutation, which prevents the expression of a cell surface receptor that HIV binds in order to gain entry into host cells. As previously mentioned, the first genetically edited babies were born in October 2018 after Chinese scientist Dr. He Jiankui used CRISPR technology to edit the CCR5 gene in human embryos.

According to Dr. He, a married couple with the pseudonyms Mark and Grace consented to in vitro fertilization with additional CRISPR treatment to provide immunity to HIV for their offspring. First, a process called sperm washing was used to separate sperm from semen, the fluid that carries HIV. Next, eggs were fertilized by the sperm to create embryos, on which Dr. He performed CRISPR gene editing. After several implantation attempts, a successful pregnancy was achieved. Nine months later, twins with the pseudonyms Lulu and Nana were born healthy and purportedly suffered no off-target effects from the CRISPR therapy.
 
Testing indicated that gene editing did not successfully alter both copies of the CCR5 gene in one of the twins, however. The researchers were apparently aware of this failure prior to the pregnancy attempt; the decision to proceed with implantation regardless has numerous ethical implications. “In that child, there really was almost nothing to be gained in terms of protection against HIV and yet you’re exposing that child to all the unknown safety risks,” said Dr. Kiran Musunuru, a gene editing expert at the University of Pennsylvania. The choice to use the incompletely edited embryo suggests that the researchers may have been more focused on testing the accuracy of the gene editing technology than on providing immunity to disease.

According to the Chinese government and his employers, Dr. He acted without the knowledge or consent of his superiors. Chinese authorities suspended all of He’s research activities, saying his work was “extremely abominable in nature” and a violation of Chinese law. In fact, the procedure was not medically necessary. When only the father is HIV-positive, as in this case, sperm washing alone is usually sufficient to reduce transmission of the virus. A meta-analysis that investigated the efficacy of sperm washing did not find a single case where HIV was transmitted to offspring.
 
Dr. He claimed that the CCR5 gene is already very well characterized, but a recently published study found that decreased CCR5 function enhances cognitive function in mice. At first glance this new knowledge may appear to be a boon, but the potential benefit also invites a discussion of the possibility of designer babies. Another point to consider is that the CCR5 mutation that confers HIV immunity appears most commonly in Caucasians and may make individuals more susceptible to infections that are common in Asia.

Can gene editing be used to create designer babies?

A discussion of human genome editing would not be complete without evaluating the potential to create “designer babies,” a term commonly used in the vernacular to refer to babies with genetic enhancements. Both the use of gene editing for basic research and the use of somatic gene editing to heal individuals who are sick are widely accepted among the public. The waters become murkier when we consider germline editing and the possibility of preventing disease or altering traits unrelated to health needs. In the 1970s, scientists first began to establish distinctions between somatic and germline genome modifications; somatic edits affect only a single individual, while germline edits can be passed down over generations. By the mid-1980s, bioethicists began to argue that the morally relevant line was between disease and enhancement rather than between somatic and germline. Discussions of heritable enhancements in particular raise fears of a possible return to eugenics.

John Fletcher, former head of bioethics at the National Institutes of Health (NIH), once wrote, “The most relevant moral distinction is between uses that may relieve real suffering and those that alter characteristics that have little or nothing to do with disease.” Many scientists today share the sentiment that treatment and prevention of “disease” constitute acceptable uses of CRISPR technologies while “enhancement” applications should be discouraged, but the boundary between the two is riddled with semantic discord. Moreover, the line separating disability from disease is often blurred, and many perceived shortcomings may in fact represent normal variation on the phenotypic spectrum.

The discussion of whether we can or should modify human characteristics may be a moot point, since our knowledge of which genes affect complex traits such as height, intelligence, and eye color is still limited. Additionally, most traits are influenced not only by genetics but also by environmental factors, and monozygotic twin studies demonstrate that genes alone cannot predict whether physical traits will be expressed. Furthermore, genes that influence physical traits may also impart increased vulnerability to certain diseases. For example, variations in the MC1R gene responsible for red hair may also increase the risk of developing skin cancer. As indicated earlier, Dr. He’s efforts to confer resistance to HIV may have also resulted in increased susceptibility to infection by West Nile virus or influenza. As always, trade-offs exist, and the idea of the “perfect specimen” is a fallacy. Any effort to gain genetic advantages will always be subject to the limitations of biology.

How should society move forward with gene editing technology?

CRISPR technology holds invaluable potential as a research tool and possible treatment for diseases caused by single-gene mutations. As previously described, some genetic diseases can be treated by stem cell gene editing without the need for germline modification, thereby minimizing the risk that mistakes are passed on to subsequent generations. On the other hand, trying to correct an error after a certain point in development is sometimes problematic, as the error has already been incorporated into billions of cells. Jennifer Doudna offers the following visual: “Imagine trying to correct an error in a news article after the newspapers have been printed and delivered, as opposed to when the article is still just a text file on the editor’s computer.” Germline editing may therefore provide a more expedient option for the prevention of some genetic diseases such as sickle cell disease or cystic fibrosis.

One of the most compelling arguments against CRISPR gene editing, namely the potential for misuse, can also be considered the most compelling argument for it. Banning progress on gene editing technology may create a black market, whereas continued research will allow the scientific community to control its use and ensure patient safety. Research into CRISPR is continually finding ways to make the technology safer and more effective; a paper published in September 2019 reported on the potential of a novel CRISPR system to modulate gene expression in human cells. The process is reversible in theory and doesn’t involve cutting DNA, thereby reducing the risk of human harm and leveraging the power of epigenetics.

Moreover, while gene expression and the tumor microenvironment are viable targets for cancer treatment, gene editing can be considered a last-resort therapy for certain cases in which other interventions have failed. Common chronic diseases, such as Alzheimer’s, type 2 diabetes, and cardiovascular disease, likely require a more nuanced approach, as gene expression, governed by factors such as diet and lifestyle, plays a significant role in disease pathogenesis. The use of gene editing to mold favorable traits, such as eye or hair color, likely exposes individuals to unnecessary risks and does not constitute medical necessity. Nevertheless, many consider mainstream germline gene editing an inevitability. Joseph Fletcher, one of the founders of bioethics, wrote in 1971, “Man is a maker and a selector and a designer, and the more rationally contrived and deliberate anything is, the more human it is.” The establishment of gene editing guidelines should include input from scientists, policy makers, and the public, and should incorporate the most current knowledge available in order to prevent misuse and realize the technology’s potential. As custodians of such powerful technology, we must take care to use it in an ethical and responsible manner. Whether our efforts will alleviate human suffering or ensure the survival of our species, only time will tell.

[ACC] Should We Colonize Space To Mitigate X-Risk?

[This is an entry to the 2019 Adversarial Collaboration Contest by Nick D and Rob S.]

I.

Nick Bostrom defines existential risks (or X-risks) as “[risks] where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Essentially this boils down to events where a bad outcome lies somewhere in the range of ‘destruction of civilization’ to ‘extermination of life on Earth’. Given that this has not already happened to us, we are left in the position of making predictions with very little directly applicable historical data, and as such it is a struggle to generate and defend precise figures for probabilities and magnitudes of different outcomes in these scenarios. Bostrom’s introduction to existential risk provides more insight into this problem than there is space for here.

There are two problems that arise with any discussion of X-risk mitigation: is it worth doing, and how do you generate the political will necessary to handle the issue? Due to scope constraints this collaboration will not engage with either question, but will simply assume that the reader sees value in the continuation of the human species and civilization. The collaborators see X-risk mitigation as a “Molochian” problem, in which we blindly stumble into these risks in the process of maturing our civilisation, or perhaps a twist on the tragedy of the commons: everyone agrees that we should try to avoid extinction, but nobody wants to pay an outsized cost to prevent it. Coordination problems have been solved throughout history, and the collaborators assume that as the public becomes more educated on the subject, more pressure will be put on world governments to solve this one.

Exactly which scenarios should be described as X-risks is impossible to pin down, but on the chart above, the closer you get to the top right, the more significant the concern. Considering there is no reliable data on the probability of a civilization-collapsing pandemic or many of the other scenarios, the true risk of any scenario is impossible to determine. Any of the above scenarios should therefore be considered dangerous, but for some of them we have already enacted preparations and mitigation strategies. World governments are already preparing for X-risks such as nuclear war or pandemics by leveraging conventional mitigation strategies like nuclear disarmament and WHO funding. When applicable, these strategies should be pursued in parallel with the strategies discussed in this paper. However, for something like a gamma ray burst or grey goo scenario, there is very little that can be done to prevent civilizational collapse. In these cases, the only effective remedy is the development of closed systems. Lifeboats. Places for the last vestiges of humanity to hide, survive, and wait for the catastrophe to burn itself out. There is no guarantee that any particular lifeboat would survive, but a dozen colonies scattered across every continent or every world would allow humanity to rise from the ashes of civilization.

Both authors of this adversarial collaboration agree that the human species is worth preserving, and that closed systems represent the best compromise between cost, feasibility, and effectiveness. We disagree, however, on whether the lifeboats should be terrestrial or off-world. We’re going to go into more detail on the benefits and challenges of each, but in brief the argument boils down to whether we should aim more conservatively by developing the systems terrestrially, or ‘shoot for the stars’ by building an offworld base and reaping the secondary benefits.

II.

For the X-risks listed above, there are measures that could be taken to reduce the risk of them occurring, or to mitigate the negative outcomes. The most concrete steps taken so far are the creation of organisations like the UN, intended to disincentivize warmongering behaviour and reward cooperation. Similarly, the World Health Organisation and agreements like the Kyoto Protocol serve to reduce the chances of catastrophic disease outbreak and climate change respectively. MIRI works to reduce the risk of rogue AI coming into being, while space missions like the Sentinel telescope from the B612 Foundation seek to spot incoming asteroids from space.

While mitigation attempts are to be lauded, and expanded upon, our planet, global ecosystem, and biosphere remain a single point of failure for human civilization. Creating separate reserves of human civilization, in the form of offworld colonies or closed systems on Earth, would be the most effective approach to mitigating the worst outcomes of X-risks.

The scenario for these backups would go something like this: despite the best efforts to reduce the chance of any given catastrophe, it occurs, and the efforts made to protect and preserve civilization at large fail. Thankfully, our closed system or space colony has been specifically hardened to survive the worst we can imagine, and a few thousand humans survive in their little self-sufficient bubble, with the hope of retaining existing knowledge and technology until the point where they have grown enough to resume the advancement of human civilization. The species/civilization loss event has been averted.

Some partial analogues come to mind when thinking of closed systems and colonies: the colonisation of the New World, Antarctic exploration and scientific bases, the Biosphere 2 experiment, the International Space Station, and nuclear submarines. These do not all exactly match the criteria of a closed-system lifeboat, but lessons can be learned from each.

One of the challenges of X-risk mitigation is developing useful cost/benefit analyses for various schemes that might protect against catastrophic events. Given the uncertainty inherent in the outcomes and probabilities of these events, it can be very difficult to pin down the ‘benefit’ side of the equation; if you invest $5B in an asteroid mitigation scheme, are you rescuing humanity in 1% of counterfactuals or are you just softening the blow in 0.001% of them? If those fronting the costs can’t be convinced that they’re purchasing real value in terms of the future then it’s going to be awfully hard to convince them to spend that money. Additionally, the ‘cost’ side of the equation is not necessarily simple either, as many of the available solutions are unprecedented in scale or scope (and take the form of large infrastructure projects famous for cost-overruns). The crux of our disagreement ended up resting on the question of cost/benefit for terrestrial and offworld lifeboats, and the possibility of raising the funds and successfully establishing these lifeboats.
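To make the sensitivity concrete, here is a napkin expected-value sketch in Python. The $5B cost and the two rescue probabilities are the figures from the example above; the dollar value assigned to humanity’s future is a pure placeholder, since that number is precisely what is in dispute:

    # Napkin cost/benefit for an X-risk mitigation scheme.
    # value_of_future is an illustrative assumption, not a real estimate.
    cost = 5e9                       # $5B mitigation scheme
    value_of_future = 1e15           # assumed $ value of continued civilization
    for p_rescue in (0.01, 0.00001): # the two counterfactual rescue rates above
        expected_benefit = p_rescue * value_of_future
        print(f"P(rescue) = {p_rescue}: benefit ${expected_benefit:.1e} vs cost ${cost:.1e}")

Across the stated range, the verdict swings from an overwhelming bargain to roughly break-even, which is exactly why pinning down the benefit side is so hard.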

III.

The two types of closed systems under consideration are offworld colonies and planetary closed systems. An offworld colony would likely be based on some local celestial body, perhaps Mars or one of Jupiter’s moons. For an offworld colony, X-risk mitigation wouldn’t be the only point in its favor. A colony would also be able to provide secondary and tertiary benefits by acting as a research base and exploration hub, and possibly by taking advantage of other opportunities offered by off-planet environments.

In terms of X-risk mitigation, these colonies would work much the same as the planetary lifeboats, where isolation from the main population provides protection from most disasters. The advantage would lie in the extreme isolation offered by leaving the Earth. While a planetary lifeboat might allow a small population to survive a pandemic, a nuclear/volcanic winter, or catastrophic climate change, other threats such as an asteroid strike or nuclear strikes themselves would retain the ability to wipe out human civilization in the worst case.

Offworld colonies would provide near-complete protection from asteroid strikes and threats local to the Earth such as pandemics, climate catastrophe, or geological events, as well as being out of range of existing nuclear weaponry. Climate change wouldn’t realistically be an issue on Mars, the Moon, or anywhere else in space; pandemics would be unable to spread from Earth; and the colonies would probably be low-priority targets come the outbreak of nuclear war. Eradicating human civilisation entirely would require enough asteroid strikes to hit every colony, astronomically reducing the odds.
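A one-line probability sketch shows why the odds shrink so fast. If we assume, generously, that colonies fail independently, each surviving a given catastrophe with probability s (the 0.5 below is purely illustrative):

    # P(civilization lost) = P(every colony destroyed) = (1 - s) ** n
    s = 0.5                  # assumed per-colony survival probability
    for n in (1, 3, 12):
        print(f"{n:2d} colonies: P(all destroyed) = {(1 - s) ** n:.5f}")

Correlated failure modes, where one disaster takes out every colony at once, would erode this advantage, which is part of the case for spreading lifeboats across more than one world.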

Historically, the only successful drivers for human space presence have been political, the Space Race being the obvious example. I would attribute this to a combination of two factors: human presence in space doesn’t increase the value of the scientific research possible there enough to offset the costs of supporting people, and no economically attractive proposals for human space presence exist. As such, the chances of an off-planet colony being founded as a research base or economic enterprise are low in the near future. This leaves them in a similar position to planetary lifeboats, which also fail to provide an economic incentive or research prospects beyond studying the colony itself. To me this suggests that the point of argument between the two possibilities lies in the trade-off between the costs of establishing a colony on or off planet, and the risk mitigation each would respectively provide.

The value of human space presence for research purposes is only likely to decrease as automation and robotics improve. For economic purposes, as access to space becomes cheaper, it may be possible to come up with some profitable activity for people off-planet. The most likely options would involve some kind of tourism or, if the colony were orbital, zero-g manufacturing of advanced materials; an unexpectedly attractive proposal would be to offer off-planet retirement homes for the ultra-wealthy (to reduce the strain of gravity on their bodies in an already carefully controlled environment). It seems unlikely that any of these activities would be sufficiently profitable to justify an entire colony, but they could at least serve to offset some of the costs.

Perhaps the closest historical analogue to these systems is the colonisation of the New World: the length of the trip was comparable (two months for the Mayflower, at least six to reach Mars), and isolation from home was further compounded by the expense and lead time of mounting additional missions. Explorers traveling to the New World disappeared without warning multiple times, presumably due to the difficulty of sending for external help when unexpected problems were encountered. Difficulties arising from unknown unknowns of this kind were encountered during the Biosphere projects as well: it transpired that trees grown in enclosed spaces won’t develop enough structural integrity to hold their own weight, as it is the stresses due to wind that cause them to develop this strength. This apparently wasn’t even on the radar before the project began, and while several other unforeseen issues also had to be solved, the running theme was that in the event of an emergency, supplies and assistance could come from outside. A space-based colony would have to solve problems of this kind with only what was immediately to hand. With modern technology, assistance in the form of information would be available (see Gene Kranz and Ground Control’s rescue of Apollo 13), but lead times on space missions mean that even emergency flights to the ISS, a trip whose one-way flight time is measured in hours, aren’t really feasible. As such, off-planet lifeboats would be expected to suffer more from unexpected problems than terrestrial lifeboats, and to be more likely to fail before there was even any need for them.

The other big disadvantage of a space colony is the massively increased cost of construction. Elon Musk’s going estimate for a ‘self-sustaining civilization’ on Mars is $100B – $10T, assuming that SpaceX’s plans for reducing the cost of transport to Mars work out as planned. In order to offer an apples-to-apples comparison with the terrestrial lifeboat considered later in this collaboration, if Musk’s estimate for a self-sustaining city of one million is scaled down to the 4000 families considered below (a population of 16,000), the cost estimate comes down to $1.6B – $160B. Bearing in mind that this is just for transport of the requisite mass to Mars, we would expect development and construction costs to push the total higher. With sufficient political will, these kinds of costs can be met; the Apollo program cost an estimated $150B in today’s money (why the cost of space travel for private and government-run enterprises has changed so much across sixty years is an exercise left to the reader). Realistically, though, it seems unlikely that any political crisis will occur to which the solution is a second space race of similar magnitude. This leaves the colonization project in the difficult position of trying to discern the best way to fund itself. Can enough international coordination be achieved to fund a colonization effort in a manner similar to the LHC or the ISS (but an order of magnitude larger)? Will the ongoing but very quiet space race between China, what’s left of Western space agencies’ human spaceflight efforts, and US private enterprise escalate into a colony race? Or will Musk’s current hope of ‘build it and they will come’ result in access to Mars spurring massive private investment into Martian infrastructure projects?
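For transparency, here is the scaling arithmetic spelled out; note that linear scaling is itself an assumption, and a small colony presumably loses economies of scale, so the true figure would likely be higher:

    # Scale Musk's one-million-person estimate down to 16,000 people.
    low, high = 100e9, 10e12            # $100B - $10T for 1M people
    fraction = 16_000 / 1_000_000       # 4000 families of four
    print(f"${low * fraction / 1e9:.1f}B to ${high * fraction / 1e9:.0f}B")  # $1.6B to $160B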

IV

Planetary closed systems would be exclusively focused on allowing us to survive a catastrophic scenario (read: “zombie apocalypse”). Isolated using geography and technology, Earth-based closed systems would still have many similarities to an offworld colony. Each lifeboat would need to make its own food, water, energy, and air. People would be able to leave during emergencies like a fire, O2 failure, or heart attack, but the community would generally be closed off from the outside world. Once the technology has been developed, there is no reason other countries couldn’t replicate the project; in fact, it should be encouraged. Multiple communities located in different regions of the world would have three big benefits: diversity, redundancy, and sovereignty. Allowing individual countries to make their own decisions permits different designs with no common points of failure, and if one of the sites does fail, there are other communities that will still survive. Site locations should be chosen based on:

● Political stability of the host nation
● System implementation plan
● Degree of exposure to natural disasters
● Geographic location
● Cultural diversity

There is no reason a major nation couldn’t develop a lifeboat on their own, but considering the benefits of diversity, smaller nations should be encouraged to develop their own projects through UN funding and support. A UN committee made up of culturally diverse nations could be charged with examining grant proposals using the above criteria. In practice, this would mean a country would go before the committee and apply for a grant to help build their lifeboat.

Let’s say the US has selected Oracle, Arizona as a possible site for an above-ground closed system. The proposal points out that the cool, dry air minimizes decomposition, that the site is located far from major cities or nuclear targets, and that it would be protected and partially funded by the United States. The committee reviews the request, and their only concern is the periodic earthquakes in the region. To improve the quality of their bid, the United States adds a guarantee that the town’s demographics will be reflected in the system by committing to a 40% Latino system. The committee considers the cultural benefits of the site and approves the funding.

Oracle, Arizona wasn’t a random example. In fact, it’s already the site of the world’s largest Closed Ecological System [CES]: it was used as the site of Biosphere 2. As described by acting CEO Steve Bannon:

Biosphere 2 was designed as an environmental lab that replicated […] all the different ecosystems of the earth… It has been referred to in the past as a planet in a bottle… It does not directly replicate earth [but] it’s the closest thing we’ve ever come to having all the major biomes, all the major ecosystems, plant species, animals etc. Really trying to make an analogue for the planet Earth.

I feel like I need to take a moment to point out that that was not a typo, and the quote above is provided by ​that​ Steve Bannon. I don’t know what else to say about that other than to acknowledge how weird it is (very).

As our friend Steve “Darth Vader” Bannon points out, what made Biosphere 2 unique is that it was a Closed Ecological System in which 8 scientists were sealed into an area of around 3 acres for a period of 2 years (Sept. 26, 1991 – Sept. 27, 1993). There are many significant differences between the Biosphere 2 project and a lifeboat for humanity; Biosphere 2 contained a rainforest, for example. But the project remains the longest a group of humans has ever been cut off from Earth (“Biosphere 1”). Our best view into the issues future citizens of Mars may face is through the glass wall of a giant greenhouse in Arizona.

One of the major benefits of using terrestrial lifeboats as opposed to planetary colonies is that if (when) something goes wrong, nobody dies. There is no speed-of-light delay for problem solving, outside staff are available to provide emergency support, and in the event of a fire or gas leak, everyone can be evacuated. In Biosphere 2, something did go wrong: over the course of 16 months, the oxygen in the Biosphere dropped from 20.9% to 14.5%. At the lowest levels, scientists were reporting trouble climbing stairs and an inability to perform basic arithmetic. Outside support staff had liquid oxygen transported to the biosphere and pumped in.

A 1993 New York Times article “​Too Rich a Soil: Scientists find Flaw That Undid The Biosphere​” reports:

A mysterious decline in oxygen during the two-year trial run of the project endangered the lives of crew members and forced its leaders to inject huge amounts of oxygen […] The cause of the life-threatening deficit, scientists now say, was a glut of organic material like peat and compost in the structure’s soils. The organic matter set off an explosive growth of oxygen-eating bacteria, which in turn produced a rush of carbon dioxide in the course of bacterial respiration.

Considering that a Martian city would need to rely on the same closed-system technology as Biosphere 2, it seems that a necessary first step for a permanent community on Mars would be to demonstrate the ability to develop a reliable, sustainable, and safe closed system. I reached out to William F. Dempster, the chief engineer for Biosphere 2. He has been a huge help and provided tons of papers that he authored during his time on the project. He was kind enough to point out some of the challenges of building closed systems intended for long-term human habitation:

What you are contemplating is a human life support pod that can endure on its own for generations, if not indefinitely, in a hostile environment devoid of myriads of critical resources that we are so accustomed to that we just take them for granted. A sealed structure like Biosphere 2 [….] is absolutely essential, but, if one also has to independently provide the energy and all the external conditions necessary, the whole problem is orders of magnitude more challenging.

The degree to which an off-planet lifeboat would lack resources compared to a terrestrial one depends on the kind of disaster scenario that occurred. In some cases, such as a pandemic, it could be feasible to eventually venture out and recover machines, possibly some foods, and air and water (all with appropriate sterilization). In the case of an asteroid strike or nuclear war at a civilization-destroying level, however, the lifeboat would have to be resistant to much the same conditions as an off-planet colony, as these are the kinds of disasters in which the Earth could conceivably become nearly as inhospitable as the rest of the solar system. To provide similar levels of X-risk protection in these situations, the terrestrial lifeboat would need to be exactly as capable as Dempster worries.

While Biosphere 2 is in many ways a good analogue for the challenges a terrestrial closed system would face, there are many differences as well. First, Biosphere 2 was intended to maintain a living, breathing ecosystem, while a terrestrial lifeboat would be able to leverage modern technology to save on costs, and cost is really the terrestrial lifeboat’s biggest selling point. A decent mental model is a large, relatively squat building with an enclosed central courtyard, something like the world’s largest office building. It cost $1 billion in today’s money to build and bought us 6.5 million sq ft of living space, enough for 4000 families to each have a comfortable 2-bedroom home. A lifeboat would have additional expenses for food and energy generation, as well as needing medical and entertainment facilities, but the facility could have a construction cost of around $250,000 per family. For comparison, the median US home price is $223,800.
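Spelling out that arithmetic, using only the figures quoted above:

    # Construction-cost comparison for a terrestrial lifeboat.
    build_cost = 1e9       # ~$1B in today's dollars
    sq_ft = 6.5e6          # 6.5 million sq ft of floor space
    families = 4000
    print(f"{sq_ft / families:.0f} sq ft per family")             # ~1625 sq ft
    print(f"${build_cost / families:,.0f} base cost per family")  # $250,000

The base shell alone, in other words, comes in near the price of an ordinary house, before adding food production, energy generation, and the other systems a true lifeboat would need.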

There is one additional benefit that can’t be overlooked. Due to the closed nature of the community, the tech-centric lifestyle, and the subsidized cost of living, there would be a natural draw for software research, development, and technology companies. Creating a government-sponsored technology hub would give young engineers a small city in which to congregate, sparking new innovation. This wouldn’t, and shouldn’t, be a permanent relocation: in good times, with low risks, new people could be continuously brought in and cycled out periodically, with lockdowns occurring only in times of trouble. The X-risk benefits are largely dependent on the facilities themselves, but the facilities would naturally have nuclear fallout and pandemic protection, as well as a certain amount of inclement weather or climate protection. Depending on the facility, there could be natural or designed radiation protection. Overall, a planetary system of lifeboats would be able to survive anything an offworld colony would survive, outside of a rogue AI or grey goo scenario, while having a far lower likelihood of a system failure resulting in the massive loss of life a Martian colony could suffer.

V.

To conclude, we decided that terrestrial and off-planet lifeboats offer very similar amounts of protection from X-risks, with off-planet solutions adding a small amount of additional protection in certain scenarios while being markedly more expensive than a terrestrial equivalent, with additional risks and unknowns in the construction process.

The initial advocate for off-planet colonies now concedes that the additional difficulties associated with constructing a space colony mean that terrestrial lifeboats should be successfully constructed before attempts are made to build one on another body. The only remaining reason to countenance their construction is an issue which revealed itself to the advocate for terrestrial biospheres towards the end of the collaboration: a terrestrial lifeboat could easily be discontinued and abandoned if funding or political will failed, whereas a space colony would be very difficult to abandon due to the astronomical (pun intended) expense of transporting every colonist back. A return trip for even a relatively modest number of colonists would require billions of dollars allocated over several years by, most importantly, multiple sessions of a congress or parliament. This creates a paradigm where a terrestrial lifeboat, while less expensive and in many ways more practical, could never be a long-term guarantor of human survival due to its ease of decommissioning (as was seen with Biosphere 2). To be clear, the advocate for terrestrial lifeboats considers this single point sufficient to decide the debate in its entirety and concedes the debate without reservation.

Open Thread 143

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit – and also check out the SSC Podcast. Also:

1. No matter how many times I advertise it, people somehow never figure out there’s an SSC podcast (no extra content, just someone reading the blog posts). And in case you prefer robots to humans (not judging you), now someone’s made an automated version of the same thing.

2. Deiseach, Matt M, Missingno, TheAncientGeek, and toastengineer have served their sentences and are unbanned. If you should be unbanned but can’t post, let me know and I’ll try to figure out why. I’ve also banned a few extra people who deserved it. Spambots and randos don’t get due process, but people with more than 10ish comments who get banned are all on the Register Of Bans with explanation. If you seem to be banned but aren’t on there, let me know – it’s probably a spam filter problem.

3. I’ve also updated the list of people who are banned from IRL meetups. These are vague, don’t use full names, and don’t list offenses – out of sensitivity to the fact that these people haven’t been convicted by any court and I’m not trying to sentence them to Googleable Internet shaming. The people on there generally know what they did, so if you see your name on there and are shocked, it may just be someone with the same first name + last initial as you; message me and I will tell you. If you’re banned and I see you at a meetup, I’ll assume you didn’t notice you were banned, and politely ask you to leave. If you refuse, then I will have to resort to Googleable Internet shaming to make sure everyone else is adequately informed/protected, so please leave.

4. A reader has proposed a redesign for SSC. Please take a look at the mockup (it doesn’t have ads because they’re hard to add to the mockup, but imagine it did) and then vote on whether you think it’s better or worse.

5. I’ll be at the Bay Area Winter Solstice celebration tonight, hope to see some of you there.


[ACC] Does Calorie Restriction Slow Aging?

[This is an entry to the 2019 Adversarial Collaboration Contest by the delightfully-pseudonymous Adrian Liberman and Calvin Reese.]

About the Authors: Adrian Liberman is currently a PhD student in biology at a university in the mid-Atlantic. He previously worked at the National Institute on Aging and remains actively interested in gerontology and the biological study of aging. Calvin Reese is an author with a BS in Biology. He has always been interested in the possibility of life extension by calorie restriction. Recently, he has reexamined the subject after undertaking a series of intermittent fasts for weight-loss reasons. Calvin believes CR extends life; Adrian has long been skeptical.

Introduction: Is food making us old?

We all agree that food is delicious, and we also all agree that too much food is bad for us, but exactly how bad is it? Various academics have proposed that too much food actually accelerates the aging process, and reducing our food intake via calorie restriction (CR) is one of the most accessible and available methods of extending human life. While billionaires pump vast fortunes into increasingly far-fetched stem cell treatments and consciousness transfers, CR advocates contend that they can get a 10-20% increase in their natural lifespans simply by eating a little less. If true, CR raises a question of enormous significance to gerontology and the science of aging: are our diets aging us one calorie at a time? And if so, can we stop it?

Calorie Restriction (CR) and Intermittent Fasting (IF) advocates generally claim that CR will extend your lifespan and prevent various diseases, and that IF is an effective form of CR. Supporting these claims are animal studies in yeast, worms, flies, mice, and monkeys, as well as indirect evidence from humans. A variety of biochemical studies have been performed, and a host of theoretical literature generally attributes the underlying mechanisms to the IGF axis’s effects on sugar metabolism, DNA damage mechanisms dealing with free radical formation, and inflammation modulation.

My position (Adrian) is that for the average individual reading this article, CR and IF are not generally worth the effort, because the individual will be exposed to non-trivial risks and the benefits will be minute.

My position (Calvin) is that there exists some amount of food, on average, that will produce an optimal human lifespan, and that the average person could significantly extend their life by moderately reducing their calorie intake.

Calorie Restriction (CR) is a term coined to describe a series of experiments, conducted over the course of more than a century, demonstrating that various animals kept in laboratory conditions generally survive longer if fed diets that are ‘restricted’. This effect has been observed in bacteria, yeast, worms, fruit flies, mice, and arguably monkeys, so it appears to span every domain of life. (40)

On the other hand, population-level studies say that the lowest observed mortality in western populations occurs at a BMI of ~25. (4)(5) This appears to be a paradoxical result, since a BMI of 25 generally results from a diet that isn’t particularly “restricted”. Why would we observe a lower mortality in lab animals when they are undergoing perpetual mild starvation, but a higher mortality when this happens in humans?

And by extension, should you, the reader, adopt a calorie-restricted diet, or an NIH-approved 25 BMI diet?
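Before diving in, it may help to make the BMI figures concrete. BMI is weight in kilograms divided by the square of height in meters; the 1.75 m height below is just an illustrative adult:

    # What various BMIs mean in kilograms for a 1.75 m adult.
    height_m = 1.75
    for bmi in (20, 25, 30):
        print(f"BMI {bmi} at {height_m} m = {bmi * height_m ** 2:.1f} kg")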

Semantics: Calorie Restriction versus just not overeating

First things first. Let’s define some very strict terminology. This will become important later:

Kinds of diets:

• Ad libitum Diet (ad-lib) – at one’s pleasure, or as much food as you would like.

• Ad-lib Calorie Restriction (Ad-lib CR) – a diet that contains fewer calories (for example 30% fewer) than your diet would if you were eating an ad-libitum diet.

• Normative Diet (ND) – a diet that is balanced and prevents the onset of obesity. The general analog to USDA’s 2000 calorie diet. (Our term and not in common usage)

• Normative Diet Calorie Restriction (nd CR) – a diet that is balanced, but contains fewer calories than a normative diet (for example 30% fewer).

Once we establish these definitions, the kinds of claims that proponents or opponents can make expand into the following:

Claim 0: You, the reader, should adopt an ad-lib diet
(Nobody claims this, put the chips down!) (null hypothesis)
Claim 1: You, the reader, should adopt an ad-lib CR diet
Claim 2: You, the reader, should adopt a Normative Diet
Claim 3: You, the reader, should adopt an nd-CR diet

For the purposes of this article, we’ll be assuming that you, the reader, are an average American of indeterminate sex.

First the claims of CR in detail:

Animal Studies and the NIA Interventions

CR relative to ad-lib leads to improved metabolic function in the animals studied, generally because they do not become obese. On the other hand, CR relative to a normative diet leads to unfavorable outcomes including decreased fertility, altered mental states, and muscle wasting, but also a significantly increased lifespan.

A Calorie Restriction experiment goes like this: Take some animals and first establish the amount of nutrients they consume “normally”, or under optimal growth conditions. After this, take half of those animals and provide them access to all the micronutrients and amino acids they want, but restrict their access to raw energy in the form of fats or carbohydrates to X% of their normal intake.

Where does this lead to increased lifespans? Let’s start at the beginning! The beginning of time:

CR dramatically extends the lifespan of yeast. (39) Yeast longevity is difficult to quantify, but experiments have suggested that 75% CR in yeast (done by decreasing glucose concentrations in yeast media from 2% to 0.5%) extends yeast longevity by a factor of 3. (39)(40) Single-celled organism lifespan is a goofy term, but it can be measured both directly, by looking under a microscope at cells and keeping track of when they kick it, and indirectly, by comparing the steady population of cells to how often they divide.
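Spelling out the arithmetic in that paragraph; the culture numbers in the second half are invented purely to illustrate the steady-state logic, in which deaths must balance divisions:

    # The CR fraction from cutting media glucose 2% -> 0.5%:
    print(f"{(2 - 0.5) / 2:.0%} restriction")                    # 75%
    # Indirect lifespan estimate: at steady state, deaths = divisions,
    # so mean lifespan ~ population size / division rate. Numbers assumed.
    population = 1e6            # cells held at steady state
    divisions_per_hour = 5e4    # divisions across the whole culture
    print(f"mean lifespan ~ {population / divisions_per_hour:.0f} hours")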

Nematodes (worms) exposed to 50-75% CR experience a 2-3 fold increase in longevity. CR in Drosophila melanogaster produces between 30% and 100% increases in the observed lifespan of the flies. (40)(41) Many hypotheses have been proposed to explain why CR produces such marked increases in the lifespan of worms, flies, and yeast, but most notable is the behavior of the SIRT-family genes. (71) Unfortunately the SIRT mutation data is difficult to interpret due to the relatively central role SIRT genes tend to play in the functioning of a diverse array of cells, but some theories are more convincing than others. More on this later. Of course, in general, the problem with studies like these is that establishing what constitutes a normative diet in such organisms is somewhat subjective. Is there such a thing as an obese yeast cell? A normative diet for rodents, on the other hand, is pretty easy to establish.

Numerous studies and overwhelming evidence show that CR significantly extends the lifespan of rats. Rodent CR studies suggesting CR extends lifespan go back to the 1930s and 1940s (26)(30). The earliest rodent CR studies to examine longevity with modern approaches and methods were conducted in the 1980s. Pugh et al. in 1992 (27), Yu et al. in 1985 (28), and Weindruch and Walford in 1982 (29) all subjected rats to 40% CR diets compared to ad-lib baselines and found between 10 and 20% increases in longevity as compared to normative diets. Similarly, Weindruch’s 1986 study is a gold-standard mouse CR experiment. Here we see that even fairly aggressive ndCR produces extended average and maximum lifespans, both relative to ad-lib and relative to a normative diet, with a normative diet extending lifespan 20% relative to ad-lib, and 40% ndCR extending lifespan another 30% on top of that. (Context: think big fat guy keeling over from a heart attack at 50, vs everyone’s stereotypical tiny Chinese grandma, who spends her 100th birthday stubbornly refusing to reveal the location of her phylactery.)

Reported health benefits of CR in rodents include reduced cancer risk in p53-deficient mice (33), increased proteasome activity in mice and rats (34), improved cognition (32)(35), reduced oxidative stress and NF-kB signaling (36), and various other health benefits. Several studies, including Park et al. and Pires et al., have observed dramatic changes in insulin signaling and serum glucose levels (32)(37)(38), which has been of particular interest to gerontology researchers. IGF signaling has been proposed as one of the mechanisms by which CR improves health outcomes (15).

So far this is a pretty strong story. What’s the sketch factor of this evidence? Well… some rat and mouse strains responded to CR better than others, and methodology varied widely between rodent trials, making them difficult to compare (32). The only study performed on wild-caught mice, which aren’t buried under a mountain of genetic defects that would shame the Habsburgs, had a negative outcome. As a model for aging, mice are also slightly suspect because, unlike most mammals, they are globally telomerase-positive, which means that the effects of mutational accumulation on their soma don’t directly translate to other mammals, since oxidative damage has a large interplay with telomeric senescence.

Overall, however, we agree that mouse lifespans are significantly extended by CR. But…

As the organism becomes larger and more complex, the beneficial effects of CR on lifespan appear to taper off (40). 25% ndCR extends dog lifespan 25% (66). Why isn’t more aggressive CR investigated in dogs? Larger animals don’t tolerate aggressive caloric restriction well; 40% caloric restriction would likely kill most dogs. It seems that brains are probably to blame, because the metabolic rate of the brain is generally not significantly regulated (70). If you run out of energy for running your brain cells, you’re done for. The larger the proportion of your metabolism dedicated to maintaining brain function, the less CR you can tolerate.

So, given that trend, the most relevant information for humans should probably come from monkeys. (#freescopes)

Unfortunately, monkeys live a long time, so studies of monkey aging are measured in decades and cost millions and millions of dollars. The National Institute on Aging and the University of Wisconsin have taken on the task, subjecting ~40 rhesus monkeys each to 30% CR. In the WNPRC study, CR was relative to an ad-lib baseline; in the NIA study, CR was relative to a standardized diet designed to prevent obesity. (1)(3)

Rhesus monkeys have a maximum lifespan of ~40 years, and after 30 years the WNPRC study reported that only 13% of the CR group had died of age-related causes, whereas 37% of the control group had. (1) The authors write, “CR reduced the incidence of diabetes, cancer, cardiovascular disease, and brain atrophy.” (1) In 2017, 10 of the WNPRC CR group animals remained alive, compared to 3 animals in the control group. (31) In a monument to absurd timelines, the WNPRC study is not yet over, as many CR monkeys survive in 2019. No mean lifespans are established.

Here we hit our first snag. In complete contrast to the WNPRC study, the NIA rhesus monkey study used a standardized diet designed to prevent obesity for the control group. The NIA study found no increase in survivability among the CR group. (3) What gives?

If we were being charitable, we would say that monkey studies are ridiculously hard to power properly. With only 86 monkeys present in the NIA study, the negative observation could easily have been a product of chance that was inadequately represented by the reported p-values, and we should defer to priors based on other mammals. Anecdotally, and though the NIA paper would never admit to this, the NIH sometimes staffs monkey studies of this kind with “leftover” monkeys from clinical trials that may have mysterious medical conditions that are not obvious to the naked eye but were caused by drug trials, experimental surgical procedures, etc.
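A rough power calculation backs up the charitable reading. The sketch below uses a normal approximation for comparing two proportions, with ~43 monkeys per arm and mortality splits borrowed from the WNPRC figures quoted earlier; it is an illustration, not a reanalysis of the actual survival data:

    # Approximate power of a two-proportion test (alpha = 0.05, two-sided).
    from statistics import NormalDist

    def power(p1, p2, n, alpha=0.05):
        z = NormalDist().inv_cdf(1 - alpha / 2)
        pbar = (p1 + p2) / 2
        se0 = (2 * pbar * (1 - pbar) / n) ** 0.5              # SE under the null
        se1 = (p1 * (1 - p1) / n + p2 * (1 - p2) / n) ** 0.5  # SE under the alternative
        return NormalDist().cdf((abs(p1 - p2) - z * se0) / se1)

    print(f"37% vs 13% mortality, n=43/arm: power = {power(0.37, 0.13, 43):.0%}")  # ~74%
    print(f"37% vs 25% mortality, n=43/arm: power = {power(0.37, 0.25, 43):.0%}")  # ~22%

Even an effect as large as the WNPRC one would be missed about a quarter of the time; a more modest true effect would be missed in the large majority of 86-monkey studies.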

If we wanted to be less charitable, we would look at the fact that the WNPRC study pulled shit like this:

…and point out that aging is almost by definition a process that impacts the survival rate relative to any injury, so trying to disentangle “age related” mortality from regular mortality is bad and wrong.

The most significant difference between these studies is the use of adCR (Wisconsin) vs ndCR (NIA). It’s no surprise that monkeys on a controlled diet are healthier and live longer than monkeys fed an ad-libitum diet. In mice, this is a commonly known problem for control populations, and monkeys are no different (72). The balance of evidence probably tilts against the idea that CR is effective in monkeys; however, our priors that ndCR should be effective are fairly strong, so for now let’s assume that it is.

Adrian concedes that a 2017 statistical analysis of both studies by the University of Alabama at Birmingham in cooperation with the authors of the original studies determined that CR decreased mortality in rhesus monkeys (31), but the study is presented under protest because combining ndCR and adCR in the same analysis is inadvisable, disingenuous, probably illegal, and was the direct cause of the sinking of the Lusitania.

The gold standard of CR studies in animals with respect to human health would be studies that occur in higher animals (mice, dogs, monkeys) and perform CR relative to a normative diet. On the balance, evidence that ndCR extends lifespan in higher animals is fairly strong. The monkey problem, however, is pretty bad, and we remain skeptical of the strength of CR in humans.

Despite these objections, we agree that CR promotes longevity and reduces all-cause mortality of animal model organisms and that this finding supports the view that CR probably increases the lifespans of humans in an ideal scenario.

So… what’s up with that? Do the above findings mean that we need to radically reinterpret what being overweight or obese means? Are we just WILDLY overestimating the “healthy” weight for all these animals?

In a word, no. Animals are clearly not adapted to undergoing caloric restriction this severe. How do we know that? Easy. The animals in most of the experiments above, under fasting conditions, are rendered sterile. From an evolutionary standpoint, it’s safe to assume that being sterile is not an adaptive trait, so clearly most animals have not evolved to operate at levels of caloric restriction this severe on a routine basis. That’s one of the simpler distinctions between a “healthy weight”, obesity, and a starvation regime. Still, animals clearly live longer when they are starving, so why?

Proposed Mechanism by Which Food Makes You Old (and how calorie restriction stops it)

First an interlude: aging is one of the last frontiers in biology where major theories still compete on an even footing to explain a basic and universal process. The simplest definition of aging is the observation that past a certain point in an animal’s life, the likelihood that it will die doubles every period X, where X is different for different species. This observation is stunningly universal. It’s also important to note that the likelihood of death from almost any type of injury increases over time, so aging is not just the idea that cancer is more frequent when you’re 50 than when you’re 20, but the idea that almost all diseases are more frequent in older individuals, and dying from almost any injury is more likely when you are old than young (78, Arkin).
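This doubling pattern is the Gompertz mortality law. A minimal sketch, using a mortality-rate doubling time of ~8 years (a commonly cited ballpark for humans) and an invented baseline hazard:

    # Gompertz law: hazard(t) = hazard(t0) * 2 ** ((t - t0) / X)
    def hazard(age, h30=0.001, doubling_years=8):
        # h30 (annual death probability at age 30) is illustrative, not actuarial
        return h30 * 2 ** ((age - 30) / doubling_years)

    for age in (30, 50, 70, 90):
        print(f"age {age}: annual P(death) ~ {hazard(age):.4f}")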

Fully covering the slapfight over the specific mechanisms involved is beyond the scope of this paper, especially because there are no conclusive answers to any of your questions, but briefly, these are the major aspects of biology that change with age, drawing directly from “The Hallmarks of Aging” by Lopez-Otin et al. (44):

1) genomic instability, such as replication errors, mutations, DS breaks, and crosslinking;
2) telomere attrition, including damage to the telomeres that does not result in telomere shortening;
3) epigenetic alterations, including changes in methylation patterns, histone modifications, and chromatin remodeling. This generally also leads to deregulated/erroneous gene expression;
4) loss of proteostasis, which is characterized by protein denaturation or unfolding and the accumulation of waste products your body cannot break down;
5) deregulated nutrient sensing, causing both the cell and the body to become less responsive to nutrients;
6) mitochondrial dysfunction, which is thought to be particularly central to the relationship between CR, obesity, and aging because mitochondria process glucose and create reactive oxidative species (ROS);
7) senescence or quiescence, the cessation of the cell cycle; and
8) stem cell exhaustion, which slows or halts renewal of virtually all tissues and cell types.

These are specific, measurable instances of the general breakdown we associate with aging chosen for their correlation with chronological and apparent age. None of them has been decisively established as the actual cause of aging as we understand it, though all are understood to contribute to aging at the cellular and tissue level, which is the general breakdown and cessation of cell functionality (44).

Gerontologists love to whip these hallmarks out, but in reality most of them are interrelated in some way, so establishing which of them are merely byproducts of the others is extremely difficult. For example, mitochondrial dysfunction increases the rate at which oxygen radicals are produced, leading to greater genome instability and telomere attrition. Stem cell exhaustion probably arises from genomic instability or nutrient-sensing deregulation, but it can in turn lead to senescence in tissues as replacement of dying cells slows down. It’s all an ouroborosian mess. (44)

Almost all of the hallmarks have been shown to be impacted by food intake. Obesity has been decisively implicated in causing genomic instability in model animals, but evidence is lacking in humans (45). Crucially, the CR-ROS hypothesis, which holds that excessive eating causes both oxidative damage (from the metabolism of glucose) and obesity, thereby linking obesity and ROS damage together, remains poorly supported in humans (45)(46). Obese adult individuals suffer from greater telomere attrition than non-obese individuals and have shorter telomeres (47). Obese individuals have profoundly different epigenetics than non-obese individuals, which is not surprising, since insulin expression profiles and fat metabolism are well established as being changed by, and changing, epigenetics (46)(48)(49). In fact, epigenetic changes in insulin expression due to a high-fat diet may be heritable (48). Obese individuals, and particularly diabetes patients, have markedly different DNA methylation patterns from non-obese persons and modified chromatin structure (49). Altered chromatin density due to obesity has been observed in rodents and implicated in the onset of dementia, which is commonly associated with old age (50)(51). Numerous proteasome dysfunctions and significant protein misfolding have been observed in obese rodent and human subjects (46). In addition to creating insulin resistance, human adipose tissue creates a pro-inflammatory environment and significantly alters cell response to NF-kB and inflammatory cytokines (52). Obesity has been specifically implicated in ROS damage to mitochondria, mitochondrial dysfunction, reduced mitochondrial fission, and further elevated ROS production by the Krebs cycle (53). Furthermore, adipose tissue and obesity apparently promote senescence (54). Obese mice exhibit dramatically increased rates of T-cell senescence (55). Finally, obesity results in far greater stem cell exhaustion and quiescence (56). Adipose tissue stem cells, hematopoietic stem cells (HSCs), bone marrow, and other stem cell reservoirs have all demonstrated higher rates of quiescence in obese populations of rodents and humans (46).

So, we have covered all eight biomolecular cell and tissue-level hallmarks. Obesity does, in fact, make us older. And we are pretty sure CR makes most animals younger. That is an excellent clue as to how the two are related, and which of the mechanisms might be the most important.

Your body usually converts sugar into mechanical or chemical energy by using a complicated daisychain of a couple dozen proteins called the Electron Transport Chain (ETC). The ETC is so called because each protein in it contains an electrically charged amino acid that is highly reactive due to holding on to loose electrons that came from sugar. The ETC takes electrons obtained from sugar, and attempts to stick them onto molecules of oxygen, converting it into water. Unfortunately, this process occasionally fucks up, and instead of getting nice benign water, the oxygen becomes radicalized, gets an Al-Qaeda franchise, and becomes either Hydrogen Peroxide or the Superoxide anion. Both of these are comically reactive molecules, and if they happen to diffuse out of the mitochondrion, they can react with pretty much anything and cause damage. If they react with your DNA, you’re in trouble, because oxidative damage to your DNA produces mutations. Similarly, the oxidation of fats results in chemicals that cause inflammation, and the oxidation of proteins can result in byproducts that your body can’t break down.

From here comes the Rate of Living Theory: an animal can consume only a specific and finite number of calories in its lifetime, because after that point the damage induced by said calories becomes fatal.

Essentially, for a given cellular architecture, so many calories in results in so many molecules of peroxide and superoxide out, and so many mutation events in DNA, oxidized proteins, and rancid fats. Inside you there are populations of cells that can tolerate only a finite number of oxidation events, namely heart muscle fibers, neurons, and long-term stem cells like marrow and fibroblast stem cells. For these tissues, the DNA you have is the DNA you get, and you can only expose it to so much chemical damage before it stops functioning. Similarly, the accumulation of other oxidative byproducts would eventually be fatal.

While there is some slight debate about this, the balance of evidence is that CR slows down the metabolism of most tissues relative to their quantity (74)(75), so it should allow you to stretch your “mutation budget” over a longer timeframe. This theory predicts that you can extend your remaining life by roughly X% by cutting X% of calories out of your daily food intake. Napkin math tells us that this basically tracks with the results we saw above, and the limitation on this is obviously starving to death.
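As a minimal sketch of this prediction, assuming a fixed lifetime damage budget that is spent in proportion to intake (the 80-year baseline is a placeholder, not a measured figure):

```python
# Rate of Living sketch: a fixed damage budget spread over a slower burn rate.
def predicted_lifespan(baseline_years, cr_fraction):
    # Cutting intake by cr_fraction slows damage accrual by the same factor,
    # so the same budget lasts baseline / (1 - cr_fraction) years.
    return baseline_years / (1 - cr_fraction)

print(predicted_lifespan(80, 0.20))  # -> 100.0: a 20% cut buys ~25% more years,
# which for modest restrictions roughly tracks the "X% for X%" rule of thumb
```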

Hopefully at this point we have exhaustively convinced you that Calorie Restriction definitely works in small animals, probably due to inhibiting oxidant production, and it might work in humans, though the theoretical backing for this is dicier.

So we return to our claims. Should you eat an ndCR diet?

First we have to take care of Chesterton’s Fence.

Population Aging Studies and Human Trials

Why is it commonly recommended to consume ~2000 calories per day? And where does this number come from anyway?

Dietary guidelines approach the question of how much food you can eat epidemiologically, by asking what mortality rates are associated with various levels of food intake.

A meta-analysis of large-scale population studies concludes that a BMI of roughly 24, considered “normal” weight, is associated with the minimum long-term all-cause mortality (4)(5), and that once BMI exceeds 30, all-cause mortality begins to increase sharply (4)(5). A CDC estimate for the 2015-2016 period found that 39.8% of adults were obese, and based on a huge body of evidence it is not unreasonable to assume that the average American BMI is roughly 30. The interesting aspect of these studies is that BMIs below 25 are likewise associated with increased mortality (4)(5). Direct observational data on western populations is nearly unanimous in showing that BMIs below 25 are correlated with bad health outcomes.
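For reference, BMI is just weight over height squared; a minimal helper (the example figures are hypothetical):

```python
# Standard BMI definition: kg / m^2. Example numbers are hypothetical.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

print(round(bmi(90, 1.78), 1))  # -> 28.4: between the mortality minimum (~24)
# and the threshold (30) above which all-cause mortality sharply increases
```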

So here we meet the fundamental paradox: Why is it that we can observe lab animals living longer in the face of CR, but when we observe humans, we generally find an association between mortality and low body weight?

Epidemiologic studies on low BMIs in western populations become a little hazier. There is fairly wide agreement that BMIs below 25 correlate with lower survival, but not usually a clear claim as to why. BMIs below 25 generally appear to correlate with smoking. Excluding the effects of smoking, people with BMIs below 25 still appear to have increased all-cause mortality. This could be from correlations to weight loss from cancer and metabolic diseases, tuberculosis, or something similar. While the effects of obesity on risk of mortality are very clear, the effects of being underweight and starvation on health are obviously the source of our core paradox (4).

Even studies that attempt to exclude acute illnesses that commonly induce weight loss seem to find higher rates of death among men who are underweight, across a wide range of causes. Most troubling is the fact that an under-consumption of calories has a documented effect of suppressing wound healing and immune response, and worsening the progression of infectious diseases (67-69). From this standpoint, it’s altogether unclear whether the low weights observed in these studies are induced by the diseases in question, or whether diseases are simply more prevalent in people with BMIs below 25.

Calvin’s hypothesis, though unsupported by direct evidence, is that almost no one is actually on a CR diet. A CR diet is reduced calorie intake without malnutrition. Some people with BMIs below 25 may have an underlying health issue, such as HIV, which is causing their BMI to drop or their mortality to rise, but he speculates that most people with a BMI below 25 are suffering from some degree of malnutrition, which would explain the increased mortality observed below a BMI of 24-25. Sarcopenia in the elderly is another possibility, though age-adjusted studies appear to refute this hypothesis (4).

We can attempt to look at other cultures to see if lower caloric consumption has similar effects there.

Gerontologists and anthropologists have observed that the longest-lived national and ethnic groupings of humans tend to eat the least. Japan has long had the highest life expectancy of any developed country (20). Japanese life expectancy at birth stood at 87.17 years in 2016, as compared to 81.40 years for the United States and 84.43 for the average of 18 high-income countries (20). The Japanese are estimated to eat 23% less than Americans (21).

However, this finding is purely correlational. Although reverse causation – that people eat less because they live longer – can be ruled out, the correlation could be coincidental, pleiotropic, or genetic rather than dietary. The FOXO3A gene, rather than CR, has been proposed as the reason for variations in longevity between ethnic populations (17). Several other genes, like APOE and CETP, have been suggested as alternative genetic causes of these ethnic longevity differences (18). Gerontologists have aggressively suggested that smaller humans tend to live longer (4)(13)(14), that members of longer-lived ethnic or national groups, such as the Japanese, tend to be physically smaller, and that centenarians within these groups tend to be smaller than those who live shorter lives (13). One proposed reason that smaller individuals and ethnic populations tend to live longer is lower levels of GH and IGF-1 due to genetic factors (15). As has been discussed, IGF-1 is associated with increased mortality from various illnesses and also makes the individual physically larger. CR advocates have argued instead that social, environmental, and economic factors impose CR on these population groups, causing the drop in GH and IGF-1 and increasing longevity. A Washington University study comparing humans on a long-term diet of 1800 kcal per day against a comparison group ingesting 2500 kcal per day found no decrease in serum IGF levels from baseline unless protein intake was also restricted (16). This result is at odds with rodent studies, which showed decreases in IGF-1 concentrations in CR subjects versus those on normative diets (16). Other results contradict these findings: a Tufts 30% CR trial among mildly overweight (BMI 25-29.9) young adults showed significant decreases in serum insulin concentrations, at odds with the Washington University results (25). Many, many other human trials have produced contradictory results, with the severity of the CR, the weight and age of the patients, and compliance with CR variously proposed as explanations (21).

There is also compelling evidence in favor of CR and dietary explanations as the cause of longevity in particular populations. For example, the ethnically distinct Okinawans were for generations the Japanese ethnic group with the greatest longevity; the island has 4-5 times the centenarians per capita of any industrialized country (21). Little racial admixture has occurred on Okinawa, but as the Okinawan diet has westernized, the Okinawans have lost their longevity advantage, with Okinawan longevity dropping below the rest of Japan in 2005 (19). Older Okinawans, who continue to eat CR and protein-restricted traditional diets, have greater longevity than Japanese populations of equal age: for Okinawans aged 60-64, all-cause mortality was half that of Japanese persons of equal age (22). Evidence from natural “experiments” during mandatory rationing and food shortages also supports the CR hypothesis, as involuntary food rationing during the World Wars paradoxically increased lifespan. WWI-era rationing in Denmark resulted in a 34% drop in mortality over a two-year period (21)(23), and similar rationing in Oslo during WWII, thought to be equivalent to 20% CR, resulted in a 30% drop in mortality (24).

The longest-lived humans tend to eat the least. Though causation – whether CR causes decreased IGF levels, or whether independently decreased IGF levels cause people to eat less – cannot be definitively established, the conclusion, supported by rodent studies and defensible from surveys of humans, that CR causes a drop in IGF levels remains compelling.

So, should you, mean American Reader with 1.5 X chromosomes, adopt a CR diet? WHAT COULD GO WRONG?

Acute Risk Factors of Caloric Restriction

A good first place to look is the Minnesota Starvation Experiment. Conducted in 1944-45 on 36 volunteer males (32 of whom completed the full protocol), the object of the experiment was to subject humans to a semi-starvation diet simulating 6 months of famine – calibrated to reduce body weight by 25% – and observe the effects. Diet was strictly controlled by housing subjects in a special dormitory and supervising them during their time outside of it. Despite the simulated famine conditions, diets were formulated to ensure that subjects received daily minimum intakes of important vitamins and minerals.

An exhaustive description of the effects can be found in The Biology of Human Starvation by Keys, but let’s go through the highlights (70; not available online, but try to find it at a library – equal parts fascinating and grim, but occasionally also very funny).

First, the physical: subjects of the experiment underwent a substantial drop in bodyweight (duh) over the first few weeks, after which their weight stabilized at 75% of their original bodyweight, as the experiment was designed to achieve. In terms of physical strength and work capacity, subjects experienced loss of motor coordination, loss of strength, and a devastating negative impact on their endurance. Counts of all blood cell types were down across the board, both per unit volume of blood and in absolute terms. Subjects had a lowered basal metabolic rate and an observed drop in surface, but not core, body temperature. Severe caloric restriction (15%+) also leads to a sharp fall in fertility for both men and women, and women undergoing caloric restriction may stop menstruating (not observed in the Minnesota study, which was all-male). Finally, a starvation state can lead to bradycardia, starvation edema, fainting, and looking like a cartoon skeleton (70)(77).

Psychologically, the first and most significant observed effects were a severe preoccupation with food and a permanent feeling of being cold. The latter was both a true sensory perception (skin temperatures of subjects were lower than normal) and a subjective sensation. Additionally, subjects experienced a loss of sexual interest: they were capable of achieving erections physiologically, but not psychologically. Subjects also lost motivation to engage in self-improvement or social activities. By month 6 of semi-starvation, more than half of all subjects were routinely failing to complete basic maintenance tasks such as cleaning, and most had dropped out of the university classes they were initially attending. After subjects were released from the experiment, several developed eating disorders; fasting in general can induce eating disorders – both binge eating and anorexia – where none were otherwise present.

Advocates of intermittent fasting, a diet that is much easier to maintain, often say that you eventually adapt to the feeling of acute, distracting hunger that strikes you on a short (e.g. 1- or 2-day) fast (Original research). No such adaptation was observed in the subjects of the Minnesota experiment; if anything, their morale deteriorated steadily throughout all 6 months. In principle, we need not assume that you cannot psychologically adapt to permanent starvation conditions; however, mice that undergo lifetime caloric restriction do show a permanently depressed level of motility, i.e. they just hate moving around.

Evidence from other sources also indicates that lowered caloric intake leads to worsened progression of infectious diseases and slowed wound healing. This point may initially appear controversial, because some studies indicate a faster and greater response to mitogens, but studies using live pathogens settle it fairly conclusively. Crucially, this aspect of starvation biology is one that would not be revealed in CR mouse studies. Lab mice are generally kept in aluminum shoeboxes in a strictly sterile environment for most of their lives. They have little opportunity to suffer injuries or heal from the same, and rarely experience infections; if they do, they are quite likely to die and be excluded from analysis. Unless you happen to live in a sterile aluminum shoebox as well, consider this as you interpret CR studies (67-69).

How does this square with the smaller diets in East Asia? Okinawans reportedly ate a diet at a similar level of caloric restriction (20% relative to a western ND) to the Minnesota experiment, yet obviously the entire island wasn’t rendered sterile or languid. Note, however, that this difference is reported only in absolute terms, not as calories per unit of body weight. Factor 1 is probably stature. Long-term caloric shortages have been shown in both mice and humans to lead to shorter stature and smaller size. If you were reared from youth on a relatively smaller diet, your long-term caloric requirements would probably be somewhat lower. This is borne out in the average heights and weights of Okinawans of this time period, often reported to be less than 5 feet tall. It also squares with studies on longevity between different mouse strains, which routinely report that smaller overall stature (or length) correlates positively with longevity.

Factor 2 may be local weather conditions (Okinawa is sub-tropical), since temperature has an impact on the ability to tolerate a restricted diet long-term. One notable result from investigations of Okinawans is the higher thermogenesis and lower oxygen consumption of Japanese and Okinawan mitochondrial haplogroups, suggesting both lower generation of ROS and, likely, an ability to tolerate lower caloric consumption while maintaining adequate body temperature (76). The BMR of Japanese people and Okinawans can be considered lower or higher depending on whether it is measured through thermogenesis or oxygen consumption, which measure subtly different things.

This discrepancy suggests to us that even in the event that you are able to maintain a traditional Okinawan diet, if you were reared in America, it’s quite possible that you would have the ol’ Minnesota Boner Downer experience attempting to do so.

The outcomes of a lesser caloric restriction would be easier for the average westerner to tolerate, but would also be less effective. Whether there exists a sweet spot in which you are calorically restricted but don’t hate life and can still lift a broom is probably a subjective judgement.

Overall Conclusions

None of the evidence in favor of CR is indisputable. CR-ROS, the hypothesis that calorie restriction reduces oxidative radicals, remains compelling, but direct evidence in humans is lacking. Numerous model animal studies have shown a link between CR, reduced mortality, and life extension. Population studies support the CR hypothesis, but the effects of CR cannot be easily disentangled from genetic, social, environmental, or non-CR dietary factors. CR experiments in humans and rhesus monkeys produced contradictory results, and in some cases the tradeoffs between early and late mortality are a judgment call. Progerias in the obese and biomolecular evidence of cellular and tissue-level anti-aging effects of CR remain the strongest evidence for CR’s potential to extend human life.

We, the authors, conclude that the evidence as it stands weakly supports the claim that CR modestly extends human life. We expect that an individual engaging in 20-30% CR versus a normative, non-obesogenic diet without malnutrition might enjoy a 10%-20% increase in longevity. A 10%-15% CR relative to a normative diet may increase lifespan by perhaps 5-10%.
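To put our own estimate in perspective, here is the napkin arithmetic (the 80-year baseline is a placeholder, not a measured figure):

```python
# Napkin math on the estimate above, assuming a placeholder 80-year baseline.
baseline_years = 80
for gain in (0.10, 0.20):
    extra = baseline_years * gain
    print(f"{gain:.0%} longevity gain -> {extra:.0f} extra years")
# -> 10% gain = 8 extra years; 20% gain = 16 extra years of (hungry) life
```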

As with all good science, this conclusion raises still more questions. 20-30% CR might result in a 10% increase in longevity, but is that worth it? Calvin, one of the authors, is a practicing intermittent faster and can testify that CR and IF are unpleasant, difficult, and sometimes painful.

Scientific investigation adds another layer to this subjective answer: starvation conditions are likely to expose you to infections of greater severity, potential sterility, negative impacts on your physical abilities, and subjective but significant impacts on your psychological state, including motivation, attention, and libido.

The field of gerontology and the general study of aging continues to lurch forward – not at the pace we want it to, necessarily, but it’s still developing anyway. New drugs and treatments, including stem cell activators like GDF11, senolytic drugs, and anti-inflammatory interventions may be able to make many of the benefits of CR redundant in the relatively near future (we hope).

While CR would probably extend your life, we, the authors, don’t advocate it. The risks and miseries aren’t trivial and you probably have to go to work in order to exchange money for goods and services.

Claim 2: You, the reader, should adopt a Normative Diet
True (and basically the same as an adCR diet)
Claim 3: You, the reader, should adopt an ndCR diet
Debatable, but no.

If you are interested in the best current options for life extension, you should consider a long-term aspirin regimen, maintaining a healthy body weight, and building a nuclear bunker in your backyard.

All the best,
–Calvin Reese, Adrian Liberman

PS(A): PLEASE NOTE: If you are over 75 years old, do not attempt Calorie Restriction. If your grandma is over 75 years old, go to her house and pour soup into her until she is overweight. This is not a joke and is entirely serious advice. Among the very elderly, being overweight serves as a protective factor that mitigates the dangers of death due to traumatic injury. The dangers of heart attacks and diabetes associated with excess weight are less than the dangers associated with sarcopenia and cachexia. Elderly people usually experience a loss of strength in esophageal muscles and for them swallowing food becomes more difficult, leading to a vicious cycle of muscle weakness and weight loss. If you have an elderly relative, please make sure they’re eating enough.

PPS: Studies excluded from this review: CALERIE: conformity to study protocol was terrible, duration too short, and they took overweight people and got them to baseline. Total tripe on a bike. CRONies: lmfao. Be skeptical of studies claiming to observe DR in a human population. Talking a large group of people into actually following a strict DR regimen long-term is borderline impossible because it fucking sucks. Sample was also self-selected.

Bibliography

1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2812811/ – Wisconsin Monkeys
2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC345016/ -84 adiposity
3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3832985/ – NIA Monkeys
4. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4115619/ – Mortality data
5. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2662372/ – BMI meta-analysis
6. https://qz.com/255210/turns-out-the-way-americans-measure-healthy-weight-is-totally-wrong/ – media report on American obesity (for estimating average BMI)
7. https://www.nbcnews.com/healthmain/real-shape-american-man-dudes-youre-porky-8C11394082 – media report on American obesity (for estimating average BMI)
8. https://www.theatlantic.com/health/archive/2013/10/this-is-the-average-mans-body/280194/ – media report on American obesity (for estimating average BMI)
9. https://www.cdc.gov/nchs/data/hestat/obesity_adult_13_14/obesity_adult_13_14.pdf – NIH obesity 2013-2014
10. https://www.cdc.gov/nchs/data/databriefs/db288.pdf – CDC obesity 2015-2016
11. Dixon, John B. “The effect of obesity on health outcomes.” Molecular and cellular endocrinology 316, no. 2 (2010): 104-108. https://doi.org/10.1016/j.mce.2009.07.008
12. https://www.ahajournals.org/doi/full/10.1161/ATVBAHA.111.241927 – biomolecular consequences of obesity
13. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3354762/ – various characteristics of centenarians
14. https://www.ncbi.nlm.nih.gov/pubmed/12208237/
15. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3893695/ – effects of IGF and GH on body size, longevity, and mortality
16. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2673798/ – effects of CR and protein restriction on IGF serum levels
17. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4707876/#MXS013C74 – FOXO3A variation and longevity
18. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4296168/ – APOE variation and longevity, supports FOXO3, refutes others
19. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3362219/ – Okinawan longevity
20. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6092679/ – global life expectancy trends by country
21. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5315691/ – lots of stuff, general review of CR
22. https://www.ncbi.nlm.nih.gov/pubmed/16810568 – calorie restriction in Okinawa
23. https://jamanetwork.com/journals/jama/article-abstract/223580 – involuntary rationing in Denmark during WWI
24. https://www.ncbi.nlm.nih.gov/pubmed/14795790 – involuntary rationing in Norway during WWII
25. https://www.ncbi.nlm.nih.gov/pubmed/17413101 – Tufts CR study that showed decreased insulin levels in overweight subjects.
26. https://academic.oup.com/jn/article-abstract/21/1/45/4725572 – 1941 rodent CR study
27. https://www.ncbi.nlm.nih.gov/pubmed/10197641/ – Pugh et al. 40% rodent CR study
28. https://www.ncbi.nlm.nih.gov/pubmed/4056321/ – Yu et. al 40% rodent CR study
29. https://www.ncbi.nlm.nih.gov/pubmed/7063854/ – Weindruch and Walford rodent CR study
30. https://www.ncbi.nlm.nih.gov/pubmed/2520283 – 1935 McCay rodent CR study
31. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5247583/ – 2017 combined Rhesus monkey study
32. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5565679/ – review of rodent CR studies
33. https://www.ncbi.nlm.nih.gov/pubmed/12016155/ – reduced cancer risk in p53 deficient mice
34. https://www.ncbi.nlm.nih.gov/pubmed/17460208/ – increase proteasome activity in CR mice and rats
35. https://www.ncbi.nlm.nih.gov/pubmed/18002475/ – improved cognition in CR mice and rats
36. https://www.ncbi.nlm.nih.gov/pubmed/19199090/ – anti-inflammatory effects of CR in rodents
37. https://www.ncbi.nlm.nih.gov/pubmed/16920310/ – improved glucose tolerance and lower insulin levels in CR rodents
38. https://www.ncbi.nlm.nih.gov/pubmed/24844367/ – drop in serum insulin in CR rodents
39. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3696189/ – review on budding yeast longevity
40. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3607354/ – review on longevity in general
41. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1857724/ – Drosophila CR review
42. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3181168/ – Progeria rapamycin
43. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6509231/ – “Obesity May Accelerate the Aging Process”
44. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3836174/ – The Hallmarks of Aging
45. https://www.ncbi.nlm.nih.gov/pubmed/30115431/ – genome damage by obesity
46. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6509231/ – obesity/aging review
47. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2805851/ – telomere attrition in the obese
48. https://www.ncbi.nlm.nih.gov/pubmed/21088573/ – nutritional epigenetics
49. https://www.ncbi.nlm.nih.gov/pubmed/24779963/ – methylation patterns and diabetes
50. https://www.ncbi.nlm.nih.gov/pubmed/24154559/ – obese TD2 rat chromatin density
51. https://www.ncbi.nlm.nih.gov/pubmed/25380530/ – more obese rodent chromatin density
52. https://www.ncbi.nlm.nih.gov/pubmed/27503945/ – pro-inflammatory environment of adipose tissue
53. https://www.ncbi.nlm.nih.gov/pubmed/29155300/ – mitochondria ROS/dysfunction in the obese
54. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2941545/ – obesity and senescence review
55. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5127667/ – obesity and senescence in mice
56. https://www.ncbi.nlm.nih.gov/pubmed/22772162/ – stem cell quiescence in adipose tissue
57. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5599616/ – methylation drift in rodents, monkeys
58. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2190719/ – CR’s effect on genomic stability
59. https://www.ncbi.nlm.nih.gov/pubmed/17665967/ – CR promotes autophagy of misfolded proteins in rats
60. https://www.ncbi.nlm.nih.gov/pubmed/30395873/ – CR and cell senescence review
61. https://www.ncbi.nlm.nih.gov/pubmed/25481406/ – CR reduction of stem cell exhaustion
62. https://www.sciencedirect.com/science/article/pii/S1934590912001671 – CR and skeletal muscles, including transplant
63. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4889297/ – SIRT6 and NF-kB
64. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4570809/ – SIRT-1 signaling and CR in rats
65. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3331748/ – AMPK causes insulin sensitivity in CR mice
66. Kealy, Richard D., Dennis F. Lawler, Joan M. Ballam, Sandra L. Mantz, Darryl N. Biery, Elizabeth H. Greeley, George Lust, Mariangela Segre, Gail K. Smith, and Howard D. Stowe. “Effects of diet restriction on life span and age-related changes in dogs.” Journal of the American Veterinary Medical Association 220, no. 9 (2002): 1315-1320. https://admin.avma.org/News/Journals/Collections/Documents/javma_220_9_1315.pdf?7fh285_auid=1555113600043_jueqhhd825meind2et (Dog CR)
67. https://academic.oup.com/biomedgerontology/article/60/6/688/590315 (live infection in CR mice)
68. https://link.springer.com/article/10.1007/s11357-008-9056-1 (live infection in CR mice)
69. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3528375/ (wound healing in CR mice)
70. ISBN 978-0816672349, Ancel Keys, Biology of Human Starvation (Minnesota starvation)
71. https://www.sciencedirect.com/science/article/pii/S1043276009000915 (SIRT mutants vs CR)
72. https://www.pnas.org/content/pnas/107/14/6127.full.pdf (Mattson control mice)
73. The Retardation of Aging in Mice by Dietary Restriction: Longevity, Cancer, Immunity and Lifetime Energy Intake1 – Weindruch mouse study, 86
74. https://www.sciencedirect.com/science/article/pii/S0047637407001005 CR metabolic rate 1
75. https://www.sciencedirect.com/science/article/pii/S0047637499000949 CR metabolic rate 2
76. https://jphysiolanthropol.biomedcentral.com/articles/10.1186/1880-6805-31-22 Okinawa thermodynamics
77. https://www.cambridge.org/core/services/aop-cambridge-core/content/view/96E9E7436516D443974E1C6C8859299D/S0029665194000170a.pdf/the-right-weight-body-fat-menarche-and-fertility.pdf Starvation/Fertility women
78. Biology of Aging: Observations and Principles, Arkin, general description of aging theory (doubling mortality rate)

[ACC] Is Eating Meat A Net Harm?

[This is an entry to the 2019 Adversarial Collaboration Contest by David G and Froolow. Please also note my correction to yesterday’s entry.]


Introduction

Many people around the world have strong convictions about eating animals. These are often based on vague intuitions, which leads to unproductive swapping of opinions between vegetarians and meat eaters. The goal of this collaboration is to investigate all the relevant considerations from a shared frame of reference.

To help ground this discussion we have produced a decision aid making explicit everything discussed below. You can download it here and we encourage you to play around with it.

The central question is whether factory farmed animal lives are worth living; the realistic alternative to meat eating is not a better life for these animals, but their not existing in the first place.

We begin by investigating which animals are conscious. Then, we compare the happiness literature to the conditions under which animals are factory farmed to figure out if from their perspective non-existence is preferable. And finally, we survey the more easily measurable impacts of meat eating on environment, finance, and health.

1. Consciousness

1.1. What is consciousness?

This essay isn’t about a general theory of consciousness. We tried to research this and our main takeaway is simply that consciousness is really, really weird. When we say that something is ‘conscious’ we mean simply that it’s ‘like something to be that thing,’ and that if we were that thing we’d care if ‘good’ or ‘bad’ things happened to us.

Of particular relevance is the conscious experience of pain and suffering, which we regard as morally undesirable when it occurs in ourselves or others. Most animals have damage sensors, but triggering these may not result in the subjective experience of ‘suffering’ if the animal is not conscious, or if the stimulus is a constant presence the animal has become accustomed to.

And this is the absolute crux of this investigation: if animals suffer under current farming standards to the point of preferring non-existence, then there is a moral burden on meat eaters to justify eating them.

To resolve this, we will look in different animals for two kinds of evidence of consciousness:

1. A brain architecture similar to humans resulting from the same evolutionary process
2. Behaviors that are hard to explain except with reference to having experiences

1.2. What parts of the brain are ‘responsible’ for consciousness?

You might assume consciousness is just caused by ‘the brain,’ because without it how could we think (or do) anything? However, a huge part of intelligent information processing happens in our brain without giving rise to conscious experience.

Try noticing what happens when you read the sentence “The dustmen said they would refuse to collect the refuse without a raise.” Notice how the word ‘the’ appeared twice? And how you read it both times despite Scott’s best efforts at conditioning you otherwise? Very good. Now notice also that the word “refuse” appeared twice, but the first time you not only interpreted it but heard it in your mind as a verb with the stress on the second syllable, and the second time as a noun. Before the words appeared in your conscious mind – as visuals on a screen, as sounds in your head, as a feeling of understanding the sentence – your brain had already done all the hard parts of figuring out what they mean subconsciously, without you experiencing anything.

If you severed the spinal cord – the 1 billion nerve cells in the backbone connecting your brain to your limbs – you would completely lose sensation below the neck, but you would continue to have thoughts and rich experiences. Or consider the cerebellum, the 69 billion neurons (80%+ of the brain’s total) responsible for motor control. If you cut off parts of it, you may lose the mechanical ability to execute certain motions like playing a piano or walking, but you will remain completely conscious, with a sense of self, a memory, and the ability to plan for the future.

What do the spinal cord and the cerebellum have in common? Their neural circuits operate in parallel, with hundreds of independent input/output logic gates that all fire one way, instead of forming an interconnected multi-way circuit. There is no capacity for reflection, just predefined decision making. A necessary condition for consciousness (in people) is neurons ‘talking’ to each other, rather than just passing information, with deterministic modification, up the chain of command.
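A toy illustration of the distinction (nothing here models real neurons; it simply contrasts a one-way pipeline with a circuit that feeds back on itself):

```python
# One-way chain: each 'gate' deterministically transforms the signal and
# passes it up the chain; no stage ever revisits an earlier one.
def reflex_arc(signal):
    for gate in (lambda s: s * 2, lambda s: s + 1):
        signal = gate(signal)
    return signal

# Multi-way circuit: the current state depends on its own past states, so
# units effectively 'talk back' to each other rather than just passing along.
def recurrent_circuit(signal, steps=5):
    state = 0.0
    for _ in range(steps):
        state = 0.5 * state + signal
    return state

print(reflex_arc(1), round(recurrent_circuit(1), 2))  # -> 3 1.94
```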

Where consciousness appears to be generated is the posterior cerebral cortex, the outer surface of the brain composed of two highly folded sheets. Stimulate it electromagnetically and it’s like any other acid trip. The folded physical structure seems extremely important for generating consciousness because it maximizes the surface area exposure of neurons to each other. Many parts of your brain can be removed without major changes to your personality or intelligence, but if even small parts of the posterior cortex are missing, surgery patients lose entire classes of conscious content: awareness of motion, space, sounds, etc.

It’s important to recognize that consciousness is not simply ‘caused by anything that happens in your brain.’ It’s a specific, fragile thing with distinct characteristics that differ from other neural activity that we associate with intelligence – therefore the relative intelligence of animals to humans does not necessarily map closely to their degree of consciousness.

1.3. Neural indicators of consciousness

All mammals have a cerebral cortex. Mice and rats have a smooth one; cats and dogs have some folding; and humans/dolphins/elephants have highly expanded and folded cortices. Therefore all mammals are probably conscious, although with large differences in vividness and complexity.

Birds and reptiles are a harder case because their brain evolution diverged much earlier. They have instead a cluster of neurons with chemical markers associated with differentiated layers of the neocortex but without the folded shape that maximizes connectivity.

By contrast, fish do not have any neural architecture analogous to the consciousness-related parts of the brain, and are probably unable to feel fear or pain in the way a human would – we strongly encourage you to read this article in full to convince yourself of this claim. Although fish show pain-like responses to harmful stimuli, and do so less if given painkillers, this remains true even when the entire telencephalon (which includes the forebrain) is removed, so on balance it is unlikely they are having a qualitative experience to accompany that response.

1.4. Behavioral indicators of consciousness

Behavior seems like an obvious place to look for evidence of consciousness. However, any behavior can be explained by intelligence alone, or even sub-intelligent evolutionary ‘hard coding’. If you swat a fly, it will make loud ‘angry’ noises and go away. Your little brother would react the same. If you knew nothing about the neural architecture of flies, you might conclude that flies are just as conscious and capable of suffering as people.

One way around this is if we can design tests that indirectly look for mental states, such as the mirror test (whether an animal can recognize its reflection). But elephants (definitely conscious) routinely fail and at least one fish has passed, so we are wary about assigning much weight to these tests.

Another is to look for behaviors that map onto extremely complex emotional states that we observe in humans. If there is a large difference in intelligence but a great similarity in behavior, we can infer that the animal is having a similar conscious experience.

Starting simple; if you play with a dog it will act in the highly specific ways you might if you were feeling ‘joy.’ From a hormonal and intelligence perspective, stress and positive excitement are very similar states, and in non-conscious creatures we would have no reason to expect – for example – a creature to seek out stressors like a chew toy unless they had some positive feelings towards them. That we can so clearly tell how a dog is feeling is to us highly persuasive evidence of consciousness.

Dogs also exhibit something quite analogous to a theory of mind: for example, they will comfort their owner if the owner is sad (though this may be a learned behavior).

Dogs are unlikely to be a special case; other animals of varying intelligence also exhibit complex behaviors indicative of consciousness:

• Chimps who see another chimp lose a fight will direct more grooming behavior towards the loser, but not if they don’t see that chimp lose the fight. [Link, popular coverage]
• Corvids who hide a treat when being observed will sneak back later and rehide the treat somewhere else, indicating (perhaps) a theory of mind and (certainly) ‘mental time travel’ of imagining the self in various future states. [Link, popular coverage]
• Dolphins given a test to discriminate between X and Y for a reward, but including the option of ‘bailing out’ of the test in exchange for a lesser reward, will bail out more often in more difficult tests, indicating a theory of metacognition (which we’d say is adjacent to – if not the same thing as – a theory of mind). [Link, popular coverage]

Spending time with animals (higher mammals, especially) makes it extremely hard to imagine they are anything but conscious, but we recognize that any behavior could be explained as an expression of intelligence without assuming conscious experience. However, we are reasonably confident that:

1. The range and complexity of behaviors conducted by animals correlates closely with the brain architecture we believe causes consciousness – the more complex the brain architecture, the more consciousness-like the behavior. This would be a substantial coincidence if in fact animals were not conscious.
2. Animals we intuit as conscious are less likely to exhibit ‘glitching’ behavior indicative of being a non-conscious rule-following automaton. There are many examples of ‘glitches’ in insect behavior (such as ant vortexes of death, repetitive digger wasp behavior [although maybe not] and moths failing to notice they are circling a candle), whereas there are very few examples of ‘glitches’ in mammal behavior. A humorous example of a glitch in bird behavior can be found in YouTube videos where the ‘imprinting mechanism’ of ducklings has confused them into thinking a dog is their mother.

1.5. What animals are conscious?

It’s fair to reflect on the uncertainty in the above, but we’d be comfortable ascribing consciousness on the basis of neural architecture and behavior as follows:

There is good reason to believe all common land-based food mammals (cows, pigs, sheep, goats) are highly conscious. On the other side, we think we can be reasonably confident fish don’t suffer in a morally relevant way. We’re not sure about chickens. We encourage you to read this overview of their behavior in full to convince yourself that their emotional and cognitive intelligence would group them with simple mammals if they had the same neural architecture.

However, since in most parts of the human brain ‘intelligence’ does not correspond to ‘consciousness,’ and because chicken brains are a clump of neurons with a different evolutionary history, lacking the distinct layered and highly folded structure of the cerebrum, in the model we assume their likelihood of consciousness is 75%.

A key part of this post is to quantify vague feelings about animal consciousness. This is similar to what Scott did with a sample of Tumblr respondents here and SSC reader Tibbar did with an MTurk sample here. Their results are expressed in terms of an animal’s ‘worth’ relative to a human in percentage terms.

| % Consciousness | Tumblr sample | MTurk sample |
|-----------------|---------------|--------------|
| Human           | 100           | 100          |
| Chimp           | 20            | 50           |
| Elephant        | 14            | 100 (!)      |
| Pig             | 3             | 20           |
| Cow             | 2             | 33           |
| Chicken         | 0.2           | 4            |
| Lobster         | 0.03          | 1.6          |

However, it’s important to make the distinction between ‘worth of experience’ and ‘worth of suffering’ because while we might rather be a human than a chicken on a good day, feeling pain might be equally unpleasant in either body. Below is our best guess for a ‘universal’ estimate (i.e. even a meat eater ought to agree that these are plausible) – people who place a premium on animal experience, such as many vegetarians, would likely rate animal experiences higher:

|          | %Weight Suffering | %Weight Experience |
|----------|-------------------|--------------------|
| Human    | 100               | 100                |
| Chimp    | 90                | 50                 |
| Elephant | 90                | 35                 |
| Pig      | 80                | 25                 |
| Cow      | 50                | 10                 |
| Chicken  | 10                | 1                  |
| Lobster  | 0.1               | ~0                 |

What we mean by the above is, for example, that if the unit of suffering were ‘being boiled alive,’ then based on our understanding of how vivid their sense data would be (assuming consciousness), we would be roughly indifferent between being boiled once as any of a human, chimp, or elephant, 10 times as a chicken, or 1000 times as a lobster. Likewise, we would be indifferent between living (assuming no scarcity or predators) for 1 year as a human, 2 as a chimp, 4 as a pig, or 100 as a chicken.

In the model, the moral impact of a farmed animal is its likelihood of consciousness times the moral weight of its suffering if its life is (based on the information in the following sections) ‘worse than non-existence,’ and otherwise times the moral weight of its existence.
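A minimal sketch of that step, using our table values above expressed as fractions of the human weight (whether a given species’ life is worse than non-existence is decided in section 2.2):

```python
# Moral impact step: likelihood of consciousness times the applicable weight.
def moral_weight(p_conscious, w_suffering, w_experience, life_worse_than_nonexistence):
    # The suffering weight applies when the life is worse than non-existence;
    # otherwise the (positive) experience weight applies.
    w = w_suffering if life_worse_than_nonexistence else w_experience
    return p_conscious * w

# e.g. a chicken: 75% likelihood of consciousness, 10% suffering weight,
# 1% experience weight, and a life we judge worse than non-existence (2.2.1):
print(moral_weight(0.75, 0.10, 0.01, True))  # -> 0.075 human-equivalents
```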

2. How many animals are farmed and under what conditions?

2.1. Animals eaten per capita

The OECD records the exact weight of meat consumed per year, and so by dividing this by the carcass weight we get the per-year per-capita animal consumption by country. This is consistent with other estimates on the web.
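The arithmetic is just a division; here is a sketch with illustrative inputs chosen to roughly reproduce the EU27 poultry entry in the table below (these are not the actual OECD figures):

```python
# Per-capita animal count = meat consumed per capita / carcass weight.
poultry_kg_per_capita = 23.6  # illustrative annual consumption by weight
broiler_carcass_kg = 1.5      # illustrative average carcass weight

print(round(poultry_kg_per_capita / broiler_carcass_kg, 1))  # -> 15.7 birds/person/year
```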

Animals / capita consumed by four major categories in 2018:

|         | Australia | Canada | EU27  | UK    | USA   | World |
|---------|-----------|--------|-------|-------|-------|-------|
| BEEF    | 0.05      | 0.05   | 0.03  | 0.03  | 0.07  | 0.02  |
| PIG     | 0.27      | 0.20   | 0.44  | 0.22  | 0.29  | 0.15  |
| POULTRY | 29.42     | 22.53  | 15.70 | 18.53 | 33.12 | 9.44  |
| SHEEP   | 0.38      | 0.05   | 0.07  | 0.22  | 0.02  | 0.09  |

The conditions most animals are farmed in may surprise you. Since a few large factory farms account for most animals farmed, the typical farm may well be a small mom-and-pop operation, but the typical food animal is raised on an industrial scale. Further, ‘ag gag’ laws prevent facts about animal conditions from reaching public awareness, and are arguably designed explicitly to allow companies to mislead consumers about the conditions most animals are farmed in.

Animal rights organizations will frequently quote a study by the Sentience Institute finding that over 99% of animals eaten in the US are factory farmed. Although a potentially biased source, its estimate is consistent with government figures, and we get similar results when replicating their methodology with USDA data. What really drives this statistic is that, as seen above, chickens form the vast majority of the farmed-animal count and are almost exclusively farmed industrially.

 

|         | My calculation for % factory farmed (Sentience Institute methodology) | Sentience Institute estimate for % factory farmed |
|---------|------------------------------------------------------------------------|---------------------------------------------------|
| BEEF    | 64%                                                                    | 70%                                               |
| PIG     | 94%                                                                    | 98%                                               |
| POULTRY | ~100%                                                                  | ~100%                                             |
| SHEEP   | 64% (based on being woolly cows)                                       | Not included                                      |

Comparing across countries is difficult, but it seems that American farming is slightly more industrialized than the EU’s. My best estimate is that the difference is not large enough to matter morally. If you eat meat and cannot explicitly trace its source, you are most likely eating factory farmed meat.

2.2. What is it like to be farmed industrially?

The defining feature of factory farming is that market incentives lead to a paperclip-maximizer situation in which producing as many animals as possible takes precedence over concerns about animal welfare. Consequently, the ‘experience’ of being factory farmed is best understood as a particular form of slavery, where cruelty is the side effect of a system designed to maximize economic output.

2.2.1. Chickens

Chickens can be raised in two ways: in cages or in a shed. Cage-rearing is typical in developing economies such as China, but in the West cages are used for egg-laying hens only. Slaughter chickens in the West are raised instead in a large ‘broiler’ shed covered with litter.

Broadly speaking, caged chickens have literally no human analogue in terms of how much they suffer. They live in a state of constant pain and anxiety, barely able to move. The only mercy is that they do not suffer for long. In this analysis we focus on meat-eating in a Western context, so we model 100% of chickens as broiler farmed.

A factory farmed slaughter chicken lives for approximately 47 days, during which time it grows to a weight of 2.6kg (42 days and 2.5kg in the EU). This is analogous to a newborn human baby reaching adult weight by its first birthday. To achieve this rapid weight gain, a combination of force-feeding, drugs and high-energy feed is used. But the worst culprit is selective breeding. In a study by Kestin, between 2% and 30% of broiler chickens, depending on the breed, had a gait score between 3 and 5 on a 5-point scale (1 = no issues, 3 = obvious gait defect, 5 = unable to move at all), while 100% of a control group bred randomly and then raised under the same conditions had no or only minor mobility issues. Selection for quick growth rather than fitness in the wild leads to a high rate of heart attacks and other organ failure. In the final weeks of life, the chickens often outgrow the ability of their legs to support them, making broken or otherwise failed legs endemic in the industry.

Photos of cage-raised chickens, borderline NSFW

Photos of broiler shed chickens, NSFW

Because it is cheaper to change the litter only between flocks, the litter is a major source of bacterial infection and especially contact dermatitis (rashes and lesions on the chicken’s feet and lower body). It is common practice in the EU (not the US) to remove a portion of the flock a week before slaughter time to create enough space for the remaining birds to reach their usual slaughter weight, which suggests there isn’t much free space for the birds. Birds whose legs fail will often dehydrate to death. We don’t want to overegg this – a dead bird is an unproductive bird, and only around 3.3% of the flock die during growth for any reason – but remember that this is a 3.3% chance of dying in only six or seven weeks.

De-beaking is common in broiler chickens (and universal in laying chickens). One reason for debeaking is to reduce cannibalism, which occurs because the birds are so stressed – pet chickens will peck each other to establish a dominance hierarchy, but don’t kill and eat each other. Beaks are sensory and manipulative tools for chickens, so this is analogous to cutting off prisoners’ fingers without anesthetic to lower the probability of escape.

Photos of debeaking, NSFW

Shed chickens have it slightly better. They have a small amount of mobility, are able to do some natural activities such as socializing and digging in the dirt with their claws (but not usually their beaks) and have a little natural light from windows in the warehouse. On the other hand, chickens only cannibalize each other when very stressed and the strain on their systems from the massive growth they are forced to undertake causes considerable pain.

We think it is reasonable to say that broiler chickens exist in a state worse than death – in the model, we assume chicken-days are equivalent to -2 human-days (you’d rather have your life be 2 days shorter than have to experience a day of chicken life), but your intuitions may differ substantially.

2.2.2. Pigs

Pigs are the next-most commonly farmed food animal. There are two major sources of cruelty in pig production: the raising of the food-pigs themselves, and the creation of new food-pigs from breeding pigs.

Breeding sows are confined to a ‘gestation’ or ‘sow’ crate for most of their lives. These are only slightly larger than their bodies, making it impossible to turn around or even lie down. Generally the floors are made of slats or iron rungs to allow manure to fall through. These slats can hurt the sensitive feet of the pigs, and the fact that they are confined directly above their own manure means they are exposed to ammonia toxicity, which leads to respiratory conditions common in confined sows (and presumably smells incredibly distressing). Pigs are highly intelligent, and the unstimulating confinement means that the pigs engage in repetitive stress behaviors such as biting at the metal bars of their cage – this can cause further harm such as mouth sores.

Shortly before birth, the pregnant sow is taken to a ‘farrowing crate’ – even more restrictive than a sow crate. This is designed to separate the mother from the piglet so the piglet can nurse without being crushed (piglets being crushed can happen in the wild, but it is rare – this is a problem almost entirely caused by the confined conditions the sow is kept in). The crate is so tight the mother cannot even see her baby once it is born, and the baby is taken away after about 17-20 days. The piglet is then prepared to be fattened for slaughter, and the mother is either re-impregnated and returned to the gestation crate or slaughtered herself if she is unlikely to survive another pregnancy.

Photos of gestation and farrowing crates, surprisingly SFW

Piglets being prepared for slaughter are castrated and have their tails docked, often without anesthetic. Unlike chicken beaks, pig tails don’t really seem to serve any purpose, but pigs show pain behavior towards their stumps, suggesting the stump remains very sensitive even after docking. The tails are docked to prevent other pigs biting them and causing an infection – again, behavior which is vanishingly rare in the wild and therefore seems to be a stress response to the conditions they are kept in. Piglets may also have their teeth clipped to prevent biting, but we can’t find figures on how common this is. Pigs prepared for slaughter are kept in ‘finishing crates,’ which seem to run anywhere from a slightly larger sow crate (larger only in the sense that it is bigger – finishing pigs are much larger, so they don’t have any more space to turn around or express natural behaviors) to something a little more like a traditional farmyard pen but indoors – six or seven pigs confined to a small pen where they have just enough space to walk around if they want to.

Pig-tures of finishing crates, SFW

Pigs are highly intelligent animals, and when not confined to stalls will spend hours playing and rooting around in the mud. The pigs consumed for food will be in constant low-level pain and sows used for breeding will be in quite intense pain constantly. It is hard to imagine a more distressing event than having your child taken away from you or being taken away from your mother, and we might imagine that the constant lack of stimulus for both food and breeding pigs causes considerable boredom and sadness.

It is a harder call whether pigs exist in conditions worse than death. My intuition is that food pigs are right on the border, and breeding pigs would strongly prefer to not live. In the model we assume that a pig-day is worth -1 human days.

2.2.3. Cows

Cows are the only animals routinely farmed in conditions approaching the way people imagine farming to be – that is, in a field where they have enough space to move around and socialize. Factory farmed cows spend six to twelve months being raised outdoors in fields (these are the cows you see dotted around the countryside), and are then transported to ‘feedlots’ for their last few months, where they are fed an artificial diet of corn and soy that is very hard on their bodies and can cause illnesses such as ulcers. Note that almost all cow-meat can be labelled ‘grass fed’ because most cows spend their first year in fields eating grass: that doesn’t mean it is where the majority of their final slaughter weight came from! Much like pigs, cows have complex social hierarchies, and being put on a feedlot with thousands of other cows is depressing.

Pictures of feedlot, SFW

Beef cattle also endure individual painful events like castration, branding and dehorning (often done without anesthetic) and transport in cramped crates for long periods of time. It is unclear how to incorporate the impact of these events on the animal’s overall quality of life. We’d suggest animals raised on a field have the same quality of life as a traditionally farmed animal and animals raised on a feedlot have a quality of life moderately worse than a typical elderly human. By quite a long way cows appear to have it the best of all factory farmed animals and have lives that are clearly worth living.

We think we’d be pretty content to live as a cow in a field, but cows on feedlots seem to have lives closer to those of food-pigs. Approximately averaging these out over a cow’s lifetime, we model 1 human day as equal to 10 cow days. We can’t find good information on how sheep are factory farmed, so we’ve assumed they’re just woolly cows for the purpose of estimating their quality of life.

2.2.4. Value of animal life

Quantifying and comparing the subjective experiences of farmed animals is hard because there is no ‘natural unit’ of suffering or experience. We proceed instead by asking at what rate we would be willing to trade off days of our own lives to live as a particular animal under the factory farmed conditions described above. Factory farmed cow lives seem greatly preferable to non-existence, pigs spend most of their time experiencing some form of chronic or acute suffering, and chickens have truly awful lives from birth to slaughter.

Slaughter itself may be a morally relevant part of valuing an animal’s life. In principle, animals are stunned before slaughter so that the process is painless (and evidence suggests animals are not distressed by watching other animals being stunned). However, in practice animals are often not insensible to pain when they are skinned or carved up, either because of poor training (paywalled WaPost link), religious beliefs around the way meat should be prepared (link, further figures) or just because of a culture of laxity and cruelty (INTENSELY NSFW link of animals being abused by slaughterhouse workers, SFW-ish PDF report of the investigation). ‘Ag gag’ laws and other efforts by farmers to avoid bad press prevent serious scholarly investigation of the extent of the issue, but the AnimalAid hidden camera investigation linked as a PDF above found evidence of criminally cruel treatment at one of the three abattoirs observed and evidence of mistreatment at another.

Also, some people believe in a specific ethical obligation to not kill conscious creatures that do not want to be dead, so that slaughter of even humanely stunned animals is immoral. We instead take the consequentialist view that there is a symmetric value in actualizing the existence of conscious creatures that want to be alive (again, a farmed animal’s practical alternative is non-existence).

In the model, we assume that at human-level consciousness the experience of a typical human life-day is worth the factory farmed experience of 10 cow days, -1 pig days, or -0.5 chicken days. We value a year of perfect health at $50,000, in line with typical healthcare priority setting in the US (it is much less in the UK and Europe – closer to £30,000); a typical western life-year, at about 86% of perfect health, is then worth $43,000. Slaughter is not included, as even the cruelest slaughter imaginable would be QALY-negligible when averaged over an animal’s whole lifespan.
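A sketch of how those numbers combine, at human-level consciousness and before applying the consciousness and moral-weight adjustments from section 1.5 (the pig and cow lifespans below are rough placeholders; the chicken’s 47 days is from section 2.2.1):

```python
# Dollar-equivalent of one animal's lifetime experience, per the trade-offs above.
human_day_usd = 43_000 / 365  # ~ $118 per typical western life-day

human_days_per_animal_day = {"cow": 0.1, "pig": -1.0, "chicken": -2.0}
lifespan_days = {"cow": 550, "pig": 180, "chicken": 47}  # cow/pig are placeholders

for animal, rate in human_days_per_animal_day.items():
    usd = rate * lifespan_days[animal] * human_day_usd
    print(f"{animal}: ${usd:,.0f}")
# -> the cow comes out positive (a life worth living); pig and chicken negative
```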

3. What lives are worth living?

Speaking loosely, evolution does not care how happy your life is as long as you a) exist and b) pass on your genes, and so it has come up with a number of ‘patches’ to the conscious reward system to ensure animals are never too satisfied to stop competing to breed, but never so dissatisfied they would prefer to be dead. Instead, what we have is roughly a baseline level of happiness from which we deviate when good or bad things happen, but to which we almost always return. If this is also true of animals, then it does not matter that we perceive their lives as described above as intolerable; if we were actually forced to live as that animal, we might find ourselves still hoping not to die painlessly in our sleep. Since we cannot ask animals directly whether they consider their lives worth living, we instead look at the conditions in which people report changes in happiness or commit suicide, and compare these to the lived experience of factory farm animals.

3.1. Habituation and Happiness Set Points

Habituation is a “decrease in response to a stimulus after repeated presentations.” The simplest form of learning, it is caused by neural processes that regulate responsiveness to different stimuli. When we are repeatedly sent a signal, especially one that is highly frequent and hasn’t recently changed in intensity or duration, we consciously experience it less acutely. This makes sense: if consciousness is about complex reflection, then once we have processed a signal and determined a response (if any), the response becomes subconsciously automatic.
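As a toy sketch of the idea (an illustrative exponential decay, not a model taken from the habituation literature):

```python
import math

# Response strength fades with repeated presentations of the same stimulus.
def response(n_presentations, base=1.0, decay=0.5):
    return base * math.exp(-decay * n_presentations)

for n in (0, 1, 5):
    print(n, round(response(n), 3))  # -> 1.0, 0.607, 0.082: the signal fades
```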

However, we don’t habituate just to local physical sensations, like the ticking of a clock or the pressure of a shirt against our skin. We habituate to pain and suffering as well, even to large shocks to the system. In the literature this is known as the disability paradox, whereby a majority of those with severe disabilities report having a good or decent quality of life, even when to external observers it seems like a life not worth living (although this story is nuanced, and some of the improvement is related to the ability of intelligent humans to adapt by changing their lifestyle).

Nevertheless, the consensus in happiness research is that people have a fairly stable baseline level of happiness to which they return after even large changes in circumstances. In a famous study by Brickman (1978), paraplegics and lottery winners reported similar levels of happiness before and after what one might assume was a life-altering development, either extremely negative or extremely positive. And in a twin study of several thousand people by Lykken and Tellegen, about 50% of the variation in the Well-Being scale of the Multidimensional Personality Questionnaire was associated with genetic variation, while less than 3% (!) of the variance could be accounted for by any of socioeconomic status, educational attainment, family income, marital status, or religious commitment.

3.2. Lessons from suicide

Sometimes, humans decide that their lives are intolerable and commit suicide. Interestingly, we basically never observe this in other animals. The only maybe-credible anecdotal claims are of dolphins, which are highly intelligent and can commit suicide by not breathing (breathing in dolphins is probably an active choice, rather than an automatic process regulated by the brainstem as it is in humans).

This doesn’t mean we can conclude that farmed animals prefer living – animals might simply lack the theory of mind or intelligence to act on a preference to stop existing. However, if humans in extremely poor conditions overwhelmingly do not choose suicide, we might infer that animal lives of roughly similar quality would also be worth living. Two well-studied situations in which humans are placed in extremely poor conditions are slavery and terminal disease.

3.2.1. Slavery

The historical consensus is that while slavery caused extreme stress and suffering, the rate of suicide among black slaves was quite low. According to the 1850 U.S. census, slaves had a suicide rate of 0.72 per 100,000, while whites had a rate of 2.37 and freed slaves a rate of 1.15. Of the incidents of resistance documented in the Federal Writers’ Project Slave Narratives, only 1.2% were acts of suicide. Further, when slaves did resort to suicide, it was usually in response to a deterioration in their circumstances or to unfulfilled expectations, rather than being explained by living under the most brutal conditions – this is consistent with a ‘happiness set point’ theory.

To be clear, we are not saying that because enslaved Africans committed suicide at lower rates than free whites, slavery wasn’t ‘that bad.’ There is a substantial academic literature explaining the cultural reasons for the difference. The observation is simply that the main explanatory factor of whether a slave considered life worth living was not how objectively bad that life was.

3.2.2. Terminal Patients

In a review of the psychological profiles of 18,000 terminal cancer patients in palliative care, a small number of whom committed suicide, it was found that those who committed suicide often:

“…presented functional and physical impairments, uncontrolled pain, awareness of being in the terminal stage, and mild to moderate depression… however, the loss of, and the fear of losing, autonomy and their independence and of being a burden on others were the most relevant.”

The presence of significant pain or even depression (what we might call ‘objective suffering’) was not a significant factor in predicting suicide – and suicide is the best revealed preference we have for whether the morally relevant actor experiencing a life considers it worth living.

3.3. What does this mean?

Preference for living is a strongly mean-reverting process. The scientific literature and the historical examples of slavery and terminal illness both suggest that humans will habituate to almost anything that is done to them. In our ancestral environment life was really, really hard. Brutally hard. And it makes sense that even in environments that modern folks would instantly label as ‘much worse than non-existence,’ evolution made sure that we would continue to have the strength of will not only to survive but to want to.

How far does this go? Does it mean that animals prefer living no matter how much suffering they experience? Reasonable people can disagree. As detailed above, factory farmed animals – especially chickens – do not exist in anything remotely resembling an ‘ancestral environment’. Chronic stressors are more likely than acute ones to cause permanent changes in happiness, and, as the case of chicken cannibalism absent food scarcity suggests, factory farming likely creates an environment that is not only unpleasant but one which even the astounding level of habituation we observe in humans might not be able to raise above the ‘worth living’ watermark.

In the model we take the habituation literature seriously, estimating how much of the deviation from the long-term set point of average quality of life (0.86) is neutralized by habituation. Based on the previous sections, and on how much worse factory farmed conditions are than the ‘ancestral environment’ in which habituation would have been calibrated, we estimate that humans habituate by 80%, cows 70%, pigs 60%, and chickens 50%. Even though we had earlier assumed that we would prefer non-existence to being a factory farmed pig, and would dislike being a chicken twice as much, once habituation is taken into account pig lives come out slightly better than non-existence, and chicken lives are quite bad but have a (negative) moral weight not much larger than the positive one of cows. However, since we consume so many chickens, in the end they dominate the analysis.
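One way the arithmetic could work – a reconstruction of ours that matches the claims in this paragraph, though not necessarily the spreadsheet’s exact formula – is to let habituation neutralize the stated fraction of each species’ deviation from the set point:

```python
# Habituation pulls raw experienced quality back toward the set point.
# Raw per-day quality on a human-day scale follows section 2: cow +0.1
# (10 cow days per human day), pig -1, chicken -2 (twice as bad as pig).

SET_POINT = 0.86  # long-term average quality-of-life set point

raw_quality = {"cow": 0.1, "pig": -1.0, "chicken": -2.0}
habituation = {"cow": 0.70, "pig": 0.60, "chicken": 0.50}

for animal, raw in raw_quality.items():
    deviation = raw - SET_POINT
    adjusted = SET_POINT + (1 - habituation[animal]) * deviation
    print(f"{animal}: raw {raw:+.2f} -> adjusted {adjusted:+.2f}")

# cow:     raw +0.10 -> adjusted +0.63
# pig:     raw -1.00 -> adjusted +0.12  (slightly better than non-existence)
# chicken: raw -2.00 -> adjusted -0.57  (bad, but comparable in magnitude to cows)
```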

4. Health Considerations

4.1. Nutrition

Animal protein sources such as meat, fish, eggs, and dairy contain a good balance of the 20 amino acids that we need for almost every metabolic process in the body, whereas individual plants are generally deficient in either the mix or the concentration of these. The same is true for micronutrients: animal protein sources are much higher in vitamin B12, vitamin D, the omega-3 fatty acid DHA, heme iron, and zinc.

Animal products provide most of the zinc in US diets, and meat, poultry, and fish provide iron in the highly bioavailable heme form. For example, the panel setting the new Dietary Reference Intakes recommends an 80% higher daily iron intake for vegetarians.

Concern has also been expressed about the difficulty children have in obtaining adequate energy and nutrient intake from bulky plant-based diets. Dutch infants consuming vegan diets had poorer nutritional status and were more likely to have rickets and deficiencies of vitamin B-12 and iron, and the World Health Organization strongly recommends animal products for infants to ensure adequate calcium, iron and zinc.

Whether or not a plant-based diet is nutritionally viable depends mostly on whether you have the economic means to consume a wide variety of food sources; it may be riskier for small children or for those whose ancestry is from regions where meat-eating was prevalent.

4.2. Long Term Health Outcomes

Estimating the long-term health outcomes of eating certain things is difficult because food is highly bound up in the culture we live in, and culture correlates with just about every health outcome you could possibly imagine. Even less conveniently, nutritional science is highly anti-inductive: if a particular food group is identified as healthy, people with an interest in being healthy flock to that food group, and people with an interest in being healthy are likely to be healthy for a bunch of reasons regardless of diet.

So here’s a nice headline result: vegetarians have less heart disease with extremely high certainty, and probably less cardiovascular disease and cancer too. Most of the studies in that meta-analysis have had some of the really obvious stuff adjusted away (race, income, etc.) but not all studies adjust for all confounders, and we should be cautious about trusting studies that ‘adjust for confounders’. If you ignore confounders then the answer is clear; eating vegetarian is good for you in every single way we can measure (including, possibly, circulating testosterone in defiance of stereotypes about meat eaters!).

If you are interested in confounders: there are a handful of cool natural experiments, taking groups with reasons to eat certain foods but not to bother with the associated healthy lifestyles, which are the closest we are likely to come to a true experiment in this area. In particular, the American Adventist Health Studies are pretty much state of the art in the field from what we can see. Adventists have quite unique dietary habits, brought about by religious prohibitions on certain foodstuffs which some Adventist churches follow and some don’t. Consequently, if you are an Adventist you are functionally ‘randomized’ into different food-eating conditions depending on which church you attend, and this randomization can be exploited by researchers.

Based on the Adventist Health Studies, a vegetarian diet increases life expectancy by around 3.6 years. The less meat you eat, the healthier your BMI and the less likely you are to get diabetes.

Overall we might expect lacto-ovo vegetarians to have a health-related quality of life around 10% better than meat eaters, with most of this benefit becoming apparent 20 years after making the switch to a vegetarian diet.

You could complicate this picture a lot (especially by introducing future discounting), but we think the data demonstrate the general principle well: if you value life-years towards the end of your life, you should likely go vegetarian.

One final point on how meat might affect your lifespan: there is growing awareness that industrially produced meat is an ideal breeding ground for zoonotic disease, and that those diseases can mutate and jump to humans very quickly. Outbreaks such as H1N1 (‘swine flu’) and H5N1 (‘bird flu’) may have originated with farmed animals, and were spread rapidly by the close contact of unhealthy animals and the global nature of the meat supply chain. Eating meat probably increases the probability of a global pandemic, but there isn’t good evidence on how much your individual consumption matters at the margin.

In the model we take the Adventist study result almost at face value, estimating that eating vegetarian will increase your lifespan by 3 years, and we include constant low costs due to possible nutritional deficiency and moderate health benefits that appear later in life.
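As a rough annualization of this (our own back-of-envelope; the 50-year horizon is an assumption of ours, not a model input):

```python
# Back-of-envelope: annualize a 3-year life-expectancy gain over the years
# in which the diet is actually followed. DIET_YEARS is our assumption.

EXTRA_YEARS = 3.0      # model input from the Adventist result
DIET_YEARS = 50        # assumed remaining years of adult life

annual_gain = EXTRA_YEARS / DIET_YEARS
print(f"{annual_gain:.3f} life-years per year of vegetarian eating")  # 0.060
```

The model’s own health figure (0.039 life-years/year in the conclusion’s table) is lower than this crude 0.06, which is consistent with the benefits arriving late in life and being partly offset by the constant nutritional-deficiency cost.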

5. Environmental Impact

The environmental impact of meat consumption is difficult to measure and aggregate because the numbers are sensitive to the type of farming used and to location, and any serious attempt requires a massive aggregation of different data sources. The best we could find was by Oxford’s Zoology department, which combined data from 570 studies (median reference year 2010) covering 40,000 farms plus 1,600 processors, packaging types, and retailers in 119 countries, for 40 products representing ~90% of global protein and calorie consumption. It focused on five environmental impact indicators: land use, freshwater withdrawals weighted by local scarcity, greenhouse gases, acidifying emissions, and eutrophying emissions.

Overall, today’s food supply chain creates ~13.8 billion metric tons of CO2 equivalents, about 26% of human-caused emissions. It also causes 32% of global terrestrial acidification and 78% of eutrophication. It is also very resource-intensive, covering about 40% of the world’s ice- and desert-free land and driving roughly 90% of global scarcity-weighted water use, because irrigation returns less water to rivers and groundwater than industrial and municipal uses do, and predominates in water-scarce areas and times of year.

Because of different technologies and other environmental variables, the environmental impact of any foodstuff can vary widely. For example, ninetieth-percentile GHG emissions from beef are 105 kg of CO2eq per 100 g of protein, and land use (area multiplied by years occupied) is 370 m² ∙ year. These values are 12 and 50 times greater, respectively, than 10th-percentile impacts for beef from dairy herds.

However, as you can see below, the environmental impact of meat dwarfs that of other nutrition sources:

In total, meat, aquaculture, eggs, and dairy use ~83% of the world’s farmland and contribute 56-58% of food’s different emissions, despite providing only 37% of our protein and 18% of our calories.

Because of substitution effects and nutritional requirements, it is unclear exactly how much of these resources would be freed up if we switched away from eating meat. In a simple model where we assume ‘protein is protein and calories are calories’ and freed up land would only remove carbon through natural vegetation and accumulated soil carbon, “moving from current diets to a diet that excludes animal products would result in reducing food’s land use by 3.1 (2.8-3.3) billion hectares (a 76% reduction), including a 19% reduction in arable land; food’s GHG emissions by 6.6 (5.5-7.4) billion metric tons of CO2eq (a 49% reduction); acidification by 50% (45-54%); eutrophication by 49% (37-56%); and scarcity-weighted freshwater withdrawals by 19% (−5 to 32%) for a 2010 reference year.”

It’s difficult to translate these tradeoffs into ‘one number’ that captures the environmental impact of meat eating. Land which currently supports animal farming is likeliest to be least suitable for agriculture or urban dwellings, and the costs of climate change, the value of species diversity, and the future scarcity of freshwater are all difficult to measure.
One (very approximate) way is to assume that the human race is polluting the planet as much as it sustainably can (at current technologies it’s a lot worse, but we’re science-optimists), and that the long-term impact of a reduction in pollution is a proportional increase in the equilibrium human population.

Water shortages, eutrophication, and acidification are serious environmental concerns, but they can be managed. Greenhouse gas emissions and land use seem like the most important constraints. Combining the absolute resource impact of the food supply chain with the relative impact of switching away from meat, we get that if the planet went vegetarian we’d reduce emissions by about 12.5% and free up 30% of the world’s non-desert/ice land, most of which we would not immediately be able to put to good use. We think it’s reasonable that the total reduction in ‘human pollution and resource use’ would be about 10%. Since raw resources and access to a clean environment aren’t the only limiting factors on population size, this result should be adjusted downward by a reasonable factor.
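The headline figures here follow from the statistics quoted earlier; a quick arithmetic check (ours):

```python
# Food is ~26% of human-caused GHG emissions; an animal-free diet cuts
# food emissions by ~49%; food covers ~40% of ice/desert-free land and an
# animal-free diet cuts food land use by ~76%.

print(0.26 * 0.49)   # ~0.127 -> the "12.5%" emissions reduction
print(0.40 * 0.76)   # ~0.304 -> the "30%" of land freed up
```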

In the model, we assume that without meat farming there would be about 2.5% more capacity for population or quality of life, 30% of which (completely uneducated guess) would actualize as more lives, and 70% of which would actualize as better lives (and would count at 1/5th weight due to habituation).
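Put together, the assumptions in this paragraph imply an effective welfare-capacity term along these lines (a sketch of ours; how the spreadsheet then apportions this global figure to a single consumer’s annual decision is not reproduced here):

```python
# Effective welfare gain from retiring meat farming, under the stated
# (admittedly uneducated-guess) assumptions.

CAPACITY_GAIN = 0.025        # extra capacity for population or quality of life
AS_NEW_LIVES = 0.30          # share actualized as additional lives (full weight)
AS_BETTER_LIVES = 0.70       # share actualized as better existing lives...
HABITUATION_WEIGHT = 1 / 5   # ...counted at 1/5th weight due to habituation

effective = CAPACITY_GAIN * (AS_NEW_LIVES + AS_BETTER_LIVES * HABITUATION_WEIGHT)
print(effective)             # 0.011 -> ~1.1% effective capacity gain
```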

6. Cost of Switching Diets

One good reason not to switch to a vegetarian diet would be if doing so were prohibitively costly, either in money or in the satisfaction you get from eating.

6.1. Cost per meal

In a trivial sense, vegetarianism is clearly cheaper. It takes more time and energy to grow plants that we feed to animals and then eat the animals than it does to just eat the plants themselves. This is borne out by research into the cost per calorie of various foodstuffs. Of course, humans don’t eat exactly the same food animals eat, and vegetarians are for some reason unwilling to just drink 2000 calories of canola oil every day.

The cost of various types of diet seems to be bizarrely under-studied (or perhaps crowded out by the literature on trying to get people to stop eating junk food). The one academic source I found seems to be of really high quality, though. Here is the paper and here is a nice associated blog post.

At all income levels, meat eaters spend about $20 more per week than true vegetarians (~$1,000/year). Adjusting for all controls (including politics and body weight, which may themselves be affected by vegetarianism) reduces this to a savings of $11.1/week (about $577/year), which is what we use in the model.

6.2. Psychological costs

However, one switching cost that might not be trivial is the psychological importance of having meat in your diet. Most vegetarians eventually enjoy vegetarian food as much as meat (though that may just be survivorship bias), but anecdotal experience from everyone I know who has gone vegetarian is that there is a really horrible adjustment period of at least a couple of years during which you want to eat meat and can’t.

One reasonable measure of psychological pain is to look at how much people would be willing to pay to avoid it, which conveniently has been studied:

This table is the output of a point regression asking US consumers how much they would pay to avoid a one percent decrease in each category of food. A 1993 dollar is worth approximately 1.78 2019 dollars (equivalently, a 2019 dollar is about half a 1993 dollar), so consumers are saying that they would pay about $15 per year, in 2019 dollars, to avoid a one percent reduction in their meat consumption. A mere one hundred times this value would almost certainly not be enough to compensate for cutting out meat completely – it’s easy to cut out the first few percentage points of meat (just have slightly smaller portions), but it gets harder as you are forced to make fundamental changes to your diet.

For those inclined to stop eating meat, we wouldn’t overthink this parameter. If the habituation literature has convinced you that you’d be just as happy in a wheelchair as a lottery winner then in the long-run you probably won’t mind eating more tofu.

In the model, we double the $1,500/year preference loss implied by these marginal preferences to $3,000 to account for social costs and the elasticity of demand, and we assume this cost decays toward zero at 10% per year as one gets used to the new lifestyle.
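For illustration, here is how that cost stream plays out year by year against the grocery savings from section 6.1 (a sketch of the stated assumptions only; the spreadsheet’s own aggregation, including any discounting, may differ):

```python
# A $3,000/year psychological cost decaying at 10%/year, set against the
# ~$577/year grocery savings of a vegetarian diet.

ANNUAL_SAVINGS = 11.1 * 52   # ~ $577/year cheaper groceries
DECAY = 0.10                 # psychological cost shrinks 10% each year

psych_cost = 3_000.0
for year in range(1, 11):
    net = ANNUAL_SAVINGS - psych_cost
    print(f"year {year:2d}: net impact of switching ${net:+,.0f}")
    psych_cost *= 1 - DECAY
# year 1: about -$2,423; by year 10 the net loss shrinks to roughly -$600
```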

7. Conclusion

Overall, the case for reduced meat consumption is strong. Vegetarianism is cheaper, better for your health (if you can afford a diverse diet and are not an infant), and better for the environment. Meat eating also carries a significant moral cost in terms of animal suffering.

At the outset of the collaboration, the vegetarian was sure that farmed animals’ lives are so awful that the status quo is an unmitigated moral disaster; the meat eater was open to that conclusion but could also imagine being persuaded to spend all his disposable income on buying meat and throwing it away, because that was the only efficient means of causing the existence of sentient creatures who strongly prefer to exist. If you mapped reasonable conclusions about meat eating onto a scale from -10 to 10, you could say that we started out as a confident -5 and a highly uncertain 2, and ended up agreeing on a very confident -3.

Based on the research above, we’ve produced a ‘base case’ for the decision aid. It is weighted heavily towards the beliefs of the meat eater in the collaboration since the question revolves around what a ‘typical’ person might think and meat eaters are more ‘typical’ than vegetarians. We would certainly encourage you to tinker with the worksheet yourself though, as some decisions are very personal. You can download it here.

From the model we get that the total impact of meat eating per typical western consumer is roughly -$9,500 – that is, the ‘society of conscious beings’ would be better off by around $9,500 per year if any individual human meat eater switched to eating plants instead. To put this in other units: since 5 × $9,500 ≈ $47,500, roughly the $50,000 value of a healthy life-year, it would be about as good for 5 people to go vegetarian for a year as it would be for medicine to extend one person’s life by one year.

Each value in the table below represents the annual impact of a decision to eat meat versus eating an exclusively vegetarian diet. The right-most column attempts to express everything in the same units ($) based on a willingness to pay of $50,000 for a year of perfect human life and $100,000 for a year of YOUR OWN life (to reflect the fact that people generally care more about their own welfare; if you are a perfect utilitarian, feel free to set these both to $50k!). Per the model, we find that even though cows, sheep and (very weakly) pigs prefer farming to non-existence, the number of chickens eaten and the conditions they are farmed in dominate the ethical considerations. In terms of other harms, the impacts on your health and the environment are moderate, and the financial impact of switching to a vegetarian diet is small but negative – that is, the typical meat consumer will in the long run prefer eating meat to spending the savings from vegetarianism elsewhere.

                         Human life-year equivalents    Expressed in $ equivalent

Annual impact on other conscious creatures:
  Cows                    0.009                           $373
  Pigs                    0.007                           $317
  Sheep                   0.002                           $65
  Cage chicken            0.000                           $0
  Shed chicken           -0.138                          -$5,913
  Fish                    0.000                           $0
  Environment            -0.027                          -$1,166

Impact on you:
  Health                 -0.039                          -$3,336
  Finance                                                 $162

TOTAL LIFE YEARS         -0.186                          -$9,498
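A consistency note on the $ column (our inference; the text does not state the conversion explicitly): the dollar figures appear to value animal and environmental life-year equivalents at the 86%-weighted $43,000, and your own health at the 86%-weighted $86,000:

```python
# Rough reconstruction of the $ column (small differences are rounding
# in the displayed life-year figures).

print(-0.138 * 43_000)   # shed chicken: -5,934 (table: -$5,913)
print(-0.027 * 43_000)   # environment:  -1,161 (table: -$1,166)
print(-0.039 * 86_000)   # health:       -3,354 (table: -$3,336)
```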

Overall, the impact of eating meat like a typical person is likely to be substantially negative. Eating no chicken limits the impact on the animals themselves, but the harm to your own health and the environment outweighs the moral good you do by causing the creation of animals who are happy at the margin.

7.1. Sensitivity

The decision aid allows you to specify uncertainty over any of your estimates, which we have done wherever we are still uncertain about the value of a parameter. This analysis is displayed in the graph below. For any plausible distribution of inputs, meat eating is harmful to you personally, primarily for health reasons, and it generally causes harm to other conscious creatures because of the impact on the environment and the high suffering of chickens. However, there is significant uncertainty about this value; in a small number of cases eating meat actually produces a benefit to society by creating more lives that animals would, on net, prefer to live.

Another way of exploring model uncertainty is scenario analysis. We’ve calculated a number of scenarios that cover likely areas of disagreement.

In order of least to most harmful to other conscious beings, the scenarios are:
• No factory farming – narrowly results in outcomes which favor eating meat, since every animal would prefer being alive to not existing. The effect is not greater because there are still environmental and health costs to eating meat.
• Chickens not conscious – The base case assigns a 75% chance that chickens are conscious, and this is a big assumption to which the model is highly sensitive. Assuming chickens are not conscious results in outcomes which narrowly favor being a vegetarian, since the moral importance of creating worthwhile cow, pig and sheep lives is offset by the other harms of meat eating.
• Base case – As described in the document
• No meat causes depression – In this scenario, not eating meat causes you a significant but not life-threatening illness, modelled as minor depression (0.62 utility). This scenario is very interesting because it predicts that eating meat would be good FOR YOU but would harm others, and therefore whether you should eat meat depends on the valuation you place on your own happiness versus the happiness of others (remember that the model already values your QALYs higher than anyone else’s, on the assumption that you are not a perfect utilitarian, AND compensates you for the unhappiness that giving up meat causes).
• Environmental worst case – the resources used to create meat are at the upper end of the plausible range (10%), and all of the freed-up resources would create new people. The more convincing you find the environmental argument, the more likely it is you should be vegetarian.
• No habituation – In this scenario no creatures (including humans) habituate at all, meaning they are exposed to the full ‘badness’ of the farming conditions. The less you buy the habituation literature, the more likely it is you should be a vegetarian; this is a very strong result.
• No speciesism – In this scenario the value of conscious experiences for animals is weighted just as much as conscious experience for a human. This is the scenario that results in the strongest argument against eating meat, and could perhaps be the intuition driving the sometimes acrimonious state of discussion between meat eaters and vegetarians.
• Not shown in this analysis is an ‘Unfettered Vegetarian’ analysis, where the vegetarian collaborator is able to enter their own assumptions into the model without any check from the meat-eating collaboration partner. This is because what the vegetarian considers highly plausible assumptions (chickens are conscious, a much greater weight is placed on animal suffering/experience and much less habituation occurs) results in values that fall off the end of the graph – around $250,000 worth of harms to others per year.

Our key takeaway is that even under the most extreme scenarios we could think of, meat eating is still very likely to be a net harm to both you and wider society. Note also that even in the scenarios where you are not harming both yourself and society, you are certainly hurting one of them quite a lot.

7.2. Impact on Collaborator Lifestyle

The meat eating collaborator was impressed by the environmental impact of beef and moral cost of factory farmed chicken. For the moment he has significantly reduced consumption of both, offsetting in part with salmon because fish have less environmental impact and are most likely not conscious.

The vegetarian was surprised how marginal the case for vegetarianism was when a ‘typical’ perspective was considered. Part of this is because he is still pretty skeptical that animals would actually habituate to the conditions we farm them in, given that the habituation literature doesn’t really cover conditions as cruel as factory farming. Another part might be that this collaboration has not focused on one-off traumatic events – especially slaughter – which probably don’t affect lifetime utility much but might be regarded as so self-evidently ‘evil’ that framing the problem as a balance of good versus harm is itself incorrect. Having said that, although the actual harms of meat eating are less than he expected, the certainty of those harms occurring under any plausible distribution of beliefs (the fact that even a ‘typical’ person would probably regard meat eating as harmful, all things considered) will probably make the vegetarian more militant about his vegetarianism. Sorry!

That being said, both collaborators agree that there is no substitute for evaluating the evidence for yourself. We can only hope that you find our analysis a useful reference.

Correction To Circumcision ACC

The original title of the essay was “Circumcision: Harms, Benefits, Ethics”. I wanted to have all the titles in the same format, as questions, so I titled my post “Is Circumcision Ethical?” Then lots of people got upset because the essay focused on harms and benefits as much as (or more than) ethics.

This was totally my fault, not the fault of the authors. Sorry. For the sake of voting, please pretend the post had been titled “Circumcision: Harms, Benefits, Ethics”.

You can see my proposed titles for the other collaborations here; if any authors are unhappy with how I’ve phrased them, please let me know.

[ACC] What Are The Benefits, Harms, And Ethics Of Infant Circumcision?

[This is an entry to the 2019 Adversarial Collaboration Contest by Joel P and Missingno]


“They practise circumcision for cleanliness’ sake; for they would rather be clean than more becoming.” – Herodotus, The Histories – 2.37 

The debate over circumcision in the Western world today is surprisingly similar to the conflict that Greeks and Egyptians faced 2500 years ago.  Supporters tend to emphasize its hygiene and health benefits; opponents tend to call it cruel or to emphasize its deviation from the natural human form.  In this adversarial collaboration we address medical aspects, sensitivity and pleasure, and ethical aspects of infant circumcision. 

Effect on penile cancer

Circumcision greatly reduces the relative rate of penile cancer, an uncommon malignancy in developed nations which kills a little over 400 American men each year. Denmark, while it has one of the lowest rates of penile cancer for a non-circumcising country, nevertheless has 10x the rate of Israel – where almost all men are circumcised.  Likewise, a Kaiser Permanente study of patients with penile cancer found that 16% of patients with carcinoma in situ had been circumcised, but only 2% of patients with invasive penile cancer had been circumcised.  Since the circumcision rate among Kaiser patients of the appropriate age was ~50%, this is in line with a roughly 90% reduction.
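As a back-of-envelope check of the Kaiser figures (ours, not from the study): with ~50% of the source population circumcised, the share of invasive cases who were circumcised implies a case-control odds ratio which, for a rare disease, approximates the relative risk:

```python
# Case-control odds ratio from the Kaiser numbers in the text.

cases_circumcised = 0.02       # share of invasive penile cancer cases circumcised
population_circumcised = 0.50  # circumcision rate in the source population

odds_ratio = (cases_circumcised / (1 - cases_circumcised)) / (
    population_circumcised / (1 - population_circumcised)
)
print(f"odds ratio ~ {odds_ratio:.3f} -> ~{(1 - odds_ratio):.0%} risk reduction")
# ~0.020, i.e. about a 98% reduction - at least as large as the ~90% figure
```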

While these are observational rather than prospective trials, the magnitude of the reduction is quite high.  It is unlikely to be due simply to class or race, given that it appears both when comparing countries and when comparing individuals within the same health care system.  Additionally, there is some association of penile cancer with HPV and a very strong association with phimosis, and circumcision reduces the rate of both.  This provides a highly plausible theoretical explanation of how circumcision might lead to this risk reduction in penile cancer.  However, it also raises the question of whether more aggressive future treatment of phimosis, combined with HPV vaccination, might somewhat reduce the rate of penile cancer in uncircumcised men.  Of course, more aggressive treatment of phimosis would mean more childhood circumcisions, which carry higher risk than infant circumcision.

Effect on transmission of HIV and STDs

HIV: Three large randomized controlled trials have been performed in South Africa, Uganda, and Kenya, together comprising over 11,000 men.  These men were randomized to be circumcised or not at the start of the studies, for primary HIV prevention.  The reduction in female-to-male HIV transmission seen in these studies is about 50%. This is consistent with observational studies and is the highest quality evidence available: three independent, large-scale randomized controlled trials with similar results, scrutinized by the Cochrane Collaboration. The studies were terminated early due to positive results, which is appropriate ethical practice but can tend to overestimate positive effects; however, since the results are consistent with the observational data, this is less of a concern. Some have expressed the concern that the two groups did not receive identical HIV counseling.

It is true that the circumcision group felt much more comfortable having sex without condoms, and additional counseling was given to the circumcision group to tell them this was not adequate protection.  Condom use was, despite the counseling, lower in the circumcision group than in the control. In one sense this means that the protective benefits of circumcision vs HIV may be understated. In another sense, this creates a large concern with advertising circumcision for the stated purpose of HIV prevention.  Any such efforts must be careful not to oversell the benefits and thereby reduce condom usage. Additionally, the results are only applicable to heterosexual HIV transmission. Homosexual transmission has not been shown to be decreased by circumcision, presumably because of the extremely high risk of receptive anal sex.  IV drug related transmission is almost certainly unaffected except via “herd immunity”.

The data for other STIs is far less compelling than for HIV. Secondary endpoints of the African HIV studies were other STIs, and rates of HPV and HSV were reduced by circumcision. This was only a secondary outcome, however, and other studies have had mixed results. The data for lower rates of bacterial vaginosis and trichomonas in female partners of circumcised men is somewhat stronger. However, none of these benefits are nearly as strongly supported or as high impact as the HIV reduction.  Additionally, when considering the benefits and harms of an intervention such as circumcision, there are strong reasons not to consider the benefits that accrue to the patient’s future partners, but instead to focus only on the individual in question.

It is ironic that the evidence for reduction in other STIs is fairly weak, because as historian David Gollaher shows in Circumcision: A History of the World’s Most Controversial Surgery, this is the primary reason the US adopted widespread circumcision in the early 20th century.  There had been very small-scale interest in circumcision due to religious ideas about masturbation and ideas about balanitis and phimosis causing systemic illness, but these ideas do not appear to have motivated a large number of circumcisions. Mainstream circumcision of healthy males caught on as a way to reduce STI rates – particularly syphilis. Physicians both in the US and UK saw the far lower rates of STIs Jews experienced than gentiles and attributed these primarily to circumcision. In the US, the time was just right for such STI reduction efforts – worries about infection were widespread and an increasing number of people were adopting hospital births where there was ready access to a physician able to perform a circumcision.

Meanwhile, during WWI and WWII the military offered circumcision to many conscripts to protect vs STIs (the wealthier officer class already having a much higher circumcision rate than the enlisted men as more of their parents could afford hospital births). The UK’s experience of WWI and WWII was quite different from the US’s. For one thing, STIs ranked far lower on the set of risks to soldiers. And rather than seeing a boom in hospital births, the UK’s medical resources were strained during WWI and WWII. Circumcision was seen as something of a waste compared to the UK’s more pressing needs. Presumably, arguments that positively presented Jews as having low STI rates did not catch on in early 20th century mainland Europe to nearly the extent that was seen in the US and UK.

Effect on UTIs

In the first year of life, the rate of UTIs is approximately 1% among uncircumcised boys and 0.1%-0.2% among circumcised boys.  Particularly in the first year of life, UTIs can be severe, causing fever and hospitalization as well as permanent kidney damage. Circumcision is presumably protective against UTI primarily by reducing the bacterial load around the urethra. Some sources have suggested that the difference is primarily one of contamination during sampling; however, studies looking only at clean-catch urine samples or suprapubic tap samples show similar reductions (~90%). Unlike many of the other benefits listed above, UTI avoidance is specifically a benefit of infant circumcision.

Effect on Penile Problems

Many penile problems, such as balanitis (inflammation of the glans), pathologic phimosis (inability to retract the foreskin), and paraphimosis (foreskin entrapment, which requires emergency treatment to preserve the penis), are prevented by circumcision.  Others, including meatal stenosis, scarring, and bleeding, are caused by circumcision. A New Zealand cohort study directly comparing the incidence of penile problems requiring intervention found a rate of 1.1% in circumcised children and 1.8% in uncircumcised children when followed to age 8.

Risks of surgery

The risks of surgery include pain, bleeding, bruising, inadequate foreskin removal, excess skin removal, swelling, meatal stenosis, scarring, infection, and anesthetic complications. These rates differ by age group: neonatal circumcision is associated with a much lower risk of complications than circumcision at other ages, although studies show a wide range of complication rates depending on practitioner training.  Overall, the rate of minor complications (bleeding, bruising) is ~1.5% worldwide and the rate of major complications (scarring, severe infection, meatal stenosis, or need for additional surgery) is <0.2%.  In comparison, the risk of complications in children past infancy and in adults is approximately 6% even with trained practitioners – significantly higher than for infant circumcision.  Indeed, the majority of cases of the most severe complication (penectomy) related to circumcision appear to occur in people who were not circumcised as infants.  This includes both adults with penile cancer and children undergoing phimosis surgery (as in the infamous case of David Reimer).

Sensitivity and Sexual satisfaction

There is a highly plausible mechanism by which circumcision could reduce sexual sensitivity: the foreskin is highly innervated (20,000 nerve endings is often repeated, but this appears to be a case of citogenesis and is likely far too high), produces lubrication for the penis, and is sensitive to light touch. Several studies demonstrate that the foreskin is more sensitive to certain forms of nonsexual stimulation than other parts of the penis. The glans itself does not change in sensitivity from circumcision. 

Sexual satisfaction, particularly in sexually active heterosexual men, seems to be unchanged by adult circumcision.  During studies of adult circumcision for HIV prevention, in which large numbers of men were randomized to receive circumcision either at the time of the study or after, sexual satisfaction did not significantly differ between the two groups.  On the other hand, a South Korean study of men circumcised as adults (as has become traditional there) found decreased pleasure from masturbation after circumcision.  It is certainly possible that both of these things are true – that masturbation is impaired by adult circumcision while intercourse is not.  It is also possible that the Korean study (retrospective, smaller than the African studies, and with much higher rates of scarring than are observed in the US) was unrepresentative.  Two European cohort studies, looking at circumcised and uncircumcised men in Denmark and Belgium, are also frequently cited. However, circumcision is quite rare in those countries, and the majority of the circumcisions in the study groups were performed to correct problems such as phimosis. These studies thus compare men who had penile problems requiring surgical correction to men who did not; it is therefore unclear why they are frequently cited in discussions of elective circumcision.

No available studies actually measure sensitivity to sexual stimulation, which is of course an important topic – but one requiring consummate professionalism on the part of the researcher.  We are left waiting for such a study, but in the meantime may reasonably fear that there is some decrease in at least masturbatory pleasure due to circumcision even though the evidence for this is weak. The evidence does not support any change in sexual pleasure otherwise. 

In addition, infant circumcision may differ from adult circumcision.  If circumcision eliminates important nerves, then thanks to brain plasticity infants are likely better able than adults to reassign the portions of the brain processing the foreskin to other areas of the penis.  A large survey of circumcised and uncircumcised men in the US (where infant circumcision is most common) found similar sensation in circumcised and uncircumcised men; the uncircumcised men appear to have had slightly higher incidences of sexual dysfunction. Also of interest, circumcised men appear to have an easier time obtaining oral sex, which may relate to subtle aspects of class or to the perceived cleanliness of the circumcised penis.

Ethics

The ethics of infant circumcision is a complex topic, and the answers likely depend on one’s ethical system.  The benefits of infant circumcision appear to outweigh the risks and harms. Additionally, it is safer to be circumcised as an infant than as an adult, and a significant portion of the benefits of circumcision accrue to infants and children. From a strictly utilitarian perspective, infant circumcision should therefore be encouraged – whether we consider society as a whole or only the boy in question.  However, autonomy is an important value, and while a man can become circumcised (missing only some of the benefits of having been circumcised as an infant), it is impossible to effectively restore the foreskin and become “de-circumcised”. An ethical system that heavily values personal choice over cost-benefit analysis may reasonably reject circumcision – especially one that rejects currently-widespread societal assumptions about parents making medical decisions for their children.  Furthermore, many of the benefits of circumcision accrue only to men who have sex with women. For men who exclusively have sex with men and for men who do not have sex, the benefits and risks are close to equipoise. There is a moral concern with performing a procedure that thus tends to reinforce heteronormativity and sex-normativity.

2019 Adversarial Collaboration Entries

Thanks to everyone who sent in entries for the 2019 adversarial collaboration contest.

Remember, an adversarial collaboration is where two people with opposite views on a controversial issue work together to present a unified summary of the evidence and its implications. In theory it’s a good way to make sure you hear the strongest arguments and counterarguments for both sides – like hearing a debate between experts, except all the debate and rhetoric and disagreement have already been done by the time you start reading, so you’re just left with the end result. See the 2018 entries for examples.

Eight teams submitted collaborations for this year’s contest:

1. “What are the benefits, harms, and ethics of infant circumcision?” by Joel P and Missingno

2. “Is eating meat a net harm?” by David G and Froolow

3. “Does calorie restriction slow aging?” by Adrian L and Calvin R

4. “Should we colonize space to mitigate x-risk?” by Nick D and Rob S

5. “Should gene editing technologies be used in humans?” by Nita J and Patrick N

6. “When during fetal development does abortion become morally wrong?” by BlockOfNihilism and Icerun

7. “Will automation lead to economic crisis?” by Doug S and Erusian

8. “How much significance should we ascribe to spiritual experiences?” by Seth S and Jeremiah G

(if any of you are unhappy with how I named you or titled your piece, let me know)

At the end of the two weeks, I’ll ask readers to vote for their favorite collaboration, so try to remember which ones impress you. I think we’re all winners by getting to read these – but the actual winners get that plus $2500 in prize money. Thanks again to everyone who donates to the Patreon for making that possible.

Please put any comments about the contest itself here, not on the individual entries.

Symptom, Condition, Cause

On my recent post on autism, several people chimed in to say that “autism” wasn’t a unitary/homogenous category. It probably lumps together many different conditions with many different causes. It’s useless to speculate on the characteristics of “autism” until it can be separated out further.

I get this every time I talk about a psychiatric condition. The proponents of this view seem to think they’re speaking a shocking heresy that overturns the psychiatric establishment. But guys, we know this kind of stuff. Psychiatric diagnoses don’t have to perfectly match underlying root causes to be useful.

Suppose a patient comes to you with difficulty breathing, excessive sweating, anxiety, and extreme discomfort when lying down flat. You recognize these as potential signs of pulmonary edema, ie fluid in the lungs. You do an x-ray, confirm the diagnosis, and prescribe symptomatic treatment – in this case, supplemental oxygen. All of this is good work.

But you can have fluid in your lungs for lots of different reasons. Most of the time it’s heart failure, but sometimes it’s kidney failure, pneumonia, drug overdose, smoke inhalation, or altitude sickness. Some of these causes will have slightly different symptoms, which an alert doctor can notice.

Suppose the real cause of your pulmonary edema is heroin overdose. In that case, it wouldn’t be fair to call pulmonary edema a “root cause”. The root cause of your problem is the heroin. But you also can’t call pulmonary edema merely a “symptom”. No patient comes in saying “Doc, I’m feeling a bit pulmonary edemic today”. The symptoms of pulmonary edema are difficulty breathing, excessive sweating, anxiety, etc. So what is pulmonary edema?

I don’t know the technical philosophy-of-medicine term for this, but let’s call it a “condition”. A condition which nobody has yet matched with a biological process gets dubbed a syndrome – a set of symptoms that go together even if we remain agnostic about why. A condition which has been matched to a biological process ends up like pulmonary edema – such a well-known part of the medical canon that nobody feels the need to do philosophy around it.

Lots of things are conditions like this. Even some universally-known diseases like stroke are better thought of as conditions than root causes. Strokes can be caused either by ischaemia (usually a blood vessel blocked by a clot) or haemorrhage (a blood vessel bursting and bleeding out). These two causes have differing risk factors (anticoagulants cause haemorrhagic stroke but protect against ischaemic) and differing treatments (tPA relieves ischaemic stroke but catastrophically worsens haemorrhagic).

But nobody ever bursts into neurology conferences shouting “STROKE ISN’T A REAL DISEASE, IT’S A COBBLED-TOGETHER BASKET OF MULTIPLE DIFFERENT ROOT CAUSES!” Everyone realizes that conditions are a useful intermediate level to work at.

This is how I feel about things like depression too. No psychiatrist would be even a tiny bit surprised to hear that depression is many different conditions with many different causes. For example, everyone knows some depressions are caused by hypothyroidism, and others aren’t.

The biggest difference between the philosophical status of depression vs. stroke is that we know what biological process stroke corresponds to. Stroke is brain cells dying from lack of oxygen. It can be caused by arterial blockage or by bleeding, sometimes it can even have more distal causes like cocaine use or Moyamoya disease, but it all ends with brain cells dying from lack of oxygen. That in turn produces classic symptoms like sudden-onset slurred speech, hemiparalysis, and facial asymmetry.

We don’t have as good an idea what biological process depression corresponds to. There are some theories – maybe a failure of synaptogenesis – but they’re all pretty speculative right now. Still, I think it’s reasonable to propose that they correspond to some process.

First, because depression includes a lot of surprising symptoms mysteriously clustered together. Just as without the concept of “stroke” you can’t explain why slurred speech and hemiparalysis happen together so often, so without the concept of “depression” it’s hard to explain why SIGECAPS tend to go together. The only good alternative I’ve heard here is the idea of symptom networks. But I no longer find this very convincing, and it never seems to be what the people talking about how “depression isn’t a single disorder” mean.

Second, because at this point we don’t even know what biological process normal low mood corresponds to, but it seems like it has to be something, and it would be strange for a single biological process to cause low mood and not be related to depression.

My (very wild) guess is that in the end psychiatric disorders will mostly turn out to be computational conditions. That is, something like “the learning rate of this system is set too high” or “the threshold for errors in this error-detector is too low”. There will be lots of different things that will cause that, from biological (because these computations are implemented on biological systems including the usual range of things like serotonin and dopamine and synapses) to psychological (because the brain is plastic enough that its computational parameters can change with experience) to environmental (because if you pour a bucket of battery acid onto a computer, probably its computational parameters will change in some way). This is just my personal bias towards computational explanations speaking, and it could be that these disorders will be better explained by regional stories (ie “the amygdala is broken” or “the hippocampus is broken”), by biochemical stories (“there’s too much serotonin”), by structural stories (“there are too few synapses”), by some combination of these, by something totally different, or by something that’s on a totally different level than any of this.

If something like this story is true, it means that research that treats depression as a single condition might or might not work. Returning to the analogy of stroke, I think (though I’m not an expert) that the prognoses for ischaemic and haemorrhagic stroke are mostly similar, since both depend on how long it takes the brain to adapt after some cells have died. But the risk factors for these two kinds of stroke are different (again, anticoagulants protect against one and cause the other). Scientists who were researching “stroke”, without understanding the different causes, would get some things right and end up confused about others.

Some people, upon hearing this, say that we should be trying to figure out the different kinds of depression so we can do real research on those. People have been trying this for a century, and every one of their leads has been false. Traditional psychiatry flirts with admitting two subtypes of depression, but you can also find papers claiming to have found three subtypes, four subtypes, five subtypes, etc. Even papers that agree on how many subtypes there are often identify the subtypes totally differently. This has not been a productive research program, and I think a better understanding of what depression is will be more valuable than bashing our heads against the subtype identification problem further. At least this is how it has always worked in regular medicine, where once we realized what eg pulmonary edema was, everything fell into place (including potential root causes) and nobody felt like figuring out exactly how many subtypes there were was a very interesting problem.

The saying goes: all models are wrong, some models are useful. I don’t think existing psychiatric diagnosis is particularly accurate, but I think it’s the most useful thing we have right now. And I don’t think talking about how each condition is probably made up of many root causes is a particularly damaging objection to it. We should keep the likely heterogeneity in mind and pull it out when we need it, but we shouldn’t use that as an excuse to abandon the whole nosology.
