Slate Star Codex


Please Take The 2020 SSC Survey!

Please take the 2020 Slate Star Codex Survey.

The survey helps me learn more about SSC readers and plan community events. But it also provides me with useful informal research data for questions I’m interested in, which I then turn into interesting posts. My favorite was 2018’s Fight Me, Psychologists: Birth Order Effects Exist And Are Very Strong, which I think made a real contribution to individual differences psychology and which could not have happened without your cooperation. But last year I also got to debunk a myth about how mathematicians eat corn, fail to replicate supposed dangers of beef jerky, and test a theory of how fetishes form. I expect this year’s research to be even more interesting.

The survey is open to anyone who has ever read a post on this blog before December 30 2019. Please don’t avoid taking the survey just because you feel like you’re not enough of a “regular”. It will ask you how much of a “regular” you are, so there’s no risk you’ll “dilute” the results. The survey will stay open until mid-January, and I will probably be begging and harassing you to take it about once a week or so until then.

This year’s survey is in two parts. Part I asks the same basic questions as previous years and should take about ten minutes. Part II asks more questions on research topics I’m interested in and should take about fifteen minutes. It would be great if you could take both parts, but if 25 minutes sounds like too much surveying to you, you can also just take Part I.

As always, the survey is plagued by fundamental limitations, poor technology, and my own carelessness. A couple of things to watch for:

– Once you click a box on a Google form, you cannot un-click it – i.e. you can change your answer but you can’t unanswer the question. If you click a box you didn’t mean to, please switch your answer to “Other” if available; if not, then choose the most boring inoffensive answer that is least likely to produce surprising results. I realize how bad this is but there is apparently no way around it.

– By default, all responses will be included in a public dataset for anyone who wants to analyze them. Your responses will obviously not be attached to your name or any similarly blatant identifying information, and this year I’m going to further bin a couple of especially identifiable categories like age, but if you’re the only supercentenarian Mongolian reader or something, you might still be identifiable. There is an option to make all your responses private; if identifiability bothers you, feel free to check it and you will not be included in the public dataset.

– Due to poring over a 5000 entry spreadsheet not actually being that much fun, I am not up for changing your answers after you submit them. Please do not email me asking me to do this. This includes your answer to the privacy question. Please figure out whether you want privacy before taking the survey.

That having been said, you are all great, and I super-appreciate any survey-filling-out you are willing to do. If you can donate about a half-hour, I hope I can pay you back in interesting findings and useful crowd-sourced life advice. I also plan to pay up to two randomly selected respondents back with a large monetary prize (with some caveats I hope you’ll find fun, see the last section for details). So:

Take the 2020 Slate Star Codex Survey here!

Posted in Uncategorized | Tagged , | 408 Comments

Open Thread 144

This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit – and also check out the SSC Podcast. Also:

1. Voting for the Adversarial Collaboration Contest winner is still going on. I will leave voting open until this Friday, so try to vote before then.

Posted in Uncategorized | Tagged | 469 Comments

Please Vote For ACC Winner

I’ve now posted all eight adversarial collaborations.

In case you missed any, you can find a list of them (with links) here.

If you have read all the collaborations, please vote on your favorite. This year I will decide the winner by popular vote; I don’t feel like putting my finger on the scale this time. I will give $2000 to the first place winner and $500 to second place. You can vote for your favorite collaboration here. No, you may not vote for the Grinch.

Thanks again to all participants, readers, and voters.

[ACC] How Much Significance Should We Ascribe To Spiritual Experiences?

[This is an entry to the 2019 Adversarial Collaboration Contest by Jeremiah Gruenberg and Seth Schoen]

1. Introduction

This project seeks to explore the viability of spiritual or religious experiences as empirical evidence for a component of reality that transcends or is radically different from our ordinary experience. The question at hand is not the existence of God or higher powers, nor the failures, successes, or benefits of religion, but rather the role of spiritual experience in the human understanding of the nature of reality. We formulated the topic in controversy this way:

The empirical study of the content and nature of people’s personal spiritual experiences justifies taking them seriously as evidence of an important component of human life deserving of individual and collective exploration.

Our fellow human beings have always had unusual experiences that they found special and meaningful, but often struggled to interpret or place in the context of their ordinary lives. These experiences and their interpretation have aroused intense controversy, both because people have deployed them as support for their views on contested issues about the nature of reality, and because they may arise in settings where one could easily question whether the brain’s altered perceptions and understandings are enhanced or impaired. Another source of debate is how radically different individuals’ experiences—and their personal interpretations of the origins and meanings of those experiences—can be. Finally, spiritual experiences are often reported through a cultural lens that leads to questions about how accurately and objectively people could perceive and describe the unusual things that they perceived.

We emphasize that there is no question, even from the most skeptical perspective, of insisting that individuals alter their own views or memories of what they have witnessed (although we encourage people to question their interpretations and to become aware of factors that could raise doubts about those interpretations). What is rational or plausible for each person to believe at a particular moment can be different, and in any case the way that people interpret their own experience and history will be different. If you have had a spiritual experience whose nature and meaning you find evident and certain, others may offer you alternative interpretations and evidence against your view, but can’t demand that you change it. However, we find it interesting to consider what lessons others can draw from accounts of unusual experiences and perceptions: not so much what sort of evidence your own spiritual experiences may constitute for you, but rather what sort of evidence your accounts of them may constitute for others. Can we collectively learn anything from these experiences?

One objective of this project is to explore empiricism as a key to a “common language” which allows all perspectives to discuss the significance of spiritual experience—not just those who are predisposed to a traditional theistic model of reality. Empiricism seems to be a major contender in the competition to find common ground surrounding spirituality. It is both experience-based and rational. Properly employed, empiricism may allow for a rational discussion of personal experience.

We’ve structured this article in nine sections:

  1. Introduction.

  2. Definitions of Empiricism, Experience, Knowledge, and Spirituality: a discussion of some important terms, as well as the coherence and conflict between empiricism and rationality.

  3. Psychological Research on Spiritual Experience.

  4. Epistemology and Religious Experience: this section focuses on William P. Alston’s treatment of how mystical perception may justify the generation of personal belief.

  5. Near Death Experiences: a review and discussion of a major work on the significance of NDEs from the perspective of a scholar who suggests that their meaning is largely symbolic.

  6. The Use of Entheogens: a look at a recent meta-study which reviews the data of five different studies on spiritual experiences resulting from the use of entheogenic substances, and some other sources on entheogens in religion.

  7. The Problem of Dreaming: an objection to the interpretation of spiritual experiences as having anything other than personal, momentary significance.

  8. Some Possible Perspectives: the collaborators share their various ideas on how the empirical and philosophical content of the paper thus far might be viewed or understood.

  9. Concluding Thoughts.

2. Definitions of Empiricism, Experience, Knowledge, and Spirituality

The first step in exploring this statement is to interact with philosophical perspectives on the interrelated concepts of empiricism, experience, and spirituality in an attempt to define our terms.

Empiricism can be variously defined. Essentially, it treats sensory experience as a generator of knowledge.

Empiricism was an integral—perhaps the fundamental—component of the emergence of scientific inquiry as we know it. Wolfe and Gal write:

It was in 1660s England, according to the received view, in the meetings of the Royal Society of London, that science acquired the form of empirical enquiry that we recognize as our own: an open, collaborative experimental practice, mediated by specially-designed instruments, supported by civil, critical discourse, stressing accuracy and replicability. Guided by the philosophy of Francis Bacon, by Protestant ideas of this-worldly benevolence, by gentlemanly codes of decorum and integrity and by a dominant interest in mechanics and a conviction in the mechanical structure of the universe, the members of the Royal Society created a novel experimental practice that superseded all former modes of empirical inquiry – from Aristotelian observations to alchemical experimentation.1

However, it is important to note that empiricism was popularized as a philosophical concept in the first half of the 20th Century (by such figures as A.J. Ayer, Rudolf Carnap, Kurt Gödel, Karl Popper, Hans Reichenbach, and Ludwig Wittgenstein), and began to take various shapes. Since then, there have been major disagreements on exactly what empiricism entails and how it functions.2 However, the Stanford Encyclopedia of Philosophy states: “Since antiquity the idea that natural science rests importantly on experience has been non-controversial.”3

Perhaps most germane to the topic of this investigation is the fact that empiricism conflicts with pure rationality. It is easy to recognize the limitations and failings of human experience as a reliable source of truth.4 However, since all inputs to human cognition are fundamentally experiential in nature (e.g. the senses of sight, hearing, etc.), the issue of experience must be addressed in any epistemological mode.

In reviewing this conflict between rationalism and empiricism, Markie summarizes the empiricist position with this thesis: “We have no source of knowledge in S or for the concepts we use in S other than sense experience.”5

On the other hand, Markie provides three theses which summarize the rationalist position:

The Intuition/Deduction Thesis: Some propositions in a particular subject area, S, are knowable by us by intuition alone; still others are knowable by being deduced from intuited propositions.

The Innate Knowledge Thesis: We have knowledge of some truths in a particular subject area, S, as part of our rational nature.

The Innate Concept Thesis: We have some of the concepts we employ in a particular subject area, S, as part of our rational nature.

Taking these views into consideration, it seems that the human being is still left in an empiricist position, in that our existential state precludes non-experiential data gathering. Intuition itself is formed by lifelong experience. Nor is our “rational nature” as purely rational as we might hope. Certainly, we should temper the negative subjective qualities of experience, but it seems impossible to circumvent experience altogether in any embodied human epistemology.

It is true that our experiences must be tempered with objective rationality. However, most humans naturally function primarily in an empirical manner in the formation of worldview, beliefs, and knowledge. The relevant question is therefore not whether some form of empiricism is at play in epistemology in general, but what its role should be. It seems the pure rationalist would exclude all subjective experiential sources of knowledge, even if humans naturally engage in—and rely on—such subjective experiential sense-making.

When it comes to issues of spirituality, a materialistic presupposition would immediately dismiss all appeals to experience. Such a presupposition precludes any engagement with the metaphysical due to its (supposed) nonexistence. A committed materialist would not even investigate the possibility that spiritual experiences have anything but a neurological/biochemical cause. However, if such a presupposition may be suspended, empiricism may hold the key in explaining and/or understanding a spiritual worldview.

What then constitutes “spirituality” in this conversation? The authors of this collaboration would include such experiences as meditative suspensions of self, encounters with the divine in any religious context, near-death experiences, and transcendental uses of entheogens6. Spirituality may be theistic, or it may not. Examples of theistic spirituality are easy to come by. An example of a non-theistic approach to spirituality is found in Sam Harris’ Waking Up, which advocates the use of meditation derived from Buddhist practices to attain altered states of consciousness. While we (the collaborators) might each define spirituality somewhat differently, we find this more general approach helpful to foster conversation on the empirical nature of spiritual experience.

Zinnbauer argues that the terms religion and spirituality are very similar in meaning, but that religion is a narrower term as it is limited to a traditional or institutional context. Zinnbauer writes:

Thus, according to these definitions, spirituality is a broader term than religiousness. Spirituality includes a range of phenomena that extends from the well-worn paths associated with traditional religions to the experiences of individuals or groups who seek the sacred outside of socially or culturally defined systems. For example, an individual’s spirituality may include feelings of devotion, memories of a mystical experience, gatherings with other seekers, rebellion against a culture antagonistic to such a search, and a sense of unity with all sentient life. Significant changes in any of these levels or developmental strands may change the search itself. Development of a serious illness, for example, may change feelings of devotion to confusion or anger, make gatherings more difficult to attend, and cause psychological isolation from a sacred connection to others.7

Pargament provides a slightly different contrast between the two:

In short, spirituality is highlighted as a distinctive dimension of human functioning in the…. Spirituality alone addresses the discovery, conservation, and transformation of the most ultimate of all concerns, the sacred. Yet religiousness is not viewed as inconsistent with or an impediment to spirituality. In fact, spirituality is the core function of religion. Indeed, considerable religious energy is dedicated to helping people integrate the sacred more fully into their pathways and destinations of living. But to succeed at this task, religion accepts and attempts to address the full range of human strivings. Thus, as defined here, religiousness represents a broader phenomenon than spirituality, one that is concerned with all aspects of human functioning, sacred and profane.8

However, Zinnbauer and Pargament note that culturally, spirituality seems to be supplanting religion in a few ways. Spirituality is now seen as encompassing “sacred or existential goals in life, such as finding meaning, wholeness, inner potential, and interconnections with others….”9 They continue: “In contrast, religiousness is substantively associated with formal belief, group practice, and institutions.”10

3. Psychological Research on Spiritual Experience

We were impressed by the existence of numerous empirical psychological studies of spiritual experience. Two recent major works which reveal this breadth of research are The Psychology of Religion: an Empirical Approach by Hood, Hill, and Spilka11, and Handbook of the Psychology of Religion and Spirituality, edited by Paloutzian and Park12. The significant earlier studies of spiritual experience include those of Harvard psychologist William James (Varieties of Religious Experience: A Study in Human Nature) and Alister Hardy (The Spiritual Nature of Man: Study of Contemporary Religious Experience). “Both James and Hardy affirmed the evidential value of religious/spiritual experiences as at least hypotheses suggesting the existence of a transcendent reality variously experienced.”13 (James famously suggests that spiritual experiences are difficult to understand or evaluate, but that they are compelling and widespread enough to “forbid a premature closing of our accounts with reality.“)

Paloutzian and Park recognize the limitations of psychological research on the nature of spiritual experience in this way:

our job as scientific psychologists of religion is to create good theory to explain religiousness in a way that allows the theory to be assessed against evidence. This means ideas about possible causal factors that are not, in principle, capable of being tested against evidence may be interesting, but they do not meet the criteria necessary to bear upon our theory construction process.14

Current Research

Surveys indicate that somewhere between one third and one half of the population has had some sort of significant religious experience.15 Such experiences are correlated with gender, education, and social class—being more common for women, for those with higher education, and for those in higher social classes.16 Hood et al. write: “Women report more such experiences than men; the experiences tend to be age-related, increasing with age; they are characteristic of educated and affluent people; and they are more likely to be associated with indices of psychological health and well-being than with those of pathology or social dysfunction.”17 Investigations into the heritability of religiosity, particularly through twin studies, place it between 0% and 50%.18
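Twin studies of this kind typically estimate heritability by comparing trait correlations in identical (MZ) and fraternal (DZ) twin pairs. As a rough sketch of how a figure in the 0% to 50% range might arise, here is Falconer’s classic formula; the correlation values below are hypothetical illustrations, not numbers from the studies cited:

```python
def falconer_heritability(r_mz, r_dz):
    """Estimate heritability from twin-pair trait correlations.

    Falconer's formula: h^2 = 2 * (r_mz - r_dz), where r_mz and r_dz
    are the trait correlations for identical and fraternal twin pairs.
    The raw estimate is clamped to [0, 1], since sampling noise can
    push it outside that range.
    """
    h2 = 2 * (r_mz - r_dz)
    return max(0.0, min(1.0, h2))

# Hypothetical correlations chosen to land in the reported range:
print(round(falconer_heritability(0.45, 0.30), 2))  # ~30% heritable
print(round(falconer_heritability(0.40, 0.40), 2))  # no MZ/DZ gap -> 0%
```

The intuition: identical twins share all their genes while fraternal twins share about half, so any excess similarity among identical pairs is attributed to genetics.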

It was also discovered that people in the United States, Australia, the United Kingdom, and Scandinavia do not tend to share their spiritual experiences with others. Hood et al. wonder if this is why such spiritual experiences are thought to be uncommon (as fewer people in these societies might have heard reports of others’ spiritual experiences).

Due to the internal and personal nature of spiritual experiences, data on the topic are most often gathered through surveys, questionnaires, and interviews. The accuracy of such self-reporting can be measured—with some testing suggesting that a percentage of responses are likely false positives or false negatives (the pro-religious tending to over-report spiritual experience and the anti-religious to under-report it).19 Another research mode is to identify activities that trigger spiritual experience and study individuals undertaking those activities in a laboratory setting. Such activities include prayer and meditation.20 As we will discuss in another section, they also increasingly include the use of psychoactive substances.

Hood et al. admit that the over-reliance on self-reporting is a difficult hurdle in the study of spiritual experience, particularly because of the potential pitfalls of bias, including “intentional deception, impression management, personal bias, and many more.”21 They discuss the alternative uses of other kinds of measurement, such as physiological and behavioral measures, and the increasing use of the Implicit Association Test.

One study indicated that religious experience of reading Psalm 23 engaged the frontal and parietal lobes, while nonreligious experience of reading the same psalm involved the amygdala (which was not active during the religious experience).22 “On the basis of these findings, Azari and her coworkers have proposed that religious experience is likely to be a cognitive process utilizing established neural connections between the frontal and parietal lobes.”23

Addressing whether spiritual experiences are the result of a psychiatric disorder, Hood et al. note that “both normal and psychotic individuals can have mystical experiences” and that this is backed up by empirical research.24 Such research found that the difference between normal mystics and psychotic mystics is that “The psychotic mystics exhibited resistance and rigidity, as opposed to the normal mystics, who exhibited openness and fluidity. Thus it is not simply mystical experience, but the reactions to the experience, that distinguish psychotic from normal mystics.”25

The Nature of Spiritual Experience

Some evolutionary psychologists argue that religious experience arose because it conferred an adaptive advantage. Kirkpatrick writes, “Hypotheses about the adaptive function of such religious instincts have ranged from defense against fear of death or other forms of comfort and anxiety reduction to group-level benefits such as promoting cohesion and solidarity or reducing conflict.”26 However, Kirkpatrick argues that the evidence does not point to adaptive advantage. He writes, “My own view…is that the diverse collection of phenomena we refer to as ‘religion’ represent a collection of by-products of numerous adaptations with other specific, mundane functions.”27 Kirkpatrick continues,

With respect to religion, beliefs about the existence of supernatural forces and beings appear to emerge as a spandrel-like by-product of evolved systems dedicated to understanding the physical, biological, and interpersonal worlds (Boyer, 1994, 2001). For example, an evolved agency-detector mechanism, designed to distinguish animate from inanimate objects in the world, can be fooled fairly readily to produce psychological animism and anthropomorphism (Atran, 2002; Atran & Norenzayan, in press; Guthrie, 1993), as when we find ourselves cursing at our aforementioned computer when it crashes. Once these spandrel-like effects enable ideas about gods and other supernatural beings, I have suggested, specific forms of religious belief emerge as by-products of psychological mechanisms dedicated to processing information about functionally distinct kinds of interpersonal relationships—attachments, kinships, dominance and status competitions, social exchange relationships, friendships, coalitions, and so forth—that whir into action to shape specific beliefs and expectations about these beings and guide behavior toward them. Thus, for example, gods might be perceived as attachment figures, dominant or high-status individuals, or social exchange partners, with each possibility leading to a different set of expectations and inferences about those gods’ behavior and decisions about how to best interact with them—processes emerging from functionally distinct psychological systems designed to solve such adaptive problems in human relations (Kirkpatrick, 1999, 2005).28

Theories related to agency detection are popular with religious skeptics; some have noted that in a dangerous world of predation or intergroup violence, wrongly failing to perceive agency and intelligence where they are present has greater adverse survival consequences than wrongly perceiving them where they are not present.
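The asymmetry argument can be made concrete with a toy expected-cost calculation. The numbers below are purely illustrative assumptions (not from any cited study): if missing a real predator costs vastly more than a needless flight response, then even when agents are rare, a policy of over-detecting agency has the lower expected cost.

```python
def expected_cost(assume_agency, p_agent, cost_false_alarm, cost_miss):
    """Expected cost of a fixed policy toward an ambiguous stimulus.

    assume_agency: whether the policy treats the stimulus as an agent.
    p_agent: prior probability the stimulus really is an agent.
    A false alarm (fleeing from the wind) is cheap; a miss (ignoring
    a predator) is catastrophic.
    """
    if assume_agency:
        # Pay the false-alarm cost whenever there is in fact no agent.
        return (1 - p_agent) * cost_false_alarm
    # Pay the miss cost whenever there is in fact an agent.
    return p_agent * cost_miss

# Illustrative numbers: agents are rare (5%), but a miss is 1000x
# worse than a false alarm.
overdetect = expected_cost(True, p_agent=0.05, cost_false_alarm=1, cost_miss=1000)
ignore = expected_cost(False, p_agent=0.05, cost_false_alarm=1, cost_miss=1000)
print(overdetect < ignore)  # over-detection has the lower expected cost
```

Under these assumptions, selection would favor a hair-trigger agency detector, which is exactly the mechanism the skeptical account invokes.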

Neuropsychologists, however, point to a combination of “cognitive operators” in the brain which together give rise to human religiosity. “The term ‘cognitive operator’ simply refers to the neurophysiological mechanisms that underlie certain broad categories of cognitive function. Thus, these operators do not exist in the literal sense, but can be useful when considering overall brain function.”29 The “causal operator” works with any series of perceived events and attempts to organize them back to an original cause. Such perception and organization are subjective and may not arrive at an accurate conclusion. Newberg and Newberg write: “We have proposed that when no observational or “scientific” causal explanation is forthcoming for a strip of reality, gods, powers, spirits, or some other causative construct is automatically generated by the causal operator (d’Aquili & Newberg, 1997).”30 A second operator proposed as functioning in the development of spiritual experience is the “holistic operator”: “The proposed holistic operator permits reality to be viewed as a whole or as a gestalt, as well as the abstraction from particulars or individuals into a larger contextual framework.”31

How to View Religious/Spiritual Experience

The following quotes are summaries of current positions of psychologists studying religious and spiritual experience. They represent foundational views in the field and we have chosen therefore to quote them at length.

Hood et al. warn against the danger of reductionism in the psychological study of spiritual experience, stating that it is dangerous “to reduce the richness and complexity of religious experience to a favorite psychological construct.”32 Elsewhere, they write:

The empirical study of religiousness has many great challenges. The first of these challenges considered here is how to maintain the scientific standards of good empirical work, always the goal of science, without sacrificing the richness and depth of the object of study. We have gone to considerable lengths to make the case that religious experience should not be reduced to specific psychological processes. It is tempting to do so when one adopts the naturalistic perspective that underlies scientific investigation, and to ignore the meaning system of the people being studied. What is needed is some nonreductionistic accounting of the phenomena of interest, but without abandoning scientific methodology and thus not reaping the benefits that it provides.33

Zinnbauer and Pargament write:

A controversy that often is raised in discussions of measurement and definition is that of reductionism, the process of understanding a phenomenon at one level of analysis by reducing it to presumably more fundamental processes (see discussions in Idinopulos & Yonan, 1994, and Wilber, 1995). In some sense this process is unavoidable in scientific study (Moberg, 2002; Segal, 1994). However, reductionism is often accompanied by a loss of information. For example, the reduction of mystical experiences of oneness with the universe to a change in neurotransmitter levels eliminates information at all other levels (e.g., the cultural, social, familial, affective, cognitive, and behavioral). There may indeed be important physical correlates of such an experience, but to deny the relevance or value of other modes of interpretation and understanding is to commit the error of reductionism.34

Regarding the problem of the compatibility of empiricism with the notion of a genuine spiritual reality, Hood et al. write:

Although social scientists cannot confirm any ontological claims based upon mystical experience, they can construct theories compatible with claims to the existence of such realities. Hodges (1974) and Porpora (2006) have argued that the scientific taboo against the supernatural can be broken, as long as hypotheses about the supernatural can be shown to have empirical consequences. In Garrett’s (1974) phrase, “troublesome transcendence” must be confronted by social scientists as much as by theologians and philosophers.35

Newberg and Newberg write:

Western society has historically emphasized the importance of causality, technological advances, and empiricism. It is from these values that Western medicine, psychiatry, and psychology have developed. We propose that regardless of the connotation of the concept of spirituality in Western society, mystical and meditative experiences are natural and probably measurable processes that are and can be experienced by a diversity of people of different races, religions, and cultures. Those having spiritual experiences can have a variety of neuropsychological constitutions. In addition, it is important for clinicians to be sensitive and knowledgeable regarding spiritual and philosophical beliefs (Worthington, McCullough, & Sandage, 1996). Professionals need to be capable of distinguishing normal, healthy spiritual growth from psychopathology. We hope that some of the neurophysiological analysis described above might allow for a distinction between “normal” spiritual experiences and pathological states. In fact, such a nomenclature may be valuable for future psychological analysis of religious experiences. However, the fact that spiritual experiences have an effect on autonomic function as well as other cortically mediated cognitive and emotional processes suggests that such experiences not only affect the human psyche, but also can be carefully crafted to assist in the therapy of various disorders. It has already been shown that prayer and meditation can improve both physical and psychological parameters (Carson, 1993; Kabat-Zinn, Lipworth, & Burney, 1985; Kaplan, Goldenberg, & Galvin-Nadeu, 1993; Worthington et al., 1996). The more the underlying neurophysiological correlates of spiritual experiences are understood, the more such experiences can be analyzed and utilized in clinical practice. Therefore, spiritual experience can be very useful in clinical psychological and psychiatric practice. 
Furthermore, clinicians themselves can be instrumental in helping their patients toward personal and spiritual growth by discussing various meditative and/or spiritual practices and encouraging patients to approach these practices in an unambiguous manner. According to Rowan (1983), a humanistic psychologist, it is the self that is the missing link between the psychological and the spiritual. Therefore, it seems natural that spiritual experiences, such as those encountered in meditation and prayer, could become an adjunct to Western therapeutic practices and that developing oneself spiritually can become an important part of psychosocial as well as neuropsychological development.36

Hood et al. continue:

There is no reason why scientists cannot include specific hypotheses derived from views about the nature of transcendent reality in empirical studies of religious experience, as long as specific empirical predictions can be made. The source of the predictions may reference even the unobservable and the intangible. All that is required is that there be identifiable empirical consequences. As Jones (1986) has stated the case: “Invoking Occam’s Razor [i.e., the philosophical principle that the best explanation of an event is the simplest one] to disallow reference to factors other than sensory observable ones is question begging in favor of one metaphysics building up an ontology with material objects as basic” (p. 225). Jones echoes the classic claim of William James that mystics base their experience upon the same sort of processes that all empiricists do—direct experience. James would restrict the authoritative value of mystical experience to the person who had the experience, but would view it as a hypothesis for the social scientist to investigate (Hood, 1992a, 1995c). However, mystics are united in the belief that such experiences are real, and many nonmystics are convinced of the reality of the experience even if they personally have not had it. Thus, as Swinburne (1981) argues, mystical experience is also authoritative for others:

. . . if it seems to me I have a glimpse of Nirvana, or a vision of God, that is good grounds for me to suppose that I do. And, more generally, the occurrence of religious experience is prima facie reason for all to believe in that of which the experience was purportedly an experience. (p. 190)

Social scientists are often too quick to boast that their own limited empirical data undermine ontological claims. Religious traditions cannot be adequately understood without the assumption that transcendent objects of experience are believed to be real and foundational to those who experience them (Hood, 1995a). It is also possible that not only are they believed to be real, but that they are in fact real as well. Furthermore, their reality may be revealed in experience. Carmody and Carmody (1996, p. 10) define “mysticism” as “a direct experience of ultimate reality.” This definition remains a hypothesis capable of empirical investigation. To presuppose otherwise is less persuasive than once thought. Bowker (1973), after critically reviewing social-scientific theories of the sense of God, has noted that it is an empirical option to conclude that at least part of the sense of God might come from God. In our terms, religious views of the nature of the Real suggest ways in which it can be expressed in human experience. This can work in two directions, both deductively and inductively. Deductively, one can note that if the Real is conceived in a particular way, then certain experiences of the real can be expected to follow. Thus we can anticipate that expectations play a significant role in religious experience, often confirming the foundational realities of one’s faith tradition. Inductively, we can infer that if particular experiences occur, then the possibility that the Real exists is a reasonable inference—a position forcefully argued by Berger (1979). Thus we can anticipate that experiences, some unanticipated, may lead some to seek religions for their illumination. O’Brien (1965) has gone so far as to include in his criteria for a mystical experience that it be unexpected. Religious traditions adopt both options in confronting mystical and numinous experiences.
In this sense, a rigorous methodological atheism is unwarranted in the study of religious and mystical experiences (Porpora, 2006). Not surprisingly, then, mystical experiences have long been the focus of empirical research and provocative theorizing among both sociologists and psychologists. We first explore classic efforts to confront these experiences. These classic views are of more than historical interest, as they set the range of conceptual issues that continue to plague the contemporary empirical study of mysticism. Our focus upon classic views is not exhaustive. We focus upon representatives of three major social-scientific views regarding mystical experience: as erroneous attribution, as a heightened state of awareness, and as evolved consciousness.37

4. Epistemology and Religious Experience

Debates about the existence of God have often included the “argument from religious experience”; its advocates may cite their own experiences, or may claim that perceptions of some sort of divinity are a psychological or cultural universal, or nearly so. These arguments may involve evidence such as

  • The way that many people feel that they have met God or that God has spoken to them.

  • The way that many people have had some sort of experience or perception of a divine or spiritual realm or order.

  • The way that the human tendency to perceive or be interested in these topics (and to believe in, venerate, or attempt communication with deities or spiritual powers) is widespread across cultures, even though their interpretations vary so dramatically. (The notion of “natural religion” has sometimes been justified on the basis of supposedly universal notions among human societies, or supposedly universal experiences shared among human beings who contemplate the idea of the divine. In the view of proponents of this concept, we might have good evidence to believe in God or divinity as a result of widespread human experience of these things, but perhaps not good evidence to believe in specifics about the divine nature, which are much less widely agreed upon.)

While realizing that not everyone who has had a spiritual experience assigns any particular significance or interpretation to that experience, never mind formulating a specific theistic argument on the basis of the experience, we were interested in looking at how this argument is viewed by some of its proponents and opponents. In addition to reading online summary articles about arguments from religious experience, we chose to focus on William P. Alston’s groundbreaking philosophical work on empiricism regarding religious/spiritual epistemology. Alston introduces his book Perceiving God: The Epistemology of Religious Experience in this way:

The central thesis of this book is that experiential awareness of God, or as I shall be saying, the perception of God, makes an important contribution to the grounds of religious belief. More specifically, a person can become justified in holding certain kinds of beliefs about God by virtue of perceiving God as being or doing so-and-so. The kinds of beliefs that can be so justified I shall call "M-beliefs" (‘M’ for manifestation). M-beliefs are beliefs to the effect that God is doing something currently vis-a-vis the subject—comforting, strengthening, guiding, communicating a message, sustaining the subject in being—or to the effect that God has some (allegedly) perceivable property— goodness, power, lovingness. The intuitive idea is that by virtue of my being aware of God as sustaining me in being I can justifiably believe that God is sustaining me in being. This initial formulation will undergo much refinement in the course of the book.38

In order to avoid presupposing the existence of God, Alston specifies “the experiences in question as those that are taken by the subject to be an awareness of God (or would be so taken if the question arose).”39 In this way, it seems Alston is exploring the justification of taking spiritual experience as the basis of forming “M-beliefs” (basically, beliefs formed by mystical experiences) irrespective of whether such beliefs correspond with reality. As Alston states: “I want to make explicit at the outset that my project here is to be distinguished from anything properly called an ‘argument from religious experience’ for the existence of God…. It is rather that people sometimes do perceive God and thereby acquire justified beliefs about God.”40 In this sense, Alston focuses mainly on the philosophical legitimacy of people treating their own religious and spiritual experience as evidence for belief in the object of those experiences—not on the legitimacy of arguing that others, too, ought to do so.

This distinction is interesting. After all, having religious experiences and adopting specific attitudes with respect to their meaning or implications could theoretically be completely independent, so we could imagine encountering

  • Someone who has subjectively perceived God41 but is unsure of whether this experience was veridical or significant, and does not adopt or argue for a view that the perception was necessarily real.

  • Someone who has subjectively perceived God and is convinced of the reality of that perception (whether or not he or she supposes that others, who haven’t shared this perception, ought to agree).

  • Someone who has not had such an experience, but finds others’ accounts of their experience persuasive and is inclined to agree with their interpretations.

  • Someone who has not had such an experience and remains skeptical of others’ accounts or interpretations.

Of course, still other nuances are possible.

Alston appears particularly interested in countering those who maintain that individuals ought to rationally discount their own personal experience (especially those who feel that personal experience isn’t the kind of thing that could be rationally convincing in this realm). In some ways, Alston’s approach could be seen as focused on the analysis of individual rationality rather than on collective reasoning or persuasion.

In fact, Alston’s views on evidence and belief seem remarkably compatible with other accounts of rationality. His main point seems to be that experience of the divine is evidence that should lead the experiencer to update his or her beliefs in favor of a greater likelihood that the divine exists; one would decline to update only given strong prior convictions that the divine is absurd, or that perceptions of it carry a different kind of evidential value than other perceptions do. Alston accordingly dedicates long passages to arguing that there is no philosophical reason to think these perceptions have lesser evidential value than other kinds of perception, and to defeating arguments that they do.

We could probably successfully rephrase most of his line of argumentation with a Bayesian-rationalist flavor: “conceptually, there is nothing about the sense of the divine that makes it less rational to use it to update one’s beliefs about the objects of its perception than it would be to use other senses to update one’s beliefs about the objects of their perception.” Of course, how much to update will be largely informed by one’s priors, particularly here about naturalism or materialism, etc. The nut to be cracked here is this: how does rationality work when trying to learn about subject matter that is not perceived through the bodily senses but that presents itself as an analogously coherent or compelling perception? Alston’s position seems to include the notion that, even though we may have very high priors for naturalism, we shouldn’t confuse that with the choice to rule other kinds of perception out-of-bounds for belief updating.

Alston further clarifies his aim in this way:

Even if our age were firmly realist in its predilections, my central thesis would still be in stark contradiction to assumptions that are well nigh universally shared in intellectual circles. It is often taken for granted by the wise of this world, believers and unbelievers alike, that "religious experience" is a purely subjective phenomenon. Although it may have various psychosocial functions to play, any claims to its cognitive value can be safely dismissed without a hearing. It is the purpose of this book to challenge that assumption and to marshal the resources that are needed to support its rejection.42

While his argument is very detailed, we present here an overview.

He begins with a discussion on the phenomenology of spiritual experience. After reviewing a few self-reported descriptions of spiritual experience, Alston emphasizes that they all share a common dimension of the experience as being presented to the subject. In this way, Alston likens spiritual perception to natural perception, and distinguishes presentation from abstract thought. He suggests that this aspect of presentation demonstrates that such experiences do not arise internally and are not subjective in their inception. He furthers this point by reviewing several descriptions of spiritual experiences which include sensory perception. Alston’s phenomenology here is one in which “mystical perception” (his preferred term over “spiritual experience”) is a “putative direct experiential awareness of God.”43

Alston argues that a “direct awareness” of something—whether physical or mystical—is independent from beliefs, judgment, or concepts of the object of awareness.44 This is in general accord with Russell’s idea of “acquaintance” and Moore’s idea of “direct apprehension.” However, this view is in contrast with the view initiated by Kant that all perception is mediated by beliefs, judgment, or concepts. Alston argues against this view by stating that there is a difference between a direct awareness of something and the subsequent judgment that the object has some sort of property. In this way, Alston maintains that a person may be aware of an object without interpretational judgment on that object.45

The contrary view could easily be employed to dismiss any report of mystical experience. If all perception is mediated by belief, then those who believe in God could easily be interpreting a non-spiritual event through their spiritual presuppositions. The argument could be made that the subjects of mystical experiences quoted by Alston are not reliable sources regarding the nature of their experiences. Alston responds to this charge:

It is conceivable that one should suppose that a purely affective experience or a strongly held conviction should involve the experiential presentation of God when it doesn’t, especially if there is a strong need or longing for such a direct awareness. But even if an individual’s account of the phenomenology of his or her own experience is not infallible, it must certainly be taken seriously. Who is in a better position to determine whether S is having an experience as if of something’s presenting itself to S as φ than S? We would need strong reasons to override the subject’s confident report of the character of her experience. And where could we find such reasons? I suspect that most people who put forward these alternative diagnoses do so because they have general philosophical reasons for supposing either that God does not exist or that no human being could perceive Him, and they fail to recognize the difference between a phenomenological account of object presentation and the fact that a certain object, as the subject conceives it to be, presents itself to the subject’s awareness. In any event, once we get straight about all this, I cannot see any reason for doubting the subjects’ account of the character of their experience, whatever reasons there may be for doubting that God Himself does appear to them.46

Alston also notes that such experiences should be taken seriously because the subjects themselves consider alternative sources or interpretations for their experiences. In other words, it seems likely that such people remain rational regardless of the sometimes outlandish nature of their experiences, and are aware that their own perception may be at fault. The subjects rejected such alternatives, however, often due to the “presentation” aspect of the experiences—in other words, stating that they did not, themselves, produce the experience.47

Alston’s argument proper begins with the notion that spiritual experiences are perceptual in nature. That is, God presents himself to the subject in some manner. He takes that conclusion as the basis for his argument that it is reasonable to take such perceptions as indicative of reality. Alston introduces this argument in this way:

If what seems to me to be a direct experiential awareness of X puts me in a position to form justified beliefs about X’s perceptible features, that warrants me in supposing that X itself is indeed presenting itself to my awareness; otherwise how could the experience justify my beliefs about X? We have to stop short of the claim that the perceptual justification of perceptual beliefs entails that the experience is genuine perception. I may be perceptually justified in believing that there is a lake in front of me even if I am a victim of a mirage and no lake is being perceived. But this is just an isolated incident that occurs against the background of innumerable cases in which perceptual justification involves authentic perception of the object. It strains credulity to suppose that an entire sphere of putatively perceptual experience could be a source of justification for perceptual beliefs, while there is no, or virtually no, genuine perception of the objects involved. Therefore, if putative experience of God provides justification for beliefs about God, that provides very strong support for supposing that such experiences are, at least frequently, genuine perceptions of God.48

Alston’s concept of justification concerns the subject’s state of being justified in holding a belief, rather than the subject’s activity of justifying it. The difference is that someone may be justified in his or her own beliefs irrespective of attempting to argue for them.49

He clarifies his view of justification as indicating favorability of true belief, taking into account that it is a matter of degree. One may be justified in a belief, even if the evidence does not lead one to be completely certain.50

Alston’s argument for the justification of M-beliefs rests on three claims:

  1. A perceptual belief concerns a perceived object, no more and no less. In other words, a perceptual belief is that one is sensing a presented object.

  2. This belief is formed primarily by an experience of perception (according to the human senses).

  3. A perceptual belief is not based on prior beliefs or concepts.

Alston writes, “The theory of justification I am using takes justification to be a function of the adequacy of what the belief is based on. If it is based purely on experience, and that basis is adequate, it will be purely immediately justified. If it is based partly on experience and partly on other beliefs, its justification will be partly immediate and partly mediate.”51

He recognizes the mediating role of previously held beliefs in the interpretation of perceptual experience, but does not assert that such background beliefs are required for the formation of perceptual belief. Alston writes:

Background beliefs not infrequently figure in the total basis of perceptual beliefs, and in these cases the justification of the latter depends in part on the justification of the former. Nevertheless this is less common than it seems on first sight, and we can often explain the justificatory relevance of background beliefs without supposing them to be part of the basis, and so part of the prima facie justification. Thus there is considerable scope for purely immediately justified perceptual beliefs, even though partly mediately justified beliefs must also be taken into account.52

In applying this formulation to M-beliefs, Alston writes:

If God appears to me as φ (or at least so it seems to me), then that will contribute to justifying a belief that God is φ; if the belief is purely immediately justified, that will be the whole story. If one is aware of what one takes to be God as loving or almighty, then, if no partly doxastic basis is involved or required for justification, a belief that God is loving or almighty formed on that basis is thereby prima facie justified. If one is aware of what one takes to be God as comforting one or saying that P to one, then, with similar restrictions, a belief that God is comforting one or saying that P to one is thereby prima facie justified.53

Regarding whether God can be perceived, Alston lays out the summary of his argument as such:

To come to grips with the serious, unconfused problem here, we will have to cut through some unwarranted assumptions that may be behind these questions. We should not suppose that in order to succeed in perceptually recognizing an object of perception as X (i.e., become perceptually justified in believing, or perceptually know, that the object is X), it is necessary that the object appears to one as φ, where φ is a property uniquely possessed by X. To perceptually recognize your house, it is not necessary that the object even display features that are in fact only possessed by your house, much less features that only your house could possess. It is enough that the object present to my experience features that, in this situation or in situations in which I generally find myself, are sufficiently indicative of (are a reliable guide to) the object’s being your house. And so it is here. For me to recognize what I am aware of (X) as God, all that is necessary is that X present to me features that are in fact a reliable indication of their possessor’s being God, at least in situations of the sort in which I typically find myself. It is, again, not required that these features attach only to God, still less that they be such that they can attach only to God. And it is a matter for detailed investigation what sorts of appearances satisfy that condition, just as in the case of sensorily perceived objects.54

He then reviews the accounts of spiritual experiences he provided earlier in the book in order to identify the ways in which God presented his qualities as God.

In chapter 3, Alston surprisingly maintains that it is not possible to give adequate reasons for supposing that the beliefs formed by sense perception are reliable, even though it is common practice to do so.55 In this way, Alston casts doubt upon the entire enterprise of a sense perception basis for accurate epistemology—on both a natural and mystical level. Alston writes,

It is widely believed that we are in a much better position to judge that sense perception is a source of justification than we are in the case of theistic perception. Many even believe that we can show that sense perception is reliable, but not that mystical perception is. These convictions are used as a basis for downgrading the epistemic status of the latter and for denying that beliefs formed on the basis of theistic perception are justified. Looking carefully at attempts to show sense perception to be reliable will put us in a position to assess these views.56

Alston demonstrates that arguments attempting to prove the reliability of sense perception fail due to their epistemic circularity. Alston writes, “If we have to assume the reliability of SP [sense perception] in order to suppose ourselves entitled to the premises, how can an argument from those premises, however impeccable its logical credentials, provide support for that proposition?”57 The simplest version of the argument for the reliability of SP is that it is proven by its fruit. In other words, if our understanding of reality based on SP is most often confirmed through prediction and control of events, then SP is reliable. However, this argument suffers from epistemic circularity in that the only way to confirm the accuracy of the fruit of SP is through the use of SP. Alston reviews a number of arguments for the reliability of SP put forth by (or emerging from) Descartes, Wittgenstein, Oldenquist, Kant, and Locke. However, he finds them all lacking, primarily due to the pitfall of epistemic circularity.

We don’t think Alston’s views and other philosophers’ responses to them will be very exciting or edifying except to readers who are deeply interested in technical debates about epistemology. But we found that they sparked interesting conversations for us about skepticism and how people personally respond to skeptical arguments. They also led us to discuss the conditions under which people can learn from other people’s experiences, and the assumptions that we may need to use in order to assess the evidentiary value for us of other people’s beliefs.

5. Near Death Experiences

For some people, near death experiences (NDEs) are a compelling form of spiritual experience that can be life-altering in their consequences. Those who have them may believe that they have met God, angels, or other spiritual beings, that they have in fact died and then deliberately been sent back to life, or that they have received some kind of teaching or message from spiritual entities. In some cases, they may claim to have received information that would be objectively confirmable by others. Some of those who have such experiences have written popular books recounting them and arguing for specific religious or metaphysical views on the strength of their and others’ experiences.

One major scholarly work on near death experiences is Otherworld Journeys: Accounts of Near-Death Experience in Medieval and Modern Times by Carol Zaleski.58 Zaleski reviews historical accounts of NDEs, as well as modern ones.

Although their contents and interpretations vary, NDEs are remarkably common in the modern world. Among those who come close to dying, the percentage of those who experience NDEs appears to range from 34 percent to 43 percent.59

One reason why NDEs are ripe for research is their commonalities. Zaleski writes, “The [sympathetic] researchers agree that the similarities of near-death reports are more striking than their differences and see this unanimity as a key to the validity of near-death experience.”60 It has also been observed that NDEs do not always conform to the individual’s pre-existing desires or expectations, thereby contradicting the idea that they are merely comforting fantasies of some sort.61 Another argument for their validity is their lasting, transformative effects on the individual.62 Zaleski writes of the striking independent consistency of NDEs:

Age, sex, race, geographic location, education, occupation, religious upbringing, church attendance, prior knowledge of near-death studies, all have negligible effect on the likelihood of near-death visions. Suicide victims seeking annihilation, fundamentalists who expect to see God on the operating table, atheists, agnostics, and carpe diem advocates find equal representation in the ranks of near-death experiencers. And their answers to survey questions show that, for all the religious implications of near-death experience, a person’s beliefs about God, life after death, and heaven and hell do not determine the content of his vision.63

One study produced the following statistics: 60 percent described a sense of peace. 37 percent reported feeling a separation from the body. 23 percent entered a darkness. 16 percent saw light. 10 percent entered the light.64

There are a number of strong objections to viewing NDEs as experiences of a spiritual reality. Notably, NDEs by definition occur at moments of great damage and stress to the body, commonly in the course of partial or even complete failures of its systems. Thus, one could view NDEs as perceptions that are symptomatic of this damage in some way. One objection characterizes NDEs as hallucinations due to dysfunction in the nervous system.65 Another objection suggests that sensory deprivation leads to the experience of NDEs. If the brain is continually cut off from the ability to perceive or process external perceptions, it composes its own reality.66 (Immersion in a sensory deprivation tank, for example, often produces vivid sensory hallucinations.) If the neurological structure and functions of the brain alone are responsible for constructing these NDEs, then this would account for the commonalities of these experiences. Another criticism of the spiritual view of NDEs is the concept that human psychology will always attempt to deny death. “Although contemporary psychological treatments of this subject differ, all rest on the axiom that the mind will resort to any stratagem to push from view the prospect of its own annihilation….”67

However, Zaleski states that each individual objection cannot account for every reported NDE: “for every pathological condition presumed to cause near-death visions, one can find subjects who were demonstrably free of its influence….”68 Zaleski continues: “researchers cite statistics that show an inverse relationship between near-death experience and various pathological mind-altering conditions….”69 For example, the experience (or recollection) of NDEs seems to be inhibited by the effects of drugs and anesthetics. For this reason, it seems unlikely that drugs are responsible for the generation of NDEs. Zaleski writes, “Backed by the collective testimony of hundreds of subjects, researchers contrast the alert, blissful, lucid quality of near-death experience to the confusion, anxiety, and perceptual distortions that accompany such disorders as hypoxia, limbic lobe syndrome, autoscopy, depression, and schizophrenic hallucination.”70

While each individual biochemical or psychological explanation for the origins of NDEs seems to be dispatched by the variety of cases that do not conform to it, the critics’ suggestions taken as a whole might still explain all NDE cases by materialistic means. Zaleski writes: “any feature of near-death experience that is not finished off by endorphins can be dispatched by temporal lobe seizure, depersonalization, state-dependent birth recall, and so forth.”71

However, there are still arguments against this unified front. Firstly, if there were a number of underlying causes for NDEs, why is their nature so consistent? In other words, wouldn’t different biochemical processes produce different kinds of experiences? If consistency is a striking component of NDEs, it seems unlikely that there would be a lack of consistency in their origins. Secondly, Zaleski warns against a reductionistic view of NDEs when such heavy reductionism is not (usually) applied to general human experience. She writes,

After all, not only extraordinary visions but also normal states of consciousness are linked with electrical and chemical events in the brain, hormonal tides in the body, inherited drives, and cultural coercion. Yet we do not apply reductionist vetoes to our ordinary experience. Love can be explained in terms of neurochemical and social mechanisms, ranging from the influence of advertising to the lure of pheromones; but scarcely anyone suggests that knowledge of these mechanisms should prohibit people from believing that they are in love and rearranging their lives accordingly.72

Why should an NDE be dismissed as invalid or hallucinatory because we are able to point to associated neurological or biochemical processes? It may be important to interrogate the reasons why we may be inclined to discount the subjective experiences of NDEs even if naturalistic processes are at play. (The same sort of question will arise again when we ponder experiences with psychoactive substances.)

Zaleski herself advocates for a balance in understanding and appreciating the symbolic and narrative value of NDEs while being skeptical of their theological value. She writes,

Clearly, a new approach is needed; to make near-death testimony an arena for restaging old philosophical or theological battles will not suffice. It appears to be impossible, in any case, to determine objectively whether near-death reports are accurate or inaccurate depictions of the future life. It might therefore be more fruitful for theologians to consider near-death visions as works of the religious imagination, whose function is to communicate meaning through symbolic forms rather than to copy external facts. This is the aspect of near-death literature that I have attempted to highlight.73

In other words, Zaleski does not personally believe that NDEs point to an objective spiritual reality, but that they should nevertheless be afforded personal and cultural significance. She writes,

I suggest, therefore, that a pragmatic method and a sensitivity to symbol must go hand in hand if we wish to give a fair hearing to the claims of near-death literature. If we fully recognize the symbolic nature of near-death testimony (and accept the limits that imposes on us), then in the end we will be able to accord it a value and a validity that would not otherwise be possible; this in turn will yield further insight into the visionary, imaginative, and therapeutic aspects of religious thought in general.74

Zaleski seems to take a position which stands in weak support of the controversial statement at the core of this paper. Although she does not see NDEs as pointing to any sort of genuine mystical reality, she nevertheless sees value in such experiences. If Zaleski is right, and NDEs merely carry symbolic power, they would still be representative of an important component of human life as a kind of spiritual experience with powerful, lasting effects upon the individual.

6. The Use of Entheogens

Perhaps the topic most suited to empirical study is the use of entheogens to generate spiritual experience. This seems true for a few reasons: firstly, they are drugs whose chemical make-up can be scientifically studied and described, and secondly, their effects on the human body at the biochemical and neurochemical levels can also be scientifically studied and described. We also have the benefit of several decades of scientific research on these substances, in addition to extensive reports from people’s self-experimentation, and anthropological and insider accounts of their use within traditional religions. A number of psychedelic substances have also recently benefited from a new round of legally-sanctioned medical investigations aimed in part at evaluating their potential use in psychiatry.

Entheogens are psychedelic drugs or other preparations of comparable substances that tend to produce a subjective experience of a divine or spiritual nature. They are controversial on many levels; most are legally controlled substances, they are often seen by governments and health authorities as having the potential to be abused, and different religious traditions have radically opposed views about their harms and benefits75. It is also striking that they seem to offer such a reliable way to create experiences that most subjects view as meaningful and spiritual, although the content of these experiences and the way participants account for them varies tremendously76.

Writing for The Outline about her personal experience with psilocybin in a controlled setting at Johns Hopkins, Rachael Peterson states:

Like all trip stories, mine sound crazy at worst and clichéd at best. But I can tell you this much: at the peak of my experience, my sense of self dissolved and I unified with an abiding force that permeated all existence — something that felt conscious, vast, benevolent, eternal, peaceful, and furiously important. After sitting up on the couch six hours later, covered in snot and tears, I struggled to put words to an encounter that felt more real than everyday reality — a mind-bendy paradox characteristic of many mystical experiences.77

Similar accounts (often involving interaction with entities that seem conscious and intelligent) are described repeatedly in case reports by DMT researchers such as Rick Strassman78.

A recent paper analyzes the data of five groups: four who had experiences with various entheogens (psilocybin, LSD, ayahuasca, or DMT), and one whose members had spiritual experiences without the use of drugs.79 The authors note that although there are some differences between the experiences of the entheogen cohorts and the non-drug cohort, they are more similar than different. The most striking finding was that two-thirds of participants who identified as atheist prior to their experience no longer considered themselves atheist afterward. The abstract of this article summarizes the nature of these experiences:

Most participants reported vivid memories of the encounter experience, which frequently involved communication with something having the attributes of being conscious, benevolent, intelligent, sacred, eternal, and all-knowing. The encounter experience fulfilled a priori criteria for being a complete mystical experience in approximately half of the participants. More than two thirds of those who identified as atheist before the experience no longer identified as atheist afterwards. These experiences were rated as among the most personally meaningful and spiritually significant lifetime experiences, with moderate to strong persisting positive changes in life satisfaction, purpose, and meaning attributed to these experiences. Among the four groups of psychedelic users, the psilocybin and LSD groups were most similar and the ayahuasca group tended to have the highest rates of endorsing positive features and enduring consequences of the experience. Future exploration of predisposing factors and phenomenological and neural correlates of such experiences may provide new insights into religious and spiritual beliefs that have been integral to shaping human culture since time immemorial.

Certainly the subjective power of these experiences cannot be denied, as their transformative effects upon the subjects’ personal lives can attest. In the conclusion, the authors reiterate that the reported effects of these experiences led to “persisting moderate to strong positive changes in attitudes about self, life satisfaction, life purpose, and life meaning that participants attributed to these experiences.” Legal obstacles have meant that formal therapeutic exploration of many entheogens is just beginning, but it seems apparent that they have promise in treating mental illnesses through catalyzing deep experiences of profound meaning.

Good Friday Experiment

On Good Friday in 1962, a researcher (Walter Pahnke) administered psilocybin to divinity student volunteers just before they attended a Christian worship service80. The results were dramatic; almost all of the research subjects reported profound experiences which they continued to regard as meaningful and important for the rest of their lives (as confirmed by a follow-up survey decades later). Some have described the experience as the most powerful spiritual experience of their lives—which is noteworthy because they were generally already religious believers who were preparing for careers related to their Christian faith. The volunteers clearly understood that they were being given a drug, which did not seem to reduce their assessment of the spiritual importance of their experience at the time or upon subsequent reflection.

Huston Smith, a scholar of comparative religion, was one of the participants in the experiment and describes his experiences in Cleansing the Doors of Perception, his survey work on entheogens81. He says that his attention was fixed on particular melodic and lyrical features of a hymn sung during the service, and his musical training and Christian upbringing

converged on the Good Friday story under psilocybin, [and] the gestalt transformed a routine musical progression into the most powerful cosmic homecoming I have ever experienced.

In an interview he reprints in the same book, Smith remarks that the experiment

[…] enlarged my understanding of God by affording me the only powerful experience I have had of his personal nature. I had known and firmly believed that God is love and that none of love’s nuances could be absent from his infinite nature; but that God loves me, and I him, in the concrete way that human beings love individuals, each most wanting from the other what the other most wants to give and with everything that might distract from that holy relationship excluded from view—that relation with God I had never before had. It’s the theistic mode that doesn’t come naturally to me, but I have to say for it that its carryover topped those of my other entheogenic epiphanies. […]

Sam Harris cautions in Waking Up that if powerful and important spiritual experiences can be produced by a drug in a particular context, we should be wary of taking them as evidence for specific doctrinal metaphysical claims, since presumably these experiences can be, and are, encountered within many different religious traditions. If Christian divinity students had an experience that they took to be an encounter with God under the influence of psilocybin at a Good Friday service, other subjects might have equally compelling encounters, interpreted quite differently, when using the same substance in another setting, or merely with different prior metaphysical beliefs. Harris supposes that a spiritual experience he personally had at the Sea of Galilee would have confirmed his particular religious faith for him, had he had one. Later Harris asks:

What does a spiritual experience mean? If you are a Christian sitting in church, it might mean that Jesus Christ survived his death and has taken a personal interest in the fate of your soul. If you are Hindu praying to Shiva, you will have a very different story to tell. Altered states of consciousness are empirical facts, and human beings experience them under a wide range of conditions.

Smith is also sensitive to this reasoning and notes that participants in other religions’ rituals, and those raised in different traditions, have had equally powerful experiences of their own. But Smith and Harris take this observation in radically different directions; for Harris this diversity undermines any specific metaphysical claim that any religious tradition might advance, while for Smith, it provides an encouragement to try to find commonalities in different religious experiences and reason to suspect that they are pointing toward the same reality.

The Eleusinian Mysteries

[…] χαλεποὶ δὲ θεοὶ θνητοῖσιν ὁρᾶσθαι.

[…] gods are hard for mortals to recognize.

Homeric Hymn to Demeter, 111 (translated by Helene P. Foley)

The Eleusinian Mysteries were an ancient Greek tradition practiced for thousands of years relating to the worship of the goddess Demeter and the story of her daughter Persephone’s descent into and return from the underworld. (The story, itself widely known in Greek culture, said that Persephone was kidnapped as a bride by Hades, the god of the underworld, and yet Demeter was able to bring her back—at least for part of each year, corresponding to the growing season.)

The associated ceremonies could at least originally be practiced only at a specific site near the city of Eleusis. Participants signed up to be initiated over the course of several days, during which they participated in allegorical rites. The initiates swore a vow of secrecy and were never allowed to talk about the details of what the rites consisted of. Although participants went home afterward and returned to their normal lives, virtually all of them took the vow extremely seriously, taking their memories to their graves, so we know very little today about exactly what went on at Eleusis.

The Mysteries attracted participation from the rich and famous of the classical world, and remained extremely popular throughout classical antiquity. (They were open to the public, but could only be experienced, or discussed, in the proper place, at the proper time, with the proper preparation.) Although those who had taken part almost never gave any concrete details, they generally considered the experience extraordinarily valuable and worthwhile, and often recommended it in the strongest terms to their friends and family members. Many indicated that they had had some kind of contact with the divinity during the initiation, and some said that they were no longer frightened of death. Many sources suggest that the Mysteries taught, or showed, their initiates some very specific reason why death was nothing to be afraid of—and participants apparently took this lesson to heart. Cicero, who was probably a participant himself, says in his De Legibus that the Mysteries’ initiations allowed “neque solum cum laetitia vivendi rationem accepimus, sed etiam cum spe meliore moriendi” (“[that] we not only took from them a way of living happily, but also a way of dying with a better hope”).

The Mysteries stopped being celebrated with the rise of Christianity, and no one has experienced them for more than a millennium and a half. Since then, people have remained intensely curious about what was done and taught in these ceremonies, why participants so consistently found them valuable and transformative, and why so many of them claimed to lose all fear of death. How could this have happened? Were all of them really encountering Demeter, or witnessing Persephone returning from the realm of the dead?

One interesting fact is that all the initiates were given a drink called a kykeon (κυκεών, meaning something like ‘mixture’) during the course of the initiation. The recipe for the kykeon is one of the details lost to history due to the Mystery initiates’ dedication to keeping their vows, but people have often wondered about how it might have affected those who drank it. In the 1960s, two prominent psychedelic researchers (R. Gordon Wasson, who introduced psilocybin to the American public, and Albert Hofmann, who discovered LSD) and a classicist (Carl Ruck) published The Road to Eleusis, a book arguing that the kykeon contained substances derived from the ergot fungus. Wasson and Ruck subsequently published numerous other books arguing that religions all around the world have traditionally used psychoactive substances (termed entheogens) to facilitate the experience of the divine, and that religious doctrines, narratives, and rituals are often at least initially based on drug-mediated experiences. Other scholars have tended to accept the view that the kykeon probably contained psychoactive substances, even if they disagree about exactly what substances these were.

Interestingly, scholars who endorse a substance-related explanation for the experiences of Eleusinian initiates (and for the origins of other religious traditions and narratives) don’t necessarily believe that they are debunking or explaining away these experiences. This may be surprising to skeptics who would imagine that the initiates were simply being tricked into interpreting their experiences as divine contacts or visions.

7. The Problem of Dreaming

Dreaming is a universal human experience that poses a difficulty for confidently viewing spiritual experience as veridical, because dreams usually feel so real and so important. They represent a familiar, ubiquitous form of experience that is

  • Vividly perceived

  • Often deeply personally meaningful

  • Often considered, at least metaphorically, to reflect the experiencer’s deepest desires, aspirations, or values

  • Sometimes transformative in their consequences

  • Usually accepted as completely real during the experience itself

  • Often marked by extended interactions with other beings who are perceived as separate from the experiencer

  • Difficult to convey to others

Yet modern western culture mostly accepts that dreams are not veridical—that they tell us little or nothing about how the world is—and that at most they might reveal or reinforce something about the dreamer’s own memories, desires, or unconscious psychology. (It’s worth noting that many cultures have assumed, or believed on evidence, that dreams do come from somewhere outside us, or that they represent a visit of some kind to another world. Many people have routinely believed, or routinely believe, that dreams represent genuine visions of the future, visits from one’s ancestors, temptations from evil forces, or glimpses of the real structure underlying the mundane world. Our neuropsychological depiction of dreaming as the brain’s efforts to make sense of random noise is quite unusual among cultures’ ways of accounting for dreams. But all cultures have had to deal with the ephemerality of dreams and the way that their content is at best unreliable and difficult to make practical use of in one’s day-to-day life.)

The relevance of dreams for a skeptical account of spiritual experience could include a strong version and a weaker version. In the strong version, many spiritual experiences could be suspected of originating in dreams that had been somehow misperceived or misremembered by their dreamers as waking experiences, or correctly perceived as dreams but somehow nevertheless accepted as genuine. In the weaker version, dreams simply provide an analogy that shows that our minds are sometimes capable of producing experiences that feel genuine and important, whose genuineness we reflectively accept, but whose connection to mind-independent reality is questionable.

A counterargument is that we should not have to doubt all of our interpretations of our experiences merely because we are sometimes temporarily mistaken about whether we are dreaming. Otherwise, we would fall into radical skepticism about all of our knowledge and experience. (People throughout history have flirted with taking this conclusion seriously in various ways. For example, Descartes uses the experience of waking from a dream that he had believed in while it lasted as a touchstone of his motivation for undertaking to doubt everything82; the Chinese philosopher Zhuang Zhou claims not to know “whether he was a man who dreamt he was a butterfly or a butterfly dreaming he was a man”; and the movie Inception depicts people who, accustomed to the experience of “awakening” within a dream, become unsure of how many times they still have to wake up in order to return to waking reality83.)

What kinds of things could affect our judgment about whether particular dreams, or some dreams, are veridical? Perhaps dreams could be shown to foretell future events. Or perhaps the content of many people’s dreams could coincide, as Italo Calvino imagines in Invisible Cities:

They tell this tale of its foundation: men of various nations had an identical dream. They saw a woman running at night through an unknown city; she was seen from behind, with long hair, and she was naked. They dreamed of pursuing her. As they twisted and turned, each of them lost her. After the dream, they set out in search of that city; they never found it, but they found one another; they decided to build a city like the one in the dream.

But sometimes the content of different people’s dreams does coincide. For example, many people have a recurring dream that they’re in school and have somehow skipped class or forgotten to prepare for an important test. Why do so many people share this dream content? Does this suggest that there’s something veridical about this particular dream? Or at least that there’s something psychologically important about this situation?

Certainly the communication of dreams and their subjective significance mirrors the difficulties encountered by someone who wishes to communicate a spiritual experience. Dreams are usually deemed significant only to the one who has them, and most of us find that completely appropriate. However, spiritual experiences are generally thought (at least by those who have them) to be important enough to share with others, potentially as experiences of something real and relevant for others. The question, then, is how the skeptic might differentiate between a dream and a spiritual experience. Research into spiritual experience tells us that the two are different, at least in how they are perceived by the individual. Dreams are often easily forgotten and cease to seem real once the dream is over. Spiritual experiences—particularly those of the NDE and entheogenic varieties—have lasting effects and at times permanently change behavior. As Zaleski writes about near-death experiences: “In addition, against Siegel’s sweeping comparison of near-death visions to the psychoneurology of hallucination, the researchers cite nearly unanimous testimony that near-death experience is subjectively different from dreaming or intoxication; that it is, as one of Sabom’s subjects puts it, ‘realer than here.’”84 This may be true for the one who experienced the NDE, but, for those who did not have that experience, hearing about it may elicit a reaction similar to hearing about a dream.

8. Some Possible Perspectives

We join in recommending epistemic humility across the board for all perspectives. To have any meaningful discussion about any topic, we must allow for the possibility that our opinions and beliefs are wrong. Dialogue about experiences that bear on people’s religious (or anti-religious) beliefs is often challenging.

We should be careful in how we interpret spiritual experience, especially if it prompts us to take action or change behavior. If a metaphysical or spiritual reality does not exist, then this suggestion is self-explanatory. However, epistemic humility is still important if a metaphysical or spiritual reality does exist. Suppose that God can and does speak to humans. The present evidence supports the idea that such communication may not always be clearly understood or interpreted by the human listener, and that human beings may not find it easy to be confident about when God is speaking to them. If this were not true, we would obviously not see such a broad range of (often) conflicting divine messages. Being careful with the knowledge and action generated by spiritual experience is therefore wise.

In a related context, we had fruitful discussions of the parable of the blind men and the elephant. The blind men and the elephant is an ancient story, and we highly recommend the Wikipedia entry for it.85 The basic story is that a number of blind men came upon an elephant, and they all touched the elephant while attempting to describe it. They all disagreed upon the nature of the elephant because one man felt the side of the elephant and said it was like a wall, while another felt the trunk and said it was like a snake, etc. Interestingly, the moral that this story is meant to convey is subject to dispute by its listeners! One might take the story as a metaphor in which each blind man erroneously believes in the truth of his own religious doctrines based on his limited perception. Because each person’s perception is limited, conclusions on the nature of the elephant cannot be trusted. However, another way to interpret the story begins with the idea that the radical differences of perception should not be taken as proof that there is no elephant. People in different cultures and time periods may be seeing something real, but their description of it may not be complete. The lesson may be that even though we are tempted to discount spiritual accounts because of their inconsistency, we should not discount them completely, because they may still be of evidential value, albeit in a more limited or complicated way than the authors of the accounts appreciate.

Like Plato’s allegory of the cave and other thought experiments in philosophy, the tale of the blind men and the elephant reminds us that epistemology is hard. It’s also challenging to integrate reported perceptions and experiences that differ greatly from person to person. Much of our reason for confidence in our understanding of the physical world is our ready and far-reaching intersubjective agreement about our sense perceptions of it. “Everyone” more or less agrees that we see the Moon in the sky, hear the sounds of rain, are injured by fire, like the taste of sugar, or find it challenging to lift horses. Accepting these widely-shared perceptions as veridical even without discussion or reflection is the most natural thing in the world. But when Blake says

I assert for My self that I do not behold the Outward Creation & that to me it is hindrance & not Action; it is as the Dirt upon my feet, No part of Me. “What,” it will be Questioned, “When the Sun rises, do you not see a round Disk of fire somewhat like a Guinea?” O no no, I see an Innumerable company of the Heavenly host crying “Holy Holy Holy is the Lord God Almighty.” I question not my Corporeal or Vegetative Eye any more than I would Question a Window concerning a Sight: I look thro it & not with it.

it may be a greater challenge to achieve a consensus about the status of his vision, even if we suppose that he reported it faithfully and earnestly86.

As we mentioned, Alston’s work prompted us to discuss some epistemological issues. We recognized the likelihood that people’s intuitions will vary as to what evidence is compelling enough to consider a spiritual experience to be (some combination of) true, authentic, and/or indicative of some spiritual dimension of existence. (Even some of the authors we read are still struggling with this question for themselves.)

Apart from this, we might ponder the notion that entheogenic or meditation experiences are valuable because they bring people in contact with facets of their own consciousness that are not normally accessible, whether or not we believe that these facets are metaphysically different from ordinary experience of the world and the mind. Some advocates of meditation and/or entheogens who incline toward naturalism maintain that we are “merely” learning more about ourselves (or, perhaps, our non-selves) through these experiences, yet that this learning is of immense value. Clearly, entheogens generate experiences which have lasting, powerful effects upon those who take them, and which are often described as improvements in their users’ lives. Even if they do not reveal a separate spiritual reality, should such striking and consequential entheogenic experiences be considered “real” at least in terms of the individual’s own psychology? Considering the literature on the psychological study of religious experience, on near-death experiences, and on entheogens, it is undeniable that people subjectively experience genuine states of being which transcend normal states of consciousness. Is there a satisfactory way to consider these experiences’ meaning without insisting that they prove something specific about how the world is? And, considering Alston, is it even plausible to expect more than this from any particular kind of human perception?

Alston might maintain that two people may be justified in believing different things based on differing experiences. Their perspectives may be in conflict, but this need not imply that either is behaving irrationally. It may be rational for one person to believe that spiritual experiences are indicative of a spiritual plane of existence that is usually inaccessible, while another person may also be rational in the belief that such an idea is silly nonsense and that spiritual experiences are the result of purely natural causes. On Alston’s epistemology, knowledge is empirically formed through perceptual experience; rational belief, therefore, may never be universally uniform. In fact, this prospect is widely accepted in rationalist and empiricist circles, since rationality has to do with making good use of evidence and different people are in possession of different evidence. Those with spiritual experiences simply have a particular sort of evidence which they may try to interpret alongside other forms of evidence87.

Regarding veridicality and consensus, one possible tool in the difficult arena of parsing spiritual experience is inquiry for confirmation. People generally agree upon the common physical and psychological experiences of natural life (e.g. “we both agree this coffee is too hot to drink right now,” or “everyone is laughing so this movie must be funny”) while also finding dissimilarities (e.g. “although this coffee is too hot for you to drink, and I agree that it is hot, it is not too hot for me to drink,” or “everyone else is laughing, but I don’t think this movie is funny”). This kind of discussion of experience toward confirmation is actually rather common in everyday life. Spiritual experience might be discussed and confirmed in a similar manner, leading to increased confidence in confirmed commonalities compared to experiences that do not seem to be shared. These discussions could, of course, be extremely culturally or institutionally difficult. In light of Alston’s claims about the general unreliability of all human perception, finding where subjective experiences agree (whether spiritual or otherwise) is a wise course of action. Of course it is still possible to find agreement and still be wrong, but at least we’ll be wrong together.

One reason why empiricism seems to be of value in discussing and interpreting spiritual experience is that people’s experiences tend to steer both our beliefs and our skepticism. Skepticism is a powerful tool that can be leveled at anything. As a negative example, conspiracy theorists can find reasons to be skeptical of any claim which rebuts their conspiracy theories. More relevant to this project, one could criticize both religion and materialistic rationalism as culturally conditioned and psychologically motivated phenomena. Skepticism can cut both ways on this topic. However, if we understand that our experience informs our future views, then even amid our own skepticism of spiritual experience, we can appreciate the potential rationality of various worldviews informed by such experiences.

Concluding Thoughts

The statement that we’ve aspired to consider in this project is:

The empirical study of the content and nature of people’s personal spiritual experiences justifies taking them seriously as evidence of an important component of human life deserving of individual and collective exploration.

What are the present authors’ views of this statement, and what did they learn that surprised them?

Gruenberg originally proposed the project from the point of view of a supporter of this statement; at the end of the project, he finds that he strongly agrees with it.

Gruenberg found a number of surprises in the course of this research. The first is the vast amount of empirical research that has already been accomplished, particularly in the field of psychology. The researchers in the field are serious about the objective study of spiritual and religious experience, regardless of their stance on its veridicality. Their commitment to objectivity and sensitivity to the subject is admirable. The second is the compatibility of the (often) secular use of entheogens with non-drug-assisted spiritual experience, particularly in the emotional benefits of entheogenic experiences, as well as the surprising conversion rate of atheists to non-atheists through the use of entheogens. The third is Schoen’s objection to the veridicality of spiritual experience on the grounds of the non-veridical nature of dreams. Mystical experiences of waking life—including meditative transcendent states of the loss of self and numinous encounters with God—seem to be of a different nature than dreams, but their precise differences are difficult to articulate to those who have not had them. The fourth is the all-encompassing nature of skepticism that we could have regarding any sense perception (advocated by Alston), which ends up “leveling the playing field” between spiritual and non-spiritual experiences mediated by our perception. In other words, if all sense perception is suspect, then spiritual experiences are no less suspect than material experiences. The fifth is Schoen’s use of the parable of the blind men and the elephant in order to demonstrate the possibility that limited perception may not always be erroneous. Perhaps we are all blind men and women, probing the elephant of reality. If we live in a version of that parable, then it is even more imperative that we actively seek a discourse which explores our disagreements with a willingness to try on some different versions of the elephant.

Schoen joined the project as a skeptic of this statement, but concludes that he ends in weak agreement with it, noting that a broad spectrum of authors whose work we considered maintain that the human mind is capable of a much broader range of experiences than we’re used to thinking of, and that these experiences can potentially be sought out deliberately and often have profound consequences for the experiencers’ lives and worldviews. In some sense, this much is agreed by people with otherwise opposing views on questions such as naturalism or theism. These authors maintain that there is more inside us than we know, or that we are capable of more than we know, or that personal experiences can give rise to important philosophical challenges. While it’s not clear that these facts imply any particular view of reality or the universe, they seem to have import, at least, for our own lives and self-concepts.

Schoen was also surprised by a number of things he learned in the course of this research. The first is the way that large numbers of people do not dismiss the meaning or relevance of their own or others’ drug-mediated experiences even when they are explicitly aware that they were “on drugs” and that their normal brain function was altered by the influence of substances like LSD, psilocybin, mescaline, or DMT88. Reminding them of the drugs’ role does not seem to change this. What’s more, vast numbers of people durably rate these experiences as extraordinarily significant, valuable, and influential, even on sober89 retrospective reflection, and tend to feel that their understanding of their minds was expanded. Schoen would have expected the role and impact of substances in religious ritual from antiquity to the present to be treated as a sort of dirty secret that would draw the meaning or authenticity of religious experience into question. Instead, many participants in religious practices have demonstrated that they knew that drugs were centrally involved and did not reject the experiences or associated insights on that account.

A second surprise is the analogy drawn by Alston and others between the unreliability of spiritual perception and the unreliability of other sorts of perception, including some that clearly are part of our social consensus about reality. Sense perceptions like vision and hearing are well-confirmed by the understanding they yield of the physical world and the predictions this understanding allows us to make, but other kinds of perceptions are socially normalized and believed even without the same degree of objective confirmation. While Alston’s prior for the plausibility of metaphysical naturalism is extremely low, perhaps shockingly so from the point of view of the median SSC reader, it’s interesting to consider the idea that perceptions other than the use of our bodily senses can rationally count as evidence much like other evidence. (This debate really starts there rather than ending there, and includes rather involved issues about how our beliefs and experiences reinforce one another.) It’s also interesting to note how much more power cultural and societal consensus holds over belief (in all different eras and all different sorts of societies) than any kind of formally-reasoned epistemology according to a detailed philosophical theory!

A third surprise is the extent to which mystical traditions are consciously aware of the risk of spurious spiritual insights due to, for example, mental illness. Skeptics often suggest that neurological disorders such as epilepsy might cause experiences that religious believers interpret as religious ecstasy or revelation. Organized religious and contemplative traditions are at least grappling with these risks and actively trying to find ways to distinguish experiences that they see as valuable from experiences that might be attributed to disorders. Mark Salzman’s novel Lying Awake movingly described this issue as it presents itself in the life of a nun who receives a sense of purpose and fulfillment (and more practical benefits for herself and her religious community) from ecstatic religious visions that are later attributed to the influence of a treatable brain tumor. All traditions and communities that rely on visions and revelations worry about how one can tell whether a particular experience constitutes a good or legitimate source of insight or teaching, although their answers are not necessarily satisfactory or reassuring to outsiders.

A fourth surprise is the frequency with which spiritual experiences are sought out or provoked rather than spontaneous or unexpected90, and the popularity of the view that one can be taught to have such an experience by a particular method (particularly in contemplative and mystical traditions, as well as among advocates of insight meditation and entheogens91). Near-death experiences are a major exception here, because people have rarely actively hoped to receive one or sought one out. Quite a few traditions suggest that there is a specific thing we ought to do, or a specific practice we ought to follow, in order to receive spiritual experiences and spiritual insights. While that creates its own set of risks (for example, of being manipulated or exploited by an unscrupulous teacher or dysfunctional community), it also provides an interesting opportunity for people to try things out for themselves if they’re so inclined. This picture of practical steps is especially associated with secular Western interpretations of Buddhism, which emphasize the claim that Buddhist teachings can be taken as a phenomenological how-to guide92.

This collaboration expressly excluded the truth or falsity of naturalism, or of any specific religious doctrine, from its scope. Although we and many of the authors we consulted have views about this, we didn’t try to find a consensus about these questions. The evidence for and against a naturalistic worldview, or for and against some specific kind of supernatural phenomenon, might make an interesting adversarial collaboration topic for others in the future.

It’s worth acknowledging the pervasiveness of the belief that spiritual experiences can’t usefully be described or analyzed in words, and that trying to theorize about them is an absurd and useless activity. On this account, this whole project is an exercise in futility, doomed from the start, and perhaps a mockery of itself. While the present authors don’t share this attitude, and even see it as counterproductive, they realize that others would strongly recommend experiencing spirituality, not talking about it.

We realize that there is a huge literature about the phenomenology and interpretation of spiritual experiences, and that we’ve only managed to scratch the surface here. Nor have we engaged with every issue raised within the sources that we did review. Interested readers looking for more material on these topics might want to start with Prof. Wesley Wildman’s bibliography on religious experience at <> or the Stanford Encyclopedia of Philosophy’s article on religious experience at <>.

About the Authors

Jeremiah Gruenberg has a Ph.D. in theology. He writes and hosts a podcast on Christian spirituality, called The God Experiment.

Seth David Schoen is a computer and language enthusiast living in San Francisco.

1Charles Wolfe and Ofer Gal, eds. The Body as Object and Instrument of Knowledge: Embodied Empiricism in Early Modern Science. Studies in History and Philosophy of Science. Springer (2010). ISBN 978-90-481-3686-5. hal-01238121

2Creath, Richard, "Logical Empiricism", The Stanford Encyclopedia of Philosophy (Fall 2017 Edition), Edward N. Zalta (ed.), Creath writes, “Because logical empiricism is here construed as a movement rather than as doctrine, there is probably no important position that all logical empiricists shared—including, surprisingly enough, empiricism. And while most participants in the movement were empiricists of one form or another, they disagreed on what the best form of empiricism was and on the cognitive status of empiricism.”

3Creath, Richard, "Logical Empiricism", The Stanford Encyclopedia of Philosophy (Fall 2017 Edition), Edward N. Zalta (ed.),

4Markie, Peter, "Rationalism vs. Empiricism", The Stanford Encyclopedia of Philosophy (Fall 2017 Edition), Edward N. Zalta (ed.), Markie writes, “The dispute between rationalism and empiricism concerns the extent to which we are dependent upon sense experience in our effort to gain knowledge. Rationalists claim that there are significant ways in which our concepts and knowledge are gained independently of sense experience. Empiricists claim that sense experience is the ultimate source of all our concepts and knowledge.”

5Markie, Peter, "Rationalism vs. Empiricism", The Stanford Encyclopedia of Philosophy (Fall 2017 Edition), Edward N. Zalta (ed.),

6The Stanford Encyclopedia of Philosophy also notes that Keith Yandell distinguishes five sorts of religious experiences, one monotheistic, three associated with South Asian religious traditions, and one related to “nature”. This kind of classification is daunting (perhaps further investigation would reveal dozens more!) but also useful when people want to talk about their experiences and attempt to compare and contrast them.

7Zinnbauer and Pargament. “Religiousness and Spirituality,” 36.

8Zinnbauer and Pargament. “Religiousness and Spirituality,” 37.

9Zinnbauer and Pargament. “Religiousness and Spirituality,” 24-25.

10Zinnbauer and Pargament. “Religiousness and Spirituality,” 25.

11Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. New York and London: The Guilford Press (2009).

12Raymond F. Paloutzian and Crystal L. Park, eds. Handbook of the Psychology of Religion and Spirituality. New York and London: The Guilford Press (2005).

13Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 294.

14Raymond F. Paloutzian and Crystal L. Park. “Integrative Themes in the Current Science of the Psychology of Religion,” from Handbook of the Psychology of Religion and Spirituality. Raymond F. Paloutzian and Crystal L. Park, eds. New York and London: The Guilford Press (2005), 8.

15Ralph W. Hood, Jr., and Jacob A. Belzen. “Research Methods in the Psychology of Religion,” from Handbook of the Psychology of Religion and Spirituality. Raymond F. Paloutzian and Crystal L. Park, eds. New York and London: The Guilford Press (2005), 67.

16Ralph W. Hood, Jr., and Jacob A. Belzen. “Research Methods in the Psychology of Religion,” 67.

17Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 347.

18Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 60.

19Ralph W. Hood, Jr., and Jacob A. Belzen. “Research Methods in the Psychology of Religion,” 69.

20Ralph W. Hood, Jr., and Jacob A. Belzen. “Research Methods in the Psychology of Religion,” 70.

21Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 42.

22Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 65.

23Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 65.

24Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 371.

25Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 371.

26Lee A. Kirkpatrick. “Evolutionary Psychology: An Emerging New Foundation for the Psychology of Religion,” from Handbook of the Psychology of Religion and Spirituality. Raymond F. Paloutzian and Crystal L. Park, eds. New York and London: The Guilford Press (2005), 106.

27Lee A. Kirkpatrick. “Evolutionary Psychology: An Emerging New Foundation for the Psychology of Religion,” 107.

28Lee A. Kirkpatrick. “Evolutionary Psychology: An Emerging New Foundation for the Psychology of Religion,” 108.

29Andrew B. Newberg and Stephanie K. Newberg. “The Neuropsychology of Religious and Spiritual Experience,” from Handbook of the Psychology of Religion and Spirituality. Raymond F. Paloutzian and Crystal L. Park, eds. New York and London: The Guilford Press (2005), 200.

30Andrew B. Newberg and Stephanie K. Newberg. “The Neuropsychology of Religious and Spiritual Experience,” 201.

31Andrew B. Newberg and Stephanie K. Newberg. “The Neuropsychology of Religious and Spiritual Experience,” 201.

32Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 22.

33Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 25.

34Zinnbauer and Pargament. “Religiousness and Spirituality,” 31.

35Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 332-333.

36Andrew B. Newberg and Stephanie K. Newberg. “The Neuropsychology of Religious and Spiritual Experience,” 210.

37Ralph W. Hood, Jr, Peter C. Hill, and Bernard Spilka. The Psychology of Religion: an Empirical Approach, Fourth Edition. 333-334.

38William P. Alston, Perceiving God: The Epistemology of Religious Experience, Cornell University Press (1991), 1.

39Alston, Perceiving God, 1.

40Alston, Perceiving God, 3.

41Following Alston in referring to the object of a spiritual experience as “God”, and putting aside for the moment the possibility that many people might choose to use other words or concepts, or maintain that their own experiences point to other kinds of spiritual reality.

42Alston, Perceiving God, 4.

43Alston, Perceiving God, 35.

44Alston, Perceiving God, 37.

45Alston, Perceiving God, 37-38.

46Alston, Perceiving God, 40.

47Alston, Perceiving God, 42.

48Alston, Perceiving God, 68-69.

49Alston, Perceiving God, 71.

50Alston, Perceiving God, 72.

51Alston, Perceiving God, 78.

52Alston, Perceiving God, 93.

53Alston, Perceiving God, 94.

54Alston, Perceiving God, 96-97.

55Alston, Perceiving God, 102.

56Alston, Perceiving God, 102-103.

57Alston, Perceiving God, 108.

58Carol Zaleski. Otherworld Journeys: Accounts of Near-Death Experience in Medieval and Modern Times. New York and Oxford: Oxford University Press (1987).

59Carol Zaleski. Otherworld Journeys, 159.

60Carol Zaleski. Otherworld Journeys, 156.

61Carol Zaleski. Otherworld Journeys, 156-157.

62Carol Zaleski. Otherworld Journeys, 158.

63Carol Zaleski. Otherworld Journeys, 177.

64Carol Zaleski. Otherworld Journeys, 159.

65Carol Zaleski. Otherworld Journeys, 164-165.

66Carol Zaleski. Otherworld Journeys, 167.

67Carol Zaleski. Otherworld Journeys, 170.

68Carol Zaleski. Otherworld Journeys, 175.

69Carol Zaleski. Otherworld Journeys, 175.

70Carol Zaleski. Otherworld Journeys, 176.

71Carol Zaleski. Otherworld Journeys, 180.

72Carol Zaleski. Otherworld Journeys, 182.

73Carol Zaleski. Otherworld Journeys, 187.

74Carol Zaleski. Otherworld Journeys, 192.

75Another occasional source of controversy is that members of religious traditions that use specific entheogenic substances often have specific rules or norms about who ought to use these substances, how, where, why, etc. For example, some traditions maintain that substances should only be used and experienced within the context of a particular ritual, led by a qualified religious leader. Using them in other contexts—as is now common—can be seen as disrespectful to the sanctity of the substances or of the experiences they produce, or as dangerous, or as likely to miss out on the spiritual benefit that ought to be obtained from them.

76There is also a magnetic brain stimulation device called the “God helmet” which reportedly produces similar effects, but which has proven difficult to replicate; see <>.

77Rachael Peterson. “Taking mushrooms for depression cured me of my atheism.” The Outline, April 29, 2019.

78E.g., Rick Strassman, DMT: The Spirit Molecule (Rochester, Vermont: Park Street Press, 2001). Note that Strassman’s hypotheses about the pineal gland’s role in producing DMT have been severely questioned by other biomedical researchers.

79Griffiths RR, Hurwitz ES, Davis AK, Johnson MW, Jesse R (2019) “Survey of subjective ‘God encounter experiences’: Comparisons among naturally occurring experiences and those occasioned by the classic psychedelics psilocybin, LSD, ayahuasca, or DMT.” PLOS ONE 14(4): e0214377.

80See There was also a control group which received a non-psychoactive substance and attended the same service.

81This is an homage to a more famous but less wide-ranging book on the same subject by Aldous Huxley, The Doors of Perception. Both take their title from a line of William Blake: “If the doors of perception were cleansed every thing would appear to man as it is: Infinite.”

82“Though this be true, I must nevertheless here consider that I am a man, and that, consequently, I am in the habit of sleeping, and representing to myself in dreams those same things, or even sometimes others less probable, which the insane think are presented to them in their waking moments. How often have I dreamt that I was in these familiar circumstances, that I was dressed, and occupied this place by the fire, when I was lying undressed in bed? At the present moment, however, I certainly look upon this paper with eyes wide awake; the head which I now move is not asleep; I extend this hand consciously and with express purpose, and I perceive it; the occurrences in sleep are not so distinct as all this. But I cannot forget that, at other times I have been deceived in sleep by similar illusions; and, attentively considering those cases, I perceive so clearly that there exist no certain marks by which the state of waking can ever be distinguished from sleep, that I feel greatly astonished; and in amazement I almost persuade myself that I am now dreaming.” (Meditations on First Philosophy, I.5)

83Most dramatically, Cobb and Mal have an ongoing disagreement about whether or not they have finished waking up, which Cobb believes has led Mal to commit suicide. On another occasion, Saito forgets over the course of many subjective years (!) that he is dreaming, and Cobb has to convince him. See also and

84Carol Zaleski. Otherworld Journeys, 176.

85“Blind men and an elephant.” Wikipedia. You may also enjoy Natalie Merchant’s musical rendition of John Godfrey Saxe’s famous poem on this theme:

86In Unsong, the world has reverted to its basic spiritual nature in which Blake’s vision of the Sun is simple everyday common sense. Compare Ted Chiang’s “Hell is the Absence of God” and Ken Liu’s “Single-Bit Error”.

87We also briefly discussed Aumann’s Agreement Theorem but concluded that we didn’t understand how well it could apply to most disagreement in the real world—probably not so straightforwardly.

88However, some do—for example, one of Rick Strassman’s DMT subjects had a very unusual and intense experience, and afterward said that this experience had no real significance because it was entirely provoked by the drug. It remains worth acknowledging that powerful experiences are not always given a great significance by their experiencers. Some people remain radically skeptical and hold that the experiences merely seemed real or important in the moment, but should not ultimately be regarded this way.


90Several sources also maintain that spontaneous spiritual experience is extremely common, and that most people who experience some form of it simply don’t talk about it, perhaps because they find that it would be difficult to communicate, or because they don’t suppose that communicating it would have beneficial results. Many people keep their spiritual experiences private for fear of social consequences.

91Harris mentions the aphorism that taking entheogens is like launching one’s self on a rocket, while meditation is like raising a sail to the wind.

92This account of what Buddhism is “really” about is widespread in the West today. David Chapman has suggested that it would not be familiar to most Buddhists in history. That’s another interesting inquiry. Conversely, other religions have their own mystical and contemplative traditions that can offer elaborate and quite specific advice about how to pursue spiritual experiences, but that may not be widely known even to adherents of those religions—an interesting theme, for example, in Rodger Kamenetz’s The Jew in the Lotus.

[ACC] Should You Have A Merry Christmas?

[This is an entry to the 2019 Adversarial Collaboration Contest by Cindy Lou Who and the Grinch]

Christmas Day is a time full of laughter and cheer which is held in the West at the end of each year.

Believers in Jesus traditionally think the day marks his birth; scientists disagree. They point to the shepherds; when carolers sing about fields full of sheep, that occurs in the spring. The Star of the Magi provides further doubt. Simulations can tell us what star it’s about: it was most likely Jupiter shining near Saturn, but it’s only in autumn one sees such a pattern. It is proven in space and it’s proven on Earth – Christmas isn’t the real time of Jesus’ birth.

One of the most popular Yule celebrations is handing out gifts to one’s friends and relations. Parents offer the story these presents appeared due to Santa, a jolly old man with a beard. Originally a historical saint, his tale was embellished, with little restraint. He flies through the air in a reindeer-pulled sleigh, and visits all households on Earth in a day. This tradition seems pagan, with some scholars noting the details are pulled from a legend of Odin. Though sources like NORAD appear to support Santa’s presence, we think that their data fall short. After reading the pros and the cons, we both feel the consensus perspective is Santa’s not real.

And what are these gifts’ economic effects? According to Goeddeke and Birg, it’s complex. Since presents are valuable, one might assume that their giving would cause stores and markets to boom. You give to your parents! You give to your boss! But economists say it is all deadweight loss. You would spend the same money on something, you see, and presents are chosen incompetently. Others’ preferences aren’t as clear as our own, so when we buy for others, their needs are unknown. Presents don’t increase welfare and don’t increase growth; all the papers agree they are harmful to both.

Is there anything good about Yule? It depends. Holidays are a time to see family and friends. In a season of darkness and inclement weather, Christmas gives an excuse to bring people together. People pray for world peace; relatives are united; workers get time off work; little kids are delighted. It gives us a reason for dancing and singing, for candles and carols and feasts and bell-ringing. Despite deadweight losses, despite the wrong days, we think that this matters so much it outweighs all the falsehoods and problems; our model predicts it outweighs all those things by a factor of six!

CONCLUSION (CLW): The Grinch turned out right on historical dates, and made several good points about gifts (which he hates). But if measures and numbers are all we inspect, then we risk falling prey to the streetlight effect. It was useful to reason this out with the Grinch, but I chose not to budge; I did not budge one inch! The best parts of Christmas can hardly be measured; they’re moments of joy to be felt and be treasured.

CONCLUSION (TG): I started out thinking that Christmas was bad. I hated the season! It made me so mad! But the evidence reached an explicit conclusion: my hatred of Christmas was based on confusion. It wasn’t that Christmas was harmful at all! It was just that my heart was two sizes too small!

CONCLUSION (JOINT): After working our differences out, we agree: Merry Christmas to all those who read SSC!

[ACC] Will Automation Lead To Economic Crisis?

[This is an entry to the 2019 Adversarial Collaboration Contest by Doug Summers-Stay and Erusian]

Adversarial collaboration on the question: “Automation/AI will not lead to a general, sustained economic crisis within our lifetimes or for the foreseeable future. Automation/AI’s effects into the future will have effects similar to technology’s effects in the past and, on the whole, follow the general trend.”

Defending the proposition: Erusian

Challenging the proposition: Doug Summers-Stay

tldr: Until automation starts destroying jobs faster than new jobs can be created, AI shouldn’t be expected to cause mass unemployment or anything like that. When AI can pick up a new job as quickly and cheaply as a person can, then the economy will break (but everything else will break too, because that would be the Singularity).


As software and hardware grow more capable each year, many are concerned that automation of jobs will lead to some sort of economic crisis. This could take the form of permanent high levels of unemployment, wages that drop below subsistence levels for many workers, or an abrupt change to a different economic system in response to these conditions.

This has become a talking point outside of economic circles in the U.S. Democratic presidential candidate Andrew Yang’s most well-known policy proposal is a universal basic income to offset this (an idea Elon Musk has supported for years). Bill Gates suggested that when robots replace workers, the companies should be taxed at a similar rate to the taxes being paid by those workers. These entrepreneurs have spent a lot of time thinking about and planning for the future, and have a lot of experience with introducing new technology. Are their concerns valid?

Throughout this discussion, we use the words AI, automation, and robots more-or-less interchangeably. Imagining Asimov-style androids with positronic brains makes it easier to picture a world where all jobs are automated. In reality, though, it would be a silly waste of resources to literally have robots come in and do jobs as drop-in replacements for workers, and there are few jobs where this would make sense. Much of the software of the future will be more human-like in the sense that many machines could have natural-language and image-understanding capabilities, and could reason about the wider context in which their work exists so as to avoid dangerous or costly mistakes due to a lack of common sense. In many other ways, though, the software for nearly all working robots will not resemble human minds at all.

Some jobs, like picking most fruits and vegetables or most assembly line jobs, currently can’t be done by machines because they require manual dexterity. In such cases, you would need more precise manipulators with touch sensors that can adapt to a wide range of situations. There is no reason to expect they will look like human hands, though. Machines will also be networked, of course, so imagining them as a bunch of individuals is also unrealistic. Finally, the way tasks are currently divided into jobs makes sense for human workers, but wouldn’t make sense for automation. Instead, certain tasks will be automated first, and the remaining tasks that form part of a job will still be done by humans.

What is happening in technological unemployment today?

Worldwide, employment rates are not worse than historic levels. This suggests that jobs are being created more-or-less as fast as they are being automated. Scott has covered this issue thoroughly in a survey article. The Bureau of Labor Statistics puts out a report projecting which jobs will be lost and which will be added over the next ten years. Jobs expected to grow include those in health care, renewable energy, and several computing professions. Declining occupations include secretaries, clerks, assemblers, and maintenance of outdated technology like locomotives and wristwatches. Automation has recently tended to take middle-skill jobs, which has pushed many people into less desirable work.

We have seen at least three instances in the past where automation has taken a significant fraction of all existing jobs. Before the neolithic agricultural revolution, nearly everyone was a hunter or gatherer or both. During the period from about 8000-4000 B.C., this gradually shifted so that most people were employed in agriculture. Before 1400, around 70% of all employment was in agriculture. Today, it is only a few percent in advanced economies. At its peak during World War II, nearly 40% of U.S. employment was in manufacturing. Today, that number is below 10%. Housework also declined from 60 hours a week in 1900 to only 15 hours a week today (although this doesn’t show up in employment figures, of course, and can also be partly attributed to declining fertility rates.)

Another line of evidence to consider is GDP. Before 1000 AD, the per capita GDP everywhere was below $1000, adjusted for inflation. In Western countries today it is around $50000. Since human innate capability hasn’t changed, this must be the result of innovations (in education, processes, tools, machinery, or what-have-you) that allow people to produce more value for each hour of work. By one way of looking at it, this means that there are already 49 “robots” worth of automation for every person in these countries. Yet employment is still at roughly the same level it has always been.
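The arithmetic behind this framing is simple; a minimal sketch using the round figures quoted above:

```python
# Rough "robots per person" arithmetic from the per capita GDP figures above.
# Both dollar amounts are the round, inflation-adjusted estimates in the text.
gdp_per_capita_pre_1000 = 1_000        # everywhere, before 1000 AD
gdp_per_capita_western_today = 50_000  # Western countries today

# Output per person has grown ~50x; since innate human capability hasn't
# changed, attribute the growth to accumulated innovation ("automation").
multiplier = gdp_per_capita_western_today / gdp_per_capita_pre_1000  # 50.0

# One unit of output is the person themselves; the rest is "robots".
robots_per_person = multiplier - 1
print(robots_per_person)  # 49.0
```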

So we know from experience that employment is able to adapt to extensive automation. However, these changes took place over millennia, centuries, and decades, respectively. If the pace of job automation were to increase so that replacement of a significant fraction of the workforce happened over years instead of decades, there might not be time for people to retrain fast enough to avoid higher rates of unemployment. Again, though, that doesn’t seem to be the case at the moment.

Are there things computers will never be good at?

A common response to the question of whether we will eventually reach the point that all jobs can be automated is to name a skill that computers will never be able to perform, attempting a disproof by counterexample. The following terms link to places a skill was claimed to be impossible for computers: write jokes, write novels, express compassion, robotically navigate a human environment (like an arbitrary kitchen), manufacture new categories at arbitrary levels of abstraction, act creatively, represent and invent concepts, learn from small data, express emotion, have motivational direction, think socially and cooperate.

Researchers are, however, aware of these limitations of current machines and actively trying to find ways to automate them. Here are the same terms along with a link to a paper where research is presented on how to automate the task: write jokes, write novels, express compassion, robotically navigate a human environment, manufacture new categories at arbitrary levels of abstraction, act creatively, represent and invent concepts, learn from small data, express emotion, have motivational direction, think socially and cooperate.

In every case, although researchers haven’t yet solved the problem and in some cases are far from a solution, it is possible to see a clear research direction and a path to gradual progress. Many claims of this sort are (in the local argot) motte-and-bailey arguments. When someone argues that a machine can’t really feel emotion, they can always retreat to the motte that machines are incapable of phenomenal conscious awareness of what it is like to feel (for example) emotional pain. This is true: we have no idea how to make a machine that is conscious in this sense, or how to test whether phenomenal consciousness is present in a person, animal, or machine. They then, however, advance the bailey claim that robots will not be able to respond to anger in a voice, or to anticipate that taking an action might cause someone to feel sadness. This is false: a neural network trained on examples of anger in a voice could learn to discriminate it without the ability to feel blood rush to its ears. For the purpose of taking jobs, an accurate discriminator or an ability to accurately simulate emotions is all that is necessary.

The new techniques researchers have come up with are not only able to perform better than previous methods on a particular benchmark; they are also becoming more general. Artificial General Intelligence is what human-like AI is usually called these days because the ability to handle unanticipated situations is such a central part of what makes human intelligence special.

The new Transformer neural net architectures are an example of how AI is becoming more general. Although simply trained on predicting the next word in a sequence, such models have demonstrated superior performance on question-answering, common-sense, categorization, and other benchmarks. A similar architecture, with few changes, can be used to compose music, create artwork, simulate voices, and so on. These models work well because they are able to learn to direct attention to the parts of the context most applicable to deciding what the next output should be. In the future, we should expect systems that are more adaptable still. An adaptable AI will be quicker and cheaper to deploy on new jobs, so we should expect the rate at which jobs are automated to increase.
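The attention mechanism at the heart of these models can be sketched in a few lines. This is a minimal scaled dot-product attention in NumPy, illustrating only the core idea (it is not the code of any particular model, and the toy inputs are arbitrary):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention, as in the Transformer.

    Each query position scores every key; the softmax weights determine
    how much of each value (each "part of the context") flows into the
    corresponding output position.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_queries, n_keys) similarity scores
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 context positions, 4-dimensional vectors
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(w.sum(axis=-1))  # each row of attention weights sums to 1
```

The softmax rows are what "directing attention" means concretely: each output is a weighted average of the context, with the weights learned rather than fixed.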

In terms of hardware, we can expect computational capacity on the order of the human brain in supercomputers in the next five years and in home computers about twenty-five years after that. (Assuming 100 billion neurons, 1000 synaptic connections per neuron, 10 floating-point operations per interaction, and a temporal resolution of 1000 interactions per second.) So hardware shouldn’t be a limiting factor after 2050 or so, as long as current trends hold.
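The parenthetical estimate works out as follows (all four numbers are the rough assumptions stated above, not measured values):

```python
# Back-of-the-envelope estimate of the brain's computational capacity,
# using the rough figures given in the text.
neurons = 100e9                  # ~100 billion neurons
synapses_per_neuron = 1_000      # ~1000 synaptic connections each
flops_per_interaction = 10       # floating-point ops per synaptic event
interactions_per_second = 1_000  # ~1 kHz temporal resolution

brain_flops = (neurons * synapses_per_neuron
               * flops_per_interaction * interactions_per_second)
print(f"{brain_flops:.0e}")  # 1e+18, i.e. about one exaFLOPS
```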

This is not to minimize how far we are at the moment from a machine that can learn an arbitrary new job as easily as a human can. People can model another human by “putting themselves in their shoes.” All of that ability to anticipate how other humans would react to an action has to be built into machines to achieve the kind of autonomy we are imagining. While machines can now act in creative ways as well as rational ways, tying the two together is still a very open problem. Systems that have lifelong learning, that continue to grow with experience, are still very rare. The ability to understand spoken or written language is still at a very primitive level. These problems don’t seem insurmountable – merely very hard.

What jobs will be automated?

Frey 2013 characterizes which jobs are likely to be automated soonest based on whether they require the following bottleneck capabilities:

– Finger Dexterity, Manual Dexterity, Cramped Work Space, Awkward Positions
– Originality, Fine Arts
– Social Perceptiveness, Negotiation, Persuasion, Assisting and Caring for Others

He concludes that 47% of U.S. jobs are repetitive, don’t require fine dexterity, originality, or people skills, and are therefore likely to be automated in the next few decades. This includes most office and administrative support jobs, sales jobs, some service jobs, and most production and transportation jobs.

However, Arntz et al argue that this number is much too high: it neglects the variation in tasks within a profession and the ability of a job to adapt when new technology becomes available. With these taken into account, they find that only 9% of jobs are at risk.

Both of these papers are discussing the right side of the graph above: everything above the “75% probability of computerisation” line. Eventually, though, essentially all the skills described on this chart will be automatable. While that may not happen within our lifetimes, it does seem to be part of the “foreseeable future”. There don’t appear to be any fundamental physical limitations preventing it (in the sense that we may never build a spaceship that goes faster than light). The existence of human brains shows that the right arrangement of atoms can compute at human levels within reasonable size, weight, and power constraints. It seems reasonable to suppose that computers will continue to increase in capability until they can perform any intellectual task required in a job as well as a human can, including creative, decision-making, and emotional reasoning tasks.

To replace people in jobs also requires a body that can perform tasks with the dexterity and ability to adapt to different conditions that are required for a job. This also seems to be at least decades away for many jobs.

Beyond the invention of hardware and software capable of performing these tasks, the cost of developing and deploying the technology must fall below the cost of hiring workers in order for the workers to be replaced. The price of computing has been dropping steadily for several decades now, and there are no fundamental physical limitations to this improvement that would prevent the trend from continuing to the size and power-usage levels of a human brain. Robotic bodies and manipulators, while continuing to improve in dexterity, sensing ability, and cost over time, do not seem likely to have the same exponential improvement that we have seen in computing hardware. Again, though, we know that a machine with human dexterity is possible (because hands exist) so it seems inevitable that machines will eventually surpass us in these abilities as well.

There are, however, certain jobs that some people may be willing to keep paying a human to do, even if a robot could do them better in some sense. For example:

– producing handmade goods
– creating artwork whose value depends on whether it is an original
– some kinds of food preparation
– performance arts (acting, dancing, stand-up comedy, etc…)
– domestic service (personal servants like butlers, gardeners, etc)
– sports
– therapy
– hairdressing and the like
– massage
– certain aspects of medical care (a sense that someone cares)
– certain aspects of teaching (motivation, mentorship)
– certain aspects of war (decisions about when to use violent force)
– clergy work
– mortuary services
– some kinds of sales
– politics

For these kinds of tasks, having them done by a human is part of what some customers value. If most other jobs can be automated, we would expect more jobs in this category to be created, as a larger pool of workers becomes available to do them.

As more jobs are automated, what economic effects should we expect to see?

Around 1800, the economist Jean-Baptiste Say argued that workers displaced by new technology would find work elsewhere once the market had had time to adjust. By the mid-1800s, a theory was in place exploring the economic effects of automation; in Das Kapital, Marx would later dub it “the theory of compensation.” These effects include additional employment in the capital goods sector, decreases in prices, new investments, and new products (the effect on wages is complicated). In general, this is still the prevailing opinion of economists.

According to this theory, when workers are fired because their jobs are automated, this frees up capital which the owner will then use to hire other workers to do other jobs. Because of this, the number of workers hired doesn’t decrease because of automation. (Marx disagreed with this, saying that part of the capital would now be tied up in the machines). The theory also discussed other effects. Automation reduces the prices of goods, making them more affordable. It also reduces the prices of components, making new products viable. The companies making these goods make more profits, allowing them to expand and hire more workers.

Because of these effects, as long as the market has time to adjust, we shouldn’t expect to see rising unemployment before the point where machines can do every job more cheaply than humans. As long as there exist skills humans can supply more cheaply than machines, jobs using those skills should multiply until they absorb the entire available human labor pool.

Susskind 2018 concludes that in the future, automation will put downward pressure on wages, while increasing the amount earned by capital owners. We may already be seeing this effect in the United States: although GDP per capita and net productivity have increased consistently since the Great Depression, median wages have stagnated since 1975. This would be consistent with automation sending increases in productivity to the owners of capital rather than workers.

As more jobs are automated, the mean standard of living will improve, as the amount of value produced per capita rises. Even without raising tax rates or rates of charitable giving, the amounts collected in taxes and donations will increase as more is produced at lower cost. Whether this leads to more people living on the dole is more a matter for political argument than for technological extrapolation.

Another effect might be shorter working hours: as more jobs become automated, the same number of people could be employed, but at fewer hours per week or more days of leave per year. Given the option, though, most workers at the moment prefer to work full-time (and, for many, overtime), trading leisure for additional income. For this to change would require both regulatory changes (part-time workers fall under different rules about benefits, for example) and cultural changes. It is not absurd, though: in Germany, for example, the average adult works only 1400 hours a year (26 hours a week), compared with 1900 hours a year (35 hours per week) in the U.S.

One might expect that as machines become more capable, more and more people will find themselves below the waterline, unable to find any job that AI can’t do better: perhaps those with the lowest IQ first, or something along those lines. To date, though, the capabilities of AI have not developed this way. Grandmaster chess and rapid calculation can’t be done by those with low IQ, but are simple for modern machines; the inverse is also true, in that even very young children and those with low IQ can perform recognition tasks in varying conditions that defeat even the best computer vision programs. On the other hand, routine jobs make up an ever-declining fraction of all jobs in the U.S. If some constant fraction of people can only perform routine jobs, and current trends continue, eventually some of them will be unable to find any job they can do.

Suppose we reach the point where robots can do literally any job a human can do. What will happen to the economy?

A robot will likely never be cheap compared to other manufactured goods. Although future process innovations (such as advanced 3D printers or nanotech assemblers) may reduce the cost of building robots, they will also reduce the cost of manufacturing everything else, and robots must have large numbers of moving parts. This could mean that the number of robots will be limited, and this limited supply will drive up wages in jobs that the robots could otherwise do, if there were enough of them or they were cheap enough to produce.

Robots with human levels of ability, however, would be able to self-repair, extract natural resources, manufacture parts and create more robots without any human intervention. They would also be able to invent new ways to make money and employ other machines to achieve goals.

For any job, the machines in this scenario could do it better. People will still likely strive to purchase and direct factories, resource extraction, and robots for all purposes. Those doing this would still have a job of deciding how to direct the robots, acting as business owners. (Although one imagines running an AI as a manager to handle this kind of work as well.) There will also be people with earned or inherited wealth who don’t work, and welfare recipients who don’t work, but to what extent the economy will redistribute the wealth generated by this vastly expanded economy is a political question.

If we ever reach this point, though, it is hard to make serious predictions, because we don’t know what such machines would be like, or whether we would be able to maintain control of our economy and civilization at all (this breakdown of all models is the reason von Neumann called such an eventuality a singularity). If it becomes possible to create machines with human-level intelligence and skills, it will be possible, for a little more money, to create superhuman intelligence and skills, which will necessarily be hard for us mere humans to predict or control.


Even if nearly all currently existing jobs will eventually be automated, as we progress toward that point new jobs will continue to be created for humans, preventing the kind of mass unemployment or collapse in wages that might otherwise be expected, as long as the market has time to adjust (which isn’t guaranteed: if the pace of automation were to speed up enough, we could still see a crisis). However, once machines surpass human capabilities in all jobs at a low enough price, the entire economy will change and something else will take its place. What that is we can’t really say: our economic models break down, and the future becomes even more difficult to predict. Beyond this point, we don’t even know to what extent humans will still be guiding the course of civilization, let alone how employment will work.

The economic gains that come from all this automation will flow primarily to those who own the machines. As they invest more, create new products, spend more, pay more taxes, and give more to charity, the general civilization will benefit, though some people will doubtless be unable to adapt and find new work and be worse off. How we choose to provide for those that can’t find work is something each democracy will need to continue to decide.

Further Reading

I found this article in Wired had some good points. One of them is that “job churn” – the rate at which jobs are created and destroyed – is at historic lows. You would expect that rate to go up if the pace of job automation were increasing.

The paper Automation and New Tasks: How Technology Displaces and Reinstates Labor provides a reasonable framework for estimating current and future effects of automation on labor. They conclude “if the origin of productivity growth in the future continues to be automation, the relative standing of labor, together with the task content of production, will decline. The creation of new tasks and other technologies raising the labor intensity of production and the labor share are vital for continued wage growth commensurate with productivity growth.”

If you are interested in specific predictions about dates for developments in AI, ML and robotics, MIT roboticist Rodney Brooks has a blogpost.

AI researcher Stuart Russell’s new book Human Compatible provides a nice introduction to current thinking about the future development of AI. It also contains some interesting ideas about creating artificial intelligence with open objective functions, so that the AI wants to please people but isn’t sure how best to do so.

A Maximally Lazy Guide To Giving To Charity In 2019

[Sorry for the interruption; we will return to our regularly scheduled Adversarial Collaboration Contest tomorrow.]
[Epistemic status: I’m linking evaluations made by people I mostly trust, but there are many people who don’t trust these, I haven’t 100% evaluated them perfectly, and if your assumptions differ even a little from those of the people involved these might not be very helpful. If you don’t know what effective altruism is, you might want to find out before supporting it. Like I said, this is for maximally lazy people and everyone else might want to investigate further.]

If you’re like me, you resolved to donate money to charity this year, and are just now realizing that the year is going to end soon and you should probably get around to doing it. Also, you support effective altruism. Also, you are very lazy. This guide is for you.

The maximally lazy way to donate to effective charity is probably to donate to EA Funds. This is a group of funds run by the Center for Effective Altruism where they get experts to figure out what are the best charities to give your money to each year. The four funds are Global Health, Animal Welfare, Long-Term Future, and Effective Altruism Meta/Community. If you are truly maximally lazy, you can just donate an equal amount to all four of them; if you have enough energy to shift a set of little sliders, you can decide which ones get more or less.

If you have a little more time and energy, you might want to look at the charities suggested by some charity-evaluating organizations and see which ones you like best.

GiveWell tries to rigorously evaluate charities that can be rigorously evaluated, which usually means global health. They admit that they have to exclude whole categories of charity that try to change society in vague ways, because those charities can’t be evaluated as rigorously. But they do a good job of what they do. Most of their top charities fight malaria and parasitic worms; this latter cause is interesting because these worms semipermanently lower school performance, concentration, and general health, suggesting that treating them could permanently improve economic growth. You can donate directly to GiveWell (to be divided up among their top charities at their discretion) here, or you can look at their list of top recommended charities for 2019 here.

Animal Charity Evaluators is the same thing, but for charities that try to help animals, usually by fighting factory farming. You can donate to ACE’s Recommended Charity Fund, again to be divided up among their top charities at their discretion, here, or see their list of top recommended charities for 2019 here.

AI Alignment Literature Review And Charity Comparison is a report posted by LW user Larks going over all the major players in AI safety, what they’ve been doing the past year, and which ones need more funding. If you just want to know which ones they like best, CTRL+F “conclusions” and run it through rot13. Or if you’re too lazy to do that and you just want me to link you their top recommended charity’s donation page, it’s here.

Vox’s report on the best charities for climate change lists ones that claim to be able to prevent one ton of carbon emissions for between $0.12 and $1, compared to the roughly $10 per ton you would pay on normal offset sites. Their top choice is Coalition For Rainforest Nations (but see criticism here), and their second choice is Clean Air Task Force.

You might also want to check out ImpactMatters (a version of GiveWell focused on literal First World problems), Let’s Fund (a site that highlights charities, mostly in science and technology, and runs campaigns for them), this post on the Effective Altruism forum about which charities people are donating to this year, and this list of what charities the charity selection experts at the Open Philanthropy Project are donating to.

And if you’re not actually lazy at all, you might want to check out some interesting individual charities that have been making appeals around here recently (others can add their appeals in the comments if they want).

The Center For Election Science tries to convince US cities (and presumably plans to eventually work up to larger areas) to use approval voting, a form of voting where third party candidates don’t “split the vote” and you can vote for whoever you want with a clear conscience. They argue this will make compromise easier and moderate candidates more likely to win. They’ve already succeeded in changing the ballot in Fargo, North Dakota, and as the old saying goes, “as Fargo, North Dakota goes, so goes the world.”

Happier Lives Institute wants to work directly on making people happier, but they realize nobody really knows what that means, so they’re doing a lot of meta-research on what happiness is and what the best way to measure it is. Aside from that, they seem to be working on cheap mental health interventions in Third World countries.

Machine Intelligence Research Institute works on a different aspect of AI alignment than most other groups; this comic explains the technicalities better than most sources. They are secretive and don’t talk a lot about their work or give a lot for people to evaluate them on, so whether or not you donate will probably be based on whether they’ve won social trust with you (they have with me).

Charter Cities Institute is trying to work with investors and Third World governments to create charter cities, autonomous cities with better institutions that can supercharge growth in the Third World. For example, a corrupt Third World country where doing business is near-impossible might designate one of their cities to be administered by foreign judges under an open-source law code, so that enterprise can take off. Think of it as a seastead, except on land, and with the host country’s consent (they’re hoping to profit off the tax revenue). David Friedman’s son Patri is leading another effort in this direction.

Finally, if you’re really skeptical and don’t believe any charity can accomplish much, you might want to consider GiveDirectly, which just gives your money directly to very poor people in Africa to do whatever they want with.


[ACC] When During Fetal Development Does Abortion Become Morally Wrong?

[This is an entry to the 2019 Adversarial Collaboration Contest by BlockOfNihilism and Icerun]

Note: For simplicity, we have constrained our analysis of data about pregnancy and motherhood to the United States. We note that these data are largely dependent on the state of the medical and social support systems that are available in a particular region.

Introduction: Review of abortion and pregnancy data in the United States

We agreed that it was important to first reach an understanding about the general facts of abortion, pregnancy and motherhood in the US prior to making ethical assertions. To understand abortion rates and distributions, we reviewed data obtained by the CDC’s Abortion Surveillance System (1). The Pregnancy Risk Assessment Monitoring System (PRAMS), Pregnancy Mortality Surveillance System (PMSS) and National Vital Statistics datasets were used to evaluate the medical hazards imposed by pregnancy (2, 3, 4). Finally, we examined a number of studies performed on the Turnaway Study cohort, maintained by UCSF, to investigate the economic effects of denying wanted abortions to women (5, 6, 7, 13).

Abortion rates by trimester and maternal age: According to data collected by the CDC, 638,169 abortions were performed in the United States in 2015. Data were received from 49 of 52 reporting areas, suggesting that these rates are close to the true population rates. This was equivalent to 188 abortions per 1000 live births, a 24% decline from 2006. Of these, approximately 65% were performed before 8 weeks of development, and 91% before 13 weeks. An additional 7.6% were performed between 14-20 weeks. Approximately 90% of abortions were performed on women older than 19, and adolescent women between the ages of 18-19 accounted for 67% of the abortions in women under 19. By race, non-Hispanic black women were most likely to undergo an abortion (25 per 1000 women), while non-Hispanic white women were least likely (6.7 per 1000). This translates to 390 abortions per 1,000 live births among non-Hispanic black women and 111 per 1,000 live births among non-Hispanic white women. (1) These data show that most abortions take place before the end of the first trimester, that most women choosing an abortion are adults, and that non-Hispanic black women are disproportionately likely to choose an abortion.

Mortality and morbidity associated with abortion and pregnancy: On average, there were 0.62 fatalities per 100,000 legal abortions between 2008-2014 (six reported fatalities in 2014). For comparison, there were 17.2 pregnancy-related fatalities per 100,000 live births in 2014. These data suggest that an abortion is generally safer than attempting to carry a child to term. It is also important to consider the racial disparities within these data: African-American women were more than three times as likely to die as a result of pregnancy as non-Hispanic white women (42.8 vs 13 per 100,000 live births). The reasons for these disparities are unclear. (3)
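For a rough sense of scale, the two headline rates above can be compared directly, with the caveat that the denominators differ (per abortion performed vs. per live birth):

```python
# Rough comparison of the two mortality figures quoted above.
# Denominators differ (per abortion vs. per live birth), so this is
# an approximation, not a precise relative risk.
abortion_deaths_per_100k = 0.62  # deaths per 100,000 legal abortions, 2008-2014
birth_deaths_per_100k = 17.2     # pregnancy-related deaths per 100,000 live births, 2014

ratio = birth_deaths_per_100k / abortion_deaths_per_100k
print(f"carrying to term: ~{ratio:.0f}x the mortality rate of abortion")  # ~28x
```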

Pregnant women are also at risk for severe morbidity associated with pregnancy and delivery, with approximately 50,000 women experiencing at least one severe complication in 2014. This translated to a rate of ~140/10,000 deliveries. Approximately 1.2% of live births resulted in severe maternal complications. Women can also experience significant psychological morbidity after pregnancy, as 1 out of 9 women who deliver a live infant develop postpartum depression. We were unable to find CDC data for morbidity resulting from abortion procedures; however, one publication reported that approximately 2% of abortions result in a medical complication. As these data did not discriminate between minor and severe complications, it would be reasonable to assume that abortions result in a lower overall severe complication rate than pregnancy. We will make the further assumption (based on educated guessing) that late-term abortions are more risky than early-term abortions. (2)

From these data, we conclude that pregnancy and delivery pose a significant risk to the mother’s health. These risks are greatest for African-American and Native American women. By comparison, abortion appears to pose much lower risks of death, and probably much lower risks of morbidity. Consequently, mothers undergo unique and substantial hazards which are imposed by pregnancy. 

Comparison of pregnancy-associated risks and other common risk factors: It is difficult to compare the risks of pregnancy with other factors due to the disparate means of measuring those risks (per live birth, vs per person). However, a naive interpretation of the available data suggests that, while pregnancy is relatively unlikely to lead to severe consequences, it compares in risk to other common activities. For example, the mortality rate associated with motor vehicle accidents is 12.1 per 100,000 people. This is similar to the risk of death per 100,000 live births for women in the US (16.7). (8)

An alternative approach is to examine how pregnancy, childbirth and post-pregnancy changes affect overall mortality. According to the National Vital Statistics Reports (Volume 68, 2016), pregnancy and childbirth was the 6th leading cause of death for women (all races and ethnicities) aged 20-24 and 25-29, accounting for 652 deaths in the two groups combined. Pregnancy and childbirth was the 10th leading cause of death for women between the ages of 15-19 (28 deaths). These data indicate that pregnancy is a leading cause of death in women of child-bearing age. (4)

Socioeconomic costs of unwanted pregnancy: The socioeconomic effects of abortion denial have been studied extensively on the Turnaway Study cohort at the University of California-San Francisco. One study on this cohort found that mothers who were denied a wanted abortion due to gestational age experienced a significantly higher likelihood of being unemployed, in poverty and using public assistance programs like WIC. (6) Another study based on this cohort found that already-born children of a mother denied an abortion were significantly more likely to live in poverty and fail to meet developmental milestones. (5) Mothers who were denied abortions were also less likely to have and meet aspirational goals. (7) These data indicate that women who received wanted abortions experience significantly less socioeconomic strain than women who are denied an abortion.

Adoption vs abortion: Adoption is commonly suggested as an alternative to abortion, and it does eliminate the direct socioeconomic burdens of parenthood. In practice, however, adoption is rarely chosen: in the U.S., there were approximately 18,000 adoptions compared with nearly 1 million abortions. A recent article in The Atlantic did an excellent job of summarizing potential reasons for the discrepancy. Adoption obviously does not alleviate the physical burdens and hazards of pregnancy. Additionally, several studies have suggested that women do not choose adoption because of worries about the emotional effect of giving away a child. Pro-adoption groups also suggest that both pro- and anti-abortion advocates fail to emphasize, or properly counsel women on, adoption as an alternative to abortion. (9)

Who are the stakeholders in the abortion question? The mother, the father, the fetus, and society at large. The mother’s unique interests are her safety and health, the development of a unique bond with a new human life, and the economic, emotional and physical burdens of motherhood. The father, if held responsible, shares the economic and emotional burdens of parenthood. The fetus, once it has developed the fundamental features of a human being, has at least a theoretical interest in preserving its life. Society at large has an interest in justice and preserving the rights of its members, if only out of self-interest for the individuals within that society. At some point in time, a fetus becomes considered a member of that society, with the same rights as all other individuals. Consequently, the point of conflict arises when a mother (or both parents) desires to terminate a pregnancy prior to delivery.

The question: At what point during development does abortion become a moral wrong? 

Starting positions: At conception (icerun), At fetal viability/minimal neurological activity (BlockofNihilism)

icerun’s Position: A Future Like Ours: Conception

Many arguments for and against abortion pick out a characteristic of the fetus – its size, level of consciousness, ability to feel pain, etc. – and go on to argue why this characteristic, or its lack, gives or denies the fetus a right to life. Unfortunately, these characteristics tend to have accidental byproducts: they may grant the right to life to sheep, or strip it from infants. The Future Like Ours argument begins by determining what best accounts for the wrongness of killing people like you and me (whom people on both sides of the abortion debate agree it is wrong to kill), and then uses this standard to determine whether it is wrong to kill a fetus (where this is contested).

A Future Like Ours
In Why Abortion is Immoral, Marquis argues killing someone like you or me is prima facie wrong because the deceased is robbed of a valued future like ours. (10) Killing most directly and significantly harms the one who is killed.

The harm to the deceased is the loss of her valued future, which would have included all of the experiences, relationships, and works that were valuable for their own sake or as means to something valuable. She loses not only those parts of the future she valued in the moment, but also the experiences, relationships, and works she would have come to value as she grew older or is not currently aware of: a 16-year-old may not yet value parts of his future, whether a career, a family, or woodwork, but had the teenager been allowed to develop, he may have come to value them.

In summary, it is wrong to kill somebody like you or me because it robs them of a future like ours. The value of a fetus’ future is the experiences, relationships, and works that the fetus values now, plus those it would come to value. A typical fetus cannot currently value its experiences, relationships, and works, but as it develops it will come to have the same kinds of experiences, relationships, and works that we do. Therefore, a fetus has a future like ours, and by this standard it is wrong to kill a fetus from the point of conception (for the record, Marquis does not claim it is wrong to kill a fetus from the point of conception; however, this seems to be the implication).

Intuitions: The future like ours argument works off assumptions common to pro- and anti-abortion proponents. In doing so it both avoids deriving an ought from an is and creates common ground. The account of the wrongness of killing humans must fit within these intuitions: it must explain why it is wrong to kill typical adult humans, infants, and those who are suicidal, but not wrong to kill typical sperm, eggs, and some animals. However, our intuitions differ on whether it is wrong to kill a typical single-cell zygote. Intuitively we both believe it is not wrong to kill a typical zygote, though BlockofNihilism believes this strongly and I believe it weakly. Many anti-abortion advocates have the opposite intuition.

For BlockofNihilism, the future like ours argument violates his strong intuition that it is not seriously wrong to kill a zygote, and so the argument fails. For me, it violates only a weak intuition, and while that is not enough on its own to overcome the intuition completely, the argument holds the strongest sway over my view on abortion, as it offends the fewest intuitions and is more coherent than most other arguments.

BlockofNihilism’s Position: Conscious Perception and Viability

Abortion is morally acceptable until the fetus develops the structures required for perception of external stimuli, with exceptions for preserving the life and health of the mother. Abortion is acceptable because a fetus does not experience conscious suffering “like ours” and simultaneously imposes a significant physical, mental and economic burden on the mother. As the minimum requirements for conscious perception are actually met after fetal viability, I suggest we fall back on viability as a compromise ethical barrier to abortion.

When does the fetus develop “conscious” perception? By conscious perception, we mean perception which a human person would recognize as their own. Obviously, this question pushes the limits of our powers of description. As perception is an (obviously) complex topic, I will use the perception of physical pain as an example of the requirements for conscious perception. Pain, too, is a complex psychological concept that arises at the intersection of physical sensation and emotional constructs. At the minimum empirical level, certain neurological structures are necessary, but not sufficient, for the perception of pain. Thus, until these structures are present and active, perception (as we understand it) cannot occur. (10)

To experience pain, afferent nerves must synapse with spinothalamic nerves projecting to the thalamus, which then connect to thalamocortical neurons projecting to the cortex (the region of conscious experience). Thus, all three components (peripheral pain sensors, thalamic projections, and a functioning cortex) must be active for the perception of pain. Based upon multiple studies, nociceptive neurons develop around 19 weeks, thalamic afferents reach the cortex at 20-24 weeks, and somatosensory activity provoked by thalamic activity is detectable around 28-29 weeks. Several behavioral studies have found that at 29-30 weeks of development, fetal facial movements in response to pain resemble those of adults. However, these results have been contradicted by other studies, and these findings may represent non-voluntary, unconscious responses to stimuli rather than the conscious perception of pain. (10)

In any event, a fetus does not have the required neurological structures for what we would recognize as the conscious experience of pain until at least 29 weeks of development, three weeks into the third trimester. (10) Prior to full integration of the various components of the nervous system, and the development of an active cortical system, the pain experience of a fetus would likely be akin to that of a comatose individual- no conscious experience at all.

As other types of experience require these same structures to be active, we can conclude that a fetus does not have the minimum capacity for conscious experience until approximately 29 weeks of development. Thus, when considering an abortion prior to this stage of development, we are balancing (1) the harms posed to the mother, a conscious agent, against (2) an entity that does not “experience” anything. To me, this suggests that abortion is permissible at this point.

The fetus is truly viable at ~27 weeks: With intensive care, a preterm neonate can survive when delivered as early as 24 weeks of gestation. However, survival rates at this point are approximately 50%, and these severely preterm neonates are at significantly increased risk of a variety of both short- and long-term complications. By 27 to 28 weeks, the fetus can be delivered and survive in most cases without major interventions. So, true fetal viability and the development of the fundamentals for conscious experience are roughly concurrent, with viability likely being reached prior to conscious experience.

Potential harms and viability: Viability means that the fetus no longer requires the mother’s body to survive. Within the womb, the fetus imposes both a significant immediate burden as well as the potential for significant harms. Once safely delivered, these harms are no longer present. While the mother is still on the hook for the economic and emotional burdens of motherhood, her life is no longer at risk. Also, while adoption is a possibility after birth, it is obviously not an option prior to delivery. Consequently, viability represents a special moment in the development of a fetus- it can live without posing a significant hazard to the mother’s physical well-being. While we could not find solid evidence (likely due to the very low number of late-term abortions performed), my educated guess is that an abortion at this late stage is approximately as dangerous as performing a natural delivery or C-section. Consequently, at viability, it is reasonable to treat the fetus as having full human rights and intercede to protect its life.

icerun’s rebuttal to BlockofNihilism:

Viability: The only difference between a viable fetus and an infant is location, which is not a moral distinction (except in cases of direct harm to the mother); therefore a viable fetus should be seen as having the same right to life as an infant. The chain would seem to continue. The primary distinction between a viable fetus and a nonviable fetus is that a nonviable fetus's survival depends solely on one person (the mother), whereas a viable fetus's survival can depend on others. This does not appear to be a moral distinction either, and so the viability argument appears to be very closely related to the argument that the fetus gains the right to life at birth or when it becomes an infant. Therefore, a viable fetus would have the same right to life as a newborn; however, without further reasoning, it seems likely a fetus gains the right to life earlier.

Experience: BlockofNihilism argues that since a fetus before 29 weeks is not capable of conscious experience, it is not capable of suffering, and therefore it is not wrong to abort it. However, there are times when adult humans are not conscious, and, in the case of the temporarily comatose, even unable to achieve consciousness. Because they are not conscious, they are not capable of suffering. This argument seems to allow for the killing of sleeping and temporarily comatose humans, as long as they do not suffer, feel pain, or realize what is happening in the moment.

Further, an adult would likely not recognize the consciousness of a fetus as his own. It is unlikely that a fetus or infant has a sense of self, and they seem to operate at a significantly lower level of self-awareness. Though we do not have a good understanding of the level of consciousness a fetus holds, a dog appears to operate at a higher level of consciousness than a fetus, though this is very speculative.

For these reasons, the experience of suffering is not what makes it wrong to kill a fetus or human.

BlockofNihilism’s Rebuttal of the Future Like Ours Account:

Consciousness-based and FLV-based arguments arrive at the same place: For me, any ethical argument that places the interests of a non-conscious entity incapable of experience above the interests of a conscious agent capable of both rational decision-making and of suffering is intuitively absurd. Prior to the development of the basics for neurological experience, the fetus represents the potential for a future life of value or the potential to be a conscious agent. In either case, I do not believe that the potential outweighs the present!

I understand how the future life-of-value (FLV) argument can seem to apply to a fetus: We imagine the entity that will come from the fetus, imagine its potential for an FLV, and extrapolate rights from there. However, a fetus represents the potential for having a life of value and cannot be said to currently possess that future in the way implied by Marquis. I believe the intuitive appeal of the future life-of-value argument arises from our experience and knowledge of what a “future” constitutes. However, fetuses prior to their development of the basic neurological structures required for experience cannot have or value their “future.” 

My interpretation is informed by Boonin’s famous critique of Marquis’ “future of value” argument. (11) According to Boonin, the intuitive value of the future can be found in the dispositional ideal present value of a future. A dispositional value or belief is one that someone holds but does not consciously have in mind; an ideal value or desire is one that would be held if one had full information about the situation. The dispositional ideal desire formulation is more parsimonious, as it invokes only present desires rather than potential ones. Thus, the wrongness of killing someone like you or me is the taking of a future like ours that they dispositionally, ideally, and presently value. Upon developing the neurological structures necessary for experience, a fetus can begin to (at least unconsciously) desire food, close touch, and its parents’ voices. The neurological structures necessary for these desires, and for meeting the minimum requirement for having an FLV, develop near or at the point of viability.

Consciousness-based accounts do not allow for murdering sleeping people! There is a clear distinction between an entity that has had the experience of consciousness (a sleeping or temporarily comatose individual) and an entity that has never been conscious. A sleeping person still has her memories, desires and agency encoded within her brain; the fact that she is temporarily unaware of those attributes does not mean they do not exist! Conversely, a fetus prior to its developing consciousness has no memories, desires or agency. It cannot be said to be a person yet. My argument is simple- prior to having the minimum requirements for consciousness there is absolutely no chance whatsoever that a fetus can experience any harm like we (persons) do.

Once these structures are developed and active, it becomes far more difficult to determine “when” a fetus or infant reaches consciousness. At this point, I become squeamish with the prospect of destroying something that potentially does have a conscious experience (including a “future of value” concept) like ours. The moral calculus changes: Instead of balancing a person’s interest (mother) vs a nonperson’s interests (fetus), we now have a person vs (maybe a person?). This is where, to be safe and prevent potential harms, we can draw a clear ethical line.

Preventing abortion prior to viability will cause significant harms: As previously discussed, substantial scientific evidence suggests that preventing wanted abortions will lead to harm. First, there would be a significant increase in morbidity and mortality associated with pregnancy. This increase would disproportionately impact economically disadvantaged and minority women. Second, women denied wanted abortions are significantly more likely to suffer socially, economically and psychologically. Perhaps most importantly, women (or both parents) are denied agency and denied the ability to make the ethical decision for themselves according to their unique circumstances and beliefs. 

Location, location, location! Viability represents the best point for ethical compromise: Terminating a fetus after it is capable of living “on its own” is equivalent to infanticide. In the special case of a fetus, location does have moral significance. The fetus, living within and dependent upon the mother’s body, poses immediate and potential costs and hazards to the mother. By contrast, once delivery has taken place the fetus/neonate no longer poses these threats. While the mother still has the significant economic and social burdens of motherhood, these burdens are unlikely to lead to immediate physical harm. And for the mother unable to cope with these burdens, adoption or surrendering the care of the infant to the state is an option once delivery of a viable neonate has taken place.

Icerun’s Defense of the Future Like Ours Account

Capacity of a fetus to have a future: The fetus does not have a merely potential future, nor is the fetus' future simply a concept in its brain. The future of a fetus is those unrealized experiences the fetus will have if its development is not impeded. Likewise, a 20-year-old will be a 25-year-old with experiences, relationships, and works if his development is not impeded. Sometimes a human's development is impeded by natural causes, in which case we mourn their loss of a future, or by conscious decisions, in which case we mourn and try to provide restitution as best we can.

In fact, one’s future is most certainly not in, or dependent on, the brain. A 4-year-old does not have a good understanding of what it is like to be a 60-year-old, yet being a 60-year-old is still a part of his future. If the 4-year-old is killed, he has lost not only the relationships he understands as a 4-year-old but also a future that includes a career, children, and whatever he would have found valuable and meaningful as an adult.

Boonin’s present future account fails: Marquis and Boonin account for the value of the parts of our future that we do not yet know (e.g., our future in 20 years) in different ways. Marquis includes both our present valuation of the future and our future valuation of the future, while Boonin argues for a present ideal desire for the future. However, to have an ideal desire, one must first have an actual desire (if an actual desire were not required, then one could say that zygotes or trees have ideal desires).

Though the fetus can be said to have desires, these desires are unconscious. A conscious desire is willed and chosen to a certain extent, whereas an unconscious desire is simply the body doing what the body does; calling it a “desire” is a personification which is often helpful but, in this case, not relevant. The unconscious desire for warmth is simply the brain releasing chemicals based on external states. Similarly, the zygote begins to multiply based on external states, stem cells divide into different cell types based on external states, and the heart begins beating based on external states. The heart beating or the zygote dividing seem to fulfill the requirement of having some unconscious desire. The fact that it is the brain responding to outside stimuli is not morally relevant – the fetus does not appear to be aware of a desire for warmth, just as it is not aware of the heart’s desire to pump blood. If conscious desires are necessary, then newborns and possibly older infants likely do not have a right to life, as they do not appear to have conscious desires or a sense of self. Thus, though Boonin’s account is more parsimonious, it fails because it does not grant infants a right to life.


icerun’s conclusion: The point where the fetus gains the right to life is rightly contested and debated, as I do not believe any completely coherent and consistent argument defines the point of development at which the fetus gains the right to life.

The latest possible point at which abortion may be permissible appears to be viability, where the sole difference between an infant and a fetus is location (one inside the womb and dependent on a specific person, the other outside the womb and able to be cared for by others). However, abortion may be impermissible at an earlier point, and the point of viability does not appear to have a moral significance that makes the fetus seriously wrong to kill.

In the end, though, I have not come to a solid position on the point at which it becomes wrong to kill a typical fetus, and, it is important to note, I have failed to provide a coherent argument of my own. In making my decision on abortion, three considerations weigh heaviest:

First, in cases of consensual sex (excluding rape), parents hold a strong positive obligation to provide for and protect a child once it gains the right to life. This obligation arises because children have a right to life, require support to survive, and the parents engaged in activities known to create humans. Second, the future like ours argument points to the fetus gaining a right to life at conception; though this goes against my intuitions, it comes the closest to providing a coherent and consistent argument. It is a model for understanding why it is seriously wrong to kill humans, and thus points to an earlier rather than later point in the fetus' development. My choice of this argument is likely biased by various intuitions that I hold, and others would no doubt come to focus on other flawed arguments based on their own intuitions. Third, there are situations where bearing a child brings significant problems for either the mother or the fetus, where abortion appears the best option.

A mesh of all three, in light of the uncertainty about when the right to life begins for a fetus, perhaps leads to the stance that abortion should be safe, legal, and rare, with abortion investigated on a case-by-case basis that attempts to balance the weightiness of aborting a fetus against the practical costs and difficulties imposed on parents.

BlockofNihilism’s conclusion: If my ethical standard were to be adopted and used to change current practice in the US, it would allow for a few more elective early third-trimester abortions than are currently performed. However, it would have little to no effect on the current situation, as most abortions are performed well before viability. I believe that communicating our knowledge about the fetus pre-viability, including its lack of internal conscious experience, would significantly reduce the potential for psychological harms to women who choose abortions. In contrast, if abortion after conception was prevented, there would be several negative consequences. There would be a significant increase in pregnancy-related morbidity and mortality that would disproportionately affect minority and socioeconomically distressed women. The likely uptick in illegal abortions would increase the likelihood of unsafe abortions, further increasing the risk of morbidity and mortality. Finally, the denial of wanted abortions imposes pronounced social and economic strains on new mothers and their families. These consequences are, obviously, of significant moral concern.

I remain convinced that abortion is acceptable prior to fetal viability. I believe that the intuitive appeal of the FLV argument is, as suggested by Boonin, not applicable to a fetus prior to developing the fundamental requirements for neurological experience. Even if we decided that the FLV argument pertained to fetuses, the fact that abortion pre-viability cannot cause conscious harm outweighs any potential for FLV that could result from a fetus carried to term. I believe (like Aesop) that a bird in the hand (the mother’s rights, interests and potential for harm) far outweighs a bird in the bush (the non-conscious potential person represented by a fetus).

Shared conclusion: Abortion is never a happy choice. Regardless of our ethical position on the abortion question, we agree that new people are of tremendous value! Improvements in the delivery and efficacy of birth control options, increases in social support systems for mothers and parents, reducing pregnancy-associated morbidity and mortality and increasing access to alternative options like adoption are all essential factors in reducing the number of abortions and any potential harms that arise from them. By focusing on these issues rather than on preventing abortions directly through legal or ethical edicts, we can make having a child a more reasonable and safe option than at present.

Works cited:
5. Foster DG, Raifman S, Gipson JD, Rocca CH, Biggs MA. Effects of Carrying an Unwanted Pregnancy to Term on Women’s Existing Children. February 2019. The Journal of Pediatrics, 205:183-189.e1.
6. Foster DG, Biggs MA, Ralph L, Gerdts C, Roberts SCM, Glymour MA. Socioeconomic Outcomes of Women Who Receive and Women Who Are Denied Wanted Abortions in the United States. January 2018. American Journal of Public Health, 108(3):407-413
7. Upadhyay UD, Biggs MA, Foster DG. The effect of abortion on having and achieving aspirational one-year plans. November 2015. BMC Women’s Health, 15:102. (Request pdf)
10. Lee SJ, Ralston HJP, Drey EA, Partridge JC, Rosen MA. Fetal Pain: A Systematic Multidisciplinary Review of the Evidence. JAMA. 2005;294(8):947–954.
11. Boonin, D. (2002). A Defense of Abortion (Cambridge Studies in Philosophy and Public Policy). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511610172
12. Marquis, Don. “Why Abortion Is Immoral.” The Journal of Philosophy, vol. 86, no. 4, 1989, pp. 183–202.

[ACC] Should Gene Editing Technologies Be Used In Humans?

[This is an entry to the 2019 Adversarial Collaboration Contest by Nita J and Patrick N.]


In October 2018, the world’s first genetically edited babies were born, twin girls given the pseudonyms Lulu and Nana; Chinese scientist He Jiankui used CRISPR technology to edit the CCR5 gene in human embryos with the aim of conferring resistance to HIV. In response to the international furor, China began redrafting its civil code to include regulations that would hold scientists accountable for any adverse outcomes that occur as the result of genetic manipulation in human populations. Now, reproductive biologists at Weill Cornell Medicine in New York City are conducting their own experiment designed to target BRCA2, a gene associated with breast cancer, in sperm cells. While sometimes considered controversial, gene editing has been used as a last resort to cure some diseases. For example, a precursor of CRISPR was successfully used to cure leukemia in two young girls when all other treatment options had failed. Due to its convenience and efficiency, CRISPR offers the potential to fight cancer on an unprecedented level and tackle previously incurable genetic diseases. However, before we start reinventing ourselves and mapping out our genetic futures, maybe we should take a moment to reevaluate the risks and repercussions of gene editing and rethink our goals and motives.

How does CRISPR work?

CRISPR, which stands for clustered regularly interspaced short palindromic repeats, is an adaptive bacterial immune response that protects against repeat offenders. When exposed to a pathogenic bacteriophage, a bacterium can store some viral phage DNA in its own genome in “spacers,” which function as genetic mug shots, allowing the bacterium to quickly mount a defense in case of future invasions. When necessary, the CRISPR defense system will slice up any DNA matching these genetic fingerprints. In 2012, Jennifer Doudna and Emmanuelle Charpentier demonstrated how CRISPR could be used to slice any DNA sequence of choice. The CRISPR-Cas9 system allows researchers to not only recognize and remove DNA sequences but also modify them. The completion of the Human Genome Project in 2003 provided a copy of the genetic book of life; CRISPR offers a way to purportedly erase and “correct” certain words in that book.

Of course, this newfound power raises several ethical concerns. The major worry among scientists revolves around the long-term consequences of germline modification, meaning genetic changes made in a human egg, sperm, or embryo. Edits made in the germline will affect every cell in an organism and will also be passed on to any offspring. If a mistake is made in the process and a new disease inadvertently introduced, these changes will persist for generations to come. Human germline modification could also theoretically allow for the installation of genes to confer protection against infections, Alzheimer’s, and even aging. For many, the thought of controlling our own genetic destinies seems to be a very slippery slope, conjuring up dystopian images of Frankenstein or Brave New World. For these reasons and more, in 2015, Doudna and other scientists proposed a moratorium on the use of CRISPR-Cas9 for human genome editing until safety and efficacy issues could be more thoroughly addressed.

How safe and efficient is gene editing?

CRISPR is currently being used in clinical trials for cancers and blood disorders; since these interventions won’t lead to heritable DNA changes, these trials don’t face the same ethical dilemmas as Dr. He’s experiment but may nevertheless carry risks. Doubts persist about the safety and efficacy of the CRISPR gene editing system, as many other initially promising technologies have failed. Conventional gene therapies, which attempt to insert healthy copies of genes into cells using viruses, faced many early setbacks, including the tragic death of 18-year-old Jesse Gelsinger in 1999 during a gene therapy trial for ornithine transcarbamylase deficiency. However, the causes surrounding Gelsinger’s death may have included a systemic immune response triggered by the use of a viral vector.

While the death of Jesse Gelsinger marked a somber moment for the field, gene therapy also saw successes: researchers in Paris treated two young infants suffering from a fatal form of severe combined immunodeficiency disease (SCID), an inherited disorder characterized by low levels of T cells and natural killer cells that leaves affected patients incredibly susceptible to infection. Fortunately, viral gene therapy was able to reverse the disease symptoms in that case. On the other hand, gene therapy trials using viral vectors were halted when 25-50 percent of patients developed leukemia resulting from the insertion of a gene-carrying virus near an oncogene, a gene with the potential to cause cancer. Modern CRISPR technology is not affected by this particular hurdle, as it does not rely on viral vectors. While more precise than traditional gene therapy, CRISPR nonetheless sometimes produces unintended edits, which may be especially problematic for certain gene targets. Some pairs of genes are “linked” due to physical proximity on the same chromosome and are therefore almost always passed on together. Any edit to a gene belonging to a linked pair may therefore inadvertently alter its neighboring partner.

Even intended cuts can have unexpected consequences. Two separate 2018 studies published in Nature Medicine, one conducted by the Karolinska Institute in Sweden and the other by Novartis Institutes for Biomedical Research, concluded that CRISPR edits might increase the risk of cancer via inhibition of a tumor suppressor gene called p53, which has been described as “the guardian of the genome” due to its crucial role in maintaining genomic stability. Double-stranded DNA breaks made by CRISPR activate p53-mediated repair mechanisms that instruct the cell to either mend the damage or self-destruct. Making these types of edits successfully would therefore require inhibition of p53; however, cells could become more vulnerable to tumorigenic mutations and the development of cancer as a result. “We don’t always fully understand the changes we’re making,” says Alan Regenberg, a bioethicist at the Johns Hopkins Berman Institute of Bioethics. “Even if we do make the changes we want to make, there’s still question about whether it will do what we want and not do things we don’t want.”

Nevertheless, a slight increase in cancer risk might be a worthwhile trade-off for many patients with genetic diseases, such as the aforementioned SCIDs, which affect 1 in 50,000 people globally. Usually, the only cure for SCIDs is a bone marrow transplantation, which requires a matched donor in order to avoid rejection by host immune cells or, alternatively, the depletion of T cells to avoid rejection in the case of an unmatched donor. CRISPR offers a safer, more efficient way to treat genetic diseases such as SCIDs. Bone marrow cells of a patient may be extracted and genetically modified using CRISPR, thereby avoiding rejection by the host immune system. Pre-clinical trials in mice are already underway to test the safety and efficacy of this approach. Stanford scientist Dr. Matthew Porteus demonstrated the efficiency of this technique and said in an interview, “We don’t see any abnormalities in the mice that received the treatment. More specifically, we also performed genetic analysis to see if the CRISPR-Cas9 system made DNA breaks at places that it’s not supposed to, and we see no evidence of that.”

CRISPR also offers the additional possibility of removing parts of a gene, providing extra value over standard viral gene therapy, which only allows for the insertion of genes. This feature can be especially important for autosomal dominant genetic disorders, which manifest with only one copy of a deleterious mutation. In her book, A Crack in Creation, Jennifer Doudna speculates that as CRISPR becomes increasingly safe, the tool may be used to help people who aren’t fortunate enough to win the genetic lottery. Doudna writes, “Someday we may consider it unethical not to use germline editing to alleviate human suffering.” What was unthinkable just a few years ago may soon enter clinical practice.

Are some genetic variants superior to others?

In biology, those organisms that are most suited to their environment exhibit the highest fitness, a measure that accounts for both survival and reproduction. The accumulation of mutations over time is thought to contribute to many disease processes, but genetic diversity can also be beneficial for an organism when faced with a changing environment or unanticipated stress, such as drought or illness. Discussions on rigid natural selection should give way to more nuanced conversations on “balancing selection, the evolutionary process that favors genetic diversification rather than the fixation of a single ‘best’ variant,” as described by Professor Maynard V. Olson at the University of Washington.

Evolution has allowed many potentially deleterious genes to remain in the gene pool due to their ability to impart a selective advantage to individuals with carrier status, a phenomenon referred to as heterozygote advantage. Sickle cell anemia is a disease inherited in an autosomal recessive pattern—two copies of the problematic gene variant are necessary for disease expression. However, having just one copy of that variant confers resistance to malaria, which may explain the increased prevalence of sickle cell anemia in areas where malaria is more common, namely India and many countries in Africa. In this manner, malaria acts as a selective evolutionary pressure maintaining the occurrence of the sickle cell variant in the gene pool.
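The autosomal recessive arithmetic behind this can be made concrete with a small Punnett-square calculation. This is an illustrative sketch (the allele labels A and S are our shorthand, not from any cited study): for two carrier parents, each with one normal allele (A) and one sickle allele (S), the four equally likely allele combinations yield a 25% chance of an unaffected non-carrier, a 50% chance of a malaria-resistant carrier, and a 25% chance of sickle cell disease.

```python
from itertools import product
from collections import Counter

# Each carrier parent contributes one of two alleles with equal probability:
# "A" (normal) or "S" (sickle).
parent1 = ["A", "S"]
parent2 = ["A", "S"]

# Enumerate the four equally likely allele combinations (a Punnett square).
# Sorting each pair treats "AS" and "SA" as the same genotype.
genotypes = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
total = sum(genotypes.values())

for genotype, count in sorted(genotypes.items()):
    print(genotype, count / total)
# AA 0.25  (unaffected non-carrier)
# AS 0.5   (carrier: heterozygote advantage, malaria resistance)
# SS 0.25  (affected: sickle cell disease)
```

The same enumeration explains why the variant persists under malarial selection pressure: half of all children of two carriers inherit exactly one copy and gain the protective effect.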

Nevertheless, sickle cell disease has become prevalent in countries currently unaffected by malaria. In the United States, approximately 100,000 people suffer from sickle cell disease, but therapeutic options remain limited. Researchers have been investigating the possible insertion of wild-type, “anti-sickling” genes using viral vectors in affected patients as therapy. However, since the pathological mutation for sickle cell disease has already been clearly identified, correction of the mutated gene using CRISPR may offer a more straightforward approach. The biotech company CRISPR Therapeutics recently announced the results of a phase I clinical trial in which CRISPR technology was used to treat a patient with sickle cell disease, although the efficacy and safety of the intervention have not yet been evaluated.

Can gene editing eliminate disease?

To answer this question, we first need to evaluate our understanding of genetics and weigh the importance of genetics against environmental factors such as diet and lifestyle.

How reliable is our understanding of gene-disease links?

A mutation is usually defined as a genetic sequence that differs from the agreed-upon consensus or “wild-type” sequence. After the completion of the Human Genome Project in 2003, the arduous process of genome annotation began. Genome-wide association studies, or GWAS, began examining population data over time to look for possible associations between genetic variants, or genotypes, and physical traits and diseases, or phenotypes. Unfortunately, these studies often fail to employ random sampling, and 96 percent of subjects included in GWAS have been people of European descent. In fact, scientific disciplines frequently disproportionately sample from WEIRD (Western, educated, industrialized, rich, democratic) populations, whether studying genetic diseases or human gut microbiota.

Given the sources of genetic information used to determine “wild-type” sequences, we may be using information that is relevant to one demographic but not another. According to Maynard Olson, one of the founders of the Human Genome Project, the wild-type human simply doesn’t exist, and “genetics is unlikely to revolutionize medicine until we develop a better understanding of normal phenotypic variation.” These words seem to have fallen on deaf ears, however, as evidenced by the burgeoning numbers of genome-wide association studies conducted over the last 12 years. Most of the associations discovered thus far are only correlative, and few studies have been conducted to determine whether observed associations are indeed causal.

Closer examination of the relationship between gene variants and certain diseases reveals weak associations in many cases. For example, the APOE gene, which encodes the protein apolipoprotein E, comes in three forms (APOE2, APOE3, and APOE4), with the last being associated with an increased risk of developing Alzheimer’s disease (AD). However, the correlation is not determinative: the Nigerian population exhibits high frequencies of the APOE4 allele but low frequencies of AD. Environment and nutrition also play significant roles in disease pathophysiology, as illustrated by Dr. Dale Bredesen’s research demonstrating reversal of cognitive decline through a targeted dietary and lifestyle approach. In fact, the majority of afflictions commonly affecting the general population, such as type 2 diabetes, cardiovascular disease, cancer, Alzheimer’s, and Parkinson’s, are not caused solely by mutations.

How often does disease arise as the result of genetic mutation alone?

Chronic diseases are the result of a complex interplay between host genetics and the environment. A study conducted by the Wellcome Trust Sanger Institute in Cambridge, England analyzed DNA sequencing data from 179 people of African, European, or East Asian origin as part of the 1000 Genomes Pilot Project. The researchers discovered that healthy individuals carried an average of 400 mutations in their genes, including around 100 loss-of-function variants that result in the complete inactivation of about 20 protein-coding genes. These findings indicate that deleterious mutations, even those that lead to protein damage, do not invariably give rise to disease. As Professor James Evans from the University of North Carolina, who was not involved in the study, summarized in an NPR health blog, “We’re all mutants. The good news is that most of those mutations do not overtly cause disease, and we appear to have all kinds of redundancy and backup mechanisms to take care of that.” The authors hypothesize that healthy individuals can carry disadvantageous mutations without showing ill effects for a number of possible reasons: an individual may carry just one copy of a gene mutation for a recessive disorder that requires two mutations in order to manifest the disease, the disease may exhibit delayed onset or require additional environmental factors for expression, or the reference catalogs used to identify gene-disease links may be inaccurate. One analysis found that 27 percent of database entries cited in the literature were incorrectly identified.

To account for the discrepancy between genetic predisposition and disease manifestation, in 2005, cancer epidemiologist Dr. Christopher Wild proposed the concept of the exposome, which encompasses “life-course environmental exposures (including lifestyle factors) from the prenatal period onwards” and accounts for factors such as socioeconomic status, chemical contaminants, and gut microflora. The risk of developing a chronic disease during one’s lifetime may be modeled by G×E: the interaction between a person’s genetics (G) and lifetime exposures (the exposome, E). Identical twin studies reveal that genotype alone cannot determine whether a given phenotype will be expressed, and the interaction between genes and the environment must be taken into account. 
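The G×E framing can be made concrete with a toy model. The sketch below (Python, with entirely made-up coefficients; nothing here is fitted to real data) treats lifetime disease risk as a logistic function of a genetic score, an exposome score, and their interaction:

```python
import math

def disease_risk(g, e, b0=-4.0, bg=1.0, be=1.2, bge=1.5):
    """Toy logistic model of chronic-disease risk with a G×E interaction.

    g: genetic predisposition (0 = no risk variant, 1 = high-risk variant)
    e: cumulative exposome burden, scaled to [0, 1]
    All coefficients are illustrative placeholders, not estimates.
    """
    logit = b0 + bg * g + be * e + bge * g * e
    return 1 / (1 + math.exp(-logit))

# A high-risk genotype with a clean exposome barely moves the needle...
low = disease_risk(g=1, e=0)
# ...but the same genotype plus heavy lifetime exposures does, because
# the interaction term bge only contributes when both G and E are present.
high = disease_risk(g=1, e=1)
```

The point of the interaction term is that neither the genotype nor the exposome alone determines the phenotype, which is what the identical-twin data above suggest.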

In fact, the “genes load the gun, environment pulls the trigger” paradigm may be overly simplistic, as Dr. Alessio Fasano at Harvard Medical School has shown that loss of intestinal barrier function is likely also necessary for the development of chronic inflammation, autoimmunity, and cancer. Two particular gene markers, HLA-DQ2 and HLA-DQ8, are observed in the vast majority of celiac disease cases. While over 30 percent of the U.S. population carries one or both of the necessary genes, only around one percent of Americans are affected by celiac disease. This data suggests that exposure to gluten through ingestion of wheat, barley, or rye is not sufficient to trigger the development of celiac disease even in individuals with a genetic predisposition. Without the additional loss of intestinal tight junction function, celiac disease is not made manifest. Thus, factors besides genetics are necessary for the development of chronic disease.
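The celiac figures above make the gap between predisposition and disease concrete. A back-of-the-envelope Bayes calculation (assuming, as the text implies, that essentially all celiac patients carry HLA-DQ2 or HLA-DQ8):

```python
# Figures from the text: ~30% of the U.S. population carries HLA-DQ2
# and/or HLA-DQ8, but only ~1% of Americans develop celiac disease.
p_carrier = 0.30   # P(carries a necessary gene)
p_disease = 0.01   # P(celiac disease)

# If nearly all celiac patients are carriers, P(carrier | disease) ~ 1,
# and Bayes' rule gives the risk conditional on genotype:
p_disease_given_carrier = p_disease * 1.0 / p_carrier
print(round(p_disease_given_carrier, 3))
```

So even among people with the "necessary" genotype, only roughly 3 percent ever develop the disease; genotype alone leaves about 97 percent of carriers unaffected.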

How does gene expression contribute to disease risk?

The concept of genetic determinism purports that our genes are our destiny, but genes are not nearly as important as gene expression. When most people think of evolution, the first name that comes to mind is Charles Darwin, but an earlier French naturalist, Jean-Baptiste Lamarck, had proposed a theory of “acquired characteristics” by which individuals evolved certain traits within their lifetimes. The most oft-cited example discrediting this theory is that of giraffes elongating their necks by stretching to reach the treetops and then passing on this trait of long necks to their progeny. In contrast, Darwin proposed that those giraffes that had the longest necks went on to find food, survive, and reproduce. Eventually, Darwin’s theory of natural selection prevailed, but his French naturalist predecessor may have simply foreseen the field of epigenetics, the study of those drivers of gene expression that occur without a change in DNA sequence. The prefix “epi-” means “above” in Greek, and epigenetic changes determine whether genes are switched on or off and also influence the production of proteins. If you imagine your genetic code as the hardware of a computer, epigenetics is the software that runs on top and controls the operation of the hardware. Epigenetic changes control the expression of genes through various mechanisms and are influenced by diet, exercise, lifestyle, sunlight exposure, circadian rhythms, stress, trauma, exposure to pollutants, and other environmental factors.

The epigenetic mechanism of DNA methylation involves tagging DNA bases with methyl groups, a process that tends to silence genes. DNA methylation is responsible for X-chromosome inactivation in females, a process necessary to ensure that females don’t produce twice the number of X-chromosome gene products as males. Methylation is also responsible for the normal suppression of many genes in somatic cells, allowing for cell differentiation. Every somatic cell in the human body contains nearly identical genetic material, but skin cells, muscle cells, bone cells, and nerve cells exhibit different properties due to different sets of genes being turned on or off. Dietary nutrients such as vitamin B12, folic acid, choline, and betaine double as methyl donors, so even small changes in nutritional status during gestation can result in markedly different effects on gene expression and varied physical characteristics in the offspring. If differential gene expression can produce such drastic changes, is genome rewriting really necessary? Perhaps the centrality of the gene in driving human health has been overstated. Indeed, why worry about a potentially pathogenic gene if it is never expressed?

Inappropriate DNA methylation has been referred to as a “hallmark of cancer,” along with uncontrolled cell growth and proliferation. Almost all types of human tumors are characterized by two distinct phenomena: global hypomethylation, which may result in the expression of normally suppressed oncogenes (genes that promote tumor formation), and regional hypermethylation near tumor suppressor genes. In other words, genes that promote tumor formation are turned on while genes that suppress tumor formation are turned off. Cigarette smoke has been shown to promote both demethylation of metastatic genes in lung cancer cells and regional methylation of other specific genes via modulation of enzymatic activities. In short, genes themselves do not drive tumor formation; rather, inappropriate gene expression increases the risk of tumor development.

Can gene editing treat cancer?

Cancers are front and center among the conditions gene editing therapies are targeted to treat. To answer the question of whether CRISPR can be used to treat cancer, we need to first examine how cancer arises. Medical textbooks frequently attribute the development of cancer to the accumulation of mutations over time. However, the accumulation of genetic mutations is not sufficient to cause cancer; the tumor microenvironment must be taken into account. In other words, the same oncogenic mutation that is adaptive for cancer in altered tissue confers no advantage in healthy, homeostatic tissue.

James DeGregori at the University of Colorado School of Medicine offers the following analogy. When tackling drug dealing in the inner city, arresting all the drug dealers is unlikely to work; the ones left behind will be smarter and more conniving. Instead, one might focus on creating better jobs, schools, and infrastructure, so citizens won’t have to resort to crime as a means of survival. Addressing the environment that led to the problem in the first place will provide a more stable long-term solution. Similarly, instead of simply targeting the cancer, altering the microenvironment to disfavor its proliferation may provide a more viable long-term strategy, as the former immediately selects for resistance, accounting for the difficulty in keeping a patient in remission. Highlighting the importance of the microenvironment in regulating development, homeostasis, and cancer, biologist Mina Bissell writes, “The sequence of our genes are like the keys on the piano; it is the context that makes the music.” Cancer depends on context, as should our approach to treatment.

Despite recent medical advances, standard cancer treatment has seen little fundamental change in decades. Standard therapies rely on toxic chemotherapy, which destroys both cancerous and healthy tissue. Furthermore, cancerous cells often evade detection and destruction by host immune defenses by expressing cell surface molecules that prevent killing by host T cells. A new and effective form of immunotherapy known as chimeric antigen receptor (CAR) T cell therapy attempts to harness the power of the human immune system to recognize and kill cancer cells. However, this method has several disadvantages. A patient must have a sufficient number of immune cells prior to beginning therapy, which may not be the case for patients who have already received chemotherapy. Additionally, the process is time-consuming, and the use of viral vectors may increase the risk of developing other cancers.

To address the issues of T cell collection and manufacturing delays, researchers are now developing “off-the-shelf” CAR T cells, which utilize gene editing to prevent rejection by the host immune system and the development of graft-versus-host disease (GvHD), a condition in which foreign immune cells attack the recipient’s body. In 2017, two infants with relapsing leukemia were successfully treated with these “off-the-shelf” CAR T cells, which were modified using the genome editing tool TALEN. Short for transcription activator-like effector nucleases, TALEN can be considered the predecessor to CRISPR and uses enzymes that are specifically guided to a genomic sequence to induce a cut. However, designing these enzymes requires extensive work, making the process costly and time-consuming. Additionally, in vitro studies have demonstrated that CRISPR techniques exhibit better correction efficiencies and fewer off-target effects than TALEN. Moreover, the use of CRISPR can speed up the manufacturing of CAR T cells and drive down costs of such therapies from hundreds of thousands of dollars to a few hundred dollars.

Can gene editing prevent HIV?

Another prospective application for CRISPR technology is the treatment of human immunodeficiency virus (HIV) infection. Today, approximately 37 million people around the world live with HIV. The use of antiretroviral drugs has greatly reduced the death rate, from 1.9 million in 2004 to less than one million in 2017. Challenges still exist, as the virus inserts itself into the host genome and mutates rapidly, making complete eradication of the disease very difficult. About one percent of the population is naturally immune to HIV due to a CCR5 gene mutation, which prevents the expression of a cell surface receptor that HIV binds to in order to gain entry into host cells. As previously mentioned, the first genetically edited babies were born in October 2018 after Chinese scientist Dr. He Jiankui used CRISPR technology to edit the CCR5 gene in human embryos.

According to Dr. He, a married couple with the pseudonyms Mark and Grace consented to in vitro fertilization with additional CRISPR treatment to provide immunity to HIV for their offspring. First, a process called sperm washing was used to separate sperm from semen, the fluid that carries HIV. Next, eggs were fertilized by sperm to create embryos, on which Dr. He performed CRISPR gene editing. After several implant attempts, successful pregnancy was achieved. Nine months later, twins with the pseudonyms Lulu and Nana were born healthy and purportedly suffered no off-target effects from the CRISPR therapy.
Testing indicated that gene editing did not successfully alter both copies of the CCR5 gene in one of the twins, however. The Chinese researchers were apparently aware of the gene editing failure prior to the pregnancy attempt; the decision to proceed with implantation regardless has numerous ethical implications. “In that child, there really was almost nothing to be gained in terms of protection against HIV and yet you’re exposing that child to all the unknown safety risks,” said Dr. Kiran Musunuru, a gene editing expert at the University of Pennsylvania. The choice to use the incompletely edited embryo suggests that the researchers may have been more focused on testing the accuracy of the gene editing technology than providing immunity to disease.

According to the Chinese government and his employers, Dr. He acted without the knowledge or consent of his superiors. Chinese authorities suspended all of He’s research activities, saying his work was “extremely abominable in nature” and a violation of Chinese law. In fact, the procedure was not medically necessary. When only the father is HIV-positive, as in this case, sperm washing alone is usually sufficient to reduce transmission of the virus. A meta-analysis that investigated the efficacy of sperm washing did not find a single case where HIV was transmitted to offspring.
Dr. He claims that the CCR5 gene is already very well characterized, but a recently published study found that decreased function of the CCR5 gene enhances cognitive function in mice. At first glance, this new knowledge may appear to be a boon, but the potential benefit also invites a discussion on the possibility of designer babies. Another point to consider is the fact that the CCR5 mutation that confers HIV immunity more commonly appears in Caucasians and may make individuals more susceptible to infections that are common in Asia.

Can gene editing be used to create designer babies?

A discussion on human genome editing would not be complete without evaluating the potential to create “designer babies,” a term commonly used in the vernacular to refer to babies with genetic enhancements. Both the utility of gene editing for basic research and the use of somatic gene editing to heal the sick are widely accepted among the public. The waters become murkier when we consider germline editing and the possibility of preventing disease or altering traits unrelated to health needs. In the 1970s, scientists first began to establish distinctions between somatic and germline genome modifications; somatic edits only affect a single individual while germline edits can be passed down over generations. By the mid-1980s, bioethicists began to argue that the morally relevant line was between disease and enhancement rather than somatic and germline. Discussions of heritable enhancements in particular raise fears of a possible return to eugenics.

John Fletcher, former head of bioethics at the National Institutes of Health (NIH), once wrote, “The most relevant moral distinction is between uses that may relieve real suffering and those that alter characteristics that have little or nothing to do with disease.” Many scientists today share the sentiment that treatment and prevention of “disease” constitute acceptable uses of CRISPR technologies while “enhancement” applications should be discouraged, but the boundary between the two is riddled with semantic discord. Moreover, the line delineating disability and disease is often blurred, and many perceived shortcomings may in fact represent normal variation on the phenotypic spectrum.

The discussion of whether we can or should modify human characteristics may be a moot point since our knowledge of which genes affect complex traits such as height, intelligence, and eye color is still limited. Additionally, most traits are influenced not only by genetics but also environmental factors, and monozygotic twin studies demonstrate that genes alone cannot predict whether physical traits will be expressed. Furthermore, genes that encode for physical traits may also impart increased vulnerability to certain diseases. For example, variations in the MC1R gene responsible for red hair color may also increase the risk of developing skin cancer. As indicated earlier, Dr. He’s efforts to confer resistance to HIV may have also resulted in increased susceptibility to infection by West Nile virus or influenza. As always, trade-offs exist, and the idea of the “perfect specimen” is a fallacy. Any efforts to gain genetic advantages will always be subject to the limitations of biology.

How should society move forward with gene editing technology?

CRISPR technology holds invaluable potential as a research tool and possible treatment for diseases caused by single-point genetic mutations. As previously described, some genetic diseases can be treated by stem cell gene editing without the need for germline modification, thereby minimizing the risk for potential mistakes that could be passed on to subsequent generations. On the other hand, trying to correct an error after a certain point during development is sometimes problematic, as the error has already been incorporated into billions of cells. Jennifer Doudna offers the following visual: “Imagine trying to correct an error in a news article after the newspapers have been printed and delivered, as opposed to when the article is still just a text file on the editor’s computer.” Germline editing may therefore provide a more expedient option for the prevention of some genetic diseases such as sickle cell disease or cystic fibrosis.

One of the most compelling arguments against CRISPR gene editing, namely the potential for misuse, can also be considered the most compelling argument for CRISPR gene editing. Banning progress on gene editing technology may create a black market, but the continuation of research on gene editing will allow the scientific community to control its use and ensure patient safety. Research into CRISPR is continually finding ways to make the technology safer and more effective; a paper published in September 2019 reported on the potential for a novel CRISPR system to affect gene expression in human cells. The process is reversible in theory and doesn’t involve the cutting of DNA, thereby reducing the risk of human harm and leveraging the power of epigenetics.

Moreover, while gene expression and the tumor microenvironment are viable targets for cancer treatment, gene editing can be considered a last resort therapy for certain cases in which other interventions have failed. Common chronic diseases, such as Alzheimer’s, type 2 diabetes, and cardiovascular disease, likely require a more nuanced approach, as gene expression, governed by factors such as diet and lifestyle, plays a significant role in disease pathogenesis. The use of gene editing to mold favorable traits, such as eye or hair color, likely exposes individuals to unnecessary risks and does not constitute medical necessity. Nevertheless, many consider mainstream germline gene editing an inevitability. Joseph Fletcher, one of the founders of bioethics, wrote in 1971, “Man is a maker and a selector and a designer, and the more rationally contrived and deliberate anything is, the more human it is.” The establishment of gene editing guidelines should include input from scientists, policy makers, and the public and incorporate the most current knowledge available in order to prevent misuse and realize potential. As the custodians of such powerful technology, we must take care to use it in an ethical and responsible manner. Whether our efforts will alleviate human suffering or ensure the survival of our species, only time will tell.

[ACC] Should We Colonize Space To Mitigate X-Risk?

[This is an entry to the 2019 Adversarial Collaboration Contest by Nick D and Rob S.]


Nick Bostrom defines existential risks (or X-risks) as “[risks] where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Essentially this boils down to events where a bad outcome lies somewhere in the range of ‘destruction of civilization’ to ‘extermination of life on Earth’. Given that this has not already happened to us, we are left in the position of making predictions with very little directly applicable historical data, and as such it is a struggle to generate and defend precise figures for probabilities and magnitudes of different outcomes in these scenarios. Bostrom’s introduction to existential risk​ provides more insight into this problem than there is space for here.

There are two problems that arise with any discussion of X-risk mitigation. Is this worth doing? And how do you generate the political will necessary to handle the issue? Due to scope constraints, this collaboration will not engage with either question, but will simply assume that the reader sees value in the continuation of the human species and civilization. The collaborators see X-risk mitigation as a “Molochian” problem, as we blindly stumble into these risks in the process of maturing our civilization, or perhaps a twist on the tragedy of the commons. Everyone agrees that we should try to avoid extinction, but nobody wants to pay an outsized cost to prevent it. Coordination problems have been solved throughout history, and the collaborators assume that as the public becomes more educated on the subject, more pressure will be put on world governments to solve the issue.

Exactly which scenarios should be described as X-risks is impossible to pin down, but on the chart above, the closer you get to the top right, the more significant the concern. Considering there is no reliable data on the probability of a civilization-collapsing pandemic or many of the other scenarios, the true risk of any scenario is impossible to determine. So any of the above scenarios should be considered dangerous, but for some of them, we have already enacted preparations and mitigation strategies. World governments are already preparing for X-risks such as nuclear war or pandemics by leveraging conventional mitigation strategies like nuclear disarmament and WHO funding. When applicable, these strategies should be pursued in parallel with the strategies discussed in this paper. However, for something like a gamma ray burst or grey goo scenario, there is very little that can be done to prevent civilizational collapse. In these cases, the only effective remedy is the development of closed systems. Lifeboats. Places for the last vestiges of humanity to hide and survive and wait for the catastrophe to burn itself out. There is no guarantee that any particular lifeboat would survive. But a dozen colonies scattered across every continent or every world would allow humanity to rise from the ashes of civilization.

Both authors of this adversarial collaboration agree that the human species is worth preserving, and that closed systems represent the best compromise between cost, feasibility, and effectiveness. We disagree, however, on whether the lifeboats should be terrestrial or off world. We’re going to go into more detail on the benefits and challenges of each, but in brief the argument boils down to whether we should aim more conservatively by developing the systems terrestrially, or ‘shoot for the stars’ and build an offworld base and reap the secondary benefits.


For the X-risks listed above, there are measures that could be taken to reduce the risk of them occurring, or to mitigate against the negative outcomes. The most concrete steps that have been taken so far that mitigate against X-risks would be the creation of organisations like the UN, intended to disincentivize warmongering behaviour and reward cooperation. Similarly the World Health Organisation and acts like the Kyoto Protocol serve to reduce the chances of catastrophic disease outbreak and climate change respectively. MIRI works to reduce the risk of rogue AI coming into being, while space missions like the Sentinel telescope from the B612 Foundation seek to spot incoming asteroids from space.

While mitigation attempts are to be lauded, and expanded upon, our planet, global ecosystem, and biosphere are still the single point of failure for our human civilization. Creating separate reserves of human civilization, in the form of offworld colonies or closed systems on Earth, would be the most effective approach to mitigating against the worst outcomes of X-risk.

The scenario for these backups would go something like this: despite the best efforts to reduce the chance of any given catastrophe, it occurs, and efforts made to protect/preserve civilization at large fail. Thankfully, our closed system or space colony has been specifically hardened to survive against the worst we can imagine, and a few thousand humans survive in their little self-sufficient bubble with the hope of retaining existing knowledge and technology until the point where they have grown enough to resume the advancement of human civilization, and the species/civilization loss event has been averted.

Some partial analogues come to mind when thinking of closed systems and colonies; the colonisation of the New World, Antarctic exploration and scientific bases, the Biosphere 2 experiment, the International Space Station, and nuclear submarines. These do not all exactly match the criteria of a closed system lifeboat, but lessons can be learned.

One of the challenges of X-risk mitigation is developing useful cost/benefit analyses for various schemes that might protect against catastrophic events. Given the uncertainty inherent in the outcomes and probabilities of these events, it can be very difficult to pin down the ‘benefit’ side of the equation; if you invest $5B in an asteroid mitigation scheme, are you rescuing humanity in 1% of counterfactuals or are you just softening the blow in 0.001% of them? If those fronting the costs can’t be convinced that they’re purchasing real value in terms of the future then it’s going to be awfully hard to convince them to spend that money. Additionally, the ‘cost’ side of the equation is not necessarily simple either, as many of the available solutions are unprecedented in scale or scope (and take the form of large infrastructure projects famous for cost-overruns). The crux of our disagreement ended up resting on the question of cost/benefit for terrestrial and offworld lifeboats, and the possibility of raising the funds and successfully establishing these lifeboats.
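To illustrate how sensitive these decisions are to the unknowable probabilities, here is a minimal expected-value sketch in Python. The $10T valuation placed on the continued existence of civilization is a purely hypothetical stand-in, not a figure from the collaboration:

```python
def mitigation_value(cost, p_save, value_of_future):
    """Naive expected net value of an X-risk mitigation scheme: the
    probability it counterfactually saves the future, times what the
    future is worth, minus what the scheme costs."""
    return p_save * value_of_future - cost

# The $5B asteroid-mitigation example from the text, under a
# hypothetical $10T valuation of civilization's continued existence:
optimistic = mitigation_value(5e9, 0.01, 10e12)     # saves 1% of counterfactuals
pessimistic = mitigation_value(5e9, 1e-5, 10e12)    # saves 0.001% of them
# Under the optimistic read the scheme pays for itself many times over;
# under the pessimistic one it destroys value. The entire decision hinges
# on a probability nobody can pin down.
```

The thousandfold spread between the two probability estimates flips the sign of the answer, which is exactly why the 'benefit' side of the equation is so hard to sell to those fronting the costs.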


The two types of closed systems under consideration are offworld colonies and planetary closed systems. An offworld colony would likely be based on some local celestial body, perhaps Mars or one of Jupiter’s moons. For an offworld colony, X-risk mitigation wouldn’t be the only point in its favor. A colony would also be able to provide secondary and tertiary benefits by acting as a research base and exploration hub, and possibly taking advantage of other opportunities offered by off-planet environments.

In terms of X-risk mitigation, these colonies would work much the same as the planetary lifeboats, where isolation from the main population provides protection from most disasters. The advantage would lie in the extreme isolation offered by leaving the Earth. While a planetary lifeboat might allow a small population to survive a pandemic, a nuclear/volcanic winter, or catastrophic climate change, other threats such as an asteroid strike or nuclear strikes themselves would retain the ability to wipe out human civilization in the worst case.

Offworld colonies would provide near complete protection from asteroid strikes and threats local to the Earth such as pandemics, climate catastrophe, or geological events, as well as being out of range of existing nuclear weaponry. Climate change wouldn’t realistically be an issue on Mars, the Moon, or anywhere else in space; pandemics would be unable to spread from Earth; and the colonies would probably be low priority targets come the breakout of nuclear war. Eradicating human civilization entirely would require enough asteroid strikes to hit every colony, astronomically reducing the odds.

Historically, the only successful drivers for human space presence have been political, the Space Race being the obvious example. I would attribute this to a combination of two factors: human presence in space doesn’t increase the value of the scientific research possible enough to offset the costs of supporting people there, and no economically attractive proposals exist for human space presence. As such, the chances of an off-planet colony being founded as a research base or economic enterprise are low in the near future. This leaves them in a similar position to planetary lifeboats, which also fail to provide an economic incentive or research prospects beyond studying the colony itself. To me this suggests that the point of argument between the two possibilities lies in the trade-off between the costs of establishing a colony on or off planet, and the risk mitigation each would respectively provide.

The value of human space presence for research purposes is only likely to decrease as automation and robotics improve, while for economic purposes, as access to space becomes cheaper, it may be possible to come up with some profitable activity for people off-planet. The most likely options for this would involve some kind of tourism, or if the colony was orbital, zero-g manufacturing of advanced materials, while an unexpectedly attractive proposal would be to offer retirement homes off planet for the ultra wealthy (to reduce the strain of gravity on their bodies in an already carefully controlled environment). It seems unlikely that any of these activities would be sufficiently profitable to justify an entire colony, but they could at least serve to offset some of the costs.

Perhaps the closest historical analogue to these systems is the colonisation of the New World: the length of the trip was comparable (two months for the Mayflower, at least six to reach Mars), and isolation from home was further compounded by the expense and lead time of mounting additional missions. Explorers traveling to the New World disappeared without warning multiple times, presumably due to the difficulty of sending for external help when unexpected problems were encountered. Difficulties associated with these kinds of unknown unknowns were encountered during the Biosphere projects as well. It transpired that trees grown in enclosed spaces won’t develop enough structural integrity to hold their own weight, as it is the stresses due to wind that cause them to develop this strength; this was apparently not even on the radar before the project began. Several other unforeseen issues also had to be solved, but the running theme was that in the event of an emergency, supplies and assistance could come from outside to solve the problem. A space-based colony would have to solve problems of this kind with only what was immediately to hand. With modern technology, assistance in the form of information would be available (see Gene Kranz and Ground Control’s rescue of Apollo 13), but lead times on space missions mean that even emergency flights to the ISS, for which travel time could be as little as ten minutes, aren’t really feasible. As such, off-planet lifeboats would be expected to suffer more from unexpected problems than terrestrial lifeboats, and be more likely to fail before there was even any need for them.

The other big disadvantage of a space colony is the massively increased cost of construction: Elon Musk's going estimate for a 'self-sustaining civilization' on Mars is $100B – $10T, assuming SpaceX's plans for reducing the cost of transport to Mars work out as hoped. To offer an apples-to-apples comparison with the terrestrial lifeboat considered later in this collaboration: if Musk's estimate for a self-sustaining city of one million is scaled down to the 4000 families considered below (a population of 16,000), the cost estimate comes down to $1.6B – $160B. Bearing in mind that this is just for transport of the requisite mass to Mars, we would expect development and construction costs to be higher still. With sufficient political will, these kinds of costs can be met; the Apollo program cost an estimated $150B in today's money (why the cost of space travel for private and government-run enterprises has changed so much across sixty years is left as an exercise for the reader). Realistically, though, it seems unlikely that any political crisis will arise whose solution appears to be a second space race of similar magnitude. This leaves the colonization project in the difficult position of trying to discern the best way to fund itself. Can enough international coordination be achieved to fund a colonization effort in a manner similar to the LHC or the ISS (but an order of magnitude larger)? Will the ongoing but very quiet space race between China, what's left of Western space agencies' human spaceflight efforts, and US private enterprise escalate into a colony race? Or will Musk's current hope of 'build it and they will come' result in access to Mars spurring massive private investment into Martian infrastructure projects?
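The scaling above is simple linear arithmetic, which can be checked with a quick back-of-the-envelope sketch (assuming, as the figures in this collaboration imply, four people per family; the dollar figures are the ones quoted from Musk's estimate):

```python
# Scale Musk's $100B - $10T estimate for a one-million-person
# self-sustaining Mars city down to a 4000-family lifeboat.

full_population = 1_000_000            # Musk's self-sustaining city
cost_low, cost_high = 100e9, 10e12     # $100B - $10T estimate range

lifeboat_population = 4000 * 4         # 4000 families of four = 16,000 people
scale = lifeboat_population / full_population  # 0.016

scaled_low = cost_low * scale          # $1.6B
scaled_high = cost_high * scale        # $160B
print(f"${scaled_low / 1e9:.1f}B - ${scaled_high / 1e9:.0f}B")
```

Note that this assumes cost scales linearly with population, which is generous to the smaller settlement: fixed costs (launch infrastructure, life-support R&D) do not shrink with colony size.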


Planetary closed systems would be exclusively focused on allowing us to survive a catastrophic scenario (read: “zombie apocalypse”). Isolated using geography and technology, Earth-based closed systems would still have many similarities to an offworld colony. Each lifeboat would need to make its own food, water, energy, and air. People would be able to leave during emergencies like a fire, an O2 failure, or a heart attack, but the community would generally be closed off from the outside world. Once the technology has been developed, there is no reason other countries couldn’t replicate the project; in fact, it should be encouraged. Multiple communities located in different regions of the world would have three big benefits: diversity, redundancy, and sovereignty. Allowing individual countries to make their own decisions permits different designs with no common points of failure, and if one of the sites does fail, other communities will still survive. Site locations should be chosen based on:

● Political stability of the host nation
● System implementation plan
● Degree of exposure to natural disasters
● Geographic location
● Cultural diversity

There is no reason a major nation couldn’t develop a lifeboat on its own, but considering the benefits of diversity, smaller nations should be encouraged to develop their own projects through UN funding and support. A UN committee made up of culturally diverse nations could be charged with examining grant proposals using the above criteria. In practice, this would mean a country going before the committee to apply for a grant to help build its lifeboat.

Let’s say the US has selected Oracle, Arizona as a possible site for an above-ground closed system. The proposal points out that the cool, dry air minimizes decomposition, that the site is located far from major cities or nuclear targets, and that it would be protected and partially funded by the United States. The committee reviews the request, and its only concern is the periodic earthquakes in the region. To improve the quality of its bid, the United States adds a guarantee that the town’s demographics will be reflected in the system by committing to a 40% Latino population. The committee considers the cultural benefits of the site and approves the funding.

Oracle, Arizona wasn’t a random example. In fact, it’s already the site of the world’s largest Closed Ecological System [CES]: it was used as the site of Biosphere 2. As described by its one-time acting CEO, Steve Bannon:

Biosphere 2 was designed as an environmental lab that replicated […] all the different ecosystems of the earth… It has been referred to in the past as a planet in a bottle… It does not directly replicate earth [but] it’s the closest thing we’ve ever come to having all the major biomes, all the major ecosystems, plant species, animals, etc. Really trying to make an analogue for the planet Earth.

I feel like I need to take a moment to point out that that was not a typo, and the quote above is provided by ​that​ Steve Bannon. I don’t know what else to say about that other than to acknowledge how weird it is (very).

As our friend Steve “Darth Vader” Bannon points out, what made Biosphere 2 unique is that it was a Closed Ecological System in which 8 scientists were sealed into an area of around 3 acres for a period of 2 years (Sept. 26, 1991 – Sept. 27, 1993). There are many significant differences between the Biosphere 2 project and a lifeboat for humanity; Biosphere 2 contained a rainforest, for example. But the project remains the longest that a group of humans has ever been cut off from Earth (“Biosphere 1”). Our best view into the issues future citizens of Mars may face is through the glass wall of a giant greenhouse in Arizona.

One of the major benefits of terrestrial lifeboats as opposed to planetary colonies is that if (when) something goes wrong, nobody dies. There is no speed-of-light delay for problem solving, outside staff are available to provide emergency support, and in the event of a fire or gas leak, everyone can be evacuated. In Biosphere 2, something did go wrong. Over the course of 16 months the oxygen in the Biosphere dropped from 20.9% to 14.5%. At the lowest levels, scientists reported trouble climbing stairs and an inability to perform basic arithmetic. Outside support staff had liquid oxygen transported to the Biosphere and pumped in.

A 1993 New York Times article, “Too Rich a Soil: Scientists Find Flaw That Undid the Biosphere”, reports:

A mysterious decline in oxygen during the two-year trial run of the project endangered the lives of crew members and forced its leaders to inject huge amounts of oxygen […] The cause of the life-threatening deficit, scientists now say, was a glut of organic material like peat and compost in the structure’s soils. The organic matter set off an explosive growth of oxygen-eating bacteria, which in turn produced a rush of carbon dioxide in the course of bacterial respiration.

Considering that a Martian city would need to rely on the same closed-system technology as Biosphere 2, it seems that a necessary first step for a permanent community on Mars would be to demonstrate the ability to develop a reliable, sustainable, and safe closed system. I reached out to William F. Dempster, the chief engineer for Biosphere 2. He has been a huge help and provided tons of papers that he authored during his time on the project. He was kind enough to point out some of the challenges of building closed systems intended for long-term human habitation:

What you are contemplating is a human life support pod that can endure on its own for generations, if not indefinitely, in a hostile environment devoid of myriads of critical resources that we are so accustomed to that we just take them for granted. A sealed structure like Biosphere 2 […] is absolutely essential, but, if one also has to independently provide the energy and all the external conditions necessary, the whole problem is orders of magnitude more challenging.

The degree to which an off-planet lifeboat would lack resources compared to a terrestrial one depends on the kind of disaster that occurred. In some cases, such as a pandemic, it could eventually be feasible to venture out and recover machines, possibly some foods, and air and water (all with appropriate sterilization). In the case of an asteroid strike or nuclear war at a civilization-destroying level, however, the lifeboat would have to withstand much the same conditions as an off-planet colony, as these are the kinds of disasters in which the Earth could conceivably become nearly as inhospitable as the rest of the solar system. To provide a similar level of x-risk protection to an off-planet colony in these scenarios, the terrestrial lifeboat would need to be exactly as capable as Dempster worries.

While Biosphere 2 is in many ways a good analogue for the challenges a terrestrial closed system would face, there are many differences as well. First, Biosphere 2 was intended to maintain a living, breathing ecosystem, while a terrestrial lifeboat would be able to leverage modern technology to save on costs, and cost is really the terrestrial lifeboat’s biggest selling point. A decent mental model is a large, relatively squat building with an enclosed central courtyard, something like the world’s largest office building. That building cost $1 billion in today’s money and bought 6.5 million sq ft of floor space, enough for 4000 families to each have a comfortable 2-bedroom home. A lifeboat would have additional expenses for food and energy generation, as well as medical and entertainment facilities, but the facility itself could have a construction cost of around $250,000 per family. For comparison, the median US home price is $223,800.
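The per-family figures above follow directly from dividing the building's cost and floor space by the number of families, a calculation sketched below (assuming, generously, that construction cost scales with floor space and that the quoted $1B figure is accurate):

```python
# Per-family share of an office-building-scale terrestrial lifeboat,
# using the cost and floor-space figures quoted above.

building_cost = 1e9     # ~$1B construction cost in today's money
floor_space = 6.5e6     # total floor space in square feet
families = 4000

cost_per_family = building_cost / families   # $250,000 per family
space_per_family = floor_space / families    # ~1,625 sq ft per family
```

At roughly 1,625 sq ft each, every family's share is indeed in comfortable two-bedroom territory, and the $250,000 construction share lands close to the median US home price quoted above.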

There is one additional benefit that can’t be overlooked. Due to the closed nature of the community, the tech-centric lifestyle, and the subsidized cost of living, there would be a natural draw for software research, development, and technology companies. Creating a government-sponsored technology hub would give young engineers a small city in which to congregate, sparking new innovation. This wouldn’t, and shouldn’t, be a permanent relocation: in good times, with low risks, new people could be continuously brought in and cycled out, with lockdowns only occurring in times of trouble. The x-risk benefits are largely dependent on the facilities themselves, but the facilities will naturally offer nuclear fallout and pandemic protection, as well as a certain amount of protection from inclement weather or climate. Depending on the facility, there could also be radiation protection, whether natural or designed. Overall, a planetary system of lifeboats would be able to survive anything an offworld colony could survive, outside of a rogue AI or grey goo scenario, while having a very low likelihood of a system failure resulting in massive loss of life the way a Martian colony could.


To conclude, we decided that terrestrial and off-planet lifeboats offer very similar amounts of protection from x-risks, with off-planet solutions adding a small amount of additional protection in certain scenarios while being markedly more expensive than a terrestrial equivalent, with additional risks and unknowns in the construction process.

The initial advocate for off-planet colonies now concedes that the additional difficulties associated with constructing a space colony mean terrestrial lifeboats should be successfully built before attempts are made to construct a colony on another body. The only reason to still countenance their construction at all is an issue which revealed itself to the advocate for terrestrial biospheres towards the end of the collaboration. A terrestrial lifeboat could easily be discontinued and abandoned if funding or political will failed, whereas a space colony would be very difficult to abandon due to the astronomical (pun intended) expense of transporting every colonist back. A return trip for even a relatively modest number of colonists would require billions of dollars allocated over several years by, most importantly, multiple sessions of a congress or parliament. This creates a paradigm in which a terrestrial lifeboat, while less expensive and in many ways more practical, could never be a long-term guarantor of human survival due to its ease of decommissioning (as was seen with Biosphere 2). To be clear, the advocate for terrestrial lifeboats considers this single point sufficient to decide the debate in its entirety, and concedes without reservation.