02 December 2010

Emergence and the Mind

This is my research paper for my Intro to Interdisciplinary Studies program. Revisiting it today, it’s… well, it’s a reflection of a specific time in my life where I very much wanted to believe certain, comfortable things.

The first half is a sort of historical overview of both the mind-brain dilemma and emergence theory, while the second half deals with the support for a theory of strong emergence and its applicability to the study of the mind. The first half, accordingly, isn’t making much of an argument; rather, it brings readers up to speed on the state of affairs that necessitates the second, argument-presenting half.

For a cool demonstration of weak emergence, check out Conway’s Game of Life.
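
If you would rather poke at weak emergence in code than in a browser, below is a minimal sketch of the Game of Life in Python. (The grid size, the glider seed, and the ASCII printing are my own arbitrary choices; only the two-line update rule is Conway’s.)

```python
import numpy as np

def step(grid):
    """Advance Conway's Game of Life by one generation.

    grid is a 2D array of 0s (dead) and 1s (alive); the edges wrap around.
    """
    # Count the eight neighbors of every cell by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell lives in the next generation if it has exactly three live neighbors,
    # or if it is alive now and has exactly two.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# Seed a 20x20 grid with a single "glider" and watch it crawl across the board.
grid = np.zeros((20, 20), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for generation in range(8):
    print(f"generation {generation}")
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))
    grid = step(grid)
```

Nothing in the update rule mentions gliders; the glider is a pattern visible only at the level of the whole grid. Given the rule, though, its behavior is in principle perfectly predictable, which is exactly what makes this weak rather than strong emergence.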


The radical difference between the mind and the physical material of the universe has intrigued many philosophers throughout history. Mental “stuff”—thoughts, beliefs, morals, and so on—appears so wildly different from rocks and plants and bodies that many have gone so far as to declare that the material cannot produce the mental. René Descartes, one of the most famous of these so-called dualists, believed the mind to be an immaterial entity outside the physical world, in contrast to the body, which he thought of as a sort of mechanical entity obeying the laws of physics. Few intellectuals today are willing to make such a leap, however. With little exception, we now live in a monist world (specifically, a materialist monism: the belief that the mind is based in matter); we believe that physical things give rise to mental things, even in instances where it is presently unclear how they do so.

To say, though, that the mind is a product of physical things is not necessarily to eliminate the mental phenomena. An explanation of mental characteristics based on the physical interactions of neurons will not necessarily imply that physical interactions are all there is to the mind, or that the subjective realm of experienced consciousness does not exist. As John Searle asserts, “the fact that a feature is mental does not imply that it is not physical; the fact that a feature is physical does not imply that it is not mental” (Searle 1994, 14). Searle believes that physical structures may at the same time give rise to mental structures—that a physical neuronal circuit may correspond to a certain mental state—and, correspondingly, that mental structures may be founded on physical ones. This is not dualism, but rather a monism that recognizes the usefulness of terms like “mental” in contrast to “physical.” Searle likens this to the idea of a mundane physical object like a desk: though we know that desks are made entirely of atoms, it still makes sense to talk about a desk as a real object, rather than saying “some billions of molecules combined in ways that look like a desk.” In the same way, even if it is the case that the belief “calculus is difficult” is identical to a certain physical brain state, it hardly makes sense for most purposes to talk about the belief in terms of neurons firing in particular ways.

Monism, though, presents certain challenges to our intuitive picture of the world. If a mind exists as a product of the physical goings-on of a brain, can it actually have any effect on that brain? Certainly our experience tells us that our thoughts, beliefs, and decisions affect our actions. Scientific confirmation of this intuitively obvious statement, though, has been notoriously difficult to come by—in fact, many experiments seem to deny any role for consciousness in acting, leading some scientists to declare that the mind is a silent observer of the input directed to it. If we are to believe that mental states are causally efficacious in the physical world, we need some framework in which to understand how this might occur.

Some philosophers of mind—David Chalmers and John Searle among others—suggest that the reason the mind appears to be greater than the sum of its material parts is… because it is. They claim that an explanation of mental properties in terms of emergence provides a way of understanding the mind as being greater than the interactions of its parts (Searle 1994; Chalmers 2006). This is their answer to the challenge of monism to our intuitive perceptions of ourselves, and it is a claim worth considering, as we will see.

Emergence itself is the phenomenon by which complex systems arise from many simple parts interacting in simple ways; emergence theory, likewise, deals with the characterization of this phenomenon. Aristotle described emergence in his Metaphysics: “. . . the totality is not, as it were, a mere heap, but the whole is something besides the parts” (Aristotle 1994). In everyday language, we describe this as an instance where the whole is—or at least appears to be—greater than the sum of its parts.

Emergence may be understood as the self-organization of simple parts into complex wholes, or wholes with properties not discernable (whether immediately or in principle) from the properties of their parts. Note that we use “whole” here recursively to denote an object or system at whatever level of complexity is required to see the emergent property in question; likewise, a “part” is any level of complexity below the whole. In most cases, a “whole” will correspond to the level on which we typically think of an object—the level below which one can no longer be said to have the object.

Examples of this complexity in everyday experience abound. For instance, consider the behavior of water compared to a 2-to-1 ratio of hydrogen and oxygen gas. Even with a very detailed understanding of the physical chemistry involved with hydrogen and oxygen individually, the fact that water is wet is quite unexpected. Furthermore, no individual molecule of H2O can be characterized this way; only a large group of molecules is “wet.” Or, to borrow an example from Steven Johnson, we may think of an economy as an emergent “whole,” while individual buyers and sellers are the “parts” (Johnson 2001). Taking away one or two or three buyers doesn’t mean the economy as a whole ceases to exist; nevertheless, the economy is composed of such individual “parts.”

Alan Turing brought the concept of emergence into the spotlight of modern science with his 1952 paper on morphogenesis. Morphogenesis, literally the “generation of shape,” concerns how structure first appears in a developing organism (e.g., a zygote). Turing’s findings can be summarized by saying that simple chemical substances, when diffused throughout a tissue and able to react with that tissue in rather simple ways, are sufficient to account for the beginnings of order in an organism (Turing 1952). This is a fitting first example of emergence—complex systems (in this case, organisms) are created by lots of simply interacting parts, where the system as a whole is quite unexpected for someone able to see only the parts.
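
For the curious, here is a minimal sketch of a reaction-diffusion simulation in the spirit of Turing’s mechanism. To keep it short I am using the later Gray-Scott model rather than Turing’s original equations, and the parameter values are just conventional “spots” settings, so treat it as an illustration rather than a reproduction of the 1952 paper: two chemicals diffusing and reacting by simple local rules organize themselves into a large-scale pattern.

```python
import numpy as np

def laplacian(Z):
    """Discrete Laplacian with wrap-around edges: how much each cell's value
    differs from the sum of its four neighbors."""
    return (np.roll(Z, 1, axis=0) + np.roll(Z, -1, axis=0) +
            np.roll(Z, 1, axis=1) + np.roll(Z, -1, axis=1) - 4 * Z)

# Gray-Scott reaction-diffusion: U is fed in at rate F, V consumes U and is
# removed at rate F + k. Du and Dv are diffusion rates. These particular
# values are conventional "spots" settings, not anything from Turing's paper.
n = 100
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065

U = np.ones((n, n))
V = np.zeros((n, n))
# Start uniform except for a small square of V seeded in the middle.
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

for _ in range(10000):
    reaction = U * V * V
    U += Du * laplacian(U) - reaction + F * (1.0 - U)
    V += Dv * laplacian(V) + reaction - (F + k) * V

# Crude ASCII picture: '#' where V has concentrated, '.' elsewhere.
print("\n".join("".join("#" if v > 0.2 else "." for v in row) for row in V[::2, ::2]))
```

The rules govern nothing but each cell and its immediate neighbors, yet what comes out is a grid-wide arrangement of spots that no single cell “knows” about.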

Since Turing’s work, emergence theory has worked its way into a spectacular variety of research, joining together fields as disparate as economics, sociology, biology, physics, philosophy, and even theology (Johnson 2001; Holland 1998). Emergence theory figures prominently in all these fields, yet it is not restricted to any particular disciplinary context (Bar-Yam 2004).

Instances of emergence may be classified as either “weak” or “strong.” Systems that exhibit weak emergence have properties which are complex and not immediately discernable from the properties of their individual components. Weakly emergent properties, nonetheless, are grounded wholly in the interactions of the system’s parts. To return to a previous example, water as a product of hydrogen and oxygen is weakly emergent: the whole has properties quite unlike the parts, but given enough knowledge about the properties and interactions of those parts, one could, in principle, predict the properties of the whole. In this way, weak emergence is a framework useful for understanding phenomena that we can’t meaningfully break down any other way. Such a framing, though, would be wholly redundant to an omniscient being who could extrapolate from the interactions of parts to the behavior of the (profoundly more complicated) whole.

On the other hand, systems that exhibit strong emergence have properties “systematically determined by low-level facts without being deducible from those facts” (Chalmers 2006, 4). That is, strong emergent systems are made up of quite ordinary “stuff”—atoms and molecules and the like. The way in which this “stuff” interacts, though, produces something new, something not discernable from any subset of the parts. In this case, even an omniscient god would understand the phenomenon in question only by considering the whole. An interesting (and essential) feature of such phenomena is something called “downward causation” (Kim 2006).

Downward causation exists in contrast to traditional “upward” causation. Upward causation occurs when simple particles act to affect more complex configurations, such as when individual atoms in a piece of iron come together to cause a whole magnet to move in response to an external magnetic field. This kind of causation acts from parts to wholes. Contrastingly, downward causation occurs when complex configurations of particles affect individual ones. It acts from wholes to parts. (Note that we shall consider examples of this sort of causation later.)

This is a curious, and quite counterintuitive, idea; it seems strange to say a thing could have properties not determined by its parts, and indeed, that it could actually cause changes to its parts. According to Chalmers, we know of “exactly one clear case of a strongly emergent phenomenon, and that is the phenomenon of consciousness” (Chalmers 2006, 246). Here, Chalmers refers to the experiential nature of consciousness—for instance, the “redness” of red—as a clear instance of strong emergence. Unfortunately, he provides no suggestions regarding how this might work; indeed, his position seems to boil down to the assertion that, since we presently don’t understand how such experienced consciousness arises from physical bodies, it must be “something more” via emergence. While this particular claim is not duly supported, in this paper we will eventually examine the applicability of emergence to other mental phenomena, things like beliefs and decisions; to use philosopher Ned Block’s terminology, we will examine the applicability of emergence to features of access consciousness, in contrast to features of phenomenal (or experienced) consciousness.

At this point, having introduced both emergence and some problems presented by mind-matter interactions, we can see why strong emergence may prove important in the philosophy of mind: it is billed as a naturalistic, non-dualistic, and emphatically non-mystical explanation for phenomena that seem to defy explanation in purely physical terms. If the mind is a strongly emergent phenomenon, then it becomes clear why there is something more to the universe than physical, materialist (and thus inherently reductive) descriptions can capture, things like beliefs and decisions and thoughts. As I will argue, understanding the mind as a strongly emergent phenomenon provides a mechanism by which it may causally act on the brain. This is an explanation of why the mind matters when it comes to matter.

So what are we to make of the possibility of strong emergence? To begin with, the proposition represents a significant break with scientific tradition. We have considered already the case of upward versus downward causation; the idea of downward causation is counterintuitive in the traditional physicalist understanding of the universe. Charles Darwin gave voice to the view of many scientists past and present when he reminded us “natura non facit saltus”—nature does not make leaps (Darwin 1871). Indeed, such maxims have served us well with regard to the physical sciences, where time and again we have found that complex, seemingly irreducible phenomena are actually made up of the interactions of a system’s parts. Strong emergence, though, is by its own terms a leap; understanding it requires explanation in terms of relationships between a system’s low- and high-level features that cannot be expressed in terms of, say, all the atoms in the system. Thus, there is understandably a great weight of scientific opinion opposing the idea of strong emergence on principle.

Clearly, there is a great burden of proof that strong emergence must bear if it is to be taken seriously. Yaneer Bar-Yam faces these challenges head on; he is explicitly interested in a mathematical account of strong emergence, and he provides good reason to believe he has found one.

To preface his discussion, Bar-Yam tells us that the fields of constraint analysis and reconstructability analysis concern themselves with the question of when properties of a whole system can be represented in terms of its parts. As he points out, these studies “do not guarantee such a decomposition, allowing for the case that a system cannot be described in terms of parts” (Bar-Yam 2004). This is good news for the would-be emergentist: mathematical-scientific inquiry is not closed a priori to the notion of a system greater than its parts.

In approaching emergence, Bar-Yam concerns himself especially with multiscale variety—that is, studying a system at different levels of complexity. Such study, he says, “reveals anomalous behavior” for cases where “dependencies exist between many variables, but for which subsets of the variables do not have the analogous dependency” (Bar-Yam 2004, 16). Using multiscale variety, Bar-Yam provides a mathematical description of what it would mean for a system to causally affect its parts. For the sake of brevity, I will not describe this mathematical formulation further here; for our purposes, the interpretation of Bar-Yam’s work is most important.

According to Bar-Yam, any system with global constraints which are independent of the constraints on its components qualifies as strongly emergent. That is, strong emergence occurs any time the properties of the whole determine the behavior of the parts. This, importantly, is not as strange a phenomenon as it may appear, and Bar-Yam presents a number of instances where it occurs. He even presents a trivial example of a strongly emergent system: that of a parity bit ensemble. In computer science, a parity bit is a single bit (either 0 or 1) which records the even-ness or odd-ness (the parity) of the 1s in the bits that follow it. (This is a slight simplification of the reality, but an acceptable one for our purposes.) For example, a group of seven bits with one parity bit may look like this:

10010110

The initial 1, the leftmost bit above, specifies that, in the following seven bits, there is an odd number of 1s. We see that there is indeed an odd number of 1s—specifically, three.

In this case, observing any subset of the system “cannot reveal the existence of the global constraint” (Bar-Yam 2004, 20); only the system as a whole has the given parity property. Indeed, one could set any six of the seven data bits—that is, any six of the bits following the initial parity bit—however one liked, and the global constraint would then fix the value of the seventh. In this way, the state of each bit in the system is constrained by the system as a whole, yet no individual bit, considered on its own, reveals that constraint.

Now, what if the environment were to try to “flip” one of the bits in our parity ensemble? The global constraint will not allow just one bit to go from 1 to 0, or vice versa; it requires that an even number of bits change at once—two bits going from 1 to 0 simultaneously, say, or a bit changing from 1 to 0 matched by another changing from 0 to 1. In this way, the global constraint affects the individual bits quite directly. Once again, though, a consideration of a subset of the system does not reveal this: any subset could be transformed in any way, with the bits outside one’s focus changing accordingly and invisibly.
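
To make this concrete, here is a small sketch of both observations in Python. (This is my own illustration of the parity constraint, not code from Bar-Yam’s paper.)

```python
import random

def satisfies_constraint(data_bits):
    """The global constraint from the example: the seven data bits must
    contain an odd number of 1s (which is what the leading parity bit of 1 asserts)."""
    return sum(data_bits) % 2 == 1

# The seven data bits from the example above: 0010110 (three 1s, so the constraint holds).
bits = [0, 0, 1, 0, 1, 1, 0]
assert satisfies_constraint(bits)

# Observation 1: any six of the data bits may take any values whatsoever;
# the constraint can always be satisfied by fixing the seventh accordingly.
# No six-bit subset, then, can reveal the constraint's existence.
any_six = [random.randint(0, 1) for _ in range(6)]
seventh = 1 if sum(any_six) % 2 == 0 else 0
assert satisfies_constraint(any_six + [seventh])

# Observation 2: a single "flip" from the environment violates the constraint;
# it can only be accommodated by a compensating change elsewhere in the ensemble.
flipped = list(bits)
flipped[0] ^= 1                       # the environment flips one bit...
assert not satisfies_constraint(flipped)
flipped[3] ^= 1                       # ...so a second bit must change as well
assert satisfies_constraint(flipped)
```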

This example of bit parity seems far removed from the rather mystical, enigmatic ideas that strong emergence might initially conjure in our minds. This does not imply, though, that the example is deficient. On the contrary, it meets the definition of strong emergence perfectly: it is a system whose whole is greater than the sum of its parts, and it cannot be fully understood on any level but that of the full ensemble. In this particular case, the “greater” aspect of the whole is that of parity. It just so happens (and luckily so for our sakes) that this example is simple enough to be comprehended fully as a whole.

Of course, Bar-Yam’s research can only be applied to the study of mind if there are indeed global constraints on the mental system as a whole which do not apply to its components (for instance, neuronal interactions). This indeed is the “million dollar question.” Conclusions regarding the applicability of strong emergence to the mind are, quite strangely, left entirely to the reader in much of the available literature. Those documents that do claim the mind to be a strongly emergent phenomenon are, with little exception, not the ones providing a scientific look at the concept; they are written by philosophers who, in many cases, do not give proper grounding for this belief, no doubt contributing to the impression that strong emergence is a sort of deus ex machina flying in to validate our experiences as they appear to us.

Perhaps the most noteworthy exception to this trend is Nancey Murphy’s “Emergence and Mental Causation” (Murphy 2006). In this essay, Murphy, whose area of expertise may be characterized as the philosophy of mind as it relates to theology, arrives independently at a mechanism of emergence that parallels that of Bar-Yam. While the two use quite different terminology, the framework they develop is remarkably similar.

Murphy, unlike Bar-Yam, explicitly considers the question of how this framework for downward causation might act on the brain. Recall from the description of the parity bit system that when the environment attempts to “flip” a bit, it is constrained in how it can do so; regardless of which bits it changes, it can only do so in a way that results in an odd number of 1s. The global constraint here cannot be said to create anything on the component level; instead, it is causally efficacious in selecting between possible states of the components. In a similar way, Murphy tells us that for “mental downward causation” (that is, the mind causing brain events) to exist, the mind need not create anything, per se, at the brain level—indeed, this is inconsistent with everything we know about neuroscience. Instead, mental properties only select between existing physical possibilities (Murphy 2006).

To relate this to the parity bit system, the mind (or, to be more accurate, a certain representational relation in a mind) serves as the global constraint. Likewise, the possible neuronal connections in the brain parallel the numerous possible bit configurations considered apart from the global constraint. Thus, the effect of the global constraint (the mind) allows only certain configurations (of neurons) to actually come about, chosen from the much larger number of possibilities.

To illustrate, imagine a person at whose home dinner is always announced by the ringing of a bell. A neuronal connection in this case will develop between the neural correlates of hearing the bell and those of eating dinner; this is classical conditioning in its most basic form. The neural connection formed represents the relationship between the two experiences; it carries information understood only in broader terms of the system as a whole and its history—in particular, its history with bells and dinner. To understand why this is, consider the multitude of connections linking any given group of neurons to any other; that such a connection exists may carry only minor significance, if it carries any at all. However, when put in context, the connection represents belief and prior experience. This representational relationship is the person’s belief in the bell-dinner connection (Murphy 2006).

If this person’s situation changes—if bells begin ringing without dinner being served, or dinners occur without the fanfare of bells—this belief becomes false. At such time, the person can evaluate this belief via other networks in the brain. This may occur via self-referencing loops à la Douglas Hofstadter or some other mechanism (Hofstadter 2007); for our purposes, it doesn’t matter. Because the subject is aware of this belief, though, when it is falsified, he or she can change the neural connections, “selecting” a different set of neural connections from the already extant possibilities. This could occur by something like the linking of bells to both dinner and not-dinner on a neuronal level, or by doing away entirely with the connection between bells and dinner. In any case, the fact that we can empirically verify or falsify a belief—and remember the outcome of this—is proof of the causal efficacy of the mind-level concept; it is proof of downward causation. Indeed, as Murphy points out, under some hypothetical hypnosis where the subject is consciously unaware of this bell-dinner belief, falsifying it in one instance—producing dinner without ringing a bell—will not trigger the associated neuronal change (Murphy 2006); that is, downward causation will not have a chance to act because the global constraint of an overarching belief will have been removed.
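
The associative half of this story is easy to caricature in code. The sketch below is my own toy, using the textbook Rescorla-Wagner learning rule rather than anything from Murphy (2006), and it models only the low-level bookkeeping of the bell-dinner connection strengthening and then fading; the mind-level “selection” Murphy describes is precisely what such a component-level model leaves out.

```python
def rescorla_wagner(strength, dinner_served, learning_rate=0.2):
    """Update the bell->dinner association after one ring of the bell.

    dinner_served is 1 if dinner followed the bell and 0 if it did not;
    the association moves a fraction of the way toward what was observed.
    """
    return strength + learning_rate * (dinner_served - strength)

strength = 0.0

# Phase 1: the bell reliably announces dinner, and the association grows.
for _ in range(20):
    strength = rescorla_wagner(strength, dinner_served=1)
print(f"after conditioning: {strength:.2f}")   # approaches 1.0

# Phase 2: bells keep ringing but no dinner appears; the association fades.
for _ in range(20):
    strength = rescorla_wagner(strength, dinner_served=0)
print(f"after extinction:   {strength:.2f}")   # falls back toward 0.0
```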

Worth noting is the fact that there is no principled reason why this emergence-fueled downward causation could not occur in animal minds, or indeed, in machine minds. Any system able to instantiate global patterns constraining its parts could (again, in principle) effect downward causation just as it occurs in the mind. This is a topic for further research, certainly, especially inasmuch as it concerns artificial intelligence.

Let’s take a step back and examine what exactly has been shown. We considered briefly the reasons why we might look for an explanation of the mind that goes beyond neurons. We saw that emergence theory offers the tantalizing prospect of showing how systems may be greater than the sum of their parts. We continued by examining what proof emergence offers in exchange for adding to the traditional physicalist world view, and what reasons we have for believing strong emergence to be possible. Here, Yaneer Bar-Yam demonstrates that strong emergence and downward causation are wholly plausible features of a materialist account of the universe. Finally, we considered the applicability of strong emergence to the philosophy of mind; specifically, we looked at an example from Nancey Murphy of downward causation acting from a mental/conceptual/belief level to influence physical neurons.

Clearly lacking from this description is an explanation of how an experienced consciousness arises from the brain; we might explain numerous features of access consciousness, such as beliefs and decisions, but it is not at all clear that strong emergence might explain phenomenal consciousness. These features of access consciousness, while certainly available to phenomenal consciousness, are not constitutive of it; thus, while strong emergence may occur in the mind qua access consciousness, we cannot yet say, pending further research, that it occurs in the mind qua phenomenal consciousness. This shortcoming in getting to the bottom of “the hard problem,” shared with other philosophers attempting to link emergence and mental causation, does not negate the importance of these findings. In showing that strong emergence may quite appropriately be applied to the mind-brain paradigm, and that downward causation acting from mental to physical systems is wholly plausible, this paper hopefully lays the foundations for future research into the relationship between the emergent mind and the physical body.

Bibliography

Aristotle. Metaphysics. Translated by W. D. Ross. Internet Classics Archive, 1994. Accessed October 20, 2010. http://classics.mit.edu/Aristotle/metaphysics.8.viii.html.

Bar-Yam, Yaneer. “A Mathematical Theory of Strong Emergence Using Multiscale Variety.” Cambridge, MA: New England Complex Systems Institute, 2004.

——. Dynamics of Complex Systems. Boulder, CO: Westview Press, 1997.

Blackmore, Susan. Consciousness: An Introduction. Oxford: Oxford University Press, 2004.

Block, Ned. “On a Confusion about a Function of Consciousness.” Behavioral and Brain Sciences 18 (1995): 227-87.

Chalmers, David J. The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press, 1996.

——. “Strong and Weak Emergence.” In The Re-Emergence of Emergence, edited by Paul Davies and Philip Clayton, 244-254. Oxford: Oxford University Press, 2006.

Corning, Peter A. “The Re-Emergence of ‘Emergence’: A Venerable Concept in Search of a Theory.” Complexity 7, no. 6 (2002): 18-30.

Darwin, Charles. The Descent of Man, and Selection in Relation to Sex. London: John Murray, 1871.

Davies, Paul. Introduction to The Re-Emergence of Emergence, edited by Paul Davies and Philip Clayton. Oxford: Oxford University Press, 2006.

Hofstadter, Douglas. I Am a Strange Loop. New York: Basic Books, 2007.

Holland, John H. Emergence: From Chaos to Order. Reading, MA: Addison-Wesley, 1998.

Johnson, Steven. Emergence: The Connected Lives of Ants, Brains, Cities, and Software. New York: Scribner, 2001.

Kim, Jaegwon. “Being Realistic about Emergence.” In The Re-Emergence of Emergence, edited by Paul Davies and Philip Clayton, 189-202. Oxford: Oxford University Press, 2006.

Koch, Christof. Radiolab. By Jad Abumrad and Robert Krulwich. New York Public Radio. WNYC New York, August 14, 2007.

Levine, Joseph. “Materialism and Qualia: The Explanatory Gap.” Pacific Philosophical Quarterly 64 (1983): 354-361.

Murphy, Nancey. “Emergence and Mental Causation.” In The Re-Emergence of Emergence, edited by Paul Davies and Philip Clayton, 227-243. Oxford: Oxford University Press, 2006.

Searle, John R. Mind: A Brief Introduction. Oxford: Oxford University Press, 2004.

——. The Rediscovery of the Mind. Cambridge, MA: MIT Press, 1994.

Strogatz, Steven. Radiolab. By Jad Abumrad and Robert Krulwich. New York Public Radio. WNYC New York, August 14, 2007.

Turing, Alan M. “The Chemical Basis of Morphogenesis.” Philosophical Transactions of the Royal Society of London: Series B, Biological Sciences 237, no. 641 (1952): 37-72.