Sunday, January 18, 2015

A Guide to Skinner's Genealogy of Liberty





What does it mean to be free? Liberty is the most important concept in modern political theory. That’s an overstatement, of course. There are other important concepts — equality? well-being? — and somebody could no doubt make the case for them. Still, liberty is very important, particularly to those who have the temerity to call themselves “liberal”. It would help if they had some more detailed conception of liberty.

The traditional philosophical approach to this is to provide a conceptual analysis of what it means to be at liberty. The philosopher, from their privileged position in a comfortable armchair, thinks deeply about the nature of freedom. They propose a definition — a set of necessary and sufficient conditions for the application of the predicate “liberty” — and then they defend this analysis from a range of counterexamples and counterarguments, some proposed by themselves, some proposed by their philosophical friends and enemies.

This method has a long and venerable history, admirable and frustrating in equal measure. Are there any alternative approaches? In his excellent lecture “The Genealogy of Liberty” (based on his scholarly writings), Quentin Skinner argues that there is at least one. He thinks we can construct a genealogy of all the different conceptions of liberty that have been proposed, rediscovered and defended since the birth of liberalism. The genealogy will highlight the resemblances and tensions between the different concepts, contextualise some of the important historic debates, and provide us with a rich landscape of conceptual possibility.

As I say, I think Skinner’s take on this is excellent, the product of his long years of historical and philosophical scholarship. I encourage everyone to watch his lecture. But, at the same time, I have been disappointed to see that no one (to the best of my knowledge) has provided a detailed summary and illustration of Skinner’s genealogy. That’s what this post is designed to do. Its goal is not to criticise Skinner’s framework (though this could certainly be done); its goal is to share what I believe to be a valuable intellectual tool.


1. The Big Picture
We’ll start with the big picture. The diagram below illustrates all the key components of Skinner’s genealogy.




Don’t worry if this looks confusing. We’ll be going through it step-by-step in a moment. For now, just note three main features of the genealogy. First, note that it starts with a basic two-condition analysis of freedom (throughout we’ll be using the term “freedom” interchangeably with the term “liberty”; this tracks Skinner’s usage of the terms):

Freedom: consists in (a) the power of an individual to act and (b) the ability of the individual to exercise that power in a particular way.

This two-condition analysis provides the root from which the rest of the genealogy grows. And it is from the second condition that the branches grow. This is because the major traditions in the history of liberty have all tended to differ with respect to how they fill out the particulars of that condition.

That brings us to the second point, which is that those traditions are arrayed along the three major branches of the genealogy. Starting in the middle, we have the dominant liberal conception of freedom as non-interference. This has been subject to a number of analyses and elaborations over the years, but remains at the core of liberal theory. Moving to the left of that branch, we have the republican conception of liberty as non-domination. This was prevalent in ancient Rome, and underwent something of a resurgence in the latter half of the twentieth century (partly because of the work of Quentin Skinner). And then, moving to the extreme right (perhaps aptly) we have the more mystical concept of positive liberty. According to this conception, being free consists in the ability to realise one’s true self. This conception is probably less popular than the other two, but it has had some prominent defenders.

One other general point about the genealogy. In tracing out these different conceptions, Skinner focuses on the debate within the Anglophone literature. At times, he calls upon ideas that were initially developed in other languages (e.g. Greek, Latin, German and French). But he focuses on the Anglophone debate in order to sidestep problems that come from assuming that the English word for liberty is the same as the French “liberté” and so on.

Anyway, with that general introduction out of the way, let’s proceed to consider the three main branches of the genealogy.


2. Freedom as Non-Interference: The Dominant Tradition
We’ll start with the dominant branch, the one which conceptualises freedom in terms of non-interference. The guiding intuition here is that you are free whenever you exercise your power to act in accordance with your own will. In other words, when your will to act is not being interfered with. As Skinner notes, this conception of freedom is negative: it holds that freedom is the absence of something else, namely interference. As such, this conception shifts the focus away from the nature of freedom and onto the nature of interference. What counts as a freedom-undermining interference? Broadly speaking, there have been three answers to that question.

The first is associated with Thomas Hobbes and his classic work Leviathan. Hobbes argued that the only freedom-undermining interference was interference by some external agency, acting through the use of force on your body, in such a way as to literally prevent you from choosing an alternative course of action. So, for example, if I put a contract before you, asked you to sign it, and then grabbed your hand and traced out your signature through the use of force, I would be undermining your freedom. I would be using force, on your body, to prevent you from doing anything other than signing the contract. In this example, I am the one forcing you to perform the action, but don’t assume from this that it is only the use of force by another human agent that undermines freedom. According to Hobbes and his followers, the external agency that interferes with your act could be nature itself. In other words, the category of “external agency” is being kept as wide as possible.

Freedom as Non-Interference by Force: You are free if (a) you have the power to act and (b) you exercise that power without being interfered with by an external agency, exerting force on your body, in such a way as to literally prevent you from doing anything else.

This classic Hobbesian position still has its followers, but for many it leaves something important out. While they accept that the use of force on the body undermines freedom, they think that less obvious forms of manipulation can undermine freedom too. Consider the famous example of the highwayman. The highwayman pulls over your stagecoach and offers you a deal: “Your money or your life?” No doubt, you give him your money. In this case, the highwayman does not manipulate your body through the use of force. Rather, he manipulates your will through the use of coercion. The question is whether that undermines freedom. The Hobbesian position is that it does not. Indeed, Hobbes famously said that in such a situation you still had a choice and you still had the power to exercise your will. The mere fact that the alternative you were being offered was unpleasant did nothing to undermine your freedom.

To most contemporary liberals that seems pretty unsatisfactory. John Locke was one of the earliest critics. He argued that coercion could undermine freedom. Now, Locke had a pretty wide definition of what could count as coercion. He suggested that offers, threats, promises and bribes could all undermine the free exercise of the will. That probably goes too far. The politician who votes against a piece of legislation simply because she wishes to receive some bribe money doesn’t seem to be acting without liberty. So others propose a more restrictive definition of coercion. For them, a coercive act is one that threatens to make you worse off than you would otherwise have been, and is serious, credible and (relatively) immediate. The highwayman’s offer of “your money or your life” fits the bill.

Freedom as Non-Interference by Coercion: You are free if (a) you have the power to act and (b) you exercise that power without being interfered with by an external agency coercing you into doing something through a threat to make you worse off that is (i) serious, (ii) credible and (iii) immediate.

This analysis of freedom as non-interference is usually combined with the Hobbesian one, to give us the classic (and arguably most popular) liberal conception of freedom.

But the story does not end there. As Skinner points out, in the 1800s a group of thinkers added some further complexity to the conception of freedom as non-interference. They highlighted an omission in the prevailing point of view. Note how the two preceding conceptions held that it must be an external agency (broadly construed) that interferes with your power to act. But why must this be the case? Could you yourself not interfere with your own power to act? John Stuart Mill was one of the first to recognise this possibility. Probably influenced by his occasional dalliances with Romanticism, Mill argued that you could be in the grip of inauthentic desires and beliefs, ones which interfered with your capacity to act as you truly wished. Ridding ourselves of such inauthenticity was one of the themes in Mill’s famous paean to freedom On Liberty. Similar “internal” interferences with liberty were highlighted by others. Marxists, for example, argued that many ordinary citizens were subject to false consciousness, and Freudians argued that psychological mechanisms for repression helped to suppress our true desires. This gives us:

Freedom as Non-Interference by the Self: You are free if (a) you have the power to act and (b) you exercise that power without being interfered with by some aspect of your self that prevents you from acting, or compels you to act, through (i) passion, (ii) inauthenticity, (iii) false consciousness, (iv) repression (and, maybe, (v) other possibilities).

[Note: the “other possibilities” option at the very end is included by Skinner in order to recognise the fact that classificatory schemes of this sort are never complete].

Again, as with freedom as the absence of coercion, this conception can be added to the preceding ones. In other words, one could have a general theory of freedom as non-interference which accepted that the use of force, the use of coercion, and the prevention or compulsion of action by some aspect of the self, all undermine our freedom. The image below highlights these three possibilities in the general genealogy of liberty. As you can see, this conception of freedom as non-interference takes up the most space.




3. Positive Liberty: The Mystical Tradition
But freedom as non-interference does not exhaust the conceptual space. Skinner argues that in the late 1800s there also emerged a school of thought that rejected this purely negative conception of freedom. For them, freedom as non-interference only captured “the negative portion of the dialectic”. Freedom wasn’t simply the absence of something; it was the presence of something too. It was about realising some property or quality in one’s actions.

But what is this property or quality? Skinner cites the work of T.H. Green, who argued that in order to be free we must have acted so as to realise the true essence of our selves. I call this, somewhat disparagingly, the “mystical” tradition in the history of liberty. I do so because the notion that there is some true essence to the self strikes me as being slightly mystical. Nevertheless, there have been a variety of proposals over the years. Two are mentioned by Skinner. First, there is the Aristotelian proposal that the essence of the self is political (“man is a political animal”) and hence we are free when we act so as to realise our political natures. Second, there is the Christian proposal that the essence of the self is spiritual and hence we are free when we act so as to realise our spiritual natures (usually done by achieving communion with God).

Freedom as Self-Realisation: You are free if (a) you have the power to act and (b) you exercise that power in such a way that you realise the true essence of your self as: (i) a political being; or (ii) a spiritual being; or (iii) some other possibility.

The two most prominent exponents of this positive conception of liberty in the recent past are, according to Skinner, Hannah Arendt and Charles Taylor. Both of these authors adopt a political/communitarian conception of self-realisation.




4. Freedom as Non-Domination: The Republican Tradition
That brings us to the final branch of the genealogy: the one that conceptualises freedom in terms of non-domination. We come to this last because, according to Skinner, it is the conception of freedom that was suppressed by the birth of modern liberalism. Specifically, he argues that when Hobbes defended his version of freedom as non-interference, he did so in an attempt to refute the alternative, republican tradition of freedom as non-domination (this makes sense when one considers the historical context in which Hobbes wrote his major works: Hobbes wished to defend the monarchy from its republican opponents).

This alternative tradition adopts an interesting, counterfactual definition of political freedom. It holds that absence of interference is not enough for freedom. Imagine you are a slave. You are born into slavery. You know no other life. The condition of being a slave is such that you are always subject to the will of another (your master). But now imagine that you are happy to go along with the will of your master. You never contradict him; you always do as he pleases. As a result, you may live a life that is devoid of interference. You will never be forced to do something by physical manipulation or by manipulation of the will. But would that be enough to secure your freedom?

The republican claim is that it is not. In order to be truly free, you must not be subject to the will of another in this way. To live under the constant shadow of the master’s good will is problematic in and of itself, and will likely limit your self-expression in other ways too. For instance, the slave will be prone to self-censorship. Because they know that their ability to act is conditional upon the good will of their masters, they will always be cautious about speaking and acting in a forthright manner. This gives us:

Freedom as Non-Domination: You are free if (a) you have the power to act and (b) your exercise of that power is not conditional upon the good will of another (i.e. you are not living like a slave subject to a master).

This republican conception of freedom has been resurrected in recent times by Skinner himself, and by the work of other political philosophers, most notably Philip Pettit (about whom I’ve written before). But it has featured prominently during a number of debates in the past 300 or so years. For instance, at the time of the American War of Independence, the notion was that the colonists were subject to the will of the British crown (the irony that they permitted slavery within their own borders is not ignored in Skinner’s discussion of this history). Similarly, leading figures in the movement for women’s rights appealed, implicitly, to the republican conception of liberty. Indeed, both Mary Wollstonecraft’s A Vindication of the Rights of Woman and John Stuart Mill’s The Subjection of Women drew analogies between the status of the married woman and the slave.

This branch of the genealogy is highlighted in the image below.




5. Conclusion
So that’s it. As I said at the outset, I didn’t intend for this to be a critical commentary on Skinner’s genealogy. Instead, I merely intended to share what I think is a useful intellectual tool. Hopefully that intention has been realised. As we have seen, there are three branches in the genealogy of liberty. The dominant branch holds that freedom consists in non-interference, with some debate about the precise nature of freedom-undermining interference. The mystical branch holds that freedom consists in the ability to realise one’s true essence through action. And the republican branch holds that freedom consists in non-domination, i.e. the absence of conditional dependence on the good will of another. Although it is possible to combine some of these views, there are also important tensions between them. Nevertheless, they have all featured in the debate about what it means to be free.

Saturday, January 17, 2015

Neuroenhancement and the Extended Mind Hypothesis




Consider your smartphone for a moment. It provides you with access to a cornucopia of information. Some of it is general, stored on publicly accessible internet sites, and capable of being called up to resolve any pub debate one might be having (how many U.S. presidents have been assassinated? or how many times have Brazil won the World Cup?). Some of it is more personal, and includes a comprehensive databank of all emails and text message conversations you have had, your calendar appointments, the number of steps you have taken on any given day, books read, films watched, calories consumed and so forth.

Now consider a question: is this information part of your mind? Does it form part of an extended mind loop, one that interfaces with and augments the mental processors inside your skull? According to some philosophers it does. They believe in something called the extended mind hypothesis, which goes against the neuro-physicalist wisdom and holds that the mind is not necessarily to be identified with the brain. On the contrary, they suggest that humans are natural-born cyborgs, constantly expanding their minds into their external environments.

This is an intriguing hypothesis, and one that has been much-debated in the philosophy of mind. But does it have any ethical implications? If your mind extends into the external environment, wouldn’t we be obliged to treat anything that forms part of your extended mind loop in accordance with the ethical principles that are usually thought to apply to the treatment of any part of one’s non-extended mind? In other words, shouldn’t we adopt a parity-stance when it comes to the treatment of the internal and external mind?

Maybe. One philosopher who has taken up the parity-stance in recent years is Neil Levy. He has used it explicitly in relation to debates about neuroenhancement, i.e. the use of drugs and other forms of biotechnology to enhance and tweak the elements of neural anatomy. In this post, I want to take a look at Levy’s argument and at a recent response to it. I’ll proceed in three stages. I’ll start with a description of the extended mind hypothesis. I’ll then look at Levy’s “parity” argument. Following this, I’ll consider some obvious criticisms of the parity argument.

If that sounds tolerable, let’s proceed.


1. A Quick Outline of the Extended Mind Hypothesis
The extended mind hypothesis (EMH) was first introduced to the philosophical world by David Chalmers and Andy Clark in 1998. Their claim was simple enough. The most prevalent version of mind-body physicalism in the latter half of the 20th century was functionalism. According to functionalism, whether or not something counted as a mental state (e.g. a belief, desire, intention, memory, experience and so forth) depended not so much on the stuff it was made of as on its place within a functional system. In other words, the mind was like a mechanism, with particular mental states playing different causal and functional roles within the mechanism, all leading to the creation of this phenomenon we call the “mind”.

Because functionalism placed such an emphasis on causal roles in the production of mental phenomena, it led philosophers of mind to propose a natural corollary. It led them to claim that mental phenomena were multiply realisable. That is to say, mental phenomena could supervene upon many different physical systems. There was nothing uniquely special about neurons and other brain cells in this respect. In theory, a mind could be instantiated in other systems, for example an artificial neural network or digital computer. All that mattered was whether that system had all the relevant component parts playing the appropriate functional roles. Functionalists are still physicalists. They still think the mind requires some physical system. They just don’t think that the brain is the only eligible physical system.

Chalmers and Clark’s extended mind thesis was another natural corollary of functionalism and multiple realisability. It added that if the mind is multiply realisable it can surely be jointly (or, rather, conjointly) realisable. In other words, the brain and other physical systems could combine to form a mind. Chalmers and Clark provided a striking illustration of their hypothesis. Imagine there is a man named Otto, who suffers from some memory impairment. At all times, Otto carries with him a notebook. This notebook contains all the information Otto needs to remember on any given day. Suppose one day he wants to go to an exhibition at the Museum of Modern Art in New York but he can’t remember the address. Fortunately, he can simply look up the address in his notebook. This he duly does and attends the exhibition. Now compare Otto to Inga. She also wants to go to the exhibition, but has no memory problems and is able to recall the location using the traditional, brain-based recollection system.

Chalmers and Clark argue that there is nothing fundamentally different about Otto and Inga. They both remember the location. It just so happens that Otto uses an extended mind loop for recollection, whereas Inga uses an internal one. In this sense, Otto’s notebook forms part of his mind.

To be clear, Chalmers and Clark do not think that everything in the physical environment will form part of an extended mind. Certain conditions must be met. These include:

Accessibility: The external prop must be constantly and easily accessible to the individual.
Endorsement: The contents of the external prop (the notebook in Otto’s case) must be automatically endorsed by the individual and must have been consciously endorsed in the past.

These conditions are (allegedly) sufficient for something forming part of the mind, but they may not be necessary. What kinds of external prop meet these conditions? Otto’s notebook obviously fits the bill, but that’s an archaic example. I would argue that most smartphones and artificial assistants now meet these criteria. And as I said some of the contents of those props can include information from the (publicly-accessible) web. Of course, this raises interesting questions about whether the information on the web forms part of my mind, as well as your mind and everyone else’s. I think it does, if we follow Chalmers and Clark’s conditions. I frequently consciously endorse information on the web (e.g. the location of my hotel when travelling) and then access and automatically endorse it at a later point.

Strangely, Levy tries to resist this claim (at least in his own case). He says that something like the information on Wikipedia does not meet these conditions because access is relatively slow and effortful, and he does not always trust it (Levy 2011). Of course, I can’t speak directly to his own experience, but it does seem like an oddly out-dated claim. A lot of information on the web is no longer effortful and slow to access (no more so than accessing a notebook) and is automatically endorsed. I readily access information via my smartphone in virtually any location and at any time. Furthermore, the trust issue seems no greater in the case of certain types of web-based information than it would in the case of the information stored in Otto’s notebook.

But this is a digression. The important point for now is to grasp the essence of the extended mind hypothesis. Remember, the claim is that the mind isn’t all inside the skull. External props can form part of an extended mind loop, provided that the contents of those props meet certain conditions. The simplest example of this is in how external props form part of our memory system. But it doesn’t end there. External props can form part of other mental systems too, such as our motivational systems.


2. Ethical Parity and the Neuroenhancement Debate
There are many criticisms of the EMH in the literature. I will discuss some a little later on. For now, I want to take it as a given, and see what difference it makes to the neuroenhancement debate. That debate is all about the use of pharmacological and biotechnological devices to alter and enhance the neural anatomy. Examples might include the use of methylphenidate to enhance memory and attention, propranolol to suppress painful memories, and deep brain stimulation to regulate mood. Some people are positively disposed to such enhancements, others are negatively disposed. Each side has a set of standard arguments at their disposal. The positively disposed appeal to the removal of barriers to self-optimisation and the associated positive societal effects. The negatively disposed worry about things like upsetting the natural order, inauthenticity and the possible negative societal effects.

Can the EMH be used to break the deadlock between the two sides? Levy thinks it might help. Specifically, he thinks it might help in a way that supports the pro-enhancement side. As he sees it, part of the opposition to the use of neural enhancement is based on the notion that there is some principled distinction between enhancements that are “inside the head” and enhancements that are on the outside. Thus, enhancing my memory consolidation with the use of methylphenidate is deemed to be very different from enhancing my memory through the use of my smartphone.

But the EMH calls this principled distinction into question. If the EMH is right, then both the internal and external realms form part of our mind. And if we have no objection to enhancing the latter, we should (by implication) have no objection to enhancing the former. If we accept that I can enhance my extended mind loop by buying a newer, smarter, smartphone, then why shouldn’t we accept that I can do the same with a smartpill? As Levy puts it:

Much of the heat and the hype surrounding neuroscientific technologies stems from the perception that they offer (or threaten) opportunities genuinely unprecedented in human experience. But if the mind is not confined within the skull…[then] intervening in the mind is ubiquitous. It becomes difficult to defend the idea that there is a difference in principle between interventions which work by altering a person’s environment and those that work directly on her brain, insofar as the effect on cognition is the same; the mere fact that an intervention targets the brain directly no longer seems relevant.
(Levy 2011, 291)

Levy is working here with an ethical parity principle (EPP), one that claims there is, ethically speaking, no important principled difference between internal and external interventions in the mind. This is a strong parity claim, premised on our acceptance of the EMH. The principle can be formulated in the following way:

Strong EPP: Since the mind extends into the external environment, alterations of external props used for thinking are (ceteris paribus) ethically on a par with alterations of the brain.

This can then be used to support Levy’s argument in favour of neural enhancement:



  • (1) Since the mind extends into the external environment, alterations of external props used for thinking are (ceteris paribus) ethically on a par with alterations of the brain.
  • (2) We have no “in principle” ethical objection to alterations to external props for thinking.
  • (3) Therefore, we ought to have no “in-principle” ethical objection to alterations of neural mechanisms for thinking.



Just to be clear, this argument is not claiming that interventions in the external and internal mind are fine and dandy. It is, rather, claiming that they should be subjected to equivalent forms of ethical scrutiny. If it would be wrong to remove the part of Inga’s brain that allows her to remember the location of the Museum of Modern Art, then it would be wrong to remove Otto’s notebook. And vice versa.


3. Is the EPP credible?
Now we come to the crux of the issue. Is the EPP, as outlined, any good? Should we agree with Levy that there is no “in principle” difference between internal and external interventions into the mind? Some are skeptical. They argue that the strong form of the EPP is unsustainable, but that a weak form may be allowed to stand in its stead (something Levy himself falls back on). DeMarco and Ford are two such critics. I want to close by outlining their critique.

To understand this critique, let’s dwell on the example of memory. The EPP claims that there is no in-principle distinction between alterations to external memory props (like Otto’s notebook, or my email history) and alterations to internal memory systems (presumably regions or networks of the brain). While this sounds initially plausible on the EMH, there are at least three important differences between internal and external memory. Each of these differences has some ethical salience:

Dynamic integration: Internal memory is a dynamic, not a static, phenomenon. The information stored in Otto’s notebook or my smartphone is static. Once inputted, the information remains the same, unless it is deliberately altered. Internal memory is not like this. As is now well-known, the brain does not store information like, say, a hard disk stores information. Memories are dynamic. They are changed by the act of remembering. What’s more, memories integrate with other aspects of our cognitive frameworks. They affect how we perceive and how we desire. Perhaps external props can do something similar, but their effects are more attenuated. Internal memory is more closely coupled to these other phenomena. Consequently, tinkering with internal memory could have a much more widespread effect than tinkering with external memory. To the extent that those effects are ethically significant, we have found a reason to reject the strong EPP.

Fungibility: External memory props may be more easily replaceable (more fungible) than internal memory. If I destroy your smartphone, you can always get another one. And although you may have lost some of your externally stored memories (maybe some pictures and messages) you will still be able to form new ones. If, on the other hand, I destroy your hippocampus (part of the brain network needed to form long-term memories), I can permanently impair your capacity to acquire new long-term memories. This isn’t a hypothetical example either. It has really happened to some people, the most famous being patient HM, who had part of his hippocampus removed during surgery for epilepsy in the 1950s and was never again able to form new long-term memories. Again, this difference in fungibility seems like it is ethically significant.

Consciousness: Another obvious difference between internal and external memory is the degree to which they are implicated in conscious experience. Consciousness is usually deemed to be an ethically salient property. Entities that are capable of conscious experience are capable of suffering and hence capable of being morally wronged. What’s more, the nature and quality of one’s conscious experiences is often thought to be central to living a good life. Although the information stored in an external prop may, eventually, feature in one’s conscious experiences, it does not shape the very content of those conscious experiences in the way that something which is internally stored may do. As I noted above, internal memory can get deeply integrated into our mental models of the world, affecting how we perceive and act in that world. So if we alter an internal memory system, it could have a much more significant impact on the quality of our conscious experience.

These aren’t perfect reasons for rejecting the strong EPP. One could dream up examples of external props that seem to blur the alleged distinctions. For example, some external props may get deeply integrated into our mental models of the world and change how we perceive it. And perhaps the difference in fungibility is merely temporary: with advanced technology, even brain parts may be as readily replaceable as smartphones. Nevertheless, taken together, I think these three distinctions provide some reason for doubting the strong version of the EPP, and there are, in any event, more distinctions to be made (see work by Evan Selinger, for example).

The question is: where do we go from there? Levy introduced the EPP in an effort to make people more comfortable about the prospect of neural enhancement. He reasoned that if people had no problem with external enhancement, then if you could convince them that there was no in principle difference between external and internal enhancement, you could, in turn, convince them of the acceptability of the latter. In other words, he tried to draw us from our embrace of external enhancement to an embrace of internal enhancement, via the EPP. Given this, one might be inclined to think that any claim to the effect that there are important differences between the internal and the external would lead us to be more wary of internal forms of enhancement. But this is not my view. I think the differences may draw us in the opposite direction. I think they show that there are greater moral risks associated with tinkering with the internal realm, but, at the same time, there are greater benefits too. For instance, if consciousness is such an important ethical property — so deeply implicated in what it takes to live a good life — then surely that is the very thing we should be trying to enhance?

DeMarco and Ford make a slightly more general point. They think that the differences between the internal and external worlds should lead us to abandon the strong version of the EPP, but retain, in its stead, a weaker version. Levy himself drafted a weaker version of the principle, one that was not so reliant on the EMH. DeMarco and Ford try to modify this weaker version in light of a series of criticisms. I won’t rehearse their arguments here. Instead, I’ll simply skip to the end and to their attempt at a weak EPP:

Weak EPP (DeMarco and Ford): Alterations of external props are ethically on a par with functionally similar alterations of the brain, to the precise extent to which reasons for finding the functional alterations of the brain morally acceptable or unacceptable equally apply to reasons related to the functional alteration of external mental props.

As they see it, this weaker version of the EPP has a modest, but important effect on debates about enhancement and other mental interventions. It forces us to focus on the ethical permissibility of tinkering with different mental functions (like memory, desire, belief, mood and so on). If it is permissible to tinker with a given mental function, then its permissibility should not depend on whether the tinkering is internal or external. It should depend on the moral reasons for the alteration. But this is only the beginning of the ethical inquiry. If the functional alteration is permissible, then other more complex issues will need to be addressed. How effective are current technologies for alteration? What are their side effects? Do they have social implications? And so on.


4. Conclusion
To sum up, the EMH argues that the mind is not all in the head. External props can form part of a functional mind loop. Neil Levy argues that this functional equivalence between the internal and external has some ethical implications. In particular, he thinks it can affect the debate about the moral propriety of neural enhancement. In brief, his argument is that since we typically have no problem with the alteration and enhancement of external mental props, so too should we have no problem with functionally equivalent alterations to and enhancements of internal mental systems.

Others disagree, arguing that there are important ethical differences between internal and external mental systems. DeMarco and Ford argue that these differences should lead us to a revised, weaker version of Levy’s parity principle. I have argued that these differences may indirectly play into Levy’s hands. This is on the grounds that they may suggest that internal alterations have greater ethical priority. To be sure, this is an incomplete argument. But it is one worth developing and one which I hope to develop in the future.

Saturday, January 10, 2015

Longer Lives and the Alleged Tedium of Immortality

Bernard Williams - argued that immortality would be tedious

Back in 1973, Bernard Williams published an article about the desirability of immortality. The article was entitled “The Makropulos Case: Reflections on the Tedium of Immortality”. The article used the story of Elina Makropulos — from Janacek’s opera The Makropulos Affair — to argue that immortality would not be desirable. According to the story, Elina Makropulos is given the elixir of life by her father. The elixir allows Elina to live for three hundred years at her current biological age. After this period has elapsed, she has to choose whether to take the elixir again and live for another three hundred. She takes it once, lives her three hundred years, and then chooses to die rather than live another three hundred. Why? Because she has become bored with her existence.

Of course, this is just a story, but Williams thinks that it makes a serious point. He argues that a meaningful life, one that is of value to the one that lives it, is one that focuses on the fulfillment of certain categorical desires. He worries that an immortal life would lead to the exhaustion of such desires, which would in turn lead to tedium and boredom. It is this exhaustion of categorical desires that he feels is captured so well by the story of Elina Makropulos.

Williams’s article is justly famous. Like all of his work, it is well-written and has a provocative thesis. It is also, for better or worse, the starting point for all contemporary discussions of the desirability of immortality. Many have been persuaded by its arguments. They think Williams does indeed say something important about the nature of immortality. I have historically counted myself as one of those people (sort of - see my earlier series of posts on this topic). But perhaps we need to re-evaluate? Perhaps Williams’s argument says nothing at all about the desirability of immortality?

That’s the claim put forward by Samuel Scheffler in his book Death and the Afterlife. Scheffler argues that Williams is wrong about the desirability of immortality, but may have nevertheless highlighted a paradox at the heart of the human condition. I want to examine Scheffler’s argument in this post.


1. Williams’s Tedium of Immortality Argument
I have to start by reviewing Williams’s tedium of immortality argument. I don’t want to spend too long on this since I’ve written about it at length before (and, since I am not immortal, I probably shouldn’t waste the time I have).

Let's assume that most people fear their deaths and would prefer not to die. That is an assumption that guides Williams’s analysis. Williams further assumes that, in preferring not to die, they would want to live forever, i.e. to have an existence that cannot come to an end. This is what he means by an immortal life. Such a life should be distinguished from one that is merely very long (e.g. 1,000 years) or super-long (e.g. 1,000,000 years). This distinction has important repercussions for assessing the impact of Williams’s argument on projects aimed at extending human lifespan. I’ll return to this point at the end of the post.

In imagining a life without end, what is it that people would be imagining? Williams suggests that they would imagine a type of existence that satisfies two conditions:

Williams’s Conditions: An immortal human life must:
(a) preserve a sense of self over time, i.e. it must be the same self that is living the life in question; 
(b) be such that the state of being in which the self will be, should it survive, allows the self to satisfy those aims it has in wanting to survive.

As regards the second of these conditions, Williams focuses on the types of desire that motivate us to continue living. He claims that there are two general classes of such desire:

Conditional/Contingent Desires: These are desires that are ephemeral and fleeting in nature, often tied to (or conditional upon) the limitations of our biology and our continued survival, e.g. the desire for food, shelter, sex and so forth.
Categorical Desires: These are more significant desires. They are akin to life projects or plans. They are desires around which our self-worth is organised, e.g. the desire to write a great novel, raise happy and successful children, make important scientific discoveries, and so forth. 

Williams claims that the satisfaction of contingent desires, while important, is not really what makes life worth living. It is the satisfaction of categorical desires that does that. Since they are the focal point of what we do on a daily basis, it is their satisfaction that makes us want to live. Williams’s worry is that there are only so many categorical desires that one self can pursue. In the course of an immortal life, you would end up pursuing and satisfying every achievable categorical desire. Eventually, you would have nothing left to make your life worth living. You would be bored, listless and tired of life.

To put it more formally:


  • (1) In order for life to be worth living, one must have a set of categorical desires that one wishes to satisfy, i.e. a set of life projects around which one’s sense of self and value is organised. 
  • (2) If one lives an immortal life, one would exhaust the set of categorical desires and become bored and apathetic as a result. 
  • (3) Therefore, it would not be worth living an immortal life.



There are several criticisms one could make of this argument. I have pursued some of them in the past. Here, I want to focus on the second premise. For it is there that Scheffler’s concerns are directed. He claims that the arguments and examples Williams adduces in support of premise (2) are not really about immortality at all. Instead, if those arguments and examples are persuasive, they have a much wider significance.


2. Scheffler’s Criticisms
Scheffler’s critique comes in two parts. The first part argues that Williams’s argument applies just as much to a very long or super-long life as it does to an immortal one. The second part argues that — despite its irrelevance to the issue of immortality — Williams’s argument nevertheless succeeds in saying something important about the human condition.

Let’s focus on the first part of the critique. Recall that Williams uses the story of Elina Makropulos to defend his claim that an immortal life would lead to the exhaustion of categorical desires. In the story, Elina grows tired of her life after 342 years, having seen and experienced enough of the world. The first thing to acknowledge here is that no one really knows if Elina’s desire to die after 342 years is representative of what an actual human who had lived for 342 years would desire. Since no one has lived that long, we are in the realm of speculation and fiction. Nevertheless, we have to also acknowledge that Williams’s argument is not dependent on this particular story. His claim is merely that we would grow tired of our lives at some point, whether that is at 342 years or 342,000 years or 3,420,000 years. This is linked to his in-principle claim that the number of categorical desires that can be pursued by one individual is limited and so will be exhausted at some point in time. The Makropulos story is simply a neat illustration of this claim. There are other fictional examples such as Robert Heinlein’s story of Lazarus Long, who lived for over 2,000 years before deciding that he wanted to die.

As it happens, the claim that the pool of categorical desires open to an individual is limited is something that Williams has been challenged on in the past. Donald Bruckner, for example, has argued that more members could be added to the set, or that there could be a renewal of interest in long-forgotten categorical desires. But let’s set this critique to the side and assume that Williams is right. If he is right, does this tell us anything about the undesirability of immortality? No; not according to Scheffler. As he points out, Williams’s complaint about the exhaustion of categorical desires has nothing to do with immortality per se. Rather, it has to do with all abnormally long life spans. If the pool of categorical desires is limited, then it will be exhausted in a finite period of time, not in one that never comes to an end.

So much for the first part of Scheffler's critique. It is the second part that is more interesting. It challenges us to think a little more deeply about why it is — according to Williams — that Elina Makropulos grows tired of her life. To do this we need to draw a distinction between two things: (i) the set of possible categorical desires; and (ii) the set of categorical desires possible for a particular self. The first set may be relatively expansive, possibly unlimited; the second set is much more constrained. And it is the second set that Williams appeals to in his argument. To be a self, a person must have a relatively fixed set of characteristics over time. For example, a shared set of memories, beliefs and desires. Williams’s point is that the set of categorical desires that it is possible for a self with a relatively fixed set of characteristics to meaningfully pursue will exhaust itself. They will run out of categorical desires that are appropriately linked to their sense of self. It is only by changing the self that the problem is avoided.

And this is where Scheffler thinks that Williams captures something fundamental to the human condition. He thinks Williams captures a tension between having a constant character (a constant sense of self) and the ability to be absorbed in or engaged by one’s activities. The problem is that becoming absorbed in one’s activities allows one to lose the sense of self. Think about entering a true “flow” state: there is nothing but the activity and the experience of the activity. The self disappears. A permanent state of such absorption would lead to the death of the self. Therefore, if we wish to maintain a sense of self, we must retreat from total absorption. But if we do this, we must recognise the limitation on the set of possible categorical desires that can be pursued by a constant self. And this is a problem because the sense of a continued self is what motivates much of the desire to continue living. As Scheffler puts it:

We want to live our lives and be engaged in the world around us. Categorical desires give us reasons to live, and they support such engagement. But when we are engaged, and so succeed in leading the kinds of lives we want, then the way we succeed is by losing ourselves in absorbing activities. When categorical desire dies, as it must do eventually if we have sufficient constancy of character to define selves worth wanting to sustain in the first place, then we will be left with ourselves, and we ourselves are, terminally, boring. The real problem is that one’s reasons to live are, in a sense, reasons not to live as oneself.   
(Scheffler 2013, 94-95)

This is the paradox at the heart of the human condition: a desire to live as oneself, but an incompatibility between that desire and a very long life as oneself.


3. Some Thoughts on Scheffler’s Argument
Is Scheffler’s second critique any good? Does he really raise an important point about the human condition? I want to close with three observations. The first is pretty simple: there is nothing in any of this that calls into question projects to extend the human lifespan beyond the current upper limits (say, about 115 years). Scheffler and Williams are talking about very long or super long lives, not about the kinds of lives we currently live. They may be right that we would eventually grow tired of ourselves, but that shouldn’t necessarily stop us from trying to see whether they are right.

The second observation is that Scheffler’s critique concedes a lot of ground to Williams’s original argument. If you reject Williams’s key concepts, you might be less persuaded by what Scheffler has to say. One thing that niggles with me is the concept of self, and constancy of the self, that seems to be operating in this discussion. Both Scheffler and Williams assume that the self must be pretty constant over time, otherwise the self dies. But it’s not clear to me that this must be true. One popular theory of self-identity is the Lockean or psychological theory. According to this, in order for a particular person to remain the same over time, all that is required is that the different temporal stages of that person share overlapping psychological states. So, for example, in order for the me-today to be the same self as the me-tomorrow, the me-tomorrow must remember the me-today, and share some of my beliefs and desires. But there is no requirement that the set of shared psychological states remain absolutely constant over time. The me-in-fifty-years-time may remember nothing about the me-today, but that wouldn’t necessarily rule out a continued sense of self. Each link in the chain between me-in-fifty-years and me-today may share overlapping characteristics, even if the me-today shares nothing with the me-in-fifty-years. That wouldn’t compromise the sense of self over time, nor the desire of that self, at all times, to continue living. Of course, it may still be true that the pool of categorical desires will be exhausted, but it needn’t be because we grow tired of ourselves.

The third observation is perhaps more important. It is that Scheffler and Williams may nevertheless say something important about the value of having a constant sense of self. I value my sense of self. Having certain projects and plans, and weaving them into some kind of coherent narrative, is something that I strive to do with my life. But maybe this striving is misguided. One of the recurring themes of mystics and gurus down through the ages is that true enlightenment comes from the abandonment of the self. This insight has often been linked to religious ideologies, but there is nothing intrinsically religious about it. Indeed, this is the central thesis of Sam Harris’s book Waking Up: A Guide to Spirituality without Religion, which is all about how secularists and atheists can embrace spiritual practice. And it may be the case that the insight gained from such practice, specifically the loss of self, will reveal that the desire for continued, perpetual survival of that self is also misguided. It may be that the pure momentary absorption in experience will call into question the value of a self.

What this means for the desirability of immortality is another question, and Scheffler has an argument of his own about that. I will take a look at that argument another time.


Friday, January 9, 2015

Enhancement and authenticity: Is it all about being true to our selves?



I’ve met Erik Parens twice; he seems like a thoroughly nice fellow. I say this because I’ve just been reading his latest book Shaping Our Selves: On Technology, Flourishing and a Habit of Thinking, and it is noticeable how much of his personality shines through in the book. Indeed, the book opens with a revealing memoir of Parens’s personal life and experiences in bioethics, specifically in the enhancement debate. What’s more, Parens’s frustration with the limiting and binary nature of much philosophical debate is apparent throughout his book. The result is an interesting blend of meta-philosophical and personal reflections, with particular arguments about aspects of the enhancement debate.

This isn’t to criticise the book. Far from it. I found it an enjoyable and thought-provoking read. Parens and I certainly come from different starting points: he is far more inclined to resist the use of biotechnological enhancements than I, and has written about the phenomenon negatively in the past. But it would be silly for me to dwell on this difference here. Parens’s book is explicitly designed to offer a corrective to this “pro” or “con” mentality. His central argument is that those who care about how technology is used to shape our lives should embrace binocularity. That is: they should be willing to oscillate between competing perspectives. For example, they should acknowledge that humans are both biological machines, capable of being tinkered with and adjusted, and subjects and agents, capable of dreaming, desiring, willing and experiencing.

The binocularity thesis is an interesting one, and one that Parens explores in several different ways. I want to focus on one of them in the remainder of this post. In chapter 3 of the book, Parens makes the claim that there is something that unites both the “knockers” and “boosters” of human enhancement, namely: they are each committed to a certain ideal of human authenticity. This is interesting since that ideal is more typically associated with the knockers of human enhancement, not the boosters. What’s more, Parens actually presents some evidence for this claim. Is this evidence any good? What are its implications? Let’s find out.


1. The Moral Ideal of Authenticity
We have to start with a closer look at the concept of authenticity. There is no sense in evaluating Parens’s evidence if we don’t know what he is talking about. We all have a vague sense of what authenticity is. It is about being true to something, to some ideal, principle, concept, or person. An authentic expression of one’s opinion is one that is pure, uncontaminated by the desire to be duplicitous or deceptive. That much is straightforward. But obviously philosophers would like a more precise and refined concept. So, unsurprisingly, Parens obliges. In his analysis, he focuses on the moral ideal of authenticity. What does he mean by this?

The answer lies in the work of Charles Taylor, a Canadian philosopher, communitarian and critic of secularism. Taylor does not write about the enhancement debate itself, but he does write about the related debate between the knockers and boosters of modernity. By “modernity” Taylor means to refer to the philosophical, political and scientific ideals that emerged from the Enlightenment era. One of those ideals was the ideal of authenticity. This ideal can be understood in the following way:

Moral Ideal of Authenticity: In living your life, you must be true to your own way of being, i.e. your own path to self-fulfillment. If you are not true to this, you miss the point of life, you miss what being human is really all about. Being true to oneself means overcoming impediments to self-understanding, and knowing what is important to one’s sense of self.

In short, you must be true to yourself, to your sense of what your life is about. When Taylor and Parens speak of this being a moral ideal, I do not think they mean for it to be an ideal related to obligation or duty; rather, I think they mean for it to be a statement of prudential axiology, i.e. a claim about how to live the good life.

One of Taylor’s most important arguments is that this ideal of authenticity has been misunderstood in the debate between the knockers and boosters of modernity. Critics of modernity think of the ideal in terms of selfishness, self-indulgence and the egotistical desire to get what you want from life, without any real concern for others. The critics propose, in lieu of this, the ideal of the virtuous life, one that involves the cultivation and sustenance of certain forms of excellence. Taylor argues that the critics miss the fact that both they and the boosters share the ideal of authenticity. They both have a particular conception of what it means to live a human and fulfilling existence — one that is true to the self. Where they differ is in how they cash out that ideal.

Parens claims that a similar misunderstanding plagues the debate between the knockers and boosters of bio-enhancement. What is this misunderstanding? (Note: throughout what follows I will be assuming no major difference between what is referred to as “enhancement” and what is referred to as “treatment”. This is somewhat in keeping with Parens’s own approach, since he prefers to conceive of the debate in terms of technology that is used to shape the self, and prefers not to prejudge whether it is actually an enhancement. No doubt, I should have switched to Parens’s terminology, but I persevere with the term “enhancement” on the grounds that the term is so prevalent and can reasonably cover the types of biotechnology discussed in the following examples.)


2. Authenticity and the Enhancement Debate
Parens argues that the misunderstanding between the knockers and boosters of enhancement hinges on the attachment people have to certain aspects of their characters and abilities. The knockers of enhancement are grateful for certain fixed characteristics and feel that those characteristics are essential to who they are (their authentic selves). Likewise, the boosters of enhancement are impressed by the human ability to create identity, to act to fulfill certain projects, plans and aspirations. They feel that following that model of self-creation is being true to their authentic selves.

But this is just a general characterisation of the competing conceptions. As I said in the introduction, one of the more compelling aspects of Parens’s argument is the evidence he amasses in support of the claim that these conceptions share a commitment to authenticity. It is really this evidence that I want to focus on. That evidence comes from two main sources. First, from academic commentators — members of the knocker or booster brigades — and second, from statements by people who have used or rejected the use of bio-enhancements. In the interests of clarity, I will simply list this evidence, coding it as E1, E2…En as I go along. I will also provide links to the original sources after each description:

E1 - Elliott’s Discussion of Medical Enhancement: The first bit of evidence comes from Carl Elliott’s book Better than Well, which is a critique of how Americans engage with certain types of medical treatment. Elliott has two major concerns about how certain drugs — e.g. anti-depressants — are promoted and used. He worries in the first instance that the drugs will alienate people from who they really are, and in the second instance that they will alienate us from how the world really is. For example, an anti-depressant might cure us of our melancholy, but also lead us to ignore problems with our personal and social environments. Maybe we are depressed by our unfulfilling jobs or by the degree of social injustice we experience, and maybe the drugs mask what is a proportionate response to our predicament. In other words, maybe the drugs prevent us from living a truly authentic life. (Source: Elliott)
E2 - Kramer and DeGrazia’s Support of Anti-Depressants: The second bit of evidence comes from the work of Peter Kramer and David DeGrazia. They both make arguments for the use of drugs like Prozac that appeal to the ability of the drugs to remove certain impediments to self-fulfillment. A person could be crippled by depression and unable to achieve some goal or project that is an important part of their self-conception. By using the drugs, they free themselves to be true to their own ideals of self-fulfillment. Again, that seems to be an appeal to the ability to live an authentic life. (Sources: Kramer, DeGrazia)
E3 - The Paxil Ad: Paxil is an anti-depressant drug, similar in chemical operation to Prozac (i.e. it’s an SSRI). An ad appeared for Paxil in a medical journal many years ago which spoke of the “imprisoned patient” and then showed an image of an unfinished sculpture. The message was clear: the patient suffering from depression was like the unfinished sculpture. They were trapped in a block of marble, waiting to be freed. Paxil would enable them to do this. It would free the true self that was being masked by the illness. (Source: Could not find a copy of the ad)
E4 - Debate about Cochlear Implants: The fourth bit of evidence comes from the debate between those who use and those who reject the use of cochlear implants. As Parens points out, some members of the Deaf community reject the use of such implants on the grounds that deafness is an essential aspect of their self-identity (partly constitutive of their authentic selves). Contrariwise, there are those who embrace the use of cochlear implants because they allow them to be more truly human. For example, Michael Chorost, who wrote a book entitled Rebuilt: How Becoming Part Computer Made Me More Human, claims that the cochlear implant allowed him to experience the world more fully — almost the opposite of Elliott’s concern about anti-depressants alienating us from reality. (Sources: Crouch, Chorost)
E5 - Women who get Cosmetic Surgery: The fifth bit of evidence comes from the work of Kathy Davis, a sociologist who interviewed women who received cosmetic surgery (e.g. breast enlargements or reductions). She was struck by how these women tended to report a similar motivation for seeking the surgery. They claimed to have a body part that didn’t belong — that didn’t fit with their true identity — and they needed to have it altered to express their true selves. (Source: Davis)
E6 - Debate about Treatment for ADHD: The sixth bit of evidence is drawn from ethnographic work by Ilina Singh on children diagnosed with ADHD. Again, Parens notes how the language of authenticity permeates the views of those who embrace and those who reject pharmacological treatment for their condition. Those who embrace it think the drugs enable their moral agency (their ability to express themselves through action), while those who reject it think the drugs change who they really are. (Note: the majority of Singh’s subjects seemed to think they benefited from treatment). (Source: Singh)
E7 - Poets using Anti-Depressants: The seventh bit of evidence comes from the experiences of poets using anti-depressants. Some poets think the drugs remove a barrier to doing their work, while others think the drugs separate them from a crucial aspect of their identities. (Source: Berlin (ed))
E8 - Debate about Transgender Surgery: The final bit of evidence comes from the debate about gender reassignment surgery (another example of using technology to shape ourselves). There are those who reject such surgeries on the grounds that they amount to a denial of one’s true, genderqueer, norm-challenging self. On the other hand, there are those who embrace such surgeries on the grounds that they allow people to become who they truly are. Once again, we have a debate about the ideal of authenticity. (Sources: Raymond, Green)

Parens also comments on similar disagreements among anorexics and those with body dysmorphia, but I’ve decided to skip over those since they are more of the same (with respect to the language and concepts used, not with respect to the nature of the different conditions). I think, taken together, these eight bits of evidence do provide some reasonable support for Parens’s claim about the pervasiveness of authenticity in the enhancement debate. Of course, Parens’s argument isn’t intended to be a true scientific or empirical argument. It is, rather, an interpretation of a range of different textual sources. It provides some initial support for a hypothesis that could (if the interest was there) be taken up by psychologists and other researchers in the future. Further experiments could be used to determine whether people really do conceive of the pros and cons of enhancement in terms of the ideal of authenticity.


3. What are the implications of this?
But assume for the moment that Parens is correct. What follows from this? Does it matter for the enhancement debate? Or should that debate continue as it has done? Does the pervasiveness of authenticity have any practical implications for how we approach the use of enhancement? I think some things do follow (weakly), and I want to close with three examples.

The first, which accords with what Parens himself seems to think, is that it may encourage a less divisive approach to the issue. The ideal of authenticity is about remaining true to some characteristic or aspect of yourself and your situation in the world. If Parens is right, then the knockers of enhancement prioritise some characteristics (e.g. their ADHD, depression or deafness), whereas the boosters prioritise others (e.g. their desire to pursue certain goals, to free certain aspects of their selves that they feel are hindered by a condition or disability). Within certain limitations, there is no reason why these different ideals cannot live side by side. Indeed, this is something that disability theorists often claim. They say we should not view one mode of living as being intrinsically better or worse than another; rather, we should accept that there are simply many different modes of living, each with its own value. There is no reason why the knockers and boosters can’t embrace this pluralism.

The second observation is that there is some reason to retreat from this pluralism, at least if it is understood in an extreme way. Although it is no doubt true that several different ideals of authenticity can live side-by-side, it is also true that certain ideals have implications that extend beyond the person living the authentic life. For example, should a paedophile or serial rapist be encouraged to reject chemical castration on the grounds that their sexual preferences are an essential part of their authentic selves? The proposition seems troubling. Surely there are some conceptions of an “authentic life” that cannot be wholly tolerated and which people should be encouraged to reject? If that’s right, then we can only have pluralism up to a point. (Note: I speak here of “encouragement”, not of “compulsion”.)

The final observation is of a more personal nature. By this, I don’t mean that the observation is about me and the way I live my life, but rather that it is about the kinds of decisions we all make about our lives. By drawing attention to how different ideals of authenticity operate within the enhancement debate, Parens’s argument also encourages us to think more seriously about what is important to us in our own lives. What characteristics and attributes are of value? Which of them would we like to preserve? Which would we like to modify? Maybe our melancholic moods are integral to our creativity and self-expression, maybe they are not. Either way, maybe we should take a more reflective and considered approach to the use of particular enhancement technologies. In doing so, we could avoid falling into the trap of being a knee-jerk knocker (as Parens once was) or a knee-jerk booster (as I tend to be).

That might be the most important lesson to draw from Parens’s book.

Tuesday, January 6, 2015

New Paper - The Normativity of Linguistic Originalism: A Speech Act Analysis




I have a new paper coming out in the journal Law and Philosophy. This one looks at originalist theories of constitutional interpretation through the lens of speech act theory. In particular, it critiques approaches to originalism that assume there is a linguistic and factual core to the doctrine.

Here is the abstract, along with links to pre-publication versions of the paper. I will add a link to the published version as soon as it is online:

Title: The Normativity of Linguistic Originalism: A Speech Act Analysis
Journal: Law and Philosophy
Abstract: The debate over the merits of originalism has advanced considerably in recent years, both in terms of its intellectual sophistication and its practical significance. In the process, some prominent originalists — Lawrence Solum and Jeffrey Goldsworthy being the two discussed here — have been at pains to separate out the linguistic and normative components of the theory. For these authors, while it is true that judges and other legal decision-makers ought to be originalists, it is also true that the communicated content of the constitution is its original meaning. That is to say: the meaning is what it is, not what it should be. Accordingly, there is no sense in which the communicated content of the constitution is determined by reference to moral desiderata; linguistic desiderata do all the work. In this article I beg to differ. In advancing their arguments for linguistic originalism, both authors rely upon the notion of successful communications conditions. In doing so they implicitly open up the door for moral desiderata to play a role in determining the original communicated content. This undercuts their claim and changes considerably the dialectical role of linguistic originalism in the debate over constitutional interpretation.
Pre-publication versions: available here and here.



  

Monday, January 5, 2015

Pereboom's Case for Hard Incompatibilism (Series Index)




I have just completed a series of three posts looking at Derk Pereboom's case for hard incompatibilism. Hard incompatibilism is the view that free will is compatible neither with determinism nor with indeterminism, and consequently that it very probably does not exist. Defending this view requires a critique of both compatibilist and libertarian theories of free will. Pereboom's critique is presented in his book Free Will, Agency and Meaning in Life. The book obviously covers the topic in more detail than this blog, but I have tried to provide as detailed a summary of the core arguments as I possibly can. I have also tried to offer some critiques of those core arguments.

Anyway, these are the links to all three posts:




Sunday, January 4, 2015

Pereboom's Four Case Argument against Compatibilism



I have recently been working my way through some of the arguments in Derk Pereboom’s book Free Will, Agency and Meaning in Life. The book presents the most thorough case for hard incompatibilism of which I am aware. Hard incompatibilism is the view that free will is not compatible with causal determinism (nor, indeed, with indeterminism), and, what’s more, probably doesn’t even exist. In previous entries, I’ve looked at Pereboom’s critique of libertarian (non-compatibilist) theories of free will. In this post, I want to look at his famous argument against compatibilism.

That argument rests on a series of four thought experiments, each assuming the truth of determinism, each involving an agent whose decision satisfies standard compatibilist conditions for free will, and each provoking an intuitive judgment about that agent’s responsibility. Consequently, the argument is sometimes referred to as the “Four Case” Argument against compatibilism. In what follows, I want to present the argument in as clear a way as I possibly can, and offer some critical reflections.

I’ll proceed in three main steps. I’ll start by reviewing the traditional compatibilist accounts of free will along with Pereboom’s argumentative strategy against those accounts; I’ll then consider the four thought experiments at the heart of Pereboom’s argument; and finally I’ll present some critical reflections.


1. Compatibilism and Manipulation Arguments
Compatibilism is the view that free will exists and is compatible with the truth of determinism. In other words, compatibilists hold that whether a decision is to be judged free (or not) does not depend on whether the decision was causally determined. This raises the obvious question: if causal determinism has no bearing on the matter, what does? The answer is that a certain type of causal sequence is associated with free and responsible decision-making. If the actual causal sequence leading up to a decision fits this type, then we are entitled to say that the decision is free.

So which types of causal sequence do the trick? A variety of accounts have been proposed over the years. Here are four of the more popular ones:

Character-based account: A decision can be said to be “free” if it is caused by, and not out of character for, a particular agent. This is the view traditionally associated with the likes of David Hume. It is probably too simplistic to be useful. Other compatibilist accounts offer more specific conditions.
Second-order desire account: A decision can be said to be free if it is caused by a first-order desire (e.g. I want some chocolate) that is reflexively endorsed by a second-order desire (e.g. I want to want some chocolate). This is the account associated with Harry Frankfurt (and others).
Reasons-responsive account: A decision can be said to be free if it is caused by a decision-making mechanism that is sufficiently responsive to reasons. In other words, if the mechanism had been presented with a different set of reasons-for-action, it would have produced a different decision (in at least some possible worlds). This is the account associated with Fischer and Ravizza, and comes in several different forms (weak, moderate and strong responsiveness).
Moral reasons-sensitivity account: A decision can be said to be free if it is produced by a decision-making mechanism that is capable of grasping and making use of moral reasons for action. This is the account associated with R. Jay Wallace. It is similar to Fischer and Ravizza’s account, but pays particular attention to the role of moral reasons in decision-making.

As you can see, all of these accounts claim that a certain type of causal sequence has the “right stuff” for free will, irrespective of whether the decisions produced are fully determined by those causal sequences.
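To give a flavour of how such conditions can be made precise, here is a rough, first-pass formalisation of the weak version of the reasons-responsive account. The notation is mine, introduced purely for illustration, and it glosses over many of Fischer and Ravizza’s refinements:

\begin{align*}
\mathrm{Free}(a, d) \;&:\Leftrightarrow\; \exists m\,[\mathrm{Produces}(m, d) \wedge \mathrm{WRR}(m, a)]\\
\mathrm{WRR}(m, a) \;&:\Leftrightarrow\; \exists w\,[\mathrm{SameMechanism}(m, w) \wedge \mathrm{SufficientReasonToDoOtherwise}(w) \wedge \mathrm{DoesOtherwise}(a, w)]
\end{align*}

In words: agent a’s decision d is free just in case it is actually produced by some mechanism m such that, in at least one possible world in which that same mechanism operates and there is sufficient reason to do otherwise, a does otherwise.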

Pereboom challenges this with a manipulation argument. This is a species of argument that starts with the simple supposition that if a decision by one agent (A) has been manipulated into existence by another agent (B), there is no way in which we can say that this decision has been “freely” made by A. On the contrary, it is produced by something that is beyond A’s control. So if I grab your hand, place a knife in it, and then proceed to use your arm to stab another person, the act of stabbing is clearly not a product of your free will. It is a product of my manipulation. Manipulation arguments then simply add to this starting point the more controversial claim that, on determinism, all decisions are effectively manipulated into existence by factors beyond an agent’s control. Consequently, none of them can be said to be free.

To put it more formally, Pereboom adopts the following argument against compatibilism:


  • (1) If one agent's decision is manipulated by another agent, then that first agent's action is not freely willed.

  • (2) There is no relevant difference between manipulation by another agent and causation by factors external to the agent.

  • (3) On determinism, all of an agent's actions are determined (causally influenced) by at least some factors beyond that agent's control.

  • (4) Therefore, on determinism, no agent can be said to freely will their actions (or be morally responsible for them). (from 1, 2 and 3)

  • (5) Compatibilism holds that free will and moral responsibility are compatible with determinism.

  • (6) Therefore compatibilism must be false. (from 4 and 5)


Most of this argument looks right to me. I’m certainly inclined to agree with premise (1). If I implanted a device in your brain that manipulated your motor cortex in such a way that it allowed me to bring about complex actions through the medium of your body, while at the same time convincing you that you were the one making the decisions, I cannot see how you could fairly be said to have freely willed those decisions. I also think premises (3) and (5) are uncontroversial statements of determinism and compatibilism, and that (4) and (6) follow validly from the other premises. That leaves premise (2) as the critical premise. This is the premise that Pereboom’s four case argument is intended to defend.
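Before looking at that defence, it may help to see the bare logical skeleton of the argument. What follows is a minimal formalisation of my own, not Pereboom’s; the symbols M, F and D are shorthands introduced purely for illustration:

\begin{align*}
&M(x): \text{$x$'s decision is produced by factors beyond $x$'s control}\\
&F(x): \text{$x$ freely wills (and is responsible for) that decision}\\
&D: \text{determinism is true}\\[4pt]
&(1)\;\; \forall x\,(M(x) \rightarrow \neg F(x))\\
&(2{+}3)\;\; D \rightarrow \forall x\, M(x)\\
&(4)\;\; D \rightarrow \forall x\, \neg F(x) \qquad \text{[from (1) and (2+3)]}\\
&(5)\;\; \text{Compatibilism: } D \text{ and } \exists x\, F(x) \text{ can be jointly true}\\
&(6)\;\; \neg\,\text{Compatibilism} \qquad \text{[from (4) and (5), reading (4) as a necessary truth]}
\end{align*}

As the bracketed note to (6) indicates, the final step goes through only if (4) holds in every world in which determinism is true, which seems to be the natural way of reading premises (1) and (2).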


2. Pereboom’s Four Case Argument
Pereboom’s four case argument works by drawing analogies between four separate hypothetical cases. In each of the four cases, an agent (Professor Plum) decides to kill another person (White) for his own personal advantage. In each of the four cases, Professor Plum’s decision is produced by a mechanism that meets the conditions set down by the different compatibilist accounts outlined above. Despite this, in each of the four cases, the claim made by Pereboom is that Professor Plum is not responsible for his decision. The cases differ in the amount of manipulation involved, starting out with a very clear case of external manipulation by another agent, and moving on to more subtle forms that seem to be akin to standard compatibilist accounts of responsible decision-making. The heart of Pereboom’s argument is that these differences in the degree and type of external manipulation are irrelevant to Plum’s lack of responsibility. This then allows him to reach his desired conclusion: there is no relevant difference between manipulation by another agent and causation by factors external to an agent.

To put it more succinctly, Pereboom argues that if there is no responsibility in the first case (Case 1), and there are no relevant differences between Case 1 and all the subsequent ones (Cases 2, 3 and 4), then responsibility (and by proxy free will) is not compatible with determinism. Of course, it is impossible to appreciate this without going through the cases themselves. So here they are (these descriptions are abbreviated from the original):

Case 1: A team of neuroscientists has the ability to manipulate Plum’s neural states at any time by radio-like technology. In this particular case, they do so by pressing a button just before he begins to reason about his situation, which they know will produce in him a neural state that realizes a strongly egoistic reasoning process, which the neuroscientists know will deterministically result in his decision to kill White. Plum would not have killed White had the neuroscientists not intervened, since his reasoning would then not have been sufficiently egoistic to produce this decision. But otherwise Plum’s decision meets the requirements set down by standard compatibilist accounts of free will (i.e. it is consistent with his character, reflexively endorsed by his second-order desires, and produced by a mechanism that is sensitive to reasons, both moral and prudential).


It seems straightforward enough to say that Plum’s decision is not freely willed in this case. It is the result of external manipulation.

Case 2: Plum is just like an ordinary human being, except that a team of neuroscientists programmed him at the beginning of his life so that his reasoning is often but not always egoistic (as in Case 1), and at times strongly so, with the intended consequence that in his current circumstances he is causally determined to engage in the egoistic reasons-responsive process of deliberation and to have the set of first and second-order desires that result in his decision to kill White. The neural realization of his reasoning process and of his decision is exactly the same as it is in Case 1 (although their causal histories are different).


Again, it seems straightforward enough to say that Plum’s decision is not freely willed in this case. The only difference between this case and Case 1 is that the manipulation took place at an earlier moment in time (during his initial development). But that can’t be a relevant difference. At least not when it comes to assessing free will and responsibility.

Case 3: Plum is an ordinary human being, except that the training practices of his community causally determined the nature of his deliberative reasoning process so that they are frequently but not exclusively rationally egoistic (the resulting nature of his deliberative reasoning processes are exactly as they are in Cases 1 and 2). This training was completed before he developed the ability to prevent or alter these practices. Due to the aspect of his character produced by this training, in his present circumstances he is causally determined to engage in the strongly egoistic reasons-responsive process of deliberation and to have the first and second-order desires that issue in his decision to kill White.


This case is like Case 2. The only difference is that it removes the technological manipulation by neuroscientists and replaces it with cultural and behavioural manipulation. Pereboom’s claim is that whether the manipulation is carried out by some brain implant or programming device, or by these more traditional methods, should play no part in our moral assessment. If technological manipulation undermines free will and responsibility, so too should cultural and behavioural manipulation. Once again Plum is not responsible for his action.

Case 4: Everything that happens in our universe is causally determined by virtue of its past states together with the laws of nature. Plum is an ordinary human being, raised in normal circumstances, and again his reasoning processes are frequently but not exclusively egoistic, and sometimes strongly so (as in Cases 1-3). His decision to kill White issues from his strongly egoistic but reasons-responsive process of deliberation, and he has the specified first and second-order desires. The neural realization of Plum’s reasoning process and decision is exactly as it is in Cases 1-3.


Okay, so this is where things get really interesting. The idea is that Case 4 is like Case 3, only there is no explicit manipulation by another set of agents (neuroscientists or cultural peers). No doubt some environmental manipulation is taking place — we are all, on determinism, products of our historical and contemporary environments — but we don’t know exactly what it is. This is pretty much the view of all causal determinists in the present age. Nevertheless, the differences between Case 3 and Case 4 seem more significant than the differences between the other cases. Bridging the gap between our judgment in Case 3 and Case 4 is crucial to Pereboom’s overall case. How does he do it?

He focuses on two possibly significant differences between the cases. The first is simply our ignorance of the causal history. Could this be a relevant difference? Obviously not. We could be ignorant of the details of a causal history that, as a matter of fact, involved direct neural manipulation of a particular decision; our ignorance of those details wouldn’t seem sufficient to make the agent responsible. The second difference is the fact that the manipulation in Case 4 is not being done by a group of other agents. This smells like a more significant difference, but Pereboom argues that, on reflection, it can’t be relevant. To prove his point, he asks us to reimagine Cases 1 and 2 involving the neural manipulation devices. In our reimagining we are to suppose that the manipulation devices are “spontaneously generated”, i.e. not the product of an intelligent designer or anything of the sort. Surely our finding such a device in someone’s brain would entitle us to deem them non-responsible? If so, then the absence of a specific external agent manipulating Plum’s choices in Case 4 cannot be a relevant difference. And if that’s right, then non-responsibility holds across all four cases. Compatibilism is thereby undermined.


3. Does the argument work?
Pereboom’s four case argument is much-discussed, but is it any good? I have some concerns. I think the gap between Cases 3 and 4 is more difficult to bridge than Pereboom lets on. In particular, I’m not a fan of his reliance on thought experiments involving spontaneous neural manipulation devices. For one thing, I find it very difficult to imagine finding such a device in someone’s brain and not being drawn to the conclusion that it was the product of another agent’s direct manipulation. For another thing, and as Steve Maitzen pointed out in the comments section to an earlier post, it seems like thought experiments involving such devices surreptitiously presuppose a difference that Pereboom’s argument is trying to deny. The presence of such a device would be anomalous. That is to say, it would be an atypical cause of someone’s behaviour. It is this atypicality which is likely to drive the intuitive conclusion that Plum is not responsible. But that doesn’t seem to threaten the possibility of responsibility in the more typical cases in which compatibilist conditions are satisfied.

None of this is to suggest that I am not sympathetic to Pereboom’s argument. I am. I too feel that there is something dubious about individualised judgments of moral responsibility in a world in which all decisions can ultimately be traced to causal factors external to the individual decision-maker. But I think there are more persuasive arguments for such scepticism. For instance, I find Bruce Waller’s argument against moral responsibility (which I’ve discussed before) slightly more compelling. Waller’s argument works from a fundamental principle of equality in our treatment of persons and then highlights inequalities in the initial causal conditions of individual behaviour which can have important long-term effects. His suggestion is that those initial inequalities create problems for moral practices like the ascription of responsibility. That said, I accept that Waller’s argument has some doubtful elements too, and that the precise implications of free will scepticism for our moral practices require a longer treatment, especially given that Pereboom himself accepts that most of those practices remain intact in the face of his scepticism.

But those are just my criticisms. There are many others discussed in Pereboom’s book, and he is nothing if not meticulous in his treatment of his critics. I can only give an inkling of what he does here. He starts by noting that there are two basic responses to his argument: the hard line response, which tries to argue that Plum is responsible in all four cases; and the soft line response, which tries to argue that there are differences between the cases which undermine Pereboom’s argument. My comments above are examples of the latter because they suggest that there are important differences between Cases 3 and 4.

An example of a hard line response can be found in the work of Michael McKenna. He says we should examine the four cases by initially being agnostic about the implications of determinism for free will and responsibility. In other words, we should not (prematurely) assume that determinism undermines responsibility. If we do this, then we would simply be doubtful about Plum’s responsibility in Case 4, and we would transfer that doubt over to Cases 3, 2 and 1. The analogy between the four cases consequently works in the opposite direction to that supposed by Pereboom.

Pereboom has a response to this. He thinks McKenna’s agnosticism is too strong. McKenna seems to assume that we should start by being confirmed agnostics about responsibility in Case 4 and then transfer that agnosticism back to the other cases. But really we should be neutral inquirers. That is, we should start off being unsure about the effect of determinism on free will and responsibility. This means being open both to the possibility that determinism undermines free will and responsibility and to the possibility that it does not. If we start with this attitude, then McKenna’s desired transference of agnosticism is much less compelling.

I tend to think Pereboom is right about this — we should be neutral inquirers, not confirmed agnostics — but maybe neutrality is not the right stance. As Steve Maitzen also pointed out to me, why should the truth of a metaphysical thesis like determinism have any implications for an ordinary commonsense moral practice like holding others responsible for their actions? I’m not sure exactly what Steve was getting at with this comment, but I’m guessing it is something to do with the fact that our beliefs about ordinary moral practices are more solid than beliefs about recondite and abstruse metaphysical theses like determinism. But what does this mean in practice? Does it mean that no metaphysical claim can unseat our ordinary moral beliefs? Or just that it would take a lot of evidence to unseat such a belief? I can’t answer those questions on his behalf.


Conclusion
This has gone on for too long. It’s time to wrap things up. To briefly summarise, Pereboom’s four case argument tries to undermine compatibilist theories of free will. It does so by suggesting that there is no relevant difference between cases involving manipulation of decision-making by other agents and cases of ordinary compatibilist causation of behaviour. Since no one thinks we exercise freedom in manipulation cases, it follows that we don’t exercise freedom in cases of ordinary compatibilist causation of behaviour either.

There are several responses to Pereboom’s argument. Hard-line responses try to argue that responsibility (or, at least, agnosticism about the effects of determinism on responsibility) holds across all four of Pereboom’s cases. Soft-line responses try to show that there are important differences between the four cases. For my part, I think that there are differences between Cases 3 and 4 that are harder to smooth over than Pereboom lets on. I am also inclined to think that there are more persuasive arguments for free will (and responsibility) scepticism.