Monday, July 17, 2017

Episode #26 - Behan on Technopolitics and the Automation of the State


In this episode I talk to Anthony Behan. Anthony is a technologist with an interest in the political and legal aspects of technology. We have a wide-ranging discussion about the automation of the law and the politics of technology.  The conversation is based on Anthony's thesis ‘The Politics of Technology: An Assessment of the Barriers to Law Enforcement Automation in Ireland’, (a link to which is available in the links section below).

You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (the RSS feed is here).



Show Notes

  • 0:00 - Introduction
  • 2:35 - The relationship between technology and humanity
  • 5:25 - Technology and the legitimacy of the state
  • 8:15 - Is the state a kind of technology?
  • 13:20 - Does technology have a political orientation?
  • 20:20 - Automated traffic monitoring as a case study
  • 24:40 - Studying automated traffic monitoring in Ireland
  • 30:30 - The mismatch between technology and legal procedure
  • 33:58 - Does technology create new forms of governance or does it just make old forms more efficient?
  • 39:40 - The problem of discretion
  • 43:45 - The feminist gap in the debate about the automation of the state
  • 49:15 - A mindful approach to automation
  • 53:00 - Postcolonialism and resistance to automation
 

Relevant Links

 

Saturday, July 15, 2017

Slaves to the Machine: Understanding the Paradox of Transhumanism




TL;DR: This is the text of a keynote lecture I delivered at the 'Transcending Humanity' conference at Tübingen University on the 13th July 2017. It discusses the alleged tension between the transhumanist ideal of biological freedom and the glorification of technological means to that freedom. In the talk, I argue that the tension is superficial because the concept of freedom is multidimensional.


1. The Paradox of Transhumanism
In September of 1960, in the official journal of the American Rocket Society (now known as the American Institute of Aeronautics and Astronautics), Manfred E Clynes and Nathan S Kline published a ground-breaking article. Manfred Clynes was an Austrian-born, Australian-raised polymath. He was educated in engineering and music and remains an original and creative inventor, with over 40 patents to his name, as well as a competent concert pianist. Nathan Kline was a Manhattan-based psychopharmacologist, one of the pioneers of the field, responsible for developing drugs to treat schizophrenia and depression. Their joint article was something of a diversion from their main lines of research, but it has arguably had more cultural impact than the rest of their work put together.

To understand it, we need to understand the cultural context in which it was written. September 1960 was the height of the Cold War. The Soviet Union had kick-started the space race three years earlier with the successful launch of its two Sputnik satellites into Earth's orbit. The United States was scrambling to make up lost ground. The best and brightest scientific talent was being marshalled to the cause. Clynes and Kline's article was a contribution to the space race effort. But instead of offering practical proposals for getting man into space, they offered a more abstract, conceptual perspective. They looked at the biological challenge of spaceflight. The problem, as they described it, was that humans were not biologically adapted to spaceflight. They could not breathe outside the earth's atmosphere, and once beyond the earth's magnetosphere they would be bombarded by nasty solar radiation. In short, humans were not 'free' to explore space.

What could be done to solve the problem? This is where Clynes and Kline made their bold proposal. The standard approach was to create mini-environments in space that are relatively congenial to human beings. Hence, the oxygen-filled spaceship and the hyperprotective spacesuit. This would suffice for short-term compatibility between fragile human biological tissue and the harsh environment of space, but it would be a precarious solution at best:

Artificial atmospheres encapsulated in some sort of enclosure constitute only temporizing, and dangerous temporizing at that, since we place ourselves in the same position as a fish taking a small quantity of water along with him to live on land. The bubble all too easily bursts.

If we ever wanted to do more in space — if we wanted to travel to the farthest reaches of our solar system (and beyond) — a different approach would be needed. We would have to alter our physiology through the creation of technological substitutes and extensions of our innate biology:

If man attempts partial adaptation to space conditions, instead of insisting on carrying his whole environment along with him, a number of new possibilities appear. One is then led to think about the incorporation of integral exogenous devices to bring about the biological changes which might be necessary in man’s homeostatic mechanisms to allow him to live in space qua natura.

This is where Clynes and Kline made their most famous contribution to our culture. What should we call a human being that was technologically enhanced so as to adapt to the environment of space? Their suggested neologism was the “cyborg” - the cybernetic organism. This was the first recorded use of the term — a term that now generates over 40 million results on Google.

Modern transhumanists share something with Clynes and Kline. They are not interested in winning the Cold War nor, necessarily, in exploring the outer reaches of space (though some are), but they are acutely aware of the limitations of human biology. They agree with Clynes and Kline in thinking that, given our current biological predicament, we are 'unfree'. They wish to use technology to escape from this predicament - to release us from the shackles of evolution. Consequently, transhumanism is frequently understood as a liberation movement — complete with its own liberation theology, according to some critics — that sees technology as an instrument of freedom. Attend any transhumanist conference, or read any transhumanist article, and you will become palpably aware of this. You can't escape the breathless enthusiasm with which transhumanists approach the latest scientific research in biotechnology, genetics, robotics and artificial intelligence. They eagerly await the critical technologies that will enable us to escape from our biological prison.

But this enthusiasm seems to entail a strange paradox. The journalist Mark O’Connell captures it well in his recent book To Be a Machine. Having lived with, observed, and interviewed some of the leading figures in the transhumanist movement over the past couple of years, O’Connell could not help but be disturbed by the faith they placed in technology:

[T]ranshumanism is a liberation movement advocating nothing less than a total emancipation from biology itself. There is another way of seeing this, an equal and opposite interpretation, which is that this apparent liberation would in reality be nothing less than a final and total enslavement to technology. 
(O’Connell 2017, 6)

This then is the 'paradox of transhumanism': if we want to free ourselves in the manner envisaged by contemporary transhumanists, we must swap our biological prison for a technological one.

I have to say I sympathise with this understanding of the paradox. In the past five or six years, I have developed an increasingly ambivalent relationship with technology. Where once I saw technology as a tool that opened up new vistas of potentiality, I now see more sinister forces gathering on the horizon. In my own work I have written about the 'threat of algocracy', i.e. the threat to democratic processes if humans end up being governed entirely by computer-programmed algorithms. I see this as part and parcel of the paradox identified by O'Connell. After all, the machines to which we might be enslaved speak the language of the algorithm. If we are to be their slaves, it will be an algorithmic form of enslavement.

So what I want to do in the remainder of this talk is to probe the paradox of transhumanism from several different angles. Specifically, I want to ask and answer the following three questions:

(1) How should we understand the kind of freedom desired by transhumanists?
(2) How might this lead to our technological enslavement?
(3) Can the paradox be resolved?

In the process of answering these questions, I will make one basic argument: human freedom is a complex, multidimensional phenomenon. Perfect freedom is a practical (possibly a metaphysical) impossibility. So to say that transhumanism entails a paradox is misleading. Transhumanism entails a tradeoff between different sources and forms of unfreedom. The question is whether this tradeoff is better or worse than our current predicament.


2. What is Transhumanist Freedom Anyway?
How should we understand the transhumanist desire for freedom? Let’s start by considering the nature of freedom itself. Broadly speaking, there are two concepts of freedom that are used in philosophical discourse:

Metaphysical Freedom: This is freedom in its purest sense. This is the actual ability to make choices about our lives without external determination or interference. When people discuss this form of freedom they often use the term ‘freedom of will’ or ‘free will’ and they will debate different theories such as libertarianism, compatibilism and incompatibilism. In order to have this type of freedom, two things are important: (i) the ability to do otherwise than we might have done (the alternative possibilities condition) and (ii) the ability to be the source of our own decisions (the sourcehood condition). There are many different interpretations of both conditions, and many different views on which is more important.

Political Freedom: This is freedom in a more restricted sense. This is the ability to make choices about our lives that are authentic representations of our own preferences, without interference or determination from other human beings, whether they be acting individually or collectively (through institutions or governments). This is the kind of freedom that animates most political debates about ‘liberty’, ‘freedom of speech’, ‘freedom of conscience’ and so on.

Obviously, metaphysical freedom is the more basic category. Political freedom is a sub-category of metaphysical freedom. This means it is possible for us to have political freedom without having metaphysical freedom. My general feeling is that you either believe in metaphysical freedom or you don't. That is to say, you either believe that we have free will in its purest sense, or you don't (or you think we have to redefine and reconceptualise the concept of free will to such an extent that it becomes indistinguishable from other 'lesser forms' of freedom). This is because metaphysical freedom seems to require an almost total absence of dependency on external causal forces, and it is really only if you believe in the idea of non-natural souls or agents that you can get your head around the total absence of such dependency. (Put a bookmark in that idea for now; we will return to it later).

Political freedom is different. Even people who are deeply sceptical about metaphysical freedom tend to be more optimistic about the possibility of limiting interference or determination by other external agents. Thus, it is possible to be politically free even if it is not possible to be metaphysically free. It is worth dwelling on the different types of political freedom for a moment; doing so will pay dividends later on when we look at transhumanist freedom and the enslavement to technology. Following Isaiah Berlin's classic work, we can distinguish between positive and negative senses of political freedom. In the positive sense, political freedom requires that individuals be provided with the means to act in a way that is truly consistent with their own preferences (and so forth). In the negative sense, political freedom requires the absence of interference or limitation by other agents.

I’m going to set the positive sense of freedom to one side for the remainder of this talk, though you may be able to detect its ghostly presence in some aspects of the discussion. For now, I want to further clarify the negative sense. There are two leading theories of political freedom in the negative sense. The distinction between the two can be explained by reference to two famous historical thought experiments. The first is:

The Highwayman: You are living in 17th century Great Britain. You are travelling by stagecoach when you are waylaid by a masked ‘highwayman’. The highwayman points his pistol at you and offers you a deal: ‘your money or your life?’* You give him your money and he lets you on your way.

Here is the question: did you give him your money freely? According to proponents of a theory known as 'freedom as non-interference', you did not. The highwayman interfered with your choice by coercing you into giving him the money: he exerted some active influence over your will. Freedom as non-interference is a very popular and influential theory in contemporary liberal political theory, but some people argue that it doesn't cover everything that should be covered by a political concept of freedom. This is drawn out by the second thought experiment.

The Happy Slave: You are a slave, legally owned by another human being. But you are a happy slave. Your master treats you well and, as luck would have it, what he wants you to do lines up with what you prefer to do. Consequently, he never interferes with your choices. You live in harmony with one another.

Here's the question: are you free? The obvious answer is 'no'. Indeed, life as a slave is the paradigm of unfreedom. But, interestingly, this is a type of unfreedom that is not captured by freedom as non-interference. After all, in the example just given there is never any interference with your actions. This is where the second theory of negative freedom comes into play. According to proponents of something called 'freedom as non-domination', we lack political freedom if we live under the dominion of another agent. In other words, we lack it if we have to ingratiate ourselves with them and rely on their good will to get by. The problem with the happy slave is that, no matter how happy he may be, he lives in a state of domination.

Okay, we covered a lot of conceptual ground just there. Let's get our bearings by drawing a map of the territory. We start with the general concept of metaphysical freedom — the lack of causally determining influences on the human will — and then move down to the narrower political concept of freedom. Political freedom is necessary but not sufficient for metaphysical freedom. Political freedom comes in positive and negative forms, with two major specifications of negative freedom: freedom as non-interference (FNI) and freedom as non-domination (FND).




The question I now want to turn to is how we should understand the transhumanist liberation project. How does it fit into this conceptual map? The position I will defend is that transhumanist freedom is a distinct sub-category of freedom. It is not full-blown metaphysical freedom (this is important, for reasons we shall get back to later on) and it is not just another form of political freedom. It is, rather, adjacent to and distinct from political freedom.

Transhumanists are concerned with limitations on human freedom that are grounded in our biology (this links back, once more, to Clynes and Kline's project). Thus, transhumanist freedom is 'biological freedom':

Biological Freedom: The ability to make choices about our lives without being constrained by the limitations that are inherent in our biological** constitution.

What kinds of biological limitations concern transhumanists? David Pearce, one of the co-founders of the World Transhumanist Association (now Humanity+), argues that transhumanists are motivated by the three 'supers': (i) superlongevity, i.e. the desire to have extra-long lives; (ii) superintelligence, i.e. the desire to be smarter than we currently are; and (iii) superwellbeing, i.e. the desire to live in a state of heightened bliss. The desire for each of these three 'supers' stems from a different biological limitation. Superlongevity is motivated by the biological limitation of death: one of the unfortunate facts about our current biological predicament is that we have been equipped with biological machinery that tends to decay and cease functioning after about 80 years. Superintelligence is motivated by the information-processing limitations of the human brain: our brains are marvels of evolution, but they function in odd ways, limiting our knowledge and understanding of the world around us. And superwellbeing is motivated by the biological constraints on happiness. This is Pearce's unique contribution to the transhumanist debate. He notes that some people are equipped with lower biological baselines of wellbeing (e.g. people who suffer from depression). This puts a limit on how happy they can be. We should try to overcome this limit.

There are other forms of biological freedom in the transhumanist movement. A prominent sub-section of the transhumanist community is interested in something called ‘morphological freedom’, which is essentially freedom from biological form. Fans of morphological freedom want to change their physical constitution so that they can experience different forms of physical embodiment. The slide shows some examples of this.

For what it's worth, I think characterising transhumanism as a liberation movement with the concept of biological freedom at its core is better than alternative characterisations, such as viewing it as a religion or a social movement concerned with technological change per se.

There are two advantages to characterising transhumanism in this way. The first is that it is reasonably pluralistic: it covers most of the dominant strands within the transhumanist community, without necessarily committing to a singular view of what the good transhumanist life consists of. If you ask a transhumanist what they want, beyond the freedom from biological constraint, you’ll get a lot of different views. The second is that it places transhumanism within an interesting historical arc. It has long been argued — by James Hughes in particular — that transhumanism is a continuation of the Enlightenment project. Indeed, some of the leading figures in the Enlightenment project were proto-transhumanists: the Marquis de Condorcet being the famous example. Where the Enlightenment project concerned itself with developing freedom through the celebration of reason and the desire for political change — i.e. to address the sources of unfreedom that arose from the behaviour of other human beings — the transhumanist project concerns itself with the next logical step in the march towards freedom. Transhumanists are, in essence, saying ‘Look we have got the basic hang of political freedom — we know how other humans limit us and we have plausible political models for overcoming those limits — now let’s focus on another major source of unfreedom: the biological one.’

Let's take a breath here. The image below places the biological concept of freedom into the conceptual map of freedom from earlier on. The argument to this point is that transhumanism is concerned with a distinct type of freedom, namely: biological freedom. This type of freedom requires that we overcome biological limitations, particularly those associated with death, intelligence and well-being. The next question is whether, in their zeal to overcome those limitations, transhumanists make a Faustian pact with technology.





3. Are we becoming slaves to the machine?
The transhumanist hope for achieving biological freedom certainly places an inordinate amount of faith in technology. On the face of it, this makes a lot of sense. Humans have been using technology to overcome our biological limitations for quite some time. One of the ancient cousins of modern-day Homo sapiens is Homo habilis. Homo habilis used primitive stone tools to butcher and skin animals, thereby overcoming the biological limitations of hands, feet and teeth. We have been elaborating on this same theme ever since. From the birth of agriculture to the dawn of the computer age, we have been using technology to accentuate and extend our biological capacities.

What is interesting about the technological developments thus far is that they have generally left our basic biological form unchanged. Technology is largely something that is external to our bodies, something that we use to facilitate and mediate our interactions with the world. This is as true of the Acheulean handaxe as it is of the smartphone. Of course, this isn’t the full picture. Some of our technological developments have involved tinkering with our biological form. Consider vaccination: this involves reprogramming the body’s immune system. Likewise there are some prosthetic technologies — artificial limbs, cochlear implants, pacemakers, deep brain stimulators — that involve replacing or augmenting biological systems. These technological developments are the first step towards the creation of literal cyborgs (ones that Clynes and Kline would have embraced). Still, the developments on this front have been relatively modest, with most of the effort focused on restoring functionality to those who have lost it, and not on transcending limitations in the manner desired by transhumanists.

So this is where we are currently at. We have made impressive gains in the use of externalising technologies to augment and transcend human biology; we have made modest gains in the use of internal technologies. Transhumanists would like to see more of this happening, and at a faster pace. Where then is the paradox of transhumanism? In what sense are we trading a biological prison for a technological one? We can answer that question in two stages. First, by considering in more detail the different possible relations between humans and technology, and then by considering the various ways in which those relations can compromise freedom.

There have been many attempts to categorise human-technology relationships over the years. I don't claim that the following categorisation is the final and definitive one, merely that it captures something important for present purposes. My suggestion is that we can categorise human-technology relations along two major dimensions: (i) the internal-external dimension and (ii) the complementary-competitive dimension. The internal-external dimension should be straightforward enough as it captures the distinctions mentioned above. It is a true dimension, continuous rather than discrete in form. In other words, you cannot always neatly categorise a technology as being internal or external to our biology. Proponents of distributed and extended cognition, for example, will insist that humans sometimes form fully integrated systems with our 'external' technologies, thus on occasion collapsing the internal-external distinction.

The complementary-competitive dimension is a little bit more opaque and possibly more discontinuous. It comes from the work of the complexity theorist David Krakauer, who has developed it specifically in relation to modern computer technology and how it differs from historical forms of technological enhancement. As he sees it, most of our historical technologies, be they handaxes, spades, abaci or whatever, have a tendency to complement human biology. In other words, they enable humans to form beneficial partnerships with technology, oftentimes extending their innate biological capacities in the process. Thus, using a handaxe will strengthen your arm muscles and using an abacus will strengthen your cognitive ones. Things started to change with the Industrial Revolution, when humans created machines that fully replaced human physical labour. They have started to change even more with the advent of computer technology that can fully replace human cognitive labour. Thus it seems that technology no longer simply complements humanity; it competes with us.

I think what Krakauer says about external technologies also applies equally well to internal technologies. Some internal technologies try to work with our innate biological capacities, extending our powers and enabling greater insight and understanding. A perceptual implant like an artificial retina or cochlear implant is a good example of this. Contrariwise, there are some internal technologies that effectively bypass our innate biological capacities, carrying out tasks on our behalf, without any direct or meaningful input from us. Some brain implants seem to work like this, radically altering our behaviour without our direct control or input. They are like mini autonomous robots implanted into our skulls, taking over from our biology, not complementing it.

I could go on, but this should suffice for understanding the two dimensions along which we can categorise our relationships with technology. Now, even though I said that these could be viewed as true dimensions (i.e. as continuous rather than discrete in nature), for the purposes of simplification, I want to use the two dimensions to construct a two-by-two matrix for categorising our relationships with technology.



This categorisation system muddies the waters somewhat from our initial, optimistic view of technology-as-tool. It still seems to be the case that technology can help us to transcend or overcome our biological limitations. We can use computers, the internet and artificial intelligence to greatly enhance and extend our knowledge and understanding of the world. We can use technologies to produce more valuable things and to get more of what we want, thereby enhancing our well-being. We could also, potentially, use technology to extend our lives, either by generating biotechnological breakthroughs that enable cell-repair and preservation (nanorobots in the bloodstream anyone?), or, more fancifully, by fusing ourselves with machines to become complete cyborgs. This could be achieved, in part, through external technologies but, more likely in the long-term, through the use of internal technologies that directly fuse with our biology. At this point we will reach an apotheosis in our relationship with technology, becoming one with the machine. In this sense, technology really does seem to hold out the possibility of achieving biological freedom.

The mud in the water comes from the fact that this reliance on machines leads to new forms of limitation and dependency, and hence new forms of unfreedom. This is where the paradox of transhumanism arises. If we want to take advantage of the new powers and abilities afforded to us by machines, it seems like we must accept technological interference, manipulation, and domination.

There are many ways in which technology might be a source of unfreedom. For illustrative purposes, I’ll just mention three:

Technological coercion: This arises when conditions are attached to the use of technology. In other words, we only get to take advantage of its powers if we explicitly or tacitly agree to forgo something else. We see this happening right now. Think about AI assistants or social media services or fitness tracking devices. They arguably improve our lives in various ways, but we are often only allowed to use them if we agree to give up something important (e.g. our privacy) or submit to something unpleasant (e.g. relentless advertising). Sometimes the bargain may involve genuine coercion — e.g. an insurance company promising you lower premiums if you agree to wear a health monitoring bracelet at all times — and sometimes the coercive effect may be more subtle — e.g. Facebook offering you an endless stream of distracting information in return for personal information that it can sell to advertisers. But in both cases there is a subtle interference with your ability to make choices for yourself.

Technological domination: This arises when technology provides genuine benefits to us without actually interfering with our choices, but nevertheless exerts a dominating influence over our lives because it could be used to interfere with us if we step out of line. Some people argue that our current situation of mass surveillance leads to technological domination. As we are now all too aware, our digital devices are constantly tracking and surveilling our every move. The information gathered is used for various purposes: to grant access to credit, to push advertising, to monitor terrorist activities, to check our mental health and emotional well-being. Some people embrace this digital panopticon, arguing that it can be used for great good. Sebastian Thrun, the co-founder of Google X, for example imagines a future in which we are constantly monitored for medical diagnostic purposes. He thinks this could help us to avoid bad health outcomes. But the pessimists will argue that living in a digital panopticon is akin to living as a happy slave. You have the illusion of freedom, nothing more.

Technological dependency/vulnerability: This arises when we rely too heavily on technology to make choices on our behalf or when we become helpless without its assistance. This undermines our freedom because it effectively drains our capacity for self-determination and resiliency. This might be the most serious form of technological unfreedom, and the one most commonly discussed. We all probably have a vague sense of it happening too. Many of us feel addicted to our devices, and helpless without them. A clear example of this dependency problem would be the over-reliance of people on services like Google Maps. There are many stories of people who have got into trouble by trusting the information provided to them by satellite navigation systems, even when it was contradicted by what was right before their eyes. Technology critics like Nicholas Carr argue that this is leading to cognitive degeneration (i.e. technology is actively degrading our biological mental capacities). More alarmingly, cybersecurity experts like Marc Goodman argue that it is leading to a situation of extreme vulnerability. Goodman uses the language of the 'singularity', beloved by technology enthusiasts, to make his point. He argues that because most technology is now networked, and because, with the rise of the internet of things, every object in the world is being slowly added to that network, everything is potentially hackable and corruptible. This is leading to a potential singularity of crime, where the frequency and magnitude of criminal attacks will completely overwhelm us. We will never not be victims of criminal attack. If that doesn't compromise our freedom, I don't know what does.

These forms of technological unfreedom can arise from internal and external technologies, as well as from complementary and competitive technologies. But the potential impact is much greater as we move away from external, complementary technologies towards internal, competitive technologies. With external-complementary technologies there is always the possibility of decoupling from the technological systems that compromise our freedom. With internal-competitive technologies this becomes less possible. Since transhumanism is often thought to be synonymous with the drive toward more internalised forms of technology, and since most of the contemporary forms of internal technology are quasi-competitive in nature, you can see how the alleged paradox of transhumanism arises. We are moving down and to the right in our matrix of technological relations and this engenders the Faustian pact outlined at the start.



Before I move on to consider ways in which this paradox can be resolved, I want to briefly return to the diagram I sketched earlier on in which I arranged the metaphysical, political, and biological concepts of freedom. To that diagram we can now add another concept of freedom: technological freedom, i.e. the ability to make choices and decisions for oneself without interference from, domination by, or limitation by technological forces. But where exactly should this new concept of freedom be placed? Is it a distinctive type of freedom or is it a sub-freedom of political freedom?

This may be a question of little importance to most readers, but it matters from the perspective of conceptual purity. Some people have tried to argue that technological freedom is another form of political freedom. They do so because some of the problems that technology poses for freedom are quite similar to the political problems of freedom. This is because technology is still, often, a tool used by other powerful people in order to manipulate, coerce and dominate. Nevertheless, people who have taken this view have also noted problems that arise when you view technological unfreedom as just another form of political unfreedom. Technological domination, for example, often doesn't emanate from a single, discrete agent or institution, as does political domination. Technological domination is, according to some writers, 'functionally agentless'. Something similar is true of technological coercion. It is not directly analogous to the simple interaction between the highwayman and his victim. It's more subtle and insidious. Finally, technological dependency doesn't seem to involve anything like the traditional forms of political unfreedom. For these reasons, I think it is best to understand technological unfreedom as a distinct category of unfreedom, one that occasionally overlaps with the political form, but is dissociable from it.




4. Dissolving the Paradox
Now that we have a much clearer understanding of the paradox (and how it might arise) we turn to the final and most important question: can the paradox be resolved? I want to close by making four arguments that respond to this question.

First, I want to argue that there is no intrinsic paradox of transhumanism. In other words, there is nothing in the transhumanist view that necessarily entails or requires that we substitute biological unfreedom for technological unfreedom. The tension between biology and technology is contingent. Go back to the two-by-two matrix I sketched in the previous section. I used this to explain the alleged paradox by arguing that the transhumanist dilemma arises from the impulse/tendency to move down and to the right in our relationships with technology, i.e. to move towards internal-competitive technologies. But that should have struck you as a pretty odd thing to say. There is no reason why transhumanists should necessarily want to move in that direction. Indeed, if anything, their preferred quadrant is the bottom-left one (i.e. the internal-complementary one). After all, they want to preserve and extend what is best about humanity, using technology to compensate for the limitations in our biology, not to completely replace us with machines (to the extent that they wish to become cyborgs or uploaded minds, they definitely want to preserve their sense of self). So they don't necessarily embrace extreme technological dependency and vulnerability. The problem arises from the fact that moving down and to the left is less accessible than moving down and to the right. The current historical moment is one in which the most impressive technological gains are coming from artificial intelligence and robotics, the quintessential competitive technologies, and not from, say, more complementary biotechnologies. If our path to biological freedom did not force us to rely on such technologies, transhumanists would, I think, be happier. Admittedly, this is the kind of argument that will only appeal to a philosopher — those of us who love differentiating the necessary from the contingent — but it is important nonetheless.

The second argument I want to make is that there is no such thing as perfect freedom. Pure metaphysical freedom — i.e. freedom from all constraints, limitations, manipulations and interferences — is impossible. Furthermore, even if it were possible, it would not be desirable. If we are to be actors in the world, we must be subject to that world. We must be somehow affected or influenced by the causal forces in the world around us. We can never completely escape them. This is important because our sense of self and our sense of value is bound up with constraint and limitation. It is because I made particular choices at particular times that I am who I am. It is because I am forced to choose that my choices have value. If it didn’t matter what choices I made at a particular moment, if I could always rewind the clock and change what I did, this value would be lost. Nothing would really matter because everything would be revisable.

This then leads to the third argument, which is that whenever we think about advancing the cause of freedom, we must think in terms of trade-offs, not absolutes. Since you cannot avoid all possible constraints, limitations, manipulations or interferences, you must ask yourself: which mix of those things represents the best tradeoff? It is best to view freedom as a multidimensional phenomenon, not something that can be measured or assessed along a single dimension. This is something that philosophers and political scientists have recognised for some time. This is why there are so many different concepts of freedom, each one tending to emphasise a different dimension or aspect of freedom. Consider the philosopher Joseph Raz’s theory of autonomy (which we can here deem to be equivalent to a theory of freedom).*** This theory argues that there are three conditions of freedom: (i) rationality, i.e. the ability to act for reasons in pursuit of goals; (ii) optionality, i.e. the availability of a range of valuable options; and (iii) independence, i.e. freedom from interference or domination. These conditions can be taken to define a three-dimensional space of freedom against which we can assess individual lives. The ideal life is one that has maximum rationality, optionality and independence. But it is often not possible to ensure the maximum degree of each. Being more independent, for example, often reduces the options available to you and makes some choices less rationally tractable (i.e. you are less able to identify the best means to a particular end because you stop relying on the opinions or advice of others). Furthermore, we often willingly sacrifice freedom in one domain of life in order to increase it in another, e.g. we automate our retirement savings, thereby reducing freedom at one point in time, in order to increase it at a later point in time.

This is a long way of saying that transhumanism should be interpreted as one view of how we should trade off across the different dimensions of freedom. Transhumanists think that the biological limitations on freedom are great: having shorter lives, less intelligence and less well-being than we might otherwise have leads to diminished human flourishing. Consequently, they might argue that we ought to trade these biological limitations for technological ones: what's a loss of privacy compared to the gain in longevity/intelligence/wellbeing? Their critics — the technological pessimists — have a different understanding of the tradeoffs. They think that biological limitations are better than technological ones: that living under a technological panopticon is a much worse fate than living under the scythe of biological decay and death.

This brings me to my final argument. This one is slightly more personal in nature. For what it's worth, I tend to sympathise with both transhumanists and technological pessimists. I think most of the transhumanist goals are commendable and desirable. I think we should probably strive to remove the various forms of biological limitation identified by transhumanists (I am being cagey here since I disagree with certain interpretations and understandings of those goals). Furthermore, I think that technology — particularly internal-complementary technologies — represents the best hope for transhumanists in this regard. At the same time, I think it is dangerous to pursue the transhumanist goal by simply plunging headlong into the latest technological innovations. We need to be selective in how we embrace technology and be cognisant of the ways in which it can limit and compromise freedom. In essence, I disagree with understanding the debate about technology and its impact on freedom in a simple, binary way. We shouldn't be transhumanist cheerleaders or resolute technological pessimists. We should be something in between, perhaps: cautiously optimistic technological sceptics.

To conclude, and to briefly sum up, the paradox of transhumanism is intriguing. Thinking about the tension between biological freedom and technological freedom can help to clarify and shed light on our ambiguous modern relationship with technology. Nevertheless, the paradox is more of an illusion than a reality. It dissolves upon closer inspection. This is because there is no pure form of freedom: we are (and should always be) forced to live with some constraints, limitations, manipulations and interferences. What we need to do is to figure out the best tradeoff or compromise.



* I have never quite understood the logic of this deal. Although this is the popular way of phrasing it, presumably the highwayman’s actual offer is ‘your money or your life and your money’ since his ultimate goal is to take your money. 


** If I were being more technically sophisticated in this discussion, I would point out that the concept of the ‘biological’ is controversial. Some people argue that certain biological categories/properties are socially constructed. The classic example might be the property of sex/gender. If you take that view of at least some biological properties, then the distinction between biological freedom and political freedom would be more blurry. If I were being even more technically sophisticated I would point out that social construction comes in different forms and not all of these are threatening to the distinction I try to draw in the text. Specifically, I would argue that most of the biological limitations that preoccupy transhumanism are causally socially constructed rather than constitutively socially constructed. 


*** There is, arguably, a technical distinction between freedom and autonomy. Following the work of Gerald Dworkin we can argue that freedom is a local property that applies to particular decisions, whereas autonomy is a global property that applies to an extended set of decisions. The two concepts are ultimately related.




Wednesday, July 12, 2017

Likelihood Arguments for Design





These are some notes about design arguments for the existence of God. They are based on my readings of Benjamin Jantzen’s excellent book An Introduction to Design Arguments, which was published by Cambridge University Press back in 2014.


1. Likelihood Versions of the Design Argument
Design arguments for the existence of God are popular and persistent. They all share a common form. They start with evidence drawn from the real world — the remarkable way in which a stick insect resembles a stick; the echolocation of bats; the fact that the planet earth exists in the habitable zone; the fine tuning of the physical constants for the production of life in the universe; or the collection of all such examples — and then argue that this evidence points to the existence of a designer, i.e. God.

This basic common form has been developed in numerous ways over the course of human history. Most recently, it has been common to present design arguments using the formal trappings of probability theory and, quite often, this involves the use of likelihood comparisons. 'Likelihood' here must be understood in its formal sense. In everyday language, the term 'likely' is synonymous with 'probable'. In its formal sense, its meaning is subtly different: it is a measure of how probable some piece of evidence is given the truth of some particular theory.

Let’s use an example. Suppose you have a jar filled with one hundred beans. You are told that one of three hypotheses about that jar of beans is true, but not which one. The three hypotheses are:

H1: The jar only contains black beans.
H2: The jar contains 50 black beans and 50 green beans.
H3: The jar contains 25 black beans and 75 green beans.

Suppose you draw a bean from the jar. It is green. This is now some evidence (E) that you can use to rank the likelihood of the different hypotheses. How likely is it that you would draw a green bean if H1 were true? Answer: zero. H1 says that all the beans are black. If you draw a green bean, you immediately disconfirm H1. What about H2 and H3? There, the situation is slightly different. Both of those hypotheses allow for the existence of green beans. Nevertheless, E is more expected on H3 than it is on H2. That is to say, E is more likely on H3 than it is on H2. In formal notation, the picture looks like this:

Pr (E|H2) = 0.50
Pr (E|H3) = 0.75
Therefore - Pr (E|H2) < Pr (E|H3)

Notice that this doesn't tell us anything about the probability of the respective hypotheses. Likelihood is a measure of the probability of E|H and not a measure of the probability of H|E (the so-called 'posterior probability' of a hypothesis). This is pretty important because there are cases in which the posterior probability of a hypothesis and the likelihood it confers on the evidence are radically divergent. Based on the above example, we conclude that H3 is the more likely theory: it confers the greatest probability on the observed evidence. But suppose we were also told that 90 percent of all jars contain a 50-50 mix of black and green beans, whereas only 5 percent contain the 25-75 mix. If that were true, H2 would be the more probable hypothesis, even if we did draw a green bean from the jar. (You can do the formal calculation using Bayes' Theorem if you like). The only cases in which likelihood arguments tell us anything about the posterior probability of a theory are those in which all the available hypotheses are equally probable prior to observing the evidence (i.e. when the 'principle of indifference' can be applied to the hypotheses).
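
To make that concrete, here is a sketch of the Bayes' Theorem calculation with the numbers just given (assuming, for simplicity, that the remaining 5 percent of jars are the all-black jars of H1, which confer zero likelihood on a green draw):

Pr (H2|E) = [Pr (E|H2) x Pr (H2)] / [Pr (E|H1) x Pr (H1) + Pr (E|H2) x Pr (H2) + Pr (E|H3) x Pr (H3)]
= (0.50 x 0.90) / [(0 x 0.05) + (0.50 x 0.90) + (0.75 x 0.05)] = 0.45 / 0.4875 ≈ 0.92
Pr (H3|E) = (0.75 x 0.05) / 0.4875 ≈ 0.08

So even though H3 confers the higher likelihood on the evidence, H2 remains by far the more probable hypothesis.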

This hasn’t deterred some theists from defending likelihood versions of the design argument. The reason for this is that they think that when it comes to comparing certain hypotheses we are in a situation in which the principle of indifference can be applied. More particularly, they think that when it comes to explaining evidence of design in the world, the leading available theories (theism and naturalism) both have equal prior probabilities and hence the fact that the evidence of design is more likely on theism than it is on naturalism gives some succour to the theist. In other words, they think the following argument holds:


  • Notation: E = Remarkable adaptiveness of life in the universe; T = hypothesis of theistic design; and N = hypothesis of naturalistic causation.
  • (1) Prior probabilities of T and N are equal.
  • (2) Pr (E|T) >> Pr (E|N) [probability of E given theism is much higher than the probability of E given naturalism]
  • (3) Therefore, Pr (T|E) >> Pr (N|E) [the posterior probability of theism is much higher than that of naturalism]



Is this argument any good?


2. The Inverse Gambler's Fallacy
There are many things we could challenge about the likelihood argument. An obvious one is its underspecification of the relevant explanatory hypotheses. Consider N. How exactly does naturalistic causation explain the adaptiveness of life? One answer is simply to say that it explains it through chance. The naturalistic view is that the universe churns through different arrangements of matter and energy, and through sheer luck it occasionally stumbles on arrangements of matter and energy that take on the adaptive properties of life. If your understanding of N is that it only explains E in terms of pure chance, then the likelihood argument may well be effective (though see the objection discussed in the next section).


But no one thinks that naturalism explains adaptiveness in terms of pure chance: the universe doesn't constantly rearrange itself in completely random ways. Even before the time of Darwin, there were versions of naturalism that went beyond pure chance as an explanation. David Hume, in his famous Dialogues Concerning Natural Religion, argued that design could be explained in Epicurean terms. The idea here is that although the universe does churn through different arrangements of matter and energy, some of those arrangements are more dynamically stable than others. They tend to persist, replicate and adapt. Those are the arrangements to which we attribute the properties of life and adaptiveness. Jantzen fleshes out this Humean/Epicurean hypothesis in the following manner (2014, 180):


  • N1: The traits of organisms (and the universe as a whole) are the product of a process involving chance, the laws with which atoms blindly interact with one another, and a great deal of time — after a very long time, the universe eventually stumbled across a configuration that is dynamically stable.


If this is your understanding of naturalism, then the likelihood argument is cast into more doubt. It is at least plausible that the probability of E|N is much closer to the probability of E|T (particularly if the universe has been around for long enough).

Elliott Sober disputes this Humean argument. He says that proponents of it overstate the likelihood of E because they commit something called the Inverse Gambler's Fallacy. The regular Gambler's Fallacy arises from the tendency to assume that if a particular random outcome occurs several times in a row it is less likely to happen in the future. Thus, if you flip a coin ten times and get heads on each occasion, you would commit the Gambler's Fallacy if you assumed that you were more likely to get tails on the next flip. Although the numbers of heads and tails tend to be roughly equal over the very long term, the probability of the next coin flip being tails is the same as it is for every other coin flip, i.e. 0.5. Thus, the regular Gambler's Fallacy is the tendency to overstate the likelihood of an event (a tails) given a previous set of evidence.

The Inverse Gambler's Fallacy is, as you might expect, the reverse. It's the tendency to overstate the likelihood of a particular event given a limited set of evidence. Jantzen explains the concept with a simple example. Imagine you have just wandered into a casino and you see somebody roll a double-six on a pair of dice. That's your evidence (call it E1). There are two hypotheses that could explain that observation:


  • H4: This is the first roll of the evening.
  • H5: There have been many rolls of the dice that evening.


Although the probability of any particular roll of the dice being a double-six is 1/36, if there were lots of rolls in the course of one evening you would expect to see a double-six at some stage (indeed, given enough rolls the probability of eventually seeing a double six would start to approach 1). Thus, you could argue that:


  • Pr (E1|H4) << Pr (E1|H5)


And hence that H5 is the more likely explanation. But this, according to Sober, is a fallacy. You have overstated the likelihood of the observation you made. The reason for this is that E1 is ambiguously stated. It could mean ‘a double six was rolled at some point in the evening’ or it could mean ‘a double six was rolled on this particular occasion’. If it means the former, then H5 is indeed more likely than H4. But if it means the latter, then the likelihood of H5 and H4 is equal. For any particular throw, they each confer an equal likelihood on E1, i.e. 1/36.
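
To see why the first reading makes H5 so much more likely, the arithmetic runs as follows (assuming fair dice and statistically independent rolls):

Pr (at least one double-six in n rolls) = 1 - (35/36)^n
For n = 1 this is 1/36 ≈ 0.03; for n = 50 it is roughly 0.76; for n = 100 it is roughly 0.94.

On the 'at some point in the evening' reading, then, a long night of play makes the observation highly likely; on the 'this particular roll' reading, the likelihood stays at 1/36 no matter how many rolls came before.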

How does this apply to the Humean argument? The answer, according to Sober, is that the Humean explanation is like H5. The Humean idea is that given enough time and enough rolls of the galactic dice, we will eventually see arrangements of matter and energy that have the properties of life and adaptiveness. This could well be true, but for any particular arrangement of matter and energy — e.g. the functional adaptation of the eye for receiving and processing light signals — the Humean explanation does not confer that much likelihood on the outcome. Hence, the person who assigns a high value to Pr (E|N1) is committing the Inverse Gambler's Fallacy.

There are, however, three problems with this criticism. The first is that the evidence of design that is relevant to the likelihood argument is general, not specific. Theists are appealing to the general presence of adaptiveness in the universe over the course of history, not just specific individual instances. The Humean explanation takes this into account. So the Humean argument does not really involve anything analogous to the Inverse Gambler's Fallacy. Second, if the focus were on specific instances of adaptiveness, the theistic explanation would be in just as much trouble as the Humean one. After all, the generic hypothesis of theism doesn't explain why God would have chosen to design particular functions and adaptations into animals. You need a much more specific hypothesis for that, and providing one runs into all sorts of trouble (more on this below). Third, the Humean explanation obviously does not exhaust all the possible naturalistic explanations of adaptiveness. The most scientifically credible explanation — Darwinian natural selection — confers a much higher likelihood on adaptiveness than the simple Humean explanation. If we were to compare the likelihood of E given Darwinian natural selection to the likelihood of E given theism, the comparative likelihoods would be much harder to disentangle, and would arguably lean in favour of naturalism.


3. The Problem of Auxiliary Hypotheses
There are other problems with likelihood arguments. Sober’s favourite criticism of them focuses on the role of auxiliary hypotheses in their computation. His point is subtle and its significance is often missed. The idea is that whenever we make a claim concerning the likelihood of one hypothesis relative to another, we usually leave a great deal unsaid (implicit) that helps us in making that comparison. When I gave the example of the dice being rolled in the previous section, I assumed a number of things to be true: I assumed that dice rolls are statistically independent; I assumed that there are usually many dice rolls in any given evening of play; I assumed that the dice in question were fair. It was only because of these assumptions that I was able to say, with reasonable confidence, that the probability of any particular roll resulting in a double six was 1/36 or that the probability of observing a double-six at some point in the evening was reasonably high.

All of these assumptions are auxiliary hypotheses and they are needed if we are going to make sensible likelihood comparisons. In everyday scenarios, the presence of auxiliary hypotheses in a likelihood calculation is not a major cause for concern. We share common experience of the world and so rightfully take a lot for granted. Things are rather different when it comes to explaining the origins of adaptiveness in the universe as a whole. When we reach this level of explanatory generality, there is less and less that we can assume uncontroversially. This means that it is very difficult to compute sensible likelihoods for general explanations of adaptiveness.

This is a particular problem with theism. In order for the general hypothesis of theism to confer plausible likelihoods on the presence of adaptiveness, we would need to add a number of auxiliary hypotheses concerning the intentions and goals of the designer. For example, when looking at the human eye (or any collection of examples of adaptiveness), we would have to be able to say that God has goals X, Y and Z and these explain why the eye (or the collection) has the features it does. Some theists might be willing to speculate about the intentions and goals of God, but doing so gets them into trouble, especially when it comes to explaining away instances of natural evil. They would have to state the intentions and goals that justify God in creating parasites that incubate in and destroy the functionality of the eye (to give but one example). In light of the problem of evil, many theists are unwilling to speculate in too much detail about divine intentions. They resign themselves to the view that God's intentions are unknowable or beyond our ken. But in doing this, they undercut the likelihood argument.

Note, however, that the problem with auxiliary hypotheses is not just a problem for the theist. It is also a problem for the naturalist. In order for the naturalist to compute plausible likelihoods, they have to add more detail to explain why the adaptiveness we see has the features it has. There are various ways of doing this, e.g. by making assumptions about natural laws, historical conditions on earth, and so on. They would all have to get added into the mix to make a reasonable likelihood comparison. The problem then, as Jantzen puts it, is that 'Sober's objection is not really about picking auxiliary assumptions but rather identifying allowable hypotheses. But [the likelihood principle] tells us nothing about what counts as an acceptable hypothesis. Nor does the principle of Indifference. So it seems we have to either entertain them all or risk begging the question in favour of one or another conclusion' (Jantzen 2014, 184).


The net result is that it is very difficult to come up with a plausible likelihood argument for design.


Monday, July 3, 2017

The Intrinsic Value of Achievements




People always talk about their achievements. They talk about the promotions they just received, telling us that they worked hard to achieve them. They talk about the marathons they just ran, describing in detail the many painful training sessions that made this possible. They often berate others for their seeming lack of achievement. The rich kid who got a lucrative job at his father’s best friend’s hedge fund, didn’t, we are told, really deserve it. There is something phoney about his achievement.

But what are achievements and why do they matter so much? The philosopher Gwen Bradford has been doing quite a lot of work on this topic over the past few years. In this post I want to take a look at some of that work. In particular, I want to look at how she characterises achievements and her defence of the claim that achievements are intrinsically valuable.


1. A Three-Part Account of Achievements
Before we can even begin to understand the value of achievements, we need to know what they are. It might seem silly to spend time on this question. Surely we all know what achievements are, especially given that we bang on about them all the time? But philosophers like to clarify and complicate, breaking down everyday concepts into their component parts and seeing how they fit together. That’s exactly what Bradford does with the concept of achievement.

She says that achievements are characterised by a process-product relation. Thus, an achievement arises when you use a process to produce a product. For example, the recently-promoted worker can be said to have achieved her promotion because she followed a certain process (hard work, dedication etc.) that produced that result. Likewise, the marathon runner can be said to have achieved her success at the marathon because she followed a gruelling training routine and then actually ran the full twenty-six miles that constitutes the marathon.

But achievements aren’t just characterised by process-product relations. If I stood up and flicked on the light switch, you wouldn’t commend me for the remarkable achievement of bringing light to the previously dark room. And yet I did follow a process (standing up, flicking the light switch) that produced that result. What is missing from this example? Answer: The process in question wasn’t sufficiently difficult to count as an achievement. In order for X to count as an achievement, it must be brought about via a process that requires some effort or skill to produce the associated product.

This hints at another thing that is needed in order for there to be an achievement. It’s not enough for there to be a process-product relation that involves difficulty. The process must also be sufficiently non-lucky. So, for example, if I win the lottery no one would describe this as a great achievement. This is despite the fact that there is a process-product relation (I went to the store and bought the ticket) and despite the fact that it was exceptionally difficult for me to win the lottery. The problem is that I just got lucky. I didn’t bring about the result through the appropriate exercise of skill. As Bradford sees it, competent causation is probably required in order for there to be an achievement.

This gives us the following account of achievements:

Achievement: In order for there to be an achievement, there must be: (a) a process-product relation; (b) that involves difficulty; and (c) competent causation.

I have tried to illustrate this in the diagram below.



Now that we have a clearer understanding of what achievements are, we can turn to the main topic: what kind of value do they have? On this front, Bradford makes two interesting arguments. First, she claims that achievements have intrinsic value, i.e. they are valuable irrespective of the value of the products they produce. And second, the amount of intrinsic value associated with an achievement is proportional to its difficulty, i.e. the more difficult the achievement, the more valuable it is. Both of these arguments are controversial. Let’s see what she has to say in defence of them.


2. Against the Simple Product View
Bradford’s first argument is that achievements have intrinsic value. She defends this by considering a contrary view. The contrary view is something she calls the ‘Simple Product’ theory:

Simple Product Theory: This is the view that achievements have value solely in virtue of their products, i.e. if the product is valuable then the achievement is valuable, but if the product is not valuable, then neither is the achievement.

There is something to be said for this theory. It appeals to an intuition — one that many have felt — that if you spend your time doing something that is devoid of value, then no matter how difficult it may have been, it is not worthwhile. If you spend your life looking for a cure for cancer, and you succeed in finding one, your achievement is valuable; if you spend your life trying to cause the most painful and vile cancers, and you succeed, then your achievement does not have value. It’s the intrinsic value of the product that determines the value of the achievement.

There are two problems with the simple product theory. First, it cannot account for the value of achievements whose products are what Bradford calls “zero-value” products. In other words, it cannot account for the value of achieving an outcome that has no significant intrinsic value of its own. Many sports and recreational activities have this quality. Bradford uses two examples: climbing a mountain and running a marathon. In neither case is the end result (being atop a mountain; crossing the finish line) intrinsically valuable (at least, not to any significant degree). Nevertheless, we would happily say that there is value to those achievements. But if we say that and accept that the end products are devoid of value, we must also accept that the value is coming from something else. The most plausible ‘something else’ is the achievements themselves.

Proponents of the simple product theory may have a response to this. They could argue that we have mischaracterised the product in the case of mountain climbing and marathon running. It is not being atop the mountain or crossing the finish line that matters. Rather, it is the activity of climbing and running that matters. In other words, these processes are, in fact, themselves products and they are intrinsically valuable. The problem with this argument is that it invites the further question: why are these processes intrinsically valuable? Bradford argues that the most plausible answer is because they are difficult and involve triumph over adversity. But in that case the process-as-product view simply reduces to the account of achievements that she is trying to defend.

The other problem with the simple product theory is that it contradicts another powerful intuition, namely: that hard work and perseverance matter when it comes to assessing the value of an outcome. Bradford illustrates this point with a thought experiment:

Two Novelists Thought Experiment: Suppose there are two novelists who have produced equally aesthetically valuable books. Smith’s experience while writing the book was typical. He struggled with bouts of procrastination and writer’s block, but he eventually finished the manuscript. Jones’s experience was rather different. He endured all the normal writerly roadblocks, as well as the death of his wife, his beloved pet, and the loss of his home and other property. On top of this, he suffered from clinically diagnosed depression throughout, which made many ordinary days a terrible struggle. Nevertheless, he managed to finish the manuscript.

Whose achievement is more valuable? Assuming the books to be of equal quality, the answer seems pretty obvious: Jones’s. He had a much more difficult process. But if that’s right, then the end product is not the only determinant of value. The nature of the process itself confers value on the achievement.

This still leaves the question: what happens when the product has negative value? Consider the cancer-causing example from earlier on. If I spend my life causing as much cancer as possible, I have not spent it in a valuable way. Quite the contrary. Surely we would not say that my success in so doing had value? Maybe not, but in light of the preceding arguments, the most sensible thing to say about this example might be that my achievement has some intrinsic value, but that this is massively outweighed (overwhelmed, really) by the intrinsic disvalue of the product.


3. The Value of Difficulty
Bradford’s second argument is that the value of an achievement is, in some sense, directly related to its difficulty. The Two Novelists thought experiment hints at this view. In that case, we had two books of equal value produced through two different processes, one of which was much more difficult than the other. We concluded that the more difficult process was more intrinsically valuable.

There is, however, a problem with this view. It seems to lead to absurd results. After all, we can arbitrarily ramp up the difficulty of everything we do. Instead of running a marathon, I can hop a marathon. Instead of climbing a mountain with sophisticated equipment and oxygen, I can climb it freestyle with no oxygen. Does arbitrarily introducing this difficulty make the achievements more valuable? Maybe in some cases, but in others it seems downright wasteful and disrespectful. Imagine someone trying to achieve a cure for cancer who arbitrarily made the process more difficult by insisting on ignoring all the insights gained from previous studies and eschewing all funding offered in support of their work. Suppose they eventually succeed in curing cancer, but do so several years later than they would have done if they had avoided those artificial obstacles. Surely their achievement isn’t more valuable than that of someone who achieved the same result in fewer years?

Bradford is inclined to hang tough on this one. She argues that difficulty does make the achievement more valuable. The reason we are tricked into thinking that a more difficult process does not add value is that (a) achievements are not the only valuable things in the world and (b) the value of achievement can conflict with other values. So, for example, in the cancer cure case, the value of curing cancer (and alleviating all the associated suffering) outweighs the value of achieving the cure for cancer. We would rightly disparage someone for making the achievement more difficult because doing so delays or hinders something that could be of great value to others. Contrast that with the person who climbs the mountain without equipment and oxygen. Since nothing of significant intrinsic value is delayed or hindered by making the climb more difficult, we are more willing to entertain the claim that this is a more valuable achievement than that of someone who climbed the mountain with oxygen and equipment.

Bradford goes a step further. She illustrates the strength of the claim that difficulty adds value to achievements by considering what life would be like in a world where achievements are the only value available to humans (i.e. a world where condition (a) does not hold). Such a world was imagined by Bernard Suits in his book The Grasshopper. I’ve discussed this work on several occasions. Suits’s book depicts a world of technological and scientific perfection: every human desire and need can be satisfied at the touch of a button or the wave of a hand; and all knowledge has been written down and can be easily retrieved.

This world is a kind of utopia. There is no poverty, hunger or deprivation of any kind. But what then is left for humans to do? The answer, according to Suits, is to play games. This is the only way to derive any value or meaning from life in a world where everything is available at the flick of a switch. But playing a game, according to Suits, requires reintroducing difficulty into the world, i.e. setting up arbitrary obstacles that prevent us from achieving goals in the most efficient manner. So difficult processes are the only source of value in utopia.

This seems plausible to me. In many cases, I think we dislike difficulty because it hinders or prevents us from achieving an end that is itself very valuable. This tricks us into thinking that difficulty is a bad thing, but when we consider cases in which the end is not valuable, the difficulty of the process bears a lot of weight. One thing, however, puzzles me. It seems to me that sometimes finding a more efficient process for producing a product is more of an achievement than sticking with a more difficult process. For example, when I was a student solving math problems, I was always something of a plodder. I could solve the problems, but only by following inelegant, brute force methods. One of my friends was a real mathematician. He was able to solve the problems using much simpler, more elegant algorithms. I always viewed his solutions as more of a mathematical achievement than mine, even though I clearly followed a more difficult process. Does this disprove the claim that difficulty increases the value of an achievement? Or is there some way to reconcile it with that claim? Perhaps finding the more elegant and efficient process is more difficult than sticking with the less efficient, less elegant process? I’m not sure.

Anyway, that brings us to the end of this post. To briefly recap, Bradford defends a tripartite theory of achievements. According to this theory, an achievement is characterised by (a) a process-product relation, where the process is (b) sufficiently difficult and (c) non-lucky. She argues that achievements are intrinsically valuable, contrary to the simple product theory, and that the more difficult the process, the more valuable the achievement.




Tuesday, June 27, 2017

The Tell-Tale Brain: The Effect of Predictive Brain Implants on Autonomy




What if your brain could talk to you?

’That’s a silly question’, I hear you say, ‘My brain already talks to me.’

To the best of our current knowledge, the mind is the brain, and the mind is always talking. Indeed, it’s where all the talking gets started. We have voices in our heads — a cacophony of different thoughts, interests, fears, and hopes — vying for attention. We live in a stream of self-talk. We build up detailed narratives about our lives. We are always spinning yarns, telling stories.

This is all probably true. But our brains don’t tell us everything. The stream of self-talk in which we are situated (or should that be ‘by which we are constituted’?) sits atop a vast, churning sea of sub-conscious neurological activity. We operate on a ‘need to know’ basis and we don’t need to know an awful lot. Many times we sail through this sea of activity unperturbed. But sometimes we don’t. Sometimes what is happening beneath the surface is deeply problematic, hurtful to ourselves and to others, and occasionally catastrophic. Sometimes our brains only send us warning signals when we are about to get washed up on the rocks.

Take epilepsy as an example. The brains of those who suffer from epilepsy occasionally enter into cycles of excessive synchronous neuronal activity. This results in seizures (sometimes referred to as ‘fits’), which can lead to blackouts and severe convulsions. Sometimes these seizures are preceded by warning signs (e.g. visual auras), but many times they are not, and even when they are, the signs often come too late, well after the point at which anything can be done to avert their negative consequences. What if the brains of epileptics could tell them something in advance? What if certain patterns of neuronal activity were predictive of the likelihood of a seizure and what if this information could be provided to epileptic patients in time for them to avert a seizure?

That’s the promise of a new breed of predictive brain implants. These are devices (sets of electrodes) that are implanted into the brains of epileptics and, through statistical learning algorithms, used to predict the likelihood of seizures from patterns of neuronal activity. These devices are already being trialled on epileptic patients and proving successful. Some people are enthusiastic about their potential to help those who suffer from the negative effects of this condition and, as you might expect, there is much speculation about other use cases for this technology. For example, could predictive brain implants tell whether someone is going to go into a violent rage? Could this knowledge prove useful in crime prevention and mitigation?

These are important questions, but before we get too carried away with the technical possibilities (or impossibilities) it’s worth asking some general conceptual and ethical questions. Using predictive brain implants to control and regulate behaviour might seem a little ‘Clockwork Orange’-y at first glance. Is this technology going to be a great boon to individual liberty, freeing us from the shackles of unwanted neural activity? Or is it going to be a technique of mind control - the ultimate infringement of human autonomy? These are some of the questions taken up in Frederic Gilbert’s paper ‘A Threat to Autonomy? The Intrusion of Predictive Brain Implants’. I want to offer some of my own thoughts on the issue in the remainder of this post.


1. The Three Types of Predictive Brain Implants

Let’s start by clarifying the technology of interest. Brain implants of one sort or another have been around for quite some time. So-called ‘deep brain stimulators’ have been used to treat patients with neurological and psychiatric conditions for a couple of decades. The most common use is for patients with Parkinson’s disease, who are often given brain implants that help to minimise or eliminate the tremors associated with their disease. It is thought that over 100,000 patients worldwide have been implanted with this technology.

Predictive brain implants (PBIs) are simply variations on this technology. Electrodes are implanted in the brains of patients. These electrodes record and analyse the electrical signals generated by the brain. They then use this data to learn and predict when a neuronal event (such as a seizure) is going to take place. At the moment, the technology is in its infancy, essentially just providing patients with warning signals, but we can easily imagine developments in the technology, perhaps achieved by combining it with other technologies. Gilbert suggests that there are three possible forms for predictive brain implants:

Purely Predictive: These are PBIs that simply provide patients with predictive information about future neuronal events. Given the kinds of events that are likely to be targets for PBIs, this information will probably always have a ‘warning signal’-like quality.

Advisory: These are PBIs that provide predictions about future neuronal events, as well as advice to patients about how to avert/manipulate those neuronal events. For example, in the case of epilepsy, a patient could be advised to take a particular medication or engage in some preventive behaviour. The type of advice that could be given could be quite elaborate, if the PBI is combined with other information processing technologies.

Automated: These are PBIs that predict neuronal events and then deliver some treatment/intervention that will avert or manipulate that event. They will do this without first warning or seeking the patient’s consent. This might sound strange, but it is not that strange. There are a number of automated-treatment devices in existence already, such as heart pacemakers or insulin pumps, and they regulate physiological processes without any meaningful ongoing input from the patient.

The boundary between the first two categories is quite blurry. Given that PBIs necessarily select specific neuronal events from the whirlwind of ongoing neuronal events for prediction, and given that they will probably feed this selective information to patients in the form of warning signals, the predictions are likely to carry some implicit advice. Nevertheless, the type of advice provided by advisory PBIs could, as mentioned above, be more or less elaborate. It could range from the very general ‘Warning: you ought to do something to avert a seizure’ to the more specific ‘Warning: you ought to take medication X, which can be purchased at store Y, which is five minutes from your present location’.
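Gilbert treats the underlying prediction machinery as a black box, which is fine for the argument, but it may help to see how simple the core idea is. Here is a deliberately toy sketch of the ‘purely predictive’ mode: learn a mapping from features of recent neural activity to a seizure-risk estimate, and warn the patient when the estimate crosses a threshold. The features, the synthetic data and the 0.7 threshold are all assumptions of mine for illustration; no actual implanted device works like this.

```python
# Toy sketch of a "purely predictive" PBI: a statistical model maps features of
# recent neural activity to a seizure-risk estimate and issues a warning.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend feature vectors summarising recent recording windows (e.g. band power,
# synchrony); label 1 means the window preceded a seizure. All synthetic.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

def warn(current_window, threshold=0.7):
    """Purely predictive mode: return estimated risk and whether to warn."""
    risk = model.predict_proba(current_window.reshape(1, -1))[0, 1]
    return risk, risk >= threshold

risk, alarm = warn(rng.normal(size=4))
print(f"estimated seizure risk: {risk:.2f}, warn patient: {alarm}")
```

On this toy picture, the advisory and automated modes differ only in what happens after the warning step: a recommendation gets attached to the warning, or an intervention is triggered directly without consulting the patient.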



The different types of PBI could have very different impacts on personal autonomy. At first glance, it seems like an automated PBI would put more pressure on individual autonomy than a purely predictive PBI. Indeed, it seems like a purely predictive or advisory PBI could actually benefit autonomy, but that first glance might be misleading. We need a more precise characterisation of autonomy, and a more detailed analysis of the different ways in which a PBI could impact upon autonomy, before we can reach any firm conclusions.


2. The Nature of Autonomy
Many books and articles have been written on the concept of ‘autonomy’. Generations of philosophers have painstakingly identified necessary and sufficient conditions for its attainment, subjected those conditions to revision and critique, scrapped their original accounts, started again, given up and argued that the concept is devoid of meaning, and so on. I cannot hope to do justice to the richness of the literature on this topic here. Still, it’s important to have at least a rough and ready conception of what autonomy is and the most general (and hopefully least contentious) conditions needed for its attainment.

I have said this before, but I like Joseph Raz’s general account. Like most people, he thinks that an autonomous agent is one who is, in some meaningful sense, the author of their own lives. In order for this to happen, he says that three conditions must be met:

Rationality condition: The agent must have goals/ends and must be able to use their reason to plan the means to achieve those goals/ends.

Optionality condition: The agent must have an adequate range of options from which to choose their goals and their means.

Independence condition: The agent must be free from external coercion and manipulation when choosing and exercising their rationality.

I have mentioned before that you can view these as ‘threshold conditions’, i.e. conditions that simply have to be met in order for an agent to be autonomous, or you can have a slightly more complex view, taking them to define a three-dimensional space in which autonomy resides. In other words, you can argue that an agent can have more or less rationality, more or less optionality, and more or less independence. The conditions are satisfied in degrees. This means that agents can be more or less autonomous, and the same overall level of autonomy can be achieved through different combinations of the relevant degrees of satisfaction of the conditions. That’s the view I tend to favour. I think there possibly is a minimum threshold for each condition that must be satisfied in order for an agent to count as autonomous, but I suspect that the cases in which this threshold is not met are pretty stark. The more complicated cases, and the ones that really keep us up at night, arise when someone scores high on one of the conditions but low on another. Are they autonomous or not? There may not be a simple ‘yes’ or ‘no’ answer to that question.

Anyway, using the three conditions we can formulate the following ‘autonomy principle’ or ‘autonomy test’:

Autonomy principle: An agent’s actions are more or less autonomous to the extent that they meet the (i) rationality condition; (ii) optionality condition and (iii) independence condition.
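To make the degrees-of-satisfaction reading a little more concrete, here is a minimal sketch of how one might represent it. The 0-to-1 scales, the equal weighting and the 0.2 floor are stipulations of mine, purely for illustration; nothing in Raz’s account or the threshold view commits us to these numbers.

```python
# Toy model of the "degrees" reading of the autonomy principle.
from dataclasses import dataclass

@dataclass
class AutonomyProfile:
    rationality: float   # 0.0 - 1.0, stipulated scale
    optionality: float   # 0.0 - 1.0
    independence: float  # 0.0 - 1.0

    MINIMUM = 0.2  # stipulated floor each condition must clear

    def meets_thresholds(self) -> bool:
        return all(v >= self.MINIMUM for v in
                   (self.rationality, self.optionality, self.independence))

    def degree(self) -> float:
        """Overall degree of autonomy, treating the conditions as equally weighted."""
        return (self.rationality + self.optionality + self.independence) / 3

# Two profiles with the same aggregate degree but very different shapes:
balanced = AutonomyProfile(0.6, 0.6, 0.6)
lopsided = AutonomyProfile(0.9, 0.9, 0.0)
print(balanced.degree(), balanced.meets_thresholds())  # 0.6 True
print(lopsided.degree(), lopsided.meets_thresholds())  # 0.6 False
```

The hard cases mentioned above show up here as profiles with the same aggregate score but very different shapes, one of which falls below the stipulated floor on a single condition.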

We can then use this principle to determine whether, and to what extent, PBIs interfere with or undermine an agent’s autonomy.

What would such an analysis reveal? Well, looking first to the rationality condition, it is difficult to see how a PBI could undermine this. Unless it malfunctions or is misdirected, it is unlikely that a PBI would undermine our capacity for rational thought. Indeed, the contrary would seem to be the case. You could argue that a condition such as epilepsy is a disruption of rationality. Someone in the grip of a seizure is no longer capable of rational thought. Consequently, using the PBI to avert or prevent their seizure might actually increase, not decrease, their rationality.

Turning to the other two conditions, things become a little more unclear. The extent to which autonomy is enhanced or undermined depends on the type of PBI being used.


3. Do advisory PBIs support or undermine autonomy?
Let’s start by looking at predictive/advisory PBIs. I’ll treat these as a pair since, as I stated earlier on, a purely predictive PBI probably does carry some implicit advice. That said, the advice would be different in character. The purely predictive PBI will provide a vague, implied piece of advice (“do something to stop x”). The advisory PBI could provide very detailed, precise advice, perhaps based on the latest medical evidence (“take medication x in ten minutes time and purchase it from vendor y”). Does this difference in detail and specification matter? Does it undermine or promote autonomy?

Consider this first in light of the optionality condition. On the one hand, you could argue that a vague and general bit of advice is better because it keeps more options open. It advises you to do something, but leaves it up to you exactly what that is. The more specific advice seems to narrow the range of choices, and this may seem to reduce the degree of optionality. That said, the effect here is probably quite slight. The more specific advice is not compelled or forced upon you (more on this in a moment), so you are arguably left in pretty much the same position as someone getting the more general advice, albeit with a little more knowledge. Furthermore, there is the widely-discussed ‘paradox of choice’, which suggests that having too many options can be a bad thing for autonomy because it leaves you paralysed in your decisions. Having your PBI specify an option might help you to break that paralysis. That said, this paradox of choice may not arise in the kinds of scenarios in which PBIs get deployed. The paradox of choice is best documented in relation to consumer behaviours and it’s not clear how similar this would be to decisions about which intervention to pick to avoid a neuronal event.

The independence condition is possibly more important. At first glance, it seems pretty obvious that an advisory PBI does not undermine the independence condition. For one thing, the net effect of a PBI may be to increase your overall level of independence because it will make you less reliant on others to help you out and monitor your well-being. This is one thing Gilbert discusses in his paper on epileptic patients. He was actually involved with one of the first experimental trials of PBIs and interviewed some of the patients who received them. One of the patients on the trial reported feeling an increased level of independence after getting the implant:

…the patient reported: “My family and I felt more at ease when I was out in the community [by myself], […] I didn’t need to rely on my family so much.” These descriptions are rather clear: with sustained surveillance by the implanted device, the patient experienced novel levels of independence and autonomy. 
(Gilbert 2015, 7)

In addition to that, the advisory PBI is merely providing you with suggestions: it does not force them upon you. You are not compelled to take the medication or follow the prescribed steps. This doesn’t involve manipulation or coercion in the sense usually discussed by philosophers of autonomy.

So things look pretty good for advisory PBIs on the independence front, right? Well, not so fast. There are three issues to bear in mind.

First, although the advice provided by the PBI may not be coercive right now, it could end up having a coercive quality. For example, it could be that following the advice provided by the PBI is a condition of health insurance: if you don’t follow the advice, you won’t be covered by your health insurance policy. That might lend a coercive air to the phenomenon.

Second, people may end up being pretty dependent on the PBI. People might not be inclined to second-guess or question the advice provided, and may always go along with what it says. This might make them less resilient and less able to fend for themselves, which would undermine independence. We already encounter this phenomenon, of course. Many of us are already dependent on the advice provided to us by services like Google Maps. I don’t know how you feel about that dependency. It doesn’t bother me most of the time, though there have been occasions on which I have lamented my overreliance on the technology. So if you think that dependency on Google Maps undermines autonomy, then you might think the same of an advisory PBI (and vice versa).

Third, and finally, the impact of an advisory PBI on independence, specifically, and autonomy, more generally, probably depends to a large extent on the type of neuronal event it is being used to predict and manipulate. An epileptic on the cusp of a seizure is already in a state of severely compromised autonomy. They have limited options and limited independence in any event. The advisory PBI might impact negatively on those variables in moments just prior to the predicted seizure, but the net effect of following the advice (i.e. possibly avoiding the seizure) probably compensates for those momentary negative impacts. Things might be very different if the PBI was being used to predict whether you were about to go into a violent rage or engage in some other immoral behaviour. We don’t usually think of violence or immorality as diseases of autonomy so there may be no equivalent compensating effect. In other words, the negative impact on autonomy might be greater in these use-cases.


4. Do automated PBIs support or undermine autonomy?
Let’s turn finally to the impact of automated PBIs on autonomy. Recall, these are PBIs that predict neuronal events and use this information to automatically deliver some intervention to the patient that averts or otherwise manipulates those neuronal events. This means that the decisions made on foot of the prediction are not mediated through the patient’s conscious reasoning faculties; they are dictated by the machine itself (by its code/software). The patient might be informed of the decisions at some point, but this has no immediate impact on how those decisions get made.

This use of PBIs seems to be much more compromising of individual autonomy. After all, the automated PBI does not treat the patient as someone whose input is relevant to ongoing decisions about medical treatment. The patient is definitely not given any options, and they are not even respected as an independent autonomous agent. Consequently, the negative impact on autonomy seems clear.

But we have to be careful here. It is true that the patient with the automated PBI does not exercise any control over their treatment at the time that the treatment is delivered, but this is not to say they exercise no control at all. Presumably, the patient originally consented to having the PBI implanted in their brain. At that point in time, they were given options and were treated as an independent autonomous agent. Furthermore, they may retain control over how the device works in the future. The type of treatment automatically delivered by the PBI could be reviewed over time, by the patient, in consultation with their medical team. During those reviews, the patient could once again exercise their autonomy over the device. You could, thus, view the use of the automated PBI as akin to a commitment contract or Ulysses contract. The patient is autonomously consenting to the use of the device as a way of increasing their level of autonomous control at all points in their lives. This may mean losing autonomy over certain discrete decisions, but gaining it in the long run.

Again, the type of neuronal event that the PBI is used to avert or manipulate would also seem crucial here. If it is a neuronal event that otherwise tends to compromise or undermine autonomy, then it seems very plausible to argue that use of the automated PBI does not undermine or compromise autonomy. After all, we don’t think that the diabetic has compromised their autonomy by using an automated insulin pump. But if it is a neuronal event that is associated with immorality and vice, we might feel rather differently.

I should add that all of this assumes that PBIs will be used on a consensual basis. If we start compelling certain people to use them, the analysis becomes more complex. The burgeoning literature on neurointerventions in the criminal law would be useful for those who wish to pursue those issues.


5. Conclusion
That brings us to the end. In keeping with my earlier comments about the complex nature of autonomy, you’ll notice that I haven’t reached any firm conclusions about whether PBIs undermine or support autonomy. What I have said is that ‘it depends’. But I think I have gone beyond a mere platitude and argued that it depends on at least three things: (i) the modality of the PBI (general advisory, specific advisory or automated); (ii) the impact on the different autonomy conditions (rationality, optionality, independence) and (iii) the neuronal events being predicted/manipulated.