Thursday, September 25, 2014

Dawkins and the "We are going to die"-Argument



(I originally thought this would be a more interesting blog post, but I think the final product is slightly underwhelming. Indeed, I thought about not posting it at all. In the end, I felt there might be some value to it, particularly since there might be those who disagree with my analysis. If you are one of them, I'd love to hear from you in the comments.)

Consider the following passage from Richard Dawkins’s book Unweaving the Rainbow:

We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born. The potential people who could have been here in my place but who will in fact never see the light of day outnumber the sand grains of Arabia. Certainly those unborn ghosts include greater poets than Keats, scientists greater than Newton. We know this because the set of possible people allowed by our DNA so massively exceeds the set of actual people. In the teeth of these stupefying odds it is you and I, in our ordinariness, that are here. We privileged few, who won the lottery of birth against all odds, how dare we whine at our inevitable return to that prior state from which the vast majority have never stirred?

As Steven Pinker points out in his recent book, this is a rhetorically powerful passage. It is robust, punchy and replete with evocative and dramatic imagery (“the sand grains of Arabia”, “unborn ghosts”, “teeth of these stupefying odds”). Indeed, so powerful is it that many non-religious people — Dawkins included — have asked for it to be read at their funerals (click here to see Dawkins read the passage at a public lecture to rapturous applause).

While I can certainly appreciate the quality of the writing, I am, alas, somewhat prone to “unweaving the rainbow” myself. If we stripped away the lyrical writing, what would we be left with? To be more precise, what kind of argument would we be left with? It seems to me that Dawkins is indeed trying to present some kind of argument: he has conclusions that he wants us to accept. Specifically, he wants us to be consoled by the fact that we are going to die; to stop whining about our deaths; to stop fearing our ultimate demise. And this is all because we are lucky to be alive. In this respect, I think that what Dawkins is doing is analogous to what the classic Epicurean writers did when they tried to soothe our death-related anxieties. But is his argument any good? That’s the question I will try to answer.

I’ll start by looking at the classic Epicurean arguments and draw out the analogy between them and what Dawkins is trying to do. Once that task is complete, I’ll try to formulate and evaluate Dawkins’s argument.


1. The Epicurean Tradition
There are two classic Epicurean arguments about death. The first comes from Epicurus himself; the second comes from Lucretius, who was a follower of Epicureanism. Epicurus’s argument is contained in the following passage:

Foolish, therefore, is the man who says that he fears death, not because it will pain when it comes, but because it pains in the prospect. Whatever causes no annoyance when it is present, causes only a groundless pain in the expectation. Death, therefore, the most awful of evils, is nothing to us, seeing that, when we are, death is not come, and, when death is come, we are not. It is nothing, then, either to the living or to the dead, for with the living it is not and the dead exist no longer 
(Epicurus, Letter to Menoeceus)

The argument is all about our attitude towards death (that is: the state of being dead, not the process of dying). Most people fear death. They think it among the greatest of the evils that befall us. But Epicurus is telling us they are wrong. The only things that are good or bad are conscious pleasure and pain. Death entails the absence of both. Therefore, death is not bad and we should stop worrying about it. I’ve discussed a more complicated version of this argument before, in case you are interested, but that’s the gist of it.

Let’s turn then to Lucretius’s argument. This one comes from a passage of De Rerum Natura, which I believe is the only piece of writing we have from Lucretius:

In days of old, we felt no disquiet... So, when we shall be no more — when the union of body and spirit that engenders us has been disrupted — to us, who shall then be nothing, nothing by any hazard will happen any more at all. Look back at the eternity that passed before we were born, and mark how utterly it counts to us as nothing. This is a mirror that Nature holds up to us, in which we may see the time that shall be after we are dead.

This argument builds upon that of Epicurus by adding a supporting analogy. This analogy asks us to compare the state of non-existence prior to our births with the state of non-existence after our deaths. Since the former is not something we worry about, so too should the latter “count to us as nothing”. This is sometimes referred to as the symmetry argument, because it argues that we should have a symmetrical attitude toward pre-natal and post-mortem non-existence. Some people think that Lucretius adds little to what Epicurus originally argued; some people think Lucretius’s argument has its own merits. Again, this is something I have discussed in more detail before.

I won’t assess the merits of either argument here. Instead, I’ll just highlight some general features. Note how both arguments try to call our attention to some “surprising fact”: the centrality of pain and pleasure to our well-being, in Epicurus’s case (this might be less radical now than it was in his day); and our attitude to pre-natal non-existence, in Lucretius’s case. Then note how they both use this surprising fact to reorient our perspective on death. They both claim that this surprising fact has the implication that we should not join the masses in fearing our deaths; instead, we should treat our deaths with equanimity.


2. Dawkins and the Argument from Genetic Luck
My feeling is that Dawkins is trying to do the same thing in his “We are going to die”-passage. Only in Dawkins’s case the “surprising fact” has nothing to do with conscious experience or our attitudes towards non-existence prior to birth; it has to do with the improbability of our existence in the first place.

So how should we interpret this argument? Look first to the wording. Dawkins seems to be concerned with those who spend their lives ‘whining’ about death. He thinks they don’t fully appreciate the rare ‘privilege’ they have in being alive at all, particularly when they compare their ordinariness to the set of possible people who could have existed. He tells them (actually all of us) that they are the “lucky ones” because they are going to die, not in spite of it.

This suggests that we could interpret Dawkins’s argument in something like the following form:


  • (1) If we are lucky to be alive, then we should not be upset by the fact that we are going to die.
  • (2) We are lucky to be alive.
  • (3) Therefore, we should not be upset by the fact that we are going to die.


How do the premises of this argument work? Let’s start with premise (1). The implication contained in the premise is that we should be grateful for the opportunity of being alive, even if that entails our deaths. This suggests that the argument is an argument from gratitude. He is telling us to be grateful for the rare privilege of dying. The problem I have with this is that gratitude has a somewhat uncertain place in a non-religious worldview. Gratitude is typically something we experience in our relationships with others. I am grateful to my parents for supporting me and paying for my education; I am grateful to my friends for buying me an expensive gift; and so on. If we think of our lives as being gifts from a benevolent creator, then being grateful, arguably, makes sense. But Dawkins is, famously, an atheist. So he must be relying on a different notion of gratitude. He must be saying that we should be grateful to the causal contingency of the natural order for allowing us to exist. But this seems perverse. The natural order is impersonal and uncaring: it just rolls along in accordance with certain causal laws. Why should we feel grateful to it? This same natural order is, after all, responsible for untold human suffering, e.g. suffering from natural disasters, viral infections, cancer and other unpleasantries. These are facets of the natural order that we tend not to accept. In fact, they are things we generally try to overcome. Why should we feel grateful for being plunged into a life filled with suffering of this sort? Couldn’t it be that death is one of the facets of life that we should use our ingenuity to overcome?

Now, I don’t want to be entirely dismissive of this line of argument. Michael Sandel and Michael Hauskeller have tried to articulate a secular, non-religious sense of gratitude that might fit with Dawkins’s argument (though I have my doubts). And I also don’t think that rejecting gratitude should lead us to resentment either. I don’t think resentment toward the natural order is any more appropriate than gratitude. Indeed, I suspect it may even be counter-productive. For example, if we take up the suggestion at the end of the previous paragraph — and think that we should use our ingenuity to overcome death — I suspect we will end up being pretty disappointed. That’s not to say that efforts to achieve life extension are to be rejected. It’s just to say that it’s probably unwise to make them the hinge upon which you hang all your hopes and aspirations. I tend to favour a more stoic attitude to the natural order, which involves adjusting one’s hopes and desires so that they are reconciled with the likelihood of death.

I think these criticisms point toward the untenability of Dawkins’s argument — at least insofar as it attempts to console us about our deaths. But for the sake of completeness, let’s also consider the second premise. Is it true to say that we are lucky to be alive? Dawkins spends more time addressing this issue in the passage. He uses an argument from genetic luck: the set of possible combinations of human DNA is vastly larger than the set of combinations that have actually been realised. Your particular combination of DNA is just a tiny, tiny slice of that probability space.

I would be inclined to accept this argument. I don’t doubt that the set of possible people is much larger than the set of actual people. The question, of course, is whether all members of that set are equiprobable. Dawkins seems to think that they are. Indeed, he seems to adopt something akin to the principle of indifference when assessing the probability of members of that set. Is this the right principle to adopt? I’m not sure. If one accepts causal determinism, then maybe my existence wasn’t lucky at all: it was causally predetermined by previous events. It could never have been any other way. Still, I don’t let the fact (if it is a fact) of causal determinism affect my probability judgments in relation to other, potentially causally determined, phenomena, like national or state lotteries. So it probably shouldn’t affect my judgment in this case either.
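For what it’s worth, the sheer scale of the gap Dawkins gestures at can be made vivid with a back-of-the-envelope calculation. The figures below (genome length, fraction of variable sites, number of humans ever born) are rough illustrative assumptions of my own, not numbers taken from the passage:

```python
from math import log10

# All figures are loose, illustrative assumptions, not values from Dawkins.
GENOME_LENGTH = 3_000_000_000   # base pairs in the human genome (approx.)
VARIABLE_FRACTION = 0.001       # roughly 0.1% of sites vary between individuals
HUMANS_EVER_BORN = 1e11         # a common demographic estimate

variable_sites = GENOME_LENGTH * VARIABLE_FRACTION

# Order of magnitude of the possible genotypes, assuming (crudely) just
# two alleles per variable site, chosen independently: 2 ** variable_sites
log10_possible_people = variable_sites * log10(2)
log10_actual_people = log10(HUMANS_EVER_BORN)

print(f"possible genomes: ~10^{log10_possible_people:,.0f}")
print(f"humans ever born: ~10^{log10_actual_people:.0f}")
```

Even on these deliberately conservative assumptions, the space of possible genomes dwarfs the set of actual people by hundreds of thousands of orders of magnitude, which is the sense in which premise (2) seems secure.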

In other words, I think premise (2) is okay. The real issue is with premise (1) and whether luck entails some change in our attitude toward death. As I said above, I don’t see why this has to be the case.

Sunday, September 21, 2014

Can blogging be academically valuable? Seven reasons for thinking it might be




I have been blogging for nearly five years (hard to believe). In that time, I’ve written over 650 posts on a wide variety of topics: religion, metaethics, applied ethics, philosophy of mind, philosophy of law, technology, epistemology, philosophy of science and so on. Since most of my posts clock in at around 2,000 words, I’d estimate that I have written over one million words. I also reckon I spend somewhere in the region of 10-15 hours per week working on the blog, sometimes more. The obvious question is: why?

Could it be the popularity? Well, I can’t deny that having a wide readership is part of the attraction, but if that’s the reason then I must be doing something wrong. The blog is only “sort of” popular. My google stats suggest that I’ll clear 1,000,000 views in the next month and a half (with a current average of 35,000 per month). My Bravenet counter suggests I’m still languishing in the low 700,000s. Since that is the more conservative estimate, I’m guessing it is closer to the truth, though for some reason it doesn’t always count hits I get from reddit (incidentally, I’d like to thank reddit user Snow_Mandolaria — whoever you are — for regularly linking to my blog on there). These aren’t insignificant figures, but they aren’t hugely impressive either (I should add, though, that I get many more readers from reposts of my material on other websites, particularly the IEET page).

But if it’s not the popularity, what could it be? There are two answers to that. The first is that I genuinely enjoy reading about interesting topics, trying to analyse and understand them, and then sharing my analysis with others (however many of those others there may be). The second is that I think blogging has been (enormously) beneficial to me in my work as an academic. Most of my colleagues are sceptical about this. They tend to think that the time spent on this blog represents a significant opportunity cost. For better or worse, the incentive-structure in modern academia is geared around producing high-quality, peer-reviewed publications (and also, though to a lesser extent, around high-quality teaching). The time I spend churning out blog posts could be spent working on publications for peer-review. Imagine if instead of writing one million words-worth of blog posts I had written one-tenth as many words for peer review. That would represent a significant number of peer-reviewed publications (maybe as many as ten). They also tend to think I don’t specialise enough in what I write about on here: I’m employed in a law school, and so I should be writing about my research in law. In writing about other topics, I reduce my cachet as a legal scholar.

Now, I think some of this scepticism is warranted. There is no doubt that sometimes I feel like the time spent working on this blog could be better spent working on something else. There is also no doubt that I don’t have the ideal perspective on this issue: I’ve been blogging throughout my academic career, so I don’t have any “control” state to compare what I do now with what I could otherwise be doing. Nevertheless, I think that blogging has actually enhanced the work I do on a day-to-day basis, and I want to give my main reasons for thinking this is true. This may be interesting to others who are thinking about blogging:


1. It helps to build the habit of writing: Writing is often hard. I don’t know why this is, but it is. Academics (and others) often struggle with their writing projects. I know I do. One thing that blogging has helped me with is building a regular habit of writing into my daily routine. So much so that I now miss it if I don’t write for a couple of days. There is a downside to this too: blogging is a highly addictive form of writing. But I think the downside is outweighed by the upside.

2. It helps to generate writing flow states: I appreciate that the term “flow” state is something of a buzzword. Still, it has a basis in psychological science and it is something that blogging can help generate. The psychological barriers to writing a blog post are much lower than the psychological barriers to writing an article for peer review. Yet, when writing the former you can get into a flow state that can then be leveraged into writing the latter. Many is the time that I have finished writing a blog post and jumped straight into writing a more serious article.

3. It helps you to really understand your area of research: This is a big one. Before I started this blog I don’t think I really understood the arguments and ideas I was reading about in my research. It was only when I tried to explain those ideas to others, through my writing, that I saw exactly why the ideas were important, how the arguments were built, and where their flaws were located.

4. It allows you to systematically develop the elements of a research article: This is also a big one. One of the main ways in which I have leveraged this blog into my academic work is by using a series of blog posts to develop the understanding and analysis I need to write a peer-reviewed article. The best example of this might be the article I published in Neuroethics last year, entitled “Hyperagency and the Good Life — Does Extreme Enhancement Threaten Meaning?”. That article was a systematic analysis and critique of so-called “hyperagency” objections to the enhancement project. The centrepiece of the article is a critique of four different authors, along with an endorsement of two others. The work of every one of those authors was originally the subject of one or more blog posts. Each blog post stood alone; nevertheless, they each helped to build the final product. I have done this on other occasions too, though maybe in less obvious ways.

5. It enables you to acquire serendipitous research interests: In addition to using the blog in a systematic way when writing peer-reviewed articles, I have also found it helps to develop serendipitous interests, which can later be used in peer-reviewed articles. There are actually several examples of this on this blog. One would be my article from Criminal Law and Philosophy entitled “Kramer’s Purgative Rationale for Capital Punishment: A Critique”. The idea for that article originally came from a two-part overview of the book that I wrote on the blog. Similarly, my article in the International Journal for Philosophy of Religion, entitled “Skeptical Theism and Divine Permission: A Response to Anderson”, was made possible by a series of posts on the topic of sceptical theism. At no time in the original writing of these posts did I think they would lead to peer-reviewed publications. And yet that’s exactly what happened.

6. It helps with networking and developing contacts: The blog posts I write follow a common structure. I read somebody’s article or book and then I write about it. Sometimes what I write is laudatory, sometimes it is critical. Either way, the people I write about tend to appreciate it and many get in contact with me as a result (I have rarely contacted them). This has led to a number of useful discussions, invites to give talks, and it has helped me to develop a network of people who are interested in similar topics and are willing to give feedback on the articles I’m trying to write for peer review. Thus, for example, without this blog I would not have got in touch with people like Felipe Leon, Brian Earp, Stephen Maitzen, Michael Hauskeller, Nick Agar and Nicole Vincent (among others) - all of whom have provided feedback on draft versions of articles I have subsequently published.

7. And yes, it also helps with teaching: It’s perhaps unfair to relegate this to last on my list, but that’s not intended as a reflection on its importance or significance. Blogging has really helped me to develop the in-depth knowledge I need for teaching. I don’t blog about everything I teach, but several of the subjects I have taught in the past have started life as blog posts. The best examples of this are, perhaps, my posts on the philosophy of mental illness and the ethics of prostitution, both of which provided material for classes I later taught. But these are just two examples. I couldn’t even begin to quantify the number of other posts I have used in my teaching. I should also note that I use the blog as a supplement to my classes. Thus, whenever it is relevant, I will direct my students to the blog to learn more about a given topic. Some have even been inspired to start their own blogs.


As you might be able to tell, many of these reasons suggest that it is possible to avoid the “opportunity cost” problem highlighted by my peers. By using the blog in a somewhat instrumentalist way, you can actually kill two birds with the one stone. In other words, you can use blogging to support teaching and research, not as a distraction therefrom.

Saturday, September 20, 2014

Media Coverage of my article "Sex Work, Tech Unemployment and the Basic Income Guarantee"


A few months back I published an article in the Journal of Evolution and Technology entitled "Sex Work, Technological Unemployment and the Basic Income Guarantee". The article was part of a symposium on future impacts of technology on employment and the potential need to reform social welfare. The article explored the possible effects of advances in robotics on the sex work industry. I summarised the arguments in a blog post entitled "Will Sex Workers be Replaced by Robots? A Precis".

Unsurprisingly (I guess), this article has proven quite popular, and has been discussed on a number of other websites. As an exercise in shameless navel-gazing, I thought I would try to collect links to all those websites here. If you know of any more, please let me know:


  • Pando Daily - "Academics Dream of Electric Sex Workers" - Another slightly negative, or at least tongue-in-cheek take on the topic. Based partly on an interview with me, but featuring a number of inaccuracies (e.g. it's not true to say that the majority of research on this is conducted by "stodgy British scholars". We may be stodgy, but as far as I am aware most of us aren't British)
  • Ekstra Bladet - "Sexrobotter tager ludernes arbejde" - Definitely NSFW! I have no idea what this says, but apparently Ekstra Bladet is a Danish tabloid, and one of the more popular Danish news websites.

It is noticeable that few of these mention the basic income part of the article.




Friday, September 19, 2014

Chalmers vs Pigliucci on the Philosophy of Mind-Uploading (2): Pigliucci's Pessimism



(Part One)

This is the second and final part of my series about a recent exchange between David Chalmers and Massimo Pigliucci. The exchange took place in the pages of Intelligence Unbound, an edited collection of essays about mind-uploading and artificial intelligence. It concerned the philosophical plausibility of mind-uploading.

As we saw in part one, there were two issues up for debate:

  • The Consciousness Issue: Would an uploaded mind be conscious? Would it experience the world in a roughly similar manner to how we now experience the world?
  • The Identity/Survival Issue: Assuming it is conscious, would it be our consciousness (our identity) that survives the uploading process?


David Chalmers was optimistic on both fronts. Adopting a functionalist theory of consciousness, he saw no reason to think that a functional isomorph of the human brain would not be conscious. Not unless we assume that biological material has some sort of magic consciousness-conferring property. And while he had his doubts about survival via destructive or non-destructive uploading, he thought that a gradual replacement of the human brain, with functionally equivalent artificial components, could allow for our survival.

As we will see today, Pigliucci is much more pessimistic. He thinks it is unlikely that uploads would be conscious, and, even if they are, he thinks it is unlikely that we would survive the uploading process. He offers four reasons to doubt the prospect of conscious uploads, two based on criticisms of the computational theory of mind, and two based on criticisms of functionalism. He offers one main reason to doubt survival. I will suggest that some of his arguments have merit, some don’t, and some fail to engage with the arguments put forward by Chalmers.


1. Pigliucci’s Criticisms of the Computational Theory of Mind
Pigliucci assumes that the pro-uploading position depends on a computational theory of mind (and, more importantly, a computational theory of consciousness). According to this theory, consciousness is a property (perhaps an emergent property) of certain computational processes. Pigliucci believes that if he can undermine the computational theory of mind, then so too can he undermine any optimism we might have about conscious uploads.

To put it more formally, Pigliucci thinks that the following argument will work against Chalmers:


  • (1) A conscious upload is possible only if the computational theory of mind is correct.
  • (2) The computational theory of mind is not correct (or, at least, it is highly unlikely to be correct).
  • (3) Therefore, (probably) conscious uploads are not possible.


Pigliucci provides two reasons for us to endorse premise (2). The first is a — somewhat bizarre — appeal to the work of Jerry Fodor. Fodor was one of the founders of the computational theory of mind. But Fodor has, in subsequent years, pushed back against the overreach he perceives among computationalists. As Pigliucci puts it:

[Fodor distinguishes] between “modular” and “global” mental processes, and [argues] that [only] the former, but not the latter (which include consciousness), are computational in any strong sense of the term…If Fodor is right, then the CTM [computational theory of mind] cannot be a complete theory of mind, because there are a large number of mental processes that are not computational in nature. 
(Intelligence Unbound, p. 123)


In saying this, Pigliucci explicitly references Fodor’s book-length response to the work of Steven Pinker, called The Mind Doesn’t Work that Way: The Scope and Limits of Computational Psychology. I can’t say I’m a huge fan of Fodor, but even if I were I would find Pigliucci’s argument pretty unsatisfying. It is, after all, little more than a bare appeal to authority, neglecting to mention any of the detail of Fodor’s critique. It also neglects to mention that Fodor’s particular understanding of computation is disputed. Indeed, Pinker disputed it in his response to Fodor, which Pigliucci doesn’t cite and which you can easily find online. Now, my point here is not to defend the computational theory, or to suggest that Pinker is correct in his criticisms of Fodor; it is merely to suggest that appealing to the work of Fodor isn’t going to be enough. Fodor may have done much to popularise the computational theory, but he doesn’t have final authority on whether it is correct or not.


Let’s move on then to Pigliucci’s second reason to endorse premise (2). This one claims that the computational theory rests on a mistaken understanding of the Church-Turing thesis about universal computability. Citing the work of Jack Copeland — an expert on Turing, whose biography of Turing I recently read and recommend — Pigliucci notes that the thesis only establishes that logical computing machines (Turing Machines) “can do anything that can be described as a rule of thumb or purely mechanical (“algorithmic”)”. It does not establish that “whatever can be calculated by a machine (working on finite data in accordance with a finite program of instructions) is Turing-machine-computable”. This is said to be a problem because proponents of the computational theory of mind have tended to assume that “Church-Turing has essentially established the CTM”.

I may not be well-qualified to evaluate the significance of this point, but it seems pretty thin to me. I think it relies on an impoverished notion of computation. It assumes that computationalists, and by proxy proponents of mind-uploading, think that a mind could be implemented on a classic digital computer architecture. While some may believe that, it doesn’t strike me as being essential to their claims. I think there is a broader notion of computation that could avoid his criticisms. To me, a computational theory is one that assumes mental processes (including, ultimately, conscious mental processes) could be implemented in some sort of mechanical architecture. The basis for the theory is the belief that mental states involve the representation of information (in either symbolic or analog forms) and that mental processes involve the manipulation and processing of the represented information. I see nothing in Pigliucci’s comments about the Church-Turing thesis that upsets that model. Pigliucci actually did a pretty good podcast on broader definitions of computation with Gerard O’Brien. I recommend it if you want to learn more.




In summary, I think Pigliucci’s criticisms of the computational theory are off-the-mark. Nevertheless, I concede that the broader sense of computation may in turn collapse into the broader theory of functionalism. This is where the debate is really joined.


2. Pigliucci’s Criticisms of Functionalism
And I think Pigliucci is on firmer ground when he criticises functionalism. Admittedly, he doesn’t distinguish between functionalism and computationalism, but I think it is possible to separate out his criticisms. Again, there are two criticisms with which to contend. To understand them, we need to go back to something I mentioned in part one. There, I noted how Chalmers seemed to help himself to a significant assumption when defending the possibility of a conscious upload. The assumption was that we could create a “functional isomorph” of the brain. In other words, an artificial model that replicated all the relevant functional attributes of the human brain. I questioned whether it was possible to do this. This is something that Pigliucci also questions.

We can put the criticism like this:


  • (8) A conscious upload is possible only if we know how to create a functional isomorph of the brain.
  • (9) But we do not know what it takes to create a functional isomorph of the brain.
  • (10) Therefore, a conscious upload is not possible.



Pigliucci adduces two reasons for us to favour premise (9). The first has to do with the danger of conflating simulation with function. This hearkens back to his criticism of the computational theory, but can be interpreted as a critique of functionalism. The idea is that when we create functional analogues of real-world phenomena we may only be simulating them, not creating models that could take their place. The classic example here would be a computer model of rainfall or of photosynthesis. The computer models may be able to replicate those real-world processes (i.e. you might be able to put the elements of the models in a one-to-one relationship with the elements of the real-world phenomena), but they would still lack certain critical properties: they would not be wet or capable of converting sunlight into food. They would be mere simulations, not functional isomorphs. I agree with Pigliucci that the conflation of simulation with function is a real danger when it comes to creating functional isomorphs of the brain.

Pigliucci’s second reason has to do with knowing the material constraints on consciousness. Here he draws on an analogy with life. We know that we are alive and that our being alive is the product of the complex chemical processes that take place in our body. The question is: could we create living beings from something other than this complex chemistry? Pigliucci notes that life on earth is carbon-based and that the only viable alternative is some kind of silicon-based life (because silicon is the only other element that would be capable of forming similarly complex molecule chains). So the material constraints on creating functional isomorphs of current living beings are striking: there are only two forms of chemistry that could do the trick. This, Pigliucci suggests, should provide some fuel for scepticism about creating isomorphs of the human brain:

[This] scenario requires “only” a convincing (empirical) demonstration that, say, silicon-made neurons can function just as well as carbon-based ones, which is, again, an exclusively empirical question. They might or might not, we do not know. What we do know is that not just any chemical will do, for the simple reason that neurons need to be able to do certain things (grow, produce synapses, release and respond to chemical signals) that cannot be done if we alter the brain’s chemistry too radically. 
(Intelligence Unbound, p 125)

I don’t quite buy the analogy with life. I think we could create wholly digital living beings (indeed, we may even have done so) though this depends on what counts as “life”, which is a question Pigliucci tries to avoid. Still, I think the point here is well-taken. There is a lot going on in the human brain. There are a lot of moving parts, a lot of complex chemical mechanisms. We don’t know exactly which elements of this complex machinery need to be replicated in our functional isomorph. If we replicate everything then we are just creating another biological brain. If we don’t, then we risk missing something critical. Thus, there is a significant hurdle when it comes to knowing whether our upload will share the consciousness of its biological equivalent. It has been a while since I read it, but as I recall, John Bickle’s work on the philosophy of neuroscience develops this point about biological constraints quite nicely.




This epistemic hurdle is heightened by the hard problem of consciousness. We are capable of creating functional isomorphs of some biological organs. For example, we can create functional isomorphs of the human heart, i.e. mechanical devices that replicate the functionality of the heart. But that’s because everything we need to know about the functionality of the heart is externally accessible (i.e. accessible from the third-person perspective). Not everything about consciousness is accessible from that perspective.


3. Pigliucci on the Identity Question
After his lengthy discussion of the consciousness issue, Pigliucci has rather less to say about the identity issue. This isn’t surprising. If you don’t think an upload is likely to be conscious, then you are unlikely to think that it will preserve your identity. But Pigliucci is sceptical even if the consciousness issue is set to the side.

His argument focuses on the difference between destructive and non-destructive uploading. The former involves three steps: brain scan, mechanical reconstruction of the brain, and destruction of the original brain. The latter just involves the first two of those steps. Most people would agree that in the latter case your identity is not transferred to the upload. Instead, the upload is just a copy or clone of you. But if that’s what they believe about the latter case, why wouldn’t they believe it about the former too? As Pigliucci puts it:

[I]f the only difference between the two cases is that in one the original is destroyed, then how on earth can we avoid the conclusion that when it comes to destructive uploading we just committed suicide (or murder, as the case may be)? After all, ex hypothesi there is no substantive differences between destructive and non-destructive uploading in terms of end results…I realize, of course, that to some philosophers this may seem far too simple a solution to what they regard as an intricate metaphysical problem. But sometimes even philosophers agree that problems need to be dis-solved, not solved [he then quotes from Wittgenstein]. 
(Intelligence Unbound, p. 128)

Pigliucci may be pleased with this simple, common-sensical solution to the identity issue, but I am less impressed. This is for two reasons. First, Chalmers made the exact same argument in relation to non-destructive and destructive uploading — so Pigliucci isn’t adding anything to the discussion here. Second, this criticism ignores the gradual uploading scenario. It was that scenario that Chalmers thought might allow for identity to be preserved. So I’d have to say Pigliucci has failed to engage the issue. If this were a formal debate, the points would go to Chalmers. That’s not to say that Chalmers is right; it’s just to say that we have been given no reason to suppose he is wrong.


4. Conclusion
To sum up, Pigliucci is much more pessimistic than Chalmers. He thinks it unlikely that an upload would be conscious. This is because the computational theory of mind is flawed, and because we don’t know what the material constraints on consciousness might be. He is also pessimistic about the prospect of identity being preserved through uploading, believing it is more likely to result in death or duplication.


I have suggested that Pigliucci may be right when it comes to consciousness: whatever the merits of the computational theory of mind, it is true that we don’t know what it would take to build a functional isomorph of the human brain. But I have also suggested that he misses the point when it comes to identity.

Wednesday, September 17, 2014

Chalmers vs Pigliucci on the Philosophy of Mind-Uploading (1): Chalmers's Optimism

Chalmers's Image from TedxSydney - Pigliucci from his CUNY Profile Page

The brain is the engine of reason and the seat of the soul. It is the substrate in which our minds reside. The problem is that this substrate is prone to decay. Eventually, our brains will cease to function and along with them so too will our minds. This will result in our deaths. Little wonder then that the prospect of transferring (or uploading) our minds to a more robust, technologically advanced, substrate has proved so attractive to futurists and transhumanists.

But is it really feasible? This is a question I’ve looked at many times before, but the recent book Intelligence Unbound: The Future of Uploaded and Machine Minds offers perhaps the most detailed, sophisticated and thoughtful treatment of the topic. It is a collection of essays, from a diverse array of authors, probing the key issues from several different perspectives. I highly recommend it.

Within its pages you will find a pair of essays debating the philosophical aspects of mind-uploading (you’ll find others too, but I want to zero in on this pair because one is a direct response to the other). The first of those essays comes from David Chalmers and is broadly optimistic about the prospect of mind-uploading. The second comes from Massimo Pigliucci and is much less enthusiastic. In this two-part series of posts, I want to examine the debate between Chalmers and Pigliucci. I start by looking at Chalmers’s contribution.


1. Methods of Mind-Uploading and the Issues for Debate
Chalmers starts his essay by considering the different possible methods of mind-uploading. This is useful because it helps to clarify — to some extent — exactly what we are debating. He identifies three different methods (note: in a previous post I looked at work from Sim Bamford suggesting that there were more methods of uploading, but we can ignore those other possibilities for now):

Destructive Uploading: As the name suggests, this is a method of mind-uploading that involves the destruction of the original (biological) brain. An example would be uploading via serial sectioning. The brain is frozen and its structure is analyzed layer by layer. From this analysis, one builds up a detailed map of the connections between neurons (and, if necessary, glial cells). This information is then used to build a functional computational model of the brain.

Gradual Uploading: This is a method of mind-uploading in which the original copy is gradually replaced by functionally equivalent components. One example of this would be nanotransfer. Nanotechnology devices could be inserted into the brain and attached to individual neurons (and other relevant cells if necessary). They could then learn how those cells work and use this information to simulate the behaviour of the neuron. This would lead to the construction of a functional analogue of the original neuron. Once the construction is complete, the original neuron can be destroyed and the functional analogue can take its place. This process can be repeated for every neuron, until a complete copy of the original brain is constructed.

Nondestructive Uploading: This is a method of mind-uploading in which the original copy is retained. Some form of nanotechnology brain-scanning would be needed for this. This would build up a dynamical map of current brain function — without disrupting or destroying it — and use that dynamical map to construct a functional analogue.


Whether these forms of uploading are actually technologically feasible is anyone’s guess. They are certainly not completely implausible. I can imagine a model of the brain being built from a highly detailed scan and analysis. It might take a huge amount of computational power and technical resources, but it seems within the realm of technological possibility. The deeper question is whether our minds would really survive the process. This is where the philosophical debate kicks in.

There are, in fact, two philosophical issues to debate:

The Consciousness Issue: Would the uploaded mind be conscious? Would it experience the world in a roughly similar manner to how we now experience the world?

The Identity/Survival Issue: Assuming it is conscious, would it be our consciousness (our identity) that survives the uploading process? Would our identities be preserved?

The two issues are connected. Consciousness is valuable to us. Indeed, it is arguably the most valuable thing of all: it is what allows us to enjoy our interactions with the world, and it is what confers moral status upon us. If consciousness was not preserved by the mind-uploading process, it is difficult to see why we would care. So consciousness is a necessary condition for a valuable form of mind-uploading. That does not, however, make it a sufficient condition. After all, two beings can be conscious without sharing any important connection (you are conscious, and I am conscious, but your consciousness is not valuable to me in the same way that it is valuable to you). What we really want to preserve through uploading is our individual consciousnesses. That is to say: the stream of conscious experiences that constitutes our identity. But would this be preserved?

These two issues form the heart of the Chalmers-Pigliucci debate.


2. Would consciousness survive the uploading process?
So let’s start by looking at Chalmers’s take on the consciousness issue. Chalmers is famously one of the New Mysterians, a group of philosophers who doubt our ability to have a fully scientific theory of consciousness. Indeed, he coined the term “the Hard Problem” of consciousness to describe the difficulty we have in accounting for the first-personal quality of conscious experience. Given his scepticism, one might have thought he’d have doubts about the possibility of creating a conscious upload. But he actually thinks we have reason to be optimistic.

He notes that there are two leading contemporary views about the nature of consciousness (setting non-naturalist theories to the side). The first — which he calls the biological view — holds that consciousness is only instantiated in a particular kind of biological system: no nonbiological system is likely to be conscious. The second — which he (and everyone else) calls the functionalist view — holds that consciousness is instantiated in any system with the right causal structure and causal roles. The important thing is that the functionalist view allows for consciousness to be substrate independent, whereas the biological view does not. Substrate independence is necessary if an upload is going to be conscious.

So which of these views is correct? Chalmers favours the functionalist view and he has a somewhat elaborate argument for this. The argument starts with a thought experiment. The thought experiment comes in two stages. The first stage asks us to imagine a “perfect upload of a brain inside a computer” (p. 105), by which is meant a model of the brain in which every relevant component of a biological brain has a functional analogue within the computer. This computer-brain is also hooked up to the external world through the same kinds of sensory input-output channels. The result is a computer model that is a functional isomorph of a real brain. Would we doubt that such a system was conscious if the real brain was conscious?

Maybe. That brings us to the second stage of the thought experiment. Now, we are asked to imagine the construction of a functional isomorph through gradual uploading:

Here we upload different components of the brain one by one, over time. This might involve gradual replacement of entire brain areas with computational circuits, or it might involve uploading neurons one at a time. The components might be replaced with silicon circuits in their original location…It might take place over months or years or over hours.

If a gradual uploading process is executed correctly, each new component will perfectly emulate the component it replaces, and will interact with both biological and nonbiological components around it in just the same way that the previous component did. So the system will behave in exactly the same way that it would have without the uploading. 
(Intelligence Unbound pp. 105-106)

Critical to this exercise in imagination is the fact that the process results in a functional isomorph and that you can make the process exceptionally gradual, both in terms of the time taken and the size of the units being replaced.

With the building blocks in place, we now ask ourselves the critical question: if we were undergoing this process of gradual replacement, what would happen to our conscious experience? There are three possibilities: it would suddenly stop, it would gradually fade out, or it would be retained. The first two possibilities are consistent with the biological view of consciousness; the last is only consistent with the functionalist view. Chalmers’s argument is that the last possibility is the most plausible.

In other words, he defends the following argument:


  • (1) If the parts of our brain are gradually replaced by functionally isomorphic components, our conscious experience will either: (a) be suddenly lost; (b) gradually fade out; or (c) be retained throughout.
  • (2) Sudden loss and gradual fadeout are not plausible; retention is.
  • (3) Therefore, our conscious experience is likely to be retained throughout the process of gradual replacement.
  • (4) Retention of conscious experience is only compatible with the functionalist view.
  • (5) Therefore, the functionalist view is likely to be correct; and preservation of consciousness via mind-uploading is plausible.


Chalmers adds some detail to the conclusion, which we’ll talk about in a minute. The crucial thing for now is to focus on the key premise, number (2). What reason do we have for thinking that retention is the only plausible option?

With regard to sudden loss, Chalmers makes a simple argument. If we were to suppose, say, that the replacement of the 50,000th neuron led to the sudden loss of consciousness, we could break down the transition point into ever more gradual steps. So instead of replacing the 50,000th neuron in one go, we could divide the neuron itself into ten sub-components and replace them gradually and individually. Are we to suppose that consciousness would suddenly be lost in this process? If so, then break down those sub-components into other sub-components and start replacing them gradually. The point is that eventually we will reach some limit (e.g. when we are replacing the neuron molecule by molecule) where it is implausible to suppose that there will be a sudden loss of consciousness (unless you believe that one molecule makes a difference to consciousness: a belief that is refuted by reality since we lose brain cells all the time without thereby losing consciousness). This casts the whole notion of sudden loss into doubt.

With regard to gradual fadeout, the argument is more subtle. Remember, it is critical to Chalmers’s thought experiment that the upload is functionally isomorphic to the original brain: for every brain state that used to be associated with conscious experience there will be a functionally equivalent state in the uploaded version. If we accept gradual fadeout, we would have to suppose that, despite this equivalence, there is a gradual loss of certain conscious experiences (e.g. the ability to experience black and white, or certain high-pitched sounds). Chalmers argues that this is implausible because it asks us to imagine a system that is deeply out of touch with its own conscious experiences. I find this slightly unsatisfactory insofar as it may presuppose the functionalist view that Chalmers is trying to defend.

But, in any event, Chalmers suggests that the process of partial uploading will convince people that retention of consciousness is likely. Once we have friends and family who have had parts of their brains replaced, and who seem to retain conscious experience (or, at least, all outward signs of having conscious experience), we are likely to accept that consciousness is preserved. After all, I don’t doubt that people with cochlear or retinal implants have some sort of aural or visual experiences. Why should I doubt it if other parts of the brain are replaced by functional equivalents?



Chalmers concludes with the suggestion that all of this points to the likelihood of consciousness being an organizational invariant. What he means by this is that systems with the exact same patterns of causal organization are likely to have the same states of consciousness, no matter what those systems are made of.

I’ll hold off on the major criticisms until part two, since this is the part of the argument about which Pigliucci has the most to say. Nevertheless, I will make one comment. I’m inclined towards functionalism myself, but it seems to me that in crafting the thought experiment that supports his argument, Chalmers helps himself to a pretty colossal assumption. He assumes that we know (or can imagine) what it takes to create a “perfect” functional analogue of a conscious system like the brain. But, of course, we don’t really know what it takes. Any functional model is likely to simplify and abstract from the messy biological details. The problem is knowing which of those details are critical for ensuring functional equivalence. We can create functional models of the heart because all the critical elements of the heart are determinable from a third-person perspective (i.e. we know, from that perspective, what is necessary to make the blood pump). That doesn’t seem to be the case with consciousness. In fact, that’s what Chalmers’s Hard Problem is supposed to highlight.


3. Will our identities be preserved? Will we survive the process?
Let’s assume Chalmers is right to be optimistic about consciousness. Does that mean he is right to be optimistic about identity/survival? Will the uploaded mind be the same as we are? Will it share our identity? Chalmers has more doubts about this, but again he sees some reason to be optimistic.

He starts by noting that there are three different philosophical approaches to personal identity. The first is biologism (or animalism), which holds that preservation of one’s identity depends on the preservation of the biological organism that one is. The second is psychological continuity, which holds that preservation of one’s identity depends on maintaining threads of overlapping psychological states (memories, beliefs, desires etc.). The third, slightly more unusual, is Robert Nozick’s “closest continuer” theory, which holds that preservation of identity depends on the existence of a closely-related subsequent entity (where “closeness” is defined in various ways).

Chalmers then defends two different arguments. The first gives some reason to be pessimistic about survival, at least in the case of destructive and nondestructive forms of uploading. The second gives some reason to be optimistic, at least in the case of gradual uploading. The end result is a qualified optimism about gradual uploading.

Let’s start with the pessimistic argument. Again, it involves a thought experiment. Imagine a man named Dave. Suppose that one day Dave undergoes a nondestructive uploading process. A copy of his brain is made and uploaded to a computer, but the biological brain continues to exist. There are, thus, two Daves: BioDave and DigiDave. It seems natural to suppose that BioDave is the original, and his identity is preserved in this original biological form; and it is equally natural to suppose that DigiDave is simply a branchline copy. In other words, it seems natural to suppose that BioDave and DigiDave have separate identities.

But now suppose we imagine the same scenario, only this time the original biological copy is destroyed. Do we have any reason to change our view about identity and survival? Surely not. The only difference this time round is that BioDave is destroyed. DigiDave is the same as he was in the original thought experiment. That suggests the following argument (numbering follows on from the previous argument diagram):


  • (9) In nondestructive uploading, DigiDave is not identical to Dave.
  • (10) If in nondestructive uploading, DigiDave is not identical to Dave, then in destructive uploading, DigiDave is not identical to Dave.
  • (11) In destructive uploading, DigiDave is not identical to Dave.


This looks pretty sound to me. And as we shall see in part two, Pigliucci takes a similar view. Nevertheless, there are two possible ways to escape the conclusion. The first would be to deny premise (10) by adopting the closest continuer theory of personal identity. The idea then would be that in destructive (but not non-destructive) uploading DigiDave is the closest continuer and hence the vessel in which identity is preserved. I think this simply reveals how odd the closest continuer theory really is.

The other option would be to argue that this is a fission case. It is a scenario in which one original identity fissions into two subsequent identities. The concept of fissioning identities was originally discussed by Derek Parfit in the case of severing and transplanting of brain hemispheres. In the brain hemisphere case, some part of the original person lives on in two separate forms. Neither is strictly identical to the original, but they do stand in “relation R” to the original, and that relation might be what is critical to survival. It is more difficult to say that nondestructive uploading involves fissioning. But it might be the best bet for the optimist. The argument then would be that the original Dave survives in two separate forms (BioDave and DigiDave), each of which stands in relation R to him. But I’d have to say this is quite a stretch, given that BioDave isn’t really some new entity. He’s simply the original Dave with a new name. The new name is unlikely to make an ontological difference.




Let’s now turn our attention to the optimistic argument. This one requires us to imagine a gradual uploading process. Fortunately, we’ve done this already so you know the drill: imagine that the subcomponents of the brain are replaced gradually (say 1% at a time), over a period of several years. It seems highly likely that each step in the replacement process preserves identity with the previous step, which in turn suggests that identity is preserved once the process is complete.

To state this in more formal terms:


  • (14) For all n < 100, Dave(n+1) is identical to Dave(n).
  • (15) If for all n < 100, Dave(n+1) is identical to Dave(n), then Dave(100) is identical to Dave.
  • (16) Therefore, Dave(100) is identical to Dave.


If you’re not convinced by this 1%-at-a-time version of the argument, you can adjust it until it becomes more persuasive. In other words, setting aside certain extreme physical and temporal limits, you can make the process of gradual replacement as slow as you like. Surely there is some point at which the degree of change between the steps becomes so minimal that identity is clearly being preserved? If not, then how do you explain the fact that our identities are being preserved as our body cells replace themselves over time? Maybe you explain it by appealing to the biological nature of the replacement.  But if we have functionally equivalent technological analogues it’s difficult to see where the problem is.



Chalmers adds other versions of this argument. These involve speeding up the process of replacement. His intuition is that if identity is preserved over the course of a really gradual replacement, then it may well be preserved over a much shorter period of replacement too, for example one that takes a few hours or a few minutes. That said, there may be important differences when the process is sped up. It may be that too much change takes place too quickly and the new components fail to smoothly integrate with the old ones. The result is a break in the strands of continuity that are necessary for identity-preservation. I have to say I would certainly be less enthusiastic about a fast replacement. I would like the time to see whether my identity is being preserved following each replacement.


4. Conclusion
That brings us to the end of Chalmers’ contribution to the debate. He says more in his essay, particularly about cryopreservation, and the possible legal and social implications of uploading. But there is no sense in addressing those topics here. Chalmers doesn’t develop his thoughts at any great length and Pigliucci wisely ignores them in his reply. We’ll be discussing Pigliucci’s reply in part two.

Sunday, September 14, 2014

Are hierarchical theories of freedom and responsibility plausible?




In order to be responsible for your actions, you must be free. Or so it is commonly believed. But what exactly does it mean to be free? One popular view holds that freedom consists in the ability to do otherwise. That is to say: the ability to choose among alternative possible futures. This popular view runs into a host of problems, the most obvious being that it is inconsistent with causal determinism.

This has led several authors to propose alternative hierarchical theories of freedom. According to these theories, an action is free when it is consistent with an agent’s higher-order, reflective desires. The idea is that sometimes we have impulsive, non-reflective desires that are not consistent with the kinds of people we really want (or believe) ourselves to be. I, for example, currently desire a piece of cake. But I have also, in my more reflective moments, committed to losing weight because I want to be a skinny person (note: this is just a hypothetical). Consequently, acting on my impulsive desire for cake would be inconsistent with my higher-order preference for being skinny. That would be the essence of unfreedom.

Hierarchical theories of freedom have many attractive features. They are consistent with determinism, and they speak to the core belief that in order for an action to be free it must belong to us in some respect. But do they provide a compelling account of responsibility? In his book, Against Moral Responsibility, Bruce Waller argues that they don’t. Indeed, he argues that the overwhelming belief that freedom and moral responsibility are connected has led people to propose deeply flawed theories of freedom and responsibility. The hierarchical theory is just one particularly good example of this.

In this post, I want to review Waller’s main arguments. I do so largely as an attempt to better understand his critique. I have, in the past, endorsed hierarchical theories, but have recently become more sceptical. Waller’s critique proceeds in three stages, each one looking at a variant of the hierarchical theory from a different theorist — Harry Frankfurt, Gerald Dworkin and Susan Wolf, respectively. I’ll sketch each of these three stages in what follows.


1. Frankfurt’s Theory and the Implausibility of the Hierarchical Approach
Harry Frankfurt’s 1971 article, “Freedom of the will and the concept of a person”, is perhaps the classic work on hierarchical theories of freedom. It proposes the simplest, and arguably most compelling, of the hierarchical theories. This is the one I laid out in the introduction. It claims that freedom consists simply in doing whatever is consistent with one’s second-order preferences. And what exactly is the difference between a first and second (or higher)-order preference? The answer is roughly as follows:

First-Order Desire: Is expressible in the form “A wants to X”, where “X” is some particular action (like eating cake).
Second-Order Desire: Is expressible in the form “A wants to have (or not to have) desire D”, where “D” is some first-order desire (like wanting to want to eat cake, or, in my case, wanting not to want to eat cake).

So any particular action is free when the desire motivating the action is endorsed by a higher order preference to want to have that desire.

The problem with this simple theory is that it appears to have troubling implications. It implies that certain people who we would not ordinarily classify as being free are in fact free. In particular, it implies that people who are gripped by compulsive desires are, sometimes, free. Frankfurt embraces this implication when he distinguishes between two kinds of drug addict. Ordinarily, we would be inclined to say that the drug addict is not free: she is controlled by her first-order desires to take a drug. But Frankfurt says this is not always true. There are unwilling addicts and willing addicts. Unwilling addicts follow their first-order desires, even though these are out of line with what they really want to want (a good job, a stable family life etc.). Willing addicts fully endorse their first-order desire for drugs: they want to want them. They are truly free (and, by implication, responsible for what they do).

Waller argues that this is absurd, particularly when we bear in mind the typical history of the willing addict. Consider three counterexamples:

Willing Addict: Peter starts taking drugs in college. He initially believes himself to be in control of his desire, saying “I can quit anytime”. Later, he finds himself trapped in a drug addiction he despises. He tries to get out of it but instead he slides deeper and deeper into difficulties. He loses his family and friends, destroys his career, and suffers from numerous psychological and physical problems. In the end, nothing of his old life is left. At this stage, he has an epiphany: since nothing of that old life is left, he has no reason to despise what he has become. He then embraces his addiction. He wants to want the drugs. He becomes a willing addict.

Willing Slave: Jamal is a fierce, independent warrior. He is captured by slavers and transported to a plantation in the Caribbean. While there, he is “whipped, branded, and abused”. He is forced to work against his will. In the beginning, he maintains his commitment to freedom, striking back at his slave masters whenever he gets the chance. But, after many years, he gives up. His spirit is broken. He embraces his conditions. He becomes a willing and happy slave.

Willing Convert: Eve is a strong, independent young woman. She longs for an education and career of her own. Unfortunately, she has been born into a strict, religious community. In that community, women are expected to be meek and compliant, to accept male authority, to remain uneducated, and maintain a subservient societal role. Eve rejects those values and “insists that she be respected as fully equal to anyone else”. But after years of “failure, condemnation, and psychological and physical abuse”, she breaks down. She starts to accept the subservient role. She becomes a willing convert.


In each of these cases, the individuals in question meet the conditions set down by Frankfurt. In the end, each of them reflectively approves of their first-order desires. But surely we would not say that any of them are free? Indeed, they arguably epitomise unfreedom. This suggests that Frankfurt’s simplistic version of hierarchical freedom is deeply flawed. The question is whether the hierarchical approach can then be salvaged.


2. Dworkin and the Right Causal Pathway Account
If you look at the three counterexamples just given, you’ll notice a common theme. They each involve people who come to embrace their position in life via a certain kind of causal pathway: one involving deprivation, abuse or coercion. They want what they now have, but only because circumstances left them with no other viable options. This cannot be freedom.

But this directs our attention to a possible escape route for proponents of the hierarchical approach to freedom. Instead of arguing that freedom simply consists in wanting what you want, couldn’t they argue that it consists in that and in arriving at that higher-order endorsement through the right causal pathway?

This is exactly what Gerald Dworkin claims in his 1988 book The Theory and Practice of Autonomy. He argues that one’s higher-order evaluations need to meet the condition of procedural independence. Very roughly, this means that one’s higher-order evaluations are free from manipulation and coercion; that they are arrived at through appropriate education and access to the right kinds of information; and that they do not arise simply because one is beaten, abused and cajoled into accepting one’s lot in life.

This is an intuitively attractive idea. We all have the sense that certain desires are arrived at via improper causal pathways, and certain others are not. If Eve decided she wanted to live the life of subservience after having received an education and being exposed to the mainstream, secular way of life, then we might view her differently. The fact that she didn’t and was never even given that opportunity is crucial to her lack of freedom.

But Dworkin’s solution raises new problems. Waller highlights two of them. The first is that it effectively does away with the hierarchical component of the theory. If what matters is that you arrive at your desires through the right kind of deliberation, then that’s all that matters: consistency with higher-order desires doesn’t seem like a necessary addition. The second is that it is difficult to know what counts as the right causal pathway. Manipulation and coercion by others is one thing, but what about more subtle forms of manipulation? We are all “manipulated” by our genes, culture, education and social setting. Do these count, and if not, why not? Waller gives the example of a willing gambler, who came by his addiction due to a fortunate run of luck the first time he visited a casino. Did he arrive at his compulsion through the right causal pathway? If not, then any number of “fortuitous contingencies” would seem to undermine freedom. The number of truly free actions would be vastly diminished. Maybe that’s something we are willing to accept, but we should acknowledge it as a potential consequence of Dworkin’s theory nonetheless.

Dworkin proposes a test of his own. He says that one way we can know whether a desire was arrived at in the right way is to ask whether the individual in question would reflectively approve of the process whereby they arrived at that desire. Thus, I can say that I arrived at my desire to be skinny through careful deliberation about the person I would like to be; and I approve of that process of arriving at that desire. I’ve come to this position in the right way. The danger with this test is that many people who have been manipulated or coerced into a state of acceptance are likely to reflectively endorse the process whereby they arrived at that state. So, for example, Eve may well approve of her frustrations and denials once she has “come to see the light”. She may thank her community for helping her to see the error of her ways. That doesn’t make her free.


3. Wolf’s Perfect Rationality Account of Freedom
There may, however, be one causal pathway that leads to freedom. This is the one advocated by Susan Wolf. According to Wolf, the only way to be truly free — i.e. free from the kinds of manipulations and coercions we worried about above — is to track the True and the Good. In other words, to desire what is right for the right reasons.

This view has its origins in religious, predominantly Christian, philosophy (though the Christians adopted it from the Greeks). The idea, as Waller describes it, is that:

True freedom is living in accordance with one’s true nature (as a rational being); genuine freedom can be realized only through accurate pursuit of the True; real freedom means living in accordance with the way God designed you; true freedom is found in perfect obedience to God. 
(Waller, 2011, pp 66-67)

Wolf adapts this classic ideal by replacing obedience to God with obedience to reason. One behaves freely when one believes what is true, desires what is right, and does so because one has access to the right information and can process it appropriately. In that case, you are not being surreptitiously manipulated into your desires, and there are no subtle, undetectable, genetic or environmental quirks influencing what you do. You are simply being guided by the light of reason.

There is some irony to all this. If Wolf is right, then freedom does not consist in the ability to jump tracks and to do otherwise; instead, it consists in the ability to follow the right track (note: if there are many things that are “right”, there may still be several tracks that one can follow; nevertheless, there is a much narrower set of right tracks than is typically thought).

I have to say, I find Wolf’s account somewhat attractive. I don’t know if I would call it a theory of responsibility or freedom, per se, but I do think it addresses the worries we have about manipulations and other causal influences. The obvious problem with Wolf’s account is that humans routinely and systematically fall short of such perfect rationality. In addition, it may challenge the traditional conception of responsibility for one’s actions. This doesn’t mean she’s wrong, of course; it just means this sort of freedom is alien to human beings. (Waller, I should note, also thinks that the ability to jump tracks is valuable and neglected in Wolf’s account.)

So what should we conclude? I’m not sure. There is much more to the literature on freedom and responsibility. There are several other variants on the hierarchical/right causal pathway theme, and there is a veritable cottage industry of academic work on manipulations and how they may, or may not, undermine freedom. Nevertheless, I like Waller’s simple criticisms. I think they embody a robust common sense. I think he is right to say that Frankfurt’s approach is flawed, and that identifying the right causal pathway is extremely difficult (unless we resort to Wolf’s extreme). Philosophical sophistication is all well and good, but sometimes you need that kind of commonsense critique.

Thursday, September 11, 2014

Steven Pinker's Guide to Classic Style



I try to be a decent writer. I try to convey complex ideas to a broader audience. I try to write in a straightforward, conversational style. But I know I often fail in this. I know I sometimes lean too heavily on technical philosophical vocabulary, hoping that the reader will be able to follow along. I know I sometimes rush to complete blog posts, never getting a chance to polish or rewrite them. Still, I strive for clarity and would like to improve.

That’s why I have been keen to read Steven Pinker’s new book, The Sense of Style. Pinker is, of course, a well-known linguist, cognitive scientist and public intellectual. And this latest book is his attempt to provide a style guide for the 21st Century. Those of you who are familiar with style guides will know the usual drill: a list of principles and dos and don’ts, often supplied without reason and subject to any number of qualifications and exceptions. Some are good, some are bad, some are merely infuriating. Pinker’s book is different. It has some of the traditional lists of dos and don’ts, but with an added helping of psychology and linguistic theory. Furthermore, it’s written in an engaging style (always encouraging in a style guide), and may be the first manual of its sort that you would actually want to read from start to finish.

But I don’t intend for this post to be a fawning review. Instead, I want to share one of the key ideas from the book. In particular, I want to share the basic theory of communication that Pinker relies on, and some of his main dos and don’ts.


1. The Classic Style of Communication
Let’s start with the theory. One of the infuriating aspects of traditional style guides — according to Pinker anyway — is that they lack an underlying theory of communication. When their authors are busy doling out advice, they do so in an intuitive, somewhat haphazard manner. That’s why their rules are often so odd, and why the best prose stylists often break them. If the style-gurus had some theory of communication in place, they could explain why they adopt a certain set of rules and why it is okay to occasionally break them.

So that’s what Pinker does. His preferred theory of communication is that of classic style. This is not something he came up with himself. It was originally presented by two literary theorists — Francis-Noel Thomas and Mark Turner — in a book called Clear and Simple as the Truth. The essence of classic style is that writing should be viewed as a conversation between the writer and the reader, in which the writer explains some object of joint attention to the reader. As Pinker puts it:

The guiding metaphor of classic style is seeing the world. The writer can see something that the reader has not yet noticed, and he orients the reader’s gaze so that she can see it for herself. The purpose of writing is presentation, and its motive is disinterested truth. It succeeds when it aligns with the truth, the proof of success being clarity and simplicity.
(Pinker, 2014, pp. 28-29)



The simplest example of classic style in action would be where the writer literally describes an object or event in the real world to the reader. Suppose I just witnessed an accident on my way home, and I’m trying to describe it to you in a letter. Here, the accident is the object of joint attention; the goal of the written communication is to “orient your gaze” toward that accident; and the communication succeeds when I manage to describe it accurately.

But don’t get too hung up on this example. The object of joint attention need not be so mundane. It could be much more abstract. For example, it could be a scientific theory, or a philosophical concept, or an academic or scholarly debate. Indeed, I think of most academic writing as an attempt to orient the reader toward some kind of abstract “object”. In my case, it is usually an argument, one that I almost literally want the reader to be able to see: I want them to see the premises, how they connect with one another, and how they support one or more conclusions. The visual element of this metaphor is driven home by the use of argument diagrams.

As Pinker sees it, classic style is an ideal model of communication for academic and expository writing. In academic writing, you are usually trying to explain or justify something to a reader: you have seen something they have not, and you want to bring it into the spotlight. Classic style may be less well suited to other kinds of writing. Poetry and fiction, for example, are not always about describing and explaining some object of joint attention (though they often are, albeit in novel and interesting ways).

Classic style is the antithesis of the postmodern style. This is for good reason. Postmodernists are usually sceptical about the “Truth”. They don’t think it exists, at least not apart from the concepts and theories we use to describe it. Classic style seems naturally opposed to this since it assumes that the goal of writing is to convey some truth to the reader. However, the tension between postmodern and classic style may be more apparent than real. Postmodernists often do have important truths to convey. It is true that knowledge is sometimes socially constructed; it is true that our concepts and theories are sometimes biased. It is sometimes a good idea to draw the reader’s attention to these things.

In other words, the postmodernists have no excuse.


2. Pinker’s Dos and Don’ts for Classic Stylists
With the theory in place, Pinker proceeds to give some dos and don’ts to would-be writers. These dos and don’ts are not hard-and-fast rules. There are exceptions to them. But the exceptions make sense in light of the underlying theory of communication. And that’s the important thing. A good classic stylist will tend to follow the rules that Pinker outlines, but they won’t always do so. This is because the good writer never loses sight of the overarching goal of communication. So long as you never lose sight of this, you too can occasionally flout the rules.

Pinker offers a lot of advice, but I’ve tried to reduce it to eight basic principles, along with a few qualifications:

1. Eliminate Metadiscourse - Metadiscourse is writing about the writing. Signposting is a familiar example (“in the first section we will do X”, “in this section we will...”). Sometimes this is necessary, but it should be kept to a minimum and should be conversational in nature (“as we have just seen...”, “let’s start by looking at this...”).

2. Don’t confuse the subject matter of the communication with your line of work - You are trying to explain some important subject matter to the reader: don’t get bogged down in debates relevant only to those in your line of work, and don’t constantly harp on about how difficult or controversial what you are trying to say really is. This is a major problem in academic writing. For example, philosophers often talk about what other philosophers say and do, rather than about actual arguments and theories.

Exception: Sometimes the object of joint attention really is what others in your field of work say, e.g. you want to talk about a debate between two famous academics.

3. Minimise Compulsive Hedging - As Pinker says, “Many writers cushion their prose with wads of fluff that imply they are not willing to stand behind what they say”. Thus we have the persistent use of the adverbial qualifiers “seemingly”, “apparently”, “nearly”, “partially”. To be sure, some of this is necessary. But it is tedious if overdone, and readers will often supply the necessary qualifications themselves. Save the qualifications for the claims that really need to be qualified.

Complementary rule: Avoid excessive use of intensifiers: they often detract from the impact of what you are saying.

4. Avoid cliches like the plague - Cliches were originally effective and punchy ways of conveying ideas - they brought to mind powerful sensory metaphors and analogies. But overuse has robbed them of this value. Try to come up with new, punchy metaphors instead. If you must resort to a cliche, don’t mix the metaphors. For example, say “fall through the cracks” rather than “fall between the cracks”.

5. By all means discuss abstract ideas, but avoid unnecessary abstract nouns - This one takes a little explaining. It’s perfectly okay to discuss abstract concepts and ideas, but you should avoid unnecessary abstraction. So avoid using verbal coffins like “issues”, “models”, “levels”, “perspectives” to convey abstract ideas. Example: “Individuals with mental health issues can become dangerous” becomes “People who are mentally ill can become dangerous”.

6. Remember: Nominalization is a dangerous weapon - This is the process of turning a verb into a noun -- e.g. affirming into affirmation. Academics and bureaucrats tend to overuse nominalizations. Not only do they strip your prose of agents and actors, they are often used to avoid accountability, e.g. the classic politician’s defence “Mistakes were made”.

7. Adopt an active, conversational style - Use the first and second person pronouns, and don’t talk about the article or book as though it were an agent independent of you (“this article will argue that...”). Use the active voice if possible, e.g. if you are giving important instructions to someone (Pinker gives the example of instructions for a dangerous product: “X can result in accumulated damage over time” vs “Never do X: it can kill you in minutes”).

8. But it’s okay to use the passive voice (sometimes) - the passive voice is much maligned in writing guides, but it’s okay to use it sometimes. Just remember the guiding principle: you are trying to direct the reader’s attention to something in the world. The active voice directs their attention to the doer of the action; the passive voice directs their attention to the person or object to whom the action is being done. Sometimes it’s the latter to which you want to direct attention. For example “See that mime? He’s being pelted with zucchini by the lady with the shopping bag”. The passive construction in the second sentence makes sense because you are trying to draw the reader’s attention to the mime, not the lady pelting him with zucchinis.



So there you have it: Steven Pinker’s guide to classic style. I don’t want you to go away with the impression that this is all there is in the book. There is much more, including some very interesting chapters on grammar and the psychology of correct usage. I would encourage everyone to read the full thing. Still, I hope this summary gives a flavour of its contents, and is of some use to all.

Tuesday, September 9, 2014

Teaching Documents Online


Since I have started a new job, I decided to put some of my old teaching documents online. It's just a small sample, but some people might be interested. They are handouts from classes I taught over the past three years, while employed at Keele University. I've just put up the ones dealing with ethics and the philosophy of law for the time being. I may add more in the future.

Bear in mind that these are intended for teaching purposes. I don't defend any particular views in them. I just try to explain key concepts and arguments, and give students some suggestions for how to evaluate and analyse those concepts and ideas.

Here's what I have up so far:

  • Rationality and Efficiency - An introduction to rational choice theory for law students. Also looks at the concept of economic efficiency.
  • Scientific Evidence and Torture - Uses economic concepts to evaluate the worth of scientific evidence and information gained from interrogational torture.