Friday, September 19, 2014

Chalmers vs Pigliucci on the Philosophy of Mind-Uploading (2): Pigliucci's Pessimism



(Part One)

This is the second and final part of my series about a recent exchange between David Chalmers and Massimo Pigliucci. The exchange took place in the pages of Intelligence Unbound, an edited collection of essays about mind-uploading and artificial intelligence. It concerned the philosophical plausibility of mind-uploading.

As we saw in part one, there were two issues up for debate:

The Consciousness Issue: Would an uploaded mind be conscious? Would it experience the world in a roughly similar manner to how we now experience the world?
The Identity/Survival Issue: Assuming it is conscious, would it be our consciousness (our identity) that survives the uploading process?


David Chalmers was optimistic on both fronts. Adopting a functionalist theory of consciousness, he saw no reason to think that a functional isomorph of the human brain would not be conscious. Not unless we assume that biological material has some sort of magic consciousness-conferring property. And while he had his doubts about survival via destructive or non-destructive uploading, he thought that a gradual replacement of the human brain, with functionally equivalent artificial components, could allow for our survival.

As we will see today, Pigliucci is much more pessimistic. He thinks it is unlikely that uploads would be conscious, and, even if they are, he thinks it is unlikely that we would survive the uploading process. He offers four reasons to doubt the prospect of conscious uploads, two based on criticisms of the computational theory of mind, and two based on criticisms of functionalism. He offers one main reason to doubt survival. I will suggest that some of his arguments have merit, some don’t, and some fail to engage with the arguments put forward by Chalmers.


1. Pigliucci’s Criticisms of the Computational Theory of Mind
Pigliucci assumes that the pro-uploading position depends on a computational theory of mind (and, more importantly, a computational theory of consciousness). According to this theory, consciousness is a property (perhaps an emergent property) of certain computational processes. Pigliucci believes that if he can undermine the computational theory of mind, then so too can he undermine any optimism we might have about conscious uploads.

To put it more formally, Pigliucci thinks that the following argument will work against Chalmers:


  • (1) A conscious upload is possible only if the computational theory of mind is correct.
  • (2) The computational theory of mind is not correct (or, at least, it is highly unlikely to be correct).
  • (3) Therefore, (probably) conscious uploads are not possible.


Pigliucci provides two reasons for us to endorse premise (2). The first is a — somewhat bizarre — appeal to the work of Jerry Fodor. Fodor was one of the founders of the computational theory of mind. But Fodor has, in subsequent years, pushed back against the overreach he perceives among computationalists. As Pigliucci puts it:

[Fodor distinguishes] between “modular” and “global” mental processes, and [argues] that [only] the former, but not the latter (which include consciousness), are computational in any strong sense of the term…If Fodor is right, then the CTM [computational theory of mind] cannot be a complete theory of mind, because there are a large number of mental processes that are not computational in nature. 
(Intelligence Unbound, p. 123)


In saying this, Pigliucci explicitly references Fodor’s book-length response to the work of Steven Pinker, called The Mind Doesn’t Work that Way: The Scope and Limits of Computational Psychology. I can’t say I’m a huge fan of Fodor, but even if I were I would find Pigliucci’s argument pretty unsatisfying. It is, after all, little more than a bare appeal to authority, neglecting to mention any of the detail of Fodor’s critique. It also neglects to mention that Fodor’s particular understanding of computation is disputed. Indeed, Pinker disputed it in his response to Fodor, which Pigliucci doesn’t cite and which you can easily find online. Now, my point here is not to defend the computational theory, or to suggest that Pinker is correct in his criticisms of Fodor; it is merely to suggest that appealing to the work of Fodor isn’t going to be enough. Fodor may have done much to popularise the computational theory, but he doesn’t have final authority on whether it is correct.


Let’s move on then to Pigliucci’s second reason to endorse premise (2). This one claims that the computational theory rests on a mistaken understanding of the Church-Turing thesis about universal computability. Citing the work of Jack Copeland — an expert on Turing, whose biography of Turing I recently read and recommend — Pigliucci notes that the thesis only establishes that logical computing machines (Turing Machines) “can do anything that can be described as a rule of thumb or purely mechanical (“algorithmic”)”. It does not establish that “whatever can be calculated by a machine (working on finite data in accordance with a finite program of instructions) is Turing-machine-computable”. This is said to be a problem because proponents of the computational theory of mind have tended to assume that “Church-Turing has essentially established the CTM”.
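To see what is and is not at stake here, it may help to have a concrete picture of a Turing machine in front of us. The following is a minimal sketch of my own (in Python, purely for illustration; the function and rule names are mine, not from the book): a finite table of rules applied mechanically to a tape. The Church-Turing thesis, on Copeland’s reading, speaks only to procedures of this “rule of thumb” kind.

```python
def run_turing_machine(tape, rules, state="start"):
    """Apply a finite rule table mechanically until the machine halts."""
    tape, head = list(tape), 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"  # "_" = blank cell
        # Each rule: (state, symbol) -> (symbol to write, move, next state)
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# A rule table that flips every bit of a binary string, then halts.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", flip_rules))  # prints 01001_
```

Whatever can be reduced to a rule table like this is Turing-machine-computable; Copeland’s point is that the thesis is silent on whether everything a machine (or a brain) can do admits of such a reduction.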

I may not be well-qualified to evaluate the significance of Pigliucci’s point here, but it seems pretty thin to me. I think it relies on an impoverished notion of computation. It assumes that computationalists, and by proxy proponents of mind-uploading, think that a mind could be implemented on a classic digital computer architecture. While some may believe that, it doesn’t strike me as essential to their claims. I think there is a broader notion of computation that could avoid his criticisms. To me, a computational theory is one that assumes mental processes (including, ultimately, conscious mental processes) could be implemented in some sort of mechanical architecture. The basis for the theory is the belief that mental states involve the representation of information (in either symbolic or analog forms), and that mental processes involve the manipulation and processing of the represented information. I see nothing in Pigliucci’s comments about the Church-Turing thesis that upsets that model. Pigliucci actually did a pretty good podcast on broader definitions of computation with Gerard O’Brien. I recommend it if you want to learn more.
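To make that broader characterisation concrete, here is a toy sketch of my own (nothing in it comes from Pigliucci or O’Brien): a crude “leaky integrator”, loosely inspired by the way a neuron’s membrane accumulates charge. The quantity v represents accumulated evidence; the update rule manipulates that representation. Nothing in the characterisation, representation plus manipulation, mentions a classical stored-program digital architecture, even though I happen to express the sketch in one.

```python
def leaky_integrator(inputs, leak=0.9, threshold=1.0):
    """Accumulate inputs with decay; emit a 'spike' when a threshold is crossed."""
    v = 0.0          # v *represents* accumulated evidence
    spikes = []
    for x in inputs:
        v = leak * v + x      # decay old evidence, add new input
        if v >= threshold:
            spikes.append(1)  # threshold crossed: fire
            v = 0.0           # reset after firing
        else:
            spikes.append(0)
    return spikes

print(leaky_integrator([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [0, 0, 1, 0, 0]
```

An analog device, a chemical network, or a biological membrane could implement the same representation-manipulating process without any digital symbols; that is the sense of “computation” I have in mind.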




In summary, I think Pigliucci’s criticisms of the computational theory are off the mark. Nevertheless, I concede that the broader sense of computation may in turn collapse into the broader theory of functionalism. This is where the debate is really joined.


2. Pigliucci’s Criticisms of Functionalism
And I think Pigliucci is on firmer ground when he criticises functionalism. Admittedly, he doesn’t distinguish between functionalism and computationalism, but I think it is possible to separate out his criticisms. Again, there are two criticisms with which to contend. To understand them, we need to go back to something I mentioned in part one. There, I noted how Chalmers seemed to help himself to a significant assumption when defending the possibility of a conscious upload. The assumption was that we could create a “functional isomorph” of the brain. In other words, an artificial model that replicated all the relevant functional attributes of the human brain. I questioned whether it was possible to do this. This is something that Pigliucci also questions.

We can put the criticism like this:


  • (4) A conscious upload is possible only if we know how to create a functional isomorph of the brain.
  • (5) But we do not know what it takes to create a functional isomorph of the brain.
  • (6) Therefore, a conscious upload is not possible.



Pigliucci adduces two reasons for us to favour premise (5). The first has to do with the danger of conflating simulation with function. This hearkens back to his criticism of the computational theory, but can be interpreted as a critique of functionalism. The idea is that when we create functional analogues of real-world phenomena we may only be simulating them, not creating models that could take their place. The classic example here would be a computer model of rainfall or of photosynthesis. The computer models may be able to replicate those real-world processes (i.e. you might be able to put the elements of the models in a one-to-one relationship with the elements of the real-world phenomena), but they would still lack certain critical properties: they would not be wet or capable of converting sunlight into food. They would be mere simulations, not functional isomorphs. I agree with Pigliucci that the conflation of simulation with function is a real danger when it comes to creating functional isomorphs of the brain.
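To fix ideas, here is a deliberately toy sketch of my own (in Python; none of it comes from Pigliucci) that makes the gap vivid: a rainfall “model” whose variables map one-to-one onto stages of the real process, yet whose output is a representation of water rather than anything wet.

```python
def simulate_rainfall(humidity, condensation_rate, hours):
    """Each model variable stands in, one-to-one, for a stage of the
    real process: vapour condenses into droplets, droplets fall as rain."""
    rainfall_mm = 0.0
    for _ in range(hours):
        condensed = humidity * condensation_rate  # stands in for condensation
        rainfall_mm += condensed * 10             # stands in for falling rain
        humidity -= condensed                     # vapour is used up
    return rainfall_mm

# The number below *represents* rain; it wets nothing. The gap between
# representing a property and instantiating it is the simulation/function
# worry, applied to consciousness.
print(f"{simulate_rainfall(0.8, 0.05, 24):.1f} mm of simulated rain")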

Pigliucci’s second reason has to do with knowing the material constraints on consciousness. Here he draws an analogy with life. We know that we are alive and that our being alive is the product of the complex chemical processes that take place in our body. The question is: could we create living beings from something other than this complex chemistry? Pigliucci notes that life on earth is carbon-based and that the only viable alternative is some kind of silicon-based life (because silicon is the only other element capable of forming similarly complex molecular chains). So the material constraints on creating functional isomorphs of current living beings are striking: there are only two forms of chemistry that could do the trick. This, Pigliucci suggests, should provide some fuel for scepticism about creating isomorphs of the human brain:

[This] scenario requires “only” a convincing (empirical) demonstration that, say, silicon-made neurons can function just as well as carbon-based ones, which is, again, an exclusively empirical question. They might or might not, we do not know. What we do know is that not just any chemical will do, for the simple reason that neurons need to be able to do certain things (grow, produce synapses, release and respond to chemical signals) that cannot be done if we alter the brain’s chemistry too radically. 
(Intelligence Unbound, p. 125)

I don’t quite buy the analogy with life. I think we could create wholly digital living beings (indeed, we may even have done so), though this depends on what counts as “life”, which is a question Pigliucci tries to avoid. Still, I think the point here is well-taken. There is a lot going on in the human brain. There are a lot of moving parts, a lot of complex chemical mechanisms. We don’t know exactly which elements of this complex machinery need to be replicated in our functional isomorph. If we replicate everything, then we are just creating another biological brain. If we don’t, then we risk missing something critical. Thus, there is a significant hurdle when it comes to knowing whether our upload will share the consciousness of its biological equivalent. It has been a while since I read it, but as I recall, John Bickle’s work on the philosophy of neuroscience develops this point about biological constraints quite nicely.




This epistemic hurdle is heightened by the hard problem of consciousness. We are capable of creating functional isomorphs of some biological organs. For example, we can create functional isomorphs of the human heart, i.e. mechanical devices that replicate the functionality of the heart. But that’s because everything we need to know about the functionality of the heart is externally accessible (i.e. accessible from the third-person perspective). Not everything about consciousness is accessible from that perspective.


3. Pigliucci on the Identity Question
After his lengthy discussion of the consciousness issue, Pigliucci has rather less to say about the identity issue. This isn’t surprising. If you don’t think an upload is likely to be conscious, then you are unlikely to think that it will preserve your identity. But Pigliucci is sceptical even if the consciousness issue is set to the side.

His argument focuses on the difference between destructive and non-destructive uploading. The former involves three steps: brain scan, mechanical reconstruction of the brain, and destruction of the original brain. The latter just involves the first two of those steps. Most people would agree that in the latter case your identity is not transferred to the upload. Instead, the upload is just a copy or clone of you. But if that’s what they believe about the latter case, why wouldn’t they believe it about the former too? As Pigliucci puts it:

[I]f the only difference between the two cases is that in one the original is destroyed, then how on earth can we avoid the conclusion that when it comes to destructive uploading we just committed suicide (or murder, as the case may be)? After all, ex hypothesi there is no substantive differences between destructive and non-destructive uploading in terms of end results…I realize, of course, that to some philosophers this may seem far too simple a solution to what they regard as an intricate metaphysical problem. But sometimes even philosophers agree that problems need to be dis-solved, not solved [he then quotes from Wittgenstein]. 
(Intelligence Unbound, p. 128)

Pigliucci may be pleased with this simple, common-sensical solution to the identity issue, but I am less impressed. This is for two reasons. First, Chalmers made the exact same argument in relation to non-destructive and destructive uploading — so Pigliucci isn’t adding anything to the discussion here. Second, this criticism ignores the gradual uploading scenario. It was that scenario that Chalmers thought might allow for identity to be preserved. So I’d have to say Pigliucci has failed to engage the issue. If this were a formal debate, the points would go to Chalmers. That’s not to say that Chalmers is right; it’s just to say that we have been given no reason to suppose he is wrong.


4. Conclusion
To sum up, Pigliucci is much more pessimistic than Chalmers. He thinks it unlikely that an upload would be conscious. This is because the computational theory of mind is flawed, and because we don’t know what the material constraints on consciousness might be. He is also pessimistic about the prospect of identity being preserved through uploading, believing it is more likely to result in death or duplication.


I have suggested that Pigliucci may be right when it comes to consciousness: whatever the merits of the computational theory of mind, it is true that we don’t know what it would take to build a functional isomorph of the human brain. But I have also suggested that he misses the point when it comes to identity.
