Thursday, May 10, 2012

The Epistemic Objection to Torture (Part Two)



(Part one)

This is the second part in my series on the epistemic objection to the use of torture. The objection is derived from the work of Roger Koppl on epistemic systems. In part one, I formulated the argument and laid out Koppl’s defence of its first premise (note: Koppl uses mathematical modelling to make his case; I tried to simplify things with some elementary logic and decision theory). In this part, I will discuss the second premise of the argument.

For ease of reference, I will restate the epistemic argument here:


  • (1) Torture can only be an effective and reliable means of obtaining information if: (a) torturers can recognise the truth once it is spoken; and (b) suspects undergoing torture believe that the torturer will stop once they speak the truth. 
  • (2) Conditions (a) and (b) are unlikely to be met in empirically plausible cases. 
  • (3) Therefore, torture is unlikely to be an effective and reliable means of obtaining information.
  • (4) Torture is only permissible (in ticking bomb scenarios) if it is an effective and reliable means of obtaining information. 
  • (5) Therefore, torture is impermissible (in ticking bomb scenarios).


Now we can see more clearly the focus of this post. We must ask: why is it that torturers are unlikely to be able to recognise the truth once it is given, and why are suspects undergoing torture unlikely to believe that the torturer can do so?


1. A Simple Defence of Premise 2
Interestingly, Koppl never provides an explicit defence of premise (2). Instead, he defends it from a series of counterexamples. I presume this is because he either thinks the defence of premise (2) is obvious, or he thinks that if it can fend off counterattacks it will be deemed plausible. He may be right about this, but I still think it’s worth pausing briefly to consider how one might provide a positive defence of premise (2).

The defence would, I believe, run something like this. If a torturer could reliably distinguish true answers from false ones, they would probably not need to threaten the use of torture. Why not? Because then they would either know the truth, or have narrowed down possible truths to such an extent that they could check themselves (without getting the information from the suspect) or could use other less severe threats to extract the truth. It is only because they don’t know what the correct answer is, and have no other means of finding out, that torture becomes an option.

What’s more, the suspect would presumably know that the torturer was in this epistemic predicament. They would reason to themselves that the torturer would not be threatening to torture them if they could reliably tell the truth from a lie. Hence, they would be encouraged to follow the decision-theoretical reasoning I laid out in part one.

In other words, the following two propositions seem likely to be true and conjointly they give us reason to endorse premise (2) (numbering follows from part one):


  • (12) If the torturer knew or could recognise the truth, they would probably not need to torture. 
  • (13) If a suspect is threatened with torture, then they would know (or have strong reason to believe) that the torturer could not recognise the true answer once given.




Thus, it seems that in practice Koppl is right: the two conditions needed for epistemic success would probably not be met. One can understand the argument here as presenting something akin to a common knowledge problem. If the suspect can be expected to know that we have no way of telling the truth from a lie, then we should know that he will know this (and vice versa ad infinitum). Hence, we would know that we could gain no epistemic advantage by using torture.
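To make this a little more concrete, here is a minimal decision-theoretic sketch of the suspect's situation. The numbers and the P_BELIEVE parameter are illustrative assumptions of my own, not Koppl's formal model; the point is simply that, without verification, a credible lie and the truth carry the same expected consequences for the suspect, so torture yields no informational advantage.

```python
# A toy model of the suspect's choice. Cost 1 = torture continues, 0 = it
# stops. The numbers and P_BELIEVE are illustrative assumptions of my own,
# not Koppl's formal model.

P_BELIEVE = 0.5  # assumed chance the torturer believes any credible answer

def expected_cost(tell_truth: bool, torturer_can_verify: bool) -> float:
    """Expected cost to the suspect of a true vs. a false (but credible) answer."""
    if torturer_can_verify:
        # With verification, torture stops if and only if the answer is true.
        return 0.0 if tell_truth else 1.0
    # Without verification, stopping can only depend on belief, and a credible
    # lie is (by definition) as believable as the truth -- same cost either way.
    return 1.0 - P_BELIEVE

for verify in (False, True):
    incentive = expected_cost(False, verify) - expected_cost(True, verify)
    print(f"torturer can verify = {verify}: "
          f"suspect's incentive to tell the truth = {incentive}")
```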

There are, however, two potential objections, both discussed by Koppl. The first, and most interesting, objection concerns the scenario in which there are two (or more) suspects with access to the relevant information, both of whom can be threatened with torture. The second, and less interesting objection, concerns the possible use of feedback in the torture process. Let’s consider both in turn.


2. Truth Recognition in the Two-Person Scenario
There is an old logic puzzle involving two guards standing in front of two doors, each leading to a different location. One of the guards always tells the truth; the other always tells a lie. You do not know which guard is which and you need to find out the location behind each door. Is there any way that you can do this? Yes, there is. You can simply go up to one of the guards and ask him: what will the other guard say if I ask him what’s behind his door? I won’t explain exactly why this works now, but you can read about it here, or view a video about it here. What’s interesting about this puzzle is that it points to a scenario in which the truth is seemingly concealed from us, but in which the truth can nevertheless be revealed by carefully analysing the logical structure of the scenario.
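For the curious, here is a quick brute-force check of the puzzle (the encoding of guards, doors and locations is my own, purely for illustration): in every configuration, the nested question elicits the false location, so inverting the reply reveals the truth.

```python
# Brute-force check of the two-guards puzzle. The encoding below (locations
# "A"/"B", a boolean "lies" flag per guard) is my own illustrative set-up.

from itertools import product

LOCATIONS = {"A", "B"}

def claim(guard_lies: bool, truth: str) -> str:
    """What a guard asserts about a fact with two possible answers."""
    return truth if not guard_lies else (LOCATIONS - {truth}).pop()

def nested_reply(asked_guard_lies: bool, behind_door_1: str) -> str:
    """Reply to: 'What would the OTHER guard say is behind door 1?'"""
    other_guards_claim = claim(not asked_guard_lies, behind_door_1)
    return claim(asked_guard_lies, other_guards_claim)

for asked_lies, behind_1 in product([True, False], sorted(LOCATIONS)):
    reply = nested_reply(asked_lies, behind_1)
    # The reply is always the wrong location, so inverting it gives the truth.
    print(f"asked guard lies={asked_lies!s:5} truth={behind_1} reply={reply} "
          f"(invert reply -> truth? {reply != behind_1})")
```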

Could something similar be true in the case of torture? In other words, could it be that, despite the epistemic impediments outlined above (and in part one), there is some way in which we can bring the truth to the surface? Some people argue that there is, provided we have two or more suspects available to us, both of whom have access to the truth. To understand how this might work, we need to introduce a new concept called a message set:

Message Set: This is the set of credible messages (i.e. the set of believable lies and truths) that the torture victim could “send” to the torturer.

Now, imagine that we have two suspects in custody, both of whom have access to the truth we wish to know, and whose message sets only overlap on this truth. In other words, the intersection of the message sets has only one member: the truth (although, admittedly, the truth may consist of more than one message).

If we were presented with two such suspects, then we could indeed use torture as a means of eliciting the truth. How so? Well, we could simply continue to torture them until they both gave us the same answer. We would then know that this had to be the truth. If you bear with me for a moment and assume that it is likely for us to run into two (or more) such suspects, we can make the following argument against premise (2):


  • (14) If a torturer had two or more suspects in custody, both of whom had access to the truth, but whose message sets only overlapped on the truth, then he could torture them both until they gave the same response and he would know that this response was the truth (i.e. the torturer would be able to independently verify the truth). 
  • (15) If the suspects could be made aware that this was the nature of the scenario, they would have reason to believe that the torturer could reliably distinguish the truth from a lie. 
  • (16) The antecedents of the conditionals expressed in (14) and (15) are likely to be true in empirically plausible circumstances. 
  • (17) Therefore, (2) is false.
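To illustrate the mechanism described in premise (14), here is a toy sketch. The suspects' message sets and the "docks" answer are hypothetical examples of my own: because the two sets intersect only on the truth, the first answer on which the suspects agree must be the truth.

```python
# Toy illustration of premise (14). The message sets below are hypothetical;
# the only structural assumption is that they intersect on exactly one message.

import random

TRUTH = "bomb at the docks"

message_sets = {
    "suspect_1": {TRUTH, "bomb at the station", "bomb at the airport"},
    "suspect_2": {TRUTH, "bomb at the stadium", "bomb at the bridge"},
}

# The sets overlap only on the truth (premise (14)'s structural condition).
assert set.intersection(*message_sets.values()) == {TRUTH}

def answer(message_set):
    """Each round the suspect sends some credible message, true or false."""
    return random.choice(sorted(message_set))

random.seed(0)
rounds = 0
while True:
    rounds += 1
    replies = {name: answer(msgs) for name, msgs in message_sets.items()}
    if len(set(replies.values())) == 1:     # both suspects gave the same answer
        agreed = replies["suspect_1"]
        break

print(f"After {rounds} round(s) both said '{agreed}'; "
      f"this must be the truth: {agreed == TRUTH}")
```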





3. Objections to the Two-Person Argument
Thank you for bearing with me as I wrote that out. No doubt, a whole raft of objections occurred to you as you read through it. Don’t worry, I spotted (at least some of) them too. Let’s run down the list of objections now.

The first worry concerns my characterisation of premise (14). It is not just that we need to have two suspects with message sets that only overlap on the truth; we also have to know that we have two such suspects. That is to say, it’s not enough for this to be the case; we also have to know that it is the case. This is similar to the logic puzzle involving the two guards. In order for the solution to work, we have to know that one guard always tells the truth and the other always tells a lie.

This knowledge-requirement has a direct knock-on effect on the plausibility of premise (16). After all, no matter how unlikely it is that we have in our custody two suspects, from the same terrorist cell, who have interacted in such a manner that their message sets only overlap on the truth, it is surely even more unlikely that we would actually know that we had two such suspects in custody.

There are other objections to (16) as well. Koppl mentions several. One obvious objection is that terrorist suspects are likely to have pre-coordinated on a common lie before being taken into custody. This is true even if they operate within a cell-structure which seals them off from certain information. Why so? Because even within that system, those who are higher up in the structure could filter some agreed common response down to the underlings within the cells. Indeed, they are all the more likely to do this if they are aware of the possibility of the double-torture scenario. So even if the two-person situation arose once and was successful, it would be unlikely to work again in the future as the terrorist cell adapts to the problem.

Finally, even supposing that the terrorists could not pre-coordinate on a common response, it is possible that they could give a common lie in response to being tortured. This could happen if there is some common non-truthful message in the message sets that forms a “Schelling point”. A Schelling point is effectively an attractor point on which two independent, non-colluding agents are likely to coordinate. The concept was introduced by the game theorist and strategist Thomas Schelling in the mid-20th century. Schelling proposed that two travellers who had agreed to meet each other in New York on a particular day, but who had failed to agree on a time and place for this meeting, might independently arrive at Grand Central Station at 12 noon. This is because that place and time forms a common attractor point for travellers to New York. Is it possible that there is something akin to “12 noon, Grand Central Station” in the message set available to potential terrorist suspects? Arguably it is, and arguably this would be more likely than there being two terrorists with message sets that only overlap on the truth.

Combined, these objections seem to throw the two-person defence of the epistemic efficiency of torture into some doubt.


4. The Feedback Objection
The two-person objection to premise (2) was a little intricate; the feedback objection is much more straightforward. It argues that torturers, in empirically plausible scenarios, do have ways of independently verifying whether the suspect is giving them truthful information. Quite simply: once an answer has been given to them, they can stop torturing the suspect, and go and check whether the answer is correct (remember: we are talking about ticking bomb scenarios where the relevant information is going to be a bomb-location or something along those lines). If it turns out to be correct, the torture will cease; if it turns out to be false, they can recommence torturing the suspect.
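Here is a minimal sketch of that feedback loop. The locations, the suspect's sequence of answers, and the verify() stand-in are hypothetical; the point is simply that the conditional stop rule ("stop only when the answer checks out") gives the torturer a way of recognising the truth, and gives the suspect reason to believe that truthful answers end the process.

```python
# Minimal sketch of the feedback procedure. The ground truth, the suspect's
# answers, and verify() are hypothetical stand-ins for illustration only.

ACTUAL_LOCATION = "warehouse"                            # hypothetical ground truth
answers_in_order = ["airport", "harbour", "warehouse"]   # what the suspect says

def verify(claimed_location: str) -> bool:
    """Stand-in for sending a team to check the claimed location."""
    return claimed_location == ACTUAL_LOCATION

for attempt, answer in enumerate(answers_in_order, start=1):
    if verify(answer):
        print(f"Attempt {attempt}: '{answer}' checks out; interrogation stops.")
        break
    print(f"Attempt {attempt}: '{answer}' fails verification; interrogation resumes.")
```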

Under those conditions, the torturer should be able to (a) recognise the truth and (b) convince the suspect that they will stop torturing them once the correct answer has been given. And assuming independent verification of this sort would be quite normal in empirically plausible circumstances, we get the following argument:


  • (18) If a torturer can independently verify the answers given to them by the suspect (e.g. by checking locations for ticking bombs), then they could distinguish the truth from a lie and, what’s more, they could convince the suspect that they had this ability. 
  • (19) Independent verification of answers is possible in most empirically plausible scenarios.
  • (20) Therefore, premise (2) is false.





Two responses to this argument can be mentioned here. The first is that the time pressures in ticking bomb scenarios are such that checking multiple locations is not practical. In other words, it may be that you only really get “one shot” at correctly identifying the location. If the suspect were aware that the time pressures were of this sort, then the argument would not go through (i.e. premise (19) would be rebutted).

In a similar vein, Koppl argues that many terrorist organisations will function in a fluid and responsive manner. Once they know that a member of theirs has been arrested and is likely to be tortured for relevant information, they could quickly change the proposed location of their attack. Thus, whatever information was available to the suspect would quickly become obsolete (this argument would also work against the two-person scenario). I’m not quite sure what to say about this. Many of the most famous terrorist attacks (e.g. 9/11 and 7/7) did involve some partial last-minute changes of plan, but this was usually due to unforeseen obstacles or accidents. Still, this would seem to be enough to suggest that terrorist groups can operate in the fluid and responsive manner envisaged by Koppl. The key question then is how likely it is that members of the terrorist group would know that one of their own had been arrested. After all, if they don’t know that their plan might be foiled by someone divulging relevant information, they can’t be responsive to this possibility. Whether they have such knowledge or not seems like something that would vary greatly from case to case.


5. Conclusion
To sum up, the epistemic objection to torture undercuts one of the key assumptions of those who would defend its permissibility, namely: that torture is an effective and reliable means of obtaining information. In order to work, the proponent of the objection needs to show that, in empirically plausible cases, (a) it is unlikely that a torturer would be able to recognise when a suspect is giving them a truthful answer and (b) a suspect would be unlikely to believe that the torturer had such an ability.

In this post, we considered a prima facie defence of this claim, as well as two responses to it. Both of these responses were found to be lacking, though some version of the feedback argument might be plausible, depending on the case.

There is one final point I want to address here. This relates back to premise (4) of the epistemic argument against torture. As you recall, this premise stated:


  • (4) Torture is only permissible (in ticking bomb scenarios) if it is an effective and reliable means of obtaining information.


I think there might be some reason to doubt this. In the kinds of “high stakes” scenarios envisaged in the torture debate — i.e. scenarios in which some disastrous consequence is going to arise unless the information is obtained — it’s at least arguable that torture need not be effective or reliable in order to be permissible. It could be argued that as long as there is some small chance that torture would elicit the relevant information, it is worth giving it a shot, given that otherwise a disastrous consequence is going to occur. In other words, it could be that it is the proponent of the epistemic objection, not the proponent of the permissibility of torture, whose argument rests on a faulty assumption.
