Monday, December 11, 2017

Episode #33: McArthur and Danaher on Robot Sex


In this episode I talk to Neil McArthur about a book that he and I recently co-edited entitled Robot Sex: Social and Ethical Implications (MIT Press, 2017). Neil is a Professor of Philosophy at the University of Manitoba where he also directs the Center for Professional and Applied Ethics. This is a free-ranging conversation. We talk about what got us interested in the topic of robot sex, our own arguments and ideas, some of the feedback we've received on the book, some of our favourite sexbot-related media, and where we think the future of the debate might go.

You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes

  • 0:00 - Introduction to Neil
  • 1:42 - How did Neil go from writing about David Hume to Robot Sex?
  • 5:15 - Why did I (John Danaher) get interested in this topic?
  • 6:49 - The astonishing media interest in robot sex
  • 8:58 - Why did we put together this book?
  • 11:05 - Neil's general outlook on the robot sex debate
  • 16:41 - Could sex robots address the problems of loneliness and isolation?
  • 19:46 - Why a passive and compliant sex robot might be a good thing
  • 21:08 - Could sex robots enhance existing human relationships?
  • 25:53 - Sexual infidelity and the intermediate ontological status of sex robots
  • 31:23 - Ethical behaviourism and robots
  • 34:36 - My perspective on the robot sex debate
  • 37:32 - Some legitimate concerns about robot sex
  • 44:20 - Some of our favourite arguments or ideas from the book (acknowledging that all the contributions are excellent!)
  • 54:37 - Neil's book launch - some of the feedback from a lay audience
  • 58:25 - Where will the debate go in the future? Neil's thoughts on the rise of the digisexual
  • 1:02:54 - Our favourite fictional sex robots


Wednesday, December 6, 2017

Ethical Behaviourism in the Age of the Robot

[Thanks to the Singularity Bros podcast for inspiring me to write this post. It was a conversation I had with the hosts of this podcast that prompted me to further elaborate on the idea of ethical behaviourism.]

I’ve always been something of a behaviourist at heart. That’s not to say that I deny conscious experience, or that I think that external behavioural patterns are constitutive of mental states. On the contrary, I think that conscious experience is real and important, and that inner mental states have some ontological independence from external behavioural patterns. But I am a behaviourist when it comes to our ethical duties to others. I believe that when we formulate the principles that determine the appropriateness of our conduct toward other beings, we have to ground those principles in epistemically accessible behavioural states.

I think this is an intuitively sensible view, and I am always somewhat shocked to find that others disagree with it. But disagree they do, particularly when I apply this perspective to debates about the ethical and social status of robots. Since these others are, in most cases, rational and intelligent people — people for whom I have the utmost respect — I have to consider the possibility that my view on this is completely wrongheaded.

And so, as part of my general effort to educate myself in public, I thought I would use this blogpost to explain my stance and why I think it is sensible. I’m trying to work things out for myself in this post and I’d be happy to receive critical feedback. I’ll start by further clarifying the distinction between what I call ‘mental’ and ‘ethical’ behaviourism. I’ll then consider how ethical behaviourism applies to the emerging debate about the ethical and social consequences of robots. Then, finally, I’ll consider two major criticisms of ethical behaviourism that emerge from this debate.

1. Mental vs Ethical Behaviourism

Mental behaviourism was popular in psychology and philosophy in the early-to-mid twentieth century. Behaviourist psychologists like John Watson and BF Skinner revolutionised our understanding of human and animal behaviour, particularly through their experiments on learning and behavioural change. Their behaviourism was largely methodological in nature. They worried about the scientific propriety of psychologists postulating unobservable inner mental states to explain why humans act the way they do. They felt that psychologists should concern themselves strictly with measurable, observable behavioural patterns.

As a methodological stance, this had much to recommend to it, particularly before the advent of modern cognitive neuroscience. And one could argue that even with the help of the investigative techniques of modern cognitive neuroscience, psychology is still essentially behaviouristic in its methods (insofar as it focuses on external, observable, measurable phenomena). Furthermore, methodological behaviourism is what underlies the classic Turing Test for machine intelligence. But behaviourism became more than a mere methodological posture in the hands of the philosophers. It became an entire theory of mind. Logical behaviourists, like Gilbert Ryle, claimed that descriptions of mental states were really just abbreviations for a set of behaviours. So a statement like ‘I believe X’ is just a shorthand way of saying ‘I will assert X in context Y’, ‘I will do action A in pursuit of X in context Z’ and so on. The mental could be reduced to the behavioural.

This is what I have in mind when I use the term ‘mental behaviourism’. I have in mind the view that reduces the mental — the world of intentions, beliefs, desires, hopes, fears, pleasure, and pain — to the behavioural. As such, I think it is pretty implausible. It stretches common sense to believe that mental states are actually behavioural, and it is probably impossible to satisfactorily translate a description of a mental state into a set of behaviours.

Despite this, I think ethical behaviourism is pretty plausible and common sensical. So what’s the difference? One difference is that I think of ethical behaviourism as essentially an application of methodological behaviourism to the ethical domain. To me, ethical behaviourism says that the epistemic ground or warrant for believing that we have certain duties and responsibilities toward other entities lies in their observable behavioural relations and reactions to us (and the world around them), not in their inner mental states or capacities.

It is important to note that this is an epistemic principle, not a metaphysical one. Adopting a stance of ethical behaviourism does not mean giving up the belief in the existence of inner mental states, nor the belief that those inner mental states provide the ultimate metaphysical warrant for our ethical principles. Take consciousness/sentience as an example. Many people believe that conscious awareness is the most important thing in the world. They think that the reason we should respect other humans and animals, and why we have certain ethical duties toward them, is because they are consciously aware. An ethical behaviourist can accept this position. They can agree that conscious awareness provides the ultimate metaphysical warrant for our duties to animals and humans. They simply modify this slightly by arguing that our epistemic warrant for believing in the existence of this metaphysical property derives from an entity’s observable behavioural patterns. After all, we can never directly gain epistemic access to their inner mental states; we can only infer these from what they do. It is the practical unavoidability of this inference that motivates ethical behaviourism.

It is also important to note that ‘behaviour’ needs to be interpreted broadly here. It is not limited to external physical behaviours (e.g. the movement of limbs and lips); it includes all directly observable patterns and functions, such as the operation of the brain. This might seem contradictory, but it’s not. Brain states are directly observable and recordable; mental states are not. Even in cognitive neuroscience no one thinks that observations of the brain are directly equivalent to observations of mental states like beliefs and desires. Rather, they infer correlations between those brain patterns and postulated mental states. What’s more, they ultimately verify those correlations through other behavioural measures. So when a neuroscientist tells us that a particular pattern of brain activity correlates with the mental state of pleasure, they usually work this out by asking someone in a brain scanner what they are feeling when this pattern of activity is observable.

2. Ethical Behaviourism and Robots
Ethical behaviourism has consequences. One of the most important concerns comparative claims to moral status. If you are an ethical behaviourist and you’re asked whether an entity (X) has certain rights and duties, you will determine this by comparing their behavioural patterns to the patterns of another entity (Y) that we think already possesses those rights and duties. If the two are behaviourally indistinguishable, you’ll tend to think that X has those rights and duties too. The only thing that might upset this conclusion is if you are not particularly confident in the belief that those behavioural patterns justify the ascription of rights to Y. In that case, you might use the behavioural equivalence between X and Y to reevaluate the epistemic grounding for your ethical principles. Put more formally:

The Comparative Principle of EB: If an entity X displays or exhibits all the behavioural patterns (P1…Pn) that we believe ground or justify our ascription of rights and duties to entity Y, then we must either (a) ascribe the same rights and duties to X or (b) reevaluate our use of P1…Pn to ground our ethical duties to Y.

Again, I think this is a sensible principle, but it has significant implications, particularly when it comes to debates about the ethical status and significance of robots. To put it bluntly, it maintains that if there is behavioural equivalence between a robot and some other entity to whom we already owe ethical duties (where the equivalence relates specifically to the patterns that epistemically ground our duties to that other entity) we probably owe the same duties to the robot.

To make this more concrete, suppose we all agree that we owe ethical duties to certain animals due to their capacity to feel pain. The ethical behaviourist will argue that the epistemic ground for this belief lies not in the unobservable mental state of pain, but rather in the observable behavioural repertoire of the animal, i.e. because it yelps or cries out when it is hurt, because it recoils from certain pain-inducing objects in the world. Then, applying the comparative principle, it would follow that if a robot exhibits the same behavioural patterns, we owe it a similar set of duties. Of course, we could reject this if we decide to reevaluate our epistemic grounding for our belief that we owe animals certain duties, but this reevaluation will, if we follow ethical behaviourism, result in our simply identifying another set of behavioural patterns which it may be possible for a robot to emulate.

This has some important repercussions. It means that we ought to take much more seriously our ethical duties towards robots. We may easily neglect or overlook ways in which we violate or breach our ethical duties to them. Indeed, I think it may mean that we have to approach the creation of robots in the same way that we approach the creation of other entities of moral concern. It also means that robots could be a greater source of value in our lives than we currently realise. If our interactions with robots are behaviourally indistinguishable from our interactions with humans, and if we think those interactions with humans provide value in our lives, it is also possible for robots to provide similar values. I’ve defended this idea elsewhere, arguing that robotic ‘offspring’ could provide the same sort of value as human offspring, and that it is possible to have valuable friendships with robots.

But isn’t this completely absurd? Doesn’t it shake the foundations of common sense?

3. Objections to Ethical Behaviourism
Let me say a few things that might make it seem less absurd. First, I’m not the only one who argues for something along these lines. David Gunkel and Mark Coeckelbergh have both argued for a ‘relational turn’ in our approach to both animal and machine ethics. This approach advocates that we move away from thinking about the ontological properties of animals/machines and focus more on how they relate to us and how we relate to them. That said, there are probably some important differences between my position and theirs. They tend to avoid making strong normative arguments about the moral standing of animals/machines, and they would probably see my view as being much closer to the traditional approach that they criticise. After all, my view still focuses on ontological properties, but simply argues that we cannot gain direct epistemic access to them.

Second, note that the behavioural equivalence between robots and other entities to whom we owe moral duties really matters on this view. They must be equivalent with respect to all the behavioural patterns that are relevant to the epistemic grounding of our moral duties. And, remember, this could include internal functional patterns as well as external ones. This means that the threshold for the application of the comparative principle could be quite high (though, for reasons I am exploring in a draft paper, I think it may not be that high). Furthermore, as robots become more behaviourally equivalent to animals and humans, we could continue to reevaluate which behavioural patterns really count (think about the shifting behavioural boundaries for establishing machine ‘intelligence’ over the years).

This may blunt some of the seeming absurdity, but it doesn’t engage with the more obvious criticisms of the idea. The most obvious is that ethical behaviourism is just wrong. We don’t actually derive the epistemic warrant for our ethical beliefs from the behavioural patterns of the entities with whom we interact. There are other epistemic sources for these beliefs.

For example, someone might argue that we derive the epistemic warrant for our belief in the rights and duties of other humans and animals from the fact that we are made from the same ‘stuff’ (i.e. biological, organic material). This ‘material equivalence’ gives us reason for thinking that they will share similar mental states like pleasure and pain, and hence reason for thinking that they have sufficient moral status. Since robots will not be made from the same kind of stuff, we will not have the same confidence in accepting their moral status.

It’s possible to be unkind about this argument and accuse it of thinking that there is some moral magic to being made out of flesh and bone. But we shouldn’t be too unkind. Why matter gives rise to consciousness and mentality is still essentially mysterious, and it’s possible that there is something about our biological constitution that makes this possible in a way that an artificial constitution would not. I personally don’t buy this. I believe in mind-body functionalism. According to this view the physical substrate does not matter when it comes to instantiating a conscious mind. This would mean that ‘material equivalence’ should not be the epistemic grounding for our ethical beliefs. But it actually doesn’t matter whether you accept functionalism or not. I think the mere fact that there is uncertainty and plausible disagreement about the relevance of biological material to moral status is enough to undercut this as a potential epistemic source for our moral beliefs.

Another argument along these lines might focus on shared origins: that one reason for thinking that we owe animals and other humans moral duties is because they came into being through a similar causal process to us, i.e. by evolution and biological development. Robots would come into being in a very different way, i.e. through computer programming and engineering. This might be a relevant difference and give us less epistemic warrant for thinking that robots would have similar rights and duties.

There are, however, several problems with this. First, with advances in gene-editing technology, it’s already the case that animals are brought into being through something akin to programming and engineering, and it’s quite possible in the near future that humans will be too. Will this cause them to lose moral status? Second, it’s not clear that the differences are all that pronounced anyway. Many biologists conceive of evolution and biological development as a type of informational programming and engineering. The only difference is that there is no conscious human designer. Finally, it’s not obvious why origins should be ethically relevant. We usually try to avoid passing moral judgment on someone because of where they came from, focusing instead on how they behave and act toward us. Why should it be any different with machines?

This brings me to what I think might be the most serious objection to ethical behaviourism. One critical difference between humans/animals and robots has to do with how they are owned and controlled, and this gives rise to two related objections: (i) the deception objection and (ii) the ulterior motive objection.

The deception objection argues that because robots will be owned and controlled by corporations, with commercial objectives, those corporations will have every reason to program the robot to behave in a way that deceives you into thinking that you have some morally significant relationship with them. The ‘hired actor’ analogy is often used to flesh this out. Imagine if your life were actually a variant on the Truman Show: everyone else in it was just an actor hired to play the part of your family and friends. If you found this out, it would significantly undercut the epistemic foundations for your relationships with them. But, so the argument goes, this is exactly what will happen in the case of robots. They will be akin to hired actors: artificial constructs designed to play the part of our friends and companions (and so on).

I’m not sure what to make of this objection. It’s true that if I found out that all my friends were actors, it would require a significant reevaluation of my relationship to them. But it wouldn’t change the fact that they have a basic moral status and that I owe them some ethical duties. There are different gradations or levels of seriousness to our moral relationships with other beings. Removing someone from one level does not mean removing them from all. So I might stop being friends with these actors, but that’s a separate issue from their basic moral status. That could be true for robots too. Furthermore, I have to find out about the deception in order for it to have any effect. As long as everyone consistently and repeatedly behaves towards me in a particular way, then I have no reason to doubt their sincerity. If robots consistently and repeatedly behave toward us in a way that makes them indistinguishable from other objects of moral concern, then I think we will have no reason to believe that they are being deceptive.

Of course, it’s hard to make sense of the deception objection in the abstract because usually people are deceptive for a particular reason. This is where the ulterior motive objection comes into play. Sometimes people have ulterior motives for relating to us in a particular way, and when we find out about them it disturbs the epistemic foundations of our relationships with them. Think about the ingratiating con artist and how finding out about their fraud can quickly change a relationship from love to hate. One claim that is made about robots is that they will always have an ulterior motive underlying their relationships to us. They will be owned and controlled by corporations and will ultimately serve the profit motives of those corporations. Thus, there will always be some divided loyalty and potential for betrayal. We will always have some reason to be suspicious about them and to worry that they are not acting in our interests. (Something along these lines seems to motivate some of Joanna Bryson’s opposition to the creation of person-like robots).

I think this is a serious concern and a reason to be very wary about entering into relationships with robots. But let me say a few things in response. First, I don’t think this objection upsets the main commitments of ethical behaviourism. Divided loyalties and the possibility of betrayal are already a constant feature of our relationships with humans (and animals), but this doesn’t negate the fact that they have some moral status. Second, ulterior motives do not always have to undermine an ethically valuable relationship. We can live with complex motivations. People enter into intimate relationships for a multiplicity of reasons, not all of them shared explicitly with their partners. This doesn’t have to undermine the relationship. And third, the ownership and control of robots (and, more importantly, the fact that they will be designed to serve corporate commercial interests) is not some fixed, Platonic truth about them. Property rights are social and legal constructs and we could decide to negate them in the case of robots (as we have done in the case of humans in the past). Indeed, the very fact that robots could have significant ethical status in our lives might give us reason to do that.

All that said, the very fact that companies might use ethical behaviourism to their advantage when creating robots suggests that people who defend it (like me, in this post) have a responsibility to be aware of and mitigate the risks of misuse.

4. Conclusion
That’s all I’m going to say for now. As I mentioned above, ethical behaviourism is something that I intuit to be correct, but which most people I encounter disagree with. This post was a first attempt to reason through my intuitions. It could be that I am completely wrong-headed on this and that there are devastating objections to my position that I have not thought through. I’d be happy to hear about them in the comments (or via email).

Sunday, December 3, 2017

Is Technology Value-Neutral? New Technologies and Collective Action Problems


We’ve all heard the saying “Guns don’t kill people, people do”. It’s a classic statement of the value-neutrality thesis. This is the thesis that technology, by itself, is value-neutral. It is the people that use it that are not. If the creation of a new technology, like a gun or a smartphone, has good or bad effects, it is due to good or bad people, not the technology itself.

The value-neutrality thesis gives great succour to inventors and engineers. It seems to absolve them of responsibility for the ill effects of their creations. It also suggests that we should maintain a general policy of free and open innovation. Let a thousand blossoms bloom, and leave it to the human users of technology to determine the consequences.

But the value-neutrality thesis has plenty of critics. Many philosophers of technology maintain that technology is often (perhaps always) value-laden. Guns may not kill people themselves but they make it much more likely that people will be killed in a particular way. And autonomous weapons systems can kill people by themselves. To suggest that the technology has no biasing effect, or cannot embody a certain set of values, is misleading.

This critique of value-neutrality seems right to me, but it is often difficult to formulate it in an adequate way. In the remainder of this post, I want to look at one attempted formulation from the philosopher David Morrow. This argument maintains that technologies are not always value-neutral because they change the costs of certain options, thereby making certain collective action problems or errors of rational choice more likely. The argument is interesting in its own right, and looking at it allows us to see how difficult it is to adequately distinguish between the value-neutrality and value-ladenness of technology.

1. What is the value-neutrality thesis?
Value-neutrality is a seductive position. For most of human history, technology has been the product of human agency. In order for a technology to come into existence, and have any effect on the world, it must have been conceived, created and utilised by a human being. There has been a necessary dyadic relationship between humans and technology. This has meant that whenever it comes time to evaluate the impacts of a particular technology on the world, there is always some human to share in the praise or blame. And since we are so comfortable with praising and blaming our fellow human beings, it’s very easy to suppose that they share all the praise and blame.

Note how I said that this has been true for ‘most of human history’. There is one obvious way in which technology could cease to be value-neutral: if technology itself has agency. In other words, if technology develops its own preferences and values, and acts to pursue them in the world. The great promise (and fear) about artificial intelligence is that it will result in forms of technology that do exactly that (and that can create other forms of technology that do exactly that). Once we have full-blown artificial agents, the value-neutrality thesis may no longer be so seductive.

We are almost there, but not quite. For the time being, it is still possible to view all technologies in terms of the dyadic relationship that makes value-neutrality more plausible. Unsurprisingly, it is this kind of relationship that Morrow has in mind when he defines his own preferred version of the value-neutrality thesis. The essence of his definition is that value-neutrality arises if all the good and bad consequences of technology are attributable to praiseworthy or blameworthy actions/preferences of human users. The more precise formulation is this:

Value Neutrality Thesis: “The invention of some new piece of technology, T, can have bad consequences, only if people have vicious T-relevant preferences, or if users with “minimally decent” preferences act out of ignorance; and the invention of T can have good consequences, on balance, only if people have minimally decent T-relevant preferences, or if users with vicious T-relevant preferences act out of ignorance” 
(Morrow 2013, 331)
A T-relevant preference is just any preference that influences whether one uses a particular piece of technology. A vicious preference is one that is morally condemnable; a minimally decent preference is one that is not. The reference to ignorance in both halves of the definition is a little bit confusing to me. It seems to suggest that technology can be value-neutral even if it is put to bad/good use by people acting out of ignorance (Morrow gives the example of the drug thalidomide to illustrate the point). The idea then is that in those cases the technology itself is not to blame for the good or bad effects — it is the people. But I worry that this makes value-neutrality too easy to establish. Later in the article, Morrow seems to conceive of neutrality in terms of how morally praiseworthy and blameworthy the human motivations and actions were. Since ignorance is sometimes blameworthy, it makes more sense to me to think that neutrality occurs when the ignorance of human actors is blameworthy.

Be that as it may, Morrow’s definition gives us a clear standard for determining whether technology is value-neutral. If the bad or good effects of a piece of technology are not directly attributable to the blameworthy or praiseworthy preferences (or levels of knowledge) of the human user, then there is reason to think that the technology itself is value-laden. Is there ever reason to suspect this?

2. Technology and the Costs of Cooperation and Delayed Gratification
Morrow says that there is. His argument starts by assuming that human beings follow some of the basic tenets of rational choice theory when making decisions. The commitment to rational choice theory is not strong and could be modified in various ways without doing damage to the argument. The idea is that humans have preferences or goals (to which we can attach a particular value called ‘utility’), and they act so as to maximise their preference or goal-satisfaction. This means that they follow a type of cost-benefit analysis when making decisions. If the costs of a particular action outweigh its benefits, they’ll favour other actions with a more favourable ratio.

The key idea then is that one of the main functions of technology is to reduce the costs of certain actions (or make available/affordable actions that weren’t previously on the table). People typically invent technologies in order to be able to do something more efficiently and quickly. Transportation technology is the obvious example. Trains, planes and automobiles have all served to reduce the costs of long-distance travel to individual travellers (there may be negative or positive externalities associated with the technologies too — more on this in a moment).

This reduction in cost can change what people do. Morrow gives the example of a woman living three hours from New York City who wants to attend musical theatre. She can go to her local community theatre, or travel to New York to catch a show on Broadway. The show on Broadway will be of much higher quality than the show in her local community theatre, but tickets are expensive and it takes a long time to get to New York, watch the show, and return home (about a 9-hour excursion all told). This makes the local community theatre the more attractive option. But then a new high speed train is installed between her place of residence and the city. This reduces travel time to less than one hour each way. A 9-hour round trip has been reduced to a 5-hour one. This might be enough to tip the scales in favour of going to Broadway. The new technology has made an option more attractive.
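Morrow's theatre example is, at bottom, a simple cost-benefit calculation. Here is a minimal sketch of it in Python; the utility function and all the numbers are my own illustrative assumptions, not figures from Morrow's paper:

```python
# Toy cost-benefit model of Morrow's theatre example.
# All utility numbers are illustrative assumptions, not from the paper.

def net_utility(show_quality, ticket_cost, hours_travelling, hour_cost=10):
    """Benefit of seeing the show minus monetary and time costs."""
    return show_quality - ticket_cost - hours_travelling * hour_cost

# The local community theatre: cheap, nearby, lower quality.
local = net_utility(show_quality=60, ticket_cost=20, hours_travelling=0)

# Broadway before the high-speed train: six hours travelling (three each way).
broadway_before = net_utility(show_quality=150, ticket_cost=80, hours_travelling=6)

# Broadway after the train: under two hours travelling (one each way).
broadway_after = net_utility(show_quality=150, ticket_cost=80, hours_travelling=2)

# Before the train the local theatre wins; after it, Broadway does.
assert broadway_before < local < broadway_after
```

Nothing about the woman's preferences changed between the two scenarios; only the cost structure did. That is the sense in which the technology, not the user, tips the decision.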

Morrow has a nuanced understanding of how technology changes the costs of action. The benefits of technology need not be widely dispersed. They could reduce costs for some people and raise them for others. He uses an example from Langdon Winner (a well-known theorist of technology) to illustrate the point. Winner looked at the effects of tomato-harvesting machines on large and small farmers and found that they mainly benefitted the large farmers. They could afford them and thereby harvest far more tomatoes than before. This increased supply and thereby reduced the price per tomato to the producer. This was still a net benefit for the large farmer, but a significant loss for the small farmer. They now had to harvest more tomatoes, with their more limited technologies, in order to achieve the same income.

Now we come to the nub of the argument against value-neutrality. The argument is that technology, by reducing costs, can make certain options more attractive to people with minimally decent preferences. These actions, by themselves, may not be morally problematic, but in the aggregate they could have very bad consequences (it’s interesting that at this point Morrow switches to focusing purely on bad consequences). He gives two examples of this:

Collective action problems: Human society is beset by collective action problems, i.e. scenarios in which individuals can choose to ‘cooperate’ or ‘defect’ on their fellow citizens, and in which the individual benefits of defection outweigh the individual benefits of cooperation. A classic example of a collective action problem is overfishing. The population of fish in a given area is a self-sustaining common resource, something that can be shared fruitfully among all the local fishermen if they each fish a limited quota each year. If they ‘overfish’, the population may collapse, thereby depriving them of the common resource. The problem is that it can be difficult to enforce a quota system (to ensure cooperation), and individual fishermen are nearly always incentivised to overfish. Technology can exacerbate this by reducing the costs of overfishing. It is, after all, relatively difficult to overfish if you are relying on a simple fishing rod. Modern industrial fishing technology makes it much easier to dredge the ocean floor and scrape up most of the available fish. Thus, modern fishing technology is not value-neutral, because it exacerbates the collective action problem.
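The incentive structure of the overfishing case can be captured in a toy payoff model. Everything here (the prices, the sustainable yield, the collapse penalty) is an illustrative assumption of my own, chosen only to make the incentives visible:

```python
# Toy payoff model of the overfishing collective action problem.
# All numbers are illustrative assumptions.

def payoff(my_catch, total_catch, effort_cost_per_fish=1.0):
    """Individual payoff: revenue from fish caught, minus harvesting effort,
    minus a one-shot penalty (standing in for lost future catches) if the
    total catch exceeds the sustainable yield and the stock collapses."""
    sustainable_yield = 40
    revenue = my_catch * 3.0
    cost = my_catch * effort_cost_per_fish
    collapse_penalty = 25 if total_catch > sustainable_yield else 0
    return revenue - cost - collapse_penalty

# Four fishermen; a quota of 10 each keeps the total at the sustainable yield.
cooperate = payoff(my_catch=10, total_catch=40)

# One fisherman defects and takes 25; the stock collapses for everyone.
defector = payoff(my_catch=25, total_catch=55)
cooperator_betrayed = payoff(my_catch=10, total_catch=55)

# Defection pays the defector more than cooperating would have...
assert defector > cooperate
# ...while leaving the cooperators worse off than under the quota.
assert cooperator_betrayed < cooperate

# Industrial gear lowers the effort cost, making defection pay even better.
defector_with_tech = payoff(my_catch=25, total_catch=55, effort_cost_per_fish=0.2)
assert defector_with_tech > defector
```

The last comparison is Morrow's point in miniature: the technology does not change anyone's preferences, but by cheapening defection it widens the gap between the defector's payoff and the cooperator's, making the bad aggregate outcome more likely.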

Delayed gratification problems: Many of us face decision problems in which we must choose between short-term and long-term rewards. Do we use the money we just earned to buy ice-cream or do we save for our retirements? Do we sacrifice our Saturday afternoons to learning a new musical instrument, or do we watch the latest series on Netflix instead? Oftentimes the long-term reward greatly outweighs the short-term reward, but due to a quirk of human reasoning we tend to discount this long-term value and favour the short-term rewards. This can have bad consequences for us individually (if we evaluate our lives across their entire span) and collectively (because it erodes social capital if nobody in society is thinking about the long-term). Morrow argues that technology can make it more difficult to prioritise long-term rewards by lowering the cost of instant gratification. I suspect many of us have an intimate knowledge of the problem to which Morrow is alluding. I know I have often lost days of work that would have been valuable in the long-term because I have been attracted to the short-term rewards of social media and video-streaming.
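The quirk of reasoning alluded to here is usually modelled as hyperbolic discounting, where a reward of size A at delay D is valued at A / (1 + kD). The parameter values below are illustrative assumptions, not empirical estimates; the sketch just shows how lowering the delay on a small reward (instant streaming rather than a trip to obtain it) can flip our choice away from a much larger long-term payoff:

```python
# Hyperbolic discounting: V = A / (1 + k * D).
# Standard behavioural-economics model; the reward sizes, delays,
# and discount rate k below are illustrative assumptions.

def present_value(amount, delay_days, k=0.05):
    """Perceived value *now* of a reward `delay_days` away."""
    return amount / (1 + k * delay_days)

BIG_REWARD = 100   # e.g. long-term payoff of a productive afternoon
BIG_DELAY = 365    # ...realised a year from now
SMALL_REWARD = 10  # e.g. an afternoon of streaming

# When the small reward takes some time or effort to obtain, the
# long-term option can still win; when technology makes the small
# reward instantly available, it dominates the present-moment choice.
for small_delay in (30, 0):
    v_small = present_value(SMALL_REWARD, small_delay)
    v_big = present_value(BIG_REWARD, BIG_DELAY)
    choice = "small" if v_small > v_big else "big"
    print(f"small reward {small_delay} days away: "
          f"{v_small:.2f} vs {v_big:.2f} -> choose {choice}")
```

On these numbers the year-off reward is worth about 5.2 in present-value terms, so it beats the small reward at a 30-day delay (worth 4.0) but loses once the delay drops to zero. Technology, on Morrow's view, is precisely what shrinks that delay.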

Morrow gives more examples of both problems in his paper. He also argues that the problems interact, suggesting that the allure of instant gratification can exacerbate collective action problems.

3. Criticisms and Conclusions
So is this an effective critique of value neutrality? Perhaps. The problems to which it alludes are certainly real, and the basic premise underlying the argument — that technology reduces the cost of certain options — is plausible (perhaps even a truism). But there is one major objection to the narrative: that even in the case of collective action problems and delayed gratification, it is human viciousness that does the damage.

Morrow rejects this objection by arguing that it is only right to call the human actors vicious if the preferences and choices they make are condemnable in and of themselves. He argues that the preferences that give rise to the problems he highlights are not, by themselves, morally condemnable; it is only the aggregate effect that is morally condemnable. Morality can only demand so much from us, and it is part and parcel of the human condition to be imbued with these preferences and quirks. We are not entitled to assume a population of moral and rational saints when creating new technologies, or when trying to critique their value-neutrality.

I think there is something to this, but I also think that it is much harder than Morrow allows to draw the line between preferences and choices that are morally condemnable and those that are not. I discussed this once before when I looked at Ted Poston’s article “Social Evil”. The problem for me is that knowledge plays a crucial role in moral evaluation. If an individual fisherman knows that his actions contribute to the problem of overfishing (and if he knows about the structure of the collective action problem), it is difficult, in my view, to say that he does not deserve some moral censure if he chooses to overfish. Likewise, given what I know about human motivation and the tradeoff between instant and delayed gratification, I think I would share some of the blame if I spent my entire afternoon streaming the latest series on Netflix instead of doing something more important. That said, this has to be moderated, and a few occasional lapses could certainly be tolerated.

Finally, let me just point out that if technology is not value-neutral, it stands to reason that its non-neutrality can work in both directions. All of Morrow’s examples involve technology biasing us toward the bad. But surely technology can also bias us toward the good? Technology can reduce the costs of surveillance and monitoring, which makes it easier to enforce cooperative agreements and prevent collective action problems (I owe this point to Miles Brundage). This may have other negative effects, but it can mitigate some problems. Similarly, technology can reduce the costs of vital goods and services (medicines, food etc.), thereby making it easier to distribute them more widely. If we don’t share all the blame for the bad effects of technology, then surely we don’t share all the credit for its good effects?

Tuesday, November 28, 2017

The Problem with Hate Speech Laws

Via John S Quarterman on Flickr

Many jurisdictions in Europe have laws that criminalise hate speech and there is no shortage of campaigners requesting such prohibitions. The debate is particularly acute on college campuses, where the protection of minority students from such hate speech is increasingly being viewed as central to the university’s mission to provide a ‘safe space’ for education.

That’s not to say that hate speech prohibitions have proved uncontroversial. On the contrary, they are among the most controversial prohibitions that are discussed today. Some people feel that it is difficult to adequately define hate speech, that it is hard to explain why hate speech is harmful (if it is harmful), and that prohibiting it conflicts with other important values such as the value of free speech.

Several philosophers have tried to engage with these controversies. Steven Heyman and Jeremy Waldron are among the most prominent. They have provided sophisticated philosophical justifications for prohibitions on hate speech. They have done so by arguing that hate speech undermines the liberal democratic commitment to recognising human dignity and equality. In a recent(ish) article, Robert Mark Simpson has argued that their justifications are flawed.

In this post, I want to look at Simpson’s argument. I do so partly because it is interesting in its own right and partly because it reveals certain problems with other attempts to outlaw/prohibit behaviour that is linked to the systematic oppression of minorities.

1. The Heyman-Waldron Argument for Hate Speech Prohibitions
We’ll need to start by defining ‘hate speech’. After all, we cannot understand the potential justifications for its prohibition, without understanding what it is. Here’s the definition favoured by Simpson (which comes from earlier work by Corlett and Francescotti):

Hate Speech: Any symbolic, communicative action which wilfully expresses intense antipathy towards some group or towards an individual on the basis of membership in some group.

This is quite a general definition. Examples might make it more concrete. Using a racial slur (e.g. ‘kike’, ‘gypsy’) to describe someone who belongs to a particular racial or ethnic minority might count as hate speech according to this definition. But it is hard to be too concrete on this matter. It is all very context-dependent. The specific forms that hate speech takes will vary, to some extent, from country to country and culture to culture. It will also vary depending on the pragmatic context in which the speech is uttered.

It seems plausible to suppose that hate speech, so defined, can be harmful. But the harms it entails could be quite variable. It could cause harm to a specific individual in the form of psychological trauma or upset. But lots of speech, some of it not falling within the definition of hate speech, could cause such harm. For example, describing someone’s work as ‘incompetent’ or ‘abominable’ could cause a great deal of psychological upset, but it would not count as hate speech, nor would we think it requires special legal prohibition.

For this reason, the modern tendency is not to think about the harm of hate speech in terms of direct harms to specific individuals, but rather as a type of collective institutional harm — something that contributes to a social climate or set of institutions in which members of minority groups continue to be oppressed. This makes the harm of hate speech more abstract and indirect.

The key feature of Heyman and Waldron’s work has been to flesh out this ‘institutional harm’ view of hate speech in more detail. There are subtle differences between their arguments, but they are grounded in the same basic idea. They argue that hate speech is problematic because of the signals it sends to members of minority groups concerning their moral and legal status within a given community.

They both argue that modern liberal democratic states are founded upon a principle of moral equality. This principle holds that all people, regardless of race, religion, ethnicity, gender (etc) are moral equals. No one individual has a superior moral or legal status to another. And they both argue that the problem with hate speech is that it tells members of minority groups that they do not share in this equal moral status. Heyman’s version of the argument focuses on fundamental rights and how language signals recognition of the ‘other’ as a rights-bearer. Waldron focuses more on the gap between de jure equality and de facto equality. Many liberal democratic legal systems include provisions that formally recognise the equality of all persons, but then fail to live up to this ideal in practice. He thinks the problem with hate speech is that it makes members of minority groups less confident in the official commitments of the system. They no longer feel that the community is a safe space for them.

But how exactly does hate speech do this? It seems implausible to suppose that one particular instance of hate speech can shake the foundations of the legal order in the manner envisaged by Heyman and Waldron, or undermine an individual’s confidence in a social system to such an extent that they no longer feel safe. One neo-Nazi does not make for a system of oppression. Simpson suggests that an analogy between hate speech and environmental pollution can explain the idea:

The Pollution Analogy: “[T]hose who tend to hold this view of hate speech tend to think of individual acts of hate speech operating in a way that is analogous to pollution. Individual acts of pollution can inflict discrete harm on specifiable victims. Many acts of pollution don’t inflict harm in that way. However, even when there are no specifiable victims, all acts of pollution have a degrading impact on environmental systems whose degradation beyond a certain point does inflict harms on individuals….Analogously, acts of hate speech do not always directly harm specifiable individuals, but they all contribute, so one may argue, to the creation and sustenance of a social climate in which harms and disadvantages redound to members of vulnerable social classes.” 
(Simpson 2013)

That’s the core of the Heyman-Waldron line of argument. Hate speech should be prohibited because it contributes to a climate of intimidation that cumulatively degrades and subordinates particular minority groups.

2. The Injustice of Hate Speech Prohibitions
So what’s wrong with this argument? It sounds superficially plausible, doesn’t it? Why should we reject it? Simpson’s counterargument is very straightforward. He agrees that the institutional harm highlighted by Heyman and Waldron is superficially plausible. In fact, he thinks it may even be true that individual acts of hate speech cumulatively result in a polluted social climate. The problem is that hate speech laws are typically targeted at the individual acts, not the cumulative result. Unless it can be shown that the individual act meaningfully contributes to the institutional harm, imposing a sanction on the individual act is not just.

Here’s the argument in more formal terms (this is my reconstruction):

  • (1) Hate speech laws target individual behaviour (i.e. individual acts of hate speech)
  • (2) If the harm of hate speech is institutional/structural, hate speech laws can only be just if individual acts contribute meaningfully to that institutional/structural harm.
  • (3) The harm of hate speech is institutional/structural (conclusion of the Heyman-Waldron argument)
  • (4) Individual acts of hate speech do not meaningfully contribute to that institutional/structural harm.
  • (5) Therefore, hate speech laws are unjust.

The controversial premises here are (2) and (4). Premise (2) is working off an intuitive theory of causal responsibility and just punishment. The idea underlying it is that an individual subject S can only be rightly held responsible for a harm X, if their behaviour was a significant or primary causal factor in X. This is a theory of just punishment that applies in many areas of law, in particular criminal and civil law, where it has to be shown that ‘but for’ an individual’s behaviour a harmful result would not have occurred. You could certainly challenge this intuitive theory of punishment. People already do so in the context of harms caused by complex organisations or autonomous technology, but it is still the core of our intuitive theory of just punishment.

Premise (4) is the one that Simpson dedicates most of his time to. He thinks it is fairly obviously true that an individual act of hate speech cannot, by itself, create a system of social exclusion, particularly when the legal system includes provisions that formally protect equal status. He thinks that to assert the contrary view is to assign too much importance to the actions of outlier individuals. But he notes that some participants in the hate speech debate say that their arguments do not depend on making complex causal claims of this sort. Waldron is a good example. His argument is that the mere presence of hate speech — any hate speech — is enough to degrade the social environment. He is not saying that hate speech is the root cause of systematic inequality. Furthermore, he doesn’t focus on punishment in his justification of hate speech laws. He focuses on the expressive and deterrent functions of the law instead. He thinks, contra free speech advocates like JS Mill, that the government should interfere in the ‘marketplace of ideas’ because this could have a positive long-term effect on the social environment.

Simpson argues that this creates a problem for Waldron. On the one hand, he is explicating the harm of hate speech in terms of its mere presence and visibility in society. On the other hand, he is defending hate speech laws in terms of their long-term, consequential impact on social order. The former claim tries to sidestep complex causal questions; the latter engages with them directly. This leads to a tension in the argument. The only way for Waldron to justify the claim concerning the consequential impact of hate speech laws on society is to make an assumption about the meaningful causal role of hate speech in creating such an environment. This butts up against the view that Simpson defends in premise (4).

Simpson goes on to point out that it is, in any event, unlikely that the mere visibility or presence of hate speech will undermine an individual’s confidence in a social order that otherwise protects their equal status. The hate speech has to have some credibility behind it, i.e. the individual will have to believe that the hateful views will be taken up by other players and actors in the social system. This, in turn, gets us into debates about the causal links between individual acts and collective outcomes.

3. Conclusion
That’s basically it (I told you this would be brief). Heyman and Waldron argue that the harm of hate speech lies in its contribution to a social order in which minorities are excluded from equal moral status. They both then seek to justify hate speech laws in terms of their ability to mitigate this exclusionary effect. The problem with both arguments is that they use an individualised tool — the hate speech law — to solve a problem that is not the causal product of any one individual’s actions. This does not sit well with an intuitive theory of just punishment.

There are, of course, some solutions to this problem. For one thing, Simpson’s objection is less impressive when we are dealing with hate speech emanating from individuals with important social influence or power. It’s more plausible to claim that their particular actions will have a meaningful effect on a social order.

Furthermore, it is not as if the law (or legal theory) has never dealt with the basic problem that Simpson identifies. There are areas of tort law — e.g. toxic torts — where it can be difficult to prove that particular actions were the ‘but for’ cause of a harmful outcome. An example might be litigation concerning exposure to asbestos and the disease mesothelioma. A single exposure can be enough to cause the disease but this makes it difficult to prove that a particular employer is responsible for the harmful exposure (the same problem arises when the disease is a cumulative result of many exposures that are attributable to different causes). Nevertheless, courts have been willing to assign legal responsibility on the basis of alternative theories of causation and public policy considerations. Similarly, there is an active debate at the moment about legal responsibility for actions that are caused through technological systems (e.g. robots). Philosophers like Luciano Floridi, for example, favour a theory of distributed causal responsibility, according to which every node within a causal system that is responsible for some outcome bears some of the blame for that outcome. One could imagine similar theories being adopted in the debate about hate speech.

This would mean making a break with our intuitive theory of just punishment, and further assessing the consequences of doing that. Unfortunately, such an assessment lies beyond the scope of this blogpost.

Monday, November 27, 2017

Robot Sex in the Media

The book that Neil McArthur and I edited, Robot Sex: Social and Ethical Implications (MIT Press 2017), has been featured in a number of recent media pieces. Although this topic is usually treated with excessive hyperbole by the media, we have managed to secure some pretty good, substantive engagement with the ideas in the book in some outlets (partly because we wrote some of it). I'm collecting links to these pieces here. If you know of any other coverage, please let me know.

[Also if you'd like to buy or review the book on Amazon, I wouldn't be displeased...]


  • 'Kunstigt Klimaks' (roughly: 'Artificial Orgasm') - Weekendavisen, 15th September 2017 by Anne Jensen Sand. (This is in Danish so I have no idea what it says)

  • 'Falling in Love with Sexbots' - by Bryan Appleyard in The Sunday Times News Review, 22nd October 2017 (sadly behind a subscription wall -- though you can get free access by signing up)

Audio and Video Interviews

A Dilemma for Anti-Porn Feminism

Feminism is a complex school of thought. Indeed, it’s not really a school of thought at all. It’s many different schools of thought, often uncomfortably lumped together under a single label. Within these schools of thought, there are some that are deeply opposed to mainstream, hardcore pornography. The radical feminist school — led by the likes of Catharine MacKinnon and Andrea Dworkin — are the obvious exemplars of this anti-porn point of view. But there are also more liberal feminists who have defended variations of it, such as Rae Langton and Jennifer Hornsby. Are they right to do so?

Alex Davies has recently published a fascinating and well-researched article arguing that they are not. Focusing on the anti-porn arguments of MacKinnon, he claims that liberal feminists cannot consistently embrace a view that prioritises female autonomy and views pornography as something that necessarily silences and subordinates women. The reason for this is that there are female pornographers, i.e. women who seem to freely and autonomously choose to produce and distribute pornography that falls within the remit of that to which MacKinnon et al are opposed.

In this post I want to provide a quick overview and analysis of Davies’s argument. I tend to agree with his reasoning and explaining why reveals something pretty important about the political and ethical aspects of pornography. That said, my overview won’t be a substitute for reading the full thing. Anyone with an interest in the debate about the ethics of pornographic representations should do so.

1. The Structure of Davies’s Argument

To understand Davies’s central argument, we first need to take a step back in time to consider MacKinnon’s case against pornography. That case was premised on three things: (i) a narrow definition and understanding of ‘pornography’; (ii) a particular conception of the harm constituted by that narrowly-defined form of pornography; and (iii) a unique legal remedy for this harm.

Let’s start with the narrow definition of pornography. Anyone who campaigns against pornography faces an obvious definitional problem: you don’t want the campaign to be over-inclusive. Many fictional and pictographic representations are sexually provocative and arousing. Sometimes they are presented as ‘serious art’; sometimes they actually are serious art. MacKinnon was conscious of this and tried to target her campaign at a specific subset of sexually provocative material. She offered an elaborate definition of this type of pornography (which on previous occasions I have called ‘Mac-Porn’ and will do so again here). Mac-Porn is anything that involves the ‘sexually explicit subordination of women through words and pictures’, and consists in imagery or words that dehumanise or objectify women, or depict them as enjoying rape or sexual humiliation, or reduce them to body parts, or otherwise brutalise and degrade them. Furthermore, although it is initially defined as requiring the depiction of women, it is subsequently expanded to cover ‘the use of men, children or transsexuals in the place of women’. I’ve tried to illustrate this in full detail in the image below, drawing specifically on the lengthier characterisations in the work of MacKinnon.

Note that Mac-Porn, as defined, may avoid the problem of over-inclusivity at the expense of under-inclusivity and value-ladenness. Nevertheless, it is what we will be working with for the remainder of the post.

The second premise of MacKinnon’s case against pornography focuses on the harm constituted by porn. Note how I say ‘constituted by’ and not ‘caused by’. MacKinnon studiously avoids making claims about the empirical consequences of exposure to porn. Instead, she argues that pornography itself constitutes a kind of harm to women. Specifically, she thinks that the production and distribution of porn is itself an act that subordinates and silences women. This helps her to sidestep defences of pornography that use the principle of free speech. If you’re really interested, I’ve examined some ways to make sense of this argument in the past. We don’t need to dwell on them here. We just need to accept it, for the sake of argument, and move on.

Finally, MacKinnon’s case against pornography advocates a particular legal remedy to the problem. MacKinnon does not favour government-run censorship as this would not empower women (although she did, controversially, appear to support censorship in the R v. Butler case). She favours the creation of civil rights ordinances that would enable women to sue producers and distributors of pornography for the harm caused to them by porn. This would be a legal option open to all women since the purported harm is not done to specific, individual women, but rather to women as a collective.

That’s everything we need to understand Davies’s argument. His argument works like this (this is my reconstruction of the reasoning, not something that appears in the paper):

  • (1) We should not silence women (assumption, presumed by MacKinnon’s argument).
  • (2) There are female pornographers, i.e. women who produce and distribute pornographic material that falls within the definition of Mac-Porn.
  • (3) If we introduced a legal remedy like MacKinnon’s anti-porn civil rights ordinances, female pornographers would be silenced.
  • (4) Therefore, we should not introduce a legal remedy like MacKinnon’s anti-porn civil rights ordinances.

I should clarify that Davies doesn’t fully endorse (4) in his article. His aims are more modest than that. He merely wants to highlight the tension or dilemma posed by accepting that we should not silence women while at the same time acknowledging the existence of female pornographers. He points out that this neglected tension leads many anti-porn feminists to either overlook or deny the existence of female pornographers. He claims that this is not a credible position, at least not if you are a liberal feminist. If you are a radical like MacKinnon, it may be possible to deny or overlook the existence of female pornographers, but only if you accept that all women who produce pornography are victims of false consciousness. Let’s see how he fleshes this out.

2. Do female pornographers exist?
The existence of female pornographers should be relatively uncontroversial. There clearly are women who produce, direct, design and distribute pornographic material. If you doubt this, I encourage you to read The Feminist Porn Book, which contains over two dozen essays from prominent female/feminist pornographers. These are not just women who produce and distribute porn; they are women who produce and distribute porn that they are proud of and that they feel lives up to the ideals of feminism. (I should clarify that not all the contributions are from women, though the vast majority are; some are from men and others are from transgender or genderqueer individuals — I sidestep that important detail here because the MacKinnon-style argument seems to focus primarily on cisgender women).

This fact alone might be enough to support premise (2) of Davies’s argument. But, of course, the position is more complicated than that. It might be the case that all the pornography produced by these female pornographers is of a softer, more genteel nature than that envisaged by MacKinnon in her definition of Mac-Porn. For example, consider the work of Candida Royalle. She was one of the pioneers of female-made pornography in the 1980s, and her filmography favoured relatively softcore content. She was a female pornographer, for sure, but she did not make Mac-Porn. Consequently, her existence does not support premise (2) of Davies’s argument.

But not all female-created pornography is of this ‘softer’ type. Some of it is quite hardcore and involves the eroticisation of women in submissive and objectified positions. Davies presents a few examples of this in his article. First, he reviews back issues of the ground-breaking lesbian-porn magazine On Our Backs, and describes how they:

[D]epicted women being penetrated by objects, women on display, and fantasies that involved the use of coercion, humiliation, and violence. They also depicted fantasies that involved none of these things. The contents were designed to appeal to an audience of diverse sexual tastes and curiosities. 
(Davies 2017)

He also looks at the female producers of pornographic films that came after Candida Royalle and notes how many of them have produced films that eroticise domination and submission, with different motivations and intentions:

[These female pornographers] include: Nina Hartley, Jacky St. James, Erika Lust, Tristan Taormino, Courtney Trouble, and Madison Young. Each has produced material that eroticizes dominance/submission. They have various motivations for producing the material that they do. Hartley and Taormino believe that sexually explicit material can function as good sex education. Taormino also believes that well-designed sexually explicit material can be used to expose what she calls the ‘fallacies of gender’; by which she means the gender binary and the stereotypes common in male-oriented pornography. Trouble aims to produce material that shows people like her (larger, queer women) as desirable. Young wants to produce depictions of authentic desire. Lust wants to produce material that reflects her sexuality better than male-oriented [pornography]. 
(Davies 2017)

Davies points out that not only do women produce this material; they also seem to desire access to it and value it quite highly. He cites a focus group study done by Rachel Liberman, which suggests that feminist pornography of this sort was held in high regard because it provided more authentic insights into female sexual subjectivity.

I could go on. Davies provides many more examples of female pornographers in his article and his engagement with these examples is one of the real strengths of his piece. Hopefully, this handful suffices to make the critical point: that premise (2) seems to be robustly well-supported by empirical, real-world examples of female pornographers.

This then leads to the dilemma at the heart of Davies’s argument. If we were to accept the MacKinnon-style argument, we would have to assume that (a) all these female pornographers are silencing and subordinating women through their work and (b) that they ought to be subject to legal sanction for doing so. This seems strange given that this would, in effect, silence this particular group of women.

3. Resolving the dilemma?
Is there any way out of this dilemma? Davies suggests that the most popular route out of the dilemma is to simply deny or overlook the existence of female pornographers, and he spends a good deal of time in his article highlighting how prominent liberal anti-porn feminists do this, either implicitly or explicitly. But let’s say you don’t deny their existence. Is there any way to then maintain the opposition to pornography?

One possibility would be to view the women in question as victims. They need to be saved from the system because they are being oppressed by it. The problem with this is that none of the female pornographers discussed above (or elsewhere in Davies’s article) see themselves in this light. Oftentimes their view is the opposite. They think they are being empowered through the production and distribution of porn. They, and the people who consume their content, view the pornographic material as something that makes a positive contribution to their sex lives. Why should we deny their testimony?

Davies argues that only MacKinnon can maintain a consistent position on this. Because of her radicalist leanings, she views all (or virtually all) women as victims of a patriarchal false consciousness when it comes to sex. She thinks that men set the conditions for sexuality and that we cannot trust women’s testimony concerning their sexual preferences and desires until we have achieved meaningful gender equality. Indeed, on one occasion, MacKinnon even went so far as to suggest that female pornographers were like abuse victims defending their abusers (her exact words are cited in Davies’s article).

This is a pretty extreme view, one that is not shared by the typical liberal feminist, and one that leads to certain conceptual difficulties. After all, MacKinnon must believe that at least some women can see through the veil of false consciousness that has been foisted upon them by the patriarchy. I assume, for example, that she sees herself as someone who has managed to do this. But why assume that she is the only one in this privileged position? Why not trust the voices of the female pornographers?

4. Conclusion
This is why I think Davies’s argument is important and interesting. It highlights two uncomfortable truths about our attitudes towards sex and sexuality.

First, it highlights how we often have a ‘standard model’ for normative sex/sexuality in our minds. This model affects the kinds of sex and sexual self-expression that we deem appropriate or acceptable (for women in particular). Classically, this model consisted of heterosexual sexual intercourse, within marriage. We’ve expanded the standard model since then, but there are still forms of sex that trouble many of us because they lie outside the boundary lines (e.g. BDSM, non-monogamous sex, etc.). This is why people often assume that women could not authentically desire these forms of sex, or seek to represent them in words and images.

Second, the article highlights the importance of taking individual testimony seriously, even when it conflicts with our standard model. This is something that is particularly pertinent at the moment as more and more women come forward to openly share their stories of sexual harassment and assault. But just as we are now taking this testimonial evidence more seriously, perhaps we should also take the testimonial evidence of female pornographers more seriously? If you read someone like Tristan Taormino or Nina Hartley, it’s very hard to believe that they are victims of false consciousness. They seem to have thought this through and are fully aware of what they are doing and the conditions under which they are doing it. They are not rose-tinted idealists, but nor are they oppressed victims. What they say may make us uncomfortable (given the implied commitment to the standard model), but perhaps we should take it at face value?

Thursday, November 23, 2017

Episode #32 - Carter and Palermos on Extended Cognition and Extended Assault


In this episode I talk to Adam Carter and Orestis Palermos. Adam is a Lecturer in Philosophy at the University of Glasgow. His primary research interests lie in the area of epistemology, but he has increasingly explored connections between epistemology and other disciplines, including bioethics (especially human enhancement), the philosophy of mind, and cognitive science. Orestis is a lecturer in philosophy at Cardiff University. His research focuses on how ‘philosophy can impact the engineering of emerging technologies and socio-technical systems.’ We talk about the theory of the extended mind and the idea of extended assault.

You can download the episode here or listen to it below. You can also subscribe on iTunes and Stitcher (RSS feed).

Show Notes

  • 0:00 - Introduction
  • 0:55 - The story of David Leon Riley and the phone search
  • 3:15 - What is extended cognition?
  • 7:35 - Extended cognition vs extended mind - exploring the difference
  • 13:35 - What counts as part of an extended cognitive system? The role of dynamical systems theory
  • 19:14 - Does cognitive extension come in degrees?
  • 24:18 - Are smartphones part of our extended cognitive systems?
  • 28:10 - Are we over-extended? Do we rely too much on technology?
  • 35:02 - Making the case for extended personal assault
  • 39:50 - Does functional disability make a difference to the case for extended assault?
  • 43:35 - Does pain matter to our understanding of assault?
  • 49:50 - Does the replaceability/fungibility of technology undermine the case for extended assault?
  • 55:00 - Online hacking as a form of personal assault
  • 59:30 - The ethics of extended expertise
  • 1:02:58 - Distributed cognition and distributed blame

Relevant Links


Wednesday, November 1, 2017

Video Interview about Robot Sex: Social and Ethical Implications

Through the wonders of modern technology, myself and Adam Ford sat down for an extended video chat about the new book Robot Sex: Social and Ethical Implications (MIT Press, 2017). You can watch the full thing above or on youtube. Topics covered include:

  • Why did I start writing about this topic?
  • Sex work and technological unemployment
  • Can you have sex with a robot?
  • Is there a case to be made for the use of sex robots?
  • The Campaign Against Sex Robots
  • The possibility of valuable, loving relationships between humans and robots
  • Sexbots as a social experiment

Be sure to check out Adam's other videos and support his work.

Tuesday, October 31, 2017

Should Robots Have Rights? Four Perspectives

Ralph McQuarrie's original concept art for C3PO

I always had a soft spot for C3PO. I know most people hated him. He was overly obsequious, terribly nervous, and often annoying. R2D2 was more roguish, resilient and robust. Nevertheless, I think C3PO had his charms. You couldn’t help but sympathise with his plight, dragged along by his more courageous peer into all sorts of adventures, most of which lay well beyond the competence of a simple protocol droid like him.

It seems I wasn’t the only one who sympathised with C3PO’s plight. Anthony Daniels — the actor who has spent much of his onscreen career stuffed inside the suit — was drawn to the part after seeing Ralph McQuarrie’s original drawings of the robot. He said the drawings conveyed a tremendous sense of pathos. So much so that he felt he had to play the character.

All of this came flooding back to me as I read David Gunkel’s recent article ‘The Other Question: Can and Should Robots Have Rights?’. Gunkel is well-known for his philosophical musings on technology, cyborgs and robots. He authored the ground-breaking book The Machine Question back in 2011, and has recently been dipping his toe into the topic of robot rights. At first glance, the topic seems like an odd one. Robots are simply machines (aren’t they?). Surely, they could not be the bearers of moral rights?

Au contraire. It seems that some people take the plight of the robots very seriously indeed. In his paper, Gunkel reviews four leading positions on the topic of robot rights before turning his attention to a fifth position — one that he thinks we should favour.

In what follows, I’m going to set out the four positions that he reviews, along with his criticisms thereof. I’ll then close by outlining some of my own criticisms/concerns about his proposed fifth position.

1. The Four Positions on Robot Rights
Before I get into the four perspectives that Gunkel reviews, I’m going to start by asking a question that he does not raise (in this paper), namely: what would it mean to say that a robot has a ‘right’ to something? This is really a question about the nature of rights themselves. I think it is important to start with this question because it is worth having some sense of the practical meaning of robot rights before we consider whether robots are entitled to them.

I’m not going to say anything particularly ground-breaking. I’m going to follow the standard Hohfeldian account of rights — one that has been used for over 100 years. According to this account, rights claims — e.g. the claim that you have a right to privacy — can be broken down into a set of four possible ‘incidents’: (i) a privilege; (ii) a claim; (iii) a power; and (iv) an immunity. So, in the case of a right to privacy, you could be claiming one or more of the following four things:

Privilege: That you have a liberty or privilege to do as you please within a certain zone of privacy.
Claim: That others have a duty not to encroach upon you in that zone of privacy.
Power: That you have the power to waive your claim-right not to be interfered with in that zone of privacy.
Immunity: That you are legally protected against others trying to waive your claim-right on your behalf.

As you can see, these four incidents are logically related to one another. Saying that you have a privilege to do X typically entails that you have a claim-right against others to stop them from interfering with that privilege. That said, you don’t need all four incidents in every case.

We don’t need to get too bogged down in these details. The important point here is that when we ask the question ‘Can and should robots have rights?’ we are asking whether they should have privileges, claims, powers and immunities to certain things. For example, you might say that there is a robot right to bodily integrity, which could mean that a robot would be free to do with its body (physical form) as it pleases and that others have a duty not to interfere with or manipulate that bodily form, unless they receive the robot’s acquiescence. Or, if you think that’s silly because robots can’t consent (or can they?) you might limit it to a simple claim-right, i.e. a duty not to interfere without permission from someone given the authority to make those decisions. Legal systems grant rights to people who are incapable of communicating their wishes, or to entities that are non-human, all the time, so the notion that robots could be given rights in this way is not absurd.
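For readers who like things made concrete, the Hohfeldian decomposition can be sketched as a simple data structure: a named right is just a bundle of the four incidents, any subset of which may be present. This is purely illustrative (the class and field names are my own, not anything from Hohfeld or the rights literature), but it captures the point that the privacy example bundles all four incidents while a bare claim-right carries only one:

```python
from dataclasses import dataclass

@dataclass
class Right:
    """A rights claim decomposed into Hohfeldian incidents."""
    name: str
    privilege: bool = False   # holder is at liberty to act within the protected zone
    claim: bool = False       # others owe a duty of non-interference
    power: bool = False       # holder can waive the claim-right
    immunity: bool = False    # others cannot waive the claim on the holder's behalf

    def incidents(self):
        """List which of the four incidents this right actually includes."""
        return [k for k in ("privilege", "claim", "power", "immunity")
                if getattr(self, k)]

# The privacy example from above: all four incidents present.
privacy = Right("privacy", privilege=True, claim=True, power=True, immunity=True)

# A bare claim-right (e.g. for an entity that cannot consent): others owe a
# duty of non-interference, but the holder cannot itself waive that duty.
bodily_integrity = Right("bodily integrity", claim=True)
```

Nothing hangs on the code itself; the design point is simply that rights are bundles, not atoms, so the question ‘should robots have rights?’ is really a family of more fine-grained questions about which incidents to grant.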

But that, of course, brings us to the question that Gunkel asks, which is in fact two questions:

Q1 - Can robots have rights?
Q2 - Should robots have rights?

The first question is about the capacities of robots: do they, or could they, have the kinds of capacities that would ordinarily entitle an entity to rights? Gunkel views this as a factual/ontological question (an ‘is’ question). The second question is about whether robots should have the status of rights holders. Gunkel views this as an axiological question (an ‘ought’ question).

I’m not sure what to make of this framing. I’m a fairly staunch moralist when it comes to rights. I think we need to sort out our normative justification for the granting of rights before we can determine whether robots can have rights. Our normative justification of rights would have to identify the kinds of properties/capacities an entity needs to possess in order to have rights. It would then be a relatively simple question of determining whether robots can have those properties/capacities. The normative justification does most of the hard work and is really analytically prior to any inquiry into the rights of robots.

This means that I think there are more-or-less interesting ways of asking the two questions to which Gunkel alludes. The interesting form of the ‘can’ question is really: is it possible to create robots that would satisfy the normative conditions for an entitlement to rights (or have we even already created such robots)? The interesting form of the ‘should’ question is really: if it is possible to create such robots, should we do so?

But that’s just my take on it. I still accept that there is an important distinction to be drawn between the ‘can’ and ‘should’ questions and that depending on your answer to them there are four logically possible perspectives on the issue of robot rights: (a) robots cannot and therefore should not have rights; (b) robots can and should have rights; (c) robots can but should not have rights; and (d) robots cannot but should have rights. These four perspectives are illustrated in the following two-by-two matrix below.

Surprisingly enough, each of these four perspectives has its defenders, and one of the goals of Gunkel’s article is to subject each of them to critique. Let’s look at that next.

2. An Evaluation of the Four Perspectives
Let’s start with the claim that robots cannot and therefore should not have rights. Gunkel argues that this tends to be supported by those who view technology as a tool, i.e. as an instrument of the human will. This is a very common view, and is in many ways the ‘classic’ theoretical understanding of technology in human life. If technology is always a tool, and robots are just another form of technology, then they too are tools. And since tools cannot (and should not) be rights-bearers, it follows (doesn’t it?) that robots cannot and should not have rights.

Gunkel goes into the history of this position in quite some detail, but we don’t need to follow suit. What matters for us are his criticisms of it. He has two. The first is simply that the tool/instrumentalist view seems inadequate when it comes to explaining the functionality of some technologies. Even as far back as Hegel and Marx, distinctions were drawn between ‘machines’ and ‘tools’. The former could completely automate and replace a human worker, whereas the latter would just complement and assist one. Robots are clearly something more than mere tools: the latest ones are minimally autonomous and capable of learning from their mistakes. Calling them ‘tools’ would seem inappropriate. The other criticism is that the instrumentalist view seems particularly inadequate when it comes to describing advances in social robotics. People form close and deep attachments to social robots, even when the robots are not designed to look or act in ways that arouse such an emotional response. Consider, for example, the emotional attachments soldiers form with bomb disposal robots. There is nothing cute or animal-like about these robots. Nevertheless, they are not experienced as mere tools.

This brings us to the second perspective: that robots can and so should have rights. This is probably the view that is most similar to my own. Gunkel describes this as the ‘properties’ approach because proponents of it follow the path I outlined in the previous section: they first determine the properties that they think an entity must possess in order to count as a rights-bearer; and they then figure out whether robots exhibit those properties. Candidate properties include things like autonomy, self-awareness, sentience etc. Proponents of this view will say that if we can agree that robots exhibit those properties then, of course, they should have rights. But most say that robots don’t exhibit those properties just yet.

Gunkel sees three problems with this. First, the terms used to describe the properties are often highly contested. There is no single standard or agreement about the meaning of ‘consciousness’ or ‘autonomy’, and it is hard to see these disputes being settled in the future. Second, there are epistemic limitations to our ability to determine whether an entity possesses these properties. Consciousness is the famous example: we can never know for sure whether another person is conscious. Third, even if you accept this approach to the question, there is still an important ethical issue concerning the creation of robots that exhibit the relevant properties: should we create such entities?

(For what it’s worth: I don’t see any of these as being significant criticisms of the ‘properties’ view. Why not? Because if they are problems for the ascription of rights to robots they are also problems for the ascription of rights to human beings. In other words: they raise no special problems when it comes to robots. After all, it is already the case that we don’t know for sure whether other humans exhibit the relevant properties, and there is a very active debate about the ethics of creating humans that exhibit these properties. If it is the properties that matter, then the specific entity that exhibits them does not.)

The third perspective says that robots can but should not have rights. This is effectively the view espoused by Joanna Bryson. Although she is somewhat sceptical of the possibility of robots exhibiting the properties needed to be rights-bearers, she is willing to concede the possibility. Nevertheless, she thinks it would be a very bad idea to create robots that exhibit these properties. In her most famous article on the topic, she argues that robots should always be ‘slaves’. She has since dropped this term in favour of ‘servants’. Bryson’s reasons for thinking that we should avoid creating robots that have rights are multifold. She sometimes makes much of the fact that we will necessarily be the ‘owners’ of robots (that they will be our property), but this seems like a weak grounding for the view that robots should not have rights given that property rights are not (contra Locke et al) features of the natural order. Better is the claim that the creation of such robots will lead to problems when it comes to responsibility and liability for robot misdeeds, and that they could be used to deceive, manipulate or mislead human beings — though neither of these is entirely persuasive to me.

Gunkel has two main criticisms of Bryson’s view. The first — which I like — is that Bryson is committed to a form of robot asceticism. Bryson thinks that we should not create robots that exhibit the properties that make them legitimate objects of moral concern. This means no social robots with person-like (or perhaps even animal-like) qualities. It could be extremely difficult to realise this asceticism in practice. As noted earlier, humans seem to form close, empathetic relationships with robots that are not intended to pull upon their emotional heartstrings. Consider, once more, the example of soldiers forming close attachments to bomb disposal robots. The other criticism that Gunkel has — which I’m slightly less convinced of — is that Bryson’s position commits her to building a class of robot servants. He worries about the social effects of this institutionalised subjugation. I find this less persuasive because I think the psychological and social effects on humans will depend largely on the form that robots take. If we create a class of robot servants that look and act like C3PO, we might have something to worry about. But robots do not need to exist in an integrated, humanoid (or organism-like) form.

The fourth perspective says that robots cannot but should have rights. This is the view of Kate Darling. I haven’t read her work so I’m relying on Gunkel’s presentation of it. Darling’s claim seems to be that robots do not currently have the properties that we require of rights-bearers, but that they are experienced by human beings in a unique and special way. They are not mere objects to us. We tend to anthropomorphise them, and project certain cognitive capabilities and emotions onto them. This in turn foments certain emotions in our interactions with them. Darling claims that this phenomenological experience might necessitate our having certain ethical obligations to robots. I tend to agree with this, though perhaps because I am not sure how different it really is from the ‘properties’ view (outlined above): whether an entity has the properties of a rights-bearer depends, to a large extent (with some qualifications) on our experience of it. At least, that’s my approach to the topic.

Gunkel thinks that there are three problems with Darling’s view. The first is that if we follow Kantian approaches to ethics, feelings are a poor guide to ethical duties. What’s more, if perception is what matters this raises the question: whose perception counts? What if not everyone experiences robots in the same way? Are their experiences to be discounted? The second problem is that Darling’s approach might be thought to derive an ‘ought’ from an ‘is’: the facts of experience determine the content of our moral obligations. The third problem is that it makes robot rights depend on us — our experience of robots — and not on the properties of the robots themselves. I agree with Gunkel that these might be problematic, but again I tend to think that they are problems that plague our approach to humans as well.

I’ve tried to summarise Gunkel’s criticisms of the four different positions in the following diagram.

3. The Other Perspective
Gunkel argues that none of the arguments outlined above is fully persuasive. They each have their problems. We could continue to develop and refine the arguments, but he favours a different approach. He thinks we should try to find a fifth perspective on the problem of robot rights. He calls this perspective ‘thinking otherwise’ and bases it on the work of Emmanuel Levinas. I’ll have to be honest and admit that I don’t fully understand this perspective, but I’ll do my best to explain it and identify where I have problems with it.

In essence, the Levinasian perspective favours an ethics-first view of ontology. The four perspectives outlined above all situate themselves within the classic Humean is-ought distinction. They claim that the rights of robots are, in some way, contingent upon what robots are — i.e. that our ethical principles determine what is ontologically important and that, correspondingly, the robot’s ontological properties will determine its ethical status. The Levinasian perspective involves a shift away from that way of thinking — rejecting the assumption that obligations must be derived from facts. The idea is that we first focus on our ethical responses to the world and then consider the ontological status of that world. It’s easier to quote directly from Gunkel on this point:

According to this way of thinking, we are first confronted with a mess of anonymous others who intrude on us and to whom we are obligated to respond even before we know anything at all about them. To use Hume’s terminology — which will be a kind of translation insofar as Hume’s philosophical vocabulary, and not just his language, is something that is foreign to Levinas’s own formulations — we are first obligated to respond and then, after having made a response, what or who we responded to is able to be determined and identified. 
(Gunkel 2017, 10)

I have some initial concerns about this. First, I’m not sure how distinctive or radical this is. It seems broadly similar to an approach that Dan Dennett has advocated for years in relation to the free will debate. His view is that it may be impossible to settle the ontological question of freedom vs determinism and hence we should allow our ethical practices to guide us. Setting that aside, I also have some concerns about the meaning of the phrase ‘obligated to respond’ in the quoted passage. It seems to me that it could be trading on an ambiguity between two different meanings of the phrase, one amoral and one moral. It could be that we are physically obligated to respond: our ongoing engagement with the world doesn’t give us time to settle moral or ontological questions first before coming up with a response. We are pressured to come up with a response and revise and resubmit our answers to the ontological/ethical questions at a later time. That type of obligated response is amoral. If that’s what is meant by the phrase ‘obligated to respond’ in the above passage then I would say it is a relatively banal and mundane idea. The moralised formulation of the phrase would be very different. It would suggest that our obligated response actually has some moral or ethical weight. That’s more interesting — and it might be true in some deep philosophical sense insofar as we can never truly escape or step back from our dynamic engagement with the world — but then I’m not sure that it necessitates a radical break from traditional approaches to moral philosophy.

This brings me to another problem. As described, the Levinasian perspective seems very similar to the one advocated by Kate Darling. After all, she was suggesting that our ethical stance toward social robots should be dictated by our phenomenological experience of them. The Levinasian perspective says pretty much the same thing:

[T]he question of social and moral status does not necessarily depend on what the other is in its essence but on how she/he/it…supervenes before us and how we decide, in the “face of the other” (to use Levinasian terminology), to respond. 
(Gunkel 2017, 10)

Gunkel anticipates this critique. He argues that there are two major differences between the Levinasian perspective and Darling’s. The first is that Darling’s perspective is anthropomorphic whereas the Levinasian one is resolutely not. For Darling, our ethical response to social robots is dictated by our emotional needs and by our tendency to project ourselves onto the ‘other’. Levinas thinks that anthropomorphism of this kind is a problem because it denies the ‘alterity’ of the other. This then leads to the second major difference which is that Darling’s perspective maintains the superiority and privilege of the self (the person experiencing the world) and maintains them in a position of power when it comes to granting rights to others. Again, the purpose of the Levinasian perspective is to challenge this position of superiority and privilege.

This sounds very high-minded and progressive, but it’s at this point that I begin to lose the thread a little. I am just not sure what any of this really means in practical and concrete terms. It seems to me that the self who experiences the world must always, necessarily, assume a position of superiority over the world they experience. They can never fully occupy another person’s perspective — all attempts to sympathise and empathise are ultimately filtered through their own experience.

Furthermore, I do not see how deciding on an entity’s rights and obligations could ever avoid assuming some perspective of power and privilege. Rights — while perhaps grounded in deeper ethical truths — are ultimately social constructions that depend on institutions with powers and privileges for their practical enforcement. You can have more-or-less hierarchical and absolute institutions of power, but you cannot completely avoid them when it comes to the protection and recognition of rights. So, I guess, I’m just not sure where the Levinasian perspective ultimately gets us in the robot rights debate.

That said, I know that David is publishing an entire book on this topic next year. I’m sure more light will be shed at that stage.