Saturday, April 23, 2016

The Ethics of Intimate Surveillance (2): A Landscape of Objections



(Part One)

This is the second in a two-part series looking at the ethics of intimate surveillance. In part one, I explained what was meant by the term ‘intimate surveillance’, gave some examples of digital technologies that facilitate intimate surveillance, and looked at what I take to be the major argument in favour of this practice (the argument from autonomy).

To briefly recap, intimate surveillance is the practice of gathering and tracking data about one’s intimate life, i.e. information about prospective intimate partners, information about sexual and romantic behaviours, information about fertility and pregnancy, and information about what your intimate partner is up to. There are a plethora of apps allowing for such intimate surveillance. What’s interesting about them is how they not only facilitate top-down surveillance (i.e. surveillance by app makers, corporate agents and governments) but also interpersonal and self-surveillance. This suggests that a major reason why people make use of these services is to attain more control and mastery over their intimate lives.

I introduced some criticisms of intimate surveillance at the end of the previous post. In this post, I want to continue in that critical mode by reviewing several arguments against the practice. The plausibility of these arguments will vary depending on the nature of the app or service being used. I'm not going to go into the nitty-gritty here. I want to survey the landscape of arguments, offering some formalisations of commonly-voiced objections along with some critical evaluation. I'm hoping that this exercise will prove useful to others who are researching in this area. Again, the main source and inspiration for this post is Karen Levy's article 'Intimate Surveillance'.


1. Arguments from Biased Data
All forms of intimate surveillance depend on the existence of data that can be captured, measured and tracked. Is it possible to know the ages and sexual preferences of all the women/men within a 2 mile radius? Services like Tinder and Grindr make this possible. But what if you wanted to know what they ate today or how many steps they have walked? Technically this data could be gathered and shared via the same services, but at present it is not.

The dependency of these services on data that can be captured, measured and tracked creates problems. What if the data being gathered is not particularly useful? What if it is biased in some way? What if it contributes to some form of social oppression? There are at least three objections to intimate surveillance that play upon this theme.

The first rests on a version of the old adage 'what gets measured gets managed'. If data is being gathered and tracked, it becomes more salient to people and they start to manage their behaviour so as to optimise the measurements. But if the measurements being provided are no good (or biased) then this may thwart preferred outcomes. For example, mutual satisfaction is a key part of any intimate relationship: it's not all about you and what you want; it's about working together with someone else to achieve a mutually satisfactory outcome. One danger of intimate surveillance is that it could lead one of the partners to focus on behaviours that do not contribute to mutually satisfactory outcomes. In general terms:

  • (1) What gets measured gets managed, i.e. if people can gather and track certain forms of data they will tend to act so as to optimise patterns in that data.
  • (2) In the case of intimate surveillance, if the optimisation of the data being gathered does not contribute to mutual satisfaction, it will not improve our intimate lives.
  • (3) The optimisation of the data being gathered by some intimate surveillance apps does not contribute to mutual satisfaction.
  • (4) Therefore, use of those intimate surveillance apps will not improve our intimate lives.

Premise (1) here is an assumption about how humans behave. Premise (2) is the ethical principle. It says that mutual satisfaction is key to a healthy intimate life and anything that thwarts that should be avoided (assuming we want a healthy intimate life). Premise (3) is the empirical claim, one that will vary depending on the service in question. (4) is the conclusion.

Is the argument any good? There are some intimate surveillance apps that would seem to match the requirements of premise (3). Levy gives the example of Spreadsheets — the sex tracker app that I mentioned in part one. This app allows users to collect data about the frequency, duration, number of thrusts and decibel level reached during sexual activity. Presumably, with the data gathered, users are likely to optimise these metrics, i.e. have more frequent, longer-lasting, more vigorous and louder sexual encounters. While this might do it for some people, the optimisation of these metrics is unlikely to be a good way to ensure mutual satisfaction. The app might get people to focus on the wrong thing.

I think the argument in the case of Spreadsheets might be persuasive, but I would make two comments about this style of argument more generally. First, I’m not sure that the behavioural assumption always holds. Some people are motivated to optimise their metrics; some aren’t. I have lots of devices that track the number of steps I walk, or miles I run. I have experimented with them occasionally, but I’ve never become consumed with the goal of optimising the metrics they provide. In other words, how successful these apps actually are at changing behaviour is up for debate. Second, premise (3) tends to presume incomplete or imperfect data. Some people think that as the network of data gathering devices grows, and as they become more sensitive to different types of information, the problem of biased or incomplete data will disappear. But this might not happen anytime soon and even if it does there remains the problem of finding some way to optimise across the full range of relevant data.



Another argument against intimate surveillance focuses on gender-based inequality and oppression. Many intimate surveillance apps collect and track information about women (e.g. the dating apps that locate women in a geographical region, the spying apps that focus on cheating wives, and the various fertility trackers that provide information about women’s menstrual cycles and associated moods). These apps may contribute to social oppression in at least two ways. First, the data being gathered may be premised upon and contribute to harmful, stereotypical views of women and how they relate to men (e.g. the ‘slutty’ college girl, the moody hormonal woman, the cheating wife and her cuckolded husband etc.). Second, and more generally, they may contribute to the view that women are subjects that can be (and should be) monitored and controlled through surveillance technologies. To put it more formally:

  • (5) If something contributes to or reinforces harmful gender stereotypes, or contributes to or reinforces the view that women can be and should be monitored and controlled, then it is bad.
  • (6) Some intimate surveillance apps contribute to or reinforce harmful gender stereotypes and support the view that women can and should be monitored and controlled.
  • (7) Therefore, some intimate surveillance apps are bad.

This is a deliberately vague argument. It is similar to many arguments about gender-based oppression insofar as it draws attention to the symbolic properties of a particular practice and then suggests that these properties contribute to or reinforce gender-based oppression. I've looked at similar arguments in relation to prostitution, sex robots and surrogacy in the past. One tricky aspect of any such argument is proving the causal link between the symbolic practice (in this case the data being gathered and organised about women) and gender-based oppression more generally. Empirical evidence is often difficult to gather or inconclusive. This leads people to fall back on purely symbolic arguments or to offer revised views of what causation might mean in this context. A final problem with the argument is that even if it is successful it's not clear what its implications are. Could the badness of the oppression be offset by other gains (e.g. what if the fertility apps really do enhance women's reproductive autonomy)?



The third argument in this particular group is a little bit more esoteric. Levy points in its direction with a quote from Deborah Lupton:

These technologies configure a certain type of approach to understanding and experiencing one’s body, an algorithmic subjectivity, in which the body and its health states, functions and activities are portrayed and understood predominantly via quantified calculations, predictions and comparisons.
(Lupton 2015, 449)
 
The objection here stems from a concern about algorithmic subjectivity. I have seen it expressed by several others. The concern is always that the apps encourage us to view ourselves as aggregates of data (to be optimised etc.). Why this is problematic is never fully spelled out. I think it is because this form of algorithmic subjectivity is dehumanising and misses out on something essential to the well-lived human life (the unmeasurable, unpredictable, unquantifiable):

  • (8) Algorithmic subjectivity is bad: it encourages us to view ourselves as aggregates of data to be quantified, tracked and optimised; it ignores essential aspects of a well-lived life.
  • (9) Intimate surveillance apps contribute to algorithmic subjectivity.
  • (10) Therefore, intimate surveillance apps are bad.



This strikes me as a potentially very rich argument — one worthy of deeper reflection and consideration. I have mixed feelings about it. It seems plausible to suggest that intimate surveillance contributes to algorithmic subjectivity (though how much and in what ways will require empirical investigation). I’m less sure about whether algorithmic subjectivity is a bad thing. It might be bad if the data being gathered is biased or distorting. But I’m also inclined to think that there are many ways to live a good and fulfilling life. Algorithmic subjectivity might just be different; not bad.


2. Arguments from Core Relationship Values
Another group of objections to intimate surveillance are concerned with its impact on relationships. The idea is that there are certain core values associated with any healthy relationship and that intimate surveillance tends to corrupt or undermine those values. I’ll look at two such objections here: the argument from mutual trust; and the argument from informal reciprocal altruism (or solidarity).

Before I do so, however, I would like to voice a general concern about this style of argument. I’m sceptical of essentialistic approaches to healthy relationships, i.e. approaches to healthy relationships that assume they must have certain core features. There are a few reasons for this, but most of them flow from my sense that the contours of a healthy relationship are largely shaped by the individuals that are party to that relationship. I certainly think it is important for the parties to the relationship to respect one another’s autonomy and to ensure that there is informed consent, but beyond that I think people can make all sorts of different relationships work. The other major issue I have is that I’m not sure what a healthy relationship really is. Is it one that lasts indefinitely? Can you have a healthy on-again off-again relationship? Abuse and maltreatment are definite no-gos, but beyond that I’m not sure what makes things work.

Setting that general concern to the side, let’s look at the argument from mutual trust. It works something like this:

  • (11) A central virtue of any healthy relationship is mutual trust, i.e. a willingness to trust that your partner will act in a way that is consistent with your interests and needs without having to monitor and control them.
  • (12) Intimate surveillance undermines mutual trust.
  • (13) Therefore, intimate surveillance prevents you from having a healthy relationship.

The support for (12) is straightforward enough. There are certain apps that allow you to spy on your partner's smartphone: see who they have been texting/calling, where they have been, and so on. If you use these apps, you are demonstrating that you are unwilling to trust your partner without monitoring and control, and so you are undermining mutual trust.

I agree with this argument up to a point. If I spy on my partner's phone without her consent, then I'm definitely doing something wrong: I'm failing to respect her autonomy and privacy and I'm not being mature, open and transparent. But it strikes me that there is a deeper issue here: what if she is willing to consent to my use of the spying app as a gesture of her commitment? Would it still be a bad idea to use it? I'm less convinced. To argue the affirmative you would need to show that having (blind?) faith in your partner is essential to a healthy relationship. You would also have to contend with the fact that mutual trust may be too demanding, and that petty jealousy is all too common. Maybe it would be good to have a 'lesser evil' option?



The other argument against intimate surveillance is the argument from informal reciprocal altruism (or solidarity). This is a bit of a mouthful. The idea is that relationships are partly about sharing and distributing resources. At the centre of any relationship there are two (or more) people who get together and share income, time, manual labour, emotional labour and so on. But what principle do people use to share these resources? Based on my own anecdotal experience, I reckon people adopt a type of informal reciprocal altruism. They effectively agree that if one of them does something for the other, then the other will do something else in return, but no one really keeps score to make sure that every altruistic gesture is matched with an equal and opposite altruistic gesture. They know that it is part of their commitment to one another that it will all pretty much balance out in the end. They think: "we are in this together and we've got each other's backs".

This provides the basis for the following argument:

  • (14) A central virtue of any healthy relationship is that resources are shared between the partners on the basis of informal reciprocal altruism (i.e. the partners do things for one another but don't keep score as to who owes what to whom).
  • (15) Intimate surveillance undermines informal reciprocal altruism.
  • (16) Therefore, intimate surveillance prevents you from having a healthy relationship.

The support for (15) comes from the example of apps that try to gamify relationships by tracking data about who did what for whom, assigning points to these actions, and then creating an exchange system whereby one partner can cash in these points for favours from the other partner. The concern is that this creates a formal exchange mentality within a relationship. Every time you do the laundry for your partner you expect them to do something equivalently generous and burdensome in return. If they don't, you will feel aggrieved and will try to enforce their obligation to reciprocate.
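To see how this formalises the exchange, here is a short Python sketch of a points-and-redemption ledger of the sort just described. The point values, class and method names are all invented for illustration; this is not any real app's actual design.

```python
# Hypothetical sketch of a relationship-gamification ledger: favours are
# priced in points, and points can be cashed in for favours in return.
from collections import defaultdict

# Invented point values; no real app's tariff is being quoted here.
POINT_VALUES = {"did the laundry": 5, "cooked dinner": 4, "planned date night": 8}

class RelationshipLedger:
    def __init__(self):
        self.scores = defaultdict(int)  # points earned by each partner

    def log_favour(self, partner: str, action: str) -> None:
        """Record an altruistic gesture and price it in points."""
        self.scores[partner] += POINT_VALUES.get(action, 1)

    def redeem(self, partner: str, cost: int) -> bool:
        """Cash in points for a favour; refuse if the balance is too low."""
        if self.scores[partner] >= cost:
            self.scores[partner] -= cost
            return True
        return False

ledger = RelationshipLedger()
ledger.log_favour("partner_a", "did the laundry")
ledger.log_favour("partner_a", "cooked dinner")
print(ledger.redeem("partner_a", 8))  # True: 9 points earned, 8 spent
```

Notice that the ledger has no concept of unscored generosity: every gesture is priced, which is precisely the formal exchange mentality the objection worries about.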

I find this objection somewhat appealing. I certainly don't like the idea of keeping track of who owes what to whom in a relationship. If I pay for the cinema tickets, I don't automatically expect my partner to pay for the popcorn (though we may often end up doing this). But there are some countervailing considerations. Many relationships are characterised by inequalities of bargaining power (typically gendered): one party ends up doing the lion's share of care work (say). Formal tracking and measuring of actions might help to redress this inequality. It could also save people from emotional anguish and feelings of injustice. Furthermore, some people seem to make formal exchanges of this sort work. The creators of the Beeminder app, for instance, appear to have a fascinating approach to their relationship.




3. Privacy-related Objections

The final set of objections returns the debate to familiar territory: privacy. Intimate surveillance may involve both top-down and horizontal privacy harms. That is to say, privacy harms due to the fact that corporations (and maybe governments) have access to the data being captured by the relevant technologies; and privacy harms due to the fact that one's potential and actual partners have access to the data.

I don't have too much to say about privacy-related objections. This is because they are widely debated in the literature on surveillance and I'm not sure that they are all that different in the debate about intimate surveillance. They all boil down to the same thing: the claim that the use of these apps violates somebody's privacy. This is because the data is gathered and used either without the consent of the person whose data it is (e.g. Facebook stalking), or with imperfect (i.e. not fully informed) consent. It is no doubt true that this is often the case. App makers frequently package and sell the data they mine from their users: it is intrinsic to their business model. And certain apps — like the ones that allow you to spy on your partner's phone — seem to encourage their users to violate their partner's privacy.

The critical question then becomes: why should we be so protective of privacy? I think there are two main ways to answer this:

Privacy is intrinsic to autonomy: The idea here is that we have a right to control how we present ourselves to others (what bits get shared etc) and how others use information about us; this right is tied into autonomy more generally; and these apps routinely violate this right. This argument works no matter how the information is used (i.e. even if it is used for good). The right may need to be counterbalanced against other considerations and rights, but it is a moral harm to violate it no matter what.

Privacy is a bulwark against the moral imperfection of others: The idea here is that privacy is instrumentally useful. People often argue that if you are a morally good person you should have nothing to hide. This might be true, but it forgets that other people are not morally perfect. They may use information about you to further some morally corrupt enterprise or goal. Consequently, it's good if we can protect people from at least some unwanted disclosures of personal information. The 'outing' of homosexuals is a good example of this problem. There is nothing morally wrong about being a homosexual. In a morally perfect world you should have nothing to fear from the disclosure of your sexuality. But the world isn't morally perfect: some people in some communities persecute homosexuals. In those communities, homosexuals clearly should have the right to hide their sexuality from others. The same could apply to the data being gathered through intimate surveillance technology. While you might not be doing anything morally wrong, others could use the information gathered for morally corrupt ends.

I think both of these arguments have merit. I’m less inclined toward the view that privacy is an intrinsic good and necessarily connected to autonomy, but I do think that it provides protection against the moral imperfection of others. We should work hard to protect the users of intimate surveillance technology from the unwanted and undesirable disclosure of their personal data.

Okay, that brings me to the end of this series. I won't summarise everything I have just said. I think the formalisations given above map the landscape of objections already. But have I missed something? Are there other objections to the practice of intimate surveillance? Please add suggestions in the comments section.

Friday, April 22, 2016

New Podcast - Ep 1 Tal Zarsky on the Ethics of Big Data and Predictive Analytics





I've started a new podcast as part of my Algocracy and Transhumanism project. The aim of the project is to ask three questions:

  • How does technology create new governance structures, particularly algorithmic governance structures?
  • How does technology create new governance subjects, particularly through the augmentation and enhancement of the human body?
  • What implications does this have for our core political values, such as liberty, equality, privacy, transparency, accountability and so on?

The first episode is now available. I interview Professor Tal Zarsky about the ethics of big data and predictive analytics. You can download it here or listen below. I will add iTunes and Stitcher subscription information once I have received approval from both.


Show Notes

  • 0:00 - 2:00 - Introduction
  • 2:00 - 12:00 - Defining Big Data, Data-Mining and Predictive Analytics
  • 12:00 - 17:00 - Understanding a predictive analytics system
  • 17:00 - 21:30 - Could we ever have an intelligent, automated decision-making system?
  • 21:30 - 29:30 - Evaluating algorithmic governance systems: efficiency and fairness
  • 29:30 - 36:00 - Could algocratic systems be less biased?
  • 36:00 - 42:00 - Wouldn't algocratic systems inherit the biases of programmers/society?
  • 42:00 - 54:30 - The value of transparency in algocratic systems
  • 54:30 - 1:00:1 - The gaming the system objection



Thursday, April 21, 2016

The Ethics of Intimate Surveillance (1)



'Intimate Surveillance' is the title of an article by Karen Levy, a legal and sociological scholar currently based at NYU. It shines light on an interesting and under-explored aspect of surveillance in the digital era. The forms of surveillance that capture most attention are those undertaken by governments in the interests of national security or corporations in the interests of profit.

But 'smart' technology facilitates other forms of surveillance. One particularly interesting form of surveillance is that relating to our intimate lives, i.e. activities associated with dating and mating. There are (or have been) a plethora of apps developed to allow us to track and quantify data associated with our intimate activities. Although many of these apps have a commercial dimension — and we shouldn't ignore that dimension — users are primarily drawn to them for personal and interpersonal reasons. They think that accessing and mining intimate data will enhance the quality of their intimate lives. But are they right to think this?

That’s the question I want to answer over the next two posts. Levy’s article does a good job sketching out the terrain in which the conversation must take place, and so I will follow her presentation closely in what follows, but I want to add a layer of philosophical formalism to her analysis. I start, in this post, by sketching out the different forms of surveillance and explaining in more detail what is interesting and significant about intimate surveillance. I will follow this with some examples of intimate surveillance apps. And I will close with what I take to be the core argument in favour of their use. I’ll postpone the more critical arguments to part two.


1. The Forms of Intimate Surveillance
I have thrashed out the concept of surveillance many times before on this blog. In particular, I’ve looked at the frameworks developed by David Brin and Steve Mann to distinguish surveillance from sousveillance. Here, I want to develop a slightly different framework. It starts with a simple and intuitive definition of surveillance as the practice of observing and gathering data about human beings and their activities. I guess, technically, the concept could be expanded to include gathering data about other subjects, and if you wanted you could insist that data analysis and mining is part and parcel of surveillance, but I won’t insist on those things here. I don’t think we need to be overly formal or precise.

What’s more important are the forms of surveillance. What I mean by this is: who exactly is gathering the data? About whom? And for what purpose? Steve Mann might insist that the word ‘surveillance’ has a particular form built into its etymology: ‘sur’-veillance is monitoring and observation from above, i.e. from the top-down. As such, it is to be contrasted with other forms of ‘veillance’, such as ‘sous’-veillance, which is monitoring from below, i.e. from the bottom-up. This can be a useful distinction, but it does not exhaust the possibilities. In fact, we can distinguish between at least four different forms of ‘veillance’:

Top-down Veillance: This is where data is being gathered by socially powerful organisations about their subjects. The most common practitioners of top-down monitoring are governments and corporations. They gather information about their citizens and customers, usually in an effort to control and manipulate their behaviour in desired directions.

Bottom-up Veillance: This is where data is being gathered about socially powerful organisations by their subjects. For example, the citizens in a state could gather information about police abuse of minority populations by recording such abuse on their smartphones. Brin and Mann believe that bottom-up monitoring of this sort is the key to creating a transparent and fair society in the digital age.

Horizontal Veillance: This is where data is being gathered by individuals about other individuals (at roughly the same scale in a social hierarchy). Humans do this all the time through simple observation and gossip. We seem to have a strong desire to know more about our social peers. Technology fuels this desire by providing additional windows into their lives.

Self-veillance: This is where data is being gathered by individuals about themselves. It is common enough for us to monitor our own activities. But modern technologies allow us to gather more precisely quantified data about our own lives, e.g. number of steps walked, average heartbeat, hours of deep sleep, daily work-related productivity (emails answered, words written, sales made etc.).


So where does intimate surveillance fit into this schema? Intimate surveillance involves the gathering of data about our romantic and sexual lives. Technically, intimate surveillance could span all four categories, but what is particularly interesting about it is that it often takes the form of horizontal or self-veillance. People want to know more about their actual and potential intimate partners. And they want to know more about their performance/productivity in their intimate lives. This is not to discount the fact that the digital tools that enable horizontal and self-veillance also enable top-down veillance, but it is to suggest that the impact of intimate surveillance on how we relate to our intimate partners and how we understand our own intimate lives is possibly the most significant impact of this technology. At least, that’s how I feel about it.


2. Technologies of Intimate Surveillance
So how does intimate surveillance work? What kinds of information can we gather about our intimate lives? What apps are available to do this? Levy suggests that we think about this in relation to the 'life-cycle' of the typical relationship. Of course, to suggest that there is a typical life-cycle to a relationship is a dangerous thing — relationships come in many flavours and people can make different patterns work — nevertheless there do seem to be three general stages to relationships: (i) searching; (ii) connecting; and (iii) committing (with breakdown/dissolution being common in many instances too).

Different kinds of data are important at the different stages in the life-cycle of a relationship, and different digital services facilitate the gathering of that data. In what follows, I want to give more detailed characterisations of the three main stages in a relationship and explain the forms of surveillance that take place at those stages. Levy’s paper is filled with examples of the many apps that have been developed to assist with intimate surveillance. Some of these apps were short-lived; some are still with us; others have, no doubt, been created since she published her article. I won’t review the full set here. I’ll just give some choice examples.

Searching: This is when we are looking for someone with whom to form an intimate connection. We usually don’t want to do this in a reckless fashion. We want to find someone who is suitable, shares our interests, to whom we are attracted, is geographically proximate, doesn’t pose a risk to us and so on. This requires some data gathering. Various apps assist with this. Two examples stick out from Levy’s article:
Tinder/Grindr: These are apps that allow you to find people in your geographical locale. You set the parameters on what you are looking for (age range, how close etc.) and then you can search through profiles matching those criteria and 'like' them. If the other person likes you too, you can make a connection. Note how this is unlike traditional online dating services like Match.com or eHarmony. Those services tried to do the searching for you by using a complex algorithm to match you to other people. Tinder/Grindr are much more self-controlled: you set the parameters and surveil the data yourself (a toy version of this kind of search is sketched below).
Lulu: This is an app that allows female users to evaluate male users. It works kind of like a TripAdvisor for men, with women as the reviewers. They rate the men on the basis of romantic, personal and sexual appeal. This allows women to gather and share information about prospective intimate partners. It is mainly targeted at undergraduate college students.
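To make the contrast with algorithmic matchmaking concrete, here is a minimal Python sketch of the kind of user-directed search described above. The profile fields and parameter values are invented; the point is simply that the user, not a matching algorithm, sets the filters and surveys the results.

```python
# Hypothetical sketch of parameter-based partner search: filter candidate
# profiles by a user-chosen age range and geographical radius.
import math
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    age: int
    lat: float
    lon: float

def distance_miles(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3959 * math.asin(math.sqrt(a))  # Earth radius ~3959 miles

def search(profiles, my_lat, my_lon, min_age, max_age, radius):
    """Return the profiles matching the user-set age and distance parameters."""
    return [p for p in profiles
            if min_age <= p.age <= max_age
            and distance_miles(my_lat, my_lon, p.lat, p.lon) <= radius]

candidates = [Profile("A", 29, 53.27, -9.05), Profile("B", 41, 53.35, -6.26)]
print(search(candidates, 53.27, -9.06, 25, 35, 2))  # only A is within 2 miles
```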

Connecting: This is when we actually make an intimate connection. Obviously, intimate connections can take a variety of forms. Two main ones are of interest here: (i) sex and (ii) romance. A variety of apps are available that allow you to track and gamify your sexual and romantic performance. Again, I’ll use two examples:
Spreadsheets: This bills itself as a 'sex improvement' app. It enables you to record how frequently you and your partner have sex. It also records how long each sexual encounter lasted, the number of 'thrusts' that took place, and the moans and groans (decibel level reached). The dubious assumption here is that these metrics are useful tools for optimising sexual performance.
Kahnoodle: This (defunct) app tried to gamify relationships. It allowed partners to record 'love signs' from one another, which earned them kudos points. Once enough points accumulated, they could be redeemed for 'koupons' and other rewards.
With the rise of wearable tech and the development of new more sophisticated sensors, the number of apps that try to gamify our sexual and romantic lives is likely to increase. Apps of this sort explicitly or implicitly include behaviour change dimensions, i.e. they try to prompt you to alter your romantic and sexual behaviours in various ways.

Committing: This is when we have made a connection and then try to commit to our partner(s). Again, commitment can take different forms and partners often determine the parameters of commitment for themselves (e.g. some are comfortable with open relationships or polyamorous relationships). For many, though, commitment comes with two main concerns: (i) fertility (i.e. having or not having children) and (ii) fidelity (i.e. staying loyal to your partner). Various apps are available to assist people in ensuring fertility (or lack thereof) and fidelity:
Glow: This is an app that tries to assist women in getting pregnant. It does this by allowing them to track various bits of data, including menstruation, position and firmness of cervix, mood, position during sexual intercourse. The related app Glow Nurture is focused on women who are actually pregnant and allows them to track pregnancy symptoms. Both apps have an interpersonal dimension to them: women are encouraged to share data with their partners; the partners are encouraged to provide additional data, and are then prompted to behave in different ways. The app makers have also partnered with pharmacies to enable refilling of prescriptions for birth control etc. (There were also a bunch of menstrual cycle apps targeted at men that were supposed to enable them to organise their lives around their partner’s menstrual cycle - most of these seem to be defunct, e.g. PMSBuddy and iAmaMan)
Flexispy: This is one of a range of apps that allow you to spy on other people’s phones and smart devices. Though this could be used for many purposes, it explicitly states that one of its potential uses is to spy on ‘cheating’ spouses. The app allows you to see pictures/videos, messages, location data, calendars, listen to phone calls and ‘ambient’ audio. As Levy puts it, with these kinds of apps we enter a much darker world of intimate surveillance.




3. The Argument from Autonomy
By now you should have a reasonable understanding of how intimate surveillance works. What about its consequences? Is it a good or bad thing? It’s difficult to answer this in the abstract. The different apps outlined above have different properties and features. Some of these properties might be positive; some might be negative. To truly evaluate their impact on our lives, we would have to go through them individually. That said, there are some general arguments to be made. I’ll start with an argument in favour of intimate surveillance.
 
The argument in favour of intimate surveillance is based on the value of individual autonomy. Autonomy is a contested concept but it refers, roughly, to the ability to make choices for oneself, be the author of one’s own destiny, and perform actions that are consistent with one’s higher order goals and preferences. I suspect that the attraction of these surveillance apps lies predominantly in their perceived ability to enhance autonomy associated with intimate behaviour. They give us the information we need to make better decisions at the searching, connecting and committing phases. Through tracking and gamification they help us to avoid problems associated with weakness of the will and ensure that we act in accordance with our higher order goals and preferences.

Think about an analogous case: exercise-related surveillance. Many people want to be fitter and healthier. They want to make better decisions about their health and well-being. But they find it hard to choose the right diet and exercise programmes and stick to them in the long run. There is a huge number of apps dedicated to assisting people in doing this — apps that allow them to track their workouts, set targets, achieve goals, and share with their peers in order to stay motivated. The net result (at least in principle) is that they acquire greater control or mastery over their health-related destinies. I think the goal is similar in the case of intimate surveillance: the data, the tracking, the gamification allows people to achieve greater control and mastery over their intimate lives. And since autonomy is a highly prized value in modern society, you could argue that intimate surveillance is a good thing.

To set this out more formally:

  • (1) Anything that allows people to enhance their autonomy (i.e. make better choices, control their own destiny, act in accordance with higher-order preferences and desires) is, ceteris paribus, good.
  • (2) Intimate surveillance apps allow people to enhance their autonomy.
  • (3) Therefore, intimate surveillance apps are, ceteris paribus, good.

There are two main ways to attack this argument. The first is to focus on the ‘ceteris paribus’ (all else being equal) clause in premise (1). You might accept that autonomy is an important value but that it must be balanced against other important values (e.g. mutual consent, trust, privacy etc) and then show how intimate surveillance apps compromise those other values. I’ll be looking at arguments along those lines in part 2.

The other way to attack the argument is to take issue with premise (2). Here everything turns on the properties of the individual app and the dispositions of the person using it. I suspect the biggest problem in this area is with the surveillance apps that include some element of behaviour change, e.g. the sex and romance tracking apps described above. Two specific problems would seem to arise. First, the apps might make dubious assumptions about what is optimal or desirable behaviour in this aspect of one's intimate life. The assumptions might be flawed and might encourage behaviour that is not consistent with your higher order goals and preferences. Second, and more philosophically, by including behaviour prompts the apps would seem to take away a degree of autonomy. This is because they shift the locus of control away from the user to the behaviour-change algorithm developed by the app-makers. Now, to be clear, we often need some external motivational scaffolding to help us achieve greater autonomy. For instance, I need an alarm clock to help me wake up in the morning. But if our goal is greater autonomy, I would be sceptical of any motivational scaffolding that makes our choices for us. I think it is best (from an autonomy perspective) if we can set the parameters for preferred choices ourselves and then set up the external scaffolding that helps us satisfy those preferences. I worry that some apps try to do both of these things for us.
 
Okay, I’ll leave it there for today. In part two, I’ll consider a variety of objections to the practice of intimate surveillance.

Thursday, April 7, 2016

Blockchains and the Emergence of a Lex Cryptographia



Here’s an interesting idea. It’s taken from Aaron Wright and Primavera de Filippi’s article ‘Decentralized Blockchain Technology and the Rise of Lex Cryptographia’. The article provides an excellent overview of blockchain technology and its potential impact on the law. It ends with an interesting historical reflection. It suggests that the growth of blockchain technology may give rise to a new type of legal order: a lex cryptographia. This is similar to how the growth in international trading networks gave rise to a lex mercatoria and how the growth in the internet gave rise to a lex informatica.

Is this an idea worthy of our consideration? I want to investigate that question in this post. I’ll do so by explaining the rationale for Wright and de Filippi’s claim. I’ll start by going back to first principles and considering the nature of regulatory systems and the different possible forms of regulation. This will allow me to explain more clearly the proposed evolution to a lex cryptographia and the implications this might have.


1. The Nature of Regulation and Regulatory Systems
All human societies try to regulate the behaviour of their members. In simple terms, regulation is the biasing of behaviour. You want to encourage people to act in certain ways and discourage them from acting in other ways. You want to push them towards certain outcomes and pull them away from others. There are two main forms that this biasing can take:

Ex ante biasing: Guiding, directing and incentivising behaviour in advance.
Ex post biasing: Punishing or sanctioning behaviour that does not comply with preferred norms or standards of behaviour in order to encourage future compliance.

[This isn’t to rule out other potential purposes for punishment (such as retribution or revenge), it’s just to suggest that in the regulatory context the biasing function often takes precedence.]

How do we go about biasing people's behaviour in the desired directions? What tools can we use? In his famous 1999 book, Code and Other Laws of Cyberspace, Lawrence Lessig argued that there were four main tools for regulation:

Architecture: Any natural or man-made structures that shape, constrain and/or permit certain forms of behaviour. Architectures are ubiquitous and are often the first and primary mode of regulation. Most of the other forms require some communication and/or signaling. Architectures don’t: they are structural limitations on possible forms of behaviour. For example, our biological architecture biases us in favour of breathing oxygen: if we didn’t we would die. Similarly, the construction of railroads permitted us to travel faster and further than we had gone before, but only along fixed tracks. The construction of the automobile and the building of modern roads allowed for additional but also limited possibilities. Technologies frequently create new possibilities for human behaviour and interaction, but those possibilities are controlled by the underlying architecture.

Social Norms: These are non-legal social standards, policed and enforced through peer pressure. A simple example would be table manners. There are all sorts of standards of behaviour for dining - these standards vary depending on the culture and the occasion. Formal dining has elaborate norms. You must hold your cutlery in a particular way; proceed through the courses in a particular order; fold your napkin just-so; be served by the waiting staff from a particular side; and so on. These behavioural standards are a creation of custom and social expectation. These forces create norms that govern many aspects of our lives. Failure to comply with these norms often leads to undesirable social repercussions: shunning, gossip, ridicule, mockery and so on.

The Market: Humans trade goods and services on markets. Markets then regulate human behaviour using a simple but often effective tool: They set prices. The prices bias human behaviour in various ways. Creators and suppliers are (usually) biased in favour of the goods and services that have the highest prices. Purchasers and demanders are (usually) biased in favour of those with the lowest prices. The market also disciplines behaviour: those who spend more than they take in are punished and disincentivised from continuing to do what led to that sad state of affairs.

Law: Most societies have a set of norms that are given a special social status. We call these norms ‘the law’. These are norms that are created and endorsed by recognised social authorities, and are usually enforced (ultimately) by the threat of violent coercion. In the modern world, it is governments and states that create these special social norms. They then use an elaborate institutional machinery to bias us in favour of compliance with those norms: police forces, courts, prisons and so on. [Note: I am aware that this assumes a potentially controversial, positivistic theory of law]



According to Lessig, these four tools exhaust the regulatory possibilities. How they are used by different societies, at different times, in response to different challenges, is the interesting thing.


2. Lex Mercatoria and Lex Informatica
The typical pattern over the course of human history has been that new technologies and new discoveries create new architectures. These architectures are the initial and primary regulatory tool: the only limit on behaviour is that provided by the architecture (and the conscience of those using it). Once the new architecture becomes widely available, the other regulatory tools flood in and further constraints and limitations emerge. Users of the architectures adopt social norms to bias the behaviour of other users. If they exploit the architecture for financial gain, market norms emerge. Some tools are more effective than others. Ironically (given its special social status), legal regulation is often the last to flood into the new architecture and often simply codifies the pre-established norms.

Wright and De Filippi use two historical examples to illustrate this pattern.

The first example is the lex mercatoria. This was a set of (quasi?) legal norms that developed from trading networks in Europe during the middle ages. At the time, Europe was made up of small principalities and states. Within these principalities a local ruler had the authority to pass and create laws. However, merchants did business with people from outside the principalities. Indeed, trading networks were established that covered most of the continent. These networks constituted an architecture. The traders who operated in these networks needed some body of rules to regulate their behaviour. They could not rely on the local rulers to provide these rules since they only had authority within small geographical areas. So they developed them themselves. An impressive body of customary rules emerged that was known as the lex mercatoria. Over time, these rules became more formalised and started to be recognised by local legal systems (often because of tax benefits to the local rulers). That said, the relationship between the lex mercatoria and the local law was sometimes uneasy. Some argue that this is still the case: that there are norms and customs for international trade that constitute a modern day lex mercatoria, and that these norms have an uneasy relationship with national legal systems. You can read about this debate here.

The second example is the lex informatica. This was the set of norms that developed after the emergence of the internet. The internet created a new architecture for social interaction. People could communicate with one another in new ways — ways that minimised the relevance of traditional geographical and legal boundaries. It allowed them to engage in new methods of trade, to create and distribute new goods and services. In the early days, this architecture constituted something of a legal ‘wild west’. Users of the internet had to develop their own norms, relying heavily on private contractual methods such as End-User Licensing Agreements (EULAs). Because the internet wasn’t localised in any particular state or national legal system, these contractually established norms often ignored or supplanted pre-existing legal norms. Eventually, national and international legal regulations started to enter the new architecture, but there continues to be an uneasy relationship between these regulations and the lex informatica to this day.

The question now is whether the emergence of blockchain technologies gives rise to something similar - a lex cryptographia perhaps?


3. Lex Cryptographia: A New 'Legal' Order
This is what Wright and de Filippi suggest. To appreciate why they suggest this, you need to know something about blockchain technology. I have written two previous posts that try to explain how it works. I won’t go into the same depth here. Suffice to say, the blockchain is a distributed ledger that records and verifies transactional information (e.g. did X send money to Y; did Y receive it). The ledger (“the blockchain”) is maintained and stored on a network of computers. The network can be distributed over potentially any geographical area (anywhere with network connectivity). Every computer (or node) on the network stores a copy of the ledger. The network then verifies the transactional information using some sort of consensus or majority decision-making rule (e.g. does every computer on the network agree that X sent the money to Y? If so, the transaction is verified).
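For concreteness, here is a deliberately simplified Python sketch of the two ideas just described: a hash-linked ledger of transactions, and verification by majority agreement across the nodes' copies. This is not any real blockchain protocol (there is no proof-of-work and no peer-to-peer networking); all names are illustrative.

```python
# Toy sketch of a blockchain: each block is linked to its predecessor by a
# hash, and the 'network' verifies a transaction by majority vote.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: str) -> None:
    """Record a transaction, linked to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "genesis"
    chain.append({"tx": transaction, "prev_hash": prev})

def network_verifies(nodes: list, transaction: str) -> bool:
    """Majority rule: do most copies of the ledger record the transaction?"""
    votes = sum(any(b["tx"] == transaction for b in chain) for chain in nodes)
    return votes > len(nodes) / 2

# Three nodes, each holding its own copy of the ledger.
nodes = [[], [], []]
for chain in nodes:
    append_block(chain, "X sends 5 coins to Y")

print(network_verifies(nodes, "X sends 5 coins to Y"))  # True
print(network_verifies(nodes, "Y sends 5 coins to Z"))  # False: never recorded
```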

When explained in these terms, blockchain technology often seems unexciting, but that is far from the case. Any information that can be digitised and sent over a network can, in theory, be recorded and verified by the blockchain. With the growth of the internet of things, this means that the blockchain can be used to verify many different kinds of information and thereby regulate many different human interactions. As a regulatory mechanism, the blockchain has at least three interesting properties:

Decentralisation: The blockchain is set up and maintained by a decentralised network, not by any one individual or organisation. Indeed, one of the alleged virtues of the blockchain is its ability to break down the power of 'trusted third parties' in society, e.g. governments, banks, large corporations. You don't need to trust these powerful organisations anymore; you just need to trust the network. This enables people to create their own bespoke blockchain-based regulatory architectures ("smart contracts") for managing their relationships with other network users.

Encryption: The information that is recorded and verified by the blockchain is encrypted and hence, in principle if not in practice, anonymised. This is good for privacy and for facilitating ‘private ordering’ of how one relates to other users of the network, but means that it can be difficult for traditional legal systems to regulate these relationships.

Architecture-driven: Given the two preceding properties, the main regulatory tool in the case of the blockchain is the underlying technological architecture. What has the system been programmed to do? What kinds of information will it receive and verify? How exactly will it verify it? How frequently? How will those who maintain the network be rewarded for their efforts? All these questions are answered at the level of coding. As a result, much of the regulation has to be baked into the architecture of the system.

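To illustrate what baking regulation into the architecture looks like, here is a toy 'smart contract' sketch in Python. The design is invented for illustration (real smart contracts run on platforms like Ethereum, not as ordinary Python objects): the rule governing when funds move is enforced by the code itself, with no official or court to appeal to.

```python
# Hypothetical escrow contract: the release rule is part of the program's
# architecture, so compliance is a structural matter, not a legal one.
class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller = buyer, seller
        self.balance = amount      # funds locked at creation
        self.delivered = False

    def confirm_delivery(self, confirmer: str) -> None:
        """Only the buyer's confirmation counts; the rule lives in the code."""
        if confirmer == self.buyer:
            self.delivered = True

    def release(self) -> str:
        """Funds move if and only if the coded condition holds."""
        if self.delivered and self.balance > 0:
            paid, self.balance = self.balance, 0
            return f"{paid} paid to {self.seller}"
        return "funds remain locked"

contract = EscrowContract("alice", "bob", 100)
print(contract.release())           # funds remain locked
contract.confirm_delivery("bob")    # ignored: only the buyer can confirm
contract.confirm_delivery("alice")
print(contract.release())           # 100 paid to bob
```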

A lex cryptographia is likely to emerge from this. Users of these systems will develop norms that will be baked into the programmes they develop on blockchain technology. How you feel about this largely depends on how you feel about traditional legal systems. Cyber-libertarians tend to love it. They think that the blockchain allows for the creation of self-governing communities that are outside the reach of the law. And since they think that state-driven law is basically evil, they also think we should welcome this new technological regulatory architecture. It is far more freedom-enhancing than what we currently have in place.

Others are less sanguine. They worry that the blockchain is technocratic and elitist. At present, relatively few people know how to create and code private regulatory architectures on the blockchain. How is everyone else to make use of it? If people are not educated to develop their own architectures, they will have to rely on those with the relevant technical expertise. This would seem to create a new set of trusted third parties, with a lot of social power. And, as the old adage goes, power tends to corrupt.
The result is that some people would like this new regulatory architecture to be brought within the reach of traditional legal systems. Is this possible? Wright and de Filippi argue that it probably is.

Traditional legal systems work by (ultimately) using force or the threat of force to change how humans act. As long as these traditional legal systems can find the humans who run and operate the blockchain, they can use these tools to enact regulatory changes. What's more, they don't need to find everyone who runs and operates the blockchain. They just need to find the people that matter. Although the blockchain is, in theory, a decentralised network — and so, in theory, power is distributed across the network — the reality is that there are centralised "chokepoints" in the system. Internet service providers (ISPs) and other corporate intermediaries (e.g. software developers, hardware manufacturers) are such centralised chokepoints. They provide people with the technology they need to make use of the blockchain. If you bring the law to bear on them, it will be possible to bring some degree of legal regulation into the system.

It looks then like we could be heading for another uneasy relationship between our regulatory tools.