Ethics of AI for Video Surveillance

Posted on Dec 2, 2019

The path towards the successful application of artificial intelligence in video surveillance crosses many junctions, and a wrong turn along the way can lead to very undesirable outcomes. The promise of AI is immense, but so are the risks. It is of the utmost importance that we remain aware of this, that we keep thinking critically and that we enable an open and inclusive dialogue. A strong and well-founded moral compass is needed to stay on the right track. In this blog, we offer our vision on the ethics of video surveillance with machine learning; it’s as relevant as ever.

The Growing Potential

The pace of development in artificial intelligence is incredibly high. In the past few years, the translation of state-of-the-art research into practical applications has also started to take shape. AI is no longer solely an asset of tech giants but is deployed in organisations of all kinds. Protecting your online banking account from scammers, detecting asbestos roofs or making the energy grid more efficient with AI-guided optimisations: none of these are futuristic dreams any longer. The automatic and immediate detection of violent assault was once such a dream too.

There is, however, a non-technical limit to this progress. Not everything that is technologically possible is desirable. While some applications are clearly justified, others are clearly wrong. This article focuses on the domain of video surveillance. No parent would argue against the value of finding a lost child using a network of face-recognising surveillance cameras. At the same time, it would be uncomfortable if these cameras could be put to arbitrary use by the government. Unfortunately, it is not apparent how to implement the former while preventing the latter.

Image: Plan of Jeremy Bentham's Panopticon prison, one of the first instances of constant surveillance. (source: Wikipedia)

The desire to surveil has always existed, but modern technology allows governments (and other organisations) to extend surveillance across the whole of society, at a scale and with an efficiency never seen before. Dystopian scenarios of repressive police states with all-seeing eyes, such as the one made famous in George Orwell’s 1984 (a must-read for everyone interested in video surveillance), are becoming technologically feasible. The risk of malicious application is the price we pay for the technological progress that enables legitimate applications.

Safety vs. Privacy, Finding a Balance

The necessity of public video surveillance stems from the primary duty of public authorities to protect citizens and their social and economic needs. This is often seen as a valid justification for interfering with privacy (even the European Convention on Human Rights leaves room for governments to limit privacy for security reasons).

However, “security” is ill-defined in this context (what exactly do we mean by it?) and there is disagreement about its interaction with privacy. Is spying on everyone who Googled for “bomb” necessary to prevent a terrorist attack? Or is it just trawling in the hope of catching something useful, with privacy lost as by-catch? How much security must be gained to make losing a bit of privacy worthwhile? And when do we lose too much privacy? Where do we draw the line?

Approaching privacy as our “right to be let alone” or as “affording us the space to be and define ourselves through giving us a degree of autonomy and protecting our dignity”, we argue that the line should be drawn where personal identity becomes part of the information gathered without consent. In other words: once the surveillant knows who the surveilled is, the intrusion on our right to privacy has gone too far.

The GDPR already prohibits the processing of biometric data for identifying individuals (which includes facial features, effectively prohibiting facial recognition), although EU member states are allowed to make explicit exceptions. In the Netherlands, for example, biometrics may be used for the identification or verification of a subject as long as it is proportional to ‘justifiable interests’. In the coming years, we will probably see legal cases about the interpretation of this, as ‘proportionality’ is itself subject to discussion.

Different rules regarding the use of personal data may apply to law enforcement, however, as is the case in the Netherlands. These rules make data collection less transparent, which is both necessary from the perspective of the authorities and hazardous from the citizens’ viewpoint.

The High-Level Expert Group on Artificial Intelligence (AI HLEG), set up by the European Commission, recently published a set of ethics guidelines in which it formulates seven requirements for trustworthy AI:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Environmental and societal well-being
  7. Accountability

While this is a good start, the conclusions of the report remain fairly abstract and legislation is still outstanding. All the while, technological progress keeps marching on.

Predictive Policing

When equipping yourself with an automated video surveillance system, it might be tempting to harness the predictive power of machine learning algorithms on the data you collect. You could detect suspicious behaviour before a crime is committed. This seems a fantastically efficient way to make the world safer, because crime is prevented from happening in the first place. You may already have encountered the pros and cons of predictive policing through the “pre-crime” unit in Steven Spielberg’s Minority Report (which we at Oddity.ai highly recommend if you haven’t seen it yet). Predictive policing at the level of the individual not only goes against the presumption of innocence (a fundamental human right), it is also currently impossible to get right.

While the “pre-crime” unit had all-seeing mediums at its disposal, we have to make do with AI techniques. All predictive techniques under consideration are data-driven: they use historical data to learn to predict. This data is inherently limited (finite, and a collection of instances rather than principles) and in effect always discriminates. This might be due to sampling bias, but there are bigger problems, certainly when targeting individuals.

People with characteristics that correlate with criminal behaviour (these correlations are learnt from the historical data) will be policed more because the algorithms predict they will commit more crimes. From the numbers alone, however, there is no way to distinguish causal correlations from spurious ones. The characteristics picked up by the model could be anything we can gather about an individual, from skin colour to family history to political preferences. There is thus no guarantee that the predictions are just or fair.

Let’s take an example: the innocent, brown-eyed John Doe. In the police data, brown-eyed people were caught committing crimes more often than people with other eye colours. John will thus be policed more because of this characteristic. Some might find this reasonable (“brown-eyed people are more likely to be criminals, after all”), but John is not actually a criminal and has to carry this burden without deserving it.
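To make this concrete, here is a minimal, hypothetical sketch of how such a naive data-driven “risk score” behaves. The dataset, the eye-colour feature and the numbers are invented purely for illustration; they do not come from any real system.

```python
from collections import Counter

# Toy historical records: (eye_colour, was_arrested). In this invented data
# the underlying offending rate is the same for both groups; brown-eyed
# people were simply arrested more often because they were watched more.
records = ([("brown", True)] * 30 + [("brown", False)] * 70
           + [("blue", True)] * 10 + [("blue", False)] * 90)

arrests = Counter(eye for eye, arrested in records if arrested)
totals = Counter(eye for eye, _ in records)

def risk_score(eye_colour: str) -> float:
    """Fraction of historical records with this eye colour that ended in arrest."""
    return arrests[eye_colour] / totals[eye_colour]

# John Doe is innocent, but a purely data-driven score flags him as higher
# risk because of a characteristic he cannot change.
print(f"risk(brown eyes) = {risk_score('brown'):.2f}")  # 0.30
print(f"risk(blue eyes)  = {risk_score('blue'):.2f}")   # 0.10
```

The score faithfully reproduces the historical arrest pattern, but it says nothing about whether that pattern reflects actual behaviour or merely who was being watched.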

Additionally, these systems put more pressure on groups that are already disadvantaged. People who live in neighbourhoods where criminal activity is above average, for example, will experience a greater police burden (longer lines at the airport, getting pulled over more often, and so on). This has all kinds of negative impacts, for example on mental wellbeing, and effectively makes it harder to escape from such criminal environments, creating a social divide.

Lastly, there is the risk of creating a self-fulfilling prophecy. Suppose a causal correlation is found in the data and used for predictive policing. If people with that characteristic were, at some point in the future, to no longer exhibit above-average amounts of criminal behaviour, they would still be policed more, because the system works on historical data. And if you police a group more, you will also catch more of the crimes that group commits. In this way the statistics get skewed and paint a distorted picture of the real underlying crime distribution: it will look as if the group commits more crime than average when, in reality, they simply get caught more often.
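This feedback loop can be illustrated with a small simulation (again a hypothetical sketch; the groups, population size and rates are made up): two groups offend at exactly the same true rate today, but one still receives twice the police attention because of its history, and the recorded statistics keep “confirming” that it is the more criminal group.

```python
import random

random.seed(42)

TRUE_OFFENCE_RATE = 0.05   # today, both groups offend equally often
POPULATION = 10_000        # people per group

# Historical data once showed group B offending more, so B still receives
# twice the police attention (the probability that an offence is observed).
attention = {"A": 0.25, "B": 0.50}

recorded = {"A": 0, "B": 0}
for group, watch_prob in attention.items():
    for _ in range(POPULATION):
        offended = random.random() < TRUE_OFFENCE_RATE
        if offended and random.random() < watch_prob:
            recorded[group] += 1

print(recorded)
# Group B's recorded crime comes out roughly double group A's, even though
# the true offence rates are identical -- which in turn appears to "justify"
# keeping the extra attention on B: the self-fulfilling prophecy.
```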

Using these techniques, discrimination cannot be prevented.

Taking a Stance

We are optimistic that, with careful consideration, artificial intelligence for video surveillance can bring society a lot of value. The potential is immense.

As we’ve seen, however, the risks involved are also huge. With legislation lagging behind, we as a company think it is important to take a stance in the matter. While fully acknowledging the efforts of the AI HLEG, we choose a more concrete position.

1. Maintain an open and critical dialogue

We think it is important to keep asking whether the problems at hand are large enough to justify the privacy infringements caused by video surveillance solutions. There is no single right answer that covers all cases, so we should always maintain a dialogue with different voices in order to stay critical of our own decisions and prevent the formation of a bubble. We constantly try to involve our partners in such a dialogue.

2. Improve the state of both security and privacy

Human operators behind surveillance camera feeds, no matter how well-trained, inherently have their own personal interests and come equipped with biological face recognition. Deploying algorithmic solutions focused on security therefore has the potential to increase privacy by eliminating the need for human surveillants while still solving a security problem. Our solutions are designed to be privacy-first: they do not use any personal information (e.g. no face recognition) and do not need to store video. A simplified sketch of what such a privacy-first detection loop can look like is shown below.
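The following is a minimal, hypothetical sketch of a privacy-first detection loop, not Oddity.ai’s actual implementation: frames are analysed in memory, the model (represented here by a stand-in `detect_violence` function) extracts no identities, frames are never written to disk, and only an anonymous alert event leaves the system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Iterable

@dataclass(frozen=True)
class Alert:
    camera_id: str   # which stream raised the alert
    timestamp: str   # when the incident was detected
    label: str       # e.g. "violence" -- no faces, no identities, no footage

def detect_violence(frame: bytes) -> bool:
    """Stand-in for a violence-detection model; not a real API."""
    return False

def publish(alert: Alert) -> None:
    """Stand-in for a real notification channel (control room, operator app)."""
    print(alert)

def process_stream(camera_id: str, frames: Iterable[bytes]) -> None:
    for frame in frames:
        if detect_violence(frame):
            publish(Alert(camera_id,
                          datetime.now(timezone.utc).isoformat(),
                          "violence"))
        # the frame is discarded after analysis: nothing is persisted
```

The key design choice is that the only output is the alert itself; everything that could identify a person stays inside the processing loop and is dropped immediately.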

3. Don’t engage in individual predictive policing

We think people should not be limited in their freedom beforehand and that innocence is to be presumed until proven otherwise. This means acting only when a crime has actually been committed. At Oddity.ai, we do not try to predict violence but detect it at the exact moment it happens. By focusing on reaction instead of prediction, we can make the world a safer place while also being fair.

4. Experiment with policy-making in mind

By working with public authorities on testing grounds for new technologies, their effectiveness and societal impact can be evaluated in a controlled and safe manner. Policy-making should always be kept in mind during these experiments. We encourage formalisation into policy, because this creates transparency for everyone involved.


If you have a thought about this article or want to know more about what we at Oddity.ai do, please reach out to us.