The Watchful AI: Privacy and Bias in Computer Vision

Posted on Dec 9, 2023




In our present society, surveillance cameras can be more prevalent than patrolling police officers. As the use of artificial intelligence (AI) in such surveillance systems expands, its role in public safety is becoming increasingly significant. Computer vision technology, which enables machines to interpret and make decisions based on visual data, has great potential for addressing crime and ensuring public safety. However, the technology is often viewed as a double-edged sword: it offers the promise of enhanced security and improved quality of life, while also harboring the potential to enable a dystopian mass-surveillance society. It is Oddity’s mission to use computer vision as a force for good and to increase safety without sacrificing privacy. There are two main concerns to take into account to ensure proper use of AI in public surveillance:

  • Privacy In the context of this blog post, privacy refers to an individual’s right to keep their personal life and personal information out of public view and to control the flow of their personal data. When it comes to AI in surveillance, privacy is concerned with protecting individuals from being constantly monitored or having their data used without consent, which could lead to invasive profiling or personal exposure.
  • Bias Bias in AI systems occurs when an algorithm’s decisions systematically favor or discriminate against certain groups of people. We should make clear that our discussion focuses on discriminatory biases and prejudices, rather than the concept of 'bias' as it is typically understood in statistics. In surveillance, this could manifest as the AI inaccurately identifying or targeting individuals based on their ethnicity, gender, or other characteristics, leading to unfair treatment and potentially exacerbating existing societal inequalities.

Related to bias is the notion of the “black box” and the importance of explainable AI. The black box refers to a situation where the decision-making process of an AI system is hidden and not easily understandable by humans. This can lead to challenges in understanding, trusting, and validating the AI's decisions and behaviors. If we cannot inspect the inner workings of an AI system, it becomes enormously difficult to rule out undesired biases. Addressing the issues of privacy and bias is therefore not just a technological challenge but a societal imperative. In developing computer vision systems, we must take great caution, ensuring they are engineered to limit misuse wherever possible.

In this blog post, we will explore the positions of the European Union and the Dutch government on privacy, bias, and AI. Following this, we will present an overview of how Oddity.ai aligns with these regulations and standards. We will detail our proactive measures and the internal policies we have implemented to ensure our AI systems uphold privacy and mitigate bias, reinforcing our commitment to ethical AI development.

The European Union: Balancing Innovation and Individual Rights

A spectre is haunting Europe — the spectre of unregulated AI. The view of the European Union (EU) is clear: the power of AI must be wielded carefully, balancing innovation against the protection of privacy and personal freedoms. The EU has been at the forefront of regulating AI, particularly with the proposal of the AI Act in 2021. The overarching concept of this act is to categorize AI systems into four risk groups:

  • Unacceptable risk: AI systems in this category are considered a clear threat to the safety, livelihoods, and rights of individuals. They are set to be banned. For example, a social scoring system.
  • High risk: Broadly summarized, this category includes applications that could affect an individual’s fundamental rights, as well as applications in domains such as healthcare. These systems must comply with strict regulatory requirements before they can be deployed. For example, an AI application in robot-assisted surgery.
  • Limited risk: AI applications that interact with individuals, such as chatbots, fall into this category. They require specific transparency obligations to inform users that they are interacting with an AI system.
  • Minimal or no risk: This group comprises AI systems that pose minimal or no risk, like AI-enabled video games or spam filters. These applications are subject to minimal regulation.

The European Data Protection Board and the European Data Protection Supervisor have voiced concerns about the use of AI in public spaces. They underline the importance of ensuring that these systems do not lead to mass surveillance. Mass surveillance, by its very nature, stands in direct conflict with the principles of necessity and proportionality mandated by data protection laws like the General Data Protection Regulation (GDPR). The GDPR is mainly concerned with privacy; the proposed AI regulations, however, also demand robust human oversight to prevent discriminatory outcomes, addressing the concern of bias.

The Dutch Perspective: A Cautious Approach to AI

In the wake of high-profile incidents such as the “toeslagenaffaire” (the childcare benefits scandal) and the use of AI-assisted fraud detection in the welfare system, a wave of vigilance and skepticism has washed over the Dutch government and its citizens. These events have pushed the conversation on AI towards a critical examination, particularly with regard to surveillance and social governance. The Netherlands is taking a measured approach to AI. The Autoriteit Persoonsgegevens (AP), the Dutch Data Protection Authority, is tasked with safeguarding privacy. It cautions against the encroaching nature of surveillance technology. In line with the previously discussed European Commission’s AI Act, Dutch authorities advocate for a strict regulatory approach.

The government supports tightly scoped exceptions for the use of invasive surveillance technology in public areas, reflecting a strong preference for privacy in public spaces. Particularly noteworthy is the Dutch position on biometric identification: in a decisive move in 2021, the AP issued a formal warning to a supermarket chain eager to deploy facial recognition. The AP stated that facial recognition is broadly prohibited in the Netherlands, with only two exceptions. The first permits usage only when users give explicit consent. The second allows the technology to be used as a security tool in areas of vital public interest, a nuclear power plant being the cited example. Contrary to the opinion of the supermarket chain, its use case failed to meet either criterion according to the AP. This incident underlines the commitment to privacy and exemplifies the high bar set for the use of potentially invasive technologies.

In the aftermath of the aforementioned “toeslagenaffaire”, the Impact Assessment for Human Rights and Algorithms (IAMA) emerged as a potential guide for ethical AI implementation. Developed by the Utrecht Data School in collaboration with legal scholars at the request of the Dutch government, the IAMA provides a methodical framework to scrutinize the deployment of algorithms, balancing technological innovation with fundamental human rights.

The IAMA's inception was a direct response to a government request, aiming to prevent the kind of algorithmic misuse seen in the “toeslagenaffaire” and the welfare fraud detection case. It serves as a manual, outlining a tri-phase decision-making process:

  1. Preparation: Organizations must clarify their intentions with algorithmic deployment, anticipating potential impacts.
  2. Technical Scrutiny: It stresses the 'garbage in, garbage out' adage, pushing for quality data and sound algorithmic processing to avert flawed outcomes.
  3. Output Management: The IAMA insists on the capability for human intervention, ensuring algorithmic decisions can be overruled, preserving human autonomy.

This approach is in stark contrast to the prior status quo: Amnesty International previously called out the Dutch government for lax regulations and shrouded algorithmic operations. The IAMA has also been translated and made available for international use under the (not so catchy) name “Fundamental Rights and Algorithm Impact Assessment” (FRAIA).

Policy discussions in the Netherlands also highlight the need for transparent AI systems. Dutch policymakers underscore the need to move away from ‘black box’ AI decision-making. There is an active push to ensure that these systems are free of bias, an initiative that is not only about meeting ethical standards but also about building public trust and consent.

Oddity: Our Approach To Privacy and Bias

At Oddity.ai, we are at the forefront of developing cutting-edge computer vision technology while also being deeply committed to its ethical development and deployment. This commitment is increasingly significant given the growing number of strict laws and regulations across the European Union and the Netherlands. In this section, we will delve into how Oddity.ai’s operations align with the EU’s and the Dutch government’s regulatory frameworks and how our foundational design principles both complement and enhance compliance efforts.

The development of our first algorithm began with extensive engagement with various stakeholders in the industry. This included regular interactions with customers, municipal authorities, and industry and academic experts. Through these interactions, we were able to extensively map the concerns and challenges related to privacy and bias in AI surveillance technology. This process of continuous dialogue and learning helped us gradually develop a strategic approach to address these critical concerns. It was through this iterative process that we arrived at our four foundational design principles. These principles were not drawn up as a mere legal checklist; they reflect our company’s view of how this technology should be used.

Oddity.ai’s four core design principles:

  • Privacy-by-design: In line with the GDPR, our algorithms are architected to respect privacy at their core. We ensure that no personal identification takes place, addressing concerns around unnecessary data collection and the potential for invasive profiling.
  • Human-in-the-loop: Consistent with both EU and Dutch guidelines that demand human oversight in AI systems, our approach empowers human agency. Our technology assists the critical decision-making processes of surveillance and security staff, ensuring that AI supports rather than replaces human judgment.
  • Action-based alerts: Our algorithm alerts a staff member only when a person is observed performing one of our recognized actions, such as violence (see the sketch after this list). This aligns with the EU’s call for proportionality and necessity in surveillance practices. Our triggers are based on the actions being performed by an individual, not on profiling or pre-emptive labeling based on appearance.
  • Transparency and Traceability: Opening the ‘black box’ for understandable AI decisions, ensuring ethical data acquisition and sourcing, and maintaining transparency in our data use and processing, while respecting privacy and regulatory standards.
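To make the action-based principle concrete, the snippet below is a minimal, hypothetical sketch of such alerting logic; the action set, threshold, and field names are illustrative assumptions of ours, not Oddity.ai's production code. The structural point is that an alert fires only when a recognized action is detected with sufficient confidence, and the alert itself carries no identity or appearance attributes.

```python
# Minimal, illustrative sketch of action-based alerting.
# ALERT_ACTIONS, ALERT_THRESHOLD, and the Alert fields are assumed for this example.
from dataclasses import dataclass
from typing import Optional

ALERT_ACTIONS = {"violence"}   # actions that warrant operator attention (assumed)
ALERT_THRESHOLD = 0.9          # assumed confidence threshold

@dataclass
class Alert:
    camera_id: str
    timestamp: float
    action: str
    confidence: float          # note: no biometric or identity fields

def maybe_alert(camera_id: str, timestamp: float,
                action_scores: dict) -> Optional[Alert]:
    """Return an Alert for a human operator to review, or None."""
    action, confidence = max(action_scores.items(), key=lambda kv: kv[1])
    if action in ALERT_ACTIONS and confidence >= ALERT_THRESHOLD:
        return Alert(camera_id, timestamp, action, confidence)
    return None

# Example: per-action scores produced by a classifier for one video clip
print(maybe_alert("cam-03", 1702112345.0, {"violence": 0.94, "normal": 0.06}))
```

The human-in-the-loop principle sits on top of this: the returned alert is a request for human review, not an automated decision.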

The Dutch emphasis on transparency and the avoidance of ‘black box’ AI is reflected in our commitment to open and explainable AI processes. We utilize Gradient-weighted Class Activation Mapping (Grad-CAM) to provide visual explanations of our algorithm's decisions. Grad-CAM highlights the areas in an image or video that significantly influenced the algorithm’s classification decision, enabling us to review those decisions and confirm that they were based on acceptable criteria. Conversely, it also allows us to detect when the algorithm has made a decision based on an undesirable bias, enabling us to quickly rectify it. When our algorithm sends an alert, we ensure that the logic behind that alert is understandable and justifiable.
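For readers unfamiliar with Grad-CAM, the sketch below shows the core idea on a generic PyTorch image classifier; the backbone, target layer, and dummy input are stand-ins chosen for illustration, not our production model. Gradients of the target class score are pooled per feature channel and used to weight a convolutional layer's activations, producing a heatmap of the regions that drove the prediction.

```python
# Minimal Grad-CAM sketch on a stand-in classifier (illustrative only).
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(weights=None).eval()  # stand-in backbone
target_layer = model.layer4[-1]                           # last convolutional block

store = {}
target_layer.register_forward_hook(
    lambda mod, inp, out: store.update(act=out))          # cache activations
target_layer.register_full_backward_hook(
    lambda mod, gin, gout: store.update(grad=gout[0]))    # cache gradients

def grad_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return an (H, W) heatmap of regions that influenced the predicted class."""
    logits = model(image.unsqueeze(0))                    # (1, num_classes)
    model.zero_grad()
    logits[0, class_idx].backward()                       # gradients for the target class
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)     # pool gradients per channel
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)[0, 0]
    return cam / (cam.max() + 1e-8)                       # normalize to [0, 1] for overlay

heatmap = grad_cam(torch.rand(3, 224, 224), class_idx=0)  # dummy input for illustration
```

In review, such a heatmap can be overlaid on the original frame: if the highlighted regions correspond to the observed action rather than, say, a person's appearance, the decision is easier to justify; if not, it is a signal to investigate and retrain.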

Considering the AP’s stance on biometric identification and the EU’s directives, we consciously avoid any form of biometric data analysis, or any type of identification in general, further reinforcing our commitment to privacy protection. By integrating these design principles into our products and processes, Oddity.ai is not merely compliant with the current laws and regulations but also positions itself to adapt effortlessly to future legislation.

In summary, as we stand at the forefront of AI development, we at Oddity.ai truly believe in the transformative power of AI. We envision a future where AI enhances current security implementations and significantly improves the quality of life for countless individuals across the globe. Yet, we are aware that this bright future is dependent on a commitment to the ethical pursuit of these technologies. Our mission is not just to innovate, but to do so with a conscience, ensuring that the advancements we champion result in a better future for everyone.