Posted on Dec 9, 2023
In our present society, surveillance cameras can be more prevalent than patrolling police officers. As the use of artificial intelligence (AI) in such surveillance cameras expands, its role in public safety is becoming increasingly significant. Computer vision technology, which enables machines to interpret and make decisions based on visual data, has great potential for addressing crime and ensuring public safety. However, the technology is often viewed as a double-edged sword, offering the promise of enhanced security and improved quality of life while also harboring the potential for a dystopian mass-surveillance society. It is Oddity’s mission to use computer vision as a force for good, and to increase safety without sacrificing privacy. There are two main concerns to take into account to ensure proper use of AI in public surveillance:

1. Privacy: the risk that continuous, automated analysis of camera footage slides into mass surveillance of individuals.
2. Bias: the risk that an AI system systematically treats certain groups of people differently.
Related to bias is the notion of the “black box” and the importance of explainable AI. The black box refers to a situation where the decision-making process of an AI system is hidden and not easily understandable by humans. This can lead to challenges in understanding, trusting, and validating the AI's decisions and behaviors. If we cannot inspect the inner workings of an AI system, it becomes enormously difficult to rule out undesired biases. Ultimately, addressing privacy and bias is not just a technological challenge but a societal imperative. In developing computer vision systems, we must take great caution, ensuring they are engineered to limit misuse wherever possible.
In this blog post, we will explore the positions of the European Union and the Dutch government on privacy, bias, and AI. Following this, we will present an overview of how Oddity.ai aligns with these regulations and standards. We will detail our proactive measures and the internal policies we have implemented to ensure our AI systems uphold privacy and mitigate bias, reinforcing our commitment to ethical AI development.
A spectre is haunting Europe: the spectre of unregulated AI. The view of the European Union (EU) is clear: the power of AI must be wielded with a careful balance between innovation and the upholding of privacy and personal freedoms. The EU has been at the forefront of regulating AI, particularly with the proposal of the AI Act in 2021. The overarching concept of this act is to categorize AI systems into four risk groups:

1. Unacceptable risk: systems considered a clear threat to safety or fundamental rights, which are banned outright.
2. High risk: systems, including many public-safety applications, that are permitted only under strict requirements and oversight.
3. Limited risk: systems subject mainly to transparency obligations.
4. Minimal risk: systems that remain largely unregulated.
The European Data Protection Board and the European Data Protection Supervisor have voiced concerns about the use of AI in public spaces. They underline the importance of ensuring that these systems do not lead to mass surveillance. Mass surveillance, by its very nature, stands in direct conflict with the principles of necessity and proportionality mandated by data protection laws like the General Data Protection Regulation (GDPR). While the GDPR is mainly concerned with privacy, the proposed AI regulations also demand robust human oversight to prevent discriminatory outcomes, addressing the concern of bias.
In the wake of high-profile incidents such as the “toeslagenaffaire” (the Dutch childcare benefits scandal) and the use of AI-assisted fraud detection in the welfare system, a wave of vigilance and skepticism has washed over the Dutch government and its citizens. These events have pushed the conversation on AI towards a critical examination, particularly with regard to surveillance and social governance. The Netherlands is taking a measured approach to AI. The Autoriteit Persoonsgegevens (AP), the Dutch Data Protection Authority, is tasked with safeguarding privacy, and it cautions against the encroaching nature of surveillance technology. In line with the European Commission’s AI Act discussed above, Dutch authorities advocate for a strict regulatory approach.
The government supports tightly scoped exceptions for the use of invasive surveillance technology in public areas, reflecting a strong preference for privacy in public spaces. Particularly noteworthy is the Dutch position on biometric identification: in a decisive move in 2021, the AP issued a formal warning to a supermarket chain eager to deploy facial recognition. The AP stated that facial recognition is broadly prohibited in the Netherlands, with only two exceptions. The first permits usage only when users give explicit consent. The second allows the technology as a security tool in areas of vital public interest, with a nuclear power plant cited as a prime example. According to the AP, and contrary to the supermarket chain’s own assessment, its use case met neither criterion. This incident underlines the commitment to privacy and exemplifies the high bar set for the use of potentially invasive technologies.
In the aftermath of the aforementioned “toeslagenaffaire”, the Impact Assessment for Human Rights and Algorithms (IAMA) emerged as a potential guide for ethical AI implementation. Developed by the Utrecht Data School in collaboration with legal scholars at the request of the Dutch government, the IAMA provides a methodical framework to scrutinize the deployment of algorithms, balancing technological innovation with fundamental human rights.
The IAMA’s inception was a direct response to that government request, aiming to prevent a repeat of the algorithmic misuse seen in the “toeslagenaffaire” and the welfare fraud detection case. It serves as a manual, outlining a tri-phase decision-making process.
This approach stands in stark contrast to the prior status quo; Amnesty International previously called out the Dutch government for lax regulation and opaque algorithmic operations. The IAMA has also been translated and made available for international use under the not-so-catchy name “Fundamental Rights and Algorithm Impact Assessment” (FRAIA).
Policy discussions in the Netherlands also highlight the need for transparent AI systems. Dutch policymakers underscore the need to move away from ‘black box’ AI decision-making. There is an active push to ensure that these systems are free of bias, an initiative that is not only about meeting ethical standards but also about building public trust and consent.
At Oddity.ai, we are at the forefront of developing cutting-edge computer vision technology while also being deeply committed to its ethical development and deployment. This commitment is increasingly significant given the growing body of strict laws and regulations across the European Union and the Netherlands. In this section, we will delve into how Oddity.ai’s operations align with the EU’s and the Dutch government’s regulatory frameworks, and how our foundational design principles both complement and enhance compliance efforts.
The development of our first algorithm began with extensive engagement with various stakeholders in the industry, including regular interactions with customers, municipal authorities, and industry and academic experts. Through these interactions, we were able to extensively map the concerns and challenges related to privacy and bias in AI surveillance technology. This process of continuous dialogue and learning helped us gradually develop a strategic approach to address these critical concerns, and it was through this iterative process that we arrived at our four foundational design principles. These principles were not drawn up as a mere legal checklist but reflect our company’s view of how this technology should be used.
Oddity.ai’s four core design principles:
The Dutch emphasis on transparency and the avoidance of ‘black box’ AI is reflected in our commitment to open and explainable AI processes. We use Gradient-weighted Class Activation Mapping (Grad-CAM) to provide visual explanations of our algorithm’s decisions. Grad-CAM highlights the areas in an image or video frame that most influenced the algorithm’s classification decision. This enables us to review the algorithm’s decisions and confirm that they were based on acceptable criteria. Conversely, it also allows us to detect when a decision was driven by an undesirable bias, enabling us to rectify this quickly. When our algorithm raises an alert, we ensure that the logic behind that alert is understandable and justifiable.
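For readers curious how such a visual explanation is produced, below is a minimal Grad-CAM sketch in PyTorch. The model (a stock ResNet-50), the chosen target layer, and the hook-based plumbing are illustrative assumptions for this post, not a description of our production pipeline.

```python
# Minimal Grad-CAM sketch (PyTorch). Model and target layer are illustrative
# assumptions; any CNN with a spatial feature map works the same way.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]  # last conv block: coarse but semantically rich

activations, gradients = {}, {}
target_layer.register_forward_hook(
    lambda m, i, o: activations.update(value=o.detach()))
target_layer.register_full_backward_hook(
    lambda m, gi, go: gradients.update(value=go[0].detach()))

def grad_cam(x, class_idx=None):
    """Return an HxW heatmap in [0, 1] showing which regions drove the decision."""
    logits = model(x)                    # forward pass records the activations
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()      # backward pass records the gradients

    # Weight each feature channel by its spatially averaged gradient,
    # sum the weighted channels, and keep only positive evidence (ReLU).
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))

    # Upsample to input resolution and normalize for overlaying on the frame.
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]
```

An inspection like this makes the “acceptable criteria” check concrete: if the heatmap concentrates on the behaviour being classified rather than on irrelevant attributes of the people involved, the decision rests on the right evidence.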
Considering the AP’s stance on biometric identification and the EU’s directives, we consciously avoid any form of biometric data analysis, and indeed any type of identification in general, further reinforcing our commitment to privacy protection. By integrating these design principles into our products and processes, Oddity.ai is not merely compliant with current laws and regulations but also positioned to adapt readily to future legislation.
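As an illustration of what avoiding identification can mean at the data level, consider the hypothetical alert schema below: the detector’s output is restricted to event metadata, so there is simply no field in which biometric or identity information could travel. This sketch is our own illustration, not Oddity.ai’s actual interface.

```python
# Hypothetical alert payload: event metadata only. By construction there are
# no fields for identities, face crops, or biometric embeddings.
from dataclasses import dataclass

@dataclass(frozen=True)
class IncidentAlert:
    camera_id: str        # which stream fired, not who was filmed
    timestamp_utc: str    # ISO-8601 time of the detected event
    event_type: str       # e.g. "aggression": a behaviour, not a person
    confidence: float     # classifier score in [0, 1]
    bbox: tuple[float, float, float, float]  # frame region of the event
```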
In summary, as we stand at the forefront of AI development, we at Oddity.ai truly believe in the transformative power of AI. We envision a future where AI enhances current security implementations and significantly improves the quality of life for countless individuals across the globe. Yet we are aware that this bright future depends on a commitment to the ethical pursuit of these technologies. Our mission is not just to innovate, but to do so with a conscience, ensuring that the advancements we champion result in a better future for everyone.