Oddity analyzes live video from your existing cameras to flag potential physical aggression in real time.
Oddity uses computer vision based on Convolutional Neural Networks (CNNs), a deep-learning approach that learns patterns of physical aggression from large, labeled video datasets. The model analyzes live video from your existing cameras and assigns a confidence score that helps determine whether an alert should be sent. It is not generative AI: Oddity does not record audio, identify individuals, or generate new or synthetic content. It only observes what is on camera and scores the likelihood of physical aggression.
Live video streams are securely received from your existing cameras.
The model evaluates motion patterns frame by frame to assess observable physical behavior and produce a confidence score.
A site-specific threshold converts that assessment into an alert or no-alert decision, tuned to your environment.
When an alert is triggered, staff receive a notification containing a short video clip and the camera location, delivered via mobile or the existing video system.
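The steps above can be sketched as a minimal scoring-and-threshold loop. This is an illustrative sketch only: the `process_stream` function, the `Alert` type, and the default threshold value are placeholders, not Oddity's actual API, and the real system runs a CNN on each frame rather than the stand-in scorer shown here.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator

@dataclass
class Alert:
    """Illustrative alert payload: camera location plus the model's confidence."""
    camera_id: str
    score: float

def process_stream(
    frames: Iterable,
    camera_id: str,
    score_frame: Callable[[object], float],
    threshold: float = 0.85,  # hypothetical site-specific threshold
) -> Iterator[Alert]:
    """Score each frame with the model and emit an Alert whenever the
    confidence crosses the site-specific threshold."""
    for frame in frames:
        score = score_frame(frame)
        if score >= threshold:
            yield Alert(camera_id=camera_id, score=score)

# Stand-in "model": in this sketch each frame is just its precomputed score.
scores = [0.10, 0.20, 0.90, 0.30]
alerts = list(process_stream(scores, "lobby-cam-1", score_frame=lambda s: s))
```

Only the third frame crosses the (assumed) 0.85 threshold, so a single alert is produced, carrying the camera location and confidence score described above.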
The model is trained on a diverse mix of public datasets, re-enacted and self-recorded footage, and licensed videos. Training is continuous and improves over time as new real-world edge cases are introduced.
Physical aggression is defined as forceful, aggressive movement involving intentional physical contact between two or more people. Detection is based solely on visible physical behavior, not speech or identity.
Models are tuned to reliably catch real incidents without overwhelming staff: at least an 80% detection rate, with no more than roughly one false alert every few days per camera.
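The trade-off described above amounts to picking the highest alert threshold that still meets the detection-rate target on labeled validation footage, since a higher threshold means fewer false alerts. The sketch below is an assumed illustration of that idea; `pick_threshold`, the sample scores, and the 80% target wired in as a default are all hypothetical, not Oddity's tuning procedure.

```python
def pick_threshold(incident_scores, min_detection=0.80):
    """Return the highest threshold that still detects at least
    `min_detection` of labeled incidents. Higher thresholds trade
    missed detections for fewer false alerts."""
    for t in sorted(set(incident_scores), reverse=True):
        detection = sum(s >= t for s in incident_scores) / len(incident_scores)
        if detection >= min_detection:
            return t
    return min(incident_scores)  # fall back: detect every labeled incident

# Hypothetical validation scores.
incident_scores = [0.95, 0.90, 0.85, 0.70, 0.60]  # scores on real incidents
benign_scores = [0.10, 0.20, 0.40, 0.50, 0.82]    # scores on benign footage

t = pick_threshold(incident_scores)               # 0.70 -> 4/5 = 80% detected
false_alerts = sum(s >= t for s in benign_scores)  # benign clips above threshold
```

With these made-up scores, the chosen threshold of 0.70 catches four of the five incidents (80%) while only one benign clip scores above it.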
Fairness is evaluated throughout training and deployment using ongoing statistical bias testing. Performance is measured with equalized odds to keep detection rates consistent across protected groups, and each production model is automatically verified to meet these fairness standards before going live.
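Equalized odds means the true-positive and false-positive rates should be (approximately) equal across groups. A minimal check of that property can be sketched as below; the function names, the toy data, and the gap-based formulation are illustrative assumptions, not Oddity's verification pipeline.

```python
def rates(labels, preds):
    """True-positive rate and false-positive rate for one group."""
    tp = sum(1 for y, p in zip(labels, preds) if y and p)
    fn = sum(1 for y, p in zip(labels, preds) if y and not p)
    fp = sum(1 for y, p in zip(labels, preds) if not y and p)
    tn = sum(1 for y, p in zip(labels, preds) if not y and not p)
    return tp / (tp + fn), fp / (fp + tn)

def equalized_odds_gap(groups):
    """Largest cross-group difference in TPR and in FPR.
    `groups` maps a group name to (labels, predictions)."""
    tprs, fprs = zip(*(rates(y, p) for y, p in groups.values()))
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Hypothetical per-group ground truth and model decisions.
groups = {
    "group_a": ([1, 1, 0, 0], [1, 1, 0, 0]),  # detects both incidents
    "group_b": ([1, 1, 0, 0], [1, 0, 0, 0]),  # misses one incident
}
tpr_gap, fpr_gap = equalized_odds_gap(groups)
```

A deployment gate of the kind described above would compare these gaps against a tolerance and block promotion when either exceeds it; in this toy data the detection-rate gap of 0.5 would fail any reasonable tolerance.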