Solution

How the AI Works

Oddity analyzes live video from your existing cameras to flag potential physical aggression in real time.

Oddity uses computer vision based on Convolutional Neural Networks (CNNs), a deep-learning approach that learns patterns of physical aggression from large, labeled video datasets. The model analyzes live video from your existing cameras and assigns a confidence score that helps determine whether an alert should be sent. It is not generative AI. Oddity does not record audio, identify individuals, or create new or synthetic content. It only observes what is on camera and scores the likelihood of physical aggression.

The detection pipeline

1. Ingest

Live video streams are securely received from your existing cameras.

2. Analyze

The model evaluates motion patterns frame by frame to assess observable physical behavior and produce a confidence score.

3. Detect

A site-specific threshold converts that assessment into an alert or no-alert decision, tuned to your environment.

4. Alert

When an alert is triggered, staff receive a notification with a short video clip and camera location via mobile or the existing video system.
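
The four steps above amount to a score-then-threshold decision. As a minimal sketch (the function names, threshold value, and alert fields here are illustrative assumptions, not Oddity's actual API):

```python
from dataclasses import dataclass
from typing import Optional

# Site-specific threshold, tuned per environment (step 3). The value
# 0.85 is an invented example.
ALERT_THRESHOLD = 0.85

@dataclass
class Alert:
    camera_id: str
    confidence: float
    clip_path: str  # short video clip attached to the notification (step 4)

def evaluate(camera_id: str, confidence: float,
             threshold: float = ALERT_THRESHOLD) -> Optional[Alert]:
    """Convert the model's confidence score (step 2) into an
    alert / no-alert decision (step 3)."""
    if confidence >= threshold:
        return Alert(camera_id, confidence, clip_path=f"clips/{camera_id}/latest.mp4")
    return None

# A score above the threshold produces an alert; below it, nothing is sent.
high = evaluate("cam-12", 0.91)  # returns an Alert
low = evaluate("cam-12", 0.40)   # returns None
```

Keeping the threshold as a per-site parameter is what allows the same model to be tuned to busier or quieter environments without retraining.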

What the model learns and how we test it

Training data

The model is trained on a diverse mix of public datasets, re-enacted and self-recorded footage, and licensed videos. Training is continuous and improves over time as new real-world edge cases are introduced.

Physical aggression

Defined as forceful, aggressive movement involving intended physical contact between two or more people. Detection is based solely on visible physical behavior, not speech or identity.

Accuracy targets

Models are tuned to reliably catch real incidents without overwhelming staff, achieving at least an 80% detection rate and roughly one false alert every few days per camera.
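
These two targets can be checked with simple counting. A sketch, using invented numbers for a hypothetical 10-camera site over one week:

```python
def detection_rate(caught: int, missed: int) -> float:
    """Fraction of real incidents the model caught (recall)."""
    return caught / (caught + missed)

def false_alerts_per_camera_day(false_alerts: int, cameras: int, days: int) -> float:
    """Average number of false alerts each camera generates per day."""
    return false_alerts / (cameras * days)

# Hypothetical week of data: 17 of 20 real incidents caught,
# 21 false alerts across 10 cameras in 7 days.
rate = detection_rate(caught=17, missed=3)                          # 0.85
noise = false_alerts_per_camera_day(false_alerts=21, cameras=10, days=7)  # 0.3

assert rate >= 0.80   # target: at least 80% of incidents detected
assert noise <= 0.5   # target: roughly one false alert every few days
```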

Fairness and bias checks

Fairness is evaluated throughout training and deployment using ongoing statistical bias testing. Performance is measured with equalized odds to keep detection rates consistent across protected groups, and each production model is automatically verified to meet these fairness standards before going live.
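
Equalized odds requires that both the true-positive rate (detection rate on real incidents) and the false-positive rate (false alerts on non-incidents) be near-equal across groups. A minimal sketch of such a check, with invented counts and an assumed 5-point tolerance:

```python
def rates(tp: int, fn: int, fp: int, tn: int) -> tuple[float, float]:
    """True-positive and false-positive rates from a confusion matrix."""
    tpr = tp / (tp + fn)  # detection rate among real incidents
    fpr = fp / (fp + tn)  # false-alert rate among non-incidents
    return tpr, fpr

# Hypothetical per-group evaluation counts.
group_a = rates(tp=42, fn=8, fp=5, tn=945)   # TPR 0.84, FPR ~0.005
group_b = rates(tp=40, fn=10, fp=6, tn=944)  # TPR 0.80, FPR ~0.006

TOLERANCE = 0.05  # assumed maximum allowed gap between groups
tpr_gap = abs(group_a[0] - group_b[0])
fpr_gap = abs(group_a[1] - group_b[1])

# A pre-deployment gate like this would block a model whose detection
# or false-alert rates diverged across groups by more than the tolerance.
passes_equalized_odds = tpr_gap <= TOLERANCE and fpr_gap <= TOLERANCE
```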