Though graphics processors were originally meant for gamers, any computer science nerd knows that they are extremely valuable in other domains as well. Now that crypto-miners are starting to move to ASICs, GPU prices and stock are stabilising again. The deep learning community can take a breath, relax, buy a new batch of GPUs, and run some training sessions. NVIDIA knows that gamers are no longer the sole target demographic for its products.
This blogpost offers a quick and approachable look at one of the technical deep learning problems faced at Oddity.ai and outlines how we go about solving such problems. In any deep learning application, the amount of data is an important factor for success. Gathering this data is not always an easy task. We are looking at alternative methods for increasing the size of our datasets, such as data synthesis.
It has been six months since the start of our first pilot in Stratumseind, Eindhoven. Hence, it is time to write about our expectations, experiences, results, and outlook. We performed this pilot in collaboration with Axis Communications and the Dutch Institute of Technology Safety & Security (DITSS), under the supervision of Tinus Kanters from the municipality of Eindhoven. This blogpost briefly discusses the results from this pilot and lays out our plan for the future.
The path we are taking as a society towards the successful application of artificial intelligence in video surveillance crosses many junctions, and a wrong choice along the way can lead to a very undesirable outcome. The promise of AI is immense, but the risks are large too. It is of the utmost importance that we remain aware of this, keep thinking critically, and enable an open and inclusive dialogue.
When talking about Oddity’s violence recognition system, we are often asked what the accuracy of our algorithm is. This seems like an easy enough question, but while answering it, we quickly run into trouble. To explain why, we need to look into a concept known within statistics as the Base Rate Fallacy. In general, the Base Rate Fallacy concerns a psychological effect that clouds people’s judgement when presented with certain statistics.
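To get a feel for why a single "accuracy" number is misleading, consider a worked example in Python. The figures below are purely hypothetical, chosen to illustrate the fallacy; they are not Oddity's real performance numbers.

```python
def precision(base_rate: float, sensitivity: float, specificity: float) -> float:
    """P(event | alert), computed via Bayes' rule.

    base_rate:   fraction of footage that truly contains the event
    sensitivity: P(alert | event)      (true positive rate)
    specificity: P(no alert | no event) (true negative rate)
    """
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# A detector that is "99% accurate" in both directions (99% sensitivity,
# 99% specificity), applied to footage where only 0.1% of clips actually
# contain violence:
p = precision(base_rate=0.001, sensitivity=0.99, specificity=0.99)
print(f"P(violence | alert) = {p:.1%}")  # roughly 9% — most alerts are false
```

Because violent events are so rare relative to normal footage, even a detector that is right 99% of the time on each clip produces far more false alarms than true detections. This is exactly the intuition the Base Rate Fallacy clouds.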
In my previous article I explained the importance of finding the problem-solution fit, and showed some parts of our go-to-market (GTM) approach. A multiple-case study with 12 founders of software startups located in the Netherlands taught us that some aspects of the GTM approach are of significant value. We explain these industry lessons in this chapter, for which we coined the term The Startup Toolkit. The toolkit explains the six most important elements a startup should take into account to achieve the problem-solution fit and product-market fit.
Software startups around the world are struggling to survive. Usually, within two years of a startup's creation, it is not competition but rather self-destruction that drives the majority of startups into failure. My previous post presented a go-to-market approach based on The Lean Startup, Design Thinking, The Lean Product Playbook, and The Startup Owner’s Manual, to provide structured guidance to startups. These strategies are all user-driven innovation strategies: they involve potential users, customers, or other stakeholders in the development process, thus maintaining a user-centred approach.