Speaker: Oliver Zendel (AIT)
Test data plays an important role in computer vision (CV) but raises two persistent questions: Which situations should the test data cover, and have we tested enough to reach a conclusion? This presentation proposes a new way to answer these questions using a standard procedure devised by the safety community to validate complex systems: the Hazard and Operability Analysis (HAZOP). It is designed to systematically search for and identify difficult, performance-decreasing situations and aspects. The talk will illustrate how we created a generic CV model as a basis for the hazard analysis and then applied an extensive HAZOP to the CV domain. The result is a publicly available checklist with more than 900 individually identified hazards. This checklist can be used to evaluate existing test datasets by quantifying the number of hazards they cover. We evaluate our approach by first annotating popular stereo vision test datasets (Middlebury, KITTI) and then comparing the performance of six popular stereo matching algorithms on the hazards identified from our checklist against their average performance. We can show a clear negative influence of the hazards. The presented approach is a useful tool for evaluating and improving test datasets and creates a common basis for future dataset designs.
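The two evaluations described above can be sketched in a few lines: quantifying what fraction of the checklist a dataset's annotations cover, and comparing algorithm error inside hazard regions against the frame-wide average. This is a minimal, hypothetical illustration; the hazard IDs, masks, and error values are invented for the example and are not drawn from the actual CV-HAZOP checklist data.

```python
# Hypothetical sketch of checklist coverage and per-hazard performance
# comparison; all identifiers and numbers below are illustrative.
import numpy as np

def hazard_coverage(annotated_hazards, checklist_size=900):
    """Fraction of checklist hazards covered by a dataset's annotations."""
    return len(set(annotated_hazards)) / checklist_size

def compare_at_hazards(error_map, hazard_mask):
    """Mean matching error inside hazard regions vs. over the whole frame."""
    return error_map[hazard_mask].mean(), error_map.mean()

# Toy example: a 4x4 disparity-error map with one annotated hazard patch.
errors = np.array([[0.2, 0.2, 0.2, 0.2],
                   [0.2, 1.5, 1.5, 0.2],
                   [0.2, 1.5, 1.5, 0.2],
                   [0.2, 0.2, 0.2, 0.2]])
mask = errors > 1.0  # pretend the center patch is a marked hazard region

coverage = hazard_coverage({"H042", "H117", "H583"})  # 3 of 900 covered
at_hazard, overall = compare_at_hazards(errors, mask)
print(coverage)
print(at_hazard > overall)  # hazards show higher error
```

In the real evaluation the masks come from manual hazard annotations of Middlebury and KITTI frames rather than from thresholding the error map itself; the thresholding here only serves to build a plausible toy mask.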