
Bayesian YOLOv3: Uncertainty Estimation in One-Stage Object Detection

November 2019

tl;dr: Extension of the work in Towards Safe AD to a one-stage detector.

Overall impression

This paper is very similar to Gaussian YOLOv3 and additionally models epistemic uncertainty.

Predicting uncertainty is critical for downstream pipelines such as tracking or sensor fusion. Aleatoric uncertainty captures sensor noise or ambiguities inherent in the problem itself. Epistemic uncertainty is a measure of the model's lack of knowledge, e.g., for classes underrepresented in the dataset. Building a model ensemble is one way to estimate epistemic uncertainty.
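As a concrete illustration of the ensemble idea, here is a minimal PyTorch sketch (not from the paper): the disagreement (variance) between independently trained models serves as the epistemic uncertainty estimate. The stand-in regression heads and tensor shapes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def ensemble_uncertainty(models, x):
    """Epistemic uncertainty from an ensemble: feed the same input through
    independently trained models and use the disagreement (variance) across
    their predictions as the uncertainty estimate."""
    with torch.no_grad():
        preds = torch.stack([m(x) for m in models])  # (n_models, batch, outputs)
    return preds.mean(dim=0), preds.var(dim=0)

# toy usage with stand-in regression heads (each would be trained independently)
models = [nn.Linear(128, 4) for _ in range(5)]
mean, epistemic_var = ensemble_uncertainty(models, torch.randn(16, 128))
```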

They also observed that aleatoric scores correlate well with occlusion, but epistemic scores do not.

However, most of these methods have no measure of how certain they are in their output. When confronted with previously unseen data, there is usually no way to tell whether the model can handle the input, for example when a model trained on good-weather data is faced with adverse weather.

From the standpoint of active learning, labeling only the data with the most information gain could help cut annotation time and cost.

The paper also observes that modeling aleatoric uncertainty boosts performance (by 1-5%), while modeling epistemic uncertainty via Monte Carlo dropout degrades performance slightly --> similar to Towards Safe AD.
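For reference, a minimal PyTorch sketch (not the paper's implementation) of the two mechanisms being compared: an attenuated Gaussian NLL loss where the network predicts a per-coordinate log-variance (aleatoric), and Monte Carlo dropout sampling at test time (epistemic). The stand-in head, layer sizes, and clamping range are illustrative assumptions.

```python
import torch
import torch.nn as nn

def attenuated_nll(pred_mean, pred_log_var, target):
    """Aleatoric uncertainty: the network predicts a mean and log-variance per
    box coordinate; high predicted variance down-weights the squared error
    but is penalized by the log term (Kendall & Gal style loss)."""
    pred_log_var = pred_log_var.clamp(-10, 10)      # numerical stability (illustrative)
    precision = torch.exp(-pred_log_var)
    return (0.5 * precision * (target - pred_mean) ** 2 + 0.5 * pred_log_var).mean()

def mc_dropout_predict(model, x, n_samples=20):
    """Epistemic uncertainty: keep dropout active at test time, run several
    stochastic forward passes, and use the variance across them."""
    model.eval()
    for m in model.modules():                       # re-enable only the dropout layers
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

# toy usage with a stand-in head that outputs 4 means and 4 log-variances
head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 8))
x, target = torch.randn(16, 128), torch.randn(16, 4)
pred_mean, pred_log_var = head(x).chunk(2, dim=-1)
loss = attenuated_nll(pred_mean, pred_log_var, target)
mean, epistemic_var = mc_dropout_predict(head, x)
```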

Key ideas

Technical details

Notes