Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at Reliable OOD Detection
| Authors | |
|---|---|
| Publication date | 2021 |
| Journal | Proceedings of Machine Learning Research |
| Event | 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021 |
| Volume | 161 |
| Pages (from-to) | 1766-1776 |
| Number of pages | 11 |
| Organisations | |
| Abstract | A crucial requirement for reliable deployment of deep learning models for safety-critical applications is the ability to identify out-of-distribution (OOD) data points, samples which differ from the training data and on which a model might underperform. Previous work has attempted to tackle this problem using uncertainty estimation techniques. However, there is empirical evidence that a large family of these techniques do not detect OOD reliably in classification tasks. This paper gives a theoretical explanation for said experimental findings and illustrates it on synthetic data. We prove that such techniques are not able to reliably identify OOD samples in a classification setting, since their level of confidence is generalized to unseen areas of the feature space. This result stems from the interplay between the representation of ReLU networks as piece-wise affine transformations, the saturating nature of activation functions like softmax, and the most widely-used uncertainty metrics. |
| Document type | Article |
| Note | Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, 27-30 July 2021, Online. - With supplementary file. |
| Language | English |
| Published at | https://doi.org/10.48550/arXiv.2012.05329 |
| Published at | https://proceedings.mlr.press/v161/ulmer21a.html |
| Other links | https://www.scopus.com/pages/publications/85163330329 |
| Downloads | 2012.05329v4 (Accepted author manuscript); ulmer21a-1 (Final published version) |
| Supplementary materials | |
| Permalink to this page | |
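The mechanism described in the abstract, that a ReLU network is piece-wise affine, so its logits grow linearly along rays away from the data, and the saturating softmax then pushes the maximum class probability toward 1, can be sketched with a toy classifier. The weights below are hand-picked for illustration and do not come from the paper:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax (the saturating activation from the abstract).
    e = np.exp(z - z.max())
    return e / e.sum()

# Hand-picked toy weights (illustrative only, not from the paper):
# a one-hidden-layer ReLU classifier with two inputs and three classes.
W1 = np.array([[1.0, 0.0], [0.0, 1.0]])                # hidden layer
b1 = np.zeros(2)
W2 = np.array([[2.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])  # output layer
b2 = np.zeros(3)

def max_confidence(x):
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU: piece-wise affine in x
    return softmax(W2 @ h + b2).max()  # max softmax probability

# Along the ray t * v the logits are affine in t (here (2t, t, -2t)),
# so the maximum softmax score tends to 1 as t grows: the network is
# highly confident on points arbitrarily far from any training data.
v = np.array([1.0, 1.0])
for t in (1.0, 10.0, 100.0):
    print(f"t = {t:6.1f}  max softmax = {max_confidence(t * v):.6f}")
```

Any uncertainty metric derived from the softmax output (maximum probability, predictive entropy) consequently reports low uncertainty in these unseen regions, which is the failure mode the paper proves.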
