Calibrated Out-of-Distribution Detection with a Generic Representation
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F23%3A00371043" target="_blank" >RIV/68407700:21230/23:00371043 - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.1109/ICCVW60793.2023.00485" target="_blank" >https://doi.org/10.1109/ICCVW60793.2023.00485</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/ICCVW60793.2023.00485" target="_blank" >10.1109/ICCVW60793.2023.00485</a>
Alternative languages
Result language
English
Title in the original language
Calibrated Out-of-Distribution Detection with a Generic Representation
Description in the original language
Out-of-distribution (OOD) detection is a common issue when deploying vision models in practice, and solving it is an essential building block for safety-critical applications. Most existing OOD detection solutions focus on improving the OOD robustness of a classification model trained exclusively on in-distribution (ID) data. In this work, we take a different approach and propose to leverage a generic pre-trained representation. We propose a novel OOD method, called GROOD, that formulates OOD detection as a Neyman-Pearson task with well-calibrated scores and achieves excellent performance, predicated on the use of a good generic representation. Only a trivial training process is required to adapt GROOD to a particular problem. The method is simple, general, efficient, and calibrated, and it has only a few hyper-parameters. It achieves state-of-the-art performance on a number of OOD benchmarks, reaching near-perfect performance on several of them. The source code is available at https://github.com/vojirt/GROOD.
Title in English
Calibrated Out-of-Distribution Detection with a Generic Representation
Description in English
Out-of-distribution (OOD) detection is a common issue when deploying vision models in practice, and solving it is an essential building block for safety-critical applications. Most existing OOD detection solutions focus on improving the OOD robustness of a classification model trained exclusively on in-distribution (ID) data. In this work, we take a different approach and propose to leverage a generic pre-trained representation. We propose a novel OOD method, called GROOD, that formulates OOD detection as a Neyman-Pearson task with well-calibrated scores and achieves excellent performance, predicated on the use of a good generic representation. Only a trivial training process is required to adapt GROOD to a particular problem. The method is simple, general, efficient, and calibrated, and it has only a few hyper-parameters. It achieves state-of-the-art performance on a number of OOD benchmarks, reaching near-perfect performance on several of them. The source code is available at https://github.com/vojirt/GROOD.
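The sketch below is a rough illustration of the idea in the abstract, not the authors' GROOD implementation: an OOD score computed on top of a frozen, generic feature representation is thresholded in Neyman-Pearson style, i.e. the threshold is calibrated on held-out in-distribution data so that at most a chosen fraction of ID samples is flagged. The synthetic features, the prototype-distance score, and the alpha value are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch of Neyman-Pearson-style calibration of an OOD score
# built on a fixed (generic, pre-trained) representation. Not the GROOD method.

rng = np.random.default_rng(0)

# Stand-ins for features produced by a frozen generic backbone.
id_train = rng.normal(loc=0.0, scale=1.0, size=(2000, 128))  # in-distribution (ID)
id_val   = rng.normal(loc=0.0, scale=1.0, size=(500, 128))   # held-out ID
ood_test = rng.normal(loc=3.0, scale=1.0, size=(500, 128))   # shifted "unknown" data

# A simple OOD score: distance to the mean (prototype) of the ID training features.
prototype = id_train.mean(axis=0)

def score(x):
    return np.linalg.norm(x - prototype, axis=1)

# Neyman-Pearson-style calibration: choose the threshold so that at most
# alpha of held-out ID samples are (wrongly) flagged as OOD.
alpha = 0.05
threshold = np.quantile(score(id_val), 1.0 - alpha)

id_fpr  = np.mean(score(id_val) > threshold)    # ~alpha by construction
ood_det = np.mean(score(ood_test) > threshold)  # detection rate on the shifted data
print(f"ID false-positive rate: {id_fpr:.3f}, OOD detection rate: {ood_det:.3f}")
```

Calibrating the threshold on held-out ID data is what makes the score interpretable: the false-positive rate on ID inputs is controlled by design, regardless of which OOD data later appears.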
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
N - Research activity supported from non-public sources
Others
Year of implementation
2023
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
ISBN
979-8-3503-0744-3
ISSN
2473-9936
e-ISSN
2473-9944
Number of pages
10
Pages from-to
4509-4518
Publisher name
IEEE
Place of publication
Anchorage, Alaska
Event venue
Paris
Event date
2 October 2023
Event type by nationality
WRD - Worldwide event
Article UT WoS code
001156680304064