Contrastive Learning for Fine-grained Visual Recognition
Result identifiers
Result code in IS VaVaI
RIV/49777513:23520/23:43969940 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F49777513%3A23520%2F23%3A43969940)
Result on the web
http://svk.fav.zcu.cz/download/proceedings_svk_2023.pdf
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in the original language
Contrastive Learning for Fine-grained Visual Recognition
Description in the original language
Contrastive learning is a type of representation learning that learns a representation by comparing input samples such as images, video, text, or sound. A good representation can benefit the interpretability of Deep Neural Networks (DNNs) as well as downstream tasks such as open-set recognition. Contrastive learning compares positive pairs of similar inputs and negative pairs of dissimilar inputs. The key component is the contrastive loss, which measures the similarity between feature vectors and enforces maximization of the similarity of positive pairs and minimization of the similarity of negative pairs. Modern contrastive learning methods are often applied in self-supervised settings, while discriminative cross-entropy learning is widely used in supervised settings. In this work, we employ supervised contrastive learning to fine-tune DNNs for fine-grained recognition.
Title in English
Contrastive Learning for Fine-grained Visual Recognition
Description in English
Contrastive learning is a type of representation learning that learns a representation by comparing input samples such as images, video, text, or sound. A good representation can benefit the interpretability of Deep Neural Networks (DNNs) as well as downstream tasks such as open-set recognition. Contrastive learning compares positive pairs of similar inputs and negative pairs of dissimilar inputs. The key component is the contrastive loss, which measures the similarity between feature vectors and enforces maximization of the similarity of positive pairs and minimization of the similarity of negative pairs. Modern contrastive learning methods are often applied in self-supervised settings, while discriminative cross-entropy learning is widely used in supervised settings. In this work, we employ supervised contrastive learning to fine-tune DNNs for fine-grained recognition.
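The description above characterizes the contrastive loss only in words. Below is a minimal sketch, assuming a PyTorch setting and a SupCon-style supervised formulation, of how such a loss can be computed over a labelled batch; the function name, temperature value, and batch shapes are illustrative assumptions and do not reproduce the authors' implementation.

import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    # features: (N, D) embeddings, labels: (N,) integer class ids.
    # Illustrative sketch; not the authors' code.
    features = F.normalize(features, dim=1)           # unit norm -> dot product = cosine similarity
    logits = features @ features.T / temperature      # pairwise similarities, temperature-scaled
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    logits = logits.masked_fill(self_mask, float("-inf"))            # exclude self-similarity
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask  # same-class pairs are positives
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-probability over each anchor's positives; negative pairs
    # enter through the normalizing logsumexp term, so similarity of positives
    # is pushed up while similarity of negatives is pushed down.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    mean_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return -mean_log_prob_pos.mean()

# Example usage: a batch of 8 embeddings from 4 classes.
feats = torch.randn(8, 128)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(supervised_contrastive_loss(feats, labels))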
Classification
Type
O - Other results
CEP field
—
OECD FORD field
20205 - Automation and control systems
Result continuities
Project
—
Continuities
S - Specific research at universities
Others
Year of implementation
2023
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations