STALP: Style Transfer With Auxiliary Limited Pairing
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F21%3A00349941" target="_blank" >RIV/68407700:21230/21:00349941 - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.1111/cgf.142655" target="_blank" >https://doi.org/10.1111/cgf.142655</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1111/cgf.142655" target="_blank" >10.1111/cgf.142655</a>
Alternative languages
Result language
English
Title in original language
STALP: Style Transfer With Auxiliary Limited Pairing
Result description in original language
We present an approach to example-based stylization of images that uses a single pair of a source image and its stylized counterpart. We demonstrate how to train an image translation network that can perform real-time, semantically meaningful style transfer to a set of target images whose content is similar to that of the source image. A key added value of our approach is that it also considers the consistency of the target images during training. Although these have no stylized counterparts, we constrain the translation to keep the statistics of neural responses compatible with those extracted from the stylized source. In contrast to concurrent techniques that use similar input, our approach better preserves important visual characteristics of the source style and can deliver temporally stable results without the need to handle temporal consistency explicitly. We demonstrate its practical utility in various applications, including video stylization and style transfer to panoramas, faces, and 3D models.
Title in English
STALP: Style Transfer With Auxiliary Limited Pairing
Result description in English
We present an approach to example-based stylization of images that uses a single pair of a source image and its stylized counterpart. We demonstrate how to train an image translation network that can perform real-time, semantically meaningful style transfer to a set of target images whose content is similar to that of the source image. A key added value of our approach is that it also considers the consistency of the target images during training. Although these have no stylized counterparts, we constrain the translation to keep the statistics of neural responses compatible with those extracted from the stylized source. In contrast to concurrent techniques that use similar input, our approach better preserves important visual characteristics of the source style and can deliver temporally stable results without the need to handle temporal consistency explicitly. We demonstrate its practical utility in various applications, including video stylization and style transfer to panoramas, faces, and 3D models.
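The abstract's constraint of "keeping the statistics of neural responses compatible" with the stylized source is, in neural style transfer generally, most often expressed as matching Gram matrices of feature activations (in the spirit of Gatys et al.). The paper itself defines the exact loss; the sketch below is only an illustrative NumPy stand-in, with hypothetical function names, showing what such a Gram-statistics match looks like:

```python
import numpy as np

def gram_matrix(features):
    # features: (C, H, W) array, e.g. VGG activations of one image.
    # Returns the (C, C) matrix of channel co-activations, normalized
    # by the number of spatial positions H * W.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_statistics_loss(target_features, stylized_source_features):
    # Mean squared difference between the Gram statistics of a target
    # image's features and those of the stylized source exemplar.
    # (Hypothetical helper; not the paper's exact formulation.)
    g_target = gram_matrix(target_features)
    g_source = gram_matrix(stylized_source_features)
    return float(np.mean((g_target - g_source) ** 2))
```

During training of the translation network, a term like `style_statistics_loss` would be evaluated on features of the stylized target images, pulling their response statistics toward those of the single stylized source even though no paired ground truth exists for them.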
Classification
Type
J<sub>imp</sub> - Article in a periodical indexed in the Web of Science database
CEP discipline
—
OECD FORD discipline
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
<a href="/cs/project/EF16_019%2F0000765" target="_blank" >EF16_019/0000765: Výzkumné centrum informatiky</a><br>
Linkages
P - Research and development project financed from public funds (with a link to CEP)<br>S - Specific research at universities
Others
Year of implementation
2021
Data confidentiality code
S - Complete and accurate data on the project are not subject to protection under special legal regulations
Data specific to the result type
Periodical name
Computer Graphics Forum
ISSN
0167-7055
e-ISSN
1467-8659
Periodical volume
40
Issue of the periodical within the volume
2
Country of the periodical's publisher
CH - Swiss Confederation
Number of pages of the result
11
Pages from-to
563-573
UT code of the article in WoS
000657959600046
Result EID in the Scopus database
2-s2.0-85107365098