Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis
Result identifiers
Result code in IS VaVaI
RIV/68407700:21230/24:00371779 - https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F24%3A00371779
Result on the web
https://doi.org/10.1109/TPAMI.2023.3335311
DOI - Digital Object Identifier
10.1109/TPAMI.2023.3335311 - http://dx.doi.org/10.1109/TPAMI.2023.3335311
Alternative languages
Result language
English
Title in the original language
Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis
Description in the original language
We present CG-NeRF, a cascaded and generalizable neural radiance field method for view synthesis. Recent generalizable view synthesis methods can render high-quality novel views using a set of nearby input views. However, the rendering speed is still slow due to the uniform point sampling inherent to neural radiance fields. Existing scene-specific methods can train and render novel views efficiently but cannot generalize to unseen data. Our approach addresses the problem of fast, generalizable view synthesis by proposing two novel modules: a coarse radiance field predictor and a convolution-based neural renderer. This architecture infers consistent scene geometry from the implicit neural fields and renders new views efficiently using a single GPU. We first train CG-NeRF on multiple 3D scenes of the DTU dataset, and the network can produce high-quality and accurate novel views on unseen real and synthetic data using only photometric losses. Moreover, our method can leverage a denser set of reference images of a single scene to produce accurate novel views without relying on additional explicit representations, while still maintaining the high-speed rendering of the pre-trained model. Experimental results show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
Title in English
Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis
Description in English
We present CG-NeRF, a cascaded and generalizable neural radiance field method for view synthesis. Recent generalizable view synthesis methods can render high-quality novel views using a set of nearby input views. However, the rendering speed is still slow due to the uniform point sampling inherent to neural radiance fields. Existing scene-specific methods can train and render novel views efficiently but cannot generalize to unseen data. Our approach addresses the problem of fast, generalizable view synthesis by proposing two novel modules: a coarse radiance field predictor and a convolution-based neural renderer. This architecture infers consistent scene geometry from the implicit neural fields and renders new views efficiently using a single GPU. We first train CG-NeRF on multiple 3D scenes of the DTU dataset, and the network can produce high-quality and accurate novel views on unseen real and synthetic data using only photometric losses. Moreover, our method can leverage a denser set of reference images of a single scene to produce accurate novel views without relying on additional explicit representations, while still maintaining the high-speed rendering of the pre-trained model. Experimental results show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
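To make the two-stage design in the description concrete, below is a minimal PyTorch-style sketch of such a cascaded pipeline: a coarse radiance-field predictor regresses density and features for points sampled along each ray, volume rendering composites them into a low-resolution feature map, and a convolutional neural renderer refines that map into the final image. All names, shapes, and hyperparameters here (CoarseRadianceFieldPredictor, ConvNeuralRenderer, the 2x upsampling, feature dimensions) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a coarse-radiance-field + convolutional-renderer cascade.
# All module names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class CoarseRadianceFieldPredictor(nn.Module):
    """Predicts per-sample density and a feature vector from aggregated
    source-view features (an assumed stand-in for the coarse module)."""
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)        # volume density
        self.feat_head = nn.Linear(hidden, feat_dim)  # radiance feature

    def forward(self, sampled_feats):
        # sampled_feats: (num_rays, num_samples, feat_dim)
        h = self.mlp(sampled_feats)
        sigma = torch.relu(self.sigma_head(h))
        feat = self.feat_head(h)
        return sigma, feat


def volume_render(sigma, feat, deltas):
    """Standard alpha compositing of features along each ray."""
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)            # (R, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans                                          # (R, S)
    return (weights.unsqueeze(-1) * feat).sum(dim=1)                 # (R, feat_dim)


class ConvNeuralRenderer(nn.Module):
    """Refines the low-resolution rendered feature map into an RGB image."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feat_map):
        return self.net(feat_map)


if __name__ == "__main__":
    R, S, F, H, W = 32 * 32, 48, 32, 32, 32   # rays, samples/ray, feat dim, coarse H/W
    predictor, renderer = CoarseRadianceFieldPredictor(F), ConvNeuralRenderer(F)
    sampled_feats = torch.randn(R, S, F)      # features aggregated from nearby views
    deltas = torch.full((R, S), 0.01)         # distances between consecutive samples
    sigma, feat = predictor(sampled_feats)
    ray_feats = volume_render(sigma, feat, deltas)       # (R, F)
    feat_map = ray_feats.t().reshape(1, F, H, W)         # coarse feature image
    rgb = renderer(feat_map)                             # (1, 3, 2H, 2W)
    print(rgb.shape)
```

The intended point of such a design, as reflected in this sketch, is that the expensive per-sample MLP only runs at coarse ray resolution, while the final high-resolution view is produced by a cheap convolutional pass, which is what enables fast rendering without retraining per scene.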
Classification
Type
Jimp - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
I - Institutional support for the long-term conceptual development of a research organisation
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the periodical
IEEE Transactions on Pattern Analysis and Machine Intelligence
ISSN
0162-8828
e-ISSN
1939-3539
Volume of the periodical
46
Issue of the periodical within the volume
5
Country of the periodical publisher
US - United States of America
Number of pages of the result
12
Pages from-to
2758-2769
UT WoS code of the article
001196751500047
EID of the result in the Scopus database
2-s2.0-85178037115