ViewFormer: NeRF-Free Neural Rendering from Few Images Using Transformers
The result's identifiers
Result code in IS VaVaI
RIV/68407700:21230/22:00361838 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F22%3A00361838)
Alternative codes found
RIV/68407700:21730/22:00361838
Result on the web
https://doi.org/10.1007/978-3-031-19784-0_12
DOI - Digital Object Identifier
10.1007/978-3-031-19784-0_12
Alternative languages
Result language
English
Original language name
ViewFormer: NeRF-Free Neural Rendering from Few Images Using Transformers
Original language description
Novel view synthesis is a long-standing problem. In this work, we consider a variant of the problem where we are given only a few context views sparsely covering a scene or an object. The goal is to predict novel viewpoints in the scene, which requires learning priors. The current state of the art is based on Neural Radiance Fields (NeRF), and while achieving impressive results, these methods suffer from long training times as they require evaluating millions of 3D point samples via a neural network for each image. We propose a 2D-only method that maps multiple context views and a query pose to a new image in a single pass of a neural network. Our model uses a two-stage architecture consisting of a codebook and a transformer model. The codebook is used to embed individual images into a smaller latent space, and the transformer solves the view synthesis task in this more compact space. To train our model efficiently, we introduce a novel branching attention mechanism that allows us to use the same model not only for neural rendering but also for camera pose estimation. Experimental results on real-world scenes show that our approach is competitive with NeRF-based methods while not reasoning explicitly in 3D, and it is faster to train.
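To illustrate the two-stage design described above, the sketch below shows a minimal, assumption-laden PyTorch pipeline: a vector-quantized codebook encodes each view into discrete latent tokens, and a transformer maps the context-view tokens plus a query camera pose to the tokens of the novel view, which are then decoded back to an image. All module names, layer sizes, the 64x64 resolution, and the flattened 3x4 pose format are illustrative placeholders, not the published architecture; the paper's branching attention mechanism and camera-pose-estimation head are omitted here.

```python
# Minimal sketch of a ViewFormer-style two-stage pipeline (not the authors' code).
# Assumptions: VQCodebook, LatentTransformer, and all sizes below are illustrative
# stand-ins chosen for brevity, not the published hyper-parameters.
import torch
import torch.nn as nn


class VQCodebook(nn.Module):
    """Stage 1: embed an image into a small grid of discrete latent codes."""

    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                 # 64x64 RGB -> 8x8 feature grid
            nn.Conv2d(3, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, stride=2, padding=1),
        )
        self.codes = nn.Embedding(num_codes, dim)     # shared codebook
        self.decoder = nn.Sequential(                 # 8x8 code grid -> 64x64 RGB
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, 3, 4, stride=2, padding=1),
        )

    def encode(self, images):                         # (B, 3, 64, 64) -> (B, 64) token ids
        feats = self.encoder(images)                  # (B, dim, 8, 8)
        flat = feats.flatten(2).transpose(1, 2)       # (B, 64, dim)
        dists = torch.cdist(flat, self.codes.weight[None])  # nearest codebook entry per cell
        return dists.argmin(-1)

    def decode(self, token_ids):                      # (B, 64) -> (B, 3, 64, 64)
        feats = self.codes(token_ids)                 # (B, 64, dim)
        grid = feats.transpose(1, 2).reshape(-1, feats.shape[-1], 8, 8)
        return self.decoder(grid)


class LatentTransformer(nn.Module):
    """Stage 2: map context-view tokens plus a query pose to novel-view tokens."""

    def __init__(self, num_codes=512, dim=256, tokens_per_view=64):
        super().__init__()
        self.token_emb = nn.Embedding(num_codes, dim)
        self.pose_emb = nn.Linear(12, dim)            # flattened 3x4 camera pose (assumed format)
        self.pos_emb = nn.Parameter(torch.zeros(1, 1024, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.to_codes = nn.Linear(dim, num_codes)     # logits over codebook entries
        self.tokens_per_view = tokens_per_view

    def forward(self, context_tokens, query_pose):
        # context_tokens: (B, V, 64) token ids of V context views
        # query_pose:     (B, 12)    target camera as a flattened 3x4 matrix
        b, v, t = context_tokens.shape
        x = self.token_emb(context_tokens).reshape(b, v * t, -1)
        q = self.pose_emb(query_pose).unsqueeze(1).expand(-1, t, -1)
        x = torch.cat([x, q], dim=1)                  # append query slots for the novel view
        x = x + self.pos_emb[:, : x.shape[1]]
        x = self.backbone(x)
        return self.to_codes(x[:, -t:])               # logits for the novel view's tokens


if __name__ == "__main__":
    codebook, transformer = VQCodebook(), LatentTransformer()
    context = torch.rand(1, 3, 3, 64, 64)             # 1 scene, 3 context views
    tokens = codebook.encode(context.flatten(0, 1)).reshape(1, 3, -1)
    logits = transformer(tokens, torch.rand(1, 12))   # single forward pass, 2D-only
    novel_view = codebook.decode(logits.argmax(-1))   # (1, 3, 64, 64) rendered image
    print(novel_view.shape)
```

Because the transformer operates on the compact codebook tokens rather than on millions of 3D point samples, a single forward pass of this kind produces the novel view, which is the source of the training-time advantage claimed in the abstract.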
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
The result was created during the realization of more than one project. More information is available in the Projects tab.
Continuities
P - Research and development project financed from public sources (with a link to CEP)
S - Specific research at universities
Others
Publication year
2022
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Computer Vision – ECCV 2022
ISBN
978-3-031-19784-0
ISSN
0302-9743
e-ISSN
1611-3349
Number of pages
19
Pages from-to
198-216
Publisher name
Springer
Place of publication
Cham
Event location
Tel Aviv
Event date
Oct 23, 2022
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
000904099300012