Estimating 3D Motion and Forces of Person-Object Interactions from Monocular Video
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21730%2F19%3A00337183" target="_blank" >RIV/68407700:21730/19:00337183 - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.1109/CVPR.2019.00884" target="_blank" >https://doi.org/10.1109/CVPR.2019.00884</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/CVPR.2019.00884" target="_blank" >10.1109/CVPR.2019.00884</a>
Alternative languages
Result language
English
Title in original language
Estimating 3D Motion and Forces of Person-Object Interactions from Monocular Video
Description in original language
In this paper, we introduce a method to automatically reconstruct the 3D motion of a person interacting with an object from a single RGB video. Our method estimates the 3D poses of the person and the object, contact positions, and forces and torques actuated by the human limbs. The main contributions of this work are three-fold. First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modelling contacts and the dynamics of their interactions. This is cast as a large-scale trajectory optimization problem. Second, we develop a method to automatically recognize from the input video the position and timing of contacts between the person and the object or the ground, thereby significantly simplifying the complexity of the optimization. Third, we validate our approach on a recent MoCap dataset with ground truth contact forces and demonstrate its performance on a new dataset of Internet videos showing people manipulating a variety of tools in unconstrained environments.
Title in English
Estimating 3D Motion and Forces of Person-Object Interactions from Monocular Video
Description in English
In this paper, we introduce a method to automatically reconstruct the 3D motion of a person interacting with an object from a single RGB video. Our method estimates the 3D poses of the person and the object, contact positions, and forces and torques actuated by the human limbs. The main contributions of this work are three-fold. First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modelling contacts and the dynamics of their interactions. This is cast as a large-scale trajectory optimization problem. Second, we develop a method to automatically recognize from the input video the position and timing of contacts between the person and the object or the ground, thereby significantly simplifying the complexity of the optimization. Third, we validate our approach on a recent MoCap dataset with ground truth contact forces and demonstrate its performance on a new dataset of Internet videos showing people manipulating a variety of tools in unconstrained environments.
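The abstract casts the joint estimation of motion and actuation forces as a large-scale trajectory optimization problem with dynamics constraints. The following is a minimal, hypothetical 1D sketch of that general idea (not the paper's implementation): noisy object positions and the applied force are estimated jointly by solving a penalized least-squares problem in which Newton's law with gravity couples the trajectory and force variables.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's code): jointly estimate a
# 1D trajectory x[0..T-1] and applied force f[0..T-1] from noisy position
# observations. The finite-difference dynamics
#     m * (x[t+1] - 2*x[t] + x[t-1]) / dt**2 = f[t] - m*g
# enter as a soft penalty, miniaturizing the idea of coupling motion and
# actuation forces inside one optimization problem.
def estimate_motion_and_force(obs, m=1.0, g=9.81, dt=0.1,
                              w_dyn=100.0, w_reg=1e-3):
    T = len(obs)
    n = 2 * T                      # unknowns: T positions, then T forces
    rows, rhs = [], []

    # Data term: estimated positions should match the observations.
    for t in range(T):
        r = np.zeros(n)
        r[t] = 1.0
        rows.append(r)
        rhs.append(obs[t])

    # Dynamics term: m*a[t] - f[t] = -m*g at interior time steps.
    for t in range(1, T - 1):
        r = np.zeros(n)
        r[t - 1] += w_dyn * m / dt**2
        r[t]     -= w_dyn * 2.0 * m / dt**2
        r[t + 1] += w_dyn * m / dt**2
        r[T + t] -= w_dyn
        rows.append(r)
        rhs.append(-w_dyn * m * g)

    # Small regularizer keeping the force variables bounded.
    for t in range(T):
        r = np.zeros(n)
        r[T + t] = w_reg
        rows.append(r)
        rhs.append(0.0)

    A = np.vstack(rows)
    b = np.asarray(rhs)
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    return sol[:T], sol[T:]        # estimated positions, estimated forces

# A stationary object requires a supporting force of roughly m*g.
obs = np.zeros(20)
x_est, f_est = estimate_motion_and_force(obs)
```

In the full method the unknowns also include human joint torques and contact positions, and the contact recognizer from the second contribution fixes which dynamics constraints are active at each frame, which is what keeps the optimization tractable.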
Classification
Type
D - Article in conference proceedings
CEP classification
—
OECD FORD classification
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
<a href="/cs/project/EF15_003%2F0000468" target="_blank" >EF15_003/0000468: Intelligent Machine Perception</a><br>
Continuities
P - Research and development project financed from public sources (with a link to CEP)
Others
Year of implementation
2019
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
CVPR 2019: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition
ISBN
978-1-7281-3294-5
ISSN
1063-6919
e-ISSN
2575-7075
Number of pages
10
Pages from-to
8632-8641
Publisher name
IEEE
Place of publication
—
Event location
Long Beach
Event date
15. 6. 2019
Event type by nationality
WRD - Worldwide event
UT WoS article code
000542649302026