Unsupervised Visual and Lexical Binding
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F12%3A00201479" target="_blank" >RIV/68407700:21230/12:00201479 - isvavai.cz</a>
Result on the web
<a href="http://cogsys2012.acin.tuwien.ac.at/doc/cogsys2012_proceedings/120Vavrecka_Michal.pdf" target="_blank" >http://cogsys2012.acin.tuwien.ac.at/doc/cogsys2012_proceedings/120Vavrecka_Michal.pdf</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Unsupervised Visual and Lexical Binding
Original language description
We present an unsupervised architecture capable of grounding the color, shape, and spatial relations of two objects in 2D space. Our model is a multimodal architecture that integrates information from visual inputs (retina projection) and auditory inputs (sentences describing the visual scene). The visual system imitates the "what" and "where" pathways and is able to bind location and shape (visual binding) and link them to the corresponding words in the sentence (lexical binding, or symbol grounding). We achieved better results using the "neural gas" algorithm (non-topographic organization) in the multimodal layer than with self-organizing maps (topographic organization). The results confirm theoretical assumptions about the different nature of visual and auditory coding.
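The description contrasts the neural gas algorithm with self-organizing maps in the multimodal layer. As an illustration only (not the paper's code, and with hyperparameters chosen arbitrarily), a minimal sketch of the neural gas update rule — where all units are ranked by distance to each input and moved toward it with a rank-decaying rate, imposing no fixed topography — might look like:

```python
import numpy as np

def neural_gas(data, n_units=10, epochs=20, eps=0.3, lam=2.0, seed=0):
    """Minimal neural gas sketch: rank-based prototype adaptation."""
    rng = np.random.default_rng(seed)
    # Initialize prototypes from randomly chosen data points
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Rank each unit by its distance to the input x (0 = closest)
            dists = np.linalg.norm(w - x, axis=1)
            ranks = np.argsort(np.argsort(dists))
            # Closer-ranked units move more; no grid neighborhood is used,
            # unlike a self-organizing map
            w += eps * np.exp(-ranks / lam)[:, None] * (x - w)
    return w

# Toy usage: prototypes adapt to two well-separated 2D blobs
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
protos = neural_gas(data, n_units=4)
```

The key contrast with a self-organizing map is that the neighborhood here is defined by distance rank in input space rather than by position on a fixed lattice, which is what the abstract means by "non-topographic organization".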
Czech name
—
Czech description
—
Classification
Type
O - Miscellaneous
CEP classification
AN - Psychology
OECD FORD branch
—
Result continuities
Project
<a href="/en/project/GPP407%2F11%2FP696" target="_blank" >GPP407/11/P696: Frames of Reference Processing in the 3D Environment and its EEG Analysis</a><br>
Continuities
P - Research and development project financed from public funds (with a link to CEP)
Others
Publication year
2012
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations