Analogical inference from distributional structure: What recurrent neural networks can tell us about word learning
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F23%3AF3ZB8EHP" target="_blank" >RIV/00216208:11320/23:F3ZB8EHP - isvavai.cz</a>
Result on the web
<a href="https://www.sciencedirect.com/science/article/pii/S2666827023000312" target="_blank" >https://www.sciencedirect.com/science/article/pii/S2666827023000312</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1016/j.mlwa.2023.100478" target="_blank" >10.1016/j.mlwa.2023.100478</a>
Alternative languages
Result language
English
Original language name
Analogical inference from distributional structure: What recurrent neural networks can tell us about word learning
Original language description
"One proposal that can explain the remarkable pace of word learning in young children is that they leverage the language-internal distributional similarity of familiar and novel words to make analogical inferences about possible meanings of novel words"
Czech name
—
Czech description
—
Classification
Type
J<sub>ost</sub> - Miscellaneous article in a specialist periodical
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
—
Others
Publication year
2023
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
"Machine Learning with Applications"
ISSN
2666-8270
e-ISSN
—
Volume of the periodical
13
Issue of the periodical within the volume
2023-2-28
Country of publishing house
US - UNITED STATES
Number of pages
34
Pages from-to
1-34
UT code for WoS article
—
EID of the result in the Scopus database
—