Learning Audio-Sheet Music Correspondences for Cross-Modal Retrieval and Piece Identification
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F18%3A10390078" target="_blank" >RIV/00216208:11320/18:10390078 - isvavai.cz</a>
Result on the web
<a href="https://transactions.ismir.net/articles/10.5334/tismir.12/#" target="_blank" >https://transactions.ismir.net/articles/10.5334/tismir.12/#</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.5334/tismir.12" target="_blank" >10.5334/tismir.12</a>
Alternative languages
Result language
English
Title in the original language
Learning Audio-Sheet Music Correspondences for Cross-Modal Retrieval and Piece Identification
Description in the original language
This work addresses the problem of matching musical audio directly to sheet music, without any higher-level abstract representation. We propose a method that learns joint embedding spaces for short excerpts of audio and their respective counterparts in sheet music images, using multimodal convolutional neural networks. Given the learned representations, we show how to utilize them for two sheet-music-related tasks: (1) piece/score identification from audio queries and (2) retrieving relevant performances given a score as a search query. All retrieval models are trained and evaluated on a new, large-scale multimodal audio-sheet music dataset, which is made publicly available along with this article. The dataset comprises 479 precisely annotated solo piano pieces by 53 composers, for a total of 1,129 pages of music and about 15 hours of aligned audio, which was synthesized from these scores. Going beyond this synthetic training data, we carry out first retrieval experiments using scans of real sheet music.
Title in English
Learning Audio-Sheet Music Correspondences for Cross-Modal Retrieval and Piece Identification
Description in English
This work addresses the problem of matching musical audio directly to sheet music, without any higher-level abstract representation. We propose a method that learns joint embedding spaces for short excerpts of audio and their respective counterparts in sheet music images, using multimodal convolutional neural networks. Given the learned representations, we show how to utilize them for two sheet-music-related tasks: (1) piece/score identification from audio queries and (2) retrieving relevant performances given a score as a search query. All retrieval models are trained and evaluated on a new, large-scale multimodal audio-sheet music dataset, which is made publicly available along with this article. The dataset comprises 479 precisely annotated solo piano pieces by 53 composers, for a total of 1,129 pages of music and about 15 hours of aligned audio, which was synthesized from these scores. Going beyond this synthetic training data, we carry out first retrieval experiments using scans of real sheet music.
Classification
Type
J<sub>ost</sub> - Other articles in peer-reviewed periodicals
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
<a href="/cs/project/GBP103%2F12%2FG084" target="_blank" >GBP103/12/G084: Center for multi-modal interpretation of large-scale data</a><br>
Linkages
P - R&D project financed from public funds (with a link to CEP)
Others
Year of publication
2018
Data confidentiality code
S - Complete and truthful data on the project are not subject to protection under special legal regulations
Data specific to the result type
Periodical name
Transactions of the International Society for Music Information Retrieval
ISSN
2514-3298
e-ISSN
—
Periodical volume
1
Issue number within the volume
1
Country of the periodical's publisher
CA - Canada
Number of pages
12
Pages from-to
22-33
Article UT code in Web of Science
—
Result EID in the Scopus database
—