Deep Bayesian Semi-Supervised Active Learning for Sequence Labelling
The result's identifiers
Result code in IS VaVaI
RIV/67985807:_____/19:00509321 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F67985807%3A_____%2F19%3A00509321)
Alternative codes found
RIV/68407700:21240/19:00333461
Result on the web
http://ceur-ws.org/Vol-2444/ialatecml_paper6.pdf
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Deep Bayesian Semi-Supervised Active Learning for Sequence Labelling
Original language description
In recent years, deep learning has achieved superior results in many sequence labelling tasks, especially in natural language processing. However, it typically requires a much larger training data set than statistical approaches. In areas where collecting unlabelled data is cheap but labelling it is expensive, active learning can bring considerable improvement. Sequence learning algorithms require a series of token-level labels for the whole sequence to be available during training. Annotators of sequences typically label easily predictable parts of the sequence, although such parts could be labelled automatically instead. In this paper, we introduce a combination of active and semi-supervised learning for sequence labelling. Our approach utilizes an approximation of Bayesian inference for neural nets using Monte Carlo dropout. The approximation yields a measure of uncertainty that is needed by many active learning query strategies. We propose Monte Carlo token entropy and Monte Carlo N-best sequence entropy strategies. Furthermore, we use semi-supervised pseudo-labelling to reduce the labelling effort. The approach was experimentally evaluated on multiple sequence labelling tasks. The proposed query strategies outperform other existing techniques for deep neural nets. Moreover, semi-supervised learning reduced the labelling effort by almost 80% without any incorrectly labelled samples being inserted into the training data set.
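The description mentions Monte Carlo dropout as the source of uncertainty for the token-entropy query strategy. The following is a minimal sketch of that general idea only, assuming a PyTorch-style tagger that maps token IDs to per-token label logits and keeps dropout layers active when the model is in training mode; the model interface, shapes, and function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch: Monte Carlo token entropy as an active-learning acquisition score.
# Assumes a hypothetical `model(tokens)` returning logits of shape
# (1, seq_len, n_labels); not taken from the paper's code.
import torch
import torch.nn.functional as F

def mc_token_entropy(model, tokens, n_samples=20):
    """Average the softmax output over several stochastic forward passes
    (dropout kept active), then return the mean per-token entropy of the
    averaged distribution as a sequence-level query score."""
    model.train()  # keep dropout active at inference time (MC dropout)
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(tokens), dim=-1) for _ in range(n_samples)]
        ).mean(dim=0)                                   # (1, seq_len, n_labels)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)  # (1, seq_len)
    return entropy.mean().item()

# Usage: score each sequence in the unlabelled pool and query the
# highest-entropy ones for annotation, e.g.
# scores = [mc_token_entropy(tagger, seq) for seq in unlabelled_pool]
```

The N-best sequence entropy variant described in the paper would instead score whole candidate label sequences (e.g. from a CRF or beam decoder) rather than individual tokens; it is not shown here.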
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
GA18-18080S: Fusion-Based Knowledge Discovery in Human Activity Data
Continuities
I - Institutional support for the long-term conceptual development of a research organisation
Others
Publication year
2019
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
IAL ECML PKDD 2019: Workshop & Tutorial on Interactive Adaptive Learning. Proceedings
ISBN
—
ISSN
1613-0073
e-ISSN
—
Number of pages
16
Pages from-to
80-95
Publisher name
Technical University & CreateSpace Independent Publishing Platform
Place of publication
Aachen
Event location
Würzburg
Event date
Sep 16, 2019
Type of event by nationality
EUR - European event
UT code for WoS article
—