Active Learning for Text Classification and Fake News Detection

The result's identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F21%3A00351077" target="_blank" >RIV/68407700:21230/21:00351077 - isvavai.cz</a>

  • Result on the web

    <a href="https://ieeexplore.ieee.org/document/9644290" target="_blank" >https://ieeexplore.ieee.org/document/9644290</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1109/ISCSIC54682.2021.00027" target="_blank" >10.1109/ISCSIC54682.2021.00027</a>

Alternative languages

  • Result language

    English

  • Original language name

    Active Learning for Text Classification and Fake News Detection

  • Original language description

    Supervised classification of texts relies on the availability of reliable class labels for the training data. However, the process of collecting data labels can be complex and costly. A standard procedure is to add labels sequentially by querying an annotator until satisfactory performance is reached. Active learning is the process of selecting the unlabeled data records whose labels would bring the highest discriminability to the dataset. In this paper, we provide a comparative study of various active learning strategies for different embeddings of the text on various datasets. We focus on Bayesian active learning methods, which are used for their ability to represent the uncertainty of the classification procedure. We compare three types of uncertainty representation: i) SGLD, ii) Dropout, and iii) deep ensembles, with the latter two methods in cold- and warm-start versions. The texts were embedded using fastText, LASER, and RoBERTa encoding techniques. The methods are tested on two types of datasets: text categorization (Kaggle News Category and Twitter Sentiment140 datasets) and fake news detection (Kaggle Fake News and Fake News Detection datasets). We show that the conventional dropout Monte Carlo approach provides good results for the majority of the tasks. The ensemble methods provide a more accurate representation of uncertainty, which keeps the pace of learning on a complicated problem as the number of requests grows, outperforming dropout in the long run. However, for the majority of the datasets, the active strategies using Dropout MC and Deep Ensembles achieved almost perfect performance even for a very low number of requests. The best results were obtained with the most recent embedding, RoBERTa. (A rough, illustrative sketch of the dropout-based query step follows this list.)

  • Czech name

  • Czech description
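
The description above centers on dropout Monte Carlo as a way to score how informative each unlabeled text would be to label next. The sketch below is a minimal, illustrative reading of that query step only, not the authors' implementation: it assumes pre-computed 768-dimensional text embeddings (as a RoBERTa-base encoder would give), a small hypothetical MLP classifier, and predictive entropy as the uncertainty score; all names and sizes are assumptions.

    # Minimal sketch (not the paper's code): dropout Monte Carlo uncertainty
    # used to pick which unlabeled records to send to the annotator.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MLPClassifier(nn.Module):
        """Hypothetical classifier over fixed text embeddings."""
        def __init__(self, dim_in: int, n_classes: int, p_drop: float = 0.5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim_in, 128), nn.ReLU(), nn.Dropout(p_drop),
                nn.Linear(128, n_classes),
            )

        def forward(self, x):
            return self.net(x)

    def mc_dropout_entropy(model, x, n_samples: int = 20):
        """Predictive entropy with dropout kept active at inference time."""
        model.train()  # keep dropout layers stochastic
        with torch.no_grad():
            probs = torch.stack(
                [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
            ).mean(dim=0)
        return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

    def select_queries(model, unlabeled_x, batch_size: int = 10):
        """Return indices of the records the model is least certain about."""
        entropy = mc_dropout_entropy(model, unlabeled_x)
        return torch.topk(entropy, k=batch_size).indices

    # Usage with stand-in random embeddings (illustrative only):
    model = MLPClassifier(dim_in=768, n_classes=2)  # e.g. fake vs. real news
    pool = torch.randn(1000, 768)                   # unlabeled pool embeddings
    to_label = select_queries(model, pool, batch_size=10)
    print("indices to send to the annotator:", to_label.tolist())

A deep-ensemble variant would replace the repeated stochastic forward passes with one pass per independently trained model; the selection step stays the same.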

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    <a href="/en/project/EF16_019%2F0000765" target="_blank" >EF16_019/0000765: Research Center for Informatics</a><br>

  • Continuities

    P - Research and development project financed from public funds (with a link to CEP)

Others

  • Publication year

    2021

  • Confidentiality

    S - Complete and truthful data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    2021 International Symposium on Computer Science and Intelligent Controls (ISCSIC)

  • ISBN

    978-1-6654-1627-6

  • ISSN

  • e-ISSN

  • Number of pages

    8

  • Pages from-to

    87-94

  • Publisher name

    IEEE Computer Society

  • Place of publication

    Los Alamitos

  • Event location

    Rome

  • Event date

    Feb 12, 2022

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article