Exploring Large Language Models for Classical Philology

The result's identifiers

  • Result code in IS VaVaI

    RIV/00216208:11320/23:DQSBZTSF - https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F23%3ADQSBZTSF

  • Result on the web

    https://www.scopus.com/inward/record.uri?eid=2-s2.0-85170506218&partnerID=40&md5=61bfbd1a6f8cd098ec091103da0dfa07

  • DOI - Digital Object Identifier

Alternative languages

  • Result language

    English

  • Original language name

    Exploring Large Language Models for Classical Philology

  • Original language description

    "Recent advances in NLP have led to the creation of powerful language models for many languages including Ancient Greek and Latin. While prior work on Classical languages unanimously uses BERT, in this work we create four language models for Ancient Greek that vary along two dimensions to study their versatility for tasks of interest for Classical languages: we explore (i) encoder-only and encoder-decoder architectures using ROBERTA and T5 as strong model types, and create for each of them (ii) a monolingual Ancient Greek and a multilingual instance that includes Latin and English. We evaluate all models on morphological and syntactic tasks, including lemmatization, which demonstrates the added value of T5's decoding abilities. We further define two probing tasks to investigate the knowledge acquired by models pre-trained on Classical texts. Our experiments provide the first benchmarking analysis of existing models of Ancient Greek. Results show that our models provide significant improvements over the SoTA. The systematic analysis of model types can inform future research in designing language models for Classical languages, including the development of novel generative tasks. We make all our models available as community resources, along with a large curated pre-training corpus for Ancient Greek, to support the creation of a larger, comparable model zoo for Classical Philology. Our models and resources are available at https://github.com/Heidelberg-NLP/ancient-language-models. © 2023 Association for Computational Linguistics."

  • Czech name

  • Czech description
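
The description above notes that the encoder-only (RoBERTa-style) and encoder-decoder (T5-style) models are released at https://github.com/Heidelberg-NLP/ancient-language-models. A minimal sketch of how such a masked-language model could be loaded and probed with the Hugging Face transformers library is shown below; the model identifier is a hypothetical placeholder (the actual checkpoint names are listed in that repository), not a confirmed ID.

```python
# Sketch only: probe a released Ancient Greek masked-language model.
# "your-org/ancient-greek-roberta" is a HYPOTHETICAL placeholder; replace it
# with a checkpoint name from https://github.com/Heidelberg-NLP/ancient-language-models.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_ID = "your-org/ancient-greek-roberta"  # placeholder, not a real model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForMaskedLM.from_pretrained(MODEL_ID)

# Fill-mask probe on a short Ancient Greek sentence (Iliad 1.1) with one word masked.
text = f"μῆνιν ἄειδε θεὰ {tokenizer.mask_token} Ἀχιλῆος"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and print the model's top 5 candidate tokens.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```
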

Classification

  • Type

    D - Article in proceedings

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

  • Continuities

Others

  • Publication year

    2023

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Article name in the collection

    "Proc. Annu. Meet. Assoc. Comput Linguist."

  • ISBN

    978-195942972-2

  • ISSN

    0736-587X

  • e-ISSN

  • Number of pages

    19

  • Pages from-to

    15181-15199

  • Publisher name

    Association for Computational Linguistics (ACL)

  • Place of publication

  • Event location

    Cham

  • Event date

    Jan 1, 2023

  • Type of event by nationality

    WRD - Worldwide event

  • UT code for WoS article