LLM-Datasets: An Open Framework for Pretraining Datasets of Large Language Models

The result's identifiers

  • Result code in IS VaVaI

RIV/00216208:11320/25:LJE5ZFFP (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F25%3ALJE5ZFFP)

  • Result on the web

https://openreview.net/forum?id=5RdIMlGLXL

  • DOI - Digital Object Identifier

Alternative languages

  • Result language

    English

  • Original language name

    LLM-Datasets: An Open Framework for Pretraining Datasets of Large Language Models

  • Original language description

    Large language models have become the cornerstone of today's natural language processing research. To facilitate the training, evaluation, and deployment of language models, the community has developed a series of tools and frameworks and made them openly available. This joint community effort has led to more collaboration, standardization, and overall more progress in language model research. However, one crucial aspect of large language models has been neglected so far: the pretraining datasets. To address this gap, we present an open framework for the collection and systematic compilation of pretraining datasets, called LLM-Datasets. With LLM-Datasets, we undertake a community effort, collaborating with experts in the individual languages to collect and systematically compile datasets whose quantity and quality make them suitable for pretraining language models in a multilingual setting. The framework provides a unified interface to pretraining datasets, enabling the download, text extraction, filtering, and sampling of the pretraining data (a minimal sketch of such an interface follows this list). It is modular, extensible with new datasets, and designed with the high-performance-computing requirements in mind that are needed to reach the scale of today's language models. Users of the framework can focus on the actual data composition and reuse existing datasets from the community while ensuring reproducibility. To showcase LLM-Datasets, we compiled a pretraining dataset of 2.3 trillion tokens for a large language model covering 32 European languages.

  • Czech name

  • Czech description

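The description above sketches a unified interface for downloading, extracting, filtering, and sampling pretraining data across many sources. The following minimal Python sketch illustrates that idea under stated assumptions: the names (TextDataset, sample, the toy sources) and the mixture-sampling logic are hypothetical illustrations, not the actual LLM-Datasets API.

```python
# Hypothetical sketch of a unified pretraining-dataset interface.
# All names here are illustrative assumptions, not the LLM-Datasets API.
from dataclasses import dataclass, field
from typing import Callable, Iterable, Iterator
import random

@dataclass
class TextDataset:
    """One pretraining source: where to get it and how to turn it into text."""
    name: str
    language: str
    download: Callable[[], Iterable[str]]          # yields raw records
    extract: Callable[[str], str] = lambda r: r    # raw record -> plain text
    filters: list[Callable[[str], bool]] = field(default_factory=list)

    def documents(self) -> Iterator[str]:
        """Download, extract, and filter documents lazily; streaming keeps
        memory flat regardless of corpus size."""
        for record in self.download():
            text = self.extract(record)
            if all(f(text) for f in self.filters):
                yield text

def sample(datasets: list[TextDataset], weights: list[float], n_docs: int,
           seed: int = 0) -> Iterator[str]:
    """Interleave documents from several sources according to mixture
    weights — a simple stand-in for the framework's sampling step."""
    rng = random.Random(seed)
    iters = [ds.documents() for ds in datasets]
    for _ in range(n_docs):
        i = rng.choices(range(len(iters)), weights=weights)[0]
        try:
            yield next(iters[i])
        except StopIteration:  # a source ran dry; stop the mixture
            return

# Usage: two toy sources, one with a minimum-length quality filter.
if __name__ == "__main__":
    en = TextDataset("toy_en", "en",
                     download=lambda: ["hello world", "x"],
                     filters=[lambda t: len(t) > 5])
    de = TextDataset("toy_de", "de",
                     download=lambda: ["hallo welt"])
    for doc in sample([en, de], weights=[0.5, 0.5], n_docs=3):
        print(doc)
```

The lazy generators are the design point worth noting: because each source is consumed as a stream, composing and sampling a multi-terabyte mixture never requires holding a corpus in memory, which is the property the abstract's high-performance-computing remark points at.
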
Classification

  • Type

    O - Miscellaneous

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

  • Continuities

Others

  • Publication year

    2024

  • Confidentiality

    S - Complete and truthful data on the project are not subject to protection under special legal regulations