Retrieval-Augmented Generation (RAG) using Large Language Models (LLMs)
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F49777513%3A23520%2F24%3A43973058" target="_blank" >RIV/49777513:23520/24:43973058 - isvavai.cz</a>
Result on the web
<a href="https://svk.fav.zcu.cz/download/proceedings_svk_2024.pdf" target="_blank" >https://svk.fav.zcu.cz/download/proceedings_svk_2024.pdf</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Retrieval-Augmented Generation (RAG) using Large Language Models (LLMs)
Original language description
The convergence of Retrieval-Augmented Generation (RAG) methodologies with the robust computational capabilities of Large Language Models (LLMs) marks a significant advance in natural language processing, improving accuracy and contextual relevance in text generation tasks. Pre-trained large language models, also referred to as foundation models, typically lack the ability to learn incrementally, may exhibit hallucinations, and can inadvertently expose private data from their training corpus. Addressing these shortcomings has sparked growing interest in retrieval-augmented generation methods. RAG enhances the predictive capabilities of large language models by integrating an external datastore during inference: prompts are enriched with a blend of context, historical data, and pertinent knowledge, and the resulting systems are referred to as RAG LLMs.
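The inference-time pattern the abstract describes (retrieve from an external datastore, then enrich the prompt) can be sketched as follows. This is a minimal illustration, not the published system: the datastore contents, the word-overlap scoring, and the prompt template are all illustrative assumptions.

```python
# Minimal sketch of the RAG pattern: retrieve relevant passages from an
# external datastore, then prepend them to the prompt before it reaches
# the LLM. The datastore, scorer, and template are illustrative only.
from collections import Counter

# Hypothetical external datastore (in practice: a vector or document index).
DATASTORE = [
    "RAG augments a language model with an external datastore at inference time.",
    "Foundation models may hallucinate facts absent from their training corpus.",
    "Prompt templates combine retrieved context with the user question.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: count overlapping lowercase word tokens."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k datastore passages with the highest overlap score."""
    return sorted(DATASTORE, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Enrich the prompt with retrieved context before LLM generation."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    print(build_prompt("What does RAG add to a language model at inference?"))
```

A production variant would replace the word-overlap scorer with dense embeddings over a vector index and pass `build_prompt`'s output to the generating LLM; the control flow stays the same.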
Czech name
—
Czech description
—
Classification
Type
O - Miscellaneous
CEP classification
—
OECD FORD branch
20205 - Automation and control systems
Result continuities
Project
—
Continuities
S - Specific research at universities
Others
Publication year
2024
Confidentiality
S - Complete and accurate data about the project are not subject to protection under special legal regulations