Retrieval-Augmented Generation (RAG) using Large Language Models (LLMs)
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F49777513%3A23520%2F24%3A43973058" target="_blank" >RIV/49777513:23520/24:43973058 - isvavai.cz</a>
Result on the web
<a href="https://svk.fav.zcu.cz/download/proceedings_svk_2024.pdf" target="_blank" >https://svk.fav.zcu.cz/download/proceedings_svk_2024.pdf</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in original language
Retrieval-Augmented Generation (RAG) using Large Language Models (LLMs)
Result description in original language
The convergence of Retrieval-Augmented Generation (RAG) methodologies with the robust computational prowess of Large Language Models (LLMs) heralds a new era in natural language processing, promising unprecedented levels of accuracy and contextual relevance in text generation tasks. Pre-trained large language models, also referred to as foundation models, typically lack the ability to learn incrementally, may exhibit hallucinations, and can inadvertently expose private data from their training corpus. Addressing these shortcomings has sparked increasing interest in retrieval-augmented generation methods. RAG enhances the predictive capabilities of large language models by integrating an external datastore during inference. This approach enriches prompts with a blend of context, historical data, and pertinent knowledge, resulting in RAG LLMs.
Title in English
Retrieval-Augmented Generation (RAG) using Large Language Models (LLMs)
Result description in English
The convergence of Retrieval-Augmented Generation (RAG) methodologies with the robust computational prowess of Large Language Models (LLMs) heralds a new era in natural language processing, promising unprecedented levels of accuracy and contextual relevance in text generation tasks. Pre-trained large language models, also referred to as foundation models, typically lack the ability to learn incrementally, may exhibit hallucinations, and can inadvertently expose private data from their training corpus. Addressing these shortcomings has sparked increasing interest in retrieval-augmented generation methods. RAG enhances the predictive capabilities of large language models by integrating an external datastore during inference. This approach enriches prompts with a blend of context, historical data, and pertinent knowledge, resulting in RAG LLMs.
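The description above outlines the RAG mechanism: retrieve relevant documents from an external datastore, then enrich the prompt with that context before generation. A minimal, self-contained sketch of this retrieve-then-augment step follows; a toy bag-of-words similarity stands in for a real embedding model, and the function names and example datastore are illustrative assumptions, not taken from the paper itself.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a word-count vector (stand-in for a real embedding model).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, datastore, k=2):
    # Rank datastore documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(datastore, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, datastore, k=2):
    # Enrich the prompt with retrieved context before it is sent to the LLM.
    context = "\n".join(retrieve(query, datastore, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

datastore = [
    "RAG combines retrieval with generation.",
    "Foundation models are trained on large corpora.",
    "External datastores supply fresh knowledge at inference time.",
]
prompt = build_prompt("How does RAG use an external datastore?", datastore)
```

In a production system the toy similarity would be replaced by dense embeddings and a vector index, but the structure — retrieve, assemble context, prompt the model — is the same one the abstract describes.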
Classification
Type
O - Other results
CEP field
—
OECD FORD field
20205 - Automation and control systems
Result linkages
Project
—
Linkages
S - Specific research at universities
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations