The Problem of AI Hallucination and How to Solve It
Result identifiers
Result code in IS VaVaI
RIV/00216208:11410/24:10491287 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11410%2F24%3A10491287)
Alternative codes found
RIV/00216208:11320/24:10491287
Result on the web
—
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in original language
The Problem of AI Hallucination and How to Solve It
Result description in original language
AI in education is a topic that has been researched for the last 70 years. However, the last two years have seen very significant changes, related to the introduction of OpenAI's ChatGPT chatbot in November 2022. The GPT (Generative Pre-trained Transformer) language model has dramatically influenced how the public approaches artificial intelligence. For many, generative language models have become synonymous with AI and have come to be viewed uncritically as a universal source of answers to most questions. However, it soon became apparent that even generative language models have their limits. Among the main problems that emerged is hallucination (providing answers containing false or misleading information), which occurs in all language models. The main problem with hallucination is that such information is difficult to distinguish from correct information, and AI language models are very persuasive in presenting it. The risks of this phenomenon are much more substantial when language models are used to support learning, where the learner cannot tell correct information from incorrect information. The paper focuses on AI hallucination in mathematics education. It first shows how AI chatbots hallucinate in mathematics and then presents one possible solution to counter this hallucination. The presented solution was created for the AI chatbot Edu-AI, designed to tutor students in mathematics. Usually, the problem is approached by having the system verify the correctness of the output offered by the chatbot. Within Edu-AI, responses are not checked; inputs are. If an input containing a factual query is detected, it is redirected, and the answer is traced to authorised knowledge sources and study materials. If a relevant answer cannot be found in these sources, the question is forwarded to a natural person who will address it. In addition to describing the technical solution, the article includes concrete examples of how the system works. The solution was developed for the educational domain but is applicable to all domains where users must be provided with relevant information.
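The abstract describes only the control flow of the input-checking idea, not its implementation. Below is a minimal Python sketch of that flow under stated assumptions: a keyword-based classifier flags factual queries, a token-overlap lookup stands in for retrieval over authorised study materials, and any query without a relevant source is forwarded to a human. The names (is_factual_query, lookup_authorized_answer, AUTHORIZED_SOURCES, MATCH_THRESHOLD), the sample entries and the threshold are illustrative assumptions and do not come from the Edu-AI system itself.

# Minimal illustrative sketch (not Edu-AI source code) of the input-checking
# flow described in the abstract: factual queries are answered only from
# authorised study materials, otherwise they are forwarded to a human tutor.
# All names, keyword lists, sample entries and thresholds are assumptions.
import re
from dataclasses import dataclass

@dataclass
class SourceEntry:
    question: str  # canonical question covered by the authorised material
    answer: str    # answer taken from that material

# Hypothetical authorised knowledge base built from study materials.
AUTHORIZED_SOURCES = [
    SourceEntry("What is the derivative of sin(x)?",
                "The derivative of sin(x) is cos(x)."),
    SourceEntry("What is the quadratic formula?",
                "x = (-b +/- sqrt(b^2 - 4ac)) / (2a) for ax^2 + bx + c = 0."),
]

FACTUAL_MARKERS = ("what is", "define", "how many", "formula", "theorem")
MATCH_THRESHOLD = 0.75  # assumed token-overlap cut-off for "relevant answer"

def tokens(text: str) -> set[str]:
    """Lower-cased word tokens; a crude stand-in for real retrieval."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def is_factual_query(user_input: str) -> bool:
    """Stand-in for the input classifier: flag inputs that ask for facts."""
    return any(marker in user_input.lower() for marker in FACTUAL_MARKERS)

def lookup_authorized_answer(user_input: str) -> str | None:
    """Trace the answer in authorised sources; None if nothing is relevant."""
    query = tokens(user_input)
    best_score, best_answer = 0.0, None
    for entry in AUTHORIZED_SOURCES:
        entry_tokens = tokens(entry.question)
        score = len(query & entry_tokens) / max(len(entry_tokens), 1)
        if score > best_score:
            best_score, best_answer = score, entry.answer
    return best_answer if best_score >= MATCH_THRESHOLD else None

def chatbot_reply(user_input: str) -> str:
    """Placeholder for the generative tutoring model."""
    return f"(generative reply to: {user_input})"

def handle_input(user_input: str) -> str:
    """Route the input before any generative model is allowed to answer."""
    if is_factual_query(user_input):
        answer = lookup_authorized_answer(user_input)
        if answer is not None:
            return answer                     # grounded in study materials
        return "Forwarded to a human tutor."  # no relevant source found
    return chatbot_reply(user_input)          # non-factual input only

if __name__ == "__main__":
    print(handle_input("What is the derivative of sin(x)?"))  # from sources
    print(handle_input("What is the capital of Atlantis?"))   # human fallback

In a real tutoring system the lookup would be a proper retrieval component over the study materials; the point illustrated here is that the generative model is never asked to answer a factual query that the authorised sources cannot support.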
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
50301 - Education, general; including training, pedagogy, didactics [and education systems]
Result linkages
Project
TL05000236: AI asistent pro žáky a učitele (AI assistant for pupils and teachers) - /cs/project/TL05000236
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of application
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
Proceedings of the European Conference on e-Learning, ECEL
ISBN
—
ISSN
2048-8637
e-ISSN
2048-8645
Number of pages
7
Pages from-to
122-128
Publisher name
Academic Conferences and Publishing International Limited
Place of publication
Portugal
Event venue
Porto
Event date
24. 10. 2024
Event type by nationality
WRD - Worldwide event
Article UT WoS code
—