Ask the experts: sourcing a high-quality nutrition counseling dataset through Human-AI collaboration
Result identifiers
Result code in IS VaVaI
RIV/00216208:11320/24:10492847 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11320%2F24%3A10492847)
Result on the web
https://aclanthology.org/2024.findings-emnlp.674/
DOI - Digital Object Identifier
10.18653/v1/2024.findings-emnlp.674 (http://dx.doi.org/10.18653/v1/2024.findings-emnlp.674)
Alternative languages
Result language
English
Title in the original language
Ask the experts: sourcing a high-quality nutrition counseling dataset through Human-AI collaboration
Description in the original language
Large Language Models (LLMs) are being employed by end-users for various tasks, including sensitive ones such as health counseling, disregarding potential safety concerns. It is thus necessary to understand how adequately LLMs perform in such domains. We conduct a case study on ChatGPT in nutrition counseling, a popular use-case where the model supports a user with their dietary struggles. We crowdsource real-world diet-related struggles, then work with nutrition experts to generate supportive text using ChatGPT. Finally, experts evaluate the safety and text quality of ChatGPT's output. The result is the HAI-Coaching dataset, containing ~2.4K crowdsourced dietary struggles and ~97K corresponding ChatGPT-generated and expert-annotated supportive texts. We analyse ChatGPT's performance, discovering potentially harmful behaviours, especially for sensitive topics like mental health. Finally, we use HAI-Coaching to test open LLMs on various downstream tasks, showing that even the latest models struggle to
Title in English
Ask the experts: sourcing a high-quality nutrition counseling dataset through Human-AI collaboration
Description in English
Large Language Models (LLMs) are being employed by end-users for various tasks, including sensitive ones such as health counseling, disregarding potential safety concerns. It is thus necessary to understand how adequately LLMs perform in such domains. We conduct a case study on ChatGPT in nutrition counseling, a popular use-case where the model supports a user with their dietary struggles. We crowdsource real-world diet-related struggles, then work with nutrition experts to generate supportive text using ChatGPT. Finally, experts evaluate the safety and text quality of ChatGPT's output. The result is the HAI-Coaching dataset, containing ~2.4K crowdsourced dietary struggles and ~97K corresponding ChatGPT-generated and expert-annotated supportive texts. We analyse ChatGPT's performance, discovering potentially harmful behaviours, especially for sensitive topics like mental health. Finally, we use HAI-Coaching to test open LLMs on various downstream tasks, showing that even the latest models struggle to
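As an aside, here is a minimal Python sketch of what a single HAI-Coaching record could look like, based only on the description above. The class and field names (CoachingRecord, struggle, responses, expert_labels) and the example label values are assumptions made for illustration; they are not the dataset's actual schema or annotation scheme.

```python
# Hypothetical illustration of a HAI-Coaching-style record; field names and
# label values are assumed for illustration, not taken from the released dataset.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SupportiveText:
    text: str                 # ChatGPT-generated supportive message
    expert_labels: List[str]  # expert safety/quality annotations (assumed label set)


@dataclass
class CoachingRecord:
    struggle: str             # crowdsourced diet-related struggle
    responses: List[SupportiveText] = field(default_factory=list)


record = CoachingRecord(
    struggle="I snack late at night when I feel stressed about work.",
    responses=[
        SupportiveText(
            text="It sounds like stress is driving the late-night snacking; "
                 "planning a calming evening routine might help.",
            expert_labels=["safe", "empathetic"],
        )
    ],
)
print(record.struggle, len(record.responses))
```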
Classification
Type
D - Article in conference proceedings
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
R - EC Framework Programme project
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Proceedings title
Findings of the Association for Computational Linguistics: EMNLP 2024
ISBN
979-8-89176-168-1
ISSN
—
e-ISSN
—
Number of pages
27
Pages from-to
11519-11545
Publisher name
Association for Computational Linguistics
Place of publication
Kerrville, TX, USA
Event venue
Miami, FL, USA
Event date
12 November 2024
Event type by nationality
WRD - Worldwide event
UT WoS code of the article
—