Deep learning-based assessment model for Real-time identification of visual learners using Raw EEG
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F24%3APU150438" target="_blank" >RIV/00216305:26230/24:PU150438 - isvavai.cz</a>
Result on the web
<a href="https://ieeexplore.ieee.org/document/10387266?source=authoralert" target="_blank" >https://ieeexplore.ieee.org/document/10387266?source=authoralert</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/TNSRE.2024.3351694" target="_blank" >10.1109/TNSRE.2024.3351694</a>
Alternative languages
Result language
English
Title in original language
Deep learning-based assessment model for Real-time identification of visual learners using Raw EEG
Result description in original language
Automatic identification of visual learning style in real time from raw electroencephalogram (EEG) signals is challenging. In this work, inspired by the powerful capabilities of deep learning, deep learning-based models are proposed to learn high-level feature representations for EEG-based visual learner identification. Existing computer-aided systems that combine EEG with machine learning can assess learning styles reasonably well, but they typically require offline processing to remove artifacts and extract features, which makes them unsuitable for real-time applications. The dataset comprises EEG signals from 34 healthy subjects recorded during resting states (eyes open and eyes closed) and while performing learning tasks; the subjects had no prior knowledge of the animated educational content presented in video format. The paper analyzes EEG signals recorded during the eyes-closed resting state using three deep learning techniques: long short-term memory (LSTM), long short-term memory-convolutional neural network (LSTM-CNN), and long short-term memory-fully convolutional neural network (LSTM-FCNN). These techniques were chosen for their suitability for real-time applications with varying data lengths and their low computational cost. Hyperparameter optimization enabled the identification of visual learners with all three techniques. Of the three, the LSTM-CNN technique achieved the highest average accuracy of 94%, with a sensitivity of 80%, a specificity of 92%, and an F1 score of 94% in identifying a student's visual learning style. This research shows that the deep learning-based LSTM-CNN technique is the most effective method for accurately identifying a student's visual learning style.
Title in English
Deep learning-based assessment model for Real-time identification of visual learners using Raw EEG
Result description in English
Automatic identification of visual learning style in real time from raw electroencephalogram (EEG) signals is challenging. In this work, inspired by the powerful capabilities of deep learning, deep learning-based models are proposed to learn high-level feature representations for EEG-based visual learner identification. Existing computer-aided systems that combine EEG with machine learning can assess learning styles reasonably well, but they typically require offline processing to remove artifacts and extract features, which makes them unsuitable for real-time applications. The dataset comprises EEG signals from 34 healthy subjects recorded during resting states (eyes open and eyes closed) and while performing learning tasks; the subjects had no prior knowledge of the animated educational content presented in video format. The paper analyzes EEG signals recorded during the eyes-closed resting state using three deep learning techniques: long short-term memory (LSTM), long short-term memory-convolutional neural network (LSTM-CNN), and long short-term memory-fully convolutional neural network (LSTM-FCNN). These techniques were chosen for their suitability for real-time applications with varying data lengths and their low computational cost. Hyperparameter optimization enabled the identification of visual learners with all three techniques. Of the three, the LSTM-CNN technique achieved the highest average accuracy of 94%, with a sensitivity of 80%, a specificity of 92%, and an F1 score of 94% in identifying a student's visual learning style. This research shows that the deep learning-based LSTM-CNN technique is the most effective method for accurately identifying a student's visual learning style.
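For readers who want a concrete picture of the hybrid architecture summarized above, the following is a minimal sketch of an LSTM-CNN classifier for raw EEG windows, written in Python with Keras/TensorFlow. The channel count, window length, layer sizes, and training settings are illustrative assumptions, not the configuration reported in the paper.

# Minimal sketch of an LSTM-CNN for binary visual-learner classification from raw EEG.
# Assumptions (not from the paper): 14 EEG channels, 256-sample windows, layer sizes below.
import tensorflow as tf
from tensorflow.keras import layers, models

N_CHANNELS = 14      # assumed number of EEG electrodes
WINDOW_LEN = 256     # assumed samples per analysis window

def build_lstm_cnn(n_channels=N_CHANNELS, window_len=WINDOW_LEN):
    """LSTM over raw EEG samples, then 1-D convolutions, then a sigmoid head."""
    inputs = layers.Input(shape=(window_len, n_channels))
    # Recurrent stage: learn temporal dependencies directly from raw samples.
    x = layers.LSTM(64, return_sequences=True)(inputs)
    # Convolutional stage: extract local patterns from the LSTM feature sequence.
    x = layers.Conv1D(32, kernel_size=5, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dropout(0.3)(x)
    # Binary output: visual learner vs. non-visual learner.
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_lstm_cnn().summary()

In this sketch the recurrent layer consumes the raw multichannel signal directly, which is what allows the pipeline to skip offline artifact removal and hand-crafted feature extraction; the convolutional stage then compresses the LSTM's output sequence before classification.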
Classification
Type
J<sub>imp</sub> - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science and bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
<a href="/cs/project/GA24-10990S" target="_blank" >GA24-10990S: Hardware-aware machine learning: From automated design to innovative and explainable solutions</a><br>
Linkages
P - Research and development project financed from public sources (with a link to CEP)
Others
Year of implementation
2024
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Name of the periodical
IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING
ISSN
1534-4320
e-ISSN
1558-0210
Volume of the periodical
32
Issue of the periodical within the volume
1
Country of the periodical's publisher
US - United States of America
Number of pages of the result
13
Pages from-to
378-390
UT WoS code of the article
001146060000005
EID of the result in the Scopus database
2-s2.0-85182363569