Deep Learning-Based Assessment Model for Real-Time Identification of Visual Learners Using Raw EEG
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F24%3APU150438" target="_blank" >RIV/00216305:26230/24:PU150438 - isvavai.cz</a>
Result on the web
<a href="https://ieeexplore.ieee.org/document/10387266?source=authoralert" target="_blank" >https://ieeexplore.ieee.org/document/10387266?source=authoralert</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/TNSRE.2024.3351694" target="_blank" >10.1109/TNSRE.2024.3351694</a>
Alternative languages
Result language
English
Original language name
Deep Learning-Based Assessment Model for Real-Time Identification of Visual Learners Using Raw EEG
Original language description
Automatic identification of visual learning style in real time from raw electroencephalogram (EEG) signals is challenging. In this work, inspired by the powerful feature-learning abilities of deep learning, deep learning-based models are proposed to learn high-level feature representations for EEG-based identification of visual learners. Existing computer-aided systems that use EEG and machine learning can assess learning styles reasonably well; despite their potential, however, they typically require offline processing to remove artifacts and extract features, which makes them unsuitable for real-time applications. The dataset comprises 34 healthy subjects whose EEG signals were measured during resting states (eyes open and eyes closed) and while performing learning tasks; the subjects had no prior knowledge of the animated educational content presented in video format. The paper analyzes EEG signals measured during the eyes-closed resting state using three deep learning techniques: long short-term memory (LSTM), long short-term memory-convolutional neural network (LSTM-CNN), and long short-term memory-fully convolutional neural network (LSTM-FCNN). These techniques were chosen for their suitability for real-time applications with varying data lengths and their low computational cost. After hyperparameter tuning, all three techniques could identify visual learners. Of the three, the LSTM-CNN technique achieved the highest performance when identifying a student's visual learning style, with an average accuracy of 94%, a sensitivity of 80%, a specificity of 92%, and an F1 score of 94%. This research shows that the deep learning-based LSTM-CNN technique is the most effective of the evaluated methods for accurately identifying a student's visual learning style.
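The record does not specify the architecture of the LSTM-CNN model beyond its name. As an illustration only, the following is a minimal PyTorch sketch of a binary classifier of that general shape, operating on windows of raw multi-channel EEG; all layer sizes, the channel count, and the window length are assumptions, not values from the published paper.

```python
import torch
import torch.nn as nn

class LSTMCNN(nn.Module):
    """Illustrative LSTM-CNN binary classifier for raw EEG windows.

    Layer sizes and input dimensions here are placeholders chosen for
    the sketch; the published model's exact architecture and
    hyperparameters are not given in this record.
    """
    def __init__(self, n_channels=14, hidden=64, n_classes=2):
        super().__init__()
        # LSTM reads the raw EEG window sample-by-sample across time
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True)
        # 1-D convolution over the LSTM's hidden-state sequence,
        # followed by global average pooling over time
        self.conv = nn.Sequential(
            nn.Conv1d(hidden, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (batch, time, channels)
        h, _ = self.lstm(x)                   # (batch, time, hidden)
        h = self.conv(h.transpose(1, 2))      # (batch, 32, 1)
        return self.fc(h.squeeze(-1))         # (batch, n_classes)

model = LSTMCNN()
window = torch.randn(8, 256, 14)  # 8 windows, 256 samples, 14 channels
logits = model(window)
print(logits.shape)  # torch.Size([8, 2])
```

Because the model consumes raw windows directly, with no separate artifact-removal or feature-extraction stage, this general design is compatible with the real-time constraint the abstract emphasizes.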
Czech name
—
Czech description
—
Classification
Type
J<sub>imp</sub> - Article in a specialist periodical, which is included in the Web of Science database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
<a href="/en/project/GA24-10990S" target="_blank" >GA24-10990S: Hardware-Aware Machine Learning: From Automated Design to Innovative and Explainable Solutions</a><br>
Continuities
P - Research and development project financed from public funds (with a link to CEP)
Others
Publication year
2024
Confidentiality
S - Complete and truthful data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING
ISSN
1534-4320
e-ISSN
1558-0210
Volume of the periodical
32
Issue of the periodical within the volume
1
Country of publishing house
US - UNITED STATES
Number of pages
13
Pages from-to
378-390
UT code for WoS article
001146060000005
EID of the result in the Scopus database
2-s2.0-85182363569