Elevating Crop Classification Performance Through CNN-GRU Feature Fusion
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F61989100%3A27240%2F24%3A10256796" target="_blank" >RIV/61989100:27240/24:10256796 - isvavai.cz</a>
Result on the web
<a href="https://ieeexplore.ieee.org/document/10689581" target="_blank" >https://ieeexplore.ieee.org/document/10689581</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/ACCESS.2024.3467193" target="_blank" >10.1109/ACCESS.2024.3467193</a>
Alternative languages
Result language
English
Title in original language
Elevating Crop Classification Performance Through CNN-GRU Feature Fusion
Result description in original language
Crop classification and area estimation are of significant importance in agricultural production and management. They make it possible to efficiently identify various crops and support the formulation of crop-specific production and estimation strategies. Numerous deep learning techniques, particularly Convolutional Neural Networks (CNNs), have been applied to satellite data for crop classification. This approach has limitations, however, notably sequential information loss, which hinders the extraction of features that depend on temporal change. Conversely, the Gated Recurrent Unit (GRU) excels at handling time-series data but falls short at extracting intricate patterns within the data, a strength of CNNs. This study explores both methodologies for crop classification in the Charsadda district of Pakistan, a region renowned for its diverse agricultural landscape dedicated to the cultivation of economically valuable crops. We introduce a dual-channel CNN-GRU feature fusion architecture that merges features extracted individually by the CNN and GRU branches. The proposed framework is designed for Sentinel-2 and PlanetScope imagery, complemented by NDVI time-series data collected from the study site. The model's performance is compared against standalone CNN and GRU models using precision, recall, F1 score, and overall accuracy. The results show that the proposed model surpasses the individual models: the feature fusion-based model emerged as the top performer, achieving an overall accuracy of 96.47 percent, an F1 score of 93.16 percent, a precision of 93.93 percent, and a recall of 92.69 percent.
This research provides substantial evidence of the potential of the CNN-GRU feature fusion model for crop identification. By harnessing the strengths of both architectures, it demonstrates its effectiveness as a valuable tool for crop classification.
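The dual-channel idea described above can be sketched as a single forward pass: one branch convolves over the input time series and pools its activations, the other runs a GRU over the same series, and the two feature vectors are concatenated before a softmax classifier head. This is a minimal NumPy illustration only; the layer sizes, the single 1-D convolution, and the random weights are assumptions for demonstration, not the architecture or parameters from the paper.

```python
# Minimal NumPy sketch of a dual-channel CNN-GRU feature-fusion forward pass.
# All shapes and weights below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def conv1d_branch(x, kernels):
    """1-D convolution over the time axis with ReLU + global max pooling.
    x: (T, C) band/NDVI time series; kernels: (K, k, C)."""
    K, k, C = kernels.shape
    T = x.shape[0]
    out = np.empty((T - k + 1, K))
    for i in range(T - k + 1):
        window = x[i:i + k]                                   # (k, C)
        out[i] = np.maximum((kernels * window).sum(axis=(1, 2)), 0.0)
    return out.max(axis=0)                                    # (K,) pooled pattern features

def gru_branch(x, Wz, Wr, Wh, H):
    """Plain GRU over the time axis; returns the final hidden state."""
    h = np.zeros(H)
    for t in range(x.shape[0]):
        xt = np.concatenate([x[t], h])
        z = 1.0 / (1.0 + np.exp(-(Wz @ xt)))                  # update gate
        r = 1.0 / (1.0 + np.exp(-(Wr @ xt)))                  # reset gate
        h_tilde = np.tanh(Wh @ np.concatenate([x[t], r * h])) # candidate state
        h = (1 - z) * h + z * h_tilde
    return h                                                  # (H,) temporal features

def fused_predict(x, params):
    """Concatenate both branches' features, then apply a softmax classifier."""
    f = np.concatenate([
        conv1d_branch(x, params["kernels"]),
        gru_branch(x, params["Wz"], params["Wr"], params["Wh"], params["H"]),
    ])
    logits = params["Wout"] @ f
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy dimensions: 12 acquisition dates, 4 bands, 5 crop classes.
T, C, K, H, n_classes = 12, 4, 8, 8, 5
params = {
    "kernels": rng.normal(size=(K, 3, C)) * 0.3,
    "Wz": rng.normal(size=(H, C + H)) * 0.3,
    "Wr": rng.normal(size=(H, C + H)) * 0.3,
    "Wh": rng.normal(size=(H, C + H)) * 0.3,
    "Wout": rng.normal(size=(n_classes, K + H)) * 0.3,
    "H": H,
}
probs = fused_predict(rng.normal(size=(T, C)), params)
print(probs.shape)  # per-class probabilities for one pixel/parcel
```

In the paper's framework the two branches would be trained jointly on labeled Sentinel-2/PlanetScope and NDVI inputs; the sketch only shows why fusion works: the concatenated vector carries both convolutional pattern features and GRU temporal features, which neither branch provides alone.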
Title in English
Elevating Crop Classification Performance Through CNN-GRU Feature Fusion
Result description in English
Crop classification and area estimation are of significant importance in agricultural production and management. They make it possible to efficiently identify various crops and support the formulation of crop-specific production and estimation strategies. Numerous deep learning techniques, particularly Convolutional Neural Networks (CNNs), have been applied to satellite data for crop classification. This approach has limitations, however, notably sequential information loss, which hinders the extraction of features that depend on temporal change. Conversely, the Gated Recurrent Unit (GRU) excels at handling time-series data but falls short at extracting intricate patterns within the data, a strength of CNNs. This study explores both methodologies for crop classification in the Charsadda district of Pakistan, a region renowned for its diverse agricultural landscape dedicated to the cultivation of economically valuable crops. We introduce a dual-channel CNN-GRU feature fusion architecture that merges features extracted individually by the CNN and GRU branches. The proposed framework is designed for Sentinel-2 and PlanetScope imagery, complemented by NDVI time-series data collected from the study site. The model's performance is compared against standalone CNN and GRU models using precision, recall, F1 score, and overall accuracy. The results show that the proposed model surpasses the individual models: the feature fusion-based model emerged as the top performer, achieving an overall accuracy of 96.47 percent, an F1 score of 93.16 percent, a precision of 93.93 percent, and a recall of 92.69 percent.
This research provides substantial evidence of the potential of the CNN-GRU feature fusion model for crop identification. By harnessing the strengths of both architectures, it demonstrates its effectiveness as a valuable tool for crop classification.
Classification
Type
J<sub>imp</sub> - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
20200 - Electrical engineering, Electronic engineering, Information engineering
Result linkages
Project
—
Linkages
S - Specific research at universities
Other
Year of implementation
2024
Data confidentiality code
S - Complete and true project data are not subject to protection under special legal regulations
Data specific to the result type
Periodical name
IEEE Access
ISSN
2169-3536
e-ISSN
2169-3536
Periodical volume
12
Issue within the volume
1
Publisher country
US - United States of America
Number of pages
13
Pages from-to
141013-141025
Article UT WoS code
001329032300001
Result EID in the Scopus database
—