How does the number of objective function evaluations impact our understanding of metaheuristics behavior?
Result identifiers
Result code in IS VaVaI
RIV/70883521:28140/21:63544438 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F70883521%3A28140%2F21%3A63544438)
Result on the web
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9378523
DOI - Digital Object Identifier
10.1109/ACCESS.2021.3066135 (http://dx.doi.org/10.1109/ACCESS.2021.3066135)
Alternative languages
Result language
English
Title in the original language
How does the number of objective function evaluations impact our understanding of metaheuristics behavior?
Result description in the original language
Comparing various metaheuristics on the basis of an equal number of objective function evaluations has become standard practice. Many contemporary publications use a specific number of objective function evaluations prescribed by the benchmark set definitions. Furthermore, many publications deal with the recurrent theme of late stagnation, which may give the impression that continuing the optimization process would be a waste of computational resources. But is it? Recently, many challenges, issues, and questions have been raised regarding fair comparisons of metaheuristic algorithms, together with recommendations for good benchmarking practice. The aim of this work is not to compare the performance of several well-known algorithms but to investigate the issues that can arise when benchmarking and comparing metaheuristic performance, regardless of the problem being solved. This article studies the impact of a higher number of evaluations on a selection of metaheuristic algorithms. We examine the effect of an increased evaluation budget on the overall performance, mean convergence, and population diversity of selected swarm algorithms and IEEE CEC competition winners. Although the impact varies with the particular algorithm selection, it may significantly affect the final verdict of a metaheuristic comparison. This work singles out an important benchmarking issue and analyzes it extensively, resulting in conclusions and recommendations for users solving real engineering optimization problems and for researchers studying metaheuristic algorithms. Especially nowadays, when metaheuristic algorithms are applied to increasingly complex optimization problems and meet machine learning in AutoML frameworks, we conclude that the objective function evaluation budget should be treated as another vital optimization input variable.
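To make the abstract's closing point concrete (the evaluation budget acting as an optimization input variable in its own right), the following minimal Python sketch runs the same toy differential evolution on the sphere function under two different evaluation budgets and reports the best fitness and the remaining population diversity. The sketch is not taken from the article; the objective, the DE parameters, and the budget values are illustrative assumptions made for this example.

# Illustrative sketch (not the article's code): a toy DE/rand/1/bin run twice,
# changing only the objective-function-evaluation budget.
import numpy as np

def sphere(x):
    # Toy objective: sum of squares, minimum 0 at the origin.
    return float(np.sum(x ** 2))

def diversity(pop):
    # Population diversity measured as the mean per-dimension standard deviation.
    return float(np.mean(np.std(pop, axis=0)))

def run_de(budget, dim=10, pop_size=20, f=0.8, cr=0.9, seed=1):
    # Basic DE/rand/1/bin that stops when the evaluation budget is exhausted.
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    fitness = np.array([sphere(ind) for ind in pop])
    evals = pop_size
    while evals + pop_size <= budget:
        for i in range(pop_size):
            candidates = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(candidates, size=3, replace=False)]
            mutant = a + f * (b - c)
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True  # guarantee at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            trial_fit = sphere(trial)
            evals += 1
            if trial_fit <= fitness[i]:  # greedy one-to-one selection
                pop[i], fitness[i] = trial, trial_fit
    return fitness.min(), diversity(pop), evals

# Same algorithm and seed; only the evaluation budget differs.
for budget in (10_000, 100_000):
    best, div, used = run_de(budget)
    print(f"budget={budget:>7d}  best={best:.3e}  diversity={div:.3e}  evals={used}")

Passing the budget explicitly to run_de mirrors the article's recommendation: the number of objective function evaluations is an input of the experiment, not a fixed benchmarking constant, and conclusions drawn at one budget need not hold at another.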
Classification
Type
Jimp - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
—
Linkages
S - Specific research at universities
Others
Year of implementation
2021
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Journal title
IEEE Access
ISSN
2169-3536
e-ISSN
—
Journal volume
9
Issue number within the volume
Not specified
Country of the journal publisher
US - United States of America
Number of pages
17
Pages from-to
44032-44048
UT WoS code of the article
000633380900001
Scopus EID of the result
2-s2.0-85103755169