How does the number of objective function evaluations impact our understanding of metaheuristics behavior?
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F70883521%3A28140%2F21%3A63544438" target="_blank" >RIV/70883521:28140/21:63544438 - isvavai.cz</a>
Result on the web
<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9378523" target="_blank" >https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9378523</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/ACCESS.2021.3066135" target="_blank" >10.1109/ACCESS.2021.3066135</a>
Alternative languages
Result language
English
Original language name
How does the number of objective function evaluations impact our understanding of metaheuristics behavior?
Original language description
Comparing various metaheuristics based on an equal number of objective function evaluations has become standard practice. Many contemporary publications use a specific number of objective function evaluations prescribed by the benchmarking set definitions. Furthermore, many publications deal with the recurrent theme of late stagnation, which may lead to the impression that continuing the optimization process would be a waste of computational resources. But is it? Recently, many challenges, issues, and questions have been raised regarding fair comparisons and recommendations towards good practice in benchmarking metaheuristic algorithms. The aim of this work is not to compare the performance of several well-known algorithms but to investigate the issues that can appear in benchmarking and comparing metaheuristic performance (regardless of the problem). This article studies the impact of a higher number of evaluations on a selection of metaheuristic algorithms. We examine the effect of a raised evaluation budget on the overall performance, mean convergence, and population diversity of selected swarm algorithms and IEEE CEC competition winners. Even though the final impact varies with the algorithms selected, it may significantly affect the final verdict of a metaheuristic comparison. This work picks an important benchmarking issue and carries out an extensive analysis, resulting in conclusions and possible recommendations for users working on real engineering optimization problems or researching metaheuristic algorithms. Especially nowadays, when metaheuristic algorithms are applied to increasingly complex optimization problems and meet machine learning in AutoML frameworks, we conclude that the objective function evaluation budget should be considered another vital optimization input variable.
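A minimal sketch of the budget effect the abstract describes, not the paper's own experiment: it runs two toy optimizers (pure random search and a simple (1+1) evolution strategy) on the sphere function under a small and a much larger evaluation budget, so the reported gap between them can be inspected at both budgets. The optimizers, the test function, and the budget values are illustrative assumptions only; the article itself studies swarm algorithms and IEEE CEC competition winners on the CEC benchmark suites.

```python
# Illustrative sketch only: toy optimizers and the sphere function stand in
# for the swarm algorithms, CEC winners, and CEC benchmarks used in the paper.
import numpy as np


def sphere(x):
    """Simple separable test function (assumed here; not a CEC benchmark)."""
    return float(np.sum(x ** 2))


def random_search(f, dim, budget, rng):
    """Sample uniformly in [-5, 5]^dim and keep the best value found."""
    best = np.inf
    for _ in range(budget):
        best = min(best, f(rng.uniform(-5.0, 5.0, dim)))
    return best


def one_plus_one_es(f, dim, budget, rng, sigma=1.0):
    """Crude (1+1)-ES with a simple multiplicative step-size adaptation."""
    x = rng.uniform(-5.0, 5.0, dim)
    fx = f(x)
    for _ in range(budget):
        y = x + sigma * rng.standard_normal(dim)
        fy = f(y)
        if fy <= fx:          # accept improvements
            x, fx = y, fy
            sigma *= 1.1      # expand the step size after success
        else:
            sigma *= 0.97     # shrink it after failure
    return fx


if __name__ == "__main__":
    dim = 10
    # "Standard" versus raised evaluation budget; both values are arbitrary.
    for budget in (1_000, 100_000):
        rs = random_search(sphere, dim, budget, np.random.default_rng(1))
        es = one_plus_one_es(sphere, dim, budget, np.random.default_rng(2))
        print(f"budget={budget:>7}: random search={rs:.3e}, (1+1)-ES={es:.3e}")
```

Comparing the printed results at the two budgets illustrates the abstract's point: the size of the gap between algorithms, and hence the verdict of a comparison, can depend on how many evaluations the benchmark allows.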
Czech name
—
Czech description
—
Classification
Type
J<sub>imp</sub> - Article in a specialist periodical, which is included in the Web of Science database
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
S - Specific research at universities
Others
Publication year
2021
Confidentiality
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific for result type
Name of the periodical
IEEE Access
ISSN
2169-3536
e-ISSN
—
Volume of the periodical
9
Issue of the periodical within the volume
Not specified
Country of publishing house
US - UNITED STATES
Number of pages
17
Pages from-to
44032-44048
UT code for WoS article
000633380900001
EID of the result in the Scopus database
2-s2.0-85103755169