Generation of Benchmark of Software Testing Methods for Java with Realistic Introduced Errors
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F49777513%3A23520%2F23%3A43969479" target="_blank" >RIV/49777513:23520/23:43969479 - isvavai.cz</a>
Result on the web
<a href="https://annals-csis.org/proceedings/2023/drp/3165.html" target="_blank" >https://annals-csis.org/proceedings/2023/drp/3165.html</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.15439/2023F3165" target="_blank" >10.15439/2023F3165</a>
Alternative languages
Result language
English
Title in the original language
Generation of Benchmark of Software Testing Methods for Java with Realistic Introduced Errors
Result description in the original language
This paper deals with a benchmark of automated test generation methods for software testing. To enable mutual comparison of various methods and to investigate their ability to find realistic errors, we propose a benchmark consisting of several applications with deliberately introduced errors, which the investigated test generation methods should find during benchmarking. To enable easy introduction of errors of various types into the benchmark applications, we created the Testing Applications Generator (TAG) tool. The description of TAG, along with two applications we developed as part of the intended benchmark, is the main contribution of this paper.
Title in English
Generation of Benchmark of Software Testing Methods for Java with Realistic Introduced Errors
Result description in English
This paper deals with a benchmark of automated test generation methods for software testing. To enable mutual comparison of various methods and to investigate their ability to find realistic errors, we propose a benchmark consisting of several applications with deliberately introduced errors, which the investigated test generation methods should find during benchmarking. To enable easy introduction of errors of various types into the benchmark applications, we created the Testing Applications Generator (TAG) tool. The description of TAG, along with two applications we developed as part of the intended benchmark, is the main contribution of this paper.
Classification
Type
O - Other results
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Links of the result
Project
—
Links
S - Specific university research
Others
Year of implementation
2023
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations