Improving Benchmark Result Validation in the Renaissance Benchmark Suite
| Thesis title in Czech: | Zlepšení validace výsledků benchmarků v projektu Renaissance Benchmark Suite |
|---|---|
| Thesis title in English: | Improving Benchmark Result Validation in the Renaissance Benchmark Suite |
| Key words: | validace výsledků benchmarků, návrh benchmarků |
| English key words: | benchmark result validation, benchmark workload design |
| Academic year of topic announcement: | 2023/2024 |
| Thesis type: | Bachelor's thesis |
| Thesis language: | English |
| Department: | Department of Distributed and Dependable Systems (32-KDSS) |
| Supervisor: | doc. Ing. Lubomír Bulej, Ph.D. |
| Author: | Bc. David Lovíšek - assigned and confirmed by the Study Dept. |
| Date of registration: | 10.06.2024 |
| Date of assignment: | 19.06.2024 |
| Confirmed by Study dept. on: | 19.06.2024 |
| Date and time of defence: | 05.09.2024 09:00 |
| Date of electronic submission: | 18.07.2024 |
| Date of submission of printed version: | 18.07.2024 |
| Date of proceeded defence: | 05.09.2024 |
| Opponents: | Mgr. Vojtěch Horký, Ph.D. |
Guidelines
The Renaissance benchmark suite provides a collection of diverse benchmarks designed to evaluate the performance of modern Java Virtual Machines (JVMs). An important part of each benchmark is a validation phase which checks the correctness of the benchmark result, but not all benchmarks in the suite currently have a validation phase, partly due to the inherent challenges associated with concurrent workloads and numerical stability.
The aim of this thesis is to introduce a robust validation phase to selected benchmarks within the Renaissance benchmark suite, ensuring that they produce correct results regardless of platform, updates to library dependencies, or refactoring of benchmark code. The focus will be on identifying benchmarks currently lacking validation support and developing a suitable validation phase for selected benchmarks. As an outcome, the thesis should document the current state of validation within the Renaissance benchmark suite, select a set of benchmarks for which a validation phase will be developed, and evaluate the newly added validation phases to demonstrate their correctness and performance impact.
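To illustrate the kind of check the guidelines call for, the sketch below shows one way a validation phase can tolerate the numerical noise introduced by concurrent workloads: instead of requiring bit-exact output, it accepts the computed value if it falls within a relative tolerance of a reference value. This is a minimal, self-contained sketch in Java; the `BenchmarkResult` interface and the `withinTolerance` helper are hypothetical stand-ins used for illustration, not part of the actual Renaissance harness API.

```java
/**
 * Minimal sketch of a tolerant validation step for a benchmark whose
 * result is a floating-point aggregate (e.g. a sum computed in parallel).
 * The BenchmarkResult interface below is a hypothetical stand-in, not
 * the Renaissance API.
 */
public final class ToleranceValidationSketch {

    /** Hypothetical result type: validation throws when the result is wrong. */
    interface BenchmarkResult {
        void validate();
    }

    /** Accepts the computed value if it lies within a relative tolerance of the reference. */
    static BenchmarkResult withinTolerance(String name, double expected, double actual, double relEps) {
        return () -> {
            double diff = Math.abs(expected - actual);
            double bound = relEps * Math.max(Math.abs(expected), Math.abs(actual));
            if (diff > bound) {
                throw new AssertionError(String.format(
                    "%s: expected %.12g, got %.12g (|diff| %.3g exceeds bound %.3g)",
                    name, expected, actual, diff, bound));
            }
        };
    }

    public static void main(String[] args) {
        // Example: a parallel reduction may reorder additions, so the result
        // can differ from the sequential reference in the last few bits.
        double reference = 3.141592653589793;
        double computed  = 3.141592653589790;   // value produced by the workload
        withinTolerance("pi-estimate", reference, computed, 1e-12).validate();
        System.out.println("validation passed");
    }
}
```

For workloads whose output is deterministic only up to ordering, a similar check can compare an order-independent summary instead, such as an element count or a checksum over sorted output.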
References
[1] Blackburn, S. M., et al. The DaCapo Benchmarks: Java Benchmarking Development and Analysis. Proc. ACM SIGPLAN Conf. on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA). ACM, 2006.
[2] The DaCapo Benchmark Suite, https://github.com/dacapobench/dacapobench
[3] Prokopec, A., et al. Renaissance: Benchmarking Suite for Parallel Applications on the JVM. Proc. ACM SIGPLAN Conf. on Programming Language Design and Implementation (PLDI). ACM, 2019.
[4] The Renaissance Benchmark Suite, https://github.com/renaissance-benchmarks/renaissance