Thesis (Selection of subject) (version: 368)
Thesis details
Asynchronous Duet Benchmarking
Thesis title in Czech: Asynchronní Duet Benchmarking
Thesis title in English: Asynchronous Duet Benchmarking
Key words: benchmarking|cloud|duet|kontejnery
English key words: benchmarking|cloud|duet|containers
Academic year of topic announcement: 2021/2022
Thesis type: diploma thesis
Thesis language: English
Department: Department of Distributed and Dependable Systems (32-KDSS)
Supervisor: Mgr. Vojtěch Horký, Ph.D.
Author: Mgr. Tomáš Drozdík - assigned and confirmed by the Study Dept.
Date of registration: 27.05.2022
Date of assignment: 31.05.2022
Confirmed by Study dept. on: 06.06.2022
Date and time of defence: 08.02.2023 09:00
Date of electronic submission: 05.01.2023
Date of submission of printed version: 09.01.2023
Date of proceeded defence: 08.02.2023
Opponents: Michele Tucci, Ph.D.
Guidelines
Detecting performance regressions in software development usually requires repeated measurements and dedicated machines. The cloud offers a tempting alternative: abundant hardware on a pay-for-what-you-use basis. However, the cloud environment is inherently heterogeneous, and jobs are influenced by the load of other users, which increases the instability (variation) of the obtained measurements.

Duet benchmarking is an approach where benchmarks are run in parallel on the same machine: both measured software versions are thus susceptible to the same fluctuations of the cloud machine and it is possible to filter out variations caused by the cloud environment. However, this approach requires significant support from the benchmark harness to ensure concurrent execution.
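The core idea can be illustrated with a minimal sketch (the numbers below are invented for illustration and are not from any actual measurement): when both versions run concurrently, each pair of samples shares the same environmental fluctuation, so the pairwise difference cancels it.

```python
import statistics

# Hypothetical per-interval cloud noise (ms) shared by co-located runs.
noise = [0.0, 5.0, 1.0, 8.0, 2.0, 6.0]

# Iteration times of two versions measured concurrently on the same
# machine: each pair is affected by the same fluctuation.
a = [10.0 + n for n in noise]   # version A, true cost 10 ms
b = [11.0 + n for n in noise]   # version B, true cost 11 ms

# Looked at in isolation, the samples are dominated by the shared noise:
print(statistics.stdev(a))      # spread of several ms

# Pairwise differences cancel the shared component, exposing the 1 ms gap:
diffs = [bi - ai for ai, bi in zip(a, b)]
print(diffs)
```

In practice the duet method compares overlapping time intervals rather than individual iterations, but the cancellation principle is the same.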

This thesis aims to remove these limitations to enable broader use of this method.

The thesis would include a prototype for running duet benchmarks without any need for synchronization between the two instances. The evaluation of the new approach and the toolchain prototype would be based on established benchmark suites running under different conditions: from dedicated servers to public clouds. The evaluation would compare the new (asynchronous) method with the original duet (synchronized) method as well as with sequential execution (i.e., the current practice).

The thesis would also provide a prototype of an automated evaluation tool that would be able to detect whether the two tested instances differ in performance.
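One plausible shape for such a detection step (a sketch only, not the thesis's actual tool) is a nonparametric bootstrap confidence interval over per-pair performance ratios: if the interval excludes 1.0, the two instances likely differ.

```python
import random
import statistics

def bootstrap_ci(samples, iters=10000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for the mean of samples."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    means = sorted(
        statistics.fmean(rng.choices(samples, k=len(samples)))
        for _ in range(iters)
    )
    lo = means[int((alpha / 2) * iters)]
    hi = means[int((1 - alpha / 2) * iters) - 1]
    return lo, hi

# Hypothetical per-pair scores: B's iteration time divided by A's.
ratios = [1.08, 1.12, 1.05, 1.10, 1.09, 1.11, 1.07, 1.13]
lo, hi = bootstrap_ci(ratios)

# Interval entirely above 1.0 -> B is measurably slower than A.
print(lo, hi, "differ" if not (lo <= 1.0 <= hi) else "no evidence")
```

The choice of a bootstrap avoids distributional assumptions, which matters because benchmark timings are rarely normally distributed.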
References
[1] Lubomír Bulej et al. "Duet benchmarking: improving measurement accuracy in the cloud". In: Proceedings of the ACM/SPEC International Conference on Performance Engineering. 2020, pp. 100–107.
[2] Christoph Laaber, Joel Scheuner, and Philipp Leitner. "Software microbenchmarking in the cloud. How bad is it really?" In: Empirical Software Engineering 24.4 (2019), pp. 2469–2508.
[3] A. Abedi and T. Brecht. "Conducting repeatable experiments in highly variable cloud computing environments". In: Proceedings of the 8th ACM/SPEC International Conference on Performance Engineering. 2017, pp. 287–292.
 
Charles University | Information system of Charles University | http://www.cuni.cz/UKEN-329.html