Thesis details
Debugging Large and Long-Running Applications Using Generated Tests
Thesis title in Czech: Odstraňování chyb ve rozsáhlých a dlouho-běžících aplikacích s využitím generovaných testů
Thesis title in English: Debugging Large and Long-Running Applications Using Generated Tests
Academic year of announcement: 2023/2024
Thesis type: master's thesis
Thesis language:
Department: Department of Distributed and Dependable Systems (32-KDSS)
Supervisor: doc. RNDr. Pavel Parízek, Ph.D.
Author: hidden - assigned and confirmed by the Student Affairs Department
Date of registration: 07.11.2023
Date of assignment: 09.11.2023
Date of confirmation by the Student Affairs Department: 10.11.2023
Guidelines
Debugging is an essential part of the software development process. Its main purpose is to locate the root causes of observed errors and fix the corresponding bugs in source code. While there is some tool support for debugging programs, it remains a hard, tedious, and time-consuming activity, because many related tasks (such as inspecting bug reports and logs) still have to be done manually. Software developers are therefore in great need of techniques and tools that make debugging easier and faster, for example through a higher level of automation, more efficient inspection of buggy execution traces, and support for log analysis. This is even more apparent in the case of large software systems with long-running computations, where the use of interactive online debuggers in IDEs is rather limited and impractical.

One possible way of addressing these challenges is a recently proposed methodology (process) that involves the following steps, performed repeatedly:
- Fully automated dynamic analysis of the subject application that records the necessary information about the execution of methods and components identified by the developer as possibly related to the observed runtime error (see the first sketch after this list).
- Generation of tests for the relevant application components and methods, using information provided by the dynamic analysis (e.g., runtime values of call arguments), as illustrated by the second sketch below.
- Execution of the generated tests to determine the likely root cause of the runtime error in question, e.g., by discovering differences between passing and failing tests in terms of method call arguments and input program states (a sketch of this comparison appears further below).
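The first step could be supported by a lightweight run-time recorder along the lines of the following sketch. The CallRecorder class and its log format are illustrative assumptions made for this example; the actual entry probes would be injected into the selected methods by an instrumentation tool such as DiSL or ASM (see the literature list below).

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.stream.Collectors;

// Hypothetical run-time recorder; an instrumentation layer (e.g., probes injected
// with DiSL or ASM) would call recordEntry(...) at the entry of every method the
// developer marked as relevant to the observed error.
public final class CallRecorder {

    private static final Path LOG = Path.of("call-trace.log");

    private CallRecorder() { }

    // Records the fully qualified method name together with string renderings
    // of its runtime arguments, one line per call, in a simple append-only log.
    public static synchronized void recordEntry(String methodName, Object... args) {
        String rendered = Arrays.stream(args)
                .map(CallRecorder::render)
                .collect(Collectors.joining(", "));
        String line = methodName + "(" + rendered + ")" + System.lineSeparator();
        try {
            Files.writeString(LOG, line, StandardCharsets.UTF_8,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            // Recording must never break the subject application.
            System.err.println("CallRecorder: " + e.getMessage());
        }
    }

    // Keeps the recorded representation small to limit run-time overhead.
    private static String render(Object arg) {
        if (arg == null) {
            return "null";
        }
        String s = String.valueOf(arg);
        return s.length() > 64 ? s.substring(0, 64) + "..." : s;
    }
}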

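The second step might then be approximated by a generator that replays recorded calls as JUnit tests. The sketch below is a minimal illustration under strong assumptions: the RecordedCall record and TestSourceGenerator class are hypothetical, arguments are assumed to be expressible as Java literals, and only classes with a no-argument constructor are handled; a real implementation would have to generalize all of this.

import java.util.List;

// Illustrative sketch of the test-generation step: it turns one recorded call
// (method name plus literal renderings of its arguments) into the source text
// of a JUnit 5 test that replays the call.
public final class TestSourceGenerator {

    public record RecordedCall(String className, String methodName, List<String> argumentLiterals) { }

    public static String generateTest(RecordedCall call, int index) {
        String args = String.join(", ", call.argumentLiterals());
        return """
                import org.junit.jupiter.api.Test;
                import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

                class %sGeneratedTest%d {
                    @Test
                    void replayRecordedCall() {
                        %s instance = new %s();
                        // Replays the recorded call; a failing replay marks this
                        // input as relevant to the observed runtime error.
                        assertDoesNotThrow(() -> instance.%s(%s));
                    }
                }
                """.formatted(call.className(), index, call.className(), call.className(),
                        call.methodName(), args);
    }

    public static void main(String[] args) {
        // Example recorded call; the class and argument values are made up.
        RecordedCall call = new RecordedCall("OrderService", "applyDiscount", List.of("42", "\"GOLD\""));
        System.out.println(generateTest(call, 1));
    }
}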
The main goal of this project is (1) to create a basic prototype implementation of the necessary infrastructure for all steps of the whole debugging process and (2) then to improve and optimize the procedure for generating tests. Individual components of the prototype implementation should have practical performance, which especially means reasonable overhead in the case of the dynamic analysis. The main criteria for test generation include high coverage, a small number of tests without redundancy (achieved using reduction techniques), and precision geared towards locating the root causes of reported errors. The prototype implementation will target Java applications. It should be designed in a modular and extensible way, in order to enable the replacement of prototype modules with advanced variants and porting to other languages (C#/.NET). Experimental evaluation will be performed on several large Java programs.
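The comparison of passing and failing tests mentioned in the third step above could, in its simplest form, look like the following sketch. It assumes that each generated test is identified by the string rendering of its recorded argument tuple and merely reports the tuples that never pass; a practical root-cause analysis would of course work with richer information about input program states.

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal sketch of the comparison step: given the outcomes of the generated
// tests, keyed by the recorded argument tuple they replay, it reports the
// tuples observed only in failing runs as candidate triggers of the error.
public final class FailureDiff {

    public static Set<String> suspiciousInputs(Map<String, Boolean> outcomeByInput) {
        Set<String> failing = new HashSet<>();
        Set<String> passing = new HashSet<>();
        outcomeByInput.forEach((input, passed) -> (passed ? passing : failing).add(input));
        failing.removeAll(passing);   // keep only the inputs that never pass
        return failing;
    }

    public static void main(String[] args) {
        // Example outcomes; the method name and argument values are made up.
        Map<String, Boolean> outcomes = Map.of(
                "applyDiscount(42, \"GOLD\")", true,
                "applyDiscount(0, \"GOLD\")", false,
                "applyDiscount(7, \"SILVER\")", true);
        System.out.println("Likely error-triggering inputs: " + suspiciousInputs(outcomes));
    }
}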
References
1. DiSL, a domain-specific language for Java bytecode instrumentation. https://disl.ow2.org/
2. Java Virtual Machine Tool Interface (JVM TI). https://docs.oracle.com/javase/8/docs/technotes/guides/jvmti/
3. ASM bytecode manipulation framework. https://asm.ow2.io/
4. S. Zhang, D. Saff, Y. Bu, and M. Ernst. Combined Static and Dynamic Automated Test Generation. ISSTA 2011.
5. D. Hao, L. Zhang, X. Wu, H. Mei, and G. Rothermel. On-Demand Test Suite Reduction. ICSE 2012.
6. A. Shi, A. Gyori, M. Gligoric, A. Zaytsev, and D. Marinov. Balancing Trade-offs in Test-Suite Reduction. FSE 2014.
7. Y. Tian, Z. Xu, Y. Dong, C. Sun, and S.-C. Cheung. Revisiting the Evaluation of Deep Learning-Based Compiler Testing. IJCAI 2023.