Abstract:
The objective of this thesis is to examine the efficacy of test impact analysis (TIA) in a continuous testing setting, where automated test cases are run routinely to ensure that high-quality code is integrated. While continuous testing can enhance code quality and reduce maintenance effort, it also leads to a notable increase in test execution overhead. In our study of test impact analysis, we developed a novel static class-level technique that uses JavaParser to build a dependency graph between test cases and source code classes based on their abstract syntax trees. We applied this technique to seven Java systems, evaluated it under two configurations, and found a correlation between the effectiveness of test impact analysis and the degree of interdependence between test cases and source code classes. Our study confirmed previous research indicating that, even for commits with minimal code changes, roughly 40% of test cases on average were impacted and required execution in the first configuration (changed source files), while approximately 58% of test cases on average were impacted in the second configuration (changed class tokens). Despite these findings, we maintain that test impact analysis is an effective tool for reducing test overhead. Moreover, we compared our proposed TIA technique with a state-of-the-art static class-level technique on commonly studied systems and observed that our approach achieved an improvement of approximately 13% in selecting impacted tests in the changed source files configuration, while it underperformed by about 6.5% in the changed class tokens configuration.
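To illustrate the general idea, the following is a minimal sketch, not the thesis implementation, of how JavaParser can link test classes to the source classes whose types appear in their ASTs. The `Test` suffix convention, class names, and command-line handling are illustrative assumptions; import resolution, package handling, and change detection from commits are omitted.

```java
import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration;
import com.github.javaparser.ast.type.ClassOrInterfaceType;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.*;
import java.util.stream.Stream;

public class ClassDependencyGraphSketch {

    // Maps each declared class (or test class) to the set of class names it references.
    private final Map<String, Set<String>> dependencies = new HashMap<>();

    public void addFile(Path javaFile) throws IOException {
        CompilationUnit cu = StaticJavaParser.parse(javaFile);
        for (ClassOrInterfaceDeclaration decl : cu.findAll(ClassOrInterfaceDeclaration.class)) {
            Set<String> referenced = new HashSet<>();
            // Collect every type name that appears in the class's AST.
            decl.findAll(ClassOrInterfaceType.class)
                .forEach(t -> referenced.add(t.getNameAsString()));
            dependencies.merge(decl.getNameAsString(), referenced,
                               (a, b) -> { a.addAll(b); return a; });
        }
    }

    // Selects test classes whose referenced classes intersect the changed classes.
    public Set<String> impactedTests(Set<String> changedClasses) {
        Set<String> impacted = new TreeSet<>();
        dependencies.forEach((name, refs) -> {
            boolean isTest = name.endsWith("Test"); // naming convention assumed here
            if (isTest && !Collections.disjoint(refs, changedClasses)) {
                impacted.add(name);
            }
        });
        return impacted;
    }

    public static void main(String[] args) throws IOException {
        ClassDependencyGraphSketch graph = new ClassDependencyGraphSketch();
        try (Stream<Path> files = Files.walk(Path.of(args[0]))) {
            for (Path p : (Iterable<Path>) files.filter(f -> f.toString().endsWith(".java"))::iterator) {
                graph.addFile(p);
            }
        }
        // The changed classes would normally come from a commit diff; taken from args here.
        Set<String> changed = args.length > 1 ? Set.of(args[1]) : Set.of();
        System.out.println(graph.impactedTests(changed));
    }
}
```

In this sketch, a test is considered impacted when any class it references appears in the changed set; a full static TIA technique would additionally resolve imports and packages and derive the changed set from the version history.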