Prioritizing Program Elements: A Pre-testing Effort
To Improve Software Quality

Ray, Mitrabinda (2012) Prioritizing Program Elements: A Pre-testing Effort
To Improve Software Quality.
PhD thesis.



Test effort prioritization is a powerful technique that enables the tester to use limited test resources effectively by streamlining the test effort. How test effort is distributed matters to a test organization. We address prioritization-based testing strategies in order to do the best possible job with limited test resources; the proposed techniques benefit the tester particularly under looming deadlines. Some parts of a system are more critical and more sensitive to bugs than others, and should therefore be tested more thoroughly. The rationale behind this thesis is to estimate the criticality of the various parts of a system and to prioritize those parts for testing according to their estimated criticality. We propose several prioritization techniques at different phases of the Software Development Life Cycle (SDLC); the chapters of the thesis set test priority based on different factors of the system. The purpose is to identify and focus on the critical and strategic areas and to detect the important defects as early as possible, before the product release. Focusing on these areas helps to improve the reliability of the system within the available resources. We present code-based and architecture-based techniques to prioritize the testing tasks. In these techniques, we analyze the criticality of a component within a system using a combination of its internal and external factors. We have conducted a set of experiments on case studies and observed that the proposed techniques are efficient and address the challenge of prioritization.

We propose a novel idea of calculating the influence of a component, where influence refers to the contribution or usage of the component at every execution step. This influence value serves as a metric in test effort prioritization. We first calculate the influence through static analysis of the source code and then refine the computation through dynamic analysis. We have experimentally shown that decreasing the reliability of an element with a high influence value drastically increases the failure rate of the system, whereas this does not hold for an element with a low influence value.

We estimate the criticality of a component within a system by considering both its internal and external factors, such as influence value, average execution time, structural complexity, severity, and business value, and we prioritize the components for testing according to their estimated criticality. We compared our approach with a related approach in which components were prioritized on the basis of their structural complexity only. From the experimental results, we observed that our approach helps to reduce the failure rate in the operational environment; the consequences of the observed failures were also less severe than with the related approach. Priority should be established in order of importance or urgency. As the importance of a component may vary at different points of the testing phase, we propose a multi-cycle-based test effort prioritization approach, in which the same component is assigned different priorities at different test cycles.

Test effort prioritization at an early phase of the SDLC has a greater impact than prioritization at a later phase. The analysis and design stage is critical compared to other stages: detecting and correcting errors at this stage is less costly than doing so at later stages of the SDLC. Designing metrics at this stage helps the test manager make resource allocation decisions. We propose a technique to estimate the criticality of a use case at the design level, computed on the basis of complexity and business value. We evaluated the complexity of a use case analytically through a set of data collected at the design level, and observed experimentally that assigning test effort to use cases according to their estimated criticality improves the reliability of the system under test.

Test effort prioritization based on risk is a powerful technique for streamlining the test effort, as the tester can exploit the relationship between risk and testing effort. We propose a technique to estimate the risk associated with the various states of a component at the component level, and with use case scenarios at the system level; the estimated risks are used to enhance resource allocation decisions. An intermediate graph, called the Inter-Component State-Dependence Graph (ISDG), is introduced to obtain the complexity of a component state, which is used for risk estimation. We empirically evaluated the estimated risks and assigned test priority to the components and scenarios within a system according to their estimated risks. An experimental comparative analysis showed that a testing team guided by our technique achieved higher test efficiency than one guided by a related approach.
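The component-level criticality model described above can be pictured as a weighted scoring scheme over the named factors. The sketch below is only an illustration, not the thesis's exact formulation: the factor names mirror the abstract, but the weights, the min-max normalization, the linear combination, and the component data are all invented for the example.

```python
# Illustrative sketch: rank components by a weighted sum of normalized factors.
# The weights and data here are hypothetical; the thesis's actual model may differ.

def normalize(values):
    """Scale a list of raw factor values to [0, 1] via min-max normalization."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def prioritize(components, weights):
    """Return component names sorted by descending weighted criticality score.

    components: dict name -> dict of raw factor values
    weights:    dict factor -> weight (assumed non-negative)
    """
    factors = list(weights)
    names = list(components)
    # Normalize each factor across all components so heterogeneous units
    # (seconds, complexity counts, severity grades) become comparable.
    norm = {n: {} for n in names}
    for f in factors:
        col = normalize([components[n][f] for n in names])
        for n, v in zip(names, col):
            norm[n][f] = v
    scores = {n: sum(weights[f] * norm[n][f] for f in factors) for n in names}
    return sorted(names, key=lambda n: scores[n], reverse=True)

# Hypothetical component data for three components of a system under test.
components = {
    "Billing": {"influence": 0.9, "exec_time": 120, "complexity": 34,
                "severity": 5, "business_value": 8},
    "Reports": {"influence": 0.2, "exec_time": 300, "complexity": 12,
                "severity": 2, "business_value": 3},
    "Login":   {"influence": 0.7, "exec_time": 40, "complexity": 20,
                "severity": 4, "business_value": 9},
}
weights = {"influence": 0.3, "exec_time": 0.1, "complexity": 0.2,
           "severity": 0.2, "business_value": 0.2}

print(prioritize(components, weights))  # most critical component first
```

Under these assumed weights, the highly influential, complex, high-severity component ranks first and receives test effort earliest; the same scoring could be re-run with different weights at each test cycle to realize the multi-cycle prioritization idea.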

Item Type: Thesis (PhD)
Uncontrolled Keywords: Program Dependence Graph, Interaction Overview Diagram, Access Modifier Changes, Inter-Component State-Dependence Graph, Software Development Life Cycle
Subjects: Engineering and Technology > Computer and Information Science > Data Mining
Divisions: Engineering and Technology > Department of Computer Science
ID Code: 4477
Deposited By: Hemanta Biswal
Deposited On: 08 May 2013 14:46
Last Modified: 08 May 2013 14:46
Supervisor(s): Mohapatra, D P
