Testing object-oriented software is critical because object-oriented languages are commonly used in developing modern software systems. Many efficient test input generation techniques for object-oriented software have been proposed; however, state-of-the-art algorithms yield very low code coverage (e.g., less than 50%) on large-scale software. Therefore, one important yet challenging problem is to generate desirable input objects for receivers and arguments that can achieve high code coverage (such as branch coverage) or help reveal bugs. Desirable objects help tests exercise new parts of the code. However, generating such objects has been a significant challenge for automated test input generation tools, partly because the search space for desirable objects is huge. To address this challenge, we propose a novel approach called Capture-based Automated Test Input Generation for Object-Oriented Unit Testing (CAPTIG). The contributions of this proposed research are the following.

First, CAPTIG enhances method-sequence generation techniques. Our approach introduces a set of new algorithms for guided input and method selection that increase code coverage. In addition, CAPTIG efficiently reduces the amount of generated input.

Second, CAPTIG captures objects dynamically from program execution during either system testing or real use. These captured inputs can support existing automated test input generation tools, such as the random testing tool Randoop, to achieve higher code coverage.

Third, CAPTIG statically analyzes branches that have not yet been covered and attempts to exercise them by mutating existing inputs, based on weakest-precondition analysis. This technique also contributes to achieving higher code coverage.

Fourth, CAPTIG can reproduce software crashes from a crash stack trace. This feature can considerably reduce the cost of analyzing and removing the causes of crashes.

In addition, each CAPTIG technique can be applied independently to leverage existing testing techniques. We anticipate that our approach can achieve higher code coverage in less time and with fewer test inputs. To evaluate this approach, we performed experiments on well-known large-scale open-source software and found that our approach can help achieve higher code coverage with less time and fewer test inputs.

Changes in evolving software systems are often managed using an issue repository. This repository may contribute to information overload in an organization, but it may also help in navigating the software system. Software developers spend much effort on issue triage, a task in which the sheer number of issue reports becomes a significant challenge. One specific difficulty is to determine whether a newly submitted issue report is a duplicate of a previously reported issue, whether it contains complementary information related to a known issue, or whether it addresses something that has not been observed before. However, the large number of issue reports may also be used to help a developer navigate the software development project to find related software artifacts, which are required both to understand the issue itself and to analyze the impact of a possible issue resolution. Defect taxonomies collect and organize the domain knowledge and project experience of experts and are a valuable instrument of system testing for several reasons: they provide systematic backup for the design of tests, support decisions on the allocation of testing resources, and provide a suitable basis for measuring product and test quality. This chapter presents recommendation systems that use information in issue repositories to support these two challenges, by supporting either duplicate detection of issue reports or navigation of artifacts in evolving software systems.
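The method-sequence generation that CAPTIG enhances can be illustrated with a toy sketch. The following Python code is not CAPTIG's actual implementation (which targets Java programs); it is a minimal, hypothetical illustration of feedback-directed random sequence generation in the spirit of Randoop: objects produced by earlier calls are kept in a pool and reused as inputs to later calls, and sequences that raise exceptions are discarded.

```python
import random

class RandomSequenceGenerator:
    """Toy feedback-directed random generator: build call sequences by
    picking a method at random and drawing its receiver from a pool of
    previously constructed objects (the feedback)."""

    def __init__(self, factories, methods, seed=0):
        self.rng = random.Random(seed)  # seeded for reproducibility
        self.factories = factories      # zero-argument constructors
        self.methods = methods          # unary methods: f(obj)
        self.pool = []                  # objects produced so far

    def step(self):
        # Occasionally create a fresh object; otherwise extend a sequence
        # by calling a random method on a previously produced object.
        if not self.pool or self.rng.random() < 0.3:
            self.pool.append(self.rng.choice(self.factories)())
        else:
            obj = self.rng.choice(self.pool)
            method = self.rng.choice(self.methods)
            try:
                result = method(obj)
                if result is not None:
                    self.pool.append(result)  # reuse as a future input
            except Exception:
                pass  # discard sequences that raise

gen = RandomSequenceGenerator(
    factories=[list, dict],
    methods=[lambda o: o.copy(), len],
)
for _ in range(100):
    gen.step()
print(len(gen.pool))
```

A capture-based approach additionally seeds the pool with objects recorded from real executions, so the generator starts from realistic states rather than only from empty constructors.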
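The duplicate-detection task described above is commonly approached with textual similarity over issue-report contents. The following Python sketch is a minimal, self-contained illustration (not the chapter's actual recommendation system) using TF-IDF weighting and cosine similarity; the issue titles are invented for the example.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Compute simple TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(t for doc in docs for t in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical issue titles; the last one paraphrases the first.
reports = [
    "crash when saving file to network drive",
    "toolbar icons render blurry on high dpi displays",
    "application crashes while saving a file to a network drive",
]
vecs = tf_idf_vectors([r.split() for r in reports])
new = vecs[-1]  # treat the last report as the newly submitted one
scores = [(cosine(new, v), r) for v, r in zip(vecs[:-1], reports[:-1])]
best_score, best_report = max(scores)
print(best_report)
```

A real triage recommender would rank all candidates by such a score and flag the top matches as potential duplicates for a human to confirm.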