Software testing remains a popular technique for achieving some degree of software quality and for gaining consumers' confidence, and it accounts for between 50 and 75 per cent of development cost. This research project investigates and develops approaches that use artificial intelligence techniques for automatic test case generation as well as for the evaluation of test results.
The three main activities associated with software testing are: (1) test data generation, (2) test execution, involving the use of the test data and the software under test (SUT), and (3) evaluation of test results. A key task in test data generation is obtaining an effective test set, while the existence and ease of use of a testing oracle is a key issue in the evaluation of test results. Owing to the immense input space, exhaustive testing is impossible. Test case generation must therefore ensure both the adequacy of the test data and its effectiveness in detecting defects, so that testing the SUT with an effective test set gives confidence in its correctness over all possible inputs.
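The three activities can be illustrated with a minimal sketch. The toy SUT, oracle, and generator below are all hypothetical, invented purely to show how generation, execution, and evaluation fit together; they are not part of this project's actual tooling.

```python
import random

def sut(x: int) -> int:
    """A toy system under test: intended to return the absolute value of x."""
    return x if x >= 0 else -x

def oracle(x: int, result: int) -> bool:
    """A reference oracle: decides whether the SUT's output is correct."""
    return result == abs(x)

def generate_test_data(n: int, seed: int = 0) -> list[int]:
    """Activity 1: test data generation (here, random sampling of the input space)."""
    rng = random.Random(seed)
    return [rng.randint(-1000, 1000) for _ in range(n)]

def run_tests(n: int = 100) -> list[int]:
    """Activities 2 and 3: execute the SUT on each input, evaluate via the oracle,
    and collect the inputs that expose a failure."""
    failures = []
    for x in generate_test_data(n):
        if not oracle(x, sut(x)):
            failures.append(x)
    return failures
```

Random sampling stands in here for any generation strategy; the AI-based approaches investigated in this project would replace `generate_test_data`, while the oracle problem concerns how to realise `oracle` when no simple reference implementation exists.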
The context-driven component evaluation (CdCE) project develops techniques and strategies that incorporate artificial intelligence techniques for component selection. Developers who build software systems from components need to be confident that they have selected the most suitable components for system composition. Manual searching is time consuming and unlikely to be able to consider large numbers of components.
The CdCE project is investigating ways to use artificial intelligence to assist the component selection process. The aim of the project is a generic assessment system that can automatically shortlist components for further evaluation.
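One simple way to picture automatic shortlisting is as scoring candidate components against a required specification. The `Component` type, feature-overlap score, and threshold below are assumptions for illustration only and do not represent the CdCE project's actual assessment criteria.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A hypothetical candidate component described by the features it offers."""
    name: str
    features: set = field(default_factory=set)

def shortlist(components: list, required: set, threshold: float = 0.8) -> list:
    """Score each component by the fraction of required features it provides,
    and keep those at or above the threshold, best first."""
    scored = []
    for c in components:
        score = len(c.features & required) / len(required)
        if score >= threshold:
            scored.append((c.name, score))
    return sorted(scored, key=lambda t: -t[1])
```

A learned classifier or similarity model could replace the overlap score; the point is only that shortlisting reduces a large candidate pool to a ranked few for further (human) evaluation.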