UnITeD - Unterstützung Inkrementeller TestDaten
The project is carried out in cooperation with AFRA GmbH. It aims to increase the degree of test automation for highly reliable and especially for safety-critical software far beyond the present state of the art, thus contributing both to fault detection and to cost reduction.
Within the project UnITeD (the acronym standing for "Unterstützung inkrementeller Testdaten", i.e. support for incremental test data), which is funded by the Bavarian State, new approaches to test automation are being developed and will be implemented by means of appropriate tools. Among other things, these techniques will contribute to a significant reduction of the testing effort in early development phases and will help to identify the need for additional verification activities at the code level. The test procedure is being developed and will be evaluated in an industrial environment at our pilot partner Siemens Medical Solutions.
The project is divided into two sub-projects. Both are concerned with automatic test data generation, the first focusing on the unit level, the second on the component integration level. Both sub-projects follow a common concept for automatic test data generation, which was successfully developed at our Department within the project .gEAR and which will be adapted from the code level to the model level. On this basis, the fault detection capability of model-based tests and of structural code tests will be successively compared, in order to use the insight gained for optimising the increment steps of the model-based testing phases.
Definition of model-based coverage criteria for unit testing: For the automatic generation of test cases with regard to structural model characteristics, coverage criteria are required at the model level. An extensive set of coverage criteria, consisting of existing generic criteria (such as statement coverage and branch coverage) as well as newly defined dedicated criteria, was analysed, in particular with respect to its underlying subsumption hierarchy.
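As an illustration, a model-level coverage criterion such as transition coverage can be measured by comparing the set of transitions the criterion demands with those actually traversed. The following minimal Python sketch uses a hypothetical toy state machine and is not part of the project's tooling:

```python
def transition_coverage(transitions, executed):
    """Fraction of the model's transitions exercised by a test run."""
    covered = set(executed) & set(transitions)
    return len(covered) / len(transitions)

# Toy state machine with four transitions between three states.
transitions = [("Idle", "Running"), ("Running", "Paused"),
               ("Paused", "Running"), ("Running", "Idle")]

# A test execution traversing three of the four transitions.
executed = [("Idle", "Running"), ("Running", "Paused"), ("Paused", "Running")]

print(transition_coverage(transitions, executed))  # 0.75
```

Other criteria from the analysed set differ only in which model entities (states, transitions, guard conditions) populate the reference set.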
Generation of test cases: In order to cover a model with respect to any of the coverage criteria defined, test scenarios (actions) as well as test data (parameters) are required. For the generation of test data, an evolutionary process was designed and implemented that automatically generates optimised test cases, maximising the number of entities covered according to a given model-based coverage criterion while minimising the number of required test cases.
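The suite-minimisation objective can be illustrated by a greedy stand-in for the evolutionary optimisation described above; the candidate test cases and entity names below are hypothetical:

```python
def minimise_suite(candidates, required):
    """Greedy set cover: pick test cases until all required
    model entities are covered (or none of the remaining
    candidates contributes any further coverage)."""
    suite, uncovered = [], set(required)
    while uncovered:
        best = max(candidates, key=lambda t: len(candidates[t] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            break  # remaining entities are unreachable by any candidate
        suite.append(best)
        uncovered -= gain
    return suite

# Hypothetical candidate test cases mapped to the entities they cover.
candidates = {
    "tc1": {"t1", "t2"},
    "tc2": {"t2", "t3", "t4"},
    "tc3": {"t4"},
}
print(minimise_suite(candidates, {"t1", "t2", "t3", "t4"}))  # ['tc2', 'tc1']
```

The evolutionary process additionally mutates and recombines the test data themselves; the greedy step above only shows how a small covering suite is selected from generated candidates.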
Development of a model simulator: In order to determine the coverage achieved by a given test case, a tool capable of simulating UML models was developed, enabling the user to capture all model entities covered during the execution of the test case. The insight gained hereby is subsequently used to optimise the generated test cases. The implemented tool is currently able to simulate state machines, activities and sequence diagrams.
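A drastically simplified sketch of such a simulator, restricted to flat state machines and assuming a hypothetical event-based transition table, might look as follows:

```python
class StateMachineSimulator:
    """Minimal simulator that records the transitions a test case covers."""

    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}
        self.covered = []               # transitions fired so far

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"no transition for {key}")
        self.covered.append(key)
        self.state = self.transitions[key]

sm = StateMachineSimulator("Idle", {
    ("Idle", "start"): "Running",
    ("Running", "pause"): "Paused",
    ("Paused", "resume"): "Running",
    ("Running", "stop"): "Idle",
})
for event in ["start", "pause", "resume"]:  # one test case as an event trace
    sm.fire(event)
print(sm.state)    # Running
print(sm.covered)  # which model entities this test case covered
```

The actual tool handles full UML state machines as well as activities and sequence diagrams; the recorded `covered` list corresponds to the entities fed back into the coverage optimisation.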
Support for regression testing: The tool was extended to support the tester in identifying already validated test cases that are not affected by changes in the system or model under test. These test cases need not be validated again, which reduces the overall testing effort. Only the additionally required test cases traversing modified parts of the test object are generated.
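The selection idea can be sketched as follows, assuming hypothetical test traces that record which model entities each validated test case traverses:

```python
def select_for_revalidation(test_traces, modified_entities):
    """Split validated test cases into those unaffected by the model
    change (reusable as-is) and those traversing modified entities
    (requiring re-validation)."""
    reusable, affected = [], []
    for name, trace in test_traces.items():
        if set(trace) & modified_entities:
            affected.append(name)
        else:
            reusable.append(name)
    return sorted(reusable), sorted(affected)

# Hypothetical traces: entities each validated test case traverses.
traces = {"tc1": ["t1", "t2"], "tc2": ["t3"], "tc3": ["t2", "t4"]}
print(select_for_revalidation(traces, {"t2"}))  # (['tc2'], ['tc1', 'tc3'])
```

Generation effort is then spent only on new test cases covering the modified entities not reached by the reusable suite.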
Support for test refinement: The refinement concept for state machines was defined and analysed as part of this task; in particular, a technique was implemented which assigns to each test case at a given model level all corresponding test cases at a refined level. The correspondence between coarser and finer test cases was visualised by means of so-called genealogical trees.
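One toy way to enumerate the refined descendants of a coarse test case, assuming a hypothetical mapping from abstract states to their substates, is shown below; a genealogical tree would then link the coarse test case (parent) to these descendants (children):

```python
from itertools import product

# Hypothetical refinement: each abstract state maps to its substates.
refinement = {"Active": ["Starting", "Working"], "Idle": ["Idle"]}

def refined_descendants(coarse_test, refinement):
    """All refined test cases corresponding to one coarse test case,
    obtained by substituting each abstract state with its substates."""
    return [list(path) for path in product(*(refinement[s] for s in coarse_test))]

print(refined_descendants(["Idle", "Active"], refinement))
# [['Idle', 'Starting'], ['Idle', 'Working']]
```

Real refinements may also lengthen the path through the substates; the sketch only shows the parent-child bookkeeping behind the genealogical trees.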
Definition of model-based Interface coverage criteria:
Based on former projects of this Department, interface coverage criteria for model-based integration testing were defined and the hierarchy underlying these criteria was analysed. For each test criterion identified, an effort indicator was derived from the maximum number of model entities to be covered. This metric is intended to support the test manager in choosing an appropriate testing strategy in early development phases.
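A minimal sketch of such an effort indicator, assuming hypothetical criteria and entity sets, simply counts the entities each criterion demands to be covered:

```python
def effort_indicator(criterion_entities):
    """Per criterion, the maximum number of model entities to be
    covered: an upper bound on the testing effort it implies."""
    return {name: len(entities) for name, entities in criterion_entities.items()}

# Hypothetical interface coverage criteria and the entities they demand.
criteria = {
    "all-interfaces": {"i1", "i2"},
    "all-interface-transitions": {"i1.t1", "i1.t2", "i2.t1"},
}
print(effort_indicator(criteria))
# {'all-interfaces': 2, 'all-interface-transitions': 3}
```

Comparing these counts along the subsumption hierarchy lets a test manager trade thoroughness against effort before any code exists.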
Automatic generation of test cases:
The evolutionary approach described in sub-project one was adapted to support the automatic generation and optimisation of test cases satisfying the above-mentioned interface coverage criteria. For this purpose, the model simulator from sub-project one was extended to focus on the interactions between communicating state machines.
Visualization of the model entities covered by a generated test suite:
The plug-ins for MagicDraw and Enterprise Architect developed within this task allow the tester to highlight the model entities covered by a test suite. They are based on display concepts that distinguish model entities whose coverage is mandatory from those whose coverage is only optional, using different graphical aids (more precisely, different colours and text field entries).
Implementation of a mutant generator:
For the purpose of evaluating the technique developed within this sub-task, a generator of model mutations was implemented. Several mutation operators at the model level were first described and classified; instantiating them in a common initial model yields a corresponding set of model mutants. The adequacy of a test suite can then be evaluated by executing the test suite against all models (the initial one as well as all its mutants) and comparing their behaviour. The number of model mutants whose behaviour differs from that of the initial model is interpreted as an indicator of the fault detection capability of the test suite.
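The mutation-score idea can be sketched as follows; the model behaviours and mutation operators below are toy stand-ins, not the project's actual operators:

```python
def mutation_score(original_behaviour, mutants_behaviour, test_suite):
    """Fraction of mutants whose behaviour differs from the original
    model on at least one test case (i.e. killed mutants)."""
    killed = sum(
        1 for mutant in mutants_behaviour
        if any(mutant(t) != original_behaviour(t) for t in test_suite)
    )
    return killed / len(mutants_behaviour)

# Toy model behaviour: the final state reached for a given input.
original = lambda x: "High" if x > 10 else "Low"

# Two hypothetical mutants: one negates the guard, one changes the bound.
mutants = [
    lambda x: "High" if x <= 10 else "Low",
    lambda x: "High" if x > 20 else "Low",
]

print(mutation_score(original, mutants, [5, 15]))  # 1.0 -- both mutants killed
```

A suite that kills all mutants of a model is, by this indicator, better at fault detection than one that leaves mutants behaviourally indistinguishable from the original.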