Methodology and techniques for deriving test cases from models

Description: 

In the context of the ARTIST project, the migration is performed using model-driven engineering techniques. Thus, the original software is reverse-engineered to obtain a model-based representation in terms of platform-specific models. These models are then transformed into more abstract models, such as UML models, which describe the original application in a platform-independent way. The actual migration is performed by applying model transformations and code generators to create the migrated software.
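
To make this pipeline concrete, the following is a minimal sketch of its stages over a toy model representation. All class and function names are hypothetical illustrations and not part of the actual ARTIST tool chain:

```python
# A minimal sketch of the migration pipeline described above. All classes
# and functions here are hypothetical placeholders used for illustration;
# they do not reflect the actual ARTIST tooling.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PlatformSpecificModel:
    """PSM: model obtained by reverse-engineering the original code."""
    platform: str
    elements: List[str] = field(default_factory=list)

@dataclass
class PlatformIndependentModel:
    """PIM: abstract (UML-like) representation of the application."""
    elements: List[str] = field(default_factory=list)

def reverse_engineer(source_code: str) -> PlatformSpecificModel:
    # Model discovery is omitted; only the shape of the result matters here.
    return PlatformSpecificModel(platform="legacy", elements=[source_code])

def to_pim(psm: PlatformSpecificModel) -> PlatformIndependentModel:
    # Model transformation: lift platform-specific concepts to UML-like ones.
    return PlatformIndependentModel(elements=psm.elements)

def generate_code(pim: PlatformIndependentModel, target: str) -> str:
    # Code generation for the target platform closes the pipeline.
    return f"// {target} code generated from {len(pim.elements)} model element(s)"

migrated = generate_code(to_pim(reverse_engineer("/* original app */")), "cloud")
```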

When software is migrated to new technologies or platforms, one important task is to ensure its quality after the migration has been performed. In particular, two aspects have to be considered: whether the expected improvements of the migration have been achieved, and whether the migrated software still meets the original specification. In this document, we focus on the latter aspect, i.e., whether the migrated software still behaves the same as the original one in terms of functional requirements. Therefore, the behavioural equivalence of the original and the migrated software has to be ensured. More precisely, since models are the central artefacts of the migration in ARTIST, we use these models to also check behavioural equivalence at the model level, which is complemented by the end-user-based testing component.
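
Operationally, behavioural equivalence in terms of functional requirements can be read as: for every relevant input, the migrated software produces the same observable output as the original. A minimal sketch of this idea, assuming both versions are available as executable functions (the names and toy implementations below are illustrative, not part of the actual component):

```python
# A minimal sketch of behavioural equivalence as output equivalence.
# The function names and toy implementations are purely illustrative
# assumptions, not the ARTIST component itself.
from typing import Any, Callable, Iterable

def behaviourally_equivalent(original: Callable[[Any], Any],
                             migrated: Callable[[Any], Any],
                             test_inputs: Iterable[Any]) -> bool:
    """True iff both versions produce the same output for every test input."""
    return all(original(x) == migrated(x) for x in test_inputs)

# Example: the migrated (rewritten) implementation must agree with the original.
original = lambda n: sum(range(n + 1))     # original implementation
migrated = lambda n: n * (n + 1) // 2      # migrated implementation
assert behaviourally_equivalent(original, migrated, test_inputs=range(100))
```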

This document presents and explains the component that realizes the behavioural comparison at the model level, using the activity diagrams obtained during the migration. In a migration process, not the complete software is modernized, but only specific parts of it. Therefore, the prototype focuses on checking the behavioural equivalence of exactly those parts that have changed. To do so, the domain knowledge of the user is exploited: the user becomes a key part of the behavioural equivalence testing, since he/she decides which tests must be performed. This also avoids the state-space explosion problem that arises when defining test cases for the complete software, as sketched below.
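
The following sketch illustrates how test paths could be derived from an activity diagram while restricting attention to the user-selected, changed parts. The graph encoding and node names are hypothetical assumptions made for illustration; the actual component operates on the UML activity diagrams produced by the migration:

```python
# A minimal sketch of deriving test cases from an activity diagram,
# assuming the diagram is encoded as a successor map of activity nodes.
# Only paths passing through user-selected (changed) nodes are kept.
# The encoding and node names below are hypothetical.
def derive_test_paths(edges, start, final, changed_nodes):
    """Enumerate start-to-final paths that touch at least one changed node."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == final:
            if changed_nodes & set(path):      # keep only relevant paths
                paths.append(path)
            continue
        for succ in edges.get(node, []):
            if succ not in path:               # ignore cycles in this sketch
                stack.append((succ, path + [succ]))
    return paths

# Example activity diagram: initial -> validate -> (persist | reject) -> final
edges = {
    "initial": ["validate"],
    "validate": ["persist", "reject"],
    "persist": ["final"],
    "reject": ["final"],
}
# The user marks "persist" as changed by the migration; only one path qualifies.
print(derive_test_paths(edges, "initial", "final", {"persist"}))
# -> [['initial', 'validate', 'persist', 'final']]
```

Restricting the enumeration to paths that touch changed nodes is what keeps the number of test cases manageable: the unchanged parts of the diagram contribute no test paths of their own.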

Errors regarding the functional equivalence of the original and migrated applications that are detected at the model level are cheaper to repair than those detected after the application has been deployed. A simple reason for this is that, while the application is only specified at the model level, no effort has yet been spent on deploying it, so the respective changes can be performed in the model before the application code is generated. Checking the functional equivalence at the model level does not guarantee that the applications also behave equivalently at the code level, but at least a part of the functionality has been verified; this check is then complemented by the end-user-based testing component.