Use case evaluation methodology M12

Description: 

This document describes the ARTIST use case evaluation methodology. It defines the objectives of the evaluation, the different techniques that will be employed, and the scope of the different approaches.

The evaluation method presented covers the two main approaches to evaluation:

  • Evaluation as a process supporting decision making
  • Evaluation as a judgement of how far objectives have been or are likely to be achieved

The methodology covers both assessments by people intimately familiar with the project (working in the project or very close to it) and assessments by those who are close to the target domain of the use cases but may be less involved with the project on a daily basis.

Three selected instruments are described:

  1. A Dynamic Uniform Quantified SWOT analysis
  2. An iterative requirements tracker
  3. An in-vivo evaluation method

The first instrument is a technique based on the Dynamic Uniform Quantified SWOT analysis chart. Perceived changes in the Strength, Weakness, Opportunity and Threat factors expressed about the project will be documented in order to map the evolution of the project within a dynamically evolving frame of reference. The primary goal is to support agility in decision making within the project's timeframe. It primarily targets subjects with close knowledge of the project's evolution.
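
As an illustration only, the following minimal Python sketch shows one way quantified SWOT factors could be recorded per evaluation cycle and their deltas computed; the category names, the 1-5 scoring scale and the factor identifiers are assumptions for the example and are not prescribed by the methodology.

```python
from dataclasses import dataclass

# Hypothetical record for one quantified SWOT factor (scale is an assumption).
@dataclass
class SwotFactor:
    category: str   # "Strength", "Weakness", "Opportunity" or "Threat"
    statement: str  # the factor as expressed by the respondent
    score: int      # perceived weight on an assumed 1-5 scale

def swot_deltas(previous: dict[str, SwotFactor], current: dict[str, SwotFactor]) -> dict[str, int]:
    """Return the per-factor score change between two evaluation cycles."""
    return {
        key: current[key].score - previous[key].score
        for key in current
        if key in previous
    }

# Example: a Strength factor perceived as weaker in the second cycle.
cycle_1 = {"S1": SwotFactor("Strength", "Strong consortium expertise", 4)}
cycle_2 = {"S1": SwotFactor("Strength", "Strong consortium expertise", 3)}
print(swot_deltas(cycle_1, cycle_2))  # {'S1': -1}
```

Observing and explaining such deltas across cycles is what makes the instrument useful for decision making, rather than the absolute scores of any single cycle.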

In the second technique, requirements fulfilment tracking, the project's perceived progress towards the pre-stated goals of the use case realizations, as expressed through the requirements, will be periodically assessed. This evolution will indicate progress towards the challenges posed by the four use cases included in the project, as they present the real-world touchstone against which future exploitability and relevance are marked. A forward projection is added to allow pro-active reporting and potential correction.
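
A minimal sketch of such tracking is given below, assuming a per-requirement fulfilment percentage recorded at each evaluation cycle and a simple linear projection from the last observed delta; the requirement identifier, cycle values and 0-100% scale are illustrative assumptions, not project data.

```python
# Project fulfilment for the next cycle from the last observed delta,
# clamped to the assumed 0-100% scale.
def project_next(history: list[float]) -> float:
    if len(history) < 2:
        return history[-1]
    delta = history[-1] - history[-2]
    return min(100.0, max(0.0, history[-1] + delta))

# Fulfilment (%) of a hypothetical requirement over three evaluation cycles.
req_history = {"UC1-R03": [20.0, 45.0, 60.0]}
for req_id, history in req_history.items():
    print(req_id, "projected next cycle:", project_next(history))  # 75.0
```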

As a third technique, a traditional evaluation methodology is employed. Targets of evaluation are defined, a coordinator is assigned, and actor roles, evaluation dimensions and metrics are specified. Subsequently, evaluation tools, success criteria and proposed action plans are established, together with an outline method for the detailed implementation of the evaluations within the context of the four use cases. The elements each use case must decide on in order to implement this evaluation process are specified. The complementarity and heterogeneity of the cases themselves give rise to differences in the parameterization and focus of the evaluation procedures, but homogeneity is aimed for in the meta-protocol of the approaches.
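
To make the per-use-case elements concrete, the following sketch lists them as a simple Python structure; all field values are placeholders invented for illustration and do not represent actual ARTIST use case decisions.

```python
# Illustrative per-use-case evaluation plan covering the elements named above.
evaluation_plan = {
    "target_of_evaluation": "migrated use case application",        # placeholder
    "coordinator": "use case owner",                                 # placeholder
    "actor_roles": ["evaluator", "end user", "technology provider"],
    "dimensions": ["performance", "usability", "maintainability"],
    "metrics": {"performance": "response time (ms)", "usability": "questionnaire score"},
    "evaluation_tools": ["questionnaire", "log analysis"],
    "success_criteria": {"performance": "response time within agreed threshold"},
    "action_plan": "re-assess and adjust if success criteria are not met",
}
```

Each use case would instantiate this meta-protocol with its own parameterization and focus, preserving the homogeneity of the overall approach.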

The definition of the methods and instruments was an iterative feedback process resulting from various consortium workshops throughout the first project year. Since, for the first two instruments, the most salient information will come from observing and explaining the deltas across evaluation cycles, an initial seeding of the data sets was achieved through a questionnaire submitted to the Use Case Owners and the Technology Providers in the project.