Cloud services modelling and performance analysis framework M30

Description: 

This document is associated with four prototypes delivered in Work Package 7. The prototypes aim to provide a modelling and performance analysis framework that supports the creation of cloud models and measures performance and availability aspects of cloud services. Implementation details and usage instructions are included in this document.

The first prototype is a meta-model that will be used as the baseline for creating cloud models (Section 2). The meta-model defines the concepts and relationships that describe the main capabilities of resources offered by cloud platforms. It is realized as an extension to the UML meta-model, i.e. as a profile / collection of profiles covering specific aspects (e.g. availability concepts, Appendix B; performance concepts, Appendix C; etc.). Profiles created from this meta-model (Appendix F) will be used in ARTIST during the migration of an application in order to select the target platform that best matches the requirements and functionalities needed by the re-engineered application.
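
To make the kind of information captured by the meta-model concrete, the following is a minimal, purely illustrative sketch in Python. The class and field names are hypothetical stand-ins for the concepts and relationships described above (providers, resources, performance and availability attributes); they are not the actual stereotypes defined in the UML profiles.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PerformanceAttribute:          # cf. performance concepts (Appendix C)
        name: str                        # e.g. "cpu_throughput" (illustrative)
        value: float
        unit: str                        # e.g. "MIPS" (illustrative)

    @dataclass
    class AvailabilityAttribute:         # cf. availability concepts (Appendix B)
        sla_uptime_percent: float        # availability promised in the provider SLA

    @dataclass
    class CloudResource:
        name: str                        # e.g. a VM instance type
        performance: List[PerformanceAttribute] = field(default_factory=list)
        availability: Optional[AvailabilityAttribute] = None

    @dataclass
    class CloudProvider:
        name: str
        resources: List[CloudResource] = field(default_factory=list)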

The second prototype consists of a software suite for benchmarking cloud resources in order to extract performance-related data and include it in the cloud models. Since performance aspects are key criteria in the selection of the target platform, the availability of this data in the cloud models simplifies the migration phase and makes it more accurate. The software suite includes an installation, configuration and execution tool that incorporates a set of third-party benchmarking tools selected for their effectiveness in evaluating cloud resource performance, a database for data storage, and a user interface that automates the management of test executions and results. It also includes a GUI that lets end users visualize the results of the performance experiments. The sketch below illustrates the execution-and-storage step of this workflow.
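
The following is a minimal sketch of that step, under assumptions: a third-party benchmarking tool is invoked as a shell command and its raw output is stored in a local SQLite table. The tool invocation, table schema and helper names are hypothetical, not the suite's actual interfaces.

    import sqlite3, subprocess, time

    def run_benchmark(command):
        """Execute one third-party benchmarking tool and return its raw output."""
        return subprocess.run(command, capture_output=True,
                              text=True, check=True).stdout

    def store_result(db_path, tool, provider, resource, output):
        """Append one benchmark run to the raw-data database."""
        con = sqlite3.connect(db_path)
        con.execute("""CREATE TABLE IF NOT EXISTS results
                       (ts REAL, tool TEXT, provider TEXT,
                        resource TEXT, output TEXT)""")
        con.execute("INSERT INTO results VALUES (?,?,?,?,?)",
                    (time.time(), tool, provider, resource, output))
        con.commit()
        con.close()

    # Example: run a CPU benchmark on a given provider VM type (names illustrative)
    out = run_benchmark(["sysbench", "cpu", "run"])
    store_result("raw_data.db", "sysbench", "ProviderA", "vm.small", out)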

The third prototype consists of an abstracted software library for measuring the availability of services regardless of which supported provider they are deployed on. The library can measure availability against the respective provider's SLA definitions and thus detect potential SLA violations, including the evidence of each violation. It can also provide recommendations as to whether a specific service instance is viable for an SLA, based on the provider's preconditions.
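
As a minimal sketch of this idea, assume availability is approximated by periodic HTTP probes of a service endpoint and compared against an SLA uptime threshold, with failed-probe timestamps kept as evidence. The endpoint, probe count and threshold below are illustrative, not the library's actual API.

    import time, urllib.request

    def probe(url, timeout=5):
        """Return True if the service answered the probe, False otherwise."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except Exception:
            return False

    def measure_availability(url, probes=10, interval=1.0):
        ok = 0
        evidence = []                    # timestamps of failed probes
        for _ in range(probes):
            t = time.time()
            if probe(url):
                ok += 1
            else:
                evidence.append(t)
            time.sleep(interval)
        return ok / probes, evidence

    availability, failed_probes = measure_availability("http://example.org/service")
    sla_threshold = 0.999                # uptime promised in the provider SLA
    if availability < sla_threshold:
        print("potential SLA violation; failed probes at:", failed_probes)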

The fourth prototype consists of a tool that updates the provider models/profiles stored in the repository with new results from the benchmarking experiments. While the full experiment data are kept in an internal Raw Data DB rather than in the provider profiles, in order to keep the latter lightweight, the average performance information contained in the profiles needs to be updated periodically based on each new batch of measurements.
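
The following is a minimal sketch of such an update step, under assumptions: the Raw Data DB is queried for the mean of each metric per provider, and the lightweight provider profile is represented as a JSON document. The table schema, file names and function name are hypothetical.

    import json, sqlite3

    def update_profile(db_path, profile_path, provider):
        """Recompute per-metric averages from the raw data and write them
        into the lightweight provider profile."""
        con = sqlite3.connect(db_path)
        rows = con.execute(
            """SELECT metric, AVG(value) FROM measurements
               WHERE provider = ? GROUP BY metric""", (provider,)).fetchall()
        con.close()

        with open(profile_path) as f:
            profile = json.load(f)
        # Overwrite only the aggregated figures; raw measurements stay in the DB
        profile["performance"] = {metric: avg for metric, avg in rows}
        with open(profile_path, "w") as f:
            json.dump(profile, f, indent=2)

    update_profile("raw_data.db", "providerA_profile.json", "ProviderA")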