Xavier Etchevers - Academia.edu

Papers by Xavier Etchevers

Poster: A Declarative Approach for Updating Distributed Microservices

2018 IEEE/ACM 40th International Conference on Software Engineering: Companion (ICSE-Companion), 2018

Microservice architectures are considered highly promising for achieving DevOps in IT organizations, mainly because they split applications into services that can be updated independently of one another. But the challenge does not stop there: updating microservices still requires DevOps teams to specify complex pipelines of commands to execute. In this paper, we propose a framework that greatly simplifies such updates: DevOps teams simply specify the desired target architecture for their microservice application, instead of specifying how to reach it. In addition, updates managed with our framework can conform to various SLA requirements and withstand failures.
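The declarative idea in this abstract can be illustrated with a small sketch (hypothetical service names and operation vocabulary, not the paper's actual framework): instead of scripting the update pipeline by hand, the operator states the desired target architecture and the operations are derived by diffing it against the current one.

```python
# Hypothetical sketch of a declarative update: the operator describes the
# desired target architecture; the framework derives the operations.
def plan_update(current, target):
    """Diff two architectures ({service: version}) into update operations."""
    ops = []
    for svc, ver in target.items():
        if svc not in current:
            ops.append(("deploy", svc, ver))
        elif current[svc] != ver:
            ops.append(("upgrade", svc, ver))
    for svc in current:
        if svc not in target:
            ops.append(("remove", svc, None))
    return ops

current = {"frontend": "1.0", "cart": "1.0", "legacy": "0.9"}
target = {"frontend": "1.1", "cart": "1.0", "search": "2.0"}
print(plan_update(current, target))
# → [('upgrade', 'frontend', '1.1'), ('deploy', 'search', '2.0'), ('remove', 'legacy', None)]
```

A real framework would additionally order and schedule these operations so that SLA constraints (e.g., minimum replica counts) hold throughout the update; the sketch only covers the "what, not how" diffing step.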

Architecture-Based Automated Updates of Distributed Microservices

Microservice architectures are considered very promising for achieving DevOps in IT organizations, because they split applications into services that can be updated independently of each other. But to protect SLA (Service Level Agreement) properties when updating microservices, DevOps teams have to deal with complex and error-prone scripts of management operations. In this paper, we leverage an architecture-based approach to provide an easy and safe way to update microservices.

A declarative approach for updating distributed microservices

One of the greatest benefits of microservices is to significantly ease changing applications by splitting them into independently deployable units [5]. Combined with Continuous Delivery (CD), which aims at delivering every software release quickly and safely, and Platform as a Service (PaaS), which automates application management in an on-demand virtualized environment, the microservice paradigm has become essential to implementing agile processes.

Docteur De L'Université De Grenoble

Combining hardware nodes and software components ordering-based heuristics for optimizing the placement of distributed IoT applications in the fog

Proceedings of the 33rd Annual ACM Symposium on Applied Computing - SAC '18, 2018

As fog computing brings compute and storage resources to the edge of the network, there is an increasing need for automated placement (i.e., selection of hosting devices) to deploy distributed applications. Such a placement must conform to applications' resource requirements in a heterogeneous fog infrastructure. The placement decision-making is further complicated by Internet of Things (IoT) applications that are tied to the geographical locations of physical objects/things. This paper presents a model, an objective function, and a mechanism to address the problem of placing distributed IoT applications in the fog. Based on a backtrack search algorithm and accompanying heuristics, the proposed mechanism is able to deal with large-scale problems and to efficiently make placement decisions that fit the objective: lowering placed applications' response time. The proposed approach is validated through comparative simulations of different combinations of the algorithms and heuristics on varying sizes of infrastructures and applications.
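The core of the placement mechanism described here, backtrack search guided by an ordering heuristic, can be sketched in a few lines. This is an illustrative toy under assumed names (component demands, node capacities), not the paper's actual model or heuristics:

```python
# Toy backtrack search for placing components on fog nodes under a single
# capacity constraint. Components and nodes are hypothetical examples.
def place(components, nodes, assignment=None):
    """components: [(name, demand)]; nodes: {node: free_capacity}."""
    assignment = assignment or {}
    if len(assignment) == len(components):
        return dict(assignment)          # every component placed
    name, demand = components[len(assignment)]
    # Node-ordering heuristic: try nodes with the most free capacity first.
    for node in sorted(nodes, key=nodes.get, reverse=True):
        if nodes[node] >= demand:
            nodes[node] -= demand
            assignment[name] = node
            result = place(components, nodes, assignment)
            if result:
                return result
            del assignment[name]         # dead end: backtrack
            nodes[node] += demand
    return None                          # no feasible placement

comps = [("camera-feed", 2), ("analytics", 3), ("dashboard", 1)]
caps = {"edge-a": 3, "edge-b": 3, "cloud": 4}
print(place(comps, caps))
```

In the paper's setting the search would also weigh network latency toward the things each component interacts with (the response-time objective), and the heuristics order both the components and the candidate nodes to prune the search on large instances.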

Déploiement d'applications patrimoniales en environnements de type informatique dans le nuage. (Deploying legacy applications in cloud computing environments)

The objective of this thesis is to offer an end-to-end solution for reliably describing and deploying a distributed application in a virtualized environment. This involves defining a formalism for describing an application as well as its execution environment, and then providing the tools capable of interpreting this formalism to deploy (install, instantiate, and configure) the application on a cloud computing platform.

Resilience of Stateful IoT Applications in a Dynamic Fog Environment

Proceedings of the 15th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services

Combining Heuristics to Optimize and Scale the Placement of IoT Applications in the Fog

2018 IEEE/ACM 11th International Conference on Utility and Cloud Computing (UCC)

As fog computing brings processing and storage resources to the edge of the network, there is an increasing need for automated placement (i.e., host selection) to deploy distributed applications. Such a placement must conform to applications' resource requirements in a heterogeneous fog infrastructure, and deal with the complexity brought by Internet of Things (IoT) applications tied to sensors and actuators. This paper presents four heuristics to address the problem of placing distributed IoT applications in the fog. By combining the proposed heuristics, our approach is able to deal with large-scale problems and to efficiently make placement decisions fitting the objective: minimizing placed applications' average response time. The proposed approach is validated through comparative simulation of different heuristic combinations with varying sizes of infrastructures and applications.

Reliable self-deployment of distributed cloud applications

Software: Practice and Experience, 2016

Cloud applications consist of a set of interconnected software elements distributed over several virtual machines, themselves hosted on remote physical servers. Most existing solutions for deploying such applications require human intervention to configure parts of the system, do not respect the functional dependencies among elements when starting them, and do not handle the virtual machine failures that can occur when deploying an application. This paper presents a self-deployment protocol designed to automatically configure a set of software elements to be deployed on different virtual machines. This protocol works in a decentralized way, i.e., there is no need for a centralized server. It also starts the software elements in a certain order, respecting important architectural invariants. The protocol supports virtual machine and network failures, and always succeeds in deploying an application when faced with a finite number of failures. Designing such highly parallel management protocols is difficult; therefore, formal modeling techniques and verification tools were used for validation purposes. The protocol was implemented in Java and was used to deploy industrial applications.
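The ordering constraint at the heart of this abstract, starting elements only after the elements they depend on, is a topological sort of the dependency graph. A minimal sketch, using a hypothetical three-tier application rather than anything from the paper:

```python
# Sketch of dependency-respecting start order for a hypothetical web stack.
# The paper's protocol computes such an order in a decentralized way; here a
# stdlib topological sort stands in for the idea.
from graphlib import TopologicalSorter

# element -> the elements it depends on (which must be started first)
deps = {
    "apache": {"tomcat"},
    "tomcat": {"mysql"},
    "mysql": set(),
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # dependencies come first: ['mysql', 'tomcat', 'apache']
```

The decentralized protocol achieves the same effect without a coordinator: each element waits for "started" notifications from its dependencies before starting itself, which tolerates VM and message failures that a centrally computed schedule would not.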

Déploiement d'applications patrimoniales en environnements de type informatique dans le nuage (Deploying legacy applications in cloud computing environments)

Cloud computing aims to cut down on the outlay and operational expenses involved in setting up and running applications. To do this, an application is split into a set of virtualized hardware and software resources. This virtualized application can be autonomously managed, making it responsive to the dynamic changes affecting its running environment. This is referred to as Application Life-cycle Management (ALM). In cloud computing, ALM is a growing but immature market, with many offers claiming to significantly improve productivity. However, all these solutions face a major restriction: the duality between the level of autonomy they offer and the type of applications they can handle. To address this, this thesis focuses on managing the initial deployment of an application to demonstrate that the duality is artificial. The main contributions of this work are presented in a platform named VAMP (Virtual Applications Management Platform). VAMP can deploy any legacy application...

A Self-Scalable and Auto-Regulated Request Injection Benchmarking Tool for Automatic Saturation Detection

IEEE Transactions on Cloud Computing, 2014

Software application providers have always been required to perform load testing prior to launching new applications. This crucial test phase is expensive in human and hardware terms, and the solutions generally used would benefit from further development. In particular, designing an appropriate load profile to stress an application is difficult and must be done carefully to avoid skewed testing. In addition, static testing platforms are exceedingly complex to set up. New opportunities to ease load testing are becoming available thanks to cloud computing. This paper describes a Benchmark-as-a-Service platform based on: (i) intelligent generation of traffic to the benched application without inducing thrashing (avoiding predefined load profiles), and (ii) a virtualized and self-scalable load injection system. The platform was evaluated using two use cases based on the reference JEE benchmark RUBiS, which involved detecting bottleneck tiers and tuning servers to improve performance. This platform was found to reduce the cost of testing by 50 percent compared to more commonly used solutions.
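The auto-regulation idea, ramping injected load until the system stops gaining throughput instead of replaying a predefined profile, can be illustrated with a toy loop. This is not the paper's controller; the step size, threshold, and simulated server below are invented for illustration:

```python
# Toy saturation detection: increase the injected load while throughput still
# improves noticeably; declare saturation once the gain falls below a threshold.
def find_saturation(measure_throughput, step=10, gain_threshold=0.05):
    """Ramp virtual users; return (users, throughput) at the saturation point."""
    users = step
    last = measure_throughput(users)
    while True:
        users += step
        current = measure_throughput(users)
        if current <= last * (1 + gain_threshold):
            return users - step, last    # adding load no longer helps
        last = current

# Simulated server: 10 req/s per user, capped at 500 req/s.
simulated = lambda users: min(users * 10, 500)
print(find_saturation(simulated))  # → (50, 500)
```

A real injection platform measures noisy throughput over a sampling window and must also avoid pushing past saturation into thrashing, which is why the abstract stresses "without inducing thrashing".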

An experience report on the verification of autonomic protocols in the cloud

Innovations in Systems and Software Engineering, 2013

This work results from a collaboration between experts in autonomic protocols and cloud computing on the one hand, and an expert in formal techniques and tools on the other. More precisely, the work was divided as follows: the reconfiguration protocol (Section 2.1) was designed by F. Boyer and O. Gruber; the self-configuration protocol (Section 2.2) was designed by X. Etchevers, N. De Palma, F. Boyer, and T. Coupaye; and the specification and verification tasks (Section 3) were carried out by G. Salaün.

Reliable self-deployment of cloud applications

Proceedings of the 29th Annual ACM Symposium on Applied Computing - SAC '14, 2014

Cloud applications consist of a set of interconnected software elements distributed over several virtual machines, themselves hosted on remote physical servers. Most existing solutions for deploying such applications require human intervention to configure parts of the system, do not respect the functional dependencies among elements when starting them, and do not handle the virtual machine failures that can occur when deploying an application. This paper presents a self-deployment protocol designed to automatically configure a set of software elements to be deployed on different virtual machines. This protocol works in a decentralized way, i.e., there is no need for a centralized server. It also starts the software elements in a certain order, respecting important architectural invariants. The protocol supports virtual machine and network failures, and always succeeds in deploying an application when faced with a finite number of failures. Designing such highly parallel management protocols is difficult; therefore, formal modeling techniques and verification tools were used for validation purposes. The protocol was implemented in Java and was used to deploy industrial applications.

Experiences in Benchmarking of Autonomic Systems

Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2010

Autonomic computing promises improvements to systems' quality of service in terms of availability, reliability, performance, security, etc. However, little research and few experimental results have so far demonstrated this assertion, or provided proof of the return on investment stemming from the effort that introducing autonomic features requires. Existing work in the area of benchmarking autonomic systems is characterized by qualitative and fragmented approaches. There is still a crucial need to provide generic (i.e., independent of business, technology, architecture, and implementation choices) autonomic computing benchmarking tools for evaluating and/or comparing autonomic systems from a technical and, ultimately, an economic point of view. This article introduces a methodology and a process for defining and evaluating factors, criteria, and metrics in order to qualitatively and quantitatively assess autonomic features in computing systems. It also discusses associated experimental results on three different autonomic systems.

Architectural Model and Planification Algorithm for the Self-Management of Elastic Cloud Applications

2014 International Conference on Cloud and Autonomic Computing, 2014

This paper introduces a generic approach for automatically managing application elasticity. The proposed solution addresses a noticeably wider scope of use cases and does not depend on the underlying execution environment. It consists of: (i) a model and a formalism for specifying valid applicative architectures (or elasticity scenarios) according to the resources allocated to the application; the originality of this first contribution lies in the association of a component model with a set-oriented query language; (ii) a defeasible-reasoning-based planning algorithm that computes the target applicative architecture from the model and elasticity requests; and (iii) a first qualitative and quantitative evaluation that highlights the relevance and viability of the proposed approach.

Automated Configuration of Legacy Applications in the Cloud

2011 Fourth IEEE International Conference on Utility and Cloud Computing, 2011

Current solutions for managing distributed applications in the cloud, typically covered by PaaS (Platform as a Service) offers, remain domain specific and are only partially automated. In this context, automatically configuring distributed applications is still a difficult issue. In this paper, we present an application architectural model and a self-configuration protocol that automates the deployment of legacy distributed applications. Our protocol is decentralized and loosely coupled, to avoid the need for global synchronization between virtual machines (VMs) during the configuration stage. An evaluation reports the performance of the protocol when applied to deploy enterprise web applications on a private cloud platform.

Verification of a self-configuration protocol for distributed applications in the cloud

Proceedings of the 27th Annual ACM Symposium on Applied Computing - SAC '12, 2012

Distributed applications in the cloud are composed of a set of virtual machines running a set of interconnected software components. In this context, automatically configuring distributed applications is a very difficult issue. In this paper, we focus on such a self-configuration protocol, which is able to configure a whole distributed application without requiring any centralized server. The high degree of parallelism involved in this protocol makes its design complicated and error-prone. In order to check that this protocol works as expected, we specify it in LOTOS NT and verify it using the CADP toolbox. The use of these formal techniques and tools helped to detect a bug in the protocol, and served as a workbench to experiment with several possible communication models.

Self-Configuration of Distributed Applications in the Cloud

2011 IEEE 4th International Conference on Cloud Computing, 2011

In the field of cloud computing, current solutions dedicated to PaaS (Platform as a Service), i.e., the environments that deal with the different stages of the application lifecycle, remain business-domain specific and are only partially automated. This limitation is due to the lack of an architectural model for describing a distributed application in terms of its software stacks (operating system, middleware, application), their instantiation as virtual machines, and their configuration interdependencies. This article puts forward (i) a component-based application model for defining any kind of distributed application composed of a set of interconnected virtual machines, (ii) an automated line for deploying such a distributed application in the cloud, which includes a decentralized protocol for self-configuring the virtual application machines, and (iii) a first performance evaluation demonstrating the viability of the solution.

Self-scalable Benchmarking as a Service with Automatic Saturation Detection

Lecture Notes in Computer Science, 2013

Software application providers have always been required to perform load testing prior to launching new applications. This crucial test phase is expensive in human and hardware terms, and the solutions generally used would benefit from further development. In particular, designing an appropriate load profile to stress an application is difficult and must be done carefully to avoid skewed testing. In addition, static testing platforms are exceedingly complex to set up. New opportunities to ease load testing are becoming available thanks to cloud computing. This paper describes a Benchmark-as-a-Service platform based on: (i) intelligent generation of traffic to the benched application without inducing thrashing (avoiding predefined load profiles), and (ii) a virtualized and self-scalable load injection system. This platform was found to reduce the cost of testing by 50% compared to more commonly used solutions. It was evaluated on the reference JEE benchmark RUBiS, which involved detecting bottleneck tiers.

A self-scalable load injection service

Software: Practice and Experience, 2013

Load testing of applications is an important and costly activity for software provider companies. Classical solutions are very difficult to set up statically, and their cost is prohibitive in terms of both human and hardware resources. Virtualized cloud computing platforms provide new opportunities for stressing an application's scalability by providing a large range of flexible and less expensive (pay-per-use model) computation units. On the basis of these advantages, load testing solutions could be provided on demand in the cloud. This paper describes a Benchmark-as-a-Service solution that automatically scales the load injection platform and facilitates its setup according to load profiles. Our approach is based on: (i) virtualization of the benchmarking platform to create self-scaling injectors; (ii) online calibration to characterize the injectors' capacity and impact on the benched application; and (iii) a provisioning solution to appropriately scale the load injection platform ahead of time. We also report experiments on a benchmark illustrating the benefits of this system in terms of cost and resource reductions.

Research paper thumbnail of Poster: A Declarative Approach for Updating Distributed Microservices

2018 IEEE/ACM 40th International Conference on Software Engineering: Companion (ICSE-Companion), 2018

Microservice architectures are considered with much promises to achieve DevOps in IT organization... more Microservice architectures are considered with much promises to achieve DevOps in IT organizations—mainly because they split applications into services that can be updated independently the ones from each other. But the challenge does not stop there: updating microservices still require DevOps teams to specify complex pipelines of commands to execute. In this paper, we propose a framework that strongly simplifies such updates: DevOps teams just specify the desired target architecture for their microservice application, instead of specifying how to reach it. In addition, updates managed with our framework can conform to various SLA requirements and resist to failures.

Research paper thumbnail of Architecture-Based Automated Updates of Distributed Microservices

Microservice architectures are considered really promising to achieve devops in IT organizations,... more Microservice architectures are considered really promising to achieve devops in IT organizations, because they split applications into services that can be updated independently from each others. But to protect SLA (Service Level Agreement) properties when updating microservices, devops teams have to deal with complex and error-prone scripts of management operations. In this paper, we leverage an architecture-based approach to provide an easy and safe way to update microservices.

Research paper thumbnail of A declarative approach for updating distributed microservices

One of the greatest benefits of microservices is to sensitively ease changing applications by spl... more One of the greatest benefits of microservices is to sensitively ease changing applications by splitting these into independently deployable units [5]. Combined with Continuous Delivery (CD) -that aims at delivering quickly and safely every software releases- and Platform as a Service (PaaS) automating application management in a on-demand virtualized environment, the microservice paradigm has become essential to implement agile processes.

Research paper thumbnail of Docteur De L'Université De Grenoble

Research paper thumbnail of Combining hardware nodes and software components ordering-based heuristics for optimizing the placement of distributed IoT applications in the fog

Proceedings of the 33rd Annual ACM Symposium on Applied Computing - SAC '18, 2018

As fog computing brings compute and storage resources to the edge of the network, there is an inc... more As fog computing brings compute and storage resources to the edge of the network, there is an increasing need for automated placement (i.e., selection of hosting devices) to deploy distributed applications. Such a placement must conform to applications' resource requirements in a heterogeneous fog infrastructure. The placement decision-making is further complicated by Internet of Things (IoT) applications that are tied to geographical locations of physical objects/things. This paper presents a model, an objective function, and a mechanism to address the problem of placing distributed IoT applications in the fog. Based on a backtrack search algorithm and accompanied heuristics, the proposed mechanism is able to deal with large scale problems, and to efficiently make placement decisions that fit the objective-to lower placed applications' response time. The proposed approach is validated through comparative simulations of different combinations of the algorithms and heuristics on varying sizes of infrastructures and applications. CCS CONCEPTS • Software and its engineering → Distributed systems organizing principles;

Research paper thumbnail of Déploiement d'applications patrimoniales en environnements de type informatique dans le nuage. (Deploying legacy applications in cloud computing environments)

L'objectif de cette these est d'offrir une solution de bout en bout permettant de decrire... more L'objectif de cette these est d'offrir une solution de bout en bout permettant de decrire et de deployer de facon fiable une application distribuee dans un environnement virtualise. Ceci passe par la definition d'un formalisme permettant de decrirer une application ainsi que son environnement d'execution, puis de fournir les outils capable d'interpeter ce formalisme pour deployer (installer, instancier et configurer) l'application sur une plate-forme de type cloud computing.

Research paper thumbnail of Resilience of Stateful IoT Applications in a Dynamic Fog Environment

Proceedings of the 15th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services

Research paper thumbnail of Combining Heuristics to Optimize and Scale the Placement of IoT Applications in the Fog

2018 IEEE/ACM 11th International Conference on Utility and Cloud Computing (UCC)

As fog computing brings processing and storage resources to the edge of the network, there is an ... more As fog computing brings processing and storage resources to the edge of the network, there is an increasing need of automated placement (i.e., host selection) to deploy distributed applications. Such a placement must conform to applications' resource requirements in a heterogeneous fog infrastructure, and deal with the complexity brought by Internet of Things (IoT) applications tied to sensors and actuators. This paper presents four heuristics to address the problem of placing distributed IoT applications in the fog. By combining proposed heuristics, our approach is able to deal with large scale problems, and to efficiently make placement decisions fitting the objective: minimizing placed applications' average response time. The proposed approach is validated through comparative simulation of different heuristic combinations with varying sizes of infrastructures and applications.

Research paper thumbnail of Reliable self-deployment of distributed cloud applications

Software: Practice and Experience, 2016

Cloud applications consist of a set of interconnected software elements distributed over several ... more Cloud applications consist of a set of interconnected software elements distributed over several virtual machines, themselves hosted on remote physical servers. Most existing solutions for deploying such applications require human intervention to configure parts of the system, do not conform to functional dependencies among elements that must be respected when starting them, and do not handle virtual machine failures that can occur when deploying an application. This paper presents a self-deployment protocol that was designed to automatically configure a set of software elements to be deployed on different virtual machines. This protocol works in a decentralized way, i.e., there is no need for a centralized server. It also starts the software elements in a certain order, respecting important architectural invariants. This protocol supports virtual machine and network failures, and always succeeds in deploying an application when faced with a finite number of failures. Designing such highly parallel management protocols is difficult, therefore formal modeling techniques and verification tools were used for validation purposes. The protocol was implemented in Java and was used to deploy industrial applications.

Research paper thumbnail of Déploiement d'applications patrimoniales en environnements de type informatique dans le nuage

Cloud computing aims to cut down on the outlay and operational expenses involved in setting up an... more Cloud computing aims to cut down on the outlay and operational expenses involved in setting up and running applications. To do this, an application is split into a set of virtualized hardware and software resources. This virtualized application can be autonomously managed, making it responsive to the dynamic changes affecting its running environment. This is referred to as Application Life-cycle Management (ALM). In cloud computing, ALM is a growing but immature market, with many offers claiming to significantly improve productivity. However, all these solutions are faced with a major restriction: the duality between the level of autonomy they offer and the type of applications they can handle. To address this, this thesis focuses on managing the initial deployment of an application to demonstrate that the duality is artificial. The main contributions of this work are presented in a platform named VAMP (Virtual Applications Management Platform). VAMP can deploy any legacy applicatio...

Research paper thumbnail of A Self-Scalable and Auto-Regulated Request Injection Benchmarking Tool for Automatic Saturation Detection

IEEE Transactions on Cloud Computing, 2014

ABSTRACT Software applications providers have always been required to perform load testing prior ... more ABSTRACT Software applications providers have always been required to perform load testing prior to launching new applications. This crucial test phase is expensive in human and hardware terms, and the solutions generally used would benefit from further development. In particular, designing an appropriate load profile to stress an application is difficult and must be done carefully to avoid skewed testing. In addition, static testing platforms are exceedingly complex to set up. New opportunities to ease load testing solutions are becoming available thanks to cloud computing. This paper describes a Benchmark-as-a-Service platform based on: (i) intelligent generation of traffic to the benched application without inducing thrashing (avoiding predefined load profiles), (ii) a virtualized and self-scalable load injection system. The platform developed was experimented using two use cases based on the reference JEE benchmark RUBiS. This involved detecting bottleneck tiers, and tuning servers to improve performance. This platform was found to reduce the cost of testing by 50 percent compared to more commonly used solutions.

Research paper thumbnail of An experience report on the verification of autonomic protocols in the cloud

Innovations in Systems and Software Engineering, 2013

This work results from a collaboration between experts in autonomic protocols and cloud computing... more This work results from a collaboration between experts in autonomic protocols and cloud computing on the one hand, and an expert in formal techniques and tools on the other. More precisely, the repartition of the work was as follows: The reconfiguration protocol (Section 2.1) was designed by F. Boyer and O. Gruber; The self-configuration protocol (Section 2.2) was designed by X. Etchevers, N. De Palma, F. Boyer, and T. Coupaye; Specification and verification tasks (Section 3) were carried out by G. Salaün.

Research paper thumbnail of Reliable self-deployment of cloud applications

Proceedings of the 29th Annual ACM Symposium on Applied Computing - SAC '14, 2014

Cloud applications consist of a set of interconnected software elements distributed over several virtual machines, themselves hosted on remote physical servers. Most existing solutions for deploying such applications require human intervention to configure parts of the system, do not respect functional dependencies among elements that must be respected when starting them, and do not handle virtual machine failures that can occur when deploying an application. This paper presents a self-deployment protocol that was designed to automatically configure a set of software elements to be deployed on different virtual machines. This protocol works in a decentralized way, i.e., there is no need for a centralized server. It also starts the software elements in a certain order, respecting important architectural invariants. This protocol supports virtual machine and network failures, and always succeeds in deploying an application when faced with a finite number of failures. Designing such highly parallel management protocols is difficult, therefore formal modeling techniques and verification tools were used for validation purposes. The protocol was implemented in Java and was used to deploy industrial applications.
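The ordering constraint described above (an element may start only once the elements it depends on are running) amounts to a topological sort of the dependency graph. The sketch below illustrates that idea in Python; function and variable names are hypothetical and do not reflect the paper's Java implementation.

```python
from collections import deque

def start_order(deps):
    """Compute a valid start order for software elements.

    deps maps each element to the set of elements it depends on;
    an element may only start after all of its dependencies.
    Illustrative sketch only, not the paper's API.
    """
    pending = {e: set(d) for e, d in deps.items()}  # unsatisfied deps
    dependents = {}
    for e, ds in deps.items():
        for d in ds:
            dependents.setdefault(d, set()).add(e)
    ready = deque(e for e, ds in pending.items() if not ds)
    order = []
    while ready:
        e = ready.popleft()
        order.append(e)
        for dep in dependents.get(e, ()):
            pending[dep].discard(e)
            if not pending[dep]:
                ready.append(dep)
    if len(order) != len(deps):
        raise ValueError("cyclic dependencies: no valid start order")
    return order

# Example: a classic three-tier web application.
deps = {"db": set(), "app": {"db"}, "lb": {"app"}}
print(start_order(deps))  # ['db', 'app', 'lb']
```

In the actual protocol this ordering is not computed centrally: each virtual machine starts its own elements as soon as their dependencies are reported started, which yields the same partial order in a decentralized way.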

Research paper thumbnail of Experiences in Benchmarking of Autonomic Systems

Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2010

Autonomic computing promises improvements of systems quality of service in terms of availability, reliability, performance, security, etc. However, little research and few experimental results have so far demonstrated this assertion, or provided proof of the return on investment stemming from the efforts that introducing autonomic features requires. Existing works in the area of benchmarking of autonomic systems can be characterized by their qualitative and fragmented approaches. There is still a crucial need to provide generic (i.e. independent from business, technology, architecture and implementation choices) autonomic computing benchmarking tools for evaluating and/or comparing autonomic systems from a technical and, ultimately, an economical point of view. This article introduces a methodology and a process for defining and evaluating factors, criteria and metrics in order to qualitatively and quantitatively assess autonomic features in computing systems. It also discusses associated experimental results on three different autonomic systems.

Research paper thumbnail of Architectural Model and Planification Algorithm for the Self-Management of Elastic Cloud Applications

2014 International Conference on Cloud and Autonomic Computing, 2014

This paper introduces a generic approach for automatically managing application elasticity. The proposed solution addresses a noticeably wider range of use cases and does not depend on the underlying execution environment. It consists of: (i) a model and a formalism used for specifying valid applicative architectures (or elasticity scenarios) according to the resources allocated to the application. The originality of this first contribution lies in the association of a component model with a set-oriented query language; (ii) a defeasible reasoning-based planning algorithm that computes the target applicative architecture from the model and elasticity requests; (iii) a first qualitative and quantitative evaluation that highlights the relevance and the viability of the proposed approach.
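At its simplest, such a planner takes the current architecture and an elasticity request and returns a target architecture that stays within the set of valid configurations. The sketch below shows only that core decision for a single tier, as a heavily simplified stand-in for the paper's defeasible-reasoning planner; all names and bounds are hypothetical.

```python
def plan_target(current_replicas, request, min_replicas=1, max_replicas=8):
    """Compute the target replica count for one tier from an elasticity
    request ('scale-out' or 'scale-in'), clamped to the valid range.
    Illustrative simplification of an elasticity planning step."""
    delta = {"scale-out": 1, "scale-in": -1}[request]
    return max(min_replicas, min(max_replicas, current_replicas + delta))

print(plan_target(3, "scale-out"))  # 4
print(plan_target(1, "scale-in"))   # 1 (clamped: architecture stays valid)
```

The clamping step is what makes the planner safe: an elasticity request can never drive the application outside the architectures declared valid by the model.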

Research paper thumbnail of Automated Configuration of Legacy Applications in the Cloud

2011 Fourth IEEE International Conference on Utility and Cloud Computing, 2011

Current solutions for managing distributed applications in the cloud, typically covered by PaaS (Platform as a Service) offers, remain domain specific and are only partially automated. In this context, automatically configuring distributed applications remains a difficult issue. In this paper, we present an application architectural model and a self-configuration protocol that automates the deployment of legacy distributed applications. Our protocol is decentralized and loosely coupled to avoid the need for global synchronization between virtual machines (VMs) during the configuration stage. An evaluation reports the performances of the protocol when applied to deploy enterprise web applications on a private cloud platform.
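The decentralized configuration stage can be pictured as each VM publishing the configuration attributes it exports and configuring itself as soon as every attribute it imports has been seen, with no coordinator. The sketch below simulates that exchange synchronously in Python; in reality the protocol is asynchronous and message-based, and the data layout here is hypothetical.

```python
def self_configure(vms):
    """Simulate the decentralized exchange of configuration attributes.

    vms maps a VM name to a dict with 'exports' (attribute -> value)
    and 'imports' (attributes it needs before configuring itself).
    No central server: each VM broadcasts its exports and configures
    itself once all of its imports are satisfied.
    """
    known = {}            # attributes seen "on the wire" so far
    configured = []
    remaining = dict(vms)
    while remaining:
        progress = False
        for name, vm in list(remaining.items()):
            known.update(vm["exports"])         # broadcast exports
            if vm["imports"] <= known.keys():   # all imports resolved
                configured.append(name)
                del remaining[name]
                progress = True
        if not progress:
            raise RuntimeError("unresolvable imports: %s" % sorted(remaining))
    return configured

# The web tier waits for the database's address before configuring itself.
vms = {"web": {"exports": {"web.port": 80}, "imports": {"db.url"}},
       "db":  {"exports": {"db.url": "jdbc:mysql://db/app"}, "imports": set()}}
print(self_configure(vms))  # ['db', 'web']
```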

Research paper thumbnail of Verification of a self-configuration protocol for distributed applications in the cloud

Proceedings of the 27th Annual ACM Symposium on Applied Computing - SAC '12, 2012

Distributed applications in the cloud are composed of a set of virtual machines running a set of interconnected software components. In this context, the task of automatically configuring distributed applications is a very difficult issue. In this paper, we focus on such a self-configuration protocol, which is able to configure a whole distributed application without requiring any centralized server. The high degree of parallelism involved in this protocol makes its design complicated and error-prone. In order to check that this protocol works as expected, we specify it in LOTOS NT and verify it using the CADP toolbox. The use of these formal techniques and tools helped to detect a bug in the protocol, and served as a workbench to experiment with several possible communication models.

Research paper thumbnail of Self-Configuration of Distributed Applications in the Cloud

2011 IEEE 4th International Conference on Cloud Computing, 2011

In the field of cloud computing, current solutions dedicated to PaaS (Platform as a Service), i.e. the environments that deal with the different stages of the application lifecycle, remain business domain specific and are only partially automated. This limitation is due to the lack of an architectural model for describing a distributed application in terms of its software stacks (operating system, middleware, application), their instantiation as virtual machines, and their configuration interdependencies. This article puts forward (i) a component-based application model for defining any kind of distributed application composed of a set of interconnected virtual machines, (ii) an automated deployment pipeline for such a distributed application in the cloud, which includes a decentralized protocol for self-configuring the application's virtual machines, (iii) a first performance evaluation demonstrating the viability of the solution.
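A component-based application model of this kind boils down to components placed on VMs, the interfaces they provide and require, and bindings that wire requirements to providers. Here is a minimal sketch of such a model in Python, with a completeness check over the configuration interdependencies; all class and field names are hypothetical, not the paper's model.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Component:
    name: str
    vm: str                                            # hosting VM
    provides: List[str] = field(default_factory=list)  # exported interfaces
    requires: List[str] = field(default_factory=list)  # imported interfaces

@dataclass
class Application:
    components: Dict[str, Component] = field(default_factory=dict)
    # bindings: (client component, required interface, server component)
    bindings: List[Tuple[str, str, str]] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Every required interface is bound to a component providing it."""
        bound = {(c, r) for c, r, s in self.bindings
                 if r in self.components[s].provides}
        return all((c.name, r) in bound
                   for c in self.components.values() for r in c.requires)

# Apache on one VM requires the worker interface Tomcat provides on another.
app = Application()
app.components = {
    "apache": Component("apache", vm="vm1", requires=["workers"]),
    "tomcat": Component("tomcat", vm="vm2", provides=["workers"]),
}
app.bindings = [("apache", "workers", "tomcat")]
print(app.is_complete())  # True
```

Only a complete model of this sort gives the deployment pipeline enough information to configure each VM without human intervention.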

Research paper thumbnail of Self-scalable Benchmarking as a Service with Automatic Saturation Detection

Lecture Notes in Computer Science, 2013

Software application providers have always been required to perform load testing prior to launching new applications. This crucial test phase is expensive in human and hardware terms, and the solutions generally used would benefit from further development. In particular, designing an appropriate load profile to stress an application is difficult and must be done carefully to avoid skewed testing. In addition, static testing platforms are exceedingly complex to set up. New opportunities to ease load testing solutions are becoming available thanks to cloud computing. This paper describes a Benchmark-as-a-Service platform based on: (i) intelligent generation of traffic to the benched application without inducing thrashing (avoiding predefined load profiles), (ii) a virtualized and self-scalable load injection system. This platform was found to reduce the cost of testing by 50% compared to more commonly used solutions. It was evaluated on the reference JEE benchmark RUBiS, including the detection of bottleneck tiers.

Research paper thumbnail of A self-scalable load injection service

Software: Practice and Experience, 2013

Load testing of applications is an important and costly activity for software provider companies. Classical solutions are very difficult to set up statically, and their cost is prohibitive in terms of both human and hardware resources. Virtualized cloud computing platforms provide new opportunities for stressing an application's scalability, by providing a large range of flexible and less expensive (pay-per-use model) computation units. On the basis of these advantages, load testing solutions could be provided on demand in the cloud. This paper describes a Benchmark-as-a-Service solution that automatically scales the load injection platform and facilitates its setup according to load profiles. Our approach is based on: (i) virtualization of the benchmarking platform to create self-scaling injectors; (ii) online calibration to characterize the injector's capacity and impact on the benched application; and (iii) a provisioning solution to appropriately scale the load injection platform ahead of time. We also report experiments on a benchmark illustrating the benefits of this system in terms of cost and resource reductions. Copyright © 2013 John Wiley & Sons, Ltd.
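Once calibration has measured how many virtual users a single injector VM can sustain, provisioning reduces to dimensioning the injector fleet for the target load, with some headroom so the injectors never become the bottleneck themselves. A minimal sketch of that sizing step, assuming hypothetical names and a simple fixed headroom factor (the paper's calibration is more involved):

```python
import math

def injectors_needed(target_users, injector_capacity, headroom=0.8):
    """How many injector VMs to provision for a target load.

    injector_capacity: virtual users one injector VM can sustain,
    as measured by an online calibration run. Only the fraction
    'headroom' of that capacity is used, so injectors stay below
    saturation. Illustrative sketch only.
    """
    usable = injector_capacity * headroom
    return math.ceil(target_users / usable)

# 10,000 users with injectors calibrated at 1,500 users each:
print(injectors_needed(10_000, 1_500))  # 9 injector VMs
```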