Raquel Lopes - Profile on Academia.edu
Papers by Raquel Lopes
Democratizing Resource-Intensive e-Science Through Peer-to-Peer Grid Computing
Computer Communications and Networks, 2011
… and L. Sampaio, Departamento de Sistemas e Computação, Laboratório de Sistemas Distribuídos, Universidade Federal de Campina Grande, Campina Grande, Paraíba, Brazil. E-mail: fubica@dsc.ufcg.edu.br; nazareno@dsc.ufcg.edu.br; raquel@dsc.ufcg.edu.br; livia@dsc.ufcg …
2011 International Green Computing Conference and Workshops, 2011
The energy costs of running computer systems are a growing concern: for large data centers, recent estimates put these costs higher than the cost of the hardware itself. As a consequence, energy efficiency has become a pervasive theme for designing, deploying, and operating computer systems. This paper evaluates the energy trade-offs brought by data deduplication in distributed storage systems. Depending on the workload, deduplication can enable a lower storage footprint, reduce the I/O pressure on the storage system, and reduce network traffic, at the cost of increased computational overhead. From an energy perspective, data deduplication enables a trade-off between the energy consumed for additional computation and the energy saved by lower storage and network load. The main point our experiments and model bring home is the following: while for non-energy-proportional machines performance- and energy-centric optimizations have break-even points that are relatively close, for the newer generation of energy-proportional machines the break-even points are significantly different. An important consequence of this difference is that, with newer systems, there are higher energy inefficiencies when the system is optimized for performance.
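The break-even effect described in this abstract can be illustrated with a toy energy model. This is a hypothetical sketch, not the paper's model: all power figures, runtimes, and the deduplication ratio below are invented to show why high idle power makes the faster configuration also the energy-cheaper one, while near-zero idle power can flip that conclusion.

```python
# Illustrative energy model for deduplication in distributed storage.
# All parameters are hypothetical, chosen only to expose the break-even effect.

def job_energy(runtime_s, idle_w, dynamic_w):
    """Energy (J) = (idle power + dynamic power while busy) * runtime."""
    return runtime_s * (idle_w + dynamic_w)

def compare(dedup_ratio, idle_w):
    """Return (baseline energy, dedup energy) for one storage job."""
    # Baseline: write everything; runtime dominated by I/O.
    base_runtime = 100.0                  # seconds, hypothetical
    base = job_energy(base_runtime, idle_w, dynamic_w=50.0)

    # With dedup: extra CPU time for chunking/hashing, less data to write.
    hash_overhead = 20.0                  # seconds of extra computation
    dedup_runtime = base_runtime * (1 - dedup_ratio) + hash_overhead
    dedup = job_energy(dedup_runtime, idle_w, dynamic_w=60.0)  # hashing raises dynamic power
    return base, dedup

# Non-energy-proportional server (high idle power): runtime dominates energy,
# so the faster dedup run is also the cheaper one.
# Energy-proportional server (near-zero idle power): the compute overhead
# can outweigh the I/O savings even though the run is still shorter.
for idle_w, label in [(200.0, "non-proportional"), (5.0, "proportional")]:
    base, dedup = compare(dedup_ratio=0.3, idle_w=idle_w)
    print(f"{label}: baseline {base:.0f} J, dedup {dedup:.0f} J")
```

With these made-up numbers, deduplication saves energy on the non-proportional machine but costs energy on the proportional one, which is the qualitative shift in break-even points the abstract describes.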
Business-Driven Capacity Planning of a Cloud-Based IT Infrastructure for the Execution of Web Applications
2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW), 2010
With the emergence of the cloud computing paradigm and the continuous search to reduce the cost of running Information Technology (IT) infrastructures, we are currently experiencing an important change in the way these infrastructures are assembled, configured and managed. In this research we consider the problem of managing a computing infrastructure whose processing elements are acquired from infrastructure-as-a-service (IaaS) providers, …
Distributed test agents (a pattern for the development of automatic system tests for distributed applications)
Proceedings of the 9th Latin-American Conference on Pattern Languages of Programming - SugarLoafPLoP '12, 2012
Lecture Notes in Computer Science, 2004
Load variations are unexpected perturbations that can degrade performance or even cause unavailability of a system. There are efforts that attempt to dynamically provide resources to accommodate load fluctuations during the execution of applications. However, these efforts do not consider the existence of software faults, whose effects can influence the application behavior and its quality of service, and may mislead a dynamic provisioning system. When trying to tackle both problems simultaneously, the fundamental issue to be addressed is how to differentiate a saturated application from a faulty one. The contributions of this paper are threefold. Firstly, we introduce the idea of taking software faults into account when specifying a dynamic provisioning scheme. Secondly, we define a simple algorithm that can be used to distinguish saturated from faulty software. By implementing this algorithm one is able to realize dynamic provisioning with restarts in a full server infrastructure data center. Finally, we implement this algorithm and experimentally demonstrate its efficacy.
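The core discrimination problem posed in the abstract can be sketched as a simple decision rule. This is a hypothetical illustration, not the paper's algorithm: the function name, thresholds, and capacity estimate below are all assumptions, and the rule just captures the intuition that a QoS violation under high load points to saturation (add resources), while a violation under low load points to a software fault (restart).

```python
# Hypothetical sketch of a saturated-vs-faulty test for one application.
# Names and thresholds are illustrative, not taken from the paper.

def diagnose(offered_load, capacity_estimate, response_time_s, slo_s=1.0):
    """Suggest an action for a monitored application replica."""
    if response_time_s <= slo_s:
        return "healthy"
    if offered_load >= capacity_estimate:
        # Demand exceeds what the current replicas should handle:
        # slow responses are explained by saturation, so add capacity.
        return "saturated: provision more resources"
    # Load is well within estimated capacity, yet the SLO is violated:
    # suspect a software fault and restart (rejuvenate) the replica.
    return "faulty: restart (rejuvenate)"

print(diagnose(offered_load=900, capacity_estimate=500, response_time_s=2.5))
print(diagnose(offered_load=100, capacity_estimate=500, response_time_s=2.5))
print(diagnose(offered_load=100, capacity_estimate=500, response_time_s=0.3))
```

The interesting engineering work hidden by this sketch is estimating `capacity_estimate` reliably, which is exactly what makes the two conditions hard to tell apart in practice.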
Technical Program Committee Co-chairs
Tutorials Chair: Ricardo Jimenez-Peris (UPM, Spain) … Fast Abstracts Chair: Rui Oliveira (U. Minho, Portugal) … Local Arrangements Chair: Raquel Lopes (UFCG, Brazil) … Finance Chair: Marco Aurélio Spohn (UFCG, Brazil) … Steering Committee: Marcos Aguilera (Microsoft Research, USA), Jean Arlat (LAAS-CNRS, France), Andrea Bondavalli (U. Firenze, Italy), Francisco Brasileiro (UFCG, Brazil), Rogério de Lemos (U. Kent, UK), Fabíola Greve - Chair - (UFBA, Brazil), Ingrid Jansch-Porto (UFRGS, Brazil) … Technical Program Committee: A. Ademaj …
12th IFIP/IEEE International Symposium on Integrated Network Management (IM 2011) and Workshops, 2011
The cloud computing market has emerged as an alternative for the provisioning of resources on a pay-as-you-go basis. This flexibility potentially allows clients of cloud computing solutions to reduce the total cost of ownership of their Information Technology infrastructures. On the other hand, this market-based model is not the only way to reduce costs. Among other solutions proposed, peer-to-peer (P2P) grid computing has been suggested as a way to enable a simpler economy for the trading of idle resources. In this paper, we consider an IT infrastructure which benefits from both of these strategies. In such a hybrid infrastructure, computing power can be obtained from in-house dedicated resources, from resources acquired from cloud computing providers, and from resources received as donations from a P2P grid. We take a business-driven approach to the problem and try to maximise the profit that can be achieved by running applications in this hybrid infrastructure. The execution of applications yields utility, while costs may be incurred when resources are used to run the applications, or even when they sit idle. We assume that resources made available from cloud computing providers can be either reserved in advance, or bought on-demand. We study the impact that long-term contracts established with the cloud computing providers have on the profit achieved. Anticipating the optimal contracts is not possible due to the many uncertainties in the system, which stem from the prediction error on the workload demand, the lack of guarantees on the quality of service of the P2P grid, and fluctuations in the future prices of on-demand resources. However, we show that the judicious planning of long-term contracts can lead to profits close to those given by an optimal contract set.
In particular, we model the planning problem as an optimisation problem and show that the planning performed by solving this optimisation problem is robust to the inherent uncertainties of the system, producing profits that for some scenarios can be more than double those achieved by following some common rule-of-thumb approaches to choosing reservation contracts. (C) IEEE. Abstract online at http://ieeexplore.ieee.org/freeabs_all.jsp?arnumber=5990678
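A stripped-down version of this reservation-planning optimisation can be written in a few lines. This is a hypothetical sketch, not the paper's model: the prices, the planning horizon, and the demand scenarios (standing in for the workload predictor and its error) are all invented, and it omits the P2P grid and price fluctuations entirely. It only shows the shape of the decision: pick the number of reserved instances that minimises expected cost when reserved capacity is cheaper per hour but paid for up front, and on-demand capacity covers overflow.

```python
# Toy reservation-contract planner: choose how many instances to reserve so
# that expected cost over uncertain demand scenarios is minimised.
# All prices and scenarios are hypothetical.

RESERVED_FIXED = 100.0    # up-front cost per reserved instance for the term
RESERVED_HOURLY = 0.03    # $/hour when a reserved instance is used
ON_DEMAND_HOURLY = 0.10   # $/hour for overflow (on-demand) capacity
HOURS = 8760              # one-year planning horizon

# Demand scenarios (instances needed) with probabilities, e.g. produced by a
# workload predictor with a known error distribution.
scenarios = [(40, 0.2), (60, 0.5), (90, 0.3)]

def expected_cost(n_reserved):
    cost = n_reserved * RESERVED_FIXED
    for demand, prob in scenarios:
        used_reserved = min(demand, n_reserved)
        overflow = max(demand - n_reserved, 0)
        cost += prob * HOURS * (used_reserved * RESERVED_HOURLY
                                + overflow * ON_DEMAND_HOURLY)
    return cost

best = min(range(0, 101), key=expected_cost)
print(f"reserve {best} instances, expected cost ${expected_cost(best):,.0f}")
```

Even this toy version shows why rules of thumb (e.g. "reserve for the average demand") can be far from optimal: the right reservation level depends on the price gap and the tail of the demand distribution, not just its mean.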
Predicting the Quality of Service of a Peer-to-Peer Desktop Grid
2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, 2010
Peer-to-peer (P2P) desktop grids have been proposed as an economical way to increase the processing capabilities of information technology (IT) infrastructures. In a P2P grid, a peer donates its idle resources to the other peers in the system, and, in exchange, …
Investigating Business-Driven Cloudburst Schedulers for E-Science Bag-of-Tasks Applications
2010 IEEE Second International Conference on Cloud Computing Technology and Science, 2010
Investigating Business-Driven Cloudburst Schedulers for e-Science Bag-of-Tasks Applications. David Candeia, Ricardo Araújo, Raquel Lopes, Francisco Brasileiro. Universidade Federal de Campina Grande - UFCG, Laboratório de Sistemas Distribuídos - LSD, Av. …
Lecture Notes in Computer Science, 2005
Dynamic provisioning systems change application capacity in order to use enough resources to accommodate current load. Rejuvenation systems detect/forecast software failures and temporarily remove one or more components of the application in order to bring them to a clean state. Up to now, these systems have been developed unaware of one another. However, many applications need to be controlled by both. In this paper we investigate whether these systems can actuate over the same application when they are not aware of each other, i.e., without coordination. We present and apply a model to study the performance of dynamic provisioning and rejuvenation systems when they actuate over the same application without coordination. Our results show that when both systems coexist, application quality of service degrades in comparison with the quality of service provided when each system is acting alone. This suggests that some level of coordination must be added to maximize the benefits gained from the simultaneous use of both systems.
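The kind of coordination the abstract argues for can be sketched in a few lines. This is a hypothetical illustration, not the paper's mechanism: it assumes (as the abstract suggests) that rejuvenation temporarily removes one replica, and it simply makes the rejuvenation controller capacity-aware instead of acting blindly.

```python
# Minimal sketch of coordinating provisioning and rejuvenation, under the
# assumption (not from the paper) that rejuvenating restarts one replica
# and takes it offline for the duration of the restart.

def plan_rejuvenation(active_replicas, per_replica_capacity, current_load):
    """Decide whether a replica can be restarted now or capacity must be added first."""
    capacity_after = (active_replicas - 1) * per_replica_capacity
    if capacity_after >= current_load:
        return "rejuvenate now"                     # remaining replicas absorb the load
    return "provision first, then rejuvenate"       # avoid the QoS dip the paper measures

print(plan_rejuvenation(active_replicas=4, per_replica_capacity=100, current_load=250))
print(plan_rejuvenation(active_replicas=3, per_replica_capacity=100, current_load=250))
```

Without this check, an uncoordinated rejuvenation during a load peak removes capacity exactly when the provisioning system is trying to add it, which is one plausible source of the QoS degradation the model exposes.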
Business-driven short-term management of a hybrid IT infrastructure
Journal of Parallel and Distributed Computing, 2012
We consider the problem of managing a hybrid computing infrastructure whose processing elements are comprised of in-house dedicated machines, virtual machines acquired on-demand from a cloud computing provider through short-term reservation contracts, and …