Sadegh Aslanpour - Academia.edu
Papers by Sadegh Aslanpour
Internet of Things
Resource management in computing is a challenging problem that involves making sequential decisions. Resource limitations, resource heterogeneity, the dynamic and diverse nature of workloads, and the unpredictability of fog/edge computing environments make resource management even more challenging in the fog landscape. Recently, Artificial Intelligence (AI)- and Machine Learning (ML)-based solutions have been adopted to solve this problem. AI/ML methods capable of making sequential decisions, such as reinforcement learning, appear most promising for this type of problem, but they come with their own challenges, such as high variance, limited explainability, and the need for online training. The continuously changing dynamics of fog/edge environments require solutions that learn online, adapting to the changing computing environment. In this paper, we use a standard review methodology to conduct a Systematic Literature Review (SLR) analyzing the role of AI/ML algorithms, and the challenges in applying them, for resource management in fog/edge computing environments. Further, various machine learning, deep learning, and reinforcement learning techniques for edge AI management are discussed. Furthermore, we present the background and current status of AI/ML-based fog/edge computing. Moreover, a taxonomy of AI/ML-based resource management techniques for fog/edge computing is proposed, and existing techniques are compared against it. Finally, open challenges and promising future research directions in AI/ML-based fog/edge computing are identified and discussed.
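To make the reinforcement-learning angle concrete, the following is a minimal sketch of a tabular Q-learning scaler; the state discretization, reward shaping, and constants are illustrative assumptions, not taken from any surveyed technique.

```python
import random
from collections import defaultdict

# Illustrative tabular Q-learning for a scaling decision; the state, action,
# and reward definitions here are hypothetical.
ACTIONS = ["scale_down", "no_op", "scale_up"]

def discretize(cpu_util):
    # State: coarse CPU-utilization bucket of the fog node.
    return "low" if cpu_util < 0.4 else "high" if cpu_util > 0.8 else "medium"

def reward(latency_ms, vms, sla_ms=200.0, vm_cost=1.0):
    # Penalize SLA violations and resource cost; weights are made up.
    return -(max(0.0, latency_ms - sla_ms) / sla_ms) - vm_cost * vms * 0.01

q = defaultdict(float)            # Q[(state, action)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose(state):
    if random.random() < epsilon:                        # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])     # exploit

def update(state, action, r, next_state):
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])

# One (simulated) control step: observe, act, observe outcome, learn online.
s = discretize(cpu_util=0.85)
a = choose(s)
r = reward(latency_ms=350.0, vms=4)
update(s, a, r, discretize(cpu_util=0.6))
```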
2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid)
2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)
Mechanisms that dynamically add and remove Virtual Machines (VMs) to reduce cost while minimizing latency are called auto-scaling. Latency improvements are mainly achieved by minimizing the "average" response time, while unpredictable load fluctuations in Web applications, known as flash crowds, can result in very high latencies for users' requests. Requests affected by a flash crowd suffer from long latencies, known as outliers. Such outliers are largely inevitable as long as auto-scaling solutions keep improving the average, not the "tail", of latencies. In this paper, we study possible sources of tail latency in auto-scaling mechanisms for Web applications. Based on our extensive evaluations on a real cloud platform, we identify the following sources of tail latency: (1) large, i.e., data-intensive, requests; (2) long scaling intervals; (3) instantaneous analysis of scaling parameters; (4) conservative, i.e., tight, threshold tuning; (5) load-unaware surplus VM selection policies used when executing a scale-down decision; (6) the cooldown feature, although cost-effective; and (7) VM start-up delay. We also find that after an auto-scaling mechanism improves the average latency, the tail may behave differently, demanding dedicated tail-aware auto-scaling solutions.
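As a quick illustration of why averages hide the tail, the sketch below compares the mean with the 99th percentile on two made-up latency samples; the numbers are invented solely to show the mean improving while the tail worsens.

```python
import numpy as np

# Hypothetical latency samples (ms): auto-scaling improves the mean,
# yet a flash crowd leaves a heavier tail in the second run.
before = np.array([220, 230, 225, 240, 235, 250, 245, 600, 228, 232])
after  = np.array([ 90,  95, 100,  92,  98, 105, 110, 1500, 96,  94])

for name, sample in [("before", before), ("after", after)]:
    print(f"{name}: mean={sample.mean():.0f} ms, "
          f"p99={np.percentile(sample, 99):.0f} ms")
# The mean drops, but the 99th percentile (tail) grows, so an auto-scaler
# tuned only on averages looks better while outliers get worse.
```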
Springer International Publishing eBooks, Nov 14, 2021
The exponential growth of the Internet of Things (IoT) has given rise to a new wave of edge computing, driven by the need to process data at the edge, closer to where it is produced, and to move away from a cloud-centric architecture. This creates opportunities to decrease latency and address data privacy concerns, along with reducing public cloud costs. The serverless computing model offers a potential solution: its event-driven architecture reduces the need for always-running servers and converts backend services to a pay-as-used model. This model is an attractive prospect in edge computing environments with varying workloads and limited resources. Furthermore, deploying it at the edge of the network promises reduced latency for the edge devices communicating with it and eliminates the need to manage the underlying infrastructure. In this book chapter, we first introduce the concept of serverless edge computing; then, we analyze the performance of multiple serverless platforms, namely OpenFaaS, AWS Greengrass, and Apache OpenWhisk, when deployed on single-board computers (SBCs) at the edge, and compare them with public cloud serverless offerings, namely AWS Lambda and Azure Functions, to assess the suitability of serverless architectures at the network edge. These serverless platforms are set up on a cluster of Raspberry Pis, and we evaluate their performance by simulating different types of edge workloads. The evaluation results show that OpenFaaS achieves the lowest response time on the SBC edge infrastructure, while the serverless cloud offerings are the most reliable, with the highest success rate.
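A minimal probe in the spirit of such an evaluation might look like the sketch below, which measures response time and success rate against a deployed function; the endpoint URL, payload, and request count are placeholders, not the chapter's actual benchmark.

```python
import time
import statistics
import requests  # third-party; pip install requests

# Minimal latency/success-rate probe for a deployed function endpoint.
URL = "http://192.168.1.50:8080/function/resize-image"  # hypothetical route
payload = {"width": 128, "height": 128}                  # hypothetical input

latencies, successes = [], 0
for _ in range(50):
    start = time.perf_counter()
    try:
        resp = requests.post(URL, json=payload, timeout=5)
        if resp.ok:
            successes += 1
    except requests.RequestException:
        pass  # count as a failure
    latencies.append((time.perf_counter() - start) * 1000)

print(f"mean response time: {statistics.mean(latencies):.1f} ms")
print(f"success rate: {successes / len(latencies):.0%}")
```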
Service-Oriented Computing, 2021
2021 Australasian Computer Science Week Multiconference, 2021
Born from the need for a pure "pay-per-use" model and a highly scalable platform, the "Serverless" paradigm emerged and has the potential to become a dominant way of building cloud applications. Although it was originally designed for cloud environments, Serverless is finding its place in the Edge Computing landscape, which aims to bring computational resources closer to the data source. That is, Serverless is crossing cloud borders to assess its merits in Edge computing, whose principal partner will be Internet of Things (IoT) applications. This move is promising, as Serverless brings particular benefits such as eliminating always-on services that cause high electricity usage. However, the community is still hesitant to adopt Serverless Edge Computing because of the cloud-driven design of current Serverless platforms and the distinctive characteristics of the edge landscape and IoT applications. In this paper, we evaluate both sides to shed light on this new territory for Serverless. Our in-depth analysis promotes a broad vision for bringing Serverless to Edge Computing and identifies major challenges Serverless must meet before entering Edge computing.
Internet of Things, 2020
Optimization is an inseparable part of Cloud computing, particularly with the emergence of the Fog and Edge paradigms. Not only do these emerging paradigms demand re-evaluating cloud-native optimizations and exploring Fog- and Edge-based solutions, but the objectives also require a significant shift from considering latency alone to including energy, security, reliability, and cost. Hence, optimization objectives have become diverse, and Internet of Things (IoT)-specific objectives must now come into play. This is critical because an incorrect selection of metrics can mislead developers about real performance. For instance, a latency-aware auto-scaler must be evaluated through latency-related metrics such as response time or tail latency; otherwise the resource manager is not properly evaluated, even if it reduces cost. Given such challenges, researchers and developers struggle to explore and use the right metrics for evaluating the performance of optimization techniques such as task scheduling, resource provisioning, resource allocation, resource scheduling, and resource execution. This is challenging due to (1) the novel, multi-layered computing paradigms, e.g., Cloud, Fog, and Edge; (2) IoT applications with different requirements, e.g., latency or privacy; and (3) the lack of a benchmark and standard for evaluation metrics. In this paper, by exploring the literature, we (1) present a taxonomy of the various real-world metrics used to evaluate the performance of cloud, fog, and edge computing; (2) survey the literature to identify common metrics and their applications; and (3) outline open issues for future research. This comprehensive benchmark study can significantly assist developers and researchers in evaluating performance under realistic metrics and standards, ensuring their objectives will be achieved in production environments.
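As a small illustration of metric-driven evaluation, the sketch below derives a few such metrics from a hypothetical per-request log; the log entries, SLA target, and unit price are invented, and the percentile is a crude nearest-rank approximation.

```python
import statistics

# Hypothetical per-request log entries: (response_time_ms, vm_seconds_billed).
log = [(180, 2.0), (220, 2.0), (150, 1.5), (640, 3.0), (200, 2.0), (95, 1.0)]
SLA_MS = 250                   # assumed latency target
PRICE_PER_VM_SECOND = 0.0001   # made-up unit price

times = [t for t, _ in log]
metrics = {
    "mean_response_ms": statistics.mean(times),
    "tail_p95_ms": sorted(times)[int(0.95 * (len(times) - 1))],  # nearest rank
    "sla_violation_rate": sum(t > SLA_MS for t in times) / len(times),
    "cost": sum(vm_s for _, vm_s in log) * PRICE_PER_VM_SECOND,
}
print(metrics)
# A latency-aware auto-scaler should be judged on the first three entries,
# not on cost alone.
```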
Simulation Modelling Practice and Theory, 2021
Auto-scaling of Web applications is an extensively investigated issue in cloud computing. To evaluate auto-scaling mechanisms, the cloud community faces considerable challenges on both real cloud platforms and custom test-beds. Challenges include, but are not limited to, deployment impediments, the complexity of setting parameters, and, most importantly, the cost of hosting and testing Web applications at massive scale. Hence, simulation is presently one of the most popular evaluation approaches for overcoming these obstacles. Existing simulators, however, fail to support hosting, deploying, and subsequently auto-scaling Web applications. In this paper, we introduce AutoScaleSim, which extends the existing CloudSim simulator to support auto-scaling of Web applications in cloud environments in a customizable, extensible, and scalable manner. Using AutoScaleSim, the cloud community can freely implement and evaluate policies for all four phases of auto-scaling mechanisms, that is, Monitoring, Analysis, Planning, and Execution. AutoScaleSim can similarly be used to evaluate load balancing algorithms. We conducted a set of experiments to validate AutoScaleSim against a real cloud platform and to carefully evaluate its performance with a wide range of performance metrics.
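For readers unfamiliar with the MAPE control cycle the simulator targets, here is a bare-bones sketch of such a loop; the fake monitor, thresholds, and one-VM-at-a-time planner are stand-ins and do not reflect AutoScaleSim's API.

```python
import random
import time

# A bare-bones MAPE auto-scaling loop with a simulated monitor.
def monitor():
    # In a simulator or real platform this would read VM/application metrics.
    return {"cpu_util": random.uniform(0.2, 0.95),
            "response_ms": random.uniform(80, 400)}

def analyze(m, history):
    history.append(m)
    recent = history[-3:]
    return sum(x["cpu_util"] for x in recent) / len(recent)  # smoothed utilization

def plan(avg_util, upper=0.8, lower=0.3):
    if avg_util > upper:
        return +1       # add one VM
    if avg_util < lower:
        return -1       # remove one VM
    return 0

def execute(decision, vms):
    return max(1, vms + decision)

vms, history = 2, []
for _ in range(5):                  # five control intervals
    metrics = monitor()
    avg_util = analyze(metrics, history)
    decision = plan(avg_util)
    vms = execute(decision, vms)
    print(f"util={avg_util:.2f} decision={decision:+d} vms={vms}")
    time.sleep(0.1)                 # stands in for the scaling interval
```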
Internet of Things, 2019
Cloud computing plays a critical role in modern society and enables a range of applications from infrastructure to social media. Such systems must cope with varying load and evolving usage, reflecting society's interaction with and dependency on automated computing systems, whilst satisfying Quality of Service (QoS) guarantees. Enabling these systems is a cohort of conceptual technologies, synthesised to meet the demands of evolving computing applications. To understand the current and future challenges of such systems, there is a need to identify the key technologies enabling future applications. In this study, we explore how three emerging paradigms (Blockchain, IoT, and Artificial Intelligence) will influence future cloud computing systems. Further, we identify several technologies driving these paradigms and invite international experts to discuss the current status and future directions of cloud computing. Finally, we propose a conceptual model of cloud futurology to explore the influence of emerging paradigms and technologies on the evolution of cloud computing.
International Journal of Communication Systems, 2019
Nowadays, with the development of communication systems, massively multiplayer online games (MMOGs) have become very popular. In these games, players all over the world dynamically interact with each other by sending play actions such as shooting, movement, or chat, in the form of MMOG sessions, in real time through a large-scale distributed environment. Leveraging affordable cloud computing to host such services is a widely investigated issue, because the arrival rate of players to the game environment fluctuates and players expect services to be always available with an acceptable quality of service (QoS), especially in terms of response time. Therefore, dynamic provisioning of resources to deal with the fluctuating demand caused by variability in the arrival rate of players to MMOG services is highly recommended. In this paper, we propose a learning-based resource provisioning approach for MMOG services based on the combination of the autonomic computing paradigm and learning automata (LA). The performance of the proposed approach in terms of response time, cost, and allocated virtual machines (VMs) is assessed through simulation and comparison with existing approaches.
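To ground the learning automata idea, the sketch below applies the classic linear reward-penalty update to a small set of provisioning actions; the simulated environment response and learning rates are assumptions, not the paper's scheme.

```python
import random

# A tiny learning automaton over provisioning actions using the linear
# reward-penalty (L_RP) update; the environment response is simulated.
actions = ["scale_down", "no_op", "scale_up"]
p = [1 / 3] * 3                 # action probability vector
a_lr, b_lr = 0.1, 0.05          # reward and penalty learning rates

def environment(action):
    # Pretend the environment favours scaling up under heavy player arrivals.
    return action == "scale_up"  # True = favourable response

for _ in range(200):
    i = random.choices(range(3), weights=p)[0]
    if environment(actions[i]):
        # Reward: shift probability mass toward the chosen action.
        p = [pj + a_lr * (1 - pj) if j == i else pj * (1 - a_lr)
             for j, pj in enumerate(p)]
    else:
        # Penalty: shift probability mass away from the chosen action.
        p = [pj * (1 - b_lr) if j == i else b_lr / 2 + pj * (1 - b_lr)
             for j, pj in enumerate(p)]

print({a: round(pi, 2) for a, pi in zip(actions, p)})
```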
The Journal of Supercomputing, 2017
The scalability feature of cloud computing attracts application service providers (ASPs) to cloud application hosting. In cloud environments, resources can be dynamically provisioned on demand for ASPs. Autonomic resource provisioning to prevent resource over-provisioning or under-provisioning is a widely investigated topic in cloud environments. Many resource-aware and/or service-level agreement (SLA)-aware solutions have been proposed to handle this problem. However, intelligent solutions, such as mining the hidden knowledge in Web users' behavior, are more effective in terms of cost efficiency. Most importantly, considering cloud service diversity, solutions should be flexible and customizable to fulfill ASPs' requirements. Therefore, the lack of a flexible resource provisioning mechanism is strongly felt. In this paper, we propose an autonomic resource provisioning mechanism with resource-aware, SLA-aware, and user-behavior-aware features, called the three-dimensional mechanism. The proposed mechanism uses a radial basis function neural network to provide foresight and flexibility.
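To illustrate the radial-basis-function component, the following is a minimal RBF regressor that predicts the next request rate from recent history; the workload series, number of centres, and kernel width are made-up assumptions rather than the paper's configuration.

```python
import numpy as np

# Minimal RBF regressor: predict the next request rate from the current one.
history = np.array([100, 120, 150, 170, 160, 140, 180, 210, 200, 230], float)
X, y = history[:-1], history[1:]            # next value from current value

centers = np.linspace(X.min(), X.max(), 5)  # hidden-unit centres
width = (X.max() - X.min()) / 5             # shared kernel width

def rbf_features(x):
    # Gaussian activations of each hidden unit for input vector x.
    return np.exp(-((x[:, None] - centers) ** 2) / (2 * width ** 2))

Phi = rbf_features(X)
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output layer

next_rate = rbf_features(np.array([230.0])) @ weights
print(f"predicted next request rate: {next_rate[0]:.1f} req/s")
```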
International Journal of Grid and High Performance Computing, 2017
Application providers (APs) move their application hosting to the cloud with the aim of reducing infrastructure purchase and maintenance costs. However, variation in the arrival rate of user requests on the one hand, and the attractive cloud resource auto-scaling feature on the other, has led APs to seek further savings in the cost of renting resources. Researchers generally seek to select parameters for scaling decision-making, whereas analyzing the history of those parameters appears to be more effective. This paper presents a proactive auto-scaling algorithm (PASA) equipped with a heuristic predictor. The predictor analyzes history with the help of the following techniques: (1) double exponential smoothing (DES), (2) weighted moving average (WMA), and (3) Fibonacci numbers. The results of simulating PASA in CloudSim indicate its effectiveness: the algorithm can reduce the AP's cost while maintaining Web user satisfaction.
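The first two prediction techniques can be sketched in a few lines, as below; the smoothing constants, window, and workload series are illustrative assumptions, and the Fibonacci-based component is omitted.

```python
# History-based predictors in the spirit of PASA's techniques; values are assumed.
workload = [110, 118, 125, 140, 152, 149, 163, 175]   # requests per interval

def des_forecast(series, alpha=0.5, beta=0.3):
    """Holt's double exponential smoothing: one-step-ahead forecast."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend

def wma_forecast(series, window=4):
    """Weighted moving average with linearly increasing recency weights."""
    recent = series[-window:]
    weights = range(1, window + 1)                     # 1, 2, 3, 4
    return sum(w * y for w, y in zip(weights, recent)) / sum(weights)

print(f"DES forecast: {des_forecast(workload):.1f}")
print(f"WMA forecast: {wma_forecast(workload):.1f}")
# A proactive scaler would compare such forecasts against current capacity
# and request VMs ahead of the predicted rise.
```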
Soft Computing, 2017
In recent years, service-based applications have been deemed one of the new solutions for building enterprise application systems. To quickly answer the most demanding needs, or to adapt to changed services, service composition is currently used to exploit multi-service capabilities in Information Technology organizations. Since independently developed web services may not always be compatible with each other, selecting optimal services and composing them is a challenging issue. In this paper, we present a cuckoo search algorithm for the web service composition problem, called 'CSA-WSC', which composes web services to improve the quality of service (QoS) in a distributed cloud environment. The experimental results compare CSA-WSC against the genetic search skyline network (GS-S-Net) and the genetic particle swarm optimization (GAPSO) algorithm.
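A stripped-down cuckoo-search loop for composition might look like the sketch below, which picks one concrete service per abstract task to maximize an aggregate QoS score; the candidate scores, nest count, and abandonment fraction are invented, and the Lévy flight is reduced to a single random re-selection.

```python
import random

# Hypothetical QoS utilities for each abstract task's candidate services.
candidates = {
    "auth":     [0.61, 0.72, 0.55],
    "payment":  [0.58, 0.66, 0.79, 0.70],
    "shipping": [0.64, 0.69],
}
tasks = list(candidates)

def fitness(plan):
    # Aggregate QoS of a composition: sum of the chosen services' scores.
    return sum(candidates[t][plan[t]] for t in tasks)

def random_plan():
    return {t: random.randrange(len(candidates[t])) for t in tasks}

def mutate(plan):
    # Simplified "Levy-flight" move: re-pick the service for one random task.
    new = dict(plan)
    t = random.choice(tasks)
    new[t] = random.randrange(len(candidates[t]))
    return new

def cuckoo_search(n_nests=8, iterations=200, abandon_fraction=0.25):
    nests = [random_plan() for _ in range(n_nests)]
    for _ in range(iterations):
        # New solution by a random walk from a random nest; replace a worse one.
        cuckoo = mutate(random.choice(nests))
        j = random.randrange(n_nests)
        if fitness(cuckoo) > fitness(nests[j]):
            nests[j] = cuckoo
        # Abandon the worst nests and rebuild them randomly.
        nests.sort(key=fitness, reverse=True)
        for k in range(int(n_nests * (1 - abandon_fraction)), n_nests):
            nests[k] = random_plan()
    return max(nests, key=fitness)

best = cuckoo_search()
print(best, fitness(best))
```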
Journal of Network and Computer Applications, 2017
The elasticity feature of cloud computing and its pay-per-use pricing entice application providers to use cloud application hosting. One of the most valuable methods an application provider can use to reduce costs is resource auto-scaling. Resource auto-scaling to prevent resource over-provisioning or under-provisioning is a widely investigated topic in cloud environments. The auto-scaling process is often implemented based on the four phases of the MAPE loop: Monitoring (M), Analysis (A), Planning (P), and Execution (E). Hence, researchers seek to improve the performance of this mechanism with different solutions for each phase. However, solutions in this area generally focus on improving performance in the monitoring, analysis, and planning phases, while the execution phase is considered far less often. This paper provides a cost-saving super professional executor that shows the importance and effectiveness of this phase of the control cycle. Unlike common executors, the proposed solution executes scale-down commands through informed selection of surplus virtual machines; moreover, surplus virtual machines are kept quarantined for the remainder of their billing period in order to maximize cost efficiency. Simulation results show that the proposed executor reduces the cost of renting virtual machines by 7%, while improving the application provider's final service level agreement and controlling the mechanism's oscillation in decision-making.
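The quarantine idea can be sketched as below: a scale-down picks a surplus VM, stops routing to it, and defers termination until its paid billing hour ends; the VM fields, selection rule, and one-hour billing period are assumptions for illustration.

```python
import time
from dataclasses import dataclass

# Sketch of a quarantine-aware scale-down executor (illustrative only).
BILLING_PERIOD_S = 3600  # assumed hourly billing

@dataclass
class VM:
    vm_id: str
    started_at: float
    load: float                    # e.g. active requests or CPU share
    quarantined: bool = False

    def billing_remaining(self, now):
        return BILLING_PERIOD_S - ((now - self.started_at) % BILLING_PERIOD_S)

def scale_down(vms, now=None):
    now = now or time.time()
    active = [v for v in vms if not v.quarantined]
    if len(active) <= 1:
        return None                           # keep at least one VM serving
    surplus = min(active, key=lambda v: v.load)  # informed selection: least loaded
    surplus.quarantined = True                # stop sending new requests to it
    # Terminate only when its paid billing period ends; until then it can be
    # pulled back cheaply if load rises again.
    return surplus.vm_id, surplus.billing_remaining(now)

fleet = [VM("vm-1", time.time() - 1800, load=0.7),
         VM("vm-2", time.time() - 600, load=0.2)]
print(scale_down(fleet))
```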
2016 Second International Conference on Web Research (ICWR), 2016
With the advent of cloud computing and the possibility of renting virtual resources on an as-needed basis, application service providers (ASPs) entered the realm of information technology to provide web-based applications. They rent virtual machines from cloud service providers to host their applications; however, due to the dynamic nature of the workload, these resources may not be sufficient to meet the ASP's real needs. At this point, the issue of automatic, optimal resource scaling arises. With the aid of the scalability property of cloud computing, this issue can be addressed to decrease costs and ensure quality of service. Researchers try to resolve this issue using reactive and proactive methods. In this study, a new framework is proposed that carries out automatic, optimal resource allocation using a combined reactive and proactive scaling method that is sensitive to past and present fluctuations. In this framework, resources are supervised through two analyzers: one for virtual machine behavior and one for SLA behavior. The parameters driving this two-sided decision-making are the virtual machines' CPU utilization and the response time. To implement the proposed framework, user requests to a website with a synthetic workload based on a real cloud workload pattern were generated using the CloudSim tool. The results show that, compared to one-sided methods, the proposed two-sided method improves resource provisioning cost, quality of service, scaling decision-making overhead, and control of resource utilization fluctuations. This scaling mechanism can also pave the way for vertical scaling as well as the use of Amazon Spot instances.
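A compact version of such a two-sided rule is sketched below, voting on CPU utilization (resource side) and response time (SLA side); the thresholds and the voting logic are assumptions, not the framework's exact policy.

```python
# Two-sided (resource- and SLA-aware) scaling rule sketch with assumed thresholds.
def scaling_decision(cpu_util, response_ms,
                     cpu_high=0.75, cpu_low=0.30,
                     rt_high=300.0, rt_low=100.0):
    votes = 0
    votes += 1 if cpu_util > cpu_high else -1 if cpu_util < cpu_low else 0
    votes += 1 if response_ms > rt_high else -1 if response_ms < rt_low else 0
    if votes >= 1:
        return "scale_up"      # pressure on at least one side, no slack signal
    if votes <= -1:
        return "scale_down"    # slack on at least one side, no pressure signal
    return "no_action"         # signals disagree or sit in the safe band

print(scaling_decision(cpu_util=0.82, response_ms=120))   # -> scale_up
print(scaling_decision(cpu_util=0.25, response_ms=80))    # -> scale_down
```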