Improving Application Migration to Serverless Computing Platforms: Latency Mitigation with Keep-Alive Workloads
Related papers
Temporal Performance Modelling of Serverless Computing Platforms
Proceedings of the 2020 Sixth International Workshop on Serverless Computing, 2020
Analytical performance models have proven highly effective for analyzing, predicting, and improving the performance of distributed computing systems. However, rigorous analytical models for the transient behaviour of serverless computing platforms, which are expected to become the dominant paradigm in cloud computing, are still lacking. Moreover, due to the unique characteristics and policies of these platforms, performance models developed for other systems cannot be applied directly. In this work, we propose an analytical performance model capable of predicting several key performance metrics for serverless workloads using only their average response times for warm and cold requests. The model relies on realistic assumptions, making it suitable for online analysis of real-world platforms. We validate the proposed model through extensive experimentation on AWS Lambda. Although we focus primarily on AWS Lambda due to its wide adoption in our exp...
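As a rough illustration of how warm and cold averages can feed such a prediction, here is a minimal sketch (not the paper's actual model) assuming Poisson request arrivals with rate λ and a fixed idle-expiration threshold T, so that a request finds a cold instance exactly when the preceding idle gap exceeds T:

```python
import math

def cold_start_probability(arrival_rate, expiration_threshold):
    """P(inter-arrival gap > T) under Poisson arrivals: exp(-lambda * T)."""
    return math.exp(-arrival_rate * expiration_threshold)

def expected_response_time(arrival_rate, expiration_threshold, r_cold, r_warm):
    """Weighted average of the measured cold and warm response times."""
    p_cold = cold_start_probability(arrival_rate, expiration_threshold)
    return p_cold * r_cold + (1 - p_cold) * r_warm

# Illustrative numbers: 0.01 req/s, 10-minute idle expiration,
# 2.5 s average cold response vs 0.2 s warm.
p = cold_start_probability(0.01, 600)
r = expected_response_time(0.01, 600, 2.5, 0.2)
```

The exponential gap distribution is the simplifying assumption here; real platforms exhibit more complex expiration and scaling behaviour, which is what the paper's full model addresses.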
Big Data and Cognitive Computing
The shift towards microservices that can be observed in recent developments of the cloud landscape for applications has led to the emergence of the Function as a Service (FaaS) concept, also called Serverless. The term describes an event-driven, reactive programming paradigm in which functional components run in container instances that are scaled, deployed, executed and billed on demand by the cloud provider. However, increasing reports of issues with Serverless services have revealed significant obscurity regarding their reliability; in particular, developers and especially system administrators struggle with latency compliance. In this paper, following a systematic literature review, the performance indicators influencing traffic and the effective delivery of the provider's underlying infrastructure are determined through empirical measurements based on the example of a File Upload Stream on Amazon's Web Service Cloud. This popular example was used as an experimental baseline...
Serverless Computing: Optimizing Resource Utilization and Cost Efficiency
International Journal of Innovative Science and Research Technology, 2024
Serverless computing has emerged as a transformative paradigm in cloud infrastructure, offering organizations the ability to scale their applications dynamically without the burden of managing underlying servers. By abstracting away the provisioning and scaling of infrastructure, serverless computing enables developers to focus on building and deploying their applications, while the cloud provider handles the autoscaling, load balancing, and fault tolerance. This paper examines the key benefits and challenges of serverless computing, with a particular emphasis on optimizing resource utilization and cost efficiency. The findings suggest that serverless computing can lead to significant improvements in resource utilization and cost savings, but organizations must also address challenges related to cold starts, vendor lock-in, and monitoring complexity to fully realize the potential of this cloud computing paradigm.
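The keep-alive idea from this paper's title is one common response to the cold-start challenge mentioned above. A small simulation can make the effect concrete; the single-instance model and idle-timeout semantics below are simplifying assumptions for illustration, not any provider's documented behaviour:

```python
def cold_request_count(request_times, idle_timeout, keepalive_interval=None):
    """Count user requests that hit a cold instance.

    A request is cold when the (single) instance has been idle longer
    than idle_timeout seconds. Optional keep-alive pings, fired every
    keepalive_interval seconds, refresh the instance just like real
    requests do, but are not counted as user-visible cold starts.
    """
    events = [(t, True) for t in request_times]  # (time, is_user_request)
    if keepalive_interval:
        horizon = max(request_times)
        t = keepalive_interval
        while t <= horizon:
            events.append((t, False))
            t += keepalive_interval
    events.sort()

    cold_user_requests = 0
    last_seen = None
    for t, is_user in events:
        if (last_seen is None or t - last_seen > idle_timeout) and is_user:
            cold_user_requests += 1
        last_seen = t  # any activity, ping or request, warms the instance
    return cold_user_requests

# Three requests 20 minutes apart with a 10-minute idle timeout:
# without pings every one is cold; a 5-minute ping keeps the
# instance warm after the first invocation.
without = cold_request_count([0, 1200, 2400], 600)
with_pings = cold_request_count([0, 1200, 2400], 600, keepalive_interval=300)
```

The trade-off the paper's title points at is visible even in this toy model: the pings remove latency spikes but consume invocations themselves, which feeds directly into the cost-efficiency discussion.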
Evaluation of Production Serverless Computing Environments
2018 IEEE 11th International Conference on Cloud Computing (CLOUD), 2018
Serverless computing provides a small runtime container for executing code without infrastructure management, similar to Platform as a Service (PaaS) but at the function level. Amazon launched its event-driven compute service, the Lambda function, in 2014 with a limit of 25 concurrent executions; it now supports at least a thousand concurrent invocations to process event messages generated by resources such as databases, storage and system logs. Other providers, i.e. Google, Microsoft and IBM, offer dynamic scaling managers to handle parallel requests to stateless functions, in which additional containers are provisioned on new compute nodes for distribution. However, while functions are often developed for microservices and lightweight workloads, they are also associated with distributed data processing using concurrent invocations. We claim that current serverless computing environments can support dynamic applications in parallel when a partitioned task is executable on a small function instance. We present results on throughput, network bandwidth, file I/O and compute performance with respect to concurrent invocations. We also deployed a series of functions for large-scale distributed data processing to address elasticity and scalability, and demonstrate the differences between serverless computing and virtual machines in terms of cost efficiency and resource utilization.
A Review of Serverless Use Cases and their Characteristics
ArXiv, 2020
The serverless computing paradigm promises many desirable properties for cloud applications - low cost, fine-grained deployment, and management-free operation. Consequently, the paradigm has undergone rapid growth: tens of serverless platforms currently exist, and all global cloud providers host serverless operations. To help tune existing platforms, guide the design of new serverless approaches, and contribute overall to understanding this paradigm, in this work we present a long-term, comprehensive effort to identify, collect, and characterize 89 serverless use cases. We survey use cases sourced from white and grey literature, and from consultations with experts in areas such as scientific computing. We study each use case along 24 characteristics, covering general aspects as well as workload, application, and requirements. When the use cases employ workflows, we further analyze their characteristics. Overall, we hope our study will be useful for both academia and industry, ...
Serverless Computing: An Investigation of Deployment Environments for Web APIs
Computers, 2019
Cloud vendors offer a variety of serverless technologies promising high availability and dynamic scaling while reducing operational and maintenance costs. One such technology, serverless computing, or function-as-a-service (FaaS), is advertised as a good candidate for web applications, data processing, or backend services, where you only pay for usage. Unlike virtual machines (VMs), these services come with automatic resource provisioning and allocation, providing elastic and automatic scaling. We present the results of our investigation of a specific serverless candidate, a Web Application Programming Interface, or Web API, deployed on virtual machines and as function(s)-as-a-service. We contrast these deployments by varying the number of concurrent users to measure response times and costs. We found no significant response-time differences between deployments when VMs are configured for the expected load and test scenarios stay within the FaaS hardware limitations. Higher numbers of concu...
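The cost side of such a VM-versus-FaaS comparison reduces to a simple break-even calculation: FaaS bills per request plus compute time, while a VM bills for uptime regardless of traffic. A hedged sketch, with illustrative placeholder rates rather than any provider's current price list:

```python
def faas_monthly_cost(requests, avg_duration_s, memory_gb,
                      per_request=0.0000002, per_gb_s=0.0000166667):
    """Illustrative FaaS bill: a per-request fee plus GB-seconds of
    compute. The default rates are placeholders for the sketch."""
    compute = requests * avg_duration_s * memory_gb * per_gb_s
    return requests * per_request + compute

def vm_monthly_cost(hourly_rate, hours=730):
    """A VM bills for wall-clock uptime (roughly 730 hours per month)."""
    return hourly_rate * hours

# One million 200 ms requests at 512 MB versus a small always-on VM:
faas = faas_monthly_cost(1_000_000, 0.2, 0.5)
vm = vm_monthly_cost(0.05)
```

At low or bursty traffic the pay-per-use model wins; the crossover point where the always-on VM becomes cheaper is exactly what concurrent-user experiments like the paper's probe empirically.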
ArXiv, 2021
Serverless computing has emerged as an attractive deployment option for cloud applications in recent times. The unique features of this computing model include rapid auto-scaling, strong isolation, fine-grained billing options and access to a massive service ecosystem that autonomously handles resource management decisions. Because of these characteristics, the model is increasingly being explored for deployments in geographically distributed edge and fog computing networks as well. Effective management of computing resources has always attracted much attention among researchers, and the need to automate the entire process of resource provisioning, allocation, scheduling, monitoring and scaling has prompted a specialized focus on resource management under the serverless model. In this article, we identify the major aspects covering the broader concept of resource management in serverless environments and propose a taxonomy of elements which influence these aspects, encomp...
2021
Serverless computing is a game-changing technology in today's world: developers need not worry about the host machine and only need to build the code. It is important to identify which serverless cloud platform is best suited for deploying the code, so as to minimize effort and maximize the outcome for a given use case or business case. This paper runs a large number of samples on the three big giants of the cloud computing industry and their renowned serverless offerings: the Amazon Web Services (AWS) Lambda Function, the Google Cloud Platform (GCP) Cloud Function, and the Microsoft Azure Function. All these functions share the same environment, Node.js version 12, with the code written in JavaScript; Microsoft Visual Studio Code was used as the code editor for our sampling. The samples start at five hundred records and go up to one hundred thousand (one lakh) records, and the limitations of all three functions are checked along the way. The paper provides a brief comparison in terms of response time and the number of records fetched from the functions.
Serverless Computing and Scheduling Tasks on Cloud: A Review
Recently, the emergence of Function-as-a-Service (FaaS) has gained increasing attention from researchers. FaaS, also known as serverless computing, is a new concept in cloud computing that provides computation services in which code execution is triggered in response to certain events. In this paper, we discuss various proposals related to scheduling tasks in clouds. These proposals are categorized according to their objective functions, namely minimizing execution time, minimizing execution cost, or multiple objectives (time and cost). The dependency relationships between tasks play a vital role in determining the efficiency of a scheduling approach, and such dependencies may result in resource underutilization. FaaS is expected to have a significant impact on the process of scheduling tasks: the problem can be reduced by adopting a hybrid approach that combines the benefits of both FaaS and Infrastructure-as-a-Service (IaaS). Using FaaS, we can run the small tasks remotely and focus only on scheduling the large tasks. This increases resource utilization because the small tasks need not be considered during the scheduling process. An extension of the restricted time limit by cloud vendors would allow running the complete workflow on the serverless architecture, avoiding the scheduling problem altogether.
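The hybrid FaaS/IaaS split described above can be sketched as a simple partitioning step in front of the cluster scheduler; the runtime-based cutoff and the tuple layout are illustrative assumptions, since real systems would also weigh memory limits, data locality and cost:

```python
def partition_tasks(tasks, faas_time_limit_s):
    """Split a workload for hybrid scheduling.

    Tasks whose estimated runtime fits within the FaaS execution time
    limit are offloaded to functions and dropped from the IaaS
    scheduling problem; longer tasks remain for the cluster scheduler.
    `tasks` is a list of (name, estimated_runtime_s) pairs.
    """
    faas_tasks = [t for t in tasks if t[1] <= faas_time_limit_s]
    iaas_tasks = [t for t in tasks if t[1] > faas_time_limit_s]
    return faas_tasks, iaas_tasks

# A 900 s function time limit sends the short tasks to FaaS and
# leaves only the long-running one for the IaaS scheduler.
workload = [("thumbnail", 10), ("batch_etl", 1200), ("resize", 30)]
faas_jobs, iaas_jobs = partition_tasks(workload, 900)
```

Shrinking the input to the scheduler is the whole point: with the small tasks gone, the (often NP-hard) dependency-aware scheduling only has to cover the large tasks.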