Live Migration Research Papers - Academia.edu
2025, 2011 IEEE 19th Annual International Symposium on Modelling, Analysis, and Simulation of Computer and Telecommunication Systems
Clouds allow enterprises to increase or decrease their resource allocation on demand in response to changes in workload intensity. Virtualization is one of the building blocks for cloud computing and provides the mechanisms to implement the dynamic allocation of resources. These dynamic reconfiguration actions impact performance for the duration of the reconfiguration. In this paper, we model the cost of reconfiguring a cloud-based IT infrastructure in response to workload variations. We show that maintaining a cloud requires frequent reconfigurations necessitating both VM resizing and VM live migration, with live migration dominating reconfiguration costs. We design the CosMig model to predict the duration of live migration and its impact on application performance. Our model is based on parameters that are typically monitored in enterprise data centers. Further, the model faithfully captures the impact of shared resources in a virtualized environment. We experimentally validate the accuracy and effectiveness of CosMig using microbenchmarks and representative applications.
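The abstract above does not reproduce the CosMig equations; as a rough illustration of the kind of first-order estimate such models refine, pre-copy live-migration duration is often approximated from the VM's memory size, its page-dirty rate, and the available migration bandwidth. The sketch below uses that simple model; the function name, parameter values, and stop-and-copy threshold are illustrative assumptions, not taken from the paper.

```python
def estimate_precopy_migration(mem_mb, dirty_rate_mbps, bandwidth_mbps,
                               stop_copy_threshold_mb=64, max_rounds=30):
    """First-order pre-copy estimate: each round resends the pages dirtied
    during the previous round until the remainder is small enough for
    stop-and-copy. Returns (total_seconds, downtime_seconds, rounds).
    Illustrative only; models such as CosMig also account for CPU and
    cache interference on shared hosts."""
    if dirty_rate_mbps >= bandwidth_mbps:
        raise ValueError("pre-copy cannot converge: dirty rate >= bandwidth")
    remaining = mem_mb
    total_time = 0.0
    rounds = 0
    while remaining > stop_copy_threshold_mb and rounds < max_rounds:
        round_time = remaining / bandwidth_mbps      # time to send current dirty set
        remaining = dirty_rate_mbps * round_time     # pages dirtied meanwhile
        total_time += round_time
        rounds += 1
    downtime = remaining / bandwidth_mbps            # final stop-and-copy phase
    return total_time + downtime, downtime, rounds

# Example: 4 GB VM, 80 MB/s dirty rate, roughly 1 Gbps (119 MB/s) link
print(estimate_precopy_migration(4096, 80, 119))
```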
2025, IEEE Transactions on Parallel and Distributed Systems
Recent research trends exhibit a growing imbalance between the demands of tenants' software applications and the provisioning of hardware resources. Misalignment of demand and supply gradually hinders workloads from being efficiently mapped to fixed-sized server nodes in traditional data centers. The incurred resource holes not only lower infrastructure utilization but also cripple the capability of a data center for hosting large-sized workloads. This deficiency motivates the development of a new rack-wide architecture referred to as the composable system. The composable system transforms traditional server racks of static capacity into a dynamic compute platform. Specifically, this novel architecture aims to link up all compute components that are traditionally distributed across individual server boards, such as the central processing unit (CPU), random access memory (RAM), storage devices, and other application-specific processors. By doing so, a logically giant compute platform is created, and this platform is more resilient to varying workload demands because it breaks the resource boundaries among traditional server boards. In this paper, we introduce the concepts of this reconfigurable architecture and design a framework of the composable system for cloud data centers. We then develop mathematical models to describe the resource usage patterns on this platform and enumerate some types of workloads that commonly appear in data centers. From the simulations, we show that the composable system sustains up to 1.6 times the workload intensity of traditional systems and is insensitive to the distribution of workload demands. This demonstrates that the composable system is indeed an effective solution to support cloud data center services.
2025
Medical image processing in the Cloud can involve moving large data sets and/or applications across the network infrastructure. With the aim of minimizing the total processing time, the optimal placement of image data and processing algorithms on a large-scale, distributed Cloud infrastructure is a challenging task. This work presents a genetic algorithm-based approach for data and application (virtual machine) placement that uses hypervisor and network metrics to avoid service level agreement violations. The solution involves placing medical image data and associated processing algorithms at optimized processing and compute nodes located within the Cloud. The results of initial experiments show that a genetic algorithm-based placement approach can increase Cloud-based application performance.
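The abstract describes a genetic-algorithm placement driven by hypervisor and network metrics but gives no encoding details. The sketch below shows one common encoding for such a GA (one gene per VM whose value is the chosen host) with a toy fitness that minimises the worst per-host data-transfer time; the fitness definition, parameter values, and names are assumptions for illustration only.

```python
import random

def ga_place(vm_sizes_gb, host_bw_gbps, generations=200, pop_size=40,
             mutation_rate=0.1, seed=0):
    """Toy GA: chromosome[i] = host chosen for VM i. Fitness is the negative
    of the worst per-host transfer time (data size / bandwidth), i.e. we
    minimise the makespan of moving image data to its host. Illustrative
    encoding only; the paper also folds in hypervisor metrics and SLA
    penalties."""
    rng = random.Random(seed)
    n_vms, n_hosts = len(vm_sizes_gb), len(host_bw_gbps)

    def fitness(chrom):
        load = [0.0] * n_hosts
        for vm, host in enumerate(chrom):
            load[host] += vm_sizes_gb[vm] / host_bw_gbps[host]
        return -max(load)

    pop = [[rng.randrange(n_hosts) for _ in range(n_vms)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_vms)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if rng.random() < mutation_rate:
                child[rng.randrange(n_vms)] = rng.randrange(n_hosts)
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return best, -fitness(best)

# Five image data sets (GB) placed across three hosts with different bandwidths
print(ga_place([20, 5, 12, 30, 8], [1.0, 2.5, 10.0]))
```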
2025, Engineering Science and Technology, an International Journal
Virtual machine migration is used in cloud computing for server consolidation, system maintenance, opportunistic power savings, and load balancing. Scatter-Gather VM live migration is a new type of virtual machine migration, proposed to decouple the source quickly and reduce the time to evict the state of a migrating virtual machine from the host. In Scatter-Gather live migration, the memory pages of the VM are transferred not just to the destination but also to several pre-selected intermediaries. The open problem of selecting these intermediate nodes has not been addressed in the literature. In this paper, we define the problem of intermediate node selection in Scatter-Gather migration and prove that it is NP-complete. The problem is mathematically modelled as an integer programming problem based on two optimality criteria: minimizing eviction time and minimizing energy. Two heuristic algorithms, maximum-decrease-in-eviction-time and least-increase-in-energy, are proposed to solve the problem, and their performance is analyzed with respect to three parameters: eviction time, energy, and total migration time.
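As a concrete reading of the "maximum-decrease-in-eviction-time" heuristic named above, the sketch below greedily adds the intermediary that most reduces a simple eviction-time estimate (VM memory divided by the aggregate scatter bandwidth) until the marginal gain is negligible. The bandwidth-additive eviction model and the stopping threshold are assumptions, not the paper's formulation.

```python
def select_intermediaries(vm_mem_gb, candidate_bw, dest_bw, min_gain_s=1.0):
    """Greedy max-decrease-in-eviction-time heuristic (illustrative).
    candidate_bw: {node: bandwidth in GB/s from the source to that node};
    dest_bw: bandwidth to the destination. Eviction time is modelled as
    vm_mem_gb divided by the total parallel scatter bandwidth."""
    chosen, total_bw = [], dest_bw
    evict_time = vm_mem_gb / total_bw
    remaining = dict(candidate_bw)
    while remaining:
        node = max(remaining, key=remaining.get)       # biggest time decrease
        new_time = vm_mem_gb / (total_bw + remaining[node])
        if evict_time - new_time < min_gain_s:         # gain too small: stop
            break
        chosen.append(node)
        total_bw += remaining.pop(node)
        evict_time = new_time
    return chosen, evict_time

# 16 GB VM, 1 GB/s to the destination, three candidate intermediaries
print(select_intermediaries(16, {"n1": 1.0, "n2": 2.5, "n3": 0.1}, dest_bw=1.0))
```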
2025, ACM SIGCOMM Computer Communication Review
Many data centers extensively use virtual machines (VMs), which provide the flexibility to move workload among physical servers. VMs can be placed to maximize application performance, power efficiency, or even fault tolerance. However, VMs are typically repositioned without considering network topology, congestion, or traffic routes. In this demo, we show a system, Virtue, which enables the comparison of different algorithms for VM placement and network routing at the scale of an entire data center. Our goal is to understand how placement and routing affect overall application performance by varying the types and mix of workloads, network topologies, and compute resources; these parameters will be available for demo attendees to explore.
2025, Journal of emerging technologies and innovative research
This paper proposes a high-throughput hardware (HW) accelerator for lossless compression of the command trace. The proposed HW is designed as a pipeline to handle Huffman tree generation, encoding, and stream merging. To avoid the HW cost increase that comes with high-throughput processing, the Huffman tree is implemented efficiently using static random access memory (SRAM)-based queues and bitmaps. In addition, variable-length stream merging is performed at low cost by reducing the HW wire width, exploiting the mathematical properties of Huffman coding and handling the metadata and the Huffman codewords in separate FIFOs. Moreover, to further improve the compression efficiency for DDR4 memory commands, the proposed design incorporates two preprocessing operations, the "don't-care bits override" and the "bits rearrange," which exploit the operating characteristics of DDR4 memory.
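The paper's contribution is a hardware pipeline; as a software-level reminder of the primitive it accelerates, the sketch below builds a Huffman code table for a toy DDR4 command trace using Python's heapq. It illustrates standard Huffman coding only and none of the SRAM-queue, bitmap, or stream-merge structures described above.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table {symbol: bitstring} from an iterable of
    symbols (for example, command trace tokens). Standard software
    construction; the paper's contribution is a pipelined HW version."""
    freq = Counter(symbols)
    # Heap entries: (weight, tie_breaker, tree); a tree is a symbol or (left, right)
    heap = [(w, i, s) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    if len(heap) == 1:                     # degenerate single-symbol trace
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next_id, (t1, t2)))
        next_id += 1
    table = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            table[tree] = prefix
    walk(heap[0][2], "")
    return table

trace = ["ACT", "RD", "RD", "PRE", "RD", "WR", "RD", "ACT"]
print(huffman_code(trace))
```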
2025, 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (ccgrid 2012)
This paper presents a framework to support transparent, live migration of virtual GPU accelerators in a virtualized execution environment. Migration is a critical capability in such environments because it provides support for fault tolerance, on-demand system maintenance, resource management, and load balancing in the mapping of virtual to physical GPUs. Techniques to increase responsiveness and reduce migration overhead are explored. The system is evaluated by using four application kernels and is demonstrated to provide low migration overheads. Through transparent load balancing, our system provides a speedup of 1.7 to 1.9 for three of the four application kernels.
2025, IOSR Journal of Computer Engineering
With the advancements in networking and network infrastructure, and the growing scale of processing, storage/data, and communication, the demand for virtualization and flexible data center resource management has become significantly important. Virtualization is the single most effective way to reduce IT expenses while boosting efficiency and agility. Over time, the advancements in virtualization, the increasing demands of data centers, and the rising cost of developing and maintaining network infrastructure have led to the development of virtual machines and their large-scale migration across data centers. Though virtual machines in a data center decouple processing from the physical machines, they are still coupled with their hosts for sharing virtual resources, thereby exerting further load on servers. This paper focuses on balancing the load of the servers with the help of efficient VM migrations and proposes an algorithm that adapts to dynamic business needs in a better way for data center access and request processing.
2025
Cloud Computing has impacted the way data is stored, handled and accessed. One of the biggest advantages of Cloud Computing is its ability to provide service on a large scale. This is made possible by technologies such as Virtualization. The challenges of provisioning, managing and scheduling the resources are aplenty. Autonomic Cloud computing provides solutions for automatically scaling the resource pool up or down. Various resource management schemes are available, and the proposed mechanism is an enhanced version of Live Virtual Machine Migration. The Pre-Copy technique is used to study the behaviour of resources, and the performance of the resource management system is enhanced by the Pre-Copy approach.
2025, Journal of Educational and Social Research
In today's world, where technology plays a crucial role in development, education is also benefiting from advancements in network infrastructure and virtualized services. The use of virtualization technology and high-availability (HA) infrastructures can transform how educational institutions manage and deliver their services. This is especially important for learning environments that require continuous access to critical systems such as data servers, online learning platforms, and student resources. Network and server administrators in educational institutions must ensure the availability and security of their systems. Building high-availability infrastructures enables the creation of more secure and stable environments, ensuring uninterrupted access to essential resources for students and educators. This paper addresses the use of virtualization technology in managing high-availability services using open-source tools. Educational services can be hosted on virtual machines that are automatically migrated between physical nodes of a cluster system in case of a network failure, ensuring a seamless experience for users. The project utilizes the open-source Heartbeat program to enable real-time migration of virtual machines between physical nodes in a cluster. A solution and algorithm (Perf+) will be developed to improve CPU and memory performance and reduce downtime during migration. The solution will be tested in an experimental HA system, measuring the impact of migration on performance. This approach helps educational institutions adopt innovative technologies to enhance the quality of education, ensuring continuous and reliable access to their services.
2025, International Conference on Green Computing
The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. While Cloud computing provides many advanced features, it still has some shortcomings, such as the relatively high operating cost for both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper, a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency will be vastly improved in a data center-based Cloud with minimal performance overhead.
2025, Advances in systems analysis, software engineering, and high performance computing book series
Recent developments in virtualization and communication technologies have transformed the way data centers are designed and operated by providing new tools for better sharing and control of data center resources. In particular, Virtual Machine (VM) migration is a powerful management technique that gives data center operators the ability to adapt the placement of VMs in order to better satisfy performance objectives, improve resource utilization and communication locality, mitigate performance hotspots, achieve fault tolerance, reduce energy consumption, and facilitate system maintenance activities. Despite these potential benefits, VM migration also poses new requirements on the design of the underlying communication infrastructure, such as addressing and bandwidth requirements to support VM mobility. Furthermore, devising efficient VM migration schemes is also a challenging problem, as it not only requires weighing the benefits of VM migration, but also considering migration costs, including communication cost, service disruption, and management overhead. This chapter provides an overview of VM migration benefits and techniques and discusses its related research challenges in data center environments.
2025, Journal of emerging technologies and innovative research
The core of Cloud computing includes virtualization of hardware resources such as storage, network and memory provided through virtual machines (VMs). The live migration of these VMs is introduced to obtain multiple benefits, which mainly include high availability, hardware maintenance, fault takeover and workload balancing. Besides the various facilities of VM migration, it is susceptible to severe security risks during the migration process, due to which the industry is hesitant to accept it. The research done so far focuses on the performance of the migration process, whereas the security aspects of migration are not fully explored. We have carried out an extensive survey to investigate the vulnerabilities, threats and possible attacks on live VM migration. Furthermore, we have identified security requirements for secure VM migration and presented a detailed analysis of existing solutions on the basis of these security requirements. Finally, limitations in the existing solutions are presented.
2025, Journal of Applied Engineering and Technological Science (JAETS)
Industry and government have recently acknowledged and used virtual machines (VMs) to promote their businesses. During VM operation, some problems might occur. Issues such as a heavy memory load, a large CPU load, a massive disk load, a high network load and time-defined migration might interrupt business processes. This paper identifies the migration process among hosts for VMs to overcome these problems within the defined migration time frame. Timely VM migration is introduced so that problems are detected earlier. Workload parameters such as network, CPU, disk and memory are used as our parameters. To overcome the issue, we apply a fuzzy rule model, which follows a basic decision-tree structure for decision-making. The fuzzy model is applied in this study to determine VM allocation from busy VMs to vacant VMs for balancing purposes. The result of the study showed that the use of the fuzzy model to forecast VMs m...
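The abstract names CPU, memory, disk, and network load as inputs to a fuzzy, tree-like rule model but does not list the rules. The sketch below shows one plausible shape of such a rule base, using triangular membership functions and min/max operators; the membership breakpoints and the rule set are assumptions, not the paper's model.

```python
def tri(x, low, mid, high):
    """Triangular membership: 0 outside [low, high], 1 at mid."""
    if x <= low or x >= high:
        return 0.0
    return (x - low) / (mid - low) if x <= mid else (high - x) / (high - mid)

def migration_urgency(cpu, mem, disk, net):
    """Toy fuzzy evaluation: each utilisation (0-100%) is fuzzified into a
    'high' degree, rules combine them with min (fuzzy AND) and max (fuzzy OR),
    and the result is a crisp urgency in [0, 1] used to rank busy hosts.
    Rule set and thresholds are illustrative only."""
    high = {name: tri(v, 60, 85, 101) for name, v in
            (("cpu", cpu), ("mem", mem), ("disk", disk), ("net", net))}
    r1 = min(high["cpu"], high["mem"])     # CPU AND memory both high
    r2 = min(high["disk"], high["net"])    # disk AND network both high
    r3 = max(high.values())                # any single resource very high
    return max(r1, r2, 0.5 * r3)

# Hosts whose urgency exceeds a chosen threshold become migration sources
print(migration_urgency(cpu=92, mem=78, disk=30, net=40))
```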
2025
Cloud users may decide to live migrate their virtual machines from one public cloud provider to another due to lower cost or ceasing operations. Currently, it is not possible to install a second virtualization platform on public cloud infrastructure (IaaS) because nested virtualization and hardware-assisted virtualization are disabled by default. As a result, cloud users' VMs are tightly coupled to the provider's IaaS, hindering live migration of VMs to different providers. This paper introduces LivCloud, a solution for live cloud migration. LivCloud is designed based on well-established criteria to live migrate VMs across various cloud IaaS with minimal interruption to the services hosted on these VMs. The paper discusses the basic design of LivCloud, which consists of a virtual machine manager and an IPsec VPN tunnel introduced for the first time within this environment. It is also the first time that the migrated VM architecture (64-bit & 32-bit) is taken into consideration. In this study, we evaluate the implementation of the basic design of LivCloud on an Amazon EC2 C4 instance, a compute-optimized instance with high-performance processors. In particular, we explore three developed options. These options are being tested for the first time on EC2 to change the value of the EC2 instance's control registers. Changing the values of the registers will significantly help enable nested virtualization on Amazon EC2.
2025, Journal of Network and Computer Applications
In the growing age of cloud computing, shared computing and storage resources can be accessed over the Internet. However, the infrastructure cost of the cloud can reach enormous levels. Therefore, the virtualization concept is applied in cloud computing systems to help users and owners achieve better usage and efficient management of the cloud at the least cost. Live migration of virtual machines (VMs) is an essential feature of virtualization, which allows migrating VMs from one location to another without suspending them. This process has many advantages for data centers such as load balancing, online maintenance, power management, and proactive fault tolerance. To enhance live migration of VMs, many optimization techniques have been applied to minimize the key performance metrics of total transferred data, total migration time and downtime. This paper provides a better understanding of live migration of virtual machines and its main approaches. Specifically, it focuses on reviewing state-of-the-art optimization techniques devoted to developing live VM migration according to memory migration. It reviews, discusses, analyzes and compares these techniques to realize their optimization and their challenges. This work also highlights the open research issues that necessitate further investigation...
2024, ArXiv
Virtualization technology reduces cloud operational cost by increasing cloud resource utilization levels. However, the incorporation of virtualization within cloud data centers can severely degrade cloud performance if not properly managed. Virtual machine (VM) migration is a method that assists cloud service providers in efficiently managing cloud resources while eliminating the need for human supervision. VM migration moves the currently hosted workload from one server to another by employing either a live or a non-live migration pattern. In comparison to non-live migration, live migration does not suspend application services prior to the VM migration process. VM migration enables cloud operators to achieve various resource management goals, such as green computing, load balancing, fault management, and real-time server maintenance. In this paper, we have thoroughly surveyed VM migration methods and applications. We have briefly discussed VM migration applications. Some open research issues...
2024, International Journal of Cloud Applications and Computing
The article presents an efficient energy optimization framework based on dynamic resource scheduling for VM migration in cloud data centers. The increasing number of cloud data centers all over the world consumes a vast amount of power and thus emits a huge amount of CO2, which has a strong negative impact on the environment. Therefore, implementing green cloud computing through efficient power reduction is a momentous research area. Live Virtual Machine (VM) migration and server consolidation technology, along with appropriate resource allocation of users' tasks, are particularly useful for reducing power consumption in cloud data centers. In this article, the authors propose algorithms which mainly consider live VM migration techniques for power reduction, named "Power_reduction" and "VM_migration." Moreover, the authors implement dynamic scheduling of servers based on sequential search, random search, and a maximum fairness search for convenient allocation and higher utiliza...
2024, International Journal of Innovative Technology and Exploring Engineering
Nowadays, energy consumption is one of the most prominent concerns among the computing services of cloud computing. A large amount of power is absorbed by the data centre because of the huge amount of data processing, which has increased abnormally. So it is time to think about energy consumption in the cloud environment. Existing energy consumption systems are limited in terms of virtualization, because improper virtualization leads to load imbalance, excessive power consumption, and inefficiency in terms of computational power. Billing [1,2] is another exciting feature that is closely related to energy consumption, because higher or lower billing depends on energy consumption. As we know, cloud providers allow cloud users to access resources on a pay-per-use basis, so these resources need to be optimally selected to process the user request and maximize user satisfaction in the distributed virtualized environment. There may be an inequity between the actual powe...
2024, 2015 International Symposium on Networks, Computers and Communications (ISNCC)
Software Defined Networking (SDN) is based on three features: centralization of the control plane, programmability of network functions, and traffic engineering. Network function migration poses interesting problems that we try to expose and solve in this paper. Content Distribution Network virtualization is presented as a use case.
2024
With the rise of cloud computing, data centers have been called to play a main role in today's Internet scenario. Despite this relevance, they are probably still far from their zenith, due to the ever-increasing demand for content to be stored in and distributed by the cloud, the need for computing power, and the larger and larger amounts of data being analyzed by top companies such as Google, Microsoft or Amazon.
2024, Communications: Wireless in Developing Countries and Networks of the Future
Migration is an important feature for network virtualization because it allows the reallocation of virtual resources over the physical resources. In this paper, we investigate the characteristics of different migration models, according to their virtualization platforms. We show the main advantages and limitations of using the migration mechanisms provided by Xen and OpenFlow platforms. We also propose a migration model for Xen, using data and control plane separation, which outperforms the Xen standard migration. We developed two prototypes, using Xen and OpenFlow, and we performed evaluation experiments to measure the impact of the network migration on traffic forwarding.
2024, International Journal of Information Technology
Cloud computing is a new business model that provides the facility to avail computing power on demand, anytime, anywhere. It is highly elastic and can grow or shrink dynamically according to client need. Virtual machine migration (VMM) plays a very important role in providing this elasticity to the cloud environment. VMM generates a considerable amount of overhead and also degrades the overall performance of the cloud environment, so it becomes very important to decide when to migrate and when not to. In this paper, we present challenges and shortcomings in existing virtual machine migration approaches, most of which monitor resources at the hypervisor level. To overcome these shortcomings, we have introduced a real-time resource monitoring (RTRM) model for selecting the hotspot host and deciding when virtual machine migration should take place. Our results show significant improvement in hotspot detection as compared to primitive techniques.
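The abstract does not give RTRM's trigger condition for declaring a hotspot. A common pattern for this kind of real-time monitoring, sketched below, is a sustained-threshold check over a sliding window so that short utilisation spikes do not trigger needless migrations; the window length and thresholds are assumptions.

```python
from collections import deque

class HotspotDetector:
    """Flag a host as a hotspot only when utilisation stays above the
    threshold for most of a sliding window, filtering out short spikes
    that would otherwise trigger needless migrations (illustrative)."""
    def __init__(self, threshold=0.85, window=6, required=5):
        self.threshold = threshold
        self.samples = deque(maxlen=window)
        self.required = required

    def update(self, cpu_util):
        self.samples.append(cpu_util)
        hot = sum(1 for s in self.samples if s > self.threshold)
        return len(self.samples) == self.samples.maxlen and hot >= self.required

detector = HotspotDetector()
for t, util in enumerate([0.7, 0.9, 0.92, 0.88, 0.95, 0.91, 0.93]):
    if detector.update(util):
        print(f"sample {t}: hotspot detected, select a VM to migrate")
```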
2024, Advances in Information Security
Virtual machine migration is a powerful technique used to balance the workload of hosts in environments such as a cloud data center. With this technique, VMs can be transferred from a source host to a destination host for various reasons, such as maintenance of the source host or the resource requirements of the VMs. VM migration can happen in two ways: live and offline migration. During live VM migration, VMs are transferred from a source host to a destination host while running. In that situation, the state of the running VM and information such as memory pages are copied from the host and transferred to the destination by the VM migration system. There exist security risks to the migrating VM's data integrity and confidentiality. After a successful VM migration, the source host shall remove the memory pages of the migrated VM. However, there should be a mechanism for the owner of the VM to make sure the VM's memory pages and information are removed from the source host's physical memory. On the other hand, the memory portion on the destination host shall be cleared of previously used VMs' data and possibly malicious code. In this chapter, we investigate the possibility of misuse of the migrating VM's data, either in transit or present at the source and destination during the VM migration process. Based on the investigations, we give a proposal for a secure live VM migration protocol.
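The abstract does not detail the proposed protocol; the sketch below illustrates just two ingredients it motivates: hashing the VM's memory pages before transfer so the owner can verify integrity at the destination, and zero-filling the source copy afterwards so the old host retains no residual data. The page size and function names are assumptions for illustration.

```python
import hashlib, os

PAGE_SIZE = 4096  # bytes, a typical x86 page

def page_digests(memory: bytearray):
    """Per-page SHA-256 digests taken before transfer; the VM owner can
    recompute them at the destination to detect tampering in transit."""
    return [hashlib.sha256(memory[i:i + PAGE_SIZE]).hexdigest()
            for i in range(0, len(memory), PAGE_SIZE)]

def scrub(memory: bytearray):
    """Zero-fill the source copy after a successful migration so the
    migrated VM's data cannot be recovered from the old host's RAM."""
    memory[:] = bytes(len(memory))

# Illustrative round trip
vm_mem = bytearray(os.urandom(8 * PAGE_SIZE))      # stand-in for guest memory
sent = bytes(vm_mem)                               # "transferred" copy
assert page_digests(bytearray(sent)) == page_digests(vm_mem)  # integrity check
scrub(vm_mem)
assert vm_mem == bytearray(len(sent))              # source pages cleared
```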
2024
Why do we need to care about performance unpredictability in the cloud? Seung-Hwan Lim claimed that unpredictability creates a cascade effect in all the related jobs: a low-performing outlier dictates the overall performance of the entire application. In order to address this problem, virtual machine (VM) scheduling or reassigning to different physical machines has been considered. Amid VM scheduling, he mentioned that a set of VM migrations occur, and the migration policy, in turn, determines the performance impact of reassigning VMs. He presented measurements showing that migration time can vary according to system configuration and how VMs are grouped for migration. He formulated an optimization problem that tries to minimize the total migration time when migrating a set of VMs while bounding the performance impact. This formulation allows him to estimate the completion time when multiple jobs contend for multiple resources. He also proposed performance slowdown as the metric of performance variance, which can be calculated from his formula. How would this work handle cases in which jobs were dependent? This work assumed only independent cases, in order to ease the difficulty of calculating the probability of contention across multiple resources. The dependent case is more challenging and would be a direction for future work. Byung-Gon Chun asked how much accuracy degraded in estimating finish time when more than two jobs were considered. Lim said results showed about 15% accuracy degradation with up to four co-located workloads. How does this work compare with existing live migration work? Lim replied that many have considered the optimal state in terms of VM assignment, but this work looks at what happens during the state transition to optimal states. Live migration work typically addresses migrating a single virtual machine, but they dealt with multiple VM migrations, which bring a greater performance impact than a single VM migration.
2024, Journal of emerging technologies and innovative research
The core of Cloud computing includes virtualization of hardware resources such as storage, network and memory provided through virtual machines (VMs). The live migration of these VMs is introduced to obtain multiple benefits, which mainly include high availability, hardware maintenance, fault takeover and workload balancing. Besides the various facilities of VM migration, it is susceptible to severe security risks during the migration process, due to which the industry is hesitant to accept it. The research done so far focuses on the performance of the migration process, whereas the security aspects of migration are not fully explored. We have carried out an extensive survey to investigate the vulnerabilities, threats and possible attacks on live VM migration. Furthermore, we have identified security requirements for secure VM migration and presented a detailed analysis of existing solutions on the basis of these security requirements. Finally, limitations in the existing solutions are presented.
2024, IEEE Access
Due to the rapid uptake of cloud services, the energy consumption of cloud data centres is increasing dramatically. These cloud services are provided by Virtual Machines (VMs) through the cloud data center. Therefore, energy-aware VM allocation and migration are essential tasks in the cloud environment. This paper proposes a Branch-and-Price based energy-efficient VM allocation algorithm and a Multi-Dimensional Virtual Machine Migration (MDVMM) algorithm for the cloud data center. The Branch-and-Price based VM allocation algorithm reduces energy consumption and resource wastage by selecting the optimal number of energy-efficient PMs at the cloud data center. The proposed MDVMM algorithm saves energy and avoids Service Level Agreement (SLA) violations by performing an optimal number of VM migrations. The experimental results demonstrate that the proposed Branch-and-Price based VM allocation with VM migration saves more than 31% of energy consumption and improves average resource utilization by 21.7% over existing state-of-the-art techniques, with a 95% confidence interval. The proposed approaches also outperform existing state-of-the-art VM allocation and migration algorithms in terms of SLA violations, VM migrations, and the combined Energy-SLA-Violation (ESV) metric.
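Branch-and-Price itself requires a column-generation formulation that is too long to sketch here; the fragment below instead shows the simpler power-aware first-fit-decreasing baseline that exact allocation methods are typically compared against, packing VMs onto the fewest, most energy-efficient PMs under a linear power model. This baseline and its power model are assumptions for illustration, not the paper's algorithm.

```python
def power_aware_ffd(vm_cpu_demands, pm_capacities, pm_idle_watts, pm_peak_watts):
    """First-fit-decreasing placement onto PMs ordered by energy efficiency
    (capacity per peak watt). Returns ({pm_index: [vm ids]}, estimated_watts).
    A consolidation baseline only; Branch-and-Price explores the same space
    exactly via column generation."""
    pm_order = sorted(range(len(pm_capacities)),
                      key=lambda p: pm_capacities[p] / pm_peak_watts[p],
                      reverse=True)
    free = list(pm_capacities)
    placement = {}
    for vm in sorted(range(len(vm_cpu_demands)),
                     key=lambda v: vm_cpu_demands[v], reverse=True):
        for p in pm_order:
            if free[p] >= vm_cpu_demands[vm]:
                placement.setdefault(p, []).append(vm)
                free[p] -= vm_cpu_demands[vm]
                break
        else:
            raise RuntimeError(f"VM {vm} does not fit on any PM")
    # Linear power model for the chosen placement: idle power plus utilisation share
    watts = sum(pm_idle_watts[p] + (pm_peak_watts[p] - pm_idle_watts[p])
                * (pm_capacities[p] - free[p]) / pm_capacities[p]
                for p in placement)
    return placement, watts

print(power_aware_ffd([8, 4, 4, 2, 6], [16, 16, 8], [100, 120, 60], [250, 260, 140]))
```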
2024
Live Migration (LM) of a Virtual Machine (VM) is the process of transferring a running VM from one host to another on a different physical machine without interrupting the VM. In datacentre networks, LM enables flexibility in resource optimisation, fault tolerance and load balancing. However, in practice, the resource consumption and latency of live VM migration reduce these benefits to much less than their potential. In this paper, we present the results of an experimental study that evaluates LM in our unique high-speed optical fibre network connecting Northern Ireland, Dublin and Halifax (Canada). We observe that with Pre-Copy LM, very large amounts of stressed memory lead to non-convergence over high-latency paths. With Post-Copy LM, on the other hand, the total migration time as well as the downtime are dominated by specific memory utilisation patterns inside the virtualised guest. We observe variation in total migration time and downtime using Post-Copy LM with respect to Quality of Service (QoS) parameters, which can have a significant impact on the performance of cloud applications.
2024, Journal of Computer and Communications
IT infrastructures have been widely deployed in datacentres by cloud service providers for Infrastructure as a Service (IaaS) with Virtual Machines (VMs). With the rapid development of cloud-based tools and techniques, IaaS is changing the current cloud infrastructure to meet the customer demand. In this paper, an efficient management model is presented and evaluated using our unique Trans-Atlantic high-speed optical fibre network connecting three datacentres located in Coleraine (Northern Ireland), Dublin (Ireland) and Halifax (Canada). Our work highlights the design and implementation of a management system that can dynamically create VMs upon request, process live migration and other services over the high-speed inter-networking Datacentres (DCs). The goal is to provide an efficient and intelligent on-demand management system for virtualization that can make decisions about the migration of VMs and get better utilisation of the network.
2024, IAEME PUBLICATION
Industrial automation is experiencing a significant transformation with the integration of edge computing technologies, which bring computational power closer to data sources, enabling real-time processing and intelligent decision-making at the network edge. This paradigm shift optimizes data analytics and enhances system responsiveness in industrial settings, addressing critical issues such as latency, bandwidth limitations, and reliability concerns associated with traditional cloud-based systems. The paper explores edge computing's applications in industrial automation, including real-time monitoring and control, predictive maintenance, and quality assurance, highlighting benefits like improved operational efficiency, reduced downtime, and enhanced product quality. While implementation challenges such as security concerns, interoperability problems, and data governance issues exist, the potential of edge computing to reshape industrial automation is immense. As technology advances and industry standards evolve, edge computing is poised to unlock new levels of efficiency, reliability, and scalability in industrial processes. The integration of edge computing with emerging technologies like 5G and artificial intelligence promises to further revolutionize the industrial landscape, despite the hurdles that need to be overcome.
2024, Bulletin of Electrical Engineering and Informatics
The fifth generation (5G) architecture represents the most recent advancement in mobile networks and is presently operational in various places globally. Several new use cases and applications have been introduced, with a specific focus on improving throughput, reducing latency, minimising packet loss, optimising CPU usage, and maximising memory utilisation. In order to effectively address each scenario, it is necessary to integrate the most advanced technology, putting in significant effort to optimise resources and ensure system adaptability. This strategy will establish an architecture capable of accommodating many scenarios on a shared physical infrastructure by using techniques such as virtualization and cloud-based service deployment. Therefore, in this study, a test was carried out on the performance of the 5G core network (CN) on bare-metal servers and virtual private servers (VPSs). Quality of service (QoS) is tested using Wireshark and Iperf3, while resource usage is measured with the 'cpustat' and 'free' tools. The performance comparison of these two methods on the 5G CN shows throughput values between 10 Gbps and 20 Gbps, latency values of at most 4 ms, and packet loss of 0%, in accordance with IMT-2020 standards. Thus, the ideal 5G CN services can be realized.
2024
Load balancing is one of the critical issues in the cloud due to changes in user requirements at run time. The cloud provider allots resources to users with the help of virtualization, which allows dividing physical resources into virtual machines (VMs). User services run on these VMs, which are hosted inside physical machines (PMs). If the VMs are not distributed properly, the performance of the physical and virtual machines degrades. Hence load balancing is a core management function of the cloud provider. Three steps are involved in the migration process: source PM selection, VM selection, and target PM selection. A study of previous work on VM migration shows that VM selection and VM placement are the two challenging tasks in the cloud environment, and the performance of a load balancing approach depends entirely on VM selection and placement. Further, the performance of the load balancing approach can be controlled by selecting suitable physical and virtual machines. Plenty of work on load balancing in the cloud computing environment has been presented in the last few decades, and the approaches differ mostly in their VM selection and VM placement policies. This paper presents various existing VM selection and placement approaches with their anomalies.
2024
In this era of digital communication, data is the most important thing and is used everywhere; a huge amount of data is available across networks of system nodes. We therefore require a technique to store data over the network or Internet, which is known as cloud computing. Cloud computing is the delivery of on-demand computing services: cloud service providers host data services, and users can access data from these services. With this technique, users can share, store, and retrieve data anywhere at any time, but storing large amounts of data makes retrieval hard, so load balancing techniques are used to maintain and balance the load over the cloud. The aim of this paper is to describe the types of load balancing algorithms and introduce cloud computing.
2024, Telecommunication Systems
Live virtual machine migration is one of the most promising features of data center virtualization technology. Numerous strategies have been proposed for live migration of virtual machines on Local Area Networks. These strategies work perfectly in their respective domains with negligible downtime. However, these techniques are not suitable for live migration over Wide Area Networks and result in significant downtime. In this paper we propose a machine learning-based downtime optimization (MLDO) approach, an adaptive live migration approach based on predictive mechanisms that reduces downtime during live migration over wide area networks for standard workloads. The main contribution of our work is to employ machine learning methods to reduce downtime. Machine learning methods are also used to introduce automated learning into the predictive model and adaptive threshold levels. We compare our proposed approach with existing strategies in terms of downtime observed during the migration process and have observed improvements in downtime of up to 15%.
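The abstract does not specify which learner MLDO uses; the sketch below shows the general pattern it describes: fit a regressor on features of past migrations (memory size, dirty rate, bandwidth, RTT) and predict downtime before deciding whether to migrate. The feature set, the least-squares model, and the numbers are assumptions for illustration.

```python
import numpy as np

def fit_downtime_model(history):
    """history: list of (mem_gb, dirty_mb_s, bw_mb_s, rtt_ms, downtime_ms)
    observed from past WAN migrations. Fits a least-squares linear model
    downtime = w . features + b; any regressor could be substituted."""
    data = np.asarray(history, dtype=float)
    X = np.hstack([data[:, :4], np.ones((len(data), 1))])   # add bias column
    y = data[:, 4]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_downtime(w, mem_gb, dirty_mb_s, bw_mb_s, rtt_ms):
    return float(np.dot(w, [mem_gb, dirty_mb_s, bw_mb_s, rtt_ms, 1.0]))

history = [
    (2, 50, 100, 40, 900), (4, 80, 100, 40, 2100),
    (2, 50, 300, 10, 350), (8, 120, 300, 10, 1500),
    (4, 60, 150, 25, 1200), (8, 90, 150, 25, 2600),
]
w = fit_downtime_model(history)
est = predict_downtime(w, mem_gb=4, dirty_mb_s=70, bw_mb_s=200, rtt_ms=20)
print(f"predicted downtime: {est:.0f} ms")   # defer migration if above an SLA bound
```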
2024
Cloud computing is one of the most rapidly emerging technologies in the world. Due to the rapid increase in the number of data centers providing services to users, their power consumption and operational costs are increasing day by day. Because of high power consumption and continuous operation, the amount of CO2 emitted keeps increasing, adding to the greenhouse effect. So there is a need to create an energy-efficient system where power consumption is low. Virtualization can be used to reduce the power consumption of a data center. VM placement deals with the optimal choice of the physical machine on which a VM should be placed. Moreover, the main aim of making placement energy efficient is to consolidate the load onto fewer servers and switch off the idle servers. In this paper, a summary of various VM placement algorithms is given and energy conservation parameters are also discussed.
2024, Computer Networks
Traditional network functions such as firewalls and Intrusion Detection Systems (IDS) are implemented in costly dedicated hardware, making the networks expensive to manage and inflexible to changes. Network function virtualization enables flexible and inexpensive operation of network functions, by implementing virtual network functions (VNFs) as software in virtual machines (VMs) that run on commodity servers. However, VNFs are vulnerable to various faults such as software and hardware failures. Without efficient and effective fault-tolerant mechanisms, the benefits of deploying VNFs in networks can be traded off. In this paper, we investigate the problem of fault-tolerant VNF placement in cloud networks, by proactively deploying VNFs in stand-by VM instances when necessary. It is challenging because VNFs are usually stateful. This means that stand-by instances require continuous state updates from active instances during their operation, and the fault-tolerant methods need to carefully handle such states. Specifically, the placement of active/stand-by VNF instances, the request routing paths to active instances, and state transfer paths to stand-by instances need to be jointly considered. To tackle this challenge, we devise an efficient heuristic algorithm for the fault-tolerant VNF placement. We also propose two bicriteria approximation algorithms with provable approximation ratios for the problem without compute or bandwidth constraints. We then consider the dynamic fault recovery problem given that some placed active instances of VNFs may go faulty, for which we propose an approximation algorithm that dynamically switches traffic processing from faulty VNFs to stand-by instances. Simulations with realistic settings show that our algorithms can significantly improve the request admission rate compared to conventional approaches. We finally evaluate the performance of the proposed algorithm for the dynamic fault recovery problem in a real test-bed consisting of both physical and virtual switches, and results demonstrate that our algorithms have potential for being applied in real scenarios.
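The paper's heuristic and approximation algorithms are not reproduced in the abstract; the sketch below shows a much simpler greedy stand-in for the joint decision it describes: for each request, pick an (active, stand-by) server pair with enough capacity that minimises request-routing hops plus state-transfer hops. The cost model, capacity handling, and data layout are assumptions for illustration.

```python
from itertools import permutations

def place_ft_vnf(requests, servers, dist):
    """requests: list of (src_node, cpu_demand); servers: {node: free_cpu};
    dist[a][b]: hop count between nodes. For each request, greedily pick an
    (active, stand-by) server pair on distinct nodes that minimises routing
    hops plus state-transfer hops, subject to capacity. A greedy stand-in
    for the paper's heuristic, not its algorithm."""
    placement = []
    for src, demand in requests:
        best = None
        for act, stby in permutations(servers, 2):
            if servers[act] < demand or servers[stby] < demand:
                continue
            cost = dist[src][act] + dist[act][stby]   # routing + state updates
            if best is None or cost < best[0]:
                best = (cost, act, stby)
        if best is None:
            placement.append((src, None))             # request rejected
            continue
        _, act, stby = best
        servers[act] -= demand
        servers[stby] -= demand                       # reserve stand-by capacity
        placement.append((src, (act, stby)))
    return placement

dist = {"u": {"u": 0, "a": 1, "b": 2, "c": 3},
        "a": {"u": 1, "a": 0, "b": 1, "c": 2},
        "b": {"u": 2, "a": 1, "b": 0, "c": 1},
        "c": {"u": 3, "a": 2, "b": 1, "c": 0}}
print(place_ft_vnf([("u", 4), ("u", 4)], {"a": 6, "b": 8, "c": 8}, dist))
```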
2024, International Journal of INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING
Energy efficiency is one of the most crucial aspects to consider while operating a cloud; a cloud that is not energy efficient will be more expensive to operate and maintain. Using the horse herd algorithm to position virtual machines within a cloud is one technique to increase the system's energy efficiency. The horse herd algorithm is a heuristic used to optimize the placement of virtual machines in a cloud. The algorithm works by first identifying the set of machines that are most energy efficient; these are the machines that will be used to host the virtual machines. It then identifies the set of machines that are the least energy efficient, which are avoided. Finally, the algorithm places the virtual machines on the most energy-efficient machines. Because the virtual machines are placed on the most energy-efficient machines, the cloud is also able to meet its Service Level Agreement (SLA) requirements. The horse herd algorithm is therefore a good option to consider for improving the energy efficiency of a cloud. A recent study has shown that the Horse Herd Algorithm can achieve energy efficiency and meet SLA requirements in virtual machine placement for SDN-managed clouds. The Horse Herd Algorithm is a placement algorithm that is based on the location of resources in a data center.
2024
Serverless computing is an emerging computing paradigm where users can focus solely on the business logic of their applications without needing to invest in system administration. Cloud providers like Amazon AWS [4] provide users with... more
Serverless computing is an emerging computing paradigm where users can focus solely on the business logic of their applications without needing to invest in system administration. Cloud providers like Amazon AWS [4] provide users with the platform, services, and tools to build, deploy, and maintain their applications. Still missing, however, are comprehensive tools for debugging applications on serverless platforms like AWS Lambda [5]. In this work, we identify core debugging features that are unavailable to Lambda users and present two technical innovations to improve the debugging experience. The command line step debugger allows a user to step-debug a single Lambda instance, and the investigation toolkit provides the user with the ability to extract details from, interrupt programs on, and interact directly with individual Lambda instances. These tools fill a previously unfilled niche in debugging serverless functions on AWS Lambda. 1 Cloud Computing at Present One view of moder...
2024, International Journal of Innovative Technology and Exploring Engineering
Using virtualization, many virtual machines can run in parallel on the same host. For dynamic resource management, virtual machines can be migrated from their residing host to a different... more
Using virtualization, many virtual machines can run in parallel on the same host. For dynamic resource management, virtual machines can be migrated from their residing host to a different one. But before starting the migration, some questions need to be answered: when to start the virtual machine migration, which VM to migrate, and where to migrate it? Virtual machine migration methods in the virtual cloud environment have already been researched at length, but very few studies have focused on affinity relations among virtual machines during migration. Hence, the key objective of this research paper is to explore affinity-aware VM migration in detail and to propose affinity-aware VM migration algorithms for migrating a group of VMs with affinity to a destination host with less capacity than required. This paper also provides a brief review of several virtual machine migration techniques.
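As a simple illustration of the selection problem this entry describes (a sketch with assumed field names and a made-up affinity score, not the algorithms proposed in the paper), one can keep the most strongly affine VMs together on the capacity-limited destination host and spill the remainder to a fallback host:

```python
# Illustrative sketch: split an affinity group of VMs between a destination host
# whose free capacity is smaller than the group's total demand and a fallback host.
def split_affinity_group(vms, dest_free, fallback_free):
    """vms: list of (name, cpu_demand, affinity_score); higher score = stronger
    need to stay with the group. Returns (to_dest, to_fallback, unplaced)."""
    to_dest, to_fallback, unplaced = [], [], []
    # Strongest affinity first, so tightly coupled VMs stay co-located.
    for name, demand, score in sorted(vms, key=lambda v: v[2], reverse=True):
        if demand <= dest_free:
            to_dest.append(name)
            dest_free -= demand
        elif demand <= fallback_free:
            to_fallback.append(name)
            fallback_free -= demand
        else:
            unplaced.append(name)
    return to_dest, to_fallback, unplaced
```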
2024, International Journal of Innovative Technology and Exploring Engineering
Nowadays, energy consumption is one of the most prominent research areas among the several computing services of cloud computing. A large share of power resources is consumed by the data centre because of the huge amount of data processing, which is... more
Nowadays, energy consumption is one of the most prominent research areas among the several computing services of cloud computing. A large share of power resources is consumed by the data centre because of the huge amount of data processing, which has increased dramatically. It is therefore time to address energy consumption in the cloud environment. Existing energy-consumption systems are limited in terms of virtualization, because improper virtualization leads to load imbalance, excessive power consumption, and inefficiency in terms of computational power. Billing [1,2] is another important feature closely related to energy consumption, because higher or lower billing depends in part on energy consumption. Since cloud providers allow cloud users to access resources on a pay-per-use basis, these resources need to be optimally selected to process user requests and maximize user satisfaction in the distributed virtualized environment. There may be an inequity between the actual powe...
2024, IEEE Systems Journal
Network virtualization facilitates the deployment of new protocols and applications without the need to change the core of the network. One key step in instantiating virtual networks (VNs) is the allocation of physical resources to... more
Network virtualization facilitates the deployment of new protocols and applications without the need to change the core of the network. One key step in instantiating virtual networks (VNs) is the allocation of physical resources to virtual elements (routers and links), which can then be targeted for the minimization of energy consumption. However, such mappings need to support the quality-of-service requirements of applications. Indeed, the search for an optimal solution for the VN mapping problem is NP-hard, and approximation algorithms must be developed for its solution. The dynamic allocation and deallocation of VNs on a network substrate can compromise the optimality of a mapping designed to minimize energy consumption, since such allocation and deallocation can lead to the underutilization of the network substrate. To mitigate such negative effects, techniques such as live migration can be employed to rearrange already mapped VNs in order to improve network utilization, thus minimizing energy consumption. This paper introduces a set of new algorithms for the mapping of VNs on network substrates designed to reduce network energy consumption. Moreover, two new algorithms for the migration of virtual routers and links are proposed, with simulations showing the efficacy of the algorithms.
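As a rough illustration of energy-aware VN mapping (a greedy sketch under assumed node attributes, not the algorithms proposed in the paper), one can prefer substrate nodes that are already active so that idle nodes can remain powered off, and route each virtual link over a shortest substrate path:

```python
# Illustrative greedy sketch of energy-aware virtual-network mapping.
# Attribute names ('cpu', 'active', 'delay') are assumptions for the example.
import networkx as nx

def map_virtual_network(substrate: nx.Graph, vn_nodes, vn_links):
    """vn_nodes: {vnode: cpu_demand}; vn_links: [(vnode_a, vnode_b)]."""
    node_map = {}
    for vnode, cpu in vn_nodes.items():
        candidates = [n for n in substrate.nodes if substrate.nodes[n]["cpu"] >= cpu]
        if not candidates:
            return None  # no feasible mapping for this virtual router
        # Reuse already-active nodes first to keep idle nodes powered off.
        active = [n for n in candidates if substrate.nodes[n].get("active")]
        chosen = active[0] if active else candidates[0]
        substrate.nodes[chosen]["cpu"] -= cpu
        substrate.nodes[chosen]["active"] = True
        node_map[vnode] = chosen
    # Map each virtual link onto a shortest substrate path between its endpoints.
    link_map = {
        (a, b): nx.shortest_path(substrate, node_map[a], node_map[b], weight="delay")
        for a, b in vn_links
    }
    return node_map, link_map
```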
2024, International Journal of Intelligent Information Technologies
2024, Research Square (Research Square)
Organizations widely use cloud computing to outsource their computing needs. One crucial issue of cloud computing is that services must be available to clients at all times. However, the cloud services may be temporarily unavailable due... more
Organizations widely use cloud computing to outsource their computing needs. One crucial issue of cloud computing is that services must be available to clients at all times. However, cloud services may be temporarily unavailable due to maintenance of the cloud infrastructure, load balancing of services, defense against cyber attacks, power management, proactive fault tolerance, or resource usage. The unavailability of cloud services negatively impacts the business model of cloud providers. One solution to tackle service unavailability is Live Virtual Machine Migration (LVM), that is, moving virtual machines (VMs) from the source host machine to the destination host without disrupting the running application. Pre-copy memory migration is a common LVM approach used in most networked systems such as the cloud. The main difficulty with this approach is the high rate of frequently updated memory pages, referred to as "dirty pages". Transferring these updated or dirty pages during the pre-copy migration approach prolongs the total migration time. After a predefined number of iterations, the pre-copy approach enters the stop-and-copy phase and transfers the remaining memory pages. If the remaining pages are numerous, the downtime or service unavailability will be very high, resulting in a negative impact on the availability of the running services. To minimize such service downtime, it is critical to find an optimal time to migrate a virtual machine in the pre-copy approach. To address this issue, this paper proposes a machine learning-based method to optimize pre-copy migration. It has three main stages: (i) feature selection, (ii) model generation, and (iii) application of the proposed model in pre-copy migration. The experimental results show that our proposed model outperforms other machine learning models in terms of prediction accuracy and significantly reduces downtime or service unavailability during the migration process.
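To make the pre-copy mechanics above concrete, here is a minimal, self-contained sketch of the pre-copy control loop with a placeholder rule standing in for the paper's learned model; the function name, page-based units, and numeric values are illustrative assumptions, not the authors' implementation:

```python
# Sketch of iterative pre-copy migration: each round transfers the pages that
# were dirtied during the previous round, and the loop switches to stop-and-copy
# once the predicted remaining-page count fits the downtime budget. The simple
# threshold below is a placeholder for the trained ML model described above.
def precopy_migrate(total_pages, dirty_rate_pages, bandwidth_pages,
                    max_rounds=30, downtime_budget_pages=1000):
    """Return (rounds, remaining_pages_at_stop_and_copy)."""
    remaining = total_pages
    for round_no in range(1, max_rounds + 1):
        transfer_time = remaining / bandwidth_pages        # seconds for this round
        remaining = int(dirty_rate_pages * transfer_time)  # pages dirtied meanwhile
        if remaining <= downtime_budget_pages:
            return round_no, remaining                     # enter stop-and-copy now
    return max_rounds, remaining                           # forced stop-and-copy

# Example: 1M pages, 20k pages/s dirtied, 100k pages/s link -> (5, 320).
print(precopy_migrate(1_000_000, 20_000, 100_000))
```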