What is server virtualization? The ultimate guide

Server virtualization is a process that creates and abstracts multiple virtual instances on a single server. Server virtualization also abstracts or masks server resources, including the number and identity of individual physical machines, processors and different operating systems.

Traditional computer hardware and software designs typically supported single applications. Often, this forced servers to each run a single workload, wasting unused processors, memory capacity and other hardware resources such as network bandwidth. Server hardware counts spiraled upward as organizations deployed more applications and services across the enterprise. The corresponding costs and increasing demands on space, power, cooling and connectivity pushed data centers to their limits.

The advent of server virtualization changed all this. Virtualization adds a layer of software, called a hypervisor, to a computer, which abstracts the underlying hardware from all the software that runs above. Virtualization translates physical resources into virtual -- logical -- equivalents. The hypervisor then organizes and manages the computer's virtualized resources, provisioning those virtualized resources into logical instances called virtual machines (VMs), each capable of functioning as a separate and independent server.

The key here is resource utilization. Hypervisor-managed virtualization can create and run multiple simultaneous VMs built from the computer's available resources. Virtualization can enable one computer to do the work of multiple computers, utilizing up to 100% of the server's available hardware to handle multiple workloads simultaneously. This reduces server counts, eases the strain on data center facilities, improves IT flexibility and lowers the cost of IT for the enterprise.

Virtualization has changed the face of enterprise computing, but its many benefits are sometimes tempered by factors such as licensing and management complexity, as well as potential availability and downtime issues. Organizations must understand what virtualization is, how it works, its tradeoffs and use cases. Only then can an organization adopt and deploy virtualization effectively across the data center.

Why is server virtualization important?

To appreciate the role of virtualization in the modern enterprise, consider a bit of IT history.

Virtualization isn't a new idea. The technology first appeared in the 1960s during the early era of computer mainframes as a means of supporting mainframe time-sharing, which divides the mainframe's considerable hardware resources to run multiple workloads simultaneously. Virtualization was an ideal and essential fit for mainframes because their substantial cost and complexity typically limited an organization to a single deployed system -- organizations had to get the most utilization from that investment.

The advent of x86 computing architectures in the 1980s brought readily available, simple, low-cost computing devices. Organizations moved away from mainframes and embraced individual computer systems to host or serve each enterprise application to growing numbers of user or client endpoint computers. Because individual x86-type computers were simple and limited in processing, memory and storage capacity, the x86 computer and its operating systems (OSes) were typically capable of supporting only a single application. One big, shared computer was replaced by many small, cheap computers. Virtualization was no longer necessary, and its use faded into history along with mainframes.

But two factors emerged that drove the return of virtualization technology to the modern enterprise. First, computer hardware evolved quickly and dramatically. By the early 2000s, typical enterprise-class servers routinely provided multiple processors and far more memory and storage than most enterprise applications could realistically use. This resulted in wasted resources -- and wasted capital investment -- as excess computing capacity on each server went unused. It was common to find an enterprise server utilizing only 15% to 25% of its available resources.

The second factor was a hard limit on facilities. Organizations simply procured and deployed additional servers as more workloads were added to the enterprise application repertoire. Over time, the sheer number of servers in operation could threaten to overwhelm a data center's physical space, cooling capacity and power availability. The early 2000s experienced major concerns with energy availability, distribution and costs. The trend of spiraling server counts and wasted resources was unsustainable.

History of virtualization timeline

Virtualization's history spans emerging technologies such as the x86 architecture, hypervisors and virtual switches.

Server virtualization reemerged in the late 1990s with several basic products and services, but it wasn't until the release of VMware's ESX Server 1.0 product in 2001 that organizations finally had access to a production-ready virtualization software platform. The years that followed introduced additional virtualization products from the Xen Project, Microsoft's Hyper-V with Windows Server 2008 and others. Virtualization had matured in stability and performance, and the introduction of Docker in 2013 ushered in the era of virtualized containers offering greater speed and scalability for microservices application architectures compared to traditional VMs.

Today's virtualization platforms embrace the same functional ideas as their early mainframe counterpart. Virtualization abstracts software from the underlying hardware, enabling virtualization to provision and manage virtualized resources as isolated and independent logical instances -- effectively turning one physical server into multiple virtual servers, each capable of operating independently to support multiple applications running on the same physical computer at the same time.

The importance of server virtualization has been profound because it addresses the two problems that plagued enterprise computing into the 21st century. Virtualization lowers the physical server count, enabling an organization to reduce the number of physical servers in the data center -- or run vastly more workloads without adding servers. It's a technique called server consolidation. The lower server count also conserves data center space, power and cooling; this can often forestall or even eliminate the need to build new data center facilities. In addition, virtualization platforms routinely provide powerful capabilities such as centralized VM management, VM migration -- enabling a VM to easily move from one system to another -- and workload/data protection through backups and snapshots.

Virtualization also formed a cornerstone of modern cloud services. By helping to overcome the limitations of physical server environments, virtualization provided a principal mechanism to allow flexible, highly consolidated, highly efficient, software-driven data centers that are essential to practical cloud computing. There would be no cloud without server virtualization and other virtualization technologies such as network virtualization.

How does server virtualization work?

Server virtualization works by abstracting or isolating a computer's hardware from all the software that might run on that hardware. This abstraction is accomplished by a hypervisor, a specialized software product which must be installed on a physical computer. There are numerous hypervisors in the enterprise space, including Microsoft Hyper-V and VMware vSphere.

Virtual containers, introduced later as a virtualization alternative, use a hypervisor variation called a container engine, such as Docker, often paired with orchestration frameworks such as Apache Mesos. Although the characteristics and behaviors of containers differ slightly from their VM counterparts, the underlying goals of resource abstraction, provisioning and management are identical.

Abstraction recognizes the computer's physical resources -- including processors, memory, storage volumes and network interfaces -- and creates logical aliases for those resources. For example, a physical processor can be abstracted into a logical representation called a virtual CPU, or vCPU. The hypervisor is responsible for managing all the virtual resources that it abstracts and handles all the data exchanges between virtual resources and their physical counterparts.

Server virtualization architecture

Virtualization uses software that simulates hardware functionality to create a virtual system, enabling organizations to run multiple operating systems and applications on a single server.

The real power of a hypervisor isn't abstraction, but what can be done with those abstracted resources. A hypervisor uses virtualized resources to create logical representations of computers, or VMs. A VM is assigned virtualized processors, memory, storage, network adapters and other virtualized elements -- such as GPUs -- managed by the hypervisor. When a hypervisor provisions a VM, the resulting logical instance is completely isolated from the underlying hardware and all other VMs established by the hypervisor. This means a VM has no direct dependence on, or knowledge of, the underlying physical computer or any of the other VMs that might share the physical computer's resources.
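To make these ideas concrete, here is a minimal Python sketch -- purely illustrative and hypothetical, not any vendor's API -- of how a hypervisor might track a host's physical resources and provision isolated VMs from abstracted equivalents such as vCPUs and virtual memory, each VM carrying its own guest OS.

```python
# Hypothetical model of hypervisor-managed provisioning (illustration only).
from dataclasses import dataclass, field

@dataclass
class Host:
    cpus: int           # physical processor cores
    memory_gb: int      # physical memory

@dataclass
class VM:
    name: str
    vcpus: int          # virtual CPUs abstracted from physical cores
    memory_gb: int
    guest_os: str       # each VM installs its own OS, e.g., Windows Server or Linux

@dataclass
class Hypervisor:
    host: Host
    vms: list = field(default_factory=list)

    def provision(self, name, vcpus, memory_gb, guest_os):
        used_cpu = sum(vm.vcpus for vm in self.vms)
        used_mem = sum(vm.memory_gb for vm in self.vms)
        # Refuse to exceed the host's physical capacity (no overcommitment here).
        if used_cpu + vcpus > self.host.cpus or used_mem + memory_gb > self.host.memory_gb:
            raise RuntimeError(f"Insufficient host resources for VM '{name}'")
        vm = VM(name, vcpus, memory_gb, guest_os)
        self.vms.append(vm)     # each VM is tracked, and isolated, by the hypervisor
        return vm

hv = Hypervisor(Host(cpus=4, memory_gb=64))
hv.provision("web01", vcpus=1, memory_gb=16, guest_os="Windows Server")
hv.provision("app01", vcpus=1, memory_gb=16, guest_os="Linux")
print([vm.name for vm in hv.vms])   # ['web01', 'app01']
```

Real hypervisors do far more -- scheduling, device emulation, isolation enforcement -- but the bookkeeping pattern is the same: abstract the physical inventory, then carve logical slices from it.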

This logical isolation, combined with careful resource management, enables a hypervisor to create and control multiple VMs on the same physical computer at the same time -- with each VM capable of acting as a complete, fully functional computer. Virtualization enables an organization to carve several virtual servers from a single physical server. Once a VM is established, it requires a complete software installation of its own, including an operating system, drivers, libraries and the desired enterprise application. This enables an organization to use multiple OSes to support a wide mix of workloads all on the same physical computer. For example, one VM might use a Windows Server version to run a Windows application, while another VM on the same computer might use a Linux variation to run a Linux application.

The abstraction enabled by virtualization gives VMs extraordinary flexibility that isn't possible with traditional physical computers and physical software installations. All VMs exist and run in a computer's physical memory space, so VMs can easily be saved as ordinary memory image files. These saved files can be used to quickly create duplicate or clone VMs on the same or other computers across the enterprise, or to save the VM at that point in time. Similarly, a VM can easily be moved from one virtualized computer to another simply by copying the desired VM from the memory space of a source computer to a memory space in a target computer and then deleting the original VM from the source computer. In most cases, the migration can take place without disrupting the VM or user experience.

Although virtualization makes it possible to create multiple logical computers from a single physical computer, the actual number of VMs that can be created is limited by the physical resources present on the host computer, and the computing demands imposed by the enterprise applications running in those VMs. For example, a computer with four CPUs and 64 GB of memory might host up to four VMs each with one vCPU and 16 GB of virtualized memory. Once a VM is created, it's possible to change the abstracted resources assigned to the VM to optimize the VM's performance and maximize the number of VMs hosted on the system.
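As a quick illustration of that sizing arithmetic (using the example figures above, not a recommendation), the limiting resource determines how many VMs fit:

```python
# Illustrative capacity math: how many identically sized VMs fit on a host,
# limited by whichever resource runs out first.
host_cpus, host_mem_gb = 4, 64    # physical resources on the host
vm_vcpus, vm_mem_gb = 1, 16       # resources assigned to each VM
max_vms = min(host_cpus // vm_vcpus, host_mem_gb // vm_mem_gb)
print(max_vms)                    # 4
```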

Newer and more resource-rich computers can host a larger number of VMs, while older systems or those with compute-intensive workloads might host fewer VMs. It's possible for a hypervisor to assign the same physical resources to more than one VM -- or to allocate more virtual resources than the host physically provides -- a practice called overcommitment. This is generally discouraged because of the performance penalties incurred, as the system must time-share any overcommitted resources. The ready availability of powerful new computers also makes overcommitment all but unnecessary, because the penalties of squeezing another VM onto a physical system far outweigh the benefits. It's easier and better to simply provision the additional VM on another system where resources are available.
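A simple way to quantify overcommitment is the ratio of assigned virtual resources to physical resources. The short sketch below, using assumed values, computes a vCPU overcommitment ratio; anything above 1.0 means the hypervisor must time-share physical cores.

```python
# Illustrative vCPU overcommitment ratio (assumed, arbitrary values).
physical_cores = 4
vcpus_assigned = [2, 2, 1, 1, 2]    # vCPUs allocated to each hosted VM
ratio = sum(vcpus_assigned) / physical_cores
print(f"vCPU overcommitment ratio: {ratio:.1f}:1")   # 2.0:1 -- cores are time-shared
```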

What are the benefits of server virtualization?

Virtualization brings a wide range of technological and business benefits to the organization. A handful of the most important and common benefits are summarized below.

Server virtualization benefits

Server virtualization benefits include cost savings, more efficient resource provisioning and improved productivity.

What are the disadvantages of server virtualization?

Although server virtualization brings a host of potential benefits to the organization, the additional software and management overhead it introduces carries numerous possible disadvantages, summarized below, that the organization should weigh.

Server virtualization drawbacks

Server virtualization disadvantages include implementation and licensing costs, virtual server sprawl, security concerns and resource contention.

Use cases and applications

Virtualization has proven to be a reliable and versatile technology that has permeated much of the data center over the last two decades. Yet organizations might continue to face important questions about suitable use cases and applications for virtualization deployment. Today, server virtualization can be applied across a vast spectrum of enterprise use cases, projects and business objectives.

Very few enterprise workloads can't function well in a VM. The exceptions are typically legacy applications that depend on direct access to specific server hardware to function, such as a particular processor model or type. Such concerns are rare today and should continue to abate as legacy applications are revised and updated over time.

What are the types of server virtualization?

Virtualization is accomplished through three proven techniques: the VM model (full virtualization), paravirtualization and virtualization hosted by the OS.

Full virtualization vs. paravirtualization comparison chart

Full virtualization is a complete abstraction of resources from the underlying hardware, whereas paravirtualization requires the OS to communicate with the hypervisor.

VM model

The VM model is the most popular and widely implemented approach to virtualization, used by vendors such as VMware and Microsoft. This approach employs a hypervisor based on a virtual machine monitor (VMM) that is usually installed directly on the computer's hardware. Such hypervisors are typically dubbed Type 1, full virtualization or bare-metal virtualization, and they require no dedicated OS on the host computer. In fact, a bare-metal hypervisor is often regarded as a virtualization OS -- an operating system in its own right. The term host VM is sometimes applied to a principal VM running the server's management software or other main workload -- though Type 1 hypervisors rarely designate or require a host VM today.

The hypervisor is responsible for abstracting and managing the host computer's resources, such as processors and memory, and then providing those abstracted resources to one or more VM instances. Each VM exists as a guest atop the hypervisor. Guest VMs are completely logically isolated from the hypervisor and other VMs. Each VM requires its own guest OS, enabling organizations to employ varied OS versions on the same physical computer.

Paravirtualization

Early bare-metal hypervisors faced performance limitations. Paravirtualization emerged to address those issues by modifying the OS to recognize and cooperate with the hypervisor through commands called hypercalls, rather than relying on full hardware emulation. The modified host OS could then create and manage guest VMs, and applications running within those guests required no modification.

The principal challenge of paravirtualization is the need for a host OS -- and the need to modify that OS -- to support virtualization. Unmodified proprietary OSes, such as Microsoft Windows, won't support a paravirtualized environment, and a paravirtualized hypervisor, such as Xen, requires support and drivers built into the Linux kernel. This poses considerable risk for OS updates and changes: An organization shifting from one OS to another might lose paravirtualization support. The popularity of paravirtualization quickly waned as computer hardware evolved to support VMM-based virtualization directly, such as by adding virtualization extensions to the processor's instruction set.

Full vs. para vs. hardware-assisted virtualization

Before hardware-assisted virtualization, virtualization was accomplished using two techniques: full virtualization and paravirtualization.

Hosted virtualization

Although it's most common to host a hypervisor directly on a computer's hardware -- foregoing the need for a host OS -- a hypervisor can also be installed atop an existing host OS to provide virtualization services for one or more VMs. This is dubbed Type 2 or hosted virtualization, an approach used by products such as VMware Workstation and Oracle VM VirtualBox. A closely related approach, OS-level virtualization -- employed by products such as Virtuozzo and Solaris Zones -- goes further and lets each virtual instance share the underlying host OS kernel along with common binaries and libraries, something Type 1 hypervisors don't allow.

Sharing a common host OS potentially makes virtual instances far more resource efficient because the OS need not be duplicated for every instance. Consequently, this approach can potentially support hundreds, even thousands, of instances on the same system. However, the common OS is a single vector for failure or attack: If the host OS is compromised, all the instances running atop it are potentially compromised too.

This efficiency spawned the development of containers. The basic concept is the same as OS-level virtualization: a virtualization layer is installed atop a host OS, and the virtual instances all share that OS. But the container engine -- Docker, for example, often paired with orchestration frameworks such as Apache Mesos -- is tailored specifically for high volumes of small, efficient instances that share common components or dependencies such as binaries and libraries. Containers have seen significant growth with microservice-based software architectures, where agile, highly scalable components are deployed to and removed from the environment quickly.
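For a concrete sense of how lightweight these shared-kernel instances are, here is a minimal sketch using the Docker SDK for Python. It assumes Docker is running locally, the docker package is installed and the alpine image is available; each container starts almost instantly because no guest OS has to boot.

```python
# Start a few lightweight containers that share the host's OS kernel,
# then clean them up. Assumes a local Docker daemon and the "docker" package.
import docker

client = docker.from_env()

containers = [
    client.containers.run("alpine", "sleep 300", detach=True, name=f"demo-{i}")
    for i in range(3)
]

for c in containers:
    print(c.name, c.status)

for c in containers:    # cleanup
    c.stop()
    c.remove()
```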

VM vs. container architecture

VMs take up more space than containers because they need a guest OS to run. Each container shares the host's OS. Some users deploy containers within VMs to improve container security.

Migration and deployment best practices

Virtualization brings powerful capabilities to enterprise IT, but virtualization requires an additional software layer that demands careful and considered management -- especially in areas of VM deployment and migration.

A VM can be created on demand by manually provisioning resources, setting an array of configuration items and then installing the OS and application. Although a manual process works fine for ad hoc testing or specialized use cases, such as software evaluation, deployment can be vastly accelerated using templates, which predefine the resources, configuration and contents of a desired VM. A template defines the VM, which can then be built quickly, accurately and automatically, and duplicated as needed. Major hypervisors and their associated management tools, including Hyper-V and vSphere, support the use of templates.

Templates are important in enterprise computing environments. They bring consistency and predictability to VM creation, ensuring that every VM built from a given template starts from the same known configuration.

Templates not only streamline IT efforts and enhance workload performance, but also reflect the organization's business policies and strengthen compliance requirements. Tools such as Microsoft System Center Virtual Machine Manager, Packer and PowerCLI can help create and deploy templates.
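The sketch below illustrates the template idea in plain Python -- a hypothetical, vendor-neutral stand-in for what tools such as SCVMM, Packer or PowerCLI do in practice -- where a single template stamps out consistently configured VM definitions.

```python
# Hypothetical template-driven VM creation (illustration only).
from dataclasses import dataclass
from copy import deepcopy

@dataclass
class VMTemplate:
    vcpus: int
    memory_gb: int
    disk_gb: int
    guest_os: str
    packages: list

def build_vm(name: str, template: VMTemplate) -> dict:
    """Stamp out a VM definition from the template; only the name varies."""
    spec = deepcopy(template.__dict__)
    spec["name"] = name
    return spec

web_template = VMTemplate(vcpus=2, memory_gb=8, disk_gb=60,
                          guest_os="Ubuntu 22.04", packages=["nginx"])
fleet = [build_vm(f"web{i:02d}", web_template) for i in range(1, 4)]
print([vm["name"] for vm in fleet])   # ['web01', 'web02', 'web03']
```

Because every definition derives from the same template, configuration drift between VMs is far less likely than with manual builds.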

Migration is a second vital aspect of virtualization practice. Different hypervisors offer different feature sets and aren't 100% interoperable. An organization might opt to use multiple hypervisors, but moving an existing VM between them requires a way to convert a VM created for one hypervisor so it can run on another. Consider a migration from Hyper-V to VMware, where a tool such as VMware vCenter Converter can help to migrate VMs en masse.

Migrations typically start with an inventory of the current VMs that details their number, their dependencies and the destination system's capacity. Admins can then select source VMs, set destinations -- including any destination folders -- install any agents needed for the conversion, choose migration options such as the VM format and submit the migration job for execution. It's often possible to schedule migrations, enabling admins to pick migration times and group related VMs so they're moved in the best order, at a time when the effects on users are minimized.
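As a simplified illustration of that planning step -- the inventory, capacities and priorities below are entirely hypothetical -- a pre-migration check might confirm destination capacity and order the moves before any jobs are submitted:

```python
# Hypothetical pre-migration capacity and ordering check (illustration only).
source_vms = [
    {"name": "db01",  "vcpus": 4, "memory_gb": 32, "priority": 1},
    {"name": "app01", "vcpus": 2, "memory_gb": 16, "priority": 2},
    {"name": "web01", "vcpus": 2, "memory_gb": 8,  "priority": 3},
]
destination = {"vcpus_free": 16, "memory_gb_free": 64}

needed_cpu = sum(vm["vcpus"] for vm in source_vms)
needed_mem = sum(vm["memory_gb"] for vm in source_vms)
if needed_cpu > destination["vcpus_free"] or needed_mem > destination["memory_gb_free"]:
    raise SystemExit("Destination lacks capacity; split the migration into batches.")

for vm in sorted(source_vms, key=lambda vm: vm["priority"]):
    print(f"Queue {vm['name']} for conversion and migration")
```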

Such hypervisor migrations aren't quick or easy. The decision to change hypervisors and migrate VMs from one hypervisor to another should be carefully tested and validated well in advance of any actual migration initiative.

Server virtualization management

Managing virtualization across an enterprise requires a combination of practical experience, clear policies, conscientious planning and capable tools. Virtualization management can usually be clarified through a series of common best practices that emphasize the role of the infrastructure as well as the business.

Vendors and products

There are numerous virtualization offerings in the current marketplace, but the choice of vendors and products often depends heavily on virtualization goals and established IT infrastructures. Organizations that need bare-metal -- Type 1 -- hypervisors for production workloads can typically select from VMware vSphere, Microsoft Hyper-V, Citrix Hypervisor, Red Hat Enterprise Virtualization (RHEV) and Oracle VM Server for x86. VMware dominates the current virtualization landscape thanks to its rich feature set and versatility. Microsoft Hyper-V is a common choice for organizations that have already standardized on Microsoft Windows Server platforms. RHEV is commonly employed in Linux environments.

Hosted -- Type 2 -- hypervisors are also commonplace in test and development environments as well as multi-platform endpoints -- such as PCs that need to run Windows and Mac applications. Popular offerings include VMware Workstation, VMware Fusion, VMware Horizon, Oracle VM VirtualBox and Parallels Desktop. VMware's multiple offerings provide general-purpose virtualization, supporting Windows and Linux OSes and applications on Mac hardware, as well as the deployment of virtual desktop infrastructure across the enterprise. Oracle's product is also general-purpose, supporting multiple OSes on a single desktop system. Parallels' hypervisors support non-Mac OSes on Mac hardware.

Type 1 vs. Type 2 hypervisor differences

A Type 1 hypervisor runs on bare metal and a Type 2 hypervisor runs on top of an operating system.

Hypervisors can vary dramatically in terms of features and functionality. For example, when comparing vSphere and Hyper-V, decision-makers typically consider issues such as the way both hypervisors manage scalability -- the total number of processors and clusters supported by the hypervisor -- dynamic memory management, cost and licensing issues, and the availability and diversity of virtualization management tools.

But some products are also designed for advanced mission-specific tasks. When comparing vSphere ESXi to Nutanix, Nutanix AHV brings hyperconverged infrastructure (HCI), software-defined storage and its Prism management platform to enterprise virtualization. However, AHV is intended for HCI only; organizations that need more general-purpose virtualization and tools might turn to the more mature VMware platform instead.

Organizations can also choose between Xen -- commercially called Citrix Hypervisor -- and Linux KVM hypervisors. Both can run multiple OSes simultaneously, providing network flexibility, but the decision often depends on the underlying infrastructure and any cloud interest. Today, Amazon is reducing support for Xen and opting for KVM, and this can influence the choice of hypervisor for organizations worried about the integration of virtualization software with any prospective cloud provider.

The choice of any hypervisor should only be made after an extended period of evaluation, testing and experimentation. IT and business leaders should have a clear understanding of the compatibilities, performance and technical nuances of a preferred hypervisor, as well as a thorough picture of the costs and license implications of the hypervisor and management tools.

What's the future of server virtualization?

Server virtualization has come a long way in the last two decades. Today, server virtualization is viewed largely as a commodity. It's table stakes -- a commonly used, almost mandatory, element of any modern enterprise IT infrastructure. Hypervisors have also become commodity products with little new or innovative functionality to distinguish competitors in the marketplace. The future of server virtualization isn't a matter of hypervisors, but rather how server virtualization can support vital business initiatives.

First, server virtualization isn't a mutually exclusive technology. One hypervisor type might not be ideal for every task, and bare-metal, hosted and container-based hypervisors can coexist in the same data center to serve a range of specific roles. Organizations that have standardized one type of virtualization might find reasons to deploy and manage additional hypervisor types moving forward.

Consider the burgeoning influence of containers. VMs and containers are two different types of virtualization, handled by two different types of hypervisors -- yet the VMs and containers can certainly operate side-by-side in a data center to handle different types of enterprise workloads.

Second, the continued influence and evolution of technologies such as HCI will test the limits of virtualization management. For example, recent trends toward disaggregation or HCI 2.0 work by separating computing and storage resources, and virtualization tools must efficiently organize those disaggregated resources into pools and tiers, provision those resources to workloads and monitor those distributed resources accurately.

The continued threats of security breaches and malicious attacks will further the need for logging, analytics and reporting, change management and automation. These factors will drive the evolution of server virtualization management tools -- though not the hypervisor itself -- and improve visibility into the environment for business insights and analytics.

Look toward the future of virtualization management. The focus of virtualization is shifting from the hypervisors -- what you need to do -- to the automation, orchestration and overall intelligence available to streamline and assist administrators on a daily basis -- how you need to do it. Tools like Kubernetes for Docker containers, along with scripts and templates, are absolutely essential for successful container deployments. Look for AI technologies to add autonomy, analytics and predictive features to dynamic virtualized environments.

Finally, traditional server virtualization will see continued integration with clouds and cloud platforms, enabling easier and more fluid migrations between data centers and clouds. Examples of such integrations include VMware Cloud on AWS and Microsoft Azure Stack.

Stephen J. Bigelow, senior technology editor at TechTarget, has more than 20 years of technical writing experience in the PC and technology industry.

Alexander S. Gillis is a technical writer for the WhatIs team at TechTarget.

This was last updated in March 2024