What is hyperconverged infrastructure? Guide to HCI

Hyperconverged infrastructure is a software-centric architecture that tightly integrates compute, storage and virtualization resources into a single system. That system usually consists of x86 hardware, along with a comprehensive software stack that includes operating systems, virtualization platforms and software management tools.

Modern businesses rely on the data center to provide the computing, storage, networking and management resources that are necessary to host vital enterprise workloads and data. But data centers can be notoriously complex places where a multitude of vendors compete to deliver myriad different devices, systems, services and software. This heterogeneous mix often struggles to interoperate -- and rarely delivers peak performance for the business without careful, time-consuming optimizations. Today, IT teams simply don't have the time to wrestle with the deployment, integration and data center management challenges posed by traditional heterogeneous environments.

The notion of convergence originally arose as a means of addressing the challenges of heterogeneity. Early on, a single vendor would gather the systems and software of different vendors into a single preconfigured and optimized set of equipment and tools that was sold as a package. This was known as converged infrastructure, or CI. Later, convergence vendors took the next step to design and produce their own line of prepackaged and highly integrated compute, storage and network gear for the data center. It was an evolutionary step now called hyperconverged infrastructure, or HCI.

Converged and hyperconverged systems are possible through a combination of virtualization technology and unified management. Virtualization enables compute, storage and networking resources to be treated as pooled resources that can be centrally provisioned and managed. Unified management lets all those resources be discovered, organized into pools, divided into performance tiers and then seamlessly provisioned to workloads regardless of where those resources are physically located. Unified management offers a quantum leap over traditional heterogeneous data center environments that might rely on multiple disparate management tools, which often didn't discover or manage all resources.
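The discover-pool-tier-provision cycle described above can be sketched in a few lines of Python. This is a rough illustration only; the class names, node sizes and tiers are invented for the example, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A discovered HCI node contributing resources to the shared pool."""
    name: str
    cpu_cores: int
    storage_tb: float
    tier: str  # performance tier, e.g., "all-flash" vs. "hybrid"

class ResourcePool:
    def __init__(self):
        self.nodes = []

    def discover(self, node: Node):
        """Add a newly discovered node's resources to the pool."""
        self.nodes.append(node)

    def capacity(self, tier=None):
        """Total pooled resources, optionally filtered by performance tier."""
        nodes = [n for n in self.nodes if tier is None or n.tier == tier]
        return {
            "cpu_cores": sum(n.cpu_cores for n in nodes),
            "storage_tb": sum(n.storage_tb for n in nodes),
        }

pool = ResourcePool()
pool.discover(Node("hci-node-1", 64, 40.0, "all-flash"))
pool.discover(Node("hci-node-2", 64, 40.0, "all-flash"))
pool.discover(Node("hci-node-3", 32, 120.0, "hybrid"))

print(pool.capacity())                   # the whole pool
print(pool.capacity(tier="all-flash"))   # one performance tier
```

A real unified management layer does far more (provisioning, troubleshooting, policy), but the core idea is the same: individual nodes disappear behind a single pooled, tiered view of capacity.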

Today, the combination of virtualized hardware and associated management tooling is often treated as a standalone appliance that can operate as a single, complete hyperconverged system in the data center, or be combined with other HCI appliances to quickly and easily scale up a hyperconverged infrastructure deployment.

Let's take a closer look at hyperconverged infrastructure technology, consider its use cases and implementation, evaluate its tradeoffs, examine some current vendors and product offerings and look ahead to the future of the technology.

How does hyperconverged infrastructure work?

Too often, eclectic mixes of hardware from varied vendors have been tied together with inadequate networking gear and prove impossible to provision and manage through a single tool. The result is almost always a hodgepodge of diverse gear and software that results in confusion, oversights, security vulnerabilities, needless firefighting and wasted time on the part of IT administrators.

Hyperconverged infrastructure is founded on the two essential premises of integration and management, which arose as a means of solving two of the most perplexing problems of traditional heterogeneous data centers: suboptimal performance and fractured -- problematic -- systems management. The goal of HCI is to deliver virtualized and scalable compute, storage and network resources that are all discoverable and managed through a single pane of glass.


Beyond that basic premise, however, there are numerous variations and options available for hyperconverged infrastructure. It's important to understand the most common considerations found in HCI technology.

Hardware or software deployment

Hyperconverged infrastructure can be implemented through hardware or software. Software-based HCI products run on a broad range of qualified x86 servers, while hardware-based products package the HCI software with purpose-built appliances from a single vendor.

Integrated or disaggregated

Hyperconverged infrastructure can follow two different approaches in terms of hardware design: an integrated design, where compute and storage are aggregated in each node and scale together, or a disaggregated design, where compute and storage devices are separate and can be scaled independently.


Deployment

Hyperconverged infrastructure has been regarded as a disruptive technology; it typically displaces existing data center hardware. Anytime that HCI is being introduced to the data center, it's important to consider how that technology will be implemented or operated. There are basically three ways to add HCI to a traditional heterogeneous data center:

  1. Full replacement HCI deployment. The first option is a complete replacement of the traditional environment with a hyperconverged infrastructure product. In practice, this is probably the least desirable option because it poses the maximum possible displacement of hardware -- as well as the highest potential costs. Few organizations have the capital or technological need for such an undertaking. It's more likely that a new HCI deployment will be adopted for greenfield projects -- such as a backup, remote or edge data center construction or other new build -- using a reference architecture, where equipment capital can be invested in HCI without displacing existing hardware.
  2. Side-by-side HCI deployment. The second, and far more palatable, approach is a side-by-side deployment, where an HCI platform is deployed in the existing data center along with traditional heterogeneous infrastructure. This approach lets businesses migrate workloads to HCI gradually, and displaced hardware can be repurposed or decommissioned in smaller, more manageable portions. HCI is likely to run in tandem with traditional infrastructure over the long term, and the two can easily coexist. A common use for side-by-side HCI deployment is a private or hybrid cloud project.
  3. Per-application HCI deployment. The third approach also brings hyperconverged infrastructure into the existing data center environment. But rather than migrate existing workloads to the new infrastructure, the HCI is intended only to support specific new applications or computing initiatives, such as a new virtual desktop infrastructure deployment or a new big data processing cluster; previous workloads are left intact on the existing infrastructure.

Features

HCI is intended to provide the compute, storage and network resources needed to run enterprise workloads -- all tied together through a single-pane-of-glass management platform. However, competitive HCI products can offer a wide range of additional capabilities that potential adopters should consider, such as data protection and disaster recovery features, automation, analytics, security and interoperability options.

Why is hyperconvergence important?

There are two fundamental philosophies in data center infrastructure selection: homogeneity, meaning one or same; and heterogeneity, meaning many or different. A homogeneous data center uses equipment -- and often software and services -- from a single IT vendor. Homogeneity brings simplicity because everything comes from a single source and is expected to interoperate together properly. It's a singular offering, though organizations are typically locked into the vendor they select.

In spite of the benefits of homogeneity, heterogeneous data centers evolved to dominate the IT landscape because every enterprise has computing problems that must be solved, but the answer is rarely the same for every company or every problem. Heterogeneity frees an enterprise to mix and match equipment, software and services. This lets the business choose between low-cost product options and optimized ones -- and everything in between -- all while keeping the data center largely free of vendor lock-in.

But heterogeneity has a price. Constructing an effective heterogeneous data center infrastructure requires time and effort. Hardware and software must be individually procured, integrated, configured, optimized -- if possible -- and then managed through tools that are often unique to a vendor's own products. Thus, managing a diverse heterogeneous infrastructure usually requires expertise in multiple tools that IT staff must master and maintain. This causes additional time and integration challenges when the infrastructure needs to be changed, repaired or scaled up. Traditional heterogeneous IT simply isn't all that agile.

Today, business changes at an astonishingly fast pace. IT must respond to the demands of business almost immediately, provisioning new resources for emerging workloads on demand and adding new resources often just in time to keep enterprise applications running and secure. Yet, IT must also eliminate systems management errors and oversights that can leave critical systems vulnerable. And all of this must be accomplished with ever-shrinking IT budgets and staff. Hyperconverged infrastructure is all about deployment speed and agility.

HCI draws on the same benefits that made homogeneous data center environments popular: single-vendor platforms that ensured compatibility, interoperability and consistent management, while providing one vendor to interrogate when something went wrong. But HCI goes deeper to deliver compute, storage and network resources that are organized using software-defined and virtualization-based technologies. The resources are tightly integrated and pre-optimized, and the hardware and software are packaged into convenient appliances -- or nodes -- which can be deployed singularly to start and then quickly and easily scaled out as resource demands increase.

In short, HCI products are essentially envisioned as data centers in a box. If a business needs more resources, just add more modules to the HCI box. If one HCI box is not enough for the data center, just add more boxes. But the appeal of HCI extends beyond the data center. The compact, highly integrated offerings are easily installed and can be managed remotely, and HCI technology has become a staple of remote office/branch office (ROBO) and edge computing deployments.

As an example, consider a typical big data installation where petabytes of data arrive from an army of IoT devices. Rather than rely on an expensive, high-bandwidth network to send raw data back to a data center for processing, the data can be collected and stored locally at the edge -- where the data originates. An HCI deployment can readily be installed at the edge to process and analyze the raw data, eliminating network traffic congestion by sending only the resulting analysis to the main data center.
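The edge pattern above can be sketched with a toy summarization step. The data shapes here are invented purely for illustration: a batch of raw sensor readings is reduced locally to a small summary record, and only the summary crosses the WAN.

```python
def summarize(readings):
    """Reduce a batch of raw sensor readings to a compact summary record."""
    values = [r["value"] for r in readings]
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }

# Simulated raw telemetry collected at the edge site.
raw = [{"sensor": f"s{i}", "value": float(i % 7)} for i in range(10_000)]
summary = summarize(raw)

# Only `summary` (four fields) is sent upstream instead of 10,000 records.
print(summary["count"], summary["mean"])
```

The savings scale with the reduction ratio: here one small record replaces ten thousand, which is exactly the traffic HCI-based edge processing is meant to avoid.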

As another example, consider an organization with a private or hybrid cloud initiative. A traditional effort would involve adding and provisioning infrastructure required to support a private cloud stack. An HCI deployment provides a convenient, all-inclusive "data center in a box" that can be configured and deployed as a private cloud, subsequently connected to a public cloud to build a hybrid cloud infrastructure for the business.

Why more companies are adopting HCI

In 2020, the hyperconverged infrastructure market was generating about $2 billion in sales per quarter. By 2022, the global HCI market was valued at $4.8 billion, and it is expected to grow to $19.7 billion by 2028. This tremendous investment has taken HCI from a niche or SMB platform to a viable enterprise alternative. Although HCI might not be ideal for all workloads, it is able to tackle a greater range of applications and use cases than ever before.

HCI started as a point platform, a means of simplifying and accelerating modest IT deployments in ROBO environments as well as a limited number of enterprise-class environments, such as virtual desktop infrastructure (VDI). Early on, large businesses used HCI to support mission-specific goals separate from the main production environment, which could be left alone to continue doing the heavy lifting.

Today, HCI remains a powerful point solution for a wide array of IT project initiatives. HCI offerings benefit from the radical improvements that have taken place in processors, memory and storage devices, as well as dramatic advances in software-defined technologies that redefine how businesses perceive and handle resources and workloads. Examples include the following:

  1. HCI support for container clusters. Vendors routinely offer HCI configurations optimized for popular container software, such as Kubernetes.
  2. Support for machine learning and deep learning algorithms. The ability to support a huge volume of scalable containers makes HCI a natural fit for machine learning (ML) and artificial intelligence (AI) workloads, which demand enormous numbers of compute instances or nodes.
  3. The emergence of streaming data analytics. Streaming analytics is an expression of big data, allowing an HCI system to ingest, process and report on data and metrics collected from a wide array of sources in real time. Such analytics can be used to yield valuable business insights and predict impending problems or faults.

Hyperconverged infrastructure has had a profound effect on edge computing. Today's unparalleled proliferation of IoT devices, sensors, remote sites and mobile accessibility is demanding that organizations reconsider the gathering, storing and processing of enormous data volumes. In most cases, this requires the business to move data processing and analysis out of the central data center and relocate those resources closer to the source of the data: the edge. The ease and versatility provided by HCI offerings makes remote deployment and management far easier than traditional heterogeneous IT infrastructures.

Finally, the speed and flexibility of HCI have made it well suited to rapid deployment, and even rapid repurposing. The realities of the COVID-19 pandemic forced a vast number of users to suddenly work from home, and organizations had to quickly deploy additional resources and infrastructure to support the business computing needs of remote users. HCI systems have played a notable role in such rapid infrastructure adjustments.


Today's hyperconverged infrastructure technologies didn't spring into being overnight. The HCI products available today are the result of decades of data center -- and use case -- evolution. To appreciate the journey, it's important to start with traditional data center approaches where compute, storage and network equipment were all selected, deployed and usually managed individually. The approach was tried and true, but it required careful integration to ensure that all of the gear would interoperate and perform adequately. Optimization, if possible at all, was often limited.

As the pace of business accelerated, organizations recognized the deployment and performance benefits of integration and optimization. It took a long time to optimize and validate a heterogeneous IT hardware and software infrastructure stack. If it were possible to skip the challenges of integrating, configuring and optimizing new gear and software, deployments could be accomplished faster and with fewer problems.

This gave rise to the notion of convergence, enabling vendors to create predefined sets of server, storage and network gear -- as well as software tools -- that had been prepackaged and pre-integrated, and were already validated to function well together. Although CI was basically packaged gear from several different vendors, the time-consuming integration and optimization work had already been accomplished. In most cases, a software layer was also included, which could manage the converged infrastructure products collectively, essentially providing a single pane of glass for the given CI package.

Eventually, vendors realized that convergence could provide even greater levels of integration and performance by forgoing multiple vendors' products in favor of a single-vendor approach that combined compute, storage, network and software components into a single packaged product. The concept was dubbed hyperconvergence and led to the rise of hyperconverged infrastructure.

HCI products are often characterized by a modular architecture, enabling compute, storage and network components to be built as modules that are installed into a specialized rack. The physical blade form factor proved extremely popular for HCI modules and racks, enabling rapid installation and hot-swap capabilities. More compute, storage and network blades could be added to the blade rack; when the rack was filled, a new rack could be installed to hold more modules -- further scaling out the deployment. From a software perspective, the HCI environment is fully virtualized and includes unified management tools to configure, pool, provision and troubleshoot HCI resources.

HCI 1.0 vs. HCI 2.0

Hyperconverged infrastructure continues to evolve, adding new features and capabilities while working to overcome perceived limitations and expand potential use cases. Today, there is no commonly accepted terminology to define the evolution of HCI, but the generations are colloquially termed HCI 1.0 and HCI 2.0. The principal difference between these designations is the use of disaggregation.

The original premise of HCI was to provide tightly integrated and optimized sets of virtualized CPU, memory, storage and network connectivity in prepackaged nodes. When more resources are needed, it's a simple matter to add more nodes. Unified management software discovers, pools, configures, provisions and manages all the virtualized resources. The point here is that hyperconverged infrastructure relies on aggregation -- putting everything in the same box -- which can be deployed easily and quickly. It's this underlying use of aggregation that made HCI 1.0 products so appealing for rapid deployment in ROBO and edge use cases.

The major complaint about HCI 1.0 products is inflexible resource use and the potential for resource waste. A typical HCI product provides a finite and fixed amount of CPU, memory and storage. The proportion of those resources generally reflects more traditional, balanced workloads. But workloads that place uneven or disproportionate demands on resources can exhaust some resources quickly -- forcing the business to add more costly nodes to cover the shortage while leaving the remaining resources underutilized.

Disaggregation is increasingly seen as a potential answer to the problem of HCI resource waste. The introduction of disaggregated hyperconverged infrastructure -- dHCI or HCI 2.0 -- essentially separates compute resources from storage and storage area networks (SANs). HCI 2.0 puts CPU and memory in one device and storage in another device, and both devices can be added separately as needed. This approach helps businesses target the HCI investment in order to support less-traditional workloads that might pose more specific resource demands. Nimble Storage dHCI, NetApp HCI and Datrium DVX are examples of HCI 2.0.
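The resource-waste argument can be made concrete with back-of-the-envelope arithmetic. The node sizes and workload demands below are hypothetical round numbers, not any vendor's actual configurations: a storage-heavy workload forces an aggregated (HCI 1.0) design to buy CPU it doesn't need, while a disaggregated (HCI 2.0) design scales each resource independently.

```python
import math

NODE_CPU = 32        # cores per aggregated HCI 1.0 node (hypothetical)
NODE_STORAGE = 20    # TB per aggregated HCI 1.0 node (hypothetical)

need_cpu = 64        # cores the workload actually needs
need_storage = 200   # TB the workload actually needs

# HCI 1.0: nodes must be added until *both* resources are covered,
# so the scarcer resource drives the node count.
nodes = max(math.ceil(need_cpu / NODE_CPU),
            math.ceil(need_storage / NODE_STORAGE))
wasted_cpu = nodes * NODE_CPU - need_cpu

# HCI 2.0: compute nodes and storage nodes are purchased separately.
compute_nodes = math.ceil(need_cpu / NODE_CPU)
storage_nodes = math.ceil(need_storage / NODE_STORAGE)

print(f"HCI 1.0: {nodes} nodes, {wasted_cpu} idle cores")
print(f"HCI 2.0: {compute_nodes} compute + {storage_nodes} storage nodes")
```

In this sketch the aggregated design needs 10 full nodes to satisfy the storage demand, stranding 256 cores of unused compute, while the disaggregated design buys only the 2 compute nodes the workload requires.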

But the evolution of HCI has not stopped with disaggregation, and the entire IT industry is starting to embrace the notion of composable infrastructure. In theory, a composable infrastructure separates all resources into independently scalable and collectively managed components, which can be added as needed and interconnected with a specialized network fabric that supports fast, low-latency communication between components. At the same time, unified management software continues to discover, pool, configure -- or tier -- provision and manage all of the resources available.

Today, composable infrastructure is still far from such an ideal scenario, but vendors are starting to deliver HCI devices with greater versatility in hardware selection and deployment -- such as allowing other nonvendor storage to be added. The key to success in any composable infrastructure is management software that must focus on resource discovery, pooling, tiering -- organizing resources based on their relative level of performance -- and almost total dependence on software-defined behaviors to provision resources.

Hyperconverged infrastructure and the cloud

It's easy to confuse HCI and cloud technology. Both rely on virtualization and a mature software management layer that can define, organize, provision and manage a proliferation of hardware resources and services that enterprise workloads can operate within. Although HCI and cloud can interoperate well together, they aren't the same thing, and there are subtle but important differences to consider.

HCI is fundamentally a centralized, software-driven approach to deploying and using a data center infrastructure. The underlying hardware is clearly defined, and the amount of hardware resources is finite. Virtualization abstracts the resources, while software organizes and defines the ways resources are provisioned to workloads.

A cloud is intended to provide computing as a utility, shrouding vast amounts of virtualized resources that users can provision and release as desired through software tools. The cloud not only provides a vast reservoir of resources but also a staggering array of predefined services -- such as load balancers, databases and monitoring tools -- that users can choose to implement.

Essentially, the difference between HCI and cloud is the difference between hardware and software. HCI is merely one implementation of hardware that can be deployed in a data center. A cloud is really the software and constituent services -- the cloud stack -- built to run atop the available hardware. Thus, an HCI deployment can be used to support a cloud, typically a private cloud or a private cloud integrated as part of a hybrid cloud, aiding in digital transformation. For example, a cloud software stack, such as OpenStack, can run on an HCI deployment within the data center.

For example, Azure Stack HCI is a version of the Microsoft Azure public cloud stack designed to run on local HCI hardware. Similarly, Dell's VMware Cloud Foundation on VxRail is an example of an HCI offering that includes a suite of software that can be used to operate the HCI platform as a private cloud -- and extend that private cloud into public cloud environments, such as VMware Cloud on AWS.

What are the benefits and use cases of hyperconverged infrastructure?

Hyperconverged infrastructure might not be appropriate for every IT project or deployment. Organizations must take the time to evaluate the technology, perform proof-of-concept testing and carefully weigh the tradeoffs involved before committing to HCI. However, HCI brings several noteworthy benefits to the enterprise, including simplified deployment and scaling, unified single-pane-of-glass management, pre-integrated and optimized hardware and software, and suitability for remote and edge locations.

HCI can typically support a wide range of enterprise computing use cases, including general-purpose workloads. But some important HCI use cases include VDI, ROBO and edge computing deployments, private and hybrid cloud foundations, backup and disaster recovery platforms, and container, big data and ML/AI workloads.

What are the drawbacks of HCI?

In spite of the many noteworthy benefits, HCI also poses several potential drawbacks that deserve careful consideration. Common HCI disadvantages or limitations include vendor lock-in, the fixed resource proportions that can lead to waste, proprietary management tools that don't easily extend to heterogeneous gear and the cost of scaling by adding whole nodes.

HCI management and implementation

Although HCI brings an array of powerful benefits to the enterprise, there are also numerous management and implementation considerations that must be carefully evaluated and understood before an HCI investment is ever made. These include the following:

Resiliency. One critical issue is resiliency. HCI simplifies deployment and operation, but the simplicity that users see hides tremendous complexity. Errors, faults and failures can all conspire to threaten application performance and critical business data. And while HCI offerings can support resiliency, the feature is never automatic, and it can require a detailed understanding of the HCI system's inner workings, such as write acknowledgment, RAID levels or other storage techniques used for resiliency.


To understand the ways that an HCI offering actually handles data resiliency, IT leaders must evaluate how the system handles node or hardware module failures, its default data resiliency operations, the additional costs of HCI resilience options, the workload performance effect of using those options and the overhead resource capacity consumed to provide resiliency.
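One of those factors, overhead resource capacity, can be estimated with simple arithmetic. The sketch below assumes a replication-based resiliency scheme with generic replication factors; real products differ in their defaults and may use erasure coding instead, so treat these numbers as illustrative only.

```python
def usable_capacity(raw_tb_per_node, nodes, replication_factor):
    """Usable capacity when every write is stored replication_factor times."""
    return raw_tb_per_node * nodes / replication_factor

raw = 20.0   # TB of raw storage per node (hypothetical)
nodes = 4

# RF2 tolerates one node failure; RF3 tolerates two, at higher overhead.
for rf in (2, 3):
    usable = usable_capacity(raw, nodes, rf)
    overhead = raw * nodes - usable
    print(f"RF{rf}: {usable:.1f} TB usable, {overhead:.1f} TB overhead")
```

The point of the exercise is that "80 TB of raw capacity" can mean 40 TB or less of usable capacity once resiliency is enabled -- exactly the kind of hidden cost the evaluation above should surface.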

Management. When it comes to HCI systems management, users can often benefit from third-party management tools, such as DataON MUST for Windows Admin Center. By exposing APIs, HCI systems enable third-party tools and services to connect and provide a wider array of services or integrations.
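As a hypothetical illustration of that integration pattern, the snippet below parses the kind of JSON payload a monitoring tool might retrieve from an HCI management API and flags nodes worth alerting on. The endpoint shape, field names and thresholds are all invented for the example, not any real product's schema.

```python
import json

# Simulated response body from a hypothetical HCI management API.
api_response = json.loads("""
{
  "nodes": [
    {"name": "hci-node-1", "cpu_used_pct": 91, "healthy": true},
    {"name": "hci-node-2", "cpu_used_pct": 45, "healthy": true},
    {"name": "hci-node-3", "cpu_used_pct": 52, "healthy": false}
  ]
}
""")

# Flag anything a third-party monitoring tool would alert on:
# unhealthy nodes, or nodes running hot on CPU.
alerts = [n["name"] for n in api_response["nodes"]
          if not n["healthy"] or n["cpu_used_pct"] > 85]
print(alerts)
```

In practice the tool would poll a real REST endpoint with authentication; the value of the exposed API is that the same health data feeds whatever broader monitoring platform the organization already runs.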

HCI deployments and management also benefit from a clear use case, so it's important to understand any specific or tailored roles that an HCI system plays in the environment and manage those roles accordingly. For example, an HCI deployment intended for backup and DR is likely to be managed and supported differently -- and rely on different management features -- than an HCI deployment used for everyday production workloads.

Take advantage of any automated policy management and/or enforcement that the HCI system provides. Emerging software-defined tools, such as Nutanix Flow Network Security, can enable network and policy management that helps to speed provisioning while enforcing best practices for the business, such as configuration settings adjustment and application security implementation. Such capabilities are often adept at reporting and alerting variations from policy, helping businesses maintain careful control over resource use and configurations.
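The policy-drift reporting described above can be sketched minimally: compare each system's actual settings against a desired policy and report every variation. The policy keys and values here are illustrative placeholders, not any product's actual configuration settings.

```python
# Desired configuration policy (illustrative settings only).
policy = {"encryption": "on", "snapshot_schedule": "hourly", "dedupe": "on"}

# Actual settings as reported by each managed system.
actual_configs = {
    "hci-node-1": {"encryption": "on", "snapshot_schedule": "hourly", "dedupe": "on"},
    "hci-node-2": {"encryption": "off", "snapshot_schedule": "hourly", "dedupe": "on"},
}

def drift_report(policy, actual_configs):
    """Return {system: {setting: (expected, actual)}} for every variation."""
    report = {}
    for system, config in actual_configs.items():
        diffs = {k: (v, config.get(k)) for k, v in policy.items()
                 if config.get(k) != v}
        if diffs:
            report[system] = diffs
    return report

print(drift_report(policy, actual_configs))
```

A real policy engine would also enforce remediation, but even this read-only form delivers the reporting-and-alerting value the article describes: drift becomes visible the moment a setting deviates.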

Finally, use care in choosing management tools. The native tools that accompany HCI systems are usually proprietary and might not interoperate easily -- if at all -- with other heterogeneous servers, storage and network elements across the data center. Organizations that must monitor and manage broader heterogeneous environments without the silos that accompany multiple management tools might benefit from adopting comprehensive third-party tools, such as Zenoss, Uila, EG Innovations EG Enterprise and ManageEngine OpManager. Third-party tools must be tested and vetted thoroughly before being implemented in mixed environments to ensure that all elements are visible and reported properly.

Major HCI vendors and products

There are numerous vendors and products across the HCI space. Principal HCI vendors include Cisco, Dell Technologies, HPE, Microsoft, NetApp, Nutanix, Scale Computing and VMware.

HCI offerings typically fall into software-based or hardware-based deployments.

Software-based deployments focus on the "software-defined" capabilities that virtualization and HCI promise, enabling vendors to build more ubiquitous HCI software platforms that can interoperate with a much broader range of existing systems hardware within the enterprise -- often minimizing the required investment and reducing vendor lock-in.

As an example, Nutanix follows a software-based approach to HCI. The goal for Nutanix AHV is to provide a software layer that can provide the features and functionality needed for HCI yet support a broad mix of software and hardware. Nutanix does offer its own purpose-built NX hyperconverged appliances, but appliances made by Cisco, HPE, Hitachi, NEC, Intel and other vendors can also use Nutanix software. Thus, Nutanix is broadly seen as a general-purpose HCI platform. Other vendors with software-based HCI offerings include VMware and Microsoft Azure Stack HCI.

Hardware-based deployments underscore the highly integrated and carefully managed nature of HCI as a concept. Vendors will generally provide servers, storage and network gear that are already known to interoperate efficiently using a common management tool. Virtualization and management software are included atop the hardware stack.

For example, HPE provides the family of SimpliVity HCI platforms as a more familiar hardware-based approach to HCI. SimpliVity balances hardware and software by running a software HCI foundation on HPE DL380 Gen10 servers tailored to specific use cases. For instance, the HPE SimpliVity 380 Gen10 Plus is recommended for best general-purpose performance, while the Gen10 G variant is recommended for multi-GPU image processing and VDI. The product family touts enterprise-class performance, data protection and resiliency, along with a high level of automation and analytics driven by HPE management tools. Other vendors with hardware-based HCI offerings include Cisco and Scale Computing.

HCI technology has not remained idle as vendors and users alike seek new -- often niche -- use cases that push beyond traditional VDI, edge and private or hybrid cloud foundation use cases. HCI is making strides in data center automation and analytics where software layers can easily access tightly integrated hardware to monitor capacity, assist with upgrades, handle provisioning and warn of potential system problems.

Hardware choices are also advancing, with some software-based HCI able to accommodate a diverse array of servers and storage, while hardware-centric offerings move to support more demanding use cases -- such as adding NVMe support for SAP HANA. Similarly, HCI seems to be gaining traction as a data protection, backup or DR platform for enterprise uses, sometimes in conjunction with cloud services. Finally, the role of cloud providers in HCI is starting to gain traction, with offerings such as AWS Outposts providing both hardware and software for enterprise uses.

Ultimately, HCI is likely to see continued adoption and growth in important technology areas over the next several years, including private and hybrid cloud computing projects, analytics and ML/AI tasks, expanding edge computing environments, and greenfield sites where deployment speed and management simplicity are primary design considerations.

Editor's note: The HCI vendors and products list was assembled from various industry research sources and presented in unranked, alphabetical order.

Stephen J. Bigelow, senior technology editor at TechTarget, has more than 20 years of technical writing experience in the PC and technology industry.

This was last updated in January 2024