Hyperconvergence offers speed and simplicity compared with traditional server configuration.

Demand for on-premises data center equipment is shrinking as organizations move workloads to the cloud. But on-prem is far from dead, and one segment that’s thriving is hyperconverged infrastructure (HCI).

HCI is a form of scale-out, software-integrated infrastructure that applies a modular approach to compute, network, and storage capacity. Rather than relying on silos of specialized hardware, HCI pools distributed, horizontal blocks of commodity hardware and delivers a single-pane dashboard for reporting and management. Form factors vary: Enterprises can deploy hardware-agnostic hyperconvergence software from vendors such as Nutanix and VMware, or an integrated HCI appliance from vendors such as HPE, Dell, Cisco, and Lenovo.

The market is growing fast. Gartner projects that by 2023, 70% of enterprises will be running some form of hyperconverged infrastructure, up from less than 30% in 2019. And as HCI grows in popularity, cloud providers such as Amazon, Google, and Microsoft are providing connections to on-prem HCI products for hybrid deployment and management.

So why is it so popular? Here are some of the top reasons.

1) Simplified design

A traditional data center design comprises separate storage silos, individual tiers of servers, and specialized networking spanning the compute and storage silos. This worked in the pre-cloud era, but it’s too rigid for the cloud era.

“It’s untenable for IT teams to take weeks or months to provision new infrastructure so the dev team can produce new apps and get to market quickly,” says Greg Smith, vice president of product marketing at Nutanix.

“HCI radically simplifies data center architectures and operations, reducing the time and expense of managing data and delivering apps,” he says.

2) Cloud integration

HCI software, such as that from Nutanix or VMware, is deployed the same way in a customer’s data center and in cloud instances; it runs on bare-metal instances in the cloud exactly as it does in the data center. HCI “is the best foundation for companies that want to build a hybrid cloud. They can deploy apps in their data center and meld it with a public cloud,” Smith says.

“Because it’s the same on both ends, I can have one team manage an end-to-end hybrid cloud with confidence that whatever apps run in my private cloud will also run in that public cloud environment,” he adds.

3) Ability to start small, grow large

“HCI allows you to consolidate compute, network, and storage into one box, and grow this solution quickly and easily without a lot of downtime,” says Tom Lockhart, IT systems manager with Hastings Prince Edward Public Health in Belleville, Ontario, Canada.

In a legacy approach, multiple pieces of hardware – a server, a Fibre Channel switch, host bus adapters, and a hypervisor – have to be installed and configured separately. With hyperconvergence, everything is software-defined. HCI uses the storage already in the server, and the software largely auto-detects the hardware and configures the connections between compute, storage, and networking.

“Once we get in on a workload, [customers] typically have a pretty good experience. A few months later, they try another workload, then another, and they start to extend it out of their data center to remote sites,” says Chad Dunn, vice president of product management for HCI at Dell.
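To make that scale-out model concrete, here is a minimal, hypothetical sketch – not Nutanix, VMware, or any other vendor’s actual API – of how an HCI cluster can treat each commodity node’s local CPU and disks as one more block in a shared compute and storage pool, with no separate SAN or Fibre Channel configuration step.

```python
# Hypothetical illustration of HCI-style scale-out; names and numbers are
# invented for this sketch, not taken from any vendor's real software.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    cpu_cores: int
    local_disk_tb: float  # storage lives inside the server, not in a separate SAN


@dataclass
class Cluster:
    nodes: list[Node] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # In a real HCI stack, this is where the software would detect the new
        # hardware and wire up compute, storage, and networking for it.
        self.nodes.append(node)

    @property
    def total_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def total_storage_tb(self) -> float:
        return sum(n.local_disk_tb for n in self.nodes)


cluster = Cluster()
for i in range(3):  # start small: a three-node cluster
    cluster.add_node(Node(f"node-{i + 1}", cpu_cores=32, local_disk_tb=15.0))

print(cluster.total_cores, cluster.total_storage_tb)  # 96 cores, 45.0 TB pooled
```

Growing the environment is then just a matter of adding nodes, which is the pattern Dunn describes: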
“They can start small and grow incrementally larger but also have a consistent operating-model experience, whether they have 1,000 nodes or three nodes per site across 1,000 sites, whether they have 40 terabytes of data or 40 petabytes. They have consistent software updates where they don’t have to retrain their people because it’s the same toolset,” Dunn added.

4) Reduced footprint

By starting small, customers find they can reduce their hardware stack to just what they need rather than overprovisioning excess capacity. Moving away from the siloed approach also allows users to eliminate certain hardware.

Josh Goodall, automation engineer with steel fabricator USS-POSCO Industries, says his firm deployed HCI primarily for its ability to run stretched clusters, where a single hardware cluster spans two physical locations linked together. This is used primarily for backup, so that if one site goes down, the other can take over the workload. In the process, though, USS-POSCO got rid of a lot of expensive hardware and software.

“We eliminated several CPU [software] licenses, we eliminated the SAN from the other site, we didn’t need SRM [site recovery management] software, and we didn’t need Commvault licensing. We saved between $25,000 and $30,000 on annual license renewals,” Goodall says.

5) No special skills needed

To run a traditional three-tier environment, companies need specialists in compute, storage, and networking. With HCI, a company can manage its environment with general technology consultants and staff rather than more expensive specialists.

“HCI has empowered the storage generalist,” Smith says. “You don’t have to hire a storage expert or a network expert. Everyone has to have infrastructure, but HCI makes the actual maintenance of infrastructure a lot easier than a typical scenario, where a deep level of expertise is needed across those three skill sets.”

Lockhart of Hastings Prince Edward Public Health says adding new compute, storage, and networking is also much faster than with traditional infrastructure. “An upgrade to our server cluster was 20 minutes with no downtime, versus hours of downtime with an interruption in service using the traditional method,” he says.

“Instead of concentrating on infrastructure, you can expand the amount of time and resources you spend on workloads, which adds value to your business. When you don’t have to worry about infrastructure, you can spend more time on things that add value to your clients,” Lockhart adds.

6) Faster disaster recovery

Key elements of hyperconvergence products are their backup, recovery, data protection, and data-deduplication capabilities, plus analytics to examine it all. Disaster-recovery components are managed from a single dashboard, and HCI monitors not only on-premises storage but also cloud storage resources. With deduplication, compression ratios can run as high as 55:1, and backups can be done in minutes.

USS-POSCO Industries is a Hewlett Packard Enterprise shop and uses HPE’s SimpliVity HCI software, which includes dedupe, backup, and recovery. Goodall says he gets about 12:1 to 15:1 compression on mixed workloads, which has eliminated the need for third-party backup software. More importantly, recovery times have dropped.

“The best recent example is a Windows update messed up a manufacturing line, and the error wasn’t realized for a few weeks. In about 30 minutes, I rolled through four weeks of backups, updated the system, rebooted and tested a 350GB system.
Restoring just one backup would have been a multi-hour process,” Goodall says.

7) Hyperconvergence analytics

HCI products come with a considerable amount of analytics software to monitor workloads and find resource constraints. The monitoring software is consolidated into a single dashboard view of system performance, including workloads whose performance is being negatively affected.

Hastings Prince Edward Public Health recently had a problem with a Windows 7 migration, but the HCI model made it easy to pull performance information. “It showed that workloads, depending on time of day, were running out of memory, and there was excessive CPU queuing and paging,” Lockhart says. “We had the entire [issue] written up in an hour. It was easy to determine where problems lie. It can take a lot longer without that single-pane-of-glass view.”

8) Less time managing network, storage resources

Goodall says he used to spend up to 50% of his time dealing with storage issues and backup matrices. Now he spends maybe 20% of his time on them, and most of his time tackling and addressing legacy systems. And his apps perform better under HCI. “We’ve had no issues with our SQL databases; if anything, we’ve seen huge performance gains due to the move to full SSDs [instead of hard disks] and the data dedupe, reducing reads and writes in the environment.”
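To put data-reduction ratios like those in perspective, here is a quick back-of-the-envelope calculation. The ratios are the ones cited above; the 100 TB of retained backup data is an invented figure used purely for illustration.

```python
# Rough effect of dedupe/compression on the physical capacity backups consume.
# Ratios are the ones quoted in the article; the 100 TB figure is hypothetical.
def physical_tb(logical_tb: float, reduction_ratio: float) -> float:
    """Physical capacity needed after data reduction at a given ratio."""
    return logical_tb / reduction_ratio


logical_backups_tb = 100.0          # hypothetical amount of retained backup data
for ratio in (12.0, 15.0, 55.0):    # 12:1 and 15:1 mixed-workload, 55:1 best case
    print(f"{ratio:>4.0f}:1 reduction -> "
          f"{physical_tb(logical_backups_tb, ratio):.1f} TB on disk")
```

At a 12:1 ratio, that hypothetical 100 TB of backups occupies roughly 8 TB of physical capacity; at 55:1, under 2 TB.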