Back to basics: Make sure VMs don’t exceed host capacity

How-To
07 Mar 2022 | 4 mins
Data Center | Data Center Management | Virtualization

Adding new virtual servers is a good time to audit host memory and compute to ensure each virtual machine gets enough of each.

Credit: VAKS-Stock Agency / Shutterstock

At the agency where I work, we recently bought software products that required new virtual machines, which provided an opportunity to review some important basics of properly assigning host memory and compute to each VM.

That’s important so we stay in a failover-ready state, and in our environment, that means appropriately allocating resources of the two clustered physical hosts that run VMs for our production applications. It’s even more important now because the new software is particularly resource-intensive.

The task also provided the opportunity to review and adjust the resources assigned to all of our existing virtual servers so they, too, were properly sized.

To start the project, we audited what was already in use across the physical hosts to get a clear picture of where there was room to create new instances and where resources were misallocated. That exercise was also a great time to resize servers that were underutilized even during peak usage and to decommission servers no longer in use or past the “don’t touch it until we know we won’t need it” period.

Altogether, the main value of such an audit comes down to two things: balancing RAM and processor allocation, and server decommissioning.

Balancing RAM and processor resources

When assigning RAM and processor resources, the goal is the same: neither allocation should exceed half of the total amount supported by the physical host. So if there are 128 processors per physical machine, the total assigned processors for all VMs on that host should be no more than 64. Similarly, if there are 500GB of RAM, then the combined assigned RAM among all the VMs on the host should not exceed 250GB.
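
To illustrate the math, here is a minimal Python sketch of that half-of-capacity check. The host specs and per-VM allocations below are hypothetical examples, not a real inventory.

```python
# Minimal sketch: check that total VM allocations stay within half of a host's
# physical capacity. Host capacity and VM assignments are hypothetical examples.

HOST_CPUS = 128     # physical logical processors on the host
HOST_RAM_GB = 500   # physical RAM on the host

# (vCPUs, RAM in GB) assigned to each VM on this host -- example values
vms = {
    "app-server-01": (16, 64),
    "db-server-01": (24, 96),
    "file-server-01": (8, 32),
}

total_cpus = sum(cpu for cpu, _ in vms.values())
total_ram = sum(ram for _, ram in vms.values())

# The rule of thumb above: stay at or below half of physical capacity so the
# other clustered host can absorb everything during a failover.
print(f"vCPUs assigned: {total_cpus} / limit {HOST_CPUS // 2}")
print(f"RAM assigned:   {total_ram}GB / limit {HOST_RAM_GB // 2}GB")

if total_cpus > HOST_CPUS // 2 or total_ram > HOST_RAM_GB // 2:
    print("Over-committed: move VMs to another host or downsize allocations.")
else:
    print("Within failover-ready limits.")
```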

It’s important to note that the assigned RAM and procs for each VM are generally much higher than the actual use even during peak hours, which provides a cushion should there be a surge in demand for either.

One way to balance these numbers is to record in a spreadsheet the RAM and procs assigned to each VM on each physical host and total them up per host. If any one host is over-committed, VMs can be moved among the hosts to reach the needed balance. If more resources are needed than are free, it may be possible to find them by reassessing the RAM and procs assigned, looking for virtual servers that can be downsized without risking performance degradation.

To do that, it’s wise to observe actual use of assigned resources over time to gauge how close they come to maxing out at peak hours. A good rule of thumb is to allow RAM and procs usage up to 80% of what’s assigned, because beyond that processes start to fail. If you find a virtual server that, say, never utilizes more than 15% of its RAM or procs, you have room to trim. The math is no more difficult than figuring out the tip at a restaurant.
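
As a rough Python sketch of that right-sizing pass (the VM names and usage figures are made-up examples), the same spreadsheet logic could look like this:

```python
# Minimal sketch of the right-sizing pass: compare each VM's observed peak usage
# to what it has assigned. VM names and figures are hypothetical examples.

vms = [
    {"name": "intranet-01", "assigned_ram_gb": 32, "peak_ram_gb": 4,  "peak_cpu_pct": 12},
    {"name": "erp-app-01",  "assigned_ram_gb": 64, "peak_ram_gb": 55, "peak_cpu_pct": 78},
]

CEILING = 0.80     # don't let peak usage exceed 80% of what's assigned
TRIM_BELOW = 0.15  # downsizing candidates: peaks under 15% of assignment

for vm in vms:
    ram_ratio = vm["peak_ram_gb"] / vm["assigned_ram_gb"]
    cpu_ratio = vm["peak_cpu_pct"] / 100
    if ram_ratio > CEILING or cpu_ratio > CEILING:
        print(f'{vm["name"]}: running close to its allocation, consider adding resources')
    elif ram_ratio < TRIM_BELOW and cpu_ratio < TRIM_BELOW:
        print(f'{vm["name"]}: peaks at {ram_ratio:.0%} RAM / {cpu_ratio:.0%} CPU, room to trim')
    else:
        print(f'{vm["name"]}: sized about right')
```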

Server decommissioning

When a business application or network component gets retired or replaced it needs to be properly decommissioned. The frequency of this largely depends on the size of the server environment, the needs of the business, and hardware/software support lifecycles. In my environment, this occurs two to five times a year.

The first thing to consider is removing any clients for the application that reside on users’ workstations. It can be helpful to watch the clients go offline or disconnect from within the application being retired, to ensure none are missed. It might be easiest to remove the clients from the app itself, but other options include using group policy, logon scripts, or SCCM to reach the same goal.

Once that’s accomplished, turn off the VM that hosted the app and uncluster it. In a Windows environment this can be done from within the Hyper-V Cluster Manager. Unclustering prevents the VM from being part of any failover operations, and in the case of Hyper-V, you cannot delete the VM while it’s still clustered. Since it is a VM, it exists only as virtual hard-drive files, either on the physical hosts or, as in our case, on the SAN.

The next step is moving those virtual hard-drive files to an archive or cold storage, so they can be recovered if needed, and deleting the VM instance from the host. Then, again in a Microsoft environment, disable the computer object in Active Directory and move it to a non-production organizational unit.
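
The unclustering and deletion steps happen in the management tools, but the file-archiving step can be scripted. Here is a minimal Python sketch of moving a retired VM’s disk files to cold storage; the server names and paths are hypothetical examples.

```python
# Minimal sketch of archiving a decommissioned VM's virtual disk files before
# deleting the VM instance. The paths below are hypothetical examples.
from pathlib import Path
import shutil

vm_name = "old-app-01"
source_dir = Path(r"\\san01\vm-storage") / vm_name        # where the VHD/VHDX files live
archive_dir = Path(r"\\backup01\cold-storage") / vm_name  # archive / cold-storage target

archive_dir.mkdir(parents=True, exist_ok=True)

# Move every virtual hard-drive file for this VM into cold storage so it can
# be recovered later if needed.
for disk in source_dir.glob("*.vhd*"):
    print(f"Archiving {disk.name} -> {archive_dir}")
    shutil.move(str(disk), str(archive_dir / disk.name))
```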

Finally, trim backups of the decommissioned servers to keep only the last good image, one copy on-site and one in the cloud.
