
Best Practices For Server Virtualization

Published: Mar 9 2016

While the benefits of server virtualization at the corporate data center are receiving a great deal of attention, its ability to address life-cycle management issues might make virtualization even more compelling at the manufacturing plant. Along with the advantages, however, come additional challenges and risks. This article offers best practices you can use to benefit from server virtualization while avoiding mistakes that could affect the availability and performance of business-critical manufacturing IT.

Know Your Application

Begin by characterizing your software application and its workload correctly. Which resources does your application consume? How much? When? How much headroom do you need for peak times and temporary surges in demand? If performance degrades, the application could become unavailable or deliver poor response times to users and processes. Also conduct an appropriate risk assessment: even if you're starting with noncritical applications, the server on which you're consolidating them often becomes essential once it drives numerous applications. In addition, not every application is a good candidate for virtualization. Typical examples are I/O-heavy applications and performance-sensitive environments that aren't easily characterized.
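The workload questions above can be made concrete with simple arithmetic on observed utilization data. The sketch below is illustrative only; the sample values and the 25% surge allowance are assumptions, not recommendations, and in practice the samples would come from monitoring a representative production run.

```python
# Minimal sketch: estimate peak demand and required headroom from a series
# of observed CPU-utilization samples (percent). All sample data and the
# surge factor below are hypothetical.

def peak_utilization(samples):
    """Return the highest observed utilization (percent)."""
    return max(samples)

def headroom(samples, surge_factor=1.25):
    """Capacity to reserve: peak demand times a surge allowance, capped at 100%."""
    return min(100.0, peak_utilization(samples) * surge_factor)

# Hourly CPU samples (percent) from a hypothetical plant-floor application.
cpu_samples = [12, 18, 22, 55, 61, 34, 20, 15]

print(peak_utilization(cpu_samples))  # 61
print(headroom(cpu_samples))          # 76.25
```

The same calculation applies to memory, disk, and network utilization; if the required headroom for any resource approaches 100%, that application is a poor consolidation candidate.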

Understand Trade-Offs

Because virtual servers are easy to set up and don’t require the same management approval as hardware purchases, trade press articles report that some companies are experiencing “virtual server sprawl.” Expect some performance penalty too; how much depends on your application and the virtualization technology you use. What’s more, maximizing application availability and performance on a virtual machine requires considerable skill. In addition, you need to understand how virtualization will affect your software license fees.

Seek Enterprise-Strength Technology

Remember, the virtualization layer has the potential to be a single point of failure for all of the virtual machines it supports. One rule of thumb: software reliability increases as the amount of code and its complexity decrease. Look for virtualization software that's small, compact, and controlled, and as appliance-like as possible. Virtualization and availability solutions that are simple to configure and maintain provide crucial advantages by reducing operating cost and by significantly reducing your exposure to downtime caused by operational errors.

Plan for Business Continuity

Reliable availability and performance become more important the more you depend on an IT resource, and the more that resource is integrated with other systems. To mitigate the risk of interrupted plant operations, institute backup and disaster recovery measures for the physical servers that run your virtual machines.

Simplify With Robust Hardware

Virtualization subtracts physical complexity but adds equally real complexity in a virtual dimension. Without proper planning, this can be an issue because IT skills are in short supply at the average manufacturing facility.

Clustering multiple servers is one technique for achieving high availability. Implementing virtualization on a server cluster, however, adds another layer to deploy and administer on top of the ongoing attention the cluster already demands from IT staff. For instance, a server must be running to migrate its workload to another member of the cluster. So when a double-bit memory fault causes a server to crash, its workload can't be transferred, data will probably be lost, and a reboot will be necessary. With clustering, expect to incur some performance overhead in any case.

Server hardware or virtualization software vendors might claim they achieve high availability by predicting hardware faults and enabling live migration of applications to backup servers. They also might assert that applications can be quickly restarted on another server. Ask which hardware faults can be predicted far enough in advance to support a live migration. What percentage of the hardware is covered? In the case of a full restart, what is the worst-case fault detection and restart time? Is the management software that makes fail-over decisions itself robust, and is it running on a robust platform?

For simplicity's sake, consider a fault-tolerant server that automatically protects reliability and availability without requiring changes to your business-critical application. This approach uses redundant components while appearing as a single server to virtualization and application software. Ideally, the emphasis should be on preventing downtime and data loss rather than simply on quick recovery.
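The "worst-case fault detection and restart time" question can be framed as a back-of-the-envelope check against an availability target. The figures below (30 s detection, 60 s failover, 300 s restart, a "four nines" target) are hypothetical placeholders, not measurements from any vendor's product.

```python
# Illustrative sketch: does worst-case recovery time fit within the annual
# downtime budget implied by an availability target? All inputs below are
# hypothetical example values.

def annual_downtime_budget_seconds(availability):
    """Seconds of downtime per year allowed at a given availability (0-1)."""
    return (1.0 - availability) * 365 * 24 * 3600

def worst_case_recovery(detection_s, failover_s, restart_s):
    """Worst-case seconds from fault to restored service for one incident."""
    return detection_s + failover_s + restart_s

budget = annual_downtime_budget_seconds(0.9999)  # "four nines"
recovery = worst_case_recovery(detection_s=30, failover_s=60, restart_s=300)

print(int(budget))  # 3153 seconds of downtime allowed per year
print(recovery)     # 390 seconds per incident, worst case
```

At these assumed figures, roughly eight worst-case incidents per year would exhaust a four-nines budget, which is why asking vendors for worst-case rather than typical recovery numbers matters.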

Don’t Let I/O Sink the Ship

Incompatibilities related to I/O interfaces are a known cause of system instability and performance problems. Establish that I/O devices and drivers are compatible with the virtualization technology you plan to use. Be ready, willing and able to resolve incompatibilities up front if you need to use legacy or proprietary I/O cards to access specialized plant equipment networks, which is common with supervisory control and data acquisition (SCADA).
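Establishing I/O compatibility up front amounts to cross-checking your device inventory against the hypervisor vendor's published support list. A minimal sketch of that check follows; every device name, driver version, and list entry here is hypothetical.

```python
# Illustrative sketch: flag I/O devices that are absent from a hypervisor
# compatibility list or running a driver below the supported minimum.
# All device names, driver versions, and list entries are hypothetical.

SUPPORTED = {  # stand-in for a vendor's published compatibility list
    "intel-e1000e": {"min_driver": "3.2"},
    "broadcom-bnx2": {"min_driver": "2.7"},
}

def unsupported_devices(inventory):
    """Return device names missing from the list or below the minimum driver."""
    problems = []
    for name, driver in inventory:
        entry = SUPPORTED.get(name)
        if entry is None or driver < entry["min_driver"]:
            problems.append(name)
    return problems

# A legacy SCADA I/O card with no compatibility-list entry gets flagged.
inventory = [("intel-e1000e", "3.4"), ("legacy-scada-io", "1.0")]
print(unsupported_devices(inventory))  # ['legacy-scada-io']
```

Any device that comes back flagged, such as a proprietary card for a plant equipment network, is exactly the incompatibility this section says to resolve before committing to virtualization.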

Don’t Go It Alone

Virtualizing in a manner that promotes the availability and performance of business-critical applications requires considerable expertise. This goal isn't realistic without a knowledgeable staff or a trusted professional services provider. Stratus Technologies and Rockwell Automation have teamed up to provide always-on solutions for your virtualization needs. Talk to your CBT automation specialist today to learn how Stratus Technologies, Rockwell Automation, and CBT can work through your virtualization needs together.
