Companies looking to save money and keep their technology fresh are currently eyeing virtual infrastructures to replace aging, outdated equipment. It's no longer a question of "Will you implement a virtual infrastructure?" but "When?"
Virtual infrastructure allows for dynamic computing and storage, it's far more scalable than traditional racks and boxes, and it's much faster to provision. But infrastructure shouldn't be virtualized just for virtualization's sake. As equipment ages, companies can examine which applications are good candidates for virtualization and move into the cloud gradually -- or do it all at once.
The key is to assess whether virtualizing will make the company more efficient, according to David Davis, director of infrastructure at Train Signal Inc. He writes that companies need to first analyze the current state of servers and applications, then make sure they understand the function of each application as it relates to the business.
Examining existing in-house infrastructure and software standards is also important. According to Gartner, if the standards and change management processes can prevent or minimize service disruption, configuration processes may only need to be adjusted to account for virtualization. Services should be carefully documented as well.
Once the planning and evaluation are complete, enterprises have a few things to consider. Among them:
Homogeneous virtualization. Chris Wolf, research vice president at IDC, writes in his blog that organizations looking to run more efficiently should standardize on a single hypervisor platform rather than managing a mix.
Hypervisor tiering. Hypervisor tiering can lower costs by running less-expensive hypervisors where they suffice, while still allowing management from a single interface and keeping spending within budget, Davis explains in another article.
Automated storage tiering. In an automated storage tiering configuration, frequently accessed items like virtual desktops sit in flash memory, while "colder" items move to the lowest tier of storage. This keeps commonly used applications on the fastest media, improving performance.
Thorough deduplication. Most servers peak at night, during the backup window -- not at lunch, when everyone is checking Facebook. On a virtual infrastructure, deduplication can ease the pain of backup, especially with application-specific engines that ensure the backup isn't full of duplicate content.
Security. As demonstrated last month, a malicious virtual machine could attack another virtual machine. While this has only happened in a controlled lab environment, the researchers' work highlights the need to consider what is actually being put on the virtual infrastructure. Cloud security is imperfect, and a private cloud may be the best answer for most enterprises dealing with sensitive and regulated data.
Cloud architects. Any infrastructure, whether virtual or physical, is only as good as the people behind it. With a virtual infrastructure, IT personnel need cross-domain skills, like storage and networking, and need to be able to broker services, not just install routers and configure databases.
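To make the deduplication idea above concrete, here is a minimal sketch of content-based deduplication: each chunk of backup data is keyed by its hash, so identical chunks (say, the same OS image baked into many virtual machines) are stored only once. The chunk names and sizes are purely illustrative, not drawn from any particular backup engine:

```python
import hashlib

def deduplicate(chunks):
    """Store each unique chunk once, keyed by its content hash."""
    store = {}   # hash -> chunk data, written only on first sight
    recipe = []  # ordered hashes needed to reconstruct the stream
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk  # new content: store it
        recipe.append(digest)      # duplicates become references
    return store, recipe

# Hypothetical backup stream: three VMs sharing one OS image
backup = [b"os-image", b"app-data", b"os-image", b"os-image"]
store, recipe = deduplicate(backup)
print(len(backup), "chunks in,", len(store), "unique chunks stored")
# -> 4 chunks in, 2 unique chunks stored
```

Production engines chunk at finer granularity and deduplicate across machines, which is where the real backup-window savings come from.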
There is no easy answer to building an efficient, flexible virtual infrastructure. Only after careful planning and an evaluation of existing systems can the right technology be chosen.
What have you done to prepare your organization for a virtual infrastructure? Sound off in the comments below.
— Christine Parizo is a freelance writer specializing in business and technology.