Many already realize the overarching benefits of Exchange virtualization, but may have lingering questions regarding deployment, cost, complexity, configuration, support, third-party applications, and more. Recently, ENow board member and Microsoft Exchange MVP Tony Redmond authored a white paper titled “Virtualizing Exchange 2013 – the right way.” The document deconstructs the arguments for and against Exchange virtualization and presents recommendations and best practices for a level-headed deployment.
To read Tony’s white paper, please visit Veeam (a form fill is required for download).
In short, virtualization is not for all companies. Every organization has unique needs and must weigh the pros and cons of this strategy. Cutting through the marketing claims: do you have personnel who understand virtual environments and have the experience (such as a working knowledge of hypervisors) required to support and maintain virtualization? Tony puts it this way:
“What is true is that any decision to use virtualization for any application should be well-founded and based on real data. Deciding to virtualize on a whim is seldom a good idea.”
However, this is not to dissuade any organization from considering a migration of Exchange 2013 to a virtual environment. The decision simply cannot rest on the notion that virtualizing a server automatically delivers cost savings and improved uptime. A considerable amount of work goes into either deployment.
With that stated, Tony notes that virtual servers utilize available hardware more efficiently and can therefore reduce the overall cost of the solution. This is particularly true where Exchange serves only a couple hundred mailboxes and the load occupies just a fraction of a single server. The evidence also suggests that in larger enterprise deployments, where multiple virtual servers can be organized into Database Availability Groups (DAGs), thousands of mailboxes can be supported cost-effectively.
In terms of ease of deployment, the white paper notes that virtual servers are more flexible and can be provisioned “far faster than it takes to procure new physical hardware and then install Windows, Exchange” and other third-party applications. It also highlights superior options for setup and recovery when fixing the hardware issues that often plague physical servers.
Next, Redmond discusses the case for virtualization. He argues that virtual Exchange is especially beneficial for more modestly sized organizations because it allows them to deploy Exchange and other applications in DAGs without investing in multiple physical servers. This follows the best practice of “not wanting all of one’s eggs in a single basket”: it simplifies troubleshooting, and if issues arise, they don’t grind the entire system to a halt until the problem is resolved. However, according to Redmond, a poorly configured virtual environment “will be even more fraught with problems than its physical counterpart.” This underlines his caution about moving forward with virtualization: despite its apparent upside, without the proper knowledge, virtualization can be like walking into a hornets’ nest of problems.
Migration to and support of a virtual Exchange does not come without perceived weaknesses. Redmond concedes that Exchange is simpler to load on a physical machine and, overall, easier to configure and support there. Beyond the initial hands-on preparation needed to set up a machine for Exchange, once it is up and running there are no additional layers to complicate its ongoing management. In virtual deployments, the hypervisor adds another layer of complexity, especially during debugging.
The hypervisor is often cited as the biggest reason to avoid virtualization, and the fact that Microsoft does not use virtualized servers in its Exchange Online service within Office 365 backs up this claim. To put it in another context, think of virtualization like a birthday cake. The version that comes from the store is solid, tasty, and easy: pick it up, light the candles, and you’re good to go. A homemade cake, however, adds several new elements of complexity and time. The end result may be tastier, but at the cost of mixing the right amounts of egg, cream, sugar, butter, shortening, and flour (don’t overmix the batter!), finding the right flavoring and frosting, and remembering to top it with that hard-to-find Elsa (from your child’s favorite movie, “Frozen”) cake topper. Any one of those steps may run into issues and alter the result. In the hands of a chef this is typically not a problem, but without the experience you might find the end result a bit flaky or too dense, or what should have taken half an hour might add another hour to your cooking time.
This all feeds into Redmond’s last point against virtualization: the cost. Most arguments in favor of virtualization use cost to tip the balance, but the reality is not so cut and dried. Yes, there are significant benefits from easier processes and the savings that come from not investing in server hardware, but other costs balance the scales: hypervisor licenses, additional expertise and resources, and the extra processing power (typically a 10% “penalty”) needed to maintain the hypervisor layer on virtual servers.
Redmond recognizes this challenge:
“Cost is often the hardest problem to mitigate because the direct and easily measurable cost of software licenses can only be offset by efficiencies in hardware utilization and ease of server deployment and management, both of which are more difficult to measure in terms of direct savings.”
He is careful not to endorse virtualization one way or the other, yet he is emphatic that experience and knowledge of the virtualized platform are the key to success. In fact, Redmond recommends a hybrid approach that draws on the best assets of both types of deployment. This also reinforces his recommendation not to depend on a single server for Exchange functionality: “When these factors are put together with the need to ensure resilience by separating components so that no one failure can compromise a large number of servers.”
Additionally, he lays out a series of best practices for Exchange 2013 deployment.
- Use multi-role servers whenever possible
- Configure servers in DAGs to achieve high availability
- All servers in a DAG must run the same version of the Windows operating system
- Replicate databases so that at least three copies exist within a DAG
- Consistent monitoring and reporting are key to understanding performance
- Never attempt to compensate for or replace a feature built into Exchange with what appears to be an overlapping hypervisor feature
- Third-party products must be validated against the selected hypervisor
- Understand Exchange product support for virtualization technologies
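To make the DAG-related practices above concrete, here is a rough sketch of what they look like in the Exchange Management Shell. The server, witness, and database names (EX1–EX3, FS1, DB1) are hypothetical placeholders, and your environment will need its own values:

```powershell
# Create a DAG; all member servers must run the same Windows version
New-DatabaseAvailabilityGroup -Name "DAG1" `
    -WitnessServer "FS1" -WitnessDirectory "C:\DAG1"

# Add multi-role Exchange 2013 servers as DAG members
Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "EX1"
Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "EX2"
Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "EX3"

# Replicate the database so at least three copies exist within the DAG
Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "EX2" -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "EX3" -ActivationPreference 3
```

Whether these servers are physical or virtual, the commands are the same; the point of the white paper is that on a virtual platform, each of these steps also depends on a correctly configured hypervisor underneath.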
Redmond’s white paper goes into considerable detail about these best practices, so we recommend reading the whole document in context.
Therefore, when deciding whether a virtual Exchange 2013 environment makes sense for your organization, let the choice be driven by your specific needs. Virtualization creates its own technical demands that system administrators must take into account as they plan its use with applications. Plenty of experts are willing to weigh in on best practices for each option, but when it comes to a communication platform like Exchange, make sure you have access to the expertise to manage either or both.
One of the biggest mistakes is following through on a decision that looks good on paper but ends up costing considerably more because you don’t have the resources, knowledge, or management tools to support it long term. In the end, there is a right answer: to virtualize, or not to virtualize, Exchange 2013 is a question every company needs to answer for itself.