With vSphere 5 and its new licensing model, a firestorm has erupted across the internet, leaving very upset customers who feel they are getting screwed by VMware. So I figured I would try to put together something that might help us all analyze the issue. I keep hearing it's really bad for SMBs. Is it really as bad as we think it is?
Assuming we are all on 4.1 (note that with 4.1 the HA/DRS limits were increased, which should have helped reduce the number of CPU licenses needed; I don't remember VMware being thrown a party for that at the time), below are some limits for a 4.1 cluster:
- Number of hosts per cluster (HA/DRS) = 32
- Number of VMs per host = 320
- Number of VMs in a HA/DRS cluster = 3000
For the purpose of this post I won't push those limits, but I just wanted to put the facts out there. We will assume we have the following setup in the 4.1 world:
- 12 Hosts - dual processors – 256GB of memory each
- 24 Enterprise Plus licenses
- 2 clusters of 6 hosts each (this gives us 1536GB of memory in each of our clusters)
- HA/DRS is enabled and Admission Control is also enabled (cluster failure tolerance is set to 1)
- We will take one host out of the equation for HA to function, which leaves us with 1280GB of memory in each of our clusters
So what do we need to power on the same setup in vSphere 5? That will depend on the total number of physical CPUs we have and the total amount of configured RAM in the VMs that will be powered on. We know we have 24 CPUs, but what about the vRAM that will be needed to bring up all the VMs? If we go strictly by the total number of CPUs, we will have 24 * 48 = 1152GB in our vRAM pool. What does this mean?
With 1152GB in your vRAM pool, you can only power on VMs whose total configured memory is less than or equal to 1152GB. Keep in mind that, with the exception of Essentials and Essentials Plus, these are only soft limits, though I highly recommend you don't exploit that either.
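The vRAM pool math above can be sketched in a few lines. This is just a model of the numbers used in this post (12 dual-CPU hosts, the announced 48GB Enterprise Plus entitlement), not any official calculator:

```python
# Sketch of the vRAM pool calculation from this post's example setup.
hosts = 12
cpus_per_host = 2
vram_per_license = 48                     # GB of vRAM entitled per Enterprise Plus license

licenses = hosts * cpus_per_host          # one license required per physical CPU
vram_pool = licenses * vram_per_license   # pooled vRAM entitlement across all licenses

print(licenses)    # 24 licenses
print(vram_pool)   # 1152 GB vRAM pool
```

Swap in your own host count, CPU count, and edition entitlement to see your pool size.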
We calculated earlier that we had 1280GB available in each of the two clusters, but now we only have a total of 1152GB across the two clusters. This is not the moment to panic and freak out; this is the time to figure out how much memory you really need. Let's run a few examples. Let's assume we have:
- 200 2GB VMs
- 500 4GB VMs
- 80 8GB VMs
- 20 24GB VMs
- For the purpose of this discussion, we have no limits
- And all VMs are on at all time
The amount of memory allocated above is 3520GB, when we really have 1280*2 = 2560GB of pRAM available, and the magic of over-commitment is making all this come to life. Assuming the load above is evenly spread, we are looking at 800 VMs across two 6-host clusters. That is roughly 67 VMs per host, and I am not going to comment on the sanity of that setup (with the exception of VDI, but even that is a little out there, and most SMBs that I know of don't leverage VDI yet). With only dual processors in each host, running such a load will probably cause CPU ready issues in your environment. But I will boldly assume it all just works. So what do you have to do to make this setup work in vSphere 5?
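The totals for the example VM mix above work out like this (a quick sketch using the counts and sizes from this post, nothing more):

```python
# Total configured vRAM for the example VM mix in this post.
vm_mix = {2: 200, 4: 500, 8: 80, 24: 20}   # {GB per VM: number of VMs}

total_vms = sum(vm_mix.values())
total_vram = sum(gb * count for gb, count in vm_mix.items())
pram_available = 2 * 1280                  # two clusters, one HA host held back each

print(total_vms)        # 800 VMs
print(total_vram)       # 3520 GB configured
print(pram_available)   # 2560 GB physical -- over-commitment covers the gap
```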
Before we do that, let's have a quick look at this document. In section 4.4 it clearly states that the recommended number of hosts for 500 VMs is 50 to maintain reasonable performance with vCenter and so on. Just thought I would compare that to the 800 VMs we are running in our two 6-host cluster example. OK, let's get back to the craziness.
Since your total allocation is 3520GB, your vRAM pool will have to be at least the same size: at 3520/48 ≈ 73.3, you will need 74 Enterprise Plus licenses to run the same setup in vSphere 5. That is 50 more licenses than you previously had. But that is only if you are trying to run the setup I mentioned above. Compare that to the cost of running 800 physical servers and you will definitely see the cost savings with virtualization. Is Hyper-V or Xen a good option? Probably not.
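Since the vRAM pool must cover the configured memory of every powered-on VM, the license count has to be rounded up, not down. A minimal sketch of that calculation, using this post's numbers:

```python
import math

# Licenses needed so the pooled vRAM entitlement covers configured memory.
total_vram = 3520                     # GB configured across all powered-on VMs
entitlement = 48                      # GB of vRAM per Enterprise Plus license
current_licenses = 24                 # what the 4.1 setup already owns

needed = math.ceil(total_vram / entitlement)   # round up: a partial license doesn't exist
additional = needed - current_licenses

print(needed)       # 74 licenses
print(additional)   # 50 additional licenses
```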
What should you do?
- I highly recommend you review your environment. It's possible that some VMs in your environment are oversized; this would be a good time to give them only what they need. No more oversizing, as it now results in real licensing costs to your organization.
- Moving to Xen or Hyper-V may not really be the best move. If you are using Enterprise Plus, chances are you will go into a state of depression with either of these tools, as they don't come close to what vSphere 5 has to offer (I will cover the new capabilities of vSphere 5 in a few days).
- Is there a comparable product to vSphere? The honest answer is none. Accept that fact and make the best of the situation.
- Do a cost-benefit analysis of the additional functionality you will gain with vSphere 5. If it will enable you to do more (SRM 5 has automatic failback), is the extra cost worth it? Or should you wait?
- Wait! If you realize you don't need the new features in vSphere 5, don't upgrade just yet. This will buy you more time before you have to go in front of your CIO to ask for the vSphere 5 upgrade. I like this approach the most, as it increases your chances of staying with your favorite hypervisor. Xen and Hyper-V are just not there yet, and we all know it.
The example above is an extreme case in my opinion, and you will probably not be in such bad shape. Calculate what you have today and how much vRAM you will need. Look around and see if you can reclaim memory from oversized VMs. That is the best place to start.
As of today, you can still buy vSphere 5 licenses and downgrade them to vSphere 4; do that until the time is right for you to upgrade your environment. But most importantly, review your environment. If you can reclaim configured VM memory, do everything in your power to do so. After all, oversized VMs will cost you real money now, and that fact alone will get you all the backing you will need to start "mission reclaim".