Microsoft Myths and Realities…

The fun just doesn’t stop from those ol’ pranksters over in Redmond. So we all know about the Microsoft Myths video that was recently published, but you all know me – I tear straight through the marketing BS and go straight for the jugular. Give me the cold, hard facts. That’s all I ask. And don’t lie, because I’ll know.

So I have to hand it to Microsoft: although they don’t publicise the cold, hard facts too often, those facts are hidden in the bowels of TechNet, just waiting to be happened upon. And so it came to pass that an article was brought to my attention by none other than my friendly Microsoft TAM. Bless him, he means well and is a top bloke. But Microsoft really shouldn’t be publishing this kind of information for the world at large to see, or at least not for people like you and me to see.

In this paper, published in January 2009, we get the cold, hard facts on Hyper-V as deployed by none other than Microsoft IT themselves. Internally. Y’know, that whole dogfood thing. And the results are absolutely astounding. Now before going further, I need to reiterate that this is an actual Microsoft-published case. It is not an April Fools joke; they are not having a lend of us. It is stone-cold truth from Microsoft’s own IT department. If you were going to listen to anybody talk about the reality of Hyper-V, it’s these guys. And again, this is not a joke. This is real. Here are a few juicy excerpts (bold bits added by me for emphasis):

As Microsoft IT developed standards for which physical machines to virtualize, it identified many lab and development servers with very low utilization and availability requirements. Because of the lower expectations, Microsoft IT now is deploying the lab and development virtual servers with four processor sockets, 16 to 24 processor cores, and up to 64 gigabytes (GB) of random access memory (RAM). These servers can host a large number of virtual machines, averaging 10.4 virtual machines per host machine.

A 16-core box, with somewhere north of 32GB of RAM, could only take 10.4 “development servers with very low utilization”. WTF? Well, if that’s what they’re doing for low-utilisation boxes, I wonder how they fare for production machines. Another excerpt:

For the production-server deployments, Microsoft IT is using servers with two processor sockets, 8 to 12 processor cores, and 32 GB of RAM.

On average, the host servers with eight processors and 32 GB of RAM are hosting 5.7 virtual machines in the production environment.

Less than 6 machines on average, on an 8-core box with 32GB of RAM. Yikes.
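For perspective, here’s a back-of-the-envelope sketch of what those ratios mean per VM. Hyper-V in 2008 can’t overcommit memory, so host RAM divides hard between the parent partition and the guests – note the 2GB parent-partition overhead below is my own assumption, not a figure from the paper:

```python
# Back-of-the-envelope resource maths for the quoted consolidation ratios.
# Hyper-V (2008) statically allocates guest RAM -- no overcommit -- so host
# RAM divides hard between the parent partition and the guests.

def ram_per_vm(host_ram_gb, vms, parent_overhead_gb=2.0):
    """Average RAM available per VM once the parent partition is fed.
    The 2 GB parent overhead is an illustrative assumption, not a quoted figure."""
    return (host_ram_gb - parent_overhead_gb) / vms

# Production hosts: 32 GB of RAM, 5.7 VMs on average
print(round(ram_per_vm(32, 5.7), 1))   # ~5.3 GB per VM

# Lab/dev hosts: up to 64 GB of RAM, 10.4 VMs on average
print(round(ram_per_vm(64, 10.4), 1))  # ~6.0 GB per VM
```

With no overcommit, every gigabyte a guest is allocated is a gigabyte the host loses, whether the guest uses it or not – which is exactly why these averages matter.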

They then get onto some great stuff surrounding “high availability”. Make sure you’re sitting down when you read the following excerpts, that you’re somewhere private like your own home, and that you have the bathroom door open and an ambulance on hold. Because it is very likely you will piss yourself laughing, and then your sides will actually split as you move through these next gems:

With Windows Server 2008 failover clustering, an administrator must store each virtual machine on an individual LUN. Because an administrator must provide all cluster nodes with access to the same shared storage by using the same drive letters, 23 is the maximum number of virtual machines that can run in a failover cluster. Microsoft IT could work around this limitation by using mount points and virtual machine groupings, but it considers this configuration too complex to administer. Because of this limitation, Microsoft IT has adopted a standard of using only three nodes in a cluster, with the cluster configured to tolerate one node’s failure.
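The arithmetic behind that 23-VM ceiling is just the alphabet. One LUN per VM, one drive letter per LUN, identical letters on every node – though which three letters are reserved is my assumption, since the paper only states the resulting maximum:

```python
# Why 23? One LUN per VM, and every clustered LUN needs its own drive
# letter, identical across all nodes -- so the 26-letter alphabet is the cap.
import string

letters = set(string.ascii_uppercase)   # A-Z: 26 possible drive letters
reserved = {"A", "B", "C"}              # floppy drives + system volume (my assumption)
available = letters - reserved

print(len(available))  # 23 -> at most 23 clustered VMs without mount points
```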

I don’t know what to say. Due to their own stupid design (actually two stupid designs – one being drive letters and the other being one LUN per VM), they are crippled into wasting a huge amount of host capacity by allowing tolerance for one host failure in a three-node cluster. Those remaining two hosts clearly could not run at 100%, and because you can’t overcommit memory, one can only assume they are wasting much more CPU than RAM. Now, firmly re-attach your jaws for the next excerpt from “high availability”:

When virtual machines fail over in a Windows Server 2008 failover cluster, the cluster service with Hyper-V must save the virtual machine state, transfer the control of the shared storage to another cluster node, and restart the virtual machine from the saved state. Although this process takes only a few seconds, the virtual machine still is offline for that brief period. If an administrator has to restart all hosts in the failover cluster because of a security update installation, the virtual machines in the cluster have to be taken offline more than once. Therefore, Microsoft IT determined that highly available virtual machines could have more downtime than virtual machines deployed on stand-alone servers in the case of simple planned downtimes for host maintenance, such as applying software updates.

We have just lost cabin pressure. Virtual machines that are configured for “high availability” by being clustered could actually have more downtime than those running on a single standalone host.
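To see how clustering can lose to a standalone box, here’s a toy model of one rolling patch cycle. It assumes every failover is a quick migration (save state, move the LUN, restore) and therefore a brief outage – illustrative numbers only, not figures from the paper:

```python
# Counts how many times one VM goes offline while every host in a cluster
# is patched and rebooted in turn. No live migration in Windows Server
# 2008: each quick migration = a brief outage.

def patch_outages(cluster_nodes):
    """Return (best, worst) outage counts for a single VM during one
    rolling update of all hosts. Illustrative model, not measured data."""
    if cluster_nodes <= 1:
        return (1, 1)  # stand-alone host: down once, for the host reboot
    # Clustered: the VM must quick-migrate off each node as it reboots.
    # Best case it dodges the remaining reboots (2 moves total); worst case
    # it keeps landing on the next node due for patching (one move per node).
    return (2, cluster_nodes)

print(patch_outages(1))  # stand-alone host
print(patch_outages(3))  # Microsoft IT's three-node standard
```

Under these assumptions a “highly available” VM in the three-node cluster eats two or three outages per patch cycle, versus exactly one for the standalone host – which is precisely what Microsoft IT concluded.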

I’m going to end the slaughter here, and leave the remains for the rest of the hungry pack to devour. Read the article in full for more hilarity. Now we know the real reason why Microsoft release these stupid marketing videos – to draw our attention away from the harsh realities that they themselves know all too well.

The message from this paper is clear. All cost calculations are off. Theoretical consolidation ratios, off. Performance comparisons, off. Datacenter efficiency comparisons, don’t even bother. In light of the hard facts, all the marketing fluff just isn’t necessary for VMware. With Microsoft producing papers like this, VMware clearly doesn’t need marketing. I apologise in advance for potentially putting 90% of VMware’s marketing team out of a job. Please keep John Troyer though – someone needs to rebuild the community after we all laugh ourselves to death at this.


30 Responses to “Microsoft Myths and Realities…”

  1. ruste Says:

    OMFG! thanks MS for keeping me in a job for the next 5yrs !

  2. Duncan Says:

    You should start a video blog! man that would be hilarious!

  3. Microsoft Mythbusters creating myths themself « UP2V Says:

    […] A very funny response to the Microsoft IT article can be found here […]

  4. Erik Says:

    Hilarious! Hyper-V enterprise ready? Hyper-V to challenge VMware ESX? ROFL!

  5. Hyper-V the laughter continues | - I choose (a virtual) life! Says:

    […] my twitter popped up and I saw a retweet from Duncan Epping pointing to an article on VInternals. Duncan added a warning ‘Please sit down before reading my latest post‘, this caught my […]

  6. Microsoft Hyper-V - Enterprise ready? Nope, thats a myth! :: Says:

    […] on over to vinternals and read Microsoft Myths and Realities…. As Stu concludes, VMware doesn’t need marketing when Microsoft provides us with this kind of real […]

  7. Jeff Hengesbach Says:

    Fantastic find! Those utilization ratios and resulting conclusions are just….sad, sad, sad.

  8. SICP-BBCode-EnterpriseExpertProgrammer Says:


    But oh wait, with 2008 R2, they will surely achieve even more FABULOUS results.
    Until then, enjoy XBOX HUGE energy bills, low available clustered VMs, and actively contributing to the increase of global warming, MS.

    Maybe VMware should make a public offer to MS for deploying VI in their environments, including a calculation on the savings in $, availability improvements etc. against their current pile of crap.

  9. RTFM Education » Blog Archive » The VMware Vs Microsoft Cat Fight Continues Says:

    […] […]

  10. Jason Boche Says:

    Very nice.

  11. VMware fires back against Microsoft’s mythbusters - SearchServerVirtualization Blog Says:

    […] step further by digging up a Microsoft research paper from January that shows the limitations to Hyper-V performance. And Eric Gray wins the prize for Funniest Response with this […]

  12. Daern Says:

    MS have said that the drive letter thing will be sorted in 2008 R2.

  13. Eric Gray Says:

    Stu, brilliant piece of work. Destined to become a classic.

  14. dad Says:

    32 gigs divided among six machines plus the host system is less than four gigs per machine, which is a laughable amount of memory even for regular workstations.

  15. Anton Zhbankov Says:

    You’ve done my day!

  16. Stu Says:

    “Less than 6 machines on average, on an 8 core box with 32GB RAM. Yikes.”

    So without any context on the actual load on the VMs, you’re claiming that there is a poor consolidation ratio? That’s an interesting approach – do you approach all your virtualisation designs that way?
    Of course you don’t, because you know that you have to design the environment for the expected load on the VMs. And you might pause to consider that perhaps the workloads Microsoft is hosting are under considerable stress, which might explain why you see what appears to be a poor consolidation ratio.
    Just for example, here is a quote about the site: “The site handles 15,000 requests per second, 1.2 billion page views per month, and 280M worldwide unique users per month as well as supporting ~5000 content contributors from within the company. This site has close to 300GB of content consisting of some seven million individual files on each server.”



    Disclaimer: I work for Microsoft, this opinion is however my own and not that of my employer.

    • stu Says:

      Hi Stu,

      I would 100% agree if context was completely lacking. But it isn’t – their results of ~10 “low utilisation development servers” on hosts with double the sockets and up to double the RAM are consistent with ~6 on a 2 socket / 32GB RAM box.

      I highly doubt that the number of hosts used for running that site would have had an undue influence on the overall average. Being a bit of a web guy, I’ve seen most of the material on Channel9 and the like where they discuss the infrastructure behind it – it isn’t _that_ huge.

      But you do bring up an interesting point regarding the approach to virtualisation designs – I’d normally go through a candidate analysis phase as well. If it was a bunch of standalone apps and you could only get one VM per host, then I would virtualise in order to get the benefit of workload portability. But given the portability of web code, the ease of application-layer clustering, the availability of hardware load balancers and the high resource utilisation of the physical hardware, I dare say the boxes running that site would probably be ruled out as potential virtualisation candidates – there would simply be no benefit in virtualising such machines.

      So you’re right – blindly trying to virtualise everything using Hyper-V could result in such pathetic numbers, but I give Microsoft IT more credit than that – I’m sure they would’ve gone through a candidate analysis phase as well. And given this was their first shot at virtualising on Hyper-V, they would surely have gone for the low-hanging fruit first.

      So I don’t accept that there is no context to the production machine numbers.

      One thing I failed to make clear in my post, however, was that I’m not ridiculing Microsoft IT – they were forced to put together an enterprise-class design using a product that is clearly not enterprise-worthy. They merely played the cards they were dealt, and I can’t possibly infer that their design was substandard – if I had to work within the same constraints, I may well have come out with a similar design, who knows.

      Thanks for taking the time to comment, I appreciate it.


      (v)Stu 🙂

  17. Tall Dave Says:

    Quote from the Microsoft paper: “The Server Core installation option provides a smaller surface area for attack because fewer components are installed” = WTF! It’s still 2.5GB, Microsoft!! ESXi from VMware is 32MB. Another MS myth busted.

  18. titomane Says:

    that’s hilarious, I want more HA!!! The thing is that Hyper-V is not so bad, but it would have been better for Microsoft to have some humility this time… before attacking VMware 🙂

    I’m still afraid of Microsoft as they have a huge marketing advance on everyone else, and our society is based on marketing, not hard facts. Let’s pray, my friends 🙂

    tomtom (microsoft & virtualisation IT expert ;p)

  19. Interesting HyperV vs ESX link @ VMWare Hero Says:

    […] […]

  20. Did Microsoft Just Bust its Own Mythbusters? « vTeardown Says:

    […] I don’t think there has even been documented proof before – until now.  As Stu has written on vinternals – and there is no sense in me trying to recreate the details here, he’s written an excellent […]

  21. Marcel van Os Says:

    I think this Technet article simply proves a point I’ve been making for some time. Most MS people and evangelists come from a Virtual PC/Virtual Server “world”. They don’t have a clue as to the difference it makes if you look at things from the enterprise (read: VMware) perspective.

    The internal MS People don’t have a clue as well so it seems. Enterprise level virtualization requires a different mindset and that takes time to develop/build. Lots of people still make stupid mistakes when using/implementing VMware. So with Hyper-V it will be the same thing all over again.

    By the way, it is possible to get around the 23 machines restriction by using disks which use GUIDs instead of drive letters. Those are no fun either.

    The best solution is to buy the SanBolic MelioFS product which adds a VMFS like solution. I’ve talked to a customer who’s using it and they see it as simply essential and I have to agree.

    • stu Says:

      Hi Marcel,

      I agree it is possible to get around the 23-machine limit; however, I agree with Microsoft IT and your reasoning for not doing so – it’s complex and not really manageable. SanBolic do have a good product; I took a look at it when doing some due diligence on Hyper-V a while back. Kinda funny that Microsoft IT didn’t use this for their internal deployment, actually.


  22. Stu Fox Says:

    Thanks for the response. While you might think the site wouldn’t be a great virtualisation candidate, it is at least partly running on the Hyper-V platform.
    What’s interesting is that in this case it isn’t about consolidation (the ratio is really low – you & I both agree that it’s about workload, not ratio), it’s about time to deployment. On physical it takes 12 hours to sync the web content; on virtual you can preprovision it and cut the time down to 4 hours. And the MSDN & TechNet websites are fully running on Hyper-V (haven’t seen any details on what the back end looks like for that, though).


    Stu Fox

  23. adude Says:

    M$ RTFM > Virtualization for DUMMIES!

  24. Craig Says:

    Good post, and Hyper-V is nowhere near enterprise yet. They are more likely to be a good marketing company rather than a good software company 🙂

  25. chris Says:

    I like how MS picked virtualization for their twitter handle. Don’t stop there…. I think virtualdeath is still available.

  26. » Blog Archive » Hyper-V - MS Dogfooded their own s@#$% Says:

    […] above quote is from which is an easier read than the Technet […]

  27. Will you fight or will you run? » Yellow Bricks Says:

    […] response: Microsoft Myths and Realities… […]

  28. Brent Hawkins Says:

    Hey guys, consider your audience. We’re engineers and architects and can see through the typical MS BS. Hyper-V will never go into our datacenter, PERIOD. We just had a wave of worms infect a few of our VM guests. I can’t imagine if the worm took down a Hyper-V farm. If someone lost their job due to the stupidity of using Microsoft Hyper-V, they certainly deserved it.
