Posts Tagged ‘Effenheimer’

Citrix Marketing Bullsh_t Dispelled by VMware (again)

January 31, 2009

I can’t tell you how pleased I was to see VMware _finally_ publish some data on a more realistic XenApp scenario than they have previously. Back in the day they used to push the ‘single vCPU for Terminal Servers’ argument, which as any Citrix guy will know is sub-optimal in more than a few respects (mostly relating to how the TS console session and logons are tied to CPU0).

Now I’ll be the first to admit that any kind of test and resulting statistical analysis can be very subjective, but I’ll tell you what I think is the most important point to take from this latest study from VMware – the performance isn’t that different. Sure, ESX came out on top, but not by much (I’m sure Citrix will come out with some kind of data that shows the contrary before too long, and that’s fine). And why is that important? Because I am sick and tired of the marketing bullshit from Citrix and the subsequent throwaway one-liners that propagate through the intertubes and into the minds of Administrators in the enterprise, regarding XenServer being “optimised for XenApp”. I have _never_ seen any further details on what this actually means.

Well I’m throwing down the gauntlet on this one. I’m opening up the comments on this post, in the hope that Simon Crosby, Ian Pratt, or anyone else at Citrix whom I highly respect on a technical level will tell us all what the fuck these XenApp-specific optimisations in XenServer actually are, and why the same configuration could not be implemented with tweaks to ESX (not that they’re even necessary, going by the VMware post). I’ll even take a reply from hype-master Roger Klorese. If it is some kind of secret, I have no problem with that – just tell us so (and I’ll continue to call it complete BS). If it’s something that can only be disclosed under NDA, then tell us that too, because I guarantee you that 90% of the people who read this blog regularly are working for large enterprises who have NDAs in place with Citrix, so we can follow that avenue if necessary (and of course honour the NDA – I’m not stupid enough to lose my job for the sake of this blog). Cut the bullshit Citrix – we just want to know.

UPDATE: As numerous commenters have pointed out, I completely neglected to mention the excellent work of Project VRC. But again, in line with my original comment, the differences between hypervisors are so minor that it really doesn’t matter – as Duncan said, the main thing is that Citrix workloads are finally viable targets for virtualisation, because the difference to physical is also insignificant when you consider all the benefits of virtualisation.

As for Simon Crosby’s retort, I won’t bother picking through it. Citrix are easily as guilty as anyone when it comes to spreading one-sided “studies” under the guise of science, and their best buddies at Microsoft are the kings of the hill in that arena. Here’s a tip for you Simon – Microsoft are also the kings of “embrace, extend, extinguish”, remember? We all know what’s gonna happen when they enter the connection broker market.

BUT I will say this much in credit to Simon – he hit the nail on the head with his comments regarding VMware’s draconian “thou shalt not publish benchmarks” stance. This has been a bugbear of mine for a long time, and I’ve said as much to fairly senior people at VMware. A change in that would be most welcome indeed.

Also, as a few of the anonymous commenters pointed out, Citrix offered this explanation six months ago of what the ‘secret XenApp optimisation sauce’ actually is. Which I can only assume is now completely null and void with the widespread availability of RVI / EPT. Oh well, it was fun while it lasted I guess.


Shanklin Elevated From "Demigod" to "FullGod"… Film at 11.

January 28, 2009

After having a look around the new VI Toolkit release today, and the accompanying videos and documentation, I am convinced Carter will be hailed as a new God by VI admins the world over. I mean, even Boche has stated this release may be enough to move his lazy ass into finally learning the ways of the ‘Shell (I’ve been at him for ages about this, but alas he just ignores me).
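
For anyone who hasn’t played with it yet, here’s a rough taste of why. This is a minimal sketch using the toolkit’s stock Get-VM and Get-Snapshot cmdlets; the vCenter name and the thresholds are made up purely for illustration:

```powershell
# Minimal sketch only - the vCenter name and thresholds below are hypothetical.
Connect-VIServer -Server vc01.example.local

# Every powered-on VM with more than one vCPU, biggest memory hogs first.
Get-VM |
    Where-Object { $_.PowerState -eq "PoweredOn" -and $_.NumCpu -gt 1 } |
    Sort-Object MemoryMB -Descending |
    Select-Object Name, NumCpu, MemoryMB

# Snapshots older than two weeks, per VM - the perennial clean-up job.
Get-VM | Get-Snapshot |
    Where-Object { $_.Created -lt (Get-Date).AddDays(-14) } |
    Select-Object VM, Name, Created
```

That sort of ad-hoc report used to mean an afternoon of clicking through the VI client; now it’s a couple of pipelines.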

I sooooo want to show the VI Java API some love (which was quietly updated to 1.0U1 only days ago), but with Windows Server 2008 R2 Core finally providing .NET support, I’m more likely to wrap PowerShell scripts up in web services for interop purposes than get dirty with Java (and that’s no reflection on the quality of what Steve Jin has done, which is top class).

Massive props to Carter and his team. With tools like this, I don’t give a fuck if vCenter or anything else becomes Linux based. Let’s hope vCenter Orchestrator has native PowerShell support.

The Myth of Infrastructure Contention

January 18, 2009

Back… caught u lookin for the same thing. It’s a new thing, check out this… oh no wait. It ain’t a new thing, it’s just another Sunday Arvo Architecture And Philosophy post. This time I’m going to focus on a long-time thorn in many of our sides – the myth of infrastructure resource contention.

This ugly beast rears its head in many ways, but in particular when consolidating workloads or changing hardware standards (such as standardising on blade). Of course, the people raising these arguments are often server admins with little or no knowledge of the storage and network architecture in their environments, or consultants who either have never worked in large environments or likewise don’t know the storage and network architectures of the environment they have come into. Which is not their fault – due to the necessary delineation of responsibility in any enterprise, they just don’t get exposure to the big picture. And again, I should say from the outset that I’m talking ENTERPRISE people! Seriously, if I cop shit from one more person who claims to know better based on their 20-host “strong” ESX infrastructure or home fucking lab, I am going to break out the shuriken. YES THE ENTERPRISE IS DIFFERENT. If you have never worked in a large environment, you can probably stop reading right now (unless you want to work in a large environment, in which case you should pay close attention). Can you tell how much these baseless concerns get to me? Now where was I…

OK, a few more disclaimers. In this post I will try to stay as generic as possible to allow for broader applicability, and focus on single paths and simplistic components for the sake of clarity. Yes, I know about Layer 3 switches and other convergent network devices and topologies, but they don’t help to clarify things for those who may not know of such things. Additionally, the diagrams below are a weird kind of mish-mash of various things I’ve seen in my time in several large enterprises, and I suck at Visio. Again, I have labelled things for clarity more than accuracy, and chopped stuff down in the name of broad applicability. Keep that in mind before you write to me saying I’ve got it all wrong.

IP Networks
So let’s tackle the big one first: IP networks. Before virtualisation, your network may have looked something like this:

Does that surprise you? If it does, go ask one of your network team to draw out what a typical server class network looks like, from border to server. I bet I’m not far off. Go and do it now, I’ll wait for you to get back.

OK, enlightened now? And in fact if you are, the penny has probably already dropped. But in case it hasn’t, let’s see what happens when the virtualisation train comes rolling in, and your friendly architecture and engineering team propose putting those 100 physicals into a blade chassis. It is precisely at this point that most operations staff, without a view of the big picture, start screaming bloody murder. “You idiot designers, how the hell do you think you can connect 100 boxes with only 4Gb of (active) links!!! @$%#@%# no way I’m letting that into production you @#%$%!!!”. However, when we virtualise those 100 physical boxes and throw them all into a blade chassis, our diagram becomes:

OK, _now_ the penny has definitely dropped (or you shouldn’t have administrative access to production systems). IT DOESN’T MATTER WHAT IS BELOW THE ACCESS LAYER. Because a single hop away (or 2 if you’re lucky), all that bandwidth is concentrated by an order of magnitude. The network guys have known this all along. They probably laughed at the server guys’ demands for GbE to the endpoints, knowing that in the grand scheme of things it would make fuck all difference in 90% of cases. But they humoured us anyway. And lucky for them they did, because the average network guy’s mantra of “the core needs to be a multiple of the edge” needs to be tossed out on its arse, for different reasons. But that’s another post :-).
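
To put some very rough numbers on it, here’s a back-of-envelope sketch. The host counts, link speeds and uplink counts below are purely illustrative assumptions, not from any particular environment; plug in your own.

```powershell
# Purely illustrative figures - substitute your own environment's numbers.
$physicalHosts    = 100
$nicPerHostGbps   = 1        # GbE to each physical server
$accessUplinkGbps = 2 * 10   # say 2 x 10GbE from the access layer up to distribution

# Before virtualisation: the edge was already heavily oversubscribed.
$edgeCapacityGbps = $physicalHosts * $nicPerHostGbps
$beforeRatio      = $edgeCapacityGbps / $accessUplinkGbps
Write-Output "Before: $edgeCapacityGbps Gbps of nominal edge capacity into $accessUplinkGbps Gbps of uplink (oversubscribed $beforeRatio to 1)"

# After virtualisation: the same 100 workloads behind a blade chassis with 4 active GbE uplinks.
$bladeUplinkGbps = 4
$afterRatio      = $edgeCapacityGbps / $bladeUplinkGbps
Write-Output "After: the same nominal $edgeCapacityGbps Gbps now funnels through $bladeUplinkGbps Gbps at the chassis (oversubscribed $afterRatio to 1)"

# Either way, one hop up everything still shares the same distribution uplinks,
# which is exactly why what sits below the access layer doesn't move the bottleneck.
```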

Fibre Channel Networks
I know I know, I really don’t need to be as blatant about it this time, because you know I’m going to follow the exact same logic with storage. But just to drive the point home, here again we have our before virtualisation infrastructure:

And again, after sticking everything onto a blade chassis:

I don’t think the above needs any further explanation.

I’m sure there are a million variations out there that may give rise to what some may think are legitimate arguments. You may have a dedicated backup network, it may even be non-routed. To which I would ask: what is the backup server connected at? What are you backing up to? What’s the overall throughput of that backup system? The point is, there will _always_ be concentration of bandwidth on the backend, be it networking or storage, and your physical boxes don’t use anywhere near the amount of bandwidth that you think they do. You may get the odd outlier, sure. Just stick it on its own box, but still put ESX underneath it – even without the added benefits of SAN and cluster membership, from an administrative perspective you still get many advantages of virtualising the OS (remember, enterprise. We don’t pay on a per-host basis, so the additional cost of ESX doesn’t factor in for enterprises like it would for smaller shops).
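
If you’d rather check than argue, the sanity test is trivial. Another sketch with made-up numbers: pull the average per-host network utilisation out of whatever monitoring you already run, multiply it out, and compare it against the chassis uplinks and against whatever your backup server is actually plugged in at.

```powershell
# Made-up figures again - the shape of the calculation is the point, not the numbers.
$hostCount        = 100
$avgPerHostMbps   = 5      # average measured utilisation per physical box (assumption)
$bladeUplinkGbps  = 4      # 4 active GbE uplinks on the chassis (assumption)
$backupServerGbps = 2      # backup server hanging off 2 x GbE (assumption)

$aggregateLoadGbps = ($hostCount * $avgPerHostMbps) / 1000

Write-Output "Aggregate measured load across the estate: $aggregateLoadGbps Gbps"
Write-Output "Blade chassis uplink capacity: $bladeUplinkGbps Gbps"
Write-Output "Backup server connectivity: $backupServerGbps Gbps"

if ($aggregateLoadGbps -lt $bladeUplinkGbps) {
    Write-Output "The chassis uplinks are not your bottleneck; the backend (the backup server, in this example) already was."
}
```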

OK, time to wrap this one up. Your environment may vary from the diagrams above, but you _will_ have concentration points like those above, somewhere. That being the case, if you don’t have network or storage bandwidth problems before virtualisation, don’t think that you will have them afterwards just because you massively cut the aggregate endpoint connectivity.

ThinApp Blog – would you like a glass of water to help swallow that foot?

December 4, 2008

I’m going to try and resist the urge to make this post another ‘effenheimer’ (as Mr Boche might say 🙂), but my mind boggles as to WTF the ThinApp team were thinking when they made this post. Way to call out a major shortcoming of your own product, guys! To be honest, I’m completely amazed that VMware don’t support their own client as a ThinApp package. Say what you will about Microsoft, but you gotta respect their ‘eat our own dogfood’ mentality. To my knowledge, if you encounter an issue with any Microsoft client app that is delivered via App-V, they will support you to the hilt.

Now that I’ve passed the Planet v12n summary wordcount, I can give in to my temptation and start dropping the F bombs, because I’m mad. The VI client is a fairly typical .NET-based app. If VMware themselves don’t support ThinApp’ing it, how the fuck do they expect other ISVs with .NET-based products to support ThinApp’ing their apps? Imagine if VMware said that running vCenter in a guest wasn’t supported – what kind of message would that send about machine virtualisation! Adding to the embarrassment, it seems that ThinApp’ing the .NET Framework itself is no dramas!!!

It’s laughable that a company would spend so much time and money on marketing efforts like renaming products mid-lifecycle, but let stuff like this fall by the wayside. Let’s hope this is fixed for the next version of VI.