Removing Complexity from the Orchestration Layer

I’m pretty sure that most people have figured out by now that the Cloud really has fuck all to do with technology. Or at the very least, the technical challenges are the least of your concerns – it’s really all about the operating model. The implications of this new operating model stretch to pretty much every part of your IT organisation as it exists today. However, I would argue that most of the changes are things worth doing anyway, even if you have zero intention of moving to the Cloud. I like to collectively term these things “The Art of Cloud Without Cloud”. In this post, I’ll take a look at the orchestration layer.

How many times have you heard a vendor pitching an orchestration product and extolling the virtues of their “integration with 3rd party components”? I’ve heard it a lot. But what people seem to miss is that being able to speak to various backend infrastructure components (backup, monitoring, CMDBs, etc) is not even half the battle – most of the work comes from understanding _how_ those components are used in your environment. For example, is there a clearly defined process for adding new servers into backup schedules, or does the process only exist in the heads of the operations staff? How consistent is that process locally? Globally? How about decommissioning?

Even if the processes are documented and consistent, capturing them usually results in a lot of complexity in the orchestration layer. This is problematic for several reasons: it can mean an excess of configuration data being stored in this layer (workflow tools should _not_ be used as configuration repositories!), it means the owners / developers of the orchestration layer need to gain a good understanding of each infrastructure component in question (a resource intensive process), and it means a loss of control by the operations team, who are in reality the experts!

The solution to this is to make sure that these backend components can truly operate _as a service_. That is, move all the component specific logic as close as possible to the component itself, and give ownership of the service endpoint back to the team who owns the service. An orchestration tool should be able to just call into the service endpoint with a set of requirements (Location: DC1, OS:Linux, RPO:Standard, RTO:Standard, DataVol:100GB) and have something underneath that endpoint handle all the capacity management elements and schedule configuration. In the world of today, this will likely mean some kind of custom development to build such a service endpoint. But that’s not such a bad thing – remember all that talk of “freeing people from mundane stuff so they can do more higher level stuff”? This is a perfect example.
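To make the idea concrete, here’s a minimal sketch of what the service side of such an endpoint might look like. Everything in it is illustrative – the function name, the policy tiers, the capacity threshold – none of it comes from any real backup product; the point is simply that the orchestration layer passes abstract requirements and all the component-specific logic lives behind the endpoint, owned by the backup team.

```python
# Hypothetical backup service endpoint: the orchestration layer submits
# abstract requirements; the backup team's code maps them onto concrete
# schedule and capacity decisions.

def provision_backup(requirements):
    """Translate abstract requirements into backup-specific config.

    All policy names and limits below are made up for illustration.
    """
    # Local knowledge lives here: which RPO/RTO tier maps to which
    # concrete backup schedule. Owned and updated by the backup team.
    tier_schedules = {
        ("Standard", "Standard"): "daily-incremental-weekly-full",
        ("Low", "Low"): "hourly-incremental-daily-full",
    }
    key = (requirements["RPO"], requirements["RTO"])
    if key not in tier_schedules:
        raise ValueError(f"No backup policy defined for RPO/RTO tier {key}")

    # Capacity management also stays behind the endpoint: reject requests
    # the backup estate can't absorb (threshold is invented here).
    data_gb = requirements["DataVolGB"]
    if data_gb > 10_000:
        raise ValueError("Request exceeds per-server capacity limit")

    return {
        "schedule": tier_schedules[key],
        "pool": f"{requirements['Location'].lower()}-{requirements['OS'].lower()}",
        "reserved_gb": data_gb,
    }

# All the orchestration tool ever sees is this one abstract call:
config = provision_backup({
    "Location": "DC1", "OS": "Linux",
    "RPO": "Standard", "RTO": "Standard", "DataVolGB": 100,
})
```

The orchestration workflow stays trivially simple (one call, a handful of abstract parameters), and when the backup team changes schedules or capacity rules, nothing in the orchestration layer needs to know.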

Doing something like this is a good idea, Cloud or not. Service Oriented Infrastructure is one of those buzzwords that has been flying around for years, but the only work that seems to have been done is somewhat superficial (i.e. implementing SLAs, service catalogs and the like). So the next time you see or hear a vendor touting their wares as “cloud ready” or “built for the cloud”, ask them about this kind of functionality. And see how ready they really are.

