Tech buzzword alert for 2014: hyperconvergence has hit the mainstream and is coming to a datacenter near you…
If you’ve spent any time this year in the compute space, or have talked to vendors about purchasing equipment, then there is no doubt you’ve been exposed to the concept of hyperconvergence. Besides having the most corporate-appealing marketing name of all time, it does accurately describe the process. For the uninitiated, it’s nothing more than compute, storage, and networking spread across multiple nodes in a single box, or a cluster of boxes, all controlled by a single software orchestration engine.
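To make that definition concrete, here is a toy model of the idea: every node contributes compute and storage, and a single orchestration layer pools those resources and decides where workloads land. All class and method names here are illustrative assumptions for the sketch; no real vendor's API is implied.

```python
# Toy model of a hyperconverged cluster: each node brings compute and
# storage, and one orchestration layer pools them and places workloads.
# Names are illustrative only -- this mirrors no actual product.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu_cores: int
    ram_gb: int
    storage_tb: float
    used_cores: int = 0

@dataclass
class Cluster:
    nodes: list = field(default_factory=list)

    def add_node(self, node):
        self.nodes.append(node)

    def pooled_storage_tb(self):
        # Storage from every node is presented as one shared pool.
        return sum(n.storage_tb for n in self.nodes)

    def place_vm(self, cores_needed):
        # Naive load balancing: place the VM on the node with the
        # most free cores, the way an orchestration engine might.
        candidates = [n for n in self.nodes
                      if n.cpu_cores - n.used_cores >= cores_needed]
        if not candidates:
            raise RuntimeError("no node has enough free cores")
        target = max(candidates, key=lambda n: n.cpu_cores - n.used_cores)
        target.used_cores += cores_needed
        return target.name

cluster = Cluster()
for i in range(3):
    cluster.add_node(Node(f"node{i}", cpu_cores=16, ram_gb=128,
                          storage_tb=4.0))

print(cluster.pooled_storage_tb())  # 12.0 -- three 4 TB nodes, one pool
print(cluster.place_vm(4))          # node0 -- first node with most free cores
```

The point of the sketch is the shape, not the scheduling policy: the cluster, not any individual box, is the unit you manage, which is exactly what the single orchestration engine buys you.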
Do you remember the days of the mainframe? Well it’s back…sort of. Back in the days of the mainframe (converged infrastructure), everything was centralized due to cost and complexity. The problem was that systems were slow across large geographic distances and hardware choices were often locked to a single vendor.
Then, in accordance with Moore’s Law, systems became smaller, which reduced cost, and the industry moved to a distributed infrastructure. The distributed model helped make large amounts of data and compute available across large geographic areas by replicating data locally to sites. It also helped make many organizations vendor agnostic.
In the last few years, with the rise of virtualization and the accessibility of high-speed networks, the industry has returned to converged architecture to help ensure data security and availability. One of the constant struggles as a sysadmin is keeping all the data an organization generates safe, secure, and backed up. With distributed architecture, each site needs storage. Storage is expensive, as is generating data, and therefore sites need local and offsite backup solutions to maintain data accessibility and recoverability. This drives up costs for hardware and licensing, and if the systems are geographically distant from each other, there is a need for additional workforce or travel to manage them. To help mitigate these costs and meet security compliance requirements, virtual desktop infrastructure (VDI) has come into play, and I believe it is the primary reason for the rise of hyperconverged infrastructures. VDI is very resource- and network-demanding, and in a hyperconverged environment where resources are shared and load-balanced, the architecture works.
Why don’t I think it’s ready yet?
Right now most hyperconverged product offerings are vendor dependent, and I don’t like that. Also, the architecture is so new that the industry hasn’t had a chance for natural selection to kill off the failures and promote the winners. The last thing I want to do to my org is spend a quarter of a million dollars on a hardware/software piece whose vendor might go bankrupt, get bought out, or worse, have zero reliability.
Right now the only system I see with long-term promise is the VMware vSphere/vSAN architecture, but even that has some flaws in my eyes.
What do I want?
What I would like to see is a hardware- and hypervisor-agnostic software piece, similar in spirit to Docker, that controls hardware from the software level and presents storage to the hypervisor to converge systems. This would mean I could choose the hardware from the vendors I like to use and the hypervisor that best does the job. At this time I don’t know of any such software.