I have a question for you. Actually, a couple questions.
Seriously – I want to know :-) I posted a little survey here. Share anonymously – and I will share what you share with me in a couple of weeks! The point here is a simple but important one.
Now – I truly believe that an increasing consumption of infrastructure in converged forms is inevitable. Why? People are starting to grok something fundamental: while there will always be innovations in parts of the stack (network/server/storage/abstraction/automation etc.) – the incremental benefit of hyper-engineering elements of the stack is diminishing (except for certain important corner cases). Frankly, one side effect of the success of the hyper-scale public clouds is an acceleration of this understanding: business benefit comes from outcomes, and “lock-in” risk below the IaaS layer is a nit. Yeah, sure – there are some who fight this furiously – but usually it is people trying to protect their “turf”.

There are a multitude of different “converged infrastructure” offers on the market – all different. Vblock, VSPEX Blue, and other startups focused on hyperconverged architectures (each coupled with an IaaS layer to make them a cloud) – and yes, the IaaS layers of things like vCloud Air, Azure, and AWS – are each proving something… You’re wasting time if you are optimizing and managing component by component. Let someone else do that for you, and focus on more valuable things.

But – as the questions and observations above show (we’ll see if the survey confirms!) – converged infrastructure system design is not going to be “one way”. In other words – it’s time for a “phylum” or taxonomy for converged infrastructure architectures, just like there was one for storage (here).

Now – I can’t claim to be the inventor of this taxonomy. It’s one that has had a lot of discussion/thinking across EMC, VMware, and VCE by many people over many, many years – and it’s not an “invention”, but rather something that flows naturally from talking to lots, and lots, and lots of customers.

I believe that the taxonomy of BLOCKS, RACKS, APPLIANCES captures 3 distinct design centers and groups the various forms of converged infrastructure system-level design. EMC/VCE is the CI leader by a long margin with Vblocks and VxBlocks in the BLOCK category. We’ve come out swinging with VSPEX Blue as our first APPLIANCE. And today, we’re extending our leadership position with the tech preview of VxRack as our first RACK system architecture.

Read on to understand/discuss the idea of the BLOCKS, RACKS, and APPLIANCES taxonomy. Please – I want YOUR thoughts and comments! Ok – first, let us start with baseline axioms:
Ok – let’s look at each of these: BLOCKS:
RACKS:
APPLIANCES:
Now, I want to be clear – it’s not that BLOCKS aren’t “flexible” or “simple”, or that APPLIANCES can’t be “flexible” – but this is their DESIGN center. It’s a false argument to say: “but I want to be able to support all the proven workloads I run today unchanged, with total flexibility, and total operational simplicity”. Sounds nice, but it’s a delusion. This observation is a variation of the classic idiom: “fast, cheap, reliable – pick two”.
Warning – what you will see/experience, of course, is that people will vehemently argue in favor of BLOCKS! No, it’s all about RACKS! No you idiot, it’s all about APPLIANCES! In my experience – that perspective comes when, consciously or subconsciously, you “block off” the logical arguments around the strengths/weaknesses of each. It’s a variation of the ancient furious battles of “NAS! NO, BLOCK!” or “PRIVATE CLOUD! NO, PUBLIC CLOUD!”

As some have observed, startups are often fueled by passion and fanatical fervor. It’s funny, I wrote this before I read Chuck’s post here on “The Cults Among Us”. I’ve been there – and it feels really, really good :-) Life is simple :-)

Sometimes that fervor is not driven by belief/passion. Sometimes it’s something more insidious. Sometimes the fervor is rooted in “I only have a hammer, so I better argue that everything is a nail”. I get that too, I suppose.

I suspect (and encourage!) that this post will get lots of arguments from some (customers, companies, whatever) saying that this taxonomy is a false one. I would encourage each reader to think for themselves, and come to their own conclusions. I WANT to debate and discuss with you – but let’s not let any of us be cultists!

Ok… just like with the storage taxonomy, it’s VERY useful intellectually to think of these system architectures without attaching a vendor label or implementation, because it helps group similar things together. Just like with the storage taxonomy, I would argue that these are SYSTEM architectures – and while different implementations will have various strengths and weaknesses, those SYSTEM design centers will influence the strengths and weaknesses of everything in a category – across different vendor implementations.
People tend to assume that the capex model of BLOCKS is worse than the capex model of APPLIANCES and RACKS – because the APPLIANCES/RACKS use industry-standard components. The reality is that the economic cost curves are different.

The diagram on the left is a simplified representation of a basic idea: system architectures that are BLOCKS have a large step function of capex cost embedded in them. Conversely, system architectures of RACKS and APPLIANCES tend to have smaller step functions of capex cost. Depending on where you are on the scaling axis, BLOCKS can have more, or less, capex cost embedded in their model than RACKS and APPLIANCES. There are also other factors (the above is a simplification) – like the fact that the density (capacity/IOps/compute/memory) of the 3 system architectures often varies.

Ok – next observation: that is capex COST (the inherent capital embedded in the architectures) – not the PRICE (what people pay) curve. What does the price curve of a “utilitized” (aka “sold as a utility”) BLOCK look like? Answer – a straight line. The lines in the diagram represent the inherent cost in the architectures of their hardware scaling model. Likewise – price is also a function of so many other factors that you can’t simply map it out like this (everyone has different economic models, and different values inherent in the software stacks). BUT – the diagram above does reflect the inherent hardware capital cost model.

There’s another HUGE factor – frankly, I think the more important economic factor – which is the OPEX model. Since BLOCKS have traditional elements for persistence and network as well as a traditional SYSTEM-level design (both required to support some existing workload requirements), the scaling of the opex model looks a little more like traditional stacks – even when well integrated as a converged infrastructure model. It looks like a step function (matching the capex curves). Conversely, RACKS and APPLIANCES are designed around a simpler operational and scaling model, so their OPEX scaling curve looks more logarithmic.

At EMC – our CI portfolio maps out like this (all are about accelerating outcomes):
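(A quick aside before the portfolio picture: if the step-function idea above is easier to grasp in numbers, here’s a minimal sketch. The step sizes and dollar figures are made up purely to show the shape of the curves – they are NOT real EMC/VCE capacity or pricing data – and the function name is just my shorthand.)

```python
import math

def capex_cost(units_needed, step_size, cost_per_step):
    """Capex as a step function: capacity is bought in chunks, so cost
    jumps each time demand crosses a chunk boundary."""
    return math.ceil(units_needed / step_size) * cost_per_step

# Hypothetical design centers (illustrative numbers only):
#   BLOCK     -- big chunk per increment, big jump per step
#   RACK      -- medium chunk (a rack / a handful of nodes)
#   APPLIANCE -- small chunk (a single appliance node)
for demand in (10, 50, 100, 200, 400):
    block     = capex_cost(demand, step_size=200, cost_per_step=450_000)
    rack      = capex_cost(demand, step_size=40,  cost_per_step=90_000)
    appliance = capex_cost(demand, step_size=10,  cost_per_step=25_000)
    print(f"demand={demand:>3}  BLOCK=${block:>9,}  "
          f"RACK=${rack:>9,}  APPLIANCE=${appliance:>9,}")
```

Run it and you see the crossover: at small scale the BLOCK carries a lot of unused capital, while at larger scale its bigger steps can work out to the same or less embedded capex than many small appliance increments – which is exactly the “depends where you are on the scaling axis” point.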
Today we announced VxRack – which is the family of offers built around the “RACKS” design center. The CI leader is now the “complete CI Portfolio leader”. BTW - There’s a SIMPLE decoder for the EMC/VCE product naming:
I’ll do a standalone post on VxRack for people who want to learn more about it (including details and demos!). The whole CI portfolio comes from EMC and our partners with a fully converged model – buy as a system, support as a system, manage as a system.

So – how to navigate these choices? Here’s an example of how I’ve been guiding customers in simple question-driven flows. Here – a customer says “I will forgo all workloads that have traditional data service dependencies that are not handled in the SDS data planes” (or they will handle those workloads otherwise). The core question is then “do you believe in the design of a single abstraction layer for all workloads?” If yes, put it all on vSphere and the RACK-based design, which is VxRack with the EVO:RACK personality – later this year. If no (perhaps there are workloads that for any reason won’t be on vSphere, but will be on physical – or perhaps the customer has a mix of non-vSphere hypervisors), the answer will be VxRack with the open personality.

What about the case where a customer has broad application requirements, and doesn’t want to “not go CI” because the SDS layer and various networking functions can’t support them? Let me give you an example. What if they want to deploy an SAP landscape and need consistency groups? Or what happens if the application stack has a specific dependency or certification against a given target? Or perhaps it’s SAP HANA and they need a TDI or appliance support position? Well – in that case it CANNOT be deployed on the path to the left in the diagram here. Note that going down that path (staying in the RACK system design) and only getting the VxRack with the vSphere persona means that they would have to keep those workloads off it. They would generally go down the path to the right – use Vblock for the subset of workloads that demand that behavior, and VxRack for the rest.
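To make that flow concrete, here’s a rough sketch of the same question-driven logic as code. The question wording and the outcomes follow the prose above; the function and parameter names are just my own shorthand, not an official decision tool.

```python
def suggest_ci_choice(needs_traditional_data_services: bool,
                      single_abstraction_layer: bool) -> str:
    """Simplified version of the question-driven flow for the RACK design center."""
    if needs_traditional_data_services:
        # e.g. an SAP landscape needing consistency groups, an app stack
        # certified against a specific target, or SAP HANA needing a
        # TDI/appliance support position
        return ("Vblock for the workloads that demand those data services, "
                "VxRack for the rest")
    if single_abstraction_layer:
        # everything lands on vSphere
        return "VxRack with the EVO:RACK persona"
    # mix of non-vSphere hypervisors and/or physical workloads
    return "VxRack with the open persona"


print(suggest_ci_choice(needs_traditional_data_services=False,
                        single_abstraction_layer=True))
# -> VxRack with the EVO:RACK persona
```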
Some customers (generally those with a small amount of P3) go down the path to the left. They work to have a single stack, and a single team operate the whole thing. They use vSphere for the world of P2 workloads and will use VxRack with the EVO:RACK persona to optimize it using some of the ideas in P3-land (a vSphere-only SDS layer + commodity hardware, SDN model) to make it “P2.5”. They love the idea of using Cloud Foundry and getting the OpenStack APIs on top of the same stack for their P3 models. Photon as a thin, lightweight Linux distribution seems great!

Other customers (generally those with a lot of P3, or working towards a lot of P3) go down the path on the right. They will use VxRack with the open persona, and simply put their P2 workloads on there using some of the ideas in P3-land (an OPEN SDS + commodity hardware, OPEN SDN model) to make it “P2.5”. Their P3 workloads will be more “pure” open source.

This last question is a nice example. Someone still reading this (thanks for sticking with me!) might ask themselves: “WTF is Chad talking about?!?! I only have a few hundred VMs. I don’t have a huge SAP landscape. I don’t have ANY P3 workloads. All I want is the ‘easy button’!” In that case – there’s no decision tree :-) VSPEX Blue can be ordered in minutes, arrive in days, and be set up and running in less than 10 minutes. There’s nothing simpler. Remember the design center = Simple. That’s simple in all things (including acquisition/configuration/etc.).

Let me make one final observation. Humans like there to be ONE ANSWER. It’s a TRAP! Just like with the diversity in the storage phylum – there is diversity in the converged infrastructure phylum for a reason: the architectures are adapted to fit their environment. The name of the game is to pick the right amount of diversity. Not too little (you’re not efficient) and not too much (you’re overcomplicating things). In my opinion, the correct way to look at the question of “what is the right CI strategy for me?” is along these lines:
Evaluate the landscape, and make your choices based on who you think can execute best for you. To deny that CI is increasingly the right answer – and that there is a diversity of CI choices and system architectures – is to have one’s head in the sand, IMO. I would love your thoughts and your input. Am I thinking about this in the right or wrong way?
