Today is the launch (directed availability now, with general availability in Q1) of the ScaleIO Node - the ScaleIO software, bundled with a range of server hardware, and if needed a ToR switch - delivered as an appliance with a single, clear support model: EMC supports it, period. Hmmm:
What is it? It’s a storage thing. Scratch that - it’s a GREAT storage thing :-) Its closest relative isn’t a VxRack, but rather an Isilon node. Another, less close relative is a VSAN-Ready Node.
What do I mean? Why have we created the ScaleIO Node - and what’s it used for? Read on! First of all - the ScaleIO Node is all about the ScaleIO SDS software, so if you want to stop reading right now and just go download the bits, I would encourage it.
ScaleIO is, at its core, quite simple. Its goal is to be an extremely scalable, extremely simple SDS stack (the simplicity is to some degree the key to the performance and scaling) that can be deployed for a broad, heterogeneous set of transactional use cases. Randy Bias did a post comparing (with a ton of performance data) ScaleIO to the use of Ceph (commonly deployed as a transactional SDS stack with OpenStack) here. The point is that ScaleIO is a laser, and does what it does really well. But remember - DON’T LISTEN TO ME. DON’T LISTEN TO RANDY. Download it and give it a whirl.
People have taken the freely available and frictionless bits (I’ll say it again: no time bomb + no feature limits + no capacity limits + we don’t even ask for your email address :-) and the infrastructure-as-code tools and created simple automation packages to deploy into AWS, Azure, vCloud Air and more. They have played with it at huge scale (hundreds/thousands of nodes) and massive performance levels for a few hours, for a few dollars. I’d encourage anyone to download, play, learn and share. What makes ScaleIO great is its simplicity, its scalability, and its performance.
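To make the automation idea above concrete, here is a minimal, illustrative Python sketch of the kind of cluster-layout planning such a deployment package has to do. The ScaleIO role names are real (MDM = metadata manager, SDS = data server, SDC = data client, plus a tie-breaker for the MDM cluster), but the host names, the helper function, and its exact logic are hypothetical - real deployments use ScaleIO's own installer, not this code:

```python
# Illustrative only: a toy planner that assigns ScaleIO roles to hosts.
# The role names (MDM, SDS, SDC, tie-breaker) are real ScaleIO concepts;
# everything else here (function name, host names) is made up for this sketch.

def plan_cluster(hosts):
    """Assign ScaleIO roles across a list of host names.

    A minimal ScaleIO cluster needs an MDM cluster (primary, secondary,
    and a tie-breaker) plus SDS (storage server) on every node; SDC
    (the client) runs wherever volumes will actually be consumed.
    """
    if len(hosts) < 3:
        raise ValueError("ScaleIO needs at least 3 nodes for an MDM cluster")
    mdm_roles = ["primary-mdm", "secondary-mdm", "tie-breaker"]
    plan = {}
    for i, host in enumerate(hosts):
        roles = ["sds", "sdc"]          # every node serves and consumes storage
        if i < 3:
            roles.append(mdm_roles[i])  # first three hosts form the MDM cluster
        plan[host] = roles
    return plan

if __name__ == "__main__":
    cluster = plan_cluster([f"node{n:02d}" for n in range(1, 6)])
    for host, roles in sorted(cluster.items()):
        print(host, "->", ", ".join(roles))
```

The point of the sketch is simply that because the data plane is pure software with a small number of well-defined roles, wrapping it in a few dozen lines of automation for AWS, Azure or vCloud Air is straightforward - which is exactly why people have been able to spin up huge test clusters for a few dollars.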
The ScaleIO Node, at its core, is simple: all the same solid software, and for customers that want it bundled with hardware, it’s a great, simple answer.
I did a blog a little while back that’s worth checking out here: Is the dress white and gold – or blue and black? SDS + Server, or Node? This captures the essence of what today’s announcement is about. It’s captured in this “Software+Hardware” vs. “Software only” crazy illogical circle.
Interestingly, as we do more and more with pure software-only stacks, I’m finding I’m navigating this circle with more customers. They think they want a pure “software only” solution (starting at the 1 o’clock position), and then the dialog goes in a strange circle that ends with them choosing a software + hardware combo. I’ve found that as much as I want to - I can’t “short circuit” the dialog - because then they think I care whether it’s a software + hardware combo (if you want more, read the blog post above). I **don’t** care. Customers that fancy themselves hyper-scale (hint: odds are good that you aren’t) take longer to go around the circle than those who don’t. It’s a core operational and economic question. Operationally: do you have (or do you want to have) a “bare-metal as a service” function? Economically: can you actually save money by procuring the servers yourself (which by definition are cheaper at first glance - but not as dense, or as built for purpose), particularly when you take on management/sparing/fault management of said hardware?
We’ve discovered (as VMware has with “VSAN Ready Nodes”) that supported/qualified hardware accelerates adoption of SDS stacks. So - what does a ScaleIO Node include? 1) ScaleIO software (specifically v1.32 as of this writing); 2) industry-standard servers; 3) optionally, the top-of-rack switch that we’ve tested with and support. What does the server look like? Well - the answer is that there’s a broad set. Here’s one.
This is actually a performance-oriented node (low storage, high CPU/memory). So far - SINCE THIS IS A STORAGE THING - the vast majority of the demand is for the capacity-oriented nodes. There’s a broad range of configs, which are detailed below. The premise here is simple.
Now - why do I keep reinforcing this as a storage thing? After all - can you run compute on one of the nodes? Can you? Yes. Should you? Probably not. For those of you following closely: for a while we have demoed Isilon clusters that run compute workloads (even VMAX3 running general-purpose workloads). We’ve discovered that just because you CAN, doesn’t mean you SHOULD. Since the ScaleIO Node is completely missing the management and orchestration stack to manage that compute, update it, and otherwise make it a hyper-converged compute thing (including the support model) - it is a storage thing, not a hyper-converged compute thing.
BTW - if what you need is a hyper-converged compute thing, it’s VxRack or VSPEX Blue, depending on scale. Here’s the continuum - from ScaleIO = software only (use it however you want) -> ScaleIO Node = software + hardware node (just like an Isilon node - which is software packaged with an industry-standard server) -> VxRack = hyper-converged rack-scale infrastructure. What’s going on with VSPEX Blue? Building momentum and commitment. My personal view is that you cannot simultaneously design for “start small” and “scale big”.
So - today is another example of our model: SDS data planes are real, and we will give you the choice of packaging that fits you best….
...I’m insanely curious:
a) have you downloaded the ScaleIO and VSAN bits? What do you think?
b) where are YOU on the “circle of illogical choice?” Do YOU think it’s illogical?
