
Departmental Workloads


In previous posts I stated that evolving application workloads have been a primary cause of innovation in the high-tech industry. For example, applications drove the need for faster disk response times in the 1980s, leading to innovations such as the cached disk array (Symmetrix) and RAID5 (CLARiiON). In the mid-1990s applications drove the need for higher service levels, resulting in technologies such as snap copy (Symmetrix TimeFinder) and remote replication (Symmetrix SRDF).

As a review, consider the diagram below that stimulated this discussion. In place of Symmetrix and CLARiiON I have used the descendant names of these products in 2013 (VMAX and VNX). These two products continue to position themselves in the upper right-hand quadrant of the workload chart, providing both high performance and high service levels.

[Diagram: VMAX and VNX in the upper right-hand quadrant of the workload chart]
Application workloads continued to play a role in the late 1990s as well. The adoption of technologies such as cached disk arrays, RAID, snap copy, replication, etc., resulted in a shift towards a shared storage model. Instead of one application workload pointing at one storage device, enterprise customers began configuring multiple application workloads, from different departments within the enterprise, against a shared storage device.

For example, the engineering department might use a portion of the storage for test and dev, the sales department might use a database for customer leads, and the marketing department might create a file share to store marketing collateral such as PowerPoints and PDF files.

In this era, the requirement to handle multiple application workloads gave rise to an innovative new paradigm known as the Storage Area Network (SAN). In order to build a SAN, innovation was required by several sub-components of the overall SAN solution. These sub-components are described below.

Storage Switches

Different departmental workloads ran on different departmental servers. In the 1990s the hypervisor had not yet arrived on open systems servers. Some storage devices (e.g. CLARiiON) did not have enough ports to connect to multiple departmental servers. As a result, the industry moved towards a switch model in which the servers connected to the switches, and the switches in turn connected to the storage ports. This type of consolidation of the storage network is best typified by offerings such as Connectrix.
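To make the port arithmetic concrete, here is a minimal sketch of why fanning servers into switches relieves pressure on an array's limited front-end ports. The server and port counts are hypothetical, purely for illustration:

```python
# Minimal sketch (illustrative only): switch-based fan-in vs. direct attach
# when a storage array has a limited number of front-end ports.

DEPARTMENT_SERVERS = 24       # hypothetical number of departmental servers
HBAS_PER_SERVER = 2           # dual HBAs for redundancy
ARRAY_FRONT_END_PORTS = 8     # hypothetical array port count

# Direct attach: every server HBA needs its own array port.
direct_attach_ports_needed = DEPARTMENT_SERVERS * HBAS_PER_SERVER

# Switched SAN: server HBAs land on switch ports, and many servers share
# the array's few front-end ports through the switch.
fan_in_ratio = direct_attach_ports_needed / ARRAY_FRONT_END_PORTS

print(f"Ports needed for direct attach: {direct_attach_ports_needed}")
print(f"Array ports available:          {ARRAY_FRONT_END_PORTS}")
print(f"Fan-in ratio through switches:  {fan_in_ratio:.0f}:1")
```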

Connectrix helped propel the trend to storage consolidation. By the late '90s open systems customers were challenged by the proliferation of Windows servers, each with their own storage. Some customers complained of needing a grocery store shopping cart every Monday morning to collect the disks that failed over the weekend! Symmetrix, VNX, and Connectrix capitalized on the availability and administration problems of a decentralized storage architecture. A level of service long established in the mainframe storage world won support among open systems customers.

Server-based Multi-pathing

Existing applications still desired high service levels, and placing a switch between the server and the storage introduced new failure permutations that had to be overcome. In addition, multiple I/O paths now existed that could be leveraged to provide even higher performance via load-balancing techniques.

In 1998 a company named Conley was working on an innovative approach to solve these problems. They were acquired by EMC and the product was eventually branded as PowerPath. PowerPath became a staple of the industry; millions of copies were sold. It was an example of application workloads driving storage innovation up into the server level. 
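As an illustration only (this is not PowerPath's actual algorithm), the sketch below models the two jobs host-based multi-pathing takes on: load-balancing I/O across healthy paths and failing over when a path dies. The path names are made up:

```python
import itertools

class MultipathDevice:
    """Toy model of host-based multi-pathing: round-robin load balancing
    across healthy paths, with failover when a path is marked dead."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.failed = set()
        self._cycle = itertools.cycle(self.paths)

    def mark_failed(self, path):
        self.failed.add(path)

    def mark_restored(self, path):
        self.failed.discard(path)

    def next_path(self):
        # Round-robin over the path list, skipping any failed paths.
        for _ in range(len(self.paths)):
            path = next(self._cycle)
            if path not in self.failed:
                return path
        raise IOError("no healthy paths remain")

# Usage: four hypothetical paths from two HBAs to two storage ports.
dev = MultipathDevice(["hba0->port0", "hba0->port1", "hba1->port0", "hba1->port1"])
dev.mark_failed("hba0->port1")        # simulate a link failure
for _ in range(4):
    print("issue I/O down", dev.next_path())
```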

As a result of the introduction of switches and PowerPath routing software, application workloads continued to transition further and further away from the storage devices, as evidenced by the diagram below.

[Diagram: PowerPath multi-pathing between servers and SAN-attached storage]

Multi-tenancy, Fairness, and Trust

The introduction of SAN technology created a scenario whereby multiple tenants (departments) were "occupying" a portion of the underlying storage device. This introduced the problems of fairness (predictable allocation of storage resources to each tenant) and trust (fencing off tenants and prohibiting one tenant from accessing the resources of another).

The problem of trust was solved via innovations such as zoning (at the switch level) and LUN masking (at the storage level). Over time masking and zoning were implemented in a variety of locations in the SAN stack. The issue of trust within the IT infrastructure would grow significantly more complex moving forward (I will cover this topic in future posts).
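A rough sketch of the two enforcement points, using hypothetical WWPNs and LUN numbers, might look like this: zoning decides which ports can see each other, and LUN masking decides which LUNs an initiator may touch. Both checks must pass before I/O flows:

```python
# Two enforcement points, modeled separately:
#   - zoning at the switch: which initiator WWPNs may see which target WWPNs
#   - LUN masking at the array: which initiator WWPNs may access which LUNs

ZONES = {
    "engineering_zone": {"initiators": {"wwpn:eng-hba0"},   "targets": {"wwpn:array-spA"}},
    "sales_zone":       {"initiators": {"wwpn:sales-hba0"}, "targets": {"wwpn:array-spB"}},
}

LUN_MASKS = {
    "wwpn:eng-hba0":   {0, 1},   # engineering may access LUNs 0 and 1
    "wwpn:sales-hba0": {2},      # sales may access LUN 2 only
}

def io_allowed(initiator, target, lun):
    """Permit I/O only if some zone contains both ports AND the array's
    masking table grants the initiator access to that LUN."""
    zoned = any(initiator in z["initiators"] and target in z["targets"]
                for z in ZONES.values())
    masked_in = lun in LUN_MASKS.get(initiator, set())
    return zoned and masked_in

print(io_allowed("wwpn:eng-hba0", "wwpn:array-spA", 0))    # True
print(io_allowed("wwpn:sales-hba0", "wwpn:array-spA", 0))  # False: not zoned, not masked in
```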

Management and Orchestration (M&O)

Perhaps the key area that began to take center stage from an innovation standpoint was the management of the SAN at each layer (the server, PowerPath, switch, and storage perspectives). This became increasingly important as customers began to install hundreds of applications upon hundreds of servers and point them at dozens of switches and storage devices.

One of the significant innovations at the time was the capability of "pushing" server information (server name, operating system, application information, and port information) down into the storage system. This marked one of the first times that applications began "registering" themselves with storage devices, another strong indicator that application workloads were driving storage innovation.

For example, in the diagram below, Servers A-D would register themselves with the underlying storage system. This enabled a management interface (for example the Unisphere tool) to associate a departmental workload (running on a given server) with a portion of the disks in the storage system.

[Diagram: Servers A-D registered with a shared SAN-attached storage system]
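As a toy illustration (the field names and the host agent behavior are assumptions, not a description of any specific product), the sketch below joins a set of host registration records with a masking table, so a management tool can report which departmental workload maps to which disks:

```python
from dataclasses import dataclass

@dataclass
class HostRegistration:
    """Toy record of the information a host agent might 'push' to the array:
    server name, operating system, application, and initiator port(s)."""
    server_name: str
    os: str
    application: str
    initiator_ports: list

# Hypothetical registrations for Servers A-D.
registrations = [
    HostRegistration("server-a", "Windows", "Test/Dev DB",           ["wwpn:a0"]),
    HostRegistration("server-b", "Linux",   "Sales Leads DB",        ["wwpn:b0"]),
    HostRegistration("server-c", "Windows", "Marketing File Share",  ["wwpn:c0"]),
    HostRegistration("server-d", "Linux",   "Engineering Builds",    ["wwpn:d0"]),
]

# The array's masking view (initiator port -> LUNs), as in the earlier sketch.
lun_masks = {"wwpn:a0": {0}, "wwpn:b0": {1}, "wwpn:c0": {2, 3}, "wwpn:d0": {4}}

# A management interface can now join the two views: workload -> disks.
for reg in registrations:
    luns = sorted(set().union(*(lun_masks.get(p, set()) for p in reg.initiator_ports)))
    print(f"{reg.application:<22} on {reg.server_name:<9} -> LUNs {luns}")
```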

File as a Workload

The list above is long, and it still doesn't do justice to the full set of innovations required to make the SAN vision a reality (for example, I haven't mentioned the advent of Fibre Channel technology).

More importantly, I haven't mentioned another critical requirement that departmental workloads placed on the storage infrastructure: a desire for serving file-based and block-based requests together from one infrastructure. At that point in time, block storage systems and file storage systems often existed side-by-side as separate systems.

I'll spend some time discussing this workload in a future post.

Steve

http://stevetodd.typepad.com

Twitter: @SteveTodd

EMC Fellow
