Capacity management is at the heart of current thinking around storage management. This thinking has a historical precedent. In 1979, surveys of IBM's GUIDE user group members revealed several startling facts. For one, storage administrators could only manage about 11 GB of storage effectively. Moreover, direct access storage devices (DASD) were being utilized only to about 35 percent of their capacity, while customers were reporting storage growth at a rate of between 30 and 40 percent per year. [2] Those at IBM at the time were flabbergasted by these statistics. The issue for IBM was simple: if something wasn't done quickly to improve storage management, the company would be hard-pressed to sell more DASD to its customers. Practical issues like available floor space and budgets for personnel and hardware would impose limits on customers' growth and, in turn, on the vendor's ability to sell more gear. From this sobering analysis, Systems Managed Storage (SMS) was born. IBM created a group to work on the problem of storage management. A fundamental assumption was that the logical requirements of data storage needed to be separated from the physical aspects of the disk platform itself. This led to a management approach with two distinct and separate constructs: "storage class," which enumerated the logical requirements of the application data itself, and "storage group," which defined the physical attributes of the back-end storage platforms. The idea was to have IT managers define the storage requirements for the data produced by their various applications, then let an intelligent system of policies and rules allocate storage of the right flavor and the right capacity automatically, simply by stating that the new data belonged to storage class X and storage group Y.
[3] (This sounds strikingly familiar to the way that SANs were supposed to work according to the early pioneers at Compaq who authored the Enterprise Network Storage Architecture (ENSA) white paper in 1997.) SMS development, according to folks who were involved, was an enormous undertaking that started with about 11 people and grew to as many as 1,200 (not including the support from other groups within IBM responsible for S/390 OS development, hardware development, database development, etc.). Technical hurdles abounded, but they were minor compared to the effort involved in weaning IBM customers away from old practices and into the "new" way to manage storage being advanced by SMS. IBM spent an enormous effort studying customers and helping them implement SMS. One of the co-patent-holders on SMS said that he had personally performed over 350 storage studies and visited 650 data centers among Global 2000 customers in efforts to understand storage management requirements and to evangelize the SMS approach. [4] SMS actually had three logical policies (Storage Class, Management Class, and Data Class) and one physical policy (Storage Group). The purpose of each is as follows:

- Data Class: defines the allocation attributes of a data set (record format, space, and so on) at creation time.
- Storage Class: defines the performance and availability requirements of the data, independent of any particular device.
- Management Class: defines lifecycle policies for the data once it exists: migration, backup, and retention.
- Storage Group: defines the pool of physical volumes from which space is actually allocated.
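The core SMS idea of separating logical policy from physical placement can be sketched in a few lines of code. The following is a hypothetical illustration only, not actual DFSMS ACS-routine syntax: a "storage class" captures what the data requires, a "storage group" captures what a pool of devices provides, and a simple rule matches one to the other at allocation time. All names and attributes here are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class StorageClass:
    """Logical requirements of the data (hypothetical attributes)."""
    name: str
    max_response_ms: int      # performance requirement
    guaranteed_space: bool

@dataclass
class StorageGroup:
    """Physical attributes of a back-end pool (hypothetical attributes)."""
    name: str
    device_response_ms: int
    free_gb: int

def select_group(sc: StorageClass, size_gb: int,
                 groups: list[StorageGroup]) -> StorageGroup:
    """Pick the first physical pool that satisfies the logical policy."""
    for g in groups:
        if g.device_response_ms <= sc.max_response_ms and g.free_gb >= size_gb:
            return g
    raise RuntimeError(f"no storage group satisfies class {sc.name}")

fast = StorageClass("FASTDB", max_response_ms=5, guaranteed_space=True)
pools = [StorageGroup("ARCHIVE", 40, 900), StorageGroup("PRIMEDB", 4, 120)]
print(select_group(fast, 50, pools).name)  # prints "PRIMEDB": the slow ARCHIVE pool is skipped
```

The point of the sketch is the indirection: the application states only its class, and the policy engine, not the administrator, decides where the bits land.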
The effectiveness of SMS, released as DFSMS/MVS in 1988, was demonstrated in the storage capacity that it allowed an individual administrator to manage. SMS took storage management in the S/390 world from 11 GB per person to about 15 terabytes per administrator. At the same time, in shops using SMS to its full potential, allocation efficiency climbed from 35 percent in 1979 to where it is today: about 60 percent. One takeaway from this historical view is that capacity allocation and utilization efficiency cut to the heart of storage costs. Only by managing storage effectively, at the level of the data itself, can we address the underlying cost multiplier in storage cost of ownership: labor. IBM's SMS goes well beyond the storage management concepts and approaches popular in open systems environments today. Collectively speaking, Storage Resource Management (SRM) tools, which have been the focal point of a $10, $14, or $21 billion storage management software industry (depending on the analyst you read), largely ignore the need for management based on access characteristics and platform attributes. Arguably, this oversight is partly the fault of the hype around SANs, which were originally billed as utility storage infrastructure that would automatically serve up the right kind of storage to whatever application needed it. One might posit that the vendors have been drinking their own Kool-Aid, adopting the view that the requisite intelligence for managing capacity allocation and utilization in a SAN would be provided by some "higher authority" vested in the SAN fabric itself. As a result, the philosophy of many, if not most, SRM products seems to be that the job of SRM is to monitor the operation of devices in the SAN to ensure that they are not overheating or exhibiting the onset of other operational errors or faults. Some mystical feature of SANs will do the rest.
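The scale of the per-administrator improvement cited above is worth making concrete. A quick back-of-the-envelope calculation (assuming decimal units, 1 TB = 1,000 GB, which the source does not specify) gives the multiple:

```python
# Rough check of the per-administrator capacity figures cited above.
before_gb = 11          # ~11 GB per administrator in 1979
after_gb = 15 * 1000    # ~15 TB per administrator under SMS (decimal units assumed)

multiple = after_gb / before_gb
print(f"{multiple:.0f}x more capacity per administrator")  # prints "1364x more capacity per administrator"
```

A three-orders-of-magnitude jump in capacity per person is exactly the labor leverage the paragraph argues is the real cost story.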