

Traditional Client/Server Computing with Direct Attached Storage

This section takes a look at a legacy storage topology that worked for many years but is incapable of meeting today's high-availability system requirements. Before there was network storage, there was just plain old storage. Storage products were categorized by the computer platform they were designed for, such as IBM mainframe systems, Digital VAX systems, AS/400s, UNIX workstations and servers, PC servers and desktops, and Apple Macintosh computers. Historically, storage was usually sold as an integrated part of the system.

Open-systems machines were connected then as they are today, mostly over Ethernet and TCP/IP networks. File sharing, the first form of open-systems network storage, allowed workstation and desktop users to access data on file server systems. Client systems could be almost anywhere on a LAN and could access data from the file server. This way, storage on a UNIX server from one vendor could be used by users running many different kinds of operating systems. In other words, the cost of storage could be shared among many different platforms. A simple client/server file-sharing network is shown in Figure 1-1.

Figure 1-1. Basic Client/Server File-Sharing Network


Introducing DAS

The acronym DAS stands for direct attached storage and reflects the legacy storage connection topology used in client/server file-sharing networks. The storage connectivity technologies in this environment have typically used either Small Computer Systems Interface (SCSI) or Advanced Technology Attachment (ATA), although there have been a few others over the years. With the advent of storage networking technologies, a term was needed to differentiate preexisting storage technologies from newer storage area network (SAN) and network attached storage (NAS) technologies; hence the term DAS was invented.

Connectivity Shortcomings of DAS

DAS uses a bus topology in which systems and storage are connected by a bus that commonly takes the form of a ribbon cable. Every entity on a DAS bus has a unique address from a limited number of possible addresses. Devices are connected to the DAS bus in sequential fashion, sometimes called a daisy chain, as illustrated in Figure 1-2.
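The constraint described above can be captured in a toy model. This sketch assumes the 8-address space of narrow SCSI; every entity on the bus, including the host adapter itself, consumes one of the few available IDs:

```python
class DasBus:
    """Toy model of a DAS bus: a small, fixed address space shared by
    the host adapter and every device daisy-chained onto the bus."""

    def __init__(self, address_space=8):  # narrow SCSI offers IDs 0-7
        self.address_space = address_space
        self.devices = {}  # address -> entity name

    def attach(self, name):
        free = [a for a in range(self.address_space) if a not in self.devices]
        if not free:
            raise RuntimeError("bus full: address space exhausted")
        self.devices[free[0]] = name
        return free[0]

bus = DasBus()
bus.attach("host-adapter")    # the controller consumes an address too
for i in range(7):
    bus.attach(f"disk-{i}")   # seven devices fill the remaining IDs
# Any further attach() raises RuntimeError: the bus cannot grow.
```

This is only a sketch of the address-exhaustion behavior, not of the electrical protocol; a real bus also requires termination and assigns IDs by jumper or switch rather than first-free.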

Figure 1-2. DAS Devices Connected on a Daisy Chain Bus


Data Availability Depends on Server System Health

Notice in Figure 1-2 that there is a single host system storage controller for all the devices on the bus. This is certainly a cost-effective arrangement, but it is hardly optimal for high availability. If the controller were to fail, data on any of the devices on the bus would not be accessible. More important, if the system were to fail for any reason, data on any of its buses would not be accessible until the system was recovered and made operational again. With a goal of high availability, single points of failure such as these are simply not allowable.

Figure 1-3 shows a client/server network with several clients accessing three different application servers, each with its own storage. Server 3 is in the process of being upgraded and has been shut down to complete the upgrade process. While the upgrade is being done, the application's data is temporarily unavailable.

Figure 1-3. Data Accessed Through Server 3 Is Unavailable While the Server Is Being Upgraded


Less dramatic than a system crash, but almost as frustrating to users, is the scenario in which storage workloads increase until they exceed the server's capabilities, creating an I/O bottleneck that can increase application response time. Referring to Figure 1-3 again, instead of Server 3 being down for system maintenance, it could still be running but not keeping up with its I/O workload, creating performance problems for the clients and applications that are using it.

Static Configuration of DAS Storage

In addition to the single point of failure problems and the bottleneck problems of DAS, the electric-connection nature of parallel DAS buses makes it almost impossible to change the configuration of the bus while the system is running. I've sometimes referred to this condition as "electric love" because the controllers and devices on the bus cannot stand to be separated, even temporarily, while the system is operational.

Without the ability to dynamically change the configuration of the bus by adding, for instance, more storage devices, it is impossible to make adjustments on the fly that could relieve I/O bottlenecks or create additional storage capacity.

Distance Limitations of DAS

No discussion of DAS storage shortcomings would be complete without mentioning the distance limitations of DAS storage buses. DAS makes many different bus and cable lengths available, but they are all relatively short. The longest cable length for DAS storage is 30 meters, which used to be supported with differential SCSI. Today, low-voltage differential SCSI cables can be 12 meters long.

There are two fundamental problems with short cables. The first is disaster tolerance. A fire, flood, or any other site disaster that physically impacts a storage subsystem will also wipe out a redundant subsystem that is 12 meters away. There is no good way to achieve the required distances for data redundancy and business continuity using DAS.

The second problem with DAS cable lengths becomes painfully clear when positioning servers and storage in a crowded data center or server room. DAS's limited-distance connections force servers and storage to be positioned adjacently. As systems and storage are upgraded and new systems and storage are installed, the challenge of fitting all servers and storage close enough to each other can become an expensive and time-consuming exercise. Most IT professionals agree that spending time plotting the moves of servers and storage to accommodate cabling is a waste of time they would gladly avoid.

Business Issues with DAS

DAS worked well enough in the pre-Internet days, but today's high-availability environments suffer when using DAS-based storage. Storage has become an increasingly dynamic part of the information infrastructure, but the requirements for using and managing it have exceeded the capabilities of static DAS products.

High Cost of Managing DAS Storage

DAS is typically the least expensive storage to buy but the most expensive to own and manage. Considering that storage management costs exceed the cost of storage several times over, it is clear that DAS is on the wrong side of the value fulcrum.

DAS Capacity Fire Drill

Storage capacity limitations are familiar to almost all network administrators. A "disk full" message is not necessarily the worst thing that can happen in an administrator's day, but it's certainly not good news either. There is always plenty to do besides bailing out a bloated server system. The fact is, disk-full conditions occur with some regularity, and the larger a business is, the more frequently they occur.

When a server runs out of disk capacity, the first order of business is creating fresh capacity by deleting or removing data, a practice that certainly runs the risk of losing data. After that, a plan needs to be created for solving the problem, which includes analyzing the storage configuration and selecting replacement or additional products. If products need to be acquired, there might be paperwork to fill out, approvals to arrange, and budgets to exceed, which can lead to reworking the plan.

Finally, there is the storage upgrade process itself, which involves downing the server, installing new storage, restarting the server, copying and distributing data among the storage devices, and verifying that the whole thing worked as planned.

The disk-full condition in a DAS environment is just the first domino in a chain of risks, delays, and costs that should never have cropped up in the first place.
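Much of this fire drill can be headed off by watching capacity before a volume actually fills. A minimal proactive check might look like the following sketch; the 90 percent threshold is an illustrative assumption, not a recommendation:

```python
import shutil

WARN_FRACTION = 0.90  # illustrative alert threshold, not a best practice

def check_volume(path):
    """Return (used_fraction, warn) for the filesystem holding `path`."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    return used_fraction, used_fraction >= WARN_FRACTION

used, warn = check_volume("/")
print(f"volume is {used:.0%} full; warning={warn}")
```

Running a check like this on a schedule turns the disk-full surprise into a planned procurement, although on DAS the eventual upgrade still requires the downtime described above.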


One of the primary issues with managing DAS storage is the lack of centralized management. Because management can be performed only through the server that the DAS system connects to, the management of DAS storage is determined by the server's operating system, if it exists at all. With management methods that are inconsistent from server to server, the end result is that DAS storage problems can be harder to predict than one might expect, which means that unpleasant disk-full surprises are more likely to pop up.

Captive Resources and Storage Utilization

A common frustration with DAS storage is the inability to share storage resources between servers. Say you have two servers, Server A and Server B, both using DAS storage. The storage capacity on Server A cannot be used by Server B and vice versa. This makes it difficult and expensive to purchase storage collectively for all the servers together, because each system needs to have its own excess storage capacity. The utilization of storage resources cannot be balanced or spread among multiple servers. In other words, the cost of storage cannot be leveraged across all servers, but is isolated to each server and its applications. Unfortunately, it is nearly impossible to predict the amount of storage an application is going to need before it is installed. Some applications are never used as expected, while others that start out as simple utilities can grow into full-fledged workhorses.

Figure 1-4 shows two servers, each running two applications on separate I/O buses to reduce I/O bottlenecks. Of the two applications on Server A, one is growing faster than planned, while the other is growing slower than planned. Of the two applications on Server B, one is growing faster than planned, and the other is growing as expected.

Figure 1-4. Inconsistent Utilization of DAS Storage


The situation in Figure 1-4 poses some difficult challenges. It might be possible to allocate some of the storage from the slower-growing applications to the fast-growing applications. This type of solution could possibly work, but it could also trigger other problems, such as I/O bottlenecks. Regardless, the solution is only a Band-Aid, as there is still an excellent chance that some data growth will continue to be faster than expected, and the applications will be more likely to run out of storage space.

Even though there are two separate I/O buses on each server, it is not possible to add storage while the system is running. If the I/O bus needs to be changed, the entire system must be shut down. Therefore, a capacity-full situation with either application creates a data availability problem for both applications running on the server.

Performance and capacity of slow-growing applications seldom create operational problems, but there might be other financial issues to deal with. As more companies look for ways to run more efficiently, storage resources that are less than 50% utilized might be viewed as overly expensive. Requests for more budget resources to address storage problems when current storage resources are underutilized are not always warmly received by financial managers.

In the final analysis, DAS products are simply not capacity-efficient. Companies wind up buying far more storage than they need.
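The capacity inefficiency can be made concrete with some simple arithmetic. All the figures below are hypothetical, but the pattern is general: captive DAS forces every server to carry its own excess capacity, while a shared pool needs only one aggregate reserve, because growth surprises partially cancel out across servers:

```python
# Hypothetical storage demands (GB) for four servers.
demands = [120, 45, 300, 80]
headroom = 0.5   # each server overprovisioned 50% against unpredictable growth

# Captive DAS: every server carries its own excess capacity.
captive_total = sum(d * (1 + headroom) for d in demands)

# Shared pool: one aggregate reserve can be smaller because the
# fast- and slow-growing applications offset each other
# (the 20% pooled headroom is an illustrative figure).
pooled_total = round(sum(demands) * 1.2, 1)

print(captive_total, pooled_total)  # 817.5 vs 654.0 GB of raw capacity
```

Under these invented numbers, captive DAS requires about 25 percent more raw capacity to serve the same demand, which is the financial complaint described above.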

Limited Scalability with DAS

Another serious problem with DAS storage is the lack of scalability, which comes from having a small address space. While most networking technologies can accommodate thousands or even millions of entities, DAS storage is limited to a few hundred.

The SCSI bus has been implemented with a variety of address spaces. Today, SCSI adapters for systems typically support one or two buses, each with a total of 16 target addresses. In turn, each of these supports up to 15 subaddresses, which expands addressability considerably, but the total is still small by networking standards.
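Using those figures, the arithmetic is easy to work out. Reading the 16 target addresses as a per-bus limit and assuming a two-bus adapter (both assumptions for illustration):

```python
buses = 2              # assume a two-bus SCSI adapter
targets_per_bus = 16   # target addresses available on each bus
luns_per_target = 15   # subaddresses (LUNs) behind each target

# One target ID per bus is consumed by the host adapter itself.
devices = buses * (targets_per_bus - 1) * luns_per_target
print(devices)  # 450: a few hundred, versus millions for IP networks
```

Even this best case of 450 addressable devices per adapter is orders of magnitude below what routed networking technologies can address.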

Whether or not the address space allows enough storage devices to be connected, there are still other matters that must be considered, such as the way fairness algorithms are implemented in SCSI. Without plunging in too deeply at this point, all entities on the bus arbitrate to determine which entity will gain control of the bus and transfer data. The bus address determines the priority that is used to resolve concurrent arbitration attempts from multiple bus entities. While this is sometimes referred to as a fairness algorithm, there is nothing fair about it, as the entities with the lowest-priority addresses get serviced the least.

In fact, the target addresses with the lowest priority could potentially have 15 devices with subaddresses needing to transfer data over the bus. If these devices are unable to gain control of the bus, a situation called device starving can occur, which has the unpleasant side effect of ruining the performance of applications needing services from those devices.
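The arbitration behavior can be illustrated with a short simulation. This is a toy model with an invented workload, not a protocol implementation, but it shows why the lowest-priority IDs are starved whenever higher-priority IDs stay busy:

```python
import random

def arbitrate(contenders):
    # In parallel SCSI, the contender with the highest-priority ID always
    # wins arbitration; for narrow SCSI the order is simply ID 7 down to ID 0.
    return max(contenders)

random.seed(42)  # fixed seed so the run is repeatable
wins = {scsi_id: 0 for scsi_id in range(8)}
for _ in range(10_000):
    # Invented workload: each ID independently wants the bus half the time.
    contenders = [scsi_id for scsi_id in range(8) if random.random() < 0.5]
    if contenders:
        wins[arbitrate(contenders)] += 1

print(wins)  # ID 7 wins every cycle it contends; ID 0 wins almost never
```

In this run, ID 7 wins roughly half of all cycles, while ID 0 wins only on the rare cycles when it contends alone, which is the device-starving effect described above.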

Exasperation with DAS Backup

There is no question that on a day-to-day basis, backup and recovery operations are among the most problematic in many IT organizations. Backup processing in a DAS environment can be almost impossible. IT workers who say they do not have problems with DAS-based backups are probably either lying or don't know what they are talking about. Servers with DAS storage can be backed up either over a LAN or to locally attached DAS tape devices. Backing up over the LAN enables backup to be centrally managed but creates significant network congestion. The cruelest part is that backup over the LAN often cannot complete in the allotted time for the most important servers, which means that full recoveries are jeopardized and made even more stressful than they already are. The alternative is to use a decentralized backup approach that backs up servers to their own DAS tape equipment. This provides optimal performance, but the complexity of distributing hundreds of tapes on a daily basis is error-prone as well as time-consuming.
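The backup-window problem is simple arithmetic. The data volume, throughput, and window below are hypothetical figures chosen for illustration, but they show why LAN backups of the largest servers routinely miss their window:

```python
# All figures are illustrative assumptions, not measurements.
data_gb = 500        # data on one large server
lan_mb_per_s = 10    # usable throughput over a shared LAN, in MB/s
window_hours = 8     # nightly backup window

# Convert GB to MB, divide by throughput for seconds, then to hours.
hours_needed = (data_gb * 1024) / lan_mb_per_s / 3600
print(f"{hours_needed:.1f} hours needed for an {window_hours}-hour window")
```

At these rates the backup needs roughly 14 hours, far exceeding the 8-hour window, and the shortfall grows as data grows, since LAN throughput is shared with production traffic.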

New SAN-based backup architectures that supersede DAS backup capabilities are badly needed for this all-important systems management application.




Storage Networking Fundamentals: An Introduction to Storage Devices, Subsystems, Applications, Management, and File Systems (Vol 1)
ISBN: 1587051621
Year: 2006
Pages: 184
Authors: Marc Farley
