NAS uses three modes of processing. Understanding these modes helps identify how NAS workloads grow from appliance-level devices into enterprise solutions. If we continue to view NAS as the I/O manager for file-level processing, the modes can be expressed as described in the following sections.
Simple File Processing (Sfp) This mode of processing simply extends the client's file system and storage capacity into the NAS device. As depicted in Figure 10-5, this provides a simple transactional level of communication through a single type of protocol. Generally supporting Windows-based systems that require additional storage capacity for shared files or multiuser access to workgroup files, Sfp is handled by appliance-level NAS devices with capacities that range up to 300GB at the high end and minimal RAID support. Given the limited transactional level (for example, access to shared files versus numbers of application transaction users), RAM at or above (but not below) 512MB will generally support and balance the I/O operations for these configurations. Such configurations cross over into entry-level and mid-range NAS devices as two things evolve: first, the number of users and, consequently, the size of user data; and second, the number of protocols supported. This can manifest itself in heterogeneous support of client computers (for example, the addition of UNIX and Mac clients).
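The appliance-level limits above (roughly 300GB of capacity and a 512MB RAM floor) can be turned into a rough sizing check. This is a minimal sketch, not a vendor formula; the function name and the per-user data parameter are illustrative assumptions.

```python
# Rough sizing check for an Sfp-class NAS appliance, using the capacity
# figures cited in the text (thresholds are illustrative, not vendor specs).

SFP_MAX_CAPACITY_GB = 300   # high end for appliance-level NAS
SFP_MIN_RAM_MB = 512        # RAM floor to balance I/O operations

def fits_sfp(users: int, gb_per_user: float, ram_mb: int) -> bool:
    """Return True if the workload stays within Sfp appliance limits."""
    return users * gb_per_user <= SFP_MAX_CAPACITY_GB and ram_mb >= SFP_MIN_RAM_MB

# A 50-user workgroup at 4GB each on a 512MB appliance still fits:
print(fits_sfp(50, 4.0, 512))   # True
print(fits_sfp(50, 8.0, 512))   # False: 400GB exceeds the appliance range
```

When either check fails, the configuration has crossed into entry-level or mid-range NAS territory, as the text describes.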
Quality File Processing (Qfp) As user requirements move into supporting multiple protocols and increasing data capacities, the need for quality processing grows. Quality processing, as shown in Figure 10-6, is defined as the capability to provide a level of data reliability that withstands component hardware failures, as well as to support transactional-level processing and heterogeneous sets of users. Qfp is supported by entry-level and mid-range NAS devices with features such as fault-tolerant hardware (power supply and fan redundancy, hot-swappable drives), RAID functionality, and multipath disk array extensibility. Figure 10-6 depicts Qfp supporting a larger user base along with transactional-like support through protocols such as HTTP. Storage capacities range from 300GB to 1TB, with RAM moving into the gigabyte range. These configurations cross into the area of enterprise-level NAS solutions as the work begins to scale into multitier client/server applications with a heterogeneous user base and multiprotocol support.
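The RAID functionality mentioned above trades some raw capacity for the ability to survive a drive failure. As a hedged illustration, the sketch below assumes RAID 5 (the text says only "RAID functionality"; the drive counts and sizes are made-up examples) to show how usable capacity lands within the Qfp range.

```python
# Illustrative RAID 5 capacity math for a Qfp-class array.
# RAID 5 is an assumed example; drive sizes here are not vendor figures.

def raid5_usable_gb(drives: int, drive_gb: float) -> float:
    """RAID 5 spends one drive's worth of space on parity, so usable
    capacity is (N - 1) drives; the array survives one drive failure."""
    if drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (drives - 1) * drive_gb

# Eight 72GB drives yield 504GB usable, inside the 300GB-1TB Qfp range:
print(raid5_usable_gb(8, 72.0))  # 504.0
```

The point of the calculation is that quoted raw capacity and fault-tolerant usable capacity are not the same number, which matters when matching a device to the Qfp capacity range.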
Complex File Processing (Cfp) Figure 10-7 illustrates the complexity of requirements that NAS can grow into. The multitier processing of application servers and the capability to service high-volume Internet traffic form the basis for Cfp. Within these configurations, the NAS devices must support quality processing and handle transparent transactional redirection. User requests coming into an application server that provides web services are redirected to the NAS devices where the files are physically stored. A more complex processing configuration is the integration of storage area networks; here, user transactions must be redirected to NAS file systems and further into the level of database block I/O contained in the SAN storage network. Cfp will stress the high end of NAS devices and push the limits of what vendors offer. Clearly, this requires fault-tolerant hardware, given its participation within a highly available transactional system. The terabyte levels of storage capacity and gigabyte levels of RAM notwithstanding, uniform and reliable performance becomes paramount in Cfp. Consequently, additional importance is placed on recovery options (for example, tape connectivity and backup/recovery software), management tools, and overall enterprise data center integration.
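The transactional redirection described above, where a web-facing application server forwards file access to the NAS device that physically stores the file, can be sketched as a simple path-mapping step. All names here (mount table, export paths) are hypothetical; real redirection happens inside the application server and file-sharing protocol, not in user code like this.

```python
# Hypothetical sketch of Cfp-style redirection: a client-facing URL path
# is mapped to the NAS export where the file physically lives.
# The mount table and export names are invented for illustration.

NAS_MOUNTS = {
    "/images": "nas01:/export/images",
    "/docs":   "nas02:/export/docs",
}

def redirect(url_path: str) -> str:
    """Map a client-facing path to the NAS export that stores the file."""
    for prefix, export in NAS_MOUNTS.items():
        if url_path.startswith(prefix):
            return export + url_path[len(prefix):]
    raise LookupError(f"no NAS export for {url_path}")

print(redirect("/images/logo.jpg"))  # nas01:/export/images/logo.jpg
```

In the SAN-integrated case the text describes, a further layer would translate the NAS file access into block I/O against the SAN, which is exactly what makes Cfp stress the high end of NAS devices.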
The NAS workload is characterized as file-oriented processing for the various applications end users require. Typical among these are Windows client files created with personal productivity applications, such as office suites, shared forms, and templates. Other typical applications manage unstructured data such as image, video, and audio files. Larger-scale enterprise applications are centered on relational database tables, while enterprise Internet applications are grounded in newer web-based file formats. The application notwithstanding, the workload NAS has to contend with centers on the characteristics of the files it stores, or to put it another way, the I/O content it has to satisfy.
Within the context of NAS, the I/O content is determined by the type, format, and characteristics of the file. For example, the content of a word-processing document is very different from the content of a JPEG image. The reason is both self-evident and simple: the size and structure of the data. If we look at the access of these contrasting file types, we see the effect each has on the NAS hardware components.
If we take the next step and observe the I/O operations necessary to access these diverse file types from the NAS storage system, we find that the data content transferred within each I/O operation can differ greatly. Within the operating system, the I/O operation depends on many things, such as the number of paths to the controller, the number of LUNs, and disk density. Consequently, the number of I/O operations, commonly referred to as IOPS, is a raw count that says nothing by itself about how efficiently data moves with each operation.
Note: IOPS is defined as the number of I/O operations performed per second by a computer system. It's used to measure system performance and throughput.
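The definition in the note reduces to a simple division, sketched below with made-up numbers purely to fix the units.

```python
# Minimal illustration of the IOPS definition: I/O operations per second.
# The operation count and interval are arbitrary example values.

def iops(operations: int, elapsed_seconds: float) -> float:
    """Number of I/O operations completed per second."""
    return operations / elapsed_seconds

print(iops(12_000, 60.0))  # 200.0 I/Os per second
```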
It's possible, as we illustrated in Figure 10-5, that the transfer of a large text document can theoretically take more I/O operations, resulting in an inefficient transfer of data with each I/O. Conversely, the JPEG could be moved with fewer I/O operations, each one transferring more bytes of data, which is very efficient. The difference lies not only in the density of the disk and the cluster size set by the operating system, but also in the capacities of the internal components of the NAS hardware elements themselves. The point is that the workload determines the capacities for individual, as well as groups of, NAS hardware specifications. Ensure that workloads and their relative I/O content are given due consideration prior to setting the record for the fastest NAS install.
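The cluster-size effect described above can be made concrete with a little arithmetic. In this sketch the file and cluster sizes are illustrative assumptions; the only real rule applied is that each I/O moves at most one cluster, so the count rounds up.

```python
import math

# Sketch of how cluster size drives the I/O count for a given file.
# File and cluster sizes below are illustrative, not vendor figures.

def io_ops_needed(file_bytes: int, cluster_bytes: int) -> int:
    """Each I/O transfers at most one cluster, so round up."""
    return math.ceil(file_bytes / cluster_bytes)

file_kb = 900  # same file size, accessed under two cluster sizes
print(io_ops_needed(file_kb * 1024, 4 * 1024))   # 225 I/Os at a 4KB cluster
print(io_ops_needed(file_kb * 1024, 64 * 1024))  # 15 I/Os at a 64KB cluster
```

The same 900KB of data costs 225 operations in one configuration and 15 in another, which is why an IOPS figure means little without knowing how many bytes each operation carries.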
Given the black box orientation of the NAS device, look for efficiencies in the I/O manager that provide flexibility across these contrasting workloads. Operational evaluations of NAS devices should also cover the effects of a black box architecture on other data hierarchy locations, including system-level cache, RAM, and processor architecture.
Additionally, careful evaluation is necessary when maintaining the NAS devices for data recovery, disaster recovery, and data archiving. Operational discussions should also cover the challenges of participating in larger storage infrastructures, where NAS must interface with various storage devices such as tape, optical, and other server disk storage systems.
NAS hardware consists of commodity components bundled into a Plug and Play storage solution. Even so, evaluating NAS devices requires some diligence in understanding the processing model used and the need for particular NAS components, such as processor types, device extensibility, and capacities. More important still is the type and extensibility of the storage system used in the bundled solution. Given the rapid growth of end-user storage, configurations should ultimately be scalable to both the enterprise and the workload.