Types of Disk Subsystems


There are three main types of disk subsystems you need to understand when putting together a SQL Server solution: Direct-Attached Storage (DAS), Network-Attached Storage, and storage area networks (SANs).

Direct-Attached Storage

Direct-Attached Storage (DAS) is the traditional SCSI bus architecture. There are quite a few variations of SCSI, and describing them all is beyond the scope of this chapter. With DAS, you have a disk or set of disks dedicated to one host or, in the case of a failover cluster, potentially available to all of the nodes. SCSI has some inherent cabling issues, such as distance limitations, and adding a new disk with DAS can cause an availability outage. One of the biggest problems with DAS is that when it is used with a server cluster, you have to turn all caching off, so you take a potential performance hit because the redundant array of independent disks (RAID) controllers are located in each node. However, this can vary with each individual hardware implementation. If a node's SQL Server resources fail over to the other node while uncommitted transactions remain in the cache of the local controller, and that node is not power-cycled before you fail the resources back, you can corrupt your data when the original node sees the disks again and flushes its stale cache.

Keep in mind that if the material in the cache is not something that needs to be written to the log (such as an insert, update, or delete), it might not matter. SQL Server automatically rolls back any unfinished transactions in a failover.

Network-Attached Storage

Network-Attached Storage devices are fairly new to the storage arena. These devices are function-focused file servers that enable administrators to deploy and manage a large amount of disk storage in a single device. Network-Attached Storage behaves like any other kind of server: it integrates into a standard IP-based network to communicate with other clients and servers. Some storage vendors present Network-Attached Storage as a DAS device. Network-Attached Storage is usually used for file-based storage, but it can be used with SQL Server if it meets all of your performance, availability, and other goals. Besides generic Network-Attached Storage devices, there are also Windows-powered Network-Attached Storage devices based on Windows 2000 that are optimized and preconfigured for file storage. Before purchasing a Network-Attached Storage-based SQL Server storage solution, consider the following:

  • If you do not use a Windows Hardware Quality Labs (WHQL) certified Network-Attached Storage device, it might not meet the I/O guarantees for a transactional database, although it might work with your SQL Server solution. Should there be data corruption due to use of a non-WHQL-certified Network-Attached Storage device, Microsoft will not support the data-related issues and instead will refer you to the Network-Attached Storage vendor for support. Check with your vendor about compatibility of the device for use with SQL Server and database systems.

  • If the Network-Attached Storage device offers support for snapshot backup using split-mirror (with or without copy-on-write), the Network-Attached Storage device must support the SQL Server Virtual Device Interface (VDI) for backups. The vendor-supplied Network-Attached Storage utilities and third-party software should also support the VDI if this functionality is needed. If the VDI is not supported, availability could be affected (SQL Server would need to be stopped so that the mirror could be split), or the SQL Server databases could become corrupt (if the mirror is split without allowing SQL Server to cleanly flush pending writes and leave the databases in a consistent state for backup).

  • Make sure that your network bandwidth can handle the traffic that will be generated by SQL Server to and from the Network-Attached Storage devices. Consider a dedicated network that will not impact any other network traffic.

  • SQL Server 2000 failover clustering is currently not a supported configuration with Network-Attached Storage as the disk subsystem. SQL Server 2000 currently only supports stand-alone instances for use with Network-Attached Storage.

  • For a normal, non-Network-Attached Storage-based SQL Server installation, if the network or a network card malfunctions, it is probably not a catastrophic failure because SQL Server and its disk subsystem are still working properly. On a clustered SQL Server installation, a network or network card malfunction can result in a SQL Server restart and possible failover of the service, but this implication does not apply here because Network-Attached Storage use is not supported with virtual servers. Because Network-Attached Storage solutions depend on the network and network card being available, you must guarantee 100 percent uptime for the network and network card to avoid any data problems, data corruption, or data loss.

  • If you are using RAID, make sure that the Network-Attached Storage device supports the level of RAID that will give you the performance and availability that you require.

  • When deploying Network-Attached Storage for a transactional database, contact the Network-Attached Storage vendor to ensure that the device is properly configured and tuned for use with a database.

Performance Considerations

Prior to making any storage purchase decisions, you need to determine the required disk throughput, disk capacity, and necessary processor power. If you are considering a Network-Attached Storage solution, these additional requirements need to be taken into account:

  • Network-Attached Storage performance is dependent on the Internet Protocol (IP) stack, the network interface card, other networking components, and the network itself. If Network-Attached Storage is used as the data store for SQL Server, all database I/O is processed by the network stack instead of the disk subsystem (as it would be in DAS) and is limited by the network's bandwidth.

  • Because processor utilization can increase with database load, only databases with a smaller load should be used with Network-Attached Storage. However, if a high-speed switched network interconnect, such as the Virtual Interface Architecture (VIA), is used between your SQL Server and the Network-Attached Storage device, processor utilization and network latency can be reduced for every SQL Server I/O. If you have a multiprocessor system, you need multiple network cards, and vice versa: if you have multiple network cards, you need multiple processors; otherwise you might see unbalanced processor loads, especially in a high-bandwidth network or Network-Attached Storage environment.

  • Most networks are configured at 10 or 100 megabits per second (Mbps). This means that Network-Attached Storage performance might be significantly lower than that of DAS or SANs, which are optimized for high disk transfer rates (see the comparison that follows this list).

  • If many Network-Attached Storage devices are placed on a shared network, they might consume a large portion of network bandwidth. This has a significant impact not only on SQL Server, but also on anything else accessing the network.
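
To put those numbers in perspective, a 100-Mbps network provides a theoretical maximum of roughly 12.5 megabytes per second (100 divided by 8) before any protocol overhead, whereas even a single Ultra160 SCSI channel in a DAS configuration is rated at 160 megabytes per second, more than ten times the throughput.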

Using Network-Attached Storage as a Data Store with SQL Server

By default, SQL Server does not support creating or using database files on a network file share, whether the share is located on a standard server or on a Network-Attached Storage device. In the case of Network-Attached Storage, data corruption could occur due to a network interruption if the process that ensures database consistency issues a write that cannot be committed to the disks of the Network-Attached Storage device.

To enable support for network file shares, trace flag 1807 must be enabled. This trace flag bypasses the check that prevents database files from being created or used on a network share. In Query Analyzer, select the master database and execute the following command:

 DBCC TRACEON(1807) 

The successful result of this command should be as follows:

 DBCC execution completed. If DBCC printed error messages, contact your system administrator. 

It is now possible to use a mapped drive or a Universal Naming Convention (UNC) path (that is, \\servername\sharename) with SQL Server 2000. If trace flag 1807 is not enabled prior to using a Network-Attached Storage device with SQL Server, you will encounter one of the following errors:

  • 5105 (Device Activation Error)

  • 5110 (File 'file_name' Is On A Network Device Not Supported For Database Files)
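
Putting these steps together, the following is a minimal sketch of enabling the trace flag and then creating a database on a network share. The server name, share name, and database name are hypothetical placeholders, and the SQL Server service account must have full access to the share:

 -- Enable trace flag 1807 for this connection and confirm it is active
 -- (hypothetical server, share, and database names)
 DBCC TRACEON(1807)
 DBCC TRACESTATUS(1807)
 -- Create a database whose files reside on the network share
 CREATE DATABASE NASTest
 ON PRIMARY (NAME = NASTest_Data, FILENAME = '\\nasserver\sqlshare\NASTest_Data.mdf')
 LOG ON (NAME = NASTest_Log, FILENAME = '\\nasserver\sqlshare\NASTest_Log.ldf')

Trace flags enabled with DBCC TRACEON do not survive a restart of the SQL Server service; if the setting must persist, you can instead add -T1807 to the SQL Server startup parameters.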

    Warning

    Although the preceding section illustrates how SQL Server can be configured to use Network-Attached Storage for its data store, remember that the usage is currently limited to a stand-alone SQL Server installation. A clustered SQL Server installation is not supported using Network-Attached Storage. Consult http://support.microsoft.com to see if this changes in the future.

Storage Area Networks

SANs are a logical evolution of DAS, and they fix many of the issues, such as the caching problem, associated with DAS. The purpose of a SAN is to give you flexibility, scalability, availability, reliability, security, and device sharing. SAN caches work at the physical level of the disk: sectors, tracks, and cylinders.

SQL Server itself technically does not care what protocol you use to access your SAN, but your choice impacts performance. SCSI and Fibre Channel have been mentioned, but there are also the VIA protocol, traditional IP, Ethernet, and Gigabit Ethernet. Fibre Channel, which still has some low-level SCSI embedded, is strongly recommended. It is the most effective at carrying block-type data from storage to the computer writing or reading the data. It also delivers predictable performance under higher loads than traditional Ethernet does. Most important, Fibre Channel is extremely reliable.

More Info

Remember to separate the transport from the actual protocol. For example, iSCSI, Fibre Channel, and VIA are transports. Typically the block transfer protocol on top is SCSI. So, for example, iSCSI is SCSI over IP. Fibre Channel fabrics are SCSI over Fibre Channel, and so on.

The speed of your SAN is largely related to its architecture, which consists of a host bus adapter in each server accessing the SAN, controllers for the disks, switches, and finally, the disks themselves. Each manufacturer implements SANs a bit differently, although the overall concepts are the same, so it is important to work with your hardware vendor to ensure you understand what you are implementing.

From a security standpoint, with a SAN you can do things like zoning and masking, although your SAN vendor must support these features. Zoning means setting up the SAN so that systems can be isolated from one another, a feature that is very useful for clustering. Masking allows you to hide logical unit numbers (LUNs) from certain systems. Cluster nodes can be in the same zone or in different, overlapping zones that are configured with masking.

Because SANs support a wide range of operating systems and servers, a company might purchase a large SAN for use with many different servers or types of workload. Although it is good that you have a SAN, you need to realize the implications of this. Because all systems attached to the SAN have different workloads and the SAN has only one cache, you will be sharing the cache among multiple systems. If, for example, you also have an Exchange server on your SAN that is heavily utilizing the cache, this could impact the performance of your SQL Server.

Also consider how the other operating systems or applications interface with the SAN at a base level: if another Windows server or cluster issues a bus reset as it comes online (say, after a reboot of a node), will it affect your current Windows server or cluster with SQL Server that is running with no problems? These are things you should know prior to putting the SAN into production.

You might also be sharing spindles between different servers on a SAN depending on how the vendor implemented its disk technology. What this means is that one physical disk might be carved up into multiple chunks. For example, a 36-GB disk might be divided into four equal 9-GB partitions at a physical level that Windows and SQL Server would never even see; they only see the LUN. You must recognize if your hardware does this and plan accordingly.

More Info

For a good reference on SANs, read the book Building SANs with Brocade Fabric Switches by Chris Beauchamp (Syngress Publishing, 2001, 1-928994-30-X).

What Disk Technology to Use

With so many choices available to you, what should you choose? Simply put, choose what makes sense for your business from many standpoints: administration, management, performance, growth, cost, and so on. This is not like other decisions you will make for your high availability solution, but it might be one of the most important technology decisions you make with regard to SQL Server. When considering cost versus performance, features, and so on, look at the long-term investment of your hardware purchase. Spending $100,000 on a basic SAN might seem excessive when compared to a $10,000 traditional DAS solution, but if your company is making millions of dollars per month and downtime will affect profitability, over time that SAN investment gets cheaper. The initial cost outlay is usually a barrier, however.
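
As a purely hypothetical illustration of that math: if an hour of downtime costs your business $50,000 in lost revenue and the SAN's added redundancy prevents even two hours of outage per year, the $90,000 price difference between the two solutions is recovered within the first year.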

As you might have gathered, Network-Attached Storage is currently not the best solution when you want to configure SQL Server for high availability. From an availability standpoint, your network becomes one large single point of failure, and the possibility of data corruption due to a network interruption reduces your availability further if you need to restore from a backup.

That leaves DAS and SAN. At this point the main issues are cost, supportability, and ease of expansion and administration. DAS is usually SCSI-based and much cheaper, but it is less flexible, and because you cannot, for example, use the write cache (read caching is just fine), it might not be ideal for your high availability usage of SQL Server. SANs are the way to go if you can afford a solution that fits your needs. The ease of configuration and expansion, as well as flexibility, are key points to think about when looking at SANs.

Although they are not mentioned directly, there is always the option (in a nonclustered system) to use separate disks internal to a system, whether they are SCSI or Integrated Device Electronics (IDE). Some internal SCSI disks also support RAID, which is described in more detail in the section A RAID Primer later in this chapter.



