Storage Architectures

Disks and interfaces are all well and good, but when you're pushing uncompressed HD around, a single disk just isn't going to do it for you. Fortunately, you can arrange disks to provide both greater capacity and higher speeds. We'll review these storage architectures here, starting with an introduction to some useful concepts and buzzwords. We'll look at organizing disks into RAIDs for better performance, then discuss shared storage using NAS and SAN setups.

Concepts and Buzzwords

Storage, like video, has its own set of cryptic acronyms, buzzwords, and concepts. We'll define enough of them so that you can decode spec sheets and marketing documents like a pro.

Ports, NICs, and HBAs

A port is a communication connection, whether Ethernet (networking), Fibre Channel, FireWire, USB, or the like.

A NIC, or network interface card, is a plug-in PCI or PC Card board with one or more networking ports on it.

An HBA, short for Host Bus Adapter, is a Fibre Channel port on a plug-in card.

Hubs and Switches

You connect multiple Ethernet, Fibre Channel, or FireWire devices to each other using hubs or switches.

Hubs are "dumb" party-line devices: any communication that comes in on one port gets replicated on all other ports. All hubs do is provide multiple points of connection; they don't direct or partition data flow at all. Every device hooked up to a hub sees all the data traffic going between any two devices. If 10 devices are hooked up, and 5 are talking to the other 5, each talker winds up with about 20 percent of the available bandwidth, because all the conversations are sharing the same "virtual wire."

By contrast, switches are "smart" devices that work more like a telephone switchboard or a video routing switcher. Switches learn what devices are connected to which ports, and direct data traffic so that it flows only between the devices concerned by it, and other devices don't see it. Switches allow faster communication because devices see only the traffic intended for them. If 10 devices are hooked up, and 5 are talking to the other 5, each talker winds up with about 100 percent of the available bandwidth because each conversation has its own "virtual circuit" through the switch, and all connections are said to run at "full wireline speed."

Hubs are cheaper than switches, but don't provide nearly the same level of performance. Switches also do a much better job of partitioning: if one device on a switch starts misbehaving, the switch can isolate its traffic from the network, whereas a hub simply rebroadcasts it, potentially bringing all connected devices to their knees.
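The bandwidth arithmetic in the two examples above can be sketched in a few lines of Python (purely illustrative; the function name is ours):

```python
def per_talker_bandwidth(link_mbps, talkers, switched):
    """Approximate bandwidth each sender sees, per the examples above.

    On a hub, every conversation shares one "virtual wire"; on a
    non-blocking switch, each conversation gets its own virtual circuit
    and runs at full wireline speed.
    """
    if switched:
        return link_mbps
    return link_mbps / talkers

# Five pairs of devices talking over 100 Mb/s Fast Ethernet:
print(per_talker_bandwidth(100, 5, switched=False))  # 20.0 Mb/s each (hub)
print(per_talker_bandwidth(100, 5, switched=True))   # 100 Mb/s each (switch)
```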


RAID

RAID is short for Redundant Arrays of Inexpensive Disks, although not all RAIDs are redundant, nor are the disks always inexpensive! RAIDs gang multiple disks together to improve reliability, speed, or both. We'll discuss RAIDs in detail shortly.


JBOD

JBOD (Just a Bunch Of Disks) is used to describe storage enclosures that hold, well, just a bunch of disks. In a "true" JBOD, the disks aren't ganged together in a RAID configuration, although the controller may span them, essentially turning them into one large virtual disk by using them sequentially. JBOD is often used to describe any box of disks that has a "dumb" controller, like a Fibre Channel hub connecting FC disks to the outside world, even if the box is part of a RAID.


SBOD

A "Switched Bunch of Disks" is a JBOD with a smart, switching controller. The term is used to differentiate high-end enclosures with integrated Fibre Channel switches from JBODs with mere hubs. SBODs are the latest twist in high-performance disk arrays and are only now starting to filter out of the labs and into the marketplace.


DAS

Any disk or RAID connected directly to your Mac is Direct Access Storage, or DAS. DAS gives you the fastest, lowest-latency performance, all else being equal, because it's yours, all yours. Your Mac owns the storage entirely and need not contend with other users for its use, and DAS has the lowest communications overhead between the Mac and its data. The file system runs on your Mac, and it has direct access to the blocks of data on the drives.


NAS

Network Attached Storage, or NAS, is shared storage using networked file sharing. At its simplest, you're using NAS when you mount another Mac's shared drives on your desktop over the network. There are also dedicated NAS boxes stuffed full of high-speed drives whose sole purpose is to provide shared storage across a network, but you access them just like any other networked Mac. Your Mac makes file-level requests over the network; the server Mac or NAS box translates those requests to block-level requests in its own file system and provides the requested data.


SAN

Storage Area Networks (SANs) connect multiple clients and shared storage using a high-speed connection, usually Fibre Channel. SANs provide fast access to shared storage: a special file system driver runs on your Mac providing block-level access to the drives, while at the same time handling the potential conflicts from other clients accessing the same files or volumes.


RAIDs

RAIDs are collections of disks ganged together to form a single logical unit (LUN) so as to provide higher reliability, higher speed, or both, than a single drive would provide.

Both software and hardware RAIDing are possible; OS X has simple software RAID capabilities built in. A hardware RAID controller gives better performance than software RAIDing does because it doesn't burden the computer with RAID housekeeping. But software RAIDing is better than no RAIDing at all. Sometimes software and hardware RAIDing are used together, as we'll see.

RAIDs employ three strategies in various combinations to work their magic, as outlined in the following sections.


Striping

You can improve speed by striping data across two or more drives. Information is broken into small chunks and read from and written to all the drives, each drive storing one chunk and sharing the overall load. For example, if you have two drives striped, all the even bytes of data might be written to one drive and all the odd bytes to the other. The performance you see is close to the performance of a single drive multiplied by the number of drives in the stripe set.

Striping improves performance but does nothing for redundancy: if any drive in a stripe set fails, you've lost all the data on all the drives. Striping is good for data you can easily replace, like logged and captured media files and render scratch areas, but it's not so good for hard-to-replace stuff like your system, your project, and, most importantly, your invoice files.

Spatial efficiency with striping is the same as with single drives: 100 percent. Since no data are replicated, every bit counts: a stripe set made from two 250 GB drives has 500 GB available for data storage.
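The round-robin idea can be sketched in Python (an illustration only; real striping happens at the block level in the controller, and the chunk size here is arbitrary):

```python
def stripe(data, num_drives, chunk_size=4):
    """Deal fixed-size chunks of data round-robin across the drives,
    the way a stripe-set controller shares the load."""
    drives = [bytearray() for _ in range(num_drives)]
    for i in range(0, len(data), chunk_size):
        drives[(i // chunk_size) % num_drives].extend(data[i:i + chunk_size])
    return drives

def unstripe(drives, chunk_size=4):
    """Reassemble the original data by reading chunks back in the same
    round-robin order."""
    out, offsets, d = bytearray(), [0] * len(drives), 0
    while offsets[d] < len(drives[d]):
        out.extend(drives[d][offsets[d]:offsets[d] + chunk_size])
        offsets[d] += chunk_size
        d = (d + 1) % len(drives)
    return bytes(out)

data = b"uncompressed HD frame data"
assert unstripe(stripe(data, 2)) == data  # round-trips losslessly
```

Because every chunk lands on a different spindle than its neighbor, reads and writes proceed on all drives at once, which is where the speed multiplier comes from.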

Mac OS X has built-in support for striping; you can set it up using Disk Utility.


Mirroring

You can protect data with mirroring, a real-time backup strategy usually using two drives. All incoming data is written to both drives in parallel, so if one fails, the other one has a complete copy of all the data.

Mirroring does nothing at all for write performance (it may even slow writes down, especially with software mirroring), but with certain controllers it can improve read performance since reads can be shared by the two drives.

Mirroring is redundancy: there's a full backup copy of all the data written. If a drive fails, you can proceed, using the remaining drive, and rebuild the mirrored drive after replacing the failed hardware. Of course, if your remaining drive fails before you've rebuilt the mirror, you will lose your data.

Mirroring's spatial efficiency is 50 percent. Mirroring one 250 GB drive with another 250 GB drive means you have the space for only 250 GB of data, since each bit is written twice, once on each drive.

Mac OS X has built-in support for mirroring; you can set it up using Disk Utility.

Striping with Parity

RAIDs using parity split the difference between the pure performance orientation of striping and the conservative data protection of mirroring. Instead of duplicating none of the data or all of it, they record extra parity bits for every bit of data coming in. The parity bits can be used to reconstruct data in cases of drive failure.

A simple case of parity recording involves adding a parity drive to a two-drive stripe set. Parity data are written such that the sum of all the bits recorded in parallel across the three drives always equals an odd number:

Drive 0   + Drive 1   + Parity   = Sum
   0           0          1         1
   0           1          0         1
   1           0          0         1
   1           1          1         1

("Hang on!" you say. "1+1+1=3, not 1!" Right you are, but for parity purposes, all we care about is that least-significant bit, the one that says even or odd. In case you're wondering, this is modulo 2 addition, in which you keep only the remainder after dividing by 2.)

Let's assume drive 1 fails, and its data are lost. We can recover its data just by looking up its value in the truth table just shown, which gives every permissible combination of bits on Drive 0, Drive 1, and the parity drive. (And at that, the Sum column is superfluous; it's only there to show how the math works.)

As it turns out, truth table lookups are very fast to implement in hardware, so no "real math" needs to be done at all once the truth table is designed. Incoming data are passed through the truth table to generate parity, and the same table is used in the event of drive failure to re-create the lost data, regardless of which drive failed: it works equally well to replace the data from Drive 0, Drive 1, or the parity drive.
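The odd-parity rule is easy to sketch in Python: generating parity and rebuilding a lost bit are the same modulo-2 operation (function names are ours, not anything a controller exposes):

```python
def parity_bit(data_bits):
    """Odd parity: pick the bit that makes the modulo-2 sum of all
    recorded bits (data plus parity) come out to 1."""
    return 1 - (sum(data_bits) % 2)

def rebuild_lost_bit(surviving_bits, parity):
    """Recover the bit from a failed drive: it's whatever value makes
    the total sum odd again. Works no matter which data drive died."""
    return 1 - ((sum(surviving_bits) + parity) % 2)

# Exercise every row of the truth table for a two-data-drive stripe set:
for d0 in (0, 1):
    for d1 in (0, 1):
        p = parity_bit([d0, d1])
        assert (d0 + d1 + p) % 2 == 1           # sum is always odd
        assert rebuild_lost_bit([d0], p) == d1  # Drive 1 fails
        assert rebuild_lost_bit([d1], p) == d0  # Drive 0 fails
```

If the parity drive itself fails, you simply regenerate it by running `parity_bit` over the surviving data drives.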

As long as only one drive at a time fails, you won't lose any data. And in the example given, you get twice the performance of a single drive, since this parity setup is basically a two-drive stripe set with an added parity drive.

This sort of scheme can be extended across more than three drives, and often is. Parity storage can be kept on a single drive, as in RAID 3, or distributed among the data drives in RAID 5.

Performance is almost as good as in a striped set without the parity drive, though writes can take a bit longer.

Redundancy in the case of single-drive failure is assured, and spatial efficiency equals (number of data drives) ÷ (total number of drives, including parity). In the example in the preceding section, three 250 GB drives yield 500 GB of usable space, with 250 GB taken up by parity data, for 67 percent spatial efficiency.
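The capacity arithmetic for all three strategies can be collected into one small sketch (equal-size drives assumed; the function is ours, not anything in Disk Utility):

```python
def usable_space(drive_gb, data_drives, parity_drives=0, mirrored=False):
    """Usable capacity and spatial efficiency for the schemes above:
    striping (no redundancy), mirroring (everything written twice),
    and striping with parity. Assumes all drives are the same size."""
    total_drives = (data_drives + parity_drives) * (2 if mirrored else 1)
    usable = drive_gb * data_drives
    return usable, usable / (drive_gb * total_drives)

print(usable_space(250, 2))                   # stripe set: (500, 1.0)
print(usable_space(250, 1, mirrored=True))    # mirror: (250, 0.5)
print(usable_space(250, 2, parity_drives=1))  # 2 data + 1 parity: 500 GB, ~67%
```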

RAID Levels

These approaches are combined in various ways characterized as RAID levels, which are simple numerical identifiers. We'll cover the most common ones here. Single-digit RAID levels form the basic building blocks of RAID architectures. Two-digit levels combine two RAID levels at once, with the first digit describing the "lower level" RAIDing, and the second the "higher level" RAIDing. Well, most vendors do it that way, but RAID terminology is one of those things that different vendors interpret in different ways. When in doubt, ask the vendor for details.

More Info

Probably the best summary of every possible RAID level is at


RAID 0

Pure striping, as described in the "Striping" section. RAID 0 is a good choice for storing media files when you're on the cheap. Spatial efficiency and performance are high, and you can even use OS X as your RAID controller. But there's no protection for your data: lose a drive, and you've lost the entire RAID.


RAID 1

Mirroring, as described in the "Mirroring" section. Not recommended for media files because RAID 1 doesn't improve performance over a single drive.


RAID 3

Parity striping at the byte level, using a dedicated parity drive. RAID 3 adds data protection to RAID 0 and is a great choice for storing large, sequentially accessed files, like audio and video. With parity localized to a single drive, performance stays high even with a single drive failure, but the parity drive can be a bottleneck when lots of random writing is occurring, so for general-purpose use RAID 5 is usually better.


RAID 5

Block-level striping with parity distributed among the different drives on a block-by-block basis. Distributing parity across the drives tends to improve random write performance compared to RAID 3, but RAID 3 usually performs better for sequential reads, especially when a drive has failed.

RAID 0+1

Also called RAID 01, but 0+1 avoids the confusion of 01's leading zero. RAID 0+1 uses striping first, then mirroring: two sets of disks are made into stripe sets, and then one set is used to mirror the other. RAID 0+1 is for those who need RAID 0 performance levels with fault tolerance and can afford to double the number of drives used to obtain that robustness. Note that a single drive failure knocks its entire stripe set (half the mirror) offline, and that whole stripe set needs to be rebuilt when the drive is replaced.


RAID 10

Mirroring first, then striping. Like RAID 0+1, but it may perform a bit better when a drive has failed, and it tends to rebuild more quickly, since failure of a single drive requires the rebuilding of only that one drive.
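The difference in fault tolerance between the two layouts can be made concrete with a small sketch for a hypothetical four-drive array (our own illustration; drive numbering is arbitrary):

```python
from itertools import combinations

def raid01_survives(failed, n=4):
    """RAID 0+1 with n drives: drives 0..n/2-1 form one stripe set,
    n/2..n-1 the other, and the two sets mirror each other. The array
    survives as long as at least one whole stripe set is intact."""
    side_a_ok = all(d not in failed for d in range(n // 2))
    side_b_ok = all(d not in failed for d in range(n // 2, n))
    return side_a_ok or side_b_ok

def raid10_survives(failed, n=4):
    """RAID 10 with n drives: mirrored pairs (0,1), (2,3), ... striped
    together. The array survives as long as no pair loses both drives."""
    return all(not (2 * i in failed and 2 * i + 1 in failed)
               for i in range(n // 2))

# Of the six possible two-drive failures in a four-drive array:
two_drive_failures = [set(f) for f in combinations(range(4), 2)]
print(sum(map(raid01_survives, two_drive_failures)))  # RAID 0+1 survives 2
print(sum(map(raid10_survives, two_drive_failures)))  # RAID 10 survives 4
```

Both layouts tolerate any single-drive failure, but RAID 10 survives more of the possible second failures, which is part of why it degrades and rebuilds more gracefully.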


RAID 30

Multiple RAID 3s are striped together. RAID 30 gives better performance than the same number of drives in a single RAID 3 set, but depending on how the drives are divided up (many small RAID 3s, or fewer but bigger RAID 3s?), performance can be difficult to predict, and RAID 30s tend to be expensive for the performance and space they yield.


RAID 50

Multiple RAID 5s are striped together. Like RAID 30: good performance and reliability, but with attendant complexity and cost.

RAIDs for Video and Audio

Generally speaking, RAIDs 0, 3, and 5 give you the best bang for the buck when it comes to media storage.

RAID 0 is the winner when sheer performance is required and when you need to maximize storage space. When you're working at high data rates, as with uncompressed HD, RAID 0 may be the way to go to keep array costs within reason and eke out as much online storage as possible. Just keep your source tapes around in case the array should suffer a failure.

RAIDs 3 and 5 sacrifice some performance for reliability; maybe you won't stream as many channels of real-time video, but you may sleep better at night knowing a single drive failure won't wipe out your media. RAIDs 3 and 5 are often preferred when you have lots of important media online, especially in a shared-storage environment, and loss of the array would entail huge inconveniences.

With many array controllers, RAID 3 is preferred over RAID 5 for media storage. Apple's Xserve RAID, on the other hand, is better optimized for RAID 5, and that's the preferred format even for media storage.

RAID Level Comparison

RAID Level   Storage Efficiency   Read Performance    Write Performance   Data Redundancy
0            Very high (100%)     Very high           Very high           None
1            Low (50%)            Moderate            Moderate            Full mirror copy
3            High                 High to very high   High                Survives one drive failure
5            High                 High to very high   High                Survives one drive failure
0+1          Low (50%)            Very high           High to very high   Survives one drive failure
10           Low (50%)            Very high           High to very high   Survives one drive failure
30           High                 High to very high   High to very high   One drive per RAID 3 set
50           High                 High to very high   High to very high   One drive per RAID 5 set

If performance is paramount, and cost is no object, RAIDs 30 and 50 are worth considering. On the Xserve RAID, RAID 0 gives the best write performance, and RAID 50 excels at reads and does nearly as well on writes. Because you usually need to play back more streams simultaneously than you write, RAID 50 is the way to go if you must maximize performance; you might choose RAID 50 when setting up a multiuser Xsan system, for example.

How do you decide which RAID level to implement? Read the recommendations in following lessons, of course, but also consult with vendors of the arrays you're considering and talk with other FCP users and systems integrators who've set up similar systems. RAID controllers vary from vendor to vendor, performance "sweet spots" differ between systems, and your requirements will vary depending on media type, number of users, and workflow patterns.

Combining capture cards, RAID controllers, and disk drives is a complex task with multiple independent variables; there is no "right" answer that works in every case. But, of course, that's why you get paid the big, big money to put FCP systems together!

Network Attached Storage

Network Attached Storage (shared drives you can see on the network and mount on the desktop) has long been used for simple file transfers between systems, but the combination of low-bit-rate media and fast networking has led to the occasional use of NAS for video editing in shared-media environments.

Fast Ethernet, a.k.a. 100BaseT, runs at 100 Mb/second, allowing roughly 7 to 8 MB/second of data transfer in the best case. Gigabit Ethernet, now standard on G5s and the larger PowerBooks, runs 10 times as fast, with practical transfer rates between 30 and 60 MB/second. With DV25 requiring only 3.6 MB/second, even Fast Ethernet should allow a stream or two of playback across the network.
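The stream math behind those figures can be sketched as follows (a rough best-case estimate using the practical rates quoted above; the function name is ours):

```python
DV25_MB_S = 3.6  # MB/second for one DV25 stream

def streams_supported(practical_mb_s, stream_mb_s=DV25_MB_S):
    """Rough best-case count of real-time streams a link can carry,
    given its practical (not nominal) transfer rate in MB/second."""
    return int(practical_mb_s // stream_mb_s)

print(streams_supported(7))   # Fast Ethernet at ~7 MB/s: 1 stream
print(streams_supported(30))  # Gigabit, low end, at ~30 MB/s: 8 streams
```

Remember these are ceilings, not guarantees: latency spikes, as discussed next, will cut into them.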

In practice, this works, up to a point. Ethernet has the speed, but it doesn't have guaranteed latency: if a packet of data is dropped, it gets retransmitted later, and there's nothing in the way Ethernet works (well, not under the TCP/IP protocol at the root of Mac networking) to prevent packets from being lost. Indeed, Ethernet works like a party line: anyone wanting to speak just shouts out. If two devices try to talk at the same time, they both shut up and try again some random time later. As a result, there's no way to guarantee quality of service. (There are network protocols that guarantee QoS, but they aren't currently applicable to networked file sharing.)

Although Ethernet speeds are consistent in the long term, short-term latencies can be extremely variable. What's more, the variable time it takes to ship a frame's worth of data over Ethernet is added to the time it takes the server computer to fetch it from whatever local storage it's using. Ethernet is also optimized for small bursts of data, not large chunks of video; it takes nearly 80 separate transmissions (packets) to send a single DV25 frame, making Ethernet a fairly inefficient transport mechanism. These factors give NAS the longest and most variable latencies of any storage architecture we consider.
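The "nearly 80 packets" figure is just frame size divided by packet payload (approximate numbers; the ~120 KB DV25 frame assumes roughly 30 frames/second at 3.6 MB/second):

```python
frame_bytes = int(3.6e6 / 30)  # ~120,000 bytes in one DV25 frame
mtu_payload = 1500             # standard (non-jumbo) Ethernet payload, bytes
packets_per_frame = -(-frame_bytes // mtu_payload)  # ceiling division
print(packets_per_frame)  # 80
```

This is also why jumbo frames (discussed below) help: a larger payload per packet means far fewer transmissions per video frame.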

What this means to the FCP editor is that networked media drives will drop frames. Not very frequently perhaps, if the network and storage are well designed, but you'll see dropped frames sooner or later.

NAS requires the use of switches, not hubs, on the Ethernet network. Unless you're simply hooking one Mac up to another Mac, the file server needs to use Gigabit Ethernet to avoid being overloaded by multiple clients. Some NAS "appliances," like EditShare or HUGE Systems' SanStream, have Gigabit switches built in: simply connect your client Macs to the server with GigE cabling and you're done.

NAS is useful in shared-storage applications when cost or mechanics preclude the use of a SAN, but the workflows used in NAS environments must allow the occasional dropped frame. If you're seriously considering NAS, take note of the following:

  • FCP works happily right out of the box with DV25 or lower-bit rate media over 100BaseT or Gigabit Ethernet. Higher-bit rate media, like DV50, starts dropping frames whenever a dual-stream effect, such as a dissolve, is played back.

  • Enabling "jumbo frames" on Gigabit Ethernet improves performance. Your NIC, switch, and server must all support jumbo frames for this to work. OS X 10.2.4 and later supports jumbo frames, but you'll need to add a separate NIC since Mac motherboard Ethernet lacks jumbo support.

  • Plan to capture from tape and play out to tape from the server itself, avoiding network bottlenecks when real-time performance is mandatory. Otherwise, consider capturing to and playing out from local drives on the FCP Mac: move media to the server after capture, and render your finished Timeline to the local disk as a self-contained QuickTime movie for playout purposes.

Storage Area Networks

SANs give you direct-access speed to a pool of shared-access storage, enabling collaborative production workflows with the ease and speed of local storage. Such capabilities don't come cheaply, of course: SANs can be expensive and complex to set up. With the exception of the MicroNet SANcube, discussed shortly, each client Mac needs a Fibre Channel HBA and SAN software installed; the combination costs $1,500 or more (as of Summer 2004). You'll also need a Fibre Channel switch, enough Fibre Channel-equipped RAIDs to meet your bandwidth and capacity needs, and possibly a metadata controller. A what? Read on.

SAN Concepts

When you read a file from direct-access storage, the Mac's file system driver translates your file-level request ("open file 'project.fcp' and give me the first 2 KB of its data") into block-level commands ("mark 'project.fcp' as opened for read, and retrieve its first four blocks, 124, 125, 132, and 137, from disk").

When you read the same file from network-attached storage, your Mac sends the file-level commands over the network to the server, and the server translates them into block-level accesses, handling all the overhead of file sharing so that multiple clients don't conflict with each other.

In a SAN, your Mac's SAN file system provides the same block-level access to files as with direct-attached storage, but it must simultaneously tell every other machine on the SAN what it's up to, since every SAN participant has the same block-level access to the storage. Communicating this file system metadata (literally, "data about data") expeditiously, keeping all the SAN participants' views of the volume's catalog and bitmap in sync, and avoiding file system corruption make SANs much more complex than the other file systems discussed so far.

Adding to the fun is the fact that metadata communication requires lots of tiny packets of information sent at seemingly random intervals, whereas block-level data transfer involves the sequential and orderly flow of very large packets of information. An infrastructure that supports one may not be the best for the other, and vice versa: Ethernet makes a lot of sense for metadata transfer; Fibre Channel is better optimized for large blocks.

SANs typically use one of two approaches for dealing with this complexity. The differences affect both the underlying architecture of the SAN and the way you can use it.

Volume-Level Locking

Some SANs offer volume-level locking. These SANs let multiple computers read from a volume, but only one at a time can write to it. Since reading doesn't change the file system's low-level metadata (aside from "who has this file open for read"), many computers can read from the same volume simultaneously without fear of file system corruption.

Metadata management tasks (tracking which disk blocks hold valid data, how big files are and where their data blocks reside, and where files are listed in the volume's catalog) are relegated to the one computer with write access. When you write a file, the catalog and bitmap changes can be shared in a leisurely fashion once all the data have been committed to disk. When a file is deleted, the catalog is updated, and then the underlying data structures like the bitmap are adjusted to free up the disk space. The writing computer merely needs to publish its metadata changes to the other SAN participants; there's never the problem of keeping two metadata-changers closely synchronized.

Such SANs typically share what metadata they need via the same communications fabric they use for block-level data transfer, such as Fibre Channel or FireWire. They are simple to set up and administer, with user-level tools for mounting SAN volumes for read-only or read/write access.

Volume-locking SAN systems are available from CharisMac (FibreShare), CommandSoft (FibreJet), Rorke Data (ImageSAN), and Studio Network Solutions (SANmp), among others.

It's also worth mentioning one of the first SANs available for Macs: MicroNet's SANcube. The SANcube provided volume-locked shared storage for up to four Mac OS 9 participants, using FireWire 400 for its fabric via a built-in switch. This made it extremely inexpensive. Sadly, the OS X drivers for SANcube don't support shared storage, so it's more notable for its innovative approach than for its current level of functionality.

File-Level Locking

SANs like Apple's Xsan offer file-level locking and therefore have to tackle the metadata problem head-on. File-locking SANs let multiple computers read and write to the same volume simultaneously, so multiple computers can change the catalog and bitmap.

File-level locking, while providing increased file-sharing flexibility, requires a more sophisticated approach to metadata. File-locking SANs typically segregate file system metadata from user data, both in storage and transport.

Metadata controllers (dedicated servers such as Xserves) store and manage metadata. The SAN participants connect to the metadata controller (MDC) using Fast Ethernet or Gigabit Ethernet, sharing file system data through the MDC. The MDC fields file system requests and translates them into block-level instructions that the participants use to access data directly from the shared storage over a high-speed Fibre Channel fabric. In Xsan, the MDC also stores its metadata on the SAN through its own Fibre Channel link.

In essence, the MDC serves as a centralized file server as far as arbitrating access to the storage is concerned, while each computer still performs its own direct access to the storage itself. This division of labor provides each SAN participant with a synchronized and consistent view of the shared storage, without the bottleneck caused by moving user data through a single, centralized file server.

Of course, this level of sophistication has its own cost: you now have two networks to deal with, an Ethernet network for metadata, and a Fibre Channel network for user data. We'll discuss these issues in a bit.

Which SAN approach is right for your system? Volume-locking SANs offer easier setup and administration, but your workflows must allow for having only one Mac writing to a volume at a time. File-locking SANs offer greater working flexibility, but at a considerable cost in setup and administrative complexity.

Volume-locking SAN vendors say their systems are faster due to the simplicity of metadata exchange allowed by volume locking. File-locking SAN vendors claim their architecture is faster because the metadata is segregated on its own network. In reality, the relative speed differences of the two designs, all else being equal, will pale in significance compared to the operational and administrative differences the two designs impose.

In short, there's no easy answer here. You simply have to design your storage architecture based on current and future needs and pick the system that works best for you. SANs are evolving rapidly and their use in collaborative video production is just starting to become widespread, so your best resource will be other FCP users and system integrators with early-adopter experience. When in doubt, learn from the experiences of others.

Volume Structures

SANs offer considerable flexibility through storage virtualization: the capability to create logical storage volumes atop varying physical devices. SANs like Xsan give you a lot of freedom in structuring your volumes through several levels of logical organization between the physical disks and the mounted volumes. You can optimize a volume for storing high-bit-rate, sequentially accessed video files or low-bit-rate, latency-critical audio files; for ultimate speed or ultimate redundancy; or for any other combination of performance parameters you desire.

The lowest-level organizer in Xsan (we'll use Xsan as our example henceforth; once you understand Xsan's components, you'll be well prepared to look at other SANs) is the RAID, or LUN (logical unit). You set up RAIDs (assuming you're using Xserve RAIDs) using RAID Admin as you would for direct-attach uses.

Then, using Xsan Admin, you assign the LUNs to one or more storage pools. Data written to a storage pool is striped across all the contained RAIDs using RAID 0 techniques. So, a storage pool is essentially a "RAID of RAIDs," adding a zero to whatever the underlying RAID setup is: if your RAIDs are RAID 5, your storage pool performs like RAID 50.

Since storage pools use RAID 0 techniques, each RAID in the pool should be structured the same way, with the same speed and capacity.
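The "RAID of RAIDs" behavior can be sketched as a small data-structure exercise (illustrative only; these dictionaries are our own invention, not Xsan Admin syntax):

```python
def storage_pool(member_luns):
    """Sketch of how an Xsan storage pool behaves: the pool stripes
    (RAID 0) across its member RAIDs, so it acts like the members'
    RAID level with a zero appended, and, because of that striping,
    the members should match in level and size."""
    levels = {lun["level"] for lun in member_luns}
    sizes = {lun["gb"] for lun in member_luns}
    assert len(levels) == 1 and len(sizes) == 1, "members should match"
    return {"acts_like": f"RAID {levels.pop()}0",
            "capacity_gb": sizes.pop() * len(member_luns)}

# Two identical 1 TB RAID 5 LUNs pooled together:
print(storage_pool([{"level": 5, "gb": 1000}, {"level": 5, "gb": 1000}]))
# {'acts_like': 'RAID 50', 'capacity_gb': 2000}
```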

Finally, you create volumes containing one or more storage pools. (You can even add storage pools to existing volumes.) Xsan volumes treat storage pools the same way as JBODs treat disks: as available space without any special organization, so storage pools don't need to be the same size or speed.

Indeed, you can build volumes with different storage pools, each optimized for different media types, and define where data will be stored using affinities. Affinities associate folders with storage pools, effectively providing "soft partitions" within volumes. For example, you can define affinities between the Capture Scratch and Render Files folders and a fast, high-performance storage pool within an FCP volume, and set up affinities pointing Audio Capture Scratch and Audio Render Files to a low-latency storage pool in the same volume. When you then point FCP at the FCP volume as its scratch disk, its various captures and renders will automatically wind up in the appropriate storage pools within the volume.
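The effect of affinities amounts to routing writes by folder, which can be sketched like this (folder and pool names here are hypothetical, and real affinities are configured in Xsan Admin, not in code):

```python
# Map affinity folders to the storage pools they're bound to:
affinities = {
    "Capture Scratch":       "video_pool",  # fast, high-bandwidth pool
    "Render Files":          "video_pool",
    "Audio Capture Scratch": "audio_pool",  # low-latency pool
    "Audio Render Files":    "audio_pool",
}

def pool_for(path):
    """Return the pool whose affinity folder appears in the path;
    files outside any affinity folder land in unassigned pool space."""
    for folder, pool in affinities.items():
        if f"/{folder}/" in path:
            return pool
    return "any_pool"

print(pool_for("/Volumes/FCP/Capture Scratch/clip01.mov"))  # video_pool
print(pool_for("/Volumes/FCP/Projects/show.fcp"))           # any_pool
```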

Image courtesy Apple Computer

Fabric Design

All SANs require a properly designed storage fabricthe high-speed data networking between the participating computers and the shared storage. Each SAN vendor has its own recommendations for approved components and architectures, and it's worth spending some time with the installation manuals and the information on the vendor's Web site before making your shopping list. Xsan has been qualified with various switches from Brocade, Emulex, and Qlogic; check the current list at

In general, SAN Fibre Channel fabrics require fabric-capable switches, not hubs. The switches should be capable of isolating RSCNs (Registered State Change Notifications, as discussed previously) to the port they occur on, to avoid disrupting traffic throughout the rest of the SAN. (Qlogic calls this capability I/O Stream Guard.) The switches should also be fully nonblocking, so that traffic between any two ports on the switch can travel at full speed regardless of the traffic in the rest of the switch.

Even though Fibre Channel switch prices are declining rapidly, don't be surprised if the switch winds up being the single most expensive hardware component in your system. As the central point through which all your high-bandwidth data flows, its performance is critical to SAN operations. No amount of clever RAID design and high-speed disks can break through a bottleneck imposed by an insufficiently muscular switch.

If your SAN requires a separate MDC, it's best to give it its own dedicated Ethernet network, too. Although it's possible to run the metadata traffic on the house network, you'll hurt your SAN's performance with all the network slowdowns and traffic jams possible on a general-purpose net. You can install a second NIC in each Mac for the metadata network, leaving the built-in network interface connected to the house network for file sharing, downloading software updates, and surfing the iTunes Music Store. An inexpensive way to get optimal performance is to invest in a dedicated, high-performance Ethernet switch and high-quality Cat 6 cables.

System Administration

When you set up shared storage for multiple users, you become not only a storage and fabric guru, but a network administrator, too. The topic of network administration is worth a whole book in itself, but we can at least touch on some of the issues you'll face.

Xsan Admin provides volume-level access control through masking and mapping, allowing certain users to see some volumes and not others. You can also restrict certain users to mounting specified volumes as read-only.

If you want to administer file and folder permissions on a user and group level, bear in mind that OS X determines permissions based on user and group ID numbers, not human-readable user and group names. Unless all the Macs sharing a SAN are centrally administered (using OS X Server's Open Directory, for example), it's possible for user and group IDs to differ from Mac to Mac, even if the human-readable names are the same. When the IDs differ, a user logging into one Mac may not see the same SAN resources he saw when using a different Mac, or even worse, he may see the resources but be unable to open the project and media files he used on that other machine. Keeping the IDs consistent across all the connected Macs avoids this problem. You don't have to use Open Directory to administer users and groups, but it makes things a lot simpler; editing each Mac's NetInfo database by hand is not for the faint of heart.

Xsan Admin lets you specify quotas for users and groups. Quotas limit the amount of storage a user or group can consume; that way you won't have one user filling up the entire SAN at the expense of everyone else.

If your MDC fails, the SAN becomes unusable until it's replaced (the SAN itself should survive the outage; MDCs maintain journal files on the SAN to allow state recovery following such a failure). Xsan lets you specify failover MDCs, so that if (or when) the primary MDC fails, a backup MDC takes over with minimal disruption in SAN service. Any connected Mac G4 800 MHz or faster with 512MB RAM per volume can serve as a backup MDC, even an FCP system. But many SANs are set up with a dedicated backup MDC such as a second Xserve so that all connected workstations can continue to do the jobs they normally do.

We'll cover these issues in more detail in later lessons.

Apple Pro Training Series. Optimizing Your Final Cut Pro System. A Technical Guide to Real-World Post-Production
Year: 2004
Pages: 205