The third of the three major NAS components supports standard TCP/IP communications across a variety of transport media. The most popular topology remains Ethernet. Support here is evolving along three fronts: increasing bandwidth driven by Gigabit Ethernet standards, more sophisticated participation within enterprise networks through quality of service (QoS) and routing, and better coexistence with other storage models, such as FC SANs.
A limiting factor of file I/O processing with NAS is the TCP/IP network environment and the processing of the TCP layers. As we discussed, communication between client applications and server I/O is processed within the encapsulation layers. In the NAS architecture, these layers (normally processed within the server) are instead processed within the NAS box. TCP/IP overhead should therefore be given careful thought when implementing a NAS solution. However, TCP/IP performance issues are difficult to address, given the black-box nature of the NAS software and the OEM supplier relationship for the NIC components.
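To see why the encapsulation layers matter, consider the fixed header cost that every TCP segment carries over standard Ethernet. The sketch below is a simplified, illustrative calculation (it assumes no IP or TCP options and ignores ACK traffic, the preamble, and the interframe gap) that estimates how much of each full-size frame is actual file data:

```python
# Illustrative estimate of TCP/IP encapsulation overhead on standard Ethernet.
# Assumes IPv4 and TCP headers without options; ignores preamble and interframe gap.

ETHERNET_HEADER = 14   # destination MAC, source MAC, EtherType
ETHERNET_FCS = 4       # frame check sequence (trailer)
IP_HEADER = 20         # IPv4 header without options
TCP_HEADER = 20        # TCP header without options
MTU = 1500             # standard Ethernet payload limit

def frame_efficiency(mtu: int = MTU) -> tuple[int, float]:
    """Return (application payload bytes, payload fraction of the wire frame)."""
    payload = mtu - IP_HEADER - TCP_HEADER
    frame = mtu + ETHERNET_HEADER + ETHERNET_FCS
    return payload, payload / frame

payload, efficiency = frame_efficiency()
print(payload, round(efficiency, 4))  # 1460 bytes of file data in each 1518-byte frame
```

The byte overhead itself is modest; the real cost is that each of these headers must be built, checksummed, and parsed in software for every segment, which is the per-packet processing burden the NAS box inherits.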
An advancement that may ease this condition is the development of TCP off-load engines (TOEs). These are special NIC cards that off-load much of the TCP encapsulation processing onto the NIC itself. Although this solution is just emerging, it reflects evolving technology directions that reduce latency within IP networks.
TCP off-load engines derive their value by processing the TCP layers within the NIC card. When software instructions are pushed down into lower-level machine instructions, they become highly optimized, operating at machine speeds. However, this comes at the sacrifice of software flexibility, given that it is much harder to change microcode or ASIC instructions than software operating under a high-level language within the OS.
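On Linux hosts, partial off-loads of this kind are already visible through standard tooling. As an illustration (the interface name `eth0` is an assumption, and the available features vary by NIC and driver), the `ethtool` utility can report and toggle segmentation and checksum off-loads:

```shell
# Inspect which off-load features the NIC/driver pair supports.
# (eth0 is an assumed interface name; substitute your own.)
ethtool -k eth0

# Enable TCP segmentation off-load and transmit checksum off-load,
# moving part of the TCP encapsulation work onto the NIC.
ethtool -K eth0 tso on tx on
```

These partial off-loads move selected pieces of the encapsulation work to the card; a full TOE goes further by running the entire TCP state machine on the NIC, which is precisely where the speed-versus-flexibility trade-off described above comes in.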