One reason it's hard to write robust, extensible, and efficient networked applications is that developers must master many complex network programming concepts and mechanisms, including
Application programming interfaces (APIs) and tools have evolved over the years to simplify the development of networked applications and middleware. Figure 1.6 illustrates the IPC APIs available on OS platforms ranging from UNIX to many real-time operating systems. This figure shows how applications can access networking APIs for local and remote IPC at several levels of abstraction. We briefly discuss each level of abstraction below, starting from the lower-level kernel APIs and moving up through the native OS user-level networking APIs to host infrastructure middleware.

Figure 1.6. Levels of Abstraction for Network Programming
Kernel-level networking APIs. Lower-level networking APIs are available in an OS kernel's I/O subsystem. For example, the UNIX putmsg() and getmsg() system functions can be used to access the Transport Provider Interface (TPI) [OSI92b] and the Data Link Provider Interface (DLPI) [OSI92a] available in System V STREAMS [Rit84]. It's also possible to develop network services, such as routers [KMC+00], network file systems [WLS+85], or even Web servers [JKN+01], that reside entirely within an OS kernel. Programming directly to kernel-level networking APIs is rarely portable between different OS platforms, however. It's often not even portable across different versions of the same OS! Since kernel-level programming isn't used in most networked applications, we don't cover it any further in this book. See [Rag93], [SW95, MBKQ96], and [SR00] for coverage of these topics in the context of System V UNIX, BSD UNIX, and Windows 2000, respectively.

User-level networking APIs. Networking protocol stacks in modern commercial operating systems reside within the protected address space of the OS kernel. Applications running in user space access protocol stacks in the OS kernel via IPC APIs, such as the Socket or TLI APIs. These APIs collaborate with an OS kernel to provide the capabilities shown in the following table:
These capabilities are covered in Chapter 2 of C++NPv1 in the context of the Socket API. Many IPC APIs are modeled loosely on the UNIX file I/O API, which defines the open(), read(), write(), close(), ioctl(), lseek(), and select() functions [Rit84]. Due to syntactic and semantic differences between file I/O and network I/O, however, networking APIs provide additional functionality that's not supported directly by the standard UNIX file I/O APIs. For example, the pathnames used to identify files on a UNIX system aren't globally unique across hosts in a heterogeneous distributed environment. Different naming schemes, such as IP host addresses and TCP/UDP port numbers, have therefore been devised to uniquely identify communication endpoints used by networked applications.

Host infrastructure middleware frameworks. Many networked applications exchange messages using synchronous and/or asynchronous request/response protocols in conjunction with host infrastructure middleware frameworks. Host infrastructure middleware encapsulates OS concurrency and IPC mechanisms to automate many low-level aspects of networked application development, including
The increasing availability and popularity of high-quality, affordable host infrastructure middleware is raising the level of abstraction at which developers of networked applications can work effectively. For example, [C++NPv1, SS02] present an overview of higher-level distributed object computing middleware, such as CORBA [Obj02] and The ACE ORB (TAO) [SLM98], which is an implementation of CORBA built using the frameworks and classes in ACE. It's still useful, however, to understand how lower-level IPC mechanisms work in order to fully comprehend the challenges that arise when designing, porting, and optimizing networked applications.