2.1 Service and Server Design Dimensions

When designing networked applications, it's important to recognize the difference between a service, which is a capability offered to clients, and a server, which is the mechanism by which the service is offered. The design decisions regarding services and servers are easily confused, but should be considered separately. This section covers the following service and server design dimensions:

  • Short- versus long-duration services

  • Internal versus external services

  • Stateful versus stateless services

  • Layered/modular versus monolithic services

  • Single- versus multiservice servers

  • One-shot versus standing servers

2.1.1 Short-Duration versus Long-Duration Services

The services offered by network servers can be classified as short duration or long duration. These time durations reflect how long a service holds system resources. The primary tradeoff in this design dimension involves holding system resources when they may be better used elsewhere versus the overhead of restarting a service when it's needed. In a networked application, this dimension is closely related to protocol selection because setup requirements for different protocols can vary significantly.

Short-duration services execute in brief, often fixed, amounts of time and usually handle a single request at a time. Examples of short-duration services include computing the current time of day, resolving the Ethernet number of an IP address, and retrieving a disk block from the cache of a network file server. To minimize the amount of time spent setting up a connection, short-duration services are often implemented using connectionless protocols, such as UDP/IP [Ste94].
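
To make the connectionless flavor of such a service concrete, the following minimal sketch uses the ACE datagram wrapper facades to issue a single time-of-day style request. The server port, the empty request, and the reply format are assumptions for illustration rather than part of any standard service, and error handling is reduced to early returns.

  #include "ace/SOCK_Dgram.h"
  #include "ace/INET_Addr.h"

  // Minimal sketch of a short-duration, connectionless request: send one
  // datagram and read one reply.  The port and reply format are assumed.
  int query_time_of_day (const char *host)
  {
    ACE_INET_Addr server (13, host);   // assume a daytime-style service port
    ACE_SOCK_Dgram dgram;
    if (dgram.open (ACE_Addr::sap_any) == -1)
      return -1;

    char request[1] = { 0 };
    if (dgram.send (request, sizeof request, server) == -1)
      return -1;

    char reply[64];
    ACE_INET_Addr from;
    ssize_t n = dgram.recv (reply, sizeof reply, from);  // no retransmission logic
    dgram.close ();
    return n == -1 ? -1 : 0;
  }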

Long-duration services run for extended, often variable, lengths of time and may handle numerous requests during their lifetime. Examples of long-duration services include transferring large software releases via FTP, downloading MP3 files from a Web server using HTTP, streaming audio and video from a server using RTSP, accessing host resources remotely via TELNET, and performing remote file system backups over a network. Services that run for longer durations allow more flexibility in protocol selection. For example, to improve efficiency and reliability, these services are often implemented with connection-oriented protocols, such as TCP/IP [Ste94], or session-oriented protocols, such as RTSP [SRL98] or SCTP [SX01].

Logging service From the standpoint of an individual log record, our server logging daemon seems like a short-duration service. Each log record is limited to a maximum length of 4K bytes, though in practice most are much smaller. The actual time spent handling a log record is relatively short. Since a client may transmit many log records, however, we optimize performance by designing client logging daemons to establish connections with their peer server logging daemons. We then reuse these connections for subsequent logging requests. It would be wasteful and time consuming to set up and tear down a socket connection for each logging request, particularly when small requests are sent frequently. We therefore model our client and server logging daemons as long-duration services.
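
The connection-reuse strategy just described can be sketched with the ACE Socket wrapper facades roughly as follows. This is an illustrative fragment, not the actual client logging daemon code: the Log_Forwarder class name and the record format are hypothetical.

  #include "ace/SOCK_Connector.h"
  #include "ace/SOCK_Stream.h"
  #include "ace/INET_Addr.h"

  // Illustrative sketch only: connect once, then reuse the same TCP
  // connection for many log records instead of paying setup cost per record.
  class Log_Forwarder
  {
  public:
    int open (const char *host, unsigned short port)
    {
      ACE_INET_Addr server (port, host);
      ACE_SOCK_Connector connector;
      return connector.connect (peer_, server);  // one setup, reused below
    }

    int send_log_record (const void *record, size_t len)
    {
      // send_n() loops until the whole record is written or an error occurs.
      return peer_.send_n (record, len) == (ssize_t) len ? 0 : -1;
    }

    void close () { peer_.close (); }

  private:
    ACE_SOCK_Stream peer_;
  };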

2.1.2 Internal versus External Services

Services can be classified as internal or external. The primary tradeoffs in this dimension are service initialization time, isolation of one service from another, and simplicity.

Internal services execute in the same address space as the server that receives the request, as shown in Figure 2.1 (1). As described in Chapter 5 of C++NPv1, an internal service can run iteratively, concurrently, or reactively in relation to other internal services. Internal services usually have low initialization latency and their context switch time is generally shorter than that of services residing in separate processes.

Figure 2.1. Internal versus External Services

Internal services may also reduce application robustness, however, since separate services within a process aren't protected from one another. One faulty service can therefore corrupt data shared with other internal services in the process, which may produce incorrect results, crash the process, or cause the process to hang indefinitely. As a result, internal services should be reserved for code that can be trusted to operate correctly when run in the context of other services in an application's address space.

External services execute in different process address spaces. For instance, Figure 2.1 (2) illustrates a master service process that monitors a set of network ports. When a connection request arrives from a client, the master accepts the connection and then spawns a new process to perform the requested service externally. External services may be more robust than internal services since the failure of one need not cause the failure of another. To increase robustness, therefore, mission-critical application services are often isolated in separate processes. The price for this robustness, however, can be a reduction in performance due to process management and IPC overhead.

Some server frameworks support both internal and external services. For example, the INETD superserver [Ste98] is a daemon that listens for connection requests or messages on certain ports and runs programs to perform the services associated with those ports. System administrators can choose between internal and external services in INETD by modifying the inetd.conf configuration file as follows:

  • INETD can be configured to execute short-duration services, such as ECHO and DAYTIME, internally via calls to statically linked functions in the INETD program.

  • INETD can also be configured to run longer-duration services, such as FTP and TELNET, externally by spawning separate processes.

Sidebar 4 (page 31) describes this and other service provisioning mechanisms that use both internal and external services.

Logging service All logging server implementations in this book are designed as internal services. As long as only one type of service is configured into our logging server, we needn't isolate it from harmful side effects of other services. There are valid reasons to protect the processing of different client sessions from each other, however, particularly if services are linked dynamically using the Component Configurator pattern [POSA2]. Chapter 8 in C++NPv1 therefore illustrates how to implement a logging server as an external service using the ACE_Process and ACE_Process_Manager classes.
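
For reference, spawning an external service process with the classes mentioned above might look roughly like the following sketch; the logging_server program name is hypothetical and error handling is elided.

  #include "ace/Process_Manager.h"
  #include "ace/Process.h"

  // Minimal sketch of launching an external service in its own process.
  // "logging_server" is a hypothetical program name.
  int spawn_external_service ()
  {
    ACE_Process_Options options;
    options.command_line (ACE_TEXT ("logging_server"));

    pid_t pid = ACE_Process_Manager::instance ()->spawn (options);
    return pid == ACE_INVALID_PID ? -1 : 0;
  }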

2.1.3 Stateful versus Stateless Services

Services can be classified as stateful or stateless. The amount of state, or context, that a service maintains between requests impacts the complexity and resource consumption of clients and servers. Stateful and stateless services trade off efficiency for reliability, with the right choice depending on a variety of factors, such as the probability and impact of host and network failures.

Stateful services cache certain information, such as session state, authentication keys, identification numbers, and I/O handles, in a server to reduce communication and computation overhead. For instance, Web cookies enable a Web server to preserve state across multiple page requests.

Stateless services retain no volatile state within a server. For example, the Network File System (NFS) [Ste94] provides distributed data storage and retrieval services that don't maintain volatile state information within a server's address space. Each request sent from a client is completely self-contained with the information needed to carry it out, such as the file handle, byte count, starting file offset, and user credentials.
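
To make the "completely self-contained" point concrete, a stateless read request in the NFS style might carry fields roughly like the hypothetical structure below; the field names and sizes are illustrative and do not reflect the actual NFS wire format.

  #include <cstdint>

  // Hypothetical, illustrative layout of a stateless read request: every
  // field the server needs travels with the request, so the server caches
  // nothing between calls.  This is not the real NFS wire format.
  struct Read_Request
  {
    unsigned char file_handle[32];   // opaque handle identifying the file
    std::uint64_t starting_offset;   // where in the file to begin reading
    std::uint32_t byte_count;        // how many bytes to return
    std::uint32_t user_id;           // caller's credentials
    std::uint32_t group_id;
  };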

Some common network applications, such as FTP and TELNET , don't require retention of persistent application state information between consecutive service invocations. These stateless services are generally fairly simple to configure and reconfigure reliably. Conversely, the CORBA Naming Service [Obj98] is a common middleware service that manages various bindings whose values may need to be retained even if the server containing the service crashes. If preserving state across failures is paramount to system correctness, you may need to use a transaction monitor [GR93] or some type of active replication [BvR94].

Logging service Our networked logging service exhibits both stateful and stateless characteristics. The state maintained by the server process resides largely in the OS kernel (e.g., connection blocks) and the file system (e.g., the log records). Both client and server logging daemon services in this book are stateless, however, since they process each record individually without requiring or using any information from, or expectation of, any previous or possible future request. The need to handle any possible request ordering is not a factor since we use TCP/IP, which provides an ordered, reliable communication byte stream.

2.1.4 Layered/Modular versus Monolithic Services

Service implementations can be classified as layered/modular or monolithic. The primary tradeoffs in this dimension are service reusability, extensibility, and efficiency.

Layered/modular services can be decomposed into a series of partitioned and hierarchically related tasks. For instance, application families can be specified and implemented as layered/modular services, as shown in Figure 2.2 (1). Each layer can handle a self-contained portion of the overall service, such as input and output, event analysis, event filtering, and service processing. Interconnected services can collaborate by exchanging control and data messages for incoming and outgoing communication.

Figure 2.2. Layered/Modular versus Monolithic Services

Powerful communication frameworks have emerged over the years to simplify and automate the development and configuration of layered/modular services [SS93]. Examples include System V STREAMS [Rit84], the x-kernel [HP91], the Conduits+ framework [HJE95], and the ACE Streams framework (Chapter 9). These frameworks decouple the service functionality from the following service design aspects:

  • Compositional strategies, such as the time and/or order in which services and protocols are composed together (described in Chapters 5 and 9 of this book)

  • Concurrency and synchronization strategies, such as task- and message-based architectures (described in Chapter 5 of C++NPv1) that execute services at run time

  • Communication strategies, such as the protocols and messaging mechanisms (described in Chapters 1 through 3 of C++NPv1) that interconnect services [SS95b]
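
As a rough illustration of this layered/modular style, the following minimal sketch pushes one hypothetical filtering layer onto an ACE Stream (the framework itself is covered in Chapter 9); the Filter_Task class and module name are illustrative.

  #include "ace/Stream.h"
  #include "ace/Module.h"
  #include "ace/Task.h"

  // A hypothetical layer that examines each message and forwards it to the
  // next module in the stream.
  class Filter_Task : public ACE_Task<ACE_MT_SYNCH>
  {
  public:
    virtual int put (ACE_Message_Block *mb, ACE_Time_Value *timeout = 0)
    {
      // ... filter or transform <mb> here ...
      return this->put_next (mb, timeout);   // hand off to the adjacent layer
    }
  };

  int build_stream (ACE_Stream<ACE_MT_SYNCH> &stream)
  {
    // Each ACE_Module bundles a writer task (outgoing) and a reader task
    // (incoming) for one layer of the service.
    return stream.push (new ACE_Module<ACE_MT_SYNCH>
                          (ACE_TEXT ("Filter"),
                           new Filter_Task,     // writer side
                           new Filter_Task));   // reader side
  }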

Monolithic services are tightly coupled clumps of functionality that aren't organized hierarchically. They may contain separate modules of functionality that vaguely resemble layers, but are most often tightly coupled via shared, global variables, as shown in Figure 2.2 (2) (page 27). They are also often tightly coupled functionally, with control flow diagrams that look like spaghetti. Monolithic services are therefore hard to understand, maintain, and extend. While they may sometimes be appropriate in short-lived, "throw away" prototypes [FY00], they are rarely suitable for software that must be maintained and enhanced by multiple developers over longer periods of time. [1]

[1] After you become proficient with the ACE toolkit, you'll find it's usually much faster to build a properly layered prototype than to hack together a monolithic one.

Developers can often select either layered or monolithic service architectures to structure their networked applications. The ACE Task and Streams frameworks, discussed in Chapters 6 and 9, provide efficient and extensible ways to build modular services. The advantages of designing layered/modular services are

  • Layering enhances reuse since multiple higher-layer application services can share lower-layer services.

  • Implementing applications via an interconnected series of layered services enables transparent, incremental enhancement of their functionality.

  • A layered/modular architecture facilitates macro-level performance improvements by allowing the selective omission of unnecessary service functionality or selective configuration of contextually optimal service functionality.

  • Modular designs generally improve the implementation, testing, and maintenance of networked applications and services.

There can also be some disadvantages, however, with using a layered/modular architecture to develop networked applications:

  • The modularity of layered implementations can cause excessive overhead. For example, layering may be inefficient if buffer sizes don't match in adjacent layers, thereby causing additional segmentation, reassembly, and transmission delays.

  • Communication between layers must be designed and implemented properly, which can introduce another source of errors.

  • Information hiding within layers can make it hard to manage resources predictably in applications with stringent real-time requirements.

Logging service By carefully separating design concerns, our client and server logging daemons are designed using the layered/modular architecture depicted in Figure 2.3 and described below.

Figure 2.3. Networked Logging Service Architecture Layers

  1. Event infrastructure layer, which detects and demultiplexes events and dispatches them to their associated event handlers. Chapters 3 and 4 describe how the Reactor pattern and ACE Reactor framework can be applied to implement a generic event infrastructure layer. Likewise, Chapter 8 describes how the Proactor pattern and ACE Proactor framework can be applied for a similar purpose.

  2. Configuration management layer, which installs, initializes, controls, and shuts service components down. Chapter 5 describes how the Component Configurator pattern and ACE Service Configurator framework can be applied to implement a generic configuration management layer.

  3. Connection management and concurrency layer, which performs connection and initialization services that are independent of application functionality. Chapters 6 and 7 describe how the Acceptor-Connector and Half-Sync/Half-Async patterns and the ACE Acceptor-Connector and Task frameworks can implement a generic connection management layer.

  4. Application layer, which customizes the application-independent classes provided by the other layers to create concrete objects that configure applications, handle events, establish connections, exchange data, and perform logging-specific processing. Throughout this book, we illustrate how to implement these application-level capabilities using the ACE frameworks and the ACE wrapper facade classes.
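
To give a feel for the event infrastructure layer in code, here is a minimal, hypothetical sketch of a reactive event handler registered with the singleton ACE_Reactor; the real implementations appear in Chapters 3 and 4, and the Logging_Event_Handler name here is illustrative.

  #include "ace/Reactor.h"
  #include "ace/Event_Handler.h"

  // Hypothetical handler: the reactor calls handle_input() whenever the
  // associated socket handle becomes readable.
  class Logging_Event_Handler : public ACE_Event_Handler
  {
  public:
    explicit Logging_Event_Handler (ACE_HANDLE h) : handle_ (h) {}

    virtual ACE_HANDLE get_handle (void) const { return handle_; }

    virtual int handle_input (ACE_HANDLE = ACE_INVALID_HANDLE)
    {
      // ... receive and process one log record here ...
      return 0;   // keep this handler registered
    }

  private:
    ACE_HANDLE handle_;
  };

  int run_event_loop (ACE_HANDLE h)
  {
    Logging_Event_Handler handler (h);
    if (ACE_Reactor::instance ()->register_handler
          (&handler, ACE_Event_Handler::READ_MASK) == -1)
      return -1;
    return ACE_Reactor::instance ()->run_reactor_event_loop ();
  }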

2.1.5 Single-Service versus Multiservice Servers

Protocols and services rarely operate in isolation, but instead are accessed by applications within the context of a server. Servers can be designed either as single service or multiservice. The tradeoff in this dimension is between resource consumption and robustness.

Single-service servers offer only one service. As shown in Figure 2.4 (1), a service can be internal or external, but there's only a single service per process. Examples of single-service servers include

Figure 2.4. Single-service versus Multiservice Servers

  • The RWHO daemon (RWHOD), which reports the identity and number of active users, as well as host workloads and host availability

  • Early versions of UNIX standard network services, such as FTP and TELNET, that ran as distinct single-service daemons initiated at OS boot time [Ste98]

Each instance of these single-service servers executed externally in a separate process. As the number of system servers increased, however, this statically configured, single-service- per-process approach incurred the following limitations:

  • It consumed excessive OS resources, such as virtual memory and process table slots.

  • It caused redundant initialization and networking code to be written separately for each service program.

  • It required running processes to be shut down and restarted manually to install new service implementations.

  • It led to ad hoc and inconsistent administrative mechanisms' being used to control different types of services.

Multiservice servers address the limitations of single-service servers by integrating a collection of single-service servers into a single administrative unit, as shown in Figure 2.4 (2). Examples of multiservice servers include INETD (which originated with BSD UNIX [MBKQ96, Ste98]), LISTEN (which is the System V UNIX network listener service [Rag93]), and the Service Control Manager (SCM) (which originated with Windows NT [SR00]). Sidebar 4 compares and contrasts these multiservice servers.

Sidebar 4: Comparing Multiservice Server Frameworks

This sidebar compares the multiservice server frameworks supported by various versions of UNIX and Windows.

  • INETD's internal services, such as ECHO and DAYTIME, are fixed at static link time. The master INETD daemon permits dynamic reconfiguration of its external services, such as FTP or TELNET. For instance, when the INETD daemon is sent the SIGHUP signal, it reads its inetd.conf file and performs the socket()/bind()/listen() sequence for all services listed in that file. Since INETD does not support dynamic reconfiguration of internal services, however, any newly listed services must still be processed by spawning slave daemons via fork() and the exec*() family of system functions.

  • The System V UNIX LISTEN port monitoring facility is similar to INETD, though it only supports connection-oriented protocols accessed via TLI and System V STREAMS, and doesn't provide internal services. Unlike INETD, however, LISTEN supports standing servers by passing initialized file descriptors via STREAMS pipes from the LISTEN process to a previously registered standing server.

  • Unlike INETD and LISTEN, the Windows SCM is not a port monitor, since it doesn't provide built-in support for listening to a set of I/O ports and dispatching server processes on demand when client requests arrive. Instead, it provides an RPC-based interface that allows a master SCM process to automatically initiate and control (i.e., pause, resume, or terminate) administrator-installed services (such as FTP and TELNET) that typically run as separate threads within either a single-service or a multiservice daemon process. Each installed service is individually responsible for configuring itself and monitoring any communication endpoints. These endpoints may be more general than TCP or UDP sockets; for example, they can be Windows named pipes.
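
The fork()/exec() hand-off that INETD uses for external services, described in the first bullet above, can be sketched roughly as follows on POSIX systems; the in.ftpd program name is hypothetical and error handling is pared down.

  #include <unistd.h>
  #include <sys/types.h>

  // Rough sketch of how a superserver hands an accepted connection to an
  // external service: the connected socket becomes the child's stdin/stdout,
  // then the child overlays itself with the service program.
  int hand_off_connection (int client_fd)
  {
    pid_t pid = fork ();
    if (pid == -1)
      return -1;
    if (pid == 0)                                   // child: become the service
      {
        dup2 (client_fd, STDIN_FILENO);
        dup2 (client_fd, STDOUT_FILENO);
        execlp ("in.ftpd", "in.ftpd", (char *) 0);  // hypothetical service binary
        _exit (1);                                  // only reached if exec fails
      }
    close (client_fd);                              // parent: keep listening
    return 0;
  }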

A multiservice server can yield the following benefits:

  • It can reduce OS resource consumption by spawning servers on demand.

  • It simplifies server development and reuses common code by automatically daemonizing a server process (described in Sidebar 5), initializing transport endpoints, monitoring ports, and demultiplexing/dispatching client requests to service handlers.

  • It can allow external services to be updated without modifying existing source code or terminating running server processes.

  • It consolidates network service administration via a uniform set of configuration management utilities. For example, the INETD superserver provides a uniform interface for coordinating and initiating external services, such as FTP and TELNET, and internal services, such as DAYTIME and ECHO.

Logging service Implementations of the networked logging service in C++NPv1 all used single-service servers. Starting in Chapter 5 of this book, various entities in the networked logging service will be configured via the ACE Service Configurator framework, which can be used to configure multiservice superservers similar to INETD .

Sidebar 5: Daemons and Daemonizing

A daemon is a long-running server process that executes in the "background" performing various services on behalf of clients [Ste98]. A daemon is not associated with an interactive user or controlling terminal. It's therefore important to ensure a daemon is designed robustly to recover from errors and to manage its resources carefully.

Daemonizing a UNIX process involves spawning a new server process, closing all unnecessary I/O handles, changing the current filesystem directory away from the initiating user's, resetting the file access creation mask, disassociating from the controlling process group and controlling terminal, and ignoring terminal I/O-related events and signals. An ACE server can convert itself into a daemon on UNIX by invoking the static method ACE::daemonize() or passing the '-b' option to ACE_Service_Config::open() (page 141). A Windows Service [Ric97] is a form of daemon and can be programmed in ACE using the ACE_NT_Service class.
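
A minimal sketch of the ACE::daemonize() usage mentioned above might look like this on UNIX, assuming the server's event loop is started afterward:

  #include "ace/ACE.h"

  int main (int, char *[])
  {
    // Convert this process into a daemon before opening any endpoints:
    // detach from the controlling terminal and run in the background.
    if (ACE::daemonize () == -1)
      return 1;

    // ... open listening endpoints and run the server's event loop here ...
    return 0;
  }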

2.1.6 One-shot versus Standing Servers

In addition to being single service or multiservice, networked servers can be designed as either one shot or standing. The primary tradeoffs in this dimension involve how long the server runs and uses system resources. When evaluating choices in this dimension, consider anticipated usage frequency for the service(s) offered by the server, as well as requirements for startup speed and configuration flexibility.

One-shot servers are spawned on demand, for example, by an INETD superserver. They perform service requests in a separate thread or process, as shown in Figure 2.5 (1). A one-shot server terminates after the completion of the request or session that triggered its creation. An example of a one-shot server is a UNIX FTP server. When an FTP client connects to the server, a new process is spawned to handle the FTP session, including user authentication and file transfers. The FTP server process exits when the client session ends.

Figure 2.5. One-shot versus Standing Servers

A one-shot server doesn't remain in system memory when it's idle. Therefore, this design strategy can consume fewer system resources, such as virtual memory and process table slots. This advantage is clearly more pronounced for services that are seldom used.

Standing servers continue to run beyond the lifetime of any particular service request or session they process. Standing servers are often initiated at boot time or by a superserver after the first client request. They may receive connection and/or service requests via local IPC channels, such as named pipes or sockets, that are attached to a superserver, as shown in Figure 2.5 (2). Alternatively, a standing server may take ownership of, or inherit, an IPC channel from the original service invocation.

An example of a standing server is the Apache Web server [HMS98]. Apache's initial parent process can be configured to pre-spawn a pool of child processes that service client HTTP requests. Each child process services a tunable number of client requests before it exits. The parent process can spawn new child processes as required to support the load on the Web server.

Compared with one-shot servers, standing servers can improve service response time by amortizing the cost of spawning a server process or thread over a series of client requests. As in Apache's case, they can also be tuned adaptively to support differing types of load. The ability of a standing server design to terminate and respawn service processes periodically can also guard against OS or application problems, such as memory leaks, that degrade performance over time or become security holes.

Logging service We implement the client and server logging daemons in our networked logging service as standing servers to improve performance of the overall system. We justify the tradeoff of occupying process slots and system resources since logging is a service that's used frequently. Thus, restarting a logging server for each client would delay the client making the request and degrade overall system performance.

The choice between one-shot or standing servers is orthogonal to the choice between short- or long-duration services described in Section 2.1.1. The former design alternative usually reflects OS resource management constraints, whereas the latter design alternative is a property of a service. For example, we could easily change to a short-duration service without changing the standing nature of the server itself. Likewise, if the logging service is lightly used in some environments, it could easily be changed to a one-shot server, with or without revisiting the duration of each service request.
