Networked applications such as word processing, spreadsheet, database, e-mail, and Web browsing are now routinely deployed on a massive scale to enable entire organizations to work together and share resources. However, the application architecture can have a profound influence on the traffic flows over the network, and the network designer should understand the basic application models commonly deployed today. In a home office or even a home network, all applications are typically installed and run in isolation on the user's workstation. On a network, particularly a medium-sized to large network, applications may be centralized or fully or partially distributed to improve productivity and decrease overall costs.
Application architectures have evolved significantly over the past decade, and understanding how the various application models impact the network is vital in order to characterize traffic flows accurately. The key application models are presented in the following text and are illustrated in Figure 1.4.
Figure 1.4: (a) Centralized model, (b) decentralized model, (c) client/server model, (d) distributed model.
Centralized Model. Network application models were initially quite simple. Before the explosion of desktop computing, the application typically ran on shared central computers (usually a large mainframe). Remote users logged in to the mainframe via a virtual terminal protocol (either block or character oriented). The traffic between the user and the mainframe effectively comprised keystrokes or blocks of display updates, and data were predominantly text.
Decentralized Model. With the rise in desktop computing we saw for the first time a move toward distributed computing, with discrete applications running on user workstations, and these workstations were networked together to share common resources such as printers and file servers. The advantage of decentralization was increased power and performance to the desktop; the disadvantage was cost and maintenance. Imagine, for example, trying to ensure that your 3,000-member workforce has the latest patch for the word processing application.
Client/Server Model. The next major shift in application architecture was to combine the best of the centralized and decentralized approaches to form a hybrid architecture, where a centralized server running the application cooperated with high-performance thin client software. In this way data integrity and consistency could be managed centrally, while high-bandwidth features, such as graphical interface handling, could be devolved locally.
Distributed Model. It seemed only natural to evolve this architecture one step further, so that many cooperating entities could be involved in the provision of an application without any knowledge of the location of each entity; this is the so-called distributed object model. There are now a number of middleware stacks that support application distribution (including DCOM, CORBA, and EJB).
We now discuss the client/server model and the distributed model in more detail.
In the client/server model the application is divided into two functions: a front-end client, which presents information to the user and collects information from the user, and a back-end server, which stores, retrieves, and manipulates data and generally handles the bulk of the computing tasks for the client. Typically the server part of the application runs on a more powerful platform than does the client (e.g., a minicomputer or mainframe) and also acts as a central data repository for many client computers (thereby promoting consistency and making the system easy to manage). The client/server architecture increases workgroup productivity by combining the best features of standalone workstations with the best features of minicomputers and mainframes. The model makes the best use of high-end server hardware and reduces the load on client PCs.
In terms of operation the server is a program that runs on a network-attached computer, and provides a service to one or more clients. The server receives requests from clients over the network, performs the necessary processing to service those requests, and returns results to the clients. A request could even be to download part of the application. The client is the program (typically running on a user PC or workstation) that sends requests to a server and waits for a response (e.g., a database query). A single server instance can service several client requests concurrently. For this reason, designing and implementing servers tends to be more difficult than implementing clients. For a client and server to communicate and share workload, an interprocess communication (IPC) facility is required (such as the TCP socket interface).
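The request/response interaction described above can be sketched with a minimal TCP client and server. This is an illustrative example only (the server's "processing" is simply upper-casing the request text, a stand-in for real application logic); it uses the TCP socket interface mentioned in the text as the IPC facility, and handles each client on its own thread so a single server instance can service several requests concurrently.

```python
import socket
import threading

def handle_client(conn):
    """Service one client: read its request, process it, return the result."""
    with conn:
        request = conn.recv(1024).decode()
        # Stand-in for real server-side processing of the request.
        conn.sendall(request.upper().encode())

def serve(host="127.0.0.1", port=0):
    """Start the server; returns the bound port so clients can reach it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))       # port 0 lets the OS pick a free port
    srv.listen()

    def accept_loop():
        while True:
            conn, _addr = srv.accept()
            # One thread per connection: the server can service
            # several client requests concurrently.
            threading.Thread(target=handle_client, args=(conn,),
                             daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv.getsockname()[1]

def client_request(port, text):
    """Client side: send a request, then block waiting for the response."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(text.encode())
        return sock.recv(1024).decode()
```

A client would then call, for example, `client_request(serve(), "hello")` and receive `"HELLO"`. Note how the asymmetry the text describes shows up directly: the server code (socket setup, accept loop, per-client threading) is noticeably more involved than the client code.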
The client/server architecture contrasts with the classical centralized architecture adopted by early mainframe installations. In a centralized environment, the clients are little more than dumb terminals that act as simple data entry and display devices. The terminal process does very little actual work; the user typically fills in the fields of a text-based form and these data are simply forwarded to the central computer for processing. All processing and screen formatting are performed by the central computer; the terminal simply displays the preformatted data as they arrive. However, in a client/server environment the client has much more control over the final visual presentation to the user. Instead of the data being preformatted at the server, data are sent back in raw format, and the client application must determine how best to translate and display these data. This model enables changes to be made at the client interface without having to change the server code. A good example of a client/server application is the X Windows protocol.
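The contrast between preformatted and raw data can be made concrete with a small sketch. Here the server returns the query result as raw, structured data (JSON in this hypothetical example), and the client alone decides how to render it, so the presentation can change without touching server code. The record store and field names are invented for illustration.

```python
import json

# Server side: return the query result as raw, structured data,
# leaving every presentation decision to the client.
def server_fetch_record(record_id):
    # Hypothetical in-memory data store standing in for a real database.
    records = {1: {"name": "Ada", "balance": 1250.5}}
    return json.dumps(records[record_id])

# Client side: translate the raw data into whatever visual form suits
# this particular interface. A different client could render the same
# raw data differently with no change to the server.
def client_render(raw):
    rec = json.loads(raw)
    return f"{rec['name']}: ${rec['balance']:.2f}"
```

Calling `client_render(server_fetch_record(1))` yields `"Ada: $1250.50"`; a centralized mainframe, by contrast, would have shipped that fully formatted string to a dumb terminal.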
Common Object Request Broker Architecture (CORBA) and Enterprise JavaBeans (EJB) are examples of new software architectures that enable applications to be fully distributed and isolated from the plumbing aspects of the network infrastructure. The architecture of these platforms enables components of an application to communicate transparently regardless of location—with entities running on the same machine, different machines, or indeed different networks. For applications such as e-commerce this represents a powerful and highly flexible way to distribute performance and features. These architectures can present interesting challenges for a network planner, both in terms of the traffic dynamics and in areas such as security. Distributed applications are being deployed on a large scale for Internet e-commerce, health care, and financial applications and represent the new wave of truly distributed computing.
CORBA is currently the de facto set of APIs and middleware services for developing distributed applications; the specifications for CORBA are defined by the Object Management Group (OMG). Applications access distributed objects through an API called the Object Request Broker (ORB). The ORB transparently forwards object requests from clients to the appropriate server objects and returns the results. Distributed objects can be organized in a client/server or peer-to-peer relationship, often changing dynamically depending upon the context of the transaction. The Internet Inter-ORB Protocol (IIOP) is a high-level communications protocol used by CORBA objects to support remote method invocation (i.e., it enables cooperating entities to communicate transparently and activate remote functions over protocols such as TCP/IP). IIOP is the closest CORBA comes to network plumbing and is analogous to the HTTP protocol (the communications protocol used to transport Web traffic). For example, by using the CORBA/IIOP protocol, a Java applet running on a client machine could communicate transparently with a servlet running in the Web server (it need know nothing about the location of the servlet). However, IIOP is more scalable and efficient than HTTP, since IIOP can reuse a single connection for multiple requests; hence, the applet and the servlet can exchange method calls in both directions over a single connection. Once IIOP communication is established, object activations and method calls on those objects may occur using either a direct IIOP connection or IIOP over HTTP. Note that it is the client ORB that decides whether to use a direct IIOP connection or revert to IIOP over HTTP. The client will use the best quality of service available: it tries to establish a direct IIOP connection first and, if that fails, it falls back to IIOP over HTTP.
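The essence of remote method invocation—a client calling a method on an object without knowing where that object lives—can be sketched without a full CORBA ORB. The example below is not CORBA/IIOP; it uses Python's standard-library XML-RPC machinery as an analogous mechanism, where the proxy object plays the role of the client-side ORB stub and marshals the call over the wire (here XML over HTTP rather than IIOP). The `add` method and port handling are illustrative assumptions.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def start_server(port=0):
    """Expose a remote object method; returns the port actually bound."""
    srv = SimpleXMLRPCServer(("127.0.0.1", port), logRequests=False)
    # Register a method that remote clients may invoke by name.
    srv.register_function(lambda a, b: a + b, "add")
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv.server_address[1]

def invoke_remote(port):
    """Client side: the call looks local, but executes on the server."""
    proxy = ServerProxy(f"http://127.0.0.1:{port}")
    # The proxy marshals the call and its arguments over the network,
    # much as an ORB stub would marshal an IIOP request.
    return proxy.add(2, 3)
```

Running `invoke_remote(start_server())` returns `5`, computed on the server; the client code contains no hint of where the object actually executes, which is precisely the location transparency the distributed object model provides.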