Overview of Client/Server
Client/server is a style of computing in which a client process requests services from a server process. In the simplest terms, a server is a program that makes some kind of service available, such as e-mail, file sharing, FTP, the Web, or data in the form of a database server. A client is an application that connects to a server to make use of the service it provides.
Clients and servers have different jobs. A server's responsibilities include providing backups to keep data safe, securing the service against unwanted intrusion, providing timely access to the service, and maintaining reliable storage to ensure high availability of the service.
A client's responsibilities include providing a pleasing user interface, making use of limited server resources in a responsible and economical way, and, of course, fulfilling the goals of the application.
Client applications include mail clients, such as Eudora and Microsoft Outlook. These applications connect to a mail server to retrieve e-mail messages. Internet Explorer is a client that connects to a Web server.
SQL Server client applications that ship with the product include the Query Analyzer, SQL Server Enterprise Manager, Profiler, SQL Agent, and even Data Transformation Services. Each of these applications connects to the database engine and uses the engine's services in a different way. To use the service, each of these clients sends a query to the server. The server processes the query and sends back the results.
Although it is not a requirement of the client/server model, in almost every case, a server allows many clients to connect to it at once. A client might or might not have the ability to connect to multiple servers at once.
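The request/response exchange described above can be sketched with plain TCP sockets. This is an illustrative toy, not SQL Server's actual wire protocol: the "query" here is just a string the server upper-cases before replying, and the host, port, and message are invented for the example.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

def serve(server_sock):
    """Server side: accept one client, read its request, send a response."""
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        # A real database server would parse and execute the query here;
        # this toy just echoes it back upper-cased as the "result".
        conn.sendall(("RESULT: " + request.upper()).encode())

# The server binds, listens, and handles the client on a background thread,
# mimicking a server process that serves clients concurrently.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen()
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# The client connects, sends a "query", and reads the result.
with socket.create_connection((HOST, port)) as client:
    client.sendall(b"select name from employees")
    reply = client.recv(1024).decode()

print(reply)  # RESULT: SELECT NAME FROM EMPLOYEES
```

A real server would loop on `accept()` and spawn a handler per connection; that loop is what lets many clients connect at once.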
Figure 1.1 shows a representation of a typical client/server database environment.
Figure 1.1. Typical client/server architecture.
Before client/server, two other important architectures ruled the world: mainframe, or host-based computing, and PC/LAN-based computing. It's informative to look back at where this all started.
Mainframe or Host-Based Computing
The original computing architecture was the mainframe or host-based architecture. In this environment, virtually all the processing power exists on a central host machine. The business logic tier and the data access tier reside centrally on the host. The user of the application interfaces with the data through a dumb terminal. The terminal is referred to as dumb simply because it has no inherent processing power of its own. The only processing provided by the terminal is sending keystrokes to the host and displaying data to the user. Although data is displayed on the dumb terminal, the host computer makes all the decisions about how the data is to be presented.
Because these host machines were extremely expensive and maintenance costs were equally high, it made sense for an organization to centralize as much of its data and application logic as possible. Over time, organizations found that this centralized environment caused severe backlogs both in application processing and development.
In this environment, applications and data are centralized and exist solely on the host computer. Communications are virtually eliminated as a bottleneck, even when the host and the terminal are separated by hundreds of miles and share only a relatively slow asynchronous connection. Application development and maintenance are also centralized, providing an important measure of control and security. Administration of the system (backup, data maintenance) is handled centrally as well.
This highly centralized approach to data processing led to a number of issues. One problem was availability: if the mainframe was down, all processing ceased. Another was cost. The combined effect of high purchase prices and exorbitant maintenance fees was that processing cycles on the host became far more expensive than processing cycles on a PC. A further issue was that end users began to demand instant access to data and information. In the mainframe environment, requesting a new report or a change to an existing report often required submitting a request to have a job written to run the report. The report then had to be scheduled to run, and eventually the user would receive the output. This type of environment did not lend itself well to end users running ad hoc queries and reports against their data, especially to test various "what if" scenarios.
End users wanted, and needed, more personal control over their data. There was also a need to offload some of the mainframe data and processing in order to reduce costs.
The first real answer to this problem was the PC.
As the PC became affordable, it made sense to use the inherent processing power of the PC to offload data and work from the host. Many departmental users began using their PCs to perform various operations that used to rely on the host. The low cost and high availability of PC computing were extremely attractive to people who were forced to wait in line for the privilege to pay high prices for mainframe processing. The real nightmare of host processing has always been the tremendous backlog of applications waiting to be developed and maintained. PC users found that they could build their own applications (admittedly amateur, but often more usable than the enterprise applications) faster than they could fill out the forms requesting apps from the central MIS group.
The first widely used business applications for the PC were spreadsheet applications that could perform much of the number crunching and calculations that used to be performed on the host. Eventually, file system databases such as dBASE and FoxPro became prevalent. Users were able to create their own database-driven applications.
In this architecture, the presentation logic and the business logic typically reside on the local PC. The data can reside on the local PC as well, but it often resides on another machine within the local area network, perhaps a network file server, so that the information can be shared among multiple users.
The file system databases work well for individual applications and local PC use. They are not, however, well suited to multiuser environments in which many users need access to the same information. Sharing these file system databases over a LAN places increasing stress on network traffic, and they do not scale well to the large enterprise-type applications needed to run a business.
The advent of the multiuser relational database management system (RDBMS) was really the key technology that drove the client/server computing architecture. The RDBMS served as a central storage location for an organization's data. The RDBMS was designed to handle multiuser access to a shared set of data. All the locking and connection management is handled by the RDBMS along with security. Structured Query Language (SQL) was created to be a universal programming language to request specific data from an RDBMS.
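SQL's declarative style is easy to illustrate. The sketch below uses Python's built-in `sqlite3` module; SQLite is an embedded engine rather than a client/server RDBMS, so it stands in here only to demonstrate SQL syntax, and the table and column names are invented for the example.

```python
import sqlite3

# Open an in-memory database (SQLite is embedded, not client/server;
# it serves here only to illustrate SQL itself).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("Ann", "Sales"), ("Bob", "IT"), ("Cho", "Sales")],
)

# A declarative request: describe WHAT data you want,
# not HOW the engine should fetch it.
rows = conn.execute(
    "SELECT name FROM employees WHERE dept = ? ORDER BY name", ("Sales",)
).fetchall()

print(rows)  # [('Ann',), ('Cho',)]
```

Against a true RDBMS such as SQL Server, the same `SELECT` would travel over a network connection, and the server, not the client, would handle locking, security, and concurrent access to the shared table.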
The client/server architecture was really a marriage of the best features of both the host-based environment and the PC LAN environment. This architecture utilizes the power of the PC to perform the presentation of data along with the complicated business processing that adds value to that data. The RDBMS provides a centralized storage area for data and provides the services to manage shared, concurrent access to that data. The client/server architecture can take many forms, depending on how you choose to separate the presentation, business logic, and data tiers. The following sections examine the predominant client/server architectures in more detail.
Two-Tier Client/Server
When you hear the term client/server, the inclination is to think of only the two sides of the transaction: the client and the server. Most of us are familiar with this traditional two-tiered view of client/server, which involves a client application running on a workstation and a server application running on a server.
In a typical two-tier client/server system, the client application connects directly to a server application, such as SQL Server. This usually means that each client workstation must be loaded with vendor-specific libraries and drivers to establish connections with the server. Client applications are also responsible for logging onto the server and maintaining connections, along with handling error messages and the like returned from the server.
The business logic layer can reside on the client, the server, or both in a two-tier system.
Three-Tier and n-Tier Client/Server
The client/server model does allow for more than just these two tiers. A middle tier is a program that sits between the client and the server and provides beneficial services to both. (One example of such a service is described in the following Note.) Applications that make use of a middle tier are called three-tier or n-tier applications. When many middle tiers exist between the endpoints, each serving a different function, the design is called an n-tier model.
The primary goal of the n-tier architecture is to separate the business logic from both the presentation and data access layers into a set of reusable objects, sometimes called business objects. Business objects are like stored procedures in that they allow you to centralize your business logic and keep it separate from your client applications.
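As a sketch of the idea (the class, the discount rule, and all names are invented for illustration), a business object centralizes a rule, say an order-discount policy, so that every client application calls the object instead of duplicating the rule in its own code:

```python
class OrderService:
    """Hypothetical middle-tier business object: client applications call
    this instead of embedding pricing rules or data access themselves."""

    DISCOUNT_THRESHOLD = 100.0  # invented rule: 10% off orders of $100+

    def __init__(self, price_lookup):
        # The data-access tier is injected, so the business rule stays
        # independent of any particular database or client application.
        self._price_lookup = price_lookup

    def order_total(self, item, quantity):
        subtotal = self._price_lookup(item) * quantity
        if subtotal >= self.DISCOUNT_THRESHOLD:
            subtotal *= 0.90  # apply the centralized discount rule
        return round(subtotal, 2)

# Data-tier stand-in (a real system would query the RDBMS here).
prices = {"widget": 25.0}
service = OrderService(lambda item: prices[item])

print(service.order_total("widget", 2))  # 50.0  (no discount)
print(service.order_total("widget", 5))  # 112.5 (10% off 125.0)
```

If the discount policy changes, only `OrderService` is updated; none of the client applications need to be redistributed, which is exactly the maintenance benefit described above.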
This type of architecture has its advantages. Once an n-tier architecture is put in place, applications can be much easier to develop and maintain. You can bring new applications online relatively easily by reusing existing business objects. Database changes and business logic changes can be made without redistributing client applications. Programmers can concentrate on developing business rules without having to worry about user interface issues.
Although the n-tier architecture has a number of advantages, implementing it successfully requires a complex infrastructure to handle low-level services such as connection pooling, thread maintenance, and transaction monitoring. Some products, however, such as Microsoft Transaction Server (MTS) and the .NET Framework, handle many of these infrastructure issues and reduce the complexity of implementing an n-tier solution.