SQL Server Supporting Applications


Each tier of a business model represents the isolation of a process so that each individual tier performs a portion of the entire system's processing. The number of tiers depends on many factors. In a full implementation, each process delivers an important aspect of the system. The following sections describe the different models so that you can make an informed choice among them.

One- and Two-Tier Systems

A one-tier system in a PC environment dates back 25 or more years to an environment in which the only way to share data was to use the "sneaker net" approach. In other words, copies of the data were made and distributed manually. This approach was adequate at best and caused many headaches for anyone trying to implement a multiple-user environment. Particularly difficult was the merging of updates from multiple copies of the data.

With a single-tier approach, one computer performs all the processing required to view, update, and store the data. Many products use this technique for small, single-user database systems, but it becomes unworkable when multiple users need to share the same data. In this case, a two-tier system with a central data store on a proper server is a better approach.

A two-tier architecture places the user interface and data on separate machines. The client application sends queries across the network to be resolved by the data engine running on the server. The server resolves the query and sends the necessary data back across the network for the client application to display. There are two implementations of a two-tier system: "thin client" and "thick client."

In a thin approach the client application does little or no processing. A thin client just presents the data to the user and, when needed, communicates with the database engine on the server to work with the data. The thin client approach is best when the number of concurrent users accessing the data can be kept to a minimum. Because the server must perform the processing to validate the data as well as all other data manipulations, there is a lot of server overhead related to this approach. A thin client is a good approach for maintainability. If you need to upgrade software, for example, you do not have to do so on 1,000 clients. It also works well for Internet applications.

The thick client (sometimes referred to as a fat client) approach offloads some of the work needed to validate and process the data from the server machine to the client machine. In this approach the client may make some of the determinations as to whether data should be sent to the server based on validity checks coded in the client application. This approach enables many more users to access the same database concurrently. On the downside, though, application maintenance is more demanding, and higher performance is required from the client.

Although a two-tier architecture allows for more flexibility and a larger number of users, it is still quite a limited architecture that can serve only small environments. When a larger number of concurrent users must be supported, a better choice is a multiple-tier ("n-tier") architecture or possibly an Internet architecture.

Three- or More Tier Systems

In a three-tier system, the client system presents the interface and interacts with the user. The database server manipulates the data, but there also exists a middle tier to control some of the operations. The middle tier can be represented by one or more machines that offload some processing from the database server, which allows for a very large number of users. There is usually no differentiation between three or more tiers; instead they are all categorized as n-tier systems.

In an n-tier system, processing is divided into three categories. Each category represents one of the three main tiers in the three-tier system, which is also carried forward in an n-tier system regardless of the number of layers of processing.

The presentation, or client, tier contains the components of the system that interact directly with the user. The sole purpose of this tier is to focus on the end user and present data in an attractive, organized, and meaningful fashion.

The middle tier, or business tier, is responsible for communicating with the database server and also sets up the rules by which communication will be established. The idea behind the business tier is to provide mechanisms to implement the business rules that need to be applied to validate data and also perform intermediate processing that may be needed to prepare the data for presentation to the user. For this reason the business tier is often separated into two divisions: interactions with the user and interactions with the database server. With this approach business rules can be separated from data access processes.

The final tier is the data tier, which is responsible for the execution of the data engine to perform all manipulations of the data. Access to the data tier is made through the middle tier. The data tier doesn't directly interact with the presentation layer.

Internet Application Architecture

Internet applications can be said to fall under a two-, three-, or n-tier model, depending on the complexity and physical design. In an Internet application, the web server prepares the presentation elements to be displayed on the user's browser. If a middle tier server exists, the web server is configured to interact with that server. If no middle tier server exists, the web server interacts directly with the database server.

Depending on the implementation, the client's browser may also be considered a tier. If a set of records is sent to the browser by the web server to allow editing on the browser, the client is considered a tier. In this case, a disconnected recordset or XML data is used for the client's data manipulation. If a round-trip to the web server must take place to submit changes and interact with the data, the client is not considered a tier; it is merely a mechanism to display the HTML that is sent by the web server. In this case the user tier is the Internet server, which acts as the user and prepares the HTML for display.

Internet applications have the best scalability of all application architecture types, meaning that they can support the largest number of concurrent users. The drawbacks to using Internet architecture are that state information must be maintained for each client, a broader set of development skills is needed, and update conflict issues are inherent to the technology.

Although an Internet application is usually implemented so that the database can be accessed from anywhere in the world, it can also be used for internal purposes through a configured intranet. Under this principle, users can access the database from a corporate HTML or XML site.

Whatever model is chosen, many components must come together to complete the entire project. Database servers, email servers, Internet Information servers, and other hardware and software together make up the infrastructure of the system.

SQL Server 2000 Back Office Integration

SQL Server's capability to store data and handle scheduled procedures makes it an important portion of the Microsoft Server family. SQL Server can be used as the data support and back end of websites running on Internet Information Server (IIS). SQL Server stores the data gathered by Systems Management Server (SMS) for enterprise infrastructure analysis. SQL Server can even supply information to the email system through Exchange.

SQL Server is quite at home hosting the databases that maintain the Internet's data. Through Microsoft's development framework, IIS operates as a midpoint server between the web pages that expose the data to the Internet and the back-end storage of the data within SQL Server.

SMS provides change and configuration management as well as inventory over software and hardware within an organization. The information gathered by SMS is stored within databases on SQL Server, where it is updated as changes in the computer environment occur. SQL Server and SMS enable organizations to provide relevant software and updates to users quickly and cost-effectively while managing expensive assets.

The Exchange mail system, as well as other back-office systems, can utilize SQL Server capabilities for data storage and administration over related data activity. The many functionalities of the server provide flexibility when responding to the needs of diverse business systems.

SQL Server 2000 supports heterogeneous connectivity to any data source through the use of OLE-DB and ODBC drivers, which are available for most common databases. SQL Server can move data to and from these sources using Data Transformation Services (DTS), replication, and linked server operations. SQL Server can act as a gateway to any number of data sources and either handle a copy of the data itself or pass the processing to the third-party source.
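As a minimal sketch of this kind of heterogeneous access, the following T-SQL configures a linked server and queries it. The server name OracleSrv, the data source alias, and the login credentials are all assumptions for illustration; the system stored procedures themselves (sp_addlinkedserver, sp_addlinkedsrvlogin) and OPENQUERY are standard SQL Server 2000 features.

```sql
-- Register a hypothetical Oracle source as a linked server.
EXEC sp_addlinkedserver
    @server     = 'OracleSrv',
    @srvproduct = 'Oracle',
    @provider   = 'MSDAORA',        -- Microsoft OLE-DB Provider for Oracle
    @datasrc    = 'OracleTnsAlias'  -- assumed TNS alias

-- Map local logins to a remote account (credentials are placeholders).
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = 'OracleSrv',
    @useself     = 'false',
    @locallogin  = NULL,            -- applies to all local logins
    @rmtuser     = 'scott',
    @rmtpassword = 'tiger'

-- Pass a query through to the remote engine; the result set
-- returns to SQL Server as if it were a local table.
SELECT * FROM OPENQUERY(OracleSrv, 'SELECT empno, ename FROM emp')
```

Four-part names (OracleSrv..SCOTT.EMP) can also be used once the linked server exists, though OPENQUERY lets the remote engine do the processing itself.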

Third-Party Database Interactions

The capability of SQL Server to act with almost any third-party source means that existing applications can continue to function in the environment undisturbed while SQL Server applications can also make use of the same data.

Replication, DTS, and linked servers are among the processes that can be set up among database servers other than SQL Server. SQL Server supports OLE-DB, which enables data to be replicated from SQL Server to any other database server, such as Sybase or Oracle. There is only a single requirement: The third-party server must provide a 32-bit ODBC or OLE-DB driver that runs on a Microsoft Windows operating system. When the non-SQL Server database meets this requirement, it can be used in conjunction with SQL Server.

Data Replication

Replication is the process of copying, modifying, and distributing data and database objects (stored procedures, extended properties, views, tables, user-defined functions, triggers, and indexes) from one source server to another, in a consistent fashion, independent of location. Replication is a huge topic.

Any implementation can combine multiple types of replication and take many forms. This section describes why replication is implemented.

Many corporations distribute information from remote sales locations to central order-processing locations. Other organizations operate distributed warehouses, with each individual location needing knowledge of the others' inventories. The problems are many. One of the potential solutions is the implementation of a replication strategy.

Replication can be performed across platforms, to and from virtually any data source. SQL Server provides for a very diverse strategy for implementing a replication strategy. After it is enabled, you can monitor the progress and quickly be alerted to any problem situations.

A push subscription in replication can send data to a third-party server. When you need to pull a subscription from SQL Server to a third-party database system such as Oracle, you need to create a custom program that uses SQL Distributed Management Objects (SQL-DMO).

Non-SQL Server database engines may also very well act as Publishers that replicate data and database objects to SQL Server Subscribers. ODBC or OLE-DB driver requirements must be met if you want to configure a publishing non-SQL Server database. SQL Server uses ODBC to make replication connections to other servers. That is, a foreign Subscriber must support transactions and be ODBC Level 1 compliant. When it meets this requirement, it can also receive data that is sent via a push subscription.
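A push subscription to a heterogeneous Subscriber can be sketched with the replication system stored procedures. The DSN name and publication name below are hypothetical, and the exact options vary by publication type; the procedures shown (sp_addsubscriber, sp_addsubscription) are the SQL Server 2000 mechanisms for the job.

```sql
-- Register an ODBC data source as a replication Subscriber.
EXEC sp_addsubscriber
    @subscriber = 'OracleDSN',   -- assumed ODBC DSN name
    @type       = 1              -- 1 = ODBC data source (0 = SQL Server)

-- Create a push subscription so the Distributor delivers
-- the hypothetical 'InventoryPub' publication to that Subscriber.
EXEC sp_addsubscription
    @publication       = 'InventoryPub',
    @subscriber        = 'OracleDSN',
    @subscription_type = 'push'
```

Because the subscription is pushed, the Distribution Agent runs at the Distributor and connects out through the ODBC driver, which is why the driver requirements described above must be met.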

Of course, replication is only one mechanism that can be used to get data from another data source. DTS is equally capable of utilizing OLE-DB or ODBC mechanisms to communicate with other systems.

Snapshot Replication

Snapshot replication is suited to situations in which data is likely to remain unchanged, it is acceptable to have a higher degree of latency, and replication involves small volumes of data. It is preferred over transactional replication when data changes are substantial but infrequent. Applying snapshots using native-mode BCP or compression helps improve performance.

Snapshot replication can be used alongside Immediate Updating Subscribers using two-phase commit (2PC). In this type of replication, the Subscriber needs to be in contact with the Publisher. Queued update, on the other hand, doesn't require constant connectivity. When you create a publication with queued updating, and a Subscriber performs INSERT, UPDATE, or DELETE statements on published data, the changes are stored in a queue. The queued transactions are applied at the Publisher when network connectivity is restored.
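A publication that permits both immediate and queued updating Subscribers can be sketched as follows. The database and publication names are hypothetical; the parameters shown (@allow_sync_tran for immediate updating via 2PC, @allow_queued_tran for queued updating) follow the SQL Server 2000 sp_addpublication options.

```sql
-- Sketch: a snapshot publication allowing updating Subscribers.
EXEC sp_addpublication
    @publication       = 'CustomerPub',  -- hypothetical name
    @repl_freq         = 'snapshot',     -- snapshot replication
    @allow_sync_tran   = 'true',         -- immediate updating (2PC)
    @allow_queued_tran = 'true',         -- queue changes while offline
    @status            = 'active'
```

With both options enabled, a Subscriber that loses connectivity can fail over from immediate updating to the queue, and the queued transactions are applied at the Publisher once the connection is restored.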

Transactional Replication

In transactional replication, as with snapshot replication, Updating Subscribers may be used with 2PC for the immediate update option. This enables the Subscriber to change the replica at the local server. Changes made to data at the Subscriber are applied to both the Subscriber and the Publisher databases at the same moment, providing high transactional consistency and less latency.

Merge Replication

Merge replication is well suited to scenarios in which conflicts are less likely to occur. For instance, a site might make changes to its internal records only, possibly needing data from all other locations, but not changing any of it. A conflict occurs when the two participants have both changed a record (or column within a record) since they last shared the same version.

Merge replication offers site autonomy at its apex (because sites are virtually independent) and low transactional consistency. These sites, or Subscribers, can freely make modifications to their local copies of the replicated data. These modifications and updates made to data are combined with modifications made at the other Subscribers and also with the modifications made at the Publisher. This process ultimately ensures that all Subscribers and Publishers receive modifications and updates from all other sites; it is better known as convergence.

Very Large Database Applications

SQL Server 2000 has high-speed optimizations that support very large database environments. Although earlier versions lacked the capability to support larger systems, both SQL Server 7.0 and SQL Server 2000 can effectively support terabyte-sized databases.

With the implementation of partitioned views in Enterprise Edition, servers can be scaled to meet the requirements of large websites and enterprise environments. Federated server implementations enable a large number of servers to assist in maintaining a complex large system.
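A distributed partitioned view can be sketched as follows. The server names, database name, and CustomerID ranges are assumptions for illustration; the pattern itself (member tables with range CHECK constraints, unioned by a view on each federated server) is how partitioned views are built in SQL Server 2000 Enterprise Edition.

```sql
-- On Server1: one horizontal partition of the Customers data.
-- The CHECK constraint lets the optimizer route queries to the
-- server that holds the requested range.
CREATE TABLE Customers_1 (
    CustomerID INT PRIMARY KEY
        CHECK (CustomerID BETWEEN 1 AND 499999),
    Name       VARCHAR(50) NOT NULL
)

-- On Server2, a mirrored definition holds the next range:
--   CREATE TABLE Customers_2 ...
--       CHECK (CustomerID BETWEEN 500000 AND 999999)

-- The view, defined on each member server, unions the partitions.
-- Server2 is assumed to be configured as a linked server.
CREATE VIEW Customers AS
    SELECT CustomerID, Name FROM Customers_1
    UNION ALL
    SELECT CustomerID, Name FROM Server2.SalesDB.dbo.Customers_2
```

Queries against the view behave as if Customers were a single table, while each federated server stores and processes only its own partition.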

Elements of the replication system can also help distribute data among a number of machines while providing mutual updatability and maintaining centralized control over the entire system.



    EXAM CRAM 2 Designing and Implementing Databases with SQL Server 2000 Enterprise
    MCAD/MCSE/MCDBA 70-229 Exam Cram 2: Designing & Implementing Databases w/SQL Server 2000 Enterprise Edition
    ISBN: 0789731061
    Year: 2005
    Pages: 154
