Logical and Physical Architecture

In today's world, an object-oriented application must be designed to work in a variety of physical configurations. Although the entire application might run on a single machine, it's more likely that the application will run on a web server, or be split between an intelligent client and an application server. Given these varied physical environments, we're faced with the following questions:

  • Where do the objects reside?

  • Are the objects designed to maintain state, or should they be stateless?

  • How do we handle object-to-relational mapping when we retrieve or store data in the database?

  • How do we manage database transactions?

Before we get into discussing some answers to these questions, it's important that we fully understand the difference between a physical architecture and a logical architecture. After that, we'll define objects and distributed objects, and see how they fit into the architectural discussion.

When most people talk about n-tier applications, they're talking about physical models in which the application is spread across multiple machines with different functions: a client, a web server, an application server, a database server, and so on. And this isn't a misconception: these are indeed n-tier systems. The problem is that many people tend to assume there's a one-to-one relationship between the tiers in a logical model and the tiers in a physical model, when in fact that's not always true.

A physical n-tier architecture is quite different from a logical n-tier architecture. The latter has nothing to do with the number of machines or network hops involved in running the application. Rather, a logical architecture is all about separating different types of functionality. The most common logical separation is into a UI tier, a business tier, and a data tier that may exist on a single machine, or on three separate machines; the logical architecture doesn't define those details.

Note  

There is a relationship between an application's logical and physical architectures: The logical architecture always has at least as many tiers as the physical architecture. There may be more logical tiers than physical ones (because one physical tier can contain several logical tiers), but never fewer.

The sad reality is that many applications have no clearly defined logical architecture. Often the logical architecture merely defaults to the number of physical tiers. This lack of a formal, logical design causes problems, because it reduces flexibility. If we design a system to operate in two or three physical tiers, then changing the number of physical tiers at a later date is typically very difficult. However, if we start by creating a logical architecture of three tiers, we can switch more easily between one, two, or three physical tiers later on.

The flexibility to choose your physical architecture is important because the benefits gained by employing a physical n-tier architecture are different from those gained by employing a logical n-tier architecture. A properly designed logical n-tier architecture provides the following benefits:

  • Logically organized code

  • Easier maintenance

  • Better reuse of code

  • Better team development experience

  • Higher clarity in coding

On the other hand, a properly chosen physical n-tier architecture can provide the following benefits:

  • Performance

  • Scalability

  • Fault tolerance

  • Security

It almost goes without saying that if the physical or logical architecture of an application is poorly designed, we risk losing the very benefits that a well-designed architecture would have provided.

Complexity

As experienced designers and developers, we often view a good n-tier architecture as a way of simplifying an application and reducing complexity, but this isn't necessarily the case. It's important to recognize that n-tier designs (logical and/or physical) are typically more complex than single-tier designs. Even novice developers can visualize the design of a form or a page that retrieves data from a file and displays it to the user, but they often struggle with 2-tier designs, and are hopelessly lost in an n-tier environment.

With sufficient experience, architects and developers do typically find that the organization and structure of an n-tier model reduces complexity for large applications. However, even a veteran n-tier developer will often find it easier to avoid n-tier models when creating a simple form to display some simple data.

The point here is that n-tier architectures only simplify the process for large applications or complex environments. They can easily complicate matters if all we're trying to do is to create a small application with a few forms that will be running on someone's desktop computer. (Of course, if that desktop computer is one of hundreds or thousands in a global organization, then the environment may be so complex that an n-tier solution provides simplicity.)

In short, n-tier architectures help to decrease or manage complexity when any of these are true:

  • The application is large or complex.

  • The application is one of many similar or related applications that when combined may be large or complex.

  • The environment (including deployment, support, and other factors) is large or complex.

On the other hand, n-tier architectures can increase complexity when all of these are true:

  • The application is small or relatively simple.

  • The application isn't part of a larger group of enterprise applications that are similar or related.

  • The environment isn't complex.

Something to remember is that even a small application is likely to grow, and even a simple environment will often become more complex over time. The more successful our application, the more likely that one or both of these will happen. If you find yourself on the edge of choosing an n-tier solution, it's typically best to go with it. You should expect and plan for growth.

This discussion illustrates why n-tier applications are viewed as relatively complex. There are a lot of factors, technical and nontechnical, that must be taken into account. Unfortunately, it isn't possible to say definitively when n-tier does and doesn't fit. In the end, it's a judgment call that we, as architects of our applications, must make, based on the factors that affect our particular organization, environment, and development team.

Relationship Between Logical and Physical Models

Architectures such as .NET's forebear, Windows Distributed interNet Applications (DNA), represent a merger of logical and physical models. Such mergers seem attractive because they appear so simple and straightforward, but typically they aren't good in practice: they can lead people to design applications using a logical or physical architecture that isn't best suited to their needs.

Note  

To be fair, Windows DNA didn't mandate that the logical and physical models be the same. Unfortunately, almost all of the printed material (even the mousepads) surrounding Windows DNA included diagrams and pictures that illustrated the "proper" Windows DNA implementation as an intertwined blur of physical and logical architecture. Although some experienced architects were able to separate the concepts, many more didn't, and created some horrendous results.

The Logical Model

When you're creating an application, it's important to start with a logical architecture that clarifies the roles of all components, separates functionality so that a team can work together effectively, and simplifies overall maintenance of the system. The logical architecture must also include enough tiers so that we have flexibility in choosing a physical architecture later on.

Traditionally, we'd devise at least a 3-tier logical model that separates the interface, the logic, and the data-management portions of the application. Today that's rarely sufficient, because the "interface" tier is often physically split into two parts (browser and web server), and the "logic" tier is often physically split between a client or web server and an application server. Additionally, there are various application models that break the traditional business tier up into multiple parts (model-view-controller and facade-data-logic being two of the most popular at the moment).

This means that our logical tiers are governed by the following rules:

  • The logical architecture includes tiers in order to organize our components into discrete roles.

  • The logical architecture must have at least as many tiers as our anticipated physical deployment.

Following these rules, most modern applications have four to six logical tiers. As we'll see, the architecture used in this book includes five logical tiers.

The Physical Model

By ensuring that the logical model has enough tiers to give us flexibility, we can configure our application into an appropriate physical architecture that will depend on our performance, scalability, fault tolerance, and security requirements. The more physical tiers we include, the worse our performance will be, but we have the potential to increase scalability, security, and/or fault tolerance.

Performance and Scalability

The more physical tiers there are, the worse the performance? That doesn't sound right, but if we think it through, it makes perfect sense: performance is the speed at which an application responds to a user. This is different from scalability, which is a measure of how performance changes as we add load (such as increased users) to an application. To get optimal performance (that is, the fastest possible response time for a given user), the ideal solution is to put the client, the logic, and the data on the user's machine. This means no network hops, no network latency, and no contention with other users.

If we decide that we need to support multiple users, we might consider putting application data on a central file server. (This is typical with Access and dBASE systems, for example.) However, this immediately affects performance because of contention on the data file. Furthermore, data access now takes place across the network, which means we've introduced network latency and network contention, too. To overcome this problem, we could put the data into a managed environment such as SQL Server or Oracle. This will help to reduce data contention, but we're still stuck with the network latency and contention problems. Although improved, performance for a given user is still nowhere near what it was when everything ran directly on that user's computer.

Even with a central database server, scalability is limited. Clients are still in contention for the resources of the server, with each client opening and closing connections, doing queries and updates, and constantly demanding the CPU, memory, and disk resources that are being used by other clients. We can reduce this load by shifting some of the work off to another server. An application server such as MTS or COM+ (sometimes referred to as Enterprise Services in .NET) can provide database connection pooling to minimize the number of database connections that are opened and closed. It can also perform some data processing, filtering, and even caching to offload some work from the database server.

These additional steps provide a dramatic boost to scalability, but again at the cost of performance. The user's request now has two network hops, potentially resulting in double the network latency and contention. For a given user, the system gets slower, but we're able to handle many times more users with acceptable performance levels.

In the end, the application is constrained by the most limiting resource. This is typically the speed of transferring data across the network, but if our database or application server is underpowered, it can become so slow that data transfer across the network isn't an issue. Likewise, if our application does extremely intense calculations and our client machines are slow, then the cost of transferring the data across the network to a relatively idle high-speed server can make sense.

Security

Security is a broad and complex topic, but if we narrow the discussion solely to consider how it's affected by physical n-tier decisions, it becomes more approachable. We find that we're not talking about authentication or authorization as much as we're talking about controlling physical access to the machines on which portions of our application will run. The number of physical tiers in an application has no impact on whether we can authenticate or authorize users, but we can use physical tiers to increase or decrease physical access to the machines on which our application executes.

Security requirements vary radically based on the environment and the requirements of your application. A Windows Forms application deployed only to internal users may need relatively little security, whereas a Web Forms application exposed to anyone on the Internet may need extensive security.

To a large degree, security is all about surface area: How many points of attack are exposed from our application? We can define the surface area in terms of domains of trust.

Security and Internal Applications

Internal applications are totally encapsulated within our domain of trust: the client and all servers are running in a trusted environment. This means that virtually every part of our application is exposed to a potential hacker (assuming that the hacker can gain physical access to a machine on our network in the first place). In a typical organization, a hacker can attack the client workstation, the web server, the application server, and the database server if they so choose. Rarely are there firewalls or other major security roadblocks within the context of an organization's LAN.

Note  

Obviously, we do have security (we typically use Windows domain or Active Directory security on our clients and servers, for instance), but there's nothing stopping someone from attempting to communicate directly with any of these machines. What we're talking about here is access, and within a typical network, we have access to all machines.

Because the internal environment is so exposed to start with, security should have little impact on our decisions regarding the number of physical tiers for our application. Increasing or decreasing the number of tiers will rarely have much impact on a hacker's ability to compromise the application from a client workstation on our LAN.

An exception to this rule comes when someone can use our own web services or remoting services to access our servers in invalid ways. This problem was particularly acute with DCOM, because there were browsers that end users could use to locate and invoke server-side services. Thanks to COM, users could use Microsoft Excel to locate and interact with server-side COM components, thereby bypassing the portions of our application that were supposed to run on the client. This meant that we were vulnerable to power users who could use our components in ways we never imagined!

Note  

The problem is likely to transfer to web services in the near future, as new versions of Microsoft Office and other end-user tools gain web service browsers. We can then expect to find power users writing macros in Excel to invoke our web services in ways we never expected.

The technology we'll be using in this book is .NET remoting, which isn't geared toward the same ease of use as web services, and it's unlikely that end users will have browsers to locate remoting services. Even so, we'll be designing our remoting services to prevent casual usage of our objects, even if a power user were to gain access to the service from some future version of Excel!

In summary, although security shouldn't cause us to increase or decrease the number of physical tiers for internal applications, it should inform our design choices when we expose services from our server machines.

Security and External Applications

For external applications, things are entirely different. To start with, we assume that there are at least two tiers: The client workstation is separate from any machines physically running within our environment. Typically, the client workstations are on the other side of a firewall from any of our servers, and we control the specific IP ports by which they gain entry to our network.

This means that the client workstations are outside our domain of trust, which in turn means that we must assume that they're compromised and potentially malicious. If we actually run any code on those clients, we must assume that it ran incorrectly or didn't run at all. In other words, we must completely validate any input from the client as it enters our domain of trust, even if we put code into the client to do the validation.

In many web applications, for instance, we'll include script code that executes on the browser to validate user input. When the user posts that data back to our Web Form, we must revalidate the data, because we must assume that the user somehow defeated or altered the client-side validation, and is now providing us with invalid data.
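To make that revalidation step concrete, here's a minimal Web Forms code-behind sketch. The page, control names, and the 1-100 range are illustrative assumptions, not code from this book's framework:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    // Code-behind for a hypothetical order-entry page. Even if client-side
    // script already checked the quantity, we check it again here, because
    // this is where the data enters our domain of trust.
    public class OrderEntryPage : Page
    {
        protected TextBox quantityTextBox;
        protected Label errorLabel;

        protected void SubmitButton_Click(object sender, EventArgs e)
        {
            int quantity;
            try
            {
                quantity = int.Parse(quantityTextBox.Text);
            }
            catch (FormatException)
            {
                errorLabel.Text = "Quantity must be a whole number.";
                return;
            }

            if (quantity < 1 || quantity > 100)
            {
                errorLabel.Text = "Quantity must be between 1 and 100.";
                return;
            }

            // Only now is the value handed to the business logic.
        }
    }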

Note  

I've had people tell me that this is an overly paranoid attitude, but I've been burned this way too many times. Any time we're exposing an interface (Windows, web, XML, and so on) so that clients outside our control can use it, we must assume that the interface will be misused. Often, this misuse is unintentional (someone wrote a macro to automate data entry rather than doing it by hand), but the end result is that our application fails unless we completely validate the input as it enters our domain of trust.

The ideal in this case is to expose only one server (or one type of server, a web server, say) to clients that are outside our domain of trust. That way, we only have one "port of entry," at which we can completely control and validate any inbound data or requests. It also reduces the hacker footprint by providing only one machine with which a hacker can interact. At this stage, we've only dictated two physical tiers: the client and our server.

Many organizations take this a step further, and mandate that there should be a second firewall behind which all data must reside. Only the web server can sit between the two firewalls. The idea is that the second firewall prevents a hacker from gaining access to any sensitive data, even if she breaches the first firewall and takes control of the web server. Typically, a further constraint in configurations like this is that the web server can't interact directly with database servers. Instead, the web server must communicate with an application server (which is behind the second firewall), and that server communicates with the database server.

There's some debate as to how much security is gained through this approach, but it's a common arrangement. What it means to us is that we now have a minimum of four tiers: the client, the web server, an application server, and a data server. And as we discussed earlier, the more physical tiers we have, the worse our performance will be. As a general rule, switching from a 3-tier web model (client, web server, database server) to this type of 4-tier web model (client, web server, application server, database server) will result in a 50 percent performance reduction.

Special network configurations can mitigate some of the performance hit (using dual NICs and special routing for the servers, for example), but the fact remains that there's a substantial impact. That second firewall had better provide a lot of extra security, because we're making a big sacrifice in order to implement it.

Fault Tolerance

Fault tolerance is achieved by identifying points of failure and providing redundancy. Typically, our applications have numerous points of failure. Some of the most obvious are as follows:

  • The network feed to our building or data center

  • The power feed to our building or data center

  • The network feed and power feed to our ISP's data center

  • The primary DNS host servicing our domain

  • Our firewall

  • Our web server

  • Our application server

  • Our database server

  • Our internal LAN

In order to achieve high levels of fault tolerance, we need to ensure that if any one of these fails, some system will instantly kick in and fill the void. If our power goes out, a generator kicks in. If a bulldozer cuts our network feed, we have a second network feed coming in from the other side of our building, and so forth.

Considering some of the larger and more well-known outages of major websites in the past couple of years, it's worth noting that most of them occurred due to construction work cutting network or power feeds, or because their ISP or external DNS provider went down or was attacked. That said, there are plenty of examples of websites going down due to local equipment failure. The reason why the high-profile failures are seldom due to this type of problem is because large sites make sure to provide redundancy in these areas.

Clearly, adding redundant power, network, ISP, DNS, or LAN hardware will have little impact on our application architecture. Adding redundant servers, on the other hand, will affect our n-tier application architecture, or at least our application design. Each time we add a physical tier to our n-tier model, we need to ensure that we can add redundancy to the servers in that tier. The more physical tiers, the more redundant servers we need to configure and maintain. Thus, adding a tier always means adding at least two servers to our infrastructure.

Not only that, but to achieve fault tolerance through redundancy, all servers in a tier must also be identical at all times. In other words, at no time can a user be tied to a specific server; no server can ever maintain any user-specific information. As soon as a user is tied to a specific server, that server becomes a point of failure for that user, and we've lost fault tolerance (for that user, at least).

Achieving a high degree of fault tolerance isn't easy. It requires a great deal of thought and effort to locate all points of failure and make them redundant. Having fewer physical tiers in our architecture can assist in this process by reducing the number of tiers that must be made redundant.

Ultimately, the number of physical tiers in our architecture is a trade-off between performance, scalability, security, and fault tolerance. Furthermore, the optimal configuration for a web application isn't the same as the one for an intranet application with intelligent client machines. If the framework we're going to create is to have any hope of broad appeal, we need flexibility in the physical architecture so that we can support web and intelligent clients effectively as well as provide both with optimal performance and scalability.

A 5-Tier Logical Architecture

In this book, we'll explore a 5-tier logical architecture and how we can implement it using object-oriented concepts. Once we've created it, we'll configure the logical architecture into various physical architectures in order to achieve optimal results for Windows Forms, Web Forms, and web-services interfaces.

Note  

If you get any group of architects into a room and ask them to describe their ideal architecture, each one will come up with a different answer. I make no pretense that this architecture is the only one out there, nor do I intend to discuss all the possible options. Our aim here is to present a coherent, distributed, object-oriented architecture that supports Windows, web, and web-services interfaces.

In our framework, the logical architecture comprises the five tiers shown in Figure 1-1.

Figure 1-1: The 5-tier logical architecture

Remember that the benefit of a logical n-tier architecture is the separation of functionality into clearly defined roles or groups, in order to increase clarity and maintainability. Let's define each of the tiers more carefully.

Presentation

At first, it may not be clear why we've separated presentation from the user interface (UI). Certainly, from a Windows perspective, presentation and UI are one and the same: They are GUI forms with which the user can interact. From a web perspective (or from that of terminal-based programming), the distinction is probably quite clear. The browser (or a terminal) merely presents information to the user, and collects user input. All of the actual interaction logic (the code we write to generate the output, or to interpret user input) runs on the web server (or mainframe), and not on the client machine.

Knowing that our logical model must support both intelligent and web-based clients (along with even more limited clients, such as cell phones or other mobile devices), it's important to recognize that in many cases, the presentation will be physically separate from the UI logic. In order to accommodate this separation, we'll need to design our applications around this concept.

Note  

The types of presentation tiers continue to multiply, and each comes with a new and relatively incompatible technology with which we must work. It's virtually impossible to create a programming framework that entirely abstracts presentation concepts. Because of this, our architecture and framework will merely support the creation of varied presentations, not automate or simplify them. Instead, our focus will be on simplifying the other tiers in the architecture, where technology is more stable.

User Interface

Now that we understand the distinction between presentation and UI, the latter's purpose is probably fairly clear. This tier includes the logic to decide what the user sees, the navigation paths, and how to interpret user input. In a Windows Forms application, this is the code behind the form. Actually, it's the code behind the form in a Web Forms application, too, but here it can also include code that resides in server-side controls; logically, that's part of the same tier.

In many applications, the UI code is very complex. For a start, it must respond to the user's requests in a nonlinear fashion. (We have little control over how users might click controls, or enter or leave our forms or pages.) The UI code must also interact with logic in the business tier to validate user input, to perform any processing that's required, or to do any other business-related action.

Basically, what we're talking about here is writing UI code that accepts user input and then provides it to the business logic, where it can be validated, processed, or otherwise manipulated. The UI code must then respond to the user by displaying the results of its interaction with the business logic. Was the user's data valid? If not, what was wrong with it? And so forth.

In .NET, our UI code is almost always event-driven. Windows Forms code is all about responding to events as the user types and clicks our form, and Web Forms code is all about responding to events as the browser round-trips the user's actions back to the web server. Although both Windows Forms and Web Forms technologies make heavy use of objects, the code that we typically write into our UI isn't object-oriented as much as procedural and event-based.
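As a minimal Windows Forms sketch of this pattern, the event handler below stays thin, handing input to a business object rather than embedding any rules itself. The Customer class is a hypothetical business object (sketched later, in the section on business logic), not part of this book's framework:

    using System;
    using System.Windows.Forms;

    public class CustomerForm : Form
    {
        private TextBox nameTextBox = new TextBox();
        private Button saveButton = new Button();
        private Customer customer = new Customer();

        public CustomerForm()
        {
            saveButton.Text = "Save";
            saveButton.Top = 30;
            // Event-driven UI: wire the click event to a handler.
            saveButton.Click += new EventHandler(SaveButton_Click);
            Controls.Add(nameTextBox);
            Controls.Add(saveButton);
        }

        private void SaveButton_Click(object sender, EventArgs e)
        {
            // Hand the input to the business object; it owns the rules.
            customer.Name = nameTextBox.Text;
            if (customer.IsValid)
                customer.Save();
            else
                MessageBox.Show("Please correct the customer data.");
        }
    }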

That said, there's great value in creating frameworks and reusable components that will support a particular type of UI. If we're creating a Windows Forms UI, we can make use of visual inheritance and other object-oriented techniques to simplify the creation of our forms. If we're creating a Web Forms UI, we can use ASCX user controls and custom server controls to provide reusable components that simplify page development.

Because there's such a wide variety of UI styles and approaches, we won't spend much time dealing with UI development or frameworks in this book. Instead, we'll focus on simplifying the creation of the business logic and data-access tiers, which are required for any type of UI.

Business Logic

Business logic includes all business rules, data validation, manipulation, processing, and security for our application. One definition from Microsoft is as follows: "The combination of validation edits, logon verifications, database lookups, policies, and algorithmic transformations that constitute an enterprise's way of doing business." [1]

The business logic must reside in a separate tier from the UI code. I believe that this particular separation is the most important if we want to gain the benefits of increased maintainability and reusability for our code. This is because any business logic that creeps into the UI tier will reside within a specific UI, and will not be available to any other UIs that we might later create.

Any business logic that we write into (say) our Windows UI is useless to a web or web-service UI, and must therefore be written into those as well. This instantly leads to duplicated code, which is a maintenance nightmare. Separation of these two tiers can be done through techniques such as clearly defined procedural models, or object-oriented design and programming. In this book, we'll apply object-oriented concepts: Encapsulating business data and logic in a set of objects is a powerful way to accomplish separation of the business logic from the UI.
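As a minimal sketch of this idea, consider a hypothetical Customer object that carries its own validation, so that a Windows, web, or web-service UI all share the same rule. The class and the 50-character rule are illustrative, not the framework we'll build in this book:

    using System;

    public class Customer
    {
        private string _name = string.Empty;

        public string Name
        {
            get { return _name; }
            set { _name = (value == null) ? string.Empty : value; }
        }

        // The business rule lives here, in exactly one place,
        // rather than being duplicated in each UI.
        public bool IsValid
        {
            get { return _name.Trim().Length > 0 && _name.Length <= 50; }
        }

        public void Save()
        {
            if (!IsValid)
                throw new InvalidOperationException("Customer is not valid");
            // Persistence is delegated to the data-access tier.
        }
    }

Because the rule lives in the object, any UI that sets Name and checks IsValid gets identical behavior with no duplicated validation code.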

Data Access

Data-access code interacts with the data-management tier to retrieve, update, and remove information. The data-access tier doesn't actually manage or store the data; it merely provides an interface between the business logic and the database.

Data access gets its own logical tier for much the same reason that we split the presentation from the UI. In some cases, data access will occur on a machine that's physically separate from the one where the UI and/or business logic is running. In other cases, data-access code will run on the same machine as the business logic (or even the UI) in order to improve performance or fault tolerance.

Note  

It may sound odd to say that putting the data-access tier on the same machine as our business logic can increase fault tolerance, but consider the case of web farms, where each web server is identical to all the others. By putting the data-access code on the web servers, we provide automatic redundancy of the data-access tier along with the business logic and UI tiers.

Adding an extra physical tier just to do the data access makes fault tolerance harder to implement, because it increases the number of tiers in which redundancy needs to be implemented. As a side effect, adding more physical tiers also reduces performance, so it's not something that should be done lightly.

By logically defining data access as a separate tier, we enforce a separation between the business logic and how we interact with a database (or any other data source). This separation gives us the flexibility to choose later whether to run the data-access code on the same machine as the business logic, or on a separate machine. It also makes it much easier to change data sources without affecting the application. This is important because we may need to switch from one database vendor to another at some point.

This separation is useful for another reason: Microsoft has a habit of changing data-access technologies every three years or so, meaning that we need to rewrite our data-access code to keep up (remember DAO, RDO, ADO 1.0, ADO 2.0, and now ADO.NET?).

By isolating the data-access code into a specific tier, we limit the impact of these changes to a smaller part of our application.

Data-access mechanisms are typically implemented as a set of services, with each service being a procedure that's called by the business logic to create, retrieve, update, or delete data. Although these services are often constructed using objects, it's important to recognize that the designs for an effective data-access tier are really quite procedural in nature. Attempts to force more object-oriented designs for relational database access often result in increased complexity or decreased performance.
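A minimal sketch of such a service follows, assuming a Customers table with Id and Name columns; the table, class name, and connection string are illustrative:

    using System.Data;
    using System.Data.SqlClient;

    public class CustomerDataAccess
    {
        private const string ConnectionString =
            "Server=myServer;Database=myDb;Integrated Security=SSPI";

        // A procedural service: the business logic calls it to update data.
        public static void Update(int id, string name)
        {
            using (SqlConnection cn = new SqlConnection(ConnectionString))
            {
                cn.Open();
                SqlCommand cm = cn.CreateCommand();
                cm.CommandText =
                    "UPDATE Customers SET Name = @name WHERE Id = @id";
                cm.Parameters.Add("@name", SqlDbType.NVarChar, 50).Value = name;
                cm.Parameters.Add("@id", SqlDbType.Int).Value = id;
                cm.ExecuteNonQuery();
            }
        }
    }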

Note  

If we're using an object database instead of a relational database, then of course our data-access code may be very object-oriented. Few of us get such an opportunity, however, because almost all data is stored in relational databases.

Sometimes the data-access tier can be as simple as a series of methods that use ADO.NET directly to retrieve or store data. In other circumstances, the data-access tier is more complex, providing a more abstract or even metadata-driven way to get at data. In these cases, the data-access tier can contain a lot of complex code to provide this more abstract data-access scheme. The framework we'll create in this book will work directly against ADO.NET, but you could also use a metadata-driven data-access layer if you prefer.

Another common role for the data-access tier is to provide mapping between the object-oriented business logic, and the relational data in a data store. A good object-oriented model is almost never the same as a good relational database model. Objects often contain data from multiple tables, or even from multiple databases; conversely, multiple objects in the model can represent a single table. The process of taking the data from the tables in our relational model and getting it into the object-oriented model is called object-relational mapping, and we'll have more to say on the subject in Chapter 2.
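As a tiny sketch of what such mapping looks like in code, here's a hypothetical method that copies one row of relational data into the Customer object sketched earlier (the column name is an assumption):

    using System.Data.SqlClient;

    public class CustomerMapper
    {
        // Copy relational data into the object model. A real mapper might
        // combine columns from several tables into a single object, or
        // split one table across several objects.
        public static Customer Fetch(SqlDataReader dr)
        {
            Customer customer = new Customer();
            customer.Name = dr.GetString(dr.GetOrdinal("Name"));
            return customer;
        }
    }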

Data Storage and Management

Finally, we have the data storage and management tier. Database servers such as SQL Server or Oracle often handle these tasks, but increasingly other applications may provide this functionality, too, via technologies such as web services.

What's key about this tier is that it handles the physical creation, retrieval, update, and deletion of data. This is different from the data-access tier, which requests the creation, retrieval, update, and deletion of data. In the data-management tier, we actually implement these operations within the context of a database or a set of files, and so on.

Our business logic (via the data-access tier) invokes the data-management tier, but the tier often includes additional logic to validate the data and its relationship to other data. Sometimes, this is true relational data modeling from a database; other times, it's the application of business logic from an external application. What this means is that a typical data-management tier will include business logic that we also implement in our business logic tier. This time the replication is unavoidable, because relational databases are designed to enforce relational integrity, and that's just another form of business logic.

In any case, whether we're using stored procedures in SQL Server, or SOAP calls to another application, data storage and management is typically handled by creating a set of services or procedures that can be called as needed. Like the data-access tier, it's important to recognize that the designs for data storage and management are typically very procedural.

Table 1-1 summarizes the five tiers and their roles.

Table 1-1: The Five Logical Tiers and the Roles They Provide

  • Presentation: Renders display and collects user input.

  • UI: Acts as an intermediary between the user and the business logic, taking user input and providing it to the business logic, then returning results to the user.

  • Business logic: Provides all business rules, validation, manipulation, processing, and security for the application.

  • Data access: Acts as an intermediary between the business logic and data management. Also encapsulates and contains all knowledge of data-access technologies (such as ADO.NET), databases, and data structures.

  • Data storage and management: Physically creates, retrieves, updates, and deletes data in a persistent data store.

Everything we've talked about to this point is part of a logical architecture. Now let's move on and see how we can apply it in various physical configurations.

Applying the Logical Architecture

Given this 5-tier logical architecture, we should be able to configure it into one, two, three, four, or five physical tiers in order to gain performance, scalability, security, or fault tolerance to various degrees, and in various combinations.

Note  

In this discussion, we're assuming that we have total flexibility to configure what logical tier runs where. In some cases, there are technical issues that prevent the physical separation of some tiers. Fortunately, there are fewer such issues with the .NET Framework than there were with COM-based technologies.

There are a few physical configurations that I want to discuss in order to illustrate how our logical model works. These are common and important setups that most of us encounter on a day-to-day basis.

Optimal Performance Intelligent Client

When so much focus is placed on distributed systems, it's easy to forget the value of a single-tier solution. Point of sale, sales force automation, and many other types of application often run in stand-alone environments. However, we still want the benefits of the logical n-tier architecture in terms of maintainability and code reuse.

It probably goes without saying that if we want to, we can install everything on a single client workstation. An optimal-performance intelligent client is usually implemented using Windows Forms for the presentation and UI, with the business logic and data-access code running in the same process and talking to a JET or Microsoft SQL Server Desktop Engine (MSDE) database. The fact that the system is deployed on a single physical tier doesn't compromise the logical architecture and separation, as shown in Figure 1-2.

Figure 1-2: The five logical tiers running on a single machine

I think it's very important to remember that n-tier systems can run on a single machine in order to support the wide range of applications that require stand-alone machines. It's also worth pointing out that this is basically the same as a 2-tier, "fat-client" physical architecture; the only difference in that case is that the data storage and management tier would be running on a central database server, such as SQL Server or Oracle, as shown in Figure 1-3.

Figure 1-3: The five logical tiers with a separate database server

Other than the location of the data storage, this is identical to the single-tier configuration, and typically the switch from single-tier to 2-tier revolves around little more than changing the database configuration string for ADO.NET.
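For example, if the data-access code obtains its connection string from the application's configuration file rather than hard-coding it, the move to a central database server becomes purely a deployment change. A minimal sketch, assuming an appSettings key named "connectionString":

    using System.Configuration;
    using System.Data.SqlClient;

    public class Db
    {
        // Pointing the application at a local MSDE database or a central
        // SQL Server is then just an edit to the config file.
        public static SqlConnection GetConnection()
        {
            string cs = ConfigurationSettings.AppSettings["connectionString"];
            return new SqlConnection(cs);
        }
    }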

High-Scalability Intelligent Client

Single-tier configurations are good for stand-alone environments, but they don't scale well. To support multiple users, we often use 2-tier configurations. I've seen 2-tier configurations support more than 350 concurrent users against SQL Server with very acceptable performance.

Going further, we can trade performance to gain scalability by moving the data-access tier to a separate machine. Single- or 2-tier configurations give the best performance, but they don't scale as well as a 3-tier configuration would. A good rule of thumb is that if you have more than 50 to 100 concurrent users, you can gain by making use of a separate server to handle the data-access tier, as shown in Figure 1-4.

Figure 1-4: The five logical tiers with separate application and database servers

By doing this, we can centralize all access to the database on a single machine. In .NET, if the connections to the database for all our users are made using the same user ID and password, we'll get the benefits of connection pooling for all our users. What this means immediately is that there will be far fewer connections to the database than there would be if each client machine connected directly. The actual reduction depends on the specific application, but we're often looking at supporting 150 to 200 concurrent users with just two or three database connections!
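The key to getting this benefit is that ADO.NET pools connections whenever connection strings match exactly. A minimal sketch (the server name, credentials, and pool sizes are illustrative):

    using System.Data.SqlClient;

    public class PoolingExample
    {
        public static void DoWork()
        {
            // Because every user runs with the same connection string,
            // a handful of pooled connections can serve them all.
            string cs = "Server=appServer;Database=myDb;User ID=appUser;" +
                        "Password=secret;Min Pool Size=2;Max Pool Size=10";
            using (SqlConnection cn = new SqlConnection(cs))
            {
                cn.Open();   // drawn from the pool if one is idle
                // ... execute commands; disposing returns it to the pool
            }
        }
    }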

Of course, all user requests now go across an extra network hop, thereby causing increased latency (and therefore decreased performance). This performance cost translates into a huge scalability gain, however, because this architecture can handle many more concurrent users than a 2-tier physical configuration. If well designed, such an architecture can support thousands of concurrent users with adequate performance.

Optimal Performance Web Client

As with a Windows Forms application, we get the best performance from a web-based application by minimizing the number of physical tiers. However, the trade-off in a web scenario is different: In this case, we can improve performance and scalability at the same time, but at the cost of security, as we'll see shortly.

To get optimal performance in a web application, we want to run most of our code in a single process on a single machine, as shown in Figure 1-5.

Figure 1-5: The five logical tiers as used for web applications

The presentation tier must be physically separate, because it's running in a browser, but the UI, the business logic, and the data-access tier can all run on the same machine, in the same process. In some cases, we might even put the data-management tier on the same physical machine, though this is only suitable for smaller applications.

This minimizes network and communication overhead and optimizes performance. In Figure 1-6 we see how we can get very good scalability, because the web server can be part of a web farm in which all the web servers are running the same code.

Figure 1-6: The five logical tiers deployed on a load-balanced web farm

This setup gives us very good database-connection pooling, because each web server will be (potentially) servicing hundreds of concurrent users, and all database connections on a web server are pooled.

Note  

With COM-based technologies such as ASP and Visual Basic 6, this configuration was problematic, because running COM components in the same process as ASP pages had drawbacks in terms of the manageability and stability of the system. Running the COM components in a COM+ server application addressed the stability issues, but at the cost of performance. These issues have been addressed in .NET, however, so this configuration is highly practical when using ASP.NET and other .NET components.

Unless we notice that our database server is getting overwhelmed with connections from the web servers in our web farm, a separate application server will rarely provide gains in scalability. If we do decide that we need a separate application server, we must realize that we'll reduce performance because we're adding another physical tier. (Hopefully, we'll gain scalability, because the application server can consolidate database connections across all the web servers.) We must also consider fault tolerance in this case, because we may need redundant application servers in order to avoid a point of failure.

Another reason for implementing an application server is to increase security, and that's the topic of the next section.

High-Security Web Client

As we discussed in the earlier section on security, there will be many projects in which it's dictated that a web server can never talk directly to a database. The web server must run in a "demilitarized zone" (DMZ), sandwiched between the external firewall and a second internal firewall. The web server must communicate with another server through the internal firewall in order to interact with the database or any other internal systems. This is illustrated in Figure 1-7, where the dashed lines represent the firewalls.

Figure 1-7: The five logical tiers deployed in a secure web configuration

By splitting out the data-access tier and running it on a separate application server, we're able to increase the security of the application. However, this comes at the cost of performance; as we discussed earlier, this configuration will typically cause a performance degradation of around 50 percent. Scalability, on the other hand, is fine: Like the first web configuration, we can achieve it by implementing a web farm in which each web server runs the same UI and business-logic code, as shown in Figure 1-8.

Figure 1-8: The five logical tiers in a secured environment with a web farm

The Way Ahead

After we've implemented the framework to support this 5-tier architecture, we'll create a sample application with three different interfaces: Windows, web, and web services. This will give us the opportunity to see firsthand how our framework supports the following models:

  • High-scalability intelligent client

  • Optimal performance web client

  • Optimal performance web client (variation to support web services)

Due to the way we'll implement our framework, switching to any of the other models we've just discussed will just require some configuration file changes, meaning that we can easily adapt our application to any of the physical configurations without having to change our code.

[1] MSDN, "Business rule" definition, "Platform SDK: Transaction Server." Found in "Glossary." See http://msdn.microsoft.com/library/default.asp?url=/library/en-us/mts/vipdef01_1yih.asp.


