Let's Get Physical

When you envision how the solution will actually be built and deployed, you are not far from creating a physical design. A physical design is simply an extension of the logical design, in which you map the elements to actual hardware and software and define specific details about the user interface (UI), assemblies, and stored procedures. This level of analysis might uncover a lack of required space on your server, for example, in which case you would need to purchase another server for the project to succeed.

If you have created your physical design successfully, you should be able to answer yes to the following questions:

  • Can I count on the integration of the team's assemblies, UI, and database not needing extensive rework because of misunderstandings?

  • Am I confident there will be no unexpected communication issues with any existing software midway through the project?

  • Do I own or have I budgeted for the correct hardware?

  • Can I hand the physical design specifications off to another developer?

  • If I left midway through the project, would it still be a success? (Some of you might want to answer no to this one anyway!)

Even if you can confidently answer yes to all these questions, problems can still crop up, but with proper planning the associated risks and implications can be diminished. For example, in the logical design, you might decide to call a legacy component to access the data warehouse, but you have not specifically established what methods you plan to call and exactly what type of data you plan to receive. You might be a little surprised when you try to implement this functionality into your solution. Perhaps the method you had planned on calling does not exist, so now you must access the data warehouse directly, which could require a tighter security scheme, additional resources, and so forth. This chapter guides you through the process of creating physical design specifications for each aspect of an enterprise solution.

Auditing and Logging

Chapter 7 discussed the logical auditing and logging design; this chapter extends that design to cover its physical portion.

Logging is the process of storing information in a consistent manner so that it can be referenced later for purposes such as auditing the information. When monitoring an application, storing the logged information in the Windows event log, a relational database management system (RDBMS), or a custom log file is most common.

Auditing tracks all requests, changes, and consumed resources and logs them for later viewing. You can audit an application by reviewing its logs, looking for suspicious access attempts, or checking which resources are in use. Auditing often uncovers activity you never knew existed and enables you to block it if necessary. For example, say an employee left your company a year ago, but his account is still active, and he continues to dial in through the company's virtual private network (VPN) every two weeks. Although this example might seem extreme, it is entirely possible.

These are some examples of actions you might want to audit:

  • Failed access attempts to a resource

  • Changes made to a resource

  • Actions that consume too much time or physical resources

  • User actions (to specifically view what a user is attempting)

An auditing scheme can determine which actions are taking up the most resources and log them for later review.

Tracing

Tracing is a structured form of auditing used in ASP.NET that can help diagnose problems. Before ASP.NET, a classic ASP Web application used Response.Write to audit events. With tracing, you can deploy a project with trace statements intact, and make them active via the web.config file only when you are having difficulties with your application.

Tracing enables you to write messages to the trace log (viewable through trace.axd) or to the Web page directly. You can configure tracing in the Web configuration file or in the .aspx page itself. ASP.NET's TraceContext class, available through the page's Trace property, provides the Write and Warn methods for writing trace messages; the System.Diagnostics namespace provides the separate Trace and Debug classes for writing messages to trace listeners. The only difference between the Write and Warn methods is that Warn messages appear in red text and Write messages in black when you view the trace output. By attaching your own trace listener, you can forward the output to any destination, such as a flat file, an event log, or a database.
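As a sketch of how tracing might be left in place and activated only when needed, the following web.config fragment enables application-level tracing without rendering trace output on the page (the element and attribute names are real; the values shown are illustrative):

```xml
<configuration>
  <system.web>
    <!-- Trace output is collected for the last 40 requests and
         viewed by browsing to trace.axd in the application root.
         Set enabled="false" to deactivate tracing without
         touching any code. -->
    <trace enabled="true"
           pageOutput="false"
           requestLimit="40"
           localOnly="true" />
  </system.web>
</configuration>
```

In page code, calls such as Trace.Write("Orders", "Loading order list") and Trace.Warn("Orders", "Cache miss") would then record black and red entries, respectively, and can ship with the deployed project.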

graphics/alert_icon.gif

When calling the Write or WriteLine method from the Debug or Trace class, remember that both classes share the same listeners, so do not send the same message through both. Otherwise, you will have duplicate entries in your listener's output.


Logging

You can use Windows Event Viewer's Application log as a reliable, consistent repository for error, warning, informational, and success messages. Applications can also create a project-specific event log in which to record and view messages. The major drawback of Windows Event Viewer is that each log resides on a single machine and is not easily centralized. However, this drawback might not be an issue with the use of Windows Management Instrumentation (WMI). WMI is discussed in more detail in Chapter 10, "Deploying and Maintaining the Application," but it is important to understand that WMI enables you to view the logs of the servers in your network, based on preset privileges.
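A minimal sketch of writing to the Application log from managed code follows; the source name MyEnterpriseApp is a hypothetical example, and creating a new event source requires administrative rights the first time:

```csharp
using System.Diagnostics;

public class AppLogger
{
    // Hypothetical source name for illustration only.
    const string Source = "MyEnterpriseApp";

    public static void LogError(string message)
    {
        // Register the source on first use (requires admin rights).
        if (!EventLog.SourceExists(Source))
            EventLog.CreateEventSource(Source, "Application");

        // Write an error entry to the Application log.
        EventLog.WriteEntry(Source, message, EventLogEntryType.Error);
    }
}
```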

Database logging is an excellent centralized solution for logging, but lacks Event Viewer's level of reliability and does not provide built-in interfaces to perform advanced actions against the logs. An RDBMS can define a more manageable event log scheme than Event Viewer because it has a limitless hierarchy and structure.

Custom logging is simply a text file formatted to your specifications and should be used only if RDBMS and Event Viewer logs are not sufficient or cannot be used for some reason. Custom logs should be used only as a last resort. There is no need to reinvent the wheel when various sizes and models are readily available.

graphics/note_icon.gif

Ninety percent of the time, Windows Event Viewer is the method of choice because of its reliability and the built-in features of the OS. Logging the event in two places, such as the database and Windows Event Viewer, might be wise, but think carefully before implementing logging based solely on a remote database or custom log file.


Exception Handling

Failing to handle exceptions properly will almost certainly guarantee your project's failure. When determining the exception-handling approach to apply across an application, it is best to assume nothing. A proper exception-handling process includes the following capabilities:

  • Detecting exceptions

  • Logging and reporting information after an exception is encountered

  • Generating events that can be monitored separately from the application

The framework used to apply the exception-handling process should be separate from the Business Logic Layer. When handled correctly, exceptions can be very informative. They can be used when building tools that monitor exception logs and perform logic, such as dispatching an administrator if ASP.NET times out more than three times in 10 minutes.

.NET Exceptions

To handle .NET exceptions effectively, it is important to understand the exception hierarchy. All exception classes derive from the Exception base class in the System namespace. Exceptions can also derive from other exception classes that derive from the base, which is useful in creating a finer level of granularity when throwing and catching exceptions. For example, you would want to know the difference between an IOException and a general system Exception for various reasons. If the exception is an IOException, you might want to check permissions to validate that the user has the required access. The IOException class derives from the base class and represents a specific exception that can be thrown or caught. Going a level deeper still, you can catch a FileNotFoundException, which derives from the IOException class.

graphics/note_icon.gif

Because all .NET exceptions derive from the same base class, exceptions can be thrown or caught independent of language specifications between the source and the handler.


Custom Exceptions

You can create custom exceptions in your framework by using the ApplicationException class, which derives from the base exception class. ApplicationException is the base class for custom exceptions, similar to how a system Exception is the base class for standard exceptions. When creating a custom exception, your hierarchy should resemble that of .NET. Much as system exceptions derive from one another to throw or create a specific exception, your custom exception should behave in the same manner to create a more specific exception within each hierarchical level.
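The hierarchy described above might be sketched as follows; the exception names are hypothetical examples for an order-processing application, not part of any .NET library:

```csharp
using System;

// Base class for this application's custom exceptions,
// mirroring the role Exception plays for system exceptions.
public class OrderProcessingException : ApplicationException
{
    public OrderProcessingException(string message) : base(message) { }
    public OrderProcessingException(string message, Exception inner)
        : base(message, inner) { }
}

// A more specific exception one level down the hierarchy,
// much as FileNotFoundException derives from IOException.
public class PaymentDeclinedException : OrderProcessingException
{
    public PaymentDeclinedException(string message) : base(message) { }
}
```

Callers can then catch PaymentDeclinedException specifically, or OrderProcessingException to handle any failure in the custom hierarchy.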

Exception-Handling Process

The process of handling an exception should be well thought out and always include fallback handlers to ensure that the exception is logged and resources have been released. Figure 9.1 demonstrates the execution path for exceptions. You should follow this diagram closely when designing the exception-handling process.

Figure 9.1. The exception-handling process. From http://msdn.microsoft.com/practices: Reference Building Blocks > Exception Management in .NET.

graphics/09fig01.gif


Notice how the diagram in Figure 9.1 covers all possible execution paths for an exception and provides robust handled and unhandled approaches.

Unhandled Exceptions

.NET has the capability to catch unhandled exceptions so that you can log them and release resources before the exception is ultimately surfaced to the user. ASP.NET unhandled exceptions can be handled in the web.config and global.asax files. The web.config file can redirect the exception to a custom handler page, which performs any necessary cleanup and logging. Using the web.config file, you can also map specific IIS error codes to handler pages for finer granularity. The global.asax file enables you to capture the error as an event and then perform all necessary logging and cleanup. This method is most useful when there is no designated handler page to transfer the exception to.
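A minimal global.asax handler along these lines might look like the following sketch; the log source name and error page are illustrative:

```csharp
// In global.asax (or its code-behind): last-chance handler for
// exceptions no page has caught.
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();

    // Log before the user sees anything; "MyEnterpriseApp" is a
    // hypothetical, pre-registered event source.
    System.Diagnostics.EventLog.WriteEntry("MyEnterpriseApp",
        ex.ToString(), System.Diagnostics.EventLogEntryType.Error);

    // Clear the error and send the user to a friendly page.
    Server.ClearError();
    Response.Redirect("ErrorPage.aspx");
}
```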

graphics/note_icon.gif

Unhandled exceptions in ASP.NET should be caught in either the global.asax file or a page specified in the defaultRedirect attribute of the customErrors element in the web.config file, to be certain all resources have been released and the exception has been logged.


Structured Exception Handling

Structured exception handling enables you to try a piece of code, catch specific exceptions that might have been thrown, and then perform any cleanup by using the finally block. You can have multiple catch blocks that enable you to capture different types of exceptions. When using multiple catch blocks, you must begin with the most specific exceptions and work your way down to the most general. For example, if you coded a catch for the general system Exception before the IOException catch, the IOException catch block would never execute because any IOException would already have been caught by the general Exception block.

After the exception is caught, you can choose to log it and throw it again, or continue processing if the exception was expected. If present, the finally block executes regardless of whether an exception was caught. The finally block provides a convenient location to release resources, such as database connections, and perform other cleanup tasks before continuing.
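The specific-to-general catch ordering and the cleanup role of the finally block can be sketched as follows; the method and the file-import scenario are illustrative:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;

public class OrderImporter
{
    public static void LoadOrders(string connectionString)
    {
        SqlConnection conn = new SqlConnection(connectionString);
        try
        {
            conn.Open();
            // ... read an import file and insert rows ...
        }
        catch (FileNotFoundException ex)
        {
            // Most specific first: the import file is missing.
            Console.WriteLine("Import file not found: " + ex.FileName);
        }
        catch (IOException ex)
        {
            // More general I/O failures.
            Console.WriteLine("I/O error: " + ex.Message);
        }
        catch (Exception ex)
        {
            // Most general last; log, then rethrow for the caller.
            Console.WriteLine("Unexpected error: " + ex.Message);
            throw;
        }
        finally
        {
            // Runs whether or not an exception was thrown.
            conn.Close();
        }
    }
}
```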

graphics/note_icon.gif

When creating the physical specifications, remember that exception handling spans all tiers of your solution. The application or service framework tier includes your exception-handling process. The business services and user services include structured exception handling. The user interface includes the unhandled exception redirection.


Integration and Interoperability

Typically, when diving into a new solution, existing functionality must be leveraged to save time and money. Enterprise Application Integration (EAI) is a process you use to provide integration and interoperability between services and operational systems, legacy applications, databases, and third-party Web services. Those who have used BizTalk Server will be very familiar with EAI. BizTalk is not an exam requirement, so it is not covered in depth in this chapter. For the purposes of exam preparation, this chapter focuses specifically on Data Transformation Services (DTS), XML Web Services, and the integration of .NET services and legacy applications, known as interoperability.

graphics/note_icon.gif

When designing an ideal integration scheme, there has to be some leeway because real-time data can occasionally have too much impact on performance. An ideal solution might consist of logging changes through the use of queues and performing a transfer from one system to another during downtimes.


Data Transformation Services (DTS)

Data Transformation Services (DTS) is a tool used for importing, exporting, and transforming data between one or more data sources that support the open-standard OLE DB. DTS solutions are created through the use of DTS packages, which contain a set of tasks to be performed. DTS packages, in their simplest form, can perform the following actions:

  • Transform data.

  • Import, export, and manage data.

  • Run tasks as jobs from inside a package.

DTS packages are most useful from a long-term integration standpoint because of their ability to run as jobs after creation. Jobs enable you to define a recurring schedule on which the package runs.

XML Web Services

XML Web Services were designed from an integration viewpoint and allow the transfer of XML data through HTTP in a loosely coupled fashion. Web Services are an industry initiative that supports the use of Simple Object Access Protocol (SOAP), XML, and Web Services Description Language (WSDL). They are gaining broad support among competing platforms because of their platform-independent integration characteristics. Web Services are discussed in more detail later in this chapter in "Service Agents," but it is important to understand that they provide an excellent method for transferring data across firewalls and between conflicting platforms.

Interoperability

Interoperability enables you to call unmanaged code (code that runs outside the Common Language Runtime) through Component Object Model (COM) components and Win32 application programming interfaces (APIs), or vice versa. .NET automatically performs the following interoperability functions between managed and unmanaged code:

  • Object binding: Early and late bound interfaces are supported.

  • Data marshaling and translation: Data type conversion is handled automatically.

  • Object lifetime management: Object references are released or marked for garbage collection when they are out of scope.

  • Object identity: COM object identity characteristics are implemented.

  • Exception and error handling: Through the Common Language Runtime, COM's HRESULT values are translated to .NET exceptions and vice versa.

The Runtime Callable Wrapper (RCW) acts as a proxy when calling unmanaged COM objects from within .NET; it is the mechanism behind the automatic integration described previously. The COM Callable Wrapper (CCW) performs similarly to the RCW, but is used when a call is made from a COM component to a .NET service.
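As a sketch of the Win32 API side of interoperability, the following declaration lets managed code call the MessageBox function in user32.dll, with the runtime marshaling the string and integer arguments automatically:

```csharp
using System;
using System.Runtime.InteropServices;

public class Win32Interop
{
    // Declare the unmanaged Win32 API. CharSet.Auto lets the
    // runtime pick the ANSI or Unicode entry point.
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int MessageBox(IntPtr hWnd, string text,
                                 string caption, uint type);

    public static void Main()
    {
        // Displays a simple OK message box from managed code.
        MessageBox(IntPtr.Zero, "Hello from managed code", "Interop", 0);
    }
}
```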

graphics/alert_icon.gif

To call a .NET assembly from COM+, make sure the assembly is strong named, registered for COM interop (with the regasm tool), and installed in the Global Assembly Cache (GAC). To call a COM+ component from .NET, make sure the component is registered (with regsvr32) and reference it through an interop assembly.


Security

When considering security, think of the expression "a chain is only as strong as its weakest link." Security schemes are typically referred to as policies. Policies can span multiple tiers and can be more or less restrictive at any layer. Zones are used to define policies that span multiple tiers. If policies are different across tiers, they reside in separate zones. When designing your policy, take note of the following guidelines that will assist in your development:

  • Use industry-proven techniques and code when available.

  • Validate all input; this is essential.

  • Assume all systems outside yours are unsecured, unless proven otherwise.

  • Grant the fewest privileges possible, working your way up only as necessary.

  • Expose only what is absolutely needed and avoid making items publicly available unless specified.

  • Encrypt sensitive data; encryption is the only reliable way to hide it. Never store sensitive data in obscure locations, hoping for the best.

  • Authenticate and authorize users before proceeding further.

  • Secure systems from internal invasions as well as external hackers; never assume internal users are not capable of doing damage.

The next section outlines the main aspects of a security policy: authentication, authorization, and secure communication.

Authentication

Authentication is the process of validating users and verifying their identity. Authentication is most common in the graphical user interface (GUI), where the user's identity is initially established. Typically, as calls are made from one tier to another, specific user credentials become less important the further down the tiers you go. In a traditional Web application, you can authenticate the user at the GUI and send the user's token to the Business Logic Layer for auditing purposes, and then call the database with a general connection based on a user role. If your authentication policy instead uses the user's specific credentials to connect to the database, the advantages of stored procedure caching are lost because calls are carried out under the current user rather than under a shared account based on a user role.

User Services Authentication

Authentication must be used if the application is going to perform authorization, auditing, or personalization functions. ASP.NET allows authentication via Windows, custom forms, or Microsoft Passport technology. When you use Forms authentication, ASP.NET captures authentication information in the OnAuthenticate event handler, which receives the user context in FormsAuthenticationEventArgs. ASP.NET automatically passes the user's identity when making calls between managed and unmanaged code. Setting the impersonate attribute of the identity element to true in the web.config file automatically runs code under the current user's token, applying his or her permissions, when IIS Integrated authentication is active.
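A web.config fragment combining these settings might look like the following sketch:

```xml
<configuration>
  <system.web>
    <!-- Use Windows authentication; IIS must be configured for
         Integrated, Basic, or Anonymous authentication to match. -->
    <authentication mode="Windows" />

    <!-- Run page and component code under the authenticated
         caller's token rather than the ASP.NET process account. -->
    <identity impersonate="true" />
  </system.web>
</configuration>
```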

graphics/note_icon.gif

IIS integration is very tight when authentication is set to Windows in the web.config file. With IIS, you can choose Integrated, Anonymous, or Basic authentication. Credentials are verified against those requested in the web.config file and the OnAuthenticate method in the global.asax file.


Business Services Authentication

A business service authenticates the caller if the object needs to use the credentials to perform the required task or if the user's actions are being audited. Authentication of a user's token is integrated in .NET, so no extra steps are involved in receiving the user's credentials.

Data Access Authentication

When connecting to a database such as SQL Server, you have the option of using a service account or directly impersonating the user's Windows identity. Service accounts are accounts used by more than one user and are usually based on a specific role that enables the user to perform certain actions. From a database perspective, service accounts receive the benefits of connection pooling, stored procedure caching, and less security maintenance.

Windows accounts are exactly the opposite, in the sense that they are specific for each user and, therefore, lose the benefits of service accounts. However, Windows accounts have the advantages of account passwords rarely being compromised and audits that can easily be performed for each user.

As a rule of thumb, the performance benefits of service accounts typically outweigh the advantages of Windows accounts. A service account security policy can still provide advanced auditing if you pass an extra parameter identifying the user to your stored procedures. The service account's connection string can also be encrypted.

Authorization

Authorization simply determines whether the authenticated user has the necessary permissions to take the requested action against the specified resource. Permissions can be granted to users and groups. While applying a security policy, maintenance and administration are much easier when permissions are defined on a group level, which is called role-based security. When defining security, it's important to remember that if a user belongs to a group that is explicitly denied access to the requested resource, the user is not allowed to access the resource even if he or she belongs to another group that is granted the necessary access. For example, even though a user named Sally has explicit access defined for her user account for the resource she needs, Sally will not have authorization if she belongs to a marketing group that is specifically denied access to the same resource.

.NET authorization spans not only users, but also code, which is referred to as code access security. Code access security can prevent assemblies from performing tasks they were never designed to do and prevent calls from components that are not allowed to access the assembly. When determining whether the caller has rights to access the assembly, you can check the following parameters:

  • Application's installation directory

  • Digital signature of the assembly publisher

  • Assembly's cryptographic hash

  • Assembly's cryptographic strong name

  • Site from which the assembly originated

  • URL from which the assembly originated

  • Zone from which the assembly originated

Remember to grant as few privileges as possible; do not confer the delete, add, and modify privileges to any account unless they are required.

User Services Authorization

ASP.NET's web.config file fully supports authorization to access sites and their subfolders based on user roles. For example, you can prevent a user from accessing the site if he is not a member of a user group. You can also prevent a user from accessing the contents of the admin folder unless he is a member of the Administrator group. Simply set the authorization section of the web.config file to perform the desired tasks.
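For example, a web.config file placed in the admin folder might contain an authorization section like this sketch (the role name is illustrative):

```xml
<!-- web.config in the admin subfolder: only Administrators
     may request anything beneath this folder. -->
<configuration>
  <system.web>
    <authorization>
      <allow roles="Administrators" />
      <deny users="*" />
    </authorization>
  </system.web>
</configuration>
```

Because rules are evaluated top down, the allow entry admits members of the Administrators role before the deny entry blocks everyone else.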

Business Services Authorization

For the purposes of developing solid maintainability, relying on role-based security when defining the Business Services authorization policy is strongly encouraged. You have already seen how the User.Identity property provides quick access to the user's identity. The PrincipalPermission class enables you to validate that the given user is in a specific role. Business Services authorization should also include a code access security scheme to prevent calls from unauthorized components.
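A sketch of both the declarative and the imperative forms of the role check follows; the class, method, and role names are illustrative:

```csharp
using System.Security.Permissions;

public class OrderService
{
    // Declarative check: the caller must belong to the Managers
    // role (illustrative name) or a SecurityException is thrown
    // before the method body runs.
    [PrincipalPermission(SecurityAction.Demand, Role = "Managers")]
    public void ApproveOrder(int orderId)
    {
        // ... business logic ...
    }

    public void CancelOrder(int orderId)
    {
        // Imperative check: the same demand, made in code, so it
        // can be combined with other runtime conditions.
        new PrincipalPermission(null, "Managers").Demand();
        // ... business logic ...
    }
}
```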

Data Access Authorization

Service accounts make authorization to data resources much easier than Windows accounts do. Simply assign your service accounts to their required permissions. Service accounts are easy to maintain when the primary purpose of the account is to make calls from the Web because there can be only so many variations, and most of the authorization is done in the User Interface and Business Services Layers. Make sure to implement an extremely tight data authorization policy when data resources are shared with external developers or when powerful operations that need protection are performed in the Data Access Layer.

Secure Communication

Securing transmissions is vitally important to preventing interception of sensitive information. You can protect sensitive information by securing the transfer mechanism or encrypting the data. For the purposes of the exam, this chapter specifically covers Secure Sockets Layer (SSL) and data encryption. SSL, the backbone of secure channel transfer over HTTP, can be used anywhere HTTP is supported by obtaining a certificate from a third-party vendor. If data is extremely sensitive, you might consider encrypting information passed in calls made between the User Interface, Business Services, and Data Access Layers. There are various ways to encrypt data; .NET provides an extremely robust encryption framework in the System.Security.Cryptography namespace.
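As an illustrative sketch (not a complete scheme, because key exchange and key storage are outside its scope), the following method uses the TripleDES implementation in System.Security.Cryptography to encrypt a buffer:

```csharp
using System.IO;
using System.Security.Cryptography;

public class DataProtector
{
    // Encrypts a buffer with TripleDES. The key (16 or 24 bytes)
    // and IV (8 bytes) must be agreed on by both parties; how
    // they are exchanged and stored is not shown here.
    public static byte[] Encrypt(byte[] plain, byte[] key, byte[] iv)
    {
        TripleDESCryptoServiceProvider tdes =
            new TripleDESCryptoServiceProvider();
        MemoryStream ms = new MemoryStream();
        CryptoStream cs = new CryptoStream(ms,
            tdes.CreateEncryptor(key, iv), CryptoStreamMode.Write);
        cs.Write(plain, 0, plain.Length);
        cs.FlushFinalBlock();   // pad and flush the final block
        return ms.ToArray();
    }
}
```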

Business Services

Business services are the code of the solution; they contain constraints to support the business rules. The following elements are all members of the Business Services Layer. Keep in mind that, depending on the chosen solution, you might not use everything in this list.

  • Business components: Components are used when functionality must be reused over several business processes or when robust actions and calculations must be performed against data access and APIs.

  • Business workflows: Workflows are needed when you are managing multiple steps and long-running transactions. BizTalk Server is the recommended platform for designing the proper workflow.

  • Business entities: Entities, which represent business concepts, define real-life business relationships and data.

  • Service interface: The service interface enables service agents to access your Business Services Layer through a channel such as HTTP or TCP/IP.

In an e-commerce application, it takes multiple steps to place an order, some of which include credit card authorization, payment processing, and shipment. The logic for each step exists in the business components. The business workflow then calls these steps in a transaction to complete the order.

graphics/note_icon.gif

BizTalk Server enables you to orchestrate the workflow of processes requiring multiple steps or long-running transactions. BizTalk orchestration services create XLANG schedules that implement business functionality. XLANG schedules are built through a graphical interface.


Business Components

Business objects should be designed for reuse, and methods should be factored out whenever the code could prove useful later. Business components can be tightly coupled or loosely coupled. Loose coupling is achieved by implementing a workflow or service interface over the Business Object Layer. The following list defines what business components entail:

  • They are invoked by the User Services Layer, the service interface, or other business services.

  • They are the root of transactions.

  • They should validate input and output.

  • They call data access logic to view, add, modify, and delete data in the database.

  • They call external services through service agents.

  • They call other business components and instantiate workflows.

  • They raise an exception to the caller if something goes wrong in an atomic transaction.

Figure 9.2 demonstrates how the Business Services Layer interacts with the data access logic, service agents, and the User Services Layer.

Figure 9.2. Interaction in the Business Services Layer. From MSDN Library (http://msdn.microsoft.com): .NET Development > Building Distributed Applications with .NET > Architectural Topics > Application Architecture in .NET > Designing the Components of an Application or Service.

graphics/09fig02.gif


The following list explains the number references in Figure 9.2:

  1. Business components are being invoked by the Presentation Layer.

  2. Business components are being invoked by a service interface, such as an XML Web Service.

  3. Business components are making calls to data access logic to view, add, update, and delete data.

  4. Business components are making calls to a service agent, such as a distant XML Web Service.

Transactions

Business services should use ACID transactions to ensure that data is not corrupted after failed workflow attempts. Transactions represent a type of workflow that requires all steps to succeed for the operation to complete; otherwise, a rollback occurs to ensure data integrity. ACID transactions are defined by these four properties:

  • Atomicity: Work is indivisible; either it all must be performed or it cannot be performed at all. Work that cannot justify a rollback upon failure should not be a part of the transaction.

  • Consistency: Transactions must leave all data unchanged or all data committed. Data cannot be half and half; it must be consistent.

  • Isolation: States of the transaction are not publicly available and should not affect the caller.

  • Durability: Upon completion, transaction data is reliable and permanent.

When designing transactional business services, consider who your callers might be and what level of trust they can hold. Transactions typically involve databases, and databases typically lock data when it's modified. If an unreliable source locks your database in an attempt to perform erroneous actions, your application will suffer tremendously.

Enterprise Services

Enterprise Services is a mechanism that enables you to perform transactions, role-based security, object pooling, and message-based interfaces through queued components. The EnterpriseServices namespace is available directly off the System namespace after the assembly has been referenced in the project. When implementing Enterprise Services, you must follow a few guidelines:

  • Remoting channel restriction: HTTP and Distributed COM-Remote Procedure Calls (DCOM-RPC) are the only channels supported.

  • Strong-named components: These components (and the components they call) must be signed.

  • Deployment: Components need to be self-registering (which requires administrative rights), or a special deployment step needs to be taken.

  • Security: The Enterprise Services role model is available, or .NET-based security can also be used.
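A sketch of a transactional serviced component that observes these guidelines follows; the class name, method, and key file are illustrative, and the project must reference the System.EnterpriseServices assembly:

```csharp
using System.EnterpriseServices;
using System.Reflection;

// Enterprise Services requires a strong-named assembly;
// the key file name here is illustrative.
[assembly: AssemblyKeyFile("MyApp.snk")]

// Each method runs inside a COM+ transaction.
[Transaction(TransactionOption.Required)]
public class OrderComponent : ServicedComponent
{
    // AutoComplete votes to commit when the method returns
    // normally and to abort when it throws an exception.
    [AutoComplete]
    public void PlaceOrder(int customerId)
    {
        // ... transactional data access ...
    }
}
```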

Network Latency and Bandwidth

Your application's performance and reliability are heavily affected by your network's infrastructure. You need to understand the concepts of latency and bandwidth in any deployable application to verify what is consuming precious response time. The Business Services Layer consumes service agents, such as Web Services and RDBMS databases, and performs calculations and extensive transactions. In the Business Services Layer, there are many possibilities for response times to dwindle, so you must consider the network infrastructure.

Latency is the time it takes a piece of data to move from point A to point B over a network. As mentioned previously, Remote Procedure Calls (RPC) experience some form of measurable latency. Network bandwidth should not be confused with latency. For example, you might have relatively high network bandwidth, meaning network capacity is high, but this does not always mean you have low latency. Latency depends on the available bandwidth and the physical size of the data to be transferred. Latency is also much higher over a WAN, even if the bandwidth is available, because the data must travel physically farther than it would on a LAN. In theory, data travels at the speed of light, but the cable medium and the distance make a significant difference; typically, fiber optics is needed to approach true speed-of-light transfers.

When developing your solution, you must consider the deployed project. For example, you might have a global project for which response times have been tested over the LAN. When the project is deployed and your customers in Japan attempt to view the application, however, response times might be triple those you tested.

Here are a few best-practice techniques to assist in latency issues:

  • Objects such as the DataSet are not ideal for transferring large amounts of data because of their in-memory overhead. Ideally, use a data reader or raw XML.

  • "Chatty" calls should be avoided. Use chunky RPC to perform multiple steps in one call. You should use this method whenever a call is made between .NET and COM, Remoting, or XML Web Services, even if the components are on the same physical machine, as proxies are involved that consume more time.

  • If possible, do not pass reference-based objects through tiers. Attempt to create a scheme that relies primarily on value-based or serialized objects, such as strings and integers.
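The chatty-versus-chunky distinction above can be sketched in a few lines of C#. This is an illustrative fragment only; the remote objects and their members are hypothetical:

```csharp
// Chatty: three separate round-trips through the proxy,
// each paying marshaling and latency costs.
remoteCustomer.SetName("Jo");
remoteCustomer.SetAddress("1 Main St");
remoteCustomer.Save();

// Chunky: one round-trip that carries all the data at once.
remoteFacade.SaveCustomer("Jo", "1 Main St");
```

The chunky form trades interface granularity for fewer network round-trips, which is usually the right trade wherever a proxy sits between caller and component.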

Business Service Interfaces

You can provide functionality to a remote caller through the use of a business service. Services are very loosely coupled because the caller must access the service through a channel such as HTTP or TCP/IP. Because services are transferred over a channel and sometimes called by untrusted sources, it is essential to make only the required functionality public and to enforce a tight security scheme so that the service is not compromised. The following list outlines design considerations for your service interface:

  • Service interfaces can use caching, mapping, and schema transformations. Implementing business logic in the service interface is not recommended.

  • Services should be designed with interoperability in mind, so they should support industry standards and provide the most accepted transfer method that is sure to be supported in the future.

  • When business components are deployed on a server separate from the service interface, you might consider giving the service interface its own security policy for authentication and authorization.

  • Services can be transactional.

  • You might consider providing a transparent service so that changes made at the Business Services Layer to business components are automatically available through the service.

Web Services are the most common way to implement a business service interface. By providing a Web Services interface for your abstract business objects, you are providing a secure, user-friendly way to connect to the components that other remote callers can use for years to come. For a highly distributed application in which DCOM would traditionally be used in the Windows Distributed interNet Applications (DNA) model, Web Services can provide a secure, reliable transfer mechanism between the Business Services Layer and user interface components.
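A minimal Web Service interface of the kind described above might look like the following sketch. The `InventoryService` class and the `InventoryComponent` business component it delegates to are hypothetical names; the point is that the service exposes only the required functionality and keeps business logic in the layer below:

```csharp
using System.Web.Services;

// Hypothetical service interface; only the required method is public.
public class InventoryService : WebService
{
    [WebMethod]
    public int GetCurrentInventory(string productId)
    {
        // Delegate to the Business Services Layer rather than
        // implementing business logic in the service interface.
        return new InventoryComponent().GetStockLevel(productId);
    }
}
```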

Exam Alert

An XML Web Service provides an industry-standard interface to business logic that, when shared with partners, can perform updates and workflow. A Web Service displaying current inventory that can be replenished offers a win-win scenario for you and your suppliers.


Business Entities

Business entities are data representations of actual objects in your company. Typically, entities are representations of your database table structure, but they do not usually conform to one specific table. Instead, they are denormalized and represent a small entity schema. Often it is wise to design your business entities to be stateful objects containing a DataSet or an XML representation of the entity schema that is available throughout the object's lifetime. Business entities should not directly access the database; instead, they should call the Data Access Layer, which handles cleanup and any special auditing features. Business entities are a low-level entity view and are meant to be aggregated to perform a specific function; transactions are not recommended at this service level.

Business entity design recommendations include the following:

  • Entities should map to data relationships. If a custom entity is needed, first verify that the data cannot be obtained by aggregating two entities in the business component member of the Business Services Layer.

  • When constructing business entities, deriving all common logic from a base class is considered a best practice.

  • Maintenance of state should be in DataSets or XML, not collections or structs.

  • Implement interfaces that expose common characteristics of all business entities.

  • Business entities should enforce all constraints and validations needed for a specific entity. For example, if there is a maximum number of ship-to addresses, the business entity should enforce this constraint.

  • Display validation rules to callers, but do not allow them to change any rules. A validation rule could be an XSD schema applied to the XML.

Designing the physical aspects of a business entity requires another level of constraints and validation that the database might not be able to perform without implementing complex queries. Business entities for an e-commerce site can consist of Order, Customer, Product, and so forth. Shopping Cart, however, is not a business entity; rather, it is a component because it aggregates the Product, Customer, and Order entities to define the cart.
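The recommendations above can be combined into a short sketch: a base class carries the common logic, state lives in a DataSet, and each entity enforces its own constraints. The class names and the maximum-address constraint are illustrative assumptions, not a prescribed design:

```csharp
using System;
using System.Data;

// Hypothetical base class holding logic common to all business entities.
public abstract class BusinessEntityBase
{
    // State is maintained in a DataSet, not in collections or structs.
    protected DataSet entityData;

    // Each entity enforces its own constraints and validations.
    public abstract void Validate();
}

public class Order : BusinessEntityBase
{
    private const int MaxShipToAddresses = 5;  // illustrative constraint

    public override void Validate()
    {
        if (entityData.Tables["ShipTo"].Rows.Count > MaxShipToAddresses)
            throw new InvalidOperationException("Too many ship-to addresses.");
    }
}
```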

User Services

The Presentation Layer is based on the user and must provide precise events, validation, and exception handling so that users can understand how to resolve problems as they occur. The UI should call only the Business Services Layer and should not call the Data Access Layer directly unless there is no Business Services Layer. The UI Layer is meant to mask harsh messages from the Business Object Layer and to map interfaces to role responsibilities rather than to the relationships in the Business Services Layer. It should not display the underlying calls it makes to the Business Object Layer. Presentation to the client consists of buttons, text fields, and display grids that allow clients to view and modify data. Interfaces should be clear to the user and provide added functionality when necessary. For example, you might need to add text-to-speech conversion because 10% of your user base is visually impaired.

The following are best-practice guidelines for designing the UI:

  • The UI should not perform transactions.

  • Traditionally, UI components should not call data access logic; they should call only the Business Services Layer.

  • The UI should capture events from the user and allow the user to interact with it based on roles.

  • The UI should validate input and provide a plain-language explanation and options when an error occurs.

  • The UI can cache result sets.

  • IDs and other mapping attributes should not be displayed to the user.

The Presentation Layer consists of UI components (UICs) and UI process components (UPCs). UICs are simply interfaces that enable users to interact with the application. UPCs are the same as UICs, but they follow a process. UPCs are common in wizard-type functionality.

UI examples include Windows Forms, console applications, Web forms, mobile projects, or any combination of these items. For the purposes of the exam, the following sections specifically cover mobile, Web, and Windows Forms, leaving out console applications.

Windows Forms

Windows Forms are a great choice for offline applications, applications that manage intense state, and applications that need access to client system resources that might require administrative privileges. Windows Forms are not intended for Internet/intranet-based applications, but they are ideal for distributable software applications in which the processing is implemented in part or in full on the client side. Windows Forms are stateful and can display multiple forms simultaneously.

The following list explains the various styles of Windows Forms:

  • Full-blown desktop user interfaces: This option provides all or most of the graphical rendering through Windows Forms.

  • Embedded HTML: You can choose to write a Windows Forms application that uses embedded HTML loaded from external resources when the application is connected.

  • Application plug-ins: Your Windows Form might be a plug-in for other applications, such as Microsoft Office or CRM solutions. In this style, the application does not have its own standalone user interface.

.NET Windows Forms run over the Internet similarly to ActiveX controls and Java applets, with the .NET Framework installed at the client. When running Windows Forms over the Internet, they are isolated to the sandbox allocated by the Common Language Runtime, thus posing fewer security risks than their predecessors, such as applets.

.NET Windows Forms provide a feature-rich GUI that could be used for game programming or other graphics-intensive applications. .NET Windows Forms are also commonly used for touch-screen applications and can be extremely useful for accessibility extensions, such as those built for users with hearing or vision impairments.

The following list is a best-practice guideline to follow when creating Windows Forms:

  • Use data binding for synchronization across multiple open forms.

  • A child form should not contain hard-coded relationships between itself and its parent because as the application grows, the child form might be reused and could have multiple parent forms.

  • Structured exception handling is a must for providing user-friendly error messages and a way to recover.

  • User input validation is essential. Invalid data should not be passed to the Business Services Layer.

  • When creating custom user controls, clearly label the methods that can be called and make private the methods holding no benefit to the user or those that could possibly cause unanticipated results; this process is known as encapsulation.

  • Wire events to methods that will perform the requested task. Do not put too much logic inside the event handler because you might need to reuse this logic later in the application.
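The last guideline above, keeping event handlers thin, can be sketched as follows. The `SaveOrder` method name is hypothetical; the pattern is what matters:

```csharp
using System;

// Keep the event handler thin; the real work lives in a reusable method.
private void saveButton_Click(object sender, EventArgs e)
{
    SaveOrder();
}

// Hypothetical method that can also be called from a menu item,
// a keyboard shortcut, or elsewhere in the application.
private void SaveOrder()
{
    // Validate input, then call the Business Services Layer...
}
```

Because the logic lives in `SaveOrder` rather than in the handler, wiring a second event (a toolbar button, for example) to the same behavior requires no duplication.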

Exam Alert

You can include many accessibility options, such as converting text to speech for the visually impaired, with the use of Windows Forms. A Windows Forms desktop application is also an excellent choice when the operating system must be kept out of users' reach.


Web Forms

ASP.NET is an excellent platform for developing Internet and intranet applications that reach client platforms .NET does not support. The Web is stateless by nature, but ASP.NET offers caching and state management options to compensate. An ASP.NET application has many of the same features as a Windows Forms application, but adds the benefits of availability, performance, and scalability because the application is accessed through a Web browser instead of a proprietary desktop application. Thin-client applications built with ASP.NET can take advantage of many advanced features, such as the tightly bound Internet Information Services (IIS) security model.

Follow these recommendations for ASP.NET physical design:

  • Implement a custom error page in the web.config file and a global exception handler in the global.asax file to catch unhandled exceptions. The encountered exception can be logged and a user-friendly message sent to users.

  • ASP.NET controls should be used for validation. In addition to checking validation on the client, be certain to check validation on the server to block all erroneous attempts.

  • Design custom controls to hide properties and methods that have no value to the caller, thus making them easier to use and maintain.

  • ASP.NET's ViewState should be used to store page-specific data. Use session and application state for data that crosses over multiple pages.

  • When a user is completing a process that requires several steps, build a UPC that specifically calls the required actions from form to form, as opposed to just redirecting the user.

  • Do not store event functionality directly inside the event handler. Make a separate method for the functionality so that it can be used again. Storing common functionality in a user interface facade is considered a best practice.

  • ViewState should be turned off for those items that might not need it, such as a data grid.
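The custom error page recommended above is declared in web.config. This is a minimal fragment; the `ErrorPage.aspx` file name is a placeholder:

```xml
<!-- web.config: route unhandled exceptions to a friendly page.
     RemoteOnly shows full errors to local developers, the custom
     page to remote users. -->
<configuration>
  <system.web>
    <customErrors mode="RemoteOnly" defaultRedirect="ErrorPage.aspx" />
  </system.web>
</configuration>
```

Pair this with an `Application_Error` handler in global.asax that calls `Server.GetLastError()` to log the exception before the redirect occurs.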

ASP.NET also provides an extensive framework for globalization. Through the use of satellite assemblies, ASP.NET can change content depending on the user's language and culture preferences. For a finer level of granularity, countries that share a language but differ in dialect, such as the United States, Great Britain, and Australia, can each have a separate satellite assembly based on the user's culture preferences.

ASP.NET is primarily a synchronous platform: a user's request is expected to be filled immediately. For an e-commerce order, it is possible to gain credit card acceptance and then place the order in a queue for later processing, an asynchronous example. Typically, however, actions are kept synchronous to avoid the extra work of handling unexpected conditions.

Caching

Caching result sets and reference-based objects can significantly improve the performance of an ASP.NET Web application. Caching has application scope and is performed on the server. .NET's Cache object is ideal storage for any object or page that does not undergo constant change. For example, you could cache one copy of a page for the query string parameter ID equal to 12 and another for ID equal to 14, provided they don't change every 10 minutes. You wouldn't want to cache a list of stock quotes that changes every second, however.

ASP.NET caching is provided through the OutputCache directive in an .aspx page. Properties of this directive can be set to vary by get/post parameters, the HTTP header, the Custom option, or the Control option. There is also a Duration setting for determining how long the cached object remains in memory. Setting the VaryByControl option is known as fragment caching, which is used when you want to store just a portion of the page instead of the entire page. In a stock market application, you still might want to cache your toolbar, header, and so forth. The Cache object is stored as a hash table and can be accessed through the Cache class in the System.Web.Caching namespace.
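The page-caching example above (ID=12 and ID=14 cached separately for 10 minutes) maps directly to the following directive at the top of an .aspx page. Duration is expressed in seconds:

```aspx
<%@ OutputCache Duration="600" VaryByParam="ID" %>
```

With `VaryByParam="ID"`, the runtime keeps a distinct cached copy of the page for each value of the ID query string or form parameter; `VaryByParam="None"` would cache a single copy regardless of parameters.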

Web Farm Architecture

Enterprise thin-client applications serving a substantial amount of concurrent users usually implement Web farm architecture. You must consider security for any application that allows open access through HTTP. In addition to SSL, the best practice is to implement a firewall between the Internet, the User Interface Layer, the Business Services Layer, and the database server. This practice puts the User Interface and Business Services Layers in a demilitarized zone (DMZ), where they are outside the company firewall but have an open port into the network. Solutions that implement business logic on a machine separate from user interface logic should implement this DMZ model, which allows for a three-layer firewall approach. Solutions that implement business logic and user interface logic on the same physical machines can still implement a two-layer firewall.

Figure 9.3 illustrates a thin-client deployment implementing business logic and user interface logic on separate tiers. Notice the firewalls placed in front of each tier, a best-practice implementation.

Figure 9.3. .NET thin-client Web farm architecture. From http://msdn.microsoft.com/library/en-us/dnbda/html/AppArchCh4.asp.



The following list explains the number references in Figure 9.3:

  • Client computers (1) access the Web farm behind the firewall (2), possibly using SSL.

  • User interface components (UICs) and UI process components (UPCs) (3) access the Business Services Layer through a firewall (4) and are protected from outside attacks through the firewall (2).

  • Business components (BCs) and data access components (DACs) (5) call data sources (7) through a firewall (6) as well. If the Business Services Layer were located on the same physical machine as the user interface, steps 5 and 6 would be dropped from the diagram in Figure 9.3.

Data sources are almost always incredibly sensitive and must be protected at all costs. Notice in Figure 9.3 that the data source is accessed only after the request passes through three firewalls. This is not a guarantee that the database won't be penetrated, but it's as close as you can get while remaining practical.

Mobile

Mobile devices have become much easier to program with the Mobile Internet Toolkit. Previous complications in making a different version of an application for devices supporting Wireless Markup Language (WML), Compact HTML (cHTML), and standard HTML are no longer valid concerns with the use of mobile controls. Mobile controls request the device type and then automatically convert the tags to render properly in the calling device. With these controls, you might find programming for mobile devices over the Internet no more complicated than programming an ASP.NET application over the Web. Keep the following guidelines in mind when designing for mobile devices:

  • View the rendered page in each target emulator to be sure that all controls and content are displayed correctly. Obviously, your pages will look different in each device. Buttons, for example, are going to look much different in a Windows CE PDA than in a Wireless Access Protocol (WAP)-enabled phone.

  • State will be an issue if you need to support multiple versions of WAP-enabled phones. The only true way to maintain state across all versions of WAP-enabled phones is to provide a query string. Many WAP-enabled phones do not even support a compound query string, so you must be creative. There are obvious problems with the query string, such as maximum size and storage of sensitive data, but it might be the only appropriate choice.

  • Do not clutter the screen with anything other than what the user needs at that specific point. There is simply not enough room for elaborate menus. Provide a link back to the main menu for user screen selection.

When working specifically with Windows CE PDAs, you will find many of the same features as in ASP.NET's traditional applications. Caching, sessions, validation controls, and cookies are just a few of the robust features supported on a Windows CE PDA. If you must support phones and PDAs, detect the version type and dynamically provide a more elaborate PDA interface.
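A mobile page built with these controls looks much like a Web Form. The following minimal sketch uses the MobilePage base class and mobile control set; the label text and link target are placeholders:

```aspx
<%@ Page Inherits="System.Web.UI.MobileControls.MobilePage" %>
<%@ Register TagPrefix="mobile"
    Namespace="System.Web.UI.MobileControls"
    Assembly="System.Web.Mobile" %>

<mobile:Form runat="server">
  <!-- Rendered as WML, cHTML, or HTML depending on the device. -->
  <mobile:Label runat="server" Text="Current Inventory" />
  <!-- Keep the screen uncluttered; link back to the main menu. -->
  <mobile:Link runat="server" NavigateUrl="Menu.aspx" Text="Main menu" />
</mobile:Form>
```

The same page serves a WAP phone and a PDA; the runtime detects the device type and emits the appropriate markup.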

User Service Facade

The User Service facade typically consists of .NET classes that can be called or inherited from; it provides base functionality that can be used globally throughout the user interface. For example, you might want to extend the functionality of the data grid control. In this case, create a class that inherits from the data grid and provides an extended version that can be used globally throughout the user interface. User Service facades are an excellent choice for wizard controls that do not pass state data back to the business components until completion or exception.

User Service facades can be used whenever calling business services is impractical, but the functionality must be spread across multiple pages.

Data Services and Components

Data services are the foundation of an application and play a large role in performance and availability. Data can be in the form of relational databases, data warehouses, messaging databases (Exchange Server Web store), file systems, XML databases, or service agents (remote Web Services). Data access components must be built to perform specific communication with the data source and provide a consistent interface for business services. Data access components are typically stateless and simply abstract the obscurities of the underlying data source.

Figure 9.4 illustrates the full .NET architecture, dividing each layer into several components designed for best-practice implementation.

Figure 9.4. .NET architectural design. From MSDN Library (http://msdn.microsoft.com): .NET Development > Building Distributed Applications with .NET > Architectural Topics > Application Architecture in .NET > Designing the Components of an Application or Service.



Figure 9.4 shows how each layer relates to another. This chapter has covered the various layers and elements; this diagram depicts how they all fit together.

Data Access Components

Data access components typically provide view, modify, and delete operations on the business entities in the data source. Data access, which is typically called by the Business Services Layer, is not intended to provide calculations or anything more than a framework for accessing data. The Data Access Layer itself adds little performance overhead because no calculations are involved; it does nothing more than obscure the location of the data source. Any performance losses attributed to calling the data access logic are a direct result of the data source or the data access channel. For maintainability, each data access component should access a maximum of one data store. Data access components can also act on multiple tables if they are related. If an RDBMS is used, stored procedures should be called to access data to increase performance.

The following are some physical design considerations for creating the Data Access Layer:

  • Locking schemas should be implemented through a custom component.

  • Non-transactional data caching should be implemented through a custom component.

  • Data routing should be implemented for larger systems that use multiple database servers.

  • Do not invoke other data access components, instantiate transactions, or attempt to maintain state in the Data Access Layer. These tasks are all jobs of the Business Services Layer.

  • Return only the data that is needed.

  • Implement a standard way to call stored procedures that perform select, insert, update, and delete functionality.

  • Inherit from a database base class that defines commonalities, such as connection strings and pre-call and after-call object creation and destruction.

  • The data access logic can perform decryption if the record is stored encrypted in the data source.
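The database base class recommended above might be sketched as follows. The class and procedure names are hypothetical; the point is that connection handling, command setup, and cleanup are defined once and inherited by every data access component:

```csharp
using System.Data;
using System.Data.SqlClient;

// Hypothetical base class defining commonalities for all
// data access components: connection string handling plus
// pre-call and after-call object creation and destruction.
public abstract class DataAccessBase
{
    protected readonly string connectionString;

    protected DataAccessBase(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Run a stored procedure and return only the data that is needed.
    protected DataSet ExecuteProcedure(string procName,
                                       params SqlParameter[] args)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(procName, conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddRange(args);

            DataSet result = new DataSet();
            new SqlDataAdapter(cmd).Fill(result);  // opens/closes conn
            return result;
        }  // using blocks dispose the connection and command
    }
}
```

A concrete component such as a hypothetical `OrderDataAccess` then inherits this class and exposes methods like `GetOrder(id)` that simply name the stored procedure and its parameters.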

Service Agents

Service agents include abstract APIs, remotable components, and Web Services. The Data Access Layer should encapsulate the agent's underlying obscurities. The input/output formats should be similar to those of the RDBMS data access components. No more than one variation of a service agent should exist in each data access component.

Data access components that call service agents can also carry out the following additional functionality:

  • Perform basic validation on data received from the service.

  • Cache data for the most common queries.

  • Perform authorization of the caller if required by the service agent; this is an extra validation step that can be applied.

  • Encryption and decryption can take place through the data access component and the service agent.

  • Log interaction with the service for auditing purposes.
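A data access component wrapping a service agent, following the list above, might look like this sketch. The `SupplierService` proxy and its `GetCatalog` method are hypothetical stand-ins for a generated Web Service proxy:

```csharp
using System;
using System.Data;

// Hypothetical data access component wrapping exactly one service agent.
public class SupplierCatalogAccess
{
    private readonly SupplierService agent = new SupplierService();

    public DataSet GetCatalog(string supplierId)
    {
        DataSet catalog = agent.GetCatalog(supplierId);  // remote call

        // Basic validation of the data received from the service.
        if (catalog == null || catalog.Tables.Count == 0)
            throw new ApplicationException("Service returned no data.");

        // Log the interaction for auditing (hypothetical logger call)...
        return catalog;
    }
}
```

Note that the component's input/output shape (a DataSet keyed by an ID) mirrors the RDBMS data access components, so the Business Services Layer need not care which kind of data source sits behind it.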

Managing and Configuring State

Managing state over a stateless platform such as the Web has evolved considerably over the years. Web pages are created anew each time a request is made, unlike Windows Forms, which maintain the same instance across requests. ASP.NET offers an extensive framework that handles both page-level and global application-level scope for managing state on the client and server. Determine the best option by reviewing client settings and the performance needs of the hosting server. State options are not mutually exclusive, and most solutions require multiple implementations to handle a variety of events.

Table 9.1 displays available options for implementing state in ASP.NET.

Table 9.1. Options for Implementing State in ASP.NET

  • ViewState (Client, Page): This built-in feature of ASP.NET automatically maintains the state of .NET controls between trips to the server.

  • Hidden field (Client, Page): Hides an input box from the user; it can be populated through client-side and server-side scripts.

  • Query string (Client, Page): Stores a name-value reference in the address call. Cannot store sensitive data; data size is limited.

  • Cookies (Client, Global): Stores a name-value reference on the client machine, which must be configured to accept cookies.

  • Application state (Server, Global): Stores application state on the server; this information pertains to all users currently accessing the application.

  • Session state (Server, Global): Stores session state on the server; this information pertains to each user accessing the application.

  • Database sessions (Server, Global): Same as session state, except state is stored in the database.


When determining which state methods to implement, always look at your client base first. If performance is unsatisfactory and you choose to implement cookie state, you must be sure the target user base accepts cookies; your application might exhibit an increase in performance after using cookies, but if your user base does not accept them, your efforts will be in vain.

Client-Side State Management

ViewState, hidden fields, and query strings can each be used independently of any other state options. None of the choices in Table 9.1, including cookies, is the correct approach for storing sensitive data. Encryption can be used, but at that point you might want to consider other alternatives, such as session state.

ViewState is an excellent resource that automatically maintains state for ASP.NET controls between trips to the server. ViewState is stored on the client as a hidden property and applies to the entire form it represents.

Hidden fields are not displayed to the user, but can be accessed through client-side and server-side scripts. Hidden fields provide a way to keep page-level data on the client, hidden from the client's view. Hidden fields exist in ASP.NET through the HtmlInputHidden control.

Query strings are excellent choices for passing data that is not sensitive, such as a currently selected product's reference ID. Query strings can be easily changed by the client or applications running on the client. However, if the query string incidentally exposes data the client is not supposed to view, it is not the best choice.

Cookies can store name-value pairs of data on the client and can be timed to expire at a specific interval, such as 30 minutes or 30 seconds from now. Cookies are the only built-in state resource that is stored on the client and provides global scope. Therefore, cookies are a good alternative to sessions, except when user settings prohibit their use.

Note

ViewState is a feature of ASP.NET controls only and is not available to server-side HTML controls. Therefore, to maintain state with HTML controls, you must reset the values of the controls with every server hit.


Server-Side State Management

Traditionally, session state causes problems in a Web farm because with each request, the load balancer might direct the call to a different server in the farm. Session state has evolved significantly since its early days, however, and now provides database and sticky sessions. Sticky sessions simply make each request "stick" to the server that originated the session. Database sessions store session data in a database for use by any server in the farm.

Application state is available at the application level instead of being specific to each user. Consider it for values that are initially set when the first user enters your application and released after the last user's session expires.

Session state is specific to each user and is released when the user's session has expired. Server-side session state, whether in-process, sticky, or database-backed, is administered through the sessionState tag in the web.config file. Its settings enable you to control when sessions time out and which data source is used for database state.
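Switching a Web farm to database-backed sessions is a configuration change in web.config. This is a minimal fragment; the server name and connection details are placeholders:

```xml
<!-- web.config: store session state in SQL Server so any server
     in the farm can service any request. Timeout is in minutes. -->
<configuration>
  <system.web>
    <sessionState mode="SQLServer"
                  sqlConnectionString="data source=DBSERVER;Integrated Security=SSPI"
                  timeout="20" />
  </system.web>
</configuration>
```

Changing `mode` back to `InProc` restores the default single-server behavior with no code changes, which is the appeal of configuring state rather than hard-coding it.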



MCSD Self-Paced Training Kit: Analyzing Requirements and Defining Microsoft .NET Solution Architectures, Exam 70-300 (Pro-Certification)
ISBN: 0735618941
Year: 2006
Pages: 175
