Development of the Security Strategy

Security issues cover many areas of the application spectrum. To fully secure a system, you must consider the file system, data access, and network, as well as other areas. Security is probably the number-one concern of most IT professionals in the workforce. This section touches on some of the major security elements. To fully cover this topic, however, you should do a lot of additional reading. For other sources of information, check "Need to Know More?" at the end of this chapter.

First, take a look at security in ASP.NET applications, specifically authentication, authorization, and handling communications securely. To cover the topic more completely, you must understand the differences between intranet, extranet, and Internet security. Protecting any portion of a system is far easier if you never expose it to the outside world. ASP.NET security is in itself a complete security framework, with enterprise security mechanisms, Web Services security implementation, and remoting security adaptations. Finally, securing data at the source and throughout the application is of utmost importance.

Security is an issue at numerous physical points in a system. Data can be intercepted whenever it is communicated. Recognize the points at which these communications occur, and you then know what measures need to be taken. Major communication links in a system exist between the browser and the Web server, between the Web server and application server, between either server and the data store, between corporate routers on a WAN or an extranet, and, of course, at various points between the source and destination across public communication channels.

Data Privacy

Most privacy issues are resolved in the same manner as other secure transmissions. In attempting to achieve privacy when transmitting confidential data, you must take steps to ensure that any captured data is unusable. Monitoring traffic in many networking situations is easy, but being able to get the data and being able to make sense of the captured data are two different things. Secure communication channels are used to ensure the integrity of the data received; in this way, data is protected from modification while in transit. Confidential data should always be communicated in an encrypted fashion.
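The integrity half of this guarantee (detecting modification in transit) can be sketched with a keyed message digest. The following is a minimal illustration using Python's standard `hmac` module; the key exchange itself is assumed to have happened out of band, and the names are illustrative, not from the book.

```python
import hashlib
import hmac

def sign_message(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag the receiver can use to detect tampering."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_message(key: bytes, message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_message(key, message), tag)

key = b"shared-secret"                 # assumed to be exchanged securely beforehand
msg = b"transfer $100 to account 42"
tag = sign_message(key, msg)

print(verify_message(key, msg, tag))           # True: message arrived unmodified
print(verify_message(key, msg + b"0", tag))    # False: tampering is detected
```

Note that this protects integrity only; confidentiality additionally requires encryption, as the next section discusses.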

Encryption

Data encryption is an aspect of application planning that attempts to provide the most secure mechanisms possible when confidential information is transmitted over public and private networks. The basic goals of encryption are to ensure information confidentiality and integrity so that data is protected from corruption and, if the data is intercepted, it is a meaningless jumble of nonsense until decoded.

In this age of hacking and other computer-related illegal activity, taking precautions to ensure that applications passing secure information do so in an encrypted fashion is becoming increasingly important. With access to data, a malicious user could intercept user credentials for access to the system or intercept financial data, such as credit card numbers. Intercepted data could also be altered to wreak havoc on susceptible systems.

The available technologies represent encryption mechanisms that package data in a manner that secures the information as much as possible. Secure Sockets Layer (SSL), Transport Layer Security (TLS), Internet Protocol Security (IPSec), and Remote Procedure Call (RPC) encryption can be used to secure data communication.

SSL/TLS is the most commonly used technology to secure the connection between a browser client and an Internet Web server. You can also use the same technology to secure Web Service messages and communications to and from a database or other applications server. IPSec provides a transport-level secure communication solution that can be used to secure data sent between any two configured computers. RPC encryption can be used by DCOM technologies to provide an encryption mechanism for every data packet sent between a client and a server computer. The choice of which technology you use depends on the type of transport being used for the data, the technologies available based on protocol and operating system, as well as other physical factors, such as hardware and firewalls.

SSL is used for Web-based data transfer using the HTTPS protocol and could affect application performance, as it uses complex functions to encrypt and decrypt data. The largest performance hit occurs during the initial establishment of a secure connection between the client and server. SSL uses an asymmetric public/private-key combination as its encryption mechanism. You can aid performance by designing SSL-related applications with less text and simple or no graphics.
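The client side of this arrangement can be illustrated with Python's standard `ssl` module, which builds the same kind of verified TLS configuration a browser uses before sending HTTPS traffic. This is a sketch of the configuration step only; no connection is made here.

```python
import ssl

# Build a client-side TLS context with certificate verification enabled,
# mirroring what a browser does before it trusts an HTTPS server.
context = ssl.create_default_context()

print(context.check_hostname)                    # True: the server name must match its certificate
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: an unverifiable certificate aborts the handshake
```

The expensive public/private-key work mentioned above happens once, during the handshake this context drives; the bulk data transfer then switches to a faster symmetric session key.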

IPSec can be used to secure data sent between two computers. With IPSec, you can ensure message confidentiality by encrypting all the data sent between the two computers. It is also possible to provide integrity between two computers without encrypting the data. Another use is authentication between two computers so that communication from other non-validated computers is not possible, restricting which computers can communicate with one another. A number of more advanced filters are available that can conditionally restrict communication based on many different variables, such as TCP/UDP ports. IPSec cannot secure all types of IP traffic, so you must take additional precautions if you need security for broadcast, multicast, or Internet Key Exchange (IKE) traffic. IPSec also limits the use of Network Address Translation (NAT) mechanisms, as IPSec cannot communicate through a firewall configured with NAT.

RPC is a common mechanism for communications between processes and is used extensively by DCOM. RPC provides for full encryption if configured and has a number of different encryption ranges. At the most secure level, RPC encrypts data for every call. The level of encryption, 40-bit or 128-bit, depends on the version of the operating system running on the client and server. You can use RPC as an encryption mechanism when a Web-based application communicates with serviced components located on remote computers.

Applying encryption minimizes the damage an intercepted message can cause, because the captured data cannot be read. Through signing and/or sealing, you can authenticate the sender and receiver to guarantee that a message comes from a specific originator and that only the system or person it was intended for can open it.

Signing to Gain Trust

Signing components is important so that users of the product have some assurance that the product is not going to damage their systems or attempt to gain personal information from the computers it is installed on. When a component is signed, you can identify the author and the corporation responsible for the signature certificate.

Visual Studio .NET enables you to sign code by assigning a strong name to the code assembly, which consists of a public key, a digital signature, and identification information. Providing a strong name guarantees the identity, but the user still needs to decide whether to trust the identity.

Similarly, you can use a digital certificate to sign your assembly. This certificate comes from a third-party authority that can prove your identity. In this manner, trust is verified through two agencies: the developer and the third-party signer. This digital certificate guarantees the component's originator.
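A strong name combines a public-key signature with a hash of the assembly's contents; the hash is what lets the runtime detect that a component has been altered since it was signed. Full signature verification requires a cryptography library, but the hash-comparison half of the scheme can be sketched with the standard library. The data and names here are illustrative only.

```python
import hashlib

def component_digest(data: bytes) -> str:
    """SHA-256 digest of a component's bytes, recorded at signing time."""
    return hashlib.sha256(data).hexdigest()

published = b"component v1.0 binary image"       # stand-in for an assembly's bytes
recorded = component_digest(published)           # stored alongside the signature

# At load time, rehash the bytes and compare: a single changed byte breaks the match.
print(component_digest(published) == recorded)         # True: untouched
print(component_digest(published + b"!") == recorded)  # False: tampered
```

In the real strong-name scheme the recorded digest is itself signed with the publisher's private key, so an attacker cannot simply recompute it after modifying the component.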

Sealing to Prevent Tampering

Sealing a message encrypts the message contents with a key that only the sender and receiver understand. In most Microsoft environments, sealing is accomplished through use of the Kerberos protocol, an industry-standard protocol for authentication and other security options. Sealing is also defined in the Microsoft Global XML Architecture (GXA) by using SOAP standards for guaranteeing secure communication of data from station to station through the Internet.
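The encrypt-then-authenticate pattern behind sealing can be shown in miniature. The following is a deliberately simplified toy, not the Kerberos or GXA mechanism and not safe for production use: it derives a keystream from SHA-256 counter blocks for the encryption and appends an HMAC tag so tampering is detectable.

```python
import hashlib
import hmac

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream from SHA-256 over key/nonce/counter blocks. NOT for production."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Encrypt the message, then append an HMAC tag over nonce + ciphertext."""
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unseal(key: bytes, sealed: bytes) -> bytes:
    """Verify the tag first; only then decrypt. An 8-byte nonce and 32-byte tag are assumed."""
    nonce, ct, tag = sealed[:8], sealed[8:-32], sealed[-32:]
    if not hmac.compare_digest(hmac.new(key, nonce + ct, hashlib.sha256).digest(), tag):
        raise ValueError("message was tampered with")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key = b"shared-secret"
sealed = seal(key, b"nonce-01", b"confidential payload")
print(unseal(key, sealed))   # b'confidential payload'
```

Only a holder of the shared key can produce a valid tag or recover the plaintext, which is the essence of sealing; production systems get the same properties from vetted constructions such as Kerberos session encryption.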

Secure Access

Secure access to a system is achieved through two mechanisms discussed earlier in this chapter: authentication and data encryption. To get secure access using any of the mechanisms defined in this chapter, you must have a user identification mechanism and a means of preventing interception of information. Authentication can be achieved through a login when starting an application. This can take place on a network through standard network logons and pass-through authentication or by using specialized challenge and response mechanisms built into an application. Validating users to determine whether they are permitted access to the current level of the application is vital.
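When an application implements its own login rather than pass-through network authentication, credentials should be stored as salted, slow hashes so that a stolen credential store does not reveal passwords. A minimal sketch using the standard `hashlib.pbkdf2_hmac` function follows; the iteration count and field names are illustrative choices.

```python
import hashlib
import hmac
import os

PBKDF2_ROUNDS = 100_000  # deliberately slow to resist brute-force guessing

def hash_password(password, salt=None):
    """Derive a salted hash; store (salt, digest), never the password itself."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, PBKDF2_ROUNDS)
    return salt, digest

def check_password(password, salt, digest):
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, PBKDF2_ROUNDS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
print(check_password("s3cret", salt, digest))   # True: credentials accepted
print(check_password("guess", salt, digest))    # False: login rejected
```

The per-user random salt ensures that two users with the same password still produce different digests, defeating precomputed lookup tables.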

Hand-in-hand with authentication is assigning permissions in levels or roles within the system; in this manner, you can provide levels of usage from a read-only user to an administrator and anything in between that the system needs.
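A role-based permission check of this kind reduces to a small lookup. The sketch below uses hypothetical role and action names to show the shape of the idea: each role maps to a set of permitted actions, and every operation is gated by a single check.

```python
# Role-to-permission map; role and action names are illustrative, not from the book.
ROLE_PERMISSIONS = {
    "reader": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "delete", "configure"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the user's role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("reader", "read"))      # True: read-only user can read
print(is_allowed("reader", "write"))     # False: read-only user cannot modify
print(is_allowed("admin", "configure"))  # True: administrator has full access
```

Centralizing the map makes it easy to add a role between read-only and administrator later without touching the call sites.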

Operations Strategy

The system's day-to-day operations should be planned with administrative and maintenance roles clearly defined. These procedures can be manual or fully automated, but backup processes need to be performed, software needs to be upgraded, performance-tuning measures are ongoing, and other procedures must be attended to at regular intervals.

Data Archiving

Backup is crucial to any system that maintains data. Many individuals in the IT world will tell you that backups are preparation for the inevitable failure or rollback situation that a system will hit some day. A backup strategy must be planned and geared to the degree of data volatility in the system. How much data changes in the system, and at what rate do these changes occur? The answers to these two questions usually indicate how frequently backups need to occur.

Archiving information can also include removing obsolete data and purging data from the system to maintain system performance and reduce resource utilization.

Data Purging

Data can accumulate quickly in many systems, using up resources and degrading system performance. Take, for example, a taxation system that records information for multiple years, yet has regular data queries occurring for only the most recent data. Many systems show similar usage patterns.

As the data ages, it is less likely to be needed, so you should implement a plan for removing antiquated data. It is also a good idea to separate commonly queried data from less frequently accessed information to improve overall performance of the system.
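The archive-then-purge step for the taxation example above can be sketched against a database. This illustration uses Python's built-in `sqlite3` module with invented table names and a hypothetical cutoff year; the key point is that the copy and the delete run in one transaction so a failure cannot lose rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE returns (id INTEGER, tax_year INTEGER, data TEXT)")
conn.execute("CREATE TABLE returns_archive (id INTEGER, tax_year INTEGER, data TEXT)")
conn.executemany(
    "INSERT INTO returns VALUES (?, ?, ?)",
    [(1, 2001, "old"), (2, 2005, "recent"), (3, 2006, "current")],
)

CUTOFF = 2004  # anything older is rarely queried (hypothetical policy)

# Copy aged rows to the archive table, then purge them from the hot table,
# inside a single transaction so the data is always in exactly one place.
with conn:
    conn.execute("INSERT INTO returns_archive SELECT * FROM returns WHERE tax_year < ?", (CUTOFF,))
    conn.execute("DELETE FROM returns WHERE tax_year < ?", (CUTOFF,))

print(conn.execute("SELECT COUNT(*) FROM returns").fetchone()[0])          # 2 recent rows stay hot
print(conn.execute("SELECT COUNT(*) FROM returns_archive").fetchone()[0])  # 1 old row archived
```

Queries against current data now scan a smaller table, while the archived rows remain available for the occasional historical lookup.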

Upgrades

A system that is properly put together from the start is easier to maintain. Building a system with reusable components that perform their tasks without relying heavily on specifics of other components helps during an upgrade process and next-version planning.

When each element of a system is coded, proper versioning and assembly identification make upgrades easier to perform. Ideally, an upgrade to any one component can be performed with minimal downtime, if any.

Planning the System Support

Supporting the system after it has been deployed must include a plan for finalizing documentation for end-users, administrative staff, and the support team. Full documentation on how each component interacts is essential for those who must support the system and weren't in on its development. After the system is in place, a multitude of questions will arise. If answers can't be found in the accompanying system documentation, there is a substantial shortcoming in the system's deployment.

The system support elements that should be developed and readily available to all include reference manuals, an online help facility, tutorials and demonstrations of the system components, and a knowledge base of situations and resolutions. Maintenance plans and upgrade procedures are also a key part of supporting the system after it is deployed.

Tests, Tests, and More Tests

A testing plan that uses a variety of testing strategies will prove to be the most successful. Each system varies, but elements of the test plan will be similar, regardless of the specifics of the system being developed. Testing should be planned for all elements in the system, and then other related plans should be adjusted based on the results.

You should perform functional tests that put the product through its paces by testing all its features. Coverage testing should be performed to identify which lines of code are executed and how often. Ongoing testing should occur at strategic milestones throughout the product's development. Periodically, all components should be brought together for build testing. The different testing mechanisms are revisited in Chapter 11, "Creating Standards and Processes."

Any time you modify an implementation within a program, you should perform regression testing. Run tests that have been performed in the past against the modified code. Try to determine whether the changes break anything that worked previously. (Some companies should be doing more of this before releasing service packs.) Don't go overboard with these tests, but ensure that these tests are performed adequately.

Usage testing should be performed before, during, and after deployment to allow for performance tuning and optimization of the system as it works in a real-life situation. Integration testing aids in verifying the interactions between components and other systems and ensures that these interactions are occurring as originally designed.

Testing alone does not eliminate all errors. The human factor comes into play in every system, and an inadequately trained staff that is using the application incorrectly can also cause many system problems.

Preaching to the "Converted"

The secret to user acceptance of a system is to get users involved early in the development process. Try to implement users' suggestions, and above all, ensure that the people using the system understand what the system is supposed to do and what it is not intended to do. Ensure that there is some form of formal training so that users are educated in how the system is to be used.

Training and an education plan should work along with the system documentation. While documentation is being prepared, appropriate training materials should be developed. Materials will be needed for end users, administrators, and support personnel. Ensure that goals for capabilities of users, administration, and the support team are fully stated. Training should provide full objectives for each audience, and a training schedule should be incorporated into the overall project schedule. A proper education plan should also describe training vehicles, materials, and resources.



MCSD Self-Paced Training Kit: Analyzing Requirements and Defining Microsoft .NET Solution Architectures (Exam 70-300)
ISBN: 0735618941
Year: 2006
Pages: 175
