Business Continuity




Business Continuity Planning (BCP) focuses on protecting critical business and financial assets in the event of a disaster. Through the implementation of approved processes and procedures, BCP aims to prevent the loss of mission-critical services and to provide functional failover mechanisms should a disaster occur.

Technologies such as disk mirroring over the Internet, electronic vaulting, and Hierarchical Storage Management (HSM) can be implemented, at varying speeds and costs, to ensure Business Continuity during a disaster.

BCPs typically include a subplan called a contingency plan. Simply put, a contingency plan addresses external actions and events that pose a danger to normal operations.

Once again, in order to define and implement a solid BCP, the first step is to define and document the goals the BCP is expected to achieve. This includes identifying which of the company’s functions are essential to daily operations and potentially at risk, and also getting management to budget according to those needs.

Other important considerations to keep in mind when developing a BCP/Contingency plan might include the following:

  • A responsibilities checklist covering all members of the BCP team. This should include contact phone numbers and define who will do what and where they will do it.

  • Procedures for notifying customers and employees that an emergency or disaster has occurred and that the plan is in effect.

  • Damage assessment, control, and containment procedures.

  • Recovery of critical systems.

  • The ability to salvage a primary site or gain access to a remote backup facility or site.
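The responsibilities checklist described above can be modeled as a simple data structure. The following sketch is illustrative only; the names, phone numbers, and roles are hypothetical placeholders, not part of any real plan.

```python
from dataclasses import dataclass

@dataclass
class BcpContact:
    """One entry in the BCP responsibilities checklist."""
    name: str
    phone: str           # emergency contact number
    responsibility: str  # who will do what
    location: str        # where they will do it

# Hypothetical checklist entries, for illustration only.
checklist = [
    BcpContact("A. Admin", "555-0101", "Damage assessment", "Primary site"),
    BcpContact("B. Ops", "555-0102", "Restore critical systems", "Backup facility"),
]

def notification_roster(entries):
    """Return (name, phone) pairs for the alerting step of the plan."""
    return [(e.name, e.phone) for e in entries]
```

Keeping the checklist in a structured form like this makes it easy to generate the notification roster used in the alerting step.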

Fault Tolerance and High Availability

In order for a business to provide high availability of an application’s services and systems to its customers, whether during or after a disaster or simply in daily operations, it is imperative that a functional network include fault-tolerant systems. Fault tolerance is defined as the ability of a program or system to remain functional in the event of a hardware or software malfunction or failure. Various levels of fault tolerance offer different degrees of protection for systems and data. The fault-tolerant systems and technologies deployed at your company directly affect the effectiveness of your disaster recovery plan; in other words, your DRP should always include tested fault-tolerant systems.
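The core idea of fault tolerance, remaining functional when a component fails, can be sketched in a few lines. This is a minimal illustration, not a production failover mechanism; the service functions below are hypothetical.

```python
def call_with_failover(primary, backup):
    """Invoke primary(); if it fails, fail over to backup().

    A minimal sketch of fault-tolerant behavior: the caller still
    gets service even when one component malfunctions.
    """
    try:
        return primary()
    except Exception:
        return backup()

def primary_service():
    # Simulated hardware/software failure at the primary site.
    raise ConnectionError("primary site down")

def backup_service():
    return "served from backup site"
```

For example, `call_with_failover(primary_service, backup_service)` returns the backup's response because the primary raises an error.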

RAID

RAID (Redundant Array of Independent, originally Inexpensive, Disks) is one of the most popular means of providing fault-tolerant systems in use today. Through a process known as disk or data striping, RAID divides data into separate units and distributes the data across two or more hard disks. There are many variations of RAID available, the most popular of which are as follows:

  • RAID level 0: This level of RAID is not considered fault tolerant. It spreads data in blocks across multiple disks but provides no data redundancy. This level of RAID produces better performance only. If one disk fails with this configuration, all data is lost.

  • RAID level 1: This level is also known as disk mirroring. With RAID level 1, all data is duplicated or written to a second hard disk. If one of the disks fails, the information is still available on the second disk. This level of RAID is fault tolerant although its performance is not rated as well as RAID level 5.

  • RAID level 3: This level also spreads data units across several disks but it also uses a dedicated disk for parity information, which is used for error correction purposes. In simple terms, it provides a basic level of fault tolerance.

  • RAID level 5: This level provides excellent fault tolerance and good performance. It stores parity information across all disks in the disk array and provides concurrent disk reads and writes. It is the most popular RAID implementation.
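The parity information mentioned for RAID levels 3 and 5 is, at its simplest, the bitwise XOR of the data blocks. The sketch below shows the principle with two data blocks and one parity block; real RAID implementations operate at the controller or driver level, so this is only a conceptual model.

```python
def xor_blocks(*blocks):
    """XOR equal-length byte blocks together (the parity operation)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Stripe a unit of data across two data disks plus one parity disk.
d0 = b"\x01\x02"
d1 = b"\x0f\xf0"
parity = xor_blocks(d0, d1)

# If the disk holding d1 fails, its contents are rebuilt by XORing
# the surviving data block with the parity block.
rebuilt_d1 = xor_blocks(d0, parity)
```

Because XOR is its own inverse, any single lost block can be reconstructed from the survivors, which is exactly why these levels tolerate one disk failure.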

Server Clustering

Another form of fault tolerance that provides high availability of services, applications, and resources is called server clustering. Server clustering is the grouping of independent servers into one large logical system. Many modern operating systems, such as certain versions of Microsoft Windows and Linux, offer the ability, through software, to implement server or resource clustering. Clustering is also known to provide parallel processing and load balancing. Parallel processing divides a program’s instructions among multiple processors so they can be executed simultaneously, allowing programs to run faster. Load balancing divides work among multiple systems so that data and workloads are processed more efficiently.
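The load-balancing idea can be illustrated with the simplest common policy, round-robin rotation across cluster members. This is a toy sketch with made-up node names, not a depiction of any particular clustering product.

```python
import itertools

class RoundRobinBalancer:
    """Divide incoming work across multiple servers in rotation."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        """Pick the server that should handle the next request."""
        return next(self._cycle)

# Three hypothetical cluster nodes sharing the workload evenly.
lb = RoundRobinBalancer(["node1", "node2", "node3"])
assignments = [lb.next_server() for _ in range(6)]
```

Six requests land on the three nodes in strict rotation, so each node handles exactly two, which is the "dividing of work between multiple systems" described above.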

Another name sometimes used for a cluster of servers is a server farm. A server farm is typically a centralized group of systems or servers that act together as one unit to provide processing services such as authentication, backup, file and print services, load balancing, and other resource sharing. A Web farm is a group of Web servers that provide Web pages and services.

Finally, server systems are often made redundant as a method of fault tolerance. Redundant servers simply apply the mirroring concept to two or more server systems: when data is written to one server, it is also mirrored, or written, to the second server. The implementation and use of a remote, redundant server is an excellent real-time solution for any disaster recovery or business continuity plan.
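The mirrored-write behavior described above can be sketched as follows. The in-memory dictionaries stand in for the primary and redundant servers; this is a conceptual model only, not a real replication protocol.

```python
class MirroredStore:
    """Apply the mirroring concept: every write goes to a primary
    store and to a redundant replica."""

    def __init__(self):
        self.primary = {}
        self.replica = {}

    def write(self, key, value):
        self.primary[key] = value
        self.replica[key] = value   # mirrored write to the second server

    def read_after_failure(self, key):
        # If the primary copy is lost, the replica still has the data.
        return self.primary.get(key, self.replica.get(key))
```

Because each write lands on both stores, losing the primary does not lose the data, which is what makes a remote redundant server useful for disaster recovery.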






Security+ Exam Guide, TestTaker's Guide Series (Charles River Media Networking/Security)
ISBN: 1584502517
Year: 2003
Pages: 136
