The Operations Security domain of Information Systems Security contains many elements that are important for a CISSP candidate to remember. In this domain we will describe the controls that a computing operating environment needs to ensure the three pillars of information security: Confidentiality, Integrity, and Availability (C.I.A.). Examples of these elements are controlling the separation of job functions, controlling the hardware and media that are used, and controlling the exploitation of common I/O errors.
Operations Security can be described as the controls over the hardware in a computing facility, over the data media used in a facility, and over the operators using these resources in a facility.
We will approach this material from the three following directions:
The term operations security refers to the act of understanding the threats to and vulnerabilities of computer operations in order to routinely support operational activities that enable computer systems to function correctly. The term also refers to the implementation of security controls for normal transaction processing, system administration tasks, and critical external support operations. These controls can include resolving software or hardware problems along with the proper maintenance of auditing and monitoring processes.
Like the other domains, the Operations Security domain is concerned with triples: threats, vulnerabilities, and assets. We will now look at what constitutes a triple in the Operations Security domain:
The following are the effects of operations controls on C.I.A.:
The Operations Security domain is concerned with the controls that are used to protect hardware, software, and media resources from the following:
A CISSP candidate should know what resources to protect, how privileges should be restricted, and what controls to implement.
In addition, we will also discuss the following two critical aspects of operations controls:
The following are the major categories of operations security controls:
The following are additional control categories:
The Orange Book is one of the books of the Rainbow Series, which is a six-foot-tall stack of books from the National Security Agency, each having a different cover color, on evaluating Trusted Computer Systems. The main book, to which all others refer, is the Orange Book, which defines the Trusted Computer System Evaluation Criteria (TCSEC), as mentioned in Chapter 5. Much of the Rainbow Series has been superseded by the Common Criteria Evaluation and Validation Scheme (CCEVS). This information can be found at http://niap.nist.gov/cc-scheme/index.html. Other books in the Rainbow Series can be found at www.fas.org/irp/nsa/rainbow.htm.
The TCSEC define major hierarchical classes of security by the letters D (least secure) through A (most secure):
Table 6-1 shows these TCSEC Security Evaluation Categories.
CLASS | DESCRIPTION
---|---
D | Minimal Protection
C | Discretionary Protection
C1 | Discretionary Security Protection
C2 | Controlled Access Protection
B | Mandatory Protection
B1 | Labeled Security Protection
B2 | Structured Protection
B3 | Security Domains
A1 | Verified Protection
The Orange Book defines assurance requirements for secure computer operations. Assurance is a level of confidence that ensures that a trusted computing base’s (TCB) security policy has been correctly implemented and that the system’s security features have accurately implemented that policy.
The Orange Book defines two types of assurance: operational assurance and life cycle assurance. Operational assurance focuses on the basic features and architecture of a system, whereas life cycle assurance focuses on the controls and standards that are necessary for building and maintaining a system. An example of operational assurance is a feature that separates security-sensitive code from user code in a system’s memory.
TRUSTED COMPUTING BASE (TCB)
The trusted computing base (TCB) refers to the totality of protection mechanisms within a computer system, including hardware, firmware, and software, the combination of which is responsible for enforcing a security policy. A TCB consists of one or more components that together enforce a unified security policy over a product or system. The ability of a trusted computing base to correctly enforce a security policy depends solely on the mechanisms within the TCB and on the correct input by system administrative personnel of parameters (e.g., a user’s clearance) related to the security policy.
The operational assurance requirements specified in the Orange Book are as follows:
Life cycle assurance ensures that a TCB is designed, developed, and maintained with formally controlled standards that enforce protection at each stage in the system’s life cycle. Configuration management, which carefully monitors and protects all changes to a system’s resources, is a type of life cycle assurance.
The life cycle assurance requirements specified in the Orange Book are as follows:
In the Operations Security domain, the operations assurance areas of covert channel analysis, trusted facility management and trusted recovery, and the life cycle assurance area of configuration management are covered.
Covert Channel Analysis
An information transfer path within a system is a generic definition of a channel. A channel may also refer to the mechanism by which the path is effected. A covert channel is a communication channel that allows a process to transfer information in a manner that violates the system’s security policy. A covert channel is an information path that is not normally used for communication within a system; therefore, it is not protected by the system’s normal security mechanisms. Covert channels are a secret way to convey information to another person or program.[*] There are two common types of covert channels: covert storage channels and covert timing channels.
Covert Storage Channel
Covert storage channels convey information by changing a system’s stored data. For example, a program can convey information to a less secure program by changing the amount or the patterns of free space on a hard disk. Changing the characteristics of a file is another example of creating a covert channel. A covert storage channel typically involves a finite resource (e.g., sectors on a disk) that is shared by two subjects at different security levels.
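To make the mechanism concrete, here is a toy Python sketch of a storage channel; the shared lock-file path and one-second signaling interval are illustrative assumptions, not features of any evaluated system.

```python
# Toy sketch of a covert storage channel (illustrative only). A "high" process
# signals one bit per interval through the presence or absence of a shared file;
# the "low" process never reads the file's contents, only its metadata.
import os
import time

SHARED_FLAG = "/tmp/print_queue.lock"   # hypothetical shared resource
INTERVAL = 1.0                          # assumed seconds per bit

def send_bits(bits):
    """High-level sender: a 1 is signaled by creating the file, a 0 by removing it."""
    for bit in bits:
        if bit:
            open(SHARED_FLAG, "w").close()
        elif os.path.exists(SHARED_FLAG):
            os.remove(SHARED_FLAG)
        time.sleep(INTERVAL)

def receive_bits(count):
    """Low-level receiver: recovers each bit by checking whether the file exists."""
    received = []
    for _ in range(count):
        received.append(1 if os.path.exists(SHARED_FLAG) else 0)
        time.sleep(INTERVAL)
    return received
```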
Covert Timing Channel
A covert timing channel is a covert channel in which one process signals information to another by modulating its own use of system resources (e.g., CPU time) in such a way that this manipulation affects the real response time observed by the second process. A covert timing channel employs a process that manipulates observable system resources in a way that affects response time.
Covert timing channels convey information by altering the performance of or modifying the timing of a system resource in some measurable way. Timing channels often work by taking advantage of some kind of system clock or timing device in a system. Information is conveyed by using elements such as the elapsed time required to perform an operation, the amount of CPU time expended, or the time occurring between two events.
Covert timing channels operate in real time - that is, the information transmitted from the sender must be sensed by the receiver immediately or it will be lost - whereas covert storage channels do not. For example, a full-disk error code may be exploited to create a storage channel that could remain for an indefinite amount of time.
Noise and traffic generation are often ways to combat the use of covert channels. Table 6-2 describes the primary covert channel classes.
CLASS | DESCRIPTION
---|---
B2 | The system must protect against covert storage channels. It must perform a covert channel analysis for all covert storage channels.
B3 and A1 | The system must protect against both covert storage and covert timing channels. It must perform a covert channel analysis for both types.
Trusted Facility Management
Trusted facility management is defined as the assignment of a specific individual to administer the security-related functions of a system. Trusted facility management has two different requirements, one for B2 systems and another for B3 systems. The B2 requirements state that the TCB shall support separate operator and administrator functions.
The B3 requirements state that the functions performed in the role of a security administrator shall be identified. System administrative personnel shall be able to perform security administrator functions only after taking a distinct, auditable action to assume the security administrator role on the system. Nonsecurity functions that can be performed in the security administration role shall be limited strictly to those essential to performing the security role effectively.
Although trusted facility management is an assurance requirement only for highly secure systems, many systems evaluated at lower security levels are structured to try to meet this requirement (see Table 6-3).
CLASS | REQUIREMENTS
---|---
B2 | Systems must support separate operator and system administrator roles.
B3 and A1 | Systems must clearly identify the functions of the security administrator to perform the security-related functions.
Trusted facility management uses the concept of least privilege (discussed later in this chapter), and it is also related to the administrative concepts of separation of duties and need to know.
Separation of Duties
Separation of duties (also called segregation of duties) assigns parts of tasks to different personnel. Thus, if no single person has total control of the system’s security mechanisms, the theory is that no single person can completely compromise the system.
In many systems, a system administrator has total control of the system’s administration and security functions. This consolidation of privilege should not be allowed in a secure system; therefore, security tasks and functions should not automatically be assigned to the role of the system administrator. In highly secure systems, three distinct administrative roles might be required: a system administrator; a security administrator, who is usually an information system security officer (ISSO); and an enhanced operator function.
The security administrator, system administrator, and operator might not necessarily be different individuals. However, whenever a system administrator assumes the role of the security administrator, this role change must be controlled and audited. Because the security administrator’s job is to perform security functions, the performance of nonsecurity tasks must be strictly limited. This separation of duties reduces the likelihood of loss that results from users abusing their authority by taking actions outside of their assigned functional responsibilities. While it might be cumbersome for the person to switch from one role to another, the roles are functionally different and must be executed as such.
In the concept of two-man control, two operators review and approve the work of each other. The purpose of two-man control is to provide accountability and to minimize fraud in highly sensitive or high-risk transactions. The concept of dual control means that both operators are needed to complete a sensitive task.
Typical system administrator or enhanced operator functions can include the following:
Typical security administrator functions may include the following:
An operator may perform some system administrator roles, such as backups. This may happen in facilities in which personnel resources are constrained.
For proper separation of duties, the function of user account establishment and maintenance should be separated from the function of initiating and authorizing the creation of the account. User account management focuses on identification, authentication, and access authorizations. This is augmented by the process of auditing and otherwise periodically verifying the legitimacy of current accounts and access authorizations. It also involves the timely modification or removal of access and associated issues for employees who are reassigned, promoted, or terminated or who retire.
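A minimal Python sketch of that separation, using hypothetical role names: the person provisioning an account may not be the same person who initiated or approved the request.

```python
# Minimal sketch (hypothetical workflow): enforce that account establishment is
# performed by someone other than the requester or the approver.
from dataclasses import dataclass

@dataclass
class AccountRequest:
    user_id: str
    requested_by: str
    approved_by: str

def provision_account(request: AccountRequest, provisioner: str) -> None:
    if provisioner in (request.requested_by, request.approved_by):
        raise PermissionError(
            "Separation of duties: the provisioner may not also request or approve"
        )
    print(f"Account {request.user_id} created by {provisioner}")

# Example usage
req = AccountRequest("jdoe", requested_by="dept_manager", approved_by="security_admin")
provision_account(req, provisioner="sysadmin1")
```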
Rotation of Duties
Another variation on the separation of duties is called rotation of duties, which is defined as the process of limiting the amount of time that an operator is assigned to perform a security-related task before being moved to a different task with a different security classification. This control lessens the opportunity for collusion between operators for fraudulent purposes. Like a separation of duties, a rotation of duties may be difficult to implement in small organizations but can be an effective security control procedure.
Trusted Recovery
Trusted recovery ensures that security is not breached when a system crash or other system failure (sometimes called a discontinuity) occurs. It must ensure that the system is restarted without compromising its required protection scheme and that it can recover and roll back without being compromised after the failure. Trusted recovery is required only for B3- and A1-level systems. A system failure represents a serious security risk because the security controls might be bypassed when the system is not functioning normally.
For example, if a system crashes while sensitive data is being written to a disk (where it would normally be protected by controls), the data might be left unprotected in memory and might be accessible by unauthorized personnel. Trusted recovery has two primary activities: preparing for a system failure and recovering the system.
Failure Preparation
Under trusted recovery, preparing for a system failure consists of backing up all critical files on a regular basis. This preparation must enable the data recovery in a protected and orderly manner while ensuring the continued security of the system. These procedures may also be required if a system problem, such as a missing resource, an inconsistent database, or any kind of compromise, is detected or if the system needs to be halted and rebooted.
THE SYSTEM ADMINISTRATOR’S MANY HATS
It is not just small organizations any more that require a system administrator to function as a security administrator. The LAN/Internet network administrator role creates security risks because of its inherent lack of separation of duties. With the current pullback in the Internet economy, a network administrator has to wear many hats - and performing security-related tasks is almost always one of them (along with various operator functions). The sometimes cumbersome yet very important concept of separation of duties is vital to preserving operations controls.
System Recovery
While specific, trusted recovery procedures depend upon a system’s requirements, general, secure system recovery procedures include the following:
After all these steps have been performed and the system’s data cannot be compromised, operators can then access the system.
In addition, the Common Criteria also describe three hierarchical recovery types:
Modes of Operation
The mode of operation is a description of the conditions under which an AIS functions, based on the sensitivity of data processed and the clearance levels and authorizations of the users. Four modes of operation are defined:
MULTILEVEL DEVICE
A multilevel device is a device that is used in a manner that permits it to process the data of two or more security levels simultaneously without risk of compromise. To accomplish this, sensitivity labels are normally stored on the same physical medium and in the same form (i.e., machine readable or human readable) as the data being processed.
Configuration Management and Change Control
Change control is the management of security features and a level of assurance provided through the control of the changes made to the system’s hardware, software, and firmware configurations throughout the development and operational life cycle.
Change control manages the process of tracking and approving changes to a system. It involves identifying, controlling, and auditing all changes made to the system. It can address hardware and software changes, networking changes, or any other change affecting security. Change control can also be used to protect a trusted system while it is being designed and developed.
The primary security goal of change control is to ensure that changes to the system do not unintentionally diminish security. For example, change control may prevent an older version of a system from being activated as the production system. Proper change control may also make it possible to accurately roll back to a previous version of a system in case a new system is found to be faulty. Another goal of change control is to ensure that system changes are reflected in current documentation to help mitigate the impact that a change may have on the security of other systems, while in the production or planning stages.
The following are the primary functions of change control:
Five generally accepted procedures exist to implement and support the change control process:
Configuration management is the more formalized, higher-level process of managing changes to a complicated system, and it is required for formal, trusted systems. Change control is contained within configuration management. The purpose of configuration management is to ensure that changes made to verification systems take place in an identifiable and controlled environment. Configuration managers are responsible for ensuring that additions, deletions, or changes made to the verification system do not jeopardize its ability to satisfy trusted requirements. Therefore, configuration management is vital to maintaining the endorsement of a verification system.
Although configuration management is a requirement only for B2, B3, and A1 systems, it is recommended for systems that are evaluated at lower levels. Most developers use some type of configuration management because it is common sense.
Configuration management is a discipline applying technical and administrative direction to do the following:
Configuration management involves process monitoring, version control, information capture, quality control, bookkeeping, and an organizational framework to support these activities. The configuration being managed is the verification system plus all tools and documentation related to the configuration process.
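As a rough illustration of the bookkeeping side, the following Python sketch records each change to a configuration item with a content hash, the reason, and the approver; the field names and in-memory store are assumptions made for the example.

```python
# Minimal sketch (hypothetical fields and storage): one record per approved change
# to a configuration item, so status accounting can later locate every version.
import hashlib
from datetime import datetime, timezone

HISTORY = []   # a real system would use a controlled, protected repository

def record_change(ci_name: str, content: bytes, reason: str, approved_by: str) -> dict:
    entry = {
        "ci": ci_name,
        "version": sum(1 for h in HISTORY if h["ci"] == ci_name) + 1,
        "sha256": hashlib.sha256(content).hexdigest(),   # identifies this exact content
        "reason": reason,
        "approved_by": approved_by,
        "recorded": datetime.now(timezone.utc).isoformat(),
    }
    HISTORY.append(entry)
    return entry

record_change("firewall.rules", b"deny all\n", reason="baseline", approved_by="CCB")
```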
The four major aspects of configuration management are[*]:
Configuration Identification
Configuration management entails decomposing the verification system into identifiable, understandable, manageable, trackable units known as configuration items (CIs). A CI is a uniquely identifiable subset of the system that represents the smallest portion to be subject to independent configuration control procedures. The decomposition process of a verification system into CIs is called configuration identification.
CIs can vary widely in size, type, and complexity. Although there are no hard-and-fast rules for decomposition, the granularity of CIs can have great practical importance. A favorable strategy is to designate relatively large CIs for elements that are not expected to change over the life of the system and small CIs for elements likely to change more frequently.
Configuration Control
Configuration control is a means of ensuring that system changes are approved before being implemented; that only the proposed and approved changes are implemented; and that the implementation is complete and accurate. This involves strict procedures for proposing, monitoring, and approving system changes and their implementation. Configuration control entails central direction of the change process by personnel who coordinate analytical tasks, approve system changes, review the implementation of changes, and supervise other tasks such as documentation.
Configuration Status Accounting
Configuration accounting documents the status of configuration control activities and, in general, provides the information needed to manage a configuration effectively. It allows managers to trace system changes and establish the history of any developmental problems and associated fixes.
Configuration accounting also tracks the status of current changes as they move through the configuration control process. Configuration accounting establishes the granularity of recorded information and thus shapes the accuracy and usefulness of the audit function.
The accounting function must be able to locate all possible versions of a CI and all the incremental changes involved, thereby deriving the status of that CI at any specific time. The associated records must include commentary about the reason for each change and its major implications for the verification system.
Configuration Audit
Configuration audit is the quality assurance component of configuration management. It involves periodic checks to determine the consistency and completeness of accounting information and to verify that all configuration management policies are being followed. A vendor’s configuration management program must be able to sustain a complete configuration audit by an NCSC review team.
Configuration Management Plan
Strict adherence to a comprehensive configuration management plan is one of the most important requirements for successful configuration management. The configuration management plan is the vendor’s document tailored to the company’s practices and personnel. The plan accurately describes what the vendor is doing to the system at each moment and what evidence is being recorded.
Configuration Control Board (CCB)
All analytical and design tasks are conducted under the direction of the vendor’s corporate entity called the Configuration Control Board (CCB). The CCB is headed by a chairperson, who is responsible for ensuring that changes made do not jeopardize the soundness of the verification system and ensuring that the changes made are approved, tested, documented, and implemented correctly.
The members of the CCB should interact periodically, either through formal meetings or other available means, to discuss configuration management topics such as proposed changes, configuration status accounting reports, and other topics that may be of interest to the different areas of the system development. These interactions should be held to keep the entire system team updated on all advancements or alterations in the verification system.
Table 6-4 shows the two primary configuration management classes.
CLASS | REQUIREMENT
---|---
B2 and B3 | Configuration management procedures must be enforced during development and maintenance of a system.
A1 | Configuration management procedures must be enforced during the entire system’s life cycle.
Administrative Controls
Administrative controls can be defined as the controls that are installed and maintained by administrative management to help reduce the threat or impact of violations on computer security. We separate them from the operations controls because these controls have more to do with human resources personnel administration and policy than they do with hardware or software controls.
The following are some examples of administrative controls:
Least Privilege
The least privilege principle requires that each subject in a system be granted the most restricted set of privileges (or lowest clearance) needed for the performance of authorized tasks. The application of this principle limits the damage that can result from accident, error, or unauthorized use of system resources.
It may be necessary to separate the levels of access based on the operator’s job function. A very effective approach is least privilege. An example of least privilege is computer operators who are not allowed access to computer resources at a level beyond what is absolutely needed for their specific job tasks. Operators are organized into privilege-level groups. Each group is then assigned the most restricted level that is applicable.
The three basic levels of privilege are defined as follows:
These privilege levels are commonly much more finely granular than we have stated here, and privilege levels in a large organization can, in fact, be very complicated.
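A minimal Python sketch of group-based least privilege, using made-up group names and permissions: each group is granted only the actions its job function requires, and anything not granted is denied.

```python
# Minimal sketch (hypothetical groups and actions): operators get only the most
# restricted set of privileges needed for their tasks; everything else is denied.
PRIVILEGE_GROUPS = {
    "tape_operator":  {"mount_media", "run_backup"},
    "help_desk":      {"reset_password", "view_account_status"},
    "security_admin": {"set_clearance", "review_audit_log"},
}

def authorize(group: str, action: str) -> bool:
    """Allow an action only if it is explicitly granted to the operator's group."""
    return action in PRIVILEGE_GROUPS.get(group, set())

assert authorize("tape_operator", "run_backup")
assert not authorize("tape_operator", "set_clearance")   # denied by default
```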
Operations Job Function Overview
In a large shop, job functions and duties may be divided among a very large base of IT personnel. In many IT departments, the following roles are combined into fewer positions. The following listing, however, gives a nice overview of the various task components of the operational functions.
Record Retention
The term record retention refers to how long transactions and other types of records (legal, audit trails, e-mail, and so forth) should be retained according to management, legal, audit, or tax compliance requirements. In the Operations Security domain, record retention deals with retaining computer files, directories, and libraries. The retention of data media (tapes, diskettes, and backup media) can be based on one or more criteria, such as the number of days elapsed, number of days since creation, hold time, or other factors. An example of record retention issues could be the mandated retention periods for trial documentation or financial records.
Data Remanence
Data remanence is the residual data left on media after the media has been erased. Even after erasure, physical traces may remain that could allow sensitive data to be reconstructed. Object reuse mechanisms ensure that system resources are allocated and reassigned among authorized users in a way that prevents the leak of sensitive information, and they ensure that the authorized user of the system does not obtain residual information from system resources.
Object reuse is defined as “The reassignment to some subject of a storage medium (e.g., page frame, disk sector, magnetic tape) that contained one or more objects. To be securely reassigned, no residual data can be available to the new subject through standard system mechanisms.”[*] The object reuse requirement of the TCSEC is intended to ensure that system resources, in particular storage media, are allocated and reassigned among system users in a manner that prevents the disclosure of sensitive information.
Systems administrators and security administrators should be informed of the risks involving the issues of object reuse, declassification, destruction, and disposition of storage media. Data remanence, object reuse, and the proper disposal of data media are also discussed in Chapter 10.
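The following toy Python sketch shows the object reuse idea at the application level - clearing a buffer before handing it to the next user - although real object reuse enforcement happens inside the trusted computing base, not in application code.

```python
# Toy sketch only: zero a shared buffer before reassigning it, so the next subject
# cannot obtain residual data through normal mechanisms. (A TCB enforces this for
# page frames, disk sectors, and similar resources; this merely illustrates the idea.)
def reassign_buffer(buf: bytearray) -> bytearray:
    for i in range(len(buf)):
        buf[i] = 0          # overwrite residual contents
    return buf

shared = bytearray(b"previous user's sensitive data")
shared = reassign_buffer(shared)
assert all(b == 0 for b in shared)
```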
Due Care and Due Diligence
The concepts of due care and due diligence require that an organization engage in good business practices relative to the organization’s industry. An example of due care could be training employees in security awareness, rather than simply creating a policy with no implementation plan or follow-up. Mandating statements from the employees that they have read and understood appropriate computer behavior is also an example of due care.
Due diligence might be mandated by various legal requirements in the organization’s industry or through compliance with governmental regulatory standards. Due care and due diligence are described in more detail in Chapter 9.
Due care and due diligence are becoming serious issues in computer operations today. In fact, the legal system has begun to hold major partners liable for the lack of due care in the event of a major security breach. Violations of security and privacy are hot-button issues that are confronting the Internet community, and standards covering the best practices of due care are necessary for an organization’s protection.
Documentation Control
A security system needs documentation controls. Documentation can include several things: security plans, contingency plans, risk analyses, and security policies and procedures. Most of this documentation must be protected from unauthorized disclosure; for example, printer output must be in a secure location. Disaster recovery documentation must also be readily available in the event of a disaster.
Operations controls embody the day-to-day procedures used to protect computer operations. A CISSP candidate must understand the concepts of resource protection, hardware/software control, and privileged entity.
The following are the most important aspects of operations controls:
Resource Protection
Resource protection is just what it sounds like - the concept of protecting an organization’s computing resources and assets from loss or compromise. Computing resources are defined as any hardware, software, or data that is owned and used by the organization. Resource protection is designed to help reduce the possibility of damage that can result from the unauthorized disclosure or alteration of data by limiting the opportunities for its misuse.
Various examples of resources that require protection are:
Hardware Resources
Software Resources
Data Resources
Hardware Controls
Hardware Maintenance
System maintenance requires physical or logical access to a system by support and operations staff, vendors, or service providers. Maintenance may be performed on-site, or the unit needing replacement may be transported to a repair site. Maintenance might also be performed remotely. Furthermore, background investigations of the service personnel may be necessary. Supervising and escorting the maintenance personnel when they are on-site is also necessary.
Maintenance Accounts
Many computer systems provide maintenance accounts. These supervisor-level accounts are created at the factory with preset and widely known passwords. It is critical to change these passwords, or at least disable the accounts until they are actually needed for maintenance. If an account is used remotely, authentication of the maintenance provider can be performed by using callback or encryption.
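As a rough sketch of the control described above, the following Python fragment flags maintenance accounts that still carry a well-known factory password or that are enabled outside a maintenance window; the account fields and default-password list are hypothetical.

```python
# Minimal sketch (hypothetical data): audit factory maintenance accounts.
import hashlib

KNOWN_DEFAULT_HASHES = {                      # assumed vendor default passwords
    hashlib.sha256(b"fieldservice").hexdigest(),
    hashlib.sha256(b"maint").hexdigest(),
}

def audit_maintenance_account(name, password_hash, enabled, window_open):
    findings = []
    if password_hash in KNOWN_DEFAULT_HASHES:
        findings.append(f"{name}: factory default password has not been changed")
    if enabled and not window_open:
        findings.append(f"{name}: account enabled outside a maintenance window")
    return findings

print(audit_maintenance_account(
    "ce_remote", hashlib.sha256(b"maint").hexdigest(), enabled=True, window_open=False))
```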
Diagnostic Port Control
Many systems have diagnostic ports through which troubleshooters can directly access the hardware. These ports should be used only by authorized personnel and should not enable either internal or external unauthorized access. Diagnostic port attack is the term that describes this type of abuse.
Hardware Physical Control
Many data processing areas that contain hardware may require locks and alarms. The following are some examples:
Locks and alarms are described in more detail in Chapter 10.
Software Controls
An important element of operations controls is software support - controlling what software is used in a system. The following are some elements of controls on software:
TRANSPARENCY OF CONTROLS
One important aspect of controls is the need for their transparency. Operators need to feel that security protections are reasonably flexible and that the security protections do not get in the way of doing their jobs. Ideally, the controls should not require users to perform extra steps, although realistically this result is hard to achieve. Transparency also aids in preventing users from learning too much about the security controls.
Privileged-Entity Controls
Privileged-entity access, which is also known as privileged operations functions, is defined as an extended or special access to computing resources given to operators and system administrators. Many job duties and functions require privileged-entity access.
Privileged-entity access is most often divided into classes. Operators should be assigned to a class based on their job title.
The following are some examples of privileged-entity operator functions:
RESTRICTING HARDWARE INSTRUCTIONS
A system control program, or the design of the hardware itself, restricts the execution of certain computing functions and permits them only when a processor is in a particular functional state, known as privileged or supervisor state. Applications can run in different states, during which different commands are permitted. To be authorized to execute privileged instructions, a program should be running in a restrictive state that enables these commands.
Media Resource Protection
Media resource protection can be classified into two areas: media security controls and media viability controls. Media security controls are implemented to prevent any threat to C.I.A. by the intentional or unintentional exposure of sensitive data. Media viability controls are implemented to preserve the proper working state of the media, particularly to facilitate the timely and accurate restoration of the system after a failure.
Media Security Controls
Media security controls should be designed to prevent the loss of sensitive information when the media is stored outside the system.
A CISSP candidate needs to know several of the following elements of media security controls:
Overwriting
Simply copying new data over the media is not recommended, because the application may not completely overwrite the old data, and strict configuration controls must be in place on both the operating system and the software itself. In addition, bad sectors on the media may prevent the software from overwriting old data properly.
To purge the media, the DoD requires overwriting with a pattern, then its complement, and finally with another pattern; for example, overwriting first with 0011 0101, followed by 1100 1010, then 1001 0111. To satisfy the DoD clearing requirement, it is required to write a character to all data locations in the disk. The number of times an overwrite must be accomplished depends on the storage media, sometimes on its sensitivity, and sometimes on differing DoD component requirements, but seven times is most commonly recommended.
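The Python sketch below illustrates the pattern/complement/pattern sequence described above at the file level; it is illustrative only and not a certified sanitization tool, since file-level overwrites may not reach every physical sector (for example, on journaling file systems or flash media).

```python
# Illustrative only: overwrite a file's contents with a pattern, its complement,
# and a final pattern. Not a substitute for approved purging or degaussing.
import os

PASSES = [0b00110101, 0b11001010, 0b10010111]   # pattern, complement, pattern

def overwrite_file(path: str) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for byte in PASSES:
            f.seek(0)
            f.write(bytes([byte]) * size)
            f.flush()
            os.fsync(f.fileno())   # push this pass toward the physical media
```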
Degaussing
Degaussing is often recommended as the best method for purging most magnetic media. Degaussing is a process whereby the magnetic field patterns are erased from the media, returning the medium to its initial virgin state. Erasure via degaussing may be accomplished in two ways:
Another important point about degaussing is that degaussed magnetic hard drives will generally require restoration of factory-installed timing tracks, so data purging is recommended instead.
Destruction
Paper reports and diskettes need to be physically destroyed before disposal. Also, physical destruction of optical media (CD-ROM or WORM disks) is necessary.
Destruction techniques can include shredding or burning documentation, physically breaking CD-ROMs and diskettes, and destroying media with acid. Paper reports should be shredded by personnel with the proper level of security clearance. Some shredders cut in straight lines or strips; others cross-cut or disintegrate the material into pulp. Care must be taken to limit access to reports awaiting disposal and to those stored for long periods. Reports should never be disposed of without shredding; never place them in a dumpster intact. Burning is also sometimes used to destroy paper reports, especially in the DoD.
In some cases, acid is used to destroy disk pack surfaces. Applying a high concentration of hydroiodic acid (55% to 58% solution) to the gamma ferric oxide disk surface is a rarely used method of media destruction, and acid solutions should be used in a well-ventilated area and only by qualified personnel.
Media Viability Controls
Many physical controls should be used to protect the viability of the data storage media. The goal is to protect the media from damage during handling and transportation or during short-term or long-term storage. Proper marking and labeling of the media are required in the event of a system recovery process.
MEDIA LIBRARIAN
It is the job of a media librarian to control access to the media library and to regulate the media library environment. All media must be labeled in a human- and machine-readable form that should contain information such as the creation date and creator, the retention period, a volume name and version, and the security classification.
Physical Access Controls
The control of physical access to the resources is the major tenet of the Physical Security domain. Obviously, the Operations Security domain requires physical access control, and the following list contains examples of some of the elements of the operations resources that need physical access control.
Hardware
Software
Obviously, all personnel require some sort of control and accountability when accessing physical resources, yet some personnel will require special physical access to perform their job functions. The following are examples of this type of personnel:
Special arrangements for supervision must be made when external support providers are entering a data center.
The term physical piggybacking describes an unauthorized person going through a door behind an authorized person. The concept of a man trap (described in Chapter 10) is designed to prevent physical piggybacking.
[*]Sources: DoD 5200.28-STD, Department of Defense Trusted Computer System Evaluation Criteria; and NCSC-TG-030, A Guide to Understanding Covert Channel Analysis of Trusted Systems (Light Pink Book).
[*]Sources: National Computer Security Center publication NCSC-TG-006, A Guide To Understanding Configuration Management In Trusted Systems; NCSC-TG-014, Guidelines for Formal Verification Systems.
[*]Source: NCSC-TG-018, A Guide to Understanding Object Reuse in Trusted Systems (Light Blue Book).
Operational assurance requires the process of reviewing an operational system to see that security controls, both automated and manual, are functioning correctly and effectively. Operational assurance addresses whether the system’s technical features are being bypassed or have vulnerabilities and whether required procedures are being followed. To maintain operational assurance, organizations use two basic methods: system audits and monitoring. A system audit is a one-time or periodic event to evaluate security; monitoring refers to an ongoing activity that examines either the system or the users.
Problem identification and problem resolution are the primary goals of monitoring. The concept of monitoring is integral to almost all the domains of information security. In Chapter 3 we described some technical aspects of monitoring and intrusion detection. Chapter 10 will also describe intrusion detection and monitoring from a physical access perspective. In this chapter we are more concerned with monitoring the controls implemented in an operational facility in order to identify abnormal computer usage, such as inappropriate use or intentional fraud. The task of failure recognition and response, which includes reporting mechanisms, is an important part of monitoring.
Monitoring contains the mechanisms, tools, and techniques that permit the identification of security events that can impact the operation of a computer facility. It also includes the actions to identify the important elements of an event and to report that information appropriately.
The concept of monitoring includes monitoring for illegal software installation, monitoring the hardware for faults and error states, and monitoring operational events for anomalies.
Monitoring Techniques
To perform this type of monitoring, an information security professional has several tools at his or her disposal:
Intrusion Detection (ID)
Intrusion Detection (ID) is a useful tool that can assist in the detective analysis of intrusion attempts. ID can be used not only for the identification of intruders but also to create a sampling of traffic patterns. By analyzing the activities occurring outside of normal clipping levels, a security practitioner can find evidence of events such as in-band signaling or other system abuses.
Penetration Testing
Penetration testing is the process of testing a network’s defenses by attempting to access the system from the outside, using the same techniques that an external intruder (for example, a cracker) would use. This testing gives a security professional a better snapshot of the organization’s security posture.
Among the techniques used to perform a penetration test are:
Figure 6-1 shows how penetration testing techniques should be used to test every access point of the network and work area.
Figure 6-1: Penetration testing all network access points.
Other techniques that are not solely technology-based can be used to complement the penetration test. The following are examples of such techniques:
Violation Analysis
One of the most-used techniques to track anomalies in user activity is violation tracking, processing, and analysis. To make violation tracking effective, clipping levels must be established. A clipping level is a baseline of user activity that is considered a routine level of user errors. A clipping level enables a system to ignore normal user errors. When the clipping level is exceeded, a violation record is then produced. Clipping levels are also used for variance detection.
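A minimal Python sketch of a clipping level, using an assumed threshold of three failed logins per user per day: routine mistakes below the threshold are ignored, and only counts above it produce violation records.

```python
# Minimal sketch (assumed threshold and log format): generate a violation record
# only when a user's failed-login count exceeds the clipping level.
from collections import Counter

CLIPPING_LEVEL = 3   # assumed baseline of tolerated failures per user per day

def violations(failed_login_users):
    """failed_login_users: iterable of user IDs, one entry per failed attempt."""
    counts = Counter(failed_login_users)
    return {user: n for user, n in counts.items() if n > CLIPPING_LEVEL}

events = ["alice", "bob", "alice", "alice", "alice", "carol"]
print(violations(events))   # {'alice': 4} -> exceeds the clipping level
```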
Using clipping levels and profile-based anomaly detection, the following types of violations should be tracked, processed, and analyzed:
Profile-based anomaly detection uses profiles to look for abnormalities in user behavior. A profile is a pattern that characterizes the behavior of users. Patterns of usage are established according to the various types of activities the users engage in, such as processing exceptions, resource utilization, and patterns in actions performed. The ways in which the various types of activity are recorded in the profile are referred to as profile metrics.
Benefits of Incident-Handling Capability
The primary benefits of employing an incident-handling capability are containing and repairing damage from incidents and preventing future damage. Additional benefits related to establishing an incident-handling capability are[*]:
INDEPENDENT TESTING
It is important to note that in most cases, external penetration testing should be performed by a reputable, experienced firm that is independent of an organization’s IT or Audit departments. This independence guarantees an objective, nonpolitical report on the state of the company’s defenses. The firm must be fully vetted, however, and full legal nondisclosure issues must be resolved to the organization’s satisfaction before work begins. For this reason, “Black Hat” testers - that is, ex-crackers now working for security firms - are often not recommended.
The implementation of regular system audits is the foundation of operational security controls monitoring. In addition to enabling internal and external compliance checking, regular auditing of audit (transaction) trails and logs can assist the monitoring function by helping to recognize patterns of abnormal user behavior.
Security Auditing
Information Technology (IT) auditors are often divided into two types: internal and external. Internal auditors typically work for the organization whose systems are to be audited, whereas external auditors do not. External auditors are often Certified Public Accountants (CPAs) or other audit professionals who are hired to perform an independent audit of an organization’s financial statements. Internal auditors, on the other hand, usually have a much broader mandate: checking for compliance and standards of due care, auditing operational cost efficiencies, and recommending the appropriate controls.
IT auditors typically audit the following functions:
In addition, IT auditors might recommend improvements to controls, and they often participate in a system’s development process to help an organization avoid costly re-engineering after the system’s implementation.
Audit Trails
An audit trail is a set of records that collectively provides documentary evidence of processing, used to aid in tracing from original transactions forward to related records and reports or backward from records and reports to their component source transactions. Audit trails may be limited to specific events, or they may encompass all the activities on a system.
An audit (or transaction) trail enables a security practitioner to trace a transaction’s history. This transaction trail provides information about additions, deletions, or modifications to the data within a system. Audit trails enable the enforcement of individual accountability by creating a reconstruction of events. As with monitoring, one purpose of an audit trail is to assist in a problem’s identification, which leads to a problem’s resolution. An effectively implemented audit trail also enables an auditor to retrieve and easily certify the data. Any unusual activity or variation from the established procedures should be identified and investigated.
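A minimal Python sketch of a structured audit record, with assumed field names: each security-relevant event is written with a timestamp, the user, the action, its target, and whether it succeeded, so that events can be reconstructed later.

```python
# Minimal sketch (assumed fields): one structured record per security-relevant event.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")
audit_log.addHandler(logging.FileHandler("audit_trail.log"))
audit_log.setLevel(logging.INFO)

def record_event(user: str, event: str, target: str, success: bool) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "event": event,       # e.g. "login", "delete_file"
        "target": target,
        "success": success,
    }
    audit_log.info(json.dumps(entry))

record_event("jdoe", "delete_file", "/var/log/app.log", success=False)
```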
The audit logs should record the following:
In addition, an auditor should examine the audit logs for the following:
USER ACCOUNT REVIEW
It is necessary to regularly review user accounts on a system. Such reviews may examine the levels of access each individual has, conformity with the concept of least privilege, whether all accounts are still active, whether management authorizations are up-to-date, or whether required training has been completed, for example. These reviews can be conducted on at least two levels: on an application-by-application basis or on a systemwide basis. Both kinds of reviews can be conducted by, among others, in-house systems personnel (a self-audit), the organization’s internal audit staff, or external auditors.
User audit trails can usually log:
It is most useful if the options and parameters of commands are also recorded. It is much more useful to know that a user tried to delete a log file (e.g., to hide unauthorized actions) than to know the user merely issued the delete command, possibly for a personal data file.
Source: National Institute of Standards and Technology Special Publication 800-12, An Introduction to Computer Security: The NIST Handbook.
The audit mechanism of a computer system has five important security goals:[*]
ELECTRONIC AUDIT TRAILS
Maintaining a proper audit trail is more difficult now because fewer transactions are recorded on paper media; many records exist only in electronic form. In the old paper system, a physical purchase order might be prepared with multiple copies, initiating a physical, permanent paper trail. An auditor’s job is now more complicated because digital media are more transient and a paper trail may not exist.
Other important security issues regarding the use of audit logs are:
Problem Management Concepts
Effective auditing embraces the concepts of problem management. Problem management is a way to control the process of problem isolation and problem resolution. An auditor may use problem management to resolve the issues arising from an IT security audit, for example.
The goal of problem management is threefold:
The first step in implementing problem management is to define the potential problem areas and the abnormal events that should be investigated. Some examples of potential problem areas are:
Some examples of abnormal events that could be discovered during an audit are as follows:
Of course, the final objective of problem management is resolution of the problem.
[*]Source: NIST Special Publication 800-12, “An Introduction to Computer Security: The NIST Handbook.”
[*]Source: NCSC-TG-001, A Guide to Understanding Audit in Trusted Systems (Tan Book).
[†]Source: V. D. Gligor, Guidelines for Trusted Facility Management and Audit (University of Maryland, 1985).
A threat is simply any event that, if realized, can cause damage to a system and create a loss of confidentiality, availability, or integrity. Threats can be malicious, such as the intentional modification of sensitive information, or they can be accidental, such as an error in a transaction calculation or the accidental deletion of a file.
A vulnerability is a weakness in a system that can be exploited by a threat. Reducing the vulnerable aspects of a system can reduce the risk and impact of threats on the system. For example, a password generation tool that helps users choose robust passwords reduces the chance that users will select poor passwords (the vulnerability) and makes the password more difficult to crack (the threat of external attack).
Threats and vulnerabilities are discussed in several of the ten domains; for example, many examples of attacks are given in Chapter 2.
We have grouped the threats into several categories, and we will describe some of the elements of each category.
Accidental Loss
Accidental loss is a loss that is incurred unintentionally, either through the lack of operator training or proficiency or by the malfunctioning of an application’s processing procedure. The following are some examples of the types of accidental loss:
Inappropriate Activities
Inappropriate activity is computer behavior that, while not rising to the level of criminal activity, may be grounds for job action or dismissal.
Illegal Computer Operations and Intentional Attacks
Under this heading, we have grouped the areas of computer activity that are considered intentional, illegal computer activity undertaken for personal financial gain or for destruction:
[*]Source: Fighting Computer Crime, Donn B. Parker (Wiley, 1998).
As we’ve discussed before, availability is one of the three cornerstone tenets of information systems security. In Chapter 3 we discussed the concept of Network Availability using fault-tolerant systems and server clustering. Here let’s look at how backup systems can help guarantee a system’s up time, and support the tenet of availability.
RAID stands for redundant array of inexpensive disks or redundant array of independent disks. Its primary purpose is to provide fault tolerance and protection against file server hard disk failure and the resultant loss of availability and data. As a secondary benefit, some RAID types improve system performance by caching and distributing disk reads across the multiple disks that work together to store files simultaneously.
Simply put, RAID separates the data into multiple units and stores it on multiple disks by using a process called striping. It can be implemented as either a hardware or a software solution; each type of implementation has its own issues and benefits.
The RAID Advisory Board has defined three classifications of RAID:
RAID is implemented in one or a combination of several ways, called levels. They are:
Vendors created various other implementations of RAID to combine the features of several RAID levels, although these levels are less common. Level 6 is an extension of Level 5 that allows for additional fault tolerance by using a second independent distributed-parity scheme (i.e., two-dimensional parity). Level 10 is created by combining Level 0 (striping) with Level 1 (mirroring). Level 15 is created by combining Level 1 (mirroring) with Level 5 (interleave). Level 51 is created by mirroring entire Level 5 arrays. Table 6-5 shows the various levels of RAID with terms you will need to remember.
RAID LEVEL | DESCRIPTION
---|---
0 | Striping
1 | Mirroring
2 | Hamming Code Parity
3 | Byte Level Parity
4 | Block Level Parity
5 | Interleave Parity
6 | Second Independent Parity
7 | Single Virtual Disk
10 | Striping Across Multiple Pairs (1+0)
15 | Striping With Parity Across RAID 5 Pairs (1+5)
51 | Mirrored RAID 5 Arrays With Parity (5+1)
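To show why parity provides fault tolerance, here is a short Python sketch of the XOR parity used by the parity-based levels: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```python
# Minimal sketch: XOR parity, the idea behind RAID levels 3, 4, and 5.
from functools import reduce

def parity(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]   # stripes on three data disks
p = parity(data)                     # parity block stored on another disk

# The second data disk fails: rebuild its block from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```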
A CISSP candidate will also need to know the basic concepts of data backup. The candidate might be presented with questions regarding file selection methods, tape format types, and common problems.
Tape Backup Methods
The purpose of a tape backup method is to protect and restore lost, corrupted, or deleted information - thereby preserving the data’s integrity and ensuring network availability. There are several varying methods of selecting files for backup.
Most backup methods use the Archive file attribute to determine whether the file should be backed up. The backup software determines which files need to be backed up by checking to see whether the Archive file attribute has been set and then resets the Archive bit value to null after the backup procedure.
The three most common methods are:
BACKUP METHOD EXAMPLE
A full backup was made on Friday night. This full backup is just what it says - it copied every file on the file server to the tape, regardless of the last time any other backup was made. This type of backup is common for creating full copies of the data for off-site archiving or in preparation for a major system upgrade. On Monday night, another backup was made. If the site uses the incremental backup method, Monday, Tuesday, Wednesday, and Thursday’s backup tapes contain only those files that were altered during that day (Monday’s incremental backup tape has only Monday’s data on it, Tuesday’s backup tape has only Tuesday’s on it, and so on). All backup tapes might be required to restore a system to its full state after a system crash, because some files that changed during the week may exist only on one tape. If the site is using the differential backup method, Monday’s tape backup has the same files that the incremental tape has (Monday is the only day that the files have changed so far). However, on Tuesday, rather than only backing up that day’s files, the site also backed up Monday’s files - creating a longer backup. Although this increases the time required to perform the backup and increases the amount of tape needed, it does provide more protection from tape failure and speeds up recovery time (see Table 6-6).
BACKUP METHOD | MONDAY | TUESDAY | WEDNESDAY | THURSDAY | FRIDAY
---|---|---|---|---|---
Full Backup | Not used | Not used | Not used | Not used | All files
Differential | Changed File A | Changed Files A and B | Files A, B, and C | Files A, B, C, and D | Not used
Incremental | Changed File A | Changed File B | Changed File C | Changed File D | Not used
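The following Python sketch (with hypothetical file metadata) shows how the Archive attribute drives file selection: full and incremental backups clear the attribute afterward, while a differential backup leaves it set so that it keeps capturing everything changed since the last full backup.

```python
# Minimal sketch (hypothetical metadata): select files for backup from the
# archive flag and clear the flag only for full and incremental backups.
def select_files(files: dict, method: str) -> list:
    """files maps path -> archive flag; method is 'full', 'incremental', or 'differential'."""
    if method == "full":
        chosen = list(files)
    else:
        chosen = [path for path, flag in files.items() if flag]
    if method in ("full", "incremental"):
        for path in chosen:
            files[path] = False   # clear the Archive attribute
    return chosen

files = {"report.doc": True, "db.mdb": True, "notes.txt": False}
print(select_files(files, "incremental"))   # ['report.doc', 'db.mdb']
```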
Other Backup Formats
Common Backup Issues and Problems
All backup systems share common issues and problems, whether they use a tape or a CD-ROM format. There are three primary backup concerns:
The Chapter 4 section “E-mail Security Issues and Approaches” lists the main objectives of e-mail security and describes some cryptographic approaches, such as PEM, PGP, and S/MIME. This section addresses other ways e-mail can pose a threat to the organization’s security posture, as well as some solutions.
E-mails have three basic parts: attachments, content, and headers. Both the content and attachments are areas of vulnerability. E-mail is the primary means of virus and malicious code distribution, being one of the main ways Trojan horses and other executable code are distributed. The virus danger from e-mail stems from attachments containing active executable program files (with extensions such as CLASS, OCX, EXE, COM, and DLL) and from macro-enabled data files. These attachments could contain malicious code that could be masquerading as another file type. These attachments do not even need to be opened if the mail client automatically displays all attachments. (You should disable the preview pane feature in all your mail clients.) Virus detection and removal is a major industry and will continue to be so into the foreseeable future.
As shown in Figure 6-2, e-mail relay servers can propagate spam if the relay agent is not correctly configured. Any SMTP mail server in the DMZ should be correctly configured so that its relay agent is not being used by an unauthorized mail server for spamming. If your system is used for spamming, or even if it only has the possibility of being used for spamming, your customers’ Internet service providers may blacklist your domain, and you could be exposed to legal liability.
Figure 6-2: Spam can propagate through the enterprise and onto other networks.
A relay should not forward every message it receives; it should accept only mail addressed to its own domain, and it must have proper antispam features enabled. It must also apply antivirus and content-filtering applications to both incoming and outgoing mail to minimize the company’s exposure to liability. Figure 6-2 shows how open e-mail relays can compromise multiple networks.
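A rough Python sketch of an open-relay check using the standard smtplib module follows; the probe addresses are placeholders, and such a probe should be run only against servers you are authorized to test.

```python
# Minimal sketch (hypothetical addresses; authorized testing only): a server that
# accepts mail from one outside domain to another outside domain is relaying openly.
import smtplib

def looks_like_open_relay(host: str, port: int = 25) -> bool:
    probe_from = "probe@external-example.com"          # not local to the server
    probe_to = "recipient@other-external-example.net"  # also not local
    try:
        with smtplib.SMTP(host, port, timeout=10) as server:
            server.sendmail(probe_from, probe_to, "Subject: relay probe\r\n\r\ntest")
        return True        # accepted for relay: likely misconfigured
    except smtplib.SMTPException:
        return False       # relaying refused in some way, as it should be
```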
E-mail is currently the largest attack vector for phishing malware and ID theft exploits. This may change, because Web sites increasingly employ advanced scripting techniques and automated functions; but e-mail is still the hands-down winner.
You can take a number of steps to protect your business from fraudulent e-mail, including the following:
MALICIOUS CODE VECTORS IN HTML E-MAIL
HTML e-mail and infected Web pages can deliver malicious code to the user in a variety of ways, such as:
The following sections discuss these three topics in more detail.[*]
Standard Customer Communication Policy
The organization should have an e-mail standard for communicating with clients and customers. A standard customer communications policy should convey a consistent message and not confuse your customers.
Here are some basic customer e-mail policy standards:
E-Mail Authentication Systems
E-mail authentication systems may provide an effective means of stopping e-mail and IP spoofing. Without authentication, verification, and traceability, users can never know for certain whether a message is legitimate or forged. E-mail administrators continually have to make educated guesses on behalf of their users about what to deliver, what to block, and what to quarantine.
E-MAIL BOUNCES
One piece of evidence that a spammer may be using your "From:" address is the receipt of hundreds of returned undeliverable messages a day. What is happening is that a virus or a spammer is inserting your domain into the "From:" address, and the recipients' servers are configured to blindly return, or "bounce," spam to the apparent sender: you.
READING HEADERS
The following quote comes from Phishing: Cutting the Identity Theft Line by Lininger and Vines:
“Learning to read email headers is overrated. It’s kind of a neat parlor trick, but if you’re to the point where you need to read the headers to find out if it’s an honest message, you should be contacting the alleged sender directly. If the message is real, the headers will support that. If the message is fraudulent, there’s a pretty good chance the headers will still look real. Any header can be forged. The headers of a spam message might go back to the original server it was sent from, but this isn’t common. More likely, the headers will lead you back to the bot the spammer hijacked. Or some innocent third party. Or god@heaven.org.”
The four main contenders for authentication are Sender Policy Framework (SPF), SenderID, DomainKeys, and Cisco Identified Internet Mail. The Anti-Phishing Working Group (APWG) estimates that adopting a two-step e-mail authentication standard (say, using both SPF and DomainKeys) could stop 85% of phishing attacks in their current form. Although all four systems rely on changes being made to DNS, they differ in the specific part of the e-mail that each tests:
All e-mail will eventually have to comply with some type of sender verification method if you want it to get through. Successful deployment of e-mail authentication will probably be achieved in stages, incorporating multiple approaches and technologies.
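To show what an SPF check actually decides, the sketch below evaluates a sending IP address against a published SPF record. It handles only the ip4 mechanism with an implicit fail, a small subset of the real specification, and the record and addresses shown are hypothetical.

```python
import ipaddress

def spf_permits(record, sender_ip):
    """Tiny subset of SPF evaluation: ip4 mechanisms plus a default fail.

    A real SPF checker also handles the a, mx, include, and redirect
    mechanisms and the softfail/neutral qualifiers; this only shows the idea.
    """
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split():
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

# Hypothetical record a domain owner might publish as a DNS TXT record:
record = "v=spf1 ip4:192.0.2.0/24 -all"
print(spf_permits(record, "192.0.2.45"))   # True: sent from an authorized host
print(spf_permits(record, "203.0.113.9"))  # False: likely forged, reject or flag
```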
[*]Excerpted from Phishing: Cutting the Identity Theft Line, Lininger and Vines (Wiley, 2005). Used by permission.
In some ways, fax security awareness has taken a back seat to the security of other types of intraorganizational communications, such as e-mail and instant messaging (IM). Also, the use of fax servers has helped curtail the vulnerability of printed faxes lying around the office. But because fax technology is still widespread, the CISSP candidate needs to know a few basics about fax security, especially the threats to fax servers.
Because fax machines are often used to transmit sensitive information, they present security issues. Guidelines and procedures on the use of faxes, for receiving as well as sending, must be incorporated into the organization's security policy. Because a received fax sits in a physical inbox until it is retrieved, policies similar to those for sensitive document output should be implemented.
Fax servers electronically route a received fax to the e-mail inbox of the destination addressee. Because the fax stays in electronic form, this helps mitigate the sensitive-document issue. It also saves money by cutting down on paper and shredding requirements.
One problem with this approach is that users tend to print out received faxes, thereby re-creating the issue. If necessary, the print feature can be disabled in the fax server's configuration so that documents are viewed only in a way that preserves their security classification.
The fax server should also be monitored and audited, and encryption of the fax transmission may be implemented in high-security environments. The organization may employ a fax encryptor, an encryption mechanism that encrypts all fax transmissions at the Data Link Layer and helps ensure that all incoming and outgoing fax data is encrypted at its source.
You can find the answers to the following questions in Appendix A.
1. Which of the following places the four systems security modes of operation in order, from the most secure to the least?
2. Why is security an issue when a system is booted into single-user mode?
3. An audit trail is an example of what type of control?
4. Which of the following media controls is the best choice to prevent data remanence on magnetic tapes or floppy disks?
5. Which of the following choices is not a security goal of an audit mechanism?
6. Which of the following tasks would normally be a function of the security administrator, not the system administrator?
7. Which of the following is a reason to institute output controls?
8. Which of the following statements is not correct about reviewing user accounts?
9. Which of the following terms most accurately describes the trusted computing base (TCB)?
10. Which of the following statements is accurate about the concept of object reuse?
11. Using prenumbered forms to initiate a transaction is an example of what type of control?
12. Which of the following choices is the best description of operational assurance?
13. Which of the following is not a proper media control?
14. Which of the following choices is considered the highest level of operator privilege?
15. Which of the following choices most accurately describes a covert storage channel?
16. Which of the following would not be a common element of a transaction trail?
17. Which of the following would not be considered a benefit of employing an incident-handling capability?
18. Which of the following is the best description of an audit trail?
19. Which of the following best describes the function of change control?
20. Which of the following is not an example of intentionally inappropriate operator activity?
21. Which book of the Rainbow Series addresses the Trusted Computer System Evaluation Criteria (TCSEC)?
22. Which term best describes the concept of least privilege?
23. Which of the following best describes a threat as defined in the Operations Security domain?
24. Which of the following is not a common element of user account administration?
25. Which of the following is not an example of using a social engineering technique to gain physical access to a secure facility?
26. Which statement about covert channel analysis is not true?
27. "Separation of duties" embodies what principle?
28. Covert Channel Analysis, Trusted Facility Management, and Trusted Recovery are parts of which book in the TCSEC Rainbow Series?
29. How do covert timing channels convey information?
30. Which of the following would be the best description of a clipping level?
31. Which of the following backup methods will probably require the backup operator to use the largest number of tapes for a complete system restoration if a different tape is used every night in a five-day rotation?
32. Which level of RAID is commonly referred to as disk mirroring?
33. Which is not a common element of an e-mail?
34. Which of the following choices is the best description of a fax encryptor?
35. Which of the following statements is true about e-mail headers?
Answers
1. Answer: b. Dedicated Mode, System-High Mode, Compartmented Mode, and Multilevel Mode.
2. Answer: a. When the operator boots the system in single-user mode, the user front-end security controls are not loaded. This mode should be used only for recovery and maintenance procedures, and all operations should be logged and audited.
3. Answer: c. An audit trail is a record of events that allows a reconstruction of what has happened and enforces individual accountability. Audit trails can also be used to assist in the proper implementation of the other controls.
4. Answer: b. Degaussing is recommended as the best method for purging most magnetic media. Answer a is not recommended because the application may not completely overwrite the old data. Answer c is a rarely used method of media destruction, and acid solutions should be used in a well-ventilated area only by qualified personnel. Answer d is incorrect.
5. Answer: b. Answer b is a distracter; the other answers reflect proper security goals of an audit mechanism.
6. Answer: c. Reviewing audit data should be a function separate from the day-to-day administration of the system.
7. Answer: b. In addition to being used as a transaction control verification mechanism, output controls are used to ensure that output, such as printed reports, is distributed securely. Answer a is an example of change control, answer c is an example of application controls, and answer d is an example of recovery controls.
8. Answer: a. Reviews can be conducted by, among others, in-house systems personnel (a self-audit), the organization's internal audit staff, or external auditors.
9. Answer: d. The trusted computing base (TCB) is the totality of protection mechanisms within a computer system, including hardware, firmware, and software, the combination of which is responsible for enforcing a security policy. Answer a describes the reference monitor concept, answer b refers to a sensitivity label, and answer c describes formal verification.
10. Answer: b. Object reuse mechanisms ensure that system resources are allocated and assigned among authorized users in a way that prevents the leak of sensitive information, and they ensure that an authorized user does not obtain residual information from system resources. Answers a and c are incorrect, and answer d refers to authorization: the granting of access rights to a user, program, or process.
11. Answer: b. Prenumbered forms are an example of preventative controls. They can also be considered a transaction control and an input control.
12. Answer: c. Operational assurance is the process of reviewing an operational system to see that security controls, both automated and manual, are functioning correctly and effectively. It addresses whether the system's technical features are being bypassed or have vulnerabilities and whether required procedures are being followed. Answer a describes an audit trail review, answer b describes a benefit of incident handling, and answer d describes a personnel control.
13. Answer: d. Sanitization is the process of removing information from used data media to prevent data remanence, and different media require different types of sanitization. All of the other choices are examples of proper media controls.
14. Answer: c. The three common levels of operator privileges, based on the concept of "least privilege," are:
Answer d is a distracter.
15. Answer: d. A covert storage channel typically involves a finite resource (e.g., sectors on a disk) that is shared by two subjects at different security levels. Answer a is a partial description of a covert timing channel, and answer b is a generic definition of a channel (a channel may also refer to the mechanism by which the path is affected). Answer c is a higher-level definition of a covert channel; although a covert storage channel fits that definition generically, answer d is the proper specific definition.
16. Answer: c. Why the transaction was processed is not initially a concern of the audit log, although it may be investigated later. The other three elements are all important information that the audit log of the transaction should record.
17. Answer: a. The primary benefits of employing an incident-handling capability are containing and repairing damage from incidents and preventing future damage. Answer a is a benefit of employing "separation of duties" controls.
18. Answer: a. An audit trail is a set of records that collectively provide documentary evidence of processing, used to trace from original transactions forward to related records and reports and/or backward from records and reports to their component source transactions. Answer b describes a multilevel device, and answer c refers to a network reference monitor. Answer d is incorrect because audit trails are detective controls, whereas answer d describes a preventative process, access control.
19. Answer: a. Answer b describes least privilege, answer c describes record retention, and answer d describes separation of duties.
20. Answer: a. Although operator error (answer a) is certainly a threat to a system's integrity, it is considered an unintentional loss, not an intentional activity.
21. Answer: b.
22. Answer: a. The least privilege principle requires that each subject in a system be granted the most restrictive set of privileges (or lowest clearance) needed to perform authorized tasks. Answer b describes separation of privilege, answer c describes a security level, and answer d is a distracter.
23. Answer: a. Answer b describes a vulnerability, answer c describes an asset, and answer d describes risk management.
24. Answer: b. For proper separation of duties, the function of user account establishment and maintenance should be separated from the function of initiating and authorizing the creation of the account. User account management focuses on identification, authentication, and access authorizations.
25. Answer: d. Answers a, b, and c denote common tactics used by an intruder to gain either physical access or system access. The salami fraud is an automated fraud technique in which a programmer creates or alters a program to move small amounts of money into his or her personal bank account. The amounts are intended to be so small as to go unnoticed, such as rounding in foreign currency exchange transactions; hence the name, a reference to slicing a salami.
26. Answer: c. Covert channel analysis must be performed for Orange Book B2 class systems to protect against covert storage channels only; B2 systems do not need to be protected from covert timing channels. B3 class systems must be protected from both covert storage channels and covert timing channels.
27. Answer: d. Separation of duties means, for example, that operators are prevented from both generating and verifying transactions alone. A task might be divided into smaller tasks to accomplish this, or, in the case of an operator with multiple duties, the operator makes a logical, functional job change when performing such conflicting duties. Answer a is need-to-know, answer b is dual control, and answer c is job rotation.
28. Answer: b. The Red Book (answer a) is the Trusted Network Interpretation (TNI) summary of network requirements (described in the Telecommunications and Network Security domain); the Green Book (answer c) is the Department of Defense (DoD) Password Management Guideline; and the Dark Green Book (answer d) is the Guide to Understanding Data Remanence in Automated Information Systems.
29. Answer: d. A covert timing channel alters the timing of parts of the system so that it can be used to communicate information covertly (outside the normal security function). Answer a describes the use of a covert storage channel, answer b is a technique to combat the use of covert channels, and answer c is the Orange Book requirement for B3, B2, and A1 evaluated systems.
30. Answer: a. This is the best description of a clipping level. Answer b is not correct because one reason to create clipping levels is to prevent auditors from having to examine every error. Answer c is a common use for clipping levels but is not a definition. Answer d is a distracter.
31. Answer: c. Most backup methods use the Archive file attribute to determine whether a file should be backed up: the backup software checks whether the Archive attribute is set and then clears it after the backup completes. The incremental method backs up only files that have been created or modified since the last backup, because the Archive attribute is reset each time. A complete restoration can therefore require several tapes, because the last full backup tape plus every tape containing changed files must be restored. A full or complete backup (answer a) backs up all files in all directories stored on the server, regardless of when the last backup was made and whether the files have already been backed up; the Archive attribute is cleared to mark the files as backed up, and the tape or tapes contain all data and applications. It is an incorrect answer for this question because answers b and c additionally require differential or incremental tapes on top of the full backup. The differential backup method (answer b), like an incremental backup, backs up only files that have been created or modified since the last backup was made; the difference is that the Archive attribute is not reset after the differential backup completes, so a changed file is backed up every time the differential backup runs, and the backup set grows in size until the next full backup. The advantage of this method is that the backup operator needs only the full backup and the most recent differential backup to restore the system. Answer d is a distracter.
32. Answer: b. Redundant Array of Inexpensive Disks (RAID) is a method of enhancing hard disk fault tolerance that can also improve performance. RAID 1 maintains a complete copy of all data by duplicating each hard drive; performance can suffer in some implementations, and twice as many drives are required. Novell developed a type of disk mirroring called disk duplexing, which uses multiple disk controller cards, increasing both performance and reliability. RAID 0 (answer a) gains some performance by striping the data across multiple drives but reduces fault tolerance, because the failure of any single drive disables the whole volume. RAID 3 (answer c) uses a dedicated error-correction disk called a parity drive and stripes the data across the other data drives. RAID 5 (answer d) uses all disks in the array for both data and error correction, increasing both storage capacity and performance.
33. Answer: c. E-mails have three basic parts: attachments, content, and headers. Both the content and the attachments are areas of vulnerability.
34. Answer: b. A fax encryptor is an encryption mechanism that encrypts all fax transmissions at the Data Link layer and helps ensure that all incoming and outgoing fax data is encrypted at its source.
35. Answer: c. The header may point back to the hijacked spambot's mail server. E-mail headers can be spoofed, fraudulent e-mail cannot always be identified from its headers, and the header does not always point back to the original spammer.