Basic concepts related to security architecture include the Trusted Computing Base (TCB), open and closed systems, protection rings, security modes, and recovery procedures.
A Trusted Computing Base (TCB) is the total combination of protection mechanisms within a computer system, including hardware, firmware, and software, which is responsible for enforcing a security policy. A security perimeter is the boundary that separates the TCB from the rest of the system.
Instant Answer A Trusted Computing Base (TCB) is the total combination of protection mechanisms within a computer system, including hardware, firmware, and software, which is responsible for enforcing a security policy.
Access control is the ability to permit or deny the use of an object (a passive entity, such as a system or file) by a subject (an active entity, such as an individual or a process).
Instant Answer Access control is the ability to permit or deny the use of an object (system or file) by a subject (individual or process).
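The subject/object relationship above can be sketched as a minimal access-control check. This is an illustrative example only (the names `ACL` and `is_permitted`, the file, and the users are all hypothetical), not a pattern defined in the text:

```python
# Hypothetical access-control list: object -> subject -> permitted operations.
ACL = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
}

def is_permitted(subject, obj, operation):
    """Return True if the ACL grants `subject` the `operation` on `obj`."""
    return operation in ACL.get(obj, {}).get(subject, set())

print(is_permitted("bob", "payroll.db", "write"))  # prints False
```

Here the subjects (alice, bob) are the active entities requesting access, and the object (payroll.db) is the passive entity being protected.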
A reference monitor is a system component that enforces access controls on an object. Stated another way, a reference monitor is an abstract machine that mediates all access to an object by a subject.
Instant Answer A reference monitor is a system component that enforces access controls on an object.
A security kernel is the combination of hardware, firmware, and software elements in a Trusted Computing Base that implements the reference monitor concept. Three requirements of a security kernel are that it must
Mediate all accesses
Be protected from modification
Be verified as correct
Instant Answer A security kernel is the combination of hardware, firmware, and software elements in a Trusted Computing Base (TCB) that implements the reference monitor concept.
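The reference monitor concept can be sketched as a single mediation point that every access attempt must pass through. This toy model is an assumption-laden illustration (the policy table, audit log, and function names are hypothetical), and the audit log stands in for the "mediate all accesses" and "be verified as correct" requirements:

```python
# Hypothetical policy: (subject, object) -> set of allowed operations.
POLICY = {("alice", "report.txt"): {"read"}}

AUDIT_LOG = []

def reference_monitor(subject, obj, operation):
    """Mediate every access by a subject to an object, recording each attempt."""
    allowed = operation in POLICY.get((subject, obj), set())
    AUDIT_LOG.append((subject, obj, operation, allowed))  # nothing bypasses this point
    return allowed
```

A real security kernel must also be tamperproof and small enough to verify; this sketch only shows the mediation idea.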
An open system is a vendor-independent system that complies with a published and accepted standard. This promotes interoperability between systems and components made by different vendors. Additionally, open systems can be independently reviewed and evaluated, which facilitates identification of bugs and vulnerabilities and rapid development of solutions and updates.
A closed system uses proprietary hardware and/or software that may not be compatible with other systems or components. Source code for software in a closed system is not normally available.
The concept of protection rings implements multiple concentric domains, with privilege (and trust) increasing toward the center. The most privileged ring, identified as Ring 0, normally contains the operating system’s security kernel; additional system components are placed in the appropriate concentric ring based on the principle of least privilege. The MIT Multics operating system implemented the concept of protection rings in its security architecture.
Cross-Reference For more on protection rings, read Chapter 10.
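The protection rings described above can be modeled as numbered privilege levels, where a lower number means more privilege. The ring names and numbers below are hypothetical and purely illustrative:

```python
# Toy model of protection rings: lower ring number = more privileged.
RING = {"security kernel": 0, "device drivers": 1, "applications": 3}

def can_call_directly(caller, target):
    # A component may directly invoke code in its own ring or in a
    # less-privileged (higher-numbered) ring; a call inward to a more-
    # privileged ring must instead pass through a controlled gate,
    # such as a system call interface.
    return RING[caller] <= RING[target]

print(can_call_directly("applications", "security kernel"))  # prints False
```

The `False` result reflects least privilege: an application can't reach into Ring 0 directly and must request kernel services through a mediated interface.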
Several security modes of operation, based on the classification level of information being processed on a system and the clearance level of authorized users, have been defined. These designations are typically used for U.S. military and government systems and include
Dedicated: All authorized users must have a clearance level equal to or higher than the highest level of information processed on the system and a valid need-to-know.
System High: All authorized users must have a clearance level equal to or higher than the highest level of information processed on the system but a valid need-to-know isn’t necessarily required.
Multilevel: Information at different classification levels is stored or processed on a trusted computer system (a system that employs all necessary hardware and software assurance measures and meets the specified requirements for reliability and security). Authorized users must have an appropriate clearance level, and access restrictions are enforced by the system accordingly.
Limited access: Authorized users aren’t required to have a security clearance, but the highest level of information on the system is Sensitive but Unclassified (SBU).
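The clearance checks behind the dedicated and system high modes above can be sketched as a comparison of levels. The level names, ordering, and function are illustrative assumptions, not an official algorithm; multilevel and limited access modes need per-object enforcement that this sketch doesn't model:

```python
# Hypothetical ordering of clearance/classification levels.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def may_access(mode, clearance, highest_on_system, need_to_know=False):
    """Check a user's eligibility to use a system in the given mode."""
    cleared = LEVELS[clearance] >= LEVELS[highest_on_system]
    if mode == "dedicated":
        return cleared and need_to_know   # clearance AND valid need-to-know
    if mode == "system high":
        return cleared                    # need-to-know not necessarily required
    raise ValueError("multilevel and limited access require per-object checks")
```

For example, a Top Secret-cleared user without need-to-know may use a Secret system in system high mode but not in dedicated mode.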
Cross-Reference See Chapter 6 for more on clearance levels.
A hardware or software failure can potentially compromise a system’s security mechanisms. Security designs that protect a system during a hardware or software failure include
Fault-tolerant systems: These systems continue to operate following failure of a computer or network component. The system must be capable of detecting and correcting or circumventing a fault.
Fail-safe systems: When a hardware or software failure is detected, program execution is terminated, and the system is protected from compromise.
Fail-soft (resilient) systems: When a hardware or software failure is detected, certain noncritical processing is terminated, and the computer or network continues to function in a degraded mode.
Failover systems: When a hardware or software failure is detected, the system automatically transfers processing to a hot backup component, such as a clustered server.
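The failover behavior above can be sketched in a few lines: detect the failure of the primary component and transfer processing to a hot backup. The function names and the simulated failure are hypothetical:

```python
def run_with_failover(primary, standby, request):
    """Try the primary component; on detected failure, fail over to the standby."""
    try:
        return primary(request)
    except Exception:
        # Failure detected: the hot backup component takes over processing.
        return standby(request)

def failed_primary(request):
    raise RuntimeError("primary node down")   # simulate a hardware/software failure

print(run_with_failover(failed_primary, lambda r: "standby handled " + r, "job-1"))
# prints: standby handled job-1
```

A fail-soft (resilient) design would differ by continuing on the same system in a degraded mode rather than transferring work to a separate backup.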
Many weaknesses may be present in a system and permit exploitation or attack unless they're detected (and corrected) by an experienced security analyst. We discuss the more important problems here.
Covert channels: These are hidden, unintended communications that take place within the medium of a legitimate communications channel.
Race conditions: In multiprocessing and multiuser systems, software that isn't very carefully designed and tested can contain critical errors that are difficult to find. The most common race condition is the time-of-check-to-time-of-use (TOCTOU) bug, caused by a change in the system between the checking of a condition and the use of the results of that check. For example, two programs that both try to open a file for exclusive use might both open the file, even though only one should be able to.
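The exclusive-file example above can be made concrete. This sketch (using a temporary directory so it's self-contained) contrasts the racy check-then-act pattern with an atomic creation that closes the time-of-check-to-time-of-use window:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "exclusive.lock")

# Racy pattern (TOCTOU window between the check and the create):
#   if not os.path.exists(path):
#       open(path, "w").close()
# Another process can create `path` after the check but before the open,
# so two programs can both believe they created the file.

# Atomic pattern: the existence check and the creation happen in a single
# system call, so no other process can slip in between. A second attempt
# raises FileExistsError instead of silently succeeding.
fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
os.close(fd)
```

The general cure for TOCTOU bugs is the same: combine the check and the use into one indivisible operation rather than performing them as separate steps.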
Emanations: Unintentional emissions of electromagnetic or acoustic energy from a system can be intercepted by others and possibly used to illicitly obtain information from the system. A common source of undesired emanations is radiated energy from a CRT (cathode ray tube) monitor: a third party can discover what data is being displayed by intercepting the radiation emanating from the display adapter or monitor from as far away as several hundred meters. A third party can also eavesdrop on a network that has one or more unterminated coaxial cables in its cable plant.
Maintenance hooks: Hidden, undocumented features in software programs that can inappropriately expose data or functions for illicit use. We also discuss this topic in Chapter 7.
Security countermeasures: Knowing that systems are subject to frequent or constant attack, systems architects need to include several security countermeasures in order to minimize system vulnerability. Such countermeasures include:
Reveal as little information about the system as possible. For example, don't permit the system to ever display the versions of the operating system, database, or application software that are running.
Limit access to only those persons who must use the system in order to fulfill needed organizational functions.
Disable unnecessary services in order to reduce the number of attack targets.
Use strong authentication in order to make it as difficult as possible for outsiders to access the system.
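The first countermeasure in the list, revealing as little as possible, can be illustrated with a version banner. The service name, version string, and function here are entirely hypothetical:

```python
def make_banner(verbose=False):
    """Return the banner a hypothetical service sends to connecting clients."""
    if verbose:
        # Detailed banner: version and build info helps attackers pick exploits.
        return "ExampleServer 4.2.1 (build 7731)"
    # Generic banner: reveals as little about the system as possible.
    return "ExampleServer"

print(make_banner())  # prints ExampleServer
```

The same principle applies to error messages, HTTP headers, and login prompts: each detail withheld is one less clue for an attacker planning an exploit.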