
2.4. Building Toward Standardization

Late in the 1970s, two important government initiatives significantly affected the development of computer security standards and methods. In 1977, the Department of Defense announced the DoD Computer Security Initiative under the auspices of the Under Secretary of Defense for Research and Engineering. The goal was to focus national attention and resources on computer security issues. The initiative was launched in 1978 when DoD called together government and industry participants in a series of seminars that aimed to answer these questions:

  • Are secure computer systems useful and feasible?

  • What mechanisms should be developed to evaluate and approve secure computer systems?

  • How can computer vendors be encouraged to develop secure computer systems?

The second important initiative came from the National Bureau of Standards (NBS), now known as the National Institute of Standards and Technology. NIST has historically been responsible for the development of standards of all kinds. As a consequence of the Brooks Act of 1965 (described in "Computer Security Act" later in this chapter), NIST (as NBS) became the agency responsible for researching and developing standards for federal computer purchase and use, and for assisting other agencies in implementing these standards. The bureau has published many federal standards known as Federal Information Processing Standards publications (FIPS PUBs) in all areas of computer technology, including computer security. Over the course of the next decade or so after the Brooks Act, NBS focused on two distinct security standardization efforts: development of standards for building and evaluating secure computer systems, and development of a national standard for cryptography.

2.4.1. Standards for Secure Systems

NBS's first charge was to evaluate the federal government's overall computer security needs and to begin to find ways to meet them. Early efforts, based on NBS's Brooks Act mandate, included the following:


  • NBS performed an initial study to evaluate the government's computer security needs.

  • NBS sponsored a conference on computer security in collaboration with the ACM.

  • NBS initiated a program aimed at researching and developing standards for computer security.

  • NBS began a series of Invitational Workshops dedicated to the Audit and Evaluation of Computer Systems. These had far-reaching consequences for the development of standards for secure systems.

At the first Invitational Workshop in 1977, 58 experts in computer technology and security assembled to define problems and develop solutions for building and evaluating secure systems. Invitees represented NBS, the General Accounting Office (GAO), other government agencies, and industry. Their goal? To determine:

What authoritative ways exist, or should exist, to decide whether a particular computer system is "secure enough" for a particular intended environment or operation, and if a given system is not "secure enough," what measures could or should be taken to make it so.

Workshop participants considered many different aspects of computer security, including accuracy, reliability, timeliness, and confidentiality. The NBS workshops resulted in the publication of several reports.[*] These concluded that achieving security required attention to all three of the following:

[*] Z. Ruthberg and R. McKenzie, ed., Audit and Evaluation of Computer Security, Special Publication 500-19, National Bureau of Standards, Gaithersburg (MD), 1980 (SN 003-003-01848-1).

Z. Ruthberg, ed., Audit and Evaluation of Computer Security II: System Vulnerabilities and Controls, Special Publication 500-57 (MD78733), National Bureau of Standards, Gaithersburg (MD), 1980 (SN 003-003-02178-4).


  • What security rules should be enforced for sensitive information?

  • What hardware and software mechanisms are needed to enforce the policy?

  • What needs to be done to make a convincing case that the mechanisms do support the policy even when the system is subject to threats?

The NBS report stated:

By any reasonable definition of "secure" no current operating system today can be considered "secure". . . We hope the reader does not interpret this to mean that highly sensitive information cannot be dealt with securely in a computer, for of course that is done all the time. The point is that the internal control mechanisms of current operating systems have too low integrity for them to . . . effectively isolate a user on the system from data that is at a "higher" security level than he is trusted . . . to deal with.

This conclusion was an important one in terms of the multilevel security concepts discussed in Part II of this book.

The NBS workshops recommended that a number of actions be taken. One action was to formulate a detailed computer security policy for sensitive information not covered by national security policies and guidelines. Another was to establish a formal security evaluation and accreditation process, including the publication of a list of approved products to guide specification and procurement of systems intended to handle sensitive information. A third was to establish a standard, formalized, institutionalized technical means of measuring or evaluating the overall security of a system.

As an outgrowth of the NBS workshops, the Mitre Corporation was assigned the task of developing an initial set of computer security evaluation criteria that could be used to assess the degree of trust that could be placed in a computer system that protected classified data. Beginning in 1979, in response to the NBS workshop and report on the standardization of computer security requirements, the Office of the Secretary of Defense conducted a series of public seminars on the DoD Computer Security Initiative. One result of these seminars was that the Deputy Secretary of Defense assigned to the Director of the National Security Agency (NSA) responsibility for increasing the use of trusted information security products within the Department of Defense.

National Computer Security Center

As a result of NSA's new responsibility for information security, on January 2, 1981, the DoD Computer Security Center (CSC) was established within NSA to expand upon the work begun by the DoD Computer Security Initiative. The official charter of the CSC is contained in the DoD Directive entitled "Computer Security Evaluation Center" (5215.1).

Several years later, the computer security responsibilities held by CSC were expanded to include all federal agencies and the Center became known as the National Computer Security Center (NCSC). The Center was founded with the following goals:

  • Encourage the widespread availability of trusted computer systems

  • Evaluate the technical protection capabilities of industry- and government-developed systems

  • Provide technical support of government and industry groups engaged in computer security research and development

  • Develop technical criteria for the evaluation of computer systems

  • Evaluate commercial systems

  • Conduct and sponsor research in computer and network security technology

  • Develop and provide access to verification and analysis tools used to develop and test secure computer systems

  • Conduct training in areas of computer security

  • Disseminate computer security information to other branches of the federal government and to industry

In 1985, NSA also merged its communications and computer security responsibilities under the Deputy Directorate for Information Security Systems (INFOSEC).

Birth of the Orange Book

The Center met an important goal by publishing the Department of Defense Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book because of the color of its cover. Based on the computer security evaluation criteria developed by Mitre,[*] and on such developments as the security model developed by Bell and LaPadula, this publication was distributed to government and industry experts, revised, and finally released in August 1983.

[*] G.H. Nibaldi, Proposed Technical Evaluation Criteria for Trusted Computer Systems, (M79-225), Mitre Corporation, Bedford (MA), 1979 (available from NTIS: AD A108832).

G.H. Nibaldi, Specification of a Trusted Computing Base, (M79-228), Mitre Corporation, Bedford (MA), 1979 (available from NTIS: AD 108831).

The Orange Book is the bible of secure system development. It describes the evaluation criteria used to assess the level of trust that can be placed in a particular computer system. It effectively makes security a measurable commodity, so a buyer can identify the exact level of security required for a particular system, application, or environment. The Orange Book presents a graded classification of secure systems. It defines four broad hierarchical divisions, or levels, of protection: D, C, B, and A, in order of increasing security. Within each division, the Orange Book defines one or more classes, each specified by the set of criteria a system must meet to achieve a rating in that class. Some divisions have only a single class; others have two or three. The original Orange Book was revised slightly and reissued in December 1985.
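Because the divisions and classes form a strict ladder, a higher rating always satisfies any lower requirement. A minimal sketch of that ordering (the class names D through A1 are from the TCSEC; the helper function is purely illustrative):

```python
# The TCSEC division/class ladder, lowest to highest assurance.
# Division D has one class; C has two (C1, C2); B has three (B1, B2, B3);
# A has one (A1).
TCSEC_CLASSES = ["D", "C1", "C2", "B1", "B2", "B3", "A1"]

def meets(rating: str, required: str) -> bool:
    """True if a system rated `rating` provides at least the
    assurance demanded by `required`."""
    return TCSEC_CLASSES.index(rating) >= TCSEC_CLASSES.index(required)

print(meets("B2", "C2"))  # True: a B2 system satisfies a C2 requirement
print(meets("C1", "B1"))  # False: a C1 system falls short of B1
```

The hierarchy is what lets a procurement specification name a single class ("C2 or better") rather than enumerate individual criteria.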

Using the Orange Book criteria, NCSC performed evaluations of products submitted by vendors for certification at a particular level of trust. Products that are successfully evaluated through the NCSC Trusted Products Evaluation Program (TPEP) are placed on the Evaluated Products List (EPL). Appendix C describes the Orange Book evaluation criteria (and also mentions some of the complaints about these criteria). The Orange Book has been so pervasive that although its standards have been transferred to its successor, the Common Criteria, Orange Book designations are often used synonymously with their Common Criteria equivalents, and students of one typically study the other.

In the days since the Orange Book, the focus of security standardization has shifted to the Common Criteria. This set of guidelines describes parameters for secure computing and provides a scale for rating the performance of an evaluated system against those parameters. Based on the European White Book, the Common Criteria, in conjunction with numerous FIPS, are the basis of computer security standards in the United States today. Orange Book culture is so enduring, however, that you can barely speak of one without invoking the other. An overview of the interrelationship of these standards is contained in Appendix C.

2.4.2. Standards for Cryptography

During the 1970s, interest in a national cryptographic standard began to build within the government. The idea was to find an algorithm that could be used to protect sensitive unclassified government information (classified algorithms were already being used to protect classified information) and sensitive commercial data such as banking electronic funds transfers. In 1973, the National Bureau of Standards, part of the Department of Commerce, invited vendors to submit data encryption techniques that might be used as the basis of an encryption algorithm.

Under the auspices of the Institute of Computer Science and Technology (ICST), later known as the National Computer Systems Laboratory, NBS organized a series of workshops for government and industry representatives to select a national encryption algorithm. The method eventually selected by NBS became known as the Data Encryption Standard (DES).

The DES was adopted as a Federal Information Processing Standard (FIPS PUB 46) in 1977 as the official method of protecting unclassified data in the computers of U.S. government agencies, and was subsequently adopted as an American National Standards Institute (ANSI) standard.

The DES consists of two components: an algorithm and a key. The DES algorithm is a complex, iterative process that is public information. This algorithm uses a secret value, the key, to encode and decode messages.
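The separation of public algorithm from secret key is the essential design idea. A toy sketch of the principle (this is a trivial repeating-key XOR for illustration only, NOT the DES algorithm, which uses 16 rounds of permutation and substitution):

```python
# Toy cipher: the algorithm below is public; security rests
# entirely on keeping the key secret.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR; applying it twice with the same key
    # recovers the original message.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret"  # known only to sender and receiver
ciphertext = xor_cipher(b"WIRE $100 TO ACCT 42", key)
plaintext = xor_cipher(ciphertext, key)
print(plaintext)  # b'WIRE $100 TO ACCT 42'
```

As with DES, an eavesdropper may know exactly how the transformation works, yet without the key cannot read the ciphertext (in this toy case the scheme is easily broken; DES derives its strength from its far more complex round structure).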

DES technology has been embedded in many commercial products. Until 1986, the National Security Agency endorsed products containing DES-based algorithms. In 1986, NSA announced that it would no longer endorse such products. There was a substantial reaction to this decision by vendors, users, and other government agencies. Chapter 7 describes DES in greater detail and outlines some of the issues surrounding the use of the algorithm.

DES has now been cracked, both by special-purpose devices (not necessarily computers) constructed of microchips and by clusters of computers operating in tandem over the Internet. While the DES algorithm is likely to remain in use for some time (it's still efficient in certain two-way voice encryption systems), cryptographic researchers have continued to work on the development of more advanced algorithms. A competition was held in the late 1990s to determine which algorithm would become the Advanced Encryption Standard (AES). The winner of the competition was an algorithm called Rijndael. The "losers," many of which are powerful encryption algorithms in their own right, also remain in use today; most are available as open source programs.
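Why DES fell to brute force while AES has not is simple arithmetic. Assuming the standard key sizes (56-bit DES, 128-bit AES), each added key bit doubles the search space:

```python
# Brute-force key-space comparison: 56-bit DES vs. 128-bit AES.
des_keys = 2 ** 56   # every possible DES key
aes_keys = 2 ** 128  # every possible AES-128 key

print(des_keys)              # about 7.2 * 10**16 keys: feasible for
                             # special-purpose hardware to exhaust
print(aes_keys // des_keys)  # the AES-128 space is 2**72 times larger
```

The 1998 EFF "Deep Crack" machine searched the 56-bit space in days; the extra factor of 2**72 puts an equivalent attack on AES-128 far beyond any foreseeable hardware.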

Through the Commercial Communications Security Endorsement Program (CCEP), government and industry representatives develop, test, and endorse new cryptographic products.

2.4.3. Standards for Emanations

As early as the 1950s, concerns began to develop about the possibility that the electrical and electromagnetic radiation that emanates from computer equipment (as it does from all electronic equipment) could be intercepted and deciphered.

It works like this: any time a current flows, magnetic fields form around it. Conversely, when magnetic fields change size or shape, they induce currents in nearby conductors. Finally, a voltage on one side of an insulator can induce changes in the voltage on the other side through the coupling of charges. Put together, these effects mean that operating any device that uses electricity can create signals that are detectable elsewhere. Often you see this as a disturbance of some kind, such as the interference on a televised football game caused by someone operating a vacuum cleaner or blender nearby.

In an effort to counter this threat, in the late 1950s the U.S. government established the first standard for the level of emanations acceptable for equipment used to process classified information. During the 1960s and 1970s, as standardization efforts proceeded in the areas of secure systems and cryptography, they also resulted in the refinement of the initial TEMPEST standard and the establishment of a program to endorse products that met the requirements of this standard.

The Industrial TEMPEST Program was established in 1974 with three main goals:

  • Specify a TEMPEST standard that sets allowable limits on the levels of emission from electronic equipment.

  • Outline criteria for testing equipment that, according to its vendors, meets the TEMPEST standard.

  • Certify vendor equipment that successfully meets the TEMPEST standard.

The National TEMPEST Standard, known as NACSEM 5100 (National Communications Security Emanations Memorandum 5100), was published in 1970. Much of the document was classified. This standard has been revised several times. In the current standards family, NSTISSAM/1-91 (Compromising Emanations Laboratory Test Requirements, Electromagnetic) was published in 1991 and superseded by NSTISSAM/1-92; NSTISSAM/2-91 (Compromising Emanations Analysis Handbook) was published in 1991; and NSTISSAM/3-91 (Maintenance and Disposition of TEMPEST Equipment) was published in 1991, with certain augmentations published in 1995.

Government and industry representatives have worked together to set standards and to develop, test, and certify TEMPEST equipment. The U.S. government approves laboratories that evaluate TEMPEST products.

Computer Security Basics, ISBN 0596006691, 2004, 121 pages.
