2.3. Early Computer Security Efforts
The earliest computer-related security activities began in the 1950s, with the development of the first TEMPEST security standard, the consideration of security issues in some of the earliest computer system designs, and the establishment of the first government security organization, the U.S. Communications Security (COMSEC) Board. The board, which consisted of representatives from many different branches of the government, oversaw the protection of classified information.
Although these events set the scene for later computer security advances, the 1960s marked the true beginning of the age of computer security, with initiatives by the Department of Defense, the National Security Agency, and the National Bureau of Standards (now the National Institute of Standards and Technology, or NIST), coupled with the first public awareness of security. The Spring Joint Computer Conference of 1967 is generally recognized as the venue for the first comprehensive computer security presentation to a technical audience. Willis H. Ware of the RAND Corporation chaired a session that addressed the wide variety of vulnerabilities present in resource-sharing, remote-access computer systems. The session addressed threats ranging from electromagnetic radiation to bugs on communications lines to unauthorized programmer and user access to systems and data.
The Department of Defense, because of its strong interest in protecting military computers and classified information, was an early partisan of computer security efforts. In 1967, DoD began to study the potential threats to DoD computer systems and information. In October of that year, DoD assembled a task force under the auspices of the Defense Science Board within the Advanced Research Projects Agency (ARPA), now known as the Defense Advanced Research Projects Agency, or DARPA. The task force worked for the next two years examining systems and networks, identifying vulnerabilities and threats, and introducing methods of safeguarding and controlling access to defense computers, systems, networks, and information. Published as a classified document in 1970, the task force report, Security Controls for Computer Systems, was a landmark publication in the history of computer security. Its recommendations, and the research that followed its publication, led to a number of programs dedicated to protecting classified information and setting standards for protection.
The Department of Defense took to heart the recommendations of the task force and began to develop regulations for enforcing the security of the computer systems, networks, and classified data used by DoD and its contractors. In 1972, DoD issued a directive[*] and an accompanying manual that established a consistent DoD policy for computer controls and techniques and stated the department's overall security policy.
The directive also stipulated that systems specifically protect both the computer equipment and the data that it processes by preventing deliberate and inadvertent access to classified material by unauthorized persons, as well as unauthorized manipulation of the computer and associated equipment.
During the 1970s, under the sponsorship of DoD and industry, a number of major initiatives were undertaken to better understand the system vulnerabilities and threats that early studies had exposed and to begin to develop technical measures for countering these threats. These initiatives fell into three general categories: tiger teams, security research studies, and development of the first secure operating systems.
2.3.1. Tiger Teams
During the 1970s, tiger teams first emerged on the computer scene. Tiger teams were government- and industry-sponsored teams of crackers who attempted to break down the defenses of computer systems in an effort to uncover, and eventually patch, security holes. Most tiger teams were sponsored by DoD, but IBM aroused a great deal of public awareness of computer security by committing to spend $40 million to address computer security issues, and tiger teams were an important part of finding security flaws in the company's own products.
Tiger teams were an effective way to find and fix security problems, but their efforts were necessarily piecemeal. U.S. Air Force Lieutenant General Lincoln D. Faurer, former Director of the National Security Agency, wrote that the efforts of the tiger teams resulted in two significant conclusions:[§]
Tiger teams served a useful function by identifying security flaws and demonstrating how easily these flaws could be exploited. In fact, the hacker community's current model of discovering vulnerabilities through probing, alerting manufacturers, and, after a suitable waiting period, publishing a proposed exploit has its roots in tiger team methodology. The commercial practice of penetration testing, or running pen tests, has its roots in tiger teams as well.
By the end of the 1970s, however, it was apparent that a more rigorous method of building, testing, and evaluating computer systems was needed. Although this was achieved in certain defense-related systems, the commercial world in many instances is still waiting.
2.3.2. Research and Modeling
During the 1970s, DoD and other agencies sponsored a number of ground-breaking research projects aimed at identifying security requirements, formulating security policy models, and defining recommended guidelines and controls.
In the research report of the Computer Security Technology Planning Study,[*] James P. Anderson introduced the concept of a reference monitor, an entity that "enforces the authorized access relationships between subjects and objects of a system." The idea of a reference monitor became very important to the development of standards and technologies for secure systems. The concept of reference monitors and the function of subjects and objects in secure systems are described in Appendix C.
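Anderson's definition can be pictured as a small mediation layer through which every access request must pass. The following Python sketch is purely illustrative; the class name, the table-driven design, and the example subjects and objects are assumptions for this book, not details drawn from the report:

```python
# Illustrative sketch of a reference monitor: a single chokepoint that
# mediates every access between subjects (users, processes) and objects
# (files, devices) by consulting an authorization table.

class ReferenceMonitor:
    def __init__(self):
        # Maps (subject, object) pairs to the set of permitted access modes.
        self._authorizations = {}

    def grant(self, subject, obj, mode):
        """Record that the subject is authorized for this mode on the object."""
        self._authorizations.setdefault((subject, obj), set()).add(mode)

    def check_access(self, subject, obj, mode):
        """Permit the access only if the authorization table allows it."""
        return mode in self._authorizations.get((subject, obj), set())

rm = ReferenceMonitor()
rm.grant("alice", "payroll.dat", "read")
print(rm.check_access("alice", "payroll.dat", "read"))   # True
print(rm.check_access("alice", "payroll.dat", "write"))  # False
print(rm.check_access("bob", "payroll.dat", "read"))     # False
```

The essential properties Anderson required of such a mechanism are that it be invoked on every access, be tamperproof, and be small enough to verify; the sketch captures only the first of these.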
DoD went on to sponsor additional research and development in the 1970s focusing on the development of security policy models. A security policy defines system security by stating the set of laws, rules, and practices that regulate how an organization manages, protects, and distributes sensitive information. The mechanisms necessary to enforce a security policy usually conform to a specific security model. This is also discussed in Appendix C.
A number of additional technical research reports published during the 1970s defined secure systems and security requirements. Among the most influential, David Bell and Leonard LaPadula developed the first mathematical model of a multilevel security policy, now known as the Bell-LaPadula model.
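The heart of the Bell-LaPadula model is two rules: the simple security property ("no read up") and the *-property ("no write down"). A minimal Python sketch of the two rules, using an illustrative lattice of classification levels:

```python
# Hypothetical sketch of the two core Bell-LaPadula rules over a simple
# ordered set of classification levels. Level names are illustrative.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_read(subject_level, object_level):
    # Simple security property ("no read up"): a subject may read an
    # object only at or below its own clearance.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property ("no write down"): a subject may write only at or above
    # its own level, so information cannot leak to lower classifications.
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("SECRET", "CONFIDENTIAL"))   # True: reading down is allowed
print(can_read("SECRET", "TOP SECRET"))     # False: no read up
print(can_write("SECRET", "CONFIDENTIAL"))  # False: no write down
print(can_write("SECRET", "TOP SECRET"))    # True: writing up is allowed
```

Together the two rules guarantee that information flows only upward through the classification lattice, which is exactly the property a multilevel military system needs.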
2.3.3. Secure Systems Development
A number of government-sponsored projects undertook to develop the first "secure" systems during the 1970s. Most of these efforts were devoted to developing prototypes for security kernels. A security kernel is the part of the operating system that controls access to system resources. The most significant was an Air Force-funded project that led to the development of a security kernel for the Multics (Multiplexed Information and Computing Service) system. 
Multics allowed users with different security clearances to simultaneously access information that had been classified at different levels. Because it embodied so many well-designed security features, the Multics system was particularly important to the development of later secure systems. Multics was a large-scale, highly interactive computer system that offered both hardware- and software-enforced security. Specific features of Multics included extensive password and login controls; data security through access control lists (ACLs), an access isolation mechanism (AIM), and a ring mechanism; auditing of all system access operations; decentralized system administration; and architectural features such as paged and segmented virtual memory and stack-controlled process architecture.
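Of those features, the access control list is the simplest to sketch. The fragment below is a hypothetical illustration of an ACL check in the style Multics popularized, not actual Multics code; the entry format and the "*" wildcard are assumptions for the example:

```python
# Hypothetical ACL check: each object carries an ordered list of
# (principal, permitted_modes) entries, consulted on every access.

def acl_permits(acl, principal, mode):
    """Scan the ACL in order; the first matching entry decides."""
    for entry_principal, modes in acl:
        if entry_principal in (principal, "*"):  # "*" matches any principal
            return mode in modes
    return False  # no matching entry: access is denied

# One entry for a specific user, plus a catch-all read-only entry.
acl = [("alice", {"read", "write"}), ("*", {"read"})]
print(acl_permits(acl, "alice", "write"))  # True
print(acl_permits(acl, "bob", "read"))     # True
print(acl_permits(acl, "bob", "write"))    # False
```

Defaulting to denial when no entry matches reflects the fail-safe posture that ran through the Multics design.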
Other security kernels under development during the 1970s included the Mitre Corporation's kernel for the Digital Equipment Corporation PDP-11/45 and UCLA's Data Secure Unix for the PDP-11/70.[§]