Chapter 1: Terminology

OVERVIEW

The field of computer science is filled with ill-defined terminology that different people use in conflicting and sometimes even contradictory ways. This is especially true in the field of computer and communication security. Hence, we dedicate the first chapter of this book to working against this tradition and to introducing and defining some basic terms that are used in the rest of the book. As already mentioned in the preface, many terms related to Internet and intranet security are defined in RFC 2828 [1]. You may refer to this document for a more comprehensive compilation of terms, definitions, and corresponding acronyms.

According to Webster's Dictionary, the term information refers to "knowledge communicated or received concerning a particular fact or circumstance" in general, and "data that can be coded for processing by a computer or similar device" in computer science. Similarly, according to RFC 2828, information refers to "facts and ideas, which can be represented (encoded) as various forms of data." Although these definitions are fairly broad and not precise in a mathematically rigorous sense, they are sufficient for the purposes of this book. Anybody who is interested in a more precise and formal definition and treatment of information is referred to Claude E. Shannon's communication or information theory [2, 3].

In accordance with this definition of information, the term information technology (IT) is used to refer to any kind of technology that can be used to manage information. In particular, IT focuses on the question of how to effectively and efficiently store, process, and transmit electronic data that encodes information.

Similarly, the term IT security is used to refer to the special field of IT that deals with security-related issues. In fact, IT security comprises both computer and communication security:

  • The aim of computer security is to preserve resources (e.g., data that encodes information) against unauthorized use and abuse, as well as to protect data from accidental or deliberate damage, disclosure, and modification. More specifically, computer security protects data during its storage and processing in computer systems that may or may not be networked.

  • The aim of communication security is to protect data that encodes information during its transmission in and between computer systems and networks.

In addition to these technically oriented aspects (e.g., computer and communication security), IT security must also take into account organizational and legal issues. These issues, however, are not further addressed in this book. There are many complementary books that focus entirely on organizational and legal issues related to IT security.

According to Andrew S. Tanenbaum, the term computer network refers to an interconnected collection of autonomous computer systems [4].

  • The systems are interconnected if they are able to directly exchange data (i.e., without using external storage media, such as floppy disks or tapes). In the past, this form of interconnection typically required a physical cable between the systems. Today, however, wireless technologies are becoming increasingly important and predominant for interconnecting systems.

  • The systems are autonomous if there does not exist a clear master-slave relationship between them. For example, a system with one control unit and several slaves is not a network, nor is a large computer with remote card readers, printers, and terminals.

There is considerable confusion in the literature about what exactly distinguishes a distributed system from a computer network. Referring to Leslie Lamport, a distributed system consists of a collection of distinct processes that are spatially separated and that communicate with one another by exchanging messages.[1] In addition to that, Lamport refers to a system as a distributed system if the message transmission delay is not negligible compared with the time between events in a single process [5]. Note that this definition is particularly well suited to discuss time, clocks, and temporal ordering of events in distributed environments.

Again referring to Tanenbaum, the key distinction between a computer network and a distributed system is that in a distributed system, the existence of multiple autonomous computer systems is transparent and not necessarily visible to the user. In principle, the user can type a command to run a program, and it runs. It is up to the operating system to select the best processor available for the program to run, to find and transport all input data to that processor, and to put the results as output data in the appropriate place. In other words, the user of a distributed system should not necessarily be aware that there are multiple processors involved; to the user, the distributed system looks like a virtual uniprocessor. Note that in this example, a distributed system can be seen as a special case of a computer network, namely, one whose software gives it a very high degree of cohesiveness and transparency. Thus, the distinction between a computer network and a distributed system lies within the software in general and the operating system in particular, rather than within the hardware.

In accordance with the security frameworks developed by the Joint Technical Committee 1 (JTC1) of the International Organization for Standardization (ISO) and the International Electrotechnical Committee (IEC) [6], we use the term principal to refer to a human or system entity that is registered in and authenticatable to a computer network or distributed system. Users, hosts, and processes are commonly considered as principals:

  • A user is an entity made accountable and ultimately responsible for his or her activities within a computer network or distributed system.

  • A host is an addressable entity within a computer network or distributed system. The entity is typically addressed either by a name or a network address.

  • A process is an instantiation of a program running on a particular host. It is common to use the client-server model to distinguish between client and server processes:

    • A client is a process that requests (and eventually also obtains and uses) a network service.

    • A server is a process that provides a network service. In this terminology, a service refers to a coherent set of abstract functionality, and a server is typically a continuously running background program (a so-called daemon) that is specialized in providing the functionality.

      Note that a process can sometimes act as both client and server. For example, in a UNIX system a print server is usually created by and associated with a superuser. The print server acts as a server for printing requests issued by clients; however, it may also act as a client when it requests the files to print from the file server. Also note that client and server typically use a specific (set of) protocol(s) to communicate with each other. In fact, a strong distinction should be made between a service and a protocol: A service refers to something an application program or a higher-layer protocol can use, whereas a protocol refers to a set of rules and messages that actually provide the service. A minimal sketch of this model follows the list.
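
To make the client-server model and the service/protocol distinction more concrete, here is a minimal sketch in Python. The loopback address, the OS-assigned port, and the one-message "protocol" (the client sends a line of text, the server returns it uppercased) are arbitrary illustrative choices, not part of any standard:

    import socket
    import threading

    # The server is a continuously running process (a daemon, in UNIX terms)
    # that provides a service: here, uppercasing whatever text it receives.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0: let the operating system pick a free port
    srv.listen()
    host, port = srv.getsockname()

    def serve_one():
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)      # the protocol: one request message ...
            conn.sendall(request.upper())  # ... answered by one reply message

    threading.Thread(target=serve_one).start()

    # The client requests (and obtains and uses) the network service.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(b"hello, server")
        print(cli.recv(1024))              # b'HELLO, SERVER'
    srv.close()

The service here is the abstract functionality ("uppercase my text"); the protocol is the agreement that one request message is answered by exactly one reply message.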

The client-server model provides an attractive paradigm for designing and implementing applications and application protocols for computer networks and distributed systems. In the simplest case, a service is implemented by just one server. But sometimes it is more convenient to have two or even more servers working collectively and cooperatively to provide a specific service. One point is that a single server may become overloaded or may not be sufficiently close to all users in a networked or distributed environment. Another point is availability. If a service is replicated, it does not matter if some of the replicas are down or unavailable. Often, the fact that a service is replicated is transparent to the user, meaning that the user does not know whether there is a single copy of the service or there are replicas. The development and analysis of technologies that can be used to securely replicate services is an interesting and very challenging area of research.
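
As a hedged illustration of how replication can be made transparent, the following sketch shows a client-side failover loop; the replica addresses are hypothetical. A caller of request() never learns which replica answered, or whether some replicas were down:

    import socket

    # Hypothetical addresses of three replicas providing the same service.
    REPLICAS = [("10.0.0.1", 5555), ("10.0.0.2", 5555), ("10.0.0.3", 5555)]

    def request(payload: bytes) -> bytes:
        for host, port in REPLICAS:
            try:
                with socket.create_connection((host, port), timeout=1.0) as s:
                    s.sendall(payload)
                    return s.recv(1024)  # first replica that answers wins
            except OSError:
                continue  # this replica is down or unreachable; try the next
        raise RuntimeError("no replica available")

A real replicated service must additionally keep the replicas consistent with one another, which is exactly where the research challenges mentioned above arise.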

The ISO/IEC JTC1 uses the term standard to refer to a documented agreement containing technical specifications or other precise criteria to be used consistently as rules, guidelines, or definitions of characteristics to ensure that materials, products, processes, and services are fit for their purpose. Consequently, an open system standard is a standard that specifies an open system and allows manufacturers to build corresponding interoperable products, whereas an open system is a system that conforms to open system standards.

A computer network in general, and a distributed system in particular, is a complex collection of cooperating software and hardware. To aid in the understanding of these systems, network practitioners have developed some standard ways to model networked and distributed systems and to break them down into simpler pieces. A reference model is a model used to explain how the various components of a system fit together and what the common interface specifications are among the various components. A basic feature of a reference model is the division of the overall functionality into layers, done in an attempt to reduce complexity.

In 1978 the ISO/IEC JTC1 proposed a Reference Model for Open Systems Interconnection (OSI-RM) as a preeminent model for structuring and understanding communications functions within open systems. The OSI-RM follows the widely accepted structuring technique of layering, and the communication functions are partitioned into a hierarchical set of layers accordingly. More precisely, the OSI-RM specifies seven layers of communications system functionality, from the physical layer at the bottom to the application layer at the top. The layers are overviewed in Table 1.1. Refer to the books cited in the preface for a more comprehensive description of the OSI-RM and its seven layers.

Table 1.1: Layers of the OSI Reference Model

  Layer 7   Application layer
  Layer 6   Presentation layer
  Layer 5   Session layer
  Layer 4   Transport layer
  Layer 3   Network layer
  Layer 2   Data link layer
  Layer 1   Physical layer

The OSI-RM is useful because it provides a commonly used terminology and defines a structure for data communication standards. It was approved as an international standard (IS) in 1982 [7]. Two years later, the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T)[2] adopted the OSI-RM in its recommendation X.200.
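
The layering principle of the OSI-RM is easiest to see in terms of encapsulation: each layer treats the data handed down from the layer above as opaque payload and prepends its own header. The following toy sketch illustrates the idea; the header fields are invented for illustration and do not reflect actual wire formats:

    # Toy illustration of layered encapsulation (invented header fields).
    payload = b"GET /index.html"                           # application data
    segment = b"TCP|src=1024|dst=80|" + payload            # transport layer
    packet = b"IP|src=10.0.0.1|dst=10.0.0.2|" + segment    # network layer
    frame = b"ETH|dst=aa:bb:cc:dd:ee:ff|" + packet         # data link layer
    print(frame)  # roughly what would be handed to the physical layer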

The use of open system standards and open systems that conform to these standards has many advantages, which we are not going to discuss in this book. However, we want to point out that open systems may also negatively influence security. For example, if an attacker knows the communications protocols that are used between a client and a server, it is much simpler for him or her to eavesdrop on the communications and to understand what is actually going on. Consequently, security is a vital concern in open systems. The apparent contradiction between openness and security is deceptive, but it has often seduced people into buying proprietary systems instead of open systems. This purchase behavior is rooted in a widespread belief in "security through obscurity": the assumption that hiding information about the design of a system is the best way to prevent potential attackers from learning about its vulnerabilities. Network security technologies, if well designed and properly implemented, can resolve the apparent contradiction between openness and security, and it is one of the main goals of this book to provide a basic understanding of the proper design and implementation of such technologies.

A protocol suite is a set of protocols that work together and fit into a common protocol model. However, there might be more protocols in a protocol suite than are practical for use with a particular application. Therefore, a protocol stack is a selection of protocols from a protocol suite, chosen to support a particular application or class of applications. In the OSI world, various national and international standardization bodies specify profiles for stacks of OSI protocols. In the rest of this book we will see many protocol stacks that address specific security requirements of the Internet.

This book is entitled Internet and Intranet Security. As such, it focuses on the security of special computer networks and distributed systems, namely those that are based on the TCP/IP communications protocol suite. The fundamentals of TCP/IP networking are introduced in numerous books and briefly summarized in Chapter 2. However, we still want to point out that many things said in this book hold equally for other data networks, especially those that use packet switching (e.g., X.25-based networks).

According to RFC 2828, a vulnerability refers to "a flaw or weakness in a system's design, implementation, or operation and management that could be exploited to violate the system's security policy" [1]. In a computer network or distributed system, passwords transmitted in the clear often represent a major vulnerability, because they are exposed to passive eavesdropping and sniffing attacks. Similarly, the ability of a network host to boot with a network address that was originally assigned to another host represents another vulnerability, one that can be exploited to spoof that particular host and to masquerade as it.
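
To illustrate the first vulnerability, consider what a passive eavesdropper gains from a captured HTTP Basic authentication header: Basic authentication merely base64-encodes "user:password", and encoding is not encryption. The credentials in this sketch are made up for illustration:

    import base64

    # A header as it might appear to a sniffer capturing packets on the wire.
    captured_header = "Authorization: Basic YWxpY2U6czNjcjN0"
    encoded = captured_header.split()[-1]
    print(base64.b64decode(encoded))  # b'alice:s3cr3t' -- password exposed

One library call suffices to recover the password; no cryptanalysis is involved.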

There are at least three reasons why networked computer systems are generally much more vulnerable than their standalone counterparts:

  1. More points exist from which an attack can be launched. Someone who is not able to physically access or connect to a computer system cannot attack it. Consequently, by adding more network connections for legitimate users, more vulnerabilities are added as well.

  2. The physical perimeter of a system is artificially extended by having it connect to a computer network. This extension usually leads beyond what is controllable by a system administrator.

  3. Networked computer systems typically run software that is inherently more complex and error-prone. There are many network software packages that are known to be "buggy," and more often than not, intruders learn about these bugs before system administrators do. To make things even worse, an intruder needs to know and be able to exploit just one single bug, whereas system administrators usually must know and be able to fix all of them.

Again according to RFC 2828, a threat refers to "a potential for violation of security, which exists when there is a circumstance, capability, action, or event that could breach security and cause harm" [1]. Computer networks and distributed systems are susceptible to a wide variety of possible threats that may be mounted either by legitimate users or intruders.[3] As a matter of fact, legitimate users are generally much more powerful adversaries, because they possess internal information that is not usually available to intruders. Unfortunately, protection against legitimate users is also much more difficult to achieve than protection against intruders. In fact, perimeter security (e.g., firewalls) does not affect and does not protect against legitimate users acting maliciously.

With respect to possible threats in computer networks and distributed systems, it is common to distinguish between host and communication compromises. A host compromise is the result of a subversion of an individual host within a computer network or distributed system. Various degrees of subversion are possible, ranging from the relatively benign case of corrupting process state information to the extreme case of assuming total control of the host. Web servers are heavily exposed to attackers trying to compromise the corresponding hosts.[4] A communication compromise is the result of a subversion of a communication line within a computer network or distributed system.

Last, a countermeasure refers to "an action, device, procedure, or technique that reduces a threat, a vulnerability, or an attack by eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken" [1]. For example, the use of strong authentication techniques (as discussed in many points throughout this book) reduces the vulnerability of passwords transmitted in the clear. Similarly, the use of cryptographic authentication at the network layer effectively eliminates attacks based on machines spoofing other machines' IP addresses.
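
As a sketch of what such a strong authentication technique might look like, the following challenge-response exchange uses an HMAC so that the shared secret never crosses the wire; the secret and the challenge size are arbitrary illustrative choices:

    import hashlib
    import hmac
    import os

    shared_secret = b"s3cr3t"  # known only to client and server (illustrative)

    # Server side: issue a fresh, unpredictable challenge.
    challenge = os.urandom(16)

    # Client side: prove knowledge of the secret without revealing it.
    response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

    # Server side: recompute the expected response and compare in constant time.
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    print(hmac.compare_digest(response, expected))  # True -> authenticated

An eavesdropper sees only the challenge and the response, neither of which reveals the secret, and replaying a captured response fails against the next (fresh) challenge.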

Against this background, it is fair to say that this book is about security technologies and countermeasures that can be used and deployed in TCP/IP-based networks (e.g., intranets) to provide security services.

[1]On a more humorous note, Lamport has also defined a distributed system as a "system that stops you from getting work done when a machine you've never seen crashes."

[2]The ITU-T was formerly known as Consultative Committee on International Telegraphy and Telephony (CCITT).

[3]The term hacker is often used to describe computer vandals who break into computer systems. These vandals call themselves hackers, and that is how they got the name, but in my opinion, they do not deserve it. In this book, we use the terms intruder and attacker instead.

[4]For example, refer to http://www.onething.com/archive/ for an archive of "hacked" Web sites.

