3. Architecture

3.1 Objectives

3.1.1 Completely Defined Security Plans

All sites should define a comprehensive security plan. This plan should be at a higher level than the specific policies discussed in Chapter 2, and it should be crafted as a framework of broad guidelines into which specific policies will fit.

It is important to have this framework in place so that individual policies can be consistent with the overall site security architecture. For example, having a strong policy with regard to Internet access and having weak restrictions on modem usage is inconsistent with an overall philosophy of strong security restrictions on external access.

A security plan should define the list of network services that will be provided; the areas of the organization that will provide the services; the people who will have access to those services; how access will be provided; who will administer those services; etc.

The plan should also address how an incident will be handled. Chapter 5 provides an in-depth discussion of this topic, but it is important for each site to define classes of incidents and corresponding responses. For example, sites with firewalls should set a threshold on the number of attempts made to foil the firewall before triggering a response. Escalation levels should be defined for both attacks and responses. Sites without firewalls will have to determine if a single attempt to connect to a host constitutes an incident. What about a systematic scan of systems?
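As an illustration, the following minimal Python sketch shows the kind of threshold-driven classification a site might define; the thresholds, class names, and source address are hypothetical, and a real site would choose its own classes and escalation levels.

    from collections import Counter

    # Hypothetical escalation thresholds, checked highest first.
    THRESHOLDS = [(100, "attack"), (10, "probe"), (1, "noise")]

    events = Counter()

    def classify(source_ip):
        """Record one suspicious event and return the escalation level."""
        events[source_ip] += 1
        for threshold, level in THRESHOLDS:
            if events[source_ip] >= threshold:
                return level
        return "none"

    # A systematic scan from one host quickly escalates from noise to probe.
    for _ in range(12):
        level = classify("203.0.113.7")
    print(level)  # -> "probe"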

For sites connected to the Internet, the rampant media magnification of Internet-related security incidents can overshadow a (potentially) more serious internal security problem. Likewise, companies that have never been connected to the Internet may have strong, well-defined internal policies but fail to adequately address an external connection policy.

3.1.2 Separation of Services

There are many services that a site may wish to provide for its users, some of which may be external. There are a variety of security reasons to attempt to isolate services on dedicated host computers. There are also performance reasons in most cases, but a detailed discussion is beyond the scope of this appendix.

The services that a site may provide will, in most cases, have different levels of access needs and models of trust. Services that are essential to the security or smooth operation of a site would be better off being placed on a dedicated machine with very limited access (discussed in the section "Deny All/Allow All"), rather than on a machine that provides a service (or services) that has traditionally been less secure or requires greater accessibility by users who may accidentally subvert security.

It is also important to distinguish between hosts that operate within different models of trust (e.g., all the hosts inside of a firewall and any host on an exposed network).

Some of the services that should be examined for potential separation are outlined in the section "Protecting the Services." It is important to remember that security is only as strong as the weakest link in the chain. Several of the most-publicized penetrations in recent years have been through the exploitation of vulnerabilities in electronic mail systems. The intruders were not trying to steal electronic mail, but they used the vulnerability in that service to gain access to other systems.

If possible, each service should be running on a different machine whose only duty is to provide a specific service. This helps to isolate intruders and limit potential harm.

3.1.3 Deny All/Allow All

There are two diametrically opposed, underlying philosophies that can be adopted when defining a security plan. Both alternatives are legitimate models to adopt, and the choice between them will depend on the site and its needs for security.

The first option is to turn off all services and then selectively enable services on a case-by-case basis as they are needed. This can be done at the host or network level as appropriate. This model, which will hereafter be referred to as the "deny all" model, is generally more secure than the other model, described in the next paragraph. Successfully implementing a "deny all" configuration requires more work, as well as a better understanding of services. Allowing only known services provides for a better analysis of a particular service/protocol and the design of a security mechanism suited to the security level of the site.
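As a minimal sketch of the "deny all" posture, the following Python fragment refuses every service that has not been explicitly reviewed and enabled; the service names are illustrative only.

    # Services enabled case by case, after review; everything else is refused.
    ALLOWED_SERVICES = {"smtp", "dns"}

    def is_permitted(service):
        # The default posture is denial.
        return service in ALLOWED_SERVICES

    print(is_permitted("smtp"))  # True  - explicitly enabled
    print(is_permitted("tftp"))  # False - denied by default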

The other model, which will hereafter be referred to as the "allow all" model, is much easier to implement but is generally less secure than the "deny all" model. Simply turn on all services, usually the default at the host level, and allow all protocols to travel across network boundaries, usually the default at the router level. As security holes become apparent, they are restricted or patched at either the host or network level.

Each of these models can be applied to different portions of the site, depending on functionality requirements, administrative control, site policy, etc. For example, the policy may be to use the "allow all" model when setting up workstations for general use but adopt a "deny all" model when setting up information servers, such as an e-mail hub. Likewise, an "allow all" policy may be adopted for traffic between LANs internal to the site, but a "deny all" policy can be adopted between the site and the Internet.

Be careful when mixing philosophies as in these examples. Many sites adopt the theory of a hard "crunchy" shell and a soft "squishy" middle. They are willing to pay the cost of security for their external traffic and require strong security measures but are unwilling or unable to provide similar protections internally. This works fine as long as the outer defenses are never breached and the internal users can be trusted. Once the outer shell (firewall) is breached, subverting the internal network is trivial.

3.1.4 Identify Real Needs for Services

There is a large variety of services that may be provided, both internally and on the Internet. Managing security is, in many ways, managing access to services internal to the site and managing how internal users access information at remote sites.

Services tend to rush like waves over the Internet. Over the years, many sites have established anonymous FTP servers, gopher servers, WAIS servers, Web servers, etc., as they became popular, even though the services were not particularly needed at all sites. Evaluate all new services that are established with a skeptical attitude to determine if they are actually needed or just the current fad sweeping the Internet.

Bear in mind that security complexity can grow exponentially with the number of services provided. Filtering routers need to be modified to support the new protocols. Some protocols are inherently difficult to filter safely (e.g., RPC and UDP services), thus providing more openings to the internal network. Services provided on the same machine can interact in catastrophic ways. For example, allowing an anonymous FTP on the same machine as the Web server may allow an intruder to place a file in the anonymous FTP area and cause the HTTP server to execute it.

3.2 Network and Service Configuration

3.2.1 Protecting the Infrastructure

Many network administrators go to great lengths to protect the hosts on their networks. Few administrators make any effort to protect the networks themselves. There is some rationale to this. For example, it is far easier to protect a host than a network. Also, intruders are likely to be after data on the hosts; damaging the network would not serve their purposes. That said, there are still reasons to protect the networks. For example, an intruder might divert network traffic through an outside host in order to examine the data (i.e., to search for passwords). Also, infrastructure includes more than the networks and the routers that interconnect them. Infrastructure also includes network management (e.g., SNMP), services (e.g., DNS, NFS, NTP, WWW), and security (i.e., user authentication and access restrictions).

The infrastructure also needs protection against human error. When an administrator configures a host incorrectly, that host may offer degraded service. This affects only users who require that host, and unless that host is a primary server, the number of affected users will be limited. However, if a router is configured incorrectly, all users who require the network will be affected. Obviously, this is a far larger number of users than those depending on any one host.

3.2.2 Protecting the Network

There are several problems to which networks are vulnerable. The classic problem is a denial-of-service attack. In this case, the network is brought to a state in which it can no longer carry legitimate users' data. There are two common ways this can be done: by attacking the routers and by flooding the network with extraneous traffic. Please note that the term "router" in this section is used as an example of a larger class of active network interconnection components that also includes components such as firewalls, proxy servers, etc.

An attack on the router is designed to cause it to stop forwarding packets or to forward them improperly. The former case may be due to poor configuration, the injection of a spurious routing update, or a "flood attack" (i.e., the router is bombarded with packets that cannot be routed, causing its performance to degrade). A flood attack on a network is similar to a flood attack on a router, except that the flood packets are usually broadcast. An ideal flood attack is the injection of a single packet that exploits some known flaw in the network nodes and causes them to retransmit the packet or to generate error packets, each of which is picked up and repeated by another host. A well-chosen attack packet can even generate an exponential explosion of transmissions.

Another classic problem is spoofing. In this case, spurious routing updates are sent to one or more routers, causing them to misroute packets. This differs from a denial-of-service attack only in the purpose behind the spurious route. In denial of service, the object is to make the router unusable, a state that will be quickly detected by network users. In spoofing, the spurious route will cause packets to be routed to a host from which an intruder may monitor the data in the packets. These packets are then rerouted to their correct destinations. However, the intruder may or may not have altered the contents of the packets.

The solution to most of these problems is to protect the routing update packets sent by the routing protocols in use (e.g., RIP-2, OSPF). There are three levels of protection: clear-text password, cryptographic checksum, and encryption. Passwords offer only minimal protection against intruders who do not have direct access to the physical networks. Passwords also offer some protection against misconfigured routers (i.e., routers that, out of the box, attempt to route packets). The advantage of passwords is that they have a very low overhead in both bandwidth and CPU consumption. Checksums protect against the injection of spurious packets, even if the intruder has direct access to the physical network. Combined with a sequence number or other unique identifier, a checksum can also protect against "replay" attacks, wherein an old (but valid at the time) routing update is retransmitted by either an intruder or a misbehaving router. The most security is provided by complete encryption of sequenced or uniquely identified routing updates. This prevents an intruder from determining the topology of the network. The disadvantage to encryption is the overhead involved in processing the updates.
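To illustrate the middle level of protection, here is a minimal Python sketch of a keyed checksum over a sequenced routing update. Note that it uses HMAC-SHA-256 for the example rather than the keyed MD5 the routing protocol extensions actually specify, and the key and update format are hypothetical.

    import hashlib
    import hmac

    SHARED_KEY = b"site-routing-key"  # hypothetical pre-shared key

    def sign_update(seq, update):
        # The sequence number is covered by the digest to defeat replay.
        msg = seq.to_bytes(4, "big") + update
        return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

    def verify_update(seq, update, digest, last_seq):
        if seq <= last_seq:  # stale sequence number: possible replay
            return False
        return hmac.compare_digest(sign_update(seq, update), digest)

    update = b"route 10.0.0.0/8 via A"
    digest = sign_update(7, update)
    print(verify_update(7, update, digest, last_seq=6))  # True
    print(verify_update(7, update, digest, last_seq=7))  # False (replayed)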

RIP-2 (RFC 1723) and OSPF (RFC 1583) both support clear-text passwords in their base design specifications. In addition, there are extensions to each base protocol to support MD5-based cryptographic authentication.

Unfortunately, there is no adequate protection against a flooding attack or a misbehaving host or router that is flooding the network. Fortunately, this type of attack is obvious when it occurs and can usually be terminated relatively simply.

3.2.3 Protecting the Services

There are many types of services, and each has its own security requirements. These requirements will vary based on the intended use of the service. For example, a service that should only be usable within a site (e.g., NFS) may require different protection mechanisms than a service provided for external use. It may be sufficient to protect the internal server from external access. However, a Web server that provides a home page intended for viewing by users anywhere on the Internet requires built-in protection. That is, the service/protocol/server must provide whatever security may be required to prevent unauthorized access and modification of the Web database.

Internal services (i.e., services meant to be used only by users within a site) and external services (i.e., services deliberately made available to users outside a site) will, in general, have protection requirements that differ, as previously described. It is therefore wise to isolate the internal services to one set of server host computers and the external services to another set of server host computers. That is, internal and external servers should not be co-located on the same host computer. In fact, many sites go so far as to have one set of subnets (or even different networks) that are accessible from the outside and another set that may be accessed only within the site. Of course, there is usually a firewall that connects these partitions. Great care must be taken to ensure that such a firewall is operating properly.

There is increasing interest in using intranets to connect different parts of an organization (e.g., divisions of a company). While this appendix generally differentiates between external and internal (public and private), sites using intranets should be aware that they will need to consider three separations and take appropriate actions when designing and offering services. A service offered to an intranet would be neither public nor as completely private as a service to a single organizational sub-unit. Therefore, the service would need its own supporting system, separated from both external and internal services and networks.

One form of external service deserves some special consideration, and that is anonymous or guest access. This may be either anonymous FTP or guest (unauthenticated) login. It is extremely important to ensure that anonymous FTP servers and guest login user IDs are carefully isolated from any hosts and file systems from which outside users should be kept. Another area to which special attention must be paid concerns anonymous, writable access. A site may be legally responsible for the content of publicly available information, so careful monitoring of the information deposited by anonymous users is advised.

Now we shall consider some of the most popular services: name service, password/key service, authentication/proxy service, electronic mail, Web services, file transfer, and NFS. Because these are the most frequently used services, they are the most obvious points of attack. Also, a successful attack on one of these services can produce disaster all out of proportion to the innocence of the basic service.

3.2.3.1 Name Servers (DNS and NIS(+))

The Internet uses the Domain Name System (DNS) to perform address resolution for host and network names. The Network Information Service (NIS) and NIS+ are not used on the global Internet but are subject to the same risks as a DNS server. Name-to-address resolution is critical to the secure operation of any network. An attacker who can successfully control or impersonate a DNS server can reroute traffic to subvert security protections. For example, routine traffic can be diverted to a compromised system to be monitored; or users can be tricked into providing authentication secrets. An organization should create well-known, protected sites to act as secondary name servers and protect their DNS masters from denial-of-service attacks using filtering routers.
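One common sanity check against spoofed name service is forward-confirmed reverse DNS: resolve an address to a name, then confirm the name resolves back to the same address. A minimal sketch, using only the Python standard library (the address shown is a placeholder):

    import socket

    def forward_confirmed(addr):
        try:
            name, _, _ = socket.gethostbyaddr(addr)       # reverse lookup
            _, _, addrs = socket.gethostbyname_ex(name)   # forward lookup
        except OSError:
            return False
        return addr in addrs

    # A mismatch suggests the reverse zone, or the resolver path, is lying.
    print(forward_confirmed("198.51.100.10"))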

Traditionally, DNS has had no security capabilities. In particular, the information returned from a query could not be checked for modification, nor verified to have come from the name server in question. Work has been done to incorporate digital signatures into the protocol which, when deployed, will allow the integrity of the information to be cryptographically verified (see RFC 2065).

3.2.3.2 Password/Key Servers (NIS(+) and KDC)

Password and key servers generally protect their vital information (i.e., the passwords and keys) with encryption algorithms. However, even a one-way encrypted password can be determined by a dictionary attack (wherein common words are encrypted to see if they match the stored encryption). It is therefore necessary to ensure that these servers are not accessible by hosts that do not plan to use them for the service, and that even those hosts can access only the service itself (i.e., general services such as Telnet and FTP should not be allowed by anyone other than administrators).
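A minimal sketch of the dictionary attack described above, with salted SHA-256 standing in for whatever one-way scheme the server actually uses (the stolen entry and word list are, of course, hypothetical):

    import hashlib

    def hash_password(word, salt):
        return hashlib.sha256(salt + word.encode()).digest()

    stolen_salt = b"x1"
    stolen_hash = hash_password("wombat", stolen_salt)  # the stored value

    # Encrypt common words and compare against the stored value.
    for candidate in ["secret", "password", "wombat"]:
        if hash_password(candidate, stolen_salt) == stolen_hash:
            print("cracked:", candidate)
            break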

3.2.3.3 Authentication/Proxy Servers (SOCKS, FWTK)

A proxy server provides a number of security enhancements. It allows sites to concentrate services through a specific host to allow monitoring, hiding of internal structure, etc. This funneling of services creates an attractive target for a potential intruder. The type of protection required for a proxy server depends greatly on the proxy protocol in use and the services being proxied. The general rule of limiting access only to those hosts that need the services and limiting access by those hosts to only those services is a good starting point.

3.2.3.4 Electronic Mail

Electronic mail (e-mail) systems have long been a source for intruder break-ins because e-mail protocols are among the oldest and most widely deployed services. Also, by its very nature, an e-mail server requires access to the outside world; most e-mail servers accept input from any source. An e-mail server generally consists of two parts: a receiving/sending agent and a processing agent. Because e-mail is delivered to all users and is usually private, the processing agent typically requires system (root) privileges to deliver the mail. Most e-mail implementations perform both portions of the service, which means the receiving agent also has system privileges. This opens several security holes that this appendix will not describe.

There are some implementations available that allow a separation of the two agents. Such implementations are generally considered more secure but still require careful installation to avoid creating a security problem.

3.2.3.5 World Wide Web

The Web is growing in popularity exponentially because of its ease of use and the powerful ability to concentrate information services. Most Web servers accept some type of direction and action from the persons accessing their services. The most common example is taking a request from a remote user and passing the provided information to a program running on the server to process the request. Some of these programs are not written with security in mind and can create security holes. If a Web server is available to the Internet community, it is especially important that confidential information not be co-located on the same host as that server. In fact, it is recommended that the server have a dedicated host that is not "trusted" by other internal hosts.

Many sites may want to co-locate FTP service with their Web service, but this should only occur for anonymous FTP (anonftp) servers that only provide information (ftp-get). Anonftp puts, in combination with the Web service, can be dangerous (e.g., they could result in modifications to the information your site is publishing to the Web) and in themselves make the security considerations for each service different.

3.2.3.6 File Transfer (FTP, TFTP)

FTP and TFTP allow users to receive and send electronic files in a point-to-point manner. However, FTP requires authentication while TFTP does not. For this reason, TFTP should be avoided as much as possible.

Improperly configured FTP servers can allow intruders to copy, replace, and delete files at will anywhere on a host, so it is very important to configure this service correctly. Access to encrypted passwords and proprietary data and the introduction of Trojan horses are just a few of the potential security holes that can occur when the service is configured incorrectly. FTP servers should reside on their own host. Some sites choose to co-locate FTP with a Web server because the two protocols share common security considerations. However, the practice is not recommended, especially when the FTP service allows the deposit of files (see the section "World Wide Web"). As mentioned in the opening paragraphs of the section "Protecting the Services," services offered internally to your site should not be co-located with services offered externally. Each service should have its own host.

TFTP does not support the same range of functions as FTP and has no security whatsoever. This service should only be considered for internal use, and then it should be configured in a restricted way so that the server only has access to a set of predetermined files (instead of every world-readable file on the system). Probably the most common usage of TFTP is for downloading router configuration files to a router. TFTP should reside on its own host and should not be installed on hosts supporting external FTP or Web access.

3.2.3.7 NFS

The Network File System (NFS) allows hosts to share common disks. NFS is frequently used by diskless hosts that depend on a disk server for all of their storage needs. Unfortunately, NFS has no built-in security. It is therefore necessary that the NFS server be accessible only by those hosts that are using it for service. This is achieved by specifying which hosts the file system is being exported to and in what manner (e.g., read-only, read-write, etc.). File systems should not be exported to any hosts outside the local network, because this would require that the NFS service be accessible externally. Ideally, external access to NFS service should be stopped by a firewall.
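The export restriction can be modeled simply: each file system is exported only to named hosts, each with an access mode. A hedged sketch follows (the hosts, paths, and modes are hypothetical; real NFS implementations express this in their exports configuration):

    # Hypothetical export table: path -> {host: mode}.
    EXPORTS = {
        "/home": {"client1.example.com": "rw", "client2.example.com": "ro"},
        "/usr/share": {"client1.example.com": "ro"},
    }

    def may_mount(path, host, want_write):
        mode = EXPORTS.get(path, {}).get(host)
        if mode is None:
            return False  # not exported to this host at all
        return mode == "rw" or not want_write

    print(may_mount("/home", "client2.example.com", want_write=True))  # False
    print(may_mount("/home", "client1.example.com", want_write=True))  # True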

3.2.4 Protecting the Protection

It is amazing how often a site will overlook the most obvious weakness in its security by leaving the security server itself open to attack. Based on considerations previously discussed, it should be clear that the security server should not be accessible from offsite; should offer minimum access, except for the authentication function, to users on-site; and should not be co-located with any other servers. Further, all access to the node, including access to the service itself, should be logged to provide an audit trail in the event of a security breach.

3.3 Firewalls

One of the most widely deployed and publicized security measures in use on the Internet is a firewall. Firewalls have been given the reputation of a general panacea for many, if not all, of the Internet security issues. They are not. A firewall is just another tool in the quest for system security. Firewalls provide a certain level of protection and are, in general, a way of implementing security policy at the network level. The level of security that a firewall provides can vary as much as the level of security on a particular machine. There are the traditional trade-offs between security, ease of use, cost, complexity, etc.

A firewall is any one of several mechanisms used to control and watch access to and from a network for the purpose of protecting it. A firewall acts as a gateway through which all traffic to and from the protected network or systems passes. Firewalls help to place limitations on the amount and type of communication that takes place between the protected network and the other network (e.g., the Internet or another piece of the site's network).

A firewall is generally a way to build a wall between one part of a network (e.g., a company's internal network) and another part (e.g., the global Internet). The unique feature about this wall is that there needs to be ways for some traffic with particular characteristics to pass through carefully monitored doors (gateways). The difficult part is establishing the criteria by which the packets are allowed or denied access through the doors. Books written on the subject use different terminology to describe the various forms of firewalls. This can be confusing to system administrators who are not familiar with firewalls. The thing to note here is that there is no fixed terminology for the description of firewalls.

Firewalls are not always or even typically a single machine. Rather, firewalls are often a combination of routers, network segments, and host computers. Therefore, for the purposes of this discussion, the term "firewall" can consist of more than one physical device. Firewalls are typically built using two different components, filtering routers and proxy servers.

Filtering routers are the easiest component to conceptualize in a firewall. A router moves data back and forth between two (or more) different networks. A "normal" router takes a packet from network A and "routes" it to its destination on network B. A filtering router does the same thing but decides not only how to route the packet but whether it should route the packet at all. This is done by installing a series of filters by which the router decides what to do with any given packet of data.

A discussion concerning the capabilities of a particular brand of router running a particular software version is outside the scope of this appendix. However, when evaluating a router to be used for filtering packets, the following criteria can be important when implementing a filtering policy: source and destination IP address, source and destination TCP port numbers, state of the TCP ACK bit, UDP source and destination port numbers, and direction of packet flow (i.e., A->B or B->A). Other information necessary to construct a secure filtering scheme is whether the router reorders filter instructions (reordering designed to optimize filters can sometimes change their meaning and cause unintended access) and whether it is possible to apply filters for inbound and outbound packets on each interface. If the router filters only outbound packets, then the router is "outside" of its filters and may be more vulnerable to attack. In addition to the router itself being vulnerable, this distinction between applying filters on inbound or outbound packets is especially relevant for routers with more than two interfaces. Other important issues are the ability to create filters based on IP header options and the fragment state of a packet. Building a good filter can be very difficult and requires a good understanding of the type of services (protocols) that will be filtered.
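A toy model of a filter rule over the criteria just listed may make this concrete. The policy encoded here (allow all outbound traffic; allow inbound TCP only if it is a reply to an established connection or a new SMTP connection) is purely illustrative:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str
        dst: str
        proto: str
        sport: int
        dport: int
        direction: str   # "in" or "out"
        ack: bool = False  # TCP ACK bit; clear on a new connection attempt

    def permit(p):
        if p.direction == "out":
            return True
        if p.proto == "tcp" and p.ack:
            return True  # reply to a connection initiated from inside
        return p.proto == "tcp" and p.dport == 25  # inbound mail only

    # An inbound telnet connection attempt (no ACK bit) is dropped.
    print(permit(Packet("203.0.113.5", "192.0.2.10", "tcp", 40000, 23, "in")))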

For better security, the filters usually restrict access between the two connected networks to just one host, the bastion host. It is only possible to access the other network via this bastion host. Because only this host, rather than a few hundred hosts, can be attacked, it is easier to maintain a certain level of security: only this one host has to be protected very carefully. To make resources available to legitimate users across this firewall, services have to be forwarded by the bastion host. Some servers have built-in forwarding (e.g., DNS servers or SMTP servers); for other services (e.g., Telnet, FTP), proxy servers can be used to allow access to the resources across the firewall in a secure way.

A proxy server is a way to concentrate application services through a single machine. There is typically a single machine (the bastion host) that acts as a proxy server for a variety of protocols (Telnet, SMTP, FTP, HTTP, etc.), but there can be individual host computers for each service. Instead of connecting directly to an external server, the client connects to the proxy server, which in turn initiates a connection to the requested external server. Depending on the type of proxy server used, it is possible to configure internal clients to perform this redirection automatically, without the user's knowledge; other types might require that the user connect directly to the proxy server and then initiate the connection in a specified format.

There are significant security benefits that can be derived from using proxy servers. It is possible to add access control lists to protocols, requiring users or systems to provide some level of authentication before access is granted. Smarter proxy servers, sometimes called Application Layer Gateways (ALGs), can be written that understand specific protocols and can be configured to block only subsections of the protocol. For example, an ALG for FTP can tell the difference between the "put" command and the "get" command; an organization may wish to allow users to "get" files from the Internet but not be able to "put" internal files on a remote server. By contrast, a filtering router could either block all FTP access or none but not a subset.
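A minimal sketch of such an ALG check on the FTP control channel: the command that implements "get" (RETR) passes, while the commands that implement "put" (STOR and its variants) are refused. This is a decision a pure packet filter cannot express:

    # Upload commands to refuse; download commands pass untouched.
    BLOCKED_COMMANDS = {"STOR", "STOU", "APPE"}

    def filter_ftp_command(line):
        """Return True if this FTP control-channel command may pass."""
        command = line.strip().split(" ", 1)[0].upper()
        return command not in BLOCKED_COMMANDS

    print(filter_ftp_command("RETR report.txt"))   # True  - "get" allowed
    print(filter_ftp_command("STOR secrets.txt"))  # False - "put" blocked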

Proxy servers can also be configured to encrypt data streams based on a variety of parameters. An organization might use this feature to allow encrypted connections between two locations whose sole access points are on the Internet.

Firewalls are typically thought of as a way to keep intruders out, but they are also often used as a way to let legitimate users into a site. There are many examples where a valid user might need to regularly access the "home" site while traveling to trade shows, conferences, etc. Access to the Internet is often available but may be through an untrusted machine or network. A correctly configured proxy server can allow certain users into the site while still denying access to other users.

The current best practice in firewall techniques combines a pair of screening routers with one or more proxy servers on a network between the two routers. This setup allows the external router to block off any attempts to use the underlying IP layer to break security (IP spoofing, source routing, packet fragments), while allowing the proxy server to handle potential security holes in the higher layer protocols. The internal router's purpose is to block all traffic except to the proxy server. If this setup is rigidly implemented, a high level of security can be achieved.

Most firewalls provide logging that can be tuned to make security administration of the network more convenient. Logging may be centralized, and the system may be configured to send out alerts for abnormal conditions. It is important to regularly monitor these logs for any signs of intrusions or break-in attempts. Because some intruders will attempt to cover their tracks by editing logs, it is desirable to protect these logs. A variety of methods are available, including write once, read many (WORM) drives; paper logs; and centralized logging via the syslog utility. Another technique is to use a fake serial printer but have the serial port connected to an isolated machine or PC that keeps the logs.
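As a small example of centralized logging, the following Python sketch ships firewall events to a dedicated log host over the standard syslog protocol; the host name is hypothetical, and the log host itself should be isolated and protected as described above.

    import logging
    import logging.handlers

    logger = logging.getLogger("firewall")
    logger.setLevel(logging.INFO)
    # Send events to a dedicated, isolated log host (UDP syslog, port 514).
    logger.addHandler(logging.handlers.SysLogHandler(
        address=("loghost.example.com", 514)))

    logger.warning("blocked inbound telnet from 203.0.113.5")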

Firewalls are available in a wide range of quality and strengths. Commercial packages start at approximately $10,000 and go up to over $250,000. Homegrown firewalls can be built for smaller amounts of capital. It should be remembered that the correct setup of a firewall (commercial or homegrown) requires a significant amount of skill and knowledge of TCP/IP. Both types require regular maintenance, installation of software patches and updates, and regular monitoring. When budgeting for a firewall, these additional costs should be considered in addition to the cost of the physical elements of the firewall.

As an aside, building a homegrown firewall requires a significant amount of skill and knowledge of TCP/IP. It should not be attempted lightly, because a false sense of security is worse in the long run than knowing that there is no security. As with all security measures, it is important to decide on the threat, the value of the assets to be protected, and the costs to implement security.

A final note about firewalls: A firewall can be a great aid when implementing security for a site and can protect against a large variety of attacks. But it is important to keep in mind that a firewall is only one part of the solution. A firewall cannot protect your site against all types of attack.


