Securing Network Services


In addition to hardening the Linux kernel itself, you must also secure any network services that expose your server to the outside world. In the early days of computing, systems were accessed via hardwired (dumb) terminals or remote job entry devices (such as punched card readers). With the advent of local area networks and the development of various TCP/IP-based applications, computers today are accessed mostly from intelligent workstations over high-speed networks (either local or remote), meaning client workstations and the server interact in a collaborative manner. As a result, server-side applications (generally referred to as services) must be robust and tolerant of any errors or faulty data provided by the clients. At the same time, because the communication pathway between the clients and the server may cross publicly accessible devices (such as routers on the Internet), there is also a need to protect the data from being "sniffed."

Some of the concepts and their applications have already been covered in previous chapters but are reviewed here in the context of securing network services. The following topics are discussed in this section:

  • Hardening remote services by using secure protocols when a public network separates the communication pathway between client and server.

  • Preventing service processes from having access to data they do not need. This can also be achieved by using UML or chroot jails as detailed in a later section of this chapter.

  • Granting system processes only the absolute minimum number of rights for them to function correctly. This ensures that the application and its configuration files are only accessible to the appropriate UID/GID.

  • Reducing the risk of DoS attacks by blocking undesired data packets from reaching the services. Inappropriate packets should be dropped at the router and firewall levels, preventing them from reaching the server. The local machine firewall would then not have to deal with them, thus reducing the CPU load on the server and the network load on its segment.

  • Hardening your network infrastructure.

  • Addressing wireless security concerns.

Hardening Remote Services

As illustrated in Chapter 12, it is very easy to set up and attach a data sniffer, such as Ethereal, to a network and passively capture transmissions between two devices, including sensitive data such as user login names and passwords. The most troubling issue here is not the ease with which a sniffer can be employed to snoop on your data, but the fact that it can be done without your knowledge.

When users are communicating with your server from a remote location across public networks (such as the Internet or via a wireless network), there are many (too many) junctions where a sniffer may be placed. Chances are remote that someone would specifically target you for a data snoop. It is not uncommon, however, for your ISP to monitor traffic in its data centers to identify bottlenecks and dynamically reroute traffic as necessary. You have no way of knowing how this captured data will be used or what type of security safeguards it from unauthorized access. By the same token, even the network within your organization is not totally safe from snooping.

For instance, an outside vendor may come in to troubleshoot one of its products and hook up a sniffer, or one of your summer interns may decide it's "fun" to see what goes on across the wire. Or if you have wireless networks in-house, they can be vulnerable (see the "Wireless Security" section later in this chapter). Therefore, it pays to be somewhat paranoid when you are accessing your server from across a network, remote or otherwise.

ACCIDENTAL DATA SNOOP

A number of years ago as part of an annual network health check, we connected a Token Ring sniffer to the network at a financial institution (with blessings from the network administrator and senior management). The main purpose was to check on token rotation times, the number of purge frames, error frames, and so on. However, one of the groups was implementing a new database server and was having some performance trouble, and we were asked to help look into it.

With the sniffer in hand, we looked at how the client software was querying the database server and found that due to a bug in the API, the client software was performing certain table searches by rows instead of columns. While reviewing the captured traces for the final report, we found one more bug in the software: It did not encrypt the user password before sending it on to the server for authentication!

Because data snooping, "accidental" or otherwise, can lead to involuntary disclosure of sensitive data, you should write into your corporate security policy and make known to all users (including vendors) that any unauthorized use of a sniffer will have severe consequences.


If you are somewhat concerned about the confidentiality of your data as it traverses your internal network or travels across public networks, you should consider securing, at the very least, the following remote access services:

  • Telnet and r*-utilities (such as rlogin) These services should be disabled, and ssh should be used instead. You can find details about setting up ssh in Chapter 8, "Network Services."

  • Remote control applications (such as VNC) If you must use such an application, run it over ssh if it supports doing so (as VNC can). If not, you should at the very least set up access over a VPN link. You can find details about setting up VNC in Chapter 8.

  • FTP Traditional FTP login and data transfer are not encrypted. Because most of the time you use the SLES username and password for FTP download from your server, sending the username and password in cleartext is not a good idea. Instead, you should use something like vsftpd as your FTP daemon (refer to Chapter 8), enable SSL support, and use a secure FTP client such as sftp or PSFTP (http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).

  • Email The problem here is two-fold. First, although there is an authentication extension for SMTP service (RFC 2554), it is not in wide use. Therefore, data transfers between SMTP servers are mostly performed in cleartext. Consequently, anyone sniffing SMTP traffic may find out confidential information about you, your correspondent, and your company. Second, when you download email from your mail server to the workstation using POP3 or similar protocols, it is in cleartext as well. To protect your email from being snooped, you should use an email client that supports encryption (for example, see http://www.gnupg.org/(en)/related_software/frontends.html#mua). At the very least, manually encrypt your messages and file attachments using something like GnuPG before sending them.

Although you can access all the preceding services securely using a VPN link, it is not always feasible because you need a VPN client installed on your workstation. There will be situations when you need to remotely access your server at work from a workstation that does not have a compatible VPN client. Therefore, it is best to secure the remote services themselves; when you can also use a VPN, you get a second layer of security.
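
For the email case in the preceding list, for example, you can encrypt a file manually before handing it to your mail client. The following is a minimal sketch using GnuPG; it assumes the gpg package is installed and that you have already imported the recipient's public key (the recipient address and filename are hypothetical):

 Athena> gpg --encrypt --armor --recipient colleague@example.com quarterly-report.txt
 Athena> ls quarterly-report.txt.asc     # ASCII-armored ciphertext, safe to attach to a message

The recipient runs gpg --decrypt on the attachment to recover the original file. If the recipient does not have a public key, gpg --symmetric can be used instead, with the passphrase exchanged out of band.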

Limiting Rights of Services

Every process on a system is executed under the context of an account. If you sign onto a server and launch a text editor, in theory, the executable instance of the text editor has access to all the files and directories that you would normally have access to. This doesn't sound unreasonable and, in fact, for you to be able to save your file after the edit session, it is a requirement.

If you extrapolate this situation to a typical multiuser system, the situation becomes fairly complicated. In addition to all the interactive sessions from users and their respective cron jobs, a system also runs a number of services. Each one of these tasks has access to portions of the system.

As a system administrator, you need to understand the exposure presented by these processes. In the case of a simple text editor, little is exposed to damage other than the user's files. If the user happens to be root, however, the default process privileges can have serious implications in the event of human error.

The same situation arises for all processes. By default, user processes are fairly well contained. Their access to the system is limited to their own account's environment. If a user were to run a program under his or her own credentials, any damage caused by coding deficiencies within the routine would be limited to the access rights of that user's account. For unprivileged users, the damage is restricted by the limitations of the account.

The question is, What happens in the case of a privileged account? There is simply no mechanism available for the operating system to know the difference between an event triggered by mistake and one initiated by design. If a coding error in an application triggers the deletion of files in a directory, there is little to prevent the event. The more privileged the account running the faulty application, the higher the potential damage. It is therefore imperative to ensure that each and every process running on a server is executed with the most minimal privilege possible.

An additional layer of complexity results from placing your server on a network. So far, we have discussed deficiencies within a program running locally on a system and their possible impact. After a server is placed on a network, the services it presents are exposed beyond the confines of the local system. This greatly increases the possibility that a coding deficiency in any service can be exploited from sources external to the machine. Additionally, the service is exposed to the network, where it could be attacked without requiring local credentials.

Many such exploits are common today. In some cases, the remediated code is made available in short order. In most cases, however, there is a significant lapse of time between the discovery of the vulnerability and a patch. It is important to reflect on the fact that vulnerabilities are "discovered" and to understand that this implies they were present and dormant for an extended period of time. What happens to a system during the period leading up to the discovery is unknown. It is quite possible that in some cases the vulnerability was exploited by external sources for a significant amount of time before it was "discovered." Similarly, a system administrator may need to reflect on what can be done between being made aware of a possible problem and the availability of a patch.

One of the most important factors in reducing the amount of exposure to a vulnerability is to contain services within accounts with minimal privileges. In more recent versions of Linux, this is configured by default.

SLES is installed by default in such a way. A review of the accounts in /etc/passwd reveals individual accounts for running most services. An account such as sshd, for example, is used only to provide ssh services to the server. It is a local account on the machine, and because it has no valid login shell, it cannot be used to log in interactively. This is in sharp contrast to the Telnet service available through xinetd. Though disabled by default in /etc/xinetd.d/telnetd, it can be enabled by simply changing the appropriate flag. When initiated at xinetd startup, the resulting service will, by default, run under the root account. If a vulnerability in Telnet were to be discovered and exploited, the access privilege granted to the attacker would be equivalent to root.
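
To review which account each running service actually uses, you can compare the process list against the service accounts defined in /etc/passwd. The following is only a quick sketch; the account names shown (sshd and wwwrun) are typical SUSE service accounts and may differ on your system:

 Athena> ps -eo user,pid,args --sort user | less      # which account owns each running process
 Athena> grep -E '^(sshd|wwwrun):' /etc/passwd        # confirm the service accounts have no usable login shell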

An additional group of processes that should be examined is the set of jobs that run under cron. In most cases, individuals run housekeeping jobs under cron. These tasks run in their user context, and few system-wide concerns are involved. You should closely scrutinize cron jobs that run under accounts that possess elevated privileges. Consider, for example, a script used to back up a special application. In some cases, a client group may require pre- and post-backup steps to be performed. You may be approached to run a customized script: /home/accounting/close_day.sh before the backup and /home/accounting/open_day.sh when the backups are done. Though this point may be obvious, these scripts must be moved to a location that is not user modifiable and audited for content before they are included in the nightly backup processes. If they are simply called by the nightly cron job, they will be executed in the same context as the backup process. If they are left in a location where they can be changed, there is little to prevent the introduction of a script error from killing the nightly backups. In the worst-case scenario, the scripts could be used by someone to gain backup-level privileged access to the system.
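
As a rough sketch of that precaution, you could copy the client group's hook scripts into a root-owned location, lock down their permissions, and have the backup cron job call the audited copies rather than the originals (the destination directory /usr/local/sbin is simply one reasonable choice):

 Athena> install -o root -g root -m 0750 /home/accounting/close_day.sh /usr/local/sbin/close_day.sh
 Athena> install -o root -g root -m 0750 /home/accounting/open_day.sh /usr/local/sbin/open_day.sh

Any later change requested by the client group then has to pass through you, and another review, before it can affect the nightly backups.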

All processes running on a system can have a direct impact on the overall health of the server. Application bugs and vulnerabilities are a direct threat to the information and resources provided by the server. Because these vulnerabilities are present in the applications themselves, local firewall policies cannot be applied to mitigate this threat. It is therefore imperative to scrutinize the accounts under which processes are run to evaluate a server's exposure from both internal and external sources.

Using chroot Jails and User Mode Linux

In the preceding section, we examined the importance of minimizing the impact unknown vulnerabilities can have on a server. This was done by restricting access to system resources through the selection of appropriate accounts for each process.

In this section, we examine two additional methods of further restricting system exposure. Both chroot jails and User Mode Linux (UML) are containment techniques. These methods provide a closed environment within which the application is run, segregating it from the rest of the system. Both chroot and UML provide such environments, but their approaches are vastly different.

chroot

A chroot jail is based on the concept of moving a process from the original system environment and isolating it within a separate, parallel environment. As the process is initiated, it is provided with an alternate environment within which it will run. This environment is made up of a complete directory structure that mimics the standard file system.

Before you can port an application into the jail, you need to know the resources the application requires to run. In the case of a statically linked executable, the list of extra files needed could be very short. If, however, the application to be run requires access to a number of libraries, things can get quite complex. In the case of a chroot'ed web service, additional applications such as Perl or PHP may need to be placed in the target tree. To create a chroot file structure, you must perform the following:

1. Create an isolated directory on a volume on the main server outside any path requiring privileges to access.

2. Create a number of standard directories such as /usr, /dev, /lib, and /etc.

3. Populate the directories with the appropriate files. In the case of the /lib directory, all the libraries required by the application must be copied, or the application will fail at runtime.

4. Ensure the /etc/passwd file in the chroot jail contains the accounts required by the application and not a copy of the live system's password file.

5. Apply the appropriate ownership and permissions to all content.

The creation of a complete chroot environment for any application is a complex task. The most difficult step is collecting all the necessary library routines required by the applications.
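
The ldd utility takes much of the guesswork out of that step by listing the shared libraries a dynamically linked binary needs. The following sketch uses vsftpd and the /newtree jail directory from the example that follows purely as an illustration; the actual list of files to copy comes from your own ldd output:

 Athena> ldd /usr/sbin/vsftpd                                    # list the shared libraries the binary requires
 Athena> mkdir -p /newtree/lib /newtree/etc /newtree/dev
 Athena> cp -v /lib/libc.so.6 /lib/ld-linux.so.2 /newtree/lib/   # copy each library (and the loader) that ldd reported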

After you have replicated the tree structure, you can launch the application with the command

 Athena> chroot /newtree command 

where /newtree is the directory specification for the directory structure created previously. This will now be used as the / directory for the application instance. The command parameter is simply the application that you want to run within the chroot'ed environment.

USER MODE LINUX (UML)

The second method of creating a segregated environment is using User Mode Linux. UML's approach to segregating environments is to create a virtual machine within which the new application is run.

This new virtual machine is, in all respects, a separate Linux installation. Though YaST provides support for the initial portions of the installation, a number of additional steps are required to finalize the configuration. As a separate machine, it needs to have the required software installed. Unlike the chroot installation where directories can be copied, the UML machine instance requires an actual install. Similarly, all account management functions and system hardening are required on the new system as well.

When UML is launched, it loads its own copy of the Linux kernel within the context of the account used to start the service. This provides a complete Linux system to the application. This system is functionally independent of the original system and acts quite literally as a separate machine. The disks provided for the virtual machine are, in fact, large files in the file system of the parent machine. They can be mounted and modified as normal disks on the parent until they contain what is required for the application. Once configured, UML is launched and the new machine takes over the delivery of the service. The virtual machine appears on the network as a separate entity with its own TCP/IP address, accounts, and services.

Both techniques for providing a segregated application environment are nontrivial. They do, however, provide significant isolation of the service being offered and therefore help protect the remainder of the system. The more complex of the two techniques appears to be the chroot path. Though a number of resources are available on the Internet to help configure such environments, finding all the minutiae required for an application can be quite tedious. Once completed, however, chroot does provide a minimal environment within which an application can run. If a vulnerability is found and exploited, that minimal environment offers no extraneous utilities that would be of advantage to an attacker.

The UML approach does provide a complete system environment and therefore requires more diligence in removing applications installed by default. The level of segregation, however, is almost complete and does not allow for any access to the original system's resources. One of the most significant advantages of the UML approach is the capacity for running a different kernel version or distribution within each virtual machine. This allows legacy applications that require older, more vulnerable releases to be isolated. Also, third-party software that is no longer supported, or that requires a specific runtime environment, can be hosted virtually until it can be replaced.

NOTE

The concept of creating a UML environment is to separate the hosted application into a separate virtual machine. This is a good thing. It does, however, imply that each UML environment requires individual system management. Hardening, tuning, and account management must be done on each just as if they were physically separate machines.


Packet Filtering Using iptables

Historically, servers were placed on internal networks with little worry about exploits. Because corporations had minimal or no direct access to the Internet, there were few chances of compromise. As time marched on, more and more companies became Internet accessible. Today, most companies allow Internet access all the way down to desktop devices.

The increase in access has been fueled by business demands both in terms of internal resources requiring access to information as well as an outward-facing presence for marketing purposes. To protect Internet-facing machines, companies have employed firewalls. A firewall is a device that can be used to restrict network traffic. Restrictions can be placed on the source, destination, or type of traffic involved. In many instances, firewall appliances are used to segregate corporations from the Internet. These appliances often contain proprietary operating systems and command sets. Others use embedded versions of operating systems such as Linux. These types of firewalls are called edge devices. Though SLES can be configured to run as an edge firewall, we focus more on the individual server-side implementation.

SLES can be used to implement a local system-level firewall. The firewall configuration can be controlled through YaST. Though YaST only scratches the surface of the potential of the SLES firewall, it is sufficient for most server-side needs.

YaST serves as a tool for modifying what is known as iptables rules. iptables is the administration tool used to control the firewall configuration. In its simplest form, iptables recognizes the following:

  • Network interfaces

  • Trusted and untrusted networks

  • Network protocols such as TCP, ICMP, and UDP

  • Network ports

  • Packet directions, expressed as the INPUT, OUTPUT, and FORWARD chains

  • Packet disposition: ACCEPT, DROP, QUEUE, and RETURN

Combining the different permutations and combinations of these attributes generates a list of rules. These rules govern what happens to a network packet destined for a target server. When rules are applied to a packet, the process is known as packet filtering.

Under YaST, selecting Security and Users presents the option for configuring the firewall. YaST understands most of the typical business applications that run on an SLES server:

  • HTTP and HTTPS

  • SMTP, IMAP (SSL), POP3 (SSL)

    NOTE

    In a properly secured environment, the number of cleartext protocols should be kept to a minimum. There is little that can be done to encrypt SMTP traffic to hosts beyond your corporate environment. Securing protocols such as IMAP and POP3 with their SSL equivalents will prevent usernames and passwords from being captured through sniffing. It will also protect the content from being seen internally if intercepted between workstations and the server.


  • ssh, Telnet

  • rsync

YaST also provides a more granular approach by allowing individual ports to be opened for less frequently used applications. Protocols such as DNS (port 53), Kerberos (port 88), and LDAP (port 389) are missing from the preceding list of standard protocols. If the firewall is enabled on a server and these protocols are required, manual adjustments will have to be made. The Additional Services section of the firewall configuration accommodates these requirements.
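
Before opening additional ports, it helps to confirm exactly which services are listening on the server. A quick way to do this (a sketch, assuming the standard net-tools package is installed) is:

 Athena> netstat -tlnp      # TCP ports in the LISTEN state, with the owning process
 Athena> netstat -ulnp      # UDP sockets and their owning processes

Anything listening that you do not recognize should be investigated before you open a hole for it in the firewall.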

In addition, third-party software installed on servers can also be made available through the firewall. A simple example of a port rule could allow for port 3306 (MySQL) to be permitted to traverse the firewall. The resulting entry in the iptables would look like this:

 ACCEPT tcp -- anywhere  anywhere state NEW,RELATED,ESTABLISHED tcp dpt:mysql 

This example was generated using the YaST tool by specifying, in Additional Services, that port 3306 be available for access to external machines. This entry allows traffic for MySQL to reach the local host. In the INPUT stream of packets, this rule will be interpreted as shown in Table 13.1.

Table 13.1. Interpreting the INPUT Stream of Packets

ACCEPT
    Tells the firewall to let packets matching the rest of the rule through.

tcp
    Defines the protocol of the packet.

--
    Specifies TCP options to filter on. A TCP option is a specific portion of the packet header. No option is specified in this case, so the rule does no additional filtering on TCP options.

anywhere anywhere
    Tells the firewall that packets from any source address (the first anywhere) to any destination address (the second anywhere) are matched.

state NEW,RELATED,ESTABLISHED
    NEW: packets initiating a new conversation. RELATED: packets that are continuations of pre-existing conversations. ESTABLISHED: packets that constitute replies to pre-existing conversations.

dpt:mysql
    Defines which TCP port the conversation will exist on. Keep in mind that we requested port 3306 be opened; the firewall has substituted the well-known application name for the port number.

    Warning: This does not mean that the traffic is actually MySQL traffic. It means only that the port is typically associated with MySQL.


Though the syntax and order of the rules can become quite complex, YaST provides a simpler, more intuitive interface.
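
If you prefer to work at the command line, the listing shown earlier can be produced with iptables -L, and a rule roughly equivalent to the YaST-generated one can be entered by hand. This is only a sketch; the rule set YaST builds is organized into its own chains, so a manually appended rule is an approximation, not an exact reproduction:

 Athena> iptables -L INPUT -n --line-numbers     # list the current INPUT rules
 Athena> iptables -A INPUT -p tcp --dport 3306 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT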

Placing firewalls on local servers above and beyond the edge firewalls might appear to be a waste of resources. Many Denial of Service attacks, however, come from internal sources. Viruses, worms, and the like propagate from machine to machine using known vulnerabilities. Though normally a workstation problem, these pests often affect the performance of servers. With a properly tuned firewall, requests for services from unauthorized clients can be eliminated from consideration at the firewall level. This removes the burden of processing from the application level. Consider, for example, the following two cases:

  • A number of workstations are infected with a worm that tries to guess the passwords of accounts on your servers. The worm is smart and assumes that only ssh traffic is allowed to your server farm. Simple enough, it parses the local /etc/passwd file, harvests account names, and then scans the subnets in question for machines answering on port 22. If the firewall is properly configured, it will allow only ssh sessions from specific machines within the data center. Infected machines in the rest of the company would not be able to establish any connections, and therefore, the exposure to password loss is reduced. This also prevents the local sshd from processing numerous bogus password attempts.

  • The Internet-facing web server requires MySQL access through an application server. If each machine's firewall is configured appropriately, only the application server will be permitted to talk over port 3306 to the MySQL server. The application server will accept only communications from the web server over a restricted customized port. In this scenario, even if the Internet-facing web server were compromised, the attacker would be unable to access the database directly. This setup might not stop the attack, but the network rules would certainly slow its progress, long enough, one would hope, for it to be identified. (A rough sketch of such rules follows this list.)
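
As a rough sketch of the kind of rules behind these two scenarios, the MySQL server's firewall might contain entries along the following lines. The data-center subnet 10.10.5.0/24 and the application server address 10.10.5.20 are hypothetical placeholders:

 Athena> iptables -A INPUT -p tcp -s 10.10.5.0/24 --dport 22 -j ACCEPT    # ssh only from the data-center subnet
 Athena> iptables -A INPUT -p tcp --dport 22 -j DROP                      # drop ssh from everywhere else
 Athena> iptables -A INPUT -p tcp -s 10.10.5.20 --dport 3306 -j ACCEPT    # MySQL only from the application server
 Athena> iptables -A INPUT -p tcp --dport 3306 -j DROP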

In a server environment, it is imperative to allow traffic only for those services that should be available and, again, only to those clients that require the access. Restricting access to other network-aware applications that might co-reside on the server will further reduce the machine's target profile. An additional benefit to applying filtering rules is that spurious traffic from unauthorized sources never reaches the intended service. This protects the exposed application from possible hacks while reducing the amount of processing time lost to unsolicited connection attempts.

Hardening Your Physical Network Infrastructure

Proper system hardening practice should include securing your networking hardware from being attacked or hijacked for other uses. For instance, your server is useless if your remote users can't reach it over the WAN link because some intruder hijacked your router and changed its configuration. Similarly, if the access infrastructure is not secure and the traffic easily snooped, your confidential company data can be easily stolen or compromised.

Most network administrators are familiar with the concept of using firewalls to block undesired network traffic in and out of a network. However, many have not given much thought to securing the physical aspects of the network, namely the underlying hardware. The following sections cover a few topics related to hardening your physical networking environment.

PHYSICAL SECURITY

Probably the foremost consideration in securing your networking environment is securing physical access to equipment such as the wiring racks, hubs and switches, and routers. Most of the time, these types of equipment are in a wiring closet located strategically behind closed and locked doors. Unlike in the movies, hackers tapping into your network via available ports on the wiring hubs and switches are rare. It is much easier instead to use an available network plug found in one of the empty offices, meeting areas, or conference rooms. Or the attack may even be launched from outside your premises if you have wireless access points installed! (See the "Wireless Security" section later in this chapter.)

NOTE

A primer on various common networking devices, such as switches and routers, can be found at http://www.practicallynetworked.com/networking/bridge_types.htm.


Following are some ideas to ponder when you are implementing physical security for your networking infrastructure:

  • Wiring racks, hubs, switches, routers, and servers should be under lock and key.

  • Consider setting passwords for your server's BIOS as well as the boot loader.

  • Given the negligible cost difference between hubs and switches, it is more secure to use switches because they make packet sniffing more difficult (but not impossible, as many would tend to think; see the "Sniffing Switches" section later in this chapter). At the same time, switches provide better network bandwidth management than hubs.

  • Networking devices should be connected to power surge protectors or, better yet, uninterruptible power supplies (UPSes) to guard against power fluctuations and short periods of power outage. We have seen hubs and switches damaged by power problems result in a partially downed network that took hours to diagnose.

  • Unused ports should be disabled at the hub or switch, or their patch cables should be left disconnected. If manageable hubs or switches are used, you should use the management software to periodically check that the unused ports are not suddenly being used without your prior knowledge. (One way to script such a check is sketched after this list.)
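
As an example of that last point, if your switch supports SNMP and the net-snmp tools are installed, a periodic check of port status can be scripted. This is only a sketch; switch01 and mycommunity are placeholders, and the switch must expose the standard IF-MIB:

 Athena> snmpwalk -v2c -c mycommunity switch01 IF-MIB::ifAdminStatus   # which ports you have enabled
 Athena> snmpwalk -v2c -c mycommunity switch01 IF-MIB::ifOperStatus    # which ports currently have a live link

Comparing the two lists over time (for example, from a nightly cron job that mails you the differences) reveals ports that have come alive without your knowledge.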

DEFAULT PASSWORDS AND SNMP COMMUNITY NAMES

Many manageable devices, such as routers, switches, and hubs, are shipped with factory-set default passwords, and some are shipped without a password at all. If you fail to change these passwords, an attacker can easily access your device remotely over the network and cause havoc. For instance, Cisco routers are popular, and many corporations use them to connect to the Internet. An attacker can easily use something like Cisco Scanner (http://www.securityfocus.com/tools?platid=-1&cat=1&offset=130) to look for any Cisco routers that have not yet changed their default password of cisco. After locating such a router, the hacker can use it as a launching point to attack your network or others (while the finger points to your router as being the source).

TIP

You can find many default user IDs and passwords for vendor products at http://www.cirt.net/cgi-bin/passwd.pl.


Manageable devices can normally be accessed in a number of different ways; for example, via a console port, Telnet, Simple Network Management Protocol (SNMP), or even a web interface. Each of these access routes is either assigned a default password or has none at all. Therefore, you need to change all of them to secure your device. If you don't change them, a hacker can get in via one of the unsecured methods and reset the passwords.

Consider this scenario: Your router can be remotely configured either via a Telnet session or SNMP set commands. To be able to manage the router remotely from your office or home, you dutifully changed the default Telnet access password. However, because you don't deal much with SNMP, you left that alone. A hacker stumbled across your router, found out its make, and looked up the default username and password. He tried to gain access through Telnet but found the default password didn't work. But by knowing that the router can also be configured via SNMP and knowing that the default read-write community name is private, the attacker can change the configuration of the router, reset the Telnet password to anything he wishes, and lock you out at the same time, all by using a simple SNMP management utility.
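
It is easy to test your own devices for this kind of exposure. With the net-snmp tools installed, the following sketch checks whether a router still answers to the common default community names (router01 is a placeholder for your device's address):

 Athena> snmpget -v1 -c public router01 sysName.0     # default read-only community
 Athena> snmpget -v1 -c private router01 sysName.0    # default read-write community

If either command returns the device's name, the default community is still active and should be changed or disabled before the device goes into production.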

Before putting any networking devices into production, first change all their default passwords, following standards of good practice by setting strong passwords (see Chapter 4, "User and Group Administration," and Chapter 11, "Network Security Concepts"). Furthermore, you should disable any unused remote access methods if possible.

SNIFFING SWITCHES

A switch handles data frames in a point-to-point manner. That is, frames from Node A to Node B are sent across only the circuits in the switch that are necessary to complete a (virtual) connection between Node A and Node B, while the other nodes connected to the same switch do not see that traffic. Consequently, it is the general belief that data sniffing in a switched environment is possible only via the "monitor port" (where all internal traffic is passed) on the switch, if it has one. However, studies have revealed that several methods are available to sniff switched networks. Following are two of these methods:

  • MAC address flooding Switches work by setting up virtual circuits from one data port to another. To track what devices are connected (so the data frames can be routed correctly), a switch must keep a translation table (known as the Content Addressable Memory, or CAM) that tracks which MAC addresses are on which physical port. (Each network card is identified by a unique Media Access Control, or MAC, address assigned by the manufacturer.) The amount of memory for this translation table is limited. It is this fact that allows the switch to be exploited for sniffing purposes. On some switches, it is possible to bombard the switch with bogus MAC address data (for instance, using macof from the dsniff suite; http://monkey.org/~dugsong/dsniff/dsniff-2.3.tar.gz) and overflow the translation table. The switch, not knowing how to handle the situation, "fails open." That is, it reverts to function as a hub and broadcasts all data frames to all ports. At this point, one of the more generic network sniffers can be used to capture traffic.

  • MAC address duplication All data frames on a LAN are switched or bridged based on the associated MAC addresses. Therefore, the ability to impersonate another host's MAC address can be exploited. That's just what the MAC address duplication hack does: It reconfigures your system to have the same MAC address as the machine whose traffic you're trying to sniff. This is easily done on many operating systems. On a Linux machine, for instance, the ifconfig eth0 hw ether 12:34:56:78:90:AB command can be used, where 12:34:56:78:90:AB is the desired MAC address. In a MAC address duplication attack, the switch is fooled into thinking two ports have the same MAC address, and it forwards data to both ports.

There are more ways (such as man-in-the-middle method via Address Resolution Protocol [ARP] spoofing) to sniff switched networks. The two methods discussed here simply serve as an introduction and provide a cautionary note that a switched environment is not immune to packet sniffing.

Wireless Security

In the past few years, wireless networking (IEEE 802.11x standards; http://grouper.ieee.org/groups/802/11/) has become popular for both business and home users. Many laptops available today come with a built-in wireless network card. Setting up your wireless clients is a cinch. It is almost effortless to get a wireless network up and running: no routing of cables behind your desks, through walls, or other tight spaces. And no hubs or switches are necessary. Unfortunately, such convenience comes with security concerns, of which many people are not readily aware; they are discussed next.

NOTE

Wireless LANs (WLANs) can be set up in one of two modes. In infrastructure mode (also known as a basic service set, or BSS), each client connects to a wireless access point (also frequently referred to simply as an access point, or AP). The AP is a self-contained hardware device that connects multiple wireless clients to an existing LAN. In ad hoc mode (or independent basic service set, IBSS), all clients are peers of one another, and no AP is needed. No matter which mode a WLAN operates in, the same security concerns discussed here apply.


NOTE

802.11x refers to a group of evolving WLAN standards that are under development as elements of the IEEE 802.11 family of specifications, but that have not yet been formally approved or deployed. 802.11x is also sometimes used as a generic term for any existing or proposed standard of the 802.11 family. Free downloads of all 802.11x specifications can be found at http://standards.ieee.org/getieee802/802.11.html.


LOCKING DOWN ACCESS

Wireless networks broadcast their data using radio waves (in the GHz frequency range), and unless you have a shielded building (like those depicted in Hollywood movies), you cannot physically restrict who can access your WLAN. The usable area depends on the characteristics of the office space. Thick walls degrade the signal to some extent, but depending on the location of the AP and the type and range of antenna it has, its signal may be picked up from outside the building, perhaps from up to a block or two away.

WARNING

With the popularity of home wireless networks, it is imperative that you take steps to secure your AP so strangers can't easily abuse your Internet connection. Given the typical range of an AP, someone could sit on the sidewalk next to your house and use your AP to surf the Net without your permission or, worse, commit a cyber crime, with your Internet connection as the "source," all without your knowledge!


Anyone with a wireless-enabled computer equipped with sniffer software that is within the range of your APs can see all the packets being sent to other wireless clients and can gain access to everything they make available. If the APs are acting in bridge mode, someone may even be able to sniff traffic between two wired machines on the LAN itself!

WARDRIVING AND WARCHALKING

One of the techniques employed by security professionals (the so-called "white hats") and would-be hackers (the so-called "black hats") alike to determine the boundary of a WLAN is called wardriving. First automated by Peter Shipley (http://www.dis.org/shipley) during the 1999-2000 period, wardriving is the process of gathering WLAN information (such as locations of APs and their Service Set IDs, or SSIDs) by listening for periodic AP broadcasts using a computer running a stumbling utility.

When you wardrive, you ride around in a car while running the stumbling utility on a laptop or even a PDA equipped with a wireless card, and you record beacons from nearby APs. Most stumbling software has the ability to add GPS location information to its log files so that exact geographical locations of stumbled APs can be plotted onto electronic maps. The most popular and well-known stumbling tool is a Windows program called Network Stumbler (often referred to simply as NetStumbler; http://www.netstumbler.com). NetStumbler can sniff for active wireless channels and note any open networks. A similar tool called Kismet (http://www.kismetwireless.net) is available for a number of operating systems, including Linux and Mac OS X.

The warchalking concept was conceived in June 2002 by a group of people who have taken to drawing chalk symbols on walls and pavements to indicate the presence of an AP along with its details (such as SSID) so others can easily locate it. Although the idea was intriguing, it failed to catch on in a big way, most likely due to three factors. First, chalk drawings are easily removed or modified, so any information they provide may not be accurate. Second, with the popularity of public WLANs (so-called Wi-Fi HotSpots), such as in airports, school campuses, hotels, and even some communities, there isn't much need to rely on chalk symbols to locate public APs; you can look them up easily online, for example, at http://www.wi-fihotspotlist.com. Lastly, in most cities, the use of chalk to mark APs is likely to incur the wrath of local authorities for violating antigraffiti laws.

The term war, which is used in wardriving, warchalking, and so on, was taken from the old hacking practice known as wardialing. Wardialing involved dialing every extension of a phone network until a number answered by a modem was found and recorded. With the introduction of WLANs, this practice has largely been replaced by wardriving.


One simple way of closing a WLAN to unauthorized systems is to configure your AP to allow connections only from specific wireless network cards. Like standard network cards, each wireless network card has a unique MAC address. Some APs allow you to specify a list of authorized MAC addresses. If a machine attempts to join the network with a listed MAC address, it can connect; otherwise, the request is silently ignored. This method is sometimes referred to as MAC address filtering.

WARNING

The vendor hard-codes the MAC address (part of which contains a vendor code) for each network card and thus guarantees its uniqueness. Depending on the operating system, however, the MAC address of the wireless card can be easily changed using something similar to the ifconfig command:

 sniffer # ifconfig wlan0 hw ether 12:34:56:78:90:AB 


If an intruder is determined to gain access to your WLAN, he can simply sniff the airwaves passively and log all MAC addresses that are in use. When one of them stops transmitting for a length of time (presumably disconnected from the WLAN), the intruder can then assume the identity of that MAC address and the AP will not know the difference.

Most APs available today allow the use of the optional 802.11 feature called shared key authentication. This feature helps prevent rogue wireless network cards from gaining access to the network. The authentication process is illustrated in Figure 13.1. When a client wants to connect to an AP, it first sends an authentication packet. The AP replies with a string of challenge text 128 bytes in length. The client must encrypt the challenge text with its shared key and send the encrypted version back to the AP. The AP decrypts the encrypted challenge text using the same shared key. If the decoded challenge text matches what was sent initially, a successful authentication is returned to the client and access is granted; otherwise, a negative authentication message is sent and access is denied.

Figure 13.1. IEEE 802.11 shared key authentication.


NOTE

Other than shared key authentication, 802.11 also provides for open system authentication. Open system authentication is null authentication (meaning no authentication at all). The client workstation can associate with any access point and listen to all data sent as plaintext. This type of authentication is usually implemented where ease of use matters more than security.


This shared key between the AP and a client is the same key used for Wired Equivalency Privacy (WEP) encryption, which is discussed in the next section.

NOTE

You can also employ IEEE 802.1x Extensible Authentication Protocol (EAP) with specific authentication methods such as EAP-TLS to provide mutual authentication mechanisms. However, such an implementation requires an authentication server, such as Remote Authentication Dial-In User Service (RADIUS), which is not very practical for home, small-, and medium-size businesses. An alternative is to use the preshared key authentication method available in Wi-Fi Protected Access (WPA) for infrastructure mode wireless networks. The WPA preshared key works in a similar manner as the WEP shared key method discussed previously. However, because of the way WPA works (discussed in the following section), the WPA preshared key is not subject to determination by collecting a large amount of encrypted data.


ENCRYPTING DATA TRAFFIC

Part of the IEEE 802.11 standard defines the Wired Equivalency Privacy (WEP) algorithm that uses RC4, a variable key-size stream cipher, to encrypt all transmissions between the AP and its clients. To use this feature, you must configure the AP to use WEP and create or randomly generate an encryption key, sometimes referred to as the network password or a secret key.

The key is usually expressed as a character string or hexadecimal numbers, and its length depends on the number of bits your hardware will support. At the time of this writing, APs support encryption keys ranging in size from 64 bits to 256 bits. The methodology that manufacturers employ for WEP encryption, however, is not universal and may differ from one vendor to another. For instance, for a 64-bit encryption key, most vendors use a 24-bit randomly generated internal key (known as an Initialization Vector, or IV) to trigger the encryption (thus leaving you with a 40-bit key), whereas others may use the full 64 bits for encryption.

NOTE

The 802.11b specification defined a 40-bit user-specified key. Combined with the 24-bit IV, this yields a 64-bit encryption key for WEP. Likewise, a 128-bit WEP uses a 104-bit key, and a 256-bit WEP uses a 232-bit key. This is why user-defined ASCII keys are only 5 bytes in size for 64-bit WEP, 13 bytes for 128-bit WEP, and 29 bytes for 256-bit WEP.


WEP works by using the encryption key as input to the RC4 cipher. RC4 creates an infinite pseudo-random stream of bytes. The endpoint (either the AP or a client) encrypts its data packet by performing a bitwise XOR (logical exclusive OR) operation, a simple and fast way of combining two values in a reversible fashion, with the latest section of the RC4 pseudo-random stream and sends it. Because the same encryption key is used at the AP and by the clients, the receiving device knows where it is in the RC4 stream and applies the XOR operation again to retrieve the original data.

If intercepted packets are all encrypted using the same keystream bytes, an attacker has a known cryptographic starting point for recovering the key used to generate the RC4 stream. That is the reason a 24-bit random value (the IV) is added to the user-supplied key: to ensure the same key is not used for multiple packets, thus making it more difficult (but not impossible) to recover the key. But because the IV is only 24 bits, eventually a previous value must be reused. By intercepting sufficient data packets, an attacker can crack the encryption key by "seeing" repeating RC4 data bytes. The general rule of thumb is that, depending on the key size, about 5 to 10 million encrypted packets provide sufficient information to recover the key. On a typical corporate network, this number of packets can be captured in less than a business day. If, with some luck (good or bad, depending on your perspective), many duplicate IVs are captured, the key may be cracked in less than an hour.
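
The reversibility of XOR, which both the encryption and the attack rely on, is easy to demonstrate with shell arithmetic. In this sketch, 195 stands in for one plaintext byte and 90 for the corresponding RC4 keystream byte:

 Athena> data=195; key=90
 Athena> cipher=$(( data ^ key )); echo $cipher      # the byte that actually travels over the air
 153
 Athena> echo $(( cipher ^ key ))                    # XOR with the same keystream byte recovers the plaintext
 195

An attacker who can deduce a keystream byte, for example from a reused IV, can recover the corresponding plaintext byte just as easily.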

There exist a few utilities that can recover the WEP key. WEPCrack (http://wepcrack.sourceforge.net) was the first publicly available WEP cracking tool. AirSnort (http://airsnort.shmoo.com) is another such utility. Although both have Linux versions, you will find AirSnort a lot easier to use because the latest version is now a GTK+ GUI application (see Figure 13.2).

Figure 13.2. AirSnort cracking a WEP key.


There are a few precautions you can take as deterrence against WEP-cracking, for example:

  • Pick a WEP key that is totally random, rather than some word that can be found in a dictionary (much like selecting a good login password, as discussed in Chapter 4). Although doing so will not prevent your key from being cracked eventually, it will at least force the attacker to wait until he has captured enough traffic to exploit the WEP weakness, rather than just attempting a dictionary attack against your key. Some vendors will generate a WEP key for you based on a passphrase you supply. (A quick way to generate a random key yourself is sketched after this list.)

  • Use the strongest key available on your hardware. If all your APs and wireless network cards support a 256-bit WEP key, by all means, use it. Although it will not defend against the shortcomings of WEP, a longer key will help deter casual attackers.

    NOTE

    Quite often, you cannot use the strongest key supported by the hardware because a handful of clients have an older wireless card that supports only 64-bit WEP keys. In such cases, you have to settle for a (weaker) 64-bit key because it is the lowest common denominator, or you might consider upgrading those clients to newer cards. Given the low cost of wireless network cards available today and the potential cost of having your network compromised, you can easily justify the upgrade.


  • Use different WEP keys for different APs. Doing this, however, may limit the mobility of your legitimate users within the building. On the other hand, some companies install a WLAN so they do not have to run cables. In such instances, varying WEP keys should not be a hindrance.

  • Although it may be administratively intensive, periodically changing the WEP key can serve to deter WEP key cracking. Depending on the frequency with which you change your key, unless the attacker uses the cracked key immediately, he may have to start all over again when he returns, thinking he has gained access to your APs from his previous visit. Depending on the client workstation configuration, you may be able to push out new WEP keys automatically and transparently.
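
As a quick way to generate the random key mentioned in the first item of this list, you can pull the required number of bytes from /dev/urandom and print them as hexadecimal digits. This sketch produces a 104-bit user key (26 hex digits) for 128-bit WEP; use 5 bytes for 64-bit WEP or 29 bytes for 256-bit WEP:

 Athena> head -c 13 /dev/urandom | od -An -tx1 | tr -d ' \n'; echo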

To address the shortcomings of WEP, the IEEE developed 802.11i, a new standard that specifies improvements to wireless LAN security. The 802.11i standard addresses many of the security issues of the original 802.11 standard. At the time of this writing, the new IEEE 802.11i standard is still being ratified, so wireless vendors have agreed on an interoperable interim standard known as Wi-Fi Protected Access (WPA).

With WPA, encryption is done using the Temporal Key Integrity Protocol (TKIP, originally named WEP2), which replaces WEP with a stronger encryption algorithm (but still uses RC4 ciphers). Unlike WEP, TKIP avoids key reuse by creating a temporary key using a 128-bit IV (instead of the quickly repeating 24-bit IV WEP uses), and the key is changed every 10,000 packets. It also adds in the MAC address of the client to the mix, such that different devices will never seed RC4 with the same key. Because TKIP keys are determined automatically, there is no need to configure an encryption key for WPA.

If your WLAN equipment supports WPA in addition to WEP, by all means, use WPA.

PROTECTING THE SSID

The Service Set ID (SSID) represents the name of a particular WLAN. Because WLAN uses radio waves that are transmitted using the broadcast method, it's possible for signals from two or more WLANs to overlap. The SSID is used to differentiate between different WLANs that are within range of each other.

NOTE

An SSID contains up to 32 alphanumeric characters and is case sensitive.


To connect to a specific WLAN, you need to know its SSID. Many APs broadcast the SSID by default (called a beacon) to make it easier for clients to locate the correct WLAN. Some wireless clients (such as Windows XP) detect these beacons and automatically configure the workstation's wireless configuration for transparent access to the nearest WLAN. However, this convenience is a double-edged sword because these beacons also make it easier for wardrivers to determine the SSID of your WLAN.

One of the steps to dissuade unauthorized users from accessing your WLAN is to secure your SSID. When setting up your WLAN, do not use the default SSID provided by the vendor. For instance, Cisco Aironet APs use autoinstall as the default SSID, some vendors use default for the default SSID, and other vendors simply use their company name as the default SSID, such as proxim. You can find many wireless vendors' default SSIDs and other default settings at http://www.cirt.net/cgi-bin/ssids.pl.

The SSID should be something not related to your name or company. Like your login password, the SSID should be something difficult to guess. Ideally, it should be some random, meaningless string.

CAUTION

Although it is convenient to use the default SSID, it would cause problems if a company or neighbor next to you set up a wireless LAN with the same vendor's access points and also used the default SSID. If neither of you implements some form of security, which is often the case in both homes and smaller companies (and sometimes even in large organizations where wireless technology is new), and you're both within range of each other, your wireless clients can mistakenly associate with your neighbor's access point, and vice versa.


CAUTION

Many APs allow a client using an SSID of any to connect, but this feature can generally be disabled. You should do so at your earliest convenience.


Unless you are running public APs (such as a community Wi-Fi HotSpot) where open connectivity is required, it is generally a good idea to disable SSID broadcasting if your AP has the feature, even though doing so does not totally prevent your SSID from being sniffed. Disabling SSID broadcasting (also known as SSID blinding) only prevents the AP from including the SSID in its Beacon frames. In response to a Probe Request, an AP's Probe Response frame still includes the SSID. Furthermore, clients will broadcast the SSID in their Association and Reassociation frames. Therefore, something like SSID Sniff (http://www.bastard.net/~kos/wifi) or Wellenreiter (see Figure 13.3; http://www.wellenreiter.net) can be used to discover a WLAN's SSID. Because of this, the SSID should never be considered a valid security "tool." However, it can serve as a small roadblock against casual snoopers and "script kiddies."

Figure 13.3. Wellenreiter's main window.


WHAT IS A SCRIPT KIDDIE?

From Webopedia.com, a script kiddie is "A person, normally someone who is not technologically sophisticated, who randomly seeks out a specific weakness over the Internet in order to gain root access to a system without really understanding what it is s/he is exploiting because the weakness was discovered by someone else. A script kiddie is not looking to target specific information or a specific company but rather uses knowledge of a vulnerability to scan the entire Internet for a victim that possesses that vulnerability."


There is one possible countermeasure that you can deploy against SSID snooping: Fake AP (http://www.blackalchemy.to/project/fakeap). As the saying goes, "the best hiding place is in plain sight." Fake AP can generate thousands of counterfeit Beacon frames (with different SSIDs) that essentially hide your actual network among the cloud of fake ones. Again, this is not a surefire solution, but it will definitely discourage amateurs and will require an experienced hacker to spend an extraordinary amount of time wading through the bogus beacons to guess at the real network.

CAUTION

You must be very careful when using Fake AP because you may unknowingly interfere with neighboring third-party WLANs, which could result in legal repercussions. Additionally, the extra traffic generated by Fake AP may decrease your network's available bandwidth.


ADDITIONAL WIRELESS LAN SECURITY TIPS

Beyond the steps already discussed, you can take some additional measures to secure your WLAN. The following list summarizes the basic precautions you should employ to protect your WLAN:

  • Change the default administration password on your APs (this includes the password for the web-based management interface).

  • Change the default SSID.

  • Disable the SSID broadcast, if possible.

  • Enable MAC address filtering.

  • Turn off the AP's automatic DHCP address assignment. This forces attackers to do extra work to get a valid IP address. If turning off this assignment is not feasible (especially if you have a fair number of users), restrict DHCP leases to the MAC addresses of your wireless clients only.

  • Use the highest (common) level of WEP/WPA supported by your hardware; upgrade old hardware to support 256-bit keys if possible.

  • Place your APs outside the corporate firewall. This helps to restrict intruders from accessing your internal network resources. You can configure the firewall to enable access from legitimate users based on MAC addresses.

  • Use a switch, not a hub, for connecting the AP to the wired network segment. This can help to reduce the possibility of all traffic being sniffed.

  • Encrypt your wireless traffic using a VPN, in addition to using WEP/WPA; use encryption protocols for applications where possible (TLS/HTTPS, ssh, and so on).

  • If the clients from the WLAN side need access to the Internet, use a proxy server (such as Squid, Novell Security Manager, or Novell BorderManager) with access control for outgoing requests.

  • Minimize radio wave propagation to nonuser areas. For instance, orient AP antennas or reduce transmission power, if possible, to avoid covering areas outside the physically controlled boundaries of your facility. By steering clear of public areas, such as parking lots, lobbies, and adjacent offices, you significantly reduce the ability of an intruder to participate on your WLAN. This also minimizes the impact of someone disrupting your WLAN with jamming or DoS techniques.

TIP

You can find a number of wireless intrusion detection systems (IDS) at http://www.zone-h.org/en/download/category=18.


As with fire drills, you should test your WLAN's security on a regular basis using the wardriving tools discussed earlier. It's better to find your own weaknesses (and fix them) than to find out about them secondhand from an intruder!


