In addition to hardening the Linux kernel itself, you must also secure any network services that expose your server to the outside world. In the early days of computing, systems were accessed via hardwired (dumb) terminals or remote job entry devices (such as punched card readers). With the advent of local area networks and the development of various TCP/IP-based applications, however, computers today are accessed mostly from intelligent workstations over high-speed networks (either local or remote), meaning client workstations and the server interact in a collaborative manner. As a result, server-side applications (generally referred to as services) must be robust and tolerant of errors or faulty data provided by the clients. At the same time, because the communication pathway between the clients and the server may cross publicly accessible devices (such as routers on the Internet), there is also a need to protect the data from being "sniffed." Some of these concepts and their applications have already been covered in previous chapters but are reviewed here in the context of securing network services. The following topics are discussed in this section:
Hardening Remote Services

As illustrated in Chapter 12, it is very easy to attach a data sniffer, such as Ethereal, to a network and passively capture transmissions between two devices, including user login names and passwords. The most troubling issue here is not the ease with which a sniffer can be employed to snoop on your data, but the fact that it can be done without your knowledge. When users communicate with your server from a remote location across public networks (such as the Internet or a wireless network), there are many (too many) junctions where a sniffer may be placed. Chances are remote that someone would specifically target you for a data snoop. However, it is not uncommon for ISPs to monitor traffic in their data centers so they can identify bottlenecks and dynamically reroute traffic as necessary. You have no way of knowing how this captured data will be used or what type of security safeguards it from unauthorized access. By the same token, even the network within your organization is not totally safe from snooping. For instance, an outside vendor may come in to troubleshoot one of its products and hook up a sniffer, or one of your summer interns may decide it's "fun" to see what goes on across the wire. Or if you have wireless networks in-house, they can be vulnerable (see the "Wireless Security" section later in this chapter). Therefore, it pays to be somewhat paranoid when you are accessing your server across a network, remote or otherwise.
If you are somewhat concerned about the confidentiality of your data as it traverses your internal network or travels across public networks, you should consider securing, at the very least, the following remote access services:
Although you can access all the preceding services securely over a VPN link, that is not always feasible because you need a VPN client installed on your workstation. There will be situations when you need to remotely access your server at work from some workstation that does not have a compatible VPN client. Therefore, it is best to secure the remote services themselves; when you can also use a VPN, you get double the security.

Limiting Rights of Services

Every process on a system is executed under the context of an account. If you sign on to a server and launch a text editor, in theory, the executable instance of the text editor has access to all the files and directories that you would normally have access to. This doesn't sound unreasonable and, in fact, for you to be able to save your file after the edit session, it is a requirement. If you extrapolate this situation to a typical multiuser system, things become fairly complicated. In addition to all the interactive sessions from users and their respective cron jobs, a system also runs a number of services. Each one of these tasks has access to portions of the system. As a system administrator, you need to understand the exposure presented by these processes. In the case of a simple text editor, little is exposed to damage other than the user's own files. If the user happens to be root, however, the default process privileges can have serious implications in the event of human error. The same situation arises for all processes. By default, user processes are fairly well contained: their access to the system is limited to their own account's environment. If a user were to run a program under his or her own credentials, any damage caused by coding deficiencies within the routine would be limited to the access rights of that user's account. For unprivileged users, the damage is restricted by the limitations of the account. The question is: what happens in the case of a privileged account?
There is simply no mechanism available for the operating system to know the difference between an event triggered by mistake and one initiated by design. If a coding error in an application triggers the deletion of files in a directory, there is little to prevent the event. The more privileged the account running the faulty application, the higher the potential damage. It is therefore imperative to ensure that each and every process running on a server executes with the most minimal privileges possible.

An additional layer of complexity results from placing your server on a network. So far, we have discussed deficiencies within a program running locally on a system and their possible impact. After a server is placed on a network, the services it presents are exposed beyond the confines of the local system. This greatly increases the possibility that a coding deficiency in any service can be exploited from sources external to the machine, without requiring local credentials. Many such exploits are common today. In some cases, remediated code is made available in short order. In most cases, however, there is a significant lapse of time between the discovery of a vulnerability and a patch. It is important to reflect on the fact that vulnerabilities are "discovered" and to understand that this implies they were present and dormant for an extended period of time. What happens to a system during the period leading up to the discovery is unknown. It is quite possible that in some cases the vulnerability was exploited by external sources for a significant amount of time before it was "discovered." Similarly, a system administrator should consider what can be done between being made aware of a possible problem and the availability of a patch.
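One way to take stock of this exposure is to enumerate the processes currently running with root privileges. The following is a minimal sketch, Linux-specific because it walks /proc directly rather than relying on a particular ps(1) output format:

```shell
# Count and list processes owned by uid 0 (root).
# Linux-specific sketch: reads /proc directly.
nroot=0
for d in /proc/[0-9]*; do
    # The second field of the "Uid:" line in /proc/PID/status is the real uid.
    uid=$(awk '/^Uid:/ {print $2; exit}' "$d/status" 2>/dev/null)
    if [ "$uid" = "0" ]; then
        nroot=$((nroot + 1))
        printf '%s %s\n' "${d#/proc/}" "$(cat "$d/comm" 2>/dev/null)"
    fi
done
echo "processes running as root: $nroot"
```

Each entry in the resulting list is a candidate for review: does that process really need root, or can it be moved to a dedicated, unprivileged service account?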
One of the most important factors in reducing exposure to a vulnerability is to contain services within accounts with minimal privileges. Recent versions of Linux, including SLES, are configured this way by default. A review of the accounts in /etc/passwd reveals individual accounts for running most services. An account such as sshd, for example, is used only to provide ssh services to the server; it is a local account on the machine and, because it has no valid login shell, it cannot be used to log in locally. This is in sharp contrast to the Telnet service available through xinetd. Though disabled by default in /etc/xinetd.d/telnetd, it can be enabled by simply changing the appropriate flag. When initiated at xinetd startup, the resulting service will, by default, run under the root account. If a vulnerability in Telnet were to be discovered and exploited, the access privilege granted to the attacker would be equivalent to root.

An additional group of processes that should be examined is those that run under cron. In most cases, individuals run housekeeping jobs under cron. These tasks run in their user context, and few system-wide concerns are involved. You should closely scrutinize cron jobs that run under accounts that possess elevated privileges. Consider, for example, a script used to back up a special application. In some cases, a client group may require pre- and post-backup steps to be performed. You may be approached to run a customized script: /home/accounting/close_day.sh before the backup and /home/accounting/open_day.sh when the backups are done. Though this point may be obvious, these scripts must be moved to a location that is not user-modifiable and audited for content before they are included in the nightly backup processes. If they are simply called by the nightly cron job, they will be executed in the same context as the backup process.
If they are left in a location where they can be changed, there is little to prevent the introduction of a script error that kills the nightly backups. In the worst-case scenario, the scripts could be used by someone to gain backup-level privileged access to the system. All processes running on a system can have a direct impact on the overall health of the server. Application bugs and vulnerabilities are a direct threat to the information and resources provided by the server. Because these vulnerabilities are present in the applications themselves, local firewall policies cannot be applied to mitigate this threat. It is therefore imperative to scrutinize the accounts under which processes are run to evaluate a server's exposure from both internal and external sources.

Using chroot Jails and User Mode Linux

In the preceding section, we examined the importance of minimizing the impact unknown vulnerabilities can have on a server. This was done by restricting access to system resources through the selection of appropriate accounts for each process. In this section, we examine two additional methods of further restricting system exposure. Both chroot jails and User Mode Linux (UML) are containment techniques. These methods provide a closed environment within which an application runs, segregating it from the rest of the system. Both chroot and UML provide such environments, but their approaches are vastly different.

chroot

A chroot jail is based on the concept of moving a process out of the original system environment and isolating it within a separate, parallel environment. As the process is initiated, it is provided with an alternate environment within which it will run. This environment is made up of a complete directory structure that mimics the standard file system. Before you can port an application into the jail, you need to know the resources the application requires to run.
In the case of a statically linked executable, the list of extra files needed could be very short. If, however, the application to be run requires access to a number of libraries, things can get quite complex. In the case of a chroot'ed web service, additional applications such as Perl or PHP may need to be placed in the target tree. To create a chroot file structure, you must perform the following:
The creation of a complete chroot environment for any application is a complex task. The most difficult step is collecting all the necessary library routines required by the application. After you have replicated the tree structure, you can launch the application with the command

Athena> chroot /newtree command

where /newtree is the directory specification for the directory structure created previously. This will now be used as the / directory for the application instance. The command parameter is simply the application that you want to run within the chroot environment.

USER MODE LINUX (UML)

The second method of creating a segregated environment is using User Mode Linux. UML's approach to segregating environments is to create a virtual machine within which the new application is run. This new virtual machine is, in all respects, a separate Linux installation. Though YaST provides support for the initial portions of the installation, a number of additional steps are required to finalize the configuration. As a separate machine, it needs to have the required software installed. Unlike the chroot installation, where directories can be copied, the UML machine instance requires an actual install. Similarly, all account management functions and system hardening are required on the new system as well. When UML is launched, it loads its own copy of the Linux kernel within the context of the account used to start the service. This provides a complete Linux system to the application. This system is functionally independent of the original system and acts quite literally as a separate machine. The disks provided for the virtual machine are, in fact, large files in the file system of the parent machine. They can be mounted and modified as normal disks on the parent until they contain what is required for the application. Once configured, UML is launched and the new machine takes over the delivery of the service.
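The disk-image step just described can be sketched as follows. This is illustrative only: the image filename root_fs and the mount point are assumptions, and the mount/umount commands require root privileges, so the script merely prints the commands rather than executing them:

```shell
# Sketch: modifying a UML disk image from the parent ("host") system.
# root_fs and /mnt/uml are hypothetical names; remove the echoes and run
# as root to perform the steps for real.
IMG=root_fs
MNT=/mnt/uml
echo "mkdir -p $MNT"
echo "mount -o loop $IMG $MNT"        # attach the image file via a loop device
echo "cp /etc/resolv.conf $MNT/etc/"  # example tweak while the image is mounted
echo "umount $MNT"                    # detach before booting the UML instance
```

The loop mount is what lets the parent treat the large image file as an ordinary disk, exactly as the text describes.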
The virtual machine appears on the network as a separate entity with its own TCP/IP address, accounts, and services. Both techniques for providing a segregated application environment are nontrivial. They do, however, provide significant isolation of the service being offered and therefore help protect the remainder of the system. The more complex of the two techniques appears to be the chroot path. Though a number of resources are available on the Internet to help configure such environments, finding all the minutiae required for an application can be quite tedious. Once completed, however, chroot does provide a minimal environment within which an application can run. If a vulnerability is found and exploited, this minimal environment will not provide any extraneous utilities that would be an advantage to an attacker. The UML approach provides a complete system environment and therefore requires more diligence in removing applications installed by default. The level of segregation, however, is almost complete and does not allow for any access to the original system's resources. One of the most significant advantages of the UML approach is the capacity for running a different operating system within each virtual machine. This allows legacy systems requiring older, more vulnerable operating system flavors to be isolated. Also, third-party software that is no longer supported, or software that requires specific runtime environments, can be hosted virtually until it can be replaced.

NOTE The concept of creating a UML environment is to separate the hosted application into a separate virtual machine. This is a good thing. It does, however, imply that each UML environment requires individual system management. Hardening, tuning, and account management must be done on each just as if they were physically separate machines.

Packet Filtering Using iptables

Historically, servers were placed on internal networks with little worry about exploits.
Because corporations had minimal or no direct access to the Internet, there were few chances of compromise. As time marched on, more and more companies became Internet accessible. Today, most companies allow Internet access all the way down to desktop devices. The increase in access has been fueled by business demands, both in terms of internal resources requiring access to information and an outward-facing presence for marketing purposes. To protect Internet-facing machines, companies have employed firewalls. A firewall is a device that can be used to restrict network traffic. Restrictions can be placed on the source, destination, or type of traffic involved. In many instances, firewall appliances are used to segregate corporations from the Internet. These appliances often contain proprietary operating systems and command sets. Others use embedded versions of operating systems such as Linux. These types of firewalls are called edge devices. Though SLES can be configured to run as an edge firewall, we focus more on the individual server-side implementation. SLES can be used to implement a local system-level firewall. The firewall configuration can be controlled through YaST. Though YaST only scratches the surface of the potential of the SLES firewall, it is sufficient for most server-side needs. YaST serves as a tool for modifying what are known as iptables rules. iptables is the administration tool used to control the firewall configuration. In its simplest form, iptables recognizes the following:
Combining the different permutations and combinations of these attributes generates a list of rules. These rules govern what happens to a network packet destined for a target server. When rules are applied to a packet, the process is known as packet filtering. Under YaST, selecting Users and Security presents the option for configuring the firewall. YaST understands most of the typical business applications that run on an SLES server:
YaST also provides for a more granular approach by allowing individual ports to be opened for less frequently used applications. In the preceding list, protocols such as DNS (port 53), Kerberos (port 88), and LDAP (port 389) are missing. If the firewall is enabled on a server and these protocols are required, manual adjustments must be made. The Additional Services section of the firewall configuration accommodates these requirements. In addition, third-party software installed on servers can also be made available through the firewall. A simple example of a port rule could allow port 3306 (MySQL) to traverse the firewall. The resulting entry in the iptables rule listing would look like this:

ACCEPT tcp -- anywhere anywhere state NEW,RELATED,ESTABLISHED tcp dpt:mysql

This example was generated using the YaST tool by specifying, in Additional Services, that port 3306 be available for access to external machines. This entry allows traffic for MySQL to reach the local host. In the INPUT stream of packets, this rule will be interpreted as shown in Table 13.1.
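For comparison, a roughly equivalent rule can be added by hand with the iptables command itself. This is a sketch rather than what YaST emits verbatim (the generated rule set uses its own custom chains), and applying it requires root, so the script only assembles and prints the command:

```shell
# Build the iptables invocation corresponding to the MySQL entry above.
# Printed rather than executed: applying it requires root privileges.
rule="iptables -A INPUT -p tcp --dport 3306 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT"
echo "$rule"
```

The -m state match is what produces the NEW,RELATED,ESTABLISHED portion of the listed rule, and -j ACCEPT is the target shown in the first column.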
Though the syntax and order of the rules can become quite complex, YaST provides a simple, more intuitive interface. Placing firewalls on local servers above and beyond the edge firewalls might appear to be a waste of resources. Many Denial of Service attacks, however, come from internal sources. Viruses, worms, and the like propagate from machine to machine using known vulnerabilities. Though normally a workstation problem, these pests often affect the performance of servers. With a properly tuned firewall, requests for services from unauthorized clients can be eliminated from consideration at the firewall level. This removes the burden of processing from the application level. Consider, for example, the following two cases:
In a server environment, it is imperative to allow traffic only for those services that should be available and, again, only to those clients that require the access. Restricting access to other network-aware applications that might co-reside on the server will further reduce the machine's target profile. An additional benefit of applying filtering rules is that spurious traffic from unauthorized sources never reaches the intended service. This protects the exposed application from possible hacks while reducing the amount of processing time lost to unsolicited connection attempts.

Hardening Your Physical Network Infrastructure

Proper system hardening practice should include securing your networking hardware from being attacked or hijacked for other uses. For instance, your server is useless if your remote users can't reach it over the WAN link because some intruder hijacked your router and changed its configuration. Similarly, if the access infrastructure is not secure and the traffic easily snooped, your confidential company data can be easily stolen or compromised. Most network administrators are familiar with the concept of using firewalls to block undesired network traffic in and out of a network. However, many have not given much thought to securing the physical aspects of the network, namely the underlying hardware. The following sections cover a few topics related to hardening your physical networking environment.

PHYSICAL SECURITY

Probably the foremost consideration in securing your networking environment is securing physical access to equipment such as wiring racks, hubs, switches, and routers. Most of the time, such equipment is in a wiring closet located strategically behind closed and locked doors. Unlike in the movies, hackers tapping into your network via available ports on the wiring hubs and switches are rare.
It is actually much easier to simply use an available network plug found in one of the empty offices, meeting areas, or conference rooms. Or the attack may even be launched from outside your premises if you have wireless access points installed! (See the "Wireless Security" section later in this chapter.) NOTE A primer on various common networking devices, such as switches and routers, can be found at http://www.practicallynetworked.com/networking/bridge_types.htm. Following are some ideas to ponder when you are implementing physical security for your networking infrastructure:
DEFAULT PASSWORDS AND SNMP COMMUNITY NAMES

Many manageable devices, such as routers, switches, and hubs, are shipped with factory-set default passwords, and some are shipped without a password at all. If you fail to change these passwords, an attacker can easily access your device remotely over the network and cause havoc. For instance, Cisco routers are popular, and many corporations use them to connect to the Internet. An attacker can easily use something like Cisco Scanner (http://www.securityfocus.com/tools?platid=-1&cat=1&offset=130) to look for any Cisco routers whose default password of cisco has not yet been changed. After locating such a router, the hacker can use it as a launching point to attack your network or others (while the finger points at your router as the source). TIP You can find many default user IDs and passwords for vendor products at http://www.cirt.net/cgi-bin/passwd.pl. Manageable devices can normally be accessed in a number of different ways: for example, via a console port, Telnet, Simple Network Management Protocol (SNMP), or even a web interface. Each of these access routes is assigned either a default password or none at all. Therefore, you need to change all of them to secure your device. If you don't, a hacker can get in via one of the unsecured methods and reset the passwords. Consider this scenario: Your router can be remotely configured either via a Telnet session or via SNMP set commands. To be able to manage the router remotely from your office or home, you dutifully changed the default Telnet access password. However, because you don't deal much with SNMP, you left that alone. A hacker stumbled across your router, found out its make, and looked up the default username and password. He tried to gain access through Telnet but found the default password didn't work.
But knowing that the router can also be configured via SNMP and that the default read-write community name is private, the attacker can change the configuration of the router, reset the Telnet password to anything he wishes, and lock you out at the same time, all by using a simple SNMP management utility. Before putting any networking devices into production, first change all their default passwords, following standards of good practice by setting strong passwords (see Chapter 4, "User and Group Administration," and Chapter 11, "Network Security Concepts"). Furthermore, you should disable any unused remote access methods if possible.

SNIFFING SWITCHES

A switch handles data frames in a point-to-point manner. That is, frames from Node A to Node B are sent across only the circuits in the switch that are necessary to complete a (virtual) connection between Node A and Node B, while the other nodes connected to the same switch do not see that traffic. Consequently, it is the general belief that data sniffing in a switched environment is possible only via the "monitor port" (where all internal traffic is passed) on the switch, if it has one. However, studies have revealed that several methods are available to sniff switched networks. Following are two of these methods:
There are more ways (such as the man-in-the-middle method via Address Resolution Protocol [ARP] spoofing) to sniff switched networks. The two methods discussed here simply serve as an introduction and provide a cautionary note that a switched environment is not immune to packet sniffing.

Wireless Security

In the past few years, wireless networking (IEEE 802.11x standards; http://grouper.ieee.org/groups/802/11/) has become popular for both business and home users. Many laptops available today come with a built-in wireless network card. Setting up your wireless clients is a cinch. It is almost effortless to get a wireless network up and running: no routing of cables behind your desks, through walls or other tight spaces, and no hubs or switches are necessary. Unfortunately, such convenience comes with security concerns, which many people are not readily aware of; they are discussed next. NOTE Wireless LANs (WLANs) can be set up in one of two modes. In infrastructure mode (also known as a basic service set, or BSS), each client connects to a wireless access point (also frequently referred to simply as an access point, or AP). The AP is a self-contained hardware device that connects multiple wireless clients to an existing LAN. In ad hoc mode (or independent basic service set, IBSS), all clients are peers of each other without needing an AP. No matter which mode a WLAN operates in, the same security concerns discussed here apply. NOTE 802.11x refers to a group of evolving WLAN standards that are under development as elements of the IEEE 802.11 family of specifications but have not yet been formally approved or deployed. 802.11x is also sometimes used as a generic term for any existing or proposed standard of the 802.11 family. Free downloads of all 802.11x specifications can be found at http://standards.ieee.org/getieee802/802.11.html.
LOCKING DOWN ACCESS

Wireless networks broadcast their data using radio waves (in the GHz frequency range), and unless you have a shielded building (like those depicted in Hollywood movies), you cannot physically restrict who can access your WLAN. The usable area depends on the characteristics of the office space. Thick walls degrade the signal to some extent, but depending on the location of the AP and the type and range of antenna it has, its signal may be picked up from outside the building, perhaps from up to a block or two away. WARNING With the popularity of home wireless networks, it is imperative that you take steps to secure your AP so strangers can't easily abuse your Internet connection. Given the typical range of an AP, someone could be sitting on the sidewalk next to your house and use your AP to surf the Net without your permission or, worse, commit a cyber crime, with your Internet connection as the "source," all without your knowledge! Anyone with a wireless-enabled computer equipped with sniffer software who is within range of your APs can see all the packets being sent to other wireless clients and can gain access to everything they make available. If the APs are acting in bridge mode, someone may even be able to sniff traffic between two wired machines on the LAN itself!
One simple way of closing a WLAN to unauthorized systems is to configure your AP to allow connections only from specific wireless network cards. Like standard network cards, each wireless network card has a unique MAC address. Some APs allow you to specify a list of authorized MAC addresses. If a machine attempts to join the network with a listed MAC address, it can connect; otherwise, the request is silently ignored. This method is sometimes referred to as MAC address filtering. WARNING The vendor hard-codes the MAC address (part of which contains a vendor code) for each network card and thus guarantees its uniqueness. Depending on the operating system, however, the MAC address of the wireless card can be easily changed using something similar to the following ifconfig command: sniffer # ifconfig wlan0 hw ether 12:34:56:78:90:AB If an intruder is determined to gain access to your WLAN, he can simply sniff the airwaves passively and log all MAC addresses that are in use. When one of them stops transmitting for a length of time (presumably disconnected from the WLAN), the intruder can then assume the identity of that MAC address, and the AP will not know the difference. Most APs available today allow the use of the optional 802.11 feature called shared key authentication. This feature helps prevent rogue wireless network cards from gaining access to the network. The authentication process is illustrated in Figure 13.1. When a client wants to connect to an AP, it first sends an authentication packet. The AP replies with a string of challenge text 128 bytes in length. The client must encrypt the challenge text with its shared key and send the encrypted version back to the AP. The AP decrypts the encrypted challenge text using the same shared key. If the decoded challenge text matches what was sent initially, a successful authentication is returned to the client and access is granted; otherwise, a negative authentication message is sent and access is denied. Figure 13.1.
IEEE 802.11 shared key authentication.

NOTE Other than shared key authentication, 802.11 also provides for open system authentication. Open system authentication is null authentication (meaning no authentication at all). The client workstation can associate with any access point and listen to all data sent as plaintext. This type of authentication is usually implemented where ease of use is the main concern. This shared key between the AP and a client is the same key used for Wired Equivalent Privacy (WEP) encryption, which is discussed in the next section. NOTE You can also employ the IEEE 802.1x Extensible Authentication Protocol (EAP) with specific authentication methods such as EAP-TLS to provide mutual authentication. However, such an implementation requires an authentication server, such as Remote Authentication Dial-In User Service (RADIUS), which is not very practical for home, small, and medium-size businesses. An alternative is to use the preshared key authentication method available in Wi-Fi Protected Access (WPA) for infrastructure mode wireless networks. The WPA preshared key works in a similar manner to the WEP shared key method discussed previously. However, because of the way WPA works (discussed in the following section), the WPA preshared key is not subject to determination by collecting a large amount of encrypted data.

ENCRYPTING DATA TRAFFIC

Part of the IEEE 802.11 standard defines the Wired Equivalent Privacy (WEP) algorithm, which uses RC4, a variable key-size stream cipher, to encrypt all transmissions between the AP and its clients. To use this feature, you must configure the AP to use WEP and create or randomly generate an encryption key, sometimes referred to as the network password or a secret key. The key is usually expressed as a character string or hexadecimal numbers, and its length depends on the number of bits your hardware will support.
At the time of this writing, APs support encryption keys ranging in size from 64 bits to 256 bits. The methodology that manufacturers employ for WEP encryption, however, is not universal and may differ from one vendor to another. For instance, for a 64-bit encryption key, most vendors use a 24-bit randomly generated internal value (known as an Initialization Vector, or IV) as part of the key (thus leaving you with a 40-bit user key), whereas others may use the full 64 bits for encryption. NOTE The 802.11b specification defined a 40-bit user-specified key. Combined with the 24-bit IV, this yields a 64-bit encryption key for WEP. Likewise, 128-bit WEP uses a 104-bit key, and 256-bit WEP uses a 232-bit key. This is why user-defined ASCII keys are only 5 bytes in size for 64-bit WEP, 13 bytes for 128-bit WEP, and 29 bytes for 256-bit WEP. WEP works by using the encryption key as input to the RC4 cipher, which creates an effectively endless pseudo-random stream of bytes. The endpoint (either the AP or a client) encrypts its data packet by performing a bitwise XOR (logical exclusive OR) operation, a simple and fast method for combining two values in a reversible fashion, with the latest section of the RC4 pseudo-random stream, and sends the result. Because the same encryption key is used at the AP and by the clients, the receiving device knows where it is in the RC4 stream and applies the XOR operation again to retrieve the original data. If intercepted packets are all encrypted using the same keystream bytes, this provides a known cryptographic starting point for recovering the key used to generate the RC4 stream. That is the reason a 24-bit random value (the IV) is added to the user-supplied key: to ensure the same key is not used for multiple packets, thus making it more difficult (but not impossible) to recover the key. But because the IV is only 24 bits, eventually a previous value must be reused.
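The danger of IV (and therefore keystream) reuse can be shown with a toy calculation using single bytes and a made-up keystream value, not real RC4: XOR-ing two ciphertexts that were produced with the same keystream cancels the keystream completely, leaving the XOR of the two plaintexts, which is the attacker's foothold.

```shell
# Two plaintext bytes from different packets, encrypted with the SAME
# pretend keystream byte (i.e., a repeated IV). All values are arbitrary.
ks=151            # hypothetical RC4 keystream byte
p1=73 p2=65       # plaintext bytes: 'I' and 'A'
c1=$((p1 ^ ks))   # ciphertext bytes as transmitted
c2=$((p2 ^ ks))
# The keystream cancels out: c1 XOR c2 equals p1 XOR p2 (here, 8 and 8).
echo "$((c1 ^ c2)) $((p1 ^ p2))"
```

With only 2^24 (about 16.7 million) possible IVs, such collisions are unavoidable on any busy network, which is precisely what WEP-cracking tools exploit.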
By intercepting sufficient data packets, an attacker could crack the encryption key by "seeing" repeating RC4 data bytes. The general rule of thumb is that, depending on the key size, about 5 to 10 million encrypted packets provide sufficient information to recover the key. On a typical corporate network, this number of packets can be captured in less than a business day. If, with some luck (good or bad, depending on your perspective), many duplicate IVs are captured, the key may be cracked in less than an hour. A few utilities exist that can recover the WEP key. WEPCrack (http://wepcrack.sourceforge.net) was the first publicly available WEP cracking tool. AirSnort (http://airsnort.shmoo.com) is another such utility. Although both have Linux versions, you will find AirSnort a lot easier to use because the latest version is a GTK+ GUI application (see Figure 13.2).

Figure 13.2. AirSnort cracking a WEP key.

There are a few precautions you can take as deterrents against WEP cracking, for example:
Recognizing the shortcomings of WEP, IEEE 802.11i is a new standard that specifies improvements to wireless LAN security and addresses many of the security issues of the original 802.11 standard. While at the time of this writing the new IEEE 802.11i standard is still being ratified, wireless vendors have agreed on an interoperable interim standard known as Wi-Fi Protected Access (WPA). With WPA, encryption is done using the Temporal Key Integrity Protocol (TKIP, originally named WEP2), which replaces WEP with a stronger encryption scheme (though it still uses the RC4 cipher). Unlike WEP, TKIP avoids key reuse by creating a temporary key using a 128-bit IV (instead of the quickly repeating 24-bit IV WEP uses), and the key is changed every 10,000 packets. It also adds the MAC address of the client to the mix, so that different devices will never seed RC4 with the same key. Because TKIP keys are determined automatically, there is no need to configure an encryption key for WPA. If your WLAN equipment supports WPA in addition to WEP, by all means use WPA.

PROTECTING THE SSID

The Service Set ID (SSID) represents the name of a particular WLAN. Because a WLAN uses radio waves that are transmitted using the broadcast method, it's possible for signals from two or more WLANs to overlap; the SSID is used to differentiate between WLANs that are within range of each other.

NOTE An SSID contains up to 32 alphanumeric characters and is case sensitive.

To connect to a specific WLAN, you need to know its SSID. Many APs broadcast the SSID by default (called a beacon) to make it easier for clients to locate the correct WLAN. Some wireless clients (such as Windows XP) detect these beacons and automatically configure the workstation's wireless settings for transparent access to the nearest WLAN. However, this convenience is a double-edged sword, because these beacons also make it easier for wardrivers to determine the SSID of your WLAN.
One of the steps to dissuade unauthorized users from accessing your WLAN is to secure your SSID. When setting up your WLAN, do not use the default SSID provided by the vendor. For instance, Cisco Aironet APs use autoinstall as the default SSID, some vendors use default for the default SSID, while some other vendors simply use their company name as the default SSID, such as proxim. You can find many wireless vendors' default SSIDs and other default settings at http://www.cirt.net/cgi-bin/ssids.pl. The SSID should be something not related to your name or company. Like your login password, the SSID should be something difficult to guess; ideally, it should be a random, meaningless string.

CAUTION Although it is convenient to use the default SSID, it can cause problems if a company or neighbor next to you sets up a wireless LAN with the same vendor's access points and also uses the default SSID. If neither of you implements some form of security, which is often the case in homes and smaller companies (and sometimes even in large organizations where wireless technologies are new), and you're both within range of each other, your wireless clients can mistakenly associate with your neighbor's access point, and vice versa.

CAUTION Many APs allow a client using the SSID "any" to connect, but this feature can generally be disabled. You should do so at your earliest convenience.

Unless you are running public APs (such as a community Wi-Fi HotSpot) where open connectivity is required, if your AP has the feature, it is generally a good idea to disable SSID broadcasting, even though it doesn't totally prevent your SSID from being sniffed. Disabling SSID broadcasting (also known as SSID blinding) only prevents the AP from broadcasting the SSID via the Beacon and Probe Request packets; in response to a Probe Request, an AP's Probe Response frame still includes the SSID.
Furthermore, clients will broadcast the SSID in their Association and Reassociation frames. Therefore, a tool such as SSID Sniff (http://www.bastard.net/~kos/wifi) or Wellenreiter (see Figure 13.3; http://www.wellenreiter.net) can be used to discover a WLAN's SSID. Because of this, the SSID should never be considered a valid security "tool." However, it can serve as a small roadblock against casual snoopers and "script kiddies."

Figure 13.3. Wellenreiter's main window.
There is one possible countermeasure that you can deploy against SSID snooping: Fake AP (http://www.blackalchemy.to/project/fakeap). As the saying goes, "the best hiding place is in plain sight." Fake AP can generate thousands of counterfeit Beacon frames (with different SSIDs) that essentially hide your actual network among a cloud of fake ones. Again, this is not a surefire solution, but it will definitely discourage amateurs and will require an experienced hacker to spend an extraordinary amount of time wading through the bogus beacons to find the real network.

CAUTION You must be very careful when using Fake AP because you may unknowingly interfere with neighboring third-party WLANs, which could result in legal repercussions. Additionally, the extra traffic generated by Fake AP may decrease your network's available bandwidth.

ADDITIONAL WIRELESS LAN SECURITY TIPS

Other than the steps already discussed, you can take some additional steps to secure your WLAN. The following list summarizes the basic precautionary measures you should employ to protect your WLAN:
TIP You can find a number of wireless intrusion detection systems (IDS) at http://www.zone-h.org/en/download/category=18.

As with fire drills, you should test your WLAN's security on a regular basis using the wardriving tools discussed earlier. It's better to find your own weaknesses (and fix them) than to find out secondhand from an intruder!