12.1 General Server Security Guidelines

   

There are two goals in the security process of any server: allow authorized users to access the information they need, while preventing unauthorized users from gaining information they should not have. These goals can seem to be almost polar opposites; an administrator has to let a user access his or her files while, at the same time, preventing an attacker from accessing them. Considering that an attacker may be another employee who does have legitimate access to the server, it is easy to understand why server administrators are sometimes grumpy.

12.1.1 Server Construction

The first place to start with server security is the server itself. Remember, redundancy, scalability, and availability are critical components of security. A server should be constructed with these features in mind.

Most large organizations do not build their own servers, relying instead on servers from companies such as Dell, IBM or Hewlett-Packard. Fortunately, these companies will allow organizations to configure servers to fit their needs. Take advantage of this by selecting equipment that is designed for availability, and configuring as much redundancy into the system as practical.

Server components most likely to fail are those that have moving parts: power supplies, hard drives, fans, CD-ROMs, and floppy drives. Power supplies, hard drives, and fans are crucial to a functioning server; CD-ROMs and floppy drives are not as critical, as the server can generally run without a working CD-ROM or floppy drive until it can be replaced.

To ensure continuous operation, all servers on the network should be equipped with dual power supplies. Both power supplies should be plugged in. The server will use only the primary power supply, switching to the secondary if the primary fails (the reason for keeping both power supplies plugged in).

Power failures are especially dangerous, not only because they take the server offline, but also because some server operating systems are particularly sensitive to data corruption caused by an incorrect shutdown. Databases and other applications that continuously write to disk are also vulnerable to corruption when the server is shut down incorrectly.

Failover from one power supply to another should be instantaneous, meaning there is no interruption in service if the power supply fails. One thing to bear in mind is that it is not enough just to have the dual power supplies. The server has to have a way of notifying administrators when the power supply fails. If there is no notification, then there is no way to know that the power supply needs to be replaced. Server vendors should provide information about the notification process; if not, ask.

NOTE

Many servers do not have power buttons. This is to prevent an unknowing administrator from inadvertently destroying a server by using the power button to turn the server off. Unfortunately, this does not stop the same novice administrator from pulling the power cord, which is yet another reason for dual power supplies.


Hard drives are equally important to server availability. After all, the reason for the server's existence is the information located on the server. The information on the server is crucial to the existence of an organization, and should be protected as such. Hardware experts recommend Small Computer System Interface (SCSI) hard drives over Integrated Drive Electronics (IDE) hard drives. SCSI drives are faster, at 10,000 to 15,000 revolutions per minute, than IDE drives, at 7,200 to 10,000 revolutions per minute, and SCSI drives are designed to last longer, with a longer mean time between failures (MTBF). Some SCSI manufacturers claim up to 1.2 million hours MTBF for their drives, while IDE drives generally have claims in the 100,000-hour MTBF range.

In addition to using quality hard drives, the drives should be deployed in a redundant fashion. This is accomplished using a redundant array of independent disks (RAID). The most common RAID configuration is mirroring, known as RAID 1: all data written to Disk 1 is also written to Disk 2. A second type of RAID configuration is RAID 0, which treats a string of disks as a single large disk. This increases the amount of storage available, but does not provide data redundancy.

The third type of RAID configuration is RAID 5. In this configuration, data and parity information are striped across multiple disks, giving the best of both worlds: it allows an administrator to create a single large storage area from several smaller disks, and it still provides data redundancy.
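
On the branded servers discussed above, RAID is normally configured through the vendor's hardware controller, but the same concepts can be illustrated with Linux software RAID. The following is a minimal sketch using mdadm; the device names are assumptions and will differ on a real server.

    # Hypothetical two-disk RAID 1 mirror (device names are assumptions).
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Hypothetical three-disk RAID 5 set: striping with distributed parity.
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdd /dev/sde /dev/sdf

    # Check array health; a failed member disk will be flagged here.
    cat /proc/mdstat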

SCSI drives arranged in a RAID configuration often run very hot, so they require additional cooling, and sometimes even a separate case for the drives. Server cooling is usually handled through small fans placed throughout the case to optimize the cooling effect. Server fans have small motors, which can fail. Server cases usually have four or more fans placed in the server; if one fails, the others will continue to cool the inside of the case.

Unfortunately, because the motors in the fans are so small, they do not usually have a way of alerting the system when there is a failure. Instead, many server motherboards now include thermostats. If the temperature inside the case starts to rise, an alert is generated, and the administrator of the server can take a look during the next maintenance window (unless the temperature gets too high, in which case the server may shut down).

12.1.2 Server Placement

Where the server is placed is just as important as its components. Two aspects of server placement have to be considered:

  1. Physical location

  2. Network placement

With regard to physical placement, the ideal location would be a data center environment. Servers should be in a separate room, which is locked at all times, and to which only certain employees have access. Servers should be rack mountable, and they should be fully mounted. That sounds obvious, but many times rack-mountable servers are simply stacked on top of each other within the rack. This makes it very easy for anyone who has access to the server room to walk off with a server. [2] In addition to improving physical security, rack mounting servers increases availability. Computer racks are generally designed with air circulation in mind. If the servers are properly mounted, it will be easier for air to circulate through the rack and help keep the servers cool.

[2] If you doubt this, try unmounting a server that is fully mounted in less than 10 minutes.

The room should have a separate cooling and filtering unit. The HVAC unit should recycle the air at least every five minutes to keep dust and other contaminants out of the room. Dust is the biggest small-scale natural threat to computers; if dust begins to collect in the servers, it can degrade performance and decrease life expectancy. The temperature of the data center should hover around 70 degrees Fahrenheit (21 degrees Celsius).

The room should also be equipped with FM200 fire suppression conduits, rather than traditional sprinkler systems. FM200, chemically known as heptafluoropropane, is a gaseous fire suppressant that can, unlike water, douse fires that occur in the data center without necessarily destroying the servers, or the data on the servers. Fires that occur as a result of overheated computers are extremely rare, but still possible, so good fire suppression is necessary for a data center environment.

Finally, the data center should have some form of backup power. The most common form of backup power is an uninterruptible power supply (UPS). APC and Liebert are the two manufacturers most often associated with data-center-wide UPS systems.

The two most important considerations when deciding on a backup power supply are the amount of power needed and the length of time the data center needs to stay up. The amount of power needed is based on the number of network devices in the data center and the amount of power they consume. The length of time that the data center needs to remain online will depend on what types of servers are housed there, and who needs to access them.
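
As a rough, hypothetical sizing example (all of the numbers below are assumptions, not vendor figures):

    10 servers x 300 watts each      = 3,000 watts of load
    3,000 watts / 0.8 power factor   = 3,750 VA minimum UPS rating
    Required runtime at that load    = 1 to 2 hours (see below)

A UPS would then be chosen whose runtime chart shows at least that much time at a 3,750 VA load, with some headroom for growth.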

If the data center is used primarily for on-site employees to access information, then the backup power supply should only need to provide power for an hour or two. This should be ample time to contact the power company and determine when the power will be restored. A one- or two-hour window also gives the administrative staff time to shut down the servers.

On the other hand, if there are public servers, or servers that employees from other locations need to access, located within the data center, then additional backup power may be needed. Generally, a UPS should not be used to provide more than six hours of backup power; if more time is required, a generator should be used. Generators require special wiring to ensure that power will flip over to the generator in the event of a power failure. Before installing one, check with building management to ensure that there are no restrictions against generators.

NOTE

If power outages are common in your area, make sure there is gas in the generator. Many companies have procedures in place to deal with power outages, but nothing in the procedures addresses what to do when the power returns. Either make refueling the generator part of the power outage procedure, or make it standard to check the generator once a month to make sure it has fuel.


Physically securing servers is not enough; servers have to be placed on the network in a manner that will ensure their security and availability.

Network administrators will often segment servers into a separate VLAN, as shown in Figure 12.1. This makes sense at one level because it makes the management and monitoring of the servers simpler. It can also make network security easier by isolating the servers and creating more restrictive security policies for those switches.

Figure 12.1. A traditional network design. Each switch block is segmented into different VLANs, creating an unnecessary load on the network.

graphics/12fig01.gif

The downside is that it increases network traffic. Requests from the workstations have to be routed to the servers and back. A better solution for servers that are workgroup specific is to include the server as part of the workgroup VLAN.

This network design does not work for all servers, only servers that need to be accessed by a specific workgroup. For example, department-specific file servers, DHCP servers, or domain controllers can be isolated in this manner.

As Figure 12.2 shows, a server that is used by the workgroup associated with VLAN 1 is also placed in VLAN 1. Traffic going from the workstations to the server, and vice versa, needs only to be switched, not routed. This decreases the load on the core switches or routers, and makes more efficient use of bandwidth within the network.

Figure 12.2. Segmenting servers into VLANs associated with different networks can decrease the load on routers

graphics/12fig02.gif

Of course, this won't work for all servers. Web, mail, and DNS servers, among others, need to be accessed by all employees, as well as by users outside the network. As discussed in Chapter 11, public servers should be placed on a switch that is used only for public servers.

Having each server in a separate VLAN might seem as though it would make it more difficult to manage and monitor those servers effectively. It will not, if a separate network, a management network, is created.

A management network, sometimes referred to as a backnet, is an isolated network that can be used to manage the servers. This management network is designed to facilitate server upgrades, backups, configuration, and monitoring. The most important aspect of a management network is that it has to be isolated. There should be no traffic, aside from management traffic, that traverses the backnet.

Figure 12.3 illustrates the typical setup of a management network. Each server on the backnet has a primary IP address in a separate VLAN. The backnet interfaces are all part of the same VLAN, and they are part of a separate network. A separate network infrastructure is in place to support the management network. Again, this is to keep management traffic apart from the rest of the network traffic.
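
As a minimal sketch of what this looks like on a Linux server with two network cards (the interface names and addresses are assumptions), the backnet interface is simply addressed out of the isolated management range:

    # Primary (production) interface, reachable by users:
    ip addr add 172.16.1.10/24 dev eth0

    # Management (backnet) interface, cabled to the separate management
    # switch; only backup, monitoring, and administration traffic uses it:
    ip addr add 10.99.0.10/24 dev eth1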

Figure 12.3. Using a management network to control the type of traffic that is sent to the main server IP address and to better manage network traffic

graphics/12fig03.gif

There are two advantages to installing a management network in this manner:

  • It isolates management traffic, which improves overall network performance.

  • It keeps management passwords, monitoring information, and other tools used to keep tabs on the network isolated from attackers and prying eyes.

In fact, many organizations with a management network in place will only allow human connections to be made through a VPN, further enhancing the security of the information shared across this network.

A backnet will not work well in all network environments, but it does make a great additional layer of security when it can be used.

12.1.3 Server Security

The servers have now been physically secured and isolated on the network. The next step is to secure access to the servers. Because the most damaging network attacks require gaining access to a server, it is important to restrict access as much as possible to all servers.

Start by limiting servers to single-use machines. The web server should be used only to serve websites, the mail server should only be used for mail, and so on. It is easier to manage security on individual servers if an administrator can limit the number of services running on them.

A good example of this is the X Window System that is installed on many Unix systems by default. Because Unix servers are generally managed through the command line, keeping the X Window System installed only leaves unnecessary software installed and potential security holes open.

Of course, in order for this strategy to work, it is also important to follow through and disable any services that are unused. Not only should unnecessary services be disabled, but whenever possible they should be removed, or not installed in the first place.
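
On a modern systemd-based Linux distribution (newer than the systems described in this chapter), stopping and disabling an unused service is a single step; the service names below are only examples:

    # Stop the service immediately and prevent it from starting at boot.
    systemctl disable --now cups.service
    systemctl disable --now avahi-daemon.service

    # Review what is still enabled so nothing unnecessary is missed.
    systemctl list-unit-files --state=enabled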

Services should never be run as the administrative user. It may be necessary to start a service using the administrative account to bind it to its port, but the service should continue to run as a nonprivileged user. If the service runs as a nonprivileged user, then if the service is compromised, an attacker is less likely to be able to cause further damage to the network.
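
Apache is a convenient illustration of this pattern: the parent process is started by root so that it can bind to port 80, but the directives below (a fragment of a typical httpd.conf; the account name varies by system) cause the worker processes that actually handle requests to run as an unprivileged account:

    # httpd.conf (fragment): the listening parent is started by root,
    # but the children that serve requests run as this unprivileged account.
    User apache
    Group apache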

In addition to running services as nonprivileged users, unnecessary accounts should be removed from servers or, at a minimum, renamed. The Windows guest account is an example of an unnecessary account that should be deleted. The same goes for the Unix games, bin, and sys accounts, as well as other well-known accounts. If the accounts cannot be removed entirely, they should be configured with no login capabilities and a very restrictive password. All accounts on all network devices, but especially on servers, should have passwords.
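
On a Unix server, accounts that cannot be deleted outright can at least be locked and stripped of a login shell. A minimal sketch (the path to nologin varies by system):

    # Lock the password and remove the login shell for well-known accounts.
    usermod -L -s /sbin/nologin games
    usermod -L -s /sbin/nologin bin
    usermod -L -s /sbin/nologin sys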

Accounts created on the server should be subject to the same password policy as the rest of the network. This means that the passwords should be changed at regular intervals, and they should be sufficiently difficult to crack using password cracking tools.
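
On many Unix systems, password aging can be enforced per account with chage; the values and the account name below are assumptions chosen to match a hypothetical 90-day policy:

    # Expire the password every 90 days, warn 14 days in advance,
    # and require at least 1 day between changes.
    chage -M 90 -W 14 -m 1 allan

    # Review the aging settings now in force for the account.
    chage -l allan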

Whenever possible, information about user accounts should be stored in a centralized database. This helps decrease the likelihood that a wayward account will be created on a server. It also makes it easier for employees to hop from server to server, as long as they have permission to access each server. In cases where a centralized user database is in use, authentication from the servers to the user database should use some form of encryption. Kerberos is the type of authentication most often associated with this type of server management. Kerberos is also nice because both Unix operating systems and Windows 2000 support it, making it possible for a user to authenticate against both types of servers, if necessary.

This does not mean that users should have the run of the network; servers should be configured so that only specific groups have access to specific servers. Users in the accounting group should not need access to the sales server, and so on. Again, limiting who has access to a server will limit the amount of damage an attacker who gains access to one of the servers can do.

Files on the system should be restricted as well. A common way of enforcing file security is to create separate partitions on the server. One partition should be used for system files, while a separate partition can be used for user files. This is a common practice on Unix servers, which are generally broken into /, /usr, /var, /opt, and others, depending on the needs of the server administrator. This is a less common practice on Windows-based servers, but one that should be followed, even if it is as simple as putting the system files on a C:\ partition and user files on an F:\ partition.
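
A sketch of what this separation might look like in a Unix /etc/fstab; the device names, file system type, and mount options are assumptions:

    # /etc/fstab (fragment): system and user data on separate partitions.
    /dev/sda1   /       ext3   defaults                1 1
    /dev/sda2   /usr    ext3   defaults                1 2
    /dev/sda3   /var    ext3   defaults                1 2
    # User files live on their own partition, mounted so that device files
    # and set-uid programs stored there are ignored.
    /dev/sdb1   /home   ext3   defaults,nosuid,nodev   1 2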

Partitioning servers has two effects:

  1. It separates system information in a logical manner. If something happens to one of the partitions, data on the second partition is usually safe.

  2. It creates a separate root directory for users. Users on the F:\ partition, or in the /usr partition, see F:\ and /usr, respectively, as their root directories, and are unable to access the system files on the other partitions. While this is not perfect security, it does make the job of an attacker more difficult.

Cordoning off partitions is not enough; servers should have file systems that are as restrictive as possible. No users should have executable access on file servers, and, ideally, the user should only be able to access files in his or her own directory. In other words, the user should not be able to browse the directories or files of other users. This is done by giving the user's directory read and write permissions, but no other user, except the administrative user, should have access to the user's directory. Figures 12.4 and 12.5 show the best-practice file permissions for Unix and Windows 2000 servers. These permissions assume the server will be used solely for storing data, and that no programs will need to be run directly on the server.
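
On a Unix file server, the permissions shown in Figure 12.4 amount to roughly the following; the user and path names are taken from the figure, and the commands are run as root:

    # The directory is owned by the user, with no group or world access.
    chown allan /home/users/allan
    chmod 700 /home/users/allan

    # Existing files become readable and writable by the owner only.
    chmod 600 /home/users/allan/*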

Figure 12.4. Unix system file permissions. User Allan has read and write access to the directory allan; no other groups have read or write access.

graphics/12fig04.gif

Figure 12.5. Windows 2000 file permissions. The user Allan is the only person, aside from the Administrator, who has access to the folder allan.

graphics/12fig05.gif

System files should always be owned by the administrative user, and should be viewable or executable only by the administrator. This may mean tightening default permissions on some operating systems, especially when dealing with configuration files. Configuration files are often plain text, so if one has weak permissions, it can be used by an attacker to gather more information about a server and increase the amount of damage the attacker is capable of inflicting on the server.

On Windows servers, the group "Everyone" should be removed from the server, or used sparingly. The "Everyone" group allows all users on a server to have access to a file or directory, including the guest user. Since the "Everyone" group defeats the purpose of restricting file systems, there is no point in using it on the server. Instead, only groups that need access to the server should be defined on it.

If each department has its own dedicated server, the task of restricting access is a lot simpler. It is easier to restrict access to a group and then assign permission to individual folders. In some environments, especially within workgroups that are more collaborative in nature, it is acceptable to restrict individual folder access to the group, instead of the individual user. This will allow group members to access files in each other's folders, share information, and still keep the data protected from outside users. Group permission is 660 (rw-rw----) on a Unix system and Read and Change on a Windows server.
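
For the collaborative case, the 660 group permission mentioned above might be applied on a Unix server as follows; the group name and paths are assumptions:

    # Let members of the sales group read and write files in this folder.
    chgrp -R sales /home/users/allan
    chmod 660 /home/users/allan/*

    # The directory itself needs the execute (search) bit for the group.
    chmod 770 /home/users/allan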

Giving group access to an individual's folder can have negative consequences. If an employee's password is compromised, the attacker will now have access to all files on the server, and can delete the files, make changes, or even copy them and use them against an organization. Many organizations have opted to use intranets to facilitate group collaboration. With an intranet, files that need to be shared with group members can be posted publicly, while files that are private, or still in development, can be left in the user's directory, keeping them segregated from other members of the group.

NOTE

Because servers, especially web servers, are susceptible to attacks, Unix administrators often install chkrootkit (www.chkrootkit.org/), or a similar program. These programs look for signs of known rootkits on the server and report any suspicious findings.


Another good practice is to monitor file permissions on the server. A task that looks for inappropriate file permissions can be scheduled to run nightly. Usually a report is generated, then permissions can be automatically changed, or e-mail can be sent to the offending users, explaining what needs to be done to correct the problem. Of course, this process should be monitored closely to ensure the users are actually making the permission changes.
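
A simple version of such a check is a nightly cron job built around find; the script name, paths, and report address below are assumptions:

    #!/bin/sh
    # perm-audit.sh: report world-writable files in user directories.
    # Scheduled from cron, for example: 0 2 * * * root /usr/local/sbin/perm-audit.sh
    find /home -type f -perm -0002 -ls | \
        mail -s "World-writable files on $(hostname)" security@example.com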

On Unix systems, another method of file system security is to create a special environment using chroot. Chroot is a way to create a jailed environment for users; the jail limits the directories to which a user has access. For example, a typical Unix user's home directory might be /home/users/allan/. The /home directory is already a partition; however, an administrator may want to create an environment in which the user allan sees that home directory as the root of the file system. This forces the user to stay within the home directory, and prevents the user from searching through other files.
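
A minimal illustration of the mechanism follows; note that the jail must first be populated with copies of any shells, binaries, and shared libraries the user will need, which is omitted here:

    # Confine a session to the user's directory; inside the jail,
    # / now refers to /home/users/allan and nothing above it is visible.
    chroot /home/users/allan /bin/sh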

An environment protected by chroot is still vulnerable to attacks, so it should not be used as a single solution for securing file systems. However, when used in conjunction with other security precautions, chroot can add an extra level of security to a server.

12.1.4 The Administrative User

The administrative user is the one user who can cause the most damage on a server. The administrator has access to the entire server and is able to manipulate any file on it. Limiting the capabilities of the administrative user is not a good idea; instead, it is better to restrict server access, making it difficult to become the administrative user.

Windows and Unix operating systems approach the administrative user in two distinct fashions, although the user performs the same tasks. On a Windows server, the administrative user is known as Administrator, a username that can be changed. In fact, many Windows security experts recommend changing Administrator to an account name that is less obvious. This is another example of security through obscurity, which is not the recommended approach to securing a server, but it is a good idea when used in conjunction with other security measures.

Windows servers require that many services run as administrator, opening the possibility for administrator-level exploits. If an attacker is able to compromise one of these services remotely, the username is already known, so it is just a matter of determining the password. If the attacker has to determine both the username and the password, breaking into the server is that much more difficult.

For the name change to be effective, the administrator account should be renamed to something non-obvious. Renaming it to root, or another common name, is very ineffective, and still leaves the server easily exploitable.

Different users can have administrative access to a server, and a user can be made the administrator of a single server, but not necessarily of an entire server farm. However, if a user is logged in as a non-administrative user, that person has to log out and log in as an administrator to gain administrative control of the server.

This contrasts greatly with Unix, which allows users to log in as one user and switch to the administrative user, known as root, using the substitute user (su) command. The obvious downside to su is that, in its most basic implementation, it allows any user to become root. To limit the security risks posed by allowing any user to become root, most administrators create a wheel group. The wheel group is a special Unix group that allows an administrator to control which users are able to become root.

Unix, like Windows, allows users to belong to multiple groups, making it possible to give different users root access to different servers. For example, if the company webmaster needs root access on the web server, but no other servers, then the webmaster's account can be added to the wheel group on the web server, but not on any other servers. An administrator has the ability to add individual users to the wheel group, while leaving other users in the same group unaffected. Using the example of a web server, it is common to have a group of users maintain a website. These users are most likely part of the same group; the webmaster will also be part of this group. An administrator can give the webmaster the capability of becoming root, without giving the other users in the group root access.
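
On a Linux server using PAM, restricting su to the wheel group and adding the webmaster to it might look like the following; the pam_wheel line ships commented out in the default /etc/pam.d/su file on many distributions:

    # /etc/pam.d/su (fragment): only members of wheel may su to root.
    auth       required     pam_wheel.so use_uid

    # Add the webmaster to wheel on the web server only.
    usermod -aG wheel webmaster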

Unlike Windows-based servers, it is not recommended that administrators change the root user to a different name. There are too many services that rely on having access to the root user in order to start. In addition, Unix has built-in tools that allow administrators to limit how root is accessed.

One recommendation commonly made by security experts is to limit console and TTY access, so that the root user can log in only from the console. Within the server, limit the range of IP addresses that are able to access the server remotely, and log all connection attempts, as well as any attempts to change to the root user. Some security experts also recommend disabling remote root access entirely. This forces administrators to log into the server as themselves, then su to root, in order to become the administrative user. The argument for this security step is that it forces a potential attacker to crack two passwords instead of one.
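
Two common ways to express these restrictions are shown below; the file locations vary by Unix flavor, and the second fragment assumes OpenSSH is the remote access method:

    # /etc/securetty: root may log in only at the physical console.
    console

    # /etc/ssh/sshd_config: refuse remote root logins entirely, forcing
    # administrators to log in as themselves and then su to root.
    PermitRootLogin no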

Finally, on Unix systems it is possible to give certain users permission to execute commands as the root user. The most common way to do this is using the superuser do (sudo) suite of commands.

Sudo has become a very popular method of delegating security on Unix servers. Using sudo allows the administrator to tightly restrict who has root access, while still giving users the ability to run commands necessary to perform their jobs. Sudo also has several security enhancements, including extensive logging facilities, creating a detailed trail whenever a command is executed through sudo.

The way sudo works is that a server administrator can assign permissions so that a process is owned by root, but a user, or group, is able to run it as well. The user or group runs the process without having to actually su to the root user, and the administrator can still restrict permissions on the file so that it is owned by root. Referring to the webmaster example, a webmaster should not need access to most system files; however, it is not uncommon for a webmaster to have to restart web services. Rather than make the webmaster part of the wheel group for this one task, an administrator can delegate the web server processes to the webmaster, allowing him or her to start and stop web services as needed, but with no other administrative rights on the server.
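
Continuing the webmaster example, a sudoers entry (always edited with visudo) might delegate only the web server control program; the path shown is an assumption for an Apache installation:

    # /etc/sudoers (fragment): webmaster may run the Apache control
    # program as root, and nothing else.
    webmaster   ALL = (root) /usr/sbin/apachectl

The webmaster can then run, for example, sudo /usr/sbin/apachectl graceful to restart the web server; sudo logs the command, and no other root privileges are granted.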

Taking proper precautions to restrict administrative access to the server can increase server security dramatically and can help protect against the most common system attacks.

   

