12.3 Web Server Security


An organization's web server is an attacker's most common point of entry into the network. A web server is, almost by definition, a public server, so it makes an attractive target to attackers. In addition, depending on the nature of the website, breaking into the web server may give an attacker access to valuable customer information. Because web servers are such attractive targets, special steps need to be taken to secure them against attackers.

The web server should be a single-use server, and should have a very restricted access policy; only personnel who absolutely need access should have it. In fact, a staging server is commonly used as a means of further restricting access to the actual web server.

A staging server is a replica of the web server. It should have the same operating system, same patches, same file structure, and all of the same software as the web server. Content destined for the web server is loaded onto the staging server and then pushed to the web server using software such as RedDot Solutions' Content Management Server (CMS). Different users, or departments, are given accounts on the staging server; the accounts are used to upload content to the staging server. The content is pushed from the staging server to the actual web server using a separate account to which the users do not have access. The web server is configured to allow the staging account access only from the IP address of the staging server.

As shown in Figure 12.6, the staging server should be placed on a separate VLAN from the web server. The staging server should be part of a private VLAN that is not accessible through the firewall. This will prevent an attacker who does gain access to the web server from getting to the staging server and using it to launch additional attacks.

Figure 12.6. A staging server is used as an added measure of web server security. Content is uploaded to the staging server and then pushed to the web server.


Because the staging server is the only machine that will send content to the web server, an administrator can restrict the ways the web server can be accessed. Depending on the type of server, content can be uploaded using either Secure Copy (SCP) or Secure FTP (SFTP), and the standard FTP ports can be disabled on the server. If other forms of access are required, they should be allowed only from the staging server, and those ports should be blocked at the firewall.
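
As an illustration, the following is a minimal sketch of such a push, written in Python with the paramiko SSH library (a CMS product like the one described above would handle this internally). The hostname, account name, key location, and paths are hypothetical.

import paramiko

def push_content(local_path, remote_path):
    # Connect as the dedicated staging account; the web server should
    # accept this account only from the staging server's IP address.
    client = paramiko.SSHClient()
    client.load_system_host_keys()   # trust only known host keys
    client.connect("web.example.com", username="staging_push",
                   key_filename="/home/staging_push/.ssh/id_rsa")
    sftp = client.open_sftp()
    sftp.put(local_path, remote_path)   # upload over the encrypted channel
    sftp.close()
    client.close()

push_content("/staging/content/index.html", "/var/www/html/index.html")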

NOTE

If FTP is required to run on the server, it is critical that anonymous FTP be disabled on all servers, except servers that intentionally make files available for general download. An increasingly common attack is to scan for servers that have anonymous FTP enabled with very loose file permissions. These servers are then used to store pirated software and other files, leaving a company open to lawsuits and dramatically increasing the amount of bandwidth consumed.


Content on the web server should be stored on a separate partition from the operating system files. Programs installed to assist in serving web pages, such as Apache, Internet Information Server, and ColdFusion, should have any sample files included in the default installation deleted.

NOTE

A common practice among attackers is to look for sample files that programs like ColdFusion install by default and exploit weaknesses in those default installs.


Within the content partition, file permissions should be as restrictive as possible. This probably sounds like a contradiction; files on a web server have to be world-readable because anyone can visit a website. The difficulty is finding a balance between making the website user experience enjoyable and keeping the web server secure.

If the content of a website is largely static, consider using plain HTML pages and setting the file permissions so the files are world-readable but not writable, and certainly not executable. On the other hand, if website content is dynamic and database driven, the files will have to be world-readable and executable, but they should not be writable. Windows-based web servers have a special scripting permission, separate from the executable permission, that should be used for dynamic content.
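
As a sketch of the static case, the following Python fragment walks a hypothetical content partition and applies world-readable, non-writable, non-executable permissions. This assumes a Unix-style server; Windows permissions work differently.

import os

CONTENT_ROOT = "/var/www/html"   # hypothetical content partition

for dirpath, dirnames, filenames in os.walk(CONTENT_ROOT):
    # Directories need the execute bit so they can be traversed.
    os.chmod(dirpath, 0o755)                              # rwxr-xr-x
    for name in filenames:
        # Files: world-readable, not writable, not executable.
        os.chmod(os.path.join(dirpath, name), 0o644)      # rw-r--r--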

In cases where executable permissions are required, security precautions should be taken to secure the scripting software. Whether the site uses Perl, PHP, Java, VBScript, or ColdFusion, there are vendor-recommended practices that a server administrator can follow, like running PHP in safe mode, to increase the security of scripts on the server.

Outlining security steps for all of the commonly used scripting languages is beyond the scope of this book. There is plenty of excellent information available for locking down each language.

There are three general guidelines that all scripts in use on a website should follow to ensure security:

  1. A script should never accept unchecked data.

  2. All input passed to another program on the server should be validated.

  3. Scripts should not rely on path information gleaned from the server.

Whether it is a form, database query, or some other type of user input, any data that is processed by a script should be validated. The data has to be checked to ensure it is not passing malicious information to the script.
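
A minimal sketch of this kind of check, in Python with a hypothetical allowlist pattern: anything that does not match a narrow set of expected characters is rejected outright, rather than trying to enumerate every dangerous character.

import re

# Hypothetical allowlist: letters, digits, spaces, and basic punctuation.
SAFE_INPUT = re.compile(r"^[A-Za-z0-9 .,!?'-]{1,500}$")

def validate(text):
    if not SAFE_INPUT.match(text):
        raise ValueError("input rejected: unexpected characters")
    return text

validate("Great site, thanks!")   # accepted
try:
    validate("; ls -al")          # shell metacharacters are rejected
except ValueError as err:
    print(err)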

All input being passed to another program on the server should be validated. While Rule 2 sounds similar to the first rule, the implications are different. In the first instance, the script is checking for potentially damaging input before processing it; this applies to information that is generally damaging to the server. For example, an attacker loads the command:

 ls -al 

into a feedback form and is able to get a directory listing of the root web directory. In the second case, information that is being passed to a specific program has to be validated to ensure it interacts properly with that program. Using the feedback form as an example, an attacker may be able to manipulate it so that the web server can be used to send unsolicited mail.
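
A hedged sketch of that failure mode in Python: if form input is interpolated into a shell command line, a command such as ls -al rides along; passing arguments as a list avoids shell parsing entirely. The mail command and addresses here are illustrative only.

import subprocess

recipient = "user@example.com; ls -al"   # hostile form input

# UNSAFE (for illustration only): the shell would treat the semicolon
# as a command separator and run ls -al on the server.
#   os.system("mail -s Feedback " + recipient)

# Safer: arguments are passed as a list, so no shell ever parses them;
# the hostile string is just an (invalid) recipient argument. The input
# should still be validated first, as in the previous example.
try:
    subprocess.run(["mail", "-s", "Feedback", recipient], check=False)
except FileNotFoundError:
    print("no local mail command available")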

NOTE

Matt's FormMail is one of the most popular scripts in use on the Internet. Older versions of the script contained a security hole similar to the one described here. Attackers were able to exploit that hole to send unsolicited commercial mail to millions of people. The hole has been patched for more than a year, but older versions of the script are still in use on thousands of websites.


The problem becomes even more insidious when attackers use the same type of attack to submit queries to databases on the server. An attacker can use this type of attack to find out usernames, phone numbers, and even credit card information. Restricting the type of information that can be submitted to a web server helps to reduce the chances that this type of attack will occur.
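
A minimal sketch of the standard defense, using Python's built-in sqlite3 module and a hypothetical customers table: parameterized queries ensure user input is bound as data, never spliced into the SQL text.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, phone TEXT)")
conn.execute("INSERT INTO customers VALUES ('smith', '555-0100')")

hostile = "smith' OR '1'='1"   # classic injection attempt

# UNSAFE (for illustration only): string concatenation lets the quote
# characters rewrite the query to return every row in the table.
#   conn.execute("SELECT phone FROM customers WHERE name = '" + hostile + "'")

# Safe: the ? placeholder binds the value as a literal string.
rows = conn.execute("SELECT phone FROM customers WHERE name = ?",
                    (hostile,)).fetchall()
print(rows)   # [] -- no customer is actually named "smith' OR '1'='1"
conn.close()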

The third rule is that scripts should not rely on path information gleaned from the server. The Common Gateway Interface (CGI) environment includes a variable called PATH, which supplies a server-defined path that can be prepended to any files or directories used in a script. So, if a script needs to access a file called sample.exe, the file can be called like this:

 $PATH/sample.exe 

Unfortunately, the PATH variable can be manipulated, allowing an attacker to view the directory contents of the server. Rather than relying on information gathered from the PATH variable, programmers should hard-code data paths directly into their scripts. Hard-coding path information into a script prevents an attacker from manipulating the PATH variable to display files on the web server.
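
A short sketch of the difference in Python; the directory name is hypothetical.

import os

# UNSAFE (for illustration only): trusts whatever the environment's
# PATH happens to contain at request time.
#   helper = os.environ.get("PATH", "") + "/sample.exe"

# Safe: the location is fixed when the script is written, so no
# request can redirect it.
HELPER_DIR = "/usr/local/webapp/bin"   # hypothetical hard-coded path
helper = os.path.join(HELPER_DIR, "sample.exe")
print(helper)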

New security holes are constantly being discovered in web applications. It is important that web server administrators stay abreast of newly discovered security problems and update software as quickly as possible.

Attackers are especially quick to take advantage of security holes in web server software, so quickly patching those holes is critical. Web server attacks are successful because, most of the time, they resemble a standard HTTP connection attempt. Rather than using malformed packets or spoofed addresses, an attacker makes a standard HTTP GET request but sends bad information to the server.

In August 2001, hundreds of thousands of web servers started receiving requests like this:

 "GET/  default.ida?NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN  NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN  NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN  NNNNNNNNNNNNNNNNNNNNNNNNNNNNN%u9090%u6858" 

These requests were part of the Code Red worm. The Code Red worm took advantage of a security hole in Microsoft Index Server, which was susceptible to buffer overflows. [3] The security patch had been available for months, but most administrators had not patched their systems, leaving them vulnerable. The Code Red worm spread across several continents in a few hours. While the damage done by this worm was relatively mild, it could have been much worse.

[3] This is the reason why scripts should never accept unchecked data.

The Code Red worm made administrators painfully aware of the inadequate steps that are often taken to secure web servers. The impact of Code Red would have been far less severe if companies that were not using Microsoft Index Server (installed by default with Internet Information Server) had disabled the service and deleted the files related to Index Server.

Just as a good administrator would not dream of leaving unnecessary services running on a server, unused web services should be disabled. The most popular web servers, Apache, Microsoft Internet Information Server, and Sun iPlanet, bundle additional services that not all websites will use. It is important to review the configuration when this software is installed to ensure there are no unnecessary web services running. Every unused web service presents a potential security hole, so leaving such services disabled improves security.

Another common mistake made by web server administrators is to store customer information on the web server. No matter what precautions an administrator has taken, it is still possible, if not likely, that the web server will be compromised. If this occurs, having a database filled with customer information on that server is the worst thing that can happen. Customers are usually understanding about website defacements, but they are understandably less forgiving if an attacker gains access to confidential information.

To avoid this problem, website databases should be stored on a separate server. The web server can send database queries to the database server and pull the necessary information on an as-needed basis. Queries between the web server and the database server should be encrypted, and the database username and password should also be secured.
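
As a sketch, assuming a PostgreSQL backend and the psycopg2 driver (neither is specified here), a connection that refuses to run unencrypted might look like the following; the host, database, account, and credential file are hypothetical.

import psycopg2

conn = psycopg2.connect(
    host="db.internal.example.com",   # database server's private address
    dbname="webstore",
    user="webquery",                  # low-privilege, query-only account
    # Keep the password out of the source tree and the content partition.
    password=open("/etc/webapp/db_password").read().strip(),
    sslmode="require",                # refuse any unencrypted connection
)
cur = conn.cursor()
cur.execute("SELECT 1")               # placeholder query
print(cur.fetchone())
conn.close()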

The database server itself should be configured to not allow any logins using the public network address. All administration of the database server should be handled through the management network on the private network interface.

To provide an additional layer of protection, the database that is queried by the web server should be a scaled-down version of the real customer database. The web database can contain some customer contact information, but it should not contain all customer information. If possible, the database should not contain billing information. Unfortunately, it is sometimes necessary to make billing information available, especially for sites that are e-commerce oriented. In cases like this, the website database should contain only partial information, such as the last four digits of a credit card number. Now, even if an attacker is able to bypass the database server defenses, the information gained will be significantly less valuable than if all the billing information had been available.
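
A minimal sketch of the partial-information approach: the web-facing database stores only a masked value, so a successful break-in yields the last four digits and nothing more.

def mask_card_number(card_number):
    # Keep only the digits, then expose just the last four.
    digits = "".join(ch for ch in card_number if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_card_number("4111-1111-1111-1111"))   # ************1111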

Protecting customer information has to be a top priority for a web server administrator. Taking the proper steps to secure a database, and a database server, goes a long way toward protecting that information. It also helps ensure that customers' faith in an organization remains strong.

12.3.1 SSL Encryption

A common way to enhance the security of transactions conducted through a website is to use Secure Sockets Layer (SSL) encryption. Netscape developed SSL as a means of securing web-based transactions. The IETF has recently adopted most of the SSL specification to create a new protocol, Transport Layer Security (TLS), designed to secure more than just web-based transactions.

SSL uses public key cryptography developed by RSA Data Security, Inc., to secure transactions. Public key cryptography uses a public/private key pair to validate the information being sent between parties.

The default SSL port is TCP port 443; however, SSL can run over other ports, and many administrative applications now require SSL connections on different ports. Traditionally, to make an SSL connection to a web server on port 443, a user simply types https://www.example.com; if the connection is to be made on a different port, the user types https://www.example.com:[port number]. The effect is the same: the transactions between the user and the web server will be encrypted.

The process of setting up SSL on a web server varies depending on the type of web server being used, but the backend functionality is the same across all web servers. The public/private key pair is generated on the web server. The private key is stored securely on the server, while the public key is sent to a Certificate Authority (CA), such as VeriSign or Thawte. The CA verifies the identity of the organization that sent the key and issues an x.509v3 certificate. The x.509v3 certificate contains the organization's public key and is signed by the CA. The newly issued certificate is installed on the web server, and SSL sessions can begin.
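
In outline, the key-and-request step looks like the following Python sketch, using the modern cryptography library (a tool that postdates the servers discussed here); the organization details are hypothetical, and in practice most administrators use their web server's own tooling for this.

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate the public/private key pair on the web server.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a certificate signing request containing the public key;
# this is what is sent to the CA for verification and signing.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Inc"),
    ]))
    .sign(key, hashes.SHA256())
)

# The private key never leaves the server and is stored separately.
with open("server.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("server.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))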

NOTE

SSL sessions will actually run without a certificate generated by a CA. The web browser will generate an error window when a user visits an SSL-encrypted website that does not have a CA-signed certificate. The error will indicate that a trusted authority has not signed the certificate, and therefore the information is suspect. The data sent between the user and the server is still encrypted, but there is no third-party verification that the website is owned by the organization that claims to own it.


When a user attempts to connect to a web server using SSL, several things happen. First, the user initiates an SSL connection with the web server and tells the web server the highest SSL version it supports (Version 3.0 is the current standard). The web server responds with a copy of its x.509v3 certificate. The web browser chooses a random symmetric key, encrypts it with the public key from the certificate, and sends it back to the web server. The web server decrypts the symmetric key using the server's private key.

The web server and the client use this symmetric key to encrypt traffic during the SSL session. The web server also assigns a unique ID to each encrypted session, called an SSL Session ID. The SSL Session ID is unencrypted and sent between the user and the server with each request.
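
The exchange can be observed from the client side; the following Python sketch, using the standard library's ssl module (a modern convenience not assumed by the text), connects to a hypothetical site and prints what was negotiated.

import socket
import ssl

context = ssl.create_default_context()   # verifies the CA's signature chain

with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print(tls.version())                 # negotiated protocol version
        print(tls.cipher())                  # negotiated cipher suite
        print(tls.getpeercert()["subject"])  # identity asserted by the certificate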

SSL is a great tool for encrypting data between users and a web server, but it should not be thought of as a tool for securing the web server itself. SSL does nothing to secure data once it is on the server. This is an important point: many administrators feel that if they have SSL encryption, their web server is automatically secure. That is not the case. SSL encryption should be used with other security measures as part of an overall web server security solution.

12.3.2 Load Balancing

So far in this section, the discussion has revolved around attackers attempting to deface a website or to get at customer information. Those attacks are common, but another common threat faced by web server administrators is the denial-of-service (DoS) attack.

DoS attacks are different in that they often serve no purpose other than to see whether the attacker can take a website offline. The attacker may have a grudge against an organization, or may have been frustrated in attempts to attack a server in other ways. Whatever the reason, a DoS attack against a web server is often difficult to defend against, especially if it is a distributed DoS (DDoS) attack.

A typical website consists of a single server with a database server sitting behind it. Even if a DoS attack does not saturate the Internet connection, it is not difficult to generate enough requests to knock a server offline. Firewalls can stop many, but not all, DoS attacks, especially when the attacks appear to be a great number of legitimate requests, which they often are.

A common method used to protect a website against DoS, and other types of attacks, is to load balance the site across multiple servers. Load balancing has traditionally been used to distribute traffic across multiple servers as a means of improving performance and customer experience. As some websites increased in popularity, a single server was not enough to handle the number of requests received.

The solution was to set up multiple servers. At first these multi-server solutions were primitive: a company would create multiple records in its zone file pointing to the IP addresses of different servers, as shown in Figure 12.7.

Figure 12.7. Increase availability by having multiple DNS entries for the same domain pointing to different servers


From a security perspective, there are two problems with this approach. First, it does not take availability into account: if a web server fails, the DNS server will still direct people to it, because the DNS server has no way of knowing the server has failed. Second, DNS information is publicly available, so an attacker can find all of the servers associated with a website and attempt to break into each of them.
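
The second weakness is easy to demonstrate: any client can list every address behind a round-robin name, as in this short Python sketch with a hypothetical hostname.

import socket

# Every A record for the name is returned to any client that asks.
addresses = {info[4][0] for info in
             socket.getaddrinfo("www.example.com", 80,
                                proto=socket.IPPROTO_TCP)}
for addr in sorted(addresses):
    print(addr)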

Two other forms of load balancing have become popular for use with web servers: clustering and network load balancing. Clustering has been used with other types of servers for many years, but it has only recently become commonplace for web servers. A cluster is a series of servers that act as a single server (Figure 12.8). The servers either communicate with each other or with a cluster controller to process requests as they are made. The cluster is assigned a shared IP address, and each server in the cluster can respond to ARP requests for that address. The individual servers are also assigned unique IP addresses, which allow for server management and communication between the servers.

Figure 12.8. A server cluster. The servers answer to the same IP address on the public side but have unique, private IP addresses for management and intraserver communications.


When a request to the web server is made, the clustered servers determine which is going to accept it based on a preprogrammed set of rules, known as metrics. Depending on the metrics used, a single server may handle all requests from one source IP address, or the requests may be distributed between the servers. If one of the servers fails, the other servers in the cluster pick up its requests and service goes uninterrupted.
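
As a rough sketch of two common metrics (the actual logic lives in the cluster software, and the addresses here are hypothetical): round robin rotates through the pool, while a source-IP hash pins each client to one server.

import hashlib
import itertools

SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # cluster members
_rotation = itertools.cycle(SERVERS)

def pick_round_robin():
    # Each request goes to the next server in the rotation.
    return next(_rotation)

def pick_by_source_ip(client_ip):
    # Hash the client address so one client always hits one server.
    digest = hashlib.md5(client_ip.encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]

print(pick_round_robin())                  # 10.0.0.11, then .12, then .13, ...
print(pick_by_source_ip("198.51.100.7"))   # stable choice for this client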

Clusters add to the security of a web server not simply because they increase availability, but also because they make it more difficult to launch an attack against a server. Each time a request is made to the website, a different server may respond. An attacker engaged in a complicated break-in attempt will need to restart the process with each request because there is no way to guess which server will respond. In addition, if a private network is used to maintain the servers, there is no public address onto which an attacker can latch to complete an attack. Even within a small cluster, this advantage can give an administrator additional time to notice the alarm, track down the attacker, and stop the attack before it succeeds.

One area in which clustering generally cannot help is DoS attacks. A well-executed DoS attack can still overpower several servers, rendering them unavailable to legitimate requests. For administrators concerned about DoS and DDoS attacks, network load balancing may be a better solution.

Network load balancing uses a switched device, such as the Cisco CSS 11500, Nortel Networks Alteon 184, Extreme Networks SummitPx1, or F5 Networks BIG-IP 5000, to direct traffic between multiple servers. The load balancer, or balancers if deployed in pairs for redundancy, assumes the IP address of the website, as shown in Figure 12.9. Website requests reach the load balancer and are forwarded to the appropriate server, depending on the configured metrics. Each server is configured with a private IP address that is used for management purposes.

Figure 12.9. Network load balancing. A switched device sits on the network in front of the servers and distributes traffic between the devices. The load balancer assumes the IP address of the website, and the servers are privately addressed.


If a server fails, it is taken out of rotation by the load balancer and an alert is generated. No traffic is lost; the load balancer simply redirects requests to another server. As with clustering, load balancing provides additional protection from attackers: because there are multiple servers, each with a private IP address, an attacker may have to continually restart an attack.

In addition, because most network load balancers are essentially switches, they are designed to handle large amounts of traffic. Many load balancers include DoS and DDoS detection utilities, which can be used to drop bad requests before they reach the servers. The load balancer is able to pass legitimate traffic to the servers and deny bad traffic without any performance degradation. Load balancers are usually deployed in pairs, so if an attacker is able to generate enough traffic to overwhelm one, making it unreachable, the second load balancer will take over and continue to forward traffic.

Keep in mind that these devices are able to process several gigabits of traffic per second, so a DoS attack is more likely to overwhelm the Internet connection than the load balancer itself. The ability of load balancers to stave off a DoS attack, even when the Internet connection has been overwhelmed, is actually beneficial. Many attackers count on DoS attacks crashing the web servers, leaving the servers more susceptible to other attacks. If the load balancers can prevent the servers from being overwhelmed, the servers will remain intact. The site may be unavailable while the DoS attack is going on, but the servers will remain secure.

Load balancers are certainly not a cure-all for web server security. If a server is not properly patched, or does not have the proper access restrictions, it will be susceptible to simple attacks that fall below the radar of the load balancer. A load balancer should be used in conjunction with a solid web server security policy to enhance the security of a web server.
