6.3. Securing Dynamic Requests

Securing dynamic requests is a problem facing most Apache administrators. In this section, I discuss how to enable CGI and PHP scripts and make them run securely and with acceptable performance.

6.3.1. Enabling Script Execution

Because of the inherent danger executable files introduce, execution should always be disabled by default (as discussed in Chapter 2). Enable execution in a controlled manner and only where necessary. Execution can be enabled using one of four main methods:

  • Using the ScriptAlias directive

  • Explicitly by configuration

  • Through server-side includes

  • By assigning a handler, type, or filter

6.3.1.1 ScriptAlias versus script enabling by configuration

Using ScriptAlias is a quick and dirty approach to enabling script execution:

ScriptAlias /cgi-script/ /home/ivanr/cgi-bin/

Though it works fine, this approach can be dangerous. This directive creates a virtual web folder and enables CGI script execution in it but leaves the configuration of the actual folder unchanged. If there is another way to reach the same folder (maybe it's located under the web server tree), visitors will be able to download script source code. Enabling execution explicitly by configuration will avoid this problem and help you understand how Apache works:

<Directory /home/ivanr/public_html/cgi-bin>
    Options +ExecCGI
    SetHandler cgi-script
</Directory>

6.3.1.2 Server-side includes

Execution of server-side includes (SSIs) is controlled via the Options directive. When the Options +Includes syntax is used, it allows the exec element, which in turn allows operating system command execution from SSI files, as in:

<!--#exec cmd="ls" -->

To disable command execution but still keep SSI working, use Options +IncludesNOEXEC.
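
For example, a minimal per-directory sketch might look like the following (the path is hypothetical; mapping .shtml files to the SSI processor is configured separately, as shown in the next section for Apache 2):

# Enable SSI processing in this directory, but without
# the ability to execute external commands.
<Directory /home/ivanr/public_html/ssi>
    Options +IncludesNOEXEC
</Directory>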

6.3.1.3 Assigning handlers, types, or filters

For CGI script execution to take place, two conditions must be fulfilled. Apache must know execution is what is wanted (for example, through assigning a handler via SetHandler cgi-script), and script execution must be enabled as a special security measure. This is similar to how an additional permission is required to enable SSIs. Special permissions are usually not needed for other (non-CGI) types of executable content. Whether they are needed is left to the module authors to decide, so it varies from module to module. For example, to enable PHP, it is enough to have the PHP module installed and to assign a handler to PHP files in some way, such as via one of the following two approaches:

# Execute PHP when filenames end in .php
AddHandler application/x-httpd-php .php

# All files in this location are assumed to be PHP scripts.
<Location /scripts/>
    SetHandler application/x-httpd-php
</Location>

In Apache 2, yet another way to execute content is through the use of output filters. Output filters are designed to transform output, and script execution can be seen as just another type of transformation. Server-side includes on Apache 2 are enabled using output filters:

AddOutputFilter INCLUDES .shtml

Some older versions of the PHP engine used output filters to execute PHP on Apache 2, so you may encounter them in configurations on older installations.
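
As an illustration only, such configurations typically looked something like the following sketch; the filter name PHP is an assumption based on how the old PHP filter module registered itself, so check the documentation that shipped with your PHP version before relying on it:

# Hypothetical sketch of filter-based PHP execution on Apache 2;
# it works only with PHP builds that register an output filter named PHP.
<Files *.php>
    SetOutputFilter PHP
</Files>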

6.3.2. Setting CGI Script Limits

There are three Apache directives that help establish control over CGI scripts. Used in the main server configuration area, they limit the resources available to processes running as the main web server user. This is useful to prevent the web server from taking over the machine (through a CGI-based DoS attack), but only if you are not using suEXEC. With suEXEC in place, different resource limits can be applied to each user account used for CGI script execution. Such usage is demonstrated in the virtual hosts example later in this chapter. Here are the directives that specify resource limits:


RLimitCPU

Limits CPU consumption, in CPU seconds per process


RLimitNPROC

Limits the maximum number of processes, on a per-user basis


RLimitMEM

Limits the maximum consumption of memory, in bytes, on a per-process basis

Each directive accepts two parameters, for soft and hard limits, respectively. Processes can choose to extend the soft limit up to the value configured for the hard limit. It is recommended that you specify both values. Limits can be configured in server configuration and virtual hosts in Apache 1 and also in directory contexts and .htaccess files in Apache 2. An example of the use of these directives is shown in the next section.
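
In the meantime, here is a quick sketch of the two-parameter syntax, with arbitrary soft and hard values:

# Soft limit of 5 CPU seconds, hard limit of 10 CPU seconds per process
RLimitCPU 5 10
# Soft limit of 16 MB per process, hard limit of 32 MB
RLimitMEM 16777216 33554432
# At most 20 processes per user (soft), 25 (hard)
RLimitNPROC 20 25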

6.3.3. Using suEXEC

Having discussed how execution wrappers work and why they are useful, I will now give more attention to practical aspects of using the suEXEC mechanism to increase security. Below you can see an example of configuring Apache with the suEXEC mechanism enabled. I have used all possible configuration options, though this is unnecessary if the default values are acceptable:

$ ./configure \
    --enable-suexec \
    --with-suexec-bin=/usr/local/apache/bin/suexec \
    --with-suexec-caller=httpd \
    --with-suexec-userdir=public_html \
    --with-suexec-docroot=/home \
    --with-suexec-uidmin=100 \
    --with-suexec-gidmin=100 \
    --with-suexec-logfile=/var/www/logs/suexec_log \
    --with-suexec-safepath=/usr/local/bin:/usr/bin:/bin \
    --with-suexec-umask=022

Compile and install as usual. Due to high security expectations, suEXEC is known to be rigid. Sometimes you will find yourself compiling Apache several times until you configure the suEXEC mechanism correctly. To verify suEXEC works, look into the error log after starting Apache. You should see suEXEC report:

[notice] suEXEC mechanism enabled (wrapper: /usr/local/apache/bin/suexec)

If you do not see the message, that probably means Apache did not find the suexec binary (the --with-suexec-bin option is not configured correctly). If you need to check the parameters used to compile suEXEC, invoke it with the -V option, as in the following (this works only if done as root or as the user who is supposed to run suEXEC):

# /usr/local/apache/bin/suexec -V
 -D AP_DOC_ROOT="/home"
 -D AP_GID_MIN=100
 -D AP_HTTPD_USER="httpd"
 -D AP_LOG_EXEC="/var/www/logs/suexec_log"
 -D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin"
 -D AP_SUEXEC_UMASK=022
 -D AP_UID_MIN=100
 -D AP_USERDIR_SUFFIX="public_html"

Once compiled correctly, suEXEC usage is pretty straightforward. The following is a minimal example of using suEXEC in a virtual host configuration. (The syntax is correct for Apache 2. To do the same for Apache 1, you need to replace SuexecUserGroup ivanr ivanr with User ivanr and Group ivanr.) This example also demonstrates the use of CGI script limit configuration:

<VirtualHost *>

    ServerName ivanr.example.com
    DocumentRoot /home/ivanr/public_html

    # Execute all scripts as user ivanr, group ivanr
    SuexecUserGroup ivanr ivanr

    # Maximum 1 CPU second to be used by a process
    RLimitCPU 1 1
    # Maximum of 25 processes at any one time
    RLimitNPROC 25 25
    # Allow 10 MB to be used per-process
    RLimitMEM 10000000 10000000

    <Directory /home/ivanr/public_html/cgi-bin>
        Options +ExecCGI
        SetHandler cgi-script
    </Directory>

</VirtualHost>

A CGI script with the following content comes in handy to verify everything is configured correctly:

#!/bin/sh

echo "Content-Type: text/html"
echo
echo "Hello world from user <b>`whoami`</b>!"

Placed in the cgi-bin/ folder of the above virtual host, the script should display a welcome message from user ivanr (or whatever user you specified). If you wish, you can experiment with the CGI resource limits now, changing them to very low values until all CGI scripts stop working.

Because of its thorough checks, suEXEC makes it difficult to execute binaries using the SSI mechanism: command line parameters are not allowed, and the script must reside in the same directory as the SSI script. What this means is that users must have copies of all binaries they intend to use. (Previously, they could use any binary that was on the system path.)


Unless you have used suEXEC before, the above script is not likely to work on your first attempt. Instead, one of many suEXEC security checks is likely to fail, causing suEXEC to refuse execution. For example, you probably did not know that the script and the folder in which the script resides must be owned by the same user and group as specified in the Apache configuration. There are many checks like this and each of them contributes to security slightly. Whenever you get an "Internal Server Error" instead of script output, look into the suexec_log file to determine what is wrong. The full list of suEXEC checks can be found on the reference page http://httpd.apache.org/docs-2.0/suexec.html. Instead of replicating the list here I have decided to do something more useful. Table 6-2 contains a list of suEXEC error messages with explanations. Some error messages are clear, but many times I have had to examine the source code to understand what was happening. The messages are ordered in the way they appear in the code so you can use the position of the error message to tell how close you are to getting suEXEC working.

Table 6-2. suEXEC error messages (each error message is followed by a description of its cause)

User mismatch (%s instead of %s)

The suEXEC binary can only be invoked by the user specified at compile time with the --with-suexec-caller option.

Invalid command (%s)

The command begins with /, or begins with ../, or contains /../. None of these are allowed. The command must be in the current working directory or in a directory below it.

Invalid target user name: (%s)

The target username is invalid (not known to the system).

Invalid target user id: (%s)

The target uid is invalid (not known to the system).

Invalid target group name: (%s)

The target group name is invalid (not known to the system).

Cannot run as forbidden uid (%d/%s)

An attempt to execute a binary as user root was made or the uid is smaller than the minimum uid specified at compile time with the --with-suexec-uidmin option.

Cannot run as forbidden gid (%d/%s)

An attempt to execute a binary as group root was made or the gid is smaller than the minimum gid specified at compile time with the --with-suexec-gidmin option.

Failed to setgid (%ld: %s)

Change to the target group failed.

Failed to setuid (%ld: %s)

Change to the target user failed.

Cannot get current working directory

suEXEC cannot retrieve the current working directory. This would possibly indicate insufficient permissions for the target user.

Cannot get docroot information (%s)

suEXEC cannot get access to the document root. For nonuser requests, the document root is specified at compile time using the --with-suexec-docroot option. For user requests (in the form of ~username), the document root is constructed at runtime when the public subfolder defined with the --with-suexec-userdir option is appended to the user's home directory.

Command not in docroot (%s)

The target file is not within the allowed document root directory. See the previous message description for a definition.

Cannot stat directory: (%s)

suEXEC cannot get information about the current working directory.

Directory is writable by others: (%s)

The directory in which the target binary resides is group or world writable.

Cannot stat program: (%s)

This probably means the file is not found.

File is writable by others: (%s/%s)

The target file is group or world writable.

File is either setuid or setgid: (%s/%s)

The target file is marked setuid or setgid.

Target uid/gid (%ld/%ld) mismatch with directory (%ld/%ld) or program (%ld/%ld)

The file and the directory in which the file resides must be owned by the target user and target group.

File has no execute permission: (%s/%s)

The target file is not marked as executable.

AP_SUEXEC_UMASK of %03o allows write permission to group and/or other

This message is only a warning. The selected umask allows group or world write access.

(%d)%s: exec failed (%s)

Execution failed.


6.3.3.1 Using suEXEC outside virtual hosts

You can use suEXEC outside virtual hosts with the help of the mod_userdir module. This is useful in cases where the system is not (or at least not primarily) a virtual hosting system, but users want their home pages served using the ~username syntax. The following is a complete configuration example. You will note suEXEC is not explicitly configured here. If it is configured and compiled into the web server, as shown previously, it will work automatically:

UserDir public_html
UserDir disabled root

<Directory /home/*/public_html>
    # Give users some control in their .htaccess files.
    AllowOverride AuthConfig Limit Indexes
    # Conditional symbolic links and SSIs without execution.
    Options SymLinksIfOwnerMatch IncludesNoExec

    # Allow GET and POST.
    <Limit GET POST>
        Order Allow,Deny
        Allow from all
    </Limit>

    # Deny everything other than GET and POST.
    <LimitExcept GET POST>
        Order Deny,Allow
        Deny from all
    </LimitExcept>
</Directory>

# Allow per-user CGI-BIN folder.
<Directory /home/*/public_html/cgi-bin/>
    Options +ExecCGI
    SetHandler cgi-script
</Directory>

Ensure the configuration of the UserDir directive (public_html in the previous example) matches the configuration given to suEXEC at compile time with the --with-suexec-userdir configuration option.

Do not set the UserDir directive to ./ to expose users' home folders directly. This will also expose home folders of other system users, some of which may contain sensitive data.


A frequent requirement is to give your (nonvirtual host) users access to PHP, but this is something suEXEC will not support by default. Fortunately, it can be achieved with some mod_rewrite magic. All users must have a copy of the PHP binary in their cgi-bin/ folder. This is an excellent solution because they can also have a copy of the php.ini file and thus configure PHP any way they want. Use mod_rewrite in the following way:

# Apply the transformation to PHP files only.
RewriteCond %{REQUEST_URI} \.php$
# Transform the URI into something mod_userdir can handle.
RewriteRule ^/~(\w+)/(.*)$ /~$1/cgi-bin/php/~$1/$2 [NS,L,PT,E=REDIRECT_STATUS:302]

The trick is to transform the URI into something mod_userdir can handle. By setting the PT (passthrough) option in the rule, we are telling mod_rewrite to forward the URI to other modules (we want mod_userdir to see it); this would not take place otherwise. You must set the REDIRECT_STATUS environment variable to 302 so the PHP binary knows it is safe to execute the script. (Read the discussion about PHP CGI security in Chapter 3.)
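
To illustrate with a hypothetical request, the transformation works roughly as follows:

# Requested URI:      /~ivanr/index.php
# After the rewrite:  /~ivanr/cgi-bin/php/~ivanr/index.php
# mod_userdir maps /~ivanr/cgi-bin/php to the php binary in
# /home/ivanr/public_html/cgi-bin/, and the trailing /~ivanr/index.php
# is handed to the binary as extra path information identifying the
# script that should be executed.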

6.3.3.2 Using suEXEC for mass virtual hosting

There are two ways to implement a mass virtual hosting system. One is to use the classic approach and configure each host using the <VirtualHost> directive. This is a very clean way to support virtual hosting, and suEXEC works as you would expect, but Apache was not designed to work efficiently when the number of virtual hosts becomes large. Once the number of virtual hosts reaches thousands, the loss of performance becomes noticeable. Using modern servers, you can deploy a maximum of 1,000-2,000 virtual hosts per machine. Having significantly more virtual hosts on a machine is possible, but only if a different approach is used. The alternative approach requires all hosts to be treated as part of a single virtual host and to use some method to determine the path on disk based on the contents of the Host request header. This is what mod_vhost_alias (http://httpd.apache.org/docs-2.0/mod/mod_vhost_alias.html) does.
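
As a minimal sketch of that approach (the paths are illustrative only), mod_vhost_alias derives the document root directly from the hostname:

# Use the Host request header rather than the configured server name.
UseCanonicalName Off
# %0 is replaced with the full hostname supplied in the request.
VirtualDocumentRoot /home/vhosts/%0/public_html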

If you use mod_vhost_alias, suEXEC will stop working and you will have a problem with security once again. The other execution wrappers are more flexible when it comes to configuration, and one option is to investigate using them as a replacement for suEXEC.

But there is a way of deploying mass virtual hosting with suEXEC enabled, and it comes with some help from mod_rewrite. The solution provided below is a mixture of the mass virtual hosting with mod_rewrite approach documented in the Apache documentation (http://httpd.apache.org/docs-2.0/vhosts/mass.html) and the trick I used above to make suEXEC work with PHP for user home pages. This solution is only meant to serve as a demonstration of a possibility; you are advised to verify it works correctly for what you want to achieve. I say this because I personally prefer the traditional approach to virtual hosting, which is much cleaner and where the possibility of misconfiguration is much smaller. Use the following configuration data in place of the two mod_rewrite directives in the previous example:

# Extract the value of SERVER_NAME from the
# Host request header.
UseCanonicalName Off

# Since there has to be only one access log for
# all virtual hosts its format must be modified
# to support per virtual host splitting.
LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon
CustomLog /var/www/logs/access_log vcommon

RewriteEngine On
RewriteMap LOWERCASE int:tolower
RewriteMap VHOST txt:/usr/local/apache/conf/vhost.map

# Translate the hostname to username using the
# map file, and store the username into the REQUSER
# environment variable for use later.
RewriteCond ${LOWERCASE:%{SERVER_NAME}} ^(.+)$
RewriteCond ${VHOST:%1|HTTPD} ^(.+)$
RewriteRule ^/(.*)$ /$1 [NS,E=REQUSER:%1]

# Change the URI to a ~username syntax and finish
# the request if it is not a PHP file.
RewriteCond %{ENV:REQUSER} !^HTTPD$
RewriteCond %{REQUEST_URI} !\.php$
RewriteRule ^/(.*)$ /~%{ENV:REQUSER}/$1 [NS,L,PT]

# Change the URI to a ~username syntax and finish
# the request if it is a PHP file.
RewriteCond %{ENV:REQUSER} !^HTTPD$
RewriteCond %{REQUEST_URI} \.php$
RewriteRule ^/(.*)$ /~%{ENV:REQUSER}/cgi-bin/php/~%{ENV:REQUSER}/$1 \
    [NS,L,PT,E=REDIRECT_STATUS:302]

# The remaining directives make PHP work when content
# is genuinely accessed through the ~username syntax.
RewriteCond %{ENV:REQUSER} ^HTTPD$
RewriteCond %{REQUEST_URI} \.php$
RewriteRule ^/~(\w+)/(.*)$ /~$1/cgi-bin/php/~$1/$2 [NS,L,PT,E=REDIRECT_STATUS:302]

You will need to create a simple mod_rewrite map file, /usr/local/apache/conf/vhost.map, to map virtual hosts to usernames:

jelena.example.com    jelena
ivanr.example.com     ivanr

There can be any number of virtual hosts mapping to the same username. If virtual hosts have www prefixes, you may want to add them to the map files twice, once with the prefix and once without.
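
For example, the map file might then look like this (the hostnames are illustrative):

jelena.example.com        jelena
www.jelena.example.com    jelena
ivanr.example.com         ivanr
www.ivanr.example.com     ivanr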

6.3.4. FastCGI

If mod_fastcgi (http://www.fastcgi.com) is added to Apache, it can be used to make scripts persistent, provided the scripts themselves support persistent operation. I like FastCGI because it is easy to implement yet very powerful. Here, I demonstrate how you can make PHP persistent. PHP comes with FastCGI support built in (it only needs to be enabled when the PHP binary is compiled, as described later in this section), so the only change on the Apache side is to install mod_fastcgi. The example is not PHP specific, so it will work for any other binary that supports FastCGI.

To add mod_fastcgi to Apache 1, type the following while you are in the mod_fastcgi source folder:

$ apxs -o mod_fastcgi.so -c *.c
# apxs -i -a -n fastcgi mod_fastcgi.so

To add mod_fastcgi to Apache 2, type the following while you are in the mod_fastcgi source folder:

$ cp Makefile.AP2 Makefile
$ make top_dir=/usr/local/apache
# make top_dir=/usr/local/apache install

When you start Apache the next time, one more process will be running: the FastCGI process manager, which is responsible for managing the persistent scripts and the communication between them and Apache.

Here is what you need to add to Apache configuration to make it work:

# Load the mod_fastcgi module.
LoadModule fastcgi_module modules/mod_fastcgi.so

# Tell it to use the suexec wrapper to start other processes.
FastCgiWrapper /usr/local/apache/bin/suexec

# This configuration will recycle persistent processes once every
# 300 seconds, and make sure no processes run unless there is
# a need for them to run.
FastCgiConfig -singleThreshold 100 -minProcesses 0 -killInterval 300

I prefer to leave the existing cgi-bin/ folders alone so non-FastCGI scripts continue to work. (As previously mentioned, scripts must be altered to support FastCGI.) This is why I create a new folder, fastcgi-bin/. A copy of the php binary (the FastCGI version) needs to be placed there. It makes sense to remove this binary from the cgi-bin/ folder to avoid the potential for confusion. A FastCGI-aware php binary is compiled as a normal CGI version but with the addition of the --enable-fastcgi switch on the configure line. It is worth checking for FastCGI support now because it makes troubleshooting easier later. If you are unsure whether the version you have supports FastCGI, invoke it with the -v switch. The supported interfaces will be displayed in the brackets after the version number.

$ ./php -v
PHP 5.0.2 (cgi-fcgi) (built: Nov 19 2004 11:09:11)
Copyright (c) 1997-2004 The PHP Group
Zend Engine v2.0.2, Copyright (c) 1998-2004 Zend Technologies.

This is what an suEXEC-enabled and FastCGI-enabled virtual host configuration looks like:

<VirtualHost *>

    ServerName ivanr.example.com
    DocumentRoot /home/ivanr/public_html

    # Execute all scripts as user ivanr, group ivanr
    SuexecUserGroup ivanr ivanr

    AddHandler application/x-httpd-php .php
    Action application/x-httpd-php /fastcgi-bin/php

    <Directory /home/ivanr/public_html/cgi-bin>
        Options +ExecCGI
        SetHandler cgi-script
    </Directory>

    <Directory /home/ivanr/public_html/fastcgi-bin>
        Options +ExecCGI
        SetHandler fastcgi-script
    </Directory>

</VirtualHost>

Use this PHP file to verify the configuration works:

<?
echo "Hello world!<br>";
passthru("whoami");
?>

The first request should be slower to execute than all subsequent requests. After that first request has finished, you should see a php process still running as the user (ivanr in my case). To ensure FastCGI is keeping the process persistent, you can tail the access and suEXEC log files. For every persistent request, there will be one entry in the access log and no entries in the suEXEC log. If you see the request in each of these files, something is wrong and you need to go back and figure out what that is.

If you configure FastCGI to run as demonstrated here, it will be fully dynamic. The FastCGI process manager will create new processes on demand and shut them down later so that they don't waste memory. Because of this, you can enable FastCGI for a large number of users and achieve security and adequate dynamic request performance. (The mod_rewrite trick to get PHP to run through suEXEC works for FastCGI as well.)
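
A hedged sketch of that adaptation follows; it assumes each user's fastcgi-bin/ folder has been enabled in the same way as the per-user cgi-bin/ folder shown earlier (with SetHandler fastcgi-script instead of cgi-script), so verify it against your own setup:

# Apply the transformation to PHP files only.
RewriteCond %{REQUEST_URI} \.php$
# Route the request through the user's FastCGI-aware php binary
# (a per-user fastcgi-bin/ folder is assumed).
RewriteRule ^/~(\w+)/(.*)$ /~$1/fastcgi-bin/php/~$1/$2 [NS,L,PT,E=REDIRECT_STATUS:302]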

6.3.5. Running PHP as a Module

Running PHP as a module in an untrusted environment is not recommended. Having said that, PHP comes with many security-related configuration options that can be used to make even module-based operation decently secure. What follows is a list of actions you should take if you want to run PHP as a module (in addition to the actions required for secure installation as described in Chapter 3):

  • Use the open_basedir configuration option with a different setting for every user, to limit the files PHP scripts can reach (a sketch is shown after this list).

  • Deploy PHP in safe mode. (Be prepared to wrestle with the safe-mode-related problems, which will be reported by your users on a regular basis.) In safe mode, users can execute only the binaries that you put into a special folder. Be very careful what you put there, if anything. A process created by executing a binary from PHP can access the filesystem without any restrictions.

  • Use the disable_functions configuration option to disable dangerous functions, including the PHP-Apache integration functions. (See Chapter 3 for more information.)

  • Never allow PHP dynamic loadable modules to be used by your users (set the enable_dl configuration directive to Off).
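
A minimal per-user sketch follows, assuming PHP runs as an Apache module and using the mod_php php_admin_value and php_admin_flag directives; the virtual host and paths are hypothetical, and disable_functions is normally set in the global php.ini rather than here:

<VirtualHost *>
    ServerName jelena.example.com
    DocumentRoot /home/jelena/public_html

    # Confine this user's scripts to her own directory tree (plus /tmp).
    php_admin_value open_basedir /home/jelena/public_html:/tmp
    # Run PHP in safe mode and forbid dynamic extension loading.
    php_admin_flag safe_mode On
    php_admin_flag enable_dl Off
</VirtualHost>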

The above list introduces so many restrictions that it makes PHP significantly less useful. Though full-featured PHP programs can be deployed under these conditions, users are not used to deploying PHP programs in such environments. This will lead to broken PHP programs and problems your support staff will have to resolve.


