VULNERABILITY SCANNERS

Web servers such as Apache, iPlanet, and IIS have gone through many revisions and security updates. A web vulnerability scanner consists of a scanning engine and a catalog. The catalog contains a list of common files, files with known vulnerabilities, and common exploits for a range of servers. For example, a vulnerability scanner looks for backup files (such as a copy of default.asp saved as default.asp.bak) or tries directory traversal exploits (such as checking for ..%255c..%255c). The scanning engine handles the logic for reading the catalog of exploits, sending the requests to the web server, and interpreting the responses to determine whether the server is vulnerable. These tools target vulnerabilities that are easily fixed by secure host configurations, updated security patches, and a clean web document root.
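To make the engine-and-catalog design concrete, here is a minimal Perl sketch of a catalog-driven check loop. It uses the LW2::get_page() function from the LibWhisker library covered later in this chapter; the three catalog entries and the simple status-code test are illustrative only, and a real scanner would also inspect response bodies rather than trusting status codes alone.

#!/usr/bin/perl
# Minimal catalog-driven scan: request each path in a small catalog of
# known-risky files and report any that the server appears to serve.
# The catalog below is a hypothetical three-entry example.
use LW2;
use strict;

my $host = $ARGV[0];
my @catalog = ('/default.asp.bak', '/.DS_Store', '/docs/');

foreach my $check (@catalog) {
    my ($code, $html) = LW2::get_page('http://'.$host.$check);
    print "+ $check returned $code\n" if ($code == 200);
}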

Nikto

Whisker, created by RFP, was retired in favor of a Perl-based scanning library rather than continuing as a standalone tool. Nikto, by Chris Sullo, is based on the next-generation LibWhisker library. This tool offers support for the Secure Sockets Layer (SSL), proxies, and port scanning.

Implementation

As a Perl-based scanner, nikto runs on Unix, Windows, and Mac OS X. It uses standard Perl libraries that accompany default Perl installations. You can download nikto from http://www.cirt.net/code/nikto.shtml. Nikto also requires LibWhisker (LW.pm), which is simple to install.
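For example, a first scan might look like this (the archive name and unpacked directory are assumptions; use whatever names the download page provides):

$ tar xzf nikto-current.tar.gz
$ cd nikto-1.35
$ perl nikto.pl -h 10.0.1.8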

Scanning

To get started with nikto you need only specify a target host with the -h option. As the engine discovers potential vulnerabilities, notes accompany the output to explain why a finding may be a security risk:

---------------------------------------------------------------------
- Nikto 1.35/1.34  -  www.cirt.net
+ Target IP:       10.0.1.8
+ Target Hostname: 10.0.1.8
+ Target Port:     80
+ Start Time:      Fri Aug 19 21:54:46 2005
---------------------------------------------------------------------
- Scan is dependent on "Server" string which can be faked, use -g to override
+ Server: Apache/2.0.53
+ Server does not respond with '404' for error messages (uses '403').
+ This may increase false-positives.
+ All CGI directories 'found', use '-C none' to test none
+ Apache/2.0.53 appears to be outdated (current is at least Apache/2.0.54).
  Apache 1.3.33 is still maintained and considered secure.
+ 2.0.53 - TelCondex Simpleserver 2.13.31027 Build 3289 and below allow
  directory traversal with '/.../' entries.
+ /.DS_Store - Apache on Mac OSX will serve the .DS_Store file, which
  contains sensitive information. Configure Apache to ignore this file
  or upgrade to a newer version. (GET)
+ /.FBCIndex - This file on OSX contains the source of the files in the
  directory. http://www.securiteam.com/securitynews/5LP0O005FS.html (GET)
+ /docs/ - May give list of installed software (GET)
...

Table 7-1 lists the basic options necessary to run nikto. The most important options are setting the target host, the target port, and the output file. Nikto accepts the first character of an option as a synonym. For example, you can specify -s or -ssl to use the HTTPS protocol, or you can specify -w or -web to format output in HTML.
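For example, the following two commands are equivalent:

$ ./nikto.pl -h 10.0.1.8 -p 443 -s
$ ./nikto.pl -host 10.0.1.8 -port 443 -ssl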

Table 7-1: Basic Nikto Command-line Options

-host
    Specifies a single host. Nikto does not accept files with hostnames, as the -H option for whisker does.

-port
    Specifies an arbitrary port. Take care; specifying port 443 does not imply HTTPS. You must remember to include -ssl.

-verbose
    Provides verbose output. This option cannot be abbreviated (-v is reserved for the virtual hosts option).

-ssl
    Enables SSL support. Nikto does not assume HTTPS if you specify target port 443.

-generic
    Instructs nikto to ignore the server's banner and run a scan using the entire database.

-Format
    Formats output in HTML, CSV, or text. Must be combined with -output. For example:
    -F htm
    -F csv
    -F txt

-output
    Logs output to a file. For example, -output nikto80_website.html -F htm

-id
    Provides HTTP Basic Authentication credentials. For example, -id username:password

-vhost
    Uses a virtual host for the target web server rather than the IP address. This affects the content of the HTTP Host: header. It is important to use this option in shared server environments.

-Cgidirs
    Scans all possible CGI directories. This disregards 404 errors that nikto receives for the base directory. See the "Config.txt" section for instructions on how to configure which directories it will search. For example:
    -C none
    -C all
    -C /cgi/

-mutate
    Mutated checks are described in the "Config.txt" section.

-evasion
    Applies IDS evasion techniques. Nikto can use nine different techniques to format the URL request in an attempt to bypass unsophisticated string-matching intrusion detection systems.

You should remember a few basics about running nikto: specify the host (-h), port (-p), and SSL (-s), and write the output to a file. A handful of additional options are described in Table 7-2. For the most part, these options widen the scope of a scan's guessing routines.
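Putting those basics together, a typical invocation against an HTTPS server might look like this (the output filename is a placeholder):

$ ./nikto.pl -h 10.0.1.8 -p 443 -s -F htm -o nikto443_website.html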

Table 7-2: Additional Nikto Command-line Options

-cookies
    Prints the cookies returned by the server. Depending on how the server treats unauthenticated users, this produces either too much unnecessary information or very useful information.

-root
    Prepends the supplied directory to all requests. This helps when you wish to test sites with "off by one" directory structures. For example, many language localization techniques will prepend a two-character language identifier to the entire site:
    /en/scripts/
    /en/scripts/include/
    /en/menu/foo/
    /de/scripts/
    When this is the case, nikto may incorrectly report that it could not find common scripts. Thus, use the -root option:
    ./nikto.pl -h website -p 80 -r /en

-findonly
    Scans the target server for HTTP(S) ports only; does not perform a full scan. The scan can use nmap or internal Perl-based socket connections.

-nolookup
    Does not resolve IP addresses to hostnames.

-timeout N
    Stops scanning if no data is received after a period of N seconds. The default is 10.

-useproxy
    Uses the proxy defined in the config.txt file. Previous versions of nikto required you to turn proxy support on or off in the config.txt file; the command-line option is more convenient.

-debug
    Enables verbose debug messages. This option cannot be abbreviated. It basically enumerates the LibWhisker request hash for each URL nikto retrieves. This information quickly becomes overwhelming; here's just a small portion of the information printed:
    D: - Target id:1:ident:10.0.1.8:ports_in:80:
    D: - Request Hash:
    D: - Connection: Keep-Alive
    D: - Host: 10.0.1.8
    D: - User-Agent: Mozilla/4.75 (Nikto/1.35)
    D: - $whisker->INITIAL_MAGIC: 31337
    D: - $whisker->anti_ids:
    D: - $whisker->force_bodysnatch: 0
    D: - $whisker->force_close: 0
    D: - $whisker->force_open: 0
    D: - $whisker->host: 10.0.1.8
    D: - $whisker->http_req_trailer:
    D: - $whisker->http_ver: 1.1

-dbcheck
    Performs a syntax check of the main scan_database.db and user_scan_database.db files. These files contain the specific tests that nikto performs against the server. You should need this only if you decide to customize one of these files (and if you do, consider dropping the nikto team an e-mail with your additions). This option cannot be abbreviated.

-update
    Updates nikto's plug-ins and checks whether a new version exists. This option cannot be abbreviated.

The -update option makes it easy to maintain nikto. It causes the program to connect to http://www.cirt.net and download the latest plug-ins to keep the scan list current:

$ ./nikto.pl --update
+ No updates required.
+ www.cirt.net message: Version 2.0 is still coming...

Config.txt

Nikto uses the config.txt file to set certain options that are either used less often or are most likely to be used for every scan. This file includes over a dozen settings. An option can be unset by commenting out the line with a hash (#) symbol. Here are some of the default settings that you'll be most likely to change:

#CLIOPTS=-g -a
#NMAP=/usr/bin/nmap
SKIPPORTS=21 111
DEFAULTHTTPVER=1.1
#PROXYHOST=10.1.1.1
#PROXYPORT=8080
#PROXYUSER=proxyuserid
#PROXYPASS=proxypassword
#STATIC-COOKIE=cookiename=cookievalue
@CGIDIRS=/cgi.cgi/ /webcgi/ /cgi-914/ /cgi-915/ /bin/ /cgi/ /mpcgi/ /cgi-bin/ /ows-bin/ /cgi-sys/ /cgi-local/ /htbin/ /cgibin/ /cgis/ /scripts/ /cgi-win/ /fcgi-bin/ /cgi-exe/ /cgi-home/ /cgi-perl/
@MUTATEDIRS=/....../ /members/ /porn/ /restricted/ /xxx/
MUTATEFILES=xxx.htm xxx.html porn.htm porn.html
@ADMINDIRS=/admin/ /adm/
@USERS=adm bin daemon ftp guest listen lp mysql noaccess nobody nobody4 nuucp operator root smmsp smtp sshd sys test unknown uucp web www
@NUKE=/ /postnuke/ /postnuke/html/ /modules/ /phpBB/ /forum/

The @CGIDIRS setting contains a space-delimited list of directories. Nikto tries to determine whether each directory exists before trying to find files within it, although the -allcgi option overrides this behavior.

The CLIOPTS setting contains command-line options to include every time nikto runs. This is useful for shortening the command line by placing the -generic, -verbose, and -web options here.

NMAP and SKIPPORTS control nikto's port-scanning behavior (-findonly). If the nmap binary is not available (which is usually the case for Windows systems), nikto uses Perl functions to port scan. The SKIPPORTS setting contains a space-delimited list of port numbers never to scan.

Use the PROXY* settings to enable proxy support for nikto. Although there is rarely a need to change the DEFAULTHTTPVER setting, you may find servers that support only version 1.0.
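For example, to route all nikto traffic through an internal proxy, uncomment and edit the proxy lines in config.txt (the addresses shown are placeholders):

PROXYHOST=10.1.1.1
PROXYPORT=8080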

The MUTATE* settings greatly increase the time it takes to scan a server with the -mutate option. @MUTATEDIRS instructs nikto to run every check from the base directory or directories listed here. This is useful for web sites that use internationalization, whereby the /scripts directory becomes the /1033/scripts directory. The MUTATEFILES setting instructs nikto to run a check for each file against every directory in its current plug-in. Note that there are two mutate techniques, -mutate3 and -mutate4, that ignore these values. Technique 3 performs user enumeration against Apache servers by requesting /~user directories, which takes advantage of incorrectly configured public_html (UserDir module) settings in the httpd.conf file. Technique 4 is similar, but it uses the /cgi-bin/cgiwrap/~ method.

The @ADMINDIRS setting is useful for guessing the location of administrator-related portions of the web site.

The @USERS setting can be helpful against Apache servers that may have public_html directories enabled (mod_userdir).

Case Study: Catching Scan Signatures

As an administrator, you should be running vulnerability scanners against your web servers as part of routine maintenance. After all, it would be best to find your own vulnerabilities before someone else does. On the other hand, how can you tell if someone is running these tools against you? An intrusion detection system (IDS) can help, but an IDS has several drawbacks: it typically cannot handle high bandwidth, it relies on pattern-matching intelligence, it cannot (for the most part) watch encrypted SSL streams, and it is expensive (even the open-source Snort requires a team to maintain and monitor events). The answer, in this case, is to turn to your logfiles. You enabled robust logging for your web server, right?

Common Signatures

Logfiles are a security device. They are reactive, meaning that if you see an attack signature in your logs, you know you've already been attacked. If the attack compromised the server, web logs will be the first place to go for re-creating the event. Logs also help administrators and programmers track down bugs or bad pages on a web site, which is necessary to maintain a stable web server. With this in mind, you should have a policy for turning on the web server's logging, collecting the logfiles, reviewing the logfiles, and archiving the logfiles.

The following table lists several items to look for when performing a log review. Many of these checks can be automated with simple tools such as grep; a few sample commands follow the table.

Excessive 404 response codes
    A 404 in your logfile usually means one of three things: a typo or error is in a page on the site, a user mistyped a URL, or a malicious user is looking for "goodies." If you see several requests from an IP address that resulted in a string of 404 errors, check the rest of your logs for that IP address. You may find a successful request (200 response) somewhere else that indicates malicious activity.

Unused file extensions
    This is a subset of the excessive 404s, but it's a good indicator of an automated tool. If your site uses only *.jsp files, requests for *.asp files would be out of place.

Excessive 500 response codes
    Any server error should be checked. This might mean the application has errors, or a malicious user is trying to submit invalid data to the server.

Sensitive filenames
    Search the logs for requests that contain passwd, cmd.exe, boot.ini, ipconfig, or other system filenames and commands. IDSs often key off of these values.

Examine parameters
    Web server attacks also hide within requests that return a 200 response. Make sure that your web server logs the parameters passed to the URI.

Directory traversal
    Search for attacks that try to break out of directories, such as .. or its URL-encoded equivalent %2e%2e.

Long strings
    Search for long strings (more than 100 characters) submitted as a parameter. For example, a username with the letter A repeated 200 times probably indicates someone's attempt to break the application.

Unix shell characters
    Check for characters that have special meaning in shells or SQL. Common characters are ' ! < > & * ;

Strange User-Agent headers
    Check for strings that do not correspond to the most common versions of Internet Explorer, Mozilla, Opera, or Safari. For example, nikto produces this User-Agent header:
    Mozilla/4.75 (Nikto/1.30)
    Yes, it is trivial to change this string, but laziness and simple mistakes often identify malicious users. Of course, make sure that your web server records this header!
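As a minimal illustration, assuming Apache's common log format (client IP in the first field, request URI in the seventh, status code in the ninth), commands along these lines automate three of the checks above:

$ awk '$9 == 404 {print $1}' access_log | sort | uniq -c | sort -rn | head
$ grep -E 'passwd|cmd\.exe|boot\.ini' access_log
$ awk 'length($7) > 100' access_log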

Bear in mind that IIS records the URL in its final, parsed format. For example, the Unicode directory traversal attack appears in an IIS log as /scripts/..\..\..\cmd.exe?/c+dir, whereas an Apache logfile captures the raw request, /scripts/..%c0%af..%c0%af..%c0%afcmd.exe?/c+dir. For IIS logging, make sure to turn on the options for recording the uri-stem and uri-query fields.

 

LibWhisker

The LibWhisker library (http://sourceforge.net/projects/whisker/) by Rain Forest Puppy brings together many common Perl modules into a single resource for HTTP-based tools. It serves as the core communication engine in nikto, and its set of functions provides a way to build web site crawlers quickly.

Implementation

Installation is simple, but it does vary ever so slightly from most CPAN modules. After untarring the download, enter the directory and make the library. Once that is done, install LW2.pm into your Perl directory. You can do this in three commands:

$ cd libwhisker-current
$ perl Makefile.pl lib
$ perl Makefile.pl install

LibWhisker might seem redundant because it apes the functionality of several Perl modules that already exist, such as LWP, Base64, and HTML::Parser. The advantage of LibWhisker is that it is lean (a smaller file size than all the other modules it replaces), simple (a single module), focused (handles only HTTP and HTTPS requests), and robust (provides a single interface for handling request and response objects). It is also more legible than the original whisker! LibWhisker has also joined the legions of open-source code on the http://sourceforge.net servers, so it shouldn't be too hard to find.

The strength of LibWhisker's HTTP functionality shines in a tool like nikto. You can also use the library to build your own Perl scripts for whatever web-based activities you need to perform. Table 7-3 lists some of the LibWhisker functions that you'll typically find to be the most useful.

Table 7-3: Useful LibWhisker Functions

get_page($url, \%request)
    Retrieves a complete URL specified by the $url parameter. The %request hash is optional.

http_new_request(\%request)
    Creates a request object. The request object contains the headers, method, and data sent to a web server. Set a new header by adding keys to the request hash. For example, this sets the Accept and User-Agent headers:
    $req->{'Accept'} = '*/*';
    $req->{'User-Agent'} = 'Mozilla/5.0';
    Many fundamental URL creation and HTTP connection attributes can be modified in the request object. This includes the end-of-line characters (normally \r\n), the parameter separator, and instructions for handling the case of URLs.

crawl_new($start, $max_depth, \%request, \%tracking)
    Initializes a crawl object used by LibWhisker to keep track of information as it spiders a web site. The %request hash should be created by http_new_request.

crawl($crawl_object, $start, $max_depth)
    Executes the crawl defined by the $crawl_object. This causes LibWhisker to spider a web site and collect all links within a $max_depth number of "clicks" (link depth).
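The crawl functions in Table 7-3 can be combined into a short spider. Here is a minimal sketch that collects links two levels deep; it assumes the %tracking hash receives the discovered URLs as its keys, so check perldoc LW2 to confirm the result format before relying on it.

#!/usr/bin/perl
# Sketch: spider a web site with LibWhisker's crawl functions.
use LW2;
use strict;

my $start = $ARGV[0];      # starting URL, e.g. http://www.example.com/
my $max_depth = 2;         # follow links two "clicks" deep
my %tracking;              # assumed to collect discovered URLs as keys

my $req = LW2::http_new_request();
$req->{'User-Agent'} = 'Mozilla/5.0';
LW2::http_fixup_request($req);

my $crawler = LW2::crawl_new($start, $max_depth, $req, \%tracking);
LW2::crawl($crawler, $start, $max_depth);

print "$_\n" foreach (sort keys %tracking);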

Here is an example of how you might create a script to retrieve the robots.txt file from web servers. It uses the LW2::get_page() function.

#!/usr/bin/perl
# Retrieve /robots.txt from the host named on the command line.
use LW2;
use strict;

my $host = $ARGV[0];
my ($code, $html) = LW2::get_page('http://'.$host.'/robots.txt');
print "$code\n$html";

You need only supply the hostname on the command line and the script handles the rest. Of course, you could slightly alter the $host variable so that it accepts any URL.

[Paris:~] mike% ./crawler.pl www.google.com
200
User-agent: *
Allow: /searchhistory/
Disallow: /search
Disallow: /groups

Requesting a single page is something that a few lines of shell script and the Netcat command could handle. However, the previous script assumes an HTTP connection and ignores the possibility of HTTPS. Whereas we would have to migrate from Netcat to OpenSSL command lines to handle the different connection types, LibWhisker handles both. It also provides a simple mechanism for masquerading as any type of web browser: build a request object with the headers, such as the User-Agent string, of any browser you wish to impersonate.

#!/usr/bin/perl
# Retrieve any URL (HTTP or HTTPS), masquerading as a Mozilla browser.
use LW2;
use strict;

my $url = $ARGV[0];
my $req = LW2::http_new_request();
$req->{'Accept'} = '*/*';
$req->{'Connection'} = 'Close';
$req->{'User-Agent'} = 'Mozilla/5.0';
LW2::http_fixup_request($req);
my ($code, $html) = LW2::get_page($url, $req);
print "$code\n$html";

The command line must now be entered slightly differently.

[Paris:~] mike% ./crawler.pl http://www.google.com/robots.txt

The server's response will be identical.

Tip 

Some web sites may respond differently to requests depending on the User-Agent header. Keep this in mind when putting together helper scripts and utilities for crawling and auditing a web application.

Many other functions operate on the HTML response from the web server. Such functions enable easy manipulation of forms, links, and headers, as well as the capability to alter the content of URL requests in order to test intrusion detection systems. A short perusal of the output of the perldoc LW2 command will demonstrate many of these functions. Plus, the code is written clearly enough that users familiar with Perl will be able to quickly put together powerful scripts.


