Section 40.5. Objective 6: Security Tasks


40.5. Objective 6: Security Tasks

40.5.1. Kerberos

There are two current versions of Kerberos, 4 and 5. We will describe only Version 5, the more stable and secure of the two, and cover only the basic concepts and configuration, because Kerberos carries little weight in the LPI Objectives. To go deeper, read Kerberos: The Definitive Guide (O'Reilly).

40.5.1.1. Overview

Kerberos is known in Greek mythology as the three-headed hound that guards the gates of the underworld, deciding who may pass. In our world, Kerberos is an authentication system developed at MIT. It uses encryption technology and a trusted third party (hence the three heads) to perform secure user authentication among multiple users and application servers.

A Kerberos server can solve many authentication problems by keeping a centralized password database and encrypting the traffic that performs authentication (passwords are never sent over the network in clear text). Thus, it centralizes authentication services with some of the strongest security available.

Currently, several Kerberos implementations are available. MIT Kerberos is the original and is widely supported. Heimdal is newer and is developed by contributors around the world, which makes the code more open and more usable outside the U.S.


Tip: Microsoft has adopted Kerberos, shipping its own version with Windows domain controllers, but that isn't relevant to this Topic. Windows Kerberos servers have extensions that make it difficult for them to interoperate with Linux or other standard Kerberos servers.
40.5.1.2. Server installation and configuration

To use Kerberos, you'll first need to install all of its server binaries and libraries. Download them from your distribution repository or from the developers' site, http://web.mit.edu/kerberos. The package should include the tools in Table 40-3.

Table 40-3. Kerberos executables

Tool

Description

kadmind

The administration server daemon

kdb5_util

Kerberos database maintenance utility

krb5kdc

Kerberos authentication service daemon

kpropd

Kerberos database propagation daemon


You can now set up your Key Distribution Center (KDC), the central server that other hosts look to in order to perform authentication. To create your Kerberos realm, edit the /etc/krb5.conf file. Generally, the package you installed comes with an example file that you'll find very useful and practical. Just fill in your information (realm name and Kerberos server FQDN).

 [logging]
  default = FILE:/var/log/krb5libs.log
  kdc = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log

 [libdefaults]
  default_realm = LPI.ORG.BR
  dns_lookup_realm = false
  dns_lookup_kdc = false

 [realms]
  LPI.ORG.BR = {
   kdc = kerberos.lpi.org.br:88
   admin_server = kerberos.lpi.org.br:749
   default_domain = lpi.org.br
  }

 [domain_realm]
  .lpi.org.br = LPI.ORG.BR
  lpi.org.br = LPI.ORG.BR

 [kdc]
  profile = /var/kerberos/krb5kdc/kdc.conf

 [appdefaults]
  pam = {
   debug = false
   ticket_lifetime = 36000
   renew_lifetime = 36000
   forwardable = true
   krb4_convert = false
  }

Issue the command to create the Kerberos database and initialize the realm:

 # kdb5_util create -s
 Loading random data
 Initializing database '/var/kerberos/krb5kdc/principal' for realm 'LPI.ORG.BR',
 master key name 'K/M@LPI.ORG.BR'
 Enter KDC database master key:  Enter key
 Re-enter KDC database master key to verify:  Re-enter key


Tip: You will be prompted for the database master password. Remember this password, because you will need it later to administer the Kerberos server, but don't let it slip out, because whoever learns it holds all your site's security in his hands.

A Kerberos server must run the krb5kdc and kadmind daemons. On the KDCs, those services should be configured to start automatically at boot time.

The following commands run the servers:

 # /etc/init.d/krb5kdc start
 # /etc/init.d/kadmin start

You can create a user principal in Kerberos with the following commands:

 # kadmin.local
 kadmin.local:  addprinc user
 WARNING: no policy specified for user@LPI.ORG.BR; defaulting to no policy
 Enter password for principal "user@LPI.ORG.BR":
 Re-enter password for principal "user@LPI.ORG.BR":
 Principal "user@LPI.ORG.BR" created.

40.5.1.3. Client configuration

Basic client configuration is straightforward. Edit the file /etc/krb5.conf as you did on the server. You must also modify the /var/kerberos/krb5kdc/kdc.conf file, which contains information about the realm's encryption algorithm policy.

The configuration information for the system on which you wish to perform Kerberos authentication is the same information that you placed in the /etc/krb5.conf file on the KDC.

To obtain and cache a Kerberos ticket, enter:

 # kinit user
 Password for user@LPI.ORG.BR:

If nothing is returned, you are doing well. To list the tickets granted, enter:

 # klist
 Ticket cache: FILE:/tmp/krb5cc_0
 Default principal: user@LPI.ORG.BR

 Valid starting     Expires            Service principal
 01/06/06 21:37:02  01/07/06 20:42:03  krbtgt/LPI.ORG.BR@LPI.ORG.BR

You can destroy tickets using this intuitive command:

 # kdestroy 

Now make sure that everything has been cleaned up:

 # klist
 klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_0)

Check the /etc/pam.d directory and use the pam_krb5 PAM module to add Kerberos authentication to a network or local service. Look at this example:

 auth       required     /lib/security/pam_krb5.so use_first_pass 

More information about PAM authentication can be found in Chapter 39.

You can use Kerberized Telnet and FTP services to test your new environment. Kerberos can also be used as an authentication mechanism for the Apache web server. The mod_auth_kerb Apache module provides that functionality; it can be downloaded from http://modauthkerb.sourceforge.net. Change the Apache configuration file following this example:

 <Directory "/home/httpd/htdocs/content">
    AllowOverride None
    AuthType KerberosV5
    AuthName "Kerberos Login"
    KrbAuthRealm LPI.ORG.BR
    require valid-user
 </Directory>

It's very important to keep time synchronized on all systems. Otherwise, you could fall into a common problem:

 # kinit user
 Password for user@LPI.ORG.BR:
 kinit(v5): Clock skew too great while getting initial credentials

Use ntpdate in your crontab files to update the system time frequently from a trusted source (an NTP server). That should solve the problem.
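Such a crontab entry might look like the following sketch; the schedule, the path, and the NTP server name are all illustrative placeholders, not values from this chapter:

```
# /etc/crontab entry (example): sync the clock hourly from a trusted NTP server
0 * * * *  root  /usr/sbin/ntpdate -s pool.ntp.org
```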

40.5.2. Security Auditing Source Code

This is a very challenging field. There are almost no courses on the subject of writing secure software, and almost no one takes the courses that do exist. When talking about security, many people talk only about encryption, but encryption makes things more secure only if it is used properly (and it very often isn't). The rest of the program, and the environment it runs in, must also be secure (and very often aren't). To do a real audit, you should know the programming language in question well and have extensive knowledge of how it and other languages have been exploited in the past. Linux, Unix, and the family of C languages were never designed for security, and a compendium of security pitfalls would go on forever. The task of writing secure programs falls upon the programmer, who is usually not prepared for it.

If you want to learn about secure programming, one place to start is the Linux HOWTO "Secure Programming for Linux and Unix" (http://www.dwheeler.com/secure-programs/Secure-Programs-HOWTO). You can also read books such as Innocent Code (Wiley), which deals exclusively with web application security. Following some mailing lists, such as the ones mentioned later in this chapter, will also orient you about how things go wrong in software. If you start to follow this field, you will see, as described earlier, that pitfalls are legion. And also that you quite probably should be just a little bit scared.

When a program runs as root and interacts with a user in any way, it is especially important that it have no weaknesses. We'll look at four of the worst offenders, two of which are quite easy to audit for.

40.5.2.1. Executing subprograms

Linux gives you many good reasons to execute a program from within a program. After all, Linux, like Unix, was built on a toolbox philosophy: many small utilities working together. Hence the long command lines that pipe data from one program to another. The subprograms may not be secure themselves, but that is another matter.

You should look out for three calls, provided by the C library and mimicked by other popular languages on Linux: system, popen, and the exec* family. The first two execute shells that run subprograms, while the exec* functions replace the current program with a new one without invoking a shell.

A shell can mainly be subverted in three ways: via the PATH, via IFS, and via insecure input. We'll return to how to check input shortly.

The PATH variable is used by shells as a list of directories in which to look for executable programs. If the user running your program has arranged for the search to start in his $HOME/bin directory and has placed there well-chosen executables that he knows your program will invoke, such as gzip to read compressed files, your program is broken into: he can replace gzip with anything he likes.

Second, if the subprogram is a shell script, the IFS variable can be changed to alter the meaning of the script. IFS contains the Input Field Separators: the characters recognized as word separators in a script. Ordinarily, IFS contains space, tab, and newline. If it is changed to include colon and semicolon, and perhaps single and double quotes, a script can suddenly take on a very different meaning.

The lesson from all this: set the PATH and IFS variables explicitly in any program that executes as a privileged user.
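A minimal sketch of that rule in C follows; the function name harden_environment and the chosen safe values are our own illustration, not an API from this chapter:

```c
/* Sketch: harden the environment before a privileged program runs any
 * subprogram via system() or popen(). */
#include <stdlib.h>

/* Overwrite PATH and IFS with known-safe values; returns 0 on success. */
int harden_environment(void)
{
    /* Only trusted directories: a subprogram name such as "gzip" can no
     * longer resolve to something in the caller's $HOME/bin. */
    if (setenv("PATH", "/bin:/usr/bin", 1) != 0)
        return -1;
    /* Only space, tab, and newline as word separators for any scripts. */
    if (setenv("IFS", " \t\n", 1) != 0)
        return -1;
    return 0;
}
```

Call this early in main(), before any system, popen, or exec* call, so that every subprogram inherits the sanitized values.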

You should watch for calls in the exec* family for the same reason. Even if the PATH is not used for the exec call itself, the called program inherits it and may call another program. The IFS warning also applies, because surprisingly often a Linux program is a script.

40.5.2.2. Checking input

While it is true that a program should try to be as understanding of its user as possible to make it easy to use, it is also true that a program should be very critical of everything a user says. If a program gets the user to input values that are going to be used in a SQL statement and it accepts things such as quotes in the value, the user can construct an entirely different SQL statement. If you start out with something like this statement:

 select * from employees   where name like "$value" 

then the user could give a carefully constructed value such as Smith%"; delete from employees where name like "%. When inserted into your SQL query, this becomes:

 select * from employees   where name like "Smith%"; delete from employees where name like "%" 

which will very likely empty the employees table. This is a very brute-force and stupid example; the more subtle thing to do is to adjust the wages table. Quite a lot of faults can lead to this kind of attack, but the main one is failing to check input values in the program. The rule for input values should be that all input that is not explicitly allowed is forbidden. When inputting names, there is no reason to accept quotes and semicolons; in fact, one should exclude any character that is not alphabetic, a space, an apostrophe, or a period, which should at least handle most Western names. (If we're talking about Unicode, this becomes a whole other headache.) If a program takes this kind of approach to all values that are input, it will become dramatically safer. In a web setting, the kind of attack used as an example here is called SQL injection and is wildly popular.
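The whitelist rule for names can be sketched in C like this; name_is_safe is an illustrative helper of ours, and the allowed set is exactly the one named above (alphabetic, space, apostrophe, period):

```c
/* Sketch: whitelist validation. Everything not explicitly allowed
 * is forbidden. */
#include <ctype.h>

/* Return 1 if s looks like an acceptable Western name, 0 otherwise. */
int name_is_safe(const char *s)
{
    if (*s == '\0')
        return 0;                      /* empty input is not a name */
    for (; *s != '\0'; s++) {
        unsigned char c = (unsigned char)*s;
        if (!(isalpha(c) || c == ' ' || c == '\'' || c == '.'))
            return 0;                  /* reject anything not allowed */
    }
    return 1;
}
```

With this filter in place, the injection string shown above is rejected at the first % character, long before it reaches the SQL statement.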

In a Linux setting, when subprograms are executed using system or popen as mentioned earlier, it has also been quite popular to give values such as:

  foo; rm -rf / 

which in a privileged program that does not check its input would cause the whole filesystem to be deleted.

The moral of this? Never trust your user, check everything, and reject or suppress anything that is even remotely fishy.

40.5.2.3. Buffer overflows

This class of insecurity has also been wildly popular with the crackers. It is another example of not checking input properly. The C/C++ languages and the libraries they use are very vulnerable to this, because many of the functions do not do bounds checking. They will gladly input 5000 characters into a 128-character buffer. Overflows are the most fun (for those who deliberately cause them) when the flaw appears in a network service, so that a remote user can gain root shell on the host just by sending some magical input to the network service. But vulnerabilities are also seen in programs that users can run when logged in to a machine.

A buffer overflow occurs when giving a variable a value also overwrites the contents of adjacent variables on the stack, heap, or global variable store of the program. By putting her own values into the neighboring memory areas, an intruder can subvert a program to do anything; the most common trick is to start a root shell.

There are many ways to cause a buffer overflow, all of which involve not checking the length of the data the program is handling. If you're going to copy an environment variable into a string buffer, always check the length of the value first, or use the dynamic strdup function or one of the strn* functions that refuse to copy more than a set number of characters. If you're reading input from a user or the network, do not use gets or any other function that does not accept a length parameter; fgets does the same job but limits the input. And remember that return values from functions such as gethostbyaddr should also be considered input; you have no way of knowing that the hostname fits in the 64-byte buffer set aside for it.
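A small sketch of the length-checking habit follows; copy_bounded is an illustrative helper of ours, in the spirit of the strn* functions mentioned above:

```c
/* Sketch: a bounded copy that rejects oversized input instead of
 * overflowing the destination buffer. */
#include <string.h>

/* Copy src into dst (dstsize bytes); return 0 on success, -1 if src
 * plus its terminating NUL would not fit. */
int copy_bounded(char *dst, size_t dstsize, const char *src)
{
    if (dstsize == 0 || strlen(src) >= dstsize)
        return -1;                      /* too long: reject, don't overflow */
    memcpy(dst, src, strlen(src) + 1);  /* length already checked, so safe */
    return 0;
}
```

The same discipline applies to reading: fgets(line, sizeof(line), stdin) will stop after sizeof(line) - 1 characters no matter how much input arrives, which is exactly what gets fails to do.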

40.5.2.4. Unsafe temporary file creation

Programs that use temporary files are in danger because they most often use a common temporary area such as /tmp or /var/tmp in which users can leave nasty surprises beforehand to subvert the program. If it is well-known what a temporary file is named, such as /tmp/work.pid, and the program does not create the file the right way, it can be bamboozled into overwriting critical system files. If the attacker is lucky, the victim is even a file that then lets him log in as root afterward.

Many programs use combinations of calls such as tmpnam, mktemp, and open to create a temporary file. The tmpnam function does not create a very safe name. mktemp creates safe names on Linux, but not on some BSD variants. Then, after obtaining a safe, or unsafe, name, one opens the file with open or fopen, but if this is not done correctly, someone may create a symbolic link with exactly that name that points to some file she wants overwritten. A temporary file in a public file area should be opened only with the flags O_CREAT and O_EXCL, which ensure that the file is new and did not previously exist, not even as a symbolic link. The fopen call is not suitable for this at all; the open call must be used. The easy way to do this in one step is to use mkstemp, which fashions a name and opens the file correctly. If the program needs to use standard I/O functions, it should use fdopen to obtain a FILE * handle from the file descriptor returned by mkstemp or open.

40.5.3. IDS

In a secure environment, you usually want IDS tools somewhere to help detect attempts at break-ins and, most definitely, to help discover a break-in once it has happened. IDSs fall broadly into two categories, network and host, depending on what they monitor. Network tools usually sit at firewalls and use network-snooping techniques, whereas host tools monitor system activities. A third kind of IDS tool is a host-resident network scan detector. PortSentry is one such tool. It is included in Debian, though not in many other distributions, but it is easy enough to compile. It can be downloaded from http://sourceforge.net/projects/sentrytools.

PortSentry works by listening on a lot of likely ports: ports that are often subject to network scans. The default configuration is quite good. On a Debian system, it is stored in /etc/portsentry. Some excerpts from portsentry.conf (the lines are broken to fit the page; PortSentry does not support line continuation) follow:

 # Use these if you just want to be aware:
 TCP_PORTS="1,11,15,79,111,119,143,540,635,1080,1524,2000,5742,6667,
     12345,12346,20034,27665,31337,32771,32772,32773,32774,40421,49724,
     54320"
 UDP_PORTS="1,7,9,69,161,162,513,635,640,641,700,37444,34555,31335,
     32770,32771,32772,32773,32774,31337,54321"
 ...
 # iptables support for Linux
 #KILL_ROUTE="/sbin/iptables -I INPUT -s $TARGET$ -j DROP"

PortSentry will listen to all these ports. If something connects to them, PortSentry will log it or optionally run a command. The log entry looks something like this (lines truncated):

 portsentry[578]: attackalert: Connect from host: 172.16.73.1/172.16.73.1
     to TCP port: 32772
 portsentry[578]: attackalert: Ignoring TCP response per configuration
     file setting.
 portsentry[578]: attackalert: Connect from host: 172.16.73.1/172.16.73.1
     to TCP port: 79
 portsentry[578]: attackalert: Host: 172.16.73.1 is already blocked. Ignoring

As shown in the previous configuration, PortSentry can run firewall configuration commands to block attackers entirely. If you do so, you should probably put rules into the FORWARD chain as well as the INPUT chain shown in the example. The -I option inserts the rule at the front of the chain instead of appending it as -A does. This can be quite dangerous and a useful aid to a DoS attack: if someone scans you over UDP from a spoofed address, he can easily provoke PortSentry into blocking that address, which could be an important one, such as your DNS server or your email server. Thus an attacker can very easily cause you major trouble, and even long downtimes. The best way to use such a tool is probably to generate real-time alerts for administrators to investigate and then perhaps act on.

There is a considerable downside to PortSentry. A host monitored by it may look like a very tempting target when scanned, because it appears to run a lot of services that are useful for break-ins because they have a history of weaknesses. Furthermore, the upside may not be that big. Most reconnaissance tools used in network probing, such as nmap -sS, do not actually open a connection. They cause a half-open connection by sending a SYN, and when the SYN+ACK comes back, they conclude that the port is open. But they never send the final ACK, so the connection is never established and PortSentry never gets a whiff of the scan.

As stated in this chapter, IDS tools fall into two categories: host and network. Here we'll look at one tool in each category.

40.5.3.1. Tripwire

Tripwire uses file fingerprinting to detect changes in files. Information that goes into a fingerprint includes size, time, and date but, more importantly, checksums of different kinds. Tripwire was the first well-known software to do this, and when the work on these LPI Exams started, it had not yet been surpassed. Its name is still held in high regard around the Internet. But the free Tripwire version is at this point very much abandonware. Tripwire originated at Purdue University, which licensed it to Tripwire, Inc. in 1997; Purdue has made no releases since Version 1.2 from August 1994, except for a patch in 1996. Version 1.x remains available in quite old releases of Red Hat, SUSE, and other Linux distributions. The subsequent Version 2.x was released under the GPL, and Version 2.3.1 from March 2001 is still available, but it cannot be compiled with current versions of gcc without modifying the source code.

There is another tool, called aide, that is included in Debian but not Red Hat. It does more or less the same job as Tripwire and is actively maintained. This is what you should use if you want to install a fingerprinting tool now.

Still, Tripwire is documented, and a summary of the documentation follows.

40.5.3.1.1. Overview of Tripwire

The basic workflow in Tripwire is pretty simple. First, you create the needed site and local encryption keys, a one-time task, with twadmin. These keys are later used to encrypt and sign Tripwire files to ensure that any changes to the files by an intruder will be plainly detectable. Then the real work starts: writing a policy file. This file controls which attributes of a file are fingerprinted and the severity of the situation if a file changes. The policy file is then encrypted and signed with the twadmin command. Once a policy is set, you generate a database of the fingerprints of all the files named in the policy. This is often called the baseline of the system, and it is created with the tripwire command. Later, tripwire is run in checking or update mode to find deviations from the baseline and optionally to accept the changes as legitimate.

Tripwire uses several files. In most cases there is no need to override their locations. /etc/tripwire/tw.cfg contains system information and file paths; a useful default is installed during the initial installation. /var/lib/hostname.twd is the fingerprint database file. Tripwire uses cryptography to make it impossible to interfere with its operation without the tampering becoming obvious. The /etc/tripwire/site.key key is used for files that can be shared across several hosts. The /etc/tripwire/hostname-local.key key is used for files specific to the given host. The Tripwire policy is stored in encrypted and signed form in /etc/tripwire/tw.pol and in plain text in /etc/tripwire/twpol.txt. Policy files are created with twadmin.

40.5.3.1.2. Tripwire policy file format

The policy file is made up of comments and rules. Most of the examples in this section are from the twpolicy manpage in the GPL version of Tripwire 2.3, written by Tripwire, Inc. The general file format looks like this:

 # This is a comment.
 # This is a variable assignment
 mask1 = value;
 # A rule line
 path  ->  property_mask;  # A comment can go here, too.
 # A stop point
 !path;

All lines that are not comments must end in semicolons. Each path is the path of a file or directory. On a rule line, the path is followed by whitespace, the separator (->), and more whitespace. If the path is a directory, the setting applies to the directory and everything underneath it. Stop points are used to exempt files or directories from scanning. A somewhat realistic example:

 # This demonstrates regular rule lines.
 # Defines Tripwire behavior for the entire /bin directory tree.
 /bin            ->  $(ReadOnly);

 # Defines Tripwire behavior for a single file. In this case,
 # Tripwire watches all properties of /etc/hostname.
 /etc/hostname   ->  $(IgnoreNone) -ar;

 mask1 = $(IgnoreAll) +ugp;
 mask2 = $(IgnoreNone) -li;

 # Scan the entire /etc directory tree using mask1, except the
 # file /etc/passwd, which should be scanned using mask2.
 /etc            ->  $(mask1);
 /etc/passwd     ->  $(mask2);

 # These are stop points.
 # The directory /etc/init.d will not be scanned.
 !/etc/init.d;
 !/etc/rc.d;
 !/etc/mnttab;

 # To summarize: scan all of /etc, but do not scan two particular files
 # and one directory in the /etc hierarchy.

In the listing, $(ReadOnly) refers to a variable. Variables can be used anywhere on a line. The property masks decide what makes up the fingerprint of the given file or directory. The basic building blocks of property masks are single characters, each specifying a single property, prefixed by a plus or minus to enable or disable checking of that property. A number of predefined variables provide useful property masks. ReadOnly is used for files that should not change at all. Growing is good for files that should only grow, mostly logs. Device is good for device files; it stops Tripwire from trying to read them to generate checksums. IgnoreAll is a good starting point if you want to watch only a few attributes. IgnoreNone is good if you're watching all but a few attributes.

Each of the 18 attributes shown in Table 40-4 controls the recording and checking of one property of the file.

Table 40-4. Tripwire attributes

Attribute

Description

a

Access time.

b

Number of blocks allocated.

c

Inode creation/change timestamp.

d

Device number of the inode/file.

g

File group.

i

Inode number.

l

File size is growing (violated if the file is ever smaller than before, such as after a log rotation).

m

Modification time.

n

Number of links to the inode.

p

Permission bits.

r

For device files: major and minor device numbers.

s

File size.

t

File type.

u

File owner.

C

CRC-32 checksum. Not a good idea to use this, because it is easily duped.

H

Haval hash value.

M

MD5 hash value. This is cryptographically strong and extremely hard to dupe.

S

SHA hash value. Also safe.


Rules can be given attributes in two ways:

 /etc          -> $(ReadOnly) (attribute = value, attribute = value);

 (attribute = value ... )
 {
      /usr/lib -> $(ReadOnly);
 }

The rule attributes are:


rulename=name

This is used when checking or updating the database; you can search the reports for this rule name.


emailto=email_address

If a file fails the fingerprinting test and the check is run with the --email-report option, the specified address will be emailed about the problem.


severity=integer

Severities run from 0 to 1000000, with 0 as the default. When running a check, you can ignore files with a severity lower than a given level.


recurse=true|false|integer

Disable or limit recursion into directories. If set to false, the directory is handled as if it were a file. If set to true, Tripwire recurses infinitely. An integer value sets the maximum depth of recursion. Stop points override this setting.

In addition, you can put directives in the policy files. To wit:

 @@section section_name          # Updates and checks can be restricted to
                                 # certain sections only.
 @@ifhost hostname [ || hostname ]
   rules                         # To get different rules for different hosts.
 @@else
   rules
 @@endif
 @@print message                 # Print message on standard output
 @@error message                 # Print message and exit
 @@end                           # Stop processing here


Syntax

 tripwire -m i|--init   [ options... ]
 tripwire -m c|--check  [ options... ] [ object1... ]
 tripwire -m u|--update [ options... ]
 tripwire -m p|--update-policy [ options... ] policyfile.txt
 tripwire -m t|--test -e|--email email_address


Description

To build a fingerprint database, use initialization mode. Then run a check once a day, or once an hour, from a crontab file. If a reported change is legitimate, use update mode to update the fingerprints.
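Such a crontab entry might look like the following sketch; the schedule and binary path are illustrative assumptions, while the -m c and --severity options are the ones described below:

```
# /etc/crontab entry (example): nightly Tripwire check at 02:00,
# reporting only rules of medium severity or above
0 2 * * *  root  /usr/sbin/tripwire -m c --severity medium
```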


Database initialization

Once the policy is correct, the database can be created. Some relevant options include:


-m i, --init

Mode selector: initialize the database.


-v, --verbose

Be verbose.


-s, --silent, --quiet

Be silent.


-e, --no-encryption

Disable cryptographic signing of the database.


Checking mode

Tripwire should be run in checking mode periodically. This verifies that the files watched are OK and have not been changed. Some popular options include:


-m c, --check

Activate checking mode.


-I, --interactive

Open the check summary in an editor. This summary can then be edited to enable changes in the database on a file-by-file basis.


-r report, --twrfile report

The default location for reports is /var/lib/tripwire/report/hostnamedate.twr.


-l level|name, --severity level|name

Check only rules with severity at or above the given level. Valid level names are low, medium, and high, corresponding to a level of 33, 66, and 100 respectively.


-R rule, --rule-name rule

Check only the named rule. Cannot be combined with -l.


Database updates

If the database is not updated interactively in check mode, it can be updated wholesale or interactively with this mode. Options include:


-m u, --update

Activate update mode.


-a, --accept-all

Accept all changes.


-V editor, --visual editor

Use the given editor to accept individual changes interactively.


Replace policy file

You can replace the policy file with -m p or --update-policy, but this is better done with twadmin (discussed later).


E-mail test report

This option is used to send a test report to the given address.


-m t, --test

Activate test mode.


-e mail address, --email mail_address

Send mail to this address.


Syntax

 twadmin -m F|--create-cfgfile options... configfile.txt
 twadmin -m f|--print-cfgfile [ options... ]
 twadmin -m P|--create-polfile [ options... ] policyfile.txt
 twadmin -m p|--print-polfile [ options... ]
 twadmin -m R|--remove-encryption [ options... ] file1 [ file2... ]
 twadmin -m E|--encrypt [ options... ] file1 [ file2... ]
 twadmin -m e|--examine [ options... ] file1 [ file2... ]
 twadmin -m G|--generate-keys options...


Description

twadmin administers Tripwire configuration and policy files. These are kept in an encoded format and signed with the local or site encryption keys. The keys are also generated with this utility.


General options


-v, --verbose

Verbose output.


-s, --silent, --quiet

Silent mode.


-S site_key, --site-keyfile site_key

Use the given site key for encrypting or decrypting.


-Q pass_phrase, --site-passphrase pass_phrase

Use together with -S to give the passphrase for encoding and signing.


-L local_key, --local-keyfile local_key

Use the given local key for encrypting or decrypting.


-P pass_phrase, --local-passphrase pass_phrase

Use this passphrase with the local key.


-e, --no-encryption

Do not encrypt (or sign) the file. Usually you need to use -S, -L, or -e (or one of their long equivalents) in an operation.


-c cfgfile, --cfgfile cfgfile

When operating on a configuration file, print or save to this filename.


-p polfile, --polfile polfile

When operating on a policy file, print or save to this filename.


Configuration file operations

Configuration files can be replaced or printed. Configuration files are either unencrypted or signed with the site key.

Use -m F or --create-cfgfile to create a configuration file. An input text file should be given as the last argument on the command line.

You can print the configuration file through -m f or --print-cfgfile.


Policy file operations

Policy files are encrypted and signed with the site key. They can be created or printed.

To create a policy file, use the -m P or --create-polfile option. The last thing on the command line should be the name of a text file containing the new policy file.

To print a policy file, use -m p or --print-polfile.


Encryption operations

twadmin can also be used to encrypt, decrypt, and check the encryption of files using either site or local keys.

A Tripwire file can be unencrypted with -m R or --remove-encryption. Last on the command line should be the name of the file or files to decrypt.

The reverse is accomplished with -m E or --encrypt. It too takes the name of the input file or files last on the command line.

Encrypting and signing files is useless without a way to verify the signature to see whether the file is unchanged. This is accomplished with the -m e or --examine option. Here too you should list the file or files to examine last on the command line.


Generating encryption keys

With the -m G or --generate-keys option, you can generate local or site keys as needed.

40.5.3.2. Snort

Snort is a network IDS. It listens to network traffic, looking for specific patterns that indicate some kind of attack or scan. Snort is included in Debian but not Red Hat. The Snort web page at http://www.snort.org has documentation and RPMs. This material is based on Snort 2.

Snort can be installed with different database backends, such as MySQL or PostgreSQL, but it can also log to flat files. In most cases you will want a database backend and a presentation frontend such as ACID. Snort has three modes of operation: sniffer (which shows packets on your terminal), packet logger (which saves packets to disk), and most interestingly, intrusion detection mode (in which it analyzes packet traffic according to different rules). We'll focus on Snort with no database backends in intrusion detection mode. Even with this restriction, a full description of Snort is unreasonable and the description here will be focused very directly on elementary use and rule writing.
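As a sketch, the three modes correspond to command lines like the following (the interface name and paths are examples; the option letters are the classic Snort 2 ones):

```shell
# Sniffer mode: print packet headers to the terminal
snort -v

# Packet logger mode: decode application data (-d), show link-layer
# headers (-e), and save packets under /var/log/snort
snort -dev -l /var/log/snort

# NIDS mode: apply the ruleset from snort.conf to traffic on eth0
snort -d -c /etc/snort/snort.conf -l /var/log/snort -i eth0
```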

The Snort configuration is in /etc/snort/snort.conf. A distribution-specific configuration used by the init.d script is found in /etc/snort/snort.debian.conf on Debian and /etc/sysconfig/snort on Red Hat systems. What the Debian packager and the Red Hat packager considered as useful to put in there does not overlap much, but both define which interfaces should be listened on. You should review these and adjust as needed. On Debian, for example, you should define HOME_NET in the system configuration file. On Red Hat this is done in the main snort.conf file. All in all, the init.d scripts build pretty complex command lines, specifically:

 /usr/sbin/snort -m 027 -D -S HOME_NET=[172.16.73.127/24] \
     -c /etc/snort/snort.conf -l /var/log/snort -d -u snort -g snort -i eth0

on Debian, and you should either always use the scripts or never use them.

The significance of the HOME_NET definition on the command line or in the configuration file is that Snort will know who's friend and who's foe. This helps restrict the pattern matches. The -l option specifies the log directory. The -u and -g options tell Snort what user and group to run as when it is done with the setup work that requires root access. The -d option causes application data to be logged, while the -m option sets the file creation umask. It is the -c option that brings in the IDS configuration and turns Snort into a Network IDS. This default configuration is probably not enough to keep up with a full gigabit Ethernet. If you need to do that, you will find ways to do it and other tuning tips in the documentation and forums on the web site.

Once Snort is running, you can try to scan your Snort host with nmap -sS host (shown later) while looking at /var/log/snort/alert. If you scan from a host that Snort regards as external, you should get a good number of alerts, some examples of which are shown here:

 [**] [1:469:1] ICMP PING NMAP [**]
 [Classification: Attempted Information Leak] [Priority: 2]
 02/01-12:07:22.770254 172.16.73.1 -> 172.16.73.127
 ICMP TTL:59 TOS:0x0 ID:35658 IpLen:20 DgmLen:28
 Type:8  Code:0  ID:44124   Seq:60516  ECHO
 [Xref => http://www.whitehats.com/info/IDS162]

 [**] [117:1:1] (spp_portscan2) Portscan detected from 172.16.73.1: \
    1 targets 21 ports in 1 seconds [**]
 02/01-12:07:23.073961 172.16.73.1:57705 -> 172.16.73.127:688
 TCP TTL:54 TOS:0x0 ID:10591 IpLen:20 DgmLen:40
 ******S* Seq: 0x11BFB904  Ack: 0x0  Win: 0xC00  TcpLen: 20

 [**] [1:618:4] SCAN Squid Proxy attempt [**]
 [Classification: Attempted Information Leak] [Priority: 2]
 02/01-12:07:23.114308 172.16.73.1:57705 -> 172.16.73.127:3128
 TCP TTL:37 TOS:0x0 ID:29886 IpLen:20 DgmLen:40
 ******S* Seq: 0x11BFB904  Ack: 0x0  Win: 0x800  TcpLen: 20

 [**] [1:1420:2] SNMP trap tcp [**]
 [Classification: Attempted Information Leak] [Priority: 2]
 02/01-12:07:23.136749 172.16.73.1:57705 -> 172.16.73.127:162
 TCP TTL:39 TOS:0x0 ID:4837 IpLen:20 DgmLen:40
 ******S* Seq: 0x11BFB904  Ack: 0x0  Win: 0x1000  TcpLen: 20
 [Xref => http://cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2002-0013]
 [Xref => http://cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2002-0012]
 ...

40.5.3.2.1. Configuring Snort

The main snort.conf file is very generic, but can take a lot of customization. Most importantly, it defines some networks (HOME_NET, EXTERNAL_NET) to help Snort see whom it's working for. Because most Snort rules concentrate on traffic from EXTERNAL_NET to HOME_NET, it's important to define those variables in a way that catches the traffic you want to detect. Sometimes you don't trust people on the inside either.

Next, you can enumerate DNS servers to help Snort restrict its DNS attack checking to traffic headed for the actual DNS server. This saves time. You can do the same for SMTP, HTTP, SQL, and Telnet servers. For the full list of options, see the comments in the configuration file and the documentation. On the bottom of the config file are a lot of include statements. These include the actual IDS rules that help identify specific attacks. The easiest high-level configuration of Snort is to simply include and exclude rulesets here.
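For illustration, the variable definitions and rule includes described above might look like this in snort.conf (the addresses and ruleset names are hypothetical):

```conf
var HOME_NET 192.168.1.0/24          # the network Snort is protecting
var EXTERNAL_NET !$HOME_NET          # everyone else
var DNS_SERVERS [192.168.1.2,192.168.1.3]
var RULE_PATH /etc/snort/rules

include $RULE_PATH/scan.rules        # included ruleset
include $RULE_PATH/web-misc.rules
# include $RULE_PATH/x11.rules       # excluded ruleset
```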

40.5.3.2.2. Understanding Snort rules

When you come to the actual rules, there is rather a lot to learn. We'll use some examples. If you refer back to the previous alerts, you will find this message:

 [**] [1:618:4] SCAN Squid Proxy attempt [**]
 [Classification: Attempted Information Leak] [Priority: 2]
 02/01-12:07:23.114308 172.16.73.1:57705 -> 172.16.73.127:3128
 TCP TTL:37 TOS:0x0 ID:29886 IpLen:20 DgmLen:40
 ******S* Seq: 0x11BFB904  Ack: 0x0  Win: 0x800  TcpLen: 20

It corresponds to this rule:

 alert tcp $EXTERNAL_NET any -> $HOME_NET 3128 \
    (msg:"SCAN Squid Proxy attempt"; flags:S,12; \
    classtype:attempted-recon; sid:618; rev:4;)

This rule says that all traffic from any port on EXTERNAL_NET to port 3128 on HOME_NET is a reconnaissance attempt. It does not matter whether the connection succeeds. And the rule is right: because web proxies are usually for internal use only, access from the outside is suspect.

Now we'll look at two rules to detect X Window System connections going up from the outside. Each rule starts with an action. Actions can be one of:


activate

Raise an alert and activate a dynamic rule.


alert

Raise an alert and log the packet.


dynamic

This rule does nothing until it is activated by an activate rule; once active, it logs packets.


log

Log the packet.


pass

Ignore the packet.

The next field is the IP protocol. Snort can currently handle tcp, udp, icmp, and ip.

Next is the source address and port, followed by a direction operator, -> here, and the destination address and port. The example shown earlier simply uses variables for this, but some more complex syntax is available:


any

Any address


d.d.d.d

Regular dotted decimal, such as 172.20.12.88


d.d.d.d/m

Dotted decimal with network mask such as 172.20.12.0/22


[a, a,...]

List of addresses and networks, such as [172.20.12.88,62.179.100.29,192.168.0.0/24]


!a

Not the address or list; examples include !172.20.12.88 and ![172.20.12.88,192.168.0.0/24]

The ports can be specified like this:


any

Any port.


d

A port number, such as 22 for SSH.


d:d, :d, d:

A port range. If the lower bound is left out, it defaults to 0. If the higher bound is left out, it defaults to 65535. Examples include 6000:6010 for X Window System ports, and :1024 for all privileged ports.


!ports

Not the given ports.
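As a sketch of how these port specifications are interpreted, the following shell function fills in the defaults (0 for a missing lower bound, 65535, the highest port, for a missing upper bound):

```shell
# Expand a Snort-style port range into explicit low-high bounds.
parse_ports() {
    spec=$1
    low=${spec%%:*}             # text before the colon
    high=${spec##*:}            # text after the colon
    [ -z "$low" ] && low=0      # ":1024" means 0:1024
    [ -z "$high" ] && high=65535  # "400:" means 400:65535
    echo "$low-$high"
}

parse_ports 6000:6010    # the X Window System ports
parse_ports :1024        # all privileged ports
parse_ports 400:         # everything from port 400 up
```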

Between the addresses comes the direction operator, which is either -> or <>. The latter can be used to log both sides of a protocol exchange. There is no <- operator, because that would cause insanity.

activate and dynamic rules are very powerful, but are being replaced by tagging. An example from the Snort documentation follows. It's worth noting that there are no dynamic rules in the default rule base.

 activate tcp !$HOME_NET any -> $HOME_NET 143 (flags: PA; \
    content: "|E8C0FFFFFF|/bin"; activates: 1; \
    msg: "IMAP buffer overflow!";)
 dynamic tcp !$HOME_NET any -> $HOME_NET 143 \
    (activated_by: 1; count: 50;)

The activate rule detects an IMAP buffer overflow attempt (the connection has to be up for this to occur) and the dynamic rule logs the 50 following packets. This should be able to document what happened or what the attacker attempted to accomplish.

Once a combination of protocol, from address/port, and to address/port has been matched, processing continues inside the parentheses. The most useful processing commands are probably flags to match TCP flags, content to do simple packet content matching, and msg to specify the alert text.


flags

Flags are character codes corresponding to TCP protocol bits. Most administrators have heard of the SYN and ACK bits, which are coded S and A respectively. Also available are F, R, P, U, 1, and 2, corresponding to FIN, RST, PSH, URG, reserved bit 1, and reserved bit 2. 0 means no flags. With no prefix, the given flags must match exactly. Prefixed with +, all the given flags must be present, but others are allowed. The * prefix matches if any of the given bits is set. ! matches if none of the bits is set. The flag requirements may be followed by a flag mask, which indicates flags whose settings should be masked away before testing. Typically, the two reserved bits are masked. Thus, flags:S,12 matches packets with only the SYN flag set, ignoring the two reserved flags.
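The flags:S,12 test from the Squid rule can be sketched in shell arithmetic: the mask part ("12") clears the two reserved bits before the remaining flag bits are compared against a bare SYN.

```shell
# TCP flag bit values: FIN=1 SYN=2 RST=4 PSH=8 ACK=16 URG=32
# reserved-1=64 reserved-2=128.
syn_only() {
    pkt=$1                      # the packet's TCP flag byte
    mask=$((64 | 128))          # the ",12" part: bits to ignore
    [ $(( (pkt & ~mask) & 255 )) -eq 2 ]   # exactly SYN left over?
}

syn_only 2          && echo "SYN alone: match"
syn_only $((2|64))  && echo "SYN + reserved 1: match"
syn_only $((2|16))  || echo "SYN+ACK: no match"
```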


content

The content-matching capability in Snort is quite rich, and it may contain both ASCII and binary data. There are keywords to restrict searching in the payload. This is good because string searching is time consuming. An example may help here:

 alert tcp any any -> any 80 (content:"GET"; depth:3; nocase;)

This will match any packet to port 80 that has the string GET in the first 3 bytes. Matching will be done in a case-insensitive manner. Additionally, the offset keyword tells the processor where to start looking for the string. There are quite a few other options, but these should do here. There also are numerous other ways to match content; please refer to the documentation. To match binary data, quote hexadecimal numbers with a | (pipe). An example is |DE AD BE EF| (a magic number in Unix mythology).


msg

A message to be logged or alerted with.


reference

A reference to an IDS attack profile database for further explanation of this rule. In the alert log, it will be shown as an expanded URL.


sid, rev

These aid in tracking rules. For example, when an attack gets a specific rule assigned to detect it, the rule is given a sid (Snort ID) and rev (revision number); if the rule is later found to give too many false positives, it can be refined. Users concerned with keeping the rule updated can easily check whether they have the latest revision and replace it with a newer one. There is no need to use these in private rules.


classtype

Attack class types. These are enumerated in the classification.config file, which gives each code class type a description and a priority. One example is attempted-admin, which means "attempted administrator privilege gain" and has a high priority. There are currently more than 30 attack classes.

40.5.4. Miscellaneous

40.5.4.1. Scanning in general

We mentioned nmap earlier. It is a very effective port scanner with some useful stealth features that can also help you check for poor firewall setups. nmap is very good at quickly scanning many services on all hosts in a network, or at looking only for FTP and email servers in a net range, like this:

 # nmap -sS -P0 -p 21,25 10.0.0.0/24
 ...
 Host lorbanery.langfeldt.net (10.0.0.1) appears to be up ... good.
 Initiating SYN Stealth Scan against lorbanery.langfeldt.net (10.0.0.1) at 11:55
 Adding open port 25/tcp
 The SYN Stealth Scan took 2 seconds to scan 3 ports.
 Interesting ports on lorbanery.langfeldt.net (10.0.0.1):
 PORT   STATE    SERVICE
 21/tcp filtered ftp
 25/tcp open     smtp
 ...
 Host roke.langfeldt.net (10.0.0.4) appears to be up ... good.
 Initiating SYN Stealth Scan against roke.langfeldt.net (10.0.0.4) at 11:55
 Adding open port 25/tcp
 Adding open port 80/tcp
 The SYN Stealth Scan took 0 seconds to scan 3 ports.
 Interesting ports on roke.langfeldt.net (10.0.0.4):
 PORT   STATE  SERVICE
 21/tcp closed ftp
 25/tcp open   smtp
 ...

After locating your mail and FTP servers, you can then test them manually or with simple scripts. For example, you may want to have no anonymous FTP servers on your network. It is simple enough to attempt an anonymous login.

A real problem on networks nowadays is the existence of open email relays. If you have one, it will surely be abused for spreading spam. With a list of the SMTP servers present on your network, it is quite easy to script a test for this. You don't have to test whether the server will relay mail from a host on the company network; that may be exactly what it is set up to do. But if you have a mail server that accepts email from the Internet, it is very practical to have a remote shell account and run such a check from there. The SMTP protocol is simple enough, and the test is to see whether the email server will accept mail to a party outside your relay domain list. The main rules are these: when accepting email from the inside network, any email should be accepted. (Local policies may restrict this, for example by checking that the "From" address is valid and that the email contains no spam or viruses, but that does not concern an open relay test.) From the outside, the server should accept email only to the inside domain, and it should not accept email that is addressed to someone outside and claims to be from the inside. Thus, you might run the following test from false.linpro.no, outside your local area network, to your mail server:

 $ telnet mail.example.com 25
 ...
 220 How may I help you? ESMTP
 HELO false.linpro.no
 250 mail.example.com
 MAIL FROM:<santa@north-pole.org>
 250 Ok
 RCPT TO:<toothfarie@faries.org>
 554 <toothfarie@faries.org>: Relay access denied
 quit
 221 Bye
 $ telnet mail.example.com 25
 ...
 220 How may I help you? ESMTP
 HELO false.linpro.no
 250 mail.example.com
 MAIL FROM:<postmaster@example.com>
 250 Ok
 RCPT TO:<toothfarie@faries.org>
 554 <toothfarie@faries.org>: Relay access denied
 quit
 221 Bye

This server is properly set up and does not seem to allow any relaying. It does not accept mail from just anyone to anyone, nor does it accept email that pretends to come from the inside.

Several automated tools for network scanning make much better use of your time than these manual methods. Nessus, for example, which is included in Debian but not in Red Hat, will scan a network range and includes a lot of very specific vulnerability tests, including anonymous FTP and mail relays. Debian, as a rule, does not update programs to the latest version when holes are discovered, but instead backports the fix and keeps the same version. So when Nessus thinks it has found an old hole-riddled version, it might in fact not have a hole after all, provided the machine is kept up-to-date.

40.5.4.2. Security alerts

To keep abreast of developments in the security area, you (and hopefully all your colleagues, in case you're on vacation sometime) should subscribe to one or more security announcement lists. These follow two different schools. CERT and CIAC follow what they consider to be responsible practices. This means that they give vendors (Unix, Linux, Windows, and others) ample (some would call it infinite) time to release fixes before CERT or CIAC publishes advisories about specific security problems. The problem with this is that it has previously taken many years to fix even the most trivial and well-known problems. This does not make for a very secure network, because a well-informed crook can get in while administrators, busy doing more productive things, are oblivious to the problems.

Bugtraq and Full Disclosure follow a more aggressive policy. Holding that an unknown security hole is still a security hole and nothing anyone should have to live with, they publish problems very quickly. On Bugtraq, it is preferred that people submit the problem to the vendor and let the vendor take a reasonable amount of time, perhaps a month, before the information on the problem is posted. The Full Disclosure list is less concerned with this and seeks what they see as fully open information with no compromises. That at least gets security holes fixed fast, at the cost of wider exposure for a shorter time, rather than the long time with unknown exposure that the CERT tactic gives.

So some of the different places to get security information are:


Computer Emergency Response Team (CERT)

The original network security coordination center, located at Carnegie Mellon University. They make announcements on a mailing list. See http://www.cert.org/contact_cert/certmaillist.html for further information. Their web site also contains updated security information, including viruses and vulnerabilities.


Computer Incident Advisory Capability (CIAC)

Operated by the U.S. Department of Energy, CIAC keeps a web site at http://www.ciac.org/ciac/. They release bulletins and "C-Notes" as well as links to papers, documentation, and software. Because they are devoted to keeping the DoE secure, their papers are quite likely to be relevant to practical security work you may need to do. CIAC publishes their bulletins and notes only on their web site.


Bugtraq

Hosted by SecurityFocus, this is the original full disclosure list, now somewhat more conservative. Subscribe at http://www.securityfocus.com/archive/1.


Full Disclosure

The Full Disclosure list should need no further introduction. See http://www.netsys.com/cgi-bin/displaynews?a=301 to subscribe.


Red Hat

Most Linux distributions have announce lists and security pages. Red Hat's security page is at http://www.redhat.com/security, where you can subscribe to alerts and browse their archives.


Debian

The team keeps their security information at http://www.debian.org/security. You'll find information about how to keep Debian up-to-date, announce archives, and announcement subscription instructions.

40.5.4.3. Updating Linux

All your hosts, especially the Internet-exposed ones, need to be kept secure and up-to-date. The best way to keep updated about exactly what is found and what updates to install is to follow a distribution-specific alert mailing list. But a good second option is to just keep updating packages as updates are published. In Debian, this is especially simple, but there are also ways to do it in Red Hat and Fedora. Updating a package is the way fixes and patches are installed on Linux. On Solaris and other operating systems, you would install a patch or fix that modifies some installed package; not so on Linux.

40.5.4.3.1. Keeping Debian up-to-date

The first thing to do is put the following in your /etc/apt/sources.list file (if you use Woody):

 deb http://security.debian.org/ woody/updates main contrib non-free 

If you also drop the following apt script into /etc/cron.daily, you will receive an email every night whenever any package has an upgrade available. You will then need to do the actual upgrade yourself.

 #!/bin/sh
 apt-get --quiet=2 update
 apt-get --quiet=2 --yes --print-uris dist-upgrade | \
         awk '{ print $2; };'

And that is all there is to it.

40.5.4.3.2. Keeping Red Hat up-to-date

Red Hat runs a centralized update service called up2date which, unless you subscribe, can be very slow. Since most production servers in the future will run Red Hat Enterprise Linux, and all those users will have premium subscriptions with priority up2date access, the service may gain popularity.

If you run an unlicensed desktop version of Red Hat, you may be more comfortable running YUM or apt-rpm. YUM is a native RPM package download manager, while apt-rpm is a port of the apt suite to RPM. The yum utility takes pretty much the same commands as apt-get: update to get new package lists, update or dist-upgrade to install new packages, and install package to install or upgrade a specific package.
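As a sketch, the parallel command lines look like this (the package name is just an example):

```shell
# Debian: refresh package lists, then upgrade everything
apt-get update && apt-get dist-upgrade

# RPM with yum: refresh and upgrade in one step
yum update

# Install or upgrade a single package
yum install postfix
```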



LPI Linux Certification in a Nutshell
ISBN: 0596005288
Year: 2004
Pages: 257
