Specialized Auditing Matters




Auditing Databases

Today's database subsystems are applications providing functions for defining, creating, deleting, modifying, and reading data in an information system. By way of review, the principal components of a database subsystem are the database management system (DBMS) used to manage the data; the application programs performing operations on the data; the central processor in which operations are performed; and the storage media maintaining copies of the database. The database subsystem is also called a knowledge base, reflecting the power of the data maintained in the database.

As in all auditing practices, the overarching control design stems from CIA: confidentiality, integrity, and availability.

Auditing database subsystems is an examination of the controls governing the database, beginning with the policies and procedures through which access to the database is controlled and unauthorized access prevented. Auditors must examine the implementation of the various types of integrity controls. There are many good texts about database design and implementation. Before an auditor attempts a review of database operations, it is strongly suggested she have sufficient training and experience. As in all audit practices, auditors should not audit areas where they do not possess expertise.

Database Definitions

Before the discussion travels too much farther, here are some definitions that may be needed by an auditor engaged in database subsystem examination:

Accountability is achieved with two types of access-restricting mechanisms: user identification and user authentication controls. Compliance with these controls is verified through auditing. Major auditing concerns for databases are directed to information security events, including logins, the granting and revoking of access privileges to relations, user activity logs, etc.

Experience Note 

Several years ago a government worker, having broad access to databases containing extremely sensitive information, decided to illicitly sell his knowledge and services. He was aware that his database activities were logged, but he was equally aware those logs were infrequently reviewed. The database was configured in such a fashion that anyone with access to the database was capable of viewing and copying information outside their assigned duties. Over a period of years, he accessed information for which he did not have a need to know and sold it. The employee was discovered through external means and subsequently prosecuted for his criminal activities.

These are a few definitions that should help the auditor in database assessments:

  • Aggregation: The result of combining distinct units of data when handling information. Aggregation of data at one level may result in the total amount of data being designated at a higher privilege level.

  • Data manipulation: Populate and modify the contents of a database by adding, modifying, deleting, and creating rows and columns.

  • Discretionary Access Control: DAC is a method by which access to objects is restricted to authorized users or groups of users. Access is discretionary in that access privileges may be passed to users either directly or indirectly by the object's owner.

  • Inference: Derivation of new information from known information. An inference problem refers to derived information that may be classified at a level for which the user does not have privileges and a need to know. The inference problem is that of users deducing unauthorized information from information they have legitimately acquired. The problem of database inference has significant consequences. For example, physicians specialize in the treatment of specific diseases. It is possible for healthcare provider staffs to infer a patient's ailment by identifying the attending doctor. This type of information could be easily gleaned by viewing the patient information accompanied by the doctor's name. Drugs are also generally associated with a particular disease; consequently, it is possible for staff members to infer a patient's ailment by identifying prescriptions.

    Experience Note 

    Auditors should be mindful of the possibility of users gaining unauthorized information through inference resulting from poor database design or access controls.

  • Mandatory Access Control: MAC is a method of access control in which resources are assigned classification levels and users are assigned clearance levels. For example, users are not allowed to read a resource classified at a certain level unless their clearance level is equal to or greater than the resource's classification.

  • Referential integrity: A database has referential integrity if all foreign keys reference existing primary keys (see the sketch following these definitions).

  • Schema definition: Used to define the structure of the database, integrity constraints, and access privileges.

  • Schema manipulation: Modify the database structure, integrity constraints, and privileges associated with the tables and views within the database.

  • Transaction management: The ability to define and manage database transactions.
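
As a brief illustration of referential integrity, the following sketch shows a foreign-key constraint being enforced by a SQL-based DBMS, issued from the shell through the psql client; the database, table, and column names are assumptions used only for illustration:

  # A foreign key obliges the DBMS to reject rows that reference a nonexistent primary key.
  psql -d audit_demo -c "CREATE TABLE departments (dept_id INT PRIMARY KEY, name TEXT);"
  psql -d audit_demo -c "CREATE TABLE employees (emp_id INT PRIMARY KEY, dept_id INT REFERENCES departments(dept_id));"
  psql -d audit_demo -c "INSERT INTO departments VALUES (10, 'Audit');"
  psql -d audit_demo -c "INSERT INTO employees VALUES (1, 10);"   # accepted: department 10 exists
  psql -d audit_demo -c "INSERT INTO employees VALUES (2, 99);"   # rejected: no such department, so referential integrity would be broken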

Access Controls

Access controls in the database subsystem have the function of denying unauthorized access and data manipulation. In the case of discretionary access control (DAC), users can specify who can access data they own and what actions those users may perform on that data. Conversely, mandatory access control (MAC) requires an administrator to assign security attributes, such as object classifications and employee clearances. These classifications are fixed and cannot be changed by database users.

Discretionary Access Controls

With discretionary access controls, a typical user may be authorized to perform the following functions within the database:

  • Create a schema.

  • Create, modify, or delete views associated with a schema.

  • Create, modify, or delete relations associated with the schema.

  • Create, modify, or delete tuples in relations associated with the database schema.

  • Retrieve data from tuples in relations associated with the schema.

These are privileges granted to users who are designated as the owners of a particular schema along with its related views. There is an important type of privilege, that of a user granting their privileges, or a portion of them, to another user. Privilege propagation is the case of a user granting privileges to another user, who in turn grants privileges to another user.

In the propagation of privileges, it is important for an auditor to determine the allowable degree of privilege propagation. It is equally important for an auditor to examine the degree of privilege revocation. For example, if it is discovered a user has abused her privileges, what affirmative steps were taken to revoke her access privileges?
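
A minimal sketch of how privilege propagation and cascading revocation might look in a SQL-based DBMS, again issued from the shell through the psql client; the database, table, and user names are assumptions for illustration only:

  # The schema owner grants SELECT to alice and allows her to pass the privilege on (propagation).
  psql -d audit_demo -c "GRANT SELECT ON payroll TO alice WITH GRANT OPTION;"

  # Alice, in turn, grants the privilege to bob.
  psql -d audit_demo -U alice -c "GRANT SELECT ON payroll TO bob;"

  # Revoking alice's privilege with CASCADE also revokes the grant she made to bob.
  psql -d audit_demo -c "REVOKE SELECT ON payroll FROM alice CASCADE;"

When reviewing revocation, the auditor would look for evidence that cascading revocations of this kind were actually carried out.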

Mandatory Access Controls

In MAC, database user access to a resource is governed by a strict security policy. Database resources, in the form of data items/attributes and records/relations, are assigned classification levels. It is also a common practice to assign each record/relation a classification level equal to the highest classification level assigned to any data item/attribute in the record/relation. When differing levels of classification are present in the database, users are not allowed to view all the data present in the database; they may view only those items they are cleared to see.

Access control rules are often managed through both the operating system and the database management system. For example, the operating system permits only authorized users to access the database subsystem, while the database management system restricts access and the degree of user data manipulation. Auditors must be aware this is a somewhat redundant security procedure, but one that safeguards database contents.

When a database is distributed, it is even more difficult to ensure that database access and integrity are maintained and that complete and consistent access rules are enforced throughout the enterprise. Whether the database is replicated at multiple sites, or different partitions are distributed to different sites from a central location, auditors should collect evidence that access control mechanisms are implemented at every site and applied uniformly across all replicas.

In any processing subsystem, the issue of data integrity is one of the primary audit concerns. In database management systems, the application software directly accesses and updates the database; however, the database management system depends on the application software to pass across the correct sequence of commands and update parameters, and to take appropriate actions when certain types of exceptions arise.

Software Controls and Update Protocols

Application software update protocols ensure that changes to the database reflect the changes to the real-world entities and associations that the data is supposed to represent.

  • Ensure all records are processed correctly. If a master file is in sequential order, correct end-of-file protocols must be followed in an update program to make certain records are not lost from either the transaction or the master file. Designing and implementing correct end-of-file protocols can be complex if multiple sequential transaction files and multiple sequential master files are processed concurrently. Auditors should collect evidence that these protocols have been designed to detect, prevent, and correct end-of-file errors.

  • Sequence check transaction and master files. During batch update processes, the transaction file is often sorted prior to the update of the database master file or the database tables. There are times when the master file or tables intended to be updated might be sorted in a particular order. It may seem redundant for the update program to check the sequence of the transactions as it processes each record; regardless, situations occur that result in records on the transaction or master file being out of sequence.

  • Single-record multiple-transaction processing order. Database programs frequently receive multiple transactions targeting a single master record, also known as a tuple. The order in which transactions are processed against the master record is important. Different types of transactions must be given transaction codes resulting in them being sorted in correct order before being processed against the master record.

  • Suspense accounts. A suspense account is essentially a file for monetary transactions for which a master record could not be located at the time the update was attempted. Monetary transactions for which a master record cannot be located must be charged to a suspense account. If such transactions are lost because someone fails to correct the mismatch, someone may receive a product rebate payment to which they were not entitled. Auditors must be mindful that suspense accounts relating to data mismatches must exist, and that any suspense account with more than a zero balance shows there are processing errors needing correction.

Database Concurrency Controls in a Distributed Environment

Databases stored at multiple sites are deemed to be in a distributed environment. In one configuration, a replicated copy of the database is stored at every site; in another, the database is divided into partitions, with each partition stored at one site. Data concurrency and deadlocking problems are usually addressed by a two-rule locking process. First, before a transaction can read a data item, it must establish a read-lock on that item; likewise, before a transaction can write to a data item, it must establish a write-lock on it. Second, different transactions are not allowed to hold conflicting locks simultaneously. Essentially, this means that two transactions can own read-locks on the same data item, but a read-lock and a write-lock, or two write-locks, are not permitted at the same time. Once a transaction releases a lock, it cannot establish additional locks. Releasing a lock gives another transaction the opportunity to obtain control over the data item. For this reason, a transaction must commit its database changes before releasing its locks to avoid inconsistent results.

Database concurrency and deadlock problems can become serious threats to distributed database integrity unless the database management system has appropriate control levels. With replicated and distributed databases, the system must ensure that all accessible database versions are kept in a consistent state. There are some replicated database procedures that require that all data items are locked before update operations proceed. Auditors must determine the locking and updating protocols that ensure data integrity is established and maintained in a distributed environment. Further, it is important that auditors ascertain the procedures by which database administrators handle data error and conflict reports.

Audit Trail Controls

Audit trails or logs are electronic records reflecting the chronology of events occurring in the database or the database definition. Most systems require a complete set of events to be recorded, such as creations, deletions, modifications, and the specific records accessed. If audit trails do not exist, it is impossible to determine how the database arrived at its current state, who retrieved a record, or who executed a specific transaction.

There are several important characteristics of audit trails. All transactions must have a unique time stamp confirming that a transaction was directed to the database definition or the database itself. Time stamps identify the unique time that the transaction caused a series of events to take place so a documented history is created. It is important to note that audit trails must record not only the time and the transaction, but also the user account from which the transaction occurred. Auditors must be mindful of the length of time that audit trails must be retained. In many cases, laws and regulations applicable to the specific industry or type of data strictly mandate how long an audit trail will be retained.

Object Reuse

It is important for auditors to address issues concerning object reuse in the database management system and operating system. Operating systems are responsible for deallocating system resources, such as files used to store tables. In order to maintain confidentiality and integrity, data stored in these resources and objects must be zeroed or replaced with random information before being reassigned.

Database Existence Controls

Existence controls in the database subsystem must be able to restore the database in the event of loss or corruption. All backup procedures involve the maintenance of a previous version of the database and the corresponding audit trails. Recovery procedures generally take two forms. In the first, the current state of the database must be restored if the entire database or a portion of it is lost or corrupted. This activity involves a "roll-forward operation," where a prior correct version of the database is restored along with the log of transactions or changes that have occurred to the database since its last backup copy was made.

The "roll-back operation" is where the current invalid state and the updates are rolled back undoing the updates that caused the database to be corrupted. The log of database changes is used to restore the database to the prior valid state. Auditors must carefully examine the possibility of fraudulent behavior in rolling back database operations, making changes to the database; allowing the database to process the data, then rolling back the database to its prior state, without error report generation and audit trail recording.

Experience Note 

Database administrators had the ability to roll the database forward and backward during monthly accounts payable processing cycles without generating error reports. Further, these same administrators had the ability to edit the audit trails so there were no records made of their activities. During a monthly billing process, administrators stopped the billing database, inserted several fabricated accounts to be paid, and allowed the process to pay these accounts. Once the process had paid the bogus accounts, they rolled back the operation to the time before the accounts were inserted and allowed the operation to continue. They merely inserted the accounts, the accounts were paid, and they restored the database to its original state. The system's audit trails were edited in such a fashion that it appeared nothing had happened in the transaction log. The system was not configured to generate error reports accounting for roll-forward and roll-backward events. After a routine audit discovered the administrators' excessive privileges, an analyst discovered the embezzlement. The offending employees were subsequently indicted and convicted.

Domain Servers

Domain name servers (DNSs) translate names suitable for understanding by most people into network addresses. For example, www.myexample.com is sent to a DNS and translated to the numeric address 192.165.23.22. The latter address, of course, is the one that is routable and understood by computer networks. Essentially, DNS is a distributed database mapping host names to network addresses. If the local DNS server is not able to resolve the URL (Uniform Resource Locator), it queries the next highest domain server, eventually resolving the alphabetical URL to the familiar numeric network address before the traffic is routed.

Because DNS servers are frequently exposed to open-ended networks, such as the Internet, they are subject to a wide variety of attacks. For example:

  • Attacks targeting the name server software allowing an intruder to compromise the server and take control of the DNS host

  • Denial-of-service attacks directed to a single DNS server affecting an entire network by preventing users from translating host names into IP addresses

  • Spoofing attacks trying to induce a DNS server to cache false resource records leading users to unintended sites

  • Information leakage from zone transfers exposing internal network information that could be used to plan and execute future attacks

  • A DNS server could be an unwitting participant in attacks on other sites

As with any software application, DNS software evolves with each version release. Essentially, all older DNS versions have widely known vulnerabilities that attackers will exploit. In most cases, vulnerabilities that appear in one version are patched in subsequent releases. Running the latest version of DNS software does not guarantee security; however, it will minimize the likelihood of exploitation.

Auditors should be mindful that there are steps that can be taken to secure a DNS server that only has to deliver DNS to a single audience, since it can be optimized for that particular function. It is useful to deploy separate DNS servers configured to play specific roles, and to apply different security policies to servers according to their function. For example, having an external DNS server used only as an external name server is a sound business practice. This DNS server should provide resolution only for zones for which it has authoritative information; in other words, it provides DNS services to the Internet or other open-ended networks rather than to your internal network's users.

Exterior DNS servers should not contain any information about your internal network addressing or topology. They should be located in the demilitarized zone (DMZ), meaning behind the packet-screening firewall facing the open-ended network and in front of the application firewall that protects the sensitive interior network. Architectures of this nature look like a sandwich with the DMZ located between the slices. DMZs are the areas where Web servers, outside e-mail services, and name servers reside.

An internal DNS server is commonly used to provide name resolution services to internal network clients. This DNS server is configured to answer queries from trusted internal hosts and not from the Internet. It is located behind the packet screen, the DMZ, and the application firewall, as a member of the internal network. Adopting these security procedures results in the external DNS server providing little resolution service other than answering queries for which it is authoritative. Internal DNS servers can be protected by restricting the server to respond only to known and trusted hosts. In this fashion, if the internal resolving server were compromised or its cache corrupted, the external DNS server's authoritative zone information would not be affected, thereby limiting the potential for damage. In the same vein, if the internal DNS server were configured to be authoritative for internal zones only, a compromise of the external DNS server would not affect the normal name service operation of the internal network.

As a matter of network security and protection, organizations operate their DNS servers on dedicated hosts. Hosts that run DNS services do not provide other services; consequently, there is no need for them to respond to non-DNS traffic. Such a dedicated DNS host configuration reduces the possibility of the DNS server being compromised through a weakness in another piece of software located on the same host. As a further sound security procedure, administrators disable or remove any unnecessary software or hardware features from the DNS host. The logic supporting this procedure is that if unnecessary software and hardware are not present, attackers cannot exploit them.

For DNS servers providing external name resolution, everything but traffic from the Internet to port 53 UDP and port 53 TCP on the DNS server can be safely filtered and denied entry. Similarly, internal network DNS servers can be filtered allowing only internal clients access to ports 53 UDP and TCP on the name server and allowing the internal DNS server to make outbound queries to other internal DNS servers.
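
A hedged sketch of such a filter for an external name server host, written as Linux iptables rules; the default-deny policy and the absence of other required services are assumptions about the local environment rather than a prescribed configuration:

  # default-deny inbound traffic, then open only the DNS ports
  iptables -P INPUT DROP
  iptables -A INPUT -p udp --dport 53 -j ACCEPT                        # DNS queries over UDP
  iptables -A INPUT -p tcp --dport 53 -j ACCEPT                        # zone transfers and large replies over TCP
  iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT     # replies to the server's own outbound queries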

Exhibit 11 reflects a packet-filtering table for a typical DNS packet filter.

Exhibit 11: Packet Filtering Security Table

  Function              | Description                  | Source IP        | Source Port                          | Destination IP   | Destination Port
  Public name service   | Inbound queries              | Any              | 53/udp, 53/tcp, >1023/udp, >1023/tcp | Nameserver       | 53/udp, 53/tcp
  Public name service   | Query replies                | Nameserver       | 53/udp, 53/tcp                       | Any              | 53/udp, 53/tcp, >1023/udp, >1023/tcp
  Internal name server  | Queries from clients         | Internal network | >1023/udp, >1023/tcp                 | Nameserver       | 53/udp, 53/tcp
  Internal name server  | Replies to clients           | Nameserver       | 53/udp, 53/tcp                       | Internal network | >1023/udp, >1023/tcp
  Internal name server  | Outbound recursive queries   | Nameserver       | >1023/udp, >1023/tcp                 | Any              | 53/udp, 53/tcp
  Internal name server  | Replies to recursive queries | Any              | 53/udp, 53/tcp                       | Nameserver       | >1023/udp, >1023/tcp

Experience Note 

Auditors should be mindful of packet-screening policy tables similar to the one above. Auditors should request to see the policy tables for all network traffic permitted to pass through the packet screen as part of their due diligence. It is a matter of some gravity if such screening policy tables are flawed or absent. It is not unreasonable for auditors to test these packet-screen policies and their implementation by placing a packet generator, such as Nmap, outside the network and a packet detection device, using Windump or TCPdump, on the inside of the packet screen.

TCPdump is a network information capture program developed at Lawrence Berkeley National Laboratory. It is a UNIX-based program consisting of an executable and a network capture driver. The purpose of this utility is the interception, capture, and display of information packets passing through the network. If TCPdump were installed on a computer connected to the target network, it would capture all passing information packets. The Windows version of TCPdump is called Windump. [8]
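
A minimal sketch of this test; the target address, interface names, and scan options are assumptions chosen for illustration:

  # outside the packet screen: probe the DNS ports from the untrusted side
  nmap -sU -sT -p 53 192.0.2.10

  # inside the packet screen: watch what actually arrives
  tcpdump -n -i eth0 port 53
  # on a Windows host, the equivalent Windump command would be: windump -n -i 1 port 53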

Attackers will typically attempt to exploit packet-screening flaws.

Zone transfers are used to transfer DNS information from one DNS server to another. Restricting zone transfers is a significant step in securing DNS services. Implementing restrictive zone transfers has the secondary benefit of preventing others from taxing your system's resources, in addition to preventing intruders from gaining a list of the contents of your DNS zones. Zone transfers are the delivery of the zone information held by the DNS service. An attacker obtaining a zone transfer from your internal DNS server will be able to see the IP addresses as well as the architecture of your internal network. Denying attackers this type of information raises your internal network's security. An attacker who is able to complete a zone transfer can use that information to identify new targets on your internal network such as routers, mail servers, other DNS servers, file servers, databases, and anything else in your DNS records.

A common administrator mistake is to restrict zone transfers from the primary master DNS server only, while neglecting to restrict transfers from slave servers. Because it is possible to obtain a zone transfer from a slave server, it is important that auditors ensure that all authoritative DNS servers have restrictions placed on zone transfers.
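
One way to verify the restriction on every authoritative server, primary and slave alike, is to attempt a transfer with dig; the server name below is an assumption reusing the book's example domain:

  # attempt a full zone transfer from each authoritative server in turn
  dig @ns1.myexample.com myexample.com axfr
  # a properly restricted server refuses the request, typically reporting "Transfer failed."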

In BIND 8 or 9, use the allow-transfer substatement:

  options {
      allow-transfer { 192.168.4.154; };
  };

or specific to a zone:

  zone "myexample.com" {
      type master;
      file "db.myexample.com";
      allow-transfer { 192.168.4.154; };
  };

Protecting against DNS Cache Corruption

DNS servers can operate in one of two ways when responding to queries:

  1. Recursive queries are used when a client makes a request to a DNS server and the name server is expected to traverse the DNS hierarchy to locate the answer. To do so, the name server makes nonrecursive queries of its own to locate the requested information.

  2. Nonrecursive queries are used when a name server asks another name server for information. The queried server will return an answer, make a referral to another name server, or indicate that it has no information to fulfill the request. As a default configuration, most name servers allow recursive queries from any source. DNS servers that provide recursive resolution services to the Internet may be susceptible to cache corruption. Cache corruption happens when a name server caches erroneous data for a domain name; the result can be denial of service or man-in-the-middle attacks.

By making a recursive query to a DNS server that provides recursion, an attacker can cause the name server to look up and cache information contained in zones under the attacker's control. In this fashion, the victim name server is forced to query the malicious name servers, caching and later returning bogus data. There are essentially four protective steps available to BIND and other types of DNS servers (a configuration sketch follows the list):

  • Disable recursion entirely

  • Restrict IP addresses that are allowed to make any type of queries

  • Restrict IP addresses that are allowed to make recursive queries

  • In BIND versions before version 9, disable fetching of glue records
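
A configuration sketch of these steps in BIND's named.conf syntax; the access list name and address range are assumptions, and the external-server and internal-resolver settings shown are alternatives rather than one combined file:

  acl "internal" { 192.168.0.0/16; 127.0.0.1; };   // assumed internal address range

  options {
      // on an external, authoritative-only server the simplest choice is:  recursion no;
      recursion yes;                      // an internal resolver keeps recursion on ...
      allow-recursion { "internal"; };    // ... but only for internal clients
      allow-query { "internal"; };        // optionally restrict all queries as well
      // fetch-glue no;                   // BIND 8 only: disable fetching of glue records (obsolete in BIND 9)
  };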

Auditing UNIX

UNIX and its derivative operating systems comprise a significant portion of the servers in the majority of business organizations. The UNIX system is distinctive in its design: its approach is to solve problems by interconnecting small tools rather than by creating large, rigid application programs. Its development and evolution led to a new philosophy of computing architecture and has been a never-ending source of challenge and joy to systems administrators and programmers. UNIX coordinates the use of system resources, allowing one user to run an application while another creates a document and still another edits graphics and video. Each user is oblivious to the other users, making each user feel as if she is the only user on the network.

UNIX is portable in that it can be installed on many different brands of computers with a minimal degree of code change. It is intended that UNIX can be upgraded without having to reenter all customer data. Newer versions of UNIX are generally backward compatible with older versions, making changes in an orderly manner. UNIX is organized on three basic levels:

  • The kernel schedules tasks and manages storage

  • The shell connects and interprets user commands, calls programs from memory, and executes them

  • The tools and applications that provide functionality to the operating system

Auditing UNIX is one of those challenges either enjoyed or hated by the auditor. Because UNIX is a system that requires a certain amount of ability, it is strongly recommended that the auditor conducting the UNIX audit be versed in its use or have a UNIX administrator from another department, not the one being audited, help the auditor. Here are a few areas that should lighten the load:

  • Collect all pertinent information about the system that is being audited:

    • Inventory all UNIX hardware and software. There should be a chart or diagram showing each server, the version of UNIX installed on it, the patches applied, and any maintenance performed on the hardware.

    • Assemble all organizational policies, procedures, and standards that apply to the UNIX systems being audited.

    • Ask for any schematic diagrams relevant to the UNIX servers and their architecture.

    • Auditors should obtain a listing of authorized files, profiles, and root logins.

    • Ascertain who the users with root privileges are.

    • Ascertain the profile of any nonprivileged users.

  • All UNIX servers considered as critical assets should be audited. For each server, the auditor, either by herself or with an experienced person, will do the following:

    • Attempt to Telnet to the server from the MS-DOS prompt of a machine outside the server room and away from the attached console.

    • Request an administrator to initiate logging on the auditor's root account and execute the following commands:

      • hostname

      • rusers -l

      • finger 0

      • finger system

      • finger root

      • finger guest

      • finger demo

      • finger ftp

      • finger bin

      • cat /etc/inittab

      • cat /etc/group

      • cat /etc/passwd

      • cat /etc/shadow

      • ls -al /usr/lib/uucp

      • cat /usr/lib/uucp/Devices

      • cat /usr/lib/uucp/Systems

      • cat /usr/lib/uucp/Permissions

      • cat /usr/lib/cron/cron.allow

      • cat /usr/lib/cron/at.deny

      • cat /usr/lib/cron/cron.deny

      • ls -alnupFq /etc

      • ls -alnupFq /bin/

      • ls -alnupFq /dev/

      • ls -alnupFq /lib/

      • ls -alnupFq /stand/

      • ls -alnupFq /tmp/

      • ls -alnupFq /usr/

      • ls -alnupFq /unix/

      • ls -alnupFq /usr/spool/cron/crontabs

      • ls -alnupFq /etc/ftpusers

      • pg /etc/ftpusers

      • pg /etc/inetd.conf

      • pg /etc/hosts.lpd

      • ls -alnupFq /etc/security/

      • ls -alnupFq /etc/security/audit/

      • rsh <system name> csh -i

  • It is important that each of these commands is reflected in the log. Auditors should review recent logs, making certain that similar commands were logged previously.

    Auditors should review and ensure passwords are constructed in accordance with policies and procedures. Running a UNIX password-cracking utility can test password strength. There are many password crackers available on the Internet, but one of the more popular tools is known as John the Ripper. It is available for UNIX, Windows, and DOS platforms at www.openwall.com/john/. It is recommended that this test be performed on a copy of the password file or the shadow password file. Password requirements should include minimum length and the use of numbers, capital letters, and special characters. Auditors must ensure that the root account password is rigidly controlled.

  • Auditors must identify the system's users and user groups as the organization's policies and procedures define them. For each user account in the /etc/passwd file, auditors must ensure that each user has a unique logon ID. Auditors must be mindful that each employee who has a logon is a current employee. Auditors must ensure that all accounts that are not currently required are disabled or removed. It is important that passwords are aged and that there is a documented policy mandating when passwords are to be changed, their minimum length, and their uniqueness from previous passwords. Passwords should be automatically rejected if they are not constructed according to the password policy, if the user fails to log on correctly within a given number of tries, or if the user has not changed passwords in a policy-specified amount of time.

  • Review the home directory, ensuring that a user's directory prohibits access to sensitive files or areas of the operating system. Ensure that each user group is composed of users who are legitimately permitted access to the files to which the group is granted access.

  • Auditors must ascertain that the systems administrator reviews the sulog daily for unauthorized super-user logon attempts. The administrator should document these reviews. In most UNIX varieties, the sulog can be viewed with the command: more /usr/adm/sulog.

  • Auditors should review system access capabilities within the system's files and directories to determine if user access is appropriate for the level of security required by the material held in the file. Auditors should identify the users, users within user groups, and other users allowed access to the operating system's files and directories. One method to test directory access: while logged on, the auditor changes the current directory to each of the directories listed below and issues the ls -l command. This command displays the access capabilities for each of the files within the directory. A thorough auditor will include the subdirectories.

    /       Root
    /bin    Contains executable programs and UNIX utilities
    /dev    Contains special files which represent devices
    /etc    Contains miscellaneous administration utilities and data files for system administrator
    /lib    Contains libraries for programs and languages
    /tmp    Contains temporary files that can be created by any user
    /usr    Contains user directories and files

  • Auditors should review the /etc/inittab file containing the instructions for the init process. It is read when the system starts up; it calls the login prompt, accepts the user name and password, and starts the shell to accept user commands.

  • Auditors should ensure that all unnecessary UNIX services are disabled or removed; for example, finger should be disabled.

  • Auditors should ensure that users do not have the ability to change their own profiles.

  • Review a user's home directory and look at the listing of the profile. If in the user's home directory, the auditor may use the command: ls -l .profile. The system should respond with the read/write/execute permissions.

  • Auditors should document the permissions on critical UNIX directories ensuring they provide the appropriate levels of security.

    Critical directories include but are not limited to the following:

    Directory      Permission Setting    Owner    Group
    /bin           r-x                   bin      bin
    /dev           r-x                   root     sys
    /dev/dsk       r-x                   root     other
    /dev/rdsk      r-x                   root     other
    /etc           r-x                   root     sys
    /etc/conf      r-x                   root     sys
    /etc/default   r-x                   root     bin
    /etc/init.d    r-x                   root     sys
    /etc/log       r-x                   root     sys
    /etc/perms     r-x                   root     sys
    /lib           r-x                   bin      bin
    /root          r-x                   root     bin
    /shlib         r-x                   root     sys

  • Auditors should ensure that appropriate procedures relating to data backup and restoration are in place and that backup activity is written to log files, as shown in the sketch below. Logging backups provides an audit trail and documentation of the procedure. Reviewing backup logs provides a method of determining whether the backup procedure was successful. Ensure that all critical files are included in this logging procedure. Look for documentation that the systems administrators have reviewed these logs.
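
A minimal sketch of a backup job that produces its own log for later audit review; the paths and naming convention are assumptions:

  # nightly backup of /home, with the verbose file list captured to a dated log file
  tar -cvf /backup/home-$(date +%Y%m%d).tar /home > /var/log/backup-$(date +%Y%m%d).log 2>&1
  # the administrator's review of these logs should itself be documented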

UNIX Shadow Password File

On a UNIX/Linux system without a shadow password file installed, user password information is stored in the /etc/passwd file. Although popular literature will state that passwords are encrypted, technically they are encoded rather than encrypted. The algorithm used to encode the password is a one-way hash function, ensuring that the encoded password cannot feasibly be decoded. Essentially, this means the algorithm encodes the password in one direction, but makes it mathematically infeasible to reverse the process. When a UNIX user's password is set, it is encoded together with a randomly generated value sometimes referred to as the "salt."

Because the salt can take 4096 values, the same password could be stored in 4096 different forms. When a user logs into the system and provides a password, the salt is first retrieved from the stored encoded password. The supplied password is then encoded with the retrieved salt value and compared with the stored encoded password. If the two values match, the user is authenticated and permitted to access the system.

Password-cracking software programmers know all this and simply encode a dictionary of words using all 4096 possible salt values, then compare the results to the already encoded passwords in the /etc/passwd file. Once they find a match, they have the password for the account. This is known as a dictionary attack.

Think of it in terms of an eight-character password encoded into any of 4096 different 13-character strings. A dictionary of about 400,000 words, names, common passwords, and simple variations easily fits on a 5-GB hard drive. The program needs only to sort and check for matches.
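
A hedged sketch of this dictionary attack as an auditor might run it with John the Ripper (introduced earlier), working on copies of the password files; file names and the wordlist path are assumptions:

  # merge copies of /etc/passwd and /etc/shadow into a single crackable file
  unshadow passwd.copy shadow.copy > crackme
  # run a wordlist (dictionary) attack, then display any passwords recovered so far
  john --wordlist=/usr/share/dict/words crackme
  john --show crackme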

Along with passwords, the /etc/passwd file contains user IDs and group IDs that are used by many system programs. In order for UNIX to function, the /etc/passwd file must be readable by everyone. If you were to change the file so that no one can read it, the first ls -l command you ran would display numeric user and group IDs instead of names.

Having a shadow password file solves the problem by relocating the passwords to another file, /etc/shadow, whose access privileges are restricted to root. By moving the passwords to the /etc/shadow file, administrators effectively keep attackers from having access to the encoded passwords with which to perform a dictionary attack.
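
A short sketch of what an auditor might check on a system that claims to use shadow passwords; pwconv is the conversion utility on many UNIX/Linux variants, and the permission strings shown are typical rather than universal:

  pwconv                           # moves the encoded passwords from /etc/passwd into /etc/shadow
  ls -l /etc/passwd /etc/shadow
  # expected pattern: /etc/passwd world-readable, /etc/shadow readable by root only, e.g.
  # -rw-r--r--  1 root root  ...  /etc/passwd
  # -r--------  1 root root  ...  /etc/shadow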

Format of the /etc/passwd File

A nonshadowed /etc/passwd file has the following format:

  user name:passwd:UID:GID:full_name:directory:shell 

where

  user name  =  the user (login) name
  passwd     =  the encoded password
  UID        =  numerical user ID
  GID        =  numerical default group ID
  full_name  =  the user's full name (this field can also store information other than the full name)
  directory  =  user's home directory (full path name)
  shell      =  user's login shell (full path name)

For example:

  user name:Tbge08pfz4wuk:503:100:Full Name:/home/user name:/bin/sh

where Tb is the salt and ge08pfz4wuk is the encoded password. The encoded salt/password could just as easily have been kbeMVnZM0oL7I and the two are exactly the same password. As was stated earlier, there are 4096 possible encoding combinations for the same password.
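
The effect of the salt can be reproduced from the shell; the sketch below uses Perl's built-in crypt function with the book's example salts and an assumed password:

  perl -e 'print crypt("MySecret", "Tb"), "\n"'   # 13-character result beginning with the salt "Tb"
  perl -e 'print crypt("MySecret", "kb"), "\n"'   # same password, different salt, different encoding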

Once the shadow password file is installed, the /etc/passwd file would instead contain the entry:

  user name:x:503:100:Full Name:/home/user name:/bin/sh

The x in the second field in this case is a placeholder. The format of the /etc/passwd file did not change; it simply no longer contains the encoded password. This means that any program that reads the /etc/passwd file but does not actually need to verify passwords will still operate correctly. The passwords are now relocated to the shadow file, /etc/shadow.

Format of the Shadow File

The /etc/shadow file contains the following information:

  user name:passwd:last:may:must:warn:expire:disable:reserved 

where

  user name  =  the user name
  passwd     =  the encoded password
  last       =  days since Jan. 1, 1970 that the password was last changed
  may        =  days before the password may be changed
  must       =  days after which the password must be changed
  warn       =  days before the password is to expire that the user is warned
  expire     =  days after the password expires that the account is disabled
  disable    =  days since Jan. 1, 1970 that the account has been disabled
  reserved   =  a reserved field

Auditors must review the implementation of shadow password files on UNIX/Linux systems. It is important to note that this UNIX audit guide is intended only to spur auditors to look at the potential areas that should be reviewed, because there are many versions of UNIX as well as a great number of applications supported by it. Here are three specialized Web sites that may provide some guidance on auditing UNIX password files:

  1. www.ISACA.org

  2. www.sans.org

  3. www.auditnet.org

Auditing Windows NT

Microsoft Windows server operating systems have a much shorter history than the desktop workstation operating systems, but share some of the same lineage. As early as 1991, at the Windows Developers Conference, Microsoft gave a demonstration of its Windows Advanced Server for LAN Management. This high-end operating system would later be named Windows NT. It offered a familiar graphical user interface and programming model and was capable of running all applications that had already been developed for Windows 3.0.

Microsoft Windows NT Advanced Server version 3.1 was designed to provide dedicated services within a client/server architecture. It was designed to provide scalability, enhanced fault tolerance, and standards-based interoperability. Windows NT Advanced Server was promoted as an application server for Novell NetWare, Banyan VINES, and Microsoft networks, providing a platform for business solutions such as financial, accounting, and database servers (for example, SQL Server and SNA Server) and e-mail servers such as Microsoft Mail.

Windows NT Server 4.0, Terminal Server Edition, consisted of three components:

  • The Windows NT Server multi-user core, which made it possible to host multiple, simultaneous client sessions.

  • The Remote Desktop Protocol, allowing communication with a server that has Terminal Server enabled over the network.

  • The "super-thin" Windows-based client software, which displayed the familiar 32-bit Windows user interface on a range of desktop hardware. The Windows 2000 Server family offers features as centralized policy-based management, faster deployment options and Active Directory.

Here are a few areas that auditors may wish to review when completing their Windows NT server audits:

  • Ensure that appropriate levels of auditing are enabled (audit policy and file system auditing) and review the results in the NT Event Viewer.

  • Auditors should use the File Manager to confirm that the server is using NTFS as its file system; FAT (file allocation table) is not secure.

  • Auditors should review the security log file size. There should be sufficient file space for one week's activities because the log is overwritten when it is full. View the log file size through the event log settings.

  • Auditors should review the documented frequency of log reviews.

  • Auditors should ensure the Web server is in the DMZ and is not a member of a domain or Domain Controller.

  • Auditors should ensure that only essential services are installed and operating. All nonessential services should be disabled or removed.

  • Auditors should access the add/remove programs of the NT server and determine if any of the following programs have been installed according to the organization's policies and procedures:

    • Certificate Server

    • FrontPage 98 Server Extensions

    • Internet Connection Service for RAS

    • The following subcomponents under Internet Information Server (IIS):

      • File Transfer Protocol (FTP) Server

      • Internet NNTP Service

      • Internet Service Manager (HTML)

      • SMTP Service

      • World Wide Web Sample Site

      • Microsoft Index Server

      • Microsoft Message Queue

      • Microsoft Script Debugger

      • Microsoft Site Server Express 2.0

    • The following subcomponent under Transaction Server:

      • Transaction Server Development

      • Visual InterDev RAD Remote Deployment Support

      • Windows Scripting Host

  • Auditors should attempt to access the security log from a nonprivileged account noting any deficiencies.

  • Auditors should use the add/remove programs to determine which updates are applied and reconcile against the Microsoft Security Bulletins at: www.microsoft.com/technet/security/current.asp

  • Auditors should note that FTP services should only be started manually and should not be running when the server starts.

  • Auditors should verify NT's built-in accounts (Administrator etc.):

    • Review password policies (length of passwords, expiration period, etc.)

    • Review group memberships and the privileges of these groups.

  • Auditors should review the audit logs for successful access to the NT registry. Auditors should verify and confirm that these logs are reviewed, at least daily, by administrators.

  • Auditors should review permissions and determine their appropriateness for users and user groups.

  • Auditors should determine if a backup of the NT Registry is performed at least bi-weekly with a baseline Registry copy retained.

  • Auditors should verify all trust relationships between NT domains. Auditors should determine whether any workstations are operating on Windows environments other than NT; if so, there should be screensaver and BIOS passwords. The use of screensaver and BIOS passwords is permitted, and encouraged, in NT environments as well.

  • Auditors should review all remote NT access for adequate controls.

  • Auditors should verify Access Control Lists for the following files and functions:

    Directory or File                            Suggested Privileges
    C:\                                          Administrators: Change; Users: Read
    C:\WINNT\                                    Administrators: Change; Users: Read
    WINNT\config\                                Administrators: Change; Users: Read
    \WINNT\inf\                                  Administrators: Change; Users: Read
    \WINNT\profiles\                             Administrators: Change
    \WINNT\system\                               Administrators: Change; Users: Read
    \WINNT\System32\                             Administrators: Change
    AUTOEXEC.NT                                  Administrators: Change
    CONFIG.NT                                    Administrators: Change
    C:\...\*.EXE, *.BAT, *.COM, *.CMD, *.DLL     Administrators: Change

  • Auditors should review interfaces and security for connections to other LANS.

  • Auditors should review the administrator account and ensure it has been renamed to something that will not attract the attention of attackers. Wise administrators routinely establish "decoy" administrator accounts having few privileges. As a matter of course, administrators should review logs for this decoy account looking for attacker activity. Legitimate administrator accounts must have a password consisting of at least ten characters, including special characters, numbers, and capital letters. This password must be changed monthly.

  • Auditors must ensure that all default accounts, such as Guest, have been disabled or removed.

  • Auditors must ensure that all unnecessary features have been disabled or removed. For example, unless IP routing is disabled, NT permits packets received on one interface to be routed to another interface.

  • Auditors must ensure that the target system has been configured to unbind the WINS Client from TCP/IP. If WINS is bound to TCP/IP, attackers may access machine information using readily available tools and direct NetBIOS-specific exploits at the target. Auditors may use the bindings tab of the Network Manager to test for unbinding NetBIOS and all other unnecessary protocols.

  • Auditors must ensure that strong encryption has been applied to the Security Account Manager (SAM), the database where passwords are stored. It is important for auditors to remember that even though the SAM data is encrypted, this does not preclude someone from downloading the SAM file and using a password cracker program to obtain the passwords.

  • Auditors should review local TCP/IP configured filtering. Microsoft IIS server supports filtering and rules for IP addresses. IIS should have ports 80 and 443 open with all other ports denied.

  • Auditors should review auditing of user and system activities. It is important to note that the default installation of NT does not enable auditing. Ensure appropriate levels of auditing are enabled.

  • Auditors should ensure that the sample files installed by default with NT and any unused ODBC drivers are removed.

  • Auditors should ensure that NT null user sessions are disabled. Null user sessions can be used to remotely enumerate user names and share names. Auditors should determine whether the following Registry key is set to a value of 1: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\LSA\RestrictAnonymous.

  • Auditors should ensure that the NT file system, NTFS, cannot auto-generate 8.3 names for backward compatibility with 16-bit applications. Auditors should verify that the relevant value under the following key is set to 1: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\.

This is merely a representative sample of the potential areas to be audited in the Windows NT environment. A very good reference guide for security and auditing Windows NT systems is available through www.trustedsystems.com/index.htm.

[8]Both TCPdump and Windump are freely available at www.tcpdump.org and windump.polito.it.


