Part VI: Platforms and Security

Chapter 20. UNIX

IN THIS CHAPTER

        A Whistle-Stop Tour of UNIX History

        Classifying UNIX Distributions

        Security Considerations in Choosing a Distribution

        UNIX Security Risks

        Breaking Set-uid Programs for Fun and Profit

        Rootkits and Defenses

        Host Network Security

        Telnet

        An Essential Tool: Secure Shell

        FTP

        The r Services

        REXEC

        SMTP

        DNS

        Finger

        SNMP

        Network File System

        The Caveats of chroot

        Better the Daemon You Know

        Assessing Your UNIX Systems for Vulnerabilities

This chapter examines the UNIX operating system from a security perspective. We'll start with a whistle-stop tour of UNIX history, followed by an in-depth look at the issues faced when selecting a UNIX distribution. We'll then consider the security risks and countermeasures along with some thoughts about the decisions you'll face.

We'll cover the hard-core UNIX security territory and some useful follow-up material written by respected security practitioners; that way, you can dip in and out as your interest takes you.

This chapter isn't a UNIX security manual, though, nor is it a step-by-step set of instructions for securing a particular UNIX distribution (see the "Host-Hardening Resources" section for pointers to checklists). You also won't find a list of the most "leet" UNIX exploits here. There are a thousand hacker sites out there waiting for you, if that is all you seek.

My primary goal is to get you thinking about UNIX security in the context of your environment. The way in which your organization deploys UNIX has a fundamental bearing on what you can and should be doing to secure it. Sure, there are some common issues that face the majority of us, such as OS security bugs, but computer systems don't exist in a vacuum. They are used by people to get a job done, whether it is to serve your home page to an unsuspecting world or to process credit card transactions through a major bank. Failure to grasp the local issues can lead to terminally flawed security "solutions" that simply don't fit your context. These are the "soft issues."

Also, understand that we're talking about a security process not just an initial effort.

On a number of occasions, I'll reference specific programs or filenames. The names and locations of these programs might differ on your system; if you don't already know the differences, check your online man pages (man -k search-clue).


A Whistle-Stop Tour of UNIX History

The seeds of UNIX were sown in 1965 when Bell Labs, General Electric Company, and Massachusetts Institute of Technology designed an operating system called Multics. From the outset, this was designed to be a multiuser system supporting multiple concurrent users, data storage, and data sharing.

By 1969, with the project failing, Bell Labs quit. Ken Thompson, a Bell Labs engineer, began "rolling his own" operating system, soon to be called UNIX (a pun on Multics). The next year, Dennis Ritchie wrote the first C compiler (inventing the C language in the process), and, in 1973, Thompson rewrote the kernel in C.

UNIX was getting to be portable and, by 1975, was distributed to universities. The attraction of UNIX was its portability and low-end hardware requirements. For the time, it could run on relatively inexpensive workstations. Consequently, UNIX developed a strong following within academic circles.

This popularity, coupled with the availability of a C compiler, led to the development of core utilities and programs still included in our distributions today. Many utilities have quite a rich or comical history; I recommend you check the history books. With businesses recognizing that they could save on expensive hardware and training costs, it was only a matter of time before a number of vendors packaged their own distributions. From there, the UNIX family tree explodes, splintering off in very different directions based on the motivation and financing of the maintainers.

For a comprehensive history lesson, visit http://perso.wanadoo.fr/levenez/unix/. There is also a graphical family tree of UNIX where you can trace the origins of your favorite distribution.

Matt Bishop maintains a fascinating archive of papers that record the findings of early UNIX security reviews at http://seclab.cs.ucdavis.edu/projects/history/index.html. Matt is probably best known though for his research into secure programming techniques. Check out his research papers and presentations.

Vendors ported UNIX to new hardware platforms and incorporated "value-added" items such as printed documentation, additional device drivers, enhanced file systems, window managers, and HA (High Availability) technologies. Source code was no longer shipped in favor of "binary-only distributions" as vendors sought to protect their intellectual property rights.

To stand a chance of securing government contracts, vendors implemented security extensions as specified in the Rainbow Series of Books, by the U.S. Department of Defense. Each book defined a set of design, implementation, and documentation criteria that an operating system needed to fulfill to be certified at a particular security level. Probably the best known level is C2, which we'll look at later.

Getting "accredited" was no mean feat. It required a significant amount of time and money. This tended to favor the big players who could afford to play the long game.

As it turns out, the security interfaces across different distributions are pretty incompatible. On top of this, the code running the C2 subsystems tended to be immature, buggy, and slow. The administrative tools were awful (and often still are) as was the support. Ask a UNIX administrator about C2 auditing, and she'll either look at you blankly or laugh.

These developments were happening against a backdrop of low technical security awareness, even lower than today's. The IBM mainframe stored all the corporate secrets and was considered a well-known commodity. As for UNIX, it gained a reputation for being something of an unruly beast. The combination of its hippie culture, unorthodox parentage, and almighty superuser (root) proved something of a nightmare for some auditors.

Consequently, the advice given to administrators was very general in nature and seemed to focus solely on who had access to root and what version of sendmail was running (because of its long history of security problems). These things are clearly important, but the fact that their shiny new systems were running a slew of overtrusting network services and buggy, privileged programs just wasn't on their radar. (And we haven't even mentioned the application programs!) Crackers were well aware of the shortcomings in popular distributions and were running rings around the less capable administrators.

However, at the other end of the spectrum was a loose community of "security pioneers": programmers-cum-administrators who developed some of the most pervasive security tools ever written. We'll cover the best ones in due course. The authors openly shared their source code with the wider community via Usenet, way before the WWW (World Wide Web) had been invented.

Recent years have seen a significant rise in the popularity and business acceptance of open source UNIX (http://www.opensource.org/osd.html). Traditionally, commercial support for open source distributions was limited to small specialist outfits that tended to have limited geographical presence. The recent explosion of business interest in GNU/Linux has vendors lining up to earn support dollars. Times have really changed in the UNIX world. In the world of commerce, proprietary UNIX systems once ruled the roost. Now, everyone is talking open source. What does open source bring to the party? We'll cover that shortly.


Classifying UNIX Distributions

UNIX distributions offer very different levels of security out of the box. Today, there are hundreds of distributions, although only about a dozen are in very widespread use.

From a security perspective, we can group these into the following categories.

Immature

Immature UNIX distributions include experimental, unsupported, and poorly supported distributions. These distributions ship with programs that have security vulnerabilities that are either well known, or easy to identify and exploit. You'll generally want to leave these well alone, except for maybe shooting practice in the lab.

Mainstream

Mainstream UNIX distributions are characterized by a large installed user-base, commercial support services, and, up until recently, binary-only distribution.

The best selling closed source, commercial off-the-shelf (COTS) distributions as of this writing are

        SUN Solaris

        Hewlett-Packard HP-UX

        IBM AIX

        Compaq Tru64 UNIX

Each and every one of these has suffered serious security problems. And we are not talking about isolated incidents: every month, significant security flaws are discovered. In other words, people are finding all kinds of ways of cracking these popular systems.

The vendors all started from the same common code base. Serious security holes are found in that core from time to time, but their occurrence is infrequent enough to consider the code reasonably mature from a security perspective.

The weak area tends to be the vendor enhancements. Writing secure software places certain demands on the programmer. There is little evidence to show that operating system vendors can produce secure code in a sustainable manner. It's not that the entire programming industry is stupid; that would be an injustice to some very talented code cutters. It's evidence that writing secure software, under commercial time pressures, is a "hard problem." This should not come as a startling revelation to the software industry.

In case you have a hard time believing that the commercial operating system you run has a security history comparable to Swiss cheese, look up your OS in the SecurityFocus vulnerability database at http://www.securityfocus.com/vdb/. You might be surprised by the results.

Certainly, mainstream vendors generally agree that security is important. However, the focus is on tangible security features like encryption, rather than security of design and implementation. It's a lot easier to sell features than it is assurance. Insecure software is invisible; that is, until someone decides to break in.

Many IT decision-makers implicitly trust mainstream vendors to produce a secure operating system. However, comprehensive security testing is nontrivial. It requires significant expertise and takes time, both of which come at a cost. Some vendors aren't even trying, though. This is evidenced by the frequency of posts to Bugtraq from independent security researchers reporting basic security flaws in COTS software. I am not aware of any COTS vendor publicly pledging that it is pro-actively auditing its distributions for security vulnerabilities as a matter of course.

This situation is exacerbated by a commonly held belief that exploiting security vulnerabilities is rocket science. That's a myth perpetuated by the news media and a section of the security industry; the truth is that anyone with even a modicum of IT skills can break into a computer system installed straight off the CD-ROM. Exploit scripts are widely available and simple to use.

This might lead you to think about legal liability issues. Surely, the law must protect the consumer? The truth is on the shrink-wrap clinging to your OS media. Read the big, fat disclaimer (known as the license agreement). Before you accuse me of being flippant, go on, read that agreement, every last word of it. Here is a snippet from a product disclaimer most likely installed on your workstation:

DISCLAIMER OF WARRANTIES: YOU AGREE THAT XYZ HAS MADE NO EXPRESS WARRANTIES TO YOU REGARDING THE SOFTWARE AND THAT THE SOFTWARE IS BEING PROVIDED TO YOU "AS IS" WITHOUT WARRANTY OF ANY KIND. XYZ DISCLAIMS ALL WARRANTIES WITH REGARD TO THE SOFTWARE, EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, MERCHANTABLE QUALITY OR NONINFRINGEMENT OF THIRD PARTY RIGHTS. Some states or jurisdictions do not allow the exclusion of implied warranties so the above limitations may not apply to you.

LIMIT OF LIABILITY: IN NO EVENT WILL XYZ BE LIABLE TO YOU FOR ANY LOSS OF USE, INTERRUPTION OF BUSINESS, OR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY KIND (INCLUDING LOST PROFITS) REGARDLESS OF THE FORM OF ACTION WHETHER IN CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT PRODUCT LIABILITY OR OTHERWISE, EVEN IF XYZ HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Now ask yourself, what recourse do you have if someone drives a juggernaut through your system's security via a security hole in this software? The truth is, none. Zero.

Until the customer wakes up and demands security, vendors will spend more time trying to out-feature their competitors rather than auditing and fixing their existing codebase. Sexy features sell boxes.

Personally, I don't see any convincing reason that the "features versus security" status quo will change any time soon. Consequently, you need to adopt some healthy skepticism. After all, outright cynicism can get a tad grating after a while.

How Secure Is Open Source?

Because I have painted a pretty dismal picture of closed source OS security, you might ask, "Is there any tangible security benefit to going open source?" My view is yes and no.

Note

The open source development model contrasts sharply with that of the traditional closed source one. To find out more, see Eric Raymond's classic paper at http://www.tuxedo.org/~esr/writings/cathedral-bazaar.

 

Open source proponents argue that a transparent development process, with the benefit of "many eyes" debugging and reviewing code, adds security in itself. They cite the development process of new cryptographic algorithms/protocols as an example of the need for an open code development model.

Developing secure cryptography is a "hard problem," as is developing secure and reliable program code. Less experienced cryptographers have often learned, at their own cost, that a closed development model produces algorithms that merely seem hard to break. Under the glare of an experienced cryptanalyst, however, they are soon broken.

Others feel that developing secure program code doesn't work out that way in practice. They point to the lack of experienced code auditors, the sheer volume of code to audit, and the general lack of interest. This point is worth considering. How many people do you know who actually review the source code they download? How many people do you know who have sufficient experience to actually find most, if not all, of the security holes? We all assume, "Someone else is doing it."

In reality, open source code does get reviewed; just don't expect every line of every program you download to have been reviewed by elite security researchers. Software that is high profile or security related tends to grab the lion's share of attention. Even then, the sheer size and complexity of the codebase can make reviewing a formidable challenge, even for highly skilled reviewers (as evidenced by code errors that slipped by reviewers of ISC BIND V4 and SSH version 2).

It is more efficient and effective to review open source programs for security problems than it is to review closed source programs.

Reviewing closed source software relies on either black box testing (threat testing) or reverse-engineering. Both can be very time-consuming and require a great deal of patience and sound methodology on the part of the reviewers. Reverse-engineering can be incredibly tedious and exhausting; it can also be illegal (refer to your license agreement again). The current trend in lawmaking might soon outlaw reverse-engineering altogether. Our lawmakers run the risk of outlawing one of the few methods available for finding security holes in closed source software. Open source eliminates that problem.

To learn more about software assurance testing (such as white box testing), check out the Cigital (formerly Reliable Software Technologies) Web site at http://www.cigital.com/resources/.

The view you take on open/closed source programs will strongly influence your selection of an operating system and the security tools available to you.

Some organizations can be particularly sensitive about using open source software particularly those subject to regulatory or legal demands. The issues tend to revolve around trustworthiness (that is, no Trojan code), integrity concerns, and formal support.

The extent of the concerns will often relate to the role the software will play and who else is using it (the sheep theory). Back doors are pretty much unheard of in popular open source software. They are more likely to occur on the download site itself (after an attacker has compromised a site). This is exactly what happened to the primary TCPWrappers download site in January 1999. CERT released an advisory (http://www.cert.org/advisories/CA-1999-01.html) reporting that the site had been compromised and a Trojan had been inserted into the TCPWrappers source. The changes were spotted pretty quickly as the modified archive files were not PGP signed by the author. However, this kind of incident is extremely rare and easy to detect.

Rather than blindly trust download sites, I recommend that you follow a policy of downloading software from multiple, well-known, separately managed sites (for example, CERT, COAST, and SecurityFocus.com). Also use cryptographic integrity software (such as MD5, PGP, or GnuPG) to verify the integrity of the archive against a known good signature (again, use unrelated sites for the comparison).

For tips on using MD5, see http://www.cert.org/security-improvement/implementations/i002.01.html

For a Win32 version of MD5, visit http://www.weihenstephan.de/~syring/win32/UnxUtils.html

Formal support is a separate (mostly commercial) issue that I don't plan to cover here.

But, what about using an entire operating system that is open source, such as GNU/Linux? Some people perceive that open source systems suffer more security problems than other platforms. They point to the number of security patches released and the volume of vulnerabilities reported on lists such as Bugtraq.

My view is that the kernel itself doesn't appear to suffer from any more security problems than any other UNIX kernel. It's widely regarded as more stable and better analyzed from a security perspective than any mainstream closed source UNIX.

However, a Linux distribution consists of more than just a kernel. In fact, SuSE has grown so large, it ships on seven CDs! Thousands of applications are available for Linux, many of which have not been written with security in mind. As a result, the distributors who package all this code are regularly sending out security advisories and patches. This can lead to the incorrect conclusion that every patch issued is relevant to your installation. The message: Don't install everything off the CD; be a little selective and install only what you need. More on that later.

For more on the open versus closed source code debate, check out these links:

        Michael H. Warfield, "Musings on open source security models": http://www.linuxworld.com/linuxworld/lw-1998-11/lw-11-ramparts.html

        Simson Garfinkel, "Security Through Obscurity": http://www.wideopen.com/story/101.html

        John Viega, "The myth of open source security": http://www.earthweb.com/dlink.resource-jhtml.72.1101.|repository||itmanagement|content|article|2000|07|19|EMviegaopen|EMviegaopen~xml.0.jhtml?cda=true

The Fuzz Challenge

In 1990 and 1995, the University of Wisconsin staged the Fuzz Test, a study of how 80 popular operating system programs on 9 different platforms behaved when they were subjected to random input data streams. Clearly, a correctly written program should handle anything thrown at it; otherwise, an attacker might be able to influence a privileged program to gain unauthorized access.

The findings make interesting reading. Open source proponents argue that the results of the Fuzz Test provide empirical evidence to support their view that open source programs are more reliable and secure than the closed source equivalents. Needless to say, in the test the closed source programs suffered from fundamental problems not present in their open source equivalents. Check out the full report here: ftp://grilled.cs.wisc.edu/technical_papers/fuzz-revisited.pdf. For the NT crowd, check out the Fuzz Test's August 2000 report; testers managed to blow up 100% of NT applications.

For me, the biggest win with open source software is that, if you know what you are doing (or know someone who does), then you can change the system yourself by implementing additional defenses. We'll cover some of the more popular options later. There really is nothing to beat the sense of empowerment you get when you realize that you have complete control over the way your system operates.

Hardened Operating Systems

In this category I'm including distributions that meet one or more of the following criteria:

        The distribution shipped with "secure by default" configuration settings.

        It was programmed defensively (the programmer assumes any user could be an attacker).

        The distribution maintainers subject their existing source code to a security audit whenever a new class of security vulnerability is discovered.

        The distribution has been compiled in such a way as to contain a common class of security exploit, the buffer overflow.

OpenBSD

Probably the best known example of a free, open source, hardened UNIX distribution is OpenBSD (http://www.openbsd.org/). In fact, OpenBSD is one of the only distributions in general circulation to meet the first three criteria (OpenBSD's developers would probably reasonably argue that they don't need to do point 4 because they do points 2 and 3!).

Their publicly stated goal is to be "Number one in the industry for security." They achieve this by attracting security-conscious programmers and adopting a tireless approach to weeding out both possible and not-so-possible security exposures.

Security benefits of OpenBSD include the following:

        Has secure "out-of-the-box" system configuration; that is, no time-consuming hardening is required.

        Ships with strong cryptography ready for use. OpenBSD includes OpenSSH for secure network terminal access, IPSEC, strong PRNG (Pseudo Random Number Generator), secure hashing, and wide support for cryptographic hardware. See http://www.openbsd.org/crypto.html for more details.

        Suffers fewer security vulnerabilities than any other UNIX I am aware of. Equally important, though, is the turnaround of fixes: typically within a day or two.

        Provides source code for independent scrutiny.

        Includes simplified installation and management in recent releases.

OpenBSD has been ported to 11 hardware platforms.

At the time of this writing, OpenBSD has no SMP (Symmetric Multiprocessing) support; this is a major drawback, confining OpenBSD to single-CPU use. A project to rectify this is under way but is dependent on hardware donations and developer time.

Immunix

The Immunix team has taken a very different, albeit limited, approach from that of OpenBSD. Instead of attempting to fix bad code through code auditing, they use a specially modified compiler, StackGuard, to generate object code that can detect a buffer overflow attack in progress and halt program execution. The most common type of buffer overflow is "smashing the stack."

"Smash the stack" [C programming] n. On many C implementations it is possible to corrupt the execution stack by writing past the end of an array declared auto in a routine. Code that does this is said to smash the stack, and can cause return from the routine to jump to a random address. This can produce some of the most insidious data-dependent bugs known to mankind. Variants include trash the stack, scribble the stack, mangle the stack; the term mung the stack is not used, as this is never done intentionally.

Extract from Aleph One's all-time classic paper Smashing The Stack For Fun And Profit available from http://www.securityfocus.com/data/library/P49-14.txt

StackGuard technology does not protect against the many other classes of attack (or even every type of buffer overflow attack) but it does limit the damage of a buffer overflow attack to a denial of service rather than a system compromise.

For a full explanation of the StackGuard approach, check out Crispin Cowans's original research work at http://www.immunix.org/StackGuard/usenixsc98.pdf.
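The canary idea that StackGuard automates in the compiler can be sketched by hand. In this simplified, hypothetical version a guard value sits beside the buffer and is verified before the function returns; if an overrun had clobbered it, the program aborts (a denial of service) rather than returning to attacker-chosen code. Note that a real compiler may reorder local variables, which is exactly why StackGuard does this in generated code rather than in C.

```c
/* Hand-rolled sketch of a StackGuard-style canary check. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CANARY 0x5a5aa5a5UL

int guarded_copy(const char *src)
{
    unsigned long guard = CANARY;   /* canary placed near the buffer */
    char buf[16];

    strncpy(buf, src, sizeof buf - 1);  /* bounded here; an unbounded
                                           copy would overrun buf and
                                           clobber guard */
    buf[sizeof buf - 1] = '\0';

    if (guard != CANARY) {          /* check the canary before return */
        fprintf(stderr, "stack smashing detected: aborting\n");
        abort();                    /* halt: DoS, not a compromise */
    }
    return 0;
}
```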

As a practical demonstration of StackGuard in action, the Immunix team created ImmunixOS, a complete distribution of Red Hat Linux compiled using their StackGuard technology (the kernel itself is not StackGuarded, however).

A free CD image of ImmunixOS and other goodies can be downloaded from http://www.immunix.org.

To gain maximum benefit from compiler-based stack protection technology, you will need to recompile all your applications. If you don't have access to source code, you will not be able to StackGuard them.

Wrappers

You can develop wrappers to sanity-check program arguments and environment variables whether you use StackGuard or not. This technique is useful for protecting a privileged (set-uid/set-gid) program that you suspect, or know, is vulnerable to a buffer overflow attack.

Sanity-checking involves creating a small C program that replaces the suspect program. The wrapper is programmed to inspect command-line arguments and environment variables for suspicious input before calling the real program. Attempted attacks are logged to the system log via syslog, and the vulnerable program never gets called. Arguments that satisfy your checks are passed to the real program for execution as normal.

The wrapper will need to be installed with the same permissions as the real program. This will mean making it set-uid/set-gid. You'd better be sure your replacement code doesn't have any security weaknesses! The real program needs to be relocated to a directory accessible only by the owner of the wrapper; otherwise, users could bypass your wrapper and call the suspect program directly.

The ultimate wrapper approach is to wrap every privileged program on the system, if you have the time. Be on the alert for patches and upgrades that overwrite your wrappers with updated binaries. You'll also need to update your wrappers as new features are added to the protected programs. As you can see, there is some cost in doing this on a permanent basis.

Fortunately, someone else (AUSCERT) has done the hard work of creating a wrapper for us. Source code and instructions for use are available here:

  ftp://ftp.auscert.org.au/pub/auscert/tools/overflow_wrapper/overflow_wrapper.c

The next time you learn of a suspected security problem in a closed source set-uid/set-gid program, you can create a custom wrapper while you wait on the vendor fix.

While I'm on the subject of defending against buffer overflow attacks, it would be remiss of me not to mention Solar Designer's Linux kernel patch. For those wishing to delve deeper into stack attacks and protections, check out the links in the following sections. They only go to prove that security is often a game of cat and mouse.

Linux Kernel Patch

Author: Solar Designer

Platform: Linux (but principles apply to other distributions).

URLs: http://www.openwall.com/linux/ and http://www.insecure.org/sploits/non-executable.stack.problems.html (In fact, explore the entire site.)

The patch provides the following features detailed in the README:

        Nonexecutable user stack area

        Restricted links in /tmp

        Restricted FIFOs in /tmp

        Restricted access to /proc

        Special handling of fd 0, 1, and 2

        Enforced RLIMIT_NPROC on execve(2)

        The destruction of shared memory segments when they are no longer in use

        Privileged IP aliases (Linux 2.0 only)

Multilevel Trusted Systems

The final category of UNIX distributions is trusted systems.

Trusted operating systems (TOSs) provide the basic security mechanisms and services that enable a computer system to protect, distinguish, and separate classified data. Trusted operating systems have been developed since the early 1980s and began to receive National Security Agency (NSA) evaluation in 1984.

http://www.sei.cmu.edu/str/descriptions/trusted_body.html

Under the traditional UNIX privilege model, the root user has full run of the system. Root can do anything; root is god-like.

Trusted systems, however, totally change the privilege paradigm. The root user becomes a mere mortal subject to the laws of the trusted UNIX universe.

Trusted systems provide a fine-grained mechanism for controlling what actions a user can take.

For example, as any regular reader of news:alt.security.unix will tell you, a frequently asked question (FAQ) is "How do I prevent my UNIX users from accessing other network systems via Telnet/rlogin from my server?"

On a standard UNIX system, your options are limited and ineffectual. You could revoke access to the Telnet binary or delete it altogether, but a user can simply upload another copy and set the permissions they desire (most likely with an innocent-looking filename to avoid detection). They could use a totally different program or even develop another one.

The solution is to stop trying to control user activity from userland; it's futile. Instead of trying to prevent the user from running a command through file permissions, take a different approach: identify which system resources the command needs in order to actually function.

So, to communicate with a remote system, the Telnet program must initiate a TCP network connection to the system specified by the user (for example, Telnet yourhost). To do this, it needs a communication endpoint on the local system called a socket. Only after it has been given a socket can it connect to the destination system. After initiating a connection, the receiving Telnet daemon can accept or deny the connection based on its access control policy.

Userland programs can't just create sockets out of thin air. They need to ask the kernel. Consequently, the Telnet program must ask the kernel to create a socket for TCP communications. This is your control point. You could choose to wait until the call to connect, but why allow the program to allocate a finite system resource (that is, a socket) in the first place if you don't want it to connect? To implement this control, you need to modify the kernel. If you're running a closed source UNIX, for most of you, the ride ends here. Modifying system call code without source is hairy. (It is possible; it's just totally unsupported.) However, admins of open source distributions win out in this situation.

Thomas H. Ptacek has documented the complete process for a BSD kernel see http://skoda.sockpuppet.org/tqbf/sysctlpriv.html. Repeat that exercise for every other system call supported by your kernel, and you have one part of a basic trusted operating system.

Note

A user program (often referred to as a userland program) cannot modify the kernel in an arbitrary way. If it could, it would be a breeze to gain root access. You could simply overwrite the memory location storing the owner ID of your current process with the number 0, that is, root. The next time your process was subject to an access check, the kernel would look up your process owner ID in the process table and see the number 0; you would pass any access check!

However, userland programs need a way to ask the kernel to carry out actions on their behalf. This is because the kernel is solely in charge of access to system devices: the display, hard drives, memory, network interfaces, and so on. Programs make requests of the kernel via system calls. A system call is a discrete action such as "Open this file." The system calls supported by your system are listed in /usr/include/sys/syscall.h. Inside the kernel is a syscall table listing each system call number and a pointer to the code that the kernel calls to do the work. The kernel returns control to the userland program when the code implementing the system call returns.

 

Trusted operating systems implement the following concepts/principles:

        The principle of least privilege. This says that each subject (user) is granted the most restrictive set of privileges needed for the performance of authorized tasks. The application of this principle limits the damage that can result from accident, error, or unauthorized use.

        Mandatory Access Controls (MAC), as defined by the TCSEC (Trusted Computer System Evaluation Criteria): "A means of restricting access to objects based on the sensitivity (as represented by a label) of the information contained in the objects and the formal authorization (that is, clearance) of subjects to access information of such sensitivity."

        Privilege bracketing. The principle of enabling and disabling privilege around the smallest section of code that requires it.

        A trusted computing base. The totality of protection mechanisms within a computer system including hardware, software, and firmware, the combination of which is responsible for enforcing a security policy. Note: The capability of a trusted computing base to correctly enforce a unified security policy depends on the correctness of the mechanisms within the trusted computing base, the protection of those mechanisms to ensure their correctness, and the correct input of parameters related to the security policy.

Until recently, the predominant consumers of TOS technology were military and government agencies. With the explosion of e-business, this has changed. This attention to Internet-facing systems is a little ironic, given that survey after survey of IT security incidents concludes that 50% to 80% of attacks originate from within the organization. Not only that, but internal attacks actually cost the most, because insiders can cause the most damage.

At the time of writing, there are two major commercial suppliers of UNIX TOS products: HP and Argus Systems.

Hewlett-Packard Praesidium VirtualVault

The HP TOS VirtualVault only runs on Hewlett-Packard hardware. The operating system is a hardened version of HP-UX. Focused on Web-based e-business applications, VirtualVault ships with Netscape Enterprise Server and a Trusted Gateway Agent.

VirtualVault replaces the all-powerful root user with 50 distinct privileges, granting each application only the minimum operating system privileges it requires to run properly. It incorporates many of the B-level features of the Department of Defense Trusted Computer System Evaluation Criteria (TCSEC). See http://www.hp.com/security/products/virtualvault/papers/ for more information.

Even VirtualVault hasn't escaped the security bugfest unscathed. Check out http://www.securityfocus.com/vdb/middle.html?vendor=HP&title=VirtualVault&version=any.

Argus Systems PitBull

The other main player in the Trusted OS space is Argus Systems (http://www.argus-systems.com). Its PitBull product is available for Solaris and Windows. As of this writing, the development of ports to IBM AIX and GNU/Linux is well underway.

PitBull installs over the top of the existing operating system, replacing the guts of the mainstream OS with trusted substitutes.

To encourage discussion, development, and uptake of TOS technology, the founders of Argus Systems created the online Argus Revolution site at http://www.argusrevolution.com/.

You can even download a free-for-noncommercial-use copy of PitBull with accompanying documentation from http://www.argusrevolution.com/pitbullsupport.html.

If you want to study TOS technology in any further detail, I highly recommend that you try the download.

Both of the previously mentioned products are closed source. The noncommercial, open source options include a BSD project and an offering from the NSA.

Trusted BSD

TrustedBSD (http://www.trustedbsd.org/) lets you peek at the code.

Currently under development, TrustedBSD is a set of security extensions to the FreeBSD UNIX operating system. The developers hope TrustedBSD will take FreeBSD into environments that have higher security requirements. The extensions are being integrated into core FreeBSD (http://www.freebsd.org).

NSA

The highly secretive U.S. National Security Agency (NSA) in conjunction with Network Associates, Mitre, and Secure Computing has published an open source security extension for GNU/Linux. This includes a "strong, flexible mandatory access control architecture based on Type Enforcement." According to the online documentation, the NSA developed the Flask security architecture and prototyped it in the Mach and Fluke research operating systems. By integrating the Flask architecture into the Linux operating system, they hope to substantially broaden the audience of the technology. For more details, check out http://www.nsa.gov/selinux/index.html.

Realities of Running TOS

The substantial security improvement provided by a TOS does come at a cost, particularly:

        The need for specialized administration skills. The administrator(s) must be well versed in both TOS concepts and real-world administration. Hiring experienced TOS administrators is not easy (they are few and far between). Also consider that internal support structures will need to change with the role-based administration inherent in TOS.

        The need to understand your application's security requirements in depth. Installing a TOS brings immediate benefits: your OS has suddenly become resilient against the majority of attackers. Now, though, you need to tell the TOS about your applications, that is, what OS resources your applications need to access. This can be tricky for two reasons. First, application documentation rarely includes anything like the kind of detailed information you'll need to do this. Second, applications are a moving target. Testing before upgrades becomes ultracritical: a subtle, undocumented change in a rarely used application function could lead to access problems if the TOS application profile hasn't been updated. For commercial customers deploying popular corporate products (for example, ORACLE RDBMS), this will be less of a hurdle because commercial providers of TOS technologies tend to have application security profiles for common enterprise applications. Unusual closed source applications will require significant testing and observation by administrators well versed in troubleshooting TOS compatibility issues.

        Loss of flexibility. To fully realize the potential of using a TOS, you will want to lock down the privileges of applications and administrative accounts. By definition, this costs you flexibility: you will no longer be able to make major configuration changes to security-sensitive parts of the OS on-the-fly! If you value ultimate flexibility over hardcore security, then you probably don't want to run a TOS. It's a case of using the right tool for the right job. But don't forget: Flexibility benefits the attacker, too!

        TOS systems can still be hacked if they are not configured carefully or contain security bugs. Areas to defend are detailed here: http://www.argusrevolution.com/downloads/DefCon.ppt.

The decision to deploy a trusted UNIX system will hinge on your analysis of risk: the value of the information you are trying to protect, the perceived threats, and the probability of attack. Security controls are an insurance policy of sorts. Your spending on security (both initial and ongoing costs) should reflect this.


Security Considerations in Choosing a Distribution

Consider the following key security factors when selecting a UNIX distribution:

        Understand the intended use of the system. What threats must the system defend against? Consider physical, human, and technological threats.

        Gauge the technical security competence and awareness of the primary administrator(s). Distributions that are a significant departure from local technical security expertise should be considered a higher risk (unless technical security training will be provided). Vendor-provided security training classes tend to be weak. The SANS Institute runs good introductory courses.

In 1997, the CERT coordination center produced a "Report to the President's Commission on Critical Infrastructure Protection." Security awareness and user/administrator security training were key points.

        Learn about the vendor's approach to handling reported security vulnerabilities. Do they even acknowledge that vulnerabilities occur in their distribution? Do they have a clearly documented process for handling reports from outside? Do they watch Bugtraq for reports of security problems in their software? Do they provide e-mail addresses for reporting new security problems?

        Assess the vendor's response time when fixing security vulnerabilities. The SecurityFocus vulnerability database is useful for comparing the public announcement date and vendor fix dates.

        Consider the maturity and stability of built-in security tools and interfaces. Weak areas tend to be C2 audit log management and analysis, mixed coverage of daemon logging to syslog, and clunky security interfaces that can result in mistakes being made in security settings.

        Do a gap analysis, comparing the native security features against your UNIX security policy. Consider the availability, cost, and installation overhead of third-party/open source tools required to plug the gap.

        Estimate the time it will take to lock down a virgin install of the distribution to comply with your policy. Calculate the cost of the administrator's time and possible delays on projects. This is the cost of buying distributions that are not secure by default. Ask the vendor to provide you with smart ways to lower this cost.

        Visit the vendor support site. How long does it take to find the security alerts/bulletins and security patches? Read a couple of security bulletins. Do they make sense? Do they tell you enough about the problem to figure out whether you would need the patch? Compare a security bulletin with the original announcement made on Bugtraq (search the archives at http://archives.neohapsis.com/search/). Does the vendor's assessment of the problem tally with the original report?

        Assess the ease of security patching. Are stable tools available to easily identify missing patches? Are these kept up to date? Can patch installation be reliably automated for server farms? Are MD5 hashes available to validate patch integrity? Bear in mind the SANS finding that failing to update systems when security holes are found is the third major security mistake.

        Check the release versions of any bundled third-party software (for example, sendmail, bind, or wu-ftpd). Make sure they are current or that the vendor has backported fixes for security problems.
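The MD5 validation mentioned above is a one-liner with md5sum. In this sketch, the filename and hash are illustrative stand-ins (the hash shown is that of an empty file); in practice you compare the download against the hash published in the vendor's security bulletin:

```shell
# Stand-in for a downloaded patch; an empty file has a well-known MD5.
touch patch.tar

# Compute the hash of the download.
md5sum patch.tar

# Verify it against the vendor-published value; md5sum -c reports "patch.tar: OK".
echo "d41d8cd98f00b204e9800998ecf8427e  patch.tar" | md5sum -c -
```

Note the two spaces between the hash and the filename; md5sum -c is picky about that format.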


UNIX Security Risks

You've chosen your UNIX distribution (or someone else did it for you), and you need to know where the risks are. All operating systems have security problems no matter what anyone might tell you to the contrary. Anything with some complexity, written by humans and managed by humans, inherits the wonderful flaws of the authors along the way.

The main risk areas on a typical, modern day UNIX system tend to be

        Misconfigured/buggy network daemons. These leave your system open to attack from anyone who can "see" your server across the network. The attacker doesn't need an account on your system to exploit these security holes. These are classified as remote vulnerabilities.

        Poorly chosen user passwords. Bottom line: Passwords are an inconvenience for most users. Systems configured to enforce fascist password rules only encourage users to write down their passwords. A middle ground is required; this is a people issue, rather than a technical one.

        Buggy privileged programs (set-uid/set-gid). What happens when an attacker subverts a program that executes with special privileges? Well, it depends on the specific vulnerability. All too often, though, system security is breached, and the attacker can take control over the operating system. These problems are classified as local vulnerabilities, as the attacker requires a user account to exploit them (which might or might not be obtained legitimately).

        Filesystem nightmares. Badly set file permissions, sloppy handling of temporary files, race conditions, and insecure defaults are all culprits. Exploiting these can lead to leakage of sensitive information, introduction of Trojan code, and destruction of data. Bottom line: An insecure filesystem affects the integrity of the entire system.

        Insecure applications. Naive designs and sloppy programming practices combine to produce a giant sore on your system, exposed to anyone who tries to exploit the myriad of possible weaknesses. Bolting down your UNIX server is not enough if someone can drive a tank through your application security.

The common trait of these risk areas is insecurely written code.

We'll cover each category in more detail in the following sections.

Buzzt!

Actually, some people use these weaknesses to study attackers. Lance Spitzner used to get his kicks from blowing up things in his tanks. Today, he enjoys observing attacks launched against his honey pot system:

  http://project.honeypot.org/
  

A sacrificial host is built running a default Red Hat Linux install with no security patches and connected to the Internet (typically outside a firewall or within a DMZ). A series of logging mechanisms are activated to record probes and attacks. After a compromise, Lance is able to reconstruct all the attackers' activity via the captured packet trace.

This kind of exercise provides an insight into the way attackers compromise a victim machine and, more importantly, what they do when they have.

A number of organizations run honey pots to identify new attacks "in the wild." By coordinating their efforts, they are able to track new trends and issue alerts to the wider community.

Be warned though: Having drawn attackers to your site, you had better be sure they won't compromise your real network or discover you monitoring their activity. Sophisticated attackers can identify a honey pot very quickly.

User Accounts

If you haven't read Chapter 14, "Password Crackers," I recommend that you do so now.

Users, bless 'em, can give up your system's security no matter what lockdown procedures you have implemented.

The age-old problem of poorly chosen passwords continues to plague any operating system or device that requires them.

One security practitioner I know describes passwords as "past their sell-by date." The problem doesn't seem to go away, and there is no reason to believe that it is going to. Therefore, a change of tack might be necessary, and you should give serious consideration to moving to other forms of authentication, such as one-time passwords (OTP), biometrics, or smartcards. No authentication system is perfect; many appear impressive until you start analyzing their issues. However, it doesn't take a giant leap to improve upon passwords.

Assuming you're stuck with UNIX passwords, here's what you can do to improve things:

        Limit access to the root account on a need-to-have basis. Root is all powerful, and you'll want to avoid giving this level of access to anyone who doesn't have a legitimate need for it.

        Don't give root access to anyone who can't demonstrate adequate technical expertise AND judgment. You get to define "adequate." Mistakes will happen from time to time, but allowing untrained newbies access to root is asking for trouble. At the same time, though, don't make a big thing about the root account to those you refuse; the resentment could lead to other security problems!

        Set a strong password on the root account. Stick to a minimum of eight characters and include special characters.

        Disable root logins across the network. Have admins make use of su, or, better yet, deploy sudo.

Note

Sudo is an incredibly useful utility. It allows the administrator to permit users to run commands for which they do not usually have the privilege. For example, you have a helpdesk that needs to be able to change passwords for everybody except the administrators. This is easy with sudo. You define a sudo rule to permit anyone in the helpdesk group (or using the helpdesk user id, if you are not allergic to shared accounts) to run the passwd command as root, with a twist: you also define what arguments can or cannot be passed to the command. So, in this case, you would specify the administrator user ids as invalid arguments (by using the exclamation mark to signify negation). Sudo can be found here: http://www.courtesan.com/sudo/.
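A sudoers rule for this helpdesk example might look like the following sketch. The group name and the "admin" account are hypothetical, and you should always edit the file with visudo so a syntax error doesn't lock you out:

```
# Members of group "helpdesk" may run passwd as root for any user...
# ...except root itself and the (hypothetical) admin account.
%helpdesk ALL = /usr/bin/passwd [A-Za-z]*, !/usr/bin/passwd root, !/usr/bin/passwd admin
```

The wildcard restricts the argument to plausible usernames, and the '!' entries carve out the forbidden targets.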

 

        Store the root password offline in an envelope (signed across the join) in a secure place. In large environments, make sure that a log of access is kept.

        Don't use the same passwords across all machines. The compromise of a single password should not result in a complete giveaway. Categorize your systems in virtual groups, by either risk or data sensitivity. Assign a unique root password to each virtual group. In practice, these groups can be test, development, and production. However, if you're storing the same data on all those systems, you either need to revisit your data security policy or think of another way to group your systems.

        Ban remote access services that don't support encryption. Telnet and FTP send passwords in cleartext across the network. At a minimum, make sure your privileged users use ssh and scp.

        Implement password construction checks on the server. Set minimum values for password length, the number of alphanumerics, and, where supported, special characters.

        Implement real-time password dictionary checks. Use software like npasswd (http://www.utexas.edu/cc/unix/software/npasswd/) as a replacement for the stock password program. This type of software does require some configuration on the part of the administrator but goes a long way toward solving the problem of easy-to-guess passwords being used.

        Instigate a password-cracking policy. Every month or quarter, attempt to crack all passwords (administrator accounts included). Track the percentage cracked and set targets. Use a decent-sized cracking dictionary and add words that relate to your environment (for example, project names, team names, supplier names, nicknames, and so on).

        Create a password policy that states the required length and composition of passwords. Make sure all system users have seen it.

        Educate your users on strategies for choosing good passwords. For example, have them think of a line from a favorite song or quote and select the first letter of each word to make up a password; for example, "I Left My Heart in San Francisco" would be IlmhiSF. Then mutate the password by adding in special characters, for example, !lmhi$F_. That would take a while to crack. This can make hard passwords easy to remember.

        Don't think that by replacing letters with numbers in passwords you are going to outsmart a cracker. Password cracking programs do this automatically, too.

        Give serious consideration to enforcing account lockout after three or five failed logins. This can lead to a denial of service attack if you are in a hostile environment; a malicious user could lock out all the accounts on purpose simply by typing gibberish for users' passwords. (However, DoS is generally a low risk in an internal network.)

        Make sure that your helpdesk doesn't just re-enable locked accounts (or create new ones) for anyone who calls the desk with a friendly voice. It's a well-known fact that social engineering of over-obliging support staff is easier than bypassing a well-configured firewall.

        Don't let your support staff fall into the trap of using the same password when resetting locked accounts. Invest in some software that generates passwords that are phonetically easy to pronounce. (This won't work for multilingual support desks.) Mail the author if you find a good product.

        If a user calls to have his password reset, use a callback scheme on a prearranged number or, failing that, leave the new password on the user's voicemail. Don't tell the user the password there and then unless you know the person's voice well enough to spot an impersonator. This might sound a little too paranoid. However, consider what someone might gain by doing this and how stupid you'd look if you simply gave the password to them on a plate!

        Go on walkabout every now and then to check whether users are writing passwords down. For example, are passwords written on sticky notes on monitors? If they are, have a quiet word with the user. Persistent offenders might find remembering different passwords to different machines hard. Consider using software like Counterpane's Password Safe (http://www.counterpane.com/passsafe.html). This installs on the client machine and can securely store passwords unlocked via a single password. Just make sure this one is strong and not written down! This software is particularly useful for administrators.

        Make sure that your systems are using shadow passwords. It used to be that UNIX stored passwords in the /etc/passwd file. However, as CPU technology forged ahead, it wasn't long before these passwords were being cracked. Check to make sure that your passwords are being stored in a file readable only by root.

        Avoid hard-coding passwords in scripts if at all possible. If you have to, then make sure file permissions are set to user access only.

        Avoid badly written client software that stores UNIX server passwords on the client in an easy-to-decrypt/decipher form; some clients, for example, store them merely XOR'd in the NT registry.
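The shadow-password check above is easy to script. The awk one-liner below flags any passwd entry whose second field still carries something other than a placeholder (the exact placeholder conventions vary slightly by distribution, so treat this as a sketch):

```shell
# /etc/shadow should be readable only by root (mode 640 or stricter).
ls -l /etc/shadow

# Every second field in /etc/passwd should be a placeholder ("x" or "*"),
# never an actual password hash.
awk -F: '$2 != "x" && $2 != "*" {print $1 ": hash present in /etc/passwd!"}' /etc/passwd
```

Silence from the awk command is what you want; any output names an account whose hash is world readable.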

Filesystem Security

This section reviews fundamental filesystem and privilege concepts.

When it comes to input and output, UNIX treats everything as a file. In fact, the term file has multiple meanings in UNIX; it can be a

        Regular file. A sequence of data bytes collectively regarded by the operating system as a file.

        Directory file. A list of filenames and pointers to file meta information (that's a fancy way of saying "information about a file"). If you have read access to a directory, it means you can read the contents of the directory; in other words, you can get a directory listing (a la ls(1)). However, only the UNIX kernel has the capability to modify the contents of this file (for example, insert a new entry).

        Symbolic link. Contains the name of another file. When a symbolic link is accessed, the kernel recognizes the file as such by examining its file type. It then reads the file contents, and opens the file with the name stored in the symbolic link. System administrators frequently use symbolic links to relocate data to another filesystem while maintaining the path of the parent directory. Attackers, on the other hand, use symbolic links for more nefarious purposes, as we'll cover later.

        Character special. Represents a byte-oriented device. It is the UNIX interface to devices that operate on a byte-by-byte basis, like a terminal device.

        Block special. Functions like a character special file, but for block-oriented devices such as disk drives.

        Socket. Allows one process to communicate with another process whether on the local system (via Inter Process Communication) or a remote machine. Programs such as Telnet, rlogin, and FTP all use sockets.

        Named pipe. Supports local Inter Process Communication (IPC). Because of the type of queuing used, it is sometimes referred to as a FIFO (First In, First Out).

Each of these objects is stored in the filesystem. Protecting the filesystem from abuse is critical to the ongoing integrity of your operating system, application programs, and data.

File Attributes

The UNIX filesystem supports a standard set of file attributes or properties. These attributes are stored in a data structure called the inode (index node); every file has an inode. On Solaris, the inode data structure for the traditional UNIX FileSystem (UFS) is defined in /usr/include/sys/fs/ufs_inode.h.

From a security perspective, the most important attributes include

        The owner id. The numeric user id that owns the file.

        The group id. The numeric group id that owns the file.

        Permissions. Combined with the owner id and group id, these determine the access controls on the file.

        Size. Measured in bytes.

        Time of last access. The time the file was last accessed, in seconds since 1970.

        Time of last modification. The time the file was last modified, in seconds since 1970.

        Time of last inode change. The time the file's inode was last changed, in seconds since 1970.

        Number of hard links. The number of files that "point" at this file.

The permissions attribute defines the access rights of the file owner, the group owner, and all other users on the system. The root user and file owner can control access to a file by setting permissions on the file and on the parent directories.

In the standard implementation of UNIX, the root user is not subject to permission checking: root can read, write, or execute any file. Note that, in UNIX, write access is equivalent to delete; by definition, if you can write to the file, you can erase the contents of the file.

Readers unfamiliar with filesystem permissions are encouraged to read the chmod man page. For further reading, I highly recommend Advanced Programming in the UNIX Environment, Addison-Wesley, 1992, ISBN 0-201-56317-7.

Permissions in Practice

To access a file by name, a user must have execute privilege in every directory contained in the file path, as well as appropriate access to the file itself. In the case of files in the current directory, a user needs execute privilege for the current directory.

To be able to create a file in a directory, a user must have execute permission on every directory in the path, as well as write permission in the target directory.

When it comes to deleting a file, it isn't actually necessary to be the file owner or have write permission on the file. By having write and execute permissions on the parent directory, you can delete files. This can be a "gotcha" if you're not careful.
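A quick sketch of the gotcha: the file's own permissions are irrelevant to deletion, as long as you have write and execute permissions on the directory containing it.

```shell
mkdir scratch && cd scratch
touch victim
chmod 000 victim          # strip every permission from the file itself
rm -f victim              # succeeds anyway: deletion needs write+execute on the directory
test ! -e victim && echo "deleted"
```

The rm succeeds because unlinking a name only modifies the directory, not the file's data blocks.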

In order to understand how the various permissions are checked when a user attempts to open a file, you need to understand how process privileges work.

Put simply, when you execute a program, a process is created. Associated with a process are at least six IDs:

        Real User ID. The numeric user id of your login account

        Real Group ID. The numeric group ID of your primary group (the group defined in your /etc/passwd entry)

        Effective User ID. The numeric user id used during file access permission checks

        Effective Group ID. The numeric group ID used during file access permission checks

        Saved Set User ID. A copy of the numeric user id saved by the exec function when you execute a program

        Saved Set Group ID. A copy of the numeric group ID saved by the exec function when you execute a program

In addition, if you are a member of more than one UNIX group, a corresponding number of supplementary group IDs will be set.
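In an ordinary login shell, the real and effective IDs match; the id(1) command lets you inspect them, along with the supplementary groups:

```shell
id -ru    # real user id (your login account)
id -u     # effective user id; differs while a set-uid program is running
id -G     # all numeric group ids, including supplementary groups
```

The saved set IDs have no command-line probe; they matter inside programs that switch privileges back and forth.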

At first glance this might seem overcomplicated. To appreciate why so many IDs are required, we have to talk about a key security mechanism of UNIX, the set-uid/set-gid privilege.

The Set-uid/Set-gid Privilege

Normally, when you execute a program, a process is created that runs with the privileges associated with your user id. This makes sense; you shouldn't be able to interfere with files or processes belonging to another user. However, some programs need to carry out privileged operations. They can't do this if they execute under the user id of an unprivileged caller. To make a program privileged, the program owner (or root) can assign the set-uid or set-gid bit to the program via the chmod command.

Unlike ordinary programs, a set-uid program executes with the privileges of the program owner, not the caller. By making a program set-uid, you allow it to take actions with the authority of the program owner, on your behalf.

Set-gid works the same way but, not surprisingly, for groups. A set-gid program runs with the privileges of the owning group rather than with the privileges associated with the group of the user id who called the program.

Set-gid can also be set on a directory. Files subsequently created within a set-gid directory will have their group ownership set the same as that of the set-gid directory. Usually the group owner would be set to the user's primary group. This way, a group of users can share data despite being in different primary groups.

An example of a set-uid program is the passwd program. When you change your password, the system needs a way to modify your password entry in /etc/shadow. This file is only accessible by root because it stores passwords; however, this prevents you from legitimately changing your password. By making the passwd program set-uid, you allow a nonprivileged user id to update its password. Without the set-uid bit, users would have to ring up the administrator to have their passwords changed. Eventually, the administrator's temper is bound to fray; see the BOFH series at http://members.iinet.net.au/~bofh/ for enlightenment.

In our example, the security of the shadow file is at the mercy of the passwd program. If the user running the password program can somehow influence the program in a way the programmer didn't consider, she might be able to directly modify the shadow file!

Therefore, set-uid programs must be programmed defensively to avoid their being subverted by an attacker to gain extra privileges. In the case of a set-uid root program, the stakes are very high: one exploitable bug means game over; the attacker gets root privileges.
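The bit itself is set with chmod. In a long listing, an 's' replaces the owner's 'x'; this sketch uses a harmless scratch file rather than a real privileged program:

```shell
touch mytool
chmod 755 mytool     # ordinary executable: -rwxr-xr-x
chmod u+s mytool     # add the set-uid bit:  -rwsr-xr-x (equivalent to chmod 4755)
ls -l mytool
```

A real set-uid root binary such as passwd will show the same -rwsr-xr-x pattern, with root as the owner.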

The Umask

Our review of file permissions would be incomplete without studying the umask. The umask determines the set of permissions that will apply to a newly created file if no permissions are explicitly specified at creation time. In other words, it's the default file permission.

The umask is the inverse of the default file permissions: any bit set in the umask is masked off a newly created file. For example, if our default umask is 022, a new directory (requested with mode 777) will be created with 755 permissions; that is, the user has read, write, and execute permissions, whereas group and Other have read and execute permissions. A regular file (requested with mode 666) ends up with 644 permissions. Just remember that the umask should be set to a value opposite of the permissions you want.

A common default umask value is 022. This is usually set in a system-wide login script such as /etc/profile. This can be overridden by a user who specifies a different (usually more restrictive) value in his local login script (for example, ~user/.profile). The umask command is a built-in shell command; it can be run at the shell prompt, for example, as umask 022.
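A quick illustration (GNU stat syntax assumed): regular files are requested with mode 666 and directories with 777, and the umask masks bits off both:

```shell
umask 022
touch demo_file           # requested mode 666 -> created as 644
mkdir demo_dir            # requested mode 777 -> created as 755
stat -c '%a %n' demo_file demo_dir
```

Try it again with umask 077 and you'll see 600 and 700, respectively: nothing for group or Other.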

Every process on a UNIX system has a umask setting; it doesn't just affect the users who log in interactively. When the system boots and executes the system start-up scripts, a number of network daemons (services) are started. They inherit the umask value of their parent process, init (usually 022). Any files they subsequently create will be given permissions set by the umask, unless the programmer explicitly set permissions.

The umask setting is therefore incredibly important: if it is set too loosely, other users might be able to read, or in some cases, write over your files. Despite its importance, it is commonly overlooked by programmers.
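
The arithmetic is easy to verify at a shell prompt. A quick sketch (the directory and filenames are purely illustrative):

```shell
# Show how the umask masks off permission bits at file creation time.
mkdir -p /tmp/umask-demo && cd /tmp/umask-demo
rm -f open_file safer_file

umask 022          # the common default
touch open_file    # program requests 666; 022 is masked off -> 644

umask 027          # a more restrictive setting
touch safer_file   # 666 minus 027 -> 640: no access at all for Other

ls -l open_file safer_file
```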

Filesystem Risks

With the theory out of the way, let's examine the risks. The primary risks are

        Data disclosure. I don't want to belabor an obvious point, but this is so incredibly common that it deserves some attention. Users and programs create files in /tmp; it's a digital scrap yard. If the user doesn't specify file permissions, the umask value applies. Commonly, the default umask of 022 is set, and the file is given world readable permissions: any user on the system can read the file.

On a typical multiuser system, it is not unusual to find copies of scripts containing database passwords, confidential business data, sensitive log information, and core files containing encrypted passwords in /tmp. The same goes for user home directories and shared areas that have not been locked down to prevent access by Other. Why break in if you can read your way to root?

        Unauthorized data modification/deletion. This happens in two common ways. The first is through lax user practices: someone sets world writable permissions on a file. The second way is via world writable directories. If I create a file in a directory that is world writable, any local user can subsequently delete or modify it.

This is also true for filesystems shared via NFS.

The only exception to this is directories that have the sticky bit set (such as /tmp). If the file permissions are locked down, only the owner can write to the file. This is not always obvious, because the world writable directory might be the parent of the parent of the current directory, or the parent of the parent's parent. In an extreme case, if the / directory is world writable, an attacker can replace any file on the system: for example, by moving /usr/sbin out of the way and creating a replacement /usr/sbin filled with Trojan programs of their choosing. This can easily lead to a total system compromise.

The bottom line is that it's not just the parent directory that counts but every directory along the way up to slash (/)! This problem is surprisingly common on system and application directories.

        Resource consumption. Each filesystem is built with a finite number of inodes. When all inodes are consumed, no more files can be written to the filesystem, regardless of available free space. This can cause system daemons to crash or hang when /tmp is involved. Unless file giveaways have been disabled in the kernel, the culprit can cast the blame on another user simply by changing the ownership of the files she has created to the victim via the chown command. Consuming all free space is another approach.

        Temporary files with predictable filenames. Programs can be subverted to overwrite or remove arbitrary files if they create temporary files with predictable filenames in directories writable by Other (commonly /tmp). Other users can guess a filename in advance and create a symbolic link to a system file. When the program runs, it writes data through the link to the system file, corrupting it. If that's the passwd file, you have a denial of service attack on your hands. This is incredibly common, especially in application code and administrators' shell scripts.
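
As a sketch of the safer pattern, mktemp(1) (shipped with most modern distributions) generates an unpredictable name and creates the file with 600 permissions in a single step; the filenames below are illustrative:

```shell
# BAD (shown only as a comment): a predictable name such as
#   echo "$data" > /tmp/report.$$
# can be pre-created by an attacker as a symlink to /etc/passwd.

# BETTER: mktemp picks an unguessable name and creates the file
# itself, mode 600, so there is no window for a symlink attack.
tmpfile=$(mktemp /tmp/report.XXXXXX) || exit 1
trap 'rm -f "$tmpfile"' EXIT     # clean up on exit
echo "scratch data" > "$tmpfile"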

        Named pipes that trip up the find command. By default, most UNIX systems ship with root crontab entries to run the find command. In addition, root users often run find to search for particular files. A user can create a named pipe that will cause find to hang when it reaches the pipe. The find command will open the pipe for reading and block (that is, hang) waiting for data. Because no EOF (End of File) will be sent, find will hang until it is killed. An attacker can use this method to prevent the administrator from finding the attacker's unauthorized set-uid programs. A further attack on some versions of find is to embed commands in the filename. If find passes the file to an exec command switch, the shell will interpret any shell meta characters (in this case ";") and execute the embedded commands.

Privileged shell scripts that read filenames from the filesystem and blindly pass them to another program can be subverted. SNI (Secure Networks Inc.) posted an advisory way back in 1996 about this problem; it is archived here: http://lists.insecure.org/bugtraq/1996/Dec/0133.html. This weakness is still present in some commercial distributions today.

        World readable/writable named pipes. One method for processes to communicate with one another is through the filesystem using a named pipe. If the pipe has been created with weak permissions, an attacker can read and write to it, subverting or crashing the process at the other end of the pipe or reading privileged data.

        Race conditions. Matt Bishop coined the acronym TOCTTOU (Time Of Check To Time Of Use) for a common race condition: a program checks for a particular characteristic of an object and takes some action based on the assumption that the characteristic still holds true. If the program is subject to race conditions, an attacker can swap the object between the time the check is made and the subsequent use of the object. This tricks the program, which then operates on the wrong object. See Chapter 29, "Secure Application Development, Languages, and Extensions," for more on race conditions.
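
The pattern is easy to see in shell; this is a sketch (the filename is illustrative), and the noclobber trick shown at the end is one common mitigation, not a cure-all:

```shell
# TOCTTOU in miniature: the gap between the check and the use is the
# attacker's window.
f=/tmp/tocttou-demo
rm -f "$f"

# Racy pattern: check, then create. An attacker who wins the race can
# plant a symlink at $f between the two lines, and the redirect below
# will follow it.
if [ ! -e "$f" ]; then
    echo "secret" > "$f"
fi

# Tighter: set -C (noclobber) makes the redirect use O_CREAT|O_EXCL,
# collapsing check and create into one atomic operation; it refuses if
# anything (even a symlink) already sits at that path.
( set -C; echo "secret" > "$f" ) 2>/dev/null || echo "refused: path already exists"
```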

Filesystem Countermeasures

Here are some things you can do to minimize your filesystem exposures.

        Give clear direction in your security policy about the need to protect the organization's data. Classify information by sensitivity and define what access controls are required. Give examples.

        Set the TMPDIR environment variable to a private, per-user temporary directory. Well-behaved programs check TMPDIR before using /tmp.

        Audit your shell scripts and change all references to publicly writable directories to your own tmp directory. For bonus points, create unique filenames without relying on the time, date, or process ID (or a weak pseudo-random number generator).

        Educate users about file permissions and the effect of the umask. In sensitive environments, have your users sign a usage policy that includes good stewardship of information.

        Ask users about their information-sharing needs. Create additional UNIX groups as necessary and enroll users as appropriate to support data sharing at a more granular level. The group mechanism can be used very creatively; think long term and design a flexible group access model.

        Make sure that the system-wide umask is set to 027, as a minimum, in the system shell start-up files (for example, /etc/profile).

        Create a cron job to check user start-up scripts for inappropriate umask settings.
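
One possible shape for that check, sketched as a shell function; the function name and the "027 or 077 only" policy are this example's assumptions, not a standard tool:

```shell
# check_umasks DIR: flag start-up scripts under DIR whose umask is
# looser than 027 or 077.
check_umasks() {
    for rc in "$1"/*/.profile "$1"/*/.bashrc; do
        [ -f "$rc" ] || continue
        # Report any umask line that is not an accepted strict value.
        if grep -E '^[[:space:]]*umask[[:space:]]' "$rc" |
           grep -Evq 'umask[[:space:]]+0?(027|077)[[:space:]]*$'; then
            echo "weak umask: $rc"
        fi
    done
}

# From cron, something like:
#   0 2 * * * /usr/local/sbin/check_umasks /home | mail -s umask-report root
```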

        Audit /tmp and other shared directories on your servers now. Perform spot checks on /tmp. Persistent offenders should be warned that they are in breach of policy. If the warnings are not heeded and the information is sensitive, consider e-mailing a summary of interesting finds to management.

        Disable core file creation (not to be confused with kernel crash dumps) via the ulimit command. Modern UNIX kernels will refuse to dump core when a set-uid program crashes because this might reveal sensitive information. However, privileged system daemons and application processes might dump core, resulting in chunks of sensitive system files being written to a world readable core file. Validate your fix by sending a QUIT signal to an expendable network service and checking that it doesn't produce a core dump in its current working directory. (/proc or lsof can help find that out.)
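
A sketch of both the fix and the validation step (the throwaway child shell stands in for an expendable service; core file naming varies across systems, so treat the check as indicative):

```shell
# Candidate line for /etc/profile: forbid core files entirely.
ulimit -c 0

# Verify: SIGQUIT would normally make the child dump core in its
# current working directory; with the limit at 0 it should not.
cd /tmp && rm -f core core.*
sh -c 'kill -QUIT $$' || true
ls core* >/dev/null 2>&1 && echo "core dumped!" || echo "no core dumped"
```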

        Monitor /tmp for predictable filenames using a tool such as L0pht's l0pht-watch, available here: http://www.atstake.com/research/tools/l0pht-watch.tar.gz.

        Make sure named pipes are included in your file permission checks! These are used for Inter Process Communication (IPC), so lax permissions will allow an attacker to interact with processes in ways you don't want.

        Prevent file giveaways by setting CHOWN_RESTRICTED to true in the kernel configuration file.

        Consider using extended ACLs (where supported) via the getfacl and setfacl commands (Solaris). These extend the access information stored in the inode. They can be used to give a user access to a file or directory even if that user is not in the owning group or is not the file owner, and the file permissions deny access by "Other." But ACLs can be a real pain to administer. Personally, I recommend that you design a good group model and only use extended ACLs when you really need to.

The Set-uid Problem

Programming mistakes in set-uid programs have been a real source of security headaches. A single security hole in just one set-uid root program can be all that is needed for an attacker to gain root access.

The problem is widespread. We're not talking about one or two isolated instances; it's more like a graveyard of broken set-uid programs. Again, check the SecurityFocus.com vulnerability database for set-uid problems: there have been hundreds (thousands?)! The problem isn't going away, either, especially in third-party programs. New set-uid vulnerabilities are being reported to Bugtraq on a weekly basis.

Writing secure set-uid programs can be difficult. Just because you can program C doesn't make you a security god. Heck, even the security gods get it wrong sometimes. Take, for example, the L0pht (http://www.safermag.com/html/safer25/alerts/33.html), a group that knows its subject inside out.

The C language is pretty unforgiving to the developer of set-uid programs; C makes it too easy to screw up and open the barn door. Specifically, the lack of bounds checking in C has allowed many developers to write programs with buffer overflows.

However, it would be pretty lame for us to blame a language for set-uid problems. After all, the security pitfalls of C are well documented; it's hardly a new language. Alas, the biggest source of security vulnerabilities is naive programmers slapping together code they think is secure.

A typical UNIX distribution ships with a large number of set-uid root files, averaging between 70 and 100. Now, not every line of code necessarily runs with root privilege; the privilege has to be invoked by the program via a call to set-uid. But even if the privileged lines of code are written super securely, a wily attacker can exploit a hole in the nonprivileged section of code (that is, before the call to set-uid) with devastating consequences. If the attacker exploits a buffer overflow and can force the program to make a call to set-uid, it's game over. Any code the attacker supplies for the program to execute will run with root privileges.

Security-savvy programmers throw away the set-uid privilege as early as possible in the program.

So, given the number of privileged programs, the administrator is left to ponder: "Where will the next vulnerability be found?" The answer is, we simply don't know. Hence the stock advice of any security textbook is to remove the set-uid bit from unnecessary privileged programs. This is much easier said than done. How do you know what is unnecessary? Sure, you know programs like passwd need to be set-uid, but what about all those others? Removing the set-uid bit has to be done with a great deal of care unless you want to have a lot of free time on your hands.

A classic example of irrelevant set-uid code (at least for most people) is the KCMS (Kodak Color Management System) suite of programs. These are installed during a full install of Solaris and are set-uid root. The CERT advisory CA-1996-15 describes KCMS as

a set of Openwindows compliant API's and libraries to create and manage profiles that can describe and control the color performance of monitors, scanners, printers and film recorders.

So, if you are a Solaris admin, have you ever used those? The only time I have used them was to demonstrate to system administrators how easily root could be compromised.

Another, more common example is the ping program. Ping sends an ICMP ECHO REQUEST packet to a remote system and waits for a response (ICMP ECHO REPLY) to check whether the remote system is alive (although not necessarily functioning). The standard implementation of ping requires a raw socket to build the ICMP ECHO REQUEST packet. This is a privileged action because having access to a raw socket means you can create custom packets, which is very dangerous in the wrong hands. So the ping command is set-uid root.

Unfortunately, allowing mischievous users access to seemingly innocuous programs like ping can result in a security nightmare. Remember the Ping of Death? The ping command has an option whereby the user can control the size of the ICMP packet sent. It turns out that some implementations of ping allow users to send out very large ping packets that have caused remote systems to crash. Was anyone expecting that? Of course not. Hence the need to follow the least privilege principle. Only allow users to run what they need to run in order to do their job. Does every user on every system really need to be able to run network diagnostic tools?

Then there are those set-uid programs that don't even need to be set-uid, typically system administration commands. The set-uid bit is redundant if only root runs them.

I am not aware of any vendors that provide any guidance, or sufficient technical program documentation to help an administrator easily identify nonessential set-uid programs.

Fortunately for Solaris and Linux users, there is some good information out there on locking down your set-uid programs.

Truly paranoid Solaris users should check out this e-mail on the YASSP (Yet Another Solaris Security Package) mailing list from a Sun employee discussing how the company locks down Internet-facing systems:

    http://www.theorygroup.com/Archive/YASSP/2000/msg00548.html
  

Solaris 2.6 set-uid lockdown information is here:

    http://www.ist.uwaterloo.ca/security/howto/2000-08-22.html
  

and here:

    http://www.vetmed.auburn.edu/~whitej4/secureSolaris2.6.html#2.0
  

For Solaris 2.7 information, go here:

    http://ist.uwaterloo.ca/security/howto/Solaris7_set-uid.html
  

So, how do you minimize your system's exposure to set-uid holes that are waiting in the wings?

1.       Try to avoid installing the full distribution; install only what you need. This is a security best practice. If the code isn't on your hard drive, no one can use it against you. But this can be hard to fulfill in a pressure-filled environment where the focus is on getting things live. Just remember the costs of post-live lockdown.

2.       List all the set-uid/set-gid programs on the system. You can do this with the following commands:

              find / -perm -u+s -print
              find / -perm -g+s -print

3.       Find out the stated purpose of each program. You're likely to find that some of them are totally unnecessary; neither you nor your users would ever need to run them. As long as these programs are not required for system operation, they can have the set-uid bit removed, or, alternatively, all access by Other can be removed (that is, chmod o-rwx file).

4.       Identify the set-uid root programs that only root needs to run and remove the set-uid bit; they don't need to be set-uid because you'll be running them as root anyway. There's no point in leaving potential time bombs lying around for someone to play with. Remove the set-uid bit or access by Other (either is good). This can eliminate a large number of programs.
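
The mechanics of both options, demonstrated on a harmless copy rather than a real system binary (the filename is illustrative):

```shell
# Simulate a set-uid root install on a throwaway copy of /bin/true.
cp /bin/true /tmp/demo-suid
chmod 4755 /tmp/demo-suid   # -rwsr-xr-x: the "s" is the set-uid bit

chmod u-s /tmp/demo-suid    # option 1: drop the set-uid bit
chmod o-rwx /tmp/demo-suid  # option 2: remove all access by Other

ls -l /tmp/demo-suid        # now -rwxr-x---
```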

5.       Identify set-uid programs that leak sensitive system information and thereby make an attacker's life easy: for example, ps, top, and netstat. Both ps and top display process information, including command-line arguments; these can contain application usernames and passwords. They also help an attacker identify usage patterns, which assists the timing of attacks. Similarly, netstat reveals information about your network topology (via the -r switch) and current network connections (that is, who is talking to your system).

This kind of privacy disclosure can lead to client systems being attacked. Client systems are soft targets, the "low hanging fruit" of the network. They can be used as remote password sniffers (to compromise accounts on more systems), proxies to misdirect attack investigations, and conduits to other network segments that are unreachable directly (that is, they are behind a firewall or not directly attached). It's not hard to imagine the consequences if the victim client happens to belong to a system administrator.

So limit access to "leaky" set-uid programs on a strictly need-to-have basis. The side message is to secure your client machines; they can be used against you!

6.       Identify the set-uid root programs that only a trusted group of users needs to run (for example, network operations). Create a dedicated UNIX group and enroll the trusted users in this group. Next, change the group ownership of the set-uid programs to this group (don't change the owner, though, because that would make the set-uid call fail). Finally, and very importantly, remove all points of access by Other (that is, chmod o-rwx file). These include print queue management programs, network utilities, and application management interfaces.

7.       Identify the set-uid root programs you think no one will ever need. Before you remove a set-uid bit, you need to be totally convinced you won't break something. In cases like these, you need to profile the programs' use; that is, to log program invocation. One approach is to install an AUSCERT wrapper like the one discussed earlier. But here we're not going to use the wrapper for its intended purpose (although there is nothing stopping you from doing so). Instead, we're going to modify the wrapper to make a call to the logger command before the real program is called. This is unnecessary if you have C2 auditing configured and are logging calls to exec(). Review your logs after a month, and, if no relevant activity has been logged, you probably have sufficient basis to remove the set-uid bit. You might want to leave your pager on for a while, though.
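
A stripped-down sketch of such a logging wrapper: you would move the real set-uid binary aside and drop the script in its place. Here /bin/echo stands in for the moved-aside binary, and the wrapper path is illustrative, so the example is harmless to run:

```shell
# Write the wrapper script. In real use, REAL would be the moved-aside
# original (e.g. /usr/sbin/foo.real) and the wrapper would take its place.
cat > /tmp/suid-wrap <<'EOF'
#!/bin/sh
REAL=${REAL:-/bin/echo}   # stand-in for the real set-uid binary
# Record who invoked it and with what arguments, then hand over.
logger -p auth.notice "set-uid profile: $REAL uid=$(id -ru) args: $*" 2>/dev/null
exec "$REAL" "$@"
EOF
chmod 755 /tmp/suid-wrap

/tmp/suid-wrap hello world   # logged, then behaves like the real program
```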

8.       You should now be left with a handful of set-uid root programs that you consider essential. Modify the AUSCERT wrapper for those programs to make sure overly long arguments or environment variables cannot be specified. This won't protect you against all attacks, but it will protect you against some of the common ones.

Pat yourself on the back you've made it a lot harder for an attacker to succeed against your system.

Maintenance-wise, you will find that vendor patches assume a vanilla install, and, therefore, patches and upgrades clobber your changes. Always run a file integrity checker after applying updates to identify changes; that way, you avoid your efforts being undermined by dumb scripts.

Make sure you keep an eye on Bugtraq to keep up with new set-uid exposures, and subscribe to your vendor's security alert mailing list to ensure that you hear about patches quickly.

If you have only a few trusted users on the system, you might be tempted to skip the whole set-uid removal process. Before you do, consider this: You are unwittingly making life easier for a remote attacker. If an attacker gains shell access to your machine through some 0-day (new) exploit, she will use any vulnerability she can find to elevate her privileges to root. Set-uid root programs will be at the top of her list. If you fail to bolt down your set-uid programs, an attacker will not hesitate in leveraging this against you.

Understand, though, that gaining root is not always the attacker's endgame; it depends on what she is trying to achieve. For example, suppose you store all your sensitive company data in a relational database owned by a user called "datamart." Clearly, the attacker only needs to target the data owner account (or privileged application accounts) to get full access to the database. This can be done through password guessing, social engineering, or exploiting security bugs in set-uid application programs. Don't focus on root to the exclusion of your primary application accounts.

Application software often ships with set-uid/set-gid programs. In my experience, these tend to be rife with problems; ignore them at your peril. It is rare to find security-savvy application programmers.


Breaking Set-uid Programs for Fun and Profit

(URLs and brief descriptions of tools mentioned in the following section can be found immediately after this list.)

You're faced with a piece of software that smells of security problems. What can you do about it? Some homebrew security testing! You won't become an "Uber-hacker" overnight, but the following techniques are a good place to start:

        Search the Web for information about previous product vulnerabilities. Include any other third-party code shipped as part of the product in your search (for example, library routines).

        Install a copy of the software on a test system and observe the installation routine closely. Check the contents of any logs or temporary files created during the install. They might contain passwords or other insightful information. Check the file permissions. Could an attacker read them? Also study the default configuration. What are the weaknesses?

        Identify programs that are installed set-uid/set-gid and search available documentation for program information. (It might be sparse, though.)

        Run strings and grep on the binaries to identify weaknesses such as back doors, hard-coded calls to other programs without explicit paths, predictable temporary filenames, application-specific environment variables (which you might be able to overflow), and so on. Check for any hidden command-line arguments or back doors.

        If you can identify the actual developers of the code (through the RCS (Revision Control System) strings), search the Web for other code they have developed. If you can find any open source programs, check their coding style and observe any security errors. The odds are on your side that they've repeated the same mistakes in the code you're bashing away at.

        Observe how your target interacts with the kernel. As root, run the software via a program that can trace system calls (for example, strace or truss). If these tools aren't on your system, check your distribution media. Identify how the program works. Try to identify when it makes calls to the set-id family of system calls. Focus on the sections of code that execute with set-uid/set-gid privilege.

        Now watch the program's use of library files. Run the program under a call-tracing program such as ltrace or sotruss. Check for function calls that have known weaknesses (see http://www.whitefang.com/sup/secure-faq.html for a list). Keep rerunning it until you know which files it accesses. Examine those files, remove them, put spurious data in them, and so on.

        Play with the command-line arguments and feed the program data it doesn't expect. If the command takes a user-specified file as an argument, try to have the program read a file that you don't have access to (such as a database file or private configuration file). Some applications are so dumb that they don't even check the original owner of the supplied file, and you could find yourself staring at the /etc/shadow file.

        Set crazy values for the standard and application-specific environment variables. Run the application to see whether it breaks. If it does, can you exploit it in some way? Does it dump core and leak sensitive information in the core file? (Use strings or gdb to check this.)

        Deprive the program of resources it expects and see how it reacts. For example, consume all available inodes on application-related (and /tmp) filesystems. Programs that haven't been coded with extreme situations in mind can behave unpredictably use this to help you.

        If the application ships with a network server component, use Telnet (in the case of a TCP-based server) or, better still, grab a copy of netcat (described in the following section) and connect to the network port(s) used by the application. Do you receive any output upon connecting? Try to stimulate a response by pressing Return or other keys or by typing help. Try sending a large amount of nonsense data. Does it crash because of an overflow? If you can find specs for the protocol (or, in the case of proprietary protocols, if you can reverse-engineer it), try overflowing specific protocol fields (for example, in the case of Web servers, the HTTP Referer field). Granted, this is not rocket science, but just try stuff. You might be surprised how effective this crude approach can be at unearthing security bugs (or just general flakiness). Remember, you might be the first person scrutinizing it this way; you never know what you might find.
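
A crude probe of that sort takes only a couple of lines of shell. The host, port, and field below are placeholders, and the nc line is left commented out: aim it only at test systems you own.

```shell
# Build an oversized field for a basic overflow probe: 4096 'A' bytes.
payload=$(printf 'A%.0s' $(seq 1 4096))

# Then aim it at a test service, e.g. stuffing the HTTP Referer field:
#   printf 'GET / HTTP/1.0\r\nReferer: %s\r\n\r\n' "$payload" |
#       nc -w 3 target.example.com 80

echo "overflow string length: ${#payload}"
```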

        Use a network sniffer like tcpdump or Ethereal and observe the communications between the client program and network server. Look for plaintext passwords flying across the network or other information leaks. Advanced testers will attempt to replay the network traffic to see if they can reauthenticate using the captured packet data. Play "Man-in-the-Middle" (MITM) and intelligently modify the data in transit using a program like netsed.

        Check for inadequate settings on shared memory segments. Use the ipcs command to identify application-specific shared segments and check their permissions. Are they locked down enough? If not, read them as a non-root user. Try to decipher what you've read; check for information leaks such as weak passwords, encryption keys, fragments of database files, and so on. If you have write access, try to alter important values to affect what gets written to the user or application store.

Useful Tools for the Explorer

Tcpdump

Author: Network Research Group (NRG) at Lawrence Berkeley National Laboratory (LBNL)

URL: http://ee.lbl.gov/ or http://www.tcpdump.org/

Tcpdump is the de facto UNIX network sniffer. The version shipped with your distribution is likely out-of-date. Go get the latest one.

Ethereal

Author: Gerald Combs

URL: http://www.ethereal.com/

Ethereal is a GPL equivalent to commercial grade network sniffers. Featuring both a GUI display and a console-only version (tethereal), it can decode an incredible number of protocols. It also supports the capability to read capture files written by many other sniffers.

Netcat

Author: Hobbit (ported to NT by Weld Pond)

URL: http://www.l0pht.com/~weld/netcat/

Nicknamed the "Swiss Army Knife" of networking tools, netcat allows you to make outbound or inbound TCP connections to or from any port you choose. It can optionally hex dump traffic sent and received. Netcat can be used to bypass weakly configured packet filters as well as to throw test data at a network service (useful when checking for basic overflows). Because it can run as a listening network service, you can play all kinds of interesting network tricks with it. See the README for some ideas. If you're bound to a MS desktop, you'll be glad to hear about the NT/95 port.

Ltrace

Author: Juan Cespedes

URL: http://packages.debian.org/unstable/utils/ltrace.html

Ltrace is a Linux-only program to show runtime library call information for dynamically linked libraries. This enables you to trace function calls whether or not they end up making system calls. If the program you are interested in is statically linked, this program won't help. Non-Debian GNU/Linux users should be able to find packages available from their favorite package mirror.

Netsed

Author: Michal Zalewski

URL: http://packetstorm.securify.com/UNIX/misc/netsed.tgz

Michal is a very talented programmer who is active in security research. While he was blackboxing a Lotus product, he wrote netsed, a small GNU/Linux-based network utility that brings the functionality of sed (the stream editor) to the network. Netsed lets you change network packets on-the-fly as they pass your machine by specifying one or more search strings and a corresponding replacement. This automates an otherwise very tedious and repetitive process: capturing a data stream to a file, modifying the capture file, and sending it downstream, for every client/server communication.

Subterfugue

Author: Mike Coleman

URL: http://www.subterfugue.org/

The author describes Subterfugue as a "framework for observing and playing with the reality of software." In a nutshell, you can mess with the program big time! The user creates "tricks" that affect the way the program operates (either directly or through throttling I/O). By manipulating the world that the application executes within, you can profoundly influence and analyze its actions.

Test Limitations

The previously described attacks can turn up surprising results for very little effort. It can be quite depressing, though: you've locked down your UNIX server only to find that the application gives up the goods to anyone who knows the magic incantation. DIY (Do It Yourself) testing is certainly valuable and is the only option for most people. However, it is not a proper substitute for a thorough security audit by a seasoned security bug finder.

Perhaps the biggest problem with this approach is that there is really no way to know when you're done. All you can do is keep testing until you have exhausted all the tests you can throw at a program or until frustration gets the better of you. It's often a case of diminishing returns you find some interesting weaknesses, try to exploit them, and see what happens. Eventually, boredom sets in, and you find something more interesting to do.

If you do discover weaknesses, you can report them to the vendor. Depending on your point of view, you might have mixed feelings about doing so. Vendors have a spotted history when it comes to handling security problems. Some fix promptly and notify their users; others try to sweep problems under the carpet (worse still, some threaten the messenger).

If you are concerned about revealing your identity to a vendor, you can either use an anonymous remailer or ask the folks at SecurityFocus (http://www.securityfocus.com) to help you. They offer a free community service to help bug finders draft an advisory. With their experience in moderating Bugtraq, they are also a good sounding board if you do have concerns.

People have different views on the subject of full disclosure; it is a classic "religious" debate. Rain.Forest.Puppy (rfp), an active security researcher, developed a disclosure policy in light of his experience reporting vulnerabilities to software maintainers. You can read it here: http://www.wiretrip.net/rfp/policy.html.


Rootkits and Defenses

Our contemplation of filesystem security isn't complete without a mention of rootkits.

After a successful root compromise, attackers might upload and install a rootkit, which is a collection of replacement system programs that enable attackers to hide their tracks and easily reconnect to the system at a later time. It is not unusual for an attacker to patch the hole that enabled him to gain access, to avoid losing the system to another attacker.

Rootkits typically include replacements for the following commands:

        ps. Shows process information. The rootkit version hides processes run by the attacker; they simply don't show up in the output.

        netstat. Shows network connections, routing information, and statistics. Attackers certainly don't want you discovering them connected to your systems. So they install a modified netstat binary that effectively cloaks connections on specific ports or specific client addresses.

        ifconfig. The attacker might want to sniff the network to pick off authentication credentials (among other things). To do this, the network interface card must be put into promiscuous mode. A very observant administrator might notice the "P" flag in the output of the ifconfig command. The modified ifconfig doesn't print the "P".

        df. Shows filesystem free space and inode usage. The attacker's toolkit and sniffer logs consume disk space that might be noticed on a quiet system. The rootkit df ignores files stored in a particular directory or owned by a particular user id.

        ls. Lists files. Similar to the modified df, the rootkit ls behaves just like the standard ls but does not report files contained in a hidden directory or owned by a particular user id.

        sum. Calculates checksum and block counts. Should the administrator become suspicious and attempt to checksum the files against known good files (on a "clean" system), the rootkit sum program will produce faked checksum values that match the original binaries. Never ever rely on sum for security. It is possible for an astute attacker to create modified programs that still output the same sum value as the originals. (Instead, use cryptographic routines like MD5.)
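A crude userland model of the trojaned ls can be sketched in the shell: a function shadows the real binary and censors any name matching a pattern. This is only an illustration (real rootkits replace /bin/ls itself), and the filenames are invented.

```shell
# Model of a rootkit ls: shadow the real command with a function that
# filters out the attacker's files. Filenames here are made up.
cd "$(mktemp -d)"
touch passwd hosts .hidden-toolkit

ls() { command ls -a "$@" | grep -v hidden; }   # rogue wrapper

echo "victim sees:";  ls                        # .hidden-toolkit is gone
unset -f ls
echo "really there:"; command ls -a             # .hidden-toolkit is back
```

The same shadowing trick is occasionally used legitimately (shell aliases and wrappers), which is exactly why verifying the on-disk binaries matters more than eyeballing command output.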

Rootkits are readily available across a range of operating systems and architectures, regardless of the public availability of source. Patching binaries to include rogue code is not rocket science. It involves an understanding of binary file formats (for example, ELF) and some file manipulation, so don't assume you're invincible just because you are running a closed source OS.

Rootkit Countermeasures

The primary method to detect the presence of a rootkit is to use integrity assessment software. These programs take a digital snapshot of the system and alert you to changes to important system files.

When you first install integrity assessment software, a baseline database is created. This contains a unique signature for every file that is to be watched. Then, on a schedule set by the administrator, new signatures are generated and compared with those stored in the integrity database. A mismatch means a file has been modified in some way, possibly indicating your system has been compromised. Alternatively, it could just mean you've applied an OS patch!
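The baseline-and-compare cycle can be modeled in a few lines of shell, assuming GNU md5sum is available. (Real products record far more per-file metadata, such as inode information and permissions; the paths here are invented.)

```shell
# Toy integrity check: snapshot a file's hash, then detect a change.
cd "$(mktemp -d)"
mkdir bin && echo "original binary" > bin/ls    # stand-in system file
md5sum bin/ls > baseline.md5                    # create the baseline
md5sum -c baseline.md5                          # prints "bin/ls: OK"
echo "trojaned binary" > bin/ls                 # simulate a compromise
md5sum -c baseline.md5 || echo "MISMATCH: investigate"
```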

It used to be that system administrators would use a program like sum to generate file signatures of important system files. However, as they were to learn, these signatures can be faked. Attackers were able to cash in on the weaknesses of these checksum generators (or simply replace the program), thereby fooling the administrator.

In 1992, Gene Kim and Gene Spafford developed the Tripwire tool. Tripwire made use of digital hashing algorithms, such as MD5, to create file signatures that were impractical to forge. Even the slightest change to a file, or to the file's inode information, resulted in an unmistakably different hash. The software filled a real gap in the security toolkit and proved incredibly popular. It was ported to numerous platforms and became the de facto integrity assessment software, referenced in just about every security book you'll ever come across.

This software is today known as Tripwire ASR (Academic Source Release) version 1.3.1.

For all its good points, this software has a major limitation: the database must be stored on read-only media like a write-protected floppy disk or tape. Not surprisingly, this is an inconvenience and doesn't scale well. It may be the most common reason why sites give up using Tripwire. Storing the database on a read-only filesystem doesn't cut it either; an attacker can simply remount it read-write.

Realizing there was something of a market for this kind of tool, and that there was some mileage to be had in a major update, the authors set up a company, Tripwire Inc. This breathed life back into a popular tool. A number of new features were added, and the product was fully commercialized.

Possibly the most useful new feature is that you are no longer forced to store the integrity database on write-protected media: because the database itself is signed using a 1,024-bit El Gamal algorithm, you can store it on the system itself. That's not to say storing it on write-protected hardware is not a sensible idea. But, if that is not sustainable in your environment, then this may be for you.

Tripwire uses a policy language to define what to monitor. Check out the included documentation for a useful tutorial. The commercial version is somewhat easier to configure (no compiler required and the policy language seems friendlier) and ships with some reasonable defaults. Whatever version you use, though, don't forget to add in your application files and create the baseline database as soon after OS installation as possible.

For a stealthy way of running Tripwire, consider this. Create a cron job on a separate (hardened) system to remotely copy across the binary and database files and invoke the comparison. Don't forget to erase the files after the check. (The output can be stored on the invoking system ready for checking/filtering.)

While the commercial version does offer some worthwhile benefits, don't feel you have to pay out to get the core benefit; many sites still run the ASR version without a hitch.

The commercial version is now available for Windows NT. Those running larger sites might be interested in the Tripwire HQ Manager product to centralize management of all V2+ UNIX/NT Tripwire agents.

Good news if you're a Linux user, though: the Linux port of the commercial version was made open source and can be downloaded free.

Binary and source copies are available from http://www.tripwire.org.

The commercially supported product is available here: http://www.tripwire.com

The original ASR release is available here: http://www.tripwire.com/downloads/index.cfml?dl=asr&cfid=265337&cftoken=18379700.

Tutorials for installing Tripwire ASR can be found here:

    http://www.securityportal.com/topnews/tripwire20000711.html
    http://netweb.usc.edu/danzig/cs558/Manual/lab25.html
  

Solaris users should take a look at this CERT security improvement module at http://www.cert.org/security-improvement/implementations/i002.02.html.

Kernel Rootkits

We've covered userland rootkits, with which an attacker compromises the system and replaces important system files; now we examine the ultimate form of deception: the kernel rootkit.

To appreciate the stealth provided by a kernel rootkit, it is vital to understand the role played by the kernel. The kernel is the huge C program that runs the show; it operates at a low level, interfacing directly with system hardware. An attacker who reprograms the kernel can change the behavior of the system in any way he chooses. Consequently, if an attacker modifies the kernel, he can literally change your world. Unless the attacker leaves the digital equivalent of muddy feet on the carpet, you'll probably never even know about the attack.

The means to introduce a kernel rootkit is a root level compromise (just as for a standard rootkit). The usual purpose is to hide cracker activity and provide a convenient way for crackers to reconnect later on.

A kernel rootkit really is the most devious form of back door; it is the ultimate cloaking device. All bets are off when the kernel has been subverted.

Kernel rootkits typically modify the system call table to redirect system calls to rogue code introduced by the attacker. The rogue code performs whatever actions the cracker intends and then calls the original OS code to let the call complete. The user is kept blissfully unaware of this.

A typical kernel rootkit

        Hides processes. No matter what tool the administrator uses, the attacker's processes are hidden because the kernel itself lies. This overcomes the limitations of trojaning individual userland binaries such as ps.

        Modifies system logging routines (process accounting, C2 kernel audit, utmp, and so on).

        Hides network connections.

        Modifies NIC (Network Interface Card) status to hide sniffer activity.

        Reports false file modification times.

        Hides the presence of the module (in the case of an LKM).

        Does anything else the attacker can think of.

The technical reference bible for Linux Kernel hacking can be found here: http://packetstorm.securify.com/groups/thc/LKM_HACKING.html

Three main methods exist to introduce rogue code into the kernel:

        Modifying kernel memory on a live system via /dev/[kmem|mem]

        Patching the kernel binary on disk

        Loading a kernel module

Traditionally, kernels were monolithic: a big slab of code did everything. Modern UNIX systems support Loadable Kernel Modules (LKM), which enable the administrator to introduce new kernel code into the operating system whilst the system is running. This could be done to provide support for additional filesystem types, network drivers, or custom security routines. Check out your man pages for the module management commands: insmod (insert), lsmod (list), and rmmod (remove).

Whatever insertion method is used, kernel integrity is paramount: if the new code doesn't behave and tramples over key kernel structures, then the system is likely to crash. This isn't too subtle. Developers of kernel code know this all too well.

The act of patching a live kernel is actually less scary than it sounds (as long as you do it correctly). This technique is sometimes used to tweak kernel parameters where no "userland" utility exists. (It is generally unsupported, however.) Inserting new kernel code involves locating and overwriting unused areas of kernel space with your code and repatching the system call table to divert callers to your code.

Kernel patching on disk involves writing your changes directly to the kernel image using a binary patcher. You seek through the binary to specific locations and overwrite with your own code. File headers are likely to need modifying, so a basic understanding of object file formats is required.

Any of the previously mentioned methods is possible once root access has been gained.

LKMs, however, provide the most convenient method for backdooring the kernel. Consequently, LKMs appear to be the most common delivery mechanism for rogue kernel code in the wild. Conversely, LKMs provide the good guys with a way to enhance existing security, too.

At this point you might be thinking that closed source operating systems should be safe from this kind of thing. Again, as with standard rootkits, access to source is not a major factor. (Besides, source code for some closed source operating systems circulates within the underground community.) Kernel hacking requires a familiarity with kernel structures (documented in /usr/include), some skills with a kernel debugger, and an appreciation of kernel issues (for example, how to allocate memory correctly).

For example, in December 1999, mail was sent to Bugtraq announcing the availability of a Solaris Loadable Kernel Module back door. The note was from Plasmoid, a member of The Hackers Choice (THC), a Germany-based group with some very talented individuals. The paper is available from here: http://www.thehackerschoice.com/papers/slkm-1.0.html. Check out their other projects, too.

As with any program, it only takes one person to codify a kernel rootkit with a friendly userland interface. Then anyone can install and use it on a compromised system.

Rootkits come and go in popularity. A collection of current rootkits (both kernel and userland) can be found at the Packetstorm archive: http://packetstorm.securify.com/UNIX/penetration/rootkits/

Even if you can't find a rootkit for your system, it is probably prudent to assume a kernel rootkit exists, and, therefore, you should implement countermeasures. This might sound like unnecessary paranoia, and perhaps it is. On the other hand, bear in mind that the cracker community is very effective at sharing tools. Crackers don't tend to advertise their tools with big neon signs, though.

Protecting Against Kernel Attacks

Safeguarding against kernel attacks can be summed up in one word: prevention. You need to prevent attackers from writing to kernel memory (directly or indirectly through LKMs) or the on-disk kernel. This is easier said than done because, if they have root, they can modify the kernel. To prevent this, you need to get in there first and change the rules. But, to do this, you need to modify the kernel itself and, if your OS doesn't have LKM support, you're on your own.

The standard advice until recently has been "Disable LKM support." This is now a waste of time: Silvio Cesare has created a program to re-enable LKM support. You can download Kinsmod.c from his site, http://www.big.net.au/~silvio/.

If you're wondering why Tripwire isn't mentioned as a countermeasure here, you should probably re-read the introduction to this section. Tripwire is a userland tool. It doesn't run in the kernel; it makes calls to the kernel and bases its decisions on values returned by the kernel. Knowing this, an attacker can re-route calls made by Tripwire to custom code that generates false checksums matching Tripwire's expectations. This is usually implemented by a rootkit in order to hide the presence of the rootkit on disk.

Rootkit Detection

If you can't prevent, then detect. This is a sound security principle that can be applied to many security-related situations.

Currently, there is no generalized method to identify whether a given kernel on an arbitrary system has been subject to a rootkit attack. However, there are specific detect points for a number of published back doors.

Authors of kernel rootkits might include a routine to identify that the kernel rootkit is actually inserted. For example, the Adore LKM back door written by Stealth (http://spider.scorpions.net/~stealth/) can be detected by making a call to setuid with a magic number. If you supply the right number, the kernel module announces its presence. Of course, you are relying on defaults here. If your attackers used any sense at all, they would have modified the magic number or even the particular call used, and this crude detection scheme would fail.

A program that implements a number of checks for common back door modules on Linux is rkscan. Using the technique outlined previously, rkscan can identify multiple versions of the popular rootkits Adore and Knark (written by Creed). Rkscan is available from http://www.hsc.fr/ressources/outils/rkscan/index.html.en.


Host Network Security

This section focuses on network security at the host. It does not discuss firewalls, network intrusion detection, or router network security issues; it deals solely with the services provided by a UNIX server to its network clients.

Network Services: General Purpose Versus "Fit for Purpose"

Is your system having an identity crisis? It probably is if you've decided it's a Web server, but it's actually running all kinds of other network services, ranging from file-sharing services (NFS, Samba, and so on) to remote printing services.

What Are Network Services?

If you are familiar with the concept and implementation of network services, skip to the next section; otherwise, read on.

A network service is a process (daemon) that provides a service to clients. Apart from some internal housekeeping functions, its job is to process clients' requests. It's called a daemon process because it is detached from its controlling terminal and runs in its own session group (of which it is the session leader). By doing this, the process can survive when the user who executed the program logs out of the system.

In order to respond to clients, network daemons bind to and listen on a TCP (Transmission Control Protocol) or UDP (User Datagram Protocol) port. The program uses a network socket to send and receive data. A bound socket can be associated with a single IP address or all IP addresses on the system. This is determined by the parameters given to the bind() call. The port number selected by the programmer is based on a (voluntary) convention. For example, a Web server will listen on TCP port 80. A list of ports and their corresponding service names for your system is normally found in the file /etc/services. However, vendor-supplied services files are often incomplete.
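The services file is a simple name-to-port map. The canned excerpt below uses standard entries (a real file is far longer) and shows a typical lookup:

```shell
# Build a sample in the /etc/services format and look up a service name.
cd "$(mktemp -d)"
cat > services.sample <<'EOF'
ftp        21/tcp
telnet     23/tcp
smtp       25/tcp    mail
www        80/tcp    http
EOF
grep '^www' services.sample    # -> www        80/tcp    http
```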

For the official IANA (Internet Assigned Numbers Authority) port listing, visit http://www.isi.edu/in-notes/iana/assignments/port-numbers.

The IANA listing does not include a large number of ports in use today. Unless a software developer registers the port with IANA, it will not make it into IANA's list. But that doesn't mean it can't be used, just that it might clash with an existing or future entry in their list. For an unofficial but comprehensive list, check out http://dhp.com/~whisper/mason/nmap-services.

The decision to use the TCP or UDP transport protocol will depend on the communication requirements of the application.

TCP is a connection-based protocol: One system (the TCP client) makes a connect call to establish a connection with another system (the TCP server). The kernels of both machines maintain state data about the connection. This includes the IP address of the remote system, the remote port number, sequencing data (to support reliability and reordering of out-of-order packets), and a number of other parameters. The kernel examines the header of incoming TCP/IP packets to determine which socket should receive the payload of the packet. A connection is identified by the unique tuple of source IP, source port, destination IP, and destination port.

UDP is connectionless. Put simply, one system can send one or more packets to another system (or systems, in the case of multicasting/broadcasting). No connection is established (hence "connectionless"), and delivery is not guaranteed; the application program is responsible for reliability. For further details about TCP/IP, refer to Chapter 4.

The Risks of Running Network Services

Standard UNIX distributions ship with a raft of network services. That should come as no surprise; after all, they are sold as general-purpose operating systems. Unfortunately, all distributions barring OpenBSD ship with nonessential network services enabled. They are "on" by default.

Network services provide useful functionality to clients. Remote users can download mail, log in to the system, share data remotely, use printers attached to the server; all this and much more. Most significantly though, they also enable remote attackers to break into the system, grab sensitive data, snoop the network, install Trojan programs, spy on end users, crash the system, or wipe the disks.

If you're new to IT security, you might find that last statement bewildering. Wouldn't they need to log in first? Why on earth would vendors ship software like that? Well, obviously the problems that enable attacks to happen are not part of their intended functionality. As history demonstrates, however, security bugs in network daemons are very common; so common, in fact, that, when you're done installing your operating system, the chances are extremely high that your machine is vulnerable to remote attack.

Some administrators realize this and head straight for the vendor's support site to download and install the latest security patches. With that out of the way, they make the system available on the network, knowing that the system is "secure" at least from a remote attacker. Right? Depends on the network. In general, this isn't enough. Even after applying every security patch available from the vendor, the system is still vulnerable to network attacks for four reasons:

        Insecure network daemon settings

        Insecure network kernel settings

        Insecure network protocols

        Unpublished security bugs in network daemons

Need convincing? Well, limiting ourselves to a subset of network daemons and a subset of their default insecure settings only (and that's quite a limitation), your system is probably vulnerable to some of the following problems post-install.

Your system is most likely configured to run the X Window System (whether you knew it or not). On some default installations, a remote attacker can grab screen shots and kill users' X programs, and that's just for starters. What about capturing every key the administrator types (think passwords) or remapping the administrator's keys to carry out additional commands when they hit a particular key?

Your system is most likely running an SNMP agent. SNMP agents enable remote Network Monitoring Stations (NMS) to collect system information. In its default configuration, remote attackers can also collect, and in some cases modify, your system settings. More on that later.

Your system gives away the names of user ids on the system. Traditionally, the finger service was the culprit: vendors shipped the finger server enabled by default, and remote users could query finger and gain a list of usernames ready for attempting brute-force logins. After pressure from customers, the majority of vendors now ship this service disabled. But this isn't a comprehensive solution. In its default state, sendmail enables remote users to query user ids and will report whether they exist or not. Automating this check and building a dictionary of common usernames to check for is hardly rocket science.

In addition to the categories of problems previously mentioned, your system is also vulnerable to published security bugs in network services that the vendor hasn't even fixed yet. That's right: your system could be vulnerable to problems reported in public forums, and yet your vendor doesn't have a fix ready (yet). Again, this might sound crazy. Even worse, it might take six months for a vendor to fix a nasty security hole.

Your system is also vulnerable to so-called 0-day exploits. These are exploits for unpublished vulnerabilities, typically sent between friends with the accompanying message of "Do not distribute." Ironically, they often spread like wildfire. In fact, this has led to a number of security groups having to (somewhat embarrassingly) formally announce security problems they discovered, simply because the information "leaked" from the group. The whole 0-day thing seems to generate a lot of excitement within certain sections of the security community (if script kiddies are actually considered to be part of the security community).

A group that argues against full disclosure and the releasing of 0-day exploits is the AntiSecurity movement. Check out their views at http://anti.security.is/. The discussion board has attracted some hardcore security people.

As someone with responsibility for securing a computer system, it's important for you to realize that thousands of people are trying to "break" (meaning "compromise") systems every day. Whether they are working for a security organization, a government agency, or an operating system vendor, or as a private individual, all around the globe people are in labs attempting to find security holes.

When security flaws are discovered, the finder has a number of options. Some people inform the vendor; some publish their findings to full disclosure mailing lists. Some tell their friends, and some tell nobody. (In fact, they might do some or all of these things at different times.)

Securing Network Services

Are you depressed yet? You might be feeling that all the "evil forces" of the world are against you. Fortunately, there are steps you can take to either eliminate or reduce your system's exposure to many of these network-borne threats.

        Disable network services you don't need.

        Use available security features of the services you do need.

        If an existing network server can't be secured as-is, find a replacement that has a proven track record.

        Assume holes. Log relevant activity, analyze intelligently, and notify vendors and others.

        Keep on top of those patches or develop workarounds.

Disabling Network Services

Do you know what network services are enabled on your systems? Many administrators simply don't know. They've never bothered to question it; they never thought it was a problem. Hopefully by now you realize that not every program your system runs is necessarily healthy for it (or you) from a security point of view.

By turning off the services you don't need, you simply eliminate the risk inherent in running them.

Caution

Turning off the wrong network service might prevent users from doing work that they should legitimately be able to do. On a home system, that might cause you a minor inconvenience. On a production system, this can land you in hot water and, in some cases, cost thousands of dollars. Learn before you burn! Follow sound change-management procedures to establish whether your user community requires a service. Overzealous hardening of systems can backfire in the long run, as managers will be hesitant to support your efforts. This is in nobody's interests.

 

Before turning off unused services, you need to audit what is enabled. Specifically, you need to figure out what services are currently active or will become active if requested by a client.

Network daemons are either standalone or started by a master (or super) daemon when the system enters multiuser mode. By examining each start-up script, you can identify each daemon that is started and the command-line options it is invoked with.

Possibly the most famous master daemon is inetd. Inetd reads a configuration file (often /etc/inetd.conf) to find out which services to listen for. Upon receiving a packet, inetd forks (creates a copy of itself) and executes (exec) the program specified in inetd.conf, handing over the new client connection in the process. Inetd continues listening in the background.
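A typical inetd.conf entry looks like the following (fields: service name, socket type, protocol, wait/nowait, user, server path, arguments). Daemon paths vary by distribution, so treat the paths below as illustrative. Commenting out a line and sending inetd a HUP signal disables that service:

```
# /etc/inetd.conf excerpt -- illustrative; paths vary by system.
# service  type    proto  flags   user  server-path           args
telnet     stream  tcp    nowait  root  /usr/sbin/in.telnetd  in.telnetd
#ftp       stream  tcp    nowait  root  /usr/sbin/in.ftpd     in.ftpd -l
# After editing:  kill -HUP <inetd-pid>
```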

Make yourself familiar with the inetd configuration file. Use the man pages to learn about services you don't recognize.

The start-up (and shutdown) scripts are normally located in the /etc/rc* directories (rc means run command). Each rc directory represents a different system run-level. The start-up scripts are easy to identify: they start with a capital "S" (the shutdown scripts start with a "K" for "kill") and are executed in numerical order (for example, S01, S02, S03, and so on). In fact, they are executed in the order generated by the filename shell wildcard character (just like ls *). The convention to use two-digit numbers avoids S3 executing after S24, for example.
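You can see the wildcard ordering (and the reason for the two-digit convention) for yourself with a throwaway directory; the script names below are invented:

```shell
# The shell expands S* in plain lexicographic order, so a single-digit
# S3 sorts AFTER S24 -- which is why rc scripts use two-digit numbers.
cd "$(mktemp -d)"
touch S01network S24cron S3mydaemon
echo S*    # -> S01network S24cron S3mydaemon
```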

We're interested in run-level 3, multiuser mode.

Read the start-up scripts on your system and make a list of services that are started. If you're not sure which program name represents a network daemon and which doesn't, here are some things to check for.

Check the man pages. If you are looking for a program called "nuked" and typing man nuked doesn't get you anywhere, try searching the man pages using the man -k nuked command. Man pages that describe the program as serving network clients or listening for connections are clearly good indicators of a network server.

Run the ps command (ps aux or ps -ef). If the program is listed, run lsof -i and grep for the program name. If it appears, you can be sure it's a network daemon. The -i switch to lsof says, "list processes using a TCP/IP socket."

Check whether the name of the program (minus the d if there is one) is listed in /etc/services. grep is your friend here.

Last of all, if the program name ends in "d" (daemon), it's probably a daemon. Okay, now we're starting to clutch at straws.

Also check whether the man page for the program talks about RFC compliance. An RFC (Request for Comments) defines how a protocol works and what must be implemented for a program to be called RFC compliant. To gain a deep understanding of TCP/IP and application protocols (for example, FTP or HTTP), you'll find RFCs an invaluable source of information. You can find a hyperlinked archive of RFCs at http://www.landfield.com/rfcs/.

A Word About Privileged Ports

Programs written to listen on a port number lower than 1024 must be executed with root privilege (that is, UID 0). This rule protects sensitive system services because these run on ports lower in number than 1024 (that is, the reserved ports). The UNIX kernel enforces this restriction to prevent non-privileged users from launching fake network server processes on idle ports. Without this rule, a local user (that is, a user with an account on the system) could

        Start a fake Telnet server to capture user ids and passwords of unsuspecting Telnet clients logging in to the system. If implemented properly, the victims would never realize their accounts had been compromised.

        Start a fake domain name server (DNS) and supply false IP addressing information to DNS clients. For example, a client system attempting to visit http://www.pottedmeatfoodproducts.com/ could be redirected to an exact clone of the site created by the attacker. Sensitive information could then fall into the wrong hands.

        Start a malicious FTP server. Every time a user connects to the FTP service, the rogue FTP program spits back specially crafted data that exploits a bug in a client FTP program. By exploiting a security weakness in the client side program, the attacker is now able to run code on the user's workstation with the privileges of the remote user!

        and many, many more malicious acts.

On the other hand, non-privileged processes are allowed to bind and listen on port numbers higher than 1024. Network-aware application programs make use of these non-privileged ports. The advantage of using ports higher than 1024 is that programs do not need to be executed with root privilege just to bind and listen for client requests.

Unfortunately, this doesn't stop impersonation attacks. We noted earlier that, when a program makes a call to bind(), it has the option of specifying a single IP address or a wildcard. The wildcard tells the kernel, "Bind to all available interfaces," or, in other words "Listen on every IP address on the system." You can tell which network daemons do this by using the netstat command. A very useful command to learn, netstat shows networking statistics. On most UNIX systems, netstat -a shows all ports that are active or in the LISTEN state. The entries marked LISTEN either have a wildcard (*) source address or a specific IP address.

If a caller to bind() specifies a wildcard address, a subsequent caller (that is, another program) can still impersonate the server by binding "in front" of the original server. This wouldn't be possible if the original call had been made with a specific IP address. For example, a database listener binds to port 1999 and specifies the wildcard IP address. The kernel services the request. A local attacker notices the weak binding (via the netstat command) and runs a rogue database listener (that is, one she made earlier). This bind()s to the primary IP address of the machine, allowing her to perform Man In the Middle Attacks (MITM) or just to snoop on application usernames and passwords.

Some kernels prevent this kind of attack, but, unfortunately, it is still possible on many popular distributions.

A further point to be aware of is the Strong versus Weak End System model, as defined in RFC 1122, "Requirements for Internet Hosts -- Communication Layers." If your distribution follows the Weak model, remote attackers might be able to communicate with network services in ways you don't expect. Specifically, a multi-homed system can allow packets coming in on one interface to communicate with network services running on another (including a loopback) interface. So, binding network services to specific IP addresses might not gain you anything at all. See this Bugtraq thread for full details: http://archives.neohapsis.com/archives/bugtraq/2001-03/0009.html

Protecting Against Service Hijacking Attacks

Either fix the kernel to prevent the bind()-related problems, or have network applications bind to specific IP addresses. If you have source code for the kernel, you could, of course, modify the bind() call to check a list of unauthorized ports before binding. But there is a wider question: Should end-users have the capability to start up network services at all? Consider the other possible risks and the likelihood of these things happening in your scenario:

        A user runs a program such as netcat (renamed, of course) to listen on a high-numbered port and execute a shell when a client connects. The next time the user wants to log in to the system, he just telnets to the port running netcat, and, voilà, he has a shell waiting: no authentication, no logging, no security! It won't take long for a curious person with a port scanner to find the port. (It takes about 10 minutes on a typical LAN to port scan all 65,535 TCP ports.) Hiding network services on unusual ports does not buy you real security.

        A programmer writes a network application program but fails to write it securely. A malicious client probes the service and attempts to blackbox her way to root.

Additionally, your usage policy should state that end-users should not run unauthorized programs that listen for incoming network connections.

Detecting Fake Servers

As with rootkit attacks, if you can't prevent attacks through fake servers, then try to detect them. Fortunately, in this case, detection is trivial. The key is noticing when an unauthorized program listens on a given port. You could write a custom program to do this or use the following countermeasures:

        Run lsof on a regular basis and compare the results to an authorized baseline. lsof fills the gap that netstat leaves: netstat won't show you which process is listening on which port, but lsof will. Create a list of authorized program names and their mappings to ports/IP addresses. Write a script to filter the output of lsof and compare the results to your baseline. Run it via cron and have differences reported to the administrator for investigation.

        Run a port scanner and compare the results to an authorized baseline. This will tell you when an additional service is running (that is, a port is listening), but not what it is. This approach can be performed remotely without shell access to the system. The only requirement is that you be authorized by the system owner to perform port scanning and that you have network visibility of the system (that is, can send TCP and UDP packets to the server and receive replies).

        If you have kernel source code but are wary about modifying the bind() call to limit listening services, try logging instead. Implement a simple logging routine that runs each time bind() is successfully called. Check the results against a baseline via a userland program, and report differences to the administrator for investigation.
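The lsof-versus-baseline comparison from the first countermeasure might be scripted along these lines. Everything here is an assumption to adapt: the authorized list, the program names, and the column layout, which mimics the output of lsof -nP -iTCP -sTCP:LISTEN:

```python
# Hypothetical baseline check: compare the (program, ip, port) triples
# reported by lsof against an authorized list of listeners.
AUTHORIZED = {
    ("sshd",  "*",        22),
    ("named", "10.0.0.5", 53),
}

def parse_lsof(lines):
    """Extract (program, ip, port) from lsof -nP -iTCP -sTCP:LISTEN output."""
    found = set()
    for line in lines:
        parts = line.split()
        if len(parts) < 9 or parts[-1] != "(LISTEN)":
            continue                           # skip header and noise
        prog, name = parts[0], parts[-2]       # e.g. "sshd", "*:22"
        ip, _, port = name.rpartition(":")
        found.add((prog, ip, int(port)))
    return found

def unauthorized(lines):
    """Return listeners that are not in the authorized baseline."""
    return parse_lsof(lines) - AUTHORIZED

sample = [
    "COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME",
    "sshd    211 root  3u IPv4 0x1    0t0     TCP  *:22 (LISTEN)",
    "nc      999 joe   3u IPv4 0x2    0t0     TCP  *:31337 (LISTEN)",
]
print(unauthorized(sample))   # the rogue netcat listener stands out
```

In a real deployment, the sample lines would come from running lsof via cron, and the differences would be mailed to the administrator.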

At this point it's time to look at specific UNIX services. What follows is not a comprehensive list; rather, these are the mainstays of UNIX network services.


Section: Chapter 20.  UNIX

Telnet

Note

As with many network services, the name Telnet may refer to the client program, to the protocol, and, if you add the word daemon, to the server-side program. In our discussion of Telnet, we will be referring to the server program. To limit confusion, references to the TELNET protocol will use uppercase letters.

 

The Telnet server provides a network virtual terminal emulation service to clients. The server portion normally listens on TCP port 23. The TELNET protocol is defined in RFC 854; check it out for the technical specification.

Essentially this means users can log in (authenticate) to the machine, perform some work on a text-based virtual terminal, and then disconnect. The Telnet daemon runs with root privileges.

TELNET is slowly being phased out. I say "slowly" because, although secure alternatives have been available for some time, a number of major vendors still insist on shipping distributions with TELNET enabled and no secure alternative installed.

TELNET Protocol Risks

The major security weakness of TELNET is that all communications between the Telnet client and server are passed in plaintext (that is, unencrypted) across the network. That means usernames, passwords, sensitive system data, and other possibly confidential information are visible to anyone running a network sniffer located between the client and server. Worst of all, because of the way a routed IP network functions, machines on other parts of the network might also gain visibility of the data.

This makes TELNET unsuitable for use in environments where the security of the underlying network or every host en route cannot be completely trusted. To put it another way, as a UNIX administrator, you probably have no control over security outside of your own system; your options are confined to host-based network controls only.

Other attacks include insertion or replay attacks, in which a man-in-the-middle (MITM) changes the data on-the-fly or plays back an earlier data capture. Imagine finding yourself adding a user to the system with no hands!

Vendors sometimes ship systems with default user IDs and passwords. Unless you change default passwords or lock the accounts, a remote attacker can gain access by using Telnet.

Information Leakage

By default, vendors tend to ship the Telnet daemon with a default login prompt that greets users with the name of the operating system, the version, and sometimes the system architecture. This kind of information helps attackers. With it, they can whip out their nastiest exploits geared specifically to your platform. Why give this information away so easily to an attacker? The reality is that removing this information won't stop your operating system from being identified remotely. (To find out why, read on.) However, I would argue that announcing your system details to anyone who makes a connection to your machine is making things a little too easy! Remove product/version info from your login banner. Some sites replace the vendor greeting with a legal "No unauthorized access" message. Check with your legal department for specific wording.

The TELNET daemon leaks information about your operating system in a less obvious way, too. The TELNET protocol defines a number of Telnet options. When a Telnet client connects to a Telnet server, either end can transmit Telnet options. These enable one side to express its capabilities and requested functionality to the other; for example, its terminal type. A remote attacker, able to connect to the TELNET daemon, can use this to her advantage.

There is no standard TELNET daemon implementation. Different vendors implemented different Telnet options. By examining the Telnet options and the sequence in which they are received, an attacker can fingerprint your operating system.
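The negotiation an attacker examines is just a stream of three-byte IAC sequences. Here is a hedged sketch of decoding them; the command and option numbers come from RFC 854/855, while the sample greeting is invented for illustration:

```python
# Telnet negotiation constants from RFC 854/855.
IAC = 255                                        # "Interpret As Command"
CMDS = {251: "WILL", 252: "WONT", 253: "DO", 254: "DONT"}
OPTS = {1: "ECHO", 3: "SGA", 24: "TTYPE", 31: "NAWS"}

def decode_options(data):
    """Turn raw IAC negotiation bytes into a readable command sequence."""
    seq = []
    i = 0
    while i <= len(data) - 3:
        if data[i] == IAC and data[i + 1] in CMDS:
            cmd = CMDS[data[i + 1]]
            opt = OPTS.get(data[i + 2], str(data[i + 2]))
            seq.append(f"{cmd} {opt}")
            i += 3
        else:
            i += 1
    return seq

# A made-up server greeting: IAC DO TTYPE, IAC WILL ECHO, IAC WILL SGA.
greeting = bytes([255, 253, 24, 255, 251, 1, 255, 251, 3])
print(decode_options(greeting))   # ['DO TTYPE', 'WILL ECHO', 'WILL SGA']
```

A fingerprinting tool simply records which options arrive, and in what order, and matches that sequence against a table of known daemons.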

The TESO group has developed a tool that can identify a wide range of UNIX flavors by using Telnet options. You can download it here: http://teso.scene.at/releases/Telnetfp_0.1.2.tar.gz

I Spy with My Little Eye

Before launching an attack on a site, an attacker will perform remote reconnaissance. She will want to find out the type and version of the operating system and the services you are running. Network daemons commonly announce their software version upon a client connection. This can help with remote troubleshooting, because network administrators can easily identify software version incompatibilities. In the same way, though, it assists the attacker. Armed with this information, she can search vulnerability databases for known weaknesses or, in preparation for an attack, re-create an identical system in her lab for penetration testing.

A common reaction to this problem is to remove product/version information from system banners. This might mean

        Recompiling open source network daemons with this information stripped

        Overwriting the banner strings in closed source binaries

        Modifying configuration files (for example, /etc/issue on Solaris)

This will thwart banner grabbing.

However, even if you did this to all your network daemons, the version of your operating system and all its network software can still be identified remotely through the process of behavioral analysis.

Remote Determination of Network Service Versions

Software versions change because of bug fixes, additional software features, performance hacks, and so on. The attacker can probe for feature or bug differences between versions, thereby determining the specific version in use. This is not the long and complicated task it might sound. The attacker can make the reasonable assumption that a site is running a relatively recent version and work back. In fact, this kind of functionality is built in to some commercial vulnerability scanners.

Remote Operating System Identification

Vendors' TCP/IP stacks respond differently to a given set of packets. By remotely fingerprinting the TCP/IP stack, it is often possible to identify the operating system in use and its version. You've already learned a bit about this in Chapter 5, "Hackers and Crackers." The attacker sends a sequence of packets with specific attributes. The response packets sent by the victim server contain unique elements that, when considered together, uniquely identify a vendor's TCP/IP implementation. The queso tool originally used this approach. This strategy was then adopted and expanded by Fyodor in his nmap tool available from http://www.insecure.org/nmap .

Vendors' TCP/IP stacks also exhibit distinguishing timing characteristics in their handling of packets. When a system receives a packet, the network interface hardware generates an interrupt. The kernel processes the packet based on information contained in the packet header. The time taken for a given platform to process the packet will vary depending on the code path taken (that is, if it's x, then do y; if it's x and z, do j). By sending multiple packets of varying complexity, it is theoretically possible to measure response times and compare them to known baselines to identify systems. This has been discussed in public forums, although no tool has been published as yet.

Modifying network kernel parameters can defeat TCP/IP stack fingerprinting. Such changes alter the way the TCP/IP stack behaves and will thwart known fingerprinting techniques.

This might leave you wondering whether it is worth removing system details from banners at all. There's certainly room for debate, but my personal view is that, for Internet-exposed hosts, it is worth the effort, as long as you understand it doesn't buy you any real security. What you're actually getting is security by obscurity. But, it might just be a reprieve from the less advanced attacker who relies on banner-grabbing-style scanning to identify potential victims. When the next remotely exploitable vulnerability gets announced, your banner-less system is unlikely to appear on the script kiddies' radar. Sure, you'll need to apply patches, but at least you might avoid the embarrassment of being nailed by an amateur!

Securing Telnet

One option is to use router- or VPN-based encryption. This is a partial solution; it does not result in end-to-end encryption. This can still leave the TELNET data stream open to MITM attacks near either end of the connection.

The superior solution and stock replacement for TELNET is Secure Shell (SSH). SSH is deployed at thousands of sites worldwide and has become the standard way of remotely accessing a UNIX server across potentially hostile networks.

SSH is a TCP-based service that, by default, listens on port 22.



An Essential Tool: Secure Shell

According to the "What is Secure Shell" FAQ at http://www.employees.org/~satch/ssh/faq/:

Secure Shell (SSH) is a program to log into another computer over a network, to execute commands in a remote machine, and to move files from one machine to another. It provides strong authentication and secure communications over insecure channels. It is intended as a replacement for Telnet, rlogin, rsh, and rcp. For SSH2, there is a replacement for FTP: sftp.

Additionally, SSH provides secure X connections and secure forwarding of arbitrary TCP connections. You can also use SSH as a tool for things like [http://rsync.samba.org] rsync and secure network backups.

The traditional BSD 'r' commands (rsh, rlogin, rcp) are vulnerable to different kinds of attacks. Somebody, who has root access to machines on the network, or physical access to the wire, can gain unauthorized access to systems in a variety of ways. It is also possible for such a person to log all the traffic to and from your system, including passwords (which ssh never sends in the clear).

The X Window System also has a number of severe vulnerabilities. With ssh, you can create secure remote X sessions which are transparent to the user. As a side effect, using remote X clients with ssh is more convenient for users.

Note

Historically, U.S. vendors have not shipped Secure Shell (SSH) in their UNIX distributions. However, with the expiration of the RSA patent, recent (long overdue) changes in U.S. export legislation, and the release of a public domain SSH implementation, this situation could be set to change.

 

As with many things in the UNIX world, SSH is the name of the protocol and an implementation.

The SSH Protocols

Two major versions of the SSH protocol exist. They are quite different and as a result incompatible.

Version 1 is defined in an IETF (Internet Engineering Task Force) draft at http://www.tigerlair.com/ssh/faq/ssh1-draft.txt. SSH1 is being phased out in favor of SSH2.

Copies of current Internet drafts for SSH version 2 can be found on the IETF Web site http://search.ietf.org/ids.by.wg/secsh.html. The specification is broken into four parts: architecture, connection protocol, authentication protocol, and transport layer protocol. If you want to know the SSH specification inside out, these are the documents to read.

SSH1 proved very popular; according to the SSH Scanner project (http://ssh-research-scanner.ucs.ualberta.ca/ssh-stats.html), SSH V1.5 (the last version) is still the most widely deployed. This is due, at least in part, to the lack of a free version 2 implementation; that is, until recently.

SSH Servers

Commercial SSH server software is available from SSH Communications and Data Fellows. Noncommercial SSH1 versions can be downloaded from http://www.ssh.org. The last free version is 1.2.27.

Frustrated at the lack of a truly free, up-to-date SSH, the OpenBSD team started the OpenSSH project. The stated goal of the project is to have Secure Shell technology shipped with every operating system. To avoid restrictive licensing, the team went back and reused the version 1.2.2 code, written by Tatu Ylönen. The code was developed outside of the United States to avoid restrictions on the exportation of cryptography and is available under a BSD license. This means anyone can use it, for any purpose.

The first major release, supporting SSH protocol version 1, was shipped in December 1999 with OpenBSD 2.6. As of this writing, OpenSSH 2.5.1 has been released. This is a major milestone because it provides support for the SSH2 protocol in addition to 1.3 and 1.5.

The OpenSSH Project is great news for all UNIX users, especially when you consider that it has been written and reviewed according to the same security principles as the OpenBSD project itself. The source code has been significantly simplified to ease code review, and all code has been subjected to an extensive security review. The same statements cannot necessarily be made about the commercial alternatives.

Unsurprisingly, a number of groups have integrated OpenSSH into their base operating systems. These include Debian Linux, FreeBSD, SuSE Linux, Red Hat Linux, Mandrake Linux, BSDi BSD/OS, NetBSD, Apple Mac OS-X/Darwin, Computone, Conectiva Linux, and Slackware Linux.

Don't worry if your favorite UNIX isn't on that list. OpenSSH can be downloaded from http://www.openssh.org/ and installed as a separate utility. Official ports exist for SUN Solaris, IBM AIX, Hewlett-Packard HP-UX, Digital UNIX/Tru64/OSF, Irix, NeXT, SCO, and SNI/Reliant UNIX. See http://www.openssh.com/portable.html for an up-to-date list.

Compiler-shy Solaris users can grab a compiled version in Solaris package format from http://www.sunfreeware.com. For Solaris users who want to install from scratch, check out this useful installation guide written by CERT: http://www.cert.org/security-improvement/implementations/i062_01.html. (Although the guide is described as Solaris specific, it does include useful generic information.)

OpenSSH relies on two underlying packages:

        zlib available from ftp://ftp.freesoftware.com/pub/infozip/zlib/

        OpenSSL available from http://www.openssl.org/

In my opinion, OpenSSH is the way forward.

SSH Clients

Just like TELNET, SSH is a client/server protocol. To access an SSH server, the user must run an SSH client.

UNIX users can use the OpenSSH client that ships with the OpenSSH server.

My favorite Win32 client is called Putty. It's written by Simon Tatham, is free for both commercial and noncommercial use, and is open source.

Putty supports both SSH protocols and runs on Windows 95, 98, Me, NT, and 2000. It's a slick piece of code, weighing in at less than 300Kb, so you can take it on a floppy to your favorite Internet café. (Check out the Putty FAQ at http://www.chiark.greenend.org.uk/~sgtatham/putty/faq.html to understand the risks involved should you actually want to do this.)

Putty can be downloaded from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html.

Simon has also created pscp, a Win32 port of the Secure Copy program (scp on UNIX systems). Unsurprisingly, this can be used to transfer files across an SSH connection. Use this instead of FTP. A nifty GUI front-end for scp, written by Lars Gunnarsson, is available from http://www.i-tree.org/.

Commercial clients supporting features over and above Putty are available from Data Fellows, SSH Communications, and Van Dyke. A full list of SSH clients can be found at http://www.ece.nwu.edu/~mack23/ssh-clients.html and http://www.freessh.org/.

SSH Resources

An excellent resource to learn more about SSH, written by Seán Boran, can be found at http://www.boran.com/security/sp/ssh-part1.html. I consider this a must-read for anyone looking at using SSH in a serious way. Seán has integrated a substantial amount of previously fragmented information with his own experiences (4-plus years) in implementing SSH. He covers the configuration options and describes some of the more advanced uses of SSH, such as

        SSH VPNs.

        Rdist over SSH for secure remote filesystem synchronization.

        Using SecurID with SSH.

        Tunneling VNC (Virtual Network Computing), a remote control program, over SSH (for NT administrators, he covers PC Anywhere as well). See http://www.uk.research.att.com/vnc/sshvnc.html for more details.

As with any complex piece of software, SSH has had security problems. For a comprehensive rundown of known security holes, check out the OpenSSH security page at http://www.openssh.com/security.html. If you are using a commercial version, I recommend that you check with your vendor to ensure your systems are not vulnerable.



FTP

FTP is the File Transfer Protocol. An FTP client makes a TCP/IP connection to an FTP server (TCP port 21), and authenticates (or in the case of an anonymous server, supplies an e-mail address). The client can list, put, or retrieve files.

Most client/server protocols use just one server port, whereas FTP uses two: a control connection (port 21) for handling commands, and a data connection (port 20) for transferring data. The data connection can be active or passive. The server initiates an active connection based on a port number specified by the client, whereas the client initiates a passive connection based on a port number specified by the FTP server. This has implications for your ability to successfully firewall FTP, as the firewall must dynamically allow data connections based on information transferred via the control connection. This requires the firewall to be able to decode FTP control connections properly and track FTP protocol state transitions. The data connection takes place over an ephemeral port (that is, a port greater than 1024). Standard routers don't normally offer this level of sophistication, which means that network administrators have to implement very relaxed ACLs.
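The addressing information passed down the control connection is the PORT command, which a firewall must parse to open the right hole, and which a bounce attacker fills with a third party's address. A sketch of how a client encodes it, per RFC 959 (the address and port are illustrative):

```python
def port_command(ip, port):
    """Build the FTP PORT argument: four address octets plus the port
    split into high and low bytes (RFC 959, section 4.1.2)."""
    h1, h2, h3, h4 = ip.split(".")
    p1, p2 = port // 256, port % 256     # 16-bit port as two bytes
    return f"PORT {h1},{h2},{h3},{h4},{p1},{p2}"

# A client asking the server to open a data connection back to
# 192.168.1.10 on port 1234 (1234 = 4*256 + 210):
print(port_command("192.168.1.10", 1234))
# PORT 192,168,1,10,4,210
```

Nothing in the protocol requires the address to belong to the client; substituting a victim's address is exactly the bounce attack described below.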

FTP Risks

The FTP protocol design is tricky to secure. The base FTP specification can be found in RFC 959; however, there are many extensions (see http://war.jgaa.com/ftp/?cmd=rfc for a comprehensive list). Firewall vendors hate it, warez peddlers love it, and security practitioners try to avoid it.

Some of the risks of running FTP are

        As with TELNET, there is no encryption, so the MITM attacks apply.

        As with Telnet, many FTP daemons announce system/daemon version information when a client connects. This is an information leak.

        RFC-compliant FTP daemons permit port-bouncing attacks. Port bouncing is a technique whereby an attacker instructs a remote FTP server to port scan an unrelated system through judicious use of the FTP PORT command. The victim machine sees connections from the FTP server, hence the attacker doesn't get the blame. Bounce attacks can also be used to bypass basic packet-filtering devices and export restrictions on software downloads.

This is an old problem, and some vendors eventually broke with RFC compliance to prevent this. Other vendors still ship FTP daemons susceptible to this "feature." Hobbit is credited with discovering this weakness. His paper can be viewed online here: http://www.insecure.org/nmap/hobbit.ftpbounce.txt.

        The FTP protocol is hard to firewall properly. To set up the data connection, TCP/IP addressing information is passed down the control connection. To figure out the correct address/port pair to allow through, the firewall must carefully monitor the control connection. Basic packet-filtering devices don't have the application layer protocol intelligence to do this, and, therefore, large holes have to be punched in network ACLs just to support FTP. A second, more serious problem is the difficulty firewalls seem to have correctly understanding the application dialogue between client and server. This can be exploited to attack other network services running on the same system as the FTP server.

Note

Originally reported by Mikael Olsson, the same problem was independently discovered by John McDonald (also known as Horizon) and Thomas Lopatic. This was against Checkpoint FW1 and a Solaris FTP server, but the principle applies to any other combination where the firewall incorrectly parses the FTP control stream. A technical paper demonstrating the successful exploitation of a buggy Solaris ToolTalk daemon via an FTP data channel can be found here: http://packetstorm.securify.com/0002-exploits/fw1-ftp.txt.

 

        FTP servers that ship with proprietary distributions typically provide very little access control capability.

        Access control can be confusing and prone to procedural failure. Some FTP daemons consult a file called ftpusers, which contains a list of users who may not use FTP. This tends to confuse novice administrators and, even in experienced hands, leads to new users not being included; that is, when a new user is added to the system, he gets FTP access by default. If your site policy is based on the least-privilege principle, this is not helpful.

        Writable, anonymous FTP servers are incredibly hard to secure. Allowing anyone to write to your filesystem is hard to do securely.

        The default umask setting on many FTP daemons results in newly created files being accessible to everyone. This is commonly a result of inheriting a weak umask from inetd (the daemon that spawns the FTP daemon).

        By default, many proprietary FTP daemons do not log client connections.
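The umask problem above is easy to demonstrate: with a permissive mask inherited from a parent process, files requested with mode 0666 really do come out world-writable. A small sketch for POSIX systems (the temporary directory and file names are arbitrary):

```python
import os
import stat
import tempfile

def create_with_umask(path, mask):
    """Create a file with requested mode 0666 under the given umask,
    and return the permission bits it actually ends up with."""
    old = os.umask(mask)
    try:
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
        os.close(fd)
    finally:
        os.umask(old)               # restore the caller's umask
    return stat.S_IMODE(os.stat(path).st_mode)

d = tempfile.mkdtemp()
# umask 000: the file is world-writable (0666).
print(oct(create_with_umask(os.path.join(d, "open"), 0o000)))
# umask 022: group/other write bits are stripped (0644).
print(oct(create_with_umask(os.path.join(d, "safe"), 0o022)))
```

This is why checking (and, if necessary, setting) the umask in the FTP daemon's startup environment matters.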

Securing FTP

If you want to offer anonymous FTP services to untrusted clients, give serious consideration to using a dedicated, standalone system (on its own DMZ off the firewall). This isolates any break-in to the FTP service only. And, as the old saying goes, "A chain is only as strong as its weakest link."

For an illuminating read on the pitfalls of running a misconfigured FTP and Web server on the same system, check out the following article: http://www.dataloss.net/papers/how.defaced.apache.org.txt

The server should be stripped down. Only the FTP service should be accessible to the untrusted clients (for example, the Internet).

A popular alternative to running a proprietary FTP daemon is the Washington University FTP daemon, available from http://www.wu-ftpd.org/. This enhanced FTP server provides additional functionality useful for minimizing abuse. However, it has a spotted history security-wise. If you choose to run it, be prepared to upgrade in a hurry when the next major hole is discovered, and every script kiddy is trawling the Internet looking for vulnerable wu-ftpd installations.

Alternatively, evaluate the benefits of running a cut-down FTP daemon. Dan Bernstein has written a drop-in FTP replacement called Publicfile, designed and written with security as its primary goal. It is ideal for anonymous FTP. The download page is http://cr.yp.to/publicfile.html. Publicfile can also serve as a very basic Web server serving static content only.

If at all possible, avoid running an anonymous writable FTP server altogether. You're likely to end up acting as a mirror for pirated software hidden in surreptitiously named directories. Not only that but write access to the filesystem aids an attacker who can leverage even a minor misconfiguration on your part. For a pretty comprehensive list of anonymous FTP abuses, check out http://www.uga.edu/ucns/wsg/security/FTP/anonymous_ftp_abuses.html.

Various schemes have been proposed to date, but none seem to be bulletproof. That said, you could do worse than to follow CERT's advice given here: http://www.bris.ac.uk/is/services/networks/anonftp/anonftp.html

Finally, activate FTP daemon logging and make sure that syslog is configured to log LOG_DAEMON messages. (Check your ftpd man page for the specific logging facility used.)



The r Services

rlogind and rshd are the remote login and remote shell daemons. These so-called r services use TCP ports 513 and 514, respectively. The RLOGIN protocol is described in RFC 1282; the RSH protocol was never formally specified in an RFC.

The r services were developed at Berkeley to provide seamless ("Look, Ma no password") authentication between trusted hosts and/or users.

Authentication between client and server is based on the client IP address, TCP port, and client username. The client IP address and username must match an entry in either the system-wide trusted hosts file (/etc/hosts.equiv) or a per-user trust file (~/.rhosts). An additional so-called safeguard is that the client connection must originate from a reserved TCP port (below 1024), something only programs running with root privilege can do.
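The trust decision can be caricatured in a few lines. This is a hypothetical model, not actual daemon code, and the host/user entries are invented; note that every input to it, the source address and the source port, is under a determined network attacker's control:

```python
# Simplified model of the rlogin/rsh trust check: the client must
# appear in hosts.equiv (or ~/.rhosts), and must connect from a
# reserved port, which supposedly proves the client ran as root.
HOSTS_EQUIV = {("10.0.0.7", "alice")}   # (client IP, client username)

def trusted(client_ip, client_port, client_user):
    """Return True if the connection gets in without a password."""
    if client_port >= 1024:         # not a reserved (root-only) port
        return False
    return (client_ip, client_user) in HOSTS_EQUIV

print(trusted("10.0.0.7", 1022, "alice"))    # True: no password asked
print(trusted("10.0.0.7", 40000, "alice"))   # False: unprivileged port
print(trusted("10.0.0.9", 1022, "alice"))    # False: untrusted host
```

Since both the source IP address and the source port can be spoofed, the whole check rests on credentials the server cannot verify, which is the weakness the next section describes.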

The r services are very popular with end-users and administrators, as manual entry of the password is not required (unlike with TELNET). Unfortunately, they are terminally insecure.

r Services Risks

Security of the r services is based on an extremely weak authentication model.

Authentication is based on weak credentials: the source IP address and TCP port, both of which can be forged. The original designers assumed a trusted network. Even the UNIX man page for these commands recognizes this fact.

Combined with predictable sequence numbers, crackers had a field day with these services. Steve Bellovin describes address-based authentication as "a disaster waiting to happen." Check out his brief at ftp://ftp.research.att.com/dist/internet_security/seqattack.txt.

The following post gives a line-by-line account of a real hack where the weakness of rsh was exploited: http://www.cs.berkeley.edu/~daw/security/shimo-post.txt

Countermeasures

Avoid the r services totally; switch to SSH. This protocol is just plain broken from a security perspective. Expend your security efforts on bigger rocks (for example, host hardening and security patching).



REXEC

REXEC is often confused with the other r services. However, it bears no relationship to them. REXEC runs on TCP port 512.

UNIX distributions often ship without an REXEC client program; for some, this makes the service all the more mysterious.

The REXEC protocol is predominately used by application programmers to remotely connect to a UNIX system, run a command, and exit. They do this via the rexec() library call. REXEC uses standard username and password authentication. All communications are sent in cleartext between client and server.

REXEC Risks

        Brute-force login attempts might go unnoticed as the REXEC daemon performs pitiful logging.

        Communications are unencrypted, so all the MITM attacks, both active and passive, apply.

        There is no access control built into REXEC. Beyond disabling the service or using third-party software, you cannot define which users can use the service. Therefore, a user who normally logs in via a secure protocol could end up inadvertently sending his password (and more) across the network in plaintext, simply by using a client application that relies on REXEC.

        Some REXEC daemons produce a different error message to a client, depending on whether the username or password was incorrect. This behavioral difference permits attackers to ascertain valid usernames. Again, your system is disclosing information.

Securing REXEC

        Disable REXEC. If client applications rely upon it, figure out a migration path away and then disable it.

        If disabling is not an option, consider using SSH to tunnel the protocol over an encrypted connection.



SMTP

SMTP is the Simple Mail Transfer Protocol (defined in RFC 821). Among other tasks, its job is to receive mail by accepting connections on TCP port 25 from remote mail servers. By default, UNIX comes with the sendmail program, an age-old program that implements the SMTP protocol (and more).

SMTP Risks

Sendmail is one of those programs every administrator seems to have heard of. Its history of security problems is well known. It could be the most maligned UNIX software ever written. With that reputation, it should be clear that something is fundamentally wrong with sendmail and that something is its monolithic design.

However, the security of sendmail has improved significantly in recent years because of the efforts of its author, Eric Allman, in response to the many security problems it suffered. It's debatable though whether sendmail is totally "out of the woods," or ever will be, because of its design.

Rather than repeat a history of security flaws here (I don't think there's space), these are some generic problems that a default installation of sendmail presents:

        Sendmail is "Yet Another Daemon" that runs as root. Therefore, an exploitable vulnerability in sendmail can mean giving away root to an attacker. Even though a root-run program might temporarily drop privileges, an attacker who is able to run shellcode (through a buffer overflow or format string exploit) can simply make a call to seteuid() to re-establish those privileges and have her shellcode running as root.

        Sendmail is incredibly complex; its configuration file uses m4, the GNU implementation of the UNIX macro processor. Few people truly understand m4, and fewer still understand sendmail configuration. As a result, it's easy to make blunders and hard to lock sendmail down without outside help.

        Sendmail can be used to elicit usernames. By connecting to port 25 and issuing VRFY and EXPN commands, an attacker can have sendmail confirm valid usernames. This is the first step in taking over an account: attackers can then use remote login services and attempt to guess passwords. The guessing attack can be automated with a large dictionary of common usernames to increase the chances of finding a valid one.

        Older versions of sendmail allow spammers to relay mail through your system. Apart from using your resources, this can make you very unpopular and result in your site being listed on the RBL (Realtime Blackhole List, at http://mail-abuse.org/rbl/). This is bad news for you, as any mail server that follows the RBL will drop connections from your site.

        If incorrectly configured, sendmail leaks internal address information to the outside. Attackers can send probe e-mails to a company mail server; a malformed message can elicit a bounce message, possibly including internal IP addresses. This assists an attacker in mapping the internal network.

        The sendmail daemon outputs its version number upon client connection. This information helps the attacker select a relevant exploit.
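
The VRFY probe is trivial to script. Here is a hedged sketch in Python using the standard library's smtplib; the reply-code interpretation follows RFC 821's conventions, and real-world targets may refuse VRFY entirely:

```python
# Sketch: username enumeration via SMTP VRFY (illustrative only; assumes
# the target server still honors VRFY, which hardened MTAs disable).
import smtplib

def interpret_vrfy(code):
    """A 2xx reply confirms (or at least refuses to deny) the address;
    550 means the server does not know the user."""
    return 200 <= code < 300

def vrfy_user(host, username):
    """Return True if the server appears to confirm the username."""
    server = smtplib.SMTP(host, 25, timeout=10)
    try:
        code, _ = server.verify(username)
    finally:
        server.quit()
    return interpret_vrfy(code)
```

Looping vrfy_user() over a dictionary of common usernames is exactly the guessing attack described above, which is why disabling VRFY and EXPN on your mail servers is standard practice.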

Securing SMTP

In my experience, few machines on an organization's network actually need to be listening for mail; they just happen to be because sendmail is active by default. To put it simply, don't run a mail transfer agent unless you need it. Turning off your mail transfer agent does not affect your system's ability to send mail (such as the output of cron jobs).

        Consider using Qmail instead of sendmail. It has been designed and coded following sound security principles and has an impressive security track record: zero security holes. Visit http://www.qmail.org/ for more details. Recent versions of Qmail go further in easing the migration from sendmail, and Qmail is available on a wide range of platforms.

        Postfix (formerly Vmailer), written by Wietse Venema, is a popular sendmail-compatible alternative written to be fast, easy to install, and secure. Full details are at http://www.postfix.org/. If you can't face Qmail, check out Postfix.

        If you must run sendmail, don't run it as root; build a chroot environment and run it as a nonprivileged user. Russell Coker has detailed how he does this at http://www.coker.com.au/~russell/sendmail.html.

        A common misconception among administrators is that sendmail needs to be listening to the network in order to send mail from the local machine. Although this is the default on many systems, it's not required. Sendmail can be invoked via cron with the -q flag to service the queue of outgoing messages on a regular basis. If all you want is the ability to send mail, disable the sendmail startup script; you don't need sendmail listening on port 25.
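
Invoking sendmail from cron might look like this; the queue is flushed every 15 minutes with nothing listening on port 25 (the sendmail path is an assumption and varies between systems):

```
# Hypothetical root crontab entry: service the outgoing mail queue
# every 15 minutes; no daemon listens on port 25.
0,15,30,45 * * * * /usr/lib/sendmail -q
```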

Carole Fennelly has written a series of useful articles about installing, configuring, and running sendmail on a firewall; look for them at http://www.unixinsider.com/swol-04-1999/swol-04-security.html. Carole is a regular writer of UNIX security-related articles; find out more at http://www.wkeys.com/media/CF/writing.html.

Note

The authors of Qmail and Postfix have publicly locked horns a number of times on security-related mailing lists. There is clearly no love lost between them as they try to find security bugs in each other's software. Although this might not be a pleasant sight to the uninitiated, it does give valuable insight into the security issues facing designers of Mail Transfer Agents (MTAs), such as where the weaknesses are and how to avoid them. The bottom line is, if you want secure mail servers, use dedicated, hardened systems with shell access given to trusted users only.



DNS

DNS is the Domain Name System. It's a UDP- and TCP-based protocol that listens on port 53. TCP connections are commonly used for zone transfers.

The DNS matches IP addresses to hostnames (and hostnames to IP addresses). A DNS server is responsible, or authoritative, for a given part of the domain name system (for example, mybitofthenet.com).

Clients make requests of the DNS servers when they want to communicate with systems for which they have only the fully qualified hostname (for example, myserver.mybitofthenet.com).

The DNS is a critical part of the network infrastructure. Its failure, whether through administrative incompetence or denial of service, can have major consequences.

DNS Risks

The DNS protocol has security problems. A detailed description of DNS and its protocol weaknesses can be found at http://www.geocities.com/compsec101/papers/dnssec/dnssec.html. DNS is defined in a number of RFCs; see http://www.dns.net/dnsrd/rfc/ for full details.

As far as UNIX host security goes, the most widely used DNS server software is BIND (Berkeley Internet Name Daemon), developed by the Internet Software Consortium (ISC). By default, the BIND daemon, named, runs with root privileges and (this is important) it doesn't permanently drop them after binding to port 53. Therefore, vulnerabilities in BIND can result in a complete system compromise.

Let's look at the track record of BIND.

Version 8.2.3, released in January 2001, fixed four security vulnerabilities discovered by COVERT Labs (PGP Security) and Claudio Musmarra. Two buffer overflows allow remote attackers to run any code of their choosing. An input validation error enables attackers to run any code, and an information leak allows data in the stack to be viewed by an attacker. This last one might not seem so bad. After all, it's not a direct root compromise in waiting, but it actually assists the development of exploits against a particular site. The stack holds local variables, environment variables, and important process addresses. By gaining access to these, a buffer overflow can be written without trial and error. This is significant because buffer overflows are often a one-shot attack: if you get it even slightly wrong, you are likely to crash the service by scribbling all over stack data subsequently relied upon by the program.

BIND has had 12 security advisories from CERT in just four years; hardly an endorsement of BIND security!

A major factor is the complexity of the BIND code. Experienced C programmers comment that the BIND code is incredibly difficult to understand. When accomplished programmers struggle to comprehend what's going on, you can be sure of security problems. Complex code is hard to audit; modular designs work best, in which small, discrete programs (that are easier to audit) perform privileged operations. Of course, the ISC developers didn't set out to write something that even they would find hard to maintain; the difficulty is a result of feature bloat. Perhaps that's one of the reasons ISC recently rewrote major parts of BIND.

In January 2001, ISC announced the release of a major new version of BIND: version 9.1.0. This version includes support for DNSSEC, a protocol extension that overcomes some of the security design weaknesses in the stock DNS protocol (DNSSEC is defined in RFC 2535). Notably, the code was not modularized, so now we have new code that is still hard to audit. Many sites are likely to continue running BIND 8, at least in the short term.

It turns out attackers like breaking into BIND servers, so much so that SANS rates unpatched BIND servers as the number one security problem on the Internet. Exploits for BIND abound. DNS servers are tasty targets because so much of the Internet relies upon them. With control of a BIND server, you can do truly nasty things, for example:

        Deprive a site of traffic by changing the IP/name mappings to a nonexistent address. Worse still, you can redirect traffic to a pornographic or competitor site. Lost revenue and bad press don't help a company's stock price.

        Clone an e-business site, modify the site's DNS records to point to your imposter site, and collect credit card, user account, and password details.

        Exploit trust relationships between systems by mapping an IP from one side of the trust relationship to your machine.

        Compromise one of the root nameservers.

That last one is particularly worrying. The root nameservers are the starting point for addressing on the Internet. There are only 13 root nameservers in total (because of protocol limitations). Take over the root nameservers, and you can have the Internet in your hand. As an aside, the choice of platform and operating systems for these machines follows the principle of security through diversity. To quote from a Y2K statement on ICANN's (The Internet Corporation for Assigned Names and Numbers) site:

The root servers themselves all use some variant of the UNIX operating system, however both the hardware base and the vendors' UNIX variants are relatively diverse: of the 13 root servers, there are 7 different hardware platforms running 8 different operating system versions from 5 different vendors.

This is a sound idea, used by Mother Nature herself. Of course, it breaks down if you don't have sufficiently diverse administrators on hand to securely manage eight different operating systems.

Aside from security flaws, misconfiguration is common. When sizing up your site, attackers will request zone transfers from your DNS server; this is basically a dump of all the information pertaining to a particular DNS zone. It is as good as a network map! You don't want to give this away.

Another configuration issue is version disclosure (again). To discover the version of BIND you are running, a client can simply query your DNS server, and your server will tell them. This isn't so good.

The other major risk facing the DNS protocol and the BIND implementation is denial of service attacks. Numerous DoS vulnerabilities have been found in BIND. Disabling a DNS server prevents DNS queries from being resolved, thereby stopping clients that rely on DNS resolution services (that is, almost everyone) in their tracks.

Securing DNS

The obvious countermeasure is to find an alternative to running BIND. Here your choices are limited; we live in a BIND monoculture. You could switch to Microsoft's DNS implementation (that's not a recommendation), or look for a DNS server where security of implementation was a primary goal.

The only viable alternative I am aware of is the djbdns package by Daniel Bernstein. This has had sufficient production usage to be a serious contender to BIND. It was designed and written with security in mind by a programmer experienced in writing secure code. You can find out more here: http://cr.yp.to/djbdns.html.

If you stick with BIND, there are some things you can do. A useful summary of the issues and countermeasures is available from http://www.acmebw.com/papers/securing.pdf. You should at least give serious thought to the following:

        Don't run BIND as root. Instead, create a new user and group. Specify these as command-line options when you execute BIND.

        Use the chroot command to run BIND so that it has restricted access to the filesystem. The chroot program enables you to specify the directory that a process will treat as its root directory (enforced by the kernel). To do this, though, you need to create a mini duplicate of your operating system, because BIND will no longer be able to see important system libraries and configuration files. Full instructions can be found here: http://www.etherboy.com/dns/chrootdns.html. See the section on chroot later in the chapter for some important caveats.

        The version number is hard-coded in the BIND code, so if you have source code, you can simply remove it or replace it with a fake version or silly message (have fun). Otherwise, you'll have to binary-patch the executable; good luck! Recent versions of BIND can be configured to answer version requests from specific addresses only.

        Configure BIND to disallow zone transfers except to authorized servers (such as DNS slaves).
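
The last two points can be expressed directly in named.conf on recent BIND releases; a sketch, where the version string and the slave's address are placeholders for your own values:

```
// Sketch of /etc/named.conf options (BIND 8.2+/9 syntax; values are examples).
options {
    version "unknown";                // hand out a fake version string
    allow-transfer { 192.168.1.2; };  // zone transfers to the DNS slave only
};
```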


Finger

Finger has been around for years. Its problems have been discussed numerous times and are well documented; even vendors today ship it disabled. You can refer back to Chapter 8, "Hiding One's Identity," for more about finger as well.

Don't enable finger unless you don't mind your systems leaking sensitive system information like usernames, home directories, and login patterns.

Finger can be used as an early warning system that someone is checking out your site. Use TCPWrappers to wrap the finger service and enable logging. Some administrators configure TCPWrappers to return bogus finger information, feeding a less savvy attacker false usernames that they subsequently fail to log in with.
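
A hosts.allow entry along these lines logs the probe and feeds the client a canned reply. This assumes TCPWrappers was built with the extended options language, and the logger path is a guess for your system:

```
# Sketch of /etc/hosts.allow: log finger probes, then return bogus output.
# Requires tcpd's extended option syntax; %h expands to the client host.
in.fingerd : ALL : spawn (/usr/bin/logger -p auth.notice "finger probe from %h") : twist /bin/echo "No one logged on."
```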


SNMP

SNMP is a protocol to support network monitoring and management. Its use is widespread, and most network monitoring products rely upon it. It runs on UDP ports 161 and 162 (the latter for SNMP traps).

For the technical details behind SNMP v1, consult RFC 1157. RFC 1441 introduces the various RFCs that make up SNMP v2.

SNMP Risks

An SNMP client authenticates to an SNMP agent via a string known as a community name. This community name works very much like a password. UNIX hosts often ship with an SNMP agent enabled by default, so your system could already be exposed to SNMP flaws. Problems with default SNMP installations include the following:

        The default read-only community name is "public", and the default read/write community name is often "private". Hard-coded "passwords" like these have blighted IT security for as long as I can remember. A full list of common default passwords, including SNMP community names, is at http://www.securityparadigm.com/defaultpw.htm.

        If the read-only community can be guessed, serious information disclosure issues can crop up. The extent of the data disclosure is dependent on the MIB (Management Information Base). MIBs vary between vendors, but they usually contain the following types of information: network interface settings, network services, current network connections, administrative contacts, and server location. This assists attackers in mapping your network topology (think multihomed hosts), in performing traffic analysis (that is, who is talking to whom), and maybe even in getting some social engineering info.

        If the read/write community name can be guessed, you have the problems previously mentioned, but also, now, the attacker can modify the status of network interfaces and even reboot systems. Vendor-enhanced MIBs can allow even more devastating operations.

        Access to SNMP agents is not logged by default. You won't notice authentication failures.

        Some SNMP implementations, notably Solaris, actually run other SNMP daemons on high-numbered ports. Blocking access on a firewall to UDP port 161 might not be sufficient. Solaris users should check out http://www.ist.uwaterloo.ca/security/howto/2000-10-04.html.

Securing SNMP

        Decide whether you need SNMP. If your network operations team isn't monitoring servers via SNMP and you're not running any special software that relies on SNMP (some clustering implementations do), then disable it.

        Modify the default community strings to be hard-to-guess, random-looking strings. Make them long (at least 10 characters), and, whatever you do, don't use the name of your network supplier! (I've seen this too many times.)

        Configure SNMP authentication traps. If someone is trying to guess your SNMP community string, you want to know earlier rather than later. By configuring authentication traps, you can have the agent inform the SNMP master (normally the network management console) when an authentication failure happens. You might think it improbable that someone could guess a long SNMP community string. The savvy attacker will use a tool like ADMsnmp, written by the highly respected outfit ADM. Check out this post to Bugtraq for more info: http://archives.neohapsis.com/archives/bugtraq/1999_1/0759.html.
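
With the UCD/net-snmp agent, for example, the second and third recommendations live in snmpd.conf. The directive names are specific to that implementation, and the community string and trap destination below are placeholders:

```
# Sketch of snmpd.conf for the UCD/net-snmp agent (directives are
# implementation-specific; community string and trap host are examples).
rocommunity    x9kQ2v8TrfL0z         # long, random read-only community
trapcommunity  x9kQ2v8TrfL0z
trapsink       nms.example.com       # where traps are sent
authtrapenable 1                     # send a trap on authentication failure
```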


Network File System

The Network File System (NFS) protocol defines a way for co-operating systems to share filesystems. Today, everyone seems to refer to NFS mounts as shares.

NFS is based on RPC (Remote Procedure Call), a protocol that defines how machines can make calls to procedures on remote machines as if they were local.

NFS implementations consist of more than just a single NFS server process. In fact, they also require mountd, statd, and lockd. These daemons have had a plethora of problems, especially statd.

NFS is an insecure protocol that you don't want to run. Trust me.

Full details of NFS v2 can be found in RFC 1094. NFS v3 is defined in RFC 1813.

NFS Risks

        If you're running an unsupported or unpatched version of NFS, you're dead in the water if someone takes a shot.

        Misconfigurations are common with NFS. Sharing system-related filesystems is asking for trouble.

        Weak authentication is used. The requests can be spoofed or sometimes proxied through the local portmapper.

        No encryption is used, so your darkest secrets go across the network in plaintext.

        NFS-related daemons commonly run as root. An exploitable security hole can leave you with a root compromise on your hands.

        Watch your defaults! The file /etc/exports (or /etc/dfs/dfstab) controls which filesystems you share and with whom. Unless you specify otherwise, your implementation might default to using insecure options or giving write access by default.

Securing NFS

Don't run it! Solve these security headaches in one fell swoop: turn it off! OK, so you want this functionality? Read on.

        Is NFS the right file-sharing mechanism for what you want? Given its security problems, examine your file-sharing requirements. For example, if you want a mirror of some files, you could just buy another disk (they are cheap these days) and use rdist over SSH to make replicas to other systems. If you can find a way around using NFS, then do so.

        Avoid using NFS for sensitive information and never run Internet-facing NFS servers.

        Firewall NFS to limit your exposure on the wider network.

        Stay up to date with vendor security patches! NFS-related patches seem to come out thick and fast. If your vendor isn't supplying patches, this could be "a Bad Thing": they might simply not be patching known holes.

        Share filesystems on a need-to-have basis. Restrict this to read-only sharing wherever possible. Always specify nosuid as an option, to ensure that set-id bits are not honored on files residing on exported filesystems.

        Remove any references to localhost in your exports file.

        Do not self-reference an NFS server in its own export file.

        Limit export lists to 256 characters (including expanded aliases if aliases are in use).

        Consider using a replacement portmapper that won't forward, or proxy, mount requests. Check Wietse Venema's modified portmapper at http://www.ja.net/CERT/Software/portmapper/.

        Where read-only sharing is possible, consider mounting a locally exported filesystem as read-only (that is, in /etc/vfstab or similar).
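
Put together, a share entry reflecting the advice above might look like this in Solaris /etc/dfs/dfstab; the path and client name are placeholders, and Linux uses /etc/exports with different option syntax:

```
# Sketch of an /etc/dfs/dfstab entry (Solaris share_nfs syntax; values are
# examples): read-only for one named client, with set-id bits ignored.
share -F nfs -o ro=client1.example.com,nosuid /export/docs
```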

Samuel Sheinin has written an excellent article on the risks of NFS and how to remediate them; find it at http://www.sans.org/infosecFAQ/unix/nfs_security.htm. If you are running NFS, consider installing the tool nfswatch.

NFS version 4 is the next generation of NFS. Production-ready implementations are not readily available as yet. See http://www.nfsv4.org for more information.

Alternatives to NFS include AFS (http://www.contrib.andrew.cmu.edu/~shadow/afs.html) and CODA (http://www.coda.cs.cmu.edu/).


The Caveats of chroot

A common countermeasure aimed at securing network services is to run them in a chroot environment. chroot changes a process's idea of its root directory. The idea is to prevent the process from having access to any files outside the chroot directory. Therefore, if the network service is compromised, the rest of the system is protected. However, it's not as simple as that.

The most common mistake is to think that chroot is like a virtual computer, a totally distinct environment. It isn't. It's a filesystem abstraction, and there are escape routes from chroot environments. Here are some details:

        If the process can run as root, your security is hosed. After compromising a chroot'ed network service running as root, the attacker can create device files via the mknod command and access RAM directly. The attacker can then modify the process's idea of the root directory and have unrestricted access to the system.

        Some distributions suffer from a bug in their chroot call. An attacker who has compromised the chroot environment can force a second chroot and then cd out of the restricted area. For more details, see http://www.bpfh.net/simes/computing/chroot-break.html. To prevent this requires a kernel change. See http://archives.neohapsis.com/archives/nfr-wizards/1997/11/0091.html for more details.


Better the Daemon You Know

Take a multilayered approach to securing network services; that way, if one layer of defense fails, you haven't given away the farm.

        Disable the services you don't need.

        Firewall the services you do need. (This could be an expensive market-leading product or a hardened UNIX system with packet forwarding enabled and kernel-based IP filtering in effect.)

        In addition to a network firewall, consider the use of kernel-based access controls to protect your network services from internal systems (which themselves might have been breached or are simply in the hands of a malicious employee). Today, many UNIX systems ship with kernel-based IP filtering. Learn how to use this feature.

        Consider the use of TCPWrappers. This software protects TCP-based network services that are launched by inetd. Normally, when inetd receives a connection from a client, it consults inetd.conf and launches the program that corresponds to the port on which the connection was received. With TCPWrappers installed, inetd calls tcpd, which consults the hosts.allow and hosts.deny files. These files control client access to services based on IP address. For example, you might want to limit SSH access to a range of addresses where you know your shell users are located. Or, you might allow only cluster members to access cluster-based daemons. For extra bonus points, you can set up fake daemons to pick up on suspicious activity. For example, create a fake finger daemon that outputs nonsense data or an HTTP daemon that sends out redirects to the attacker's machine. Parse the logs using a tool like swatch (covered in Chapter 13, "Logging and Auditing Tools" ).

        Read the man page for the network service you want to protect and identify command-line options that can be used to control access, improve logging, or limit dubious functionality. For example, on many systems, syslogd listens to the network for syslog messages. Therefore, an attacker can send spurious messages to either mislead you or fill up the disk. By specifying a command-line switch, you can disable this function.

        If you have source, consider compiling with StackGuard. This can eliminate common types of buffer overflow.

        Make sure the service is launched with a sane umask value. The umask is inherited from the parent process, which could be inetd or init. Limit this value to no access by Other as a minimum. Check it out on your system.

        Verify that network server programs and configuration files cannot be overwritten by a nonprivileged user. Check for weak permissions on the files themselves and their parent directories.

        A common sign of intrusion is a second inetd appearing in the process list. Intruders start another copy of inetd with their own configuration file to install a back door such as a password-protected shell when they connect on a specific port. Consider moving the real inetd to an alternative location and replacing it with a fake inetd that notifies you when it is executed. (You'll need to update your startup files to reflect the path change.) Of course, this is an attack-based countermeasure specific to only one kind of attack, albeit a popular one.

        Install a lightweight intrusion detection system such as snort (http://www.snort.org), with a signature set that reflects the services you are offering. Integrate this into a centralized monitoring scheme, such as a central syslog server and swatch (see Chapter 12, "Intrusion Detection Systems," and Chapter 13, "Logging and Auditing Tools" ).
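
The umask point above is easy to verify programmatically. A small Python sketch, where the mask 0o027 is just one value that satisfies the "no access by Other" minimum:

```python
# Sketch: show what file mode a daemon would create under a given umask.
# A mask of 0o027 removes write access for group and all access for "other".
import os
import stat

def mode_under_umask(mask, path):
    """Create path under the given umask and return its permission bits."""
    old = os.umask(mask)
    try:
        # A typical daemon creates files requesting mode 666; the kernel
        # clears the bits set in the umask.
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
        os.close(fd)
        return stat.S_IMODE(os.stat(path).st_mode)
    finally:
        os.umask(old)  # restore the caller's umask
```

Under a mask of 0o027, a file created with the usual 0o666 request ends up with mode 0o640 (rw-r-----): group can read, Other gets nothing.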

A good example of a patch tool is the Solaris patchdiag tool, officially available to SUN customers who have a support contract. As with any patch tool, it had its fair share of teething troubles when it was first released; however, the tool has since matured. Patchdiag lets you download a daily-updated patch meta file containing an up-to-date list of patches available for each Solaris package. Upon execution, patchdiag compares the installed patches with those available from SUN.

Even better still is the Red Hat Update Agent (http://www.redhat.com/support/manuals/RHL-6.2-Manual/ref-guide/ch-up2date.html), which automatically identifies missing patches. You can either manually select which patches to download and install or let the Agent do it all for you. Unregistered users can use the (much slower) public download site.

Unless you run a trusted UNIX distribution, vendor security extensions or third-party software will be required. A popular option is sudo; we'll cover this and alternatives later. Mainstream vendors have started picking up the ball; SUN has introduced the Role Based Access Control (RBAC) system in Solaris 8.


Assessing Your UNIX Systems for Vulnerabilities

A common strategy to assess your system for vulnerabilities is to do this in a number of phases:

1.       Use a network-based vulnerability scanner to identify remotely exploitable security holes. Attackers can exploit these vulnerabilities from across the network; they don't need a UNIX account on the victim machine. Fixing these tends to be priority #1 in most shops.

2.       Eliminate false positives by manually double-checking the results. For a number of reasons, scanners sometimes report false positives. Log on and check for them. There is probably nothing worse than a security newbie running a vulnerability scanner, taking the results as gospel, and dumping the output on the system administrator's desk. (A number of large, respected accounting firms gained a reputation for doing this.) Know the weaknesses as well as the strengths of your tools or look a fool in front of a knowledgeable system administrator.

3.       Prioritize the findings based on your understanding of the vulnerabilities and the risk they pose in the context of your site. The scan reports generally include background information on specific vulnerabilities to help you do this. Hopefully this book will serve to sharpen your understanding of the issues.

4.       Draw up a plan for fixing the problems. Identify what needs to be done and who is going to "own" the change. Test the changes on a nonproduction system and ensure applications are fully tested. Start fixing one major change at a time.

5.       Use a host-based vulnerability scanner to identify locally exploitable security holes. Host-based scanners should produce few false positives because they run on the machine itself; manual checking should be minimal if the product is even remotely decent. At a minimum, your host-based scanner should be able to identify missing security patches, insecure network services, user account problems, and common filesystem insecurities. Don't just take the tool vendor's word for it, either; always evaluate against a system build that you are familiar with before you buy.

6.       Identify the biggest risks. Given your knowledge of the local user base, state these risks in the fix plan and start fixing.

Commercial and freeware network vulnerability scanners have been around for some time. Personally, I like the Nessus network vulnerability scanner. It's reliable and extensible; new checks are relatively easy to add via the NASL scripting language. The wide and enthusiastic user base and scripting language result in a fast turnaround for new tests. Often, NASL scripts are available within a day of a vulnerability being announced. This is significantly faster than other scanners I've used, even expensive commercial products. Another useful feature is automated updates of new checks, ensuring that you keep current.

If you're assessing vulnerability scanners, take a look at http://www.networkcomputing.com/1201/1201f1b1.html for some interesting insights into the effectiveness of popular scanners.

As with any vulnerability scanner, you can get false positives. The comprehensiveness of the NASL scripts varies, so I recommend that you manually check the results to save embarrassment. However, unlike with the commercial scanners, you can at least review the source and improve it if you have better ideas.

The freeware site is at http://www.nessus.org/.

For those who want formal support, the creators of Nessus will happily sell you a support contract.

Host-based scanners are specific to a particular UNIX flavor, so your options will be tied to the popularity of your platform. A comprehensive list of commercial scanners is available here:

    http://website.lineone.net/~offthecuff/h_scan.htm
  

Just remember that the market leader is not necessarily the best; it might just have the nicest GUI. If the guts aren't up to the task, no GUI will make up for that. The host-based scanner market is relatively immature compared with network vulnerability scanners, so be sure to validate vendor claims before you buy (and watch out for ancient checks being touted as "state of the art").

One thing I can promise you, though: as you go through the dragged-out process of locking down a fully operational production system, you'll soon realize that applying your security standard to a virgin system is a walk in the park.

The Cost of Belated Security Hardening

The next time someone (your manager, the project manager, the marketing manager) asks you to release a system to your user community before you've had a chance to harden it (for example, because of late delivery of hardware), ask him to sign a purchase order. Tell him that this is to cover the costs of making the system compliant post-go-live. When they laugh, point out that making security changes to operational systems increases risk; however well researched, things might just break. To reduce the risk of a bad change hitting during peak activity, many organizations have change policies that allow changes to be made only off-peak. Personally, I don't know any CIOs who allow changes to be made to systems without application people running some tests, so the costs start to skyrocket. It's also bad practice to make a whole slew of changes in one shot, because backing out the change becomes nontrivial. This results in a string of late nights that further amplifies the cost of post-live hardening. Factor in "just-in-case" data backups, and you're talking serious money.

The rush to save a day before the hardening activity can easily cost an organization thousands of dollars in overtime to put things right later, as well as leaving a drawn-out window of exposure. The decision-makers in your organization should be made aware of this problem before you hit it. With their buy-in, this kind of situation can be avoided. As well as the overtime savings, the other selling point is avoiding low staff morale. Unless someone is shooting for overtime cash, I don't know of anyone who wants to arrive back in the office at midnight to make a change. If the change goes wrong, they could be left with a night's restore activity. Losing key personnel the following day is also not a "good thing."

Host Lockdown

Host lockdown is the process of making a system compliant with your UNIX security policy. In other words, it is configuring the system to be significantly more resistant to attack.

There are three common approaches:

        Manually make the changes required by your policy. This is certainly useful the first couple of times you do it because you get to see what you are changing. After that, making changes manually is a boring waste of time, which can easily lead to things being missed or mistakes being made.

        Write some scripts to automate the changes. This requires some scripting capability, a test machine, and some time (well, in fact, quite a lot of testing time if you want to cover everything). This time is probably best spent on site-specifics (like in-house application hardening) because writing operating system-hardening scripts is a bit like reinventing the wheel (see the following section).

        Identify a hardening tool that can best match your security standard. Recent years have seen the development of some excellent hardening scripts for the most popular platforms. In the following section, we cover the primary ones. The key here is to understand what the tool does and doesn't do and how to configure it for your site policy. You can fill in the gaps through homegrown scripts.
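
If you do go the homegrown-script route, the pattern worth copying from the established tools is "check before fix": each change tests for compliance first, so the script is safe to re-run. Here's a minimal sketch of that idea in Bourne shell; the specific item (restricting direct root logins to the console via Solaris's /etc/default/login) is my own illustrative choice, and the function name is hypothetical.

```shell
#!/bin/sh
# Sketch of a homegrown hardening check/fix. Each fix tests for
# compliance first, so repeated runs are harmless (idempotent).

harden_root_console() {
    # Restrict direct root logins to the console by ensuring a
    # CONSOLE= line exists in the file given as $1.
    file="$1"
    if grep '^CONSOLE=' "$file" >/dev/null 2>&1; then
        echo "ok: $file already compliant"
    else
        cp "$file" "$file.bak"              # keep a back-out copy
        echo 'CONSOLE=/dev/console' >> "$file"
        echo "fixed: $file"
    fi
}

# On a real Solaris system you would run:
# harden_root_console /etc/default/login
```

Taking a backup before each change (the .bak copy above) also gives you a crude but workable back-out path, which matters once the system is live.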

Host-Hardening Resources

These tools are distribution specific because of the differing UNIX security interfaces and platform-specific risks. This shouldn't be treated as a definitive list, though; Linux, for example, has many, many hardening projects. I've been a little selective and picked out the ones that are sufficiently well developed to be usable in a production environment. As usual, though, test any such tools on a nonproduction system first.

For some distributions, I don't know of a specific tool, so I've listed well-regarded hardening documents instead. With that disclaimer in place, let's look at the options.

Sun Solaris

Solaris users are spoilt for choice these days. The obvious question is, "Which hardening tool should I go for?" To help you decide, check out this SANS-sponsored report that compares the most popular Solaris hardening tools: http://www.sans.org/sol11c.pdf.

YASSP (Yet Another Solaris Security Package)

YASSP (Yet Another Solaris Security Package)

Primary Author: Jean Chouanard

URL: http://www.yassp.org/

YASSP supports Solaris versions 2.6, 2.7, and 8 on both Sparc and Intel. YASSP looks set to become the de facto tool for hardening Solaris. The SANS Institute has stated that it will promote YASSP's use globally.

YASSP ships as a tar ball containing packages in Solaris package format, some shell scripts, and a set of security tools to replace or supplement stock Solaris programs.

The following packages will be installed by default:

        SECclean: The core package, securing your Solaris installation

        GNUgzip: gzip 1.2.4a [GNU]

        PARCdaily: Some daily scripts, logs rotation, backup, and RCS for systems files

        WVtcpd: tcp_wrappers 7.6 and rpcbind 2.1 [Wietse Venema]

        PRFtripw: Tripwire 1.2 [Purdue Research Foundation of Purdue University]

        OPENssh: OpenSSH 2.3.0p1 [OpenSSH.com]

From a general Solaris-hardening perspective, by default, YASSP does the following:

        Turns off ALL network services in /etc/inetd.conf (configurable) and disables nonessential services started from /etc/init.d.

        Turns off rhosts authentication, disables unused system accounts, disables FTP access to system users, and sets minimum password length to eight.

        Disables stack-smashing attempts (commonly caused by buffer overflows) and activates logging of such attempts at the kernel level via a parameter change in /etc/system. (Note that this doesn't prevent data segment buffer overflows.) This can actually break your applications, so testing, as ever, is essential.

        Runs Casper Dik's fix-modes script to lock down filesystem permissions. It also disables honoring of the set-uid bit on newly mounted filesystems.

        Modifies the behavior of the TCP/IP stack to both improve security and increase resilience to denial of service (DoS) attacks. (This is helpful, but it will not defeat the problem.)
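
For reference, the stack-protection change described above normally amounts to a pair of standard Solaris kernel tunables appended to /etc/system (effective on most SPARC hardware after a reboot; the exact lines YASSP writes may differ):

```
* Disallow executing code on the user stack, and log any attempt
set noexec_user_stack = 1
set noexec_user_stack_log = 1
```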

Don't be alarmed by that thorough approach; the default packages and installation settings are certainly appropriate for an Internet-exposed or highly sensitive internal server. However, for internal multiuser systems, you'll definitely want to investigate the configuration options available in yassp.conf. You could easily end up breaking application functionality if you don't modify the defaults.

For those installing Solaris from scratch, primary YASSP author Jean Chouanard has helpfully documented that process (starting with the Solaris CD in hand) at http://www.yassp.org/os.html.

YASSP is a no-brainer to install. After you've downloaded the YASSP tar ball, you install as follows:

# uncompress yassp.tar.Z
# tar xvf yassp.tar
# cd yassp
# ./install.sh
... check and modify all the configuration files ...
# reboot

The post-install steps require you to edit a small number of configuration files and create the Tripwire integrity database, as follows:

        Edit and configure /etc/yassp.conf.

        Edit and configure /etc/hosts.deny /etc/hosts.allow.

        Edit and configure /etc/sshd_config /etc/ssh_config.

        Read http://www.yassp.org/after.html and the papers linked under http://www.yassp.org/ref.html.

        Make any additional changes and install any additional software.

        Create the Tripwire database and save it to read-only media.
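
As an illustration of the tcp_wrappers step, a common deny-by-default setup for hosts.deny and hosts.allow looks like the following (the subnet address is a placeholder for your own management network; tailor the service list to your policy):

```shell
# /etc/hosts.deny -- refuse everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- then permit only what you need, for example
# SSH from a trusted subnet (address below is a placeholder)
sshd: 10.1.2.0/255.255.255.0
```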

Personally, I think YASSP is the future of Solaris hardening (at least for virgin systems). It's been well tested, can be deinstalled as easily as it is installed, and seems to have attracted some talented individuals to keep it updated.

TITAN (Toolkit for Interactively Toughening Advanced Networks and Systems)

TITAN

Primary Author: Brad Powell

URL: http://www.fish.com/titan/

TITAN predates YASSP and takes a different approach, based on the KISS principle (Keep It Simple, Stupid).

Rather than create a mammoth script that attempts to do everything, TITAN's authors wrote a set of Bourne shell scripts (referred to as modules) invoked by the TITAN program itself. Each module targets a specific aspect of operating system security. Modules can be included (or excluded) via the use of a configuration file specified at runtime. This enables you to create different configuration files to reflect the different security postures required by individual systems (for example, firewall, mail server, workstation, and so on).

A TITAN module consists of two primary functions: fix and check. As you'd expect, the fix function does the actual work (it makes the changes), whereas the check function looks to see whether the fix has already been applied. You tell TITAN which mode to run a particular script in through the configuration file.

This makes it easy to check that a system has been configured to your security policy. For me, this is the real strength of TITAN.
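
To make the fix/check split concrete, here is an illustrative sketch in that style. This is not an actual TITAN module (TITAN supplies its own template with its own conventions); the item checked here, denying root FTP logins via an ftpusers-style file, is my own example.

```shell
#!/bin/sh
# Illustrative sketch of a fix/check module -- not real TITAN code.
# check() reports compliance; fix() makes the change only if needed.
# The runner decides which function to call based on the run mode.

check() {
    # PASS if root is listed in the deny file given as $1
    if grep '^root$' "$1" >/dev/null 2>&1; then
        echo "PASS: root is denied FTP access"
        return 0
    fi
    echo "FAIL: root is not listed in $1"
    return 1
}

fix() {
    # Append root to the deny file, but only when the check fails
    check "$1" >/dev/null || echo root >> "$1"
}

# On a real system: check /etc/ftpusers   (or: fix /etc/ftpusers)
```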

To install TITAN, copy the TITAN archive to a target server, run the install program, and customize the configuration file. You're all set to invoke the main TITAN shell script (supplying the name of your configuration as an argument). If you're having problems with a module, you can run TITAN in debug mode.

Run TITAN periodically via cron in check mode, and you have an extensible, host-based scanner.
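
Such a cron entry might look like the following (the install path, configuration file name, and flags are placeholders; check the TITAN documentation for the actual invocation on your version):

```shell
# Hypothetical weekly check: run TITAN in check mode at 02:00 every
# Sunday and mail the report to root
0 2 * * 0 /opt/Titan/Titan -c /opt/Titan/configs/check.conf 2>&1 | mailx -s "TITAN check" root
```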

TITAN version 3.7 supports Solaris only; however, version 4 (in alpha as of this writing) promises scripts for Linux, FreeBSD, and Tru64.

The tool itself is structured to accommodate any operating system, but until now few modules have existed for any other distribution.

Another great thing about TITAN is that you don't need to be a programming genius to add extra modules, nor do you have to jump through hoops. (Remember KISS?) Assuming you are competent at Bourne shell scripting, there is no new programming language to learn, just some conventions to follow (copy and customize the supplied template file).

Note that, unlike YASSP, TITAN doesn't install any security tools. (It does include Casper Dik's fix-modes script, but that's a one-shot tool, so I'm not counting it.) I recommend that you run YASSP first to harden the system and install the tools, and then monitor with TITAN on an ongoing basis. For huge server farms, TITAN output can get overwhelming; consolidate the output using Perl.

When I first started using TITAN, it had an annoying habit of changing things even when you asked it to only check. The principle of a passive check mode hadn't been implemented consistently throughout all the modules. A quick check through the most recent set of scripts suggests the authors have rectified this problem (although I couldn't find a reference to it in the change log).

This is a handy reminder: Always understand what actions a hardening script will take before you run it.

Note

The authors comment that TITAN doesn't actually stand for anything. They just came up with the name, and it stuck.

 

GNU/Linux

There are a host of projects seeking to protect the world's favorite penguin (in case you've been living under a rock, the penguin is the Linux mascot). Here are a select few:

Bastille Linux

Bastille Linux (v1.1)

Primary Author: Jay Beale

URL: http://www.bastille-linux.org/

The Bastille Hardening System is an open source, community-run project suitable for Red Hat and Mandrake systems. (The authors have declared it should be portable to any Linux distribution.)

The stated mission of the project is to provide the most secure, yet usable, system possible. The authors have drawn on a wide range of security sources, including the SANS Linux hardening guide, Kurt Seifried's Linux administrators' security guide, and more. The creators, Jon Lasser and Jay Beale, rank administrator education as a key goal.

Bastille focuses on four key lockdown areas:

        It implements network packet filtering in the kernel (using ipchains). This limits your visibility on the network. (This doesn't "hide" your system; rather, it sets up access controls to your system's network services.)

        It downloads and installs the latest security patches (not many tools do that). The caveat here is that it doesn't check the digital signatures on the downloaded files. (You can do this manually using PGP or the open source GnuPG software from http://www.gnupg.org/.)

        It increases the system's resistance to many types of local attacks by removing the privilege bit from a number of set-uid programs.

        It disables nonessential network services.
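
The manual signature check mentioned above is straightforward once you have imported the vendor's public key; the file names here are placeholders for the actual package and signature you downloaded:

```shell
# Import the vendor's signing key into your keyring (one-time step)
gpg --import VENDOR-KEY.asc

# Verify the detached signature against the downloaded package
gpg --verify package.rpm.asc package.rpm
```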

As a tool, Bastille's notable features include the following:

        Bastille can be run on living systems (that is, not just new installs).

        Bastille is self-aware. This matters most for multiple runs on the same machine: Bastille knows what it did the last time it was run, so it won't repeat itself. Note that Bastille does not detect your initial security settings during its first run. (It will prompt you to turn off things that you might already have disabled.)

        Bastille has a primitive, but handy, Undo feature. Essentially, it takes a backup of files before modifying them, replicating the directory structure and permissions under the Undo directory. There is no automagic back out; if you need to undo an action, you do it by hand.

        Bastille has a so-called impotent mode (also known as shooting blanks mode). This is definitely recommended for first-time users. Bastille will tell you what it would have done, given your answers, without actually doing it. This might save you having to undo things later.

The Interactive-Bastille.pl Perl script is written with the novice in mind. The user is prompted to answer yes or no for each hardening question. Each question is supplemented by explanatory text to help someone unfamiliar with Linux security. At the end of the Q&A session, the user runs BackEnd.pl, and the changes are made.

To make those exact same changes across a number of machines, just copy the Bastille-input-log generated during the first run and feed the log results as input to the BackEnd Bastille script on the target servers. This technique is fully explained in the Bastille documentation (docs/readme.automate). A slicker automation procedure is apparently in the works, but the current approach, although basic, does work.
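
A sketch of that replication step might look like the following (the hostnames and remote path are illustrative; docs/readme.automate has the authoritative procedure):

```shell
# On the reference machine: answer the questions once, producing
# Bastille-input-log, then apply locally and push to each target.
./Interactive-Bastille.pl
./BackEnd.pl

for host in web1 web2 web3; do        # placeholder hostnames
    scp Bastille-input-log "$host":/root/Bastille/
    ssh "$host" 'cd /root/Bastille && ./BackEnd.pl'
done
```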

A Linux novice will have a steep learning curve with whatever tool they end up using. Bastille makes a lot of effort to ease this burden and, to some extent, does achieve this. (I know of nothing better.)

Hewlett-Packard HP-UX

I am not aware of any publicly available hardening scripts for HP-UX. If you have some time, I recommend you port TITAN and donate your changes back to the TITAN crew.

However, Kevin Steves from HP has created a very useful hardening guide.

The paper for HP-UX 10 can be found here: http://people.hp.se/stevesk/bastion.html.

The HP-UX 11 paper is here: http://people.hp.se/stevesk/bastion11.html.

Kevin has written a very readable guide starting from the installation of HP-UX through securing the host and creating a recovery tape.

IBM AIX

Again, I am not aware of any publicly available hardening scripts for AIX.

A basic guide for securing AIX has been developed by IBM. It is available here: http://www.redbooks.ibm.com/redbooks/SG245971.html.

FreeBSD

FreeBSD users should check out the Security HOWTO available at http://people.freebsd.org/~jkb/howto.html.

The FreeBSD ports collection (security tools ported to FreeBSD) is available at http://www.freebsd.org/ports/security.html.

What do you do if you're unable to find a hardening tool for your operating system's specific distribution or version? Unfortunately, vendors have a poor track record of creating credible hardening tools. Check with your user group (if it exists) and the Usenet newsgroup for your distribution. You might wind up having to rewrite the scripts from one of the tools previously mentioned. If you're an HP-UX or AIX admin interested in security, here is the perfect opportunity to make a name for yourself!


Summary

A single chapter on UNIX security never can do the subject justice. Rather than attempt to cover all aspects of UNIX security, I've hopefully got you thinking about the issues. Specific technology problems come and go (well, they don't always go), but the thought processes and security principles tend to remain static. If I've got you thinking about your environment in a new way, then I've met my goal. The great thing about UNIX is the open community that supports her. Although there are many problems, there are many more solutions. Good luck keeping her safe!
