4.5. Network Service Security


FreeBSD and OpenBSD systems can provide an extensive list of services. While Chapter 5 through Chapter 9 of this book provide detailed information about some of the most common and complex network services, you may find a wealth of more basic services are also useful on your network. The second half of this chapter discusses some of these services, what they provide, how to provide them securely, and in some cases, why you should do so.

4.5.1. inetd and tcpwrappers

The Internet daemon (inetd(8)) is a network service super-server. It comes with a bit of a stigma, and this is no surprise: most texts on securing hosts contain a step where you disable inetd, yet few describe enabling or even securing it. inetd is not evil, and it can be used safely. The services inetd is often configured to provide, however, read like a list of security nightmares: telnetd, ftpd, rlogind, fingerd, and so on. All of these services pose unnecessary risk to infrastructure systems, especially when much of their functionality can be provided by ssh.

Again, inetd is not to blame. As most administrators know, the inetd process reads configuration information from /etc/inetd.conf and listens on the appropriate TCP and UDP ports for incoming connections. As connections are made, inetd spawns the appropriate daemon. Unfortunately, there are not a great many daemons traditionally run through inetd that are safe to use in today's unsafe network environments. Nevertheless, should you find yourself in a position to provide services through inetd, you should know three things.

First, on FreeBSD and OpenBSD systems, inetd will limit the number of times any given service can be invoked to no more than 256 per minute. Unless you legitimately expect this much traffic, you may want to lower this threshold by using the -R rate command-line argument.

Second, use tcpwrappers. The manpage for hosts_access(5) describes how tcpwrappers may be configured using /etc/hosts.allow and /etc/hosts.deny to restrict connections based on originating hostname and/or address specification. We briefly examine a hosts.allow file in Example 4-7.

Example 4-7. Sample hosts.allow file
ALL : 1.2.3.4 : allow
# SHORT CIRCUIT RFC931 ABOVE THIS LINE
ALL : PARANOID : RFC931 20 : deny
ALL : localhost 127.0.0.1 : allow
sshd : mexicanfood.net peruvianfood.net : allow
proftpd : dip.t-dialin.net : deny
proftpd : localhost .com .net .org .edu .us : allow
ALL : ALL \
        : severity auth.info \
        : twist /bin/echo "You are not welcome to use %d from %h."

In this example, all connections are allowed from 1.2.3.4. The PARANOID directive in the next line performs some basic hostname and address checking to ensure the hostnames and IP addresses match up. The second part of that stanza utilizes the IDENT protocol to verify that the source host did in fact send the request, provided the source host is running identd.

The latter lines are fairly straightforward. All connections are allowed from localhost. Connections via sshd are permitted from both mexicanfood.net and peruvianfood.net. FTP access from dip.t-dialin.net is explicitly denied (presumably the administrator noticed a lot of attacks from this network and has no users there), while access from .com, .net, .org, .edu, and .us hosts is allowed.

Finally, if the connection was not explicitly permitted or denied before the last line, the user is informed that she is not allowed to use a given service from the source host, and the rejection is logged via syslog to the auth.info facility and level.

FreeBSD systems support tcpwrappers compiled into the inetd binary. This means that by using the -W and -w flags to inetd (these flags are on by default; see /etc/defaults/rc.conf), your inetd-based services will automatically be wrapped.
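On FreeBSD, both the wrapping flags and the rate limit mentioned above are typically set through rc.conf. A minimal sketch follows; the values shown are illustrative, not recommendations.

# /etc/rc.conf -- defaults live in /etc/defaults/rc.conf
inetd_enable="YES"
# -wW wraps external and internal services with tcpwrappers;
# -R lowers the per-service invocation rate limit
inetd_flags="-wW -R 100"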

To use tcpwrappers on OpenBSD systems, use tcpd(8). Example 4-8 lists two lines in /etc/inetd.conf that demonstrate the difference between using tcpwrappers for eklogin and not using it for kshell.

Example 4-8. Using tcpwrappers in OpenBSD
eklogin stream tcp nowait root /usr/libexec/tcpd rlogind -k -x
kshell  stream tcp nowait root /usr/libexec/rshd  rshd -k

Enabling tcpwrappers for eklogin but not kshell is done here for demonstrative purposes only. If possible, use tcpwrappers for all services run through inetd.


The server program field changes to /usr/libexec/tcpd (the tcpwrappers access control facility daemon), which takes the actual service and its arguments as arguments to itself.

Finally, inetd spawns other programs using the fork(2)/exec(3) paradigm. Programmers are very familiar with this, as it is the way a process spawns a child process. There is nothing particularly wrong with this approach, but you must be aware that loading a program this way is not a lightweight operation. For instance, sshd could run out of inetd easily enough, but since sshd generates a server key on startup (which takes some time), the latency would be intolerable for users. Therefore, when supporting a high rate of connections is a requirement, inetd might not be the best solution.

Remember that a variety of daemons utilize tcpwrappers even when they do not run out of inetd. To determine if this is the case, read the manpage for the service. You may also be able to tell by running ldd(1) against the binary. If you see something called libwrap, then tcpwrapper support is available. If the binary is statically linked, of course, your test will be inconclusive.
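A quick (if imperfect) check against a dynamically linked binary might look like the following; the path is only an example, and a line of output mentioning libwrap.so suggests the binary was built with tcpwrappers support.

# ldd /usr/sbin/sshd | grep libwrap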


4.5.2. Network File System

Centralized storage through the use of shared filesystems is a common goal of many administrators. OpenBSD and FreeBSD systems natively support the Network File System (NFS) Version 3. While this service is often used and considered vital in many networks, there are inherent security risks in sharing a filesystem across a network of potentially untrusted systems.

NFS should be avoided if at all possible. We present this section not to describe how you might secure NFS, but instead to illustrate why a secure installation is not possible. If you must have a shared network filesystem, consider more secure NFS alternatives such as the Andrew File System (AFS), Matt Blaze's Cryptographic File System (CFS), or the Self-Certifying File System (SFS).

4.5.2.1 Implicit UID and GID trust

The greatest security concern in deploying NFS is the minimal amount of "authentication" required to access files on a shared filesystem. By default, when exporting an NFS filesystem, user IDs on the server (except root) map to user IDs on the client. For example, a process on the client running with UID 1000 will be able to read and write to all files and directories on the server that are owned by UID 1000. Yet UID 1000 on the client may not be the same user as UID 1000 on the server. The administrator of the client system could trivially su to any user on that system and be able to access all user-readable files on the shared filesystem. This danger extends to the root user if the -maproot option is specified for the shared filesystem in /etc/exports.

This danger may be mitigated by forcibly mapping all remote users to a single effective UID for the client. This essentially provides only guest access to the filesystem. If writing to the filesystem is permitted in this case, it will become impossible to enforce user-based permissions as all users essentially become the same user.

Some administrators have made it possible to tunnel NFS over SSH. This ensures all NFS traffic is encrypted. However, this has limited value as it does not eliminate the implicit UID and GID trust issue described here.

4.5.2.2 NFS export control

NFS is configured through exports(5): which filesystems are exported, under what conditions, and to which systems. This allows for fairly fine-grained control. Applying the principle of least privilege, you would export filesystems with as many security options enabled as possible. Consider the following examples.

/home/users         devbox buildbox sharedbox

This configuration exports home directories to the three systems specified, all of which are under the control of an administrator who ensures users are not able to access other users' home directories.

/scratch            -mapall=neato:    userbox1 userbox2 userbox3

This scratch area for project neato is shared to all systems, but users from all clients are mapped to user neato. This allows only the specified NFS clients to work with this temporary storage area.

/archives           -ro

These archives are shared for all users to read, but no users may write to the filesystem. For more information about restricting exports, see the manual page for exports(5). You should now begin to realize that deploying NFS in anything resembling a secure manner will require that you remove much of the functionality that you would have liked to retain.
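One practical note: after editing /etc/exports, mountd must be told to reread it. Assuming mountd has written its PID file in the usual location, sending it a hangup signal is enough:

# kill -HUP `cat /var/run/mountd.pid`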

4.5.2.3 NFS network restrictions

If you find yourself reading this section, you may be suffering from a mandate to run NFS. We again urge you to consider some of the shared filesystem alternatives mentioned previously.


On the network level, there is an additional set of restrictions of which the administrator should be aware. By default, mountd(8), which services mount requests for NFS, accepts connections only on reserved ports. This ensures that only the root user on remote systems may mount shared filesystems. If the -n argument is specified to mountd, requests from all ports will be honored, allowing any user on any system to mount network drives. Do not enable this option unless you have a specific need to do so; the manual page for mountd mentions that servicing legacy Windows clients may be one such motivation.

The ports that NFS needs are managed by the portmapper, called rpcbind(8) (the Sun remote procedure call [RPC] implementation) in FreeBSD 5.X and portmap(8) in OpenBSD. In OpenBSD, portmap runs as the _portmap user by default; in FreeBSD, rpcbind runs as the daemon user when given the -s flag. In both cases, these services may be reinforced with tcpwrappers so that only specific systems or networks can communicate with these applications and, hence, use NFS. Since RPC negotiates ports dynamically, NFS is a very difficult service to firewall.
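A hosts.allow fragment along the following lines (the network shown is hypothetical) limits who may even talk to the portmapper; check the daemon name used on your platform and adjust accordingly.

# /etc/hosts.allow
rpcbind portmap : 192.168.10.0/255.255.255.0 : allow
rpcbind portmap : ALL : deny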

With or without a firewall, it should be clear that NFS, while it may be useful, lacks any real security. Avoid using it if at all possible.

4.5.3. Network Information Services

What was originally Yellow Pages (yp) was renamed Network Information Services (NIS) as a result of trademark issues; thus, many of the programs related to NIS begin with the letters "yp." NIS, like NFS, is RPC-based, but it provides centrally managed configuration files for all systems in a NIS domain. Although the configuration details of NIS are beyond the scope of this book, there are significant security implications in running NIS on your network, not the least of which is the unencrypted dissemination of NIS maps, such as your password file.

NIS should be avoided if at all possible. We present this section not to describe how you might secure NIS, but instead to explain why you cannot. If centralized authentication and authorization is your goal, consider authenticating using Kerberos and providing authorization via LDAP. Unfortunately, this is an extensive topic that would require a book of its own. A more straightforward approach may be to safely distribute password files from a trusted administration host; we describe this procedure in the next section.

4.5.3.1 Password format compatibility

If you have NIS clients that understand only the weaker DES passwords (pre-Solaris 9 update 2, for example), your NIS maps will have to contain only DES encrypted passwords. This may be accomplished by ensuring that users make password changes on systems that understand only DES passwords, or by reconfiguring your systems to generate DES encrypted passwords by default. Neither of these is a good solution.
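For reference, forcing DES hashes on a FreeBSD system is a login.conf(5) change along the following lines (again, not a recommendation), followed by rebuilding the capability database with cap_mkdb /etc/login.conf. OpenBSD's analogous knob is the localcipher capability in login.conf.

# fragment of the default class in /etc/login.conf (FreeBSD)
        :passwd_format=des:\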

4.5.3.2 Encrypted password exposure

The master.passwd file, which contains encrypted passwords for all your users, is easily readable by others on your network when you use NIS. Although client requests for the master.passwd.byname and master.passwd.byuid maps must come from a privileged port, this is not a significant increase in security. If any user has root access on any Unix system on your network (or can quickly build a system and plug it in), this restriction becomes irrelevant.

It gets worse. NIS is frequently used in heterogeneous environments and, as described above, passwords may need to be stored using the much weaker DES scheme rather than the MD5 or Blowfish hashes used by default on FreeBSD and OpenBSD, respectively. As if this weren't bad enough, some older operating systems do not support the concept of shadow passwords. In this case, NIS must be run in UNSECURE mode (specified in the appropriate Makefile in /var/yp). With this configuration, encrypted passwords are exposed in the passwd.byname and passwd.byuid maps. Perhaps this is not so terrible, given that the security provided by the "low-port-only" restriction was weak to begin with.

4.5.3.3 Limiting access to NIS maps

At the heart of NIS is ypserv(8), the NIS database server. It is this daemon that accepts RPC requests and dutifully provides database contents in response. Host and network specifications in /var/yp/securenets can be used to limit the exposure of your password maps through RPC: the ypserv daemon reads this file and provides maps only to the listed hosts and networks. Given a network of any meaningful size, you might configure an entire network range in this file for which RPC should be answered, but a restriction that broad is trivial to bypass by merely connecting to the network in question. Specifying only a handful of hosts in this file, however, can effectively provide NIS maps to a group of servers while limiting "public" access.
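A minimal securenets file restricting map access to the local host and a small server subnet might look like the sketch below; the addresses are hypothetical, and you should confirm the exact file format against ypserv(8) on your platform.

# /var/yp/securenets
# always allow the local host
127.0.0.1       255.255.255.255
# allow only the server subnet
192.168.10.0    255.255.255.0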

4.5.3.4 On the client side

If you have chosen to lock NIS down to a handful of servers, ypbind(8) deserves some attention. This daemon searches for a NIS server to which it should bind and facilitates subsequent NIS information requests. All systems running NIS should have statically configured NIS domain names and servers so that, instead of attempting to find a server by broadcast, ypbind immediately binds to a known NIS server. This prevents malicious users from setting up rogue NIS servers that provide, for example, password-free passwd maps.
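On FreeBSD, for example, the domain and a restricted server list can be pinned down in rc.conf; the names below are hypothetical. OpenBSD's ypbind can be similarly tied to specific servers; see yp(8) and ypbind(8) for the mechanism.

# /etc/rc.conf (FreeBSD)
nisdomainname="example.nis"
nis_client_enable="YES"
# bind only to the listed server for this domain
nis_client_flags="-S example.nis,nisserver1.example.com"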

4.5.3.5 When is NIS right for you?

If all systems involved in the NIS domain support shadow passwords and can understand md5/blowfish encrypted passwords, some of the risk associated with NIS is mitigated. If NIS is being provided only to a handful of closely administered servers via securenets, the risk is further mitigated.

However, NIS still relies on the difficult-to-protect RPC and operates without encryption. Avoid NIS altogether if you are working with heterogeneous or not completely trusted networks. Instead, develop another, more secure, way to distribute user, group, or configuration files.

For basic configuration information about YP/NIS see the yp(8) manpage and Chapter 19.9 of the FreeBSD Handbook.


4.5.4. Secure File Distribution Using scp

One alternative to NIS is file distribution over ssh. In fact, this paradigm will work not only for password and group files but also for other arbitrary configuration files. The secure copy (scp(1)) program is included as part of the ssh program suite and is included in the base distributions of both OpenBSD and FreeBSD. Secure copy, as the name implies, copies files between networked systems and guarantees data integrity and confidentiality during the transfer. Authentication for scp is the same as for ssh.

In order to put secure file distribution in place, you will need a management station to house all files that are distributed to other hosts, as shown in Figure 4-1. This host should be exceptionally well protected, and access should be restricted to only the administrators responsible for managing file distribution, in line with our principle of least privilege. Transferring configuration files to remote systems is a three-stage process:

  1. Place the files in a staging area on the management station.

  2. Distribute the files to systems.

  3. Move the files from the staging area on target systems into production.

Figure 4-1. Secure file distribution architecture


4.5.4.1 Initial setup

Initial setup will vary depending on the environment. The following steps provide one example of preparing for secure file distribution. Your requirements may dictate changes to the approach presented below.

  1. Create ssh keys for authentication.

    First, create a pair of ssh keys on the management station for copying files over the network. For the purposes of this discussion, we will name these keys autobackup and autobackup.pub and place them in /root/.ssh. These keys should be generated using ssh-keygen(1) and may be created with or without a passphrase. For the pros and cons of these two approaches, keep reading.
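    For example, generating this key pair on the management station (you will be prompted for an optional passphrase) might look like the following; the DSA key type matches the ssh-dss key used later in this chapter.

    # ssh-keygen -t dsa -f /root/.ssh/autobackup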

  2. Create a staging area from which files will be copied.

    Next, if servers to which files are being transferred have differing configuration requirements, it becomes necessary to gather files into a staging area before the transfer. In most cases, workgroup servers and infrastructure servers to which you are copying files will permit login from different sets of users. You may need to write simple scripts to extract a subset of accounts from your master.passwd and group file instead of copying the entire contents.

    If you are copying a master.passwd file from the management station to remote systems, bear in mind the root password on the remote systems will become the same as that of the management station. In most cases, this is not desirable, and the root account should be stripped from master.passwd using a program like sed or grep before transmission.
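    Stripping the root entry before it reaches the staging area is a one-liner; the staging path here is only an illustration.

    # grep -v '^root:' /etc/master.passwd > /path/to/staging/etc/master.passwd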

    Also note that the master.passwd and group files may not be /etc/master.passwd and /etc/group. You may keep syntactically correct organization-wide master files anywhere on your system. In fact, this is preferable since you do not want to grant everyone in the organization access to your management station.


    This staging area may be anywhere on the management station. Simply declare a directory as a staging area, and begin writing scripts to collect configuration files.

  3. Write scripts to gather files.

    Once the staging area has been assigned, you must write the necessary scripts to gather configuration files from the system. In the case of master.passwd, you may need to customize the contents by extracting only a subset of users. A script to create the necessary files might look something like Example 4-9.

    Example 4-9. Script to gather configuration files into a staging area
    #!/bin/sh

    # This is where we keep the maps, our "staging area".
    # This variable is just a template for the various "level" dirs.
    level_dir=/home/users/netcopy/level

    # Make sure our 3 level directories exist and clear them out
    # before continuing with the script.
    for level in 1 2 3; do
      mkdir -p ${level_dir}${level}
      rm -rf ${level_dir}${level}/*
    done

    # Let's make sure /etc and /usr/local/etc exist
    # within the staging area
    for level in 1 2 3; do
      for dir in /etc /usr/local/etc; do
        mkdir -p ${level_dir}${level}${dir}
      done
    done

    # We're going to be writing the contents of master.passwd.
    # Let's make sure the files have the right permissions first.
    for level in 1 2 3; do
      touch ${level_dir}${level}/etc/master.passwd
      chown root:wheel ${level_dir}${level}/etc/master.passwd
      chmod 600 ${level_dir}${level}/etc/master.passwd
    done

    # Here we grab users from the master.passwd, splitting each line
    # on colons; the uid is the third field, the gid the fourth.
    grep -v '^#' /some/master.passwd | sort -t : -k3n |
    while IFS=: read -r name pw uid gid rest; do
      line="${name}:${pw}:${uid}:${gid}:${rest}"
      # If the uid is between 1000 and 4999, it's a level 1 user
      if [ "$uid" -ge 1000 ] && [ "$uid" -lt 5000 ]; then
        echo "$line" >> ${level_dir}1/etc/master.passwd
      fi
      # If the uid is between 5000 and 9999, it's a level 2 user
      if [ "$uid" -ge 5000 ] && [ "$uid" -lt 10000 ]; then
        echo "$line" >> ${level_dir}2/etc/master.passwd
      fi
      # If the group is 101 (dev), it's a level 3 user
      if [ "$gid" -eq 101 ]; then
        echo "$line" >> ${level_dir}3/etc/master.passwd
      fi
    done

    # Copy additional configuration files and build one tarball per level
    for level in 1 2 3; do
      tar -cf - \
        /etc/group \
        /etc/resolv.conf \
        /etc/hosts \
        /etc/aliases \
        /usr/local/etc/myprogram.conf \
        | tar -xf - -C ${level_dir}${level}
        # Additional files may be listed above the previous line
      cd ${level_dir}${level} && tar -czpf config.tgz etc usr/local/etc
      rm -rf etc usr
    done

    Note that this script copies users based on user ID and group ID. In most cases, a subset of accounts is more easily garnered when distinguishable by group as opposed to user ID range. For ease of administration, pick whichever approach works best in your environment and stick with it. Finally, bear in mind that this script must execute as root and will be working with sensitive files. Be very sure the staging directories and files are well protected.

  4. Prepare remote systems.

    After scripts have been written to gather the necessary files for transmission, prepare the remote systems to receive files. Create a designated account to receive the transferred files; in this example, we will call this account netcopy. Then create ~netcopy/.ssh/authorized_keys containing the contents of autobackup.pub from the management station.
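    On each remote system, setting up this account's key amounts to something like the following (the netcopy group is assumed to match the username):

    # mkdir -p ~netcopy/.ssh
    # cat autobackup.pub >> ~netcopy/.ssh/authorized_keys
    # chown -R netcopy:netcopy ~netcopy/.ssh
    # chmod 700 ~netcopy/.ssh && chmod 600 ~netcopy/.ssh/authorized_keys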

    You might be thinking that this is a lot of trouble and it would be easier to merely copy files over as the root user. However, we advise that you disable root logins via ssh in /etc/ssh/sshd_config and log in under your user account. Permitting remote root logins makes accountability much more difficult.


    The remote systems will also need scripts to move files from the staging area into the appropriate place on the system. Given the gathering script in Example 4-9, a trivial tar extraction from the root of the filesystem on the remote system will place all configuration files in the correct places with the correct permissions. This script must also execute as root and should be placed in root's crontab.
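    A minimal sketch of such an extraction script follows; the tarball location assumes the netcopy home directory is /home/netcopy, and the crontab schedule shown in the comment is arbitrary.

    #!/bin/sh
    # Hypothetical script run from root's crontab on each remote system,
    # e.g.:  0 * * * * /usr/local/sbin/install-config.sh
    TARBALL=/home/netcopy/config.tgz
    if [ -f "$TARBALL" ]; then
      # Paths inside the tarball are relative (etc/..., usr/local/etc/...),
      # so extracting from / drops files into place with their permissions
      tar -xzpf "$TARBALL" -C / && rm -f "$TARBALL"
    fi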

4.5.4.2 Pushing files with passphrase authentication

As discussed previously, for increased security the ssh daemon should be configured to accept only key-based authentication, as opposed to password authentication. Because scp uses the same authentication as ssh, requiring keys and passphrases can make automation difficult. However, automation is not always necessary. Even when using NIS, you must issue a make(1) in the /var/yp directory to push the maps to remote systems. To provide the same functionality, you can (this should sound familiar) write a script that accomplishes the push while requiring passphrase entry only once, with the help of ssh-agent(1). Example 4-10 shows how this might be accomplished.

Example 4-10. Script to copy files using an ssh key
#!/bin/sh

level1_dir=/home/users/netcopy/level1
level2_dir=/home/users/netcopy/level2
level1_sys="alpha beta gamma delta"
level2_sys="mercury venus earth mars"

# This runs the ssh-agent which keeps track of ssh keys
# added using ssh-add.  Using eval facilitates placing
# values for SSH_AUTH_SOCK and SSH_AGENT_PID in the
# environment so that ssh-add can communicate with the agent.
eval `ssh-agent`

# This will prompt for a passphrase.  Once entered, you
# are not prompted again.
ssh-add /root/.ssh/autobackup

# Securely transfer the compressed tarballs
for system in $level1_sys; do
  scp ${level1_dir}/config.tgz ${system}:
done
for system in $level2_sys; do
  scp ${level2_dir}/config.tgz ${system}:
done

# Kill the agent we spawned
kill $SSH_AGENT_PID

This script requires a passphrase every time it is executed, so a person must initiate the transfer. Admittedly this script could be replaced by one that acts like a daemon, prompting for authentication once and then copying repeatedly at specified intervals. In this scenario, a passphrase would still be required every time the script is started but this would occur perhaps only at boot time.

4.5.4.3 Pushing files without passphrase authentication

It is possible to generate ssh keys without an associated passphrase. These are logically similar to the key to your house door: if you have it, you can open the door. There is an inherent danger in creating keys that provide a means to log into a system without any additional checks. It is vital that the private key in this case be very well protected, readable only by the account that performs the copies (root on the management station in our example).
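Creating such a key is simply a matter of handing ssh-keygen an empty passphrase, for example:

# ssh-keygen -t dsa -N "" -f /root/.ssh/autobackup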

This risk can be mitigated somewhat with a few options in the netcopy user's ~/.ssh/authorized_keys file. For example, we could configure remote systems to restrict access not only by key, but also by host, as shown in Example 4-11.

Example 4-11. Restricting access by key and host, disabling pty(4)
from="mgmthost.example.com",no-pty,no-port-forwarding ssh-dss base64_key NETCOPY

Before our base-64 encoded ssh key, we provide three options and the ssh-dss key type. The first option specifies that not only does the source host have to provide the private key to match this public key, but it must also come from a host named mgmthost.example.com. Moreover, when connections are made, no pty will be allocated and port forwarding will be disabled.

Despite the security concerns, passphrase-less keys make it possible to automate file distribution. In this way, modifications can be made to files on the master system with the understanding that, given enough time, changes will propagate to all systems to which files are regularly copied. The script required to perform a secure copy is almost identical to that in Example 4-10, but the ssh-agent and ssh-add commands can be removed.
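One detail worth noting: without ssh-agent, scp will not find a key stored under a nondefault name on its own, so the copy lines should name the identity file explicitly and, if you follow the netcopy convention from earlier, the remote user as well. For example:

scp -i /root/.ssh/autobackup ${level1_dir}/config.tgz netcopy@${system}: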

4.5.4.4 An scp alternative

We discussed earlier in this chapter a way to track changes to configuration files using CVS. If you have a CVS repository that contains all configuration files for your systems, you already have a staging area from which you can copy files to target systems. You need only decide which system will push the files, and perform a cvs checkout of your configuration data onto that system. The rest of the procedure will be very similar.

Alternately, you may prefer a pull method instead of a push. With little effort, you could write a script to check the status of configuration files installed on the system via cvs status filename, and check out-of-date files from the repository as necessary. Since cvs can use ssh for its transport and authentication, you are again in a position to automate this procedure by placing the script in cron and using an ssh key that does not require a passphrase. Similarly, organizations with a Kerberos infrastructure might choose to place a service-only keytab on systems used for checking configuration files out of the repository.
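A minimal pull sketch, assuming a checkout already exists in /var/config-checkout (a hypothetical path) and that cvs reaches the repository over ssh with a suitably restricted key, might be no more than the following.

#!/bin/sh
# Refresh the working copy from the repository; -q quiets the output,
# -d picks up newly added directories, -P prunes empty ones
CVS_RSH=ssh; export CVS_RSH
cd /var/config-checkout && cvs -q update -dP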

4.5.4.5 Wrapping up

The scripts that gather files and copy them to the remote systems may easily be combined into one. The file copy will occur based on the successful authentication of the netcopy user. A regular cron(8) job on each remote system should check for the existence of the transferred file and, if it exists, extract the contents into the appropriate directories.

Also, be aware we have glossed over an important mutual exclusion problem in the sample scripts here. If, for some reason, either our scripts that collect configuration files or our scripts that un-tar configuration file blobs run slowly, the next iteration of the script may interfere with this iteration by clobbering or deleting files. Before building a system like this, make sure to include some kind of lockfile (this can be as simple as touching a specially named file in /tmp) to ensure that one iteration does not interfere with another.
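A lightweight way to get that mutual exclusion in plain sh is to rely on mkdir, which fails atomically if the directory already exists; the lock path below is arbitrary.

#!/bin/sh
LOCKDIR=/tmp/configpush.lock
if ! mkdir "$LOCKDIR" 2>/dev/null; then
  echo "previous run still in progress; exiting" >&2
  exit 1
fi
# Remove the lock when the script exits, however it exits
trap 'rmdir "$LOCKDIR"' EXIT
# ... gather and copy files here ...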

Although this approach requires a great deal more initial configuration than NIS (because ypinit performs the setup for you), the vulnerabilities inherent in NIS are mitigated. This paradigm works well for copying user account information and system configuration and may be easily adapted to copy configuration files for other software like djbdns and Postfix.

4.5.5. The Importance of Time (NTP)

The naïve administrator will assume that once he sets the system clock, he need not concern himself with system time. After all, computers are good with numbers, right? Not so. As any experienced administrator knows, system clocks drift. When systems in your network start drifting away from each other, you can run into a variety of problems in areas including, but not limited to:

  • Building a reliable audit trail, which becomes impossible when you cannot reliably determine the ordering of events on different systems

  • Checking things into and out of version control repositories

  • Authenticating Kerberos tickets

  • Working with shared filesystems

  • Operating clustered or high availability configurations

  • Properly servicing DHCP and DDNS requests

  • Creating correct timestamps on emails within and leaving your organization

Fortunately, NTP on FreeBSD and OpenBSD systems is trivial to set up. An NTP daemon is included in the base system of both operating systems (as of release 3.6 in OpenBSD's case), so there is nothing to install. All that remains are security and architecture considerations.

4.5.5.1 Security

Trivial NTP security can be achieved through the use of restrict directives in the NTP configuration file: /etc/ntp.conf on FreeBSD systems and /etc/ntpd.conf on OpenBSD systems. These directives determine how your NTP server will handle incoming requests and are expressed as address, mask, and flag tuples. From a least privilege perspective, you would configure NTP much as you would a firewall: initially restrict all traffic and subsequently describe which hosts should have what kind of access. A base configuration ought to look something like Example 4-12.

Example 4-12. Default NTP restrictions
restrict default ignore
driftfile /etc/ntp.drift

From this point, additional servers may be listed. Example 4-13 is a contrived example that permits unrestricted access from localhost, allows hosts on 192.168.0.0/24 to query the time server, and designates the final two hosts as usable time sources.

Example 4-13. Specific ntp restrictions
restrict 127.0.0.1
restrict 192.168.0.0 mask 255.255.255.0 notrust nomodify nopeer
restrict 10.1.30.14 notrust nomodify noserve
restrict 10.1.30.15 notrust nomodify noserve

If you are unfamiliar with the restrict directive, these configuration lines might look a little odd. Flags to the restrict directive limit access, thus the lack of flags for the localhost entry specifies no restrictions rather than being fully restrictive.
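Note that restrict lines only control access; the hosts you actually synchronize from are still declared with server directives, so a complete configuration pairs the two. Continuing Example 4-13:

server 10.1.30.14
server 10.1.30.15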


This is an adequate solution when providing NTP services to known clients. There are situations where IP restrictions are not enough. In these cases, you may want to consider NTP authentication. Authentication provides a more flexible way of controlling access when:

  • You need to provide time service to a limited number of systems across untrusted networks.

  • You wish to grant certain entities the ability to query or modify your time server, but cannot rely on a static remote IP address.

  • You feel that IP restrictions alone are an inadequate basis for permitting runtime configuration.

NTP authentication is supported using both symmetric key and public key cryptography (the latter via the Autokey protocol). After keys have been generated using the ntp-genkeys(8) utility, the server may be configured to use specific keys with specific hosts, be they symmetric or asymmetric. Bear in mind, sensitive symmetric keys will have to be exchanged securely through some out-of-band mechanism; asymmetric keys contain a public portion that may be exchanged in the clear. Additional details about the configuration of authentication for ntp are beyond the scope of this book but are addressed in the documentation available through http://www.ntp.org.
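As a taste of the symmetric-key variant, the server and each client share a keys file and reference a key ID in their configurations. Everything below (key ID, key string, address) is hypothetical; consult the NTP documentation for the exact key file format supported by your version.

# /etc/ntp.keys -- mode 600, identical on server and client
1 M examplesecret

# in ntp.conf on both sides
keys /etc/ntp.keys
trustedkey 1

# on the client, authenticate the association with this server
server 10.1.30.14 key 1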

4.5.5.2 Architecture

As with any other network service, providing time to your organization requires a little planning. NTP is typically woven into a network in tiers. The first (highest) tier is the authoritative time source for your organization. All NTP servers in this tier are configured as peers and use publicly accessible time servers as their authoritative time sources or, if your requirements dictate, acquire time from local time-keeping devices. The second tier of NTP servers derives time from the first tier and provides time services for clients or subsequent tiers.

Unique security considerations exist for every tier. Top level organizational tiers that communicate with external servers are vulnerable to attack. One of the most effective ways to mitigate the risks associated with this exposure is to limit the external NTP servers that can communicate with your systems through firewall rules. This places implicit trust in your external time sources, which in most cases is acceptable. More stringent security requirements will necessitate local time-keeping devices.

Middle-tier systems, which communicate with both upper- and lower-tier systems but not with clients, should be configured such that only upper-tier systems may be used as time sources and lower-tier systems may query time. All other requests should be denied. More stringent security requirements may dictate that upper-tier and lower-tier encryption keys exist to authenticate communications. Smaller environments generally do not need systems in this tier.

Finally, the lowest tier NTP servers provide time to internal clients. These systems should be configured so that only the immediate upper tier systems may be used as time sources, but anyone on the local network should be able to query time on the system.
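Concretely, a lowest-tier configuration might look something like the following sketch, combining the default-deny stance of Example 4-12 with the patterns of Example 4-13; the addresses are hypothetical, and the flags should be adjusted to the restrict semantics of your ntpd version.

restrict default ignore
driftfile /etc/ntp.drift

# the two middle-tier servers we synchronize from
restrict 10.1.20.14 nomodify
restrict 10.1.20.15 nomodify
server   10.1.20.14
server   10.1.20.15

# the local client network may query time but nothing more
restrict 192.168.0.0 mask 255.255.255.0 notrust nomodify nopeer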

As with the other tiers, high security requirements may require authentication to guarantee time sources are in fact the systems they claim to be.
