5.4 Simple Cluster Configuration Walkthrough

Now that we have discussed basic UNIX networking concepts and services, and briefly described the protocols involved in network communication, it is time to walk through the configuration of a simple cluster. Since we cannot cover the variety of Linux distributions in existence, we have chosen to use Red Hat Linux 9 for our example. If you are using a different Linux distribution, the concepts should be the same, but the exact commands and files may be different. Note that Chapter 6 describes tools that automate many of the following steps; we are describing them here to provide an understanding of the steps involved in setting up a cluster network.

Our example cluster consists of eight nodes. As in our previous examples, to avoid using IP addresses that may belong to an existing domain, we place the nodes of our cluster on a private network with a network address of 192.168.13.0 and a netmask of 255.255.255.0. The gateway address to our router is 192.168.13.254, the domain is phy.myu.edu, and our nodes are named bc1-01 through bc1-08. The cluster configuration is depicted in Figure 5.2.

Figure 5.2: Diagram showing the configuration of our simple example cluster.

When installing Red Hat Linux 9 on each of the eight nodes, we used the standard "Workstation" install with one exception. We included the NIS server package ypserv on the first node. Later, we will run a NIS server on bc1-01 for the purposes of propagating system information like accounts and hostname to IP address mappings. Although the NIS hosts map is used for resolving names local to our cluster, we assume that a DNS server exists at 192.168.1.1 to obtain information about hosts outside of our cluster network. In addition to NIS, we will also run a NFS server on bc1-01 to provide each user access to a common home directory accessible from all of the nodes.

5.4.1 Hostname and gateway address

We begin by setting the hostname and gateway address on each of the machines. These parameters may have been set during the installation of the operating system, in which case we need only verify that they are correct. Both of these parameters are set in '/etc/sysconfig/network'. The contents of this file for the first node of our cluster should be as follows.

     NETWORKING=yes
     HOSTNAME=bc1-01.phy.myu.edu
     GATEWAY=192.168.13.254

Alterations made to this file do not take effect immediately; however, the changes should be realized the next time the system is rebooted. If you had to make changes, it is recommended that you reboot now. This can be accomplished by executing shutdown -r now.

Notice that the long name is used in the HOSTNAME setting. Use of the short name for this setting is discouraged as doing so makes it difficult, if not impossible, for applications and libraries to properly identify the local machine in the global namespace. This can cause some programs to behave incorrectly or fail altogether.
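
After a reboot, a quick check, sketched below, confirms that each node reports its fully qualified name.

     hostname          # should print bc1-01.phy.myu.edu on the first node
     hostname --fqdn   # should print the same fully qualified name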

5.4.2 Network interface configuration

Next, we need to configure the IP settings for the network interface on each of the nodes. The network interface settings can be changed using two different methods. The first is to use a program like netconf; the second is to edit the configuration file directly. We will edit the configuration file, '/etc/sysconfig/network-scripts/ifcfg-eth0', so the exact location of the settings is clear. The contents of the configuration file for the first node of our cluster should be as follows.

     DEVICE=eth0
     ONBOOT=yes
     BOOTPROTO=static
     IPADDR=192.168.13.1
     NETMASK=255.255.255.0
     NETWORK=192.168.13.0
     BROADCAST=192.168.13.255

The settings on the other nodes are largely the same. Only the IPADDR setting needs to be adjusted.
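
For instance, the file on bc1-02 would contain the following; only the IPADDR line differs from node to node.

     DEVICE=eth0
     ONBOOT=yes
     BOOTPROTO=static
     IPADDR=192.168.13.2
     NETMASK=255.255.255.0
     NETWORK=192.168.13.0
     BROADCAST=192.168.13.255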

Alterations to the network interface configuration file should only be made when the interface is disabled, accomplished by running ifdown eth0. Once the changes are complete, ifup eth0 can be run to re-enable the interface with the new settings.

5.4.3 Name resolution

For our cluster, the hostname to IP address mappings are as follows.

     127.0.0.1      localhost.phy.myu.edu  localhost
     192.168.13.1   bc1-01.phy.myu.edu     bc1-01
     192.168.13.2   bc1-02.phy.myu.edu     bc1-02
     192.168.13.3   bc1-03.phy.myu.edu     bc1-03
     192.168.13.4   bc1-04.phy.myu.edu     bc1-04
     192.168.13.5   bc1-05.phy.myu.edu     bc1-05
     192.168.13.6   bc1-06.phy.myu.edu     bc1-06
     192.168.13.7   bc1-07.phy.myu.edu     bc1-07
     192.168.13.8   bc1-08.phy.myu.edu     bc1-08
     192.168.13.254 bc1-gw.phy.myu.edu     bc1-gw

To avoid a substantial amount of repetitive typing, the complete set of mappings need only be entered into the '/etc/hosts' file on bc1-01. Later, in Section 5.4.7, we will configure NIS to provide this information to the other seven nodes. The '/etc/hosts' file on the remaining nodes should consist only of the following entry.

     127.0.0.1      localhost.phy.myu.edu localhost 

In addition to the hosts file, we need to configure the service that resolves names (the resolver for short) on each node of the cluster. The configuration file, '/etc/resolv.conf', must contain the following.

     nameserver 192.168.1.1
     search phy.myu.edu

The resolver configuration file contains two important pieces of information. The first is the IP address of the DNS server used to resolve names not found in the hosts file or the NIS hosts map; the second is the search list for hostname lookup. If a short or incomplete hostname is supplied, entries in the search list are individually appended to the hostname. For example, if the system were attempting to resolve the hostname foo, it would append phy.myu.edu and then perform a DNS query for foo.phy.myu.edu.
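
A couple of quick checks, sketched below, show the resolver at work; the host command assumes the bind-utils package is installed.

     getent hosts bc1-02   # consults /etc/hosts (and later the NIS hosts map) before DNS
     host foo              # queries the DNS server at 192.168.1.1 for foo.phy.myu.edu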

5.4.4 Accounts

At this time, we need to create accounts for the users of our cluster. It is recommended that each user have his own account, including the system administrator(s). While the administrator already has access to the root account, that account should only be used to perform administrative tasks. Use of the root account for non-administrative tasks is frowned upon because that account is not subject to the usual permission checks, allowing for unintentional damage to the operating system. For more details on account management, see Section 13.6.

Users may be added to the system with the adduser program. Running adduser <username> creates an entry for the user in the account information and shadow password files, '/etc/passwd' and '/etc/shadow' respectively. The adduser program also adds a group for the user in '/etc/group' and creates a home directory for that user in '/home/<username>'. Usage information about the adduser program can be obtained by running man adduser.

The creation of user accounts and home directories across all of the nodes in the cluster could be handled by running adduser on each node for each user. However, this repetition is tedious and requires care so that the user and group identifiers are consistent across all nodes. Alternatively, we could create a script which uses scp to replicate the appropriate system files and ssh to create the necessary home directories on each node. Instead, since we are already using NIS to provide the hosts map, we will configure NIS to also provide account and group information to the other seven nodes. Additionally, we will use NFS to make the '/home' directory on bc1-01 accessible to the remaining nodes. NIS will be configured in Section 5.4.7 and NFS in Section 5.4.8.

By default, adduser creates the account with an invalid password entry, effectively disabling the account. To enable the account, run passwd <username> to set an initial password for the account. Usage information about the passwd program can be obtained by running man passwd.
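
For example, the following sketch, run on bc1-01, creates a few hypothetical accounts and assigns each a temporary password; the user names and the passwd --stdin option (a Red Hat extension) are used here purely for illustration.

     # Create accounts on bc1-01 only; NIS will publish them to the other nodes.
     for user in alice bob carol; do
         adduser $user
         echo "changeme" | passwd --stdin $user   # temporary password; the user should change it
     done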

Unlike normal user accounts, NIS does not publish account information for the root user, and NFS is not configured to export the root user's home directory, '/root'. Doing either is considered a security risk as it may allow a malicious user to obtain privileged information and compromise one or more nodes of the cluster. Instead the root user has a separate entry in '/etc/passwd' and '/etc/shadow' and a separate home directory on each cluster node. While these restrictions affect the ease with which the root user can change its password or share files between machines, the security of the cluster as a whole is improved.

5.4.5 Packet filtering

As a security measure, the Linux kernel has the ability to filter IP packets. Among other things, packet filtering allows the system administrator to control access to services running on a machine. By default, Red Hat Linux 9 uses packet filtering to block remote access to most services including SSH, NFS and NIS. This default configuration presents a problem for a cluster environment where remote execution, file sharing and collective system administration are critical.

To allow SSH, NIS and NFS to function, we must add a few new packet filtering rules on each node of our cluster. For Red Hat Linux 9, packet filtering rules are specified in the file '/etc/sysconfig/iptables'. Into this file, we insert the following rules before the first line that starts with -A INPUT.

     -A INPUT -p tcp -m tcp --dport 22 --syn -j ACCEPT
     -A INPUT -p tcp -s 192.168.13.0/24 -j ACCEPT
     -A INPUT -p udp -s 192.168.13.0/24 -j ACCEPT

Once those changes have been made, the following command must be executed so the changes will take effect.

     /etc/rc.d/init.d/iptables restart 

The first rule we added tells the packet filter to allow new TCP connection requests made to port 22, the port monitored by sshd. With this rule in place, ssh and scp can be used to access the nodes in our cluster from any other machine on the network, including those not part of the cluster. If we were using routable addresses and our network were Internet accessible, any machine on the Internet could attempt to access our cluster nodes. This accessibility might appear to be a security concern, but the connecting entity must know the name of an existing account and the associated password to obtain access, both of which SSH encrypts before transmitting them over the network.

The second and third rules tell the packet filter to accept any packets from any machines on the cluster network. These rules allow NFS and NIS to function between nodes in the cluster. The rules may seem unnecessarily liberal because they allow all UDP and TCP packets to pass. However, the NIS services and the network status monitoring service used by NFS are dynamically assigned ports by the portmap service. Because these port values are not known in advance, they cannot be explicitly specified in our packet filtering rules. In addition, we don't want to prevent applications running across nodes of the cluster from being able to communicate with each other. Therefore, we allow packets to freely flow between machines on the cluster network while still blocking potentially security threatening traffic from the outside.
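
To confirm that the new rules are in place, the active filter rules can be listed; this is only a quick sanity check, not a full test of the firewall.

     iptables -L INPUT -n   # lists the INPUT chain rules with numeric addresses and ports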

More information about Linux firewalls and iptables can be found in Section 5.6.2.

5.4.6 Secure shell

The OpenSSH package is installed automatically with Red Hat Linux 9, which means the SSH remote access clients like ssh and scp are available to users immediately. The SSH service sshd is also available and started by default. Once the packet filtering rules discussed in Section 5.4.5 have been applied, the root user should be able to remotely access any of the nodes in the cluster. This ability can be tremendously useful when one needs to replicate configuration files across several nodes of the cluster or to restart a service without being at the console of the specific node.

Initially, non-root users will only be able to remotely access bc1-01. This restriction is lifted once NIS and NFS have been configured and enabled, thus providing account information and home directories to the other nodes in the cluster. The configuration of NIS and NFS is discussed in Sections 5.4.7 and 5.4.8 respectively.

The first time the sshd service is started, authentication keys for the host are generated. The keys for the remote host are used during the establishment of a SSH session, allowing the client (e.g., ssh) to validate the identity of the remote host. However, the validation can occur only if the client knows the public key of the remote host to which it is trying to connect. When the public key of the remote host is unknown, the user is notified that the authenticity of the remote host could not be verified. The connection process then continues only if the user explicitly authorizes it. If the user agrees to continue establishing the connection, the client stores the name of the remote host and its public key in '~/.ssh/known_hosts' on the local machine. The stored public key is used during the establishment of future connections to validate the authenticity of the remote host.

To prevent the user from being questioned about host authenticity, the system administrator can establish a system-wide list of hosts and their associated public keys. This list is placed in the '/etc/ssh/ssh_known_hosts' file on each of the nodes and any other machines that are likely to remotely access the nodes. This approach has one other advantage. If a node is rebuilt and new authentication keys are generated, then the system administrator can update the 'ssh_known_hosts' files. Such updates can prevent the user from receiving errors about host identification changes and potential man-in-the-middle attacks.

The contents of the 'ssh_known_hosts' file can be generated automatically using ssh-keyscan. To use ssh-keyscan, we must first create a file containing a list of our cluster nodes. Each line of this file, which we will call 'hosts', should contain the primary IP address of a node followed by all of the names and addresses associated with that node.

     192.168.13.1 bc1-01.phy.myu.edu,bc1-01,192.168.13.1
     192.168.13.2 bc1-02.phy.myu.edu,bc1-02,192.168.13.2
     192.168.13.3 bc1-03.phy.myu.edu,bc1-03,192.168.13.3
     192.168.13.4 bc1-04.phy.myu.edu,bc1-04,192.168.13.4
     192.168.13.5 bc1-05.phy.myu.edu,bc1-05,192.168.13.5
     192.168.13.6 bc1-06.phy.myu.edu,bc1-06,192.168.13.6
     192.168.13.7 bc1-07.phy.myu.edu,bc1-07,192.168.13.7
     192.168.13.8 bc1-08.phy.myu.edu,bc1-08,192.168.13.8

Once the 'hosts' file has been created, the following command will obtain the public keys from each of the nodes and generate the 'ssh_known_hosts' file.

     ssh-keyscan -t rsa,dsa,rsa1 -f hosts >/etc/ssh/ssh_known_hosts

The 'ssh_known_hosts' file needs to exist on each node of the cluster. While the above command could be executed on each of the nodes, regenerating the contents each time, it is also possible to use scp to copy the file to each of the remaining nodes.
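
For example, a small loop such as the following, run as root on bc1-01, pushes the file to the remaining nodes; it assumes root can already reach each node over SSH with a password.

     for n in bc1-02 bc1-03 bc1-04 bc1-05 bc1-06 bc1-07 bc1-08; do
         scp /etc/ssh/ssh_known_hosts $n:/etc/ssh/ssh_known_hosts
     done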

At this point, the client tools are able to validate the identity of the remote host. However, the remote host must still authenticate the user before allowing the client access to the remote system. By default, users are prompted for their passwords as a means of authentication. To prevent this from happening when access is from one cluster node to another, host based authentication can be utilized. Host based authentication, as discussed in Section 5.3.5, allows a trusted client host to vouch for the user. The remote host uses the public key of the client host, found in the file we just generated, to verify the identity of the client host. Enabling host based authentication requires a few configuration changes on each of the nodes.

The sshd service must be configured to allow host based authentication by changing the following parameters in '/etc/ssh/sshd_config'.

     HostbasedAuthentication yes
     IgnoreUserKnownHosts yes
     IgnoreRhosts no

The first parameter enables host based authentication. The second parameter disables the use of the user maintained known host file, '~/.ssh/known_hosts', when host based authentication is performed. This change allows the system administrator to maintain strict control over which hosts can be authenticated and thus authorized. The third parameter allows the user maintained authorization file, '~/.shosts', to be used when determining whether or not a remote host is authorized to access the system using host based authentication. This is largely provided for the root user for whom the system maintained file, '/etc/ssh/shosts.equiv', is not used. While utilizing user authorization files could be considered a security risk, the change to the IgnoreUserKnownHosts parameter prevents the user from authorizing access to any hosts not listed in the system controlled '/etc/ssh/ssh_known_hosts' file. But, if host based authentication is not desired for the root user, then the third parameter should be left at its default value of "yes".

Once the sshd configuration file has been updated to enable host based authentication, the system authorization file, '/etc/ssh/shosts.equiv', must be created. That file simply consists of the hostnames of machines trusted by the local host. For our cluster, the file should contain the following.

     bc1-01.phy.myu.edu
     bc1-02.phy.myu.edu
     bc1-03.phy.myu.edu
     bc1-04.phy.myu.edu
     bc1-05.phy.myu.edu
     bc1-06.phy.myu.edu
     bc1-07.phy.myu.edu
     bc1-08.phy.myu.edu

If host based authentication is to be used to allow a client to vouch for the root user, this same list of hostnames must also be placed in the root user's authorization file, '/root/.shosts'. Again, scp can be employed to push copies of these files to each of the nodes.

Now that the sshd service has been configured and the list of authorized hosts properly established, the sshd service must be restarted. This is accomplished using the following command.

     /etc/rc.d/init.d/sshd restart 

In addition to changing the service configuration file, a small change must be made to the client configuration file, '/etc/ssh/ssh_config'. The following line should be added just after the line containing "Host *".

     HostbasedAuthentication yes 

This option tells the client tools that they should attempt to use host based authentication when connecting to a remote host. By default, they do not.
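
Once both configuration files have been updated and sshd restarted on every node, host based authentication can be exercised with a simple remote command run as root (non-root users cannot yet log into the other nodes, as noted earlier). No password or passphrase prompt should appear; this is a sketch of a quick check, not a full test.

     ssh bc1-02 hostname   # should print bc1-02.phy.myu.edu without prompting for a password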

Users, including the root user, also have the ability to create authentication keys which can be used in place of passwords. Such keys are generated with the command ssh-keygen -t rsa. By default, the public and private keys are placed in '~/.ssh/id_rsa.pub' and '~/.ssh/id_rsa' respectively. The contents of the public key can be added to '~/.ssh/authorized_keys' on any machine, allowing remote access to that machine using the authentication keys.

The ssh-keygen command will allow keys to be generated without a passphrase to protect the private key. Users often generate unprotected keys simply to avoid having to reenter the passphrase with each remote operation. However, this practice is not recommended as it substantially weakens the security of any machine allowing public key authentication. Instead of using unprotected keys, a SSH agent can be established to manage the private key(s) of the user for the duration of a session. The passphrase need only be typed once when the private key is registered with the agent. Thereafter, remote operations can proceed without the continual reentry of the passphrase; but, the private key is still protected should a malicious user obtain access to the file containing it.

From the shell, the agent is often used in the following manner.

     [root@bc1-01 root]# ssh-agent $SHELL
     [root@bc1-01 root]# ssh-add
     Enter passphrase for /root/.ssh/id_rsa:
     Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
     [root@bc1-01 root]# <various SSH client commands>
     [root@bc1-01 root]# exit

The first command starts the agent and then begins a new shell. The second command adds the root user's private key to the set of keys managed by the agent. In this case, only one key is managed by the agent, but more could be added through subsequent invocations of ssh-add. After the agent has been started and the private key has been registered, the root user may execute various client commands attempting to access one or more remote machines. If the user's '~/.ssh/authorized_keys' file on the remote host contains root's public key, the client command will proceed without requesting a password or passphrase. The final command, exit, causes the shell and thus the agent to terminate.

A general discussion of SSH usage, configuration and protocols can be found in [11], although the details involving OpenSSH are somewhat out of date. Information specific to OpenSSH commands and configuration can be found in the manual pages installed with Red Hat Linux 9 and on the OpenSSH website, www.openssh.org. Links to IETF draft documents describing the SSH protocols can also be found on the OpenSSH website.

5.4.7 Network Information Service

Now that the IP packet filter has been configured in a way that allows our services to function, and we have introduced the finer points of SSH, we will proceed with configuring NIS. On each of the nodes, the following line must be added to '/etc/sysconfig/network'.

     NISDOMAIN=bc1.phy.myu.edu 

This line tells the NIS services the name of the NIS domain to which our nodes belong. The NIS domain name can be different than the Internet domain in which the nodes reside (phy.myu.edu). The NIS domain name should identify the group of machines the domain is servicing. In our case, this NIS domain is used only by our first Beowulf cluster. Therefore, we use the domain name bc1.phy.myu.edu to avoid conflicts with other NIS domains that might exist on our local network.
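
The NIS init scripts read this setting when they start. To inspect or set the NIS domain name on a running system without rebooting, the domainname command can be used, as sketched here.

     domainname bc1.phy.myu.edu   # sets the NIS domain for the running system
     domainname                   # with no argument, prints the current NIS domain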

Once the NIS domain has been set on each of the nodes, we must prepare bc1-01 to run the NIS server. Before enabling the server to export information, we must secure the server so that only hosts in our cluster can obtain information from it. Entries in the '/var/yp/securenets' file accomplish this. For our cluster, this file (on bc1-01) should contain the following entries.

     host            127.0.0.1
     255.255.255.0   192.168.13.0

Now we are ready to configure and run the NIS server. To begin, we edit the '/var/yp/Makefile' file on bc1-01. We need to comment out the existing line that begins with "all:" and add the following line before it.

     all:  passwd group hosts networks services protocols rpc 

This line lists the information sources that we desire NIS to export to the client nodes. NIS maintains a set of databases, known as maps, separate from the source files. To build the maps, the following commands must be executed on bc1-01.

     echo "loopback 127" >>/etc/networks     /etc/rc.d/init.d/ypserv start     /etc/rc.d/init.d/ypxfrd start     /etc/rc.d/init.d/yppasswdd start     cd /var/yp     make 

Since '/etc/networks' does not exist on Red Hat Linux 9 installations, the first command creates the file, adding the loopback network as an entry. The next three commands start the services needed by a NIS server. And the last two commands build the actual maps.

The previous commands started the necessary NIS services; however, they did not configure the system so the services would be automatically started at boot time. To accomplish this, we must adjust the runlevel associated with the services. The following commands tell the system to automatically start the services when booting the system. Remember, these commands should only be executed on bc1-01, the system running the NIS server.

     chkconfig --level 345 ypserv on
     chkconfig --level 345 ypxfrd on
     chkconfig --level 345 yppasswdd on

Now that we have a running NIS server, it is time to configure the clients. The NIS client service ypbind will be run on all of the nodes in our cluster, including bc1-01. The following commands start the client service and configure the operating system so the service is started automatically when the system is booted.

     /etc/rc.d/init.d/ypbind start
     chkconfig --add ypbind
     chkconfig --level 345 ypbind on

To make the operating system use NIS when looking up information, we must update the name service switch configuration file, '/etc/nsswitch.conf', on each of the nodes. The entries that follow should be modified accordingly.

     passwd:        files nis
     group:         files nis
     hosts:         files nis dns
     networks:      files nis
     services:      files nis
     protocols:     files nis
     rpc:           files nis

When the source files on the server are modified, the NIS maps are not automatically updated. Therefore, any time a new account or group is added or the hosts file is updated, the maps need to be rebuilt. To rebuild the maps, the following commands must be run on bc1-01.

     cd /var/yp
     make

Once the maps are rebuilt, any updates are available to all nodes in the cluster.
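
A few quick checks from any client node, sketched below, confirm that the client is bound to the server and can read the maps.

     ypwhich                # should print bc1-01.phy.myu.edu
     ypcat hosts            # should list the hostname mappings entered on bc1-01
     ypmatch bc1-05 hosts   # looks up a single entry in the hosts map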

The exception to the maps not being immediately updated is the changing of a user's password. If the password is changed using the yppasswd program, the yppasswdd service immediately updates both the NIS maps and account files on bc1-01, making the updated password immediately available to all nodes in the cluster. yppasswd may be run from any node that is part of the NIS domain.

A small problem exists with regards to the NIS client and the sshd service. If sshd is started before ypbind, as it was in our example, then sshd will not use NIS services to obtain account information. Therefore users will not be able to remotely access bc1-02 through bc1-08 until sshd is restarted. The service may be restarted by executing

     /etc/rc.d/init.d/sshd restart 

on each of those nodes. A similar problem will occur if bc1-02 through bc1-08 are rebooted and bc1-01 is not online or is not running the NIS server. The ypbind service will fail causing sshd not to use NIS even if ypbind is started later. So, as a general rule, if ypbind is manually started, sshd should also be restarted.

5.4.8 Network File System

Now that the NIS server and clients are running, the next task is to configure the NFS server and clients, thus allowing users access to their home directories from any of the cluster nodes.

We will begin with configuring the server on bc1-01. To export the user home directories, the following line must be added to the file '/etc/exports'.

     /home 192.168.13.0/24(rw) 

This line tells the NFS server that any machine on our cluster network may access the home file system. Once the file has been modified, the NFS service must be enabled and started using the following commands.

     chkconfig --level 345 nfs on
     /etc/rc.d/init.d/nfs start
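
To verify that the directory is actually being exported, the export list can be examined on bc1-01; this is a quick check only.

     exportfs -v                 # lists the file systems currently exported and their options
     showmount -e 192.168.13.1   # asks the NFS server for its export list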

Next we need to configure the other seven nodes to mount the '/home' directory on bc1-01. Mounting is the UNIX term for attaching a file system space, whether it be local or remote, into the local directory structure. To express that we wish to have '/home' on bc1-01 be mounted as '/home' in the directory structure present on our remaining cluster nodes, we must add the following line to '/etc/fstab' on all nodes except bc1-01.

     192.168.13.1:/home /home nfs rw,hard,intr,bg,rsize=8192,wsize=8192 0 0 

Then we execute the following command on each of those nodes to cause the remote file system to be mounted.

     /etc/rc.d/init.d/netfs restart 
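
On each client node, the mount can then be verified with standard tools, for example:

     mount | grep home   # should show 192.168.13.1:/home mounted on /home
     df -h /home         # reports the size and free space of the NFS file system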

You might have noticed that we use the IP address for bc1-01 instead of its hostname when we added the entry to '/etc/fstab'. The reason is that netfs is started before ypbind when the operating system is booting. If we were to use bc1-01 in place of the address, hostname resolution would fail, causing the mount to fail.

The options for mounting a file system exported by NFS are numerous. The manual pages, obtained by executing man fstab and man nfs, provide an explanation of the '/etc/fstab' structure and the available options when mounting file systems via NFS. Additional information can also be found in [109].

5.4.9 Scripting it

For small clusters, installing the operating system on each node and performing the previously mentioned configuration adjustments might not seem so bad. However, for a larger cluster, the task can be annoyingly repetitive and prone to error. Fortunately, several solutions exist.

The Kickstart system, part of the Red Hat Linux 9 distribution, is one such solution. When Red Hat Linux 9 is installed, a Kickstart configuration file is automatically generated during the installation process and stored as '/root/anaconda-ks.cfg'. Starting with the file created for bc1-01, we can create a 'ks.cfg' file for the other nodes of the cluster. Below is an example configuration file for bc1-02.

     install
     lang en_US.UTF-8
     langsupport --default en_US.UTF-8 en_US.UTF-8
     keyboard us
     mouse generic3ps/2 --device psaux
     skipx
     network --device eth0 --bootproto static --ip 192.168.13.2
             --netmask 255.255.255.0 --gateway 192.168.13.254
             --nameserver 192.168.1.1 --hostname bc1-02.phy.myu.edu
     rootpw --iscrypted $1$i0.gt4GF$75mVC3kgB2keUwJVgTZo8.
     firewall --medium
     authconfig --enableshadow --enablemd5
     timezone --utc America/Chicago
     bootloader --location=mbr
     clearpart --all --drives=sda
     part /boot --fstype ext3 --size=100 --ondisk=sda
     part / --fstype ext3 --size=1100 --grow --ondisk=sda
     part swap --size=96 --grow --maxsize=192 --ondisk=sda
     %packages
     @ Administration Tools
     @ Development Tools
     @ Dialup Networking Support
     @ Editors
     @ Emacs
     @ Engineering and Scientific
     @ GNOME Desktop Environment
     @ GNOME Software Development
     @ Games and Entertainment
     @ Graphical Internet
     @ Graphics
     @ Office/Productivity
     @ Printing Support
     @ Sound and Video
     @ Text-based Internet
     @ X Software Development
     @ X Window System
     %post

Note: The lines containing the network option were broken into three separate lines for printing purposes. The Kickstart system requires that these three lines exist as a single line in the actual 'ks.cfg' file.

A few changes have been made to the original 'anaconda-ks.cfg' file in creating a 'ks.cfg' for bc1-02. First, the hostname and IP address, part of the network option, have been updated. Second, the disk partitioning options clearpart and part have been uncommented, informing Kickstart to clear and rewrite the disk partition table with an appropriate set of partitions for bc1-02. Finally, the ypserv package was removed from the packages list, as bc1-01 is the only node that needs to run the NIS server.

Now that we have created a 'ks.cfg' file, we need to place that file on a floppy diskette. Insert a floppy diskette, preferably a blank one, into the floppy drive and execute the following commands.

     mformat a:
     mcopy ks.cfg a:

The mformat command will destroy any existing files on the diskette, so do not insert a diskette containing files you wish to keep.

Once the 'ks.cfg' file is on the diskette, you should boot the new node with CD-ROM #1 from the Red Hat Linux 9 distribution. When the Linux boot prompt appears, insert the Kickstart floppy and type the following.

     linux ks=floppy 

If the Red Hat Linux 9 installation system has difficulty detecting the graphics chipset or monitor type for your machine, the following may have to be typed instead.

     linux ks=floppy text 

Since we are installing from CD-ROM, the Red Hat installer will prompt you to change CD-ROMs as necessary. When the process completes, the operating system has been installed on the new cluster node. However, the adjustments we made throughout this section must still be made. But, the process of answering several pre-installation questions, partitioning the disk, and selecting packages has now been eliminated.

Kickstart has a variety of options, many of which we did not use in our example 'ks.cfg'. These options can be used to directly adjust some of the settings described earlier in this section. In addition, Kickstart has the ability to run a post installation script. People with knowledge of one or more UNIX scripting environments should be able to create a post install script to automatically perform the configuration adjustments we made throughout this section. The full set of Kickstart options are described in the Kickstart Installations chapter of [93].
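
As a rough illustration of the idea, and only a sketch that would need to be tailored to your site, a %post section might repeat a few of the adjustments made earlier in this section, for example appending the NIS domain, adding the NFS mount for '/home', and enabling ypbind.

     %post
     # Runs in a chroot of the freshly installed system after the packages are installed.
     echo "NISDOMAIN=bc1.phy.myu.edu" >> /etc/sysconfig/network
     echo "192.168.13.1:/home /home nfs rw,hard,intr,bg,rsize=8192,wsize=8192 0 0" >> /etc/fstab
     chkconfig --add ypbind
     chkconfig --level 345 ypbind on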

In [85], the authors describe a set of tools for rapidly building (or rebuilding) a cluster. These tools consist of a set of Kickstart configuration files and post-processing scripts. Although their post-processing scripts are run separately from Kickstart, their toolkit is an excellent example of a simple yet effective means of automating operating system installation and network configuration in a cluster environment. More sophisticated approaches are described in Chapter 6, including NPACI Rocks, which takes advantage of Kickstart to set up a cluster.



