This section provides information on how to configure nodes to run a secure shell (SSH) environment so that the ssh and scp commands can replace rsh and rcp for CSM and GPFS.
This section is not intended to show how to configure SSH in the most secure way possible, but rather how to configure it so that CSM and GPFS can use it.
Tests comparing the ssh and rsh commands have shown that the secure shell carries a measurable performance cost, and the same holds when comparing scp with rcp in a cluster. So, before installing SSH, make sure your environment actually requires the secure shell service.
OpenSSH is the open-source implementation of the SSH protocol versions 1 and 2, which are standardized by the Internet Engineering Task Force (IETF). More information about the IETF can be found at the following Web site:
OpenSSH is provided under Berkeley Software Distribution (BSD) license. For more information about the BSD license, refer to the following Web site:
The RPM package includes the ssh, scp, and sftp clients, as well as the sshd server and several supporting utilities, such as ssh-keygen, ssh-add, ssh-agent, and sftp-server.
Apart from keeping all communication encrypted during network transmission, SSH provides stronger authentication methods and can raise the security level of transactions that require non-password logins.
The SSH protocol Version 2 uses two different kinds of authentication: public key and password authentication. Because CSM and GPFS need all nodes to communicate to each other using non-password logins, the first method will be used, and this section provides you with information on how to prepare SSH for this situation.
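Public key authentication is normally enabled by default in OpenSSH, but it is worth confirming that the server side permits it before relying on it. A minimal excerpt of /etc/ssh/sshd_config (the directive names are OpenSSH's own; the values shown are the usual defaults):

```
# /etc/ssh/sshd_config (excerpt)
PubkeyAuthentication yes       # allow public key authentication
PasswordAuthentication yes     # keep password logins as a fallback
```

If you change sshd_config, restart the sshd service for the change to take effect.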
To ease key and host name management, we suggest that you include all nodes in the /etc/hosts file, as in Example B-4.
Example B-4: /etc/hosts file
127.0.0.1   localhost.localdomain   localhost
10.0.0.1    masternode.cluster.com  masternode
10.0.0.2    node001.cluster.com     node001
10.0.0.3    node002.cluster.com     node002
Ensure that no host name other than localhost will be assigned to the 127.0.0.1 address.
There are many ways to permit non-password login from one host to other hosts. In our case, we generate a key pair for the root user on each node and exchange the public keys.
The first step is to generate the keys themselves. This procedure produces two files on each node: id_rsa, which holds the private key, and id_rsa.pub, which holds the public key.
Example B-5 shows the creation of a key pair with a null passphrase, achieved with the -N "" option. It is very important that the passphrase for the key is null so that no passphrase prompt blocks the connection from being established.
Example B-5: Key generation
# ssh-keygen -t rsa -N ""
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a6:c4:e5:02:7b:a0:3f:4b:58:e2:52:4b:05:ec:34:b5 root@masternode
When you try to log on to a host using public key authentication, a basic public key signing procedure is used to authenticate the user and grant access to the system. The SSH client signs a session identifier and sends the result to the server. The server will verify whether the signature is correct using the user's pre-stored public key in the $HOME/.ssh/authorized_keys file. If the signature is correct, access is granted. If not, or if the public key was not found in the authorized_keys file, the server will try the next authentication method.
To create the authorized_keys file, you must copy the public keys from the root user on all nodes to the authorized_keys file on a single node (the local node in Example B-6). It is very important that all keys are copied into the file, even the public key of the host where the file is being created.
Example B-6: Creating the authorized_keys file
# cd /root/.ssh
# cat id_rsa.pub >> authorized_keys
# ssh node001 cat .ssh/id_rsa.pub >> authorized_keys
root@node001's password:
# ssh node002 cat .ssh/id_rsa.pub >> authorized_keys
root@node002's password:
Copying the authorized_keys file generated in the last step to the other hosts will allow the root user on every node to log on to every other node without having to provide a password (see Example B-7).
Example B-7: Distributing the authorized_keys file
# cd /root/.ssh
# scp authorized_keys node001:.ssh/
root@node001's password:
authorized_keys      100% |*****************************|   711     00:00
# scp authorized_keys node002:.ssh/
root@node002's password:
authorized_keys      100% |*****************************|   711     00:00
After the file has been copied, any valid login attempt succeeds without a password prompt.
SSH has methods to ensure that the host you are connecting to really is the node you intend to reach. This may seem unnecessary, but it is very useful in preventing man-in-the-middle attacks.
In a man-in-the-middle attack, the attacker replaces the server, changes the network routing, or places a fake server between the client and the real server, so that clients trying to connect to the server actually connect to the impostor, which pretends to be the real one.
When you install the SSH server on a node, you must create the host key pair for the server; clients store the server's public key in their known_hosts file. When an SSH client connects to that server again, it checks whether the key the server proves possession of matches the public key stored in the known_hosts file. If it matches, the host key has not changed and the server you are connecting to really is the server you intended to reach. If the SSH client detects that the key has changed, it issues a warning to the user, as in Example B-8. Depending on the SSH configuration, the user may or may not be allowed to confirm the change and connect to the server anyway. If the client does not permit overriding the alarm, the user must edit the known_hosts file and change (or delete) the offending key manually.
Example B-8: Server verification
# ssh node001
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
c9:7e:7b:b6:fc:54:b7:92:98:df:73:4f:6c:fc:39:60.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending key in /root/.ssh/known_hosts:3
RSA host key for node001 has changed and you have requested strict checking.
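When a warning such as the one in Example B-8 appears, you can compare fingerprints by hand with ssh-keygen -l. The sketch below uses an illustrative key pair under /tmp to show how a public key's fingerprint is displayed; on a real node you would point the command at the server's host key under /etc/ssh instead.

```shell
# Generate an illustrative RSA key pair (stands in for a real host key).
mkdir -p /tmp/hostkey-demo
ssh-keygen -t rsa -N "" -f /tmp/hostkey-demo/ssh_host_rsa_key -q

# Print the fingerprint of the public half; this is the value to compare
# against the fingerprint reported in the warning message.
ssh-keygen -l -f /tmp/hostkey-demo/ssh_host_rsa_key.pub
```

If the fingerprint printed on the server does not match the one the warning reported, the connection should not be trusted.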
When an SSH connection is made to a host whose key is not in the known_hosts file, the client asks the user to confirm the server key and identification, as shown in Example B-9.
Example B-9: Confirm server key and identification
# ssh node001
The authenticity of host 'node001 (10.0.0.2)' can't be established.
RSA key fingerprint is 1d:a9:d8:9b:f7:9d:fa:41:c9:ce:32:9b:14:00:b2:e3.
Are you sure you want to continue connecting (yes/no)?
This situation requires user interaction (typing yes to confirm the server key) before the logon can continue. If that happens during CSM or GPFS installation, the installation process will fail.
It is very important that you ensure that all nodes know each other. This can be done by using the master node, for example, to connect to all nodes (including itself) and then copying the master node's known_hosts file to all other nodes.
Example B-10 shows the process of creating the known_hosts file and replicating it to the other nodes.
Example B-10: Creating and replicating known_hosts file
# ssh node001
The authenticity of host 'node001 (10.0.0.2)' can't be established.
RSA key fingerprint is 1d:a9:d8:9b:f7:9d:fa:41:c9:ce:32:9b:14:00:b2:e3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node001,10.0.0.2' (RSA) to the list of known hosts.
Last login: Thu Oct 31 16:20:00 2002 from masternode
[root@node001 /root]# exit
logout
Connection to node001 closed.
# ssh node002
The authenticity of host 'node002 (10.0.0.3)' can't be established.
RSA key fingerprint is bb:ea:56:05:5d:e4:66:08:bb:66:70:10:6d:9d:0f:4b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node002,10.0.0.3' (RSA) to the list of known hosts.
Last login: Thu Oct 31 16:20:00 2002 from masternode
[root@node002 /root]# exit
logout
Connection to node002 closed.
# scp .ssh/known_hosts node001:.ssh/
known_hosts          100% |*****************************|   907     00:00
# scp .ssh/known_hosts node002:.ssh/
known_hosts          100% |*****************************|   907     00:00
We must also remember that SSH ties the key to the host name used in the connection. So if you already have the key for host node001 and you try to connect to the same host using its Fully Qualified Domain Name (FQDN), SSH will ask you again to confirm the key for the new name. If your cluster may use both names (host name and FQDN), be sure to populate the known_hosts file with both names for all hosts.
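A single known_hosts entry can cover the short name, the FQDN, and even the IP address, because the first field of each line accepts a comma-separated list of names. The sketch below builds such an entry with an illustrative key under /tmp and uses ssh-keygen -F, which looks a host up in a known_hosts file, to confirm that both names match the same entry. In a real cluster, the ssh-keyscan command can collect the servers' actual host keys in this format.

```shell
# Illustrative working directory and key; a real entry would hold the
# server's actual host key. Names match the /etc/hosts in Example B-4.
mkdir -p /tmp/kh-demo && cd /tmp/kh-demo
ssh-keygen -t rsa -N "" -f hostkey -q

# One line, several names: short name, FQDN, and IP address.
printf 'node001,node001.cluster.com,10.0.0.2 %s\n' "$(cat hostkey.pub)" > known_hosts

# Both lookups find the same entry (ssh-keygen -F exits 0 on a match):
ssh-keygen -F node001 -f known_hosts
ssh-keygen -F node001.cluster.com -f known_hosts
```

Distributing a known_hosts file built this way avoids a second confirmation prompt when a node is later addressed by its FQDN.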
If you want to avoid the user interaction on the first SSH call, you can set the following parameter in the /etc/ssh/ssh_config file:

StrictHostKeyChecking no

With this setting, no user interaction is needed and the known_hosts file is updated automatically. Warning messages will be recorded in the /var/log/messages log file.
To avoid problems during CSM and GPFS installation using the ssh and scp commands, we suggest that you run a few tests to ensure that your SSH configuration is adequate for these systems to run.
The main requirement CSM and GPFS place on remote copy and remote shell connectivity is that one node can execute commands on, and transfer files to, the other nodes without providing any password. One way to test this is to verify that no password is requested when you run ssh on every node connecting to every other node, including itself.
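Such a test can be scripted. The loop below (node names are illustrative, taken from Example B-4) runs ssh from the local node to each node with BatchMode=yes, which makes ssh fail immediately instead of prompting for a password — exactly the failure mode that would break a CSM or GPFS installation. Run it on every node, or wrap it in an outer loop, to cover all node pairs.

```shell
# Node list is illustrative; adjust it for your cluster.
nodes="masternode node001 node002"
failed=""
for dst in $nodes; do
    # BatchMode=yes: never prompt for a password; fail instead.
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$dst" true 2>/dev/null; then
        echo "OK:     $dst"
    else
        echo "FAILED: $dst"
        failed="$failed $dst"
    fi
done
[ -z "$failed" ] && echo "all nodes reachable without a password"
```

Any node reported as FAILED either asks for a password or cannot be reached, and must be fixed before installing CSM or GPFS.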
For more information on OpenSSH, access:
For troubleshooting information, access the OpenSSH Frequently Asked Questions (FAQ) and the mailing list archives linked from the home page.