5.1 CSM concepts and architecture


To understand what CSM is, it is useful to first understand what clustering is, what types of clusters exist, and what features each type provides. We cover the fundamentals in this section and delve into details in later sections of this chapter.

We specifically discuss:

  • Clustering basics

  • Types of clusters

  • CSM components such as Reliable Scalable Cluster Technology (RSCT), Resource Monitoring and Control (RMC), Resource Managers (RM), CSM hardware control, Configuration File Manager (CFM), CSM software maintenance, CSM diagnostic probes, CSM security, and the distributed shell (dsh).

5.1.1 Clustering basics

In simple terms, clustering is defined as two or more computers (generally referred to as nodes) working together and joined by a common network, sharing resources and services.

It differs from a workgroup or local area network (LAN) in that the nodes in the cluster use the resources (such as computing power and network adapters) of the other nodes. Moreover, the nodes or member servers in the cluster communicate with each other, or with one centralized server, to maintain the state of the cluster. Depending on the cluster type and business requirements, the design can become quite complex with respect to network connectivity and the arrangement of shared resources needed to keep the centralized server and member servers informed of up-to-date changes.

5.1.2 Types of clusters

Based on functionality, clusters are grouped into four major types:

  • High availability clusters

  • High performance clusters

  • Virtual load balancing clusters

  • Management clusters

These clustering groups are discussed in the following sections.

High availability (HA) clusters

As the name suggests, high availability clusters provide high availability of resources in a cluster. These resources can be processors, shared disks, network adapters, or entire nodes.

The main goal of a high availability cluster is to eliminate single points of failure (SPOFs) in the design. High availability is often confused with fault tolerance. A fault tolerant design aims for continuous (100%) availability by using redundant spare hardware, whereas a high availability design provides very high, but not 100%, availability by eliminating SPOFs and making better use of shared resources. RAID levels 1 and 5 are examples of fault tolerant designs.

HA clusters are commonly designed based on the functionality of applications and resources that are part of the cluster. Figure 5-1 shows a simple high availability cluster.

Figure 5-1. High availability cluster


A simple HA cluster consists of two servers sharing a common disk and network. One server is the primary server running the active applications, and the second is a hot standby server that takes over the resources (shared disk, network IP, and applications) if a problem occurs on the primary server. This type of configuration is typically referred to as active-passive high availability.

There is another type of configuration in which both servers in the cluster function as primaries, running active applications at the same time. In case of a failure, the surviving node takes over the resources of the failed server. This type of configuration is referred to as active-active high availability.

Depending on the failover mechanism, a high availability cluster can be designed to meet the complex needs of a business scenario. Failover behavior can be customized in several ways to keep resources running on the takeover node; for example, the cluster can fail back immediately when the failed node becomes active again, or fail back later at a scheduled time so that no additional outage occurs.

For more complex applications, such as parallel databases, concurrent access configurations can be designed that use distributed lock management to coordinate shared access while maintaining availability.

High Performance Computing (HPC) clusters

HPC clusters are used in compute-intensive environments, where two or more nodes work on a problem in parallel, combining the processing power of multiple nodes at the same time.

HPC clusters are normally used in scientific computing and benchmarking centers that require large amounts of processing power to solve complex mathematical and scientific problems. More information on HPC clusters can be found in Chapter 7, "High Performance Computing case studies" on page 303.

Figure 5-2 on page 215 shows a typical high performance cluster.

Figure 5-2. High performance cluster


Virtual load balancing clusters

These clusters are frequently used in high-volume Web server and e-business environments, where a front-end load balancing server routes TCP traffic to back-end Web servers on common TCP ports such as 80 (HTTP), 443 (SSL), 119 (NNTP), and 25 (SMTP). Figure 5-3 on page 216 shows a sample virtual load balancing cluster.

Figure 5-3. Virtual load balancing cluster


The inbound traffic is received by the load balancer and is immediately dispatched to an available back-end server. The availability of the back-end servers, and which server receives the next request, is determined by an algorithm that assigns each server a weight based on its number of TCP connections and on custom polling agents.

The main advantage of these clusters is that the outside world sees only one virtual IP address, which in turn is mapped to an intranet of computers forming a virtual cluster. Example 5-1 illustrates virtual load balancing.

Example 5-1. Example of virtual IP load balancing
 VirtualIP = 192.168.1.10 (URL = www.testsite.com)
 IP of load balancer ent0 interface = 192.168.1.1
 IP of load balancer ent1 interface = 10.10.10.4
 IP of web server1 = 10.10.10.1
 IP of web server2 = 10.10.10.2
 IP of web server3 = 10.10.10.3

 When a client connects to www.testsite.com, it resolves to IP 192.168.1.10 and
 the client browser loads the page directly from one of the available web servers
 without knowing the real IP address of that particular web server.

Depending on the requirements (such as virtual IPs, ports, and applications), load balancing can become quite complex, with rules, cookie-based affinities, sticky connections, and so on. The IBM WebSphere Edge Server (IBM eNetwork Dispatcher) is an example of a virtual load balancing server; refer to IBM WebSphere Edge Server User Guide, GC09-4567, for more information on this subject.

Management clusters

A management cluster consists of a group of computers networked together and functioning either independently or together by sharing resources, but all being managed and controlled from one centralized management server. IBM Cluster Systems Management (CSM) is an example of a management cluster.

CSM uses the management server as a single point of control for the management domain, managing a set of servers referred to as managed nodes. Using the management server, a system administrator can administer the entire cluster environment from one central point. Some of the tasks that can be performed from the management server are listed below (a command-level sketch follows the list):

  • Monitor all nodes in the cluster

  • Install and update software

  • Distribute files across the cluster and centralize shared functions such as user ID management

  • Remotely control node hardware, for example to reboot or power nodes on and off

  • Manage node groups

  • Diagnose problems on nodes

  • Run commands on multiple nodes at the same time with a distributed shell
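
For illustration, the following commands show how some of these tasks map to CSM command-line tools on the management server. The node group name (group1) is only an example, and command options can vary between CSM releases, so treat this as a sketch rather than a definitive reference:

 # List the nodes defined in the cluster and their status
 lsnode -l

 # List the node groups defined in the cluster
 nodegrp

 # Run a command on every node of a node group with the distributed shell
 dsh -N group1 uptime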

Figure 5-4 on page 218 shows a CSM cluster environment.

Figure 5-4. CSM cluster


The management server is a single server running the management tools, communicating with a group of nodes across a shared LAN. The CSM architecture includes a set of components and functions that work together to form a management cluster.

The following sections briefly define and describe the major components of CSM and how they interact with each other.

5.1.3 RSCT, RMC, and RM

Reliable Scalable Cluster Technology (RSCT) is the backbone of the Cluster Systems Management domain. Other CSM components, such as event monitoring and CSM security, are based on the RSCT infrastructure. RSCT and its related components are explained in this section.

Reliable Scalable Cluster Technology (RSCT)

RSCT provides the monitoring environment for CSM. RSCT was primarily developed for high availability applications such as GPFS and PSSP for AIX, and it has since been ported for use with CSM.

A detailed description of RSCT is available in RSCT for Linux: Guide and Reference, SA22-7892.

Resource Monitoring and Control (RMC)

RMC is part of RSCT and provides the framework for availability monitoring in CSM. The RMC subsystem monitors system resources such as file systems, CPU, and disks, and performs an action when a monitored condition occurs, based on a defined condition and response pair.
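
As a brief illustration of the condition/response model, the following commands list the predefined RMC conditions and responses and link one of each to start monitoring. The condition and response names shown are examples of predefined ones and may differ on your system:

 # List the predefined conditions and responses
 lscondition
 lsresponse

 # Associate a condition with a response and start monitoring it
 mkcondresp "/var space used" "Broadcast event on-shift"
 startcondresp "/var space used" "Broadcast event on-shift"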

For further information on RMC, refer to RSCT for Linux: Guide and Reference, SA22-7892.

Resource Manager (RM)

RM is a daemon that maps resources and class attributes to commands for the resources it manages. Some of the standard resource managers available are IBM.AuditRM for system-wide audit logging, IBM.HWCTRLRM for hardware control of managed nodes, IBM.ERRM for running actions in response to conditions, and IBM.DMSRM for managing the set of nodes and node groups.
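
For illustration, the resource managers and their resource classes can be inspected with commands such as the following on the management server (output varies by system, and the attribute names queried here are assumed from the RSCT documentation):

 # List the RSCT subsystems, including the RMC daemon and the resource managers
 lssrc -a | grep -i rm

 # List the resource classes known to RMC
 lsrsrc

 # Show the name and host name of each managed node defined on the management server
 lsrsrc IBM.ManagedNode Name Hostname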

RSCT, RMC and RM are discussed with examples and in more detail in 5.3, "CSM administration" on page 251.

5.1.4 CSM hardware control

As the name indicates, CSM hardware control allows control of Hardware Management Console (HMC)-attached pSeries hardware from a single point. This feature lets system administrators remotely power LPARs on and off and open a serial console to them from the management server. An HMC is required to use this feature, because the LPARs are connected to the HMC over RS232/RS422 serial connections.
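
The commands typically used for hardware control are rpower and rconsole, as sketched below. The node name (lpar1) is an example, and options may vary slightly between CSM releases:

 # Query or change the power state of a managed node through the HMC
 rpower -n lpar1 query
 rpower -n lpar1 on

 # Open a remote serial console to a managed node
 rconsole -n lpar1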

For further information on hardware control, including diagnostic messages, refer to CSM for Linux: Hardware Control Guide, SA22-7856.

5.1.5 Configuration File Manager (CFM)

The CSM Configuration File Manager provides a centralized repository and file management across the cluster. It is installed along with the CSM server packages. On the management server, the default directory structure under /cfmroot is created at install time, with sample configuration files.

Using CFM, file management is simplified for all common files across the management cluster. CFM uses a push mechanism: a cron job pushes the defined files to the managed nodes once every 24 hours. The configuration can be customized to control which files are pushed, and to which nodes.
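
As a simple sketch, a file placed under /cfmroot is distributed to the corresponding path on the managed nodes, and the cfmupdatenode command pushes the files immediately instead of waiting for the periodic cron job. The file used below is only an example:

 # Stage a copy of the file to be distributed under /cfmroot;
 # it will be delivered as /etc/hosts on the managed nodes
 cp /etc/hosts /cfmroot/etc/hosts

 # Push the CFM-managed files to all nodes now
 cfmupdatenode -a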

CFM configuration and usage details are discussed in 5.3.4, "Configuration File Manager (CFM)" on page 256.

5.1.6 CSM software maintenance

The software maintenance component provides an interface for managing remote software installation and updates on the pSeries Linux nodes. RPM packages can be queried, installed, updated, and removed with this tool.

The software maintenance system (SMS) uses NFS to remotely mount the RPM directories, dsh to distribute commands, and Autoupdate to automatically update installed software. The Autoupdate software is not packaged with CSM and has to be installed separately. Detailed prerequisites and configuration are discussed further in "Software maintenance" on page 259.
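
Because the underlying package format is RPM, a quick way to see what the software maintenance tools will be working with is to query the nodes directly with rpm over the distributed shell. This is only an illustration (the package name and node names are examples), not a substitute for the CSM software maintenance commands described on page 259:

 # Query the installed version of a package across a node group
 dsh -N group1 "rpm -q openssh"

 # List the CSM client packages installed on one node
 dsh -w lpar1 "rpm -qa | grep csm"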

5.1.7 CSM diagnostic probes

CSM diagnostic probes consist of a probe manager and a set of probes. They can be used to diagnose system problems and help identify the root cause of a problem.
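
The probes are run through the probemgr command. The probe name and option below are taken as assumptions from the CSM diagnostics documentation and should be verified against your release:

 # Run the management server probe, reporting messages at the most verbose level
 probemgr -p ibm.csm.ms -l 0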

Detailed descriptions of the probes, and usage examples, can be found in "Diagnostic probes" on page 265.

5.1.8 CSM security

CSM security uses the underlying RMC facilities to provide a secure environment. It provides authentication, a secure shell, and authorization for all managed nodes in the cluster.

By default, shell security is provided using OpenSSH. The secure shell is used to open a remote shell from the management server to all managed nodes using the SSH-2 protocol. The sshd daemon is installed and started on the managed nodes at install time.

Authentication uses a host-based authentication (HBA) model built on public/private key pairs. The public keys of all managed nodes are exchanged with the management server. CSM performs the key exchange at node install time, so no manual steps are required from the system administrator.
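
A simple way to confirm that the key exchange worked is to run a remote command from the management server and verify that no password prompt appears; the node name (lpar1) is an example:

 # Should return the date from the node without prompting for a password
 ssh lpar1 date

 # Or, equivalently, through the distributed shell
 dsh -w lpar1 date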

Note

Public keys are exchanged only between a managed node and a management server. No keys are exchanged between the managed nodes.


A public and private key pair is generated using ssh-keygen at node install time. By default, CSM security uses DSA-based keys. These keys cannot be used for any other remote command applications, and they are stored in the following locations:

  • /var/ct/cfg/ct_has.qkf

  • /var/ct/cfg/ct_has.pkf

  • /var/ct/cfg/ct_has.thl

CSM authorization is based on an access control list (ACL) file. RMC uses this control list to verify and control command execution by a user. The ACL file is stored at /var/ct/cfg/ctrmc.acl and can be modified as needed to grant access. By default, the root account has read and write access to all resource classes, and all other users are allowed read access only.
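
For illustration only, an ACL entry is organized as a stanza headed by a resource class name, followed by lines granting a user identifier read (r) and/or write (w) permission. The stanza below is hypothetical and its exact syntax is an assumption; check RSCT for Linux: Guide and Reference before editing the file:

 # Hypothetical stanza granting user "operator" read/write access to file system
 # resources, while other local users keep read-only access
 IBM.FileSystem
     operator@LOCALHOST     *     rw
     LOCALHOST              *     r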

5.1.9 Distributed shell (dsh)

CSM packages a distributed shell in the csm.dsh package, which is installed when installms is run. dsh is used for most distributed cluster management functions, such as simultaneous node updates, commands, queries, and probes. By default, dsh uses ssh on Linux nodes; it can be modified to use another remote shell, such as rsh, with csmconfig -r. Example 5-2 shows the csmconfig command output.

Example 5-2. csmconfig output
 #csmconfig
 AddUnrecognizedNodes = 0 (no)
 ClusterSNum =
 ClusterTM = 9078-160
 ExpDate = Mon Dec 15 18:59:59 2003
 HeartbeatFrequency = 12
 HeartbeatSensitivity = 8
 MaxNumNodesInDomain = -1 (unlimited)
 RegSyncDelay = 1
 RemoteShell = /usr/bin/ssh
 SetupRemoteShell = 1 (yes)

The dsh command runs concurrently on each node specified with the -w flag, or on all nodes specified by the WCOLL environment variable. Node groups can also be specified with the -N flag.
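
As a sketch of WCOLL usage, assuming (as in the dsh documentation) that WCOLL names a file containing the target node list; the file path and node names are examples:

 # Create a working collective file listing the target nodes
 printf "lpar1\nlpar3\n" > /tmp/nodelist

 # dsh uses the file named by WCOLL when -w and -N are not given
 export WCOLL=/tmp/nodelist
 dsh date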

A sample dsh output is shown in Example 5-3 for a single node.

Example 5-3. dsh command output for a single node
 #dsh -w lpar1 date
 lpar1: Wed Oct 22 17:06:51 EDT 2003

Example 5-4 shows dsh output for a node group called group1.

Example 5-4. dsh command output for a node group
 #dsh -N group1 date
 lpar3: Wed Oct 22 17:08:28 EDT 2003
 lpar1: Wed Oct 22 17:08:27 EDT 2003

The dshbak command is also part of the distributed shell package and is used to format dsh output. Example 5-5 shows how to pipe dsh output to dshbak for formatting.

Example 5-5. dsh and dshbak output
 #dsh -N group1 date | dshbak
 HOST: lpar1
 -----------
 Wed Oct 22 17:09:03 EDT 2003

 HOST: lpar3
 -----------
 Wed Oct 22 17:09:03 EDT 2003

