Configuring High-Availability Solutions

Test Objectives Covered:

  1. Configure and test high-availability file access.

  2. Configure and test high-availability services.

Now we are clustering! Well, actually, the servers are clustering but the users are not.

After you have created an NCS cluster, you must configure cluster resources to make them highly available to users. There are two main network resources that users are interested in: files and services. In the remaining lessons of this chapter, we will learn how to configure high availability for each of these resources:

  • Configuring High-Availability File Access

  • Configuring High-Availability Services

Let's continue by enabling an NCS file system.

NCS High-Availability File Access

As we just learned, there are two main network resources that you can make highly available by using NCS: files and services. In this section, we will explore using NCS 1.6 to create high-availability file access. Fortunately, clustering leverages the sophistication and stability of Novell Storage Services (NSS).

In order to cluster-enable NSS, you must first create a shared disk partition and NSS file system on the shared device. Then you can cluster-enable the NSS components (such as Volumes and Pools) by associating them with a new Virtual Server object via a unique IP address. This enables NSS Volumes to remain accessible even if the volume's host server fails, which is the very definition of high availability.

To configure NCS 1.6 to make data and files highly available to users, we must perform three steps:

  1. Create a Shared Disk Partition

  2. Create a Shared NSS Volume and Pool

  3. Cluster-Enable the NSS Volume and Pool

Now, let's use ConsoleOne to create a NetWare 6 high-availability file-access solution.

Create a Shared Disk Partition

As you recall from Chapter 5, the NetWare 6 NSS architecture relies on partitions, pools, and volumes (in that order) for scalable file storage. Therefore, it makes sense that you need to create a shared disk partition to enable high-availability file access. Effectively, we are building a shared NSS file system on the SAN.

To create a shared disk partition on your clustered SAN, make sure that all of your nodes are attached to the SAN and that the appropriate drivers have been loaded. Then activate ConsoleOne and navigate to the Cluster object. Right-click it and select Properties. On the Media tab, select Devices and choose the device that will host your shared partition. Make sure that the Shareable for Clustering box is marked. This box should already be marked, because NetWare 6 detects shared storage devices automatically when they are added to the SAN. If the option is not marked, NetWare did not detect the device as shared storage, which may indicate a problem.

Next, on the Media tab, select Partitions and click New. Select the device once again and configure the following parameters:

  • Partition Size: Specify the largest possible partition size the device will support.

  • Partition Type: NSS is selected by default.

  • Hot Fix: Should be marked.

  • Mirror: Should be marked.

  • Create New Mirror Group: Should be marked.

To create the new shared partition, click OK. That completes step 1. In step 2, you must create a shared NSS Volume and Pool for hosting clustered files.

Create a Shared NSS Volume and Pool

Storage pools are next in the NSS architecture hierarchy. Although storage pools must be created prior to creating NSS volumes, you can create both at the same time by using the Create a New Logical Volume option in ConsoleOne.

First, right-click any Server object in your cluster and select Properties. Next, choose Media, NSS Logical Volumes, New. The Create a New Logical Volume dialog box should appear. In the Name field, enter a unique name for the volume. ConsoleOne will suggest a related name for the host storage pool. Select Next to continue.

When the Storage Information dialog box appears, select the shared disk partition that you created in step 1. This is where the shared storage pool and volume will reside. Enter a quota for the volume, or select the box to allow the volume to grow to the pool size. Remember that we want to make the volume and pool as large as possible because they will host shared file storage. After you select Next, the Create a New Pool dialog box will appear. Again, enter a related name for the pool and select OK.

Next the Attribute Information dialog box will appear. Review and edit the attributes as necessary. (Refer to Chapter 5 for more details.) When you have finished editing the volume attributes, select Finish to complete step 2.

Now that you have created an NSS storage pool and volume on the shared storage device, it's time to cluster-enable them. Believe it or not, NetWare 6 does not cluster-enable shared volumes by default. At this point, the volume and pool are assigned as local resources to the server you chose in step 2. We will cluster-enable them in step 3.

Cluster-Enable the NSS Volume and Pool

When you create a standard NSS volume, it is associated with a specific server. For example, the WHITE_NSSVOL01 volume would be connected to the WHITE-SRV1 server. The problem with this scenario is that all files on the NSS volume are subject to a single point of failure: the WHITE-SRV1 server. Furthermore, if WHITE-SRV1 goes down, its server IP address is no longer broadcast and the volume cannot be migrated to a new server for high availability.

To solve this problem, NCS allows you to cluster-enable an NSS volume and pool independently of the physical server object. This means you associate the volume and pool with a new virtual server with its own IP address. This enables the volume to be accessible even if WHITE-SRV1 goes down.

Furthermore, during the cluster-enabling process, the old Volume object is replaced with a new Volume object that is associated with the pool and the old Pool object is replaced with a new Pool object that is associated with the virtual server. Table 7.6 provides a detailed description of this eDirectory object transition.

TIP

You should create an A record on your DNS server for the new virtual server's IP address. This enables your users to log in by using the logical DNS name.


Table 7.6. New Cluster-Enabled Volume and Pool Objects in eDirectory

  OBJECT TYPE               TRADITIONAL OBJECT     CLUSTER-ENABLED OBJECT
  Volume                    WHITE_NSSVOL01         WHITE_Cluster_NSSVOL01
  Storage Pool              WHITE_NSSPOOL01        WHITE_Cluster_NSSPOOL01
  Virtual Server Object     None                   WHITE_Cluster_SERVER01
  Volume Resource Object    None                   WHITE_Cluster_SERVER01

Following are three important guidelines that you must follow when you cluster-enable volumes and pools in NCS 1.6:

  • Cluster-enabled volumes no longer appear as their own cluster resources; in NCS 1.6, the cluster resource is created at the pool level. If you want each cluster-enabled volume to be its own cluster resource, you must create a one-to-one mapping from volume to storage pool. Each cluster-enabled NSS pool requires its own IP address for its virtual server. Therefore, it's important to note that the load and unload scripts in Cluster Resource objects apply to pools directly (not volumes).

  • The first volume that you cluster-enable in a pool automatically cluster-enables the entire pool. Once the pool is cluster-enabled, you must cluster-enable all volumes in the pool if you want them to be mounted during a failover. This is because NSS only mounts cluster-enabled volumes when pools are migrated throughout the cluster. Any volumes in the pool that are not cluster-enabled must be mounted manually.

  • Storage pools should be deactivated and volumes should be dismounted before being cluster-enabled.

To cluster-enable an NSS volume (and pool) using ConsoleOne, navigate to the Cluster object and select File, New, Cluster, Cluster Volume. Then browse and select a volume on the shared disk system to be cluster-enabled. Next, enter an IP address for the new volume. This is required only for the first volume in the pool; subsequent volumes adopt the same IP address because it is assigned at the pool level. Finally, mark the following three fields and click Create: Online Resource After Create (to mount the volume once it is created), Verify IP Address (to validate that there are no IP address conflicts), and Define Additional Properties.
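
For reference, here is a minimal sketch of the kind of load and unload scripts NCS generates for a cluster-enabled pool resource. The pool, volume, virtual server, and IP address names (NSSPOOL01, NSSVOL01, WHITE_Cluster_NSSPOOL01_SERVER, and 10.7.5.101) are hypothetical, the comment lines assume NCF-style # comments, and the exact commands can vary by NCS version, so treat this as an illustration rather than something to copy verbatim:

 # Load script: activate the pool, mount its volume, and advertise the virtual server
 nss /poolactivate=NSSPOOL01
 mount NSSVOL01 VOLID=254
 add secondary ipaddress 10.7.5.101
 NUDP ADD WHITE_Cluster_NSSPOOL01_SERVER 10.7.5.101

 # Unload script: reverse the process so the resource can migrate cleanly
 del secondary ipaddress 10.7.5.101
 NUDP DEL WHITE_Cluster_NSSPOOL01_SERVER 10.7.5.101
 nss /pooldeactivate=NSSPOOL01

Notice that the scripts operate on the pool rather than on individual Volume objects, and that the secondary IP address is the one assigned to the virtual server, which is consistent with the pool-level guidelines listed earlier.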

REAL WORLD

When you cluster-enable an NSS volume (and pool), two important new objects are created: a Virtual Server object and a Cluster Volume object. During the creation process, ConsoleOne and NetWare Remote Manager allow you to optionally change the default name of each of these objects. Here is the syntax that NCS normally uses:

  • Virtual Server Object is given the name of the Cluster object plus the cluster-enabled pool. For example, if the cluster name is WHITE_Cluster and the cluster-enabled pool is NSSPool01, then the default virtual server name is WHITE_Cluster_NSSPool01_Server.

  • Cluster Volume Object is given the name of the Cluster object plus the volume name. For example, if the cluster name is WHITE_Cluster and the volume name is NSSVol01, then the default cluster-enabled Volume object name is WHITE_Cluster_NSSVol01.

That completes our lesson in NCS high-availability file access. In this section, we learned the three-step process for creating a clustered file access solution. First, we created a shared disk partition on the SAN. Then we created an NSS volume and pool to host the shared files. Finally, we cluster-enabled the volume and pool with a new Virtual Server object. This process should help you sleep at night now that your users' files are always up.
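
Before moving on, you may want to sanity-check the new file-access resource from the server console. The commands below are a sketch based on commonly documented NCS console commands (the same family of commands hinted at in the lab exercise at the end of this chapter); confirm the exact syntax in your NCS documentation:

 CLUSTER VIEW
 CLUSTER RESOURCES
 CLUSTER POOLS

CLUSTER VIEW shows which nodes are currently members of the cluster, CLUSTER RESOURCES lists each cluster resource and its current state (such as Running, Offline, or Comatose), and CLUSTER POOLS lists the cluster-enabled storage pools along with the nodes assigned to each one.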

In the final NCS lesson, we will learn how to build a high-availability service solution.

NCS High-Availability Services

Network services are just as important to users as files. With NCS, you can make network applications and services highly available to users even if the applications themselves don't recognize the cluster. The good news is that Novell already includes a number of cluster-aware applications that take full advantage of NCS clustering features (one example is GroupWise). However, you can also cluster-enable any application by creating a cluster resource and migrating it into NCS.

In this section, we will learn how to use NCS 1.6 to guarantee always up NetWare 6 services. Along the way, we will discover two different types of NCS resources:

  • Cluster-Aware Applications are programmed to take advantage of NCS clustering. These applications always know they are running within an NCS cluster and try very hard to stay available. Table 7.7 lists cluster-aware applications in NCS 1.6.

  • Cluster-Naïve Applications are not programmed to recognize NCS clustering. Fortunately, NCS does support cluster-naïve applications; however, failover and failback operations are not as seamless as with their cluster-aware cousins. In this case, NCS must work extra hard to ensure that cluster-naïve resources are migrated to other cluster nodes when their host server fails.

Table 7.7. Cluster-Aware Applications for NCS 1.6

  APPLICATION CATEGORY        CLUSTER-AWARE APPLICATIONS
  NetWare 6 Applications      iFolder, iPrint, iManager, and NFAP Common Internet File Services (CIFS)
  Novell Applications         BorderManager (Proxy and VPN), GroupWise 5.5 and 6, NDPS, Novell Clients for Windows 98/2000, and Btrieve
  Web Services                Apache Web Server, Enterprise Web Server (LDAP and NDS), and WebDAV
  Protocols                   AppleTalk Filing Protocol (AFP) and DHCP Server
  Third-Party Applications    Oracle Database and Norton AntiVirus
  UNIX-Related Applications   NetWare 5.1 FTP Server and NFS 3.0
  ZENworks                    ZENworks for Servers and ZENworks for Desktops 2 and 3

In this final NCS lesson, we will learn how to configure high-availability services by performing these five administrative tasks:

  • Cluster-Enabling Applications

  • Assigning Nodes to a Cluster Resource

  • Configuring Cluster Resource Failover

  • Migrating Cluster Resources

  • Configuring Cluster Resource Scripts

Cluster-Enabling Applications

Cluster resources are at the center of the NCS universe. To cluster-enable any network service (such as an application), you must create a corresponding cluster resource. The resource includes a unique IP address and is available for automatic or manual migration during a node failure.

You can create cluster resources for cluster-aware or cluster-naïve applications, including websites, e-mail servers, databases, or any other server-based application. This magic is accomplished using ConsoleOne or NetWare Remote Manager. After you have created an application's cluster resource, you can assign nodes to it and configure failover options (we will discuss these topics in just a moment).

To create a cluster resource for a given network application, launch ConsoleOne. Next, navigate to the host Cluster object and select File, New, Cluster, Cluster Resource. Then enter a descriptive name for the cluster resource that identifies the application it will be serving. Next, mark the Inherit from Template field to perform additional configurations based on a preexisting template. If one does not exist, select the Define Additional Properties box to make the configurations manually. Finally, if you want the resource to start on the master node as soon as it is created, select Online Resource After Create and click Create.

You have created a new cluster resource in eDirectory for your highly available application. However, this is only the beginning. For users to have constant access to the application, you must assign nodes to the cluster resource, configure failover options, and build load scripts (so NCS knows how to enable the application). Let's continue with Node Assignment.

Assigning Nodes to a Cluster Resource

Before your new cluster resource is highly available, it must have two (or more) nodes assigned to it. Furthermore, the order in which the nodes appear in the Assigned Nodes list determines their priority during failover.

To assign nodes to a cluster resource in ConsoleOne, navigate to the new Cluster Resource in eDirectory. Next right-click it and select Properties. When you activate the Nodes tab, two lists will appear: Unassigned (which should have two or more servers in it) and Assigned (which should be blank).

To assign nodes to this cluster resource, simply highlight a server in the Unassigned list and click the right-arrow button to move it to the Assigned Nodes list. When you have two (or more) servers in the Assigned Nodes list, you can use the up-arrow and down-arrow buttons to change the failover priority order.

Speaking of failover, let's continue with a quick lesson in configuring cluster resource failover.

Configuring Cluster Resource Failover

After you have created a cluster resource for your application and added nodes to it, you're ready to configure the automatic and manual failover settings. Following is a list of the failover modes supported by the Policies page in ConsoleOne:

  • Start Mode: NCS supports two start modes for cluster resources: Automatic and Manual. When set to Automatic, the cluster resource automatically starts on its preferred node anytime the cluster is activated. When set to Manual, the cluster resource goes into an Alert state anytime the cluster is restarted. In this state, ConsoleOne displays the resource as an Alert and presents you with the option of starting the resource manually.

  • Failover Mode: NCS also supports two failover modes for cluster resources: Automatic and Manual. When set to Automatic, the cluster resource starts on the next server in the Assigned Nodes list when its host node fails. When set to Manual, the cluster resource goes into an Alert state when its host node fails. In the Alert state, ConsoleOne allows you to manually move the resource to any cluster node of your choice.

  • Failback Mode: NCS supports three failback modes for cluster resources: Automatic, Manual, and Disable. When set to Automatic, the cluster resource automatically fails back to its most preferred node when that node rejoins the cluster. When set to Manual, the cluster resource goes into an Alert state when its preferred node rejoins the cluster. At that point, ConsoleOne allows you to move the resource back to its preferred node when you think the time is right. Finally, in Disable mode, the cluster resource does nothing when its most preferred node rejoins the cluster. This is the default setting, and it is recommended under most circumstances.

If you don't feel comfortable automatically migrating cluster resources in NCS, you can always migrate them manually. Let's continue with a quick lesson in resource migration.

TIP

When configuring cluster resource failover modes, ConsoleOne presents an Ignore Quorum check box. Selecting this parameter instructs NCS to ignore the cluster-wide timeout period and node-number limits. This ensures that the cluster resource is launched immediately on any server in the Assigned Nodes list as soon as that server is brought online. We highly recommend that you check the Ignore Quorum box because time is of the essence when building a high-availability solution.


Migrating Cluster Resources

You can migrate cluster resources to different nodes in the Assigned Nodes list without waiting for a failure to occur. This type of load balancing is a good way to lessen the performance load on any one server. In addition, resource migration is a great tool for freeing up servers that are scheduled for routine maintenance. Finally, migration allows you to match resource-intensive applications with the best server hardware.

To migrate cluster resources by using ConsoleOne, navigate to the Cluster object that contains the resource that you want to migrate. Highlight the Cluster object and select View, Cluster State View. Then in the Cluster Resource list, select the resource you want to migrate.

Next, the Cluster Resource Manager screen appears, displaying the resource's host server and a list of possible servers to which you can migrate the resource. Select a server from the list and click the Migrate button to manually move the resource to the new server. You can also select a resource and click the Offline button to unload it from its host server. At that point, the resource hangs in limbo until you manually assign it to another node.
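
If you prefer working at the server console, NCS also provides console commands for moving resources around. The resource name (WEB_RESOURCE) and node names (WHITE-SRV1 and WHITE-SRV2) in this sketch are hypothetical, and you should confirm the exact command syntax against your NCS documentation:

 CLUSTER MIGRATE WEB_RESOURCE WHITE-SRV2
 CLUSTER OFFLINE WEB_RESOURCE
 CLUSTER ONLINE WEB_RESOURCE WHITE-SRV1

CLUSTER MIGRATE moves a running resource to the named node, CLUSTER OFFLINE unloads the resource (leaving it offline until you bring it back), and CLUSTER ONLINE starts the resource on the node you specify, mirroring the Migrate and Offline buttons described above.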

TIP

Cluster resources must be in a Running state to be migrated.


So far, you have created a cluster resource for your network application and assigned nodes to it. Then you configured automatic cluster failover modes and migrated resources manually for load balancing. That leaves us with only one important high-availability task: configuring cluster resource scripts. This is probably the most important task because it determines what the resources do when they are activated.

Ready, set, script!

Configuring Cluster Resource Scripts

When a cluster resource loads, NCS looks to the Load Script to determine what to do. This is where the application commands and parameters for the specific cluster resource are stored. Load Scripts are analogous to the NCF (NetWare Command File) batch files that run automatically when NetWare servers start. In fact, cluster resource load scripts support any command that you can place in an NCF file.

Similarly, the Unload Script contains all of the commands necessary to deactivate the cluster resource, or take it offline. Both Load and Unload Scripts can be viewed or edited by using ConsoleOne or NetWare Remote Manager.

To configure a specific cluster resource's Load Script in ConsoleOne, navigate to the Cluster Resource object and right-click it. Next, select Properties and select the Load Script tab. The Cluster Resource Load Script window will appear. Simply edit the commands as you would any NCF batch file. In addition, you will need to define a timeout setting for the load script. If the load script does not complete within the timeout period (600 seconds by default), the resource will go into a comatose state.
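
As an illustration, the load and unload scripts for a hypothetical cluster-naïve application might look something like the sketch below. The module name (MYAPP.NLM) and IP address are invented for this example, and the comment lines assume NCF-style # comments; substitute whatever commands your application actually requires:

 # Load script: bind the resource's secondary IP address, then start the application
 add secondary ipaddress 10.7.5.102
 load MYAPP.NLM

 # Unload script: stop the application, then release the secondary IP address
 unload MYAPP.NLM
 del secondary ipaddress 10.7.5.102

The unload script simply mirrors the load script in reverse order, which helps ensure that the resource can be taken offline or migrated without leaving its secondary IP address bound to the old node.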

REAL WORLD

Cluster Resource Load/Unload Scripts support command line input using the << parameter. For example, to load the SLPDA application with a Yes autoconfiguration parameter, enter the following command in the cluster resource load script:

 LOAD SLPDA << Y 

The string following a command-line parameter can be up to 32 characters long.

In this final NetWare 6 lesson, we learned how to implement Novell's new AAA: Anytime, Anywhere, Always Up. Always up is accomplished by using Novell Cluster Services (NCS). In this chapter, we learned how to design a NetWare 6 NCS solution, how to install it, how to configure it, and how to keep it running.

In the first NCS section, we explored high availability in theory and built an impressive NCS vocabulary, including Mean Time Between Failures (MTBF) and Mean Time to Recovery (MTTR). After we nailed down the fundamentals of NCS, we used NCS 1.6 to design a clustering solution. In the basic system architecture, we learned how to use a Fibre Channel or SCSI configuration to share a central disk system.

In the third lesson, we discovered the four-step process for installing NCS 1.6. Then, at the end of the chapter, we learned how to configure two high-availability solutions: File Access and Services. So there you go…. Novell Cluster Services in all its glory!!

Congratulations! You have completed Novell's CNE Update to NetWare 6 Study Guide.

With this companion, you have extended your CNE venture beyond NetWare 4 and 5 into the Web-savvy world of NetWare 6. Furthermore, you have learned how to boldly serve files and printers where no one has served them before, with iPrint, iFolder, and iManager. That is Novell Course 3000: Upgrading to NetWare 6 in a nutshell.

Wow, what a journey! You should be very proud of yourself. Now you are prepared to save the 'Net with NetWare 6. Your mission, should you choose to accept it, is to pass the NetWare 6 CNE Update exam. You will need courage, eDirectory, iFolder, NCS, and this book.

All in a day's work….

Well, that does it. The end. Finito. Kaput. Everything you wanted to know about NetWare 6 but were afraid to ask. I hope that you have had as much fun reading this book as I've had writing it. It's been a long and winding road, and a life changer. Thanks for spending the last 700 pages with me, and I bid you a fond farewell in the only way I know how:

Cheerio!

Happy, Happy Joy, Joy!

Hasta la Vista!

Ta, Ta, for now!

Grooovy, Baby!

May the force be with you….

So long, and thanks for all the fish!

Lab Exercise 7.1: Building a High-Availability Network (Word Search Puzzle)

Q1:

Circle the 20 cluster services terms hidden in this word search puzzle, finding the words by using the hints provided.

[Word search puzzle grid: graphics/07fig17.gif]

Hints

  1. This console command lists all NCS storage pools and the nodes assigned to each pool.

  2. The best utility for configuring NCS 1.6.

  3. Displays a detailed history of your cluster activity, sorted by time stamp.

  4. The automatic migration of cluster resources after a node fails.

  5. Load balancing the migration of resources to other nodes during a failover based on factors such as node traffic and/or availability of installed applications.

  6. Novell's very own clustering-aware application for messaging.

  7. The amount of time, in seconds, between LAN transmissions for all nodes.

  8. An Ethernet network of cluster nodes.

  9. The average time that a device takes to recover from a nonterminal failure.

  10. A high-availability solution built into NetWare 6 that allows you to create redundant Storage Area Networks (SANs) for critical network applications and files.

  11. The loss of a computer service, and the enemy of high availability.

  12. The disks contained in a clustered SAN must be configured in this way to achieve true fault tolerance.

  13. A cluster configuration for nonmission-critical environments.

  14. The duration of time a service is functioning.

  15. The amount of time, in seconds, between LAN transmissions from the master node to all other nodes in the cluster.

See Appendix C for answers.

Lab Exercise 7.2: NetWare 6 High-Availability with Cluster Services (Crossword Puzzle)

Q1:

[Crossword puzzle grid: graphics/07fig18.gif]

Across

3. Enough to get started

4. Dividing the total number of operating hours by the total number of failures

7. Deactivating NCS at the server console

8. The number of nines to achieve high availability

9. Resolves conflicts between two Cluster Membership Views

11. Two make up a cluster

12. LAN of storage devices

13. Not a Master

Down

1. Resources are in normal operating condition

2. Small dedicated partition on the shared storage device

4. Runs the NCS show

5. Window of opportunity to get started

6. A "family" of highly available servers

8. Resources returning where they belong

10. Activating NCS at the server console


