17.3 Windows

With Oracle 9i Release 2 of RAC, Oracle introduced the cluster file system (CFS) for the Windows operating system. Installing CFS on Windows eliminates the need for raw devices for RAC.

17.3.1 Cluster file system

CFS eliminates the requirement for Oracle database files to be linked to logical drives and enables all nodes to share a single Oracle home instead of requiring each node to have its own local copy. CFS volumes can span one shared disk or multiple shared disks for redundancy and performance enhancements.

The benefits of CFS are as follows:

  • It is extensible without interrupting availability. Oracle homes and data files stored on CFS can be extended dynamically. Unlike raw partitions, where each partition can hold only one file (i.e., a partition is a file), a CFS volume can hold multiple files.

  • It eliminates the requirement for each node in a cluster to have its own local copy of the Oracle home.

  • It takes full advantage of RAID volumes and storage area networks.

  • It simplifies Oracle database administration. CFS provides a uniform view of files and directories across a cluster for both Oracle home files and Oracle database files.

  • It provides uniform accessibility to archive logs in the event of physical node failures.

  • When Oracle patches are applied, the updated Oracle home is visible to all nodes in the cluster.

  • It guarantees consistency of metadata across all nodes in a cluster.

17.3.2 Before beginning

The procedures described in the following sections enable RAC to use CFS for database files and/or a shared Oracle home.

Before starting the installation process:

  1. Ensure that sufficient unallocated space is available on the shared disks.

  2. Ensure that all the required administrative privileges are available for all nodes.

  3. Make sure all the nodes to be part of the CFS are up and can communicate with each other in a TCP/IP environment.

  4. Have the following hardware and network configuration information available:

    1. The public network names for each node (also known as host or TCP/IP names).

    2. If VIA hardware is used for cluster interconnects, the name of the VIA connection network interface card (NIC).

    3. The private network names of each node for the high-speed private interconnect. For optimal performance, a dedicated private interconnect for CFS should be used.

17.3.3 CFS-installed components and services

On successful installation of CFS, the following components and services are available:

Ocfs.sys

File system driver for Windows NT and Windows 2000. The correct O/S-specific version of the driver is installed depending on the Windows O/S.

OracleClusterVolumeService

CFS service that ensures consistent mount points across the cluster and provides configuration support for the file system driver. After installation, the CFS service appears in the Windows Services panel.

OcfsFormat.exe

Utility that prepares volumes for use with the CFS. In order to enable a volume for use with CFS, it needs to be formatted by running this utility from one of the nodes in the cluster.

OcfsUtil.exe

Utility that is used for changing the cluster name for a given volume, managing the list of nodes configured on a volume, and creating node-specific files and directories.

OcfsOui.bat

Batch file that automatically runs from OUI during the installation of an Oracle home on CFS. It is called by OracleClusterVolumeService and it creates the needed node-specific directories and files on CFS for the Oracle home.

17.3.4 System requirements

  • For accessibility requirements, Oracle Cluster Setup Wizard requires JAWS 4.0.2 or higher as the minimum configuration.

  • CFS can be installed on a Windows workgroup or a Windows domain. For a domain, each node must be a member of the same domain or belong to a trusted domain.

  • CFS does not support mixed clusters containing Windows NT and Windows 2000.

  • Windows Terminal Services Client cannot be used to perform the installation.

  • To guard against unnecessary troubleshooting, downtime, and performance problems, any node belonging to the CFS should not be:

    1. A domain controller

    2. Configured as a DHCP, WINS, or DNS server

  • CFS supports nodes with multiple NICs and VIA hardware. A unique node name is assigned to each NIC. For performance and failover, multiple NICs with static IP addresses assigned to each are recommended.

  • CFS supports up to 32 nodes in a cluster.

Table 17.4 provides the system requirements for installing CFS for RAC.

Table 17.4: Hardware and O/S Requirements for CFS

  Processor: Pentium 266 or higher recommended

  Operating system: Windows NT 4 Server with Service Pack 6 or higher; Windows 2000 Server or Windows 2000 Advanced Server with Service Pack 2 or higher. Not supported: Windows 2000 Datacenter Server

  RAM: 256 MB minimum

  Virtual memory: 200 MB initial size, 400 MB maximum size

  Hard disk space: 100 MB minimum for a volume to be usable for CFS; 4 MB local disk space for CFS components

  Networking protocol: TCP/IP

  Video adapter: 256 colors (required only for running installations and tools)

17.3.5 Cluster file system preinstallation steps

Under the Windows operating systems, raw partitions are referred to as logical drives.

Note 

For additional information on creating partitions, refer to the Windows online help from within the disk administration tools.

Run Windows NT Disk Administrator or Windows 2000 Disk Management from one node to create an extended partition. For Windows 2000 only, use a basic disk. Dynamic disks are not supported.

  • Create at least two partitions: one for the Oracle home and one for the Oracle database files.

  • In the prior version of RAC, a separate partition had to be configured as a voting device. This is not required with CFS, which stores the voting device in a file.

  • Use caution when creating partitions: keep the number of partitions to a minimum, because a larger number of partitions degrades the overall performance of the system.

To create partitions

  1. From one of the existing nodes of the cluster, run the Windows disk administration tool as follows:

    1. On Windows NT start Disk Administrator using the path:

      Start>Programs>Administrative Tools>Disk Administrator.

    2. On Windows 2000 start Disk Management using the path:

      Start>Settings>Control Panel>Administrative Tools>Computer Management. Figure 17.5 provides a view of the Computer Management list of functions. From this screen, expand the Storage folder and select the Disk Management folder. For Windows 2000, create partitions only within an extended partition on a basic disk.

      Figure 17.5: Options under Computer Management utility.

  2. From the list of disk partitions, select the unallocated part of an extended partition.

    1. For Windows NT choose Create Partition.

    2. For Windows 2000 choose Create Logical Drive. A wizard presents pages for configuring the logical drive.

  3. Next is the Create Partition Wizard screen, shown in Figure 17.6. From this screen the actual size of the partition is defined. Oracle CFS requires that the partition be at least 100 MB.

    Figure 17.6: Defining the partition size.

  4. When creating the partition, caution should be taken not to assign any letter to the partition. Partition names are assigned through the Cluster Setup Wizard at a later stage in the process.

    1. Windows NT automatically assigns a drive letter. Remove the drive letter by right clicking on the new drive and selecting ''Do not assign a drive letter'' for the Assign Drive Letter option. Do this for any Oracle partitions.

    2. For Windows 2000 choose the option ''Do not assign a drive letter'' and then click ''Next'' to continue. Figure 17.7 illustrates the screen where a drive letter would be assigned for the partition. The drive letter for the partition is instead assigned later through the Cluster Setup Wizard.

      Figure 17.7: Create Partition Wizard window—Path definition.

    3. The next window prompts the user to determine if the partition needs to be formatted. In this screen ''Do not format this partition'' is selected.

    4. Figure 17.8 provides a view of the last screen while defining a partition. From the screen the ''Finish'' option is selected, marking the completion of creating a partition (e.g., of size 300 MB as illustrated in the figure).

      Figure 17.8: Create Partition Wizard status screen.

  5. Choose ''Commit Changes Now'' from the Partition menu to save the new partition information.

  6. Steps 2 through 4 are repeated for creation of any additional partitions. An optimal configuration has one partition for the Oracle home and another for the Oracle database files.

  7. Once all the required partitions have been defined, it should be verified that these partitions are visible from all nodes participating in the cluster. As part of the verification process, it should be confirmed that no drive letter or path is assigned to a partition.

17.3.6 Installing the cluster file system

This section describes the steps for the first installation of CFS.

Run clustersetup.exe from the preinstall_rac\clustersetup\ directory of the CFS product CD. Do not run clustersetup.exe from the Oracle 9i Database product CD.

To install CFS:

  1. From one of the nodes in the cluster, insert the CFS product CD, navigate to the \preinstall_rac\clustersetup\ directory, and double-click clustersetup.exe. The welcome page for the Oracle Cluster Setup Wizard appears, as shown in Figure 17.9. Click ''Next'' to continue.

    Figure 17.9: CFS installation welcome screen.

  2. From the next screen choose ''Create a cluster'' and click ''Next,'' at which point the Network Selection page appears.

  3. Choose ''Use private network for interconnect'' if the nodes have a high-speed private network connecting them. Otherwise, the public network can be selected. Click ''Next.'' The Private Network Configuration page appears.

  4. Enter the name for the cluster being created, and enter the names of the nodes. If a private network interconnect was selected in step 3, enter the public and private names for the nodes; otherwise, enter the public names. Figure 17.10 defines the public and private names for the nodes participating in the cluster. Once the names are assigned, click ''Next,'' at which point the CFS options window appears.

    Figure 17.10: Public and private node name definition.

  5. Choose a CFS option for this cluster: ''CFS for Oracle Home and Datafiles,'' ''CFS for Oracle Home,'' or ''CFS for Datafiles.'' Click ''Next.''

  6. Depending on the CFS option selected, a page for selecting the disk partition and the drive letter appears:

    1. If ''CFS for Oracle Home and Datafiles'' is selected, then two pages appear sequentially: ''CFS for Oracle Home'' and ''CFS for Datafiles.''

    2. If ''CFS for Oracle Home'' is selected, then the ''CFS for Oracle Home'' page appears followed by the Disk Configuration screen for configuring the voting disk (Figure 17.11). Select the appropriate disk for the voting disk, ''srvcfg.'' CFS stores the voting device for operating-system-dependent (OSD) clusterware as a file on a CFS partition. Click ''Next'' when done.

      Figure 17.11: Voting disk selection.

    3. If ''CFS for Datafiles'' is selected, then the ''CFS for Datafiles'' page appears.

  7. Choose a partition of the required size from the list of available partitions and then choose a drive letter from the Drive Letter drop- down list. For the CFS option selected in step 5, the partition and drive letter combination will be assigned to the CFS drive letter for all of the volumes in the cluster.

  8. For additional CFS volume definitions step 6 is repeated. Click ''Next.'' If ''CFS for Oracle Home and Datafiles'' or ''CFS for Datafiles'' was selected in step 5, then skip to step 10.

  9. If ''CFS for Oracle Home'' was selected in step 5 then the Disk Configuration page for configuring the voting disk appears because the Oracle database files will not use CFS.

  10. Click ''Next.'' The installation wizard automatically checks that the cluster interconnect hardware such as VIA is configured. Figure 17.12 provides the status after the cluster interconnect identification has been performed by the CFS installation process. The installation process verifies the existence of VIA in the configuration.

    Figure 17.12: Cluster interconnect identification.

    1. If VIA is not detected, TCP is used for the clusterware interconnect. Click ''Next'' and skip to step 13.

    2. If VIA is detected, then the VIA Selection page appears. Continue to step 11.

  11. Choose ''Yes'' to use VIA for the interconnect and click ''Next.'' The VIA Configuration page appears. Option ''No'' will inform the CFS installation process to use TCP/IP as the cluster interconnect.

  12. Enter the name of the VIA connection and click ''Next.''

  13. The Install Location page is the last page that appears. The default location is %systemroot%\osd9i. Click ''Browse'' to navigate to a different location if needed.

  14. Click ''Finish.'' A progress page displays the actions being performed. Figure 17.13 verifies the CFS configuration. Depending on the number of partitions and the number of nodes defined in the configuration, this could take several minutes to complete.

    Figure 17.13: CFS verification.

17.3.7 Installing Oracle Real Application Clusters

The steps described in this section create a RAC database on the CFS.

To perform these steps the following are required:

  • To be logged in with administrative privileges.

  • The CFS product CD.

  • The Oracle 9i Enterprise Edition Release 2 (9.2.0.1.0) product CDs.

  • The CFS drive letter that was specified earlier for the Oracle home and, if ''CFS for Datafiles'' was selected, the CFS drive letter specified for data files.

Task 1: Install Oracle RAC components

The steps required for installation of RAC are the same as defined in Chapter 8 (Installation and Configuration).

Task 2: Apply the patches

Before running DBCA to configure the database, the required patches for SRVM and DBCA should be applied. The patch documents are located in the \patch directory of the CFS product CD, or in the %ORACLE_HOME%\cfspatch directory of the Oracle home. Oracle provides the information required for installation of the server manager and the DBCA utility in their respective text files.

  1. Perform the patch procedure described in srvm.txt to apply the SRVM patch.

  2. Perform the patch procedure described in dbca.txt to apply the DBCA patch.

Task 3: Configure listener services

  1. Stop the Oracle<oracle home name>TNSListener service from the Windows Services control panel.

    1. For example: OracleOraHome92TNSListener, where the Oracle home name is OraHome92 and the Oracle home appears on the CFS under \oracle\ora92.

    2. On some systems the listener name for the first node may be appended, for example, OracleOraHome92TNSListenerListener_deptclust1.

  2. Change the Startup Type of the listener service to Disabled on every node in the cluster.

  3. On each node, open an MS-DOS window and from the command prompt enter:

    lsnrctl start <listener_name>

    For example:

    O:\>lsnrctl start listener_deptclust1

    where listener_deptclust1 is the name of the listener configured for the node deptclust1 of the cluster named deptclust.

    Note 

    For the command-line argument, the listener name should match the listener configured for each node. For example: listener_deptclust1, listener_deptclust2, listener_deptclust3.

    Perform this step on each node. The lsnrctl command creates the service for the listener provided as an argument and starts the service. The listener service then appears in the Services control panel for the node.

  4. After the services for the listeners are created, the Startup Type for each service is changed to Automatic. Repeat this step for each node.
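Because each listener follows the listener_<node> naming convention described in the note, the per-node start commands can be generated with a short script. The sketch below is illustrative only: the node names (deptclust1 through deptclust3) are the chapter's example names, and each generated lsnrctl command must still be run on its own node.

```shell
# Generate the "lsnrctl start" command for each node, following the
# listener_<node> naming convention. Node names are the chapter's
# example names, not real hosts; the commands are printed, not executed.
NODES="deptclust1 deptclust2 deptclust3"
for node in $NODES; do
  echo "lsnrctl start listener_${node}"
done
```

Generating the commands this way keeps the listener names consistent with the per-node naming scheme, which is what the note above warns must match.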

Task 4: Configure the Oracle RAC database

  1. On the CFS drive that was created for data files, create an oradata directory at the root. This directory will be visible from all nodes in the CFS.

    For example: P:\> md oradata

  2. Open a new MS-DOS window and run DBCA from the command prompt as follows:

    dbca -datafileDestination P:\oradata

    The DBCA welcome page appears. Choose ''Oracle cluster database'' and click ''Next.''

  3. Choose ''Create database'' and click ''Next.''

  4. Select all nodes on the Node Selection page and click ''Next.''
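The non-interactive part of the sequence above (steps 1 and 2) reduces to creating the shared directory and pointing DBCA at it. The sketch below mirrors those steps in POSIX shell purely for illustration; on the cluster itself the commands are P:\> md oradata and dbca -datafileDestination P:\oradata, where P: is the CFS drive for data files. DATA_ROOT is a stand-in path introduced here, not part of the Oracle procedure.

```shell
# Create the shared datafile directory, then show the DBCA invocation.
# DATA_ROOT stands in for the CFS data drive letter (P: in the text);
# the dbca command is echoed rather than executed.
DATA_ROOT="${DATA_ROOT:-/tmp/cfs_demo}"
mkdir -p "$DATA_ROOT/oradata"
echo "dbca -datafileDestination $DATA_ROOT/oradata"
```

Because the oradata directory lives on the CFS volume, creating it once from any node makes it visible to every node, which is why DBCA only needs the single -datafileDestination argument.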

Note 

The steps required for creation of a RAC database are discussed in Chapter 8 (Installation and Configuration).

17.3.8 Installing CFS on an existing cluster

If Release 2 (9.2.0.1.0) of RAC is installed without the CFS feature, and the installation needs to be upgraded to include CFS for easier maintenance, this section describes the steps involved.

Run the Oracle Cluster Setup Wizard from the CFS product CD to re-create the cluster to use CFS and choose ''CFS for Datafiles'' as the CFS option.

Preparation for installing CFS for Datafiles

Prior to installation of CFS the following should be performed:

  • All databases running in the cluster should be stopped.

  • Oracle services running on all nodes should be stopped.

  • Oracle OSD clusterware services should also be stopped.

Task 1: Shut down databases

  • Using the SRVCTL command the databases can be shut down from any of the participating nodes in the cluster. The following command shuts down the database:

    C:\> srvctl stop database -d <db_name>

    Control is not returned to the session that initiates a database shutdown until shutdown is complete. The database cannot be shut down if the database has any shared server processes that are maintaining session states.

  • Alternatively, the SQL*Plus SHUTDOWN command could be used on each node on which the cluster database has an instance to shut down that instance.

Task 2: Stop Oracle services

Stop all Oracle database services. From the Services dialog box in the Windows Control Panel, stop these services (and others if applicable):

  • OracleCMService9i

  • OracleGSDService

  • OracleService<SID>

  • OracleTNSListener service on each node

Task 3: Back up database configuration information

Back up configuration information for existing databases by using the srvconfig utility:

C:\> srvconfig -exp %ORACLE_HOME%\conf_backup.txt

Task 4: Install CFS

  • From the CFS product CD, run Cluster Setup Wizard as described in Section 17.3.6.

  • Choose ''CFS for Datafiles'' as the CFS Option.

After the cluster is re-created, all nodes will have access to CFS for Datafiles.

Task 5: Restore database configuration information

  • Initialize the SRVM configuration file by using the srvconfig utility as:

    C:\> srvconfig -init

  • Restore the backed-up configuration information that was created in Task 3:

    C:\> srvconfig -imp %ORACLE_HOME%\conf_backup.txt

Task 6: Start Oracle services

From the Services dialog box in Windows Control Panel on each node, start these services (and others if applicable):

  • OracleCMService9i

  • OracleGSDService

  • OracleService<SID>

  • OracleTNSListener service on each node.

17.3.9 Cluster file system features

Node-specific files and directories

CFS supports node-specific files and directories. This allows nodes in a cluster to see different views of the same files and directories although they have the same pathname on CFS. This feature supports products that are installed on the Oracle home (like Oracle Intelligent Agent) that need to have the same file name on different nodes but require a private copy on each node because node-specific information might be stored in these files.

Unique clustername integrity

CFS associates a unique clustername with a CFS volume. The clustername is automatically selected from the cluster manager registry, and if a valid non-default clustername is present in the value

HKEY_LOCAL_MACHINE\Software\Oracle\CM\ClusterName

then any volume formatted from this node will be available to nodes with the same clustername as this node. OcfsUtil provides a way to change the clustername for a volume, which makes the volume visible to all nodes in the corresponding cluster. The clustername allows a hardware cluster to be segregated into logical software clusters from a storage viewpoint, which is important for supporting a storage area network.

OcfsUtil command summary

OcfsUtil is a command-line utility that is used for:

  • Changing the clustername for a given volume.

  • Managing the list of nodes configured on a volume.

  • Creating node-specific files and directories.

Table 17.5 provides a list of OcfsUtil commands.

Table 17.5: OcfsUtil Command Summary

ChangeClusterName

Changes the clustername for the volume with mount point <VolumeMountPoint> (for example, O:) to the specified <NewClusterName>. Omitting <NewClusterName> resets the clustername to the null clustername, making the volume visible to all nodes that have hardware connectivity to it.

Syntax:

OcfsUtil /c ChangeClusterName /m <VolumeMountPoint> /n <NewClusterName>

ChangeVolConfig

Prints the current configuration of the volume with mount point <VolumeMountPoint> (for example, O:). If /d <NodeName> is specified, NodeName is removed from the config map. The config map is the list of nodes that have ever accessed this CFS volume.

Syntax:

OcfsUtil /c ChangeVolConfig /p /m <VolumeMountPoint> /d <NodeName>

NodeSpecificFile (Create)

The NodeSpecificFile command has three options: create, delete, and revert. The create option makes the file or directory (/d for a directory) specified by <FullPath> on the <VolumeMountPoint> (for example, O:) into a node-specific file. The file or directory has the same name on all nodes but different contents, and is treated as a local file or directory.

Syntax:

OcfsUtil /c NodeSpecificFile /o create /m <VolumeMountPoint> /p <FullPath> [/d]

NodeSpecificFile (Delete)

Deletes the node-specific file or directory specified by <FullPath> on the <VolumeMountPoint> (for example, O:).

Syntax:

OcfsUtil /c NodeSpecificFile /o delete /m <VolumeMountPoint> /p <FullPath>

NodeSpecificFile (Revert)

Reverts the node-specific file or directory specified by <FullPath> on the <VolumeMountPoint> (for example, O:) to a shared file, pointing it to the contents of the node-specific copy on <NodeName>. If no NodeName is specified, the reverted shared file or directory takes its contents from the node on which the command is run.

Syntax:

OcfsUtil /c NodeSpecificFile /o revert /m <VolumeMountPoint> /p <FullPath> /n <NodeName>
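Based on the syntax in Table 17.5, concrete invocations might look as follows. These are illustrative sketches only: the drive letter O:, the clustername deptclust, the node name deptclust1, and the file path are example values, not requirements.

```
O:\> OcfsUtil /c ChangeClusterName /m O: /n deptclust
O:\> OcfsUtil /c NodeSpecificFile /o create /m O: /p \oracle\ora92\network\admin\listener.ora
O:\> OcfsUtil /c NodeSpecificFile /o revert /m O: /p \oracle\ora92\network\admin\listener.ora /n deptclust1
```

The first command restricts the volume mounted at O: to nodes whose clustername is deptclust; the second gives each node its own private copy of the listener configuration file while keeping a single pathname; the third turns that file back into a shared file using the copy from deptclust1.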

17.3.10 Performance tuning

For CFS to perform optimally, do not store the Oracle home and the Oracle database files on the same partition or logical drive.

Allocation unit sizes

For volumes that hold only Oracle database files, set the allocation unit size to at least 1024 KB. For volumes on which the Oracle home is created, set the allocation unit size to 4 KB to 8 KB.

Use the Windows disk administration tool to set the file allocation unit size appropriate for the type of file access.

Table 17.6 provides the recommended allocation unit sizes. The default allocation for Windows is 4 KB.

Table 17.6: Recommended Allocation Unit Sizes

  CFS for Oracle Home: 4 KB to 8 KB

  CFS for Datafiles: 1024 KB minimum

Caching

For optimal performance, CFS does not cache data for Oracle database files; it caches only metadata and non-database files. Oracle Corporation therefore strongly recommends that third-party products requiring aggressive file system caching not be used with CFS, as they could conflict with the caching mechanism used by CFS.

Networking recommendations

Public interconnects should not be used for the clustered database, because they carry busy general-purpose network traffic. Do not use DHCP to dynamically assign IP addresses to the nodes running the clustered database; DHCP generates additional network traffic through leasing and revoking IP addresses.

Each node should have at least two NICs to provide a private interconnect for the internode cache fusion traffic. The private interconnect takes advantage of the performance gains provided by cache fusion.

The NICs should have dedicated IP addresses for optimal bandwidth. Alternatively, NIC teaming can be used, in which multiple physical NICs are configured as one logical NIC and multiple IP addresses are assigned to that logical NIC. Using multiple NICs safeguards against the possibility of network card failure.






Oracle Real Application Clusters
ISBN: 1555582885
Year: 2004
Pages: 174