The Actual CRS Install Itself




Now that we have configured the shared storage and networking components properly, we can move on to the actual install of CRS. Assuming that all of the preceding configuration steps have been followed correctly, the CRS install itself is straightforward: get the CRS disk, mount the CD-ROM, and run ./runInstaller. Follow the prompts, and within a few minutes, voila! You have a cluster.
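For reference, on Linux the sequence might look something like the following sketch; the CD-ROM device and mount point shown are only examples and will vary by system.

# As root: mount the CRS CD-ROM (device and mount point are examples)
mount /dev/cdrom /mnt/cdrom

# As the oracle user: launch the Oracle Universal Installer from the CD
cd /mnt/cdrom
./runInstaller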

However, there are still a couple of caveats, the biggest being an existing Oracle Database 10g single-instance install. If you have a need or desire to run a separate ORACLE_HOME that supports only a single instance on one or all of the nodes, you may do so, but you should still install CRS first. The reason for this is the ocssd daemon. This daemon is needed, even in a single-instance environment, because you may choose to use ASM for your storage (recall that ASM can be used for single-instance or RAC installs). However, you should have only one CSS daemon, and if you also have a RAC environment, the CSS daemon must be running out of the CRS home. If you install CRS first, and then later install the Oracle RDBMS, this will work fine. During the RDBMS installation, the installer gives you the option to do a single-instance install (referred to as local only) via a check box on the Cluster Node Selection screen. You can therefore install Oracle into multiple homes, with one or more homes being RAC homes and one or more being single-instance (local only) homes, as long as CRS is installed first.

Coexistence of CRS and Local Only Installs

How about the case, then, where there is already an existing Oracle Database 10g single-instance install in place? This existing single-instance home will already have the CSS daemon running out of it, but we want that daemon to be running out of the CRS home, not out of the existing single-instance home. In that situation, there is a solution. Oracle provides a script in the ORACLE_HOME/bin directory called localconfig (on Windows, it is localconfig.bat). This script can be run (as root) from the existing single-instance ORACLE_HOME with the delete flag, like so:

/u01/app/oracle/10g/bin]$ ./localconfig delete 

Running localconfig delete will stop the ocssd daemon and remove its entries from /etc/inittab. It will also remove the existing OCR. During the subsequent CRS install, the entries for ocssd will be added back into /etc/inittab, but this time set up to run out of the CRS_HOME instead of the existing ORACLE_HOME. The localconfig script must be run on each node that will be part of the cluster if there is an existing Oracle Database 10g single-instance install on that node.

Note 

Since ASM relies on CSS, you must not run localconfig while any ASM instances are running. Because any databases using ASM in turn rely on the ASM instances, those databases should also be stopped before running localconfig.
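As a rough sketch of that sequence, assuming a single-instance database named orcl in the 10g home used in this chapter (the instance names are placeholders):

# As the oracle user: stop the database using ASM, then the ASM instance itself
export ORACLE_HOME=/u01/app/oracle/10g
export ORACLE_SID=orcl          # placeholder database SID
$ORACLE_HOME/bin/sqlplus / as sysdba <<EOF
shutdown immediate
exit
EOF
export ORACLE_SID=+ASM          # placeholder ASM SID
$ORACLE_HOME/bin/sqlplus / as sysdba <<EOF
shutdown immediate
exit
EOF

# As root: then run localconfig delete from the single-instance home
cd /u01/app/oracle/10g/bin
./localconfig delete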

Installing CRS

The main pieces that you need are now in place. All that is left is to decide on the location of the CRS install. It is worth reiterating that CRS should be installed into its own home, separate from any RDBMS installs. To have an OFA-compliant install, you should create an ORACLE_BASE directory, followed by a subdirectory within the ORACLE_BASE for the CRS_HOME itself. In the examples we are using for this chapter, the path to the ORACLE_BASE is /u01/app/oracle. Within it, we have created a directory called CRS, making the full path of the CRS_HOME /u01/app/oracle/CRS. Within the ORACLE_BASE directory is another subdirectory called 10g, which will be used later as the ORACLE_HOME. The directories from /u01 on down should be owned by the oracle user, with permissions of 775 on all directories and subdirectories.
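As a quick sketch, creating that structure with those ownership and permission settings (run as root; the oinstall group is an assumption) could be done as follows:

# Create the OFA-style ORACLE_BASE, CRS_HOME, and future ORACLE_HOME directories
mkdir -p /u01/app/oracle/CRS
mkdir -p /u01/app/oracle/10g

# Make the oracle user the owner from /u01 on down, with 775 permissions
chown -R oracle:oinstall /u01       # oinstall group assumed
chmod -R 775 /u01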

Installation Walkthrough

Before starting the installer, make sure that you are logged on as the oracle user, and ensure that the ORACLE_BASE environment variable is set appropriately in your environment. Kick off the installer by running ./runInstaller as the oracle user. The first screen you will see is the Language Selection screen, followed by the File Locations screen. On the File Locations screen, fill in the location for the CRS_HOME as the destination, and ensure that you assign this home a unique name. The first time you install on a machine, you will also be prompted to define the inventory location and then to run orainstRoot.sh (as root) to set up the inventory directories. The orainstRoot.sh script creates the oraInst.loc file, which defines the location of the Oracle inventory and the install group, which should be oinstall.
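A minimal sketch of that preparation follows; the inventory path shown is only an example, and the installer displays the exact path to orainstRoot.sh when it prompts you.

# As the oracle user, before launching ./runInstaller
export ORACLE_BASE=/u01/app/oracle

# When prompted (first Oracle install on this machine only), run as root:
/u01/app/oraInventory/orainstRoot.sh     # example path
cat /etc/oraInst.loc                     # should show the inventory location and group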

Next, you will see the Cluster Configuration screen (see Figure 4-4), where you will be prompted to enter the public and private names of each node. This is where your previous work comes into play: here you define the private and public names for the nodes, which up to now have been set up and defined only in the hosts file. Note that you can also give the cluster a name here. This name can be anything you like and will be stored in the Oracle Cluster Registry, in the SYSTEM.css.clustername section.

Figure 4-4: Defining cluster name, and public and private node names
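As an illustration, the hosts file entries backing these names might look like the following on each node; the IP addresses are placeholders, while the node and private names match the examples used later in this chapter.

# /etc/hosts (illustrative addresses only)
192.168.1.101   rmsclnxclu1      # public name, node 1
192.168.1.102   rmsclnxclu2      # public name, node 2
10.1.1.1        private1         # private interconnect name, node 1
10.1.1.2        private2         # private interconnect name, node 2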

The next screen is the Private Interconnect Enforcement screen (see Figure 4-5). This screen allows you to do two things. First, you can ensure that the correct network is used for IPC traffic (that is, the interconnect) by marking that network as Private and the other as Public. Second, this screen lets you choose multiple networks for private traffic. Thus, if you have three or more network cards and want to use two networks for private/IPC traffic (thereby providing extra bandwidth on the interconnect), you can define that here. Simply click the box under Interface Type and select Private next to each network you wish to use for IPC traffic. If you have additional network cards that you do not want used in the RAC configuration, leave the default of Do Not Use for those particular subnets.

Figure 4-5: Private Interconnect Enforcement screen

Note 

Private interconnect enforcement can also be achieved by using the init.ora parameter CLUSTER_INTERCONNECTS. On Linux, this parameter will be necessary until Patch 3385409 has been applied. See Chapter 5 for more information on setting CLUSTER_INTERCONNECTS in the spfile.
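As a preview of what Chapter 5 covers, setting the parameter in the spfile might look something like the following sketch; the interconnect addresses and instance SIDs are placeholders only.

# As the oracle user, once the RAC database and spfile exist (see Chapter 5)
sqlplus / as sysdba <<EOF
alter system set cluster_interconnects='10.1.1.1' scope=spfile sid='grid1';
alter system set cluster_interconnects='10.1.1.2' scope=spfile sid='grid2';
EOF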

Next, you will be prompted for the location of the OCR, followed by the location of the voting disk. Recall that if you are using raw devices or ASM for your database storage, you must have two separate raw partitions for these files; you will enter paths such as /dev/raw/raw1 and /dev/raw/raw2. In Figure 4-6, you can see that we are actually using an OCFS drive for the OCR. The permissions should be 755, with oracle as the owner (ownership of the OCR will be changed to root at the end of the install, but prior to the install, oracle must be the owner). One thing to note is that during the running of root.sh, a file called ocr.loc will be created in /etc/oracle, pointing to the location of the OCR. If this is an upgrade, or if the CRS install has been run previously, you will not be prompted for this location (the screen will not display); instead, the location defined in the ocr.loc file will be used. You will see the screen for the voting disk location regardless of any previous installs.

Figure 4-6: OCR location
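For example, preparing raw device locations ahead of the install might look like this sketch; the raw device names are examples, and the oinstall group is an assumption.

# As root, before the CRS install: oracle must own the OCR and voting disk
# locations, with 755 permissions (ownership of the OCR is changed to root
# by root.sh at the end of the install)
chown oracle:oinstall /dev/raw/raw1 /dev/raw/raw2
chmod 755 /dev/raw/raw1 /dev/raw/raw2

# After root.sh has run, the OCR location is recorded here:
cat /etc/oracle/ocr.loc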

At the end of the install, you will be prompted to pause and run the root.sh script, found in the ORA_CRS_HOME directory. This must be done on each node, one node at a time. This script will change the ownership of the OCR, as well as the ownership of the ORA_CRS_HOME directory, to the root user, but will leave execute permissions on the CRS_HOME for all users. root.sh is also where the CRS stack is first started. Successful output from root.sh should look something like this:

root@/u01/app/oracle/CRS>: ./root.sh
Checking to see if Oracle CRS stack is already up...
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
assigning default hostname rmsclnxclu1 for node 1.
assigning default hostname rmsclnxclu2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rmsclnxclu1 private1 rmsclnxclu1
node 2: rmsclnxclu2 private2 rmsclnxclu2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /ocfs/vote/vote_lnx.dbf
Successful in setting block0 for voting disk.
Format complete.
Adding daemons to inittab
Preparing Oracle Cluster Ready Services (CRS):
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rmsclnxclu1
CSS is inactive on these nodes.
        rmsclnxclu2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

As instructed above, follow this up by running root.sh on each of the remaining nodes in the cluster before finishing the install.
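Once root.sh has completed on every node, a quick sanity check is to ask CRS which nodes it knows about using the olsnodes utility in the CRS home's bin directory; in this chapter's example it should list both rmsclnxclu1 and rmsclnxclu2.

# As the oracle user, from any node (CRS home path from this chapter)
/u01/app/oracle/CRS/bin/olsnodes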

What Just Happened?

The installation of CRS will add the following files into the /etc/init.d directory (on Linux):

-rwxr-xr-x    1 root     root          763 Jan 16 18:02 init.crs
-rwxr-xr-x    1 root     root         2250 Jan 16 18:02 init.crsd
-rwxr-xr-x    1 root     root         5939 Jan 16 18:02 init.cssd
-rwxr-xr-x    1 root     root         2269 Jan 16 18:02 init.evmd

Soft links S96init.crs and K96init.crs are created in /etc/rc2.d, /etc/rc3.d, and /etc/rc5.d, pointing to init.crs in /etc/init.d. The following entries are also added to the /etc/inittab file:

h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null

These entries spawn the crs, css, and evm daemons at startup of the cluster node. The respawn action tells init to restart a daemon should it fail, whereas the fatal argument to init.cssd means that, should cssd fail, the entire node will be rebooted. This is necessary to avoid possible data corruption should the node become unstable.
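One quick way to confirm that init has picked up these entries and spawned the daemons on a node is shown in the following sketch.

# Confirm the CRS-related entries are present in the inittab
grep init.d/init /etc/inittab

# Confirm the daemons themselves are running
ps -ef | grep -E 'crsd|cssd|evmd' | grep -v grep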

Troubleshooting the CRS Install

Should errors occur during the CRS install, they will most likely occur during the running of the root.sh script at the end. This is where the modifications are made to the inittab and the CRS daemons are started for the first time. Failure of these daemons to start will be reported at the end of the root.sh run. Should you get an error here, the first place to check is the logfile for each of the associated daemons. These logfiles are found under the CRS_HOME in the directory for each daemon: <CRS_HOME>/css/log, <CRS_HOME>/crs/log, or <CRS_HOME>/evm/log.
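For example, using the CRS home from this chapter, a quick look at the most recently written logfiles might go as follows.

# Check the daemon log directories after a failed root.sh run
export CRS_HOME=/u01/app/oracle/CRS
ls -lrt $CRS_HOME/css/log $CRS_HOME/crs/log $CRS_HOME/evm/log
# The most recently modified file in each directory is listed last;
# examine it with tail or view it in an editor, for example:
tail -100 $CRS_HOME/css/log/*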





