Installing the RDBMS




Once CRS is installed, you are ready for the RDBMS install itself. Again, one of the first considerations before beginning the RDBMS install is the storage that will be used, both for the ORACLE_HOME and for the database itself. As we have said, in most environments you have a couple of options, and you can mix and match between them.

ORACLE_HOME on Local or Shared?

Probably the most basic option is simply to install to a private (or local) drive on each node. As with the CRS install, the push to the other nodes is done for you, allowing you to run the install just once from a single node; you will select the nodes to be pushed to during the install. If you choose this route, you may still choose to use OCFS for the datafiles, or you may choose either RAW devices or ASM as the storage medium.

ORACLE_HOME on Cluster File System

If you prefer, you may decide to use a shared ORACLE_HOME, with OCFS as the file system for it. On Linux, this requires that you be running OCFS version 2.0 or higher; OCFS version 1.x on Linux does not support installing the ORACLE_HOME and can be used only for the database files. On the Windows platform, Oracle also provides a cluster file system that can be used for both the ORACLE_HOME and the database files. This functionality has been available from Oracle on Windows since the initial release of OCFS, shortly after 9.2.0.1 was released.

ASM Considerations

Using OCFS for the ORACLE_HOME will not necessarily tie you into using OCFS for the datafiles themselves, though that is an option. In fact, Oracle envisions environments with OCFS as the file system for a shared ORACLE_HOME, with the database files themselves being stored using ASM. Since ASM will not support regular files (only database files and database backups), OCFS is a logical alternative for the ORACLE_HOME on those platforms that support it. In addition, you can still choose to install to the private drive on each node, and use ASM for the datafiles.

Confirm CRS Configuration

Prior to beginning the RDBMS installation, you must ensure that the CRS stack is running on all nodes in the cluster. To make this determination, run the olsnodes command from the bin directory in the CRS_HOME. Usage of olsnodes can be found by using the -help switch:

[root@rmsclnxclu1 bin]# ./olsnodes -help
Usage: olsnodes [-l] [-n] [-v] [-g]
        where
                -l print the name of the local node
                -n print node number with the node name
                -v run in verbose mode
                -g turn on logging

For example, olsnodes -n should return the node name of each node, as well as its node number:

[root@rmsclnxclu1 bin]# ./olsnodes -n
rmsclnxclu1     0
rmsclnxclu2     1

After verifying the correct feedback from the olsnodes command, you can begin the RDBMS install. Ensure that the ORACLE_HOME and ORACLE_BASE environment variables are properly set. (Recall that in our examples, ORACLE_BASE is set to /u01/app/oracle and ORACLE_HOME is /u01/app/oracle/10g.) You will also want to add the ORACLE_HOME/bin directory to the PATH. An example of the environment variables we have set is shown here:

ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/10g
ORACLE_SID=grid1
PATH=$ORACLE_HOME/bin:/usr/local/bin:$PATH
LD_ASSUME_KERNEL=2.4.19
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH LD_ASSUME_KERNEL

Note the value for LD_ASSUME_KERNEL. This environment variable is required when installing on Red Hat 3.0 (it is not needed for Red Hat 2.1 or other Linux variants). On Red Hat 3.0, the variable must be set not only for the oracle user but also for root, because it is needed to run the Virtual IP Configuration Assistant (VIPCA) at the end of the installation; otherwise, VIPCA may fail. To be sure that it is set, we recommend that you also add the LD_ASSUME_KERNEL lines above to the .bash_profile in /root, and that you reconnect as root prior to running root.sh at the end of the install. This will ensure that the environment variable is in effect for VIPCA.
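As a point of reference, the additions to /root/.bash_profile are simply the LD_ASSUME_KERNEL lines from the example above; a minimal sketch follows (assuming root uses bash, as is the default on Red Hat):

# Added to /root/.bash_profile on Red Hat 3.0 only, so that VIPCA
# (launched via root.sh) sees the same setting as the oracle user
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL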

Installing the Product

When you run the installer for the RDBMS with the CRS stack running on all nodes, you should see the Specify Hardware Cluster Installation Mode screen directly after the File Locations page of the installer. This screen is depicted in Figure 4-7. As alluded to earlier in this chapter, this is where you have the choice, if you prefer, to check the box for a Local Installation, which allows you to install into a home without the RAC option. However, for our purposes here, that is not what we want. Instead, ensure that all nodes in your cluster are listed in the Cluster Installation window. If you wish to do individual installations on each node in the cluster, you may do so by checking only the box next to the local node. Our recommendation is that you check the boxes for all nodes in the cluster, so that the install can be done only once and pushed to all nodes. This will also ensure consistency in the installations across the nodes and simplify patching down the road.

Figure 4-7: Cluster Installation screen - select nodes for install

Prerequisite Checks

After choosing Enterprise Edition, Standard Edition, or Custom for the installation type, you will be placed into the Product-Specific Prerequisite Checks screen. Here, the OUI runs through various checks, depending on which options you have chosen to install. These checks include confirmation of the required operating system version, any OS patches or packages needed, kernel parameters, and so on. If a particular check does not pass (for example, kernel parameters) but you know that the values are adequate for your needs, you can manually check the box next to that check; the status will switch to User Verified and allow you to proceed.

Database Creation

The next big decision is whether or not you want to create a database at the end of the installation. We recommend that you do not create a starter database during the installation; you can run the Database Creation Assistant manually once the installation is completed. Therefore, our advice is to select the option next to 'Do not create a starter database' and just get the binaries laid down. We will cover the creation of the database in the next section.

Install Progress and Pushing to Remote Nodes

For those of you who are experienced with Oracle installations, one of the pleasant surprises you will find with Oracle Database 10g is that the install is much faster. Many extraneous items have been moved off of the installation CD to a Companion CD, so the install itself fits onto a single CD. This eliminates the need to swap disks or to stage the CDs in order to get a smooth install. In addition, the process for pushing to the other nodes in the cluster is now smoother. In previous releases, most of the work on the other nodes was put off until the end, so the HA DBA would be stuck watching an installer sitting at 100 percent, wondering whether the installation was hung or there actually was work being done on another node. With Oracle Database 10g, operations on remote nodes are spread out across various stages of the install, so as the status bar crawls across the screen, it is a more accurate indicator of the overall progress. The bottom line is that you no longer have to let the install run overnight to ensure that it completes.

Configuration Assistants

At the end of the installation, several assistants are kicked off to finish the configuration of your installation. We have previously mentioned the VIPCA, or Virtual IP Configuration Assistant. This assistant is actually run as root, kicked off by the root.sh script toward the end of the installation. VIPCA must be run successfully before you can use the DBCA to create a database. The NETCA will automatically configure your tnsnames.ora and listener.ora files. In addition to the Database Configuration Assistant (DBCA), the OC4J Assistant will be run; this is necessary for Enterprise Manager and Grid Control functionality. In the following sections we provide an overview of what is needed for the two most critical assistants, the VIPCA and the DBCA.

VIPCA  As we mentioned, at the end of the installation you will be prompted to run the root.sh script out of the ORACLE_HOME directory on each node. Prior to doing so, you should ensure that the display is set appropriately for an X-Term, because the root.sh script will fire up the VIPCA. Remember, on Red Hat 3.0 it is critical that root has the LD_ASSUME_KERNEL environment variable set to 2.4.19. The other critical component to the success of the VIPCA is that the virtual IPs to be used have been previously configured in the hosts file on each node, as we demonstrated earlier in this chapter.
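As a quick sanity check before kicking off root.sh on the first node, the following interactive steps can be run (a sketch only; the DISPLAY value is a placeholder for your own X server, and the ORACLE_HOME path matches the examples earlier in this chapter):

# reconnect as root so the profile settings take effect
su - root
echo $LD_ASSUME_KERNEL           # should print 2.4.19 on Red Hat 3.0
export DISPLAY=workstation:0.0   # placeholder -- point at a reachable X server
cd /u01/app/oracle/10g
./root.sh                        # on the first node, this fires up VIPCA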

While root.sh must be run on each node of the cluster, VIPCA will only be fired off on the first node. The first screen is simply a Welcome screen, while the second screen displays all of the network interfaces found on the cluster. Select all of the public interfaces you will be using (do not select the private interfaces), and then proceed to the next screen, where you will enter the aliases and IP addresses to be used as virtual IPs. Refer to Figure 4-8, as well as the example hosts file provided earlier in this chapter in the section 'Configuring the Hosts File,' for specifics on what to provide on this screen.
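For reference, the hosts entries that VIPCA relies on look something like the sketch below. The node names match our examples, but the -vip and -priv aliases and all of the addresses shown here are hypothetical; use the values from the hosts file you configured earlier in the chapter.

# /etc/hosts (excerpt) -- all addresses below are examples only
192.168.1.101   rmsclnxclu1         # public interface, node 1
192.168.1.102   rmsclnxclu2         # public interface, node 2
192.168.1.111   rmsclnxclu1-vip     # virtual IP alias, node 1
192.168.1.112   rmsclnxclu2-vip     # virtual IP alias, node 2
10.1.1.1        rmsclnxclu1-priv    # private interconnect, node 1
10.1.1.2        rmsclnxclu2-priv    # private interconnect, node 2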

Figure 4-8: Specification of virtual IP addresses

After root.sh and VIPCA are run on the first node, run root.sh on all other nodes in the cluster. As noted, the VIPCA should not have to be run on the other cluster nodes. Instead, what you should see at the end of the root.sh script on the remaining nodes is feedback such as this:

Now product-specific root actions will be performed.
CRS resources are already configured

Once root.sh has been run on all nodes, return to the Installation screen and click on OK to finish out the installation.

DBCA  If you chose the option to configure a database, the DBCA runs automatically at the end of the install; in that case, you would have been prompted for various pieces of the RDBMS configuration before the installation actually started, including options such as the type of database, the database name, and the storage option (that is, RAW, ASM, or file system). We recommend that you run the DBCA independently after the installation, as you will have more flexibility in specifying parameters and other options. In addition, if you are using ASM, you have more flexibility in defining the disks and disk group names to be used by the database.

What we do not recommend is attempting to manually create the database without using the DBCA at all. The DBCA automates many functions, particularly those necessary for a smooth operation in a RAC environment. Examples include configuration of networking files using the VIPs, running of srvctl commands (which we will discuss in Chapter 6), and the automatic creation of ASM instances if you decide to use ASM for your storage, as we discussed in Chapter 3.

The DBCA can be run on its own, as the oracle user, by changing to the ORACLE_HOME/bin directory and simply running ./dbca. Again, make sure that the display is set properly for an X-Term session, and ensure that the CRS stack is properly configured (see the olsnodes command mentioned previously) so that when DBCA is run you will see the Node Selection screen. The very first screen in the DBCA should give you the option to choose between an 'Oracle Real Application Clusters database' and an 'Oracle single instance database.' If you do not get the choice for the Real Application Clusters database, that is an indication that the CRS stack is not running or has experienced a problem (see the section on the ocrcheck utility at the end of Chapter 6 for instructions on checking the integrity of the Oracle Cluster Registry). The next two screens should then allow you to select the option to create a database and then choose the nodes on which instances for that database will be created. Subsequent screens allow you to choose a predefined database template or create your own custom database. You will then be prompted for the DB name, passwords for various accounts, and the all-important Storage Options screen (see Figure 4-9).
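To recap the launch steps just described, a minimal sketch as the oracle user might look like this (the DISPLAY value is a placeholder, and CRS_HOME here stands in for wherever you installed CRS):

export DISPLAY=workstation:0.0   # placeholder -- point at a reachable X server
$CRS_HOME/bin/olsnodes -n        # confirm the CRS stack sees all nodes
cd $ORACLE_HOME/bin
./dbca                           # should offer the Real Application Clusters option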

Figure 4-9: Database Storage Options screen in DBCA

Using RAW Devices for Database Files  Storage Options, as shown in Figure 4-9, allows you to choose between a cluster file system (CFS), ASM, or RAW for the location of your database files. For details on using ASM, please refer to the 'Automatic Storage Management (ASM)' section of Chapter 3, which explains how the DBCA creates the ASM instance and disk groups for use with your database. If choosing a cluster file system, the database creation is very straightforward. We would simply recommend that you ensure that a base directory such as /ocfs/oradata is created ahead of time on the appropriate OCFS drive(s), and owned by oracle. So far, we have not made much mention of using RAW devices, so we will devote the rest of this section to that very topic.
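Before moving on to RAW devices, here is a quick sketch of the OCFS directory preparation just mentioned, run as root on one node (the dba group is an assumption; use whatever group your oracle user belongs to):

mkdir -p /ocfs/oradata           # base directory on the shared OCFS drive
chown oracle:dba /ocfs/oradata   # dba group is an assumption -- adjust to your environment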

For RAW device configuration, you should ensure that you have created all of the necessary RAW slices, as defined in the appropriate preinstallation chapter. For Linux, refer to the 'Shared Storage Configuration' section earlier in this chapter and to the HA Workshop at the beginning of the chapter, where we discussed using the fdisk command to create the partitions. Keep in mind that you must create one partition for each file. This includes every datafile, logfile, and controlfile, as well as the system parameter file (discussed further in Chapter 5). In addition, we recommend that you create a RAW device mapping file, mapping a predefined link name for each file associated with the database to a specific RAW slice (each slice having been bound to a partition in the /etc/sysconfig/rawdevices file). In our example, the db_name is grid, so you would create a mapping file called grid_raw.conf. Within that file, each link is defined to point to a RAW slice of the appropriate size for that file. The following table gives you an idea of how to lay these files out:

Link Name     Disk Slice        Size
System        /dev/raw/raw5     600MB
Sysaux        /dev/raw/raw6     600MB
Users         /dev/raw/raw7     100MB
Temp          /dev/raw/raw8     500MB
redo1_1       /dev/raw/raw9     250MB
redo1_2       /dev/raw/raw10    250MB
redo2_1       /dev/raw/raw11    250MB
redo2_2       /dev/raw/raw12    250MB
control1      /dev/raw/raw13    100MB
control2      /dev/raw/raw14    100MB
undotbs1      /dev/raw/raw15    650MB
undotbs2      /dev/raw/raw16    650MB
Spfile        /dev/raw/raw17    25MB
Pwdfile       /dev/raw/raw18    25MB

The sizes listed are recommended minimum values. You may decide you need larger sizes, particularly for certain tablespaces and/or the redo logs; adjust the values in the table as you deem appropriate. Also keep in mind, as we mentioned in the 'Shared Storage Configuration' section, that these RAW devices must be bound to the actual partitions (as shown via the fdisk -l command) in the /etc/sysconfig/rawdevices file.
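A minimal sketch of those /etc/sysconfig/rawdevices entries is shown here. The raw device names match the table above, but the block device partitions are hypothetical; use the partitions reported by fdisk -l on your shared storage.

# /etc/sysconfig/rawdevices (excerpt) -- raw device on the left,
# shared disk partition on the right (partition names are examples only)
/dev/raw/raw5   /dev/sdb5
/dev/raw/raw6   /dev/sdb6
/dev/raw/raw7   /dev/sdb7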

Combine this information with the contents of your rawdevices file to lay the files out so that they are evenly distributed across multiple disks, balancing I/O operations. Use the Size column to verify that the partitions are adequately sized for the appropriate file. Then, create the grid_raw.conf file by mapping each filename to the appropriate RAW device, for example:

system=/dev/raw/raw5
sysaux=/dev/raw/raw6
users=/dev/raw/raw7
etc.

After choosing the appropriate storage options, continue on, setting initialization parameters as needed for the SGA size, choosing the desired character set, and so on. At the end of the database creation, you should see a screen pop up like the one shown in Figure 4-10, signifying that you are done. (That example used ASM as the storage medium.) Voila! You are done; you now have a RAC-enabled database. In the next two chapters, we will go into more detail on managing your RAC environment, as well as configuring resources for high availability.

Figure 4-10: Database Creation Complete





