8.3 Installation




Based on the hardware concepts and technologies discussed in Chapter 2 (Hardware Concepts), an appropriate hardware platform should have been selected before proceeding to the installation and configuration phase of the product.

Before installing and configuring the database, a wide variety of details should be considered as part of the preinstallation process. Such questions include:

  • What should be the physical configuration of the machine?

  • What layered products will it have, taking into consideration the basic system requirements of scalability and high availability?

Once this selection has been made, the next step in the installation process is to read through the installation guide to determine whether any prerequisites specific to the chosen O/S must be completed. Prior to installing the product, it is always good practice to review the release notes and the installation guidelines provided. All steps required for the installation should be added to a formal work plan, created and reviewed for errors and omissions. Following a work plan of this nature helps keep track of the work accomplished; it also makes it easier to follow up on any tasks not completed.

Let us review some of the preinstallation steps that should be considered before installing any Oracle-based configuration.

8.3.1 Preinstallation steps

You will need to ensure that:

  1. The clustered configuration has been set up, and that you have the correct versions of the clustered operating system and the cluster interconnect configuration.

  2. The Oracle version has been certified for the version of the operating system.

  3. A work plan to install the product has been completed. (A sample work plan is included in Appendix 4 of this book.)

  4. Based on the architecture of the database, all tools and utilities that will be installed from the CD have been preselected.

  5. All the latest installation documents have been downloaded and verified for installation procedure and requirements.

  6. All required patches for the installation, operating system, and Oracle have been downloaded and verified.

  7. A backup is taken of the entire system. This is a precautionary measure; if there are any installation issues, the backup can be restored to return the system to its prior state.

  8. There is enough disk space and memory, as specified in the system requirements section of the installation guide (a verification sketch follows this list).

  9. Disks with sufficient space have been identified for the Oracle product directory, including the required swap space. Oracle requires swap space of about four times the install space during the installation process. This space is released after the installation; however, it is essential for completing the installation. While allocating space, consideration should be given to future releases of Oracle as they become available and require installation and/or upgrade. Avoid NFS mounting of disks; availability and reliability are at risk on NFS-mounted devices.

  10. A Unix systems administrator has created all Oracle prerequisites, including the Oracle administration user ID and the required directories under the dba group, at the operating system level. As required by Oracle, a user named oracle, with an exclusive password, should be created for maintaining and administering the database. The Unix systems administrator creates this account at the operating system level, and the product must be installed as the oracle user.

  11. The required installation directories have been defined per the OFA specification. Depending on the hardware platform, a RAC implementation involves another task, namely the configuration of raw devices. Since these are fixed-size partitions, they require a considerable amount of planning and organization.

  12. Archive log files should be on file systems. If raw devices are being used, ensure that sufficient disk space is available on file systems for the archive log files. If the file systems are NFS mounted (NFS-mounted devices have performance limitations and should be avoided if possible), do not NFS mount the archive destinations on all nodes from a single source.
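
The disk space, swap, and memory checks in items 8 and 9 can be scripted before the installation begins. The following is a minimal sketch for a typical Unix or Linux node; the mount point and figures shown are assumptions and should be adjusted to match the system requirements section of the installation guide for the specific release and platform.

    # Verify free space on the intended Oracle product directory
    # (mount point assumed to be /app/oracle)
    df -k /app/oracle

    # Verify configured swap space; Oracle needs roughly four times the
    # install footprint available during the installation
    swap -l       # Solaris
    swapon -s     # Linux

    # Verify physical memory against the documented minimum
    grep MemTotal /proc/meminfo    # Linux
    prtconf | grep Memory          # Solaris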

After the selection of the hardware and operating system, and once the preinstallation steps have been verified, the next step in the process is to configure the hardware itself. The first step is the configuration of a common shared disk subsystem.

Once the storage media has been selected, the next step is the selection of the appropriate RAID technology. The commonly used RAID levels have been discussed in detail in Chapter 2 (Hardware Concepts). Depending on the usage (data warehouse, OLTP, etc.) the appropriate RAID level should be selected.

Once the RAID level has been determined, the next step is to configure the devices.

Device configuration

RAC requires that all instances be able to access the shared storage, either as raw devices or through an Oracle-certified clustered file system. In this section, we will look at both of these methods of device configuration.

Note 

While most of the details pertaining to any specific platform have been avoided, certain inclusions and examples are unavoidable.

Raw devices

A raw device partition is a contiguous region of a disk accessed by a Unix character-device interface. This interface provides raw access to the underlying device, arranging for direct I/O between a process and the logical disk. Therefore, the issuance of a write command by a process to the I/O system directly moves the data to the device. Raw devices are precreated using O/S-specific commands before the database can be created. In this section we will look at how to set up and define raw devices.

Setting up the raw devices

The first step in this process of setting up raw devices is the configuration of the shared disk subsystems. To accomplish this step we have to define the raw devices.

Defining raw devices

Each disk attached to the cluster is referred to as a physical volume. These physical volumes, when in use, belong to a volume group, and are divided into physical partitions. Within each volume group, one or more logical volumes are defined. Each logical volume represents a set of information located on the physical volume. The information found on the logical volume appears contiguous to the user, but can be discontiguous at the physical volume layer.

By providing the definition of logical volumes to manage disk space, the volume manager can provide greater availability, performance, and flexibility than a single physical disk. Among other benefits, a volume can be extended across multiple disks to increase capacity, striped across disks to improve performance, and mirrored on another disk to provide data redundancy.
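
To make these concepts concrete, the following is a minimal sketch of creating a physical volume, a volume group, and one logical volume using HP-UX-style LVM commands, as is done in step 1 of the configuration procedure later in this section. The device names, minor number, and size are hypothetical; other volume managers use different commands.

    # Initialize the disk as an LVM physical volume (raw device path)
    pvcreate /dev/rdsk/c2t52d0

    # Create the volume group directory and its group device file
    mkdir /dev/vg_raw_system
    mknod /dev/vg_raw_system/group c 64 0x010000

    # Create the volume group on the physical volume
    vgcreate /dev/vg_raw_system /dev/dsk/c2t52d0

    # Carve out a 500 MB logical volume; its raw (character) device
    # will appear as /dev/vg_raw_system/rsystem
    lvcreate -L 500 -n system /dev/vg_raw_system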

Creation of raw devices

Raw devices need to be defined for the control files, parameter file, data files, and online redo log files. Apart from these traditional files, another device is also required to store the server manager configuration information. Because raw devices are prepartitioned to a specific size, it is important to analyze and define a chart containing the various sizes that will be required for the database definitions.

For example, the system tablespace in Oracle 9i requires about 500 MB of space. Take into consideration functionality, such as replication or advanced queuing, that could subsequently be added. Providing for this growth, the optimal size of the system tablespace could be somewhere around 800 MB. How many other data files would require a similar size? These are the questions that need to be asked and answered based on the current database structure.

Consider a system that requires partitions of various sizes for the system files (system, temp, tools, control files, etc.). Based on the example above, the numbers and sizes of partitions might be as follows:

  1. 100 partitions of 100 MB each

  2. 100 partitions of 200 MB each

  3. 100 partitions of 500 MB each

  4. 50 partitions of 800 MB each

  5. 50 partitions of 1000 MB each

Note 

On operating systems such as Linux there is a limitation of 128 SCSI disks that can be connected to a machine, and the number of partitions per disk is limited to 15, of which only 14 are usable. The number of raw devices that can be recognized by the Linux kernel is limited to 255.
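
On Linux, each disk partition must additionally be bound to one of these 255 raw device nodes before Oracle can use it. A minimal sketch of the binding, with hypothetical partition names, is shown below; bindings made this way do not survive a reboot unless repeated from a startup script such as /etc/rc.local.

    # Bind raw device nodes to the shared disk partitions
    raw /dev/raw/raw1 /dev/sdb1
    raw /dev/raw/raw2 /dev/sdb2

    # Query all existing raw bindings to verify
    raw -qa

    # Make the raw devices accessible to the oracle user
    chown oracle:dba /dev/raw/raw1 /dev/raw/raw2
    chmod 660 /dev/raw/raw1 /dev/raw/raw2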

Configuring raw devices into volume groups and logical volumes

  1. Create all necessary volume groups (VG) and logical volumes on the first node.

  2. Deactivate all the VGs and export the structure of each VG to a map file by issuing the following command. Remember, only one map file is created per VG.

    vgexport -v -s -p -m /tmp/vg_raw_system.map /dev/vg_raw_system
  3. Copy all map files to the remote node, using the rcp command.

  4. Log into the other node.

  5. Create all VG directories and group files corresponding to those on the first node by issuing the following commands; the numbers selected should match the values defined on the first node.

    mkdir /dev/vg_raw_system
    mknod /dev/vg_raw_system/group c 64 0x010000

  6. Import the structure of all the VGs by issuing the following command:

    vgimport -v -s -m /tmp/vg_raw_system.map /dev/vg_raw_system /dev/dsk/c2t52d0s2
  7. Change the permissions on all the VGs to 777, and change the permissions on all raw LVs to 660, with oracle:dba being the owner and group on both nodes. This is accomplished using the following commands:

    chmod 777 /dev/vg_raw_system
    chmod 660 /dev/vg_*/r*
    chown oracle:dba /dev/vg_*/r*
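
Before proceeding, it is worth confirming on each node that the imported logical volumes carry the expected ownership and permissions; a quick check along these lines (volume group name as assumed above) avoids permission errors later during database creation.

    # Confirm owner oracle, group dba, and mode 660 on the raw LVs
    ls -l /dev/vg_raw_system/r*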

Once the devices have been configured and made shareable from all nodes participating in the clustered configuration, another prerequisite has to be completed before the installation process can proceed: creating a user named oracle.

Creation of user

  1. Connect to the node as user root.

  2. Create the group corresponding to the OSOPER role in the /etc/group file on all nodes of the cluster. The default name for this group is dba. It is recommended that another group, oinstall, also be created; this group is responsible for installing the Oracle software. The /etc/group file has the following format:

    groupname:password:gid:user-list

    For example a typical entry should look like this:

    dba::8500:oracle,mvallath
    oinstall::42425:root,oracle
  3. Next, create an oracle user account that is a member of the dba group, with the following command:

    $ useradd -c "oracle software owner" -G dba,oinstall -u 222 -s /bin/ksh oracle

  4. Create a mount point directory on each node to serve as the top-level directory for the Oracle software. The name of the mount point should be identical to that on the initial node. The oracle account should have read, write, and execute privileges on this directory.

  5. Depending on the node from which the Oracle installation will be performed using the Oracle Universal Installer, user equivalence must be established by adding an entry for all nodes in the cluster, including the local node, to the .rhosts file of the oracle account or to the /etc/hosts.equiv file.

Before moving to the next step, verify that all the work performed thus far is accurate. For example, the attributes of the oracle user must be the same on all nodes. This can be verified using the rlogin command: if a password is requested, then user equivalence is not set. The remote copy command rcp can also be used to evaluate user equivalence.
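
A quick way to run these checks from the installing node is sketched below; the node names are hypothetical, and each command should complete without prompting for a password if user equivalence is set correctly.

    # Confirm the oracle account is defined identically on every node
    for node in mars venus; do
        rsh $node id oracle
    done

    # Exercise rcp as a second test of user equivalence
    touch /tmp/equiv_test
    rcp /tmp/equiv_test venus:/tmp/equiv_test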

Clustered file system (CFS)

A Unix file system is a hierarchical tree of directories and files implemented on a raw device partition through the file system subsystem of the kernel. Traditional file systems use the concept of a buffering cache that optimizes the number of times the operating system must access the disk. The file system releases a process that executes a write to disk by taking control of the operation, thus freeing the process to continue other functions. The file system then attempts to cache or retain the data to be written until multiple data writes can be done at the same time. This can have the effect of enhancing system performance.

However, system failures before writing the data from the cache can result in the loss of file system integrity. Additionally, the file system adds overhead to any operation that reads or writes data in direct accordance with its physical layout.

Shared file systems allow access from multiple hosts to the same file system data. This reduces the need for multiple copies of the same data, while distributing the load across the hosts accessing that data.

Oracle supports CFS on certain hardware platforms, such as HP OpenVMS and HP Tru64. Oracle uses the Direct I/O feature available in CFS. Direct I/O enables Oracle to bypass the buffer cache; Oracle manages the concurrent access to the file itself, similar to what it does with raw devices. On a CFS without Direct I/O enabled on files, file access goes through a CFS server. A CFS server runs on a cluster member and serves a file domain; a file domain can be relocated online from one cluster member to another, and may contain one or more file systems.

VERITAS file system configuration

VERITAS Database Edition™ Advanced Cluster for Oracle 9i RAC enables Oracle to use a CFS. The VERITAS CFS is an extension of the VERITAS File System (VxFS). VERITAS CFS allows the same file system to be mounted simultaneously on multiple nodes. Any node can initiate an operation to create, delete, or resize data; the actual operation is carried out by the master node.

Oracle clustered file system

CFS is a shared file system designed specifically for RAC. CFS eliminates the requirement for Oracle database files to be linked to logical drives and enables all nodes to share a single Oracle home instead of requiring each node to have its own local copy. CFS volumes can span one shared disk or multiple shared disks for redundancy and performance enhancements.

Oracle currently provides CFS for platforms that use Linux and Windows operating systems. These operating systems are in their infancy and do not contain any robust mechanisms for managing clusters to the extent useful for Oracle. Hence, clustered file systems for these operating system platforms have been developed and implemented by Oracle.

Configuring the kernel

Kernel configuration on operating systems such as Unix and Linux involves sizing the semaphores and the shared memory (Table 8.1). The shared memory feature of the operating system is required by Oracle.

Table 8.1: Kernel Parameters

  SHMMAX – Maximum allowable size of a single shared memory segment. Normally this parameter is set to half the size of the physical memory.

  SHMMIN – Minimum allowable size of a single shared memory segment.

  SEMMNI – The number of semaphore set identifiers in the system; it determines the number of semaphore sets that can be created at any one time.

  SEMMSL – The maximum number of semaphores that can be in one semaphore set. Should be set to the sum of the PROCESSES parameter of each Oracle instance, counting the largest value twice, plus an additional 10 for each additional instance.

The SGA resides in shared memory; therefore shared memory must be available for each Oracle process to address the entire SGA.

Table 8.2 shows the recommended semaphore and shared memory settings for various operating systems. The values of these shared memory and semaphore parameters are set in the kernel configuration file of the operating system; on most systems this file is /etc/system.

Table 8.2: Semaphore and shared memory settings

  Solaris:
    Shared memory – SHMMAX = 8,388,608; SHMSEG = 20; SHMMNI = 100
    Semaphores – SEMMNS = 200; SEMMSL = 50; SEMMNI = 70

  HP-UX:
    Shared memory – SHMMAX = 0x4000000; SHMSEG = 12
    Semaphores – SEMMNS = 128; SEMMNI = 10

  HP Tru64:
    Shared memory – SHMMAX = 419304; SHMSEG = 32
    Semaphores – SEMMNS = 200; SEMMNI = 50

  Linux:
    Shared memory – SHMMAX = physical memory / 2; SHMMIN = 1
    Semaphores – SEMMNI = 1024; SEMMSL = 100; SEMOPM = 100; SEMVMX = 32,767

The values of the system-level kernel parameters can be checked (on most systems) using the sysdef command.
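
On Linux, where sysdef is not available, the same parameters can be inspected and changed through the /proc interface or the sysctl command. The following is a sketch only; the values are illustrative and should be taken from Table 8.2 and the platform installation guide.

    # Inspect the current shared memory and semaphore settings
    cat /proc/sys/kernel/shmmax
    cat /proc/sys/kernel/sem       # semmsl semmns semopm semmni

    # Apply new values to the running kernel
    sysctl -w kernel.shmmax=2147483648
    sysctl -w kernel.sem="100 32000 100 1024"

    # Persist the settings across reboots
    echo "kernel.shmmax = 2147483648" >> /etc/sysctl.conf
    echo "kernel.sem = 100 32000 100 1024" >> /etc/sysctl.conf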

Open file descriptors limit

Operating systems maintain two parameters that define the maximum and minimum open file descriptor limits on Unix and Linux platforms. While the maximum value is typically set to 4096, the minimum value is calculated using the following formula:

db_files * 2  (twice, for an equal number of temp files to be opened)
+ 2 * maximum_no_of_log_files_simultaneously_opened
+ maximum_number_of_controlfiles
+ safety_margin_for_misc_files  (trace files, alert logs, etc.; minimum 32)
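
As a worked example of the formula, a database with db_files = 200, a maximum of 8 log files open simultaneously, and 3 control files needs at least 200*2 + 2*8 + 3 + 32 = 451 descriptors. The commands below sketch how to check and raise the per-process limit for the oracle user's session; the hard limit of 4096 is an assumption.

    # Show the current soft and hard open file descriptor limits
    ulimit -Sn
    ulimit -Hn

    # Raise the soft limit up to the assumed hard limit of 4096
    ulimit -n 4096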

Hangcheck-timer

The oracm for Linux now includes a Linux kernel module called hangcheck-timer. This module monitors the Linux kernel for long operating system hangs that could affect the reliability of a RAC node and cause corruption of a RAC database. When such a hang occurs, the module reboots the node. This approach offers the following advantages over the approach used by its predecessor, watchdogd:

  • Node resets are triggered from within the Linux kernel, making them much less affected by system load.

  • Oracm on a RAC node can easily be stopped and reconfigured because its operation is completely independent of the kernel module.

Oracle 9iR2 

New Feature: The watchdog daemon process that existed in Oracle Release 9.1 impacted system availability, as it initiated system reboots under heavy workloads. This daemon has now been removed from Oracle. In its place, Version 9.2.0.3 of the oracm for Linux includes a Linux kernel module called hangcheck-timer. The hangcheck-timer module monitors the Linux kernel for long operating system hangs and reboots the node if one occurs, thereby preventing potential corruption of the database. This is the new I/O fencing mechanism for RAC on Linux.

Configuration parameters

The removal of watchdogd and the introduction of the hangcheck-timer module require several parameter changes in the CM configuration file, $ORACLE_HOME/oracm/admin/cmcfg.ora.

  1. The following watchdogd-related parameters are no longer valid and should be removed from all nodes in the cluster:

    WatchdogTimerMargin
    WatchdogSafetyMargin
  2. A new parameter that identifies the hangcheck module to the oracm must be added to the cmcfg.ora file:

    KernelModuleName=hangcheck-timer
    Note 

    If the module named in KernelModuleName is correctly specified but not loaded, or is incorrectly specified, the oracm will produce a series of error messages in the syslog system log (/var/log/messages); however, this will not prevent the oracm process from running. The module must be loaded prior to oracm startup.

  3. The following parameter is now required; it informs the oracm to use the quorum partition:

    CMDiskFile=<quorum disk directory path>
  4. The following new parameters have been introduced; they are used when the hangcheck-timer module is loaded and indicate how long a RAC node must hang before the hangcheck-timer resets the system.

    • hangcheck_tick – an interval indicating how often the hangcheck-timer checks the health of the system.

    • hangcheck_margin – certain kernel activities may randomly introduce delays in the operation of the hangcheck-timer. hangcheck_margin provides a margin of error to prevent unnecessary system resets due to these delays.

    • A node reset occurs when the system hang time > (hangcheck_tick + hangcheck_margin).

    • For example, adding the following lines to the rc.local script demonstrates the loading of the hangcheck-timer module:

      # load hangcheck-timer module for ORACM 9.2.0.3
      /sbin/insmod /lib/modules/2.4.19-4GB/kernel/drivers/char/hangcheck-timer.o hangcheck_tick=30 hangcheck_margin=180

    The following are the contents of a sample cmcfg.ora file:

    HeartBeat=15000
    ClusterName=Oracle Cluster Manager, version 9i
    KernelModuleName=hangcheck-timer
    PollInterval=1000
    MissCount=250
    PrivateNodeNames=mars-int venus-int
    PublicNodeNames=mars venus
    ServicePort=9998
    CmDiskFile=/dev/quorum
    HostName=venus-int
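
Since oracm runs even when the module has failed to load (logging errors to /var/log/messages, as noted above), it is prudent to confirm that the module is resident before starting oracm; a minimal check:

    # Verify the hangcheck-timer module is loaded
    /sbin/lsmod | grep hangcheck

    # Scan the system log for oracm or hangcheck-timer errors
    grep -i hangcheck /var/log/messages | tail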


8.3.2 Installing Oracle

As the functionality of the product has increased over the years, the amount of space required for the installation has increased considerably, and so has the media containing the software. Oracle is supplied on multiple CD-ROM discs (Oracle 9i Enterprise Edition is shipped on three CD-ROMs), which means that completing the installation requires switching CD-ROMs. The Oracle Universal Installer (OUI) manages the switching between CDs; however, if the working directory is set to the CD device, OUI cannot unmount it. To avoid this problem, ensure that the current directory is not the CD-ROM device before starting the OUI process.

An alternative method to avoid switching CD-ROMs is to copy their contents onto disk before installation.
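
A sketch of this copy, with hypothetical device names, mount points, and staging directory, might look like the following; each CD is mounted in turn and copied into its own Disk directory so that the OUI can locate the subsequent discs automatically.

    # Create a staging area large enough to hold all three CDs
    mkdir -p /u01/stage/Disk1 /u01/stage/Disk2 /u01/stage/Disk3

    # Mount and copy each CD in turn (device and mount point vary by platform)
    mount -r /dev/cdrom /mnt/cdrom
    cp -R /mnt/cdrom/* /u01/stage/Disk1
    umount /mnt/cdrom
    # ... repeat for Disk2 and Disk3 after swapping CDs

    # Start the installer from the staging area
    /u01/stage/Disk1/runInstaller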

Once all the preinstallation steps, including the creation of all directories required for the product, have been completed, the next step is to install the product. To accomplish this, the database administrator connects to the system as the oracle user:

  1. Log in as the oracle user.

  2. From the ORACLE_HOME directory, the following command is issued at the command line:

    oracle$ /<cdrom_mount_point>/runInstaller

    This command invokes the OUI screen. If the installation is on a Windows platform, the OUI starts automatically on insertion of the CD.

    Caution 

    A word of caution is in order at this stage: the OUI software is written in Java and requires considerable memory to load. The database administrator should ensure that sufficient memory is available when using this tool.

    Figure 8.3 shows the first introduction screen of the OUI. This screen provides three options: install/deinstall products, explore CD, and browse documentation. Select the first option to install the Oracle database product.

    Figure 8.3: Oracle Universal Installer.

  3. The next screen is the welcome screen (Figure 8.4), which gives the user or database administrator the choice either to install the product or to deinstall products installed during an earlier process. From this screen, select "Next" if the intention is to install new software. (Prior to Oracle 8i, Oracle did not have an OUI product to install the software.)

    Figure 8.4: Welcome screen.

    Note 

    The OUI will deinstall only tools or products that have been installed by the same version of the installer.

  4. If this is the first time the OUI has been run on this system, a prompt appears for the inventory location. This is the base directory into which the OUI installs files. Enter the directory path for the Oracle install directory in the text field and click "OK."

  5. The next screen is used to select the Unix group name; enter dba as the group name and click "Next" to continue. If a window pops up providing instructions to run /orainstRoot.sh, the Unix administrator with root privileges is required to run the orainstRoot.sh file; then click "OK."

  6. The next screen (Figure 8.5) identifies the file locations for the media containing the Oracle database product and the destination where the product will be installed. In this screen the default ORACLE_HOME is defined. In a Unix environment this directory is precreated for easy installation; on Windows, if the directory does not already exist, it will be created by the OUI. The destination paths will vary between operating systems; however, it is good practice to follow a structure similar to the OFA even in environments such as Windows. For example, the install directory for a Unix installation would be /app/oracle/product/9.2.0, and following a similar directory path on a Windows platform, the directory structure could be E:\usr\app\oracle\product\9.2.0.

    Figure 8.5: File location and ORACLE_HOME identification screens.

  7. The OUI loads all high-level product information that the CD contains. Since we are planning to install the Oracle database product, the next screen takes us to this selection. Figure 8.6 shows the various installation types: Oracle Enterprise Edition (OEE), standard edition, personal edition, or custom install. Once the type is selected, and based on whether the CD is for the standard edition or OEE, the appropriate options available on the CD are loaded. It should be noted that a similar install and load process applies when installing all Oracle products; Oracle has standardized the installation of all its products using the OUI interface.

    Figure 8.6: Selecting the appropriate installation type.

    Our main plan is to install the RAC feature along with the other features identified in Chapter 7 (Database Design). Since the RAC and partitioning features are bundled with OEE, ensure that the correct set of CD-ROMs is available. However, to ensure that only the required database components are installed, it is safer to select the custom option from this screen. Select this option and click "Next."

    Note 

    OEE has all tools and utilities that Oracle provides, for example, advanced replication, advanced queuing, database partitioning, Oracle Enterprise Manager, etc. Unless all these features or tools of the database will be used, it is not advisable to install them. Furthermore, every additional option has a price attached to it. Unless the options available under OEE are absolutely necessary (such as RAC), it is not worth purchasing this edition of the product.

  8. The next screen is the database configuration screen. In this screen, select whether the Database Configuration Assistant is to be used to create the RAC database.

  9. The next screen is the cluster node selection screen, which helps identify the other nodes in the cluster onto which the Oracle RDBMS software is to be installed. After making the selection, click "Next."

  10. Once the cluster nodes have been identified, the next screen is the raw device identification screen, used to identify the device that will hold the shared configuration file. This device should be on a common shared disk visible to all instances in the cluster. After entering the device name, click "Next."

    Note 

    The raw partition or device selected for the configuration file should be at least 100 MB in size.

  11. The next screen is the custom selection screen.

    Note 

    The custom software selection screen appears only if the custom option was selected in Figure 8.6. If OEE was selected, then a summary screen is displayed instead of the custom software selection screen.

Figure 8.7 gives a list of components provided by the OUI. The database administrator selects the components to install and deselects those that are not required. While installing the software it is good practice to also install a copy of the OUI with the components, which helps in easy installation and deinstallation of products, if required at a later date. Confirm that the Oracle 9i RAC database software will be installed and click "Next."

Figure 8.7: Custom software selection screen.

Note 

The RAC option will show in the list of products to be installed, provided the clustered file system or raw devices have been configured correctly per the specifications/recommendations for the appropriate hardware platform.

The next few steps required for the installer process are self-explanatory, in the sense that they provide sufficient information for easy navigation.

Another important precaution concerns the amount of memory: with every release of Oracle, additional memory could be required, and it is always good practice to monitor the progress of the installation.
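
One simple way to keep an eye on memory while the OUI runs is to watch virtual memory statistics from another session; a minimal sketch (the sampling interval is arbitrary):

    # Report memory and swap activity every 5 seconds during the install
    vmstat 5

    # On Linux, free gives a quick summary of physical and swap memory
    free -m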

  1. The next screen is invoked automatically and provides information about the configuration tools that will be installed. Figure 8.8 displays the various configuration tools that need to be set up. For example, if required, the database administrator could use the Oracle Net Configuration Assistant to install the required Oracle Net files, such as the listener and tnsnames.ora files; these files are required for the Oracle Net component to function. If the database administrator chooses to configure these components manually, he or she can select "Next" and proceed to the next part of the installation.

    Figure 8.8: Configuration tools selection.

  2. For first-time database administrators installing and configuring the database files, it is advisable to use the configuration assistants, which guide the user through the required installation. Figure 8.9 gives the various options available for the Oracle Net Configuration Assistant. Using this tool, Oracle Net, which includes the tnsnames and listener files, can be installed and configured.

    Figure 8.9: Oracle Net Configuration Assistant.

    Similarly, using the Oracle Net Configuration Assistant, other components, such as agents (including the database agent), can be configured. Once all the required agents and components have been configured, the database software installation process is complete.

The configuration assistants include the database configuration and creation assistants. These components help in configuring and setting up the database either as a stand-alone configuration or as a RAC configuration.





