8.5 Database configuration




8.5.1 Cluster configuration

  1. The first and foremost step in the cluster configuration is to ensure that the node is aware of the database. To provide this information, the oratab file in the /etc directory must be updated with the following entry:

    <DATABASE NAME>:<ORACLE_HOME>:<REBOOT STATUS> 

    DATABASE_NAME is the name of the database being configured. Please note that this is the database name and not the SID normally entered in a single stand-alone configuration. ORACLE_HOME is the directory path of the Oracle home (and hence the Oracle version) under which the database was created. REBOOT_STATUS is a Y/N indicator that informs the operating system whether the database is to be started automatically when the node is started.

    For example:

    PRODDB:/app/home/oracle/product/9.2.0:N

    PRODDB is the name of the clustered database, /app/home/oracle/product/9.2.0 is the ORACLE_HOME directory, and N indicates that the database should not be started automatically when the node is bounced; instead, the database will be started either manually or through some other automated process using scripts.

    The oratab file is located in the /var/opt/oracle directory on Sun Solaris and HP Tru64 platforms.

  2. The second step is to ensure that all nodes participating in the clustered configuration are available. This is verified using the lsnodes command.

    oracle$ lsnodes
    ora-db2.summerskyus.com
    ora-db1.summerskyus.com

    The above command will list all the nodes defined to be part of the clustered configuration at the O/S level.

  3. The third step is to make the database and instances cluster aware. This is done by completing the following steps:

    1. Ensure that the global service daemon (GSD) has been started. GSD is started by using the GSD control (gsdctl) utility. For example:

      oracle$ gsdctl start
      Successfully started the daemon on the local node.

      Usage: gsdctl [options]
      Where options include:
        start    start the gsd
        stop     stop the gsd
        stat     query the status of the gsd
      Note 

      GSD must be started on all the nodes in a RAC environment so that the manageability features and tools operate properly.

      Oracle 9iR2 

      New Feature: gsdctl is a new utility introduced in Oracle 9i Release 2. Prior to this release, issuing the gsd command started the process.
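      Because GSD must be running on every node, its status can be checked on each node before continuing; for example (the status message shown here is indicative only):

      oracle$ gsdctl stat
      GSD is running on the local node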

    2. Check that the server configuration pointer file (srvConfig.loc) is present. On most platforms the file is located in the /var/opt/oracle directory. It identifies the location of the server configuration repository, a raw device or file that is visible from all instances in the RAC cluster and is used to hold the cluster-specific configuration shared across the instances.

      oracle$ ls -ltr
      total 8
      -rw-r--r--   1 root   other    55 Aug 12 15:45 oraInst.loc
      -rw-rw-r--   1 oracle other    60 Aug 23 10:48 srvConfig.loc
      -rw-rw-r--   1 oracle other   820 Sep 21 17:14 oratab
      oracle$ more srvConfig.loc
      srvconfig_loc=/dev/vx/rdsk/oraracdg/srvm_shared_config_100m
    3. If this is the first RAC database and set of instances being installed on this cluster, the server configuration repository needs to be initialized. The repository can be initialized in one of several ways; for example, by resetting its contents or by importing data from a previously created text file. The srvconfig utility supports the following options:

      oracle$ srvconfig
      Usage: srvconfig [options]

      where options include:

      -help          display command-line help message
      -?             same as -help option
      -init          initialize configuration repository
      -init -f       force initialization of configuration repository
      -exp <file>    export configuration repository contents to given text file
      -imp <file>    import given text file contents to the configuration repository
      -conv <file>   convert given config file contents into 9.0 style configuration
      -version       display repository version information

      To reset the server configuration file, the following command can be used:

      oracle$ srvconfig -init
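      The -exp and -imp options listed above can also be used to back up the repository contents to a text file and restore them later; for example (the file name /tmp/srvconfig_backup.txt is used here purely for illustration):

      oracle$ srvconfig -exp /tmp/srvconfig_backup.txt
      oracle$ srvconfig -imp /tmp/srvconfig_backup.txt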
    4. After the configuration repository has been reset, the next step is to update it with the database and instance information. This is done using the server control utility (srvctl). The GSD process receives requests from srvctl to execute administrative tasks, such as startup or shutdown of instances. OEM and the Oracle Intelligent Agent (discussed in step 6) use the configuration information maintained through srvctl to discover and monitor nodes in the cluster.

      The first step in this process is to add the database that supports this clustered environment that is being configured. This is done with the following command:

      oracle$ srvctl add database -d <database name> -o <oracle home directory>

      oracle$ srvctl add database -d PRODDB -o /app/home/oracle/product/9.2.0
      Successful addition of cluster database: PRODDB

      The above command adds the database PRODDB to the configuration file along with its ORACLE_HOME information.

      The second step is to add the instances that will share the database defined above. To add the instances to the server configuration file, the following command will be used:

      oracle$ srvctl add instance -d <database name> -i <instance name> -n <node name>

      oracle$ srvctl add instance -d PRODDB -i RAC1 -n ora-db1
      Instance successfully added to node: ora-db1

      The above command adds the instance named RAC1 that is configured to be used with the common shared database PRODDB and will run on node ora-db1.
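      For example, the second instance, RAC2, which runs on node ora-db2, would be added from the same node as follows (the confirmation message shown here is indicative):

      oracle$ srvctl add instance -d PRODDB -i RAC2 -n ora-db2
      Instance successfully added to node: ora-db2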

      Any remaining instances participating in the clustered configuration can be added to the configuration file from the same node in the same manner. Once all the databases and instances have been added, the srvctl utility can be used to check the configuration or the status of the clustered databases and their respective instances:

      srvctl config <database name>

      The above command will verify which databases have been configured in the configuration file. For example, to check if the configuration of the PRODDB made it into the configuration file, the following command can be issued:

      oracle$ srvctl config -d PRODDB

      Similarly, to check which instances and nodes have been configured for a specific database, the following command will help:

      oracle$ srvctl config -d PRODDB
      ora-db1 RAC1
      ora-db2 RAC2

      Checking the status through the server control utility displays the instance-level detail and the corresponding node information:

      srvctl status database -d <database name>

      The above command will check the status of a given database:

      oracle$ srvctl status database -d PRODDB
      instance RAC1 is running on node ora-db1
      instance RAC2 is running on node ora-db2
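      Once the database and instances are registered, srvctl can also be used to start and stop them, passing the requests to the GSD on each node; for example (output not shown):

      oracle$ srvctl start database -d PRODDB
      oracle$ srvctl stop instance -d PRODDB -i RAC2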
      Note 

      A list of parameters and definitions for using the srvctl utility can be found in Appendix 1.

  4. Configure Net8 support.

    1. With a shared disk configuration and one instance per node, two separate listeners are required, one for each instance. There are no hard and fast rules for naming the listeners: since they run on two different nodes they could share the same name, or, for easier administration, they could be named to match the instance they serve. Here they are LISTENER_RAC1 and LISTENER_RAC2 for ORA-DB1 and ORA-DB2, respectively.

    2. The following information is added to the listener.ora file located in the TNS_ADMIN directory:

      LISTENER=
        (DESCRIPTION_LIST=
          (DESCRIPTION=
            (ADDRESS_LIST=
              (ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC))
            )
            (ADDRESS_LIST=
              (ADDRESS=(PROTOCOL=TCP)(HOST=15.152.2.10)(PORT=1521))
            )
          )
        )

      SID_LIST_LISTENER=
        (SID_LIST=
          (SID_DESC=
            (SID_NAME=PLSExtProc)
            (ORACLE_HOME=/apps/oracle/product/9.2.0)
            (PROGRAM=extproc)
          )
          (SID_DESC=
            (ORACLE_HOME=/apps/oracle/product/9.2.0)
            (SID_NAME=RAC1)
          )
        )

    3. The /etc/services file needs to reflect the new ports used by the new listeners.
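      For example, entries of the following form could be added (the service names and port number shown are purely illustrative):

      listener_rac1     1526/tcp
      listener_rac2     1526/tcp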

    4. Similar to the listener, a tnsnames file should also be maintained on each node. If there is only one ORACLE_HOME shared by multiple instances (e.g., in an HP Tru64 environment), only one tnsnames file is needed. The following information is added to the $TNS_ADMIN/tnsnames.ora file on the database server and on the client machines:

      PRODDB.SUMMERSKYUS.COM=
        (DESCRIPTION=
          (ADDRESS_LIST=
            (LOAD_BALANCING=ON)
            (FAILOVER=ON)
            (ADDRESS=(PROTOCOL=TCP)(HOST=15.152.2.10)(PORT=1521))
            (ADDRESS=(PROTOCOL=TCP)(HOST=15.152.2.20)(PORT=1521))
          )
          (CONNECT_DATA=
            (SERVICE_NAME=PRODDB.SUMMERSKYUS.COM)
            (FAILOVER_MODE=(TYPE=SELECT)(METHOD=BASIC))
          )
        )

      RAC1.SUMMERSKYUS.COM=
        (DESCRIPTION=
          (SDU=8192)
          (TDU=8192)
          (ADDRESS_LIST=
            (ADDRESS=(PROTOCOL=TCP)(HOST=15.152.2.10)(PORT=1521))
          )
          (CONNECT_DATA=
            (SERVICE_NAME=PRODDB.SUMMERSKYUS.COM)
            (INSTANCE_NAME=RAC1)
          )
        )

      RAC2.SUMMERSKYUS.COM=
        (DESCRIPTION=
          (ADDRESS_LIST=
            (ADDRESS=(PROTOCOL=TCP)(HOST=15.152.2.20)(PORT=1521))
          )
          (CONNECT_DATA=
            (SERVICE_NAME=PRODDB.SUMMERSKYUS.COM)
            (INSTANCE_NAME=RAC2)
          )
        )

    5. Test the connection using SQL*Plus:

      sqlplus username/password@RAC1
      sqlplus username/password@RAC2
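      To confirm which instance a session actually connected to (useful when testing the load-balanced PRODDB alias as well), the standard V$INSTANCE view can be queried; for example:

      SQL> select instance_name, host_name from v$instance;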
  5. Configure STATSPACK, an Oracle-provided utility used to gather performance statistics for a specific period of time. The statistics gathered are stored in a specific tablespace and schema, and reports can be generated from the stored data for specific run periods. This utility replaces the traditional utlbstat/utlestat scripts; while both utilities are still available, STATSPACK provides more formatted and detailed information than utlbstat/utlestat.

    Note 

    A detailed explanation of the installation and configuration process, including sample output, is provided in Chapter 13.

  6. Configure the intelligent agent, a daemon process that runs on the database server to help Oracle Enterprise Manager (OEM) communicate with the database. This is an SNMP agent provided by Oracle, called dbsnmp. When started, this agent allows remote monitoring of the database via OEM.

    The intelligent agent process can be started and stopped using the agentctl utility:

    oracle$ agentctl

    Usage:

    agentctl start|stop|status|restart [agent]
    agentctl start|stop|status blackout [<target>] [-d/uration <timefmt>] [-s/ubsystem <subsystems>]

    The following are valid options for blackouts:

    <target>     name of the target. Defaults to node target.
    <timefmt>    is specified as [days] hh:mm
    <subsystem>  is specified as [jobs events collections]; defaults to all subsystems

    oracle$ agentctl status
    DBSNMP for Solaris: Version 9.2.0.1.0 - Production on 09-OCT-2002 08:56:41
    Copyright (c) 2002 Oracle Corporation. All rights reserved.
    Could not contact agent. It may not be running.

    oracle$ agentctl start
    DBSNMP for Solaris: Version 9.2.0.1.0 - Production on 09-OCT-2002 08:56:54
    Copyright (c) 2002 Oracle Corporation. All rights reserved.
    Starting Oracle Intelligent Agent......
    Agent started

    oracle$ agentctl status
    DBSNMP for Solaris: Version 9.2.0.1.0 - Production on 09-OCT-2002 08:57:34
    Copyright (c) 2002 Oracle Corporation. All rights reserved.
    Version           : DBSNMP for Solaris: Version 9.2.0.1.0 - Production
    Oracle Home       : /apps/oracle/product/9.2.0
    Started by user   : oracle
    Agent is running since 10/09/02 08:57:09
Oracle 9i 

New Feature: Prior to Oracle 9i, the intelligent agent process was controlled through the listener control utility (lsnrctl); all administrative tasks, such as starting and stopping the agent, were performed via lsnrctl using its dbsnmp commands. In Oracle 9i, a new utility, agent control (agentctl), replaces this mechanism and eliminates the agent's dependency on lsnrctl that existed in previous versions.

8.5.2 Parameter file configuration

If you have worked with an Oracle database, then the parameter file is nothing new to you. Oracle requires the parameter file to find certain initial definitions during startup of the database. One common parameter is the location of the control files, because without the control files it would be difficult to start the database.

In the case of a single instance configuration, all parameters are located in one file. Prior to Oracle 9i, these parameters were stored in an ASCII text file and were read once before instance startup. If a subsequent change was required later on, a restart of the instance was needed for the change to take effect.

Oracle 9i 

New Feature: With the introduction of Oracle 9i, the ASCII-based parameter file can be replaced by a binary server parameter file, and most parameters can be changed dynamically while the instance is active.

In a multi-instance Oracle configuration such as OPS, Oracle required that each instance maintain its own parameter file plus another parameter file containing the common parameters applicable to all instances participating in the clustered configuration. The common parameter file was included at startup by each instance's individual parameter file.

Now, with the introduction of the binary server parameter file in Oracle 9i, Oracle has changed how these parameters are defined. In a RAC implementation, instead of having instance-specific parameter files, one parameter file containing both the common parameters and the instance-specific parameters can be used. The syntax for parameter definition has changed accordingly: parameters that are common to all instances are prefixed with an asterisk (*.), and instance-specific parameters are prefixed with the instance name.

For example, when defining the location of the control file, the parameter CONTROL_FILES is defined as follows:

*.CONTROL_FILES=(/dev/vx/rdsk/oraracdg/partition300m_3,
                 /dev/vx/rdsk/oraracdg/partition300m_10)
*.OPEN_CURSORS=400
*.DB_BLOCK_SIZE=8192
*.OPTIMIZER_MODE=CHOOSE

Notice the "*." in front of the parameter; this indicates that the parameter is applicable to all instances.

However, when defining a value that is specific to an instance, such as the instance number, the parameter is prefixed with the instance name:

RAC1.INSTANCE_NUMBER=1
RAC1.INSTANCE_NAME=RAC1
RAC1.THREAD=1

Once these parameters have been added to the parameter file, there are two options: the file can be retained in the traditional ASCII format in the location that normally holds the init.ora file, or it can be converted into a server parameter file stored in binary format at a suitable shared location.

The disadvantage of the server parameter file is that changes to the parameters, including additions and deletions, can only be made online, through ALTER SYSTEM commands issued while the instance is up; the binary file cannot be edited directly.
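For example, a parameter change can be recorded in the server parameter file for one instance or for all instances by qualifying the ALTER SYSTEM command with the SID clause (the parameter and values below are illustrative only):

SQL> alter system set open_cursors=500 scope=spfile sid='RAC1';
SQL> alter system set open_cursors=400 scope=spfile sid='*';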

A file that is common to all instances must be available from a common shared location, and the server parameter file is no different. It is therefore important to identify a shared area where the parameter file can be stored, whether RAC is implemented on a clustered file system or on raw device partitions.

The next step is to convert the existing ASCII parameter file (init.ora) to the binary file and store the file in the shared location:

oracle$ sqlplus '/as sysdba'
SQL*Plus: Release 9.2.0.1.0 - Production on Tue Oct 1 23:17:53 2002
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> show user
USER is "SYS"
SQL> create spfile='/new location' from pfile;

File created.

Similar to a non-clustered database configuration, where a soft link is created in the $ORACLE_HOME/dbs directory, such a definition is also required under RAC. However, unlike a stand-alone configuration, in a RAC implementation each instance has an instance-specific parameter file soft-linked from the shared disk to the $ORACLE_HOME/dbs directory:

ln -s <path/parameter file> $ORACLE_HOME/dbs/<parameter file>

The above command, issued from the operating system prompt, creates a soft link in $ORACLE_HOME/dbs pointing to the file's current location on the shared disk.

For example, the symbolic link for the initPRODDB.ora could be created as illustrated below:

oracle$ ln -s $ORACLE_BASE/admin/PRODDB/pfile/initPRODDB.ora $ORACLE_HOME/dbs/initRAC1.ora

8.5.3 Password file configuration

Oracle requires a password file to create and maintain passwords for users with sysdba and sysoper privileges, for example the sys user. This file is important and must be current for the database to start. If the database structure changes, the control file is indirectly affected, causing it to refresh the password file information, and a new password file is required. This implies that every time the database is opened, the password file is verified against the control file.

A password file for each instance can be created using the following command:

orapwd file=<path/file_name> entries=<no of entries> password=<password>

For example, a password file for the RAC1 instance is created as below:

oracle$ orapwd file=$ORACLE_HOME/dbs/orapwRAC1 entries=32 password=oracle
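Similarly, a password file for the RAC2 instance would be created on node ora-db2, following the orapw<SID> naming convention used above:

oracle$ orapwd file=$ORACLE_HOME/dbs/orapwRAC2 entries=32 password=oracle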


