Additional RAC Considerations

In this section, we will discuss some other key differences between a RAC instance and a single Oracle instance, including how ASM works in a RAC environment and how you can apply patches to the RDBMS in a RAC environment. We will finish with a discussion of how Enterprise Manager Grid Control fits into a RAC environment.

Managing ASM Environments

ASM instances in a RAC environment behave similarly to regular instances in a RAC environment. When using ASM, each node must have an ASM instance, and those instances effectively form their own database, so to speak, with the CLUSTER_DATABASE parameter set to TRUE and INSTANCE_NUMBER set to a unique value for each instance. There is no DB_NAME parameter in an ASM environment, because there is not really a database. By the same token, the THREAD parameter has no meaning, as an ASM instance generates no redo. There are other similarities, however: each instance must have a unique instance name, generally of the form +ASM1, +ASM2, +ASM3, and so on. When you create a RAC database using the DBCA, the DBCA will automatically create the RAC-enabled ASM instances on each node.

The format of the parameter file is the same as what we have discussed previously for a RAC instance: parameters specific to an instance are prefaced with the SID name (for example, +ASM1.instance_number=1). Note that the parameter file will be a regular pfile by default, and will not be on the shared disk. This is because the parameter file must be read by the ASM instance when it starts up, but the disk groups managed by ASM are not mounted and accessible until after the ASM instance starts.
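As a simple illustration, a pfile shared by two clustered ASM instances might contain entries along the following lines. The disk group names here are placeholders, and an actual file will contain additional parameters:

*.instance_type=ASM
*.cluster_database=TRUE
*.asm_diskgroups='DATA_DG','RECOVERY_DG'
+ASM1.instance_number=1
+ASM2.instance_number=2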

ASM Disk Groups in a RAC Environment

If the database files for any RAC instance are on ASM disk groups, then each ASM instance must define and mount those disk groups. As discussed in Chapter 3, this is determined by the ASM_DISKGROUPS parameter. The disk group(s) must be mounted by all ASM instances in order for the RAC instances of your database to be able to see the disks. A given ASM instance can also mount other disk groups that are not used by cluster databases. This may be desirable if there are stand-alone databases on one or more of the nodes, and/or if there is additional storage that is attached to one of the nodes but not accessed by all nodes in the cluster. This storage would not be available for use by the cluster database, but could be used by individual stand-alone instances.
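To confirm which disk groups a particular ASM instance has mounted, you can query V$ASM_DISKGROUP while connected to that instance. This is a quick sanity check rather than a complete procedure:

select name, state from v$asm_diskgroup;

A disk group that is defined but not yet mounted can be mounted manually with a statement such as ALTER DISKGROUP data_dg MOUNT, substituting your own disk group name.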

Patching in a RAC Environment

In releases prior to Oracle Database 10g, patching in a RAC environment was done to each node and instance at the same time, requiring that all instances be shut down during the maintenance window when patches were being applied. For certain patches, Oracle Database 10g allows rolling patch upgrades, meaning that you may apply a patch to a given node and instance while that instance is down and the other instances in the cluster remain up and running. Once the patch has been applied to the first node, the first instance can be brought back online, and subsequent instances can then be brought down and patched in turn. The ability to do this depends on the patch itself, and will likely apply to interim patches at first, rather than full patchsets.
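Conceptually, a rolling application on a two-node cluster looks something like the following, using the grid database and the grid1/grid2 instances from the examples later in this chapter. This is only a sketch; always follow the patch README, and note that the opatch invocation is simplified here:

srvctl stop instance -d grid -i grid1
opatch apply        (run from the unzipped patch directory on node 1)
srvctl start instance -d grid -i grid1

srvctl stop instance -d grid -i grid2
opatch apply        (run from the unzipped patch directory on node 2)
srvctl start instance -d grid -i grid2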

For patches that do not qualify for the rolling upgrade option, it is still possible to maintain uptime during the patch application by combining your Real Application Clusters environment with a logical standby database. By switching over to the logical standby database, you can keep the user community up and running as usual while patches are being applied to the nodes and instances in the RAC environment. At the end of the patch installation, the users can then be switched back from the logical standby environment to the primary cluster environment with relatively little downtime. While this setup is beyond the scope of this chapter, we discuss the setup and configuration of logical standby and RAC environments in Chapter 7.

Enterprise Manager Grid Control and RAC

We discuss Grid Control in this chapter and this section because we feel that Grid Control is a necessary ingredient in managing your RAC environment. We have discussed the grid concept on a couple of different levels: namely, the storage grid, with ASM and other storage components providing redundancy and the ability to add, remove, and relocate components. We have discussed (or will discuss) the database grid, with RAC and other components such as logical and physical standby, again providing redundancy and the ability to add and remove capacity as needed. Ease of use and manageability is also a core component of grid computing. Enterprise Manager Grid Control is the tool that sits atop these various grids, making the integration and management of these components simpler and more flexible. (What we do not discuss in this book is the application grid, where mid-tier machines are clustered together to provide redundancy at the application server level, though that, too, can be managed using Grid Control.)

Grid Control vs. Database Control

Database Control is the simplest form of Enterprise Manager, and it comes installed by default with every Oracle Database 10g install. Nothing special is required to configure the DB Control piece: you simply start it from the operating system via the command

emctl start dbconsole

after the database has been installed. At that point, you will be able to connect to that instance from any standard browser at http://<hostname>:5500/em. This allows you to manage that database on that machine, and not much more.
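If the console does not respond at that address, a quick check is to confirm that the console process is actually running; the same utility reports its status:

emctl status dbconsole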

This is all well and good if that is all that is needed. However, in a Real Application Clusters environment, you will have many machines, many instances, and many nodes to manage, and having access to only a single database at a time via the basic DB Control is just not adequate. This is where Grid Control comes in. Grid Control allows you to manage multiple targets from one central console, including multiple instances on a single machine and multiple machines. Different types of targets are available for management as well, including a cluster database, the individual instances in a cluster database, and the cluster itself (not to mention application servers and other targets such as the hosts themselves).

Grid Control Installation Overview

Enterprise Manager Grid Control comes on a separate CD set from the Oracle Database 10g set. The initial Grid Control install creates a Management Server repository in its own Oracle database. The most straightforward option during the install is to do a complete install, simply allowing it to create its own repository database as part of the installation. The option to create a repository database will actually create a 9.0.1.5 database, using the SID emrep by default. The Grid Control install will also install the Oracle Application Server. As such, this option requires that you have a minimum of 1GB of RAM on the machine where Grid Control is being installed, and the Grid Control install should be done to its own separate home. The complete install will also install the Oracle Database 10g Management Agent into its own separate home on that machine.

At the end of the Grid Control install, several configuration assistants will be kicked off. Most of these assistants carry a status of Recommended, meaning they should be allowed to complete. Should any of these assistants fail, we recommend that you choose the Retry option, as there is often a time lag in completing one or two of them. Our experience has been that the retry will generally allow the failed assistant to complete successfully the second time around.

Management Agent

Once the Grid Control install has completed on the machine you have designated as the management server machine, you must then determine which target machines you want to have managed via your Grid Control setup. On each of these target machines, you must install the Additional Management Agent option, again using the Grid Control installation media, as shown in Figure 5-6. During the install of the agent on each target, you will be prompted to register back to the management server. When prompted for the management service location, give the host name of the management server machine and port 4889. Provide the same information on each target machine.
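Once an agent install finishes, it is worth confirming from the target machine that the agent is running and able to upload to the management service. As a simple sanity check (the agent home path shown is an example and will differ on your system), run the emctl utility from the agent home:

<agent_home>/bin/emctl status agent
<agent_home>/bin/emctl upload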

Figure 5-6: Management Agent install

Navigating Enterprise Manager Grid Control

Once the Grid Control Management Server install has completed and targets have been registered, you can begin to manage them using a browser from any machine, just as with the Database Control. However, you will be connecting using the host name of the management server machine, rather than any of the targets, and using port 7777. When prompted for login, you will actually be logging in to the OMS using the SYSMAN account, whose password was specified during the Grid Control installation. You will be placed into the home screen for Oracle Enterprise Manager 10g Grid Control, as shown in Figure 5-7, showing you the number of targets monitored and their availability. The Critical Patch Advisories section will be populated if you configured MetaLink login information during the installation. If not, you can configure it at any time by clicking on the link.
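Putting the pieces together, the address typically takes the following form, substituting the host name of your own management server machine:

http://<management_server_host>:7777/em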

Figure 5-7: Enterprise Manager Grid Control Home screen

As you can see in Figure 5-7, in this case a total of 21 targets are being monitored, and currently all of them are available. To see the full list of targets, click on the Targets tab. This will initially show you each node/machine with a management agent, but clicking on the All Targets link will allow you to see all 21 targets being monitored. Note in Figure 5-8 that our cluster named crs_lnx is listed as a separate target, as are the cluster database itself (grid) and each instance in the cluster (grid1 and grid2). In addition, we can manage the ASM instances on each node (as mentioned in Chapter 3) and the listeners, and we can view and monitor operating system configuration information for each individual node. To view an individual target, simply click on the target itself and supply the required login information, if prompted.

Figure 5-8: Grid Control targets

Unfortunately, the constraints of time and space prevent us from going in depth on all of the various aspects of Enterprise Manager Grid Control. However, we encourage you to familiarize yourself with the resources and monitoring functionality available, as this tool will become more and more valuable in simplifying the management of complex environments. For additional information on Grid Control usage and configuration, please refer to the Oracle Enterprise Manager Advanced Configuration guide.


