Aside from using the DBCA, Oracle provides the ability to manage services from the command line using the SRVCTL utility. DBCA makes calls to SRVCTL behind the scenes to execute the operations involved in creating the database, stopping and starting instances, creating services, and so forth. The SRVCTL utility can be used to take a database or an instance offline, to add or remove databases and instances, and to add, remove, modify, stop, and start services, including user-defined services. This is of particular use when maintenance operations need to be performed, where it is desirable that CRS (Cluster Ready Services) not monitor or attempt to bring online certain services while the maintenance work is under way. When SRVCTL is used to add or remove services, databases, instances, and so forth, this information is written to the Oracle Cluster Registry (OCR), which we will discuss later in this chapter.
Node applications on a cluster consist of components at the cluster layer, sitting between the operating system and your RAC instances. Node applications, also known as nodeapps, consist of components such as the virtual IP address (VIP) used by the node, the listener running on the node (which should be listening on the VIP), the GSD (Global Services Daemon), and the ONS (Oracle Notification Service) daemon on the node. You might use srvctl in conjunction with the nodeapps option if you want to add any or all of these components to a node, for example, when adding a virtual IP to a new node. When using srvctl to add a VIP to a node, you log on as root and then execute a command such as
srvctl add nodeapps -n rmsclnxclu3 -o /u01/app/oracle/10g -A '220.127.116.11/255.255.255.0'
After the nodeapps have been added, you can start them via a START command such as
srvctl start nodeapps -n rmsclnxclu3
Another area where you may need to use the nodeapps configuration is if you need to change the virtual IP for an existing node. In order to do this, you must first stop the nodeapps, and then remove them, using the following syntax:
srvctl stop nodeapps -n rmsclnxclu3
srvctl remove nodeapps -n rmsclnxclu3
After removing the nodeapps, run the ADD NODEAPPS command as before, specifying the new or corrected IP address. Again, these commands must be run as root; all other srvctl operations should be done as the oracle user. Of course, prior to attempting to add a virtual IP, it should be defined in the hosts file. A listener entry for the new VIP would also need to be created, either via the NETCA or by manually modifying the listener.ora.
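The stop/remove/add/start sequence just described can be sketched as a small script. This is a hedged sketch, not a definitive procedure: the `run` dry-run helper is our own illustrative wrapper (it only prints each command so the sequence can be reviewed before executing it for real, as root), and the node name, home, and address are the example values used in this section.

```shell
# Illustrative sketch of the VIP-change sequence from the text.
# NODE, ORACLE_HOME, and NEW_VIP are the example values from this section.
NODE=rmsclnxclu3
ORACLE_HOME=/u01/app/oracle/10g
NEW_VIP='220.127.116.11/255.255.255.0'

# Dry-run helper (our own convention): prints each command instead of
# executing it. Remove the echo to run the commands for real, as root.
run() {
    echo "+ $*"
}

run srvctl stop nodeapps -n "$NODE"
run srvctl remove nodeapps -n "$NODE"
run srvctl add nodeapps -n "$NODE" -o "$ORACLE_HOME" -A "$NEW_VIP"
run srvctl start nodeapps -n "$NODE"
```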
Note also that the nodeapps are generally added when the addnode operation is run to add a new node, as we discussed in Chapter 5. As a general rule, you should not need to add nodeapps to a node that has been configured properly via the OUI and DBCA; the VIPCA should add the nodeapps at the time the first instance is added to a new node. We illustrate the command here simply to give you an understanding of what nodeapps are and how they are created, and for the rare case where you may need to change the VIP.
The srvctl utility can also be used to get the status of the nodeapps running on a particular node:
oracle@/home/oracle>: srvctl status nodeapps -n rmsclnxclu1
VIP is running on node: rmsclnxclu1
GSD is running on node: rmsclnxclu1
Listener is running on node: rmsclnxclu1
ONS daemon is running on node: rmsclnxclu1
This information is handy for checking the status of these services on a node, and also for confirming the configuration after an install or after adding a node.
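Such a check can be scripted. The sketch below, under the assumption that the output format matches the sample shown above, counts the "is running on node" lines and reports whether all four nodeapps components are up; the `check_nodeapps` function name is our own invention, and the here-document stands in for piping the real `srvctl status nodeapps` output.

```shell
# Illustrative helper: confirm all four nodeapps components (VIP, GSD,
# listener, ONS) report "is running" in srvctl status output.
check_nodeapps() {
    # Expects `srvctl status nodeapps -n <node>` output on stdin.
    count=$(grep -c 'is running on node' -)
    if [ "$count" -eq 4 ]; then
        echo OK
    else
        echo "MISSING ($count of 4)"
    fi
}

# Sample output captured from the text; in practice, pipe the command:
#   srvctl status nodeapps -n rmsclnxclu1 | check_nodeapps
check_nodeapps <<'EOF'
VIP is running on node: rmsclnxclu1
GSD is running on node: rmsclnxclu1
Listener is running on node: rmsclnxclu1
ONS daemon is running on node: rmsclnxclu1
EOF
```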
The information stored in the OCR can be retrieved using SRVCTL commands. In addition, SRVCTL commands can be used to write information out to the OCR. An example of this is getting information about cluster databases that are part of the OCR. For example, the command SRVCTL CONFIG will show the names of any databases that exist in the Oracle Cluster Registry. Knowing the database name, you can use that information to retrieve additional information, such as the instances and nodes associated with the database. For example, we have a database with a db_name of grid. The following command will give us information about the grid database, as stored in the OCR:
oracle@/home/oracle>: srvctl config database -d grid
rmsclnxclu1 grid1 /u01/app/oracle/10g
rmsclnxclu2 grid2 /u01/app/oracle/10g
This tells us that node rmsclnxclu1 has an instance called grid1, and node rmsclnxclu2 has an instance called grid2. We can get the status of these instances using the STATUS command:
oracle@/home/oracle>: srvctl status database -d grid
Instance grid1 is running on node rmsclnxclu1
Instance grid2 is running on node rmsclnxclu2
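The three-column config listing (node, instance, home) can be turned into the node/instance summary just described with a short read loop. A sketch, assuming the output format shown above; `summarize_config` is our own illustrative name, and the here-document stands in for piping the real command output.

```shell
# Illustrative helper: restate `srvctl config database -d <db>` output
# (node, instance, ORACLE_HOME per line) in plain English.
summarize_config() {
    while read -r node inst home; do
        [ -n "$node" ] && echo "node $node runs instance $inst from $home"
    done
}

# Sample output from the text; in practice:
#   srvctl config database -d grid | summarize_config
summarize_config <<'EOF'
rmsclnxclu1 grid1 /u01/app/oracle/10g
rmsclnxclu2 grid2 /u01/app/oracle/10g
EOF
```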
A database can be added or removed from the configuration (OCR) using the SRVCTL ADD DATABASE or SRVCTL REMOVE DATABASE options, as well. The remove operation may be necessary if a database has been deleted, but the deletion was not done through the DBCA. This would leave information about the database in the OCR, meaning that you would not be able to re-create the database using the DBCA until that information was removed. A command such as the following can be used to remove the database from the OCR, thus allowing a new database with the same name as the original to be re-created:
srvctl remove database -d grid
A database can be manually added using the ADD DATABASE command. In our example with the grid database, and grid1 and grid2 instances, we could re-create the basic configuration with a couple of simple commands such as
srvctl add database -d grid -o /u01/app/oracle/10g
srvctl add instance -d grid -i grid1 -n rmsclnxclu1
srvctl add instance -d grid -i grid2 -n rmsclnxclu2
Here, -d signifies the database name (DB_NAME), -o the ORACLE_HOME, -i the instance name, and -n the node name. These basic commands will define a database in the Oracle Cluster Registry, as well as the instances and nodes associated with that database.
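For a database with more than a couple of instances, the same registration can be driven from a list of node:instance pairs. A hedged sketch using the example names from this section; the `run` dry-run wrapper is our own convention (it prints each command rather than executing it), and the parameter-expansion splitting is just one way to hold the pairs.

```shell
# Illustrative sketch: re-register the grid database and its instances
# in the OCR by looping over node:instance pairs.
DB=grid
ORACLE_HOME=/u01/app/oracle/10g

# Dry-run helper: remove the echo to execute the commands for real.
run() { echo "+ $*"; }

run srvctl add database -d "$DB" -o "$ORACLE_HOME"
for pair in rmsclnxclu1:grid1 rmsclnxclu2:grid2; do
    node=${pair%%:*}   # text before the colon: the node name
    inst=${pair##*:}   # text after the colon: the instance name
    run srvctl add instance -d "$DB" -i "$inst" -n "$node"
done
```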
Since an ASM instance is a special type of instance, with no real database associated with it, there is a separate command switch for ASM instances: the asm switch. The SRVCTL CONFIG command mentioned previously will not show the ASM instances, only a regular cluster database. You can use the asm switch to get the status and names of the ASM instances on each node by using the following syntax:
oracle@/home/oracle>: srvctl status asm -n rmsclnxclu1
ASM instance +ASM1 is running on node rmsclnxclu1.
oracle@/home/oracle>: srvctl status asm -n rmsclnxclu2
ASM instance +ASM2 is running on node rmsclnxclu2.
To add an ASM instance into the OCR, the ADD ASM option would be used, as shown in the first set of commands below. To create a dependency between your database and the ASM instances (one that the OCR is aware of), use the MODIFY INSTANCE syntax shown in the second set:
srvctl add asm -n rmsclnxclu1 -i +ASM1 -o /u01/app/oracle/10g
srvctl add asm -n rmsclnxclu2 -i +ASM2 -o /u01/app/oracle/10g
srvctl modify instance -d grid -i grid1 -s +ASM1
srvctl modify instance -d grid -i grid2 -s +ASM2
In the preceding, -s signifies the ASM instance dependency between the database instances grid1 and grid2 and the ASM instances +ASM1 and +ASM2, respectively.
Again, note that these commands that we have walked through are done for you automatically when creating the database using the DBCA. The information provided here is meant to simply assist in understanding the configuration operations done by the DBCA, and how they relate to CRS. Should you decide to create a database manually, you would need to run these commands in order to ensure that the database and ASM instances are properly registered with CRS via the Oracle Cluster Registry. However, it is strongly recommended that the DBCA be used to create the databases, to avoid any issues with possible misconfiguration of the database.
As we mentioned at the beginning of this section, SRVCTL comes in handy for disabling the monitoring of these resources by CRS so that maintenance operations can be done. Disabling an object with SRVCTL will prevent the cluster from attempting to restart the object (a service or an instance, for example) when it is brought down, thus allowing for repair or other maintenance operations. The disabled status persists through reboots, avoiding automatic restarts of an instance, the database, or a service; these targets will remain disabled until a corresponding ENABLE command is run.
For example, suppose that you need to do some maintenance such as adding or replacing memory on a node. This maintenance will require that you reboot the machine a couple of times, to confirm that the machine comes back up and to allow configuration changes such as kernel parameter modifications. If you shut down the instance through SQL*Plus, CRS will leave it alone, but on a reboot, CRS will try to start the instance back up, even though you do not want this until you are completely finished with your maintenance. To avoid this, first shut the instance down, either via SQL*Plus or by using SRVCTL to stop it. After stopping the instance, simply disable it with SRVCTL, as in the following example:
srvctl stop instance -d grid -i grid1 -o immediate
srvctl disable instance -d grid -i grid1
srvctl stop asm -n rmsclnxclu1 -i +ASM1 -o immediate
srvctl disable asm -n rmsclnxclu1 -i +ASM1
Note that in this case, we have also stopped and disabled the ASM instance, as we do not want that instance to start during our subsequent reboots either. You can view the status of these targets by choosing the Targets tab | All Targets from within the EM Grid Control screen. As you can see in Figure 6-6, there is no special status to indicate that these instances (grid1 and +ASM1) are disabled; the targets are simply noted as being down or unavailable. Attempting to start the instances manually would still be allowed; the disabled status simply indicates that CRS is not currently monitoring them.
Figure 6-6: Target availability in EM Grid Control
After the subsequent reboots, and confirmations that the maintenance or repair has been completed to your satisfaction on this node, any instances can be reenabled using SRVCTL in the reverse order:
srvctl enable asm -n rmsclnxclu1 -i +ASM1
srvctl start asm -n rmsclnxclu1 -i +ASM1
srvctl enable instance -d grid -i grid1
srvctl start instance -d grid -i grid1 -o open
After a few moments, you should be able to refresh the screen in the Targets option of EM Grid Control and see that the database is back online.
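The stop/disable and enable/start sequences above can be kept together in one helper, so the ordering (instance before ASM on the way down, ASM before instance on the way up) stays consistent. A sketch only: the `maintenance` function name and the `run` dry-run wrapper are our own conventions, with the example names from this section.

```shell
# Illustrative sketch of the node-maintenance sequences from the text.
DB=grid; INST=grid1; NODE=rmsclnxclu1; ASM=+ASM1

# Dry-run helper: remove the echo to execute the commands for real.
run() { echo "+ $*"; }

maintenance() {
    case "$1" in
    begin)  # stop and disable: instance first, then ASM
        run srvctl stop instance -d "$DB" -i "$INST" -o immediate
        run srvctl disable instance -d "$DB" -i "$INST"
        run srvctl stop asm -n "$NODE" -i "$ASM" -o immediate
        run srvctl disable asm -n "$NODE" -i "$ASM"
        ;;
    end)    # enable and start, in the reverse order: ASM first
        run srvctl enable asm -n "$NODE" -i "$ASM"
        run srvctl start asm -n "$NODE" -i "$ASM"
        run srvctl enable instance -d "$DB" -i "$INST"
        run srvctl start instance -d "$DB" -i "$INST" -o open
        ;;
    esac
}

maintenance begin
maintenance end
```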
In addition to taking a single instance offline and disabling it, it is also possible to stop and disable the entire database using the SRVCTL STOP DATABASE and SRVCTL DISABLE DATABASE commands:
srvctl stop database -d grid
srvctl disable database -d grid
This is particularly useful when you need to do maintenance or repairs that involve the entire cluster, especially if there are many instances/nodes involved, as the preceding commands will stop or disable all instances in the cluster. Enterprise Manager Grid Control uses the STOP command if you choose the Shutdown All option when shutting down the database from the Cluster Database screen.
Service creation, deletion, and management are also possible via the command line using the SRVCTL utility. The syntax is similar to what we have seen so far, with some additional switches. As we have seen through the DBCA, we can define services to run on a subset of nodes, or on every node in the cluster, and we can also define certain nodes as being preferred nodes or simply available. In addition, we have seen through EM that we can relocate services between nodes; the same is also possible using SRVCTL.
Services are created from the command line using the ADD SERVICE option. The switches for defining the nodes and the failover policy are -r for the list of preferred instances, -a for the list of available instances, and -P for the failover policy, which can be NONE, BASIC, or PRECONNECT (the same options as seen in the DBCA). The following example will create a service within the grid database called grid_ap, with a failover policy of BASIC and node rmsclnxclu2 as the preferred node, with rmsclnxclu1 being available:
srvctl add service -d grid -s grid_ap -r grid2 -a grid1 -P BASIC
Note that this command alone does not start the service. After creating the service, you must follow it up with a START SERVICE command:
srvctl start service -d grid -s grid_ap
At that point, you should see the ALTER SYSTEM command in the alert log from grid2, and the service will be available for connections. A key piece to note here, however, is that the entries do not get added to the tnsnames.ora, as they do when a service is created using the DBCA. However, you can still connect using the Easy Connect syntax (which we will discuss more in Chapter 11). The Easy Connect syntax simply uses //VIPname/servicename as the connect string. So, the following connections will now work, even though there is no entry in the tnsnames.ora:
SQL> connect scott/tiger@//rmscvip2/grid_ap
Connected.
SQL> connect scott/tiger@//rmscvip1/grid_ap
Connected.
Additional operations that can be done on services via SRVCTL include the DISABLE and ENABLE options, START and STOP, MODIFY, and RELOCATE. As with EM, the relocate operation will only relocate the service to an instance that has been defined initially as being available for that service. However, the command-line operations provide a bit more flexibility with the ADD and MODIFY options, which will allow you to change the list of preferred or available instances for a service. For example, let's say that a third node is added to the cluster, with an instance of grid3. In this case, the following command could be used to add grid3 to our existing grid_ap service as an available instance:
srvctl add service -d grid -s grid_ap -u -a grid3
The following command should then show us the configuration of that service:
srvctl config service -d grid -s grid_ap
grid_ap PREF: grid2 AVAIL: grid3
A node can be upgraded from available to preferred using the MODIFY syntax as follows:
srvctl modify service -d grid -s grid_ap -i grid3 -r
This command will upgrade instance grid3 from an available instance to a preferred instance, so the SRVCTL CONFIG command should now show as follows:
srvctl config service -d grid -s grid_ap
grid_ap PREF: grid2 grid3 AVAIL:
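The preferred and available lists can be pulled out of a config line like the ones above with a one-line sed filter. A sketch, assuming the single-line "name PREF: ... AVAIL: ..." output format shown in this section; `parse_service` is our own illustrative name.

```shell
# Illustrative helper: extract the preferred and available instance
# lists from a `srvctl config service` line such as
#   grid_ap PREF: grid2 AVAIL: grid3
parse_service() {
    sed -n 's/^[^ ]* PREF: *\(.*\) AVAIL: *\(.*\)$/preferred=[\1] available=[\2]/p'
}

# In practice: srvctl config service -d grid -s grid_ap | parse_service
echo 'grid_ap PREF: grid2 AVAIL: grid3' | parse_service
echo 'grid_ap PREF: grid2 grid3 AVAIL:' | parse_service
```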
So now, both grid2 and grid3 show as being preferred instances for this service. This change will not take full effect until the service is stopped and restarted.
To wrap up our discussion on services, we will discuss a couple of outstanding items that the HA DBA should be aware of. To begin with, in past releases, it was often necessary to configure and manually stop and restart the gsd service or daemon using commands such as GSDCTL STOP and START. While gsd is still required for the service operations that we have discussed, gsd is now part of the nodeapps, and as such is automatically stopped and started by CRS. Also, as we mentioned previously, running the VIPCA for the first time on a node will create and start the nodeapps, which in turn will start the gsd for you.
In addition, services should be configured to use the virtual IP address, and the listeners should be configured to be listening on the virtual IP address in order for the services and service recovery to work properly. Along those lines, the REMOTE_LISTENER parameter should be set in all instances, so that the listeners are cross-registered and are aware of all services across the database. We will discuss this configuration and client-side connection configurations in more depth in Chapter 11.
In Oracle Database 10g Release 1, you are limited to a total of 62 services defined for applications. This is on a per-database basis. In addition to the application services defined, there are two predefined internal services, which you may have noticed if you looked at the Resource Consumer Group Mappings in Enterprise Manager. The first is SYS$BACKGROUND, which is used by the background processes, and the second is SYS$USERS, which is used as the default service for any users connected outside of a regular service defined in the database. These internal services cannot be stopped or disabled. This brings the total maximum number of services for a database to 64.