Oracle Enterprise Manager can be used to manage any database, located anywhere on the wide area network. Therefore, from this one tool, you can manage your entire system, no matter where the components might be. This is an extremely powerful capability, and you should remember that what we are describing here could be used not only for the data warehouse but also for other databases on your network. In this section, we'll take a look at some of the concepts and steps required to start using OEM.
Enterprise Manager is designed to run in a three-tier architecture, as follows:
The console provides the graphical user interface on the client system.
The Oracle Management Server (OMS) provides administrative functions, such as executing jobs and events, and runs in the middle tier.
The intelligent agents monitor the database, start and stop it, and gather performance data. The intelligent agents run in the third tier and are located on the same nodes as the Oracle databases.
All information required to manage and administer the environment is stored in the OEM repository, which can reside in any Oracle database or in a database used exclusively for Enterprise Manager. The repository is where the management server stores the network configuration, the events to monitor, the jobs to run, and the administrator accounts.
All work is performed from the Oracle Enterprise Manager Console. The console can be launched either connected to the Oracle Management Server or standalone, as shown in Figure 7.1. When you want to use jobs, events, and the data management and backup wizards, you must connect to the Oracle Management Server. If you only need to perform basic database functions to manage the schema, instance, storage, and security, the console can be launched standalone.
Figure 7.1: Launching the Oracle Enterprise Manager Console.
A typical console is shown in Figure 7.2. The console contains menus, toolbars, and drawers. The toolbar is located on the upper left-hand side of the screen, and the tool drawers are located on the left side of the screen. When an object in the navigator tree is selected, the interface is displayed on the right-hand side in the detail pane.
Figure 7.2: Oracle Enterprise Manager Console.
When you start the console for the first time, or whenever anything changes in your environment, the information shown in Figure 7.2 should be refreshed. This is accomplished by selecting Discover Nodes from the Navigator option on the top toolbar. You will be prompted for the nodes to be searched and, provided the Oracle Intelligent Agent is running on those nodes, the agent will interrogate each system and advise the console of the databases and listeners that are running; you don't have to specify anything else. In our example, we can see that it has discovered the default ORCL instance, which contains our data warehouse EASYDW, on the node shillson-lap.us.oracle.com.
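Node discovery depends on the Intelligent Agent, but outside of OEM you can make a crude check that a listener is reachable on a node before troubleshooting discovery. The following sketch is our own illustration, not part of OEM; the default listener port 1521 and the function name are assumptions for the example.

```python
import socket

def listener_reachable(host: str, port: int = 1521, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    This only shows that *something* is accepting connections on the
    port; it is a rough stand-in for the richer checks the Intelligent
    Agent performs during node discovery.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage with the node from the example:
# listener_reachable("shillson-lap.us.oracle.com")
```

If this returns False for a node that Discover Nodes cannot find, the listener (or the network path to it) is the first thing to check.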
The Console Navigator tree, shown in Figure 7.3, is probably the most important area on the screen, because it is from here that numerous operations are started. By default, there are five subject areas: databases, groups, HTTP servers, listeners, and nodes.
Figure 7.3: The Enterprise Manager Navigator window.
In addition, in Oracle 9i Release 2, the Navigator tree also provides access to database administration, events, report definitions, jobs, and groups. The database folder in the Navigator tree allows you to administer the database instances, schemas, security, and storage.
In Figure 7.3, the Navigator window has been expanded. In the databases section, it lists every database, irrespective of the node on which it resides. To see on which system a database resides, you have to expand the nodes area and then see which databases are on that node. In our example, it is easy, because we only have one node. If we had many nodes running many databases, however, the view would be more complex. This is when the group feature, which is described shortly, becomes very useful.
From the Navigator window, you can perform a number of tasks with respect to databases. Clicking on the database name expands it to display a list of features, and when you select one of the features, a description appears in the detail pane on the right side of the console. For example, you can start and stop an instance by clicking on Instance, create and drop tables, indexes, and views by clicking on Schema, and manage security, storage, and other objects in a similar manner.
An Enterprise Manager administrator account is required to log in to the console with a Management Server connection but not to launch the console standalone. Enterprise Manager has two types of administrator accounts: regular administrators and super administrators, who have additional privileges. An Enterprise Manager administrator account is different from the user accounts on the target databases you want to manage.
A Super Administrator account is created when Enterprise Manager is installed and configured. Additional administrators can be created by selecting Manage Administrators from the Configuration option on the toolbar. In Figure 7.4, the Super Administrator is creating an account with the user name EASYDW for the DBA of the EASYDW data warehouse. Access is granted to the job and event systems and, finally, to the target databases the new administrator will manage. Note that this is not a Super Administrator account, because we have not selected that option. We can specify the user name and password for every database and node the DBAs require access to.
Figure 7.4: Creating an Enterprise Manager administrator account.
Earlier, we spoke of the issues surrounding managing multiple databases, and one solution to this problem is to be able to see a graphical representation of your environment. The purpose of a group is to allow you to visually represent an environment. It doesn't have to be the entire environment; it could be just a part of it.
A group is created from the Object menu. You enter a name and select which objects from the available targets make up the group. You can also select a background image: the group name describes the environment, and the image provides a picture onto which the objects can be overlaid.
In Figure 7.5, we can see our group called Easy Shopping Inc., which represents our warehouse. The underlying graphic is a map of the area in the United States where the warehouse is located. Referring to Figure 7.6, we have added the ORCL database and the machine shillson-lap.us.oracle.com. They have been placed on the map to represent where they physically reside; of course, the choice is yours regarding where to place them.
Figure 7.5: Creating a group.
Figure 7.6: Group—Easy Shopping Inc.
You may be saying to yourself: This all looks very pretty, but what is the point apart from graphically showing the system? In the next section, we will describe events, and when events are combined with the group picture, you have a very powerful management environment.
You can create as many groups as you like, and they are displayed in the navigator tree.
One of the problems for anyone managing a database is knowing when certain events occur. For example, suppose the database instance suddenly shuts down. Wouldn't you like to know immediately that it has happened rather than wait for the calls from your users, who are complaining that the system is no longer available?
Within the console, you can define events that will allow you to monitor your data warehouse and be notified when certain events occur. An event is created by clicking on Event in the toolbar and then clicking on Create Event. At this point, the screen shown in Figure 7.7 will appear.
Figure 7.7: Creating an event.
Referring to Figure 7.7, we have given our event a name, Warehouse Status, and added a further comment. This particular event is a database event, but we could select other types, such as node, from the drop-down list in the section marked Target Type. Next, we select the ORCL database from the list on the right side and, using the arrow buttons, move it over to the left side. The frequency of monitoring can also be changed, if required. Then we must click on the other tabs at the top of the window to complete the event definition.
Clicking on the Tests tab displays the screen in Figure 7.8, where we select the events to monitor. In this example, we have selected DatabaseUp-Down, so that we are notified whenever the database goes up or down.
Figure 7.8: Selecting events to monitor.
This event has no parameters for us to specify, so the next step is to state who should be notified when this event occurs. It is not mandatory to notify anyone; you could define the event and then, provided the database has been dragged over to the group, the status of the event will be registered there.
For example, in Figure 7.6, the database ORCL has a colored flag against it (which, unfortunately, is very difficult to show in a black-and-white book!). The ORCL database is showing a green flag to denote that it is open. If it were closed, it would have a red flag against it.
This is one of the powerful visual features of creating a group, because you can instantly look at the picture and see the state of your system. In Figure 7.9, we have stated that users SYSMAN and EASYDW should be notified whenever this event occurs. Administration staff can be notified of events via paging and/or e-mail. Both of these options can be configured within the console.
Figure 7.9: Advising users of an event.
Selecting Configuration from the toolbar and clicking on the Schedule tab displays the screen in Figure 7.10.
Figure 7.10: Notification schedule for events.
Once again, this is difficult to see in a black-and-white book, but here you specify how you are to be notified for each hour of the day. Since lunch is always taken at 1:00 p.m., we have stated that we should be paged whenever this event occurs during that period and on Saturdays; during the rest of the working day, the preferred contact method is e-mail.
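A notification schedule like this amounts to a lookup from day and hour to a contact method, which the console maintains for you. As a hedged sketch of the same logic (the function name is our own, and the values simply mirror the example schedule):

```python
def contact_method(weekday: int, hour: int) -> str:
    """Pick a notification method for an event.

    weekday: 0 = Monday ... 6 = Sunday; hour: 0-23.
    Mirrors the example schedule: page during the 1:00 p.m. lunch hour
    and all day Saturday; e-mail during the rest of the day.
    """
    if weekday == 5:   # Saturday
        return "page"
    if hour == 13:     # the 1:00 p.m. lunch hour
        return "page"
    return "email"

# e.g. contact_method(0, 13) -> "page"; contact_method(2, 10) -> "email"
```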
Here we have just scratched the surface of what is possible with events, but hopefully this will encourage you to investigate them further and implement them as part of your data warehouse management procedures.
Data warehouses require a great deal of maintenance, because data is continually being loaded. You could write scripts to perform these tasks and remember to run them, or you could use the Job Scheduler in OEM to automatically run your jobs at the designated time.
The job facility provides the ability to define jobs that must be run in order to manage your data warehouse. These jobs can be placed in a library and scheduled automatically by the console to run at the specified time.
To create a job, select Job from the toolbar and then Create Job. The screen shown in Figure 7.11 will appear, which is where you describe the details for this job. In this example, we are creating a job that will automatically close the data warehouse at 7:00 p.m. First, we name the job and state upon which database it is to be performed. Although this example features a database task, other options are available from the drop-down list on destination type.
Figure 7.11: Creating a job.
All jobs may be kept in the job library for reuse. Therefore, if you wish to retain the job, now is a good time to select the Submit & Add Job to Library option at the bottom of the window. Choosing this option merely changes the Submit button to read Submit & Add; the job is not placed on the job queue until you click that button.
To move to the next screen, click on the Tasks tab to select the action that is to be performed. You can see in Figure 7.12 that many tasks are available. In this example, we have selected Shutdown Database.
Figure 7.12: Selecting a task for a job.
Once the task has been chosen, click on the Parameters tab to see if any parameters are needed for this task. In Figure 7.13, we can see that for closing the database, it asks us how we want to close the database. We have selected the Immediate mode.
Figure 7.13: Parameters for a job.
Also note here that we have the ability to specify a user name and password for this operation different from the defaults we set up earlier. Since closing the database requires a privileged account, we have changed the job to use the SYSTEM user name for this task.
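Behind the scenes, a shutdown job of this kind is equivalent to running a short SQL*Plus script under the privileged account. As a rough sketch of what such a script looks like (the function and its credential handling are purely illustrative, and we assume the account has the SYSDBA privilege):

```python
def shutdown_script(username: str = "system", mode: str = "immediate") -> str:
    """Build SQL*Plus commands equivalent to the shutdown job.

    Valid modes are 'normal', 'transactional', 'immediate', and 'abort';
    the job in the example uses 'immediate'. SQL*Plus will prompt for
    the password, so it is never embedded in the script.
    """
    valid = {"normal", "transactional", "immediate", "abort"}
    if mode not in valid:
        raise ValueError(f"unknown shutdown mode: {mode}")
    return "\n".join([
        f"connect {username} as sysdba",  # assumes SYSDBA has been granted
        f"shutdown {mode}",
        "exit",
    ])
```

The point of the OEM job system is precisely that you no longer have to write, store, and remember to run scripts like this yourself.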
The definition of the job is almost complete. We must now click on the Schedule tab to state when this job is to be run. Referring to Figure 7.14, we have stated that this job is run on Monday through Friday, starting from July 3 at 7:00 p.m.
Figure 7.14: Scheduling a job.
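A schedule like this one, Monday through Friday at 7:00 p.m., reduces to a simple next-run calculation, which the console performs for you each time the job completes. A sketch of that calculation (our own illustration, not OEM code, and ignoring the start date for brevity):

```python
from datetime import datetime, timedelta

def next_run(now: datetime, hour: int = 19) -> datetime:
    """Return the next Monday-Friday run time at the given hour.

    Mirrors the example schedule: run at 7:00 p.m. on weekdays.
    """
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)   # today's run time has passed
    while candidate.weekday() > 4:       # 5 = Saturday, 6 = Sunday
        candidate += timedelta(days=1)   # skip the weekend
    return candidate

# A Friday evening after 7:00 p.m. rolls over to Monday:
# next_run(datetime(2002, 7, 5, 20, 0)) -> datetime(2002, 7, 8, 19, 0)
```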
Finally, select the Access tab to state who, if anyone, should be notified when this job is run. Then click on the Submit button to place this job on the queue to be run.
To see the jobs that are in the queue, select Jobs in the Navigator tree. Once the job appears in the queue, you can monitor its progress, as shown in Figure 7.15, where the status of the job is Scheduled because the job is pending. If it were actually running, the status would be shown as Active. Once the job has completed, it disappears from the active window; click on the History tab to see whether it succeeded or failed.
Figure 7.15: Watching a job run.
Double-clicking on the job will display the screen shown in Figure 7.16, which is where we can see the details of the job. In Figure 7.16, we can see that the job has been submitted and will run next at 7:00 p.m. on July 3.
Figure 7.16: Logging from the job.
To see the results from the job, select the job, and click on the Show Output button. Figure 7.17 shows a load job that failed, because it could not find the control file in the specified location.
Figure 7.17: Output from the job.
The examples in this section have shown database events and jobs. Alternatively, a job could be an operating system task that needs to be performed for managing the warehouse, such as ensuring that files are copied from one system to another prior to loading the warehouse.
The number of jobs we can submit here is quite extensive, and, once they are in the library, we have a comprehensive set of tasks that are stored safely in the repository and can be reused.
The job library can be found by clicking on Job on the toolbar and then selecting Job Library.
Don't forget to back up the database used to hold the Enterprise Manager repository; otherwise, you could lose all of the jobs in the library if that database ever failed.
The console is not limited to monitoring databases and nodes; we can also launch database applications from here. The licensed options will determine the applications that are available. In Figure 7.18, we can see one of the techniques used to launch an application: here we have selected the database from the Navigator window, and clicking the right mouse button displays the available tools, such as Shutdown.
Figure 7.18: Starting applications from the console.
In this example, we have clicked on Backup Management, giving us access to the backup and recovery tools. To perform an export or import of the data, the Data Management option must be selected.
An alternative method for starting the applications is to use the applications drawer located on the left-hand side of the screen; for example, to launch SQL*Plus Worksheet, click on its icon. Applications can also be started from the Tools section of the toolbar.
Hopefully, you are now beginning to appreciate how to manage your data warehouse using Oracle Enterprise Manager. Next, we will look at how we can use OEM to perform the management tasks in our data warehouse.