Automatic Storage Management (ASM)




Automatic Storage Management, or ASM, is another new Oracle Database 10g feature that revolutionizes the way Oracle and the HA DBA manage database files. ASM combines volume management with the concept of Oracle managed files, allowing the HA DBA to create a database composed of datafiles that are not only self-managed, but whose I/O is also automatically balanced among the available disks. ASM couples this simplified file management with the ability to automatically self-tune, while at the same time providing a level of redundancy and availability that is absolutely imperative for the storage grid.

The implementation of ASM involves the creation of a normal Oracle instance, with the parameter INSTANCE_TYPE set to a value of ASM, on a node where a database or databases reside. This instance does not have an associated database, but rather is used to manage the disks that are accessed by your database(s). As such, an ASM instance is never opened; it is only mounted. Mounting an ASM instance involves mounting the disk groups associated with it, so that the disk groups and files are then accessible from the other instances. We will discuss ASM in various sections throughout the remainder of the book, but we will take the time here to discuss the concepts behind ASM, how to implement ASM in your environment, and how to manage an ASM environment once you are up and running.

ASM Concepts

The underlying concept behind ASM is that it is a file system created specifically for Oracle datafiles, on top of RAW or block devices. This file system is kept and maintained by the Oracle kernel, so Oracle knows where file extents are and automatically manages the placement of these extents for maximum performance and availability of your database. You, as the HA DBA, will not know or care where Oracle is placing extents on disk. Oracle will do all of that management for you through ASM. No volume management software is needed, and no file system is needed.

ASM Disk Group

At its highest level, within ASM you will create ASM disk groups, comprised of one or more disks (usually RAW, but certified NFS storage will work as well). Oracle will take that disk group as the location for creating files, and will lay down files in 1MB extents across however many disks are available. The more disks that are used within a disk group, the more flexibility you will give Oracle to spread the I/O out among disks, resulting in better performance and improved redundancy. ASM disk groups can be used for all Oracle files, including the spfile, the controlfile, the online redo logs, and all datafiles. In addition, you can use an ASM disk group for your flashback recovery area (discussed in Chapter 8), as a location for all RMAN backups, flashback logs, and archived logs. Bear in mind, however, that ASM was created specifically for Oracle, so it cannot be used as a general purpose file system. As such, files in an ASM disk group are not visible at the OS, and files such as Oracle binaries and Oracle trace files must be kept on a regular file system (such as UFS or NTFS).
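To make this concrete, here is a brief sketch of how a database references a disk group once one exists; the disk group name ASM_DISK and the tablespace names are ours, purely for illustration:

-- on the database instance: make the disk group the default datafile location
alter system set db_create_file_dest = '+ASM_DISK';
-- ASM chooses the placement and the name; no path or filename is needed
create tablespace app_data;
-- a file can also be placed in a disk group explicitly, sized as usual
create tablespace app_index datafile '+ASM_DISK' size 500m;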

Note 

We mentioned that extents are written out in 1MB sizes, and this is true for all files except controlfiles and logfiles. Redo logs, controlfiles, and flashback logs use fine-grained striping, by default, which results in extents of 128K, rather than 1MB. This allows large I/Os to be split into smaller chunks and processed by more disks, resulting in better performance for these types of files.
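You can confirm the default striping for each file type by querying V$ASM_TEMPLATE on the ASM instance; a minimal sketch, assuming disk group number 1:

-- run on the ASM instance: STRIPE shows FINE (128K) or COARSE (1MB) per file type
select name, redundancy, stripe
from v$asm_template
where group_number = 1
order by name;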

Stripe and Mirror Everything (SAME)

ASM adheres to the SAME philosophy, which recommends that you stripe and mirror everything. This is handled in ASM by allowing the setting of redundancy levels during the creation of a disk group. Normal redundancy implies that you have at least two disks, because every allocation unit (or extent) will be written twice, to two different disks within the disk group. High redundancy implies three-way mirroring, meaning every allocation unit (or extent) will be written to three separate disks within the disk group. This is not the traditional type of mirroring that you may be used to, however; it is done at the extent level. For example, let's assume that we are mirroring with normal redundancy (two-way mirroring), and that we have five disks in a disk group. If we then create a 10MB file on that disk group, the first 1MB extent may be mirrored across disks 3 and 5, the next 1MB extent may be mirrored across disks 2 and 4, the next extent across disks 1 and 3, and so on. When all is said and done, every extent has been mirrored, but no two disks will contain identical data. Choosing external redundancy when creating a disk group is also perfectly acceptable, but it implies that all mirroring is handled at the hardware level.

By the same token, ASM achieves striping by spreading the extents (also known as allocation units) for a given file across all available disks in a disk group. So, your TEMP tablespace may be 4GB in size, but if you have a disk group with 10 disks in it, you will not care how the tablespace is laid out; Oracle with ASM will automatically spread the extents for this file across the disks, seeking to balance out the I/O and avoid hot spots on disk. If Oracle detects that a particular disk is getting too much I/O, it will attempt to read the mirrored copy of an extent from a different disk, if one is available. The same is true for all files, including redo logs.

Note 

Mirroring is actually performed to what are known as 'partner' disks. Within an ASM disk group, any given disk can have a maximum of eight partners. This means that the extents written to a disk can be mirrored to any one of the eight partners defined for that disk. In our simple example, where we have only five disks, any disk can be the partner of another disk because we have not exceeded this limit. However, in a disk group with more than eight disks (say, hundreds or even thousands of disks), it is important to realize that each disk will be limited in the number of partners that can participate in the mirroring for that disk. This is done intentionally: limiting the number of partners minimizes the possibility that a double disk failure could lead to data loss, because that could only happen if the two disks that fail also happen to be partners. Utilizing high redundancy (triple mirroring) reduces this likelihood even further. A single ASM instance will theoretically support up to 10,000 disks, spread across as many as 63 disk groups, and ASM supports up to 1 million files in a disk group. In Oracle Database 10g Release 1, only one ASM instance is allowed per node.

Failure Groups  Failure groups take the redundancy of disks to the next level, by grouping together the disks that share a common point of failure, such as a disk controller. If that controller fails and all of the disks attached to it become inaccessible, other disks within the disk group will still be accessible as long as they are connected to a different controller. When failure groups are defined within a disk group, Oracle and ASM will still mirror writes to different disks, but will ensure that the mirrored copies land in different failure groups, so that the loss of a controller will not impact access to your data.
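As a sketch of the syntax, the following creates a normal-redundancy disk group whose disks are split into two failure groups by controller; the device paths and all names are hypothetical:

-- run on the ASM instance: extents are mirrored across failure groups,
-- so the loss of controller A leaves a complete copy behind controller B
create diskgroup asm_data normal redundancy
  failgroup controller_a disk '/dev/raw/raw1', '/dev/raw/raw2'
  failgroup controller_b disk '/dev/raw/raw3', '/dev/raw/raw4';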

File Size Limits on ASM

As we have discussed, ASM disk groups support a variety of different file types, including online redo logs, controlfiles, datafiles, archived redo logs, RMAN backup sets, and flashback logs. In Oracle Database 10g Release 1, ASM imposes a maximum file size on any file in an ASM disk group, regardless of the file type. That maximum size depends on the redundancy level of the disk group itself, as shown here:

Max File Size    Redundancy Level
-------------    -------------------
300GB            External redundancy
150GB            Normal redundancy
100GB            High redundancy

These maximums are separate from the limits imposed elsewhere, such as the maximum size of a database file itself. In Oracle Database 10g Release 1, for example, a database file is limited to approximately 4 million blocks. This limit applies irrespective of the underlying storage mechanism, file system, or platform. As you can see, for most block sizes this will not be an issue. However, some platforms (such as Tru64) support a db_block_size of up to 32K, and at that block size the normal maximum database file size works out to 128GB (4 million blocks x 32KB). If that file resides in a high-redundancy ASM disk group (three-way mirroring), however, the maximum file size would actually be 100GB.

In addition, Oracle Database 10g includes a new feature: the ability to create a tablespace using the BIGFILE syntax. A BIGFILE tablespace is allowed only a single datafile, but the limit on the number of blocks is increased to approximately 4 billion (from 4 million). The theoretical maximum size for a datafile in a BIGFILE tablespace would then be in the terabytes, but as you can see, ASM will limit the datafile size to 300GB or lower, based on the redundancy level of the disk group. Expect this limitation to be removed in subsequent patches or releases; see MetaLink Note 265659.1 for details.
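A sketch of the syntax, with a hypothetical tablespace name and size:

-- a BIGFILE tablespace allows exactly one datafile, but with a much higher
-- block limit; on a 10g Release 1 ASM disk group the file size is still
-- capped by the redundancy-based limits shown above
create bigfile tablespace big_data datafile '+ASM_DISK' size 100g;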

Rebalancing Operations

Inherent to ASM is the ability to add and remove disks from a disk group on the fly without impacting the overall availability of the disk group itself, or of the database. This, again, is one of the precepts of grid computing. ASM handles this by initiating a rebalance operation any time a disk is added or removed. If a disk is removed from the disk group, either due to a failure or excess capacity in the group, the rebalance operation will remirror the extents that had been mirrored to that disk and redistribute the extents among the remaining disks in the group. If a new disk is added to the group, the rebalance will do the same, ensuring that each disk in the group has a relatively equal number of extents.

Because of the way the allocation units are striped, a rebalance operation only requires that a small percentage of extents be relocated, minimizing the impact of this operation. Nevertheless, you can control the rebalance operation by using the parameter ASM_POWER_LIMIT, which is a parameter specific to the ASM instance. By default, this is set to 1, meaning that any time a disk is added or removed, a rebalance operation will begin, using a single slave process. By setting this value to 0, you can defer the operation until later (say, overnight), at which time you can set ASM_POWER_LIMIT to as high as 11, its maximum. This will generate 11 slave processes to do the work of rebalancing. This can be accomplished via the alter system command:

alter system set asm_power_limit=0;
alter system set asm_power_limit=11;
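Alternatively, a rebalance can be kicked off for a single disk group with an explicit power setting, which overrides ASM_POWER_LIMIT for that one operation; the disk group name here is from our earlier examples:

-- run on the ASM instance: rebalance one disk group at full power
alter diskgroup asm_disk rebalance power 11;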

Background Processes for ASM  An ASM instance introduces two new types of background processes: the RBAL process and the ARBn processes. The RBAL process within the ASM instance determines when a rebalance needs to be done and estimates how long it will take. RBAL then invokes the ARBn processes to do the actual work; the number of ARBn processes invoked depends on the ASM_POWER_LIMIT setting. If this is set to the maximum of 11, an ASM instance will have 11 ARB background processes, starting with ARB0 and ending with ARBA. In addition, a regular database instance will have an RBAL and an ASMB process, but the RBAL process in a database instance is used for making global calls to open the disks in a disk group. The ASMB process communicates with the CSS daemon on the node and receives file extent map information from the ASM instance. ASMB is also responsible for providing I/O stats to the ASM instance.
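You can monitor a rebalance in progress by querying V$ASM_OPERATION on the ASM instance, which shows the power in use and RBAL's estimate of the minutes remaining:

-- one row per active long-running ASM operation (REBAL during a rebalance)
select group_number, operation, state, power, actual,
       sofar, est_work, est_rate, est_minutes
from v$asm_operation;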

ASM and RAC

Because ASM is managed by Oracle, it is particularly well suited to a RAC installation. Using a shared disk array with ASM disk groups for file locations can greatly simplify the storage of your datafiles. ASM eliminates the need to configure RAW devices for each file, simplifying the file layout and configuration. ASM also eliminates the need to use a cluster file system, as ASM takes over all file management duties for the database files. However, you can still use a cluster file system if you want to install your ORACLE_HOME on a shared drive (on those platforms that support the ORACLE_HOME on a cluster file system). We discuss ASM instance configuration in a RAC environment in the next section, as well as in Chapter 4.

Implementing ASM

Conceptually, as we mentioned, ASM requires that a separate instance be created on each node/server where any Oracle instances reside. On the surface, this instance is just like any other Oracle instance, with an init file, init parameters, and so forth, except that this instance never opens a database. The major difference between an ASM instance and a regular instance lies in a few parameters:

  • INSTANCE_TYPE = ASM (mandatory for an ASM instance)

  • ASM_DISKSTRING = /dev/raw/raw* (path to look for candidate disks)

  • ASM_DISKGROUPS = ASM_DISK (defines disk groups to mount at startup)

Aside from these parameters, the ASM instance requires an SGA of around 100MB, leaving the total footprint for the ASM instance at around 130MB. Remember: no controlfiles, datafiles, or redo logs are needed for an ASM instance. The ASM instance is used strictly to mount disk groups. A single ASM instance can manage disk groups used by multiple Oracle databases on a single server. However, in a RAC environment, each separate node/server must have its own ASM instance (we will discuss this in more detail in Chapter 4).
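Putting these together, a minimal init.ora for an ASM instance might look like the following sketch (the disk string and disk group name are taken from the list above; adjust them for your platform):

# init+ASM.ora - minimal ASM instance parameters (illustrative)
instance_type  = asm             # mandatory; marks this as an ASM instance
asm_diskstring = /dev/raw/raw*   # where to look for candidate disks
asm_diskgroups = ASM_DISK        # disk group(s) to mount at startup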

Creating the ASM Instance

If you are going to create a database using ASM for the datafiles, you must first create the ASM instance and disk groups to be used. It is possible to do this via the command line by simply creating a separate init file with INSTANCE_TYPE = ASM. Disk groups can be created or modified using SQL commands such as CREATE DISKGROUP, ALTER DISKGROUP, DROP DISKGROUP, and so on. The instance name for your ASM instance should be +ASM, with the + actually being part of the instance name. In a RAC environment, the instance_number will be appended, so ASM instances will be named +ASM1, +ASM2, +ASM3, and so forth.
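For example, a manual first-time startup might look like this sketch, with hypothetical device paths:

$ export ORACLE_SID=+ASM      # the + is part of the instance name
$ sqlplus / as sysdba
SQL> startup nomount          -- nomount: no disk groups exist to be mounted yet
SQL> create diskgroup asm_disk external redundancy
  2  disk '/dev/raw/raw1', '/dev/raw/raw2', '/dev/raw/raw3';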

However, as in the past, we recommend using the GUI tools such as the DBCA. The simplest way to get an ASM instance up and running is to create your database by using the DBCA. If there is not currently an ASM instance on the machine, the DBCA will create an ASM instance for you, in addition to your database instance. If you are using the DBCA to create a RAC database, an ASM instance will be created on each node selected as part of your cluster. After the ASM instance is created, you will be able to create disk groups directly from within the DBCA by providing disks in the form of RAW slices. The ASM instance will then mount the disk groups before proceeding with the rest of the database creation. Figure 3-5 shows the Storage Options screen, where you choose the type of storage for the database being created. After selecting ASM, you will be presented with the ASM Disk Groups screen, which will show you the available disk groups. On a new installation, this screen will be blank, as there will be no disk groups yet. So, at this point, on a new installation, you would choose the Create New option.

Figure 3-5: Choosing ASM on the DBCA Storage Options screen

Creating ASM Disk Groups Using the DBCA

In the Create Disk Group screen (see Figure 3-6), ASM searches for disks based on the disk discovery string defined in the ASM_DISKSTRING parameter. Different platforms have different default values for the disk string; on Linux, the default is /dev/raw/*, and on Solaris, the default is /dev/rdsk/*. This value can be modified from within the DBCA, as shown in Figure 3-6, by clicking the Change Disk Discovery Path button. Once you have the correct path, you should end up with a list of possible candidate disks. Note in Figure 3-6 that the Show Candidates radio button is selected. This will display only those disks in the discovery string that are not already part of another disk group. Note also that two of the disks have FORMER under the Header Status. This is because these two disks were at one time part of a different disk group, but that group was dropped. This status might also show up if a disk is removed from an existing disk group. Nevertheless, these disks are available to be added to this group and are considered valid candidates. In Figure 3-6, you can see that we have selected these two disks to make up our new disk group.

Figure 3-6: Creating the ASM disk group

Note 

On most Linux platforms, Oracle provides a special ASM library to simplify the ASM integration with the operating system, and to ease the disk discovery process. At this time, the library is not available for all platforms, but you can check OTN periodically for the availability of a library for your platform at http://otn.oracle.com/software/tech/linux/asmlib/index.html

At this point, the ASM instance has already been created by the DBCA, and the disk group will be mounted by the ASM instance. If this is a RAC environment, the ASM instance will have been created on all nodes selected as participating in the database creation. (ASM mounts the disk groups in a manner similar to the way a database is mounted, but rather than an ALTER DATABASE MOUNT command, an ALTER DISKGROUP ALL MOUNT command is used.) Once the disk group is mounted, you can proceed to select the group for your database files and continue on with the creation of the actual database. You will be asked if you want to use Oracle managed files, and you will also be prompted to set up a flashback recovery area. If you choose to set up a flashback recovery area, we recommend that you set up a separate ASM disk group for it, and the DBCA will give you an opportunity to create that separate disk group. If you choose not to do so at creation time, you can create additional disk groups later on, either manually or using Enterprise Manager.
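If you create that flashback recovery area disk group manually later, the steps might look like this; the group name, devices, and size are hypothetical:

-- on the ASM instance: a dedicated disk group for recovery files
create diskgroup asm_flash external redundancy
  disk '/dev/raw/raw5', '/dev/raw/raw6';

-- on the database instance: point the flashback recovery area at the group
alter system set db_recovery_file_dest_size = 10g;
alter system set db_recovery_file_dest = '+ASM_FLASH';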

Note 

ASM requires that the ocssd daemon (cluster synchronization services) be running, even for a single instance. (On Windows, ocssd manifests itself as a service called OracleCSService.) For this reason, Oracle will install and create the ocssd daemon (or OracleCSService) automatically on a new Oracle Database 10g install, even in a single-instance environment.

Managing ASM Environments with EM

Using Enterprise Manager is most likely the simplest way to stay on top of your ASM environment. To do so, we recommend that you configure Enterprise Manager Grid Control (as described in Chapter 5). This is a particular necessity when running in a RAC environment, as Grid Control allows you to manage all instances in the cluster, as well as all ASM instances, from one central location.

Navigating Through EM

Enterprise Manager provides a graphical interface for such ASM operations as adding or removing disks from a disk group, dropping and creating new disk groups, mounting and dismounting disk groups, and rebalancing within a disk group. While you cannot create the ASM instance itself through EM, you can modify parameters such as ASM_DISKSTRING or ASM_POWER_LIMIT. Additionally, while we mentioned that files in an ASM disk group are not visible at the OS, Enterprise Manager will allow you to look at the file and directory structure used by ASM to manage these files. To end this section, we will go through an HA Workshop that walks you through the steps needed to navigate ASM through Enterprise Manager.

HA Workshop: Managing ASM Through Enterprise Manager


Workshop Notes

This workshop assumes that Enterprise Manager Grid Control has already been configured in your environment, and also assumes that an ASM instance is running. If EM Grid Control has not been configured, please refer to Chapter 5 for an overview of Grid Control configuration, or refer to the Oracle Database 10g Release 1 Oracle Enterprise Manager Advanced Configuration guide.

Step 1.  Log on as SYSMAN to the Enterprise Manager Grid Control screen using the host name of the management repository machine, and port 7777. In this example, rmsclnxclu1 is a node in our cluster, but it also happens to be the host for the EM management repository:

http://rmsclnxclu1:7777/em

Step 2.  Click on the Targets tab across the top, and then choose All Targets from the blue bar. This will list all instances, including ASM instances.

Step 3.  Find the ASM instance from the list of targets. The name will be something along the lines of +ASM1_rmsclnxclu1.us.oracle.com, where rmsclnxclu1.us.oracle.com is the host name where the ASM instance is running. Click on the link to the ASM instance. This will bring you to the home page for the ASM instance. Here, you can see a list of disk groups, the databases serviced by the disk groups, and a graphical depiction of the amount of disk space each database is using. In addition, you will see any alerts related to your ASM instance.

Step 4.  Click on the Performance link for the ASM instance. (You may be prompted to log in; if so, provide the SYSDBA password for the ASM instance.) Here you will see a graphical depiction of the throughput, disk I/O per second, and disk I/O response times for all disk groups managed by ASM. Clicking on the Expand All option at the bottom of the page will allow you to see the cumulative statistics for each disk in the disk group.

Step 5.  Next click on the Configuration link across the top. Here you can modify the values for the ASM_DISKSTRING, ASM_DISKGROUPS, and ASM_POWER_LIMIT parameters.

Step 6.  Now click on the Administration link. This will provide a link to each ASM disk group managed by the ASM instance, as shown in Figure 3-7. If there are no disk groups, you can create them from here, or you can create additional groups by clicking on the Create button. Disk groups can also be mounted or dismounted from here; if you are in a RAC environment, you will be prompted to select which instances you want to dismount the group from. By selecting the options from the drop-down list, you can initiate an immediate rebalance operation as well, as shown in Figure 3-7.

Figure 3-7: ASM disk groups in Enterprise Manager

Step 7.  Now click on the disk group itself to see the disks in the group again. This will take you into the General description page for that disk group. Here you can check the disks' integrity, add disks to the disk group, or delete disks from the group. You can also drill down further to see performance stats for the individual disks.

Step 8.  Next click on the Files link to view the files on the ASM disk group. Choose the Expand All option to view all the files within all of the ASM directories. In Figure 3-8, we have displayed a partial expansion of the directory list. As you can see, Oracle creates a directory structure within the ASM disk group, which it maintains internally. In this example, Oracle managed files are in use, meaning that the parameter DB_CREATE_FILE_DEST is set to +ASM_DISK. Oracle automatically created an ASM directory with the database name, and then within that directory created additional subdirectories for controlfiles, logfiles, datafiles, and so forth. When using Oracle managed files, it is not necessary to specify either the name or the location of the file. Oracle will determine the location based on the ASM file type, and will then assign a filename based on the type of file, the file number, and the version number.

Figure 3-8: Viewing ASM files from within Enterprise Manager
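The fully qualified names that result follow the pattern +<group>/<dbname>/<file type>/<tag>.<file#>.<incarnation#>, where the tag for a datafile is its tablespace name. For the SYSAUX datafile examined in the next step, the full name would look something like the following (the database name grid is hypothetical):

+ASM_DISK/grid/datafile/SYSAUX.270.3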

Step 9.  Lastly, click on a single file to get a description of how the file is mirrored, the block size and number of blocks in the file, the file type, the creation date, and the striping (either coarse or fine). The output should look something like this:

SYSAUX.270.3: Properties
Name               SYSAUX.270.3
Type               DATAFILE
Redundancy         MIRROR
Block Size (bytes) 8192
Blocks             64001
Logical Size (KB)  512008 KB
Striped            COARSE
Creation Date      15-FEB-2004 02:43:32

ASM Metadata

As we have discussed, an ASM instance has no physical storage component associated with it; the ASM instance is purely a logical incarnation, in memory. However, there are physical components to an ASM disk group, stored on each ASM disk. When a disk is made part of an ASM disk group, the header of each disk is updated to reflect information including the disk group name, the physical size of all disks in the group, the allocation unit size, and so on. The header also contains information relating specifically to that disk, including its size, its failure group, the disk name, and so forth. In addition, metadata is stored in ASM files on the disks themselves, using file numbers below 256. For this reason, when creating a new database on an ASM disk group, the SYSTEM datafile will generally be file 256, and the rest of the files in the database are numbered upward from there, because file numbers 255 and below are reserved for ASM metadata. The ASM metadata is always mirrored across three disks (if available), even when the external redundancy option is chosen.
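You can observe this numbering from the ASM instance by querying V$ASM_FILE, which lists the files that databases have created in a disk group (ASM's own metadata files are not shown), so the lowest file number reported will typically be 256:

-- run on the ASM instance
select group_number, file_number, blocks, bytes
from v$asm_file
order by group_number, file_number;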





