
Monday, October 25, 2010

Engineering Dynamic Optimal Storage with Oracle ASM


The possibility of customizing storage performance is no longer limited to the many products that can dynamically reorganize the most active data blocks onto hot volume sectors, i.e., those sectors whose bandwidth and overall response time yield better I/O performance, usually in support of enterprise applications.

In the case of Oracle databases, the most active blocks are normally determined by segments associated with logical objects, such as tables, indexes, or materialized views. Furthermore, the capability of aligning storage performance with the most dynamic objects is quite relevant to ASM technology, and a core value in the Exadata database machine's elasticity. ASM's unique capability to perform rebalancing operations at the ASM file level, based on its unique extent indexing, allows for optimal rebalancing not only in conventional rebalancing operations but also in a smart implementation engineered on the basis of Intelligent Data Placement.

The key idea is to categorize segments associated with logical objects both by size (top-down) and by their level of utilization, in particular with respect to the top queries and database statistics. Both a size indicator and a usage level index can be implemented. Furthermore, a compound index can be derived as the product of the size indicator and the usage level index, which can then be subject to a mathematical norm; the outcome allows an Oracle DBA or architect to categorize a segment as a candidate for a mirror hot sector, or otherwise for a mirror cold sector. Three levels can be implemented, with the highest level applicable to the mirror hot sectors and the lowest to the mirror cold sectors.

[Compound Index] = [Size Indicator] x [Usage Level Index]

Based on the compound index, an object with a high compound index value should use the best slots in the hot sectors, which actually use the best bandwidth and head speed on the physical disks composing the ASM drives.
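A minimal sketch of such a ranking follows, assuming access to DBA_SEGMENTS and V$SEGMENT_STATISTICS; the use of the 'logical reads' statistic as the usage level index and the gigabyte-based size indicator are illustrative choices rather than a prescribed method:

-- Rank segments by a compound index of size and usage (illustrative sketch)
SELECT s.owner,
       s.segment_name,
       ROUND(s.bytes / POWER(1024, 3), 2)            AS size_indicator_gb,
       st.value                                      AS usage_level_index,
       ROUND(s.bytes / POWER(1024, 3) * st.value, 2) AS compound_index
FROM DBA_SEGMENTS s
JOIN V$SEGMENT_STATISTICS st
  ON st.owner = s.owner
 AND st.object_name = s.segment_name
 AND NVL(st.subobject_name, '-') = NVL(s.partition_name, '-')
WHERE st.statistic_name = 'logical reads'
ORDER BY compound_index DESC;

The top-ranked segments become candidates for mirror hot sectors, and the bottom-ranked ones for mirror cold sectors.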

This is done by creating a custom template, applying it to the appropriate ASM data file, and subsequently determining its relevance to logical objects. The SQL API is shown in the example below:

ALTER DISKGROUP adndata1 ADD TEMPLATE datafile_hot
ATTRIBUTE ( HOT MIRRORHOT);

ALTER DISKGROUP adndata1 MODIFY FILE '+data/adn3/datafile/tools.255.765689507'
ATTRIBUTE ( HOT MIRRORHOT);

Consistently, tables can be associated with tablespaces that have been designed on the basis of ASM data files, which have already been set with mirror hot or mirror cold templates, accordingly. These maintenance activities can occur as a baseline, for instance, when the database is created (first loaded), migrated, or upgraded, and systematically when a relevant event occurs. For instance, a partitioned table using partition pruning will see its most active partition change due to a partition interval or time range rollover, which makes a different or new partition the most dynamic segment, which in turn may need to be placed in a mirror hot sector for optimal performance. In general, business algorithms can be customized, implemented, and scheduled for periodic ASM storage maintenance in order to attain optimal performance, as sketched below.
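As a minimal sketch of such a periodic maintenance step, assuming the newly active partition resides in a hypothetical tablespace named TOOLS_2010Q4, the required MODIFY FILE statements can be generated from the data dictionary:

-- Generate statements that move the active partition's data files to the hot region
SELECT 'ALTER DISKGROUP adndata1 MODIFY FILE ''' || f.name ||
       ''' ATTRIBUTE (HOT MIRRORHOT);' AS ddl
FROM V$DATAFILE f
JOIN V$TABLESPACE t ON t.ts# = f.ts#
WHERE t.name = 'TOOLS_2010Q4';

The generated statements can then be reviewed and executed against the ASM instance during the maintenance window.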

Concluding Remarks

With Oracle 11g Release 2, enterprise storage can now benefit from ASM elasticity and the ability to customize optimal storage performance for a specific database, by engineering ASM production maintenance practices that optimize object allocation onto hot sectors, using tablespaces associated with ASM data files that utilize MIRRORHOT templates.

Sunday, September 19, 2010

Join me at Oracle OpenWorld 2010, Moscone Center, San Francisco, CA

You are welcome to attend my presentation at the IOUG Forum.

ID#: S313474
Title: IOUG: Oracle Automatic Storage Management Load Balancing
Date: 19-SEP-10
Time: 16:00-17:00 PST
Venue: Moscone West L2 (Level 2)
Room: Rm 2007


Wednesday, June 2, 2010

Oracle ASM Load Balancing

ASM Load Balancing

Anthony D. Noriega, MBA, MSCS, BSSE, OCP

Abstract
ASM brings the ability to leverage integrated volume management, usage, and hard drive access optimization beyond the underlying formatting, while also reaching a minimum level of I/O contention. Understanding ASM processes, in particular RBAL, and the various strategies to attain optimization during regular operations requiring load balancing is carefully discussed and presented with key details, in particular adjusting the ASM power limit. Various scenarios involving specific command usage are introduced and clearly explained, with focus on driving the best load balancing approaches and desirable outcomes while monitoring the ASM instance accordingly. Other tasks such as rolling ASM upgrades and ASM repairs are covered in relation to load balancing goals.

Background and Conceptual Framework
Both an Oracle ASM instance and an Oracle Database instance are built on the same technology. Like a database instance, an Oracle ASM instance has memory structures (System Global Area) and background processes. Besides, Oracle ASM has a minimal performance impact on a server. Rather than mounting a database, Oracle ASM instances mount disk groups to make Oracle ASM files available to database instances.

An ASM instance has two background processes, namely:
  • RBAL, which coordinates rebalance activity for disk groups.
  • ARBx, x=0,1,2..., which actually perform the extent movements of the rebalance.
In the database instances, there are three background processes to support ASM, namely:
  • RBAL, which performs global opens on all disks in the disk group.
  • ASMB, which contacts CSS using the disk group name and acquires the associated ASM connect string. The connect string is subsequently used to connect to the ASM instance.
  • O00x, a group of slave processes, with a numeric sequence starting at 000.

Exhibit 1. ASM Global Architecture (Oracle ASM Instance with Oracle Database Instance.)

ASM and Oracle Clusterware

To share a disk group among multiple nodes, the DBA must install Oracle Clusterware on all of the nodes, regardless of whether Oracle RAC is installed on these nodes. Besides, Oracle ASM instances that are on separate nodes do not need to be part of an Oracle ASM cluster. However, if the Oracle ASM instances are not part of an Oracle ASM cluster, they cannot communicate with each other. Likewise, multiple nodes which are not part of an Oracle ASM cluster are unable to share a disk group.

Failure groups are defined when creating an Oracle ASM disk group; after the disk group is created, it is not possible to alter its redundancy level.
Oracle ASM metadata resides within the disk group and contains information that Oracle ASM uses to control a disk group, including:
· The disks that belong to a disk group
· The filenames of the files in a disk group
· The location of disk group data file extents
· The amount of space that is available in a disk group
· A redo log recording information about atomically changing metadata blocks
· Oracle ADVM volume information
When Oracle ASM instances are clustered using Oracle Clusterware, there is one Oracle ASM instance for each cluster node. In contrast, when there are several database instances for different databases on the same node, then these database instances share the same single Oracle ASM instance.
Oracle ASM Striping
There are two primary purposes for Oracle ASM striping, namely:

· To balance loads across all of the disks in a disk group
· To reduce I/O latency
Coarse-grained striping provides load balancing for disk groups while fine-grained striping reduces latency for certain file types by spreading the load more widely.
Extents

The contents of Oracle ASM files are stored in a disk group as a collection of extents that are stored on individual disks within disk groups. Each extent resides on an individual disk. Extents consist of one or more allocation units (AU), and can use variable size to enable support for larger Oracle ASM data files, reduce SGA memory requirements for very large databases, and improve performance for file create and open operations. The initial extent size equals the disk group allocation unit size and it increases by a factor of 4 or 16 at predefined thresholds.
The extent size of a file varies as follows:
· Extent size always equals the disk group AU size for the first 20000 extent sets (0 - 19999).
· Extent size equals 4*AU size for the next 20000 extent sets (20000 - 39999).
· Extent size equals 16*AU size for the next 20000 and higher extent sets (40000+).
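For illustration, assuming a 1 MB allocation unit and external redundancy (one extent per extent set), the first 20000 extents are 1 MB each and address roughly the first 20 GB of the file, the next 20000 extents are 4 MB each and address roughly the next 80 GB, and every extent beyond that is 16 MB.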
Initialization Parameters

The following initialization parameters should be well known when balancing ASM load, namely:
ASM_DISKGROUPS

ASM_DISKGROUPS is a dynamic parameter that specifies a list of the names of disk groups that an Oracle ASM instance mounts at startup. In particular, Oracle ignores the value set for ASM_DISKGROUPS when specifying the NOMOUNT option at startup or when issuing the ALTER DISKGROUP ALL MOUNT statement. Besides, its default value is a NULL string.
When using a server parameter file (SPFILE), Oracle ASM automatically adds a disk group to this parameter when the disk group is successfully created or mounted. Similarly, Oracle ASM automatically removes a disk group from this parameter when the disk group is dropped or dismounted.

The following is an example of setting the ASM_DISKGROUPS parameter dynamically:
SQL> ALTER SYSTEM SET ASM_DISKGROUPS = ADNDATA1, ADNFRA;
When using a text initialization parameter file (PFILE), this parameter is edited in order to add the name of a disk group such that it is mounted automatically at instance startup.
The following is an example of the ASM_DISKGROUPS parameter in the initialization file:
ASM_DISKGROUPS = ADNDATA1, ADNFRA
ASM_DISKSTRING
The ASM_DISKSTRING initialization parameter specifies the comma-delimited list of strings that limits the set of disks that an Oracle ASM instance discovers. The discovery strings can include wildcard characters.
This discovery string format depends on the Oracle ASM library and the operating system that are in use.
For instance, on Linux servers that are not using ASMLib, to limit the discovery process to only include disks that are in the /dev/adndisks directory, set the ASM_DISKSTRING initialization parameter to:
/dev/adndisks/*
Oracle ASM cannot use a disk unless all of the Oracle ASM instances in the cluster can discover the disk through one of their own discovery strings, i.e., all disks must be discoverable by all of the nodes in the cluster.
ASM_POWER_LIMIT
The ASM_POWER_LIMIT initialization parameter specifies the default power for disk rebalancing. The default value is 1 and the range of allowable values is 0 to 11 inclusive. Indeed, setting a value of 0 disables rebalancing. Likewise, higher values enable the rebalancing operation to perform tasks faster, but could lead to higher I/O overhead as well.
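For instance, as an illustrative adjustment on the ASM instance, the default rebalance power can be raised dynamically; the value 5 here is an arbitrary middle ground between rebalance speed and I/O overhead:
SQL> ALTER SYSTEM SET ASM_POWER_LIMIT = 5;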

ASM_PREFERRED_READ_FAILURE_GROUPS
The ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter value is a comma-delimited list of strings that specifies the failure groups that should be preferentially read by the given instance.

DB_CACHE_SIZE
The setting for the DB_CACHE_SIZE parameter determines the size of the buffer cache. The DB_CACHE_SIZE does not need to be set when using automatic memory management.

DIAGNOSTIC_DEST
The DIAGNOSTIC_DEST initialization parameter specifies the directory where diagnostics for an instance are located. It defaults to the $ORACLE_BASE directory for the Oracle grid infrastructure installation.

INSTANCE_TYPE

The INSTANCE_TYPE initialization parameter must be set to ASM for an Oracle ASM instance.
LARGE_POOL_SIZE
The setting for the LARGE_POOL_SIZE parameter is utilized for large allocations. In general, the default value for this parameter, e.g., 16M, is suitable for most environments. This parameter does not need to be set when using automatic memory management (AMM).

PROCESSES

The PROCESSES initialization parameter affects Oracle ASM, but the default value is usually suitable. However, if multiple database instances are connected to an Oracle ASM instance, the following formula is quite useful:
PROCESSES = 50 + 50*N
where N is the number of database instances connecting to the Oracle ASM instance.
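As a worked illustration, with N = 4 database instances connecting to the Oracle ASM instance, the formula suggests PROCESSES = 50 + 50*4 = 250.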
REMOTE_LOGIN_PASSWORDFILE

The REMOTE_LOGIN_PASSWORDFILE initialization parameter specifies whether the Oracle ASM instance checks for a password file.

SHARED_POOL_SIZE

The setting for the SHARED_POOL_SIZE parameter determines the amount of memory required to manage the instance. There is no need to set this parameter when using automatic memory management.
The configuration guidelines for parameters to properly plan SGA sizing on the database instance are as follows:
· PROCESSES: Add 16 to the current or estimated database instance value
· LARGE_POOL_SIZE: Add an additional 600K to the current or estimated database instance value
· SHARED_POOL_SIZE: Add up the values from the following queries to obtain the current database storage size that is either on Oracle ASM or stored in Oracle ASM. Next, determine the redundancy type and calculate the SHARED_POOL_SIZE using the aggregated value as input.
SELECT SUM(bytes)/(1024*1024*1024) AS TOT_DATA_GB
FROM V$DATAFILE;
SELECT SUM(bytes)/(1024*1024*1024) AS TOT_TEMP_GB
FROM V$TEMPFILE
WHERE status='ONLINE';
SELECT SUM(bytes)/(1024*1024*1024) AS TOT_LOG_GB
FROM V$LOGFILE L1, V$LOG L2
WHERE L1.group#=L2.group#;
· When using normal redundancy disk groups, every 50 GB of space requires about 1 MB of extra shared pool plus 4 MB

· When using high redundancy disk groups, about every 33 GB of space requires about 1 MB of extra shared pool plus 6 MB

· When using external redundancy disk groups, every 100 GB of space requires about 1 MB of extra shared pool plus 2 MB.
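As a worked illustration, assuming the queries above aggregate to 400 GB stored with normal redundancy, the extra shared pool would be approximately 400/50 = 8 MB, plus 4 MB, for a total of about 12 MB added to the SHARED_POOL_SIZE of the database instance.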

AU_SIZE

The AU_SIZE disk group attribute determines the size of the allocation unit for the disk group. For information about the allocation unit size and extents, query the ALLOCATION_UNIT_SIZE column of the V$ASM_DISKGROUP view.
COMPATIBLE.ASM
Specifies the minimum software version for any Oracle ASM instance that uses a disk group.
COMPATIBLE.RDBMS
Specifies the minimum software version for any database instance that uses a disk group.
COMPATIBLE.ADVM

Determines whether the disk group can contain Oracle ASM volumes.
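As a hedged illustration of setting these attributes at creation time, assuming hypothetical candidate disks under /dev/adndisks, a disk group could be created as follows; note that AU_SIZE can only be set at creation, whereas the compatibility attributes can be advanced later with ALTER DISKGROUP ... SET ATTRIBUTE:
CREATE DISKGROUP adndata1 NORMAL REDUNDANCY
DISK '/dev/adndisks/disk1', '/dev/adndisks/disk2'
ATTRIBUTE 'au_size' = '4M',
'compatible.asm' = '11.2',
'compatible.rdbms' = '11.2',
'compatible.advm' = '11.2';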
Optimizing Oracle ASM Disk Discovery
In ASM technology, disk discovery is the mechanism used to find the operating system names for disks Oracle ASM can access. Disk discovery occurs when the instance is initialized. Oracle ASM discovers and examines the contents of all of the disks that are in the paths that you designated with values in the ASM_DISKSTRING initialization parameter. Disk discovery also takes place when:
· Running SQL statements such as the following:
· Mounting a disk group with ALTER DISKGROUP ... MOUNT
· Adding a disk to a disk group with CREATE or ALTER DISKGROUP ... ADD DISK
· Bringing a disk online with ALTER DISKGROUP ... ONLINE DISK
· Resizing a disk in a disk group with ALTER DISKGROUP ... RESIZE DISK
· Querying with SELECT ... FROM the V$ASM_DISKGROUP or V$ASM_DISK views
· Running Oracle Enterprise Manager or Oracle ASM Configuration Assistant (ASMCA) operations that invoke the SQL statements previously mentioned.
· Running ASMCMD commands that perform the same operations as the SQL statements listed before.
Once Oracle ASM successfully discovers a disk, the disk appears in the V$ASM_DISK view. The disk header can have a value of MEMBER for disks that belong to a disk group; CANDIDATE or PROVISIONED for disks that were discovered, but that have not yet been assigned to a disk group; or FORMER for disks that previously belonged to a disk group and were dropped accordingly from the disk group. In Linux environments, for instance, ASMLIB API can facilitate disk discovery.
Important Disk Discovery Rules
The following rules for discovering Oracle ASM disks can affect most rebalancing operations:

· Oracle ASM can discover up to 10,000 disks.
· Oracle ASM only discovers disk partitions, excluding partitions that include the partition table.
· From the installation point of view, candidate disks are those that have the CANDIDATE, PROVISIONED, or FORMER header status, and they can be added to disk groups without using the FORCE flag.
When adding a disk, the FORCE option must be used if Oracle ASM recognizes that the disk was managed by Oracle, listed as FOREIGN in the V$ASM_DISK view. In this scenario, the FORCE keyword can be used to add the disk to a disk group.
MEMBER disks can normally be added to a disk group by specifying the FORCE flag, if the disks are not part of a currently mounted disk group.
In addition, Oracle ASM is able to identify issues such as multiple paths on the same disk, which causes the mount to fail. Besides, proper setting of the ASM_DISKSTRING parameter can lead to discovery time improvement.
ASM Maintenance and Load Balancing

Adding Disks to a Diskgroup


The following statement successfully adds disks /dev/diskA3 through /dev/diskA5 to disk group adndata1. Since no FAILGROUP clauses are included in the ALTER DISKGROUP statement, each disk is assigned to its own failure group.
ALTER DISKGROUP adndata1 ADD DISK
'/dev/diskA3' NAME diskA3,
'/dev/diskA4' NAME diskA4,
'/dev/diskA5' NAME diskA5;

The NAME clauses assign names to the disks; otherwise, they would have been assigned system-generated names. Alternatively, a discovery string can add all matching disks in one statement:
ALTER DISKGROUP adndata1 ADD DISK
'/dev/diskA*';
An ALTER DISKGROUP statement can fail either when a disk is already part of another disk group or when the search string matches disks that are already contained in other disk groups.
Similarly, the following statement would successfully add /dev/diskD1 through /dev/diskD7 to disk group adndata2. This statement runs with a rebalance power of 4 and does not return until the rebalance operation is complete.

ALTER DISKGROUP adndata2 ADD DISK
'/dev/diskD*'
REBALANCE POWER 4 WAIT;

If /dev/diskC3 was previously a member of a disk group that no longer exists, then it is possible to use the FORCE option to add the disk as a member of another disk group. The following statement illustrates this scenario. For this statement to succeed, the disk group that formerly contained /dev/diskC3 cannot be mounted.
ALTER DISKGROUP adndata3 ADD DISK
'/dev/diskC3' FORCE;

Managing Disk Drops from a Diskgroup


With its DROP DISK clause the ALTER DISKGROUP statement handles the task of dropping disks from a disk group. The option to drop all of the disks in specified failure groups is also possible using the DROP DISKS IN FAILGROUP clause.
In general, an important load balancing drill occurs when a disk is dropped: the disk group is rebalanced by moving all of the file extents from the dropped disk to other disks in the disk group. When performing both add and drop disk operations, the best approach is to perform them in the same ALTER DISKGROUP statement. Indeed, this has the benefit of rebalancing data extents only one time, while ensuring that there is enough space for the rebalance operation to succeed. In many scenarios, it is recommended to adjust the ASM_POWER_LIMIT in a commensurate fashion to control the overhead involved.
The ALTER DISKGROUP...DROP DISK SQL statement returns to the SQL prompt before the drop and rebalance operations are complete, so the DBA must wait until the HEADER_STATUS column for this disk in the V$ASM_DISK view changes to FORMER before reusing, removing, or disconnecting the dropped disk. Querying the V$ASM_OPERATION view is useful to determine the amount of time remaining for the drop/rebalance operation to complete.
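For instance, assuming the dropped disk was named DISKA5 as in the examples below, its header status can be polled with a query along these lines:
SELECT name, header_status
FROM V$ASM_DISK
WHERE name = 'DISKA5';
Once HEADER_STATUS reports FORMER, the physical disk can be safely repurposed.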
When specifying the FORCE clause for the drop operation, the disk is dropped even if Oracle ASM cannot read or write to the disk; the FORCE clause cannot be used with an external redundancy disk group. Note that a DROP FORCE operation leaves data at decreased redundancy until the subsequent rebalance operation completes, which increases the possibility of data loss if a disk failure occurs during the rebalance.
The following statements demonstrate how to drop disks from the disk group adndata1.
The following example drops diskA5 from disk group adndata1.
ALTER DISKGROUP adndata1 DROP DISK diskA5;
The following example drops diskA5 from disk group adndata1, and also illustrates how multiple actions are possible with one ALTER DISKGROUP statement.
ALTER DISKGROUP adndata1 DROP DISK diskA5
ADD FAILGROUP failgroup1 DISK '/dev/diskA7' NAME diskA7;
ASM Storage Management Optimization with Intelligent Data Placement

Intelligent Data Placement enables setting disk regions on Oracle ASM disks for optimal performance, such that frequently accessed data is placed on the outermost (hot) tracks which have greater speed and higher bandwidth while files with similar access patterns are set physically close in order to reduce latency. Intelligent Data Placement also enables the placement of primary and mirror extents into different hot or cold regions.

Intelligent Data Placement settings can be specified for a file or in disk group templates. The disk region settings can be modified after the disk group has been created. The disk region setting can improve I/O performance by placing more frequently accessed data in regions furthest from the spindle, while reducing cost by increasing the usable space on a disk.
Intelligent Data Placement works best for the following:
· Databases with data files that are accessed at different rates. A database that accesses all data files in the same way is unlikely to benefit from Intelligent Data Placement.
· Disk groups that are more than 25% full. If the disk group is less than 25% full, the management overhead is unlikely to be worth any benefit.
· Disks that have better performance at the beginning of the media relative to the end. Because Intelligent Data Placement leverages the geometry of the disk, it is well suited to JBOD (just a bunch of disks). In contrast, a storage array with LUNs composed of concatenated volumes masks the geometry from Oracle ASM.
The COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes must be set to 11.2 or higher to use Intelligent Data Placement.
Intelligent Data Placement can be managed with the ALTER DISKGROUP ADD or MODIFY TEMPLATE SQL statements and the ALTER DISKGROUP MODIFY FILE SQL statement.
· The ALTER DISKGROUP TEMPLATE SQL statement includes a disk region clause for setting hot/mirrorhot or cold/mirrorcold regions in a template:

ALTER DISKGROUP adndata1 ADD TEMPLATE datafile_hot
ATTRIBUTE (
HOT
MIRRORHOT);

· The ALTER DISKGROUP ... MODIFY FILE SQL statement sets disk region attributes for hot/mirrorhot or cold/mirrorcold regions:
ALTER DISKGROUP adndata1 MODIFY FILE '+data/adn3/datafile/tools.255.765689507'
ATTRIBUTE (
HOT
MIRRORHOT);
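To verify the effect of these settings, assuming an Oracle 11.2 ASM instance where V$ASM_FILE exposes the region columns, the placement and per-region I/O counters can be inspected, joining V$ASM_ALIAS to recover file names:
SELECT a.name, f.primary_region, f.mirror_region, f.hot_reads, f.hot_writes
FROM V$ASM_FILE f
JOIN V$ASM_ALIAS a
  ON a.group_number = f.group_number
 AND a.file_number = f.file_number
WHERE f.primary_region = 'HOT';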

Performing Disk Resizing in Disk Groups
The RESIZE clause of ALTER DISKGROUP enables you to perform the following operations:

· Resize all disks in the disk group
· Resize specific disks
· Resize all of the disks in a specified failure group
If a new size in the SIZE clause is omitted, then Oracle ASM uses the size of the disk as returned by the operating system. The new size is written to the Oracle ASM disk header and if the size of the disk is increasing, then the new space is immediately available for allocation. Similarly, when the size is decreasing, rebalancing must relocate file extents beyond the new size limit to available space below the limit. If the rebalance operation successfully relocates all extents, then the new size is made permanent, otherwise the rebalance fails.
The following SQL statement illustrates a sample scenario:
ALTER DISKGROUP adndata1
RESIZE DISKS IN FAILGROUP failgroup1 SIZE 150G;
Undoing Disk Drops in Disk Groups
The UNDROP DISKS clause of the ALTER DISKGROUP statement enables you to cancel all pending drops of disks within disk groups. If a drop disk operation has completed, then this statement cannot be used to restore it. This statement cannot be used to restore disks that are being dropped as the result of a DROP DISKGROUP statement, or for disks that are being dropped using the FORCE clause.

ALTER DISKGROUP adndata1 UNDROP DISKS;
Preparedness for Manual Rebalancing of Disk Groups

Manual rebalance of files in a disk group is possible using the REBALANCE clause of the ALTER DISKGROUP statement. This would normally not be required, because Oracle ASM automatically rebalances disk groups when their configuration changes.
Significantly important is the fact that the POWER clause of the ALTER DISKGROUP...REBALANCE statement determines the degree of parallelism, and subsequently the speed of the rebalance operation. It can be set to a value from 0 to 11. A value of 0 halts a rebalancing operation until the statement is either implicitly or explicitly re-run. The default rebalance power is set by the ASM_POWER_LIMIT initialization parameter. Likewise, the power level of an ongoing rebalance operation can be changed by entering the rebalance statement with a new level.

The ALTER DISKGROUP...REBALANCE command by default returns immediately so that other commands can run while the rebalance operation takes place asynchronously in the background. It is practical to query the V$ASM_OPERATION view to check the status of the rebalance operation.
The DBA can specify the WAIT keyword to cause the ALTER DISKGROUP...REBALANCE command to wait until the rebalance operation is complete.
Additional rules for the rebalance operation include the following:
· An ongoing rebalance command is restarted if the storage configuration changes either when you alter the configuration, or when the configuration changes due to a failure or an outage. Furthermore, when the new rebalance fails because of a user error, a manual rebalance may be required.
· The ALTER DISKGROUP...REBALANCE statement runs on a single node even if you are using Oracle Real Application Clusters (Oracle RAC).
· Oracle ASM can perform one disk group rebalance at a time on a given instance. Therefore, when multiple rebalances on different disk groups occur, Oracle processes these tasks serially. However, it is possible to initiate rebalances on different disk groups on different nodes in parallel.
· Rebalancing continues across a failure of the Oracle ASM instance performing the rebalance.
· The REBALANCE clause (with its associated POWER and WAIT/NOWAIT keywords) can also be used in ALTER DISKGROUP commands that add, drop, or resize disks.
The following SQL statement illustrates how to manually rebalance the disk group adndata2.
ALTER DISKGROUP adndata2 REBALANCE POWER 6 WAIT;
Tuning Rebalance Operations
When the POWER clause is omitted in an ALTER DISKGROUP statement, or when rebalance is implicitly run by adding or dropping a disk, then the rebalance power defaults to the value of the ASM_POWER_LIMIT initialization parameter, which can be adjusted dynamically.
When a rebalance is in progress because a disk is manually or automatically dropped, increasing the power of the rebalance shortens the duration for redundant copies of data on the dropped disk to be reconstructed on other disks.

The V$ASM_OPERATION view provides information for adjusting ASM_POWER_LIMIT and the resulting power of rebalance operations, including an estimate in the EST_MINUTES column of the amount of time remaining for rebalance completion.
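As an illustrative monitoring step, the progress and throughput of an ongoing rebalance can be tracked as follows:
SELECT group_number, operation, state, power, sofar, est_work, est_rate, est_minutes
FROM V$ASM_OPERATION;
A persistently high EST_MINUTES value under a low power setting may justify raising ASM_POWER_LIMIT dynamically, subject to the I/O overhead considerations noted above.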
Managing Capacity in Disk Groups
When Oracle ASM provides redundancy, such as when you create a disk group with NORMAL or HIGH redundancy, you must have sufficient capacity in each disk group to manage a re-creation of data that is lost after a failure of one or two failure groups. After one or more disks fail, the process of restoring redundancy for all data requires space from the surviving disks in the disk group. When the space left is insufficient, some files might end up with reduced redundancy which translates into one or more extents in the file not being mirrored at the expected level. Therefore, the REDUNDANCY_LOWERED column in the V$ASM_FILE view provides accurate information about files with reduced redundancy.
The following guidelines help ensure that there is enough space to restore full redundancy for all disk group data after the failure of one or more disks.

· For a normal redundancy disk group, it is best to have enough free space in the disk group to tolerate the loss of all disks in one failure group; the amount of free space should be equivalent to the size of the largest failure group.

· For a high redundancy disk group, it is optimal to have enough free space to cope with the loss of all disks in two failure groups; the amount of free space should be equivalent to the sum of the sizes of the two largest failure groups.
The V$ASM_DISKGROUP view contains the following columns that contain information to help manage capacity:
· REQUIRED_MIRROR_FREE_MB indicates the amount of space that must be available in a disk group to restore full redundancy after the worst failure that can be tolerated by the disk group without adding additional storage. This requirement ensures that there are sufficient failure groups to restore redundancy. Also, this worst failure refers to a permanent failure where the disks must be dropped, not the scenario where the disks go offline and then back online.
The amount of space displayed in this column takes the effects of mirroring into account. The value is computed as follows:
· Normal redundancy disk group with more than two failure groups
The value is the total raw space for all of the disks in the largest failure group. The largest failure group is the one with the largest total raw capacity. For instance, when each disk is in its own failure group, the value would be the size of the largest capacity disk.
· High redundancy disk group with more than three failure groups
The value is the total raw space for all of the disks in the two largest failure groups.
· USABLE_FILE_MB indicates the amount of free space, adjusted for mirroring, that is available for new files to restore redundancy after a disk failure.
· TOTAL_MB is the total usable capacity of a disk group in megabytes.
· FREE_MB is the unused capacity of the disk group in megabytes, without considering any data imbalance.
With fine grain striping using 128 KB, the storage is preallocated to be eight times the AU size. The data file size may appear slightly larger on Oracle ASM than on a local file system because of the preallocation.
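For instance, the capacity-related columns above can be reviewed per disk group with a query such as:
SELECT name, type, total_mb, free_mb, required_mirror_free_mb, usable_file_mb
FROM V$ASM_DISKGROUP;
A negative USABLE_FILE_MB value indicates that there is not enough free space to restore full redundancy after a disk failure, a condition that should be corrected promptly.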
Oracle ASM Redundancy Levels
The Oracle ASM redundancy levels are, namely:
External redundancy
With this approach, essentially Oracle ASM does not provide mirroring redundancy and relies on the storage system to provide RAID functionality. Any write error causes a forced dismount of the disk group. All disks must be located to successfully mount the disk group.

Normal redundancy
Oracle ASM provides two-way mirroring by default, which means that all files are mirrored so that there are two copies of every extent.
High redundancy
Oracle ASM provides triple mirroring by default. A loss of two Oracle ASM disks in different failure groups is tolerated.
Oracle Automatic Storage Management Cluster File System (ACFS)
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is a multi-platform, scalable file system, and storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support customer files maintained outside of the Oracle Database. Oracle ACFS supports many database and application files, including executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data.
Oracle ACFS establishes and maintains communication with the Oracle ASM instance to participate in Oracle ASM state transitions including Oracle ASM instance and disk group status updates and disk group rebalancing. Oracle Automatic Storage Management with Oracle ACFS and Oracle ASM Dynamic Volume Manager (Oracle ADVM) delivers support for all customer data and presents a common set of Oracle storage management tools and services across multiple vendor platforms and operating system environments on both Oracle Restart (single-node) and cluster configurations.
Expected Limits of Oracle ADVM
The limits of Oracle ADVM are now discussed: The default configuration for an Oracle ADVM volume is four columns of 64 MB extents in length and a 128 KB stripe width. Oracle ADVM writes data as 128 KB stripe chunks in round robin fashion to each column and fills a stripe set of four 64 MB extents with 2000 stripe chunks before moving to a second stripe set of four 64 MB extents for volumes greater than 256 megabytes. Note that setting the number of columns on an Oracle ADVM dynamic volume to 1 effectively turns off striping for the Oracle ADVM volume.
On Linux platforms Oracle ASM Dynamic Volume Manager (Oracle ADVM) volume devices are created as block devices regardless of the configuration of the underlying storage in the Oracle ASM disk group.
The Oracle ASM instance is started during the Grid Infrastructure installation process whenever the Oracle Clusterware Registry (OCR) and voting files are configured within an Oracle ASM disk group. In that case, the Oracle ACFS drivers are initially loaded during Grid Infrastructure Installation based on the resource dependency. Besides, the Oracle ASM instance can also be started using the Oracle ASM Configuration Assistant and the Oracle ACFS drivers are loaded based on that action. In steady state mode, the Oracle ACFS drivers are automatically loaded during Oracle Clusterware initialization when the Oracle High Availability Services Daemon (OHASD) calls the start action for the Oracle ASM instance resource that also results in loading the Oracle ACFS drivers due to the resource dependency relationship. The start action for the Oracle ACFS drivers’ resource attempts to load the Oracle ACFS, Oracle ADVM, and Oracle Kernel Services (OKS) drivers into the native operating system.
In particular, the policy for the Oracle ACFS drivers is that they remain loaded until Oracle Clusterware is shut down. Likewise, the ora.drivers.acfs resource is managed automatically by Oracle High Availability Services Daemon (OHASD) and its state cannot be manually manipulated by srvctl or crsctl.
Managing Volumes in a Disk Group
An Oracle DBA can create an Oracle ASM Dynamic Volume Manager (Oracle ADVM) volume in a disk group. The volume device associated with the dynamic volume can then be used to host an Oracle ACFS file system.
The compatibility parameters COMPATIBLE.ASM and COMPATIBLE.ADVM must be set to 11.2 or higher for the disk group.
The following examples illustrate possible ALTER DISKGROUP VOLUME SQL statements for Oracle ADVM volume management, including the functionality to add, modify, resize, disable, enable, and drop volumes:
SQL> ALTER DISKGROUP adndata1 ADD VOLUME voladn1 SIZE 12G;
Diskgroup altered.

SQL> ALTER DISKGROUP adndata1 RESIZE VOLUME voladn1 SIZE 16G;
Diskgroup altered.

SQL> ALTER DISKGROUP adndata1 DISABLE VOLUME voladn1;
Diskgroup altered.

SQL> ALTER DISKGROUP adndata1 ENABLE VOLUME voladn1;
Diskgroup altered.

SQL> ALTER DISKGROUP ALL DISABLE VOLUME ALL;
Diskgroup altered.
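For completeness, since the functionality enumerated above also includes dropping volumes, a volume can be removed once no file system remains mounted on it:

SQL> ALTER DISKGROUP adndata1 DROP VOLUME voladn1;
Diskgroup altered.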

Oracle ACFS Registry Resource Management

The Oracle ACFS registry resource is supported only for Oracle grid infrastructure cluster configurations; it is not supported for Oracle Restart configurations.

The Oracle ACFS registry resource (ora.registry.acfs) is created by the root script that is executed following the Grid Infrastructure installation. The state of the resource is set to online if the start action for the Oracle ACFS mount registry succeeds, and to offline otherwise. The state of the Oracle ACFS registry resource is determined only by the active state of the mount registry. The online status is independent of any registry contents or the current state of any individual registered file systems that may exist within the Oracle ACFS registry.
In addition to activating the local node state of the mount registry, the Oracle ACFS registry resource start action assists in establishing a clusterwide Oracle ACFS file name space. On each node, the resource start action scans the contents of the clusterwide mount registry and mounts any file systems designated for mounting on the local cluster member. Before mounting a registered file system, the resource start action confirms that the associated file system storage stack is active and will mount the disk group, enable the volume file, and create the mount point if necessary to complete the mount operation.
The check action for the Oracle ACFS registry resource assists in maintaining the clusterwide Oracle ACFS file system name space. On each node, the check action scans the contents of the mount registry for newly created entries and mounts any Oracle ACFS file systems registered for mounting on the local node. Consequently, a new Oracle ACFS file system can be created and registered on one node of the cluster, and is automatically mounted on all cluster members designated by the Oracle ACFS registry entry.
The Oracle ACFS registry resource stop action is usually called during the Grid Infrastructure shutdown sequence of operations. To transition the registry resource to an offline state, all file systems on this cluster member that are configured with Oracle ADVM devices must be dismounted.
The Oracle ACFS registry resource clean action is called implicitly if the resource stop action fails to transition the resource to the offline state. In that scenario, the registry resource clean action can be called to effectively force the resource offline. The registry resource clean action scans the operating system's internal mount table searching for any file system that is mounted upon an Oracle ADVM device. If any is found, the resource clean action attempts to unmount the file system as in the resource stop action.
Each time Oracle Clusterware is started on a cluster node, the Oracle ACFS startup operations for the node consult the cluster mount registry and attempt to mount all Oracle ACFS file systems that are registered with that node. After each file system addition to the mount registry, the newly registered file system is automatically mounted on each node designated by the registry entry. If a registered file system is automatically mounted and is later dismounted, it is not automatically remounted until the system is rebooted or the Oracle Clusterware is restarted. It can be manually remounted using the mount command or Oracle Enterprise Manager.
The Oracle ACFS cluster mount registry action routines attempt to mount each Oracle ACFS file system on its registered mount point and create the mount point if it does not exist. The registry action routines also mount any Oracle ASM disk groups and enable any Oracle ADVM volumes required to support the Oracle ACFS mount operation. In the event that a file system enters an offline error state, the registry action routines attempt to recover the file system and return it to an online state by dismounting and remounting the file system.
ACFS Individual File System Resource Management

The Oracle ACFS individual file system resource is supported only for Oracle grid infrastructure cluster configurations; it is not supported for Oracle Restart configurations.
Oracle ASM Configuration Assistant (ASMCA) facilitates the creation of Oracle ACFS individual file system resources (ora.diskgroup.volume.acfs). During database creation with Database Configuration Assistant (DBCA), the individual file system resource is included in the dependency list of its associated disk group so that stopping the disk group also attempts to stop any dependent Oracle ACFS file systems.
Typically, an Oracle ACFS individual file system resource is created for use with application resource dependency lists.
Besides, an Oracle ACFS file system that is to be mounted from a dependency action should not be included in the Oracle ACFS mount registry.
The start action for an Oracle ACFS individual file system resource is to mount the file system. This individual file system resource action includes confirming that the associated file system storage stack is active and mounting the disk group, enabling the volume file, and creating the mount point if necessary to complete the mount operation. If the file system is successfully mounted, the state of the resource is set to online; otherwise, it is set to offline.
The check action for an individual file system resource verifies that the file system is mounted. It sets the state of the resource to online status if mounted, otherwise the status is set to offline.
Tracking Performance Rebalancing Factors
The following factors could be used in a factorial design to determine the impact of load balancing on ASM instances:
Factor 1: Redundancy
Factor 2: Clusterware
Factor 3: Allocation Unit
Factor 4: Striping Types
Factor 5: Extent design
Factor 6: Compatibility Issues
Factor 7: Support for ACFS
Factor 8: Support for ADVM
The design is left as a proposal for various interest groups and organizations, such as SNIA, which could work with various platforms, SAN architectures, and a variety of relevant storage technologies as well.

Exhibit 2. ASM Instance Stack and Storage System Infrastructure displaying disk groups
Concluding Remarks

Upon completion of this paper, the following remarks are widely applicable:
· ASM's rebalance process is straightforward, and it happens without the intervention of the DBA or system administrator: ASM simply rebalances a disk group whenever a disk is added or dropped.
· Rather than restriping all the data, ASM only needs to move an amount of data proportional to the change, using its extent indexing techniques to spread extents across the available disks.
· In some failure scenarios, the DBA can manually perform rebalancing operations, using the SYSDBA privilege for all destructive operations. In other scenarios, the SYSOPER privilege allows similar goals in non-destructive tasks.
· It is important to combine operations, such as adding and dropping a disk, in the same statement, in order to minimize the overhead on both the ASM and database instances, and consequently on normal production operations.
· Finally, adjusting the ASM_POWER_LIMIT enables controlled rebalancing while operations requiring data movement, control of redundancy, and other relevant tasks are being performed.
· The above conclusions are valid both for a native ASM environment and for one supporting ACFS and ADVM.
As ASM technology becomes a widely adopted RAC infrastructure standard, with about 65% utilization in production environments and nearly 25% of standalone database architectures, the future of this storage technology is certainly bright, and it opens new doors to emerging technologies.