
Friday, August 3, 2007

Advances in Multiple Block Size Caches


New architectures involving Ajax, XML, and SOA have placed a great deal of pressure on cache utilization and on the explicit differentiation of resources, particularly for Ajax clients and XML-driven client applications. The ability of persistent database objects to perform with little or no I/O and latch contention under dynamic, transparent cache utilization is an outstanding feature of what I call the OMBDB paradigm. Indeed, this paradigm supports not only data warehousing and business-intelligence-driven architectures, but also emerging technologies and existing persistence architectures such as EJBs and Hibernate, all of which require fast, explicit cache utilization; multiple block caches act as an intelligent cache differentiator that automatically leverages cache performance optimization.

Like many other DBAs and developers, I have successfully verified that, beyond any possible SGA tuning, a multiple-block-size database can certainly improve performance through this approach. OMBDB is ideal for Ajax, XML-driven clients, and other SOA-driven architectures.

There have been many improvements to the usage of multiple-block-size databases, including the flexibility to manage the involved initialization parameters more dynamically. I also believe that your data warehouse, decision support, and business analytics workloads can perform better in a custom multiple block size database environment. The Oracle11g partitioning features can greatly enhance the OMBDB paradigm: since there are several additional ways to achieve custom partitioning, one can further optimize performance thereon. I encourage everyone to include this paradigm as part of your Oracle11g beta testing as well.

Here are some useful links:

http://www.dba-oracle.com/oracle_news/2005_6_23_multiple_block_size_data_warehouse.htm

http://www.samspublishing.com/articles/article.asp?p=26338&seqNum=3&rl=1

http://www.oracle-base.com/articles/9i/MultipleBlockSizes.php

http://www.dbazine.com/oracle/or-articles/burleson2

Below is an excerpt of my original paper presented at IOUG LIVE 2005.


OMBDB: An Innovative Paradigm
for Data Warehousing Architectures

Anthony D. Noriega, Oracle Consultant, BSSE, MSCS, OCP-DBA


Abstract

Managing a multi-block data warehouse database falls into a special category that involves business intelligence and combines expertise in areas such as Extract-Transform-Load (ETL), On-Line Analytical Processing (OLAP), and Data Mining, in conjunction with custom applications, legacy systems, and related business process applications, such as Supply Chain Management (SCM), Customer Relationship Management (CRM), and Enterprise Resource Planning (ERP) systems. This paper intends to provide a blueprint for major production applications in VLDBs. The paper and presentation discuss how to best utilize multiple block size databases in conjunction with table partitioning and related techniques, table compression, and the best strategies for extent and segment space management and related tablespace utilization, including index allocation and index-organized table benchmarks. The discussion then centers on critical aspects of tuning and the usage of DBMS_STATS, DBMS_UTILITY, and CBO-based application and instance performance co-tuning. The paper also discusses upgrade and migration issues from versions 9i to 10g, and a framework for a potential case study. Finally, it covers advantages and disadvantages, pitfalls, and a robust strategy to get the most out of multi-block database performance.

Pre-Configuration Tasks

An important pre-creation and pre-configuration task is to choose the appropriate storage format approach. In most cases, simply considering mirroring and striping is a practical solution; thus, Stripe And Mirror Everything (SAME) is key to providing a healthy level of redundancy and to leveraging contingent recovery processes. When using Automatic Storage Management (ASM), the ASM instance resolves these issues via its volume balancing metadata[1], once files are set to coarse striping, e.g., data files, or fine striping, e.g., online redo logs and control files. Combinations of RAID file systems and ASM are quite possible and fully supported by Oracle10g. Earlier storage approaches, such as OCFS and raw devices, can also be considered and sometimes mixed with existing approaches. However, ASM and RAID 10 or RAID 0+1 are currently considered the most reliable approaches, with ASM on its way to becoming the de facto standard for high-availability architectures.
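As an illustration, here is a minimal sketch, with hypothetical disk group, failure group, and raw device names, of the kind of ASM disk group creation that supports this SAME-style layout; the striping granularity itself is governed by the ASM file type templates mentioned above:

-- Run from the ASM instance; all names and device paths are hypothetical.
CREATE DISKGROUP dg_dataw NORMAL REDUNDANCY
  FAILGROUP fgrp1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
  FAILGROUP fgrp2 DISK '/dev/raw/raw3', '/dev/raw/raw4';
-- Data files placed on this disk group default to coarse striping,
-- while control files and online redo logs default to fine striping.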

Steps to Create a Multi-Block Database

The following steps summarize the creation of a multiple block size database:

  1. Create a basic data warehouse database, either using the Database Configuration Assistant (DBCA) or manually from cloned or originally designed scripts. Remember that database templates are useful when using the DBCA and that larger block sizes are recommended for data warehousing projects. It is recommended that you choose a default database block size as large as your operating system can afford, or at least 16K, whichever is more practical in your specific scenario.
  2. Once the database creation is complete, configure a database cache for each block size to be supported. The instance db_nK_cache_size parameters, namely db_32k_cache_size, db_16k_cache_size, db_8k_cache_size, db_4k_cache_size, and db_2k_cache_size, can be set accordingly, provided that they do not match the database default block size. Although Oracle10g supports dynamic setting of these parameters, it is recommended to set them natively at startup; in Oracle9i, it was necessary to bounce the database to accomplish this task. See the sketch after this list.
  3. At this time, the DBA is in a position to create tablespaces matching these database buffer cache sizes and to use DDL to add objects matching each individual cache size.
  4. The subsequent creation of a server parameter file (spfile) from the updated init.ora parameter file is highly recommended, to maintain robustness across both instance architecture scopes.
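A minimal sketch of steps 2 and 4, assuming hypothetical cache sizes and a hypothetical init.ora path:

-- Step 2 (hypothetical sizes): enable additional caches dynamically;
-- each db_nK_cache_size must differ from the database default block size.
ALTER SYSTEM SET db_4k_cache_size = 64M;
ALTER SYSTEM SET db_32k_cache_size = 256M;
-- Step 4: persist the settings by generating an spfile from the edited init.ora.
CREATE SPFILE FROM PFILE='/oracle/admin/adndba/pfile/init.ora';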

General Initialization Parameters Guidelines

At this stage, it is highly important to recognize that dynamic memory management is essential to achieving the highest level of reliability. Planning on an automatic setting of auto-tuned and dynamic memory management initialization parameters is crucial to establishing a set of good business practices. Furthermore, it is appropriate to apply similar concerns to most automatically tunable framework management issues and related parameters, for instance undo management and the Program Global Area (PGA). Similarly, achieving early tuning and overall planning of additional component resources, such as Oracle Data Mining, Streams AQ, Oracle Messaging Gateway, and the Procedural Gateway for WebSphere MQ, is appropriate at this stage, since they require specific planning that could otherwise translate into undesirable downtime.

A Blueprint to Work with Multiple Block Size Tablespaces

Tablespace creation, tuning, and monitoring can either be a nearly automated activity or a complex one involving a great amount of DBA resources and time, depending on how well the process is organized and managed. In general, all tablespaces should be locally managed (LMT); there is nearly no reason to use dictionary-managed tablespaces any longer, although backwards compatibility could be an issue. The package DBMS_SPACE_ADMIN[2] is provided for conversion purposes upon migrating, and the recommendation is a thorough conversion of all tablespaces to locally managed. Thus, as a rule, EXTENT MANAGEMENT LOCAL is a key factor in improving I/O performance. The AUTOALLOCATE option is preferred for PERMANENT tablespaces, while UNIFORM allocation is advised for TEMPORARY tablespaces. Similarly, AUTOMATIC SEGMENT SPACE MANAGEMENT is highly recommended in order to avoid the overhead caused by the handling of free lists and related structures; all related storage parameters will then be ignored by Oracle. Specifying the BLOCKSIZE clause is necessary when the tablespace block size differs from the database default. Other DDL tasks are relatively easy to achieve.

The great innovation in Oracle10g is the BIGFILE and SMALLFILE clause. The former is intended to accommodate very large objects in VLDBs, while the latter is the default option and can be omitted. Even with larger block sizes and larger data files, multiple block size databases can take advantage of DBMS_SERVER_ALERT settings to control thresholds and high water marks and make necessary adjustments, complementing the pro-active work done by a combination of tasks involving Automatic Database Diagnostic Monitoring and the Automatic Workload Repository. Different block size tablespaces are also useful when properly aligned with business rules and technical requirements in order to prevent and minimize row fragmentation and migration issues.


CREATE BIGFILE TABLESPACE ts_bfdataw DATAFILE

'/oradata/adndba/portal/ts_dataw_idx_01.dbf' SIZE 120G

ONLINE

PERMANENT

EXTENT MANAGEMENT LOCAL AUTOALLOCATE

BLOCKSIZE 32K

SEGMENT SPACE MANAGEMENT AUTO;

Exhibit 1. Script creating a bigfile tablespace with a block size different from the database default.





Working with Physical and Logical Objects

Most of the tasks that relate to working with logical objects involve tables, indexes, and materialized views, all of which utilize database segments. Relevant physical objects can involve data files and ASM files, among others.

Working with Tables

Table block size is inherited from the tablespace block size and, like many other tablespace parameters, requires a careful selection matching logical objects, such as tables and indexes, accordingly. Therefore, it is appropriate to consider this issue when specifying extent allocation, regardless of the approach utilized. In general, new features offered by Oracle10g, such as ENABLE ROW MOVEMENT, allow outstanding dynamic manipulation on larger non-default block size tablespaces; thus, tasks related to table relocation and redefinition can be accomplished with much better timing, as the sketch below suggests.
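A minimal sketch, reusing the DBAMBA.CUSTOMER table from the later exhibits, of the kind of online reorganization that ENABLE ROW MOVEMENT makes possible:

-- Allow Oracle to relocate rows, then shrink the segment online
-- (online segment shrink requires an ASSM tablespace).
ALTER TABLE dbamba.customer ENABLE ROW MOVEMENT;
ALTER TABLE dbamba.customer SHRINK SPACE CASCADE;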

Using Partitioned Tables

When partitioning tables in multiple block size databases, information about tablespace block sizes needs to be handy for the DBA to make the best decisions on how to allocate each partition. Because partitioned tables are subject to several constraints, for instance when performing a partition exchange, merge, split, or move, it is a best practice to consistently place segments in tablespaces of the same block size for the best outcome. All types of partitions, namely range, hash, and list, and all possible composite partitioning options thereon, are quite possible on any non-default block size tablespace, and the related cache operations are equally reliable. Attempting to mix block sizes within a partitioned object will result in error messages, as the sketch below and Exhibit 3 illustrate.
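A minimal sketch, with a hypothetical table, of DDL that mixes block sizes across partitions and therefore raises the error shown in Exhibit 3; DATAPAR1 is assumed to use a 16K block size and TS_ADN4KTBS1 a 4K one:

CREATE TABLE dbamba.sales_hist ( sale_id NUMBER, sale_dt DATE )
PARTITION BY RANGE (sale_dt)
( PARTITION p_2004 VALUES LESS THAN (TO_DATE('01-JAN-2005','DD-MON-YYYY'))
    TABLESPACE datapar1,
  PARTITION p_2005 VALUES LESS THAN (MAXVALUE)
    TABLESPACE ts_adn4ktbs1 );  -- conflicting block size: raises ORA-14519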

ERROR at line 1:

ORA-14519: Conflicting tablespace blocksizes for table: Tablespace TS_ADN4KTBS1 block size 4096 [partition specification] conflicts with previously specified/implied tablespace DATAPAR1 block size 16384 [partition specification]

Exhibit 3. Error message. Oracle detects inconsistent block size mixes when running partitioning DDL scripts.

Using Index-Organized Tables (IOT)

Index-organized tables are also subject to several rules and constraints. In particular, it is appropriate to consistently plan on an equal block size for both the original and overflow tablespaces. The vast majority of practical considerations applicable to partitioned tables also apply to partitioned index-organized tables (IOTs).
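A minimal sketch, with hypothetical table and tablespace names, placing both the IOT segment and its overflow segment in tablespaces of the same 16K block size:

CREATE TABLE dbamba.order_lookup (
  order_id   NUMBER,
  order_note VARCHAR2(2000),
  CONSTRAINT pk_order_lookup PRIMARY KEY (order_id) )
ORGANIZATION INDEX TABLESPACE ts_dw16k    -- 16K block size
PCTTHRESHOLD 20
OVERFLOW TABLESPACE ts_dw16k_ovf;         -- same 16K block size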

Using Compressed Tables

Utilizing the appropriate compression algorithm can lead to optimal storage and cache utilization. Statistical investigation shows that the performance degradation due to compression can be negligible in comparison to the other benefits it can provide, such as storage and cache optimization. Partitioned tables are usually candidates for compression in a data warehouse.
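A minimal sketch, with hypothetical names, compressing only the older, read-mostly partition of a warehouse fact table:

CREATE TABLE dbamba.sales_fact ( sale_id NUMBER, sale_dt DATE, amount NUMBER )
PARTITION BY RANGE (sale_dt)
( PARTITION p_hist VALUES LESS THAN (TO_DATE('01-JAN-2005','DD-MON-YYYY'))
    TABLESPACE datapar1 COMPRESS,      -- historical, read-mostly data
  PARTITION p_curr VALUES LESS THAN (MAXVALUE)
    TABLESPACE datapar1 NOCOMPRESS );  -- active partition left uncompressed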

Working with Indexes

Indexes involved in data warehousing databases are likely to be partitioned, due not only to their overall segment size but also to specific business rules, leveraging I/O both logically and physically and establishing a consistent architecture paradigm.

Using Partitioned Indexes

Partitioned indexes can be of two main types, namely local or global, as discussed below:

Local Partitions

Local index partitioning is preferred in most instances because of its performance benefits, in addition to the ease of establishing a one-to-one correspondence with the respective table segments.

Global Partitions

Global index partitions achieve relevance when hashing operations can provide optimal performance, and when the volume of changing DML transactions (in particular, a greater combined number of deletes and inserts compared to overall updates) or certain table DDL can lead to undesirable invalidation.


CREATE INDEX DBAMBA.IDX$CUSTOMER$STATE ON DBAMBA.CUSTOMER (STATE)

PARALLEL 4 TABLESPACE INDX INITRANS 2 MAXTRANS 255 ONLINE

COMPUTE STATISTICS REVERSE GLOBAL PARTITION BY RANGE (STATE)

(PARTITION IDX$CUSTOMER$STATE_P1 VALUES LESS THAN ('DE')

TABLESPACE DATAPAR1,

PARTITION IDX$CUSTOMER$STATE_P2 VALUES LESS THAN ('IA')

TABLESPACE DATAPAR2,

PARTITION IDX$CUSTOMER$STATE_P3 VALUES LESS THAN ('NY')

TABLESPACE DATAPAR3,

PARTITION IDX$CUSTOMER$STATE_P4 VALUES LESS THAN (MAXVALUE)

TABLESPACE DATAPAR4);

Exhibit 4. Sample Partitioned Index SQL Scripts

Using Other Index Types

Function-based, reverse-key, and bitmap indexes are equally supported on tablespaces with a block size different from the database default. In most cases, larger non-default block size tablespaces can provide better performance for both disk access and cache access when holding primary and foreign key indexes, but the customization of how to best allocate indexes in relation to their density or ratio for a specific block size is at the discretion and expertise of each DBA. A perspective on content-based indexing leads to equivalent conclusions, which is very useful in XML, Text, and interMedia applications. In most regression tests, Locally Managed Tablespaces (LMT) with the AUTOALLOCATE option and AUTOMATIC SEGMENT SPACE MANAGEMENT (ASSM) have been used in order to obtain optimal results.
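A minimal sketch, assuming a hypothetical low-cardinality GENDER column on the DBAMBA.CUSTOMER table and a hypothetical 16K non-default block size tablespace:

CREATE BITMAP INDEX dbamba.bix$customer$gender
ON dbamba.customer (gender)
TABLESPACE ts_dw16k;   -- non-default 16K block size tablespace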

Working with Materialized Views

Materialized views have a significant role in data warehousing, data marts, and reporting databases. Materialized views also follow the set of rules applicable to table and index partitioning, and these capabilities can be enhanced by the utilization of analytical functions and replication.

Working with LOBs and User-Defined Datatypes

Working with CLOBs, BLOBs, BFILEs, XMLType, and other complex user-defined data types is an overall match for selected large block size tablespaces. The supplied PL/SQL packages and APIs allow the easy implementation and development of applications with a solid architecture. When using user-defined data types, values larger than 4000 bytes will automatically be stored as LOBs, regardless of whether the STORE AS LOB clause is specified.
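A minimal sketch, with hypothetical names, placing a CLOB segment in a dedicated 32K block size tablespace while the table itself stays in a default block size tablespace; db_32k_cache_size is assumed to be configured already:

CREATE TABLE dbamba.doc_repository (
  doc_id  NUMBER PRIMARY KEY,
  content CLOB )
TABLESPACE ts_dw_data                -- default block size
LOB (content) STORE AS (
  TABLESPACE ts_lob32k               -- created with BLOCKSIZE 32K
  CHUNK 32768                        -- chunk matching the 32K block size
  DISABLE STORAGE IN ROW );          -- keep LOB data out of the table blocks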

Performance Tuning

The most significant upgrade from Oracle9i to Oracle10g does not relate to how much easier and more consistent it is to handle the auto-tuned memory structures, but rather to the impact of the Automatic Workload Repository (AWR) in conjunction with Automatic Database Diagnostic Monitoring (ADDM) for the purpose of achieving optimal performance pro-actively. The positive impact of further visibility and monitoring capability in the most recent release is therefore enhanced by the consistently established auto-tuned memory structures, namely the shared pool and the buffer cache, affecting db_cache_size, db_nK_cache_size, db_keep_cache_size, and db_recycle_cache_size. Furthermore, the PGA dynamic statistics, visible in the V$PGASTAT view, play a crucial role in the reliability of the Oracle instance, in particular when WORKAREA_SIZE_POLICY is set to AUTO. Oracle Enterprise Manager can assist with great user-friendliness in tuning each of these memory structures.

SQL> SELECT * FROM v$pgastat order by 1;

NAME VALUE UNIT

---------------------------------------- ---------------- ---------------

PGA memory freed back to OS 0 bytes

aggregate PGA auto target 14330880 bytes

aggregate PGA target parameter 25165824 bytes

bytes processed 537556992 bytes

cache hit percentage 100 percent

extra bytes read/written 1083392 bytes

global memory bound 1257472 bytes

maximum PGA allocated 41468928 bytes

maximum PGA used for auto workareas 1226752 bytes

maximum PGA used for manual workareas 266240 bytes

over allocation count 0

total PGA allocated 22992896 bytes

total PGA inuse 9285632 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

total freeable PGA memory 0 bytes

16 rows selected.


Exhibit 5. Querying the V$PGASTAT view.


Normally, besides DB_KEEP_CACHE_SIZE, the automatically tuned cache components, namely the Shared Pool, the Default Buffer Cache, the Java Pool, and the Large Pool, are expected to add up to the SGA_TARGET value. It is possible to configure a RECYCLE buffer pool for blocks pertaining to segments that you do not want to remain in memory. Furthermore, the SGA_TARGET parameter can be increased up to the value specified for SGA_MAX_SIZE, or otherwise reduced. If the DBA reduces the value of SGA_TARGET, the system identifies one or more automatically tunable components to release memory, including DB_nK_CACHE_SIZE values and their respective memory areas. The DBA can reduce SGA_TARGET until one or more automatically tuned components reach their minimum size; Oracle Database determines the minimum allowable value for SGA_TARGET taking into account several factors, including the values set for the automatically sized components, the manually sized components that use SGA_TARGET space, and the number of CPUs. The change in the amount of physical memory consumed when SGA_TARGET is modified depends on the operating system. Important views, namely V$SGASTAT, V$DB_CACHE_ADVICE, V$SYSSTAT, V$SYSTEM_EVENT, V$SESSION_WAIT, V$WAIT_STAT, and V$PGASTAT, among others, can lead to an understanding of memory structure tuning and optimal shared and dynamic memory utilization.
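A minimal sketch, with hypothetical sizes, of these dynamic adjustments:

-- Grow SGA_TARGET within SGA_MAX_SIZE; the auto-tuned components are resized.
ALTER SYSTEM SET sga_target = 1536M SCOPE=BOTH;
-- Manually sized nK caches consume SGA_TARGET space but are not auto-tuned.
ALTER SYSTEM SET db_16k_cache_size = 256M SCOPE=BOTH;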

STATSPACK

Keeping a tuning and statistics-gathering strategy is among the goals and best practices of leading database administrators. The usage of STATSPACK is appropriate when applied consistently with a chronological approach, since it allows the building of statistics patterns and the drawing of a performance map useful for process improvement in database administration.
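A minimal usage sketch, run as the PERFSTAT owner, bracketing a workload window with two snapshots and reporting between them:

EXECUTE statspack.snap
-- ... representative workload runs here ...
EXECUTE statspack.snap
-- Generate the report, supplying the two snapshot ids when prompted.
@?/rdbms/admin/spreport.sql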

[Statspack report listing not reproduced in this excerpt.]

Exhibit 6. Sample Statspack report showing multiple block size buffer pool and I/O statistics (Oracle10g).


NAME VALUE

---------------------------------------- -----------------------------------

__db_cache_size 12582912

db_16k_cache_size 83886080

db_2k_cache_size 0

db_32k_cache_size 0

db_4k_cache_size 8388608

db_8k_cache_size 0

db_cache_advice ON

db_cache_size 4194304

db_keep_cache_size 0

db_recycle_cache_size 0

10 rows selected.

SQL> ALTER SYSTEM set db_keep_cache_size =4m;

System altered.

SQL> alter system set db_recycle_cache_size=4m;

System altered.

SQL> SELECT name,value

2 FROM v$parameter

3 WHERE name like '%db%cache%'

4 ORDER BY 1

5 /

NAME VALUE

---------------------------------------- -----------------------------------

__db_cache_size 4194304

db_16k_cache_size 83886080

db_2k_cache_size 0

db_32k_cache_size 0

db_4k_cache_size 8388608

db_8k_cache_size 0

db_cache_advice ON

db_cache_size 4194304

db_keep_cache_size 4194304

db_recycle_cache_size 4194304

10 rows selected.

SQL> SELECT name,class,value

2 FROM v$sysstat

3 WHERE name like '%buffer%'

4 OR name like '%cache%'

5* ORDER BY 1

SQL> /

NAME CLASS VALUE

--------------------------------------------- ---------- -------------

Commit SCN cached 128 2

DBWR checkpoint buffers written 8 16580

DBWR thread checkpoint buffers written 8 656

buffer is not pinned count 72 752095

buffer is pinned count 72 760846

commit cleanout failures: buffer being written 8 1

consistent gets from cache 8 2074808

db block gets from cache 8 1014371

dirty buffers inspected 8 18079

free buffer inspected 8 160733

hot buffers moved to head of LRU 8 27648

no buffer to keep pinned count 72 1

physical reads cache 8 145416

physical writes from cache 8 37524

pinned buffers inspected 8 188

redo buffer allocation retries 2 64

switch current to new buffer 8 3114

table scans (cache partitions) 64 0

28 rows selected.

Exhibit 7. Database cache parameter values and usage statistics before and after some dynamic adjustments.


SUPPLIED PL/SQL TUNING PACKAGES

The systematic approach of gathering statistics via PL/SQL scripts with supplied packages, such as DBMS_UTILITY and DBMS_STATS, allows DBAs to derive analytical power in understanding the instance's performance behavior. The following script shows a possible utilization of these two supplied packages:


BEGIN

DBMS_UTILITY.analyze_database ( METHOD => 'ESTIMATE',

ESTIMATE_PERCENT => 12

);

DBMS_STATS.delete_schema_stats( OWNNAME => 'SYS' );
END;
/

Exhibit 8. A brief script to quickly analyze a database and estimate statistics accordingly.

ORACLE ADVISORIES

Oracle cache advisories allow DBAs to make decisions on how to dynamically adjust certain memory structures, such as the database buffer cache and the shared pool, and consequently the actual System Global Area target (SGA_TARGET). By utilizing the Cache Advisory available in the Instance Manager, thresholds for each buffer cache can be visualized in a chart view. Similarly, the DBA can query the V$DB_CACHE_ADVICE[3] view, which displays estimated physical read factors and the buffers used for the estimate, as the main source for establishing the best cache size value. Cache advisories are very useful when recycle and keep caches are in place; they help maintain a consistent balance between cached objects and relevant read/write access to non-cached objects on disk, and they represent a complementary effort to leverage balanced database access in conjunction with RAID and ASM approaches. The ADDM- and AWR-based advisory framework leads to a comprehensive set of available pro-active options. Thus, the Oracle advisories can be summarized as follows:

  • Buffer Cache Advisory
  • Shared Pool Advisory
  • PGA Advisory
  • Segment Advisor
  • Undo Advisor
  • SQL Advisor

SQL> select name, block_size,

2 advice_status,

3 SUM(size_for_estimate) "SUM_ESTD SIZE",

4 SUM(estd_physical_reads) "SUM_ESTD PHYS READS",

5 SUM(estd_physical_read_time) "SUM_ESTD READ TIME"

6 FROM v$db_cache_advice

7 GROUP BY name,block_size,advice_status

8* order by 1

SQL> /


NAME BLOCK_SIZE ADV SUM_ESTD SIZE SUM_ESTD PHYS READS SUM_ESTD READ TIME

------------ ---------- --- ------------- ------------------- ------------------

DEFAULT 4096 ON 40 0 0

DEFAULT 8192 ON 12 63029 402

DEFAULT 16384 ON 1680 0 0

KEEP 8192 ON 12 0 0

RECYCLE 8192 ON 12 0 0

Exhibit 9. Querying the Oracle V$DB_CACHE_ADVICE view.

Undo Management

When working with undo management, it is important to consider using at least two undo tablespaces, particularly at peak production time or in the event of a major application upgrade, large load, or import task, where the undo_retention value may not lead to releasing undo space as expected because of the intensiveness of the import processes, which can override the UNDO_RETENTION goals as far as releasing undo space. The main decision criterion here is being able to switch from the nearly overflowing undo tablespace to the rather empty one, avoiding unnecessary intensive addition of data files to the undo tablespace. This is particularly true if a parameter such as _DISABLE_LOGGING is set to speed up an Import utility process and minimize redo logging activity, for instance when a technique such as ORAPEPI or alike is used. It is always possible to switch between undo tablespaces.

In fact, the 80-20 rule is a good approach: once one undo tablespace reaches 80% full, switch with the ALTER SYSTEM SET UNDO_TABLESPACE command. The DBA can query V$UNDOSTAT or DBA_ROLLBACK_SEGS to determine the current undo tablespace, which is also useful to further visualize online and offline rollback or undo segments.
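A minimal sketch, with hypothetical tablespace names, of the switch and a quick check of the undo segments:

-- Direct new transactions to the second undo tablespace.
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS2';
-- Verify which rollback/undo segments are online and where they live.
SELECT segment_name, tablespace_name, status FROM dba_rollback_segs;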

COST-BASED OPTIMIZATION (CBO)

At the instance level, when the DBA sets OPTIMIZER_MODE to CHOOSE or to a cost-based goal such as ALL_ROWS, Oracle's cost-based optimization is activated, representing a typical approach for data warehouses. Similar hints can apply at the application level, where FIRST_ROWS_n rather than FIRST_ROWS, n being a typical power of 10, can be enabled for optimal performance in a particular application. An important performance tuning concern is not to assume that ALL_ROWS will always produce the best results. Oracle multi-block databases can provide optimization in situations where these hints are very valuable, in particular if a table has been properly designed and utilizes optimal indexing capabilities.
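A minimal sketch of both levels, assuming hypothetical column names on the DBAMBA.CUSTOMER table used earlier:

-- Instance level: favor fast retrieval of the first 100 rows.
ALTER SYSTEM SET optimizer_mode = FIRST_ROWS_100;
-- Statement level: the equivalent hint for a single query.
SELECT /*+ FIRST_ROWS(100) */ customer_id, state
FROM dbamba.customer
WHERE state = 'NJ';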

High Availability

The best recommendation in a RAC environment is to size each node's cache sharing a production database in such a fashion that table caching is significantly alike, so that performance issues related to heterogeneous node configurations are less likely to occur. Similarly, the recommendations, rules, and constraints previously discussed do apply to RAC and other relevant approaches, such as Oracle Data Guard. In particular, the SAME approach discussed with the ASM and RAID concerns brings great relevance to these matters. Therefore, the rules, constraints, and recommendations discussed also apply at any level of availability, including a maximum availability data warehouse, where RMAN, the only certified backup solution for ASM, is agnostic of tablespace block size, except for cataloging purposes. Furthermore, it is potentially helpful to implement any goal-oriented RAC, such as load balancing, speed-up, and scale-up, among others, with this perspective in mind. Finally, keeping a consistent approach that enforces these rules and recommendations will allow standalone, RAC, and Data Guard databases to be better suited for overall grid computing. Oracle Enterprise Manager Grid Control can greatly leverage and optimize all aspects of resource management when established in the grid.

On Oracle10g Replication and Streams

When working with replication, using either Oracle Advanced Replication or Oracle Procedural Replication, or similarly when using Oracle Streams, it is important to be particularly careful about matching associated tablespaces on both the master and replica (target) sites, in conjunction with the tables involved. In Oracle10g, the usage of Streams AQ reinforces the notion that dynamic data available to the data warehouse can also exist outside the database, available through inbound and outbound heterogeneous propagation from and to the queue.

Similarly, it is particularly important for the DBA to be aware of underlying object components, such as when replicating a partitioned table using a local index, since invalidation of these objects may lead to a time-consuming rebuild process, and to consider consistency rules on block sizes at all times. It is also of great importance to envision, design, and implement a strategy that contemplates not only tuning at the initial replication stage, but also the resilient periodic replication job processes on groups and materialized views, whose intensiveness needs to be carefully measured and planned for. This is also essentially true when associated with the timeliness and frequency of the replication jobs, the load imposed on the system and channel, and the Storage Area Network (SAN) or Network Attached Storage (NAS) channel itself. In fact, this issue applies mostly if the entire database image is replicated, and it has commensurate validity when a Business Continuity Volume (BCV) is using multiple block sizes and partitioning.

OMBDB AND BUSINESS INTELLIGENCE

A key aspect of where business intelligence comes into place is precisely Oracle's unique ability to productively achieve the following business goals:

  • Provide innovative content and document management with extensive datatype support, including user-defined, XMLTYPE, and customized support for LOBs.
  • Align Oracle Applications and third party SCM, ERP, and CRM systems with data warehousing technology on the basis of custom block size support and associated object caching, table compression and partitioning, among other options.
  • Integrate existing proprietary information and legacy systems into new data warehouses, data marts, and reporting databases, whose infrastructure outperforms any historic achievement.
  • Integrate and support technologies such as Oracle Data Mining, Business Intelligence Beans (BI Beans), OLAP, XML, and Java-driven Service-Oriented Architecture (SOA) applications.
  • Provide the infrastructure to enhance systems via development paradigms such as J2EE and Model View Controller (MVC) with JDeveloper, or third-party approaches such as Microsoft .NET or the Eclipse IDE. Other tools such as Oracle Warehouse Builder (or third parties like Informatica), Discoverer, TopLink, and the Embedded Data Warehouse can align with the capabilities exposed by multiple block size databases.

OMBDB can achieve many other goals when supporting business intelligence systems and applications.

SQL> CREATE MATERIALIZED VIEW PS_EMP_PROJECT_CUBE_mvw
  pctfree 5 pctused 50
  build immediate
  using no index
  refresh force
  enable query rewrite
AS
SELECT
  GROUPING_ID(D.did, D.dname, p.pid, p.pname, e.empid, e.name) GID1,
  INITCAP(d.dname) AS "DIVISION_NAME",
  INITCAP(P.pname) AS "PROJECT_NAME",
  INITCAP(e.name) AS "EMPLOYEE_NAME",
  SUM(W.HOURS) AS "WORKED_HOURS",
  SUM(E.SALARY) AS "TOTAL_SALARY",
  SUM(P.BUDGET) AS "BUDGET_UTILIZED",
  COUNT(*) "EMPLOYEES_INVOLVED",
  GROUPING_ID(w.region, w.state, w.city) GID2,
  INITCAP(w.region) AS "Region",
  w.state AS "State",
  INITCAP(W.CITY) AS "city",
  w.quarter AS quarter,
  w.month AS month,
  GROUPING_ID(w.year, w.quarter, w.month) GID3,
  GROUPING_ID(w.region, w.city, w.state, w.year, w.quarter, w.month) GID4,
  SUM(E.SALARY) AS SUM_SALARY,
  SUM(W.HOURS) AS SUM_HOURS,
  SUM(P.BUDGET) AS SUM_BUDGET
FROM DBAMBA.employee@biportal E,
     DBAMBA.work_schedule@biportal W,
     DBAMBA.project@biportal P,
     DBAMBA.division@biportal D
WHERE D.did = E.did
  AND E.empid = W.empid(+)
  AND D.did = P.pid
GROUP BY GROUPING SETS
  ( (D.did, D.dname, P.pid, P.pname, E.empid, E.name),
    (w.region, w.state, w.city),
    (w.quarter, w.year, w.quarter),
    (w.region, w.state, w.city, w.year, w.quarter, w.month),
    ROLLUP(d.dname),
    CUBE(p.pname),
    ROLLUP(e.name)
  )
SQL> /

Materialized view created.


Exhibit 10. Sample materialized view, using OLAP functions, created in a tablespace with a block size different from the database default.


Some Business Intelligence Scenarios

Intelligent document management systems today can take advantage of Oracle multiblock databases, which can critically align size and business properties and methods to specific logical object containers. As such, a bigfile tablespace can hold large amounts of data and rich content, and in many cases replace the operability of older systems, for instance tape-based broadcasting and video conferencing, where a maximum availability approach is strongly recommended. This can result not only in great savings for companies involved in this business, but also in the technological redefinition or, sometimes, almost the reconversion of the very business paradigm.

In today’s Supply Chain Management world, Radio Frequency Identification (RFID) has become a requirement for leading companies utilizing sensor-based computing and object-oriented relational systems. Leveraging appropriate segment allocation, and aligning information that may utilize user-defined and XML datatype as well as custom media manipulation is key to provide optimal database performance.

Independently from the operating system involved, Oracle10g can achieve the desirable level of customization in all aspects that business intelligence processes, such as data mining, OLAP, and related ERP, SCM, and CRM systems could require.

Besides, database instances that utilize multi-block caches can further leverage performance tuning and optimize memory resources and equivalently provide an outstanding level of I/O performance and reliability, with the advantage that all intelligent technology applied has a solid infrastructure to support it.

At the present time, when most companies are in the process of migrating their data warehouses to Oracle10g, the migration and upgrade business and technical processes can prove challenging and time-consuming. The process of migrating an ERP system itself is subject to many milestones, each of which could represent a bottleneck in the project timeline.

CONCLUDING REMARKS AND IMPLICATIONS

The selection, design, and implementation of an Oracle multiple block size database for a data warehousing project is an innovative paradigm for many businesses and their applications. The aspects from which DBAs can derive the appropriate level of information technology support for business intelligence using OMBDB are numerous, such as the mentioned utilization of LOBs and XML types in content-rich applications to leverage the diversity of data structures and the persistence and volatility of objects handled by increasingly complex multi-tier applications. Among the most important associated technologies, it is possible to highlight the following:

  • Table and Index Partitioning, including relevant operations such as smart splitting or exchange.
  • Large Object and User-Defined Datatypes
  • Large Tablespace (BIGFILE) size support for very large databases (VLDBs), including multiple block sizes.
  • Table compression.
  • Practical and seamless integration with ERP, SCM, CRM and legacy systems.
  • Integration with cluster and high availability approaches.
  • Support on file systems, conventional raw devices, OCFS, RAID, and ASM environments.
  • Easy integration with Oracle technologies such as Oracle Business Intelligence, Oracle Application Server, and Oracle Collaboration Suite.

Utilizing Oracle multiblock databases in data warehousing based systems will prove in the long term to be a reliable methodology for approaching the diversity of information and the related business intelligence application processes when integrating existing systems and consolidating older systems with existing or newly created ones, to avoid redundancy and lower operating costs, among other factors. The input received from those already using multiblock databases is highly satisfactory in areas such as marketing, advertising, finance, pharmaceuticals, document management, manufacturing, inventory control, and the entertainment industry.



[1] The RBAL and ARBn processes are involved.

[2] The procedure DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL is run to perform a conversion from a Dictionary Managed Tablespace (DMT) to a Locally Managed Tablespace (LMT).

[3] The DB_CACHE_ADVICE initialization parameter needs to be set to either ON or READY.
