Wednesday, July 10, 2013

-- Oracle RDBMS 12c was formally released today --

The new version of Oracle has over 500 new features, and some of them suggest to me that a massive re-architecture was needed to achieve them. The one that caught my attention the most is pluggable databases: you create a database defined as a container (CDB), and you can plug up to 252 pluggable databases (PDBs) into the same container. Now this is the interesting part: in the past, an instance belonged to one and only one database (though a database could have many instances, as in RAC); now the instance of the container is shared among all the pluggable databases in it. All the databases in the same container inherit the features of the container database (RAC, DG, etc.).
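To illustrate, here is a minimal sketch of creating and opening a pluggable database from the seed; the PDB name, admin user, password, and file paths are all illustrative:

```sql
CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdb_admin IDENTIFIED BY MyPassword1
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                       '/u01/oradata/cdb1/pdb1/');

ALTER PLUGGABLE DATABASE pdb1 OPEN;

-- List the PDBs plugged into this container
SELECT name, open_mode FROM v$pdbs;
```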

Tom Kyte presented his favorite 12 features of 12c; he mentioned that it was a challenge because there were 500 new features, which he narrowed down to 80 and then to 12 :). Here they are.

  1. Default values for columns (identity columns, sequence defaults)
  2. Even better PL/SQL
  3. Increase of size limits for certain data types.
  4. Easy Top-N and pagination queries
  5. Row pattern matching (good for analysis of data)
  6. Lots of partitioning improvements (move partitions online)
  7. Adaptive execution plan (Your execution plan changes on the fly)
  8. Enhanced Statistics (New histograms).
  9. Temporary undo (Temporary tables store undo on temporary tablespace)
  10. New Data Optimization capabilities (Heat map)
  11. Application Continuity (an extension of TAF) and Transaction Guard (which ensures web transactions do not get duplicated by error).
  12. Pluggable Databases.
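As a taste of feature 4, 12c adds a row-limiting clause for Top-N and pagination queries; the EMPLOYEES table below is just an example:

```sql
-- Second page of 10 rows, ordered by last name
SELECT employee_id, last_name
FROM   employees
ORDER  BY last_name
OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;
```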

Tuesday, July 2, 2013

-- Oracle RDBMS 12c Is here --

Breaking the tradition of releasing its new RDBMS version during Oracle OpenWorld, Oracle announced the introduction of 12c on July 10th at 12 p.m. EST. Tom Kyte will be presenting its Top 12 New Features. Some of the new features include:

- Simplify database consolidation
- Automatically compress and tier data
- Improve database and application continuity
- Redact sensitive data



Tuesday, June 18, 2013

-- Can DBMS_JOB jobs fail over when using instance affinity? --

One of my customers recently approached me with this question. First, let me explain what instance affinity is for DBMS_JOB.

Let's assume you have 3 jobs to run, you are using Oracle RAC with 3 instances (1 database), and you want to balance the load (without using SQL*Net load balancing). You could use instance affinity to ensure that a particular job runs only on one node. The syntax is:

           DBMS_JOB.INSTANCE (
              JOB           IN BINARY_INTEGER,
              INSTANCE      IN BINARY_INTEGER,
              FORCE         IN BOOLEAN DEFAULT FALSE);

In the above syntax, the INSTANCE parameter represents the instance number where you want the job to run. When FORCE is FALSE (the default) and INSTANCE > 0, the job's affinity is altered to the specified instance, provided that instance is running; if INSTANCE = 0, the job can be executed by any instance. If FORCE = TRUE, the job is altered regardless of the state of the instance, as long as INSTANCE is not negative or null.
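For example, assuming an existing job number 42 (illustrative), you could pin it to instance 2 and later release it:

```sql
-- Pin job 42 to instance 2 (it only runs there while that instance is up)
EXEC DBMS_JOB.INSTANCE(JOB => 42, INSTANCE => 2);

-- Allow any instance to run it again
EXEC DBMS_JOB.INSTANCE(JOB => 42, INSTANCE => 0);
```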

With that clear, the answer is no: DBMS_JOB instance affinity will not fail over to a surviving instance during an Oracle RAC instance failure; the job will simply not run while the instance is down.

People have suggested that Oracle should enhance DBMS_JOB to achieve that, and in a way they did: DBMS_SCHEDULER uses job classes that you can associate with a specific service, and as long as the service fails over to a surviving instance, the job will be executed there. For example:


     DBMS_SCHEDULER.CREATE_JOB (
        JOB_NAME  => '[JOB_NAME]',
        JOB_CLASS => '[JOB_CLASS_NAME]',
        ENABLED   => TRUE);

In this way you can distribute jobs across RAC instances using job classes and services.
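Putting it together, here is a sketch of tying a job class to a service; the service name BATCH_SVC, the job class, and the job details are assumptions for illustration (the service must already be defined for the database):

```sql
BEGIN
  -- Associate a job class with a RAC service
  DBMS_SCHEDULER.CREATE_JOB_CLASS(
     JOB_CLASS_NAME => 'BATCH_CLASS',
     SERVICE        => 'BATCH_SVC');

  -- Jobs in this class run wherever BATCH_SVC is currently active
  DBMS_SCHEDULER.CREATE_JOB(
     JOB_NAME        => 'NIGHTLY_REFRESH',
     JOB_TYPE        => 'PLSQL_BLOCK',
     JOB_ACTION      => 'BEGIN my_refresh_proc; END;',  -- hypothetical procedure
     JOB_CLASS       => 'BATCH_CLASS',
     REPEAT_INTERVAL => 'FREQ=DAILY;BYHOUR=2',
     ENABLED         => TRUE);
END;
/
```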

-- Oracle Materialized Views real case Performance Improvements --

Recently I got pulled into a case where a customer was complaining that materialized views were taking too long to refresh. Upon research I found the following:

  • Materialized Views are located on the same server as the source tables.
  • They use the Materialized views to do joins of tables and views for reporting purposes.
  • They have an Active Data Guard Database on the DR Data Center.
  • They created 3 refresh groups in an attempt to improve performance
  • They are not using the Scheduler but DBMS_JOB.
  • They do complete refresh every 2 hours.
  • Their database is a 3 node RAC cluster.
These were my recommendations:

  • Ask whether you really need materialized views, or whether a view over properly partitioned tables with parallel query will do the job; never underestimate the power of parallel query on partitioned tables with the right indexes in place.
  • Use your Active Data Guard (ADG) for reporting purposes.
  • Favor Fast Refresh over Complete refresh.
  • Use the Oracle Scheduler rather than DBMS_JOB.
  • Use node affinity so you do not tax a single server consuming a bunch of resources during the refresh.
I know this is just common sense, but I thought I would share it with you, as other people may be having similar issues.
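To illustrate the fast-refresh recommendation, here is a minimal sketch assuming a SALES table with a primary key; fast refresh requires a materialized view log on the source table:

```sql
-- Materialized view log (primary-key based) so changes can be applied incrementally
CREATE MATERIALIZED VIEW LOG ON sales;

CREATE MATERIALIZED VIEW sales_mv
  REFRESH FAST ON DEMAND
  AS SELECT * FROM sales;

-- 'F' = fast (incremental) refresh instead of 'C' (complete)
EXEC DBMS_MVIEW.REFRESH('SALES_MV', 'F');
```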

Monday, March 4, 2013

-- Oracle 11g SQL Plan Management (SPM) --

What is SQL Plan Management (SPM)?

A big part of an application's performance depends on its query execution. For the most part the Oracle Optimizer does a fair job without human intervention; however, problems may arise for many reasons, and until now there has been no guarantee that an execution plan change will always be for the better. Many people have opted to use stored outlines (freezing execution plans) or to lock their optimizer statistics; the risk here is that you may also be preventing a more optimal execution plan.

SQL Plan Management provides a framework for completely transparent, controlled execution plan evolution. It is a mechanism that records and evaluates your SQL plans over time and creates SQL plan baselines, each composed of a group of plans known to be efficient. The idea is to use this as a preventive method to preserve the performance of SQL statements. With SPM the optimizer automatically manages execution plans and ensures that only known or verified plans are used. In other words, when a new plan is found for a SQL statement, it will not be used until the database has verified it to have comparable or better performance than the current plan.

Let me explain that better: to provide this guarantee, only accepted (trusted) execution plans are used; any plan evolution is tracked and evaluated at a later point in time, and a new plan is only accepted as verified if it causes no runtime regression or improves the runtime.

The main advantage of SPM is that it minimizes potential performance regressions that may occur due to events such as new deployments or upgrades.

SPM Components

A) SQL Plan Baseline Capture

Process of creating SQL plan baselines that represent accepted execution plans for all relevant SQL statements. For it to work you must seed the SQL Management Base with the execution plans, which become the new SQL plan baselines. There are 2 ways to achieve this:

    • A.1 - Automatic Plan Capture: Automatic plan capture can be switched on by setting the parameter OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES to TRUE (the default value is FALSE). Once enabled, the SPM repository will be populated with any repeatable SQL statement.
    • A.2 - Bulk-load execution plans: This is especially useful when a database is upgraded from a previous version of Oracle to 11g, or for new deployments. You can achieve this in three different ways.
      • A.2.1 - Using SQL Tuning Sets (STS): Capture the plans for a workload into an STS, then load the plans into the SQL Management Base as SQL plan baselines using the PL/SQL procedure DBMS_SPM.LOAD_PLANS_FROM_SQLSET. These are the steps to follow for this approach:
        • Create an STS that contains the execution plan for each of the SQL statements.
        • Load the STS into a staging table and export the staging table into a flat file.
        • Import the staging table from the flat file into the target database and unload the STS.
        • Use DBMS_SPM.LOAD_PLANS_FROM_SQLSET to load the execution plans into the SQL Management Base.
      • A.2.2 - Using the plan in the Cursor Cache: Identify the SQL Statements you wish to capture and load them using the procedure DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE.
      • A.2.3 - Unpack existing SQL plan baselines from a staging table: If an application is developed in house, you can export a good plan from the test system into production using the following procedure.
        • Create a staging table using the DBMS_SPM.CREATE_STGTAB_BASELINE procedure.
        • Pack the plan baselines you want to export into the table using the DBMS_SPM.PACK_STGTAB_BASELINE function.
        • Export the staging table into a flat file using the export command or Data Pump.
        • Transfer this flat file to the target system.
        • Import the staging table from the flat file using the import command or Data Pump.
        • Unpack the SQL plan baselines from the staging table into the SQL Management Base using the DBMS_SPM.UNPACK_STGTAB_BASELINE function.
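The capture options above can be sketched as follows; the SQL_ID is a placeholder for a statement already in the cursor cache:

```sql
-- A.1: capture repeatable statements automatically for this session
ALTER SESSION SET OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = TRUE;

-- A.2.2: load the plan for one cached statement (SQL_ID is illustrative)
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(SQL_ID => 'gz5fqxtj7czp1');
END;
/

-- Inspect what was captured
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;
```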

B) SQL Plan Baseline selection

Process of ensuring that only accepted execution plans are used for statements with a SQL plan baseline, while tracking all new plans in the plan history for each statement. Each time a SQL statement is compiled, the optimizer uses the traditional cost-based search method to build the best-cost plan. When the initialization parameter OPTIMIZER_USE_SQL_PLAN_BASELINES is set to TRUE (the default value), the optimizer then tries to find a matching plan in the SQL plan baseline. If a match is found it is used; if not, the newly generated plan is added to the plan history and will have to be verified before it can be accepted into the SQL plan baseline. Instead of executing the new plan, the optimizer will cost each of the accepted plans for the SQL statement and pick the one with the lowest cost.
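You can inspect the plans in a baseline with DBMS_XPLAN; the SQL handle below is a placeholder:

```sql
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(
    SQL_HANDLE => 'SQL_abcdef1234567890',
    FORMAT     => 'basic'));
```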

C) SQL Plan Baseline evolution

Process of evaluating all unverified execution plans for a given statement in the plan history so that they become either accepted or rejected. When a new plan is found, it is added to the plan history as a non-accepted plan that needs to be verified. It is possible to evolve a SQL statement's execution plan using DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE with any of the following methods:

    - Accept the plan only if it performs better than the existing SQL plan baselines.

    - Accept the plan without doing performance verification

    - Run the performance comparison and generate a report without evolving the new plan.
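A sketch of the verify-and-accept method; the SQL handle is a placeholder:

```sql
SET SERVEROUTPUT ON
DECLARE
  rpt CLOB;
BEGIN
  rpt := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
           SQL_HANDLE => 'SQL_abcdef1234567890',
           VERIFY     => 'YES',   -- compare performance first
           COMMIT     => 'YES');  -- accept the plan if it performs better
  DBMS_OUTPUT.PUT_LINE(rpt);     -- the function returns a text report
END;
/
```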

NOTE: The plan history includes all accepted and unaccepted plans. An unaccepted plan can be unverified (newly found but not verified) or rejected (verified but found not to perform well). The plan baselines are stored, as part of the plan history, in the SQL Management Base in the SYSAUX tablespace. It is possible to influence the optimizer's choice of plan by marking a SQL plan baseline as fixed; fixed SQL plan baselines tell the optimizer that they are preferred.
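Marking a plan as fixed can be sketched with DBMS_SPM.ALTER_SQL_PLAN_BASELINE; the handle and plan name are placeholders:

```sql
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
         SQL_HANDLE      => 'SQL_abcdef1234567890',
         PLAN_NAME       => 'SQL_PLAN_example',
         ATTRIBUTE_NAME  => 'fixed',
         ATTRIBUTE_VALUE => 'YES');
END;
/
```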

How to Enable / Disable SQL Plan Management tracing

You can enable tracing by executing the command below while connected as a DBA. This will enable tracing for all SPM sessions; the trace files will be located under <diagnostic_dest>/diag/rdbms/<dbname>/<instance>/trace/.

exec dbms_spm.configure('spm_tracing',1);

The following command will disable the tracing.

exec dbms_spm.configure('spm_tracing',0);

You can verify whether tracing is enabled or disabled by executing the following query:

SELECT parameter_name, parameter_value 
FROM sys.smb$config WHERE parameter_name='SPM_TRACING';

In addition, you can also enable tracing at the session level as follows:

alter session set events 'trace [sql_planmanagement.*]';