Sunday, December 30, 2007

When the use of Oracle range Partitioning is effective

Have you ever wondered at what point (number of rows) Oracle range partitioning starts to be effective? We all know there is some internal overhead in using partitioning, but at what moment does that overhead pay off?

For this test I created 2 tables as follows:

Non partitioned Table

create table np (id number primary key, name varchar2(30), last_name varchar2(30),
age number, dept_no number);

Partitioned Table

create table pt (id number primary key, name varchar2(30), last_name varchar2(30),
age number, dept_no number)
partition by range (dept_no)
(partition dept1 values less than (2),
partition dept2 values less than (3),
partition dept3 values less than (4),
partition dept4 values less than (5),
partition dept5 values less than (6),
partition dept6 values less than (7),
partition dept7 values less than (8),
partition dept8 values less than (9),
partition dept9 values less than (10),
partition dept10 values less than (maxvalue));

In addition, I created two indexes on the dept_no column, one for each table. For the partitioned table it was a local index.
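
The post doesn't show the index DDL; a minimal sketch of the two indexes described above (the index names are my own) would be:

create index np_dept_idx on np (dept_no);

create index pt_dept_idx on pt (dept_no) local;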

I used the following SQL to populate the tables with different amounts of data.

declare
dept_alea number(2);
name_alea varchar2(30);
last_alea varchar2(30);
begin
for i in 1..&limit
loop
-- trunc() spreads the values evenly over 1..10; a plain value(1,10)
-- would round and under-represent departments 1 and 10
dept_alea := trunc(dbms_random.value(1, 11));
name_alea := dbms_random.string('U', 30);
last_alea := dbms_random.string('U', 30);
insert into &tabla
values (i, name_alea, last_alea, 37, dept_alea);
end loop;
-- a single commit at the end is much cheaper than committing every row
commit;
end;
/

In order to be impartial and obtain valid information, I flushed the buffer cache and shared pool after each query. These are the queries I used:

@flush.sql
select * from np where dept_no=5;

@flush.sql
select * from pt where dept_no=5;
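
The contents of flush.sql are not shown here; a minimal version, assuming you are connected as a DBA, would be:

alter system flush buffer_cache;
alter system flush shared_pool;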

The following are the results I obtained:

Number of Rows   Non-Partitioned Time   Partitioned Time

   100           00:00:00.11            00:00:00.51
  1000           00:00:00.14            00:00:00.29
 10000           00:00:00.67            00:00:00.94
 11500           00:00:00.87            00:00:01.08
 12000           00:00:00.84            00:00:01.05
 12500           00:00:01.25            00:00:01.30
 15000           00:00:01.27            00:00:01.42
 17500           00:00:01.63            00:00:01.59
 18000           00:00:01.64            00:00:01.40
 20000           00:00:04.24            00:00:01.83

Conclusion: Now you know, and you don't have to test it yourself. If you will have more than roughly 17,500 rows in a table, you can range partition it, and you will query or access the table by the partition key, then range partitioning is a good idea. In case you are wondering, this test was done on 11.1.0.6 EE running on Oracle Unbreakable Linux on an Intel dual-core 1.3 GHz HP machine.





Monday, November 19, 2007

4th. and final Day at the Oracle Open World.

This was the last day, and you kind of wish for it; the information you get here is very interesting, but after receiving such an amount of information day after day, by the end of the week you are totally burnt out. One thing that I noticed and recommend is to bring a digital camera, sit close to the screen, and take pictures of the slides. It is impossible to take notes and pay attention to what the presenter is saying; most of them talk too fast and the slides are shown for just a few seconds, so you don't even have time to read them, much less take notes from them. It is like a race: they want to cover as much as they can in 90 minutes.

My first session was "Fault Diagnostics Best Practices: What Every DBA Must Know About Oracle 11g" by Prabhaker Gongloor. Oracle is changing the architecture where your log and trace files are stored; there are new incident, trace, and metadata directories. Oracle introduced a new parameter, diagnostic_dest, that when set places your log files and traces, as well as other debugging files, under that directory. If you don't set this parameter it defaults to your ORACLE_BASE, and if ORACLE_BASE is not set then it defaults to your ORACLE_HOME. The new view v$diag_info contains the locations of all the different directories.
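
As a quick illustration (the directory path here is made up), you can point the diagnostic destination somewhere and then check where everything landed:

alter system set diagnostic_dest = '/u01/app/oracle' scope=both;

select name, value from v$diag_info;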

There is a new Enterprise Manager "Support Workbench" to support all this new architecture. This includes the Incident Packaging Service (IPS). IPS uses the new directory structure to automate the packaging of diagnostic data; it solves the problem of what to send to support. The new EM also offers 11g health monitoring, which helps you find problems before they impact the service.

Background_dump_dest is deprecated, as is user_dump_dest; however, core_dump_dest still exists. The new alert log is in XML format, but Oracle still provides the alert.log in text format in case you have a program that depends on it.


Also in 11g the RDBMS automatically purges your trace files, reducing the possibility of traces filling up your disk. The Automatic Diagnostic Repository stores diagnostic data in a directory hierarchy.


My second session was "What Is New in Oracle Transparent Data Encryption" by Daniel Wong. Oracle made a strong effort to improve this tool, but the main new characteristic is tablespace encryption: now you don't have to guess what data to encrypt, as you can encrypt a complete tablespace. There are no issues with having encrypted partitioned tables, or an encrypted parent table referenced by an unencrypted child table, and so on.
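
As a hedged sketch, creating an encrypted tablespace in 11g looks like this (it assumes the TDE wallet is already configured and open; the file name and sizes are made up):

create tablespace secure_data
datafile '/u01/app/oracle/oradata/orcl/secure01.dbf' size 100m
encryption using 'AES256'
default storage (encrypt);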

My third session was "Best Practices for Upgrading to Oracle Database 11g" by Ravi Pattabhi. Ravi basically went through the different ways to execute the upgrade:

- Manual, running the upgrade scripts yourself.

- Automatic, using the Database Upgrade Assistant (DBUA).

- Data Pump export and import.

Since 10g (I believe it was 10g) Oracle has shipped a set of scripts that you run before the upgrade (the pre-upgrade information tool and pre-upgrade analysis tool) that tell you all the possible problems you may face if you upgrade. There is also a post-upgrade script you can run to verify that everything went correctly. If you are using RAC or ASM, you need to address the Clusterware and ASM upgrades before you upgrade the database.
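
For reference, in 11.1 the pre-upgrade information tool is a script you run in the source database before upgrading; a typical invocation (script name as shipped with 11.1) is:

SQL> spool pre_upgrade.log
SQL> @?/rdbms/admin/utlu111i.sql
SQL> spool off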

The dba_registry view helps you with the upgrade process in 11g, and only real errors are spooled during the upgrade. There is even a white paper on Metalink that helps you minimize downtime during the upgrade.

At the end of the presentation, Ravi went through screenshot slides of the upgrade process using DBUA (the Database Upgrade Assistant). If you have upgraded from 10gR1 to 10gR2, it is basically the same set of screens you are familiar with. Ravi suggested testing your upgrade (any sensible DBA does this anyway): use RMAN to clone your production database and test the upgrade on the clone. Database Control and Grid Control offer a variety of tools that help you measure the effect of upgrading to 11g on your application. If you are going to do a manual upgrade, automate as much as you can to minimize human error.

My last session was "Oracle Database 10g Tuning Arsenal" by John Kanagaraj. John basically went through the efficient use of AWR (Automatic Workload Repository) and ADDM (Automatic Database Diagnostic Monitor), which are features we currently use, so I will not elaborate on them.

In conclusion, it is my personal opinion that in 11g Oracle took a step back to improve and fix all the nice features they launched in 10g. There were comments from presenters saying that they believe 11g is the most stable beta Oracle has ever produced, and I admire and congratulate Oracle for taking the time to mature their product.

I hope you enjoyed this overview of Oracle OpenWorld 2007.

Rafael Orta

Friday, November 16, 2007

3rd. day at the Oracle Open World

Last night was the customer appreciation event with Billy Joel, Lenny Kravitz, and Stevie Nicks (Fleetwood Mac). I got to my hotel pretty late and was exhausted, so I am trying to catch up today.

My first session of the day was "Oracle 11g: Simplifying Management of Oracle Database Files" by Aiman Al-Khammash. This session was mostly about ASM and how you can leverage it to administer your database files. ASM (Automatic Storage Management) provides better performance than OCFS (Oracle Cluster File System).

11g allows rolling upgrades and patches for Clusterware and ASM. In 11g you can have variable-size extents, and the extent size grows automatically with the file. 11g also increases the maximum ASM file size, which is platform dependent. In 11g the allocation unit (AU) is no longer fixed at 1 MB as in 10g; you can now set it to 1, 2, 4, 8, 16, 32, or 64 MB. As in 10g, the coarse striping size is one allocation unit (AU) and the fine-grained stripe is 128 KB.

There are new ASM disk group attributes such as AU_SIZE, DISK_REPAIR_TIME (the length of time before a disk is dropped once it goes offline), COMPATIBLE.ASM, and COMPATIBLE.RDBMS.

There is also a new view: V$ASM_ATTRIBUTE
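
A hedged sketch of a disk group creation using these attributes (disk paths and values are made up; note that the compatibility attributes are spelled with dots in the actual syntax):

create diskgroup data normal redundancy
disk '/dev/raw/raw1', '/dev/raw/raw2'
attribute 'au_size' = '4M',
          'disk_repair_time' = '3.6h',
          'compatible.asm' = '11.1',
          'compatible.rdbms' = '11.1';

The attributes can then be checked in the new view:

select name, value from v$asm_attribute;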

There are new asmcmd commands:

lsdsk = lists ASM disks.
cp = copies files between ASM and non-ASM storage.
remap = remaps unreadable blocks in normal/high redundancy disk groups.
md_backup / md_restore = back up and restore the ASM disk group metadata.

In addition, you can run "asmcmd help" to list the available commands.

My second session was "Backup and recovery best practices for very large databases by Tim Chien". Tim explained that every day disk is becoming a more popular backup media since you get performance at a low cost. Tim spoke about VTL (Virtual Tape Libraries) and said they provide you the best of both worlds (tape / disk). He advised to allocate disk backup for the most critical databases, and he also suggested to use locally attached tape drives. Tim suggest the to use the following tools:

- RMAN (incremental backups)
- Data Pump
- Oracle Secure Backup
- Flashback technology (this is redo-based)
- Data Guard (use it as a disk backup, or take the backup from it)

Use third-party storage solutions:

- Snapshots
- Split-mirror backups
- CDP appliances (provide continuous snapshots, copy-on-write or
allocate-on-write)

Plan your data layer and exploit partitioning and read-only tablespaces.

Develop your backup procedure using one of the following options:

Option 1: Use level 0 (FULL) and fast incremental backups.

Option 2: Use level 0, fast incremental backups and incremental updates backup.

Option 3: Data Guard + Full incremental backups.

Option 4: Maintain ETL loads, restore full backups, and re-run the needed ETL.

Consider the use of nologging (widely used in data warehouses to speed up performance).

Divide full backup workloads across multiple days.

New in 11g, he said, you can use multisection backups (intra-file parallel backup).
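
A hedged RMAN sketch of a multisection backup (the datafile number and sizes are made up): the SECTION SIZE clause lets several channels back up one big file in parallel:

run {
  allocate channel c1 device type disk;
  allocate channel c2 device type disk;
  backup section size 10g datafile 5;
}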

And lastly, develop recovery strategies and use the Data Recovery Advisor.


Then I attended Larry Ellison's keynote. Larry was introduced by Billy Joel, and he once again announced Oracle virtualization; the software was open for download right after the announcement. Larry basically spoke about what they heard customers want and what Oracle is doing to cover that demand. He also made reference to Unbreakable Linux and the fact that, despite it being only one year since they launched it and having no dedicated sales force for it, several big companies have moved to it.

My fourth meeting of the day was "Oracle Active Data Guard: How to Utilize Your Standby Databases for Production Workload" by Grant McAlister. Grant started by explaining the traditional physical standby architecture.

He also spoke about "Active data guard" which is a new Oracle option for which you have to pay a license fee; basically it allows you to continue ship and apply of redo on the standby while the database is in read only mode, with this you get better utilization and performance.

Grant said that you can scale out with Active Data Guard, as you can have your reports fed from the read-only standby.

Active Data Guard allows real-time query and RMAN block change tracking on a standby database. When the database is open read-only, consistency is guaranteed since it is maintained through the query SCN.
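
For context, a hedged sketch of the statements used on the standby to put it in real-time query mode in 11g:

alter database recover managed standby database cancel;
alter database open read only;
alter database recover managed standby database
  using current logfile disconnect from session;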

While the database is open read-only, you can run SELECT and ALTER SESSION statements. He also spoke about best practices, such as which applications are candidates for Active Data Guard, and explained how he was able to scale out just by separating the queries from the writes.

He mentioned 3 methods to redirect writes:

1. - DML application level change.
2. - Use of a Database Link
3. - DML via database link.

He advised using synonyms to hide the database links; keep in mind that from a read-only database you can't invoke a remote stored procedure.
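
A minimal sketch of the synonym trick (link name, TNS alias, table, and credentials are all made up): the application keeps using the same name, but on the standby that name resolves through a database link to the primary:

create database link to_primary
connect to app identified by app_pwd using 'PRIMARY';

create or replace synonym audit_log for audit_log@to_primary;

insert into audit_log (msg) values ('written on the primary');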

My last session was "Oracle Data Guard tips and tricks by Larry Carpenter".
Larry started encouraging people to use ASM, he said it provides ease of management and advises the following:

- Use ASM
- Use Flashback recovery area
- Use RMAN to create your standby database.

Convert your standby to ASM first; that way, moving to ASM does not mean downtime. He explained how having a flash recovery area simplifies management; the flash recovery area is required for Flashback Database, and he explained how to turn the flashback option on. He also explained that jobs defined in a logical standby will not run until the standby becomes the primary; in 11g you can target DBMS_SCHEDULER jobs to run on the primary, the standby, or both.

Well that is all for today folks.

Tuesday, November 13, 2007

2nd Day at the Oracle Openworld

I am still trying to adjust to the jet lag; my internal clock is still in NJ, or in the Bahamas where I was on vacation before getting here, but never mind.

My first conference today was "Oracle Database 11g Business Intelligence and Data Warehousing" by Ray Roccaforte. Ray spoke about the exorbitant growth of data volumes; he mentioned Yahoo has a DW database of 250 TB. He said that due to the increase in the amount of data, you need more intelligent tools to interpret it, and that is where business intelligence tools come in. The Oracle BI solution differentiates itself from other solutions in that it performs the analytics inside the database and is therefore faster.

Oracle provides:

Scalable Data management

ASM - Partitioning - Parallel Operations - Aggregation management - RAC

Analytic Platform

Multidimensional Calculations - Time Series - Forecasting - Statistics - Data Mining

ETL & Data Quality

Data Integration - Data Modeling - Metadata Management - Data Profiling - SOA


Oracle is launching a new initiative, partnering with some of the main hardware providers like Dell, IBM, and Sun, to ship the DW solution along with the hardware, perfectly tuned and ready to use out of the box.

In 11g there are performance enhancements for Oracle partitioning, such as the interval partitioning I mentioned yesterday. There are new types of composite partitioning (range-range, list-range, list-hash), you can partition on virtual columns (expression values), and you also have reference partitioning (equi-partitioning a foreign key table with its primary key table). The advantage of interval partitioning is that it creates the partition on the fly when the first row for it is inserted, for example:

create table sh.sales (time_id date, amount number)  -- illustrative columns
partition by range (time_id)
interval (numtoyminterval(1, 'MONTH'))
(partition p_first values less than (date '2008-01-01'));

Also in 11g there is a new feature called the "SQL query result cache". I think I mentioned this in yesterday's post, but basically the results of a query are cached and can be reused by subsequent queries. For example, you may have an expensive GROUP BY as part of a query; that cached result can be used by subsequent queries that need it.
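
A hedged sketch of how you ask for it (the table is made up): the result_cache hint tells 11g to keep the result set so later matching executions are served from the cache:

select /*+ result_cache */ dept_no, count(*)
from employees
group by dept_no;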

Also in 11g, a single cube provides the equivalent of thousands of materialized views, and the 11g SQL query optimizer can treat OLAP cubes as materialized views. What is not clear to me is whether I need to buy the OLAP option, or whether they will allow me to create cubes as part of the EE base.

My second conference was "Oracle 11g Real Application Testing: Avoid Performance Regressions with SQL Performance Analyzer" by Benoit Dageville. Benoit mentioned that the new Real Application Testing (RAT) has two components and is a Grid Control package that you can access for an extra fee:

RAT - SQL Performance Analyzer
    - Database Replay

You use the SQL Performance Analyzer for unit testing; it helps you predict problems before they happen.

You use the Database Replay feature for comprehensive testing, replaying a real workload.

The SQL Performance Analyzer offers automatic data capture with almost no overhead; it gives you the production SQL context and the complete SQL workload, as well as automatic analysis in minutes. You can later use automatic regression tuning to improve your SQL.

The RAT cycle is as follows:

Capture your SQL in production -> transfer the SQL to the test system -> execute the SQL pre-change -> make the changes -> execute the replay -> compare performance -> make the changes in production.

Creating SQL plan baselines helps you revert to previously known stable plans, and the SQL Tuning Advisor helps you explore better execution plans. You can leverage the 11g SQL Performance Analyzer (SPA) to determine how changes will affect you when you move to 11g, even if your database is a 10g database and you don't have an 11g database yet (the optimizer upgrade simulator).

My third session was "Oracle 11g Data pump: Data compression, encryption and more by Carol Palmer" Carol started by giving an overview of data pump, and then focused in the following new features on 11g.

- Data compression
- Dump file encryption
- Partition base transportable tablespace

She mentioned that the original import (imp) will still be around in future versions, but export as we know it is no longer shipped with 11g; Oracle is pushing everybody to move to Data Pump. Data Pump import (impdp) is 15-40 times faster than the regular import, and expdp (Data Pump export) is twice as fast as the regular export.

The new data compression feature compresses data, metadata, or both, depending on what you want. It is an inline operation, meaning that the data gets compressed and decompressed while you export or import, contrary to the traditional two-step extraction and compression. The speed of compression is as fast as traditional utilities like gzip, and the compression ratio is similar as well. Compression adds about 10% overhead versus a non-compressed export or import.
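
A hedged command-line sketch (the directory object and schema are made up); the new COMPRESSION parameter controls inline compression of data, metadata, or both:

expdp system directory=dpump_dir dumpfile=hr.dmp schemas=hr compression=all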

Data Pump encryption offers password-based encryption, you can use the Oracle wallet, or you can use both. It is also an inline operation and works in conjunction with Oracle Transparent Data Encryption; the default encryption algorithm is AES128.

Carol briefly spoke about partition-based transportable tablespaces, without much detail. She also spoke about data transforms (remap_data): you basically create a function to obscure sensitive data, and the translation is done as you import into your test system. That way you can protect columns such as social security numbers, salaries, etc. from the developers or anyone with access to development. You can remap several columns in several tables at a time; the disadvantage I see is that you need to create a function for every column you want to remap.
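
A hedged sketch of remap_data (schema, table, and column names are made up; the documented form expects a packaged function):

create or replace package hr.scrub as
  function ssn (p in varchar2) return varchar2;
end;
/
create or replace package body hr.scrub as
  function ssn (p in varchar2) return varchar2 is
  begin
    return 'XXX-XX-' || substr(p, -4);  -- keep only the last four digits
  end;
end;
/

Then on import: impdp ... remap_data=hr.employees.ssn:hr.scrub.ssn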

Carol also spoke about SecureFiles: they are basically the new LOBs, but with better performance. You can compress them, you can encrypt them, and they are supported by Data Pump.

My last session, and my favorite of the day, was "The Top 10 New Features of Oracle 11g" by Thomas Kyte. Tom gave a preamble saying that it is very easy to read data straight out of database files, and that you can find on sourceforge.com plenty of data unloaders that can read Oracle database files; that is why you need to encrypt the data.

1. - Encrypted tablespaces: Column encryption has been there since 10gR2; however, you could not use range scans, and you had primary/foreign key issues when one or the other was not encrypted. Also, sometimes you do not know which data to encrypt. Tablespace encryption solves all those limitations.

2. - Oracle 11g does more caching: you have client-side caching, the server result cache, and the PL/SQL function result cache. There is a result_cache hint that you can use to tell Oracle whether or not to cache the results. You can also ask for the result of a function to be cached by adding the RESULT_CACHE clause to CREATE OR REPLACE FUNCTION.
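
A hedged sketch of the function result cache (table and function are made up): repeated calls with the same argument are served from the cache instead of re-executing the body:

create or replace function get_dept_name (p_dept_no number)
return varchar2
result_cache
is
  l_name varchar2(30);
begin
  select dname into l_name from dept where deptno = p_dept_no;
  return l_name;
end;
/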

3. - Standby just got better: the logical standby has always been limited in data type support but always open for business, while the physical standby is easier but was considered not helpful for day-to-day operations. Now the logical standby has XMLType support, and the physical standby can be used for destructive testing; in other words, you can open it, do the operations you need, bring it back to the original state using flashback, and place it in standby mode again. Another improvement is that redo is continuously being applied even while you are querying the database. This is great because you can offload your reports to the standby database without impacting synchronization with the primary.

4. - Real Application Testing: I mentioned this before; Tom did not say much about it other than that it has two components: workload capture and database workload replay.

5. - Data Pump compression and encryption: Tom mentioned the new partition options (none - departition - merge), and that you can also reuse dump files (reuse_dumpfiles).

6. - Virtual columns: these are column expressions involving other columns in the table or constants.
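
A minimal sketch (the table is made up); the value is computed from the expression rather than stored:

create table emp_comp (
  salary     number,
  bonus      number,
  total_comp number generated always as (salary + nvl(bonus, 0)) virtual
);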

7. - Partitioning just got better: Oracle now covers the full set of composite partitioning schemes, you can partition on virtual columns, you can partition by reference, and you can use the interval partitioning mentioned before.

8. - Long-awaited pivot columns: Oracle has now implemented a new PIVOT clause to display pivoted columns.
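
A hedged sketch of PIVOT (the table is made up): one row per year, one column per quarter:

select *
from (select yr, qtr, amount from sales_summary)
pivot (sum(amount) for qtr in (1 as q1, 2 as q2, 3 as q3, 4 as q4));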

9. - Flashback data archive: for years customers have wanted to go back in time more than just a few hours or a day; because of the flashback architecture that was not possible (you can't keep years of undo online). Now, with the new flashback data archive, you can get total data recall database-wide or table by table.
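
A hedged sketch of the feature (tablespace and table names are made up): you define an archive with a retention period and attach tables to it:

create flashback archive fda_5y tablespace fda_ts retention 5 year;

alter table orders flashback archive fda_5y;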

10. - Finer-grained dependency tracking: in 11g you get fewer object invalidations, even if you add to a procedure's specification, add, modify, or drop a column, or modify what a synonym points to.

11. - OLTP table compression: Oracle Advanced Compression now achieves around a 3x ratio.

Personal comment: it seems that in 11g, rather than going crazy adding features that people may not use, Oracle took a step back and matured and improved what they already have, which I think is great.

More tomorrow...

Day 1 Oracle Open World

Hello Everybody

This is my report on the first day at Oracle OpenWorld. My trip here was not easy: I was flying from 2 PM on Sunday and arrived Monday at 9:50 AM, and my luggage was lost during the trip, but I am not going to bore you with the details.

I did not arrive in time for the opening keynote by Charles Phillips, but friends told me that it was good. There was also a keynote on Sunday, which I missed as well, but that time Larry Ellison was the speaker; he spoke about Oracle and its beginnings, and today everybody has commented on how good it was.

My first meeting was "The fastest and most cost effective backup for Oracle Database" by Sean McKeown. Sean is the manager for the system administrators in the Oracle Global IT organization, he was explaining how Oracle saved 1.5 million dollars using Oracle Secure Backup (OSB), Oracle Secure Backup is a media library manager from Oracle, In the future you won't need for third party media library managers to backup Oracle databases to tape. The product only works for Oracle databases and with certain hardware, they couldn't say if in the future it will work for non Oracle databases, however my guess is that it wont.

OSB supports virtual tape libraries. If you are not familiar with the term, it is just making the backup to a disk that looks like a tape to your server; later the information is backed up to tape from that special disk, and the space taken by the backup is released.

One curious thing is that it is the system administrators, not the DBAs, who take care of backup and recovery at Oracle; the DBAs just need to make sure RMAN is set up properly. Oracle is using OSB for their email; they serve more than 70,000 employees, and the amount of information they store is pretty big. He said Oracle Secure Backup was 20 to 50% faster than the Veritas solution and a lot cheaper too, at just 3K per tape drive. OSB does not back up the undo tablespace.

My second meeting was "Oracle 11g next generation of high availability" by Sushil Kumer
. In this session he talked about flash back query, flash back version which I was not familiar with (you basically can tell the database to show you the transactions that happened between 2 date time you choose), he spoke about flash back database, class="blsp-spelling-error" id="SPELLING_ERROR_12">ASM (Automatic Storage Manager), flash back table (none of this are new) , what is new is flash back transaction, he said that now Oracle is intelligent enough to know all the dependencies of a transaction and flash it back eliminating the need for you to guess what a particular process did and what tables touched. Flashback is free with

Flashback has become very popular as a testing tool, since you can bring the whole database back to its original state after you test any process, and it is a lot simpler and faster than restoring and recovering your database.

In a regular DR scenario, correction time = error time + f(db size); with
flashback, correction time = error time.

He said a lot of Oracle customers have been asking to flash back over longer periods, but until now the product was not designed to support that. In 11g Oracle introduced the flashback data archive, which allows you to go back years using the same flashback architecture.

My third meeting was "The best Oracle 11g new features" by Rich Niemic. This was probably the best meeting I had so far, however he did not go technical and spend 15 min. making jokes, no that the jokes were not good, but when you have the opportunity to get to Rich in front of you, you want to get the most out his time.

He said that in 11g Oracle did not add a lot of new features but focused on improving the current ones, and in his opinion 11g is the best-quality beta version Oracle has ever produced. He said Oracle made improvements automating backup to disk; it now does data corruption validation, so you know a backup is corrupt before you need to restore from it. He said more people are now backing up to cheap ATA disks.

Rich spoke about the Data Recovery Advisor: now you don't have to work out the fastest way to recover your database, as Oracle will recommend it to you.

Data Guard can now synchronize with disk mirroring, and in 11g a standby can be open for queries and be recovered at the same time. Let me explain that: in the past you could use your read-only standby for reporting, but while the database was open it was not applying the redo from the primary; you had to close the database and place it back in recovery mode so it would apply the changes. In 11g the database can be open read-only while the redo is being applied at the same time.

11g has a feature where, making use of Grid Control, if Oracle detects an error it can apply a patch to fix it online. Oracle is also doing their best to deliver future patches as rolling patches in RAC, and the same goes for database upgrades.

In 11g you can do online table and index redefinition; yes, I know you will say you can do it in 10g too, but the difference is that in 11g there is no locking.

He said that in 11g you only have to set one memory parameter, memory_target, and it covers both your PGA and your SGA.
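
A hedged sketch of switching to automatic memory management (sizes are made up; memory_max_target requires an instance restart to take effect):

alter system set memory_max_target = 2g scope=spfile;
alter system set memory_target = 1536m scope=spfile;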

In 11g there is a new parameter called result_cache_mode; with it you can ask Oracle not to cache results, and in addition you can flush the cache. The cache in 11g is more intelligent, which improves performance.

In 11g there are invisible indexes, which are useful if you want to know how an application with lots of indexes would behave if you dropped some of them, without actually dropping them. If you make an index invisible, Oracle will ignore it, and if you really need the index, all you have to do is make it visible again.
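
A minimal sketch (index and table names are made up):

create index emp_dept_idx on emp (dept_no) invisible;

alter index emp_dept_idx visible;    -- the optimizer can use it again
alter index emp_dept_idx invisible;  -- the optimizer ignores it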

He said people usually rebuild indexes online when what they should be doing is coalescing them.
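
Coalescing defragments adjacent leaf blocks in place, without the locking and space cost of a full rebuild; using the hypothetical index from above:

alter index emp_dept_idx coalesce;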

In 11g Oracle introduced the SIMPLE_INTEGER PL/SQL data type, saving space and speeding up computation. 11g also introduced a lot of improvements in ADDM, and there is now a global ADDM that analyzes performance information for the entire cluster.

11g introduced SQL Plan Management (SPM), basically a tool that records the plans in use and can reproduce them later.

Oracle is deprecating stored outlines and replacing them with SQL plan baselines.

There are also performance improvements in 11g Grid Control: you can capture baselines, then use the SQL Access Advisor and the SQL Repair Advisor.

The SQL Performance Analyzer captures performance and allows you to run the same workload in another environment; at the end it gives you a graphical comparison between environments.

Real Application Testing captures your database workload and allows you to replay it on another system; with this feature you can test different software versions and architectures.

Partitioning just got better with the interval clause: now if you forget to specify the maximum value for a partition, or a row falls out of range, you will not get an error as long as the interval clause was specified. In other words, the interval tells Oracle what interval to use to create new partitions when data falls out of range. You can also now compress partitioned tables.

Oracle is introducing SecureFiles, which are faster than traditional LOBs. Oracle also made improvements to optimizer statistics; you now have an auto-sampling feature, and you can gather statistics without publishing them until you are ready.
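
A hedged sketch of pending statistics (schema and table are made up): gather without publishing, test, then publish:

exec dbms_stats.set_table_prefs('HR', 'EMPLOYEES', 'PUBLISH', 'FALSE');
exec dbms_stats.gather_table_stats('HR', 'EMPLOYEES');
-- test with optimizer_use_pending_statistics = true, then:
exec dbms_stats.publish_pending_stats('HR', 'EMPLOYEES');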

There is also a new feature where Oracle understands the correlation between different columns of a table and behaves differently depending on how they are used; you can have multi-column and multi-table statistics. They also included several security enhancements, which he did not detail.
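
A hedged sketch of the multi-column (extended) statistics (schema, table, and columns are made up):

select dbms_stats.create_extended_stats('HR', 'EMPLOYEES', '(dept_no, job_id)')
from dual;

exec dbms_stats.gather_table_stats('HR', 'EMPLOYEES');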

My last meeting of the day was the keynote with Andy Mendelsohn, the senior executive in charge of the database. He went over some of the new database features again, and some of the customers implementing and taking advantage of those features spoke as well. The CIO of eHarmony said they tried a SQL Server 2005 clustered environment, but it did not provide the scalability they needed, so they switched to Oracle RAC.

More information tomorrow.


Thursday, October 25, 2007

Oracle 10g Background processes


There are a large number of new background processes in Oracle 10g; however, there are just five that are vital for the database to work.
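
Before going through the list, note that you can ask the instance itself which background processes are running; this query against v$bgprocess filters to the live ones (the paddr <> '00' condition excludes processes that are defined but not started):

select name, description
from v$bgprocess
where paddr <> '00'
order by name;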

5 Old and Vital background processes

- SMON (System Monitor): Performs instance recovery, cleans up temporary segments
no longer in use, recovers dead transactions, and coalesces free extents in the database.

- DBWR (Database Writer): This process is in charge of writing to the database files.

- CKPT (Checkpoint): This process is in charge of executing checkpoints.

- PMON (Process Monitor): Recovers when a user process fails, cleans up the cache and frees
resources; it also monitors dispatchers and server processes.

- LGWR (Log Writer): This process is in charge of writing to the redo log files.

New Background processes

ASM:

- ARBn: (ASM) Rebalance worker process; it rebalances extents.

- ORBn: Performs ASM data extent movements for a rebalance. There can be many of these at the same time.

- RBAL: The ASM rebalance master; it opens all disks under each disk group, makes them
available to the various clients, and creates the rebalance plan to move extents.

- OSMB: (ASM) Helps to manage the drive storage.

- ASMB: (ASM) Responsible for communication between the database instance and the ASM instance; it provides the heartbeat to the ASM instance.

Server Side Optional

- RECO: Recovers distributed transactions.

- ORA_CJQ0_: Job queue coordinator; checks the job log and spawns J### processes.

- QMNC: AQ coordinator, used to manage Oracle Streams Advanced Queuing.

- ARCH, ARCn: Copies redo logs to the archive log destination.

- CTWR: Block Change Tracking Writer.

- ORA_D###_: Oracle dispatcher processes.

- ORA_J###_: Job queue processes.

- ORA_MMAN_: Memory manager; manages automatic shared memory.

- ORA_MMNL_: Helps collect metrics and information about sessions.

- ORA_MMON_: In charge of collecting statistics.

- ORA###_: Dedicated server (user process).

- ORA_RVWR_: Recovery Writer (Flashback Database).

- ORA_S###_: Oracle shared server processes (Multi-Threaded Server).

- ORA_P###_: Parallel query slave processes.

- ORA_LNS#_: Network server.

Other Processes

- ORA_AQ_: Sends notifications when messages arrive in the queues (Advanced Queuing).

- ORA_Q###_: Queue processes.

- ORA_DMON_: Data Guard Broker process.

- ORA_SNP###_: Snapshot processes (Data Guard).

- ORA_TRWR_: Advanced Queuing time manager.

- ORA_WMON_: Wakeup monitor process.

Undocumented Server Side Extra

- ORA_FMON_: Manages mapping information when FILE_MAPPING is used.

Standby Mode

- ORA_LSP0_: Logical standby.

- ORA_LSP1_: Dictionary build process for logical standby.

- ORA_LSP2_: Sets Guard standby information for logical standby.

- ORA_DMON_: Data Guard Broker monitor process.

- ORA_RSM#_: Data Guard Broker resource guard process.

- ORA_INSV_: Data Guard Broker instance slave process.

- ORA_NSV#_: Data Guard Broker NetSlave process.

- ORA_MRP0_: Managed recovery process for Data Guard.

- ORA_RFS_: Remote File Server process (Data Guard); receives archived redo from the primary.

Real Application Cluster

- The following processes are unique to a RAC environment. You will not see them otherwise.

* Lock monitor (LMON) process: The LMON process monitors all instances in a cluster to detect the failure of an instance. It then facilitates the recovery of the global locks held by the failed instance. It is also responsible for reconfiguring locks and other resources when instances leave or are added to the cluster (as they fail and come back online, or as new instances are added to the cluster in real time).

* Lock manager daemon (LMD) process: The LMD process handles lock manager service requests for the global cache service (keeping the block buffers consistent between instances). It works primarily as a broker sending requests for resources to a queue that is handled by the LMSn processes. The LMD handles global deadlock detection/resolution and monitors for lock timeouts in the global environment.

* Lock manager server (LMSn) process: As noted earlier, in a RAC environment, each instance of Oracle is running on a different machine in a cluster, and they all access, in a read-write fashion, the same exact set of database files. To achieve this, the SGA block buffer caches must be kept consistent with respect to each other. This is one of the main goals of the LMSn process. In earlier releases of Oracle Parallel Server (OPS) this was accomplished via a ping. That is, if a node in the cluster needed a read-consistent view of a block that was locked in exclusive mode by another node, the exchange of data was done via a disk flush (the block was pinged). This was a very expensive operation just to read data. Now, with the LMSn, this exchange is done via a very fast cache-to-cache exchange over the cluster's high-speed interconnect. You may have up to ten LMSn processes per instance.

* Lock (LCK0) process: This process is very similar in functionality to the LMD process described earlier, but it handles requests for all global resources other than database block buffers.

- CM: Cluster Manager; it maintains the status of the nodes and instances across the cluster.

- GCS: Global Cache Service

- GES: Global Enqueue Service daemon

- GMS: Group Membership Service

- NM: Node Monitor

- NPIC: Network inter-process communication.

- ORA_PSP0_: Process spawner; it spawns Oracle background processes.

- CSS (Cluster Synchronization Services): Cluster management and node monitoring; it also monitors ASM.

- CRS (Cluster Ready Services): Manages cluster resources such as instances, services, and virtual IPs.

- ORA_EMN0_: Event monitor process.

- ORA_DIAG_: Diagnosability daemon (DIAG) process: The DIAG process is used exclusively in a RAC environment. It is responsible for monitoring the overall "health" of the instance, and it captures information needed in the processing of instance failures.

Tuesday, October 16, 2007

SQL Server useful hints for Oracle and Sybase DBAs

1. Some System Catalog tables to get familiar with:

Sysdatabases - Contains one row for each database on Microsoft® SQL Server™. When SQL Server is initially installed, sysdatabases contains entries for the master, model, msdb, mssqlweb, and tempdb databases. This table is stored only in the master database.

Sysdevices - Contains one row for each disk backup file, tape backup file, and database file. This table is stored only in the master database.

Sysfiles - Contains one row for each file in a database. This system table is a virtual table; it cannot be updated or modified directly.

Sysindexes - Contains one row for each index and table in the database. This table is stored in each database.

Syslogins - Contains one row for each login account.

Sysobjects - Contains one row for each object (constraint, default, log, rule, stored procedure, and so on) created within a database. In tempdb only, this table includes a row for each temporary object.

Sysprocesses - The sysprocesses table holds information about processes running on Microsoft® SQL Server™. These processes can be client processes or system processes. sysprocesses is stored only in the master database.

2. Filegroups:

Database files can be grouped together in filegroups for allocation and administration purposes. Some systems can improve their performance by controlling the placement of data and indexes onto specific disk drives. Filegroups can aid this process. The system administrator can create filegroups for each disk drive, then assign specific tables, indexes, or the text, ntext, or image data from a table, to specific filegroups.

These are what Sybase calls database segments. In MS SQL Server you can back up, restore, and run DBCCs on individual filegroups of a database.

3. Indexes

Clustered indexes have one row in sysindexes with indid = 1. The pages in the data
chain and the rows in them are ordered on the value of the clustered index key.
There can be no more than one clustered index per table.

When the clustered index is rebuilt, all nonclustered indexes are automatically rebuilt.

4. xp_cmdshell command:

Xp_cmdshell - Executes a given command string as an operating-system command, e.g. 1> xp_cmdshell 'dir'

*** This is considered a security risk when allowed. ***

5. Statistics

statistics - Statistical information can be used by the query processor to determine the optimal strategy for evaluating a query. When you create an index, SQL Server automatically stores statistical information regarding the distribution of values in the indexed column(s). The query optimizer in SQL Server uses these statistics to estimate the cost of using the index for a query. Additionally, when the AUTO_CREATE_STATISTICS database option is set to ON (default), SQL Server automatically creates statistics for columns without indexes that are used in a predicate.

sp_autostats - Displays or changes the automatic UPDATE STATISTICS setting for a specific index and statistics, or for all indexes and statistics for a given table or indexed view in the current database.

The following commands can help you determine whether an index needs to be rebuilt.

DBCC SHOW_STATISTICS - Displays the current distribution statistics for the specified target on the specified table. The results returned indicate the selectivity of an index (the lower the density returned, the higher the selectivity) and provide the basis for determining whether or not an index is useful to the query optimizer. The results returned are based on the distribution steps of the index.

sp_spaceused – If the unused column goes negative, it is a candidate for checking the index.

** When you run sp_helpindex and see an index named _WA_Sys_xxx, where xxx is a column name, these are statistics generated by SQL Server. You can look at these columns as possible candidates to put an index on.

6. Terms

MSDB database - This is essentially where SQL Server stores backup and restore
history. You must have it; it is one of the four necessary databases along with model, tempdb,
and master.

SQL Server Agent – Essentially the process that allows you to schedule periodic activities and notifications through the SQL Server job scheduler.

1433 – MS SQL Server default listening port. On Sybase it was 2025. We usually change this to another port for security purposes.

Tran log - Unlike Sybase, on MS SQL Server 2000 you must have a separate file for the transaction log; it cannot share a file with the data.

Detach and attach a database- In SQL Server 2000, the data and transaction log files of a database can be detached and then reattached to another server, or even to the same server. Detaching a database removes the database from SQL Server but leaves the database intact within the data and transaction log files that compose the database. These data and transaction log files can then be used to attach the database to any instance of SQL Server, including the server from which the database was detached. This makes the database available in exactly the same state it was in when it was detached.

Log Shipping - In SQL Server 2000 Enterprise Edition, you can use log shipping to feed transaction logs from one database to another on a constant basis. Continually backing up the transaction logs from a source database and then copying and restoring them on a destination database keeps the destination database synchronized with the source. You must have Enterprise Edition; otherwise, you can do this yourself manually or with scheduled jobs.

Microsoft Clustering - A cluster is a group of independent computers that work together to run a common set of applications and provide the image of a single system to the client and application. The computers are physically connected by cables and programmatically connected by cluster software. These connections allow computers to use failover and load balancing, which is not possible with a stand-alone computer.

7. Database Placement

A general rule of thumb is to place the data, logs, and backups on separate devices or disk drives, in case a drive should go bad.

Oracle Hints:

1. The ANALYZE command serves several purposes in Oracle.

The two features that are most important to remember are:

A. Analyze allows you to update statistics on tables (like UPDATE STATISTICS).

B. Analyze allows you to validate objects (like DBCC CHECKTABLE).
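
For the Sybase/SQL Server DBA, a hedged sketch of the two forms (the table name is made up; on current Oracle versions DBMS_STATS is the preferred way to gather optimizer statistics):

analyze table emp compute statistics;          -- like UPDATE STATISTICS
analyze table emp validate structure cascade;  -- like DBCC CHECKTABLE
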
2. If you are in archive log mode, Oracle writes the archive logs to disk when the online redo logs become full.
If the disk where the archive logs are written fills up, Oracle will stop working until
space is made to write the archive logs.

Sunday, September 30, 2007

Tips for Selling over the Internet and Marketing Strategies

The process I am about to describe is a compilation of several seminars, webinars, online training, and personal experience.

Choosing the perfect selling strategy is a moving target; what is true today may not apply tomorrow, so you should adjust your strategies all the time.

Choose the right product: This is probably the most important step. Choose a product that you would like to know more about, or a product that you believe in and have some knowledge about (reviews, explanations, white papers, advice). It is important that besides selling a product you demonstrate you are an authority on the product you provide; people will come to your site looking for advice, and you can promote your product.

If you are unsure about where to find a product, the URLs below will point you to several wholesalers you can use.

Alibaba http://www.alibaba.com/

Thomasnet http://www.thomasnet.com/

Gowholesale http://www.gowholesale.com/

Wholesalecentral http://www.wholesalecentral.com/

Idropshipper http://www.idropshipper.com/

Doba http://www.doba.com/

Ezdropshipper http://www.ezdropshipper.com/

What if I can't find a wholesaler for my product? This is far from being an issue; it is a business opportunity. You will surely find people who are not wholesalers and sell the product at retail. You can become a wholesaler: all you have to do is search for the people selling that product on the internet using a search engine, and skip the first two or three pages of results (the people at the top of the search results are selling well, and there is a good chance they do not need to do business with you). Call several of the providers you found on the third page and beyond, ask them if they would do drop shipment for you, and ask them for a discount for selling their product.

There are 2 types of drop shippers.

A) Blind drop shippers: They will not send the orders with their logo; the advantage here is that clients are likely to return to your website to purchase more. The disadvantage is that you will have to deal with the returns.

B) Standard drop shippers: They will send an invoice with their logo and price to your customers; the advantage is that the customer can deal directly with the wholesaler for returns.

What if you can't find your product on the internet? If you can't buy the product on the internet but you know the manufacturer, contact them, tell them you want to sell their product on the internet, and ask them if they would ship directly for you.

You can also choose to sell products you use, products of interest to you or related to your profession, products that a local business sells but that have no representation on the internet, or information about something you know well.

I recommend not mixing products, for example selling lamps and detergents; websites like that rank badly and do not attract people. Try to concentrate on a product or a line of related products, and try not to be generic. For example, it is very unlikely you will rank well if you choose to sell MP3 players (too generic); focus on selling a specific line of MP3 players, for example iPod players.

What makes a good product? A good product is one that has a market, can be profitable, and has supply (manufacturers or retail suppliers).

How do I find a good product? There are several ways to do this, but the "What's Hot" list on eBay is a pretty good indicator: http://pages.ebay.com/sellercentral/whatshot.html.

In addition, you can use the Google or Yahoo reverse search tools to see what people are looking for, or use a paid service such as HammerTap: https://secure.hammertap.com/deepanalysis/

Build the first version of your website: Nobody expects this to be perfect; you will tune your website over time, but you need some presence as soon as possible, since it takes some time (weeks) for your site to be recognized and indexed by the search engines. I do recommend you read my other paper about how to rank higher in the search engines.

You want your web page to be a resource about the product you are selling.

Generate Initial traffic:

This can be achieved in 2 ways:

A) Relevance listing: Choose the right keyword phrases for your website, 2 to 5 words specific to your product. Here you can use the reverse search tools offered by Google and Yahoo; while doing this you need to understand your market and know what keywords your competitors are using (use them).

You need to optimize your website to rank high following these steps:

A1) Optimize each page of your website for a different keyword phrase, and use the right keywords in the right places.

A2) Place 10 to 12 words in the meta title, and include the keyword phrase you are optimizing the page for only once.


A3) Place 25 words or less in the meta description, and include the keyword you are optimizing for 2 to 3 times.


A4) Throughout the text of the page, bold the keywords you are optimizing for, and use them in about 4% of the total text of the page. In other words, if the page has 100 words, write your keyword 4 times in bold on the page (if bolding would make your site look ugly, don't bold it).


A5) Place the keyword in the alternate text of every image on the page.


A6) Link exchange is a good strategy to rank higher; keep in mind that your goal is to gain incoming links rather than outgoing links.


A7) Publish your link in vertical portals (product-specific search engines):
http://www.searchengineguide.com/

A8) Publish your website in this directory: http://www.dmoz.com


B) Pay per click: The 2 most common services are Overture (Yahoo Search Marketing) and Google AdWords. Use middle- to low-volume keywords that appear in the reverse search tool, and also bid on misspelled keywords (you can use http://www.fatfingers.com). While you get familiar with this, keep your bid amount at 10 cents and set a daily budget of $1.


Test your website:
Your goal has to be to improve your website; you need to improve the conversion rate (sales / visitors), converting 2% of your visitors into sales. You need to keep visitors on your page for at least 15 seconds; if you can't make this happen, you need to improve your site with interesting content.


- Test your product prices (high vs. low): what are the floor and ceiling prices? Low prices will generate volume.
- Increase your average order size by offering add-ons.
- Do cross promotions; people buy related products at the same time for similar reasons.
- Offer a quantity discount.
- Offer free shipping.
- If bundling, offer slight discounts.
- Ask friends and family for feedback.
- Test one thing at a time.
- Keep a close eye on your website statistics.
- Test each change with at least 100 visitors.
- Improve the exit page (the last page your visitors see on your website before going to another website).
- Send promotional emails to the customers in your database (not spam).

Market more aggressively:

- Sell your product on eBay in addition to your website.
- Find and research the product you want to sell.
- Try to be in the middle zone for shipping cost.
- Create an effective title with rich keywords that catches attention.
- Create credibility in the description: if your item has scratches, include pictures of the scratches and say how they were produced; that builds trust.
- Fill the description with keywords.
- Be honest; a reputation is difficult to build and easy to lose.
- List the features and benefits of your product.
- Check what others are charging for the same product you are selling.
- Tie into emotions: put a personal note in your shipment, or throw in something your customer is not expecting, for example a postcard with discounts on future purchases, a business card, a thank-you note, or a brochure.
- Personalize the description; use words like "you" and "your".
- Use lots of pictures; a picture is worth 1000 words.
- When using eBay, try to have your auction end at a good universal time; Saturday and Sunday are the best.
- Do post-sale correspondence: contact your buyers, treat them as customers, and notify them of the shipment.

Friday, August 31, 2007

Oracle Partitioning the old fashion way

A long, long time ago in a galaxy far, far away... no, just kidding about the galaxy part, but it was really around Oracle version 8/8i that Oracle introduced Oracle Partitioning. Before partitioning existed, database administrators and developers had to come up with different techniques to emulate it (manual partitioning). It recently caught my attention that a software vendor didn't have a clue what I was talking about when I mentioned "manual partitioning". In their development data model they have one single table that keeps historic information, and they wanted their customer to pay for Oracle Enterprise Edition plus Oracle Partitioning for an application that really requires only Oracle Standard Edition (up to 4 CPUs). This translates into several thousands of dollars more in software and maintenance fees. That motivated me to write this post, where I will give a general idea of the things we used to do; you can take it from here and change it to fit your needs.

In this example we are going to simulate range partitioning by date. Instead of one partitioned table we are going to use two tables: table "primero" ("first" in Spanish), which will hold the most recent 3 months of data, and table "segundo" ("second" in Spanish), which will hold the rest of the data.

First Step - Table Creation

create table rorta.primero (cust_id number primary key, cust_name varchar2(30),
cust_lname varchar2(30), fecha date);

create table rorta.segundo (cust_id number primary key, cust_name varchar2(30),
cust_lname varchar2(30), fecha date);

Second Step - View Creation (This will allow you to query the 2 tables as a single table)

create view rorta.customer as select * from primero
union all select * from segundo;

Third Step - Create the procedure to insert into the table.

create or replace procedure rorta.insertar (c_id primero.cust_id%type,
c_name primero.cust_name%type, c_lname primero.cust_lname%type,
c_fecha primero.fecha%type)
as
begin
  -- rows from the last 90 days go to primero, older rows to segundo
  if c_fecha >= sysdate - 90 then
    insert into primero values (c_id, c_name, c_lname, c_fecha);
  else
    insert into segundo values (c_id, c_name, c_lname, c_fecha);
  end if;
end;
/

Fourth Step - Insert information into the table.

exec rorta.insertar(1, 'Rafael', 'Orta', date '1970-04-28')

The way it works is that instead of executing an insert statement, you pass the data you want to insert to the stored procedure, as I did above. If you want to select, delete, or update, you can create similar stored procedures for those operations, or go directly to the table where the data lives (primero or segundo). If you need to query all the data, use the view we created.
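
One piece the scheme still needs is a periodic job that ages rows out of primero into segundo once they pass the 90-day window; otherwise primero keeps growing past its 3 months. A hedged sketch, using the same table names:

create or replace procedure rorta.envejecer as
begin
  -- move rows older than 90 days from primero to segundo
  insert into segundo
    select * from primero where fecha < sysdate - 90;
  delete from primero where fecha < sysdate - 90;
  commit;
end;
/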


I hope this helps, contact me if you need further details at Consulting@Oraprofessionals.com