Thursday, January 14, 2010

RMAN Slow Performance and Resource Manager

RMAN is running slow? Well, there could be several reasons behind it: the load on the system, network bandwidth, CPU, the tape library's performance, and many other factors. One area worth looking into is Resource Manager.

Resource Manager is a key tool in complex environments where resources must be managed carefully.

The SYS and SYSTEM users are sometimes overlooked when planning for Resource Manager, especially SYS.

Some tips while creating the Resource Manager plan:
Use a consumer group called SYS_GROUP and map the SYS and SYSTEM users to it (in recent Oracle releases SYS_GROUP is already supplied, with SYS and SYSTEM mapped to it by default).
When creating a plan, make sure SYS_GROUP is at Level 1 with 100% of the CPU.
Do not allocate resources to any other group at Level 1.
Make sure SYS_GROUP has UNLIMITED for all the other directives:
  • Maximum Degree of Parallelism
  • Maximum Number of Active Sessions
  • Activation Queue Timeout (sec)
  • Maximum Undo Space (KB)
  • Maximum Estimated Execution Time (sec)
  • Max Idle Time (sec)
  • Max Idle Time if Blocking Another Session (sec) 

With the above setup in Resource Manager, RMAN will get 100% of the resources, since it connects as SYS and therefore runs in SYS_GROUP.
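The setup above can be sketched with the DBMS_RESOURCE_MANAGER package. This is only an illustration: the plan name DAY_PLAN is a hypothetical example, and cpu_p1/cpu_p2 are the pre-11g directive parameters (from 11g onward mgmt_p1/mgmt_p2 are preferred). Directives you leave unset default to unlimited, which matches the tips above.

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- DAY_PLAN is an illustrative name; SYS_GROUP is the Oracle-supplied
  -- group that SYS and SYSTEM belong to by default.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'DAY_PLAN',
    comment => 'Full priority for SYS_GROUP at level 1');

  -- Level 1: 100% CPU to SYS_GROUP, no other group at this level.
  -- All other directives are left unset, i.e. unlimited.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAY_PLAN',
    group_or_subplan => 'SYS_GROUP',
    comment          => 'SYS and SYSTEM (includes RMAN sessions)',
    cpu_p1           => 100);

  -- Everything else only gets CPU at level 2.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAY_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'All remaining sessions',
    cpu_p2           => 100);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

-- Activate the plan
ALTER SYSTEM SET resource_manager_plan = 'DAY_PLAN';
```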

Tuesday, January 12, 2010

Compressing Existing Tables

We back up tables all the time and then never worry about them for months, maybe years. In a warehouse environment, backup tables are common, since they are created before making changes such as a business re-org, a code change, and so on.

As DBAs we should understand where to place these tables and how to optimize their storage. Some tables in my environment are 50GB, and they are backed up every month before a critical process runs. Each backup then stays around for a year.

Some tips to handle such cases:
Some backup tables are never used, so I would suggest exporting them with the EXPDP utility and then dropping them from the schema.
Some backup tables are used by developers; to address that there are various options:
- Create a new disk group on cheaper devices and create a tablespace on it. Use the TABLESPACE clause of CREATE TABLE to place the backup table there.
   ex : create table sh.employee_history_bkup_jan2010 tablespace tablebackup as select * from sh.employee_history;
- Use the COMPRESS option when creating such tables.
   ex : create table sh.employee_history_bkup_jan2010 tablespace tablebackup compress as select * from sh.employee_history;
- If you have already created the backup table without compression, use ALTER TABLE ... MOVE with the COMPRESS option.
   ex : alter table sh.employee_history_bkup_jan2010 move compress;
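One caveat worth noting with the MOVE approach: moving a table changes its rowids, so any existing indexes are left UNUSABLE and must be rebuilt. A short sketch, using the table from the example above (idx_emp_hist_bkup is a hypothetical index name):

```sql
-- Move and compress the existing backup table
alter table sh.employee_history_bkup_jan2010 move compress;

-- The move invalidates existing indexes; check their status ...
select index_name, status
  from dba_indexes
 where table_name = 'EMPLOYEE_HISTORY_BKUP_JAN2010';

-- ... and rebuild any that show UNUSABLE
alter index sh.idx_emp_hist_bkup rebuild;
```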

Note : When creating these backup tables, use the NOLOGGING option so that direct-path operations generate minimal redo and therefore fewer archive logs. Keep in mind that NOLOGGING data cannot be recovered from the redo stream, which is usually acceptable for disposable backup copies.
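Putting the options together, a compressed, minimally logged backup copy can be created in one statement (table and tablespace names follow the examples above):

```sql
-- Compressed CTAS into the cheap tablespace, with minimal redo.
-- A CTAS is a direct-path load, so NOLOGGING takes effect and
-- few archive logs are produced.
create table sh.employee_history_bkup_jan2010
  tablespace tablebackup
  compress
  nologging
as
  select * from sh.employee_history;

-- Optional: verify the attributes
select compression, logging
  from dba_tables
 where owner = 'SH'
   and table_name = 'EMPLOYEE_HISTORY_BKUP_JAN2010';
```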

I hope this helps people in warehousing environments.

Sunday, January 10, 2010

ASM Adding and Removing Disks

I have been using ASM for quite a long time, and one maintenance task that keeps me busy these days is replacing some of the disks used by ASM.
To do so, some disks have to be removed from the existing diskgroup and the newly added disks put in their place.

Steps Involved
Step 1: Add the new devices to the hardware.
Step 2: Update asm_diskstring with the new device path.
Step 3: Add the new devices to the existing diskgroup.
alter diskgroup dg1 add disk '/dev/newdisk1','/dev/newdisk2','/dev/newdisk3';
Step 4: Wait for ASM to rebalance the data.
This redistributes the data onto the new storage, but not completely.
Step 5: Remove the old disks from ASM. Note that DROP DISK takes the ASM disk name (see V$ASM_DISK.NAME), not the OS device path.
alter diskgroup dg1 drop disk olddisk1, olddisk2, olddisk3;
Step 6: Wait for ASM to rebalance the data.
This second rebalance redistributes the data onto the new storage completely.

The above process can still be made quicker.
You can skip the first rebalance (Step 4) by issuing the add and the drop in a single ALTER DISKGROUP command; ASM takes care of it internally with one combined rebalance.

alter diskgroup dg1 add disk '/dev/newdisk1','/dev/newdisk2','/dev/newdisk3' drop disk olddisk1, olddisk2, olddisk3;

Then you only have to wait for one rebalance, which makes this alternative a much quicker approach.
An additional tip to speed up the rebalance is to raise the rebalance power (the REBALANCE POWER clause, or the ASM_POWER_LIMIT parameter).
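A sketch of the combined swap with an explicit rebalance power, plus a query to watch progress (diskgroup and disk names follow the examples above; the power value 8 is an arbitrary choice):

```sql
-- Add new disks and drop old ones in one command, so ASM performs
-- a single rebalance instead of two.  DROP DISK takes the ASM disk
-- name (V$ASM_DISK.NAME), not the OS device path.
alter diskgroup dg1
  add disk '/dev/newdisk1', '/dev/newdisk2', '/dev/newdisk3'
  drop disk olddisk1, olddisk2, olddisk3
  rebalance power 8;

-- Watch the rebalance; the row disappears from this view (and the
-- old disks from V$ASM_DISK) once the operation is finished.
select group_number, operation, state, power, sofar, est_work, est_minutes
  from v$asm_operation;
```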

I hope this helps people involved in disk management with ASM and Oracle.

Thursday, January 7, 2010

alter index indexname rebuild parallel 4

Usually, when we rebuild an index, especially on a well-resourced server, we tend to do it in parallel, often with a degree of 4 or 8.

i.e., alter index owner.index_name rebuild parallel 4;
or    alter index owner.index_name rebuild parallel 8;

The above statement may have rebuilt the index faster, but it leaves a hidden problem behind.

It sets the index's parallelism attribute to a higher degree than the table's.
When queries then run in parallel, this mismatch often shows up as high waits on the "PX Deq Credit: send blkd" event.

So the best advice is to restore the index's previous parallel degree after having used a higher degree of parallelism for the rebuild.
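A sketch of rebuilding in parallel and then resetting the degree afterwards (owner and index names are placeholders, as in the examples above):

```sql
-- Rebuild fast using 8 parallel servers
alter index owner.index_name rebuild parallel 8;

-- The rebuild leaves DEGREE = 8 on the index; reset it so the
-- optimizer does not over-parallelize later queries.
alter index owner.index_name noparallel;
-- or restore a specific prior value:
-- alter index owner.index_name parallel 1;

-- Verify the stored degree
select index_name, degree
  from dba_indexes
 where owner = 'OWNER'
   and index_name = 'INDEX_NAME';
```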

Wednesday, January 6, 2010

Dashboards are Interesting

I came across this interesting article: Source

Kaleida Health uses OBIEE as a "Flu Dashboard", and interestingly it saved them a lot of time in getting the right data at the right time.

Sometimes dashboards are so useful that they make life simpler.

At work I use Oracle Application Express to build dashboards. One such dashboard uses traffic lights to indicate the status of the data load for different regions; it converts all the complex queries into meaningful symbols, everything in one dashboard.

Now the whole team uses the APEX dashboard to check the status of the jobs. We also have dashboards for other purposes, such as handling errors in the data load and monitoring Discoverer and Portal usage.

Monday, January 4, 2010

OWB 11g and External Tables

I am working on an OWB 11g migration project, and we wanted to create an external table using OWB 11g. The external table has a column with a string data type, so we expected this column to be created as VARCHAR2(X CHAR); however, OWB 11g creates it as VARCHAR2(X BYTE). We did not face this issue with OWB 10.1.

The solution is to set the init.ora parameter NLS_LENGTH_SEMANTICS to CHAR and restart the database.
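For reference, a sketch of the change on an spfile-managed instance (the commands assume a SQL*Plus session with SYSDBA privileges):

```sql
-- Default new string columns to character semantics instead of bytes
alter system set nls_length_semantics = 'CHAR' scope = spfile;

-- Bounce the instance so the new default applies everywhere
shutdown immediate
startup

-- Verify
show parameter nls_length_semantics
```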
