Monday, December 27, 2010

Oracle Streams Archived-Log Downstream Capture

The parameter DOWNSTREAM_REAL_TIME_MINE controls whether a downstream capture process performs real-time downstream capture or archived-log downstream capture.


If the parameter is set to Y when configuring the capture process using the procedure below, then the capture process performs real-time downstream capture, as described here.



BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'real_time_capture',
    parameter    => 'downstream_real_time_mine',
    value        => 'Y');
END;
/


If the parameter is set to N, then instead of the redo log, the archived logs are transported to and mined at the downstream site.
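For completeness, here is a minimal sketch of switching the same capture process to archived-log mining, assuming the capture is still named real_time_capture as in the example above:

BEGIN
  -- switch the downstream capture to archived-log mining
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'real_time_capture',
    parameter    => 'downstream_real_time_mine',
    value        => 'N');
END;
/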




In archived-log downstream capture, the archived redo logs have to be transported to the downstream database using redo transport services, the DBMS_FILE_TRANSFER package, file transfer protocol (FTP), or some other mechanism. Once the archived redo logs are available at the downstream site, they are mined and applied as per the configured setup.
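If DBMS_FILE_TRANSFER is used for the transport, the call looks roughly like the sketch below; the directory objects SRC_ARCH_DIR and DST_ARCH_DIR, the archived log file name and the database link DOWNSTREAM_DB are assumptions for illustration only.

BEGIN
  -- copy one archived log from the source host to the downstream host
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'SRC_ARCH_DIR',
    source_file_name             => 'arch_1_1234_987654321.log',
    destination_directory_object => 'DST_ARCH_DIR',
    destination_file_name        => 'arch_1_1234_987654321.log',
    destination_database         => 'DOWNSTREAM_DB');
END;
/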

Monday, December 20, 2010

Schema Replication by Oracle Streams Real-Time Downstream

Recently I had a requirement to configure a replication environment that could keep a schema in sync.
The requirement was not to overload the production system at any point.
I thought about Oracle Active Data Guard, but the target database will also be used for updates, so I had to rule that option out.

After researching the replication options Oracle offers, the best fit turned out to be the Downstream setup.
Very simple and very promising to me.

A picture is better than a detailed explanation

Here is a simple overview of what has to be done and how (a rough sketch of some of these calls follows the list):
- Create a Streams administrator user on both databases.
- Create a database link between the two databases.
- Set the parameters for downstream archiving.
- Create standby redo logs on the downstream site to receive redo data from the source.
- Put the source and downstream databases in archivelog mode.
- Set up the source schema.
- Create the Streams queue on the downstream site.
- Create the apply process at the downstream site.
- Create the capture process at the downstream site.
- Set the capture process for real-time capturing of changes.
- Add rules to instruct the capture process what to capture.
- Instantiate the objects to be replicated.
- Start the capture process.
- Start the apply process.
(Detailed implementation can be found at Oracle Note 753158.1)
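As promised above, here is a minimal sketch of the downstream-side calls, run as the Streams administrator. The queue, capture, apply and schema names (streams_queue, real_time_capture, apply_schema, SRC.WORLD, SCOTT) are assumptions for illustration; the Oracle note remains the authoritative reference.

BEGIN
  -- queue used by both capture and apply on the downstream database
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_queue_table',
    queue_name  => 'streams_queue');

  -- downstream capture mining redo received from the source database
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name         => 'streams_queue',
    capture_name       => 'real_time_capture',
    source_database    => 'SRC.WORLD',
    use_database_link  => TRUE,
    logfile_assignment => 'implicit');

  -- capture only the SCOTT schema (DML and DDL)
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'SCOTT',
    streams_type    => 'capture',
    streams_name    => 'real_time_capture',
    queue_name      => 'streams_queue',
    include_dml     => TRUE,
    include_ddl     => TRUE,
    source_database => 'SRC.WORLD');

  -- apply process that consumes the captured changes locally
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'streams_queue',
    apply_name     => 'apply_schema',
    apply_captured => TRUE);

  -- switch the capture to real-time mining of the standby redo logs
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'real_time_capture',
    parameter    => 'downstream_real_time_mine',
    value        => 'Y');

  DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_schema');
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'real_time_capture');
END;
/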

Wednesday, December 15, 2010

Installing - OBIEE 11g

After completing the installation of RCU, the following can be started; RCU is a prerequisite for the OBIEE installation.

A bright splash screen !!!


The welcome screen from the installer.


I opted for the Simple Install so that it configures everything with default settings.


Pre-check: here I am warned about a missing RPM and kernel settings. I installed the RPM and tuned the kernel settings.


The administrator username and password. We can also choose any name for the administrator username other than the default weblogic.


The Oracle Home for the installation has to be selected. Before selecting, make sure that you have sufficient space; you can also refer to the installation document for the minimum space required.


The list of components that will be installed as a part of this installation.


Here specify the details of the database where the repository has been created. 


Here specify the My Oracle Support email address and password to get notified about security updates.


Pre-Install Summary for review


After clicking the Install button, the installation process starts.




The final screen of a successful installation with the details of the Oracle Home and the URLs of the application.



With this we have completed a healthy installation of OBIEE 11g.

Tuesday, December 14, 2010

Repository Creation Utility - OBIEE 11g

RCU is short for Repository Creation Utility; it is used to create the database schemas for Fusion Middleware 11g.

The first screen welcomes you !!!

This screen provides an option to create or drop the FMW schemas. Here I have selected the Create option so that I can create the schema for OBIEE.


This screen helps to specify the database and its details where the repository will be created. It also provides an option to install the repository on a RAC database.


Next the RCU validates the database. Here I got a warning message that the database does not use the AL32UTF8 character set. I opted to ignore it, but it is recommended to create the database with the AL32UTF8 character set.
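If you want to know the character set up front, a quick query like this against the target database tells you what it is before RCU warns you:

SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';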


The below screen lists a successful pre-check.


Next I gave a prefix and selected "Business Intelligence Platform"; the dependencies are selected automatically.


Prerequisites relevant to BI are checked and verified in the below screen.


Here we have to input the passwords for the schemas; I opted for a common password.


Review screen for the tablespaces.


The below screen shows that the tablespaces are created and validated.


With all the inputs provided, the summary page is displayed. After verifying it and clicking the Create button, the necessary schemas are created in the target database.


Confirmation of the successful creation of the repository for OBIEE.


Looks good; RCU was simple and straightforward for me.
The next task is to install OBIEE.

Monday, December 13, 2010

OBIEE 11g Getting Started

After a long gap I am looking into OBIEE 11g. After completing Datacenter Migration Project and Database 11g Upgrade Project, I am now set to go for Fusion Middleware 11g Upgrade early next year.

But this December it is OBIEE 11g.
Early this year I created an OBIEE 10g environment with BI Publisher 10g. But I see that there have been a lot (a lot) of enhancements in OBIEE 11g.

As preparation I created the database that is required for the repository. In OBIEE 11g we have to build a repository; I am building it on an 11g R1 database. I cloned an Oracle Home and created a database on it.

Oracle Home cloning is pretty simple.
1. Copy a healthy Oracle Home
2. Source the environment variables
3. Run the clone scripts
perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_BASE=$ORACLE_BASE ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=test11_1

In three simple steps the Oracle Home is ready, and now it is time for DBCA to take care of the rest and create the database.

In the meantime I have downloaded the following:
- Repository Creation Utility.
- The software necessary for OBIEE 11g.

You can find more details here (this link might change in the future).

Will keep posting on the developments.



Thursday, August 5, 2010

11g Upgrade Failed with Incorrect Detection of Vault


DBUA reported a Database Vault error on a database where I had not configured Oracle Database Vault, while upgrading from 10.2.0.4 to 11.1.0.7.

Here is a workaround that will enable you to get rid of this error.
Please note that this applies only if you have verified that Oracle Database Vault is not enabled.

To verify
Check whether Database Vault is enabled or not. Database Vault is disabled if the following is true:
$ cd $ORACLE_HOME/rdbms/lib
$ ar -t libknlopt.a | grep dv
kzvndv.o <<<<<< Shows DV is disabled

In my case the above was verified and I did not have Vault.

Workaround
1. Cleanly shutdown and take complete cold backup of the 10.2 database
2. Startup (in upgrade mode) and shutdown the 10.2 database (prior to upgrade) using 11.1 binaries
-- login to 11.1 Oracle Home environment
% export ORACLE_SID=<10.2 database SID>
% sqlplus / as sysdba
SQL> startup upgrade pfile="<10.2 database pfile>"
SQL> select * from v$option where PARAMETER='Oracle Database Vault';
-- ensure that DV option is FALSE
SQL> shutdown immediate

3. Now retry the DBUA
% dbua

Once the database has been opened (in upgrade mode) and closed using the 11.1 binaries, DBUA stops showing this error.

Note: The above action plan was provided by Oracle Support. So before executing it please cross check with Support.

Monday, July 26, 2010

RMAN Backup Monitoring

Here is a simple query to monitor the RMAN progress.

select
 SID||'-'||OPNAME SID_OPNAME,
 100-round((sofar/totalwork) * 100,2) pct_left
from 
   v$session_longops
where 
   totalwork > sofar
AND 
   opname NOT LIKE '%aggregate%'
AND 
   opname like 'RMAN%';

SID_OPNAME                             PCT_LEFT               
-------------------------------------- --------
3119-RMAN: incremental datafile backup 79.77                  
3135-RMAN: incremental datafile backup 79.87                  
3123-RMAN: incremental datafile backup 79.79                  
3143-RMAN: incremental datafile backup 79.92                  
3127-RMAN: incremental datafile backup 79.82                  
3159-RMAN: incremental datafile backup 78.28                  
3120-RMAN: incremental datafile backup 79.91                  
3136-RMAN: incremental datafile backup 78.16                  
3146-RMAN: incremental datafile backup 79.86                  
3183-RMAN: incremental datafile backup 85.29                  
3152-RMAN: incremental datafile backup 78.10
3151-RMAN: incremental datafile backup 79.74                  
3111-RMAN: incremental datafile backup 80.00                     
3124-RMAN: incremental datafile backup 79.74                  
3139-RMAN: incremental datafile backup 79.68                  

15 rows selected
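A variation I also find handy adds the time columns from the same view, so you can see roughly how long each backup piece has left (column names as in v$session_longops):

select
 SID||'-'||OPNAME SID_OPNAME,
 round((sofar/totalwork) * 100,2) pct_done,
 round(time_remaining/60,1) minutes_left,
 round(elapsed_seconds/60,1) minutes_elapsed
from
   v$session_longops
where
   totalwork > sofar
AND
   opname NOT LIKE '%aggregate%'
AND
   opname like 'RMAN%';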

Monday, July 19, 2010

UNSUPPORTED way to delete scheduled workbook in Discoverer

This is a purely UNSUPPORTED way to delete scheduled workbooks in Discoverer.
It needs testing on your side, and please do it with a proper backup.

I have been recently working on a datacenter migration project that involved Discoverer.
After the migration, some users who had a large number of Discoverer scheduled workbooks had issues when clicking on Manage Scheduled Workbooks.

Per the documentation, when a Discoverer administrator in the Admin edition sets the scheduled results to be deleted, then the next time the user logs in and logs out, the workbook and its associated database objects get deleted automatically.

Unfortunately this did not work for two of my users because of the sheer number of their schedules.

Here is the backend work that I did.


SQL> DELETE FROM EUL5_BATCH_PARAMS WHERE BP_CREATED_BY='TESTUSER';
SQL> DELETE FROM EUL5_BQ_DEPS WHERE BQD_CREATED_BY='TESTUSER';
SQL> DELETE FROM EUL5_BQ_TABLES WHERE BQT_CREATED_BY='TESTUSER';
SQL> DELETE FROM EUL5_BATCH_QUERIES WHERE BQ_CREATED_BY='TESTUSER';
SQL> DELETE FROM EUL5_BATCH_PARAMS WHERE BP_CREATED_BY='TESTUSER';
SQL> DELETE FROM EUL5_BATCH_SHEETS WHERE BS_CREATED_BY='TESTUSER';
SQL> DELETE FROM EUL5_BATCH_QUERIES WHERE BQ_CREATED_BY='TESTUSER';
SQL> DELETE FROM EUL5_BR_RUNS WHERE BRR_CREATED_BY='TESTUSER';
SQL> DELETE FROM EUL5_BATCH_REPORTS WHERE BR_CREATED_BY='TESTUSER';
SQL> COMMIT;

Scripts to generate the drop statements for the associated objects
SQL> SELECT LOWER('DROP PACKAGE TESTUSER.'||OBJECT_NAME||';')CMD FROM DBA_OBJECTS WHERE OWNER='TESTUSER' AND OBJECT_TYPE='PACKAGE' ;
SQL> SELECT LOWER('DROP TABLE TESTUSER.'||OBJECT_NAME||';')CMD FROM DBA_OBJECTS WHERE OWNER='TESTUSER' AND OBJECT_NAME LIKE 'EUL5_B%' AND OBJECT_TYPE='TABLE' ;

I hope this will be used with proper testing.

Friday, July 16, 2010

Invisible Index in 11g

11g has many new features and one such feature is the invisible index. It saved my day today.
There was a warehouse job having performance problems since the 11g upgrade.

My test database was down for an outage, so I had to tune directly on the production database.
I knew that creating an index would resolve the issue, but since introducing the index could cause other problems, I had to create it in an invisible state.

create index index_for_testing on table_name(column_name1,column_name2) invisible;

In the above statement the keyword invisible means the index is not yet considered by the optimizer.
The session has to be told to use invisible indexes, and then it will start to use the index.

SQL> alter session set optimizer_use_invisible_indexes=true;
SQL> exec start_job;

The index is now used only in that session and for that run.
After proper testing I will make the index visible with the syntax below:

SQL> alter index index_for_testing visible;
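To double-check the state afterwards, the visibility can be read from the data dictionary; a quick query I would run (dba_indexes has a VISIBILITY column in 11g):

SQL> select index_name, visibility from dba_indexes where index_name = 'INDEX_FOR_TESTING';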

I hope this tip will be helpful when it comes to creating indexes directly in a production environment.

Thursday, July 15, 2010

Start to blog after a long break

Dear friends

I have started to blog after a long break.
Was busy with a datacenter migration project.

I hope you all will continue to support me.

Senthil

Monday, March 1, 2010

Oracle[Sun] Virtual Box on Oracle Enterprise Linux

I came across an issue when installing Oracle[Sun] VirtualBox on Oracle Enterprise Linux.

[root@skrajend Desktop]# ls
VirtualBox-3.1.4-57640-Linux_x86.run


[root@skrajend Desktop]# ./VirtualBox-3.1.4-57640-Linux_x86.run
Verifying archive integrity... All good.
Uncompressing VirtualBox for Linux installation.........
VirtualBox Version 3.1.4 r57640 (2010-02-12T19:32:16Z) installer
Please install the build and header files for your current Linux kernel.
The current kernel version is 2.6.18-164.el5
Problems were found which would prevent VirtualBox from installing.
Please correct these problems and try again.
[root@skrajend Desktop]#



Solution
[root@skrajend rpm]# rpm -ivh kernel-devel-2.6.18-164.el5.i686.rpm
Preparing...                ########################################### [100%]
   1:kernel-devel           ########################################### [100%]
[root@skrajend rpm]#



[root@skrajend Desktop]# ./VirtualBox-3.1.4-57640-Linux_x86.run
Verifying archive integrity... All good.
Uncompressing VirtualBox for Linux installation.........
VirtualBox Version 3.1.4 r57640 (2010-02-12T19:32:16Z) installer
Installing VirtualBox to /opt/VirtualBox
Python found: python, installing bindings...
Building the VirtualBox vboxdrv kernel module
Building the VirtualBox netflt kernel module
Building the VirtualBox netadp kernel module

VirtualBox has been installed successfully.

You will find useful information about using VirtualBox in the user manual
  /opt/VirtualBox/UserManual.pdf
and in the user FAQ
  http://www.virtualbox.org/wiki/User_FAQ

We hope that you enjoy using VirtualBox.

[root@skrajend Desktop]#


Monday, February 8, 2010

Sample Dataset for Oracle

I was recently working on a project where there was a demand for a random set of sales and transaction data to test performance. I looked for a better solution and it worked out with this website: http://www.generatedata.com

Interestingly, the dataset can be generated directly on the website itself, and the maximum number of rows that can be generated is 5000.

The output can be in different formats such as Excel, CSV, HTML, XML and SQL.
SQL was the best part; the page gives an option to select the database type and the table name.
By default it creates a script with the CREATE TABLE statement and then the INSERT statements.

I hope this will help people who need a sample dataset.

Thursday, January 14, 2010

RMAN Slow Performance and Resource Manager

RMAN is running slow; well, there could be several reasons behind it: the load on the system, bandwidth, CPU, the tape library's performance and many other aspects. One area worth looking into is Resource Manager.

Resource Manager is a key tool when it comes to complex environment where resources should be managed properly.

The SYS and SYSTEM users are sometimes missed when planning for Resource Manager, especially the SYS user.

Some tips while creating the Resource Manager plan:
Create a separate group called SYS_GROUP and map the SYS and SYSTEM users to it.
When creating a plan, make sure that the SYS_GROUP is at Level 1 with 100%.
Do not allocate resources to any other group at Level 1.
Make sure that all of the following directives are set to UNLIMITED:
  • Maximum Degree of Parallelism
  • Maximum Number of Active Sessions Activation Queue Timeout (sec)
  • Maximum Undo Space (KB)
  • Maximum Estimated Execution Time (sec)
  • Max Idle Time (sec)
  • Max Idle Time if Blocking Another Session (sec) 

With the above setup in Resource Manager, RMAN will get 100% of the resources since it runs under the SYS_GROUP. A rough sketch of such a plan follows.
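Here is a minimal sketch with DBMS_RESOURCE_MANAGER, assuming the plan name RMAN_FRIENDLY_PLAN (an example name) and the default SYS_GROUP consumer group; directives not specified default to unlimited:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'RMAN_FRIENDLY_PLAN',
    comment => 'SYS_GROUP gets all of level 1');

  -- SYS_GROUP (SYS, SYSTEM and therefore RMAN) alone at level 1 with 100%
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'RMAN_FRIENDLY_PLAN',
    group_or_subplan => 'SYS_GROUP',
    comment          => 'all of level 1',
    mgmt_p1          => 100);   -- on 10g use cpu_p1 instead

  -- every plan needs a directive for OTHER_GROUPS; keep it at level 2
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'RMAN_FRIENDLY_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'everything else at level 2',
    mgmt_p2          => 100);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

The plan still has to be activated, for example with alter system set resource_manager_plan='RMAN_FRIENDLY_PLAN';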


Tuesday, January 12, 2010

Compressing Existing Tables

We back up tables many times and may never worry about them for months or even years. In a warehouse environment, backup tables are common, as they are created before making any changes such as a business re-org, a code change and so on.

As DBAs we should understand where to place these tables and how to optimize their storage. Some tables in my environment are 50GB and they are backed up every month before running a critical process. The backup then stays for a year.

Some tips to handle such cases:
Some backup tables are never used, so I would suggest exporting them with the EXPDP utility and dropping them from the schema.
Some backup tables are used by developers, and to address that there are various options:
- Create a new disk group with cheaper devices and create a tablespace on it. Use the tablespace clause in the create table command to place the backup tables there.
   ex : create table sh.employee_history_bkup_jan2010 tablespace tablebackup as select * from sh.employee_history;
- Use the compress option when creating such tables.
   ex : create table sh.employee_history_bkup_jan2010 tablespace tablebackup compress as select * from sh.employee_history;
- If you have already backed up the table without compression, then use alter table move with the compress option.
   ex : alter table sh.employee_history_bkup_jan2010 move compress;

Note : When creating these backup tables, make use of the nologging option to minimize redo and archived log generation.
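To verify the result, the compression flag and the segment size can be checked from the dictionary; a quick sketch, with the table name taken from the examples above:

SQL> select table_name, compression from dba_tables where table_name = 'EMPLOYEE_HISTORY_BKUP_JAN2010';
SQL> select segment_name, round(bytes/1024/1024) size_mb from dba_segments where segment_name = 'EMPLOYEE_HISTORY_BKUP_JAN2010';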

I hope this helps people in warehousing environments.

Sunday, January 10, 2010

ASM Adding and Removing Disks

I have been using ASM for quite a long time, and one maintenance task that keeps me busy these days is replacing some of the disks used by ASM.
To do so, some disks have to be removed from the existing diskgroup and the newly added disks have to be used instead.

Steps Involved
Step 1: Add the new devices to the hardware.
Step 2: Update asm_diskstring with the new device path.
Step 3: Add the new devices to the existing diskgroup.
alter diskgroup dg1 add disk '/dev/newdisk1','/dev/newdisk2','/dev/newdisk3';
Step 4: Wait for ASM to rebalance the data.
This rebalance redistributes the data onto the new storage, but not all of it ends up there yet.
Step 5: Remove the old devices from ASM.
alter diskgroup dg1 drop disk '/dev/olddisk1','/dev/olddisk2','/dev/olddisk3';
Step 6: Wait for ASM to rebalance the data.
This rebalance moves the remaining data off the old disks and fully onto the new storage.

The above process can still be made quicker.
As an alternative, you can avoid the intermediate rebalance in step 4 by adding and dropping the disks in a single alter diskgroup command; ASM will take care of it internally.

Example:
alter diskgroup dg1 add disk '/dev/newdisk1','/dev/newdisk2','/dev/newdisk3' drop disk '/dev/olddisk1','/dev/olddisk2','/dev/olddisk3';

Then you only have to wait for one rebalance, which makes this second alternative a much quicker approach.
An additional tip to speed up the rebalance is to raise the rebalance power of the diskgroup.
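For example, a quick sketch of raising the power and watching the rebalance progress (diskgroup name as in the examples above; v$asm_operation shows the estimated minutes remaining):

alter diskgroup dg1 rebalance power 8;
select group_number, operation, state, power, sofar, est_work, est_minutes from v$asm_operation;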

I hope this helps people involved in disk management with ASM and Oracle.

Thursday, January 7, 2010

alter index indexname rebuild parallel 4

Usually when we rebuild an index, especially on a server with plenty of resources, we tend to do it in parallel, sometimes with a degree of 4 or 8.

i.e., alter index owner.index_name rebuild parallel 4;
or    alter index owner.index_name rebuild parallel 8;

The above statement might have rebuilt the index faster, but it leaves behind a hidden problem.

It sets the parallel degree of the index higher than that of the table.
The wait event "PX DEQ CREDIT SEND BLKD" is usually high because of this issue when the query runs in parallel.

So the best advice would be to restore the old parallel degree of the index after having used a higher degree of parallelism for the rebuild, as in the sketch below.
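A quick way to check the current degree and reset it afterwards; the value to restore (here noparallel, i.e. degree 1) is just an example, use whatever the index had before the rebuild:

SQL> select degree from dba_indexes where owner = 'OWNER' and index_name = 'INDEX_NAME';
SQL> alter index owner.index_name noparallel;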


Wednesday, January 6, 2010

Dashboards are Interesting

I came across this interesting article (source: searchoracle.techtarget.com).

OBIEE is used by Kaleida Health as a "Flu Dashboard", and interestingly they were able to save a lot of time by getting the data at the right time.

Sometimes dashboards are so useful that they make life simpler.

At work I use Oracle Application Express to build dashboards. One such dashboard uses traffic lights to indicate the status of the data load for different regions. It converts all the complex queries into meaningful symbols, with everything in one dashboard.


Now the dashboard in APEX is used by the whole team to check the status of the jobs. We also have dashboards for other purposes, such as handling errors in the data load and monitoring Discoverer and Portal usage.

Monday, January 4, 2010

OWB 11g and External Tables

I am working on an OWB 11g migration project and we wanted to create an external table using OWB 11g. The external table has a column with a string data type, so we expected this column to be created as VARCHAR2(X CHAR); however, it is created as VARCHAR2(X BYTE) in OWB 11g. We did not face this issue with OWB 10.1.

The solution is to set the init.ora parameter NLS_LENGTH_SEMANTICS to CHAR and restart the database.
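For reference, on an spfile-managed database the change would look roughly like this, followed by a restart:

SQL> alter system set nls_length_semantics='CHAR' scope=spfile;
SQL> shutdown immediate
SQL> startup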
