Thursday, May 13, 2021

How to find the Package/Procedure/View associated with a Web ADI Integrator

Query to get WebADI Details in Oracle APPS




At times we want to know the name of the package/procedure/view associated with a Web ADI integrator. I struggled to get this information for a long time and finally got the answer; I'm posting it here for easy reference.

Query to find the view name:

SELECT bct.user_name
FROM   apps.bne_contents_tl bct,
       apps.bne_contents_b bcb,
       apps.bne_integrators_tl bit
WHERE  bit.integrator_code = bcb.integrator_code
AND    bcb.content_code = bct.content_code
AND    bit.user_name = '&integrator_user_name';

Query to find the package/procedure associated with a Web ADI interface (for UPDATE/UPLOAD metadata-type Web ADIs):

select ba.attribute2
from   apps.bne_attributes ba,
       apps.bne_param_lists_b bplb,
       apps.bne_interfaces_b bib,
       apps.bne_integrators_tl bit
where  bib.upload_param_list_code = bplb.param_list_code
and    bib.integrator_code = bit.integrator_code
and    ba.attribute_code = bplb.param_list_code || '_ATT'
and    bit.user_name = '&integrator_user_name';




 

Wednesday, May 12, 2021

How to enable trace for a USER session

 

User level

‘Initialization SQL Statement – Custom’ ( FND_INIT_SQL )
==============================================

BASICS
======
Oracle Applications 11i provides a profile option that allows 'custom' code to be executed at the beginning of every database session. These sessions can be linked to an online form, a concurrent program, or any other piece of code that accesses the database.
The most common use of this profile option is to generate detailed raw SQL trace files that include the values of the bind variables used by SQL statements. These trace files can also be used to report on all database waits and to determine gaps between elapsed and CPU times.



COMMAND
=========

1) Using the System Administrator responsibility, navigate to the System Profile Values form (Profile ➨ System from the Navigator). Query the profile option ‘Initialization SQL Statement – Custom’ for user <username> and update the value with the string below:

BEGIN FND_CTL.FND_SESS_CTL ('','','TRUE','TRUE','LOG','ALTER SESSION SET EVENTS=''10046 TRACE NAME CONTEXT FOREVER, LEVEL 12'''); END;


BEGIN FND_CTL.FND_SESS_CTL('','', '', 'TRUE','','ALTER SESSION SET TRACEFILE_IDENTIFIER='||''''||'AVIADE' ||''''||' EVENTS ='||''''||' 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12 '||''''); END;

** Just replace AVIADE with the user name you are enabling trace for. 


2) Ask the user to log in. As soon as the user logs in, every database session opened by this user will generate a raw SQL trace file with the detailed information produced by event 10046.



IMPORTANT POINTS
=================
> All quotes used in the FND_CTL.FND_SESS_CTL call are single quotes. Where it looks like a double quote, it is actually two consecutive single quotes. After LEVEL 12 there are three single quotes. The exact syntax is very important.
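As a sanity check, you can preview how the doubled quotes resolve before pasting the string into the profile; a minimal sketch (any SQL*Plus session will do):

```sql
-- Each pair of single quotes inside a string literal resolves to one quote.
-- This prints the statement that FND_SESS_CTL will eventually execute:
SELECT 'ALTER SESSION SET EVENTS=''10046 TRACE NAME CONTEXT FOREVER, LEVEL 12'''
       AS resolved_statement
FROM dual;
-- ALTER SESSION SET EVENTS='10046 TRACE NAME CONTEXT FOREVER, LEVEL 12'
```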

> Event 10046 level 1 is a regular trace, level 4 adds bind variables, level 8 adds database waits, and level 12 includes both bind variables and database waits.
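For reference, when you can reach the session directly (for example, your own SQL*Plus session), the same event can be set at each level with ALTER SESSION; a sketch:

```sql
ALTER SESSION SET EVENTS '10046 trace name context forever, level 1';  -- regular trace
ALTER SESSION SET EVENTS '10046 trace name context forever, level 4';  -- + bind variables
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';  -- + database waits
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12'; -- binds + waits
ALTER SESSION SET EVENTS '10046 trace name context off';               -- stop tracing
```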








=======================================================



How to enable trace for a USER session

I was asked to examine a performance issue in one of our CRM application screens, after some users complained about a specific long-running action.

First, I tried to enable trace for the CRM session, but it turned out that it's definitely not simple to identify a CRM session, especially in my case, where one application session opens two (sometimes more) database sessions. It's actually quite impossible.

So how is it possible to trace those CRM sessions anyway?

Oracle provides an option to execute custom code for every session opened in the database, through a system profile. This profile is called “Initialization SQL Statement - Custom” (the short name is 'FND_INIT_SQL') and it allows custom SQL/PL*SQL code to be executed.

Once this profile is set at user level, each session opened for this user will first execute the code within the profile. No matter which type of activity the user does – Forms, CRM, a concurrent request, or anything else that opens a database session – the content of this profile will be executed.

So clearly, we can use this capability to enable trace for a user's sessions.

Steps to enable trace for specific user:

  1. Login with “Application Developer” responsibility
  2. Open the “Create Profile” form –> Query the profile “FND_INIT_SQL”
  3. Make sure that “Visible” and “Updatable” are checked at user level. 

     
  4. Switch responsibility to “System Administrator”
  5. Navigate to Profile –> System –> Query the profile “Initialization SQL Statement - Custom” at user level for the user we would like to enable trace for. 

     
  6. Update the profile option value at user level to the following: 

    BEGIN FND_CTL.FND_SESS_CTL('','', '', 'TRUE','','ALTER SESSION SET TRACEFILE_IDENTIFIER='||''''||'AVIADE' ||''''||' EVENTS ='||''''||' 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12 '||''''); END;
        
    ** Just replace AVIADE with the user name you are enabling trace for. 

      
  7. Now, after the user you enabled trace for logs out of the application, they can log in again and reproduce the issue. 
     
  8. When you finish reproducing the issue, disable the trace by clearing the profile option value and updating it to NULL (the “Initialization SQL Statement – Custom” profile, of course..). 
  9. The trace file(s) will be waiting for you in the udump directory (the user_dump_dest init parameter). 
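If you are not sure where udump points on a given instance, the parameter can be queried directly; a minimal sketch (assumes SELECT access on v$parameter):

```sql
-- Directory where user trace files are written
select name, value from v$parameter where name = 'user_dump_dest';
```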

Since I enabled and disabled the trace quite a few times while investigating my performance issue, I wrote these handy little programs, which enable and disable trace for a user quickly and easily.

Execute this program to enable trace for a specific user (substitutes for step 6 above):

DECLARE 
  l_ret     boolean; 
  l_user_id number; 
BEGIN 

  select user_id 
    into l_user_id 
    from fnd_user 
   where user_name = '&&USER_NAME';

  l_ret := fnd_profile.SAVE(X_NAME        => 'FND_INIT_SQL', 
                            X_VALUE       => 'BEGIN FND_CTL.FND_SESS_CTL('''','''','''', ''TRUE'','''',''ALTER SESSION SET TRACEFILE_IDENTIFIER=''||''''''''||''&&USER_NAME'' ||''''''''||'' EVENTS =''||''''''''||'' 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12 ''||''''''''); END;', 
                            X_LEVEL_NAME  => 'USER', 
                            X_LEVEL_VALUE => l_user_id); 
  commit;

  dbms_output.put_line('Profile has updated successfully');

EXCEPTION 
  when others then 
    dbms_output.put_line('Failed to update the profile: '||sqlerrm); 
END;

 

Execute this program to disable trace for a specific user (substitutes for step 8 above):

DECLARE 
  l_ret     boolean; 
  l_user_id number; 
BEGIN

  select user_id 
    into l_user_id 
    from fnd_user 
   where user_name = '&USER_NAME';

  l_ret := fnd_profile.DELETE(X_NAME        => 'FND_INIT_SQL', 
                              X_LEVEL_NAME  => 'USER', 
                              X_LEVEL_VALUE => l_user_id); 
  commit;

  dbms_output.put_line('Profile has erased successfully');

EXCEPTION 
  when others then 
    dbms_output.put_line('Failed to erase the profile: '||sqlerrm); 
END;

Scripts to Monitor and Tune an Oracle Database

 




Create a profile with the best plan -
1. alter session set nls_date_format = 'DD/MM/YY HH24:MI:SS';
2. select sql_id, plan_hash_value, timestamp, sum(cpu_cost), sum(io_cost), sum(cost) from DBA_HIST_SQL_PLAN where sql_id='&SQL_ID' group by sql_id, timestamp, plan_hash_value;
3. select ADDRESS, HASH_VALUE from GV$SQLAREA where SQL_ID like '&SQL_ID'; 
4. exec DBMS_SHARED_POOL.PURGE ('&addr, &hashvalue', 'C'); -- provide the address and hash value from the query above 
Profiling - 
5. Zap the query with the attached zap.sql, with parameters SQL_ID and plan hash value
6. Execute coe_xfr_sql_profile_&&sql_id._&&plan_hash_value..sql and then flush the shared pool:
7. exec DBMS_SHARED_POOL.PURGE ('&addr, &hashvalue', 'C'); -- provide the address and hash value from the query above
8. Drop a profile - BEGIN DBMS_SQLTUNE.DROP_SQL_PROFILE(NAME=>'coe_SQLID_HASH'); END;
9. Disable a profile - EXEC DBMS_SQLTUNE.ALTER_SQL_PROFILE (name=>:pname, attribute_name=>'STATUS', value=>'DISABLED');
10. Enable a profile - EXEC DBMS_SQLTUNE.ALTER_SQL_PROFILE (name=>:pname, attribute_name=>'STATUS', value=>'ENABLED');
11. Check all profiles created - 
select NAME, SIGNATURE, SQL_TEXT, FORCE_MATCHING from dba_sql_profiles where NAME='&SQL_Profile_name';

select trim(to_char(sysdate, 'Day'))||to_char(sysdate, ' dd ')||trim(to_char(sysdate, 'Month'))||to_char(sysdate, ' yyyy') from dual;

Number of Rows in a table from old stats -
select n.object_name as Obj_Name,o.ROWCNT,o.BLKCNT,to_char(o.SAVTIME,'dd/mm/yyyy HH:MI') as time from dba_objects n,WRI$_OPTSTAT_TAB_HISTORY o where o.obj#=n.object_id and n.object_name='&OBJECT_NAME';

LOB details - 
select OWNER, TABLE_NAME from dba_lobs where SEGMENT_NAME = '&LOB';

No of execution details from AWR - 
col begin_time for a25
col end_time for a11
col inst for 99999
col snapid for 999999
set lines 200
set pages 20000
select snap_id snapid,
(select substr(BEGIN_INTERVAL_TIME,1,18)||' '||substr(BEGIN_INTERVAL_TIME,24,2) from dba_hist_snapshot b where b.snap_id=a.snap_id and
a.INSTANCE_NUMBER=b.INSTANCE_NUMBER) begin_time
,(select substr(end_INTERVAL_TIME,11,8)||' '||substr(end_INTERVAL_TIME,24,2) from dba_hist_snapshot b where b.snap_id=a.snap_id and
a.INSTANCE_NUMBER=b.INSTANCE_NUMBER) end_time
,INSTANCE_NUMBER inst , PLAN_HASH_VALUE,
EXECUTIONS_DELTA Executions,
ROWS_PROCESSED_DELTA rows1,
round( CPU_TIME_DELTA /1000000,0) cpu_time,round(IOWAIT_DELTA /1000000,0) io_wait,
round( ELAPSED_TIME_DELTA /1000000,0) elapsed
from wrh$_sqlstat a where sql_id in('&SQL_ID')
order by snap_id, INSTANCE_NUMBER;

Execution time of SQL from AWR-
column program format a30
SELECT query_runs.*,
ROUND ( (end_time - start_time) * 24, 2) AS duration_hrs
FROM ( SELECT u.username,
ash.program,
ash.sql_id,
ASH.SQL_PLAN_HASH_VALUE as plan_hash_value,
ASH.SESSION_ID as sess#,
ASH.SESSION_SERIAL# as sess_ser,
CAST (MIN (ASH.SAMPLE_TIME) AS DATE) AS start_time,
CAST (MAX (ash.sample_time) AS DATE) AS end_time
FROM dba_hist_active_sess_history ash, dba_users u
WHERE u.user_id = ASH.USER_ID AND ash.sql_id = lower(trim('&sql_id'))
GROUP BY u.username,
ash.program,
ash.sql_id,
ASH.SQL_PLAN_HASH_VALUE,
ASH.SESSION_ID,
ASH.SESSION_SERIAL#) query_runs
ORDER BY sql_id, start_time;


Long Running SQL - 
select m.INST_ID, TO_CHAR(TRUNC((sysdate-SQL_EXEC_START)*86400/3600),'FM9900') || ':' ||TO_CHAR(TRUNC(MOD((sysdate-SQL_EXEC_START)*86400,3600)/60),'FM00') || ':' ||TO_CHAR(MOD((sysdate-SQL_EXEC_START)*86400,60),'FM00') duration,
sid,m.session_serial#,m.sql_id,sql_plan_hash_value,username,px_maxdop parallel,m.cpu_time/1000000, to_char(s.sql_fulltext), q.rows_processed
from gv$sql_monitor m,gv$sql s, gv$sqlarea q where m.sql_id = s.sql_id and q.sql_id = s.sql_id and status = 'EXECUTING' and username is not null

No of Distinct values for col - 
SELECT column_name, num_distinct, num_buckets, histogram FROM USER_TAB_COL_STATISTICS WHERE table_name = '&TAB_NAME' AND column_name = '&COL_NAME';


DBTIMEZONE details -
select dbtimezone from dual;


Invisible index usage -
alter session set OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE; -- allows the optimizer to use invisible indexes

alter session set optimizer_index_cost_adj=1; -- set a value from 1 to 100


Flush the shared pool / buffer cache -
alter system flush SHARED_POOL;

alter system flush buffer_cache;

SGA resize stats -
select component, oper_type, oper_mode, initial_size/1024/1024/1024 "Initial in GB" , TARGET_SIZE/1024/1024/1024 "Target in GB", FINAL_SIZE/1024/1024/1024 "Final in GB", status from v$sga_resize_ops;

SGA occupants - 
SELECT o.object_name object_name, o.object_type object_type, COUNT(1) num_blocks FROM dba_objects o, v$bh bh WHERE o.object_id = bh.objd AND o.owner NOT IN ('SYS','SYSTEM') GROUP BY o.object_name, o.object_type ORDER BY COUNT(1) DESC;

SGA - object-wise read/status - V$BH displays the status and number of pings for every buffer in the SGA.
SELECT t.name AS tablespace_name, o.object_name, SUM(DECODE(bh.status, 'free', 1, 0)) AS NotcurrentlyINuse, SUM(DECODE(bh.status, 'xcur', 1, 0)) AS Exclusivee, SUM(DECODE(bh.status, 'scur', 1, 0)) AS Sharedcurrent, SUM(DECODE(bh.status, 'cr', 1, 0)) AS ConsistentRead, SUM(DECODE(bh.status, 'read', 1, 0)) AS BeingReadFromDisk, SUM(DECODE(bh.status, 'mrec', 1, 0)) AS InMediaRecoveryMode, SUM(DECODE(bh.status, 'irec', 1, 0)) AS IninstanceRecoveryMode FROM v$bh bh JOIN dba_objects o ON o.object_id = bh.objd JOIN v$tablespace t ON t.ts# = bh.ts# GROUP BY t.name, o.object_name ORDER BY 4 DESC;

Select O.Owner Object_Owner, O.Object_Name Object_Name, O.Object_Type Object_Type, Count(1) Num_Blocks, Count(1)*(8192)/(1024*1024*1024) Size_GB From Dba_Objects O, V$bh Bh Where O.Object_Id = Bh.Objd And O.Owner Not In ('SYS','SYSTEM') Group By O.Owner, O.Object_Name, O.Object_Type Order By Count(1) Desc; -- 8192 is the block size


HIT ratio -
SELECT name, physical_reads, db_block_gets, consistent_gets, 1 - (Physical_Reads / (Db_Block_Gets + Consistent_Gets)) "Hit Ratio" FROM V$BUFFER_POOL_STATISTICS;


Set Trace -
ALTER SESSION SET events '10053 trace name context forever';
ALTER SESSION SET events '10053 trace name context off';

General Queries -
select owner, table_name, num_rows, last_analyzed, temporary from dba_tables where table_name = '&TABLE_NAME';
select 'alter table ' || table_owner || '.' || table_name || ' drop partition ' || partition_name || ' ;' from all_tab_partitions where table_name like '&TABLE_NAME' and table_owner = '&TABLE_OWNER' and tablespace_name like '&Tablespace_Name';
select 'alter table SCOTT.' || TABLE_NAME || ' move partition ' || PARTITION_NAME || ' tablespace ABC;' from dba_tab_partitions where TABLE_OWNER='&OWNER';
select 'ALTER TABLE ' || '&TABLE_OWNER' || '.' || '&TABLE_NAME' || ' RENAME PARTITION ' || '&OLD_PartName' || ' TO ' || '&NEW_PartName' || ';' from dual;
select SQL_ID,LAST_SQL_ACTIVE_TIME,SQL_TEXT from gv$open_cursor where sid='&SiD' and upper(SQL_TEXT) like '%&Some_TEXT%';
select table_name, index_name, column_position, column_name from dba_ind_columns where table_name = '&TABLE_NAME' and table_owner = '&OWNER' order by 1,2,3 ;

Change the maxsize
alter database datafile 'DB_File_Name' AUTOEXTEND ON MAXSIZE 30g;

SELECT address, child_address, sql_text, sql_id FROM v$sql WHERE sql_id LIKE '%&SQL_ID%';
SELECT fetches, executions, parse_calls, disk_reads FROM V$SQLAREA WHERE sql_id LIKE '%&SQL_ID%';
SELECT application_wait_time, user_io_wait_time, cpu_time FROM V$SQLAREA WHERE sql_id LIKE '%&SQL_ID%';
SELECT optimizer_mode, optimizer_cost, sorts FROM V$SQLAREA WHERE sql_id LIKE '%&SQL_ID%';

alter system set db_recovery_file_dest_size=35G;
select count(*) from dba_tab_partitions;
select count(*) from dba_tab_subpartitions;
select count(*) from dba_ind_partitions;
select * from dba_dependencies where name in ('ABC');
select * from v$active_instances;
alter system flush shared_pool;
alter system flush buffer_cache;
SELECT * FROM V$RESULT_CACHE_OBJECTS WHERE name like '%&obj_name%' ORDER BY CREATION_TIMESTAMP DESC;
SELECT a.type, Substr(a.owner,1,30) owner,a.sid,Substr(a.object,1,30) object FROM gv$access a WHERE a.owner NOT IN ('SYS','PUBLIC') ORDER BY 1,2,3,4;

Column usage for a specific table -

SELECT c.name, cu.timestamp, cu.equality_preds AS equality, cu.equijoin_preds AS equijoin, cu.nonequijoin_preds AS noneequijoin, cu.range_preds AS range, cu.like_preds AS "LIKE", cu.null_preds AS "NULL"
FROM sys.col$ c, sys.col_usage$ cu, sys.obj$ o, sys.user$ u
WHERE c.obj#=cu.obj# (+) AND c.intcol# = cu.intcol# (+)
AND c.obj# = o.obj# AND o.owner#=u.user#
AND o.name = '&TABLE_NAME' AND u.name = '&SCHEMA_NAME' ORDER BY c.col#;

Who locked an account, from the audit trail -
SELECT username, userhost, os_username, TIMESTAMP FROM dba_audit_session WHERE username='&USER_NAME' and returncode=1017 order by TIMESTAMP; -- returncode: 1017 - invalid id/password, 2004 - security violation, 0 - success
or set a trace as below and check the trace file generated in the background dump destination -
alter system set events '1017 trace name errorstack level 10';
alter system set events '1017 trace name errorstack off';


To get the resource limits from DB -
select * from v$resource_limit;

Number of sessions per user -
select inst_id, username, osuser, count(*) from gv$session where type <> 'BACKGROUND' group by inst_id, username, osuser having count(*) > 20 order by count(*);

Collect Stats - Compute Stats -
ANALYZE TABLE <TRANS> COMPUTE STATISTICS FOR ALL INDEXED COLUMNS;
EXECUTE SYS.DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME=>'HROA', ESTIMATE_PERCENT=>10, DEGREE=>4, CASCADE=>TRUE);
EXECUTE SYS.DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME=>'HROA', ESTIMATE_PERCENT=>10, DEGREE=>4, CASCADE=>TRUE,OPTIONS=>'GATHER STALE');

EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => '&OWNER', tabname => '&TABLE_NAME', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO',cascade => false, degree => DBMS_STATS.DEFAULT_DEGREE) ;

DBMS_STATS.GATHER_TABLE_STATS (NULL,'EMP', method_opt => 'FOR COLUMNS sal SIZE 10');
ANALYZE TABLE emp COMPUTE STATISTICS FOR COLUMNS sal SIZE 10;

Getting the system statistics - 
select * from aux_stats$;

Manipulating the stats -
The following example assumes that your production WINNERS table is going to have 1,000,000 rows in 6,000 blocks:
EXECUTE SYS.DBMS_STATS.SET_TABLE_STATS (OWNNAME=>'HROA_DEV', TABNAME=>'WINNERS', NUMROWS=> 1000000, NUMBLKS=> 6000);

Monitoring Table -
ALTER TABLE WINNERS MONITORING;

Estimate Stats -
ANALYZE TABLE EMP ESTIMATE STATISTICS SAMPLE 5 PERCENT FOR ALL INDEXED COLUMNS;
ANALYZE INDEX EMP_NDX1 ESTIMATE STATISTICS SAMPLE 5 PERCENT FOR ALL INDEXED COLUMNS;
ANALYZE TABLE EMP COMPUTE STATISTICS FOR ALL INDEXED COLUMNS;

Delete stats
exec dbms_stats.delete_table_stats(ownname => '<table-owner>', tabname => '<table-name>');

Locked statistics - 
select owner, table_name, stattype_locked from dba_tab_statistics where stattype_locked is not null;

Lock / unlock stats
exec dbms_stats.unlock_schema_stats('&Schema_name'); 
exec dbms_stats.unlock_table_stats('&OWNER','&Table_name');

Copying Statistics Using DBMS_STATS
DBMS_STATS gives you the ability to copy statistics from one schema to another, or from one database to another, using the following procedure:
Step 1. Create a table to store the statistics, if you have not already done so:
EXECUTE SYS.DBMS_STATS.CREATE_STATS_TABLE (OWNNAME=>'HROA', STATTAB=>'HROA_STAT_TABLE');
Step 2. Populate the table with the statistics from the schema that you are copying from:
EXECUTE SYS.DBMS_STATS.EXPORT_SCHEMA_STATS (OWNNAME=> 'HROA', STATTAB=>'HROA_STAT_TABLE', STATID=>'HROA_21SEP_2001');
Step 3. If you are copying statistics to a different database, such as from production to development, export and import that statistics table as required:
exp hroa/secret@prod file=stats tables=hroa_stat_table
imp hroa/secret@dev file=stats tables=hroa_stat_table
Step 4. Populate the statistics in the target schema's dictionary. In the following example, statistics are being loaded for the schema HROA_TEST from the table named HROA_STAT_TABLE:
EXECUTE SYS.DBMS_STATS.IMPORT_SCHEMA_STATS (OWNNAME=>'HROA_TEST', STATTAB=>'HROA_STAT_TABLE', STATID=>'HROA_21SEP_2001', STATOWN=> 'HROA');

Get the Session level OPTIMIZER Hint set -
SELECT S.SID, SUBSTR(S.PROGRAM,1,8) PROGRAM, SOE.NAME, SOE.VALUE, SOE.ISDEFAULT FROM V$SESSION S, V$SES_OPTIMIZER_ENV SOE WHERE S.SID=SOE.SID and SOE.ISDEFAULT='NO' and (SOE.NAME like '%&PARAMETER%' or S.SID= '&SID' ) ORDER BY NAME, PROGRAM;

Flashback recovery area usage -
SELECT Name, (SPACE_LIMIT/1024/1024/1024) Space_Limit_GB, SPACE_USED/1024/1024/1024 Space_Used_GB, SPACE_RECLAIMABLE, NUMBER_OF_FILES FROM V$RECOVERY_FILE_DEST;
SELECT * from V$FLASH_RECOVERY_AREA_USAGE;

RMAN backup details -
select RECID, STAMP, COMPLETION_TIME, INCREMENTAL_LEVEL from v$backup_set where INCREMENTAL_LEVEL=&BACKUP_LEVEL;
select start_time, END_Time, input_bytes/(1024*1024), output_bytes/(1024*1024), status from v$rman_backup_job_details order by 1;
select * from v$database_block_corruption;
select * from V$backup_corruption;
Getting the BLOCK details -
select rowid, dbms_rowid.rowid_block_number(rowid) blockno, dbms_rowid.rowid_relative_fno(rowid) fno from &OWNER..&TABLE_NAME where rownum < 100;
Select segment_name,segment_type,owner from dba_extents where file_id=&file_number and &block_number between block_id and block_id+blocks-1;

Getting the data for the block details above -
select * from owner.table_name where dbms_rowid.rowid_block_number(rowid) = 4770 and dbms_rowid.rowid_relative_fno(rowid) = 0;


$ rman target / 
RMAN> run {
  -- allocate channels if required
  backup validate check logical database;
}
-- Backup time is the time taken to read + write blocks, whereas validate only reads the blocks, so validate takes less time than a backup.
RMAN> run {blockrecover corruption list;}
To recover an individual block, use:
RMAN> run {blockrecover datafile file# block block_number;} 
Example: 
RMAN> run {blockrecover datafile 5 block 114;} 
RMAN> RESTORE ARCHIVELOG ALL VALIDATE; -- validates archived logs before using them for recovery 

SELECT e.owner, e.segment_type, e.segment_name, e.partition_name, c.file#
, greatest(e.block_id, c.block#) corr_start_block# 
, least(e.block_id+e.blocks-1, c.block#+c.blocks-1) corr_end_block# 
, least(e.block_id+e.blocks-1, c.block#+c.blocks-1) 
- greatest(e.block_id, c.block#) + 1 blocks_corrupted 
, null description 
FROM dba_extents e, v$database_block_corruption c 
WHERE e.file_id = c.file# 
AND e.block_id <= c.block# + c.blocks - 1 
AND e.block_id + e.blocks - 1 >= c.block# 
UNION 
SELECT s.owner, s.segment_type, s.segment_name, s.partition_name, c.file# 
, header_block corr_start_block# 
, header_block corr_end_block# 
, 1 blocks_corrupted 
, 'Segment Header' description 
FROM dba_segments s, v$database_block_corruption c 
WHERE s.header_file = c.file# 
AND s.header_block between c.block# and c.block# + c.blocks - 1 
UNION 
SELECT null owner, null segment_type, null segment_name, null partition_name, c.file# 
, greatest(f.block_id, c.block#) corr_start_block# 
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1) corr_end_block# 
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1) 
- greatest(f.block_id, c.block#) + 1 blocks_corrupted 
, 'Free Block' description 
FROM dba_free_space f, v$database_block_corruption c 
WHERE f.file_id = c.file# 
AND f.block_id <= c.block# + c.blocks - 1 
AND f.block_id + f.blocks - 1 >= c.block# 
order by file#, corr_start_block#; 


Check the oracle user's crontab entries with the commands below (provide your local password if prompted) -

suexec -u oracle /usr/bin/crontab -l
export PATH=$PATH:/usr/local/bin:.
Get sql_id and child_number for the preceding statement:
SELECT sql_id, child_number FROM gv$sql WHERE sql_text LIKE '%TOTO%';
SELECT sql_id, child_number, optimizer_mode, plan_hash_value FROM gv$sql WHERE sql_ID like '&SQL_ID';

Display the execution plan of all cursors matching the string 'TOTO':
SELECT t.* FROM v$sql s, table(DBMS_XPLAN.DISPLAY_CURSOR(s.sql_id, s.child_number)) t WHERE sql_text LIKE '%TOTO%';

To get an execution plan -
select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','typical'));

To get an execution plan -
explain plan set statement_id='TTT' for
select * from dual;

set long 2000
set linesize 200
set pagesize 2000
set trimspool on
select * from table(sys.dbms_xplan.display('','TTT','ALL'));

To get an execution plan ALL STATS-
select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','allstats'));
select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','allstats last'));
select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','allstats all'));
select * from table(dbms_xplan.display_cursor('&sql_id','&child_no',Format=>'typical +peeked_binds'));

select SQL_ID,CHILD_NUMBER from v$sql_plan where (ADDRESS,HASH_VALUE) in (select SQL_ADDRESS, SQL_HASH_VALUE from v$session where sid = &sid);
select * from table(dbms_xplan.display_cursor('&sqlid','&child','TYPICAL'));

To get an execution plan from AWR -
SELECT * FROM table(dbms_xplan.display_awr(nvl('&sql_id',null), nvl('&plan_hash_value',null),null,'typical +peeked_binds'));
SELECT * FROM table(dbms_xplan.display_awr(nvl('&sql_id',null), nvl('&plan_hash_value',null),null,'ADVANCED'));
or - @XMSH - to get the Stats from AWR...


To get an execution plan from V$SQL_PLAN_MONITOR -
SELECT status, KEY, SID, sql_id, elapsed_time, cpu_time, fetches, buffer_gets, disk_reads FROM v$sql_monitor where sql_id='&SQL_ID';
SELECT plan_line_id, plan_operation || ' ' || plan_options operation, starts, output_rows FROM v$sql_plan_monitor where key=&KEY ORDER BY plan_line_id;
SELECT DBMS_SQLTUNE.report_sql_monitor(sql_id =>'&SQL_ID', type => 'TEXT',report_level=>'ALL') AS report FROM dual;

Bind variable Details-
SELECT a.sql_text,b.name, b.position, b.datatype_string, b.value_string FROM v$sql_bind_capture b, v$sqlarea a WHERE b.sql_id = '&SQL_ID' AND b.sql_id = a.sql_id;

SELECT s.child_number, m.position, m.max_length, decode(m.datatype,1,'VARCHAR2',2,'NUMBER',m.datatype) AS datatype FROM v$sql s, v$sql_bind_metadata m WHERE s.sql_id = '&SQL_ID' AND s.child_address = m.address ORDER BY 1, 2;

To Find a High Water Mark -
select a.file_id data_file_id,b.file_id extents_file_id,a.tablespace_name, a.file_name,a.autoextensible
,a.bytes/1024/1024 curr_size_M
,ceil( (nvl(b.hwm,1)*block.siz)/1024/1024 ) hwm_M
,ceil( (nvl(b.hwm,1)*block.siz)/1024/1024/128)*128 new_size_M -- to nearest 128M (rounded up)
,a.bytes/1024/1024 - ceil( (nvl(b.hwm,1)*block.siz)/1024/1024/128)*128 saving_M,a.tablespace_name
,'alter database datafile '''||a.file_name||''' resize '||ceil( (nvl(b.hwm,1)*block.siz)/1024/1024/128)*128||'m;' cmd
from dba_data_files a, (select 8192*2 siz from dual) block, --16k blocks or 8k blocks
( select file_id, max(block_id+blocks-1) hwm
from dba_extents
-- where file_id = 93
-- where tablespace_name like 'HEDW_MART%'
-- where tablespace_name in ('HERMES_MI_MART_PART_201003')
group by file_id
) b
where a.file_id = b.file_id(+)
--and a.file_id = 93
--and tablespace_name like 'HEDW_MART%'
--and tablespace_name in ('HERMES_MI_MART_PART_201003')
and ceil( (nvl(b.hwm,1)*block.siz)/1024/1024/128)*128 < a.bytes/1024/1024 -128 --ensure shrinking frees up at least 128M
order by autoextensible,bytes desc;

To find long-running SQL -
SELECT s.SID, s.SERIAL#, s.LOGON_TIME, s.SQL_ID, s.STATUS, s.USERNAME, s.OSUSER, s.PREV_SQL_ID, a.SQL_TEXT PREV_SQL,
(SELECT max(v.SQL_TEXT) FROM v$sql v WHERE v.SQL_ID = s.SQL_ID) CURR_SQL
FROM v$session s, v$sqlarea a WHERE s.USERNAME IS NOT NULL and s.STATUS = 'ACTIVE'
AND a.SQL_ID(+) = s.PREV_SQL_ID ORDER BY s.LOGON_TIME DESC;


To find long-running SQL -
SELECT SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK, ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE" FROM V$SESSION_LONGOPS WHERE TOTALWORK != 0 AND SOFAR != TOTALWORK;
Find fragmentation in the database -
select a.owner,a.table_name,round(b.bytes/1024/1024) "currentsize in MB",
round((a.num_rows*a.avg_row_len)/1024/1024) "actualsize in MB",
round(((b.bytes/1024/1024-((a.num_rows*a.avg_row_len)/1024/1024))/(b.bytes/1024/1024))*(100-a.pct_free),2) "frag percent"
from dba_segments b,dba_tables a
where a.owner=b.owner and b.bytes/1024/1024>100
and b.segment_name=a.table_name and a.owner not in ('SYS','SYSTEM');

To Find a SQL-
col sql_text for a60 wrap
set verify off
set pagesize 999
set lines 155
col username format a13
col prog format a22
col sid format 999
col child_number format 99999 heading CHILD
col ocategory format a10
col avg_etime format 9,999,999.99
col avg_pio format 9,999,999.99
col avg_lio format 999,999,999
col etime format 9,999,999.99

select sql_id, child_number, plan_hash_value plan_hash, executions execs, (elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime, buffer_gets/decode(nvl(executions,0),0,1,executions) avg_lio, sql_text
from gv$sql s where upper(sql_text) like upper(nvl('&sql_text',sql_text)) and sql_text not like '%from v$sql where sql_text like nvl(%' and sql_id like nvl('&sql_id',sql_id)
order by 1, 2, 3
/

Find a SQL from AWR –
set long 32000
set lines 155
col sql_text format a40
col execs for 999,999,999
col etime for 999,999,999.9
col avg_etime for 999,999.999
col lio for 999,999,999,999
col avg_lio for 999,999,999,999
col avg_pio for 999,999,999,999
col rows_proc for 999,999,999,999 head rows
col begin_interval_time for a30
col node for 99999
col versions for 99999
col percent_of_total for 999.99
break on report
compute sum of percent_of_total on report
select sql_id, sql_text, avg_pio, avg_lio, avg_etime, execs, rows_proc
from (
select dbms_lob.substr(sql_text,3999,1) sql_text, b.*
from dba_hist_sqltext a, (
select sql_id, sum(execs) execs, sum(etime) etime, sum(etime)/sum(execs) avg_etime, sum(pio)/sum(execs) avg_pio,
sum(lio)/sum(execs) avg_lio, sum(rows_proc) rows_proc
from (
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id,
nvl(executions_delta,0) execs,
elapsed_time_delta/1000000 etime,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
buffer_gets_delta lio,
disk_reads_delta pio,
rows_processed_delta rows_proc,
(buffer_gets_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_lio,
(rows_processed_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_rows,
(disk_reads_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_pio
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where
ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and ss.snap_id between nvl('&starting_snap_id',0) and nvl('&ending_snap_id',999999999)
and executions_delta > 0
)
group by sql_id
order by 5 desc
) b
where a.sql_id = b.sql_id
and execs > 1
)
where rownum <31
and sql_text like nvl('&sql_text',sql_text)
and sql_id like nvl('&sql_id',sql_id)
-- group by sql_id, sql_text
order by etime desc
/

Busiest time of a database -
set lines 155
col dbtime for 999,999.99
col begin_timestamp for a40
select * from (
select begin_snap, end_snap, timestamp begin_timestamp, inst, a/1000000/60 DBtime from
(select e.snap_id end_snap, lag(e.snap_id) over (order by e.snap_id) begin_snap,
lag(s.end_interval_time) over (order by e.snap_id) timestamp,
s.instance_number inst, e.value, nvl(value-lag(value) over (order by e.snap_id),0) a
from dba_hist_sys_time_model e, DBA_HIST_SNAPSHOT s where s.snap_id = e.snap_id
and e.instance_number = s.instance_number
and to_char(e.instance_number) like nvl('&instance_number',to_char(e.instance_number))
and stat_name = 'DB time')
where begin_snap between nvl('&begin_snap_id',0) and nvl('&end_snap_id',99999999)
and begin_snap=end_snap-1 order by dbtime desc
)where rownum < 31
/


Blockers -
SELECT DECODE(request,0,'Holder: ','Waiter: ') || sid sid, id1, id2, lmode, request, type, inst_id FROM GV$LOCK WHERE (id1, id2, type) IN (SELECT id1, id2, type FROM GV$LOCK WHERE request > 0) ORDER BY id1, request;

To get session details, blocking detail -
select sid, event, state, wait_time, seconds_in_wait from v$session where SID='&SID';

OR
-- by Arup Nanda
col "Description" format a50
select sid,decode(state, 'WAITING','Waiting',
'Working') state,
decode(state,'WAITING',
'So far '||seconds_in_wait,
'Last waited '||
wait_time/100)||' secs for '||event "Description",
blocking_session B_SID,
blocking_instance B_Inst
from gv$session where username = '&USERNAME' -- SID='&SID';

To get blocking detail -
select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
from gv$session where username = '&USERNAME' -- SID='&SID';

To get blocking detail -
select owner, object_type, object_name, data_object_id from dba_objects where object_id = 241876; -- use the ROW_WAIT_OBJ# value from the query above

To get max Temporary tablespace usages - 
SELECT tablespace_name, SUM(bytes_used/(1024*1024*1024)) , SUM(bytes_free/(1024*1024*1024)) FROM V$temp_space_header GROUP BY tablespace_name;
SELECT INST_ID , tablespace_name, SUM (bytes_used/(1024*1024*1024)), SUM (bytes_free/(1024*1024*1024)) FROM gv$temp_space_header GROUP BY tablespace_name, INST_ID;


Check if a session is hanging or not
column sid format 990
column seq# format 99990
column wait_time heading 'WTime' format 99990
column event format a30
column p1 format 9999999990
column p2 format 9999999990
column p3 format 9990
select sid,event,seq#,p1,p2,p3,wait_time from gV$session_wait order by sid
/
OR
set lines 120 trimspool on
col event head "Waited for" format a30
col total_waits head "Total|Waits" format 999,999
col tw_ms head "Waited|for (ms)" format 999,999.99
col aw_ms head "Average|Wait (ms)" format 999,999.99
col mw_ms head "Max|Wait (ms)" format 999,999.99
select event, total_waits, time_waited*10 tw_ms, average_wait*10 aw_ms, max_wait*10 mw_ms from v$session_event where sid = '&SID'


OR -
SELECT sid, to_char(start_time,'hh24:mi:ss') stime, message, ( sofar/totalwork)* 100 percent FROM v$session_longops WHERE sofar/totalwork < 1 and sid=&SID;
select sid,event,seq#,p1,p2,p3,wait_time from gv$session_wait where sid=&SID order by sid;


This query should be run at least three times and the results compared.
Column meanings:

sid         Session IDentifier of the session
seq#        Sequence number. It increments each time a particular session waits for a new event, so it can be used to tell whether a session is moving along or not.
event       Operation that the session is waiting for, or last waited for.
p1 p2 p3    These columns have different meanings for different event values.
wait_time   Zero indicates that the session is still waiting for the event. A non-zero value indicates that this was the last event the session waited for and that the session is currently using CPU.
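
The wait_time rule above can be sketched as a tiny helper (hypothetical, not an Oracle tool; wait_time is reported in centiseconds):

```shell
#!/bin/sh
# Mirror the wait_time semantics described above:
# 0  -> the session is still waiting on the event
# >0 -> the event was the LAST wait; the session is now on CPU
classify_wait() {
  if [ "$1" -eq 0 ]; then
    echo "waiting"
  else
    echo "on cpu, last wait ${1}cs"
  fi
}

classify_wait 0     # prints: waiting
classify_wait 12    # prints: on cpu, last wait 12cs
```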


Sample output:

SID EVENT SEQ# P1 P2 P3 WTime
---- ------------------------------ ------ ----------- ----------- ----- ------
1 pmon timer 335 300 0 0 0
2 rdbms ipc message 779 300 0 0 0
6 smon timer 74 300 0 0 0
9 Null event 347 0 300 0 0
16 SQL*Net message from client



How long will it take to roll back a transaction? (Oracle 9.x) - The 8.x method should work for Oracle 9i; however, it has not been tested with the new undo tablespace in 9i.

If the database has been restarted in 9i, there is an easier way to determine the number of undo blocks required for rollback by using the following query:

SELECT DISTINCT ktuxesiz FROM x$ktuxe;

KTUXESIZ
0
1
2
3
107
Blocking sessions - 
SELECT (SELECT username FROM gv$session s WHERE SID=a.sid AND s.inst_id = a.inst_id) blocker
, a.sid blocker_sid
, 'IS BLOCKING' is_blocking
, (SELECT username FROM gv$session s WHERE SID=b.sid AND s.inst_id = b.inst_id) blockee
, b.sid blockee_sid
FROM gv$lock a
, gv$lock b
WHERE a.block = 1
AND b.request > 0
AND A.ID1 = B.ID1
and a.id2 = b.id2;


If the database has not been shut down and restarted, look at v$transaction.used_urec and v$transaction.used_ublk. These columns show the number of undo records and undo blocks currently held by a transaction.

SELECT a.sid, a.username, b.xidusn, b.used_urec, b.used_ublk FROM gv$session a, gv$transaction b WHERE a.saddr = b.ses_addr;
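
Given two samples of used_ublk taken a known interval apart, the remaining rollback time can be estimated with simple rate arithmetic. A hypothetical sketch (the sample values are made up):

```shell
#!/bin/sh
# Estimate seconds remaining for a rollback from two samples of
# v$transaction.used_ublk taken $3 seconds apart.
# remaining_secs = blocks_still_held / blocks_released_per_second
estimate_rollback_secs() {
  first=$1; second=$2; interval=$3
  released=$(( first - second ))
  if [ "$released" -le 0 ]; then
    echo "no progress"
    return 1
  fi
  echo $(( second * interval / released ))
}

# 100000 -> 90000 blocks over 60s: 10000 blocks/min, 90000 left -> 540s
estimate_rollback_secs 100000 90000 60   # prints: 540
```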



Get the 'until sequence' number for each thread (the highest sequence that has been backed up) from the output of this query:
select thread#,max(SEQUENCE#) from v$BACKUP_ARCHIVELOG_DETAILS group by thread#;

RMAN Commands - 
To delete the archive log files, use the statement below (the end sequence comes from the query above):
delete noprompt archivelog from sequence 1000 until sequence 826070 thread 2;
DELETE BACKUP device type disk completed before 'sysdate-1';

To delete archive logs more than 10 days old:
delete noprompt archivelog all completed before 'sysdate-10';
delete noprompt archivelog until time 'sysdate-10';
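
Note that Oracle date arithmetic in these expressions is in days: 'sysdate-10' is ten days ago, while 'sysdate-10/1440' (i.e. 10/24/60) is only ten minutes ago. A throwaway converter to sanity-check such expressions:

```shell
#!/bin/sh
# Convert minutes to the fractional-day value used in Oracle
# date arithmetic (1 day = 1440 minutes).
mins_to_days() {
  awk -v m="$1" 'BEGIN { printf "%.6f\n", m / 1440 }'
}

mins_to_days 10     # prints: 0.006944
mins_to_days 1440   # prints: 1.000000
```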
To Backup and delete the archive log starting with sequence - 
run
{allocate channel c1 type disk format '/orabackup/DBNAME_%U_%t_%s_%p';
BACKUP ARCHIVELOG FROM SEQUENCE 125194 until SEQUENCE 125293;
backup incremental from scn 3794086344 database;
delete input;
}

RMAN> RESTORE SPFILE FROM AUTOBACKUP; -- or: RESTORE SPFILE TO '/tmp/spfileTEMP.ora' FROM AUTOBACKUP;

RMAN> STARTUP [FORCE] pfile= '/tmp/spfileTEMP.ora';

RMAN> RECOVER DATABASE DELETE ARCHIVELOG MAXSIZE 25M; --- Means - delete archive logs once applied and use max 25M of archive log space.

To Get the index status - 
select index_name, status from dba_indexes where index_name like '%&INDEX_NAME%';
select * from dba_ind_partitions where index_name like '%&INDEX_NAME%';
alter index abc rebuild subpartition abc online;
cat myrun.sh
#!/bin/ksh
date
sqlplus -s << EOF > /dev/null 2>&1
scott/tiger@MYDB.world
spool /tmp/mysql.log
@mysqlsql
spool off
exit;
EOF
date
---------------------------------OR----------------------------
To run a script in the background every 15 minutes (60*15 = 900 sec) -
linux1> cat bg_process.sh
#!/bin/sh
PATH=/bin:/usr/bin:/sbin:/usr/sbin
###Defining a timeout of 15 min i.e. - 900 sec (60 *15)

timeout=900

while true
do
/home/oracle11/.deepak/check.sql
sleep $timeout
done

---and
linux1> cat check.sql
#!/usr/bin/ksh
ORACLE_SID=TESTDB
ORACLE_HOME=/oracle11/11.2/oradb
echo $ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_HOME ORACLE_SID PATH
echo $ORACLE_HOME $ORACLE_SID $PATH

sqlplus "/as sysdba" <<EOF
set pages 300
set feedback off;
select 'UNDO - FROM DBA_UNDO_EXTENTS ' from dual;
SELECT DISTINCT STATUS, SUM(BYTES)/(1024*1024) mb, COUNT(*) FROM DBA_UNDO_EXTENTS GROUP BY STATUS;
select 'BLOCKING SESSION DETAILS ' from dual;
select SQL_ID,LAST_SQL_ACTIVE_TIME,SQL_TEXT from gv\$open_cursor where sid='156';
exit
EOF

To run the script in the background: nohup myrun.sh &

select index_owner, index_name, subpartition_name from dba_ind_subpartitions where tablespace_name='SYSTEM' and index_owner ='SCOTT';
alter index DBO.AI_rebuild subpartition SYS_PART1 online;
alter index scott.myindex rebuild subpartition SYS_SUBP31 tablespace users;
alter index scott.myind rebuild tablespace users;
select index_name, status from dba_indexes where index_name like '%CRESULT%';

Remove duplicate values from table -
DELETE FROM table_name A WHERE a.rowid > ANY (SELECT B.rowid FROM table_name B WHERE A.col1 = B.col1 AND A.col2 = B.col2 );
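
The DELETE above keeps, for each duplicate key, only the row with the lowest ROWID. The same keep-the-first-occurrence idea, sketched on plain text lines:

```shell
#!/bin/sh
# Keep only the first occurrence of each line (analogue of deleting
# every row whose rowid is greater than another row with the same keys).
dedupe_keep_first() {
  awk '!seen[$0]++'
}

printf 'a\nb\na\nb\nc\n' | dedupe_keep_first   # prints: a, b, c (one each)
```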

This query gives us information on all ACTIVE Transactions -

select t.start_time, s.sid, s.serial#, s.username, s.status, s.schemaname, s.osuser, s.process, s.machine, s.terminal, s.program, s.module, s.type, to_char(s.logon_time,'DD/MON/YY HH24:MI:SS') logon_time from v$transaction t, v$session s where s.saddr = t.ses_addr and s.status = 'ACTIVE' order by t.start_time;

OR -
col name format a8
col username format a8
col osuser format a8
col start_time format a17
col status format a12
tti 'Active transactions'
SELECT username, terminal, osuser, t.start_time, r.name, t.used_ublk "ROLLB BLKS", DECODE(t.SPACE, 'YES', 'SPACE TX', DECODE(t.recursive, 'YES', 'RECURSIVE TX', DECODE(t.noundo, 'YES', 'NO UNDO TX', t.status))) status FROM sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s WHERE t.xidusn = r.usn AND t.ses_addr = s.saddr;

All Active Session by specific user - 
col machine format a10 trunc
col module format a10 trunc
undefine username
select inst_id, sid, username,osuser, decode(status,'ACTIVE','A', status) status, machine,sql_id, round(last_call_et/60,2) Mins from gv$session where username like upper('%&&username%') order by status desc,last_call_et desc
/

Long Running Query from LongOPS -
COLUMN percent FORMAT 999.99
SELECT sid, to_char(start_time,'hh24:mi:ss') stime, message, ( sofar/totalwork)* 100 percent FROM gv$session_longops WHERE sofar/totalwork < 1

Locked Objects -

select o.object_name, o.object_type, l.session_id, k.type, k.lmode, k.request, k.block, k.ctime from gv$locked_object l, all_objects o, gv$lock k where l.object_id = o.object_id and k.id1 = o.object_id and k.sid = l.session_id order by l.session_id, k.ctime desc, o.object_name;

SQL that is currently "ACTIVE":-

select S.USERNAME, s.sid, s.osuser, t.sql_id, sql_text
from v$sqltext_with_newlines t,V$SESSION s
where t.address =s.sql_address
and t.hash_value = s.sql_hash_value
and s.status = 'ACTIVE'
and s.username <> 'SYSTEM'
order by s.sid,t.piece

Find SQL from SQL ID -

set lines 155
col username format a13
col prog format a22
col sid format 999
col child_number format 99999 heading CHILD
col ocategory format a10
col avg_etime format 9,999,999.99
col avg_pio format 9,999,999.99
col avg_lio format 999,999,999
col etime format 9,999,999.99
select sql_id, child_number, plan_hash_value plan_hash, executions execs,
(elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime,
buffer_gets/decode(nvl(executions,0),0,1,executions) avg_lio,
sql_text
from v$sql s
where upper(sql_text) like upper(nvl('&sql_text',sql_text))
and sql_text not like '%from v$sql where sql_text like nvl(%'
and sql_id like nvl('&sql_id',sql_id)
order by 1, 2, 3

Get OS PID -
set lines 132
SELECT s.username, s.user#, s.sid, s.serial#, s.prev_hash_value, p.spid os_pid
FROM V$SESSION S, v$process p
WHERE ( sid = nvl('&sid',sid) OR p.spid = nvl('&os_pid',p.spid))
and p.addr = s.paddr
and s.username is not null

High CPU Process details (details from OS PID) -
set lines 132
SELECT s.username, s.user#, s.sid, s.serial#, s.prev_hash_value, p.spid os_pid, s.machine, s.state, s.seconds_in_wait FROM V$SESSION s, v$process p WHERE p.spid='&os_pid' and p.addr = s.paddr;

Get Unstable plans -
set lines 155
col execs for 999,999,999
col min_etime for 999,999.99
col max_etime for 999,999.99
col avg_etime for 999,999.999
col avg_lio for 999,999,999.9
col norm_stddev for 999,999.9999
col begin_interval_time for a30
col node for 99999
break on plan_hash_value on startup_time skip 1
select * from (
select sql_id, sum(execs), min(avg_etime) min_etime, max(avg_etime) max_etime, stddev_etime/min(avg_etime) norm_stddev
from (
select sql_id, plan_hash_value, execs, avg_etime,
stddev(avg_etime) over (partition by sql_id) stddev_etime
from (
select sql_id, plan_hash_value,
sum(nvl(executions_delta,0)) execs,
(sum(elapsed_time_delta)/decode(sum(nvl(executions_delta,0)),0,1,sum(executions_delta))/1000000) avg_etime
-- sum((buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta))) avg_lio
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
and elapsed_time_delta > 0
and s.snap_id > nvl('&earliest_snap_id',0)
group by sql_id, plan_hash_value
))group by sql_id, stddev_etime
)
where norm_stddev > nvl(to_number('&min_stddev'),2)
and max_etime > nvl(to_number('&min_etime'),.1)
order by norm_stddev
/

Active session History -
-- Note: v$active_session_history does not capture all events
set lines 155
col avg_time_waited for 9.9999999
col min_time_waited for 9.9999999
col max_time_waited for 9.9999999
col sum_time_waited for 999,999,999

select event, sql_id, count(*),
avg(time_waited/1000000) avg_time_waited,
min(time_waited/1000000) min_time_waited,
max(time_waited/1000000) max_time_waited,
sum(time_waited/1000000) sum_time_waited
from v$active_session_history
where event like nvl('&event',event)
and sql_id like nvl('&sql_id',sql_id)
group by event, sql_id
order by 7 desc
/


To get blocking detail -

select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
from v$session where username = '&USERNAME' -- SID='&SID';
Count the number of recent UPDATE statements:
SELECT max(command_type), count (*)
FROM v$sqlarea WHERE command_type =6 AND open_versions > 0 AND rows_processed > 0;
Count the number of recent UPDATE statements (matching by SQL text):
SELECT count (*) FROM SYS.v_$sqlarea WHERE sql_text LIKE 'UPDATE %' AND open_versions > 0 AND rows_processed > 0;



Show SQL statements that are running right now:
SELECT sql_text FROM v$sqlarea WHERE users_executing > 0;


List recent SQL activity grouped by type:

SELECT decode(command_type, 1,'CRE TAB', 2,'INSERT', 3,'SELECT', 6,'UPDATE', 7,'DELETE', 9,'CRE INDEX', 12,'DROP TABLE', 15,'ALT TABLE',39,'CRE TBLSPC', 42,'DDL', 44,'COMMIT', 45,'ROLLBACK', 47,'PL/SQL EXEC', 48,'SET XACTN', 62,'ANALYZE TAB', 63,'ANALYZE IX', 71,'CREATE MLOG', 74,'CREATE SNAP',79,'ALTER ROLE', 85,'TRUNC TAB', to_char(command_type)) SQLcmd, count (*)
FROM v$sqlarea
WHERE open_versions > 0
AND rows_processed > 0
Group By decode(command_type, 1,'CRE TAB', 2,'INSERT', 3,'SELECT', 6,'UPDATE', 7,'DELETE', 9,'CRE INDEX', 12,'DROP TABLE', 15,'ALT TABLE',39,'CRE TBLSPC', 42,'DDL', 44,'COMMIT', 45,'ROLLBACK', 47,'PL/SQL EXEC', 48,'SET XACTN', 62,'ANALYZE TAB', 63,'ANALYZE IX', 71,'CREATE MLOG', 74,'CREATE SNAP',79,'ALTER ROLE', 85,'TRUNC TAB', to_char(command_type));



Track the progress of a specific (long running) statement:

SELECT SUBSTR(sql_text, 1, 60) "SQL Text",
rows_processed "Total Rows Processed",
ROUND((SYSDATE - TO_DATE(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60, 1) "Total Time (Min)", TRUNC(rows_processed /((SYSDATE - TO_DATE(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60)) "Rows/Min", TRUNC(rows_processed /((SYSDATE - TO_DATE(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60 * 60)) "Rows/Sec"
FROM v$sqlarea
WHERE sql_text LIKE 'INSERT INTO TABLE T_BLAH VALUES 1,2,3%'
AND open_versions > 0 AND rows_processed > 0;
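
The "Rows/Min" and "Rows/Sec" columns above are plain rate arithmetic: rows_processed divided by elapsed time, truncated. The same computation outside the database (values are illustrative):

```shell
#!/bin/sh
# rate = rows processed / elapsed minutes (integer-truncated,
# like the TRUNC() in the query above)
rows_per_min() {
  rows=$1; elapsed_min=$2
  echo $(( rows / elapsed_min ))
}

rows_per_min 120000 40   # prints: 3000
```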

===========================================================================================================================

Data Guard -


To get Archive log GAP ON Data guard 
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received",APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference" FROM (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;

To Check the Recovery Progress – 
alter session set nls_date_format = 'DD/MM/YY HH24:MI:SS';
select to_char(sysdate,'MM/DD/YY HH24:MI:SS') as_of_date, to_char(TIMESTAMP,'MM/DD/YY HH24:MI:SS') Applied_time , round((sysdate - TIMESTAMP)*24 ,2) hrs_gap from v$recovery_progress where item = 'Last Applied Redo' and start_time=(select max(start_time) from v$recovery_progress);
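
The hrs_gap column above is just (sysdate - last_applied_time) * 24. The same arithmetic on epoch-seconds values, as a hypothetical helper for checking apply lag from a shell:

```shell
#!/bin/sh
# Apply lag in whole hours, given "now" and "last applied" as epoch seconds.
lag_hours() {
  now=$1; applied=$2
  echo $(( (now - applied) / 3600 ))
}

lag_hours 1700007200 1700000000   # 7200 s behind -> prints: 2
```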

To Check the DG Error Logs
SELECT gvi.thread#, timestamp, message FROM gv$dataguard_status gvds, gv$instance gvi WHERE gvds.inst_id = gvi.inst_id AND severity in ('Error','Fatal') ORDER BY timestamp, thread#;

To Check the Archive applied and delete with RMAN - 
Select chr(10)||' DELETE noprompt ARCHIVELOG until sequence '||seq||' thread '||thread||';'
From ( select
-- give 2 log file buffer
thread# thread,max(sequence#)-2 seq
From
( select thread#,sequence#
from
v$archived_log
where
first_time < (SELECT MIN(first_time) FROM (SELECT MAX(first_time) FIRST_TIME
FROM v$archived_log
WHERE applied='YES'
GROUP BY THREAD#))
and name is not null
)
group by thread#
)
/

To Check the SCN details from backup -
SELECT MIN(checkpoint_change#) start_scn, GREATEST(MAX(checkpoint_change#),MAX(absolute_fuzzy_change#)) beyond_scn FROM v$backup_datafile WHERE incremental_level=(SELECT MAX(incremental_level) FROM v$backup_datafile WHERE incremental_level>=0);
RMAN> restore database preview summary; -- check the latest SCN needed to restore the db
RMAN> RESTORE SPFILE FROM AUTOBACKUP; -- or: RESTORE SPFILE TO '/tmp/spfileTEMP.ora' FROM AUTOBACKUP;
RMAN> STARTUP [FORCE] pfile='/tmp/spfileTEMP.ora';

To Check the Timestamp of SCN
select scn_to_timestamp(1301571) from dual;

select process, thread#, sequence#, status from v$managed_standby;
alter database recover managed standby database disconnect;

Script to check issues related to DG on RAC - new_dg_prim_diag.sql

IO Details -

SELECT redo_hist.snap_id AS SnapshotID
, TO_CHAR(redo_hist.snaptime, 'DD-MON HH24:MI:SS') AS SnapshotTime
, ROUND(redo_hist.statval/elapsed_time/1048576,2) AS Redo_MBsec
--, SUBSTR(RPAD('*', 20 * ROUND ((redo_hist.statval/elapsed_time) / MAX (redo_hist.statval/elapsed_time) OVER (), 2), '*'), 1, 20) AS Redo_Graph
, ROUND(physical_read_hist.statval/elapsed_time/1048576,2) AS Read_MBsec
--, SUBSTR(RPAD('*', 20 * ROUND ((physical_read_hist.statval/elapsed_time) / MAX (physical_read_hist.statval/elapsed_time) OVER (), 2), '*'), 1, 20) AS Read_Graph
, ROUND(physical_write_hist.statval/elapsed_time/1048576,2) AS Write_MBsec
--, SUBSTR(RPAD('*', 20 * ROUND ((physical_write_hist.statval/elapsed_time) / MAX (physical_write_hist.statval/elapsed_time) OVER (), 2), '*'), 1, 20) AS Write_Graph
FROM (SELECT s.snap_id
            ,g.value AS stattot
            ,s.end_interval_time AS snaptime
            ,NVL(DECODE(GREATEST(value, NVL(LAG(value) OVER (PARTITION BY s.dbid, s.instance_number, g.stat_name ORDER BY s.snap_id), 0)), value, value - LAG(value) OVER (PARTITION BY s.dbid, s.instance_number, g.stat_name ORDER BY s.snap_id), value), 0) AS statval
            ,(EXTRACT(day FROM s.end_interval_time)-EXTRACT(day FROM s.begin_interval_time))*86400 +
             (EXTRACT(hour FROM s.end_interval_time)-EXTRACT(hour FROM s.begin_interval_time))*3600 +
             (EXTRACT(minute FROM s.end_interval_time)-EXTRACT(minute FROM s.begin_interval_time))*60 +
             (EXTRACT(second FROM s.end_interval_time)-EXTRACT(second FROM s.begin_interval_time)) AS elapsed_time
      FROM dba_hist_snapshot s, dba_hist_sysstat g, v$instance i
      WHERE s.snap_id = g.snap_id
      AND s.begin_interval_time >= sysdate-NVL('&num_days', 0.5)
      AND s.instance_number = i.instance_number
      AND s.instance_number = g.instance_number
      AND g.stat_name = 'redo size') redo_hist,
     (SELECT s.snap_id
            ,g.value AS stattot
            ,NVL(DECODE(GREATEST(value, NVL(LAG(value) OVER (PARTITION BY s.dbid, s.instance_number, g.stat_name ORDER BY s.snap_id), 0)), value, value - LAG(value) OVER (PARTITION BY s.dbid, s.instance_number, g.stat_name ORDER BY s.snap_id), value), 0) AS statval
      FROM dba_hist_snapshot s, dba_hist_sysstat g, v$instance i
      WHERE s.snap_id = g.snap_id
      AND s.begin_interval_time >= sysdate-NVL('&num_days', 0.5)
      AND s.instance_number = i.instance_number
      AND s.instance_number = g.instance_number
      AND g.stat_name = 'physical read total bytes') physical_read_hist,
     (SELECT s.snap_id
            ,g.value AS stattot
            ,NVL(DECODE(GREATEST(value, NVL(LAG(value) OVER (PARTITION BY s.dbid, s.instance_number, g.stat_name ORDER BY s.snap_id), 0)), value, value - LAG(value) OVER (PARTITION BY s.dbid, s.instance_number, g.stat_name ORDER BY s.snap_id), value), 0) AS statval
      FROM dba_hist_snapshot s, dba_hist_sysstat g, v$instance i
      WHERE s.snap_id = g.snap_id
      AND s.begin_interval_time >= sysdate-NVL('&num_days', 0.5)
      AND s.instance_number = i.instance_number
      AND s.instance_number = g.instance_number
      AND g.stat_name = 'physical write total bytes') physical_write_hist
WHERE redo_hist.snap_id = physical_read_hist.snap_id
AND redo_hist.snap_id = physical_write_hist.snap_id
ORDER BY 1 DESC;


==============================================================================================================================================

Cluster Commands -

$ srvctl config database -d testprod1
How to stop RAC Single NODE(database and instance)
crsctl stop crs OR
crsctl stop cluster
The command above stops the RDBMS instance first, then the ASM instance, and then the cluster services.

How to start RAC Single NODE
crsctl start crs OR
crsctl start cluster

This is a known problem since Oracle 10g Release 1. It only affects single-instance (non-RAC) databases using ASM, which require CSS (Cluster Synchronization Services). See Metalink note 264235, also known as bug 3458327.
The problem: the localconfig script puts this line at the end of the file:
h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null

That line should be placed before the line that starts the runlevel-3 scripts, shown below:
l3:3:wait:/etc/rc.d/rc 3

This causes the database to fail to start at system startup, because the ocssd.bin daemon only starts after runlevel 3 has finished. The init.cssd and dbora scripts should run after the daemon is up (i.e. after "init.cssd run").

So every time you execute localconfig reset or add, you need to modify /etc/inittab to move this line
h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
before this line:
l3:3:wait:/etc/rc.d/rc 3
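
The ordering requirement above — the init.cssd respawn entry must precede the runlevel-3 line — can be verified by comparing line numbers. A hypothetical checker (the file name is an argument):

```shell
#!/bin/sh
# Print OK if the init.cssd entry appears before the l3 runlevel line
# in an inittab-style file, WRONG-ORDER otherwise.
check_inittab_order() {
  cssd=$(grep -n 'init.cssd' "$1" | head -1 | cut -d: -f1)
  rc3=$(grep -n '^l3:3:wait' "$1" | head -1 | cut -d: -f1)
  if [ -n "$cssd" ] && [ -n "$rc3" ] && [ "$cssd" -lt "$rc3" ]; then
    echo OK
  else
    echo WRONG-ORDER
  fi
}

# Example: an entry placed after the l3 line is flagged
printf 'l3:3:wait:/etc/rc.d/rc 3\nh1:35:respawn:/etc/init.d/init.cssd run\n' > /tmp/inittab.bad
check_inittab_order /tmp/inittab.bad   # prints: WRONG-ORDER
```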

================================
1. START / STOP CLUSTERWARE CRS:
================================
To start and stop CRS when the machine starts or shutdown, on unix there are rc scripts in place.
/etc/init.d/init.crs start
/etc/init.d/init.crs stop
/etc/init.d/init.crs enable
/etc/init.d/init.crs disable
You can also, as root, manually start, stop, enable or disable the services with:
·         crsctl start crs
·         crsctl stop crs
·         crsctl enable crs
·         crsctl disable crs
On a unix system, you may find the following in the /etc/inittab file.
·         cat /etc/inittab | grep crs
h3:35:respawn:/etc/init.d/init.crsd run > /dev/null 2>&1 </dev/null
·         cat /etc/inittab | grep evmd
h1:35:respawn:/etc/init.d/init.evmd run > /dev/null 2>&1 </dev/null
·         cat /etc/inittab | grep css
h2:35:respawn:/etc/init.d/init.cssd fatal > /dev/null 2>&1 </dev/null
/etc/init.d> ls -al *init*
init.crs
init.crsd
init.cssd
init.evmd
·         cat /etc/inittab
h1:35:respawn:/etc/init.d/init.evmd run > /dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal > /dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run > /dev/null 2>&1 </dev/null
=======================
2. STARTING / STOPPING THE CLUSTER:
=======================
-- Stopping the Cluster:
Before you shut down any processes that are monitored by Enterprise Manager Grid Control, set a blackout in
Grid Control for the processes that you intend to shut down. This is necessary so that the availability records
for these processes indicate that the shutdown was planned downtime, rather than an unplanned system outage.
Shut down all Oracle RAC instances on all nodes. To shut down all Oracle RAC instances for a database,
enter the following command, where db_name is the name of the database:
$ ORACLE_HOME/bin/srvctl stop database -d db_name
Shut down all ASM instances on all nodes. To shut down an ASM instance, enter the following command,
where node is the name of the node where the ASM instance is running:
$ ORACLE_HOME/bin/srvctl stop asm -n node
Stop all node applications on all nodes. To stop node applications running on a node, enter the following command,
where node is the name of the node where the applications are running
$ ORACLE_HOME/bin/srvctl stop nodeapps -n node
Log in as the root user, and shut down the Oracle Clusterware or CRS process by entering the following command
on all nodes:
·         CRS_HOME/bin/crsctl stop crs # as root
-- Starting the Cluster:
·         CRS_HOME/bin/crsctl start crs # as root
$ ORACLE_HOME/bin/srvctl start nodeapps -n node
$ ORACLE_HOME/bin/srvctl start asm -n node
$ ORACLE_HOME/bin/srvctl start database -d db_name # will start all instances of the Database
$ ORACLE_HOME/bin/crsctl start resource ora.asm
====================================
3. CRS_STAT -t command OR SRVCTL STATUS command:
====================================
CRS_STAT example:

Viewing the status of instances, database, nodeapps:
For example, to list the status of the apps in the cluster, use crs_stat:
/home/oracle-->$CRS_HOME/bin/crs_stat -t
Name Type Target State Host

ora....SM1.asm application ONLINE ONLINE aix1
ora....x1.lsnr application ONLINE ONLINE aix1
SRVCTL example:

$ srvctl status nodeapps -n mynewserver
VIP is running on node: mynewserver
GSD is running on node: mynewserver
Listener is not running on node: mynewserver
ONS daemon is running on node: mynewserver
CRSCTL example:

Checking crs on the node:
·         crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
Checking crs clusterwide:
·         crsctl check cluster
node1-pub ONLINE
node2-pub ONLINE
=======
4. OTHER EXAMPLES:
=======
Example 1. Bring up the MYSID1 instance of the MYSID database.
$ srvctl start instance -d MYSID -i MYSID1
Example 2. Stop the MYSID database: all its instances and all its services, on all nodes.
$ srvctl stop database -d MYSID
Example 3. Stop the nodeapps on the myserver node. NB: Instances and services also stop.
$ srvctl stop nodeapps -n myserver
Example 4. Add the MYSID3 instance, which runs on the myserver node, to the MYSID
clustered database.
$ srvctl add instance -d MYSID -i MYSID3 -n myserver
Example 4. Add a new node, the mynewserver node, to a cluster.
$ srvctl add nodeapps -n mynewserver -o $ORACLE_HOME -A 10.10.10.100/255.255.255.0/eth1
(The -A flag precedes an address specification.)
Example 5. To change the VIP (virtual IP) on a RAC node, use the command
$ srvctl modify nodeapps -A new_address
Example 6. Disable the ASM instance on myserver for maintenance.
$ srvctl disable asm -n myserver

crs_stat -t
crsctl stop resource ora.cssd
crsctl start resource ora.cssd
The 11g Release 2 version of the Oracle RAC and Grid Infrastructure course went live earlier this year, and has generated much discussion concerning several aspects of the release. In this post, I would like to share some observations about the software based on my research and teaching experience during the past five months. The new release of the Grid Infrastructure consists of:
·         A New “Local” resource management layer, known as the OHASD
·         A new set of Agents which replace the RACG Layer
·         Support for new features: Grid Plug and Play, Grid Naming Service, Grid IPC and Cluster Time Synchronisation Service
·         Integration of ASM and the Clusterware to form the Grid Infrastructure
·         A reworked Cluster Ready Services Daemon (CRSD)
·         Automatically managed Server Pools
·         Support for Intelligent Platform Management Interface (IPMI), for node fencing and node termination
This is an extensive change to the clusterware from previous releases, and a very large topic, so in this blog post I will restrict the discussion to the new local resource management layer, called the "Lower Stack", and how it relates to the "Upper Stack".
The Lower Stack – Managed by OHASD

The 11gR2 Grid Infrastructure consists of a set of daemon processes which execute on each cluster node; the voting and OCR files, and protocols used to communicate across the interconnect. Prior to 11gR2, there were various scripts run by the init process to start and monitor the health of the clusterware daemons. From 11gR2, the Oracle High Availability Services Daemon (OHASD) replaces these. The OHASD starts, stops and checks the status of all the other daemon processes that are part of the clusterware using new agent processes listed here:
·         CSSDAGENT – used to start, stop, and check the status of the CSSD resource
·         ORAROOTAGENT – used to start "Lower Stack" daemons that must run as root: ora.crsd, ora.ctssd, ora.diskmon, ora.drivers.acfs, ora.crf
·         ORAAGENT – used to start "Lower Stack" daemons that run as the grid owner: ora.asm, ora.evmd, ora.gipcd, ora.gpnpd, ora.mdnsd
·         CSSDMONITOR – used to monitor the CSSDAGENT
The OHASD is essentially a daemon which starts and monitors the clusterware daemons themselves. It is started by init using the /etc/init.d/ohasd script and starts the ohasd.bin executable as root. The Oracle documentation lists the “Lower Stack” daemons where they are referred to as the “The Oracle High Availability Services Stack” and notes which agent is responsible for starting and monitoring each specific daemon. It also explains the purpose of each of the stack components. (Discussions of some of these components will feature in future blog posts.) If the grid infrastructure is enabled on a node, then OHASD starts the “Lower Stack” on that node at boot time. If disabled, then the “Lower Stack” is started manually. The following commands are used for these operations:
·         crsctl enable crs – enables autostart at boot time
·         crsctl disable crs – disables autostart at boot time
·         crsctl start crs - manually starts crs on the local node
The "Lower Stack" consists of daemons which communicate with their counterparts on other cluster nodes. These daemons must be started in the correct sequence, as some of them depend on others. For example, the Cluster Ready Services Daemon (CRSD) may depend on ASM being available if the OCR file is stored in ASM. Clustered ASM, in turn, depends on the Cluster Synchronisation Services Daemon (CSSD), as the CSSD must be started in order for clustered ASM to start up. This dependency tree is similar to the one that already existed for the resources managed by the CRSD itself, known as the "Upper Stack", which will be discussed later in this post.
To define the dependency tree for the “Lower Stack“, a local repository called the OLR is used. This contains the metadata required by OHASD to join the cluster and configuration details for the local software. As a result, OHASD can start the “Lower Stack” daemons without reference to the OCR. To examine the OLR use the following command, and then examine the dump file produced:
·         ocrdump -local <FILENAME>
Another benefit of the OHASD is that there is a daemon running on each cluster node whether or not the "Lower Stack" is started. As long as the OHASD daemon is running, the following commands may be used in 11gR2:
·         crsctl check has – checks the status of the OHASD
·         crsctl check crs – checks the status of the OHASD, CRSD, CSSD and EVMD
·         crsctl check cluster -all – checks the "Lower Stack" on all the nodes
·         crsctl start cluster – attempts to start the "Lower Stack" on all the nodes
·         crsctl stop cluster – attempts to stop the "Lower Stack" on all the nodes
Here are some examples:
# crsctl check has

CRS-4638: Oracle High Availability Services is online
# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

# crsctl check cluster -all
racn1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
racn2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
racn3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
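
When scripting against saved `crsctl check` output like the above, a crude health count can flag a node with a component down (hypothetical helper):

```shell
#!/bin/sh
# Count how many "... is online" lines appear on stdin.
count_online() {
  grep -c 'is online$'
}

printf 'CRS-4537: Cluster Ready Services is online\nCRS-4533: Event Manager is online\n' | count_online   # prints: 2
```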

To check the status of the “Lower Stack“ resources use the following:
·         crsctl stat res -init -t
An example is shown here:
# crsctl stat res -init -t
NAME TARGET STATE SERVER STATE_DETAILS
————————————————————————————-
Cluster Resources
————————————————————————————-

ora.asm ONLINE ONLINE racn1 Started
ora.crsd ONLINE ONLINE racn1
ora.cssd ONLINE ONLINE racn1
ora.cssdmonitor ONLINE ONLINE racn1


How to check Clusterware version 

crsctl query crs softwareversion


Active Clusterware version - 
crsctl query crs activeversion

Cluster Name - 
$ORA_CRS_HOME/bin/cemutlo -n

Getting Interconnect Details-

SELECT * FROM V$CLUSTER_INTERCONNECTS;
SELECT * FROM V$CONFIGURED_INTERCONNECTS;


ASM Commands -

REMAP - Repairs a range of physical blocks on a disk (an ASMCMD command, not SQL). The remap command only repairs blocks that have read disk I/O errors; it does not repair blocks that contain corrupted contents, whether or not those blocks can be read. The command assumes a physical block size of 512 bytes and supports all allocation unit sizes (1 to 64 MB).
remap disk_group_name disk_name block_range
The following example repairs blocks 5000 through 5999 for disk DATA_0001 in disk group DISK_GRP1.
ASMCMD> remap DISK_GRP1 DATA_0001 5000-5999

SQL> select group_number,name,total_mb,free_mb from v$asm_disk_stat;
SQL> select path, name, header_status from v$asm_disk;
SQL> select group_number, path, name, os_mb, header_status from v$asm_disk;
SQL> select group_number, name, total_mb,free_mb from v$asm_diskgroup;
SQL> alter diskgroup DATA drop disk MY_ASM_T2_DISK10;
SQL> create diskgroup DWEMARCHDG external redundancy disk 'ORCL:MY_ASM_T2_DISK10';
SQL> drop diskgroup DATA;
Diskgroup dropped.
SQL> create diskgroup DWEMDATADG external redundancy disk 'ORCL:MY_ASM_T2_DISK1','ORCL:MY_ASM_T2_DISK2','ORCL:MY_ASM_T2_DISK3';
Diskgroup created.
SQL>select name,total_mb,free_mb from v$asm_diskgroup;
SQL>
srvctl start asm -n prdve0-MYmu02
How to remove disk and add to another diskgroup -
select * from v$asm_diskgroup;
Check disks - select name,path from v$asm_disk where group_number=2;
Drop disk - alter diskgroup DATADG1MRG drop disk 'DATADG1MRG_0030' rebalance power 5;
Re-balance the rest of the disks - alter diskgroup DATADG1MRG rebalance power 5;
Check rebalance operation - select * from v$asm_operation;
Then add the freed disk to the other diskgroup -
SQL> alter diskgroup DATA add disk MY_ASM_DISK rebalance power 10;
SQL> select * from V$asm_operation;
alter diskgroup MYGRP mount;

create diskgroup TEST external redundancy disk 'ORCL:MY_ASM_T2_DISK1' ;
alter diskgroup TEST set attribute 'compatible.asm'='11.2';
alter diskgroup TEST set attribute 'compatible.rdbms'='11.2';
OR - create diskgroup TEST external redundancy disk 'ORCL:MY_ASM_T2_DISK1' attribute 'compatible.asm'='11.2','compatible.advm'='11.2';
asmcmd volcreate -G TEST -s 10G volume1
Check vol in ls -lrt /dev/asm/
get VOl info - /sbin/advmutil volinfo /dev/asm/volume1-367
OR - acfsutil info fs /oraback
OR - asmcmd volinfo -a
Create the ACFS file system -
/sbin/mkfs.acfs /dev/asm/volume1-367
Create the OS-level directory /TEST_FS - mkdir /TEST_FS
Register the Vol - /sbin/acfsutil registry -f -a /dev/asm/volume1-367 /TEST_FS --(as grid user)
Mount - mount /dev/asm/volume1-367 /TEST_FS -t acfs
--To drop / umount -
umount /TEST_FS
Delete volume (asmcmd) - voldelete -G TEST volume1
Drop disk group (sqlplus) - drop diskgroup TEST;

To Resize the ACFS FS -
1. Check the vol details with the below commands and get the disk group info.
acfsutil info fs /oraback OR
asmcmd volinfo -a
2. Add the disks to the disk group found in the above - alter diskgroup DATA add disk MY_ASM_DISK rebalance power 10;
3. Resize vol with - /sbin/acfsutil size +544G /oraback ---USE "+" to increase and "-" to reduce the size of volume.

Admin Commands - 
Get all users GRANTS -
set head off
set pages 0
set long 9999999
select dbms_metadata.get_ddl('USER', username) || '/' usercreate from dba_users;
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT','SYS') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','SYS') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','SYS') FROM DUAL;
SELECT DBMS_METADATA.get_ddl ('TABLESPACE', tablespace_name) FROM dba_tablespaces WHERE tablespace_name = DECODE(UPPER('&1'), 'ALL', tablespace_name, UPPER('&1'));

Active session details -
set termout off sqlblanklines on feedback off
alter session set nls_date_format="MM/DD/RR HH24:MI:SS";
alter session disable parallel query;
alter session disable parallel dml;
alter session disable parallel ddl;
set termout on
set linesize 999
set pagesize 120
set wrap on
prompt
prompt I/O (This is since initial connection - not a snapshot interval)
column inst_id heading "I" format 9
COLUMN username HEADING "User" FORMAT A16
col userinfo heading "SID,Serial" format a10
COLUMN PhyR FORMAT 99,999,999,999
COLUMN LogR FORMAT 99,999,999,999
COLUMN PhyW FORMAT 99,999,999,999
col BlkChanges format 99,999,999,999
col status heading "S" format a1 trunc
break on inst_id on username
select
s.inst_id
,s.username
,s.sid||','||s.serial# userinfo
,si.block_gets+si.consistent_gets "LogR"
,si.physical_reads "PhyR"
,si.block_changes+si.consistent_changes "PhyW"
--,si.block_changes "BlkChanges"
,s.status
from
gv$sess_io si
,gv$session s
where
(s.status='ACTIVE' or s.last_call_et = 0)
and s.type <> 'BACKGROUND'
and s.username IS NOT NULL
and s.AUDSID != USERENV('SESSIONID') -- OMIT THIS SESSION
and si.inst_id(+) = s.inst_id
and si.sid(+)=s.sid
order by
s.inst_id, s.username
/

Parallel Process details -

select * from DBA_PARALLEL_EXECUTE_CHUNKS where task_name ='&MY_TASK'; 

OR 
select
a.qcinst_id coorinst
,b.username coorname
,a.qcsid||','||a.qcserial# coordinator
,a.inst_id pxinst
,c.username pxname
,a.sid||','||a.serial# pxsess
from
gv$px_session a
,gv$session b
,gv$session c
where
a.qcserial# is not null
and a.qcinst_id = b.inst_id
and a.qcsid = b.sid
and a.qcserial# = b.serial#
and a.inst_id = c.inst_id
and a.sid = c.sid
and a.serial# = c.serial#
and b.AUDSID != USERENV('SESSIONID') -- OMIT THIS SESSION
order by
qcinst_id,coordinator
/

What the parallel processes and the coordinator are doing - 

SELECT dfo_number, tq_id, server_type, process, num_rows, bytes FROM v$pq_tqstat ORDER BY dfo_number DESC, tq_id, server_type DESC, process;



alter profile DEFAULT limit PASSWORD_LIFE_TIME UNLIMITED;

SQL> set serveroutput on
SQL> exec dbms_result_cache.memory_report
R e s u l t C a c h e M e m o r y R e p o r t
[Parameters]
Block Size = 1K bytes
Maximum Cache Size = 672K bytes (672 blocks)
Maximum Result Size = 33K bytes (33 blocks)
[Memory]
Total Memory = 5132 bytes [0.005% of the Shared Pool]
... Fixed Memory = 5132 bytes [0.005% of the Shared Pool]
... Dynamic Memory = 0 bytes [0.000% of the Shared Pool]
PL/SQL procedure successfully completed.

SQL>
Execute the STATUS function to check the current status of the result cache, which could be ENABLED
or DISABLED. You can purge the contents of the result cache by executing the FLUSH procedure or the
FLUSH function. You may have to purge the result cache if the database ends up filling up the result
cache, as the result cache doesn’t automatically flush its contents. If you load a new version of a function,
for example, you can get rid of the older function’s results from the result cache by purging
the results with the FLUSH procedure or function. Before you execute the FLUSH procedure or FLUSH
function, you must first put the result cache in bypass mode by executing the BYPASS procedure with
the TRUE value. Once you purge the result cache, execute the BYPASS procedure again, now with the
FALSE value, as shown here:

BEGIN
dbms_result_cache.bypass(FALSE);
END;

/

PL/SQL procedure successfully completed.

SQL>
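The bypass/flush/re-enable sequence described above can be sketched end to end; this is a sketch assuming the BYPASS, FLUSH, and STATUS subprograms of DBMS_RESULT_CACHE:

```sql
-- Put the result cache in bypass mode so no new results are cached
EXEC dbms_result_cache.bypass(TRUE);

-- Purge the existing cached results
EXEC dbms_result_cache.flush;

-- Resume normal caching
EXEC dbms_result_cache.bypass(FALSE);

-- Confirm the cache is active again (ENABLED / DISABLED / BYPASS)
SELECT dbms_result_cache.status() FROM dual;
```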

You can use the following views to manage the result cache:
• V$RESULT_CACHE_STATISTICS: Lists cache settings and memory usage statistics
• V$RESULT_CACHE_OBJECTS: Lists all cached objects (both results and dependencies) and their attributes
• V$RESULT_CACHE_DEPENDENCY: Lists the dependency information between cached results and dependencies
• V$RESULT_CACHE_MEMORY: Lists all memory blocks and their statistics
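As a quick sketch of using these views to see how well the cache is being used:

```sql
-- Cache-wide counters: compare "Find Count" with "Create Count Success"
SELECT name, value FROM v$result_cache_statistics;

-- Cached results currently held, with their invalidation counts
SELECT type, status, name, invalidations
FROM   v$result_cache_objects
WHERE  type = 'Result';
```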

Below Queries from metalink -
-- sessions with highest CPU consumption
SELECT s.sid, s.serial#, p.spid as "OS PID",s.username, s.module, st.value/100 as "CPU sec"
FROM v$sesstat st, v$statname sn, v$session s, v$process p
WHERE sn.name = 'CPU used by this session' -- CPU
AND st.statistic# = sn.statistic#
AND st.sid = s.sid
AND s.paddr = p.addr
AND s.last_call_et < 1800 -- active within last 1/2 hour
AND s.logon_time > (SYSDATE - 240/1440) -- sessions logged on within 4 hours
ORDER BY st.value;

-- sessions with the highest time for a certain wait
SELECT s.sid, s.serial#, p.spid as "OS PID", s.username, s.module, se.time_waited
FROM v$session_event se, v$session s, v$process p
WHERE se.event = '&event_name'
AND s.last_call_et < 1800 -- active within last 1/2 hour
AND s.logon_time > (SYSDATE - 240/1440) -- sessions logged on within 4 hours
AND se.sid = s.sid
AND s.paddr = p.addr
ORDER BY se.time_waited;
e.g.
SQL> Enter value for event_name: db file sequential read
-- sessions with highest DB Time usage
SELECT s.sid, s.serial#, p.spid as "OS PID", s.username, s.module, st.value/100 as "DB Time (sec)"
, stcpu.value/100 as "CPU Time (sec)", round(stcpu.value / st.value * 100,2) as "% CPU"
FROM v$sesstat st, v$statname sn, v$session s, v$sesstat stcpu, v$statname sncpu, v$process p
WHERE sn.name = 'DB time' -- CPU
AND st.statistic# = sn.statistic#
AND st.sid = s.sid
AND sncpu.name = 'CPU used by this session' -- CPU
AND stcpu.statistic# = sncpu.statistic#
AND stcpu.sid = st.sid
AND s.paddr = p.addr
AND s.last_call_et < 1800 -- active within last 1/2 hour
AND s.logon_time > (SYSDATE - 240/1440) -- sessions logged on within 4 hours
AND st.value > 0;

The following commands can be used:

For open files in general
$ lsof | grep $ORACLE_HOME
also
$ ps -ef | grep $ORACLE_HOME

For running processes
$ fuser $ORACLE_HOME/* | grep <userid>

(Please check for process like the dbconsole/emagent which may not be stopped by crsctl)

If needed use these commands:

Process Name Command
Database Control $ORACLE_HOME/bin/emctl stop dbconsole
Oracle Net listener $ORACLE_HOME/bin/lsnrctl stop
iSQL*Plus $ORACLE_HOME/bin/isqlplusctl stop
Ultra Search $ORACLE_HOME/bin/searchctl stop


==========================
Check the stats history retention time -
SQL> select dbms_stats.get_stats_history_retention from dual;
GET_STATS_HISTORY_RETENTION

30

Purge statistics history older than 15 days -
SQL> exec DBMS_STATS.PURGE_STATS(SYSDATE-15);

To change the retention period itself, use ALTER_STATS_HISTORY_RETENTION -
SQL> exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(15);

select dbms_stats.get_stats_history_retention from dual;
GET_STATS_HISTORY_RETENTION

15

==========================
Create a duplicate database -
If required, manually set the NLS parameters and run the commands below at the RMAN prompt
$ export NLS_DATE_FORMAT='DD-MON-YYYY-HH24:MI:SS'
$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

$ echo $NLS_LANG
AMERICAN_AMERICA.AL32UTF8
$ echo $NLS_DATE_FORMAT
DD-MON-YYYY-HH24:MI:SS

rman complete command is as -

connect target abc/abc@primarydb -- This is the prod db which is up and running
connect auxiliary / --- this is the db to be duplicated
CONFIGURE CHANNEL DEVICE TYPE disk CLEAR;
CONFIGURE DEFAULT DEVICE TYPE TO disk;
CONFIGURE DEVICE TYPE disk PARALLELISM 8;
·         note that a RUN block is necessary because SET commands such as SET UNTIL TIME or SET NEWNAME can only be executed within a RUN block
RUN {
set until time "to_date('05-28-2018 21:00:00','MM-DD-YYYY HH24:MI:SS')";
DUPLICATE TARGET DATABASE TO dup102
LOGFILE
GROUP 1 ('/local/recode1.log') SIZE 500M REUSE,
GROUP 2 ('/local/recode2.log') SIZE 500M REUSE,
GROUP 3 ('/local/recode3.log') SIZE 500M REUSE;
}

========================================================
SCRIPT header for HEAD + SUM
==============
set pagesize 80 linesize 100 feedback off
col name new_value dbname
set termout off
select name from v$database;
set termout on colsep '|'

TTITLE '&dbname Tablespace Free Space Statistics';

BREAK on REPORT;

COMPUTE sum label 'Totals' of freemb on report;
COMPUTE sum label 'Totals' of totmb on report;
COMPUTE sum label 'Totals' of usedmb on report;

--col cb format 999,999 heading "Free|Block|Entries"
--col sb format 999,999,999 heading "Total|Free Blocks"
--col mb format 9,999,999 heading "Largest|Free Block|Area"
--col maxfreepct format 999.99 heading "Max|Free %"
col freemb format 9,999,999 heading "Free MB"
col freepct format 999.99 heading "Free %"
col totmb format 999,999,999 heading "Total MB"
col totblk format 999,999,999 heading "Total Blocks"
col usedmb format 99,999,999 heading "Used MB"
col tablespace_name format a30
col usedpct format 999.99 heading "Used %"
col name new_value dbname
set termout off
select name from v$database;
set termout on

Banner for Script - 
spool something.log 
select '+----------------------------------------------------+' from dual
union all
select '| Diagnostic-Name: '||'&&SRDCNAME' from dual
union all
select '| Timestamp: '||to_char(systimestamp,'YYYY-MM-DD HH24:MI:SS TZH:TZM') from dual
union all
select '| Machine: '||host_name from v$instance
union all
select '| Version: '||version from v$instance
union all
select '| DBName: '||name from v$database
union all
select '| Instance: '||instance_name from v$instance
union all
select '+----------------------------------------------------+' from dual
/


OPatch Details -
opatch lsinventory
select * from dba_registry_history;
select substr(comp_id,1,15) comp_id, substr(comp_name,1,30) comp_name, substr(version,1,10) version, status from dba_registry order by modified;
select * from sys.registry$history;

col COMP_NAME for a50
set lines 200
col action_time for a30
col comments for a30
col ACTION for a15
select action_time, action,namespace, version, id, bundle_series, comments from registry$history;
select action_time, action, version, id, comments from dba_registry_history order by action_time;
select COMMENTS, ACTION_TIME from sys.registry$history;
select COMP_NAME, VERSION, STATUS from dba_registry;




Master Note for Transportable Tablespaces (TTS) -- Common Questions and Issues ( Doc ID 1166564.1 )

How to Perform A Full Database Export Import During Upgrade, Migrate, Copy Or Move Of A Database ( Doc ID 286775.1 )

How to Migrate APEX Applications and Their Supporting Objects from One APEX Instance to Another ( Doc ID 758216.1 )

Migrating Apex / HTMLDB Using Export/Import (Datapump) to a Different Database Using Dmp File Makes Objects Invalid ( Doc ID 467593.1 )

$> ./runInstaller -help
$> ./runInstaller -executePrereqs

$> ./runInstaller -silent -responseFile client_install.rsp -waitforcompletion -noconfig -ignorePrereq


==
Connect to a DB without updating tnsnames.ora (EZConnect) - sqlplus userid/passwd@HOST:1521/SERVICE_NAME

Document:1628089.1 AWR Report Interpretation Checklist for Diagnosing Database Performance Issues
Document:1359094.1 How to Use AWR Reports to Diagnose Database Performance Issues
Document:1274511.1 General SQL_TRACE / 10046 trace Gathering Examples




PGA USED by Session - 


SELECT DECODE(TRUNC(SYSDATE - LOGON_TIME), 0, NULL, TRUNC(SYSDATE - LOGON_TIME) || ' Days' || ' + ') ||
TO_CHAR(TO_DATE(TRUNC(MOD(SYSDATE-LOGON_TIME,1) * 86400), 'SSSSS'), 'HH24:MI:SS') LOGON,
SID, v$session.SERIAL#, v$process.SPID, ROUND(v$process.pga_used_mem/(1024*1024), 2) PGA_MB_USED,
v$session.USERNAME, STATUS, OSUSER, MACHINE, v$session.PROGRAM, MODULE
FROM v$session, v$process
WHERE v$session.paddr = v$process.addr
--and status = 'ACTIVE'
--and v$session.sid = 97
--and v$session.username = 'SYSTEM'
--and v$process.spid = 24301
ORDER BY pga_used_mem DESC;

===================================================================


  1. Tuning Shared pool – Oracle Database 10g
    Shared Pool
    The most important memory resource to consider on a production system is the shared pool. The shared pool is a part of the SGA that holds almost all the elements necessary for the execution of SQL statements and PL/SQL programs.
    In addition to caching program code, the shared pool caches the data dictionary information that Oracle needs to refer to often during the course of program execution.
    Proper shared pool configuration leads to dramatic improvements in performance. An improperly tuned shared pool leads to problems such as the following:
  • Fragmentation of the pool
  • Increased latch contention with the resulting demand for more CPU resources
  • Greater I/O because executable forms of SQL aren’t present in the shared pool
  • Higher CPU usage because of unnecessary parsing of SQL code
    The general increase in shared pool waits and other waits observed during a severe slowdown of the production database is the result of SQL code that fails to use bind variables.
    As the number of users increases, so does the demand on shared pool memory and latches, which are internal locks for memory areas. If there are excessive latches the result might be a higher wait time and a slower response time. Sometimes the entire database seems to hang. The shared pool consists of two major areas: the library cache and the data dictionary cache. You can’t allocate or decrease memory specifically for one of these components. If you increase the total shared pool memory size, both components will increase in some ratio that Oracle determines. Similarly, when you decrease the total shared pool memory, both components will decrease in size.
    Let’s look at these two important components of the shared pool in detail.
    The Library Cache
    The library cache holds the parsed and executable versions of SQL and PL/SQL code. All SQL statements undergo the following steps during their processing:
    Parsing, which includes syntactic and semantic verification of SQL statements and checking of object privileges to perform the actions.
    Optimization, where the Oracle optimizer evaluates how to process the statement with the least cost, after it evaluates several alternatives.
    Execution, where Oracle uses the optimized physical execution plan to perform the action stated in the SQL statement.
    Fetching, which only applies to SELECT statements where Oracle has to return rows to you. This step isn’t necessary in any nonquery-type statements.
    Parsing is a resource-intensive operation, and if your application needs to execute the same SQL statement repeatedly, having a parsed version in memory will reduce contention for latches, CPU, I/O, and memory usage. The first time Oracle parses a statement, it creates a parse tree. The optimization step is necessary only for the first execution of a SQL statement. Once the statement is optimized, the best access path is encapsulated in the access plan. Both the parse tree and the access plan are stored in the library cache before the statement is executed for the first time. Future invocations of the same statement need to go through only the last stage, execution, which avoids the overhead of parsing and optimizing as long as Oracle can find the parse tree and access plan in the library cache. Of course, if the statement is a SQL query, the last step will be the fetch operation.
    The library cache, being limited in size, discards old SQL statements when there’s no more room for new ones. The only way a parsed statement can be reused across multiple executions is if a new SQL statement is identical to the parsed statement. Two SQL statements are identical if they use exactly the same code, including case and spaces; when Oracle compares a new statement to existing statements in the library cache, it uses a simple string comparison. In addition, any bind variables used must be similar in data type and size. Here are a couple of examples that show you how picky Oracle is when it comes to considering whether two SQL statements are identical.
    In the following example, the statements aren’t considered identical because of an extra space in the second statement:
    SELECT * FROM employees;
    SELECT *  FROM employees;
    In the next example, the statements aren’t considered identical because of the different case used for the table Employees in the second statement. The two versions of employees are termed literals because they’re literally different from each other.
    SELECT * FROM employees;
    SELECT * FROM Employees;
    Let’s say users in the database issue the following three SQL statements:
    SELECT * FROM persons WHERE person_id = 10
    SELECT * FROM persons WHERE person_id = 999
    SELECT * FROM persons WHERE person_id = 6666
    Oracle uses a different execution plan for the preceding three statements, even though they seem to be identical in every respect, except for the value of person_id. Each of these statements has to be parsed and executed separately, as if they were entirely different. Because all three are essentially the same, this is inherently inefficient. As you can imagine, if hundreds of thousands of such statements are issued during the day, you’re wasting database resources and the query performance will be slow. Bind variables allow you to reuse SQL statements by making them “identical,” and thus eligible to share the same execution plan.
    In our example, you can use a bind variable, which I’ll call :var, to help Oracle view the three statements as identical, thus requiring a single execution instead of multiple ones. The person_id values 10, 999, and 6666 are “bound” to the bind variable :var. Your replacement SQL statement using a bind variable would be this:
    SELECT * FROM persons WHERE person_id = :var
    Using bind variables can dramatically increase query performance; you can even “force” Oracle to use bind variables even if an application doesn’t use them.
    Checking hard parsing and soft parsing
    When a query is fired from the SQL prompt, it undergoes the steps explained above. The role of the library cache is to hold the parse tree and execution plan. If we run a query and this information is found directly in the library cache, Oracle doesn’t have to perform the time-consuming work of parsing the statement, creating the candidate execution plans, and selecting the best one; it saves time by picking up the execution plan directly from the cache. This is called soft parsing.
    If Oracle cannot find the execution plan in the library cache and has to carry out all of these steps, the operation is much more expensive. This is called a hard parse.
    For a given query we can check whether it was a hard parse or a soft parse by turning on session-level tracing.
    Usually the first execution of a query is a hard parse; on subsequent executions Oracle finds the information in the cache, so they are soft parses (provided the query fired is exactly the same).
    Let's try this.
    Step 1)
    SQL> alter system flush shared_pool;
    System altered.
    SQL> ALTER SESSION SET SQL_TRACE=TRUE;
    Session altered.
    SQL> select * from emp where empno = 2;
    If we check the trace file we see the following info:
    PARSING IN CURSOR #5 len=43 dep=1 uid=0 oct=3 lid=0 tim=2461739217275 hv=1682066536 ad='bb4c0df8'
    select user#,type# from user$ where name=:1
    END OF STMT
    PARSE #5:c=0,e=2245,p=0,cr=0,cu=0,
    mis=1,r=0,dep=1,og=4,tim=2461739217253
    EXEC #5:c=10000,e=5622,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=2461739223682
    Step 2) Run the same statement again by enabling the tracing
    We see following in trace file.
    PARSING IN CURSOR #1 len=33 dep=0 uid=5 oct=3 lid=5 tim=2461984275940 hv=1387342950 ad=’bc88b950′
    select * from emp where empno = 2
    END OF STMT
    PARSE #1:c=0,e=631,p=0,cr=0,cu=0,
    mis=0,r=0,dep=0,og=1,tim=2461984275914
    EXEC #1:c=0,e=487,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2461984277279
    Since we used exactly the same statement (not even a change in space) so Oracle reuses
    the parsed version. Hence mis=0 indicates there wasn’t a hard parse but merely a soft parse, which is a lot cheaper in terms of resource usage.
    Following is the TKPROF output.
    *************************************************************************
    select * from emp where empno = 2
    call count cpu elapsed disk query current rows
    ——- —— ——– ———- ———- ———- ———- ———-
    Parse 2 0.04 0.03 0 2 0 0
    Execute 2 0.00 0.00 0 0 0 0
    Fetch 4 0.00 0.00 0 6 0 2
    ——- —— ——– ———- ———- ———- ———- ———-
    total 8 0.04 0.03 0 8 0 2
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 5
    Rows Row Source Operation
    ——- —————————————————
    1 TABLE ACCESS FULL EMP (cr=3 pr=0 pw=0 time=217 us)
    *************************************************************************
    As you can see from the above output there is one library cache miss and that was
    during first executing of statement.
    There is a measure to check parsing efficiency: the execute-to-parse ratio, which compares the number of executions with the number of parses. In a real database, some SQL statements will be fully reentrant (execute to parse = 100%), while others must be re-parsed for every execution (execute to parse = 1%).
    You can use the statspack or AWR report to get the instance efficiency percentages as shown below.
    Lets first check the ADDM report
    SQL> @?/rdbms/admin/addmrpt.sql
    Current Instance
    ~~~~~~~~~~~~~~~~
    DB Id DB Name Inst Num Instance
    ———– ———— ——– ————
    1053231558 TKR12M 1 tkr12m1

    Instances in this Workload Repository schema
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    DB Id Inst Num DB Name Instance Host
    ———— ——– ———— ———— ————
    3640598056 1 LA4001 LA4001 adsdemo
    4208397564 1 VISASUPG VISASUPG ap6004bld
    1053231558 1 TKR12M tkr12m ap6313rt
    3143212236 1 AZ2TP202 AZ2TP202 ap6002rems
    1967308183 1 AR1202D2 ar1202d2 ap6175rt


    Using 1053231558 for database Id
    Using 1 for instance number

    Specify the number of days of snapshots to choose from
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Entering the number of days (n) will result in the most recent
    (n) days of snapshots being listed. Pressing without
    specifying a number lists all completed snapshots.
    Listing the last 3 days of Completed Snapshots
    Snap
    Instance DB Name Snap Id Snap Started Level
    ———— ———— ——— —————— —–
    tkr12m1 TKR12M 334 07 Jun 2007 00:00 1
    335 07 Jun 2007 01:00 1
    336 07 Jun 2007 02:00 1


    Specify the Begin and End Snapshot Ids
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Enter value for begin_snap: 334
    Begin Snapshot Id specified: 334
    Enter value for end_snap: 357
    End Snapshot Id specified: 357
    Specify the Report Name
    ~~~~~~~~~~~~~~~~~~~~~~~
    The default report file name is addmrpt_1_334_357.txt. To use this name,
    press to continue, otherwise enter an alternative.
    Enter value for report_name: ./addmrpt_1_334_357.txt
    Running the ADDM analysis on the specified pair of snapshots …
    For creating the statspack report you need to first install statspack in Oracle Database 10g. Following are the steps for getting a statspack report.
    1) Install Statspack
    Create a perfstat tablespace and then run the script @?/rdbms/admin/spcreate.
    SQL> create tablespace perfstat datafile '/dy/oracle/product/db10g/dbf/perfstat01.dbf' size 1000M;
    Tablespace created.
    Once the tablespace is created, run @?/rdbms/admin/spcreate and provide the perfstat user password, the perfstat tablespace name, and the temp tablespace name.
    2) Take snapshots
    Use the command "execute statspack.snap" to take a snapshot; after some time take another snapshot.
    3) Run reports
    Use @?/rdbms/admin/spreport to generate the statspack report.
    Instance Efficiency Percentages
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 100.00 Redo NoWait %: 100.00
    Buffer Hit %: 100.00 In-memory Sort %: 100.00
    Library Hit %: 84.44 Soft Parse %: 64.14
    Execute to Parse %: 70.27 Latch Hit %: 100.00
    Parse CPU to Parse Elapsd %: 99.08 % Non-Parse CPU: 67.62
    So from here we can get – Execute to Parse %: 70.27.
    This means that a significant amount of hard parsing is going on in the database.
    Also, the soft parse percentage is low.
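The same instance-wide figure can also be derived directly from v$sysstat, without waiting for an AWR/statspack report (a sketch; statistic names as they appear in v$statname):

```sql
-- Instance-wide execute-to-parse % = 100 * (1 - parses/executions),
-- the same formula AWR uses for its Instance Efficiency section
SELECT ROUND(100 * (1 - p.value / e.value), 2) AS "Execute to Parse %"
FROM   v$sysstat p, v$sysstat e
WHERE  p.name = 'parse count (total)'
AND    e.name = 'execute count';
```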
    You can also find out the sessions with high number of hard parses.
    SQL> SELECT s.sid, s.value "Hard Parses", t.value "Executions Count"
    FROM v$sesstat s, v$sesstat t
    WHERE s.sid = t.sid
    AND s.statistic# = (SELECT statistic# FROM v$statname WHERE name = 'parse count (hard)')
    AND t.statistic# = (SELECT statistic# FROM v$statname WHERE name = 'execute count')
    AND s.value > 0
    ORDER BY 2 DESC;
    SID Hard Parses Executions Count
    ———- ———– —————-
    1089 2023 125305
    1063 557 129291
    963 338 55504
    878 252 46754
    1072 226 179833
    871 181 61504
    The ratio of 'parse count (hard)' to 'execute count' should be on the lower side for better performance.
    You can also check the ratio of 'parse count (total)' to 'execute count'; this ratio too should be on the lower side.
    There are various other ratios that can be computed to assist in determining whether parsing may be a problem:
    For example we can derive other ratios from following query
    SQL> SELECT name, value
    FROM v$sysstat
    WHERE name IN ('parse time cpu', 'parse time elapsed', 'parse count (hard)', 'CPU used by this session');
    NAME VALUE
    —————————————————————- ———-
    CPU used by this session 88709
    parse time cpu 7402
    parse time elapsed 7570
    parse count (hard) 5871
    1) parse time CPU / parse time elapsed
    This ratio indicates how much of the time spent parsing was due to the parse operation itself, rather than waiting for resources, such as latches. A ratio of one is good, indicating that the elapsed time was not spent waiting for highly contended resources.
    The parse time elapsed Oracle metric is the total elapsed time for parsing in 10s of milliseconds. By subtracting parse time cpu from this statistic, the total waiting time for parse resources is determined.
    SQL> select a.value/b.value from v$sysstat a, v$sysstat b
    where a.name = 'parse time cpu'
    and b.name = 'parse time elapsed';
    A.VALUE/B.VALUE
    —————
    .977810065
    2) parse time CPU / CPU used by this session
    This ratio indicates how much of the total CPU used by Oracle server processes was spent on parse-related operations. A ratio closer to zero is good, indicating that the majority of CPU is not spent on parsing.
    SQL> select a.value/b.value from v$sysstat a, v$sysstat b
    where a.name = 'parse time cpu'
    and b.name = 'CPU used by this session';
    A.VALUE/B.VALUE
    —————
    .083350217
    We could go on computing other ratios, but let's stop here since we have other optimization techniques to check.
    Checking library cache performance
    SQL> select sum(pinhits)/sum(pins) from V$LIBRARYCACHE;
    SUM(PINHITS)/SUM(PINS)
    ———————-
    .918569272
    In V$LIBRARYCACHE we have the following three columns:
    GETS: the number of lookups for objects of the namespace.
    PINS: the number of reads or executions of the object.
    RELOADS: cache misses.
    So basically pins are the actual executions.
    A library cache hit ratio between 95 and 99 percent is considered good, but don't rely on these numbers alone; they can be misleading. There can always be a situation where your library cache shows a good hit ratio but sessions are still waiting on resources, so always check for that as well.
    Measuring Library Cache Efficiency
    We can also check the efficiency of the library cache using the same view, V$LIBRARYCACHE:
    SQL> select namespace, pins, pinhits, reloads from V$LIBRARYCACHE
    order by namespace;
    NAMESPACE PINS PINHITS RELOADS
    ————— ———- ———- ———-
    BODY 546 473 20
    CLUSTER 1075 1055 6
    INDEX 2016 1588 41
    JAVA DATA 0 0 0
    JAVA RESOURCE 0 0 0
    JAVA SOURCE 0 0 0
    OBJECT 0 0 0
    PIPE 0 0 0
    SQL AREA 129364 122594 1415
    TABLE/PROCEDURE 35855 29402 1359
    TRIGGER 6 4 0
    11 rows selected.
    Here we see a high number of reloads for SQL AREA and TABLE/PROCEDURE. The memory used by Oracle for the library cache is not just a single heap; for better efficiency it is allocated to different kinds of entities. For example, a chunk of memory is allocated to SQL statements, called the SQL AREA, and memory is likewise allocated for procedures, objects, triggers, etc.
    For each of these areas we can see PINS, PINHITS and RELOADS to gauge the efficiency. Note that a RELOAD involves reloading a SQL statement after it has been aged out.
    Sometimes increasing the size of the shared pool won't help, because the application itself can be big and the number of SQL statements too high to keep in memory. Another option is the KEEP and RECYCLE caches, for objects that are used intensively and objects that can be flushed, respectively. You can also use the DBMS_SHARED_POOL package (KEEP and UNKEEP procedures) to pin objects so that they remain in memory.
    You can use the V$LIBRARY_CACHE_MEMORY view to determine the number of library cache memory objects currently in use in the shared pool and to determine the number of freeable library cache memory objects in the shared pool
    Once we get the exact info about the issues related to library cache like hit ratio, efficiency, information about hard parse and soft parse etc, we can move ahead for optimizing library cache.
    Optimizing the Library Cache

    We can optimize the library cache using some of the initialization parameters. We will look at a few of them now.

    Using the CURSOR_SHARING (Literal Replacement) Parameter

    When a SQL statement is executed, the Oracle engine generates an execution plan and stores it in the library cache. If a statement contains a literal value and the same statement is used with the same value next time, there is no overhead. But if the same statement is used with a different value, Oracle has to reparse the statement, generate execution plans, and select the best plan. This adds system overhead and performance goes down. So it's recommended to use bind variables. The following example shows their use.

    SQL> VARIABLE bindvar NUMBER;
    SQL> BEGIN
    2 :bindvar :=7900;
    3 END;
    4 /
    PL/SQL procedure successfully completed.

    SQL> SELECT ename FROM scott.emp WHERE empno = :bindvar;

    ENAME
    ----------
    JAMES

    In this case the statement can be executed many times but it will be parsed only once. We can change the value of the bind variable bindvar and reuse this query efficiently.
    But it's also true that in applications, developers often don't use bind variables in queries. Here we can help ourselves by setting the CURSOR_SHARING parameter to FORCE or SIMILAR. By default the value is EXACT, which means Oracle looks for the exact statement text (down to the spaces; it is effectively a string match). If the statement is found in the library cache it isn't parsed again; otherwise it is. The FORCE option enables Oracle to reuse parsed SQL statements in its library cache: Oracle replaces the literal values with bind values to make the statements identical.
    The CURSOR_SHARING=FORCE option forces the use of bind variables under all circumstances, whereas CURSOR_SHARING=SIMILAR does so only when Oracle thinks it won't adversely affect optimization.
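Conceptually, the literal replacement done by CURSOR_SHARING=FORCE can be sketched like this (a rough illustration only; the real rewrite happens inside the Oracle parser, and this toy regex only mirrors the idea of the `:SYS_B_n` bind names Oracle generates):

```python
import re

def force_bind_rewrite(sql: str) -> str:
    """Toy sketch of CURSOR_SHARING=FORCE: replace string and numeric
    literals with system-generated bind placeholders so that statements
    differing only in literal values share one parsed form."""
    counter = 0

    def repl(match):
        nonlocal counter
        counter += 1
        return f":SYS_B_{counter - 1}"

    sql = re.sub(r"'[^']*'", repl, sql)                 # string literals
    sql = re.sub(r"(?<![\w:])\d+(\.\d+)?", repl, sql)   # bare numbers
    return sql

# Both statements collapse to the same text, so one cached plan serves both:
print(force_bind_rewrite("SELECT ename FROM emp WHERE empno = 7900"))
print(force_bind_rewrite("SELECT ename FROM emp WHERE empno = 7902"))
# both print: SELECT ename FROM emp WHERE empno = :SYS_B_0
```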
    You can change this parameter dynamically if you are using SPFILE, using following statements.
    SQL> show parameters CURSOR_SHARING

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    cursor_sharing                       string      EXACT

    SQL> alter system set CURSOR_SHARING=FORCE;

    System altered.

    Using the SESSION_CACHED_CURSORS Parameter

    Ideally, an application should have all the parsed statements available in separate cursors, so that if it has to execute a new statement, all it has to do is pick the parsed statement and change the values of the variables. If the application reuses a single cursor with different SQL statements, it still has to pay the cost of a soft parse. After opening a cursor for the first time, Oracle will parse the statement, and then it can reuse this parsed version in the future. This is a much better strategy than re-creating the cursor each time the database executes the same SQL statement. If you can cache all the cursors, you'll retain the server-side context, even when clients close the cursors or reuse them for new SQL statements.
    You'll appreciate the usefulness of the SESSION_CACHED_CURSORS parameter in a situation where users repeatedly parse the same statements, as happens in an Oracle Forms-based application when users switch among various forms. With SESSION_CACHED_CURSORS set, any cursor for which more than three parse requests are made is automatically cached in the session cursor cache, so new calls to parse the same statement avoid the parsing overhead. Setting SESSION_CACHED_CURSORS to a high number makes query processing more efficient: although soft parses are cheaper than hard parses, you can reduce even soft parsing this way.
    You can enforce session caching of cursors by setting the SESSION_CACHED_CURSORS in your initialization parameter file, or dynamically by using the following ALTER SESSION command:
    SQL> ALTER SESSION SET SESSION_CACHED_CURSORS = value;

    You can check how well your SESSION_CACHED_CURSORS value is working by using the V$SYSSTAT view. If the value of session cursor cache hits is low compared to the total parse count for a session, the SESSION_CACHED_CURSORS parameter value should be bumped up.

    SQL> show parameters SESSION_CACHED_CURSORS

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    session_cached_cursors               integer     500
    SQL> select name, value from V$SYSSTAT
    2 where name in ('session cursor cache hits', 'parse count (total)');
    NAME VALUE
    —————————————————————- ———-
    session cursor cache hits 8194208
    parse count (total) 13252290
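Using the two figures above, the session cursor cache is satisfying only about 62 percent of parse calls, which suggests there is room to raise SESSION_CACHED_CURSORS further. The quick arithmetic:

```python
# Fraction of parse calls satisfied by the session cursor cache,
# using the V$SYSSTAT figures listed above.
cache_hits = 8194208
total_parses = 13252290

hit_pct = cache_hits / total_parses * 100
print(f"session cursor cache hit rate: {hit_pct:.1f}%")  # ~61.8%
```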
    The perfect situation is where a SQL statement is soft parsed once in a session and executed multiple times. For a good explanation of bind variables, cursor sharing, and related issues, please read the Oracle white paper "Efficient use of bind variables, cursor_sharing and related cursor parameters" (http://otn.oracle.com/deploy/performance/pdf/cursor.pdf).

    Sizing the Shared Pool

    The best way to maintain the correct size of the memory structures is to set a value for SGA_TARGET. This initialization parameter, introduced in Oracle Database 10g, automatically manages the allocation of:
  1. SHARED_POOL_SIZE
  2. DB_CACHE_SIZE
  3. JAVA_POOL_SIZE
  4. LARGE_POOL_SIZE
  5. STREAMS_POOL_SIZE
    To size the shared pool more accurately, you can query V$SGASTAT and the V$SHARED_POOL_ADVICE view. We can get the total shared pool size using:

    SQL> select sum(bytes)/1024/1024 from v$sgastat
    2 where pool = 'shared pool';

    SUM(BYTES)/1024/1024
    --------------------
    84.0046158
    You can also check the estimated library cache size for a given shared pool size using the following query.

    SQL> select SHARED_POOL_SIZE_FOR_ESTIMATE, SHARED_POOL_SIZE_FACTOR, ESTD_LC_SIZE
    2 from v$shared_pool_advice;
    SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_SIZE
    —————————– ———————– ————
    60 .7143 12
    72 .8571 22
    84 1 33
    96 1.1429 44
    108 1.2857 55
    120 1.4286 66
    132 1.5714 77
    144 1.7143 88
    156 1.8571 99
    168 2 110
    10 rows selected.
    The ESTD_LC_TIME_SAVED column tells the estimated elapsed parse time saved (in seconds), owing to library cache memory objects being found in a shared pool of the specified size. This is the time that would have been spent reloading the required objects into the shared pool had they been aged out due to an insufficient amount of available free memory.

    SQL> select SHARED_POOL_SIZE_FOR_ESTIMATE, SHARED_POOL_SIZE_FACTOR, ESTD_LC_TIME_SAVED
    2 from v$shared_pool_advice;
    SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_TIME_SAVED
    —————————– ———————– ——————
    60 .7143 552
    72 .8571 560
    84 1 564
    96 1.1429 567
    108 1.2857 577
    120 1.4286 577
    132 1.5714 577
    144 1.7143 577
    156 1.8571 577
    168 2 577
    10 rows selected.
    From the queries above we can see that even though the current size of the shared pool is 84M, a better size would be 108M (the estimated time saved stops improving beyond that point). You can also check the parse time required at different shared pool sizes using the following query.
    SQL> select SHARED_POOL_SIZE_FOR_ESTIMATE, SHARED_POOL_SIZE_FACTOR, ESTD_LC_LOAD_TIME
    2 from v$shared_pool_advice;
    SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_LOAD_TIME
    —————————– ———————– —————–
    60 .7143 94
    72 .8571 86
    84 1 82
    96 1.1429 79
    108 1.2857 69
    120 1.4286 69
    132 1.5714 69
    144 1.7143 69
    156 1.8571 69
    168 2 69
    10 rows selected.
    This also shows that a shared pool of 108M gives the minimum elapsed load time (69 sec).

    Pinning Objects in the Shared Pool

    As discussed, if code objects have to be repeatedly hard-parsed and executed, database performance will eventually deteriorate. Your goal should be to keep as much of the executed code in memory as possible so compiled code can be re-executed. You can avoid repeated reloading of objects in your library cache by pinning objects using the DBMS_SHARED_POOL package. Pinning an object means keeping its parse tree and execution plan in memory continuously until the instance is restarted. We can determine the objects to be pinned using the following query.

    SELECT type, COUNT(*) OBJECTS,
           SUM(DECODE(kept,'YES',1,0)) KEPT,
           SUM(loads) - COUNT(*) reloads
    FROM   V$DB_OBJECT_CACHE
    GROUP BY type
    ORDER BY objects DESC;
    TYPE OBJECTS KEPT RELOADS
    —————————- ———- ———- ———-
    CURSOR 1428 584 719
    NOT LOADED 1423 0 179
    TABLE 252 28 320
    VIEW 88 0 117
    SYNONYM 41 0 0
    INVALID TYPE 41 37 12
    INDEX 26 7 0
    PACKAGE 10 1 21
    TYPE 10 0 2
    CLUSTER 8 6 1
    PACKAGE BODY 7 1 6
    TYPE OBJECTS KEPT RELOADS
    —————————- ———- ———- ———-
    SEQUENCE 4 0 1
    TYPE BODY 2 0 0
    RSRC CONSUMER GROUP 1 0 1
    NON-EXISTENT 1 0 0
    PUB_SUB 1 0 8
    FUNCTION 1 0 0
    17 rows selected.
    If the number of reloads is high, you should make sure the objects are pinned in the database:

    SQL> EXECUTE SYS.DBMS_SHARED_POOL.KEEP('owner.object_name');

    You can use the following statements to pin a package in the shared pool and then remove it, if necessary (the flag parameter defaults to 'P', which covers packages, procedures, and functions):

    SQL> EXECUTE SYS.DBMS_SHARED_POOL.KEEP('NEW_EMP.PKG');
    SQL> EXECUTE SYS.DBMS_SHARED_POOL.UNKEEP('NEW_EMP.PKG');

    You can check the total memory consumed by objects of a given type using the following queries.

    SQL> select count(*) from v$DB_OBJECT_CACHE
    2 where type = 'CURSOR';
    COUNT(*)
    ———-
    1431
    SQL> select sum(sharable_mem) from v$DB_OBJECT_CACHE
    2 where type = 'CURSOR';
    SUM(SHARABLE_MEM)
    —————–
    15300756
    The above size is in bytes. So for holding 1431 cursors, the shared pool needs approximately 15MB of memory.
    So basically it all depends on the reports that you take from the database and your initialization parameter settings. So check your database and tune it. Good luck !!!
    The Dictionary Cache

    The dictionary cache, as mentioned earlier, caches data dictionary information. This cache is much smaller than the library cache; to increase or decrease it you modify the shared pool accordingly. If your library cache is satisfactorily configured, chances are the dictionary cache is fine too. You can get an idea of the dictionary cache's efficiency with the following query:

    SQL> select sum(gets - getmisses - fixed) / sum(gets) from v$rowcache;

    SUM(GETS-GETMISSES-FIXED)/SUM(GETS)
    -----------------------------------
    .995904392

    Usually it's a good idea to shoot for a dictionary cache hit ratio of 95 to 99 percent, although Oracle itself sometimes refers to a figure of 85 percent as adequate. To increase the dictionary cache hit ratio, you simply increase the shared pool size for the instance.


=================================================================



  1. Tuning Buffer cache – Oracle Database 10g
    For tuning the buffer cache, we need to understand the following terms very closely.

    Physical reads: These are the data blocks that Oracle reads from disk. Reading data from disk is much more expensive than reading data that's already in Oracle's memory. When you issue a query, Oracle always first tries to retrieve the data from memory (the database buffer cache), not disk.

    DB block gets: This is a read of the buffer cache to retrieve a block in current mode. This most often happens during data modification, when Oracle has to be sure it's updating the most recent version of the block. So when Oracle finds the required data in the database buffer cache, it checks whether the data in the blocks is up to date. If a user changes the data in the buffer cache but hasn't committed those changes yet, new requests for the same data can't show these interim changes. If the data in the buffer blocks is up to date, each such data block retrieved is counted as a DB block get.

    Consistent gets: This is a read of the buffer cache to retrieve a block in consistent mode. This may include reads of undo segments to maintain the read-consistency principle. If Oracle finds that another session has updated the data in a block since the read began, it applies undo information to reconstruct a read-consistent (older) version of the block.

    Logical reads: Every time Oracle is able to satisfy a request for data by reading it from the database buffer cache, you get a logical read. Thus logical reads include both DB block gets and consistent gets.

    Buffer gets: This term refers to the number of database cache buffers retrieved. This value is the same as the logical reads described earlier.

    So basically the buffer cache hit ratio is about the rate at which you find data in the memory cache rather than accessing the disk.
    In short, from the definitions above:
    Buffer cache hit ratio = 1 - (physical reads / logical reads)

    Here logical reads means reads from memory.

    SQL> SELECT name, value FROM v$sysstat
    2 where name IN ('physical reads cache',
    3 'consistent gets from cache',
    4 'db block gets from cache');

    NAME VALUE
    —————————————————————- ———-
    db block gets from cache 164998308
    consistent gets from cache 2136448944
    physical reads cache 2787422
    Here physical reads are the 'physical reads cache' statistic stored in v$sysstat.
    Logical reads are the sum of 'consistent gets from cache' and 'db block gets from cache'.
    So our buffer cache hit ratio will be 1 - (2787422 / (2136448944 + 164998308)) = 0.9988
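The same arithmetic, checked in a few lines (values taken from the v$sysstat listing above):

```python
# Buffer cache hit ratio = 1 - physical reads / logical reads,
# where logical reads = consistent gets + db block gets (from cache).
physical_reads = 2787422
consistent_gets = 2136448944
db_block_gets = 164998308

logical_reads = consistent_gets + db_block_gets
hit_ratio = 1 - physical_reads / logical_reads
print(f"buffer cache hit ratio: {hit_ratio:.4f}")  # ~0.9988
```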
    Another way to calculate the same buffer cache hit ratio is this query on v$sysstat:

    SQL> SELECT 1 - ((p.value - l.value - d.value) / s.value)
    AS "Buffer Cache Hit Ratio"
    FROM v$sysstat s, v$sysstat l, v$sysstat d, v$sysstat p
    WHERE s.name = 'session logical reads'
    AND d.name = 'physical reads direct'
    AND l.name = 'physical reads direct (lob)'
    AND p.name = 'physical reads';
    The above query will give almost the same result. Here, in fact,
    physical reads cache = physical reads - (physical reads direct (lob) + physical reads direct), and session logical reads = consistent gets from cache + db block gets from cache.
    LRU Algorithm or Touch Count Algorithm?

    Since version 8.1, Oracle has used a concept called touch count to measure how many times an object is accessed in the buffer cache. This algorithm of using touch counts for managing the buffer cache is somewhat different from the traditional modified LRU algorithm that Oracle used to employ. Each time a buffer is accessed, its touch count is incremented.
    A low touch count means the block isn't being reused frequently and is therefore wasting buffer cache space. If you have large objects with a low touch count that occupy a significant proportion of the buffer cache, they are ideal candidates for the recycle pool.
    TCH (touch count) is a column in the x$bh fixed table, which keeps track of the touch counts of buffers.
    The following query gives the objects that are consuming a reasonable amount of memory and are candidates for being flushed out of the buffer cache.
    SELECT
    obj object,
    count(1) buffers,
    (count(1)/totsize) * 100 percent_cache
    FROM x$bh,
    (select value totsize
    FROM v$parameter
    WHERE name = 'db_block_buffers')
    WHERE tch=1
    OR (tch = 0 and lru_flag <10)
    GROUP BY obj, totsize
    HAVING (count(1)/totsize) * 100 > 5;
    Remember that this explanation is just for understanding. We don't have to do anything to get these objects flushed out of the buffer cache; their removal is handled automatically by the database engine.
    From this query you get the object_id, and from that you can find the object_name (using dba_objects).

    Using Multiple Pools for the Buffer Cache

    As you already know, we can have multiple pools for the buffer cache, so I won't describe that again here, or we will lose the focus of the current discussion.
    We have basically 3 types of buffer pools.
  1. KEEP
  2. RECYCLE
  3. DEFAULT
    The default pool will always be there. Depending on the situation, we can also create keep and recycle pools. If some objects are accessed frequently, we will want to keep them in the database cache; for such objects we can use the keep buffer pool. Objects that are big and not accessed frequently can be put in the recycle pool. By default, if we don't specify a buffer pool, an object always goes to the default pool.
    V$BUFFER_POOL_STATISTICS will give statistics for all pools.
    Determining Candidates for the Recycle Buffer Pool
    Candidates which are having low TCH value as given by above query are best to put in recycle pool.
    Determining Candidates for the Keep Buffer Cache
    SELECT obj object,
    count(1) buffers,
    AVG(tch) average_touch_count
    FROM x$bh
    WHERE lru_flag = 8
    GROUP BY obj
    HAVING avg(tch) > 5
    AND count(1) > 25;

    The above query gives the candidates that have an average TCH value of more than 5 and occupy more than 25 buffers in the buffer cache. Such objects can be placed in the KEEP buffer pool. You can place an object in a particular pool using the ALTER TABLE command.

    ALTER TABLE test1 STORAGE (buffer_pool keep);
    Sizing the Buffer Cache

    For sizing the buffer cache, you can check the V$DB_CACHE_ADVICE view. This view provides information about various buffer cache sizes and the estimated physical reads if those sizes were used. Based on this result you should be able to decide the correct size for the database buffer cache.

    SQL> select NAME, SIZE_FOR_ESTIMATE, SIZE_FACTOR, ESTD_PHYSICAL_READS, ESTD_PCT_OF_DB_TIME_FOR_READS
    2 from v$db_cache_advice;
    NAME SIZE_FOR_ESTIMATE SIZE_FACTOR ESTD_PHYSICAL_READS ESTD_PCT_OF_DB_TIME_FOR_READS
    ——— —————– ———– ——————- —————————–
    DEFAULT 112 .0946 6871847 2.4
    DEFAULT 224 .1892 5435019 1.8
    DEFAULT 336 .2838 4600629 1.5
    DEFAULT 448 .3784 4125422 1.3
    DEFAULT 560 .473 3831101 1.1
    DEFAULT 672 .5676 3598589 1
    DEFAULT 784 .6622 3381913 .9
    DEFAULT 896 .7568 3154726 .8
    DEFAULT 1008 .8514 2957398 .8
    DEFAULT 1120 .9459 2841502 .7
    DEFAULT 1184 1 2791921 .7
    DEFAULT 1232 1.0405 2757728 .7
    DEFAULT 1344 1.1351 2689955 .6
    DEFAULT 1456 1.2297 2653143 .6
    DEFAULT 1568 1.3243 2631218 .6
    DEFAULT 1680 1.4189 2608447 .6
    DEFAULT 1792 1.5135 2588899 .6
    DEFAULT 1904 1.6081 2573463 .6
    DEFAULT 2016 1.7027 2561587 .6
    DEFAULT 2128 1.7973 2549937 .6
    DEFAULT 2240 1.8919 2535972 .6
    21 rows selected.
    Based on ESTD_PCT_OF_DB_TIME_FOR_READS and ESTD_PHYSICAL_READS, you can decide the correct size for db buffer cache.
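One way to read such advice output is to look at the marginal benefit: how many estimated physical reads each additional increment of cache saves. A small sketch over a few of the rows above shows the diminishing returns past roughly the current size:

```python
# (cache size in MB, estimated physical reads) pairs taken from the
# V$DB_CACHE_ADVICE listing above.
advice = [
    (1120, 2841502),
    (1184, 2791921),
    (1344, 2689955),
    (1456, 2653143),
]

for (mb0, r0), (mb1, r1) in zip(advice, advice[1:]):
    saved_per_mb = (r0 - r1) / (mb1 - mb0)
    print(f"{mb0} -> {mb1} MB: ~{saved_per_mb:.0f} reads saved per extra MB")
```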


=========================================================

Tuning PGA Memory – Oracle database 10g
Correct Size of the PGA

You can get the correct size of the PGA using the V$PGA_TARGET_ADVICE dynamic performance view.

SQL> SELECT ROUND(pga_target_for_estimate/1024/1024) target_mb,
2 estd_pga_cache_hit_percentage cache_hit_perc,
3 estd_overalloc_count
4 FROM V$PGA_TARGET_ADVICE;
TARGET_MB CACHE_HIT_PERC ESTD_OVERALLOC_COUNT
———- ————– ——————–
512 98 909
1024 100 0
2048 100 0
3072 100 0
4096 100 0
4915 100 0
5734 100 0
6554 100 0
7373 100 0
8192 100 0
12288 100 0
TARGET_MB CACHE_HIT_PERC ESTD_OVERALLOC_COUNT
———- ————– ——————–
16384 100 0
24576 100 0
32768 100 0
14 rows selected.
Checking PGA for Each Session

You can check session-level PGA usage with the V$SESSTAT and V$SESSION views, along with the username that is using the memory.

SELECT
s.value,s.sid,a.username
FROM
V$SESSTAT S, V$STATNAME N, V$SESSION A
WHERE
n.STATISTIC# = s.STATISTIC# and
name = 'session pga memory'
AND s.sid=a.sid
ORDER BY s.value;
VALUE SID USERNAME
———- ———- ——————————
487276 1070 APPS
552812 1068 SYS
552812 1088
618348 1009 APPS_READ_ONLY
683884 1091
749420 846 MOBILEADMIN
749420 1090
749420 1051 APPLSYSPUB
749420 1000 APPLSYSPUB
749420 929 APPLSYSPUB
790412 1093
To check the total PGA in use and the PGA hit ratio:

SQL> SELECT * FROM V$PGASTAT;

NAME                                               VALUE      UNIT
————————————————– ———- ————
aggregate PGA target parameter 4294967296 bytes
aggregate PGA auto target 3674290176 bytes
global memory bound 201252864 bytes
total PGA inuse 218925056 bytes
total PGA allocated 433349632 bytes
maximum PGA allocated 1526665216 bytes
total freeable PGA memory 86835200 bytes
process count 113
max processes count 250
PGA memory freed back to OS 8.3910E+10 bytes
total PGA used for auto workareas 6505472 bytes
NAME VALUE UNIT
————————————————– ———- ————
maximum PGA used for auto workareas 70296576 bytes
total PGA used for manual workareas 0 bytes
maximum PGA used for manual workareas 4292608 bytes
over allocation count 0
bytes processed 2.1553E+11 bytes
extra bytes read/written 10403840 bytes
cache hit percentage 99.99 percent
recompute count (total) 205474
19 rows selected.
The ideal way to perform sorts is by doing the entire job in memory. A sort job that Oracle performs entirely in memory is said to be an optimal sort. If you set PGA_AGGREGATE_TARGET too low, some of the sort data is written out directly to disk (temporary tablespace) because the sorts are too large to fit in memory. If only part of a sort job spills over to disk, it's called a 1-pass sort. If the instance performs most of the sort on disk instead of in memory, the response time will be high; this is called a multipass sort.
Another method of checking the efficiency of PGA memory is to check V$SQL_WORKAREA_HISTOGRAM.
V$SQL_WORKAREA_HISTOGRAM displays the cumulative work area execution statistics (cumulated since instance startup) for different work area groups. The work areas are split into 33 groups based on their optimal memory requirements with the requirements increasing in powers of two. That is, work areas whose optimal requirement varies from 0 KB to 1 KB, 1 KB to 2 KB, 2 KB to 4 KB, … and 2 TB to 4 TB.
For each work area group, the V$SQL_WORKAREA_HISTOGRAM view shows how many work areas in that group were able to run in optimal mode, how many were able to run in one-pass mode, and finally how many ran in multi-pass mode. The DBA can take a snapshot at the beginning and the end of a desired time interval to derive the same statistics for that interval.
SELECT
low_optimal_size/1024 “Low (K)”,
(high_optimal_size + 1)/1024 “High (K)”,
optimal_executions “Optimal”,
onepass_executions “1-Pass”,
multipasses_executions “>1 Pass”
FROM v$sql_workarea_histogram
WHERE total_executions <> 0;
Low (K) High (K) Optimal 1-Pass >1 Pass
———- ———- ———- ———- ———-
2 4 6254487 0 0
64 128 110568 0 0
128 256 20041 0 0
256 512 86399 0 0
512 1024 145082 0 0
1024 2048 31207 0 0
2048 4096 104 0 0
4096 8192 79 2 0
8192 16384 116 0 0
16384 32768 30 0 0
32768 65536 4 0 0
Low (K) High (K) Optimal 1-Pass >1 Pass
———- ———- ———- ———- ———-
65536 131072 2 0 0
12 rows selected.
You can check the proportion of work areas since instance startup that used optimal, 1-pass, and multipass PGA memory sizes.

SELECT name PROFILE, cnt COUNT,
DECODE(total, 0, 0, ROUND(cnt*100/total)) PERCENTAGE
FROM (SELECT name, value cnt, (sum(value) over ()) total
FROM V$SYSSTAT
WHERE name like 'workarea exec%');

PROFILE COUNT PERCENTAGE
————————————————– ———- ———-
workarea executions – optimal 6650608 100
workarea executions – onepass 2 0
workarea executions – multipass 0 0
Since almost all of the sort and temporary operations are carried out in the optimal category, we can conclude that our PGA is sized correctly.
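The PERCENTAGE column in that output is just each counter over the total of the three; the same computation in a few lines, using the figures listed above:

```python
# workarea execution counts from the V$SYSSTAT listing above
executions = {
    "optimal": 6650608,
    "onepass": 2,
    "multipass": 0,
}

total = sum(executions.values())
for profile, cnt in executions.items():
    print(f"workarea executions - {profile}: {cnt * 100 / total:.4f}%")
```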

================================================================================


  1. Monitoring CPU Usage – Oracle Database 10g
    System Activity Report (SAR)

    -bash-2.05b$ sar -u 10 5
    Linux 2.4.21-37.0.0.1.4.ELhugemem (ap6188rt) 06/12/2007
    10:48:24 PM CPU %user %nice %system %iowait %idle
    10:48:34 PM all 22.07 0.00 14.36 0.03 63.54
    10:48:44 PM all 16.70 0.00 13.93 0.17 69.20
    10:48:54 PM all 8.80 0.00 8.15 0.25 82.80
    10:49:04 PM all 2.52 0.00 3.55 0.00 93.92
    10:49:14 PM all 2.05 0.00 4.00 0.00 93.95
    Average: all 10.43 0.00 8.80 0.09 80.69

    We can check which processes are consuming the most CPU. In the output of ps -eaf, the fourth column (C) shows the processor utilization for each process. Example:

    oracle03 12979     1  0 21:39 ?        00:00:00 ora_q003_tkr12m1
    oracle03 22815     1  0 21:57 ?        00:00:00 ora_q001_tkr12m1
    oracle03  2720     1  0 22:36 ?        00:00:00 oracletkr12m1 (LOCAL=NO)
    oracle03 30548     1  0 22:42 ?        00:00:00 oracletkr12m1 (LOCAL=NO)
    oracle03 25572     1  0 22:48 ?        00:00:00 oracletkr12m1 (LOCAL=NO)
    oracle03 14232     1 35 22:53 ?        00:00:03 oracletkr12m1 (LOCAL=NO)

    You can also check which database users are consuming the most CPU:

    SELECT n.username,
    s.sid,
    s.value
    FROM v$sesstat s,v$statname t, v$session n
    WHERE s.statistic# = t.statistic#
    AND n.sid = s.sid
    AND t.name = 'CPU used by this session'
    ORDER BY s.value desc;
    USERNAME SID VALUE
    —————————— ———- ———-
    1093 194184
    1092 77446
    1088 67564
    1089 43054
    1090 19192
    1072 15009
    APPS 832 1777
    APPS 998 1190
    APPS 822 577
    APPS 900 508
    APPS 823 477
    You can also check session-level CPU using:

    SQL> select * from v$sesstat
    2 where statistic# = 12
    3 order by value desc;
    SID STATISTIC# VALUE
    ———- ———- ———-
    1093 12 194184
    1092 12 77475
    1088 12 67579
    1089 12 43062
    1090 12 19194
    1072 12 15010
    832 12 1785
    998 12 1197
    822 12 577
    You can also decompose the total CPU usage. Basically:

    total CPU time = parsing CPU usage + recursive CPU usage + other CPU usage

    SELECT name, value FROM V$SYSSTAT
    WHERE name IN ('CPU used by this session',
                   'recursive cpu usage',
                   'parse time cpu');
    NAME VALUE
    —————————————————————- ———-
    recursive cpu usage 6276669
    CPU used by this session 8806491
    parse time cpu 482645
    Ideally, (parsing CPU usage + recursive CPU usage) should be significantly less than CPU used by this session. Here we can see that (parsing CPU usage + recursive CPU usage) is almost equal to CPU used by this session, so we need to do some tuning to reduce the recursive CPU usage.
    To determine the percentage of CPU used for parsing:
    select (a.value / b.value)*100 "% CPU for parsing"
    from V$SYSSTAT a, V$SYSSTAT b
    where a.name = 'parse time cpu'
    and b.name = 'CPU used by this session';
    % CPU for parsing
    —————–
    5.48300146
    Reducing Parse Time CPU Usage

    If parse time CPU is the major part of total CPU usage, you need to reduce it by performing the following steps:
  1. Use bind variables and remove hard-coded literal values from code, as explained in the "Optimizing the Library Cache" section earlier in this chapter.
  2. Make sure you aren't allocating too much memory for the shared pool. Remember that even if you have an exact copy of a new SQL statement in your library cache, Oracle has to find it by scanning all the statements in the cache. If you have a zillion relatively useless statements sitting in the cache, all they're doing is slowing down the instance by increasing the parse time.
  3. Make sure you don't have latch contention on the library cache, which could result in increased parse time CPU usage. We will look at latch contention and how to reduce it later.

    To determine the percentage of recursive CPU usage:
    select (a.value / b.value)*100 "% recursive cpu usage"
    from V$SYSSTAT a, V$SYSSTAT b
    where a.name = 'recursive cpu usage'
    and b.name = 'CPU used by this session';
    Recursive CPU usage
    ——————-
    71.421139
    This is a really bad number for recursive CPU usage. Sometimes, to execute a SQL statement issued by a user, the Oracle server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle server makes recursive calls to allocate the space dynamically if dictionary-managed tablespaces are being used.
    Recursive calls are also generated by the unavailability of dictionary information in the dictionary cache, the firing of database triggers, the execution of DDL, the execution of SQL within PL/SQL blocks, functions or stored procedures, and the enforcement of referential integrity constraints.
    If the recursive CPU usage percentage is a large proportion of total CPU usage, you may want to make sure the shared pool memory allocation is adequate. However, a PL/SQL-based application will always have a significant amount of recursive CPU usage.
    To reduce the recursive CPU usage, make sure that shared pool is sized correctly. Also you can check the parameters like
    CURSOR_SHARING – values can be EXACT, SIMILAR, FORCE
    SESSION_CACHED_CURSORS – value can be anything numeric like 500.
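Both percentages can be cross-checked against the raw V$SYSSTAT counters listed earlier (the listed figures of 5.48300146 and 71.421139 differ very slightly because the counters keep moving between queries):

```python
# Raw counters from the earlier V$SYSSTAT listing
parse_time_cpu = 482645
recursive_cpu = 6276669
cpu_this_session = 8806491

parse_pct = parse_time_cpu / cpu_this_session * 100
recursive_pct = recursive_cpu / cpu_this_session * 100
print(f"% CPU for parsing:   {parse_pct:.2f}")      # ~5.48
print(f"% recursive CPU use: {recursive_pct:.2f}")  # ~71.27
```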



===============================================================================


  1. Monitoring IO Statistics – Oracle Database 10g
    There are several ways to check the IO statistics and load on the devices. We have OS level utilities like iostat and sar. We will check out first the OS utilities and then we will move to check IO stats using database query.
    Example of using sar -d for IO stats:
    (appmgr01) appmgr - -bash $ sar -d 10 5
    Linux 2.4.21-37.0.0.1.4.ELhugemem (ap6157rt) 06/18/2007
    02:19:21 AM DEV tps rd_sec/s wr_sec/s
    02:19:31 AM dev8-0 1.30 0.00 19.20
    02:19:31 AM dev8-1 0.00 0.00 0.00
    02:19:31 AM dev8-2 1.30 0.00 19.20
    02:19:31 AM DEV tps rd_sec/s wr_sec/s
    02:19:41 AM dev8-0 0.50 0.00 12.81
    02:19:41 AM dev8-1 0.00 0.00 0.00
    02:19:41 AM dev8-2 0.50 0.00 12.81
    Here we can see that dev8-0, dev8-1 and dev8-2 are the devices (disks).
    tps is the transfer rate per second, in number of requests.
    rd_sec/s is the number of sectors read from the device per second. The size of a sector is 512 bytes.
    wr_sec/s is the number of sectors written to the device per second. The size of a sector is 512 bytes.
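Since each sector is 512 bytes, converting the sar figures into throughput is straightforward; for example, the 19.20 wr_sec/s seen for dev8-0 above:

```python
SECTOR_BYTES = 512  # sar reports sectors of 512 bytes

wr_sec_per_s = 19.20  # wr_sec/s for dev8-0 in the sar -d sample above
bytes_per_s = wr_sec_per_s * SECTOR_BYTES
print(f"write throughput: {bytes_per_s / 1024:.1f} KB/s")  # 9.6 KB/s
```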
    Another command is iostat
    -bash-2.05b$ iostat
    Linux 2.4.21-37.0.0.1.4.ELhugemem (ap6188rt) 06/18/2007
    avg-cpu: %user %nice %sys %iowait %idle
    7.54 0.00 6.37 0.14 85.94
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sda 1.21 6.16 15.18 19695596 48524898
    sda1 0.94 6.15 10.84 19655648 34666374
    sda2 0.26 0.01 4.33 39404 13858522

    Here you can check the %user column, which will show you the percentage of CPU that occurred while executing at the user level (application)
    You can check the IO stats using some SQLs on dynamic views as given below. This query will provide you with the read/write statistics for each file of database.
    SELECT d.name,
    f.phyrds reads,
    f.phywrts wrts,
    (f.readtim / decode(f.phyrds,0,-1,f.phyrds)) readtime,
    (f.writetim / decode(f.phywrts,0,-1,phywrts)) writetime
    FROM
    v$datafile d,
    v$filestat f
    WHERE
    d.file# = f.file#
    ORDER BY
    d.name;
    NAME READS WRTS READTIME WRITETIME
    ————————————————– ———- ———- ———- ———-
    /slot/ems1149/oracle/db/apps_st/data/tx_idx20.dbf 70 129 .9 .031007752
    /slot/ems1149/oracle/db/apps_st/data/tx_idx21.dbf 46 72 .6521739 .041666667
    /slot/ems1149/oracle/db/apps_st/data/tx_idx22.dbf 151 428 .7814569 .044392523
    /slot/ems1149/oracle/db/apps_st/data/tx_idx23.dbf 107 300 1.177570 .08
    /slot/ems1149/oracle/db/apps_st/data/tx_idx24.dbf 177 935 .4293785 .098395722
    /slot/ems1149/oracle/db/apps_st/data/tx_idx3.dbf 103 1624 1.427184 .580049261
    The v$filestat view gives the read/write statistics for datafiles. Here phyrds is the number of physical reads done from the respective datafile, and phywrts is the number of physical writes done by the DBWR process when writing from the buffer cache to disk. The readtime we calculate here is the average read time per request; similarly for writetime.
    Excessive disk read and write times indicate I/O contention. Here are some points to note in order to reduce I/O contention:
  • Increase the number of disks in the storage system.
  • Separate the database and the redo log files.
  • For a large table, use partitions to reduce I/O.
  • Stripe the data either manually or by using a RAID disk-striping system.
  • Invest in cutting-edge technology, such as file caching, to avoid I/O bottlenecks.
  • Consider using the new Automatic Storage Management system.



========================================================================



  1. Database Wait Statistics – Oracle Database 10g
    When a SQL statement is submitted by a user to the Oracle database, Oracle never executes it continuously in one go; the process never gets to work on the statement without interruption. Often it has to pause and wait for some event, or for some other resource to be released. An active Oracle process is therefore doing one of the following things at any point in time:
    The process is executing the SQL statement.
    The process is waiting for something (for example, a resource such as a database buffer or a latch). It could be waiting for an action such as a write to the buffer cache to complete.
    That's why the response time, the total time taken by Oracle to finish work, is correctly defined as follows:
    response time = service time + wait time
    So only part of the time is spent by the Oracle process actually doing something; the rest of the time the process just waits for some resource to be freed up. It can be waiting for the log writer process, the database writer process, or any other resource.
    The wait event may also be due to unavailable buffers or latches.
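    Assuming a 10g or later instance, the service/wait split can be read straight from the time model statistics. This is only a sketch; V$SYS_TIME_MODEL reports VALUE in microseconds:

```sql
-- 'DB time' is total time spent in database calls (service + wait);
-- 'DB CPU' is the service component. Their difference approximates wait time.
SELECT stat_name,
       ROUND(value / 1000000, 2) seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');
```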
    Four dynamic performance views contain wait information: V$SESSION, V$SYSTEM_EVENT, V$SESSION_EVENT, and V$SESSION_WAIT. These four views list just about all the events the instance was waiting for and the duration of these waits. Understanding these wait events is essential for resolving performance issues.
    There are different wait classes defined in the database; each class contains different wait events. There are around 860 wait events defined in Oracle Database 10g, classified under these wait classes.
    Some of the main wait classes include:
  • Administrative: Waits caused by administrative commands, such as rebuilding an index, for example.
  • Application: Waits due to the application code.
  • Cluster: Waits related to Real Application Cluster management.
  • Commit: Consists of the single wait event log file sync, which is a wait caused by commits in the database.
  • Concurrency: Waits for database resources that are used for locking; for example, latches.
  • Configuration: Waits caused by database or instance configuration problems, including a low shared-pool memory size, for example.
  • Idle: Idle wait events indicate waits that occur when a session isn’t active; for example, the ‘SQL*Net message from client’ wait event.
    You can see the complete list of wait classes using V$SESSION_WAIT_CLASS dynamic performance view.
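    As a quick sketch (10g or later), you can also count how the individual events are distributed across the classes, since V$EVENT_NAME carries a WAIT_CLASS column:

```sql
-- How many of the ~860 wait events fall into each wait class.
SELECT wait_class, COUNT(*) event_count
FROM   v$event_name
GROUP BY wait_class
ORDER BY event_count DESC;
```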
    Analyzing Instance Performance
    You can check the percentage of time the database spends waiting for resources and the percentage it spends in actual execution.
    SQL> SELECT METRIC_NAME, VALUE
    2 FROM V$SYSMETRIC
    3 WHERE METRIC_NAME IN ('Database CPU Time Ratio',
    4 'Database Wait Time Ratio') AND
    5 INTSIZE_CSEC =
    6 (select max(INTSIZE_CSEC) from V$SYSMETRIC);
    METRIC_NAME VALUE
    —————————————————————- ———-
    Database Wait Time Ratio 15.6260647
    Database CPU Time Ratio 84.3739353
    If 'Database Wait Time Ratio' is greater than 'Database CPU Time Ratio', or its value is significant, you need to dig deeper to find out where exactly Oracle is waiting. You basically need to find the type of wait; that gives you the root cause, and once you have the root cause you can work on fixing it.
    You can determine the total waits and percentage of waits by wait class:
    SELECT WAIT_CLASS,
    TOTAL_WAITS,
    round(100 * (TOTAL_WAITS / SUM_WAITS),2) PCT_TOTWAITS,
    ROUND((TIME_WAITED / 100),2) TOT_TIME_WAITED,
    round(100 * (TIME_WAITED / SUM_TIME),2) PCT_TIME
    FROM
    (select WAIT_CLASS,
    TOTAL_WAITS,
    TIME_WAITED
    FROM V$SYSTEM_WAIT_CLASS
    WHERE WAIT_CLASS != 'Idle'),
    (select sum(TOTAL_WAITS) SUM_WAITS,
    sum(TIME_WAITED) SUM_TIME
    from V$SYSTEM_WAIT_CLASS
    where WAIT_CLASS != 'Idle')
    ORDER BY PCT_TIME DESC;
    WAIT_CLASS TOTAL_WAITS PCT_TOTWAITS TOT_TIME_WAITED PCT_TIME
    ——————– ———– ———— ————— ———-
    System I/O 180300 19.96 3008.8 49.53
    Commit 67855 7.51 1302.46 21.44
    User I/O 291565 32.28 1056.55 17.39
    Application 3637 .4 596.66 9.82
    Other 15388 1.7 67.4 1.11
    Concurrency 1264 .14 38.12 .63
    Network 343169 37.99 3.86 .06
    Configuration 22 0 1 .02
    8 rows selected.
    In the above output the percentage of time waited (last column) is the more important figure and gives the correct picture of each wait's impact. For example, the total number of Network waits is large, but the percentage of time they contribute is very small (0.06%).
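    Because a class with many waits can still be cheap, it can also help to look at the average wait per occurrence. A minimal sketch over the same view (TIME_WAITED is in centiseconds):

```sql
-- Average wait per occurrence, to weigh frequency against impact.
SELECT wait_class,
       total_waits,
       ROUND(time_waited / NULLIF(total_waits, 0), 4) avg_wait_cs
FROM   v$system_wait_class
WHERE  wait_class != 'Idle'
ORDER BY avg_wait_cs DESC;
```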
    The key dynamic performance tables for finding wait information are the V$SYSTEM_EVENT, V$SESSION_EVENT, V$SESSION_WAIT, and the V$SESSION views. The first two views show the waiting time for different events.
    V$SYSTEM_EVENT
    The V$SYSTEM_EVENT view shows the total time waited for all events for the entire system since instance startup. The view doesn't focus on the individual sessions experiencing waits, and therefore gives you a high-level view of waits in the system. You can use this view to find out what the top instance-wide wait events are. You can calculate the top n waits in the system by dividing each event's wait time by the total wait time for all events.
    select EVENT, TOTAL_WAITS, TIME_WAITED, WAIT_CLASS from V$SYSTEM_EVENT
    where wait_class != 'Idle'
    order by time_waited desc;
    EVENT TOTAL_WAITS TIME_WAITED WAIT_CLASS
    —————————— ———– ———– ——————–
    log file parallel write 128953 210308 System I/O
    log file sync 67904 130313 Commit
    db file sequential read 259065 73686 User I/O
    enq: TX – row lock contention 226 59080 Application
    control file parallel write 28282 57929 System I/O
    db file parallel write 19155 32924 System I/O
    db file scattered read 31841 30925 User I/O
    os thread startup 95 3262 Concurrency
    rdbms ipc reply 485 2116 Other
    PX Deq: Signal ACK 1971 1103 Other
    local write wait 245 864 User I/O
    V$SESSION_EVENT
    We can get the session-level waits for each event using the V$SESSION_EVENT view. In this view TIME_WAITED is the wait time per session.
    select sid, EVENT, TOTAL_WAITS, TIME_WAITED, WAIT_CLASS
    from V$SESSION_EVENT
    where WAIT_CLASS != 'Idle'
    order by TIME_WAITED;
    SID EVENT TOTAL_WAITS TIME_WAITED WAIT_CLASS
    ———- —————————— ———– ———– ————
    390 os thread startup 55 1918 Concurrency
    393 db file sequential read 10334 4432 User I/O
    396 db file parallel write 8637 14915 System I/O
    397 db file parallel write 10535 18035 System I/O
    394 control file parallel write 28294 57928 System I/O
    395 log file parallel write 129020 210405 System I/O
    As we can see from the above output, session 395 has the maximum wait time, caused by System I/O. Here System I/O is the I/O performed by background processes such as DBWR and LGWR.
    You can get all the database wait events from V$EVENT_NAME, and the meaning of each wait event in Oracle 10g from the documentation.
    V$SESSION_WAIT
    The third dynamic view is V$SESSION_WAIT, which shows the current or just-completed waits for sessions. The information in this view changes continuously based on the types of waits occurring in the system. The real-time information in this view provides tremendous insight into what is holding things up in the database right now. The V$SESSION_WAIT view provides detailed information on the wait event, including details such as file number, latch number, and block number. This level of detail enables you to probe into the exact bottleneck that is slowing down the database; the low-level information helps you zoom in on the root cause of performance problems.
    SQL> select sid, event, WAIT_CLASS, WAIT_TIME, SECONDS_IN_WAIT, STATE from v$session_wait
    2 where wait_class != 'Idle';
    SID EVENT WAIT_CLASS WAIT_TIME
    ———- —————————— ——————– ———-
    SECONDS_IN_WAIT STATE
    ————— ——————-
    337 SQL*Net message to client Network -1
    0 WAITED SHORT TIME
    Here a wait time of -1 means that the session waited for less than 1/100th of a second.
    You can get the complete wait information for a particular session using the V$SESSION view, including the SQL_ID of the SQL causing the wait.
    V$SESSION
    For getting the wait statistics you can follow this methodology:
  • First, look at the V$SYSTEM_EVENT view and rank the top wait events by the total amount of time waited, as well as the average wait time for that event.
  • Next, find out more details about the specific wait event that's at the top of the list. You can check the V$WAITSTAT view for this. If the wait is on an undo header or undo block, the wait is related to undo segments.
  • Finally, use the V$SESSION view to find out the exact objects that may be the source of a problem. For example, if you have a high amount of db file scattered read-type waits, the V$SESSION view will give you the file number and block number involved in the wait events.
    In V$SESSION there is a column called BLOCKING_SESSION_STATUS. If this column has the value 'VALID', we can presume the corresponding session is being blocked.
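    A sketch that pulls the blocked sessions together with their blockers directly from V$SESSION (the BLOCKING_SESSION column is available from 10g onward):

```sql
-- Currently blocked sessions and the SID that is blocking each of them.
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session_status = 'VALID';
```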
    V$SESSION_WAIT_HISTORY
    The V$SESSION_WAIT_HISTORY view holds information about the last ten wait events for each active session. The other wait-related views, such as V$SESSION and V$SESSION_WAIT, show only the most recent wait; that may be a short wait, thus escaping your scrutiny.
    SQL> select seq#, event, p1, p2, p3, wait_time
    2 from v$session_wait_history where sid = <sid>;
    SEQ# EVENT P1 P2 P3 WAIT_TIME
    ———- ———————— ———- ———- ———- ———-
    1 rdbms ipc message 180000 0 0 175787
    2 rdbms ipc message 180000 0 0 175787
    3 rdbms ipc message 180000 0 0 60782
    4 rdbms ipc message 180000 0 0 175787
    5 rdbms ipc message 180000 0 0 138705
    6 db file sequential read 1 368 1 0
    7 rdbms ipc message 180000 0 0 158646
    8 db file sequential read 1 368 1 0
    9 db file sequential read 1 73 1 0
    10 db file sequential read 1 30 1 0
    Note that a zero value in the WAIT_TIME column means the session is currently waiting on that wait event. A nonzero value represents the time waited for the last event.
    V$ACTIVE_SESSION_HISTORY
    The V$SESSION_WAIT view tells you what resource a session is waiting for. The V$SESSION view also provides significant wait information for active sessions. However, neither of these views provides historical information about the waits in your instance. Once a wait is over, you can no longer view it using V$SESSION_WAIT; the waits are so fleeting that by the time you query the view, the wait is usually over. The Active Session History (ASH) feature, by recording session information, enables you to go back in time and review the history of a performance bottleneck in your database. Although the AWR provides hourly snapshots of the instance by default, you won't be able to analyze events that occurred five or ten minutes ago based on AWR data. This is where ASH comes in handy. ASH samples the V$SESSION view every second and collects the wait information for all active sessions. An active session is one that is on the CPU or waiting for a resource. You can view the ASH statistics through the V$ACTIVE_SESSION_HISTORY view, which contains one row for each sampled active session. ASH is a rolling buffer in memory, with older information being overwritten by new session data. Every 60 minutes the MMON background process flushes filtered ASH data to disk as part of the hourly AWR snapshots; if the ASH buffer fills up, the MMNL background process performs the flush. Once the ASH data is flushed to disk, you can no longer see it in V$ACTIVE_SESSION_HISTORY; you have to use the DBA_HIST_ACTIVE_SESS_HISTORY view to look at the historical data.
    Obtaining the Objects with the Highest Waits
    SELECT a.current_obj#, o.object_name, o.object_type, a.event,
    SUM(a.wait_time +
    a.time_waited) total_wait_time
    FROM v$active_session_history a,
    dba_objects o
    WHERE a.sample_time between sysdate - 30/2880 and sysdate
    AND a.current_obj# = o.object_id
    GROUP BY a.current_obj#, o.object_name, o.object_type, a.event
    ORDER BY total_wait_time;
    OBJECT_NAME OBJECT_TYPE EVENT TOTAL_WAIT_TIME
    ——————– ——————- ———————– —————-
    FND_LOGINS TABLE db file sequential read 47480
    KOTTB$ TABLE db file sequential read 48077
    SCHEDULER$_WINDOW TABLE db file sequential read 49205
    ENG_CHANGE_ROUTE_STEPS_TL TABLE db file sequential read 52534
    JDR_PATHS_N1 INDEX db file sequential read 58888
    MTL_ITEM_REVISIONS_B TABLE SQL*Net more data to client
    select p1text, p1, p2text, p2, p3text, p3, a.event
    from v$active_session_history a
    WHERE a.sample_time between sysdate - 30/2880 and sysdate
    AND a.current_obj# = 1938000
    P1TEXT P1 P2TEXT P2 P3TEXT P3 EVENT
    ——– — ——- ——- ———— —— ————————
    file# 71 block# 4389 blocks 1 db file sequential read
    file# 187 block# 89977 blocks 1 db file sequential read
    file# 80 block# 79301 blocks 1 db file sequential read
    driver id 675562835 #bytes 1 0
    file# 11 block# 831 blocks 1 db file sequential read
    driver id 675562835 #bytes 1 0

    So we can see a few historical wait events for a particular object in the database. We can get the segment statistics for this object, come to a conclusion, and implement a solution to reduce the wait. For example, if it is a 'db file sequential read' wait, then:
  • Increase the buffer cache size; this alone may not help much. Before doing so, check the cache miss percentages.
  • Check the query and optimize it so that it reads fewer blocks.
  • Increase the freelists for that segment.
    Some Important Wait Events
    The following query lists the most important wait events in your database in the last 15 minutes:
    SELECT a.event,
    SUM(a.wait_time +
    a.time_waited) total_wait_time
    FROM v$active_session_history a
    WHERE a.sample_time between
    sysdate - 30/2880 and sysdate
    GROUP BY a.event
    ORDER BY total_wait_time DESC;
    EVENT TOTAL_WAIT_TIME
    —————————————————————- —————
    enq: TX – row lock contention 877360289
    816854999
    TCP Socket (KGAS) 13787430
    SQL*Net break/reset to client 6675324
    db file sequential read 2318850
    control file parallel write 1790011
    log file parallel write 1411201
    db file scattered read 62132
    os thread startup 39640
    null event 0
    Users with the Most Waits
    The following query lists the users with the highest wait times within the last 15 minutes:
    SELECT s.sid, s.username,
    SUM(a.wait_time +
    a.time_waited) total_wait_time
    FROM v$active_session_history a,
    v$session s
    WHERE a.sample_time between sysdate - 30/2880 and sysdate
    AND a.session_id=s.sid
    GROUP BY s.sid, s.username
    ORDER BY total_wait_time DESC;
    SID USERNAME TOTAL_WAIT_TIME
    ———- —————————— —————
    773 APPS 877360543
    670 APPS 374767126
    797 98408003
    713 APPS 97655307
    638 APPS 53719218
    726 APPS 39072236
    673 APPS 29353667
    762 APPS 29307261
    746 APPS 29307183
    653 APPS 14677170
    675 APPS 14676426
    Identifying SQL with the Highest Waits
    Using the following query, you can identify the SQL that waited the most in your instance within the last 15 minutes:
    SELECT a.user_id, d.username, s.sql_text,
    SUM(a.wait_time + a.time_waited) total_wait_time
    FROM v$active_session_history a,
    v$sqlarea s,
    dba_users d
    WHERE a.sample_time between sysdate - 30/2880 and sysdate
    AND a.sql_id = s.sql_id
    AND a.user_id = d.user_id
    GROUP BY a.user_id,s.sql_text, d.username;



==========================================================================


  1. Identifying Locks On Objects Using V$LOCKS
    Ever wondered what to do when a user comes to you saying that he/she cannot run a DML statement that should complete within a second? We suspect two reasons:
    1) The database is terribly slow (unlikely to be the case for a simple update).
    2) Someone is holding an exclusive lock on the object the user is trying to update (quite possible).
    There can be many more reasons found upon further investigation. In this post we will see how to investigate reason 2: whether some other user is holding a lock on the object this user is trying to modify.
    Let's take a simple scenario.
    Session 1:
    SQL> create table test (col1 number, col2 varchar2(5));
    Table created.
    SQL> insert into test values (1,'a');
    1 row created.
    SQL> insert into test values (2,'b');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from test for update;
    COL1 COL2
    ———- —–
    1 a
    2 b
    Session 2:
    SQL> update test set col2='a' where col1 = 1;
    The above session 2 will hang!
SQL> select * from v$lock;
ADDR KADDR SID TY ID1 ID2 LMODE REQUEST CTIME BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
00000003BFD5D868 00000003BFD5D888 54 CF 0 0 2 0 669613 0
00000003BFD5D900 00000003BFD5D920 54 XR 4 0 1 0 669625 0
00000003BFD5DA30 00000003BFD5DA50 54 RS 25 1 2 0 669613 0
00000003BFD5DAC8 00000003BFD5DAE8 35 TX 196652 882 0 6 344 0
00000003BFD5DB60 00000003BFD5DB80 53 TS 3 1 3 0 669599 0
00000003BFD5DBF8 00000003BFD5DC18 55 RT 1 0 6 0 669613 0
00000003BFD5DDC0 00000003BFD5DDE0 57 MR 1 0 4 0 669601 0
00000003BFD5DE58 00000003BFD5DE78 57 MR 2 0 4 0 669601 0
00000003BFD5DEF0 00000003BFD5DF10 57 MR 3 0 4 0 669601 0
00000003BFD5DF88 00000003BFD5DFA8 57 MR 4 0 4 0 669601 0
00000003BFD5E020 00000003BFD5E040 57 PW 1 0 3 0 669599 0
00000003BFD5E3C8 00000003BFD5E3E8 57 MR 81 0 4 0 669593 0
00000003BE50B9B8 00000003BE50B9E0 49 TM 21837 0 3 0 374 0
00000003BE50BAB8 00000003BE50BAE0 35 TM 21837 0 3 0 344 0
00000003BDC81138 00000003BDC812C0 49 TX 196652 882 6 0 374 1
15 rows selected.
If a session is blocking another session, you will see BLOCK = 1 for that session. So in our case SID=49 is blocking some other session. We can also find the session that got blocked by SID=49.
There are two columns, ID1 and ID2. The values of ID1 and ID2 for SID=49 will match those of some other SID in the v$lock table. If you look carefully, in our case they match SID=35, so SID 35 is the session blocked by SID=49. The blocked session cannot get its request processed, so its REQUEST column will have a value > 0.
From the above analysis we can say that SID 49 is blocking SID 35.
We can write a query that gives the required output directly:
SQL> select a.SID "Blocking Session", b.SID "Blocked Session"
2 from v$lock a, v$lock b
3 where a.SID != b.SID
4 and a.ID1 = b.ID1
5 and a.ID2 = b.ID2
6 and b.request > 0
7 and a.block = 1;
Blocking Session Blocked Session
—————- —————
49 35
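As an alternative to joining V$LOCK to itself, the DBA_BLOCKERS and DBA_WAITERS views give the same picture. A sketch, assuming the catblock.sql script has been run to create these views:

```sql
-- Who is holding, who is waiting, and in which lock modes.
SELECT waiting_session, holding_session, lock_type,
       mode_held, mode_requested
FROM   dba_waiters;
```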
Let's understand the rest of the columns in v$lock.
ID1 and ID2 -> These represent the rollback segment and transaction table entries for that transaction. When session 1 acquired the lock, it got a rollback segment and a transaction table entry. When another session requested the same row, Oracle tried to generate a CR (consistent read) image by applying the rollback segment, but because of the exclusive lock it could not proceed. Unless the first session releases the lock, the second session cannot proceed. The second session therefore shows the same ID1 and ID2 values as session 1, since it was using the same rollback segment entries to make the CR copy.
TYPE -> This column gives the type of lock obtained on the data. For a session to change data in a table it has to acquire a TX lock; this is the transaction enqueue lock. For more information on lock types, check the definition of v$lock in the Oracle reference documentation.
LMODE -> This is the lock mode, with values from 0 to 6, 6 being the most restrictive and 0 the least restrictive. When a session obtains a lock in mode 6 it holds an exclusive lock and no other session is allowed to update the data. For SID=49 we can see the lock mode is exclusive (6). For more information on lock modes, check the v$lock definition in the Oracle reference documentation.
REQUEST -> This column represents the lock mode requested by a blocked session and is populated only while the session is blocked. In our case SID=35 is blocked, so its LMODE column shows 0 while its REQUEST column shows 6: SID 35 has requested lock mode 6, which has not yet been granted to it.
Sometimes when a session requests an exclusive lock (mode 6), it cannot get it because no Interested Transaction List (ITL) slot is available in the block the user wants to change. The session will then wait in mode 4 (shared mode) for a while, and as soon as an ITL slot is available it obtains the lock in mode 6.
Also in the rows above, for SID 49 and SID 35 there are two more rows where the TYPE column shows 'TM'. These are the DML (table-level) enqueues, acquired in lock mode 3 (row exclusive). This lock prevents any DDL activity on the table.
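The ITL shortage mentioned above can be confirmed from the segment statistics. A sketch, assuming a 10g or later instance; segments that show up here are candidates for a larger INITRANS:

```sql
-- Segments that have experienced waits for a free ITL slot.
SELECT owner, object_name, value itl_waits
FROM   v$segment_statistics
WHERE  statistic_name = 'ITL waits'
AND    value > 0
ORDER BY value DESC;
```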
We can find the object name from the ID1 column value in the rows holding the TM lock: 21837 in our case.
SQL> select object_name from dba_objects where object_id=21837;
OBJECT_NAME
————–
TEST
We can even get the row that is being blocked, using v$session:
SQL> select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
2 from v$session where sid=35;
ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
————- ————– ————— ————-
21837 1 45082 0
These four values are the components of a ROWID, and we can construct the ROWID from them using the DBMS_ROWID package.
SQL> select dbms_rowid.rowid_create (1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW#)
2 from v$session where sid=35;
DBMS_ROWID.ROWID_C
——————
AAAFVNAABAAALAaAAA
Now we can check whether this is the row the blocked session was trying to update:
SQL> select * from test where rowid = 'AAAFVNAABAAALAaAAA';
COL1 COL2
———- —–
1 a
This was the row the blocked session was trying to update.
