Saturday, April 24, 2021

Query to check concurrent program schedules and request set schedules in Oracle Apps R12

 select r.request_id,
       p.user_concurrent_program_name || case
         when p.user_concurrent_program_name = 'Report Set' then
          (select ' - ' || s.user_request_set_name
             from fnd_request_sets_tl s
            where s.application_id = r.argument1
              and s.request_set_id = r.argument2
              and language = 'US')
         when p.user_concurrent_program_name = 'Check Periodic Alert' then
          (select ' - ' || a.alert_name
             from alr_alerts a
            where a.application_id = r.argument1
              and a.alert_id = r.argument2
              and language = 'US')
       end concurrent_program_name,
       case
         when p.user_concurrent_program_name != 'Report Set' and
              p.user_concurrent_program_name != 'Check Periodic Alert' then
          r.argument_text
       end argument_text,
       r.requested_start_date next_run,
       r.hold_flag on_hold,
       decode(c.class_type,
              'P',
              'Periodic',
              'S',
              'On Specific Days',
              'X',
              'Advanced',
              c.class_type) schedule_type,
       case
         when c.class_type = 'P' then
          'Repeat every ' ||
          substr(c.class_info, 1, instr(c.class_info, ':') - 1) ||
          decode(substr(c.class_info, instr(c.class_info, ':', 1, 1) + 1, 1),
                 'N',
                 ' minutes',
                 'M',
                 ' months',
                 'H',
                 ' hours',
                 'D',
                 ' days') ||
          decode(substr(c.class_info, instr(c.class_info, ':', 1, 2) + 1, 1),
                 'S',
                 ' from the start of the prior run',
                 'C',
                 ' from the completion of the prior run')
         when c.class_type = 'S' then
          nvl2(dates.dates, 'Dates: ' || dates.dates || '. ', null) ||
          decode(substr(c.class_info, 32, 1), '1', 'Last day of month ') ||
          decode(sign(to_number(substr(c.class_info, 33))),
                 '1',
                 'Days of week: ' ||
                 decode(substr(c.class_info, 33, 1), '1', 'Su ') ||
                 decode(substr(c.class_info, 34, 1), '1', 'Mo ') ||
                 decode(substr(c.class_info, 35, 1), '1', 'Tu ') ||
                 decode(substr(c.class_info, 36, 1), '1', 'We ') ||
                 decode(substr(c.class_info, 37, 1), '1', 'Th ') ||
                 decode(substr(c.class_info, 38, 1), '1', 'Fr ') ||
                 decode(substr(c.class_info, 39, 1), '1', 'Sa '))
       end schedule,
       c.date1 start_date,
       c.date2 end_date,
       c.class_info
  from fnd_concurrent_requests r,
       fnd_conc_release_classes c,
       fnd_concurrent_programs_tl p,
       (SELECT release_class_id,
               substr(max(SYS_CONNECT_BY_PATH(s, ' ')), 2) dates
          FROM (select release_class_id,
                       rank() over(partition by release_class_id order by s) a,
                       s
                  from (select c.class_info,
                               l,
                               c.release_class_id,
                               decode(substr(c.class_info, l, 1),
                                      '1',
                                      to_char(l)) s
                          from (select level l from dual connect by level <= 31),
                               fnd_conc_release_classes c
                         where c.class_type = 'S')
                 where s is not null)
        CONNECT BY PRIOR
                    (a || release_class_id) = (a - 1) || release_class_id
         START WITH a = 1
         group by release_class_id) dates
 where r.phase_code = 'P'
   and c.application_id = r.release_class_app_id
   and c.release_class_id = r.release_class_id
   and nvl(c.date2, sysdate + 1) > sysdate
   and c.class_type is not null
   and p.concurrent_program_id = r.concurrent_program_id
   and p.application_id = r.program_application_id
   and p.language = 'US'
   and dates.release_class_id(+) = r.release_class_id
   and trunc(r.requested_start_date) between to_date('&begin_date', 'DD-MON-YYYY') and to_date('&end_date', 'DD-MON-YYYY')
 order by on_hold, next_run;
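As a worked example of the class_info decoding in the query above: for a periodic schedule (class_type 'P'), a value such as '5:D:S' expands as shown below. The sample value is illustrative, but the expression is the same one used in the main query, so it can be tested standalone against dual:

```sql
-- Decode a sample periodic class_info value ('5:D:S' is a made-up example).
-- Expected result: 'Repeat every 5 days from the start of the prior run'
select 'Repeat every ' ||
       substr(ci, 1, instr(ci, ':') - 1) ||
       decode(substr(ci, instr(ci, ':', 1, 1) + 1, 1),
              'N', ' minutes', 'M', ' months', 'H', ' hours', 'D', ' days') ||
       decode(substr(ci, instr(ci, ':', 1, 2) + 1, 1),
              'S', ' from the start of the prior run',
              'C', ' from the completion of the prior run') schedule
  from (select '5:D:S' ci from dual);
```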

Concurrent request running longer than expected, concurrent request not picking up jobs

Concurrent request running longer than expected, concurrent request not picking up jobs, concurrent request not fetching or executing data: these are very frequent issues we get from the development team.



Following are the questions to ask the development team:

1. Have any code changes been made to the concurrent program?

2. Has it been running long for the last few runs as well, or only this time?

3. How much time does it normally take to complete?

4. Is this job fetching more data compared to the last run?

5. Does this job run at a specific time, or can it be run anytime?

6. Does this job fetch data over a DB link?

While the development team answers these questions, you can start your basic checks in the environment:

1. Check the load on the server using the top command.

2. Verify the mount point space using df -hP.

3. Check the tablespace sizes.
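Tablespace usage can be checked with a query along these lines (a generic sketch against the DBA views; the percentage formatting is illustrative):

```sql
-- Tablespace usage summary (requires SELECT on DBA views)
select df.tablespace_name,
       round(df.total_mb, 2) total_mb,
       round(df.total_mb - nvl(fs.free_mb, 0), 2) used_mb,
       round(100 * (df.total_mb - nvl(fs.free_mb, 0)) / df.total_mb, 2) pct_used
  from (select tablespace_name, sum(bytes) / 1024 / 1024 total_mb
          from dba_data_files
         group by tablespace_name) df,
       (select tablespace_name, sum(bytes) / 1024 / 1024 free_mb
          from dba_free_space
         group by tablespace_name) fs
 where df.tablespace_name = fs.tablespace_name(+)
 order by pct_used desc;
```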

4. Verify the status of the concurrent request using this query:

select request_id, phase_code, status_code
  from fnd_concurrent_requests
 where request_id = '333333';
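The raw phase and status codes can be translated inline; the decode mappings below follow the standard FND meanings for these columns (a convenience sketch):

```sql
select request_id,
       decode(phase_code,
              'C', 'Completed', 'I', 'Inactive',
              'P', 'Pending',   'R', 'Running', phase_code) phase,
       decode(status_code,
              'A', 'Waiting',   'B', 'Resuming',  'C', 'Normal',
              'D', 'Cancelled', 'E', 'Error',     'F', 'Scheduled',
              'G', 'Warning',   'H', 'On Hold',   'I', 'Normal',
              'M', 'No Manager','Q', 'Standby',   'R', 'Normal',
              'S', 'Suspended', 'T', 'Terminating','U', 'Disabled',
              'W', 'Paused',    'X', 'Terminated','Z', 'Waiting',
              status_code) status
  from fnd_concurrent_requests
 where request_id = &req_id;
```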

5. Check which CM is running this request.

SELECT request_id, user_concurrent_program_name, status_code, phase_code, completion_text
  FROM apps.fnd_conc_req_summary_v
 WHERE request_id = &req_id;

6. Confirm the actual and target process counts of the CM from the above output.

7. Confirm the Concurrent Manager status and queue.
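The manager status and its actual versus target process counts can be confirmed with a query along these lines (a sketch against the standard FND table):

```sql
select q.concurrent_queue_name,
       q.max_processes     target,   -- target number of processes
       q.running_processes actual,   -- processes actually running
       q.enabled_flag
  from apps.fnd_concurrent_queues q
 where q.enabled_flag = 'Y'
 order by q.concurrent_queue_name;
```

A manager whose actual count is below its target (or zero) is a strong hint that the request is stuck in the queue rather than running slowly.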

8. Check the SID of the concurrent request using this query:

SELECT a.request_id, d.sid, d.serial#, d.osuser, d.process, c.spid, d.inst_id
  FROM apps.fnd_concurrent_requests a,
       apps.fnd_concurrent_processes b,
       gv$process c,
       gv$session d
 WHERE a.controlling_manager = b.concurrent_process_id
   AND c.pid = b.oracle_process_id
   AND b.session_id = d.audsid
   AND a.request_id = &req_id
   AND a.phase_code = 'R';

9. Query to get the rows fetched, using the SID from step 8:

column name format a30 word_wrapped
column vlu format 999,999,999,999

select b.name, a.value vlu
  from v$sesstat a, v$statname b
 where a.statistic# = b.statistic#
   and a.sid = &sid
   and a.value != 0
   and b.name like '%row%';


col sid format 999999
col username format a20
col osuser format a15

select a.sid, a.serial#, a.username, a.osuser, b.spid
  from v$session a, v$process b
 where a.paddr = b.addr
   and a.sid = '&sid'
 order by a.sid;

10. Check the database session status and what it is running:

set echo off
set linesize 132
set verify off
set feedback off
set serveroutput on

declare
  cursor cur1 is
    select a.sid sid,
           a.serial# serial,
           a.username username,
           a.status status,
           a.machine machine,
           a.terminal terminal,
           a.program program,
           a.module module,
           a.action action,
           a.sql_hash_value sql_hash_value,
           to_char(a.logon_time, 'DD-Mon-YYYY HH:MI:SS') logontime,
           a.last_call_et last_call_et,
           a.process proc,
           b.spid spid,
           sw.event event,
           sw.state state
      from gv$session a, gv$process b, gv$session_wait sw
     where a.paddr = b.addr
       and a.inst_id = b.inst_id
       and a.sid = '&1'
       and a.inst_id = sw.inst_id
       and a.sid = sw.sid;
begin
  dbms_output.enable(50000);
  dbms_output.put_line('-----------------------------------------------------------------');
  dbms_output.put_line(' Database session detail for the shadow process ');
  dbms_output.put_line('-----------------------------------------------------------------');
  for m in cur1 loop
    dbms_output.put_line(' ');
    dbms_output.put_line('SID............ : ' || m.sid);
    dbms_output.put_line('SERIAL#........ : ' || m.serial);
    dbms_output.put_line('USERNAME....... : ' || m.username);
    dbms_output.put_line('STATUS......... : ' || m.status);
    dbms_output.put_line('Machine........ : ' || m.machine);
    dbms_output.put_line('Terminal....... : ' || m.terminal);
    dbms_output.put_line('Program........ : ' || m.program);
    dbms_output.put_line('Module......... : ' || m.module);
    dbms_output.put_line('Action......... : ' || m.action);
    dbms_output.put_line('SQL Hash Value. : ' || m.sql_hash_value);
    dbms_output.put_line('Logon Time..... : ' || m.logontime);
    dbms_output.put_line('Last Call Et... : ' || m.last_call_et);
    dbms_output.put_line('Process ID..... : ' || m.proc);
    dbms_output.put_line('SPID........... : ' || m.spid);
    dbms_output.put_line('Session waiting for event : ' || m.event);
    dbms_output.put_line('Session state ............: ' || m.state);
    dbms_output.put_line('SQL_TEXT is..........:');
    -- Pull the full statement text, piece by piece, for this session
    for rec in (select v.sql_text
                  from v$session s, v$sqltext v
                 where s.sql_hash_value = v.hash_value
                   and s.sql_address = v.address
                   and s.sid = m.sid
                 order by v.piece) loop
      dbms_output.put_line(rec.sql_text);
    end loop;
    dbms_output.put_line('--------------------------------------------------------------------');
    dbms_output.put_line(' ');
  end loop;
end;

11. Check which SQL ID the session is running; you can get these details using the steps above.
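With the SID from step 8, the currently running SQL ID and its text can also be pulled directly, for example:

```sql
select s.inst_id, s.sid, s.sql_id, q.sql_text
  from gv$session s, gv$sql q
 where s.sql_id = q.sql_id
   and s.inst_id = q.inst_id
   and s.sql_child_number = q.child_number
   and s.sid = &sid;
```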

12. Check the SQL ID's execution history and run the SQL Tuning Advisor against it. The advisor's recommendations will tell you whether the SQL itself is the problem and what corrective action (for example a SQL profile or an index) it suggests.

13. Check for stale statistics on the tables used by the concurrent program:

select table_name, stale_stats, last_analyzed
  from dba_tab_statistics
 where stale_stats = 'YES';

14. You may have to gather statistics on the tables that have stale statistics.
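For example, statistics on a single table can be refreshed with DBMS_STATS; the schema and table names below are placeholders:

```sql
begin
  dbms_stats.gather_table_stats(
    ownname          => 'APPS',           -- placeholder schema
    tabname          => 'XX_SOME_TABLE',  -- placeholder table
    cascade          => true,             -- gather index statistics too
    estimate_percent => dbms_stats.auto_sample_size);
end;
/
```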

15. You can also use OEM to monitor the session ID.

After checking the points above you can definitely narrow down the issue and, based on your findings, suggest corrective actions to the user.


Thursday, April 15, 2021

Miscellaneous Metalink Performance Articles




  • Doc ID 233112.1 “START HERE – Diagnosing Query Tuning Problems” – basically a click to jump to the specific problem being experienced.
  • Doc ID 745216.1 “Query Performance Degradation – Upgrade Related – Recommended Actions” – a tree like structure for performance tuning.
  • Doc ID 398838.1 “FAQ: Query Tuning Frequently Asked Questions” – another tree like structure.
  • Doc ID 223806.1 “Query with Unchanged Execution Plan is Slower than Previously” – another tree like structure.
  • Doc ID 387394.1 “Query using Binds is Suddenly Slow”
  • Doc ID 604256.1 “Why is a Particular Query Slower on One Machine than Another?” – another tree like structure.
  • Doc ID 372431.1 “Troubleshooting: Tuning a New Query”
  • Doc ID 163563.1 “Troubleshooting: Advanced Query Tuning” – another tree like structure
  • Doc ID 122812.1 “How to Tune a Query that Cannot be Modified”
  • Doc ID 67522.1 “Diagnosing Why a Query is Not Using an Index”
  • Doc ID 69992.1 “Why is my hint ignored?”
  • Doc ID 163424.1 “How to Identify a Hot Block within the Buffer Cache”
  • Doc ID 223117.1 “Tuning I/O-related waits” – another tree structure
  • Doc ID 402983.1 “Database Performance FAQ” – mentions pstack, system state dumps, 10046 traces, AWR/Statspack
  • Doc ID 66484.1 “Which Optimizer is Being Used”
  • Doc ID 271196.1 “Automatic SQL Tuning – SQL Profiles”
  • Doc ID 276103.1 “Performance Tuning Using 10g Advisors and Manageability Features”
  • Doc ID 463288.1 “How to generate an outline with a good plan loaded into the shared_pool”
  • Doc ID 43718.1 “View: V$SESSION_WAIT Reference”

 

Miscellaneous Metalink Performance Articles 2



  • Doc ID 179668.1 “TROUBLESHOOTING: Tuning Slow Running Queries”
  • Doc ID 296377.1 “Handling and resolving unshared cursors/large version_counts”
  • Doc ID 43214.1 “AUTOTRACE Option in sqlplus” – lists all of the various SQL*Plus options for controlling the output of expected execution plans for SQL statements, and the runtime statistics.
  • Doc ID 215187.1 “SQLT (SQLTXPLAIN) – Enhanced Explain Plan and related diagnostic information for one SQL”
  • Doc ID 235530.1 “Methods for Obtaining a Formatted Explain Plan” – shows several ways to retrieve execution plans, including DBMS_XPLAN.DISPLAY_CURSOR.
  • Doc ID 41634.1 “TKProf Basic Overview” – quick reference guide for enabling a basic SQL trace, processing the trace file with TKPROF and understanding the contents of the resulting report.
  • Doc ID 39817.1 “Interpreting Raw SQL_TRACE and DBMS_SUPPORT.START_TRACE output” – shows how to decode a raw 10046 extended SQL trace file, includes keywords added in 11g.
  • Doc ID 779226.1 “Troubleshooting Oracle Net”
  • Doc ID 219968.1 “SQL*Net, Net8, Oracle Net Services – Tracing and Logging at a Glance”
  • Doc ID 216912.1 “How to Perform Client-Side Tracing of Programmatic Interfaces on Windows Platforms”
  • Doc ID 438452.1 “Performance Tools Quick Reference Guide”
  • Doc ID 394937.1 “Statistics Package (STATSPACK) Guide” – the 26 page reference for creating and understanding Statspack reports in 9.2.0.1 through 11.1.0.6.
  • Doc ID 215858.1 “Interpreting HANGANALYZE trace files to diagnose hanging and performance problems”
  • Doc ID 423153.1 “Understanding and Reading System states” – shows how to read the contents of systemstate dumps.
  • Doc ID 370363.1 “CASE STUDY: Using Real-Time Diagnostic Tools to Diagnose Intermittent Database Hangs”
  • Doc ID 362791.1 “STACKX Core / Stack Trace Extraction Tool” – Unix/Linux utility that pulls the call stack information from core dump files.
  • Doc ID 352363.1 “LTOM – The On-Board Monitor User Guide” – tracing in the database and in the operating system.
  • Doc ID 301137.1 OS Watcher for Unix/Linux, Doc ID 433472.1 OS Watcher for Windows.
  • Doc ID 377152.1 “Best Practices for automatic statistics collection on Oracle 10g”
  • Doc ID 149560.1 “System Statistics: Collect and Display System Statistics (CPU and IO)”

Thursday, April 1, 2021

How to read TKPROF output: Oracle database performance tuning
TKPROF: Release 12.2.0.1.0 - Development on Thu Dec 27 05:39:56 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Trace file: tracefile_10046.trc
Sort options: default

count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call

SQL ID: 4m94ckmu16f9k Plan Hash: 3910148636

select count(*) from dual

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          0          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=0 pr=0 pw=0 time=21 us starts=1)
         1          1          1   FAST DUAL (cr=0 pr=0 pw=0 time=1 us starts=1 cost=2 size=0 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        1        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2       10.63         10.63

A TKProf output file mainly has the following sections:

1. Tabular statistics
2. Row source operations
3. Wait event information

We'll study each section individually.

Section 1: Tabular statistics

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          0          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Number of plan statistics captured: 1

To understand these statistics we should first understand the following calls:

PARSE: translates the SQL statement into an execution plan, including checks for proper security authorization and for the existence of tables, columns, and other referenced objects.

EXECUTE: actual execution of the statement by Oracle. For INSERT, UPDATE, and DELETE statements this modifies the data; for SELECT statements it identifies the selected rows.

FETCH: retrieves rows returned by a query. Fetches are only performed for SELECT statements.

The other columns of the TKProf output are combined statistics for all parses, all executes, and all fetches of a statement:

count:   number of times a call (parse/execute/fetch) was performed
cpu:     total CPU time in seconds
elapsed: total elapsed time in seconds
disk:    total number of data blocks physically read from the data files on disk
query:   total number of buffers retrieved in consistent mode
current: total number of buffers retrieved in current mode
rows:    total number of rows processed by the SQL statement

The sum of the query and current columns is the total number of buffers accessed, i.e. the logical I/Os (LIOs). In this section, first look for any call that consumes a large amount of CPU time, or a high number of block reads (the query column), especially when the LIOs are very high compared to the number of rows fetched.

Another important thing to check is whether there is a lot of hard parsing, which can be seen from the count of the parse calls and the number of times the statement was hard parsed, i.e. the misses in library cache. All of these give a sense of whether something is wrong with the query and, if there is a problem, where it is. I will try to include some examples of tabular statistics in subsequent posts for scenarios where there is a problem with the library cache or buffer cache.

Section 2: Row source operations

Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=0 pr=0 pw=0 time=21 us starts=1)
         1          1          1   FAST DUAL (cr=0 pr=0 pw=0 time=1 us starts=1 cost=2 size=0 card=1)

Here cr is consistent reads, pr is physical reads, pw is physical writes, and time is time in microseconds. From the tabular statistics we can tell whether there is a problem with the query, and then refer to the row source operations to see which operations are costly.

Section 3: Wait event information

The tabular statistics give a hint about the problem area, and the row source operations show the most costly operation; the wait event information then gives the details of where the query is waiting.

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        1        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2       10.63         10.63

This is another important section of the TKProf output file, as it lists the wait events on which the query is waiting. We can link the wait event information to the theory deduced from the tabular statistics to confirm the bottleneck of the query. From this information we can either try to reduce the cause of the waits or, using the row source information, see whether an alternate access method can be used.
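For reference, a trace file like the one formatted above is typically produced with event 10046 and then processed with TKPROF. A minimal sketch follows; the trace file identifier, file names, and sort options are illustrative:

```sql
-- Enable extended SQL trace for the current session
-- (level 12 captures bind values and wait events):
alter session set tracefile_identifier = 'perf_test';
alter session set events '10046 trace name context forever, level 12';

-- ... run the statement under investigation here ...

-- Turn tracing off:
alter session set events '10046 trace name context off';

-- Then, at the OS prompt, format the raw trace with TKPROF, e.g.:
--   tkprof tracefile_10046.trc tkprof_out.txt sort=exeela,fchela sys=no
```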
