Disclaimer

All information is offered in good faith and in the hope that it may be of use for educational purposes and for the database community, but is not guaranteed to be correct, up to date, or suitable for any particular purpose. db.geeksinsight.com accepts no liability in respect of this information or its use. This site is independent of and does not represent Oracle Corporation in any way. Oracle does not officially sponsor, approve, or endorse this site or its content, and if notified of any such issue I am happy to remove the content. Product and company names mentioned on this website may be the trademarks of their respective owners and are published here for informational purposes only. This is my personal blog; the views expressed on these pages are mine, learnt from other blogs and bloggers, and are intended to support the DBA community. This blog does not represent the thoughts, intentions, plans, or strategies of my current employer, Oracle and its affiliates, or any other company. This website takes no profit for providing this content; it is purely non-profit and for educational purposes only. If you see any content or copyright issues, I am happy to remove the material if you notify me. Contact the Geek DBA team via geeksinsights@gmail.com.


Tools & Scripts

I keep updating these scripts as and when I get time.

For each of the scripts, the usage is as follows:

Current queries: simply run @scriptname.sql

AWR/Statspack: run @scriptname.sql instnumber beginsnapid endsnapid

ASH: run @scriptname.sql and input the number of minutes you would like to see (up to 2 hours of data is available)

DASH: run @scriptname.sql instnumber beginsnapid endsnapid
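For example, to pull the top CPU queries from AWR for instance 1 between snapshots 100 and 110 (the snapshot IDs here are illustrative; script names are as listed in the table below):

```sql
-- current view: no arguments needed
@Current_CPU_Queries.sql

-- AWR history: instance number, begin snap, end snap (illustrative IDs)
@AWR_Top_CPU_Queries.sql 1 100 110
```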

General Scripts for Troubleshooting

 

(To answer: "Can you provide the top statements or sessions with ...")

Task | CURRENT (non-ASH) | PAST: AWR | PAST: Statspack | ASH or DASH | Custom_ASH (for those who don't have ASH)
High CPU | Current_CPU_Queries.sql | AWR_Top_CPU_Queries.sql | Statspack_CPU_Queries.sql | | custom_dash_top_sessions.sql
High I/O | | | | |
High Waits | current_top_waits.sql | awr_top_waits.sql | statspack_top_waits.sql | |
Wait by SQL | | | | | custom_ash_waits_by_sql.sql
Wait by SID | | | | | custom_ash_waits_by_sid.sql
PGA | current_pga.sql, current_pga_by_sid.sql | awr_pga.sql | statspack_pga.sql | ash_pga_by_sid.sql | NA
Temp | current_temp.sql, current_temp_by_sid.sql | awr_temp.sql | statspack_temp.sql | ash_temp_by_sid.sql | NA
Undo | | | | |
Redo | | | | |
Locks | Current_Lock.sql | | | <2HoursLocks.sql (ASH), >2hoursLocks.sql (DASH) |
Latches | | | | |
Expensive Queries | | | | |
Response Time Analysis | | awr_response_time_analysis | | |

Capacity Planning:-

Task --> Source
Estimating Memory --> memory advisories
Estimating CPU --> Average Active Sessions
Estimating Storage --> dba_hist_tbspc_space_usage, dba_free_space
Estimating Redo --> dba_hist_sysstat, v$archived_log
Estimating Undo --> dba_hist_tbspc_space_usage
Estimating DBWR --> v$log_history (log switches)
Estimating ARCH --> v$log_history, v$archived_log
Estimating Object
Estimating Sessions/Processes --> v$sysstat or dba_hist_sysstat (hourly session count)
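For instance, daily redo volume can be estimated from v$archived_log; a minimal sketch (assumes a single archive destination, dest_id = 1):

```sql
-- Approximate redo generated per day, in GB, from archived log history
select trunc(completion_time) day,
       round(sum(blocks * block_size)/1024/1024/1024, 2) redo_gb
from   v$archived_log
where  dest_id = 1
group  by trunc(completion_time)
order  by 1;
```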


Tracing

Detail / How to
Trace current session
Trace another session
Trace by SQL ID
Monitoring SQL execution
Using 10046 (SQL trace)
Using 10053 (optimizer trace)
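A minimal sketch of the tracing options above (the SID/serial# values are placeholders):

```sql
-- Trace your own session at level 12 (waits + binds)
alter session set events '10046 trace name context forever, level 12';
-- ... run the statement you want to trace ...
alter session set events '10046 trace name context off';

-- Trace another session via DBMS_MONITOR (SID 123, serial# 456 are illustrative)
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => true, binds => true);
exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
```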

Statistics Related Tasks:-

Detail Command
Check the Table Statistics:
select table_name,last_analyzed,num_rows,partitioned from dba_tables where table_name='TEST';
select table_name,partition_name,last_analyzed,num_rows from dba_tab_partitions where table_name='TEST';

Check the Index Statistics:
select index_name,last_analyzed,num_rows,clustering_factor,leaf_blocks,distinct_keys from dba_indexes where index_name='TEST_IDX1';
select index_name,partition_name,last_analyzed,num_rows,clustering_factor,leaf_blocks,distinct_keys from dba_ind_partitions where index_name='TEST_IDX1';

Check the Column Statistics:
select table_name,column_name,num_distinct,high_value,low_value,density,num_buckets,histogram from dba_tab_col_statistics where table_name='TEST';

Check the Stale Objects:
select owner,table_name,partition_name,stale_stats from dba_tab_statistics where stale_stats='YES';
Check the Statistics Job Duration
Check the Statistics Scheduling
Check the DML Percentages in Database:

colu anlyzd_rows form 99999,999,999
colu tot_rows form 99999,999,999
colu tab_name form a45
colu chngs form 99,999,999,999
colu pct_c form 9999999990.99
col truncated head 'Trun|cated' for a5 justify l

select dbta.owner||'.'||dbta.table_name tab_name
, dbta.num_rows anlyzd_rows
, to_char(dbta.last_analyzed,'yyyy-mm-dd hh24:mi:ss') last_anlzd
, nvl(dbta.num_rows,0)+nvl(dtm.inserts,0)-nvl(dtm.deletes,0) tot_rows
, nvl(dtm.inserts,0)+nvl(dtm.deletes,0)+nvl(dtm.updates,0) chngs
, (nvl(dtm.inserts,0)+nvl(dtm.deletes,0)+nvl(dtm.updates,0))/greatest(nvl(dbta.num_rows,0),1) pct_c
, dtm.truncated
from dba_tab_statistics dbta
left outer join sys.dba_tab_modifications dtm
on dbta.owner = dtm.table_owner
and dbta.table_name = dtm.table_name
and dtm.partition_name is null
where
--dbta.last_analyzed < sysdate - 1 and
stale_stats = 'YES'
and dbta.owner='&owner' order by dbta.last_analyzed desc;
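For the "Check the Statistics Job Duration" and "Check the Statistics Scheduling" rows above, the following sketch may help (assumes 11g+ automated maintenance tasks):

```sql
-- Duration of recent statistics runs
select operation, target, start_time, end_time - start_time duration
from   dba_optstat_operations
order  by start_time desc;

-- Is the auto stats task enabled?
select client_name, status
from   dba_autotask_client
where  client_name = 'auto optimizer stats collection';

-- In which maintenance windows does it run?
select window_name, repeat_interval, duration
from   dba_scheduler_windows;
```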

Collecting the Statistics For Table:- (Generally do not specify METHOD_OPT, ESTIMATE_PERCENT, or other options; leave them to Oracle, which will collect histograms and choose the best estimate depending on staleness, etc.)

exec dbms_stats.gather_table_stats(ownname=>'USER',tabname=>'TABLE_NAME',cascade=>true, no_invalidate=>false, degree=>3);

For Table:- (If you want to collect the histograms, and specifically on a column, apart from above)

exec dbms_stats.gather_table_stats(ownname=>'USER',tabname=>'TABLE_NAME',cascade=>true, no_invalidate=>false, degree=>3, method_opt=>'for columns empno size auto');

For Partitioned Table:- (If you know which partition to collect statistics for; along with the partition stats you must update the global stats as well)

exec dbms_stats.gather_table_stats(ownname=>'USER',tabname=>'TABLE_NAME',partname=>'PARTNAME', granularity=>'GLOBAL AND PARTITION', cascade=>true, no_invalidate=>false, degree=>3);

---------------------------------------------------------------------------------------------------------------------------

For Index:- Do we really need to collect stats on an index separately? Gathering table stats with cascade=>true already collects index stats.

exec dbms_stats.gather_index_stats(ownname=>'USER', indname=>'INDEX_NAME', degree=>2);

For Partitioned Index:-

exec dbms_stats.gather_index_stats(ownname=>'USER', indname=>'INDEX_NAME', degree=>2, partname=>'PARTNAME', granularity=>'GLOBAL AND PARTITION');

Changing the Statistics Preferences

For Database Level
For Schema Level
For Table Level
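A sketch of the corresponding DBMS_STATS calls (the preference names and values below are examples only):

```sql
-- Database level
exec dbms_stats.set_global_prefs('DEGREE','4');

-- Schema level
exec dbms_stats.set_schema_prefs('SCOTT','METHOD_OPT','FOR ALL COLUMNS SIZE AUTO');

-- Table level
exec dbms_stats.set_table_prefs('SCOTT','TEST','STALE_PERCENT','5');

-- Verify a preference
select dbms_stats.get_prefs('STALE_PERCENT','SCOTT','TEST') from dual;
```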

Tweak/Set the Dummy Statistics for Table & Index
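A sketch using DBMS_STATS.SET_TABLE_STATS / SET_INDEX_STATS (all numbers below are made up for illustration):

```sql
-- Pretend TEST has 1,000,000 rows in 10,000 blocks
exec dbms_stats.set_table_stats(ownname=>'SCOTT', tabname=>'TEST', numrows=>1000000, numblks=>10000, avgrlen=>100);

-- Matching dummy stats for its index
exec dbms_stats.set_index_stats(ownname=>'SCOTT', indname=>'TEST_IDX1', numrows=>1000000, numlblks=>5000, numdist=>1000000);
```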

SQL Performance Troubleshooting

(To troubleshoot SQL performance, you will need to do some basic checks in a systematic approach.)

Step | Detail
1. Get the execution statistics for the given SQL_ID
2. Get the execution plan for the given SQL_ID
3. Get the SQL report for the given SQL_ID
4. Compare the plan_hash_value with the old plan hash value

Check whether the given SQL ID is using the same plan or a different one.

5. Condition 1:- The plan has changed

a) Check where exactly the change appears in the plan, current vs historical

b) Check the statistics of the tables where the change appears, or of all objects

c) Check the volume differences in the execution plans causing the change

Solution 1:- Collect the statistics and ask for a rerun

Solution 2:- If collecting statistics does not resolve the issue and you need a fix on the fly,

create a SQL profile based on the old good plan (see the SQL Profile scripts linked below).

6. Condition 2:- Same plan but still slow (Condition 1 ruled out)

a) Check what the SQL is waiting for: use the SQL report and ASH report for the given SQL_ID

b) Troubleshoot based on the wait event

c) Identify whether the SQL is the culprit or a victim:

culprit:- it consumes more I/O/CPU, changed its plan, etc.

victim:- other consumers make your SQL wait for resources

Solution if Culprit:-
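For steps 1, 2, and 4 above, a quick way to pull the current and historical plans for a SQL_ID:

```sql
-- Current plan and execution statistics from the cursor cache
select * from table(dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST'));

-- All historical plans for the SQL_ID recorded in AWR
select * from table(dbms_xplan.display_awr('&sql_id'));

-- Plan hash value over time, to spot a plan change
select snap_id, plan_hash_value, executions_delta,
       elapsed_time_delta/1000000 elapsed_sec
from   dba_hist_sqlstat
where  sql_id = '&sql_id'
order  by snap_id;
```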

 

A Simple Healthcheck Script to Run against a database, reports to html file.

Backup Scripts

  • Simple Shell script for RMAN Level 0 & Level 1 Script
  • Simple Shell Script for Archive backups
  • Simple shell script for deleting backups
  • Simple shell script for Backup Recovery Area

Automatic Recovery Script

  • Cold backup restore
  • RMAN Restore
  • Duplicating database using RMAN using backups
  • Duplicating database using active database

Automatic Storage Management

    Adding/Dropping a Diskgroup
    Modifying a disk attribute
    Managing ASM LIB Disks

Refresh script using exp/imp

  • Standard export script
  • Standard import script

RAC Scripts

  • SRVCTL Command usage
  • CRSCTL Command usage
  • Registering crs profile
  • Changing the ethernet interface
  • crs_register

Unix

  • Memory Usage
  • I/O Stats

Housekeeping Scripts

  • Purge Traces/Files/Dumps using Simple Shell Script
  • Purge Traces/Files/Dumps using ADRCI - Automatic Simple Shell Script

Dataguard

  • Checking Dataguard status
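A quick status sketch to run on the standby (standard Data Guard views):

```sql
-- Apply and transport lag on the standby
select name, value, time_computed
from   v$dataguard_stats
where  name in ('apply lag','transport lag');

-- Managed recovery / transport processes
select process, status, sequence#
from   v$managed_standby;
```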

Troubleshooting & Maintaining Oracle Streams

  • Add or delete a table to streams
  • Troubleshooting APPLY process
  • Troubleshooting CAPTURE Process
  • Troubleshooting Propagation Process
  • Automatic script generation Excel sheet for adding a table or tables to queues

External Links to wonderful scripts in other blogs.

  1. Tanel Poder: Session snapper
  2. Pete Finnigan: Find privileges for a given schema
  3. Karl Arao's workload generation script
  4. Kerry Osborne's SQL Profile scripts
  5. AWR Top scripts from DB Speed
  6. Ask Tom: Print table script
  7. Finding latch

Capacity Planning Tools:-

  • orca
  • NMON
  • ORION
  • I/O Saturation Toolkit

 

Finding Table Fragmentation and Defragmenting It

Link - http://db.geeksinsight.com/wp-content/uploads/2017/04/Table_Shrink.txt

 

#############################################################

Find the SQL Run History

#############################################

set lines 500  pages 500
col Snap for a25
col SQL_PROFILE for a40
select distinct b.BEGIN_INTERVAL_TIME as Snap,a.PLAN_HASH_VALUE as Plan,a.EXECUTIONS_DELTA as EXECUTIONS,
a.ELAPSED_TIME_DELTA/1000000 as ELAPSED_sec,a.ROWS_PROCESSED_DELTA as "Rows",a.ROWS_PROCESSED_DELTA/CASE WHEN a.EXECUTIONS_DELTA = 0
THEN -1 ELSE a.EXECUTIONS_DELTA END "Avg Rows",a.ELAPSED_TIME_DELTA/1000000/CASE WHEN a.EXECUTIONS_DELTA = 0 THEN -1
ELSE a.EXECUTIONS_DELTA END "Avg Elapsed",a.optimizer_cost,a.SQL_PROFILE
from DBA_HIST_SQLSTAT a,DBA_HIST_SNAPSHOT b where a.SQL_ID ='&sqlid' and
a.snap_id = b.snap_id order by b.BEGIN_INTERVAL_TIME;

#######################################################################

Running a Tuning Advisor Task from the Cursor Cache or from AWR for a Given SQL ID

########################################################################

First check the available snapshot range:

select max(snap_id) from dba_hist_snapshot;

If you want it from AWR:

DECLARE
l_sql_tune_task_id  VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
begin_snap  => 764,
end_snap    => 938,
sql_id      => '19v5guvsgcd1v',
scope       => DBMS_SQLTUNE.scope_comprehensive,
time_limit  => 1000,
task_name   => '19v5guvsgcd1v',
description => 'Tuning task for statement 19v5guvsgcd1v in AWR.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => '19v5guvsgcd1v');

SET LONG 10000;

SET LONGCHUNKSIZE 10000
SET PAGESIZE 1000
SET LINESIZE 200
SELECT DBMS_SQLTUNE.report_tuning_task('19v5guvsgcd1v') AS recommendations FROM dual;
SET PAGESIZE 24

If you want it from the cursor cache:

-- Tuning task created for specific a statement from the cursor cache.
DECLARE
l_sql_tune_task_id  VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sql_id      => '19v5guvsgcd1v',
scope       => DBMS_SQLTUNE.scope_comprehensive,
time_limit  => 60,
task_name   => '19v5guvsgcd1v',
description => 'Tuning task for statement 19v5guvsgcd1v.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

 

################################################################################

To find orphan processes and sessions, the following may help.

1. Run this query to find them:

SELECT spid FROM v$process WHERE NOT EXISTS ( SELECT 1 FROM v$session WHERE paddr = addr);

2. Grep to see what they are and what they are doing:

SELECT '!ps -ef | grep ' || spid FROM v$process WHERE NOT EXISTS ( SELECT 1 FROM v$session WHERE paddr = addr);

3. Kill them if they are not required (DO NOT RUN UNLESS YOU ARE SURE):

SELECT '!kill -9 ' || spid FROM v$process WHERE NOT EXISTS ( SELECT 1 FROM v$session WHERE paddr = addr);

############################################################################

ASH TOP Queries

 

select
decode(nvl(to_char(s.sid),-1),-1,'DISCONNECTED','CONNECTED') "STATUS",
topsession.session_id "SESSION_ID",
topsession.sample_time "SAMPLE_TIME",
u.name "NAME",
topsession.program "PROGRAM",
max(topsession.cpu) "CPU",
max(topsession.waiting) "WAITING",
max(topsession.io) "IO",
max(topsession.total) "TOTAL"
from (
select
ash.session_id,
to_char(ash.sample_time,'DD-MON-YYYY HH24:MI') sample_time,
ash.session_serial#,
ash.user_id,
ash.program,
sum(decode(ash.session_state,'ON CPU',1,0)) cpu,
sum(decode(ash.session_state,'WAITING',1,0)) -
sum(decode(ash.session_state,'WAITING',decode(en.wait_class,'User I/O',1,0),0)) waiting,
sum(decode(ash.session_state,'WAITING',decode(en.wait_class,'User I/O',1,0),0)) io,
count(*) total
from v$active_session_history ash,
v$event_name en
where en.event# = ash.event#
group by ash.session_id, to_char(ash.sample_time,'DD-MON-YYYY HH24:MI'),
ash.session_serial#, ash.user_id, ash.program
) topsession,
v$session s,
sys.user$ u
where
u.user# = topsession.user_id and
/* outer join to v$session because the session might be disconnected */
topsession.session_id = s.sid (+) and
topsession.session_serial# = s.serial# (+) and
topsession.sample_time between '19-AUG-2016 03' and '19-AUG-2016 04'
group by topsession.session_id, topsession.session_serial#, topsession.user_id,
topsession.program, s.sid, u.name, topsession.sample_time
order by max(topsession.total) desc;

Top Sessions Consuming CPU in the Last N Minutes

Select
session_id,
count(*)
from
v$active_session_history
where
session_state= 'ON CPU' and
SAMPLE_TIME > sysdate - (&minutes/(24*60))
group by
session_id
order by
count(*) desc;

 

Top Waiting Sessions in the Last N Minutes

Select
session_id,
count(*)
from
v$active_session_history
where
session_state='WAITING'  and
SAMPLE_TIME >  SYSDATE - (&minutes/(24*60))
group by
session_id
order by
count(*) desc;

 

Top Session Consuming CPU, IO and Waiting last n minutes

select
ash.session_id,
ash.session_serial#,ash.sql_id,
ash.user_id,
ash.program,
sum(decode(ash.session_state,'ON CPU',1,0))     "CPU",
sum(decode(ash.session_state,'WAITING',1,0))    -
sum(decode(ash.session_state,'WAITING',
decode(en.wait_class,'User I/O',1, 0 ), 0))    "WAITING" ,
sum(decode(ash.session_state,'WAITING',
decode(en.wait_class,'User I/O',1, 0 ), 0))    "IO" ,
sum(decode(session_state,'ON CPU',1,1))     "TOTAL"
from v$active_session_history ash,
v$event_name en
where en.event# = ash.event# and  SAMPLE_TIME >  SYSDATE - (&minutes/(24*60))
group by sql_id,session_id,user_id,session_serial#,program
order by sum(decode(session_state,'ON CPU',1,1));

 

ORA-30926 (unable to get a stable set of rows in the source tables): http://stackoverflow.com/questions/2337271/ora-30926-unable-to-get-a-stable-set-of-rows-in-the-source-tables

 

nohup expdp \"/ as sysdba\" full=y directory=DATA_PUMP_DIR dumpfile=CRNUMBER_%U.dmp  parallel=4 logfile=crnumber.log &

 

PGA Allocation from ASH

WITH
pga_data as
(
SELECT /*+ MATERIALIZED */
sample_time,
nvl(sum(ash.pga_allocated/1024/1024),0) AS sum_pga_mb
FROM
dba_hist_active_sess_history ash,
dba_users u
WHERE ash.user_id = u.user_id
AND u.username  LIKE :username
AND sample_time > SYSDATE-:n_days
AND sample_time < SYSDATE
GROUP BY action, sample_time
),
cal_data AS
(
SELECT
trunc(SYSDATE, 'MI') - (LEVEL/(24*60)*:per_mins)     AS date_min,
trunc(SYSDATE, 'MI') - ((LEVEL-1)/(24*60)*:per_mins) AS date_max
FROM dual
CONNECT BY LEVEL < (24*60*:n_days/:per_mins)+1
ORDER BY date_min
)
SELECT /*+ NO_MERGE(h) NO_MERGE(c) */
to_char(c.date_min, 'YYYY-MM-DD HH24:MI:SS') date_min,
trunc(nvl(avg(sum_pga_mb),0), 2) avg_pga_mb,
trunc(nvl(min(sum_pga_mb),0), 2) min_pga_mb,
trunc(nvl(max(sum_pga_mb),0), 2) max_pga_mb
FROM
pga_data h,
cal_data c
WHERE h.sample_time (+) >= c.date_min
AND h.sample_time (+) <  c.date_max
GROUP BY c.date_min;

 

##############################################################

Script to check the AAS for last seven days hourly

set linesize 199
col day for a11
select
to_char(begin_time,'DD-MON-YY') Day,
round(max(decode(to_char(begin_time,'HH24'),'00',maxval,NULL)),2) "00",
round(max(decode(to_char(begin_time,'HH24'),'01',maxval,NULL)),2) "01",
round(max(decode(to_char(begin_time,'HH24'),'02',maxval,NULL)),2) "02",
round(max(decode(to_char(begin_time,'HH24'),'03',maxval,NULL)),2) "03",
round(max(decode(to_char(begin_time,'HH24'),'04',maxval,NULL)),2) "04",
round(max(decode(to_char(begin_time,'HH24'),'05',maxval,NULL)),2) "05",
round(max(decode(to_char(begin_time,'HH24'),'06',maxval,NULL)),2) "06",
round(max(decode(to_char(begin_time,'HH24'),'07',maxval,NULL)),2) "07",
round(max(decode(to_char(begin_time,'HH24'),'08',maxval,NULL)),2) "08",
round(max(decode(to_char(begin_time,'HH24'),'09',maxval,NULL)),2) "09",
round(max(decode(to_char(begin_time,'HH24'),'10',maxval,NULL)),2) "10",
round(max(decode(to_char(begin_time,'HH24'),'11',maxval,NULL)),2) "11",
round(max(decode(to_char(begin_time,'HH24'),'12',maxval,NULL)),2) "12",
round(max(decode(to_char(begin_time,'HH24'),'13',maxval,NULL)),2) "13",
round(max(decode(to_char(begin_time,'HH24'),'14',maxval,NULL)),2) "14",
round(max(decode(to_char(begin_time,'HH24'),'15',maxval,NULL)),2) "15",
round(max(decode(to_char(begin_time,'HH24'),'16',maxval,NULL)),2) "16",
round(max(decode(to_char(begin_time,'HH24'),'17',maxval,NULL)),2) "17",
round(max(decode(to_char(begin_time,'HH24'),'18',maxval,NULL)),2) "18",
round(max(decode(to_char(begin_time,'HH24'),'19',maxval,NULL)),2) "19",
round(max(decode(to_char(begin_time,'HH24'),'20',maxval,NULL)),2) "20",
round(max(decode(to_char(begin_time,'HH24'),'21',maxval,NULL)),2) "21",
round(max(decode(to_char(begin_time,'HH24'),'22',maxval,NULL)),2) "22",
round(max(decode(to_char(begin_time,'HH24'),'23',maxval,NULL)),2) "23"
from     dba_hist_sysmetric_SUMMARY A where BEGIN_TIME > sysdate - 7
and  A.METRIC_NAME in('Average Active Sessions') group by to_char(begin_time,'DD-MON-YY')
/

 

########################################

Resizing Datafile

#####################################################

set serveroutput on

DECLARE
BLKSIZE INTEGER;

BEGIN
SELECT VALUE INTO BLKSIZE FROM V$PARAMETER WHERE NAME = 'db_block_size';

FOR INDEX_ROW IN (
SELECT 'ALTER DATABASE DATAFILE ''' || FILE_NAME || ''' RESIZE ' || CEIL( (NVL(HWM,1)*BLKSIZE)/1024/1024 ) || 'M;' SHRINK_DATAFILES FROM DBA_DATA_FILES DBADF,
(SELECT FILE_ID, MAX(BLOCK_ID+BLOCKS-1) HWM FROM DBA_EXTENTS GROUP BY FILE_ID ) DBAFS
WHERE DBADF.FILE_ID = DBAFS.FILE_ID(+) AND DBADF.TABLESPACE_NAME='&tablespacename' AND CEIL(BLOCKS*BLKSIZE/1024/1024)- CEIL((NVL(HWM,1)* BLKSIZE)/1024/1024 ) > 0
) LOOP
DBMS_OUTPUT.PUT_LINE(INDEX_ROW.SHRINK_DATAFILES);
END LOOP;
END;

/

### Run everything up to and including the / above as one block.

If you find you cannot reclaim much space at the current sizes, check whether any free space is trapped between segments by running the script below.

################################################################

Finding Objects at end of Datafile and Free Number of blocks (Typically used when resizing datafile)

################################################################

set pages 1000

set lines 10000

col segment_name for a40

col partition_name for a40

spool frag_ts.txt

select
file_id,
block_id,
block_id + blocks - 1   end_block,
(block_id + blocks - 1 ) - block_id numblocks,
owner,
segment_name,
partition_name,
segment_type
from
dba_extents
where
tablespace_name = '&tablespacename'
union all
select
file_id,
block_id,
block_id + blocks - 1   end_block,
(block_id + blocks - 1 ) - block_id numblocks,
'free'          owner,
'free'          segment_name,
null            partition_name,
null            segment_type
from
dba_free_space
where
tablespace_name = '&tablespacename'
order by
1,2
/

spool off

The output will be like

stats

As you can see, there are no free blocks at the end of the file, so you cannot resize the datafile until the free blocks between those segments are moved forward. You need to reorganise the tables that have large free extents adjacent to them; in this example it is the stats$sql_summary table, so I could reorganise it as follows.

alter table perfstat.stats$sql_summary enable row movement;

alter table perfstat.stats$sql_summary shrink space cascade;

 

###Temp Usage by Session ###

col hash_value for a40
col tablespace for a10
col username for a15
set linesize 132 pagesize 1000
SELECT s.sid, s.username, u.tablespace, s.sql_hash_value||'/'||u.sqlhash hash_value, u.segtype, u.contents, u.blocks
FROM v$session s, v$tempseg_usage u
WHERE s.saddr = u.session_addr
order by u.blocks;
Undo Advisory:
==============
SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
       SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
       (TO_NUMBER(e.value) * TO_NUMBER(f.value) *
       g.undo_block_per_sec) / (1024*1024)
      "NEEDED UNDO SIZE [MByte]"
  FROM (
       SELECT SUM(a.bytes) undo_size
         FROM v$datafile a,
              v$tablespace b,
              dba_tablespaces c
        WHERE c.contents = 'UNDO'
          AND c.status = 'ONLINE'
          AND b.name = c.tablespace_name
          AND a.ts# = b.ts#
       ) d,
      v$parameter e,
      v$parameter f,
       (
       SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
         undo_block_per_sec
         FROM v$undostat
       ) g
 WHERE e.name = 'undo_retention'
  AND f.name = 'db_block_size';
 ##########

Table Level Statistics Duration

####
spool optimizer_operations
column start_time       format a35
column operation        format a28
column target           format a28
column duration         format a12
column max_duration     format a12
column utc_start_time   format a28
column first_utc_time   format a28
select
        operation,
        target,
        -- start_time,
        cast(sys_extract_utc(start_time) as timestamp(0))                               utc_start_time,
        to_char(extract(hour from (end_time - start_time)),'FM00') ||
        ':' ||
        to_char(extract(minute from (end_time - start_time)),'FM00') ||
        ':' ||
        to_char(round(extract(second from (end_time - start_time)),2),'FM00.00')        duration
from
        dba_optstat_operations
where
--        end_time - start_time >= numtodsinterval(2,'SECOND') and
target='&table_name'
--and     target is not null
order by
        start_time
;

 

Lock/unlock table statistics:

exec dbms_stats.unlock_table_stats('scott', 'test');

exec dbms_stats.lock_table_stats('scott', 'test');

 

 


 

Table block usage (DBMS_SPACE), for diagnosing insert slowness

 

set serveroutput on size unlimited
declare
total_bl number;
total_by number;
unused_bl number;
unused_by number;
last_used_extent_file_id number;
last_used_extent_block_id number;
last_used_block number;
unformatted_blocks number;
unformatted_bytes number;
fs1_bl number;
fs1_by number;
fs2_bl number;
fs2_by number;
fs3_bl number;
fs3_by number;
fs4_bl number;
fs4_by number;
full_bl number;
full_by number;
i number;
BEGIN
dbms_space.unused_space('&schemaname','&tablename', 'TABLE',total_bl , total_by , unused_bl , unused_by , last_used_extent_file_id ,last_used_extent_block_id , last_used_block );
dbms_space.space_usage('&schemaname','&tablename', 'TABLE',unformatted_blocks , unformatted_bytes , fs1_bl , fs1_by , fs2_bl , fs2_by , fs3_bl , fs3_by, fs4_bl , fs4_by , full_bl , full_by );
dbms_output.put_line('| total_bl| total_by| unused_bl| unused_by| last_used_extent_file_id|last_used_extent_block_id| last_used_block');
dbms_output.put_line('| '||lpad(total_bl,8)||'|'||
lpad(total_by,9)||'|'||
lpad(unused_bl,10)||'|'||
lpad(unused_by,10)||'|'||
lpad(last_used_extent_file_id,25)||'|'||
lpad(last_used_extent_block_id,26)||'|'||
lpad(last_used_block,16));
dbms_output.put_line(' .');
dbms_output.put_line('| unformatted_blocks| unformatted_bytes| fs1_bl| fs1_by| fs2_bl|fs2_by| fs3_bl| fs3_by| fs4_bl| fs4_by| full_bl| full_by');
dbms_output.put_line('| '||lpad(unformatted_blocks,18)||'|'||
lpad(unformatted_bytes,18)||'|'||
lpad(fs1_bl,7)||'|'||
lpad(fs1_by,7)||'|'||
lpad(fs2_bl,7)||'|'||
lpad(fs2_by,7)||'|'||
lpad(fs3_bl,7)||'|'||
lpad(fs3_by,7)||'|'||
lpad(fs4_bl,7)||'|'||
lpad(fs4_by,7)||'|'||
lpad(full_bl,8)||'|'||
lpad(full_by,8));
end;
/

 


 

 

Download

-Thanks