Disclaimer

All information is offered in good faith, in the hope that it may be of use to the database community for educational purposes, but it is not guaranteed to be correct, up to date, or suitable for any particular purpose. db.geeksinsight.com accepts no liability in respect of this information or its use. This site is independent of Oracle Corporation and does not represent it in any way; Oracle does not officially sponsor, approve, or endorse this site or its content, and if notified of any such issue I am happy to remove the material. Product and company names mentioned on this website may be the trademarks of their respective owners and are published here for informational purposes only. This is my personal blog: the views expressed on these pages are mine, learnt from other blogs and bloggers to enhance and support the DBA community, and do not represent the thoughts, intentions, plans, or strategies of my current employer, Oracle and its affiliates, or any other company. This website takes no profit from providing this content; it is purely non-profit and for educational purposes only. If you see any content or copyright issues, notify me and I am happy to remove the material. Contact the Geek DBA team via geeksinsights@gmail.com


ORA-01610: recovery using the BACKUP CONTROLFILE option must be done, but how does Oracle know? Where does it look?

SQL> recover database
ORA-00283: recovery session canceled due to errors
ORA-01610: recovery using the BACKUP CONTROLFILE option must be done

I am trying to recover a database after the control file was lost and restored from backup. But how does Oracle know that we are using a backup control file? Where does it look to get this information?

SQL> select 'controlfile' "SCN location",'SYSTEM checkpoint' name,checkpoint_change#
from v$database
union
select 'file in controlfile',name,checkpoint_change#
from v$datafile where name like '%users01%'
union
select 'file header',name,checkpoint_change#
from v$datafile_header where name like '%users01%';

SCN location        NAME                             CHECKPOINT_CHANGE#
------------------- ------------------------------   ------------------
controlfile         SYSTEM checkpoint                            333765
file header         /u02/oradata/OD2/users01.dbf                 355253
file in controlfile /u02/oradata/OD2/users01.dbf                 333765

If you look at the output above, the checkpoint SCN in the datafile header (355253) is ahead of the SCN the control file holds for that file (333765). The control file is therefore older than the datafiles, which is how Oracle knows it is a restored backup, and why recovery must be done with the BACKUP CONTROLFILE option.
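The decision can be sketched as a simple SCN comparison. The following is an illustration only, not Oracle's internal code; the function name is made up, and the SCN values are taken from the query output above:

```python
# A sketch of the check Oracle effectively performs (illustration only).

def needs_backup_controlfile_recovery(controlfile_scn, file_header_scn):
    """A datafile header SCN ahead of the control file's SCN means the
    control file is older than the datafiles -- i.e. it is a restored
    backup, so ORA-01610 is raised unless BACKUP CONTROLFILE is used."""
    return file_header_scn > controlfile_scn

print(needs_backup_controlfile_recovery(333765, 355253))  # True
```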

Internals: Control File Records

What kind of records does the control file maintain in an Oracle database?

Let's have a look.

SQL> oradebug setmypid
Statement processed.
SQL> oradebug unlimit;
Statement processed.
SQL> alter session set events 'immediate trace name controlf level 9';
Session altered.

SQL> oradebug tracefile_name;
/home/oracle/app/oracle/diag/rdbms/testdb/testdb/trace/testdb_ora_1238.trc
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

[oracle@host~]$

egrep "ENTRY|RECORD" /home/oracle/app/oracle/diag/rdbms/testdb/testdb/trace/testdb_ora_1238.trc

DATABASE ENTRY
CHECKPOINT PROGRESS RECORDS
EXTENDED DATABASE ENTRY
REDO THREAD RECORDS
LOG FILE RECORDS
DATA FILE RECORDS
TEMP FILE RECORDS
TABLESPACE RECORDS
RMAN CONFIGURATION RECORDS
FLASHBACK LOGFILE RECORDS
THREAD INSTANCE MAPPING RECORDS
MTTR RECORDS
STANDBY DATABASE MAP RECORDS
RESTORE POINT RECORDS
ACM SERVICE RECORDS
LOG FILE HISTORY RECORDS
OFFLINE RANGE RECORDS
ARCHIVED LOG RECORDS
FOREIGN ARCHIVED LOG RECORDS
BACKUP SET RECORDS
BACKUP PIECE RECORDS
BACKUP DATAFILE RECORDS
BACKUP LOG RECORDS
DATAFILE COPY RECORDS
BACKUP DATAFILE CORRUPTION RECORDS
DATAFILE COPY CORRUPTION RECORDS
DELETION RECORDS
PROXY COPY RECORDS
INCARNATION RECORDS
RMAN STATUS RECORDS
DATAFILE HISTORY RECORDS
NORMAL RESTORE POINT RECORDS
DATABASE BLOCK CORRUPTION RECORDS

-Thanks

Geek DBA

Change in LGWR behavior: living with the beast, or the best?

If you are experiencing log file sync waits after an upgrade to 11.2.0.3, and, importantly, there are no other symptoms of I/O problems or issues in other areas, the problem could be excessive switching between the post/wait and polling wait methods.

While writing the previous post, I came across another interesting note and reference post: there is a change in LGWR behavior from 11.2.0.3 onwards.

Background: LGWR posts the user session that requested the commit (the acknowledgement), and likewise posts other processes such as the checkpoint process.

But due to kernel scheduling delays (CPU starvation, memory starvation), LGWR itself can be a victim waiting to post/poll; from another angle, user sessions may not receive the acknowledgement in time. Either way, log file sync waits can increase drastically.

According to the Metalink note Adaptive Switching Between Log Write Methods can Cause 'log file sync' Waits [ID 1462942.1], on an 11.2.0.3 system (Solaris) log file sync waits can appear even when the other contributors (read here) show low wait times in your troubleshooting.

This is because of a change in the LGWR post/polling behavior (the kernel calls used to post foreground processes), controlled by an underscore parameter called _use_adaptive_log_file_sync (default true).

Description of _use_adaptive_log_file_sync

With " _use_adaptive_log_file_sync " = true:
Adaptive switching between post/wait and polling for log file sync is implemented.
When polling is adaptively selected, the polling interval is also dynamically adjusted.
If post/wait is selected and the foreground processes fail to receive a post from LGWR, an incident is recorded, diagnostic traces are performed, and polling is used instead of post/wait.
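The adaptive behavior can be pictured with a toy model. Everything below is hypothetical: Oracle's real selection heuristics are internal and undocumented, and the function and its inputs are invented purely to illustrate the idea of switching between post/wait and polling:

```python
# Toy model of adaptive switching between post/wait and polling for
# log file sync. Illustration only -- not Oracle's actual heuristics.

def choose_wait_method(avg_post_latency_us, avg_poll_latency_us, missed_posts):
    """Pick the log file sync wait method, roughly mimicking the idea
    behind _use_adaptive_log_file_sync = true."""
    if missed_posts > 0:
        # Per the note: if foregrounds fail to receive a post from LGWR,
        # polling is used instead of post/wait.
        return "polling"
    # Otherwise prefer whichever method currently looks cheaper.
    return "post/wait" if avg_post_latency_us <= avg_poll_latency_us else "polling"

print(choose_wait_method(150, 400, 0))  # post/wait is cheaper here
print(choose_wait_method(150, 400, 2))  # missed posts force polling
```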

If you don't want to live with the beast, set the parameter _use_adaptive_log_file_sync to false and restart the database:

SQL> ALTER SYSTEM SET "_use_adaptive_log_file_sync" = FALSE;

As this parameter defaults to true, consider setting it to false if you are seeing high log file sync waits.

Along with this there are other underscore parameters controlling LGWR behavior, which are discussed in the reference link below. Thanks to the Pythian team and its blog.

References:-

http://www.pythian.com/news/36791/adaptive-log-file-sync-oracle-please-dont-do-that-again/

Adaptive Switching Between Log Write Methods can Cause 'log file sync' Waits [ID 1462942.1]

-Thanks

Geek DBA

Troubleshooting: Logfile sync Wait breakdown & causes

The log file sync event can appear on any busy system, in a badly configured environment, or because of bugs. But log file sync is the accumulated time of several components; it is not only a foreground process waiting for a commit acknowledgement (post/poll) from LGWR.

The log file sync wait event is the time a foreground process spends waiting for its redo to be flushed. According to the documentation, the "log file sync" wait event may be broken down into the following components:

1. Wakeup LGWR if idle
2. LGWR gathers the redo to be written and issues the I/O
3. Wait time for the log write I/O to complete
4. LGWR I/O post-processing: LGWR posts the foreground/user session that the write has completed
5. Foreground/user session wakeup

When you troubleshoot, where do you look for these metrics, and how do you determine which component is causing the high waits?

The log file sync wait can be broken into:

1 = On a busy system this component is usually negligible: LGWR can be assumed to be always active, so it rarely has to be woken by the system dispatcher.

2 = AWR report: two statistics, namely redo write time and user commits.

Redo write time: the total elapsed time of writes from the redo log buffer to the current redo log file (in centiseconds).

User commits: the number of commits, reported in total, per second, and per transaction.

3 = AWR report: the log file parallel write wait event.

Note: steps 2 and 3 are accumulated in the "redo write time" statistic (as found under the STATISTICS section of Statspack/AWR). Step 3 on its own is the "log file parallel write" wait event.

4 = AWR report: calculate the commit frequency. On Solaris, prstat on the LGWR process (with microstate accounting, prstat -mL) shows how much time it is spending delayed on CPU.

5 = Once LGWR posts it, the foreground process is finally woken up by the system dispatcher. This completes the 'log file sync' wait.

So you need to understand whether log file sync is high due to:
1) Excessive commits
2) Disk I/O slowness
3) OS scheduling or CPU issues
To quickly determine this, look at a few stats from the AWR report, as below.
Case 1:-

Event                        Avg wait Time (ms)
------------------------------------------
log file sync                       10

Event                          Avg wait Time (ms)
------------------------------------------
log file parallel write              6.3

Statistic                             Total     per Second
------------------------------- ----------- --------------
redo blocks written                 230,881        2,998.5
redo write time                       6,226           80.9
user commits                         38,135          495.3
user rollbacks                            1            0.0
user calls                          433,717        5,632.7

 

Note: Redo write time in centiseconds

Breakdown: log file sync wait - redo write time per commit - log file parallel write = remainder.

6226 cs / 38135 commits = 0.16 cs, i.e. about 1.6 ms of "redo write time" per commit, out of a 10 ms log file sync.

Therefore roughly 1.6 ms of redo write per commit is not a bad number in itself,
but ~500 commits per second is on the higher side: LGWR is handling excessive commits. Consider having the application reduce commits by batching transactions.
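Redoing the Case 1 arithmetic (figures from the AWR extract above; redo write time is reported in centiseconds, 1 cs = 10 ms):

```python
# Case 1 arithmetic from the AWR extract above.
redo_write_time_cs = 6226   # total "redo write time" (centiseconds)
user_commits = 38135        # total "user commits"

# Redo write time per commit, converted from centiseconds to milliseconds.
per_commit_ms = redo_write_time_cs * 10 / user_commits
print(round(per_commit_ms, 1))  # 1.6 -- writes are cheap; ~495 commits/s is the real issue
```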

Case 2:-

Event                        Avg wait Time (ms)
------------------------------------------
log file sync                       10

Event                          Avg wait Time (ms)
------------------------------------------
log file parallel write              6.3

Statistic                             Total     per Second
------------------------------- ----------- --------------
redo blocks written                 230,881        2,998.5
redo write time                      38,135          495.3
user commits                          6,226           80.9
user rollbacks                            1            0.0
user calls                          433,717        5,632.7

38135/6226 = 6.1ms of "redo write time" per commit

Therefore roughly 6.1 ms per commit is being consumed by the I/O write (compare the 6.3 ms log file parallel write), and this is what is driving the log file sync waits. Check your extended device statistics using
sar, iostat, etc., and consider moving the redo logs to faster disks.
Case 3:-

Event                        Avg wait Time (ms)
------------------------------------------
log file sync                       101

Event                          Avg wait Time (ms)
------------------------------------------
log file parallel write              37

Statistic          Total     per Second     per Trans
------------------------------------------------------
redo write time  275,944           77.6           0.2
user commits   1,163,048          327.1           1.0

776 ms / 327 commits = 2.3 ms of "redo write time" per commit (77.6 cs/s = 776 ms/s)

Note: Redo write time in centiseconds

Breakdown: log file sync wait - redo write time per commit - log file parallel write = remainder.

101-2.3-37=61.7ms

Therefore 61.7 ms out of the 101 ms was going to post-processing (LGWR starved of CPU by load, or other contention/latch waits in the database).

Consider the scheduling priority of LGWR (e.g. via nice), the overall load on the database, and other contention driving CPU usage.
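The Case 3 breakdown can be put into a small helper. This is just the subtraction described above, with the centisecond-to-millisecond conversion made explicit; the function name is made up:

```python
# Breakdown of 'log file sync' into its known components (Case 3 figures).

def log_file_sync_breakdown(sync_ms, redo_write_cs_per_sec,
                            commits_per_sec, parallel_write_ms):
    """Return (redo write ms per commit, unaccounted remainder ms)."""
    # redo write time is reported in centiseconds: 1 cs = 10 ms.
    redo_write_per_commit_ms = redo_write_cs_per_sec * 10 / commits_per_sec
    remainder_ms = sync_ms - redo_write_per_commit_ms - parallel_write_ms
    return round(redo_write_per_commit_ms, 1), round(remainder_ms, 1)

per_commit, remainder = log_file_sync_breakdown(101, 77.6, 327.1, 37)
print(per_commit)  # 2.4 -- redo write ms per commit
print(remainder)   # 61.6 -- unaccounted: post-processing / CPU scheduling
```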

The above was written to clarify my own understanding and to show the log file sync event in conjunction with AWR/sysstat figures to pinpoint the issue. The cases may not be accurate in terms of the stats, but they give an overview of how to break down and troubleshoot the log file sync wait.

References:-

http://www.pythian.com/news/1098/tuning-log-file-sync-event-waits/

NOTE:1376916.1 - Troubleshooting: log file sync' Waits

http://christianbilien.wordpress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-io/

http://www.confio.com/db-resources/oracle-wait-events/log-file-sync/

-Thanks

Geek DBA

Tracking password change in Oracle

Hi

How to track the password change for a user in Oracle?

A) Oracle only tracks the date the password will expire, based on when it was last changed. So by taking DBA_USERS.EXPIRY_DATE and subtracting PASSWORD_LIFE_TIME you can determine when the password was last changed. The last password change time can also be seen directly in the PTIME column of the dictionary table USER$ (on which the DBA_USERS view is based).

But if you have PASSWORD_REUSE_TIME and/or PASSWORD_REUSE_MAX set in the profile assigned to a user account, you can also reference the dictionary table USER_HISTORY$ for when the password was changed for that account. It retains any password that still falls within the PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX limits.

Run this query connected as SYS:

SELECT user$.NAME, user$.PASSWORD, user$.ptime, user_history$.password_date
FROM SYS.user_history$, SYS.user$
WHERE user_history$.user# = user$.user#;

NAME                PTIME     PASSWORD_Changed
------------------- --------- -----------------
Test_AWR_USER         20-JUN-12 04-JUN-12
Test_AWR_USER         20-JUN-12 30-MAY-12
Test_AWR_USER         20-JUN-12 20-JUN-12
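The EXPIRY_DATE arithmetic from the answer above can be sketched as follows. The expiry date and profile lifetime are hypothetical, chosen so the result lines up with the 20-JUN-12 PTIME shown in the output:

```python
from datetime import datetime, timedelta

# DBA_USERS.EXPIRY_DATE minus the profile's PASSWORD_LIFE_TIME gives the
# last password change date. Values below are hypothetical.

def last_password_change(expiry_date, password_life_time_days):
    return expiry_date - timedelta(days=password_life_time_days)

print(last_password_change(datetime(2012, 9, 18), 90).date())  # 2012-06-20
```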

Thanks
Geek DBA

Oracle database limits

Hi,

Quick reference from documentation on Oracle Database limits

Reference:-
http://docs.oracle.com/cd/E11882_01/server.112/e24448/limits.htm

Datatype Limits: Talks about the limits for each data type eg: raw size, char size etc.

Physical Database Limits: Talks about limits of datafile, blocksizes, maxlimits etc

Logical Database Limits: Talks about limits of max subqueries, constraints, partitions,users and roles etc.

Process and Runtime Limits: Talks about limits of max services,locks,servers, etc.

-Thanks
Geek DBA

Opatch: Enable or disable oracle patch using opatch

Opatch: enable/disable a one-off Oracle patch per database using the opatch util

Problem:
========

Consider this situation,

1) You have Shared Oracle Home

2) 2 Databases running

3) You need to apply a patch to one database only

Many of us know that once we apply a patch to a home, it is used by all databases attached to that Oracle Home.

But consider this situation:

1) Not all patches are enabled by default; some have to be enabled manually (I believe from 10g onwards)

2) Even when a patch is enabled, you may need it for only one of your two databases, so you need to be able to disable it for one database and enable it for the other

Back to solution:-
=================

According to Metalink note:- RDBMS Online Patching Aka Hot Patching [ID 761111.1]

1) Assuming you have applied the patch 9877980

2) Verification for database 1 :-

SQL> show parameter db_name
db_name value
========================
db_name TEST1

SQL> oradebug patch list
Patch File Name State
================ =========
bug9877980.pch ENABLED

3) Verification for database 2 :-
SQL> show parameter db_name
db_name value
========================
db_name TEST2

SQL> oradebug patch list
Patch File Name State
================ =========
bug9877980.pch ENABLED

4) Now I do not want the patch 9877980 to be enabled in TEST2 database.

opatch util disableonlinepatch -connectString TEST2:sys:oracle123:hostname -ph /home/oracle/9877980/online

5) Suppose after some time I want to enable the patch again back to TEST2 database.

opatch util enableonlinepatch -connectString TEST2:sys:oracle123:hostname -id 9877980

Another option to enable or disable the patches for given SID ,

Here: - http://db.geeksinsight.com/2012/10/04/switching-onoff-bug-fixes-patches/

Opatch: How to check whether the patch require database bounce or not. Means online apply or not.

From 11g onwards, Oracle releases some patches that can be applied without downtime.

The opatch command line now has an additional query option with an -is_online_patch flag, which can be used to determine whether a patch can be applied online or not.

XXXXXXXX:TEST:UAT $ opatch query -is_online_patch /home/users/oracle/9877980 -invptrloc /u01/in/TEST/oracle/db/11.2.0.2/oraInst.loc
Invoking OPatch 11.2.0.1.1

Oracle Interim Patch Installer version 11.2.0.1.1
Copyright (c) 2009, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/in/TEST/oracle/db/11.2.0.2
Central Inventory : /u01/in/TEST/oracle/orainventory
   from           : /u01/in/TEST/oracle/db/11.2.0.2/oraInst.loc
OPatch version    : 11.2.0.1.1
OUI version       : 11.2.0.2.0
OUI location      : /u01/in/TEST/oracle/db/11.2.0.2/oui
Log file location : /u01/in/TEST/oracle/db/11.2.0.2/cfgtoollogs/opatch/opatch2012-11-07_02-54-03AM.log

Patch history file: /u01/in/TEST/oracle/db/11.2.0.2/cfgtoollogs/opatch/opatch_history.txt

--------------------------------------------------------------------------------
Patch is an online patch: false

OPatch succeeded.
XXXXXXXXX:TEST:UAT $

Here you can see the patch is an offline patch, since the flag is false: you cannot apply it online.

If you have a patch for which the online flag is true, you can apply it with opatch's online apply option:

opatch apply online
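If you script this check, the deciding line of the opatch output can be parsed mechanically. A minimal sketch, assuming the marker line looks exactly as in the example output above:

```python
# Decide whether a patch is online-applicable by parsing the output of
# "opatch query -is_online_patch" (marker line format assumed from the
# example output above).

def is_online_patch(opatch_output):
    for line in opatch_output.splitlines():
        if line.startswith("Patch is an online patch:"):
            return line.split(":", 1)[1].strip().lower() == "true"
    raise ValueError("no 'Patch is an online patch' line in opatch output")

print(is_online_patch("Patch is an online patch: false"))  # False
```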

Hope this helps

Thanks
Geek DBA

Oracle Index Splits: what happens when an index leaf block is full and a new key has to be inserted into it

We all know that Oracle maintains index keys as pointers to the rows of a table, and that there are different index types (B-tree and bitmap). But what happens if new entries have to go into the right-most leaf block, or into the middle of a leaf block, and that leaf block is already full? For B-tree indexes, recall that the logical structure consists of root, branch, and leaf blocks.

To get new keys into leaf blocks that are already full, Oracle splits the leaf block in one of two ways.

90-10 Splits

1) Indexes with monotonically increasing values, e.g. sequence-based ID columns, dates, etc. The following B-tree index has entries 1-35 packed across 10 blocks, with no room for a new entry, and a new value 36 has to be inserted.

[Image: B-tree index with keys 1-35 packed across 10 full leaf blocks]

 

Oracle splits the block as below. This is called a 90-10 index block split: the old right-most key 35 (roughly the top 10% of entries) is moved to a new block, and the new key 36 is inserted there.

 

[Image: after the 90-10 split, a new right-most leaf block holds keys 35 and 36]

 

Index block 50-50 Split

2) Indexes on values that arrive in no particular order, e.g. names, regions, etc. Here the names are sorted alphabetically (A, B, C, ...), and inserted keys land in between the blocks (not at the end of the right-most leaf block).

 

[Image: B-tree index on names, keys sorted alphabetically across leaf blocks]

 

Now if a user inserts a row whose key again starts with G, and (in our case) index leaf block 2 is already full, Oracle allocates a new leaf block under the same branch and splits the original keys roughly 50-50 between the two blocks before inserting the new key. This is called a 50-50 leaf split.

[Image: after the 50-50 split, the keys of leaf block 2 are divided between two leaf blocks]

 

The above applies to B-tree indexes and may not be an exact representation of the index structure, but it gives some insight into splits.
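The two split types can be illustrated with a toy simulation. This models a leaf block as a small sorted list of keys; it is a deliberate simplification of what Oracle actually does on disk, and the function is made up for illustration:

```python
# Toy illustration of B-tree leaf block splits (simplified model).

def split_leaf(block, new_key):
    """Split a full leaf 'block' (a sorted list of keys) to admit new_key.

    90-10 split: the key is beyond the current right-most entry, so the
    existing entries stay put and the new key starts a fresh block
    (a simplification of the right-most split).
    50-50 split: the key lands mid-index, so the entries are divided
    roughly in half and the key inserted on the matching side.
    """
    if new_key > block[-1]:
        return list(block), [new_key]
    mid = len(block) // 2
    left, right = block[:mid], block[mid:]
    (left if new_key <= right[0] else right).append(new_key)
    left.sort()
    right.sort()
    return left, right

print(split_leaf([33, 34, 35, 36], 37))  # 90-10: ([33, 34, 35, 36], [37])
print(split_leaf([10, 20, 30, 40], 25))  # 50-50: ([10, 20, 25], [30, 40])
```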

-Thanks

Geek DBA

References:-

http://richardfoote.wordpress.com/

http://hemantoracledba.blogspot.in/2012/05/index-block-splits.html

Quiz Post 14#: Difference between undo and flashback log, undo vs. flashback

A good old friend of mine had a question. I am not sure I cleared it up when we discussed it, but here is my attempt.

"What is the difference between undo, flashback log, and redo log files?"

First Flashback log vs. Redo log

1) Flashback logs and redo log files are similar in nature

2) LGWR writes to the redo log files

3) RVWR writes to the flashback logs

4) Redo log files are archived, but flashback logs are reused

5) Flashback log files are sized to match the redo log buffer, unlike redo log files (which are custom sized)

6) Redo log files are used to roll changes forward, while flashback logs are used to roll changes back

7) *IMPORTANT* Flashback logs alone are not sufficient for a flashback operation: the redo logs (archives) covering the whole flashback period (the time between the OLDEST_FLASHBACK_TIME column value and the time of failure) must also be available, since they are needed to roll the restored block images forward to the exact target time

Second, flashback log vs. undo:

1) Undo data is recorded at the transaction level

2) Flashback logs capture before-images at the block level

3) Undo does not record DDL operations, whereas the flashback log does

4) Undo data is used to back out changes at the transaction level, while flashback logs are used to back out changes at the database level