We want a daily audit report from our databases showing who has made manual modifications, so we can look back at the changes when needed.
cat Download_Audit_Files.sh
#!/bin/bash
########################################
# Configuration
########################################
AUDITLOGDATE=`date +"%Y%m%d" --date="$2 days ago"`
AUDIT_DIR=/backups/washexports/audit
S3BUCKET=s3://databaselogs.geeksinsight.com/database/audit/$1/
DBUSERNAME=dbmanager
DBPASSWORD=password
SCRIPTS_DIR=/scripts/db_reconfiguration          # base directory holding this script, analyze2.py and logs/
MAIL_RECIPIENT="mail@123.com,mail2@123.com"       # DB audit report recipients
#######################################
# Export variables
#######################################
export AWS_DEFAULT_REGION=ap-southeast-2
export ORACLE_HOME=/opt/oracle/app/oracle/product/11.2.0/client_1
export PATH=/usr/local/rvm/gems/ruby-2.1.5/bin:/usr/local/rvm/gems/ruby-2.1.5@global/bin:/usr/local/rvm/rubies/ruby-2.1.5/bin:/usr/lib64/qt-3.3/bin:/opt/aws/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/oracle/app/oracle/product/11.2.0/client_1/bin:/usr/local/rvm/bin:/root/bin
##############################################
# Download the xml files for given instance
##############################################
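# describe-db-log-files lists the audit XML files for the instance; the grep/cut/rev
# pipeline extracts the file names, keeps only those for $AUDITLOGDATE, and each file
# is then appended into a single daily XML file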
for var in `aws rds describe-db-log-files --db-instance-identifier $1 --filename-contains "xml" | grep LogFileName | sort -r | cut -d '"' -f4- |rev | cut -c 3- | rev | grep _$AUDITLOGDATE`
do
aws rds download-db-log-file-portion --db-instance-identifier $1 --log-file-name $var --max-items 99999999 --starting-token 0 --output text >> ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
done
######################################
# Cleanse the Audit Files
######################################
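# Each downloaded portion carries its own XML prolog and <Audit>/<AuditRecord> wrappers;
# strip them here so the concatenated records can be re-wrapped below as one document
# (</AuditRecord> is re-inserted after every </DBID> so each record stays well-formed)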
#sed -i '/pscs/d' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i '/<\/Audit>/d' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i '/<\/AuditRecord>/d' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i '/xml/d' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i '/Version/d' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i -e 's/<\/DBID>/<\/DBID><\/AuditRecord>/g' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
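# Delete every line containing 'truncated' together with the line before it
# (these mark partial records in the downloaded text output)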
vim -e ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml <<@@@
g/truncated/.-1,.d
wq
@@@
sed -i '1i\'"<root><Audit>" ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i '$ a\'"</Audit></root>" ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
##################################################################
# Generate report using analyze2.py python script
# You may need to further customize the script to suit your needs
##################################################################
cd ${AUDIT_DIR}
${SCRIPTS_DIR}/audit/analyze2.py ${AUDITLOGDATE}_$1.xml >> ${SCRIPTS_DIR}/logs/Audit_Process_${AUDITLOGDATE}_$1.log
SIZE=`ls -ltr ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml | tr -s ' ' | cut -d ' ' -f 5`
##################################################################
#Send Email of the report
###################################################################
cat ${SCRIPTS_DIR}/logs/Audit_Process_${AUDITLOGDATE}_$1.log | mailx -s "Campus Database Audit Log - ${AUDITLOGDATE}" -a "${SCRIPTS_DIR}/logs/Audit_Process_${AUDITLOGDATE}_$1.log" ${MAIL_RECIPIENT}
###########################################
# Copy final audit xml file to S3 Bucket
###########################################
gzip ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
aws s3 cp ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml.gz ${S3BUCKET}
rm -f ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml.gz
#####################################################################
## Get Database Endpoint Name
#####################################################################
EPNAME=`aws rds describe-db-instances --db-instance-identifier $1 | grep -i "Address" | cut -d '"' -f4`
echo $EPNAME
####################################################################
## Get Database Name for given Instance
####################################################################
DBNAME=`aws rds describe-db-instances --db-instance-identifier $1 | grep -i "DBName" | cut -d '"' -f4`
echo $DBNAME
###################################################################
# Purge XML files from RDS Instance
##################################################################
sqlplus -s ${DBUSERNAME}/${DBPASSWORD}@$EPNAME/$DBNAME <<EOF
set pages 0
set echo off
set lines 1000
set feedback off
set termout off
spool ${SCRIPTS_DIR}/purgexml.sql
select 'exec utl_file.fremove (''ADUMP'','||''''||filename||''''||');' from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('ADUMP')) where trunc(mtime) <= trunc(sysdate) - 1;
spool off
@${SCRIPTS_DIR}/purgexml.sql
EOF
sqlplus -s ${DBUSERNAME}/${DBPASSWORD}@$EPNAME/$DBNAME <<EOF
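-- Note: with use_last_arch_timestamp => TRUE, only records older than the last
-- archive timestamp are purged; that timestamp is assumed to be maintained
-- separately via DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP.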
BEGIN
sys.DBMS_AUDIT_MGMT.clean_audit_trail(
audit_trail_type => sys.DBMS_AUDIT_MGMT.AUDIT_TRAIL_XML,
use_last_arch_timestamp => true);
END;
/
BEGIN
sys.DBMS_AUDIT_MGMT.clean_audit_trail(
audit_trail_type => sys.DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
use_last_arch_timestamp => true);
END;
/
EOF
###################################################################
# Remove the generated purge script
##################################################################
rm -f ${SCRIPTS_DIR}/purgexml.sql
exit 0
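For reference, after the cleansing and wrapping steps the daily file handed to analyze2.py looks roughly like this (the values are illustrative, and only the elements the Python script reads are shown):

<root><Audit>
<AuditRecord>
 <OS_User>jsmith</OS_User>
 <DB_User>APPUSER</DB_User>
 <Object_Name>EMPLOYEES</Object_Name>
 <Action>6</Action>
 <DBID>1234567890</DBID></AuditRecord>
<AuditRecord>
 ...
</Audit></root>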
Then analyze2.py looks like this. As you can see, I have filtered out the OS users that I do not want to report on; adjust the list of usernames as required.
#!/usr/bin/python
import sys
import collections
from tabulate import tabulate
try:
    import xml.etree.cElementTree as ET
except ImportError:
    import xml.etree.ElementTree as ET
results = dict()
elements_done=0
fname = sys.argv[1]
# OS users whose activity should not be reported (application/system accounts)
SKIP_OS_USERS = ('SYSTEM', 'tomcat', 'dsadm', 'rdsdb', 'rdshm', 'pscs', 'dbmanager', 'root')
for action, elem in ET.iterparse(sys.argv[1]):
    if elem.tag == 'AuditRecord':
        os_user_elem = elem.find('OS_User')
        osuser = os_user_elem.text if os_user_elem is not None and os_user_elem.text else 'Unknown'
        if osuser not in SKIP_OS_USERS:
            obj_elem = elem.find('Object_Name')
            obj = obj_elem.text if obj_elem is not None and obj_elem.text else 'Unknown'
            usr_elem = elem.find('DB_User')
            usr = usr_elem.text if usr_elem is not None and usr_elem.text else 'Unknown'
            key = osuser + obj + usr
            if key not in results:
                results[key] = {'name': osuser, 'object': obj, 'dbuser': usr,
                                'updcnt': 0, 'delcnt': 0, 'altcnt': 0, 'inscnt': 0,
                                'grtcnt': 0, 'drpcnt': 0, 'trcnt': 0}
            record = results[key]
            ses = elem.find('SesActions')
            ses_text = ses.text if ses is not None else ''
            act = elem.find('Action')
            act_text = act.text if act is not None else ''
            # Count by SesActions success flag position, or by statement Action code:
            # 15=ALTER, 7=DELETE, 6=UPDATE, 2=INSERT, 17=GRANT, 12=DROP, 85=TRUNCATE
            if ses_text == 'S---------------' or act_text == '15':
                record['altcnt'] += 1
            if ses_text == '---S------------' or act_text == '7':
                record['delcnt'] += 1
            if ses_text == '----------S-----' or act_text == '6':
                record['updcnt'] += 1
            if ses_text == '------S---------' or act_text == '2':
                record['inscnt'] += 1
            if act_text == '17':
                record['grtcnt'] += 1
            if act_text == '12':
                record['drpcnt'] += 1
            if act_text == '85':
                record['trcnt'] += 1
        elements_done += 1
header_row = '''
User \tSchema Name \tObject \tINSERTS\tDELETES\tUPDATES\tALTER \tGRANTS \tDROPS \tTRUNCATE
-------------- \t-------------------------------\t----------------------- \t-------\t-------\t-------\t-------\t-------\t-------\t--------'''
output_line = '%-15.15s\t%-30.30s\t%-30.30s\t%7d\t%7d\t%7d\t%7d\t%7d\t%7d\t%7d'
current_user = ''
row_count = 0
page_size = 100
for key in collections.OrderedDict(sorted(results.items())):
    record = results[key]
    if current_user != record['name']:
        name = record['name']
        current_user = record['name']
    if (row_count % page_size) == 0:
        name = record['name']
        print
        print header_row
    print output_line % (name, record['dbuser'], record['object'], record['inscnt'],
                         record['delcnt'], record['updcnt'], record['altcnt'],
                         record['grtcnt'], record['drpcnt'], record['trcnt'])
    row_count += 1
    # Blank out the user name so repeated rows for the same OS user print it only once
    name = ''
Then schedule the script in crontab, like this:
## Oracle Audit Logs Download and Copy to S3 Bucket and give instance name and num of days
00 07 * * * sh /scripts/db_reconfiguration/audit/Download_Audit_Files.sh rdsinstancename 1
My next presentation in the APACOUC Webinar Tour, Building a Datalake, is now open for registration. If anyone is interested, please register at the link below.
August 18, 2017 - 19:00 AEST (3PM IST) - Suresh Gandhi presenting: Building a DataLake - More info or register Here
Today it's another script. We have accumulated piles of EBS snapshots (around 50,000) over a period of two years. To give a bit of background, EBS volume snapshots are taken every 4 hours along with our RDS snapshots to maintain recoverability for some of the LOB objects, so each volume gets 6 snapshots per day and we have around 7 volumes of the same kind.
So it's time to clean them up. Rather than deleting every snapshot, or keeping a fixed number of days, we want to retain one snapshot per day (the last one of the day) and delete the rest for each EBS volume.
And here is the script. (Note: we filter on the snapshot Description; you can filter on a Tag or the snapshot Name instead, and wildcards (*) are supported.)
#!/usr/bin/env python
import boto.ec2, os
import datetime
from itertools import groupby
from operator import itemgetter
import time
MAX_SNAPSHOTS = 1 # Number of snapshots to keep
# Connect to EC2 in this region
connection = boto.ec2.connect_to_region('ap-southeast-2')
# Get all snapshots whose Description matches the filter
snapshots = connection.get_all_snapshots(filters={'description':'*Production WINFS /dev/xvdk shared storage snapshot*'})
# Alternatively, filter snapshots by name, e.g. *winfs-/dev/xvdk*
#snapshots = connection.get_all_snapshots(filters={'Name':'*winfs-/dev/xvdk*'})
snap_sort = sorted([(s.id, datetime.datetime.strptime(s.start_time, '%Y-%m-%dT%H:%M:%S.%fZ').date()) for s in snapshots], key=lambda k: k[1])
snaps = len(snap_sort)
for i in range(0, snaps):
    s = snap_sort[i]
    print "Checking snapshot", s[0], s[1]
    # If the next entry in the date-sorted list falls on the same day, this snapshot
    # is not the last one for that day, so delete it
    if i < snaps - 1 and s[1] == snap_sort[i+1][1]:
        print "Delete snapshot", s[0], s[1]
        connection.delete_snapshot(s[0])
        time.sleep(1)
Run the script as shown below; it prints each snapshot it checks, and all but the last snapshot of each day are deleted.
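A sample run looks like the following (the script file name and snapshot IDs are illustrative; the script takes no arguments and uses the region and Description filter hard-coded above):

$ python cleanup_ebs_snapshots.py
Checking snapshot snap-0a1b2c3d 2017-08-01
Delete snapshot snap-0a1b2c3d 2017-08-01
Checking snapshot snap-0e5f6a7b 2017-08-01
Checking snapshot snap-1c2d3e4f 2017-08-02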
Many of you may already have something like this, but most deploy this kind of script as a Lambda function with a fixed list of instances as input.
In my case, however, the instances change at least weekly, because we build and terminate environments every two weeks, so a hard-coded instance list in Lambda would need constant updating.
To keep things flexible, we add a tag (AutoOff=True) to an RDS instance when we build it, so our Lambda function or cron job picks it up and stops the database on schedule. If we do not want a database to be shut down, we simply remove the tag.
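The tag itself can be added with the AWS CLI, for example (the account ID and instance name below are placeholders):

aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:ap-southeast-2:123456789012:db:rdsinstancename \
    --tags Key=AutoOff,Value=True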
Pre-requisite: use a stable, recent version of boto3 and botocore that includes the RDS stop/start modules.
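If your installed boto3/botocore is older, upgrading via pip is usually enough, since the RDS stop/start API calls were added in the 2017 releases:

pip install --upgrade boto3 botocore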
And here is the script. It takes start, stop, or status as input; if you provide an instance name as the first argument, the second argument should be start/stop/status.
rdsmanage.py
import sys
import boto3
rds_client = boto3.client('rds')
db_instance_info = rds_client.describe_db_instances()
if len(sys.argv) < 2:
    sys.exit('Usage: rdsmanage.py [instance-name] start|stop|status')
# First argument is either an instance name or one of stop/start/status;
# a second argument (stop/start/status) is only used together with an instance name
target = sys.argv[1]
action = sys.argv[2] if len(sys.argv) > 2 else None
for each_db in db_instance_info['DBInstances']:
    response = rds_client.list_tags_for_resource(ResourceName=each_db['DBInstanceArn'])
    taglist = response['TagList']
    if target == each_db['DBInstanceIdentifier'] and action == 'stop':
        for tag in taglist:
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True' and each_db['DBInstanceStatus'] == 'available':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status
                response = rds_client.stop_db_instance(DBInstanceIdentifier=db)
    elif target == each_db['DBInstanceIdentifier'] and action == 'start':
        for tag in taglist:
            # A tagged instance can only be started when it is currently stopped
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True' and each_db['DBInstanceStatus'] == 'stopped':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status
                response = rds_client.start_db_instance(DBInstanceIdentifier=db)
    elif target == each_db['DBInstanceIdentifier'] and action == 'status':
        for tag in taglist:
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status
    elif target == 'stop' and action is None:
        for tag in taglist:
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True' and each_db['DBInstanceStatus'] == 'available':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status
                response = rds_client.stop_db_instance(DBInstanceIdentifier=db)
    elif target == 'start' and action is None:
        for tag in taglist:
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True' and each_db['DBInstanceStatus'] == 'stopped':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status
                response = rds_client.start_db_instance(DBInstanceIdentifier=db)
    elif target == 'status' and action is None:
        for tag in taglist:
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status
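Run it like this (the instance name below is a placeholder):

python rdsmanage.py mydbinstance stop      # stop a single tagged instance
python rdsmanage.py mydbinstance status    # show its status
python rdsmanage.py stop                   # stop every instance tagged AutoOff=True
python rdsmanage.py start                  # start every stopped tagged instance

The single-argument forms are the ones called from cron (or Lambda) for the scheduled shutdown and startup.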
You might have already come across this, but just in case: all of our developer/ETL tools (DataStage, Informatica, etc.) started failing with ORA-28040 once we upgraded to Oracle 12c.
The reason, as per Oracle, is that in Oracle 12.1 the default value of the SQLNET.ALLOWED_LOGON_VERSION parameter was raised to 11. This means database clients using pre-11g JDBC thin drivers cannot authenticate to 12.1 database servers unless the parameter is set back to the old default of 8.
The fix: change/add SQLNET.ALLOWED_LOGON_VERSION=8 in sqlnet.ora.
Note: for the databases on AWS RDS this still works.
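A minimal sqlnet.ora sketch for the database server side; note that 12c also exposes the same control as SQLNET.ALLOWED_LOGON_VERSION_SERVER, which supersedes the older parameter name:

# $ORACLE_HOME/network/admin/sqlnet.ora on the database server
SQLNET.ALLOWED_LOGON_VERSION=8
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8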
The second wave of APACOUC - IAOUG ANZ Webinar registrations is now open, with more exciting presentations to come.
I am presenting DevOps for Databases, which covers applying DevOps principles to databases, why they are hard to implement, creating your own framework, and the strategy, approach and tools needed for each approach.
July 28, 2017 - 19:00 AEST - Suresh Gandhi presenting: DevOps for Databases - More info or register Here
Registration links:
July 17, 2017 - 17:00 AEST - Gurcan Orhan presenting: Migration steps from OWB to ODI - More info or register Here
July 19, 2017 - 10:00 AEST - Dan McGhan presenting: Techniques for Taming the Asynchronous Nature of Node.js - More info or register Here
July 19, 2017 - 19:00 AEST - Steven Feuerstein presenting: Change Impact Analysis with 12.2 PL/Scope - More info or register Here
July 21, 2017 - 10:00 AEST - Lino Schildenfeld presenting: Thinking of going APEX 5.1? - our experience - More info or register Here
July 24, 2017 - 10:00 AEST - Satyendra Pasalapudi presenting: Running Highly Available E Business Suite on Oracle Bare Metal Cloud Service - More info or register Here
July 24, 2017 - 17:00 AEST - Heli Helskyaho presenting: Top 10 Features of Oracle SQL Developer everybody should use, even in the Cloud - More info or register Here
July 26, 2017 - 10:00 AEST - Tim Gorman presenting: Accelerating Development and Testing Using Data Virtualization - More info or register Here
July 26, 2017 - 19:00 AEST - Biju Thomas presenting: Do I have to learn new tricks to manage Oracle database in the Cloud? - More info or register Here
July 28, 2017 - 10:00 AEST - Kai Yu presenting: Get ready to upgrade your Oracle databases to 12cR2: tools, methods and paths - More info or register Here
July 28, 2017 - 19:00 AEST - Suresh Gandhi presenting: DevOps for Databases - More info or register Here
July 31, 2017 - 10:00 AEST - Charles Kim presenting: Backup to the Cloud and Beyond - More info or register Here
August 2, 2017 - 10:00 AEST - Dan McGhan presenting: Building Real-time Data in Web Applications with Node.js - More info or register Here
August 2, 2017 - 19:00 AEST - Joel Perez presenting: RMAN one of the best tools to move on-premise databases to the Cloud - More info or register Here
August 4, 2017 - 10:00 AEST - Daniel Morgan presenting: Oracle Database 12.2 Application Containers - More info or register Here
August 4, 2017 - 17:00 AEST - Gurcan Orhan presenting: EDQ, OGG and ODI over Exadata in the Cloud for Perfection - More info or register Here
August 7, 2017 - 10:00 AEST - Tim Gorman presenting: Securing Data Using Data Obfuscation - More info or register Here
August 7, 2017 - 17:00 AEST - Heli Helskyaho presenting: Introduction to the Big Data World - More info or register Here
August 7, 2017 - 19:00 AEST - Francisco M Alvarez presenting: Logging or NoLogging:That's the question - More info or register Here
IAOUG and APACOUC proudly announce the ANZ Webinar Tour - 2017.
The 2017 Webinar Tour will run from July 3rd until August 25th, with over 20 international speakers delivering more than 50 presentations, making it the largest webinar series ever run in the Oracle world!
Registration links are open for the 1st wave. Please find more information here.
July 7, 2017 - 10:00 AEST - Kai Yu presenting: Under the hood of Oracle Database Cloud Service for Oracle DBAs - More info or register Here
July 7, 2017 - 19:00 AEST - Lucas Jellema presenting: The Art of Intelligence – A Practical Introduction Machine Learning for Oracle professionals - More info or register Here
July 10, 2017 - 10:00 AEST - Charles Kim presenting: Bulletproof Your Data Guard Environment - More info or register Here
July 10, 2017 - 17:00 AEST - Gurcan Orhan presenting: Is Data Warehousing dying? - More info or register Here
July 12, 2017 - 10:00 AEST - Satyendra Pasalapudi presenting: Experience of being a Oracle Bare Metal Cloud DBA - More info or register Here
July 12, 2017 - 19:00 AEST - Joel Perez presenting: What I must know to be expert managing Databases in the Cloud ? - More info or register Here
July 14, 2017 - 19:00 AEST - Debra Lilley presenting: EBS to Cloud Applications – A Govt Case Study - More info or register Here
July 17, 2017 - 10:00 AEST - Arup Nanda presenting: Docker and Oracle Database in the Cloud - More info or register Here
In the previous post of this series, I wrote an overview of Oracle Sharding and its features.
In this post, you will see how to configure an Oracle sharded database and what steps need to be performed.
Overall, I found it very easy to set up and configure compared to a RAC setup. All you need to do is install the Oracle home software on all shard nodes and the GSM software on the shard catalog node; the rest is simple commands. One of the most exciting parts is deploy, which creates the databases on the shards automatically and gets the sharded database ready.