AWS RDS : Oracle RDS Auditing (XML) Script

Hello,

We want a daily audit report from our databases showing who has made manual modifications, so we have something to look back on when needed.

Here is what the report looks like:

User               Schema Name             Object                            INSERTS    DELETES    UPDATES    ALTER      GRANTS     DROPS     TRUNCATE
--------------     --------------------    -----------------------           -------    -------    -------    -------    -------    -------    --------
insights           SYSADM                  USERN                                 0          0          2          0          0          0          0
                   SYSADM                  RUNCTL_BDEP                           0          0          1          0          0          0          0
                   SYSADM                  PS_SYS_DATA                           0          0          1          0          0          0          0
                   SYSADM                  PS_PER                                0          0          1          0          0          0          0
test1              SYSADM                  SS_MTHD                               0          0          2          0          0          0          0
                   SYSADM                  SHOP_CART                             0          1          0          0          0          0          0
                   SYSADM                  ITEM_HECS_ELEC                        0          0          4          0          0          0          0
                   SYSADM                  USER_WISH_LIST_TBL                    0          2          0          0          0          0          0


So here is what we planned:

  • Store the audit files in S3 for backup purposes.
  • Store the audit files in XML format rather than the default Oracle audit file format, so we get schema-on-read for our datalake.
  • Since this is RDS, first download all the files, then cleanse them (removing unnecessary XML records) and extract them into the report above.
  • Purge the XML audit files in RDS, since they consume a lot of space in the RDS directories.

Note: By the way, you can use v$xml_audit_trail, but I did not find it that useful, and I also want to do other things like mailing, purging, and storing to S3. Still, for an ad-hoc look it works; a sketch follows.
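A minimal query sketch against v$xml_audit_trail (this reuses the credentials, endpoint, and database name the script below derives; the columns are from the standard view):

sqlplus -s ${DBUSERNAME}/${DBPASSWORD}@${EPNAME}/${DBNAME} <<EOF
set lines 200 pages 100
col os_user format a15
col db_user format a15
col object_name format a30
select os_user, db_user, object_name, action, extended_timestamp
from   v\$xml_audit_trail
where  extended_timestamp > sysdate - 1;
EOF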

Further, here is what you need before you run these scripts (a quick sanity check follows the list):

  • An EC2 instance (on-demand) with an IAM role granting read access to RDS and access to the S3 bucket
  • The AWS CLI installed, with credentials configured (the IAM role above is enough)
  • The Oracle client installed, for purging the files (or manage that a different way)
  • Python 2.7 installed, with its XML libraries
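A minimal sketch to verify those prerequisites (adjust the interpreter and paths to your environment):

#!/bin/bash
# quick prerequisite check for the audit scripts
aws --version                                   # AWS CLI installed?
aws sts get-caller-identity                     # IAM role / credentials usable?
command -v sqlplus || echo "Oracle client not on PATH"
python2.7 -c "import xml.etree.ElementTree; print 'xml ok'"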

Download Scripts from Here

Download_Audit_Files.sh does the following:

  • Downloads and cleanses the audit files
  • Runs the analyze2.py script to generate the report above
  • Sends a mail with that report
  • Purges XML files older than one day from Oracle RDS
cat Download_Audit_Files.sh

########################################
#Configuration
########################################
AUDITLOGDATE=`date +"%Y%m%d" --date="$2 days ago"`
AUDIT_DIR=/backups/washexports/audit
S3BUCKET=s3://databaselogs.geeksinsight.com/database/audit/$1/
DBUSERNAME=dbmanager
DBPASSWORD=password
SCRIPTS_DIR=${SCRIPTS_DIR:-/scripts/db_reconfiguration}
MAIL_RECIPIENT="mail@123.com,mail2@123.com"

#######################################
# Export variables
#######################################
export AWS_DEFAULT_REGION=ap-southeast-2
export ORACLE_HOME=/opt/oracle/app/oracle/product/11.2.0/client_1
export PATH=/usr/local/rvm/gems/ruby-2.1.5/bin:/usr/local/rvm/gems/ruby-2.1.5@global/bin:/usr/local/rvm/rubies/ruby-2.1.5/bin:/usr/lib64/qt-3.3/bin:/opt/aws/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/oracle/app/oracle/product/11.2.0/client_1/bin:/usr/local/rvm/bin:/root/bin

##############################################
# Download the xml files for given instance
##############################################

for var in `aws rds describe-db-log-files --db-instance-identifier $1 --filename-contains "xml" | grep LogFileName | sort -r |  cut -d '"' -f4- |rev | cut -c 3- | rev | grep _$AUDITLOGDATE`
do
aws rds download-db-log-file-portion --db-instance-identifier $1 --log-file-name $var  --max-items 99999999 --starting-token 0 --output text >> ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
done


######################################
# Cleanse the Audit Files
######################################
#sed -i '/pscs/d' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i '/<\/Audit>/d' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i '/<\/AuditRecord>/d' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i '/xml/d' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i '/Version/d' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i -e 's/<\/DBID>/<\/DBID><\/AuditRecord>/g' ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
vim -e ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml <<@@@
g/truncated/.-1,.d
wq
@@@
sed -i '1i\'"<root><Audit>" ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
sed -i '$ a\'"</Audit></root>" ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml

##################################################################
# Generate report using analyze2.py python script
# You may need to further customize the script as per your needs
##################################################################
cd ${AUDIT_DIR}
${SCRIPTS_DIR}/audit/analyze2.py ${AUDITLOGDATE}_$1.xml >> ${SCRIPTS_DIR}/logs/Audit_Process_${AUDITLOGDATE}_$1.log
SIZE=$(stat -c%s ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml)  # audit file size (bytes), kept for reference

##################################################################
#Send Email of the report
###################################################################

cat ${SCRIPTS_DIR}/logs/Audit_Process_${AUDITLOGDATE}_$1.log | mailx -s "Campus Database Audit Log - ${AUDITLOGDATE}" -a "${SCRIPTS_DIR}/logs/Audit_Process_${AUDITLOGDATE}_$1.log" ${MAIL_RECIPIENT}

###########################################
# Copy final audit xml file to S3 Bucket
###########################################
gzip ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml
aws s3 cp ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml.gz ${S3BUCKET}
rm -f ${AUDIT_DIR}/${AUDITLOGDATE}_$1.xml.gz

#####################################################################
## Get Database Endpoint Name
#####################################################################
EPNAME=`aws rds describe-db-instances --db-instance-identifier $1 | grep -i "Address" | cut -d '"' -f4`
echo $EPNAME

####################################################################
## Get Database Name for given Instance 
####################################################################

DBNAME=`aws rds describe-db-instances --db-instance-identifier $1 | grep -i "DBName" | cut -d '"' -f4`
echo $DBNAME

###################################################################
# Purge XML files from RDS Instance
##################################################################
sqlplus -s ${DBUSERNAME}/${DBPASSWORD}@$EPNAME/$DBNAME <<EOF
set pages 0
set echo off
set lines 1000
set feedback off
set termout off
spool ${SCRIPTS_DIR}/purgexml.sql
select 'exec utl_file.fremove (''ADUMP'','||''''||filename||''''||');' from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('ADUMP')) where mtime <= trunc(sysdate)-1;
spool off
@${SCRIPTS_DIR}/purgexml.sql
EOF


sqlplus -s ${DBUSERNAME}/${DBPASSWORD}@$EPNAME/$DBNAME <<EOF
BEGIN
  sys.DBMS_AUDIT_MGMT.clean_audit_trail(
   audit_trail_type        => sys.DBMS_AUDIT_MGMT.AUDIT_TRAIL_XML,
   use_last_arch_timestamp => true);
END;
/

BEGIN
  sys.DBMS_AUDIT_MGMT.clean_audit_trail(
   audit_trail_type        => sys.DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
   use_last_arch_timestamp => true);
END;
/
EOF

###################################################################
# Remove the generated purge script
##################################################################
rm -f ${SCRIPTS_DIR}/purgexml.sql
exit 0


Then analyze2.py looks like this. As you can see, I have filtered out the names of users that I don't want to report on; keep your own usernames there as required.

#!/usr/bin/python
import sys
import collections

from tabulate import tabulate
try:
    import xml.etree.cElementTree as ET
except ImportError:
    import xml.etree.ElementTree as ET

results = dict()
elements_done = 0
fname = sys.argv[1]
for action, elem in ET.iterparse(fname):
    if elem.tag == 'AuditRecord':
        # skip audit records generated by housekeeping/service accounts
        os_user_el = elem.find('OS_User')
        osuser = os_user_el.text if os_user_el is not None and os_user_el.text else 'Unknown'
        if osuser not in ('SYSTEM', 'tomcat', 'dsadm', 'rdsdb', 'rdshm', 'pscs', 'dbmanager', 'root'):
            obj_el = elem.find('Object_Name')
            obj = obj_el.text if obj_el is not None and obj_el.text else 'Unknown'
            usr_el = elem.find('DB_User')
            usr = usr_el.text if usr_el is not None and usr_el.text else 'Unknown'
            key = osuser + obj + usr
            if key not in results:
                results[key] = {'name': osuser, 'object': obj, 'dbuser': usr,
                                'updcnt': 0, 'delcnt': 0, 'altcnt': 0, 'inscnt': 0,
                                'grtcnt': 0, 'drpcnt': 0, 'trcnt': 0}
            record = results[key]
            # Action codes / SesActions flags, per the original mapping:
            # 15=alter, 7=delete, 6=update, 2=insert, 17=grant, 12=drop, 85=truncate
            ses = elem.find('SesActions')
            act = elem.find('Action')
            ses_text = ses.text if ses is not None else ''
            act_text = act.text if act is not None else ''
            if ses_text == 'S---------------' or act_text == '15':
                record['altcnt'] += 1
            if ses_text == '---S------------' or act_text == '7':
                record['delcnt'] += 1
            if ses_text == '----------S-----' or act_text == '6':
                record['updcnt'] += 1
            if ses_text == '------S---------' or act_text == '2':
                record['inscnt'] += 1
            if act_text == '17':
                record['grtcnt'] += 1
            if act_text == '12':
                record['drpcnt'] += 1
            if act_text == '85':
                record['trcnt'] += 1

            elements_done += 1

header_row = '''
User           \tSchema Name                    \tObject                \tINSERTS\tDELETES\tUPDATES\tALTER  \tGRANTS \tDROPS \tTRUNCATE
-------------- \t-------------------------------\t-----------------------       \t-------\t-------\t-------\t-------\t-------\t-------\t--------'''
output_line = '%-15.15s\t%-30.30s\t%-30.30s\t%7d\t%7d\t%7d\t%7d\t%7d\t%7d\t%7d'
current_user = ''
row_count = 0
page_size = 100
for key in collections.OrderedDict(sorted(results.items())):
    record = results[key]
    if current_user != record['name']:
        name = record['name']
        current_user = record['name']
    if (row_count % page_size) == 0:
        name = record['name']
        print
        print header_row
    print output_line%(name, record['dbuser'], record['object'], record['inscnt'], record['delcnt'], record['updcnt'], record['altcnt'], record['grtcnt'], record['drpcnt'], record['trcnt'])
    row_count += 1
    name=''

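Incidentally, to decide which OS_User values to exclude in analyze2.py, a quick way to see which users actually appear in a day's cleansed file (a sketch, assuming GNU grep; the file name is just an example):

# count audit records per OS user in the cleansed XML
grep -oP '(?<=<OS_User>)[^<]+' 20170801_myrdsinstance.xml | sort | uniq -c | sort -rn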

Then schedule the script with cron, like this:

## Oracle Audit Logs Download and Copy to S3 Bucket and give instance name and num of days
00 07 * * * sh /scripts/db_reconfiguration/audit/Download_Audit_Files.sh rdsinstancename 1

Download Scripts from Here

Hope this helps you write your own when needed.

-Thanks

Suresh


APACOUC Webinar – My Next Presentation – Building a Datalake

Hello All,

The registration link for my next presentation in the APACOUC Webinar Tour, Building a Datalake, is now open. If anyone is interested, please register at the link below.

August 18, 2017 - 19:00 AEST (3PM IST) - Suresh Gandhi presenting: Building a DataLake  - More info or register Here

Registrations for the other webinars are also open: http://www.apacouc.org/2017-webinar-tour.html

-Thanks

Suresh

AWS EBS Snapshot Cleanup Script

Hello,

Today it's another script. We have accumulated piles of EBS snapshots (around 50,000) over a period of two years. For a bit of background: EBS volume snapshots are taken every 4 hours, along with our RDS snapshots, to maintain recoverability for some of the LOB objects. So each volume gets 6 snapshots per day, and we have around 7 volumes of the same kind.

So it's time to clean them up. Rather than deleting every snapshot, or keeping a fixed number of days' worth, we want to retain one snapshot per day for each EBS volume (the last one of the day) and delete the rest.

Note: the script filters on the volume's Description; you can use a Tag or the snapshot Name instead, with the wildcard character * if you want. A preview sketch and the script itself follow.
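Before deleting anything, you can preview which snapshots the description filter matches with the AWS CLI (a sketch; same filter string as the script):

# list matching snapshots with their start times, without deleting anything
aws ec2 describe-snapshots --region ap-southeast-2 \
    --filters "Name=description,Values=*Production WINFS /dev/xvdk shared storage snapshot*" \
    --query 'Snapshots[].[SnapshotId,StartTime]' --output text | sort -k2

And here is the script: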

#!/usr/bin/env python

import boto.ec2
import datetime
import time

# Connect to EC2 in this region
connection = boto.ec2.connect_to_region('ap-southeast-2')

# Get all snapshots whose description matches the filter
snapshots = connection.get_all_snapshots(filters={'description': '*Production WINFS /dev/xvdk shared storage snapshot*'})
# Or filter on the Name tag instead, e.g.:
#snapshots = connection.get_all_snapshots(filters={'tag:Name': '*winfs-/dev/xvdk*'})

# Sort (snapshot_id, date) pairs by date so snapshots of the same day are adjacent
snap_sort = sorted([(s.id, datetime.datetime.strptime(s.start_time, '%Y-%m-%dT%H:%M:%S.%fZ').date()) for s in snapshots], key=lambda k: k[1])
snaps = len(snap_sort)
for i in range(0, snaps):
    s = snap_sort[i]
    print "Checking snapshot", s[0], s[1]
    # if the next snapshot falls on the same day, this one is not the last
    # snapshot of the day, so it can go; the last of each day is kept
    if i < snaps - 1 and s[1] == snap_sort[i + 1][1]:
        print "Delete snapshot", s[0], s[1]
        connection.delete_snapshot(s[0])
    time.sleep(1)

Run it as below; the output looks like this, and the duplicate snapshots are deleted.

root@wash-i-16ca26c8-prod ~ $ python deletesnapshot.py
Checking snapshot snap-188a4188 2016-10-27
Delete snapshot snap-188a4188 2016-10-27
Checking snapshot snap-51c6a8c1 2016-10-27
Checking snapshot snap-d4c8d7d8 2016-10-28
Delete snapshot snap-d4c8d7d8 2016-10-28
Checking snapshot snap-a331d1ac 2016-10-28
Delete snapshot snap-a331d1ac 2016-10-28
Checking snapshot snap-460af849 2016-10-28
Delete snapshot snap-460af849 2016-10-28
Checking snapshot snap-35dac2c6 2016-10-28
Delete snapshot snap-35dac2c6 2016-10-28
Checking snapshot snap-7f8e958c 2016-10-28
Delete snapshot snap-7f8e958c 2016-10-28
Checking snapshot snap-f88e7a0a 2016-10-28
Delete snapshot snap-f88e7a0a 2016-10-28
Checking snapshot snap-2c5213de 2016-10-28
Delete snapshot snap-2c5213de 2016-10-28
Checking snapshot snap-a802215a 2016-10-28
Delete snapshot snap-a802215a 2016-10-28
Checking snapshot snap-1eb2d780 2016-10-28

AWS RDS: Stop & Start Script

Hello,

Well, many of you may already have this, but most such deployments use a Lambda with a fixed list of instances as input.

However, in my case the instances change at least weekly, as we build and terminate environments every two weeks, so the instance list is not constant; with a hard-coded list in Lambda we would have to update it frequently.

So, for flexibility, we add a tag ('AutoOff=True') to our RDS instances at build time, and our Lambda or cron job picks up those databases and stops them on schedule. If we do not want a database to be shut down, we simply remove the tag (see the CLI sketch below).
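For reference, adding or removing the tag with the AWS CLI looks like this (a sketch; the ARN is a placeholder):

# opt an instance in to the auto stop/start schedule
aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:ap-southeast-2:123456789012:db:mydbinstance \
    --tags "Key=AutoOff,Value=True"

# opt it back out
aws rds remove-tags-from-resource \
    --resource-name arn:aws:rds:ap-southeast-2:123456789012:db:mydbinstance \
    --tag-keys AutoOff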

Pre-requisite: use versions of boto3 and botocore that include the RDS stop/start operations.

pip install botocore==1.5.75
pip install boto3==1.4.4
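A quick check that the pinned versions are in place (a sketch):

# confirm module versions after the pip installs
python -c "import boto3, botocore; print boto3.__version__, botocore.__version__"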

And here is the script. It takes start, stop, or status as input; alternatively, give an instance name first and the action (start/stop/status) as the second argument.

rdsmanage.py

import sys

import boto3

rds_client = boto3.client('rds')

db_instance_info = rds_client.describe_db_instances()

for each_db in db_instance_info['DBInstances']:
    response = rds_client.list_tags_for_resource(ResourceName=each_db['DBInstanceArn'])

    taglist = response['TagList']
    # First form: rdsmanage.py <instance-name> <stop|start|status>
    if len(sys.argv) == 3 and sys.argv[1] == each_db['DBInstanceIdentifier'] and sys.argv[2] == 'stop':
        for tag in taglist:
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True' and each_db['DBInstanceStatus'] == 'available':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status
                response = rds_client.stop_db_instance(DBInstanceIdentifier=db)

    elif len(sys.argv) == 3 and sys.argv[1] == each_db['DBInstanceIdentifier'] and sys.argv[2] == 'start':
        for tag in taglist:
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True' and each_db['DBInstanceStatus'] == 'stopped':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status
                response = rds_client.start_db_instance(DBInstanceIdentifier=db)

    elif len(sys.argv) == 3 and sys.argv[1] == each_db['DBInstanceIdentifier'] and sys.argv[2] == 'status':
        for tag in taglist:
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status

    # Second form: rdsmanage.py <stop|start|status> acts on every tagged instance
    elif len(sys.argv) == 2 and sys.argv[1] == 'stop':
        for tag in taglist:
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True' and each_db['DBInstanceStatus'] == 'available':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status
                response = rds_client.stop_db_instance(DBInstanceIdentifier=db)

    elif len(sys.argv) == 2 and sys.argv[1] == 'start':
        for tag in taglist:
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True' and each_db['DBInstanceStatus'] == 'stopped':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status
                response = rds_client.start_db_instance(DBInstanceIdentifier=db)

    elif len(sys.argv) == 2 and sys.argv[1] == 'status':
        for tag in taglist:
            if tag['Key'] == 'AutoOff' and tag['Value'] == 'True':
                db = each_db['DBInstanceIdentifier']
                status = each_db['DBInstanceStatus']
                print db + ':' + status

Sample runs to check status, start, and stop:

root@wash-i-xx-restore ~ $ python rdsmanage.py status
 c92base:stopped
 cs89upg:stopped
 cs92dev:stopped
 csdmo:stopped
 dp4adwe8ind3oy:stopped
 rp19a3160yh53k3:stopped
 sitecore:stopped

root@wash-i-xx-restore ~ $ python rdsmanage.py start
 c92base:starting
 cs89upg:starting
 cs92dev:starting
 csdmo:starting
 dp4adwe8ind3oy:starting
 rp19a3160yh53k3:starting
 sitecore:starting

root@wash-i-xx-restore ~ $ python rdsmanage.py stop
 c92base:stopping
 cs89upg:stopping
 cs92dev:stopping
 csdmo:stopping
 dp4adwe8ind3oy:stopping
 rp19a3160yh53k3:stopping
 sitecore:stopping

You can also act on a specific instance by providing the instance name and then the action.

root@ /scripts/db_reconfiguration $ python /scripts/rdsmanage.py xxx stop
ipfx2:available
root@  /scripts/db_reconfiguration $ python /scripts/rdsmanage.py xxx status
ipfx2:stopping

You can schedule this script using cron or Lambda. For example, we stop databases at 7 PM and start them at 7 AM; the cron entries look like this:

00 19 * * * python /scripts/rdsmanage.py stop
00 07 * * * python /scripts/rdsmanage.py start
00 19 * * * python /scripts/rdsmanage.py ipfx stop
00 07 * * * python /scripts/rdsmanage.py ipfx start

Hope this may help you somewhere.
Thanks
Suresh

ANZ Webinar Tour 2017 – Presentation Replay – DevOps for Databases

Hello All,

My latest presentation replay for DevOps for Databases is here.

And please watch out for my upcoming presentation:

August 18, 2017 - 19:00 AEST - Suresh Gandhi presenting: Building a DataLake , more information at http://www.apacouc.org/2017-webinar-tour.html

More replays at my YouTube channel:

https://www.youtube.com/channel/UCRy6L_T302epD9YlXJT_udg

-Thanks

Suresh

ORA-28040: No matching authentication protocol Database driver error

Hi

Hope you have already come across this; just in case you haven't: all of our developer/ETL tools (DataStage, Informatica, etc.) started failing with ORA-28040 once we upgraded to Oracle 12c.

This is because, as per Oracle, in Oracle 12.1 the default value for the SQLNET.ALLOWED_LOGON_VERSION parameter was raised to 11. This means that database clients using pre-11g JDBC thin drivers cannot authenticate to 12.1 database servers unless the parameter is set back to the old default of 8.

Change/add SQLNET.ALLOWED_LOGON_VERSION=8 in sqlnet.ora, as in the sketch below.
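For example, in the server-side sqlnet.ora (note: from 12c onwards the parameter is split into server and client variants, so on the database side SQLNET.ALLOWED_LOGON_VERSION_SERVER is the one that takes effect; setting both is harmless):

# $ORACLE_HOME/network/admin/sqlnet.ora
SQLNET.ALLOWED_LOGON_VERSION=8
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8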

Note: this still works for databases on AWS RDS as well.

-Thanks

Suresh

ANZ Free Webinar Tour – Second Wave registrations

Hello All,

The second wave of APACOUC - IAOUG ANZ Webinar registrations is now open, and there are more exciting presentations to come.

And I am presenting the topic DevOps for Databases, which covers applying DevOps principles to databases: why they are hard to implement, creating your own framework, strategy and approach, and the tools needed for each approach.

July 28, 2017 - 19:00 AEST - Suresh Gandhi presenting: DevOps for Databases  - More info or register Here

Registration links:

  • July 17, 2017 - 17:00 AEST - Gurcan Orhan presenting: Migration steps from OWB to ODI - More info or register Here
  • July 19, 2017 - 10:00 AEST - Dan McGhan presenting: Techniques for Taming the Asynchronous Nature of Node.js - More info or register Here
  • July 19, 2017 - 19:00 AEST - Steven Feuerstein presenting: Change Impact Analysis with 12.2 PL/Scope - More info or register Here
  • July 21, 2017 - 10:00 AEST - Lino Schildenfeld presenting: Thinking of going APEX 5.1? - our experience - More info or register Here
  • July 24, 2017 - 10:00 AEST - Satyendra Pasalapudi presenting: Running Highly Available E Business Suite on Oracle Bare Metal Cloud Service  - More info or register Here
  • July 24, 2017 - 17:00 AEST - Heli Helskyaho  presenting: Top 10 Features of Oracle SQL Developer everybody should use, even in the Cloud  - More info or register Here
  • July 26, 2017 - 10:00 AEST - Tim Gorman presenting: Accelerating Development and Testing Using Data Virtualization  - More info or register Here
  • July 26, 2017 - 19:00 AEST - Biju Thomas presenting: Do I have to learn new tricks to manage Oracle database in the Cloud?  - More info or register Here
  • July 28, 2017 - 10:00 AEST - Kai Yu presenting: Get ready to upgrade your Oracle databases to 12cR2: tools, methods and paths  - More info or register Here
  • July 28, 2017 - 19:00 AEST - Suresh Gandhi presenting: DevOps for Databases  - More info or register Here
  • July 31, 2017 - 10:00 AEST - Charles Kim presenting: Backup to the Cloud and Beyond  - More info or register Here
  • August 2, 2017 - 10:00 AEST - Dan McGhan presenting: Building Real-time Data in Web Applications with Node.js  - More info or register Here
  • August 2, 2017 - 19:00 AEST - Joel Perez presenting: RMAN one of the best tools to move on-premise databases to the Cloud  - More info or register Here
  • August 4, 2017 - 10:00 AEST - Daniel Morgan presenting: Oracle Database 12.2 Application Containers  - More info or register Here
  • August 4, 2017 - 17:00 AEST - Gurcan Orhan presenting: EDQ, OGG and ODI over Exadata in the Cloud for Perfection  - More info or register Here
  • August 7, 2017 - 10:00 AEST - Tim Gorman presenting: Securing Data Using Data Obfuscation  - More info or register Here
  • August 7, 2017 - 17:00 AEST - Heli Helskyaho  presenting: Introduction to the Big Data World  - More info or register Here
  • August 7, 2017 - 19:00 AEST - Francisco M Alvarez presenting: Logging or NoLogging:That's the question  - More info or register Here

-Thanks

Suresh


Oracle Sharding – Updated Presentation & Webinar Recording

Hello All,

Thanks for joining. Attached are the updated presentation and the webinar recording link.

Note: please skip the first 10 minutes, as the presentation was not shared properly. :)

Find other Recorded Webinars at https://www.youtube.com/channel/UCRy6L_T302epD9YlXJT_udg

Updated Presentation: Click Here


ANZ Webinar Tour – 2017 – Registrations Open

Hello All,

IAOUG and APACOUC proudly announce the ANZ Webinar Tour - 2017.

The 2017 Webinar Tour will run from July 3rd until August 25th, with over 20 international speakers presenting more than 50 interesting sessions, making it the largest webinar series ever in the Oracle world!

Registration links are open for the 1st wave. Please find more information here:

http://www.apacouc.org/2017-webinar-tour.html

I will be presenting the following topics; register for free, 13 days to go for the first one.

  1. July 5, 2017 - 19:00 AEST - Suresh Gandhi presenting: Oracle Sharding - More info or register Here
  2. July 28, 2017 - 19:00 AEST - Suresh Gandhi presenting: DevOps for Databases  - Registration Not open yet
  3. Aug 18, 2017 - 19:00 AEST - Suresh Gandhi presenting: Building Datalake - Registration Not open yet

And the full list of the first wave is here:

  • July 3, 2017 - 19:00 AEST - Connor McDonald presenting: 12 things about 12c Release 2 - More info or register Here
  • July 5, 2017 - 10:00 AEST - Tim Gorman presenting: Securing Data Using Data Obfuscation - More info or register Here
  • July 5, 2017 - 19:00 AEST - Suresh Gandhi presenting: Oracle Sharding - More info or register Here
  • July 7, 2017 - 10:00 AEST - Kai Yu presenting: Under the hood of Oracle Database Cloud Service for Oracle DBAs - More info or register Here
  • July 7, 2017 - 19:00 AEST - Lucas Jellema presenting: The Art of Intelligence – A Practical Introduction Machine Learning for Oracle professionals - More info or register Here
  • July 10, 2017 - 10:00 AEST - Charles Kim presenting: Bulletproof Your Data Guard Environment - More info or register Here
  • July 10, 2017 - 17:00 AEST - Gurcan Orhan presenting: Is Data Warehousing dying? - More info or register Here
  • July 12, 2017 - 10:00 AEST - Satyendra Pasalapudi presenting: Experience of being a Oracle Bare Metal Cloud DBA - More info or register Here
  • July 12, 2017 - 19:00 AEST - Joel Perez presenting: What I must know to be expert managing Databases in the Cloud ? - More info or register Here
  • July 14, 2017 - 19:00 AEST - Debra Lilley presenting: EBS to Cloud Applications – A Govt Case Study - More info or register Here
  • July 17, 2017 - 10:00 AEST - Arup Nanda presenting: Docker and Oracle Database in the Cloud - More info or register Here


-Thanks

Geek DBA

Oracle Sharding : Part 2 – Installating & Configuring Shards

In the previous post of this series, I wrote an overview of Oracle Sharding and its features.

In this post, you will see how to configure an Oracle shard and what steps need to be performed.

Overall, I felt it is very easy to set up and configure when compared to a RAC setup. All you need is to install the Oracle home software on all nodes and the GSM installation on the shard catalog node; the rest is simple commands. One of the exciting parts is deploy, which creates the databases on the shards automatically and gets the sharded database ready.

You can also download the presentation from Here

Continue reading Oracle Sharding : Part 2 – Installating & Configuring Shards