


18c Database: Shadow Lost Write Protection

From 18c onwards, we can enable lost write protection for data blocks. A lost write occurs when an I/O subsystem acknowledges the completion of a block write even though the write did not occur, or when a former image of a block overwrites the current image.

We can enable shadow lost write protection at the database, pluggable database, tablespace, or datafile level, for which a shadow tablespace is required. As per the documentation:

When a tracked data block is read from disk, shadow lost write protection can detect a lost write by comparing the SCN for the block in the shadow tablespace with the SCN of the most recent write in the block being read. If the shadow entry has an SCN greater than the data block being read, then a lost write has occurred. When a lost write is detected, an error is returned.

First, create a shadow tablespace to track the data block reads.

CREATE BIGFILE TABLESPACE SHDW_LP  DATAFILE 'shadow_protection.dbf'  SIZE 10M LOST WRITE PROTECTION;

Enable lost write protection at the database level

ALTER DATABASE ENABLE LOST WRITE PROTECTION;

Enable lost write protection at the pluggable database level

ALTER PLUGGABLE DATABASE ENABLE LOST WRITE PROTECTION;

Enable lost write protection at the tablespace level

ALTER TABLESPACE tbsu1 ENABLE LOST WRITE PROTECTION;

Enable lost write protection at the datafile level

ALTER DATABASE DATAFILE 'dfile1.df' ENABLE LOST WRITE PROTECTION;

Enable lost write protection for a specific pluggable database datafile

ALTER PLUGGABLE DATABASE DATAFILE 'dfile2.df' ENABLE LOST WRITE PROTECTION;

As the SCNs are compared on every tracked read, anticipate some overhead with this when you have load-intensive databases running. I have not really tested this one yet.
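To check what is being tracked and to back it out again, something along these lines should work. This is only a minimal sketch; the SUSPEND/DISABLE clauses and the LOST_WRITE_PROTECT column of DBA_TABLESPACES are as I recall them from the 18c documentation, so verify against your release.

-- Check which tablespaces are being tracked (LOST_WRITE_PROTECT column, per the 18c docs)
SELECT tablespace_name, lost_write_protect FROM dba_tablespaces;

-- Temporarily suspend, or fully disable, tracking for a tablespace
ALTER TABLESPACE tbsu1 SUSPEND LOST WRITE PROTECTION;
ALTER TABLESPACE tbsu1 DISABLE LOST WRITE PROTECTION;

-- Disable at the database level and drop the shadow tablespace when it is no longer needed
ALTER DATABASE DISABLE LOST WRITE PROTECTION;
DROP TABLESPACE SHDW_LP INCLUDING CONTENTS AND DATAFILES;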

Thanks

Suresh

12C – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup

Thanks to my colleague Jaipal for introducing me to this Metalink note: Doc ID 2005729.1.

This applies to 12c and higher environments where you need to migrate data between systems that have different endian formats with the least amount of application downtime.

The first step is to copy a full backup from the source to the destination. Then, by using a series of incremental backups, each smaller than the last, the data at the destination system can be brought nearly current with the source system before any downtime is required.

This procedure requires downtime only during the final incremental backup and the metadata export/import.

The high-level steps for Cross Platform Incremental Backup are:

  1.  Initial setup
  2.  Prepare phase (source data remains online)
    1. Backup (level=0) of the tablespaces to be transported
    2. Transfer the backup and other necessary setup files to the destination system
    3. Restore the datafiles on the destination system in its endian format

  3.  Roll Forward phase (source data remains online - repeat this phase as many times as necessary to catch the destination datafile copies up to the source database)
    1. Create an incremental backup on the source system
    2. Transfer the incremental backup and other necessary setup files to the destination system
    3. Convert the incremental backup to the destination system endian format and apply the backup to the destination datafile copies
    4. Determine the next_scn for the next incremental backup
    5. Repeat these steps until ready to transport the tablespace

  4.  Transport phase (source data is READ ONLY; see the sketch after this list)
    1. Make the tablespaces in the source database READ ONLY
    2. Repeat the Roll Forward phase one final time
      • This step makes the destination datafile copies consistent with the source database and generates the necessary export.
      • Time for this step is significantly shorter than the traditional XTTS method when dealing with large data because the incremental backup size is smaller.
    3. Import the metadata of objects in the tablespaces into the destination database using Data Pump
    4. Make the tablespaces in the destination database READ WRITE
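Once the final roll forward completes, the transport phase itself boils down to something like the following minimal sketch, assuming a single hypothetical tablespace TBS_DATA. The MOS note drives the real procedure through its helper scripts, so treat this only as an outline.

-- Source: stop changes to the tablespace being transported (hypothetical name TBS_DATA)
ALTER TABLESPACE tbs_data READ ONLY;

-- Export the tablespace metadata with Data Pump, e.g.
--   expdp ... transport_tablespaces=TBS_DATA transport_full_check=yes
-- then plug it into the destination with
--   impdp ... transport_datafiles='<converted datafile copies>'

-- Destination: once the metadata import completes, open the tablespace for writes
ALTER TABLESPACE tbs_data READ WRITE;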

Thanks

Suresh

Oracle Autonomous Datawarehouse: Loading Data from Oracle Object Store or AWS S3

Oracle Autonomous Datawarehouse offers loading data from object stores like Oracle Object Store, AWS S3, or Azure Blob Store.

For this, a new package called DBMS_CLOUD has been introduced; you can load/unload data and also manage files with it. This facilitates easy management of objects in the object store from the database and also provides programmatic access to the object stores.

Copying Data from Oracle Object store or S3

Create credentials (AWS S3)

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'awss3cred',
    username        => 'geeks',
    password        => 'secretkey');
END;
/

Create credentials (Oracle Object Store)

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'oraclecred',
    username        => 'geeks',
    password        => 'secretkey');
END;
/
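Note that DBMS_CLOUD.COPY_DATA loads into an existing table, so the target must be created first. A minimal sketch (the CHANNELS column layout here is purely hypothetical; it has to match whatever is in channels.txt):

-- Hypothetical target table for the COPY_DATA examples below
CREATE TABLE channels (
  channel_id   NUMBER,
  channel_name VARCHAR2(100)
);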

Copy the data (example oracle object store)

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS',
    credential_name => 'oraclecred',
    file_uri_list   => 'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/geeksinsights/datalake3/channels.txt');
END;
/

Copy the data (example AWS S3)

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS',
    credential_name => 'awss3cred',
    file_uri_list   => 'https://s3-eu-west-1.amazonaws.com/datalake/channels.txt');
END;
/

Copying Data from Datawarehouse to Object Store or S3

In addition, to copy data back to the object store you can use the DBMS_CLOUD.PUT_OBJECT procedure with a directory name; the size limit per file is 5 GB.

BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'awss3cred',
    object_uri      => 'https://s3-eu-west-1.amazonaws.com/datalake/',
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => 'EXPORT_ORCL_28NOV2018.dmp');
END;
/

Listing files in a database directory, similar to AWS RDS's RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'):

SELECT * FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR');
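To manage the files sitting in the object store itself, the same package also provides LIST_OBJECTS and DELETE_OBJECT. A minimal sketch using the credential and bucket from the examples above; check the DBMS_CLOUD reference for the exact signatures in your version.

-- List the objects in the S3 bucket used above
SELECT object_name, bytes
FROM   DBMS_CLOUD.LIST_OBJECTS('awss3cred', 'https://s3-eu-west-1.amazonaws.com/datalake/');

-- Remove a single object from the bucket
BEGIN
  DBMS_CLOUD.DELETE_OBJECT(
    credential_name => 'awss3cred',
    object_uri      => 'https://s3-eu-west-1.amazonaws.com/datalake/channels.txt');
END;
/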

Thanks

Suresh

 

OOW#18 – Kafka with Oracle Database

Yes, Oracle always strives to integrate the latest industry trends into its database. The new one is Kafka integration with the Oracle Database through the DBMS_KAFKA package, with which you can publish events and consume or load data into tables.

This feature was discussed at CodeOne 2018 by Melli Annamalai, Senior Principal Product Manager at Oracle, as per Lucas's blog post. I am waiting to see the feature and get my hands dirty on it. Very recently I got into a project where I have to publish events back to the Oracle database and I am using a number of workaround solutions :). This will help.

Read the full article directly from Lucas Jellema's post.

BEGIN
  dbms_kafka.register_cluster(
    'SENS2',
    '<Zookeeper URL>:2181',
    '<Kafka broker URL>:9092',
    'DBMSKAFKA_DEFAULT_DIR',
    'DBMSKAFKA_LOCATION_DIR',
    'Testing DBMS KAFKA');
END;
/

Thanks

Suresh

OOW#18 – Updates – 19c Automatic Indexing

Really??? Straight from the OOW#18 updates on Twitter, LinkedIn, and various posts.

Here is the snippet I got from Twitter, from Franck Pachot's & Maria Colgan's posts.

This is how it may/will work :). It runs as a background task at a fixed and limited time. "Does not impact your application"??? True, read below: it follows a predefined resource plan that constrains its resource utilization, which means it is intelligent enough to decide whether or not to act, provided you have a resource plan.
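For reference, once the 19c bits are out, the knob is expected to be a DBMS_AUTO_INDEX package. A minimal sketch of what the configuration might look like, based on what was shown at OOW; treat the package and parameter names as tentative until the documentation lands.

BEGIN
  -- Let automatic indexing create visible indexes; 'REPORT ONLY' would only record recommendations
  DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');
END;
/

-- Review what the background task did recently
SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;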

The 19c beta seems to be releasing next week; let's wait for more updates from the Oracle geeks and gurus... BTW, I have added a new category to my posts now :).

Keep going Oracle !!! Cheers.

AIOUG – Mumbai – Performance Tuning Tech Day – 3-Nov-18 – Full Day Event

Hello All

Are you around Mumbai, India? I am delivering a full-day event in Mumbai on 3rd November about Oracle performance tuning. Please join me; session abstracts and registration details can be found in the link below.

https://www.meraevents.com/event/aioug-mumbai-performance-tech-day-november-2018

See you there !!!

Building Datalake in Oracle Cloud

Hello All

My latest presentation on building a datalake in Oracle Cloud covers at least 6 services in Oracle Cloud and 4 patterns of datalake editions in Oracle Cloud.

I have also uploaded videos to my YouTube channel, which are embedded in the PDF presentation.

https://www.youtube.com/embed?listType=playlist&list=PLR6rN4cTV4BTcDAeqZmFdXMijlBu65kxc

-Thanks

Suresh

Datalake in Oracle Cloud – AIOUG & APACOUC Events

Hello All

I am speaking about Datalake in Oracle Cloud at the two events below. I will be covering the fundamentals and building blocks of datalakes and how they differ from a datawarehouse. I will also demo the new-generation datalake in Oracle Cloud using the Big Data, Event Hub, and Object Store services.

Hyderabad - Tech Mahindra Campus - 25th Aug 2018 - AIOUG Cloudathon Techday

https://www.meraevents.com/event/aioug-hyderabad-august-2018-oracle-cloud-cloudathon-techday

Free Webinar -  27-Aug-2018 - APACOUC - Webinar Tour 2018

https://events.genndi.com/register/818182175026319207/a6aad93c6b

http://www.apacouc.org/2017-webinar-tour.html

See you all in the events.

-Thanks

Suresh

APACOUC Webinar 2018 – Presenting SQL Plan Stability & Flexibility Features – Today

APACOUC Webinar 2018 - Wave 2 Sessions: In the next 2.5 hrs, with @apacouc @fcomunoz @DataIntensity, I am presenting SQL Plan Stability & Flexibility Features.

Register for free

https://events.genndi.com/register/818182175026319207/528ad96d7a

APACOUC.org – 2018 Webinar Tour – 18c New Features for DBA’s – 27th July

Hi

Join my free webinar in the APACOUC.org webinar tour tomorrow, i.e. 27th July, where I am presenting 18c New Features for DBAs.

Timing: July 27th, 12.30 PM (IST) / 5.00 PM (AEST). Registration link: https://events.genndi.com/register/818182175026319207/2e96be318e

The next wave of presentations has also been announced; please check here.

http://www.apacouc.org/2017-webinar-tour.html

See you all there.

-Suresh