
AWS RDS – Storing CloudWatch metrics for historical analysis

Update (17 Nov): AWS has announced that CloudWatch now retains metrics for up to 15 months; see the announcement below. Oh, the AWS world, it's ever changing...

https://aws.amazon.com/about-aws/whats-new/2016/11/cloudwatch-extends-metrics-retention-and-new-user-interface/

As we are all aware, AWS CloudWatch used to hold metrics for only two weeks in the console, and beyond that you could not analyse what happened, unlike with other monitoring tools.

For those who are new to RDS or the AWS world: CloudWatch is AWS's monitoring tool, and its metrics are like host monitoring metrics, e.g. CPU, memory, storage, IO latencies, etc.

If you depend largely on CloudWatch metrics and have no other monitoring system, I would suggest preserving the metrics by downloading them into a flat file using the AWS CLI.

The following script does this:

  • Sets your AWS region
  • Sets the date to one day ago
  • Takes your database instance identifier as input
  • Pulls the previous day's datapoints (00:00 to 23:59, one every 10 minutes per --period 600) using aws cloudwatch get-metric-statistics --namespace AWS/RDS into a flat file
  • Trims and reshapes the file as required using sed
  • Loads the rows into my Oracle database using sqlldr; the same approach works for metrics such as CPUUtilization, FreeableMemory, FreeStorageSpace, WriteLatency, DiskQueueDepth, etc.

A sample snippet of the script:

#!/bin/ksh

        export AWS_DEFAULT_REGION=ap-southeast-2

        echo `date`

        ## Date being collected (yesterday), in the ISO format that the
        ## CloudWatch --start-time/--end-time options expect

        METRICDATE=`date +"%Y-%m-%d" --date="1 day ago"`

        echo $METRICDATE

        NOW=$(date +"%Y-%m-%d")

        export WASHLOGFILE=logs/WASHPROD_$NOW.log

#####################################################################

## Get New Endpoint Name

#####################################################################

        EPNAME=`aws rds describe-db-instances --db-instance-identifier $1 2>>$WASHLOGFILE | grep -i "Address" | cut -d '"' -f4`

        echo $EPNAME

####################################################################

## Get metrics for given identifier

####################################################################

        DBNAME=`aws rds describe-db-instances --db-instance-identifier $1 2>>$WASHLOGFILE | grep -i "DBName" | cut -d '"' -f4`

        echo $DBNAME

aws cloudwatch get-metric-statistics --namespace AWS/RDS --metric-name CPUUtilization --start-time ${METRICDATE}T00:00:00Z --end-time ${METRICDATE}T23:59:00Z --statistics Maximum --period 600 --dimensions Name=DBInstanceIdentifier,Value=$1 --output text >> CPUUtilization.txt

sed -i 's/DATAPOINTS/CAMPUS-PROD CPU/g' CPUUtilization.txt

sed -i '1d' CPUUtilization.txt

sed -i 's/Z//g' CPUUtilization.txt

sed -i 's/[[:space:]]\{1,\}/,/g' CPUUtilization.txt

sed -i 's/T/ /g' CPUUtilization.txt
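To make the sed chain concrete, here is what it does to a single raw datapoint line (assuming the tab-separated `DATAPOINTS <value> <timestamp> <unit>` layout that `--output text` produces; "CAMPUS-PROD CPU" is the hard-coded instance/metric label from the snippet above, and the whitespace-to-comma step is written with a POSIX character class):

```shell
# One raw line as emitted by `aws cloudwatch get-metric-statistics --output text`
raw=$(printf 'DATAPOINTS\t12.5\t2016-11-16T00:05:00Z\tPercent')

# Apply the same transformations, in the same order as the script:
# label the row, strip the trailing Z, whitespace -> comma, timestamp T -> space
csv=$(printf '%s\n' "$raw" \
  | sed 's/DATAPOINTS/CAMPUS-PROD CPU/' \
  | sed 's/Z//g' \
  | sed 's/[[:space:]]\{1,\}/,/g' \
  | sed 's/T/ /g')

echo "$csv"   # CAMPUS-PROD,CPU,12.5,2016-11-16 00:05:00,Percent
```

Note that the space inside "CAMPUS-PROD CPU" also becomes a comma, which is what splits the row into the five fields the control file below expects.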

####################################################################

## Load them into a table

####################################################################

sqlldr dbmanager/sss@$EPNAME/$DBNAME control=CPUUtilization.ctl
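As an aside, the grep/cut parsing of the describe-db-instances output is fragile; the AWS CLI can extract the endpoint directly with a JMESPath --query expression instead. A sketch (the aws function below is a stub standing in for the real CLI so the snippet runs without credentials, and the endpoint name is made up; remove the stub to query a live account):

```shell
# Stub standing in for the real AWS CLI so this sketch runs without
# credentials -- delete this function to run against a live account.
aws() { printf 'mydb.abc123.ap-southeast-2.rds.amazonaws.com\n'; }

# --query applies a JMESPath expression to the response,
# so no grep/cut parsing of the JSON is needed
EPNAME=$(aws rds describe-db-instances --db-instance-identifier "$1" \
           --query 'DBInstances[0].Endpoint.Address' --output text)
echo "$EPNAME"
```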

########################################################################

Sample: sqlldr control file

########################################################################

LOAD DATA
 INFILE 'CPUUtilization.txt'
 APPEND INTO TABLE CloudWatch_Metrics_History
 FIELDS TERMINATED BY ','
(instance_name,
metric_type,
value,
collection_timestamp date 'YYYY-MM-DD HH24:MI:SS',
metric_unit)

Usage: process_cloudwatchmetrics.sh <instancename>

############################################################################

Installation Instructions

############################################################################

1. Create the table in your Oracle database (or another database of your choice; if it is not Oracle you will need a load method other than sqlldr to copy the data).

      CREATE TABLE "DBMANAGER"."CLOUDWATCH_METRICS_HISTORY"
        ("INSTANCE_NAME" VARCHAR2(50 BYTE),
         "METRIC_TYPE" VARCHAR2(30 BYTE),
         "VALUE" NUMBER(15,2),
         "COLLECTION_TIMESTAMP" DATE,
         "METRIC_UNIT" VARCHAR2(20 BYTE));

2. Run the script: process_cloudwatchmetrics.sh <instance_name>
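3. (Optional) To build up history automatically, schedule the script once a day shortly after midnight. A sample crontab entry (the paths and instance name are placeholders for your own):

```
# m h dom mon dow  command
30 0 *   *   *    /home/oracle/scripts/process_cloudwatchmetrics.sh myrdsinstance >> /home/oracle/scripts/logs/cron_cloudwatch.log 2>&1
```

Running at 00:30 gives CloudWatch time to finalise the previous day's datapoints before the script collects them.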

Hope this helps.

Thanks
