AIOUG – Sangam 17 – I am Speaking too

Hello All,

I am speaking this year at AIOUG Sangam 17 and my presentation topic is Oracle Sharding.

The agenda is yet to be published, and registrations are open.

http://www.aioug.org/sangam17/index.php/registration/registration-details

-Thanks

Suresh

 

Australia OTN 2017 – IAOUG – OTN (ODC) Conference 2017

We are back again this year with more fantastic sessions from another set of speakers who are well known in the Oracle community, including ACE Directors.

More Details at http://www.iaoug.com/otn-days.html

Registrations are Open and Tickets are Limited

Venue: Rydges Hotel World Square, Sydney, Australia

Date: Nov 24, 2017

From: 8:30 am until 5:30 pm

Schedule Details: https://sydneyotnday2017.sched.com/

And I am speaking too, about DevOps for Databases.

Register here: https://sydneyotnday2017.sched.com/event/CVLo/devops-for-databases

-Thanks

Suresh

 

catctl.pl -E Emulation Feature

Hi

Many of us know about the catctl.pl script, which has been used to upgrade databases with parallel worker processes since 12c. In addition, there is an executable called dbupgrade in $ORACLE_HOME/bin, which in turn calls catctl.pl.

There is a little-known emulation feature you can run with this utility or script: it displays on screen the same output that you would see during an actual upgrade, without actually upgrading anything.

$ORACLE_HOME/rdbms/admin/catctl.pl -E
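For example, a minimal emulation run might look like this (a sketch, assuming the standard 12.2 home layout; -n sets the number of parallel processes and catupgrd.sql is the usual upgrade driver script):

$ cd $ORACLE_HOME/rdbms/admin
$ $ORACLE_HOME/perl/bin/perl catctl.pl -E -n 4 catupgrd.sql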

You can read more here: https://docs.oracle.com/database/122/UPGRD/testing-upgraded-production-oracle-database.htm#UPGRD52882

-Thanks

Suresh

 

18c Database

Hello All

As you may already be aware, the next database release (what would have been 12.2.0.2) is named 18c, to be followed by 19c and so on. Here is the roadmap from MOS document 742060.1.

What does it contain, and what do we expect from this release?

  1. Tagged as a fully autonomous database if your database is in the Oracle Public Cloud
  2. If you are not in the Oracle Public Cloud, you will not get the fully autonomous features
  3. Self-patching, which sounds to me like enhancements to the Oracle Configuration Management feature

-Thanks

Suresh

Oracle 12c Upgrade : Virtual Column with Function Based Index (bug)

Issue: post-12c upgrade: Oracle bug - virtual column issue

Fix: set it at session or system level - "_fix_control"='16237969:OFF'
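For example (a sketch of the usual fix-control syntax):

SQL> ALTER SESSION SET "_fix_control" = '16237969:OFF';
SQL> -- or, for the whole instance:
SQL> ALTER SYSTEM SET "_fix_control" = '16237969:OFF';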

Be aware that a hidden virtual column is created along with a function-based index, which may lead to sub-optimal plans.

In 11g, the index is picked by the Oracle optimizer, giving an optimal plan. But in 12c, the index is not picked by the optimizer, so the query goes with a sub-optimal plan that results in high logical I/O and elapsed time.

Thanks to Vasu N for sharing his findings on this and for working with Oracle on it; he kindly offered to share it to help everyone.

#########################################
Reproduce the issue with below steps:
#########################################

CREATE TABLE TEST_VIRTUAL
AS
SELECT ROWNUM id, 'TEST' || ROWNUM name
FROM DUAL
CONNECT BY ROWNUM < 10000;

CREATE TABLE TEST_VIRTUAL_PARENT
AS
SELECT ROWNUM id, MOD (ROWNUM, 3) GROUP_NAME
FROM DUAL
CONNECT BY ROWNUM < 10000;

CREATE INDEX TEST_VIRTUAL_FBI_IDX ON TEST_VIRTUAL (UPPER ("NAME"));
CREATE INDEX TEST_VIRTUAL_PARENT_IDX ON TEST_VIRTUAL_PARENT (ID);

##############
#Execute statement with a simple function
#############
SELECT LRD.ID FROM TEST_VIRTUAL_PARENT LRD, TEST_VIRTUAL MLR
WHERE     UPPER (MLR.NAME) = 'TEST1'
AND LRD.ID = MLR.ID(+)
AND LRD.GROUP_NAME = 1;

#####################
The resulting plan is as below, not picking the FBI index:
#####################
SELECT STATEMENT ALL_ROWS Cost: 17 Bytes: 66,660 Cardinality: 3,333

4 FILTER Filter Predicates: UPPER("NAME")='TEST1'
3 HASH JOIN OUTER Access Predicates: "LRD"."ID"="MLR"."ID"(+) Cost: 17 Bytes: 66,660 Cardinality: 3,333
1 TABLE ACCESS FULL TABLE PERF11I.TEST_VIRTUAL_PARENT Filter Predicates: "LRD"."GROUP_NAME"=1 Cost: 7 Bytes: 23,331 Cardinality: 3,333
2 TABLE ACCESS FULL TABLE PERF11I.TEST_VIRTUAL Cost: 10 Bytes: 129,987 Cardinality: 9,999

##############
#Execute statement with a simple function, along with hint to not use the virtual columns
#############

SELECT /*+ OPT_PARAM('_replace_virtual_columns','false') */
LRD.ID
FROM TEST_VIRTUAL_PARENT LRD, TEST_VIRTUAL MLR
WHERE UPPER (MLR.NAME) = 'TEST1' AND LRD.ID = MLR.ID(+)

###############
Plan
###############
SELECT STATEMENT ALL_ROWS Cost: 16 Bytes: 1,700 Cardinality: 100
4 HASH JOIN Access Predicates: "LRD"."ID"="MLR"."ID" Cost: 16 Bytes: 1,700 Cardinality: 100
2 TABLE ACCESS BY INDEX ROWID BATCHED TABLE PERF11I.TEST_VIRTUAL Cost: 9 Bytes: 1,300 Cardinality: 100
1 INDEX RANGE SCAN INDEX PERF11I.TEST_VIRTUAL_FBI_IDX Access Predicates: UPPER("NAME")='TEST1' Cost: 1 Cardinality: 40
3 INDEX FAST FULL SCAN INDEX PERF11I.TEST_VIRTUAL_PARENT_IDX Cost: 7 Bytes: 39,996 Cardinality: 9,999

Oracle 12.2 New Features : Approximate Query Processing

Some time ago, I was looking at a database project requirement to process an analytical workload. The organisation's choice was Oracle, and the developers' choice was Postgres or Vertica. We did not actually have enough data processing to warrant Vertica, so the final contenders were Oracle and Postgres.

As per licensing guidelines I cannot publish the details of the outcome, but I would say the results were not good for me as an Oracle DBA, at least for aggregate processing: a COUNT(*) over a 2-million-row table with more than 200 columns. Still, we wanted a stable enterprise database rather than an open-source one. And since the distinctness of our data does not change much, some sort of approximation is fine for us.

That is what 12.2 now offers with Approximate Query Processing, which the documentation describes as follows:

Approximate Query Processing

The 12.2 release extends the area of approximate query processing by adding approximate percentile aggregation. With this feature, the processing of large volumes of data is significantly faster than the exact aggregation. This is especially true for data sets that have a large number of distinct values with a negligible deviation from the exact result.

Approximate query aggregation is a common requirement in today's data analysis. It optimizes the processing time and resource consumption by orders of magnitude while providing almost exact results. Approximate query aggregation can be used to speed up existing processing.

Oracle provides a set of SQL functions that enable you to obtain approximate results with negligible deviation from the exact result. There are additional approximate functions that support materialized view based summary aggregation strategies. The functions that provide approximate results are as follows:

APPROX_COUNT_DISTINCT
APPROX_COUNT_DISTINCT_DETAIL
APPROX_COUNT_DISTINCT_AGG
TO_APPROX_COUNT_DISTINCT
APPROX_MEDIAN
APPROX_PERCENTILE
APPROX_PERCENTILE_DETAIL
APPROX_PERCENTILE_AGG
TO_APPROX_PERCENTILE

Approximate query processing can be used without any changes to your existing code. When you set the appropriate initialization parameters, Oracle Database replaces exact functions in queries with the corresponding SQL functions that return approximate results.

Parameters that affect Approximate Query Processing:

SQL> show parameter approx

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
approx_for_aggregation               boolean     TRUE
approx_for_count_distinct            boolean     TRUE
approx_for_percentile                string      ALL
SQL>
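With these parameters set, an exact aggregate in existing SQL is transparently rewritten to its approximate counterpart. A quick sketch at session level (reusing the table from the examples below):

SQL> ALTER SESSION SET approx_for_count_distinct = TRUE;
SQL> -- with the parameter set, this silently executes as APPROX_COUNT_DISTINCT(id)
SQL> SELECT COUNT(DISTINCT id) FROM INSIGHTS.TABLE;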

When we use the _DETAIL and _AGG functions, the value returned is a binary detail shown as hexadecimal; to retrieve readable values we need to use the associated TO_APPROX_* functions, typically along with materialized views.

SQL> select unit_code, APPROX_COUNT_DISTINCT_AGG(id)
     from (select unit_code, APPROX_COUNT_DISTINCT_DETAIL(ID) id
           from INSIGHTS.TABLE
           group by unit_code)
     group by unit_code;

UNIT_CODE       APPROX_COUNT_DISTINCT_AGG(ID)
--------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------
MHIX115         0D0C0925009E00000000000000A1000000000000000000000040000000000000000000000000000000001000000000000000000000000000000000000000000000040000000000000000000000000000
POIX108         0D0C0DD500740000000000000075000000000000000000000000000080000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000
PHIX365         0D0C0DD500270000000000000027000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001000000000

So, let me first create an MV with my query:

CREATE MATERIALIZED VIEW test_approx_agg_mv AS
SELECT unit_code,
APPROX_COUNT_DISTINCT_DETAIL(id) id
FROM INSIGHTS.TABLE
GROUP BY unit_code;
Then let's query it as below; as you see, the values provided are approximate when you compare them with the original counts (second query).

SQL> SELECT unit_code, TO_APPROX_COUNT_DISTINCT(id) "ids" FROM test_approx_agg_mv ORDER BY unit_code;

UNIT_CODE              ids
--------------- ----------
ABR13                    2
ABT10                   42
ABT13                   63

SQL> SELECT unit_code, count(id) from INSIGHTS.TABLE where unit_code in ('ABR13','ABT10','ABT13') group by unit_code;

UNIT_CODE       COUNT(id)
--------------- -------------
ABR13                      11
ABT13                     264
ABT10                     202

If I directly try to access the column with a regular COUNT, we get an error:

SQL>SELECT unit_code, count(id) from test_approx_agg_mv ORDER BY unit_code
                     *
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got BLOB
SQL>

So use the approximation function with its associated TO_APPROX_* function to retrieve the data; this works well with MVs rather than plain queries.

-Thanks

Suresh

 

 

AWR Report – Error ORA-13605: The specified task or object does not exist for the current user.

When you run the AWR report as a non-privileged user, i.e. a custom user (even one with the DBA privilege), you may receive the following error.

@?/rdbms/admin/awrrpt.sql

ORA-13605: The specified task or object  does not exist for the current user.

Error encountered in special section 99995

The possible cases are,

  • a missing grant on sys.dbms_advisor
  • bug 17588463 - a reference to where tasks are stored in table WRI$_ADV_TASKS

Our case was resolved when we granted the following privilege.

GRANT ADMINISTER ANY SQL TUNING SET TO <user>;
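If the first cause (the missing grant) applies instead, the corresponding fix would be along these lines (a sketch):

SQL> GRANT EXECUTE ON sys.dbms_advisor TO <user>;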

Update: 16-Feb-2017 (Thanks Vasu)

The issue may still occur, but the following patch is the permanent fix.

Bug 25059420: Schemas with Non-SYSDBA Privileges Get ORA-13605 in AWR Reports

Patch 25105847: ORA-13605 IN ADDM SECTION OF AWR REPORT FOR DBA USER

Hope this helps

-Thanks

Geek DBA

Securing/Simplifying AWS Network Architecture using Transit VPC and IPSec Tunnel

Written by Akhil Mathema

We recently incorporated an IPSec tunnel and a Transit VPC in our project to simplify network connectivity between different AWS accounts and different VPCs.

AWS provides great flexibility in architecting your network infrastructure, whether in terms of security, flexibility or simplicity.

Whether from a security or an infrastructure-architecture perspective, it is ideal to have fewer public-facing endpoints in the cloud. An ideal scenario would be to have all non-production resources accessible only via the corporate network. However, this can be challenging when there are multiple non-production environments such as Integration, Configuration, Development, Test, Staging, User Acceptance Test and so on.

The most common practice would be to set up multiple Direct Connects or VPN connections to the various VPCs. This is less economical and creates multiple external routes from the corporate network. It can be avoided by using a Transit VPC in conjunction with an IPSec tunnel.

Think of it as setting up a site-to-site VPN between a main office and a branch office, where the end objective is the branch office being able to reach a file server in the main office's local network.

The concept of a Transit VPC is that all traffic from the corporate network routes to this VPC through a Direct Connect, and the Transit VPC in turn peers with the non-production VPCs. An EC2 instance is launched in a private subnet of the Transit VPC with an IPSec tunnel configured. On the corporate side, either a physical machine or a VM is configured with IPSec, which eventually establishes the IPSec tunnel over the Direct Connect.

[Diagram: AWS IPSec Tunnel]

I will present a scenario with the following resources:

  • Corporate network: 192.168.0.0/24
  • Transit VPC: 10.1.0.0/16
  • Linux VM (CentOS 6.x) in the corporate network: 192.168.0.10
  • Linux EC2 instance in the Transit VPC: 10.1.0.100
  • Integration VPC: 10.8.0.0/16
  • Configuration VPC: 10.9.0.0/16
  • Development VPC: 10.10.0.0/16
  • Test VPC: 10.11.0.0/16
  • Staging VPC: 10.12.0.0/16
  • UAT VPC: 10.13.0.0/16
  • Openswan tool to set up the IPSec tunnel

Here is how it can be configured:

Step 1: Establish Direct Connect

Ensure you have a Direct Connect established between your corporate network and the Transit VPC. This means any resource in the Transit VPC is routable via Direct Connect. In this case, test network connectivity from 192.168.0.0/24 to 10.1.0.0/16.

Step 2: VPC peering

You can peer VPC as follows:

Peering connection name tag: Peering-Transit-DEV

VPC (Requester): vpc-id of Transit VPC 10.1.0.0/16

VPC (Accepter): vpc-id of Development VPC 10.10.0.0/16

Once the peering connection is created, you need to "Accept Request" for the peering.

Repeat the same process for the rest of the VPCs (Integration, Configuration, Test, Staging and UAT)

For example:

Peering Connection     Peering ID
Peering-Transit-INT    pcx-abcd1234
Peering-Transit-CFG    pcx-abcd5678
Peering-Transit-DEV    pcx-cdef1234
Peering-Transit-TST    pcx-cdef5678
Peering-Transit-STG    pcx-fedc0987
Peering-Transit-UAT    pcx-fedc7890
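
The same peering can also be scripted with the AWS CLI (a sketch; the VPC IDs here are placeholders):

$ aws ec2 create-vpc-peering-connection --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222
$ aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-cdef1234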

 

Step 3: Configure IPSec Tunnel

Using the open-source tool Openswan, you can configure an IPSec tunnel between the VM in the corporate network and the EC2 instance in the Transit VPC. In this example, I have used a pre-shared key. The following are the primary configs:

$ cat /etc/ipsec.d/awsvpc.conf
conn awsvpc
    type=tunnel
    authby=secret
    mtu=1436
    keyingtries=3
    left=192.168.0.10
    leftsubnets={192.168.0.0/24}
    leftnexthop=%defaultroute
    right=10.1.0.100
    rightsubnets={10.8.0.0/13}
    rightnexthop=%defaultroute
    auto=start

$ cat /etc/ipsec.d/ipsec.secrets
%any %any : PSK "*********************"

$ cat /etc/ipsec.conf
config setup
    nat_traversal=yes
    virtual_private=%v4:192.168.0.0/24,%v4:10.8.0.0/13
include /etc/ipsec.d/*.conf

Please note: 10.8.0.0/13 would cover the range from 10.8.0.0 to 10.15.255.255
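
Once the configs are in place, the tunnel can be brought up and checked with something like the following (a sketch, assuming the Openswan service scripts on CentOS 6):

$ sudo service ipsec start
$ sudo ipsec auto --up awsvpc
$ sudo ipsec verify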


Step 4: Update Static route for non-prod VPCs:

All traffic to non-prod VPCs will traverse through an on-prem VM.

Hence a static route needs to be configured at the corporate router as follows:

Source            Destination    Gateway
192.168.0.0/24    10.8.0.0/13    192.168.0.10/32

This means any traffic from the corporate network 192.168.0.0/24 destined for 10.8.0.0/16, 10.9.0.0/16, 10.10.0.0/16, 10.11.0.0/16, 10.12.0.0/16 or 10.13.0.0/16 will be routed through the VM 192.168.0.10.
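
Note that this VM must forward packets between the corporate LAN and the tunnel, so IP forwarding has to be enabled on it (a Linux sketch):

$ sudo sysctl -w net.ipv4.ip_forward=1
$ # persist the setting across reboots:
$ echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf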

Step 5: Configure Route Table:

For the given subnet inside the Transit VPC, create a route table as follows:

Destination     Target
10.1.0.0/16     local
10.8.0.0/16     pcx-abcd1234
10.9.0.0/16     pcx-abcd5678
10.10.0.0/16    pcx-cdef1234
10.11.0.0/16    pcx-cdef5678
10.12.0.0/16    pcx-fedc0987
10.13.0.0/16    pcx-fedc7890

For the given subnet inside the Development VPC, create a route table as follows:

Destination    Target
10.1.0.0/16    pcx-cdef1234

 

Repeat the same process for the rest of the VPCs: INT, CFG, TST, STG and UAT. The route entries can also be added from the command line, as sketched below.
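
Each route entry can be added with the AWS CLI like this (a sketch; the route-table ID rtb-transit01 is a placeholder):

$ aws ec2 create-route --route-table-id rtb-transit01 --destination-cidr-block 10.10.0.0/16 --vpc-peering-connection-id pcx-cdef1234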

Step 6: Configure Security Groups:

Finally, configure Security Groups for each subnet that needs to be accessible from the corporate network.

 

Type         Protocol   Port Range   Source
HTTP         TCP        80           Custom 192.168.0.0/24
SSH          TCP        22           Custom 192.168.0.0/24
All-ICMPv4   ICMP       0-65535      Custom 192.168.0.0/24
RDP          TCP        3389         Custom 192.168.0.0/24

On completion of all six steps, you should be able to access AWS resources in the non-production VPCs as defined in the Security Groups.
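
As with the routes, each Security Group rule can also be added via the AWS CLI (a sketch; the security-group ID sg-33333333 is a placeholder):

$ aws ec2 authorize-security-group-ingress --group-id sg-33333333 --protocol tcp --port 22 --cidr 192.168.0.0/24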

 

Hope you like this post.

-Akhil Mathema

SQL Server on Linux

Hello,

Back in Nov 2016, Microsoft announced SQL Server on Linux, and being a database enthusiast I wanted to get hands-on and see how it works, but I have been busy with other projects and could not look into it. Last week I had a conversation with a peer in the lift; he asked me how it works, which reminded me of this topic. My first answer to him was that there must be some component or process that translates Windows calls from the SQL Server platform to a Linux abstraction layer, which is a must; second on the list was compatibility workarounds. Without looking into any document I was sure about this, since Microsoft (or anyone) would never overhaul or rewrite the SQL Server platform for Linux. As it turns out, they had an internal project offering a bridge between Windows and Linux called "Drawbridge", and evolved something called "SQLPAL" to make the SQL Server platform work on Linux.

You can read more about SQLPAL here from the Microsoft technet blog: https://blogs.technet.microsoft.com/dataplatforminsider/2016/12/16/sql-server-on-linux-how-introduction/

After reading it, I decided to give the installation a go and check feature compatibility, especially with the clients and other integration tools we use.

I spun up an EC2 instance (RHEL 7.4; the doc says 7.3) with at least 2 CPUs and 7 GB of memory, and the script here does the following things if your server can connect to the internet. I did not want to reinvent the wheel by writing my own scripts, so I decided to use this script as is (the manual equivalent is sketched after the list below).

  1. Install the SQL Server package
  2. Configure SQL Server using the mssql-conf utility
  3. Create a login user
  4. Create a database
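
If you would rather run the main steps by hand, they look roughly like this (a sketch based on the same Microsoft repo used for SSIS later in this post):

$ sudo curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/7/mssql-server.repo
$ sudo yum install -y mssql-server
$ sudo /opt/mssql/bin/mssql-conf setup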

And the script worked like a charm: it took less than 4 minutes (the package size was 175 MB) to install and configure SQL Server and SQL Server Agent (in contrast to the Windows installation with its dependencies, .NET framework and so on).

Log here, with an excerpt below. Full log: sqlserver_on_linux_install

Complete!
Configuring firewall to allow traffic on port 1433...
sudo: firewall-cmd: command not found
sudo: firewall-cmd: command not found
Restarting SQL Server...
Waiting for SQL Server to start...
                                                                                                                                                                                 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Microsoft SQL Server 2017 (RC2) - 14.0.900.75 (X64)
        Jul 27 2017 08:53:49
        Copyright (C) 2017 Microsoft Corporation
        Enterprise Evaluation Edition (64-bit) on Linux (Red Hat Enterprise Linux Server 7.4 (Maipo))                                                                            

(1 rows affected)
Done!

And my first look with processes

[root@ip-******** ~]# ps -eaf | grep sql
mssql      778     1  0 11:32 ?        00:00:00 /opt/mssql/bin/sqlservr
mssql      781   778  3 11:32 ?        00:00:07 /opt/mssql/bin/sqlservr
root       984 10394  0 11:36 pts/1    00:00:00 grep --color=auto sql

Second, connect to the database through the sqlcmd tool. Add /opt/mssql-tools/bin to the environment:

[root@ip-****** ~]# echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
[root@ip-****** ~]# source .bash_profile
[root@ip-****** ~]# sqlcmd
Microsoft (R) SQL Server Command Line Tool
Version 13.1.0007.0 Linux
Copyright (c) 2012 Microsoft. All rights reserved.

usage: sqlcmd            [-U login id]          [-P password]
  [-S server or Dsn if -D is provided]
  [-H hostname]          [-E trusted connection]
  [-N Encrypt Connection][-C Trust Server Certificate]
  [-d use database name] [-l login timeout]     [-t query timeout]
  [-h headers]           [-s colseparator]      [-w screen width]
  [-a packetsize]        [-e echo input]        [-I Enable Quoted Identifiers]
  [-c cmdend]
  [-q "cmdline query"]   [-Q "cmdline query" and exit]
  [-m errorlevel]        [-V severitylevel]     [-W remove trailing spaces]
  [-u unicode output]    [-r[0|1] msgs to stderr]
  [-i inputfile]         [-o outputfile]
  [-k[1|2] remove[replace] control characters]
  [-y variable length type display width]
  [-Y fixed length type display width]
  [-p[1] print statistics[colon format]]
  [-R use client regional setting]
  [-K application intent]
  [-M multisubnet failover]
  [-b On error batch abort]
  [-D Dsn flag, indicate -S is Dsn]
  [-X[1] disable commands, startup script, environment variables [and exit]]
  [-x disable variable substitution]
  [-? show syntax summary]

Next, restart the service and check its status:

[root@ip-***** ~]# sudo systemctl restart mssql-server
[root@ip-***** ~]# sudo systemctl status mssql-server
● mssql-server.service - Microsoft SQL Server Database Engine
   Loaded: loaded (/usr/lib/systemd/system/mssql-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2017-08-27 11:53:23 UTC; 10s ago
     Docs: https://docs.microsoft.com/en-us/sql/linux
 Main PID: 1502 (sqlservr)
   CGroup: /system.slice/mssql-server.service
           ├─1502 /opt/mssql/bin/sqlservr
           └─1504 /opt/mssql/bin/sqlservr

Next, I forgot my sa login password, hence a reset was needed: run the mssql-conf script with set-sa-password and it will ask for the password.

[root@ip-***** ~]# /opt/mssql/bin/mssql-conf set-sa-password
Enter the SQL Server system administrator password:
Confirm the SQL Server system administrator password:
Configuring SQL Server...

This is an evaluation version.  There are [149] days left in the evaluation period.
The system administrator password has been changed.
Please run 'sudo systemctl start mssql-server' to start SQL Server.

Finally, log into the database using sqlcmd:

[root@ip-****** ~]# sqlcmd -S localhost -U sa -P ***** -d master -Q "select name, database_id from sys.databases"
name                                                                                                                             database_id
-------------------------------------------------------------------------------------------------------------------------------- -----------
master                                                                                                                                     1
tempdb                                                                                                                                     2
model                                                                                                                                      3
msdb                                                                                                                                       4
geekdb                                                                                                                                     5

(5 rows affected)

And also logged through SSMS as well.

[Screenshot: SQL Server on Linux - SSMS]

You can set further configuration using the mssql-conf tool. The following settings can be changed (a usage sketch follows the list):

Collation                  Set a new collation for SQL Server on Linux.
Customer feedback          Choose whether or not SQL Server sends feedback to Microsoft.
Default data directory     Change the default directory for new SQL Server database data files (.mdf).
Default log directory      Change the default directory for new SQL Server database log (.ldf) files.
Default dump directory     Change the default directory for new memory dumps and other troubleshooting files.
Default backup directory   Change the default directory for new backup files.
Dump type                  Choose the type of memory dump file to collect.
High availability          Enable Availability Groups.
Local Audit directory      Set a directory to add Local Audit files.
Locale                     Set the locale for SQL Server to use.
Memory limit               Set the memory limit for SQL Server.
TCP port                   Change the port where SQL Server listens for connections.
TLS                        Configure Transport Level Security.
Traceflags                 Set the trace flags that the service is going to use.
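
For instance, capping SQL Server's memory and setting the TCP port would look like this (a sketch; the memory.memorylimitmb and network.tcpport keys are as I recall them from the Microsoft docs, so verify before use):

$ sudo /opt/mssql/bin/mssql-conf set memory.memorylimitmb 3328
$ sudo /opt/mssql/bin/mssql-conf set network.tcpport 1433
$ sudo systemctl restart mssql-server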

My final go: installing the SSIS package.

[root@********* ~]# curl https://packages.microsoft.com/config/rhel/7/mssql-server.repo > /etc/yum.repos.d/mssql-server.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   220  100   220    0     0    100      0  0:00:02  0:00:02 --:--:--   100
[root@ip-******** ~]# sudo yum install -y mssql-server-is
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
packages-microsoft-com-mssql-server                                                                                                                       | 2.9 kB  00:00:00
Resolving Dependencies


Installed:
  mssql-server-is.x86_64 0:14.0.900.75-1

Complete!

Run the setup:

[root@******* ~]# sudo /opt/ssis/bin/ssis-conf setup
The license terms for this product can be downloaded from:
https://go.microsoft.com/fwlink/?LinkId=852741&clcid=0x409

The privacy statement can be viewed at:
https://go.microsoft.com/fwlink/?LinkId=853010&clcid=0x409

Do you accept the license terms? [Yes/No]:yes

Choose an edition of SSIS:
  1) Evaluation (free, no production use rights, 180-day limit)
  2) Developer (free, no production use rights)
  3) Express (free)
  4) Web (PAID)
  5) Standard (PAID)
  6) Enterprise (PAID)
  7) I bought a license through a retail sales channel and have a product key to enter.

Details about editions can be found at
https://go.microsoft.com/fwlink/?LinkId=852748&clcid=0x409

Use of PAID editions of this software requires separate licensing through a
Microsoft Volume Licensing program.
By choosing a PAID edition, you are verifying that you have the appropriate
number of licenses in place to install and run this software.

Enter your edition(1-7): 1
Only user in 'ssis' group can run 'dtexec' on Linux. Do you want to add current user into 'ssis' group? [Yes/No]:yes
Please logout to reload the group information.
SSIS telemetry service is now running.
Setup has completed successfully.

The SSIS telemetry service is running:

[root@****** ~]# ps -eaf | grep tele
root      2958     1  1 12:26 ?        00:00:00 /opt/ssis/bin/ssis-telemetry
root      2977  2958 20 12:26 ?        00:00:06 /opt/ssis/bin/ssis-telemetry
root      3106  3086  0 12:26 pts/1    00:00:00 grep --color=auto tele

And we can run an SSIS package using the command below; I will explore that more later.

dtexec /F <package name> /DE <protection password>

Bingo, everything is transparent: the bridge between the SQL platform and the underlying OS means you would not even know what you are running on behind the scenes.

Enjoy..

-Thanks

Suresh