Tuesday, 2 November 2021

ExaCC overview

Overview and scope

Oracle's Exadata Cloud@Customer (ExaCC) is an on-premises cloud solution for clients who want the elasticity and agility of cloud technology for hosting their databases, but do not want to expose their data to the public cloud.

The ExaCC solution involves deploying Oracle's Exadata platform in the customer's data centre, with the networking components configured with appropriate controls for any interaction with the Oracle cloud interfaces.

In an ExaCC platform, the Exadata appliance hosted in the client's data centre is configured with a control plane server and communicates with the Oracle cloud interface over a secure tunnel. Oracle manages the infrastructure components, while the client manages the virtual machines and database components.


Support Model

Green layers are managed by the customer (DomU).

Red layers are managed by Oracle (Dom0).




The following tasks are performed by the customer:
  • Compartments: create, terminate, move resources from one compartment to another
  • User Management: IAM user creation, federated accounts, MFA
  • Exadata Infrastructure: create, terminate, manage contacts, and define the maintenance window
  • VCN: create, terminate, manage
  • VM cluster: create, terminate, scale OCPU/memory/storage up or down, add SSH keys, and update the license type
  • Grid Infrastructure (GI): patch and upgrade. GI is automatically installed when the VM cluster is created.
  • Virtual Machines: stop, start, restart, patch the operating system, and scale up resources such as CPU, memory, local storage, and ASM storage
  • Custom Database Software Images: create and delete
  • Oracle Homes: create, terminate, and patch
  • Oracle Databases: create, terminate, enable automatic backup, edit backup settings, restore, move to another Oracle Home, patch, and upgrade
  • Transparent Data Encryption (TDE): set up and enabled by default; can also be configured to use an HSM
  • High Availability: databases are created as Real Application Clusters (RAC) databases by default
  • Disaster Recovery: build (Active) Data Guard environments, switchover, failover, and reinstate
  • Backup configuration: NFS, block storage, ZFS
  • Migration: export/import, RMAN, XTTS, GoldenGate, standby setup, Zero Downtime Migration
  • Operator Access Control: review and approve, support Oracle SRs
  • Application Continuity

  • For these tasks, customers can use the Cloud Tooling: the web-based console UI, the OCI CLI, SDKs, and REST APIs.
  • On top of that, Oracle provides the dbaascli command-line utility to manage cloud-related operations from within the VM.
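As an illustrative sketch ( the compartment OCID and database name below are placeholders, and subcommand availability varies with tooling versions ), the same VM cluster can be driven both from the OCI CLI and from dbaascli inside the VM:

```shell
# From a workstation: list the ExaCC VM clusters in a compartment via the OCI CLI
# (the compartment OCID is a placeholder)
oci db vm-cluster list --compartment-id ocid1.compartment.oc1..example

# From inside a cluster VM: day-to-day operations via dbaascli,
# e.g. bouncing a database (MYCDB is a placeholder name)
dbaascli database bounce --dbname MYCDB
```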












Thursday, 1 October 2020

Exadata offerings by Oracle

 

Oracle offers the following Exadata deployment models for its customers

 

Exadata On-premise

Exadata Cloud@Customer

Exadata Cloud Service ( OCI + Exadata service )

 





Exadata on-premise

In the traditional on-premises model, the Exadata Database Machine sits in the client's data centre, and the client is responsible, together with Oracle, for managing the Exadata hardware.

 

 Exadata Cloud@Customer

  For customers who would like a cloud service but, at the same time, do not want to expose their data publicly. The Exadata machine runs in the customer's data centre with cloud features enabled.

 

 Exadata Cloud Service

 Oracle Exadata Cloud Service combines the power of Exadata with the Oracle Cloud.


Operation Model

Green layers are managed by the customer.
Red layers are managed by Oracle.




Deployment Comparison




Sunday, 21 July 2019

Oracle to PostgreSQL Migration


PostgreSQL is one of the most powerful and widely used open-source RDBMSs. I performed a simple test migration of an Oracle 12c database to PostgreSQL 11.4 running on an OEL ( Linux ) platform.


In this post, I would like to share the PostgreSQL installation steps and how to migrate a schema from an Oracle database to a PostgreSQL database.


1. Install PostgreSQL

1.1. Create an OS-level postgres account ( postgres:postgres/oinstall )


1.2. Download the postgresql-11.4.tar.gz source archive from the PostgreSQL download site and run the commands below


tar xvf postgresql-11.4.tar.gz
cd postgresql-11.4
./configure
make
make install ( as root )



1.3. Verify installation directory


ls -l /usr/local/pgsql/
[postgres@oelunixserver0101 postgres]$ ls -l /usr/local/pgsql/
total 6
drwxr-xr-x 2 root root 1024 Jul 16 10:39 bin
drwx------ 19 postgres postgres 1024 Jul 16 10:41 data
drwxr-xr-x 6 root root 1024 Jul 16 10:39 include
drwxr-xr-x 4 root root 2048 Jul 16 10:39 lib
drwxr-xr-x 6 root root 1024 Jul 16 10:39 share



1.4. Create the postgres data directory ( login as the postgres account )
Ensure you have enough space to accommodate the data.
mkdir /usr/local/pgsql/data



1.5. Initialise the data directory ( login as postgres )
[postgres@oelunixserver0101 ~]$ /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /usr/local/pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... GB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data/ -l logfile start



1.6. Verify data directory


ls -l /usr/local/pgsql/data
total 53
drwx------ 7 postgres oinstall 1024 Jul 19 00:21 base
drwx------ 2 postgres oinstall 1024 Jul 19 03:23 global
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_commit_ts
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_dynshmem
-rw------- 1 postgres oinstall 4513 Jul 16 10:41 pg_hba.conf
-rw------- 1 postgres oinstall 1636 Jul 16 10:41 pg_ident.conf
drwx------ 4 postgres oinstall 1024 Jul 19 03:27 pg_logical
drwx------ 4 postgres oinstall 1024 Jul 16 10:41 pg_multixact
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_notify
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_replslot
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_serial
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_snapshots
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_stat
drwx------ 2 postgres oinstall 1024 Jul 19 06:20 pg_stat_tmp
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_subtrans
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_tblspc
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_twophase
-rw------- 1 postgres oinstall 3 Jul 16 10:41 PG_VERSION
drwx------ 3 postgres oinstall 1024 Jul 16 10:41 pg_wal
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_xact
-rw------- 1 postgres oinstall 88 Jul 16 10:41 postgresql.auto.conf
-rw------- 1 postgres oinstall 23874 Jul 16 10:41 postgresql.conf
-rw------- 1 postgres oinstall 59 Jul 16 10:41 postmaster.opts
-rw------- 1 postgres oinstall 89 Jul 16 10:41 postmaster.pid

1.7. Start the postgres server and create a database
/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
[postgres@oelunixserver0101 ~]$ /usr/local/pgsql/bin/createdb test

1.8. Connect to the test database and list all the databases
[postgres@oelunixserver0101 ~]$ /usr/local/pgsql/bin/psql test
psql (11.4)
Type "help" for help.

test=#
test=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-------------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
test | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres
(4 rows)

2. Migrate the Oracle schema to PostgreSQL using the ora2pg tool
2.1 Prerequisites ( install the required modules )
Download and install the following archives:
- DBD-Oracle-1.75_2.tar.gz : Oracle database driver for the DBI module
- DBD-Pg-3.6.0.tar.gz : PostgreSQL database driver for the DBI module
- DBI-1.636.tar.gz : database-independent interface for Perl
- ora2pg-18.1.tar.gz : ora2pg archive
Installation steps ( repeat for each archive ):
tar xvf <file>
cd <main_folder>
perl Makefile.PL
make
make install ( as root )
Verify the ora2pg version:
[postgres@oelunixserver0101 ~]$ /data/oracle/product/agent12c/core/12.1.0.5.0/perl/bin/ora2pg -version
Ora2Pg v18.1

2.2 Create the config file ora2pg.conf with the following entries


ORACLE_HOME /data/oracle/product/12.2.0.2
ORACLE_DSN dbi:Oracle:host=172.17.80.138;service_name=postg1;port=1721
ORACLE_USER hr
ORACLE_PWD <password>
EXPORT_SCHEMA 1
SCHEMA HR
OUTPUT output.sql
TYPE TABLE PACKAGE COPY VIEW GRANT SEQUENCE TRIGGER FUNCTION PROCEDURE TABLESPACE TYPE PARTITION
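Before running the full export, connectivity to the Oracle source can be checked with ora2pg's SHOW_VERSION report type — a quick sketch; the config path matches the one used in the next step:

```shell
# Prints the remote Oracle server version if the ORACLE_DSN, ORACLE_USER
# and ORACLE_PWD entries in ora2pg.conf are correct
ora2pg -c /home/postgres/ora2pg.conf -t SHOW_VERSION
```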

2.3 Export the HR schema data into output.sql
/data/oracle/product/agent12c/core/12.1.0.5.0/perl/bin/ora2pg -c /home/postgres/ora2pg.conf
[========================>] 7/7 tables (100.0%) end of scanning.
[> ] 0/7 tables (0.0%) end of scanning.
[========================>] 7/7 tables (100.0%) end of table export.
[========================>] 0/0 packages (100.0%) end of output.
[========================>] 25/25 rows (100.0%) Table COUNTRIES (25 recs/sec)
[==> ] 25/215 total rows (11.6%) - (1 sec., avg: 25 recs/sec).
[========================>] 27/27 rows (100.0%) Table DEPARTMENTS (27 recs/sec)
[=====> ] 52/215 total rows (24.2%) - (1 sec., avg: 52 recs/sec).
[========================>] 107/107 rows (100.0%) Table EMPLOYEES (107 recs/sec)
[=================> ] 159/215 total rows (74.0%) - (1 sec., avg: 159 recs/sec).
[========================>] 19/19 rows (100.0%) Table JOBS (19 recs/sec)
[===================> ] 178/215 total rows (82.8%) - (2 sec., avg: 89 recs/sec).
[========================>] 10/10 rows (100.0%) Table JOB_HISTORY (10 recs/sec)
[====================> ] 188/215 total rows (87.4%) - (2 sec., avg: 94 recs/sec).
[========================>] 23/23 rows (100.0%) Table LOCATIONS (23 recs/sec)
[=======================> ] 211/215 total rows (98.1%) - (2 sec., avg: 105 recs/sec).
[========================>] 4/4 rows (100.0%) Table REGIONS (4 recs/sec)
[========================>] 215/215 total rows (100.0%) - (3 sec., avg: 71 recs/sec).
[========================>] 215/215 rows (100.0%) on total estimated data (3 sec., avg: 71 recs/sec)
[========================>] 1/1 views (100.0%) end of output.
[========================>] 3/3 sequences (100.0%) end of output.
[========================>] 1/1 triggers (100.0%) end of output.
[========================>] 0/0 functions (100.0%) end of output.
[========================>] 2/2 procedures (100.0%) end of output.
[========================>] 0/0 types (100.0%) end of output.
[========================>] 0/0 partitions (100.0%) end of output.
[postgres@oelunixserver0101 ~]$

2.4 Create a database called HR_database
/usr/local/pgsql/bin/createdb HR_database

2.5 Connect to HR_database and list the databases
[postgres@oelunixserver0101 ~]$ /usr/local/pgsql/bin/psql -d HR_database
psql (11.4)
Type "help" for help.
HR_database=# \l
HR_database | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
test | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres

HR_database=#


2.6 load the data ( output.sql contains create schema command)
HR_database=# \i output.sql
SET
CREATE SCHEMA
ALTER SCHEMA
SET
CREATE TABLE
COMMENT
COMMENT
COMMENT
COMMENT
ALTER TABLE
CREATE TABLE
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
CREATE INDEX
ALTER TABLE
CREATE TABLE
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
CREATE INDEX
CREATE INDEX
CREATE INDEX
ALTER TABLE
CREATE TABLE
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
CREATE INDEX
...
...
...
...
...
ALTER TABLE
COMMIT

2.7 List the tables

HR_database=# \dt
List of relations
Schema | Name | Type | Owner
--------+-------------+-------+----------
hr | countries | table | postgres
hr | departments | table | postgres
hr | employees | table | postgres
hr | job_history | table | postgres
hr | jobs | table | postgres
hr | locations | table | postgres
hr | regions | table | postgres
(7 rows)

2.8 query data
HR_database=# select * from hr.countries;
country_id | country_name | region_id
------------+--------------------------+-----------
AR | Argentina | 2
AU | Australia | 3
BE | Belgium | 1
BR | Brazil | 2
CA | Canada | 2
CH | Switzerland | 1
CN | China | 3
DE | Germany | 1
DK | Denmark | 1
EG | Egypt | 4
FR | France | 1
HK | HongKong | 3
IL | Israel | 4
IN | India | 3
IT | Italy | 1
JP | Japan | 3
KW | Kuwait | 4
MX | Mexico | 2
NG | Nigeria | 4
NL | Netherlands | 1
SG | Singapore | 3
UK | United Kingdom | 1
US | United States of America | 2
ZM | Zambia | 4
ZW | Zimbabwe | 4
(25 rows)

HR_database=#

2.9. Create HR_schema ( just for testing purposes )


HR_database=# create schema HR_schema;
CREATE SCHEMA


2.10 List the schemas


HR_database=# \dn
List of schemas
Name | Owner
-----------+----------
hr | hr
hr_schema | postgres
public | postgres
(3 rows)

HR_database=#

Sunday, 31 March 2019

Oracle Grid Infrastructure - reconfiguration

There are some situations where we need to reconfigure the Grid home, such as an IP address or hostname change, or a fresh GI installation due to a binary issue.
Please follow the steps below.


0. Please capture the details below before you do the reconfiguration:
      crsctl stat res -p
      crsctl stat res -t
      cluster name: cemutlo -n
      scan name: srvctl config scan
      listener port: srvctl config listener
      host VIP names
      If you are doing this on Exadata, please do step 3 before the deconfiguration.


1. Run the command below to deconfigure GI on all the nodes except the last node.

 $ORACLE_HOME/crs/install/rootcrs.pl -verbose -deconfig -force

2. Run the command below on the last node ( -keepdg retains the DG configuration ).
 $ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode -keepdg





3. During reconfiguration, in case the OCR disks are not listed as candidates for the OCR DG creation, we may need to delete and re-add the disks.

    ORACLE ASM:
            we can use either Plan A or Plan B         
          
           Plan A:
                   oracleasm createdisk <label> <dev_mapper>
                   oracleasm createdisk OCR_DK1  /dev/mapper/ASM_OCR_0001
                  
          Plan B:
                 in case the above fails with "Unable to open device "/dev/mapper/ASM_OCR_0001": Device or resource busy":
          /usr/sbin/asmtool -C -l /dev/oracleasm -n OCR_DK1 -s /dev/mapper/ASM_OCR_0001 -a force=yes

          Post task:
                   oracleasm scandisks
                   oracleasm listdisks
     AFD:
            wipe the disk header so it becomes a candidate again ( bs = block size; bs x count = amount zeroed, here 1000 x 1M )

              dd if=/dev/zero of=/dev/mapper/ASM_REG_OCR_0001 bs=1M count=1000
        Post task:
       asmcmd afd_scan
       asmcmd afd_lsdsk
    Exadata:
             delete 3 disks from a DG, one from each of three different cells, to make them candidates for the OCR DG ( this step needs to be done before the deconfiguration run )


4. Cleanup the gpnp files from all the nodes
  find <GRID_HOME>/gpnp/* -type f -exec rm -rf {} \;




5. Run config.sh
 cd $ORACLE_HOME/crs/config
 ./config.sh

 Feed in the values below:
    => cluster name, scan name, port; disable GNS if it is not used.
    => ensure all the hosts and VIPs are fetched automatically; if not, please add them manually.
    => Storage for OCR: create a new DG and choose the candidate disks.
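Once config.sh completes, the details captured in step 0 can be compared against the rebuilt stack — a sketch using the same tools as step 0:

```shell
# Compare against the output captured in step 0
crsctl stat res -t        # all resources should come back ONLINE
cemutlo -n                # cluster name unchanged
srvctl config scan        # SCAN name as captured earlier
srvctl config listener    # listener port as captured earlier
```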
   

Sunday, 24 March 2019

ACFS on Exadata


Oracle Automatic Storage Management Cluster File System (ACFS) on Exadata Database Machine:

Starting with Oracle Grid Infrastructure version 12.1.0.2, Oracle ACFS supports all database files and general purpose files on Oracle Exadata Database Machine running Oracle Linux on database servers.

The following database versions are supported by Oracle ACFS on Exadata Database Machine:

  • Oracle Database 10g Rel. 2 (10.2.0.4 and 10.2.0.5)
  • Oracle Database 11g (11.2.0.4 and higher)
  • Oracle Database 12c (12.1.0.1 and higher)
The steps below were duly tested on Exadata X7 machines.

Pre-requisite for ACFS creation

  1. Verify that the ACFS/ADVM kernel modules are loaded ( oracleacfs & oracleadvm )

dcli -g /home/oracle/dbs_group -l oracle 'lsmod | grep oracle'
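If the modules are not listed on a node, they can usually be loaded with the acfsload utility shipped in the Grid home — a sketch, run as root; the Grid home path is taken from the srvctl examples later in this post and may differ on your system:

```shell
# Load the ACFS/ADVM kernel modules on all nodes (as root;
# Grid home path is illustrative)
dcli -g /home/oracle/dbs_group -l root '/u01/app/oragrid/product/18.0/bin/acfsload start'
```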

 

Step by step instruction to create ACFS on Exadata

 

  1) As ORAGRID:  Create an ASM volume to be used for ACFS file systems on the FLASH01 DG
     

asmcmd volcreate -G FLASH01 -s 10G acfsvol
 

 

 

        2) As ORAGRID:  Capture the volume device name

asmcmd volinfo -G FLASH01 acfsvol

 

For example, in this case:

 

Diskgroup Name: FLASH01

 

         Volume Name: ACFSVOL

         Volume Device: /dev/asm/acfsvol-118

         State: ENABLED

         Size (MB): 10240

         Resize Unit (MB): 512

         Redundancy: MIRROR

         Stripe Columns: 8

         Stripe Width (K): 1024

         Usage:

         Mountpath: 

 

 

        3) As ORAGRID:  verify that the volume device was created on all the nodes

dcli -g /home/oracle/dbs_group -l oracle 'ls -l /dev/asm/acfsvol-118'
 

 

        4) As ORAGRID:  Verify if ADVM is enabled

crsctl stat res ora.FLASH01.ACFSVOL.advm -t

 

For example, in this case:

 

crsctl stat res ora.FLASH01.ACFSVOL.advm -t

--------------------------------------------------------------------------------

Name           Target  State        Server                   State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.FLASH01.ACFSVOL.advm

               ONLINE  ONLINE       fraespou0101             STABLE

               ONLINE  ONLINE       fraespou0102             STABLE

               ONLINE  ONLINE       fraespou0103             STABLE

               ONLINE  ONLINE       fraespou0104             STABLE

--------------------------------------------------------------------------------

 

    


 

 

5) As ORAGRID:  Create ACFS on the ADVM volume

/sbin/mkfs -t acfs -b 4K /dev/asm/acfsvol-118

 

 

 

 6) As ROOT:  Create the mount points for the file systems and change ownership

 

dcli -g ~/dbs_group -l root 'mkdir /u01/app/oracle/admin/common'
 
dcli -g ~/dbs_group -l root 'chown oracle:oinstall /u01/app/oracle/admin/common'
 
dcli -g ~/dbs_group -l root 'chmod 775 /u01/app/oracle/admin/common'

 

7) As ROOT:  Add the filesystem

/u01/app/oragrid/product/18.0/bin/srvctl add filesystem -d /dev/asm/acfsvol-118 -m /u01/app/oracle/admin/common -u oracle -fstype ACFS -autostart ALWAYS

 

 

8) As ROOT/Oracle:  Start the filesystem

/u01/app/oragrid/product/18.0/bin/srvctl start filesystem -d /dev/asm/acfsvol-118

 

 

 

  

Verifications

 

 

9) As root/oragrid:  Verify the resource status, ownership and permissions

crsctl stat res ora.FLASH01.ACFSVOL.acfs -t

crsctl stat res ora.FLASH01.ACFSVOL.advm -t

 

 

 

 

10) As Oracle:  List the ACFS FS from all the nodes

dcli -g /home/oracle/dbs_group -l oracle df -h /u01/app/oracle/admin/common

 

 

 

 

 

11) As Oracle:  Touch a file on the ACFS FS from all the nodes

dcli -g  /home/oracle/dbs_group -l oracle touch /u01/app/oracle/admin/common/a.txt
dcli -g  /home/oracle/dbs_group -l oracle cat /u01/app/oracle/admin/common/a.txt
 

 

 

 

Friday, 22 March 2019

How to restore a corrupted/missing datafile in standby:





Before 12.2
1. Cancel the recovery
    dgmgrl > edit database <db> set state='APPLY-OFF';
     (or) alter database recover managed standby database cancel;

2. From Primary take a datafile copy
rman target /
 backup as copy datafile 2 format "/u01/app/oracle/data01/sysaux_01.dbk" ;

3. scp the file from the primary server to the standby server ( possibly to the same location on the standby server )
4. Catalog the datafile copy ( on the standby server )
rman target /
  catalog datafilecopy '/u01/app/oracle/data01/sysaux_01.dbk';

5. Either switch to the copy or restore the datafile to the location recorded in the controlfile ( on the standby server )
   rman target /
 switch datafile 2 to copy;
 report schema;

  alternatively
RUN {
  ALLOCATE CHANNEL ch00 TYPE disk ;
  restore datafile 2  ;
  }

6. enable recovery
    dgmgrl > edit database <db> set state='APPLY-ON';
     (or) alter database recover managed standby database disconnect from session ;


From 12.2
rman target sys/oracle@standby    # connect to the standby; prodservice below points at the primary
RMAN> run
{
set newname for datafile 4 to '/u01/oracle/data02/users01.dbf' section size 1G;
restore (datafile 4 from service prodservice) using compressed backupset;
catalog datafilecopy '/u01/oracle/data02/users01.dbf';
}
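Once the restore completes, redo apply must be re-enabled on the standby, mirroring step 6 of the pre-12.2 flow — a sketch ( the STBY database name is a placeholder ):

```shell
# Re-enable redo apply via the broker (database name is illustrative)
dgmgrl / "edit database 'STBY' set state='APPLY-ON'"

# Confirm the restored datafile is back in place
rman target / <<'EOF'
report schema;
EOF
```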
