Dear Friends, the content in this blog is purely my own opinion; it does not reflect any of my current or previous employers' materials or official documents. My posts here are not warranted to be free of errors, so please use them at your own risk and only after thorough testing in your environment. If you feel that I am violating any company policy or document, kindly mail me at jeyaseelan.hi@gmail.com and I will be happy to remove that content from this blog.
Sunday, 21 July 2019
Oracle to PostgreSQL Migration
PostgreSQL is one of the most powerful and widely used open-source RDBMSs. I performed a simple test migration from Oracle 12c to PostgreSQL 11.4 running on an OEL ( Linux ) platform.
In this post, I would like to share the PostgreSQL installation steps and how to migrate a schema from an Oracle database to a PostgreSQL database.
1. Install PostgreSQL
1.1. Create an OS-level postgres account ( postgres:postgres/oinstall )
1.2. Download the postgresql-11.4.tar.gz binary from the PostgreSQL download / engineering site and run the commands below
tar xzvf postgresql-11.4.tar.gz
cd postgresql-11.4
./configure
make
make install ( as root)
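Optionally, set up the postgres user's environment so the binaries are on the PATH; a minimal sketch assuming the default /usr/local/pgsql prefix used in this walkthrough ( append to ~postgres/.bash_profile ):
export PGHOME=/usr/local/pgsql
export PGDATA=$PGHOME/data
export PATH=$PGHOME/bin:$PATH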
1.3. Verify installation directory
ls -l /usr/local/pgsql/
[postgres@oelunixserver0101 postgres]$ ls -l /usr/local/pgsql/
total 6
drwxr-xr-x 2 root root 1024 Jul 16 10:39 bin
drwx------ 19 postgres postgres 1024 Jul 16 10:41 data
drwxr-xr-x 6 root root 1024 Jul 16 10:39 include
drwxr-xr-x 4 root root 2048 Jul 16 10:39 lib
drwxr-xr-x 6 root root 1024 Jul 16 10:39 share
1.4. Create the postgres data directory ( login as the postgres account )
Ensure you have enough space to accommodate the data.
mkdir /usr/local/pgsql/data
1.5. Initialise the data directory ( login as postgres )
[postgres@oelunixserver0101 ~]$ /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /usr/local/pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... GB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data/ -l logfile start
1.6. Verify data directory
ls -l /usr/local/pgsql/data
total 53
drwx------ 7 postgres oinstall 1024 Jul 19 00:21 base
drwx------ 2 postgres oinstall 1024 Jul 19 03:23 global
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_commit_ts
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_dynshmem
-rw------- 1 postgres oinstall 4513 Jul 16 10:41 pg_hba.conf
-rw------- 1 postgres oinstall 1636 Jul 16 10:41 pg_ident.conf
drwx------ 4 postgres oinstall 1024 Jul 19 03:27 pg_logical
drwx------ 4 postgres oinstall 1024 Jul 16 10:41 pg_multixact
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_notify
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_replslot
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_serial
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_snapshots
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_stat
drwx------ 2 postgres oinstall 1024 Jul 19 06:20 pg_stat_tmp
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_subtrans
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_tblspc
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_twophase
-rw------- 1 postgres oinstall 3 Jul 16 10:41 PG_VERSION
drwx------ 3 postgres oinstall 1024 Jul 16 10:41 pg_wal
drwx------ 2 postgres oinstall 1024 Jul 16 10:41 pg_xact
-rw------- 1 postgres oinstall 88 Jul 16 10:41 postgresql.auto.conf
-rw------- 1 postgres oinstall 23874 Jul 16 10:41 postgresql.conf
-rw------- 1 postgres oinstall 59 Jul 16 10:41 postmaster.opts
-rw------- 1 postgres oinstall 89 Jul 16 10:41 postmaster.pid
1.7. Start the postgres database
/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
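Alternatively, use the pg_ctl wrapper that initdb suggested above; it backgrounds the server and can also report its status:
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data status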
1.8. Create a database
[postgres@oelunixserver0101 ~]$ /usr/local/pgsql/bin/createdb test
1.9. Connect to the test database and list all the databases
[postgres@oelunixserver0101 ~]$ /usr/local/pgsql/bin/psql test
psql (11.4)
Type "help" for help.
test=#
test=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-------------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
test | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
2. Migrate an Oracle schema to PostgreSQL using the ora2pg tool
2.1 Prerequisites ( install the required binaries )
Download and install the following binaries:
-DBD-Oracle-1.75_2.tar.gz : Oracle database driver for the DBI module
-DBD-Pg-3.6.0.tar.gz : PostgreSQL database driver for the DBI module
-DBI-1.636.tar.gz : Database independent interface for Perl
-ora2pg-18.1.tar.gz : ora2pg archive
Installation steps ( repeat for each archive ):
tar xvf <file>
cd <main_folder>
perl Makefile.PL
make
make install ( as root )
Verify ora2pg version
[postgres@oelunixserver0101 ~]$ /data/oracle/product/agent12c/core/12.1.0.5.0/perl/bin/ora2pg -version
Ora2Pg v18.1
2.2 Create a config file ora2pg.conf with the following entries
ORACLE_HOME /data/oracle/product/12.2.0.2
ORACLE_DSN dbi:Oracle:host=172.17.80.138;service_name=postg1;port=1721
ORACLE_USER hr
ORACLE_PWD <password>
EXPORT_SCHEMA 1
SCHEMA HR
OUTPUT output.sql
TYPE TABLE PACKAGE COPY VIEW GRANT SEQUENCE TRIGGER FUNCTION PROCEDURE TABLESPACE TYPE PARTITION
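Before running the full export, it may be worth confirming that ora2pg can reach the source database; SHOW_VERSION is a lightweight export type that only prints the Oracle version it connects to:
/data/oracle/product/agent12c/core/12.1.0.5.0/perl/bin/ora2pg -c /home/postgres/ora2pg.conf -t SHOW_VERSION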
2.3 Export the HR schema data into output.sql
/data/oracle/product/agent12c/core/12.1.0.5.0/perl/bin/ora2pg -c /home/postgres/ora2pg.conf
[========================>] 7/7 tables (100.0%) end of scanning.
[========================>] 7/7 tables (100.0%) end of table export.
[========================>] 0/0 packages (100.0%) end of output.
[========================>] 25/25 rows (100.0%) Table COUNTRIES (25 recs/sec)
[==> ] 25/215 total rows (11.6%) - (1 sec., avg: 25 recs/sec).
[========================>] 27/27 rows (100.0%) Table DEPARTMENTS (27 recs/sec)
[=====> ] 52/215 total rows (24.2%) - (1 sec., avg: 52 recs/sec).
[========================>] 107/107 rows (100.0%) Table EMPLOYEES (107 recs/sec)
[=================> ] 159/215 total rows (74.0%) - (1 sec., avg: 159 recs/sec).
[========================>] 19/19 rows (100.0%) Table JOBS (19 recs/sec)
[===================> ] 178/215 total rows (82.8%) - (2 sec., avg: 89 recs/sec).
[========================>] 10/10 rows (100.0%) Table JOB_HISTORY (10 recs/sec)
[====================> ] 188/215 total rows (87.4%) - (2 sec., avg: 94 recs/sec).
[========================>] 23/23 rows (100.0%) Table LOCATIONS (23 recs/sec)
[=======================> ] 211/215 total rows (98.1%) - (2 sec., avg: 105 recs/sec).
[========================>] 4/4 rows (100.0%) Table REGIONS (4 recs/sec)
[========================>] 215/215 total rows (100.0%) - (3 sec., avg: 71 recs/sec).
[========================>] 215/215 rows (100.0%) on total estimated data (3 sec., avg: 71 recs/sec)
[========================>] 1/1 views (100.0%) end of output.
[========================>] 3/3 sequences (100.0%) end of output.
[========================>] 1/1 triggers (100.0%) end of output.
[========================>] 0/0 functions (100.0%) end of output.
[========================>] 2/2 procedures (100.0%) end of output.
[========================>] 0/0 types (100.0%) end of output.
[========================>] 0/0 partitions (100.0%) end of output.
[postgres@oelunixserver0101 ~]$
2.4 Create the HR_database database
/usr/local/pgsql/bin/createdb HR_database
2.5 Connect to the HR_database database
[postgres@oelunixserver0101 ~]$ /usr/local/pgsql/bin/psql -d HR_database
psql (11.4)
Type "help" for help.
HR_database=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-------------+----------+----------+-------------+-------------+-----------------------
HR_database | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
test | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres
HR_database=#
2.6 Load the data ( output.sql contains the create schema command )
HR_database=# \i output.sql
SET
CREATE SCHEMA
ALTER SCHEMA
SET
CREATE TABLE
COMMENT
COMMENT
COMMENT
COMMENT
ALTER TABLE
CREATE TABLE
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
CREATE INDEX
ALTER TABLE
CREATE TABLE
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
CREATE INDEX
CREATE INDEX
CREATE INDEX
ALTER TABLE
CREATE TABLE
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
COMMENT
CREATE INDEX
...
...
...
...
...
ALTER TABLE
COMMIT
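As a non-interactive alternative to \i, the same file can be loaded straight from the shell ( assuming output.sql is in the current directory ):
/usr/local/pgsql/bin/psql -d HR_database -f output.sql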
2.7 List the tables
HR_database=# \dt
List of relations
Schema | Name | Type | Owner
--------+-------------+-------+----------
hr | countries | table | postgres
hr | departments | table | postgres
hr | employees | table | postgres
hr | job_history | table | postgres
hr | jobs | table | postgres
hr | locations | table | postgres
hr | regions | table | postgres
(7 rows)
2.8 Query the data
HR_database=# select * from hr.countries;
country_id | country_name | region_id
------------+--------------------------+-----------
AR | Argentina | 2
AU | Australia | 3
BE | Belgium | 1
BR | Brazil | 2
CA | Canada | 2
CH | Switzerland | 1
CN | China | 3
DE | Germany | 1
DK | Denmark | 1
EG | Egypt | 4
FR | France | 1
HK | HongKong | 3
IL | Israel | 4
IN | India | 3
IT | Italy | 1
JP | Japan | 3
KW | Kuwait | 4
MX | Mexico | 2
NG | Nigeria | 4
NL | Netherlands | 1
SG | Singapore | 3
UK | United Kingdom | 1
US | United States of America | 2
ZM | Zambia | 4
ZW | Zimbabwe | 4
(25 rows)
HR_database=#
2.9. Create HR_schema ( just for testing purposes )
HR_database=# create schema HR_schema;
CREATE SCHEMA
2.10 List the schemas
HR_database=# \dn
List of schemas
Name | Owner
-----------+----------
hr | hr
hr_schema | postgres
public | postgres
(3 rows)
HR_database=#
Sunday, 31 March 2019
Oracle Grid Infrastructure - reconfiguration
There are some situations where we need to reconfigure the Grid Home, such as an IP-address change, a hostname change, or a fresh GI installation due to a binary issue.
Please follow the steps below.
0. Please capture the details below before you start the reconfiguration ( see the capture sketch after this list ):
crsctl stat res -p
crsctl stat res -t
cluster name : cemutlo -n
scan name : srvctl config scan
listener port : srvctl config listener
host / VIP names
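A minimal capture sketch, assuming it is run as the grid owner and /tmp is writable; the file names are illustrative:
crsctl stat res -p > /tmp/crs_res_p.before
crsctl stat res -t > /tmp/crs_res_t.before
cemutlo -n > /tmp/cluster_name.before
srvctl config scan > /tmp/scan.before
srvctl config listener > /tmp/listener.before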
If you are doing this on Exadata, please do step 3 before you run the deconfiguration.
1. Run the command below to deconfigure the GI on all nodes except the last node.
$ORACLE_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
2. Run the command below on the last node ( -keepdg retains the DG configuration )
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode -keepdg
3. During reconfiguration, in case the OCR disks are not listed as candidates for the OCR DG creation, we may need to delete and re-add the disks.
Oracle ASM:
We can use either Plan A or Plan B.
Plan A:
oracleasm createdisk <label> <dev_mapper>
oracleasm createdisk OCR_DK1 /dev/mapper/ASM_OCR_0001
Plan B:
In case the above fails with "Unable to open device "/dev/mapper/ASM_OCR_0001": Device or resource busy", use:
/usr/sbin/asmtool -C -l /dev/oracleasm -n OCR_DK1 -s /dev/mapper/ASM_OCR_0001 -a force=yes
Post task:
oracleasm scandisks
oracleasm listdisks
AFD:
( bs = block size; bs * count is the amount of data wiped from the start of the disk, here 1000 * 1M = ~1 GB )
dd if=/dev/zero of=/dev/mapper/ASM_REG_OCR_0001 bs=1M count=1000
Post task:
asmcmd afd_scan
asmcmd afd_lsdsk
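Once the header is wiped, the device can be labelled again for AFD; a hedged sketch with an illustrative label name:
asmcmd afd_label OCR_DK1 /dev/mapper/ASM_REG_OCR_0001
asmcmd afd_lsdsk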
Exadata:
Drop 3 disks from an existing DG, each from a different cell, to make them candidates for the OCR DG ( this step needs to be done before the deconfiguration is run ).
4. Clean up the gpnp files on all the nodes
find <GRID_HOME>/gpnp/* -type f -exec rm -rf {} \;
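On a multi-node cluster this can be pushed to every node in one shot; a sketch assuming dcli and a dbs_group node-list file, with <GRID_HOME> left as a placeholder:
dcli -g ~/dbs_group -l root 'find <GRID_HOME>/gpnp/* -type f -exec rm -rf {} \;'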
5. Run config.sh
cd $ORACLE_HOME/crs/config
./config.sh
Provide the following values:
=> cluster name, SCAN name, port; disable GNS if it is not used.
=> ensure all the hosts and VIPs are fetched automatically; if not, please add them manually.
=> Storage for OCR: create a new DG and choose the candidate disks.
Sunday, 24 March 2019
ACFS on Exadata
Oracle Automatic Storage Management Cluster File System (ACFS) on Exadata Database Machine:
Starting with Oracle Grid Infrastructure version 12.1.0.2, Oracle ACFS supports all database files and general purpose files on Oracle Exadata Database Machine running Oracle Linux on database servers.
The following database versions are supported by Oracle ACFS on Exadata Database Machine:
- Oracle Database 10g Rel. 2 (10.2.0.4 and 10.2.0.5)
- Oracle Database 11g (11.2.0.4 and higher)
- Oracle Database 12c (12.1.0.1 and higher)
The steps below were duly tested on Exadata X7 machines.
Prerequisite for ACFS creation
- Verify that the ACFS/ADVM kernel modules are loaded ( oracleacfs & oracleadvm )
dcli -g /home/oracle/dbs_group -l oracle 'lsmod | grep oracle'
Step-by-step instructions to create ACFS on Exadata
1) As ORAGRID: Create the ASM volume to be used for the ACFS file system on the FLASH01 DG
asmcmd volcreate -G FLASH01 -s 10G acfsvol
2) As ORAGRID: Capture the volume device name
asmcmd volinfo -G FLASH01 acfsvol
For example, in this case:
Diskgroup Name: FLASH01
Volume Name: ACFSVOL
Volume Device: /dev/asm/acfsvol-118
State: ENABLED
Size (MB): 10240
Resize Unit (MB): 512
Redundancy: MIRROR
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:
3) As ORAGRID: Verify that the volume device was created on all the nodes
dcli -g /home/oracle/dbs_group -l oracle 'ls -l /dev/asm/acfsvol-118'
4) As ORAGRID: Verify that ADVM is enabled
crsctl stat res ora.FLASH01.ACFSVOL.advm -t
For example, in this case:
crsctl stat res ora.FLASH01.ACFSVOL.advm -t
--------------------------------------------------------------------------------
Name           Target  State        Server          State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FLASH01.ACFSVOL.advm
               ONLINE  ONLINE       fraespou0101    STABLE
               ONLINE  ONLINE       fraespou0102    STABLE
               ONLINE  ONLINE       fraespou0103    STABLE
               ONLINE  ONLINE       fraespou0104    STABLE
--------------------------------------------------------------------------------
5) As ORAGRID: Create the ACFS file system on the ADVM volume ( using the volume device captured in step 2 )
/sbin/mkfs -t acfs /dev/asm/acfsvol-118 -b 4K
6) As ROOT: Create the mount points for the file system and change ownership
dcli -g ~/dbs_group -l root 'mkdir /u01/app/oracle/admin/common'
dcli -g ~/dbs_group -l root 'chown oracle:oinstall /u01/app/oracle/admin/common'
dcli -g ~/dbs_group -l root 'chmod 775 /u01/app/oracle/admin/common'
7) As ROOT: Add the filesystem resource
/u01/app/oragrid/product/18.0/bin/srvctl add filesystem -d /dev/asm/acfsvol-118 -m /u01/app/oracle/admin/common -u oracle -fstype ACFS -autostart ALWAYS
8) As ROOT/Oracle: Start the filesystem
/u01/app/oragrid/product/18.0/bin/srvctl start filesystem -d /dev/asm/acfsvol-118
Verifications
9) As root/oragrid: Verify the resource state, ownership and permissions
crsctl stat res ora.FLASH01.ACFSVOL.acfs -t
crsctl stat res ora.FLASH01.ACFSVOL.advm -t
10) As Oracle: List the ACFS FS from all the nodes
dcli -g /home/oracle/dbs_group -l oracle df -h /u01/app/oracle/admin/common
11) As Oracle: Touch a file on the ACFS FS from all the nodes
dcli -g /home/oracle/dbs_group -l oracle touch /u01/app/oracle/admin/common/a.txt
dcli -g /home/oracle/dbs_group -l oracle cat /u01/app/oracle/admin/common/a.txt
Friday, 22 March 2019
How to restore a corrupted/missing datafile in standby:
Before 12.2
1. Cancel the recovery
dgmgrl > edit database <db> set state='APPLY-OFF';
(or) alter database recover managed standby database cancel;
2. From Primary take a datafile copy
rman target /
backup as copy datafile 2 format "/u01/app/oracle/data01/sysaux_01.dbk" ;
3. scp the file from the primary server to the standby server ( possibly to the same location on the standby server )
4. Catalog the datafile copy ( on the standby server )
rman target /
catalog datafilecopy '/u01/app/oracle/data01/sysaux_01.dbk';
5. Either switch the datafile to the copy or restore the datafile to the same location mentioned in the controlfile ( on the standby server )
rman target /
switch datafile 2 to copy;
report schema;
Alternatively:
RUN {
ALLOCATE CHANNEL ch00 TYPE disk ;
restore datafile 2 ;
}
6. enable recovery
dgmgrl > edit database <db> set state='APPLY-ON';
(or) alter database recover managed standby database disconnect from session ;
From 12.2 onwards
rman target sys/oracle@prod
RMAN> run
{
set newname for datafile 4 to '/u01/oracle/data02/users01.dbf' section size 1G;
restore (datafile 4 from service prodservice) using compressed backupset;
catalog datafilecopy '/u01/oracle/data02/users01.dbf';
}
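After the restore completes, re-enable redo apply on the standby, exactly as in the pre-12.2 flow above:
dgmgrl > edit database <db> set state='APPLY-ON';
(or) alter database recover managed standby database disconnect from session ;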
Friday, 15 March 2019
OPatch lsinventory command does not show RAC details
Please use the command below to get the patching details of the remote nodes.
$ORACLE_HOME/OPatch/opatchauto report -format xml -type patches -remote
===============================================================
This is applicable to all releases of OPatch.
The 12.2.0.1.13 release (and later) of OPatch, which is used for all releases 12.1.0.x and later
The 11.2.0.3.18 release (and later) of OPatch, which is used for all releases 11.2.0.1 - 11.2.0.4
Beginning with these releases of OPatch, OPatch will only support patching/listing inventory for local node of a RAC cluster. There will be no propagation to other nodes in cluster.
1. In OPatch 12.2.0.1.13 and OPatch 11.2.0.3.18, the command option "-all_nodes" is a no-op but still appears in -help.
If an opatch command is called with the "-all_nodes" option, OPatch will print the following warning on the console:
"OPatch was called with -all_nodes option. The -all_nodes option is being deprecated. Please remove it while calling OPatch."
2. In the future releases OPatch 12.2.0.1.14 (and later) and OPatch 11.2.0.3.20 (and later), the option "-all_nodes" will be removed from -help, and users will get a syntax error if they specify the option.
An alternative to -all_nodes is available for multi-node GI/RAC:
$ORACLE_HOME/OPatch/opatchauto report -format xml -type patches -remote
(The remote command will get a list of patches from all nodes)
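If opatchauto is not available, a simple loop gives a comparable per-node picture; a sketch assuming passwordless ssh between nodes, the same ORACLE_HOME path everywhere, and olsnodes on the PATH:
for node in $(olsnodes); do
  echo "== $node =="
  ssh "$node" "$ORACLE_HOME/OPatch/opatch lsinventory"
done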
===========================================================
Refer:
GI/RAC/Single Instance Announcing Deprecation of OPatch Command Option "-all_nodes" (Doc ID 2331762.1)
Saturday, 2 February 2019
Near Zero down time for relocating a PDB from one CDB to another CDB - a 12.2 new feature
This new feature significantly reduces downtime by leveraging the clone functionality to relocate a pluggable database (PDB) from one multitenant container database (CDB) to another CDB. The source PDB is still open and fully functional while the actual cloning operation is taking place. The application outage is reduced to a very small window while the source PDB is quiesced and the destination PDB is brought online after applying the incremental redo. The source PDB is subsequently dropped.
With this feature, you can now adjust to changing workloads and comply with Service Level Agreement (SLA) requirements in near real time.
In Oracle Database 12.1, unplugging and plugging a PDB requires several steps such as unplugging the PDB from the source CDB, copying the database files to a new location, creating the new PDB by plugging the source PDB at the target CDB, and finally dropping the PDB from the source CDB.
A single DDL statement can relocate a PDB, using the “pull” mode, connected to the CDB where the PDB will be relocated to pull it from the CDB where the PDB exists, managing draining existing connections and migrating new connections without requiring any changes to the application.
There are two relocation methods:
• Normal availability mode
− When the newly created PDB is opened in read-write mode for the first time, the source PDB is automatically closed and dropped, and the relocation operation is completed with the relocated PDB being fully available. This is the "normal availability" default mode.
− This method can be used to relocate application PDBs too.
• Maximum availability mode
− The maximum availability mode reduces application impact by handling the migration of connections, preserving the source CDB in mount state to guarantee connection forwarding of the listener to the remote listener where the PDB is relocated. In this case, you cannot create a PDB with the same name as the source PDB because it will conflict with the listener forwarding. It is expected that connect strings are updated at a time that is convenient for the application. After this is done and all the clients connect to the new host without forwarding, the DBA can drop the source PDB.
− If AVAILABILITY MAX is specified during the CREATE PLUGGABLE DATABASE RELOCATE command, additional handling is performed to ensure smooth migration of workload and persistent connection forwarding from the source to the target. The PDB is always first opened in read-only mode. This makes the PDB available as a target for new connections before the source PDB is closed. During this operation, listener information of the target CDB is automatically sent to the source and a special forwarding registration is performed with the source PDB's current listener. New connections to the existing listener are automatically forwarded to connect to the new target. This forwarding persists even after the relocation operation has been completed, and effectively allows for no changes to connect strings.
− It is still recommended that connect strings are updated eventually at a time that is convenient for the application, but availability is not dependent on when this action is performed.
PDB relocation requires enabling the local UNDO mode and ARCHIVELOG mode in both CDBs.
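A minimal sketch of the relocation itself, run while connected to the target CDB; the PDB name hrpdb and the database link src_cdb_link ( pointing to the source CDB ) are illustrative:
sqlplus / as sysdba <<EOF
CREATE PLUGGABLE DATABASE hrpdb FROM hrpdb@src_cdb_link RELOCATE AVAILABILITY MAX;
ALTER PLUGGABLE DATABASE hrpdb OPEN;
EOF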
Generic Python DB Module to connect Oracle Database
Please refer to my post [ http://jeyaseelan-m.blogspot.com/2018/11/how-to-connect-oracle-database-using.html ] for the basic setup of a Python-to-Oracle connection.
import sys,os,datetime,cx_Oracle,subprocess,traceback,logging
from pprint import pprint

# Global variable declarations
CONFIG_FILE = ".BLACKOUT_CONFIG"
CONFIG_TEMPLATE = { 'SOURCE_DIR' : 'SOURCE_DIR', 'USERID' : 'USERID', 'PASSWORD' : 'PASSWORD', 'TNS_NAME' : 'TNS_NAME' }

# Print messages
def printError(module,message):
    logging.error("Error at Module - {}: {}".format(module,message))
    sys.exit(1)

def printWarning(module,message):
    print("Warning at Module - {}: {}".format(module,message))

# Get config values ( one KEY:value pair per line )
def getConfig():
    global configDict
    configDict = {}
    if os.path.exists(CONFIG_FILE):
        with open(CONFIG_FILE) as config_file:
            for row in config_file:
                key,value = row.split(":")
                configDict[key] = value.strip()
    else:
        printError("getConfig","{} does not exist".format(CONFIG_FILE))
    # A non-empty symmetric difference means a parameter is missing or unexpected
    CONFIG_TEMPLATE_set = set(CONFIG_TEMPLATE)
    config_set = set(configDict)
    if CONFIG_TEMPLATE_set ^ config_set:
        printError("getConfig","{} parameter is missing in the {} config file".format(CONFIG_TEMPLATE_set ^ config_set,CONFIG_FILE))

# Get DB details
def getDBconfig():
    global cmdb_dBuser,cmdb_dBpass,cmdb_dBconection
    cmdb_dBuser = configDict["USERID"]
    cmdb_dBpass = configDict["PASSWORD"]
    cmdb_dBconection = configDict["TNS_NAME"]

# Connect to the database
def connectDB():
    global cmdb_dBconection,cmdb_dBcursor
    try:
        cmdb_dBconection = cx_Oracle.connect(cmdb_dBuser,cmdb_dBpass,cmdb_dBconection)
        cmdb_dBcursor = cmdb_dBconection.cursor()
    except cx_Oracle.DatabaseError as e:
        error, = e.args
        printError("connectDB",error.message)

# Run a query; the bind variable REFERENCE_NUMBER comes from the first command-line argument
def runQuery(sql):
    cmdb_dBcursor.execute(sql, REFERENCE_NUMBER=str(sys.argv[1]))
    result = cmdb_dBcursor.fetchall()
    for row in result:
        print(" Task Number     : {0}".format(row[0]))
        print(" Assigned To     : {0}".format(row[1]))
        print(" Assignment Group: {0}".format(row[2]))
        print(" Change Number   : {0}".format(row[3]))
        print(" Task State      : {0}".format(row[4]))
        print(" Planned Start   : {0}".format(row[5]))
        print(" Planned End     : {0}".format(row[6]))
        print(" Task details    : {0}".format(row[11]))