Dear Friends, the content in this blog is purely my own opinion; it does not reflect any of my current or previous employers' materials or official documents. My posts here are not warranted to be free of errors. Please use them at your own risk and only after thorough testing in your environment. If you feel that I am violating any company's policies or documents, kindly mail me at jeyaseelan.hi@gmail.com and I will be happy to take that content down from this blog.
Friday, 25 May 2018
Hot Cloning of PDB - 12cR2 / 18c
12c introduced the container (CDB) and pluggable database (PDB) architecture. In 12cR1, to clone a PDB from another PDB, we need to keep the source PDB in read-only mode.
SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';
no rows selected
SQL> create pluggable database DEVPDB4 from DEVPDB3;
create pluggable database DEVPDB4 from DEVPDB3
*
ERROR at line 1:
ORA-65035: unable to create pluggable database from DEVPDB3
From 12cR2 onwards we can perform this operation while the source PDB is in read-write mode.
In order to achieve this, the following conditions must be met (see the quick check below):
1. Local undo enabled
2. clonedb=true (init.ora parameter)
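Before attempting the hot clone, both prerequisites can be verified in one go. A minimal sketch, assuming you are connected to the CDB root as SYSDBA:
SQL> SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';
SQL> show parameter clonedb
If LOCAL_UNDO_ENABLED returns no rows (as above) or FALSE, follow the steps below.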
Enable Local Undo
This needs to be done in upgrade mode.
# stop DB
srvctl stop database -d db1
# set cluster_database=false (if you are doing this on RAC)
sqlplus /nolog
conn / as sysdba
startup nomount
alter system set cluster_database=false scope=spfile;
shutdown immediate;
# enable local undo
sqlplus /nolog
conn / as sysdba
startup upgrade
ALTER DATABASE LOCAL UNDO ON;
shutdown immediate;
exit
# set cluster_database=true (if you are performing this activity on RAC)
sqlplus /nolog
conn / as sysdba
startup nomount
alter system set cluster_database=true scope=spfile;
shutdown immediate;
# start db
srvctl start database -d db1
Set clonedb=true
alter system set clonedb=true scope=spfile sid='*';
# bounce the db for the above parameter to take effect
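A minimal sketch of the bounce and verification, reusing the db1 database name from the steps above:
srvctl stop database -d db1
srvctl start database -d db1
sqlplus / as sysdba
SQL> show parameter clonedb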
# Verify if local undo is enabled
SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME        PROPERTY_VALUE
-------------------- --------------
LOCAL_UNDO_ENABLED   TRUE
Hot Clone PDB
# Source PDB in read write mode
SQL> select name, open_mode from v$pdbs where name='DEVPDB3';

NAME                           OPEN_MODE
------------------------------ ----------
DEVPDB3                        READ WRITE
SQL> create pluggable database DEVPDB4 from DEVPDB3;
Pluggable database created.
SQL>
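The new PDB is created in MOUNTED state, so it still has to be opened. A quick follow-up sketch:
SQL> alter pluggable database DEVPDB4 open;
SQL> select name, open_mode from v$pdbs where name='DEVPDB4';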
MIRA - Multi Instance Redo Apply - 12cR2 onwards/18C
With Oracle 12c Release 2, we can enable the Multi Instance Redo Apply (MIRA) option in Data Guard, which applies redo on multiple instances at the same time to improve the recovery time objective.
Once enabled, we can see pr* process trees on both instances and pr* trace files on both nodes (ps -ef | grep pr).
Please note that there will be one MRP0 for the entire MIRA recovery, which is normal: MRP0 is the coordinator process for all the other pr* recovery processes on all instances.
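While MIRA is running, overall apply progress can be watched from the standby; a sketch using the standard recovery view (the exact column list may vary slightly by version):
SQL> select item, units, sofar, total from v$recovery_progress;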
This can be enabled using either SQL*Plus or DGMGRL.
1. How to enable using SQL*Plus
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION INSTANCES ALL -> to start on all instances
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION INSTANCES 2 -> to start on any two instances
2. How to enable using DGMGRL
Change the value of ApplyInstances to the number of apply instances (set the value to 2 in this case):
DGMGRL> edit database "18CDB" set property ApplyInstances=2;
Property "applyinstances" updated
DGMGRL> show configuration
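To confirm the property value, and to revert to single-instance apply later if needed, a hedged sketch:
DGMGRL> show database '18CDB' 'ApplyInstances';
DGMGRL> edit database "18CDB" set property ApplyInstances=1;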
Alert.log
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION INSTANCES 2
2018-05-25T07:22:10.126643+01:00
Attempt to start background Managed Standby Recovery process (18CDB1)
Starting background process MRP0
2018-05-25T07:22:10.144066+01:00
MRP0 started with pid=47, OS id=343863
2018-05-25T07:22:10.145157+01:00
Background Managed Standby Recovery process started (18CDB1)
2018-05-25T07:22:15.302656+01:00
Started logmerger process on instance id 1
Started logmerger process on instance id 2
Starting Multi Instance Redo Apply (MIRA) on 2 instances
2018-05-25T07:22:15.354896+01:00
Starting Multi Instance Redo Apply (MIRA)
2018-05-25T07:22:15.459146+01:00
.... (PID:399563): Managed Standby Recovery starting Real Time Apply
2018-05-25T07:22:16.289661+01:00
Started 24 apply slaves on instance id 1 =========> apply slaves started on instance 1
2018-05-25T07:22:16.818633+01:00
Started 24 apply slaves on instance id 2 =========> apply slaves started on instance 2
2018-05-25T07:22:17.515862+01:00
ALTER SYSTEM SET remote_listener=' ahbdre1-sc.des.sg.com:1722' SCOPE=MEMORY SID='18CDB1';
2018-05-25T07:22:17.516825+01:00
ALTER SYSTEM SET listener_networks='' SCOPE=MEMORY SID='18CDB1';
3. Background processes => once enabled, we can see the following background processes:
SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
[18CDB1] $ ps -ef | grep GW | grep pr
oracle 323507 1 1 07:11 ? 00:00:01 ora_pr00_18CDB1
oracle 323512 1 0 07:11 ? 00:00:00 ora_pr02_18CDB1
oracle 323514 1 0 07:11 ? 00:00:00 ora_pr03_18CDB1
oracle 323516 1 0 07:11 ? 00:00:00 ora_pr04_18CDB1
oracle 323518 1 0 07:11 ? 00:00:00 ora_pr05_18CDB1
oracle 323520 1 0 07:11 ? 00:00:00 ora_pr06_18CDB1
oracle 323522 1 0 07:11 ? 00:00:00 ora_pr07_18CDB1
oracle 323534 1 0 07:11 ? 00:00:00 ora_pr08_18CDB1
oracle 323544 1 0 07:11 ? 00:00:00 ora_pr09_18CDB1
oracle 323546 1 0 07:11 ? 00:00:00 ora_pr0a_18CDB1
oracle 323548 1 0 07:11 ? 00:00:00 ora_pr0b_18CDB1
oracle 323550 1 0 07:11 ? 00:00:00 ora_pr0c_18CDB1
oracle 323552 1 0 07:11 ? 00:00:00 ora_pr0d_18CDB1
oracle 323554 1 0 07:11 ? 00:00:00 ora_pr0e_18CDB1
oracle 323556 1 0 07:11 ? 00:00:00 ora_pr0f_18CDB1
oracle 323558 1 0 07:11 ? 00:00:00 ora_pr0g_18CDB1
oracle 323560 1 0 07:11 ? 00:00:00 ora_pr0h_18CDB1
oracle 323562 1 0 07:11 ? 00:00:00 ora_pr0i_18CDB1
oracle 323564 1 0 07:11 ? 00:00:00 ora_pr0j_18CDB1
oracle 323566 1 0 07:11 ? 00:00:00 ora_pr0k_18CDB1
oracle 323568 1 0 07:11 ? 00:00:00 ora_pr0l_18CDB1
oracle 323570 1 0 07:11 ? 00:00:00 ora_pr0m_18CDB1
oracle 323572 1 0 07:11 ? 00:00:00 ora_pr0n_18CDB1
oracle 323574 1 0 07:11 ? 00:00:00 ora_pr0o_18CDB1
oracle 323576 1 0 07:11 ? 00:00:00 ora_pr0p_18CDB1
oracle 323582 1 0 07:11 ? 00:00:00 ora_pr1e_18CDB1
oracle 323584 1 0 07:11 ? 00:00:00 ora_pr1f_18CDB1
oracle 323586 1 0 07:11 ? 00:00:00 ora_pr1g_18CDB1
oracle 323588 1 0 07:11 ? 00:00:00 ora_pr1i_18CDB1
oracle 323590 1 0 07:11 ? 00:00:00 ora_pr1l_18CDB1
oracle 323592 1 0 07:11 ? 00:00:00 ora_pr1n_18CDB1
4. Managed standby processes => there will be only one MRP process:
SQL> select inst_id, PROCESS, STATUS, CLIENT_PROCESS, CLIENT_PID, THREAD#, SEQUENCE#, BLOCK#, ACTIVE_AGENTS, KNOWN_AGENTS
     from gv$managed_standby
     order by CLIENT_PROCESS, THREAD#, SEQUENCE#;
   INST_ID PROCESS   STATUS       CLIENT_P CLIENT_PID    THREAD#  SEQUENCE#     BLOCK# ACTIVE_AGENTS KNOWN_AGENTS
---------- --------- ------------ -------- ---------- ---------- ---------- ---------- ------------- ------------
         1 ARCH      CONNECTED    ARCH     399744              0          0          0             0            0
         1 ARCH      CONNECTED    ARCH     399776              0          0          0             0            0
         1 ARCH      CONNECTED    ARCH     399774              0          0          0             0            0
         1 ARCH      CONNECTED    ARCH     399770              0          0          0             0            0
         1 ARCH      CONNECTED    ARCH     399768              0          0          0             0            0
         1 ARCH      CONNECTED    ARCH     399766              0          0          0             0            0
         1 ARCH      CONNECTED    ARCH     399764              0          0          0             0            0
         1 ARCH      CONNECTED    ARCH     399750              0          0          0             0            0
         1 ARCH      CONNECTED    ARCH     399754              0          0          0             0            0
         1 ARCH      CONNECTED    ARCH     399756              0          0          0             0            0
         1 ARCH      CONNECTED    ARCH     399758              0          0          0             0            0
         1 ARCH      CONNECTED    ARCH     399760              0          0          0             0            0
         2 ARCH      CLOSING      ARCH     117678              1       2242      10240             0            0
         2 ARCH      CLOSING      ARCH     117670              1       2243          1             0            0
         2 ARCH      CLOSING      ARCH     117690              1       2244          1             0            0
         2 ARCH      CLOSING      ARCH     117684              1       2245          1             0            0
         2 ARCH      CLOSING      ARCH     117680              1       2246      10240             0            0
         2 ARCH      CLOSING      ARCH     117674              1       2247          1             0            0
         2 ARCH      CLOSING      ARCH     117686              2       2215      45056             0            0
         2 ARCH      CLOSING      ARCH     117664              2       2218       4096             0            0
         2 ARCH      CLOSING      ARCH     117668              2       2237          1             0            0
         2 ARCH      CLOSING      ARCH     117688              2       2238          1             0            0
         2 ARCH      CLOSING      ARCH     117682              2       2239      10240             0            0
         2 ARCH      CLOSING      ARCH     117676              2       2240          1             0            0
         2 RFS       IDLE         Archival 222920              1          0          0             0            0
         2 RFS       IDLE         Archival 67323               2          0          0             0            0
         2 RFS       IDLE         LGWR     222400              1       2248        136             0            0
         2 RFS       IDLE         LGWR     67057               2       2241        180             0            0
         1 DGRD      ALLOCATED    N/A      N/A                 0          0          0             0            0
         2 DGRD      ALLOCATED    N/A      N/A                 0          0          0             0            0
         2 DGRD      ALLOCATED    N/A      N/A                 0          0          0             0            0
         1 DGRD      ALLOCATED    N/A      N/A                 0          0          0             0            0
         1 MRP0      APPLYING_LOG N/A      N/A                 1       2248        135            62           62

33 rows selected.
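With apply slaves running on both instances, the standby's lag can be checked via the standard Data Guard statistics view; a quick sketch:
SQL> select name, value from v$dataguard_stats where name in ('apply lag', 'transport lag');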
Wednesday, 23 May 2018
How to clone a database using ZFS snapshots
A step-by-step guide to creating an Oracle database clone using ZFS snapshots.
Generally, for a 100TB database running on an X5 Exadata machine with 64 channels via two ZFS controllers, a traditional RMAN L0 backup runs around 6 hours, and restoration of the whole database takes around 7 to 9 hours.
With an image backup plus a daily L1 merge, we can instead simply switch the database to the image backup (and apply the required archive logs) in around 15 minutes.
When we write the image backup on ZFS, we can create snapshots (only delta changes are recorded) in between the daily incremental and merge backups. This helps DBAs do point-in-time recovery, or create multiple testing environments faster.
The steps below show how to do a point-in-time clone using ZFS snapshots. This can be used for testing purposes, or to extract point-in-time data if production data has been corrupted.
ZFS configuration (ZS3-2, 80TB)
This test case has been tested on the below ZFS configuration:
a ZFS appliance with two heads and 80TB storage capacity.
If you have a ZFS cluster in an active-active configuration, you can use both controllers for better performance.
If you have a ZFS cluster in an active-passive configuration, you can use only one controller.
Head a: akzfsa101a.des.sg.com
Pools (Active-Active):
akzfsa101a : Double Parity / Online
akzfsa101b : Exported
Cluster: Active/Active
Projects:
exp_backup_heada (on akzfsa101a.des.sg.com)
Shares:
exp_backup_heada/image_backup => mount point /export/exp_backup_heada/image_backup
exp_backup_heada/rman_archivelog => mount point /export/exp_backup_heada/rman_archivelog
Head b: akzfsa101b.des.sg.com
Pools (Active-Active):
akzfsa101b : Exported
akzfsa101a : Double Parity / Online
Cluster: Active/Active
Projects:
exp_backup_headb (on akzfsa101b.des.sg.com)
Shares:
exp_backup_headb/image_backup => mount point /export/exp_backup_headb/image_backup
exp_backup_headb/rman_archivelog => mount point /export/exp_backup_headb/rman_archivelog
Share compression options
- Take the base image backup with GZIP-2 (fast) on the backup share.
- Then revert the compression option to LZJB (fastest) for the level 1 backups.
- Archive logs can be compressed at the RMAN level and written to an LZJB (fastest) share.
Share mount options (Oracle best practice settings)
Linux:
rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600,noac,actimeo=0
Solaris:
rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,forcedirectio,noac,actimeo=0
For example:
mount -t nfs -o rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600,noac,actimeo=0 akzfsa101a.des.sg.com:/export/exp_backup_heada/image_backup /export/exp_backup_heada/image_backup
mount -t nfs -o rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600,noac,actimeo=0 akzfsa101b.des.sg.com:/export/exp_backup_headb/image_backup /export/exp_backup_headb/image_backup
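To make these mounts persistent across reboots, an /etc/fstab sketch using the same options (one line per share):
akzfsa101a.des.sg.com:/export/exp_backup_heada/image_backup /export/exp_backup_heada/image_backup nfs rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600,noac,actimeo=0 0 0
akzfsa101b.des.sg.com:/export/exp_backup_headb/image_backup /export/exp_backup_headb/image_backup nfs rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600,noac,actimeo=0 0 0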
Create a database using ZFS snapshots
I.1 High level steps
I.1.1) Prerequisites
- Create pools/projects/shares in the ZFS appliance
- Mount the shares with the best-practice attributes on the compute node or DB server
- Take an image backup
- Create a snapshot
- Take an L1 backup and merge it with the base image backup
(Repeat steps 3 to 5 at regular intervals)
I.1.2) When we would like to create a clone copy from a ZFS snapshot
- Create a clone against the selected snapshot (based on the day) in the ZFS
- Set a quota and create a mount point for the clone in the ZFS
- Mount the clone's mount point on the compute node
- Start the DB in mount mode
- Catalog the cloned mounts
- Switch the database to copy
- Open the database
II.1 Detailed steps
II.1.1 Create the base image backup (take the backup via both controllers/heads)
run {
sql 'alter system set "_backup_disk_bufcnt"=64';
sql 'alter system set "_backup_disk_bufsz"=1048576';
allocate channel ch001 device type disk format '/export/exp_backup_heada/image_backup/%U';
allocate channel ch002 device type disk format '/export/exp_backup_headb/image_backup/%U';
RECOVER COPY OF DATABASE WITH TAG 'image_copy_ZFS';
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1 CUMULATIVE FOR RECOVER OF COPY WITH TAG 'image_copy_ZFS' DATABASE;
}
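One way to run this block unattended is to save it to a command file and invoke it from the shell; the file and log paths below are hypothetical:
rman target / cmdfile=/home/oracle/scripts/image_backup.rcv log=/home/oracle/logs/image_backup_$(date +%Y%m%d).log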
II.1.2 Create archivelog backups in a separate share
run {
sql 'alter system set "_backup_disk_bufcnt"=64';
sql 'alter system set "_backup_disk_bufsz"=1048576';
allocate channel ch001 device type disk format '/export/exp_backup_heada/rman_archivelog/%U';
allocate channel ch002 device type disk format '/export/exp_backup_headb/rman_archivelog/%U';
backup archivelog all not backed up 1 times delete input TAG 'logs_ab';
backup current controlfile TAG 'ctl_bk' FORMAT '/export/exp_backup_heada/image_backupa/%d_%s_%p_%c_%t.ctl.rman';
backup spfile TAG 'spfile_bk' FORMAT '/export/exp_backup_heada/image_backupa/%d_%s_%p_%c_%t.spfile.rman';
release channel ch001;
release channel ch002;
}
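Both scripts are typically scheduled at a regular interval; a minimal crontab sketch (the wrapper script names and times are hypothetical):
00 02 * * * /home/oracle/scripts/run_image_backup.sh >> /home/oracle/logs/image_backup.cron.log 2>&1
00 06 * * * /home/oracle/scripts/run_arch_backup.sh >> /home/oracle/logs/arch_backup.cron.log 2>&1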
II.1.3 Create a snapshot on both heads.
II.1.3.1 High level steps
- ssh from the compute node to the ZFS netman interface (this needs the ssh key to be set up in the ZFS appliance; see Appendix)
- type 'shares' (at the zfs prompt)
- type 'ls' (to list projects)
- type 'select <project>' (select exp_backup_heada)
- type 'snapshots' (to go to the snapshot area)
- type 'snapshot <snap name>' (snapshot day1) (to create the snapshot)
- type 'ls' (to list snapshots)
- type 'exit'
II.1.3.2 Detailed steps
On Head a:
ssh from the compute node (this can be done via the GUI as well):
ssh -l oracle akzfsa101a-netman (ensure the oracle keys are copied to the ZFS appliance)
akzfsa101a-netman:> shares
akzfsa101a-netman:> ls
Properties:
pool = akzfsa101a
usage_data = 29.6T
usage_snapshots = 1.09T
usage_total = 30.69T
Projects:
exp_backup_heada
Children:
encryption => Manage encryption keys
replication => Manage remote replication
schema => Define custom property schema
akzfsa101a-netman:shares> select exp_backup_heada
akzfsa101a-netman:shares exp_backup_heada> ls
…
Filesystems:
NAME          SIZE   ENCRYPTED  MOUNTPOINT
image_backup  29.5T  off        /export/exp_backup_heada/image_backup
akzfsa101a-netman:shares exp_backup_heada> select image_backup
akzfsa101a-netman:shares exp_backup_heada/image_backup> snapshots
akzfsa101a-netman:shares exp_backup_heada/image_backup snapshots> snapshot day1
akzfsa101a-netman:shares exp_backup_heada/image_backup snapshots> ls
day1
Children:
automatic => Configure automatic snapshots
akzfsa101a-netman:shares exp_backup_heada/image_backup snapshots> exit
Repeat the above steps on head b as well.
On Head b:
ssh from the compute node (this can be done via the GUI as well):
ssh -l oracle akzfsa101b-netman (ensure the oracle keys are copied to the ZFS appliance)
akzfsa101b-netman:> shares
akzfsa101b-netman:> ls
Properties:
pool = akzfsa101b
usage_data = 28.6T
usage_snapshots = 1.09T
usage_total = 29.69T
Projects:
exp_backup_headb
Children:
encryption => Manage encryption keys
replication => Manage remote replication
schema => Define custom property schema
akzfsa101b-netman:shares> select exp_backup_headb
akzfsa101b-netman:shares exp_backup_headb> ls
…
Filesystems:
NAME          SIZE   ENCRYPTED  MOUNTPOINT
image_backup  29.5T  off        /export/exp_backup_headb/image_backup
akzfsa101b-netman:shares exp_backup_headb> select image_backup
akzfsa101b-netman:shares exp_backup_headb/image_backup> snapshots
akzfsa101b-netman:shares exp_backup_headb/image_backup snapshots> snapshot day1
akzfsa101b-netman:shares exp_backup_headb/image_backup snapshots> ls
day1
Children:
automatic => Configure automatic snapshots
akzfsa101b-netman:shares exp_backup_headb/image_backup snapshots> exit
II.1.4 Run the image backup script (section II.1.1)
--- run the image backups
II.1.5 Create a clone based on the day1 snapshot on both heads
II.1.5.1 High level steps
- select the snapshot (select day1)
- clone <clone name> (clone DB12C_PITR1)
- set quota=50G
- set mountpoint=<clone path> (set mountpoint=/export/image_backup_heada/DB12C_PITR1)
- commit
- list clones
Do the above steps on both heads.
II.1.5.2 Detailed steps
Node akzfsa101a
akzfsa101a-netman:shares exp_backup_heada/image_backup snapshots> select day1
akzfsa101a-netman:shares exp_backup_heada/image_backup snapshots@day1> clone DB12C_PITR1
akzfsa101a-netman:shares exp_backup_heada/image_backup/DB12C_PITR1 (uncommitted clone)> set quota=50G
quota = 50G (uncommitted)
akzfsa101a-netman:shares exp_backup_heada/image_backup/DB12C_PITR1 (uncommitted clone)> set mountpoint=/export/image_backup_heada/DB12C_PITR1
mountpoint = /export/image_backup_heada/DB12C_PITR1 (uncommitted)
akzfsa101a-netman:shares exp_backup_heada/image_backup/DB12C_PITR1 (uncommitted clone)> commit
akzfsa101a-netman:shares exp_backup_heada/image_backup snapshots@day1> list clones
Clones: 1 total
PROJECT            SHARE
exp_backup_heada   DB12C_PITR1
akzfsa101a-netman:shares exp_backup_heada/image_backup snapshots@day1>
Node akzfsa101b
akzfsa101b-netman:shares exp_backup_headb/image_backup snapshots> select day1
akzfsa101b-netman:shares exp_backup_headb/image_backup snapshots@day1> clone DB12C_PITR1
akzfsa101b-netman:shares exp_backup_headb/image_backup/DB12C_PITR1 (uncommitted clone)> set quota=50G
quota = 50G (uncommitted)
akzfsa101b-netman:shares exp_backup_headb/image_backup/DB12C_PITR1 (uncommitted clone)> set mountpoint=/export/image_backup_headb/DB12C_PITR1
mountpoint = /export/image_backup_headb/DB12C_PITR1 (uncommitted)
akzfsa101b-netman:shares exp_backup_headb/image_backup/DB12C_PITR1 (uncommitted clone)> commit
akzfsa101b-netman:shares exp_backup_headb/image_backup snapshots@day1> list clones
Clones: 1 total
PROJECT            SHARE
exp_backup_headb   DB12C_PITR1
akzfsa101b-netman:shares exp_backup_headb/image_backup snapshots@day1>
II.1.6 Catalog the files into the database from the clone
- Start the DB in mount mode (using the control file taken on the day; see the sketch below)
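The control file restore itself is not shown in this post; a hedged sketch of that step, where the .ctl.rman piece name is hypothetical (use the actual ctl_bk piece created in section II.1.2):
$ rman target /
RMAN> startup nomount;
RMAN> restore controlfile from '/export/exp_backup_heada/image_backupa/DB12C_25_1_1_962430000.ctl.rman';
RMAN> alter database mount;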
DB12C_PITR> $ rman target /
Recovery Manager: Release 11.2.0.4.0 - Production on Mon Dec 11 08:58:55 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: DB12C_PITR (DBID=1161943131, not open)
RMAN> list copy of datafile 1;
using target database control file instead of recovery catalog
specification does not match any datafile copy in the repository
RMAN> catalog start with "/export/image_backup_heada/DB12C_PITR1";
searching for all files that match the pattern /export/image_backup_heada/DB12C_PITR1
List of Files Unknown to the Database
=====================================
File Name: /export/image_backup_heada/DB12C_PITR1/c-1161943131-20171211-03
File Name: /export/rman_backup_heada/DB12C_PITR1/data_D-DB12C_PITE_I-1161943131_TS-SYSAUX_FNO-2_0hslr01a
File Name: …
RMAN> catalog start with "/export/image_backup_headb/DB12C_PITR1";
RMAN> list copy of datafile 1;
List of Datafile Copies
=======================
Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
10      1    A 11-DEC-17       1282837    11-DEC-17
        Name: /export/image_backup_headb/DB12C_PITR1/data_D-DB12C_PITR-1161943131_TS-SYSTEM_FNO-1_0gslr01a
        Tag: IMAGECOPY
II.1.7 Switch the database to the clone copy
RMAN> switch database to copy;
datafile 1 switched to datafile copy "/export/image_backup_heada/DB12C_PITR/data_D-DB12C_PITR_I-1161943131_TS-SYSTEM_FNO-1_0gslr01a"
datafile 1 switched to datafile copy "/export/image_backup_heada/DB12C_PITR/data_D-DB12C_PITR_I-1161943131_TS-SYSAUX_FNO-1_0gslr01a"
….
datafile 4 switched to datafile copy
…..
II.1.8 Recover and open the database to the point in time
RMAN> run {
2> SET UNTIL TIME "to_date('2017-12-11:05:59:00', 'yyyy-mm-dd:hh24:mi:ss')";
3> recover database;
4> }
executing command: SET until clause
Starting recover at 11-DEC-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=118 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=131 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=134 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=137 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=139 device type=DISK
starting media recovery
archived log for thread 1 with sequence 6 is already on disk as file +DATA01/db12c/archivelog/2017_12_11/thread_1_seq_6.8711.962431237
archived log for thread 2 with sequence 3 is already on disk as file
…..
archived log file name=+DATA01/db12c/archivelog/2017_12_11/thread_8_seq_3.8712.962431237 thread=8 sequence=3
media recovery complete, elapsed time: 00:00:02
Finished recover at 11-DEC-17
RMAN> alter database open resetlogs;
database opened
RMAN> exit
Recovery Manager complete.
II.1.9 Verify the datafiles
$ sqlplus /nolog
SQL*Plus: Release 11.2.0.4.0 Production on Mon Dec 11 09:04:53 2017
Copyright (c) 1982, 2013, Oracle. All rights reserved.
SQL> conn / as sysdba
Connected.
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/export/image_backup_heada/DB12C_PITR1/data_D-DB12C
U_I-1161943131_TS-SYSTEM_FNO-1_0gslr01a
….
13 rows selected.
SQL>
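When the clone is no longer needed, it can be dropped and its ZFS share destroyed; a hedged sketch (share and mount names as above; the appliance CLI asks for confirmation before destroying a share):
SQL> shutdown immediate
$ umount /export/image_backup_heada/DB12C_PITR1
akzfsa101a-netman:shares exp_backup_heada/image_backup> select DB12C_PITR1
akzfsa101a-netman:shares exp_backup_heada/image_backup/DB12C_PITR1> destroy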
Appendix
1. How to set up the SSH key for the oracle account in ZFS
- Generate a DSA key on the compute node.
- Log in to netman (GUI): https://akzfsa101a-netman.des..com:215
- Go to Configuration => Preferences => SSH Public Keys (+) => paste the compute node's DSA key and give a relevant key comment
2. How to create pools
- Log in to netman (GUI): https://akzfsa101a-netman.des..com:215
- Go to Configuration => Storage => Add
3. How to create projects
- Log in to netman (GUI): https://akzfsa101a-netman.des..com:215
- Go to Configuration => Shares => Projects
4. How to create shares
- Log in to netman (GUI): https://akzfsa101a-netman.des..com:215
- Go to Configuration => Shares => Shares