There are some situations where we need to reconfigure the Grid Home, such as an IP address or hostname change, or a fresh GI configuration after a binary/installation issue.
Please follow the below steps
0. Please capture the below details before you do the reconfiguration:
crsctl stat res -p
crsctl stat res -t
cluster name  : cemutlo -n
scan name     : srvctl config scan
listener port : srvctl config listener
host and VIP names (see the example commands below)
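For the host and VIP names, a quick way to capture them (a minimal sketch; output format varies by version) is:
olsnodes -n -i
srvctl config nodeapps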
If you are doing this on Exadata, please perform step 3 before you run the deconfiguration.
1. Run the below command to deconfig the GI from all the nodes except the last node.
$ORACLE_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
2. Run the below command on the last node (-keepdg retains the DG configuration)
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode -keepdg
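As an optional sanity check (not part of the original steps), confirm that the clusterware stack is really down on each deconfigured node before moving on:
ps -ef | grep -E 'crsd|ocssd|evmd' | grep -v grep    # should return no processes
$ORACLE_HOME/bin/crsctl check crs                    # expected to report that CRS is not running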
3. During reconfiguration, in case the OCR disks are not listed as candidates for the OCR DG creation, we may need to delete and re-add the disks.
ORACLE ASM:
We can use either Plan A or Plan B.
Plan A:
oracleasm createdisk <label> <dev_mapper>
oracleasm createdisk OCR_DK1 /dev/mapper/ASM_OCR_0001
Plan B:
In case the above fails with "Unable to open device '/dev/mapper/ASM_OCR_0001': Device or resource busy":
/usr/sbin/asmtool -C -l /dev/oracleasm -n OCR_DK1 -s /dev/mapper/ASM_OCR_0001 -a force=yes
Post task:
oracleasm scandisks
oracleasm listdisks
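Optionally, the new label can be cross-checked against the device (a sketch, assuming the oracleasm support tools are installed):
oracleasm querydisk OCR_DK1
oracleasm querydisk /dev/mapper/ASM_OCR_0001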
AFD:
(bs = block size; bs * count should match the amount of the disk you want to wipe, here 1M * 1000 = ~1 GB)
dd if=/dev/zero of=/dev/mapper/ASM_REG_OCR_0001 bs=1M count=1000
Post task:
asmcmd afd_scan
asmcmd afd_lsdsk
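If the freshly wiped device still needs an AFD label before config.sh can offer it as a candidate, something along these lines can be used (a sketch; the label and device path are only examples):
asmcmd afd_label OCR_DK1 /dev/mapper/ASM_REG_OCR_0001
asmcmd afd_lsdsk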
Exadata:
Drop 3 disks from an existing DG, each from a different cell, to make them candidates for the OCR DG (this step needs to be done before the deconfiguration run; see the sketch below).
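A rough illustration of freeing three candidate disks (the diskgroup and disk names here are hypothetical; wait for the rebalance to finish before starting the deconfiguration):
sqlplus / as sysasm
SQL> alter diskgroup DATAC1 drop disk DATAC1_CD_10_CEL01 rebalance power 32;
SQL> alter diskgroup DATAC1 drop disk DATAC1_CD_10_CEL02 rebalance power 32;
SQL> alter diskgroup DATAC1 drop disk DATAC1_CD_10_CEL03 rebalance power 32;
SQL> select * from v$asm_operation;  -- keep checking until no rebalance rows remain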
4. Clean up the gpnp files from all the nodes (a backup example is shown below)
find <GRID_HOME>/gpnp/* -type f -exec rm -rf {} \;
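Before deleting, it may be worth taking a throwaway backup of the gpnp profiles on each node (a sketch; the backup location is arbitrary):
tar -czf /tmp/gpnp_$(hostname)_$(date +%Y%m%d).tar.gz -C <GRID_HOME> gpnp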
5. Run config.sh
cd $ORACLE_HOME/crs/config
./config.sh
Provide the below values; a post-configuration check follows this list.
=> cluster name, SCAN name, SCAN port; disable GNS if it is not used.
=> Ensure all the hosts and VIPs are fetched automatically; if not, please add them manually.
=> Storage for OCR: create a new DG and choose the candidate disks.
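Once config.sh and the root scripts complete, the rebuilt stack can be compared against the output captured in step 0, for example:
crsctl check cluster -all
crsctl stat res -t
srvctl config scan
srvctl config listener
srvctl status nodeapps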
Sunday, 24 March 2019
ACFS on Exadata
Oracle Automatic Storage Management Cluster File System (ACFS) on Exadata Database Machine:
Starting with Oracle Grid Infrastructure version 12.1.0.2, Oracle ACFS supports all database files and general purpose files on Oracle Exadata Database Machine running Oracle Linux on database servers.
The following database versions are supported by Oracle ACFS on Exadata Database Machine:
- Oracle Database 10g Rel. 2 (10.2.0.4 and 10.2.0.5)
- Oracle Database 11g (11.2.0.4 and higher)
- Oracle Database 12c (12.1.0.1 and higher)
The below steps were duly tested on Exadata X7 machines.
Pre-requisite for ACFS creation
- Verify that the ACFS/ADVM modules (oracleacfs & oracleadvm) are loaded
dcli -g /home/oracle/dbs_group -l oracle 'lsmod | grep oracle'
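If the modules are not listed, they can usually be loaded from the Grid Home as root (a sketch; the path assumes the 18c Grid Home used later in this post and root ssh equivalence for dcli):
dcli -g /home/oracle/dbs_group -l root '/u01/app/oragrid/product/18.0/bin/acfsload start'
dcli -g /home/oracle/dbs_group -l root 'lsmod | grep oracle'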
Step-by-step instructions to create ACFS on Exadata
1) As ORAGRID: Create the ASM volume to be used for the ACFS file system on the FLASH01 DG
asmcmd volcreate -G FLASH01 -s 10G acfsvol
2) As ORAGRID: Capture the volume device name
asmcmd volinfo -G FLASH01 acfsvol
For example, in this case:
Diskgroup Name: FLASH01
Volume Name: ACFSVOL
Volume Device: /dev/asm/acfsvol-118
State: ENABLED
Size (MB): 10240
Resize Unit (MB): 512
Redundancy: MIRROR
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:
3) As ORAGRID: Verify that the volume device has been created on all the nodes
dcli -g /home/oracle/dbs_group -l oracle 'ls -l /dev/asm/acfsvol-118'
4) As ORAGRID: Verify that ADVM is enabled
crsctl stat res ora.FLASH01.ACFSVOL.advm -t
For example, in this case:
crsctl stat res ora.FLASH01.ACFSVOL.advm -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FLASH01.ACFSVOL.advm
               ONLINE  ONLINE       fraespou0101             STABLE
               ONLINE  ONLINE       fraespou0102             STABLE
               ONLINE  ONLINE       fraespou0103             STABLE
               ONLINE  ONLINE       fraespou0104             STABLE
--------------------------------------------------------------------------------
5) As ORAGRID: Create ACFS on the ADVM volume
/sbin/mkfs -t acfs /dev/asm/acfsvol-321 -b 4K
6) As root: Create the mount point for the file system on all nodes and set its ownership and permissions
dcli -g ~/dbs_group -l root 'mkdir /u01/app/oracle/admin/common'
dcli -g ~/dbs_group -l root 'chown oracle:oinstall /u01/app/oracle/admin/common'
dcli -g ~/dbs_group -l root 'chmod 775 /u01/app/oracle/admin/common'
7) As root: Add the file system as a cluster resource
/u01/app/oragrid/product/18.0/bin/srvctl add filesystem -d /dev/asm/acfsvol-321 -m /u01/app/oracle/admin/common -u oracle -fstype ACFS -autostart ALWAYS
8) As root/oracle: Start the file system
/u01/app/oragrid/product/18.0/bin/srvctl start filesystem -d /dev/asm/acfsvol-321
Verifications
9) As root/oragrid: Verify the ACFS and ADVM resources and check the mount point's ownership and permissions
crsctl stat res ora.FLASH01.ACFSVOL.acfs -t
crsctl stat res ora.FLASH01.ACFSVOL.advm -t
10) As oracle: List the ACFS file system from all the nodes
dcli -g /home/oracle/dbs_group -l oracle df -h /u01/app/oracle/admin/common
11) As oracle: Touch a file on the ACFS file system from all the nodes
dcli -g /home/oracle/dbs_group -l oracle touch /u01/app/oracle/admin/common/a.txt
dcli -g /home/oracle/dbs_group -l oracle cat /u01/app/oracle/admin/common/a.txt
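For completeness, a hedged sketch of how this file system could later be removed, in the reverse order of creation (same names as above):
/u01/app/oragrid/product/18.0/bin/srvctl stop filesystem -d /dev/asm/acfsvol-321
/u01/app/oragrid/product/18.0/bin/srvctl remove filesystem -d /dev/asm/acfsvol-321
asmcmd voldelete -G FLASH01 acfsvol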
Friday, 22 March 2019
How to restore a corrupted/missing datafile in standby:
Before 12.2
1. Cancel the recovery
dgmgrl > edit database <db> set state='APPLY-OFF';
(or) alter database recover managed standby database cancel;
2. From the primary, take a datafile copy
rman target /
backup as copy datafile 2 format "/u01/app/oracle/data01/sysaux_01.dbk" ;
3. scp the file from the primary server to the standby server (possibly to the same location on the standby server)
4. Catalog the datafile copy (on the standby server)
rman target /
catalog datafilecopy '/u01/app/oracle/data01/sysaux_01.dbk';
5. Either switch to the copy or restore the datafile to the same location mentioned in the controlfile (on the standby server)
rman target /
switch datafile 2 to copy;
report schema;
Alternatively:
RUN {
ALLOCATE CHANNEL ch00 TYPE disk ;
restore datafile 2 ;
}
6. enable recovery
dgmgrl > edit database <db> set state='APPLY-ON';
(or) alter database recover managed standby database disconnect from session ;
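A quick way to confirm that media recovery is running again on the standby (a sketch, run on the standby instance):
sqlplus / as sysdba
SQL> select process, status, sequence# from v$managed_standby where process like 'MRP%';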
From 12.2 onwards
rman target sys/oracle@prod
RMAN> run
{
set newname for datafile 4 to '/u01/oracle/data02/users01.dbf';
restore (datafile 4 from service prodservice) using compressed backupset section size 1G;
catalog datafilecopy '/u01/oracle/data02/users01.dbf';
}
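As with the pre-12.2 flow, managed recovery should be stopped before this restore and re-enabled afterwards, for example:
dgmgrl > edit database <db> set state='APPLY-OFF';
-- run the RMAN block above, then --
dgmgrl > edit database <db> set state='APPLY-ON';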
Friday, 15 March 2019
OPatch lsinventory command does not show RAC details
Please use the below command to get the patching details of the remote nodes
$ORACLE_HOME/OPatch/opatchauto report -format xml -type patches -remote
===============================================================
This is applicable to all releases of OPatch.
The 12.2.0.1.13 release (and later) of OPatch, which is used for all releases 12.1.0.x and later
The 11.2.0.3.18 release (and later) of OPatch, which is used for all releases 11.2.0.1 - 11.2.0.4
Beginning with these releases of OPatch, OPatch will only support patching/listing inventory for local node of a RAC cluster. There will be no propagation to other nodes in cluster.
1. In OPatch 12.2.0.1.13 and OPatch 11.2.0.3.18, the OPatch command option "-all_nodes" is a no-op and still listed in -help.
If an opatch command is called with the "-all_nodes" option, OPatch will print the following warning on the console:
"OPatch was called with -all_nodes option. The -all_nodes option is being deprecated. Please remove it while calling OPatch."
2. In the future releases OPatch 12.2.0.1.14 (and later) and OPatch 11.2.0.3.20 (and later), the "-all_nodes" option will be removed from -help, and users will get a syntax error if they specify it.
Alternative to -all_nodes for multi-node GI/RAC:
$ORACLE_HOME/OPatch/opatchauto report -format xml -type patches -remote
(The remote command will get a list of patches from all nodes)
===========================================================
Refer:
GI/RAC/Single Instance Announcing Deprecation of OPatch Command Option "-all_nodes" (Doc ID 2331762.1)
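If opatchauto is not available, a simple loop over the cluster nodes gives a similar per-node view (a sketch; the node names and Oracle Home path are placeholders, and ssh equivalence is assumed):
OH=/u01/app/oracle/product/19.0.0/dbhome_1   # hypothetical Oracle Home
for node in node1 node2; do
  echo "### $node"
  ssh "$node" "ORACLE_HOME=$OH $OH/OPatch/opatch lspatches"
done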