The following are important new SQL*Plus features introduced in 12c and 18c.
1) SET FEEDBACK ON SQL_ID
It shows the SQL_ID for each SQL statement executed from the SQL*Plus prompt.
SQL> create table test_invisible_cols (emp_id number, emp_info char(20),
  2  emp_acc_no number invisible);
Table created.
SQL_ID: cw7wvs1x99d72
2) SET ROWLIMIT 10
It limits the number of rows returned by a query.
3) SET LINESIZE WINDOW
It adjusts the output so that the columns fit the current window size.
4) SET COLINVISIBLE ON
This allows DESCRIBE to show the invisible columns.
SQL> create table test_invisible_cols (emp_id number, emp_info char(20),
  2  emp_acc_no number invisible);
Table created.
SQL> desc test_invisible_cols
Name Null? Type
------------------------------ -------- ----------------
EMP_ID NUMBER
EMP_INFO CHAR(20)
SQL> SET COLINVISIBLE ON
SQL> desc test_invisible_cols
Name Null? Type
------------------------------ -------- ----------------
EMP_ID NUMBER
EMP_INFO CHAR(20)
EMP_ACC_NO (INVISIBLE) NUMBER
SQL>
5) sqlplus -nologintime
From 12.2 onwards the SQL*Plus connection banner shows the last login time (DBA_USERS has an additional column, LAST_LOGIN).
If we want to suppress the last-login display, we can use the -nologintime option.
6) SET HISTORY ON - Another important feature is the SQL*Plus command history:
- we can list the history
- we can run a command from the history list
- we can edit a command from the history
- we can delete one command or all commands from the history
  ( hist clear / hist 1 delete )
SQL> set history on
SQL> select count(*) from dba_objects;
COUNT(*)
----------
60537
SQL> show sga
Total System Global Area 3.9278E+10 bytes
Fixed Size 30153104 bytes
Variable Size 7381975040 bytes
Database Buffers 3.1675E+10 bytes
Redo Buffers 190418944 bytes
SQL> show parameter recovery
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string +FLASH01
db_recovery_file_dest_size big integer 17793M
recovery_parallelism integer 0
remote_recovery_file_dest string
SQL> history
1 select count(*) from dba_objects;
2 show sga
3 show parameter recovery
SQL> history 2 run
Total System Global Area 3.9278E+10 bytes
Fixed Size 30153104 bytes
Variable Size 7381975040 bytes
Database Buffers 3.1675E+10 bytes
Redo Buffers 190418944 bytes
SQL>
SQL> help history
HISTORY
-------
Stores, lists, executes, edits of the commands
entered during the current SQL*Plus session.
HIST[ORY] [N {RUN | EDIT | DEL[ETE]}] | [CLEAR]
N is the entry number listed in the history list.
Use this number to recall, edit or delete the command.
Example:
HIST 3 RUN - will run the 3rd entry from the list.
HIST[ORY] without any option will list all entries in the list.
SQL>
Dear friends, the content in this blog is purely my own opinion and does not reflect any of my current or previous employers' materials or official documents. My posts here are not warranted to be free of errors; please use them at your own risk and only after thorough testing in your environment. If you feel I am violating any company's policies or documents, kindly mail me at jeyaseelan.hi@gmail.com and I will be happy to remove that content from this blog.
Thursday, 15 November 2018
Monday, 12 November 2018
How to connect to an Oracle database using Python 3.7
To connect to an Oracle database from Python, the cx_Oracle module must be installed.
Let us first go through installing the cx_Oracle module, and then write a Python script that uses it.
Step 1. Download & Install cx_Oracle ( Detail installation steps are in https://oracle.github.io/python-cx_Oracle/)
Installation methods:
There are various methods to install Python modules; the following are the most popular:
- pip
- git
- Using source from pypi
I used the last option. Let us quickly go through all the steps.
1.1 PIP:
python -m pip install cx_Oracle --upgrade
In case the internet proxy is not set, you may want to use the following:
python -m pip install --upgrade --user --proxy http://proxy.xxxx.com:8080 cx_oracle
1.2 GIT:
git clone https://github.com/oracle/python-cx_Oracle.git cx_Oracle
cd cx_Oracle
git submodule init
git submodule update
python setup.py install
1.3 Install Using Source from PyPI ( https://pypi.org/project/cx_Oracle/#files)
The source package can be downloaded manually from PyPI and extracted, after which the following commands should be run with the target Python version:
python setup.py build
python setup.py install
I downloaded cx_Oracle version 7 for Python 3.7.
cx_Oracle 7 has been tested with Python version 2.7, and with versions 3.5 through 3.7. You can use cx_Oracle with Oracle 11.2, 12.1 and 12.2 and 18.3 client libraries. Oracle's standard client-server version interoperability allows connection to both older and newer databases. For example Oracle 18.3 client libraries can connect to Oracle Database 11.2.
2. Database connection script
cat db1.py
import sys
import cx_Oracle

username = 'user1'
password = 'password'
databaseName = 'connect_string'

con = cx_Oracle.connect(username, password, databaseName)
cursor = con.cursor()
cursor.execute('select startup_time, sysdate from v$instance')
result = cursor.fetchall()
for row in result:
    print(row[0])
    print(row[1])
Execution:
./python db1.py
2018-11-12 02:06:38
2018-11-13 02:54:36
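cx_Oracle implements the standard Python DB-API 2.0 interface, so the connect / cursor / execute / fetchall flow in db1.py can be tried without an Oracle instance by using the stdlib sqlite3 module, which follows the same API. The table and column names below are made up purely for illustration; sqlite3 is a stand-in here, not an Oracle driver.

```python
# DB-API 2.0 sketch: the same connect/cursor/execute/fetchall flow as the
# cx_Oracle script above, but against an in-memory SQLite database so it
# runs anywhere. Table/column names are illustrative only.
import sqlite3

con = sqlite3.connect(':memory:')
cursor = con.cursor()
cursor.execute('create table instance_demo (startup_time text, check_time text)')
cursor.execute("insert into instance_demo values "
               "('2018-11-12 02:06:38', '2018-11-13 02:54:36')")

cursor.execute('select startup_time, check_time from instance_demo')
result = cursor.fetchall()   # list of row tuples, exactly as with cx_Oracle
for row in result:
    print(row[0])
    print(row[1])
con.close()
```

Once the pattern works against sqlite3, only the connect() call needs to change to point cx_Oracle at a real database.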
Friday, 9 November 2018
"No space left on device" while filesystem has lots of free space - inode table full - Unix
We can get "No space left on device" even when the filesystem has plenty of free space. This sometimes happens because the inode table is full: Unix keeps track of file metadata in inodes, and too many files under a filesystem or folder can exhaust the available inodes.
-- df shows it has 61% free space
df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg-u01
9.8G 5.6G 3.7G 61% /u01
-- df -i shows it is 100% full.
df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/rootvg-u01
655360 655360 0 100% /u01
Use the command below to find the folders with huge numbers of files:
find /u01 -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
We may need to either house-keep the unwanted files or move them to a different filesystem to free up inodes.
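The find | sort | uniq -c pipeline above counts files per directory. The same per-directory count can be sketched in Python with os.walk; the demo directory tree below is created on the fly just so the sketch is runnable anywhere (a real run would point it at /u01).

```python
# Count files per directory, similar to:
#   find /u01 -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
# Demonstrated on a throwaway temp tree instead of /u01.
import os
import tempfile
from collections import Counter

def files_per_dir(root):
    counts = Counter()
    for dirpath, dirnames, filenames in os.walk(root):
        counts[dirpath] += len(filenames)
    return counts

# build a demo tree: 3 files under a/, 1 file under b/
root = tempfile.mkdtemp()
for sub, n in (('a', 3), ('b', 1)):
    d = os.path.join(root, sub)
    os.makedirs(d)
    for i in range(n):
        open(os.path.join(d, 'f%d.trc' % i), 'w').close()

counts = files_per_dir(root)
# print directories in ascending file-count order, like the sort -k 1 -n step
for d, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(n, d)
```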
Tuesday, 6 November 2018
Exadata - centralized management utilities
Exadata has the following utilities, which facilitate centralized management across an Oracle Exadata System Software realm:
dcli
cellcli
dbmcli
=> dcli: a utility that runs commands on multiple compute nodes and cells in parallel threads.
=> cellcli: the Cell Control Command-Line Interface (CellCLI) is the command-line administration tool for Oracle Exadata Storage Server.
=> dbmcli: the command-line administration tool for configuring database servers.
dcli examples:
Assume the dbs_group file contains the compute nodes and the cell_group file contains the cell servers.
cat dbs_group
db1comp0101.yg.com
db1comp0102.yg.com
db1comp0103.yg.com
db1comp0104.yg.com
db1comp0105.yg.com
db1comp0106.yg.com
db1comp0107.yg.com
db1comp0108.yg.com
cat cell_group
cell1server0101.yg.com
cell1server0102.yg.com
cell1server0103.yg.com
cell1server0104.yg.com
cell1server0105.yg.com
cell1server0106.yg.com
cell1server0107.yg.com
cell1server0108.yg.com
cell1server0109.yg.com
cell1server0110.yg.com
cell1server0111.yg.com
cell1server0112.yg.com
cell1server0113.yg.com
cell1server0114.yg.com
1. To run a command on all compute nodes:
dcli -g <group_file> -l <user> <command>
dcli -g dbs_group -l oracle df -h
2. To run a command on all cell servers (assuming the oracle user has an SSH key set for the cellmonitor account):
dcli -g <group_file> -l <user> <command>
dcli -g cell_group -l cellmonitor cellcli -e list griddisk
3. To copy a file from one compute node to all compute nodes:
dcli -g dbs_group -l oracle -f <filename> -d <target_location>
dcli -g dbs_group -l oracle -f /tmp/p27681568_180000_Linux-x86-64.zip -d /u01/app/oracle/patch/
4. To unzip the files on all compute nodes:
dcli -l oragrid -g dbs_group unzip -oq -d <target_directory> <patch_file>
dcli -l oragrid -g dbs_group unzip -oq -d /u01/app/oracle/patch/ /u01/app/oracle/patch/p27681568_180000_Linux-x86-64.zip
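dcli's core idea, fanning one command out to many hosts in parallel threads, can be sketched with the Python stdlib. The real dcli runs the command over ssh; the sketch below substitutes a local echo per "host" so it runs anywhere, and the host names are just the made-up ones from the dbs_group example above.

```python
# dcli-style parallel fan-out sketch. A real implementation would run
# something like ['ssh', user + '@' + host, command]; here we fake it
# with a local echo so the script runs without any remote hosts.
import subprocess
from concurrent.futures import ThreadPoolExecutor

hosts = ['db1comp0101.yg.com', 'db1comp0102.yg.com', 'db1comp0103.yg.com']

def run_on_host(host, command):
    # stand-in for: subprocess.run(['ssh', 'oracle@' + host, command], ...)
    out = subprocess.run(['echo', host + ':', command],
                         capture_output=True, text=True)
    return out.stdout.strip()

# one worker thread per host, like dcli's parallel threads
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
    results = list(pool.map(lambda h: run_on_host(h, 'df -h'), hosts))

for line in results:
    print(line)
```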
Thursday, 1 November 2018
How to onboard ZFS ( Backup appliance) into OEM EM12c/EM13c
These steps are based on Oracle documentation for enabling monitoring of an Oracle ZFS Storage Appliance in Oracle Enterprise Manager 12c version 12.1.0.5.
OMS Version: 12.1.0.5
Plugin version : 12.1.0.6/12.1.0.7
ZFS plugin : 2013.06.05.7.14,1-1.1/ 8.7.14
1. UNIX SA Task: Preparing Oracle ZFS Storage Appliance for Oracle Enterprise Manager Monitoring
Log in to the Oracle ZFS Storage Appliance BUI as user root and go to the Maintenance/Workflows tab.
- Run the Configure for Oracle Enterprise Manager Monitoring workflow, entering and recording the password for the oracle_agent user. You can rerun the workflow later to reset the worksheet, re-enable data sets, or recreate the user.
When you receive the message that oracle_agent already exists and you get asked whether to recreate the user, please answer NO.
The following occurs when the workflow is run:
- A restricted-role user named oracle_agent is created for use by the monitoring agent. The restricted access role associated with the oracle_agent user is also named oracle_agent.
- An analytics worksheet is automatically created to ensure smooth communication between the Cloud Control administrator and the storage administrator tasked with administering the target Oracle ZFS Storage Appliance.
2. Go to Configuration/USERS and edit the user oracle_agent by clicking the pencil on the far right of the user's entry.
3. Ensure that the "Kiosk user" checkbox is unchecked.
4. Press the APPLY button if you changed the user.
5. Enable the TLS v1.0 protocol (Configuration => HTTPS => Edit SSL/TLS => check TLS V1.0 Protocol). This step is required only with OEM 12.1.0.5 + ZFS plugin 12.1.0.6/12.1.0.7; for EM13c it can be skipped.
6. Communicate the oracle_agent password to the OEM administrator.
2: OEM Admin Task : Discover / Add ZFS Storage Appliances to OEM 12c
Task 2.1 : Push the plugin to the target agent from OEM
1. Login to OEM12c Enterprise Manager with an Account with OEM ADMIN privilege
2. Select Setup → Extensibility → Plug-ins → Servers, Storage & Network → Oracle ZFS Storage Appliance
3. Select Actions → Deploy On → Management Agent → Continue
4. Enter the "Management Agent" (zfsappliance.kh.com:3830) → Continue → Next
5. Wait until the plugin installation completes successfully.
6. Login to the agent host ( zfsappliance.kh.com)
7. Create blackout & Stop agent
. oraenv
<AGENT/AGENT12C>
emctl start blackout ZFS_ONBOARD -nodelevel -d 01:00
emctl stop agent
8. cd /data/oracle/product/agent12c/plugins/oracle.sun.oss7.agent.plugin_12.1.0.7.0/scripts/emx/sun_storage_7000
( <agent_home>/plugins/oracle.sun.oss7.agent.plugin_12.1.0.7.0/scripts/emx/sun_storage_7000/ )
9. Backup current Oracle_Grid.jar as Oracle_Grid.jar.bak
( cp Oracle_Grid.jar Oracle_Grid.jar.bak )
10. Download the latest Oracle_Grid.jar [ This step is required only if we use OMS 12.1.0.5 + plugin version 12.1.0.6/12.1.0.7 + ZFS image version > 8 ( 8.1.14) ]
( Alternatively this jar can be downloaded from Note 2384693.1 )
11. Install latest JDK ( take backup of the current JDK)
This patch can be downloaded from MOS note 1944044.1.
$AGENT_HOME/jdk/bin/java -version
cd $AGENT_HOME
unzip p19138237_16085_Linux-x86-64.zip
./jdk-6u85-linux-x64.bin
$AGENT_HOME/jdk/bin/java -version
12. Start the agent & remove the blackout
emctl start agent
emctl stop blackout ZFS_ONBOARD
Task 2.2 : Discover the target in OEM
1. Login to OEM12c Enterprise Manager with an account with ADD TARGET privilege
2. Select Setup -> Add Target -> Add Targets Manually
3. Select Add Targets Declaratively by Specifying Target Monitoring Properties
4. As Target Type, choose Oracle ZFS Storage Appliance
5. For Monitoring Agent, click the Search button and select Management Agent: zfsappliance.kh.com:3830
6. Press Add Manually
7. In the opening window enter the target details as follows:
Target
Target Name: zfsappliance.kh.com
Monitoring Target Credentials
UserName: oracle_agent
Password / Confirm Password: <oracle_agent password>
Properties Management Port: 215
Storage Server Name or IP Address: zfsappliance-netman.kh.com
Global Properties
Department: Standard Database Support
Lifecycle Status: Production
Location: Germany
Contact : DB_GLOBAL_DATABASE
Press OK
8. Verify that the target has been added and appears as up and running.
9. Repeat steps 2-8 for all the heads in the ZFS appliance.
Saturday, 27 October 2018
Unix rm returns '/bin/rm: cannot execute [Argument list too long]' when the file list exceeds the kernel argument-length limit (ARG_MAX)
-ksh: rm: /bin/rm: cannot execute [Argument list too long]
Work-around:
find . -name '*.trm' | xargs rm -f
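The error comes from the kernel's per-exec argument-length limit (ARG_MAX); find | xargs avoids it by invoking rm in batches. The same cleanup can be sketched in Python, which never builds a command line at all; the .trm pattern matches the work-around above, and the temp directory below exists only so the sketch is runnable.

```python
# Delete matching files without building a huge command line,
# so the kernel ARG_MAX limit never comes into play.
import pathlib
import tempfile

# demo: create a few .trm files in a throwaway temp directory
d = pathlib.Path(tempfile.mkdtemp())
for i in range(5):
    (d / ('test_%d.trm' % i)).touch()

removed = 0
for p in d.glob('*.trm'):   # equivalent of: find . -name '*.trm' | xargs rm -f
    p.unlink()
    removed += 1
print('removed', removed, 'files')
```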
Wednesday, 4 July 2018
GI Patch level mismatch between Nodes - after rolling back failed patch at GI_HOME
Error Messages:
We may get the following error messages while doing ASM operations
ORA-15137: The ASM cluster is in rolling patch state.
The cluster upgrade state is [ROLLING PATCH].
Before implementing a solution, please validate where the actual problem is:
1. Check patchlevel
Node1
$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [1295926169] and the complete list of patches [27678276 ] have been applied on the local node. The release patch string is [18.2.0.0.0].
Node 2
$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [3087104924] and the complete list of patches [27676517 27678271 27678276 ] have been applied on the local node. The release patch string is [18.2.0.0.0].
# Node 2 is ahead of Node 1
2. Kfod on GI Binary to list the patch levels
Node 1:
$ $ORACLE_HOME/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
1295926169
$ORACLE_HOME/bin/kfod op=patches
---------------
List of Patches
===============
27678276
Node 2:
$ORACLE_HOME/bin/kfod op=patchlvl
-------------------
Current Patch level
===================
3087104924
$ORACLE_HOME/bin/kfod op=patches
---------------
List of Patches
===============
27676517
27678271
27678276
# In Node 1 27676517 & 27678271 are missing.
3. lsinventory
Node 1
$ORACLE_HOME/OPatch/opatch lsinventory | grep -i desc
ARU platform description:: Linux x86-64
Patch description: "OCW RELEASE UPDATE 18.2.0.0.0 (27678276)"
Patch description: "ACFS RELEASE UPDATE 18.2.0.0.0 (27678271)"
Patch description: "Database Release Update : 18.2.0.0.180417 (27676517)"
Node 2
$ORACLE_HOME/OPatch/opatch lsinventory | grep -i desc
ARU platform description:: Linux x86-64
Patch description: "OCW RELEASE UPDATE 18.2.0.0.0 (27678276)"
Patch description: "ACFS RELEASE UPDATE 18.2.0.0.0 (27678271)"
Patch description: "Database Release Update : 18.2.0.0.180417 (27676517)"
# But lsinventory shows both patchlevel same
4. Patch log files
Node 1 logs:
Node1_opatch_logs
Applying /data/oracle/OraclePatches/27681568/27676517 ...
Patch /data/oracle/OraclePatches/27681568/27676517 applied.
Applying /data/oracle/OraclePatches/27681568/27678271 ...
Patch /data/oracle/OraclePatches/27681568/27678271 applied.
Applying /data/oracle/OraclePatches/27681568/27678276 ...
Patch /data/oracle/OraclePatches/27681568/27678276 applied. <==== applied successfully
Solutions:
Try the following solutions one by one. Check the release patch level after implementing each one; if a particular solution fixes the issue, skip the remaining steps.
# Sol-1
ALTER SYSTEM STOP ROLLING PATCH
# SOl-2
crsctl stop rollingpatch
# Sol-3
As root user,
$GRID_HOME/crs/install/rootcrs.sh -prepatch
$GRID_HOME/crs/install/rootcrs.sh -postpatch
if above does not help, execute following and check status
$GI_HOME/bin/clscfg -patch
# Sol-4
Action Plan
==============
Step 1
$GRID_HOME/crs/install/rootcrs.sh -prepatch
Step 2
Add the missing patches to the kfod output [the ones shown in opatch lsinventory].
As grid owner,
$GRID_HOME/bin/patchgen commit -pi 27676517
$GRID_HOME/bin/patchgen commit -pi 27678271
In case we need to roll back an extra patch:
$GRID_HOME/bin/patchgen commit -rb 27678271
Check the below on both nodes and notify if there is any difference
kfod op=patchlvl
kfod op=patches
Step 3
If they are same run the below,
$GRID_HOME/crs/install/rootcrs.sh -postpatch
Monday, 4 June 2018
How to open ASM trace files using vi
When we open ASM trace files using vi command
we may get the following error
Error detected while processing command line:
E492: Not an editor command: ASM1_arb0_396061.trc
Press ENTER or type command to continue
To overcome this, we can use the -- (end of options) argument, since vi otherwise interprets the leading + in the file name as a command:
vi -- +ASM_arb0_396061.trc
Friday, 25 May 2018
Hot Cloning of PDB - 12c2 / 18c
12c introduced the container and pluggable database feature. To clone a PDB from another PDB, we previously needed to keep the source PDB in read-only mode.
SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED'
2 ;
no rows selected
SQL> create pluggable database DEVPDB4 from DEVPDB3;
create pluggable database DEVPDB4 from DEVPDB3
*
ERROR at line 1:
ORA-65035: unable to create pluggable database from DEVPDB3
From 12cR2 onwards we can perform this operation while the source PDB is in read-write mode.
In order to achieve this, the following conditions should be met:
1. Local undo enabled
2. clonedb=true (init.ora parameter)
Enable Local Undo
This needs to be done in upgrade mode.
# stop DB
srvctl stop database -d db1
# set cluster_database = false ( if you are doing this on RAC)
sqlplus /nolog
conn /as sysdba
alter system set cluster_database=false scope=spfile;
shutdown immediate;
# enable local undo
sqlplus /nolog
conn / as sysdba
startup upgrade
ALTER DATABASE LOCAL UNDO ON;
shutdown immediate;
exit
# enable cluster_database=true ( if you are performing this activity in RAC)
sqlplus /nolog
conn /as sysdba
alter system set cluster_database=true scope=spfile;
shutdown immediate;
# start db
srvctl start database -d db1
# set clonedb=true
alter system set clonedb=true scope=spfile sid='*';
# bounce the db for the above parameter to take effect
# Verify if local undo is enabled
SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';
PROPERTY_NAME
--------------------------------------------------------------------------------
PROPERTY_VALUE
--------------------------------------------------------------------------------
LOCAL_UNDO_ENABLED
TRUE
Hot Clone PDB
# Source PDB in read write mode
1* select name,open_mode from v$pdbs where name='DEVPDB3'
SQL> /
NAME OPEN_MODE
------------------------------ ----------------------------------------
DEVPDB3 READ WRITE
SQL> create pluggable database DEVPDB4 from DEVPDB3;
Pluggable database created.
SQL>
MIRA - Multi Instance Redo Apply - 12cR2 onwards/18c
With Oracle 12c Release 2, we can enable the Multi Instance Redo Apply (MIRA) option in Data Guard, which applies redo logs to multiple instances at the same time to improve the recovery time objective.
Once it is enabled, we can see pr* process trees on both instances and pr* trace files on both nodes (ps -ef | grep pr).
Please note that there will be one MRP0 for the entire MIRA recovery, which is normal: MRP0 is the coordinator process for all the other pr* recovery processes on all instances.
This can be enabled using either sqlplus or dgmgrl.
1. How to enable using sqlplus:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION INSTANCES ALL -> to start on all instances
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION INSTANCES 2 -> to start on any two instances
2. How to enable using DGMGRL:
Change the value of ApplyInstances to the number of instances (set the value to 2 in this case).
DGMGRL> edit database "18CDB" set property ApplyInstances=2;
Property "applyinstances" updated
DGMGRL> show configuration
Alert.log:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION INSTANCES 2
2018-05-25T07:22:10.126643+01:00
Attempt to start background Managed Standby Recovery process (18CDB1)
Starting background process MRP0
2018-05-25T07:22:10.144066+01:00
MRP0 started with pid=47, OS id=343863
2018-05-25T07:22:10.145157+01:00
Background Managed Standby Recovery process started (18CDB1)
2018-05-25T07:22:15.302656+01:00
Started logmerger process on instance id 1
Started logmerger process on instance id 2
Starting Multi Instance Redo Apply (MIRA) on 2 instances
2018-05-25T07:22:15.354896+01:00
Starting Multi Instance Redo Apply (MIRA)
2018-05-25T07:22:15.459146+01:00
.... (PID:399563): Managed Standby Recovery starting Real Time Apply
2018-05-25T07:22:16.289661+01:00
Started 24 apply slaves on instance id 1 =========> apply slaves started on instance 1
2018-05-25T07:22:16.818633+01:00
Started 24 apply slaves on instance id 2 =========> apply slaves started on instance 2
2018-05-25T07:22:17.515862+01:00
ALTER SYSTEM SET remote_listener=' ahbdre1-sc.des.sg.com:1722' SCOPE=MEMORY SID='18CDB1';
2018-05-25T07:22:17.516825+01:00
ALTER SYSTEM SET listener_networks='' SCOPE=MEMORY SID='18CDB1';
3. Background processes => once enabled, we can see the following background processes:
SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition
Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
[18CDB1] $ ps -ef | grep GW | grep pr
oracle 323507 1 1 07:11 ? 00:00:01 ora_pr00_18CDB1
oracle 323512 1 0 07:11 ? 00:00:00 ora_pr02_18CDB1
oracle 323514 1 0 07:11 ? 00:00:00 ora_pr03_18CDB1
oracle 323516 1 0 07:11 ? 00:00:00 ora_pr04_18CDB1
oracle 323518 1 0 07:11 ? 00:00:00 ora_pr05_18CDB1
oracle 323520 1 0 07:11 ? 00:00:00 ora_pr06_18CDB1
oracle 323522 1 0 07:11 ? 00:00:00 ora_pr07_18CDB1
oracle 323534 1 0 07:11 ? 00:00:00 ora_pr08_18CDB1
oracle 323544 1 0 07:11 ? 00:00:00 ora_pr09_18CDB1
oracle 323546 1 0 07:11 ? 00:00:00 ora_pr0a_18CDB1
oracle 323548 1 0 07:11 ? 00:00:00 ora_pr0b_18CDB1
oracle 323550 1 0 07:11 ? 00:00:00 ora_pr0c_18CDB1
oracle 323552 1 0 07:11 ? 00:00:00 ora_pr0d_18CDB1
oracle 323554 1 0 07:11 ? 00:00:00 ora_pr0e_18CDB1
oracle 323556 1 0 07:11 ? 00:00:00 ora_pr0f_18CDB1
oracle 323558 1 0 07:11 ? 00:00:00 ora_pr0g_18CDB1
oracle 323560 1 0 07:11 ? 00:00:00 ora_pr0h_18CDB1
oracle 323562 1 0 07:11 ? 00:00:00 ora_pr0i_18CDB1
oracle 323564 1 0 07:11 ? 00:00:00 ora_pr0j_18CDB1
oracle 323566 1 0 07:11 ? 00:00:00 ora_pr0k_18CDB1
oracle 323568 1 0 07:11 ? 00:00:00 ora_pr0l_18CDB1
oracle 323570 1 0 07:11 ? 00:00:00 ora_pr0m_18CDB1
oracle 323572 1 0 07:11 ? 00:00:00 ora_pr0n_18CDB1
oracle 323574 1 0 07:11 ? 00:00:00 ora_pr0o_18CDB1
oracle 323576 1 0 07:11 ? 00:00:00 ora_pr0p_18CDB1
oracle 323582 1 0 07:11 ? 00:00:00 ora_pr1e_18CDB1
oracle 323584 1 0 07:11 ? 00:00:00 ora_pr1f_18CDB1
oracle 323586 1 0 07:11 ? 00:00:00 ora_pr1g_18CDB1
oracle 323588 1 0 07:11 ? 00:00:00 ora_pr1i_18CDB1
oracle 323590 1 0 07:11 ? 00:00:00 ora_pr1l_18CDB1
oracle 323592 1 0 07:11 ? 00:00:00 ora_pr1n_18CDB1
4. Managed standby processes => there will be only one MRP process
SQL> select inst_id,PROCESS,STATUS,CLIENT_PROCESS,CLIENT_PID,THREAD#,SEQUENCE#,BLOCK#,ACTIVE_AGENTS,KNOWN_AGENTS
  2  from gv$managed_standby order by CLIENT_PROCESS,THREAD#,SEQUENCE#
SQL> /
INST_ID PROCESS STATUS       CLIENT_P CLIENT_PID THREAD# SEQUENCE# BLOCK# ACTIVE_AGENTS KNOWN_AGENTS
------- ------- ------------ -------- ---------- ------- --------- ------ ------------- ------------
      1 ARCH    CONNECTED    ARCH         399744       0         0      0             0            0
      1 ARCH    CONNECTED    ARCH         399776       0         0      0             0            0
      1 ARCH    CONNECTED    ARCH         399774       0         0      0             0            0
      1 ARCH    CONNECTED    ARCH         399770       0         0      0             0            0
      1 ARCH    CONNECTED    ARCH         399768       0         0      0             0            0
      1 ARCH    CONNECTED    ARCH         399766       0         0      0             0            0
      1 ARCH    CONNECTED    ARCH         399764       0         0      0             0            0
      1 ARCH    CONNECTED    ARCH         399750       0         0      0             0            0
      1 ARCH    CONNECTED    ARCH         399754       0         0      0             0            0
      1 ARCH    CONNECTED    ARCH         399756       0         0      0             0            0
      1 ARCH    CONNECTED    ARCH         399758       0         0      0             0            0
      1 ARCH    CONNECTED    ARCH         399760       0         0      0             0            0
      2 ARCH    CLOSING      ARCH         117678       1      2242  10240             0            0
      2 ARCH    CLOSING      ARCH         117670       1      2243      1             0            0
      2 ARCH    CLOSING      ARCH         117690       1      2244      1             0            0
      2 ARCH    CLOSING      ARCH         117684       1      2245      1             0            0
      2 ARCH    CLOSING      ARCH         117680       1      2246  10240             0            0
      2 ARCH    CLOSING      ARCH         117674       1      2247      1             0            0
      2 ARCH    CLOSING      ARCH         117686       2      2215  45056             0            0
      2 ARCH    CLOSING      ARCH         117664       2      2218   4096             0            0
      2 ARCH    CLOSING      ARCH         117668       2      2237      1             0            0
      2 ARCH    CLOSING      ARCH         117688       2      2238      1             0            0
      2 ARCH    CLOSING      ARCH         117682       2      2239  10240             0            0
      2 ARCH    CLOSING      ARCH         117676       2      2240      1             0            0
      2 RFS     IDLE         Archival     222920       1         0      0             0            0
      2 RFS     IDLE         Archival      67323       2         0      0             0            0
      2 RFS     IDLE         LGWR         222400       1      2248    136             0            0
      2 RFS     IDLE         LGWR          67057       2      2241    180             0            0
      1 DGRD    ALLOCATED    N/A             N/A       0         0      0             0            0
      2 DGRD    ALLOCATED    N/A             N/A       0         0      0             0            0
      2 DGRD    ALLOCATED    N/A             N/A       0         0      0             0            0
      1 DGRD    ALLOCATED    N/A             N/A       0         0      0             0            0
      1 MRP0    APPLYING_LOG N/A             N/A       1      2248    135            62           62
33 rows selected.