Monday 22 August 2011

How to extract files from an unmounted ASM Diskgroup

Please note that amdu is undocumented. Do not try these options without confirmation from Oracle Support.
amdu is a utility that ships with 11g by default (for 10g we need to download it from Metalink and install it). It has various options, such as comparing mirrors, printing block contents, performing a full scan on a diskgroup, and extracting files from a diskgroup. The utility works even when the diskgroups are in an unmounted state.
Suppose one or more diskgroups cannot be mounted for various reasons and the mirror copy is also not working; we may need to extract a few files from the diskgroup to bring the database to a consistent state, for example some datafiles, archived logs, or redo log files. This utility is very useful in that scenario.
If there is a physical/media error on the files we are trying to extract, the whole process may not work as expected.
The following case study restores all database files from unmounted diskgroups and rebuilds the database on a non-ASM filesystem.

1.    Create a sample database in the ASM diskgroups
create database TEST
MAXINSTANCES 32
MAXLOGHISTORY 1000
MAXLOGFILES 64
MAXLOGMEMBERS 5
MAXDATAFILES 2056
datafile '+DATA_01/TEST/system01.dbf' size 300M EXTENT MANAGEMENT
LOCAL
SYSAUX DATAFILE '+DATA_01/TEST/sysaux01.dbf' size 300M
UNDO TABLESPACE "UNDOTBS1" DATAFILE
'+DATA_01/TEST/undotbs01.dbf' size 200M
LOGFILE GROUP 1 ('+REDO_01/TEST/redo01.log') SIZE 51200K,
GROUP 2 ('+REDO_02/TEST/redo02.log') SIZE 51200K,
GROUP 3 ('+REDO_01/TEST/redo03.log') SIZE 51200K ,
GROUP 4 ('+REDO_02/TEST/redo04.log') SIZE 51200K
DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE
'+DATA_01/TEST/temp01.dbf' size 500m
CHARACTER SET WE8ISO8859P1
NATIONAL CHARACTER SET AL16UTF16;
@?/rdbms/admin/catalog
@?/rdbms/admin/catproc
@?/rdbms/admin/dbmspool.sql
2.    Identify the file# against the database files.
set pagesize 0
set linesize 200
col full_alias_path format a80
select  x.gnum,x.filnum,x.full_alias_path,f.ftype from (
SELECT gnum,filnum,concat('+'||gname, sys_connect_by_path(aname, '/')) full_alias_path
FROM (SELECT g.name gname, a.parent_index pindex, a.name aname,
            a.reference_index rindex,a.group_number gnum,a.file_number filnum
      FROM v$asm_alias a, v$asm_diskgroup g
      WHERE a.group_number = g.group_number)
START WITH (mod(pindex, power(2, 24))) = 0 CONNECT BY PRIOR rindex = pindex) x,
(select group_number gnum,file_number filnum, type ftype from v$asm_file order by group_number,file_number) f
where x.filnum != 4294967295
and x.gnum=f.gnum and x.filnum=f.filnum
/
         1        256 +DATA_01/JEYSERVER/control01.ctl                                             CONTROLFILE
         1        256 +DATA_01/JEYSERVER/CONTROLFILE/Current.256.691068219                         CONTROLFILE
         1        257 +DATA_01/JEYSERVER/CONTROLFILE/Current.257.691068219                         CONTROLFILE
         1        257 +DATA_01/JEYSERVER/control02.ctl                                             CONTROLFILE
         1        258 +DATA_01/JEYSERVER/CONTROLFILE/Current.258.691068219                         CONTROLFILE
         1        258 +DATA_01/JEYSERVER/control03.ctl                                             CONTROLFILE
         1        259 +DATA_01/JEYSERVER/system01.dbf                                              DATAFILE
         1        259 +DATA_01/JEYSERVER/DATAFILE/SYSTEM.259.691068223                             DATAFILE
         1        260 +DATA_01/JEYSERVER/DATAFILE/UNDOTBS1.260.691068239                           DATAFILE
         1        260 +DATA_01/JEYSERVER/undotbs01.dbf                                             DATAFILE
         1        261 +DATA_01/JEYSERVER/DATAFILE/SYSAUX.261.691068243                             DATAFILE
         1        261 +DATA_01/JEYSERVER/sysaux01.dbf                                              DATAFILE
         1        262 +DATA_01/JEYSERVER/temp01.dbf                                                TEMPFILE
         1        262 +DATA_01/JEYSERVER/TEMPFILE/TEMP.262.691068249                               TEMPFILE
         1        263 +DATA_01/test_asm.dmp                                                        DUMPSET
         1        263 +DATA_01/JEYSERVER/DUMPSET/SIVASYS_EXPORT_TRANSPORTABLE_0_231115_1.263.752170035 DUMPSET

         1        264 +DATA_01/JEYSERVER/DATAFILE/TEST_ASM.264.752168641                           DATAFILE
         1        265 +DATA_01/TEST/CONTROLFILE/Current.265.759845783                             CONTROLFILE
         1        265 +DATA_01/TEST/control01.ctl                                                 CONTROLFILE
         1        266 +DATA_01/TEST/CONTROLFILE/Current.266.759845783                             CONTROLFILE
         1        266 +DATA_01/TEST/control02.ctl                                                 CONTROLFILE
         1        267 +DATA_01/TEST/CONTROLFILE/Current.267.759845783                             CONTROLFILE
         1        267 +DATA_01/TEST/control03.ctl                                                 CONTROLFILE
         1        268 +DATA_01/TEST/DATAFILE/SYSTEM.268.759845785                                 DATAFILE
         1        268 +DATA_01/TEST/system01.dbf                                                  DATAFILE
         1        269 +DATA_01/TEST/DATAFILE/SYSAUX.269.759845791                                 DATAFILE
         1        269 +DATA_01/TEST/sysaux01.dbf                                                  DATAFILE
         1        270 +DATA_01/TEST/DATAFILE/UNDOTBS1.270.759845795                               DATAFILE
         1        270 +DATA_01/TEST/undotbs01.dbf                                                 DATAFILE
         1        271 +DATA_01/TEST/TEMPFILE/TEMP.271.759845795                                   TEMPFILE
         1        271 +DATA_01/TEST/temp01.dbf                                                    TEMPFILE
         2        256 +DATA_02/JEYSERVER/XTRANSPORT/TEST_ASM.256.752170417                         XTRANSPORT
         4        256 +REDO_01/JEYSERVER/redo01.log                                                ONLINELOG
         4        256 +REDO_01/JEYSERVER/ONLINELOG/group_1.256.691068219                           ONLINELOG
         4        257 +REDO_01/JEYSERVER/redo03.log                                                ONLINELOG
         4        257 +REDO_01/JEYSERVER/ONLINELOG/group_3.257.691068221                           ONLINELOG
         4        258 +REDO_01/TEST/redo01.log                                                    ONLINELOG
         4        258 +REDO_01/TEST/ONLINELOG/group_1.258.759845783                               ONLINELOG
         4        259 +REDO_01/TEST/redo03.log                                                    ONLINELOG
         4        259 +REDO_01/TEST/ONLINELOG/group_3.259.759845783                               ONLINELOG
         5        256 +REDO_02/JEYSERVER/ONLINELOG/group_2.256.691068221                           ONLINELOG
         5        256 +REDO_02/JEYSERVER/redo02.log                                                ONLINELOG
         5        257 +REDO_02/JEYSERVER/redo04.log                                                ONLINELOG
         5        257 +REDO_02/JEYSERVER/ONLINELOG/group_4.257.691068221                           ONLINELOG
         5        258 +REDO_02/TEST/redo02.log                                                    ONLINELOG
         5        258 +REDO_02/TEST/ONLINELOG/group_2.258.759845783                               ONLINELOG
         5        259 +REDO_02/TEST/redo04.log                                                    ONLINELOG
         5        259 +REDO_02/TEST/ONLINELOG/group_4.259.759845785                               ONLINELOG

48 rows selected.
3.    Shut down the databases

4.    Dismount the diskgroups
SQL> alter diskgroup DATA_01 dismount;
SQL> alter diskgroup REDO_01 dismount;
SQL> alter diskgroup REDO_02 dismount;
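
To confirm the diskgroups are really dismounted before running amdu, a quick check from the ASM instance (a minimal example):

SQL> select name, state from v$asm_diskgroup;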

5.    Extract the files from diskgroups
amdu -dis "/dev/vx_wa/rdsk/hostnamedg/" -extract DATA_01.265
amdu -dis "/dev/vx_wa/rdsk/hostnamedg/" -extract DATA_01.266
amdu -dis "/dev/vx_wa/rdsk/hostnamedg/" -extract DATA_01.267
amdu -dis "/dev/vx_wa/rdsk/hostnamedg/" -extract DATA_01.268
amdu -dis "/dev/vx_wa/rdsk/hostnamedg/" -extract DATA_01.269
amdu -dis "/dev/vx_wa/rdsk/hostnamedg/" -extract DATA_01.270
amdu -dis "/dev/vx_wa/rdsk/hostnamedg/" -extract DATA_01.271
amdu -dis "/dev/vx_wa/rdsk/hostnamedg/" -extract REDO_01.258
amdu -dis "/dev/vx_wa/rdsk/hostnamedg/" -extract REDO_02.258
amdu -dis "/dev/vx_wa/rdsk/hostnamedg/" -extract REDO_01.259
amdu -dis "/dev/vx_wa/rdsk/hostnamedg/" -extract REDO_02.259

Each run creates a folder and extracts the file, along with a report.txt, into that folder.

drwxr-xr-x   2 oracle   orainst       96 Aug 22 16:29 amdu_2011_08_22_16_22_15
drwxr-xr-x   2 oracle   orainst       96 Aug 22 16:29 amdu_2011_08_22_16_22_28
drwxr-xr-x   2 oracle   orainst       96 Aug 22 16:29 amdu_2011_08_22_16_22_42
drwxr-xr-x   2 oracle   orainst       96 Aug 22 16:29 amdu_2011_08_22_16_22_55
drwxr-xr-x   2 oracle   orainst       96 Aug 22 16:29 amdu_2011_08_22_16_24_09
drwxr-xr-x   2 oracle   orainst       96 Aug 22 16:29 amdu_2011_08_22_16_25_21
drwxr-xr-x   2 oracle   orainst       96 Aug 22 16:29 amdu_2011_08_22_16_26_25
drwxr-xr-x   2 oracle   orainst       96 Aug 22 17:12 amdu_2011_08_22_16_58_18
drwxr-xr-x   2 oracle   orainst       96 Aug 22 17:12 amdu_2011_08_22_17_06_57
drwxr-xr-x   2 oracle   orainst       96 Aug 22 17:12 amdu_2011_08_22_17_10_31
drwxr-xr-x   2 oracle   orainst       96 Aug 22 17:13 amdu_2011_08_22_17_04_04


6.    Rename the extracted files to their relevant names

-rw-r--r--   1 oracle   orainst  524296192 Aug 22 16:28 temp01.dbf
-rw-r--r--   1 oracle   orainst  52429312 Aug 22 17:15 redo02.log
-rw-r--r--   1 oracle   orainst  52429312 Aug 22 17:15 redo03.log
-rw-r--r--   1 oracle   orainst  52429312 Aug 22 17:15 redo04.log
-rw-r--r--   1 oracle   orainst  209723392 Aug 22 17:46 undotbs01.dbf
-rw-r--r--   1 oracle   orainst  314580992 Aug 22 17:46 system01.dbf
-rw-r--r--   1 oracle   orainst  314580992 Aug 22 17:46 sysaux01.dbf
-rw-r--r--   1 oracle   orainst  52429312 Aug 22 17:47 redo01.log

7.    Start the instance with the control_files parameter pointing to the filesystem or an available diskgroup
*.control_files='/data/oracle/JEYSERVER/backup/export/control01.ctl','/data/oracle/JEYSERVER/backup/export/control02.ctl'

8.    Create the control file using the script below
CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS  NOARCHIVELOG
    MAXLOGFILES 64
    MAXLOGMEMBERS 5
    MAXDATAFILES 2056
    MAXINSTANCES 32
    MAXLOGHISTORY 1168
LOGFILE
  GROUP 1 '/data/oracle/JEYSERVER/backup/DB/redo01.log'  SIZE 50M,
  GROUP 2 '/data/oracle/JEYSERVER/backup/DB/redo02.log'  SIZE 50M,
  GROUP 3 '/data/oracle/JEYSERVER/backup/DB/redo03.log'  SIZE 50M,
  GROUP 4 '/data/oracle/JEYSERVER/backup/DB/redo04.log'  SIZE 50M
-- STANDBY LOGFILE
DATAFILE
  '/data/oracle/JEYSERVER/backup/DB/system01.dbf',
  '/data/oracle/JEYSERVER/backup/DB/sysaux01.dbf',
  '/data/oracle/JEYSERVER/backup/DB/undotbs01.dbf'
CHARACTER SET WE8ISO8859P1
;
--RECOVER DATABASE USING BACKUP CONTROLFILE
-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/data/oracle/JEYSERVER/backup/DB/temp01.dbf'
     SIZE 524288000  REUSE AUTOEXTEND OFF;
-- End of tempfile additions.
--




9.    Recover the database; if required, provide the redo log files/archive log files for the recovery
SQL>  recover database using backup controlfile;
ORA-00279: change 238825 generated at 08/22/2011 16:32:54 needed for thread 1
ORA-00289: suggestion : /data/oracle/TEST/archive1_13_759845780.dbf
ORA-00280: change 238825 for thread 1 is in sequence #13


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/data/oracle/JEYSERVER/backup/DB/redo01.log
Log applied.
Media recovery complete.
SQL> alter database open resetlogs;

Database altered.

SQL> select name from v$database;

NAME
---------
TEST

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/data/oracle/JEYSERVER/backup/DB/system01.dbf
/data/oracle/JEYSERVER/backup/DB/sysaux01.dbf
/data/oracle/JEYSERVER/backup/DB/undotbs01.dbf


ASM Metadata backup - 11G

We know that ASM is full of metadata. It is advisable to back up the metadata in case the ASM instance crashes or there is an issue with particular diskgroups. 11gR1 introduced the facility to back up and restore the metadata.

 md_backup and md_restore are asmcmd commands.

The following script will help you take a metadata backup:

dtm=`date +%d%m%y%H%M%S`
LOG_FILE=/tmp/asm_backup.${dtm}
# remove the previous backup file, if any
rm -f /app/oracle/TEST/backup/export/asm_dg.backup
HOSTNAME=`hostname`
# locate the oratab file
if [ -s /etc/oratab ] ; then
   ORATAB=/etc/oratab
elif [ -s /var/opt/oracle/oratab ] ; then
   ORATAB=/var/opt/oracle/oratab
elif [ -s /etc/opt/oracle/oratab ] ; then
   ORATAB=/etc/opt/oracle/oratab
else
   echo "ORATAB is missing" >> $LOG_FILE 2>&1
   cat $LOG_FILE
   exit 1
fi
# pick the +ASM entry from oratab, set the environment and run the backup
cat ${ORATAB} | awk -F\# '{print $1}' | grep -v '^$' | awk -F\: '{print $1" "$2" "$3}' | while read ORACLE_SID ORACLE_HOME BOOT_FLAG
do
  if [ "${ORACLE_SID}" = "+ASM" -a "${BOOT_FLAG}" = "Y" ] ; then
    export ORACLE_SID ORACLE_HOME BOOT_FLAG
    PATH=$ORACLE_HOME/bin:$PATH
    export PATH
    asmcmd lsdg
    asmcmd md_backup -b /app/oracle/TEST/backup/export/asm_dg.backup
  fi
done
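
To restore the diskgroup metadata from this backup, md_restore is used. A rough sketch only; the exact flags differ between 11gR1 and 11gR2, so check asmcmd help md_restore on your version first (RECOVER here is just an example diskgroup name):

ASMCMD> md_restore -t full -g RECOVER -b /app/oracle/TEST/backup/export/asm_dg.backup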

Thursday 11 August 2011

Rman backup using standby database

Some companies want to schedule the backup from the standby database to reduce the load on the production environment. In fact, a standby serves this purpose without any additional effort.

Backing up a standby database is similar to backing up production. To take a consistent backup we need to cancel managed recovery and keep the database in mount mode; otherwise the backup will be inconsistent.
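
For a physical standby running managed recovery, the cancel and resume would look like this (a minimal sketch; the standby stays in mount mode in between):

SQL> alter database recover managed standby database cancel;
-- take the RMAN backup here, then resume recovery:
SQL> alter database recover managed standby database disconnect from session;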

The RMAN user ID should have the SYSDBA privilege.

$ rman target rman/oracle@db1 catalog rmanUser/passwd@catalog
RMAN> backup database;
RMAN> backup archivelog all;
Note:

1. Both the primary database and the standby database should use the same recovery catalog.
2. The control file and SPFILE must be backed up on the primary database.
3. It is not advisable to run the backup from both the standby and the primary database.
    When you run DELETE OBSOLETE, you may get the below error if backups have been taken from both the primary and standby databases.

             RMAN-06207: WARNING: 1 objects could not be deleted for DISK channel(s) due
             RMAN-06208: to mismatched status. Use CROSSCHECK command to fix status
             RMAN-06210: List of Mismatched objects

4. If at all you would like to take backups from both the primary and the standby, use a different recovery catalog schema for each of these two targets. You still need to take the controlfile backup from the primary database.

Rac Database Backup Configuration - 11gr2

Configuring backup purely depends on the nodes where the backup appliance has been configured. Here are some of the common methods followed in most organisations.

1. Single or multiple backup/storage media configured on the first (one) node only, so the backup must run from the first node only.

    This is the most common approach. Depending on the number of tape devices configured, a number of channels can be allocated to parallelise the RMAN backup. (It is a myth that more channels will always improve the performance of an RMAN backup.)

The below example script will help us schedule a backup with two channels:

run {
    allocate channel t1 type 'sbt_tape'
     format 'df.%d.%f.dbf_t%t_s%s_p%p.rmn'
     maxpiecesize 5G
     parms 'ENV=(NSR_SERVER=tape_srv)';
   allocate channel t2 type 'sbt_tape'
     format 'df.%d.%f.dbf_t%t_s%s_p%p.rmn'
     maxpiecesize 5G
     parms 'ENV=(NSR_SERVER=tape_srv)';
  backup
     incremental level 0
     filesperset 10
     tag 'INC1_DATABASE'
     database;
 }

2. Backup/storage media configured on all nodes, with the backup running from all instances

2.1 Dedicate the channel to a particular instance irrespective of the load on the instance

CONFIGURE DEFAULT DEVICE TYPE TO SBT;
CONFIGURE DEVICE TYPE SBT Parallelism 3;
configure channel 1 device type sbt connect='rman_id/rman_pwd@racdb1';
configure channel 2 device type sbt connect='rman_id/rman_pwd@racdb2';
configure channel 3 device type sbt connect='rman_id/rman_pwd@racdb3';
run{
backup
     incremental level 0
     tag 'INC1_DATABASE'
     database;
 }


2.2 To run the backup based on load,
create a service with the preferred instances and connect the backup session through the service name (a srvctl sketch follows the configuration below). Oracle will automatically direct the channels to the instances with less load.

CONFIGURE DEFAULT DEVICE TYPE TO SBT;
CONFIGURE DEVICE TYPE SBT Parallelism 3;
configure channel 1 device type sbt connect='rman_id/rman_pwd@Service1';
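
The service itself can be created with srvctl, roughly as below (a sketch; Service1, racdb, and the instance names are placeholders for this example):

srvctl add service -d racdb -s Service1 -r racdb1,racdb2,racdb3
srvctl start service -d racdb -s Service1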

2.3 Non-deterministic connection - an 11g new feature.
    With this, Oracle automatically decides the instance where each channel needs to run. Just mentioning the parallelism is sufficient to avail this feature.

CONFIGURE DEFAULT DEVICE TYPE TO SBT;
CONFIGURE DEVICE TYPE SBT Parallelism 3;

3. Backup/storage media configured on the standby database

Some companies want to schedule the backup from the standby database to reduce the load on the production environment. In fact, a standby serves this purpose without any additional effort.

Backing up a standby database is exactly the same as backing up production. To take a consistent backup we need to cancel managed recovery and keep the database in mount mode; otherwise the backup will be inconsistent.

For further details, please refer to "Rman backup using standby database" above.

Wednesday 10 August 2011

Diskmon cannot handle more than 13 instances - Exadata

The diskmon process is one of the background processes in an Exadata environment and is part of Oracle Cluster Ready Services. It takes care of I/O fencing.

In versions 11.2.0.1/11.2.0.2, diskmon can handle up to 13 instances. Each instance has one diskmon (ora_dskm_<sid>) process, and Oracle can fork up to 13 diskmon slaves. This is a limitation in these versions: CRS startup will hang if the Exadata server has more than 13 instances. We can find the below messages in the alert logfiles of the instances that failed to start.

ORA-00600: internal error code, arguments: [2116], [900], [], [], [], [], [], [], [], [], [], [] appears in the alert log, and the diskmon logs show: dskm_slave_thrd_creat1: no more slave table slots left!

Oracle has provided patch 9790947 to support up to 62 instances, and later introduced another patch, 9842238, to completely remove this limitation. Exadata Patch Bundle 4 includes the latest patch.

Tuesday 9 August 2011

Incremental statistics for Partitioned tables - 11g/10g


Incremental statistics for Partitioned tables - 11g

This feature improves the performance of statistics collection in data warehouse databases. In an Exadata environment some of the big (partitioned) tables are 50 GB and above, and it is really a nightmare to collect statistics for those big tables.

Until 10g (< 10.2.0.3), if we wanted to collect stats for a partitioned table, Oracle scanned the full table and collected stats for all the partitions (including the partitions which were not modified).

With this new feature, Oracle will gather statistics on the new partition and update the global table statistics by scanning only those partitions that have been modified, not the entire table.

When the following criteria are met, the database updates global statistics incrementally by scanning only the partitions that have changed:

• The INCREMENTAL value for the partitioned table is TRUE.
• The PUBLISH value for the partitioned table is TRUE (the default).
• The user specifies AUTO_SAMPLE_SIZE for ESTIMATE_PERCENT and AUTO for GRANULARITY when gathering statistics on the table.

How to collect stats for a partitioned table

1. exec dbms_stats.set_table_prefs('TABLEOWNER','TABLENAME','INCREMENTAL','TRUE');
2. exec dbms_stats.gather_table_stats(ownname=>'TABLEOWNER', tabname=>'JRC', partname=>'MARCH_2010', granularity=>'AUTO');

Setting GRANULARITY to AUTO gathers subpartition, partition, or global statistics, depending on the partition type.
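
To confirm the preference really took effect, dbms_stats.get_prefs can be queried (a quick check, using the same placeholder owner/table names as above):

select dbms_stats.get_prefs('INCREMENTAL','TABLEOWNER','TABLENAME') from dual;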

Consequences of Incremental stats

  • If a table uses composite partitioning, then the database only gathers statistics for modified subpartitions. The database does not gather statistics at the subpartition level for unmodified subpartitions.

  • If a table uses incremental statistics, and if this table has a locally partitioned index, then the database gathers index statistics at the global level and for modified (not unmodified) index partitions. The database does not generate global index statistics from the partition-level index statistics. Rather, the database gathers global index statistics by performing a full index scan.


  • CAUTION:

    * In 11gR1/R2, if we use a value other than AUTO_SAMPLE_SIZE for ESTIMATE_PERCENT, incremental stats will ignore the sampling and do a full scan due to Bug 11710056, so either use AUTO instead of sampling or disable incremental stats.

    * Incremental stats will be slower if the dictionary stats are missing, so it is advisable to collect dictionary stats.

    * Incremental stats will consume more space in SYSAUX when enabled. The biggest occupants are the following objects:
    - Table SYS.WRI$_OPTSTAT_SYNOPSIS$
    - Index SYS.I_WRI$_OPTSTAT_SYNOPSIS
     Incremental statistics stores a synopsis (about 10 KB) per partition of each table in the SYSAUX tablespace. A synopsis is nothing but statistical metadata for that partition and the columns in the partition. Global statistics are collected by aggregating the synopses from each partition, which eliminates the need for a full table scan.
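
The space used by these synopsis objects can be checked through the SYSAUX occupants view (a simple check; SM/OPTSTAT is the optimizer statistics occupant):

select occupant_name, schema_name, space_usage_kbytes
from v$sysaux_occupants
where occupant_name = 'SM/OPTSTAT';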



Incremental statistics for Partitioned tables - 10g

For Oracle Database 10g Release 2 (10.2.0.5) there is a solution that simulates the same behavior.

The 10g solution is a new value, 'APPROX_GLOBAL AND PARTITION', for the GRANULARITY parameter of the GATHER_TABLE_STATS procedures. It behaves the same as the incremental maintenance feature, except that Oracle does not update the number of distinct values for non-partitioning columns and the number of distinct keys of the index at the global level.

For partitioning columns Oracle updates the number of distinct values as the sum of the number of distinct values at the partition level. It also sets the number of distinct values of columns of unique indexes to the number of rows of the table. This option is best suited for cases where the number of distinct values of non-partitioning columns at the global level becomes stale less often. It may be possible to collect global statistics less frequently than the default (when the table changes 10%), since the APPROX_GLOBAL option maintains most of the global statistics accurately.

This new value exists in 10.2.0.5, and for 10.2.0.4 a patch, 6526370, is available to enable this feature.
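
A gather call using this value would look roughly like the 11g example above (same placeholder owner/table/partition names):

exec dbms_stats.gather_table_stats(ownname=>'TABLEOWNER', tabname=>'TABLENAME', partname=>'MARCH_2010', granularity=>'APPROX_GLOBAL AND PARTITION');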

Friday 5 August 2011

What to do if listener.log is removed by mistake while the listener is still running

Until 11g there is no option to rotate the alert/listener log (text format). ADRCI has a purge option to housekeep the old listener/alert xml files; in fact the *.log (text format) files are there for downward compatibility.

DBAs housekeep the listener.log using various methods, like:
   1. cat /dev/null > listener.log (after taking a backup)
   2. split the listener.log file and remove the old contents using a text editor like vi
   3. temporarily turn logging off, take a backup of the file, and cat /dev/null the listener.log

Sometimes we remove the listener.log by mistake, and Oracle stops writing into listener.log even if we create a file with the same name. This is because the listener process has the file descriptor open, and it keeps it open even after we remove the file. Because of that, you may notice that the filesystem where the listener log resides does not release the space even after the removal of the file.
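
We can verify that the listener still holds the removed file open using lsof (substitute the tnslsnr process id; on Linux the open-but-removed file is typically flagged "(deleted)" in the output):

lsof -p <listener_pid> | grep listener.log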

There is a workaround to fix this without stopping and starting the listener: just toggle log_status off and on.

LSNRCTL> set current_listener LISTENER_ABC1
Current Listener is LISTENER_ABC1
LSNRCTL> set log_status off
Connecting to (ADDRESS=(PROTOCOL=IPC)(KEY=ABC1))
LISTENER_ABC1 parameter "log_status" set to OFF
The command completed successfully
LSNRCTL> set log_status on
Connecting to (ADDRESS=(PROTOCOL=IPC)(KEY=ABC1))
LISTENER_ABC1 parameter "log_status" set to ON
The command completed successfully

Now you can see that listener.log is updated by the listener process again.





     

Tuesday 2 August 2011


Fast Mirror Resync and Fast Rebalance - ASM

Fast Mirror Resync and Fast Rebalance are two new features available in 11g.

Fast Mirror Resync:

This feature is useful when:
          1. The disk error is transient and can be recovered easily within a short time (how short is "short"? I would like to discuss this in my next post).
          2. A disk is under temporary maintenance.
ASM fast resync keeps track of pending changes to extents on an OFFLINE disk during an outage. The extents are resynced when the disk is brought back online or replaced.

By default, ASM drops a disk shortly after it is taken offline. You can set the DISK_REPAIR_TIME attribute to prevent this operation by specifying a time interval to repair the disk and bring it back online.

The default value for DISK_REPAIR_TIME is 3.6 hours.


1) This feature requires that the redundancy level of the diskgroup be set to NORMAL or HIGH.

2) To use this feature, the disk group compatibility attributes must be set to 11.1 or higher:
   compatible.asm & compatible.rdbms = 11.1.0.0.0 or higher

3) ALTER DISKGROUP dgroupA SET ATTRIBUTE 'DISK_REPAIR_TIME'='3H';
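
While a disk is offline, the remaining repair window can be watched from v$asm_disk (a simple check; REPAIR_TIMER shows the seconds left before ASM drops the disk):

select name, mode_status, repair_timer from v$asm_disk;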

Fast Rebalance

Sometimes DBAs do not want to run the expensive rebalance operation during production hours after (hot-)replacing a disk; they may have downtime during the weekend to do the rebalance. Of course, Fast Rebalance is the best option for this.

In 11g, you can use the STARTUP RESTRICT command to control access to an ASM instance while you perform maintenance. This startup option is used to implement the ASM Fast Rebalance feature. Rebalance operations that occur while a disk group is in RESTRICTED mode eliminate the lock and unlock extent map messaging between ASM instances in Oracle RAC environments.

When an ASM instance is started with the 'STARTUP RESTRICT' option and is active in this mode, all of the disk groups are mounted in RESTRICTED mode, which prevents databases from connecting to the ASM instance. In addition, the restricted clause of the ALTER SYSTEM statement is disabled for the ASM instance.
Follow the below steps to do the fast rebalance:

1. startup restrict;

2. alter diskgroup dg1 mount restricted;
   (We cannot mount the diskgroup in normal mode; we would get ORA-15251: only restricted mount is allowed in the ASM instance.)

3. alter diskgroup dg1 rebalance;
   (No database can open against ASM while it is in restricted mode; we may get the below error:
   ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
   ORA-01110: data file 5: '+DG1/11g/datafile/exy.256.627345731')

4. shutdown immediate
   (We cannot enable restricted mode in ASM as we do in a normal RDBMS instance; we may get the below error:
   ORA-15000: command disallowed by current instance type)

5. startup

Alternatively, we can dismount a particular diskgroup and mount it in restricted mode while the ASM instance is running in non-restricted mode. This helps when we have multiple diskgroups, each used by a different database, and the other databases cannot afford downtime while the fast rebalance is happening. Still, while a diskgroup is mounted in restricted mode, no ASM client can use that particular diskgroup.

SQL> ALTER DISKGROUP data DISMOUNT;

Diskgroup altered.

SQL> ALTER DISKGROUP data MOUNT RESTRICTED;

Diskgroup altered.

SQL> ALTER DISKGROUP data DISMOUNT;

Diskgroup altered.

SQL> ALTER DISKGROUP data MOUNT;

Diskgroup altered.

How to optimize the ASM rebalance operation when multiple disk maintenance activities need to run at the same time, and how to monitor the status of the rebalance operation

Sometimes DBAs need to run multiple diskgroup management commands, like adding a disk and dropping a disk. We used to run the commands one by one; since a rebalance process is forked for each disk maintenance activity, in that case two rebalance processes run. To avoid that, it is recommended to run both commands in a single statement, which invokes a single rebalance operation:

alter diskgroup diskgroup_name add disk 'o/xxx/xxxx' drop disk 'o/xxx/xxx';
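
A rebalance power can be attached to the same statement so that the single rebalance runs faster (a sketch with placeholder disk names; note that ADD DISK takes a path while DROP DISK takes the ASM disk name):

alter diskgroup diskgroup_name add disk '/dev/rdsk/disk05' drop disk DATA_0002 rebalance power 8;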

We can monitor the status of ASM operations using the v$asm_operation view:


SQL> desc v$asm_operation
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP_NUMBER                                       NUMBER
 OPERATION                                          CHAR(5)
 STATE                                              VARCHAR2(4)
 POWER                                              NUMBER
 ACTUAL                                             NUMBER
 SOFAR                                              NUMBER
 EST_WORK                                           NUMBER
 EST_RATE                                           NUMBER
 EST_MINUTES                                        NUMBER
 ERROR_CODE                                         VARCHAR2(44)

State of the operation:
WAIT - No operations running for the group
RUN - Operation running for the group
REAP - Operation is being run down
HALT - Operation halted by admin
ERRORS - Operation halted by errors

select state, actual, sofar, est_minutes, operation from v$asm_operation;

Monday 1 August 2011

Why is my query not availing the offloading feature?

Here are some of the areas we need to check before going to Oracle to see if there is a known bug:

1. Verify that the STORAGE clause is included in your query plan.

2. Verify whether your function is offloadable using v$sqlfn_metadata:

SQL> desc v$sqlfn_metadata
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 FUNC_ID                                            NUMBER
 NAME                                               VARCHAR2(30)
 MINARGS                                            NUMBER
 MAXARGS                                            NUMBER
 DATATYPE                                           VARCHAR2(8)
 VERSION                                            VARCHAR2(12)
 ANALYTIC                                           VARCHAR2(3)
 AGGREGATE                                          VARCHAR2(3)
 OFFLOADABLE                                        VARCHAR2(3)
 DISP_TYPE                                          VARCHAR2(13)
 USAGE                                              VARCHAR2(30)
 DESCR                                              VARCHAR2(4000)
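
For example, to check a couple of common functions (any function name can be substituted):

select name, offloadable from v$sqlfn_metadata where name in ('SUBSTR','TO_NUMBER');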

3. Verify that the below init.ora parameters are set as follows:
            cell_offload_processing = TRUE
            _kcfis_storageidx_disabled = FALSE  ( storage index is enabled )
            _kcfis_cell_passthru_enabled = FALSE  ( smart IO filtering is performed on the cell )



4. Verify that 'cell.smart_scan_capable'='TRUE' is set for all the diskgroups.
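
This attribute can be checked from the ASM instance (a quick check against v$asm_attribute):

select g.name diskgroup, a.value
from v$asm_attribute a, v$asm_diskgroup g
where a.group_number = g.group_number
and a.name = 'cell.smart_scan_capable';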
      
            
