Tuesday, 16 April 2013

SLOW HISTORICAL STATISTICS PURGING AND SYSAUX GROWTH

Summary:
In some of our databases we noticed that the SYSAUX tablespace keeps growing, and the optimizer statistics (STATS) history objects are the major contributors to the space consumption. Upon investigation, we found that the stats history is not being purged from the SYSAUX tablespace even though it has crossed the retention period.
Symptoms:
1. Run the below script to check the SYSAUX usage:
 @?/rdbms/admin/awrinfo.sql
   - SM/OPTSTAT is the top space consumer in the report produced by the above script.
2. The below objects might be the major space-consuming segments in the SYSAUX tablespace:
   select segment_name, bytes/1024/1024 MB from dba_segments where tablespace_name='SYSAUX' order by 2;
·         WRI$_OPTSTAT_HISTHEAD_HISTORY
·         WRI$_OPTSTAT_HISTGRM_HISTORY
·         I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST
·         I_WRI$_OPTSTAT_HH_ST
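A quicker alternative to the full awrinfo report is the standard v$sysaux_occupants view (a minimal check):

   -- space used by each SYSAUX occupant; SM/OPTSTAT is the optimizer stats history
   select occupant_name, space_usage_kbytes
   from v$sysaux_occupants
   order by space_usage_kbytes desc;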


Why is the purge not running?
By default, MMON performs the automatic purge that removes all history older than the following:

* current time - statistics history retention (by default 31 days)
* time of the most recent analyze in the system - 1

MMON performs this purge of the optimizer stats history automatically, but it has an internal limit of 5 minutes to perform this job.
If the operation takes more than 5 minutes, it is aborted and the stats are not purged.
No trace or alert message is reported.
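Before blaming the purge, it is worth confirming the configured retention and how far back the history actually extends (a minimal check using the documented DBMS_STATS functions):

   -- configured stats history retention (default 31 days)
   select dbms_stats.get_stats_history_retention from dual;
   -- oldest point in time for which stats history is still available;
   -- if this is much older than the retention, purging is falling behind
   select dbms_stats.get_stats_history_availability from dual;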
Versions Affected: 10g & 11g
Patch for this BUG:
REF: SYSAUX Grows Because Optimizer Stats History is Not Purged [ID 1055547.1]
For 11g, patch 14373728 is available (ensure you run the POST patch installation script after applying the patch).
After the post-patch script runs, a P_OLD partition is created and all the old records are moved into it; from then on, one partition (with a SYS_-prefixed name) is created per day for:
·           WRI$_OPTSTAT_HISTHEAD_HISTORY table
·           SYS_IL0003716802C00022$$ - partitioned index
·           I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST - global index
·           I_WRI$_OPTSTAT_HH_ST - global index
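To see the partitions the patch maintains, you can list them from the dictionary (a minimal sketch):

   -- daily SYS_* partitions plus P_OLD on the history table
   select partition_name, high_value
   from dba_tab_partitions
   where table_name = 'WRI$_OPTSTAT_HISTHEAD_HISTORY'
   order by partition_position;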

The P_OLD partition will not be dropped automatically even after the retention period is over, so we may need to drop this partition manually once the retention period has passed.
If we want to drop the partition immediately, back it up first, then drop it and rebuild the global indexes.
Refer to "How to transfer statistics from one database to another [ID 333742.1]" for taking a backup of the statistics and restoring from that backup.
For 10g, this patch is not available; we may need the below workaround.
Workaround to housekeep the old records:
I. Check the size of the WRI$_OPTSTAT_HISTHEAD_HISTORY table:

select segment_name,bytes/1024/1024 from dba_segments where segment_name='WRI$_OPTSTAT_HISTHEAD_HISTORY';

II. If the size of the WRI$_OPTSTAT_HISTHEAD_HISTORY table is in MBs, a direct purge is quick enough:

     exec dbms_stats.purge_stats(sysdate - <retention_period>)
     for example: exec dbms_stats.purge_stats(sysdate-31)
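If a single call still runs long, a common variation (a hedged sketch, assuming the history reaches back roughly 90 days) is to purge in daily steps from the oldest history forward, so each call deletes only a small slice:

   begin
     -- walk from the oldest day towards the 31-day retention window
     for i in reverse 31 .. 90 loop
       dbms_stats.purge_stats(sysdate - i);
     end loop;
   end;
   /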


III. If the size of the WRI$_OPTSTAT_HISTHEAD_HISTORY table is in GBs:

  If the patch is applied and the partitions are created, wait for 31 days (or your retention period), then simply drop the partition P_OLD:

alter table sys.WRI$_OPTSTAT_HISTHEAD_HISTORY drop partition P_OLD;
alter index I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST rebuild;
alter index I_WRI$_OPTSTAT_HH_ST rebuild;
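Alternatively (a hedged variant), the index maintenance can be carried inline with the drop, so the global indexes never go UNUSABLE, at the cost of a slower drop:

alter table sys.WRI$_OPTSTAT_HISTHEAD_HISTORY drop partition P_OLD update global indexes;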

Ensure all the below indexes are usable, rebuilding any that are not:

select INDEX_NAME,PARTITION_NAME,STATUS from dba_ind_partitions where index_name in ( select index_name from dba_part_indexes where table_name='WRI$_OPTSTAT_HISTHEAD_HISTORY');

select INDEX_NAME,INDEX_TYPE,PARTITIONED,status from dba_indexes where table_name='WRI$_OPTSTAT_HISTHEAD_HISTORY';


   If the patch is not applied, or you want to housekeep the stats records immediately:
 dbms_stats.purge_stats(<date>) will take ages to delete the old records, so we could try dbms_stats.purge_stats(dbms_stats.purge_all), which is equivalent to truncating the stats tables.

 It will be much faster than purging by date. PURGE_ALL is available in 11.2.0.3; for 11.2.0.2 we need to apply patch 10279045 to get this option.
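A minimal usage sketch (warning: this removes ALL statistics history, so restoring older statistics via the DBMS_STATS restore procedures is no longer possible afterwards):

   -- truncate-style purge of the entire optimizer stats history
   exec dbms_stats.purge_stats(dbms_stats.purge_all)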





Friday, 12 April 2013

How to drop another schema's database link



There are multiple methods to create or drop another schema's DB link.

1) Using the 'connect through' (proxy user) method


SQL> alter user <targetuser> grant connect through ops$oracle;


User altered.


SQL> grant create database link to <targetuser>;


Grant succeeded.


SQL> conn /
Connected.
SQL> conn [<targetuser>]
Connected.

Now create or drop the database link as <targetuser>:
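For example, with a hypothetical link name SCOTT_LINK:

SQL> drop database link SCOTT_LINK;

Database link dropped.

Finally, revoke the proxy access: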


SQL> alter user <targetuser> revoke connect through ops$oracle;


User altered.


SQL> revoke create database link from <targetuser>;




2) Using a procedure


We cannot drop another schema's database link using a schema qualifier, because dots are allowed in database link names: Oracle simply looks for a db link with the given name under the current schema.
>drop database link perf_mon.sts;
drop database link perf_mon.sts
                   *
ERROR at line 1:
ORA-02024: database link not found

>create database link perf_mon.sts;
Database link created.
So some DBAs either reset the password to log on, or log on from the same schema, to drop such db links.
I would like to share a small script that drops database links from SYSDBA without resetting the schema password or logging into the schema.
This is useful during our UAT refreshes to drop production links sitting under different schemas.

================drop_schema_dblink.sh====================================
#!/bin/sh
# usage: drop_schema_dblink.sh <SCHEMA> <DB_LINK_NAME>   (both in UPPERCASE)
username=$1
db_link_name=$2

sqlplus /nolog <<!
spool ${username}_${db_link_name}_drop_dblink.log
conn / as sysdba
prompt " DB_LINK_STAT  Before Drop"
select * from dba_db_links where owner='${username}' and db_link='${db_link_name}';
-- The procedure is owned by the target schema, so (with definer's rights)
-- the unqualified link name resolves in that schema when we execute it.
create or replace procedure ${username}.drop_db_link as
begin
  execute immediate 'drop database link ${db_link_name}';
end;
/
execute ${username}.drop_db_link;
drop procedure ${username}.drop_db_link;
prompt " DB_LINK_STAT  After Drop"
select * from dba_db_links where owner='${username}' and db_link='${db_link_name}';
spool off
exit
!
==========================================================
This can also be achieved using other options like a scheduler job or dbms_job; I feel the procedure approach is quite simple and straightforward.
==========

SQL> conn perf_mon/perf_mon
Connected.
SQL> create database link remote ;
Database link created.
SQL> conn / as sysdba
Connected.
SQL> drop database link perf_mon.remote;
drop database link perf_mon.remote
                   *
ERROR at line 1:
ORA-02024: database link not found

$ ./drop_schema_dblink.sh PERF_MON REMOTE
SQL*Plus: Release 11.2.0.2.0 Production on Mon Mar 4 12:04:38 2013
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
SQL> SQL> Connected.
SQL> " DB_LINK_STAT     Before Drop"
SQL>
OWNER
------------------------------
DB_LINK
--------------------------------------------------------------------------------
USERNAME
------------------------------
HOST
--------------------------------------------------------------------------------
CREATED
---------
PERF_MON
REMOTE

OWNER
------------------------------
DB_LINK
--------------------------------------------------------------------------------
USERNAME
------------------------------
HOST
--------------------------------------------------------------------------------
CREATED
---------
04-MAR-13

SQL> SQL>   2    3    4    5
Procedure created.
SQL>
PL/SQL procedure successfully completed.
SQL>
Procedure dropped.
SQL> SQL> " DB_LINK_STAT        After Drop"
SQL> SQL>
no rows selected
SQL> SQL> SQL>
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning option

Unix utility to grep a search string with a few lines before and after the match

grep on OEL/Linux has options to search for a string and print a few lines before and after the match.
This helps when searching for an ERROR in the alert.log or a trace file, where the timestamp is recorded one line before the actual error.
Unfortunately, this option is not available on some platforms like Solaris and AIX (of course, there are additional packages that provide an enhanced grep).
The following small wrapper script simulates the same grep option available on Linux.

Script:
#!/bin/sh
# usage: grep_add.sh <no_rows_b4> <no_rows_a4> <search_string> <file_name>
b4=$1
a4=$2
search_string=$3
file_name=$4
# For each matching line number, print the lines from (match - b4) to (match + a4)
grep -n "$search_string" "$file_name" | cut -d':' -f1 | \
  xargs -n1 -I % awk "NR<=%+$a4 && NR>=%-$b4" "$file_name"




==========================================================
Usage:
grep_add.sh 1 0 force_match coe_xfr_sql_profile_56c1s0454r13x_1115266317.sql
replace     => TRUE,
force_match => FALSE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
grep_add.sh  2 2 force_match coe_xfr_sql_profile_56c1s0454r13x_1115266317.sql
validate    => TRUE,
replace     => TRUE,
force_match => FALSE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
DBMS_LOB.FREETEMPORARY(sql_txt);
END;
=======================================================

Linux Option:
On Linux, this capability comes with the existing grep command; we can use the -B and -A options to print additional lines around the match:
$ grep -A 3 FIXED awrinfo.txt
FIXED      39,506.7   44.2        3,272      306.7     2,147.2    47% : 53%
EVENTS     32,080.4   35.9        2,657      249.1     1,743.6    39% : 61%
ASH        11,324.3   12.7          938       87.9       615.5    89% : 11%
SPACE       3,275.4    3.7          271       25.4       178.0    68% : 32%
$ grep -B 2 FIXED awrinfo.txt
COMPONENT        MB  % AWR  KB_PER_SNAP MB_PER_DAY MB_PER_WEEK TABLE% : INDEX%
--------- --------- ------ ------------ ---------- ----------- ----------------
FIXED      39,506.7   44.2        3,272      306.7     2,147.2    47% : 53%
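GNU grep also offers -C <n> for symmetric context, printing n lines on each side of the match:

$ grep -C 2 FIXED awrinfo.txt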

Oracle Pluggable Database - 12c

Schema Consolidation/Database Consolidation & Non-Production Fast Cloning:

Schema Consolidation/Database Consolidation:

In the database farm, as a part of cost control, organizations are looking to consolidate multiple single-instance databases onto a single server, or to build one huge database on a single server to support multiple applications.

Non-Production Fast Cloning:

It is a big challenge for DBAs to build a test environment when the database is very big, the ETA for setup is very short, and storage capacity has to be arranged.
We use several options depending on feasibility, like RMAN active duplicate database, standby setup, BCV …… and finally Delphix!


Are these the most cost-effective methods? Of course, we said yes, until we heard the Oracle OpenWorld 2012 news. Oracle had thought about the same challenges and introduced some interesting new features in the upcoming 12c.


Oracle Pluggable Database & Fast Cloning

At OpenWorld 2012, Oracle revealed the following interesting features in Oracle 12c:

·         Pluggable Database.

·         Fast cloning


Oracle Pluggable Database

This feature allows multiple databases to run under one instance, or container; this is called multi-tenancy.

Separated database metadata and user database metadata

The new database design splits the database into two physical entities. One is the root or container database (CDB), which contains the metadata required to run the database itself. The second entity is the user's database (PDB, pluggable database) together with its metadata; with this, Oracle's metadata has been separated from the users' metadata.

We are all looking to run multiple databases on a single server, or to build a huge database that supports multiple applications. Both can be achieved with this new feature.

Key Points about this feature

  •  One container can have 250 user databases.
  •  A new admin role is introduced, the CDB Administrator, to manage the CDB database.
  •  The only way to connect to a PDB is through a service.
  •  A non-CDB database can be converted into a PDB inside a CDB quickly.
  •  With this we can easily (see the sketch after this list):
     - Migrate databases to a new platform
     - Migrate databases to new hardware
     - Migrate databases to new DB releases
     - Move databases to different systems
     - Move databases to a different CDB
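A minimal sketch (MYPDB is a hypothetical PDB name): administrators with the right privileges can switch containers within a session, while applications simply connect to the PDB's service:

-- switch the current session into a PDB (12c syntax)
alter session set container = MYPDB;

-- applications connect through the PDB's service, e.g.
-- sqlplus app_user/password@//dbhost:1521/mypdb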
Fast Cloning:

When we have a pluggable database set up in a container, Oracle says that it takes only a few minutes to clone a PDB within the same CDB or into another CDB.

Just run the below command after some preparatory work, and it creates a clone copy in a few seconds:

create pluggable database C01P02P from C01P01P file_name_convert = ('/c01p01p', '/c01p02p');
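The preparatory work referred to above (a hedged sketch; in 12c Release 1 the source PDB must be opened read-only before it can be cloned) looks like this:

-- quiesce the source PDB into read-only mode
alter pluggable database C01P01P close immediate;
alter pluggable database C01P01P open read only;

create pluggable database C01P02P from C01P01P
  file_name_convert = ('/c01p01p', '/c01p02p');

-- open the freshly created clone
alter pluggable database C01P02P open;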
Oracle uses vFabric Data Director technology for provisioning, backup & cloning. Need to understand the fundamentals behind this....!
SPARC SuperCluster, a general-purpose engineered system recently unveiled by Oracle Corporation, is a combination of Exadata Storage Server and Exalogic application server technology.

You may want to explore further on the Internet, as there are lots of documents available on this, like datasheets, comparisons with Exadata, etc.


Oracle SPARC SUPER CLUSTER

Oracle Unveiled the World's First General Purpose Engineered System

It is designed, tested and delivered by Oracle.

The SPARC SuperCluster T4-4 utilizes high-performance technologies from Oracle Exadata Storage Servers and Oracle Exalogic Elastic Cloud, combined with SPARC T4-4 servers, the ZFS Storage Appliance, InfiniBand technology, and Oracle Solaris 11. With the addition of the SPARC SuperCluster, Oracle continues to set the standard for engineered systems: maximizing customer value with leading performance in a complete and tested package.

Interesting SuperCluster Configuration

I. SPARC T4-4 Compute Node (Full Rack: 4 nodes)

Each SPARC T4-4 compute node configured with:

  • 4 x eight-core SPARC T4 Processors (3.0GHz)
  • 1 TB Memory
  • 6 x 600 GB 10,000 RPM SAS-2 Disks
  • 2 x 300 GB solid state disks
  • 4 x dual-port InfiniBand QDR
  • 4 x dual-port 10 Gb Ethernet

II. Exadata Storage Server X2-2 (Full Rack: 6 nodes)

Each Exadata Storage Server is configured with:
  •      Either 12 x 600 GB 15,000 RPM High Performance SAS disks or 12 x 3TB 7,200 RPM High Capacity SAS disks
  •      12 CPU cores for SQL processing
  •      384 GB Exadata Smart Flash Cache

III. Sun ZFS Storage 7320 Appliance (Full Rack: 1)

7320 Dual Controller, each with:
  • 2 x four-core 2.4GHz Intel® Xeon® Processors
  • 24 GB Memory
  • 1 x dual-port InfiniBand HCA
  • 2 x 500 GB SATA HDD
  • 4 x 512 GB read optimized SSD

7320 Disk Shelf with the following:
  • 20 x 3 TB using high capacity SAS-2 7,200 RPM disks
  • 4 x 73 GB write-optimized SSDs

IV. Sun Datacenter InfiniBand Switch 36 (Full Rack: 3)

V.Additional Hardware Components
Additional Hardware Components Included:
  • Ethernet management switch
  • 42U rack packaging
  • 2 x Redundant Power Distributions Units (PDUs)
  • InfiniBand cables
  • 10G cables
Spares Included:
  • 1 x 600 GB High Performance SAS disk or 1 x 3 TB High Capacity SAS disk
  • 1 x 3 TB SAS disk
  • 1 x 96 GB Exadata Smart Flash Cache card
