Thursday, 19 April 2018

SNAPSHOT DATABASES in EXADATA - Quick Clone & Storage Optimization



 
Clone multiple snapshot databases on the Exadata platform quickly, with very little additional disk space.
 
 
Traditional Method:

To create snapshot databases of a production system, the traditional method is to create a test master and snapshot databases on a non-Exadata system, such as ZFS. In certain cases, these databases are a full copy that consumes as much storage as its source.

 
  • ZFS storage is limited, since we mostly use it for backups and keep it tightly provisioned.
  • If the clones are a full copy of the production database, this is expensive in terms of the amount of storage consumed and the time it takes to create the clones. Imagine creating ten clones of a
    multi-terabyte database and it is easy to see why this approach does not scale.
  • Another drawback to this approach is that Oracle Exadata Storage Server Software features such as Smart Scan, Smart Logging, and Smart Flash are not available.

Exadata Snapshots:
To solve these problems, Oracle has come out with a feature called Exadata snapshots.
  • Exadata snapshots are ideal for creating space-efficient read-only or read-write clones of an Oracle database that you can use for development, testing, or other non-production purposes, especially when multiple clones are required, because of the disk space and time savings.
 
 
 
  • An Exadata snapshot is based on a test master, which is a full clone of the source database. The test master is the only full copy of the source database. From a single test master we can create multiple Exadata snapshots with minimal additional storage and minimal effort. Each Exadata snapshot uses a small fraction of the disk space required for the test master and can be created or dropped in seconds. Each Exadata snapshot is a logical copy of the test master.
 
High Level Steps to create snapshot databases in Exadata
 
1. Create Sparse Grid Disks in the storage server
2. Create ASM Sparse DG.
3. Setup Test Master (either a full clone on a DG with ASM ACL enabled, or an existing full clone or standby database converted to a test master)
4. Create multiple Snapshot Databases (each can be a PDB, a full CDB, or a non-CDB)
 

Picture : Exadata Snapshot Databases in a Non-Multitenant Environment
 
Prerequisites for Exadata Snapshots:
 
  • Grid disks must be sparse grid disks (specify a virtual size when creating them).
  • The ASM disk group must be created on sparse grid disks (a sparse disk group).
 
  • Storage nodes must be X3 or later
  • Exadata Storage Server software 12.1.2.1.0 or later for Exadata Storage and Database Servers
  • Oracle Grid Infrastructure & Database 12.1.0.2.0 BP5 or later
  • The data files for the snapshot database and the parent database must be on the same ASM cluster.
  • The db_block_size must be at least 4K and be a multiple of 4K.
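As a quick sanity check, the block-size prerequisite can be expressed as follows (a sketch; the helper name is ours, not Oracle's):

```python
# Sketch: check the db_block_size prerequisite stated above --
# at least 4K (4096 bytes) and a multiple of 4K.
def block_size_ok(db_block_size: int) -> bool:
    return db_block_size >= 4096 and db_block_size % 4096 == 0

print(block_size_ok(8192))   # True  (8K blocks qualify)
print(block_size_ok(2048))   # False (below the 4K minimum)
```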
 
 
Detailed steps to create Snapshot databases
1.   Sparse Grid Disks:
 
A sparse ASM disk group is composed of sparse grid disks. A sparse grid disk has a virtual size attribute as well as a physical size. The maximum allowed physical size of all grid disks on a cell disk is 4 TB, and the maximum allowed virtual size is 100 TB. We need to create the sparse grid disks before creating a sparse ASM disk group.
 
For example, to create a 108G grid disk with a 1000G virtual size:
create griddisk all harddisk prefix=DATA_SPAR, size=108G, virtualsize=1000G;
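The sizing limits above (4 TB physical per cell disk, 100 TB virtual) can be sketched as a simple check; the helper below is illustrative only, not an Oracle utility:

```python
# Sketch: validate sparse grid disk sizing against the limits stated above.
def sparse_griddisk_size_ok(physical_gb: float, virtual_gb: float) -> bool:
    MAX_PHYSICAL_GB = 4 * 1024    # 4 TB physical limit on a cell disk
    MAX_VIRTUAL_GB = 100 * 1024   # 100 TB virtual limit
    if physical_gb <= 0 or virtual_gb < physical_gb:
        return False              # virtual size must be at least the physical size
    return physical_gb <= MAX_PHYSICAL_GB and virtual_gb <= MAX_VIRTUAL_GB

# The 108G grid disk with a 1000G virtual size from the command above:
print(sparse_griddisk_size_ok(108, 1000))   # True
```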
 
2.ASM Sparse Disk Groups:
Exadata snapshots utilize Oracle ASM sparse disk groups: sparse data files can be created only in an Oracle ASM sparse disk group.

For example, to create an ASM sparse disk group:
 
 
create diskgroup DATA_SPARSE high redundancy disk 'o/*/DATA_SP_*' attribute
'compatible.asm'='12.1.0.2',
'compatible.rdbms'='12.1.0.2',
'au_size'='4M',
'cell.smart_scan_capable'='true',
'cell.sparse_dg'='allsparse',
'appliance.mode' = 'TRUE';
 
  • compatible.asm must be set to 12.1.0.2 or higher.
  • compatible.rdbms must be set to 12.1.0.2 or higher.
  • cell.sparse_dg must be set to "allsparse". This identifies the disk group to ASM as being made up of sparse grid disks.
  • appliance.mode must be set to true.
 
3. Setting Up the Test Master:
 
Creating an Exadata snapshot from the test master is as simple as recording the parent file name in the child file header, an operation that completes in seconds and requires minimal disk space.
Additional disk space is consumed only when the user of the snapshot begins to change data. Only new data is written to data blocks that are allocated to the snapshot on write. All requests for data that has not changed are serviced by the data blocks of the test master.
Multiple users can create independent snapshots from the same test master. This enables multiple development and test environments to share space while maintaining independent databases for each user.
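The copy-on-write behaviour described above can be illustrated with a toy model (this is a conceptual sketch in Python, not Oracle internals):

```python
# Toy model of an Exadata snapshot: only changed blocks consume space in the
# snapshot; unchanged blocks are read from the test master via a parent pointer.
class TestMaster:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # the only full copy of the data

class Snapshot:
    def __init__(self, parent):
        self.parent = parent         # pointer back to the test master
        self.changed = {}            # blocks allocated on first write

    def read(self, block_no):
        # Unchanged blocks are serviced by the test master's data blocks.
        return self.changed.get(block_no, self.parent.blocks[block_no])

    def write(self, block_no, data):
        # New data goes to a block allocated to the snapshot on write.
        self.changed[block_no] = data

master = TestMaster({1: "orig-1", 2: "orig-2"})
snap_a = Snapshot(master)            # two independent snapshots
snap_b = Snapshot(master)            # sharing the same test master
snap_a.write(1, "changed-by-a")
print(snap_a.read(1), snap_a.read(2), snap_b.read(1))
# changed-by-a orig-2 orig-1
```

Writes in one snapshot never affect the test master or sibling snapshots, which is why the test master's files are locked down as read-only in the steps below.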
 
 
Steps to create Test Master
3.1. Build the full clone of a database using RMAN backup/restore, data pump, a standby database (note that while a standby serves as a test master, we cannot enable log apply and log shipping on it), or any of the other methods typically used to create a full clone of a database.
 
3.2.Enable access control on the disk group that contains the test master's data files. The disk group must be on Exadata.
SQL> ALTER DISKGROUP DATA_SPARSE SET ATTRIBUTE 'ACCESS_CONTROL.ENABLED' = 'TRUE';
 
3.3. After creating the full clone, remove write permissions on all the data files to help prevent accidental overwrite.
 
SQL commands in the ASM instance only allow you to set file permissions to read-only; you cannot remove write permissions directly in SQL.
 
SQL> ALTER DISKGROUP DATA_SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file 'FILENAME';
 
3.4.Setting the Ownership of the Test Master Data Files
After you have cloned the database to create the test master, add an operating system user to the disk group and make that user the owner of the test master's data files.

Specify an operating system user as the owner of the disk group:
SQL> ALTER DISKGROUP DATA_SPARSE ADD USER 'perf_mon';
 
Make the operating system user the owner of the test master's data files:
SQL> ALTER DISKGROUP DATA_SPARSE SET OWNERSHIP OWNER='perf_mon' FOR FILE
  '+DATA_SPARSE/TESTMASTER/DATAFILE/system.378.165163315';
SQL> ALTER DISKGROUP DATA_SPARSE SET OWNERSHIP OWNER='perf_mon' FOR FILE
  '+DATA_SPARSE/TESTMASTER/DATAFILE/sysaux.374.165163215';
SQL> ALTER DISKGROUP DATA_SPARSE SET OWNERSHIP OWNER='perf_mon' FOR FILE
  '+DATA_SPARSE/TESTMASTER/DATAFILE/sysabs.371.165163015';
SQL> ALTER DISKGROUP DATA_SPARSE SET OWNERSHIP OWNER='perf_mon' FOR FILE
  '+DATA_SPARSE/TESTMASTER/DATAFILE/tbs_1.372.165161315';
 
 
4.Sparse Database and Sparse Files:
A database consists of the following files:
 
  • control files
  • online redo logs
  • temp files
  • data files
 
  In a sparse database, such as an Exadata snapshot, its data files are sparse files. A sparse file contains only changes made to blocks from the parent file (the parent file remains unchanged) and maintains a pointer to the parent file for access to unchanged data.
The Exadata snapshot has its own copy of the other database files (control files, online redo logs, and temp files). These other database files will not be sparse files.
 
4.1) Create snapshot  PDB database
 
4.1.1.In SQL*Plus connect to the cdb$root.

4.1.2.Close the test master PDB in all instances.
SQL> alter pluggable database PDB1TM1 close instances=all;
 
4.1.3.Open the test master PDB in the local instance in read only mode.
SQL> alter pluggable database PDB1TM1 open read only;
 
4.1.4.Create an Exadata snapshot PDB of the test master.
SQL> create pluggable database PDB1S1 from PDB1TM1 tempfile reuse
create_file_dest='+DATA_SPARSE' snapshot copy;
 
create_file_dest must specify the name of a sparse disk group; this ensures the files for the Exadata snapshot are created in the correct disk group. The "snapshot copy" clause creates the PDB as a snapshot rather than a full PDB clone.
 
4.2) Create non CDB snapshot database
 
   4.2.1. Back up the control file to trace
      4.2.1.1) SQL> select value from v$diag_info where name = 'Default Trace File';
      VALUE
      --------------------------------------------------------------------------
      /u01/app/oracle/diag/rdbms/TESTMASTER/TESTMASTER1/trace/TESTMASTER1_ora_25778.trc
 
      4.2.1.2) SQL> alter database backup controlfile to trace;
      4.2.1.3) Edit the trace script (the CREATE CONTROLFILE statement should use the Exadata snapshot database name, new log file names, and the test master's data file names)
 
CREATE CONTROLFILE REUSE SET DATABASE SPARSETest RESETLOGS
      MAXLOGFILES 32
      MAXLOGMEMBERS 2
      MAXINSTANCES 1
      MAXLOGHISTORY 908
  LOGFILE
      GROUP 1 '+DATA/SPARSETEST/t_log1.f' SIZE 100M BLOCKSIZE 512,
      GROUP 2 '+DATA/SPARSETEST/t_log2.f' SIZE 100M BLOCKSIZE 512
  DATAFILE
      '+DATA_SPARSE/TESTMASTER/DATAFILE/system.257.865863315',
      '+DATA_SPARSE/TESTMASTER/DATAFILE/sysaux.258.865863317',
      '+DATA_SPARSE/TESTMASTER/DATAFILE/sysext.259.865863317',
      '+DATA_SPARSE/TESTMASTER/DATAFILE/tbs_1.256.865863315'
  CHARACTER SET WE8DEC;
 
4.2.2. Generate the dbms_dnfs rename script
set newpage 0
set linesize 999
set pagesize 0
set feedback off
set heading off
set echo off
set space 0
set tab off
set trimspool on
spool rename_files.sql
select 'EXECUTE dbms_dnfs.clonedb_renamefile
('||''''||name||''''||','||''''||replace(replace(replace(name,'.','_'),'TESTMASTER','SPARSETEST'),'+DATA/','+DATA_SPARSE/')||''''||');' from v$datafile;
exit

The query above builds a file called rename_files.sql that contains statements for each data file similar to the following:
EXECUTE dbms_dnfs.clonedb_renamefile (
'+DATA/TESTMASTER/DATAFILE/system.257.865863315',
'+DATA_SPARSE/SPARSETEST/DATAFILE/system_257_865863315');
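The renaming convention can be sketched in Python (illustrative only; note the disk-group prefix is matched with its trailing slash so that a directory name such as DATAFILE is not touched):

```python
# Sketch of the parent-to-child filename mapping shown above: ASM dots become
# underscores, the database name changes, and the file moves to the sparse
# disk group.
def snapshot_name(parent_name: str) -> str:
    child = parent_name.replace(".", "_")              # dots -> underscores
    child = child.replace("TESTMASTER", "SPARSETEST")  # new database name
    child = child.replace("+DATA/", "+DATA_SPARSE/")   # sparse disk group
    return child

print(snapshot_name("+DATA/TESTMASTER/DATAFILE/system.257.865863315"))
# +DATA_SPARSE/SPARSETEST/DATAFILE/system_257_865863315
```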

 
4.2.3.Shut down the test master.
SQL> shutdown;
 
4.2.4. Create an init.ora file for the Exadata snapshot database.
$ cp init_TestMaster.ora snap_init.ora
 
Modify snap_init.ora with the new database name and new control files, for example:
db_name = SPARSETest
control_files = '+DATA_SPARSE/SPARSETEST/control1.f'

4.2.5. Create the control file by running the script you edited in step 4.2.1.3 (in this example the script is named crt_ctlfile.sql):
SQL> @crt_ctlfile
All the files must be renamed prior to opening the Exadata snapshot database.
 
4.2.6 Rename files
SQL> @rename_files
This script modifies the permissions of the test master database files in ASM, marking them as READONLY.
The dbms_dnfs.clonedb_renamefile procedure, which is called by rename_files.sql, sets up the parent-child relationship between the test master database and the snapshot database, and renames the filenames in the snapshot database's control file.
 
  4.2.7 Open the Exadata snapshot database with the RESETLOGS option:
SQL> ALTER DATABASE OPEN RESETLOGS;

Confirm that the Exadata snapshot files are child files of the test master database. Connect using SQL*Plus as sysasm to the Exadata snapshot, and run the following command:
SQL> select filenumber num, CLONEFILENAME child, SNAPSHOTFILENAME parent
from x$ksfdsscloneinfo;

The following is an example of the output from the query:
NUM CHILD                                        PARENT
------------------------------------------------------------------
1   +DATA_SPARSE/SPARSETEST/DATAFILE/system_257_865863315
+DATA_SPARSE/TESTMASTER/DATAFILE/system.257.865863315
2   +DATA_SPARSE/SPARSETEST/DATAFILE/sysaux_258_865863317
+DATA_SPARSE/TESTMASTER/DATAFILE/sysaux.258.865863317
3   +DATA_SPARSE/SPARSETEST/DATAFILE/sysext_259_865863317
+DATA_SPARSE/TESTMASTER/DATAFILE/sysext.259.865863317
4   +DATA_SPARSE/SPARSETEST/DATAFILE/tbs_1_256_865863315
+DATA_SPARSE/TESTMASTER/DATAFILE/tbs_1.256.865863315

4.2.8 Log in using SQL*Plus to the Exadata snapshot database, and add temp files to the TEMP tablespace. This will be a full size temp file, not sparse.
alter tablespace TEMP add tempfile '+DATA_SPARSE' size 10g;

Tuesday, 3 April 2018

How to remove all disks from a cellserver

Assume we have the following 3 disk groups:
   DATA    (database files)
   FRA     (archive logs / FRA)
   SYSTEM  (created after the OS partition)

Verify that you have enough usable space (very important before you drop)

Before you drop the disks, check the disk group redundancy and the free usable space, to confirm that the usable free space remaining after the disk drop can accommodate the rebalance.

A.1 Get the below values from the ASM instance:

select sum(total_mb) from v$asm_disk where failgroup='CELLSERVER01' and lower(name) like '%data%';
select sum(total_mb) from v$asm_disk where failgroup='CELLSERVER01' and lower(name) like '%fra%';
select sum(total_mb) from v$asm_disk where failgroup='CELLSERVER01' and lower(name) like '%system%';

A.2 Depending on the redundancy, divide the value by 3 or 2 (high = 3, normal = 2).

For example, for FRA:

select sum(total_mb) from v$asm_disk where failgroup='CELLSERVER01' and lower(name) like '%fra%';

SUM(TOTAL_MB)
-------------
       344080

This is the total MB for one cell in FRA without redundancy; with normal redundancy, approximately 344080/2 = 172040 MB (~168 GB) of free space is needed for the rebalance (high = 3, normal = 2).
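The sizing rule above can be sketched as follows (the helper name is ours):

```python
# Sketch: the raw total_mb of the disks being dropped, divided by the
# redundancy factor (high = 3, normal = 2), approximates the usable free
# space that the rebalance must absorb.
def required_usable_mb(total_mb_on_cell: float, redundancy: str) -> float:
    factor = {"high": 3, "normal": 2}[redundancy.lower()]
    return total_mb_on_cell / factor

# The FRA example above: 344080 MB raw on one cell, normal redundancy.
print(required_usable_mb(344080, "normal"))   # 172040.0
```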













A.3 Validate using lsdg:

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  4194304  298592448  124655796          3554672        60550562              0             N  DATA/
MOUNTED  NORMAL  N         512   4096  4194304    2422560     118196            34608           41794              0             Y  FRA/
MOUNTED  NORMAL  N         512   4096  4194304   18930240   17817060           225360         8795850              0             N  SYSTEM


Note: if we don't have enough usable space to accommodate the rebalance, the only option is to increase disk_repair_time. We may lose data if any other failgroup goes down (normal redundancy tolerates the loss of 1 failgroup, high tolerates 2).

Drop all Disks

For cell server maintenance, if we want to remove all the disks in the above disk groups from one cell:

  alter diskgroup DATA drop disks in failgroup CELLSERVER01 rebalance power 60;
  alter diskgroup FRA drop disks in failgroup CELLSERVER01 rebalance power 60;
  alter diskgroup SYSTEM drop disks in failgroup CELLSERVER01 rebalance power 60;


-- Monitor v$asm_operation

Add the Disks Back

(Why do we use two IP addresses here? From X4 onwards, InfiniBand runs in active-active mode.)

To add all disks from one cell server in a single command:
       ALTER DISKGROUP DATA ADD DISK 'o/10.10.10.1;10.10.2/DATA_DG_CD*' rebalance power 60 NOWAIT;
       ALTER DISKGROUP FRA ADD DISK 'o/10.10.10.1;10.10.2/FRA_DG_CD*' rebalance power 60 NOWAIT;
       ALTER DISKGROUP SYSTEM ADD DISK 'o/10.10.10.1;10.10.2/SYSTEM_DG_CD*' rebalance power 60 NOWAIT;


-- Monitor v$asm_operation




