
Thursday, January 28, 2016

Linux Monitoring by Example. Code Examples for bash, perl, and sqlplus.

===========================================
General
===========================================
Code Example for bash, perl, sqlplus

A crontab task runs every 5 minutes and executes the main bash script.
The main bash script calls another bash script, which prepares a list of the top memory-consuming processes.
The main bash script then calls a Perl script.
The Perl script queries the database for SYSDATE and the latest SNAP_ID, and prepares a SQL file of INSERT statements.
The main bash script calls another bash script that runs the SQL file with sqlplus.
The main bash script also contains logic to keep the log files under control.

===========================================
File list
===========================================
crontab
.set_env
.set_profile


main_get_process_data.sh
list_processes.sh
load_to_db.pl
load_data_to_db.sh

===========================================
crontab
===========================================
0-59/5 * * * * /software/oracle/oracle/scripts/main_get_process_data.sh

===========================================
.set_env
===========================================
export LD_LIBRARY_PATH=/software/oracle/101/lib:/usr/lib
export PATH=/usr/kerberos/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/local/cmcluster/bin:/usr/sbin:/sbin:/software/oracle/101/bin 

===========================================
.set_profile
===========================================
export ORACLE_SID=orainst
export ORACLE_BASE=/software/oracle
export ORACLE_HOME=/software/oracle/101
export PATH=$PATH:/software/oracle/101/bin

===========================================
main_get_process_data.sh
===========================================
#!/bin/bash

export WORKING_DIR=/software/oracle/oracle/scripts
export OUTPUT_FILE=$WORKING_DIR/os_processes.txt
export OUTPUT_SQL_FILE=$WORKING_DIR/sh_hist_os_processes.sql
export LOG_FILE=$WORKING_DIR/main.log
export LOG_FILE_PERL=$WORKING_DIR/load_data.log

export DB_SCHEMA=dba_user/dba_pass@DWHP_NEW
. $WORKING_DIR/.set_env
. $WORKING_DIR/.set_profile

#export SYSDATE_SQL=$WORKING_DIR/sh_hist_os_sysdate.sql
#export SYSDATE_FILE=$WORKING_DIR/sh_hist_os_sysdate.lst
#$WORKING_DIR/call_sql_main.sh $SYSDATE_SQL

#export SNAP_ID_SQL=$WORKING_DIR/sh_hist_os_snap_id.sql
#export SNAP_ID_FILE=$WORKING_DIR/sh_hist_os_snap_id.lst
#$WORKING_DIR/call_sql_main.sh $SNAP_ID_SQL

rm -f $OUTPUT_FILE
touch $OUTPUT_FILE
rm -f $OUTPUT_SQL_FILE
touch $OUTPUT_SQL_FILE
touch $LOG_FILE
touch $LOG_FILE_PERL

echo ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | tee -a $LOG_FILE
echo ============================================ | tee -a $LOG_FILE
echo Starting main_get_process_data.sh            | tee -a $LOG_FILE
echo ============================================ | tee -a $LOG_FILE

$WORKING_DIR/list_processes.sh $OUTPUT_FILE


echo ============================================ | tee -a $LOG_FILE
echo Before Running perl script                   | tee -a $LOG_FILE
echo ============================================ | tee -a $LOG_FILE
date >> $LOG_FILE
ls -l $OUTPUT_FILE >> $LOG_FILE
ls -l $OUTPUT_SQL_FILE >> $LOG_FILE

perl $WORKING_DIR/load_to_db.pl $OUTPUT_FILE $OUTPUT_SQL_FILE $LOG_FILE_PERL

echo ============================================ | tee -a $LOG_FILE
echo Finished Running perl script                 | tee -a $LOG_FILE
echo ============================================ | tee -a $LOG_FILE
date >> $LOG_FILE
ls -l $OUTPUT_FILE >> $LOG_FILE
ls -l $OUTPUT_SQL_FILE >> $LOG_FILE

echo ============================================ | tee -a $LOG_FILE
echo Running $WORKING_DIR/load_data_to_db.sh      | tee -a $LOG_FILE
echo ============================================ | tee -a $LOG_FILE
date >> $LOG_FILE
ls -l $OUTPUT_SQL_FILE >> $LOG_FILE

$WORKING_DIR/load_data_to_db.sh $DB_SCHEMA $OUTPUT_SQL_FILE

echo ============================================ | tee -a $LOG_FILE
echo Finished Loading Data from $OUTPUT_SQL_FILE  | tee -a $LOG_FILE
echo ============================================ | tee -a $LOG_FILE

date >> $LOG_FILE
ls -l $OUTPUT_SQL_FILE >> $LOG_FILE
echo ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | tee -a $LOG_FILE

echo ============================================ | tee -a $LOG_FILE
echo Keep the log files under control             | tee -a $LOG_FILE
echo ============================================ | tee -a $LOG_FILE

# Keep the log files under control: once a log grows beyond limit_size KB,
# move it aside to a _bak copy.
let limit_size=1024
check_file=$LOG_FILE
let size=`du -k $check_file | awk '{print $1}'`
if [ $size -gt $limit_size ]; then
  mv ${check_file} ${check_file}_bak
fi
check_file=$LOG_FILE_PERL
let size=`du -k $check_file | awk '{print $1}'`
if [ $size -gt $limit_size ]; then
  mv ${check_file} ${check_file}_bak
fi

exit 0

===========================================
list_processes.sh
===========================================
#! /bin/bash

OUTPUT_FILE=$1

# Take the 50 largest processes by virtual memory size and format each one as an
# INSERT statement. SNAP_ID, SYSDATE and PARTITION_ID are left as literal
# placeholders, to be replaced later by load_to_db.pl.
ps -eo pid,ppid,rss,size,vsize,pcpu,pmem,cmd -ww --sort=vsize | tail -50 | awk '{print "INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (SNAP_ID,SYSDATE,"$1","$2","$3","$4,","$5","$6","$7",'\''"$8"'\'',PARTITION_ID);" }'  >  $OUTPUT_FILE
exit 0


===========================================
load_to_db.pl
===========================================
#! /usr/bin/perl

use DBI;
use Time::gmtime;

use strict;
use warnings;

#===========================================================================
# Global Variables
#===========================================================================
my $dbh;

############################################################################
# subroutines
############################################################################
sub getlocalDate
{
  # Return the current date as YYYYMMDD.
  my $tm=gmtime;
  my $localDate=sprintf("%04d%02d%02d",$tm->year+1900,($tm->mon)+1,$tm->mday);
  return $localDate;
}

sub getSysdate
{
  my $sth;
  #my $sql=qq(SELECT TO_CHAR(SYSDATE,'YYYYMMDD hh24:mi:ss') FROM DUAL);
  my $sql=qq(SELECT 'TO_DATE('''||TO_CHAR(SYSDATE,'YYYYMMDD hh24:mi:ss')||''',''YYYYMMDD hh24:mi:ss'')' FROM DUAL);
  my $res_sysdate;
  $sth=$dbh->prepare($sql);
  $sth->execute;
  $sth->bind_columns(undef,\$res_sysdate);
  $sth->fetch();
  $sth->finish();
  return ($res_sysdate);
}

sub getSnapId{
  my $sth;
  my $sql=qq(SELECT MAX (snap_id) FROM DBA_HIST_SNAPSHOT);
  my $snap_id;

  $sth=$dbh->prepare($sql);
  $sth->execute;
  $sth->bind_columns(undef,\$snap_id);
  $sth->fetch();
  $sth->finish();
  return ($snap_id);
}

sub getPartitionId{
  my $sth;
  my $sql=qq(SELECT MOD(TO_NUMBER(TO_CHAR(SYSDATE,'WW')),10) FROM DUAL);
  my $part_id;

  $sth=$dbh->prepare($sql);
  $sth->execute;
  $sth->bind_columns(undef,\$part_id);
  $sth->fetch();
  $sth->finish();
  return ($part_id);
}

#============================
# main ()
#============================
#---------------------------------------
# variables
#---------------------------------------
my $input_file='';
my $output_file='';
my $logFile='';

$input_file=shift;
$output_file=shift;
$logFile=shift;

my $db_user='system';
my $db_pass='system_pass';
my $db_inst='igt';

my $sysdate;
my $snap_id;
my $partition_id;
my $runDate;
my $RetCode;
my @mailArg;

my $my_sysdate;
$runDate=getlocalDate();

#---------------------------------------
# Open Logfile
#---------------------------------------
#unlink $logFile;
open MyLog,">>",$logFile or die $!;
print MyLog "----I---- ---------------------------------"."\n";
print MyLog "----I---- Starting OS counters collection  "."\n";
print MyLog "----I---- ---------------------------------"."\n";

print MyLog "----I---- Input File: ".$input_file."\n";
print MyLog "----I---- Output File: ".$output_file."\n";

#---------------------------------------
# Open Database Connection
#---------------------------------------

print MyLog "----I---- Opening Database Connection ".$db_inst."@".$db_user."\n";
my $db_driver="dbi:Oracle:".$db_inst;
$dbh=DBI->connect($db_driver,$db_user,$db_pass,{RaiseError =>1,AutoCommit=>0})|| die "$DBI::errstr";
#---------------------------------------
# Fetch Database Paramaters
#---------------------------------------
print MyLog "----I---- Database Connection Opened"."\n";
$my_sysdate='';
$my_sysdate=getSysdate();
print MyLog "----I---- SYSDATE=".$my_sysdate."\n";
$snap_id=getSnapId();
print MyLog "----I---- snap ID=".$snap_id."\n";
$partition_id=getPartitionId();
print MyLog "----I---- snap ID=".$partition_id."\n";

#---------------------------------------
# read generated temp file and create sh_hist_os_processes.sql
#---------------------------------------
# INSERT INTO SH_HIST_OS_PROCESS (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),8608,1,10540,1536,1003824,0.0,0.5,'ora_qmnc_igt',4);

my $line;
my $input_line;
my $output_line;
my $line_counter;

  #unlink $output_file;
  open IN_FILE,$input_file or die $!;
  open OUT_FILE,">>",$output_file or die $!;
  print MyLog    sprintf("----I---- After opening file ".$output_file."\n");
  print OUT_FILE sprintf("SET TERMOUT OFF;\n");  
  print OUT_FILE sprintf("SET FEEDBACK OFF;\n");  
  print OUT_FILE sprintf("SET SHOW OFF;\n");  
  print OUT_FILE sprintf("SET VERIFY OFF;\n");  
  
  $line_counter=0;
  while ($line = <IN_FILE>){    

    $line_counter+=1;
    $input_line=$line;
    #print MyLog "----I---- Reading Line ".$input_line."\n";
    #print 'replacing SYSDATE with '.$my_sysdate."\n";
    $line=~s/SYSDATE/$my_sysdate/ ;
    
    #print 'replacing SNAP_ID with '.$snap_id."\n";
    $line=~s/SNAP_ID/$snap_id/ ;

    #print 'replacing PARTITION_ID with '.$partition_id."\n";
    $line=~s/PARTITION_ID/$partition_id/ ;

    print OUT_FILE sprintf("%s\n",$line);
  }
  print OUT_FILE sprintf("COMMIT;\n");
  print OUT_FILE sprintf("EXIT;\n");  
  
  close IN_FILE or die $!;
  close OUT_FILE or die $!;

  print MyLog sprintf("----I---- ---------------------------------"."\n");
  print MyLog sprintf("----I---- Finished OS counters collection  "."\n");
  print MyLog sprintf("----I---- ---------------------------------"."\n");
  print MyLog sprintf("\n");
  close MyLog or die $!;
#---------------------------------------
exit 0

===========================================
load_data_to_db.sh
===========================================
#!/bin/bash

sqlplus -s $1 @$2


===========================================
Result
===========================================
The result is a SQL file in the format below. (A sketch of a matching table definition follows the sample output.)

SET TERMOUT OFF;
SET FEEDBACK OFF;
SET SHOW OFF;
SET VERIFY OFF;
INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),31243,1,12248,1568 ,1003852,0.3,0.5,'ora_p002_orainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),31245,1,12276,1568 ,1003852,0.3,0.5,'ora_p003_orainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),12694,1,23484,1596 ,1003880,0.0,1.1,'ora_q000_orainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),6615,1,21236,1600 ,1003884,0.0,1.0,'ora_j014_orainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),8612,1,110336,1680 ,1003968,0.0,5.3,'ora_mmnl_orainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),8559,1,39132,1700 ,1003988,0.0,1.8,'ora_ckpt_orainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),31262,1,14936,1708 ,1003992,0.5,0.7,'oracleorainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),31260,1,15388,1716 ,1004000,1.5,0.7,'oracleorainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),8563,1,269960,1716 ,1004004,0.0,13.0,'ora_reco_orainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),11459,1,81184,1724 ,1004008,0.0,3.9,'oracleorainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),1041,1,23372,1724 ,1004008,0.0,1.1,'oracleorainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),3724,1,20740,1724 ,1004008,0.0,0.9,'oracleorainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),25043,1,21176,1728 ,1004012,0.0,1.0,'oracleorainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),31264,1,22524,1728 ,1004012,6.5,1.0,'oracleorainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),28382,1,20828,1736 ,1004020,0.0,1.0,'oracleorainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),21345,1,25044,1740 ,1004024,0.0,1.2,'oracleorainst',4);

INSERT INTO SH_HIST_OS_PROCESS   (snap_id, ts_date, pid, ppid, rss, mem_size, mem_vsize, pcpu, pmem, cmd, partition_id) VALUES (79003,TO_DATE('20160128 15:00:02','YYYYMMDD hh24:mi:ss'),15590,1,25816,1856 ,1004140,0.0,1.2,'oracleorainst',4); 

COMMIT;
EXIT;  
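
For reference, below is a minimal sketch of a table definition that such INSERT statements assume. The column list is taken from the generated SQL above; the datatypes and sizes are assumptions, not the author's actual DDL.

-- Hypothetical DDL; column names match the generated INSERT statements,
-- datatypes and sizes are assumptions.
CREATE TABLE SH_HIST_OS_PROCESS (
  snap_id      NUMBER,
  ts_date      DATE,
  pid          NUMBER,
  ppid         NUMBER,
  rss          NUMBER,
  mem_size     NUMBER,
  mem_vsize    NUMBER,
  pcpu         NUMBER,
  pmem         NUMBER,
  cmd          VARCHAR2(200),
  partition_id NUMBER  -- MOD(week-of-year, 10), see getPartitionId in load_to_db.pl
);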

Tuesday, January 26, 2016

Oracle Statspack by Theory

=====================
General
=====================
Although AWR and ADDM (introduced in Oracle 10g) provide better statistics than STATSPACK, users who are not licensed to use the Enterprise Manager Diagnostic Pack, such as Standard Edition (SE) installations, should continue to use STATSPACK.

=====================
Installation
=====================
To install Statspack:

cd $ORACLE_HOME/rdbms/admin
sqlplus / as sysdba
@spdrop.sql   -- Drop existing statspack schema.
@spcreate.sql -- Create Schema and Objects. Enter tablespace names when prompted.
@spauto.sql   -- Create a dbms_job that will execute a snapshot every hour

To run in batch mode, these parameter values must be defined:
define default_tablespace='TOOLS'
define temporary_tablespace='TEMP' 
define perfstat_password='my_perfstat_password'
SQL>  @?/rdbms/admin/spcreate.sql


Check each of the three output files produced (spcusr.lis, spctab.lis, spcpkg.lis) by the 
installation to ensure no errors were encountered.

The PERFSTAT schema is created by the installation script and owns all the objects needed by this package.

=====================
Taking Snapshots
=====================
To manually create a snapshot:
sqlplus perfstat/perfstat_pass@orainst 
or
sqlplus / as sysdba

EXEC perfstat.statspack.snap;

Taking snapshots is automated via DBMS_JOB.
The default is one snapshot every hour, set up by running this script:
sqlplus perfstat/perfstat_pass@orainst
@?/rdbms/admin/spauto.sql


=====================
View Available Snapshots
=====================
See the available snapshot IDs:
SELECT snap_id, snap_time FROM STATS$SNAPSHOT; 

Get the last snapshot ID:
SELECT MAX(snap_id) FROM STATS$SNAPSHOT;

Alternatively, when a snapshot is taken through the statspack.snap function, the new snapshot ID is returned:
SQL> variable snap_id number;
SQL> begin :snap_id := statspack.snap; end;
  2  /
PL/SQL procedure successfully completed.

SQL> print snap_id

   SNAP_ID
----------
     34535

=====================
Jobs
=====================
This is an example of defining the PERFSTAT jobs:
One job to gather statistics, which runs every hour.
Another job to purge data older than 14 days, which runs once a day.
(A sketch of submitting such a purge job with DBMS_JOB follows the listing below.)

SELECT schema_user, interval, next_date, what FROM USER_JOBS;

PERFSTAT   TRUNC(SYSDATE+1/24,'HH') 20160125 19:00:00
DECLARE snap NUMBER;BEGIN snap := STATSPACK.SNAP(i_snap_level=>7); END;


PERFSTAT   TRUNC(SYSDATE+1)+3/24    20160126 03:00:00
STATSPACK.PURGE(i_purge_before_date=>sysdate-14,i_extended_purge=>true);
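
For reference, a sketch of how such a purge job could be submitted with DBMS_JOB. The purge call and schedule match the listing above; running it as PERFSTAT and the exact schedule expression are assumptions of this sketch, not the author's original submission.

VARIABLE jobno NUMBER
BEGIN
  -- Submit a daily job (03:00) that purges Statspack data older than 14 days.
  DBMS_JOB.SUBMIT(
    job       => :jobno,
    what      => 'STATSPACK.PURGE(i_purge_before_date=>SYSDATE-14, i_extended_purge=>TRUE);',
    next_date => TRUNC(SYSDATE+1)+3/24,
    interval  => 'TRUNC(SYSDATE+1)+3/24');
  COMMIT;
END;
/
PRINT jobno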

=====================
Reporting
=====================
Once you have at least two snapshots, you can run the performance reports.
There are two reports available: an Instance report and a SQL report.
To run a report without being prompted for input values, define these variables:

sqlplus perfstat/my_perfstat_password 
SQL> define begin_snap=1 
SQL> define end_snap=2 
SQL> define report_name=batch_run 
SQL> @?/rdbms/admin/spreport.sql

And for SQL Report:
sqlplus perfstat/my_perfstat_password
SQL>  define begin_snap=39
SQL>  define end_snap=40
SQL>  define hash_value=1988538571
SQL>  define report_name=batch_sql_run
SQL>  @?/rdbms/admin/sprepsql.sql


=====================
Make your life easy
=====================
run_statspack.sh
#!/bin/bash
.  ~oracle/.set_profile
sqlplus perfstat/perfstat_pass@igt @${ORACLE_HOME}/rdbms/admin/spreport.sql


run_statspack_4sql.sh
#!/bin/bash
.  ~oracle/.set_profile
sqlplus perfstat/perfstat_pass@igt @${ORACLE_HOME}/rdbms/admin/sprepsql.sql

=====================
The Instance Report
=====================
The Instance Report (spreport.sql) is a general instance health report, covering all aspects of instance performance.  
It will show the top wait events, the top SQL queries, the busiest segments, the busiest datafiles, the database activity level and many, many other things.  
To run the Instance Report:
@?/rdbms/admin/spreport.sql

=====================
The SQL Report
=====================
The SQL Report (sprepsql.sql) is a report for a specific SQL statement.  
The SQL Report is usually run after examining the high-load SQL sections of the instance health report.  
The SQL Report provides detailed statistics and data for a single SQL statement, identified by the Hash Value.
To run the SQL Report:
@?/rdbms/admin/sprepsql.sql

=====================
Performance Tuning for Perfstat 
=====================
In order for the Statspack code to run quickly, it is advisable to gather optimizer statistics on the objects owned by the PERFSTAT schema.

EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>'PERFSTAT',CASCADE=>TRUE);

=====================
Snapshot Levels
=====================

Possible values: 0, 5, 6, 7, 10
Default value: 5


In Short

Levels 0 and higher:  Collects general instance-level performance statistics.

Levels 5 and higher:  Additional data: SQL statements.

Levels 6 and higher:  Additional data: SQL plans and SQL plan usage.
Levels 7 and higher:  Additional data: segment-level statistics.
Levels 10 and higher: Additional statistics: parent and child latches.
Session specific:     Additional statistics: specific session statistics and wait events.

In Details
Level 0
Collects General Instance Level Performance statistics.
Such as: wait statistics, system events, system statistics, rollback segment data, row cache, SGA, background events, session events, lock statistics, buffer pool statistics, and parent latch statistics.

Levels 5 and higher
Collects Additional Data: SQL Statements
Such as SQL Statements Performance data.
The data is collected on top SQL statements with high resource usage. 

The SQL statements gathered by Statspack are those that exceed one of six predefined threshold parameters:
1. Number of executions of the SQL statement. The default is 100.
2. Number of disk reads performed by the SQL statement. The default is 1,000.
3. Number of parse calls performed by the SQL statement. The default is 1,000.
4. Number of buffer gets performed by the SQL statement. The default is 10,000.
5. Size of sharable memory used by the SQL statement. The default is 1 MB.
6. Version count for the SQL statement. The default is 20.
If a SQL statement's resource usage exceeds any one of these threshold values, then it is captured during the snapshot.

The SQL threshold levels used are either those stored in the table STATS$STATSPACK_PARAMETER or those specified when the snapshot is taken.

Levels  6 and higher
Collects Additional Data: SQL Plans and SQL Plan Usage
Such as SQL plans and plan usage data for each of the high-resource SQL statements captured.

A level 6 snapshot gathers valuable information for determining whether the execution plan used for a SQL statement has changed. Therefore, level 6 snapshots should be used whenever a plan might have changed.

To gather the plan for a SQL statement, the statement must be in the shared pool at the time the snapshot is taken, and it must exceed one of the SQL thresholds. 
To gather plans for all statements in the shared pool, specify the executions threshold to be zero (0) for those snapshots.
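
For example, a minimal sketch of such a snapshot call; the parameter names are the standard ones listed in the parameter table further below, and the call is an illustration rather than the author's own command:

-- Level 6 snapshot; executions threshold 0 gathers plans for all statements in the shared pool.
EXECUTE STATSPACK.SNAP(i_snap_level=>6, i_executions_th=>0);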

Levels  7 and higher
Collects Additional data: Segment Level Statistics
Such as performance data on highly used segments. 
RAC specific segment level statistics are also captured with level 7.

Level 7 includes the following segment statistics:

Logical reads
Db block changes
Physical reads
Physical writes
Physical reads direct
Physical writes direct
Global cache consistent read blocks served (RAC specific)
Global cache current blocks served (RAC specific)
Buffer busy waits
ITL waits
Row lock waits

Although Statspack captures all segment statistics, it reports only the following statistics that exceed one of the predefined threshold parameters:

1. Number of logical reads on the segment. The default is 10,000.
2. Number of physical reads on the segment. The default is 1,000.
3. Number of buffer busy waits on the segment. The default is 100.
4. Number of row lock waits on the segment. The default is 100.
5. Number of ITL waits on the segment. The default is 100.
6. Number of global cache consistent read blocks served (RAC only). The default is 1,000.
7. Number of global cache current blocks served (RAC only). The default is 1,000.

The values of the threshold parameters are used when deciding which segment statistics to collect. 
If a segment's statistic exceeds a threshold value, all statistics regarding this segment are captured during the snapshot.
The threshold levels used are either those stored in the table STATS$STATSPACK_PARAMETER or those specified when the snapshot is taken.
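
As a sketch, a segment threshold can also be passed for a single snapshot; the value 50,000 below is just an example, not a recommendation:

-- Level 7 snapshot; only segments with more than 50,000 logical reads are reported for this snapshot.
EXECUTE STATSPACK.SNAP(i_snap_level=>7, i_seg_log_reads_th=>50000);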

Levels  10 and higher
Additional Statistics: Parent and Child Latches

This level includes all statistics gathered in the lower levels, as well as parent and child latch information. Sometimes data gathered at this level can cause the snapshot to take longer to complete. This level can be resource-intensive, and it should only be used when advised by Oracle personnel.

Session specific:  
Additional Statistics: Specific session statistics and wait events.
By default, there is no collection of session-level statistics.
The statistics gathered for the session include session statistics, session events, and lock activity.
To enable session statistics collection, you need to provide the session ID.
For example:

SQL>  EXECUTE STATSPACK.SNAP(i_session_id=>343);

=============================
Changing Statspack parameters
=============================
Statspack parameters can be changed for a specific snapshot, or permanently.

For a specific snapshot:
EXECUTE STATSPACK.snap(i_ucomment=>'temporary comment');

Permanently.
This effectively changes the default value; the new threshold values are updated in the STATS$STATSPACK_PARAMETER table.


EXECUTE STATSPACK.snap(i_snap_level=>10,i_modify_parameter=>'true');
Or
EXECUTE STATSPACK.modify_statspack_parameter(i_ucomment=>'this is a comment that is saved');

The List of STATSPACK parameters that can be changed:

Parameter Name     Possible Values Default Value Parameter meaning
------------------ --------------- ------------- ----------------------------------------
i_snap_level       0, 5, 6, 7, 10  5             Snapshot level
i_ucomment         Text            Blank         Free Text
i_executions_th    Integer >=0     100           Threshold for SQL statement executions
i_disk_reads_th    Integer >=0     1000          Threshold for SQL disk reads
i_parse_calls_th   Integer >=0     1000          Threshold for SQL number of parse calls
i_buffer_gets_th   Integer >=0     10000         Threshold for SQL number of buffer gets
i_sharable_mem_th  Integer >=0     1048576       Threshold for SQL shareable memory
i_version_count_th Integer >=0     20            Threshold for SQL versions
i_seg_phy_reads_th Integer >=0     1000          Threshold for Segment physical reads
i_seg_log_reads_th Integer >=0     10000         Threshold for Segment logical reads
i_seg_buff_busy_th Integer >=0     100           Threshold for Segment buffer busy
i_seg_rowlock_w_th Integer >=0     100           Threshold for Segment row lock waits
i_seg_itl_waits_th Integer >=0     100           Threshold for Segment ITL waits
i_seg_cr_bks_sd_th Integer >=0     1000          Threshold for Segment consistent reads 
i_seg_cu_bks_sd_th Integer >=0     1000          Threshold for Segment current blocks
                                                 served by the instance
i_session_id       V$SESSION.sid   0             Session ID for which to capture statistics
i_modify_parameter TRUE, FALSE     FALSE         Determines whether the parameters 
                                                 specified are used for future snapshots

=====================
Enable statistics collection
=====================
In order for Statspack to work, Oracle has to collect statistics data.
This is controlled by two parameters: TIMED_STATISTICS and STATISTICS_LEVEL.

TIMED_STATISTICS
TIMED_STATISTICS specifies whether or not statistics related to time are collected.

Possible Values: TRUE and FALSE

TRUE:  The statistics are collected and stored, and are displayed in the V$SESSTAT and V$SYSSTAT dynamic performance views.


FALSE: The value of all time-related statistics is set to zero. This setting lets Oracle avoid the overhead of requesting the time from the operating system.

TIMED_STATISTICS must be set to TRUE:

ALTER SYSTEM SET TIMED_STATISTICS=TRUE;

STATISTICS_LEVEL
STATISTICS_LEVEL specifies the level of collection for database and operating system statistics. 
Possible Values: TYPICAL, ALL, BASIC
Default Value: TYPICAL

TYPICAL: Collects all major statistics and provides the best overall performance. This value should be adequate for most environments.

ALL: In addition to TYPICAL, collects timed OS statistics and plan execution statistics.

BASIC: Effectively disables the collection of many important statistics.
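
A minimal example of checking and setting these parameters (STATISTICS_LEVEL is dynamic and can be changed with ALTER SYSTEM):

-- Check the current settings
SHOW PARAMETER timed_statistics
SHOW PARAMETER statistics_level

-- TYPICAL is the default and is sufficient for Statspack
ALTER SYSTEM SET STATISTICS_LEVEL=TYPICAL;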

=====================
Statspack scripts
=====================
Installation
spdrop.sql   - Uninstalls Statspack from the database (run as SYS).
spcreate.sql - Installs the Statspack user, tables and package on a database (run as SYS).
spauto.sql   - Automates Statspack statistics collection (using DBMS_JOB).

Reports
spreport.sql - Generates a Statspack instance report.
sprepsql.sql - Generates a Statspack SQL report for a specific SQL hash value.
sprepins.sql - Generates a Statspack report for the database and instance specified.

Clean Up

sppurge.sql  - Purges a range of snapshot IDs.
sptrunc.sql  - Truncates all data in the Statspack tables.
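
For example, a sketch of purging a snapshot range in batch mode; this assumes sppurge.sql prompts for substitution variables named losnapid and hisnapid, and the snapshot IDs below are placeholders:

sqlplus perfstat/perfstat_pass@igt
SQL> define losnapid=100
SQL> define hisnapid=200
SQL> @?/rdbms/admin/sppurge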


=====================
Known Issues
=====================
Some statistics may only be reported on completion of a query. 
If a query runs for 12 hours, its processing won't be reported during any of the snapshots taken while the query was busy executing.


If queries are aged out of the shared pool, the stats from V$SQL are reset. 
This can throw off the delta calculations and even make them negative.
For example, query A has 10,000 buffer_gets at snapshot 1, but at snapshot #2, it has been aged out of the pool and reloaded and now shows only 1,000 buffer_gets. So, when you run spreport.sql from snapshot 1 to 2, you'll get 1,000-10,000 = -9,000 for this query.