
Monday, May 16, 2022

Stop Cluster, unmount and remount Oracle shared storage, Start Cluster.

===========
General
===========
In short:
Stop the cluster
Unmount Oracle shared storage
Do the DBA work
Remount Oracle shared storage
Start the cluster

How to know whether the cluster is Veritas or Pacemaker?
Run getaclu and check the output:
VCS -> Veritas
PMK -> Pacemaker
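If getaclu is not available, a rough fallback is to check which cluster tooling is installed on the node (a minimal sketch; assumes the binaries are in PATH):

# fallback check when getaclu is missing
if command -v hastatus >/dev/null 2>&1; then echo "VCS (Veritas)";
elif command -v pcs >/dev/null 2>&1; then echo "Pacemaker";
else echo "no known cluster stack found"; fi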

Starhome Technote
http://10.135.10.64/portal/projects/howto/mount-cl-fs-locally.html

=====================
Pacemaker
=====================

=============
Stop the cluster 
==============

#> pcs cluster stop --all

#> pcs status cluster
Error: cluster is not currently running on this node

Tag and activate the Oracle volume groups
#> for i in $(vgscan |grep Ora | cut -d '"' -f 2) ;do vgchange --addtag sometag $i ; vgchange -ay --config 'activation{volume_list=["@sometag"]}' $i ; done

  Volume group "OraVg3" successfully changed
  1 logical volume(s) in volume group "OraVg3" now active
  Volume group "OraVg2" successfully changed
  2 logical volume(s) in volume group "OraVg2" now active
  Volume group "OraVg1" successfully changed
  1 logical volume(s) in volume group "OraVg1" now active
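Optionally, double-check that the tag was added and the volumes are active before mounting (standard LVM reporting fields; adjust the grep pattern to your VG names):

vgs -o vg_name,vg_tags | grep Ora
lvs -o lv_name,vg_name,lv_active | grep Ora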

Mount locally
#> mount -t xfs /dev/OraVg1/db1 /oracle_db/db1
#> mount -t xfs /dev/OraVg2/Ora_Exp /backup/ora_exp
#> mount -t xfs /dev/OraVg2/Ora_Online /backup/ora_online
#> mount -t xfs /dev/OraVg3/db2 /oracle_db/db2

or
mount /dev/OraVg1/db1 /oracle_db/db1
mount /dev/OraVg2/Ora_Exp /backup/ora_exp
mount /dev/OraVg2/Ora_Online /backup/ora_online
mount /dev/OraVg3/db2 /oracle_db/db2
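Either way, verify the filesystems are mounted before handing the system to the DBA:

df -hP | grep -E '/oracle_db|/backup'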

==============
Perform Database Restore
==============

==============
Start the cluster 
==============
Unmount it all locally:
#> umount -f /mnt/oratmp /oracle_db/db1 /backup/ora_exp /backup/ora_online /oracle_db/db2

umount /oracle_db/db1
umount /oracle_db/db2
umount /backup/ora_exp
umount /backup/ora_online


Deactivate the volume groups and delete the tag

#> for i in $(vgscan |grep Ora | cut -d '"' -f 2) ;do vgchange -an $i ; vgchange --deltag sometag $i ; done

Restart the cluster 
#> pcs cluster start --all

=====================
pcs commands
=====================
pcs cluster stop --all
pcs status cluster

=====================
VCS
=====================
Take screenshot of current status
df -hP | grep ora
/dev/vx/dsk/OraDg1/db1        79G   26G   54G  33% /oracle_db/db1
/dev/vx/dsk/OraDg2/Ora_Online 159G   16G  143G 10% /backup/ora_online
/dev/vx/dsk/OraDg2/Ora_Exp    100G  7.0G   93G  8% /backup/ora_exp
/dev/vx/dsk/OraDg3/db2        199G  662M  197G  1% /oracle_db/db2

Stop Cluster
hastop -all 
or
hastop -all -force

Periodically check the cluster status until it becomes unavailable and you see a message like the following:
hastatus -summary

VCS ERROR V-16-1-10600 Cannot connect to VCS engine
VCS WARNING V-16-1-11046 Local system not available

Force-flush the Oracle service group, in case of a PARTIAL status which cannot be resolved otherwise.
sudo hagrp -flush -force <service_group> -sys <system>
sudo hagrp -flush -force ora_igt_sg -sys apu-rm

Or freeze just the Oracle group:
hagrp -freeze ora_igt_sg
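If the group was frozen, remember to unfreeze it once the work is done:

hagrp -unfreeze ora_igt_sg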

Mount the Oracle mount points locally
vxdg import OraDg1
vxdg import OraDg2
vxdg import OraDg3

vxvol -g OraDg1 startall
vxvol -g OraDg2 startall
vxvol -g OraDg3 startall

mount -t vxfs /dev/vx/dsk/OraDg1/db1        /oracle_db/db1
mount -t vxfs /dev/vx/dsk/OraDg2/Ora_Online /backup/ora_online
mount -t vxfs /dev/vx/dsk/OraDg2/Ora_Exp    /backup/ora_exp
mount -t vxfs /dev/vx/dsk/OraDg3/db2        /oracle_db/db2

df -hP | grep ora

Do the Oracle stuff

Unmount the Oracle mount points locally

umount /oracle_db/db1 
umount /backup/ora_online
umount /backup/ora_exp
umount /oracle_db/db2

vxvol -g OraDg1 stopall
vxvol -g OraDg2 stopall
vxvol -g OraDg3 stopall

vxdg deport OraDg1
vxdg deport OraDg2
vxdg deport OraDg3
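Optionally confirm the disk groups are no longer imported on this node (the OraDg groups should no longer appear):

vxdg list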

Start the cluster
Run the following command on ALL cluster nodes:
node a
hastart

node b
hastart
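Then watch the cluster come back up until the service groups show ONLINE (a simple watch loop; adjust the interval as needed):

while true; do hastatus -summary; sleep 10; done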


===========
Additional Commands
===========
move service to another node
as root
pcs status
pcs resource cleanup ora_igt_rg
pcs resource disable ora_igt_rg
pcs resource enable ora_igt_rg
pcs resource show ora_igt_rg

To create a service
As root, check the Oracle service definition file:
/etc/sysconfig/env.oracledb

For the Oracle service itself, check this file:
/etc/systemd/system/dbora.service
This defines the Oracle service; ORACLE_HOME and ORACLE_SID come from:
/etc/sysconfig/env.oracledb
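For orientation only, a dbora unit of this kind usually looks roughly like the sketch below; the paths, the oracle user and the dbstart/dbshut calls are assumptions, and the real /etc/systemd/system/dbora.service on the host is the authoritative version.

[Unit]
Description=Oracle database service (sketch only, not the site file)
After=network.target

[Service]
Type=forking
RemainAfterExit=yes
# file that defines ORACLE_HOME and ORACLE_SID (path taken from this note)
EnvironmentFile=/etc/sysconfig/env.oracledb
User=oracle
Group=dba
# dbstart/dbshut are the standard Oracle-supplied scripts under $ORACLE_HOME/bin
ExecStart=/bin/bash -c '${ORACLE_HOME}/bin/dbstart ${ORACLE_HOME}'
ExecStop=/bin/bash -c '${ORACLE_HOME}/bin/dbshut ${ORACLE_HOME}'

[Install]
WantedBy=multi-user.target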

Run this:
systemctl daemon-reload
systemctl status dbora
systemctl status dbora.service
systemctl start dbora.service
pcs resource create dbora_igt_ap systemd:dbora op stop interval=0 timeout=120s on-fail="block" monitor interval=30s timeout=600s start interval=0 timeout=120s --group ora_igt_rg

pacemaker cluster commands info
pcs status - show the overall cluster status
pcs resource show - list the configured resources and their status
pcs resource show dbora_igt_ap - show the Oracle service resource details
 Resource: dbora_igt_ap (class=systemd type=dbora)
  Operations: monitor interval=30s timeout=600s (dbora_igt_ap-monitor-interval-30s)
              start interval=0 timeout=120s (dbora_igt_ap-start-interval-0)
              stop interval=0 on-fail=block timeout=120s (dbora_igt_ap-stop-interval-0)

pacemaker cluster commands oracle
pcs resource move ora_igt_rg PIPNVHED901G
pcs resource cleanup ora_igt_rg 
pcs resource disable ora_igt_rg
pcs resource enable ora_igt_rg

pacemaker cluster commands restart oracle service
pcs status
pcs resource cleanup ora_igt_rg
pcs resource disable ora_igt_rg
pcs resource enable ora_igt_rg
pcs resource restart ora_igt_rg PIPNVHED901G
pcs status

pacemaker cluster commands other

--pcs resource disable oracle
--pcs resource enable oracle



Check corosync in short
root>% vi /etc/corosync/corosync.conf 
root>% pcs cluster sync
server901G: Succeeded
server902G: Succeeded
root>% pcs cluster reload corosync
Corosync reloaded
root>% corosync-cmapctl | grep totem.token
runtime.config.totem.token (u32) = 5000
runtime.config.totem.token_retransmit (u32) = 1190
runtime.config.totem.token_retransmits_before_loss_const (u32) = 4
totem.token (u32) = 5000

Change corosync timeout
1. Edit /etc/corosync/corosync.conf on one of the cluster nodes
Add the token line if it does not exist, or update the value if it does.
The value is in milliseconds; for example, for a 5-second timeout set it to 5000.

totem {
   version: 2
   secauth: off
   cluster_name: rhel7-cluster
   transport: udpu
   rrp_mode: passive
   token: 15000      # if this line is missing, add it; otherwise update the value
}

2. Propagate the updated corosync.conf to the rest of the nodes as follows:
pcs cluster sync

3. Reload corosync.
    This command can be run from one node to reload corosync on all nodes and does not require downtime.
pcs cluster reload corosync

4. Confirm changes
corosync-cmapctl | grep totem.token

For example:
corosync-cmapctl | grep totem.token
runtime.config.totem.token (u32) = 5000
runtime.config.totem.token_retransmit (u32) = 1190
runtime.config.totem.token_retransmits_before_loss_const (u32) = 4

Thursday, May 12, 2022

User Permissions SQL Server


db_owner
Members of the db_owner fixed database role can perform all configuration and maintenance activities on the database, and can also drop the database in SQL Server.
db_datawriter
Members of the db_datawriter fixed database role can add, delete, or change data in all user tables.
db_ddladmin
Members of the db_ddladmin fixed database role can run any Data Definition Language (DDL) command in a database.


To change User permissions:
Database -> Security -> Users -> Properties -> Membership -> add the user to the required role
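The same change can be done in T-SQL instead of the GUI; a minimal sketch with placeholder names (SQL Server 2012 and later):

USE [my_database];   -- placeholder database name
GO
ALTER ROLE db_datawriter ADD MEMBER [app_user];   -- placeholder user name
GO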

Rename files in SQL Server

USE [master];
GO
--Disconnect all existing session.
ALTER DATABASE jiradev4 SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

--Change database in to OFFLINE mode.
ALTER DATABASE jiradev4 SET OFFLINE
GO

ALTER DATABASE jiradev4 MODIFY FILE (Name='jiradev4', FILENAME='D:\DB\jiradev4.mdf')
GO

ALTER DATABASE jiradev4 MODIFY FILE (Name='jiradev4_log', FILENAME='D:\Logs\jiradev4_log.ldf')
GO

At this point, physically rename the data files on the server to the new names.

ALTER DATABASE jiradev4 SET ONLINE
GO
ALTER DATABASE jiradev4 SET MULTI_USER
GO

Refresh the databases list, and check the database properties.
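The new file locations can also be verified from T-SQL using the standard catalog view:

SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('jiradev4');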

SQL Server Copy Database options

We need to copy the database dba_util to a new database dba_util_test.

In SQL Server, there are several options to do that.

Option 1 – Using Backup and Restore
Backup
1.      Right click on the database you want to duplicate and choose Tasks -> "Back Up..."
2.      Save the backup to a .bak file
Important: When taking the backup, go to the Media Options tab and select "Overwrite all existing backup sets"

Restore
1.      Right click on the "Databases" ->  "Restore Database"
2.      As the source, select "File" and point to the .bak file you created earlier.
3.      Change the name of the database to restore to.


Important: When doing the restore, go to the Options tab and uncheck "Take tail-log backup before restore".
This makes the restore much faster, and restores to the point in time the backup was taken.
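The backup/restore copy can also be scripted in T-SQL; a sketch only, where the backup path and the logical file names ('dba_util', 'dba_util_log') are assumptions - verify them first with RESTORE FILELISTONLY:

BACKUP DATABASE dba_util TO DISK = N'D:\Backup\dba_util.bak' WITH INIT;
GO
-- check the logical file names before the restore:
-- RESTORE FILELISTONLY FROM DISK = N'D:\Backup\dba_util.bak';
RESTORE DATABASE dba_util_test
FROM DISK = N'D:\Backup\dba_util.bak'
WITH MOVE 'dba_util' TO 'D:\DB\dba_util_test.mdf',
     MOVE 'dba_util_log' TO 'D:\Logs\dba_util_test_log.ldf';
GO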

Option 2 – using Database Copy Wizard
Database -> my_database -> Tasks -> Copy ->
Source server MY_SERVER->
Destination Server MY_SERVER->
Select the transfer method: "Detach + Attach" (source database must go offline, faster) or the "SQL Management Object" method (source database can stay online, slower) ->
Select database Name to copy ->
Give new Database Name -> my_database_new
Give new Database File path  D:\DB
Give new Database Log path D:\Logs ->
A new package is created under server_name/DTS Packages/Copy Database Wizard Packages CDW_MY_SERVER_dba_util_copy ->
Run Immediately ->

 

Click Finish to perform the following actions:

Source: MY_SERVER Other SQL Server Version, Microsoft SQL Server Standard Edition (64-bit), Build 3223, Microsoft Windows NT 6.3 (14393) NT x64
Destination: MY_SERVER Other SQL Server Version, Microsoft SQL Server Standard Edition (64-bit), Build 3223, Microsoft Windows NT 6.3 (14393) NT x64
Using SMO online transfer
The following databases will be moved or copied:
Copy: dba_util
Destination file will be created: D:\DB\dba_util_test.mdf
Destination file will be created: D:\Logs\dba_util_test_log.ldf
Stop transfer if duplicate database name exists at destination
Package scheduled to run immediately

Sunday, May 8, 2022

check connection bash script

==================
General
==================
A Bash script that takes 3 parameters and returns 1 if the connection is bad, or 0 if the connection is good.

input:
- user
- password
- connection string

output
number 1/0
0 - OK
1 - Bad

#!/bin/bash

VIPUSER=$1
VIPPASS=$2
DB_INST=$3

#---------------------------------
#functions
#---------------------------------
report_error(){
  db_user=$1
  echo ==========================================
  echo "Error!!!"
  echo "User $db_user Connection Check........Failed"
  echo "Connection Error for User $db_user"
  echo "ERROR: ORA-01017: invalid username/password; logon denied"
  echo "Exiting script!"
  echo ==========================================
  # clean up the temporary log before exiting
  rm -f $OUTPUT_FILE
  exit 1
}

report_success(){
  db_user=$1
  echo "User $db_user Connection Check........Passed"
}

#---------------------------------
#main
#---------------------------------

echo ==========================================
echo Check Connections Start
echo ==========================================

export OUTPUT_FILE=./chk_connections.log

rm -f $OUTPUT_FILE 2>/dev/null
touch $OUTPUT_FILE

echo "Check Connection for:  ${VIPUSER}"
sqlplus -s ${VIPUSER}/${VIPPASS}@${DB_INST} >> $OUTPUT_FILE << EOD
set serveroutput on
set heading off linesize 130 pagesize 1000  feedback off
SELECT 'Connection is OK for '||USER FROM DUAL;
EOD


#----------------------------
# Analyze Connection Results
#----------------------------
#echo "OUTPUT_FILE=$OUTPUT_FILE"
#cat $OUTPUT_FILE
connection_ok=`grep "Connection is OK for" $OUTPUT_FILE | grep -iw ${VIPUSER} | wc -l`

if [[ $connection_ok != 1 ]];then
  report_error ${VIPUSER}
  ret_status=1
else
  report_success ${VIPUSER}
  ret_status=0
fi

rm -f $OUTPUT_FILE

echo ==========================================
echo Check Connections Finish
echo ==========================================

exit $ret_status
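Example invocation (the script name, user, password and connection string below are placeholders):

./chk_connection.sh app_user app_password ORAINST
echo "exit status: $?"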

Tuesday, May 3, 2022

SQL*Loader (sqlldr) by simple example

load_data.sh
DB_USER=MY_USER
DB_PASS=MY_PASS
DB_INST=orainst
CTL_FILE=/starhome/iu/workarea/loader_ora/raf_msisdn_set_details_sfi_b.ctl
SQL_LOADER_LOG_FILE=/starhome/iu/workarea/loader_ora/raf_msisdn_set_details_sfi_b.log
sqlldr ${DB_USER}/${DB_PASS}@${DB_INST} CONTROL=${CTL_FILE} LOG=${SQL_LOADER_LOG_FILE}


raf_msisdn_set_details_sfi_b.ctl
OPTIONS(DIRECT=TRUE)
LOAD DATA
INFILE '/starhome/iu/workarea/loader_ora/input_file.txt'
BADFILE '/starhome/iu/workarea/loader_ora/input_file.bad'
DISCARDMAX 0
APPEND
INTO TABLE raf_msisdn_set_details_sfi_b
FIELDS TERMINATED BY ','
(
MSISDN_SET_ID    "TRIM (:MSISDN_SET_ID)",
MSISDN_FROM      "TRIM (:MSISDN_FROM)",
MSISDN_TO        "TRIM (:MSISDN_TO)",
MSISDN_DESC      "TRIM (:MSISDN_DESC)",
IS_PREFIX        "TRIM (:IS_PREFIX)",
TS_LAST_MODIFIED SYSDATE
)
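For reference, input_file.txt is expected to hold comma-separated rows matching the column list above (the values here are made-up placeholders; TS_LAST_MODIFIED is filled by SYSDATE and is not read from the file):

101,972540000000,972540099999,test range,0
102,97252,,prefix entry,1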

Replace tabs with commas in the input file:
sed 's/\t/,/g' input_file.txt_comma > input_file.txt

=====================
OPTIONS(DIRECT=TRUE)
=====================
By default, direct path load is not activated in sqlldr, and the load goes through the regular (conventional) path.
To avoid redo/archiving during the load, activate direct path load by:

A. setting the table to NOLOGGING (see the sqlplus sketch at the end of this note)
B. adding OPTIONS(DIRECT=TRUE) to the sqlldr control file

This will cause a direct path load, and while sqlldr is running this message will appear:

Path used:      Direct

For example:
OPTIONS(DIRECT=true)
LOAD DATA
INFILE '/starhome/sfi_usage/input/imput.dat'
BADFILE '/starhome/sfi_usage/ctl/input.bad'
DISCARDMAX 0
TRUNCATE
INTO TABLE table_hourly_a
FIELDS TERMINATED BY ','
(
KEY1                          "TRIM (:KEY1)",
IMSI                          "TRIM (:IMSI)"
)
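Step A above (marking the table NOLOGGING) can be done with a quick sqlplus call; a sketch reusing the placeholder credentials from load_data.sh:

sqlplus -s ${DB_USER}/${DB_PASS}@${DB_INST} << EOF
ALTER TABLE table_hourly_a NOLOGGING;
EXIT;
EOF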