Linux Useful Commands
===========================================
Aliases
Linux file system in brief
Different help options
Server, Linux and User info
Create User
Files and Folders
Linux Permissions. chown, chmod, sudoers
ACL: Access Control List setfacl and getfacl
Delete file by inode
STDIN, STDOUT, STDERR, Tee.
Environment variables and aliases.
Utilities
nohup
script
grep examples
Disk usage
Using fsck
Using xfs
Use public keys with ssh
System Log files
top command
change hostname
Memory on Linux
Linux datetime
Port Forwarding script
Check if files exist example
Extend disk space on Volume
rpm
defunct processes
File Processing Examples
Code examples
awk
open sig file with gpg
pacemaker - pcs commands
Oracle related
========================
Code examples
========================
Code example - delete file by size
Code example - delete files by modified date
Code example - move file to backup once a month
Code example - zip file in a folder into one zip file
Code example - infinite loop
Code example - loop on files, with space in file name
Code example - loop on processes, killing only a specific one
===========================================
aliases
===========================================
alias ss='~/.search_str.sh'
Contents of ~/.search_str.sh:
#!/bin/bash
search_str=$1
find . -type f | xargs grep ${search_str}
alias ll='ls -ltra'
===========================================
Top commands
===========================================
Find a file by name:
locate my_file
find . -type f -name my_file
Get disk space usage
find . -type f -printf '%s %p\n'| sort -nr | head -10
per mount point
find /mount/point/ -type f -printf '%s %p\n'| sort -nr | head -10
du
per folder
du -sh * (Will display totals in human-readable KB/MB/GB)
du -sh * | sort -h | tail -20
du -sm * (Will display totals in MB)
du -sm * | sort -n
top 20 folders
du -m /software/oracle | sort -n | tail -20
du -m . | sort -n | tail -20
echo ==================================
echo top 10 files
echo ==================================
find /software/oracle -type f -printf '%s %p\n'| sort -nr | head -10
echo ==================================
echo top 10 Folders
echo ==================================
du -m /software/oracle/ | sort -n | tail -20
echo ==================================
echo Done
echo ==================================
===========================================
Top grep command
===========================================
egrep 'ORA-|connect|warn|sp2-|SP2-' /some/path/*log | egrep -i -v 'connectstr|connection|Running'
===========================================
Oracle related
===========================================
Keep listener log size under control
===========================================
#!/bin/bash
export ORA_INST=igt
export DAYS_TO_KEEP=14
export DAYS_TO_KEEP_LIST=2
export LIST_SERVER=`hostname`
export LISTENER_ROOT=/software/oracle/diag/tnslsnr/${LIST_SERVER}/lsnr_igt/trace
export LISTENER_ROOT_ALERT=/software/oracle/diag/tnslsnr/${LIST_SERVER}/lsnr_igt/alert
export LOG_PATH=/software/oracle/122/rdbms/log
#find /software/oracle/diag/rdbms/${ORA_INST}/${ORA_INST}/trace -type f -name "*.trm" -mtime +${DAYS_TO_KEEP} -exec rm {} \;
#find /software/oracle/diag/rdbms/${ORA_INST}/${ORA_INST}/trace -type f -size +1000M -exec rm {} \;
mv -f ${LISTENER_ROOT}/lsnr_igt.log_6 ${LISTENER_ROOT}/lsnr_igt.log_7
mv -f ${LISTENER_ROOT}/lsnr_igt.log_5 ${LISTENER_ROOT}/lsnr_igt.log_6
mv -f ${LISTENER_ROOT}/lsnr_igt.log_4 ${LISTENER_ROOT}/lsnr_igt.log_5
mv -f ${LISTENER_ROOT}/lsnr_igt.log_3 ${LISTENER_ROOT}/lsnr_igt.log_4
mv -f ${LISTENER_ROOT}/lsnr_igt.log_2 ${LISTENER_ROOT}/lsnr_igt.log_3
mv -f ${LISTENER_ROOT}/lsnr_igt.log_1 ${LISTENER_ROOT}/lsnr_igt.log_2
mv -f ${LISTENER_ROOT}/lsnr_igt.log ${LISTENER_ROOT}/lsnr_igt.log_1
#rm -f /software/oracle/diag/tnslsnr/${LIST_SERVER}/lsnr_igt/alert/*.xml
find $LISTENER_ROOT_ALERT -type f -mtime +${DAYS_TO_KEEP_LIST} -exec rm {} \;
find ${LOG_PATH} -type f -mtime +${DAYS_TO_KEEP} -exec rm {} \;
For RAC:
#!/bin/bash
DAYS_TO_KEEP=30
ORACLE_HOME=/u01/app/12.2.0.1/
LISTENER_HOME=/u01/app/oracle/diag/tnslsnr/qabcs-1-dbs-1a
find ${ORACLE_HOME}/grid/rdbms/audit -type f -name "*.aud" -mtime +${DAYS_TO_KEEP} -exec rm {} \;
find ${LISTENER_HOME}/listener/alert -type f -name "log*xml" -mtime +${DAYS_TO_KEEP} -exec rm {} \;
find ${LISTENER_HOME}/listener_scan1/alert -type f -name "log*xml" -mtime +${DAYS_TO_KEEP} -exec rm {} \;
find ${LISTENER_HOME}/listener_scan2/alert -type f -name "log*xml" -mtime +${DAYS_TO_KEEP} -exec rm {} \;
find ${LISTENER_HOME}/listener_scan3/alert -type f -name "log*xml" -mtime +${DAYS_TO_KEEP} -exec rm {} \;
rm ${LISTENER_HOME}/listener_scan1/trace/listener_scan1.log
rm ${LISTENER_HOME}/listener_scan2/trace/listener_scan2.log
rm ${LISTENER_HOME}/listener_scan3/trace/listener_scan3.log
find /u01/app/oracle/diag/rdbms/ipnbc/IPNBC1/trace/*.trc -mtime +3 -exec rm {} \;
find /u01/app/oracle/diag/rdbms/ipnbc/IPNBC1/trace/*.trm -mtime +3 -exec rm {} \;
find /u01/app/oracle/diag/rdbms/ipnbc/IPNBC1/trace/cdmp_* -mtime +3 -exec rm -rf {} \;
find /u01/app/oracle/admin/IPNBC/adump/*.aud -mtime +3 -exec rm {} \;
find /u01/app/oracle/diag/rdbms/ipnbc/IPNBC1/incident/incdir_* -mtime +1 -exec rm -rf {} \;
find /u01/app/oracle/diag/rdbms/ipnbc/IPNBC1/alert/log_* -mtime +3 -exec rm {} \;
find /u01/app/oracle/diag/asm/+asm/+ASM1/trace/*.trc -mtime +3 -exec rm {} \;
find /u01/app/oracle/diag/asm/+asm/+ASM1/trace/*.trm -mtime +3 -exec rm {} \;
find /u01/app/oracle/diag/asm/+asm/+ASM1/incident/incdir_* -mtime +1 -exec rm -rf {} \;
ps -ef | grep ora_scm | grep -v grep | awk '{print $2}' | xargs kill
===========================================
Keep /var/log files under control
===========================================
#!/bin/bash
export DAYS_TO_KEEP=14
export LOG_PATH=/var/spool/clientmqueue
find $LOG_PATH -type f -mtime +${DAYS_TO_KEEP} -exec rm {} \;
export DAYS_TO_KEEP=30
export LOG_PATH=/var/log/aide
find $LOG_PATH -type f -mtime +${DAYS_TO_KEEP} -exec rm {} \;
===========================================
Linux file system in brief
===========================================
/etc - configuration files
/bin - binaries
/usr - software installations
/lib - libraries
/opt - optional. external vendors installations.
/dev - device information
/proc - processes info. Temporary system generated files, by and about running processes.
/tmp - temporary files
===========================================
Different help options
===========================================
help options:
man ls
info ls
ls -h
ls --help
man -k ls - will generate list with man entries for "ls"
===========================================
Server, Linux and User info
===========================================
uname -a
lscpu
hostname - use with caution, because this command can also be used to set the system hostname.
/etc/issue
/etc/redhat-release - only for Red Hat, in format version.revision (for example 6.5)
whoami
id
who - shows all currently connected users.
whereis
which
locate - searches a prebuilt file-name database, which is created by the updatedb command and refreshed on a schedule.
===========================================
Create New User
===========================================
root@my_server:/my_path/my_dir>% useradd akaplan
root@my_server:/my_path/my_dir>% passwd akaplan
Changing password for user akaplan.
New UNIX password:
BAD PASSWORD: it is too short
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
less /etc/passwd
akaplan:x:601:602::/home/akaplan:/bin/bash
The password was accepted despite the "BAD PASSWORD: it is too short" message, because root is allowed to override the password-strength check.
===========================================
Files and Folders
===========================================
block device - the writing to the device is done with blocks, for example storage disk
character device - the writing to the device is done with character, for example screen.
A leading "c" in ls -l output (for example crw-rw----) stands for a character device - normally these files would be under the /dev folder.
A leading "l" (for example lrwxrwxrwx) stands for a soft (symbolic) link.
file some_file - get information about the file
links
ls -i - displays the inode ID (most left number)
ln -s file link - creates soft link.
ln file link - creates hard link.
With a hard link - both the file and the link have the same inode, so they effectively point to the same place on disk.
The link is simply another name for the same inode.
If the original file is deleted - the link is still valid.
With a soft link - the link has its own inode and points to the file by name.
If the original file is deleted - the link becomes broken.
===========================================
Linux Permissions. chown, chmod, sudoers
===========================================
Useful examples for chmod and chown
chmod
chmod o+r filename - Add read to others on filename
chmod u+x filename - Add single permission to a file/directory
chmod u+r,g+x filename - Add multiple permission to a file/directory
chmod a+x filename - This example assigns execute privilege to user, group and others
chmod -R 755 directory-name/ - Apply the permission to all the files under a directory recursively
chmod u+X * - Add execute only on directories (and on files that already have an execute bit); other files are not affected
chown
chown new_user filename - Change the owner of a file
chown :new_group filename - Change the group of a file
chown new_user:new_group filename - Change both owner and the group
From a script:
#!/bin/bash
WORK_PATH=/my_mount/my_user/workarea/ora_exp
FILE_NAME=AUT*.dmp
chmod a+r,a+w ${WORK_PATH}/${FILE_NAME}
/etc/sudoers
To allow user_A to execute script owned by user_B, add an entry to /etc/sudoers
For Example:
iu ALL=NOPASSWD: /usr/local/bin/script_A.sh
iu ALL=NOPASSWD: /usr/local/bin/script_B.sh
The usage is:
as user iu run:
sudo /usr/local/bin/script_A.sh
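A short sketch of the full flow (script names as in the example above; visudo and sudo -l are standard tools):
visudo                              - edit /etc/sudoers safely; the syntax is validated before saving
sudo -l                             - as user iu, list which commands sudo allows
sudo /usr/local/bin/script_A.sh     - run the allowed script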
===========================================
ACL: Access Control List setfacl and getfacl
===========================================
ACL is Access Control List
Consider following permissions:
-rw-r--r-- 1 iu starhome 112615 Aug 2 10:12 mapPROVtables_14.inc
-rw-rw-r--+ 1 iu starhome 112413 Jan 21 10:43 mapPROVtables_14.inc_A
What is the "+" at the end?
It means your file has extended permissions called ACLs
ACL is Access Control List
ACL specifies which users or system processes are granted access to objects, and which operations are allowed on object.
For Example if a file has an ACL, that would look like:
user_a: read,write
user_b: read
getfacl
The getfacl command displays file access control lists.
setfacl
The setfacl command sets file access control lists.
getfacl Examples:
my_user@my_server:~>% ls -l
drwxr-xr-x 2 my_user my_group 4096 Mar 11 2009 temp
drwxr-xr-x+ 6 my_user my_group 4096 Oct 20 2010 workarea
my_user@my_server:~>% getfacl workarea
# file: workarea
# owner: my_user
# group: my_group
user::rwx
user:oracle:r-x
group::r-x
mask::r-x
other::r-x
Sub directories inherit the permissions from parent directory.
Example:
my_user@my_server:~/workarea>% mkdir ALEC
my_user@my_server:~/workarea>% getfacl ALEC
# file: ALEC
# owner: my_user
# group: my_group
user::rwx
group::r-x
other::r-x
setfacl Examples:
Via ACL, add read, write, and execute permissions for an additional user new_dba and an additional group dba_group:
setfacl -m user:new_dba:rwx,group:dba_group:rwx mydir
Grant user my_user read access to some_file file.
setfacl -m u:my_user:r some_file
#chmod 777 $GG_HOME/dirprm/
setfacl -m d:u:oracle:rwx $GG_HOME/dirprm
setfacl -m u:iu:rwx $GG_HOME/dirprm
chown -R iu:starhome /starhome/iu/workarea
setfacl -m u:oracle:rwx /starhome/
setfacl -m u:oracle:rwx /starhome/iu
setfacl -m u:oracle:rwx /starhome/iu/workarea
setfacl -m u:oracle:rwx /starhome/iu/workarea/ora_exp
setfacl -m d:u:oracle:rwx /starhome/iu/workarea/ora_exp
setfacl -m d:u:iu:rwx /starhome/iu/workarea/ora_exp
Copy the ACL of some_file1 to some_file2.
getfacl some_file1 | setfacl --set-file=- some_file2
Copy ACL permissions from one directory to another
ls -l
drwxrwxr-x 2 iu starhome 4096 Aug 6 12:54 CDR_TAP
drwxrwxr-x+ 2 iu starhome 4096 Aug 9 08:51 ora_exp
getfacl ora_exp | setfacl --set-file=- CDR_TAP
ls -l
drwxrwxr-x+ 2 iu starhome 4096 Aug 6 12:54 CDR_TAP
drwxrwxr-x+ 2 iu starhome 4096 Aug 9 08:51 ora_exp
getfacl ora_exp
# file: ora_exp
# owner: iu
# group: starhome
user::rwx
user:oracle:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:oracle:rwx
default:user:iu:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
getfacl CDR_TAP
# file: CDR_TAP
# owner: iu
# group: starhome
user::rwx
user:oracle:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:oracle:rwx
default:user:iu:rwx
default:group::r-x
default:mask::rwx
default:other::r-x
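To undo ACL entries, a sketch (file and user names are placeholders):
setfacl -x u:oracle some_file       - remove the ACL entry for user oracle
setfacl -b some_file                - remove all extended ACL entries (the "+" disappears from ls -l)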
More Examples and reference:
Linux setfacl command
===========================================
Delete file by inode
===========================================
There is a file that was created by mistake, with a weird name. How to delete it?
The delete is done by file inode.
ls -l
drwxr-xr-x+ 8 iu starhome 4096 Dec 24 09:47 workarea
-rw-r--r-- 1 iu starhome 0 Dec 30 13:59 - rest of line ignored.?SQL> SP2-0734: unknown command beginning OWNER
iu@esp-vod-1-dbu-2:~>% ls -il
25 drwxr-xr-x+ 8 iu starhome 4096 Dec 24 09:47 workarea
622633 -rw-r--r-- 1 iu starhome 0 Dec 30 13:59 - rest of line ignored.?SQL> SP2-0734: unknown command beginning OWNER
iu@esp-vod-1-dbu-2:~>% find . -inum 622633
./ - rest of line ignored.?SQL> SP2-0734: unknown command beginning OWNER
iu@esp-vod-1-dbu-2:~>% find . -inum 622633 -exec rm -i {} \;
rm: remove regular empty file `./ - rest of line ignored.\nSQL> SP2-0734: unknown command beginning OWNER'? y
===========================================
STDIN, STDOUT, STDERR, Tee.
===========================================
Examples:
ls -l not_existing_file > list_file (Errors would go to screen)
ls -l not_existing_file > list_file 2>/dev/null (Errors are ignored)
ls -l not_existing_file > list_file 2>errors_file (Errors would go to file)
ls -l not_existing_file 2>&1 > list_file | tee errors_file (Errors go to errors_file and are also displayed on screen)
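To send both output and errors to the same file, a sketch (all_output.log is a placeholder name):
ls -l not_existing_file > all_output.log 2>&1     (stdout and stderr both go to all_output.log)
ls -l not_existing_file &> all_output.log         (bash shorthand for the same thing)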
===========================================
cat example
===========================================
Example:
Retrieve the time only from a long string, fields separated with ";"
>% cat edr_1.dat | grep '28.09.16 05:' | head -1
[JvmId=1] 1320492111901;1320492111900;1773022;1;IPN;1;250016387186852;;35;CYPRUS;A;0;28.09.16 05:00:00.008;28.09.16 05:00:00.008;68730;35796967002;63;MTN (Areeba) (CYP);1;SYSTEM_FAILURE;76;PNLogic: NWBarred;63;Barred Network;280;96;E214;true;100;LU_FROM_SRM_EVENT_TYPE;GSM;;;;;;;;;
>% cat edr_1.dat | grep '28.09.16 05:'| cut -d ";" -f13 | head -1
28.09.16 05:00:00.008
>% cat edr_1.dat | grep '28.09.16 05:'| cut -d ";" -f13 |cut -d " " -f2 | head -1
05:00:00.008
>% cat edr_1.dat | grep '28.09.16 05:'| cut -d ";" -f13 |cut -d " " -f2 |cut -d":" -f1,2 | head -1
05:00
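The same extraction can be done with a single awk call; a sketch against the same edr_1.dat:
awk -F';' '/28.09.16 05:/ {split($13, t, " "); print substr(t[2], 1, 5); exit}' edr_1.dat
05:00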
===========================================
Environment variables and aliases.
===========================================
Environment variables are set from:
/etc/profile
$HOME/.bash_profile
Aliases are set from:
/etc/bashrc
$HOME/.bashrc
===========================================
Utilities
===========================================
vim - improved (advanced) vi.
dos2unix - removes ^M Windows carriage returns.
Delete files older than x days
Remove multiple files with one command:
find . -type f -name "FILE-TO-FIND" -exec rm {} \;
Remove multiple files, older than 5 days with one command:
find . -type f -name "FILE-TO-FIND" -mtime +5 -exec rm {} \;
Change prompt
The prompt is set from the environment variable PS1.
common format: PS1="\u@\h:\w>% "
Where:
\u - user
\h - hostname
\w - pwd
Another common option:
\W - current directory
`cmd` same as $(cmd)
mkdir -p
mkdir -p /path/to/new_folderA/new_folderB/{new_folder1,new_folder2}
Would create the whole tree at one command.
dd
dd if=/dev/zero of=output_file bs=1K count=1000
Would create a new file "output_file" with size 1K*1000 (about 1 MB). (bs stands for block size, count for the number of blocks.)
===========================================
nohup
===========================================
When using the command shell, prefixing a command with nohup prevents the command from being aborted if you log out or exit the shell.
For example:
nohup <command> [args]...
nohup: ignoring input and appending output to `nohup.out'
nohup.out file is created in the working directory, and logs all stdout of the executed command.
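Typical usage is to also send the command to the background and pick the log file name yourself; a sketch (long_job.sh is a placeholder):
nohup ./long_job.sh > long_job.log 2>&1 &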
===========================================
script my_log.log
===========================================
script command would log all commands, and their output, to the file provided as argument.
To stop script:
exit
Example:
script /software/oracle/oracle/my_log.log
cmd1
cmd2
cmd3
...
...
exit
exit
Script done, file is /software/oracle/oracle/my_log.log
===========================================
grep examples
===========================================
grep
grep -c my_string my_files* - count matching lines in each file
Do a grep without "permission denied" errors
find . -type f | xargs grep xxx 2>/dev/null
Do a grep inside zip file
find . -type f -name "*.zip"| xargs zgrep xxx
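A few more day-to-day grep options, a sketch (my_string and my_dir are placeholders):
grep -i my_string my_file       - case-insensitive search
grep -rn my_string my_dir/      - recursive search, print file name and line number
grep -v my_string my_file       - print lines that do NOT match
grep -l my_string *.log         - print only the names of matching files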
===========================================
ps command
===========================================
ps command summary
uses of linux ps command
Example - get number of threads per process
ps -eLf |awk '{print $2}' | sort|uniq -c |sort -n|tail -5
EXAMPLES
To see every process on the system using standard syntax:
ps -e
ps -ef
ps -eF
ps -ely
e - see all processes
f - see full details, including command
F - see full details, more details than f, including command
To see every process on the system using BSD syntax:
ps ax
ps axu
To print a process tree:
ps -ejH
ps axjf
To get info about threads:
ps -eLf
ps axms
To get security info:
ps -eo euser,ruser,suser,fuser,f,comm,label
ps axZ
ps -eM
To see every process running as root (real & effective ID) in user format:
ps -U root -u root u
To see every process with a user-defined format:
ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
ps -eopid,tt,user,fname,tmout,f,wchan
Print only the process IDs of syslogd:
ps -C syslogd -o pid=
Print only the name of PID 42:
ps -p 42 -o comm=
Process id of top 10 thread consuming processes
ps -eLf | awk '{print $2}' | sort | uniq -c |sort -nr |head
For getting the process name:
for i in `ps -eLf | awk '{print $2}' | sort | uniq -c | sort -nr | head | awk '{print $2}'` ; do ps -ef | grep $i | grep -v grep ; done
Get connections to oracle count per process
netstat -anp |grep 1521 |grep ESTAB | awk '{print $NF}' |sort |uniq -c |sort -nr
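On newer systems where netstat is missing, ss from iproute2 gives similar output; a sketch for the same 1521 check (run as root so the process column is populated):
ss -tnp | grep :1521 | awk '{print $NF}' | sort | uniq -c | sort -nr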
===========================================
Disk usage
===========================================
> mount
> cd /mount/point
First - find the mount points:
mount | awk '{print $3}'
Find overall disk usage per mount point
df -hP
find big files:
find / -size +100M -print
Find top 10 biggest files
find /mount/point/ -type f -printf '%s %p\n'| sort -nr | head -10
To delete these files:
find /mount/point/ -type f -printf '%s %p\n'| sort -nr | head -10 | awk '{print $2}' | xargs rm
Common cause for error "No space on Device"
Although the device disk is not 100% full, there is this error.
The reason: There are too many files, and the whole iNode table is used.
Usually this is due to many files generated under /var/spool/mqueue.
Normally these are generated whenever there is some event, and should be taken care of by deamon sendmail. Best practice is to disable these files generation.
===========================================
Using lsof
===========================================
lsof is a command to display open files
For all files:
lsof
For specific file:
lsof /my_mount/my_path/my_folder/my_file
For specific directory
lsof +D /my_mount/my_path/my_folder/
lsof +D /my_mount/
or
lsof /my_mount/
For specific user
lsof -u my_user
For specific process, by command name starting with XXX or YYY
lsof -c XXX -c YYY
For specific process by PID
lsof -p PID
Combination of options
by default it is OR
use -a to change it to AND
lsof -u my_user +D /my_mount/
lsof -u my_user +D /my_mount/ -a
To see which process id is listening on port 7777
lsof -i | grep :7777 | awk '{print $2}'
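lsof can also filter on the port directly; a sketch:
lsof -i :7777        - all processes using port 7777
lsof -t -i :7777     - print only the PIDs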
Reference:
===========================================
Using fsck
===========================================
Useful fsck options:
fsck -n /backup/ora_online/ - Report Only, Fix - (n)o.
fsck -a /backup/ora_online/ - Fix. Fix - (a)utomatically. (softer than fsck -y)
fsck -y /backup/ora_online/ - Fix. Fix - (y)es.
fsck /backup/ora_online/ - Need to provide prompt input(y/n), in case of corruption.
Here is a Reference:
10 Linux Fsck Command Examples
Linux and Unix fsck command
===========================================
Using xfs
===========================================
The XFS File system is a high-performance file system.
Useful xfs options:
Check without repair:
xfs_check /dev/mapper/vg_mount (on newer systems xfs_check was removed; use xfs_repair -n for a check-only run)
Repair:
Make sure you umount the XFS filesystem first before running the xfs_repair command!
xfs_repair /dev/mapper/vg_mount
===========================================
Disk is taken by a zombie process
===========================================
When running df, du, and find, the numbers do not add up.
The /backup/ora_online/ mount shows it is 93% full
du /backup/ora_online/ -sh shows only 40K are used
df /backup/ora_online/ -h shows 15Gb are used
find /backup/ora_online/ -type f shows nothing
Where did 15Gb go?
What to check?
A. Check for deleted files that are still held open (deleted, but the space not yet released).
To see these files:
lsof | grep deleted
B. Using lsof, find out which zombie process is holding the files.
In this case, an Oracle RMAN process was launched by a crontab task of user shdaemon. The execution crashed, but the shdaemon process still holds 89Gb.
du /backup/ora_online/ -sh
24K /backup/ora_online/
df /backup/ora_online/ -hP
/dev/vx/dsk/OraDg2/Ora_Online 160G 71M 89G 45% /backup/ora_online
find /backup/ora_online/ -type f
nothing...
Using lsof:
lsof +D /backup/ora_online/
nothing...
lsof +u shdaemon
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 22166 shdaemon cwd DIR 253,12 4096 1343502 /starhome/dbinstall/backup
bash 22166 shdaemon rtd DIR 253,1 4096 2 /
bash 22166 shdaemon txt REG 253,1 801528 163902 /bin/bash
bash 22166 shdaemon mem REG 253,1 144776 426012 /lib64/ld-2.5.so
bash 22166 shdaemon mem REG 253,1 1722248 426022 /lib64/libc-2.5.so
bash 22166 shdaemon mem REG 253,1 23360 426206 /lib64/libdl-2.5.so
bash 22166 shdaemon mem REG 253,1 15840 426294 /lib64/libtermcap.so.2.0.8
bash 22166 shdaemon mem REG 253,1 53880 426008 /lib64/libnss_files-2.5.so
bash 22166 shdaemon mem REG 253,2 56446448 1048614 /usr/lib/locale/locale-archive
bash 22166 shdaemon mem REG 253,2 25464 1179895 /usr/lib64/gconv/gconv-modules.cache
bash 22166 shdaemon 0u CHR 136,1 0t0 3 /dev/pts/1
bash 22166 shdaemon 1u CHR 136,1 0t0 3 /dev/pts/1
bash 22166 shdaemon 2u CHR 136,1 0t0 3 /dev/pts/1
bash 22166 shdaemon 255u CHR 136,1 0t0 3 /dev/pts/1
kill -9 22166
lsof +u shdaemon
Now nothing...
df /backup/ora_online/ -hP
/dev/vx/dsk/OraDg2/Ora_Online 160G 71M 159G 1% /backup/ora_online
Issue resolved.
C. If nothing can be found by lsof, check for corruption on the disk using the fsck command.
This might not be straightforward, because the Linux fsck command delegates to the filesystem-specific fsck, and the implementation of the provided fsck options can vary.
To see the issues, without fixing them, use fsck -n.
===========================================
Disk is reported as full, while it still has free space.
===========================================
When adding a new entry to crontab, an error is thrown:
iu@isr-sth-2-cgw-1:~/SA_COUNTERS>% crontab -e
crontab: installing new crontab
cron/tmp.2503: No space left on device
crontab: edits left in /tmp/crontab.2503
When checking the disk space, there is plenty of space:
iu@isr-sth-2-cgw-1:~/SA_COUNTERS>% df -hP
Filesystem Size Used Avail Use% Mounted on
/dev/Volume00/LogVol00 2.0G 252M 1.7G 14% /
/dev/cciss/c0d0p1 193M 32M 152M 18% /boot
/dev/Volume00/LogVol05 1008M 58M 910M 6% /home
/dev/Volume00/LogVol04 2.0G 553M 1.4G 29% /opt
none 501M 0 501M 0% /dev/shm
/dev/Volume00/LogVol03 1008M 33M 925M 4% /tmp
/dev/Volume00/LogVol01 5.0G 1.8G 2.9G 39% /usr
/dev/Volume00/LogVol02 2.0G 1.4G 515M 74% /var
/dev/Volume00/Backup 12M 1.1M 10M 10% /backup
/dev/Volume00/LogVol10 1008M 554M 404M 58% /kits
But when checking the inodes usage, the issue is found:
iu@isr-sth-2-cgw-1:~/SA_COUNTERS>% df -iP
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/Volume00/LogVol00 262144 28348 233796 11% /
/dev/cciss/c0d0p1 51000 52 50948 1% /boot
/dev/Volume00/LogVol05 131072 216 130856 1% /home
/dev/Volume00/LogVol04 262144 5933 256211 3% /opt
none 128176 1 128175 1% /dev/shm
/dev/Volume00/LogVol03 131072 21 131051 1% /tmp
/dev/Volume00/LogVol01 655360 95977 559383 15% /usr
/dev/Volume00/LogVol02 262144 262144 0 100% /var
/dev/Volume00/Backup 3072 16 3056 1% /backup
/dev/Volume00/LogVol10 131072 390 130682 1% /kits
When checking the offending folder, it is /var/spool/clientmqueue/
root@isr-sth-2-cgw-1:/var/spool/clientmqueue>% ls -l | wc -l
257184
Not easy to delete these files, where inode table is full:
root@isr-sth-2-cgw-1:/var/spool/clientmqueue>% rm -f q*
-bash: /bin/rm: Argument list too long
So the solution is to delete one by one:
find . -type f | xargs rm -f
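If any of the file names may contain spaces or newlines, a safer variant of the same cleanup (sketch):
find . -type f -print0 | xargs -0 rm -f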
===========================================
Use public keys with ssh
===========================================
1. SCP the public key to server my_user on my_server.com:
cd
scp .ssh/id_dsa.pub my_user@my_server.com:.ssh/id_dsa.pub
2. On the server, append the public key to authorized_keys
ssh my_user@my_server.com
cd .ssh
cat id_dsa.pub >> authorized_keys
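If no key pair exists yet, generate one first; on most systems ssh-copy-id performs steps 1 and 2 in one go (a sketch, key type is a choice):
ssh-keygen -t rsa                      - creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
ssh-copy-id my_user@my_server.com      - appends the public key to the remote authorized_keys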
===========================================
System Log files
===========================================
1. General Linux log
/var/log/messages
2. Veritas Cluster logs
/var/VRTSvcs/log/engine_A.log
Reference for Linux log files
3. Pacemaker Cluster Logging
configuration file: /etc/corosync/corosync.conf
Log files:
/var/log/cluster/corosync.log
/var/log/corosync.log
/var/log/pacemaker.log
4. Logins log
/var/log/secure
Jul 23 08:42:55 my_server sshd[3185]: Accepted password for my_user from 11.222.333.444 port 2140 ssh2
Jul 23 08:42:55 my_server sshd[3185]: pam_unix(sshd:session): session opened for user my_user by (uid=0)
Jul 23 08:43:33 my_server sudo: iu : TTY=pts/0 ; PWD=/some_path/my_user ; USER=root ; COMMAND=/usr/local/bin/some_command
===========================================
top command
===========================================
To see user top processes:
top -u<my_user>
To run in batch mode with N seconds delay:
top -d N -b
Sort. By default the sort is by CPU usage. To change default sort in top output:
top -> Shift+F -> select field
Change output fields:
top -> f -> select field
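To capture a one-off snapshot to a file, a sketch (top_snapshot.txt is a placeholder name):
top -b -n 1 | head -30 > top_snapshot.txt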
===========================================
change hostname
===========================================
vi /etc/hosts
hostname NEWHOSTNAME01
systemctl status rsyslog
systemctl stop rsyslog
systemctl status rsyslog
systemctl start rsyslog
systemctl status rsyslog
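Note that the hostname command above changes the name only until the next reboot. On systemd-based systems (RHEL/CentOS 7 and later), a sketch of the persistent way:
hostnamectl set-hostname NEWHOSTNAME01
hostnamectl status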
===========================================
memory setting on server
===========================================
useful commands:
free -m
Get top processes utilizing CPU:
ps -eo pcpu,pid,user,args | sort -k 1 -nr | head -5
%CPU PID USER COMMAND
72.7 8322 oracle oracleigt (LOCAL=NO)
72.6 8133 oracle oracleigt (LOCAL=NO)
53.5 8234 oracle oracleigt (LOCAL=NO)
8.3 2183 oracle oracleigt (LOCAL=NO)
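To sort by memory instead of CPU, a sketch (pmem is the %MEM column):
ps -eo pmem,pid,user,args | sort -k 1 -nr | head -5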
See SWAP usage per process (run from /proc):
find . -type f | grep smaps | xargs grep Swap | grep -v "0 kB"
And now get the SQLs text
SELECT SQLTEXT.sql_text,
SQLTEXT.piece ,
SQLTEXT.address,
'kill -9 '||PROCESS.spid AS "LINUX Kill",
SS.sid,
SS.username,
SS.schemaname,
SS.osuser,
SS.process,
SS.machine,
SS.terminal,
SS.program,
SS.type,
SS.module,
SS.logon_time,
SS.event,
SS.service_name,
SS.seconds_in_wait
FROM V$SESSION SS, V$SQLTEXT SQLTEXT, V$PROCESS PROCESS
WHERE SS.sql_address = SQLTEXT.address(+)
AND SS.service_name = 'SYS$USERS' AND SS.paddr = PROCESS.addr
AND PROCESS.spid IN (2183, 8322, 8133, 8234)
ORDER BY SQLTEXT.address, SQLTEXT.piece
===========================================
File processing Examples
===========================================
Delete empty line from file
sed '/^\s*$/d' my_file.sql
Insert commit every 500 rows
awk ' {print;} NR % 500 == 0 { print "commit;"; }' my_file.sql
Delete file with a special character name
ls -l
-rw-r-----+ 1 oracle dba 9715 Jan 30 2020 -ltr
ls -i
537837103 -ltr
find . -maxdepth 1 -type f -inum 537837103
./-ltr
find . -maxdepth 1 -type f -inum 537837103 -delete
ls -i
<no file is there>
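For a name that merely starts with a dash, deleting by inode is not strictly required; giving rm a path that does not begin with "-" (or using "--") also works, a sketch:
rm ./-ltr
rm -- -ltr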
===========================================
Linux DATETIME
===========================================
To return date_time in format YYYYMMDD_hh24miss:
date "+%Y%m%d"_"%H%M%S"
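Typical use is to embed the timestamp in a file name, as in the scripts below; a sketch:
export RUN_DATE=`date "+%Y%m%d"_"%H%M%S"`
cp my_file my_file_${RUN_DATE}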
===========================================
Check for defunct (zombie) processes, and the last boot time:
% ps -ef | grep defu | grep -v grep
03:49:07 up 1 day, 6:49, 1 user, load average: 1.53, 1.58, 1.50
my_user@my_server:~>% who -b
system boot 2018-02-02 21:00
===========================================
#!/bin/bash
export ORA_INST=orainst
#older than 2 days
export DAYS_TO_KEEP=2
#older than 2 hours
export MINUTES_TO_KEEP=120
find /software/oracle/diag/rdbms/${ORA_INST}/${ORA_INST}/trace -type f -name "*.trc" -mtime +${DAYS_TO_KEEP} -exec rm -f {} \;
find /software/oracle/diag/rdbms/${ORA_INST}/${ORA_INST}/trace -type f -name "*.trm" -mtime +${DAYS_TO_KEEP} -exec rm -f {} \;
find /software/oracle/diag/rdbms/${ORA_INST}/${ORA_INST}/trace -type f -name "*.trc" -mmin +${MINUTES_TO_KEEP} -exec rm -f {} \;
bash script:
#!/bin/bash
# rotates /software/oracle/diag/tnslsnr/tha-tot-2-dbu-1/lsnr_igt/trace/lsnr_igt.log
export ORA_INST=igt
export HOSTNAME=tha-tot-2-dbu-1
mv /software/oracle/diag/tnslsnr/${HOSTNAME}/lsnr_${ORA_INST}/trace/lsnr_${ORA_INST}.log /software/oracle/diag/tnslsnr/${HOSTNAME}/lsnr_${ORA_INST}/trace/lsnr_${ORA_INST}.log_bak
/software/oracle/oracle/scripts/delete_arch_files.sh
#!/bin/bash
#Delete archive files
DAYS_TO_KEEP=3
ARCH_DIR=/oracle_db/db2/db_igt/arch
find ${ARCH_DIR} -type f -name "*.arc" -mtime +${DAYS_TO_KEEP} -exec rm {} \;
Infinite loop example:
while true
do
./delete_archive.sh
sleep 2
done
===========================================
#!/bin/bash
HOME_DIR=`pwd`
SOURCE_DIR=/software/oracle/admin/igt/utl_file
TARGET_DIR=/starhome/iu/workarea/OLD_REPORTS/RUS/BEELINE
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
cd $SOURCE_DIR
for file in `ls -1 | grep Russia | grep Beeline`
do
echo "moving file $file"
mv "$file" $TARGET_DIR/
done
#restore IFS
IFS=$SAVEIFS
cd $HOME_DIR
===========================================
delete_old_jobs_processes.sh
#!/bin/bash
LOG_FILE=/software/oracle/oracle/scripts/delete_old_jobs.log
PROCESS_LIST=`ps -ef | grep oracle | grep j0 | awk '{print $1" " $2" " $5" " $8}' | grep -v : | awk '{print $2}'`
touch $LOG_FILE
ps -ef | grep oracle | grep j0 | awk '{print $1" " $2" " $5" " $8}' | grep -v : >>$LOG_FILE
for v_proc in $PROCESS_LIST
do
#echo "kill -9 $v_proc"
kill -9 $v_proc
done
===========================================
Run crontab with local env variables
===========================================
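cron runs jobs with a minimal environment, so variables like PATH or ORACLE_HOME from the login profile are not set. A common workaround is to source the profile inside the crontab entry; a sketch (script path and schedule are placeholders):
0 5 * * * . $HOME/.bash_profile; /starhome/my_user/my_script.sh >> /tmp/my_script.log 2>&1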
===========================================
Add Space to Disk
===========================================
Add more space to a mount point.
In this example, /software/oracle resides on /dev/Volume00/LogVol07.
Volume00 has 13.19G free.
4G of these 13.19G free will be allocated to /software/oracle.
df -hP
/dev/mapper/Volume00-LogVol07
7.8G 7.2G 194M 98% /software/oracle
cat /etc/fstab
/dev/Volume00/LogVol07 /software/oracle ext3 defaults,acl 1 2
root@server:~>% vgs
VG #PV #LV #SN Attr VSize VFree
Volume00 1 17 0 wz--n- 135.75G 13.19G
Volume01 1 1 0 wz--n- 136.70G 33.70G
EE
root@server:~>%lvextend -L +4G /dev/Volume00/LogVol07 && resize2fs /dev/Volume00/LogVol07
SE
root@server:~>%lvextend -L +4G /dev/Volume00/LogVol14 && resize2fs /dev/Volume00/LogVol14
Extending logical volume LogVol07 to 12.00 GB
Logical volume LogVol07 successfully resized
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/Volume00/LogVol07 is mounted on /software/oracle; on-line resizing required
Performing an on-line resize of /dev/Volume00/LogVol07 to 3145728 (4k) blocks.
The filesystem on /dev/Volume00/LogVol07 is now 3145728 blocks long.
df -hP
/dev/mapper/Volume00-LogVol07
12G 7.2G 3.9G 65% /software/oracle
On newer systems, run this:
lvextend -L +4G /dev/mapper/Volume00-LogVol08 && xfs_growfs /dev/mapper/Volume00-LogVol08
%root>lvextend -L +4G /dev/mapper/Volume00-LogVol08 && xfs_growfs /dev/mapper/Volume00-LogVol08
meta-data=/dev/mapper/Volume00-LogVol08 isize=256 agcount=108, agsize=244160 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0 spinodes=0
data = bsize=4096 blocks=26214400, imaxpct=25
= sunit=64 swidth=64 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=64 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 26214400 to 27262976
===========================================
awk
===========================================
awk examples.
===> To print ' , use "'\''" <===
In comma separated file, print only second field
less my_file| awk -F "," '{print $2}'
Add ' to the output
less all_ip | awk -F ',' '{ print "INSERT INTO CAPACITY_SERVERS (DATABASE_IP, INSTANCE_NAME, PORT) VALUES (" "'\''" $2 "'\''" "," "'\''" $5 "'\''" "," $6 ") "}'
INSERT INTO CAPACITY_SERVERS (DATABASE_IP, INSTANCE_NAME, PORT) VALUES ('100.200.300.400','ora_inst',1521)
One more example:
Input Line:
Customer Name,Database IP,Username,Password,Scheme,Port,Months back to check,Sold Capacity,Amount of allowed exceptions
MY_CUST GLR,100.100.999.220,MY_USER,MY_PASS,orainst,1521,36,30,0
awk command:
less sa_customers.txt | awk -F ',' '{print "INSERT INTO CAPACITY_CUSTOMERS(customer_id, service_name, schema_name, database_ip, months_to_check, sold_capacity, allowed_exceptions, access_type, db_link, cdrometer_ind, crm_name, active_ind) VALUES (" "'\''" 222 "'\''" "," "'\''" $1 "'\''" "," "'\''" $3 "'\''" "," $2 "," 7 "," 8 "," 9 "," "'\''" "SA" "'\''" "," "NULL" "," "'\''" "N" "'\''" "," "NULL" "," "'\''" "Y" "'\''" "); " }'
Generated Line:
INSERT INTO CAPACITY_CUSTOMERS(customer_id, service_name, schema_name, database_ip, months_to_check, sold_capacity, allowed_exceptions, access_type, db_link, cdrometer_ind, crm_name, active_ind) VALUES ('222','MY_CUST GLR','MY_USER',100.100.999.220,7,8,9,'SA',NULL,'N',NULL,'Y');
Another Example, convert list to SQL Statement
less ref_tables_list.sql | awk '{print "UPDATE GG_REF_TABLES_LIST SET is_replicated = '\''Y'\'' , gg_group='\''PROV'\'' WHERE table_name = '\''" $1"'\'';" }'
del_exp_backups.sh
Reference
===========================================
Short and nice Linux reference
www.tutorialspoint.com
Advanced Bash-Scripting Guide
===========================================
Port Forwarding script
===========================================
Connect from my_server to remote IP on port 1521 via port forwarding.
Port 1521 is closed, but connection can be done via default tcp port (22).
First - need to enable ssh connection by public key sharing.
Then - use below script.
my_user@my_server:~>% cat port_forwarding.sh
#!/bin/bash
ssh -l iu <REMOTE_IP_A> -p 22 -N -f -C -L 7777:<REMOTE_IP_A>:1521 #A
ssh -l iu <REMOTE_IP_B> -p 22 -N -f -C -L 7778:<REMOTE_IP_B>:1521 #B
sleep 1200
ps -fU root -C ssh | grep "ssh -l" | grep "7777:" | awk '{print $2}' | xargs kill
ps -fU root -C ssh | grep "ssh -l" | grep "7778:" | awk '{print $2}' | xargs kill
Configure the connection in tnsnames.ora:
less /software/oracle/920/network/admin/tnsnames.ora
REMOTE_A=(DESCRIPTION= (ADDRESS= (PROTOCOL=TCP) (HOST=127.0.0.1) (PORT=7777) ) (CONNECT_DATA= (SID=igt)))
REMOTE_B=(DESCRIPTION= (ADDRESS= (PROTOCOL=TCP) (HOST=127.0.0.1) (PORT=7778) ) (CONNECT_DATA= (SID=igt)))
Test the connection:
my_user@my_server:~>% tnsping REMOTE_A
TNS Ping Utility for Linux: Version 9.2.0.5.0 - Production on 05-JUN-2018 08:20:55
Copyright (c) 1997 Oracle Corporation. All rights reserved.
Used parameter files:
/software/oracle/920/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (ADDRESS= (PROTOCOL=TCP) (HOST=127.0.0.1) (PORT=7777)) (CONNECT_DATA= (SID=igt)))
OK (210 msec)
To run an ongoing check from crontab that the connection is open:
crontab
7,23,38,53 * * * * /starhome/my_user/check_connection.sh
/starhome/my_user/check_connection.sh
#!/bin/bash
. /etc/sh/orash/oracle_login.sh orainst
status=`tnsping REMOTE_A | tail -1 | awk '{print $1}'`
echo $status
if [[ $status == 'OK' ]]; then
echo GOOD
else
nohup /starhome/my_user/port_forwarding.sh &
echo BAD
fi
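A lighter-weight check is also possible - a sketch, assuming nc (netcat) is installed: probe the local forwarded port directly instead of running tnsping.
#!/bin/bash
# nc -z only tests that the port accepts a TCP connection, it does not send data
if nc -z 127.0.0.1 7777; then
echo GOOD
else
nohup /starhome/my_user/port_forwarding.sh &
echo BAD
fi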
===========================================
Check if files exist example
===========================================
To check if folder exists:
export TO_FOLDER=/starhome/iu/workarea/ora_exp/BL_EXTRACT_HISTORY
if [[ ! -d $TO_FOLDER ]]; then
mkdir $TO_FOLDER
fi
To check if file exists:
export RUN_DATE=`date +"%Y%m%d"_"%H%M%S"`
WORK_DIR=`pwd`
FROM_FILE=${FROM_FOLDER}/${FILE_BLACK_SPARX}
TO_FILE=${TO_FOLDER}/${FILE_BLACK_SPARX}_${RUN_DATE}
LOG_FILE=${WORK_DIR}/run_log.log
echo "Handlng File: " $FROM_FILE
if [[ -f ${FROM_FILE} ]]; then
echo
echo "===============================================" | tee ${LOG_FILE}
echo "Moving $FROM_FILE to $TO_FILE" | tee ${LOG_FILE}
echo "===============================================" | tee ${LOG_FILE}
echo
mv $FROM_FILE $TO_FILE
fi
Note: this if statement does not work when the pattern matches more than one file!
For Example:
There are several files aa*.log, and we want to move them all to logs/ folder
#!/bin/bash
# This would not work! When the glob expands to more than one file, the -f test gets multiple arguments and fails.
if [[ -f aa*.log ]] ; then
mv aa*.log logs/
fi
A working alternative is to count the matching files first:
file_counter=`ls -1 aa*.log 2>/dev/null | wc -l`
if [[ $file_counter -gt 0 ]]; then
mv aa*.log logs/
fi
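Another sketch that avoids parsing ls output is a bash array with nullglob:
#!/bin/bash
# nullglob makes an unmatched pattern expand to nothing instead of the literal string
shopt -s nullglob
files=(aa*.log)
if [[ ${#files[@]} -gt 0 ]]; then
mv "${files[@]}" logs/
fi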
===========================================
if example
===========================================
For String comparison
if [[ $status == 'OK' ]]; then
echo GOOD
else
echo BAD
fi
For Number comparison
if [[ $status -eq 0 ]]; then
echo GOOD
else
echo BAD
fi
-eq = equal
-ne = not equal
-gt = greater than
-lt = less than
-ge = greater than or equal
-le = less than or equal
===========================================
defunct process
===========================================
A defunct process is a zombie process whose parent process is still running but the process itself is dead.
You cannot kill a <defunct> (zombie) process, as it is already dead.
The only reason the system keeps zombie processes is to keep the exit status for the parent to collect.
If the parent does not collect the exit status, the zombie processes will stay around forever.
The only way to get rid of these zombie processes is to kill (or fix) the parent, or to reboot the server.
For example:
% ps -ef | grep defu | grep -v grep
root 2370 15126 0 Jan22 ? 00:02:06 [save] <defunct>
root 2418 15126 0 Jan22 ? 00:03:37 [save] <defunct>
ri 3681 5528 0 Jan25 ? 00:00:00 [notification] <defunct>
ri 5033 29790 0 Mar29 ? 00:00:00 [ussd_gw] <defunct>
ri 13962 10253 0 Jan24 ? 00:00:00 [alarmer] <defunct>
ri 13964 10253 0 Jan24 ? 00:00:00 [dbrefresh] <defunct>
root 27883 15126 0 Jan22 ? 00:00:21 [save] <defunct>
ri 28913 7604 0 Feb02 ? 00:00:00 [notification] <defunct>
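To see which parent processes are holding the zombies - a sketch; column 3 of ps -ef is the parent PID (PPID):
# List the unique parent PIDs of all defunct processes
ps -ef | grep '<defunct>' | grep -v grep | awk '{print $3}' | sort -u
# Then inspect a specific parent, for example:
ps -fp 15126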
===========================================
Reboot Linux server
===========================================
shutdown -r - Graceful reboot
shutdown -r 5 - Graceful reboot after 5 minutes
shutdown -r now - Graceful reboot now
shutdown -f -r now - Forceful reboot now
init 6 - Graceful reboot, shutting down services in reverse order, per init file.
reboot - Graceful reboot (same as shutdown -r)
reboot -f - Forceful reboot (same as reset in PC)
Reboot from crontab (each day at 04:01)
0 4 * * * /sbin/shutdown -r +1
How to tell when the server was rebooted:
my_user@my_server:~>% uptime
03:49:07 up 1 day, 6:49, 1 user, load average: 1.53, 1.58, 1.50
my_user@my_server:~>% who -b
system boot 2018-02-02 21:00
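Another option, assuming wtmp history is kept, is the last command:
last reboot | head -5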
===========================================
Code example - delete file by size
===========================================
#!/bin/bash
check_file=/starhome/iu/workarea/test_file.dat
# Size limit in MB
let limit_size=1024
# du -hm here reports the size in whole MB, which is compared against the limit
let size=`du -hm $check_file | awk '{print $1}'`
echo $size
if [ $size -gt $limit_size ]; then
rm -f $check_file
else
echo NOT DELETE
fi
===========================================
Code example - delete files by modified date
===========================================
Delete files older than x days, or x minutes.
#!/bin/bash
export ORA_INST=orainst
#older than 2 days
export DAYS_TO_KEEP=2
#older than 2 hours
export MINUTES_TO_KEEP=120
find /software/oracle/diag/rdbms/${ORA_INST}/${ORA_INST}/trace -type f -name "*.trc" -mtime +${DAYS_TO_KEEP} -exec rm -f {} \;
find /software/oracle/diag/rdbms/${ORA_INST}/${ORA_INST}/trace -type f -name "*.trm" -mtime +${DAYS_TO_KEEP} -exec rm -f {} \;
find /software/oracle/diag/rdbms/${ORA_INST}/${ORA_INST}/trace -type f -name "*.trc" -mmin +${MINUTES_TO_KEEP} -exec rm -f {} \;
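With GNU find, the same cleanup can also use the built-in -delete action instead of -exec rm - a sketch on the same trace directory:
find /software/oracle/diag/rdbms/${ORA_INST}/${ORA_INST}/trace -type f -name "*.trc" -mtime +${DAYS_TO_KEEP} -delete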
===========================================
Code example - move file to backup once a month
===========================================
In crontab (running at 01:15 on the 1st day of each month):
15 1 1 * * /usr/scripts/oracle_purge/backup_listener_log.sh
bash script:
#!/bin/bash
# The file being rotated: /software/oracle/diag/tnslsnr/tha-tot-2-dbu-1/lsnr_igt/trace/lsnr_igt.log
export HOSTNAME=tha-tot-2-dbu-1
export ORA_INST=igt
mv /software/oracle/diag/tnslsnr/${HOSTNAME}/lsnr_${ORA_INST}/trace/lsnr_${ORA_INST}.log /software/oracle/diag/tnslsnr/${HOSTNAME}/lsnr_${ORA_INST}/trace/lsnr_${ORA_INST}.log_bak
===========================================
Code example - delete arch files
===========================================
In crontab (run every hour at minute 01):
01 * * * * /software/oracle/oracle/scripts/delete_arch_files.sh
/software/oracle/oracle/scripts/delete_arch_files.sh
#!/bin/bash
#Delete archive files
DAYS_TO_KEEP=3
ARCH_DIR=/oracle_db/db2/db_igt/arch
find ${ARCH_DIR} -type f -name "*.arc" -mtime +${DAYS_TO_KEEP} -exec rm {} \;
===========================================
Code example - infinite loop
===========================================
user@host:~>% less main_delete_archive.sh
while true
do
./delete_archive.sh
sleep 2
done
===========================================
Code example - loop on files, with space in file name
===========================================
IFS stands for Internal Field Separator.
By default it is space, tab and newline.
To process files with spaces in the file name, a simple solution is to change IFS to some other value than space.
HOME_DIR=`pwd`
SOURCE_DIR=/software/oracle/admin/igt/utl_file
TARGET_DIR=/starhome/iu/workarea/OLD_REPORTS/RUS/BEELINE
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
cd $SOURCE_DIR
for file in `ls -1 | grep Russia | grep Beeline`
do
echo "moving file $file"
mv "$file" $TARGET_DIR/
done
#restore IFS
IFS=$SAVEIFS
cd $HOME_DIR
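Another common sketch that handles spaces (and even newlines) in file names, without changing IFS, is find -print0 with a null-delimited read; the -name patterns below are assumed to match the same Russia/Beeline files:
#!/bin/bash
SOURCE_DIR=/software/oracle/admin/igt/utl_file
TARGET_DIR=/starhome/iu/workarea/OLD_REPORTS/RUS/BEELINE
# -print0 and read -d '' separate file names with NUL, so spaces are safe
find "$SOURCE_DIR" -maxdepth 1 -type f -name "*Russia*" -name "*Beeline*" -print0 |
while IFS= read -r -d '' file
do
echo "moving file $file"
mv "$file" "$TARGET_DIR"/
done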
===========================================
Code example - loop on processes, killing only specific one
===========================================
For some reason, some oracle jobs leave behind zombie processes on Linux.
In the database there is no job in a running state, but in Linux there are leftover oracle j0xx processes.
The script below kills these old job processes.
crontab
1 12 * * * /software/oracle/oracle/scripts/delete_old_jobs_processes.sh
delete_old_jobs_processes.sh
#!/bin/bash
LOG_FILE=/software/oracle/oracle/scripts/delete_old_jobs.log
PROCESS_LIST=`ps -ef | grep oracle | grep j0 | awk '{print $1" " $2" " $5" " $8}' | grep -v : | awk '{print $2}'`
touch $LOG_FILE
ps -ef | grep oracle | grep j0 | awk '{print $1" " $2" " $5" " $8}' | grep -v : >>$LOG_FILE
for v_proc in $PROCESS_LIST
do
#echo "kill -9 $v_proc"
kill -9 $v_proc
done
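A shorter way just to list the candidate processes - a sketch assuming pgrep from procps; note it does not reproduce the "started on an earlier day" filter used above:
# -f matches against the full command line, -u limits the search to the oracle user
pgrep -f -u oracle 'j0'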
===========================================
Run crontab with local env variables
===========================================
"bash -l " - set user env variables
5 * * * * bash -l /directory/some/path/xxx.sh
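An alternative sketch is to source the user's profile at the top of the script itself, so the crontab line does not need bash -l (the profile path is an assumption; adjust to the real one):
#!/bin/bash
# Load the user's login environment explicitly
. "$HOME/.bash_profile"
# ... rest of the script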
===========================================
Add Space to Disk
===========================================
Add more space to a mount point.
In this example, /software/oracle resides on /dev/Volume00/LogVol07.
Volume00 has 13.19G free.
4 GB of these 13.19G free will be allocated to /software/oracle.
df -hP
/dev/mapper/Volume00-LogVol07
7.8G 7.2G 194M 98% /software/oracle
cat /etc/fstab
/dev/Volume00/LogVol07 /software/oracle ext3 defaults,acl 1 2
root@ora-se-rh8:~>% pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 Volume00 lvm2 a-- 148.13g <75.54g
root@ora-se-rh8:~>% vgs
VG #PV #LV #SN Attr VSize VFree
Volume00 1 17 0 wz--n- 135.75G 13.19G
Volume01 1 1 0 wz--n- 136.70G 33.70G
EE
root@server:~>%lvextend -L +4G /dev/Volume00/LogVol07 && resize2fs /dev/Volume00/LogVol07
SE
root@server:~>%lvextend -L +4G /dev/Volume00/LogVol14 && resize2fs /dev/Volume00/LogVol14
Extending logical volume LogVol07 to 12.00 GB
Logical volume LogVol07 successfully resized
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/Volume00/LogVol07 is mounted on /software/oracle; on-line resizing required
Performing an on-line resize of /dev/Volume00/LogVol07 to 3145728 (4k) blocks.
The filesystem on /dev/Volume00/LogVol07 is now 3145728 blocks long.
df -hP
/dev/mapper/Volume00-LogVol07
12G 7.2G 3.9G 65% /software/oracle
On newer systems, where the filesystem is xfs, use xfs_growfs instead of resize2fs:
%root>lvextend -L +4G /dev/mapper/Volume00-LogVol08 && xfs_growfs /dev/mapper/Volume00-LogVol08
meta-data=/dev/mapper/Volume00-LogVol08 isize=256 agcount=108, agsize=244160 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0 spinodes=0
data = bsize=4096 blocks=26214400, imaxpct=25
= sunit=64 swidth=64 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=64 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 26214400 to 27262976
Another example
%root>vgdisplay
--- Volume group ---
VG Name OraVg1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 12
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 5
Open LV 5
Max PV 0
Cur PV 1
Act PV 1
VG Size <1.76 TiB
PE Size 4.00 MiB
Total PE 460799
Alloc PE / Size 397031 / 1.51 TiB
Free PE / Size 63768 / 249.09 GiB
VG UUID bUBYrc-EszB-Cp8P-TeZB-mRzJ-8kAO-z8nShm
--- Volume group ---
VG Name Volume00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 31
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 17
Open LV 13
Max PV 0
Cur PV 1
Act PV 1
VG Size 98.13 GiB
PE Size 4.00 MiB
Total PE 25122
Alloc PE / Size 24856 / 97.09 GiB
Free PE / Size 266 / <1.04 GiB
VG UUID 9ogQb1-FsKg-mb9w-7ZMB-dAjK-EkQ6-hiKDKw
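On recent LVM versions the extend-and-grow steps can be combined: lvextend -r (--resizefs) grows the filesystem (ext3/ext4 or xfs) right after extending the logical volume. A sketch on the same volume as above:
lvextend -r -L +4G /dev/mapper/Volume00-LogVol08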
===========================================
rpm
===========================================
rpm -qa - Will list all installations and their version
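A few more everyday rpm queries - a sketch; the package and file names are only examples:
rpm -qi bash - Show information about an installed package
rpm -ql bash - List the files installed by a package
rpm -qf /bin/bash - Find which package owns a file
rpm -qa --last | head - Show the most recently installed/updated packages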
===========================================
Code Example
===========================================
Keep export files and logs history under control
#!/bin/bash
WORK_DIR=/backup/ora_exp
BACKUP_DIR=/backup/ora_exp/for_backup
LOG_DIR=/backup/ora_exp/for_backup/old_log
FILE_NAME=export_igt*dmp
LOG_NAME=export_igt*log
#Delete dmp Files older than 3 Days
KEEP_DAYS=3
for file in `ls -1 ${WORK_DIR}/${FILE_NAME}`
do
find $file -mtime +${KEEP_DAYS} -exec rm {} \;
done
for file in `ls -1 ${BACKUP_DIR}/${FILE_NAME}`
do
find $file -mtime +${KEEP_DAYS} -exec rm {} \;
done
KEEP_DAYS=60
for file in `ls -1 ${LOG_DIR}/${LOG_NAME}`
do
find $file -mtime +${KEEP_DAYS} -exec rm {} \;
done
===========================================
Code Example
===========================================
Delete files if more than X percent is used
#!/bin/sh
LIMIT_VALUE=60
df -hP | grep -vE '^Filesystem|tmpfs|cdrom' | grep /oracle_db/db2 | awk '{ print $5,$6 }' | while read output;
do
echo $output
used=$(echo $output | awk '{print $1}' | sed s/%//g)
partition=$(echo $output | awk '{print $2}')
if [ $used -ge $LIMIT_VALUE ]; then
find /oracle_db/db2/db_igt/arch -name '*.arc' -delete
fi
done
===========================================
open sig file with gpg
===========================================
>% gpg --output memory_target_8192m-13.0.0.1.tar.gz --decrypt memory_target_8192m-13.0.0.1.tar.gz.sig
gpg: directory '/home/akaplan/.gnupg' created
gpg: keybox '/home/akaplan/.gnupg/pubring.kbx' created
gpg: Signature made Tue 15 Sep 2020 08:02:25 AM UTC
gpg: using DSA key 3E5663249FA0F355
gpg: Can't check signature: No public key
The "Can't check signature" message only means the signer's public key is not in the local keyring; the decrypted file was still written.
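To actually verify the signature (not just unpack the file), import the signer's public key first - a sketch; the key file name is only an example:
gpg --import vendor_public_key.asc
gpg --verify memory_target_8192m-13.0.0.1.tar.gz.sig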
===========================================
pacemaker - pcs commands
===========================================
pcs status
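A few other commonly used pcs sub-commands - a sketch; exact sub-command names can vary slightly between pcs versions:
pcs cluster status - Cluster and corosync summary
pcs status resources - Status of cluster resources only
pcs config - Show the full cluster configuration
pcs resource restart <resource> - Restart a specific resource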
===========================================
sar
===========================================
sar -f /var/log/sa/sa01
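Other common sar invocations - a sketch:
sar -u 5 3 - CPU utilization, 3 samples at 5-second intervals
sar -r - Memory utilization for the current day
sar -q - Run queue length and load averages
sar -u -f /var/log/sa/sa01 - CPU history from the day-01 file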
===========================================