Tuesday, August 30, 2022

Slow expdp in Oracle 19.0 and higher

Slow expdp.
The process hangs at the indexes step:
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

The view SYS.KU$_CONSTRAINT_VIEW has more than 1,000,000 rows, and the SQL accessing it is extremely slow.

Investigation
Following this Oracle technote did not help:
Expdp Performance Issue - Datapump expdp is slow and hanging on queries against datapump dictionary views (SYS.KU$ views) (Doc ID 2704705.1)
It suggested running a SQL tuning task for the slow statement and following its recommendation:

-- The tuning task must exist first; CREATE_TUNING_TASK is a function, so capture its name
VARIABLE tname VARCHAR2(64)
EXEC :tname := DBMS_SQLTUNE.create_tuning_task(sql_id => '3kq263ugt7drn', task_name => '3kq263ugt7drn');
EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => '3kq263ugt7drn');

SELECT DBMS_SQLTUNE.report_tuning_task('3kq263ugt7drn') FROM DUAL;

The recommendation was to gather statistics for the table SYS.KU$NOEXP_TAB:

  Recommendation
  --------------
  - Consider collecting optimizer statistics for this table.
    execute dbms_stats.gather_table_stats(ownname => 'SYS', tabname =>
            'KU$NOEXP_TAB', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
            method_opt => 'FOR ALL COLUMNS SIZE AUTO');

But that did not make a difference.

Solution
Following this Oracle technote helped:
Bug 31050896 - Datapump Export Slow on Constraints When Using Privileged User (Doc ID 31050896.8)

Instead of running expdp as a privileged user (e.g. SYSTEM with the SCHEMAS parameter), run expdp as the schema owner.
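
For example (hypothetical schema, directory, and dump file names):

Slow - privileged user exporting another user's schema:
expdp system/<password> SCHEMAS=app_owner DIRECTORY=dp_dir DUMPFILE=app.dmp LOGFILE=app_exp.log

Workaround per Bug 31050896 - run the export as the schema owner:
expdp app_owner/<password> SCHEMAS=app_owner DIRECTORY=dp_dir DUMPFILE=app.dmp LOGFILE=app_exp.log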

Description
If export is started by a privileged user (SYSTEM or a user with the
DATAPUMP_EXP_FULL_DATABASE role), expdp takes a long time exporting constraints.
 
The slow export operation is observed during unload of:
 
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
 
for the following Data Pump internal query:
 
SELECT /*+all_rows*/ SYS_XMLGEN(VALUE(KU$),
 XMLFORMAT.createFormat2('CONSTRAINT_T', '7')), 0 ,
 NVL(KU$.CON1.IND.ANC_OBJ.NAME, KU$.BASE_OBJ.NAME) ,KU$.BASE_OBJ.OWNER_NAME ,
 KU$.BASE_OBJ.TYPE_NAME ,KU$.NAME ,KU$.NAME ,'CONSTRAINT' ,KU$.OWNER_NAME
FROM
SYS.KU$_CONSTRAINT_VIEW KU$ WHERE ...

Workaround
If the export is started by the user that owns the constraints, the issue should be avoided.

"Process J000 is dead" "kkjcre1p: unable to spawn jobq slave process"

Errors in alert.log
Multiple errors:
"Process J000 is dead"
"kkjcre1p: unable to spawn jobq slave process"

Process J000 is dead (pid=10383 req_ver=7762 cur_ver=7762 state=KSOSP_SPAWNED).
2022-08-29T23:59:09.365187+00:00
Process J002 died, see its trace file
2022-08-29T23:59:09.365502+00:00
kkjcre1p: unable to spawn jobq slave process
2022-08-29T23:59:09.365830+00:00
Errors in file /software/oracle/diag/rdbms/igt/igt/trace/igt_cjq0_8620.trc:
2022-08-29T23:59:10.566884+00:00
Process J002 died, see its trace file
2022-08-29T23:59:10.567017+00:00
kkjcre1p: unable to spawn jobq slave process

Check 1
This was checked, but it was not the cause in this case.

Per Oracle technote: "ORA-00610 And/Or "unable to spawn jobq secondary process" And/Or "Process(<>) creation failed" In The Alert Log And/Or TNS-12518/TNS-12500 In Listener Log (Doc ID 416244.1)"

Check your SGA and PGA settings; make sure they are not set too low or too high.
Preferably, PGA_AGGREGATE_TARGET is set at a rate of 2 MB per process, as defined by the PROCESSES parameter.

Current settings
pga_aggregate_limit  = 60129542144
pga_aggregate_target = 15728640000 
processes            = 1000

So pga_aggregate_target should be higher than 1000 processes * 2 MB = 2 GB.
It is currently set to ~15 GB, which is well above that.
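
These values can be confirmed with a standard query against V$PARAMETER:

SELECT name, value
  FROM V$PARAMETER
 WHERE name IN ('pga_aggregate_limit', 'pga_aggregate_target', 'processes');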

Check 1 - OK


Check 2
Check that kernel.shmmax and kernel.shmall are sized appropriately


shmmax - should be "Half the size of physical memory in bytes"
shmall - should be "Greater than or equal to the value of shmmax, in pages."
shmmni - should be set to 4096

/sbin/sysctl -a | grep shm
This command displays the details of the shared memory segment sizes.

Per Oracle Technote 
What is the maximum value of SHMMAX for a 64-bit (x86-64) Linux system?
Oracle Global Customer Support officially recommends a "maximum" for SHMMAX of "1/2 of physical RAM".

Host memory is 193280Mb (202668769280 bytes)
free -m
            total     used   free   shared  buff/cache   available
Mem:       193280    82467  60637    33699       50175       76511
Swap:       10239       0   10239

/sbin/sysctl -a | grep shm
kernel.shmmax=549755813888
kernel.shmall=268435456

kernel.shmmax is higher than the host memory!


Edit /etc/sysctl.conf to set kernel.shmmax and kernel.shmall:

193280*1024*1024/2 = 101334384640      (shmmax: half of physical RAM, in bytes)
101334384640/(4*1024) = 24739840       (shmall: shmmax divided by the 4 KB page size, in pages)
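
A small shell sketch to compute these values on any host (MemTotal in /proc/meminfo is in kB; the page size comes from getconf):

PHYS_BYTES=$(awk '/MemTotal/ {print $2*1024}' /proc/meminfo)  # physical RAM in bytes
SHMMAX=$((PHYS_BYTES / 2))                                    # half of physical RAM
SHMALL=$((SHMMAX / $(getconf PAGE_SIZE)))                     # shmmax expressed in pages
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"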

vi /etc/sysctl.conf 
kernel.shmmax = 101334384640
kernel.shmall = 24739840


The kernel still reports the old values:
root@server>% /sbin/sysctl -a | grep shm
kernel.shmall = 268435456
kernel.shmmax = 549755813888
kernel.shmmni = 4096

Apply the new settings:
root@server>%/sbin/sysctl --system


After applying:
root@server>% /sbin/sysctl -a | grep shm
kernel.shmall = 24739840
kernel.shmmax = 101334384640


Monday, August 29, 2022

Datapump Extract not running. Drop + Create + Resync incoming file sequence

Datapump Extract is not running - how to restart it.

DBLOGIN USERID OGG PASSWORD XXXXXXX

GGSCI (qanfv-2-dbs-01) 58> INFO DPM_S_01 DETAIL

EXTRACT    DPM_S_01  Last Started 2022-08-29 08:30   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
Process ID           15414
Log Read Checkpoint  File /software/ogg/191/dirdat/01/out/es000000000
                     First Record  RBA 0
==================
Drop and create
==================
STOP EXTRACT dpm_s_01
DELETE EXTRACT dpm_s_01
ADD EXTRACT dpm_s_01 EXTTRAILSOURCE /software/ogg/191/dirdat/01/out/es
ADD RMTTRAIL /software/ogg/191/dirdat/01/in/es EXTRACT dpm_s_01
START EXTRACT dpm_s_01

==================
Resync
==================
You might need to reset the extract to point at the correct trail file sequence:


GGSCI (qanfv-2-dbs-01) 59> INFO DPM_S_01

EXTRACT    DPM_S_01  Last Started 2022-08-29 09:12   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:00 ago)
Process ID           23093
Log Read Checkpoint  File /software/ogg/191/dirdat/01/out/es000000000
                     First Record  RBA 0


oracle@qanfv-2-dbs-01:/software/ogg/191>% ls -l /software/ogg/191/dirdat/01/out/es*
-rw-r----- 1 oracle dba  12194 Aug 29 08:15 /software/ogg/191/dirdat/01/out/es000008488
-rw-r----- 1 oracle dba 784527 Aug 29 09:14 /software/ogg/191/dirdat/01/out/es000008489


STOP EXTRACT DPM_S_01
ALTER EXTRACT DPM_S_01, EXTSEQNO 8488, EXTRBA 0
START EXTRACT DPM_S_01
STATS EXTRACT DPM_S_01
INFO DPM_S_01

GGSCI (qanfv-2-dbs-01 as OGG@igt) 69> INFO DPM_S_01

EXTRACT    DPM_S_01  Last Started 2022-08-29 09:20   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:03 ago)
Process ID           24073
Log Read Checkpoint  File /software/ogg/191/dirdat/01/out/es000008489
                     2022-08-29 09:20:07.000000  RBA 790916

Wednesday, August 24, 2022

Move Tables and Indexes to a new Tablespace

==========================================
Move Tables and Indexes to a new Tablespace by Example
==========================================

PURGE RECYCLEBIN;

--Create New Tablespace
CREATE TABLESPACE NEW_TABLE_TBS DATAFILE '/oracle_db/db1/db_igt/new_tab_tbs.dbf' SIZE 100M AUTOEXTEND ON MAXSIZE 30000M;

CREATE TABLESPACE NEW_INDEX_TBS DATAFILE '/oracle_db/db1/db_igt/new_ind_tbs.dbf' SIZE 100M AUTOEXTEND ON MAXSIZE 30000M;

--For Non Partitioned Objects
SELECT 'ALTER TABLE '||owner||'.'||table_name||' MOVE TABLESPACE NEW_TABLE_TBS;' 
  FROM DBA_TABLES 
 WHERE tablespace_name = 'OLD_TABLE';

ALTER TABLE COLLECTOR.EXT_OVMD_CTP MOVE TABLESPACE NEW_TABLE_TBS;

--For Partitioned Objects
SELECT 'ALTER TABLE '||table_owner||'.'||table_name||' MOVE PARTITION '||PARTITION_NAME||' TABLESPACE NEW_TABLE_TBS;' 
  FROM  DBA_TAB_PARTITIONS 
 WHERE tablespace_name = 'OLD_TABLE';

ALTER TABLE COLLECTOR.OVMD_COUNTER_DATA MOVE PARTITION P_25_31 TABLESPACE OVMD_TABLE_TBS;
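
--Moving a table or partition marks its indexes (or index partitions) UNUSABLE. A quick check for what needs rebuilding:
SELECT owner, index_name, status
  FROM DBA_INDEXES
 WHERE status = 'UNUSABLE';

SELECT index_owner, index_name, partition_name, status
  FROM DBA_IND_PARTITIONS
 WHERE status = 'UNUSABLE';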

--For Non Partitioned Indexes
SELECT 'ALTER INDEX '||owner||'.'||index_name||' REBUILD TABLESPACE NEW_INDEX_TBS;'
  FROM DBA_INDEXES
 WHERE tablespace_name = 'OLD_INDEX';

ALTER INDEX COLLECTOR.OVMD_COUNTER_IX REBUILD TABLESPACE NEW_INDEX_TBS;


--For Partitioned Indexes
SELECT 'ALTER INDEX '||index_owner||'.'||index_name||' REBUILD PARTITION '||partition_name||' TABLESPACE NEW_INDEX_TBS;'
  FROM DBA_IND_PARTITIONS
 WHERE tablespace_name = 'OLD_INDEX';


ALTER INDEX COLLECTOR.OVMD_COUNTER_DATA_A2 REBUILD PARTITION P_01_06 TABLESPACE NEW_INDEX_TBS;

--Check
SELECT * FROM DBA_SEGMENTS WHERE tablespace_name = 'OLD_TABLE';
SELECT * FROM DBA_TAB_PARTITIONS WHERE tablespace_name = 'OLD_TABLE';
SELECT * FROM DBA_IND_PARTITIONS WHERE tablespace_name = 'OLD_TABLE';

SELECT * FROM DBA_SEGMENTS WHERE tablespace_name = 'OLD_INDEX';
SELECT * FROM DBA_TAB_PARTITIONS WHERE tablespace_name = 'OLD_INDEX';
SELECT * FROM DBA_IND_PARTITIONS WHERE tablespace_name = 'OLD_INDEX';

--Drop Old Tablespaces (only after the check queries above return no rows)
DROP TABLESPACE OLD_TABLE INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE OLD_INDEX INCLUDING CONTENTS AND DATAFILES;

Shrink space Oracle

=======================
Shrink Table
=======================
ALTER TABLE MY_USER.MY_TABLE ENABLE ROW MOVEMENT;
ALTER TABLE MY_USER.MY_TABLE SHRINK SPACE;
ALTER TABLE MY_USER.MY_TABLE DEALLOCATE UNUSED;

=======================
Shrink Index
=======================
SELECT INDEX_NAME FROM DBA_INDEXES 
WHERE OWNER = 'MY_USER' AND table_name = 'MY_TABLE';
ALTER INDEX MY_USER.MY_INDEX REBUILD ONLINE;

=======================
Rebuild Partitioned Indexes
=======================
SELECT 'ALTER INDEX '||index_owner||'.'||index_name||' REBUILD PARTITION '||partition_name||';'
  FROM DBA_IND_PARTITIONS
 WHERE index_name = 'MY_INDEX'
   AND index_owner = 'MY_OWNER';
  
ALTER INDEX DEU_O2Q7Q_SPARX.SFI_CUSTOMER_PROFILE_PK REBUILD PARTITION SYS_P49;

=======================
Shrink SubPartition
=======================
ALTER TABLE MY_USER.MY_TABLE ENABLE ROW MOVEMENT;
ALTER TABLE MY_USER.MY_TABLE MODIFY SUBPARTITION SYS_SUBP315772  SHRINK SPACE;
ALTER TABLE MY_USER.MY_TABLE MODIFY SUBPARTITION SYS_SUBP315772  DEALLOCATE UNUSED; 


Generate the SQL above:
SELECT 'ALTER TABLE '||table_name||' MODIFY SUBPARTITION '||subpartition_name||' SHRINK SPACE;'||CHR(13)||
       'ALTER TABLE '||table_name||' MODIFY SUBPARTITION '||subpartition_name||' DEALLOCATE UNUSED;' as sql_cmd
  FROM (
SELECT USER_TAB_PARTITIONS.table_name,
       USER_TAB_SUBPARTITIONS.subpartition_name,
       USER_TAB_SUBPARTITIONS.last_analyzed,
       ROUND(USER_TAB_SUBPARTITIONS.blocks*DBA_TABLESPACES.block_size/1024/1024) as sub_part_used_mb,
       USER_TAB_SUBPARTITIONS.num_rows as sub_part_num_rows
  FROM USER_TAB_PARTITIONS,
       USER_TAB_SUBPARTITIONS,
       DBA_TABLESPACES
 WHERE USER_TAB_SUBPARTITIONS.table_name = USER_TAB_PARTITIONS.table_name
   AND USER_TAB_SUBPARTITIONS.partition_name = USER_TAB_PARTITIONS.partition_name
   AND USER_TAB_SUBPARTITIONS.last_analyzed < TRUNC(SYSDATE)-2
   AND USER_TAB_PARTITIONS.partition_name <> 'BASE_PART'
   AND DBA_TABLESPACES.tablespace_name = 'IGT_TABLE_BIG'
   AND ROUND(USER_TAB_SUBPARTITIONS.blocks*DBA_TABLESPACES.block_size/1024/1024) > 10
   AND USER_TAB_PARTITIONS.table_name = 'SGA_W_PSMS_SUBSCRIBER'
   );



=======================
Generate for non-partitioned Objects
=======================
set linesize 120
set pagesize 0
set heading off
set feedback off
spool enable_row_movement.sql
SELECT 'ALTER TABLE '||owner||'.'||table_name||'  ENABLE ROW MOVEMENT;' FROM DBA_TABLES WHERE tablespace_name = 'IGT_TABLE';
spool off
spool shrink_space.sql
SELECT 'ALTER TABLE '||owner||'.'||table_name||'  SHRINK SPACE;' FROM DBA_TABLES WHERE tablespace_name = 'IGT_TABLE';
spool off
spool deallocate_unused.sql
SELECT 'ALTER TABLE '||owner||'.'||table_name||'  DEALLOCATE UNUSED;' FROM DBA_TABLES WHERE tablespace_name = 'IGT_TABLE';
spool off
spool rebuild_index_online.sql
SELECT 'ALTER INDEX '||owner||'.'||index_name||' REBUILD ONLINE;' FROM DBA_INDEXES WHERE tablespace_name = 'IGT_TABLE';
spool off

@enable_row_movement.sql
@shrink_space.sql
@deallocate_unused.sql
@rebuild_index_online.sql

For Partitioned Objects:
ALTER TABLE MY_OWNER.MY_TABLE MODIFY PARTITION P_01_06 SHRINK SPACE;

ALTER TABLE MY_OWNER.MY_TABLE MOVE PARTITION P_01_06 TABLESPACE NEW_TBS_01;

ALTER TABLE MY_OWNER.MY_TABLE MOVE SUBPARTITION P_01_06 TABLESPACE NEW_TBS_01;


To Generate the SQL:
SELECT 'ALTER TABLE '||table_name||' MODIFY PARTITION '||partition_name||' SHRINK SPACE;' 
FROM USER_TAB_PARTITIONS WHERE table_name = 'MY_TABLE';

Tuesday, August 16, 2022

Shrink Temporary Tablespace and Move temp file to another location

=====================
Shrink Temporary Tablespace
=====================
SELECT file_name,
       tablespace_name, 
       ROUND(SUM(bytes)/1024/1024) TOTAL_MB 
  FROM DBA_TEMP_FILES 
 GROUP BY file_name,tablespace_name;

FILE_NAME                                     TABLESPACE_NAME    TOTAL_MB
--------------------------------------------- ------------------ --------
D:\ORACLE_DB\DB1\DB_IGT\ORA_TEMPORARY_01.DBF  TEMPORARY              4400
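
Before shrinking, check how much of the allocated temp space is actually in use (a standard query against V$TEMP_SPACE_HEADER):

SELECT tablespace_name,
       ROUND(SUM(bytes_used)/1024/1024) as used_mb,
       ROUND(SUM(bytes_free)/1024/1024) as free_mb
  FROM V$TEMP_SPACE_HEADER
 GROUP BY tablespace_name;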

ALTER TABLESPACE TEMPORARY SHRINK SPACE KEEP 200M;
Tablespace altered.

SELECT file_name,
       tablespace_name, 
       ROUND(SUM(bytes)/1024/1024) TOTAL_MB 
  FROM DBA_TEMP_FILES 
GROUP BY file_name,tablespace_name;

FILE_NAME                                     TABLESPACE_NAME    TOTAL_MB
--------------------------------------------- ------------------ --------
D:\ORACLE_DB\DB1\DB_IGT\ORA_TEMPORARY_01.DBF  TEMPORARY               201


=====================
Move temp file to another location
=====================

Currently, the temporary tablespace has one file:
/oracle_db/db1/db_igt/ora_temporary_01.dbf
It needs to be moved to the /oracle_db/db2/db_igt/datafile location.




==============
Steps
==============
sqlplus / as sysdba

SET LINESIZE 120
COL NAME FOR A60
COL FILE_NAME FOR A60

SELECT name, status FROM V$TEMPFILE;

NAME                                       STATUS
------------------------------------------ --------
/oracle_db/db1/db_igt/ora_temporary_01.dbf ONLINE


SELECT file_id, tablespace_name, file_name, status, ROUND(bytes/1024/1024) as used_mb, ROUND(maxbytes/1024/1024) as total_mb
FROM DBA_TEMP_FILES;

FILE_ID TABLESPACE_NAME FILE_NAME                                  STATUS   USED_MB    TOTAL_MB
------- --------------- ------------------------------------------ -------- ---------- ----------
1       TEMPORARY       /oracle_db/db1/db_igt/ora_temporary_01.dbf ONLINE   6000       5103

!mkdir /oracle_db/db2/db_igt
!mkdir /oracle_db/db2/db_igt/datafile

ALTER TABLESPACE TEMPORARY ADD TEMPFILE '/oracle_db/db2/db_igt/datafile/ora_temporary_01.dbf' SIZE 100M AUTOEXTEND ON MAXSIZE 8000M;

Tablespace altered.


SELECT name, status FROM V$TEMPFILE;

NAME                                                  STATUS
----------------------------------------------------- --------
/oracle_db/db1/db_igt/ora_temporary_01.dbf            ONLINE
/oracle_db/db2/db_igt/datafile/ora_temporary_01.dbf   ONLINE


ALTER DATABASE TEMPFILE '/oracle_db/db1/db_igt/ora_temporary_01.dbf' OFFLINE;
Database altered.

SELECT name, status FROM V$TEMPFILE;
NAME                                                  STATUS
----------------------------------------------------- ----------
/oracle_db/db1/db_igt/ora_temporary_01.dbf            OFFLINE
/oracle_db/db2/db_igt/datafile/ora_temporary_01.dbf   ONLINE

ALTER DATABASE TEMPFILE '/oracle_db/db1/db_igt/ora_temporary_01.dbf' DROP INCLUDING DATAFILES;

ERROR at line 1:
ORA-25152: TEMPFILE cannot be dropped at this time


There are active sessions that are using the temp files:

SELECT B.tablespace, B.segfile#, A.sid, A.serial#,
       A.username, A.osuser, A.status
  FROM V$SESSION A,
       V$SORT_USAGE B
 WHERE A.saddr = B.session_addr;
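
To generate the disconnect statements directly from the same join:

SELECT 'ALTER SYSTEM DISCONNECT SESSION '''||A.sid||','||A.serial#||''' IMMEDIATE;'
  FROM V$SESSION A,
       V$SORT_USAGE B
 WHERE A.saddr = B.session_addr;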

Kill the active sessions:
ALTER SYSTEM DISCONNECT SESSION '590,39359' IMMEDIATE;
ALTER SYSTEM DISCONNECT SESSION '970,6840' IMMEDIATE;
ALTER SYSTEM DISCONNECT SESSION '22,42942' IMMEDIATE;

ALTER DATABASE TEMPFILE '/oracle_db/db1/db_igt/ora_temporary_01.dbf' DROP INCLUDING DATAFILES;
Database altered.

SELECT name, status FROM V$TEMPFILE;

NAME                                                  STATUS
----------------------------------------------------- ---------
/oracle_db/db2/db_igt/datafile/ora_temporary_01.dbf   ONLINE

Sunday, August 7, 2022

How to: Resolve DRBD split-brain recovery manually


After split brain has been detected, one node will always have the resource in a StandAlone connection state. The other might either also be in the StandAlone state (if both nodes detected the split brain simultaneously), or in WFConnection (if the peer tore down the connection before the other node had a chance to detect split brain).

At this point, unless you configured DRBD to automatically recover from split brain, you must manually intervene by selecting one node whose modifications will be discarded (this node is referred to as the split brain victim). 
This intervention is made with the following commands:

Below is the implementation of the above documentation.

Regular Flow
On the secondary site:
drbdadm secondary Ora_Exp
drbdadm disconnect Ora_Exp
drbdadm -- --discard-my-data connect Ora_Exp

On the primary site:
drbdadm connect Ora_Exp
drbdadm status



If the above does not work
On the secondary site:
root@server-1b:~>% drbdadm secondary Ora_Exp
root@server-1b:~>% drbdadm disconnect Ora_Exp
root@server-1b:~>% drbdadm -- --discard-my-data connect Ora_Exp
root@server-1b:~>% drbdadm invalidate Ora_Exp


On the primary site:
root@server-1a:~>% drbdadm status

root@server-1a:~>% drbdadm connect Ora_Exp



See progress and log:
/sys/kernel/debug/drbd/resources/<resource_name>/connections/<server_name>/0/proc_drbd/
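
A convenient way to follow the resync, assuming the standard watch utility is installed:
root@server-1a:~>% watch -n 5 drbdadm status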

See progress
root@server-1a:~>% drbdadm status
Ora_Exp role:Primary
  disk:UpToDate
  server-1b role:Secondary
    peer-disk:UpToDate

Ora_Online role:Primary
  disk:UpToDate
  server-1b role:Secondary
    peer-disk:UpToDate

db1 role:Primary
  disk:UpToDate
  server-1b role:Secondary congested:yes ap-in-flight:96 rs-in-flight:14336
    replication:SyncSource peer-disk:Inconsistent done:81.03

db2 role:Primary
  disk:UpToDate
  server-1b role:Secondary congested:yes ap-in-flight:32 rs-in-flight:14336
    replication:SyncSource peer-disk:Inconsistent done:86.60


In this example, Ora_Exp and Ora_Online are already synced.
db1 and db2 are in the process of syncing.
The numbers 81.03 and 86.60 are the sync progress, in percent.

Once the progress reaches 100%, the two sites are in sync:
On site A:
Ora_Exp role:Primary
  disk:UpToDate
  server-1b role:Secondary
    peer-disk:UpToDate

On site B:
Ora_Exp role:Secondary
  disk:UpToDate
  server-1a role:Primary
    peer-disk:UpToDate



Example:
For each resource, run the first three commands on the secondary site, then run the connect and status commands on the primary site:

drbdadm secondary Ora_Exp
drbdadm disconnect Ora_Exp
drbdadm -- --discard-my-data connect Ora_Exp
drbdadm connect Ora_Exp
drbdadm status
 
drbdadm secondary Ora_Online
drbdadm disconnect Ora_Online
drbdadm -- --discard-my-data connect Ora_Online
drbdadm connect Ora_Online
drbdadm status

drbdadm secondary db1
drbdadm disconnect db1
drbdadm -- --discard-my-data connect db1
drbdadm connect db1
drbdadm status

drbdadm secondary db2
drbdadm disconnect db2
drbdadm -- --discard-my-data connect db2
drbdadm connect db2
drbdadm status
 
Finally, run drbdadm status on both sites.





How to: Make a node Primary
On the node that is to become Primary:
root@server-1a:~>% drbdadm status
ogg role:Secondary
  disk:UpToDate
  server-1b connection:Connecting
root@server-1a:~>% drbdadm primary ogg
root@server-1a:~>% drbdadm disconnect ogg
root@server-1a:~>% drbdadm connect ogg
root@server-1a:~>% drbdadm status
ogg role:Primary
  disk:UpToDate
  server-1b connection:Connecting


drbdadm primary
Promotes the resource's device into the primary role. You need to do this before any access to the device, such as creating or mounting a file system.

drbdadm secondary
Brings the device back into the secondary role.

Reference
https://manpages.ubuntu.com/manpages/xenial/en/man8/drbdadm.8.html


=================
Correct status
=================
root@SRV901G:~>% drbdadm  status
Ora_Exp role:Primary
  disk:UpToDate
  SRV902G role:Secondary
    peer-disk:UpToDate

Ora_Online role:Primary
  disk:UpToDate
  SRV902G role:Secondary
    peer-disk:UpToDate

db1 role:Primary
  disk:UpToDate
  SRV902G role:Secondary
    peer-disk:UpToDate

db2 role:Primary
  disk:UpToDate
  SRV902G role:Secondary
    peer-disk:UpToDate

root@SRV902G:~>% drbdadm  status
Ora_Exp role:Secondary
  disk:UpToDate
  SRV901G role:Primary
    peer-disk:UpToDate

Ora_Online role:Secondary
  disk:UpToDate
  SRV901G role:Primary
    peer-disk:UpToDate

db1 role:Secondary
  disk:UpToDate
  SRV901G role:Primary
    peer-disk:UpToDate

db2 role:Secondary
  disk:UpToDate
  SRV901G role:Primary
    peer-disk:UpToDate

=================
Incorrect Status
=================
root@SRVDBD901G:~>% drbdadm status
Ora_Exp role:Primary
  disk:UpToDate
  SRVDBD902G connection:StandAlone

Ora_Online role:Primary
  disk:UpToDate
  SRVDBD902G connection:StandAlone

db1 role:Primary
  disk:UpToDate
  SRVDBD902G connection:StandAlone

db2 role:Primary
  disk:UpToDate
  SRVDBD902G connection:StandAlone


root@SRVDBD902G:~>%  drbdadm status
Ora_Exp role:Secondary
  disk:UpToDate
  SRVDBD901G connection:StandAlone

Ora_Online role:Secondary
  disk:UpToDate
  SRVDBD901G connection:StandAlone

db1 role:Secondary
  disk:UpToDate
  SRVDBD901G connection:StandAlone

db2 role:Secondary
  disk:UpToDate
  SRVDBD901G connection:StandAlone




If the connection between the sites cannot be restored, the DRBD metadata needs to be rebuilt.
1. Check /var/log/messages.
   Possible error:
Jun  5 14:42:05 my_host kernel: drbd Ora_Exp my_host: conn( Connecting -> Connected ) peer( Unknown -> Secondary )
Jun  5 14:42:05 my_host kernel: drbd Ora_Exp/0 drbd4 my_host: drbd_sync_handshake:
Jun  5 14:42:05 my_host kernel: drbd Ora_Exp/0 drbd4 my_host: self A1AFF6AACC929146:4740BC40E4819C9C:54BF4A72763A34A6:0CA1C9B60B25B87E bits:8184550 flags:122
Jun  5 14:42:05 my_host kernel: drbd Ora_Exp/0 drbd4 my_host: peer D3401D4CCA4EA0C8:54BF4A72763A34A6:4740BC40E4819C9C:F0CF106B5ED25C7C bits:676 flags:120
Jun  5 14:42:05 my_host kernel: drbd Ora_Exp/0 drbd4 my_host: uuid_compare()=unrelated-data by rule 100
Jun  5 14:42:05 my_host kernel: drbd Ora_Exp/0 drbd4: Unrelated data, aborting!

The reason for the issue is that during the split-brain the data diverged beyond the point where the peers can recognize each other's generation identifiers.

The solution is to recreate the metadata and resync the data.

Run the following commands:

On Secondary node:
drbdadm down Ora_Exp
drbdadm wipe-md Ora_Exp
drbdadm create-md Ora_Exp
drbdadm up Ora_Exp
drbdadm disconnect Ora_Exp
drbdadm -- --discard-my-data connect Ora_Exp

Right after the above, execute on the Primary node:
drbdadm connect Ora_Exp

Then check the status on both nodes:
drbdadm status