==============================
Export and Import Full Database
==============================
In a bash script:
#!/bin/bash
RUN_DATE=`date +"%Y%m%d"_"%H%M%S"`
expdp / DIRECTORY=DP_DIR dumpfile=exp_dump_${RUN_DATE}.dmp LOGFILE=exp_dump_${RUN_DATE}.log full=y
Or with an explicit user:
expdp system/passwd@orainst FULL=Y DUMPFILE=FULL.DMP DIRECTORY=EXP_DIR LOGFILE=FULL_EXP.log
Import:
impdp system/passwd@orainst FULL=Y DUMPFILE=FULL.DMP DIRECTORY=EXP_DIR LOGFILE=FULL_IMP.log
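Before running a full export, it can help to confirm that the directory object exists and points where you expect. A minimal bash sketch, assuming the DP_DIR object from above and sqlplus access as system:
#!/bin/bash
# Sketch: show the OS path behind the DP_DIR directory object before exporting.
sqlplus -s system/passwd@orainst <<'EOF'
SET PAGESIZE 0 FEEDBACK OFF
SELECT directory_path FROM dba_directories WHERE directory_name = 'DP_DIR';
EOF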
==============================
Export and Import One Schema
==============================
expdp user_a/user_a_pass@orainst DUMPFILE=USER_A.DMP DIRECTORY=EXP_DIR LOGFILE=USER_A_EXP.log
--Create USER_A on the new instance
CREATE USER USER_A IDENTIFIED BY USER_A DEFAULT TABLESPACE IGT_TABLE PROFILE DEFAULT ACCOUNT UNLOCK;
GRANT CONNECT, RESOURCE TO USER_A;
GRANT CREATE PROCEDURE, EXECUTE ANY PROCEDURE TO USER_A;
GRANT CREATE SNAPSHOT, CREATE TRIGGER, CREATE ANY INDEX TO USER_A;
GRANT CREATE SYNONYM TO USER_A;
GRANT UNLIMITED TABLESPACE TO USER_A;
GRANT QUERY REWRITE TO USER_A;
GRANT CREATE ANY DIRECTORY TO USER_A;
GRANT EXECUTE ON DBMS_LOCK TO USER_A;
GRANT SELECT_CATALOG_ROLE TO USER_A;
GRANT CHANGE NOTIFICATION TO USER_A;
Another option:
CREATE USER USER_XXX IDENTIFIED BY USER_XXX DEFAULT TABLESPACE IGT_TABLE TEMPORARY TABLESPACE TEMPORARY PROFILE DEFAULT ACCOUNT UNLOCK;
GRANT CONNECT, RESOURCE TO USER_XXX;
GRANT CREATE PROCEDURE, EXECUTE ANY PROCEDURE TO USER_XXX;
GRANT CREATE SNAPSHOT, CREATE TRIGGER, CREATE ANY INDEX TO USER_XXX;
GRANT CREATE SYNONYM TO USER_XXX;
GRANT UNLIMITED TABLESPACE TO USER_XXX;
GRANT QUERY REWRITE TO USER_XXX;
GRANT CREATE ANY DIRECTORY TO USER_XXX;
GRANT EXECUTE ON DBMS_LOCK TO USER_XXX;
GRANT SELECT_CATALOG_ROLE TO USER_XXX;
GRANT CHANGE NOTIFICATION TO USER_XXX;
EXIT;
Import into the new instance:
impdp user_a/user_a_pass@orainst DUMPFILE=USER_A.DMP DIRECTORY=EXP_DIR LOGFILE=USER_A_IMP.log
==============================
Make your life easy
Prepare an executable export command file:
vi exp.cmd
expdp system/system_pass@orainst $1
Prepare estimate export parameters file
vi estimate.prm
ESTIMATE_ONLY=Y
ESTIMATE=BLOCKS
SCHEMAS=MY_SCHEMA_A,MY_SCHEMA_B,MY_SCHEMA_C
#DUMPFILE=exp_my_schema.dmp - not supported for ESTIMATE_ONLY=Y
Run export estimate:
./exp.cmd parfile=estimate.prm
Prepare export parameters file:
vi exp_my_schema.prm
DIRECTORY=EXP_DIR
DUMPFILE=exp_my_schema.dmp
LOGFILE=exp_my_schema.log
SCHEMAS=MY_SCHEMA
REUSE_DUMPFILES=YES
STATUS=60
Run export:
./exp.cmd parfile=exp_my_schema.prm
-----------------------------------------------
STATUS=60
By default STATUS=0, meaning no periodic status output.
Setting STATUS=N makes expdp write a status report to the log every N seconds.
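Besides STATUS, a running job can be monitored by attaching to it. A sketch; the job name here is hypothetical - actual names are listed in DBA_DATAPUMP_JOBS:
expdp system/system_pass@orainst ATTACH=SYS_EXPORT_SCHEMA_01
At the Export> prompt, typing STATUS prints the current progress, and CONTINUE_CLIENT resumes the log output.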
==============================
Estimate size for export file.
==============================
ESTIMATE_ONLY
A parameter that tells the expdp process to skip the actual export, and only estimate its size.
Values: Y/N
Default=N
ESTIMATE
Method to estimate the size of a segment.
Values: BLOCKS/STATISTICS
Default: BLOCKS. Number of segment blocks * block size.
The output:
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. estimated "MY_SCHEMA"."A_SOME_TABLE" 233213 KB
...
. estimated "MY_SCHEMA"."WORLDWIDE_LIST_NAME" 0 KB
Total estimation using BLOCKS method: 2.965 GB
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 12:01:52
==============================
version parameter
==============================
Downgrade.
To export from a newer version (say 11.2) and import into an older version (say 10.2), the VERSION parameter must be used explicitly:
DIRECTORY=EXP_DIR
DUMPFILE=exp_my_schema.dmp
LOGFILE=exp_my_schema.log
SCHEMAS=MY_SCHEMA
REUSE_DUMPFILES=YES
VERSION=10.2
==============================
compression
==============================
Compression can be used if the Advanced Compression option of Enterprise Edition is enabled, and the Oracle version is 11.2 or higher.
COMPRESSION=ALL: Both metadata and data are compressed.
COMPRESSION=DATA_ONLY: Only data is compressed.
COMPRESSION=METADATA_ONLY: Only metadata is compressed. This is the default setting.
COMPRESSION=NONE: Nothing is compressed.
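A minimal command-line sketch; the schema and file names are placeholders, and COMPRESSION=ALL assumes the Advanced Compression license mentioned above:
expdp my_user/my_pass@orainst DIRECTORY=EXP_DIR SCHEMAS=MY_SCHEMA COMPRESSION=ALL DUMPFILE=exp_compressed.dmp LOGFILE=exp_compressed.log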
==============================
Example of export and import of schema
==============================
Short and simple example of export from one instance and import into another instance.
The import is done to the same schema name.
A. Export
expdp source_user/source_pass@source_inst PARFILE=param_exp.prm
param_exp.prm
SCHEMAS=source_user
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=SOURCE_USER_EXP_<DATE_TIME>.DMP
LOGFILE=SOURCE_USER_EXP_<DATE_TIME>.LOG
REUSE_DUMPFILES=YES
B. Create user on new Instance
CREATE USER source_user IDENTIFIED BY source_user;
GRANT CONNECT, RESOURCE TO source_user;
ALTER USER source_user DEFAULT TABLESPACE NEW_INST_TBS_01;
In SH: just run the script create_vip_user.sql (under /starhome/dbupgrade/<host>/<version>/Sa/Full_Install/Install_Base_<version>/create_vip_user.sql).
To recreate the user:
conn system/pass@orainst
--Drop user if it exists
BEGIN
EXECUTE IMMEDIATE 'DROP USER MY_USER CASCADE';
EXCEPTION
-- Ignore ORA-01918 (user does not exist); re-raise anything else
WHEN OTHERS THEN
IF SQLCODE != -1918 THEN RAISE; END IF;
END;
/
--Create user
CREATE USER MY_USER
IDENTIFIED BY MY_PASS
DEFAULT TABLESPACE MY_TBS
PROFILE DEFAULT ACCOUNT UNLOCK;
GRANT CREATE PROCEDURE, EXECUTE ANY PROCEDURE, CONNECT, CREATE SNAPSHOT,
CREATE TRIGGER, CREATE ANY INDEX TO MY_USER;
GRANT CREATE SYNONYM TO MY_USER;
GRANT UNLIMITED TABLESPACE TO MY_USER;
GRANT QUERY REWRITE TO MY_USER;
GRANT CREATE ANY DIRECTORY TO MY_USER;
GRANT EXECUTE ON DBMS_LOCK TO MY_USER;
GRANT SELECT_CATALOG_ROLE TO MY_USER;
C. Import
impdp system/system_pass@new_inst PARFILE=param_imp.prm
param_imp.prm
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=SOURCE_USER_EXP_<DATE_TIME>.DMP
LOGFILE=SOURCE_USER_IMP_<DATE_TIME>.LOG
FULL=Y
# FULL=Y imports the full database; it can be used only when the export was done with FULL=Y
==============================
Example of export exclude tables from bash
==============================
exp_exclude.sh
#!/bin/bash
USER=MY_USER
PASSWORD=MY_PASS
RUN_DATE=`date +"%Y%m%d"_"%H%M%S"`
EXCLUDE_LIST="'SGA_W_PSMS_SUBSCRIBER', 'GA_W_COUNTERS_HISTORY', 'REP_DAILY_DNORM', 'SFI_CUSTOMER_PROFILE', 'SMM_CUSTOMER_PROFILE_202209', 'SMM_CUSTOMER_PROFILE_A', 'SGA_W_MOCO_SUBSCRIBER', 'SFI_CUSTOMER_OPTIONS', 'SFI_CUSTOMER_USAGE_HOURLY', 'SGA_SUBSCRIBER_SFI', 'REP_MONTHLY_DNORM', 'BAD_LOAD_INCIDENTS', 'IPN_IBR_CELL_REPORT_DATA', 'SGA_SUBSCRIBER_SFI_LOG', 'ELM_EVENT', 'DB_PROC_SFI_SUBSCRIBER_TEMP', 'INCIDENTS_HISTORY', 'GA_COUNTERS'"
EXCLUDE_PARAM="TABLE:\"IN (${EXCLUDE_LIST})\""
EXCLUDE=${EXCLUDE_PARAM}
echo "expdp ${USER}/${PASSWORD}@igt directory=IG_EXP_DIR dumpfile=${USER}_${RUN_DATE}.dmp logfile=${USER}_${RUN_DATE}.log EXCLUDE=${EXCLUDE_PARAM}"
expdp ${USER}/${PASSWORD}@igt directory=IG_EXP_DIR dumpfile=${USER}_${RUN_DATE}.dmp logfile=${USER}_${RUN_DATE}.log EXCLUDE=${EXCLUDE_PARAM}
echo "expdp USER/PASSWORD@igt directory=IG_EXP_DIR dumpfile=USER.dmp logfile=USER.log EXCLUDE=${EXCLUDE_PARAM}"
expdp USER/PASSWORD@igt directory=IG_EXP_DIR dumpfile=USER.dmp logfile=USER.log EXCLUDE=${EXCLUDE_PARAM}
==============================
Example of export exclude table data.
==============================
For a regular table - use the table name.
For a partitioned table - use the partition name.
param_exp.prm
DIRECTORY=EXP_DIR
SCHEMAS=MY_USER
DUMPFILE=my_user_exp_data.dmp
LOGFILE=my_user_exp_data.log
EXCLUDE=TABLE_DATA:"IN ('NON_PARTITIONED_TABLE','PART_P01', 'PART_P02', 'PART_P03', 'PART_P04')"
REUSE_DUMPFILES=YES
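To collect the partition names for the EXCLUDE clause, the data dictionary can be queried first. A minimal sketch, assuming sqlplus access as the schema owner:
#!/bin/bash
# Sketch: list the partition names of the schema's partitioned tables.
sqlplus -s MY_USER/MY_PASS@orainst <<'EOF'
SELECT table_name, partition_name FROM user_tab_partitions ORDER BY table_name, partition_name;
EOF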
==============================
Example of export and import of single table
==============================
Short and simple example of export from one instance and import into another instance.
The import is done to the same schema name.
A. Export
expdp source_user/source_pass@source_inst PARFILE=param_exp.prm
param_exp.prm
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=SOURCE_USER_EXP_<DATE_TIME>.dmp
LOGFILE=SOURCE_USER_EXP_<DATE_TIME>.log
TABLES="(TABLE_A, TABLE_B)"
With bash:
#!/bin/bash
. /etc/sh/orash/oracle_login.sh igt
RUN_DATE=`date +"%Y%m%d"_"%H%M%S"`
TABLES_LIST="(TABLE_A, TABLE_B, TABLE_C, TABLE_D, TABLE_E)"
expdp USER/PASSWORD@igt DIRECTORY=exp_dir dumpfile=exp_dump_${RUN_DATE}.dmp LOGFILE=exp_dump_${RUN_DATE}.log TABLES="${TABLES_LIST}"
B. Import
impdp system/system_pass@new_inst PARFILE=param_imp.prm
param_imp.prm
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=SOURCE_USER_EXP_<DATE_TIME>.dmp
LOGFILE=SOURCE_USER_IMP_<DATE_TIME>.log
REMAP_SCHEMA=RI_REMOTE:RI_CENTRAL
CONTENT=data_only
TABLE_EXISTS_ACTION=truncate
To import only PL/SQL objects (functions, procedures, packages):
impdp
schema_user/schema_pass@connectstr
DIRECTORY=IG_EXP_DIR
DUMPFILE=file-name.dmp
LOGFILE=imp_packages.log
INCLUDE=FUNCTION,PROCEDURE,PACKAGE
==============================
Import only one table from another schema
==============================
impdp TO_USER/TO_PASS@connectstr
DIRECTORY=IG_EXP_DIR
DUMPFILE=file_name.dmp
LOGFILE=imp_packages.log
TABLES='FROM_USER.TABLE_A,FROM_USER.TABLE_B'
REMAP_SCHEMA=FROM_USER:TO_USER
REMAP_TABLESPACE=FROM_TBS_A:TO_TBS_A,FROM_TBS_B:TO_TBS_B
or
REMAP_TABLESPACE=FROM_TBS_A:TO_TBS_A REMAP_TABLESPACE=FROM_TBS_B:TO_TBS_B
==============================
Export/Import only Data From one table
==============================
expdp FROM_USER/FROM_PASS@connectstr
DIRECTORY=IG_EXP_DIR
DUMPFILE=exp_tables_data_DATE.dmp
LOGFILE=exp_tables_data_DATE.log
CONTENT=data_only
TABLES='FROM_USER.TABLE_A,FROM_USER.TABLE_B'
impdp TO_USER/TO_PASS@connectstr
DIRECTORY=IG_EXP_DIR
DUMPFILE=exp_tables_data_DATE.dmp
LOGFILE=exp_tables_data_DATE.log
CONTENT=data_only
TABLE_EXISTS_ACTION=truncate
TABLES='FROM_USER.TABLE_A,FROM_USER.TABLE_B'
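A quick sanity check after a data-only import is to compare row counts on both sides. A minimal sketch, reusing the example's names:
#!/bin/bash
# Sketch: count rows in the freshly truncated-and-loaded tables.
sqlplus -s TO_USER/TO_PASS@connectstr <<'EOF'
SELECT COUNT(*) AS table_a_rows FROM TABLE_A;
SELECT COUNT(*) AS table_b_rows FROM TABLE_B;
EOF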
==============================
Example of using expdp
expdp example:
expdp my_user/my_pass@$connectStr directory=EXP_DIR schemas=$my_user dumpfile=$expDmpFile logfile=$expLogFile reuse_dumpfiles=yes
expdp example with parfile:
expdp my_user/my_pass@orainst PARFILE=param_exp.prm
param_exp.prm
To export a schema without specific tables:
SCHEMAS=my_user
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=MY_USER_EXT_DATE.dmp
LOGFILE=MY_USER_EXT_DATE.log
REUSE_DUMPFILES=yes
EXCLUDE=TABLE:"IN
('TABLE_A', 'TABLE_B')"
To export a specific table:
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=MY_USER_EXT_DATE.dmp
LOGFILE=MY_USER_EXT_DATE.log
REUSE_DUMPFILES=yes
TABLES="(TABLE_A, TABLE_B)"
1. param_exp.prm should be in the same location as the expdp command is run from.
2. The dmp and log files are generated into the DATA_PUMP_DIR path, as defined in ALL_DIRECTORIES.
To export only part of a table:
expdp my_user/my_pass@orainst PARFILE=param_exp_my_table.prm
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=MY_USER_MY_TABLE_EXT_DATE.dmp
LOGFILE=MY_USER_MY_TABLE_EXT_DATE.log
REUSE_DUMPFILES=yes
TABLES=MY_TABLE
QUERY=MY_TABLE:"WHERE MY_TABLE.upd_date > SYSDATE - 10"
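The same filter can also be passed on the command line, but the shell must then protect the clause from word splitting and redirection; the parfile form above avoids this. One quoting style that works in bash (the dump file name is a placeholder):
expdp my_user/my_pass@orainst DIRECTORY=DATA_PUMP_DIR DUMPFILE=my_table_part.dmp TABLES=MY_TABLE QUERY=MY_TABLE:'"WHERE upd_date > SYSDATE - 10"'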
==============================
Example of importing the whole schema
==============================
As user:
impdp usr2/usr2 directory=dp_dir dumpfile=test.dmp logfile=imp.log remap_schema=usr1:usr2
As system to the same user:
impdp system/XXXX PARFILE=IMP_SCHEMA.prm
imp_schema.prm
DIRECTORY=EX_EXP_DIR
DUMPFILE=FULL_EXP.dmp
LOGFILE=imp_schema.log
SCHEMAS=MY_USER
As system to another user:
1. CREATE USER to_user IDENTIFIED BY to_pass ...
(See the create user example at the end)
2. Edit the parameters file
param_exp.prm
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=MY_USER_EXT_DATE.dmp
LOGFILE=MY_USER_EXT_DATE.log
REMAP_SCHEMA=from_user:to_user
3. impdp system/system_pass@orainst PARFILE=param_exp.prm
1. param_exp.prm should be in the same location as the impdp command is run from.
2. The dmp and log files are generated into the DATA_PUMP_DIR path, as defined in ALL_DIRECTORIES.
3. impdp is run as system, since it needs to create the schema and grants, which requires DBA privileges.
==============================
Example of importing only the data
==============================
impdp schema_user/password@sid
DIRECTORY=EXP_DMP_DIR
DUMPFILE=dmp_file.dmp
LOGFILE=imp_file.log
CONTENT=data_only
TABLE_EXISTS_ACTION=truncate
Optionally, import the schema into another schema. In this case impdp must be run as the system account.
impdp system/system_password@sid
DIRECTORY=export_dir
DUMPFILE=dmp_file.dmp
LOGFILE=imp_file.log
CONTENT=data_only
TABLE_EXISTS_ACTION=truncate
REMAP_SCHEMA=from_user:to_user
=====================
DIRECTORY
=====================
- Is the value of the directory_name entry in DBA_DIRECTORIES.
It points to the path in directory_path for that entry.
To query existing directories:
SELECT * FROM DBA_DIRECTORIES
To create a new entry:
CREATE OR REPLACE DIRECTORY MY_DIR AS '/some/path/my/path';
Parameters file syntax:
expdp schema_user/password@sid PARFILE=exp_params.par
The content of exp_params.par would be something like:
DIRECTORY=EXP_DMP_DIR
DUMPFILE=dmp_file.dmp
LOGFILE=exp_file.log
CONTENT=data_only
EXCLUDE=TABLE:"IN('TABLE_A','TABLE_B')";
or
EXCLUDE=TABLE:"IN ((SELECT TABLE_NAME FROM EXCLUDED_TABLES))";
EXCLUDE=TABLE:"IN('TABLE_A','TABLE_B')";
or
EXCLUDE=TABLE:"IN ((SELECT TABLE_NAME FROM EXCLUDED_TABLES))";
==============================
Example of export and import of schema, without some tables
==============================
See the example "Migrate USER_A from server_a to server_b minus few tables" below.
==============================
CONTENT parameter
==============================
With CONTENT one can control whether metadata only, data only, or both are exported or imported:
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
ALL (default) - both metadata and data.
==============================
SQLFILE parameter
==============================
SQLFILE=impdp_sql_file.sql
Instead of performing the import, impdp writes all the DDL it would execute into the given file, under the DIRECTORY path:
impdp myuser/mypass directory=exp_dir dumpfile=somefile.dmp sqlfile=script.sql
This would create a file script.sql with a list of "to be created" objects.
It lists all the commands the import would run - the flow of things:
==============================
Create User
==============================
Create user:
CREATE USER my_user IDENTIFIED BY my_pass DEFAULT TABLESPACE "MY_TBS" TEMPORARY TABLESPACE "TEMPORARY";
Grant System permissions to user
GRANT EXECUTE ANY PROCEDURE TO "STARBILL_ADMIN";
GRANT CREATE ANY TABLE TO "STARBILL_ADMIN";
GRANT UNLIMITED TABLESPACE TO "STARBILL_ADMIN";
GRANT "CONNECT" TO "STARBILL_ADMIN";
GRANT "RESOURCE" TO "STARBILL_ADMIN";
Grant Roles to user
GRANT "DBA" TO "STARBILL_ADMIN";
GRANT "DBA_ROLE" TO "STARBILL_ADMIN";
ALTER USER "STARBILL_ADMIN" DEFAULT ROLE ALL;
Call SYS.DBMS_LOGREP_IMP.instantiate_schema
Create objects: SYNONYMS, DB_LINKS, SEQUENCES, TABLES
Grant Permissions on user owned objects.
GRANT SELECT ON "MY_USER"."TABLE_B" TO "SOME_USER" WITH GRANT OPTION;
Import DBMS_STATS statistics
DBMS_STATS.set_index_stats
DBMS_STATS.set_table_stats
DBMS_STATS.set_column_stats
Create PL/SQL Objects
Packages
Procedures
Functions
Create Foreign Keys
ALTER TABLE "MY_USER"."TABLE_A" ADD CONSTRAINT "TABLE_A_FK" FOREIGN KEY ("CUSTOMER_ID")
REFERENCES "MY_USER"."TABLE_B" ("CUSTOMER_ID") ENABLE;
Create Triggers
CREATE TRIGGER "MY_USER"."TABLE_A_TRG"
BEFORE UPDATE OR INSERT ON TABLE_A
FOR EACH ROW
BEGIN
  IF INSERTING THEN
    :new.status_change_date := SYSDATE;
  ELSIF UPDATING THEN
    IF :new.status_id != :old.status_id THEN
      :new.status_change_date := SYSDATE;
    END IF;
  END IF;
END;
/
ALTER TRIGGER "MY_USER"."TABLE_A_TRG" ENABLE;
ALTER TRIGGER "MY_USER"."TABLE_A_TRG"
COMPILE
PLSQL_OPTIMIZE_LEVEL=2
PLSQL_CODE_TYPE=INTERPRETED;
==============================
View status of datapump jobs
==============================
Query the status of datapump jobs:
SELECT * FROM DBA_DATAPUMP_JOBS
To Kill a datapump job:
SET serveroutput on
SET lines 100
DECLARE
h1 NUMBER;
BEGIN
-- Format: DBMS_DATAPUMP.ATTACH('[job_name]','[owner_name]');
h1 := DBMS_DATAPUMP.ATTACH('SYS_IMPORT_SCHEMA_01','SCHEMA_USER');
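-- STOP_JOB(handle, immediate, keep_master): immediate=1 aborts the worker
-- processes at once; keep_master=0 drops the job's master table as well.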
DBMS_DATAPUMP.STOP_JOB (h1,1,0);
END;
/
==============================
Temporary Objects Generated by impdp
==============================
Query temporary objects that were generated by impdp:
SELECT bytes/1024/1024 AS Mb, SEGMENT_NAME, SEGMENT_TYPE
FROM USER_SEGMENTS
WHERE tablespace_name = 'SOME_TBS'
ORDER BY 1 DESC
impdp might generate CLOBs, related to tables named like 'SYS_IMPORT_FULL_NN' - where NN is a sequence number.
SELECT * FROM USER_LOBS
WHERE tablespace_name = 'SYSTEM'
AND TABLE_NAME LIKE 'SYS_IMPORT%'
The solution is just to drop them.
sqlplus system/system_pass@orainst
DROP TABLE SYS_IMPORT_SCHEMA_01;
==============================
Troubleshooting
==============================
Running expdp.
Where is the generated dmp file?
The output says that there are no errors, and that the log file and dmp file were generated.
However, when checking the file system, there are no such files.
Permissions are OK, both in Oracle and on UNIX.
No issue with disk space.
What is going on? Why has the file disappeared?
The issue: expdp was run on a client, but dmp files are generated on the server side.
In Linux cluster environments, this might be quite confusing...
Running impdp.
The OS user running the impdp command has full permissions on the file.
What might be wrong?
impdp userA/pswdA@orainst directory=EXP_DIR dumpfile=my.dmp NOLOGFILE=Y remap_schema=old_user:new_user
Import: Release 11.2.0.4.0 - Production on Tue Apr 14 12:41:14 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/dbnfs/dp/IG_EXP_DIR/Exp_NZL_TELEC_IPNQQ.dmp" for read
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 3
First of all - ORA-39001, ORA-39000 and ORA-31640 are general wrapper errors.
The real error is ORA-27037.
Stuff to check:
1. userA needs the IMP_FULL_DATABASE privilege, and READ/WRITE on the directory.
To check existing permissions:
SELECT grantee, table_name, privilege
FROM USER_TAB_PRIVS
WHERE privilege IN ('WRITE', 'READ') ORDER BY 1, 2;
GRANT READ, WRITE ON DIRECTORY EXP_DIR TO my_user;
2. Consider the cluster environment mount.
In a cluster environment, use CLUSTER=NO to force the job to use only the instance where it was started.
This might be needed because the EXP_DIR directory might be mounted only on the active node, i.e. on a local directory.
impdp userA/pswdA@orainst directory=EXP_DIR dumpfile=my.dmp NOLOGFILE=Y remap_schema=old_user:new_user CLUSTER=NO
==============================
PARALLEL
==============================
To use PARALLEL:
1. check the number of CPUs
less /proc/cpuinfo | grep processor
2. Per the Oracle documentation: "Set the degree of parallelism to two times the number of CPUs, then tune from there."
When using the substitution variable %U, during execution Data Pump Export will determine the optimal degree of parallelism for the export job and create enough dump files for the parallel export processes. Therefore the parameter PARALLEL should be seen as an upper limit value, rather than the actual value for the number of parallel processes.
When using PARALLEL=N, N files would be generated.
When importing, simply use the same file name template.
RUN_DATE=`date +"%Y%m%d"`
expdp my_user/my_pass@orainst directory=MY_EXP_DIR parallel=4 dumpfile=EXP_TEMPLATE_%U_${RUN_DATE}.dmp logfile=exp.log
impdp my_user/my_pass@orainst directory=MY_EXP_DIR parallel=4 dumpfile=EXP_TEMPLATE_%U_20240525.dmp logfile=impdp_SCOTT_${RUN_DATE}.log REMAP_SCHEMA=FROM_SCHEMA:TO_SCHEMA REMAP_TABLESPACE=FROM_TBS:TO_TBS
On versions before Oracle 10g, Data Pump is not available. Instead, use the exp and imp commands.
Example:
To Export
exp old_user/old_pass@orainst FILE=exp_old_user.dmp LOG=exp_log.log COMPRESS=N
To Import:
imp system/syspass@orainst FILE=exp_old_user.dmp LOG=imp_log.log FROMUSER=old_user TOUSER=new_user STATISTICS=RECALCULATE
exp with Oracle 9 example
Define variables:
BAS_ORACLE_LIST="gin oscp"
CYG_ORACLE_HOME=/cygdrive/d/software/oracle/920
ORACLE_ENV_DEFINED=yes
ORACLE_SID=gin
ORASH=/cygdrive/d/etc/orash
ORA_EXP=/cygdrive/d/ora_exp
ORA_VER=920
exp file=/cygdrive/d/ora_exp/export_gin_20_08_19_16_06.dmp userid=/ full=y direct=n statistics=NONE
Error:
IMP-00019: row rejected due to ORACLE error 1
Solution:
ORACLE error 1 (ORA-00001) is a unique constraint violation: if the target table already has data, imp will fail with PK violations.
There is no way to work around that within imp itself.
The solution is to manually truncate the tables about to be imported.
Error:
IMP-00013: only a DBA can import a file exported by another DBA
Solution:
GRANT IMP_FULL_DATABASE TO <SCHEMA>
==============================
expdp Error UDE-00018: Data Pump client is incompatible
==============================
> expdp QAT2_DCRT/QAT2_DCRT@igt DUMPFILE=Exp_QAT2_DCRT_20200430.dmp LOGFILE=Exp_QAT2_DCRT_20200430.log DIRECTORY=IG_EXP_DIR
Export: Release 12.2.0.1.0 - Production on Thu Apr 30 16:20:47 2020
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
UDE-00018: Data Pump client is incompatible with database version 12.1.0.2.0
The client used was /software/oracle/12.2.0.1/db_home/bin/expdp
but the Oracle database version is 12.1:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
Solution:
Several ORACLE_HOMEs exist on the same server.
Locate the correct expdp client and call it:
locate expdp
/oracle_db/db1/19000/db_home/bin/expdp
/software/oracle/12.2.0.1/db_home/bin/expdp
/software/oracle/121/bin/expdp
> /software/oracle/121/bin/expdp QAT2_DCRT/QAT2_DCRT@igt DUMPFILE=Exp_QAT2_DCRT_20200430.dmp LOGFILE=Exp_QAT2_DCRT_20200430.log DIRECTORY=IG_EXP_DIR
==============================
Create User Example
==============================
GRANT CONNECT TO DBA_USER;
GRANT CREATE PROCEDURE, EXECUTE ANY PROCEDURE TO DBA_USER;
GRANT CREATE SNAPSHOT, CREATE TRIGGER, CREATE ANY INDEX TO DBA_USER;
GRANT CREATE SYNONYM TO DBA_USER;
GRANT UNLIMITED TABLESPACE TO DBA_USER;
GRANT QUERY REWRITE TO DBA_USER;
GRANT CREATE ANY DIRECTORY TO DBA_USER;
GRANT EXECUTE ON DBMS_LOCK TO DBA_USER;
GRANT SELECT_CATALOG_ROLE TO DBA_USER;
GRANT DBA TO DBA_USER;
==============================
Sample Errors
==============================
UDI-00014: invalid value for parameter, 'attach'
impdp MY_USER/MY_PASS@orainst DUMPFILE=Exp_schema.dmp LOGFILE=imp_schema.log DIRECTORY-IG_EXP_DIR
The issue:
There is a "-" instead of "=" after DIRECTORY.
Should be:
impdp MY_USER/MY_PASS@orainst DUMPFILE=Exp_schema.dmp LOGFILE=imp_schema.log DIRECTORY=IG_EXP_DIR
==============================
Example: Migrate USER_A from server_a to server_b minus few tables
==============================
The task: migrate a user from one server to another, but without some large tables.
Step A - create user on server B
CREATE USER user_a IDENTIFIED BY user_a DEFAULT TABLESPACE IGT_TABLE PROFILE DEFAULT ACCOUNT UNLOCK;
GRANT CONNECT, RESOURCE TO user_a;
GRANT CREATE PROCEDURE, EXECUTE ANY PROCEDURE TO user_a;
GRANT CREATE SNAPSHOT, CREATE TRIGGER, CREATE ANY INDEX TO user_a;
GRANT CREATE SYNONYM TO user_a;
GRANT UNLIMITED TABLESPACE TO user_a;
GRANT QUERY REWRITE TO user_a;
GRANT CREATE ANY DIRECTORY TO user_a;
GRANT EXECUTE ON DBMS_LOCK TO user_a;
GRANT SELECT_CATALOG_ROLE TO user_a;
GRANT CHANGE NOTIFICATION TO user_a;
Step B - export metadata only
exp_metadata.sh:
expdp USER_A/USER_A@igt parfile=exp_metadata.prm
exp_metadata.prm:
DIRECTORY=DP_DIR
DUMPFILE=exp_metadata.dmp
LOGFILE=exp_metadata.log
CONTENT=metadata_only
SCHEMAS=user_a
Step C - export data only
exp_data_only.sh:
expdp USER_A/USER_A@igt parfile=exp_data_only.prm
exp_data_only.prm:
DIRECTORY=DP_DIR
DUMPFILE=exp_data_only.dmp
LOGFILE=exp_data_only.log
CONTENT=data_only
SCHEMAS=USER_A
EXCLUDE=TABLE:"IN ('TABLE_A','TABLE_B','TABLE_B')"
Step D - import metadata only
imp_metadata.sh:
impdp system/Xen86Pga@igt parfile=imp_metadata.prm
imp_metadata.prm:
DIRECTORY=DP_DIR
DUMPFILE=exp_metadata.dmp
LOGFILE=imp_metadata.log
CONTENT=metadata_only
SCHEMAS=USER_A
Step E - import data only
Same parfile pattern as step D, but pointing at exp_data_only.dmp with CONTENT=data_only, plus:
TABLE_EXISTS_ACTION=TRUNCATE
Example: import one schema and remap its name - see the script below.
==============================
Example: script to run impdp
==============================
First - create the new user:
SET VERIFY OFF
DEFINE vipuser=MY_USER
DEFINE vippass=MY_PASS
DEFINE adminuser=MY_ADMIN_USER
DEFINE adminpass=MY_ADMIN_PASS
DEFINE connectstr=orainst
PROMPT ************************************************************
PROMPT Creating User &&vipuser
PROMPT ************************************************************
PROMPT
conn &&adminuser/&&adminpass@&&connectstr
--Drop User if exist
BEGIN
EXECUTE IMMEDIATE 'DROP USER &&vipuser CASCADE';
EXCEPTION
WHEN OTHERS THEN NULL;
END;
/
--Create user
--CREATE USER &&vipuser IDENTIFIED BY &&vippass DEFAULT TABLESPACE IGT_TABLE PROFILE APP_PROF ACCOUNT UNLOCK;
--PROFILE DEFAULT
DECLARE
v_profile_exists NUMBER;
v_sql_str VARCHAR2(1000);
BEGIN
SELECT COUNT(*) INTO v_profile_exists FROM DBA_PROFILES WHERE profile = 'APP_PROF';
IF v_profile_exists = 0 THEN
v_sql_str := 'CREATE USER &&vipuser IDENTIFIED BY &&vippass DEFAULT TABLESPACE IGT_TABLE PROFILE DEFAULT ACCOUNT UNLOCK';
ELSE
v_sql_str := 'CREATE USER &&vipuser IDENTIFIED BY &&vippass DEFAULT TABLESPACE IGT_TABLE PROFILE APP_PROF ACCOUNT UNLOCK';
END IF;
EXECUTE IMMEDIATE v_sql_str;
EXCEPTION
WHEN OTHERS THEN
RAISE;
END;
/
GRANT RESOURCE TO &&VIPUSER;
GRANT CREATE PROCEDURE, EXECUTE ANY PROCEDURE, CONNECT, CREATE SNAPSHOT, CREATE TRIGGER, CREATE ANY INDEX TO &&VIPUSER;
GRANT CREATE SYNONYM TO &&VIPUSER;
GRANT UNLIMITED TABLESPACE TO &&VIPUSER;
GRANT QUERY REWRITE TO &&VIPUSER ;
GRANT CREATE ANY DIRECTORY TO &&VIPUSER;
GRANT EXECUTE ON DBMS_LOCK TO &&VIPUSER;
GRANT SELECT_CATALOG_ROLE TO &&VIPUSER;
GRANT CREATE JOB TO &&VIPUSER;
BEGIN
EXECUTE IMMEDIATE 'GRANT CREATE TABLE TO &&VIPUSER';
EXCEPTION
WHEN OTHERS THEN NULL;
END;
/
BEGIN
EXECUTE IMMEDIATE 'GRANT CREATE SEQUENCE TO &&VIPUSER';
EXCEPTION
WHEN OTHERS THEN NULL;
END;
/
BEGIN
EXECUTE IMMEDIATE 'GRANT CREATE VIEW TO &&VIPUSER';
EXCEPTION
WHEN OTHERS THEN NULL;
END;
/
BEGIN
EXECUTE IMMEDIATE 'GRANT CHANGE NOTIFICATION TO &&VIPUSER';
EXCEPTION
WHEN OTHERS THEN NULL;
END;
/
spool off
exit;
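One way to invoke this script, assuming it is saved as create_vip_user.sql (the name mentioned earlier); it connects on its own via the conn command:
sqlplus /nolog @create_vip_user.sql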
imp_cmd.sh
#!/bin/bash
echo -n " Dump File.....: "; read DUMP_FILE
echo -n " From User.....: "; read FROM_USER
echo -n " To User.......: "; read TO_USER
RUN_DATE=`date +"%Y%m%d"_"%H%M%S"`
echo impdp system/Xen86Pga@igt logfile=import_${RUN_DATE}.log DUMPFILE=${DUMP_FILE} REMAP_SCHEMA=${FROM_USER}:${TO_USER} DIRECTORY=IG_EXP_DIR
impdp system/Xen86Pga@igt logfile=import_${RUN_DATE}.log DUMPFILE=${DUMP_FILE} REMAP_SCHEMA=${FROM_USER}:${TO_USER} DIRECTORY=IG_EXP_DIR
==============================
GRANT BECOME USER
==============================
In Oracle 12c there is a change in the privileges of the IMP_FULL_DATABASE role.
The BECOME USER system privilege is no longer granted to the IMP_FULL_DATABASE role.
It is needed during impdp to import objects into the schemas of other users.
To fix this issue:
GRANT BECOME USER TO IMP_FULL_DATABASE;
Without this Grant, this is the error:
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/********@igt directory=IG_EXP_DIR dumpfile=Exp_databse.dmp logfile=Imp_database.log_20200514_171126 remap_schema=USER_A:USER_B content=METADATA_ONLY
Processing object type SCHEMA_EXPORT/USER
ORA-39083: Object type USER failed to create with error:
ORA-31625: Schema SYSTEM is needed to import this object,
but is unaccessible
ORA-01031: insufficient privileges
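To verify whether the privilege is in place before re-running the import, the dictionary can be checked. A minimal sketch, assuming SYSDBA access on the server:
#!/bin/bash
# Sketch: list holders of the BECOME USER system privilege.
sqlplus -s / as sysdba <<'EOF'
SELECT grantee, privilege FROM dba_sys_privs WHERE privilege = 'BECOME USER';
EOF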
==============================
RAC considerations. impdp
==============================
There are some dedicated parameters for datapump in RAC
===========
CLUSTER
===========
impdp on RAC can run in 2 modes:
Option A - Have a directory which is common for both RAC nodes, and run impdp on both nodes in parallel.
Option B - Run the impdp from a directory which is accessible only by one node.
CLUSTER values
CLUSTER=N - This will cause the impdp worker processes to run only on the current node.
It is the mode for pre-11.2 Oracle versions.
CLUSTER=Y - This will cause impdp worker processes to run on all RAC nodes.
Y is the default value.
===========
SERVICE_NAME
===========
SERVICE_NAME should be used together with CLUSTER=Y.
Use SERVICE_NAME to specify a specific, existing service, and constrain worker processes to run only on instances defined for that service.
The SERVICE_NAME parameter is ignored if CLUSTER=N is also specified.
Performance issues:
Use of the CLUSTER parameter may affect performance because there is some additional overhead in distributing the import job across Oracle RAC instances.
For small jobs, it may be better to specify CLUSTER=N, to constrain the job to run on the instance where it is started.
For jobs involving large amounts of data, it is better to use the CLUSTER parameter.
===========
PARALLEL
===========
Up to 3 parallel processes can be used.
Example:
impdp hr/hr@orainst DIRECTORY=DPUMP_DIR DUMPFILE=hr.dmp LOGFILE=dpump_hr_log CLUSTER=N PARALLEL=2
==============================
Code Example: Import pieces of a FULL database export
==============================
Export FULL database.
Import a specific schema, excluding a single table.
Import a specific table, with REMAP_TABLESPACE.
impdp system/passwd@igt DIRECTORY=IG_EXP_DIR LOGFILE=imp_SCHEMA_NAME.log DUMPFILE=exp_full_20210729.dmp SCHEMAS=SCHEMA_NAME exclude=TABLE:\"IN \(\'SUBSCRIBER\'\)\"
impdp SCHEMA_NAME/SCHEMA_PASS@igt DIRECTORY=IG_EXP_DIR LOGFILE=imp_SCHEMA_NAME_2.log DUMPFILE=exp_full_20210729.dmp TABLES=SUBSCRIBER REMAP_TABLESPACE=IGT_TABLE:IGT_TABLE_BIG
==============================
Code Example: Script to exclude big tables from expdp
==============================
#!/bin/bash
EXCLUDE_TABLES="exclude=TABLE:\"IN('TABLE_A','TABLE_B')\""
EXCLUDE_TABLE_DATA="exclude=TABLE_DATA:\"IN('TABLE_C','TABLE_D')\""
VIPUSER=MY_USER
VIPPASS=MY_PASSWORD
CONNECTSTR=orainst
#EXP_DIRECTORY=IG_EXP_DIR
EXP_DIRECTORY=DP_DIR
RUN_DATE=`date +"%Y%m%d"_"%H%M%S"`
echo "expdp $VIPUSER/$VIPPASS@$CONNECTSTR directory=${EXP_DIRECTORY} schemas=$VIPUSER dumpfile=exp_wo_big_${VIPUSER}_${RUN_DATE}.dmp logfile=exp_wo_big_${VIPUSER}_${RUN_DATE}.log ${EXCLUDE_TABLE_DATA} ${EXCLUDE_TABLES} reuse_dumpfiles=no"
expdp $VIPUSER/$VIPPASS@$CONNECTSTR directory=${EXP_DIRECTORY} schemas=$VIPUSER dumpfile=exp_wo_big_${VIPUSER}_${RUN_DATE}.dmp logfile=exp_wo_big_${VIPUSER}_${RUN_DATE}.log ${EXCLUDE_TABLE_DATA} ${EXCLUDE_TABLES} reuse_dumpfiles=no
==============================
Code Example: Drop orphaned export objects
==============================
Some unexpected tables exist under application user schema:
SYS_EXPORT_SCHEMA_01
SYS_EXPORT_SCHEMA_02
SYS_EXPORT_SCHEMA_03
etc
COL owner_name FOR A30
COL job_name FOR A30
COL operation FOR A10
COL job_mode FOR A10
COL state FOR A20
SET LINESIZE 140
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM DBA_DATAPUMP_JOBS
ORDER BY job_name;
VR2_GROUP_SHARE      SYS_EXPORT_SCHEMA_72  EXPORT  SCHEMA  NOT RUNNING  0
VR2_GROUP_SHARE_V80  SYS_EXPORT_SCHEMA_72  EXPORT  SCHEMA  NOT RUNNING  0
VR2_GROUP_SHARE      SYS_EXPORT_SCHEMA_73  EXPORT  SCHEMA  NOT RUNNING  0
VR2_GROUP_SHARE_V80  SYS_EXPORT_SCHEMA_73  EXPORT  SCHEMA  NOT RUNNING  0
VR2_GROUP_SHARE      SYS_EXPORT_SCHEMA_74  EXPORT  SCHEMA  NOT RUNNING  0
The tables in question are named after the job_name values.
Generate the DROP TABLE statements, and execute them:
SELECT 'DROP TABLE '||owner_name||'.'||job_name||';'
FROM DBA_DATAPUMP_JOBS
WHERE state = 'NOT RUNNING';
-- AND owner_name = 'LAB_QANFV_ALLQQ';
Execute the generated SQLs
To clean up the recycle bin, run:
PURGE DBA_RECYCLEBIN;
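A sketch wrapping this into one pass - spool the generated statements to a file, review it, then run it (the credentials are the post's usual examples):
#!/bin/bash
# Sketch: write DROP statements for orphaned datapump job tables to drop_orphans.sql.
sqlplus -s system/system_pass@orainst <<'EOF'
SET PAGESIZE 0 FEEDBACK OFF
SPOOL drop_orphans.sql
SELECT 'DROP TABLE '||owner_name||'.'||job_name||';' FROM dba_datapump_jobs WHERE state = 'NOT RUNNING';
SPOOL OFF
EOF
# Review drop_orphans.sql, then execute it:
# sqlplus system/system_pass@orainst @drop_orphans.sql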
==============================
Appendix
==============================
Links for Oracle documentation of expdp and impdp
- expdp
- impdp
Export only specific tables (INCLUDE) from bash:
#!/bin/bash
. /etc/sh/orash/oracle_login.sh igt
USER=USER
PASSWORD=PASS
RUN_DATE=`date +"%Y%m%d"_"%H%M%S"`
INCLUDE_LIST="'REP_DAILY_DNORM'"
INCLUDE_PARAM="TABLE:\"IN (${INCLUDE_LIST})\""
INCLUDE=${INCLUDE_PARAM}
echo "expdp ${USER}/${PASSWORD}@igt directory=IG_EXP_DIR dumpfile=${USER}_${RUN_DATE}.dmp logfile=${USER}_${RUN_DATE}.log INCLUDE=${INCLUDE_PARAM}"
expdp ${USER}/${PASSWORD}@igt directory=IG_EXP_DIR dumpfile=${USER}_${RUN_DATE}.dmp logfile=${USER}_${RUN_DATE}.log INCLUDE=${INCLUDE_PARAM}
Import specific tables with REMAP_SCHEMA from bash:
#!/bin/bash
. /etc/sh/orash/oracle_login.sh igt
IMP_USER=system
IMP_PASS=Xen86Pga
connectstr=igt
DUMPFILE=LAB_QANFV_ALLQQ.dmp
LOGFILE=imp_reports_LAB_QANFV_ALLQQ.log
TABLES_LIST="REP_DAILY_DNORM,REP_MONTHLY_DNORM"
FROM_USER=LAB_QANFV_ALLQQ
TO_USER=LAB_QANFV_ALLQQ_OCT_2020
impdp ${IMP_USER}/${IMP_PASS}@${connectstr} DIRECTORY=DP_DIR DUMPFILE=${DUMPFILE} LOGFILE=${LOGFILE} TABLES=${TABLES_LIST} REMAP_SCHEMA=${FROM_USER}:${TO_USER}
=====================
Directory commands
=====================
CREATE OR REPLACE DIRECTORY MY_DIR AS '/my/path/to/my_dir';
GRANT READ, WRITE ON DIRECTORY MY_DIR TO MY_USER;
DROP DIRECTORY DPLOG_DIR;