Possible Export Job Statuses
An export job can be in one of several states:
- running
- orphaned
- stopped
- temporarily stopped
The jobs used in this example:
- Export job SCOTT.EXPDP_20051121 is a schema level export that is running
- Export job SCOTT.SYS_EXPORT_TABLE_01 is an orphaned table level export job
- Export job SCOTT.SYS_EXPORT_TABLE_02 is a table level export job that was stopped
- Export job SYSTEM.SYS_EXPORT_FULL_01 is a full database export job that was temporarily stopped
Step 1. Determine in SQL*Plus which Data Pump jobs exist in the database:
%sqlplus /nolog
CONNECT / as sysdba
SET lines 200
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE state
COL job_mode LIKE state
-- locate Data Pump jobs:
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM DBA_DATAPUMP_JOBS
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;
OWNER_NAME JOB_NAME            OPERATION JOB_MODE  STATE       ATTACHED
---------- ------------------- --------- --------- ----------- --------
SCOTT      EXPDP_20051121      EXPORT    SCHEMA    EXECUTING          1
SCOTT      SYS_EXPORT_TABLE_01 EXPORT    TABLE     NOT RUNNING        0
SCOTT      SYS_EXPORT_TABLE_02 EXPORT    TABLE     NOT RUNNING        0
SYSTEM     SYS_EXPORT_FULL_01  EXPORT    FULL      NOT RUNNING        0
Step 2. Ensure that the jobs listed in DBA_DATAPUMP_JOBS are not active Data Pump export/import jobs, i.e. their state should be 'NOT RUNNING'.
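As a quick filter, the candidate jobs can also be listed directly. A minimal sketch of such a query (the state and attached_sessions filters are assumptions about what counts as a candidate for cleanup):
-- list only jobs that are not running and have no attached sessions:
SELECT owner_name, job_name, operation, job_mode
FROM DBA_DATAPUMP_JOBS
WHERE job_name NOT LIKE 'BIN$%'
AND state = 'NOT RUNNING'
AND attached_sessions = 0
ORDER BY 1,2;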
Step 3. Check with the job owner that a job with state 'NOT RUNNING' in DBA_DATAPUMP_JOBS is not a Data Pump export/import job that has been temporarily stopped, but is in fact a job that failed. (E.g. the full database export job owned by SYSTEM did not fail, but was deliberately paused with STOP_JOB.)
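If in doubt, you can also check whether any session is still attached to a job. A minimal sketch, joining DBA_DATAPUMP_SESSIONS to V$SESSION on the session address (column availability may differ per version):
-- sessions currently attached to Data Pump jobs:
SELECT dps.owner_name, dps.job_name, s.sid, s.serial#, s.status
FROM DBA_DATAPUMP_SESSIONS dps, V$SESSION s
WHERE dps.saddr = s.saddr
ORDER BY 1,2;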
Step 4. Determine in SQL*Plus the related master tables:
-- locate Data Pump master tables:
SELECT OBJECTS.status,
       OBJECTS.object_id,
       OBJECTS.object_type,
       OBJECTS.owner||'.'||OBJECTS.object_name "OWNER.OBJECT"
FROM DBA_OBJECTS OBJECTS,
     DBA_DATAPUMP_JOBS JOBS
WHERE OBJECTS.owner=JOBS.owner_name
AND OBJECTS.object_name=JOBS.job_name
AND JOBS.job_name NOT LIKE 'BIN$%'
ORDER BY 4,2;
STATUS   OBJECT_ID OBJECT_TYPE  OWNER.OBJECT
------- ---------- ------------ -------------------------
VALID        85283 TABLE        SCOTT.EXPDP_20051121
VALID        85215 TABLE        SCOTT.SYS_EXPORT_TABLE_02
VALID        85162 TABLE        SYSTEM.SYS_EXPORT_FULL_01
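If preferred, the DROP statements for Step 5 can be generated from the same join. A sketch; review the generated list carefully and never drop the master table of a job that is still executing:
-- generate DROP statements for the master tables found above:
SELECT 'DROP TABLE '||OBJECTS.owner||'.'||OBJECTS.object_name||';'
FROM DBA_OBJECTS OBJECTS,
     DBA_DATAPUMP_JOBS JOBS
WHERE OBJECTS.owner=JOBS.owner_name
AND OBJECTS.object_name=JOBS.job_name
AND JOBS.job_name NOT LIKE 'BIN$%';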
Step 5. For jobs that were stopped in the past and will not be restarted, delete the master table. In this example only the stopped table level export job qualifies; the running schema level export and the deliberately paused full database export keep their master tables. E.g.:
DROP TABLE SCOTT.SYS_EXPORT_TABLE_02;
-- For systems with the recycle bin enabled, additionally run:
PURGE DBA_RECYCLEBIN;
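To verify that no dropped master tables linger in the recycle bin, a quick check (assuming 10g or later with the recycle bin enabled):
-- dropped master tables show up as BIN$... objects:
SELECT owner, original_name, object_name, type
FROM DBA_RECYCLEBIN
WHERE type = 'TABLE'
ORDER BY 1,2;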
Step 6. Re-run the queries on DBA_DATAPUMP_JOBS and DBA_OBJECTS (Step 1 and Step 4). If jobs are still listed in DBA_DATAPUMP_JOBS and these jobs no longer have a master table, clean up each such job while connected as the job owner. E.g.:
CONNECT scott/tiger
SET serveroutput on
SET lines 100
DECLARE
  h1 NUMBER;
BEGIN
  -- attach to the orphaned job (job name, job owner)
  h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_TABLE_01','SCOTT');
  -- stop the job so it is removed from DBA_DATAPUMP_JOBS
  DBMS_DATAPUMP.STOP_JOB (h1);
END;
/
Note that after the call to the STOP_JOB procedure, it may take some time for the job to be removed. Query the view USER_DATAPUMP_JOBS to check whether the job has been removed:
SELECT * FROM USER_DATAPUMP_JOBS;
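If the job does not disappear, STOP_JOB also accepts additional parameters. A hedged variant of the block above; here immediate=1 aborts the worker processes at once and keep_master=0 requests that the master table be dropped (check the DBMS_DATAPUMP documentation of your release before relying on this):
DECLARE
  h1 NUMBER;
BEGIN
  h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_TABLE_01','SCOTT');
  -- handle, immediate => 1, keep_master => 0
  DBMS_DATAPUMP.STOP_JOB(h1, 1, 0);
END;
/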
Step 7. Confirm that the job has been removed:
CONNECT / as sysdba
SET lines 200
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE state
COL job_mode LIKE state
-- locate Data Pump jobs:
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM DBA_DATAPUMP_JOBS
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;
OWNER_NAME JOB_NAME            OPERATION JOB_MODE  STATE       ATTACHED
---------- ------------------- --------- --------- ----------- --------
SCOTT      EXPDP_20051121      EXPORT    SCHEMA    EXECUTING          1
SYSTEM     SYS_EXPORT_FULL_01  EXPORT    FULL      NOT RUNNING        0
-- locate Data Pump master tables:
SELECT OBJECTS.status,
OBJECTS.object_id,
OBJECTS.object_type,
OBJECTS.owner||'.'||OBJECTS.object_name "OWNER.OBJECT"
FROM DBA_OBJECTS OBJECTS,
DBA_DATAPUMP_JOBS JOBS
WHERE OBJECTS.owner=JOBS.owner_name
AND OBJECTS.object_name=JOBS.job_name
AND JOBS.job_name NOT LIKE 'BIN$%'
ORDER BY 4,2;
STATUS   OBJECT_ID OBJECT_TYPE  OWNER.OBJECT
------- ---------- ------------ -------------------------
VALID        85283 TABLE        SCOTT.EXPDP_20051121
VALID        85162 TABLE        SYSTEM.SYS_EXPORT_FULL_01
Remarks:
1. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. The view DBA_DATAPUMP_JOBS is based on GV$DATAPUMP_JOB, OBJ$, COM$, and USER$. The view shows the Data Pump jobs that are still running, jobs for which the master table was kept in the database, and jobs that ended abnormally (the orphaned jobs). If a new Data Pump job is started, a new entry is created that has no relation to the old Data Pump jobs.
2. When a new Data Pump job is started with a system generated name, the names of the existing Data Pump jobs in DBA_DATAPUMP_JOBS are checked in order to obtain a unique new system generated job name. Naturally, there needs to be enough free space for the new master table to be created in the schema that started the new Data Pump job.
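For example, to see which system generated names are already in use before a new job picks the next free one (a sketch, assuming the default SYS_<operation>_<mode>_NN naming):
SELECT job_name
FROM DBA_DATAPUMP_JOBS
WHERE job_name LIKE 'SYS\_%' ESCAPE '\'
ORDER BY 1;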
3. A Data Pump job is not the same as a job defined with DBMS_JOB. Jobs created with DBMS_JOB use their own processes, while Data Pump jobs use a master process and worker process(es). If a Data Pump job is temporarily stopped (STOP_JOB while in interactive command mode), the job still exists in the database (state: NOT RUNNING), while the master and worker process(es) are stopped and no longer exist. The client can attach to the job at a later time and continue the job execution (START_JOB).
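For example, the temporarily stopped full export from this note could be resumed later by its owner. A minimal sketch using the PL/SQL API while connected as SYSTEM (the same can be done from the expdp client with the ATTACH parameter):
DECLARE
  h1 NUMBER;
BEGIN
  h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_FULL_01','SYSTEM');
  -- resume the paused job where it left off
  DBMS_DATAPUMP.START_JOB(h1);
  -- detach from the job; it keeps running in the background
  DBMS_DATAPUMP.DETACH(h1);
END;
/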
4. Whether deleting the master table of an active Data Pump job can cause corruption depends on the kind of job.
4.a. If the job is an export job, corruption is unlikely: the drop of the master table only causes the Data Pump master and worker processes to abort. This situation is similar to aborting an export made with the original export client.
4.b. If the job is an import job, the situation is different. When the master table is dropped, the Data Pump master and worker processes abort, which will likely leave an incomplete import: e.g. not all table data was imported, tables were imported incompletely, and indexes, views, etc. are missing. This situation is similar to aborting an import made with the original import client.
The drop of the master table itself does not lead to any data dictionary corruption. If you keep the master table after the job completes (using the undocumented parameter KEEP_MASTER=Y), a later drop of the master table will not cause any corruption.
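For example, a sketch of an export that retains its master table (the directory object dpump_dir is an assumption; KEEP_MASTER is undocumented, so use it only when advised to):
expdp scott/tiger TABLES=emp DIRECTORY=dpump_dir DUMPFILE=emp.dmp KEEP_MASTER=Y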