Tuesday, November 10, 2009

Manage Listener Log Files in 11g

In Oracle 11g, the listener log files are located by default in /diag/tnslsnr/product_name/listener. A nice feature of the listener log in this version is that whenever the log file grows to 10MB, Oracle starts writing to a new file, so no single log file becomes too large to open for troubleshooting. Over time, however, you will accumulate many 10MB log files in the directory.


An Oracle DBA needs to manage the listener log files regularly so they do not take up too much space on the server. Based on my own experiment, compressed log files can take as little as 1/36 of the space of the uncompressed files. However, if you compress the old listener log files in the same directory and then try to read the listener log with ADRCI, you will get something similar to this:
------------------------------
>adrci
ADRCI: Release 11.1.0.6.0 - Beta on Tue Nov 10 09:53:49 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ADR base = "/u01/app/oracle"
adrci> show alert
ADR Home = /u01/app/oracle/diag/tnslsnr/rhetdan/listener:
*************************************************************************
DIA-48156: Alert log purge has occurred - retry operation
DIA-48121: error with opening the ADR stream file [/u01/app/oracle/diag/tnslsnr/rhetdan/listener/alert/log_1.xml][0]
DIA-48127: attempting to open a file that does not exist
Linux Error: 2: No such file or directory
Additional information: 5
------------------------------
log_1.xml is one of the files I compressed with gzip.

Solution: create a subdirectory under the same alert directory and move all the compressed files into it. ADRCI will not throw error messages because it does not see the files in the subdirectory; it will read all the uncompressed files, combine them, and present them as one log.
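Here is a minimal shell sketch of that housekeeping step. The paths follow the example above, and the old_logs subdirectory name is arbitrary; adjust both for your own ADR base and listener name:

cd /u01/app/oracle/diag/tnslsnr/rhetdan/listener/alert
# holding area that ADRCI does not read
mkdir -p old_logs
# compress the rotated logs (log_1.xml, log_2.xml, ...) but leave the
# current log.xml alone, then move the archives out of ADRCI's view
for f in log_*.xml; do
[ -f "$f" ] && gzip "$f"
done
mv log_*.xml.gz old_logs/ 2>/dev/null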

If you need to go back to those compressed files for troubleshooting in the future, unzip the file of interest and use the following command to open it:

adrci> show alert -file /directory-where-you-have-the-logfile/yourlogfile.xml

Another observation: when you run ADRCI to purge old content in the diag/tnslsnr/product_name/listener directory, the purge is based on the timestamps of the files, not on their content.

For example, I have the following files:

-rw-r----- 1 oracle oinstall 6036 Sep 15 14:47 log.xml
-rw-r----- 1 oracle oinstall 348750 Sep 15 14:05 log.xml.gz


When running

adrci> purge -age 5 -type alert

only log.xml.gz is deleted. The content of log.xml is intact, even though some of the entries in log.xml are much older than five minutes.

The side effect of this purge operation is that all the old log files in the alert directory that meet the age requirement are purged, regardless of whether they are compressed or not. You may lose old compressed log files you intended to keep. That is why I recommend saving old compressed log files in a subdirectory, not in the current directory.
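If you would rather let ADR expire old files automatically than purge by hand, ADRCI also exposes per-home retention policies. The values below are only an illustration; the policies are expressed in hours, and you should verify in your release which policy (short-term or long-term) governs the alert files before relying on it:

adrci> set homepath diag/tnslsnr/rhetdan/listener
adrci> show control
adrci> set control (SHORTP_POLICY = 168, LONGP_POLICY = 720)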

Oracle keeps the text listener log in /diag/tnslsnr/product_name/listener/trace for backward compatibility. This file also grows and takes up space, and you can compress it as you wish. After the log file is compressed, the listener process will create a new log file with the same name the next time network activity, such as a new database connection, occurs.
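Here is one cautious way to rotate and compress that text log from a shell script. It assumes lsnrctl set log_status works in your configuration, so the listener is not writing to the file mid-move; this is a sketch, not a tested production script:

# stop listener logging so the file is not held open during the move
lsnrctl set log_status off
cd /u01/app/oracle/diag/tnslsnr/rhetdan/listener/trace
TS=`date +%Y%m%d`
mv listener.log listener.log.$TS
gzip listener.log.$TS
# logging back on; the listener creates a fresh listener.log on the next write
lsnrctl set log_status on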
References:
1. http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/diag001.htm#CHDIABAA
2. http://download.oracle.com/docs/cd/B28359_01/server.111/b28278/diaus.htm#sthref13996
3. Metalink note # 816871.1, How To Purge Listener Log.Xml File?
4. Metalink note # 453125.1, 11g Diagnosability: Frequently Asked Questions, SQL*Net Listener Log File

Thursday, June 4, 2009

Shell Script to Monitor the 11g Oracle Database Alert Log File

Starting with Oracle database version 11g, the default format of the alert log is XML. The text version of the alert log is still available in the ADR (Automatic Diagnostic Repository) directory, but it exists for backward compatibility only. Database administrators have a few options for viewing the XML-formatted alert log:

1. Use Oracle Enterprise Manager;
2. Query the X$DBGALERTEXT fixed table (see the sketch after this list); and
3. Use ADRCI command line utility.
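For option 2, here is a quick sketch wrapped in a shell heredoc. X$DBGALERTEXT is a fixed table visible to SYS; the column names used here (ORIGINATING_TIMESTAMP, MESSAGE_TEXT) are the ones I am aware of in 11g, so treat this as illustrative:

sqlplus -s "/ as sysdba" <<'EOF'
set linesize 200 pagesize 100
column message_text format a130
-- ORA- messages recorded in the last 24 hours
select originating_timestamp, message_text
from x$dbgalertext
where message_text like '%ORA-%'
and originating_timestamp > sysdate - 1;
exit;
EOF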

Using the ADRCI utility is the only one of these options that works while the database is closed. You can even view an XML-formatted alert log for another database or from another server, as long as the file can be read from the server where ADRCI runs. To open the file, use the following command after starting ADRCI. It opens the alert log file in your text editor with the XML tags stripped.

show alert -file /directory-where-you-have-the-logfile/log.xml

The following script provides a way to check the 11g alert log file periodically from a cron job. It sends an email whenever a new Oracle error message has been added to the alert log by the Oracle server. The email includes the original error message from the alert log and a brief description of the error, formatted something like this:

ORA-19815: WARNING: db_recovery_file_dest_size of 4294967296 bytes is 97.38% used, and has 112542208 remaining bytes available.
19815, 00000, "WARNING: %s of %s bytes is %s%% used, and has %s remaining bytes available."
// *Cause: DB_RECOVERY_FILE_DEST is running out of disk space.
// *Action: One of the following:
// 1. Add disk space and increase DB_RECOVERY_FILE_DEST_SIZE.
// 2. Backup files to tertiary device using RMAN.
// 3. Consider changing RMAN retention policy.
// 4. Consider changing RMAN archived log deletion policy.
// 5. Delete files from recovery area using RMAN.

-----------------------------------------------------

Here is the script:

#!/bin/sh
#Author: Dan Zheng, danthedba.blogspot.com
# June 2009
 
# This script checks the alert log for ORA- errors and emails them.
# It can be used on Oracle database version 11g and higher.
#
# For RAC database, please change the homepath setting accordingly when calling
# the ADRCI utility in the script.
#
# Before running this script for the first time, please create a directory:
# $ORACLE_BASE/admin/$SID/dbmonitor for files related to this script.
# Also make sure $ORACLE_BASE and $ORACLE_HOME are set correctly in your environment.

# Set up environment.
# Make necessary change on following line based on your Unix flavor:

. ~oracle/.bash_profile

export SID=$ORACLE_SID
export TIMESTAMP=`date +%M%H%d%m%y`;
export mon_dir=$ORACLE_BASE/admin/$SID/dbmonitor
export fn=$mon_dir/email_body.tmp
export alertck=$mon_dir/alertck.log

touch $alertck
touch $fn
touch $ORACLE_BASE/admin/$SID/dbmonitor/donot_delete.log
EMAIL='dan_zheng@danthedba.blogspot.com'

cd $ORACLE_BASE
sid=$(echo "$ORACLE_SID" | tr '[:upper:]' '[:lower:]')

adrci << EOF
spool $mon_dir/alert_$TIMESTAMP.log
set termout off
set homepath diag/rdbms/$sid/$SID
SHOW ALERT -P "MESSAGE_TEXT LIKE '%ORA-%'" -term
spool off
exit
EOF

export c_log=`wc -l $mon_dir/alert_$TIMESTAMP.log | awk '{ print $1 }'`
export c_tmp=`wc -l $mon_dir/donot_delete.log | awk '{ print $1 }'`
echo $c_log
echo $c_tmp
if [ "$c_log" -gt "$c_tmp" ]; then
comm -23 $mon_dir/alert_$TIMESTAMP.log $mon_dir/donot_delete.log | grep ORA- | sort -u > $alertck

exec 3< $alertck
# read until the end of the file
until [ $done ]
do
read <&3 ERR_LINE
if [ $? != 0 ]; then
done=1
continue
fi
echo $ERR_LINE >> $fn
export ERR_NO=`echo $ERR_LINE | awk '{ print $1 }' | awk -F- '{ print $2 }' | awk -F: '{ print $1 }'`
echo " Oracle error is : ORA-$ERR_NO"
oerr ora $ERR_NO >> $fn
echo " " >> $fn
echo "-----------------------------------------------------" >> $fn
echo " " >> $fn
done

echo "ERRORS: sending mail!"
SUBJECT="$SID - ORA Error Message found in alert.log on `date`"
echo $SUBJECT

# for a HPUX server, use following line to send email:
#cat $fn | /usr/bin/mailx -s "$SUBJECT" $EMAIL

# for a Linux server, use following line to send email:
cat $fn | mail -s "$SUBJECT" $EMAIL

else
echo "No Oracle error is found in alert.log since last time this script was run."
fi

mv $mon_dir/alert_$TIMESTAMP.log $mon_dir/donot_delete.log
rm $fn

echo "The script was executed successfully."


Reference:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/adrci.htm#BABBHGFC






Tuesday, April 7, 2009

Load Test Script for Oracle Database

Have you ever been in a situation where you wished you could do some load testing on your test database before a production release? I am sure every DBA and developer has. Here is an example of running a load test against an Oracle database server. You can add the PL/SQL procedures your application runs to this script to suit your needs.

#!/usr/bin/ksh
#
# file name: load_test.sh
# author: Dan Zheng April 2009
#
# This program assumes the following conditions:
# 1. you are running it from a Unix server with an Oracle client installed;
# 2. the $ORACLE_HOME environment variable is set;
# 3. you can log on to the database as the scott user.
#
# Check if file is executed with wrong number of input parameters
#
if [ $# -ne 3 ]
then
echo "Wrong number of arguments: $# "
echo "To execute this script follow the instruction:"
echo " (script filename) (ORACLEsid) (NUMofSESSION) (minutes) "
echo " for example if you want to open 300 sessions in RMDB1 database,"
echo " and leave the sessions open for 20 minutes:"
echo " load_test.sh RMDB1 300 20 > load.lst 2>&1 "
exit 1
fi

export ORACLE_SID=$1
export sessions=$2
export openmin=$3

let count=0
let totalnum=$sessions

while (($totalnum > $count )); do
# launch each session in the background so the sessions stay open concurrently
$ORACLE_HOME/bin/sqlplus -s scott/tiger@$ORACLE_SID << EOF &
set serverout on
declare
currenttime timestamp;
endtime timestamp;
begin
currenttime := current_timestamp;
endtime := (current_timestamp + $openmin/1440);
loop
exit when currenttime > endtime;
currenttime := current_timestamp;
end loop;
dbms_output.put_line('exiting Sql*Plus at '||currenttime);
dbms_output.put_line(' ');
end;
/
exit;
EOF

let count="count + 1"
done
# wait for all background sessions to finish before exiting
wait
exit;

The above script will consume all your CPU power, because each session constantly checks the system time. If you just want to test how many idle sessions your system can handle, use the following call inside the SQL*Plus session in your script instead of the busy loop:

exec dbms_backup_restore.sleep($openmin*60);
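For example, the heredoc body in the loop could be reduced to this sketch. It assumes the test account has been granted EXECUTE on SYS.DBMS_BACKUP_RESTORE (scott does not have it by default):

$ORACLE_HOME/bin/sqlplus -s scott/tiger@$ORACLE_SID << EOF &
-- hold the session open without burning CPU
exec sys.dbms_backup_restore.sleep($openmin*60);
exit;
EOF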

Reference:
Oracle Support metalink notes:
1. How to simulate a slow query. Useful for testing of timeout issues Doc ID: 357615.1
2. Instead of DBMS_LOCK.SLEEP Procedure Use SYS.DBMS_BACKUP_RESTORE.SLEEP For Time Interval > 3600 seconds Doc ID: 471246.1

Friday, March 27, 2009

Evaluating Execution Results of an Oracle PL/SQL Procedure in a UNIX Shell Script

There are many notes with extensive coverage of this topic on Oracle Support's Metalink website; see the list of references at the end of this post. Here I focus only on how to evaluate the execution result of a PL/SQL procedure within a shell script. One practical use of this is checking the status of an application, since many applications interact with the Oracle database primarily through PL/SQL procedure calls.
I have found the following two methods useful.
1. Use "whenever sqlerror exit 1" to check if there is a problem:

#!/usr/bin/sh
sqlplus scott/tiger@${ORACLE_SID} << EOF
whenever sqlerror exit 1
exec 'some PLSQL procedure'
EOF
ABC=$?
if [[ $ABC = 1 ]];
then
'do something and send out an email…'
else
'do something else…'
fi

This method is simple, but many things can set the exit status to 1: a problem with SQL*Plus itself, database user account permissions, and so on. If the only thing that could go wrong is the execution of the PL/SQL procedure, however, this is still a viable choice.
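One way to reduce that ambiguity is to reserve a distinct exit code for OS-level failures with a second WHENEVER directive. This is only a sketch; the procedure name is a placeholder as above:

#!/usr/bin/sh
sqlplus scott/tiger@${ORACLE_SID} << EOF
whenever oserror exit 2
whenever sqlerror exit 1
exec 'some PLSQL procedure'
EOF
RC=$?
if [[ $RC = 1 ]]; then
# a SQL or PL/SQL error occurred
:
elif [[ $RC = 2 ]]; then
# an operating system error occurred (for example, a spool file could not be written)
:
fi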

2. Use a nested block along with sql.sqlcode. SQLCODE returns the error code of the most recent exception raised in the PL/SQL block, so you have much more control over which exit status is returned to the shell script. Here is an example:
(Note: even for exceptions that are Oracle predefined exceptions, the following code can still capture the value of SQLCODE. Tested in 10gR2.)

#!/usr/bin/sh
sqlplus -s scott/tiger@${ORACLE_SID} << EOF
set serverout on
variable set_flag number;
DECLARE
sql_code number := 0 ;
my_errm VARCHAR2(32000);
BEGIN
BEGIN
'some PLSQL procedure';
EXCEPTION
WHEN OTHERS THEN
sql_code := SQLCODE;
my_errm := SQLERRM;
END;

dbms_output.put_line('Error message is: ' || my_errm);

IF sql_code != 0 THEN
:set_flag := 4;
ELSE
:set_flag := 0;
END IF;
END;
/
exit :set_flag
EOF

# It is a good idea to capture the exit status by assigning it to a variable, in case you need to use it multiple times later in the script.
ABC=$?
if [[ $ABC = 1 ]]; then
# SQL*Plus failed to run successfully for some reason: send an email, write to a log file, etc.
:
elif [[ $ABC = 4 ]]; then
# the PL/SQL procedure failed: do something about it, send an email, write to a log file, etc.
:
else
# there is no problem
:
fi

References:
(1) metalink note # 351015.1 How To Pass a Parameter From A UNIX Shell Script To A PLSQL Function Or Procedure.
(2) metalink note # 400195.1 How to Integrate the Shell, SQLPlus Scripts and PLSQL in any Permutation?
(3) metalink note # 73788.1, Example PL/SQL: How to Pass Status from PL/SQL Script to Calling Shell Script
(4) metalink note # 6841.1 Using a PL/SQL Variable to Set the EXIT code of a SQL script
(5) http://download.oracle.com/docs/cd/B12037_01/appdev.101/b10807/13_elems049.htm
(6) Oracle magazine: On Avoiding Termination By Steven Feuerstein
http://www.oracle.com/technology/oramag/oracle/09-mar/o29plsql.html

Thursday, February 19, 2009

Consistent Export with Data Pump Utility (updated)

In Oracle 10g, the Data Pump utility was introduced to replace the original Export/Import utilities. Database administrators and backup administrators who used to back up their production databases regularly with the Export utility can use the Data Pump utility instead. Oracle has no plans to desupport the Import utility in future releases (as of February 2009; see Metalink note 132904.1), but as of 11g the original Export utility is desupported for general use:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/exp_imp.htm#i1023560

When running the original Export utility against a live database, the parameter consistent=y is usually set so that the objects in the backup are consistent to a single point in time. Data Pump Export has no such parameter. According to the 10g Oracle documentation:

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#sthref181

A parameter comparable to CONSISTENT is not needed. Use FLASHBACK_SCN and FLASHBACK_TIME for this functionality.

The default value of both FLASHBACK_SCN and FLASHBACK_TIME is none. What happens if we run a Data Pump export on a live database without setting either of them?


1. Set up the environment.


connect / as sysdba

create or replace directory dpump_dir as '/tmp';
grant read, write on directory dpump_dir to system;
drop user dpumpusr cascade;

create user dpumpusr identified by dpumpusr default tablespace users temporary tablespace temp;
grant connect, resource to dpumpusr;

connect dpumpusr/dpumpusr

create table test1
(
col11 number,
col12 date,
col13 varchar2(1000),
Constraint col11_pk primary key (col11)
)
partition by range (col11)
(
partition part101 values less than (5001),
partition part102 values less than (10001)
)
;

create table test2
(
Col21 number,
Col22 date,
Col23 varchar2(1000),
Col24 number,
Constraint col24_fk
foreign key (col24) References test1(col11)
on delete cascade
)
partition by range (col21)
(
partition part201 values less than (5001),
partition part202 values less than (10001)
)
;

I use "on delete cascade" here so that when records in table test1 are deleted, the corresponding records in the child table test2 are deleted as well.

Now insert records into test1 and test2:

begin
for i in 1..10000 loop
insert into test1 values (i, sysdate, lpad (to_char(i), 100, '0'));
commit;
end loop;
end;
/

select count(*) from test1 partition(part101);

COUNT(*)
----------
5000

select count(*) from test1 partition(part102);

COUNT(*)
----------
5000


begin
for i in 1..10000 loop
insert into test2 values (i, sysdate, lpad (to_char(i), 100, '0'), i);
commit;
end loop;
end;
/

select count(*) from test2 partition(part201);

COUNT(*)
----------
5000

select count(*) from test2 partition(part202);

COUNT(*)
----------
5000


2. Perform a Data Pump Export on a database running active transactions without setting FLASHBACK_SCN or FLASHBACK_TIME.

a. expdp system/password@rmdb1 dumpfile=dpump_dir:dpumpusr.dmp logfile=dpump_dir:dpumpusr.log schemas=dpumpusr job_name=expdpumpusr

b. During the Data Pump export, start a SQL*Plus session as the dpumpusr user and delete some records. Each iteration of the following PL/SQL block deletes one record from test1 (and, through the cascade, the matching record from test2), committing after each delete:

begin
for i in 1..10000 loop
delete from test1 where col11=i;
commit;
end loop;
end;
/

Here is the result of the Data Pump export:
Export: Release 10.2.0.3.0 - 64bit Production on Friday, 27 March, 2009 8:40:38

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."EXPDPUMPUSR": system/********@rmdb1 dumpfile=dpump_dir:dpump usr.dmp logfile=dpump_dir:dpumpusr.log schemas=dpumpusr job_name=expdpumpusr
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.671 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
. . exported "DPUMPUSR"."TEST2":"PART201" 290.5 KB 2410 rows
. . exported "DPUMPUSR"."TEST2":"PART202" 596.5 KB 5000 rows
. . exported "DPUMPUSR"."TEST1":"PART101" 274.9 KB 2358 rows
. . exported "DPUMPUSR"."TEST1":"PART102" 576.7 KB 5000 rows
Master table "SYSTEM"."EXPDPUMPUSR" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.EXPDPUMPUSR is:
/tmp/dpumpusr.dmp
Job "SYSTEM"."EXPDPUMPUSR" successfully completed at 08:41:09

The number of records in the parent table (test1) does not match the number in the child table (test2). This test proves two points:
a. The export dump file can capture data from different transactions, and
b. The dump file may be useless if the records in it violate the referential integrity constraints on the tables.

3. Perform another Data Pump export, this time setting FLASHBACK_TIME to the beginning of the export session. Create a parameter file, /tmp/dpump.par, with the following entries:


dumpfile=dpump_dir:dpumpusr.dmp
logfile=dpump_dir:dpumpusr.log
schemas=dpumpusr
flashback_time="TO_TIMESTAMP('27-03-2009 10:02:00', 'DD-MM-YYYY HH24:MI:SS')"
job_name=expdpumpusr

During the export, run the same PL/SQL block to delete records. Here is the output of the export log:

>expdp system/password@rmdb1 parfile=/tmp/dpump.par

Export: Release 10.2.0.3.0 - 64bit Production on Friday, 27 March, 2009 10:03:30

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."EXPDPUMPUSR": system/********@rmdb1 parfile=/tmp/dpump.par
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.671 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
. . exported "DPUMPUSR"."TEST2":"PART201" 596.3 KB 5000 rows
. . exported "DPUMPUSR"."TEST2":"PART202" 596.5 KB 5000 rows
. . exported "DPUMPUSR"."TEST1":"PART101" 576.6 KB 5000 rows
. . exported "DPUMPUSR"."TEST1":"PART102" 576.7 KB 5000 rows
Master table "SYSTEM"."EXPDPUMPUSR" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.EXPDPUMPUSR is:
/tmp/dpumpusr.dmp
Job "SYSTEM"."EXPDPUMPUSR" successfully completed at 10:03:58

All records that existed in the database at the beginning of the export are included in the dump file. This export is consistent.

4. For comparison, let's do an export with the original Export utility using consistent=y:

a. Drop and recreate the same partitioned tables.
b. Run the export.
c. Run the same PL/SQL block while the export session is running.

Here is the result:
>exp system/password@rmdb1 file=/tmp/dpumpusr_export.dmp log=/tmp/dpumpusr_export.log owner=dpumpusr consistent=y

Export: Release 10.2.0.3.0 - Production on Fri Mar 27 09:19:03 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user DPUMPUSR
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user DPUMPUSR
About to export DPUMPUSR's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export DPUMPUSR's tables via Conventional Path ...
. . exporting table TEST1
. . exporting partition PART101 5000 rows exported
. . exporting partition PART102 5000 rows exported
. . exporting table TEST2
. . exporting partition PART201 5000 rows exported
. . exporting partition PART202 5000 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

All the rows that existed before the delete command were exported. The objects in the export file are consistent to the point in time at the beginning of the export session.


5. Now repeat the steps above. This time set consistent=n.

Here is the result:

exp system/password@rmdb1 file=/tmp/dpumpusr_export2.dmp log=/tmp/dpumpusr_export2.log owner=dpumpusr consistent=n

Export: Release 10.2.0.3.0 - Production on Fri Mar 27 09:25:37 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user DPUMPUSR
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user DPUMPUSR
About to export DPUMPUSR's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export DPUMPUSR's tables via Conventional Path ...
. . exporting table TEST1
. . exporting partition PART101 3997 rows exported
. . exporting partition PART102 5000 rows exported
. . exporting table TEST2
. . exporting partition PART201 3957 rows exported
. . exporting partition PART202 5000 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

The number of records in table test1 does not match the number in table test2. This export is not consistent.

6. Clean up:

Drop the tables and user dpumpusr.


SQL> drop table dpumpusr.test2 purge;

Table dropped.

SQL> drop table dpumpusr.test1 purge;

Table dropped.

SQL> drop user dpumpusr cascade;

User dropped.
7. Conclusions:

a. The Data Pump Export utility behaves differently from the original Export utility in how active transactions are treated.
b. The original Export utility, with consistent=y, creates a dump file of the database objects as of the point in time at the beginning of the Export session.
c. Without values for FLASHBACK_SCN or FLASHBACK_TIME, the Data Pump Export utility may create an inconsistent export.
d. To ensure a consistent export with Data Pump, either set the FLASHBACK_SCN or FLASHBACK_TIME parameter, or restart the database in restricted mode before the export session starts (see the sketch below).
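
As an illustration of the FLASHBACK_SCN route, the following sketch captures the current SCN just before starting the export and passes it to expdp. The connect string and file names reuse the examples above; treat it as a template, not a finished script:

#!/bin/sh
# capture the database's current SCN (10g and later)
SCN=$(sqlplus -s "/ as sysdba" <<'EOF'
set heading off feedback off pagesize 0
select current_scn from v$database;
exit;
EOF
)

# export the schema as of that SCN
expdp system/password@rmdb1 schemas=dpumpusr \
  dumpfile=dpump_dir:dpumpusr.dmp logfile=dpump_dir:dpumpusr.log \
  flashback_scn=$SCN job_name=expdpumpusr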

(I wish to thank friends and online readers who provided valuable inputs after the last version of this blog was published.)