Monday, July 21, 2008

NLS considerations in Import/Export - Frequently Asked Questions

Subject: NLS considerations in Import/Export - Frequently Asked Questions
Doc ID: Note:227332.1 Type: BULLETIN
Last Revision Date: 17-JUL-2008 Status: PUBLISHED


Globalization (NLS): NLS considerations in Import/Export - Frequently Asked Questions
-------------------------------------------------------------------------------------
For the main Globalization (NLS) FAQ please see:
Note 60134.1 Globalization (NLS) - Frequently Asked Questions

1. How does NLS affect import/export (exp/imp)?
2. How should NLS_LANG be set when using export?
3. How should NLS_LANG be set when using import?
4. How is import affected by NLS_LANGUAGE and NLS_TERRITORY?
5. I have the message "( possible ncharset conversion )" during import.
6. How to know in what characterset a dmp (export) file is created?
7. How does NLS affect datapump (expdp/impdp)?
8. What causes ORA-01401 or ORA-12899 during import (imp and impdp)?


For the main exp/imp FAQ please see:
Note 175624.1 Oracle Server - Export and Import FAQ

Globalization (NLS): NLS considerations in Import/Export - Frequently Asked Questions
-------------------------------------------------------------------------------------
1. How does NLS affect import/export (exp/imp)?

Import and export are client products, in the same way as SQL*Plus or
Oracle Forms, and will therefore translate characters from the database
character set to that defined by NLS_LANG. The character set used for
the export will be stored in the export file and, when the file is imported,
the import will check the character set that was used. If it is different to
that defined by NLS_LANG at the import site, the characters will be
translated to the import character set and then, if necessary, to the
database character set.

References:
Note 15095.1 Export/Import and NLS Considerations
Note 48644.1 Identifying the Export Character Set

2. How should NLS_LANG be set when using export?

Oracle recommends ALWAYS setting the character set part of the NLS_LANG environment
variable to the character set of the database you are exporting:

select value from nls_database_parameters where parameter='NLS_CHARACTERSET';

That way no conversion will take place and the export file will be created
in the same character set as the original database, containing ALL data from the
original database (even incorrectly stored data, if that should be the case).
Even if the plan is to import this into a database with a different character set
later, the conversion can be postponed until the import.

Note that this has no relation to the operating system. If you have a
WE8MSWIN1252 database on a unix server (which is fully supported) then you
should set NLS_LANG to AMERICAN_AMERICA.WE8MSWIN1252 before export.

During *interaction* with the database (e.g. sqlplus) you need to configure
your *unix* client properly, and that cannot be 1252, since *unix* does not
have a 1252 character set:
Note 264157.1 The correct NLS_LANG setting in Unix Environments
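
A quick way to see both character sets that matter for export (the
NLS_CHARACTERSET for CHAR/VARCHAR2/CLOB data and the NLS_NCHAR_CHARACTERSET
for NCHAR data) in one go:

select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');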


3. How should NLS_LANG be set when using import?

If the source and target database have the same character set,
the character set part of the NLS_LANG should be set to that same character set
on both the export and the import.

Even if the character sets of the exporting and importing databases
are not the same, the best (preferred) value to use for
the character set part of NLS_LANG, on both export and import, is still
the character set of the source database.

select value from nls_database_parameters where parameter='NLS_CHARACTERSET';

Setting the NLS_LANG to the character set of the target database during import
is also correct as such, but import has some limitations when going to a multibyte
character set (like UTF8); hence using the SOURCE NLS_CHARACTERSET during both the imp
and exp sessions is simply the best option, avoiding problems like
IMP-16 "Required character set conversion (type %lu to %lu) not supported".

So, the preferred place to do the conversion is between the import executable
and the target database.

Note that this has no relation to the operating system. If your source
database is a WE8MSWIN1252 database then you should simply set NLS_LANG
to AMERICAN_AMERICA.WE8MSWIN1252 before import, even on a Unix server.

Note that during *interaction* with the database (e.g. sqlplus) you need to
configure your *unix* client properly, and that cannot be 1252, since *unix*
does not have a 1252 character set:
Note 264157.1 The correct NLS_LANG setting in Unix Environments


Example: you want to go from a WE8MSWIN1252 to a UTF8 db:

(note that this is only the exp/imp example; if you want to migrate to UTF8,
check Note 260192.1 for the full story)

1) Double-check the NLS_CHARACTERSET on the SOURCE database:

select * from nls_database_parameters where parameter = 'NLS_CHARACTERSET';

and export with the NLS_LANG set to AMERICAN_AMERICA.WE8MSWIN1252.

In this case we want to create an export file containing WE8MSWIN1252 data.

(This is also the setting you want to use if you take an
export as backup)

on unix this is:
$ NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
$ export NLS_LANG
$ exp ....

on windows this is:

c:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
c:\>exp ....

2) Import with the NLS_LANG set to AMERICAN_AMERICA.WE8MSWIN1252 (= the source NLS_CHARACTERSET)
into the new UTF8 db.

on unix this is:
$ NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
$ export NLS_LANG
$ imp ....

on windows this is:

c:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
c:\>imp ....


The conversion to UTF8 is done while the imp connection inserts the data
into the UTF8 database.

We recommend setting NLS_LANG explicitly in the current shell on unix
(-> Note 131207.1 How to Set Unix Environment Variable )
or in the DOS box used for the exp or imp tool on windows
(-> "c:\>set NLS_LANG=AMERICAN_AMERICA.<characterset>").

4. How is import affected by NLS_LANGUAGE and NLS_TERRITORY?

It is not. Normally you use the AMERICAN_AMERICA default, but
if you imported with NLS_LANG set to FRENCH_FRANCE, for example,
you will not have problems, even if the originating environment
used GERMAN_GERMANY or the like.
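
For example (a sketch, assuming a WE8MSWIN1252 source database and that French
message files are installed), only the language of the tool's messages changes:

$ NLS_LANG=FRENCH_FRANCE.WE8MSWIN1252
$ export NLS_LANG
$ exp ....

The exp messages now appear in French, but the character set conversion is
still driven only by the character set part (WE8MSWIN1252).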

5. I have the message "( possible ncharset conversion )" during import.

If you see something similar to the following

Export file created by EXPORT:V08.01.07 via direct path
import done in WE8ISO8859P15 character set and AL16UTF16 NCHAR character set
export server uses WE8ISO8859P15 NCHAR character set (possible ncharset conversion)

in the import log, please see point 11 in
Note 276914.1 The National Character Set in Oracle 9i and 10g

6. How to know in what characterset a dmp (export) file is created?

simply issue: imp system/oracle@database show=yes file=test.dmp

the output gives you

import done in US7ASCII character set and AL16UTF16 NCHAR character set
-> this is the current NLS_LANG value set in the environment
and the NCHAR character set of the target database

import server uses WE8MSWIN1252 character set (possible charset conversion)
-> this line is only shown if the NLS_LANG during this import session differs
from the target database character set, so if you see 3 lines you might have problems :-)

export client uses UTF8 character set (possible charset conversion)
-> this is the character set used during the export session and hence the
character set used in the dmp file.
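
Note 48644.1 also describes how to find the export character set without a
database connection: the character set ID is stored in the first few bytes of
the dmp file. A sketch on unix (the exact byte layout is documented in that
note, so treat this as illustrative; test.dmp is the file from above):

$ od -A n -t x1 -N 4 test.dmp

The 2nd and 3rd bytes hold the character set ID in hex; the ID can be
translated with the nls_charset_name SQL function, for example:

select nls_charset_name(to_number('01','XX')) from dual;
-- returns US7ASCII; nls_charset_id('US7ASCII') does the reverse lookup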

7. How does NLS affect datapump (expdp/impdp)?

Datapump does not use the NLS_LANG to do conversion between databases.
Conversion between two database character sets is done purely based on the
NLS_CHARACTERSET (or NLS_NCHAR_CHARACTERSET for NCHAR, NVARCHAR2 and NCLOB datatypes)
of the source and target databases.

However, if you specify a parameter file then the NLS_LANG *is* used.
This is only important if you use non-English characters (e.g. for the
QUERY parameter) in the parameter file.

If you use non-English characters in the parameter file then the NLS_LANG
environment variable should be set (in the session where the Data Pump job
is started) to the correct encoding of the parameter file (!).
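
A sketch (directory, dump file, and table names are just illustrations): if the
parameter file exp.par is saved in Windows-1252 encoding and contains an
accented character in the QUERY parameter, set NLS_LANG to that encoding before
starting the job.

exp.par (saved in WE8MSWIN1252 encoding):

DIRECTORY=dump_dir
DUMPFILE=emp.dmp
TABLES=scott.emp
QUERY=scott.emp:"WHERE ename LIKE 'José%'"

$ NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
$ export NLS_LANG
$ expdp system/oracle parfile=exp.par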

Note:
-----

There is a chance of data corruption when going from an 8-bit
character set to UTF8 or another multibyte character set on ALL 10g versions
(including 10.1.0.5 and 10.2.0.3) and on 11.1.0.6.
Impdp may provoke this corruption unless you have applied the patch for
Bug 5874989. The "old" exp/imp utilities work fine. This problem is fixed
in the 10.2.0.4 and 11.1.0.7 patch sets.

All existing patches for this bug are found here:
http://updates.oracle.com/download/5874989.html

For 10.1.0.5, 10.2.0.2, 10.2.0.3 and 11.1.0.6 you can request a backport
of the fix for Bug 5874989 if there is no patch yet for your version/platform.

For windows the fix is included in
10.1.0.5.0 Patch 20 (10.1.0.5.20P) or later ( Note 276548.1 )
10.2.0.3.0 Patch 11 (10.2.0.3.11P) or later ( Note 342443.1 )

The patch is technically only needed on the impdp side, but if you use
expdp/impdp between different character sets we suggest patching all your
systems.


8. What causes ORA-01401 or ORA-12899 during import (imp and impdp) ?

9i and lower give ORA-01401: inserted value too large for column.
10g and up give ORA-12899: value too large for column.

This is seen when exporting from a database with an 8-bit NLS_CHARACTERSET
(like WE8ISO8859P1, WE8MSWIN1252, WE8DEC ...) or a 16-bit NLS_CHARACTERSET
(like JA16SJIS, ZHS16GBK, KO16MSWIN949) to a database with the
NLS_CHARACTERSET set to AL32UTF8 or UTF8.

(AL32)UTF8 uses more BYTES to store a character than 8-bit and 16-bit character sets do.
This is explained in Note 119119.1 AL32UTF8 / UTF8 (Unicode) Database Character Set
Implications

CHAR and VARCHAR2 column sizes are defined by default in BYTES, not in CHARACTERS.

The best solution is to pre-create the tables using CHAR semantics (9i and up)
Note 144808.1 Examples and limits of BYTE and CHAR semantics usage
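
A sketch of both the expansion and the fix (table and column names are
hypothetical; run in an AL32UTF8 database):

-- 'é' is 1 byte in WE8MSWIN1252 but 2 bytes in AL32UTF8:
select length('é') chars, lengthb('é') bytes from dual;

-- pre-create the table with CHAR semantics so the column holds
-- 10 CHARACTERS instead of 10 BYTES:
create table emp_utf8 (ename varchar2(10 char));

-- or change the default semantics for the session before running the DDL:
alter session set nls_length_semantics = char;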


Other Known Problems:
---------------------
Note 278980.1 Oracle10g Import Errors like IMP-00008 / IMP-00009 if Exported with NLS_LANG=FRENCH_FRANCE.[characterset]


References
----------

For further NLS / Globalization information you may start here:
Note 267942.1 - Globalization Technology (NLS) Knowledge Browser Product Page
Note 60134.1 Globalization (NLS) - Frequently Asked Questions

Further details of issues discussed in this note can be found in:
Note 15095.1 Export/Import and NLS Considerations
Note 48644.1 Identifying the Export Character Set

Note 175624.1 Oracle Server - Export and Import FAQ

Export/Import and NLS Considerations

Subject: Export/Import and NLS Considerations
Doc ID: Note:15095.1 Type: FAQ
Last Revision Date: 07-JUL-2006 Status: PUBLISHED


Introduction
------------
This note provides an in-depth overview of exp/imp and NLS. You may not need to
know all of this if you just want to take an export and make sure you don't lose
any data; in that case, please read this first:
Note 227332.1 NLS considerations in Import/Export - Frequently Asked Questions


Export/Import and NLS considerations
------------------------------------
If you have exported/imported a database or table(s) and are now encountering
character set conversion problems, use the following information to confirm
whether the export/import procedure was performed correctly.

+ In Oracle9 and higher, most data is automatically exported in the character set
of the database that is exported. The only exception is that
data-dictionary information is exported in the character set that is set in
the NLS_LANG environment variable when making the export.
In pre-Oracle9 export, ALL data is exported in the character set that is set
in the NLS_LANG environment variable when making the export.
This means that if the character set part of the NLS_LANG environment variable
during export is different from the database character set, there will be an
automatic conversion between those (for 9i and higher only for data-dictionary
data, since "normal" data is exported in the database character set, regardless
of the NLS_LANG, as described above).

+ The export file contains the character set that the data was exported in, in
the first few bytes of the export file. Export stores the character set ID
(not the text string) of the "exporting character set". See Note 48644.1.
This is relevant in pre-Oracle9 export files because all data will be exported
in the same character set and this way we can "see" what that character set
is, and we can even "update" these fields to work around certain problems
(only when advised by Oracle Support!).
For Oracle9 and higher export files this is less relevant because the
character set is stored in many more places in the export file, and can be
different depending on the data. So for Oracle9 and higher we can never
"update" the export file manualy.

+ An import session runs in the character set that is set in the NLS_LANG
environment variable for import session.

+ Import will check the character set of the data in the dump file and compare
it with the session's character set as defined in NLS_LANG.

+ No conversion occurs if the export data character set and the import session
character set are the same. If they are not the same, conversion is performed
from the export data character set to the import session character set prior
to the data being inserted into the database.
However, import can only perform this conversion for single-byte character
sets !

+ For imports into multibyte character set databases (i.e. UTF8) the character
set part of NLS_LANG should be set identical to that of the export session.
Otherwise an
IMP-16 "Required character set conversion (type %lu to %lu) not supported"
error will come up during import. See also Bug 896407.

+ The import session character set should be the same as, or a superset of, the
export data character set, otherwise special characters will not be correctly
converted.

+ If you import pre-Oracle7 export files, include the parameter CHARSET when
defining the import parameter set. CHARSET identifies the character set of the
export file. The CHARSET option was developed to import older export files
which did not have stored character set ID information.
If you define CHARSET but the export file does contain character set
information (export version 7 and higher) then the value in CHARSET should
match the export file character set. If they do not match, IMP-42 will
result (so it is better not to use this parameter at all, unless you import
pre-Oracle7 data); see the sketch after this list.

+ After the data has been converted to the import session character set, it is
then converted to the database character set if they differ. The database
character set should be a superset (or the same) of the import's session
character set, otherwise special characters will not be correctly converted.
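
As an illustration of the CHARSET parameter mentioned in the list above (the
file name is hypothetical; only do this for pre-Oracle7 export files):

imp system/oracle file=v6_export.dmp full=y charset=WE8DEC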

So this means that there is a potential for 3 separate conversions when moving
data between databases, as described in the following diagram:

   -------------------                      for all data pre-Oracle9 and data-
   | db in character |       export         dictionary data in Oracle9 and up:
   | set A           | --------------->     the exp session is in character
   -------------------                      set B as defined by NLS_LANG.
        source                              Therefore the dump file is in
                                            character set B. Character set
                                            conversion may occur.
                        |
                        |  move file over to
                        |  another machine
                        |
                        V
    destination
   -------------------
   | db in character |       import         the imp session is in character
   | set D           | <---------------     set C as defined by NLS_LANG.
   -------------------                      The dump file is still in
                                            character set B. Character set
                                            conversion may occur.
                                            During the import process
                                            character set conversion
                                            may occur between character
                                            set C and the db's character
                                            set D if they differ.


If you have any problems related to character set conversion in export/import
then you need to identify the following:
-- What is (was) the database character set in the source database (character
set A in the above)?
-- What was the client character set specified in NLS_LANG when the data was
exported (character set B in the above)?
-- What was the client character set specified in NLS_LANG when the data was
imported (character set C in the above) ?
-- What is the database character set of the destination database (character
set D in the above)?


Minimizing character set conversions during export/import
---------------------------------------------------------
As described above, it is important to note that import can do up to 3 character
set conversions, depending on:

(a) character set of exported database
(b) NLS_LANG of export session (-> this is the "export file character set")
(c) NLS_LANG of import session
(d) character set of target database.

Obviously there really is a need for only 1 conversion (from the original
database character set to the target database character set). Minimizing the number
of conversions means that you minimize the potential of losing data.

To minimize the number of conversions you are advised to follow these rules:
+ Set the NLS_LANG during export to the same as the character set of the
exported database -> this means no conversion takes place, all data is still
stored in the export file as it was stored in the database.
+ Set the NLS_LANG during import to the same value as during the export -> this
means that no conversion takes place in the import session.
+ If the character set of the target database is different, then the data will
be automatically converted when import inserts the data into the database, you
do not have to "set" anything for this, it's automatic.

Of course you do not have to follow these rules, but complying with them
does minimize the risk of losing data during export/import; a short sketch
follows below.
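
A minimal end-to-end sketch of these rules on unix, assuming a WE8ISO8859P1
source database and an AL32UTF8 target database (user and file names are
illustrative):

# on the source side: NLS_LANG = source database character set
$ NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
$ export NLS_LANG
$ exp system/oracle file=full.dmp full=y

# on the target side: NLS_LANG = the same value as during the export
$ NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
$ export NLS_LANG
$ imp system/oracle file=full.dmp full=y

The only conversion (WE8ISO8859P1 to AL32UTF8) then happens when imp inserts
the rows into the target database.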


Regarding the National Character Set:
-------------------------------------
NCLOBs are always exported/imported in UCS-2/AL16UTF16.
NCHAR/NVARCHAR2s are always exported in the database's national character set.
This is something you can't influence by setting any parameters.
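
To check which national character set your NCHAR/NVARCHAR2 data will be
exported in:

select value
  from nls_database_parameters
 where parameter = 'NLS_NCHAR_CHARACTERSET';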

Wednesday, July 02, 2008

Subject: ORA-01555 "Snapshot too old" - Detailed Explanation

Subject: ORA-01555 "Snapshot too old" - Detailed Explanation
Doc ID: Note:40689.1 Type: BULLETIN
Last Revision Date: 16-MAY-2008 Status: ARCHIVED

In this Document
Purpose
Scope and Application
ORA-01555 "Snapshot too old" - Detailed Explanation
Terminology
ORA-01555 Explanation
Solutions
Examples
Special Cases
Summary
References






Applies to:
Oracle Server - Enterprise Edition - Version: 8.1.7.0
Information in this document applies to any platform.

Purpose
Checked for relevance on 16-MAY-2008

This article will discuss the circumstances under which a query can return the Oracle error ORA-01555: snapshot too old: rollback segment number string with name "string" too small. The article will then discuss actions that can be taken to avoid the error and finally will provide some simple PL/SQL scripts that illustrate the issues discussed.


Scope and Application
It is assumed that the reader is familiar with standard Oracle terminology such as 'rollback segment' and 'SCN'. If not, the reader should first read the Oracle Database Concepts manual and related Oracle Database documentation.

ORA-01555 "Snapshot too old" - Detailed Explanation
Terminology
Two key concepts that help in understanding ORA-01555 are briefly covered below: Read Consistency and Delayed Block Cleanout.

1. Read Consistency

This is documented in the Oracle Database Concepts manual and so will not be discussed further. However, for the purposes of this article it should be read and understood, if it is not understood already. Oracle Server provides multi-version read consistency, which is invaluable because it guarantees that you are seeing a consistent view of the data (no 'dirty reads').

2. Delayed Block Cleanout

This is best illustrated with an example: Consider a transaction that updates a million row table. This obviously visits a large number of database blocks to make the change to the data. When the user commits the transaction Oracle does NOT go back and revisit these blocks to make the change permanent. It is left for the next transaction that visits any block affected by the update to 'tidy up' the block (hence the term 'delayed block cleanout').

Whenever Oracle changes a database block (index, table, cluster) it stores a pointer in the header of the data block which identifies the rollback segment used to hold the rollback information for the changes made by the transaction. (This is required if the user later elects to not commit the changes and wishes to 'undo' the changes made.)

Upon commit, the database simply marks the relevant rollback segment header entry as committed. Now, when one of the changed blocks is revisited Oracle examines the header of the data block, which indicates that it has been changed at some point. The database needs to confirm whether the change has been committed or whether it is currently uncommitted. To do this, Oracle determines the rollback segment used for the previous transaction (from the block's header) and then checks whether the rollback segment header indicates that the transaction has been committed or not.

If it is found that the transaction has been committed then the header of the data block is updated so that subsequent accesses to the block do not incur this processing.

This behaviour is illustrated in a very simplified way below. Here we walk through the stages involved in updating a data block.

Stage 1 - No Changes Made

Description: This is the starting point. At the top of the data block we have an area used to link active transactions to a rollback segment (the 'tx' part), and the rollback segment header has a table that stores information on all the latest transactions that have used that rollback segment.

In our example, we have two active transaction slots (01 and 02) and the next free slot is slot 03. (Since we are free to overwrite committed transactions.)



Stage 2 - Row 2 is Updated

Description: We have now updated row 2 of block 500. Note that the data block header is updated to point to the rollback segment 5, transaction slot 3 (5.3) and that it is marked uncommitted (Active).



Stage 3 - The User Issues a Commit

Description: Next the user issues a commit. Note that all this does is update the rollback segment header's corresponding transaction slot to committed. It does *nothing* to the data block.



Stage 4 - Another User Selects Data Block 500

Description: Some time later another user (or the same user) revisits data block 500. We can see that there is an uncommitted change in the data block according to the data block's header.

Oracle then uses the data block header to look up the corresponding rollback segment transaction table slot, sees that it has been committed, and changes data block 500 to reflect the true state of the datablock. (i.e. it performs delayed cleanout).



ORA-01555 Explanation
There are two fundamental causes of the error ORA-01555 that are a result of Oracle trying to attain a 'read consistent' image. These are:

1. The rollback information itself is overwritten so that Oracle is unable to rollback the (committed) transaction entries to attain a sufficiently old version of the block.
2. The transaction slot in the rollback segment's transaction table (stored in the rollback segment's header) is overwritten, and Oracle cannot rollback the transaction header sufficiently to derive the original rollback segment transaction slot.
Both of these situations are discussed below with the series of steps that cause the ORA-01555. In the steps, reference is made to 'QENV'. 'QENV' is short for 'Query Environment', which can be thought of as the environment that existed when a query is first started and to which Oracle is trying to attain a read consistent image. Associated with this environment is the SCN (System Change Number) at that time and hence, QENV 50 is the query environment with SCN 50.

Case 1 - Rollback Overwritten

This breaks down into two cases: another session overwriting the rollback that the current session requires or the case where the current session overwrites the rollback information that it requires. The latter is discussed in this article because this is usually the harder one to understand.

1. Session 1 starts a query at time T1 and QENV 50.
2. Session 1 selects block B1 during this query.
3. Session 1 updates the block at SCN 51.
4. Session 1 does some other work that generates rollback information.
5. Session 1 commits the changes made in steps 3 and 4. (Now other transactions are free to overwrite this rollback information.)
6. Session 1 revisits the same block B1 (perhaps for a different row).
Now, Oracle can see from the block's header that it has been changed later than the required QENV (which was 50). Therefore we need to get an image of the block as of this QENV.

If an old enough version of the block can be found in the buffer cache then we will use this, otherwise we need to rollback the current block to generate another version of the block as at the required QENV.

It is under this condition that Oracle may not be able to get the required rollback information, because Session 1's later changes have overwritten it, and the query returns the ORA-1555 error.

Case 2 - Rollback Transaction Slot Overwritten

1. Session 1 starts a query at time T1 and QENV 50.
2. Session 1 selects block B1 during this query.
3. Session 1 updates the block at SCN 51.
4. Session 1 commits the changes. (Now other transactions are free to overwrite this rollback information.)
5. A session (Session 1, another session, or a number of other sessions) then uses the same rollback segment for a series of committed transactions. These transactions each consume a slot in the rollback segment transaction table such that it eventually wraps around (the slots are written to in a circular fashion) and overwrites all the slots. Note that Oracle is free to reuse these slots since all the transactions are committed.
6. Session 1's query then visits a block that has been changed since the initial QENV was established. Oracle therefore needs to derive an image of the block as at that point in time.

Next Oracle attempts to look up the rollback segment header's transaction slot pointed to by the top of the data block. It then realises that this has been overwritten and attempts to rollback the changes made to the rollback segment header to get the original transaction slot entry.

If it cannot rollback the rollback segment transaction table sufficiently it will return ORA-1555 since Oracle can no longer derive the required version of the data block.

It is also possible to encounter a variant of the transaction slot being overwritten when using block cleanout. This is briefly described below:

Session 1 starts a query at QENV 50. After this another process updates the blocks that Session 1 will require. When Session 1 encounters these blocks it determines that the blocks have changed and have not yet been cleaned out (via delayed block cleanout). Session 1 must determine whether the rows in the block existed at QENV 50 or were subsequently changed.

In order to do this, Oracle must look at the relevant rollback segment transaction table slot to determine the committed SCN. If this SCN is after the QENV then Oracle must try to construct an older version of the block and if it is before then the block just needs clean out to be good enough for the QENV.

If the transaction slot has been overwritten and the transaction table cannot be rolled back to a sufficiently old version then Oracle cannot derive the block image and will return ORA-1555.

Note: Normally Oracle can use an algorithm for determining a block's SCN during block cleanout even when the rollback segment slot has been overwritten. But in this case Oracle cannot guarantee that the version of the block has not changed since the start of the query.

Solutions
This section lists some of the solutions that can be used to avoid the ORA-01555 problems discussed in this article. It addresses the cases where rollback segment information is overwritten by the same session and when the rollback segment transaction table entry is overwritten.

It is worth highlighting that if a single session experiences the ORA-01555 and it is not one of the special cases listed at the end of this article, then the session must be using an Oracle extension whereby fetches across commits are tolerated. This does not follow the ANSI model and in the rare cases where ORA-01555 is returned one of the solutions below must be used.

Case 1 - Rollback Overwritten

1. Increase the size of the rollback segment, which will reduce the likelihood of overwriting rollback information that is needed.
2. Reduce the number of commits (same reason as 1).
3. Run the processing against a range of data rather than the whole table (same reason as 1).
4. Add additional rollback segments. This will allow the updates etc. to be spread across more rollback segments, thereby reducing the chances of overwriting required rollback information.
5. If fetching across commits, the code can be changed so that this is not done.
6. Ensure that the outer select does not revisit the same block at different times during the processing. This can be achieved by:
   - using a full table scan rather than an index lookup, or
   - introducing a dummy sort so that we retrieve all the data, sort it, and then sequentially visit these data blocks.
Case 2 - Rollback Transaction Slot Overwritten

1. Use any of the methods outlined above except for 6. This will allow transactions to spread their work across multiple rollback segments, thereby reducing the likelihood of rollback segment transaction table slots being consumed.
2. If it is suspected that the block cleanout variant is the cause, then force block cleanout to occur prior to the transaction that returns the ORA-1555. This can be achieved by issuing the following in SQL*Plus, SQL*DBA or Server Manager:

   alter session set optimizer_goal = rule;
   select count(*) from table_name;

   If indexes are being accessed then the problem may be an index block, and cleanout can be forced by ensuring that the entire index is traversed. E.g., if the index is on a numeric column with a minimum value of 25 then the following query will force cleanout of the index:

   select index_column from table_name where index_column > 24;
Examples
Listed below are some PL/SQL examples that can be used to illustrate the ORA-1555 cases given above. Before these PL/SQL examples will return this error the database must be configured as follows:

1. Use a small buffer cache (db_block_buffers).
   REASON: You do not want the session executing the script to be able to find old versions of the block in the buffer cache which can be used to satisfy a block visit without requiring the rollback information.
2. Use one rollback segment other than SYSTEM.
   REASON: You need to ensure that the work being done is generating rollback information that will overwrite the rollback information required.
3. Ensure that the rollback segment is small (see the sketch after this list).
   REASON: See the reason for using one rollback segment.
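
A sketch of such a configuration (segment name, tablespace, and storage values
are only an illustration, assuming a dictionary-managed rollback tablespace):

create rollback segment rbs_small
  tablespace rbs
  storage (initial 16k next 16k minextents 2 maxextents 4);

alter rollback segment rbs_small online;

rem * Take any other non-SYSTEM rollback segments offline so that the demo
rem * transactions are forced to use rbs_small.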
Rollback Overwritten

rem * 1555_a.sql - Example of getting ora-1555 "Snapshot too old" by
rem *              a session overwriting the rollback information required
rem *              by the same session.

drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));

drop table dummy1;
create table dummy1 (a varchar2(200));

rem * Populate the example tables.
begin
  for i in 1..4000 loop
    insert into bigemp values (mod(i,20), to_char(i), 'N');
    if mod(i,100) = 0 then
      insert into dummy1 values ('ssssssssssss');
      commit;
    end if;
  end loop;
  commit;
end;
/

rem * Ensure that the table is 'cleaned out'.
select count(*) from bigemp;

declare
  -- Must use a predicate so that we revisit a changed block at a different
  -- time.
  -- If another tx is updating the table then we may not need the predicate.
  cursor c1 is select rowid, bigemp.* from bigemp where a < 20;
begin
  for c1rec in c1 loop
    update dummy1 set a = 'aaaaaaaa';
    update dummy1 set a = 'bbbbbbbb';
    update dummy1 set a = 'cccccccc';
    update bigemp set done = 'Y' where c1rec.rowid = rowid;
    commit;
  end loop;
end;
/
Rollback Transaction Slot Overwritten

rem * 1555_b.sql - Example of getting ora-1555 "Snapshot too old" by
rem *              overwriting the transaction slot in the rollback
rem *              segment header. This just uses one session.

drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));

rem * Populate demo table.
begin
  for i in 1..200 loop
    insert into bigemp values (mod(i,20), to_char(i), 'N');
    if mod(i,100) = 0 then
      commit;
    end if;
  end loop;
  commit;
end;
/

drop table mydual;
create table mydual (a number);
insert into mydual values (1);
commit;

rem * Cleanout demo table.
select count(*) from bigemp;

declare
  cursor c1 is select * from bigemp;
begin
  -- The following update is required to illustrate the problem if block
  -- cleanout has been done on 'bigemp'. If the cleanout (above) is commented
  -- out then the update and commit statements can be commented and the
  -- script will fail with ORA-1555 for the block cleanout variant.
  update bigemp set b = 'aaaaa';
  commit;

  for c1rec in c1 loop
    for i in 1..20 loop
      update mydual set a = a;
      commit;
    end loop;
  end loop;
end;
/
Special Cases
There are other special cases that may result in an ORA-01555. These are given below but are rare and so not discussed in this article:

Trusted Oracle can return this if configured in OS MAC mode. Decreasing LOG_CHECKPOINT_INTERVAL on the secondary database may overcome the problem.
If a query visits a data block that has been changed by using the Oracle discrete transaction facility then it will return ORA-01555.
It is feasible that a rollback segment created with the OPTIMAL clause may cause a query to return ORA-01555 if it has shrunk during the life of the query causing rollback segment information required to generate consistent read versions of blocks to be lost.
Summary
This article has discussed the reasons behind the error ORA-01555 "Snapshot too old", has provided a list of possible methods to avoid the error when it is encountered, and has provided simple PL/SQL scripts that illustrate the cases discussed.

References
Note 1005107.6 - ORA-01555: snapshot too old - Causes and Solutions
Note 104008.1 - ORA-01555: "SNAPSHOT TOO OLD" - Additional Solutions
Note 10630.1 - ORA-01555: "Snapshot too old" - Overview
Note 216971.1 - ORA-01555 When Using Undo Tablespace
Note 45895.1 - ORA-01555 "Snapshot too old" in Very Large Databases
Note 62005.1 - Creating, Optimizing, and Understanding Rollback Segments
Note 69464.1 - Rollback Segment Configuration & Tips
Oracle Database Concepts on http://otn.oracle.com


Tuesday, July 01, 2008

Which Oracle wait events are idle wait events

In 9iR2 you can query STATS$IDLE_EVENT to see which wait events are idle wait events.
In 10gR2 there are more options; v$event_name is a typical one.
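
In 10gR2 the idle events can be listed directly via the wait_class column, for
example:

select name
  from v$event_name
 where wait_class = 'Idle';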