Meaning of id1 and id2 in v$lock when type is TX
trunc(id1/65536)=v$transaction.XIDUSN
mod(id1,65536)=v$transaction.XIDSLOT
id2=v$transaction.XIDSQN
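A quick way to confirm the mapping is to join the two views directly; a minimal sketch (assuming SELECT privilege on the V$ views):
select l.sid,
       trunc(l.id1/65536) as xidusn,
       mod(l.id1, 65536)  as xidslot,
       l.id2              as xidsqn,
       t.status
from   v$lock l, v$transaction t
where  l.type = 'TX'
and    t.xidusn  = trunc(l.id1/65536)
and    t.xidslot = mod(l.id1, 65536)
and    t.xidsqn  = l.id2;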
Sunday, September 21, 2008
Wednesday, September 03, 2008
Why is a Particular Query Slower on One Machine than Another
Subject: Why is a Particular Query Slower on One Machine than Another?
Doc ID: Note:604256.1 Type: TROUBLESHOOTING
Last Revision Date: 18-JUL-2008 Status: REVIEWED
In this Document
Purpose
Last Review Date
Instructions for the Reader
Troubleshooting Details
References
--------------------------------------------------------------------------------
Applies to:
Oracle Server - Enterprise Edition - Version: 8.1.7.4 to 11.2.0.1
Information in this document applies to any platform.
Purpose
This article outlines some reasons why a particular query may be slower on one machine than another.
The article assumes that software versions are the same on the two machines being compared.
Last Review Date
June 24, 2008
Instructions for the Reader
A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting.
Troubleshooting Details
Computer systems are deterministic - the same inputs should produce the same outputs. Unless the systems under comparison are exactly the same, behavior on these systems cannot be expected to be exactly the same. Factors such as CPU speed, memory size, memory allocation, memory usage, and other concurrent database and non-database activity can all have an effect on the relative performance of different systems. Additionally, any differences in data, including volumes and distribution, can have an effect. Some of these factors will feed into the optimizer's decisions and may even cause a difference in execution plan.
If all the inputs to a query running on a particular version on different machines are the same, but the performance is different, then that would imply that those differences are introduced by something outside of Oracle. Different hardware and settings can have a significant effect on the relative performance of different systems.
NOTE: If the comparison of different systems is part of a benchmarking exercise then support for this is provided via the relevant Oracle Account Manager and not through normal Support Services. Please contact your Account Manager for assistance with benchmarking queries.
Potential issues:
Issues can be divided into two main categories - those that influence the optimizer's plan choice, resulting in different access paths for queries, and those that do not change the execution plan but affect the performance of all or part of the query execution.
Plan Differences - plan changes can be caused by a number of different issues:
Is the Query really the same? - Any change in a SQL statement can have an impact on the selection of an access path for a particular query. However similar the statements may be, there may be unexpected differences that result in reduced (or increased) performance. If the select list of a query is changed to add or modify columns, or a where clause has altered predicate values, then this can cause the optimizer to choose a different execution plan. Changes in execution plans used for queries are likely to cause different response times when the query is executed. Additionally, changes in data volumes in the underlying objects can make a large difference. Remember that changes in the predicates (i.e. the where clause) can alter the proportion of rows returned from an object - different values may return a larger or smaller proportion of the rows, and this in turn can have a significant effect on the performance of queries.
Different statistics - The Cost Based Optimizer (CBO) uses statistics to determine the execution plan for a particular query. Differences in statistics can cause the CBO to generate different execution plans, which in turn can lead to differences in performance of the same query. Assuming that the same statistics levels are gathered on the different systems and that the systems have the same parameter settings, statistic differences are attributable to data volume and data distribution differences. Potentially, with reduced sample sizes, sampling could produce different statistics due to chance groupings of data that may be the result of differing loading methods etc. On 10g and above it is recommended to use an estimate sample size of 100% (if it is possible for this to fit within the maintenance window), even if that means that statistics are gathered less frequently. If 100% is not feasible, try using an estimate of at least 30%. Generally, the overall accuracy of the statistics outweighs the day-to-day changes in most applications. Additionally, care should be taken that column statistics (histograms) are gathered in the same way on the systems. Note that the defaults for statistics gathering are not necessarily the same in different versions (a gathering sketch with explicit options follows the defaults below), for example:
ESTIMATE_PERCENT: defaults:
9i : 100%
10g : DBMS_STATS.AUTO_SAMPLE_SIZE (using very small estimate percentage)
11g : DBMS_STATS.AUTO_SAMPLE_SIZE (using larger estimate percentage)
METHOD_OPT: defaults:
9i : "FOR ALL COLUMNS SIZE 1" effectively no detailed column statistics.
10g and 11g : AUTO - DBMS_STATS decides in which columns a histogram may help to produce a better plan.
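To keep the comparison fair, statistics can be gathered with the same explicit options on both systems instead of relying on version defaults. A minimal sketch (the schema and table names are placeholders):
begin
  dbms_stats.gather_table_stats(
    ownname          => 'SCOTT',                    -- placeholder schema
    tabname          => 'EMP',                      -- placeholder table
    estimate_percent => 100,                        -- explicit, identical on both systems
    method_opt       => 'FOR ALL COLUMNS SIZE 1',   -- explicit column statistics policy
    cascade          => TRUE);                      -- include indexes
end;
/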
Note that features such as Dynamic Sampling and System Statistics can also have an effect on query access paths even if parameters and gathered statistics on user objects are the same. See the information in the following sections for details.
Different volume and distribution of data - although data differences may appear trivial, they may be enough to change a plan from one to another if the choice of one access path over another is close in cost terms. Even small differences in the number of rows can easily be magnified by many orders of magnitude depending on the selected execution plan. Because of such scaling effects, a smaller data set is not always representative of a larger one. It is not only the volume of data which can be an issue, but also the method used to load or insert the data in the first place. For example, data with a different distribution (such as pre-sorted data) may produce different access paths for queries because the resulting statistics make certain access paths appear more attractive than they actually are.
Different Block Sizes - The way that the data itself is stored can have an impact on the access path chosen by a query. If the block size selected at database or tablespace level is different on the two systems, then this can result in different access paths being chosen. This is because the objects in question may take up fewer data blocks overall with a larger block size, making certain operations seem more (or less) efficient in terms of I/O than on the comparative system. Note that block size changes can even affect cases where no statistics have been gathered, since the default statistics used by the Cost Based Optimizer are derived in part from the block size and the number of blocks. Any storage differences that could affect the number of blocks taken up by an object could affect plans. Even the same data in two different databases (or even in the same database) with different block sizes, pctused/pctfree or initial/next settings (leading to the use of a different number of blocks) can lead to different default statistics.
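To check whether the two systems really store the objects in the same way, the block sizes and object sizes can be compared directly. A minimal sketch (the segment name is a placeholder; run the same queries on both systems):
select name, value from v$parameter where name = 'db_block_size';
select tablespace_name, block_size from dba_tablespaces;
select owner, segment_name, blocks from dba_segments
where  segment_name = 'EMP';   -- placeholder object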
Different Parameter settings - there are a large number of parameters that can have an effect on the optimizer. If parameter settings are the same then the optimizer should produce the same execution plan given the same SQL and statistics input. Specific things to look out for here are the settings for the following (a comparison query is sketched after this list):
Dynamic Sampling: If dynamic sampling is enabled then the optimizer may sample information at runtime to determine a better plan for a query. If data distribution or volume is different then this sampling could return different statistics and this could clearly make a difference to the chosen execution plan. See:
Note 336267.1 Parameter : OPTIMIZER_DYNAMIC_SAMPLING
Optimizer Features Enable: "OPTIMIZER_FEATURES_ENABLE" is an 'umbrella' parameter that covers a large variety of version-specific optimizer changes. Changing this parameter can result in features being enabled or disabled, or their defaults changing, and the result of this can be execution plan changes. See:
Note 62337.1 Init.ora Parameter "OPTIMIZER_FEATURES_ENABLE" Reference Note
Parallelism : If the parallel degree available on a system is different then the optimizer may select different execution plans for queries that were relying on parallel execution for their access paths to perform well. See:
Note 203238.1 Using Parallel Execution
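One way to spot parameter differences (a sketch; the parameter list is illustrative, not exhaustive) is to run the same query on both systems and compare the output:
select name, value
from   v$parameter
where  name in ('optimizer_features_enable',
                'optimizer_dynamic_sampling',
                'optimizer_mode',
                'optimizer_index_cost_adj',
                'db_file_multiblock_read_count',
                'parallel_max_servers')
order  by name;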
Schema Differences - Differences in indexes, table partitioning and storage clauses can also have a big impact on plan choice. Use SQLTXPLAIN.SQL to verify whether there are such differences. See:
Note 215187.1 SQLTXPLAIN.SQL - Enhanced Explain Plan and related diagnostic info for one SQL statement
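Index definitions are often the first thing worth comparing; a minimal sketch (the table name is a placeholder):
select table_name, index_name, column_position, column_name
from   dba_ind_columns
where  table_name = 'EMP'      -- placeholder table
order  by index_name, column_position;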
PGA and sort area size differences - the availability of large amounts of PGA can influence plan choices since sorts or hash joins are less likely to spill over to disk. With a large PGA, plans that would otherwise have involved high I/O associated with this spill-over may be selected, since the reduced I/O makes them far more attractive to the optimizer.
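The relevant memory settings can be compared with a query such as the following (a sketch; run it on both systems):
select name, value
from   v$parameter
where  name in ('pga_aggregate_target',
                'workarea_size_policy',
                'sort_area_size',
                'hash_area_size');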
Different NLS settings may affect sort orders and mean that certain indexes are not available to avoid sorts or would be inefficient to use. See:
Note 227335.1 Linguistic Sorting - Frequently Asked Questions
Note 67522.1 Diagnosing Why a Query is Not Using an Index
Different Bind Variable Peeking - The default value for _OPTIM_PEEK_USER_BINDS is true in 9.0 and above. This means that the first time a SQL statement with bind variables is parsed, the optimizer will examine the value of that bind variable and use it for creating the execution plan. If that bind is not representative of the majority of the supplied values, then it may cause the optimizer to choose a different execution plan than when one of the more representative values is submitted. If that is the case then the chosen plan may perform in a suboptimal manner. See:
Note 70075.1 Use of bind variables in queries
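On 10g and above, the plan that was actually built from the peeked binds can be inspected from the cursor in the shared pool rather than from EXPLAIN PLAN, which does not peek user binds. A sketch (the sql_id is a placeholder; the PEEKED_BINDS format modifier is available from 10gR2):
select * from table(
  dbms_xplan.display_cursor('a1b2c3d4e5f6g', null, 'TYPICAL +PEEKED_BINDS'));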
Different System Statistics - system statistics gathered using the DBMS_STATS.GATHER_SYSTEM_STATS procedure and stored in the SYS.AUX_STATS$ table will be used by default when the "_OPTIMIZER_COST_MODEL" (hidden) parameter (which defaults to CHOOSE) is set to CHOOSE or CPU. These statistics can cause access paths for queries to change depending upon the recorded capability of a system. Systems with different capabilities may use different access paths for the same query (a comparison query is sketched after the note references below). See:
Note 470316.1 Using Actual System Statistics (Collected CPU and IO information)
Note 149560.1 Collect and Display System Statistics (CPU and IO) for CBO usage
Note 153761.1 Scaling the System to Improve CBO optimizer
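The recorded system statistics themselves can be compared between the two systems with (a sketch; the table is owned by SYS):
select sname, pname, pval1, pval2 from sys.aux_stats$;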
Query Performance Differences with Unchanged Execution Plan
Is the Query really the same? - As with a changed execution plan, any changes in a SQL statement can have an impact on performance. Adding or changing columns in a select list, especially the addition of functions or scalar sub-queries, may add significantly to the CPU usage of a query since these will be executed for every row returned.
Platform - Differences in any part of the technology stack underneath the database can cause differences in performance. CPU specification, memory, I/O subsystems, Operating System versions and patch level can all have an impact. Price, physical size and branding are irrelevant. Platform choices are made for a variety of reasons; price and performance are just two of them. Similar or even dissimilar price is no guarantee of similar or dissimilar performance.
Network - Any operation that requires pulling data across a network, for example across a dblink, will be affected by network performance. Even same specification networks can perform differently depending on the load on the particular network segment. The larger the amount of data being sent across a network, the greater impact such variances in network performance are likely to have.
CPU - CPU intensive operations will take longer when executed on slower CPUs. If an application is CPU intensive, it does not matter how 'powerful' a system is in terms of concurrent users and throughput; if this processing is not scalable and cannot be spread over the resources available, then the performance may come down to the raw CPU speed.
Memory - It is possible that memory constraints may mean that more sort or join information spills to disk, incurring the inevitable I/O penalties and reducing the performance of the queries. Additionally, operating system memory usage and constraints due to concurrent usage by Oracle and non-Oracle users should be monitored to avoid swapping and paging.
Disk - I/O throughput can make a large difference to I/O dependent queries. Different I/O subsystems may be tuned for massive throughput as opposed to raw speed. Additionally, Network based storage may depend on network speed and load on the network itself. These problems can be alleviated to a degree by buffer cache and o/s caching. See:
Note 223117.1 Tuning I/O-related waits.
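The relative I/O wait profile of the two systems can be compared with a query such as (a sketch; TIME_WAITED is in centiseconds):
select event, total_waits, time_waited
from   v$system_event
where  event like 'db file%' or event like 'direct path%'
order  by time_waited desc;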
Different Block Sizes - Just as in the section above regarding plan differences, using different block sizes can have an effect even if the access path has not changed. Storing data in a different block size can impact the number of I/Os required and thus change the relative level of stress on the I/O subsystem.
Concurrency - If the concurrent usage of a system is not the same as that of the system it is being compared with, then the relative performance of that system may appear to be different. Ensure that the load profiles of the systems being compared are the same in order to make valid comparisons.
Schema Differences - Differences in indexes, table partitioning and storage clauses can have a big impact on execution time. If the data is stored in a different way then more blocks may be required to be read to return the same information. Use SQLTXPLAIN.SQL to verify whether there are such differences. See:
Note 215187.1 SQLTXPLAIN.SQL - Enhanced Explain Plan and related diagnostic info for one SQL statement
Parallelism - When the Optimizer selects an execution plan that it believes should be using Parallel Execution, it has to assume that sufficient resource is actually available at execution time to execute the query at the desired level of parallelism. If this resource is unavailable at execution time, then the query may be executed at a reduced level of parallelism or in serial, which could have a significant effect on the overall performance of the query and of the system as a whole. See:
Note 203238.1 Using Parallel Execution
References
Note 149560.1 - System Statistics: Collect and Display System Statistics (CPU and IO) for CBO usage
Note 153761.1 - System Statistics: Scaling the System to Improve CBO optimizer
Note 203238.1 - Using Parallel Execution
Note 223117.1 - Tuning I/O-related waits
Note 223806.1 - Query with Unchanged Execution Plan is Slower than Previously
Note 227335.1 - Linguistic Sorting - Frequently Asked Questions
Note 398838.1 - FAQ: Query Tuning Frequently Asked Questions
Note 470316.1 - Using Actual System Statistics (Collected CPU and IO information)
Note 70075.1 - Use of bind variables in queries
Monday, July 21, 2008
NLS considerations in Import/Export - Frequently Asked Questions
Subject: NLS considerations in Import/Export - Frequently Asked Questions
Doc ID: Note:227332.1 Type: BULLETIN
Last Revision Date: 17-JUL-2008 Status: PUBLISHED
Globalization (NLS): NLS considerations in Import/Export - Frequently Asked Questions
-------------------------------------------------------------------------------------
For the main Globalization (NLS) FAQ please see:
Note 60134.1 Globalization (NLS) - Frequently Asked Questions
1. How does NLS affect import/export (exp/imp)?
2. How should NLS_LANG be set when using export?
3. How should NLS_LANG be set when using import?
4. How is import affected by the NLS_LANGUAGE and NLS_TERRITORY ?
5. I have the message "( possible ncharset conversion )" during import.
6. How to know in what characterset a dmp (export) file is created?
7. How does NLS affect datapump (expdp/impdp)?
8. What causes ORA-01401 or ORA-12899 during import (imp and impdp) ?
For the main exp/imp FAQ please see:
Note 175624.1 Oracle Server - Export and Import FAQ
Globalization (NLS): NLS considerations in Import/Export - Frequently Asked Questions
-------------------------------------------------------------------------------------
1. How does NLS affect import/export (exp/imp)?
Import and export are client products, in the same way as SQL*Plus or
Oracle Forms, and will therefore translate characters from the database
character set to that defined by NLS_LANG. The character set used for
the export will be stored in the export file and, when the file is imported,
the import will check the character set that was used. If it is different to
that defined by NLS_LANG at the import site, the characters will be
translated to the import character set and then, if necessary, to the
database character set.
References:
Note 15095.1 Export/Import and NLS Considerations
Note 48644.1 Identifying the Export Character Set
2. How should NLS_LANG be set when using export?
Oracle recommends ALWAYS setting the character set part of the NLS_LANG
environment variable to the same character set as the database you are exporting.
select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
That way no conversion takes place and the export file will be created
in the same character set as the original database and will contain ALL data
from the original database (even incorrectly stored data, if that is the case).
Even if the plan is to import this into a database with a different character set
later the conversion can be postponed until the import.
Note that this has no relation to the operating system. If you have a
WE8MSWIN1252 database on a Unix server (which is fully supported) then you
should set NLS_LANG to AMERICAN_AMERICA.WE8MSWIN1252 before export.
During *interaction* with the database (= sqlplus) you need to configure
your *unix* client properly, and that cannot be 1252 since *unix* does not
have a 1252 characterset.
Note 264157.1 The correct NLS_LANG setting in Unix Environments
3. How should NLS_LANG be set when using import?
If the source and target database have the same character set,
the character set part of the NLS_LANG should be set to that same character set
on both the export and the import.
Even if the character sets of the exporting and importing databases
are not the same, the best (preferred) value to use for
the character set part of NLS_LANG on both export and import is still
the character set of the source database.
select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
Setting the NLS_LANG to the character set of the target database during import
is also correct as such, but import has some limitations when going to a multibyte
characterset (like UTF8), hence using the SOURCE NLS_CHARACTERSET during both imp and exp
session is simply the best option, avoiding any problems like
IMP-16 "Required character set conversion (type %lu to %lu) not supported".
So, the preferred place to do the conversion is between the import executable
and the target database.
Note that this has no relation to the operating system. If your source
database is a WE8MSWIN1252 database then you should simply set NLS_LANG
to AMERICAN_AMERICA.WE8MSWIN1252 before import, even on a Unix server.
Note that during *interaction* with the database (= sqlplus) you need to
configure your *unix* client properly, and that cannot be 1252 since *unix*
does not have a 1252 characterset.
Note 264157.1 The correct NLS_LANG setting in Unix Environments
Example: you want to go from a WE8MSWIN1252 to a UTF8 db:
(note that this is only the exp/imp example, if you want to migrate to UTF8
check Note 260192.1 for the full story )
1) double check the NLS_CHARACTERSET on the SOURCE database
select * from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
and export with the NLS_LANG set to AMERICAN_AMERICA.WE8MSWIN1252 (the
NLS_CHARACTERSET found in step 1).
In this case we want to create an export file containing WE8MSWIN1252 data.
(This is also the setting you want to use if you take an
export as backup)
on unix this is:
$ NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
$ export NLS_LANG
$ exp ....
on windows this is:
c:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
c:\>exp ....
2) import with the NLS_LANG set to American_america.WE8MSWIN1252 (= source NLS_CHARACTERSET)
into the new UTF8 db.
on unix this is:
$ NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
$ export NLS_LANG
$ imp ....
on windows this is:
c:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
c:\>imp ....
The conversion to UTF8 is done while inserting the data
in the UTF8 database by the imp connection.
We recommend setting NLS_LANG explicitly in the current shell for Unix
(-> Note 131207.1 How to Set Unix Environment Variable )
or in the DOS box used for the exp or imp tool on Windows.
(-> "c:\>set NLS_LANG=AMERICAN_AMERICA.")
4. How is import affected by the NLS_LANGUAGE and NLS_TERRITORY ?
It is not. Normally you use the AMERICAN_AMERICA default, but
if you imported with NLS_LANG set to FRENCH_FRANCE, for example,
then you will not have problems, even if the originating environment
used GERMAN_GERMANY or similar.
5. I have the message "( possible ncharset conversion )" during import.
If you see something similar to
Export file created by EXPORT:V08.01.07 via direct path
import done in WE8ISO8859P15 character set and AL16UTF16 NCHAR character set
export server uses WE8ISO8859P15 NCHAR character set (possible ncharset conversion)
in the import log, please see point 11 in
Note 276914.1 The National Character Set in Oracle 9i and 10g
6. How to know in what characterset a dmp (export) file is created?
simply issue: imp system/oracle@database show=yes file=test.dmp
the output gives you
import done in US7ASCII character set and AL16UTF16 NCHAR character set
-> this is the current NLS_LANG value set in the environment
and the NCHAR characterset of the target database
import server uses WE8MSWIN1252 character set (possible charset conversion)
-> this is only shown if the NLS_LANG during this import session is different
from the target database characterset, so if you see 3 lines you might have problems :-)
export client uses UTF8 character set (possible charset conversion)
-> this is the characterset used during the export session and the
characterset used in the dmp file.
7. How does NLS affect datapump (expdp/impdp)?
Datapump does not use the NLS_LANG to do conversion between databases.
Conversion between 2 database charactersets is done purely based on the
NLS_CHARACTERSET ( or NLS_NCHAR_CHARACTERSET for Nchar, Nvarchar and Nclob datatypes)
of the source and target database.
However, if you specify a parameter file then the NLS_LANG *is* used.
This is only important if you use non-English characters (e.g. for the
QUERY parameter) in the parameter file.
If you use non-English characters in the parameter file then the NLS_LANG
environment variable should be set (in the session where the Data Pump job
is started) to the correct encoding of the parameter file (!).
Note:
-----
There is a chance of data corruption when going from an 8-bit
characterset to UTF8 or another multibyte characterset on ALL 10g versions
(including 10.1.0.5 and 10.2.0.3) and 11.1.0.6.
Impdp may provoke data corruption unless you have applied Patch 5874989.
This is caused by impdp Bug 5874989.
The "old" exp/imp works fine. This problem is fixed in the upcoming
10.2.0.4 and 11.1.0.7 patchsets.
All existing patches for this bug are found here:
http://updates.oracle.com/download/5874989.html
For 10.1.0.5, 10.2.0.2, 10.2.0.3 and 11.1.0.6 you can ask a backport
for Bug 5874989 if there is no patch yet for your version/platform.
For Windows the fix is included in
10.1.0.5.0 Patch 20 (10.1.0.5.20P) or later ( Note 276548.1 )
10.2.0.3.0 Patch 11 (10.2.0.3.11P) or later ( Note 342443.1 )
The patch is technically only needed on the impdp side, but if you use
expdp/impdp between different character sets we suggest patching all your
systems.
8. What causes ORA-01401 or ORA-12899 during import (imp and impdp) ?
9i and lower gives ORA-01401: inserted value too large for column
10g and up gives ORA-12899: value too large for column
This is seen when exporting from a database with an 8-bit NLS_CHARACTERSET
(like WE8ISO8859P1, WE8MSWIN1252, WE8DEC ...) or a 16-bit NLS_CHARACTERSET
(like JA16SJIS, ZHS16GBK, KO16MSWIN949) to a database with the
NLS_CHARACTERSET set to AL32UTF8 or UTF8.
(AL32)UTF8 uses more BYTES to store a character than 8-bit and 16-bit charactersets.
This is explained in Note 119119.1 AL32UTF8 / UTF8 (Unicode) Database Character Set
Implications
CHAR and VARCHAR2 column sizes are defined by default in BYTES, not characters.
The best solution is to pre-create the tables using CHAR semantics (9i and up)
Note 144808.1 Examples and limits of BYTE and CHAR semantics usage
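As a minimal sketch of CHAR semantics (the table and column names are placeholders), a column defined this way holds 30 characters rather than 30 bytes, so data that expands to multiple bytes per character in an (AL32)UTF8 target still fits:
create table emp_names
( ename varchar2(30 char) );            -- 30 characters, not 30 bytes
-- or change the default for the session before pre-creating the tables:
alter session set nls_length_semantics = 'CHAR';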
Other Known Problems:
---------------------
Note 278980.1 Oracle10g Import Errors like IMP-00008 / IMP-00009 if Exported with NLS_LANG=FRENCH_FRANCE.[characterset]
References
----------
For further NLS / Globalization information you may start here:
Note 267942.1 - Globalization Technology (NLS) Knowledge Browser Product Page
Note 60134.1 Globalization (NLS) - Frequently Asked Questions
Further details of issues discussed in this note can be found in:
Note 15095.1 Export/Import and NLS Considerations
Note 48644.1 Identifying the Export Character Set
Note 175624.1 Oracle Server - Export and Import FAQ
Export/Import and NLS Considerations
Subject: Export/Import and NLS Considerations
Doc ID: Note:15095.1 Type: FAQ
Last Revision Date: 07-JUL-2006 Status: PUBLISHED
Introduction
------------
This note provides an in-depth overview of exp/imp and NLS. You may not need to
know all of this if you just want to take an export and make sure you don't lose
any data. Please read this first:
Note 227332.1 NLS considerations in Import/Export - Frequently Asked Questions
Export/Import and NLS considerations
------------------------------------
If you have exported/imported a database or table(s) and are now encountering
character set conversion problems, use the following information to confirm
whether the export/import procedure was performed correctly.
+ In Oracle9 and higher, most data is automatically exported in the character set
  of the database that is exported. The only exception is that data-dictionary
  information is exported in the character set that is set in the NLS_LANG
  environment variable when making the export.
  In pre-Oracle9 export, ALL data is exported in the character set that is set
  in the NLS_LANG environment variable when making the export.
  This means that if the character set part of the NLS_LANG environment variable
  during export is different from the database character set, there will be an
  automatic conversion between the two (for 9i and higher only for data-dictionary
  data, since "normal" data is exported in the database character set regardless
  of the NLS_LANG, as described above).
+ The export file contains the character set that the data was exported in, in
the first few bytes of the export file. Export stores the character set ID
(not the text string) of the "exporting character set". See Note 48644.1.
This is relevant in pre-Oracle9 export files because all data will be exported
in the same character set and this way we can "see" what that character set
is, and we can even "update" these fields to work around certain problems
(only when advised by Oracle Support!).
For Oracle9 and higher export files this is less relevant because the
character set is stored in many more places in the export file, and can be
different depending on the data. So for Oracle9 and higher we can never
"update" the export file manualy.
+ An import session runs in the character set that is set in the NLS_LANG
environment variable for the import session.
+ Import will check the character set of the data in the dump file and compare
it with the session's character set as defined in NLS_LANG.
+ No conversion occurs if the export data character set and the import session
character set are the same. If they are not the same, conversion is performed
from the export data character set to the import session character set prior
to the data being inserted into the database.
However, import can only perform this conversion for single-byte character
sets!
+ For imports into multibyte character set databases (i.e. UTF8) the character
set part of NLS_LANG should be set identical to that of the export session.
Otherwise an
IMP-16 "Required character set conversion (type %lu to %lu) not supported"
error will come up during import. See also Bug 896407.
+ The import session character set should be the same as, or a superset of, the
export data character set, otherwise special characters will not be correctly
converted.
+ If you import pre-Oracle7 export files, include the parameter 'CHARSET' when
defining the import parameter set. CHARSET identifies the character set of the
export file. The CHARSET option was developed to import older export files
which did not have stored character set ID information.
If you define CHARSET but the export file does contain character set
information (export version 7 and higher) then the value in CHARSET should
match the export file character set. If they do not match, IMP-42 will
result (so it's better not to use this parameter at all, unless you import
pre-Oracle7 data).
+ After the data has been converted to the import session character set, it is
then converted to the database character set if they differ. The database
character set should be a superset (or the same) of the import's session
character set otherwise special characters will not be correctly converted.
So this means that there is a potential for 3 separate conversions when moving
data between databases, as described in the following diagram:
 source
 -------------------
 | db in character |    export        exp session is in character set B, as
 | set A           | -------------->  defined by NLS_LANG. For all data in
 -------------------                  pre-Oracle9, and for data-dictionary data
          |                           in Oracle9 and up, the dump file is
          |  move file over to        therefore written in character set B.
          |  another machine          Character set conversion may occur
          |                           between A and B.
          V
 destination
 -------------------
 | db in character |    import        imp session is in character set C, as
 | set D           | <--------------  defined by NLS_LANG. The dump file is
 -------------------                  still in character set B. Character set
                                      conversion may occur between B and C.
                                      During the import process character set
                                      conversion may occur between character
                                      set C and the db's character set D if
                                      they differ.
If you have any problems related to character set conversion in export/import
then you need to identify the following:
-- What is (was) the database character set in the source database (character
set A in the above)?
-- What was the client character set specified in NLS_LANG when the data was
exported (character set B in the above)?
-- What was the client character set specified in NLS_LANG when the data was
imported (character set C in the above)?
-- What is the database character set of the destination database (character
set D in the above)?
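To answer questions A and D above, the database character set can be read from
the documented NLS_DATABASE_PARAMETERS view (a generic sketch; run it on both
the source and the destination database):
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
Questions B and C are answered by the NLS_LANG environment variable that was
set in the client environment at export and import time (for example
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1).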
Minimizing character set conversions during export/import
---------------------------------------------------------
As described, it is important to note that import will do up to 3 character set
conversions depending on:
(a) character set of exported database
(b) NLS_LANG of export session (-> this is the "export file character set")
(c) NLS_LANG of import session
(d) character set of the target database.
Obviously only 1 conversion is really needed (from the original database
character set to the target database character set). Minimizing the number
of conversions minimizes the potential for losing data.
To minimize the number of conversions you are advised to follow these rules:
+ Set the NLS_LANG during export to the same as the character set of the
exported database -> this means no conversion takes place, all data is still
stored in the export file as it was stored in the database.
+ Set the NLS_LANG during import to the same value as during the export -> this
means that no conversion takes place in the import session.
+ If the character set of the target database is different, then the data will
be automatically converted when import inserts the data into the database; you
do not have to "set" anything for this, it happens automatically.
Of course you do not have to follow these rules, but complying with them does
minimize the risk of losing data during export/import. A worked sketch of the
sequence follows below.
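As a sketch of these rules in practice (character set names, file name and
schema are examples only; Unix shell syntax is shown, use SET on Windows):
-- in SQL*Plus on the source: determine the database character set
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
-- suppose it returns WE8ISO8859P1; then, in the shell, export with a matching NLS_LANG:
--    export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
--    exp scott/tiger file=exp_scott.dmp owner=scott
-- on the target machine, keep the same NLS_LANG for the import session:
--    export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
--    imp scott/tiger file=exp_scott.dmp fromuser=scott touser=scott
-- conversion to the target database character set then happens automatically
-- when import inserts the data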
Regarding the National Characterset:
------------------------------------
NCLOBs are always exported/imported in UCS-2/AL16UTF16.
NCHAR/NVARCHAR2s are always exported in the database's national character set.
This is something you can't influence by setting any parameters.
Wednesday, July 02, 2008
ORA-01555 "Snapshot too old" - Detailed Explanation
Subject: ORA-01555 "Snapshot too old" - Detailed Explanation
Doc ID: Note:40689.1 Type: BULLETIN
Last Revision Date: 16-MAY-2008 Status: ARCHIVED
In this Document
Purpose
Scope and Application
ORA-01555 "Snapshot too old" - Detailed Explanation
Terminology
ORA-01555 Explanation
Solutions
Examples
Special Cases
Summary
References
--------------------------------------------------------------------------------
Applies to:
Oracle Server - Enterprise Edition - Version: 8.1.7.0
Information in this document applies to any platform.
Purpose
Checked for relevance on 16-MAY-2008
This article will discuss the circumstances under which a query can return the Oracle error ORA-01555: snapshot too old: rollback segment number string with name "string" too small. The article will then discuss actions that can be taken to avoid the error and finally will provide some simple PL/SQL scripts that illustrate the issues discussed.
Scope and Application
It is assumed that the reader is familiar with standard Oracle terminology such as 'rollback segment' and 'SCN'. If not, the reader should first read the Oracle Database Concepts manual and related Oracle Database documentation.
ORA-01555 "Snapshot too old" - Detailed Explanation
Terminology
Two key concepts are briefly covered below which help in the understanding of ORA-01555, Read Consistency and Delayed Block Cleanout.
1. Read Consistency
This is documented in the Oracle Database Concepts manual and so will not be discussed further. However, for the purposes of this article this should be read and understood if not understood already. Oracle Server has the ability to have multi-version read consistency which is invaluable to you because it guarantees that you are seeing a consistent view of the data (no 'dirty reads').
2. Delayed Block Cleanout
This is best illustrated with an example: Consider a transaction that updates a million row table. This obviously visits a large number of database blocks to make the change to the data. When the user commits the transaction Oracle does NOT go back and revisit these blocks to make the change permanent. It is left for the next transaction that visits any block affected by the update to 'tidy up' the block (hence the term 'delayed block cleanout').
Whenever Oracle changes a database block (index, table, cluster) it stores a pointer in the header of the data block which identifies the rollback segment used to hold the rollback information for the changes made by the transaction. (This is required if the user later elects to not commit the changes and wishes to 'undo' the changes made.)
Upon commit, the database simply marks the relevant rollback segment header entry as committed. Now, when one of the changed blocks is revisited Oracle examines the header of the data block which indicates that it has been changed at some point. The database needs to confirm whether the change has been committed or whether it is currently uncommitted. To do this, Oracle determines the rollback segment used for the previous transaction (from the block's header) and then checks whether the rollback segment header indicates that the transaction has been committed or not.
If it is found that the transaction has been committed then the header of the data block is updated so that subsequent accesses to the block do not incur this processing.
This behaviour is illustrated in a very simplified way below. Here we walk through the stages involved in updating a data block.
Stage 1 - No Changes Made
Description: This is the starting point. At the top of the data block we have an area used to link active transactions to a rollback segment (the 'tx' part), and the rollback segment header has a table that stores information upon all the latest transactions that have used that rollback segment.
In our example, we have two active transaction slots (01 and 02) and the next free slot is slot 03. (Since we are free to overwrite committed transactions.)
Stage 2 - Row 2 is Updated
Description: We have now updated row 2 of block 500. Note that the data block header is updated to point to the rollback segment 5, transaction slot 3 (5.3) and that it is marked uncommitted (Active).
Stage 3 - The User Issues a Commit
Description: Next the user hits commit. Note that all this does is mark the rollback segment header's corresponding transaction slot as committed. It does *nothing* to the data block.
Stage 4 - Another User Selects Data Block 500
Description: Some time later another user (or the same user) revisits data block 500. We can see that there is an uncommitted change in the data block according to the data block's header.
Oracle then uses the data block header to look up the corresponding rollback segment transaction table slot, sees that it has been committed, and changes data block 500 to reflect the true state of the datablock. (i.e. it performs delayed cleanout).
ORA-01555 Explanation
There are two fundamental causes of the error ORA-01555 that are a result of Oracle trying to attain a 'read consistent' image. These are:
The rollback information itself is overwritten so that Oracle is unable to rollback the (committed) transaction entries to attain a sufficiently old version of the block.
The transaction slot in the rollback segment's transaction table (stored in the rollback segment's header) is overwritten, and Oracle cannot rollback the transaction header sufficiently to derive the original rollback segment transaction slot.
Both of these situations are discussed below with the series of steps that cause the ORA-01555. In the steps, reference is made to 'QENV'. 'QENV' is short for 'Query Environment', which can be thought of as the environment that existed when a query is first started and to which Oracle is trying to attain a read consistent image. Associated with this environment is the SCN (System Change Number) at that time and hence, QENV 50 is the query environment with SCN 50.
Case 1 - Rollback Overwritten
This breaks down into two cases: another session overwriting the rollback that the current session requires or the case where the current session overwrites the rollback information that it requires. The latter is discussed in this article because this is usually the harder one to understand.
Session 1 starts query at time T1 and QENV 50
Session 1 selects block B1 during this query
Session 1 updates the block at SCN 51
Session 1 does some other work that generates rollback information.
Session 1 commits the changes made in steps '3' and '4'. (Now other transactions are free to overwrite this rollback information)
Session 1 revisits the same block B1 (perhaps for a different row).
Now, Oracle can see from the block's header that it has been changed and it is later than the required QENV (which was 50). Therefore we need to get an image of the block as of this QENV.
If an old enough version of the block can be found in the buffer cache then we will use this, otherwise we need to rollback the current block to generate another version of the block as at the required QENV.
It is under this condition that Oracle may not be able to get the required rollback information, because Session 1's subsequent changes have generated rollback information that has overwritten it, and Oracle returns the ORA-1555 error.
Case 2 - Rollback Transaction Slot Overwritten
Session 1 starts query at time T1 and QENV 50
Session 1 selects block B1 during this query
Session 1 updates the block at SCN 51
Session 1 commits the changes (Now other transactions are free to overwrite this rollback information)
A session (Session 1, another session or a number of other sessions) then uses the same rollback segment for a series of committed transactions.
These transactions each consume a slot in the rollback segment transaction table such that it eventually wraps around (the slots are written to in a circular fashion) and overwrites all the slots. Note that Oracle is free to reuse these slots since all transactions are committed.
Session 1's query then visits a block that has been changed since the initial QENV was established. Oracle therefore needs to derive an image of the block as at that point in time.
Next Oracle attempts to look up the rollback segment header's transaction slot pointed to by the top of the data block. It then realises that this has been overwritten and attempts to rollback the changes made to the rollback segment header to get the original transaction slot entry.
If it cannot rollback the rollback segment transaction table sufficiently it will return ORA-1555 since Oracle can no longer derive the required version of the data block.
It is also possible to encounter a variant of the transaction slot being overwritten when using block cleanout. This is briefly described below:
Session 1 starts a query at QENV 50. After this another process updates the blocks that Session 1 will require. When Session 1 encounters these blocks it determines that the blocks have changed and have not yet been cleaned out (via delayed block cleanout). Session 1 must determine whether the rows in the block existed at QENV 50 or were subsequently changed.
In order to do this, Oracle must look at the relevant rollback segment transaction table slot to determine the committed SCN. If this SCN is after the QENV then Oracle must try to construct an older version of the block and if it is before then the block just needs clean out to be good enough for the QENV.
If the transaction slot has been overwritten and the transaction table cannot be rolled back to a sufficiently old version then Oracle cannot derive the block image and will return ORA-1555.
Note: Normally Oracle can use an algorithm for determining a block's SCN during block cleanout even when the rollback segment slot has been overwritten. But in this case Oracle cannot guarantee that the version of the block has not changed since the start of the query.
Solutions
This section lists some of the solutions that can be used to avoid the ORA-01555 problems discussed in this article. It addresses the cases where rollback segment information is overwritten by the same session and when the rollback segment transaction table entry is overwritten.
It is worth highlighting that if a single session experiences the ORA-01555 and it is not one of the special cases listed at the end of this article, then the session must be using an Oracle extension whereby fetches across commits are tolerated. This does not follow the ANSI model and in the rare cases where ORA-01555 is returned one of the solutions below must be used.
Case 1 - Rollback Overwritten
Increase the size of the rollback segments, which will reduce the likelihood of overwriting rollback information that is still needed (see the sketch after this list).
Reduce the number of commits (same reason as 1).
Run the processing against a range of data rather than the whole table. (Same reason as 1).
Add additional rollback segments. This will allow the updates etc. to be spread across more rollback segments thereby reducing the chances of overwriting required rollback information.
If fetching across commits, the code can be changed so that this is not done.
Ensure that the outer select does not revisit the same block at different times during the processing. This can be achieved by:
Using a full table scan rather than an index lookup
Introducing a dummy sort so that we retrieve all the data, sort it and then sequentially visit these data blocks.
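A sketch of the first and fourth options above for a dictionary-managed
rollback segment configuration (the segment and tablespace names are
illustrative only; on releases using automatic undo management the equivalent
is to size the undo tablespace and UNDO_RETENTION instead):
CREATE ROLLBACK SEGMENT rbs_big
  TABLESPACE rbs
  STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 20);
ALTER ROLLBACK SEGMENT rbs_big ONLINE;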
Case 2 - Rollback Transaction Slot Overwritten
Use any of the methods outlined above except for '6'. This will allow transactions to spread their work across multiple rollback segments, thereby reducing the likelihood of rollback segment transaction table slots being consumed.
If it is suspected that the block cleanout variant is the cause, then force block cleanout to occur prior to the transaction that returns the ORA-1555. This can be achieved by issuing the following in SQL*Plus, SQL*DBA or Server Manager:
alter session set optimizer_goal = rule;
select count(*) from table_name;
If indexes are being accessed then the problem may be an index block and clean out can be forced by ensuring that all the index is traversed. Eg, if the index is on a numeric column with a minimum value of 25 then the following query will force cleanout of the index:
select index_column from table_name where index_column > 24;
Examples
Listed below are some PL/SQL examples that can be used to illustrate the ORA-1555 cases given above. Before these PL/SQL examples will return this error the database must be configured as follows:
Use a small buffer cache (db_block_buffers).
REASON: You do not want the session executing the script to be able to find old versions of the block in the buffer cache which can be used to satisfy a block visit without requiring the rollback information.
Use one rollback segment other than SYSTEM.
REASON: You need to ensure that the work being done is generating rollback information that will overwrite the rollback information required.
Ensure that the rollback segment is small.
REASON: See the reason for using one rollback segment.
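A sketch of such a setup (names and sizes are illustrative; dictionary-managed
rollback segments are assumed):
CREATE ROLLBACK SEGMENT rbs_small
  TABLESPACE rbs
  STORAGE (INITIAL 64K NEXT 64K MINEXTENTS 2 MAXEXTENTS 4);
ALTER ROLLBACK SEGMENT rbs_small ONLINE;
-- take any other non-SYSTEM rollback segments offline (rbs01 is a placeholder
-- name) so that all of the work goes through the single small segment
ALTER ROLLBACK SEGMENT rbs01 OFFLINE;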
Rollback Overwritten
rem * 1555_a.sql - Example of getting ora-1555 "Snapshot too old" by
rem * a session overwriting the rollback information required
rem * by the same session.
drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));
drop table dummy1;
create table dummy1 (a varchar2(200));
rem * Populate the example tables.
begin
for i in 1..4000 loop
insert into bigemp values (mod(i,20), to_char(i), 'N');
if mod(i,100) = 0 then
insert into dummy1 values ('ssssssssssss');
commit;
end if;
end loop;
commit;
end;
/
rem * Ensure that table is 'cleaned out'.
select count(*) from bigemp;
declare
-- Must use a predicate so that we revisit a changed block at a different
-- time.
-- If another tx is updating the table then we may not need the predicate
cursor c1 is select rowid, bigemp.* from bigemp where a < 20;
begin
for c1rec in c1 loop
update dummy1 set a = 'aaaaaaaa';
update dummy1 set a = 'bbbbbbbb';
update dummy1 set a = 'cccccccc';
update bigemp set done='Y' where c1rec.rowid = rowid;
commit;
end loop;
end;
/
Rollback Transaction Slot Overwritten
rem * 1555_b.sql - Example of getting ora-1555 "Snapshot too old" by
rem * overwriting the transaction slot in the rollback
rem * segment header. This just uses one session.
drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));
rem * Populate demo table.
begin
for i in 1..200 loop
insert into bigemp values (mod(i,20), to_char(i), 'N');
if mod(i,100) = 0 then
commit;
end if;
end loop;
commit;
end;
/
drop table mydual;
create table mydual (a number);
insert into mydual values (1);
commit;
rem * Cleanout demo table.
select count(*) from bigemp;
declare
cursor c1 is select * from bigemp;
begin
-- The following update is required to illustrate the problem if block
-- cleanout has been done on 'bigemp'. If the cleanout (above) is commented
-- out then the update and commit statements can be commented and the
-- script will fail with ORA-1555 for the block cleanout variant.
update bigemp set b = 'aaaaa';
commit;
for c1rec in c1 loop
for i in 1..20 loop
update mydual set a=a;
commit;
end loop;
end loop;
end;
/
Special Cases
There are other special cases that may result in an ORA-01555. These are given below but are rare and so not discussed in this article:
Trusted Oracle can return this if configured in OS MAC mode. Decreasing LOG_CHECKPOINT_INTERVAL on the secondary database may overcome the problem.
If a query visits a data block that has been changed by using the Oracle discrete transaction facility then it will return ORA-01555.
It is feasible that a rollback segment created with the OPTIMAL clause may cause a query to return ORA-01555 if it has shrunk during the life of the query causing rollback segment information required to generate consistent read versions of blocks to be lost.
Summary
This article has discussed the reasons behind the error ORA-01555 "Snapshot too old", has provided a list of possible methods to avoid the error when it is encountered, and has provided simple PL/SQL scripts that illustrate the cases discussed.
References
Note 1005107.6 - ORA-01555: snapshot too old - Causes and Solutions
Note 104008.1 - ORA-01555: "SNAPSHOT TOO OLD" - Additional Solutions
Note 10630.1 - ORA-01555: "Snapshot too old" - Overview
Note 216971.1 - ORA-01555 When Using Undo Tablespace
Note 45895.1 - ORA-01555 "Snapshot too old" in Very Large Databases
Note 62005.1 - Creating, Optimizing, and Understanding Rollback Segments
Note 69464.1 - Rollback Segment Configuration & Tips
Oracle Database Concepts on http://otn.oracle.com
Tuesday, July 01, 2008
Which wait events in Oracle are idle wait events
In 9iR2 you can use STATS$IDLE_EVENT to check whether a wait event is an idle wait event.
In 10gR2 there are more options; v$event_name is a typical one.
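A quick sketch for 10gR2 and later, using the WAIT_CLASS column of v$event_name:
select name, wait_class
  from v$event_name
 where wait_class = 'Idle'
 order by name;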
Wednesday, April 23, 2008
TX Transaction locks - Example wait scenarios (enqueue)
Subject: TX Transaction locks - Example wait scenarios
Doc ID: Note:62354.1 Type: TROUBLESHOOTING
Last Revision Date: 28-FEB-2008 Status: PUBLISHED
Introduction
~~~~~~~~~~~~
This short article gives examples of TX locks and the waits which can
occur in various circumstances. Often such waits will go unnoticed unless
they are of a long duration or when they trigger a deadlock scenario (which
raises an ORA-60 error).
The examples here demonstrate fundamental locking scenarios which should
be understood by application developers and DBAs alike.
Useful SQL statements
~~~~~~~~~~~~~~~~~~~~~
If you encounter a lock related hang scenario the following SQL statements
can be used to help isolate the waiters and blockers:
Show all sessions waiting for any lock:
select event,p1,p2,p3 from v$session_wait
where wait_time=0 and event='enqueue';
Show sessions waiting for a TX lock:
select * from v$lock where type='TX' and request>0;
Show sessions holding a TX lock:
select * from v$lock where type='TX' and lmode>0;
What is a TX lock ?
~~~~~~~~~~~~~~~~~~~
A TX lock is acquired when a transaction initiates its first change and is
held until the transaction does a COMMIT or ROLLBACK. It is used mainly as
a queuing mechanism so that other sessions can wait for the transaction to
complete. The lock name (ID1 and ID2) of the TX lock reflect the transaction
ID of the active transaction.
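As a quick sketch of pairing up blockers and waiters on the same TX enqueue
(this simply matches ID1/ID2 between the holding and the requesting rows of
v$lock):
select blocker.sid blocker_sid, waiter.sid waiter_sid,
       blocker.id1, blocker.id2
  from v$lock blocker, v$lock waiter
 where blocker.type   = 'TX'
   and blocker.lmode  > 0
   and waiter.request > 0
   and blocker.id1    = waiter.id1
   and blocker.id2    = waiter.id2;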
Example Tables
~~~~~~~~~~~~~~
The lock waits which can occur are demonstrated using the following
tables. Connect as SCOTT/TIGER or some dummy user to set up the test
environment using the following SQL:
DROP TABLE tx_eg;
CREATE TABLE tx_eg ( num number, txt varchar2(10), sex varchar2(10) )
INITRANS 1 MAXTRANS 1;
INSERT into tx_eg VALUES ( 1, 'First','FEMALE' );
INSERT into tx_eg VALUES ( 2, 'Second','MALE' );
INSERT into tx_eg VALUES ( 3, 'Third','MALE' );
INSERT into tx_eg VALUES ( 4, 'Fourth','MALE' );
INSERT into tx_eg VALUES ( 5, 'Fifth','MALE' );
COMMIT;
In the examples below three sessions are required:
Ses#1 indicates the TX_EG table owners first session
Ses#2 indicates the TX_EG table owners second session
DBA indicates a SYSDBA user with access to the V$ views
The examples covered below include:
Waits due to Row being locked by an active Transaction
Waits due to Unique or Primary Key Constraint enforcement
Waits due to Insufficient 'ITL' slots in the Block
Waits due to rows being covered by the same BITMAP index fragment
Waits due to Row being locked by an active Transaction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a session updates a row in a table the row is locked by the session's
transaction. Other users may SELECT that row and will see the row as it was
BEFORE the UPDATE occurred. If another session wishes to UPDATE the same
row it has to wait for the first session to commit or rollback. The
second session waits for the first session's TX lock in EXCLUSIVE mode.
Eg:
Ses#1: update tx_eg set txt='Garbage' where num=1;
Ses#2: update tx_eg set txt='Garbage' where num=1;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 131075 597 6 0
10 TX 131075 597 0 6
> This shows SID 10 is waiting for the TX lock held by SID 8 and it
> wants the lock in exclusive mode (as REQUEST=6).
The select below is included to demonstrate that a session waiting
on a lock will show as waiting on an 'enqueue' in V$SESSION_WAIT
and that the values of P1RAW, P2 and P3 indicate the actual lock
being waited for. When using Parallel Server the EVENT will be
'DFS enqueue lock acquisition' rather than 'enqueue'.
This select will be omitted from the following examples.
DBA: select sid,p1raw, p2, p3
from v$session_wait
where wait_time=0 and event='enqueue';
SID P1RAW P2 P3
---------- -------- ---------- ----------
10 54580006 131075 597
> ~~~~ ~~ ~~~~~~ ~~~
> type|mode id1 id2
> T X 6 131075 597
The next select shows the object_id and the exact row that the
session is waiting for. This information is only valid in V$SESSION
when a session is waiting due to a row level lock. The statement
is only valid in Oracle 7.3 onwards. As SID 10 is the waiter above
then this is the session to look at in V$SESSION:
DBA: select ROW_WAIT_OBJ#,
ROW_WAIT_FILE#,ROW_WAIT_BLOCK#,ROW_WAIT_ROW#
from v$session
where sid=10;
ROW_WAIT_O ROW_WAIT_F ROW_WAIT_B ROW_WAIT_R
---------- ---------- ---------- ----------
3058 4 2683 0
> The waiter is waiting for the TX lock in order to lock row 0
> in file 4, block 2683 of object 3058.
Ses#1: rollback;
Ses#2: rollback;
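As a sketch (the object_id, file, block and row numbers are simply the values
from the output above), the object being waited on can be translated to a name
via DBA_OBJECTS, and the exact ROWID reconstructed with DBMS_ROWID:
select owner, object_name, object_type
  from dba_objects
 where object_id = 3058;
select dbms_rowid.rowid_create(1, 3058, 4, 2683, 0) as waited_rowid
  from dual;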
Waits due to Unique or Primary Key Constraint enforcement
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a table has a primary key constraint, a unique constraint
or a unique index then the uniqueness of the column/s referenced by
the constraint is enforced by a unique index. If two sessions try to
insert the same key value the second session has to wait to see if an
ORA-0001 should be raised or not.
Eg:
Ses#1: ALTER TABLE tx_eg ADD CONSTRAINT tx_eg_pk PRIMARY KEY( num );
Ses#1: insert into tx_eg values (10,'New','MALE');
Ses#2: insert into tx_eg values (10,'OtherNew',null);
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 196625 39 6 0
10 TX 262155 65 6 0
10 TX 196625 39 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4). SID 10 holds a TX lock
for its own transaction.
Ses#1: commit;
Ses#2: ORA-00001: unique constraint (SCOTT.TX_EG_PK) violated
Ses#2: rollback;
Waits due to Insufficient 'ITL' slots in a Block
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Oracle keeps note of which rows are locked by which transaction in an area
at the top of each data block known as the 'interested transaction list'.
The number of ITL slots in any block in an object is controlled by
the INITRANS and MAXTRANS attributes. INITRANS is the number of slots
initially created in a block when it is first used, while MAXTRANS places
an upper bound on the number of entries allowed. Each transaction which
wants to modify a block requires a slot in this 'ITL' list in the block.
MAXTRANS places an upper bound on the number of concurrent transactions
which can be active at any single point in time within a block.
INITRANS provides a minimum guaranteed 'per-block' concurrency.
If more than INITRANS but less than MAXTRANS transactions want to be
active concurrently within the same block then the ITL list will be extended
BUT ONLY IF THERE IS SPACE AVAILABLE TO DO SO WITHIN THE BLOCK.
If there is no free 'ITL' then the requesting session will wait on one
of the active transaction locks in mode 4.
Eg: Ses#1: update tx_eg set txt='Garbage' where num=1;
Ses#2: update tx_eg set txt='Different' where num=2;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 327688 48 6 0
10 TX 327688 48 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4).
Ses#1: commit;
Ses#2: commit;
Ses#1: ALTER TABLE tx_eg MAXTRANS 2;
Ses#1: update tx_eg set txt='First' where num=1;
Ses#2: update tx_eg set txt='Second' where num=2;
Both rows update as there is space to grow the ITL list to
accommodate both transactions.
Ses#1: commit;
Ses#2: commit;
Also from 9.2 you can check the ITL Waits in v$segment_statistics
with a query like:
SELECT t.OWNER, t.OBJECT_NAME, t.OBJECT_TYPE, t.STATISTIC_NAME, t.VALUE
FROM v$segment_statistics t
WHERE t.STATISTIC_NAME = 'ITL waits'
AND t.VALUE > 0;
If need be, increase INITRANS and MAXTRANS.
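For example (a sketch using the demo table above; the index name tx_eg_pk
assumes the index created by the earlier primary key constraint, and raising
INITRANS only affects blocks formatted after the change, so on 8i and later an
ALTER TABLE ... MOVE plus index rebuild is needed to apply it to existing
blocks):
ALTER TABLE tx_eg INITRANS 4 MAXTRANS 16;
ALTER TABLE tx_eg MOVE;
ALTER INDEX tx_eg_pk REBUILD;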
Waits due to rows being covered by the same BITMAP index fragment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Bitmap indexes index key values and a range of ROWIDs. Each 'entry'
in a bitmap index can cover many rows in the actual table.
If 2 sessions wish to update rows covered by the same bitmap index
fragment then the second session waits for the first transaction to
either COMMIT or ROLLBACK by waiting for the TX lock in mode 4.
Eg: Ses#1: CREATE Bitmap Index tx_eg_bitmap on tx_eg ( sex );
Ses#1: update tx_eg set sex='FEMALE' where num=3;
Ses#2: update tx_eg set sex='FEMALE' where num=4;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 262151 62 6 0
10 TX 327680 60 6 0
10 TX 262151 62 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4).
Ses#1: commit;
Ses#2: commit;
Other Scenarios
~~~~~~~~~~~~~~~
There are other wait scenarios which can result in a SHARE mode wait for a TX
lock but these are rare compared to the examples given above.
Eg: If a session wants to read a row locked by a transaction in a PREPARED
state then it will wait on the relevant TX lock in SHARE mode (REQUEST=4).
As a PREPARED transaction should COMMIT, ROLLBACK or go to an in-doubt
state very soon after the prepare, this is not generally noticeable.
Doc ID: Note:62354.1 Type: TROUBLESHOOTING
Last Revision Date: 28-FEB-2008 Status: PUBLISHED
Introduction
~~~~~~~~~~~~
This short article gives examples of TX locks and the waits which can
occur in various circumstances. Often such waits will go unnoticed unless
they are of a long duration or when they trigger a deadlock scenario (which
raises an ORA-60 error).
The examples here demonstrate fundamental locking scenarios which should
be understood by application developers and DBA's alike.
Useful SQL statements
~~~~~~~~~~~~~~~~~~~~~
If you encounter a lock related hang scenario the following SQL statements
can be used to help isolate the waiters and blockers:
Show all sessions waiting for any lock:
select event,p1,p2,p3 from v$session_wait
where wait_time=0 and event='enqueue';
Show sessions waiting for a TX lock:
select * from v$lock where type='TX' and request>0;
Show sessions holding a TX lock:
select * from v$lock where type='TX' and lmode>0;
What is a TX lock ?
~~~~~~~~~~~~~~~~~~~
A TX lock is acquired when a transaction initiates its first change and is
held until the transaction does a COMMIT or ROLLBACK. It is used mainly as
a queuing mechanism so that other sessions can wait for the transaction to
complete. The lock name (ID1 and ID2) of the TX lock reflect the transaction
ID of the active transaction.
Example Tables
~~~~~~~~~~~~~~
The lock waits which can occur are demonstrated using the following
tables. Connect as SCOTT/TIGER or some dummy user to set up the test
environment using the following SQL:
DROP TABLE tx_eg;
CREATE TABLE tx_eg ( num number, txt varchar2(10), sex varchar2(10) )
INITRANS 1 MAXTRANS 1;
INSERT into tx_eg VALUES ( 1, 'First','FEMALE' );
INSERT into tx_eg VALUES ( 2, 'Second','MALE' );
INSERT into tx_eg VALUES ( 3, 'Third','MALE' );
INSERT into tx_eg VALUES ( 4, 'Fourth','MALE' );
INSERT into tx_eg VALUES ( 5, 'Fifth','MALE' );
COMMIT;
In the examples below three sessions are required:
Ses#1 indicates the TX_EG table owners first session
Ses#2 indicates the TX_EG table owners second session
DBA indicates a SYSDBA user with access to
The examples covered below include:
Waits due to Row being locked by an active Transaction
Waits due to Unique or Primary Key Constraint enforcement
Waits due to Insufficient 'ITL' slots in the Block
Waits due to rows being covered by the same BITMAP index fragment
Waits due to Row being locked by an active Transaction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a session updates a row in a table the row is locked by the sessions
transaction. Other users may SELECT that row and will see row as it was
BEFORE the UPDATE occurred. If another session wishes to UPDATE the same
row it has to wait for the first session to commit or rollback. The
second session waits for the first sessions TX lock in EXCLUSIVE mode.
Eg:
Ses#1: update tx_eg set txt='Garbage' where num=1;
Ses#2: update tx_eg set txt='Garbage' where num=1;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 131075 597 6 0
10 TX 131075 597 0 6
> This shows SID 10 is waiting for the TX lock held by SID 8 and it
> wants the lock in exclusive mode (as REQUEST=6).
The select below is included to demonstrate that a session waiting
on a lock will show as waiting on an 'enqueue' in V$SESSION_WAIT
and that the values of P1RAW, P2 and P3 indicate the actual lock
being waited for. When using Parallel Server the EVENT will be
'DFS enqueue lock acquisition' rather than 'enqueue'.
This select will be omitted from the following examples.
DBA: select sid,p1raw, p2, p3
from v$session_wait
where wait_time=0 and event='enqueue';
SID P1RAW P2 P3
---------- -------- ---------- ----------
10 54580006 131075 597
> ~~~~ ~~ ~~~~~~ ~~~
> type|mode id1 id2
> T X 6 131075 597
The next select shows the object_id and the exact row that the
session is waiting for. This information is only valid in V$SESSION
when a session is waiting due to a row level lock. The statement
is only valid in Oracle 7.3 onwards. As SID 10 is the waiter above
then this is the session to look at in V$SESSION:
DBA: select ROW_WAIT_OBJ#,
ROW_WAIT_FILE#,ROW_WAIT_BLOCK#,ROW_WAIT_ROW#
from v$session
where sid=10;
ROW_WAIT_O ROW_WAIT_F ROW_WAIT_B ROW_WAIT_R
---------- ---------- ---------- ----------
3058 4 2683 0
> The waiter is waiting for the TX lock in order to lock row 0
> in file 4, block 2683 of object 3058.
Ses#1: rollback;
Ses#2: rollback;
Waits due to Unique or Primary Key Constraint enforcement
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a table has a primary key constraint, a unique constraint
or a unique index then the uniqueness of the column(s) referenced by
the constraint is enforced by a unique index. If two sessions try to
insert the same key value the second session has to wait to see if an
ORA-00001 error should be raised or not.
Eg:
Ses#1: ALTER TABLE tx_eg ADD CONSTRAINT tx_eg_pk PRIMARY KEY( num );
Ses#1: insert into tx_eg values (10,'New','MALE');
Ses#2: insert into tx_eg values (10,'OtherNew',null);
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 196625 39 6 0
10 TX 262155 65 6 0
10 TX 196625 39 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4). SID 10 holds a TX lock
for its own transaction.
Ses#1: commit;
Ses#2: ORA-00001: unique constraint (SCOTT.TX_EG_PK) violated
Ses#2: rollback;
Waits due to Insufficient 'ITL' slots in a Block
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Oracle keeps note of which rows are locked by which transaction in an area
at the top of each data block known as the 'interested transaction list'.
The number of ITL slots in any block in an object is controlled by
the INITRANS and MAXTRANS attributes. INITRANS is the number of slots
initially created in a block when it is first used, while MAXTRANS places
an upper bound on the number of entries allowed. Each transaction which
wants to modify a block requires a slot in this 'ITL' list in the block.
MAXTRANS places an upper bound on the number of concurrent transactions
which can be active at any single point in time within a block.
INITRANS provides a minimum guaranteed 'per-block' concurrency.
If more than INITRANS but less than MAXTRANS transactions want to be
active concurrently within the same block then the ITL list will be extended
BUT ONLY IF THERE IS SPACE AVAILABLE TO DO SO WITHIN THE BLOCK.
If there is no free 'ITL' then the requesting session will wait on one
of the active transaction locks in mode 4.
Eg: Ses#1: update tx_eg set txt='Garbage' where num=1;
Ses#2: update tx_eg set txt='Different' where num=2;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 327688 48 6 0
10 TX 327688 48 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4).
Ses#1: commit;
Ses#2: commit;
Ses#1: ALTER TABLE tx_eg MAXTRANS 2;
Ses#1: update tx_eg set txt='First' where num=1;
Ses#2: update tx_eg set txt='Second' where num=2;
Both rows update as there is space to grow the ITL list to
accommodate both transactions.
Ses#1: commit;
Ses#2: commit;
Also, from 9.2 onwards you can check the ITL waits in v$segment_statistics
with a query like:
SELECT t.OWNER, t.OBJECT_NAME, t.OBJECT_TYPE, t.STATISTIC_NAME, t.VALUE
FROM v$segment_statistics t
WHERE t.STATISTIC_NAME = 'ITL waits'
AND t.VALUE > 0;
If need be, increase INITRANS and MAXTRANS.
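For example (a sketch, not part of the original note), the current settings
can be checked in DBA_TABLES and then raised. Note that new INITRANS/MAXTRANS
values only apply to blocks formatted after the change, so existing hot
blocks may still need the table to be rebuilt or moved:
DBA: select ini_trans, max_trans
     from dba_tables
     where owner = 'SCOTT' and table_name = 'TX_EG';
Ses#1: ALTER TABLE tx_eg INITRANS 4 MAXTRANS 16;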
Waits due to rows being covered by the same BITMAP index fragment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Bitmap indexes index key values and a range of ROWIDs. Each 'entry'
in a bitmap index can cover many rows in the actual table.
If 2 sessions wish to update rows covered by the same bitmap index
fragment then the second session waits for the first transaction to
either COMMIT or ROLLBACK by waiting for the TX lock in mode 4.
Eg: Ses#1: CREATE Bitmap Index tx_eg_bitmap on tx_eg ( sex );
Ses#1: update tx_eg set sex='FEMALE' where num=3;
Ses#2: update tx_eg set sex='FEMALE' where num=4;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 262151 62 6 0
10 TX 327680 60 6 0
10 TX 262151 62 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4).
Ses#1: commit;
Ses#2: commit;
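To check whether waits like this could be caused by a bitmap index, the
indexes on the table can be listed (an illustrative query, not part of the
original note):
DBA: select owner, index_name, index_type
     from dba_indexes
     where table_owner = 'SCOTT'
     and table_name = 'TX_EG'
     and index_type = 'BITMAP';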
Other Scenarios
~~~~~~~~~~~~~~~
There are other wait scenarios which can result in a SHARE mode wait for a TX
lock but these are rare compared to the examples given above.
Eg: If a session wants to read a row locked by a transaction in a PREPARED
state then it will wait on the relevant TX lock in SHARE mode (REQUEST=4).
As a PREPARED transaction should COMMIT, ROLLBACK or go to an in-doubt
state very soon after the prepare, this is not generally noticeable.
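If this scenario is suspected, transactions left in a prepared or in-doubt
state can be listed from DBA_2PC_PENDING (a hedged example, not part of the
original note):
DBA: select local_tran_id, global_tran_id, state, fail_time
     from dba_2pc_pending;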
Tuesday, April 22, 2008
Finding root blocker (enqueue)
Subject: Detecting Blocking Sessions in RAC and non-RAC (Enqueue Locks) Environments
Doc ID: Note:398519.1 Type: SCRIPT
Last Revision Date: 12-JAN-2007 Status: PUBLISHED
In this Document
Purpose
Software Requirements/Prerequisites
Configuring the Script
Running the Script
Caution
Script
Script Output
References
--------------------------------------------------------------------------------
Applies to: Oracle Server - Enterprise Edition - Version: 7.3.4 to 10.2
Information in this document applies to any platform.
Purpose
Script and explanation for finding blocking and blocked sessions in both RAC (multi instance) and non-RAC (single instance) environments.
.
Locking Mechanism
Resources (rows, tables, control files, ...), represented by enqueues (TX, TM, CF, ...), can be locked in various modes (e.g. shared, exclusive). Concurrent locking requests can conflict as per the compatibility rules. Resources (enqueues) are externalized via the GV$RESOURCE view and lock requests via the GV$LOCK view. Details can be found in
'Oracle® Database Concepts' -> chapter 'Data Concurrency and Consistency' (10.2. version)
.
Locking Conflict Types
1. Local locking conflict (block) - conflicting sessions are connected to the same instance (also applies to a single instance of RAC)
Detection: the V$LOCK column BLOCK contains the value 1 for the blocking lock (session)
2. Global locking conflict (block) - conflicting sessions are connected to different instances (multi-instance RAC only)
Drawback: the V$LOCK column BLOCK contains the value 2 to mark a potential conflict (the value is always 2 in RAC environments unless there is a local conflict)
As per Bug:3230663, in 10.2 we should be able to detect the global conflict, but the behaviour is still reproducible in 10.2.0.2 on Linux x86.
.
Script Principle
Display all sessions holding or requesting a lock on any resource that some session is waiting for. A waiting session has a non-zero value in the GV$LOCK.REQUEST column. The resource is identified by the TYPE, ID1 and ID2 columns of the GV$LOCK view.
.
We cannot rely on GV$LOCK.BLOCK in RAC as it always contains the value 2 ("potential locking conflict") unless a local conflict is detected.
.
Finding root blocker
Run the query provided in the Script section and do one of the following.
(1) Find and kill the root blockers
a) - Find the oldest lock entry - the row with the highest CTIME (this is row L1)
b) - Is there another row with the same SID as L1? (this is row L2, if it exists)
NO - L1 is the root blocker; kill that session
YES - Find the row with the same values in the ID1, ID2 columns as L2 where LMODE > 0 (this is row L3)
- Repeat (b) with L3 (L3 becomes L1) until you find the root blocker
(2) Or use a simple rule (which may not be optimal)
a) Kill the oldest blocking session (highest CTIME)
b) Run the script again and repeat (a) until no blocking session remains
A waiter/holder pairing query that makes step (1) easier to follow is sketched below.
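This sketch (not part of the original script) joins each waiting row in
GV$LOCK to the rows currently holding the same resource, so the SID and
CTIME values used in the steps above can be read off directly:
select w.inst_id waiter_inst, w.sid waiter_sid, w.ctime wait_secs,
       h.inst_id holder_inst, h.sid holder_sid, h.lmode, h.ctime hold_secs
from gv$lock w, gv$lock h
where w.request > 0      -- waiting lock requests
and h.lmode > 0          -- rows holding a lock on the same resource
and w.type = h.type
and w.id1 = h.id1
and w.id2 = h.id2;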
.
Detecting Object of Locking Conflict
Typically, problematic conflicts happen with "DML Locks" (the transaction - TX - and table - TM - lock types) and it is sometimes important to find out the subject of the conflict (e.g. to fix an application design error and prevent the issue).
The object name for a TM lock can easily be identified, as V$LOCK.ID1 matches DBA_OBJECTS.OBJECT_ID.
select o.OBJECT_ID, o.OWNER, o.OBJECT_NAME from dba_objects o, V$LOCK l
where l.SID=&sid and l.TYPE='TM' and l.ID1=o.OBJECT_ID;
In contrast, there is no easy way (via a single select) to find out which row (TX) the session is waiting for, i.e. to match a TX request to a TM lock (table name). This is due to the locking implementation - a pointer in the database block points to the assigned slot in the undo segment (the transaction id - XID). It is possible to guess the table name using the similarity of the CTIME column values, e.g. the session has been waiting for a TX lock for 100s and a TM lock on table A was placed 100s ago (this is just a guess, as the TM lock could have been acquired for an earlier update in the same transaction).
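As a complementary sketch (not part of the original note), when the waiter is
blocked on a row-level TX lock the ROW_WAIT_OBJ# column of V$SESSION can often
identify the table directly:
select o.owner, o.object_name
from v$session s, dba_objects o
where s.sid = &waiting_sid
and s.row_wait_obj# = o.object_id;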
Note: For simplicity GV$LOCK is referred to as a view, but it is actually a synonym for the view GV_$LOCK (the same applies to V$LOCK).
Software Requirements/Prerequisites
SQL*Plus
Configuring the Script
There are no steps required.
Running the Script
Connect as user able to select from GV$LOCK (typically user having DBA role).
sqlplus '/ as sysdba'
@
Caution
This script is provided for educational purposes only and not supported by Oracle Support Services. It has been tested internally, however, and works as documented. We do not guarantee that it will work for you, so be sure to test it in your environment before relying on it.
Proofread this script before using it! Due to the differences in the way text editors, e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns), this script may not be in an executable state when you first receive it. Check over the script to ensure that errors of this type are corrected.
Script
-- Author: daniel.x.simon@oracle.com
prompt CTIME is in Seconds
set lines 120
col BLOCK for 9
col LMODE for 9
col INST_ID for 9
col REQUEST for 9
col SID for 999999
select INST_ID, SID, TYPE, ID1, ID2, LMODE, REQUEST, CTIME, BLOCK
from gv$lock where (ID1,ID2,TYPE) in
(select ID1,ID2,TYPE from gv$lock where request>0);
Script Output
Example output showing two independent root conflicts. Note that the oldest conflict (SID 64 vs 38) is not the root, as session 38 is itself blocked by session 67.
INST_ID SID TYPE ID1 ID2 LMODE REQUEST CTIME BLOCK
---------- -------- ---------- --------- ------- ---------- ---------- ---------- ----------
2 212 TX 1114116 1399221 6 0 308 2
1 248 TX 1114116 1399221 0 6 304 0
4 67 TX 6225949 1244199 6 0 26 2
1 38 TX 6225949 1244199 0 6 23 0
2 64 TX 131103 2270514 0 6 117 0
1 38 TX 131103 2270514 6 0 171 2
References
Keywords: 'RAC' 'BLOCKING~LOCKS' 'ENQUEUE' 'V$LOCK'