Meaning of id1 and id2 in v$lock when type is TX:
trunc(id1/65536) = v$transaction.XIDUSN
mod(id1,65536) = v$transaction.XIDSLOT
id2 = v$transaction.XIDSQN
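A minimal sketch that cross-checks this mapping on a live system by joining v$lock to v$transaction (requires SELECT access to the V$ views):
select s.sid,
       l.id1, l.id2,
       trunc(l.id1/65536) as usn,   -- undo segment number
       mod(l.id1, 65536)  as slot,  -- slot within the undo segment
       l.id2              as sqn    -- sequence number
from   v$lock l, v$session s, v$transaction t
where  l.type = 'TX'
and    l.sid  = s.sid
and    t.xidusn  = trunc(l.id1/65536)
and    t.xidslot = mod(l.id1, 65536)
and    t.xidsqn  = l.id2;
Each returned row pairs a TX lock with the transaction it maps to.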
Sunday, September 21, 2008
Wednesday, September 03, 2008
Why is a Particular Query Slower on One Machine than Another
Subject: Why is a Particular Query Slower on One Machine than Another?
Doc ID: Note:604256.1 Type: TROUBLESHOOTING
Last Revision Date: 18-JUL-2008 Status: REVIEWED
In this Document
Purpose
Last Review Date
Instructions for the Reader
Troubleshooting Details
References
--------------------------------------------------------------------------------
Applies to:
Oracle Server - Enterprise Edition - Version: 8.1.7.4 to 11.2.0.1
Information in this document applies to any platform.
Purpose
This article outlines some reasons why a particular query may be slower on one machine than another.
The article assumes that software versions are the same on the two machines being compared.
Last Review Date
June 24, 2008
Instructions for the Reader
A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting.
Troubleshooting Details
Computer systems are deterministic: the same inputs should produce the same outputs. Unless the systems under comparison are exactly the same, their behavior cannot be expected to be exactly the same. Factors such as CPU speed, memory size, memory allocation, memory usage, and other concurrent database and non-database activity can all have an effect on the relative performance of different systems. Additionally, any differences in data, including volume and distribution, can have an effect. Some of these factors feed into the optimizer's decisions and may even cause a difference in execution plan.
If all the inputs to a query running on a particular version on different machines are the same, but the performance is different, then the differences are introduced by something outside of Oracle. Different hardware and settings can have a significant effect on the relative performance of different systems.
NOTE: If the comparison of different systems is part of a benchmarking exercise then support for this is provided via the relevant Oracle Account Manager and not through normal Support Services. Please contact your Account Manager for assistance with benchmarking queries.
Potential issues:
Issues can be divided into 2 main categories: those that influence the optimizer's plan choice, resulting in different access paths for queries, and those that do not change the execution plan but affect the performance of all or part of the query execution.
Plan Differences - plan changes can be caused by a number of different issues:
Is the Query really the same? - Any change in a SQL statement can have an impact on the selection of an access path for a particular query. However similar the statements may be, there may be unexpected differences that result in reduced (or increased) performance. If the select list of a query is changed to add or modify columns, or a where clause has altered predicate values, this can cause the optimizer to choose a different execution plan. Changes in execution plans are likely to cause different response times when the query is executed; a quick way to check for this is shown in the sketch below. Additionally, changes in data volumes in the underlying objects can make a large difference. Remember that changes in the predicates (i.e. the where clause) can alter the proportion of rows returned from an object: different values may return a larger or smaller proportion of the rows, and this in turn can have a significant effect on the performance of queries.
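To confirm whether the two machines actually produce the same plan for the statement, display the plan each side generates; a minimal sketch (available in 9.2 and up; table T and the predicate are illustrative only):
explain plan for
  select * from t where owner = 'SCOTT';
select * from table(dbms_xplan.display);
Run the same two statements on both systems and compare the output line by line.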
Different statistics - The Cost Based Optimizer (CBO) uses statistics to determine the execution plan for a particular query. Differences in statistics can cause the CBO to generate different execution plans, which in turn can lead to differences in performance of the same query. Assuming that the same statistics levels are gathered on the different systems and that the systems have the same parameter settings, statistics differences are attributable to data volume and data distribution differences. Potentially, with reduced sample sizes, sampling could produce different statistics due to chance groupings of data that may be the result of differing loading methods etc. On 10g and above it is recommended to use an estimate sample size of 100% (if it is possible for this to fit within the maintenance window), even if that means that statistics are gathered at a reduced frequency. If 100% is not feasible, try using an estimate of at least 30%. Generally, the overall accuracy of the statistics outweighs the day to day changes in most applications. Additionally, care should be taken that column statistics (histograms) are gathered in the same way on both systems. Note that the statistics-gathering defaults are not necessarily the same across versions, for example:
ESTIMATE_PERCENT: defaults:
9i : 100%
10g : DBMS_STATS.AUTO_SAMPLE_SIZE (using very small estimate percentage)
11g : DBMS_STATS.AUTO_SAMPLE_SIZE (using larger estimate percentage)
METHOD_OPT: defaults:
9i : "FOR ALL COLUMNS SIZE 1" effectively no detailed column statistics.
10g and 11g : AUTO - DBMS_STATS decides in which columns a histogram may help to produce a better plan.
Note that features such as Dynamic Sampling and System Statistics can also have an effect on query access paths even if parameters and gathered statistics on user objects are the same. See the information in the following sections for details.
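To take the version-specific defaults out of the equation, statistics can be gathered with explicit, identical settings on both systems; a minimal sketch (schema and table names are examples only):
begin
  dbms_stats.gather_table_stats(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => 100,                       -- same fixed sample size on both systems
    method_opt       => 'FOR ALL COLUMNS SIZE 1',  -- same histogram policy on both systems
    cascade          => TRUE);                     -- include index statistics
end;
/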
Different volume and distribution of data - although data differences may appear trivial, they may be enough to change a plan if the costs of competing access paths are close. Even small differences in the number of rows can be magnified by many orders of magnitude depending on the selected execution plan. Because of scalability issues, a smaller data set is not always representative of a larger one. It is not only the volume of data that can be an issue, but also the method used to load or insert the data in the first place. For example, data with a different distribution (such as pre-sorted data) may produce different access paths for queries because statistical changes make certain access paths appear more attractive than they actually are.
Different Block Sizes - The way the data itself is stored can have an impact on the access path chosen for a query. If the block size selected at database or tablespace level differs between the 2 systems, this can result in different access paths being chosen. This is because the objects in question may occupy fewer data blocks overall with a larger block size, making certain operations seem more (or less) efficient in terms of I/O than on the comparative system. Note that block size changes can even affect cases where no statistics have been gathered, since the default statistics used by the Cost Based Optimizer are derived in part from the block size and the number of blocks. Any storage differences that could affect the number of blocks occupied by an object could affect plans. Even the same data in two different databases (or in the same database) with different block sizes, pctused/pctfree or initial/next settings (leading to the use of a different number of blocks) can lead to different default statistics.
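The database default block size and any per-tablespace overrides can be compared between the systems with, for example:
show parameter db_block_size
select tablespace_name, block_size from dba_tablespaces;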
Different Parameter settings - a large number of parameters can have an effect on the optimizer. If parameter settings are the same, the optimizer should produce the same execution plan given the same SQL and statistics input. Specific things to look out for here are the settings for:
Dynamic Sampling: If dynamic sampling is enabled then the optimizer may sample information at runtime to determine a better plan for a query. If data distribution or volume is different then this sampling could return different statistics and this could clearly make a difference to the chosen execution plan. See:
Note 336267.1 Parameter : OPTIMIZER_DYNAMIC_SAMPLING
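If dynamic sampling is suspected, the level can be pinned to the same value on both systems, either session-wide or per statement via a hint; a sketch (level 4 is just an example value, and T is a hypothetical table):
alter session set optimizer_dynamic_sampling = 4;
-- or, for a single statement:
select /*+ dynamic_sampling(t 4) */ count(*) from t;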
Optimizer Features Enable: "OPTIMIZER_FEATURES_ENABLE" is an 'umbrella' parameter that covers a large variety of version-specific optimizer changes. Changing this parameter can result in features being enabled or disabled, or in their defaults changing, and the result can be execution plan changes. See:
Note 62337.1 Init.ora Parameter "OPTIMIZER_FEATURES_ENABLE" Reference Note
Parallelism: If the parallel degree available on a system is different, the optimizer may select different execution plans for queries that rely on parallel execution for their access paths to perform well. See:
Note 203238.1 Using Parallel Execution
Schema Differences - Differences in indexes, table partitioning and storage clauses can also have a big impact on plan choice. Use SQLTXPLAIN.SQL to verify whether there are such differences. See:
Note 215187.1 SQLTXPLAIN.SQL - Enhanced Explain Plan and related diagnostic info for one SQL statement
PGA and sort area size differences - the availability of large amounts of PGA can influence plan choices, since sorts or hash joins are less likely to spill over to disk. With a large PGA, plans that would otherwise have involved the high I/O associated with this spill-over may be selected, since the reduced I/O makes them more attractive to the optimizer.
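The relevant memory settings can be compared between systems with a query such as the following (which parameters matter depends on the workarea_size_policy in use):
select name, value
from   v$parameter
where  name in ('pga_aggregate_target', 'workarea_size_policy',
                'sort_area_size', 'hash_area_size');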
Different NLS settings may affect sort orders and mean that certain indexes are not available to avoid sorts or would be inefficient to use. See:
Note 227335.1 Linguistic Sorting - Frequently Asked Questions
Note 67522.1 Diagnosing Why a Query is Not Using an Index
Different Bind Variable Peeking - The default value for _OPTIM_PEEK_USER_BINDS is true in 9.0 and above. This means that the first time a SQL statement with bind variables is parsed, the optimizer will examine the value of the bind variable and use it when creating the execution plan. If that bind value is not representative of the majority of supplied values, it may cause the optimizer to choose a different execution plan than when a more representative value is submitted. If that is the case, the chosen plan may perform in a suboptimal manner. See:
Note 70075.1 Use of bind variables in queries
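On 10g and above, the bind values that were peeked when a cursor was optimized can be displayed with DBMS_XPLAN; a sketch (&sql_id is a placeholder for the cursor of interest):
select * from table(
  dbms_xplan.display_cursor('&sql_id', null, 'ADVANCED'));
-- the output includes a "Peeked Binds" section for the plan
Comparing the peeked values on the two systems can explain why the same statement was optimized differently.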
Different System Statistics - system statistics gathered using the DBMS_STATS.GATHER_SYSTEM_STATS procedure and stored in the SYS.AUX_STATS$ table are used by default when the "_OPTIMIZER_COST_MODEL" (hidden) parameter (which defaults to CHOOSE) is set to CHOOSE or CPU. These statistics can cause access paths for queries to change depending on the recorded capability of a system. Systems with different capabilities may use different access paths for the same query. See:
Note 470316.1 Using Actual System Statistics (Collected CPU and IO information)
Note 149560.1 Collect and Display System Statistics (CPU and IO) for CBO usage
Note 153761.1 Scaling the System to Improve CBO optimizer
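The system statistics in effect on each machine can be compared by querying the stored values directly:
select sname, pname, pval1, pval2
from   sys.aux_stats$;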
Query Performance Differences with Unchanged Execution Plan
Is the Query really the same?- As with a changed execution plan, any changes in a SQL statement can have an impact on performance. Adding or changing columns in a select list, especially the addition of functions or scalar sub-queries may add significantly to the CPU usage of a query since these will be executed for every row returned.
Platform - Differences anywhere in the technology stack underneath the database can cause differences in performance. CPU specification, memory, I/O subsystems, Operating System versions and patch levels can all have an impact. Price, physical size and branding are irrelevant. Platform choices are made for a variety of reasons; price and performance are just two of them. Similar (or even dissimilar) price is no guarantee of similar (or dissimilar) performance.
Network - Any operation that requires pulling data across a network, for example across a dblink, will be affected by network performance. Even same specification networks can perform differently depending on the load on the particular network segment. The larger the amount of data being sent across a network, the greater impact such variances in network performance are likely to have.
CPU - CPU-intensive operations will take longer when executed on slower CPUs. If an application is CPU-intensive, it does not matter how 'powerful' a system is in terms of concurrent users and throughput: if the processing is not scalable and spreadable over the available resources, performance may come down to raw CPU speed.
Memory - Memory constraints may mean that more sort or join information spills to disk, incurring the inevitable I/O penalties and reducing the performance of queries. Additionally, operating system memory usage and constraints due to concurrent Oracle and non-Oracle usage should be monitored to avoid swapping and paging.
Disk - I/O throughput can make a large difference to I/O-dependent queries. Different I/O subsystems may be tuned for massive throughput as opposed to raw speed. Additionally, network-based storage may depend on network speed and the load on the network itself. These problems can be alleviated to a degree by the buffer cache and O/S caching. See:
Note 223117.1 Tuning I/O-related waits.
Different Block Sizes - Just as in the section above regarding plan differences, using different block sizes can have an effect even if the access path has not changed. Storing data in a different block size can change the number of I/Os required, and thus the relative stress on the I/O subsystem.
Concurrency - If the concurrent usage of a system is not the same as that of the system it is being compared with, the relative performance may appear to be different. Ensure that the load profiles of the systems being compared are the same in order to make valid comparisons.
Schema Differences - Differences in indexes, table partitioning and storage clauses can have a big impact on execution time. If the data is stored in a different way then more blocks may be required to be read to return the same information. Use SQLTXPLAIN.SQL to verify whether there are such differences. See:
Note 215187.1 SQLTXPLAIN.SQL - Enhanced Explain Plan and related diagnostic info for one SQL statement
Parallelism - When the Optimizer selects an execution plan that it believes should use Parallel Execution, it has to assume that sufficient resource will be available at execution time to run the query at the desired level of parallelism. If that resource is unavailable at execution time, the query may run at a reduced level of parallelism or in serial, which could have a significant effect on the overall performance of the query and of the system as a whole. See:
Note 203238.1 Using Parallel Execution
References
Note 149560.1 - System Statistics: Collect and Display System Statistics (CPU and IO) for CBO usage
Note 153761.1 - System Statistics: Scaling the System to Improve CBO optimizer
Note 203238.1 - Using Parallel Execution
Note 223117.1 - Tuning I/O-related waits
Note 223806.1 - Query with Unchanged Execution Plan is Slower than Previously
Note 227335.1 - Linguistic Sorting - Frequently Asked Questions
Note 398838.1 - FAQ: Query Tuning Frequently Asked Questions
Note 470316.1 - Using Actual System Statistics (Collected CPU and IO information)
Note 70075.1 - Use of bind variables in queries
Monday, July 21, 2008
NLS considerations in Import/Export - Frequently Asked Questions
Subject: NLS considerations in Import/Export - Frequently Asked Questions
Doc ID: Note:227332.1 Type: BULLETIN
Last Revision Date: 17-JUL-2008 Status: PUBLISHED
Globalization (NLS): NLS considerations in Import/Export - Frequently Asked Questions
-------------------------------------------------------------------------------------
For the main Globalization (NLS) FAQ please see:
Note 60134.1 Globalization (NLS) - Frequently Asked Questions
1. How does NLS affect import/export (exp/imp)?
2. How should NLS_LANG be set when using export?
3. How should NLS_LANG be set when using import?
4. How is import affected by the NLS_LANGUAGE and NLS_TERRITORY ?
5. I have the message "( possible ncharset conversion )" during import.
6. How to know in what characterset a dmp (export) file is created?
7. How does NLS affect datapump (expdp/impdp)?
8. What causes ORA-01401 or ORA-12899 during import (imp and impdp) ?
For the main exp/imp FAQ please see:
Note 175624.1 Oracle Server - Export and Import FAQ
Globalization (NLS): NLS considerations in Import/Export - Frequently Asked Questions
-------------------------------------------------------------------------------------
1. How does NLS affect import/export (exp/imp)?
Import and export are client products, in the same way as SQL*Plus or
Oracle Forms, and will therefore translate characters from the database
character set to that defined by NLS_LANG. The character set used for
the export will be stored in the export file and, when the file is imported,
the import will check the character set that was used. If it is different to
that defined by NLS_LANG at the import site, the characters will be
translated to the import character set and then, if necessary, to the
database character set.
References:
Note 15095.1 Export/Import and NLS Considerations
Note 48644.1 Identifying the Export Character Set
2. How should NLS_LANG be set when using export?
Oracle recommends ALWAYS setting the character set part of the NLS_LANG
environment variable to the character set of the database you are exporting:
select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
That way no conversion takes place and the export file is created
in the same character set as the original database, containing ALL data from
the original database (even incorrectly stored data, if there is any).
Even if the plan is to import this into a database with a different character set
later, the conversion can be postponed until the import.
Note that this has no relation to the operating system. If you have a
WE8MSWIN1252 database on a Unix server (which is fully supported), you
should set NLS_LANG to AMERICAN_AMERICA.WE8MSWIN1252 before the export.
During *interaction* with the database (e.g. SQL*Plus) you need to configure
your *Unix* client properly, and that cannot be 1252, since Unix does not
have a 1252 character set. See:
Note 264157.1 The correct NLS_LANG setting in Unix Environments
3. How should NLS_LANG be set when using import?
If the source and target database have the same character set,
the character set part of the NLS_LANG should be set to that same character set
on both the export and the import.
Even if the character sets of the exporting and importing databases
are not the same, the best (preferred) value to use for
the character set part of NLS_LANG on both export and import is still
the character set of the source database:
select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
Setting NLS_LANG to the character set of the target database during import
is also correct as such, but import has some limitations when going to a multibyte
character set (like UTF8), so using the SOURCE NLS_CHARACTERSET during both the imp
and exp sessions is simply the best option, avoiding problems like
IMP-16 "Required character set conversion (type %lu to %lu) not supported".
So, the preferred place to do the conversion is between the import executable
and the target database.
Note that this has no relation to the operating system. If your source
database is a WE8MSWIN1252 database, you should simply set NLS_LANG
to AMERICAN_AMERICA.WE8MSWIN1252 before the import, even on a Unix server.
Note that during *interaction* with the database (e.g. SQL*Plus) you need to
configure your *Unix* client properly, and that cannot be 1252, since Unix
does not have a 1252 character set. See:
Note 264157.1 The correct NLS_LANG setting in Unix Environments
Example: you want to go from a WE8MSWIN1252 to a UTF8 db:
(note that this is only the exp/imp example, if you want to migrate to UTF8
check Note 260192.1 for the full story )
1) double check the NLS_CHARACTERSET on the SOURCE database
select * from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
and export with the NLS_LANG set to AMERICAN_AMERICA.WE8MSWIN1252.
In this case we want to create an export file containing WE8MSWIN1252 data.
(This is also the setting you want to use if you take an
export as backup)
on Unix (sh/ksh) this is:
$ NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
$ export NLS_LANG
$ exp ....
on Windows this is:
c:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
c:\>exp ....
2) import with the NLS_LANG set to AMERICAN_AMERICA.WE8MSWIN1252 (= source NLS_CHARACTERSET)
into the new UTF8 db.
on Unix (sh/ksh) this is:
$ NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
$ export NLS_LANG
$ imp ....
on Windows this is:
c:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
c:\>imp ....
The conversion to UTF8 is done while the imp connection inserts the data
into the UTF8 database.
We recommend setting NLS_LANG explicitly in the current shell on Unix
(-> Note 131207.1 How to Set Unix Environment Variable)
or in the DOS box used for the exp or imp tool on Windows.
(-> "c:\>set NLS_LANG=AMERICAN_AMERICA.")
4. How is import affected by the NLS_LANGUAGE and NLS_TERRITORY ?
It is not. Normally you use the AMERICAN_AMERICA default, but
if you imported with NLS_LANG set to FRENCH_FRANCE, for example,
you will not have problems, even if the originating environment
used GERMAN_GERMANY.
5. I have the message "( possible ncharset conversion )" during import.
If you see something similar to
Export file created by EXPORT:V08.01.07 via direct path
import done in WE8ISO8859P15 character set and AL16UTF16 NCHAR character set
export server uses WE8ISO8859P15 NCHAR character set (possible ncharset conversion)
in the import log, please see point 11 in
Note 276914.1 The National Character Set in Oracle 9i and 10g
6. How to know in what characterset a dmp (export) file is created?
Simply issue: imp system/oracle@database show=yes file=test.dmp
The output gives you:
import done in US7ASCII character set and AL16UTF16 NCHAR character set
-> this is the current NLS_LANG value set in the environment
and the NCHAR characterset of the target database
import server uses WE8MSWIN1252 character set (possible charset conversion)
-> this is only shown if the NLS_LANG during this import session is different
from the target database characterset, so if you see 3 lines you might have problems :-)
export client uses UTF8 character set (possible charset conversion)
-> this is the characterset used during the export session and the
characterset used in the dmp file.
7. How does NLS affect datapump (expdp/impdp)?
Datapump does not use NLS_LANG to do conversion between databases.
Conversion between 2 database character sets is done purely based on the
NLS_CHARACTERSET (or NLS_NCHAR_CHARACTERSET for NCHAR, NVARCHAR2 and NCLOB datatypes)
of the source and target database.
However, if you specify a parameter file then the NLS_LANG *is* used.
This is only important if you use non-English characters (e.g. for the
QUERY parameter) in the parameter file.
If you use non-English characters in the parameter file then the NLS_LANG
environment variable should be set (in the session where the Data Pump job
is started) to the correct encoding of the parameter file (!).
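For example (file name, table and query are purely illustrative), if a parameter
file contains accented characters, set NLS_LANG to the encoding the file was
saved in before starting the Data Pump job:
$ cat exp.par
tables=scott.emp
query="WHERE ename LIKE 'Hélène%'"
$ NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
$ export NLS_LANG
$ expdp system/oracle parfile=exp.par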
Note:
-----
There is a chance of data corruption when going from an 8-bit
character set to UTF8 or another multibyte character set on ALL 10g versions
(including 10.1.0.5 and 10.2.0.3) and on 11.1.0.6.
Impdp may provoke data corruption unless you have applied Patch 5874989;
this is caused by impdp Bug 5874989.
The "old" exp/imp work fine. This problem is fixed in the upcoming
10.2.0.4 and 11.1.0.7 patchsets.
All existing patches for this bug are found here:
http://updates.oracle.com/download/5874989.html
For 10.1.0.5, 10.2.0.2, 10.2.0.3 and 11.1.0.6 you can request a backport
of Bug 5874989 if there is no patch yet for your version/platform.
For Windows the fix is included in:
10.1.0.5.0 Patch 20 (10.1.0.5.20P) or later ( Note 276548.1 )
10.2.0.3.0 Patch 11 (10.2.0.3.11P) or later ( Note 342443.1 )
The patch is technically only needed on the impdp side, but if you use
expdp/impdp between different character sets we suggest patching all your
systems.
8. What causes ORA-01401 or ORA-12899 during import (imp and impdp) ?
9i and lower gives ORA-01401: inserted value too large for column
10g and up gives ORA-12899: value too large for column
This is seen when exporting from a database with an 8-bit NLS_CHARACTERSET
(like WE8ISO8859P1, WE8MSWIN1252, WE8DEC ...) or a 16-bit NLS_CHARACTERSET
(like JA16SJIS, ZHS16GBK, KO16MSWIN949) to a database with the
NLS_CHARACTERSET set to AL32UTF8 or UTF8.
(AL32)UTF8 uses more BYTES to store a character than 8-bit and 16-bit character sets.
This is explained in Note 119119.1 AL32UTF8 / UTF8 (Unicode) Database Character Set
Implications
CHAR and VARCHAR2 column sizes are defined by default in BYTES, not characters.
The best solution is to pre-create the tables using CHAR semantics (9i and up)
Note 144808.1 Examples and limits of BYTE and CHAR semantics usage
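A minimal sketch of pre-creating a table with character semantics before the
import (names are examples only; see Note 144808.1 for the limits):
-- either per column:
create table scott.emp2 ( ename varchar2(10 char) );
-- or as the session default for subsequent DDL:
alter session set nls_length_semantics = CHAR;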
Other Known Problems:
---------------------
Note 278980.1 Oracle10g Import Errors like IMP-00008 / IMP-00009 if Exported with NLS_LANG=FRENCH_FRANCE.[characterset]
References
----------
For further NLS / Globalization information you may start here:
Note 267942.1 - Globalization Technology (NLS) Knowledge Browser Product Page
Note 60134.1 Globalization (NLS) - Frequently Asked Questions
Further details of issues discussed in this note can be found in:
Note 15095.1 Export/Import and NLS Considerations
Note 48644.1 Identifying the Export Character Set
Note 175624.1 Oracle Server - Export and Import FAQ
Export/Import and NLS Considerations
Subject: Export/Import and NLS Considerations
Doc ID: Note:15095.1 Type: FAQ
Last Revision Date: 07-JUL-2006 Status: PUBLISHED
Introduction
------------
This note provides an in-depth overview of exp/imp and NLS. You may not need to
know all of this if you just want to take an export and make sure you don't lose
any data. Please read this first:
Note 227332.1 NLS considerations in Import/Export - Frequently Asked Questions
Export/Import and NLS considerations
------------------------------------
If you have exported/imported a database or table(s) and are now encountering
character set conversion problems, use the following information to confirm
whether the export/import procedure was performed correctly.
+ In Oracle9 and higher, most data is automatically exported in the character set
of the database being exported. The only exception is that
data-dictionary information is exported in the character set that is set in
the NLS_LANG environment variable when making the export.
In pre-Oracle9 export, ALL data is exported in the character set that is set
in the NLS_LANG environment variable when making the export.
This means that if the character set part of the NLS_LANG environment variable
during export differs from the database character set, there will be an
automatic conversion between the two (for 9i and higher only for data-dictionary
data, since "normal" data is exported in the database character set, regardless
of the NLS_LANG, as described above).
+ The export file contains the character set that the data was exported in, in
the first few bytes of the export file. Export stores the character set ID
(not the text string) of the "exporting character set". See Note 48644.1.
This is relevant in pre-Oracle9 export files because all data will be exported
in the same character set and this way we can "see" what that character set
is, and we can even "update" these fields to work around certain problems
(only when advised by Oracle Support!).
For Oracle9 and higher export files this is less relevant, because the
character set is stored in many more places in the export file and can
differ depending on the data. So for Oracle9 and higher we can never
"update" the export file manually.
+ An import session runs in the character set that is set in the NLS_LANG
environment variable for the import session.
+ Import will check the character set of the data in the dump file and compare
it with the session's character set as defined in NLS_LANG.
+ No conversion occurs if the export data character set and the import session
character set are the same. If they are not the same, conversion is performed
from the export data character set to the import session character set before
the data is inserted into the database.
However, import can only perform this conversion for single-byte character
sets!
+ For imports into multibyte character set databases (i.e. UTF8) the character
set part of NLS_LANG should be set identical to that of the export session.
Otherwise an
IMP-16 "Required character set conversion (type %lu to %lu) not supported"
error will come up during import. See also Bug 896407.
+ The import session character set should be the same as, or a superset of, the
export data character set; otherwise special characters will not be correctly
converted.
+ If you import pre-Oracle7 export files, include the parameter 'CHARSET' when
defining the import parameter set. CHARSET identifies the character set of the
export file. The CHARSET option was developed to import older export files
which did not have character set ID information stored in them.
If you define CHARSET but the export file does contain character set
information (export version 7 and higher), the value in CHARSET must
match the export file character set. If they do not match, IMP-42 will
result (so it is better not to use this parameter at all, unless you import
pre-Oracle7 data).
+ After the data has been converted to the import session character set, it is
then converted to the database character set if they differ. The database
character set should be a superset of (or the same as) the import session's
character set; otherwise special characters will not be correctly converted.
So this means that there is a potential for 3 separate conversions when moving
data between databases, as described in the following diagram:
------------------- for all data pre-oracle9 and data-
| db in character | export dictionary data in oracle9 and up:
| set A |---------------------> exp session is in character
------------------- set B as defined by NLS_LANG.
source Therefore the dump file is in
character set B. Character set
conversion may occur.
|
|
| move file over to
| another machine
|
V
destination
-------------------
| db in character | import imp session is in character
| set D | <------------------- set C as defined by NLS_LANG.
------------------- The dump file is still in
character set B. Character set
conversion may occur.
During the import process
character set conversion
may occur between character
set C and the db's character
set D if they differ.
If you have any problems related to character set conversion in export/import
then you need to identify the following:
-- What is (was) the database character set in the source database (character
set A in the above)?
-- What was the client character set specified in NLS_LANG when the data was
exported (character set B in the above)?
-- What was the client character set specified in NLS_LANG when the data was
imported (character set C in the above)?
-- What is the database character set of the destination database (character
set D in the above)?
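Character sets A and D can be read from the data dictionary of the respective
database. A minimal query, assuming access to NLS_DATABASE_PARAMETERS:
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
Character sets B and C are client-side settings: B is recorded in the dump
file header (see Note 48644.1) and C is whatever NLS_LANG was set to in the
importing environment.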
Minimizing character set conversions during export/import
---------------------------------------------------------
As described above, import will perform up to 3 character set conversions,
depending on:
(a) character set of exported database
(b) NLS_LANG of export session (-> this is the "export file character set")
(c) NLS_LANG of import session
(d) character set of target database.
Obviously there is really only a need for 1 conversion (from the original
database character set to the target database character set). Minimizing the
number of conversions means that you minimize the potential for "losing" data.
To minimize the number of conversions you are advised to follow these rules:
+ Set the NLS_LANG during export to the same as the character set of the
exported database -> this means no conversion takes place, all data is still
stored in the export file as it was stored in the database.
+ Set the NLS_LANG during import to the same value as during the export -> this
means that no conversion takes place in the import session.
+ If the character set of the target database is different, then the data will
be automatically converted when import inserts the data into the database; you
do not have to "set" anything for this, it is automatic.
Of course you do not have to follow these rules, but complying with them
does minimize the risk of losing data during export/import.
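As a sketch, for a WE8ISO8859P1 source database the rules above translate to
something like the following on a UNIX-style shell (user, file name and
character set are illustrative only):
    $ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1  # same as source database
    $ exp scott/tiger FILE=we8.dmp TABLES=emp
    (move we8.dmp to the target machine in binary mode)
    $ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1  # same value as during export
    $ imp scott/tiger FILE=we8.dmp TABLES=emp
Any remaining conversion to the target database character set then happens
automatically as the rows are inserted.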
Regarding the National Character Set:
-------------------------------------
NCLOBs are always exported/imported in UCS-2/AL16UTF16.
NCHAR/NVARCHAR2s are always exported in the database's national character set.
This is something you can't influence by setting any parameters.
Wednesday, July 02, 2008
ORA-01555 "Snapshot too old" - Detailed Explanation
Subject: ORA-01555 "Snapshot too old" - Detailed Explanation
Doc ID: Note:40689.1 Type: BULLETIN
Last Revision Date: 16-MAY-2008 Status: ARCHIVED
In this Document
Purpose
Scope and Application
ORA-01555 "Snapshot too old" - Detailed Explanation
Terminology
ORA-01555 Explanation
Solutions
Examples
Special Cases
Summary
References
--------------------------------------------------------------------------------
Applies to:
Oracle Server - Enterprise Edition - Version: 8.1.7.0
Information in this document applies to any platform.
Purpose
Checked for relevance on 16-MAY-2008
This article will discuss the circumstances under which a query can return the Oracle error ORA-01555: snapshot too old: rollback segment number string with name "string" too small. The article will then discuss actions that can be taken to avoid the error and finally will provide some simple PL/SQL scripts that illustrate the issues discussed.
Scope and Application
It is assumed that the reader is familiar with standard Oracle terminology such as 'rollback segment' and 'SCN'. If not, the reader should first read the Oracle Database Concepts manual and related Oracle Database documentation.
ORA-01555 "Snapshot too old" - Detailed Explanation
Terminology
Two key concepts are briefly covered below which help in the understanding of ORA-01555, Read Consistency and Delayed Block Cleanout.
1. Read Consistency
This is documented in the Oracle Database Concepts manual and so will not be discussed further; however, for the purposes of this article the concept should be read and understood if it is not already familiar. Oracle Server provides multi-version read consistency, which is invaluable to you because it guarantees that you are seeing a consistent view of the data (no 'dirty reads').
2. Delayed Block Cleanout
This is best illustrated with an example: Consider a transaction that updates a million row table. This obviously visits a large number of database blocks to make the change to the data. When the user commits the transaction Oracle does NOT go back and revisit these blocks to make the change permanent. It is left for the next transaction that visits any block affected by the update to 'tidy up' the block (hence the term 'delayed block cleanout').
Whenever Oracle changes a database block (index, table, cluster) it stores a pointer in the header of the data block which identifies the rollback segment used to hold the rollback information for the changes made by the transaction. (This is required if the user later elects to not commit the changes and wishes to 'undo' the changes made.)
Upon commit, the database simply marks the relevant rollback segment header entry as committed. Now, when one of the changed blocks is revisited Oracle examines the header of the data block, which indicates that it has been changed at some point. The database needs to confirm whether the change has been committed or whether it is currently uncommitted. To do this, Oracle determines the rollback segment used for the previous transaction (from the block's header) and then checks whether the rollback segment header indicates that the transaction has been committed.
If it is found that the block is committed then the header of the data block is updated so that subsequent accesses to the block do not incur this processing.
This behaviour is illustrated in a very simplified way below. Here we walk through the stages involved in updating a data block.
Stage 1 - No Changes Made
Description: This is the starting point. At the top of the data block we have an area used to link active transactions to a rollback segment (the 'tx' part), and the rollback segment header has a table that stores information upon all the latest transactions that have used that rollback segment.
In our example, we have two active transaction slots (01 and 02) and the next free slot is slot 03. (Since we are free to overwrite committed transactions.)
Stage 2 - Row 2 is Updated
Description: We have now updated row 2 of block 500. Note that the data block header is updated to point to the rollback segment 5, transaction slot 3 (5.3) and that it is marked uncommitted (Active).
Stage 3 - The User Issues a Commit
Description: Next the user issues a commit. Note that all this does is update the rollback segment header's corresponding transaction slot as committed. It does *nothing* to the data block.
Stage 4 - Another User Selects Data Block 500
Description: Some time later another user (or the same user) revisits data block 500. We can see that there is an uncommitted change in the data block according to the data block's header.
Oracle then uses the data block header to look up the corresponding rollback segment transaction table slot, sees that it has been committed, and changes data block 500 to reflect the true state of the data block (i.e. it performs delayed block cleanout).
ORA-01555 Explanation
There are two fundamental causes of the error ORA-01555 that are a result of Oracle trying to attain a 'read consistent' image. These are:
The rollback information itself is overwritten so that Oracle is unable to roll back the (committed) transaction entries to attain a sufficiently old version of the block.
The transaction slot in the rollback segment's transaction table (stored in the rollback segment's header) is overwritten, and Oracle cannot roll back the transaction header sufficiently to derive the original rollback segment transaction slot.
Both of these situations are discussed below with the series of steps that cause the ORA-01555. In the steps, reference is made to 'QENV'. 'QENV' is short for 'Query Environment', which can be thought of as the environment that existed when a query is first started and to which Oracle is trying to attain a read consistent image. Associated with this environment is the SCN (System Change Number) at that time and hence, QENV 50 is the query environment with SCN 50.
Case 1 - Rollback Overwritten
This breaks down into two cases: another session overwriting the rollback that the current session requires or the case where the current session overwrites the rollback information that it requires. The latter is discussed in this article because this is usually the harder one to understand.
1. Session 1 starts query at time T1 and QENV 50.
2. Session 1 selects block B1 during this query.
3. Session 1 updates the block at SCN 51.
4. Session 1 does some other work that generates rollback information.
5. Session 1 commits the changes made in steps 3 and 4. (Now other transactions are free to overwrite this rollback information.)
6. Session 1 revisits the same block B1 (perhaps for a different row).
Now, Oracle can see from the block's header that it has been changed and it is later than the required QENV (which was 50). Therefore we need to get an image of the block as of this QENV.
If an old enough version of the block can be found in the buffer cache then we will use this, otherwise we need to rollback the current block to generate another version of the block as at the required QENV.
It is under this condition that Oracle may not be able to get the required rollback information because Session 1's own subsequent changes have overwritten it, and Oracle returns the ORA-01555 error.
Case 2 - Rollback Transaction Slot Overwritten
1. Session 1 starts query at time T1 and QENV 50.
2. Session 1 selects block B1 during this query.
3. Session 1 updates the block at SCN 51.
4. Session 1 commits the changes. (Now other transactions are free to overwrite this rollback information.)
5. A session (Session 1, another session or a number of other sessions) then uses the same rollback segment for a series of committed transactions. These transactions each consume a slot in the rollback segment transaction table such that it eventually wraps around (the slots are written to in a circular fashion) and overwrites all the slots. Note that Oracle is free to reuse these slots since all the transactions are committed.
6. Session 1's query then visits a block that has been changed since the initial QENV was established. Oracle therefore needs to derive an image of the block as at that point in time.
7. Oracle attempts to look up the rollback segment header's transaction slot pointed to by the top of the data block. It then realises that this has been overwritten and attempts to roll back the changes made to the rollback segment header to get the original transaction slot entry.
8. If it cannot roll back the rollback segment transaction table sufficiently, it will return ORA-01555 since Oracle can no longer derive the required version of the data block.
It is also possible to encounter a variant of the transaction slot being overwritten when using block cleanout. This is briefly described below:
Session 1 starts a query at QENV 50. After this another process updates the blocks that Session 1 will require. When Session 1 encounters these blocks it determines that the blocks have changed and have not yet been cleaned out (via delayed block cleanout). Session 1 must determine whether the rows in the block existed at QENV 50 or were changed afterwards.
In order to do this, Oracle must look at the relevant rollback segment transaction table slot to determine the committed SCN. If this SCN is after the QENV then Oracle must try to construct an older version of the block, and if it is before then the block just needs cleaning out to be good enough for the QENV.
If the transaction slot has been overwritten and the transaction table cannot be rolled back to a sufficiently old version, then Oracle cannot derive the block image and will return ORA-01555.
Note: Normally Oracle can use an algorithm for determining a block's SCN during block cleanout even when the rollback segment slot has been overwritten. But in this case Oracle cannot guarantee that the version of the block has not changed since the start of the query.
Solutions
This section lists some of the solutions that can be used to avoid the ORA-01555 problems discussed in this article. It addresses the cases where rollback segment information is overwritten by the same session and when the rollback segment transaction table entry is overwritten.
It is worth highlighting that if a single session experiences the ORA-01555 and it is not one of the special cases listed at the end of this article, then the session must be using an Oracle extension whereby fetches across commits are tolerated. This does not follow the ANSI model and in the rare cases where ORA-01555 is returned one of the solutions below must be used.
Case 1 - Rollback Overwritten
1. Increase the size of the rollback segment, which will reduce the likelihood of overwriting rollback information that is needed.
2. Reduce the number of commits (same reason as 1).
3. Run the processing against a range of data rather than the whole table (same reason as 1).
4. Add additional rollback segments. This will allow the updates etc. to be spread across more rollback segments, thereby reducing the chances of overwriting required rollback information.
5. If fetching across commits, the code can be changed so that this is not done.
6. Ensure that the outer select does not revisit the same block at different times during the processing (see the sketch after this list). This can be achieved by:
   a. using a full table scan rather than an index lookup, or
   b. introducing a dummy sort so that we retrieve all the data, sort it and then sequentially visit these data blocks.
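As a sketch of option 6b, the driving cursor of the first example script later
in this note could be given a dummy sort (the ORDER BY column is arbitrary;
the point is that the sort materialises the full result set before the first
fetch, so data blocks are not revisited later in the processing):
    -- revisits blocks as rows are fetched across commits:
    cursor c1 is select rowid, bigemp.* from bigemp where a < 20;
    -- with a dummy sort, all blocks are read up front:
    cursor c1 is select rowid, bigemp.* from bigemp where a < 20 order by b;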
Case 2 - Rollback Transaction Slot Overwritten
Use any of the methods outlined above except for 6. This will allow transactions to spread their work across multiple rollback segments, thereby reducing the likelihood of rollback segment transaction table slots being consumed.
If it is suspected that the block cleanout variant is the cause, then force block cleanout to occur prior to the transaction that returns the ORA-1555. This can be achieved by issuing the following in SQL*Plus, SQL*DBA or Server Manager:
alter session set optimizer_goal = rule;
select count(*) from table_name;
If indexes are being accessed then the problem may be an index block, and cleanout can be forced by ensuring that the entire index is traversed. E.g. if the index is on a numeric column with a minimum value of 25 then the following query will force cleanout of the index:
select index_column from table_name where index_column > 24;
Examples
Listed below are some PL/SQL examples that can be used to illustrate the ORA-1555 cases given above. Before these PL/SQL examples will return this error the database must be configured as follows:
Use a small buffer cache (db_block_buffers).
REASON: You do not want the session executing the script to be able to find old versions of the block in the buffer cache which can be used to satisfy a block visit without requiring the rollback information.
Use one rollback segment other than SYSTEM.
REASON: You need to ensure that the work being done is generating rollback information that will overwrite the rollback information required.
Ensure that the rollback segment is small.
REASON: See the reason for using one rollback segment.
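A sketch of such a configuration (segment and tablespace names and the sizes
are illustrative; values this small are only sensible on a disposable test
instance):
    # init.ora fragment
    db_block_buffers = 100              # deliberately small buffer cache
    rollback_segments = (rbs_small)     # one small non-SYSTEM rollback segment
The rollback segment itself might be created with:
    CREATE ROLLBACK SEGMENT rbs_small
      TABLESPACE rbs
      STORAGE (INITIAL 16K NEXT 16K MINEXTENTS 2 MAXEXTENTS 4);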
Rollback Overwritten
rem * 1555_a.sql - Example of getting ora-1555 "Snapshot too old" by
rem * a session overwriting the rollback information required
rem * by the same session.
drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));
drop table dummy1;
create table dummy1 (a varchar2(200));
rem * Populate the example tables.
begin
for i in 1..4000 loop
insert into bigemp values (mod(i,20), to_char(i), 'N');
if mod(i,100) = 0 then
insert into dummy1 values ('ssssssssssss');
commit;
end if;
end loop;
commit;
end;
/
rem * Ensure that table is 'cleaned out'.
select count(*) from bigemp;
declare
-- Must use a predicate so that we revisit a changed block at a different
-- time.
-- If another tx is updating the table then we may not need the predicate
cursor c1 is select rowid, bigemp.* from bigemp where a < 20;
begin
for c1rec in c1 loop
update dummy1 set a = 'aaaaaaaa';
update dummy1 set a = 'bbbbbbbb';
update dummy1 set a = 'cccccccc';
update bigemp set done='Y' where c1rec.rowid = rowid;
commit;
end loop;
end;
/
Rollback Transaction Slot Overwritten
rem * 1555_b.sql - Example of getting ora-1555 "Snapshot too old" by
rem * overwriting the transaction slot in the rollback
rem * segment header. This just uses one session.
drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));
rem * Populate demo table.
begin
for i in 1..200 loop
insert into bigemp values (mod(i,20), to_char(i), 'N');
if mod(i,100) = 0 then
commit;
end if;
end loop;
commit;
end;
/
drop table mydual;
create table mydual (a number);
insert into mydual values (1);
commit;
rem * Cleanout demo table.
select count(*) from bigemp;
declare
cursor c1 is select * from bigemp;
begin
-- The following update is required to illustrate the problem if block
-- cleanout has been done on 'bigemp'. If the cleanout (above) is commented
-- out then the update and commit statements can be commented and the
-- script will fail with ORA-1555 for the block cleanout variant.
update bigemp set b = 'aaaaa';
commit;
for c1rec in c1 loop
for i in 1..20 loop
update mydual set a=a;
commit;
end loop;
end loop;
end;
/
Special Cases
There are other special cases that may result in an ORA-01555. These are given below but are rare and so not discussed in this article:
Trusted Oracle can return this if configured in OS MAC mode. Decreasing LOG_CHECKPOINT_INTERVAL on the secondary database may overcome the problem.
If a query visits a data block that has been changed by using the Oracle discrete transaction facility then it will return ORA-01555.
It is feasible that a rollback segment created with the OPTIMAL clause may cause a query to return ORA-01555 if it has shrunk during the life of the query, causing rollback segment information required to generate consistent read versions of blocks to be lost.
Summary
This article has discussed the reasons behind the error ORA-01555 "Snapshot too old", has provided a list of possible methods to avoid the error when it is encountered, and has provided simple PL/SQL scripts that illustrate the cases discussed.
References
Note 1005107.6 - ORA-01555: snapshot too old - Causes and Solutions
Note 104008.1 - ORA-01555: "SNAPSHOT TOO OLD" - Additional Solutions
Note 10630.1 - ORA-01555: "Snapshot too old" - Overview
Note 216971.1 - ORA-01555 When Using Undo Tablespace
Note 45895.1 - ORA-01555 "Snapshot too old" in Very Large Databases
Note 62005.1 - Creating, Optimizing, and Understanding Rollback Segments
Note 69464.1 - Rollback Segment Configuration & Tips
Oracle Database Concepts on http://otn.oracle.com
Tuesday, July 01, 2008
Which wait events in Oracle are idle wait events?
In 9iR2 you can check STATS$IDLE_EVENT to see which wait events are idle wait events.
In 10gR2 there are more options; v$event_name is a typical one.
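For example, in 10gR2 the idle events can be listed via the WAIT_CLASS column
of v$event_name:
select name from v$event_name where wait_class = 'Idle';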
Wednesday, April 23, 2008
TX Transaction locks - Example wait scenarios (enqueue)
Subject: TX Transaction locks - Example wait scenarios
Doc ID: Note:62354.1 Type: TROUBLESHOOTING
Last Revision Date: 28-FEB-2008 Status: PUBLISHED
Introduction
~~~~~~~~~~~~
This short article gives examples of TX locks and the waits which can
occur in various circumstances. Often such waits will go unnoticed unless
they are of a long duration or they trigger a deadlock scenario (which
raises an ORA-60 error).
The examples here demonstrate fundamental locking scenarios which should
be understood by application developers and DBAs alike.
Useful SQL statements
~~~~~~~~~~~~~~~~~~~~~
If you encounter a lock related hang scenario the following SQL statements
can be used to help isolate the waiters and blockers:
Show all sessions waiting for any lock:
select event,p1,p2,p3 from v$session_wait
where wait_time=0 and event='enqueue';
Show sessions waiting for a TX lock:
select * from v$lock where type='TX' and request>0;
Show sessions holding a TX lock:
select * from v$lock where type='TX' and lmode>0;
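Pairing the two, the following sketch lists each waiter next to its blocker
by joining holders and requesters on the lock identifiers:
select h.sid blocker, w.sid waiter, w.id1, w.id2
from v$lock h, v$lock w
where h.type='TX' and w.type='TX'
and h.lmode > 0 and w.request > 0
and h.id1 = w.id1 and h.id2 = w.id2;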
What is a TX lock ?
~~~~~~~~~~~~~~~~~~~
A TX lock is acquired when a transaction initiates its first change and is
held until the transaction does a COMMIT or ROLLBACK. It is used mainly as
a queuing mechanism so that other sessions can wait for the transaction to
complete. The lock name (ID1 and ID2) of the TX lock reflect the transaction
ID of the active transaction.
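Since ID1 packs the undo segment number and slot number while ID2 carries the
sequence (wrap) number, a held TX lock can be matched to its transaction in
V$TRANSACTION with a query like the following sketch:
select l.sid, t.xidusn, t.xidslot, t.xidsqn, t.status
from v$lock l, v$transaction t
where l.type='TX' and l.lmode > 0
and trunc(l.id1/65536) = t.xidusn
and mod(l.id1, 65536) = t.xidslot
and l.id2 = t.xidsqn;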
Example Tables
~~~~~~~~~~~~~~
The lock waits which can occur are demonstrated using the following
tables. Connect as SCOTT/TIGER or some dummy user to set up the test
environment using the following SQL:
DROP TABLE tx_eg;
CREATE TABLE tx_eg ( num number, txt varchar2(10), sex varchar2(10) )
INITRANS 1 MAXTRANS 1;
INSERT into tx_eg VALUES ( 1, 'First','FEMALE' );
INSERT into tx_eg VALUES ( 2, 'Second','MALE' );
INSERT into tx_eg VALUES ( 3, 'Third','MALE' );
INSERT into tx_eg VALUES ( 4, 'Fourth','MALE' );
INSERT into tx_eg VALUES ( 5, 'Fifth','MALE' );
COMMIT;
In the examples below three sessions are required:
Ses#1 indicates the TX_EG table owner's first session
Ses#2 indicates the TX_EG table owner's second session
DBA indicates a SYSDBA user with access to the V$ views
The examples covered below include:
Waits due to Row being locked by an active Transaction
Waits due to Unique or Primary Key Constraint enforcement
Waits due to Insufficient 'ITL' slots in the Block
Waits due to rows being covered by the same BITMAP index fragment
Waits due to Row being locked by an active Transaction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a session updates a row in a table, the row is locked by the session's
transaction. Other users may SELECT that row and will see the row as it was
BEFORE the UPDATE occurred. If another session wishes to UPDATE the same
row, it has to wait for the first session to commit or roll back. The
second session waits for the first session's TX lock in EXCLUSIVE mode.
Eg:
Ses#1: update tx_eg set txt='Garbage' where num=1;
Ses#2: update tx_eg set txt='Garbage' where num=1;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 131075 597 6 0
10 TX 131075 597 0 6
> This shows SID 10 is waiting for the TX lock held by SID 8 and it
> wants the lock in exclusive mode (as REQUEST=6).
The select below is included to demonstrate that a session waiting
on a lock will show as waiting on an 'enqueue' in V$SESSION_WAIT
and that the values of P1RAW, P2 and P3 indicate the actual lock
being waited for. When using Parallel Server the EVENT will be
'DFS enqueue lock acquisition' rather than 'enqueue'.
This select will be omitted from the following examples.
DBA: select sid,p1raw, p2, p3
from v$session_wait
where wait_time=0 and event='enqueue';
SID P1RAW P2 P3
---------- -------- ---------- ----------
10 54580006 131075 597
> ~~~~|~~   ~~~~~~     ~~~
> type|mode   id1      id2
> 'TX'|  6    131075   597
The next select shows the object_id and the exact row that the
session is waiting for. This information is only valid in V$SESSION
when a session is waiting due to a row level lock. The statement
is only valid in Oracle 7.3 onwards. As SID 10 is the waiter above
then this is the session to look at in V$SESSION:
DBA: select ROW_WAIT_OBJ#,
ROW_WAIT_FILE#,ROW_WAIT_BLOCK#,ROW_WAIT_ROW#
from v$session
where sid=10;
ROW_WAIT_O ROW_WAIT_F ROW_WAIT_B ROW_WAIT_R
---------- ---------- ---------- ----------
3058 4 2683 0
> The waiter is waiting for the TX lock in order to lock row 0
> in file 4, block 2683 of object 3058.
Ses#1: rollback;
Ses#2: rollback;
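In Oracle8 and later, the four ROW_WAIT_% values can be assembled back into
the ROWID of the contended row with DBMS_ROWID (a sketch using the figures
above; the first argument, 1, requests an extended ROWID):
DBA: select * from tx_eg
     where rowid = dbms_rowid.rowid_create(1, 3058, 4, 2683, 0);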
Waits due to Unique or Primary Key Constraint enforcement
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a table has a primary key constraint, a unique constraint
or a unique index, then the uniqueness of the column(s) referenced by
the constraint is enforced by a unique index. If two sessions try to
insert the same key value, the second session has to wait to see if an
ORA-00001 should be raised or not.
Eg:
Ses#1: ALTER TABLE tx_eg ADD CONSTRAINT tx_eg_pk PRIMARY KEY( num );
Ses#1: insert into tx_eg values (10,'New','MALE');
Ses#2: insert into tx_eg values (10,'OtherNew',null);
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 196625 39 6 0
10 TX 262155 65 6 0
10 TX 196625 39 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4). SID 10 holds a TX lock
for its own transaction.
Ses#1: commit;
Ses#2: ORA-00001: unique constraint (SCOTT.TX_EG_PK) violated
Ses#2: rollback;
Waits due to Insufficient 'ITL' slots in a Block
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Oracle keeps track of which rows are locked by which transaction in an area
at the top of each data block known as the 'interested transaction list' (ITL).
The number of ITL slots in any block in an object is controlled by
the INITRANS and MAXTRANS attributes. INITRANS is the number of slots
initially created in a block when it is first used, while MAXTRANS places
an upper bound on the number of entries allowed. Each transaction which
wants to modify a block requires a slot in this 'ITL' list in the block.
MAXTRANS places an upper bound on the number of concurrent transactions
which can be active at any single point in time within a block.
INITRANS provides a minimum guaranteed 'per-block' concurrency.
If more than INITRANS but less than MAXTRANS transactions want to be
active concurrently within the same block then the ITL list will be extended
BUT ONLY IF THERE IS SPACE AVAILABLE TO DO SO WITHIN THE BLOCK.
If there is no free 'ITL' then the requesting session will wait on one
of the active transaction locks in mode 4.
Eg: Ses#1: update tx_eg set txt='Garbage' where num=1;
Ses#2: update tx_eg set txt='Different' where num=2;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 327688 48 6 0
10 TX 327688 48 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4).
Ses#1: commit;
Ses#2: commit;
Ses#1: ALTER TABLE tx_eg MAXTRANS 2;
Ses#1: update tx_eg set txt='First' where num=1;
Ses#2: update tx_eg set txt='Second' where num=2;
Both rows update as there is space to grow the ITL list to
accommodate both transactions.
Ses#1: commit;
Ses#2: commit;
Also, from 9.2 onwards you can check ITL waits in v$segment_statistics
with a query like:
SELECT t.OWNER, t.OBJECT_NAME, t.OBJECT_TYPE, t.STATISTIC_NAME, t.VALUE
FROM v$segment_statistics t
WHERE t.STATISTIC_NAME = 'ITL waits'
AND t.VALUE > 0;
If need be, increase INITRANS and MAXTRANS.
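For example (the values are illustrative; note that a raised INITRANS only
applies to blocks formatted after the change):
ALTER TABLE tx_eg INITRANS 4 MAXTRANS 16;
ALTER INDEX tx_eg_pk INITRANS 4;    -- index blocks have their own ITLs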
Waits due to rows being covered by the same BITMAP index fragment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Bitmap indexes index key values and a range of ROWIDs. Each 'entry'
in a bitmap index can cover many rows in the actual table.
If 2 sessions wish to update rows covered by the same bitmap index
fragment then the second session waits for the first transaction to
either COMMIT or ROLLBACK by waiting for the TX lock in mode 4.
Eg: Ses#1: CREATE Bitmap Index tx_eg_bitmap on tx_eg ( sex );
Ses#1: update tx_eg set sex='FEMALE' where num=3;
Ses#2: update tx_eg set sex='FEMALE' where num=4;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 262151 62 6 0
10 TX 327680 60 6 0
10 TX 262151 62 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4).
Ses#1: commit;
Ses#2: commit;
Other Scenarios
~~~~~~~~~~~~~~~
There are other wait scenarios which can result in a SHARE mode wait for a TX
lock, but these are rare compared to the examples given above.
E.g. if a session wants to read a row locked by a transaction in a PREPARED
state, then it will wait on the relevant TX lock in SHARE mode (REQUEST=4).
As a PREPARED transaction should COMMIT, ROLLBACK or go to an in-doubt
state very soon after the prepare, this is not generally noticeable.
Doc ID: Note:62354.1 Type: TROUBLESHOOTING
Last Revision Date: 28-FEB-2008 Status: PUBLISHED
Introduction
~~~~~~~~~~~~
This short article gives examples of TX locks and the waits which can
occur in various circumstances. Often such waits will go unnoticed unless
they are of a long duration or when they trigger a deadlock scenario (which
raises an ORA-60 error).
The examples here demonstrate fundamental locking scenarios which should
be understood by application developers and DBA's alike.
Useful SQL statements
~~~~~~~~~~~~~~~~~~~~~
If you encounter a lock related hang scenario the following SQL statements
can be used to help isolate the waiters and blockers:
Show all sessions waiting for any lock:
select event,p1,p2,p3 from v$session_wait
where wait_time=0 and event='enqueue';
Show sessions waiting for a TX lock:
select * from v$lock where type='TX' and request>0;
Show sessions holding a TX lock:
select * from v$lock where type='TX' and lmode>0;
What is a TX lock ?
~~~~~~~~~~~~~~~~~~~
A TX lock is acquired when a transaction initiates its first change and is
held until the transaction does a COMMIT or ROLLBACK. It is used mainly as
a queuing mechanism so that other sessions can wait for the transaction to
complete. The lock name (ID1 and ID2) of the TX lock reflect the transaction
ID of the active transaction.
Example Tables
~~~~~~~~~~~~~~
The lock waits which can occur are demonstrated using the following
tables. Connect as SCOTT/TIGER or some dummy user to set up the test
environment using the following SQL:
DROP TABLE tx_eg;
CREATE TABLE tx_eg ( num number, txt varchar2(10), sex varchar2(10) )
INITRANS 1 MAXTRANS 1;
INSERT into tx_eg VALUES ( 1, 'First','FEMALE' );
INSERT into tx_eg VALUES ( 2, 'Second','MALE' );
INSERT into tx_eg VALUES ( 3, 'Third','MALE' );
INSERT into tx_eg VALUES ( 4, 'Fourth','MALE' );
INSERT into tx_eg VALUES ( 5, 'Fifth','MALE' );
COMMIT;
In the examples below three sessions are required:
Ses#1 indicates the TX_EG table owners first session
Ses#2 indicates the TX_EG table owners second session
DBA indicates a SYSDBA user with access to
The examples covered below include:
Waits due to Row being locked by an active Transaction
Waits due to Unique or Primary Key Constraint enforcement
Waits due to Insufficient 'ITL' slots in the Block
Waits due to rows being covered by the same BITMAP index fragment
Waits due to Row being locked by an active Transaction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a session updates a row in a table the row is locked by the sessions
transaction. Other users may SELECT that row and will see row as it was
BEFORE the UPDATE occurred. If another session wishes to UPDATE the same
row it has to wait for the first session to commit or rollback. The
second session waits for the first sessions TX lock in EXCLUSIVE mode.
Eg:
Ses#1: update tx_eg set txt='Garbage' where num=1;
Ses#2: update tx_eg set txt='Garbage' where num=1;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 131075 597 6 0
10 TX 131075 597 0 6
> This shows SID 10 is waiting for the TX lock held by SID 8 and it
> wants the lock in exclusive mode (as REQUEST=6).
The select below is included to demonstrate that a session waiting
on a lock will show as waiting on an 'enqueue' in V$SESSION_WAIT
and that the values of P1RAW, P2 and P3 indicate the actual lock
being waited for. When using Parallel Server the EVENT will be
'DFS enqueue lock acquisition' rather than 'enqueue'.
This select will be omitted from the following examples.
DBA: select sid,p1raw, p2, p3
from v$session_wait
where wait_time=0 and event='enqueue';
SID P1RAW P2 P3
---------- -------- ---------- ----------
10 54580006 131075 597
> ~~~~ ~~ ~~~~~~ ~~~
> type|mode id1 id2
> T X 6 131075 597
The next select shows the object_id and the exact row that the
session is waiting for. This information is only valid in V$SESSION
when a session is waiting due to a row level lock. The statement
is only valid in Oracle 7.3 onwards. As SID 10 is the waiter above
then this is the session to look at in V$SESSION:
DBA: select ROW_WAIT_OBJ#,
ROW_WAIT_FILE#,ROW_WAIT_BLOCK#,ROW_WAIT_ROW#
from v$session
where sid=10;
ROW_WAIT_O ROW_WAIT_F ROW_WAIT_B ROW_WAIT_R
---------- ---------- ---------- ----------
3058 4 2683 0
> The waiter is waiting for the TX lock in order to lock row 0
> in file 4, block 2683 of object 3058.
Ses#1: rollback;
Ses#2: rollback;
Waits due to Unique or Primary Key Constraint enforcement
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a table has a primary key constraint, a unique constraint
or a unique index then the uniqueness of the column(s) referenced by
the constraint is enforced by a unique index. If two sessions try to
insert the same key value the second session has to wait to see if an
ORA-00001 should be raised or not.
Eg:
Ses#1: ALTER TABLE tx_eg ADD CONSTRAINT tx_eg_pk PRIMARY KEY( num );
Ses#1: insert into tx_eg values (10,'New','MALE');
Ses#2: insert into tx_eg values (10,'OtherNew',null);
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 196625 39 6 0
10 TX 262155 65 6 0
10 TX 196625 39 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4). SID 10 holds a TX lock
for its own transaction.
Ses#1: commit;
Ses#2: ORA-00001: unique constraint (SCOTT.TX_EG_PK) violated
Ses#2: rollback;
Waits due to Insufficient 'ITL' slots in a Block
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Oracle keeps note of which rows are locked by which transaction in an area
at the top of each data block known as the 'interested transaction list'.
The number of ITL slots in any block in an object is controlled by
the INITRANS and MAXTRANS attributes. INITRANS is the number of slots
initially created in a block when it is first used, while MAXTRANS places
an upper bound on the number of entries allowed. Each transaction which
wants to modify a block requires a slot in this 'ITL' list in the block.
MAXTRANS places an upper bound on the number of concurrent transactions
which can be active at any single point in time within a block.
INITRANS provides a minimum guaranteed 'per-block' concurrency.
If more than INITRANS but less than MAXTRANS transactions want to be
active concurrently within the same block then the ITL list will be extended
BUT ONLY IF THERE IS SPACE AVAILABLE TO DO SO WITHIN THE BLOCK.
If there is no free 'ITL' then the requesting session will wait on one
of the active transaction locks in mode 4.
Eg: Ses#1: update tx_eg set txt='Garbage' where num=1;
Ses#2: update tx_eg set txt='Different' where num=2;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 327688 48 6 0
10 TX 327688 48 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4).
Ses#1: commit;
Ses#2: commit;
Ses#1: ALTER TABLE tx_eg MAXTRANS 2;
Ses#1: update tx_eg set txt='First' where num=1;
Ses#2: update tx_eg set txt='Second' where num=2;
Both rows update as there is space to grow the ITL list to
accommodate both transactions.
Ses#1: commit;
Ses#2: commit;
Also, from 9.2 onwards you can check the ITL waits in v$segment_statistics
with a query like:
SELECT t.OWNER, t.OBJECT_NAME, t.OBJECT_TYPE, t.STATISTIC_NAME, t.VALUE
FROM v$segment_statistics t
WHERE t.STATISTIC_NAME = 'ITL waits'
AND t.VALUE > 0;
If need be, increase INITRANS and MAXTRANS.
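A hedged sketch of such a change on the example table follows (a new INITRANS
value only applies to newly formatted blocks, so existing blocks are typically
rebuilt with a MOVE, after which indexes such as the tx_eg_pk index from the
earlier example must be rebuilt; note that from 10g MAXTRANS is deprecated and
ignored):
ALTER TABLE tx_eg INITRANS 4 MAXTRANS 16;
ALTER TABLE tx_eg MOVE;
ALTER INDEX tx_eg_pk REBUILD;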
Waits due to rows being covered by the same BITMAP index fragment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Bitmap indexes index key values and a range of ROWIDs. Each 'entry'
in a bitmap index can cover many rows in the actual table.
If 2 sessions wish to update rows covered by the same bitmap index
fragment then the second session waits for the first transaction to
either COMMIT or ROLLBACK by waiting for the TX lock in mode 4.
Eg: Ses#1: CREATE Bitmap Index tx_eg_bitmap on tx_eg ( sex );
Ses#1: update tx_eg set sex='FEMALE' where num=3;
Ses#2: update tx_eg set sex='FEMALE' where num=4;
DBA: select SID,TYPE,ID1,ID2,LMODE,REQUEST
from v$lock where type='TX';
SID TY ID1 ID2 LMODE REQUEST
---------- -- ---------- ---------- ---------- ----------
8 TX 262151 62 6 0
10 TX 327680 60 6 0
10 TX 262151 62 0 4
This shows SID 10 is waiting for the TX lock held by SID 8 and it
wants the lock in share mode (as REQUEST=4).
Ses#1: commit;
Ses#2: commit;
Other Scenarios
~~~~~~~~~~~~~~~
There are other wait scenarios which can result in a SHARE mode wait for a TX
lock but these are rare compared to the examples given above.
Eg: If a session wants to read a row locked by a transaction in a PREPARED
state then it will wait on the relevant TX lock in SHARE mode (REQUEST=4).
As a PREPARED transaction should COMMIT, ROLLBACK or go to an in-doubt
state very soon after the prepare, this is not generally noticeable.
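If such waits do persist, the in-doubt distributed transactions can be
inspected from the DBA views; a minimal sketch:
DBA: select local_tran_id, state, fail_time
     from dba_2pc_pending;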
Tuesday, April 22, 2008
Finding root blocker (enqueue)
Subject: Detecting Blocking Sessions in RAC and non-RAC (Enqueue Locks) Environments
Doc ID: Note:398519.1 Type: SCRIPT
Last Revision Date: 12-JAN-2007 Status: PUBLISHED
In this Document
Purpose
Software Requirements/Prerequisites
Configuring the Script
Running the Script
Caution
Script
Script Output
References
--------------------------------------------------------------------------------
Applies to: Oracle Server - Enterprise Edition - Version: 7.3.4 to 10.2
Information in this document applies to any platform.
Purpose
Script and explanation for finding blocking and blocked sessions in both RAC (multi instance) and non-RAC (single instance) environments.
Locking Mechanism
Resources (rows, tables, control files, ...) represented by enqueues (TX, TM, CF, ...) can be locked in various modes (e.g. shared, exclusive, ...). Concurrent locking requests can conflict as per the compatibility rules. Resources (enqueues) are externalized via the GV$RESOURCE view and lock requests via the GV$LOCK view. Details can be found in
'Oracle® Database Concepts' -> chapter 'Data Concurrency and Consistency' (10.2 version)
Locking Conflict Types
1. Local locking conflict (block) - the conflicting sessions are connected to the same instance (this also applies within one instance of RAC)
Detection: the V$LOCK column BLOCK contains value 1 for the blocking lock (session)
2. Global locking conflict (block) - the conflicting sessions are connected to different instances (multi instance RAC only)
Drawback: the V$LOCK column BLOCK contains value 2 to mark a potential conflict (the value is always 2 in RAC environments unless there is a local conflict)
As per Bug:3230663, in 10.2 the global conflict should be detectable directly (but the behaviour is still reproducible in 10.2.0.2 on Linux x86).
Script Principle
Display all sessions holding or requesting a lock on a resource that some session is waiting for. A waiting session has a non-zero value in the GV$LOCK.REQUEST column. The resource is identified by the TYPE, ID1 and ID2 columns of the GV$LOCK view.
We cannot use GV$LOCK.BLOCK in RAC as it always contains value 2 ("potential locking conflict") unless a local conflict is detected.
Finding root blocker
Run the query provided in the Script section and do one of the following.
(1) Find and kill the root blockers
a) - Find the oldest resource request - the row with the highest CTIME (this is row L1)
b) - Does another row exist with the same SID as L1? (this is row L2, if it exists)
NO - the session in L1 is the root blocker; kill it
YES - find the row with the same values in the ID1, ID2 columns as in L2 where LMODE > 0 (this is row L3)
- Repeat (b) with L3 (L3 becomes L1) until you find the root blocker
(2) Or use a simple rule (may not be the best)
a) Kill the oldest blocking session (highest CTIME)
b) Run the script again and repeat (a) until no blocking session remains
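On 10g and later a quicker cross-check is available directly from GV$SESSION
(a hedged sketch; the BLOCKING_SESSION column appears in 10.1 and
BLOCKING_INSTANCE in 10.2):
select inst_id, sid, blocking_instance, blocking_session
from gv$session
where blocking_session is not null;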
Detecting Object of Locking Conflict
Typically, problematic conflicts happen with "DML Locks" (transaction - TX and table - TM lock types) and sometimes it is important to find out the subject of the conflict (e.g. to fix an application design error and prevent the issue).
The object name for a TM lock can be easily identified, as V$LOCK.ID1 matches DBA_OBJECTS.OBJECT_ID.
select OBJECT_ID, OWNER,OBJECT_NAME from dba_objects o, V$LOCK l
where l.SID=&sid and l.ID1=o.OBJECT_ID;
In contrast, there is no easy way (via a select) to find out which row (TX) the session is waiting for, i.e. to match a TX request to a TM lock (table name). This is due to the locking implementation - a pointer in the database block points to the assigned slot in the undo segment (transaction id - XID). It is possible to guess the table name using the similarity of CTIME column values, e.g. a session has been waiting on a TX lock for 100s and a TM lock on table A was placed 100s ago (this is just a guess, as the TM lock could have been acquired for an earlier update in the same transaction).
Note: For simplicity GV$LOCK is referred to as a view, but it is actually a synonym for the view GV_$LOCK (the same applies to V$LOCK).
Software Requirements/Prerequisites
SQL*Plus
Configuring the Script
There are no steps required.
Running the Script
Connect as a user able to select from GV$LOCK (typically a user having the DBA role).
sqlplus '/ as sysdba'
@
Caution
This script is provided for educational purposes only and is not supported by Oracle Support Services. It has been tested internally, however, and works as documented. We do not guarantee that it will work for you, so be sure to test it in your environment before relying on it.
Proofread this script before using it! Due to the differences in the way text editors, e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns), this script may not be in an executable state when you first receive it. Check over the script to ensure that errors of this type are corrected.
Script
-- Author: daniel.x.simon@oracle.com
prompt CTIME is in Seconds
set lines 120
col BLOCK for 9
col LMODE for 9
col INST_ID for 9
col REQUEST for 9
col SID for 999999
select INST_ID, SID, TYPE, ID1, ID2, LMODE, REQUEST, CTIME, BLOCK
from gv$lock where (ID1,ID2,TYPE) in
(select ID1,ID2,TYPE from gv$lock where request>0);
Script Output
- example output showing two independent root conflicts
- note that the oldest conflict (sid 64 vs 38) is not the root, as session 38 is itself blocked by session 67
INST_ID SID TYPE ID1 ID2 LMODE REQUEST CTIME BLOCK
---------- -------- ---------- --------- ------- ---------- ---------- ---------- ----------
2 212 TX 1114116 1399221 6 0 308 2
1 248 TX 1114116 1399221 0 6 304 0
4 67 TX 6225949 1244199 6 0 26 2
1 38 TX 6225949 1244199 0 6 23 0
2 64 TX 131103 2270514 0 6 117 0
1 38 TX 131103 2270514 6 0 171 2
References
Keywords: 'RAC' 'BLOCKING~LOCKS' 'ENQUEUE' 'V$LOCK'
Thursday, April 10, 2008
Advanced Features of Oracle Net Services
Address List Parameter LOAD_BALANCE (Client Load Balancing):
When set to on, it instructs Oracle Net to progress through the list of protocol addresses in a random sequence, balancing the load on the various listeners. When set to off, it instructs Oracle Net to try the addresses sequentially until one succeeds.
This is different from the runtime connection load balancing feature, which is based on load. (This feature is automatically enabled in a shared server configuration and you must use remote_listener to enable it in a dedicated server configuration.)
Transparent Application Failover (TAF), which is also called runtime failover, is controlled by the parameter failover_mode, and connect-time failover is controlled by the parameter failover.
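A minimal tnsnames.ora sketch illustrating these parameters (the alias, host
names and service name are placeholders, not from the original text):
sales =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = on)
      (FAILOVER = on)
      (ADDRESS = (PROTOCOL = tcp)(HOST = host1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = tcp)(HOST = host2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = sales.example.com)
      (FAILOVER_MODE = (TYPE = select)(METHOD = basic))
    )
  )
With LOAD_BALANCE=on the two addresses are tried in random order; the
FAILOVER_MODE clause under CONNECT_DATA is what enables TAF.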
Thursday, March 27, 2008
aix/redhat linux 32/64bit
aix
bootinfo -y    (hardware)
bootinfo -K    (kernel)
redhat linux
uname -a
Linux DellServ.redhat.com.cn 2.6.9-22.ELsmp #1 Fri Feb 24 16:44:51 EST 2006 i686 i686 i386 GNU/Linux
This shows the version of the running kernel and the platform; in the example above the running kernel is 32-bit (i686 i686 i386 indicates 32-bit, x86_64 indicates 64-bit).
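As an additional hedged check on Linux, getconf reports the word size the C
library was built for (which may differ from the kernel's):
getconf LONG_BIT
The command prints 32 or 64.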
Monday, March 17, 2008
materialized view(mview) and vpd
1. When the base table has a VPD policy (one that does not return null), the mview cannot be created or refreshed.
2. However, a VPD policy on the base table (one that does not return null) can coexist with the mview.
3. For the relationship between query rewrite and VPD, and the concrete way to work around item 2 (a user with the EXEMPT ACCESS POLICY privilege), refer to:
Oracle® Database Data Warehousing Guide
10g Release 2 (10.2)
Part Number B14223-02
9 Advanced Materialized Views
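A minimal sketch of the privilege mentioned in item 3 (hedged: EXEMPT ACCESS
POLICY bypasses ALL VPD policies for the grantee, so grant it only to a
dedicated account; mview_owner is a hypothetical schema used to create and
refresh the mview):
GRANT EXEMPT ACCESS POLICY TO mview_owner;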
Monday, February 25, 2008
How to Identify Resource Intensive SQL for Tuning
Subject: How to Identify Resource Intensive SQL for Tuning
Doc ID: Note:232443.1 Type: TROUBLESHOOTING
Last Revision Date: 20-AUG-2006 Status: PUBLISHED
PURPOSE
This article provides guidance on how to identify the most resource intensive SQL statements in a database for tuning purposes.
SCOPE & APPLICATION
The techniques described here can be used when initial diagnosis of a database performance problem suggests that further investigation needs to concentrate on finding and tuning the most resource intensive SQL statements according to specific criteria, e.g.
using the most CPU
performing the most disk I/O operations
having the most executions
taking the longest time to execute (elapsed time)
The article should be of use to Database Administrators, Support Engineers, Consultants and Database Performance Analysts.
WHEN TO LOOK FOR RESOURCE INTENSIVE SQL STATEMENTS
In this section we discuss briefly the reasons that would lead us to look for the most resource intensive SQL statements while investigating a database performance problem.
Response Time Analysis shows that the heaviest time consumer is CPU-related (e.g. CPU Other or CPU Parse time) or an I/O-related Wait Event (e.g. db file sequential read or db file scattered read):
This is the method followed by tools such as Statspack.
Extensive information is gathered from the database which shows in detail in what type of activities time is being spent.
It is based on the following equation:
Response Time = Service Time + Wait Time
where Service Time is time spent on the CPU
and Wait Time is the sum of time spent on Wait Events i.e. non-idle time spent waiting for an event to complete or for a resource to become available.
Service Time is comprised of time spent on the CPU for Parsing, Recursive CPU usage (for PLSQL and recursive SQL) and CPU used for execution of SQL statements (CPU Other).
Service Time = CPU Parse + CPU Recursive + CPU Other
The above components of Service Time can be found from the following statistics:
Service Time from CPU used by this session
CPU Parse from parse time cpu
CPU Recursive from recursive cpu usage
From these, CPU Other can be calculated as follows:
CPU other = CPU used by this session - parse time cpu - recursive cpu usage
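These figures can also be pulled directly from V$SYSSTAT outside of Statspack;
a minimal sketch (statistic names as used above; on these releases the values
are in centiseconds):
SELECT name, value
FROM v$sysstat
WHERE name IN ('CPU used by this session',
               'parse time cpu',
               'recursive cpu usage');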
When CPU Other is a significant component of total Response Time the next step is to find the SQL statements that access the most blocks. Block accesses are also known as Buffer Gets and Logical I/Os.
Statspack lists such SQL statements in section SQL ordered by Gets.
When CPU Parse is a significant component of total Response Time the next step is to find the SQL statements that have the most parses.
Statspack lists such SQL statements in section SQL ordered by Parse Calls.
Note: this is available in Oracle9i and above.
Wait Time is the sum of time waited for non-idle Wait Events. These include I/O waits for reading blocks from disk as measured by the Wait Events db file sequential read for single-block reads and db file scattered read for multi-block reads.
When such Wait Events are found to be significant components of Response Time the next step is to find the SQL statements that read the most blocks from disk.
Statspack lists such SQL statements in section SQL ordered by Reads.
Example from Statspack: (pre-Oracle9i Release 2)
Here is an example where CPU Other was found to be a significant component of total Response Time:
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
direct path read 4,232 10,827 52.01
db file scattered read 6,105 6,264 30.09
direct path write 1,992 3,268 15.70
control file parallel write 893 198 .95
db file parallel write 40 131 .63
-------------------------------------------------------------
Statistic Total per Second per Trans
--------------------------------- ---------------- ------------ ------------
CPU used by this session 358,806 130.5 12,372.6
parse time cpu 38 0.0 1.3
recursive cpu usage 186,636 67.9 6,435.7
From these figures we can obtain:
Wait Time = 10,827 x 100% / 52.01% = 20,817 cs
Service Time = 358,806 cs
Response Time = 358,806 + 20,817 = 379,623 cs
CPU Other = 358,806 - 38 - 186,636 = 172,132 cs
If we now calculate percentages for the top Response Time components:
CPU Other = 45.34%
CPU Recursive = 49.16%
direct path read = 2.85%
etc. etc.
CPU Other is a significant component of Response Time, so a possible next step is to look at the SQL ordered by Gets section.
Example from Statspack: (Oracle9i Release 2 & above)
Starting with Oracle9i Release 2, Statspack presents Service Time (obtained from the statistic CPU used by this session ) together with the top Wait Events in a section called Top 5 Timed Events, which replaces the section Top 5 Wait Events of previous releases.
Here is an example:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
library cache lock 141 424 76.52
db file scattered read 3,367 96 17.40
CPU time 32 5.79
db file sequential read 161 1 .18
control file parallel write 245 0 .05
-------------------------------------------------------------
Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
CPU used by this session 3,211 4.3 1,605.5
parse time cpu 59 0.1 29.5
recursive cpu usage 232 0.3 116.0
These figures give us directly the percentages of the Wait Events against the total Response Time so no further calculations are necessary to assess the impact of Wait Events. Service Time is presented as CPU time in this section and corresponds to the total CPU utilisation. We can drill down to the various components of Service Time as follows:
CPU Other = 3,211 - 59 - 232 = 2,920 cs
CPU Other = 2,920 / 3,211 x 5.79% = 5.26%
CPU Parse = 59 / 3,211 x 5.79% = 0.11%
CPU Recursive = 232 / 3,211 x 5.79% = 0.42%
In this example, the main performance problem was an issue related to the Library Cache.
The second most important time consumer was waiting for physical I/O due to multiblock reads (db file scattered read).
In this case a possible approach would be to look at the SQL ordered by Reads section of Statspack.
Operating System resource analysis shows that excessive CPU or Disk I/O usage is caused by one or more Oracle processes:
If Operating System utilities show that most CPU usage is due to a small number of Oracle processes then typically they will be SQL Traced and have TKPROF reports generated from their tracefiles. Analysis of the TKPROF reports will lead to the most time consuming SQL statements which will be the subject of subsequent tuning.
If CPU usage is excessive but spread out fairly evenly among all the Oracle processes on the system then typically a tool such as Statspack will be used to perform Response Time analysis. The components of Service Time will be evaluated and if this shows CPU Other as being significant the next step will be to look at the SQL performing most block accesses in the SQL by Gets section of the Statspack report.
If Disk I/O is highly utilized on the system then a Response Time analysis using Statspack can be done to confirm that the Disk I/O usage is due to Oracle processes and I/O-related Wait Events can be expected to be significant components of overall Response Time. SQL statements performing the most physical reads can then be found in the section SQL ordered by Reads .
Investigation of the performance of a batch job or other session which performs a number of SQL statements and has slow performance:
This will be done with SQL Trace and TKPROF as described above and the most important SQL statements in terms of time spent will be identified for further tuning.
Examination of statistics stored for SQL statements in V$ dynamic views:
Part of the information stored with a SQL statement (Shared Cursor) in the Library Cache portion of the Shared Pool is a set of statistics related to its execution. These are available through the V$SQLAREA dynamic view and can be queried to monitor the most expensive SQL statements in each category.
This is the approach used by the graphical tool SQL Analyze in the Oracle Enterprise Manager Tuning Pack.
For more information on the topics discussed in this section please refer to:
Note 190124.1 THE COE PERFORMANCE METHOD
Note 228913.1 Systemwide Tuning using STATSPACK Reports
Note 223117.1 Tuning I/O-related waits
TOP SQL STATEMENTS IN STATSPACK REPORTS
Statspack reports generated from snapshots at level 5 (default) and above can contain SQL reporting sections for the following types of resource intensive SQL statements:
SQL ordered by Gets
containing the SQL statements with most buffer accesses.
The default threshold value is 10,000 buffer gets.
Columns displayed are:
Buffer Gets, Executions, Gets per Exec, % Total, Hash Value (8.1.7)
Buffer Gets, Executions, Gets per Exec, %Total, CPU Time (s), Elapsed Time (s), Hash Value (9.0.1 & 9.2)
SQL ordered by Reads
containing the SQL statements with most read I/Os from disk.
The default threshold value is 1,000 disk reads.
Columns displayed are:
Physical Reads, Executions, Reads per Exec, % Total, Hash Value (8.1.7)
Physical Reads, Executions, Reads per Exec, %Total, CPU Time (s), Elapsed Time (s), Hash Value (9.0.1 & 9.2)
SQL ordered by Executions
containing the SQL statements executed the most times.
The default threshold value is 100 executions.
Columns displayed are:
Executions, Rows Processed, Rows per Exec, Hash Value (8.1.7)
Executions, Rows Processed, Rows per Exec, CPU per Exec (s), Elapsed per Exec (s), Hash Value (9.0.1 & 9.2)
SQL ordered by Parse Calls (Oracle9i and above)
containing the SQL statements with most soft parse calls.
The default threshold value is 1,000 parse calls.
Columns displayed are:
Parse Calls, Executions, % Total Parses, Hash Value
SQL ordered by Sharable Memory
containing the SQL statements occupying the most Library Cache memory.
The default threshold value is 1Mb (1,048,576 bytes).
Columns displayed are:
Sharable Mem (b), Executions, % Total, Hash Value (8.1.7, 9.0.1 & 9.2)
SQL ordered by Version Count
containing the SQL statements with most versions (child cursors).
The default threshold value is 20 versions.
Columns displayed are:
Version Count, Executions, Hash Value (8.1.7, 9.0.1 & 9.2)
In the next few sections we look at examples of each type of Statspack SQL reporting section.
FINDING SQL WITH HIGH CPU OTHER TIME IN STATSPACK
Here are a few examples of SQL ordered by Gets sections from Statspack.
SQL ordered by Gets for DB: PROD Instance: prod Snaps: 8 -9
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100
Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
91,938,671 4,249 21,637.7 24.1 3503723562
SELECT "TEKSTI_IK", "U_VERSION", "ASIAKIR_IK", "KONTAKT_IK", "L
OMAKE_IK", "PVM", "TIEDON_LKA", "TEKSTI_VER", "TEKST_TILA", "VA
LK_AUKKO", "SUOR_PA_IK", "SUOR_PA_ID", "RESURSS_IK", "RESURSS_I
39,196,483 4,257 9,207.5 10.3 576408779
SELECT "KONTAKT_IK", "ASIAKAS_IK", "KAYNTIPVM", "KLO_JNRO", "KT
_PIKASEL", "RESURSS_ID", "SUOR_PA_IK", "SUOR_PA_ID", "TEKSTI_IK
", "KT_TEKSTI", "KONT_LAJI" FROM "TEI1000_VIEW" WHERE (kontakt_i
31,870,113 4,262 7,477.7 8.3 3583640853
SELECT "LAAKE_T_IK", "U_VERSION", "TAPAHTU_IK", "ASIAKAS_IK", "
TOT_DATIM", "LAAKE_IK", "KAUPPANIMI", "VALM_MUOTO", "VAHVUUS",
"PAKK_KOKO", "ANNOS", "INDIKAATIO", "PYSYVAIS", "VOIM_OLEVA", "S
30,567,449 4,259 7,177.1 8.0 1901268379
SELECT "LAB_TUL_IK", "U_VERSION", "TAPAHTU_IK", "ASIAKAS_IK", "
TOT_DATE", "TOT_TIME", "PALVELU_IK", "PALV_LYHEN", "SL_KOODI",
"OSAVAS_NRO", "OSAVAS_HOP", "VAST_TYYPP", "VAST_ARVO", "VAST_SIJ
Here the first SQL statement (with hash value 3503723562) alone accounts for 24.1% of all buffer gets in the instance.
The next 3 statements account for 10.3%, 8.3% and 8.0%.
All 4 statements are executed approximately the same number of times (around 4,250 executions).
The first statement has more total Gets because it fetches more buffers each time it runs (Gets/Exec is 21,637.7 compared to 9,207.5, 7,477.7 and 7,177.1).
So it is a first candidate for tuning as it has greater impact on CPU Other time than the other 3 statements.
A better execution plan for this statement resulting in fewer Gets/Exec will reduce its CPU consumption.
Here is another example:
Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
3,200,593 1 3,200,593.0 52.2 397365298
select c.contract_no||'/'||c.contract_version, c.owner_ag
ency_id, a.agency_name, TERRITORYPACK.getsalescont
act(a.agency_id,'SALES','N'), c.start_date, LEAST(
404,024 88,481 4.6 6.6 985453413
select cv_dist_flag from applicant
where applicant_id = :p1
178,600 3,028 59.0 2.9 3013728279
select privilege#,level from sysauth$ connect by grantee#=prior
privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
The first statement (hash value 397365298) generated 52.2% of the buffer gets in the instance with just 1 execution.
It has a high number of Gets/Exec 3,200,593.0 when compared to the others.
If this statement is not just a one-time query then it is a good candidate for tuning before it starts getting used more often.
The second statement (hash value 985453413) fetches on average less than 5 buffers per execution but appears high in the list because it is executed very frequently.
Initially it is not significant enough to warrant further investigation.
If after tuning the first statement, CPU Other is still a significant component of overall Response Time and a new Statspack report shows the second statement still high on the list, then it could be looked at more closely.
Here is a similar case, this time it is from Oracle9i and we can see the new CPU & Elapsed Time columns:
CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
16,177,286 1 16,177,286.0 12.1 209.00 297.91 342588312
select sf.satz_id, f1.feld, f2.feld feld20, substr(sf.fehler
,instr(sf.fehler,'geschrieben:')+13) feld30 --merkwürdigerweise
wird ab 31 Byte in eine neue Zeile geschrieben from d98_ditr_s
8,177,886 375,622 21.8 6.1 214.09 302.26 3544921891
SELECT /*+ RULE */ * from d98_schemaeintraege_view where pro
f_id = :b1 order by sortierung
The Statspack report does not always show the full text of the SQL statement. The Hash Value can be used to get this using the following query, provided the SQL statement is still in the Library Cache at the time the query is run:
SELECT sql_text
FROM v$sql_text
WHERE hash_value = '&hash_value_of_SQL'
ORDER BY piece;
FINDING SQL STATEMENTS WITH HIGH CPU PARSE TIME IN STATSPACK
If CPU Parse time is a significant component of Response Time, it can be because cursors are repeatedly opened and closed every time they are executed instead of being opened once, kept open for multiple executions and only closed when they are no longer required.
The SQL ordered by Parse Calls can help find such cursors, here is an example:
SQL ordered by Parse Calls for DB: DWI1 Instance: DWI1 Snaps: 1 -4
-> End Parse Calls Threshold: 1000
% Total
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
13,632,745 13,632,745 98.90 3980186470
SELECT distinct TS002.C_JOB_DEP, TS002.C_JOB FROM TS002_JO
B_DEP TS002, TS001_JOB TS001 WHERE TS001.C_JOB = TS002.C_JO
B_DEP AND TS002.C_JOB = :b1 AND TS001.C_TIP_JOB !=
11,701 27,255,840 0.08 3615375148
COMMIT
8,192 8,192 0.06 238087931
select t.schema, t.name, t.flags, q.name from system.aq$_queue_t
ables t, sys.aq$_queue_table_affinities aft, system.aq$_que
ues q where aft.table_objno = t.objno and aft.owner_instance = :
1 and q.table_objno = t.objno and q.usage = 0 and b
itand(t.flags, 4+16+32+64+128+256) = 0 for update of t.name, aft
8,192 8,192 0.06 2780709284
select q_name, state, delay, expiration, rowid, msgid, dequeue
_msgid, chain_no, local_order_no, enq_time, enq_tid, step_no,
priority, exception_qschema, exception_queue, retry_count, corri
d, time_manager_info, sender_name, sender_address, sender_prot
ocol from SYS.AQ_SRVNTFN_TABLE where time_manager_info <= :1
The first SQL statement (hash value 3980186470) has had the most parses issued against it (98.90% of all parses in the instance). It is parsed every time it is executed (Parse Calls = Executions). Due to its frequency it is a prime candidate for reducing parse calls as described above.
Note: in excessive parsing situations, it is likely that there will be symptoms such as latch free waits on the Library Cache latches and possibly the Shared Pool latch, in addition to CPU Parse time.
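Outside of Statspack, a hedged V$SQLAREA sketch for spotting statements that
are parsed on (nearly) every execution; the thresholds are arbitrary:
SELECT parse_calls, executions, hash_value,
       substr(sql_text,1,40) sql
FROM v$sqlarea
WHERE parse_calls > 1000
AND parse_calls >= executions
ORDER BY parse_calls DESC;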
FINDING SQL STATEMENTS WITH HIGH DISK I/O WAITS IN STATSPACK
Identifying the SQL statements responsible for most physical reads from the Statspack section SQL ordered by Reads follows similar concepts to SQL ordered by Gets.
% Total can be used to evaluate the impact of each statement.
Reads per Exec together with Executions can be used as a hint of whether the statement has a suboptimal execution plan causing many physical reads or if it is there simply because it is executed often.
Possible reasons for a high Reads per Exec are: use of unselective indexes requiring large numbers of blocks to be fetched where such blocks are not cached well in the buffer cache, index fragmentation, a large index Clustering Factor, etc.
Here is an example:
SQL ordered by Reads for DB: PROD Instance: prod Snaps: 14 -16
-> End Disk Reads Threshold: 1000
Physical Reads Executions Reads per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
3,633,017 48 75,687.9 28.0 3954888792
SELECT "VAR_RES_IK", "RESURSS_IK", "PVM", "ALKAEN_KLO", "PAATT_
KLO", "AJ_STATUS", "ILMOITT", "HETU", "AS_NIMI", "KASITELTY", "T
OIMIPI_IK", "SUOR_PA_IK", "SUOR_PA_ID", "AIKATYY_IK", "AIKATYY_I
1,511,867 26 58,148.7 11.6 394819206
SELECT "VAR_RES_IK", "RESURSS_IK", "PVM", "ALKAEN_KLO", "PAATT_
KLO", "AJ_STATUS", "ILMOITT", "HETU", "AS_NIMI", "KASITELTY", "T
OIMIPI_IK", "SUOR_PA_IK", "SUOR_PA_ID", "AIKATYY_IK", "AIKATYY_I
762,101 6 127,016.8 5.9 4274178025
SELECT "LAB_TUL_IK", "PALV_LYHEN", "PALV_TILA", "OSAVAS_HOP", "
VAST_ARVO", "VAST_SIJ", "MITTAYKS", "POIKKEAVA", "PALVELU_IK", "
ASIAKAS_IK", "VERIRYHMA", "VV_MAARPVM", "TOT_DATE", "TOT_TIME",
512,142 3 170,714.0 3.9 1591034069
SELECT "LAB_TUL_IK", "PALV_LYHEN", "PALV_TILA", "OSAVAS_HOP", "
VAST_ARVO", "VAST_SIJ", "MITTAYKS", "POIKKEAVA", "PALVELU_IK", "
ASIAKAS_IK", "VERIRYHMA", "VV_MAARPVM", "TOT_DATE", "TOT_TIME",
The first two SQL statements are both executed more often than the others and cause more blocks to be read in from disk each time.
Together they account for almost 40% of read I/O. They both are prime candidates for further SQL tuning.
EVALUATING SQL STATEMENTS WITH MOST EXECUTIONS IN STATSPACK
Identifying those SQL statements that execute most often in a database and tuning them can improve performance even when such statements do not consume many resources in each execution.
This is because of two reasons:
The overall resource consumption of these statements across all their executions may be significant.
Such frequently executed statements are often part of OLTP-style short transactions. Tuning them can improve the performance of the database as experienced by users entering such transactions into the system.
Here is an example of Statspack SQL ordered by Executions:
SQL ordered by Executions for DB: DIN Instance: DIN Snaps: 263 -264
-> End Executions Threshold: 100
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
------------ --------------- ---------------- ----------- ---------- ----------
404,871 133,781 0.3 0.00 0.00 3592950473
SELECT nvl(NVL(EIGTWX.WERT,EIGTW.WERT),eig.wert) wert,
1 sortier FROM D8_EIGENTYPWERTE_TEXT EIGTWX ,D98_
EIGENTYPWERTE EIGTW ,D98_EIGENSCHAFTEN EIG WHERE EIG.ANH
324,014 324,014 1.0 0.00 0.00 293077222
SELECT /*+ INDEX (D98_SCHEMAFORMATE SCHFMT_FORM_FKT_UNTER) */
upper(funktionsname), unterformat FROM
D98_SCHEMAFORMATE WHERE formatierung = :b1
183,276 183,276 1.0 0.00 0.00 66213032
INSERT INTO D98_Batch_Variablenwerte (ausf_id, b
atch_id, var_nr, wert) VALUES (:b4,
:b3, :b2, :b1)
114,224 8,936 0.1 0.00 0.00 1604001664
SELECT termin_ist FROM d98_termine WHERE a
nhang_id=:b2 AND terminart = :b1 AND aktiv = 'J'
order by termin_ist desc
It will frequently be the case that the timing columns in this section will show 0.00 for CPU and Elapsed time, as the most frequently executing SQL statements are likely to be quite fast.
FINDING SQL STATEMENTS WITH HIGH SHARED POOL CONSUMPTION IN STATSPACK
This can help with Shared Pool and Library Cache/Shared Pool latch tuning.
Statements with many versions (multiple child cursors with the same parent cursor i.e. identical SQL text but different properties such as owning schema of objects, optimizer session settings, types & lengths of bind variables etc.) are unsharable.
This means they can consume excessive memory resources in the Shared Pool and cause performance problems related to parsing e.g. Library Cache and Shared Pool latch contention or lookup time e.g. Library Cache latch contention.
Statspack has 2 sections to help find such unsharable statements, SQL ordered by Sharable Memory and SQL ordered by Version Count:
SQL ordered by Sharable Memory for DB: DIN Instance: DIN Snaps: 263 -264
-> End Sharable Memory Threshold: 1048576
Sharable Mem (b) Executions % Total Hash Value
---------------- ------------ ------- ------------
3,445,680 1 0.1 2317124142
select /*+rule*/ decode(UPPER(:P_TITEL_SORT), 'NAME', fin_sort_ansp_name,
'FIRMA', fin_sort_ans_firma, 'KURZBEZ. FIRMA', fin_sort_ans_name,
3,386,429 76 0.1 3470710979
SELECT ID,ANHANGSTYP,OBJEKT_ID,PROF_ID,ANHANGSART,SPRACHE,AKTIV,
ANFANG,ENDE,SEITEN,ARCH_BEMERKUNG,DESKRIPTOR,EIGTW_ID,EIG_WERT,T
2,836,746 447 0.1 2274525714
SELECT ID,ANHANGSTYP,OBJEKT_ID,PROF_ID,ANHANGSART,SPRACHE,AKTIV,
ANFANG,ENDE,SEITEN,ARCH_BEMERKUNG,DESKRIPTOR,EIGTW_ID,EIG_WERT,T
SQL ordered by Version Count for DB: P97 Instance: P97 Snaps: 177 -180
-> End Version Count Threshold: 20
Version
Count Executions Hash Value
-------- ------------ ------------
26 36,228 3957083415
SELECT 1 FROM NK245_RECHTE A WHERE (NK245_NK211_ID BETWEEN :b
1 AND :b2 ) AND NK245_GEHEIMSTUFE >= :b3 AND NK245_NK209_ID = :
b4 AND NK245_NKXX_ID = :b5 AND NK245_ANZAHL > 0 AND (:b6 BETW
25 606 2916558383
UPDATE KVS.NK21_DOKUMENTE SET NK21_NK41_PDA_KUERZEL=:b1,NK21_NK4
1_PDA_NAME=:b2,NK21_STR_LEVEL=:b3,NK21_STR_NR=:b4,NK21_STR_TYP=:
b5,NK21_STRL1_NR=:b6,NK21_STRL1_TYP=:b7,NK21_STRL2_NR=:b8,NK21_S
24 1,602 1157590177
INSERT INTO NK297_NACHRICHTEN ( NK297_ID,NK297_TIMESTAMP,NK297_T
IMESTAMP_E,NK297_NK210_ABSENDER,NK297_NK270_USERNAME_ABS,NK297_T
ITEL,NK297_NTEXT,NK297_NK248_ID,NK297_NK244_ID,NK297_NK213_ID,NK
For more information on tuning the Shared Pool please refer to the article:
Note 62143.1 Understanding and Tuning the Shared Pool
TOP SQL STATEMENTS IN TKPROF REPORTS
TKPROF is a tool for producing formatted reports from SQL Trace files (also known as event 10046 tracefiles).
Each SQL (and PL/SQL) statement appearing in the tracefile has its information summarized and collected in one place in the TKPROF report.
This information includes: number of parse, execution & fetch operations, total cpu & elapsed times, buffers read in consistent (query) and current mode, blocks read from disk and row counts, Row Source operations, execution plans, library cache misses, parsing user id & optimizer mode and with TKPROF version 9.0.1 and above, summary of wait events for tracefiles generated with event 10046 at levels 8 or 12.
A powerful feature available in TKPROF is the sort option which allows for ordering the SQL statement in the report according to a number of criteria.
This enables the easy identification of the most resource-intensive SQL statements and helps target efficiently the SQL tuning process.
Here are the options available for sorting; they may be combined:
prscnt number of times parse was called
prscpu cpu time parsing
prsela elapsed time parsing
prsdsk number of disk reads during parse
prsqry number of buffers for consistent read during parse
prscu number of buffers for current read during parse
prsmis number of misses in library cache during parse
execnt number of times execute was called
execpu cpu time spent executing
exeela elapsed time executing
exedsk number of disk reads during execute
exeqry number of buffers for consistent read during execute
execu number of buffers for current read during execute
exerow number of rows processed during execute
exemis number of library cache misses during execute
fchcnt number of times fetch was called
fchcpu cpu time spent fetching
fchela elapsed time fetching
fchdsk number of disk reads during fetch
fchqry number of buffers for consistent read during fetch
fchcu number of buffers for current read during fetch
fchrow number of rows fetched
The most commonly used sort options are the ones ordering according to elapsed times consumed for the execute and fetch phases: we are usually interested in tuning statements that take more time to run as time is the most important property in the tuning process.
exeela elapsed time executing
fchela elapsed time fetching
It should be clear, however, that it is just as simple to order the SQL to find which statements cause the most disk I/O during fetch
fchdsk number of disk reads during fetch
or according to which ones access the most buffers in consistent mode e.g. for queries this would be in the fetch phase
fchqry number of buffers for consistent read during fetch
or for DML it would be in the execute phase
exeqry number of buffers for consistent read during execute
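As a hedged example invocation combining these sort options (the trace and
report file names are placeholders):
tkprof ora_12345.trc report.prf sort=(exeela,fchela) sys=no
Here sys=no excludes recursive SYS statements so that the application SQL
stands out in the report.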
For more information on working with TKPROF please refer to the article:
Note 32951.1 Tkprof Interpretation
TOP SQL STATEMENTS IN V$SQLAREA & V$SQL
The Oracle Server provides 3 dynamic views for querying execution statistics of all SQL statements currently cached in the Library Cache of the Shared Pool.
They are V$SQL, V$SQLAREA and V$SQLXS.
V$SQL has 1 row for each different version of a SQL statement.
This means that each child cursor has its own execution statistics.
V$SQLAREA has 1 row for each different SQL string i.e. each parent cursor.
This means that the statistics for all child cursors i.e. different versions of this cursor are grouped together. It is not based on V$SQL.
V$SQLXS is a simpler version of V$SQLAREA.
It is used mainly by Statspack for generating the SQL reporting sections.
It queries V$SQL with a GROUP BY.
It is defined in $ORACLE_HOME/rdbms/admin/catsnmp.sql.
V$SQLAREA or V$SQLXS can be used most often to find the top few SQL statements for a specific category of statistic.
Once these are identified, V$SQL can be used to drill down to see whether different versions of each statement exhibit similar statistics or whether some particular versions stand out.
V$SQL is less resource intensive than V$SQLAREA as it avoids the GROUP BY operation and causes less Library Cache latch contention.
Here is a general form of a query on any of these views:
SELECT * FROM
(SELECT hash_value,address,substr(sql_text,1,40) sql,
[list of columns], [list of derived values]
FROM [V$SQL or V$SQLXS or V$SQLAREA]
WHERE [list of threshold conditions for columns]
ORDER BY [list of ordering columns] DESC)
WHERE rownum <= [number of top SQL statements];
Here is an example:
SELECT * FROM
(SELECT hash_value,address,substr(sql_text,1,40) sql,
buffer_gets, executions, buffer_gets/executions "Gets/Exec"
FROM V$SQLAREA
WHERE buffer_gets > 100000 AND executions > 10
ORDER BY buffer_gets DESC)
WHERE rownum <= 10;
[list of columns] = buffer_gets, executions
[list of derived values] = buffer_gets/executions
[list of threshold conditions for columns] = buffer_gets > 100000, executions > 10
[list of ordering columns] = buffer_gets
[number of top SQL statements] = 10
The following article contains a selection of ready to run Top-10 queries:
Note 235146.1 Example "Top SQL" queries from V$SQLAREA
A list of columns on which to base similar queries are:
buffer_gets
disk_reads
executions
rows_processed
sorts
parse_calls
sharable_mem
version_count
invalidations
All of these are available in all three views V$SQL, V$SQLAREA & V$SQLXS.
There are a number of other columns for queries of this kind, not so frequently used, which can be found by inspecting V$SQL and V$SQLAREA.
Note 43761.1 VIEW: "V$SQLAREA" Reference Note
Note 43762.1 VIEW: "V$SQL" Reference Note
TOP SQL STATEMENTS IN ENTERPRISE MANAGER SQL ANALYZE
SQL Analyze is a GUI tool available with the Oracle Enterprise Manager Tuning Pack that can be used to identify and help tune resource intensive SQL statements.
The queries for the heaviest SQL statements are done from the Top SQL menu and a variety of search criteria are available:
Buffer Cache Hit Ratio
Buffer Gets
Buffer Gets per Execution
Buffer Gets per Row
Disk Reads
Disk Reads per Execution
Executions
Parse Calls
Parse Calls per Execution
Rows Processed
Sharable Memory
Sorts
The search can be tailored to include Top N results or All statements and can also filter out Recursive SQL.
For more information please refer to the Online Help available with the tool or the following manual:
Oracle Enterprise Manager Database Tuning with the Oracle Tuning Pack Release 9.0.1 June 2001 Part No. A86647-01
RELATED DOCUMENTS
Note 190124.1 THE COE PERFORMANCE METHOD
Note 228913.1 Systemwide Tuning using STATSPACK Reports
Note 223117.1 Tuning I/O-related waits
Note 62143.1 Understanding and Tuning the Shared Pool
Note 43761.1 VIEW: "V$SQLAREA" Reference Note
Note 43762.1 VIEW: "V$SQL" Reference Note
Note 235146.1 Example "Top SQL" queries from V$SQLAREA
Note 32951.1 Tkprof Interpretation
Doc ID: Note:232443.1 Type: TROUBLESHOOTING
Last Revision Date: 20-AUG-2006 Status: PUBLISHED
PURPOSE
This article provides guidance on how to identify the most resource intensive SQL statements in a database for tuning purposes.
SCOPE & APPLICATION
The techniques described here can be used when initial diagnosis of a database performance problem suggests that further investigation needs to concentrate on finding and tuning the most resource intensive SQL statements according to specific criteria, e.g.
using the most CPU
performing the most disk I/O operations
having the most executions
taking the longest time to execute (elapsed time)
The article should be of use to Database Administrators, Support Engineers, Consultants and Database Performance Analysts.
WHEN TO LOOK FOR RESOURCE INTENSIVE SQL STATEMENTS
In this section we discuss briefly the reasons that would lead us to look for the most resource intensive SQL statements while investigating a database performance problem.
Response Time Analysis shows that heaviest time consumer is CPU-related e.g. CPU Other or CPU Parse time or an I/O-related Wait Event e.g. db file sequential read or db file scattered read:
This is the method followed by tools such as Statspack.
Extensive information is gathered from the database which shows in detail in what type of activities time is being spent.
It is based on the following equation:
Response Time = Service Time + Wait Time
where Service Time is time spent on the CPU
and Wait Time is the sum of time spent on Wait Events i.e. non-idle time spent waiting for an event to complete or for a resource to become available.
Service Time is comprised of time spent on the CPU for Parsing, Recursive CPU usage (for PLSQL and recursive SQL) and CPU used for execution of SQL statements (CPU Other).
Service Time = CPU Parse + CPU Recursive + CPU Other
The above components of Service Time can be found from the following statistics:
Service Time from CPU used by this session
CPU Parse from parse time cpu
CPU Recursive from recursive cpu usage
From these, CPU Other can be calculated as follows:
CPU other = CPU used by this session - parse time cpu - recursive cpu usage
When CPU Other is a significant component of total Response Time the next step is to find the SQL statements that access the most blocks. Block accesses are also known as Buffer Gets and Logical I/Os.
Statspack lists such SQL statements in section SQL ordered by Gets.
When CPU Parse is a significant component of total Response Time the next step is to find the SQL statements that have the most parses.
Statspack lists such SQL statements in section SQL ordered by Parse Calls.
Note: this is available in Oracle9i and above.
Wait Time is the sum of time waited for non-idle Wait Events. These include I/O waits for reading blocks from disk as measured by the Wait Events db file sequential read for single-block reads and db file scattered read for multi-block reads.
When such Wait Events are found to be significant components of Response Time the next step is to find the SQL statements that read the most blocks from disk.
Statspack lists such SQL statements in section SQL ordered by Reads.
Example from Statspack: (pre-Oracle9i Release 2)
Here is an example where CPU Other was found to be a significant component of total Response Time:
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
direct path read 4,232 10,827 52.01
db file scattered read 6,105 6,264 30.09
direct path write 1,992 3,268 15.70
control file parallel write 893 198 .95
db file parallel write 40 131 .63
-------------------------------------------------------------
Statistic Total per Second per Trans
--------------------------------- ---------------- ------------ ------------
CPU used by this session 358,806 130.5 12,372.6
parse time cpu 38 0.0 1.3
recursive cpu usage 186,636 67.9 6,435.7
From these figures we can obtain:
Wait Time = 10,827 x 100% / 52,01% = 20,817 cs
Service Time = 358,806 cs
Response Time = 358,806 + 20,817 = 379,623 cs
CPU Other = 358,806 - 38 - 186,636 = 172,132 cs
If we now calculate percentages for the top Response Time components:
CPU Other = 45.34%
CPU Recursive = 49.16%
direct path read = 2.85%
etc. etc.
CPU Other is a significant component of Response Time, so a possible next step is to look at the SQL ordered by Gets section.
Example from Statspack: (Oracle9i Release 2 & above)
Starting with Oracle9i Release 2, Statspack presents Service Time (obtained from the statistic CPU used by this session ) together with the top Wait Events in a section called Top 5 Timed Events, which replaces the section Top 5 Wait Events of previous releases.
Here is an example:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
library cache lock 141 424 76.52
db file scattered read 3,367 96 17.40
CPU time 32 5.79
db file sequential read 161 1 .18
control file parallel write 245 0 .05
-------------------------------------------------------------
Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
CPU used by this session 3,211 4.3 1,605.5
parse time cpu 59 0.1 29.5
recursive cpu usage 232 0.3 116.0
These figures give us directly the percentages of the Wait Events against the total Response Time so no further calculations are necessary to assess the impact of Wait Events. Service Time is presented as CPU time in this section and corresponds to the total CPU utilisation. We can drill down to the various components of Service Time as follows:
CPU Other = 3,211 - 59 - 232 = 2,920 cs
CPU Other = 2,920 / 3,211 x 5.79% = 5.26%
CPU Parse = 59 / 3,211 x 5.79% = 0.11%
CPU Recursive = 232 / 3,211 x 5.79% = 0.42%
In this example, the main performance problem was an issue related to the Library Cache.
The second most important time consumer was waiting for physical I/O due to multiblock reads (db file scattered read).
In this case a possible approach would be to look at the SQL ordered by Reads section of Statspack.
Operating System resource analysis shows that excessive CPU or Disk I/O usage is caused by one or more Oracle processes:
If Operating System utilities show that most CPU usage is due to a small number of Oracle processes then typically they will be SQL Traced and have TKPROF reports generated from their tracefiles. Analysis of the TKPROF reports will lead to the most time consuming SQL statements which will be the subject of subsequent tuning.
If CPU usage is excessive but spread out fairly evenly among all the Oracle processes on the system then typically a tool such as Statspack will be used to perform Response Time analysis. The components of Service Time will be evaluated and if this shows CPU Other as being significant the next step will be to look at the SQL performing most block accesses in the SQL by Gets section of the Statspack report.
If Disk I/O is highly utilized on the system then a Response Time analysis using Statspack can be done to confirm that the Disk I/O usage is due to Oracle processes and I/O-related Wait Events can be expected to be significant components of overall Response Time. SQL statements performing the most physical reads can then be found in the section SQL ordered by Reads .
Investigation of the performance of a batch job or other session which performs a number of SQL statements and has slow performance:
This will be done with SQL Trace and TKPROF as descrived above and the most important SQL statements in terms of time spent will be identified for further tuning.
Examination of statistics stored for SQL statements in V$ dynamic views:
Part of the information stored with a SQL statement (Shared Cursor) in the Library Cache portion of the Shared Pool are a number of statistics related to its execution. These are available through the V$SQLAREA dynamic view and can be queried to monitor the most expensive SQL statements in each category.
This is the approach used by the graphical tool SQL Analyze in the Oracle Enterprise Manager Tuning Pack.
For more information on the topics discussed in this section please refer to:
Note 190124.1 THE COE PERFORMANCE METHOD
Note 228913.1 Systemwide Tuning using STATSPACK Reports:
Note 223117.1 Tuning I/O-related waits
TOP SQL STATEMENTS IN STATSPACK REPORTS
Statspack reports generated from snapshots at level 5 (default) and above can contain SQL reporting sections for the following types of resource intensive SQL statements:
SQL ordered by Gets
containing the SQL statements with most buffer accesses.
The default threshold value is 10,000 buffer gets.
Columns displayed are:
Buffer Gets, Executions, Gets per Exec, % Total, Hash Value (8.1.7)
Buffer Gets, Executions, Gets per Exec, %Total, CPU Time (s), Elapsed Time (s), Hash Value (9.0.1 & 9.2)
SQL ordered by Reads
containing the SQL statements with most read I/Os from disk.
The default threshold value is 1,000 disk reads.
Columns displayed are:
Physical Reads, Executions, Reads per Exec, % Total, Hash Value (8.1.7)
Physical Reads, Executions, Reads per Exec, %Total, CPU Time (s), Elapsed Time (s), Hash Value (9.0.1 & 9.2)
SQL ordered by Executions
containing the SQL statements executed the most times.
The default threshold value is 100 executions.
Columns displayed are:
Executions, Rows Processed, Rows per Exec, Hash Value (8.1.7)
Executions, Rows Processed, Rows per Exec, CPU per Exec (s), Elapsed per Exec (s), Hash Value (9.0.1 & 9.2)
SQL ordered by Parse Calls (Oracle9i and above)
containing the SQL statements with most soft parse calls.
The default threshold value is 1,000 parse calls.
Columns displayed are:
Parse Calls, Executions, % Total Parses, Hash Value
SQL ordered by Sharable Memory
containing the SQL statements occupying the most Library Cache memory.
The default threshold value is 1 MB (1,048,576 bytes).
Columns displayed are:
Sharable Mem (b), Executions, % Total, Hash Value (8.1.7, 9.0.1 & 9.2)
SQL ordered by Version Count
containing the SQL statements with most versions (children cursors).
The default threshold value is 20 versions.
Columns displayed are:
Version Count, Executions, Hash Value (8.1.7, 9.0.1 & 9.2)
In the next few sections we look at examples of each type of Statspack SQL reporting section.
FINDING SQL WITH HIGH CPU OTHER TIME IN STATSPACK
Here are a few examples of SQL ordered by Gets sections from Statspack.
SQL ordered by Gets for DB: PROD Instance: prod Snaps: 8 -9
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100
Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
91,938,671 4,249 21,637.7 24.1 3503723562
SELECT "TEKSTI_IK", "U_VERSION", "ASIAKIR_IK", "KONTAKT_IK", "L
OMAKE_IK", "PVM", "TIEDON_LKA", "TEKSTI_VER", "TEKST_TILA", "VA
LK_AUKKO", "SUOR_PA_IK", "SUOR_PA_ID", "RESURSS_IK", "RESURSS_I
39,196,483 4,257 9,207.5 10.3 576408779
SELECT "KONTAKT_IK", "ASIAKAS_IK", "KAYNTIPVM", "KLO_JNRO", "KT
_PIKASEL", "RESURSS_ID", "SUOR_PA_IK", "SUOR_PA_ID", "TEKSTI_IK
", "KT_TEKSTI", "KONT_LAJI" FROM "TEI1000_VIEW" WHERE (kontakt_i
31,870,113 4,262 7,477.7 8.3 3583640853
SELECT "LAAKE_T_IK", "U_VERSION", "TAPAHTU_IK", "ASIAKAS_IK", "
TOT_DATIM", "LAAKE_IK", "KAUPPANIMI", "VALM_MUOTO", "VAHVUUS",
"PAKK_KOKO", "ANNOS", "INDIKAATIO", "PYSYVAIS", "VOIM_OLEVA", "S
30,567,449 4,259 7,177.1 8.0 1901268379
SELECT "LAB_TUL_IK", "U_VERSION", "TAPAHTU_IK", "ASIAKAS_IK", "
TOT_DATE", "TOT_TIME", "PALVELU_IK", "PALV_LYHEN", "SL_KOODI",
"OSAVAS_NRO", "OSAVAS_HOP", "VAST_TYYPP", "VAST_ARVO", "VAST_SIJ
Here the first SQL statement (with hash value 3503723562) alone accounts for 24.1% of all buffer gets in the instance.
The next 3 statements account for 10.3%, 8.3% and 8.0%.
All 4 statements are executed approximately the same number of times (around 4,250 executions).
The first statement has more total Gets because it fetches more buffers each time it runs (Gets/Exec is 21,637.7 compared to 9,207.5, 7,477.7 and 7,177.1).
So it is the first candidate for tuning, as it has a greater impact on CPU Other time than the other 3 statements.
A better execution plan for this statement resulting in fewer Gets/Exec will reduce its CPU consumption.
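On Oracle9i, the execution plan actually used by such a statement can be inspected straight from the Library Cache through V$SQL_PLAN. A minimal sketch, substituting the hash value found in the report (the child_number = 0 predicate simply picks the first child cursor):
SELECT LPAD(' ', 2 * depth) || operation || ' ' || options operation, object_name
FROM v$sql_plan
WHERE hash_value = &hash_value
AND child_number = 0
ORDER BY id;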
Here is another example:
Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
3,200,593 1 3,200,593.0 52.2 397365298
select c.contract_no||'/'||c.contract_version, c.owner_ag
ency_id, a.agency_name, TERRITORYPACK.getsalescont
act(a.agency_id,'SALES','N'), c.start_date, LEAST(
404,024 88,481 4.6 6.6 985453413
select cv_dist_flag from applicant
where applicant_id = :p1
178,600 3,028 59.0 2.9 3013728279
select privilege#,level from sysauth$ connect by grantee#=prior
privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
The first statement (hash value 397365298) generated 52.2% of the buffer gets in the instance with just 1 execution.
It has a very high Gets/Exec (3,200,593.0) compared to the others.
If this statement is not just a one-time query then it is a good candidate for tuning before it starts getting used more often.
The second statement (hash value 985453413) fetches on average fewer than 5 buffers per execution but appears high in the list because it is executed very frequently.
Initially it is not significant enough to warrant further investigation.
If after tuning the first statement, CPU Other is still a significant component of overall Response Time and a new Statspack report shows the second statement still high on the list, then it could be looked at more closely.
Here is a similar case, this time from Oracle9i, where we can see the new CPU & Elapsed Time columns:
CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
16,177,286 1 16,177,286.0 12.1 209.00 297.91 342588312
select sf.satz_id, f1.feld, f2.feld feld20, substr(sf.fehler
,instr(sf.fehler,'geschrieben:')+13) feld30 --merkwürdigerweise
wird ab 31 Byte in eine neue Zeile geschrieben from d98_ditr_s
8,177,886 375,622 21.8 6.1 214.09 302.26 3544921891
SELECT /*+ RULE */ * from d98_schemaeintraege_view where pro
f_id = :b1 order by sortierung
The Statspack report does not always show the full text of the SQL statement. The Hash Value can be used to retrieve the full text with the following query, provided the SQL statement is still in the Library Cache at the time the query is run:
SELECT sql_text
FROM v$sqltext
WHERE hash_value = &hash_value_of_SQL
ORDER BY piece;
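(Where available, the variant view V$SQLTEXT_WITH_NEWLINES can be used in the same way; unlike V$SQLTEXT it preserves the original line breaks of the statement rather than replacing them with spaces.)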
FINDING SQL STATEMENTS WITH HIGH CPU PARSE TIME IN STATSPACK
If CPU Parse time is a significant component of Response Time, it can be because cursors are repeatedly opened and closed every time they are executed instead of being opened once, kept open for multiple executions and only closed when they are no longer required.
The SQL ordered by Parse Calls section can help find such cursors; here is an example:
SQL ordered by Parse Calls for DB: DWI1 Instance: DWI1 Snaps: 1 -4
-> End Parse Calls Threshold: 1000
% Total
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
13,632,745 13,632,745 98.90 3980186470
SELECT distinct TS002.C_JOB_DEP, TS002.C_JOB FROM TS002_JO
B_DEP TS002, TS001_JOB TS001 WHERE TS001.C_JOB = TS002.C_JO
B_DEP AND TS002.C_JOB = :b1 AND TS001.C_TIP_JOB !=
11,701 27,255,840 0.08 3615375148
COMMIT
8,192 8,192 0.06 238087931
select t.schema, t.name, t.flags, q.name from system.aq$_queue_t
ables t, sys.aq$_queue_table_affinities aft, system.aq$_que
ues q where aft.table_objno = t.objno and aft.owner_instance = :
1 and q.table_objno = t.objno and q.usage = 0 and b
itand(t.flags, 4+16+32+64+128+256) = 0 for update of t.name, aft
8,192 8,192 0.06 2780709284
select q_name, state, delay, expiration, rowid, msgid, dequeue
_msgid, chain_no, local_order_no, enq_time, enq_tid, step_no,
priority, exception_qschema, exception_queue, retry_count, corri
d, time_manager_info, sender_name, sender_address, sender_prot
ocol from SYS.AQ_SRVNTFN_TABLE where time_manager_info <= :1
The first SQL statement (hash value 3980186470) has had the most parses issued against it (98.90% of all parses in the instance). It is parsed every time it is executed (Parse Calls = Executions). Due to its frequency it is a prime candidate for reducing parse calls as described above.
Note: in excessive parsing situations, it is likely that there will be symptoms such as latch free waits on the Library Cache latches and possibly the Shared Pool latch, in addition to CPU Parse time.
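Outside of Statspack, such cursors can also be found directly in V$SQLAREA by comparing parse calls to executions. A minimal sketch; the threshold of 10,000 parse calls is an arbitrary example:
SELECT parse_calls, executions, hash_value,
       SUBSTR(sql_text, 1, 40) sql
FROM v$sqlarea
WHERE parse_calls > 10000
AND parse_calls >= executions
ORDER BY parse_calls DESC;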
FINDING SQL STATEMENTS WITH HIGH DISK I/O WAITS IN STATSPACK
Identifying the SQL statements responsible for most physical reads from the Statspack section SQL ordered by Reads follows the same approach as for SQL ordered by Gets.
% Total can be used to evaluate the impact of each statement.
Reads per Exec together with Executions can be used as a hint as to whether the statement has a suboptimal execution plan causing many physical reads, or whether it simply appears in the list because it is executed often.
Possible reasons for a high Reads per Exec include: use of unselective indexes that require large numbers of blocks to be fetched when those blocks are not cached well in the buffer cache, index fragmentation, a large index Clustering Factor, etc.
Here is an example:
SQL ordered by Reads for DB: PROD Instance: prod Snaps: 14 -16
-> End Disk Reads Threshold: 1000
Physical Reads Executions Reads per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
3,633,017 48 75,687.9 28.0 3954888792
SELECT "VAR_RES_IK", "RESURSS_IK", "PVM", "ALKAEN_KLO", "PAATT_
KLO", "AJ_STATUS", "ILMOITT", "HETU", "AS_NIMI", "KASITELTY", "T
OIMIPI_IK", "SUOR_PA_IK", "SUOR_PA_ID", "AIKATYY_IK", "AIKATYY_I
1,511,867 26 58,148.7 11.6 394819206
SELECT "VAR_RES_IK", "RESURSS_IK", "PVM", "ALKAEN_KLO", "PAATT_
KLO", "AJ_STATUS", "ILMOITT", "HETU", "AS_NIMI", "KASITELTY", "T
OIMIPI_IK", "SUOR_PA_IK", "SUOR_PA_ID", "AIKATYY_IK", "AIKATYY_I
762,101 6 127,016.8 5.9 4274178025
SELECT "LAB_TUL_IK", "PALV_LYHEN", "PALV_TILA", "OSAVAS_HOP", "
VAST_ARVO", "VAST_SIJ", "MITTAYKS", "POIKKEAVA", "PALVELU_IK", "
ASIAKAS_IK", "VERIRYHMA", "VV_MAARPVM", "TOT_DATE", "TOT_TIME",
512,142 3 170,714.0 3.9 1591034069
SELECT "LAB_TUL_IK", "PALV_LYHEN", "PALV_TILA", "OSAVAS_HOP", "
VAST_ARVO", "VAST_SIJ", "MITTAYKS", "POIKKEAVA", "PALVELU_IK", "
ASIAKAS_IK", "VERIRYHMA", "VV_MAARPVM", "TOT_DATE", "TOT_TIME",
The first two SQL statements are both executed more often than the others and cause more blocks to be read in from disk each time.
Together they account for almost 40% of read I/O. Both are prime candidates for further SQL tuning.
EVALUATING SQL STATEMENTS WITH MOST EXECUTIONS IN STATSPACK
Identifying those SQL statements that execute most often in a database and tuning them can improve performance even when such statements do not consume many resources in each execution.
There are two reasons for this:
The overall resource consumption of these statements across all their executions may be significant.
Such frequently executed statements are often part of OLTP-style short transactions. Tuning them can improve the performance of the database as experienced by users entering such transactions into the system.
Here is an example of Statspack SQL ordered by Executions:
SQL ordered by Executions for DB: DIN Instance: DIN Snaps: 263 -264
-> End Executions Threshold: 100
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
------------ --------------- ---------------- ----------- ---------- ----------
404,871 133,781 0.3 0.00 0.00 3592950473
SELECT nvl(NVL(EIGTWX.WERT,EIGTW.WERT),eig.wert) wert,
1 sortier FROM D8_EIGENTYPWERTE_TEXT EIGTWX ,D98_
EIGENTYPWERTE EIGTW ,D98_EIGENSCHAFTEN EIG WHERE EIG.ANH
324,014 324,014 1.0 0.00 0.00 293077222
SELECT /*+ INDEX (D98_SCHEMAFORMATE SCHFMT_FORM_FKT_UNTER) */
upper(funktionsname), unterformat FROM
D98_SCHEMAFORMATE WHERE formatierung = :b1
183,276 183,276 1.0 0.00 0.00 66213032
INSERT INTO D98_Batch_Variablenwerte (ausf_id, b
atch_id, var_nr, wert) VALUES (:b4,
:b3, :b2, :b1)
114,224 8,936 0.1 0.00 0.00 1604001664
SELECT termin_ist FROM d98_termine WHERE a
nhang_id=:b2 AND terminart = :b1 AND aktiv = 'J'
order by termin_ist desc
It will frequently be the case that the timing columns in this section will show 0.00 for CPU and Elapsed time, as the most frequently executing SQL statements are likely to be quite fast.
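On Oracle9i, more precise timings for such fast statements can be obtained directly from V$SQL, where the CPU_TIME and ELAPSED_TIME columns are expressed in microseconds. A minimal sketch; the executions threshold is an arbitrary example:
SELECT executions,
       ROUND(cpu_time / executions) cpu_usec_per_exec,
       ROUND(elapsed_time / executions) ela_usec_per_exec,
       hash_value
FROM v$sql
WHERE executions > 100000
ORDER BY executions DESC;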
FINDING SQL STATEMENTS WITH HIGH SHARED POOL CONSUMPTION IN STATSPACK
This can help with Shared Pool and Library Cache/Shared Pool latch tuning.
Statements with many versions (multiple child cursors with the same parent cursor i.e. identical SQL text but different properties such as owning schema of objects, optimizer session settings, types & lengths of bind variables etc.) are unsharable.
This means they can consume excessive memory resources in the Shared Pool and cause performance problems related to parsing (e.g. Library Cache and Shared Pool latch contention) or lookup time (e.g. Library Cache latch contention).
Statspack has 2 sections to help find such unsharable statements, SQL ordered by Sharable Memory and SQL ordered by Version Count:
SQL ordered by Sharable Memory for DB: DIN Instance: DIN Snaps: 263 -264
-> End Sharable Memory Threshold: 1048576
Sharable Mem (b) Executions % Total Hash Value
---------------- ------------ ------- ------------
3,445,680 1 0.1 2317124142
select /*+rule*/ decode(UPPER(:P_TITEL_SORT), 'NAME', fin_sort_ansp_name,
'FIRMA', fin_sort_ans_firma, 'KURZBEZ. FIRMA', fin_sort_ans_name,
3,386,429 76 0.1 3470710979
SELECT ID,ANHANGSTYP,OBJEKT_ID,PROF_ID,ANHANGSART,SPRACHE,AKTIV,
ANFANG,ENDE,SEITEN,ARCH_BEMERKUNG,DESKRIPTOR,EIGTW_ID,EIG_WERT,T
2,836,746 447 0.1 2274525714
SELECT ID,ANHANGSTYP,OBJEKT_ID,PROF_ID,ANHANGSART,SPRACHE,AKTIV,
ANFANG,ENDE,SEITEN,ARCH_BEMERKUNG,DESKRIPTOR,EIGTW_ID,EIG_WERT,T
SQL ordered by Version Count for DB: P97 Instance: P97 Snaps: 177 -180
-> End Version Count Threshold: 20
Version
Count Executions Hash Value
-------- ------------ ------------
26 36,228 3957083415
SELECT 1 FROM NK245_RECHTE A WHERE (NK245_NK211_ID BETWEEN :b
1 AND :b2 ) AND NK245_GEHEIMSTUFE >= :b3 AND NK245_NK209_ID = :
b4 AND NK245_NKXX_ID = :b5 AND NK245_ANZAHL > 0 AND (:b6 BETW
25 606 2916558383
UPDATE KVS.NK21_DOKUMENTE SET NK21_NK41_PDA_KUERZEL=:b1,NK21_NK4
1_PDA_NAME=:b2,NK21_STR_LEVEL=:b3,NK21_STR_NR=:b4,NK21_STR_TYP=:
b5,NK21_STRL1_NR=:b6,NK21_STRL1_TYP=:b7,NK21_STRL2_NR=:b8,NK21_S
24 1,602 1157590177
INSERT INTO NK297_NACHRICHTEN ( NK297_ID,NK297_TIMESTAMP,NK297_T
IMESTAMP_E,NK297_NK210_ABSENDER,NK297_NK270_USERNAME_ABS,NK297_T
ITEL,NK297_NTEXT,NK297_NK248_ID,NK297_NK244_ID,NK297_NK213_ID,NK
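Between Statspack snapshots, the same information can be queried online from V$SQLAREA, which exposes both the SHARABLE_MEM and VERSION_COUNT columns. A minimal sketch using the default Statspack threshold of 20 versions:
SELECT version_count, executions, hash_value,
       SUBSTR(sql_text, 1, 40) sql
FROM v$sqlarea
WHERE version_count > 20
ORDER BY version_count DESC;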
For more information on tuning the Shared Pool please refer to the article:
Note 62143.1 Understanding and Tuning the Shared Pool
TOP SQL STATEMENTS IN TKPROF REPORTS
TKPROF is a tool for producing formatted reports from SQL Trace files (also known as event 10046 tracefiles).
Each SQL (and PL/SQL) statement appearing in the tracefile has its information summarized and collected in one place in the TKPROF report.
This information includes: number of parse, execution & fetch operations, total cpu & elapsed times, buffers read in consistent (query) and current mode, blocks read from disk and row counts, Row Source operations, execution plans, library cache misses, parsing user id & optimizer mode and with TKPROF version 9.0.1 and above, summary of wait events for tracefiles generated with event 10046 at levels 8 or 12.
A powerful feature available in TKPROF is the sort option, which allows the SQL statements in the report to be ordered according to a number of criteria.
This enables easy identification of the most resource-intensive SQL statements and helps target the SQL tuning process efficiently.
Here are the options available for sorting; they may be combined:
prscnt number of times parse was called
prscpu cpu time parsing
prsela elapsed time parsing
prsdsk number of disk reads during parse
prsqry number of buffers for consistent read during parse
prscu number of buffers for current read during parse
prsmis number of misses in library cache during parse
execnt number of times execute was called
execpu cpu time spent executing
exeela elapsed time executing
exedsk number of disk reads during execute
exeqry number of buffers for consistent read during execute
execu number of buffers for current read during execute
exerow number of rows processed during execute
exemis number of library cache misses during execute
fchcnt number of times fetch was called
fchcpu cpu time spent fetching
fchela elapsed time fetching
fchdsk number of disk reads during fetch
fchqry number of buffers for consistent read during fetch
fchcu number of buffers for current read during fetch
fchrow number of rows fetched
The most commonly used sort options are the ones ordering according to elapsed times consumed for the execute and fetch phases: we are usually interested in tuning statements that take more time to run as time is the most important property in the tuning process.
exeela elapsed time executing
fchela elapsed time fetching
It should be clear, however, that it is just as simple to order the SQL so as to find which statements cause the most disk I/O during the fetch phase:
fchdsk number of disk reads during fetch
or according to which ones access the most buffers in consistent mode, e.g. for queries this would be in the fetch phase:
fchqry number of buffers for consistent read during fetch
or for DML it would be in the execute phase:
exeqry number of buffers for consistent read during execute
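For example, a TKPROF invocation combining several of these options could look as follows; the tracefile and output file names are illustrative:
tkprof ora_12345.trc report.prf sort=exeela,fchela sys=no
Here sort=exeela,fchela puts the statements with the most elapsed time in the execute and fetch phases at the top of the report, and sys=no excludes recursive SQL executed as user SYS.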
For more information on working with TKPROF please refer to the article:
Note 32951.1 Tkprof Interpretation
TOP SQL STATEMENTS IN V$SQLAREA & V$SQL
The Oracle Server provides 3 dynamic views for querying execution statistics of all SQL statements currently cached in the Library Cache of the Shared Pool.
They are V$SQL, V$SQLAREA and V$SQLXS.
V$SQL has 1 row for each different version of a SQL statement.
This means that each child cursor has its own execution statistics.
V$SQLAREA has 1 row for each different SQL string i.e. each parent cursor.
This means that the statistics for all child cursors i.e. different versions of this cursor are grouped together. It is not based on V$SQL.
V$SQLXS is a simpler version of V$SQLAREA.
It is used mainly by Statspack for generating the SQL reporting sections.
It queries V$SQL with a GROUP BY.
It is defined in ORACLE_HOME/rdbms/admin/catsnmp.sql.
V$SQLAREA or V$SQLXS are most often used to find the top few SQL statements for a specific category of statistic.
Once these are identified, V$SQL can be used to drill down to see whether different versions of each statement exhibit similar statistics or whether some particular versions stand out.
V$SQL is less resource intensive than V$SQLAREA as it avoids the GROUP BY operation and causes less Library Cache latch contention.
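A minimal sketch of such a drill-down, listing the individual child cursors of a statement identified earlier by its hash value:
SELECT child_number, executions, buffer_gets, disk_reads
FROM v$sql
WHERE hash_value = &hash_value
ORDER BY child_number;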
Here is a general form of a query on any of these views:
SELECT * FROM
(SELECT hash_value,address,substr(sql_text,1,40) sql,
[list of columns], [list of derived values]
FROM [V$SQL or V$SQLXS or V$SQLAREA]
WHERE [list of threshold conditions for columns]
ORDER BY [list of ordering columns] DESC)
WHERE rownum <= [number of top SQL statements];
Here is an example:
SELECT * FROM
(SELECT hash_value,address,substr(sql_text,1,40) sql,
buffer_gets, executions, buffer_gets/executions "Gets/Exec"
FROM V$SQLAREA
WHERE buffer_gets > 100000 AND executions > 10
ORDER BY buffer_gets DESC)
WHERE rownum <= 10;
[list of columns] = buffer_gets, executions
[list of derived values] = buffer_gets/executions
[list of threshold conditions for columns] = buffer_gets > 100000, executions > 10
[list of ordering columns] = buffer_gets
[number of top SQL statements] = 10
The following article contains a selection of ready-to-run Top-10 queries:
Note 235146.1 Example "Top SQL" queries from V$SQLAREA
A list of columns on which to base similar queries is:
buffer_gets
disk_reads
executions
rows_processed
sorts
parse_calls
sharable_mem
version_count
invalidations
All of these are available in all three views V$SQL, V$SQLAREA & V$SQLXS.
There are a number of other, less frequently used columns for queries of this kind, which can be found by inspecting V$SQL and V$SQLAREA.
Note 43761.1 VIEW: "V$SQLAREA" Reference Note
Note 43762.1 VIEW: "V$SQL" Reference Note
TOP SQL STATEMENTS IN ENTERPRISE MANAGER SQL ANALYZE
SQL Analyze is a GUI tool available with the Oracle Enterprise Manager Tuning Pack that can be used to identify and help tune resource-intensive SQL statements.
The queries for the heaviest SQL statements are done from the Top SQL menu and a variety of search criteria are available:
Buffer Cache Hit Ratio
Buffer Gets
Buffer Gets per Execution
Buffer Gets per Row
Disk Reads
Disk Reads per Execution
Executions
Parse Calls
Parse Calls per Execution
Rows Processed
Sharable Memory
Sorts
The search can be tailored to include Top N results or All statements and can also filter out Recursive SQL.
For more information please refer to the Online Help available with the tool or the following manual:
Oracle Enterprise Manager Database Tuning with the Oracle Tuning Pack Release 9.0.1 June 2001 Part No. A86647-01
RELATED DOCUMENTS
Note 190124.1 THE COE PERFORMANCE METHOD
Note 228913.1 Systemwide Tuning using STATSPACK Reports
Note 223117.1 Tuning I/O-related waits
Note 62143.1 Understanding and Tuning the Shared Pool
Note 43761.1 VIEW: "V$SQLAREA" Reference Note
Note 43762.1 VIEW: "V$SQL" Reference Note
Note 235146.1 Example "Top SQL" queries from V$SQLAREA
Note 32951.1 Tkprof Interpretation