The Pivot feature can be used to present information in an easy-to-read format. This is useful when you have information in multiple rows that you would rather display in multiple columns.
For example, I recently wanted to view information about the busiest database objects. When doing performance tuning, this information can be valuable. Oracle provides the v$segment_statistics view, which contains performance metrics for each object.
Since I wanted to review only certain metrics, and wanted to see all of those metrics for each object, I created the following pivot query. The query can be ordered by any column to show the top objects for that column's metric, and seeing the other metrics for each object on the same line makes the output more useful.
Example query:
with pivot_stats as (
  select owner, object_name, statistic_name, value
  from v$segment_statistics
)
select *
from pivot_stats
pivot (sum(value) for statistic_name in ('logical reads', 'physical writes', 'row lock waits'));
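The same row-to-column pivot can be sketched outside Oracle with conditional aggregation, which is what the PIVOT clause does declaratively. This is a minimal illustration using Python's sqlite3; the seg_stats table and its rows are made-up stand-ins for v$segment_statistics:

```python
import sqlite3

# Hypothetical miniature of v$segment_statistics: one row per (object, statistic).
conn = sqlite3.connect(":memory:")
conn.execute("create table seg_stats (object_name text, statistic_name text, value integer)")
conn.executemany("insert into seg_stats values (?, ?, ?)", [
    ("EMP", "logical reads", 100), ("EMP", "physical writes", 5),
    ("DEPT", "logical reads", 40), ("DEPT", "row lock waits", 2),
])

# One sum(case ...) per statistic turns one row per statistic
# into one column per statistic, like PIVOT does.
rows = conn.execute("""
    select object_name,
           sum(case when statistic_name = 'logical reads'   then value else 0 end) as logical_reads,
           sum(case when statistic_name = 'physical writes' then value else 0 end) as physical_writes,
           sum(case when statistic_name = 'row lock waits'  then value else 0 end) as row_lock_waits
    from seg_stats
    group by object_name
    order by logical_reads desc
""").fetchall()
print(rows)  # one line per object, one column per metric
```

As in the Oracle version, ordering by any of the pivoted columns shows the top objects for that metric while keeping the other metrics on the same line.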
Thursday, June 16, 2011
Monday, May 2, 2011
Gather Data for SQL tuning
This post will not focus on how to tune SQL statements. There is already a wealth of information about tuning practices, plus many fine tools available, such as Profiler, that will assist with tuning SQL statements.
I'll give an overview of identifying SQL statements that can benefit from tuning efforts. For this approach I will assume only the use of SQL*Plus. These steps are much easier with the appropriate tools, but if you are in a situation where the tools are not available, it is important to know the basic steps for gathering the information.
The key steps to follow are:
- Use Applied Analytics part 1 to find top SQL
- Use additional queries to get information about calling programs
- Use v$sql_bind_capture to get sample bind data
- Use Autotrace with SQL*Plus to get plan and execution information
1.
Use the following query to get the top SQL statements by logical reads for the past couple of hours. This is outlined in my post Applied Analytics part 1.
select buffer_gets, elapsed_time, cpu_time, executions, sql_id, sql_text
from (select buffer_gets, elapsed_time, cpu_time, executions, sql_id, sql_text,
             rank() over (order by buffer_gets desc) as rank
      from v$sqlstats
      where last_active_time > (sysdate - 1/12))
where rank <= 20;
2.
Use the following query to get information about who is calling the SQL statement. This is an important element of troubleshooting: sometimes you may identify poorly performing statements that are actually program bugs. The calling information can help you determine whether the statements should even be running at all.
select parsing_schema_name, service,module,action,program_id,program_line#
from v$sql
where sql_id=&SQL_ID_from_1;
3.
If the poorly performing statement has bind variables, then you need to see what values are being passed to it. This information will allow you to test some real examples of the query.
select child_number,position,name,datatype_string,value_string
from v$sql_bind_capture
where sql_id=&SQL_ID_from_1
order by child_number,position;
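The point of capturing bind values is that you can replay the statement with realistic inputs. As a minimal sketch of that replay step using Python's sqlite3 (the orders table and the captured value are made up for illustration, standing in for whatever v$sql_bind_capture reported):

```python
import sqlite3

# Hypothetical stand-in for the application table behind the captured statement.
conn = sqlite3.connect(":memory:")
conn.execute("create table orders (id integer, status text)")
conn.executemany("insert into orders values (?, ?)",
                 [(1, "OPEN"), (2, "CLOSED"), (3, "OPEN")])

# Suppose the bind capture reported value_string = 'OPEN' at position 1.
# Replaying the statement with the captured value reproduces a real execution,
# which is what you want to test in step 4.
captured_binds = {1: "OPEN"}
rows = conn.execute("select id from orders where status = ?",
                    (captured_binds[1],)).fetchall()
print(rows)
```

Testing with the captured values, rather than values you invent, matters because the optimizer's behavior can differ sharply between popular and rare values.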
4.
Run the query in a SQL*Plus session with Autotrace turned on. This will return an explain plan and execution statistics after the query finishes.
If the query performs well, then test it against other bind values; it could be that only certain values result in poor execution. Also check the execution count: if a query has a small individual impact but is an overall high consumer, the cause could be an exceptionally high number of executions.
If an individual execution demonstrates a performance problem, then you can begin tuning the specific SQL.
Saturday, April 9, 2011
EBS Applications Technology SIG meeting at COLLABORATE
There are several fun events coming up with the EBS Applications Technology SIG meetings at COLLABORATE. Unfortunately I will not be able to make it there, but we are having some excellent sessions. Be sure to attend the SIG meetings on Monday. Note that the published schedule may not be correct; the appropriate times are listed below.
Also Hotsos is serving as one of the sponsors of the Geek Meet. Be sure to grab a button and the invite information at the SIG meetings!
Monday, April 11
9:15am - 10:15am
EBS Technology Certification and Roadmap
Speaker: Steven Chan
W304G
Monday, April 11
10:30am - 11:30am
Basic 11gAS Administration for the EBS APPS DBA
Speaker: Glen Hawkins
W304G
Thursday, March 31, 2011
Applied Analytics Part 1
The Top-N query pattern can be applied to administrative queries. This common analytic query will come in handy if you are looking for top resource consumers.
For example, to see the top 10 largest tables in the database, the following query can be used:
select owner, segment_name, segment_type, bytes
from (select owner, segment_name, segment_type, bytes,
             rank() over (order by bytes desc) as rank
      from dba_segments)
where rank <= 10;
This query can also be used against system tables such as v$sqlstats and v$segment_statistics to obtain performance information about the database. Using rank you can easily select the data you want.
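The rank-then-filter pattern above can be sketched outside Oracle as well. This is a minimal illustration using Python's sqlite3 (window functions require SQLite 3.25+); the segments table and its rows are made up to stand in for dba_segments:

```python
import sqlite3

# Hypothetical miniature of dba_segments.
conn = sqlite3.connect(":memory:")
conn.execute("create table segments (owner text, segment_name text, bytes integer)")
conn.executemany("insert into segments values (?, ?, ?)", [
    ("HR", "EMPLOYEES", 500), ("HR", "JOBS", 100),
    ("SALES", "ORDERS", 900), ("SALES", "ITEMS", 300),
])

# Rank every row by size in an inline view, then keep only the top N.
top_n = 2
rows = conn.execute("""
    select owner, segment_name, bytes
    from (select owner, segment_name, bytes,
                 rank() over (order by bytes desc) as rnk
          from segments)
    where rnk <= ?
    order by bytes desc
""", (top_n,)).fetchall()
print(rows)
```

Swapping the table and the order-by column is all it takes to point the same pattern at v$sqlstats or v$segment_statistics.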
Friday, March 11, 2011
Hotsos Symposium 2011
I'm back home from Hotsos Symposium 2011. This was the first symposium I attended. Like many other first time attendees, my immediate thought at the end of the event was "I can't believe I haven't come to this before."
The small size of the symposium made it much easier to feel like a part of the event. The larger events can sometimes leave attendees feeling like just another number. At Hotsos Symposium that was not the case.
The sessions were first rate, and many of the presenters are among the best in the field. The sessions are performance centric, which lends itself to broader topics such as system management, application development, and capacity planning.
Rather than give details on the specific sessions I attended, I'd rather talk about the main trends I saw discussed. Certain topics were discussed in multiple sessions. These key topics were:
1. Statistics
There were numerous discussions about the importance of statistics. With the Cost Based Optimizer, Oracle is very dependent on having accurate statistics. If the statistics are not valid, then there can be significant impacts to system performance. The discussions covered new statistics features in 11g as well as how to validate existing statistics.
2. AWR/ASH
Multiple presenters discussed ways to mine AWR data to solve problems as well as to proactively monitor systems. These presentations hit home with me because I often focus on AWR data during troubleshooting and performance planning. One point of concern is that AWR/ASH require the Database Diagnostic Pack. Really, Oracle, we have to pay extra to get access to useful diagnostic data? As another attendee said, "Use of these tools should be mandatory!"
3. Capacity planning, performance forecasting
These topics are a big area of concern these days, and the sessions covering them were often the most mathematically inclined talks. People should appreciate how difficult forecasting can be: workload impact does not grow linearly, and once the performance impact begins to appear, it grows at an exponential rate. Dr. Neil Gunther discussed how to forecast the impact of increased workload using a sample of at least 6 data load tests.
Another thing that stood out for me was the humor in the presentations. A few of my favorite moments came when presenters made a strong point in a funny way. Kerry Osborne presented on user trends which, he admitted, were not based on actual data but represented his views on the topic. Frederich Pfeiffer had a good line about hardware sizing (note item #3 above): discussing the importance of using realistic data in test environments for capacity forecasts, he said, "If there is a performance problem after going into production, you can't suggest reverting to the test data because that ran fine." Other humorous presentations included those by Gwen Shapira and Dan Fink.
Beyond the fantastic presentations and Training Day with Karen Morton, the evening events were quite entertaining. The Monday night event sponsored by Netzella was a blast. On Tuesday night Hotsos provided a Cajun themed party that was very enjoyable. All of the other little things such as quality of the food, room quality, etc were first rate.
Next year will be the tenth anniversary of the Symposium. It will be a great event; I strongly recommend attending.
Monday, February 28, 2011
FND_TOP/sql script descriptions
There are a lot of useful scripts provided by Oracle to assist with management of the E-Business Suite. Several such scripts are located in the $FND_TOP/sql directory.
Some of the scripts I have used include:
afcmrrq.sql - displays information about concurrent manager processing.
wfstatus.sql - provides information to Oracle Support about workflow processing.
My Oracle Support article Oracle Applications Object Library SQL scripts [ID 108185.1] gives detailed information on these scripts and is worth reviewing to get a better understanding of what is provided; Oracle may have already written a script you were thinking of creating.
Tuesday, February 1, 2011
What is that E-Business Suite module?
The module name FASDPAPV is running slow.
Great, what does that mean?
While looking at performance data for an Oracle EBS system, you will see the name of the offending module, which is needed to perform searches on My Oracle Support. However, to determine what part of the application is affected, it may be helpful to translate that module name into a more descriptive name that provides some insight into where the module is used.
The module names do provide some hint as to their purpose. Usually the short name of the Application is included at the beginning of the Module name, and an Apps DBA can determine the Application from the short name. The Action will indicate whether the module is a form or a concurrent program.
To retrieve the descriptive name of the module you can run an additional query against the database. I've included some queries below for Forms and Concurrent Programs. These queries will also return a descriptive name of the calling application.
For Form modules:
select app_desc.application_name, app_desc.description, form_desc.user_form_name, form_desc.description
from fnd_form form, fnd_form_tl form_desc, fnd_application_tl app_desc
where form.application_id=app_desc.application_id
and form.form_id=form_desc.form_id
and form.form_name=:Insert_Form_Name
For Concurrent Program modules:
select app_desc.application_name, app_desc.description, prog_desc.user_concurrent_program_name, prog_desc.description
from fnd_concurrent_programs prog, fnd_concurrent_programs_tl prog_desc, fnd_application_tl app_desc
where prog.application_id=app_desc.application_id
and prog.concurrent_program_id=prog_desc.concurrent_program_id
and prog.concurrent_program_name=:Insert_Concurrent_Program_Name
Using the query above, the FASDPAPV module turns out to be the concurrent program Mass Depreciation Adjustment Preview Report belonging to the Assets application.
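The lookup pattern behind these queries is a simple pair of joins from the internal name to its translated descriptions. As a minimal sketch using Python's sqlite3 (the three tables are simplified stand-ins for fnd_concurrent_programs, fnd_concurrent_programs_tl, and fnd_application_tl; the single row reflects the FASDPAPV example above):

```python
import sqlite3

# Simplified stand-ins for the FND lookup tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table programs (program_name text, application_id integer);
create table program_tl (program_name text, user_program_name text);
create table application_tl (application_id integer, application_name text);
insert into programs values ('FASDPAPV', 140);
insert into program_tl values ('FASDPAPV', 'Mass Depreciation Adjustment Preview Report');
insert into application_tl values (140, 'Assets');
""")

# Join the internal module name to its application and descriptive names.
row = conn.execute("""
    select a.application_name, t.user_program_name
    from programs p
    join application_tl a on a.application_id = p.application_id
    join program_tl t on t.program_name = p.program_name
    where p.program_name = ?
""", ("FASDPAPV",)).fetchone()
print(row)
```

The real FND tables carry language columns in the _TL tables, which is why the Oracle queries above may return one description row per installed language.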