Tuesday, August 27, 2013

Tuning Advice - Be Sure to Check the Bind Variables

An occasional problem I encounter is a query doing too much work because of the bind variable values passed to it. When looking at queries performing a high number of reads, part of the investigation should include a review of the bind variables passed into the query.

Recall from older posts that the top queries can be identified in AWR/Statspack reports or by running a query against the data dictionary such as:

select buffer_gets, elapsed_time, cpu_time, executions, sql_id, sql_text
from (select buffer_gets, elapsed_time, cpu_time, executions, sql_id, sql_text,
             rank() over (order by buffer_gets desc) as rank
      from v$sqlstats
      where last_active_time > (sysdate - 1/12)
     )
where rank <= 20;

A trace with binds and waits will give you information about the bind variables that were used for a query. You can also query v$sql_bind_capture. If you have the appropriate license you can query dba_hist_sqlbind to get the bind variables for older queries.

select child_number, position, name, datatype_string, value_string
from v$sql_bind_capture
where sql_id = '&SQL_ID_from_1'
order by child_number, position;
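The trace mentioned above can be enabled for another session with DBMS_MONITOR. A minimal sketch, assuming you have already looked up the target session's sid and serial# in v$session (the &SID and &SERIAL substitution variables are placeholders):

exec dbms_monitor.session_trace_enable(session_id => &SID, serial_num => &SERIAL, waits => TRUE, binds => TRUE);
-- ...let the session run the query of interest, then...
exec dbms_monitor.session_trace_disable(session_id => &SID, serial_num => &SERIAL);

The resulting trace file will include BINDS sections showing the values passed on each execution.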

When the bind variables have been identified, check the occurrences of those values in the table columns being queried. A heavily skewed value can indicate a programming issue if too much data is being read or retrieved. Also review any date variables that cause the query to read multiple years' worth of data.
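Checking the distribution can be as simple as a group-by on the filtered column. A sketch, using a hypothetical orders table and status column standing in for whatever appears in your query's predicate:

select status, count(*)
from orders          -- hypothetical table/column from the query's WHERE clause
group by status
order by count(*) desc;

If one value accounts for most of the rows and that is the value being bound, the query may be doing far more work than the developers intended.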

Sometimes the fix for a query performing high reads is a programming change to alter how the query is called. Even when the bad values come from the application, you will still be judged for the performance issues they create.

[Image: The Smith family seeing a query using bad variables]

Tuesday, June 11, 2013

Urge to Purge part 2

Part 1 of this blog post discussed using standard purge functions to reduce overhead on the system. A query to identify the largest objects was provided as a starting point for purge investigations.

A couple of other items are worth reviewing. Often systems will have backup tables created to assist with troubleshooting. These tables should be removed when they are no longer needed.

One way to locate backup tables is to run a query like the following. It may not catch all tables depending upon the naming standards, but it is a good place to start. The list that gets returned may include valid tables, so be sure to check to determine if the tables are being used. You can also edit this query to use other possible strings such as TEMP, TMP, ARCH, ARCHIVE, HIST, BACK, OLD, etc.

select * from dba_segments
where owner not in ('SYS','SYSTEM')
and segment_name like '%BKP%'
and segment_type='TABLE'
order by bytes desc;
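To cover several of the strings mentioned above in a single pass, regexp_like is an option. A sketch, assuming the backup suffix appears at the end of the table name (drop the $ anchor if your shops embed the string mid-name):

select owner, segment_name, bytes
from dba_segments
where owner not in ('SYS','SYSTEM')
and segment_type = 'TABLE'
and regexp_like(segment_name, '(BKP|BACK|BACKUP|TEMP|TMP|ARCH|ARCHIVE|HIST|OLD)$')
order by bytes desc;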

Another place to review is the log files for materialized views. Sometimes these logs can become quite large. There are a few options to reclaim the space; review Oracle's notes on cleaning up materialized view logs. If you are using E-Business Suite, look up the view name in My Oracle Support, as there may be additional steps needed to reclaim the space. The following query lists the materialized view logs ordered by size. Focus only on the largest objects.

select *
from dba_segments
where segment_name like 'MLOG$_%'
order by bytes desc;
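One of the documented cleanup options is DBMS_MVIEW.PURGE_LOG, which removes log rows no longer needed by registered materialized views. A sketch against a hypothetical master table (test on a non-production system first, and check My Oracle Support for any EBS-specific steps as noted above):

exec dbms_mview.purge_log(master => 'APPS.SOME_MASTER_TABLE', num => 1, flag => 'DELETE');

Note that deleting rows by itself does not shrink the underlying MLOG$ segment; actually reclaiming the space may also require a move or shrink operation on the log table.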

Friday, April 26, 2013

Urge to Purge part 1

Purge is a four letter word in many organizations. Resistance to purging data is understandable. The technical staff will be at odds with the functional groups if there is ever a time when needed data is no longer available.

The penalty for not purging is performance degradation and additional storage requirements. The additional storage also has an adverse effect on backup and recovery operations. Beyond purchasing additional storage, companies often have to purchase additional CPU capacity, which can also increase licensing costs.

If you're not sure of the best place to start your purge investigation: EBS customers should purge as much of the Concurrent Request history and Workflow history as possible. Next, investigate the purging options for the features that are most heavily used at your site.

If you are unsure of which modules result in excessive storage usage, then check for the largest objects in your database. Based upon the tables returned, you can then use My Oracle Support to check for appropriate purge routines.

A simple query to check for the 50 largest objects:

select owner, segment_name, segment_type, bytes
from (select owner, segment_name, segment_type, bytes,
             rank() over (order by bytes desc) as rank
      from dba_segments
     )
where rank <= 50;

Monday, February 18, 2013

Hotsos Symposium 2013

Good news! I found out today that I will get to present at the 2013 Hotsos Symposium. I was slotted in as a fill-in speaker for a presentation that had to be cancelled.

The Symposium is always a great event. This conference focuses entirely on practical performance tuning and database management topics. I'm proud to be part of the company that hosts this event. Last year's Tenth Annual event was great.

My presentation, "Hunting Down Performance Problems", is suitable for beginner-to-intermediate level DBAs. The presentation will focus on examples of performance problem identification and resolution. I hope to see some of you there.

For more information, see my speaker profile.