The SQL Performance Analyzer, or SPA, is another new feature introduced in release 11g. SPA and another feature named Database Replay fall under what is known as Real Application Testing.
The distinction between the two is that Database Replay, as its name implies, enables you to replay a captured workload against a database. SPA, on the other hand, provides insight into how a change would affect the performance of the statements in a SQL tuning set.
During a
database migration or upgrade, execution plans can change. In fact, that was a
problem faced by many users who upgraded to 10g. Internet Q&A forums
are rife with questions and postings about how a database’s performance went
into the tank after upgrading to 10g – even though 10g is and was
much better than any prior release. So, whether you are upgrading or just want to test the impact of adding an index, SPA will accommodate that investigation. Any activity that may impact a statement’s execution plan is a candidate for using SPA to investigate the possible consequences – both good and bad.
Defining a workflow
Using SPA
is based on a five-step workflow.
- Capture the workload
- Gather performance metrics before the change
- Introduce a change
- Gather performance metrics after the change
- Compare the before and after performance
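In API terms, the five steps map roughly onto DBMS_SQLTUNE and DBMS_SQLPA calls as shown in the sketch below. This is only an orientation; the tuning set name, bind variable, and execution names are placeholders, and each call is covered in detail in the working example later on.

```sql
-- Step 1: capture the workload into a SQL Tuning Set (STS)
EXEC DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'my_sqlset');

-- Step 2: create an analysis task and gather the "before" metrics
EXEC :v_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'my_sqlset');
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(:v_task, 'test execute', execution_name => 'before_change');

-- Step 3: introduce the change (index, parameter, upgrade, and so on)

-- Step 4: gather the "after" metrics
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(:v_task, 'test execute', execution_name => 'after_change');

-- Step 5: compare the two runs and generate a report
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(:v_task, 'compare performance');
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK(:v_task) FROM dual;
```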
As a
testing methodology, those steps are as old as dirt. Amazingly enough, many
users have a hard time adhering to this plan or any other. Oftentimes, step 3
becomes “Introduce many changes.” The interaction between changes can confound
the interpretation of the results. Is the database faster because of factor X
or factor Y? You just don’t know.
Somewhat
implied here is that the change is made elsewhere, as in not on your production
instance, especially if the workload is resource intensive. Step 1 requires collecting a SQL Tuning Set (STS). Step 2 test executes the query statements in the set, so inserts, updates, and deletes are not analyzed. In step 4, SPA executes the statements again and generates new execution plans and statistics. Finally, in step 5, you
review the report SPA has generated for you, and from there, you can invoke
other tools (such as the SQL Tuning Advisor) or make changes on your own. And,
if needed, the cycle can be repeated.
Ideally, the test system should be a clone of production. In practice, make it as close as you can. The cost of maintaining a 3-node RAC architecture for development (and another for QA) can be prohibitive. The reality is that, for whatever reason, testing is sometimes performed on a production system. SQL Performance Analyzer supports that option as well. If you can and do test on a separate server, then you have the additional task of exporting a table related to the STS and importing it into the test instance.
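Moving the STS between instances can be sketched with the DBMS_SQLTUNE staging-table API. The tuning set and staging table names below are placeholders, and the transport itself (Data Pump export/import or a database link) is left out:

```sql
-- On production: create a staging table and pack the STS into it
EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'STS_STAGE', schema_name => 'SCOTT');
EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name => 'my_sqlset', staging_table_name => 'STS_STAGE', staging_schema_owner => 'SCOTT');

-- Move SCOTT.STS_STAGE to the test instance, then unpack it there
EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(sqlset_name => '%', replace => TRUE, staging_table_name => 'STS_STAGE', staging_schema_owner => 'SCOTT');
```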
It should
also be obvious that since SQL Performance Analyzer uses a SQL Tuning Set, the
only versions from which you can collect an STS are 10g and above. If
you are still using release 8i or 9i, you would have to upgrade
to at least 10g.
A working example
Let’s work our way through an example. Many ad hoc examples found on Web sites, blogs, and other books tend to use the ALL_OBJECTS data dictionary view as a source for generating test data, so, for the sake of consistency, so will this example.
Our command
line “tool” is the DBMS_SQLPA package. Its subprograms are shown below.
- CANCEL_ANALYSIS_TASK
- CREATE_ANALYSIS_TASK
- DROP_ANALYSIS_TASK
- EXECUTE_ANALYSIS_TASK
- INTERRUPT_ANALYSIS_TASK
- REPORT_ANALYSIS_TASK
- RESET_ANALYSIS_TASK
- RESUME_ANALYSIS_TASK
- SET_ANALYSIS_TASK_PARAMETER
- SET_ANALYSIS_DEFAULT_PARAMETER
The three analysis task subprograms of interest here are create, execute, and report. We’ll use the SCOTT schema, create a replica of ALL_OBJECTS, and populate the new table with a dump of SELECT * FROM ALL_OBJECTS. A relatively easy test case, or what-if scenario, is adding an index to the new objects table. An index on what, though?
If using 11g out of the box, the SCOTT account must be unlocked and have its password reset. To get a feel for the amount of data: there are around 53,750 objects and 24 object types. Synonyms and Java classes account for just over 90%, so in terms of selectivity, the next most populous object type (views) accounts for 3%, and everything else is less than that. Object type seems like a good candidate for indexing, given that most of our queries are looking at names of tables, views, packages, functions, procedures, and triggers.
Use CTAS to
create the table and then gather table statistics using
DBMS_STATS.GATHER_TABLE_STATS. For the SCOTT schema, you could also use
GATHER_SCHEMA_STATS because it is a relatively small schema.
The target
table, in keeping with examples elsewhere, is named MY_OBJECTS.
create table my_objects as select * from all_objects;
exec dbms_stats.gather_table_stats('scott','my_objects');
Execute a variety of SQL statements (all queries).
SELECT object_type, count(*) FROM my_objects GROUP by object_type ORDER BY 2 desc;
SELECT object_name FROM my_objects WHERE object_type = 'VIEW';
SELECT object_name FROM my_objects WHERE object_type like 'PACKAGE%';
SELECT count(*) FROM my_objects WHERE object_type NOT IN ('SYNONYM','JAVA CLASS');
SELECT object_name FROM my_objects WHERE object_type = 'EDITION';
Now we’ll create a SQL tuning set using DBMS_SQLTUNE.CREATE_SQLSET, load it from a SQLSET cursor, and view what’s in DBA_SQLSET_STATEMENTS.
You will
need to grant the ADMINISTER SQL TUNING SET privilege to SCOTT beforehand.
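As SYS (or another DBA account), the unlock and grant might look like the following sketch. The password shown is a placeholder, and the ADVISOR grant is an assumption here, included so that SCOTT can create DBMS_SQLPA analysis tasks:

```sql
-- As a DBA: unlock SCOTT (locked by default in 11g) and reset the password
ALTER USER scott ACCOUNT UNLOCK IDENTIFIED BY new_password;

-- Let SCOTT create and manage SQL tuning sets and advisor (analysis) tasks
GRANT ADMINISTER SQL TUNING SET TO scott;
GRANT ADVISOR TO scott;
```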
set serveroutput on
EXEC DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'my_obj_sqlset');

DECLARE
  v_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN v_cursor FOR
    SELECT VALUE(x)
    FROM TABLE(
      DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
        basic_filter   => 'sql_text LIKE ''%my_objects%'' and parsing_schema_name = ''SCOTT''',
        attribute_list => 'ALL')) x;
  DBMS_SQLTUNE.LOAD_SQLSET (
    sqlset_name     => 'my_obj_sqlset',
    populate_cursor => v_cursor);
END;
/
And, as
confirmation that the tuning set is loaded, we can use the following query.
SQL> SELECT sql_text
  2  FROM dba_sqlset_statements
  3  WHERE sqlset_name = 'my_obj_sqlset';

SQL_TEXT
--------------------------------------------------
SELECT object_name FROM my_objects WHERE object_type = 'VIEW'
SELECT object_name FROM my_objects WHERE object_type = 'EDITION'
SELECT count(*) FROM my_objects WHERE object_type NOT IN ('SYNONYM','JAVA CLASS')
SELECT object_type, count(*) FROM my_objects GROUP by object_type ORDER BY 2 desc
SELECT object_name FROM my_objects WHERE object_type like 'PACKAGE%'
The next step is to get a task name for use within the call to DBMS_SQLPA.EXECUTE_ANALYSIS_TASK.
VARIABLE v_task VARCHAR2(64);
EXEC :v_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK (sqlset_name => 'my_obj_sqlset');
PRINT :v_task
In the working example, the task name is TASK_35. We’ll call the before-change run “before_index” and the after-change run “after_index.” Since we know what we’re changing (adding an index), these names are more descriptive than generic ones such as “before_change” and “after_change,” but the naming is up to you. Substitute your own task name or take advantage of the bind variable construct.
BEGIN
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => :v_task,
    execution_type => 'test execute',
    execution_name => 'before_index');
END;
/
We’re now at step 3: make the change. Here, the change is to add an index to the table (and regather statistics).
CREATE INDEX idx_my_objects_type on my_objects(object_type);
exec DBMS_STATS.GATHER_TABLE_STATS ('scott','my_objects',cascade=>TRUE);
The after-change run, “after_index,” is ready to be executed.
BEGIN
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => :v_task,
    execution_type => 'test execute',
    execution_name => 'after_index');
END;
/
Compare the runs by execution name; if the names are omitted, the last two executions of the task will be compared.
BEGIN
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name        => :v_task,
    execution_type   => 'compare performance',
    execution_params => dbms_advisor.arglist(
                          'execution_name1', 'before_index',
                          'execution_name2', 'after_index'));
END;
/
Finally,
the moment we’ve been waiting for: what is the impact of adding an index to the
MY_OBJECTS table? The API to extract this information is the
REPORT_ANALYSIS_TASK function of DBMS_SQLPA. There are several options, so it’s
worth a moment to look at some of them.
First, how do you want the report to appear? Your choices are text (the default), HTML, and XML. Next is the level of detail. The choices are shown below.
Level | Description
BASIC | Same as TYPICAL
TYPICAL | Information on all statements (the default)
ALL | Detailed information on all statements
IMPROVED | Only improved SQL
REGRESSED | Only regressed SQL
CHANGED | SQL with changed performance
UNCHANGED | Opposite of CHANGED (unchanged performance)
CHANGED_PLANS | Only SQL with plan changes
UNCHANGED_PLANS | Opposite of CHANGED_PLANS (unchanged plans)
ERRORS | SQL with errors
In the working example, several of the level choices would fit, as it would not be unreasonable (in this case) to presume there will be some improvement and change. And maybe we’ll see some degradation.
The function is shown below. Oracle’s documentation shows an extra trailing right parenthesis, so delete that character if cutting and pasting from that source.
DBMS_SQLPA.REPORT_ANALYSIS_TASK(
  task_name      IN VARCHAR2,
  type           IN VARCHAR2 := 'text',
  level          IN VARCHAR2 := 'typical',
  section        IN VARCHAR2 := 'summary',
  object_id      IN NUMBER   := NULL,
  top_sql        IN NUMBER   := 100,
  task_owner     IN VARCHAR2 := NULL,
  execution_name IN VARCHAR2 := NULL)
RETURN CLOB;
For our purposes, let’s go with a report in HTML format. The simplest case is to pass in the task name and let everything else default. Set up your session for spooling, to include a path and file name, and invoke the function by selecting from DUAL.
SET LONG 1000000
SET PAGESIZE 0
SET LINESIZE 200
SET LONGCHUNKSIZE 200
SET TRIMSPOOL ON
SPOOL C:\temp\spa_index_test.html
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK ('TASK_35','HTML','ALL','ALL') FROM dual;
SPOOL OFF
Of the five
statements, four of them improved, and one had no change. A section for one of
the statements that improved appears as follows.
The
execution plan section shows that a full table scan was used before and the
index was used after the change.
Miscellaneous items
this wasn’t too hard to set up and get running. You may find odd errors along the way when varying the input parameters (such as an ORA-01478 array bind error), so try a different format with respect to named versus positional parameters.
If you need to start over, use the DELETE_SQLSET or DROP_SQLSET procedure in DBMS_SQLTUNE. Try to make your statements as clean as possible, avoiding extra SELECT statements and making the statements easy to identify. For example, you could alias a table as XXX; that string would be unusual in Oracle, but easy to filter on.
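A cleanup sketch, using the names from this example: a tuning set that is still referenced by an analysis task generally cannot be dropped, so drop the task first.

```sql
-- Drop the analysis task, then the tuning set it references
EXEC DBMS_SQLPA.DROP_ANALYSIS_TASK('TASK_35');
EXEC DBMS_SQLTUNE.DROP_SQLSET('my_obj_sqlset');
```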
Enterprise Manager offers a streamlined interface into the SQL Performance Analyzer. Even though the tuning set and report were created manually, they’ll still be available in OEM. As an example, drilling down to TASK_35 shows a graphical comparison between the runs. The two regressed SQL statements shown there were related to internals of executing the two packages rather than to the test queries.
As mentioned in the beginning, SPA can capture differences due to most anything that affects an execution plan. Aside from DDL operations such as index creation, or the use of hints, two other major elements can factor into changes. One is the optimizer compatibility setting, and the other is generic parameter changes.
A change in the optimizer compatibility setting is specific enough that it is not counted as a parameter change. Although it is a parameter, the setting is intrinsic to how the optimizer works as a whole. Examples of “normal” parameter changes would be altering the OPTIMIZER_INDEX_COST_ADJ or DB_FILE_MULTIBLOCK_READ_COUNT settings.
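Testing a parameter change like that can reuse the same analysis task. The sketch below assumes the test executions pick up the session-level setting; the execution name is a placeholder. Compare the new run against the baseline with a 'compare performance' execution, just as in the index example.

```sql
-- Change the parameter for this session, then capture a new run
ALTER SESSION SET optimizer_index_cost_adj = 20;

BEGIN
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => :v_task,
    execution_type => 'test execute',
    execution_name => 'after_oica_20');
END;
/
```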
And now that you’ve seen the API interface, that is, the actual DBMS_SQLPA (and DBMS_SQLTUNE to create the SQL tuning set), the guided workflow in Enterprise Manager will be much easier to follow.
Summary
In summary, the SQL Performance Analyzer can be used to help validate or explore changes. It can help you work around not having a 100% copy of the production environment, and better still, unlike ADRCI, SPA can be backported to releases prior to 11g. My Oracle Support note 560977.1, “Real Application Testing Now Available for Earlier Releases,” details the applicable versions.
After applying a patch via the opatch utility, you can capture workloads on the older versions. However, you can only replay on version 11g and higher.
Finally, since SQL Performance Analyzer falls under Real Application Testing, it must be licensed. Database Replay, SQL Performance Analyzer, and SQL Tuning Sets (STS) are licensable; STS can also be used if you have licensed the Tuning Pack.