Monitoring Performance With SQL Server Profiler

Monitoring the performance of stored procedures and ad-hoc
requests running in production databases can help you identify
possible system overload problems early.

Usually, analyzing simple counters such as procedure
duration and the number of reads performed by the database engine is enough to
identify procedures or queries that place unnecessary load on the system. Most
often this happens for a handful of reasons: logical errors in the code,
poor database design, an incorrect indexing strategy, and so on.

There are a number of sophisticated monitoring tools on
the market that gather statistics for you and even point out possible
weaknesses in the system, but in my opinion the most effective
approach is the simplest one. Microsoft SQL Profiler is designed to collect all
the information you need for analysis. The best thing about SQL Profiler is that
it can store the collected information (not just counters, but the actual
statements) in a database table. On top of that, it is a simple and very
friendly tool to work with.

SQL Profiler setup

Start SQL Profiler, and from the FILE menu select NEW and then
TRACE. You will be prompted for information about the SQL
Server you want to monitor. Select the server name, fill out the login and
password fields, and click OK.

On the next screen, name the new trace TRACE1. Check
the "Save to table" box; you will be prompted to select the SQL Server,
database, and table where you want to store the collected information. It is a
very good idea to use a separate SQL Server (for example, the desktop edition
installed on your workstation) in order to avoid the performance hit caused by
the continual inserts into the storage table.
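
If you would rather avoid live inserts altogether, you can save the trace to a file instead and import it into a table afterwards with the built-in ::fn_trace_gettable function. A minimal sketch, assuming the trace was saved to the hypothetical path C:\Traces\TRACE1.trc:

-- Load a saved trace file into the analysis table
-- (the file path is an example; point it at your own .trc file)
SELECT * INTO dbo.TRACE1
FROM ::fn_trace_gettable(N'C:\Traces\TRACE1.trc', DEFAULT)
GO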

On the EVENTS tab, first remove all counters from the
right-hand window, and then select all the collections of counters for STORED
PROCEDURES and TSQL. If you want to monitor any additional events, feel free to
add them as required.

On the FILTERS tab, select the OBJECTID parameter from the
tree and check the "Exclude system objects" flag.

Now go back to the GENERAL tab and hit the RUN button. SQL
Profiler starts collecting information about all procedures and ad-hoc queries
running on the SQL Server. Let it run for a few hours during normal operation,
and you are ready for the analysis.
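
While the trace is running, a quick sanity check against the storage table confirms that events are actually being captured:

-- The row count should grow while the trace is running
SELECT COUNT(*) AS CapturedEvents FROM dbo.TRACE1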

Data analysis

There are two main counters you want to keep track of in your
system: READS and DURATION. For each completed procedure or query, SQL Profiler
captures this information along with the statement, user id, and other
parameters.
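
A quick look at the raw data shows these counters as ordinary columns (assuming the default data columns were kept when the trace was defined):

-- Statement text plus the timing and I/O counters captured for each event
SELECT TextData, Duration, CPU, Reads, Writes, StartTime
FROM dbo.TRACE1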

Before running any analysis queries, I suggest creating indexes
on the DURATION and READS columns in the storage table (TRACE1). This will
dramatically speed up your analysis:

CREATE NONCLUSTERED INDEX IND_TRACE_1 ON dbo.TRACE1(Duration)
GO

CREATE NONCLUSTERED INDEX IND_TRACE_2 ON dbo.TRACE1(Reads)
GO

The first thing you might be interested in is identifying long-running
procedures and queries. The following query identifies the 20 procedures
that take the longest to complete:

SELECT * FROM dbo.TRACE1 WHERE Duration IN
(SELECT DISTINCT TOP 20 Duration FROM dbo.TRACE1 ORDER BY Duration DESC)
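
The query above finds individual slow executions, but a statement that runs quickly and very often can load the system just as much. A sketch that aggregates by statement text (TextData is stored as ntext in the trace table, so it must be converted before grouping):

-- Cumulative duration per distinct statement; frequent "fast" queries surface here
SELECT TOP 20
       CONVERT(nvarchar(400), TextData) AS Statement,
       COUNT(*)      AS Executions,
       SUM(Duration) AS TotalDuration,
       AVG(Duration) AS AvgDuration
FROM dbo.TRACE1
GROUP BY CONVERT(nvarchar(400), TextData)
ORDER BY TotalDuration DESC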

In order to understand why a procedure takes a long time to
finish, take a look at the READS parameter.

When the value of READS is low, the required resources
(e.g., tables or views) were probably locked by another process. This
indicates possible locking problems, and you might want to take a look at the
procedures that use the same objects.
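
To confirm a locking problem while the slow statement is running, the standard system procedures are usually enough:

-- The BlkBy column shows which session is blocking which
EXEC sp_who2
-- Shows the locked objects, lock types and owning sessions
EXEC sp_lock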

High READS may indicate table scans or suboptimal
indexes chosen by the query optimizer. Usually, you want to keep the READS
counter as low as possible, because operations that access the hard drives
(reading and writing) are the slowest ones in the system.

Paste the statement you want to check into Query
Analyzer and select the Show Execution Plan command from the Query menu. Run the
statement and take a look at the query plan; it will show you whether any table
or index scans occur during execution. Table scans can be fixed by applying a
proper index to the columns participating in the WHERE clause or joins. Index
scans are much faster than table scans and sometimes cannot be avoided, but
when a clustered index scan occurs, you must check whether the columns you use
in the WHERE clause or joins are in the clustered index. If they are not,
creating a separate nonclustered index on these columns will boost query
performance and decrease the READS volume.
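
You can also verify the READS figure directly in Query Analyzer with SET STATISTICS IO, and then test the effect of a candidate index. The table and column names below are purely hypothetical examples:

-- Report logical and physical reads for every statement in the session
SET STATISTICS IO ON
GO
-- ... run the suspect statement here and note the "logical reads" value ...

-- Hypothetical fix: if the plan shows a table scan on dbo.Orders filtered
-- by CustomerId, an index on that column usually turns the scan into a seek
CREATE NONCLUSTERED INDEX IND_Orders_CustomerId ON dbo.Orders(CustomerId)
GO
SET STATISTICS IO OFF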

Statements that read a lot of data (high READS) but do
not take much time to complete can be identified with a similar query:

SELECT * FROM dbo.TRACE1 WHERE Reads IN
(SELECT DISTINCT TOP 20 Reads FROM dbo.TRACE1 ORDER BY Reads DESC)

 

Use the same technique to find out whether the large READS volume
is caused by unnecessary scan operations. I hope this information will help
you, and I wish you good luck in your research.
