Measuring SQL Performance

Posted Thursday, April 1, 2004 11:23 AM
UDP Broadcaster


Group: General Forum Members
Last Login: Saturday, February 28, 2009 6:51 AM
Points: 1,489, Visits: 7
Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/chedgate/m



--
Chris Hedgate
http://www.hedgate.net/
Contributor to the Best of SQL Server Central volumes
Articles: http://www.sqlservercentral.com/columnists/chedgate/
Post #109528
Posted Friday, April 2, 2004 2:54 AM
SSCrazy


Group: General Forum Members
Last Login: 2 days ago @ 6:20 AM
Points: 2,923, Visits: 1,874

Good primer!

If I have a stored procedure that does multiple tasks, is there a way of getting the overall cost of the entire procedure, or do I have to manually tot up all the costs for each of the individual queries?



LinkedIn Profile
Newbie on www.simple-talk.com
Post #109615
Posted Friday, April 2, 2004 9:16 AM


SSC Rookie


Group: General Forum Members
Last Login: Friday, March 18, 2005 8:25 AM
Points: 37, Visits: 1

Hi David:

Use the SP:Completed event in SQL Profiler... this should be enough for your needs.
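If you would rather not run the Profiler GUI, the same event can be collected with a server-side trace. A rough sketch (the file path is just an example; event 43 is SP:Completed):

DECLARE @TraceID int, @on bit, @maxfilesize bigint
SET @on = 1
SET @maxfilesize = 5

-- Create the trace, writing to C:\Traces\proc_cost.trc
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\proc_cost', @maxfilesize, NULL

-- Capture TextData, Duration, Reads, Writes and CPU for SP:Completed
EXEC sp_trace_setevent @TraceID, 43, 1,  @on   -- TextData
EXEC sp_trace_setevent @TraceID, 43, 13, @on   -- Duration
EXEC sp_trace_setevent @TraceID, 43, 16, @on   -- Reads
EXEC sp_trace_setevent @TraceID, 43, 17, @on   -- Writes
EXEC sp_trace_setevent @TraceID, 43, 18, @on   -- CPU

EXEC sp_trace_setstatus @TraceID, 1            -- start the trace

Run the procedure, then stop (status 0) and close (status 2) the trace and open the .trc file in Profiler.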




Post #109697
Posted Friday, April 2, 2004 9:20 AM
SSC Veteran


Group: General Forum Members
Last Login: Thursday, November 8, 2012 4:20 PM
Points: 225, Visits: 88

It would be nice if you also showed how to clear the procedure cache and buffer cache so you can run the tests over and over and compare. I think the commands are

DBCC DROPCLEANBUFFERS  -- removes all clean pages from the buffer cache
DBCC FREEPROCCACHE     -- clears all cached execution plans
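
Putting it together into a repeatable cold-cache test might look like this (usp_MyProc is a made-up name; don't run this against a production server, since it flushes the caches for everyone):

CHECKPOINT                   -- write dirty pages to disk first, so that
                             -- DROPCLEANBUFFERS can empty the entire cache
DBCC DROPCLEANBUFFERS        -- start from a cold buffer cache
DBCC FREEPROCCACHE           -- force all plans to recompile

SET STATISTICS IO ON
SET STATISTICS TIME ON

EXEC dbo.usp_MyProc          -- the code under test (hypothetical name)

SET STATISTICS TIME OFF
SET STATISTICS IO OFF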




Post #109699
Posted Friday, April 2, 2004 9:50 AM


SSC-Dedicated


Group: Administrators
Last Login: 2 days ago @ 3:11 PM
Points: 31,368, Visits: 15,837
Great introductory article on looking at performance. The suggestions above are good, but most code might be improved if each developer took a few minutes to run these tools and see the relative performance of each query versus the time their code takes to run.
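
Even a crude timing wrapper (procedure name is made up) gives you a number to compare as the code changes:

DECLARE @start datetime
SET @start = GETDATE()

EXEC dbo.usp_MyProc          -- whatever code you are testing (hypothetical)

SELECT DATEDIFF(ms, @start, GETDATE()) AS elapsed_ms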






Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #109703
Posted Friday, April 2, 2004 11:34 AM
UDP Broadcaster


Group: General Forum Members
Last Login: Saturday, February 28, 2009 6:51 AM
Points: 1,489, Visits: 7
David: Using those tools I mentioned, you will have to sum up the numbers yourself. As msuraski said, there are other tools that can give a better idea of the entire proc. Personally I am normally more interested in optimizing each of the specific queries inside the proc, but of course you could sometimes try different approaches for the entire proc and want to check which of them to use.

adutton: Very good advice. I thought about including those commands, but was afraid the article would lose a little focus. But maybe they should have been mentioned. Perhaps they can be covered in a follow-up sometime, along with how to simulate production environments on dev servers.




--
Chris Hedgate http://www.hedgate.net/
Contributor to the Best of SQL Server Central volumes
Articles: http://www.sqlservercentral.com/columnists/chedgate/
Post #109725
Posted Monday, April 5, 2004 2:05 AM
SSCrazy


Group: General Forum Members
Last Login: 2 days ago @ 6:20 AM
Points: 2,923, Visits: 1,874

Thanks Chris,

I've been tuning a stored procedure that iterates down a tree structure to accumulate a list of ids within that structure and then returns the details associated with those ids.

I experimented with temp tables vs table variables and found that some parts of the procedure were faster with temporary tables and others were faster with table variables.

In this scenario I was interested in the overall performance of the procedure.

I spent quite a bit of time with Excel plotting the impact of one vs the other as the tree structure grew more complex.

In the end I decided that for shallow trees table variables were better, but for deep trees it was temporary tables that gained the edge.
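
For anyone curious, the shape of the loop was roughly this (table and column names invented); switching the table variable to a #temp table is the only change between the two versions I timed:

DECLARE @root int
SET @root = 1                                -- hypothetical root node id

DECLARE @ids TABLE (NodeId int PRIMARY KEY, Lvl int)
DECLARE @lvl int, @rc int
SET @lvl = 0

INSERT INTO @ids (NodeId, Lvl)
SELECT NodeId, 0 FROM dbo.Nodes WHERE NodeId = @root
SET @rc = @@ROWCOUNT

WHILE @rc > 0                                -- expand one level per pass
BEGIN
    SET @lvl = @lvl + 1
    INSERT INTO @ids (NodeId, Lvl)
    SELECT n.NodeId, @lvl
    FROM dbo.Nodes n
    JOIN @ids i ON n.ParentId = i.NodeId
    WHERE i.Lvl = @lvl - 1
    SET @rc = @@ROWCOUNT
END

-- return the details for every id collected
SELECT d.*
FROM dbo.NodeDetails d
JOIN @ids i ON d.NodeId = i.NodeId

The temp table can be indexed and gets statistics, which I suspect is why it pulls ahead as the tree gets deeper.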



LinkedIn Profile
Newbie on www.simple-talk.com
Post #109820
Posted Friday, December 23, 2005 5:57 AM
SSC Rookie


Group: General Forum Members
Last Login: Monday, April 5, 2010 1:37 PM
Points: 38, Visits: 3
Can we take it a step further, i.e., an article that explains what to do with the results obtained - from both a query rewrite and a hardware POV?
I ran a sample query that joined two tables, each with 200 million rows, looking to return 1001 records using a WHERE id BETWEEN clause. I don't think 4.6 seconds is bad, but I always look for better.
I'm heavily indexed on both these tables, and have just recently completed a rewrite of the entire process, from our front end web search tool to the redesign of the database and all the associated stored procs. The data gets an update of about 700k records a week: indexing is dropped, records are inserted, and indexes are reapplied (and statistics re-computed). It works and seems to generate fast responses, but it takes 6-8 hours to process.
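For reference, the weekly reload looks roughly like this (all names are invented, and it is simplified to a single index):

DROP INDEX dbo.BigTable.IX_BigTable_Id           -- drop before the load

INSERT INTO dbo.BigTable                         -- weekly feed, ~700k rows
SELECT * FROM dbo.WeeklyStaging

CREATE INDEX IX_BigTable_Id ON dbo.BigTable (Id) -- rebuild afterwards

UPDATE STATISTICS dbo.BigTable WITH FULLSCAN     -- recompute statistics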
Like the others above, I found that for short data sets table vars are better, while temp tables are better for larger returns. My SANs are normalized (set up to be read/write neutral, as opposed to one being faster than the other).
My overall questions (based on the results below) are:
How do I reduce logical reads, and what do they mean?
How do I reduce physical reads?
What's a read-ahead, and why did it need 916 of them when the query returned 1001 records? It was queried on the primary clustered key; should it not have 'known' via the indexing, statistics and bookmarking, and not needed to read ahead?
The primary table is a 3rd normal form, ID-only table in its own database; the secondary table is a loose 2nd normal form table (lots of text, some ids) in its own database, consisting of one year's data (there are 7 secondary DBs).
Hope that was enough info to go on.
My results:
Table 'primary'. Scan count 1001, logical reads 4859, physical reads 6, read-ahead reads 7.
Table 'secondary'. Scan count 1, logical reads 1007, physical reads 0, read-ahead reads 916.

SQL Server Execution Times:
CPU time = 1734 ms, elapsed time = 4675 ms.



Thanks, and don't forget to Chuckle
Post #246397
Posted Tuesday, December 27, 2005 2:02 PM
SSC Journeyman


Group: General Forum Members
Last Login: Tuesday, June 9, 2009 8:13 AM
Points: 75, Visits: 3
Though I was aware of all the tools mentioned, I always find myself looking for excuses not to bother. This was a great reminder that SQL Server provides several very useful tools for performance tuning.

Regarding DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE, in my experience, these are useful in a testing/staging environment only. I don't believe you would want to do this in a production environment.



Cheers,

Alex


Rogue DBA
Post #246822
Posted Wednesday, December 28, 2005 6:31 AM


SSChampion


Group: General Forum Members
Last Login: Friday, December 19, 2014 8:22 AM
Points: 14,205, Visits: 28,534

When I see words like "heavily indexed", red flags go up. Some questions to answer before turning to hardware:
How heavily indexed, and are all those indexes actually being used? Selectivity needs to be as high as possible, generally above 95%, for the query optimizer to use an index.
Are you getting excessive bookmark lookups, which can double your I/O in a lot of cases?
Are you getting hash joins instead of loop joins? You have to evaluate the query plan to see.
Are you getting index scans instead of index seeks? That's another indication that your indexes may not be selective enough, or that your joins are incorrect.
Does the table have a clustered index, and is the cluster placed on the column(s) that give you the best chance of avoiding bookmark lookups? The leading edge (first column) of the index needs to cluster the data into logical groups that speed up access.
Get all these questions answered, then you can start to look at hardware as a mechanism to speed things up. The worst thing you can do to your queries (write or read) is place a bunch of indexes that are either not used or improperly used by the query optimizer.
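
A quick way to eyeball the seek/scan and join-type questions without executing the query (the SELECT is just a placeholder):

SET SHOWPLAN_TEXT ON
GO

SELECT o.OrderId              -- your query goes here (hypothetical)
FROM dbo.Orders o
WHERE o.CustomerId = 42
GO

SET SHOWPLAN_TEXT OFF
GO

Look for Index Seek vs. Index Scan, Bookmark Lookup, and Hash Match vs. Nested Loops in the plan output.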

Oh, and nice article. It's shocking how often you hear that it doesn't matter how fast a query runs on the database. It's good to see other people pushing the need for these kinds of checks as part of query writing. Now if we can just get people to think about the queries being written as they design tables too.



----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of:
SQL Server Query Performance Tuning
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #246911