
A Technique for Determining the I/O Hog in your Database
Posted Thursday, August 24, 2006 2:45 PM
Forum Newbie


Group: General Forum Members
Last Login: Friday, August 29, 2014 12:00 PM
Points: 5, Visits: 230
Comments posted to this topic are about the content posted at temp
Post #303963
Posted Wednesday, August 30, 2006 3:19 AM
SSC Rookie


Group: General Forum Members
Last Login: Thursday, March 20, 2014 2:06 AM
Points: 45, Visits: 97
I liked this a lot, and found it very useful.


Post #304890
Posted Wednesday, August 30, 2006 6:21 AM


SSCoach


Group: General Forum Members
Last Login: Yesterday @ 1:09 PM
Points: 15,662, Visits: 28,053

Excellent article. Nice and thoughtful. I'm definitely adding it to my toolbox.

A question about your process, though. You went about isolating the database object that was giving you conniptions. I'm just surprised you were unable to spot the problem within the T-SQL code, you know, this proc runs longer and uses more resources than any other?

In any event, this was really well written, well explained and damned useful.

Thanks.



----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #304930
Posted Wednesday, August 30, 2006 6:42 AM
SSCrazy


Group: General Forum Members
Last Login: Monday, June 9, 2014 6:02 AM
Points: 2,674, Visits: 697

At first this seems to be an interesting read with some useful conclusions; however, on more careful analysis I would suggest that the heading and conclusions are somewhat misleading.
In a nutshell, the trace records the scan statistic, as shown by the output from this query:

select * from pubs.dbo.titleview

(25 row(s) affected)

Table 'titles'. Scan count 25, logical reads 50, physical reads 0, read-ahead reads 0.
Table 'titleauthor'. Scan count 23, logical reads 46, physical reads 0, read-ahead reads 0.
Table 'authors'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0.

The point about this query is that the scan on authors is actually an index scan taking 70% of the time, while the scans on the other two tables are clustered index seeks. OK, it's a poor example, I admit, but the point is that if the titles table were doing table scans the scan count would still stay at 25 (and 25 table scans could be bad news, whereas 25 index seeks are fine).
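To see this for yourself, a minimal sketch (assuming the pubs sample database is installed); STATISTICS IO gives the scan counts quoted above, and STATISTICS PROFILE (or the graphical plan) shows whether each "scan" is really a table scan, an index scan, or a clustered index seek:

-- statistics io reproduces the scan count / reads output above;
-- statistics profile lists the actual plan operators behind each "scan"
set statistics io on
set statistics profile on
go
select * from pubs.dbo.titleview
go
set statistics profile off
set statistics io off
go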

I dispute some of the maths and conclusions. In a reasonably configured SQL Server, any table hotspots should be in cache; I've found more often that less frequently used, heavy-I/O queries can flush the data cache. Technically, 200k I/Os could flush 1.5 GB through the data cache, yet for a reasonably sized table this may appear as only 1 scan (in fact, a select * from a 300-million-row table would show as 1 scan), so in the overall scheme of things the 1 scan that moved several gigabytes of data would not be shown as significant.

On a small table, a single integer value returned from an indexed table shows a scan count of 1, whereas returning the entire column (just over 2,100 rows) also shows 1 scan. In actual fact we have 4 bytes vs 8,600 bytes, or 2 page I/Os vs 5 page I/Os, yet 1 scan vs 1 scan. For small tables SQL Server is unlikely to do any row operations, and read-ahead is usually in 64 KB blocks; exactly how read-ahead deals with a table of only 32 KB I'm not 100% sure. Most typical integer secondary indexes would be contained within 3 pages, of which two pages will always be read, so it's wrong to attempt to multiply the scan count by 4 bytes to see how much data is being read; I'd suggest (in this case) it is always 2 pages.
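To put rough numbers on that, a sketch against the SQL Server 2000-style system tables (on 2005 and later, sys.dm_db_partition_stats would be the supported route); dpages is the count of 8 KB data pages, so one scan of a large table can push dpages * 8 KB through the data cache while still reporting a scan count of 1:

-- rough per-table page counts; one full scan moves roughly dpages * 8 KB of data
select  o.name        as table_name,
        i.rowcnt      as approx_rows,
        i.dpages      as data_pages,
        i.dpages * 8  as approx_kb
from    sysobjects o
join    sysindexes i on i.id = o.id
where   o.xtype = 'U'          -- user tables only
  and   i.indid in (0, 1)      -- heap or clustered index
order by i.dpages desc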

The article certainly gives a slant on object usage; however, I'd suggest that a view of each SQL statement versus its reads, writes, CPU, and duration is of much greater value.
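For the record, on SQL Server 2005 and later that statement-level view can be pulled from the dynamic management views; a rough sketch (the column choices are mine, not anything from the article):

-- top statements by logical reads; swap the order by for writes, cpu or duration
select top 20
        qs.execution_count,
        qs.total_logical_reads,
        qs.total_logical_writes,
        qs.total_worker_time   as total_cpu_microsec,
        qs.total_elapsed_time  as total_duration_microsec,
        substring(st.text, (qs.statement_start_offset / 2) + 1,
                  ((case qs.statement_end_offset
                        when -1 then datalength(st.text)
                        else qs.statement_end_offset
                    end - qs.statement_start_offset) / 2) + 1) as statement_text
from    sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
order by qs.total_logical_reads desc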



The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
Post #304939
Posted Wednesday, August 30, 2006 8:24 AM
SSC Rookie


Group: General Forum Members
Last Login: Thursday, September 8, 2011 2:39 AM
Points: 42, Visits: 31
If you look at how the calculation works, with all averages being equal, it is left open to interpretation.

Personally, I don't think you can determine whether you have a problem using a single-sided approach; this technique would give some depth by crossing off some of those wildcards that tend to pop up once you start tracing performance problems.


Lukas Botha
Post #305003
Posted Wednesday, August 30, 2006 10:23 AM
Right there with Babe


Group: General Forum Members
Last Login: Wednesday, February 5, 2014 8:30 AM
Points: 769, Visits: 191
Nice article; definitely adding it to my collection for some time when I have a little more spare time.
Post #305098
Posted Wednesday, August 30, 2006 11:06 AM


Mr or Mrs. 500


Group: General Forum Members
Last Login: Friday, December 13, 2013 8:04 PM
Points: 545, Visits: 255

Thank you, Michael. Excellent article; this is definitely an article to add to a DBA's collection.



Kindest Regards,

Sameer Raval [Sql Server DBA]
Geico Insurance
RavalSameer@hotmail.com

Post #305118
Posted Wednesday, August 30, 2006 11:45 AM
Forum Newbie


Group: General Forum Members
Last Login: Tuesday, May 17, 2011 7:05 AM
Points: 5, Visits: 16

Excellent article! Just one question: as clearly stated in the beginning of the article, this technique should be used after all "commonly used" techniques have been applied. Can we turn this around? I mean, start with this approach and then move on to other, more "common" diagnostics? I know this is focused mainly on object/traffic issues, but just how common are these issues? Maybe by fixing the problems identified by applying this method, other problems are solved as well? Man... it was more than one question, wasn't it?

Post #305131
Posted Wednesday, August 30, 2006 1:52 PM
SSC Veteran


Group: General Forum Members
Last Login: Friday, September 24, 2010 8:44 AM
Points: 254, Visits: 18

I have one problem with the calculations.  Overall it's a good article, and more metrics are always useful, if they're accurate.

He lists the calculation for data flow as Data Flow = Size of Data Element * Number of times it is accessed, and uses AverageRecordSize to represent Size of Data Element.

But there's a big piece missing, namely the number of rows.  A table scan or index scan will retrieve each row of each table/index.  So we should be multiplying by total table size, i.e. AverageRecordSize * NumberOfRows, or better yet, NumberOfPages.

So, a wide table with a few rows will come across differently than a narrow table with millions of rows, even if they take up the same number of pages.  I think it's best to think of table or index data flow in terms of pages, since that's the memory allocation chunk size and basis of disk retrieval.
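As a sketch of what that might look like (assuming the article's IndexCapture table holds one row per captured scan, with objectid and indexid columns, and using SQL Server 2000-style sysindexes for the page counts):

-- page-based data flow: scans * pages rather than scans * AverageRecordSize
-- IndexCapture and its columns are assumed from the article; dpages is taken
-- as a rough page count for the object or index
select  ic.objectid,
        ic.indexid,
        count(*)                  as scans,
        max(si.dpages)            as pages,
        count(*) * max(si.dpages) as approx_page_flow
from    IndexCapture ic
join    sysindexes si
  on    si.id = ic.objectid
 and    si.indid = ic.indexid
group by ic.objectid, ic.indexid
order by approx_page_flow desc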

 




Dylan Peters
SQL Server DBA
Post #305161
Posted Wednesday, August 30, 2006 2:39 PM
Hall of Fame


Group: General Forum Members
Last Login: Thursday, July 17, 2014 10:56 AM
Points: 3,924, Visits: 1,607
Hi,

select objectid, indexid, sum(*) as scans
from IndexCapture
group by objectid, indexid

What is sum(*) here? It should be the sum of some column name, right?
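My guess is that count(*) was intended, since sum(*) will not parse, i.e. something like:

-- presumably the intended aggregate (my guess; sum(*) is not valid T-SQL)
select objectid, indexid, count(*) as scans
from IndexCapture
group by objectid, indexid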


SQL DBA.
Post #305181