SQLServerCentral is supported by Redgate
 
A Technique for Determining the I/O Hog in your Database


Mike Morin-219647
Comments posted to this topic are about the content of the article.
gazzer
I liked this a lot, and found it very useful.



Grant Fritchey

Excellent article. Nice & thoughtful. I'm definitely adding it to my toolbox.

A question about your process, though. You went about isolating the database object that was giving you conniptions. I'm just surprised you were unable to spot the problem within the T-SQL code; you know, this proc runs longer and uses more resources than any other?

In any event, this was really well written, well explained and damned useful.

Thanks.



----------------------------------------------------
The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood...
Theodore Roosevelt

The Scary DBA
Author of: SQL Server Query Performance Tuning and SQL Server Execution Plans
Product Evangelist for Red Gate Software
colin.Leversuch-Roberts

At first this seems an interesting read with some useful conclusions; however, on more careful analysis I would suggest that the heading and conclusions are somewhat misleading.
In a nutshell, the trace records the scan statistic, as shown here for a query:

select * from pubs.dbo.titleview

(25 row(s) affected)

Table 'titles'. Scan count 25, logical reads 50, physical reads 0, read-ahead reads 0.
Table 'titleauthor'. Scan count 23, logical reads 46, physical reads 0, read-ahead reads 0.
Table 'authors'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0.

The point about this query is that the scan on authors is actually an index scan taking 70% of the time, while the scans on the other two tables are clustered index seeks. OK, it's a poor example, I admit; the point is that if the titles table were doing table scans, the scan count would still stay at 25 (and 25 table scans could be bad news, whereas 25 index seeks are fine).
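For anyone wanting to reproduce this, the per-table output above is what you get by enabling I/O statistics for the session before running the query:

```sql
SET STATISTICS IO ON;   -- report per-table scan counts and page reads for each statement

SELECT * FROM pubs.dbo.titleview;

SET STATISTICS IO OFF;
```

Note that the scan count alone doesn't tell you whether each "scan" was a full scan or a seek; you have to check the execution plan for that.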

I dispute some of the maths and conclusions. In a reasonably configured SQL Server, any table hotspots should be in cache; I've found more often that less frequently used, heavy-I/O queries can flush the data cache. Technically, 200k I/O could flush 1.5 GB through the data cache, yet for a reasonable table this may appear as only 1 scan (in fact a select * from a 300 million row table would show as 1 scan), so in the overall scheme of things the 1 scan that moved several gigabytes of data would not be shown as significant.

On a small table, a single integer value returned from an indexed table shows a scan count of 1, whereas reading the entire column (just over 2,100 rows) also shows 1 scan. In actual fact we have 4 bytes vs 8,600 bytes, or 2 page I/Os vs 5 page I/Os, yet 1 scan vs 1 scan. For small tables SQL Server is unlikely to do any row ops, and read-ahead is usually in 64 KB blocks; exactly how read-ahead deals with a table of only 32 KB I'm not 100% sure. Most typical integer secondary indexes would be contained within 3 pages, of which two will always be read, so it's wrong to multiply the scan count by 4 bytes to see how much data is being read; I'd suggest (in this case) it's always 2 pages.

The article certainly gives a slant on object usage; however, I'd suggest that a breakdown of SQL statement vs. reads, writes, CPU, and duration is of much greater value.
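That statement-level breakdown can be pulled directly from the plan cache on SQL Server 2005 and later; a sketch (requires VIEW SERVER STATE, and note the counters are cumulative only since each plan was cached):

```sql
-- Top statements by total logical reads, with writes, CPU and duration alongside.
SELECT TOP 20
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qs.total_logical_reads,
    qs.total_logical_writes,
    qs.total_worker_time  AS total_cpu_microseconds,
    qs.total_elapsed_time AS total_duration_microseconds,
    qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```

On SQL 2000 you'd have to capture the same columns with a Profiler/server-side trace instead.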



The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
Lukas Botha
If you look at how the calculation works, all averages being equal, it is left open to interpretation.

Personally, I don't think you can determine whether you have a problem using a single-sided approach, but this technique would give some depth to crossing off some of those wildcards that tend to pop up once you start tracing performance problems.

Lukas Botha
jereme.guenther@gmail.com
Nice article; definitely adding it to my collection for some time when I have a little more spare time.
Sameer Raval

Thank you, Michael. Excellent article; this is definitely an article to add to a DBA's collection.



Kindest Regards,

Sameer Raval [Sql Server DBA]
Geico Insurance
RavalSameer@hotmail.com
Taconvino

Excellent article! Just one question: as clearly stated at the beginning of the article, this technique should be used after all "commonly used" techniques have been applied. Can we turn this around? I mean, start with this approach and then move on to the more "common" diagnostics? I know this is focused mainly on object/traffic issues, but just how common are these issues? Maybe by fixing the problems identified with this method, other problems get solved as well? Man... it was more than one question, wasn't it?


Dylan Peters

I have one problem with the calculations. Overall it's a good article, and more metrics are always useful, if they're accurate.

He lists the calculation for data flow as: Data Flow = Size of Data Element * Number of Times It Is Accessed, and uses AverageRecordSize to represent the Size of Data Element.

But there's a big piece missing, namely the number of rows. A table scan or index scan will retrieve each row of each table/index. So we should be multiplying by total table size, i.e. AverageRecordSize * NumberOfRows, or better yet, NumberOfPages.

So, a wide table with a few rows will come across differently than a narrow table with millions of rows, even if they take up the same number of pages. I think it's best to think of table or index data flow in terms of pages, since that's the memory allocation chunk size and basis of disk retrieval.
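A page-based version of that calculation might look like the sketch below, joining the article's IndexCapture table (using the objectid/indexid columns from the query quoted elsewhere in this thread) to the SQL 2000-era sysindexes view, where dpages counts the data pages of the heap or clustered index (indid 0 or 1):

```sql
-- Approximate per-object data flow in pages rather than bytes:
-- data pages in the object * number of scans captured.
SELECT o.name              AS object_name,
       i.dpages            AS data_pages,
       c.scans,
       i.dpages * c.scans  AS estimated_page_flow
FROM sysindexes AS i
JOIN sysobjects AS o
  ON o.id = i.id
JOIN (SELECT objectid, indexid, COUNT(*) AS scans
      FROM IndexCapture
      GROUP BY objectid, indexid) AS c
  ON c.objectid = i.id
 AND c.indexid  = i.indid
WHERE i.indid IN (0, 1)
ORDER BY estimated_page_flow DESC;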



Dylan Peters
SQL Server DBA
SanjayAttray
Hi,

select objectid, indexid, sum(*) as scans
from IndexCapture
group by objectid, indexid

What is sum(*) here? It should be the sum of some column name, right?
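For what it's worth, sum(*) is not valid T-SQL; presumably the intent was to count the captured rows per object/index pair, which would be:

```sql
-- COUNT(*) rather than sum(*): one captured row per scan observed.
SELECT objectid, indexid, COUNT(*) AS scans
FROM IndexCapture
GROUP BY objectid, indexid;
```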

SQL DBA.