Capturing I/O patterns using Process Monitor

  • Comments posted to this topic are about the item Capturing I/O patterns using Process Monitor

  • Very good article indeed. One small issue that is easily fixed is that you need to use BIGINT and not INT, as any respectably sized database is going to produce offsets that are too large for an INT.
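
    To put a number on it, an INT tops out at 2,147,483,647, so any file offset beyond roughly 2GB overflows it. A quick illustration (the offset value is made up):

    DECLARE @offset BIGINT
    SET @offset = 5000000000     -- an offset ~5GB into a data file
    SELECT @offset               -- fine as a BIGINT
    SELECT CAST(@offset AS INT)  -- fails with an arithmetic overflow error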

    Also, for our databases (some nearing 1TB), the import does not work unless the output is first run through Excel and resaved as XLS, as the flat file import driver falls over on SQL 2005.

  • Ah yes, of course - I'll submit a change to the article at some point.

  • Man, that was one fast response!

  • One other change that may be needed: not all of the Detail data presented is an offset, so some cleaning may be required with

    DELETE FROM Iops WHERE SUBSTRING(Detail, 1, 6) <> 'Offset'

    to remove the rows where access appears to have been denied; otherwise the SUBSTRING parsing will fail on occasion.

    Another option is adding in a filter that keeps only Detail values beginning with 'Offset', as sketched below.
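
    A minimal sketch of that filter, using the table and column names from the article:

    -- Keep only rows whose Detail column actually carries an offset;
    -- anything else (access denied messages and the like) is removed
    DELETE FROM Iops
    WHERE Detail NOT LIKE 'Offset%'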

    This may have been caused by my adding in .ldf files as well. Log files show a very marked contrast to the .mdf data files, as the block size changes a lot, and tempdb can write a large amount of data out in quite large blocks.

  • How do you get around the following issue on SQL 2000?

    Warning: The table 'Iops' has been created but its maximum row size (64050) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes.

  • skyline6969 (6/23/2009)


    One other change that may be needed: not all of the Detail data presented is an offset, so some cleaning may be required with

    DELETE FROM Iops WHERE SUBSTRING(Detail, 1, 6) <> 'Offset'

    to remove the rows where access appears to have been denied; otherwise the SUBSTRING parsing will fail on occasion.

    Another option is adding in a filter that keeps only Detail values beginning with 'Offset'.

    This may have been caused by my adding in .ldf files as well. Log files show a very marked contrast to the .mdf data files, as the block size changes a lot, and tempdb can write a large amount of data out in quite large blocks.

    Ok, got past that part.

    The Delete helped.

    I also had to change the Update statement from INT to BIGINT.

    When I ran the IOCount, though, I ended up with:

    34,818 that have an IOSize of 8.

    54,401 that have an IOSize of 256. That concerns me since I only grabbed .mdf items. Could I have corrupted the data with my changes above?

  • Using procmon for this is certainly a new twist - thanks!

  • Hi Jason,

    Changing to a BIGINT shouldn't cause any corruption - I will make the changes to the article myself.

    If you are getting I/O sizes of 256 then it probably means SQL Server is doing read-ahead, which is good for performance. Take a look at the "Read-Ahead" section of http://technet.microsoft.com/en-us/library/cc966500.aspx for further details.

    It does say that editions below Enterprise are limited to 128 pages per read-ahead. Dividing the 256 by 8 (8KB = 1 page) gives you 32 pages, so you are well within that limit.
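
    If you want to sanity-check the arithmetic against your own capture, something like this works (the IOSize column name is taken from the thread, and the figures above suggest it is recorded in KB):

    -- 8KB per page, so a 256KB read-ahead covers 32 pages
    SELECT IOSize, IOSize / 8 AS Pages, COUNT(*) AS IOCount
    FROM Iops
    GROUP BY IOSize
    ORDER BY IOSize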

    Ed

  • Hi Skyline,

    It should be fine, but if other files or operations are being included then it is probably best to filter them out at some point.

    Ed

  • Nice article.

    The DELETE was the trick for me, as well as the BIGINT. This will be useful to our virtualization managers, as the block sizes seem to have a large impact on performance.

    Alan

  • Jason Crider (6/23/2009)


    How do you get around the following issue on SQL 2000?

    Warning: The table 'Iops' has been created but its maximum row size (64050) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes.

    Remove some columns first in Excel (or whatever you use instead). You most probably do not need the Process Name, the PID or the Result, and the paths can most probably be trimmed down to just filenames, etc. If that's not enough, you could also extract the Offset and Length from the Detail column in Excel and then import just the two numbers. In either case it's just a warning; you only run into problems if some actual rows cross the limit. A trimmed-down table is sketched below.
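
    A minimal sketch of such a table (the column names and types are illustrative, not from the article):

    -- Only what the analysis needs; the row size stays well under the
    -- 8060-byte limit, so the SQL 2000 warning goes away
    CREATE TABLE Iops
    (
        Sequence INT IDENTITY(1,1), -- or import procmon's sequence number
        FileName VARCHAR(260),      -- the path trimmed to just the filename
        Offset   BIGINT,            -- extracted from Detail; note BIGINT
        Length   INT                -- also extracted from Detail
    )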

    There's a different problem. DBCC PAGE can only tell you which table an offset belongs to at the moment you run the query, which may not be the same table it belonged to when the disk access was made. The row may have been deleted and the page reassigned to a different table, I believe. How likely that is depends on your database, but you could easily miss all access to tables that are periodically emptied (a queue for a batch process or something similar). Or am I wrong?

    Jenda

  • Hi Jenda,

    Yes, it is possible. In addition, I think if you try to monitor tempdb, where tables are created and dropped on the fly, it is going to happen consistently.

    I think we need to take a pragmatic approach to using procmon; it is a free tool and not specifically designed for this purpose. Bearing that in mind, there are some edge cases where we would have to do some extra work if getting the name of the table were critical, as opposed to getting the size and type of the I/Os.

    If you had a table that was being created and dropped, or filled and emptied, it is still possible. How you do this depends on the type of index (heap, clustered index or non-clustered index) and whether partitioning is used. As an example, to get a list of pages in a clustered index you would:

    1 - Use sysindexes to get the "root" page

    2 - Use DBCC PAGE(database, fileId, pageId, 3) -- Remember to convert the page id from a hex string to file and page id.

    3 - Use the ChildPageId column to get a list of pages which belong to the index and compare these against your offsets

    This gives you a list of the FileIds and ChildPageIds that belong to the index, which you would need to save and then match up with your offsets.

    If the table or index is large then you would also need to use DBCC PAGE with option 0 (DBCC PAGE(database, fileId, pageId, 0)) and check m_nextPage in case more than one IAM page is used.
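
    A rough sketch of the first three steps, assuming SQL Server 2005 and a hypothetical table dbo.MyTable (the file and page numbers are example values you would take from the previous step's output):

    DBCC TRACEON (3604)  -- route DBCC PAGE output to the client

    -- 1: the root page of the clustered index; the 6-byte value encodes
    -- the page id and file id as byte-swapped hex
    SELECT root FROM sysindexes
    WHERE id = OBJECT_ID('dbo.MyTable') AND indid = 1

    -- 2: dump an index page; WITH TABLERESULTS returns rows, including
    -- a ChildPageId column for each index entry
    DBCC PAGE ('MyDatabase', 1, 153, 3) WITH TABLERESULTS

    -- 3: walk the ChildPageIds down to the leaf level and compare them
    -- against your offsets (offset / 8192 = page id for 8KB pages)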

    If you need to know more about getting the details for different types of indexes, I would start by getting the SQL Internals Viewer (http://internalsviewer.codeplex.com/).

    Thanks,

    Ed

  • This article has come at a very good time for me, as we are having I/O problems on virtual SQL servers. When I captured the procmon output as a CSV file I didn't see column 0 as Sequence, and I only got 7 columns in all, with the last one being the Detail.

    What am I doing wrong?

    Thanks

    Chris

  • I couldn't get anything but an "n/a" on the sequence number export, so I simply made the Sequence column of the table auto-increment and everything worked fine.
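
    In other words, something along these lines (the table and column names follow the earlier posts):

    -- Drop the unusable imported column and let SQL Server number the rows
    ALTER TABLE Iops DROP COLUMN Sequence
    ALTER TABLE Iops ADD Sequence INT IDENTITY(1,1)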

    Alan
