Performance Monitoring by Internal Fragmentation Measurement

  • Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/kbiller/performancemonitoringbyinternalfragmentationmeasur.asp

  • Does this tool work on Windows Server 2000 and 2003?  The site says that the product "works on Windows XP and NT only".

  • Yes, it works on the entire NT family.

    On servers, you can run it from an administrator workstation; just map the server's C$ drive.

  • > "Suppose that the level of chaos (i.e. fragmentation) of the data in a given Data Base can be calculated and quantified. Such a factor will alert the System Manager way before the end-users start experiencing any malfunction. One will be able to manage IT resources and schedule the reorganization process ahead of time, before it becomes impractical, and before the complaints start arriving."

    Q1: Please explain: in the case of disk statistics, how is this superior (or indeed is it at all) to, say, having a scheduled task that runs:

    defrag C: -a -v

    and parses the result to determine when defragmenting is needed? (A rough sketch of such a task follows this post.)


    Q2: In the case of database fragmentation, how is this superior to DBCC statistics?
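    A minimal sketch of such a task, assuming the XP/2003-era defrag.exe report wording ("Total fragmentation = N %", "You should defragment this volume") and an arbitrary 15% threshold - neither of which is part of the tool under discussion - might look like this in Python:

    # Sketch of the scheduled-task idea from Q1: run "defrag C: -a -v" and
    # decide from the analysis report whether a defrag pass is worth scheduling.
    import re
    import subprocess

    def analyze_volume(drive="C:"):
        result = subprocess.run(
            ["defrag", drive, "-a", "-v"],
            capture_output=True, text=True
        )
        report = result.stdout

        # The analysis report already carries Microsoft's own recommendation.
        if "You should defragment this volume" in report:
            return True, report

        # Otherwise fall back on the reported percentage, if one can be found.
        match = re.search(r"Total fragmentation\s*=\s*(\d+)\s*%", report)
        if match and int(match.group(1)) > 15:  # threshold is a guess; tune it
            return True, report

        return False, report

    if __name__ == "__main__":
        needs_defrag, _ = analyze_volume("C:")
        print("Defrag recommended" if needs_defrag else "Volume looks OK")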

  • Thank you for the question.

    When issuing the DEFRAG C: -A -V command, you get a lot of information, but you cannot decide whether it is good or bad.

    Microsoft uses percentages, but you are not sure of what.

    What the user wants to know is whether he is suffering from performance degradation or not.

    The figure LaceLevel gives is proportional to the performance degradation of the computer, and thus to the satisfaction of the user. It is extremely important on servers. You can live without it if you decide not to look at your data's internal structure. Please remember that copying the disk solves everything!

    THIS IS THE FIRST TIME THAT SUCH A FIGURE IS GIVEN.

    You can run it on either a loaded disk or a tidy one and see for yourself. Run it before and after the defrag, and you'll be convinced.

    It is the same in SQL: DBCC gives the figures, but if you need a qualitative measurement, you'll have to wait until we finish.
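    For reference, these are the kinds of raw figures DBCC already exposes today. A minimal sketch, assuming pyodbc against SQL Server 2000/2005, where the server, database, and table names are placeholders:

    # Pull the raw DBCC SHOWCONTIG figures as a result set.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=MyDb;Trusted_Connection=yes"
    )
    cursor = conn.cursor()

    # WITH TABLERESULTS returns the statistics as rows instead of messages.
    cursor.execute("DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS")
    for row in cursor.fetchall():
        # LogicalFragmentation and ScanDensity are the figures most people
        # watch, but they are raw numbers, not a single quality grade.
        print(row.ObjectName, row.IndexName,
              row.LogicalFragmentation, row.ScanDensity)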

  • > "When issuing the DEFRAG C: -A-V command,, you get a lot of information, but you can not decide if it is good or bad.

    MS are using percentage, but you are not sure of what."

    Well... when the fragmentation percentages are high (MS's relative fragmentation of files, AFAIK), the analysis report generally contains a message such as "You should defragment this volume.", which seems to suggest pretty clearly what to do.

    > "The figure LaceLevel gives is proportional to the performance degradation of the computer, and thus to the satisfaction of the user. It is extremely important on servers. You can live without it if you decide not to look at your data's internal structure. Please remember that copying the disk solves everything!"

    Well, fair enough on that point perhaps; percentage fragmentation as a metric may leave something to be desired. One might imagine that an increment at the high end (say 70% + 5%) could hurt performance far more than the same increment at the low end (say 10% + 5%) - would that be the sort of situation the proposed statistic would reflect better?


  • This is exactly what we had in mind when the project started. What we found out is that even when you get a "you do not need to defrag" message, it is based only on the status of some of the files, and not on all of the components that affect fragmentation.

    Remember that the fragmentation problem (I mean what is causing the performance degradation) is a combination of the fragmentation of all the files AND the fragmentation of the free space (see the toy sketch below for one way of combining the two).

    In several cases, when LaceLevel gave a relatively high grade and the operating system said "No need to defrag", using other software improved the situation.
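    To make that combination concrete, here is a toy Python illustration: a made-up score that rises with both file fragmentation and free-space fragmentation. It is only a sketch of the idea - it is not LaceLevel's formula, which has not been published, and the equal weights are arbitrary.

    # Toy score combining the two components named above: file fragmentation
    # and free-space fragmentation. NOT LaceLevel's actual formula.
    def toy_fragmentation_score(file_fragment_counts, free_space_runs):
        """file_fragment_counts: fragments per file (1 = contiguous).
        free_space_runs: sizes of the free-space extents on the volume."""
        # Extra fragments per file, averaged: 0.0 when every file is contiguous.
        file_part = (sum(n - 1 for n in file_fragment_counts)
                     / max(len(file_fragment_counts), 1))

        # Free space chopped into many small runs also hurts future allocations.
        total_free = sum(free_space_runs)
        largest_run = max(free_space_runs) if free_space_runs else 0
        free_part = 1.0 - (largest_run / total_free) if total_free else 0.0

        # Arbitrary equal weighting of the two components.
        return 0.5 * file_part + 0.5 * free_part

    # A tidy volume scores near 0; a chopped-up one scores higher.
    print(toy_fragmentation_score([1, 1, 2], [1000, 20, 5]))     # ~0.18
    print(toy_fragmentation_score([4, 7, 3], [50, 60, 40, 30]))  # ~2.17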

  • The next stage of our development is distributed by Google at

    http://desktop.google.com/plugins/i/lacelevel2.html

    We are looking for assistance to complete the SQL side of the project (C++ programming with DBCC calls).

    If anybody is interested, please contact us at info@disklace.com

  • Will the software eventually support disks larger than 500 GB? Also, I noticed that the demo only gives accurate numbers up to 80 GB. Most of our disks are 800 GB RAID configurations, so I'd like to see larger disk support in both the demo and the full version. Is this something you have slated for the future?
