AutoGrowth 1 MB

  • I've been trying to help another DBA with his project. It has been in production for five years or more and is running slow. I've determined there are hundreds of connection requests per second, probably taking up 95% of the CPU. I also saw that table and index fragmentation is at 99% on all the largest tables. But what concerns me most right now is that all of the databases have been set to autogrow in 1 MB increments for years. It is on an SSD, so does it matter? I had him change it to 1 GB growth, since the database is at 300 GB now. My concern is not the fragmentation of allocations on disk but the potential mess it creates for SQL Server when retrieving a page of a table. Is there a page allocation map in each of these 1 MB growth extensions?

    How could one clean up this mess without copying all objects to a new database?

    Would backing up and restoring at least put the pages contiguous on disk?

    What would you do?
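
    For reference, checking the current growth setting and switching to a fixed increment looks roughly like this (the database and logical file names below are placeholders):

        -- Current size and autogrowth setting per file
        -- (size and growth are stored in 8 KB pages unless is_percent_growth = 1)
        SELECT name, type_desc,
               size * 8 / 1024 AS size_mb,
               CASE WHEN is_percent_growth = 1
                    THEN CAST(growth AS varchar(10)) + ' %'
                    ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
               END AS growth_setting
        FROM sys.master_files
        WHERE database_id = DB_ID('MyDatabase');

        -- Switch the data file to a fixed 1 GB growth increment
        ALTER DATABASE [MyDatabase]
        MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 1GB);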

  • Even on SSD, the fragmentation will be an issue. The linking of the file fragments on disk is managed by the operating system and file system, outside of SQL Server, so there is no map of them within SQL Server.

    While it is specifically about index fragmentation on SSD, the following illustrates that there is still increased overhead with fragmentation:

    https://www.sqlskills.com/blogs/jonathan/does-index-fragmentation-matter-with-ssds/

    Conventional wisdom is not to defrag the files on an SSD, because any performance improvement comes at the cost of reduced drive lifespan.

    (Here is an example article that is not overly technical: http://www.pcworld.com/article/2047513/fragging-wonderful-the-truth-about-defragging-your-ssd.html?page=2)

    That said, it is better for the data files to be as contiguous as possible: reading data spread across 1,000 file fragments on disk will be slower than reading the same data from a single fragment.
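
    Within SQL Server you can only see the logical (index-level) fragmentation and page density, not where the file fragments actually sit on disk. A sketch of that check, run in the database in question (SAMPLED mode keeps the scan reasonably cheap):

        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.index_type_desc,
               ips.avg_fragmentation_in_percent,
               ips.avg_page_space_used_in_percent,   -- page density
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id
         AND i.index_id = ips.index_id
        ORDER BY ips.avg_fragmentation_in_percent DESC;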

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Excellent links! As I suspected, fragmentation does result in more levels in the index tree that have to be navigated. Also, I think my co-worker can get more rows per page by rebuilding the indexes, which would reduce the amount of data cache needed.

    I hadn't thought of defragging an SSD at the OS level, but it might help to do it this once. The article seems to imply that reading consecutive sectors is done in one request, as opposed to individual requests for fragmented pages, which justifies reindexing. He has mostly heap tables, which I think he can defrag by temporarily creating a clustered index in "heap" order and then dropping it, leaving the table as a heap again but in contiguous pages; something like the sketch below.
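
    A minimal sketch of that approach for one table (the table, index, and column names here are made up):

        -- Building the clustered index rewrites the heap's pages contiguously...
        CREATE CLUSTERED INDEX CIX_Temp_Defrag
            ON dbo.MyHeapTable (SomeColumn);

        -- ...and dropping it turns the table back into a heap, now on contiguous pages.
        DROP INDEX CIX_Temp_Defrag ON dbo.MyHeapTable;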

  • You are correct about the defrag of the heaps. But to add a CI and then remove it would constitute unnecessary IO.

    I would recommend building a clustered index and leaving it in place. There is an added benefit to having the CI. Check out forwarded records and the impact they can have.
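
    A quick way to see whether forwarded records are already hurting the heaps (a rough sketch; the count is only populated in SAMPLED or DETAILED mode):

        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               ips.forwarded_record_count,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
        WHERE ips.index_type_desc = 'HEAP'
          AND ips.forwarded_record_count > 0;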

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • This is sort of on topic but not.

    By running this command you will see how fragmented the transaction log is. It is very important to have as few fragments (virtual log files, or VLFs) as possible. I have found several of mine with well over 1,000 fragments. To fix this, I simply shrank the transaction log to 100 MB, then grew it back to the size it should be and set a sensible growth increment.

    I know this won't help you with the data file (.mdf) issues, but the transaction log is important as well.
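
    The command referred to above isn't pasted here; a common way to get the count is DBCC LOGINFO (one row per virtual log file), or sys.dm_db_log_info on newer versions. The shrink-and-regrow step might look something like this (database name, file name, and sizes are placeholders):

        USE MyDatabase;

        -- One row per VLF, so the row count is the fragment count
        DBCC LOGINFO;

        -- Shrink the log right down, then grow it back to its target size in one step,
        -- with a growth increment large enough to keep the VLF count low.
        DBCC SHRINKFILE (N'MyDatabase_log', 100);     -- target size in MB

        ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 20GB, FILEGROWTH = 4GB);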
