Fragmentation due to Frequent Re-Creation of Snapshots?

  • We have a 25GB database that we are mirroring across a WAN. Since using snapshots is the only way to get read access to the mirrored database (don't get me started on how lame that is), I am considering a scheduled task, run either nightly or weekly, that drops the existing snapshot and re-creates it.

    Although I am confident this will work, I am wondering what kind of "damage" I may be doing in the form of fragmentation. The 25GB snapshot file is a sparse file, but does creating it still cause the file system to allocate all 25GB every time, possibly causing nasty fragmentation?

    I would love it if there were some kind of snapshot reset command that would essentially do the same thing and just zero out the snapshot file, but I have found no such command, so I am left with dropping and re-creating the snapshot (sketched below).
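
    For reference, a minimal sketch of the drop/re-create step I have in mind. The database name, snapshot name, logical file name, and path are all placeholders; adjust for your environment:

    ```sql
    -- Minimal sketch of a scheduled snapshot refresh (all names and the
    -- path below are placeholders, not a real environment).
    IF DB_ID(N'MirrorDB_Snap') IS NOT NULL
        DROP DATABASE MirrorDB_Snap;          -- drop the old snapshot

    -- Re-create it against the mirrored database. Every data file in the
    -- source needs a corresponding sparse-file entry here.
    CREATE DATABASE MirrorDB_Snap
    ON ( NAME = MirrorDB_Data,                -- logical name of the source data file
         FILENAME = N'S:\Snapshots\MirrorDB_Snap.ss' )
    AS SNAPSHOT OF MirrorDB;
    ```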

  • To refresh a snapshot you have to drop the existing one and create a new one; there is no other way.

  • Interesting concern. AFAIK, the sparse file doesn't actually allocate space until it's needed, so while it might report 25GB as its size, it isn't actually using that much on disk (see the query below).

    Fragmentation? Interesting question; I think you'd have to test to see what the effect is.
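
    For what it's worth, you can check that from SQL Server itself rather than trusting Explorer's "size" column: on SQL Server 2008 or later, sys.dm_io_virtual_file_stats exposes size_on_disk_bytes, the bytes the sparse file actually occupies. A quick sketch, assuming a hypothetical snapshot name (MirrorDB_Snap):

    ```sql
    -- Nominal file size vs. actual bytes the sparse file occupies on disk
    -- (the snapshot name is a placeholder).
    SELECT mf.name                          AS logical_name,
           mf.size * 8 / 1024               AS nominal_size_mb,  -- size is in 8KB pages
           vfs.size_on_disk_bytes / 1048576 AS size_on_disk_mb
    FROM sys.master_files AS mf
    JOIN sys.dm_io_virtual_file_stats(DB_ID(N'MirrorDB_Snap'), NULL) AS vfs
      ON vfs.database_id = mf.database_id
     AND vfs.file_id     = mf.file_id;
    ```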

  • UPDATE:

    I figured I would take a look at the current fragmentation before starting my nightly drop/re-create process. To my surprise, the snapshot file is already seriously fragmented. Adding to my surprise, this is on a disk that still has 93% free capacity!

    The snapshot was created 2 days ago and its "size on disk" has only grown to 620MB. But over that time, the file has split into 6,000+ fragments (according to Defraggler)! Strangely, I don't see much on the web or in other discussion forums about snapshots causing such gross fragmentation. Either I'm doing something wrong or I'm the first one to notice.

  • Wow. I'll see if I can bump this to a few people to see if that's occurring on other systems.

  • We encountered exactly the same issue on our databases with snapshots, the difference being that ours are around 15TB rather than 25GB (and growing by 600GB a month). In the end we limited the number of snapshots and put them onto a completely separate drive.

    Originally we were using the same disk as one of the data files, but that resulted in a massive number of fragments, and the only way to resolve it was to move the cause somewhere it wouldn't be a problem.

    Obviously the snapshot reports the full source size, so you can't just move the file; you should always drop and re-create. At one point we had 5 snapshots, nominally equalling around 60TB, on a 2TB disk 🙂

  • I had some time to ponder this, and when you think about it, why wouldn't you expect it to be fragmented? If you really consider how snapshots work, they are probably intended to be this way: you are sparsely writing data to a snapshot file that represents various extents from various locations in the original database file. Furthermore, when the disk reads data through the snapshot, it has no choice but to dance around between the original MDF file and the snapshot file (illustrated below), so we wouldn't save ourselves any trouble by sequentially arranging data that is sparse in structure to begin with, right?

    I guess the only legitimate fragmentation concern for me would be the possible effect that dropping and re-creating a snapshot file has on the rest of the file system, with all the shuffling around. But again, this would only become an issue as I start to get tight on available space.
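
    As a rough way to watch that dance in numbers, the same DMV used above can show cumulative physical I/O per file for the source database and the snapshot side by side (database names are again placeholders):

    ```sql
    -- Cumulative physical reads per file for the source database and its
    -- snapshot (database names are placeholders).
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_bytes_read
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id
    WHERE vfs.database_id IN (DB_ID(N'MirrorDB'), DB_ID(N'MirrorDB_Snap'));
    ```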
