• This message applies to all SQL Server versions after 8.0 (SQL Server 2000) and is caused by the file system (OS), not by SQL Server. Windows Server 2008 and 2008 R2 address it with a hotfix, which is included in the RTM of the later OS releases. It is not caused by fragmentation of the existing MDF/NDF files, or of any other existing file on the disk. It is caused by internal, i.e. logical, fragmentation inside the snapshot (sparse) file described below. Typically a file's metadata grows more complex as its fragmentation level increases. Sparse files are chopped up into 64 KB chunks, and each chunk is evaluated to see whether it is just a run of zeros, so that NTFS does not have to use real space to store large runs of zeros. The downside is that the file, instead of being contiguous, is stored in 64 KB chunks, which fragments it heavily. The more fragmented the file becomes, the larger its attribute list gets, and NTFS has a hard-coded limit on the attribute list (with the default 1 KB file record size) to prevent performance issues. With a very large sparse (or compressed) file you can run into this limit, typically somewhere around 40-90 GB. The fix is to format the volume as NTFS with 64 KB clusters and the /L option.
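    A rough sketch of the check and the fix (the drive letter is a placeholder, and format wipes the volume, so move the data files off first):

        :: Show NTFS metadata for the volume; with the default format,
        :: "Bytes Per FileRecord Segment" reports 1024, with /L it is 4096.
        fsutil fsinfo ntfsinfo F:

        :: Reformat with 64 KB clusters and large file record segments.
        :: WARNING: destroys everything on F:
        format F: /FS:NTFS /A:64K /L /Q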

    The sparse file will be created (only on NTFS) if DBCC is not executed WITH TABLOCK; it is created as the DBCC starts and vanishes when the operation ends. One separate file is created for each MDF/NDF, and its size depends on the amount of data modification happening against that specific data file while the check runs. These snapshot files are hidden (alternate streams of the data files) unless you are on SQL Server 2014 (12.0), where they appear as visible separate files, and they are not in the picture at all on 8.0, which has no snapshot mechanism.
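    If a maintenance window is available, the snapshot can be avoided entirely; a minimal sketch (MyDB is a placeholder database name):

        -- TABLOCK makes DBCC take locks instead of creating the internal
        -- snapshot (sparse) files, so nothing extra is written to disk;
        -- it blocks concurrent writers, so use it only in a quiet window.
        DBCC CHECKDB (N'MyDB') WITH TABLOCK, NO_INFOMSGS;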

    In case you are seeing events 823 and 17053, my suggestion would be to start with the items below; the ones further down are also good to follow.

    •DB disks should be formatted with 64 KB clusters.

    •Add /L (large file record segments) when formatting large volumes.

    •Before running DBCC CHECKDB against a database for the first time, do a dry run WITH ESTIMATEONLY (see the T-SQL sketch after this list).

    •Run DBCC CHECKDB at a time when less data modification is taking place.

    •Divide the database into a few more files. The limits are per sparse file, and each database snapshot creates a matching sparse file for each database file; for example, if each database file is capped at 64 GB, the limit cannot be exceeded on a per-file basis. Also, the DB files in a filegroup should be equally sized and have the same growth increments (also sketched after this list).

    •Find out which tables/indexes generate the most write activity during the lifetime of the snapshot and separate them out into a different filegroup with many comparatively smaller files.
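    A minimal T-SQL sketch of the ESTIMATEONLY dry run and the file split (database, filegroup, and file names, paths, and sizes are all illustrative placeholders):

        -- Dry run: reports the tempdb space a full check would need,
        -- without actually performing the check.
        DBCC CHECKDB (N'MyDB') WITH ESTIMATEONLY;

        -- Split write-heavy tables into a filegroup of smaller,
        -- equally sized files with identical growth increments.
        ALTER DATABASE MyDB ADD FILEGROUP HotData;
        ALTER DATABASE MyDB ADD FILE
            (NAME = N'MyDB_Hot1', FILENAME = N'D:\Data\MyDB_Hot1.ndf',
             SIZE = 8GB, FILEGROWTH = 1GB) TO FILEGROUP HotData;
        ALTER DATABASE MyDB ADD FILE
            (NAME = N'MyDB_Hot2', FILENAME = N'D:\Data\MyDB_Hot2.ndf',
             SIZE = 8GB, FILEGROWTH = 1GB) TO FILEGROUP HotData;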

    The following are also good to follow:

    •Bytes per sector should be checked.

    •Partition alignment should be cross-verified (commands for both checks follow the list).

    •In my case nothing points to these errors being triggered by a high fragment count in the database files themselves.

    oBut even otherwise there doesn't seem to be any negative impact of adding /L (though due testing is needed).

    •Create a manual snapshot on a different drive than the source data files, one with less file activity, to reduce file-level fragmentation, and run DBCC there (sketched below).
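    Rough commands for the sector and alignment checks (drive letters are placeholders):

        :: Reports "Bytes Per Sector" (and, on newer versions of Windows,
        :: "Bytes Per Physical Sector") along with the cluster size.
        fsutil fsinfo ntfsinfo F:

        :: Each StartingOffset should divide evenly by 65536 (64 KB),
        :: or ideally 1048576 (1 MB).
        wmic partition get Name, Index, StartingOffset, BlockSize

    And a sketch of the manual snapshot approach (all names and the E:\Snapshots path are placeholders; NAME must match the logical name of each source data file):

        -- Put the snapshot's sparse file on a quieter drive; DBCC run
        -- against an explicit snapshot does not create another internal one.
        CREATE DATABASE MyDB_Check ON
            (NAME = N'MyDB_Data', FILENAME = N'E:\Snapshots\MyDB_Data.ss')
        AS SNAPSHOT OF MyDB;

        DBCC CHECKDB (N'MyDB_Check') WITH NO_INFOMSGS;

        DROP DATABASE MyDB_Check;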

    In short, a defrag of the disk won't help; the disk has to be reformatted as NTFS with 64 KB clusters and the /L option.