DBCC CHECKDB uses an internal database snapshot, which is created in the same location as the corresponding database data file and grows as data is changed in the source data file. If transactional activity continues on the database while the check runs, the sparse files backing these snapshots can become heavily fragmented at the NTFS level. Tracking that degree of fragmentation requires more metadata than fits in the default “Bytes Per FileRecord Segment” size of 1 KB, which can cause the snapshot files to stop growing and the check to fail (commonly surfaced as operating-system error 665, a file system limitation).
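As an illustration, the snapshot is created implicitly whenever an online check runs; there is no separate command to create it. A minimal sketch (the database name MyDB is a placeholder):

```sql
-- DBCC CHECKDB transparently creates a hidden internal snapshot of every
-- data file, in the same directory as that file, and runs its checks
-- against the snapshot rather than the live data.
DBCC CHECKDB (MyDB) WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- While the command runs, concurrent modifications push copies of the
-- changed pages into the snapshot's sparse files; under heavy write
-- activity those sparse files fragment, which is the failure mode this
-- note addresses.
```

Note that DBCC CHECKDB ... WITH TABLOCK skips the internal snapshot entirely at the cost of taking locks, so it avoids this problem but blocks concurrent activity.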
Suggestions for remedy (in order of effort required):
• Advise users to run a combination of narrower DBCC CHECK commands instead of a single DBCC CHECKDB.
• Avoid running DBCC CHECKDB while major data modifications are taking place.
• Divide the database into multiple files. The size limitation is per sparse file, and a database snapshot creates a matching sparse file for each data file, so spreading the database across more files reduces the change volume each sparse file must track.
• Find out which tables/indexes generate the most write activity during the lifetime of the snapshot:
o Separate them into their own filegroup built from multiple, comparatively small files.
o Review and adjust the index FILLFACTOR and PAD_INDEX settings to reduce page splits.
o Run DBCC CHECKTABLE (or another narrower check, as appropriate) against those objects.
• Format the disks with the /L switch to increase “Bytes Per FileRecord Segment” from 1 KB to 4 KB (this requires reformatting the volume).
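A minimal sketch of the split-check approach from the list above (database and table names are illustrative). Each narrower command still uses its own internal snapshot, but it is smaller and shorter-lived than the one a full DBCC CHECKDB holds open:

```sql
-- Instead of one DBCC CHECKDB over the whole database, spread the work
-- across narrower checks so each internal snapshot stays small.
USE MyDB;

DBCC CHECKALLOC;    -- allocation consistency
DBCC CHECKCATALOG;  -- catalog consistency

-- Check the write-heavy tables individually, ideally in a quiet window.
DBCC CHECKTABLE ('dbo.OrderDetail');
DBCC CHECKTABLE ('dbo.AuditLog');
```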
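The filegroup and fill-factor remedies can be sketched as follows; the filegroup name, file paths, sizes, and index definition are all placeholders to adapt:

```sql
-- Move a write-heavy table into its own filegroup made of several
-- comparatively small files, so each snapshot sparse file tracks less churn.
ALTER DATABASE MyDB ADD FILEGROUP HotTables;
ALTER DATABASE MyDB ADD FILE
    (NAME = HotTables1, FILENAME = 'D:\Data\MyDB_Hot1.ndf', SIZE = 4GB),
    (NAME = HotTables2, FILENAME = 'E:\Data\MyDB_Hot2.ndf', SIZE = 4GB)
TO FILEGROUP HotTables;

-- Rebuild the clustered index onto the new filegroup, leaving free space
-- on each page (FILLFACTOR) including the intermediate levels (PAD_INDEX)
-- to reduce page splits, and therefore snapshot writes, during the check.
CREATE UNIQUE CLUSTERED INDEX PK_OrderDetail
    ON dbo.OrderDetail (OrderID, LineNumber)
    WITH (DROP_EXISTING = ON, FILLFACTOR = 80, PAD_INDEX = ON)
    ON HotTables;
```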
More details at:
• http://support2.microsoft.com/kb/967351
• KB957065