Hi
We have a table with a LOB column (varchar(MAX)) which we update to NULL after a period of time.
It holds historical, date-sensitive archive/operational data used for tracking and processing.
The table is millions of rows deep and 15 columns wide (only 1 of them a LOB).
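For context, the cleanup job is essentially this (the column names and the retention cutoff below are illustrative placeholders, not the exact production code):

```sql
-- Null out the LOB payload on rows older than the retention window.
-- Body and QueuedDate are placeholder names for the real columns.
UPDATE EmailQueue
SET    Body = NULL          -- the varchar(MAX) LOB column
WHERE  QueuedDate < DATEADD(day, -90, GETDATE())
       AND Body IS NOT NULL;
```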
DBCC SHOWCONTIG scanning 'EmailQueue' table...
Table: 'EmailQueue' (244195920); index ID: 1, database ID: 32
TABLE level scan performed.
- Pages Scanned................................: 3485182
- Extents Scanned..............................: 437322
- Extent Switches..............................: 438993
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.24% [435648:438994]
- Logical Scan Fragmentation ..................: 0.44%
- Extent Scan Fragmentation ...................: 2.23%
- Avg. Bytes Free per Page.....................: 7347.2
- Avg. Page Density (full).....................: 9.23%
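To see where the empty space actually lives, I was thinking of breaking allocation down by allocation-unit type (in-row vs. LOB vs. row-overflow); something along these lines should work on SQL 2005 and later (a sketch, not tested against this exact table):

```sql
-- Space per allocation unit for EmailQueue:
-- IN_ROW_DATA vs LOB_DATA vs ROW_OVERFLOW_DATA.
SELECT au.type_desc,
       SUM(au.total_pages) AS total_pages,
       SUM(au.used_pages)  AS used_pages,
       SUM(au.data_pages)  AS data_pages
FROM   sys.partitions p
JOIN   sys.allocation_units au
       ON au.container_id = p.partition_id
WHERE  p.object_id = OBJECT_ID('EmailQueue')
GROUP  BY au.type_desc;
```

A large gap between total_pages and used_pages on the LOB_DATA row would confirm the nulled LOB column is what is holding the space.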
The table is huge, but mostly empty...
5 of the fields are defined as varchar(4000), and as far as I understand they only consume space based on their actual content. Could these be contributing to the free space? (These fields are not touched by the nulling process.)
Essentially, the bulk of the allocated space belongs to the column that gets nulled, but that freed space is never reused by subsequent inserts. The table just grows and grows... ideas?
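One thing I was considering (but have not yet tried in production, and it assumes we are on SQL 2005 or later, since the box still runs DBCC SHOWCONTIG) is compacting the LOB allocation units so the empty pages get released:

```sql
-- Reorganize all indexes on the table and compact the LOB pages
-- left behind by the nulled varchar(MAX) values.
ALTER INDEX ALL ON EmailQueue
    REORGANIZE WITH (LOB_COMPACTION = ON);
```

Would this actually return the LOB pages for reuse, or does it only defragment them in place?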