April 13, 2005 at 7:32 am
I have a test database with a table initially defined as:
Dummy table:
key_cluster  int         (primary key, clustered)
field1       char(20)
field2       char(4000)
field3       char(4000)
and I loaded 351 records with basically this type of information:
field        value
-----------  ----------------------------
key_cluster  1, 2, 3, ...
field1       'Anna'
field2       'that is her name yes.'
field3       'that is her first name yes'
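For reference, a minimal T-SQL sketch of this setup; the table name Dummy comes from the description above, but the exact DDL and the load loop are assumptions for illustration:

-- column definitions as listed above
CREATE TABLE Dummy (
    key_cluster int NOT NULL PRIMARY KEY CLUSTERED,
    field1      char(20),
    field2      char(4000),
    field3      char(4000)
)

-- load 351 rows of the sample data
DECLARE @i int
SET @i = 1
WHILE @i <= 351
BEGIN
    INSERT INTO Dummy (key_cluster, field1, field2, field3)
    VALUES (@i, 'Anna', 'that is her name yes.', 'that is her first name yes')
    SET @i = @i + 1
END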
Then I checked the number of pages it produced using DBCC IND, and got 351 data pages (IndexID 0, PageType 1) - one page per row, since each row with two char(4000) columns is roughly 8 KB.
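For anyone following along, the page list can be produced with the undocumented DBCC IND command; the database name TestDB is an assumption here:

-- third argument is the index id; -1 lists pages for all indexes
DBCC IND ('TestDB', 'Dummy', -1)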
Then I looked at one of the data pages using DBCC PAGE, and I can see a lot of blank space within field2 and field3, because each is a fixed-length char(4000) holding only about 20 characters of actual data.
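A single page can be dumped with the (also undocumented) DBCC PAGE command; the file id and page id below are placeholders you would take from the DBCC IND output:

-- trace flag 3604 routes DBCC output to the client session
DBCC TRACEON (3604)
-- arguments: database, file id, page id, print option (3 = per-row detail)
DBCC PAGE ('TestDB', 1, 100, 3)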
Also, the data portion of the database (not the log) grew by 3 MB, so the database is now 5 MB in total (2 MB + 3 MB).
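One way to check the data size separately from the log is with the standard system procedures, for example:

-- database totals: reserved, data, index and unused space
EXEC sp_spaceused
-- per-file breakdown, so the data file and the log file can be compared
EXEC sp_helpfile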
II. Then I converted field2 and field3 from char(4000) to varchar(25) and varchar(30), respectively:
--------------------------------------------------------------------------
key_cluster int
field1 char (20)
field2 varchar(25)
field3 varchar(30)
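The conversion itself could be done with ALTER TABLE ... ALTER COLUMN, along these lines (a sketch, assuming the columns allow NULLs):

ALTER TABLE Dummy ALTER COLUMN field2 varchar(25) NULL
ALTER TABLE Dummy ALTER COLUMN field3 varchar(30) NULL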
I then updated the table, setting field2 = LTRIM(RTRIM(field2)) and likewise field3, because when you move data from char to varchar the padding spaces are brought along with it, so the extra spaces on the existing rows have to be trimmed off.
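The trimming step described above, as a sketch:

-- strip the padding spaces carried over from the old fixed-length columns
UPDATE Dummy
SET field2 = LTRIM(RTRIM(field2)),
    field3 = LTRIM(RTRIM(field3))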
Then I ran DBCC SHRINKDATABASE('database'), but the data portion of the database did not shrink at all - it stayed at 5 MB. (The log is only about 1024 KB, so log size is irrelevant in this scenario.)
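That is, a shrink of the form below, with the database name assumed:

-- attempt to release unused space back to the operating system
DBCC SHRINKDATABASE ('TestDB')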
Then I ran DBCC IND and DBCC PAGE again.
I got the same results, which is not what I was expecting.
I should have saved at least about 2 MB, and I should no longer have 351 pages: with field2 and field3 converted to varchar(25) and varchar(30), dozens of rows should fit on one page, but DBCC PAGE says otherwise - it is still one row per page.
So converting char to varchar did not save any space for the existing data. I did check DATALENGTH, and it now shows about 15 for field2 and field3 (whereas before the LTRIM(RTRIM(...)) update it showed a DATALENGTH of 4000).
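The length check can be done with a query such as:

-- confirm the stored lengths after the trim
SELECT key_cluster,
       DATALENGTH(field2) AS field2_bytes,
       DATALENGTH(field3) AS field3_bytes
FROM Dummy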
Does anyone know how to get the database/table to actually shrink correctly? Any ideas?
April 13, 2005 at 8:25 am
Did you try executing DBCC SHRINKDATABASE with a target percentage of free space to leave, such as 10%?
Also, you can drop and recreate the clustered index after trimming the strings; that compacts the table and stores the rows more efficiently on the pages.
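A sketch of both suggestions (rebuild first, then shrink); the database name TestDB and the constraint name PK_Dummy are assumptions:

-- drop and recreate the clustered primary key so the rows are
-- repacked onto as few pages as the new varchar sizes allow
ALTER TABLE Dummy DROP CONSTRAINT PK_Dummy
ALTER TABLE Dummy ADD CONSTRAINT PK_Dummy PRIMARY KEY CLUSTERED (key_cluster)

-- then shrink, leaving roughly 10 percent free space in the database
DBCC SHRINKDATABASE ('TestDB', 10)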
April 13, 2005 at 10:48 am
Dynamite. It worked - the dropping and recreating of the clustered index did it.
I used DBCC DBREINDEX...
Thank you very much for solving this tricky SQL Server internals problem.
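For reference, a DBCC DBREINDEX call of the form described; the table name is taken from the example above, and an empty string for the index name rebuilds all indexes on the table:

-- rebuild every index on the table, including the clustered primary key
DBCC DBREINDEX ('Dummy', '')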