December 28, 2006 at 8:49 am
Hi
I have a database with a large data file (about 100 GB) that is heavily used during business hours. I ran sp_spaceused to monitor the space usage of some user tables and found that a good amount of space is reserved but unused for them (the figure reported in the "unused" column of the procedure's output). I want to reduce this reserved space so that I can cope with a continuous space crunch.
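For reference, this is the kind of check being described; a minimal sketch, where 'dbo.Orders' is a placeholder for one of your own tables:

```sql
-- Report reserved, data, index_size and unused space for one table
-- ('dbo.Orders' is a placeholder table name)
EXEC sp_spaceused 'dbo.Orders';

-- If the figures look stale, refresh the usage counters first,
-- then re-run the report (0 = current database)
DBCC UPDATEUSAGE (0, 'dbo.Orders');
EXEC sp_spaceused 'dbo.Orders';
```

Note that the "unused" column can be inflated by out-of-date usage counters, so it is worth running DBCC UPDATEUSAGE before acting on the numbers.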
Could anyone kindly tell how to achieve that?
The database is set to grow in MB only.
The steps I thought of:
But will it really help?
Thanks in advance for your help.
Regards
Utsab Chattopadhyay
December 28, 2006 at 9:24 am
You really don't want to go down this path unless you want to degrade your database's performance.
I really don't know why everyone is so hung up on space these days; disks are cheap, and there's little you can do to shrink the physical data in your database. True, you might be concerned about a 100 GB mdf file if it only contains 2 GB of data; OK, that's a little on the large side. But if it contains, say, 90 GB of data, what's the problem? The backups will only be the size of the data (or smaller if you use LiteSpeed); the actual size of the mdf only affects the size of the restored database, not the size of the backup.
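You can check this for yourself: run sp_spaceused with no arguments to compare the allocated file size against the space actually used inside it. A minimal sketch:

```sql
-- Database-level summary for the current database:
--   database_size      = total allocated size of the data and log files
--   unallocated space  = room inside the files not yet used by data
-- A full backup contains roughly the used portion, not the file size.
EXEC sp_spaceused;
```

If "unallocated space" is large, the files are simply bigger than the data they hold; the backup size will reflect the data, not the files.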
Find one of the Inside SQL Server books to read up on what you're looking at - I think you may have the wrong idea about it all!
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
December 28, 2006 at 10:40 am
To handle individual tables, you can rebuild the clustered index, which will move the data around and reassemble it with less fragmentation. However, you do want some free space in the tables: when you insert or update data, you sometimes need room on the page. That free space is determined by the fill factor you specify when you build the clustered index.
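A minimal sketch of such a rebuild with an explicit fill factor; the index and table names here (PK_Orders, dbo.Orders) are placeholders:

```sql
-- SQL Server 2005 and later: rebuild the clustered index,
-- leaving 10% free space per page for future inserts/updates
ALTER INDEX PK_Orders ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);

-- SQL Server 2000 equivalent:
-- DBCC DBREINDEX ('dbo.Orders', 'PK_Orders', 90);
```

A lower fill factor leaves more room for new rows (fewer page splits) at the cost of a larger table on disk, so pick it based on how write-heavy the table is.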
Read this: http://www.sqlservercentral.com/columnists/jdoherty/2760.asp