- I hope you're performing SQL backups using the T-SQL BACKUP statement or a maintenance job in SQL Server.
- If a db is "too big" that may have two causes:
1) there is a whole bunch of insert/update/delete activity, with or without row relocation, resulting in empty pages.
2) inserts into tables with clustered indexes cause many page splits.
In both cases, schedule DBCC DBREINDEX on your tables that have clustered indexes. (Let's hope every table has one!)
This will optimize your data and your data access!
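A minimal sketch of such a scheduled rebuild; `dbo.Orders` is a hypothetical table name and 90 is just a sample fill factor, so adjust both for your own schema:

```sql
-- Rebuild ALL indexes on one table (empty string = every index),
-- with a fill factor of 90 to leave room for future inserts.
DBCC DBREINDEX ('dbo.Orders', '', 90)
```

To cover the whole database in one job, some people loop this over every table (for example with the undocumented sp_MSforeachtable procedure), but running it per table keeps the locking window per object small.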
We also schedule sp_updatestats after a reindex, because statistics may become inaccurate due to frequent small operations that did not trigger an automatic statistics update.
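Chained together in the maintenance job, the two steps look roughly like this (again with the hypothetical `dbo.Orders` standing in for your real tables):

```sql
-- Step 1: rebuild the indexes (repeat per table).
DBCC DBREINDEX ('dbo.Orders', '', 90)

-- Step 2: refresh out-of-date statistics across the current database.
EXEC sp_updatestats
```

sp_updatestats only touches statistics it considers stale, so running it right after the rebuild is cheap.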
As Christian Benvenuto mentioned, you have to take at least two performance hits:
1) during the shrink operation
2) during the subsequent file growth, when the database has to extend again
Your applications - and users - will be better served with a scheduled reindex than with a scheduled shrink, because a shrink does not optimize your data.
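Before deciding whether a reindex is even needed, you can measure fragmentation first; `dbo.Orders` is again a hypothetical table name:

```sql
-- Report fragmentation for one table as a result set.
-- Low Scan Density or high Logical Scan Fragmentation
-- suggests the table would benefit from DBCC DBREINDEX.
DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS
```

That way the scheduled job can skip tables that are still in good shape instead of rebuilding everything blindly.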