At our company we have a SQL Server 2000 database, about 250 GB in size, that we like to refer to as a "Data Warehouse" (real data warehouse operators would laugh at us). Much of this data is copied every night by dropping, truncating, or otherwise clearing a table and then repopulating it from the source database.
Yes, I know it's a terrible practice. I didn't build it, and I am going to fix it.
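To make that concrete, the nightly copy for each table looks roughly like this (the table and linked-server names here are made up, not our real ones):

    -- Clear the warehouse copy, then repopulate it from the source system:
    TRUNCATE TABLE dbo.CustomerOrders;

    INSERT INTO dbo.CustomerOrders (OrderID, CustomerID, OrderDate, Amount)
    SELECT OrderID, CustomerID, OrderDate, Amount
    FROM SourceServer.SalesDB.dbo.CustomerOrders;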
Anyway, this duplicated data accounts for 60-70% of the data on the server. The rest is unique, mission-critical data that has accumulated over time.
The question has been raised: how can we save money by backing up only the critical data and not the duplicated data (which is already backed up by the systems that generate it)?
Here is what we have looked into:
1 - Splitting the database into two databases.
We can't do this because there are thousands of references to this database in reports, applications, Excel/VBA, and Access crap; basically, we aren't allowed to break any connection strings or client code.
2 - Splitting the database into two files and backing up only a single file.
It's just not what this feature was designed to do. All my experiments failed to bring the database back to an operational state without either backing up every file or leaving part of the database inoperable (even experiments on 2005/2008 using WITH PARTIAL restores; a sketch of what I tried is below).
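For reference, this is roughly the kind of experiment I was running (the database, filegroup, and path names are placeholders, and this assumes the critical tables all live in the primary filegroup):

    -- Back up only the primary filegroup, where the critical tables live:
    BACKUP DATABASE Warehouse
        FILEGROUP = 'PRIMARY'
        TO DISK = 'D:\Backup\Warehouse_Primary.bak';

    -- Piecemeal restore of just that filegroup (SQL Server 2005/2008 syntax):
    RESTORE DATABASE Warehouse
        FILEGROUP = 'PRIMARY'
        FROM DISK = 'D:\Backup\Warehouse_Primary.bak'
        WITH PARTIAL, RECOVERY;

    -- The filegroups that were never backed up stay offline after this,
    -- so every table in them throws errors until restored -- which is the
    -- "inoperable" state I kept ending up in.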
I was wondering if there are any experienced DBAs out there who have additional ideas beyond these that could help us save disk space by backing up only the critical tables.
ALSO: I am considering proposing an upgrade to SQL Server 2008 (and new hardware) and using the compression features offered there (page/row compression, backup compression, etc.). Does anyone have an opinion on this?
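My understanding of what that would look like is roughly the following (table, database, and path names are placeholders, and I believe the data compression features require Enterprise Edition on 2008):

    -- Estimate how much page compression would save on a given table:
    EXEC sp_estimate_data_compression_savings
        @schema_name = 'dbo',
        @object_name = 'BigCriticalTable',
        @index_id = NULL,
        @partition_number = NULL,
        @data_compression = 'PAGE';

    -- Apply page compression (row compression is the lighter-weight option):
    ALTER TABLE dbo.BigCriticalTable
        REBUILD WITH (DATA_COMPRESSION = PAGE);

    -- Compress the backup itself:
    BACKUP DATABASE Warehouse
        TO DISK = 'D:\Backup\Warehouse_Full.bak'
        WITH COMPRESSION;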