April 1, 2010 at 6:18 am
jtetler (4/1/2010)
Jeff - the index is clustered. I'll probably try both methods and see what kind of space savings I get. We have to keep 30 days of ID images available, so that will likely be my maintenance schedule. For now I'll use old backups to determine the latest ID used, but I'm sure I'll be able to find a date-based correlation in the transaction DB that will allow me to automate.
Thanks,
Jim
If you make a small table to store a date and the MAX ID from the big table each week when the scheduled job runs, the proc will become self-maintaining in 30 days. No more need to bother checking backups for IDs.
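A minimal sketch of that idea; the tracking table name (tblImagePurgeLog) is made up, and tblCustomerImages/ImageID are assumed from earlier in the thread:

-- Hypothetical tracking table for the weekly high-water marks.
CREATE TABLE dbo.tblImagePurgeLog (
    RunDate    datetime NOT NULL DEFAULT GETDATE(),
    MaxImageID int      NOT NULL
);
GO
-- Weekly job step: record the current high-water mark.
INSERT INTO dbo.tblImagePurgeLog (MaxImageID)
SELECT MAX(ImageID) FROM dbo.tblCustomerImages;
GO
-- Once 30+ days of history exist, the purge keys off the logged IDs:
-- delete everything at or below the newest mark that is at least 30 days old.
DELETE FROM dbo.tblCustomerImages
WHERE ImageID <= (SELECT MAX(MaxImageID)
                  FROM dbo.tblImagePurgeLog
                  WHERE RunDate <= DATEADD(dd, -30, GETDATE()));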
--Jeff Moden
Change is inevitable... Change for the better is not.
April 1, 2010 at 12:54 pm
Two good solutions - I'm leaning towards a table with the last run date and the max ID at that time; that way I can schedule the run monthly and it will always have one month of data. I could also add a 'next run ID' checkbox field to the table.
Thanks
Jim
July 8, 2010 at 7:57 am
I finally got back to work on this blob-deletion issue. I figure I have less than a week to get it done.
My plan is to delete rows based on ImageIDs that correspond to transactions more than 6 months old. Once that's done, I'm going to add a date field that defaults to the current date and write a script that can be run periodically based on date.
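Roughly like this, assuming a hypothetical tblTransactions table that ties a TransactionDate to each ImageID (the real schema may differ):

-- Remove images tied to transactions older than six months.
-- tblTransactions and TransactionDate are assumed names.
DELETE ci
FROM dbo.tblCustomerImages ci
INNER JOIN dbo.tblTransactions t ON t.ImageID = ci.ImageID
WHERE t.TransactionDate < DATEADD(mm, -6, GETDATE());
GO
-- The date column for future runs, defaulting to the current date:
ALTER TABLE dbo.tblCustomerImages
ADD CreatedDt datetime NOT NULL DEFAULT GETDATE();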
I took a smaller 131MB backup to test on; it had 453 rows. I deleted 60% of the rows and it only went down to 113MB; I deleted 95% (down to 20 rows) and it only went down to 96MB.
How do I reduce the file size further??? Jeff had mentioned ALTER INDEX ... REBUILD with the ALL option - would this help?
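(For reference, the syntax in question is presumably along these lines; the file name and target size are placeholders. Deleted blob pages usually aren't released to the OS until the indexes are rebuilt and the file is shrunk.)

-- Rebuild every index on the table to compact its pages.
ALTER INDEX ALL ON dbo.tblCustomerImages REBUILD;
GO
-- Then release the empty pages back to the OS.
-- 'MyDb_Data' is a placeholder; get the logical name from sys.database_files.
DBCC SHRINKFILE (N'MyDb_Data', 100);  -- target size in MB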
Second problem - I tried adding a date field to the CustomerImage table, and when I did, the program started locking up. I'm guessing the programmer is using SELECT * and the extra column is confusing things. Instead, I'd like to create a new table with two columns, a date and an ImageID, where the date defaults to the current date and the ImageID defaults to tblCustomerImage.ImageID - any ideas on how to do it??
Thanks again to everyone for their help,
Jim
July 8, 2010 at 8:18 am
not sure on the space-saving thing... is it that important to try and downsize the db?
for the second part, if adding a column affects the application, i think you'd need an audit table and a pair of triggers to track the modified date; here's a suggestion below. With that in place, you can start deleting/updating based on the ModifiedDt going forward (see the purge sketch after the triggers).
CREATE TABLE [tblCustomerImagesAudit](
[AuditId] [int] IDENTITY(1,1) NOT NULL,
[ImageID] [int] NOT NULL,
--ImageID/CustomerID mirror [tblCustomerImages],
--but no FK, so that images can be deleted without failing due to a constraint in this audit table
[CustomerID] [int] NOT NULL,
[CreatedDt] datetime DEFAULT GETDATE(),
[ModifiedDt] datetime DEFAULT GETDATE())
GO
CREATE TRIGGER TR_tblCustomerImages_INSERTS ON [dbo].[tblCustomerImages]
FOR INSERT
AS
BEGIN
INSERT INTO [tblCustomerImagesAudit]([ImageID],[CustomerID])
SELECT [ImageID],[CustomerID]
FROM INSERTED
END
GO
CREATE TRIGGER TR_tblCustomerImages_UPDATES ON [dbo].[tblCustomerImages]
FOR UPDATE
AS
BEGIN
-- stamp the audit row for every image touched by the update
UPDATE audit
SET audit.[ModifiedDt] = GETDATE()
FROM [tblCustomerImagesAudit] audit
INNER JOIN INSERTED i ON audit.[ImageID] = i.[ImageID]
END
GO
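Once the audit rows accumulate, a purge along these lines becomes possible (the six-month cutoff is just an example):

-- Delete images whose audit row hasn't been touched in six months.
DELETE ci
FROM [dbo].[tblCustomerImages] ci
INNER JOIN [tblCustomerImagesAudit] a ON a.[ImageID] = ci.[ImageID]
WHERE a.[ModifiedDt] < DATEADD(mm, -6, GETDATE());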
Lowell
July 8, 2010 at 8:53 am
I reorganized the indexes and the size wasn't affected. Looking deeper into the database, I realised that each transaction had corresponding pictures of material in addition to the IDs. I had assumed those pictures were relatively small, being from a CCTV camera as opposed to the hi-res ID scans. I removed the corresponding rows and the database shrank to 3MB!
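(For anyone checking the same thing, a quick way to see which tables dominate the file:)

-- Per-table sizes; sp_MSforeachtable is undocumented but ships with SQL Server.
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?''';
-- Or for a single suspect table:
EXEC sp_spaceused N'dbo.tblCustomerImages';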
Looks like I'm halfway there.
Jim
July 8, 2010 at 8:58 am
Lowell - thanks for the code
I'll try to implement it later next week. First I need to skinny down the DB, as I'm REAL close to the 4GB limitation for Express.
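(A quick way to check against that cap - size is reported in 8KB pages, so divide by 128 for MB; the Express limit applies to the data file:)

-- Data and log file sizes for the current database.
SELECT name, size/128 AS SizeMB
FROM sys.database_files;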
Thanks,
Jim