Delete

  • My application is generating 300 blank rows every day. Instead of the 400 rows it should generate each day, it produces 700, of which 300 are empty. This is causing my job to run longer. How can I get rid of them efficiently?

  • I'd strongly suggest determining why they are getting in there in the first place rather than cleaning up after the fact.

    These rows could be valid rows that, for some reason, are being inserted blank from the application. If indeed you determine they are strictly blank rows, eliminating them before they are inserted is the most efficient way of taking care of the problem.

    If it is impossible to eliminate the source, I don't know of any other choice except the good ole' DELETE clause.
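
    If blocking them at the source is an option, a minimal sketch would be a CHECK constraint that simply rejects blank rows at insert time. This assumes the blank rows match the criteria used elsewhere in this thread (TranAmt = 0, Qty = 0, UnitPrice = 0, Type_ = 'DW'); the constraint name is made up:

    -- hypothetical constraint; WITH NOCHECK keeps existing blank rows from blocking its creation
    ALTER TABLE [dbo].[PRTran] WITH NOCHECK
    ADD CONSTRAINT [CK_PRTran_NoBlankRows]
    CHECK (NOT (TranAmt = 0 AND Qty = 0 AND UnitPrice = 0 AND Type_ = 'DW'))

    Note this makes the offending INSERT fail rather than silently dropping the row, so it is only workable if the application can handle the error.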

    Kyle

  • Right now we are managing this after the fact, but it takes a lot of time, locks the related tables, and degrades performance.

    The way we do it is:

    Drop Indexes

    Delete blank transactions

    Rebuild Indexes

    When we do this, the log file grows very large, the delete takes a long time, and the related tables stay locked.

    We are looking for a more efficient way to do this job.

  • The index drop/create is where you are losing your concurrency and why the t-log is growing.

    Why are you dropping the indexes for a delete statement? I presume the 300 rows you need to delete are only a small portion of the entire table?

  • Maybe there are relationships between the tables, and that is why they drop the indexes, so that the blank records can be deleted.

    Actually, it's an old system that I need to work on. It's not just 300 records; that's only the test I'm running. The real table has more than 70k rows, of which nearly 30k are blank, and the log file grows to about 30 GB.

  • Mike Levan (6/10/2008)


    Maybe there are relationships between the tables, and that is why they drop the indexes, so that the blank records can be deleted.

    Actually, it's an old system that I need to work on. It's not just 300 records; that's only the test I'm running. The real table has more than 70k rows, of which nearly 30k are blank, and the log file grows to about 30 GB.

    How are you isolating your target rows on your delete statement? Can you post the table structure + indexes you are dropping?

  • Drop Indexes

    ALTER TABLE [dbo].[PRTran] DROP CONSTRAINT [PRTran0]

    if exists (select * from dbo.sysindexes where name = N'PRTRAN_si90' and id = object_id(N'[dbo].[PRTran]')) drop index [dbo].[PRTran].[PRTRAN_si90]

    if exists (select * from dbo.sysindexes where name = N'PRTran1' and id = object_id(N'[dbo].[PRTran]')) drop index [dbo].[PRTran].[PRTran1]

    if exists (select * from dbo.sysindexes where name = N'PRTran2' and id = object_id(N'[dbo].[PRTran]')) drop index [dbo].[PRTran].[PRTran2]

    if exists (select * from dbo.sysindexes where name = N'PRTran3' and id = object_id(N'[dbo].[PRTran]')) drop index [dbo].[PRTran].[PRTran3]

    if exists (select * from dbo.sysindexes where name = N'PRTran4' and id = object_id(N'[dbo].[PRTran]')) drop index [dbo].[PRTran].[PRTran4]

    if exists (select * from dbo.sysindexes where name = N'PRTran5' and id = object_id(N'[dbo].[PRTran]')) drop index [dbo].[PRTran].[PRTran5]

    if exists (select * from dbo.sysindexes where name = N'PRTran_WO' and id = object_id(N'[dbo].[PRTran]')) drop index [dbo].[PRTran].[PRTran_WO]

    if exists (select * from dbo.sysindexes where name = N'xPrtran99' and id = object_id(N'[dbo].[PRTran]')) drop index [dbo].[PRTran].[xPrtran99]

    Delete Zero transactions

    DELETE PRTran WHERE TranAmt = 0 and Qty = 0 and UnitPrice = 0 and Type_ = 'DW'

    Rebuild Indexes

    ALTER TABLE [dbo].[PRTran] ADD CONSTRAINT [PRTran0] PRIMARY KEY CLUSTERED ([BatNbr],[ChkAcct],[ChkSub],[RefNbr],[TranType],[LineNbr]) ON [PRIMARY]

    CREATE INDEX [PRTRAN_si90] ON [dbo].[PRTran]([BatNbr], [EmpId]) ON [PRIMARY]

    CREATE INDEX [PRTran1] ON [dbo].[PRTran]([EmpId], [TimeShtFlg], [Rlsed], [Paid], [WrkLocId], [EarnDedId]) ON [PRIMARY]

    CREATE INDEX [PRTran2] ON [dbo].[PRTran]([BatNbr], [Acct], [Sub]) ON [PRIMARY]

    CREATE INDEX [PRTran3] ON [dbo].[PRTran]([PerPost], [Rlsed], [ScreenNbr], [TimeShtFlg]) ON [PRIMARY]

    CREATE INDEX [PRTran4] ON [dbo].[PRTran]([ChkAcct], [ChkSub], [RefNbr], [TranType]) ON [PRIMARY]

    CREATE INDEX [PRTran5] ON [dbo].[PRTran]([APBatch]) ON [PRIMARY]

    CREATE INDEX [PRTran_WO] ON [dbo].[PRTran]([PC_Status], [TimeShtFlg]) ON [PRIMARY]

    CREATE INDEX [xPrtran99] ON [dbo].[PRTran]([TimeShtFlg], [BatNbr], [ProjectID]) ON [PRIMARY]

  • Is there a way to split the transaction and execute it in parts, deleting 5k rows each time, so that the log file stays small and we can easily get rid of the blank rows?

  • Mike Levan (6/10/2008)


    Is there a way to split the transaction and execute it in parts, deleting 5k rows each time, so that the log file stays small and we can easily get rid of the blank rows?

    --Build yourself a nice lookup table

    SELECT TOP 1000 [uniquerowid]

    INTO #TempDeleteLookup

    FROM PRTran WITH (NOLOCK)

    WHERE TranAmt = 0 and Qty = 0 and UnitPrice = 0 and Type_ = 'DW'

    GO

    --Key your DELETE statement from the lookup table you created

    DELETE PRTran

    WHERE [uniquerowid] IN (SELECT [uniquerowid] FROM #TempDeleteLookup)

    GO

    --Verify you've whacked from the main table all rows which are on your lookup table

    SELECT COUNT(*)

    FROM #TempDeleteLookup

    INNER JOIN PRTran ON PRTran.[uniquerowid] = #TempDeleteLookup.[uniquerowid]

    GO

    --Expect to see a zero count result.

    Now, all this assumes you actually have a nice uniquerowid hidden somewhere in there, something like a unique transaction id or just a row identifier. Unfortunately, I don't see one included in the indexes you are rebuilding, or maybe I am missing it. If you don't have one, modify the creation of your temp table with additional columns you can use as a key to segregate the records you are deleting (see the sketch at the end of this post). Of course, you could do without the temp table and use a CTE (if you are on 2K5), or some other means. The point is to break the transaction down so that your log file does not grow beyond reason, or beyond the available space.

    Oh yes, I forgot to mention: once done, don't forget to clear that temp table and repeat the process for the next 1K rows. As you progress, you may be able to increase the number of rows deleted per execution.
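
    A minimal sketch of that idea, keyed on the clustered primary key columns from PRTran0 (since no single-column row id shows up in the indexes you listed); the temp table name and batch size are only illustrative:

    SELECT TOP 1000 BatNbr, ChkAcct, ChkSub, RefNbr, TranType, LineNbr
    INTO #TempDeleteLookup
    FROM PRTran WITH (NOLOCK)
    WHERE TranAmt = 0 and Qty = 0 and UnitPrice = 0 and Type_ = 'DW'
    GO

    -- delete only the rows captured in the lookup table
    DELETE p
    FROM PRTran AS p
    INNER JOIN #TempDeleteLookup AS t
    ON t.BatNbr = p.BatNbr AND t.ChkAcct = p.ChkAcct AND t.ChkSub = p.ChkSub
    AND t.RefNbr = p.RefNbr AND t.TranType = p.TranType AND t.LineNbr = p.LineNbr
    GO

    DROP TABLE #TempDeleteLookup
    GO

    Drop and recreate the lookup table for each batch, exactly as described above.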

  • I'm making a presumption that your database is in FULL recovery mode; if so, your log file size is likely related not to the delete statement but to the multiple create index statements.

    You might try setting your recovery mode to BULK LOGGED and see if the t-log file still grows to be a problem. The create index will then be only minimally logged. Be warned, the size of your t-log backup will still be just as large, but the log file itself will remain smaller.

    If that doesn't work, you could switch to SIMPLE immediately prior to performing this operation, but make certain to set it back to FULL and take a backup immediately afterwards, as the database cannot be recovered past the moment of your last t-log backup.

    If the t-log size is indeed related to the delete, running multiple deletes while the database is in FULL recovery will not help much, as the same number of pages is logged whether you do it 5k or 50k rows at a time. If it is in SIMPLE mode, deleting and committing in small chunks will help. I'd use the TOP clause of the delete statement to avoid a lot of TEMPDB I/O.
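
    A minimal sketch of the BULK LOGGED variant; the database name and backup path are placeholders:

    ALTER DATABASE YourDB SET RECOVERY BULK_LOGGED
    GO

    -- drop indexes, delete the blank rows, rebuild indexes here

    ALTER DATABASE YourDB SET RECOVERY FULL
    GO

    -- take a log backup right away; it will still contain the extents changed by the index builds
    BACKUP LOG YourDB TO DISK = N'X:\Backups\YourDB_log.trn'
    GO

    If you go the SIMPLE route instead, follow the switch back to FULL with a full (or differential) backup so the log chain can be restarted.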

    Kyle

  • Miguel Pereira (6/10/2008)



    Making the assumption that you are using SQL Server 2005, considering the forum this was posted in.

    declare @rowsdeleted int,

    @rowstodelete int;

    set @rowstodelete = 5000;

    while (@rowsdeleted is null) or @rowsdeleted <> 0

    begin

    delete top (@rowstodelete)

    from PRTran

    where TranAmt = 0 and Qty = 0 and UnitPrice = 0 and Type_ = 'DW';

    set @rowsdeleted = @@rowcount

    -- backup log ... -- backup log statement could be coded here to reduce tlog growth

    end

    The above is another option. I would HIGHLY recommend a LOT of testing to ensure whatever method is used works as expected.

    Also note that the OUTPUT clause could be used to write the deleted rows to a table variable or temp table for auditing, if needed (a sketch follows at the end of this post).

    😎

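    A minimal sketch of the OUTPUT idea, capturing the clustered primary key columns of each deleted row; #DeletedPRTran is an illustrative temp table created empty with SELECT TOP (0):

    SELECT TOP (0) BatNbr, ChkAcct, ChkSub, RefNbr, TranType, LineNbr
    INTO #DeletedPRTran
    FROM PRTran;

    -- the deleted rows land in #DeletedPRTran for auditing
    DELETE TOP (5000)
    FROM PRTran
    OUTPUT deleted.BatNbr, deleted.ChkAcct, deleted.ChkSub,
    deleted.RefNbr, deleted.TranType, deleted.LineNbr
    INTO #DeletedPRTran
    WHERE TranAmt = 0 and Qty = 0 and UnitPrice = 0 and Type_ = 'DW';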

  • Thanks, guys. I am going to try that and see.
