SQLServerCentral is supported by Red Gate Software Ltd.
 

DELETING HUGE No. of ROWS
Posted Friday, August 8, 2014 9:38 AM


funooni (8/8/2014)
Lynn Pettis (8/8/2014)
There are other issues with your query, but let's not worry about that just yet. Another question: what edition of SQL Server are you running?

Also, is the intent to keep 3 months of data in the table going forward?

If so, is that 3 months plus the current month?


Yes, it's the current month plus the last three months, i.e. May, June, July, and August.

I am using SQL Server 2008 R2


Edition, not version. Are you running Standard Edition or Enterprise Edition?



Lynn Pettis

For better assistance in answering your questions, click here
For tips to get better help with Performance Problems, click here
For Running Totals and its variations, click here or when working with partitioned tables
For more about Tally Tables, click here
For more about Cross Tabs and Pivots, click here and here
Managing Transaction Logs

SQL Musings from the Desert Fountain Valley SQL (My Mirror Blog)
Post #1601240
Posted Friday, August 8, 2014 9:54 AM
Lynn Pettis (8/8/2014)
funooni (8/8/2014)
Lynn Pettis (8/8/2014)
There are other issues with your query, but let's not worry about that just yet. Another question: what edition of SQL Server are you running?

Also, is the intent to keep 3 months of data in the table going forward?

If so, is that 3 months plus the current month?


Yes, it's the current month plus the last three months, i.e. May, June, July, and August.

I am using SQL Server 2008 R2


Edition, not version. Are you running Standard Edition or Enterprise Edition?


Oh, it's Enterprise Edition.

By the way, the previous rollback is still going on, and I expect it to take another two hours as per my calculation of the remaining rows.

Right now I have run out of tempdb log file space.



Post #1601249
Posted Friday, August 8, 2014 10:07 AM


funooni (8/8/2014)
Lynn Pettis (8/8/2014)
funooni (8/8/2014)
Lynn Pettis (8/8/2014)
There are other issues with your query, but let's not worry about that just yet. Another question: what edition of SQL Server are you running?

Also, is the intent to keep 3 months of data in the table going forward?

If so, is that 3 months plus the current month?


Yes, it's the current month plus the last three months, i.e. May, June, July, and August.

I am using SQL Server 2008 R2


Edition, not version. Are you running Standard Edition or Enterprise Edition?


Since you are using Enterprise Edition, I would look at using partitioning to facilitate data management. This will require establishing a clustered index on the table, and looking at the code you posted I would say on the AuditDateTime column. By using partitioning you will be able to use a sliding window to easily delete (actually switch out) a one-month partition of data and start a new one. This will make managing the data much easier.

I would recommend that you start by reading about partitioning in Books Online and set up a sandbox database where you can practice setting up the table and management of the partitions.
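A minimal sketch of the sliding-window setup described above, assuming a monthly grain on the AuditDateTime column; the table and object names (dbo.AuditLog, dbo.AuditLog_Staging) are hypothetical placeholders, and the boundary dates are examples only:

```sql
-- Monthly partition function and scheme (boundary values are examples).
CREATE PARTITION FUNCTION pfAuditMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('20140501', '20140601', '20140701', '20140801');

CREATE PARTITION SCHEME psAuditMonthly
AS PARTITION pfAuditMonthly ALL TO ([PRIMARY]);

-- Clustered index on AuditDateTime, built on the partition scheme.
CREATE CLUSTERED INDEX CIX_AuditLog_AuditDateTime
ON dbo.AuditLog (AuditDateTime)
ON psAuditMonthly (AuditDateTime);

-- Sliding window: switch the oldest month out to an empty staging table
-- (a metadata-only operation; the staging table must have an identical
-- structure and sit on the same filegroup), then remove the empty range.
ALTER TABLE dbo.AuditLog SWITCH PARTITION 1 TO dbo.AuditLog_Staging;
TRUNCATE TABLE dbo.AuditLog_Staging;
ALTER PARTITION FUNCTION pfAuditMonthly() MERGE RANGE ('20140501');

-- Open a new month at the leading edge.
ALTER PARTITION SCHEME psAuditMonthly NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfAuditMonthly() SPLIT RANGE ('20140901');
```

Because SWITCH and MERGE/SPLIT are metadata operations, the monthly purge becomes near-instant regardless of row count, which is the point of the sliding window.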


Oh, it's Enterprise Edition.

By the way, the previous rollback is still going on, and I expect it to take another two hours as per my calculation of the remaining rows.

Right now I have run out of tempdb log file space.






Lynn Pettis

Post #1601254
Posted Friday, August 8, 2014 10:15 AM


funooni (8/8/2014)
Jeff Moden (8/8/2014)
I have to get to work but here's a 60,000 ft view of what I would do...

1. DO A FULL BACKUP!!!!!!!!!!!!
2. MAKE SURE POINT-IN-TIME BACKUPS ARE BEING TAKEN!!!
3. Create an empty identical table.
4. Do a cascaded-rename (takes about 63ms) to rename the old table to something else and rename the new table to what the old table was named. This new table will continue in the place of the old table.
5. Insert the desired old rows from the old table into the new table using a WHILE loop that does it in 3 million row batches (most systems will reach a tipping point shortly after that, so I wouldn't go much higher, if at all).
6. When you're absolutely sure that everything worked correctly, DROP the old table.


Thanks Jeff

Backup is done and point-in-time backups are being taken.

I will follow your suggestion. I believe 63 ms would mean a very slight downtime that might be faced by some trigger or end-user application.
By the way, the table is hit by a trigger on another table, so the ultimate end user might not notice anything.

Any further suggestion about the downtime? I mean, what should I be expecting during that 63 ms?

Thanks again.


I'd recommend you post ALL the code you write for this before you even think of running it once you have it coded.

As to what you should be expecting during that 63 ms? You're the one that's going to have to figure that out. I have no clue as to how often the table is hit.

As a bit of a sidebar... I typically have staging tables for things like this and a job to transfer the new rows from the staging table to the final table that runs once a minute. That way, when I need to do something to the "huge" table, I just turn off the transfer job. The new rows continue to accumulate in the staging table while I'm working (usually just a couple of minutes) and when I'm done, I just turn the job back on and everything is good.
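The transfer step of that staging-table pattern might look like the following; this is a sketch only, with hypothetical table and column names (the source doesn't show the real schema), run from a SQL Agent job once a minute:

```sql
-- Move accumulated rows from the staging table to the final table.
-- Lock the staging table for the duration so the copy+delete pair
-- cannot lose rows inserted in between.
BEGIN TRANSACTION;

INSERT INTO dbo.AuditLog (AuditDateTime, EventData)       -- hypothetical columns
SELECT AuditDateTime, EventData
FROM dbo.AuditLog_Staging WITH (TABLOCKX, HOLDLOCK);

DELETE FROM dbo.AuditLog_Staging;  -- safe: the table is still locked

COMMIT TRANSACTION;
```

Disabling the job pauses the flow into the big table without losing anything, because the trigger keeps writing to the (small) staging table the whole time.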

For future deletions from this table, you might want to look into partitioning by date. Save that until you're done with this "delete".
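The rename-and-batch-copy steps quoted above (steps 3 through 6) can be sketched as follows; all object names, the AuditID key, and the '20140501' cutoff are assumptions for illustration, not the poster's actual schema:

```sql
-- Step 3: empty, identical table. SELECT INTO copies only the columns;
-- script the real DDL (indexes, constraints, identity) in practice.
SELECT * INTO dbo.AuditLog_New FROM dbo.AuditLog WHERE 1 = 0;

-- Step 4: cascaded rename in one short transaction (the ~63 ms window).
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.AuditLog', 'AuditLog_Old';
EXEC sp_rename 'dbo.AuditLog_New', 'AuditLog';
COMMIT TRANSACTION;

-- Step 5: copy the rows to keep in 3-million-row batches, walking a
-- hypothetical ever-increasing key (AuditID) to bound each batch.
DECLARE @BatchSize int    = 3000000;
DECLARE @MinID     bigint = 0,
        @MaxID     bigint;

WHILE 1 = 1
BEGIN
    SELECT @MaxID = MAX(AuditID)
    FROM (SELECT TOP (@BatchSize) AuditID
          FROM dbo.AuditLog_Old
          WHERE AuditID > @MinID
          ORDER BY AuditID) AS batch;

    IF @MaxID IS NULL BREAK;           -- no rows left to scan

    INSERT INTO dbo.AuditLog           -- list columns explicitly in real code
    SELECT *
    FROM dbo.AuditLog_Old
    WHERE AuditID > @MinID
      AND AuditID <= @MaxID
      AND AuditDateTime >= '20140501'; -- keep only the desired months

    SET @MinID = @MaxID;
    SET @MaxID = NULL;
END;

-- Step 6: only after everything is verified:
-- DROP TABLE dbo.AuditLog_Old;
```

Keeping each batch in its own implicit transaction is what keeps the log manageable, in contrast to the single huge DELETE whose rollback the poster is waiting out.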


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1601258