Optimizing inserts and deletes on a single large table
Posted Saturday, December 12, 2009 3:30 PM
Grasshopper


Group: General Forum Members
Last Login: Tuesday, May 10, 2011 12:42 PM
Points: 10, Visits: 108
Our company has a very large table (150 million rows) to which we add about 1,000 rows per minute via a trigger on another table. At the same time, a job that runs every few minutes deletes older rows. I am wondering if anyone has recommendations for ensuring that the delete process and the insert trigger do not lead to much contention. The rows we delete are at the other end of the table, so to speak, from the rows we are inserting, but occasionally the whole process becomes sluggish.

We are using SQL Server 2005.
Post #833412
Posted Sunday, December 13, 2009 7:18 AM


SSCrazy Eights


Group: General Forum Members
Last Login: Thursday, June 5, 2014 10:54 AM
Points: 9,902, Visits: 9,480
You would probably want to partition this table across multiple volumes, additionally ensuring that none of those volumes share any physical disks. You should probably use a partitioning function that puts the newly inserted records on a different volume (or volumes) than the old, to-be-deleted records.
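As a rough sketch of what that could look like (all table, column, and filegroup names here are invented for illustration, and this assumes the filegroups were created on separate volumes beforehand):

```sql
-- Hypothetical range partition function on a datetime key, mapping
-- partitions to filegroups that live on separate volumes.
CREATE PARTITION FUNCTION pf_EventDate (datetime)
AS RANGE RIGHT FOR VALUES ('2009-11-01', '2009-12-01', '2010-01-01');

-- One filegroup per partition; new rows land in FG_Current/FG_Next,
-- old to-be-deleted rows sit in FG_Archive.
CREATE PARTITION SCHEME ps_EventDate
AS PARTITION pf_EventDate
TO (FG_Archive, FG_Old, FG_Current, FG_Next);

CREATE TABLE dbo.EventLog
(
    EventDate datetime     NOT NULL,
    Payload   varchar(200) NOT NULL
) ON ps_EventDate (EventDate);
```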

-- RBarryYoung, (302)375-0451 blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc.
"Performance is our middle name."
Post #833479
Posted Sunday, December 13, 2009 11:30 AM
Grasshopper


Group: General Forum Members
Last Login: Tuesday, May 10, 2011 12:42 PM
Points: 10, Visits: 108
Partitioning is a good idea, but in addition, is it worth trying hints like WITH (ROWLOCK) on the transactions that delete and insert, or is that not going to make much difference?
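For reference, the hint being asked about would look something like this (table and column names are made up; note that ROWLOCK only requests row locks, and the engine can still escalate if too many are taken):

```sql
-- Hypothetical delete with a row-lock hint
DELETE FROM dbo.EventLog WITH (ROWLOCK)
WHERE EventDate < '2009-06-01';

-- Same hint on an insert
INSERT INTO dbo.EventLog WITH (ROWLOCK) (EventDate, Payload)
VALUES (GETDATE(), 'example row');
```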
Post #833525
Posted Sunday, December 13, 2009 12:13 PM


SSCrazy Eights


Group: General Forum Members
Last Login: Thursday, June 5, 2014 10:54 AM
Points: 9,902, Visits: 9,480
I doubt that it would make much difference, though you could always test it to see for your specific case.

-- RBarryYoung, (302)375-0451 blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc.
"Performance is our middle name."
Post #833526
Posted Sunday, December 13, 2009 12:46 PM
Grasshopper


Group: General Forum Members
Last Login: Tuesday, May 10, 2011 12:42 PM
Points: 10, Visits: 108
Forgot to mention that we are using Standard Edition and not Enterprise, so I guess partitioning the table is out, right? Or rather, it would have to be simulated with a partitioned view?
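For anyone following along, a local partitioned view on Standard Edition would look roughly like this (names invented; each member table needs a CHECK constraint on the partitioning column, and for the view to be updatable the partitioning column must be part of the primary key):

```sql
-- Hypothetical member tables, one per month
CREATE TABLE dbo.EventLog_2009_11
(
    EventID   int          NOT NULL,
    EventDate datetime     NOT NULL
        CHECK (EventDate >= '2009-11-01' AND EventDate < '2009-12-01'),
    Payload   varchar(200) NOT NULL,
    CONSTRAINT PK_EventLog_2009_11 PRIMARY KEY (EventDate, EventID)
);

CREATE TABLE dbo.EventLog_2009_12
(
    EventID   int          NOT NULL,
    EventDate datetime     NOT NULL
        CHECK (EventDate >= '2009-12-01' AND EventDate < '2010-01-01'),
    Payload   varchar(200) NOT NULL,
    CONSTRAINT PK_EventLog_2009_12 PRIMARY KEY (EventDate, EventID)
);
GO

-- The view unions the members; the CHECK constraints let the
-- optimizer touch only the relevant table for a given date range.
CREATE VIEW dbo.EventLog
AS
SELECT EventID, EventDate, Payload FROM dbo.EventLog_2009_11
UNION ALL
SELECT EventID, EventDate, Payload FROM dbo.EventLog_2009_12;
```

A nice side effect is that purging old data can become dropping (or truncating) an entire member table instead of running large deletes.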
Post #833532
Posted Monday, December 14, 2009 1:07 AM


SSChampion


Group: General Forum Members
Last Login: Today @ 2:50 AM
Points: 11,192, Visits: 11,098
Yes - partitioning is an Enterprise-only feature. You could indeed consider using a partitioned view, but you should probably take a step back before making such a big change. One very sound approach is to identify bottlenecks in the existing system, and then make changes to improve matters. The key is to work based on evidence rather than simply applying whatever 'best practice' comes to hand.

The subject is a complex one, and there is nowhere near enough detail in your posts to make any recommendations. For example, if your system is constrained by logging or free buffer pool pages, rather than data file I/O, partitioning tables and placing them on separate physical disks is highly unlikely to make much of an impact.

The best papers available, to my knowledge, discussing how to approach this sort of project are linked to below - both feature senior figures from Microsoft. The basic strategy is to measure the 'waits and queues' over the system as a whole, and then apply targeted measures to improve those things most in need.

Troubleshooting Performance Problems in SQL Server 2008
SQL Server 2005 Waits and Queues
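As a minimal starting point for the 'waits and queues' approach those papers describe, something like the following shows the top accumulated waits since the last service restart (the excluded wait types are just a small illustrative sample of benign system waits):

```sql
-- Top waits since the last restart (SQL Server 2005+)
SELECT TOP (10)
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```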




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #833637
Posted Monday, December 14, 2009 7:57 AM


SSCarpal Tunnel


Group: General Forum Members
Last Login: Friday, July 25, 2014 3:19 AM
Points: 4,320, Visits: 6,113
What you need to do is find out WHY things appear to be sluggish at times. Is it due to blocking, IO stalls, or some other form of wait? Each of those possibilities should be analyzed while the activities are in progress. If for some reason (such as query complexity leading to bad cardinality estimates, or improper indexing) table locks (or perhaps even page locks) are being acquired, then you could indeed be seeing blocking, and if so row lock hints could help. If the queries that insert/delete are suboptimal, then whatever locks are taken are being held too long, and tuning could be the magic bullet you seek.

LOTS of possibilities here. I recommend getting a tuning consultant in for a few hours remote work. Should be all it takes, and you can learn from how they do their work as well so you can better fix your own problems in the future.
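Two quick checks along those lines, to be run while the system actually feels sluggish (both DMVs exist in SQL Server 2005):

```sql
-- Who is blocked right now, and by whom
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

-- Cumulative IO stalls per database file (high stall times
-- relative to IO counts suggest storage is the bottleneck)
SELECT DB_NAME(database_id) AS database_name,
       file_id,
       io_stall_read_ms,
       io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);
```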


Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #833840
Posted Monday, December 14, 2009 10:13 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Wednesday, September 29, 2010 5:23 AM
Points: 194, Visits: 2,357
Any idea how many rows are normally deleted by your delete process?
If it's more than about 5,000, then you're likely to be getting table locks due to lock escalation, in which case it may be worth amending your delete to remove the records in batches of, say, 4,000, which should at least prevent this issue.
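A batched delete along those lines could look like this (table, column, and the 90-day retention window are all invented for illustration; DELETE TOP (n) works in SQL Server 2005):

```sql
-- Hypothetical purge loop: delete in batches of 4,000 rows,
-- staying under the lock-escalation threshold.
DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (4000) FROM dbo.EventLog
    WHERE EventDate < DATEADD(day, -90, GETDATE());

    -- @@ROWCOUNT is 0 once nothing qualifies, ending the loop
    SET @rows = @@ROWCOUNT;
END;
```

Each batch commits as its own transaction, so locks are released between batches and the concurrent inserts get a chance to run.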
Post #833977
Posted Monday, December 14, 2009 10:23 AM


SSCarpal Tunnel


Group: General Forum Members
Last Login: Friday, July 25, 2014 3:19 AM
Points: 4,320, Visits: 6,113
Andrew Gothard-467944 (12/14/2009)
Any idea how many rows are normally deleted by your delete process?
If it's more than about 5,000, then you're likely to be getting table locks due to lock escalation, in which case it may be worth amending your delete to remove the records in batches of, say, 4,000, which should at least prevent this issue.


Can I ask where the 5000 number you picked came from?


Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #833980
Posted Monday, December 14, 2009 10:30 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Wednesday, September 29, 2010 5:23 AM
Points: 194, Visits: 2,357
Good question. I'll have to have a dig there.
It was from an MS article on lock escalation, to the effect that the threshold for lock escalation was c. 3,000 rows in SQL Server 2000 and c. 5,000 in SQL Server 2005. But you're quite right, it's pretty bad form to post without reference to the appropriate source. I will try to rectify that.

cheers
Andrew
Post #833987