Will table partitioning and compression help a heavily used OLTP database?

  • We have a very big table, over 300 GB with more than half a billion rows. This table is used for audit purposes by all critical applications, and a lot of inserts happen every second. Due to heavy blocking, we can't purge data while the database is online.

    My boss says that partitioning and compressing this table will help with disk space and performance. He comes from an OLAP/DW background, but I highly doubt this will work for OLTP. How do I convince him it won't? Any articles?
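    Rather than arguing in the abstract, you can measure the disk-space side of the claim. SQL Server ships a stored procedure that estimates compression savings from a sample of the table. A minimal sketch, assuming a hypothetical table name `dbo.AuditLog` (substitute your own):

    ```sql
    -- Estimate PAGE compression savings for all indexes and partitions
    -- (NULL for @index_id / @partition_number means "all").
    -- dbo.AuditLog is an illustrative name, not from the original post.
    EXEC sp_estimate_data_compression_savings
         @schema_name      = 'dbo',
         @object_name      = 'AuditLog',
         @index_id         = NULL,
         @partition_number = NULL,
         @data_compression = 'PAGE';
    ```

    Note that compression trades CPU for I/O: on an insert-heavy audit table, PAGE compression adds CPU cost to every insert, so the performance half of the claim still needs testing under your actual write load.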

  • You should be able to purge/archive older data without any blocking, unless someone is constantly reading old data too.

    What is the table clustered on?

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.
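  • To expand on purging without blocking: the usual approach is to delete in small batches so locks never escalate to the whole table and concurrent inserts keep flowing. A minimal sketch, assuming an illustrative table `dbo.AuditLog` with an indexed `AuditDate` column (not names from the original post):

    ```sql
    -- Purge rows older than 12 months in small batches.
    -- TOP (5000) keeps each delete short and below lock-escalation thresholds.
    DECLARE @rows int = 1;
    WHILE @rows > 0
    BEGIN
        DELETE TOP (5000)
        FROM dbo.AuditLog
        WHERE AuditDate < DATEADD(MONTH, -12, GETDATE());

        SET @rows = @@ROWCOUNT;  -- loop ends when nothing is left to purge
    END;
    ```

    This only stays cheap if the WHERE clause is supported by an index, ideally the clustered index, which is why the question of what the table is clustered on matters.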

  • Partitioning very rarely helps performance in an OLTP situation. However, if you're rolling off old data, partitioning becomes a pretty viable approach; in fact, that's really what it was built for.

    As Scott asked, where you have your clustered index is really going to matter, especially with partitioning. Also, regarding your existing queries and processes, will you be able to ensure that the queries always include the partitioning key in the filter? If not, you may want to avoid partitioning entirely, because scans across partitions are more costly than a scan of a single table.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
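  • For the rolling-off-old-data case, the payoff of partitioning is that a purge becomes a metadata operation instead of a massive delete. A sliding-window sketch, assuming a hypothetical setup where `dbo.AuditLog` is partitioned by date on partition function `pfAuditDate`, and `dbo.AuditLog_Stage` is an empty table with identical schema and indexes on the same filegroup (none of these names are from the original posts):

    ```sql
    -- Move the oldest partition out of the audit table.
    -- SWITCH is metadata-only and near-instant, so inserts are barely blocked.
    ALTER TABLE dbo.AuditLog
        SWITCH PARTITION 1 TO dbo.AuditLog_Stage;

    -- Archive the staged rows elsewhere if required, then discard them.
    TRUNCATE TABLE dbo.AuditLog_Stage;

    -- Retire the now-empty boundary from the partition function.
    ALTER PARTITION FUNCTION pfAuditDate()
        MERGE RANGE ('2009-01-01');
    ```

    This is the scenario where partitioning genuinely earns its keep on an OLTP audit table, even if it does nothing for ordinary query performance.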

Viewing 3 posts - 1 through 3 (of 3 total)
