Managing Large Data Sets in SQL Server 2005 and 2008

  • Comments posted to this topic are about the item Managing Large Data Sets in SQL Server 2005 and 2008

    Zach Mided
    www.AllianceGlobalServices.com

  • Since TOP supports a variable, adding TOP (@variable) to the SELECT with the ROW_NUMBER statement can improve performance.

  • Adding a TOP clause to the subquery will break the intended functionality: only the first page will return data. You could put a TOP clause in the outer query, but from what I've tested quickly this doesn't make any difference for IO or CPU time. In fact, the query plans seem identical. The second version has the disadvantage that it also returns data for negative page numbers (i.e. it may be considered less robust).

    declare @nRowsPerPage int;
    declare @nPage int;

    select
        @nRowsPerPage = 25,
        @nPage = 142;

    -- first version: lower and upper bound on rowNB in the WHERE clause
    select x.*
    from (
        select row_number() over (order by col.object_id, col.name) as rowNB, col.*
        from sys.columns col
    ) x
    where x.rowNB > (@nRowsPerPage * (isnull(@nPage,0) - 1))
      and x.rowNB <= (@nRowsPerPage * isnull(@nPage,0))
    order by x.rowNB;

    -- second version: TOP (@nRowsPerPage) in the outer query, lower bound only in the WHERE clause
    select top (@nRowsPerPage) x.*
    from (
        select row_number() over (order by col.object_id, col.name) as rowNB, col.*
        from sys.columns col
    ) x
    where x.rowNB > (@nRowsPerPage * (isnull(@nPage,0) - 1))
    order by x.rowNB;



    Posting Data Etiquette - Jeff Moden
    Posting Performance Based Questions - Gail Shaw
    Hidden RBAR - Jeff Moden
    Cross Tabs and Pivots - Jeff Moden
    Catch-all queries - Gail Shaw


    If you don't have time to do it right, when will you have time to do it over?

  • It is a good way to process large data; only

    PRINT @message

    could be replaced by

    RAISERROR(@message, 5, 1) WITH NOWAIT
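
    The difference matters inside a long-running batch loop: PRINT output is buffered and may not appear until the batch finishes, while RAISERROR with a low severity and WITH NOWAIT flushes each message to the client immediately. A minimal sketch, assuming a hypothetical loop counter @batchNumber:

    DECLARE @message varchar(200);
    DECLARE @batchNumber int;
    SET @batchNumber = 1;

    WHILE @batchNumber <= 20
    BEGIN
        -- ... process one batch of rows here ...

        SET @message = 'Finished batch ' + CAST(@batchNumber AS varchar(10));

        -- PRINT @message;                      -- buffered; may only show up when the batch ends
        RAISERROR(@message, 5, 1) WITH NOWAIT;  -- informational severity, pushed to the client right away

        SET @batchNumber = @batchNumber + 1;
    END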

  • This article looks good.

  • Hi Zach, I think that's a pretty neat approach.

    With large tables I prefer to use table partitioning, which gets around the issue of locking a live table for an extended period of time, and improves query performance, etc. I can see how your method would be beneficial for non-partitioned tables though.

    Cheers for the article.

  • Yes, I agree that partitioning is very useful and should be strongly considered by anyone who is working with large data sets. I am using partitions and still find breaking large operations into smaller pieces to be very useful. In my situation, I have SQL statements that operate on huge portions of the partitions and cause a lot of table locks within the partitions themselves. These locks cause too much contention with the production system and are not feasible for the business.

    When I use this technique on a partitioned table, I always order the records primarily by the partition key. This further reduces lock contention and allows SQL Server to perform well by leveraging the clustered index.

    Even in the case when partitions are being used to perform operations "offline", breaking large SQL into smaller pieces is useful. For example, for some SQL operations, I switch select partitions into an "offline" table, which eliminates any lock contention with operations against those partitions from the production system. I also drop all unnecessary indexes in the "offline" table so that the operation will run much faster. Even for these "offline" partitions, I have found that breaking large SQL operations into smaller pieces is helpful. So, instead of inserting 10,000,000 rows in one shot, I use this technique to insert 20 sets of 500,000 rows. This causes fewer system resources to be used at a time and allows any other processes running on the same database server to run better. An additional benefit is that the operation can be stopped and started mid-stream, which is helpful if it is an operation that takes hours rather than minutes to run.
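
    A rough sketch of that batching pattern, assuming a hypothetical staging table dbo.purchase_staging with a sequential row_nb column and a target dbo.new_purchase (the table and column names here are illustrative, not the article's exact schema):

    -- Illustrative schema: dbo.purchase_staging holds the full data set with a
    -- sequential row_nb column; dbo.new_purchase is the table being loaded.
    DECLARE @batchSize int;
    DECLARE @lowerBound bigint;
    DECLARE @rowsAffected int;

    SET @batchSize = 500000;
    SET @lowerBound = 0;
    SET @rowsAffected = 1;

    WHILE @rowsAffected > 0
    BEGIN
        -- insert one 500,000-row slice per iteration
        INSERT INTO dbo.new_purchase (purchase_id, customer_id, amount)
        SELECT s.purchase_id, s.customer_id, s.amount
        FROM dbo.purchase_staging s
        WHERE s.row_nb > @lowerBound
          AND s.row_nb <= @lowerBound + @batchSize;

        SET @rowsAffected = @@ROWCOUNT;
        SET @lowerBound = @lowerBound + @batchSize;
    END

    Because each slice is its own statement rather than one huge transaction, the load can be stopped between batches and restarted mid-stream.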

    Zach Mided
    www.AllianceGlobalServices.com

  • Zach,

    The research looks great and the article explains itself very well. In SQL Server or any RDBMS, partitioning a large table helps in many ways, and batch processing always improves performance. It all depends on your server and database architecture.

    I like this document. I used to work for him 10 years ago. 🙂

  • Nice article Zach.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Nice article, a perfect representation of "Incremental Loading". I wonder if an error handling section could be added somewhere.

  • Nice article. I've used this type of method often, and this article explains it very well.

    Using ROW_NUMBER() could be avoided by adding an identity column on the new_purchase table and ordering the data on insert into new_purchase. This would obviously only work if you can order the data before or on insert into new_purchase.
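
    A rough sketch of that idea, with illustrative column names (not the article's exact schema); the identity value then serves as the batching key in place of ROW_NUMBER():

    -- The IDENTITY column provides the sequential key, so ROW_NUMBER() is not
    -- needed when the rows are read back out in batches.
    CREATE TABLE dbo.new_purchase
    (
        row_nb       bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
        purchase_id  int   NOT NULL,
        customer_id  int   NOT NULL,
        amount       money NOT NULL
    );

    -- An ORDER BY on the INSERT ... SELECT guarantees the identity values are
    -- assigned in that order.
    INSERT INTO dbo.new_purchase (purchase_id, customer_id, amount)
    SELECT p.purchase_id, p.customer_id, p.amount
    FROM dbo.purchase p
    ORDER BY p.purchase_id;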

  • Thanks Zach, Great Article,

    But I wonder if selecting the whole "new_purchase" table in each loop would decrease performance.

    Am I wrong?

  • Nice article with a nice complete example.

    Good work and thank you for the ideas - keep them coming.

  • Nice article Zach. I think this will work for me. I have a very large table in my production database. The consultant has exonerated himself from any adverse outcome if we attempt partitioning, so I am left with the 'safe' option of moving data to a separate instance.

    I intend to replace the BEGIN TRANSACTION block in the SP with the columns in the large table. Thus the SP will move data into a replica table in another instance, which will be used for long-term reporting.

    If you asked why not use replication or otherwise, the answer is that we only need a fraction of the data in live for day-to-day reporting, while the offline instance will be like an archive.

    If I do this once, I can then schedule, say, a monthly movement of this data to manage the production table size.

    What do you think?

    Br. Kenneth Igiri
    https://kennethigiri.com
    All nations come to my light, all kings to the brightness of my rising

  • OK, so I think what you are saying is that you have a very large table that you need to report against, but you only need a relatively small amount of the data for your production environment.

    If your challenge is supporting reports against this large data set, you might want to consider going directly against the production table while using the WITH (NOLOCK) hint to avoid locks that might tie things up. You can also use the OPTION (MAXDOP 1) query hint for your report queries to limit the number of processors that are used for processing and thus leave the other processors available to handle production requests.
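
    A minimal sketch of such a report query, assuming a hypothetical dbo.purchase table and columns:

    SELECT p.customer_id, SUM(p.amount) AS total_amount
    FROM dbo.purchase p WITH (NOLOCK)   -- read without taking shared locks (dirty reads are possible)
    WHERE p.purchase_date >= '20090101'
    GROUP BY p.customer_id
    OPTION (MAXDOP 1);                  -- keep the report on a single processor

    Keep in mind that NOLOCK trades consistency for concurrency, so it is only appropriate where slightly stale or in-flight data is acceptable for reporting.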

    Another approach is as you suggested. You can set up a monthly routine that will copy groups of rows (10,000-400,000 at a time would be my recommendation) from production into your reporting instance and then, in the same transaction, delete those rows from production. That should work just fine for you too.
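
    A rough sketch of that monthly copy-and-delete routine, assuming hypothetical table and column names and a reporting copy reachable as Reporting.dbo.purchase_archive (a separate database here; a linked-server name would be needed for a separate instance):

    DECLARE @batchSize int;
    DECLARE @rowsMoved int;
    DECLARE @batch TABLE (purchase_id int PRIMARY KEY);

    SET @batchSize = 100000;   -- somewhere in the 10,000 - 400,000 range suggested above
    SET @rowsMoved = 1;

    WHILE @rowsMoved > 0
    BEGIN
        BEGIN TRANSACTION;

        -- pick one batch of the oldest production keys
        DELETE FROM @batch;
        INSERT INTO @batch (purchase_id)
        SELECT TOP (@batchSize) p.purchase_id
        FROM dbo.purchase p
        WHERE p.purchase_date < '20090101'
        ORDER BY p.purchase_id;

        SET @rowsMoved = @@ROWCOUNT;

        -- copy the batch into the reporting copy ...
        INSERT INTO Reporting.dbo.purchase_archive (purchase_id, customer_id, amount, purchase_date)
        SELECT p.purchase_id, p.customer_id, p.amount, p.purchase_date
        FROM dbo.purchase p
        JOIN @batch b ON b.purchase_id = p.purchase_id;

        -- ... and delete the same rows from production in the same transaction
        DELETE p
        FROM dbo.purchase p
        JOIN @batch b ON b.purchase_id = p.purchase_id;

        COMMIT TRANSACTION;
    END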

    I have used partitioning extensively, and it works absolutely great. It can improve performance to an amazing degree by flattening the indexes and leveraging multiple processors.

    You can also use partitioning for an easier and more efficient solution for archiving data from production into a reporting instance. To do this you would create two separate tables, one for production and one for reporting. Partition each table by date (probably one month per partition). Have each table have identical partition file groups and indexes. Initially the production table would contain all of the data and the reporting table would be empty. You can then move a month at a time by using this command:

    ALTER TABLE BigTableInProduction
        SWITCH PARTITION <partition number of month to be moved>
        TO BigTableInReporting PARTITION <partition number of month to be moved>;

    Regardless of how much data is in the table, the statement should just take a second or two to run -- seriously, it goes that fast. The downside is that managing the partitions does require some additional DBA effort.
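
    A rough outline of that setup, with hypothetical names and a single filegroup kept for brevity (both tables must share the same partition function, scheme, and index structure for SWITCH to work):

    -- one boundary value per month; RANGE RIGHT means each boundary starts a new month
    CREATE PARTITION FUNCTION pf_PurchaseMonth (datetime)
        AS RANGE RIGHT FOR VALUES ('20090101', '20090201', '20090301');

    CREATE PARTITION SCHEME ps_PurchaseMonth
        AS PARTITION pf_PurchaseMonth ALL TO ([PRIMARY]);

    -- BigTableInProduction and BigTableInReporting are both created with identical
    -- columns and indexes ON ps_PurchaseMonth(purchase_date).

    -- find the partition number holding the month to be moved ...
    SELECT $PARTITION.pf_PurchaseMonth('20090115') AS partition_number;

    -- ... then move that month with a metadata-only operation
    ALTER TABLE BigTableInProduction
        SWITCH PARTITION 2 TO BigTableInReporting PARTITION 2;

    The switch only moves metadata, which is why it completes in a second or two regardless of row count; the target partition must be empty before the switch.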

    Let me know if you have any more questions.

    Zach Mided
    www.AllianceGlobalServices.com
