Managing Large Data Sets in SQL Server 2005 and 2008
Posted Sunday, January 03, 2010 10:47 PM
Forum Newbie


Group: General Forum Members
Last Login: Tuesday, September 03, 2013 11:41 AM
Points: 6, Visits: 76
Comments posted to this topic are about the item Managing Large Data Sets in SQL Server 2005 and 2008

Zach Mided
www.AllianceGlobalServices.com
Post #841302
Posted Monday, January 04, 2010 2:21 AM
SSC Veteran


Group: General Forum Members
Last Login: Wednesday, April 16, 2014 3:48 AM
Points: 230, Visits: 490
Since TOP supports a variable, adding TOP (@variable) to the SELECT with the ROW_NUMBER statement may improve performance.
Post #841336
Posted Monday, January 04, 2010 5:42 AM
SSC-Addicted


Group: General Forum Members
Last Login: Wednesday, April 16, 2014 5:59 AM
Points: 411, Visits: 1,394
Adding a TOP clause to the subquery will break the intended functionality: only the first page will return data. You could put a TOP clause in the outer query, but from what I've tested quickly this makes no difference in I/O or CPU time; in fact, the query plans seem identical. The second version has the disadvantage that it also returns data for negative page numbers (i.e. it may be considered less robust).
declare @nRowsPerPage int;
declare @nPage int;

select
    @nRowsPerPage = 25,
    @nPage = 142;

-- Variant 1: both page bounds filtered in the WHERE clause.
select x.*
from (
    select row_number() over (order by col.object_id, col.name) as rowNB, col.*
    from sys.columns col
) x
where x.rowNB > (@nRowsPerPage * (isnull(@nPage, 0) - 1))
  and x.rowNB <= (@nRowsPerPage * isnull(@nPage, 0))
order by x.rowNB;

-- Variant 2: TOP in the outer query replaces the upper bound.
select top (@nRowsPerPage) x.*
from (
    select row_number() over (order by col.object_id, col.name) as rowNB, col.*
    from sys.columns col
) x
where x.rowNB > (@nRowsPerPage * (isnull(@nPage, 0) - 1))
order by x.rowNB;





Posting Data Etiquette - Jeff Moden
Posting Performance Based Questions - Gail Shaw
Hidden RBAR - Jeff Moden
Cross Tabs and Pivots - Jeff Moden
Catch-all queries - Gail Shaw


If you don't have time to do it right, when will you have time to do it over?
Post #841399
Posted Monday, January 04, 2010 7:17 AM
Forum Newbie


Group: General Forum Members
Last Login: Friday, August 16, 2013 5:39 AM
Points: 9, Visits: 161
It is a good way to process large data; the only change I would suggest is that
PRINT @message
could be replaced by
RAISERROR(@message, 5, 1) WITH NOWAIT
so progress messages are flushed to the client immediately instead of being buffered.
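For example, a minimal sketch of progress reporting inside a batch loop (the loop body here is invented for illustration):

```sql
DECLARE @i int = 1, @message nvarchar(200);

WHILE @i <= 20
BEGIN
    -- ... process one 500,000-row batch here ...

    SET @message = N'Finished batch ' + CAST(@i AS nvarchar(10)) + N' of 20';
    -- Severity 0-10 is informational; WITH NOWAIT flushes the message to the
    -- client immediately, whereas PRINT output is held in the buffer.
    RAISERROR(@message, 5, 1) WITH NOWAIT;

    SET @i = @i + 1;
END;
```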
Post #841454
Posted Monday, January 04, 2010 7:43 AM
Valued Member


Group: General Forum Members
Last Login: Monday, March 31, 2014 10:56 PM
Points: 68, Visits: 383
This article looks good.
Post #841470
Posted Monday, January 04, 2010 8:06 AM
SSChasing Mays


Group: General Forum Members
Last Login: Tuesday, December 03, 2013 4:40 PM
Points: 654, Visits: 375
Hi Zach, I think that's a pretty neat approach.

With large tables I prefer to use table partitioning, which gets around the issue of locking a live table for an extended period of time and improves query performance. I can see how your method would be beneficial for non-partitioned tables though.
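For a partitioned table, the heavy work can also be pushed off the live table with a partition switch, which is a metadata-only operation. A rough sketch with invented names (the staging table must match the source's schema and sit on the same filegroup, and the target partition must be empty):

```sql
-- Move partition 3 out of the live table; SWITCH is metadata-only,
-- so the live table is blocked only for an instant.
ALTER TABLE dbo.SalesFact SWITCH PARTITION 3 TO dbo.SalesFact_Staging;

-- ... run the heavy operation against dbo.SalesFact_Staging ...

-- Switch the partition back in when done.
ALTER TABLE dbo.SalesFact_Staging SWITCH TO dbo.SalesFact PARTITION 3;
```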

Cheers for the article.
Post #841499
Posted Monday, January 04, 2010 8:51 AM
Forum Newbie


Group: General Forum Members
Last Login: Tuesday, September 03, 2013 11:41 AM
Points: 6, Visits: 76
Yes, I agree that partitioning is very useful and should be strongly considered by anyone working with large data sets. I am using partitions and still find breaking large operations into smaller pieces to be very useful. In my situation, I have SQL statements that operate on huge portions of the partitions and cause a lot of table locks within the partitions themselves. These locks cause too much contention with the production system and are not acceptable to the business.

When I use this technique on a partitioned table, I always order the records primarily by the partition key. This further reduces lock contention and allows SQL Server to perform well by leveraging the clustered index.

Even when partitions are being used to perform operations "offline", breaking large SQL into smaller pieces is useful. For example, for some SQL operations I switch select partitions into an "offline" table, which eliminates lock contention from any production operations against those partitions. I also drop all unnecessary indexes in the "offline" table so that the operation runs much faster. Even for these "offline" partitions, I have found that breaking large SQL operations into smaller pieces is helpful. So, instead of inserting 10,000,000 rows in one shot, I use this technique to insert 20 sets of 500,000 rows. This uses fewer system resources at a time and lets other processes running on the same database server run better. An additional benefit is that the operation can be stopped and restarted mid-stream, which is helpful when it takes hours rather than minutes to run.
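In T-SQL terms, the "20 sets of 500,000 rows" idea could look something like this sketch (all table and column names are hypothetical, not taken from the article):

```sql
-- Copy rows in 500,000-row batches, ordered by the partition key
-- so each batch stays within as few partitions as possible.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.Target_Offline (PartitionKey, Id, Payload)
    SELECT TOP (500000) s.PartitionKey, s.Id, s.Payload
    FROM dbo.Source s
    WHERE NOT EXISTS (
        SELECT 1 FROM dbo.Target_Offline t
        WHERE t.PartitionKey = s.PartitionKey AND t.Id = s.Id
    )
    ORDER BY s.PartitionKey, s.Id;

    SET @rows = @@ROWCOUNT;  -- 0 once the source is exhausted
END;
```

Because each batch is a separate statement, the loop can be stopped and restarted mid-stream; the NOT EXISTS check lets it pick up where it left off.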


Zach Mided
www.AllianceGlobalServices.com
Post #841539
Posted Monday, January 04, 2010 12:40 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Tuesday, April 08, 2014 3:18 PM
Points: 115, Visits: 373
Zach,

The research looks great and the article explains itself very well. In SQL Server, or any other RDBMS, partitioning a large table helps in many ways, and batch processing always improves performance. It all depends on your server and database architecture.

I like this document. I used to work for him 10 years ago.
Post #841654
Posted Monday, January 04, 2010 1:48 PM


SSC-Insane


Group: General Forum Members
Last Login: Today @ 8:49 PM
Points: 20,462, Visits: 14,092
Nice article Zach.



Jason AKA CirqueDeSQLeil
I have given a name to my pain...
MCM SQL Server


SQL RNNR

Posting Performance Based Questions - Gail Shaw
Posting Data Etiquette - Jeff Moden
Hidden RBAR - Jeff Moden
VLFs and the Tran Log - Kimberly Tripp
Post #841702
Posted Monday, January 04, 2010 2:33 PM
Forum Newbie


Group: General Forum Members
Last Login: Tuesday, August 06, 2013 7:58 AM
Points: 3, Visits: 11
Nice article, a perfect representation of "incremental loading". I wonder if an error-handling section could be added somewhere.
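For what it's worth, one hypothetical way to bolt error handling onto such a batch loop (names invented for illustration):

```sql
-- Wrap each batch in its own transaction and TRY/CATCH, so a failure
-- rolls back only the current batch and the loop stops cleanly.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        DELETE TOP (500000) FROM dbo.WorkQueue WHERE Processed = 1;
        SET @rows = @@ROWCOUNT;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        DECLARE @err nvarchar(2048) = ERROR_MESSAGE();
        RAISERROR(@err, 16, 1);  -- re-raise for the caller / job history
        BREAK;                   -- stop the loop; earlier batches stay committed
    END CATCH;
END;
```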
Post #841734