Managing Large Data Sets in SQL Server 2005 and 2008
Posted Monday, January 4, 2010 10:42 PM
SSC Rookie

Group: General Forum Members
Last Login: Monday, June 16, 2014 9:38 PM
Points: 42, Visits: 197
Nice article. I've used this type of method often, and this article explains it very well.

Using ROW_NUMBER() could be avoided by adding an identity column to the new_purchase table and ordering the data on insert into new_purchase. This would obviously only work if you can order the data before or during the insert.
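A minimal sketch of the identity-column idea, assuming an illustrative new_purchase table and a staging source (all names here are assumptions, not from the article):

```sql
-- An identity column assigns sequence numbers at insert time, so batches
-- can be selected by key range instead of computing ROW_NUMBER() per loop.
CREATE TABLE new_purchase (
    row_id      INT IDENTITY(1,1) PRIMARY KEY,
    purchase_id INT   NOT NULL,
    amount      MONEY NOT NULL
);

-- INSERT ... SELECT with ORDER BY guarantees identity values follow the order.
INSERT INTO new_purchase (purchase_id, amount)
SELECT purchase_id, amount
FROM   staging_purchase
ORDER  BY purchase_id;

-- Each loop iteration then reads one batch by key range:
SELECT *
FROM   new_purchase
WHERE  row_id BETWEEN 1 AND 10000;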
Post #841849
Posted Tuesday, January 5, 2010 3:14 AM
SSC Journeyman

Group: General Forum Members
Last Login: Friday, September 5, 2014 7:22 AM
Points: 99, Visits: 140
Thanks Zach, great article.
But I wonder whether selecting the whole "new_purchase" table on each loop iteration would decrease performance.
Am I wrong?
Post #841937
Posted Tuesday, January 5, 2010 5:44 AM
SSC Rookie

Group: General Forum Members
Last Login: Monday, November 10, 2014 7:48 AM
Points: 46, Visits: 342
Nice article with a nice complete example.
Good work and thank you for the ideas - keep them coming.
Post #841996
Posted Wednesday, March 7, 2012 9:35 AM
Valued Member

Group: General Forum Members
Last Login: Monday, November 17, 2014 7:45 AM
Points: 65, Visits: 269
Nice article Zach. I think this will work for me. I have a very large table in my production database. The consultant has exonerated himself from any adverse outcome if we attempt partitioning, so I am left with the 'safe' option of moving data to a separate instance.

I intend to replace the BEGIN TRANSACTION block in the SP with the columns of my large table. The SP will then move data into a replica table in another instance, which will be used for long-term reporting.

If you ask why not use replication or some other method, the answer is that we only need a fraction of the data in the live instance for day-to-day reporting, while the offline instance will act as an archive.

If I do this once, I can then schedule, say, a monthly movement of this data to manage the production table size.

What do you think?
Post #1263108
Posted Thursday, March 8, 2012 11:12 AM
Forum Newbie

Group: General Forum Members
Last Login: Tuesday, September 3, 2013 11:41 AM
Points: 6, Visits: 76
OK, so I think what you are saying is that you have a very large table that you need to report against, but you only need a relatively small amount of the data for your production environment.

If your challenge is supporting reports against this large data set, you might want to consider going directly against the production table while using the WITH (NOLOCK) table hint to avoid locks that might tie things up. You can also use the OPTION (MAXDOP 1) query hint on your report queries to limit the number of processors used, leaving the other processors available to handle production requests.
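A quick sketch of what such a report query might look like; the table and column names are illustrative assumptions:

```sql
-- NOLOCK permits dirty reads in exchange for not blocking writers;
-- MAXDOP 1 caps the query at a single scheduler.
SELECT customer_id, SUM(amount) AS total_amount
FROM   dbo.purchase WITH (NOLOCK)
GROUP  BY customer_id
OPTION (MAXDOP 1);
```

Keep in mind that NOLOCK reads can return uncommitted rows, so it suits reports that tolerate slight inaccuracy.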

Another approach is as you suggested. You can set up a monthly routine that will copy groups of rows (10,000-400,000 at a time would be my recommendation) from production into your reporting instance and then, in the same transaction, delete those rows from production. That should work just fine for you too.
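A hedged sketch of that copy-then-delete loop, assuming an illustrative purchase table and a reporting database reachable by three-part name (all names are assumptions):

```sql
DECLARE @batch INT = 10000, @moved INT = 1;

WHILE @moved > 0
BEGIN
    BEGIN TRANSACTION;

    -- Copy the oldest batch into the reporting table...
    INSERT INTO reporting.dbo.purchase_archive (purchase_id, purchase_date, amount)
    SELECT TOP (@batch) purchase_id, purchase_date, amount
    FROM   dbo.purchase
    ORDER  BY purchase_date;

    -- ...then delete the same rows from production in the same transaction.
    DELETE p
    FROM   dbo.purchase AS p
    WHERE  p.purchase_id IN
           (SELECT TOP (@batch) purchase_id
            FROM   dbo.purchase
            ORDER  BY purchase_date);

    SET @moved = @@ROWCOUNT;   -- capture before COMMIT resets it to 0

    COMMIT TRANSACTION;
END;
```

Keeping the batch size modest holds the transaction log and lock footprint down on each pass.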

I have used partitioning extensively, and it works absolutely great. It can improve performance to an amazing degree by flattening the indexes and leveraging multiple processors.

You can also use partitioning for an easier and more efficient solution for archiving data from production into a reporting instance. To do this you would create two separate tables, one for production and one for reporting. Partition each table by date (probably one month per partition). Have each table have identical partition file groups and indexes. Initially the production table would contain all of the data and the reporting table would be empty. You can then move a month at a time by using this command:
ALTER TABLE BigTableInProduction
SWITCH PARTITION <partition number of month to be moved> TO BigTableInReporting PARTITION <partition number of month to be moved>
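For context, a minimal sketch of the setup that SWITCH depends on; boundary dates, filegroup choice, and column names are illustrative assumptions:

```sql
-- One monthly partition function/scheme shared by both tables.
CREATE PARTITION FUNCTION pf_monthly (DATETIME)
AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-02-01', '2012-03-01');

CREATE PARTITION SCHEME ps_monthly
AS PARTITION pf_monthly ALL TO ([PRIMARY]);   -- single filegroup for simplicity

-- Both tables must be aligned (same scheme, same indexes) for SWITCH,
-- and the target partition must be empty.
CREATE TABLE BigTableInProduction (
    purchase_date DATETIME NOT NULL,
    amount        MONEY    NOT NULL
) ON ps_monthly (purchase_date);

CREATE TABLE BigTableInReporting (
    purchase_date DATETIME NOT NULL,
    amount        MONEY    NOT NULL
) ON ps_monthly (purchase_date);
```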

Regardless of how much data is in the table, the statement should just take a second or two to run -- seriously, it goes that fast. The downside is that managing the partitions does require some additional DBA effort.

Let me know if you have any more questions.


Zach Mided
www.AllianceGlobalServices.com
Post #1263840
Posted Tuesday, March 13, 2012 5:42 AM
Valued Member

Group: General Forum Members
Last Login: Monday, November 17, 2014 7:45 AM
Points: 65, Visits: 269
Thanks very much, Zach.

You mean SWITCH really works that fast? That will be great.

The table I am referring to is mostly used for reporting anyway; we just need to manage the size to improve performance. Another scenario: what if I want the history table in another instance on another host? That is, I do not want to create a new table on the live instance. Is it feasible to use, say, linked servers to achieve this? What other options are there?

In addition, the SWITCH command assumes my production table is partitioned, right? I may not be allowed (by the consultant) to partition this table, and that's the big challenge.
Post #1265824
Posted Tuesday, March 13, 2012 6:16 AM
Forum Newbie

Group: General Forum Members
Last Login: Tuesday, September 3, 2013 11:41 AM
Points: 6, Visits: 76
Yes, the SWITCH requires partitioning.

Linked servers do allow separate database servers to connect to one another. Other approaches such as replication and log shipping are used to keep the data the same in both tables, which is not what you are trying to do, so linked servers are probably the best approach for you.
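A brief sketch of the linked-server route; the server alias, data source, and table names are illustrative assumptions:

```sql
-- Register the remote archive instance once.
EXEC sp_addlinkedserver
     @server     = N'ARCHIVE01',
     @srvproduct = N'',
     @provider   = N'SQLNCLI',
     @datasrc    = N'archive-host\SQL2008';

-- Four-part naming then reaches the remote table directly.
INSERT INTO ARCHIVE01.ReportingDB.dbo.purchase_archive (purchase_id, amount)
SELECT purchase_id, amount
FROM   dbo.purchase
WHERE  purchase_date < '2011-01-01';
```

Cross-server inserts like this run as distributed transactions, so they are slower than a local SWITCH, but they avoid touching the live table's structure.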



Zach Mided
www.AllianceGlobalServices.com
Post #1265836
Posted Tuesday, March 13, 2012 6:36 AM
Valued Member

Group: General Forum Members
Last Login: Monday, November 17, 2014 7:45 AM
Points: 65, Visits: 269
Thanks so much Zach.

Finally (hopefully!), did you use separate disks for your partitions? How bad can putting all your filegroups on the same disk/spindle be (due to cost issues, for example)?

Also, my table is already almost 180 GB. How long do you think it will take to partition it?
Post #1265846
Posted Tuesday, March 13, 2012 8:50 AM
Forum Newbie

Group: General Forum Members
Last Login: Tuesday, September 3, 2013 11:41 AM
Points: 6, Visits: 76
I used separate disks in order to partition my tables (the largest was 400GB). You will still get some gains even if your file groups are on the same disk, so I would not let that deter you.

The initial creation of the partitions can definitely take a while. I would recommend creating just one partition and doing it at the beginning of off-hours. Surround the SQL partitioning call with SELECT getdate() so you can get a good measurement of how long it takes to create one partition. That will help you plan for creating the additional partitions.
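A sketch of that timing technique; the partition function name and boundary value are placeholder assumptions, and the actual statement will depend on your own partition scheme:

```sql
SELECT GETDATE() AS start_time;

-- Example partitioning step being measured: adding one monthly boundary.
-- (Requires a NEXT USED filegroup to be set on the partition scheme first.)
ALTER PARTITION FUNCTION pf_monthly() SPLIT RANGE ('2012-04-01');

SELECT GETDATE() AS end_time;
```

The delta between the two timestamps gives you a per-partition cost you can multiply out when planning the remaining off-hours windows.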



Zach Mided
www.AllianceGlobalServices.com
Post #1265980
Posted Tuesday, March 13, 2012 9:09 AM
Valued Member

Group: General Forum Members
Last Login: Monday, November 17, 2014 7:45 AM
Points: 65, Visits: 269
Thank you so much Zach.
Post #1265996