SQLServerCentral is supported by Redgate


Parallel Processing
Posted Thursday, April 17, 2014 8:15 AM


Group: General Forum Members
Last Login: Today @ 6:06 AM
Points: 800, Visits: 8,178
Are you processing all of the dimensions first, then the fact table?
And what is the performance like when processing one store at a time?
Generally, we would partition by year, with a clustered index on Business Unit, which would be similar to a store.
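A minimal sketch of that layout, with entirely hypothetical object and column names (pfYear, psYear, dbo.FactSales, BusinessUnit, SalesYear are assumptions, not from the thread) — partitioning by year, with the clustered index led by the business-unit column:

```sql
-- Hypothetical names throughout; boundaries and filegroups are illustrative only.
CREATE PARTITION FUNCTION pfYear (int)
    AS RANGE RIGHT FOR VALUES (2012, 2013, 2014);

CREATE PARTITION SCHEME psYear
    AS PARTITION pfYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.FactSales
(
    BusinessUnit int    NOT NULL,  -- plays the role of "store" here
    SalesYear    int    NOT NULL,  -- partitioning column
    UniqueId     bigint NOT NULL,
    Amount       money  NOT NULL
) ON psYear (SalesYear);

-- Clustered index led by Business Unit; the partitioning column must be
-- part of the key for a partition-aligned clustered index.
CREATE CLUSTERED INDEX cixFactSales
    ON dbo.FactSales (BusinessUnit, SalesYear)
    ON psYear (SalesYear);
```

With this shape, per-year maintenance (switching, rebuilding) works partition by partition while per-store queries seek on the leading clustered-key column.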
Post #1562678
Posted Thursday, April 17, 2014 6:09 PM


Group: General Forum Members
Last Login: Today @ 9:19 AM
Points: 9,822, Visits: 11,891
mr_effe (4/17/2014)
I don't think this solution helps me, because a partition switch takes a schema modification (Sch-M) lock, and if all of my parallel processes want a Sch-M lock on the same table, nothing works any more.

I already ran into this with my "truncate partition" function, which uses a partition switch to empty a table partition. It works fine if only one process is active, but when I started the jobs in parallel they all waited for each other. So I have already changed the logic from "truncate partition" to a DELETE.

So I think this solution will not help me :-(

If you have N partitions, you can use N+2 jobs:
- one job that switches the partitions out into the new table and then kicks off the insert job for each partition;
- one insert job per partition; and
- one job that polls for finished insert jobs, switches the corresponding partition back when it finds one, notes that it has dealt with that job so it won't poll for it again, and terminates when there is nothing left to poll for.

That avoids the clashes on the schema modification lock.
Obviously you need to deal with one of the insert jobs failing, but that is probably easier to handle than the case where the insert jobs are clashing on locks.
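The switch-out step of that scheme could be sketched roughly as below. All object names (dbo.FactSales, dbo.FactSales_Work) are assumptions; the point is that every Sch-M-taking switch is issued from this one controller job, so the insert jobs never compete for that lock:

```sql
-- Hypothetical objects: dbo.FactSales (partitioned table) and
-- dbo.FactSales_Work (empty, identically structured, same partition scheme).
DECLARE @p int = 1, @n int = 4;   -- @n = number of partitions (illustrative)

WHILE @p <= @n
BEGIN
    -- Each switch is metadata-only but needs a short Sch-M lock;
    -- doing them all serially from ONE job avoids the pile-up.
    ALTER TABLE dbo.FactSales
        SWITCH PARTITION @p TO dbo.FactSales_Work PARTITION @p;

    -- Here the controller would start the insert job for partition @p,
    -- e.g. via msdb.dbo.sp_start_job (job names would be per-partition).
    SET @p += 1;
END;
```

The polling job then mirrors this loop in the other direction: for each insert job that has finished, switch the corresponding partition of dbo.FactSales_Work back into dbo.FactSales and mark it done.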

Post #1562886
Posted Tuesday, April 22, 2014 4:59 AM


Group: General Forum Members
Last Login: Friday, March 4, 2016 2:22 AM
Points: 11, Visits: 120

thanks a lot for your tips. I've decided to change the sequence of the jobs within each parallel process, so I could also minimize the locks.

But I still have a performance issue due to wrong execution plans.

I load the data from staging into the core DWH via MERGE statements in stored procedures. The StoreId is passed in as an input parameter and assigned to the variable @v_StoreId. The following fragment is from one of the MERGE statements:

ON a.StoreId = b.StoreId
) s
ON t.StoreId = s.StoreId
AND t.UniqueId = s.UniqueId

This procedure is executed for each of the six StoreIds, but for some of the StoreIds there is no data in the staging tables, and I think this leads to my performance problems: the execution plan uses a nested loop for the inner join of STAGE_TABLE_A and STAGE_TABLE_B for the StoreIds that have no data, and the plan for the StoreIds that do have data in the stage tables then wrongly uses a nested loop as well, which leads to a long runtime.

I've already analyzed the statistics of the tables. They are correct, and if I display the execution plan in SSMS it shows me the correct plans, both for the StoreIds without input data (nested loops) and for the StoreIds with data (hash match). But there I had replaced the variable @v_StoreId with the actual StoreId value.

So does anybody know what I have to do to get different execution plans for the different StoreIds? Is the variable @v_StoreId the problem?
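A minimal, self-contained sketch of the pattern being asked about (all table and column names here are hypothetical stand-ins for the real staging and core tables). The optimizer cannot see a local variable's runtime value and builds one plan from average density; OPTION (RECOMPILE) is one common way to make it compile a per-value plan instead:

```sql
-- Hypothetical names; CORE_TABLE / STAGE_TABLE_A / STAGE_TABLE_B stand in
-- for the real tables from the post.
DECLARE @v_StoreId int = 3;

MERGE CORE_TABLE AS t
USING (
    SELECT a.StoreId, a.UniqueId
    FROM STAGE_TABLE_A AS a
    JOIN STAGE_TABLE_B AS b
      ON a.StoreId = b.StoreId
    WHERE a.StoreId = @v_StoreId
) AS s
   ON t.StoreId  = s.StoreId
  AND t.UniqueId = s.UniqueId
WHEN NOT MATCHED THEN
    INSERT (StoreId, UniqueId) VALUES (s.StoreId, s.UniqueId)
-- Compile a fresh plan on every execution, so the optimizer sees the
-- actual @v_StoreId value and row counts rather than an average estimate.
OPTION (RECOMPILE);
```

With the hint, an empty StoreId can get the cheap nested-loop plan while a populated one gets the hash-match plan, at the cost of a compilation per call.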

Many thanks
Post #1563767