    Hi,

    thanks a lot for your tips. I've decided to change the sequence of the jobs in each parallel process, so that I can also minimize the locking.

    But I still have a performance issue due to wrong execution plans.

    I load the data from the staging area into the core DWH via MERGE statements in stored procedures. The StoreId is passed as input parameter and is assigned to the local variable @v_StoreId. The following statement is an example of such a MERGE:

    MERGE CORE_TABLE_AB AS t
    USING
    (
        SELECT a.StoreId, a.UniqueId /* plus the further source columns (...) */
        FROM STAGE_TABLE_A a
        INNER JOIN STAGE_TABLE_B b
            ON a.StoreId = b.StoreId
        WHERE a.StoreId = @v_StoreId
          AND b.StoreId = @v_StoreId
    ) s
    ON  t.StoreId  = s.StoreId
    AND t.UniqueId = s.UniqueId
    WHEN NOT MATCHED BY TARGET THEN
        (...)
    WHEN MATCHED THEN
        (...)

    This procedure is executed once for each of the 6 StoreIds (roughly as in the sketch below), but for some of the StoreIds there is no data at all in the staging tables. I think this is what causes my performance problem: for the StoreIds without data, the execution plan uses a nested loop for the inner join of STAGE_TABLE_A and STAGE_TABLE_B, and the StoreIds that do have data in the stage tables then wrongly get a nested loop as well, which leads to a long runtime.
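
    For context, the calls look roughly like this (the procedure and parameter names are only placeholders, the real ones differ):

    EXEC dbo.usp_Load_CORE_TABLE_AB @StoreId = 1;
    EXEC dbo.usp_Load_CORE_TABLE_AB @StoreId = 2;
    -- ... one call per StoreId, 6 in total; for some of these StoreIds
    -- STAGE_TABLE_A and STAGE_TABLE_B contain no rows at all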

    I've already analyzed the statistics of the tables and they are correct. If I display the execution plan in SSMS, it shows me the correct plans both for the StoreIds without input data (nested loops) and for the StoreIds with data (hash match). But for that test I replaced the variable @v_StoreId with the real StoreId value.
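
    To make clear what I compared in SSMS, here it is simplified to just the filtering part (the value 4 is only an example):

    -- inside the procedure the filter uses the local variable
    DECLARE @v_StoreId INT;
    SET @v_StoreId = 4;
    SELECT * FROM STAGE_TABLE_A WHERE StoreId = @v_StoreId;

    -- in SSMS I replaced the variable with the literal value
    SELECT * FROM STAGE_TABLE_A WHERE StoreId = 4;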

    So does anyone of you know what I have to do to get different execution plans for the different StoreIds? Is the variable @v_StoreId the problem?
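
    One thing I've read about but haven't tried yet is adding OPTION (RECOMPILE) to the MERGE, so that the statement is compiled with the actual value of the variable on every execution. Just as a sketch of what I mean (source query and actions shortened as above):

    MERGE CORE_TABLE_AB AS t
    USING ( /* same source query as above */ ) s
    ON  t.StoreId  = s.StoreId
    AND t.UniqueId = s.UniqueId
    WHEN NOT MATCHED BY TARGET THEN
        (...)
    WHEN MATCHED THEN
        (...)
    OPTION (RECOMPILE);

    Would that be the right approach here, or is there a better way?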

    Many thanks