Using Checkpoint

  • Folks,

    Have a good day.

    I need a clarification regarding the usage of checkpoints in SSIS.

    I set a checkpoint for a single task. In this task, I am trying to insert 1,000 rows into a table from a source. If the package fails after inserting 100 rows, will it start from the 101st row or the 1st row when executed again?

    Kindly explain.

    Thanks in advance.

    Regards

    Kannan

  • Kannan Vignesh (1/8/2017)


    Folks,

    Have a good day.

    I need a clarification regarding the usage of checkpoints in SSIS.

    I set a checkpoint for a single task. In this task, I am trying to insert 1,000 rows into a table from a source. If the package fails after inserting 100 rows, will it start from the 101st row or the 1st row when executed again?

    Kindly explain.

    Thanks in advance.

    Regards

    Kannan

    It will restart from row 1. This quote from BOL clears up any doubt:

    A package can be restarted only at the control flow level. You cannot restart a package in the middle of a data flow.
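    For completeness, here is a minimal sketch of the standard SSIS properties involved in checkpoint restarts (the file path and package name are only examples):

    -- Package-level properties
    CheckpointFileName : C:\SSIS\Checkpoints\MyPackage.chk
    CheckpointUsage    : IfExists   -- restart from the checkpoint file when one exists
    SaveCheckpoints    : True

    -- On each task that should define a restart point
    FailPackageOnFailure : True

    Even configured this way, a restarted Data Flow Task runs from its first row again; the checkpoint only skips tasks that already completed successfully.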

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.

  • If your aim is to continue from a "checkpoint" after a failure, I would suggest one of two ideas.

    1. Log what you have processed so far; when the process starts again, it reads the value of where it got to before and continues from there (see the sketch after this list).

    2. Check for the existence of the data and skip that iteration if it already exists. The easiest way would be to use expressions on your control flow.
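    As a rough sketch of idea 1, a watermark table might look like the following (the table, column, and package names are hypothetical, not from this thread):

    -- Hypothetical watermark table for restartable loads
    CREATE TABLE dbo.LoadWatermark
    (
        PackageName sysname      NOT NULL PRIMARY KEY,
        LastKeyDone int          NOT NULL,  -- last source key successfully inserted
        UpdatedAt   datetime2(0) NOT NULL DEFAULT SYSUTCDATETIME()
    );

    -- At package start (e.g. in an Execute SQL Task), read where the last run got to
    SELECT LastKeyDone
    FROM dbo.LoadWatermark
    WHERE PackageName = N'MyLoadPackage';

    -- The source query then continues from that point
    SELECT SourceKey, Payload
    FROM dbo.SourceTable
    WHERE SourceKey > @LastKeyDone
    ORDER BY SourceKey;

    -- After each successful batch, advance the watermark
    UPDATE dbo.LoadWatermark
    SET LastKeyDone = @LastKeyDone, UpdatedAt = SYSUTCDATETIME()
    WHERE PackageName = N'MyLoadPackage';

    In SSIS terms, the first SELECT would typically populate a package variable that the data flow's source query references.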

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Many thanks!

  • Thom A (1/8/2017)


    If your aim is to continue from a "checkpoint" after a failure, I would suggest one of two ideas.

    1. Log what you have processed so far; when the process starts again, it reads the value of where it got to before and continues from there.

    2. Check for the existence of the data and skip that iteration if it already exists. The easiest way would be to use expressions on your control flow.

    From the description ...

    , trying to insert 1,000 rows into a table from a source. If the package fails after inserting 100 rows, will it start from the 101st row or the 1st row when executed again?

    it appears he simply wants to know whether, if a failure occurs (say, in a data flow), the package will start at the row that errored when it is rerun.

    ---------

    One thing you can do is redirect error rows to a different destination. It is not unusual for packages to be designed this way. You can later re-import when the problem is fixed by re-running your package.*

    *It is strongly recommended that your packages implement an acceptable way to handle potential duplication (for example, delete at the source or at the destination prior to the ETL move, depending on what makes sense in each situation).
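    As a hedged sketch of that duplicate handling (dbo.TargetTable, dbo.SourceTable, and @BatchId are illustrative names, not from the thread):

    -- Pre-delete: remove any rows the failed run already committed for this batch
    DELETE t
    FROM dbo.TargetTable AS t
    WHERE t.BatchId = @BatchId;

    -- Or skip rows that already exist at the destination
    INSERT INTO dbo.TargetTable (SourceKey, Payload, BatchId)
    SELECT s.SourceKey, s.Payload, @BatchId
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.TargetTable AS t
                      WHERE t.SourceKey = s.SourceKey);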

    ----------------------------------------------------

  • MMartin1 (1/9/2017)


    Thom A (1/8/2017)


    If your aim is to continue from a "checkpoint" after a failure, I would suggest one of two ideas.

    1. Log what you have processed so far; when the process starts again, it reads the value of where it got to before and continues from there.

    2. Check for the existence of the data and skip that iteration if it already exists. The easiest way would be to use expressions on your control flow.

    From the description ...

    , trying to insert 1,000 rows into a table from a source. If the package fails after inserting 100 rows, will it start from the 101st row or the 1st row when executed again?

    it appears he simply wants to know whether, if a failure occurs (say, in a data flow), the package will start at the row that errored when it is rerun.

    ---------

    One thing you can do is redirect error rows to a different destination. It is not unusual for packages to be designed this way. You can later re-import when the problem is fixed by re-running your package.*

    *It is strongly recommended that your packages implement an acceptable way to handle potential duplication (for example, delete at the source or at the destination prior to the ETL move, depending on what makes sense in each situation).

    If it were entirely a data flow, though, then SSIS should roll back any inserts it had already made if one failed. I therefore felt that this is likely a combination of data flow and control flow tasks, as there would then be a need to restart from a point in the process.

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • If it were entirely a data flow, though, then SSIS should roll back any inserts it had already made if one failed.

    By default, yes, but not necessarily. Whether already-committed rows survive a failure depends on the destination's commit settings; for example, an OLE DB Destination using fast load with a "Maximum insert commit size" smaller than the row count leaves earlier batches committed in the table.
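    If all-or-nothing behaviour is actually required, it can be forced. A minimal sketch of the two usual settings (values are illustrative):

    -- On the Data Flow Task (uses the Distributed Transaction Coordinator)
    TransactionOption : Required

    -- Or on an OLE DB Destination using fast load
    Maximum insert commit size : 0   -- 0 commits the entire load as one batch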

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.
