CPU Spikes Caused by Periodic Scheduled Jobs

  • skeleton567 (12/21/2016)


    skipping a minute won't hurt a thing, as long as you don't tell anybody.

    Until that one occasion where it does matter. As Murphy and Sod will cheerfully point out, that will be just after it fails, at which point, having hidden that it happened, you may suddenly find that it matters to a lot more people, not least yourself.

    The frequency of the jobs may well be dictated by business requirements or even legislation. In that case, if you want to reduce the frequency, the lead time to get approval may be huge, with similarly large costs, and unless you can come up with a *very* good argument it will probably not happen. You may have some leeway if you are currently exceeding the requirement, but only as long as you do not drop below the minimum.

    By all means check whether the frequency appears appropriate; just remember that the requirements are not only the technical ones. And by all means, if you are the only person determining the frequency, feel free to adjust it, but even then consider whether other people have been given the impression that something is possible which will no longer be the case, in which case you need to inform them at a minimum.

  • crmitchell (12/23/2016)


    skeleton567 (12/21/2016)


    skipping a minute won't hurt a thing, as long as you don't tell anybody.

    Until that one occasion where it does matter. As Murphy and Sod will cheerfully point out, that will be just after it fails, at which point, having hidden that it happened, you may suddenly find that it matters to a lot more people, not least yourself.

    Well, I'm assuming that the process is designed and implemented in such a manner that a failure will not lose any data and will resume appropriate delivery on succeeding executions, especially if the sequence of data presentation and processing is important. If this is not true, you truly need to get it fixed. Or, as I often did, fix it yourself.

    Rick
    Disaster Recovery = Backup ( Backup ( Your Backup ) )
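
    A minimal sketch of that kind of resumable design, assuming a hypothetical dbo.JobWatermark table and hypothetical dbo.SourceEvents/dbo.ProcessedEvents tables (none of these names come from the article): each run picks up from the last committed high-water mark, so a failed or skipped run simply leaves the rows for the next one.

    -- Hypothetical watermark pattern; assumes a seed row for N'MinutelyWorkload' exists.
    CREATE TABLE dbo.JobWatermark
    (
        JobName     sysname   NOT NULL PRIMARY KEY,
        LastEventId bigint    NOT NULL,
        LastRunAt   datetime2 NOT NULL
    );
    GO

    CREATE PROCEDURE dbo.ProcessNewEvents
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @From bigint, @To bigint;

        SELECT @From = LastEventId
        FROM dbo.JobWatermark
        WHERE JobName = N'MinutelyWorkload';

        SELECT @To = MAX(EventId)
        FROM dbo.SourceEvents;               -- hypothetical source table

        IF @To IS NULL OR @To <= @From
            RETURN;                          -- nothing new this run

        BEGIN TRANSACTION;

        INSERT INTO dbo.ProcessedEvents (EventId, Payload)   -- hypothetical target table
        SELECT EventId, Payload
        FROM dbo.SourceEvents
        WHERE EventId > @From AND EventId <= @To;

        -- Move the watermark in the same transaction as the data, so a failure
        -- rolls both back and the next run redoes exactly the missing work.
        UPDATE dbo.JobWatermark
        SET LastEventId = @To,
            LastRunAt   = SYSDATETIME()
        WHERE JobName = N'MinutelyWorkload';

        COMMIT TRANSACTION;
    END;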

  • skeleton567 (12/23/2016)


    crmitchell (12/23/2016)


    skeleton567 (12/21/2016)


    skipping a minute won't hurt a thing, as long as you don't tell anybody.

    Until that one occasion where it does matter. As Murphy and Sod will cheerfully point out, that will be just after it fails, at which point, having hidden that it happened, you may suddenly find that it matters to a lot more people, not least yourself.

    Well, I'm assuming that the process is designed and implemented in such a manner that a failure will not lose any data and will resume appropriate delivery on succeeding executions, especially if the sequence of data presentation and processing is important. If this is not true, you truly need to get it fixed. Or, as I often did, fix it yourself.

    A failure will not lose any data.

    Igor Micev, My blog: www.igormicev.com

  • skeleton567 (12/21/2016)


    gfish@teamnorthwoods.com (12/21/2016)


    Let me suggest a much simpler way of avoiding the CPU spike caused by running multiple jobs at once. Simply combine them into one job, with the contents of all of the current jobs converted to steps in the single job. Spacing out the start times certainly helps with the spikes, but there is still a possibility of the jobs overlapping if one takes longer than anticipated. Individual job steps run sequentially, with no possibility of overlap.

    Now that is what I referred to as thinking outside the box. I think this is the best solution proposed so far in this discussion. What we used to call the KISS method: Keep It Simple, Stupid. I don't remember from my active days, but I don't think a job that is already running will be started again. And especially if this is that original task that runs every minute, skipping a minute won't hurt a thing, as long as you don't tell anybody.

    The jobs already have five steps inside, which already saves the start-up time of running each of those pieces as a separate job.

    The KISS method, as proposed, would actually make maintenance and management more complex. We are trying to keep to the middle ground between too few and too many steps per job.

    Igor Micev, My blog: www.igormicev.com
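
    For anyone weighing that layout up, here is a rough T-SQL sketch of one Agent job with sequential steps on an every-minute schedule. The job, database, and procedure names are placeholders rather than anything from the article; the msdb procedures are the standard SQL Server Agent ones.

    USE msdb;
    GO

    -- One Agent job replaces several separate every-minute jobs.
    EXEC dbo.sp_add_job
        @job_name = N'MinutelyWorkload',
        @enabled  = 1;

    -- Steps run strictly one after another, so they can never overlap.
    EXEC dbo.sp_add_jobstep
        @job_name          = N'MinutelyWorkload',
        @step_name         = N'Process queue A',
        @subsystem         = N'TSQL',
        @database_name     = N'MyDatabase',                -- placeholder database
        @command           = N'EXEC dbo.ProcessQueueA;',   -- placeholder procedure
        @on_success_action = 3,   -- go to the next step
        @on_fail_action    = 3;   -- carry on so one failure does not block the rest

    EXEC dbo.sp_add_jobstep
        @job_name          = N'MinutelyWorkload',
        @step_name         = N'Process queue B',
        @subsystem         = N'TSQL',
        @database_name     = N'MyDatabase',
        @command           = N'EXEC dbo.ProcessQueueB;',   -- placeholder procedure
        @on_success_action = 1,   -- quit reporting success
        @on_fail_action    = 2;   -- quit reporting failure

    -- A single every-minute schedule for the whole job.
    EXEC dbo.sp_add_jobschedule
        @job_name             = N'MinutelyWorkload',
        @name                 = N'Every minute',
        @freq_type            = 4,   -- daily
        @freq_interval        = 1,
        @freq_subday_type     = 4,   -- units of minutes
        @freq_subday_interval = 1;   -- every 1 minute

    EXEC dbo.sp_add_jobserver
        @job_name = N'MinutelyWorkload';

    Note that Agent will not start a second instance of a job that is still running, so if the combined job overruns its minute, the next tick is simply skipped rather than overlapping.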

  • Igor Micev (12/23/2016)


    skeleton567 (12/23/2016)


    crmitchell (12/23/2016)


    skeleton567 (12/21/2016)


    skipping a minute won't hurt a thing, as long as you don't tell anybody.

    Until that one occasion where it does matter. As Murphy and Sod will cheerfully point out, that will be just after it fails, at which point, having hidden that it happened, you may suddenly find that it matters to a lot more people, not least yourself.

    Well, I'm assuming that the process is designed and implemented in such a manner that a failure will not lose any data and will resume appropriate delivery on succeeding executions, especially if the sequence of data presentation and processing is important. If this is not true, you truly need to get it fixed. Or, as I often did, fix it yourself.

    A failure will not lose any data.

    You are also assuming that reporting the current state is of lower importance and that delaying it until the next run does not matter. If that is the case, fine, but for a time-critical application that may not be a valid assumption. Indeed, in such a situation, knowing that the data is out of date may be more important than avoiding data loss itself.
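
    A small sketch of that point, reusing the hypothetical dbo.JobWatermark table from the earlier sketch: rather than silently serving old data, surface its age so a time-critical consumer can see that it is stale. The five-minute tolerance here is purely an assumption for illustration.

    DECLARE @AgeMinutes int;

    SELECT @AgeMinutes = DATEDIFF(MINUTE, LastRunAt, SYSDATETIME())
    FROM dbo.JobWatermark
    WHERE JobName = N'MinutelyWorkload';

    -- Raise an error if the job has never completed or the data is older than the tolerance.
    IF @AgeMinutes IS NULL OR @AgeMinutes > 5
    BEGIN
        SET @AgeMinutes = ISNULL(@AgeMinutes, -1);
        RAISERROR (N'MinutelyWorkload data is %d minutes old.', 16, 1, @AgeMinutes);
    END;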
