June 19, 2012 at 9:41 pm
Comments posted to this topic are about the item Self Scheduling Jobs
June 20, 2012 at 1:04 am
Thanks for the tips. Nice to have a framework that supplements the standard jobs that ship with SQL.
Michael Gilchrist
Database Specialist
There are 10 types of people in the world: those who understand binary and those who don't. :-)
June 20, 2012 at 3:44 am
Good grief, that's a complicated way to do it. As you had to write .NET code anyway, why not do the whole thing in a single script task (or two, if you wanted to use a separate SSIS mail task for error messages rather than write code to send an email)? Simply run the SQL commands within the .NET code. If anyone doesn't know how to do that, Google for examples; use SqlCommand.ExecuteScalar to get the scheduling date and SqlCommand.ExecuteNonQuery to run sp_add_schedule.
Alternatively, you could probably do it all in one SQL procedure.
I did find the article useful though, as I didn't know about msdb.dbo.sp_add_schedule before.
June 20, 2012 at 7:24 am
The purpose of this example was a demonstration of individual steps for clarity.
Yes, it could very easily have been done in fewer steps.
June 20, 2012 at 7:36 am
If the job fails to schedule its next occurrence, then the job never runs again? So for quarterly jobs, you won't find out about the failure for at least 3 months, and if it languishes in a broken state for as much as 6 months you could face serious consequences from a skipped job. (Or adjust the timings for whatever interval; long or short, it's a similar problem.)
I would not want to be the person to inherit your role. On first inspection, the jobs are all set to run once. There also exist some proprietary control tables that don't seem to be used in any business division. Until I dig into the package to discover the self-scheduling feature, it isn't obvious there even IS a schedule.
I guess basically I'm curious why it's better to roll-your-own than to use the native scheduling feature that anyone educated in SQL Server would already understand.
June 20, 2012 at 8:10 am
Mike Dougherty-384281 (6/20/2012)
If the job fails to schedule its next occurrence, then the job never runs again? So for quarterly jobs, you won't find out about the failure for at least 3 months, and if it languishes in a broken state for as much as 6 months you could face serious consequences from a skipped job. (Or adjust the timings for whatever interval; long or short, it's a similar problem.)
Nope, the emails tell you of successes and failures. In no way does this relieve you of your monitoring responsibility. All failures would require corrective action.
I would not want to be the person to inherit your role. On first inspection, the jobs are all set to run once. There also exist some proprietary control tables that don't seem to be used in any business division. Until I dig into the package to discover the self-scheduling feature, it isn't obvious there even IS a schedule.
Really? You wouldn't want to open up a SharePoint WorkGroup site with it all documented (the who, what, where, when, etc.) and then be able to go straight to the SSIS package and job history for review? Maybe you have been the victim of poor documentation. If you inherited my role, this would NOT be the case... did I mention that it's all searchable too?
The proprietary control tables do exist in a lot of places, and educational institutions in particular have them and maintain them regularly.
I guess basically I'm curious why it's better to roll-your-own than to use the native scheduling feature that anyone educated in SQL Server would already understand.
This most definitely is NOT a replacement for scheduling but an 'addon' to handle cases where the native scheduling isn't as flexible as is necessary to the specific case at hand.
June 20, 2012 at 9:33 am
Nice work. I don't think I'd make it as complex, but I like it; I put it in the briefcase for future reference. Thanks!
June 20, 2012 at 12:02 pm
richard.noordam (6/20/2012)
Mike Dougherty-384281 (6/20/2012)
I would not want to be the person to inherit your role.
Really? You wouldn't want to open up a SharePoint WorkGroup site with it all documented (the who, what, where, when, etc.) and then be able to go straight to the SSIS package and job history for review? Maybe you have been the victim of poor documentation. If you inherited my role, this would NOT be the case... did I mention that it's all searchable too?
The proprietary control tables do exist in a lot of places, and educational institutions in particular have them and maintain them regularly.
Victim of poor documentation and sub-optimal operational practices. I forget that not every environment works this way.
I guess basically I'm curious why it's better to roll-your-own than to use the native scheduling feature that anyone educated in SQL Server would already understand.
This most definitely is NOT a replacement for scheduling but an 'addon' to handle cases where the native scheduling isn't as flexible as is necessary to the specific case at hand.
That's a good point. I didn't see that aspect of it from the original article. If/when there's a next article, you could give as much time to the WHY as to the HOW. (Backstory helps frame the context for clever solutions.)
Thanks
June 20, 2012 at 1:31 pm
Agreed with SSC-Enthusiastic. This solution is very dangerous to implement, especially in environments where multiple DBAs and database developers control jobs in a heavy processing environment. If for some reason the error email is missed (or action is not taken because users are not complaining), the price of missing the next run will be very heavy. E.g., just because I can use pliers to hit a nail does not mean they are the right tool for the job. So, I would like to know why this solution was implemented.
June 20, 2012 at 2:11 pm
Leo Peysakhovich (6/20/2012)
Agreed with SSC-Enthusiastic. This solution is very dangerous to implement, especially in environments where multiple DBAs and database developers control jobs in a heavy processing environment. If for some reason the error email is missed (or action is not taken because users are not complaining), the price of missing the next run will be very heavy. E.g., just because I can use pliers to hit a nail does not mean they are the right tool for the job. So, I would like to know why this solution was implemented.
Your example is most relevant; specifically, I don't need a crane to nail two 2x4's together either.
I'm pretty sure that the idea of 'universal solutions'/one-size-fits-all is a myth. While this may not be the pathway you want to use in large installations, most SQL instances are not large, and so it might have a place.
As to 'missing' emails, we have more than one recipient as a backup, and I did state that this does not remove your responsibility to pay attention not only to your emails but also to any job failures on your system (per the post above). One OR the other should prompt you to do something about the problem/failure. Also, failure notification would be immediate, giving you time to determine what caused the issue and resolve it long before the next run would be 'due'.
The only thing that changed with this implementation was that the jobs self-schedule instead of requiring some sort of manual kick-off.
And as to why, succinctly:
This is a scheduling 'addon' to handle cases where the native scheduling isn't as flexible as is necessary to the specific varying periodicity of this type of data load.
I seem to have rebutted his issues to his satisfaction, with just a minor request for further background information. What part did I miss with you?
June 20, 2012 at 3:56 pm
"This is a scheduling 'addon' to handle cases where the native scheduling isn't as flexible as is necessary to the specific varying periodicity of this type of data load"
The way the schedule flexibility can be achieved may be much simpler and less dangerous: use one simple job which runs at some interval (for example daily, hourly, every 5 minutes...) and, via a stored procedure, determines which job(s) should be called right now based on the control table you described. This way, with one simple job, you can control multiple ETL processes between multiple servers, keep tight control, and you don't have to schedule your main jobs at all. In case of failure of the main job you don't really miss the next run. The probability of the controlling job failing is much lower because it is a very simple job. And this gives you the ability to control any weird schedules from one centralized place instead of doing it for each job individually. You can even create a second level of defence where each job outputs into a historical table, and then, with a separate process, check whether each job's schedule and run events match and send an email when they don't.
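The controlling-job pattern described above can be sketched in a few lines: the one frequently-running job reads the control table and starts whatever is due. A minimal illustration, assuming a hypothetical control-table layout (job_name, enabled, next_run):

```python
from datetime import datetime

def jobs_due(control_rows, now=None):
    """Return the names of enabled jobs whose next_run time has arrived.
    control_rows mimics the control table; the real controlling job
    would read it with a SELECT instead."""
    now = now or datetime.now()
    return [row["job_name"]
            for row in control_rows
            if row["enabled"] and row["next_run"] <= now]
```

The real controlling job would then call msdb.dbo.sp_start_job for each returned name, keeping all the odd schedules in one centralized place.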
June 20, 2012 at 4:02 pm
Leo Peysakhovich (6/20/2012)
"This is a scheduling 'addon' to handle cases where the native scheduling isn't as flexible as is necessary to the specific varying periodicity of this type of data load." The way the schedule flexibility can be achieved may be much simpler and less dangerous: use one simple job which runs at some interval (for example daily, hourly, every 5 minutes...) and, via a stored procedure, determines which job(s) should be called right now based on the control table you described. This way, with one simple job, you can control multiple ETL processes between multiple servers, keep tight control, and you don't have to schedule your main jobs at all. In case of failure of the main job you don't really miss the next run. The probability of the controlling job failing is much lower because it is a very simple job. And this gives you the ability to control any weird schedules from one centralized place instead of doing it for each job individually. You can even create a second level of defence where each job outputs into a historical table, and then, with a separate process, check whether each job's schedule and run events match and send an email when they don't.
And I did implement it multiple times in multiple environments, simple and very complex, including the email part for when the job schedule and run events do not match or a job run is missing. It was part of the database framework: http://www.sqlservercentral.com/articles/framework/87549/
June 20, 2012 at 4:26 pm
I'm going to have to respectfully disagree with your indication as to what my solution should have been.
Yes, the solution you pointed to is very elegant, but it is also very much overkill for what was intended or needed here.
I have only a few loads that fall into this category; for most, the base scheduler works great.
I do monitor my jobs, and job history, and watch emails closely (got to love rules).
If I had a large number of 'variable' loads, then I would consider having a singular process to ride herd over them all, just like your example.
So how is it dangerous and likely to cause issues, and how would I benefit versus the setup time to put what you suggest in place for what is one job right now, and eventually only a handful of tasks?
June 24, 2012 at 6:37 pm
Leo Peysakhovich (6/20/2012)
"This is a scheduling 'addon' to handle cases where the native scheduling isn't as flexible as is necessary to the specific varying periodicity of this type of data load." The way the schedule flexibility can be achieved may be much simpler and less dangerous: use one simple job which runs at some interval (for example daily, hourly, every 5 minutes...) and, via a stored procedure, determines which job(s) should be called right now based on the control table you described. This way, with one simple job, you can control multiple ETL processes between multiple servers, keep tight control, and you don't have to schedule your main jobs at all. In case of failure of the main job you don't really miss the next run. The probability of the controlling job failing is much lower because it is a very simple job. And this gives you the ability to control any weird schedules from one centralized place instead of doing it for each job individually. You can even create a second level of defence where each job outputs into a historical table, and then, with a separate process, check whether each job's schedule and run events match and send an email when they don't.
I'm no big fan of SSIS myself but I have to ask... what's the difference between all of that and what's built in to SQL Agent, MSDB, the job scheduler, or the logs that are already built in? How would all of that make a quarterly job any more likely to be executed or any more likely to send an email if it failed?
--Jeff Moden
Change is inevitable... Change for the better is not.
June 24, 2012 at 6:49 pm
Mike Dougherty-384281 (6/20/2012)
If the job fails to schedule its next occurrence, then the job never runs again? So for quarterly jobs, you won't find out about the failure for at least 3 months, and if it languishes in a broken state for as much as 6 months you could face serious consequences from a skipped job. (Or adjust the timings for whatever interval; long or short, it's a similar problem.) I would not want to be the person to inherit your role. On first inspection, the jobs are all set to run once. There also exist some proprietary control tables that don't seem to be used in any business division. Until I dig into the package to discover the self-scheduling feature, it isn't obvious there even IS a schedule.
I guess basically I'm curious why it's better to roll-your-own than to use the native scheduling feature that anyone educated in SQL Server would already understand.
Ok... now you have my curiosity up. How would you schedule something to run quarterly and guarantee that it schedules its next occurrence? How would you make it obvious that such a self-scheduling feature exists without digging into whatever code you have? Me? I'd include such a hint in the name of the job, but I'm wondering what else you might come up with in the absence of such a name.
And, no... this isn't a challenge to what you said. Since you brought it up, I'd really like to know.
--Jeff Moden
Change is inevitable... Change for the better is not.