The case for SQL Server Agent 2.0

  • g.britton wrote:

    super easy if you put the steps in an SSIS package and run that from Agent

    Unless you're like me and have done your level best to convert all SSIS jobs so you don't even need to have an SSIS instance. 😀

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • The beauty of SQL Agent is its simplicity as a scheduler. We have workflow in SSIS, or better yet, just use SQL. I cringe when I see a SQL Agent job with 20 or 30 steps where someone is trying to implement workflow; it's just not the right tool. Instead, control your flow in a "controller" proc that executes all the other code. You get full control of flow, logging, error handling, and parallelism.
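    A minimal sketch of the "controller" proc pattern described above: the Agent job has a single step that calls one procedure, and all flow, logging, and error handling live in T-SQL. The proc and table names here (NightlyLoad_Controller, JobLog, the Load_/Merge_ steps) are hypothetical, not from the thread.

```sql
-- Hypothetical controller proc: the Agent job's only step is
--     EXEC dbo.NightlyLoad_Controller;
CREATE PROCEDURE dbo.NightlyLoad_Controller
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        INSERT dbo.JobLog (Step, LoggedAt) VALUES (N'Stage', SYSDATETIME());
        EXEC dbo.Load_StageTables;       -- "step 1"

        INSERT dbo.JobLog (Step, LoggedAt) VALUES (N'Merge', SYSDATETIME());
        EXEC dbo.Merge_IntoWarehouse;    -- "step 2": only runs if step 1 succeeded

        INSERT dbo.JobLog (Step, LoggedAt) VALUES (N'Done', SYSDATETIME());
    END TRY
    BEGIN CATCH
        INSERT dbo.JobLog (Step, LoggedAt, ErrorMessage)
        VALUES (N'Failed', SYSDATETIME(), ERROR_MESSAGE());
        THROW;   -- re-raise so the Agent job fails and its notification fires
    END CATCH
END;
```

    The point of the pattern is that branching, retries, and logging are all ordinary T-SQL, rather than 20 or 30 Agent steps.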

  • I think the notifications and operators need to be updated. For an operator's "Pager on duty schedule", you currently can't have a shift span from night into morning, because the "Workday begin" and "Workday end" times can only run from midnight to midnight. What about someone working the night shift? Also, being able to assign more than one operator to a notification would be helpful.

  • Also, I could swear that in some previous version there was an option to stop a job if it ran over a certain definable period of time. It would be cool if that was brought back.

  • Before they add new features, they need to fix some very basic things in the Agent (I don't know... maybe they only had an underpaid junior developer build it).

    • Why can a job only be owned by a user and not by a group/role? Unless you set all your jobs to be owned by (and executed as) sa (a big security hole), only the owner (or a sysadmin) may modify a job. That is ridiculous if you have a team of more than 2 or 3 developers.
    • Why can a job only write its output to a file if it is owned by sa?
    • When you direct the job output to a table and let it append (anything else makes little sense), why does it create only one row per job/step and append the output of every execution into a single log column, instead of creating a new row per execution? (I know, that is the literal meaning of "append", but it is so un-database-like.)
    • Why does it save its dates in varchar columns?
    • When it knows it needs proxy accounts for SSIS, PowerShell, etc. if the job is not owned by sa, why isn't it easier to set them up?

    God is real, unless declared integer.

  • John_Hill wrote:

    Also, I could swear that in some previous version there was an option to stop a job if it ran over a certain definable period of time. It would be cool if that was brought back.

    I actually ended up writing my own jobs to do this, as we had situations where resource-heavy overnight jobs were overrunning and causing performance issues in the morning. It would be far easier if it were built in, though.
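    A "watchdog" job like the one described above can be sketched with the msdb job tables and sp_stop_job. This is a hedged sketch, not the poster's actual code; the job name and threshold are hypothetical, and a production version would also filter sysjobactivity down to the most recent Agent session.

```sql
-- Hypothetical watchdog step: stop a named job if it has run too long.
DECLARE @job_name sysname = N'Nightly ETL',   -- hypothetical job name
        @max_minutes int  = 120;              -- hypothetical cutoff

IF EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobactivity AS ja
    JOIN msdb.dbo.sysjobs AS j
      ON j.job_id = ja.job_id
    WHERE j.name = @job_name
      AND ja.start_execution_date IS NOT NULL
      AND ja.stop_execution_date IS NULL        -- still running
      AND DATEDIFF(MINUTE, ja.start_execution_date, GETDATE()) > @max_minutes
)
    EXEC msdb.dbo.sp_stop_job @job_name = @job_name;
```

    Scheduling this check every few minutes approximates the "stop after a definable period" option the posters are asking for.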

     

    My #1 improved feature for SQL Server Agent would be more granular security. I'd love the ability to give users permissions to run/modify a single job or a group of jobs. At the moment it's a little all or nothing! At the very least the ability to set the job owner as a group rather than an individual account would help to reduce this requirement.

     

    Also the ability to actually use the job categories in some meaningful way would be nice too!

  • @jeff how do you do parallelism in SQL driven by an Agent job?   Note that I don't mean parallelism in execution plans; I mean actual parallel operations as in SSIS control flow tasks without precedence constraints or SSIS data flows using the multicast operator etc.

    Gerald Britton, Pluralsight courses

  • @g.britton In a SQL Agent job, you can start another job asynchronously, so one way to do it is to have 2 jobs.

    Job 1, step 1 goes to step 2 on completion, and step 1 starts job 2 at its step 1.

    This way, job 1 step 2 runs at the same time as job 2 step 1.

    If I am not mistaken (I have not tested this), you could do it as 1 job with 3 steps: step 1 starts the same job at step 3 and goes to step 2 on completion, and when step 2 completes it exits. That way it is all in 1 job, and steps 2 and 3 run at the same time. I'm not certain that would work, as the job would already be running and you normally can't start the same job twice, but if you had multiple jobs, you could have a "controller" job that starts all of the other jobs at the same time or sequentially.

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there is negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
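    The asynchronous start described above relies on the fact that sp_start_job returns as soon as the target job is started, rather than waiting for it to finish. A minimal sketch (the job names are hypothetical):

```sql
-- Inside Job 1, step 1: kick off Job 2, then fall through to step 2.
EXEC msdb.dbo.sp_start_job @job_name = N'Job 2';   -- returns immediately

-- Anything this job does from here on overlaps with Job 2, step 1.
PRINT N'Job 1 continuing while Job 2 runs concurrently';
```

    Because sp_start_job doesn't wait, there is no built-in "join" point; if the caller needs to know when Job 2 finished, it has to poll msdb or have Job 2 record its own completion.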

  • If you put this in a file called test.bat:

    notepad

    then run "test" from the command line, and then run "start test". There's a difference: "start" forks a new process instead of waiting for it to finish.
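    The same synchronous-versus-forked distinction exists in a POSIX shell, where "&" plays the role of "start" (this analogy is my addition, not from the post):

```shell
#!/bin/sh
# Running a command directly blocks, like running test.bat;
# appending "&" forks it and returns control at once, like "start test".
t0=$(date +%s)
sleep 2 &            # forked: control returns immediately
child=$!
t1=$(date +%s)       # reached right away; the child is still sleeping
wait "$child"        # now block until the forked child exits
t2=$(date +%s)
echo "returned after $((t1 - t0))s; child finished after $((t2 - t0))s"
```

    The first delta is ~0 seconds (the fork returned immediately) while the second is ~2 seconds (the wait blocked until the child exited).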

  • One technique I use to execute SQL in parallel is to dynamically create a SQL Agent job from within a stored proc. Create however many steps you need, though in most cases you only need one. Make the last step delete the job, then kick the job off; control returns immediately to your proc. This works great if you want to truncate/reload a couple of large tables at the same time. Just be careful not to overwhelm your server: you have to be your own governor.
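    A hedged sketch of that fire-and-forget pattern, using sp_add_job's built-in @delete_level option in place of a final "delete myself" step (all names are hypothetical, and the ETL database and Reload_BigTable proc are assumptions for illustration):

```sql
-- Create a throwaway one-step job, let Agent delete it on completion,
-- then start it; control returns to the caller immediately.
DECLARE @job sysname = N'ParallelLoad_' + CONVERT(nvarchar(36), NEWID());

EXEC msdb.dbo.sp_add_job
     @job_name     = @job,
     @delete_level = 3;                 -- 3 = delete the job when it completes

EXEC msdb.dbo.sp_add_jobstep
     @job_name      = @job,
     @step_name     = N'reload',
     @subsystem     = N'TSQL',
     @database_name = N'ETL',                       -- hypothetical database
     @command       = N'EXEC dbo.Reload_BigTable;'; -- hypothetical proc

EXEC msdb.dbo.sp_add_jobserver @job_name = @job;   -- target the local server

EXEC msdb.dbo.sp_start_job @job_name = @job;       -- returns immediately
```

    Running this block once per table gives you N reloads in parallel; the NEWID() suffix keeps the throwaway job names unique.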

  • g.britton wrote:

    @Jeff how do you do parallelism in SQL driven by an Agent job?   Note that I don't mean parallelism in execution plans; I mean actual parallel operations as in SSIS control flow tasks without precedence constraints or SSIS data flows using the multicast operator etc.

    The @jeff reference you included points to the wrong "Jeff".  Since I'm the only "Jeff" (so far) on this thread, I'll assume that question was directed at me.

    The other folks have alluded to how I do it. I don't have SSIS loaded as an application on most of my servers, but that doesn't prevent building a "Maintenance Plan" that calls stored procedures in parallel and then calling that from either a permanent or an on-the-fly job, as the others have stated.

    An example of this is a "balanced" stats update routine I made, where work runs both in parallel paths and serially within each path. The two paths cover different numbers of databases, chosen from durations I'd previously measured, so that both paths complete at roughly the same time.

    Except for things like that, I've found that needing to "go parallel" with executions frequently means that whoever wrote the procs being executed failed to make them adequately performant to begin with. With that in mind, I'll help folks with their performance rather than "caving in" and setting up parallel runs.

    I'll also state that I'd rather set up asynchronous runs between two or more nearly identical jobs that check on conditions of data (or whatever) to see if something needs to be done. For example, "file sniffers" that check whether anything needs doing based on the existence of files in a staging directory.

    An example of the latter is a "STEPS" (Standardized Text Extraction and Parsing Subsystem) system I built to import "spotlight" files from Double-Click.net. The company I built it for was receiving hundreds of such files a day and simply couldn't handle the load, because their existing system took a whopping 45 minutes just to get ONE file "ready" for import (it didn't actually do the import... the files all had a varying number of columns and had to be pre-normalized before they could be imported... it was horribly inefficient). The first thing I did was trash their ineffective "Tower of Babel" code and rewrite it all using only T-SQL. The end result was that I was doing the import, validation, normalization, and "upsert" to the main table for 8 disparate files all in less than 2 minutes (and that was on SQL Server 2000, on old spinning-rust, 32-bit hardware). Then I wrote a "Conductor" system (think conductor as in orchestra... nothing sophisticated) that would keep track of the files as they came in, plus 4 nearly identical jobs that would each pick the next unprocessed file to import.

    Thanks to the rewrite, it didn't take long to realize that just one job running in a loop was all that was necessary, even with the number of files received in an 8-hour period (sometimes more than a thousand, which took only a bit more than 4 hours to process in total).

    --Jeff Moden



Viewing 11 posts - 16 through 26 (of 26 total)
