
A Failed Jobs Monitoring System
Posted Monday, February 4, 2008 9:56 PM


SSChasing Mays


Group: General Forum Members
Last Login: Tuesday, July 15, 2014 4:28 PM
Points: 614, Visits: 441
Comments posted to this topic are about the item A Failed Jobs Monitoring System


Post #451441
Posted Tuesday, February 5, 2008 7:27 AM


Hall of Fame


Group: General Forum Members
Last Login: Friday, August 29, 2014 6:11 AM
Points: 3,469, Visits: 1,489
I too use an underscore when naming a job. I started this practice after upgrading my workstation to SQL Server 2005. SQL Management Studio sorts jobs differently than Enterprise Manager.

I got tired of scrolling to the bottom of the list to see my jobs, so I started adding an underscore to make life easier.
Post #451627
Posted Tuesday, February 5, 2008 7:32 AM
Mr or Mrs. 500


Group: General Forum Members
Last Login: Friday, August 22, 2014 1:49 AM
Points: 563, Visits: 1,006
We employ a very similar method of managing our SQL Servers. We have about 30 or so SQL Servers, and most of those are replicated through multiple development, test and hotfix (virtual) environments.

We have a large number of SQL servers to maintain and products like SQL Stripes become very expensive.

We're only just finishing off the setup and having a few issues with distributed transactions, but it's looking good so far.

Graham
Post #451630
Posted Tuesday, February 5, 2008 8:27 AM
Forum Newbie


Group: General Forum Members
Last Login: Tuesday, August 26, 2014 2:16 PM
Points: 3, Visits: 254
If you're already setting up a separate server, why not use MOM? You then get MOM's other benefits, like broader SQL Server monitoring and monitoring of the servers themselves, beyond just failed jobs. We monitor databases for close to 200 SQL Server applications and MOM works great. MOM automatically installs its client on a server when it detects a new SQL Server installation and starts monitoring. It works well for both SQL 2000 and 2005.


Post #451667
Posted Tuesday, February 5, 2008 8:32 AM


SSCommitted


Group: General Forum Members
Last Login: Monday, August 25, 2014 11:20 AM
Points: 1,570, Visits: 673
I came up with a similar solution, except I have a proc that creates the linked server under a single name (LS_Q) before executing the queries. This makes the subsequent coding easier (no dynamic SQL to deal with) and makes the processing easier to expand: so far I have extended my system to include SQL file statistics (size/used) for all MDF, NDF and LDF files, some basic I/O stats, failed jobs, current backups, etc.
The code could have been better written; it's not easy to read or understand (SQL Refactor is good for this if you are an untidy coder, like most of us).
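
For illustration, a minimal sketch of that single-name approach might look something like this (the proc name, provider and security settings are assumptions, not the poster's actual code):

-- Rebuild a linked server under one fixed name (LS_Q) and then query it,
-- so the downstream monitoring queries never need dynamic SQL.
CREATE PROCEDURE dbo.usp_PointLSQ
    @TargetServer SYSNAME
AS
BEGIN
    SET NOCOUNT ON;

    -- Drop the previous definition if it exists
    IF EXISTS (SELECT 1 FROM sys.servers WHERE name = N'LS_Q')
        EXEC sp_dropserver @server = N'LS_Q', @droplogins = 'droplogins';

    -- Recreate LS_Q pointing at the server we want to query next
    EXEC sp_addlinkedserver
         @server     = N'LS_Q',
         @srvproduct = N'',
         @provider   = N'SQLNCLI',
         @datasrc    = @TargetServer;

    -- Use the caller's own credentials on the remote side
    EXEC sp_addlinkedsrvlogin
         @rmtsrvname = N'LS_Q',
         @useself    = 'TRUE';
END;
GO

-- Usage: repoint LS_Q, then run the same static query for every server
EXEC dbo.usp_PointLSQ @TargetServer = N'PRODSQL01';
SELECT name FROM LS_Q.msdb.dbo.sysjobs;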



Post #451671
Posted Tuesday, February 5, 2008 8:40 AM
Forum Newbie


Group: General Forum Members
Last Login: Monday, November 16, 2009 8:47 AM
Points: 1, Visits: 9
I am not sure if this method has already been shared, because I have not figured out how to go to the beginning of the thread (new user)!

Anyway, I am not a true DBA, but I am a manager of system operations. I developed a proactive method for alerting me when a job fails by simply creating triggers on the sysjobhistory table and then emailing me the log output that was generated by the job (provided I enabled the job to produce a log).

It saved me time and money compared with messing around with third-party products or figuring out how to get other components of SQL Server 2005 to work the way I wanted. Obviously, Database Mail (sp_send_dbmail) needs to be enabled on each server that you plan to deploy the solution to (or, if you use some kind of linked server setup, just enable it on one server). I advise that anyone considering this solution check their company's security practices when deciding for or against enabling mail on the database server.
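
As a rough illustration of the trigger idea, a minimal sketch might look like the following (the Database Mail profile and recipient are placeholders; the poster mails the job's log file, whereas this sketch just includes the history message):

-- Fires whenever SQL Agent writes a history row; mails an alert for failures.
-- Assumes Database Mail is configured with a profile named 'DBA_Mail' (placeholder).
USE msdb;
GO
CREATE TRIGGER dbo.trg_JobFailureAlert
ON dbo.sysjobhistory
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @subject NVARCHAR(255), @body NVARCHAR(MAX);

    -- run_status = 0 means the step (or the job outcome row, step_id = 0) failed.
    -- Agent writes history rows one at a time, so a single-row read is enough here.
    SELECT @subject = 'Job failed: ' + j.name + ' on ' + @@SERVERNAME,
           @body    = 'Step ' + CAST(i.step_id AS VARCHAR(10)) + ' (' + i.step_name
                    + ') failed.' + CHAR(13) + CHAR(10) + ISNULL(i.message, N'')
    FROM inserted i
    JOIN dbo.sysjobs j ON j.job_id = i.job_id
    WHERE i.run_status = 0;

    IF @subject IS NOT NULL
        EXEC dbo.sp_send_dbmail
             @profile_name = 'DBA_Mail',          -- placeholder profile
             @recipients   = 'dba@example.com',   -- placeholder recipient
             @subject      = @subject,
             @body         = @body;
END;
GO
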
Post #451678
Posted Tuesday, February 5, 2008 12:02 PM
Grasshopper


Group: General Forum Members
Last Login: Wednesday, April 16, 2014 12:11 PM
Points: 21, Visits: 102
I've been using a job failure email notification method for some years now on SQL 2000 with the following differences:
1) I don't use a monitor server to check the other servers, but rather just include a final email failure step on each job, executing this step only if a previous step fails (a rough sketch follows below). That way I avoid the linked server problems mentioned in the article and keep it simple. (The one disadvantage is that if the server goes down, no email gets sent, whereas a central monitoring server can catch this. But our operations unit knows within seconds if a server goes down, so that's not a problem for me.)
2) On my SQL 2000 servers, I don't use SQL Mail because a) it requires a MAPI client and b) it's flaky (it often started failing after months of no problems). I replaced it with an SMTP solution developed by Gert Drapers (formerly a software architect at Microsoft) called xp_smtp_sendmail. I've used it for years and it is very robust; it has never failed once.
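
For anyone wanting to try that pattern, a rough sketch could look like this (the job name, step numbers, addresses and SMTP host are all placeholders, and xp_smtp_sendmail must already be installed in master):

-- 1) Add a final "email on failure" step to an existing job. It quits
--    reporting failure, so the job still shows as failed after the alert is sent.
EXEC msdb.dbo.sp_add_jobstep
     @job_name          = N'Nightly Load',               -- placeholder job
     @step_name         = N'Email failure alert',
     @subsystem         = N'TSQL',
     @database_name     = N'master',
     @command           = N'EXEC master.dbo.xp_smtp_sendmail
                                @FROM    = ''sqlagent@example.com'',
                                @TO      = ''dba@example.com'',
                                @subject = ''SQL Agent job failed: Nightly Load'',
                                @message = ''A step in Nightly Load failed. Check the job history.'',
                                @server  = ''smtp.example.com'';',
     @on_success_action = 2,   -- quit reporting failure
     @on_fail_action    = 2;

-- 2) Point each real step at the alert step on failure; the last real step
--    quits with success so the alert step is skipped on a clean run.
EXEC msdb.dbo.sp_update_jobstep
     @job_name        = N'Nightly Load',
     @step_id         = 1,
     @on_fail_action  = 4,     -- go to step...
     @on_fail_step_id = 3;     -- ...the alert step added above (placeholder id)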



Post #451814
Posted Tuesday, February 5, 2008 12:36 PM


Ten Centuries


Group: General Forum Members
Last Login: Friday, July 11, 2014 10:59 AM
Points: 1,020, Visits: 442
First of all: Nice job, TJay.

I too built a similar system using linked servers to monitor failed jobs, backup history and out-of-date backup tolerance, databases::applications::servers, space issues at the file level, etc. What I've moved on to in SQL 2005 is a similar system, with the same Reporting Services reports (and practically the same schema) as I was using before, but built on SSIS. You can populate a table with your server/instance names, and as long as you keep that up to date as you bring new instances online, you're good to go. It was based in part on an article on SSIS in SQL Server Magazine in May or June of 2007. It runs seamlessly against my 80+ instances and 800+ databases with much less overhead than my old linked server solution. I've only had it fail once, and that was due to an instance being down for maintenance; with a little more work on the customized logging, that would have been apparent, however.
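
As a rough illustration of the driver-table idea (the table and column names here are assumptions, not Tim's actual schema):

-- The SSIS package reads this list in a Foreach Loop and connects to each
-- instance in turn; keep it current as instances come and go.
CREATE TABLE dbo.MonitoredInstance
(
    InstanceID  INT IDENTITY(1,1) PRIMARY KEY,
    ServerName  SYSNAME     NOT NULL,   -- e.g. N'PRODSQL01\SALES'
    Environment VARCHAR(20) NOT NULL,   -- DEV / TEST / PROD
    IsActive    BIT         NOT NULL DEFAULT (1)
);

INSERT INTO dbo.MonitoredInstance (ServerName, Environment)
VALUES (N'PRODSQL01', 'PROD');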


- Tim Ford, SQL Server MVP
http://www.sqlcruise.com
http://www.thesqlagentman.com
http://www.linkedin.com/in/timothyford
Post #451843
Posted Tuesday, February 5, 2008 1:18 PM
Old Hand


Group: General Forum Members
Last Login: Wednesday, August 27, 2014 1:14 PM
Points: 312, Visits: 1,106
Good article. I too have created a job failure management system that is easier to use, as it uses linked servers along with reports created in SSRS. It's funny how no vendor has created an application for DBAs to manage multiple servers easily and effectively. My company has over 100 instances on 80 servers that I manage... alone.

Just my 2 cents worth,

Rudy



Post #451863
Posted Tuesday, February 5, 2008 1:34 PM
Valued Member


Group: General Forum Members
Last Login: Today @ 3:04 AM
Points: 57, Visits: 581
Personally, this seems like a lot of heavy lifting and manual labor, and the effort involved seems to outweigh the costs of commercial tools already available. For example, SQL Sentry's Event Manager is a very affordable option, and it does everything your solution does and more... including elaborate event notifications not only for failed jobs but also job runtime thresholds, event chaining and a very nifty Outlook-style calendar view that gives you a graphical view of your job schedules across many instances.

I'm usually a total advocate for reinventing the wheel if you are going to add something very powerful that isn't already available in packaged solutions (or if you really want to learn the API, catalog views, DMVs, Agent subsystem, etc.). In this case, I suggest your readers take trial versions of the available packages for a spin before going too far down the "roll-your-own" road. At the very least you will see what you might want (and might not want) when you build your own solution; more often than not, though, you will realize how complex it can be to go that route, and that you will actually save money by spending money on a ready-made tool.

This is a classic argument that I have been having for ages. Back when ASP was a popular web language, I used to argue until I was blue in the face with people who wanted to write their own mail or upload component to save the $99 or $150 that the premier such component cost, available to buy on the spot. For most of us, if it takes more than an hour to build such a component, we're already behind. Then there is testing, debugging, performance testing, etc., all of which you get for free when you have a reputable vendor behind the product.
Post #451869