Mini-Me

  • Comments posted to this topic are about the item Mini-Me

  • Yes-ish.

    Whenever I put something in to handle an event or act on a subset of data, I write something to let me see the opposite too - all the data that wasn't affected. That way, I get a quick sanity check that the process isn't too narrow as well as not too wide.
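    For what it's worth, that "opposite" check usually boils down to running the process's predicate and its negation side by side and eyeballing the two counts. A minimal sketch - the table name and predicate here are hypothetical:

        -- What the process handles
        SELECT COUNT(*) AS AffectedRows
        FROM dbo.Orders
        WHERE Status = 'Pending';

        -- Everything it deliberately leaves alone
        SELECT COUNT(*) AS UnaffectedRows
        FROM dbo.Orders
        WHERE Status <> 'Pending' OR Status IS NULL;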

    For more important processes, I'll also run those scripts manually a few times in the days/weeks following go-live, once again as a sanity check. I don't generally schedule the scripts to email me, though I do have a few key jobs that will naturally send out warning emails on failure - but I suspect most of us do that.

    Semper in excretia, suus solum profundum variat

  • Yes.

    I write unit tests and I test my own code, etc.

    Then at my workplace we also have testers who sit and test everything we do, so that no weird edge case slips through and causes an error.

    I am not entirely sure what kind of checks you are referring to, but I have worked in some organisations where none of the checks rely on automatic monitoring, and I have worked in some where everything relies on it. The latter are usually the worst to work for, because they refuse to believe that there is ANYTHING wrong unless you can show them an alert in their monitoring tool.

    Throw away your pocket calculators; visit www.calcResult.com
  • We run audits daily on the "business integrity" of some parts of the data and send the misaligned or broken-relationship results to the Data Entry personnel. This is strictly a preventive maintenance procedure and helps to find problems before they are sent through the system.

  • Yes

    At a previous job I had numerous jobs running against an ERP system database to ensure the integrity of the transactions processed. For example, compare the general ledger account balances for inventory to the extended cost of the actual inventory held in the system.
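    That kind of balancing check might look something like the following - the table and column names here are hypothetical, and the account number is just for illustration:

        -- Compare the GL inventory account balance to the extended cost
        -- of on-hand inventory; only report accounts that don't balance.
        SELECT gl.AccountNo,
               gl.Balance                                 AS GLBalance,
               SUM(i.QtyOnHand * i.UnitCost)              AS ExtendedCost,
               gl.Balance - SUM(i.QtyOnHand * i.UnitCost) AS Variance
        FROM dbo.GLBalance gl
        JOIN dbo.Inventory i
            ON i.GLAccountNo = gl.AccountNo
        WHERE gl.AccountNo = '12000'   -- hypothetical inventory asset account
        GROUP BY gl.AccountNo, gl.Balance
        HAVING gl.Balance <> SUM(i.QtyOnHand * i.UnitCost);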

    Typically I have jobs for:

    - daily balancing reports

    - system interfaces

    - internal system integrity

    - error conditions within the data

    I find this especially useful for data errors that appear sporadically. It is much easier to track down the cause of this type of error when you get timely notification the moment it occurs.

  • millsrustyk (11/21/2008)


    We run audits daily on the "business integrity" of some parts of the data and send the misaligned or broken-relationship results to the Data Entry personnel. This is strictly a preventive maintenance procedure and helps to find problems before they are sent through the system.

    Me too! I've been at my current position for 9 months. One major system is an Access application whose database was ported to SQL Server 2000 about 4 years ago. The application, still within Access, is a total nightmare (currently being redesigned/rewritten in Java, but it won't be live for 6 months). Since I've been here I have, with direction from the user community, developed probably 20 or so queries that run each morning under SQL Server Agent to trap invalid/incomplete data. This data is ultimately retrieved and presented via the web to our clients, and we want to be sure our data is as accurate as possible.

    We have probably been in violation of our SLAs for quite some time - IMO - but are slowly getting back within our obligations. Very simply, it's a case where the business outgrew the current system's capabilities, and feeble past attempts at a fix have not been 100% reliable (which has led to "out with the old, in with the new (ME!)"). The best we can do until the rewrite is complete and live is to cleanse our data and present the best end results within our current constraints. It may not be the absolute correct approach but, short-term, it's the best we can do with what we've got. My only saving grace is that management understands and can deal with it until June, 2009.
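    Most of those morning queries are variations on one theme: find rows that violate a business rule and hand the list to whoever cleans the data. Something along these lines, with hypothetical table and column names:

        -- Trap client rows with missing or obviously invalid required data.
        SELECT c.ClientID, c.ClientName
        FROM dbo.Clients c
        WHERE c.Email IS NULL
           OR c.Email NOT LIKE '%_@_%._%'   -- crude format check
           OR NOT EXISTS (SELECT 1
                          FROM dbo.Addresses a
                          WHERE a.ClientID = c.ClientID);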

    -- You can't be late until you show up.

  • Hmmm, I think I like the term Business Integrity Audits! I finally have a good description of the reports that I go through each morning to confirm that things are working the way that I expect them to.

    Scott

  • I can't imagine NOT having SQL Server Agent jobs that are checking things. I've used jobs, and even some RS reports with subscriptions, to monitor data integrity. I had to do it to protect myself at times. In one instance the same data had to be entered into 3 systems (don't start - I wanted to automate it; the users didn't), and if you missed one there were calls made in the middle of the night. One user claimed that someone was intentionally deleting data they had entered into one of the systems. Well, I then wrote an audit system that emailed their entire department whenever data was entered, changed, or deleted on that system, and emailed weekly whenever there was inconsistent data between systems. Imagine my surprise when this person no longer had any problems! :P
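    An audit system like that can be as simple as a DML trigger plus a notification. A rough sketch of the idea - table, column, and mail profile names are all hypothetical, and on a busy table you'd queue the notification rather than mail straight from the trigger:

        CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Record who changed what, and how
            INSERT INTO dbo.OrdersAudit (OrderID, AuditAction, ChangedBy, ChangedAt)
            SELECT COALESCE(i.OrderID, d.OrderID),
                   CASE WHEN i.OrderID IS NULL THEN 'DELETE'
                        WHEN d.OrderID IS NULL THEN 'INSERT'
                        ELSE 'UPDATE' END,
                   SUSER_SNAME(),
                   GETDATE()
            FROM inserted i
            FULL OUTER JOIN deleted d ON d.OrderID = i.OrderID;

            -- Notify the department (SQL 2005+ Database Mail)
            EXEC msdb.dbo.sp_send_dbmail
                @profile_name = 'DBA Mail',              -- hypothetical profile
                @recipients   = 'department@example.com',
                @subject      = 'Orders data changed',
                @body         = 'See dbo.OrdersAudit for details.';
        END;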

  • Jack -

    I started to write out the reasons why I had designed all of these reports, then just erased them because I would have had to add the "don't start" disclaimer too!

    Scott

  • 99% of the scripts we use are in-house or customized from sites like SQLServerCentral.com.

    That being said, there are a couple of third-party tools that are worth their weight in gold, such as ones that compare databases on a QA/development server with their eventual incarnations in production. I'd always be hesitant, though, about any script involving heavily used remote logins.

    On a related note, there's a cool feature of SQL Server 2008 Management Studio I read about at http://www.sqlskills.com/BLOGS/KIMBERLY/post/SQL-Server-2008-Central-Management-Servers-have-you-seen-these.aspx: you can right-click on a group folder - say, one where you have a set of QA servers bundled up - and query that entire group. Right-click the folder, select New Query, and a query window opens; any query run in that window executes against every server in the group. This type of integration could give DBAs the power to create even more powerful customized scripts.
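    For example, a single query like this, run once against the group, gives you backup status across every server at once (SSMS adds a server name column to multi-server results automatically):

        -- Last full backup per user database on each server in the group
        SELECT d.name AS DatabaseName,
               MAX(b.backup_finish_date) AS LastFullBackup
        FROM sys.databases d
        LEFT JOIN msdb.dbo.backupset b
            ON b.database_name = d.name
           AND b.type = 'D'             -- full backups only
        WHERE d.database_id > 4         -- skip the system databases
        GROUP BY d.name
        ORDER BY LastFullBackup;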

    Gaby
    "In theory, theory and practice are the same. In practice, they are not." - Albert Einstein

  • The queries help out a lot. It's nice to know that a certain situation needs attention before it becomes a big mess. For instance, there are times when a service gets hung; an email alert goes out so the service can be restarted. These types of queries are invaluable when operating in a high-volume environment.

  • How much "checking" I build into my custom systems depends on the time and money available. I haven't taken the SQL Agent route; typically I have a Windows service that periodically processes various multi-step business transactions. As part of the cycle of jobs-to-do, I usually place my "checks" code at the end of each processing cycle. My service always uses stored procedures, and I usually build a few that will cull my data looking for issues that need to be brought to someone's attention. When something is found, I send off an email to the appropriate personnel (I usually maintain in the database a list of emails tied to types of activities).

    Transaction Status

    I should also note that I create transaction tables that store the various transactions within my system. When designing these tables I identify the main steps needed to complete the transaction and create a status date/time column for each step as part of the same transaction row definition. These columns default to NULL. I also create a Status Notes column that can contain textual status info; it defaults to "Waiting to be processed". When the process completes a step, it runs a sproc that applies the current date/time to the appropriate status date column and updates the Status Notes column with the results of that step. If a process encounters an error, the error handling writes the error info to the Status Notes.
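    In table form, the pattern looks roughly like this - the transaction and step names are hypothetical:

        -- One status date/time column per main processing step
        CREATE TABLE dbo.OrderTransaction (
            TransactionID  int IDENTITY(1,1) PRIMARY KEY,
            ReceivedAt     datetime NULL,
            ValidatedAt    datetime NULL,
            PostedAt       datetime NULL,
            StatusNotes    varchar(500) NOT NULL
                CONSTRAINT DF_OrderTransaction_Notes
                DEFAULT ('Waiting to be processed')
        );
        GO

        -- Each step calls a sproc like this to stamp its column
        CREATE PROCEDURE dbo.MarkValidated
            @TransactionID int,
            @Notes         varchar(500)
        AS
            UPDATE dbo.OrderTransaction
            SET ValidatedAt = GETDATE(),
                StatusNotes = @Notes
            WHERE TransactionID = @TransactionID;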

    Notifications based upon type of activity

    Sometimes the issue is support-related and I'll send it to system support folks. Typically these are business transactions that could not complete and need some human eyes to help it along. Having the Status Notes column greatly assists these folks when they analyze the transaction tables.

    Sometimes the issue is business-related, such as manufacturing jobs running late, in which case I notify the appropriate business unit manager.

    Use of SQL Agent

    Many of my clients do not allow their SQL Servers to send email as part of their internal security policies. Some even go so far as to disable SQL Agent. I have to create systems within the confines given to me, and if it's a 4' x 6' jail cell, so be it.

  • We review our SQL Servers every morning looking for "Corrupted" next to the database name. We also have alerts set up to e-mail us. We were having some problems with our storage system and found out that a database can become corrupted with no alert being triggered, especially if there's been no activity against it. So we added scheduled jobs to run CheckDB against every database just before we come in to work. In my opinion this is a must, especially for databases that aren't necessarily accessed every minute or every hour.
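    If it helps anyone, the scheduled check can be as simple as a cursor over the online databases (2005-era syntax sketched below):

        -- Run DBCC CHECKDB against every online database; any corruption
        -- raises errors, failing the job and firing its notification.
        DECLARE @db  sysname,
                @sql nvarchar(400);

        DECLARE dbs CURSOR FOR
            SELECT name
            FROM sys.databases
            WHERE state_desc = 'ONLINE';

        OPEN dbs;
        FETCH NEXT FROM dbs INTO @db;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = N'DBCC CHECKDB (' + QUOTENAME(@db)
                     + N') WITH NO_INFOMSGS';   -- suppress the chatter
            EXEC (@sql);
            FETCH NEXT FROM dbs INTO @db;
        END

        CLOSE dbs;
        DEALLOCATE dbs;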

  • Great topic. It's something I give a lot of thought to, and I'm always looking for new ideas. In many ways, I think my responsibilities consist in large part of the following:

    1. Prevent bad stuff from happening (Protect)

    2. If it happens, become aware of it quickly (Detect)

    3. Resolve the issues accurately and efficiently (React)

    My company has a pretty diverse environment (SQL versions between 6.5 and 2005, multiple domains, both physical and virtual servers). That, in addition to having a very limited budget for DBA tools, has forced me to use a variety of methods. Three main ones come to mind, in decreasing order of automation:

    1. SQL Agent jobs that email me about really bad conditions (e.g., SQL errors with a high severity, a transaction log about to run out of space) in real time. I have a job that runs early in the morning to do nothing but send me an email, to verify that both SQL Agent and our email infrastructure are functioning properly (a minimal sketch of that heartbeat follows this list).

    2. Custom SQL scripts using Red Gate's SQL Multi-Script tool to catch conditions that, while important, can usually wait a few hours (e.g., disk space getting low, missing backups, SQL error log entries that the Agent jobs don't report on). I run these scripts a few times throughout the day. Doesn't take too much time...

    3. Server-side traces to catch things like unauthorized use of the sa account and poor-performing queries.
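    The heartbeat mentioned in item 1 is about as simple as a job step gets - the mail profile and address here are hypothetical:

        -- If this mail never arrives, either SQL Agent or the mail
        -- infrastructure is down - which is exactly the point.
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'DBA Mail',         -- hypothetical profile
            @recipients   = 'dba@example.com',
            @subject      = 'Heartbeat: Agent and Database Mail are alive',
            @body         = 'Sent by the early-morning heartbeat job.';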

    It would be great to be notified about everything automatically when it happens. That said, I still think there are many instances where at least somewhat-manual checks are warranted. The manual checks can act as an audit of the automated checks.

    Brian Kukowski
