What should be the daily routine of an ideal DBA

  • What should be the daily routine of an ideal DBA, apart from all these:

    Watching SQL Server logs.

    Watching Event Viewer.

    Monitoring counter logs.

    Viewing the .trc file/trace report.

    Backups.

  • Make a large cup of non-decaf coffee.

    I would put checking backups first, closely followed by checking disk space and current server activity.

    I have my SQL Servers email me with the results of the nightly maintenance plans and various counters.

    Within Outlook I have various folders and rules that redirect the various server messages so I can see my SQL Server stuff vs the spam.

  • I guess this all depends on the actual role. I know that a lot of DBAs do a lot aside from just administration.

    Things I check on a daily basis...

    * Backups

    * Scheduled Jobs

    * Disk usage

    * Event Viewer

    * SQL Error Logs

    * SQL Agent Logs

    The company I work for has no internal application systems, so there is no need for development as such. As I am still new to the company, I spend the rest of my time trawling through the vendor database looking for (and finding) potential bottlenecks and making recommendations for change.

    That's just a start. I'm sure you'll get a lot more good advice in this thread.
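    The "SQL Error Logs" item above can be partly automated. A minimal sketch, assuming the error log has been saved out as a plain-text file; the sample log lines and the keyword list are illustrative, not from this thread:

```python
# Scan a saved SQL Server error log (plain text) for lines that look like
# errors. The keywords below are illustrative; tune them to your own logs.

def find_error_lines(log_text, keywords=("Error:", "Severity:", "failed")):
    """Return the lines of log_text that contain any of the given keywords."""
    hits = []
    for line in log_text.splitlines():
        if any(kw in line for kw in keywords):
            hits.append(line.strip())
    return hits

if __name__ == "__main__":
    # Illustrative sample of what an error log extract might look like.
    sample = (
        "2005-06-01 02:00:01 Backup BACKUP DATABASE successfully processed\n"
        "2005-06-01 03:15:42 spid51 Error: 9002, Severity: 17, State: 2\n"
        "2005-06-01 03:15:42 spid51 The log for database 'Sales' is full.\n"
    )
    for hit in find_error_lines(sample):
        print(hit)
```

    Run it from a scheduled job and mail yourself the output only when the list is non-empty, and the daily eyeball of the log becomes an exception report instead.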

  • Check Security bulletin boards.

    Test new patches (if any) in test environment.

    Planning for when things go wrong (as they will).

    Writing scripts to make your job easier.

    Testing/developing optimisations for frequently run queries.

    Maintaining written server logs (to make sure that anyone can flick through to see what issues have come about and how they were fixed, or if they're liable to arise again).

    Then there's the talking amiably with database users as and when possible. Having a friendly working relationship with them helps when things fall apart. People that like you are far more likely to be patient with you than ones you've leaned on hard.
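    The "writing scripts to make your job easier" point above is worth illustrating. A minimal sketch of a free-disk-space alert, assuming a modern scripting runtime is available on the box; the threshold and drive list are invented for illustration, and a real version would send mail rather than print:

```python
# Alert when free disk space drops below a threshold -- the kind of small
# script the post above suggests writing. The 10% threshold and the path
# list are illustrative; wire the alert up to email in a real deployment.
import shutil

def low_space_report(paths, min_free_fraction=0.10):
    """Return (path, free_fraction) for each path below the threshold."""
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)
        free_fraction = usage.free / usage.total
        if free_fraction < min_free_fraction:
            alerts.append((path, free_fraction))
    return alerts

if __name__ == "__main__":
    for path, frac in low_space_report(["/"]):
        print(f"LOW SPACE: {path} has only {frac:.1%} free")
```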

  • Believe it or not this happened to me the other day and this looks as good a place as any to post:

    I went to a DBA at the place where I work to validate my memory. He said, "Let's consult the trash can"... literally pulled a magazine out of it, promptly opened it to the proper page, and said, "Here, you can keep this."

    Now how many of us would like to have that as part of our job????



    Good Hunting!

    AJ Ahrens


    webmaster@kritter.net

  • What counter logs do you regularly keep track of? And what are you looking for in the trc file/report? I'm pretty new to performance tracking (as opposed to optimization) and would like more info on the day-to-day monitoring of performance. Does running a performance/counter log make a big impact on the performance of your servers?

    Question for everyone as a whole: I'm a new DBA who's taken over for a previous one. He had Profiler running on the two web production boxes to watch for security issues. Do you suggest having Profiler running on a machine, or running a trace through T-SQL and monitoring a file/table? I would think the Profiler option would be problematic because of the performance impact, and it would have to be started and left running (obviously). I suppose the T-SQL method could be started automatically when the database starts.

    Thanks!

  • Running a profiler trace has a significant performance impact on the server so I would not recommend using it as an auditing solution. As for daily tasks, I recommend running through entire DR scenarios with your latest backups at least once a month. The only way you can ensure disaster recovery is to test a real world scenario. This should include not only restoring your user and application databases but also the system databases. In a disaster scenario you don't want to waste time looking through online message boards for help in how to restore the master db ;}

  • Running the GUI Profiler will absolutely cane your server. The command-line/server-side trace has a much lower overhead.

  • Daily Checklist

    1. Check the connectivity of each server over the network. You could do this by pinging the SQL Servers twice a day, or by clicking the server's name in Enterprise Manager and seeing whether it is able to connect.
    2. Check whether the services are running. For each server, go to its SQL Service Manager and check whether the SQL Server Agent and MSSQLServer services are running (showing a green light). If not, start those services. (You could also check these from the Control Panel or Enterprise Manager.)
    3. Check whether the scheduled tasks on the production servers are running normally. You could check this from each server's Enterprise Manager or your email (if you have set up SQL Mail to notify you).
    4. Check the hard disk space available on the SQL Servers. If a system drive runs out of space, the server can crash.
    5. Check the database and transaction log space on each server. If the database or transaction log runs out of space, transactions will fail.
    6. Check the NT Event Logs for any error messages. SQL Server writes to the NT application log on application or SQL errors, and also warns you before a problem becomes critical.
    7. Check the SQL error logs for any errors occurring within SQL Server itself. SQL Server warns you through these logs before a problem becomes critical.

    Khalid
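    Step 1 of the checklist above can be sketched as a small script. Attempting a TCP connection to the SQL Server port (1433 by default) tells you more than an ICMP ping, since a successful connect confirms the listener itself is up. The server names below are invented for illustration:

```python
# Check whether each SQL Server is reachable on its TCP port (1433 by
# default). A successful connect means the network path *and* the listener
# are up, which is stronger than a plain ping. Addresses are illustrative.
import socket

def can_connect(host, port=1433, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 192.0.2.1 is a reserved documentation address, so it should fail.
    for server in ("127.0.0.1", "192.0.2.1"):
        status = "OK" if can_connect(server, timeout=1.0) else "UNREACHABLE"
        print(f"{server}: {status}")
```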

  • I think that what a DBA should be watching is what the developers are up to, in between watching the Red Sox games.

    All other time should be dedicated to writing scripts, as Rich James says (I totally agree!), to make your life easier. Use WMI scripting and VBScript to check on everything, including drive space and backups, and have it send you an email if something goes wrong. Do not forget to run an independent script from another computer that confirms your original script is running fine.

    Regards,
    Yelena Varsha
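    The "independent script that confirms your original script is running" idea above is commonly done with a heartbeat file: the monitoring script touches a file on each run, and a second script (ideally on another machine) checks the file's age. A sketch, with the file name and staleness threshold invented for illustration:

```python
# Heartbeat watchdog: the main monitoring script records a heartbeat on
# every run; this independent checker alerts if the heartbeat goes stale.
# The file name and 15-minute threshold are illustrative.
import os
import time

HEARTBEAT = "monitor.heartbeat"
MAX_AGE_SECONDS = 15 * 60  # alert if no heartbeat for 15 minutes

def beat(path=HEARTBEAT):
    """Called by the monitoring script: record that it is still running."""
    with open(path, "w") as f:
        f.write(str(time.time()))

def is_stale(path=HEARTBEAT, max_age=MAX_AGE_SECONDS, now=None):
    """Called by the watchdog: True if the heartbeat is missing or too old."""
    if not os.path.exists(path):
        return True
    age = (now or time.time()) - os.path.getmtime(path)
    return age > max_age

if __name__ == "__main__":
    beat()
    print("stale" if is_stale() else "monitor is alive")
```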

  • One thing that I've found handy as part of the checking routine: I run SQL Server locally on my workstation, and all the other SQL Servers forward their events to it. It makes checking the event logs so much easier.

     

    --------------------
    Colt 45 - the original point and click interface

  • For many of those things mentioned here I use Embarcadero Performance Center.

    I check my jobs on a website. The sp_makewebtask functionality in SQL 2000 makes life very easy for me. I created a job/stored procedure that writes all the information I want about my server to an HTML file. I check those pages several times a day; they are refreshed every hour.

    What do I see in the HTML file?

    • Server information (like version and machine name)
    • Which databases are running, and when they last had a database dump and a transaction log dump
    • Errors from the last three days in the error log
    • Jobs that did not run correctly in the last three days
    • Number of packet errors
    • Number of errors on this machine
    • Number of reads
    • Number of writes
    • Free disk space for each disk on the machine over the last 5 days
    • SQL Server configuration values (for easy checking)
    • All jobs that have run (with and without errors)
    • All database file information, including when the database files have grown, and by how much

    I am now building a page that will display information for all servers in one place, showing only the errors. At the moment I have only 6 SQL instances in production to manage, but later this year that will probably be many more, and checking every page individually would then take a lot of time.
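    The all-servers-on-one-page idea can be sketched in a few lines: collect each server's check results, then render only the servers with problems. The server names and error strings below are invented for illustration; the poster's real version is generated with sp_makewebtask rather than a script:

```python
# Build a single HTML summary page that shows only the servers with errors,
# along the lines the post describes. The check results are illustrative;
# in practice they would come from each server's collected status.

def status_page(results):
    """results: dict of server name -> list of error strings (empty = OK)."""
    rows = []
    for server, errors in sorted(results.items()):
        if errors:  # only servers with problems appear on the page
            items = "".join(f"<li>{e}</li>" for e in errors)
            rows.append(f"<h2>{server}</h2><ul>{items}</ul>")
    body = "".join(rows) if rows else "<p>All servers OK</p>"
    return f"<html><body><h1>SQL Server status</h1>{body}</body></html>"

if __name__ == "__main__":
    checks = {
        "sqlprod01": [],
        "sqlprod02": ["Job 'Nightly dump' failed", "Drive D: 4% free"],
    }
    print(status_page(checks))
```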

    Also I just started checking MOM 2005 yesterday, which will make life much easier probably.

    So, these two things make my life easy for the day to day checking.

    The Performance Center tool in particular is very good: you can set your own thresholds.

  • How about using the SQL Server Health and History Tool (SQLH2) to track server and database info?

    http://www.microsoft.com/downloads/details.aspx?FamilyID=eedd10d6-75f7-4763-86de-d2347b8b5f89&displaylang=en

     

    --------------------
    Colt 45 - the original point and click interface

  • All of this is good, but many of these suggestions would struggle with, say, 60 servers to manage.

    In general, the DBA has to define what is right for their environment. Defining the processes that manage and alert has to come first; then it's a case of managing the information you gather from them.

    Typically I would know if backups had failed, disks were short of space, jobs had failed, or servers were not running, probably before I even got to work! I'd say that if you have to manually check items such as backups, then your processes are wrong.

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • For a large number of servers you really need proactive monitoring of your hardware, disk space, and SQL Server jobs.

    Once everything is in check and running fine, get a coffee and check your tee time.

