Question about backup/monitoring strategy

  • I'm a DB dev turned DBA/dev now. I'm the first DBA at the company, and short of a few random backups we have... well, that's it. A few random backups.

    I'm looking to get things structured and wondering if there are any great strategies or tools that will help with this.

    Here's how I see it: keep track of every DB and which settings varied from the defaults at install and afterward. Basically: DBName, ServerName, LogicalName, FileName, InitialSize, Autogrowth, FilePath, and a Description of what the DB is for.
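
    For the documentation piece, most of those columns can be pulled straight from the catalog views. A sketch of what I mean (the catalogs only show current size, not initial size, and the Description column would have to live in your own documentation table):

        SELECT
            d.name               AS DBName,
            @@SERVERNAME         AS ServerName,
            mf.name              AS LogicalName,
            mf.physical_name     AS FilePath,
            mf.size * 8 / 1024   AS CurrentSizeMB,   -- size is stored in 8 KB pages
            CASE mf.is_percent_growth
                 WHEN 1 THEN CAST(mf.growth AS varchar(10)) + ' %'
                 ELSE CAST(mf.growth * 8 / 1024 AS varchar(10)) + ' MB'
            END                  AS Autogrowth
        FROM sys.master_files AS mf
        JOIN sys.databases    AS d ON d.database_id = mf.database_id
        ORDER BY d.name, mf.file_id;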

    I also want to monitor these installations. Ideally I could run a single query that would look at 10 servers and ask them all where the files are today, when the last backup was taken, how much free space is left, etc.
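
    For the "when was the last backup" part, each instance already records that in msdb, so something like this per server gets most of the way there (a sketch, full backups only):

        SELECT
            d.name                    AS DBName,
            MAX(b.backup_finish_date) AS LastFullBackup
        FROM sys.databases AS d
        LEFT JOIN msdb.dbo.backupset AS b
               ON b.database_name = d.name
              AND b.type = 'D'        -- 'D' = full database backup
        GROUP BY d.name
        ORDER BY LastFullBackup;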

    I can write these queries against one machine just fine, but is there a way to set up a query that runs against multiple machines and gets all of this information in one data pull? The best I can think of offhand is using linked servers and UNIONing a bunch of queries, or creating a subreport in RS that gets all of this information from one server and applying that subreport once per server on a report, which of course needs a data source per server. I'm not sure whether you can switch out data sources based on a parameter value; if not, you would have to have a subreport per server.
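
    To show what I mean by the linked server idea, it would look roughly like this (the linked server names are placeholders), with one branch per server to maintain, which is exactly what worries me:

        -- Placeholder linked server names; each branch runs the same check remotely
        SELECT 'ServerA' AS ServerName, *
        FROM OPENQUERY([LinkedServerA],
             'SELECT name, state_desc FROM sys.databases')
        UNION ALL
        SELECT 'ServerB', *
        FROM OPENQUERY([LinkedServerB],
             'SELECT name, state_desc FROM sys.databases');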

    One last bit here... I know I want to back up ReportServer for RS, but what about the system databases? msdb? master? Should these be backed up as well?

    I really want to get every database onto the same general backup plan, backing up to a central NAS drive in the data center.

    Ideas are much appreciated 🙂

  • That's a lot of questions at once.

    My first suggestion: look at setting up a Central Management Server (CMS). That will allow you to take control of the other servers from a central point and set up backup and restore, monitoring, policy-based management, etc., so that you can get your systems in hand.
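
    Once your servers are registered under a CMS group, any query you run while connected to the group goes out to every member, and SSMS stitches the results together with a server name column. For example, a quick check of recovery models across the whole estate is just an ordinary query run against the group rather than a single instance:

        -- Run while connected to the CMS server group in SSMS;
        -- results come back from every registered server at once.
        SELECT name, recovery_model_desc, state_desc
        FROM sys.databases
        ORDER BY name;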

    The very next thing I would do is make sure you have full backups of all the databases. Then work with your business to understand what their acceptable level of data loss is. I've got an article coming out shortly on Simple-Talk on just this sort of thing. Keep an eye out for it.
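
    As a rough sketch of "full backups on all the databases", something like this per instance will do as a stopgap until proper maintenance jobs are in place (the NAS path is a placeholder; tempdb is skipped because it can't be backed up):

        DECLARE @db  sysname,
                @sql nvarchar(max);

        DECLARE db_cursor CURSOR FOR
            SELECT name
            FROM sys.databases
            WHERE name <> 'tempdb'
              AND state_desc = 'ONLINE';

        OPEN db_cursor;
        FETCH NEXT FROM db_cursor INTO @db;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Placeholder UNC path to the central NAS share
            SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@db)
                     + N' TO DISK = N''\\NAS\SQLBackups\' + @db + N'_'
                     + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak'' WITH CHECKSUM;';
            EXEC (@sql);
            FETCH NEXT FROM db_cursor INTO @db;
        END

        CLOSE db_cursor;
        DEALLOCATE db_cursor;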

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Now that you've taken on the role of DBA:

    The first thing I would suggest is to document all the servers/databases in your environment. It's a little lengthy and boring, but it will serve as a reference for whether your server settings and database-level settings are the same across all servers/databases, so you don't need to chase the same issue on each server repeatedly.
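
    For the server-level half of that documentation, a plain dump of sys.configurations per instance gives you something to diff against a known baseline (the catalog doesn't record what the defaults were, so you compare against a fresh install or your own reference list):

        SELECT name, value_in_use, description
        FROM sys.configurations
        ORDER BY name;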

    The second thing, backup/restore, really depends on your environment: how much data loss you can bear and the size of each individual database. Based on that you can keep full, differential, and transaction log backups, and that will help you decide which backup/restore strategy you need to follow.
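
    As a concrete illustration, the three backup types map to three commands (the database name and paths below are placeholders); how often you run each one is driven by how much data loss is acceptable:

        -- Weekly full backup, for example
        BACKUP DATABASE SalesDB TO DISK = N'\\NAS\SQLBackups\SalesDB_full.bak';

        -- Nightly differential: only the changes since the last full
        BACKUP DATABASE SalesDB TO DISK = N'\\NAS\SQLBackups\SalesDB_diff.bak' WITH DIFFERENTIAL;

        -- Frequent log backups (requires the FULL recovery model)
        BACKUP LOG SalesDB TO DISK = N'\\NAS\SQLBackups\SalesDB_log.trn';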

    The third thing you should consider is how to maintain your servers/databases: corruption checking and removing fragmentation (rebuild/reorganize). Either do it manually or, better, have it automated.
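
    In T-SQL terms that boils down to a handful of commands, which most people wrap in scheduled jobs (the database, table, and index names below are placeholders):

        -- Corruption checking
        DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS;

        -- Light fragmentation: reorganize
        ALTER INDEX ALL ON dbo.Orders REORGANIZE;

        -- Heavy fragmentation: rebuild
        ALTER INDEX ALL ON dbo.Orders REBUILD;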

    The fourth, though not the last... always visit SQLServerCentral, SQLSkills, and other sites to read articles/blogs by great DBAs, which might help you manage things better in your scenario.

    One of my questions, about running the same query against multiple servers, wasn't referring to live queries but to automated ones.

    For example:

    I run a query to see whether a data or log file is running low on its allotted space. I'd set up an app/job to run this against every server and set off an alarm to alert me when space is low.
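
    The check itself, run in the context of each database, can be as simple as comparing used pages to file size (a sketch; the 20 percent threshold is arbitrary):

        SELECT
            DB_NAME()                                            AS DBName,
            name                                                 AS LogicalName,
            type_desc                                            AS FileType,
            size * 8 / 1024                                      AS FileSizeMB,
            FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024           AS UsedMB,
            (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024  AS FreeMB
        FROM sys.database_files
        WHERE size > 0
          AND (size - FILEPROPERTY(name, 'SpaceUsed')) * 1.0 / size < 0.20;  -- under 20% free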

    I have Quest Spotlight, which handles this on our production server, but not on the other 15 servers.

    What would an experienced DBA do?

    When I wrote the question I was debating creating linked servers from a master DBA server, from which I could query each SQL Server for info. I may not have years of experience at this, but the notion set off alarms in my head; it doesn't seem very scalable or flexible.

    I use the Central Management Server for running the same query against the whole group, but I don't see how to use this shared connection from a job.

    Today I started digging into the Data Collector in SQL Server 2008.

    It seems that I can use CMS to send out a query to all servers to collect data on a schedule, then all the data moves on a schedule to my central server.

    I can query my central server whenever I like with a single connection, set up a job to analyze the database for anything I want to be alerted to, run reports, and track info over time. I can hold settings information, custom queries, and performance info (from DMVs, like query fingerprints)... I could even track schema changes by querying the schema for details.
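
    The alerting side could then just be a query (or a job step) against the central server. A hypothetical example, assuming a warehouse database called MDW and a collected snapshot table of file free space (every name here is made up for illustration):

        USE MDW;  -- hypothetical central warehouse database

        -- Hypothetical table populated by a scheduled collection
        SELECT ServerName, DBName, LogicalName, FreeMB, CollectionTime
        FROM dbo.FileSpaceSnapshots
        WHERE FreeMB < 512                                   -- arbitrary threshold
          AND CollectionTime > DATEADD(HOUR, -24, GETDATE())
        ORDER BY ServerName, DBName;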

    All of these queries run locally on each server, and then the data is moved to a central location on a schedule.

    If all of these Data Collector settings are managed from a single set of source scripts that runs the same against any SQL instance I have, with a drop/recreate, it seems extremely flexible and scalable.

    So, that said, why shouldn't I do this? Almost every server is 2008 R2; this may not work as easily with the two 2000 servers.

    Anything else I'm missing?
