Implementing a backup system using PS (a newbie question)

  • We have a backup system based on TSQL commands:

    1. A job on the central SQL Server initiates the process once a day.

    2. The backup details (which database, in which server, to which directory) are saved in a management table in the SQL Server.

    3. The TSQL Backup commands are executed in remote servers through Linked Servers.

    4. Another process deletes files older than 2 weeks from all the directories which appear in the management table (using xp_cmdshell and Del commands).

    How would you implement such a system using PS?

    I'm not just talking about replacing the TSQL commands with PS commands, but also about the general approach: Where will the management data be stored? How will the execution be initiated?
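    For the retention step in point 4, a PS equivalent of the xp_cmdshell & Del approach could be a small function like this (a sketch only; the function name, parameters, and example path are placeholders, not part of the existing system):

    ```powershell
    function Remove-OldBackups {
        param(
            [string]$BackupDir,
            [int]$RetentionDays = 14
        )
        $cutOff = (Get-Date).AddDays(-$RetentionDays)

        # Delete every .bak file whose last-write time is older than the cut-off.
        Get-ChildItem -Path $BackupDir -Filter *.bak -File |
            Where-Object { $_.LastWriteTime -lt $cutOff } |
            Remove-Item
    }

    # Example: purge one directory taken from the management table (placeholder path)
    # Remove-OldBackups -BackupDir 'D:\Backups\Server1'
    ```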

  • I think that without some form of requirements, no matter how brief, it is such an open-ended question that many would be forced to say "it depends".

    Some might even say that starting with a premise of "use PowerShell" is forcing you down a particular path which may be the right way or may be totally the wrong way to go about it.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Thanks for your answer!

    I will try to explain what I'm trying to find out.

    A year ago I was asked to develop a central system for backups. Until then, a separate job had been created for every backup. With ~50 servers, that was a very bad practice.

    My manager asked me if I intended to do it using PS, maybe because it is considered a good tool for administrative tasks. I'm the TSQL guy in the company; I told him that I didn't know PS, that it would have taken time to start studying the subject, and that I would have preferred to do it using TSQL.

    A famous proverb says that to a hammer, every problem looks like a nail: if I were the C# guy, I would have developed a system with nice forms which accessed each server using a connection string; and if I were the systems guy, I would have used the tool they use to back up disks, sub-directories, and files.

    As I mentioned in my previous message, I created a table with all the details, linked servers to all the servers, and a job which, using a cursor, backs up each database to a separate sub-directory. Each time it runs, the job also deletes old files using xp_cmdshell & del (command line).

    Now I have some spare time, and I want to find out how I was expected to do it using PS, and, besides, to get into PS.

    The man who told me to develop this system just wanted a reliable central system and let me do it my way. He was not an expert in either TSQL or PS.

    Now, if you were asked to do it using PS, what would you have said:

    1. Come on, that's not a task for PS: it is a very bad idea.. let's do it with TSQL!

    2. {here I expect to hear a suggestion how to do it using PS}

    I hope I made myself clear (my English is not the best).

  • First off Geri, your English is good. At least good enough for me. (I am not a professor in English so who knows :-P)

    I think that I would have used Windows Scheduler to kick off a PowerShell script that executes the backups through SQL Server PowerShell cmdlets (I cannot remember exactly how this is done) and does the file and directory manipulation itself.
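    The cmdlet in question is most likely Backup-SqlDatabase from the SqlServer (or older SQLPS) module. A minimal sketch of the approach, assuming that module is available; the instance name, database name, and paths are placeholders:

    ```powershell
    # Sketch only: assumes the SqlServer (or older SQLPS) module is installed.
    # Instance, database, and file names below are placeholders.
    Import-Module SqlServer

    Backup-SqlDatabase -ServerInstance 'MYSERVER\MYINSTANCE' `
                       -Database 'MyDatabase' `
                       -BackupFile 'D:\Backups\MyDatabase.bak'
    ```

    To have Windows Scheduler start the saved script nightly, register it with something like `schtasks /Create /TN SqlBackup /TR "powershell.exe -File C:\Scripts\Backup.ps1" /SC DAILY /ST 02:00`.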

    Build the script up a piece at a time and the process becomes a lot simpler. See www.MicrosoftVirtualAcademy.com for some great tutorials.

    EDIT: Spelling mistake!!! Whose English is poor? Clearly mine!!!

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Happy to help with any specific issues.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Thanks!

    🙂

  • GeriReshef (5/26/2015)



    First, let me ask you what happens when your "centralized backup system" is down. Think about it.

    Second, you stated the following...

    A job in the central SQL Server initiates the process once a day.

    Any data worth having is worth protecting, and backing up just once per day is a very thin veil of protection. Are you sure the company would still be happy if the system lost up to 24 hours of data? I strongly recommend setting it up so that point-in-time log backups are taken at least once every hour. More often if you can.
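    One way to act on this advice in PS, assuming the SqlServer module is installed; the instance, database, and path are placeholders, and the database must not be in SIMPLE recovery:

    ```powershell
    # Hourly transaction-log backup sketch; schedule it alongside the nightly full backup.
    # Instance, database, and path are placeholders.
    Import-Module SqlServer

    Backup-SqlDatabase -ServerInstance 'MYSERVER' `
                       -Database 'MyDatabase' `
                       -BackupAction Log `
                       -BackupFile 'D:\Backups\MyDatabase_log.trn'
    ```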

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • I have some code that may be useful. To get your instance list, db names, etc., you can do something like this. I have cleaned up the code a little, and you will need to modify it to retrieve your data. I have also attached some code that will do a backup. With that code, and the code listed below, you should be able to do what you want. You should then be able to set it up as a SQL Agent job.

    Also, I moved the instance list to an array. You don't necessarily have to do that; you can just loop through the data rows.

    $InstanceList = @()

    # Retrieve the instance list from the admin database.
    $ServerList = New-Object ('Microsoft.SqlServer.Management.Smo.Server') "SQL Server, where admin database is located"
    $db = $ServerList.Databases.Item("Name of Admin database")
    $ds = $db.ExecuteWithResults("SELECT [InstanceName] FROM [Name of Admin database].[dbo].[SQLServerInfo] WHERE AllowConnection = 1 ORDER BY InstanceName")

    Foreach ($InstanceNameRow in (($ds.Tables[0]).Rows))
    {
        $InstanceList += $InstanceNameRow.Item(0)
    }

    # Read through the list of SQL Server instances; Try/Catch so one
    # unreachable instance does not stop the loop.
    ForEach ($Instance in $InstanceList)
    {
        Try
        {
            $srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $Instance
        }
        Catch {}
    }

  • As has been stated before, you should probably set up a backup job on each server using regular T-SQL. There is no need to do the backups from a single location, but if that is what you want, then what I posted above should work. If you do the backup from each server using SQL Agent, you can set up a notification on failure. As a further sanity check, we run a job that checks whether any backup failed to complete in the last 24 hours. We also have another report that checks whether any SQL job failed in the last 24 hours.

    See attached.

    Leonard

  • Thank you very much for your contribution: I will try to follow the script and to learn from it how to use PS in such cases.

    1. I conclude from your example that the details of the databases should be stored in a table.

    2. What is the best way to initiate it automatically: by a job? As a Windows scheduled task? Something else?

    Thanks again

    Geri Reshef

  • Hi,

    As for where the details should be stored, that is up to you. I typically run my reports from a SQL Server instance, so I use a table that holds my list of servers. You could hard-code the list or use a text file. I use the table because it also contains information about the servers, like edition, version, etc. I have a PS SQL job that keeps that information up to date.

    $InstanceList="server1","server2"

    Is an example of hard coding the list.
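    The text-file alternative could be sketched like this (the helper name and path are placeholders, not from an existing system):

    ```powershell
    function Get-InstanceList {
        param([string]$Path)
        # One instance name per line; skip blank lines.
        Get-Content -Path $Path | Where-Object { $_.Trim() -ne '' }
    }

    # Usage (path is a placeholder):
    # $InstanceList = Get-InstanceList -Path 'C:\Scripts\instances.txt'
    ```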

    Most of my reports are executed from our DBA monitor SQL instance.

    Leonard

  • Here's the basic script I use to back up all DBs in each SQL Server instance on a machine, except tempdb.

    This is saved as a Powershell.ps1 script, and run via an SQL Agent job every night, on each machine.

    You can spruce it up with some error checking, validation, notifications, etc...

    You can create another Powershell script which can copy this .ps1 to each target server and create the SQL Agent job and schedule that starts it. This makes it a simple matter to update the script as needed, and to add and schedule it on new servers.
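    A sketch of that deployment step, assuming admin-share access; the helper name, server names, and paths are placeholders. Creating the SQL Agent job on each target could then be done with SMO or msdb's sp_add_job, which this sketch does not cover:

    ```powershell
    function Publish-BackupScript {
        param(
            [string]$ScriptPath,
            [string[]]$Destinations   # e.g. UNC paths like \\Server1\c$\Scripts\Backup.ps1
        )
        foreach ($dest in $Destinations) {
            # Overwrite any older copy already on the target.
            Copy-Item -Path $ScriptPath -Destination $dest -Force
        }
    }

    # Usage (server names and paths are placeholders):
    # Publish-BackupScript -ScriptPath 'C:\Scripts\Backup.ps1' `
    #     -Destinations '\\Server1\c$\Scripts\Backup.ps1','\\Server2\c$\Scripts\Backup.ps1'
    ```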

    Jeff makes 2 good points:

    - centralizing the backup job on a single server = potential single point of failure.

    - in many cases, 24 hours is a long time to go between backups. As always, "It depends".

    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | out-null

    #clear
    $MachineName = $Env:COMPUTERNAME
    $Servers = get-childitem SQLSERVER:\SQL\$MachineName | select name

    foreach ($Server in $Servers)
    {
        $Srv = New-Object Microsoft.SqlServer.Management.Smo.Server $Server.Name
        $DBs = $Srv.Databases | where {$_.Name -ne "tempdb"}
        $BackupFolder = $Srv.BackupDirectory

        foreach ($DB in $DBs)
        {
            $RecoveryModel = [string]$DB.RecoveryModel
            $DBName = $DB.Name
            $FileNameData = $BackupFolder + "\" + $DBName + "_Dump.bak"
            $FileNameLog = $BackupFolder + "\" + $DBName + "_Log_Dump.bak"

            # Full backup of the database
            $dbBackup = new-object ("Microsoft.SqlServer.Management.Smo.Backup")
            $dbBackup.Action = "Database"
            $dbBackup.Database = $DBName
            $dbBackup.Initialize = $true
            $dbBackup.Devices.AddDevice($FileNameData, "File")
            $dbBackup.SqlBackup($Srv)

            # Log backup for databases not in SIMPLE recovery
            if ($RecoveryModel -ne "Simple")
            {
                $dbBackup = new-object ("Microsoft.SqlServer.Management.Smo.Backup")
                $dbBackup.Action = "Log"
                $dbBackup.Database = $DBName
                $dbBackup.Initialize = $true
                $dbBackup.Devices.AddDevice($FileNameLog, "File")
                $dbBackup.SqlBackup($Srv)
            }
        }
    }
