Has anyone used Data Domain storage for SQL 2008 backups?

  • Hi Old Hand,

    Can you please share the PowerShell backup and restore scripts? I'm struggling to back up to Data Domain.

    Thanks

  • We're using Ola Hallengren's scripts to back up to the Data Domain. We are backing up to a file share on the DD and not using Networker. His scripts can be found here: https://ola.hallengren.com

  • Hi Kevin,

    EMC Data Domain has ddbmsqlsv.exe. How are you incorporating this exe into Ola's backup script? Can you please advise?

    ddbmsqlsv.exe -c STAGEVS.abc.com -A STAGEVS.abc.com -S 2 -l full -N "DDBackup_DB1" -y "01/09/2015 10:37:34" -a "NSR_DFA_SI=TRUE" -a "NSR_DFA_SI_USE_DD=TRUE" -a "NSR_DFA_SI_DD_HOST=mrydd.abc.com" -a "NSR_DFA_SI_DD_USER=ddboost" -a "NSR_DFA_SI_DEVICE_PATH=/sql_backup" "MSSQL:DB1"

  • We aren't. We're using SQL Agent to run Ola's jobs to back up to a file share. That file share is connected to the Data Domain.

  • So, do we need to give read/write permissions to the SQL service account on the Data Domain file share? Can you please advise?

  • As far as I know, that's correct. Our DD admin set up a file share on the DD, gave our SQL Agent user access to the file share, and we were able to back up to it. From what little information they've given me, it isn't as good as using Networker to back up to the DD, but it still uses data de-duplication on the DD side.

  • I am not sure of the cost, but a lot of the time Data Domain is bundled with Avamar, which is EMC's backup utility/GUI, and it now integrates with Data Domain. With Avamar/Data Domain there is an Avamar agent that gets installed on the SQL Server (they have a cluster-intelligent version as well). After the install you use Avamar to schedule the backups, or I believe they have recently released an API that can be used with SQL Agent. I had great success once I got past Avamar's archaic GUI. Unfortunately, I had to implement TDE across the board in the last 12 months; encrypted backups get no de-duplication benefit, so I got nothing out of Data Domain and have moved off of it.

    http://www.emc.com/collateral/software/white-papers/h8883-avamar-dd-integration-wp.pdf

  • We use our own custom PowerShell script (scheduled via Task Scheduler, not SQL Server Agent) to back up our databases because we have 40+ instances spread across several clusters, and it is easier for us to manage all of our jobs at this site using Task Scheduler. However, Ola Hallengren's code (https://ola.hallengren.com) may be a better choice than a home-grown custom script if you are just getting started with Data Domain. Ola Hallengren's code can be called from PowerShell, SQL Server Management Studio, and SQL Server Agent.
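
    If it helps, here is a minimal sketch of calling Ola Hallengren's DatabaseBackup procedure from plain PowerShell (powershell.exe) via sqlcmd.exe. The server, instance, and share names are placeholders, the parameter values shown are just illustrative choices, and it assumes the procedures were installed in master (Ola's default):

    # Back up all user databases to a Data Domain file share using
    # Ola Hallengren's dbo.DatabaseBackup stored procedure.
    # Server, instance, and share names below are placeholders.
    $Query = "EXECUTE master.dbo.DatabaseBackup " +
             "@Databases = 'USER_DATABASES', " +
             "@Directory = N'\\data_domain_server_name\share_name', " +
             "@BackupType = 'FULL', " +
             "@CheckSum = 'Y', " +
             "@CleanupTime = 48;"
    # -b makes sqlcmd.exe return a non-zero exit code on T-SQL errors,
    # so $? can be checked afterwards (see the error-trapping note below).
    sqlcmd.exe -S sql_server_network_name\instance_name -E -b -Q $Query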

    Permissions were our biggest issue with using a Data Domain device. Try the following "quick and dirty" steps:

    1. Create a file share on the Data Domain device

    2. Grant read/write permissions to the SQL Server service account so that it can use the file share

    3. Log onto the network using the SQL Server service account (for initial testing, log onto the SQL Server box itself, if your Data Security folks allow it)

    4. Use Notepad to test the creation, modification, and deletion of a sample text file of your choosing. Once you get this to work, the SQL Server service account should be able to write backups to the file share during a real database backup. If you have issues, you may have some IP ports blocked; you can check that with PortQry.exe (see http://www.microsoft.com/en-us/download/details.aspx?id=17148)

    5. Grant read/write permissions to YOUR account so that you can manipulate the SQL Server backups later OUTSIDE of SQL Server

    6. Log onto the network using YOUR account (for initial testing, log onto the SQL Server box itself, if your Data Security folks allow it)

    7. Use Notepad to test the creation, modification, and deletion of a sample text file of your choosing

    8. Open SQL Server Management Studio

    9. Test a database backup using a TSQL command similar to the following:

    BACKUP DATABASE [my_database] TO DISK = N'\\data_domain_server_name\share_name\my_database_FULL_20141227_102308.bak' WITH NOFORMAT, NOINIT, NAME = N'hallengren-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 5;

    10. Use Windows Explorer to confirm that the backup was created in the expected location on the Data Domain device

    11. Optionally, use SQL Server Management Studio to test a restore of the newly backed up database

    12. Once you get the above steps working, you should be able to use PowerShell to back up to the Data Domain device

    13. Here is some sample PowerShell syntax (using the TSQL from above):

    sqlcmd.exe -S sql_server_network_name\instance_name -E -w1000 -b -l 32 -y 8000 -Q "BACKUP DATABASE [my_database] TO DISK = N'\\data_domain_server_name\share_name\my_database_FULL_20141227_102308.bak' WITH NOFORMAT, NOINIT, NAME = N'hallengren-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 5;"

    WARNING: We had error trapping issues when trying to use SQLPS.exe, the version of PowerShell that launches via SQL Server Management Studio. The above syntax is just "plain jane" PowerShell (i.e., powershell.exe), opened from the Start Menu. Be sure to include "-b" on the sqlcmd.exe command line so that errors will be detected.

    You can check the success/failure of the most recent PowerShell command using "$?" and, when it indicates failure, call a failure routine such as "job_failure". Be aware that Ola Hallengren's code has notification logic built into it.

    if (!$?) {
        # The previous command failed: capture the error text and position, then alert
        $ErrorMessage_STR = $error[0].ToString()
        $ErrorPosition_STR = $error[0].InvocationInfo.PositionMessage.ToString()
        job_failure $ErrorMessage_STR $ErrorPosition_STR
    }
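
    The job_failure routine is just whatever alerting code you write yourself; here is a minimal hypothetical sketch (the log path, SMTP server, and e-mail addresses are all placeholders):

    # Hypothetical failure handler: append the error to a log file, e-mail
    # the DBA team, and exit non-zero so Task Scheduler records a failure.
    # Log path, SMTP server, and addresses are placeholders.
    function job_failure ($ErrorMessage_STR, $ErrorPosition_STR) {
        $Body_STR = "Backup failed.`r`n$ErrorMessage_STR`r`n$ErrorPosition_STR"
        Add-Content -Path 'C:\SQL_Backup_Logs\backup_errors.log' -Value $Body_STR
        Send-MailMessage -SmtpServer 'smtp.example.com' `
            -From 'sqlbackup@example.com' -To 'dba-team@example.com' `
            -Subject 'SQL backup failure' -Body $Body_STR
        exit 1
    }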

  • I know I'm late to this and rarely if ever post, but here's my 2 cents: we were "required" to migrate to DD backups for all of our SQL instances, approximately 300 instances and over 5,000 databases. We went from backing up to local (SAN) disk, which was then picked up by NetBackup and archived to tape, to backing up to CIFS shares set up on the Data Domain.

    We tested DDBoost early on and it was a joke IMHO. We have instances housing over 100 databases, and writing a custom Boost script to perform every backup was untenable, not to mention that the logging was a joke. We already use our own custom maintenance to perform the backups and all other maintenance, and we were not interested in rewriting all of our code to make this happen. The only thing we did add was a PS script to perform recovery chain analysis and clean up old files in each database backup directory, something roughly like the sketch below.
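
    The cleanup half of a script like that can be quite small. A minimal sketch, where the path and retention window are placeholders; a real script must first confirm a newer FULL backup exists so it never breaks a recovery chain:

    # Naive cleanup: delete backup files older than the retention window.
    # $BackupRoot and $RetentionDays are placeholders. Verify a newer FULL
    # exists before deleting, or you risk breaking the recovery chain.
    $BackupRoot = '\\DDname\sharename\servername'
    $RetentionDays = 14
    Get-ChildItem -Path $BackupRoot -Recurse -Include *.bak, *.trn |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-$RetentionDays) } |
        Remove-Item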

    We left SQL backup compression turned on to see if the storage team would complain about the de-dupe rates; they felt they got an acceptable ratio and did not press us on this point.

    There are pros and cons to this technology. It has made our backup file storage more secure. We do have domain authentication turned on (I recall this took some time for us as well, but eventually we got it working). As a result we limit the top-level CIFS shares to the service accounts performing the backups and to our team for recovery access. It is nice to have it all visible in one place. This is also a con, however, since it is all in one place. We do not replicate at this time, although it's been discussed over the years. It has definitely been slower and has begun to affect our customers with lengthy jobs, etc. We are at a turning point where we are going to begin working more differential backups into our schedule, and sadly, we will also probably have to revisit DDBoost at some point to get back to client-side de-dupe and push less across the network.

    The one perception amongst my teammates that I have to keep pushing back on is that our backups now "take longer"; we can't actually say that with certainty. Previously our insight ended at the SQL Agent job time to back up to the SAN disk (A); we had no view into the time taken to complete the archive to tape (B), so the total backup time was really A + B. Now that we use DD, we are writing all the way to the archive in one step, so we see the entire duration. The two totals could be equal for all we know.

  • gary1 (1/12/2015)


    So, do we need to give read/write permissions to the SQL service account on the Data Domain file share? Can you please advise?

    Yes, we actually determined that it needed Full Control on the individual server-level folder (I know, I know, but it's the only thing that worked).

    Our structure is similar to \\DDname\sharename\servername\instancename\dbname\files.

    On a positive note, we are also in the process of converting everything we can to MSAs (managed service accounts), so that's the account that gets the access.
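
    From an elevated PowerShell prompt, the grant looks roughly like this, assuming the DD CIFS share honors Windows ACLs; the share path, domain, and account names are placeholders (note the trailing $ on the MSA name):

    # Grant Full Control, inherited by subfolders and files, on the
    # server-level folder to the managed service account.
    # Share path, domain, and account name are placeholders.
    icacls '\\DDname\sharename\servername' /grant 'MYDOMAIN\sql_msa$:(OI)(CI)F'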
