How to take a backup of 700GB in less time

  • Hi Experts

    I have a question: if the database size is more than 700GB, how can I take the backup in less time without using any third-party tools?

    Regards

    Jagadeesh...


  • "Divide and Conquer".

    The question is how to "divide". "It Depends" on the structure of the database. I would move all the very large, high-usage tables to a couple of different filegroups and start doing backups by filegroup. If most of your data is in just a couple of audit tables, consider partitioning in a temporal fashion across one filegroup per month. Any "static" filegroups could see a final backup followed by making the filegroup "read only" so you never have to back up that particular filegroup ever again.
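
    A minimal sketch of what that could look like (the database name SalesDB and the filegroup name Archive2013 are made-up examples, not anything from the original post):

        -- Back up one filegroup at a time instead of the whole database:
        BACKUP DATABASE SalesDB
            FILEGROUP = 'Archive2013'
            TO DISK = 'D:\Backups\SalesDB_Archive2013.bak';

        -- For static data: mark the filegroup read-only, THEN take one
        -- final backup of it, after which it never needs backing up again.
        ALTER DATABASE SalesDB MODIFY FILEGROUP Archive2013 READ_ONLY;

        BACKUP DATABASE SalesDB
            FILEGROUP = 'Archive2013'
            TO DISK = 'D:\Backups\SalesDB_Archive2013_final.bak';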

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

    Before you go down the filegroup route... the DB needs to be in full recovery, and you need log backups covering the range from the oldest read-write filegroup backup to the latest, or you won't be able to restore. Test restore paths carefully before you consider doing that to your production DB; the restores are a lot harder as a result of the piecemeal backups, and you don't want to find yourself in the position of not being able to restore.

    When backing up read-only filegroups, you need to take the final backup after switching the filegroup to read-only; otherwise you'll still need log backups covering the entire range from that backup right up to the present.
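
    To illustrate why the restore path needs testing, here's a rough sketch of a piecemeal restore (all database, filegroup, and file names are made up, and the exact sequence depends on your backups and edition):

        -- 1. Partial restore of the primary and read-write filegroups:
        RESTORE DATABASE SalesDB
            FILEGROUP = 'PRIMARY'
            FROM DISK = 'D:\Backups\SalesDB_ReadWrite.bak'
            WITH PARTIAL, NORECOVERY;

        -- 2. Apply every log backup from the oldest read-write filegroup
        --    backup through to the most recent:
        RESTORE LOG SalesDB FROM DISK = 'D:\Backups\SalesDB_Log_1.trn' WITH NORECOVERY;
        RESTORE LOG SalesDB FROM DISK = 'D:\Backups\SalesDB_Log_2.trn' WITH RECOVERY;

        -- 3. Read-only filegroups can come back later from their final backups:
        RESTORE DATABASE SalesDB
            FILEGROUP = 'Archive2013'
            FROM DISK = 'D:\Backups\SalesDB_Archive2013_final.bak'
            WITH RECOVERY;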

    As for making the backup faster...

    First check the throughput of the backup device. Backups are IO-bound operations; if either the source (the drive with the data files) or the destination (the drive with the backups) is slower than it should be, backups will suffer. Check the latencies on the drives and make sure that the drives are handling the load. Make sure you're not backing up to the same drives that the data files are on.
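
    One way to check drive latency from inside SQL Server (2005 and later) is the virtual file stats DMV; this is a generic sketch, not tied to any particular setup (figures are cumulative since the files were opened, so treat them as a rough check):

        SELECT  DB_NAME(vfs.database_id) AS database_name,
                mf.physical_name,
                vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
                vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
        FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN    sys.master_files AS mf
                ON  mf.database_id = vfs.database_id
                AND mf.file_id = vfs.file_id
        ORDER BY avg_write_latency_ms DESC;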

    If you're using SQL 2008 Enterprise Edition, or SQL 2008 R2 (or above) Standard Edition, you can compress native backups. Since backups are IO-bound, compressing them usually makes them significantly faster. Do test first to make sure that the additional CPU load isn't a problem.
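
    For example (database name and path are placeholders):

        -- Compressed full backup; STATS = 10 prints progress every 10%.
        BACKUP DATABASE SalesDB
            TO DISK = 'D:\Backups\SalesDB_Full.bak'
            WITH COMPRESSION, STATS = 10;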

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Also, if you're backing up to a remote drive across the network, check the network connectivity. That can seriously impact backup performance.

    Check the wait statistics while the backup is running to get an idea of what might be causing it to run slow.
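
    A quick way to see that while the backup runs (a generic sketch):

        -- Shows the current wait for any running backup; BACKUPIO or
        -- BACKUPBUFFER waits usually point at the I/O subsystem.
        SELECT  r.session_id,
                r.command,
                r.wait_type,
                r.wait_time,
                r.percent_complete
        FROM    sys.dm_exec_requests AS r
        WHERE   r.command LIKE 'BACKUP%';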

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • GilaMonster (7/1/2014)


    As for making the backup faster...

    First check the throughput of the backup device. Backups are IO-bound operations, if either the source (drive with the data files) or destination (drive with the backups) are slower than they should be, backups will suffer.

    +1

    In a previous role we had slow backup issues develop with one of our instances. It transpired that the storage admin had moved the LUN to a set of ropey old SATA disks which he assumed would be good enough for the job!

    We soon put him straight 😉

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

Striping the backup should help.
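
    For example, striping across three drives (paths are placeholders). Note that all stripes are needed together at restore time:

        BACKUP DATABASE SalesDB
        TO  DISK = 'D:\Backups\SalesDB_stripe1.bak',
            DISK = 'E:\Backups\SalesDB_stripe2.bak',
            DISK = 'F:\Backups\SalesDB_stripe3.bak';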

    ---------------------------------------------------------------------

Thank you all for giving your valuable answers.

    I've just started to analyse this; I'll get back to you soon. :-)

    Regards

    Jagadeesh..


Start performing differential backups.
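
    For instance (names and schedule are illustrative only): take a periodic full backup, then much smaller differentials in between.

        -- Periodic full backup:
        BACKUP DATABASE SalesDB TO DISK = 'D:\Backups\SalesDB_Full.bak';

        -- Differential: only the extents changed since the last full backup,
        -- so it's usually much smaller and faster to take.
        BACKUP DATABASE SalesDB
            TO DISK = 'D:\Backups\SalesDB_Diff.bak'
            WITH DIFFERENTIAL;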

  • 700GB isn't terribly large.

    There have been many good points already laid down.

    Something I tend to do is stripe the backups and compress them for databases in that range. In other cases, I will go through the tasks of finding the best method to break the tables apart into filegroups so I can perform filegroup backups.
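
    As a sketch of the striped-and-compressed approach (paths and tuning values are illustrative, not recommendations; test on your own hardware):

        BACKUP DATABASE SalesDB
        TO  DISK = 'D:\Backups\SalesDB_1.bak',
            DISK = 'E:\Backups\SalesDB_2.bak'
        WITH COMPRESSION,
             BUFFERCOUNT = 64,           -- number of I/O buffers; more can help fast storage
             MAXTRANSFERSIZE = 4194304;  -- 4 MB per transfer, the maximum allowed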

    Here are some articles on the topic.

    http://www.brentozar.com/archive/2014/01/improving-the-performance-of-backups/

    http://sqlblog.com/blogs/linchi_shea/archive/2007/03/30/sql-server-backup-i-o-performance.aspx

    And here's some really good information, with graphs demonstrating the effects of the changes:

    http://henkvandervalk.com/how-to-increase-sql-database-full-backup-speed-using-compression-and-solid-state-disks

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events
