Restoring with multiple BAK-files (with multiple CPU-cores)

  • Hi all,

    I do have a question about restoring databases:

    SQL Server uses just one CPU core when performing a regular backup to a single BAK file,
    and likewise uses just one CPU core when restoring that BAK file.

    Is it possible to direct multiple CPU cores at the backup mechanism by backing up to multiple files - and is this (using multiple CPU cores) also possible when restoring the multiple BAK files to one database?

    Our backup (2.5 TB, compressed to 500 GB) takes about 4-5 hours to restore and I'd like to speed this up. Is this a possible solution, or are there other ways to accomplish it?

    All replies are welcome.

    Regards
    Guus Kramer
    The Netherlands

  • Do you have the Standard or Enterprise edition of SQL Server?

    Also, splitting the backup into multiple files could actually slow things down unless you can guarantee that each file will be written to totally separate, dedicated spindles.  That's no longer the case in most systems.

    Finally, CPU isn't the limiting factor here.  Disk reads, disk writes, the size of the "pipe", and configuration settings such as "Instant File Initialization" and packet size are.

    Of course, one good experiment is worth a thousand expert opinions and hours of research.  Try the multiple-file approach.  I think you'll be disappointed, but the experiment will confirm or remove any doubts for your particular setup.
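    For reference, striping a backup across multiple files is just a matter of listing several DISK clauses in the BACKUP command.  A minimal sketch (the database name and paths here are made up for illustration):

    -- Striped backup: SQL Server writes to all listed files in parallel.
    -- Database name and paths are illustrative only.
    BACKUP DATABASE [MyDb]
    TO  DISK = 'Y:\backup\MyDb_1.bak',
        DISK = 'Y:\backup\MyDb_2.bak',
        DISK = 'Y:\backup\MyDb_3.bak',
        DISK = 'Y:\backup\MyDb_4.bak'
    WITH COMPRESSION, STATS = 10;

    Note that a restore then needs every one of those stripe files listed in the FROM clause - losing any single stripe makes the whole backup unusable.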

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Enterprise edition

    In my experience, the disk is the bottleneck most of the time; try it out.  This script prints the DISK clauses for a striped backup command so you can paste them into a BACKUP statement:

    SET NOCOUNT ON;

    DECLARE
        @fno    int,              -- number of backup files to stripe across
        @dbname varchar(100),
        @path   varchar(100),
        @finame varchar(100),
        @dsk    varchar(1000),
        @ts     varchar(1000);

    SET @fno    = 64;
    SET @dbname = 'mydb';
    SET @path   = 'y:\backup\' + CAST(@@SERVERNAME AS varchar(128)) + '\';

    SET @finame = REPLACE(@dbname, ' ', '_');
    -- Timestamp suffix, e.g. _20181206_08_17_00_FILENO_
    SET @ts = '_' + REPLACE(REPLACE(REPLACE(CONVERT(varchar(20), GETDATE(), 120), ':', '_'), '-', ''), ' ', '_') + '_FILENO_';

    PRINT 'Backup Database [' + @dbname + '] to ';

    -- One DISK clause per stripe, with a trailing comma on all but the last.
    WHILE @fno > 1
    BEGIN
        SET @dsk = 'disk = ' + CHAR(39) + @path + @finame + @ts + CAST(@fno AS varchar(10)) + '.full' + CHAR(39) + ',';
        PRINT @dsk;
        SET @fno = @fno - 1;
    END;

    -- Last clause: no trailing comma, and append the WITH options.
    SET @dsk = 'disk = ' + CHAR(39) + @path + @finame + @ts + CAST(@fno AS varchar(10)) + '.full' + CHAR(39);
    PRINT @dsk + ' with compression';
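    On the restore side, the same stripe set is listed in the FROM clause.  A sketch following that script's naming pattern (server name and timestamp are illustrative):

    -- Restore from a striped backup: every stripe file must be listed.
    RESTORE DATABASE [mydb]
    FROM DISK = 'y:\backup\MYSERVER\mydb_20181206_08_17_00_FILENO_1.full',
         DISK = 'y:\backup\MYSERVER\mydb_20181206_08_17_00_FILENO_2.full',
         -- ... one DISK clause per stripe, up to the @fno files created ...
         DISK = 'y:\backup\MYSERVER\mydb_20181206_08_17_00_FILENO_64.full'
    WITH REPLACE, STATS = 10;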

  • gkramer 23701 - Thursday, December 6, 2018 8:17 AM

    Enterprise edition

    In that case, if you're not using compression, you should give it a try.  I did a restore of a 2.2 TB database in 55 minutes (actually, multiple times this past weekend on a test box).  Of course, I also have some other tricks of the trade going on, like "Instant File Initialization" (IFI) and defining both the packet and buffer size for the RESTORE.  I also have a gigabit connection between me and the location of the backup, but most of the performance comes from using compression during the backup and having IFI active.
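    As a sketch of the kind of RESTORE tuning Jeff describes - the values below are illustrative starting points, not recommendations, and should be benchmarked on your own hardware:

    -- Illustrative only: more/larger I/O buffers can speed up a restore.
    RESTORE DATABASE [MyDb]
    FROM DISK = 'Y:\backup\MyDb.bak'
    WITH REPLACE,
         BUFFERCOUNT     = 64,        -- number of I/O buffers used
         MAXTRANSFERSIZE = 4194304,   -- 4 MB per transfer (the maximum)
         STATS = 10;

    -- Network packet size is a connection property rather than a RESTORE
    -- option (e.g. sqlcmd's -a switch, or the server-wide setting).

    -- On SQL Server 2016 SP2 / 2017 and later you can check whether
    -- Instant File Initialization is active for the engine service:
    SELECT servicename, instant_file_initialization_enabled
    FROM sys.dm_server_services;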

    --Jeff Moden


