SQL Server Restore precheck gets free space wrong

  • BobAtDBS

    SSCarpal Tunnel

    Points: 4582

I'm trying to move a large database (22 TB) from a stand-alone server to a Storage Spaces Direct (S2D) cluster.  The database consists of four files, none over 16 TB.  The cluster is already hosting many TB of data and is working fine.  To minimize downtime, I generally restore a full backup, then restore a differential backup, then stop providing the service and restore the transaction log backups.  But I can't even get the process started.  When I try to execute the Restore statement, I get the following error message:

    Msg 3257, Level 16, State 1, Line 13

    There is insufficient free space on disk volume '\\ReindeerAFS3\WSPSQL\' to create the database. The database requires 21812542636032 additional free bytes, while only 17592186040320 bytes are available.

    You might note that the number reported as "available" is exactly 16 TB.  The Cluster Shared Volume actually has 25 TB free; it was created specifically for this SQL Server data.
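    For context, the minimal-downtime sequence described above looks roughly like this (the database, file, and backup names here are hypothetical, and only one MOVE clause is shown instead of four):

    ```sql
    -- Step 1: restore the full backup, leaving the database in RESTORING state
    RESTORE DATABASE BigDB
    FROM DISK = N'\\BackupShare\BigDB_Full.bak'
    WITH NORECOVERY,
         MOVE N'BigDB_Data1' TO N'\\ReindeerAFS3\WSPSQL\BigDB_Data1.mdf',
         STATS = 5;

    -- Step 2: restore the most recent differential, still leaving it unrecovered
    RESTORE DATABASE BigDB
    FROM DISK = N'\\BackupShare\BigDB_Diff.bak'
    WITH NORECOVERY, STATS = 5;

    -- Step 3: after stopping the service, restore the remaining log backups...
    RESTORE LOG BigDB
    FROM DISK = N'\\BackupShare\BigDB_Log.trn'
    WITH NORECOVERY;

    -- ...then bring the database online
    RESTORE DATABASE BigDB WITH RECOVERY;
    ```

    The error above occurs at Step 1, before any data is written.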

    I have tried the rumored Trace Flag 3104, which is supposed to bypass the free-space check, but it does not help.  The error message remains the same.  I believe that trace flag was a SQL Server 2000 item.
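    (For anyone following along, the flag would be enabled like this.  It is a SQL Server 2000-era flag, undocumented in current versions, and as noted above it has no effect here; shown only for completeness.)

    ```sql
    -- Enable trace flag 3104 globally (legacy free-space-check bypass,
    -- SQL Server 2000 era; ineffective on modern versions)
    DBCC TRACEON (3104, -1);
    ```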

    The data files target is a Scale-Out File Server (\\ReindeerAFS3) share (WSPSQL).  I restored some small databases to this share a few months ago to make sure things were working, and everything went smoothly.  I can restore this same backup to a stand-alone virtual machine that lives on the cluster, with the VHDX hosted on that same CSV, with no issues.  So I'm sure that the free space message is wrong.

    Has anyone else tried to restore a large (over 16 TB) database to an S2D cluster?  Is anyone storing large databases on a SOFS share?  I'm using SQL Server 2014 Standard SP3 on both the source and destination servers.  The cluster is all 2019 Datacenter, fully patched.  This is either a bug in SQL Server, a bug in S2D or SOFS, or I'm missing something.


    Student of SQL and Golf, Master of Neither

  • Jeffrey Williams 3188

    SSC Guru

    Points: 88149

    Just curious - but what was the allocation unit used when the drive was formatted?

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

  • BobAtDBS

    SSCarpal Tunnel

    Points: 4582

    Good question, and I wondered the same thing.  I took the defaults when I built the volumes on the S2D pool.  This query shows that the block size is 4096, which used to mean a maximum volume size of 16 TB, but I have several 25 TB volumes with a 4096 block size.  I'm using ReFS; I don't know if it has the same limitations that NTFS did.

    Get-WmiObject Win32_Volume -ComputerName Donner | Format-Table Name, BlockSize   # shows all volumes with a 4096 block size

    I can certainly understand no one file being more than 16 TB, but the volume is clearly 25 TB.  I have actually "worked around" the problem by adding another SOFS share on the same CSV, and successfully restored all 22 TB in one Restore statement.
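    One way to compare against what SQL Server itself reports (rather than what WMI reports) is sys.dm_os_volume_stats.  This is a sketch run on the destination instance, assuming at least one data file already lives on the share:

    ```sql
    -- Report total and available bytes for each volume hosting a database
    -- file, as seen by SQL Server (requires VIEW SERVER STATE permission)
    SELECT DISTINCT
        vs.volume_mount_point,
        vs.total_bytes     / 1024.0 / 1024 / 1024 AS total_gb,
        vs.available_bytes / 1024.0 / 1024 / 1024 AS available_gb
    FROM sys.master_files AS mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
    ```

    If this DMV also caps available_gb at 16 TB for the share, that would point at the free-space probe SQL Server uses rather than at the RESTORE logic itself.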


    Student of SQL and Golf, Master of Neither

  • Jeff Moden

    SSC Guru

    Points: 994858

    BobAtDBS wrote:

    Good question, and I wondered the same thing.  I took the defaults when I built the volumes on the S2D pool.  This query shows that the block size is 4096, which used to mean a maximum volume size of 16 TB, but I have several 25 TB volumes with a 4096 block size.  I'm using ReFS; I don't know if it has the same limitations that NTFS did.

    Get-WmiObject Win32_Volume -ComputerName Donner | Format-Table Name, BlockSize   # shows all volumes with a 4096 block size

    I can certainly understand no one file being more than 16 TB, but the volume is clearly 25 TB.  I have actually "worked around" the problem by adding another SOFS share on the same CSV, and successfully restored all 22 TB in one Restore statement.

    It's an interesting subject.  You should write an article on that.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
    "If you think its expensive to hire a professional to do the job, wait until you hire an amateur."--Red Adair
    "Change is inevitable... change for the better is not."
    When you put the right degree of spin on it, the number 3|8 is also a glyph that describes the nature of a DBAs job. 😉

    Helpful Links:
    How to post code problems
    Create a Tally Function (fnTally)
