MS SQL 2005 Datafiles no Autogrowth [5128] Write to sparse file

  • Dear @all, we have an SAP BI system running MS SQL Server 2005 with a 200 GB database. All data files are stored on one logical volume of 205 GB, leaving 5 GB of free space, and all data files were configured without autogrowth. The database holds about 135 GB, so roughly 65 GB are free inside the files. Last night some BI jobs ran on this system and we got a lot of dumps.

    I now wonder why we didn't receive error messages about running out of free space in the DB, only these errors:

    Database error 5128 at INS access to table /BIC/DZFIG

    > [5128] Write to sparse file
    > 'N:\PBTDATA\PBTDATA18.ndf:MSSQL_DBCC6' failed due to lack of
    > disk space.#[5128] Write to sparse file

    What exactly does MS SQL Server do with the fixed-size data files, why does it need more space in the file system, and how much space do I have to leave free on the volume to keep the system safe?! I found this article, but I cannot make head or tail of it. http://support.microsoft.com/kb/926070

  • It's not an error about space in the file. There was a database integrity job run and there wasn't space for the CheckDB to create the snapshot that it needs to run off.

    Two options here. Get more drive space or run CheckDB with the TABLOCK option (requires exclusive access to the DB for the duration)
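    The second option can be sketched like this; [YourDatabase] is a placeholder for the actual database name:

    ```sql
    -- Use table locks instead of the internal database snapshot,
    -- avoiding the need for free disk space next to the data files.
    -- Requires low concurrency: it takes a short-term exclusive (X)
    -- lock on the database while the check runs.
    DBCC CHECKDB ([YourDatabase]) WITH TABLOCK;
    ```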

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • From the error message, it is clear that some DBCC command was being executed when the error occurred. In SQL Server 2005 and above, database snapshots are created during DBCC command execution.

    For example, if you have a data file on the F: drive, a snapshot is created on the F: drive while the DBCC command runs.

    In your case, the drive ran out of space when this snapshot was created. I suspect DBCC CHECKDB was being run against the database at that time. If so, please try the TABLOCK option on the CHECKDB command. Using TABLOCK forces SQL Server not to use database snapshots.

    DBCC CHECKTABLE ([TABLENAME]) WITH TABLOCK

    Pradeep Adiga
    Blog: sqldbadiaries.com
    Twitter: @pradeepadiga

  • Hi and thx for the explanation. But what is the risk of the TABLOCK option? Are other processes blocked from accessing the areas of the DB currently being checked?

    And furthermore, how much space does the DB need to create the snapshot? In this case each data file is 10 GB; 290 GB = 29 data files on a 305 GB volume.

    thx: solved, as far as my understanding goes 😉 > SAP Note 142731 - DBCC checks of SQL Server

    Consistent view in the database during CHECKDB

    SQL Server 7 uses table locks to ensure a consistent view of the tables to be checked. Newer versions of the SQL Server also use this procedure if you start the CHECKDB with the option WITH TABLOCK. The disadvantage of this is that other transactions can be temporarily locked by the CHECKDB.

    In SQL Server 2000, other transactions can change a table while it is being checked. The CHECKDB uses the transaction log to recognize these changes. This has the disadvantage that the transaction log cannot be truncated during a CHECKDB. A long CHECKDB may require a large transaction log.

    SQL Server 2005 creates a database snapshot at the beginning of the CHECKDB if all database files are on one NTFS file system. The system then creates an additional stream (NTFS sparse file) for each database file for the snapshot, which is not visible in Windows Explorer. Therefore, a long CHECKDB requires free space in all directories containing database files. The space requirement depends on the transactions running in parallel, and therefore cannot be calculated in advance.

    If you cannot use the snapshot procedure or if you specified the option WITH TABLOCK, SQL 2005 uses table locks instead.
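    Since the space requirement cannot be calculated in advance, one pragmatic approach is to check the free space on the data-file drives from within SQL Server before starting the CHECKDB. A minimal sketch for SQL Server 2005, using the undocumented but long-shipping xp_fixeddrives procedure:

    ```sql
    -- List free space (in MB) per fixed drive as seen by SQL Server.
    -- xp_fixeddrives is undocumented; verify it behaves as expected
    -- on your build before relying on it in a job.
    EXEC master.dbo.xp_fixeddrives;
    ```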

  • mulle-78 (7/23/2010)


    Hi and thx for the explanation. But what is the risk of option TABLOCK. Are other processes blocked by accessing the currently checked areas in the DB?

    From BoL:

    TABLOCK

    Causes DBCC CHECKDB to obtain locks instead of using an internal database snapshot. This includes a short-term exclusive (X) lock on the database. TABLOCK will cause DBCC CHECKDB to run faster on a database under heavy load, but decreases the concurrency available on the database while DBCC CHECKDB is running.

    TABLOCK limits the checks that are performed; DBCC CHECKCATALOG is not run on the database, and Service Broker data is not validated.
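    Because TABLOCK skips the catalog checks, they can be run as a separate step afterwards. A sketch, with [YourDatabase] as a placeholder:

    ```sql
    -- DBCC CHECKCATALOG performs the catalog consistency checks
    -- that CHECKDB omits when run WITH TABLOCK.
    DBCC CHECKCATALOG ([YourDatabase]);
    ```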

    And furthermore, how much space need the DB to create the snapshot?

    It depends on the concurrent usage. The more that is happening to the DB while the CheckDB is running the more space it needs. Looks like you're getting tight for space. Can you move any of those files to another drive? Even without checkDB, that DB grows much you're going to be in trouble.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • It is a BI system and the usage pattern typically involves a lot of data changes in the DB. I've clarified with our BI consultant that we have a time window for CHECKDB outside the data-load times.

    The alternative is the TABLOCK option, but we would have to test how much the run time is affected when it runs classically with table locks. For the moment it is solved; thx for your feedback... it was the right hint to the main problem and now I'm a little bit wiser 😉

  • Neither of the suggested workarounds is good. The WITH TABLOCK will most likely fail, as it needs an exclusive database lock to be able to perform the allocation checks – which is usually not possible on in-use user databases. Moving the data files to another location is also not feasible, as it requires downtime.

    One alternative, if there is no space for the hidden database snapshot I create, is to create your own database snapshot and run DBCC CHECKDB on that snapshot – it’s semantically the same. The best alternative is to restore a full backup of the database on another server and run DBCC CHECKDB there – offloading all the workload from the production system.
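    The first alternative can be sketched as follows. All names and paths here are hypothetical placeholders: the database name, snapshot name, and file path must be adapted, the logical file names come from sys.database_files of the source database, and every data file of the source needs its own NAME/FILENAME entry (only one is shown). The target drive must have room for the snapshot's sparse files.

    ```sql
    -- Create a user-defined database snapshot on a drive with free space,
    -- then run the integrity check against that snapshot instead of
    -- letting CHECKDB create its hidden snapshot next to the data files.
    CREATE DATABASE YourDatabase_CheckSnap
    ON (NAME = YourDataFileLogicalName,
        FILENAME = 'X:\Snapshots\YourDataFile.ss')
        -- ...one entry per data file of the source database...
    AS SNAPSHOT OF YourDatabase;

    DBCC CHECKDB (YourDatabase_CheckSnap) WITH NO_INFOMSGS;

    DROP DATABASE YourDatabase_CheckSnap;
    ```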

    Thanks

    Paul Randal
    CEO, SQLskills.com: Check out SQLskills online training!
    Blog:www.SQLskills.com/blogs/paul Twitter: @PaulRandal
    SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
    Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005

  • Thx @all for your ideas: I've extended the volume to provide more free space for the shadow copies and scheduled the DBCC in a window outside heavy data-change times. We've documented the error and its cause. I think we are prepared for the future...
