Log full - restricted 2TB vs unrestricted

  • I received an alert for a full log file this morning after a large data loading process. The log file was 35GB on a 200GB disk, with restricted growth set to 2TB (2,097,152MB). There was also a CHECKDB running, which I killed. The log file still did not autogrow, so I set MAXSIZE to UNLIMITED, and then the file resumed growing. Any idea why the restricted growth setting of 2TB blocked the log file growth?

    thanks in advance

  • 2TB is the maximum size allowed for a log file, so 2TB and UNLIMITED are the same thing.

    It wouldn't, by itself, have prevented autogrow (unless the log really was 2TB). Was the drive full? Did the autogrow time out?
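    To rule the MAXSIZE setting in or out next time, the current size, cap, and growth increment can be read straight from the file metadata. A minimal sketch against sys.database_files (run it in the affected database); sizes are stored in 8KB pages, and a max_size of -1 means unlimited:

    ```sql
    -- Current size, cap, and growth increment for the log file.
    -- size, max_size, and growth are in 8KB pages.
    SELECT name,
           size * 8 / 1024   AS size_mb,
           CASE max_size
                WHEN -1 THEN 'UNLIMITED'
                ELSE CAST(max_size * 8 / 1024 AS varchar(20)) + ' MB'
           END               AS max_size,
           growth * 8 / 1024 AS growth_mb   -- valid when is_percent_growth = 0
    FROM sys.database_files
    WHERE type_desc = 'LOG';
    ```

    Note that a log capped at 2TB will show max_size = 268435456 pages rather than -1, even though the two behave identically.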

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Yeah, I don't understand it. Is it just Monday or me? Or both? 🙂

    The log file disk is 200GB and the log file is only 35GB, so 165GB free.

    We (DBAs) kept trying to look at the db properties in SSMS but kept getting some "property unavailable" popup that prevented us from viewing the setting.

    So I ran: ALTER DATABASE [XXX] MODIFY FILE ( NAME = N'XXXLOG1', MAXSIZE = UNLIMITED) from Query Analyzer.

    At the same time, I killed the CHECKDB job.

    Afterwards, the file started autogrowing.

    Was it the killing of the CHECKDB maybe? How would this affect the log?

    FWIW, we have been running the CHECKDB nightly forever, this is the first time any issue.

  • "Property unavailable" usually indicates a lock and a command timeout. Something held a lock that was preventing the file growth. It may have been CheckDB, or it may have been something else. If it happens again, look in sys.dm_exec_requests to see which locks are being waited on and what the blocking session is doing.
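    A sketch of the kind of check Gail describes, using sys.dm_exec_requests to list blocked requests with their wait resource, blocker, and running statement (the column selection and filter are just one reasonable choice):

    ```sql
    -- List blocked requests: what they wait on and who blocks them.
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,        -- ms spent on the current wait
           r.wait_resource,
           t.text AS running_sql
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;
    ```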

  • Thanks for the info, Gail.

    OK, I'm convinced it was the CHECKDB now. We had log space issues earlier in the weekend, and our applications team reported it as if the problem were occurring now (Monday morning).

    When I checked the system activity earlier (using Activity Monitor), there were several CREATE INDEX transactions in SUSPENDED mode (iirc). I think the CHECKDB and the CREATE INDEX statements must have been in conflict with each other (CHECKDB blocking CREATE INDEX). Once I killed the CHECKDB, everything resumed and completed normally.

  • DBCC CheckDB operates on an internal snapshot of the database created when the command is executed, so it is highly unlikely to be the problem.

    What is the size of the auto-growth increment? When the log grows, SQL Server must ask Windows for the space and then zero it out before using it, which essentially stops all processing until the growth is complete. A large increment, or a high percentage on a large database, could cause issues.

    I would get a handle on why you need a transaction so large that it requires a 35GB(!) log file. Can you make the transactions shorter? Use bulk-logged recovery? Full recovery with log backups? Fewer explicit transactions? It seems like something isn't set up or coded correctly.

    Hope you can figure it out, as right now I think it could happen again.

  • WHug (3/1/2016)


    DBCC CheckDB operates on an internal snapshot of the database created when the command is executed, so it is highly unlikely to be the problem.

    What is the size of the auto-growth increment? When the log grows, SQL Server must ask Windows for the space and then zero it out before using it, which essentially stops all processing until the growth is complete. A large increment, or a high percentage on a large database, could cause issues.

    I would get a handle on why you need a transaction so large that it requires a 35GB(!) log file. Can you make the transactions shorter? Use bulk-logged recovery? Full recovery with log backups? Fewer explicit transactions? It seems like something isn't set up or coded correctly.

    Hope you can figure it out, as right now I think it could happen again.

    Thanks for the info and questions! I just now learned that instant file initialization does not apply to log files. 🙂

    Some details:

    Simple recovery model.

    Autogrowth is set to 1GB.

    Normal log growths are very quick (< 2 secs).

    On a dedicated server with dedicated disks (mountpoint SAN LUNs).

    It's a SAP BW application, and this happened during a large data load overnight.

    It was already run in QA without incident (no CHECKDB running, though).

    Once I killed the CHECKDB, the process continued, with the log growing normally.

    Can it be done better? Sure. But we have run large loads with up to near 180GB of transaction log without incident. The database is almost 2TB.
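    One way to take autogrow, and the log-zeroing pause that comes with it, out of the picture for a known large load is to pre-grow the log beforehand. A hedged sketch; the database and file names are the placeholders used earlier in the thread, and the target size is purely illustrative:

    ```sql
    -- Grow the log to its expected working size before the load starts,
    -- so the zero-initialization cost is paid once, up front, rather
    -- than in 1GB autogrow pauses mid-load.
    ALTER DATABASE [XXX]
        MODIFY FILE (NAME = N'XXXLOG1', SIZE = 40960MB);  -- 40GB, example only
    ```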

  • With a 2TB database, I wouldn't worry about a 35GB log file. It's not large. Hell, I had a 250GB log file several years ago for a 1TB database (and we used every bit of that log file during overnight data processing).

    CheckDB doesn't take locks, but it could be that it was hammering the IO subsystem enough that other things were slowed down waiting for IO. If it happens again, check sys.dm_exec_requests and sys.dm_os_waiting_tasks to see which processes are waiting and on what wait type and resource.
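    A minimal sketch of the sys.dm_os_waiting_tasks side of that check, showing each waiting task's wait type, duration, resource, and blocker (the session_id > 50 filter to skip system tasks is a common convention, not a requirement):

    ```sql
    -- Snapshot of everything currently waiting, longest waits first.
    SELECT wt.session_id,
           wt.wait_type,
           wt.wait_duration_ms,
           wt.resource_description,
           wt.blocking_session_id
    FROM sys.dm_os_waiting_tasks AS wt
    WHERE wt.session_id > 50          -- skip system sessions
    ORDER BY wt.wait_duration_ms DESC;
    ```

    IO pressure from CheckDB would typically show up here as PAGEIOLATCH_* or IO_COMPLETION waits rather than lock waits.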

