Operating system Error 112

  • Comments posted to this topic are about the item Operating system Error 112

  • Oh my, that is some frighteningly bad advice.

    Backup log with truncate only? Shrink data files? Both with no mention of their downsides and consequences?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Eck ... totally agree Gail, we have certainly dropped the standards a bit 😀

    --------------------
    Colt 45 - the original point and click interface

  • Might also mention that WITH TRUNCATE_ONLY is not available in SQL Server 2008.
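
    For anyone on 2008 who still needs the same effect, the rough replacement (a sketch only - database name, logical log file name, sizes and paths below are just placeholders) is to switch the database to the SIMPLE recovery model, shrink the log if required, then switch back to FULL and take a fresh full backup to restart the log chain:

      ALTER DATABASE MyDB SET RECOVERY SIMPLE;                -- log can now be truncated at checkpoint
      DBCC SHRINKFILE (MyDB_log, 1024);                       -- optionally shrink the log file to ~1 GB
      ALTER DATABASE MyDB SET RECOVERY FULL;
      BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB.bak';   -- full backup to restart the log backup chain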

  • I guess it wouldn't be so bad if this was talking about restoring a test database to a different location - I often find that because test databases aren't always fully maintained, their log files can grow. But my guess is this is an example of restoring a production database to a local machine. My recommendation would be to get a proper test server with enough room on it for the required database(s).

  • People make external drives FAT32 for a variety of legitimate reasons - presenting a switch to NTFS as the only solution isn't much help, and it's a pretty well-known and simple process anyway. I can think of several ways to move a database copy to another system rather than use the backup and restore method if NTFS to FAT is stumping you.

  • The writer's intentions might be good, but the advice is horrible. Backup log WITH TRUNCATE_ONLY? :w00t:

    Dude, are you seriously suggesting executing that on production?!!

    SQL DBA.

  • I missed including a step there, which is: restore the database to a server where there is enough space, test this hypothesis by executing WITH TRUNCATE_ONLY, and then follow the remaining steps to an NTFS external hard drive.
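
    On SQL Server 2005 that truncate-and-shrink step looks something like the following (names, sizes and paths are just examples; note that truncating the log this way breaks the log backup chain, so a full backup should follow):

      BACKUP LOG MyDB WITH TRUNCATE_ONLY;             -- discard inactive log records (2005 and earlier only)
      DBCC SHRINKFILE (MyDB_log, 512);                -- shrink the log file down to roughly 512 MB
      BACKUP DATABASE MyDB TO DISK = 'E:\MyDB.bak';   -- re-establish the backup chain afterwards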

    Thanks for leaving the feedback though.

    Mahi

  • Three options for not having to convert your FAT32 drive to NTFS and also not having to truncate the log file on the original to save space:

    1. Detach and copy the original mdf and ldf files (see the sketch at the end of this post). This works on my SQL Express between NTFS HDD and FAT32 USB. Is there something that prevents this with Standard or Enterprise?

    2. Temporarily set up a trust between servers and copy all objects with data. Actually, you can pipe it all through your Management Studio login via the 'import data' or 'export data' wizards if you have login access to both systems (and of course the correct read/write permissions). Depending on your workstation connection this might be too slow if you are working remotely, but it is an excellent option for a workstation on the same LAN as the servers, or if you can log in to Management Studio over remote desktop to one of the servers.

    3. Generate a script that includes all data. Edit the first couple of lines, where the 'create database' commands are located, to put the mdf and ldf files on the FAT32 drive. Also change the recovery model to simple or bulk-logged instead of full. Then run the script with sqlcmd on the recipient machine.

    In all three, problem solved - the log file does not need to be truncated and the FAT32 does not need to be converted. If the ldf is too big to do #1 then #2 and #3 will work.

    Note that if the script file from #3 is exceptionally large due to the amount of data you might have to be creative in how you edit the first few lines. UltraEdit on a Windows system or your choice of a Linux command line editor should do the trick.

    In summary, copying an entire database via the backup and restore method leaves a lot to be desired, and there are several cases where problems can crop up. There are ways that are easier, and sometimes they're faster. Method 2 above, for example, does not first write out a backup file that must be moved and then read; the output goes straight into the recipient database without an intermediary file.

    To summarize the summary: Use backup for backup and use one of the variety of copy methods to copy.

    Should I see if I can get this post put up as an article? lol... (I'd have to flesh out the steps a lot more)
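
    For anyone wanting to try option 1, the detach/copy/attach sequence is roughly the following (database name and file paths are placeholders, and the source database is offline between the detach and the re-attach):

      -- on the source server: detach so the files can be copied
      EXEC sp_detach_db @dbname = 'MyDB';

      -- copy MyDB.mdf and MyDB_log.ldf to the destination, then on the target server:
      CREATE DATABASE MyDB
          ON (FILENAME = 'E:\Data\MyDB.mdf'),
             (FILENAME = 'E:\Data\MyDB_log.ldf')
          FOR ATTACH;

    For option 3, the generated .sql file can then be run on the recipient machine with something like sqlcmd -S .\SQLEXPRESS -i MyDB_script.sql (server and file names assumed).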

  • magarity (4/18/2009)


    1. Detach and copy the original mdf and ldf files. This works on my SQL Express between NTFS HDD and FAT32 USB. Is there something that prevents this with Standard or Enterprise?

    No, but if you're trying to copy a production DB to a test environment, taking it offline for the duration of the copy may not be an option

    2. Temporarily set up a trust between servers and copy all objects with data.

    Fine if the source DB is dev, but not so much an option on a busy production DB. The load that the copy will impose may be unacceptable. If there's a downtime window where it can be done, ensure it's big enough.

    3. Generate a script that includes all data. Edit the first couple of lines, where the 'create database' commands are located, to put the mdf and ldf files on the FAT32 drive. Also change the recovery model to simple or bulk-logged instead of full. Then run the script with sqlcmd on the recipient machine.

    Same as above. Fine for a small DB or a dev DB, but it may be an unacceptable load or take far too long on large, busy production servers.

    In summary, copying an entire database via the backup and restore method leaves a lot to be desired, and there are several cases where problems can crop up.

    True, but if it's a production DB that you're copying, there will always be a backup file available.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Look, the straight copy-all from the script doesn't do any sorting or anything else that creates a lot of overhead - do you really have production servers that are so heavily loaded they can't handle a one-time copy like I've suggested at some point during the week? Sounds like an upgrade is in order! Besides, compared to the original article's suggestion, I'm sure my ideas are much better all around. Since the problem is getting the database to a computer that has a FAT32 drive without enough space to handle the full log, I can't think of any other ways to do it.

  • magarity kerns (4/18/2009)


    Look, the straight copy-all from the script doesn't do any sorting or anything else that creates a lot of overhead - do you really have production servers that are so heavily loaded they can't handle a one-time copy like I've suggested at some point during the week?

    I've worked on databases that are mission critical, heavily used and over 1 TB in size, where a script of all the data will take some hours, during which the increased IO and the flushing of data from cache will degrade performance beyond what's acceptable.

    If your DBs aren't that heavily used and are small enough that the copy won't take hours, great. I'm just pointing out that it's not the case for every system, and that all methods of copying databases have some advantages and some disadvantages.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I never deal with anything mission critical, so I'll take your word for it; my own experience is 100% data warehouses, where even if there is a lot of data, it's loaded in the wee hours of the night, the reporting services grind out reports before everyone comes to work, and then during the day it's all data miners and other ad-hoc users who expect moderate to long query times anyway. So the direct-connection copy-all-objects method is just yet another long-running daytime query. That's how I used to repopulate the development database from time to time, and no one ever noticed enough to complain. I haven't actually scripted out a 'real' database including its data to a .sql file; I admit that would be a horrendously large text file. I was throwing it out as something to consider as a better possibility than the original suggestion.

  • This article sat in pending for several months and has been posted now; it describes how I performed a backup and restore of a 200 GB production DB to a FAT32 drive.
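
    For anyone attempting the same thing: FAT32 caps individual files at 4 GB, so a backup of that size has to be striped across multiple files to keep each one under the limit. A sketch, with placeholder file paths and logical names (SQL Server spreads the backup roughly evenly across the listed files):

      BACKUP DATABASE MyDB
          TO DISK = 'F:\Backup\MyDB_1.bak',
             DISK = 'F:\Backup\MyDB_2.bak',
             DISK = 'F:\Backup\MyDB_3.bak';   -- use enough files to keep each under 4 GB

      RESTORE DATABASE MyDB
          FROM DISK = 'F:\Backup\MyDB_1.bak',
               DISK = 'F:\Backup\MyDB_2.bak',
               DISK = 'F:\Backup\MyDB_3.bak'
          WITH MOVE 'MyDB' TO 'D:\Data\MyDB.mdf',
               MOVE 'MyDB_log' TO 'D:\Data\MyDB_log.ldf';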

    Thoughts and ideas are always welcome.

    Mahidhar Vattem
