Backups Aren't Backups Until a Restore Is Made

  • We have a reports server that exists partly for that very purpose: a restore is run on it every morning after the main backup has been taken from the production server. The data on our reports server is never more than a day old, and we have confirmation that the last production backup can be restored if need be.
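A nightly restore-and-verify job like the one described above can be sketched in T-SQL. All database, file, and share names here are hypothetical; WITH REPLACE overwrites yesterday's copy on the reports server, and DBCC CHECKDB confirms the restored database is actually readable, which is the part that proves the backup is good:

```sql
-- Hypothetical names and paths; WITH MOVE relocates the files onto the
-- reports server's drives, REPLACE overwrites yesterday's restored copy.
RESTORE DATABASE SalesReports
FROM DISK = N'\\backupshare\Sales_full.bak'
WITH MOVE N'Sales_Data' TO N'D:\Data\SalesReports.mdf',
     MOVE N'Sales_Log'  TO N'E:\Log\SalesReports.ldf',
     REPLACE, STATS = 10;

-- Prove the restored copy is usable, not just present.
DBCC CHECKDB (SalesReports) WITH NO_INFOMSGS;
```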

  • As Steve says, this isn't limited to databases. The databases are often the company's crown jewels, but there are a lot of other treasures out there.

    Personally, I don't have a database to back up (except sometimes in development VMs, which I try to keep a copy of for mini-DR-style scenarios), but I do have a lot of files. For a long time I have backed up some of them to AWS. Knowing that backups are worthless unless you can retrieve the data when needed, I tested random files of different types as part of the setup, and I keep checking on a regular basis.

    Even though I knew it before becoming a member of this community, it is essential that places like this keep banging this drum. It is too costly an oversight to make.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!
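The kind of spot-check Gaz describes (retrieve a random sample of backed-up files and confirm they match the originals) can be sketched in Python. The directory layout and sample size here are assumptions, not anything from the post:

```python
import hashlib
import random
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files don't load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def spot_check(source_dir: str, restored_dir: str, sample_size: int = 5) -> list[str]:
    """Compare a random sample of restored files against the originals.

    Returns the relative paths that failed (missing or checksum mismatch).
    An empty list means the sampled restores are good."""
    source = Path(source_dir)
    restored = Path(restored_dir)
    files = [p for p in source.rglob("*") if p.is_file()]
    failures = []
    for path in random.sample(files, min(sample_size, len(files))):
        rel = path.relative_to(source)
        copy = restored / rel
        if not copy.is_file() or sha256(copy) != sha256(path):
            failures.append(str(rel))
    return failures
```

Run on a schedule, this turns "we have backups" into "we have backups we can read back".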

  • We do a restore of our two big "money makers" every night.  We use them for troubleshooting if anything goes wrong in prod, but I'd do it anyway even if no such use existed.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • There are few worse feelings than the moment you realize that the backup you thought you had is no good.  Not only can you have a failure of the backup medium (tape, disk, etc.), but there can also be problems with the procedure.  I once had a client that insisted they were faithfully doing a backup every night, and they were, with the exception of one step.  That one step somehow got left out of the procedure, and it resulted in the same stale data getting "backed up" each night.

    It can seem like a pain sometimes to test restores, but it is time well spent in the long run.

  • It is also possible to have perfectly good backups but then lose the keys. For example, if TDE is enabled on a database, then backups are encrypted by default, so ensure you are backing up the certificate to a separate, safe, and reliable location.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
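The certificate backup described above can be sketched in T-SQL. The certificate name, share path, and password here are hypothetical placeholders:

```sql
-- Back up the server certificate that protects the database encryption key.
-- Without this file and its private-key password, TDE-encrypted backups
-- cannot be restored on another server.
USE master;
BACKUP CERTIFICATE TDECert
TO FILE = N'\\safe-offsite-share\TDECert.cer'
WITH PRIVATE KEY (
    FILE = N'\\safe-offsite-share\TDECert.pvk',
    ENCRYPTION BY PASSWORD = N'<strong password here>'
);
```

Store the .cer, .pvk, and password separately from the database backups themselves; a restore test should include restoring the certificate to a second server.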

  • Many eons ago, when I worked at a large company, we used Alpha4 (a dBase clone). That version of Alpha4 was not multi-user friendly.  Regardless, I put a database that 3-5 users would be accessing on a network drive (the other option was to host it on my PC and get up every time another user needed it... and that was NOT going to happen).  If two users accessed the same data, the database became corrupted.  Because of this, I set up batch files to back up the database, which I ran before lunch and just before I left work each day.  If I wasn't in the office, another user ran the batch files.

    That worked great as long as I was in charge of that project.  When I transferred to another department, I showed my replacement the batch files and explained how vital it was to run the backups twice each day.  Fast forward 18 months, and that replacement's replacement calls me about not being able to access the database.  I went back to the old department and found that two users had indeed tried to update the same record at the same time, so the database file was corrupted.  I checked the backups, and they were there... and they were almost 7 months old.  Coincidentally, the new guy had been there for 7 months.  Doh!

    He was fortunate, though, as the network administrators were able to pull the database files from the prior Friday's backups (the restore worked).  Also, the new version of Alpha4 that had recently been released was multi-user, so he was able to convince his boss to upgrade.

  • Back in the early '90s, I too worked with a dBase clone called FoxPro.
    OMG! If we opened the .dbf file in the FoxPro IDE to run an ad-hoc query, it locked out all other users until the database was closed again. If an application opened the database to insert or update records but didn't open every related index file, the next user would get an error when they attempted to use the database. If a user shut down their PC while the application was open, it would corrupt the database. It happened so frequently that I coded an error handler for the application that would trap corruption-related error messages, run an auto-fix utility against the database file, and retry before raising the error. That cut the 2 AM support calls in half.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell - Wednesday, February 8, 2017 11:37 AM

    Back in the early '90s, I too worked with a dBase clone called FoxPro.
    OMG! If we opened the .dbf file in the FoxPro IDE to run an ad-hoc query, it locked out all other users until the database was closed again. If an application opened the database to insert or update records but didn't open every related index file, the next user would get an error when they attempted to use the database. If a user shut down their PC while the application was open, it would corrupt the database. It happened so frequently that I coded an error handler for the application that would trap corruption-related error messages, run an auto-fix utility against the database file, and retry before raising the error. That cut the 2 AM support calls in half.

    Ah, FoxPro...
    My last employer, right up until probably recently (I left about 3 years ago), was using FoxPro (and Visual FoxPro) for their application.  So many "fun" times resolving broken databases from dropped network connections (the customer's cousin "knew computers" and set up the network), and all the multitudinous other ways a FoxPro .dbf file could be corrupted...

    Yeah.  I don't miss those days.

    But backups were (sort of) easy: teach the customer to make sure everyone had closed out of the application, and set them up with a DOS batch file to zip up the folder that held the DBF/IDX files onto a USB stick or CD-RW...

  • But backups were (sort of) easy: teach the customer to make sure everyone had closed out of the application, and set them up with a DOS batch file to zip up the folder that held the DBF/IDX files onto a USB stick or CD-RW...


    As for backing up the database files and reminding users to kindly exit the application before leaving for the day, there was always that one user who didn't get the email. So I had a Timer control on the main form that would check for the existence of a special file in the network folder. If it was found, the application would close all databases and exit. The backup batch file would delete the "kick-out" file after it completed all its copying overnight, so everything was ready to go the next morning. It was law and order in the wild west. No apologies.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell - Wednesday, February 8, 2017 12:24 PM

    But backups were (sort of) easy: teach the customer to make sure everyone had closed out of the application, and set them up with a DOS batch file to zip up the folder that held the DBF/IDX files onto a USB stick or CD-RW...


    As for backing up the database files and reminding users to kindly exit the application before leaving for the day, there was always that one user who didn't get the email. So I had a Timer control on the main form that would check for the existence of a special file in the network folder. If it was found, the application would close all databases and exit. The backup batch file would delete the "kick-out" file after it completed all its copying overnight, so everything was ready to go the next morning. It was law and order in the wild west. No apologies.

    So much better than our method...
    Customer would call the next morning "Jane Doe didn't exit out last night when we ran the backup, what should we do?"
    Us: "If you can, kick everyone out now, and re-run the backup.  Unless you're OK with potentially losing everything everyone did yesterday."
    :-/
    (Please note, before I get flamed here: this was how the company did it before I started, and I wasn't a developer, so I wouldn't have had a clue how to build the solution Eric had, much less that such a thing was even possible.)

  • Eric M Russell - Wednesday, February 8, 2017 11:37 AM

    Back in the early '90s, I too worked with a dBase clone called FoxPro.
    OMG! If we opened the .dbf file in the FoxPro IDE to run an ad-hoc query, it locked out all other users until the database was closed again. If an application opened the database to insert or update records but didn't open every related index file, the next user would get an error when they attempted to use the database. If a user shut down their PC while the application was open, it would corrupt the database. It happened so frequently that I coded an error handler for the application that would trap corruption-related error messages, run an auto-fix utility against the database file, and retry before raising the error. That cut the 2 AM support calls in half.

    I really liked FoxPro, but you definitely had to be careful with the files.

  • Steve Jones - SSC Editor - Thursday, February 9, 2017 9:57 AM

    Eric M Russell - Wednesday, February 8, 2017 11:37 AM

    Back in the early '90s, I too worked with a dBase clone called FoxPro.
    OMG! If we opened the .dbf file in the FoxPro IDE to run an ad-hoc query, it locked out all other users until the database was closed again. If an application opened the database to insert or update records but didn't open every related index file, the next user would get an error when they attempted to use the database. If a user shut down their PC while the application was open, it would corrupt the database. It happened so frequently that I coded an error handler for the application that would trap corruption-related error messages, run an auto-fix utility against the database file, and retry before raising the error. That cut the 2 AM support calls in half.

    I really liked FoxPro, but you definitely had to be careful with the files.

    A story one of the customer support people told me: a customer called in having problems with the application; it wouldn't start, etc.  The error on trying to start seemed to indicate corrupt files, so they ran the usual repair command.  Nope, didn't fix it.  It turned out that one of the users (actually the office manager, and the person who called support) had gone in to change data in the DBFs using...
    Notepad.
    I think the resolution to their "problem" was being offered the choice of either:
    1)  Paying a not-insignificant amount for our programmers to try to recover their data
    2)  Getting their last backup and having our support team restore the data files (and I seem to recall they hadn't backed up in quite a while, either, with the same person responsible for the backups as the one who had tried editing the data files)

    So, yeah, all sorts of interesting things could be done to those data files...
    Of course, someone could also try editing an MDF with Notepad, so...

  • jasona.work - Thursday, February 9, 2017 10:18 AM

    Steve Jones - SSC Editor - Thursday, February 9, 2017 9:57 AM

    Eric M Russell - Wednesday, February 8, 2017 11:37 AM

    Back in the early '90s, I too worked with a dBase clone called FoxPro.
    OMG! If we opened the .dbf file in the FoxPro IDE to run an ad-hoc query, it locked out all other users until the database was closed again. If an application opened the database to insert or update records but didn't open every related index file, the next user would get an error when they attempted to use the database. If a user shut down their PC while the application was open, it would corrupt the database. It happened so frequently that I coded an error handler for the application that would trap corruption-related error messages, run an auto-fix utility against the database file, and retry before raising the error. That cut the 2 AM support calls in half.

    I really liked FoxPro, but you definitely had to be careful with the files.

    A story one of the customer support people told me: a customer called in having problems with the application; it wouldn't start, etc.  The error on trying to start seemed to indicate corrupt files, so they ran the usual repair command.  Nope, didn't fix it.  It turned out that one of the users (actually the office manager, and the person who called support) had gone in to change data in the DBFs using...
    Notepad.
    I think the resolution to their "problem" was being offered the choice of either:
    1)  Paying a not-insignificant amount for our programmers to try to recover their data
    2)  Getting their last backup and having our support team restore the data files (and I seem to recall they hadn't backed up in quite a while, either, with the same person responsible for the backups as the one who had tried editing the data files)

    So, yeah, all sorts of interesting things could be done to those data files...
    Of course, someone could also try editing an MDF with Notepad, so...

    There were a handful of occasions where I had to manually or programmatically (as in, with C++) retrieve data from a corrupted FoxPro .dbf file. From what I recall, it was basically fixed-width columns and records, with all data types stored as plain text, plus a header containing metadata fields for the column definitions, a record count, and maybe a pointer to the index files. Even with severely mangled files, it was fairly easy to ignore the header and read the data records directly as if it were a flat text file, if you knew the column positions. I've never attempted to decode SQL Server's .mdf file format, however; it's far more complex, especially when row/page compression or encryption is involved.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
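The .dbf layout described above can be sketched as a minimal Python reader. It assumes the classic dBase III/FoxPro header fields (record count at bytes 4-7, header and record lengths at bytes 8-11, a one-byte deletion flag per record) and, as in the post, takes the column widths from the caller instead of parsing the field descriptors:

```python
import struct

def read_dbf_records(raw: bytes, widths: list[int]) -> list[list[str]]:
    """Pull fixed-width records out of a dBase/FoxPro .dbf image.

    Layout assumed: bytes 4-7 of the header hold the record count
    (little-endian), bytes 8-9 the header length, and bytes 10-11 the
    record length. Each record starts with a one-byte deletion flag
    ('*' marks deleted rows) followed by fixed-width text fields whose
    widths the caller supplies."""
    n_records, header_len, record_len = struct.unpack_from("<IHH", raw, 4)
    rows = []
    for i in range(n_records):
        rec = raw[header_len + i * record_len : header_len + (i + 1) * record_len]
        if rec[:1] == b"*":          # skip records flagged as deleted
            continue
        fields, pos = [], 1          # position 1: skip the deletion flag
        for w in widths:
            fields.append(rec[pos:pos + w].decode("ascii", "replace").strip())
            pos += w
        rows.append(fields)
    return rows
```

Because only the three offsets above are consulted, even a file with a mangled field-descriptor area can often be read this way, which matches the flat-text recovery approach described in the post.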
