• Br. Kenneth Igiri (7/19/2016)


    muthukkumaran Kaliyamoorthy (7/18/2016)


    Br. Kenneth Igiri (7/18/2016)


    I think you should also check the row count of the affected tables before and after repair_allow_data_loss.

    Yep, already did that. There was a variation.

    Depending on the impact, you might have to live with the loss if there is no good backup or other solution. I had a case in the past where I had to go one month back for a good backup; fortunately, it was a database in the pilot phase. Scheduling DBCC CHECKDB is a very good idea. 😀

Totally agreed! I tested both options, and neither worked in my case, since the application will not tolerate even a single-row mismatch. It is a content DB, so we used a different method from the application side.

1. Restored a month-old backup of the DB --> count(*) of the corrupted table had a big variation.

2. Restored a day-old backup of the DB --> ran repair_allow_data_loss, and the count(*) had a slight variation (roughly the sequence sketched below).
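For reference, this is roughly what step 2 looked like; the database name, backup path, and table name here are placeholders, not the real ones:

    -- Restore the day-old backup (name and path are placeholders)
    RESTORE DATABASE ContentDB
    FROM DISK = N'X:\Backup\ContentDB_full.bak'
    WITH REPLACE;

    -- Row count before the repair
    SELECT COUNT(*) AS rows_before FROM dbo.SomeTable;

    -- The repair needs single-user mode and can discard whole pages
    ALTER DATABASE ContentDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DBCC CHECKDB (ContentDB, REPAIR_ALLOW_DATA_LOSS);
    ALTER DATABASE ContentDB SET MULTI_USER;

    -- Row count after the repair; compare with rows_before
    SELECT COUNT(*) AS rows_after FROM dbo.SomeTable;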

The DB is big, and the original CHECKDB from the maintenance plan runs for more than a day with high IOPS. I tested Ola Hallengren's CHECKDB with non-clustered indexes excluded, and it ran in just 6 hours. It also has an option to exclude tables. The script is excellent!
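The call I scheduled looks something like this; the parameter names are from his DatabaseIntegrityCheck procedure as I remember its documentation, and the database name is a placeholder:

    -- Ola Hallengren's DatabaseIntegrityCheck, skipping non-clustered indexes
    EXECUTE dbo.DatabaseIntegrityCheck
        @Databases = 'ContentDB',    -- placeholder name
        @CheckCommands = 'CHECKDB',
        @NoIndex = 'Y',              -- do not check non-clustered indexes
        @LogToTable = 'Y';

As far as I recall, the exclude-table option works through @Objects (e.g. 'ALL_OBJECTS, -ContentDB.dbo.HugeTable') together with @CheckCommands = 'CHECKTABLE', not with plain CHECKDB.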

I have always used his reindexing for VLDBs, and now the CHECKDB as well. The schedule is once a week, before the full backup.

Still no clue about the cause of the corruption; checking with the infra team 🙂
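In the meantime, one place I am looking for clues is the standard suspect_pages table in msdb (nothing specific to this setup):

    -- Pages SQL Server has flagged as suspect (823/824 errors, checksum failures, etc.)
    SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
    FROM msdb.dbo.suspect_pages
    ORDER BY last_update_date DESC;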

Edited for clarity.

    Muthukkumaran Kaliyamoorthy
    https://www.sqlserverblogforum.com/