Are there issues with ONLINE INDEX REBUILD

  • Richard M (8/31/2009)


    wjones21 (8/31/2009)


    My company is obviously concerned about the 10 hour+ exposure without a backup

    We also change the T-Log backup to every 5 minutes so it doesn't grow too big during this time period.

    OK, now we are getting somewhere. This is what I was going to propose. Setting the scheduled trans log backups to 5 min.

    We have top-notch hardware with separate LUNs on their own spindles, but over 1,500 tables. It takes much longer than 2 hours, probably in the neighborhood of 10 hours if I do all indexes on all tables.

    Bill

  • wjones21 (8/31/2009)


    My company is obviously concerned about the 10 hour+ exposure without a backup that an online reindexing process will lead to with the DB in SIMPLE recovery mode. The way I've done it in the past was to set recovery mode to simple and go without a backup for the duration. That's no longer an option.

    I did not suggest Simple recovery. I suggested Bulk-logged recovery

    Oh, you cannot perform a trans log backup after you've set the DB to SIMPLE. You need to set the DB back to FULL recovery mode and then do a FULL Backup, then start your transaction log backup schedule again.

    That's correct for Simple, but not for bulk-logged. Bulk-logged retains the transaction log and allows log backups. It just minimally logs certain operations, like index rebuilds
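
    If it helps, here's a minimal sketch of what that switch could look like around the maintenance window (the database name YourDB and the backup path are placeholders, not anything from this thread):

        -- Switch to bulk-logged just for the maintenance window
        ALTER DATABASE YourDB SET RECOVERY BULK_LOGGED;

        -- Log backups keep working and the log chain stays intact;
        -- index rebuilds are minimally logged while in this model
        BACKUP LOG YourDB TO DISK = N'X:\Backups\YourDB_log_1.trn';

        -- ... run the online index rebuilds here ...

        -- Switch back once the window is done, then take a log backup
        -- so point-in-time restore coverage resumes from this point
        ALTER DATABASE YourDB SET RECOVERY FULL;
        BACKUP LOG YourDB TO DISK = N'X:\Backups\YourDB_log_2.trn';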

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (8/31/2009)


    wjones21 (8/31/2009)


    My company is obviously concerned about the 10 hour+ exposure without a backup that an online reindexing process will lead to with the DB in SIMPLE recovery mode. The way I've done it in the past was to set recovery mode to simple and go without a backup for the duration. That's no longer an option.

    I did not suggest Simple recovery. I suggested Bulk-logged recovery

    Oh, you cannot perform a trans log backup after you've set the DB to SIMPLE. You need to set the DB back to FULL recovery mode and then do a FULL Backup, then start your transaction log backup schedule again.

    That's correct for Simple, but not for bulk-logged. Bulk-logged retains the transaction log and allows log backups. It just minimally logs certain operations, like index rebuilds

    Thank you! That's a useful suggestion. Perhaps I'll set the DB to Bulk-Logged recovery mode and then increase the frequency of the trans log backups to every 5 - 10 minutes.

    Any further opinions are welcome.

    William

  • Also, bulk-logged recovery mode still allows for standard transactions from users to be backed up by the periodic trans log backups, correct? It only affects bulk-type operations such as bulk loads, online reindexing, mass inserts, etc. Is that right?

    William

  • wjones21 (8/31/2009)


    Also, bulk-logged recovery mode still allows for standard transactions from users to be backed up by the periodic trans log backups, correct?

    Yes

    It only affects bulk-type operations such as bulk loads, online reindexing, mass inserts, etc. Is that right?

    Even those will be included in the log backups; they just won't be fully logged in the transaction log. I suggest you read up on bulk-logged recovery in Books Online and make sure you understand what it is and what it does. Also be aware of what it means for point-in-time recovery.
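
    To make the point-in-time caveat concrete, here's a rough sketch (the file names are made up): you can still restore the whole log chain, but you can't stop part-way through a log backup that contains minimally logged work.

        -- Restoring the chain works exactly as it does under full recovery
        RESTORE DATABASE YourDB FROM DISK = N'X:\Backups\YourDB_full.bak' WITH NORECOVERY;
        RESTORE LOG YourDB FROM DISK = N'X:\Backups\YourDB_log_01.trn' WITH NORECOVERY;

        -- But STOPAT into a log backup that contains minimally logged
        -- operations (e.g. the index rebuilds) will fail; that log backup
        -- can only be restored in its entirety
        RESTORE LOG YourDB FROM DISK = N'X:\Backups\YourDB_log_02.trn'
            WITH STOPAT = '2009-08-31 22:15', RECOVERY;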

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Why are you rebuilding all of the indexes? You should only be rebuilding those indexes that need it, that is, indexes with more than 1,000 pages and fragmentation greater than 10%. Most smart re-indexer scripts use those thresholds as the default.

    Here is one of the better scripts that I have found: http://sqlfool.com/2009/03/automated-index-defrag-script/

    You should also realize that using the above script will still grow your log file - even if you switch to bulk-logged recovery. Reorganizing an index is not a bulk-logged operation, and will be fully logged in all recovery models. Yes, even in simple recovery - your transaction log will grow.

    I have a 500GB database with just as many tables that I rebuild in less than an hour. Of course, I am not rebuilding every index - and the script I use forces a rebuild instead of a reorganize when we are on Enterprise Edition and can perform the rebuilds online.

    Oh - and one more thing. My transaction log is backed up every 15 minutes and I still keep it sized at 25000MB to handle the nightly index rebuilds and weekly data loads on top of normal operations.
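
    For what it's worth, a bare-bones version of the same idea (using the thresholds from above; the index and table names are placeholders) looks something like this:

        -- Find indexes worth touching: more than 1000 pages and
        -- more than 10% fragmentation, skipping heaps
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name                     AS index_name,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id
         AND i.index_id  = ips.index_id
        WHERE ips.page_count > 1000
          AND ips.avg_fragmentation_in_percent > 10
          AND ips.index_id > 0;

        -- Then, on Enterprise Edition, rebuild each one online, e.g.:
        ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD WITH (ONLINE = ON);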

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • I am using the same procedure written by SQLFool for a 300 GB database. I have ensured that the MAXDOP option is set to 1, since I don't want to impact other processes running at the same time, as this is also a 24/7 app.

    In addition to that, I have also added another step to run sp_updatestats, which takes care of the out-of-date stats that need an update. On smaller or less transactional DBs I have written a procedure that updates stats with a full scan, covering the indexes that were defragged as well as stats that are not on indexes (rebuilt indexes get their stats updated automatically, but that is not the case for the defragged ones).
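
    A stripped-down sketch of that follow-up (the index and table names are made up): cap the rebuild at one CPU, and give reorganized indexes a full-scan stats update since REORGANIZE doesn't touch statistics.

        -- Rebuild capped at one scheduler so the 24/7 workload isn't starved
        ALTER INDEX IX_SomeIndex ON dbo.SomeTable
            REBUILD WITH (ONLINE = ON, MAXDOP = 1);

        -- REORGANIZE leaves statistics alone, so follow it with a
        -- full-scan update on that index
        ALTER INDEX IX_OtherIndex ON dbo.SomeTable REORGANIZE;
        UPDATE STATISTICS dbo.SomeTable IX_OtherIndex WITH FULLSCAN;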

    Increasing the frequency of transaction log backups helps keep the transaction log, and each individual backup, from growing too large.

    All of this works fine for me.

    -------------------------------------------------
    -Amit
    Give a man a fish and he'll ask for a lemon. Teach a man to fish and he won't get paged on weekends!! :w00t: - desperately trying to fish

  • With so many tables, and a database of this size, something I have done is to create multiple filegroups. I separate the tables out into these filegroups and then move the indexes belonging to those tables to the same filegroups.

    With a separation like this, I can now do index rebuilds based on filegroups. I have a procedure that checks fragmentation percent and filegroup name, and I can throw in a minimum number of pages as well if I want, then defrag based on those criteria. I can then set up a schedule to run FG1 on Monday, FG2 on Wednesday, and FG7 on Friday if I want. This decreases the duration on any one day and gives me more control.

    I also like to run the defrags online where possible.
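
    As a rough sketch of that filegroup-scoped check (the filegroup name and thresholds are placeholders; this covers non-partitioned indexes):

        -- Fragmented indexes sitting on one particular filegroup
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name                     AS index_name,
               fg.name                    AS filegroup_name,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id
         AND i.index_id  = ips.index_id
        JOIN sys.filegroups AS fg
          ON fg.data_space_id = i.data_space_id
        WHERE fg.name = N'FG1'
          AND ips.page_count > 1000
          AND ips.avg_fragmentation_in_percent > 10;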

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • It may be overkill but I'm also performing an update stats and an sp_recompile on every table that I'm rebuilding indexes on.

    I'm just going to have to rebuild only indexes that require it and then perform scheduled trans log backups every 5 min and monitor.

  • Also, I have replication running against this database. Do you see any reason why this should create any problems with online reindexing or make it take a lot longer?

  • wjones21 (8/31/2009)


    Richard M (8/31/2009)


    wjones21 (8/31/2009)


    My company is obviously concerned about the 10 hour+ exposure without a backup

    We also change the T-Log backup to every 5 minutes so it doesn't grow too big during this time period.

    OK, now we are getting somewhere. This is what I was going to propose. Setting the scheduled trans log backups to 5 min.

    We have top-notch hardware with separate LUNs on their own spindles, but over 1,500 tables. It takes much longer than 2 hours, probably in the neighborhood of 10 hours if I do all indexes on all tables.

    Bill

    We only reindex those tables that are "big" enough (see another post with that limit), are fragmented more than 5%, and have no LOBs (those can't be rebuilt online). And again, the reason it takes about 2 hours is because the process pauses when the mirroring queue gets above 10GB.
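
    For the LOB exclusion, a quick way to spot the tables to leave out of the online rebuilds (pre-2012 behaviour; nothing here comes from the actual environment) is something like:

        -- Tables carrying LOB columns: their clustered indexes (and any
        -- index that includes a LOB column) can't be rebuilt online here
        SELECT DISTINCT OBJECT_NAME(c.object_id) AS table_name
        FROM sys.columns AS c
        WHERE OBJECTPROPERTY(c.object_id, 'IsUserTable') = 1
          AND (c.max_length = -1                               -- (n)varchar(max), varbinary(max), xml
               OR TYPE_NAME(c.system_type_id) IN ('text', 'ntext', 'image'));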

    _______________________________________________________________________
    For better assistance in answering your questions, click here

  • wjones21 (9/1/2009)


    It may be overkill but I'm also performing an update stats and an sp_recompile on every table that I'm rebuilding indexes on.

    No need and possibly even harmful. Rebuilding indexes updates the stats on that index WITH FULLSCAN. If you're then doing a sampled update (using UPDATE STATISTICS) you can reduce the quality of the stats.

    Altering an index invalidates all plans based on that index, essentially forcing all queries that use it to recompile on the next execution.
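
    If you want to confirm that for yourself rather than take it on faith, a quick check (the table name is a placeholder) is to look at the stats date right after a rebuild:

        -- STATS_DATE shows when the index statistics were last updated;
        -- straight after a rebuild it reflects the rebuild itself
        SELECT i.name AS index_name,
               STATS_DATE(i.object_id, i.index_id) AS stats_last_updated
        FROM sys.indexes AS i
        WHERE i.object_id = OBJECT_ID(N'dbo.SomeTable')
          AND i.index_id > 0;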

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
