Index Rebuild Taking Too Long?

  • Shouldn't cross terms I suppose 🙂

    I immediately saw a blocking chain on the master database, which I found dangerous, and I really wanted to troubleshoot it...

    In this case, though, users are only affected while the rebuild is occurring, so I just need to be patient and wait for the next scheduled run.

    I don't have enough privileges to restore a backup to another location, and our primary DBAs are currently over booked.

    Thanks again,

    Dane

  • Are you sure this is not a problem with your rebuild script or stored procedure?

    What happens if you just run an index rebuild from a query window like this?

    ALTER INDEX [PK_item] on [dbo].[item] REBUILD

  • At this point I am only running the individual rebuild. With or without (ONLINE = ON), I get the same results.

    Thanks again!

    -Dane
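
    For reference, the two variants being compared look like this (using the same [PK_item] index from earlier in the thread; note that ONLINE = ON requires Enterprise Edition):

    ```sql
    -- Offline rebuild (the default): the table is locked for the duration.
    ALTER INDEX [PK_item] ON [dbo].[item] REBUILD;

    -- Online rebuild (Enterprise Edition only): the table remains
    -- available to readers and writers while the index is rebuilt.
    ALTER INDEX [PK_item] ON [dbo].[item] REBUILD WITH (ONLINE = ON);
    ```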

  • Curious question: how much memory do you have on the machine?

  • Here is what I see in sys.dm_os_sys_info:

    physical_memory_in_bytes

    68718006272

    So about 64 GB.
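
    A sketch of the query behind that number (the physical_memory_in_bytes column exists in SQL Server 2005 through 2008 R2; it was renamed physical_memory_kb in 2012):

    ```sql
    -- Report total physical memory on the server, in bytes and in GB.
    SELECT physical_memory_in_bytes,
           physical_memory_in_bytes / 1073741824.0 AS physical_memory_gb
    FROM sys.dm_os_sys_info;
    ```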

  • I was able to run the rebuild this morning after running:

    DBCC UPDATEUSAGE( db, 'item' ) WITH COUNT_ROWS, NO_INFOMSGS

    The rebuild completed in under a second. One issue, though, is that I didn't try the rebuild again before running this, so this may have resolved the issue, but I can't be sure. Come this weekend I will see if the issue comes up again.

    Wish I would have tested it before running the UPDATEUSAGE...

    Thanks again for all the help and I will post updates if/when something new happens.

    -Dane
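
    For anyone hitting the same thing: the page and row counts that UPDATEUSAGE corrects can be inspected before running it. A sketch, using the [item] table from this thread:

    ```sql
    -- Catalog-reported counts for the table's partitions. If UPDATEUSAGE
    -- has work to do, these can disagree with an actual COUNT(*) of the
    -- table, or even show impossible (negative) page counts.
    SELECT p.index_id,
           p.row_count,
           p.in_row_data_page_count,
           p.used_page_count,
           p.reserved_page_count
    FROM sys.dm_db_partition_stats AS p
    WHERE p.[object_id] = OBJECT_ID(N'dbo.item');
    ```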

  • So the update usage seems to have corrected the issue.

    Can someone give any additional insight as to why that would have corrected this type of problem?

    Thanks for all the help,

    Dane

  • The only time I have ever needed to use UPDATEUSAGE was when I had a negative page allocation. It is possible SQL Server had an incorrect value for the table or index you were trying to work on. Did you see any CHECKDB warnings or failures for a negative value in page allocation?
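
    A quick way to look for the negative counts described above (a sketch; this scans all objects in the current database):

    ```sql
    -- Find any partition whose catalog-reported counts have gone negative,
    -- a classic symptom of stale usage metadata from upgraded databases.
    SELECT OBJECT_NAME([object_id]) AS table_name,
           index_id,
           row_count,
           used_page_count,
           reserved_page_count
    FROM sys.dm_db_partition_stats
    WHERE row_count < 0
       OR used_page_count < 0
       OR reserved_page_count < 0;
    ```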

  • No. I check the results of CHECKDB every week and haven't seen anything flagged as an issue. I will say that this DB was upgraded from SQL Server 2000, and I had done some reading that on 2000 it was more common to have issues where you needed to run UPDATEUSAGE.

    Thanks again,

    Dane

  • Given that this was a 2000 DB and UPDATEUSAGE had not been run, I have to ask whether you have ever run CHECKDB WITH DATA_PURITY. If you have not, there is a blog post by Paul Randal explaining how to check; you might want to do that the next time you run CHECKDB.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • opc.three (4/15/2013)


    Given that this was a 2000 DB and UPDATEUSAGE had not been run, I have to ask whether you have ever run CHECKDB WITH DATA_PURITY. If you have not, there is a blog post by Paul Randal explaining how to check; you might want to do that the next time you run CHECKDB.

    http://www.sqlskills.com/blogs/paul/checkdb-from-every-angle-how-to-tell-if-data-purity-checks-will-be-run/

    If the OP is not sure, CHECKDB can be run with the DATA_PURITY clause; if the results come back clean, it will run with data purity checks forevermore.

    ---------------------------------------------------------------------
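
    Per the linked post, you can check whether data-purity checks will run automatically using the (undocumented) DBCC DBINFO command; the database name below is a placeholder:

    ```sql
    -- Dump the database's internal info fields as a result set.
    DBCC DBINFO (N'YourDatabase') WITH TABLERESULTS, NO_INFOMSGS;
    -- Look for the row where Field = 'dbi_dbccFlags':
    -- VALUE = 2 means DATA_PURITY checks run as part of regular CHECKDB;
    -- VALUE = 0 means they do not, until a clean WITH DATA_PURITY run.
    ```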

  • That is great information. I will do some digging and look into running that process this weekend.

    Thanks for all the help,

    Dane

  • So I ran:

    DBCC CHECKDB() WITH DATA_PURITY

    On a recently restored test DB, and it ran with no errors. Should I go ahead and schedule a production run? It took about 2-3 times longer to run than the typical CHECKDB (I didn't notice any blocking or resource contention), but I can't say if that is because of the DATA_PURITY check or because of the server/usage difference.

    I reread Paul's post a few times, but I'm not sure exactly what the gains are of now running it on the prod DB... if I check it on a restore and it comes back clean, 2005+ won't allow the insertion of bad data, correct?

    Would it still be worth running it on prod so that future checks would automatically check for data purity?

    Thanks again for all the help,

    Dane

  • dkschill (4/17/2013)


    So I ran:

    DBCC CHECKDB() WITH DATA_PURITY

    On a recently restored test DB, and it ran with no errors. Should I go ahead and schedule a production run? It took about 2-3 times longer to run than the typical CHECKDB (I didn't notice any blocking or resource contention), but I can't say if that is because of the DATA_PURITY check or because of the server/usage difference.

    I reread Paul's post a few times, but I'm not sure exactly what the gains are of now running it on the prod DB... if I check it on a restore and it comes back clean, 2005+ won't allow the insertion of bad data, correct?

    Would it still be worth running it on prod so that future checks would automatically check for data purity?

    Thanks again for all the help,

    Dane

    Probably down to server spec differences.

    Definitely run with DATA_PURITY against the actual prod database.

    ---------------------------------------------------------------------

  • Definitely run it in prod. Chances are good that if it ran clean on the restored copy it will run clean in prod. CHECKDB is not going to make modifications to your database by default, but of course run it during low usage per the usual approach, since it is a resource hog.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato
