The Gambler

  • We took over a company some years ago with 100 SQL Server 2000 databases. CHECKDB had never been run. I found two that had minor corruption that would not have allowed them to upgrade to 2005. I got one of them fixed; the other I could not. It wasn't a serious issue in Microsoft's eyes, and it was going away anyway, so I didn't bother. I got CHECKDB jobs scheduled and, lo and behold, within a few months one failed. I got it fixed, but the point is that it happened.

  • MiguelSQL (3/14/2013)

    How can I create a test database that has corruption, so I can test this out?

    Thanks

    Miguel

    Miguel,

    Try one of the corrupt sample databases Paul Randal uses in his DBCC demos, available at:

    http://www.sqlskills.com/pastconferences.asp

    😀

    "Technology is a weird thing. It brings you great gifts with one hand, and it stabs you in the back with the other. ...:-D"

  • Markus (3/14/2013)


    We took over a company some years ago with 100 SQL Server 2000 databases. CHECKDB had never been run. I found two that had minor corruption that would not have allowed them to upgrade to 2005. I got one of them fixed; the other I could not. It wasn't a serious issue in Microsoft's eyes, and it was going away anyway, so I didn't bother. I got CHECKDB jobs scheduled and, lo and behold, within a few months one failed. I got it fixed, but the point is that it happened.

    An ounce of prevention is worth a pound of cure, as my grandmother used to say. Expect the best, prepare for the worst. Also, employ jobs or stored procedures that check the SQL Server error logs daily, like clockwork! If an integrity problem is developing, you are likely to see it start to surface there first. 😀
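
    A minimal sketch of that kind of daily log scan, assuming the usual parameter positions of the undocumented xp_readerrorlog procedure (log number, log type, search string) hold on your build:

        -- Scan the current SQL Server error log (log 0) for the severe
        -- IO/corruption errors 823, 824 and 825. xp_readerrorlog is
        -- undocumented, so verify the parameter positions on your version.
        EXEC master.dbo.xp_readerrorlog 0, 1, N'Error: 823';
        EXEC master.dbo.xp_readerrorlog 0, 1, N'Error: 824';
        EXEC master.dbo.xp_readerrorlog 0, 1, N'Error: 825';

    Scheduled as an Agent job step, any rows returned can be turned into an alert.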

    "Technology is a weird thing. It brings you great gifts with one hand, and it stabs you in the back with the other. ...:-D"

  • I run DBCC CHECKDB daily on all databases when I can, or at least weekly if the maintenance window is too small to do it daily. I have a stored procedure that performs the checks and sends an email to the DBAs if it finds an error, and I set it up as a scheduled job on every server. I always run DBCC CHECKDB on the system databases every day; they are small but critical, so there is no reason not to check them daily.
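
    A minimal sketch of that kind of check-and-email job step; the Database Mail profile name and recipient address are placeholders:

        -- Run DBCC CHECKDB against every online database and collect failures.
        DECLARE @db sysname, @sql nvarchar(400), @failed nvarchar(max) = N'';

        DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
            SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';

        OPEN dbs;
        FETCH NEXT FROM dbs INTO @db;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = N'DBCC CHECKDB(' + QUOTENAME(@db) + N') WITH NO_INFOMSGS;';
            BEGIN TRY
                EXEC (@sql);
            END TRY
            BEGIN CATCH
                -- CHECKDB raises an error when it finds corruption; record it.
                SET @failed += @db + N': ' + ERROR_MESSAGE() + NCHAR(13) + NCHAR(10);
            END CATCH;
            FETCH NEXT FROM dbs INTO @db;
        END;
        CLOSE dbs; DEALLOCATE dbs;

        -- Mail the DBA team only when something failed.
        IF @failed <> N''
            EXEC msdb.dbo.sp_send_dbmail
                @profile_name = N'DBA Mail',              -- placeholder profile
                @recipients   = N'dba-team@example.com',  -- placeholder address
                @subject      = N'DBCC CHECKDB failure',
                @body         = @failed;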

    While it is not that common to get errors, I have seen it a number of times. We had an installation of 300+ servers with thousands of databases, and got actual database corruption about once every year or so.

    If you do get errors, it's a sign that you have some kind of hardware issue that you need to address, so ignoring it is a bad move. When you look in the system event log and see thousands of IO errors reported, it's a really bad feeling.

    When DBCC CHECKDB takes a long time to complete, it's an indication that the IO throughput of the system is not really adequate for the job. When I set up a new system I do IO testing with SQLIO, and then run DBCC CHECKDB on any existing database that I am moving to the new server. If DBCC CHECKDB runs slower than it did on the old system, or even not much faster, I consider that a show-stopper and halt the conversion until the IO issues are resolved.

  • Hi all,

    @steve-2:

    You can run physical only, which does some checks, but not all of them.

    Yes, and we do that on the production server.
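
    For reference, the lighter production-window variant looks something like this (the database name is a placeholder):

        -- Page-level and checksum checks only; the logical checks are skipped.
        DBCC CHECKDB (N'YourDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;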

    The only solution is to really get a good second box and throw some horsepower on it. I'd try to run it every week.

    Yes, agreed, and we have done that. The only problem is that a restore (from disk, not from tape) takes 3 days, and DBCC CHECKDB takes another 2 days.

    That leaves very little time for other use of the box.

    My manager thinks that's a little expensive. :-)

    So we restore about once a month, and run DBCC CHECKDB most of the time.

    The other thing, which is a "cheap" check but only semi-safe in my view, is to run a SELECT * against all tables; if they all complete, at least you know all of your data is accessible.

    That would only check the clustered indexes, and it would also cost a lot of disk IO and CPU. I'm not sure whether it would upset the cache too much.
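
    A rough version of that read-everything check; the INDEX(0) hint forces a scan of the clustered index or heap so the optimizer cannot satisfy the count from a smaller nonclustered index, though off-row LOB pages still go unread:

        -- Build and run one COUNT_BIG(*) per user table; any unreadable page
        -- surfaces as an 823/824 error. This is a partial check only.
        DECLARE @sql nvarchar(max) = N'';

        SELECT @sql += N'SELECT COUNT_BIG(*) FROM '
                     + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
                     + N' WITH (INDEX(0));' + NCHAR(13) + NCHAR(10)
        FROM sys.tables AS t
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

        EXEC (@sql);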

    @andrew: Thank you for the links to SQLskills. Here, they are a well-known, high-quality source of information. I've almost finished watching the 40+ hours of video. It has taken two years, because I only have an hour or so every week, but it has been time well spent.

    @TravisDBA: Thank you for the link. It is further spelled out in the videos, but it is a nice, crisp link.

    @michael-2 Valentine Jones:

    sends an email to DBAs if it finds an error

    How do you check that you get the email?

    I mean: are you sure that you receive all of the emails?
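
    One way to close that loop is to check Database Mail's own bookkeeping in msdb rather than trusting the inbox, for example:

        -- Alert mails from the last day that did not reach 'sent' status.
        SELECT mailitem_id, recipients, subject, send_request_date, sent_status
        FROM msdb.dbo.sysmail_allitems
        WHERE send_request_date >= DATEADD(DAY, -1, SYSDATETIME())
          AND sent_status <> 'sent';          -- 'unsent', 'retrying' or 'failed'

        -- Errors logged by Database Mail over the same period.
        SELECT log_date, description
        FROM msdb.dbo.sysmail_event_log
        WHERE log_date >= DATEADD(DAY, -1, SYSDATETIME())
          AND event_type = 'error';

    A daily "all clear" heartbeat mail is another common safeguard, since it makes silence itself suspicious.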

    I do IO testing with SQLIO

    We did that too, and it took a long time for the SAN vendor (HP) to realise that they were not delivering the promised IO with their recommended setup.

    "Thin Provisioning" I think they called it. We currently have 9 GB/s. It sure helps on a busy system.

    We too use Ola Hallengren's stored procedure DatabaseIntegrityCheck.

    It is better than anything else I've seen so far.
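
    A typical call, using the parameter names the procedure documents (worth checking against the version actually installed):

        -- Full CHECKDB of the user databases, results logged to dbo.CommandLog.
        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases     = 'USER_DATABASES',
            @CheckCommands = 'CHECKDB',
            @PhysicalOnly  = 'N',   -- switch to 'Y' for the lighter variant
            @LogToTable    = 'Y';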

    Best regards,

    Henrik Staun Poulsen

    Stovi Software

  • henrik staun poulsen (3/15/2013)

    The only solution is to really get a good second box and throw some horsepower on it. I'd try to run it every week.

    Yes, agreed, and we have done that. The only problem is that a restore (from disk, not from tape) takes 3 days, and DBCC CHECKDB takes another 2 days.

    That leaves very little time for other use of the box.

    My manager thinks that's a little expensive. :-)

    Best regards,

    Henrik Staun Poulsen

    Stovi Software

    Henrik

    Just a thought after reading this again: have you considered standing up a cloud server for the express purpose of running DBCC CHECKDB? Stand up a cloud server with the resources needed to get through the check in a few hours and let it rip; when it's done, take it down. It might get the job done in a far more reasonable time, and it shouldn't cost so much that your manager would worry about the extra dollars.

    Might be worth a shot!
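
    A rough sketch of that offload pattern, with placeholder names and paths:

        -- On the secondary (or cloud) server: restore the latest full backup,
        -- run the full check there, then drop the disposable copy.
        RESTORE DATABASE BigDb_Check
            FROM DISK = N'\\backupshare\BigDb\BigDb_full.bak'
            WITH MOVE N'BigDb_Data' TO N'D:\Data\BigDb_Check.mdf',
                 MOVE N'BigDb_Log'  TO N'L:\Log\BigDb_Check.ldf',
                 STATS = 5;

        DBCC CHECKDB (N'BigDb_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS;

        DROP DATABASE BigDb_Check;

    One caveat: a clean check on the restored copy only proves the backup is free of corruption as of backup time, so the live database still deserves an occasional check at the source.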

    Miles...

    Not all gray hairs are Dinosaurs!

  • Good idea, Miles.

    Who knows, it might even turn out to be a more cost effective and better performing platform than what Henrik has at the moment.

    Henrik, 3 days to restore from disk sounds like "very thin provisioning". How many terabytes is your database?

    You should try solid-state disks: more expensive to buy, but the cost per IO is much lower than for spindle disks.

  • Hi Miles, Howard,

    A very good idea that I'll look at on Monday.

    Last time I looked there was a limit of 15 TB, but that was quite some time ago.

    I'm not sure how happy our network guys will be if we send 35 TB to the cloud every week.

    It is indeed "very thin provisioning": only 486 GB per hour, or about 136 MB/s.

    Unfortunately, we could only get "shared storage" for the test system.

    On the production system we get an order of magnitude more, I believe.

    Best regards,

    Henrik
