
DBCC CHECKDB - how to use it in a maintenance plan?
Posted Thursday, October 16, 2008 3:37 PM
Here you go:

http://blogs.msdn.com/sqlserverstorageengine/archive/2007/03/22/easy-way-to-create-a-corrupt-database-for-testing.aspx


Jeffrey Williams
Problems are opportunities brilliantly disguised as insurmountable obstacles.

How to post questions to get better answers faster
Managing Transaction Logs
Post #587344
Posted Thursday, October 16, 2008 3:42 PM


Do you know how I can go about deliberately corrupting a database? I'd like to test some different things, but I'm not sure how to go about it.


Sure. First though, a disclaimer for anyone else who wants to try this. Do not do this to a DB you care about. Do not do this to a database that you're not willing to discard. Do not ever consider doing it to a production database.

Stop SQL. Open the mdf file in a hex editor. Go at least 800 KB into the database (to avoid corrupting the system tables) and then randomly change values. Save the file and exit, then restart SQL.

Or, if you're feeling adventurous, you can use SQL Server as a hex editor and get it to corrupt one of its own databases.
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2007/03/10/how-to-create-a-corrupt-database-using-bulk-insert-update-and-bcp-sql-server-as-a-hex-editor.aspx



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #587347
Posted Thursday, October 16, 2008 3:46 PM


Leo (10/16/2008)

1. How do we know the database is corrupt before the backup at 22:00? The job is already set up and will back up the corrupt database anyway. Is there any way to stop that, i.e. not to back it up?


You could write the results of CHECKDB to a table and have the backup job check that.
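For example, a sketch along these lines (the table, database, and column names are made up, and TRY/CATCH needs SQL 2005 or later):

```sql
-- Illustrative logging table; names are made up.
CREATE TABLE dbo.IntegrityCheckLog
(
    DatabaseName sysname        NOT NULL,
    CheckTime    datetime       NOT NULL DEFAULT GETDATE(),
    Passed       bit            NOT NULL,
    ErrorMessage nvarchar(2048) NULL
);
GO
-- DBCC CHECKDB raises errors when it finds corruption, so TRY/CATCH
-- (SQL 2005+) can record a pass/fail row for the backup job to inspect.
BEGIN TRY
    DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS;
    INSERT dbo.IntegrityCheckLog (DatabaseName, Passed)
    VALUES (N'MyDatabase', 1);
END TRY
BEGIN CATCH
    INSERT dbo.IntegrityCheckLog (DatabaseName, Passed, ErrorMessage)
    VALUES (N'MyDatabase', 0, ERROR_MESSAGE());
END CATCH;
GO
-- First step of the backup job: bail out if the most recent check failed.
IF EXISTS (SELECT 1 FROM dbo.IntegrityCheckLog
           WHERE DatabaseName = N'MyDatabase' AND Passed = 0
             AND CheckTime > DATEADD(HOUR, -24, GETDATE()))
    RAISERROR (N'Integrity check failed; skipping backup.', 16, 1);
```

Raising the error fails the job step, which stops the backup from running against a database already known to be corrupt.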

2. Which command can fix and repair the database if the 'DBCC CHECKDB' check fails?

Don't repair. If all the corruption is within nonclustered indexes, you rebuild (or drop and recreate) those indexes.
If the corruption is elsewhere, you restore the last clean backup and then roll the transaction log backups forward.

Repair is a last-resort when there's no good backup and no other way to get data back. It doesn't always work.
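The restore route looks roughly like this (the file names and the number of log backups are illustrative):

```sql
-- Restore the last clean full backup, but don't recover yet,
-- so log backups can still be applied on top of it.
RESTORE DATABASE MyDatabase
    FROM DISK = N'D:\Backups\MyDatabase_full.bak'
    WITH NORECOVERY, REPLACE;

-- Roll each subsequent log backup forward, in sequence.
RESTORE LOG MyDatabase
    FROM DISK = N'D:\Backups\MyDatabase_log1.trn'
    WITH NORECOVERY;

-- The last log backup recovers the database and brings it online.
RESTORE LOG MyDatabase
    FROM DISK = N'D:\Backups\MyDatabase_log2.trn'
    WITH RECOVERY;
```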



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #587352
Posted Friday, October 17, 2008 4:31 PM
Hi Everyone,

I bought LiteSpeed Backup and am using it with SQL Server 2000. Interestingly, some of its functions are similar to the SQL 2005 maintenance plans.

However, let me get to the point...

I set up 3 different maintenance plans in LiteSpeed - please analyse them and suggest if I am doing something wrong.

Mon-Fri
1. Check DB Integrity [All Databases]
2. Shrink Database [Except System Databases]
3. Backup Database [All Databases]
4. Clean Up Maintenance Plan
5. Clean Up History

Sat
1. Check DB Integrity [All Databases]
2. Rebuild Index [Except System Databases]
3. Shrink Database [Except System Databases]
4. Backup Database [All Databases]
5. Clean Up Maintenance Plan
6. Clean Up History

Sun
1. Check DB Integrity [All Databases]
2. Reorganise Index [Except System Databases]
3. Shrink Database [Except System Databases]
4. Backup Database [All Databases]
5. Clean Up Maintenance Plan
6. Clean Up History

The other reason I am doing 'Rebuild' and 'Reorganise' on different days is because the index maintenance takes a long time; also:
Reorganise an index when the degree of fragmentation is between 5% and 30%
Rebuild an index when the degree of fragmentation is over 30%
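On SQL 2005 and later, those thresholds can be checked per index with sys.dm_db_index_physical_stats; SQL 2000 only has DBCC SHOWCONTIG. A rough sketch of the check (run in the database you want to examine):

```sql
-- For each index in the current database, suggest an action based on
-- the 5%/30% fragmentation thresholds above. SQL 2005+ only.
SELECT  OBJECT_NAME(s.object_id) AS table_name,
        i.name                   AS index_name,
        s.avg_fragmentation_in_percent,
        CASE
            WHEN s.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN s.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
            ELSE 'leave alone'
        END AS suggested_action
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') s
JOIN    sys.indexes i
          ON i.object_id = s.object_id
         AND i.index_id  = s.index_id
WHERE   s.index_id > 0;   -- skip heaps
```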

Is that right setup?

Thanks.
Leo
Post #588034
Posted Friday, October 17, 2008 4:38 PM
You should not shrink your database on a regular scheduled basis. You should always maintain enough free space in the database to allow for index rebuilds and reorgs - as well as day to day growth.

By shrinking your database every day - you are causing index fragmentation and file fragmentation as the database grows again.


Jeffrey Williams
Problems are opportunities brilliantly disguised as insurmountable obstacles.

How to post questions to get better answers faster
Managing Transaction Logs
Post #588036
Posted Saturday, October 18, 2008 3:08 AM


2. Rebuild Index [Except System Databases]
3. Shrink Database [Except System Databases]


That's a combination I generally refer to as a waste of time. The shrink will cause massive fragmentation of your indexes, easily up to 90%. You're spending time, CPU and I/Os to put the indexes in order (possibly growing the database), and then you're spending more time, CPU and I/Os shrinking the file and shuffling the indexes. Next time any data is added, SQL will spend even more time, CPU and I/Os growing the file so it has space to work.

Also, repeated shrinks and grows can easily cause fragmentation at the file-system level, requiring a disk defragment to fix.

See - http://sqlinthewild.co.za/index.php/2007/09/08/shrinking-databases/



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #588097
Posted Saturday, October 18, 2008 4:05 AM
Hi Jeff and Gila,

Now I understand why I shouldn't shrink the data file; I will take 'Shrink Database' off the maintenance plan. All the information you guys posted is very useful. Thanks.

I have another query: on my production server the transaction log files are very big, about 45GB, though the initial size is 4GB. My question is, shall I shrink the transaction log ONLY, so the log does not grow too much? Shall I do that every night?

I never back up the transaction log even though the database recovery model is FULL. Transaction log backups are not required for my environment at the moment.

If you suggest doing that, should it run 'before' or 'after' my maintenance plan?

*****
BACKUP LOG database_name WITH TRUNCATE_ONLY
DBCC SHRINKFILE (log_file_name, target_size_in_MB)
*****

Sat

<<=== Here ?
1. Check DB Integrity [All Databases]
2. Rebuild Index [Except System Databases]
3. Backup Database [All Databases]
4. Clean Up Maintenance Plan
5. Clean Up History
<<== Here?

Thanks
Post #588109
Posted Saturday, October 18, 2008 4:27 AM


Leo (10/18/2008)


I never back up the transaction log even though the database recovery model is FULL. Transaction log backups are not required for my environment at the moment.


You need transaction log backups in full recovery; if you don't take them, your transaction log will grow without bound, as you're seeing.

If you don't need point-in-time recovery, then switch the database to simple and you won't have to worry about the transaction logs. If you do need point-in-time recovery, then set up regular log backups.

See - http://sqlinthewild.co.za/index.php/2008/07/23/recovery-model-and-transaction-logs/
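Either way it's only a statement or two. For example (the database name, log file's logical name, and backup path are illustrative):

```sql
-- Option 1: no point-in-time recovery needed.
-- Switch to SIMPLE, then shrink the log back to a sensible size once.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

USE MyDatabase;  -- SHRINKFILE runs in the context of the database
DBCC SHRINKFILE (MyDatabase_log, 4096);  -- target size in MB (here 4GB)

-- Option 2: point-in-time recovery IS needed.
-- Stay in FULL and schedule frequent log backups so the log space
-- is reused instead of the file growing.
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';
```

The shrink is a one-off to reclaim the 40+GB already allocated; it should not be scheduled, for the same reasons discussed above for data files.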

If you suggest doing that, should it run 'before' or 'after' my maintenance plan?

*****
BACKUP LOG database_name WITH TRUNCATE_ONLY
DBCC SHRINKFILE (log_file_name, target_size_in_MB)
*****


Neither. Switch to simple recovery if you have no need of log backups. Then you won't have to worry about doing that at all.



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #588112
Posted Sunday, October 19, 2008 8:55 AM
Hi Gila,

Sorry, I'm getting back to you a bit late. I don't fully understand the transaction log yet. Let's say I switch my DB recovery model to 'SIMPLE'; I don't need to worry about the transaction log any more, as you said. But what do I do if the transaction log still gets bigger? For example:
The transaction log's initial size is 4GB [that is okay and I am happy with this size], but during the day, or when I check the database at night, the transaction log is about 40GB. What do I do?


I know you will hate me, but I really want to understand this, which means I want to learn from your experience. Last question...

Inserting or reading records from the database is sometimes very slow; does that have anything to do with the transaction log? How do I solve it?

Thanks again.


Post #588253
Posted Sunday, October 19, 2008 9:06 AM
Leo (10/19/2008)


On your first question, if your database is in SIMPLE recovery model you don't have to worry about BACKING UP the transaction log. The transaction log is still required and will grow to the size needed to manage the largest transaction that will process on your system.

If your transaction log is growing to 40GB (in SIMPLE model), then your transaction log needs to be 40GB to handle the workload. If that is not acceptable, then you need to identify the process(es) that are using that much space in the transaction log and fix those processes.

One example would be an ETL process that is loading millions of rows of data in a single transaction. A process like this could easily cause the transaction log to grow that large.
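Breaking such a load into smaller batches keeps each transaction, and therefore the log, small; in SIMPLE recovery the log space is reused between batches. An illustrative sketch (made-up table and filter; DELETE TOP needs SQL 2005 - on SQL 2000 use SET ROWCOUNT instead):

```sql
-- Purge a large staging table 10,000 rows at a time, so no single
-- transaction forces the log to grow to the size of the whole purge.
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.StagingTable
    WHERE LoadDate < '20081001';
    SET @rows = @@ROWCOUNT;  -- loop until no rows remain to delete
END;
```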

Second question - I would not suspect the transaction log has anything to do with your performance issues. The only way the transaction log would really affect your performance is if the file is on the same disks as the data files and you are seeing I/O contention.


Jeffrey Williams
Problems are opportunities brilliantly disguised as insurmountable obstacles.

How to post questions to get better answers faster
Managing Transaction Logs
Post #588254