January 3, 2010 at 5:02 pm
Hi,
I am not sure if you have solved your problem yet, but here's what I think about it.
Look at debugging on a production database from a business point of view. Even if you are the only person accessing the database this way, your queries may disrupt production processes, which then have to deal with locking and possibly memory pressure, because you load data into memory that is not required by the insert processes. I think this alone is a good reason to work on a copy rather than the production database. If possible, restore the copy on a separate server to avoid any interference with production.
Regarding the second question, the first idea that comes to mind is: can you somehow identify parts of the tables that do not change? For example, campaign information for campaigns that have finished and receive no updates (or not many). You could split the tables to make the individual indexes smaller, and then create a partitioned view on top of these tables. The view can still be updatable, so from the client applications' point of view nothing would change.
The indexes on the mostly read-only data would not have to be updated as often, and even when they are, the operation would be faster.
If you cannot identify read-only data, it might still make sense to divide the table into smaller ones, using a date as the partitioning key, for example.
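A minimal sketch of such a local partitioned view with a date-based split; the table, column, and constraint names (CampaignStats_Archive, CampaignStats_Current, StatDate) are only illustrative, not taken from the original poster's schema:

-- Older, mostly read-only rows in one table, current rows in another.
-- Non-overlapping CHECK constraints on the partitioning column (which is
-- part of the primary key) let SQL Server route DML through the view and
-- skip the member table it does not need at query time.
CREATE TABLE dbo.CampaignStats_Archive
(
    CampaignId INT      NOT NULL,
    StatDate   DATETIME NOT NULL CHECK (StatDate <  '20100101'),
    Clicks     INT      NOT NULL,
    CONSTRAINT PK_CampaignStats_Archive PRIMARY KEY (CampaignId, StatDate)
);

CREATE TABLE dbo.CampaignStats_Current
(
    CampaignId INT      NOT NULL,
    StatDate   DATETIME NOT NULL CHECK (StatDate >= '20100101'),
    Clicks     INT      NOT NULL,
    CONSTRAINT PK_CampaignStats_Current PRIMARY KEY (CampaignId, StatDate)
);
GO

-- Client applications keep using the single view name.
CREATE VIEW dbo.CampaignStats
AS
    SELECT CampaignId, StatDate, Clicks FROM dbo.CampaignStats_Archive
    UNION ALL
    SELECT CampaignId, StatDate, Clicks FROM dbo.CampaignStats_Current;
GO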
Regards
Piotr
...and your only reply is slàinte mhath
January 4, 2010 at 3:47 am
How many times do you have to debug using these indexes?
An easy (but ugly) solution, if it only happens very rarely: you could create the indexes on the database when needed and drop them once the problem is solved.
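A rough sketch of that approach; the table, column, and index names here are hypothetical:

-- Create the debugging index only for the investigation...
CREATE NONCLUSTERED INDEX IX_Campaigns_Debug
    ON dbo.Campaigns (Status, StartDate)
    WITH (ONLINE = ON);   -- ONLINE = ON needs Enterprise Edition; omit it otherwise

-- ... run the debugging queries, then remove the index again.
DROP INDEX IX_Campaigns_Debug ON dbo.Campaigns;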
Another option would be to use log shipping: when you need to debug, take a backup of the log-shipped database, restore it in a dev environment, and create the indexes there. You may need some extra time, but your production system wouldn't carry much extra load.
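One way to get that copy onto the dev server is to restore from the same backup files that feed log shipping. A sketch, assuming those files are reachable from the dev box; the database name, paths, and logical file names are all illustrative:

-- Restore the full backup on the dev server, then roll forward the shipped logs.
RESTORE DATABASE CampaignDB_Debug
    FROM DISK = N'\\backupshare\CampaignDB_full.bak'
    WITH MOVE N'CampaignDB'     TO N'D:\Data\CampaignDB_Debug.mdf',
         MOVE N'CampaignDB_log' TO N'L:\Log\CampaignDB_Debug.ldf',
         NORECOVERY;

RESTORE LOG CampaignDB_Debug
    FROM DISK = N'\\backupshare\CampaignDB_20100104.trn'
    WITH RECOVERY;   -- bring the copy online so the debug indexes can be created here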
Josep