April 15, 2009 at 6:42 pm
I saw no SQL Agent Forum in the SQL Server 2005 section and since I am using 2005 I wasn't certain what particular forum was best for this posting.
I'm setting up a fair number of Index Rebuild jobs where each table is its own job. Currently we don't have any solid info on exactly how often an index will need rebuilding/reorganizing, so I'm setting up each rebuild as its own job and will then call each table-specific job based on how fragmented its index is.
My question is this: when you have multiple Index Rebuild jobs and you are running a multiprocessor SQL box, is it better to run them serially, letting each job complete before the next begins, or to let several run concurrently?
I realize there may not be a one-answer-fits-all for this, and that a more correct answer is 'this is what you need to look at to decide in your environment'; I'm interested in anything like that as well.
Thanks
Kindest Regards,
Just say No to Facebook!
April 20, 2009 at 10:05 am
Just curious: is the question badly worded, is it a dumb question, or is this really something no one has any input/thoughts on?
Is there a better forum for this item?
Kindest Regards,
Just say No to Facebook!
April 21, 2009 at 2:40 am
There is no inherent need to run these jobs in a single stream or in multiple streams. You just need to be aware of the impact running multiple streams may have on your user queries.
If you run the rebuilds at a quiet time, you should be able to run more streams than when the server is busy. I think you will need to experiment and see what is best for your installation.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
April 21, 2009 at 9:53 am
EdVassie (4/21/2009)
There is no inherent need to run these jobs in a single stream or in multiple streams. You just need to be aware of the impact running multiple streams may have on your user queries. If you run the rebuilds at a quiet time, you should be able to run more streams than when the server is busy. I think you will need to experiment and see what is best for your installation.
Thanks for the input Ed
Kindest Regards,
Just say No to Facebook!
April 21, 2009 at 10:34 am
Be aware that on SQL Server 2005 Standard, index rebuilds will only utilize a single processor. They will never use a parallel plan.
SQL Server 2005 Enterprise Edition can (and will, unless otherwise specified) use a parallel plan and will utilize up to the maximum number of processors available.
If you try to run multiple index rebuild jobs, and you are running on Enterprise Edition - you could easily cause the processes to take much longer to complete because each process will be trying to run using a parallel plan and could block.
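A sketch of how one might "otherwise specify" the degree of parallelism on a 2005 Enterprise rebuild so several jobs can coexist (the index and table names here are placeholders, not anything from this thread):

```sql
-- Cap a single rebuild at one scheduler so simultaneous rebuild jobs
-- don't all fight for a parallel plan.
-- [IX_MyIndex] and [dbo].[MyTable] are placeholder names.
ALTER INDEX [IX_MyIndex] ON [dbo].[MyTable]
REBUILD WITH (MAXDOP = 1);
```

Note that Standard Edition performs index operations serially regardless, so the MAXDOP option only really matters on Enterprise.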
One other thing, the general consensus is that tables with less than 1000 pages don't need to be rebuilt or reorganized. For those tables that have more than 1000 pages, the recommendation is to reorganize the index if it is greater than 10% fragmented and to rebuild the index if it is greater than 30% fragmented. These are general guidelines only and should be validated and tested on your system.
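The 1000-page / 10% / 30% guideline above can be turned into a query against sys.dm_db_index_physical_stats (available in 2005); this is a rough sketch run against the current database, not a production script:

```sql
-- List indexes over 1000 pages and suggest an action per the
-- 10% reorganize / 30% rebuild guideline described above.
SELECT  OBJECT_NAME(ps.object_id)          AS table_name,
        i.name                             AS index_name,
        ps.page_count,
        ps.avg_fragmentation_in_percent,
        CASE WHEN ps.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
             WHEN ps.avg_fragmentation_in_percent > 10 THEN 'REORGANIZE'
             ELSE 'NONE' END               AS suggested_action
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN    sys.indexes i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE   ps.page_count > 1000
ORDER BY ps.avg_fragmentation_in_percent DESC;
```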
What I do is monitor how often an index is selected to be rebuilt/reorganized. If it is getting selected on a daily basis, I may decrease the fill factor and see if I can't get the index to not fragment as quickly.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
April 21, 2009 at 10:55 am
Jeffrey Williams (4/21/2009)
Be aware that on SQL Server 2005 Standard, index rebuilds will only utilize a single processor. They will never use a parallel plan. SQL Server 2005 Enterprise Edition can (and will, unless otherwise specified) use a parallel plan and will utilize up to the maximum number of processors available.
If you try to run multiple index rebuild jobs, and you are running on Enterprise Edition - you could easily cause the processes to take much longer to complete because each process will be trying to run using a parallel plan and could block.
One other thing, the general consensus is that tables with less than 1000 pages don't need to be rebuilt or reorganized. For those tables that have more than 1000 pages, the recommendation is to reorganize the index if it is greater than 10% fragmented and to rebuild the index if it is greater than 30% fragmented. These are general guidelines only and should be validated and tested on your system.
What I do is monitor how often an index is selected to be rebuilt/reorganized. If it is getting selected on a daily basis, I may decrease the fill factor and see if I can't get the index to not fragment as quickly.
Thanks Jeff for the input.
We are using Enterprise Edition with 8 processors. I have to admit that I'm surprised to read that simultaneous Index Rebuilds could block each other when they are dealing with indexes on different tables. It's not like two SELECTs vying for the same set of data in one or more tables at the same time.
I've seen the 10% Reorganize / 30% Rebuild when pages > 1000 formula listed in many posts and internet articles. We are using a less-than-optimally (or even decently) designed DB, and we are limited in what we can do, as changes could affect the support we get from the software vendors whose product we use. So tinkering with the Fill Factor is unfortunately a no-no. Even though it is our DB, we end up providing the software vendor with regular backups of it so they can do support work for us, as well as some customization work in reporting, so any tweaks I'd make to core objects in the DB would eventually be seen. We can add our own custom objects like VIEWs & SPs but not alter what is stock or core to the product.
Thanks Again
Kindest Regards,
Just say No to Facebook!
April 21, 2009 at 11:07 am
The index rebuilds won't block on the table/index resource - they block on CPU resources. You'll start to see a lot of CXPACKET waits and they will increase a lot if you are running multiple reindex jobs at the same time.
If you are only running two reindex jobs at one time - probably not much of an issue and you really wouldn't notice the difference. Running 8 simultaneous reindex jobs and you will definitely see a difference.
Not to mention the impact on all of the other processes.
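Those CXPACKET waits can be checked directly; a quick sketch against the wait-stats DMV (numbers accumulate from the last restart or the last time the stats were cleared):

```sql
-- How much CXPACKET (parallelism) waiting has accumulated so far.
-- Run before and during simultaneous rebuild jobs to compare.
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type = 'CXPACKET';
```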
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
April 21, 2009 at 11:21 am
Jeffrey Williams (4/21/2009)
The index rebuilds won't block on the table/index resource - they block on CPU resources. You'll start to see a lot of CXPACKET waits and they will increase a lot if you are running multiple reindex jobs at the same time. If you are only running two reindex jobs at one time - probably not much of an issue and you really wouldn't notice the difference. Running 8 simultaneous reindex jobs and you will definitely see a difference.
Not to mention the impact on all of the other processes.
Ahhhhh-hah. Yes, I have already had to deal with the CXPACKET wait type, although it was not related to index management or simultaneous job runs but to how the 3rd party software we use (our primary or main software app for our business) works.
Thanks for the tip
Kindest Regards,
Just say No to Facebook!
April 22, 2009 at 2:56 am
If you have multiple files in your filegroup, then using fragmentation stats to decide if an index needs maintenance will not work.
Fragmentation stats work well with a single-file filegroup, but with multiple files you will always get a large value for fragmentation, especially after you have done an index rebuild! This is because with multiple files you will end up with your index and tables dispersed across all files to help improve query performance, but this dispersal will show as a high amount of fragmentation.
An alternative approach is to use the rowmodctr from sysindexes along with the STATS_DATE() for the index. If STATS_DATE() is more recent than the last index rebuild, pro-rate the rowmodctr value back to the last index rebuild time. You can then work out how many rows may have changed, and trigger the index rebuild if this goes above a given limit. This approach is far from perfect, but it does give a way to cope with multiple files in a filegroup.
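A rough sketch of that rowmodctr idea (sysindexes is the 2005 compatibility view; rowmodctr is only approximate from 2005 onward, so treat the numbers as a hint, not a measurement):

```sql
-- Rows modified since each index's statistics were last updated,
-- largest first. A threshold on rowmodctr can drive the rebuild jobs.
SELECT  OBJECT_NAME(si.id)           AS table_name,
        si.name                      AS index_name,
        si.rowmodctr,
        STATS_DATE(si.id, si.indid)  AS stats_last_updated
FROM    sysindexes si
WHERE   si.indid BETWEEN 1 AND 250   -- skip heaps (0) and text/image (255)
  AND   OBJECTPROPERTY(si.id, 'IsUserTable') = 1
ORDER BY si.rowmodctr DESC;
```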
Whatever approach you use, I agree with previous posters that if you rebuild an index very often you should lower the fill percentage to reduce the need for rebuilds. However, a lower fill percentage means a larger index and potentially increased query times. If you find your formula for fill percentage goes below 90% it may be better to rebuild that index frequently enough to keep the optimum fill percentage 90% or above. You will need to see what is best for your installation.
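Lowering the fill percentage as described is just an option on the rebuild; a minimal sketch with placeholder names:

```sql
-- Rebuild leaving ~10% free space on each leaf page to slow
-- future fragmentation. Names are placeholders.
ALTER INDEX [IX_MyIndex] ON [dbo].[MyTable]
REBUILD WITH (FILLFACTOR = 90);
```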
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
April 22, 2009 at 2:35 pm
EdVassie (4/22/2009)
If you have multiple files in your filegroup, then using fragmentation stats to decide if an index needs maintenance will not work. Fragmentation stats work well with a single-file filegroup, but with multiple files you will always get a large value for fragmentation, especially after you have done an index rebuild! This is because with multiple files you will end up with your index and tables dispersed across all files to help improve query performance, but this dispersal will show as a high amount of fragmentation.
An alternative approach is to use the rowmodctr from sysindexes along with the STATS_DATE() for the index. If STATS_DATE() is more recent than the last index rebuilt, pro-rata the rowmodctr value back to the last index rebuild time. You can then work out how many rows may have changed, and trigger the index rebuild if this goes above a given limit. This approach is far from perfect, but does give a way to cope with multiple files in a filegroup.
Whatever approach you use, I agree with previous posters that if you rebuild an index very often you should lower the fill percentage to reduce the need for rebuilds. However, a lower fill percentage means a larger index and potentially increased query times. If you find your formula for fill percentage goes below 90% it may be better to rebuild that index frequently enough to keep the optimum fill percentage 90% or above. You will need to see what is best for your installation.
This is some good info to have; thanks for replying. In our case we are not using multiple files, although we have discussed the idea of moving some indexes off the primary file and onto their own file, which would be stored on a separate drive from the current data file. The problem we run into is the possible expense incurred from the vendor of this software, who can deny support or charge a lot extra on a DB their product uses that's been customized without an OK from the vendor. It may be our DB but it's their product and design, so we have to find a happy medium.
Fill Factor: Currently all of the tables with indexes large enough to warrant routine index monitoring are set with a FillFactor of 90, with the exception of one index in each table, which is set to a FillFactor of 0 (which of course defaults to 100). I didn't design these; I'm just maintaining them, so if any of these metrics sound odd or bad, keep in mind I'm not the creator, just the DB-sitter.
Kindest Regards,
Just say No to Facebook!
April 23, 2009 at 3:02 am
I think you need to get some clarity from your vendor on what you are allowed to do. The vendor may well be concerned if you want to change the schema they supplied. The vendor should not be concerned if you do not change their schema but apply normal database maintenance and tuning to their system.
Getting this clarity is something you can raise with your management. If you can give examples where the default setup is harming your SLAs then you have a lever management understands. If your SLAs are not impacted then why are you worried about performance?
Normal database tuning in just about any DBMS would include items such as:
* Spreading the data over multiple disks. In SQL Server this means moving tables and indexes to a different file group, and adding multiple files to a file group.
* Adding additional indexes and indexed views to improve query performance.
* Defragmenting indexes. In SQL Server this means index rebuild or reorganisation.
No vendor should be concerned if you do any of the above. You may have to take some risk (e.g. a fix from the vendor may recreate a table back onto the Primary filegroup and you have to move it to where you really want it), but because you are not changing the schema design this should not affect vendor support.
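The first item above (moving an index to its own filegroup) can be done without touching the index definition; a sketch, assuming a filegroup called [INDEXES] already exists and using placeholder names:

```sql
-- Recreate the index in place on a different filegroup.
-- DROP_EXISTING rebuilds rather than drop-then-create, so the
-- definition (key columns, options) stays exactly as the vendor shipped it.
CREATE NONCLUSTERED INDEX [IX_MyIndex]
ON [dbo].[MyTable] ([SomeColumn])
WITH (DROP_EXISTING = ON)
ON [INDEXES];
```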
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
May 7, 2009 at 11:51 am
EdVassie (4/23/2009)
I think you need to get some clarity from your vendor on what you are allowed to do. The vendor may well be concerned if you want to change the schema they supplied. The vendor should not be concerned if you do not change their schema but apply normal database maintenance and tuning to their system. Getting this clarity is something you can raise with your management. If you can give examples where the default setup is harming your SLAs then you have a lever management understands. If your SLAs are not impacted then why are you worried about performance?
Normal database tuning in just about any DBMS would include items such as:
* Spreading the data over multiple disks. In SQL Server this means moving tables and indexes to a different file group, and adding multiple files to a file group.
* Adding additional indexes and indexed views to improve query performance.
* Defragmenting indexes. In SQL Server this means index rebuild or reorganisation.
No vendor should be concerned if you do any of the above. You may have to take some risk (e.g. a fix from the vendor may recreate a table back onto the Primary filegroup and you have to move it to where you really want it), but because you are not changing the schema design this should not affect vendor support.
Thanks for the suggestions, but unfortunately even routine maintenance actions like the examples you've given are on the edge of what is and is not allowed. The DB itself and the program and processes that access/use it were created by programmers with little to no (with an emphasis on the NO part) formal RDBMS training beyond reading SQL For Dummies, and the vendor has no formal DBA on staff nor consults with one. So actions that would normally be considered allowable routine maintenance could get thrown back at us by the vendor, if we did them without approval first, the next time we had some kind of problem.
It would happen something like this:
We would set up additional files to store certain items on, like large/volatile indexes, and perhaps add some indexes that are missing and should have been added, and everything would be fine until a few weeks later, when some issue would arise that may or may not have anything to do with the additional files and/or indexes. The vendor, after not being able to determine the problem (even with a backup of our DB to test), would point to them as the cause, throwing the responsibility to resolve the issue back onto us. I've seen this done before (to another client using this same product), so I'm hesitant to do anything that modifies any stock object.
Adding new views or our own custom tables is fine so long as they are passive in how they work, so that there is nothing they could do that would affect any stock object. Adding an index to a stock table without prior approval, even an index that clearly should exist, could be thrown back at us by the vendor. Since we don't have access to the source code of the application, we have no way to disprove a claim that a custom index we added is what's affecting performance. Because the vendor has too many non-best-practices ways of handling the data, doing things like SELECT * INTO, we have to tread cautiously when doing anything.
That all being said, I am curious about your thoughts on what metric, size, or formula (if any) is good for determining when a large index would be best placed on a separate file/drive?
Thanks for the feedback.
Kindest Regards,
Just say No to Facebook!
May 8, 2009 at 2:39 am
It looks like you need to live within the constraints of what a VERY poor vendor is putting on you.
IMHO you should raise these issues with your management, as it appears they are affecting what your business is able to do. If management say they don't care, then maybe neither should you. Just tell management you need to get the vendor onsite when things are slow, because your warranty would be voided if you changed anything. Getting the vendor to provide a decent service is an issue for your management to resolve, not for you.
Eventually your management may get the vendor to put their house in order, or you move to a different vendor, or your customers move to a different supplier.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
May 8, 2009 at 10:06 am
EdVassie (5/8/2009)
It looks like you need to live within the constraints of what a VERY poor vendor is putting on you. IMHO you should raise these issues with your management, as it appears they are affecting what your business is able to do. If management say they don't care, then maybe neither should you. Just tell management you need to get the vendor onsite when things are slow, because your warranty would be voided if you changed anything. Getting the vendor to provide a decent service is an issue for your management to resolve, not for you.
Eventually your management may get the vendor to put their house in order, or you move to a different vendor, or your customers move to a different supplier.
Thanks for the input
Kindest Regards,
Just say No to Facebook!