How to find the lowest usage of a database

  • Hi Guys

    How do we find the lowest usage of a database?
    I need to find the best time to implement an index in a huge database.

    Thank you
    Cheers

  • You can set up Perfmon to monitor CPU usage for a few days, then look at the plot.
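    If you'd rather sample from inside SQL Server than set up Perfmon, here's a rough sketch. The counter name is standard, but the 10-second window and the idea of scheduling it via an Agent job are just illustrative:

```sql
-- 'Batch Requests/sec' in sys.dm_os_performance_counters is cumulative,
-- so sample it twice and take the difference to get a rate.
-- Run this on a schedule (e.g. a SQL Agent job) and log the results
-- for a few days to see where the quiet windows are.
DECLARE @before BIGINT, @after BIGINT;

SELECT @before = cntr_value
  FROM sys.dm_os_performance_counters
 WHERE counter_name = 'Batch Requests/sec';

WAITFOR DELAY '00:00:10';   -- arbitrary 10-second sample window

SELECT @after = cntr_value
  FROM sys.dm_os_performance_counters
 WHERE counter_name = 'Batch Requests/sec';

SELECT (@after - @before) / 10 AS BatchRequestsPerSec;
```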
    If you have the Enterprise edition of SQL Server, you can create the index with ONLINE = ON. This reduces the impact on other queries running at the same time.
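    A minimal sketch of that Enterprise-only option (table and index names are hypothetical):

```sql
-- ONLINE = ON builds the index without holding long-term blocking locks
-- on the table; short-lived locks are still taken at the start and end
-- of the operation (Enterprise edition only).
CREATE NONCLUSTERED INDEX IX_BigTable_SomeColumn
    ON dbo.BigTable (SomeColumn)
    WITH (ONLINE = ON);
```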

  • WhiteLotus - Tuesday, November 13, 2018 2:42 AM


    If it's "huge", don't you have the "Enterprise Edition" so that you can use the "Online" option (not to mention the ability to use more memory in the system overall)?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden - Tuesday, November 13, 2018 7:40 AM


    I've had problems creating a large index even if ONLINE=ON is set. So it's still best to do it at a quiet time.

  • Thanks All 🙂

  • Jonathan AC Roberts - Tuesday, November 13, 2018 7:43 AM


    What kind of problems, Jonathan?  I know the bad parts of using the Online option (especially when it comes to the log file and that nasty "little" mapping table, etc.), but is there something in particular that you've experienced other than those nuances?  I ask only because even "quiet times" don't mean "exclusive times" where you could actually use the Offline option.

    --Jeff Moden



  • WhiteLotus - Tuesday, November 13, 2018 3:23 PM


    There might be better advice if we knew more, so, with that thought in mind, just a couple of questions, if you don't mind:
    1.  How big is the huge database?
    2.  How big is the index in question?
    3.  Is it a Clustered or Non-Clustered Index?  If Non-Clustered, how many columns and what are the datatypes?  Any INCLUDEs involved?
    4.  What is the Clustered Index of the table the index is on (datatype is my biggest concern)?
    5.  Which edition and release of SQL Server do you have?
    6.  How much RAM is available to SQL Server on the system?

    --Jeff Moden



  • Jeff Moden - Tuesday, November 13, 2018 4:29 PM


    Thanks for the response, Jeff.
    1.  How big is the huge database? 180 GB in size; the table itself has 1.6 billion records. Actually, the main concern is the huge table.
    2.  How big is the index in question? 16 GB.
    3.  Is it a Clustered or Non-Clustered Index? If Non-Clustered, how many columns and what are the datatypes? Any INCLUDEs involved?
    Non-clustered index, only 1 column as the key column; the data type is INT.
    4.  What is the Clustered index of the table the index is on (datatype is my biggest concern)? The data type is also INT.
    5.  Which edition and release of SQL Server do you have? Standard edition, SQL 2008 R2.
    6.  How much RAM is available to SQL Server on the system? 26 GB (max memory setting for SQL).

    Wondering why data type is your biggest concern?

  • Jeff Moden - Tuesday, November 13, 2018 4:23 PM


    It has blocked other queries and transactions on that table while it's being run. I know it's not supposed to, but in my experience it sometimes does.

  • Wondering why data type is your biggest concern?

    A couple of reasons...
    1.  You've underestimated the size the index will be.  Each row of the non-clustered index will be 15 bytes, consisting of a 7-byte row header, 4 bytes for the column being indexed, and 4 bytes for the Clustered Index key that will automatically be added.  That means your index will actually be about 24GB (15 bytes * 1.6 billion rows).
    2.  I've recently (about the last year) been working with indexes and index maintenance a whole lot and was concerned with "Expansive Updates" on this fairly large index, which would immediately and severely fragment your index, not to mention the effects on performance, log file activity, and the low page densities that occur due to the bad page splits that "Expansive Updates" cause, especially if the page splits occur in a "skewed" fashion.  These effects are especially pronounced on NCIs because of how many rows per page there usually are due to the narrow index row width.

    Since you have the Standard Edition, you already know you can't use the "ONLINE" option, and that explains why you're looking for a "quiet time".  One way to reduce the time to build such an index is to create or rebuild the index while in the BULK LOGGED Recovery Model.  Index creation or rebuilding is minimally logged when in the BULK LOGGED model and you're doing it in an OFFLINE manner.  I recently rebuilt a 146GB Clustered Index in about 12 minutes using that method; a 24GB index will take a lot less.  I also used the SORT_IN_TEMPDB option, but your mileage may vary there because of memory and not knowing how fast your TempDB drive is compared to the drive your database is on.
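    Sketched out with hypothetical names, that minimally logged offline build looks something like this. Note that switching to BULK_LOGGED affects point-in-time restore for that log interval, so take log backups before and after:

```sql
ALTER DATABASE MyBigDb SET RECOVERY BULK_LOGGED;

-- Offline (Standard edition) build; the index sort runs in TempDB
-- instead of the user database, which helps if TempDB is on faster
-- storage.
CREATE NONCLUSTERED INDEX IX_BigTable_SomeColumn
    ON dbo.BigTable (SomeColumn)
    WITH (SORT_IN_TEMPDB = ON);

ALTER DATABASE MyBigDb SET RECOVERY FULL;  -- if FULL is where you started
```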

    I'm also concerned with what the column for the index being built is actually used for.  If it's an ever-increasing value and so is the CI, then you can set the FILL FACTOR for the index to 100 and likely never have to defrag it in the next 20 years.  If the column for the index isn't ever-increasing (values are inserted into the index out of order), or if the values in the column are updated, or the values are inserted in large batches with a narrow scope on the column (a "Siloed Index" is what I call that), then you may or may not be able to take advantage of any FILL FACTOR to reduce page splits.  You'll have to check later to figure that out.
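    One way to do that later check is sys.dm_db_index_physical_stats (the object name here is hypothetical):

```sql
-- Fragmentation and page density for every index on the table.
-- 'SAMPLED' mode keeps the scan reasonably cheap on a
-- 1.6-billion-row table.
SELECT index_id,
       avg_fragmentation_in_percent,
       avg_page_space_used_in_percent
  FROM sys.dm_db_index_physical_stats(
           DB_ID(), OBJECT_ID('dbo.BigTable'), NULL, NULL, 'SAMPLED');
```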

    If the index already exists and isn't a unique index being used for an FK, you can also avoid some unwanted expansion of the MDF file by disabling the index before you rebuild it.
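    The disable-then-rebuild pattern, with hypothetical names (disabling a non-clustered index frees its pages; rebuilding re-enables it):

```sql
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable DISABLE;
-- ...the index is unusable until rebuilt...
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable REBUILD;
```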

    Don't forget to shift back to the FULL Recovery Model if that's where you started.

    --Jeff Moden



  • Jeff Moden - Tuesday, November 13, 2018 6:35 PM


    Very valuable information! Thank you.

  • Jeff Moden - Tuesday, November 13, 2018 6:35 PM


    • I am wondering, if we create an index on a table, which type of lock might happen?

    • Previously I thought that if we have a query with a select statement and where criteria, then a Non-clustered index with included columns would be the best index regardless of the table size.

      So apparently that is not true?

  • WhiteLotus - Wednesday, November 14, 2018 8:49 PM


    On your first question, with ONLINE = OFF, all user access to the underlying table is impossible when:
    1. Creating, rebuilding, or dropping a Clustered Index.
    2. Rebuilding or dropping a Non-Clustered Index.

    Still on your first question, Inserts, Updates, and Deletes on the underlying table are blocked when Creating a Non-Clustered Index.  However, reads are not blocked, so you can still select against the underlying table.

    On your second question, I'm wondering where the question comes from, so I'm not sure what you're actually looking for.  I'm especially confused about how anything I may have said led you to believe that a covering index (which is what you're talking about) isn't beneficial, especially on large tables.  A narrow Non-Clustered Index (NCI from here on) that covers a query will be faster (sometimes significantly faster) than a wider Clustered Index (CI from here on) simply because fewer pages will be read, thanks to the narrower rows of the NCI (which is a form of table itself).  For large tables, I usually use the CI to control/reduce fragmentation, which frequently benefits queries with temporal criteria (seek first, range scan the rest).  Of course, as with anything else in SQL Server, "It Depends" on the shape of the data and the usage of the data.

    The caveat with covering indexes is that you have to remember that all NCIs are a duplication of data and such indexes can quickly grow to be larger than the original data whether the original data is in a heap or a CI (table).  You also have to remember that the key for the CI (or RID if it's a heap) is also stored in every NCI.  That's also why I said "usually" when it comes to me using the CI to control fragmentation.  There are exceptions depending on the data.
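    For reference, the kind of covering index being discussed looks like this, with hypothetical names (the filter column in the key, the other columns the query selects in the INCLUDE list):

```sql
-- Covers SELECT OtherCol1, OtherCol2 ... WHERE SomeColumn = @x
-- without touching the base table, at the cost of duplicating
-- those columns' data in the index.
CREATE NONCLUSTERED INDEX IX_BigTable_Covering
    ON dbo.BigTable (SomeColumn)
    INCLUDE (OtherCol1, OtherCol2);
```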

    --Jeff Moden



  • Jeff Moden - Wednesday, November 14, 2018 10:45 PM


    Being able to read a covering index could indeed be vital on a table this size.  And, I think as you've suspected, that points out a potential problem with rebuilding such a non-clus index.  SELECTs are still allowed, but without that index present, the query might have to scan the entire table instead of using the covering index (since it's being rebuilt in OFFLINE mode).  Naturally a full scan of such a huge table could potentially cause performance issues across the entire instance.

    SQL DBA, SQL Server MVP (07, 08, 09)
    A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher - Thursday, November 15, 2018 10:21 AM


    There IS a trick you can do for such rebuilds: build a new index first and then drop the old one.  You may be able to do this more easily with "DROP_EXISTING", but I've not tried that for such a thing.  You'd have to test.

    Of course, you have to be careful not to do such a thing if the index is a unique index that is also the target of an FK.
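    The two variants of that trick, sketched with hypothetical names:

```sql
-- Variant 1: build a new copy first, so the old index stays readable
-- during the build, then drop the old one.
CREATE NONCLUSTERED INDEX IX_BigTable_SomeColumn_New
    ON dbo.BigTable (SomeColumn);
DROP INDEX IX_BigTable_SomeColumn ON dbo.BigTable;

-- Variant 2: a single statement that drops and rebuilds in place.
CREATE NONCLUSTERED INDEX IX_BigTable_SomeColumn
    ON dbo.BigTable (SomeColumn)
    WITH (DROP_EXISTING = ON);
```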

    --Jeff Moden


