rebuild index worsens query performance

  • we have a query which runs several times a day

    reads from several tables in one database ("W"), and two tables in another database ("P") (both db's reside on the same server)

    the two tables in database "P" never had their indexes rebuilt or statistics updated, and were significantly fragmented

    I purged some old information from another table in database "P" (roughly 50% of it), and shrank database "P" (to reclaim disk space, which we were running out of)

    I rebuilt the indexes/updated statistics in "P" immediately afterward, and now weekly, via a maintenance plan (rebuild index task [original amount of free space], update statistics task [column stats only, initially at 10%, then 50% weekly])

    From that point forward the aforementioned query takes twice as long to run

    Thoughts?

    FYI:

    the two tables in the query, and the table I purged, are subscribed tables in replication, the publisher being on another server in the same domain

    I dropped replication on the purged table, purged it on both the publisher and subscriber, and re-created replication using the "replication support only" option

    the two tables in the query had replication active the whole time (purge of the third table, shrink, index rebuild, etc.) -- no information was flowing at that time

    database "P" has not expanded since the shrink

    a comparison (via RedGate Compare) between the database pre-purge/shrink/index rebuild and present shows no structural differences between the two tables used by the query

    Is it possible that the plan the query was using prior to the index rebuild was running faster despite the fragmentation? I would think that any reduction in index fragmentation would be cause for improvement in query performance

    *** UPDATE (11-20-2012) -- it was determined that, on that day, I rebuilt indexes but NOT stats -- update stats came 2 weeks later and made no difference either way

  • Index defragmentation shouldn't cause a performance degradation, but it won't necessarily speed things up either. It depends on how the index is being used.

    Single-row seeks, for example, aren't significantly impacted by fragmentation.
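    If you want to quantify how fragmented the indexes actually are, sys.dm_db_index_physical_stats gives you the numbers. A minimal sketch, run in database "P" (the page_count filter is an arbitrary floor I picked; fragmentation on tiny indexes rarely matters):

    SELECT OBJECT_NAME(ips.object_id)       AS table_name,
           i.name                           AS index_name,
           ips.index_type_desc,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
    JOIN   sys.indexes i
      ON   i.object_id = ips.object_id
     AND   i.index_id  = ips.index_id
    WHERE  ips.page_count > 1000   -- arbitrary floor; small indexes don't matter much
    ORDER BY ips.avg_fragmentation_in_percent DESC;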

    Before I can suggest anything about the issue, I'd need to see the execution plan at the very least. Table definitions and the query are usually also needed, but the execution plan is the bare minimum. Can you attach that as an sqlplan file? (Don't copy-and-paste the XML into the forum, in other words. Save the plan as a file and attach that to a post in the forum.)
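    If you're not sure how to capture it: one way is to run the statement with SET STATISTICS XML ON, click the plan that comes back in the results grid to open it in SSMS, then save it from there as a .sqlplan file. A minimal sketch, with the comment standing in for your real query:

    SET STATISTICS XML ON;
    -- ... run the problem query here (placeholder) ...
    SET STATISTICS XML OFF;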

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • note: the emphasis is on what happened when the aforementioned operations were performed on database "P" such that the query doubled in run time, not so much on what code to change in order to improve performance

    the 1st letter of the two databases in the plan will be either "W" or "P" (per opening post)

    After completing the index rebuild, did you then rebuild all the statistics? If so, did you use a sample size or a full scan?

    If you used a sample size on the statistics for the indexes you rebuilt, you made them worse. The statistics on an index are rebuilt when you rebuild that index, and that is done with a full scan, meaning those statistics were better before you rebuilt them separately. (A sketch of the effect follows.)
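    To make that concrete, a sketch against a hypothetical dbo.SomeTable with index IX_SomeTable (both names made up for illustration):

    -- Rebuilding an index recomputes its statistics from every row
    -- (the equivalent of a FULLSCAN):
    ALTER INDEX IX_SomeTable ON dbo.SomeTable REBUILD;

    -- A sampled statistics update afterward REPLACES those full-scan
    -- statistics with sampled ones, which can be less accurate:
    UPDATE STATISTICS dbo.SomeTable IX_SomeTable WITH SAMPLE 10 PERCENT;

    -- To restore full-scan quality:
    UPDATE STATISTICS dbo.SomeTable IX_SomeTable WITH FULLSCAN;

    -- The header of DBCC SHOW_STATISTICS reports "Rows" vs. "Rows Sampled",
    -- which tells you what you currently have:
    DBCC SHOW_STATISTICS ('dbo.SomeTable', IX_SomeTable);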

    I don't think I rebuilt stats that day, just the indexes.

    I added an update stats task to the maintenance plan about a week later (after the index rebuild); however, it updates COLUMN stats only -- my understanding is that an index rebuild updates TABLE stats but not COLUMN stats; correct me if I'm wrong.
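    To verify which statistics objects were touched and when, sys.stats plus STATS_DATE() lists every statistics object on a table: index statistics carry the index name, while auto-created column statistics show auto_created = 1 and _WA_Sys_... names. A sketch against a hypothetical dbo.SomeTable:

    SELECT s.name                              AS stats_name,
           s.auto_created,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM   sys.stats s
    WHERE  s.object_id = OBJECT_ID('dbo.SomeTable')  -- hypothetical table name
    ORDER BY last_updated;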

    I suppose it's possible that an index rebuild could cause a slowdown of this nature because of the stats rebuild that also occurs. That may have caused the execution plan to change. I've also seen that adding an index (found this out in the code for one of my articles) can cause code to run much slower. The optimizer isn't magic... it was written by humans and it sometimes makes bad choices.
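    If a before/after comparison of the plan is wanted, the currently cached plan for a statement can be pulled from the plan cache and diffed against a saved copy. A sketch, with the LIKE pattern standing in for a distinctive fragment of the real query:

    SELECT qs.creation_time,                             -- when this plan was compiled
           qs.execution_count,
           qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microsec,
           qp.query_plan                                 -- XML; save and compare to an old copy
    FROM   sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    WHERE  st.text LIKE '%distinctive query fragment%';  -- placeholder pattern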

    Whatever was the cause, I think you're now to the point where you have to treat it like a new query and tune it. If you want, check the article at the second link in my signature below for how to post what folks need to help you for such problems.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

    I have to agree with Jeff at this point. I have looked at the estimated execution plan, but one of the things we really need is the actual execution plan from the slow-running process. That is in addition to the information mentioned in the article Jeff recommended you read.

    Taking a guess here, but is the table-valued function you use in this process a multi-statement table-valued function? (See the sketch below for why that matters.)

    Also, it looks like you may have several scalar functions being used; are these in the select list of any of the tables queried, or in the where or join clauses between tables?
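    The reason the distinction matters: a multi-statement TVF materializes its result into a table variable, which the optimizer treats as a black box with a fixed low row estimate (1 row on the SQL Server versions current when this thread was written), while an inline TVF is expanded into the calling query like a view, with real statistics. A sketch of the two forms, using made-up names (dbo.Items does not exist in this thread):

    -- Multi-statement form: the body populates a declared table variable;
    -- the optimizer cannot see inside it, so join choices in the outer
    -- query are made against a fixed guess at the row count.
    CREATE FUNCTION dbo.GetItems_MSTVF (@Category INT)
    RETURNS @Result TABLE (ItemID INT, ItemName VARCHAR(50))
    AS
    BEGIN
        INSERT INTO @Result (ItemID, ItemName)
        SELECT ItemID, ItemName FROM dbo.Items WHERE Category = @Category;
        RETURN;
    END
    GO

    -- Inline form: a single RETURN of a SELECT; it is expanded into the
    -- calling query like a view, so real statistics are used.
    CREATE FUNCTION dbo.GetItems_Inline (@Category INT)
    RETURNS TABLE
    AS
    RETURN
        SELECT ItemID, ItemName FROM dbo.Items WHERE Category = @Category;
    GO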

  • that IS the actual plan:

    SELECT (SELECT SUBSTRING(text, r.statement_start_offset/2,
                   (CASE WHEN r.statement_end_offset = -1
                         THEN LEN(CONVERT(nvarchar(max), text)) * 2
                         ELSE r.statement_end_offset
                    END - r.statement_start_offset)/2)
              FROM sys.dm_exec_sql_text(r.sql_handle)) AS query_text
         , (SELECT CONVERT(xml, query_plan)
              FROM sys.dm_exec_text_query_plan(r.plan_handle, r.statement_start_offset, r.statement_end_offset)) AS query_plan
         --, (SELECT query_plan FROM sys.dm_exec_query_plan(r.plan_handle)) AS query_plan_batch
         --, qs.*
      FROM sys.dm_exec_requests r
      LEFT JOIN sys.dm_exec_query_stats qs
        ON r.statement_start_offset = qs.statement_start_offset
       AND r.sql_handle = qs.sql_handle
     WHERE 1=1
       AND session_id = ...

  • I believe it is multi-line

    Note that the query did not change -- the emphasis is on what could have happened that weekend that caused the query to double in time

  • jgenovese (11/17/2012)


    I believe it is multi-line

    Note that the query did not change -- the emphasis is on what could have happened that weekend that caused the query to double in time

    You are being tunnel-visioned. It may not be just what happened that weekend. Maybe you reached a tipping point with the data that caused the query to slow down, not just the index rebuild and stats updates. As Jeff indicated, you need to look at this as if the query did change and needs to be tuned.

  • I have attached the results of sp_help's on the tables, and the sql statement

    Just an FYI, your parse routine is slow. Take a look at this one, then read the article and its discussion that is referenced in the comments.

    Be sure to read the comments.

    /****** Object:  UserDefinedFunction [dbo].[DelimitedSplit8K]    Script Date: 11/17/2012 11:57:10 ******/
    IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[DelimitedSplit8K]') AND type IN (N'FN', N'IF', N'TF', N'FS', N'FT'))
        DROP FUNCTION [dbo].[DelimitedSplit8K]
    GO
    /****** Object:  UserDefinedFunction [dbo].[DelimitedSplit8K]    Script Date: 11/17/2012 11:57:10 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE FUNCTION [dbo].[DelimitedSplit8K]
    /**********************************************************************************************************************
     Purpose:
     Split a given string at a given delimiter and return a list of the split elements (items).

     Notes:
     1. Leading and trailing delimiters are treated as if an empty string element were present.
     2. Consecutive delimiters are treated as if an empty string element were present between them.
     3. Except when spaces are used as a delimiter, all spaces present in each element are preserved.

     Returns:
     iTVF containing the following:
     ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
     Item       = Element value as a VARCHAR(8000)

     Statistics on this function may be found at the following URL:
     http://www.sqlservercentral.com/Forums/Topic1101315-203-4.aspx

     CROSS APPLY Usage Examples and Tests:
    --=====================================================================================================================
    -- TEST 1:
    -- This tests for various possible conditions in a string using a comma as the delimiter. The expected results are
    -- laid out in the comments.
    --=====================================================================================================================
    --===== Conditionally drop the test tables to make reruns easier for testing.
         -- (this is NOT a part of the solution)
         IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL DROP TABLE #JBMTest
    ;
    --===== Create and populate a test table on the fly (this is NOT a part of the solution).
         -- In the following comments, "b" is a blank and "E" is an element in the left-to-right order.
         -- Double quotes are used to encapsulate the output of "Item" so that you can see that all blanks
         -- are preserved no matter where they may appear.
     SELECT *
       INTO #JBMTest
       FROM (                                               --# & type of Return Row(s)
             SELECT  0, NULL                      UNION ALL --1 NULL
             SELECT  1, SPACE(0)                  UNION ALL --1 b (Empty String)
             SELECT  2, SPACE(1)                  UNION ALL --1 b (1 space)
             SELECT  3, SPACE(5)                  UNION ALL --1 b (5 spaces)
             SELECT  4, ','                       UNION ALL --2 b b (both are empty strings)
             SELECT  5, '55555'                   UNION ALL --1 E
             SELECT  6, ',55555'                  UNION ALL --2 b E
             SELECT  7, ',55555,'                 UNION ALL --3 b E b
             SELECT  8, '55555,'                  UNION ALL --2 E b
             SELECT  9, '55555,1'                 UNION ALL --2 E E
             SELECT 10, '1,55555'                 UNION ALL --2 E E
             SELECT 11, '55555,4444,333,22,1'     UNION ALL --5 E E E E E
             SELECT 12, '55555,4444,,333,22,1'    UNION ALL --6 E E b E E E
             SELECT 13, ',55555,4444,,333,22,1,'  UNION ALL --8 b E E b E E E b
             SELECT 14, ',55555,4444,,,333,22,1,' UNION ALL --9 b E E b b E E E b
             SELECT 15, ' 4444,55555 '            UNION ALL --2 E (w/Leading Space) E (w/Trailing Space)
             SELECT 16, 'This,is,a,test.'                   --E E E E
            ) d (SomeID, SomeValue)
    ;
    --===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
     SELECT test.SomeID, test.SomeValue, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
       FROM #JBMTest test
      CROSS APPLY dbo.DelimitedSplit8K(test.SomeValue,',') split
    ;
    --=====================================================================================================================
    -- TEST 2:
    -- This tests for various "alpha" splits and COLLATION using all ASCII characters from 0 to 255 as a delimiter against
    -- a given string. Note that not all of the delimiters will be visible and some will show up as tiny squares because
    -- they are "control" characters. More specifically, this test will show you what happens to various non-accented
    -- letters for your given collation depending on the delimiter you chose.
    --=====================================================================================================================
    WITH
    cteBuildAllCharacters (String,Delimiter) AS
    (
     SELECT TOP 256
            'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
            CHAR(ROW_NUMBER() OVER (ORDER BY (SELECT NULL))-1)
       FROM master.sys.all_columns
    )
     SELECT ASCII_Value = ASCII(c.Delimiter), c.Delimiter, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
       FROM cteBuildAllCharacters c
      CROSS APPLY dbo.DelimitedSplit8K(c.String,c.Delimiter) split
      ORDER BY ASCII_Value, split.ItemNumber
    ;
    -----------------------------------------------------------------------------------------------------------------------
     Other Notes:
     1. Optimized for VARCHAR(8000) or less. No testing or error reporting for truncation at 8000 characters is done.
     2. Optimized for single-character delimiters. Multi-character delimiters should be resolved externally from this
        function.
     3. Optimized for use with CROSS APPLY.
     4. Does not "trim" elements just in case leading or trailing blanks are intended.
     5. If you don't know how a Tally table can be used to replace loops, please see the following...
        http://www.sqlservercentral.com/articles/T-SQL/62867/
     6. Changing this function to use NVARCHAR(MAX) will cause it to run twice as slow. It's just the nature of
        VARCHAR(MAX) whether it fits in-row or not.
     7. Multi-machine testing for the method of using UNPIVOT instead of 10 SELECT/UNION ALLs shows that the UNPIVOT method
        is quite machine dependent and can slow things down quite a bit.
    -----------------------------------------------------------------------------------------------------------------------
     Credits:
     This code is the product of many people's efforts including but not limited to the following:
     cteTally concept originally by Itzik Ben-Gan and "decimalized" by Lynn Pettis (and others) for a bit of extra speed
     and finally redacted by Jeff Moden for a different slant on readability and compactness. Hats off to Paul White for
     his simple explanations of CROSS APPLY and for his detailed testing efforts. Last but not least, thanks to
     Ron "BitBucket" McCullough and Wayne Sheffield for their extreme performance testing across multiple machines and
     versions of SQL Server. The latest improvement brought an additional 15-20% improvement over Rev 05. Special thanks
     to "Nadrek" and "peter-757102" (aka Peter de Heer) for bringing such improvements to light. Nadrek's original
     improvement brought about a 10% performance gain and Peter followed that up with the content of Rev 07.

     I also thank whoever wrote the first article I ever saw on "numbers tables", which is located at the following URL,
     and Adam Machanic for leading me to it many years ago.
     http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html
    -----------------------------------------------------------------------------------------------------------------------
     Revision History:
     Rev 00 - 20 Jan 2010 - Concept for inline cteTally: Lynn Pettis and others.
                          - Redaction/Implementation: Jeff Moden
            - Base 10 redaction and reduction for CTE. (Total rewrite)

     Rev 01 - 13 Mar 2010 - Jeff Moden
            - Removed one additional concatenation and one subtraction from the SUBSTRING in the SELECT List for that tiny
              bit of extra speed.

     Rev 02 - 14 Apr 2010 - Jeff Moden
            - No code changes. Added CROSS APPLY usage example to the header, some additional credits, and extra
              documentation.

     Rev 03 - 18 Apr 2010 - Jeff Moden
            - No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't actually work for this
              type of function.

     Rev 04 - 29 Jun 2010 - Jeff Moden
            - Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary "Table Spool" when the
              function is used in an UPDATE statement even though the function makes no external references.

     Rev 05 - 02 Apr 2011 - Jeff Moden
            - Rewritten for extreme performance improvement especially for larger strings approaching the 8K boundary and
              for strings that have wider elements. The redaction of this code involved removing ALL concatenation of
              delimiters, optimization of the maximum "N" value by using TOP instead of including it in the WHERE clause,
              and the reduction of all previous calculations (thanks to the switch to a "zero-based" cteTally) to just one
              instance of one add and one instance of a subtract. The length calculation for the final element (not
              followed by a delimiter) in the string to be split has been greatly simplified by using the ISNULL/NULLIF
              combination to determine when CHARINDEX returned a 0, which indicates there are no more delimiters to be
              had or to start with. Depending on the width of the elements, this code is between 4 and 8 times faster on a
              single-CPU box than the original code, especially near the 8K boundary.
            - Modified comments to include more sanity checks on the usage example, etc.
            - Removed "other" notes 8 and 9 as they were no longer applicable.

     Rev 06 - 12 Apr 2011 - Jeff Moden
            - Based on a suggestion by Ron "Bitbucket" McCullough, additional test rows were added to the sample code and
              the code was changed to encapsulate the output in pipes so that spaces and empty strings could be perceived
              in the output. The first "Notes" section was added. Finally, an extra test was added to the comments above.

     Rev 07 - 06 May 2011 - Peter de Heer: a further 15-20% performance enhancement has been discovered and incorporated
              into this code, which also eliminated the need for a "zero" position in the cteTally table.
    **********************************************************************************************************************/
    --===== Define I/O parameters
            (@pString VARCHAR(8000), @pDelimiter CHAR(1))
    RETURNS TABLE WITH SCHEMABINDING AS
     RETURN
    --===== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000...
         -- enough to cover NVARCHAR(4000)
      WITH E1(N) AS (
                     SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
                     SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
                    ),                          --10E+1 or 10 rows
           E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
           E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
     cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
                         -- for both a performance gain and prevention of accidental "overruns"
                     SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
                    ),
    cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
                     SELECT 1 UNION ALL
                     SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
                    ),
    cteLen(N1,L1) AS (--==== Return start and length (for use in SUBSTRING)
                     SELECT s.N1,
                            ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
                       FROM cteStart s
                    )
    --===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
     SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
            Item       = SUBSTRING(@pString, l.N1, l.L1)
       FROM cteLen l
    ;
    GO

    Also, no one is really going to dive into what you recently posted in an effort to help you figure out what may be going on. We really need the information posted as DDL scripts, plus you should put together some sample data (i.e. NOT REAL data) for the tables that is representative of the problem. One other thing would really help: the actual execution plan.

    Also, do you really drop the temporary table as soon as it is populated as it appears in the code you posted?

  • "One other thing would really help, the actual execution plan."

    This IS the ACTUAL execution plan

    "Also, do you really drop the temporary table as soon as it is populated as it appears in the code you posted?"

    NO -- I extracted the query from the stored proc in which it resides
