Index Question

  • I have a table that has 592,969,976 records.  The primary key is a bigint IDENTITY column, and the clustered index is on that column.

    How can this clustered index get fragmented?  It is showing 50+% fragmentation.  The table does have a varchar(max) field in it but it is not in the clustered index.
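
    In case it matters, here is roughly how I am measuring it (a sketch using sys.dm_db_index_physical_stats in SAMPLED mode because of the table size; dbo.Audit is the table in question):

    SELECT ips.index_id,
           ips.avg_fragmentation_in_percent,
           ips.avg_page_space_used_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Audit'), NULL, NULL, 'SAMPLED') AS ips;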


    Thank you.

  • Are there any deletes?  Are there any updates that increase the size of VARCHAR columns?  Are there any batch inserts of more than 1 row?

    All of these operations can create fragmentation.
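
    If you're not sure what's hitting the table, one rough way to check is the operational stats DMV, which keeps cumulative leaf-level counts since the last instance restart (a sketch; the table name is a placeholder):

    SELECT i.name,
           os.leaf_insert_count,
           os.leaf_update_count,
           os.leaf_delete_count,
           os.leaf_ghost_count
    FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL) AS os
    JOIN sys.indexes AS i
      ON i.object_id = os.object_id
     AND i.index_id = os.index_id;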

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Except for the first insert into a table under some special conditions, INSERTs will always try to fill all pages to as near to 100% as possible according to the row lengths.  So, the INSERTs will never be the problem... except when you have LOBs, do expansive updates, or, as Jeffrey Williams pointed out, you're doing DELETEs.

    Focusing a bit on the LOB column (VARCHAR(MAX) in your case)...

    1. Is the column ALWAYS populated during the initial INSERT?
    2. Not including any null or empty values for the VARCHAR(MAX) column if it's not populated by the initial INSERT, how big are the normal values when the column is updated?
    3. Can you post the CREATE TABLE statement for the table in question so that I can better advise on what is going on, why fragmentation is rampant on this table, and things such as "trapped tiny pages"?
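
    For #2, something like the following would answer it (a hedged sketch; the table and column names are placeholders until we see your DDL):

    SELECT size_band_KB = DATALENGTH(YourLobColumn) / 1024,
           row_cnt      = COUNT_BIG(*)
    FROM dbo.YourTable
    WHERE DATALENGTH(YourLobColumn) > 0 -- skips NULL and empty values
    GROUP BY DATALENGTH(YourLobColumn) / 1024
    ORDER BY size_band_KB;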


    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Here is the code from when I tell SQL Server to script the creation of it:

    SET ANSI_NULLS ON
    GO

    SET QUOTED_IDENTIFIER ON
    GO

    CREATE TABLE [dbo].[Audit](
        [AuditID] [bigint] IDENTITY(1,1) NOT NULL,
        [AuditDate] [datetime] NOT NULL,
        [HostName] [sysname] NOT NULL,
        [SysUser] [nvarchar](128) NOT NULL,
        [Application] [varchar](50) NOT NULL,
        [TableName] [sysname] NOT NULL,
        [Operation] [char](1) NOT NULL,
        [SQLStatement] [varchar](max) NULL,
        [PrimaryKey] [int] NOT NULL,
        [RowDescription] [varchar](50) NULL,
        [SecondaryRow] [varchar](50) NULL,
        [ColumnName] [sysname] NULL,
        [OldValue] [varchar](50) NULL,
        [NewValue] [varchar](50) NULL,
        [RowVersion] [int] NULL,
        CONSTRAINT [pkAudit] PRIMARY KEY CLUSTERED
        (
            [AuditID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_HostName] DEFAULT ('') FOR [HostName]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_SysUser] DEFAULT ('') FOR [SysUser]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_Application] DEFAULT ('') FOR [Application]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_TableName] DEFAULT ('') FOR [TableName]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_Operation] DEFAULT ('') FOR [Operation]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_SQLStatement] DEFAULT ('') FOR [SQLStatement]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_PrimaryKey] DEFAULT ((0)) FOR [PrimaryKey]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_RowDescription] DEFAULT ('') FOR [RowDescription]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_SecondaryRow] DEFAULT ('') FOR [SecondaryRow]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_ColumnName] DEFAULT ('') FOR [ColumnName]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_OldValue] DEFAULT ('') FOR [OldValue]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_NewValue] DEFAULT ('') FOR [NewValue]
    GO

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_RowVersion] DEFAULT ((0)) FOR [RowVersion]
    GO

    The SQLStatement field is always populated.  I never delete from this table.

    Mike

  • Yep... pretty typical.  But I do have to ask again, what's the size of the VARCHAR(MAX) entries?  More specifically, how many of the VARCHAR(MAX) entries are between 4047 and ~7000 bytes?

    --Jeff Moden

  • Inserts can be a problem if you are inserting multiple rows in a single statement.

    There is no guarantee that the inserts will be performed in identity order unless you specifically add an ORDER BY, and even then the process may run in parallel, which can cause out-of-order pages.

    Page splits can also occur with single row inserts depending on when the insert is committed.

    Jeffrey Williams

  • Jeffrey Williams wrote:

    Inserts can be a problem if you are inserting multiple rows in a single statement.

    There is no guarantee that the inserts will be performed in identity order unless you specifically add an ORDER BY, and even then the process may run in parallel, which can cause out-of-order pages.

    Page splits can also occur with single row inserts depending on when the insert is committed.

    On the given table, though, a single or multi-row INSERT will always insert at the logical end of the Clustered Index, and there will never be a page split of the Clustered Index during INSERTs because the Clustered Index is based on an ever-increasing value.  That's just for INSERTs and, if there are no expansive updates on this audit table (and there should never be updates on an audit table), the Clustered Index will never suffer a page split at all.

    In such cases, the only place logical fragmentation can come from is gaps in physical pages (which can also be caused by DELETEs).  The only places the other type of fragmentation (low page density) can come from are DELETEs or short rows that are trapped between long, in-row entries in the VARCHAR(MAX) column and, based on the fact that they're capturing the "SQLStatement" that changed the table being audited, it's very likely that a bunch of "short rows" are, in fact, trapped between large rows.

    This has been a common problem since 2005, when the MAX datatypes came out, because they default to "in row if they fit" and "in row" means either in the HEAP (which this table is not) or in the Clustered Index.
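
    As a hedged way to see that symptom directly, sys.dm_db_index_physical_stats in DETAILED mode breaks the numbers out by allocation unit; a very low avg_page_space_used_in_percent on the IN_ROW_DATA unit of the Clustered Index (index_id 1) is the signature of "trapped short pages".  Be warned that DETAILED mode reads every page, so it will take a long time on a table this size:

    SELECT alloc_unit_type_desc,
           avg_page_space_used_in_percent,
           page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Audit'), 1, NULL, 'DETAILED');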

    This is the reason why I'm asking how many values in the SQLStatement column fall between the two numbers I provided.  It'll let us know for sure if there are "trapped short rows".  The ONLY way to fix this problem is to set the table option to have new LOB data go out of row and then do a table update to force all the existing LOB data out of row.

    There is one other place where the fragmentation could be coming from and that's if they're doing the DELETEs in some order other than the key of the Clustered Index.

    We didn't have this type of problem with LOBs prior to 2005 because Text, NText, and Image all defaulted to "Out of row" storage.

    --Jeff Moden

  • Jeff Moden wrote:

    This is the reason why I'm asking how many values in the SQLStatement column fall between the two numbers I provided.  It'll let us know for sure if there are "trapped short rows".  The ONLY way to fix this problem is to set the table option to have new LOB data go out of row and then do a table update to force all the existing LOB data out of row.

    How would I do this?  How would it affect database size?  How do I see if it is set as you say?

    Thanks.

  • mike 57299 wrote:

    This is the reason why I'm asking how many values in the SQLStatement column fall between the two numbers I provided.  It'll let us know for sure if there are "trapped short rows".  The ONLY way to fix this problem is to set the table option to have new LOB data go out of row and then do a table update to force all the existing LOB data out of row.

    How would I do this?  How would it affect database size?  How do I see if it is set as you say?

    Thanks.

    Before we go there, could you answer the question that I previously asked to find out if we actually need to go there?  The question is "how many of the VARCHAR(MAX) entries are between 4047 and ~7000 bytes"?
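
    If it helps, here is a sketch of one way to get that count (DATALENGTH returns the size in bytes):

    SELECT COUNT_BIG(*)
    FROM dbo.Audit
    WHERE DATALENGTH(SQLStatement) BETWEEN 4047 AND 7000;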

    --Jeff Moden

  • Sorry about that -- I thought I had posted that number.  There are 6,957,805 records between 4,000 and 8,000 bytes.


    Mike

  • That's a pretty strong indication that the problem may exist.  All of those items are actually in the Clustered Index.  With the large row count and the fact that "only" about 1.1% of the rows are guaranteed to be "single page" rows, it might not be the actual problem, but I'll try to put something together for you after I get home tonight to make a better determination.

    --Jeff Moden

  • Jeff,

    Would it help to split the table into two?  One with all data except the varchar(max), and one with the same ID and the varchar(max) only?

    Mike

  • mike 57299 wrote:

    Jeff,

    Would it help to split the table into two?  One with all data except the varchar(max), and one with the same ID and the varchar(max) only?

    Mike

    Exactly BUT... you don't have to do it yourself.

    The issue is that the shorter stuff in the VARCHAR(MAX) column is actually living within your Clustered Index.  Except for doing an INCLUDE of the column on an index (which is a REALLY bad idea because all Non-Clustered Indexes are a duplication of data at their leaf levels), you can't even use the VARCHAR(MAX) column in your indexes.  All the short stuff (which isn't that short) is doing is slowing down any Clustered Index scans, including the coveted seek followed by a range scan (for monthly reporting and things like that).

    In the days prior to SQL Server 2005, LOBs (Text, NText, and Image) defaulted to being entirely "Out-of-Row", which means they were stored in special pages outside of the Clustered Index.  When 2005 came along and they made the totally awesome VARCHAR(MAX), NVARCHAR(MAX), and VARBINARY(MAX) and the not-so-awesome XML (also a LOB) datatypes, they defaulted them to "In-Row-If-They-Fit".  Most of our queries aren't centered on LOB data, especially when we're using a WHERE clause or a JOIN.  So all the in-row LOB junk is just occupying more Clustered Index pages and, because data lives on logical pages in the correct order, you may have a really short page (no LOB data or very short LOB) stuck between two pages where most of the page is LOB data.  That means that really short page might be (and frequently is) something like 3% (not a misprint... 3% and sometimes less) full.

    As you state, storing the LOB data in a "separate table" would be one way to go, but you'd still end up with trapped short pages, etc.

    There's a much easier and much more effective way to store LOB data and that's to FORCE the LOB data to ALWAYS be out of row, no matter how big or small it might be.

    With that in mind, here is how to set up a NEW table to force LOB data out of row... just run the following command on your NEW table (of course, you need to make the appropriate substitutions)...

    EXEC sp_tableoption 'tableschemaname.tableobjectname', 'large value types out of row', 1;

    What about existing tables that already have data in them?  Any NEW LOB data will auto-magically be moved to out-of-row during an INSERT or an UPDATE.
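
    To see whether the option is set (your third question), sys.tables exposes it directly (1 means new LOB values go out of row):

    SELECT name, large_value_types_out_of_row
    FROM sys.tables
    WHERE name = 'tableobjectname';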

    What about existing rows?  It seems typical that anything that's good for you is a bit painful at first, but it's well worth it.  You first have to set the table option like we did above.  Then you have to do 2 things... 1) UPDATE each LOB value with itself, which will cause the data to move out of the Clustered Index into LOB storage, and 2) REBUILD your Clustered Index to recover all of the space left behind (the average page density will have dropped to about 10-15 percent or so).

    --===== Force the LOB column to move its data to LOB storage by
         -- doing an "in-place" self-update.
    UPDATE tableschemaname.tableobjectname
       SET SomeLOBColumnName = SomeLOBColumnName
     WHERE SomeLOBColumnName IS NOT NULL
    ;
    --===== Reclaim the massive amount of free-space from the
         -- now much-smaller clustered index.
         -- Since this particular table is based on an ever-increasing
         -- IDENTITY column and NEVER suffers UPDATEs, we'll make sure
         -- we use a 100% Fill Factor and we'll still never need to defrag
         -- it ever again (PROVIDED that what you say is true about NEVER
         -- being UPDATEd and never being DELETEd from).
         -- You might want to do this in the BULK_LOGGED recovery model
         -- to make it "Minimally Logged".
    ALTER INDEX ClusteredIndexName
        ON tableschemaname.tableobjectname
        REBUILD WITH (FILLFACTOR = 100)
    ;
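
    One caution on the self-UPDATE: run as a single statement against 593 million rows, it's one giant transaction.  A hedged sketch of batching it by the clustering key instead (the batch size is a guess... tune it, and in the FULL recovery model take log backups between batches):

    DECLARE @BatchSize bigint = 50000,
            @MinID     bigint = 0,
            @MaxID     bigint;

    SELECT @MaxID = MAX(AuditID) FROM dbo.Audit;

    WHILE @MinID < @MaxID
    BEGIN
        UPDATE dbo.Audit
           SET SQLStatement = SQLStatement
         WHERE AuditID >  @MinID
           AND AuditID <= @MinID + @BatchSize
           AND SQLStatement IS NOT NULL;

        SET @MinID += @BatchSize;
    END;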

    You also mentioned that the "table" (Clustered Index) was 50% fragmented.  If that continues after the LOBs have been moved out of row, you need to find out what's doing UPDATEs or DELETEs on the table, because a Clustered Index based on an ever-increasing key, like your identity column, will take tens of years to even get to 5 or 6% fragmentation (in most cases; there are exceptions).

    Getting back to moving the LOBs, while that certainly isn't difficult to do, it's wicked tough on the system if the table is big.  Moving the data out of row is going to cost extra disk space to store it all, and that will be about the same size as the existing table.  Yes, you'll get a lot of "free-space" back when you rebuild the Clustered Index, but it's "free" only to the current database and you can't get it back without doing a SHRINKFILE, and that has its own complications, including needing to REBUILD many indexes that have inverted after the shrink.

    There's also a hidden major problem we've not yet spoken of and that has to do with backups.  This IS an audit table and it should never be UPDATEd.  That means that as soon as a row is inserted (and they're all inserted at the logical end of the ever-increasing index), it's stagnant... it will NEVER change.  Pick any given row from today and any number of years in the future and the row will still not have changed.

    So, when you do the laundry, do you wash the dirty clothes or the clean ones (hopefully the former or you have a problem)?  The same holds for backups on large audit tables.  Why are you backing up what will NEVER change and will only be added to?

    With that thought in mind, consider "partitioning" the table.  I personally prefer Partitioned VIEWS (I drank the Kool-Aid on Partitioned Tables and hate them for many reasons especially when it comes to DR restores) over Partitioned TABLES but, whatever.  If you partition tables such as this audit table and you make the realization that you can have one partition per filegroup, you can set the old partitions to "Read Only", back them up, and never have to back them up again.  I did that on just one table in my telephone system at work and, because we're required to keep literally more than a decade of some really wide, high volume data, my backups dropped from more than 10 hours to less than 6 minutes.  It's all because I'm not backing up data that will never change since the last backup.
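
    As a rough illustration of that idea (the filegroup and database names here are hypothetical), once a filegroup holds only closed-out history, you mark it read-only and back it up one last time:

    ALTER DATABASE AuditDB MODIFY FILEGROUP Audit2019 READ_ONLY;

    BACKUP DATABASE AuditDB
    FILEGROUP = 'Audit2019'
    TO DISK = 'D:\Backups\AuditDB_Audit2019.bak';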

    So the bottom line is, yes, we can do LOB out-of-row tricks and a whole lot more and, yes, all that still needs to be done BUT I'd do that as a part of partitioning the table.

    And, if it turns out that the "SQLStatement" column is being updated after the initial row insert, then add a default to the column of just one space.  That will cause the LOB row pointer to materialize so that the update isn't an expansive one, which would cause about 50% fragmentation on an index that's not supposed to ever fragment.
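
    For your table, that would look something like the following sketch (the existing empty-string default from your posted DDL would have to be dropped first):

    ALTER TABLE [dbo].[Audit] DROP CONSTRAINT [DF__Audit_SQLStatement];

    ALTER TABLE [dbo].[Audit] ADD CONSTRAINT [DF__Audit_SQLStatement] DEFAULT (' ') FOR [SQLStatement];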


    --Jeff Moden

  • Jeff,

    Wow.  Thank you for all of the above.  You are correct about the backup being huge because of the audit data.  I have a lot to think about.

    Thanks!

  • Hi.

    Here is an update.  I have run the code above.  Here are the numbers (mdf/ldf):

    Original size:  566,086,656 // 50,003,456

    After sp_tableoption: same

    After Update:  935,619,456 // 584,126,656  (6 hours later)

    After Alter Index: same

    Before this process, the pkAudit clustered index was at 50.645% fragmentation.  Now it is at 0% with 97.02% page fullness.

    Now I am shrinking the database - it has about 38% free space.
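
    For reference, this is roughly what I am running (the logical file name is substituted), followed by the index rebuild that Jeff warned the shrink makes necessary:

    DBCC SHRINKFILE (N'AuditDB_Data', 600000); -- target size in MB

    ALTER INDEX pkAudit ON dbo.Audit REBUILD WITH (FILLFACTOR = 100);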

    I would like to do this to the live data, but our server is far too short on space for this.  I will have to think of a way to do this that doesn't take up another 1 TB of space.  Also, during the 6 hours I could not use this machine; the first time I tried it, the machine froze about an hour into the process because I was doing other things.

    I will keep all of you updated.

    Thanks!
