Best index structure for table that holds a large amount of log data

  • Folks

    I have an ASP.NET solution that needs to log every time a method is invoked (that part I have done).

    I am logging this activity into a table

    I had constructed a table that has an identity field that is also the clustered primary key.

    When we do load testing and the number of clients is greater than 20, I get errors (which I also log to another table).

    The error is "The underlying provider failed on Commit"

    Inner Exception is: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding"

    I also run SQL Profiler during these tests:

    Under 10 users inserts are under 1500 msecs

    Over 20 users inserts are about 6000-9000 msecs

    At 50 users inserts are over 30000 msecs

    This table is never read by the application, and it has only one index: the clustered index.

    So how can I index this table to reduce the insert time and avoid this exception?

    Or should the table have no index?

    regards

    GregJF

  • If you need an index, look at creating a non-clustered index rather than a clustered index, with a high fill factor (90-95%) to prevent a large number of page splits occurring.
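
    As a sketch of the suggestion above (the index and column names here are placeholders, not from the original post), a non-clustered index built with a high fill factor might look like:

    ```sql
    -- Hypothetical example: a non-clustered index on a log table, built with
    -- a high fill factor so pages start with some free space, reducing page
    -- splits on inserts. Names are illustrative only.
    CREATE NONCLUSTERED INDEX IX_LogTable_ReceivedTime
        ON dbo.LogTable (ReceivedTime)
        WITH (FILLFACTOR = 90);
    ```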

    _________________________________________________________________________
    SSC Guide to Posting and Best Practices

  • 1500ms is incredibly high for an insert, unless you're inserting a million or so rows. Can you post more details, including an execution plan?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Gail

    Yes, and the 1500 ms is just me filtering out quick-running (i.e. < 1500 ms) queries.

    I have 100s of inserts for this command (sp_executesql from .NET EF4).

    I run the Load Test for 85 mins.

    25 minutes into it, I start getting these timeout errors (the test emulates 10 users for a time, then 20, then 30, on up to 50).

    Also at that time, the inserts into this table are getting longer and longer.

    They start at 7000 ms, then 8000 ms, then 9000 ms, then jump to 20000-22000 ms, then jump again to 30000-35000 ms.

    I have attached the plan and the prepared statement, but remember this works fine except between 25 and 26 minutes.

    The last 30 minutes (between 55 and 85 minutes) has 50 "robot" users doing lots of queries and inserts.

    Load: 175,511 inserts in the 85 minutes, of which 802 were between 25 and 26 minutes.

    Long running: 200 took > 700 ms to insert, of which 40 were between 10000 ms and 25000 ms and 15 were greater than 25000 ms (all of these at 25 minutes into the Load Test).

    Errors: 49 (all at 25 minutes into the Load Test, when the user load went from 10 users to 20 users).

    It seems that it may be something else.

    So what should I look for?

    I have a full SQL Profiler trace of all actions from the application against that database, as well as the trace that isolates queries greater than 1500 ms.

    I also have PerfMon captures on the SQL Server and the two "client" computers.

    regards

    Greg Frazer

  • The code is inserting into the table and then unnecessarily reading from the table. Try the following, instead...

    exec sp_executesql N'insert [dbo].[LogApplicationControllerAction]([ApplicationName], [ControllerName], [ActionName], [RouteData], [RequestSessionID], [SerialisedType], [LogData], [ReceivedTime], [CreatedBy], [CreatedDate], [UpdatedBy], [UpdatedDate])
    values (@0, @1, @2, @3, null, @4, @5, @6, @7, @8, @9, @10)
    select [LogApplicationControllerActionID] = scope_identity()',
    N'@0 nvarchar(300),@1 nvarchar(120),@2 nvarchar(120),@3 nvarchar(max),@4 nchar(270),@5 nvarchar(max),@6 datetime2(7),@7 nvarchar(128),@8 datetime2(7),@9 nvarchar(128),@10 datetime2(7)',
    @0=N'DnBConnect',@1=N'search',@2=N'ShowCompanyImage',@3=N'',@4=N'DnB.Connect.MessageContentLibrary.ControllerActionLog

    If you can't get the ORM to use the code above, write a CRUD stored proc in a manner similar to the above and call it instead.
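
    A sketch of such a proc, assuming the column list and parameter types shown in the captured statement above (the procedure name is a placeholder). If the caller never uses the returned ID, the SCOPE_IDENTITY() read-back can be dropped entirely:

    ```sql
    -- Hypothetical insert proc for the logging table; names and types are
    -- taken from the captured sp_executesql statement, the proc name is invented.
    CREATE PROCEDURE dbo.LogApplicationControllerAction_Insert
        @ApplicationName nvarchar(300),
        @ControllerName  nvarchar(120),
        @ActionName      nvarchar(120),
        @RouteData       nvarchar(max),
        @SerialisedType  nchar(270),
        @LogData         nvarchar(max),
        @ReceivedTime    datetime2(7),
        @CreatedBy       nvarchar(128),
        @CreatedDate     datetime2(7),
        @UpdatedBy       nvarchar(128),
        @UpdatedDate     datetime2(7)
    AS
    BEGIN
        SET NOCOUNT ON;

        INSERT dbo.LogApplicationControllerAction
            (ApplicationName, ControllerName, ActionName, RouteData,
             RequestSessionID, SerialisedType, LogData, ReceivedTime,
             CreatedBy, CreatedDate, UpdatedBy, UpdatedDate)
        VALUES
            (@ApplicationName, @ControllerName, @ActionName, @RouteData,
             NULL, @SerialisedType, @LogData, @ReceivedTime,
             @CreatedBy, @CreatedDate, @UpdatedBy, @UpdatedDate);
    END;
    ```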

    And, no... I wouldn't change the clustered index you have, except that you should consider changing the FILLFACTOR to 100. There are no updates to this audit table, and the CI on the IDENTITY column will actually prevent page splits. No sense in saving blank space on a page for such a thing.
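
    For example (the index name here is a placeholder, assuming the PK was created with a default name), rebuilding the clustered index with a full fill factor might look like:

    ```sql
    -- Inserts on an ever-increasing IDENTITY key always go to the end of the
    -- index, so leaving free space on earlier pages is wasted. Index name is
    -- a guess; check sys.indexes for the real one.
    ALTER INDEX PK_LogApplicationControllerAction
        ON dbo.LogApplicationControllerAction
        REBUILD WITH (FILLFACTOR = 100);
    ```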

    Something else to make sure of if you haven't looked in a while... make sure no one screwed up and put a trigger or FK on this audit table. Might want to double check and see if someone added another index or two, as well. Can't tell from here, but another thing to look for is to see if someone put an indexed (materialized) view on the table. Like Gail said, the duration of 1500 ms is terribly high for a single insert, and the things I listed could be the cause.
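
    Those checks can all be run from the system catalog views; something along these lines (table name taken from the captured statement above):

    ```sql
    -- Triggers on the audit table
    SELECT name FROM sys.triggers
    WHERE parent_id = OBJECT_ID('dbo.LogApplicationControllerAction');

    -- Foreign keys defined on the table
    SELECT name FROM sys.foreign_keys
    WHERE parent_object_id = OBJECT_ID('dbo.LogApplicationControllerAction');

    -- All indexes on the table (should be just the clustered PK)
    SELECT name, type_desc FROM sys.indexes
    WHERE object_id = OBJECT_ID('dbo.LogApplicationControllerAction');

    -- Indexed views show up as indexes on view objects that reference the table
    SELECT v.name AS view_name, i.name AS index_name
    FROM sys.views v
    JOIN sys.indexes i ON i.object_id = v.object_id
    WHERE OBJECT_DEFINITION(v.object_id) LIKE '%LogApplicationControllerAction%';
    ```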

    One other thing. Have you checked the data types of the query against the data types of the actual table columns? A mismatch could be yet another source of extra milliseconds, although it shouldn't be that significant for an unjoined insert.
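
    The column definitions to compare against the parameter declarations in the captured statement can be pulled like so:

    ```sql
    -- List the table's actual column types and lengths, to compare against
    -- the nvarchar/nchar/datetime2 declarations in the sp_executesql call.
    SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_SCHEMA = 'dbo'
      AND TABLE_NAME = 'LogApplicationControllerAction'
    ORDER BY ORDINAL_POSITION;
    ```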

    (Note: all the questions above could be answered if I could see the query plan you attached. I can't because I'm working from a 2k5 machine right now).

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
