High values of Workfiles and Worktables

  • Hi All,

    I monitor my SQL Server 2008 performance via SolarWinds, and it is reporting a high number of workfiles and worktables (worktables are being created at 865/sec).

    What should I do to bring these values down?
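    For anyone wanting to verify the rate outside the monitoring tool, a rough sketch against the performance-counter DMV. The "/sec" counters are exposed as cumulative values, so they must be sampled twice and differenced; the 10-second window is arbitrary, and on a named instance the object_name prefix will differ.

```sql
-- Sample the cumulative counters twice and compute an actual per-second rate.
DECLARE @first TABLE (counter_name nvarchar(128), cntr_value bigint);

INSERT INTO @first
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Access Methods%'
  AND counter_name IN (N'Workfiles Created/sec', N'Worktables Created/sec');

WAITFOR DELAY '00:00:10';  -- 10-second sample window

SELECT c.counter_name,
       (c.cntr_value - f.cntr_value) / 10.0 AS per_second
FROM sys.dm_os_performance_counters AS c
JOIN @first AS f ON f.counter_name = c.counter_name
WHERE c.object_name LIKE '%Access Methods%'
  AND c.counter_name IN (N'Workfiles Created/sec', N'Worktables Created/sec');
```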

  • Firstly, is that value a problem? Is it higher than usual? Is it causing problems?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (8/19/2014)


    Firstly, is that value a problem? Is it higher than usual? Is it causing problems?

    Thanks for your response! I appreciate it!

    Yes, I use SolarWinds to monitor SQL Server performance, and it shows that Workfiles Created/sec and Worktables Created/sec are in CRITICAL status.

    Here is some information from SolarWinds about workfiles:

    "Work files could be used to store temporary results for hash joins and hash aggregates. The returned value should be less than 20. Tempdb work files are used in processing hash operations when the amount of data being processed is too large to fit into the available memory."

    Possible problems: High values can indicate thrash in the tempdb file as well as poorly coded queries.

    And this is some information from SolarWinds about worktables (at the moment, the value is 739/sec):

    "Work tables could be used to store temporary results for query spool, lob variables, XML variables, and cursors. The returned value should be less than 20. Worktables are used for queries that use various spools (table spool, index spool, and so on)."

    Possible problems: High values could cause general slowdown.

    What should I do?

  • WhiteLotus (8/19/2014)


    Yes, I use SolarWinds to monitor SQL Server performance, and it shows that Workfiles Created/sec and Worktables Created/sec are in CRITICAL status. ... The returned value should be less than 20. ... What should I do?

    Start looking for "crap code" and fix it. 😉

    To be honest, though, if you have a lot of batch jobs or reporting requests on a busy machine, 20 seems like a low number, especially in this day and age with all the bloody XML flying around.

    I'd still check for performance-challenged code, though.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
        Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (8/19/2014)


    Start looking for "crap code" and fix it. 😉

    To be honest, though, if you have a lot of batch jobs or reporting requests on a busy machine, 20 seems like a low number especially in this day and age with all the bloody XML flying around.

    I'd still check for performance challenged code, though.

    Hi Jeff,

    Thanks!

    How do I know which query or stored procedure has "crap code" when there are hundreds of stored procedures?

  • WhiteLotus (8/24/2014)


    How do I know which query or stored procedure has "crap code" when there are hundreds of stored procedures?

    You don't.

    This is one of the flaws of looking at just perfmon counters. Sure, work tables are 'high', but you haven't said whether they're higher than normal or whether they're causing problems (don't use the tools blindly).

    Performance analysis is never about one counter. There's no one counter or value that tells you everything. It's about looking at the whole system (perfmon counters, wait stats, query execution statistics) and drawing conclusions from all of them together.

    The perfmon counters, with very few exceptions, don't have good or bad thresholds. They have 'normal' and 'not normal' for your server. The wait stats tell you what, in general, SQL is spending time waiting for during query executions, and the query stats tell you which are your most resource-intensive queries overall.
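    Both views can be queried directly. A sketch; note that the numbers are cumulative since the last service restart (or stats clear), so look at relative weight rather than absolute values, and the excluded wait types and TOP limits below are illustrative only.

```sql
-- Top waits, excluding a few common benign/idle wait types.
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'WAITFOR',
                        N'SQLTRACE_BUFFER_FLUSH', N'BROKER_TASK_STOP',
                        N'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;

-- Most expensive cached statements by total CPU; swap the ORDER BY to
-- total_logical_reads if I/O is the bigger concern.
SELECT TOP (10)
       qs.total_worker_time, qs.total_logical_reads, qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```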

    Gail Shaw
  • WhiteLotus (8/19/2014)


    Yes, I use SolarWinds to monitor SQL Server performance, and it shows that Workfiles Created/sec and Worktables Created/sec are in CRITICAL status.

    I would guess the threshold is a value configured within SolarWinds, and my suspicion is that it hasn't been specifically set for your environment. This "high" number of workfiles may be normal depending on the activity, and not necessarily a result of crap code either.

    😎
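    One way to establish what "normal" looks like for a given server is to sample the counters on a schedule (e.g. from a SQL Agent job every few minutes) and compare against last week. A minimal sketch; the table name here is made up.

```sql
-- Create the baseline table once, then run the INSERT on a schedule.
IF OBJECT_ID('dbo.CounterBaseline') IS NULL
    CREATE TABLE dbo.CounterBaseline
    (
        sample_time  datetime      NOT NULL DEFAULT GETDATE(),
        object_name  nvarchar(128) NOT NULL,
        counter_name nvarchar(128) NOT NULL,
        cntr_value   bigint        NOT NULL
    );

INSERT INTO dbo.CounterBaseline (object_name, counter_name, cntr_value)
SELECT RTRIM(object_name), RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Workfiles Created/sec', N'Worktables Created/sec',
                       N'Page Splits/sec');
```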

  • Eirikur Eiriksson (8/25/2014)


    I would guess the threshold is a value configured within SolarWinds, and my suspicion is that it hasn't been specifically set for your environment. This "high" number of workfiles may be normal depending on the activity, and not necessarily a result of crap code either.

    😎

    Hmm, I believe it has been set for my environment, because when I check the fragmentation of the indexes, the values are identical.

    It also shows high numbers for page splits and page writes. I have configured the fill factor on some indexes that get fragmented quickly, but the number of page splits is still high. From 12 AM to 5 AM it is relatively low (around 20/sec), but from 7 PM until now it has been much higher (around 90/sec).

    I wonder how to bring it down. Any ideas?

    Thanks!
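    Before chasing the counter further, it may be worth confirming which indexes actually have a non-default fill factor set and how fragmented they really are. A sketch, run in the database of interest; LIMITED mode keeps the scan cheap, and the 100-page cutoff is arbitrary.

```sql
-- Fill factor and current fragmentation per index, worst first.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       i.fill_factor,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE ps.page_count > 100          -- ignore tiny indexes
ORDER BY ps.avg_fragmentation_in_percent DESC;
```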

  • I will repeat what I said earlier:

    GilaMonster (8/25/2014)


    This is one of the flaws of looking at just perfmon counters. Sure, work tables are 'high', but you haven't said whether they're higher than normal or whether they're causing problems (don't use the tools blindly).

    Performance analysis is never about one counter. There's no one counter or value that tells you everything. It's about looking at the whole system, perfmon counters, wait stats, query execution statistics and drawing conclusions from all of them together.

    The perfmon counters, with very few exceptions don't have good or bad thresholds. They have 'normal' and 'not normal' for your server. The wait stats tell you what, in general, SQL is spending time waiting for during query executions and the query stats tell you which are your most resource-intensive queries overall.

    Page splits/sec are NOT just the mid-index splits which cause fragmentation. As for how to reduce both it and the page writes/sec, maybe ask the users not to use the system?

    Yes, that was a silly suggestion, but it'll do what you want (reduce the counter values).

    Stop looking at individual counter values with a desire to reduce them (you can reduce them by decreasing the workload, but that would be counterproductive). Look at the overall server health. Look for changes (high now, wasn't high last week), and look for things which are causing problems and fix those.

    Gail Shaw

Viewing 9 posts - 1 through 8 (of 8 total)
