Time Outs!!

  • Hello everyone. I am pretty new to SQL; I've created and played with SQL databases in the past, but never at a production or professional level. I work for a company that uses SQL Server as the back end of our company website to run reports. The server was set up many moons ago by someone who is no longer with the company.

    The server ran fine for 10+ years with no problems. Within the last 3 weeks, a single report will not run; it just returns a timeout, but only when the report covers dates between roughly 3 weeks ago and today. All of the dates prior to that still pull and export to Excel. I know this is very vague and could have multiple causes. I've confirmed it is not the network or the front end, and I now find myself staring at SQL Server Management Studio scratching my head. It is running on Windows Web Server 2008.

    I was wondering if, off the top of your skilled heads, anyone has a suggestion for something I could look into that might be causing this issue. The only thing I can think of is that the query is failing, but how can that be if the old data still pulls and nothing has been changed? Is corrupt data a possibility? Please let me know if you would like any more information on the situation. I understand it is vague, but desperate times....

  • There are a lot of good articles out there about SSRS and timeouts, such as this one:
    http://geekswithblogs.net/ssrs/archive/2009/10/30/steps-to-resolve-ssrs-timeout-issues.aspx

    There are 2 basic approaches to timeout issues in SSRS:
    1 - increase the timeout (i.e. wait longer for the report to get the data and render)
    2 - increase the data processing/throughput

    The article I listed above indicates how to adjust the timeout.
    If you want to increase the data throughput, grab the query from the report and run it against your database directly (preferably a test database, not the live version) and see how long it takes with the parameters your report passes in.  Likely there is some optimization that can be performed on the query, or on the tables it pulls data from, that will address the timeout.
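
    For what it's worth, a minimal sketch of that kind of test in SSMS (the table, column, and parameter names here are placeholders, not the actual report query):

    -- Minimal sketch: time the report's query directly in SSMS.
    -- dbo.ReportSourceTable, ReportDate and the dates are placeholders --
    -- paste in the real query and parameter values from the report.
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    DECLARE @StartDate date = '20180603',
            @EndDate   date = '20180610';

    SELECT *                            -- the report's real SELECT list goes here
    FROM   dbo.ReportSourceTable
    WHERE  ReportDate >= @StartDate
      AND  ReportDate <  DATEADD(DAY, 1, @EndDate);

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;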

    But I'd start by checking how long it takes to get the data from the report server, and checking the execution plan wouldn't hurt either.  A lot of different things could be causing the slow report, such as stale statistics leading to slow data retrieval and, in turn, a report timeout.
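
    If it helps, the report server's own execution log can also show which phase is eating the time; a rough sketch, assuming a 2008-era SSRS install where the catalog database is named ReportServer and exposes the ExecutionLog2 view (names and columns vary by version):

    -- Rough sketch: where did recent report executions spend their time?
    SELECT TOP (50)
           ReportPath,
           TimeStart,
           TimeDataRetrieval,   -- ms spent running the dataset queries
           TimeProcessing,      -- ms spent processing the report
           TimeRendering,       -- ms spent rendering the output
           Status
    FROM   ReportServer.dbo.ExecutionLog2
    ORDER BY TimeStart DESC;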

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Thank you for the article. I am going through it all today; hopefully I will find a solution and get back to you!

  • I checked the web.config file; it is currently set up like this:

    <httpRuntime executionTimeout="3600"  maxRequestLength="1048576">

    From what I've gathered so far, that's a 1-hour execution timeout (3600 seconds) and a 1 GB maximum request size (1048576 KB).

    I still have to run the report query against the system, but I ran out of time today. Thanks again, and I will keep you posted.

  • Another issue that is possible is that you've crossed a threshold on the data or in the statistics that has resulted in a change to the execution plan which is degrading performance. To be sure about this, you'd need to capture plans for when the query is running slow and when it is running fast. See if they are different.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • How would I capture plans? I ran a trace, gathered the query from the trace, ran it against the server itself, and found when the issue started: everything before the date parameter 6/3/18 runs immediately, but every report after that date takes 40-55 seconds to gather data. I feel like I'm getting closer now. The number of rows is relatively the same regardless of whether the report is good or bad, so the data being pulled by the query is mostly the same size.

    Any advice on capturing plans, or on where to go from here to find out what is holding up the query when I run it within Management Studio?

  • Check the links in my signature. The book on execution plans will help.
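
    In the meantime, a quick way to get going: in Management Studio you can turn on "Include Actual Execution Plan" (Ctrl+M) before running the query, or capture the plan as XML in T-SQL. A minimal sketch, with placeholder names standing in for the real report query:

    -- Minimal sketch: capture the actual execution plan as XML for one run.
    -- dbo.ReportSourceTable / ReportDate are placeholders -- paste in the
    -- query you pulled from the trace.
    SET STATISTICS XML ON;

    SELECT *
    FROM   dbo.ReportSourceTable
    WHERE  ReportDate >= '20180610'
      AND  ReportDate <  '20180611';

    SET STATISTICS XML OFF;
    -- Run it once with a "fast" date and once with a "slow" date, then
    -- compare the two plans.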

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I guess I'd do two things before any of that substantial deep diving...

    1.  Rebuild stats or at least make sure they've been rebuilt.
    2.  Check the indexes being used in the execution plan (or the tables that appear in the queries) using sys.dm_db_index_physical_stats() to determine the level of fragmentation (see the sketch below).  While fragmentation usually doesn't matter, there is one time when it's an absolute bone crusher, and that's when SQL Server has to read from disk because something isn't already in memory.  It's the READ-AHEADs that are really wrecked when fragmentation is present.  In high contrast, if the necessary data is already in memory, logical fragmentation won't affect performance.

    Physical fragmentation (page density or avg_page_space_used_in_percent) does matter because the lower the number the more memory you've wasted and the more pages you'll need to read to do a given job.
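
    A rough sketch of that check (dbo.ReportSourceTable is a placeholder; point it at the tables behind the report query):

    -- Rough sketch: logical fragmentation and page density for one table's indexes.
    SELECT i.name                              AS index_name,
           ips.index_type_desc,
           ips.avg_fragmentation_in_percent,   -- logical fragmentation
           ips.avg_page_space_used_in_percent, -- page density (physical)
           ips.page_count
    FROM   sys.dm_db_index_physical_stats(
               DB_ID(), OBJECT_ID(N'dbo.ReportSourceTable'), NULL, NULL, 'SAMPLED') AS ips
    JOIN   sys.indexes AS i
           ON  i.object_id = ips.object_id
           AND i.index_id  = ips.index_id;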

    And, please, tell me that no one turned on auto-shrink or has shrunk the database.  That produces index inversion, which is the worst type of logical fragmentation there is.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • I'm looking into the above now. I'm essentially walking into the database without any of its history, Jeff, so I'm not sure what has been done in the past. I do know that a long time ago some data was purged, but it ran fine for years afterward.

    Looking into the execution plans, I noticed that for the date that works immediately there is a hash join, but when I change the date to the 10th instead of the 3rd it uses a nested loops join in the same place. Does this have a chance of being the problem?

    When the report that works generates, it uses the hash join and that operator shows 39% of the plan cost; when the report that takes 45 seconds runs, it uses the nested loops join and that operator shows 40%.

    Thank you for your patience with my SQL handicap.

  • nighthawkk - Monday, August 6, 2018 9:13 AM

    That sounds like a classic symptom of stats that need to be updated, especially if the indexes involved have the "ever-increasing" trait to them.

    Another possible problem may have to do with the "bad" form of parameter sniffing. You can do a quick check by using DBCC FREEPROCCACHE, which isn't usually the bear that people make it out to be.  But that would be my second guess rather than my first.
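
    A minimal sketch of both checks (the table name is a placeholder; test on a non-production copy first if you can):

    -- Minimal sketch: refresh the statistics on the table(s) behind the report query.
    UPDATE STATISTICS dbo.ReportSourceTable WITH FULLSCAN;

    -- Quick test for "bad" parameter sniffing: clear the plan cache and rerun
    -- the slow report to see if a fresh plan behaves differently.
    DBCC FREEPROCCACHE;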

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • So, the issue isn't the particular join; different joins are used for different reasons. The issue is likely data or statistics related. It's probable that each of your different queries has different data being returned. Depending on the values passed to the parameters (and whether parameters are used at all), you're getting different execution plans. So, I'd suggest checking the statistics and the row estimates. You can get these from DBCC SHOW_STATISTICS, or in 2017 by querying the statistics DMVs (my new preferred method, depending on what I'm looking for). It's possible that the statistics are out of date or that the data is skewed. Knowing that you have two different execution plans can help you focus on where the issues are.
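
    As a rough sketch of both options (the table and statistics names are placeholders; sys.dm_db_stats_histogram needs SQL Server 2016 SP1 CU2 or later):

    -- Classic approach: header, density vector and histogram for one statistic.
    DBCC SHOW_STATISTICS ('dbo.ReportSourceTable', 'IX_ReportDate');

    -- DMV approach: last update date, sampling and modifications per statistic.
    SELECT s.name, sp.last_updated, sp.rows, sp.rows_sampled, sp.modification_counter
    FROM   sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE  s.object_id = OBJECT_ID(N'dbo.ReportSourceTable');

    -- DMV approach: the histogram itself, to compare against the row estimates.
    SELECT sh.step_number, sh.range_high_key, sh.range_rows, sh.equal_rows
    FROM   sys.stats AS s
    CROSS APPLY sys.dm_db_stats_histogram(s.object_id, s.stats_id) AS sh
    WHERE  s.object_id = OBJECT_ID(N'dbo.ReportSourceTable')
      AND  s.name = N'IX_ReportDate';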

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Hey Grant,

    I ran them both again and captured some of the row statistics; other than the date, the query is identical.

    With the query that works quickly, on 6/3, the row stats are:

    Est: 82
    Act: 1148

    The slow query @ 6/10:
    Est: 8
    Act: 1197

    I've compared the actual data output and couldn't find anything "unusual" as far as data format goes. I'm looking into "Updating Statistics" now.

  • Right. So see the difference? One estimate thinks nearly 100 rows are coming back; the other thinks only 8. The slow one used a nested loops join to process almost 1,200 rows, and a hash join is faster in that case. Better statistics should result in better plans.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • If you're on 2017, I'd strongly recommend turning on Query Store for your databases to help diagnose these issues. It really is an absolute godsend for finding and comparing variances in plans.
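
    A minimal sketch of turning it on and spotting queries that have picked up more than one plan (the database name is a placeholder; run the SELECT in the context of that database):

    -- Minimal sketch: enable Query Store, then look for queries with multiple plans.
    ALTER DATABASE [YourReportingDb] SET QUERY_STORE = ON;
    ALTER DATABASE [YourReportingDb] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

    SELECT q.query_id,
           COUNT(p.plan_id) AS plan_count
    FROM   sys.query_store_query AS q
    JOIN   sys.query_store_plan  AS p
           ON p.query_id = q.query_id
    GROUP BY q.query_id
    HAVING COUNT(p.plan_id) > 1;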

  • I wanted to thank everyone again for your help. I rebuilt the stats and it works like a dream; every report runs in 1-2 seconds. You're all the true heroes!
