The Need For Speed - Upgrading Your Servers

  • Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnis

  • Thanks for the article, Steve. It left me wondering, though.

    As you say, your metrics looked good except at certain peak times for certain time zones/geographies. So what can one do to alleviate the burden of those peak times without upgrading the whole box in all dimensions (or just the ones peaking)?

    Perhaps the developers could identify the jobs that are sucking up the most resources at those peak times and scheme up some sort of pre-processing that could be done in the hours leading up to the peak.
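
    Not knowing Steve's setup, here's a rough sketch of what I mean, assuming SQL Server 2005's new sys.dm_exec_query_stats DMV is available (on SQL 2000 you'd lean on Profiler instead):

        -- rank cached query plans by cumulative CPU cost
        SELECT TOP 10
            qs.execution_count,
            qs.total_worker_time / 1000 AS total_cpu_ms,  -- worker time is reported in microseconds
            qs.total_logical_reads,
            st.text AS query_text
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        ORDER BY qs.total_worker_time DESC;  -- or by total_logical_reads to find the I/O hogs

    Whatever floats to the top during the peak window is the candidate list for pre-processing.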

    I guess I approach this from a developer point of view. The first thing I'd do is an over-the-shoulder analysis of the work performed by users at this peak time; it could turn out that round trips to the db could be reduced simply by small changes to application design. I have seen apps where the user ends up having to load a data-heavy screen simply to access a link or a button, so the db is doing a bunch of work to deliver data that the user doesn't actually care about. Multiply that by 5,000 seats at tax time, and I guess it would save cycles simply to offer them a direct link to the info they want.

    I guess there are dozens of schemes that developers can employ: pre-processing, distributed processing, or other design changes. But in the final analysis, weighing developer time against a shiny new box, the box might be a lot cheaper.

    Dave

    Trainmark.com IT Training B2B Marketplace
    (Jobs for IT Instructors)

  • I would second the motion to investigate the peaks.

    I work on a large web CMS that has a complicated caching mechanism. If I get the CMS caching right, the database load drops, as most stuff gets retrieved from the cache.
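
    The idea boils down to a read-through cache in front of the base tables; a minimal sketch in T-SQL (the table and names are made up, and our real mechanism is more involved):

        -- hypothetical result-cache table the CMS checks before doing real work
        CREATE TABLE dbo.PageCache (
            CacheKey  varchar(200) NOT NULL PRIMARY KEY,
            Payload   ntext        NULL,
            ExpiresAt datetime     NOT NULL
        );

        -- read-through: serve from cache while fresh; on a miss the CMS
        -- regenerates the page, stores it here, and serves the new copy
        SELECT Payload
        FROM dbo.PageCache
        WHERE CacheKey = @key          -- @key identifies the requested page
          AND ExpiresAt > GETDATE();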

    User perception is also a problem.  I remember thinking an IBM AT was fast!!

    Today the performance wows them, tomorrow exactly the same performance is taken for granted.

  • Yesterday, I went to a Microsoft seminar where Mogens Nørgaard was speaking about obtaining the best performance from a DB system. Key points were:

    1) Always investigate on a job/session level. Do not use overall counters.

    2) Find out where precisely the time is spent.

    Have a look at http://www.baarf.com/ on why not to use RAID 5.

    He was hoping that there would be better "system metrics" built into SQL 2005, so that you would be able to better pinpoint where time is spent.
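
    The new dynamic management views do look like a step in that direction; a small sketch (assuming SQL 2005) of checking, session by session, where the time is going right now:

        -- per-session, not server-wide: what is each active request waiting on?
        SELECT r.session_id, r.status, r.wait_type, r.wait_time,
               r.cpu_time, r.total_elapsed_time, st.text AS current_sql
        FROM sys.dm_exec_requests r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st
        WHERE r.session_id > 50;  -- skip system sessions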

    Regards,

    Henrik Staun Poulsen

    Denmark

  • As far as this system is concerned, it's a DW backend for a MicroStrategy frontend, so many of the peaks are not tunable. End users pick a few things and it generates 50 lines of SQL (or more). So we've built it for 80-90% of the load. We just can't get the last 10-20% without a huge upgrade, and even then, who knows.

    I agree that most of your tuning is in the application and possibly indexes. I've had a couple of seminars where we really run Profiler over a few days, capturing SQL and then grouping and sorting to find the top 10 worst performers and focusing there. We also look at the frequency of these queries and use that to gauge whether to tune them. I might have a 2-hour query, but if it's run once a year I might not tune it. But a query that takes 2 minutes and is run 100 times a day might be something to focus on.
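
    For anyone who wants to try the same thing, a sketch of the grouping step against a saved trace file (the path is made up; TextData is ntext, so it has to be cast before it can be grouped):

        SELECT TOP 10
            CAST(TextData AS nvarchar(4000)) AS query_text,
            COUNT(*)      AS times_run,
            SUM(Duration) AS total_duration,
            AVG(Duration) AS avg_duration
        FROM ::fn_trace_gettable('C:\traces\peak_week.trc', DEFAULT)
        WHERE EventClass IN (10, 12)  -- RPC:Completed, SQL:BatchCompleted
        GROUP BY CAST(TextData AS nvarchar(4000))
        ORDER BY SUM(Duration) DESC;

    Sorting by the total rather than the average is what catches that 2-minute query run 100 times a day.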

    The reality is that the most frequently complained-about queries are the ones we focus on, trying to tune or rework them (and the app) to improve the perception of performance.

  • If it's for reports, try SQL Server Reporting Services, where you can schedule reports to be delivered to users.

    So you can run stuff overnight, over the weekend, or in off-peak periods and deliver it to users by mail.
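
    Even without Reporting Services, a plain SQL Agent job plus SQL 2000's xp_sendmail can cover the off-peak part; a rough sketch (the job name, schedule, and query are all made up):

        -- nightly job that mails a heavy report at 2 AM instead of at peak time
        EXEC msdb.dbo.sp_add_job         @job_name = N'Nightly sales report';
        EXEC msdb.dbo.sp_add_jobstep     @job_name = N'Nightly sales report',
             @step_name = N'Run and mail', @subsystem = N'TSQL',
             @database_name = N'Sales',
             @command = N'EXEC master.dbo.xp_sendmail
                              @recipients = ''users@example.com'',
                              @query      = ''SELECT * FROM dbo.DailySummary'',
                              @subject    = ''Overnight sales report''';
        EXEC msdb.dbo.sp_add_jobschedule @job_name = N'Nightly sales report',
             @name = N'2 AM daily', @freq_type = 4, @freq_interval = 1,
             @active_start_time = 20000;  -- HHMMSS, i.e. 02:00:00
        EXEC msdb.dbo.sp_add_jobserver   @job_name = N'Nightly sales report';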
