What do you consider important when performance testing?

  • Comments posted to this topic are about the item "What do you consider important when performance testing?"

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Recently I thought of something that had never previously crossed my mind: what about purposely testing against a database with fragmented indexes, as may exist in a production database? Is this a valid scenario? How do you deliberately create such data?

    When a database is restored to another instance having the same version of SQL Server, the indexes, fragmentation, and statistics will be the same as when the backup was performed. So, if your QA process involves working from a recent restore of production, you should have a good base for comparison in that regard.
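
    For example, here is a minimal sketch (the table name is a hypothetical placeholder) to confirm the statistics carried over after restoring the production backup into QA:

        -- Run in QA and in production; last_updated, rows, and rows_sampled
        -- should match if the restore preserved the statistics.
        SELECT s.name, sp.last_updated, sp.rows, sp.rows_sampled, sp.modification_counter
        FROM sys.stats AS s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
        WHERE s.object_id = OBJECT_ID('dbo.SomeTable');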

    However, when performance testing in a QA environment, I don't think it's necessary to duplicate the physical environment of production to get a good idea of how it will perform in production. Even if the QA environment has only 10% of the data volume of production, so long as the object schemas, configuration, and data cardinality are equivalent, the execution plans should be equivalent too. So, for example, if adding a new join and columns to a stored procedure results in 30% more page reads in QA, then you should expect the same proportional 30% increase in production. That's what you should focus on: how the latest changes modify the plan and proportionally impact I/O, CPU, and memory, not runtime duration.
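
    To make that comparison concrete, here is a minimal sketch (the procedure name and parameter are hypothetical placeholders) for capturing the page-read figures in QA:

        -- Run the old and new versions of the procedure with this enabled and
        -- compare the "logical reads" lines in the Messages output; the ratio
        -- between the two versions is what should hold in production.
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        EXEC dbo.SomeReportProc @CustomerId = 42;

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;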

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • One quick answer would be to create a load of VMs and do the testing from there; however, if your prod instance(s) are physical, there's a chance the results could be skewed due to differences in hardware.

    qh

    Who looks outside, dreams; who looks inside, awakes. – Carl Jung.

  • Staging a high transaction volume OLTP environment for QA and performance testing is actually much harder than staging a data warehouse environment. Data warehouses tend to be static with fewer users. However, even if you duplicate the OLTP physical environment, software, and database, it is more difficult to predict how it will perform or behave when confronted with multi-user and concurrency scenarios.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (8/7/2014)


    Staging a high transaction volume OLTP environment for QA and performance testing is actually much harder than staging a data warehouse environment. Data warehouses tend to be static with fewer users. However, even if you duplicate the OLTP physical environment, software, and database, it is more difficult to predict how it will perform or behave when confronted with multi-user and concurrency scenarios.

    There are plenty of tools, LoadRunner springs to mind, that can be configured and scripted to exercise clients to replicate concurrency and multi-user scenarios (see the sketch at the end of this post).

    It is the database instance that I have not thought through deeply enough. Hardware etc. is easy, if costly, to replicate, as is the data.

    I don't think that many people duplicate the data's physical characteristics, so I thought I would ask.
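
    As a free alternative to LoadRunner, Microsoft's ostress (part of the RML Utilities) can replay a call under concurrency. A minimal sketch, where the server, database, and procedure names are hypothetical placeholders:

        REM 25 concurrent sessions, each executing the call 100 times
        ostress.exe -Sqa-sql01 -dSalesDB -E -Q"EXEC dbo.SomeReportProc @CustomerId = 42;" -n25 -r100 -q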

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Thanks, Gaz! Testing is imperative, and any test, no matter how valid or invalid it is thought to be, is welcome. Sometimes the most poorly thought-out test can emulate what happens in production under abnormal circumstances. Should we stress test the code and the database? Yes, why wouldn't we?

    M.

    Not all gray hairs are Dinosaurs!

  • Eric M Russell (8/7/2014)


    Even if the QA environment has only 10% of the data volume of production, so long as the object schemas, configuration, and data cardinality are equivalent, the execution plans should be equivalent too. So, for example, if adding a new join and columns to a stored procedure results in 30% more page reads in QA, then you should expect the same proportional 30% increase in production. That's what you should focus on: how the latest changes modify the plan and proportionally impact I/O, CPU, and memory, not runtime duration.

    Try offering to management the answer that runtime duration isn't important when end users are seeing slow responses after a recent change. It won't go down well!

  • The Fault (8/7/2014)


    Try offering to management the answer that runtime duration isn't important when end users are seeing slow responses after a recent change. It won't go down well!

    Runtime duration is important. I'm just saying that I/O page counts and execution plans are more useful for relative comparisons between two different environments.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (8/7/2014)


    Runtime duration is important. I'm just saying that I/O page counts and execution plans are more useful for relative comparisons between two different environments.

    And we do those relative comparisons between two different environments so that we can determine how to make the response more efficient, and faster. 🙂

    Not all gray hairs are Dinosaurs!

  • Miles Neale (8/7/2014)


    And we do those relative comparisons between two different environments so that we can determine how to make the response more efficient, and faster. 🙂

    Differences in the disk system, memory, and CPU can influence runtime, even if the database is identical. But if page reads double in QA, then you can bet that there will be double the reads in production as well.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (8/7/2014)


    Differences in the disk system, memory, and CPU can influence runtime, even if the database is identical. But if page reads double in QA, then you can bet that there will be double the reads in production as well.

    In summary, measuring would only be indicative as opposed to quantifiable. Right?

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Gary Varga (8/7/2014)


    In summary, measuring would only be indicative as opposed to quantifiable. Right?

    Measuring page reads would be more quantifiable than measuring duration. Duration can be influenced by any number of things, some of them transitory, like the speed of the disk, cache hit ratio, blocking, etc. However, if a modified query reads twice as many pages in QA, then it will most certainly read twice as many pages in production, assuming your QA database is a recent restore from production.
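
    A minimal sketch of that kind of comparison, pulling average logical reads per execution from the plan cache (the LIKE filter is a hypothetical placeholder):

        SELECT TOP (10)
               qt.text AS query_text,
               qs.execution_count,
               qs.total_logical_reads / qs.execution_count AS avg_logical_reads
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
        WHERE qt.text LIKE '%SomeReportProc%'
        ORDER BY avg_logical_reads DESC;

    Run in both environments, the absolute figures may differ, but the before/after ratio is what should track between QA and production.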

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (8/7/2014)


    Measuring page reads would be more quantifiable than measuring duration. Duration can be influenced by any number of things, some of them transitory, like the speed of the disk, cache hit ratio, blocking, etc. However, if a modified query reads twice as many pages in QA, then it will most certainly read twice as many pages in production, assuming your QA database is a recent restore from production.

    I think that is a yes.

    M.

    Not all gray hairs are Dinosaurs!

  • Miles Neale (8/7/2014)


    I think that is a yes.

    LOL. No, really I did 🙂

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • For single-row queries normally associated with the front end, fragmentation means almost nothing because you're stuck with looking up single rows.

    For batch file code, I normally don't test against highly fragmented data. I test with the conditions that already exist.

    For special cases, you might want to do something like what was done in the "Test Table and Data" section of the following article: http://www.sqlservercentral.com/articles/T-SQL/68467/

    If you want to really test with fragmented data on all of the tables, just do a shrink-file on the test database. Don't even think of doing that on a production database.
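
    A minimal sketch of that approach (the database and logical file names are hypothetical placeholders):

        -- TEST database only: shrinking the data file moves pages around and
        -- leaves the indexes heavily fragmented. Never do this in production.
        USE TestDB;
        DBCC SHRINKFILE (TestDB_Data, 1);

        -- Verify the resulting fragmentation:
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        ORDER BY ips.avg_fragmentation_in_percent DESC;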

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
