Sure.  And understand that this is just my opinion.  If you want official definitions, look up each term on the Internet.

    My Basic Definitions of the Terms

    A "Bench Mark" is the result of a specific test and the inputs and processes are very well controlled and understood.  The tests are frequently executed under "optimal conditions" and are just as frequently used to determine what is possible if everything goes right.

    A "Baseline" can be made up of the results of  many "Bench Mark" tests or it can be made up of simple observations/measurements of what the current condition of something is.

    My Discussion on the Terms

    Depending on how the measurements were taken for a Baseline, it can be used for different things:

    1. If the Baseline was established from the results of carefully controlled "Bench Mark" testing, then the Baseline can be used to mark the goals that new development or current conditions should be measured against.

    2. If the Baseline is a result of observations or measurements of current conditions, it can be used to... (see the sketch after this list)

    2.1 Determine if something is running more poorly than "normal".
    2.2 Measure improvements, keeping in mind that "normal" may actually be rather poor to begin with, so you can make sure that such "improvements" actually are improvements.
    2.3 Determine whether additional improvements are needed by comparing the "normal" Baseline to a Baseline of "Bench Marks" that was established during some form of "optimal testing".
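
    For anyone who wants to actually start capturing a "current conditions" Baseline on SQL Server, here's a rough sketch of one way to do it.  It's just my illustration... the table name (dbo.BaselineSample) and the particular counters are my own choices and not anything official.

    --===== A minimal sketch of capturing a "current conditions" Baseline.
         -- The table name and the choice of counters are illustrative only.
     IF OBJECT_ID('dbo.BaselineSample','U') IS NULL
      CREATE TABLE dbo.BaselineSample
            (
             SampleTime    DATETIME2(0)  NOT NULL DEFAULT SYSDATETIME()
            ,CounterObject NVARCHAR(128) NOT NULL
            ,CounterName   NVARCHAR(128) NOT NULL
            ,InstanceName  NVARCHAR(128) NOT NULL
            ,CounterValue  BIGINT        NOT NULL
            )
    ;
    --===== Snapshot a handful of the PerfMon counters that SQL Server exposes.
         -- Note that the "per second" counters here are cumulative, so turning
         -- them into a rate takes two snapshots, a subtraction, and a division
         -- by the elapsed seconds.
     INSERT INTO dbo.BaselineSample
            (CounterObject, CounterName, InstanceName, CounterValue)
     SELECT RTRIM(object_name), RTRIM(counter_name), RTRIM(instance_name), cntr_value
       FROM sys.dm_os_performance_counters
      WHERE counter_name IN (N'Page reads/sec', N'Page writes/sec', N'Batch Requests/sec')
    ;

    Run something like that on a schedule and you end up with a picture of "normal" that you can compare new measurements against.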

    An example of the importance of both types of Baselines is what happened when I started employment at my latest job.  To make a much longer story shorter, average CPU was typically at 40% with spikes to 80%, along with a whole lot of logical and physical reads and writes (typically > 100MB/second which, IMHO, was just silly for the paltry workload present).  Everyone before me thought that was "normal" for the amount of work being done.  They never checked to see if that Baseline could be improved upon, but I demonstrated some of the possibilities.  Today, many of our databases have grown from 65MB to 1.2TB, the number of users has quadrupled, our current Baseline for average CPU is 6-8%, and our current Baseline for average disk usage is < 5MB per second.
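
    For those wondering where an "MB per second" number like that can come from, here's a rough sketch of one way to measure it: two snapshots of sys.dm_io_virtual_file_stats taken a minute apart with the difference turned into an average rate.  The 60-second interval is just my illustrative choice, and keep in mind that this DMV only counts SQL Server's own I/O against its database files, not everything else on the disks.

    --===== A minimal sketch of measuring average disk throughput in MB per second.
    DECLARE @Before TABLE (BytesRead BIGINT, BytesWritten BIGINT);
    DECLARE @After  TABLE (BytesRead BIGINT, BytesWritten BIGINT);

     INSERT INTO @Before (BytesRead, BytesWritten)
     SELECT SUM(num_of_bytes_read), SUM(num_of_bytes_written)
       FROM sys.dm_io_virtual_file_stats(NULL, NULL)
    ;
    WAITFOR DELAY '00:01:00'; -- the sample interval

     INSERT INTO @After (BytesRead, BytesWritten)
     SELECT SUM(num_of_bytes_read), SUM(num_of_bytes_written)
       FROM sys.dm_io_virtual_file_stats(NULL, NULL)
    ;
     SELECT AvgReadMBPerSec  = (a.BytesRead    - b.BytesRead)    / 1048576.0 / 60.0
           ,AvgWriteMBPerSec = (a.BytesWritten - b.BytesWritten) / 1048576.0 / 60.0
       FROM @Before b
      CROSS JOIN @After a
    ;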

    Of course, the end users don't give a hoot about any of that.  They just "want things to run fast", but we wouldn't have been able to accomplish such performance without having both types of Baselines available, and we wouldn't have known when to stop trying for improvements, either.

    Of course, there are also "Anecdotal Bench Marks and Baselines".  A good example of that is how easy it used to be to format both text and code on this forum compared to how difficult it is now.  People argue about what it was compared to what it is, and to no avail, because it's all "Anecdotal" and, apparently, isn't worth the proverbial paper it's written on because it was never officially measured and documented. 😉  This exemplifies why a good "Baseline" of "Bench Marks" from good testing PRIOR to a change must be taken: so that you can actually test "improvements" and make sure that they actually are improvements before releasing something to production.  Then, as so often happens, you actually have to find someone "pure" who isn't biased and can correctly analyze whether a change was an actual improvement or not.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)