Jeff Moden - Monday, June 11, 2018 8:31 PM
Sure, there are probably people who make that mistake. But I suspect they are a minority. The proper assumption is that the first baseline is meaningless (as are, probably, the next few).
My approach is to measure things several times and get a baserange rather than a baseline. Then I change something a bit and see whether the measure goes outside the base range. If it goes somewhere I think is better, that tells me my change may be useful - but not whether the amount of change is too much or too little, so I have to take measurements with a lot of changes in that dimension. If it goes the wrong way, I need to see whether a smaller or larger change would go the right way. Then I change some other thing, and the process is similar.

There are probably lots of things to change, and in the end changing one at a time isn't enough, so one ends up with an N-dimensional table of results if there are N things one might change. Doing this properly requires a very large amount of measuring, so it may well be an unacceptable overhead. So I need to find some things that I understand (for example, that if I usually want to sort on X then having an index on X has a chance of being useful), and if I can find enough things that I understand then I can reduce the amount of measurement (perhaps partly by reducing the number of dimensions I have to consider) and get a reasonable conclusion at a lower cost than blind experimentation.
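To make the idea concrete, here is a minimal Python sketch of that baserange approach. All the names (`measure`, `baserange`, `outside`) are mine, invented for illustration; the discard count for the meaningless first runs is an arbitrary assumption, not a recommendation.

```python
import time

def measure(fn, runs=10):
    """Time fn several times; return the list of elapsed seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return samples

def baserange(samples, discard=3):
    """Drop the first few (likely meaningless) measurements and
    return the (min, max) of the rest as the base range."""
    kept = samples[discard:]
    return min(kept), max(kept)

def outside(sample, rng):
    """True if a new measurement falls outside the base range,
    i.e. the change may have had a real effect."""
    lo, hi = rng
    return sample < lo or sample > hi
```

One would then re-measure after each single change and only treat results outside the range as evidence; anything inside the range is indistinguishable from noise. This is only a sketch of the one-dimensional case; the N-dimensional table of results would need a measurement per combination of changes.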
Tom