I took a database and used SQL Profiler to trace activity while a computer program executed a complete cycle as a user would in the application (it's like one BIG macro that mimics a user). I fed that trace to the Database Tuning Advisor (DTA), and its results said, "if I add these indexes and statistics to the DB, then I will get X% improvement." So I did. I reran the above scenario and saw an improvement, but nowhere near what DTA said I would. I ran the second test through DTA and it found Y% improvement if I added some more indexes and statistics. So I did. On the third test, the numbers I got back were terrible: durations 3 to 5 times higher, reads 20% higher on some tables, and CPU 3 to 5 times higher than any previous test. However, this is where it gets REALLY strange. The same computer-controlled program that ran every previous test in about 5 hours completed the exact same run in 3 hours and 30 minutes with all the new indexes and stats.
So here are my questions:
1. How can adding indexes and statistics make these numbers "appear" worse, when DTA itself suggested these changes based on the queries from the test?
2. Do I care about number 1? (Am I looking at this as an absolute sign of worsening, or is there a "good reason" for the increased reads... like I gave the DB a better way to read data, so it reads more, faster?)
3. How can longer per-query durations (3 to 5 times longer than any other test) produce a faster overall completion time?
(I understand #3 is a loaded question, as you are not aware of the variables within the network.)
4. What else can I do to evaluate these numbers and get a REAL picture of the results?
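For what it's worth, here is a minimal sketch of how I've been thinking about comparing the runs: aggregate the trace metrics per query, then compute the percentage change between a baseline run and a tuned run. The procedure name, metric names, and toy numbers below are all made up for illustration; they are not my real trace data.

```python
# Hypothetical sketch: compare per-query trace metrics between two runs.
# The dict layout (query -> {metric: value}) is an assumption standing in
# for whatever you export from Profiler; the numbers are toy values.

def compare_runs(baseline, tuned):
    """Return per-query percentage change for each metric (positive = worse)."""
    changes = {}
    for query, base_metrics in baseline.items():
        new_metrics = tuned.get(query)
        if new_metrics is None:
            continue  # query didn't appear in the tuned run
        changes[query] = {
            metric: round(100.0 * (new_metrics[metric] - base) / base, 1)
            for metric, base in base_metrics.items()
            if base  # skip zero baselines to avoid division by zero
        }
    return changes

baseline = {"usp_GetOrders": {"duration_ms": 120, "reads": 5000, "cpu_ms": 40}}
tuned    = {"usp_GetOrders": {"duration_ms": 480, "reads": 6000, "cpu_ms": 160}}

print(compare_runs(baseline, tuned))
# e.g. duration up 300%, reads up 20%, CPU up 300% -- numbers like my third test
```

A per-query view like this at least separates "which statements regressed" from the end-to-end wall-clock time of the whole run, which is exactly where my results seem to contradict each other.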