Recently we released the results of the annual survey to determine interests for the 2010 Summit. Sharing the results was a step in the right direction, but what we didn’t do was share what we thought we had learned from them, and we were taken to task a bit by Steve Jones and Brent Ozar (great comments on this one) for not doing that.
It’s a case of us failing by doing more. If we had not released the results, we wouldn’t have gotten in trouble for not interpreting them! Speaking only for myself here, we missed an opportunity, but it’s a good lesson to learn. I sent the following to the Board as thoughts around this:
Note: PASSCab is our advisory group, a chance for us to get some opinions from outside the bubble of the Board – see the post by Tom for more details.
We’ve got a lot of learning to do and process to build around surveys – or do we? Part of me thinks absolutely, we need to go through the 10 step process and follow it perfectly, lest we be criticized. If we can spend a few hours to do that and still field a new survey quickly, that’s good. If every survey gets bogged down in hair-splitting review, is it worth that to get perfect answers? Surveys that will get reused merit more investment, and probably all surveys benefit from a simple process, but – see how easy it is to add overhead?
I’ve got another post coming up next week that talks about the criticism side – it might surprise you with some thoughts there.
See this post by Tom LaRock on the same issue.