Steve Jones - SSC Editor (9/15/2015)
Sean Redmond (9/14/2015)
There are several metrics that can be used:
1. How fast can an everyday problem be solved? Does a complex query take the DBA 30 minutes or 2 days?
2. How accurate/thorough is the work done? This is measured through code reviews: in how many code reviews were changes made that affected the data output (rather than, say, the speed or efficiency of the query)?
3. How efficient is the work done? Likewise, this is measured through code reviews.
4. How extensive is the body of knowledge of the DBA? This is measured by how often a DBA must look something up. For example, if asked to implement, say, log-shipping, could it be done then and there, or would the DBA need to go and read up on it first?
5. How complete is the body of knowledge of the DBA? What does the DBA not know about? I know of Service Broker but have never implemented it. There are surely many things that I don't know about. A complete checklist of all the features available within the SQL Server spectrum would be the way to measure what is not even known about.
6. How interested is the DBA in learning more about technology? This can be measured yearly in new skills practiced and mastered.
All the best,
Everything you've listed is a highly subjective measurement. One code review is not the same as another. The choice of "how often" or "how interested" is both highly variable and elusive from person to person (whether interviewee or interviewer).
Yes, everything here is subjective until standardisation and comparison come into play, at which point baselines are established and can be used as a point of reference. I should have mentioned that I had a departmental setup in mind: the DBAs set themselves as the standard, and comparisons are made against that. By code review, I meant double-checking. We have a policy that all reports and queries that leave the department are double-checked by a member of the DBA team, which usually means re-doing the query in question. What kind of changes, if any, were sent back? How long did the double-check take? These are all measurable things and can be used both to measure progress over time and to gauge industriousness and thoroughness.
It is not too different from the DB server. This SP is running slowly. "Slow" is subjective. Once baselines and acceptable guidelines are set, one may decide definitively whether it is running slowly and under what conditions. Likewise with DBAs or developers: take queries of a certain level of difficulty, say a 4-join query against could-be-cleaner data with various exclusions and conditions that, in all, returns 100K-ish rows. Something like that is very measurable. Does it take the interviewee or employee 10 minutes, 30 minutes or 2 days? And how closely does their result match the test result that you have generated?
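The baseline idea above can be sketched in a few lines. This is an illustrative Python sketch, not tied to any monitoring tool or to the thread's actual setup: given historical durations for the same procedure, a run is flagged as "slow" only when it falls well outside its own baseline (here, mean plus two standard deviations, a threshold chosen purely for illustration).

```python
from statistics import mean, stdev

def is_slow(history_ms, new_run_ms, sigmas=2.0):
    """Judge 'slow' against a baseline, not a gut feeling:
    a run is slow only if it exceeds the historical mean by
    more than `sigmas` standard deviations."""
    baseline = mean(history_ms)
    threshold = baseline + sigmas * stdev(history_ms)
    return new_run_ms > threshold

# Hypothetical historical durations (ms) for one stored procedure:
history = [210, 195, 230, 205, 220, 215, 200, 225]

print(is_slow(history, 230))  # prints False: within normal variation
print(is_slow(history, 400))  # prints True: well beyond the baseline
```

The same shape of comparison works for the people-metrics discussed here: once enough double-checks or test queries have been timed, "took too long" stops being a matter of opinion and becomes a deviation from an agreed reference point.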
I had more continuous assessment in mind. However, once stats and test queries have been compiled on the members of a team, there is no reason why an interviewee can't be tested. In an interview, it is the interviewer who decides how often "often" is and how interested "interested" is.