The Voice of the DBA

Steve Jones is the editor of SQLServerCentral.com and visits a wide variety of data-related topics in his daily editorial. Steve has spent years working as a DBA and general-purpose Windows administrator, primarily working with SQL Server since it was ported from Sybase in 1990. You can follow Steve on Twitter at twitter.com/way0utwest

Rating PASS Abstracts

I was on a committee to help rate abstracts for the PASS Summit this year. It was an interesting and challenging experience. I learned some things, and I can better appreciate that this is a tough job. It's hard to choose what people want, what will interest them, and what makes the Summit a better sell to potential attendees. Or at least to their managers.

I found some flaws in the process, or at least things that made the decisions difficult for me. I wanted to list them out, not to blame anyone, but to give some insight into the process and perhaps gather ideas on how to better serve the community in the future.

Here’s the basic process:

  • People submit abstracts. This year they could see what else had been submitted prior to their entry, and I believe they could edit their own afterwards, though I'm not sure.
  • All abstracts were put in an XLS and sent to committee members.
  • We had a tool on the PASS site that allowed us to rate each abstract on a 1-10 scale in four areas: abstract, topic, speaker, and subjective (a rough sketch of how scores like these can roll up into a combined total follows this list).
  • Once all sessions in our area were rated, the committee scheduled a call to review the overall rating.
  • We picked a certain number of sessions in various tracks along with alternates.
  • People were notified.
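
As a rough illustration only (this is not the actual PASS tool or formula; the session names, rater names, and the simple summing are all assumptions on my part), here's a minimal Python sketch of how four 1-10 ratings from a few committee members could roll up into the kind of combined score that comes up in the comments below (88 vs. 90):

    # Hypothetical illustration only -- not the actual PASS rating tool or formula.
    # Each committee member scores a session 1-10 in four areas; the combined
    # score here is simply the sum of every area score from every rater.

    AREAS = ("abstract", "topic", "speaker", "subjective")

    # ratings[session][rater] maps each area to a 1-10 score (made-up data).
    ratings = {
        "Replication Deep Dive": {
            "rater_a": {"abstract": 8, "topic": 7, "speaker": 6, "subjective": 7},
            "rater_b": {"abstract": 7, "topic": 8, "speaker": 7, "subjective": 8},
            "rater_c": {"abstract": 8, "topic": 8, "speaker": 7, "subjective": 7},
        },
        "SSIS ETL Patterns": {
            "rater_a": {"abstract": 9, "topic": 8, "speaker": 7, "subjective": 8},
            "rater_b": {"abstract": 8, "topic": 8, "speaker": 8, "subjective": 7},
            "rater_c": {"abstract": 7, "topic": 7, "speaker": 6, "subjective": 7},
        },
    }

    def combined_score(per_rater):
        """Sum every area score from every rater (3 raters x 4 areas x 10 = 120 max)."""
        return sum(scores[area] for scores in per_rater.values() for area in AREAS)

    for session, per_rater in sorted(ratings.items(),
                                     key=lambda kv: combined_score(kv[1]),
                                     reverse=True):
        print(f"{session}: {combined_score(per_rater)}")

With a straight sum like that, the gap between an 88 and a 90 is one rater nudging a couple of area scores by a point, which is part of why an 88 and a 90 looked like the same score to me.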

Here's a quick list of the issues I saw; I'll give more detailed thoughts on each of these later. However, I'm also curious about feedback from other people who are interested.

  • My first issue was that I didn't really have a set of guidelines for what each rating meant. What is abstract vs. topic vs. subjective? Speaker I could figure out, but what do you do if you don't know the speaker or haven't seen them talk? We actually had people rating this differently.
  • I find it difficult to rate things on a 1-10 scale. What's a 7 vs. an 8 vs. a 9? I found myself struggling and probably rated things inconsistently. I might give someone an 8 in one area, then find a very similar item and give it a 7. This is a hard one.
  • I didn’t have much feedback on how other people in the community felt about speakers.
  • I didn’t have much feedback on how other people in the community felt about topics, or these specific sessions.
  • It was hard to tell if we were covering all types of topics in SQL Server. I only saw one replication submission in the spotlight sessions. No idea if there were any in the regular sessions.
  • I had no insight into what other groups were doing for their tracks. For all I know we all picked SSIS ETL sessions and everyone left out fuzzy matching or data mining.
  • I have no idea what speakers submitted in other tracks or areas.

Some of these worked themselves out, so they aren't major complaints. As an example, we discussed our ratings and didn't necessarily pick the top xx in a given area. We moved things around and sometimes picked sessions that were rated much lower.

As I mentioned, I’ll post some notes on each of these areas in more detail, assuming I can disclose things. Please feel free to comment on what you think we should do to pick sessions of more value to everyone.

Comments

Posted by Brad M. McGehee on 1 July 2010

Steve, did you have access to the speakers' PASSPort profiles, last year's speaker ratings, and the survey taken this spring that rated the various topics potential attendees wanted to see at this year's Summit? None of these are perfect, but my main question is whether you knew about these information sources, and if so, did you use them? And if you did use them, how could they be improved?

Posted by Steve Jones on 1 July 2010

I suppose we could have viewed PASSPort profiles, but it didn't come up, and I believe that my committee just forgot they were there. The tool we used didn't surface this information, so it was slightly cumbersome for us.

I know that I researched some people, Googled for sessions, had the Handbook PDF and Speaker Ratings PDF open along with our rating tool and an XLS of the sessions.

The critique I list above isn't so much a complaint as an observation about what I found difficult in this process.

Posted by jeremiah.peschka on 2 July 2010

Steve - thanks for sharing your thoughts. Of course, I already knew about them since you emailed me a while back with them.

We had last year's speaker scores available. If you didn't get that info, let me know offline and I'll make sure that we get that in place as something more obvious next year.

You've also exposed one of my secret loathings in this world: Likert scales (1-5/1-10 rating systems). I could expound at length on what makes for a good or bad rating system, in my opinion. But, alas, I am not a statistician.

Anyway, like I said, thanks for sharing your thoughts.

Posted by Steve Jones on 2 July 2010

I did have last year's scores around. I had those on the screen and would read them off. Another committee member had some attendance numbers and read those off as needed, etc.

Part of the issue was process. The tools didn't make it easy to expose and compare things quickly, which you need when you are on a call. Not bad, but room for improvement.

I would like to see the scales changed. We had some arguments where people were saying our rating for xx was 88 and for y was 90. To me, at that point they're the same, so we need to talk about other things.

I have more posts coming. We can, I think, make this a smoother and better process for the people making decisions.

Posted by Kevin Kline on 7 July 2010

You bring up a good point, Steve, that I don't think has ever been addressed before. That is, I don't ever remember seeing a process to handle "completeness". Taking an extreme example, what if there weren't any speakers who submitted a session on query tuning? Not likely, of course, but afaik the selection process only ranks abstracts submitted by the broader speaker community. If no one puts in an abstract on a given topic, there's no means of finding that "NULL" value nor a means of seeking to fill that gap.

I don't really have a solution to the issue.  But it's something that'd be good to think about imo.

Best regards,

-Kev
