
Selecting Speakers for the Summit


This is part of a series of posts on the PASS Summit and SQL Saturdays. I’m outlining some thoughts here, sometimes for the first time, on topics that I think make for better events. These are opinions and thoughts, not mandates or demands.

I attended, well, listened to, the PASS Speaker Selection Q&A this week. In the spirit of being constructive, I want to thank the board members and committee members for attending. I thought the event was well run and informative. I appreciate the disclosure.

TL;DR: I think the volunteers did a good job, with minor room for improvement. Disclose more numbers and guidelines, and involve the community more.

Hearing about the process is always good, as it affects the community that attends the event: both speakers and attendees. I’ve been involved before, and should do so again, to see how things might be different. I appreciate the amount of work involved, and would like to thank Allen White, Lance Harra, and Mindy Curnutt for their work. I know many others are involved, and thanks for your time as well. Some of the numbers shared:

  • 840 abstracts
  • 255 speakers (3+ abstracts per speaker)
  • 112 session slots (plus 10 pre-cons and 4 lightning talks)
  • 5,000+ comments from reviewers

That last number is quite interesting. That’s an average of 5.9 comments per submission, much higher than when I was involved. What’s more, Lance and Mindy reviewed all comments, ensuring none were inappropriate before being returned to speakers. While I abhor censorship, this is something that needs to be done. Some (very few) volunteers will communicate their thoughts poorly, or have a bad day. Reviewing and redacting (or asking for rewording) makes sense.
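For what it’s worth, here’s the back-of-the-envelope arithmetic from those figures. The derived ratios are mine, not PASS’s:

```python
# Back-of-the-envelope arithmetic using the figures shared on the call
abstracts = 840
speakers = 255
session_slots = 112
reviewer_comments = 5000  # "5,000+", so treat these as lower bounds

print(f"Abstracts per speaker:  {abstracts / speakers:.1f}")           # about 3.3
print(f"Comments per abstract:  {reviewer_comments / abstracts:.2f}")  # about 5.95
print(f"Rough acceptance rate:  {session_slots / abstracts:.0%}")      # about 13%
```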

There was also a note that Lance and Mindy tracked the timing of comments to ensure volunteers spent time actually thinking about the sessions and speakers, and weren’t racing through with a quick CTRL+C, CTRL+V. That is impressive.

I asked a question about first-time speakers. Not to beat up the committee, but because I think the health of the community depends on regularly getting new speakers, both first-timers at the Summit and new pre-con presenters. Note that I don’t want someone to give their first talk or their first pre-con at the Summit. They need to practice elsewhere, but we need a rotation of new speakers.

Allen mentioned that he looked for 20-25% new speakers, though that guideline isn’t published or listed. I know the number depends on the submissions, but having guidelines and then giving reasons for deviating is what I’d expect. Give numbers and then react. Adjust and explain why. That’s what many of us do with data in our own jobs.

For the record, I think 25% is high. Maybe 20%, especially as we have more and more speakers. I would like to see a goal of at least 10% each year; with 112 slots, that’s roughly a dozen sessions from newer Summit speakers. If you do more, fine, but explain a bit. Not a lot. A bit.

Allen has a nice post with some numbers and the process explanation. Thanks, Allen.

However.

More Data

I want to see more numbers. That’s our job. If PASS doesn’t have time, recruit a few volunteers; I’d bet there are a few people in the community who would love to play with the data. Let us see the pre-cons broken out. Topics, results of surveys. There’s even this cool tool that lets you build some visualizations with data.

Or better yet, put out a feed, or if that’s too hard, some numbers in an Excel sheet or a flat file. Let people do some analysis and learn. You might learn something as well.
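As a rough sketch of what I mean, suppose PASS published even a simple flat file. The file name and columns below are entirely made up, not anything PASS actually provides, but this is the kind of quick analysis anyone in the community could run against it:

```python
# A rough sketch, not a real PASS dataset: assume a hypothetical flat file
# "summit_abstracts.csv" with made-up columns: track, topic, avg_rating,
# first_time_speaker (0/1), accepted (0/1).
import pandas as pd

abstracts = pd.read_csv("summit_abstracts.csv")

# Submissions, acceptances, and average committee rating per track
by_track = abstracts.groupby("track").agg(
    submitted=("topic", "count"),
    accepted=("accepted", "sum"),
    avg_rating=("avg_rating", "mean"),
)
by_track["acceptance_rate"] = by_track["accepted"] / by_track["submitted"]

# Share of accepted sessions that came from first-time Summit speakers
new_speaker_share = abstracts.loc[abstracts["accepted"] == 1, "first_time_speaker"].mean()

print(by_track.sort_values("submitted", ascending=False))
print(f"First-time speaker share of accepted sessions: {new_speaker_share:.0%}")
```

Even a dump like that would settle a lot of “why did topic X get so many slots” arguments with numbers instead of guesses.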

Honestly, while I think comments deserve some privacy protection, I’m not sure ratings do. I’d like to see those released. The committee is always going to upset people, and there will always be complaints. However, the more information you provide, the more we can comment (yes, also complain), but also the more you might learn about how the community thinks about your choices.

After all, you’re supposed to consider the community.

I’m not asking you to only listen to the community. After all, the Summit is a business, and it’s fair if the 12th Power BI session, rated a 4.2, gets bumped to make room for a 3rd Extended Events talk rated a 3.5.

Disclose more and analyze more. React and adapt. If you don’t want complaints, resign. That’s part of the job.

Enough on that.

Community Vote

I do think it’s impossible to build a perfect schedule. Looking at last year’s numbers is inherently flawed; after all, how highly were the Always Encrypted sessions rated last year? Or the first-time speakers? It’s a guideline, and I’d like you to publish those numbers to show what you’re considering, but I also think the community deserves a vote.

I asked the question and Allen responded that not a lot of people voted and there were issues. I dislike terms like “issues” without specifics.

However, I’m not asking for sessions to bypass the program committee. I think there is good work done here. What I’m saying is that of the 112 sessions, when you get to 100, put the last 10 or 12 up for a vote. Take the sessions ranked 100-124 and drop them in a survey. Let community members, those who wish to, vote. After all, of your entire membership, how many vote for the Board? Probably a similar number to what you’d get here.
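A minimal sketch of that selection, again assuming the hypothetical ratings file and made-up column names from the earlier example:

```python
# A minimal sketch of pulling the borderline sessions for a community vote,
# using the same hypothetical "summit_abstracts.csv" and made-up columns.
import pandas as pd

abstracts = pd.read_csv("summit_abstracts.csv")

# Rank every submission by the committee's average rating, best first
ranked = abstracts.sort_values("avg_rating", ascending=False).reset_index(drop=True)

committee_picks = ranked.iloc[:100]       # the top 100 stay with the committee
survey_candidates = ranked.iloc[100:124]  # roughly the 100-124 band goes to a vote

# Publish the ballot; the community fills the last dozen or so slots
survey_candidates[["topic", "track", "avg_rating"]].to_csv(
    "community_vote_ballot.csv", index=False
)
```

The exact cut line matters less than the principle: the committee still picks the bulk of the schedule, and the community fills in the edge cases.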

You take popularity votes from last year’s numbers already. Take a few more.

If it’s a lot of work, explain that. Maybe we can help you slim things down.

Break the Bubble

Mostly what I see from organizations, and what I’ve done myself, is that groups take on responsibility, feel the weight of that responsibility, and work in a bubble. Over time, they become more disconnected from the people their decisions affect.

This is a hard process, and a lot of work. I know you try and mostly succeed. You may be overthinking this, and over-controlling it. Let go a little, take more feedback, and look to improve the process.

Continually.

That’s what we do in this business, or at least, what we should do.

Filed under: Blog Tagged: conferences, PASS, syndicated, thinkingaloud
