
The Selection Process


According to this post from Amy Lewis (PASS Board of Directors), there were 943 abstracts submitted for the 2014 PASS Summit by 335 speakers competing for 144 slots on the schedule. An abundance of riches to be sure, but it also marks a time of expectation, exultation, disappointment, and even disenchantment as some win (get selected) and some lose (not selected). Those not selected are certainly not “losers”, but there is a sense of having failed in some way. Last year was the first year I wasn’t selected since I submitted my first abstract for the Denver event back in 2000 (I think), and I hated being “left out”. It’s not as easy to accept, or understand, if it’s your first time submitting, or if you’ve never been selected, because the feedback is minimal. “Track full” or “duplicate session” is all you get. There is also a tendency to wonder if sponsor reps get special treatment, or Board members, or people on some secret “A” list. Given that lack of feedback, I don’t blame anyone for wishing there were more feedback and more transparency.

Before I get into my thoughts on the Summit selection, I’d like to talk about SQLSaturday selection. Here in Orlando we have what hardly anyone would call a process, just some guidelines. We prefer local speakers and work hard to get them, we try not to turn anyone down (which is why we typically run 8 tracks), and we love the value and diversity of speakers who fly across the country to donate their time to our community. We do what we can to help them be successful. We ask them to be servant leaders and volunteers by serving lunch to our attendees each year (while wearing a SQLSaturday chef hat or apron, of course!). We’re so lucky that we don’t have to send out “rejection letters”, at least so far. If we get in a jam, Kendal or Rodney or Bradley or Shawn or I will give up a slot to make it all work out. We build our schedule trying to get that magical and mythical mix of skill level and topic and approach, choosing one of the 2 or 3 abstracts that most speakers submit, unless they’ve stated a preference for one of them. Even with the given that everyone goes on the schedule, building a good schedule is hard! We sweat over it, and that’s with 50 or 60 abstracts. Imagine having 943.

We also try to do one or two paid seminars each year. We’ve talked about building a process for that, but it’s still just guidelines, and mostly unpublished. We’re looking for a topic that will draw enough attendees to cover the costs and send the speaker home with a little cash, and we want a speaker that we really think can deliver on the promise of the abstract. We don’t always take the “best” speaker or abstract. We like the idea of helping others grow and get ready for a Summit seminar some day. We will sometimes pick a speaker/topic and then ask for tweaks to the title and/or abstract that we think are needed to help us market it effectively. It’s entirely analog, not something where we score sessions and speakers, though sometimes I think we’d benefit from doing so, just to help us think it through a little better.
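For what it’s worth, here’s a toy sketch (in Python) of what scoring sessions might look like. The criteria, weights, and ratings below are entirely my own invention for illustration; they’re not our process, PASS’s, or anyone else’s:

```python
# A toy weighted-scoring rubric for event abstracts.
# Criteria and weights are invented for illustration only; they are
# NOT Orlando's (or PASS's) actual selection formula.

CRITERIA_WEIGHTS = {
    "abstract_quality": 0.30,      # clear title, focused description
    "speaker_track_record": 0.25,  # evals/references from past events
    "topic_demand": 0.25,          # expected attendee interest
    "schedule_fit": 0.20,          # fills a gap in the level/topic mix
}

def score_session(ratings):
    """Combine per-criterion ratings (0-5 scale) into one weighted score."""
    return sum(weight * ratings.get(criterion, 0.0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

# Hypothetical submissions with made-up ratings.
submissions = {
    "Intro to Query Tuning": {
        "abstract_quality": 4.5, "speaker_track_record": 3.0,
        "topic_demand": 4.0, "schedule_fit": 5.0,
    },
    "Advanced Columnstore Design": {
        "abstract_quality": 4.0, "speaker_track_record": 5.0,
        "topic_demand": 3.5, "schedule_fit": 2.5,
    },
}

# Rank highest first; a human pass would still adjust for the
# overall mix of skill levels and topics on the final schedule.
for title in sorted(submissions, key=lambda t: score_session(submissions[t]),
                    reverse=True):
    print(f"{title}: {score_session(submissions[title]):.2f}")
```

Even if you never fully trusted the final number, just arguing over the weights would force a committee to say out loud what it values.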

Now let’s return to the Summit selection process.

What’s the best way to build a schedule for a top-of-the-pyramid paid event? Is it the detailed and heavy scoring and weighting process we seem to have now? An all-out election by the people, as advocated by my friend Brian Kelley? A combination of both (we used to have “community picks”; what happened to that?)? Beyond that, should there be any other rules? The current plan limits speakers to no more than four abstracts for the main conference. Should we consider (as I have argued a few times) a rule that requires speakers to sit out a year after being selected? A rule to limit speakers to a max of one session?

Good questions, I think. Looking at it a different way, what would I want to achieve?

  • A schedule that has top tier presenters on it, both for the value of what they deliver and for the marketing value of having them participate
  • A schedule that has something for different skill levels and interests, ideally with two tracks per “major” focus area so that not only is there content, there is always a great “plan B” if the first-pick room is filled to capacity, or just not a perfect fit
  • A process for building the schedule that is as clear and as consistent as I can make it, explained before the abstracts are submitted (which is also the time to specify any topics of special interest or disinterest each year)
  • A process for penalizing speakers who fail to show, or get really bad ratings, or break the speaker agreement in some way
  • A process for providing speakers some kind of feedback about why their session was, or wasn’t, accepted
  • A process that volunteers can take and own and execute, without fear of the backlash on the day the schedule is announced
  • A process that allows for analog input because we’ll never have a perfect formula
  • A process for dealing with grievances (from the committee, or the speakers)
  • A process that considers the speaking experience (including evaluations and recommendations) from other PASS events
  • A process that considers previous/estimated interest in the topic
  • A process that speakers understand and perceive as fair, and that gives everyone a chance at making it to the show (but not necessarily an equal chance)

You might not agree with all of those, and I probably missed a few too. I think about what I’d want for a paid event, in particular one that is the only fundraiser I have, and I don’t think I could bet it all on an election. Maybe that’s wrong, but I think there is real value in a team looking at options and making hard decisions about what gets on the schedule. I do think we should return to having some of the sessions be selected by the community; it’s a great way to make sure new voices get heard, to correct minor flaws in that year’s process, and to engage the speakers/community in a lively and interesting way.

The problem today is that we don’t really understand the process. Maybe it sucks. Maybe it’s really good and we just don’t see it. It seems to consist of two parallel threads, where one team does a blind evaluation of the abstract and another team evaluates the speakers. Do typos matter? Does the length of the abstract matter? What goes into evaluating a speaker? How are conflicts resolved when two people submit abstracts in the same niche? Can someone at the manager level override the scoring and mandate someone be put on the schedule? Is there some process for removing people that are anti-PASS? Do former Board members get preferential treatment? Do current Board members? Sponsors? Publishing the details of the process and its results would go a long way towards stamping out any bit of distrust. Of course transparency can also provoke a lot of arguments and maybe hurt some feelings too, and that’s worthy of consideration. Would you want the whole world to see your abstract rejected as “numerous typos, bad grammar, no focus”, or yourself rejected for “previous bad conduct” or “consistently low eval scores”?

Speaking of transparency, this is what we used to post – why don’t we do that now?

 

[Three images: examples of the selection details PASS previously published]

It’s easy to forget that our process is run by volunteers. Amy Lewis is on the PASS Board and is a volunteer, and she was helped by Melissa Coates and Lance Harra as Program Managers. They may not have arrived at the perfect answer, but I don’t doubt for a moment that they labored hard to follow their process and deliver sessions that would be worthy of the Summit. Much as in my comments earlier today about members of the Board being eligible (or not) to give paid seminars, I’d like to see us have a process where a volunteer can do the work we ask and not have to take any flak about the results, at least at a high level. We can’t keep doing it the way we are without a lot of volunteers, and no one wants to volunteer just to be yelled at, or to have someone imply they somehow played favorites. We don’t do a very good job of celebrating their efforts, something else we should work on.

While I’m writing, I also want to mention that when I was on the Board, I don’t think we ever talked about the selection process in terms of who was or wasn’t speaking. Allen Kinsel was working on tools and we had an idea it was going on, but I’m sure we were never, as a group, involved in any decision about selecting or not selecting anyone. I think that’s good, but I also remember feeling strangely distant from that process, and really from the running of the Summit as a whole.

I don’t think the current process is bad, but I think we can do better. Here are the changes I’d like to see discussed this year for implementation next year:

  • PASS to publish a video and documentation about the entire process as part of the call for speakers launch. Let’s make sure we all understand the rules, and let’s leverage those speakers in the community who have spent a lot of effort figuring out how to write great abstracts, get great scores, and deliver great presentations.
  • Commit to providing every speaker/abstract with feedback privately, with the speaker having the option to share it publicly (a step towards full transparency)
  • Limit speakers to one session (maximizing opportunities for others)
  • Require x percent first-time Summit speakers each year, provided there are enough candidates to hit that goal and they can document speaking experience (I like the idea of anyone new to the Summit requiring two references from other speakers)
  • Limit speakers to presenting every other year
  • Re-establish the community vote for some x percent of the available sessions
  • Limit speakers to submitting two abstracts. That would reduce the volume and increase the time spent on each abstract. I think this one change alone could be huge in making things better.
  • Require every presentation to have been presented before it is submitted. No more presentations on spec.
  • Someone qualified and authorized to answer questions directly on the day/week the schedule is released, with same-day answers as much as possible
  • Annual review and suggestions provided back to the Board, with the Board voting each year whether an independent committee is needed to assess changes or not (go/no-go)

I’d like to see similar rules put in place for seminars, but that probably requires extra care on all sides. I’d like to see 100% turnover in that space each year. Because money is involved, it gets complicated.

I don’t claim to have all the answers. I think we try hard to do good, and mostly we do. I’m extra sensitive to the needs of the person trying to make it onto the schedule for the first time. Let’s help them, even if they end up beating us at our own game later on. Let’s try to see both (all) sides and do the things that build and support trust in the way we do this. We’ll still have some that are excited and some that are disappointed each year, but we can do it in a way that encourages those who didn’t quite make it in to redouble their efforts the next year, and in a way that reminds anyone who made it in already that it’s not a lifetime appointment; we have to re-earn that seat every time. Comment on my ideas, or publish your own. Let’s share ideas and see if we can drive some good changes into the system for next year.
