Why a professional conference must have a committee, and what that committee does

What exactly is it that a conference committee does? This post comes in response to a comment on A sneak peek at the Percona Live MySQL Conference & Expo 2014, which reads:

Why the same committee each year? Community should vote on proposals and committee should just work schedule,etc.

I’ll take up the gauntlet and shed some light on the work of the committee. While this specific comment relates to the Percona Live conference, I trust that the opinions expressed below apply just as well to any (technical?) professional conference; the points below can apply equally to conferences ranging from Oracle MySQL Connect and O’Reilly Velocity to FOSDEM & PyCon.

I can sum up the entire answer with one word: “Discussion”. For a breakdown, please read on.

First, what’s not feasible with community-based voting, and what can go very wrong

So why not open up a voting system and let the community do the rating? I have always disliked the “send an SMS to this number to vote for X” approach. It is unbalanced and unreliable: if I were to submit a proposal describing how my company invented/develops/uses X to do great things, I could expect my co-workers to vote for me. In fact, my company might well ask my co-workers to do so. I stand a better chance if I work for a large company; less so in a small one.

Anonymous votes tend to be tainted by politics. I could vote for my company, against a competing product, for my friends, against people I dislike, and no one would be the wiser. We could take away anonymity, which means my votes would be public and visible to all. In that case, my ratings would be affected by what the people I rate might think of me, which means they would not be based on strictly professional or technical grounds.

But before we fall into this endless pit, let’s consider: will I, as a KMyPyVelocirails community member, really engage in reviewing over 300 submissions? How many members of my community would put in the many hours it takes to do so? Let me be clear: this is a part-time job. It requires time, and it requires a mindset. My guess is that you cannot count on everyone rating all talks. Some of the more prominent talks would be reviewed by many people, while others might receive little to no review at all.

The idea of a purely community-based rating is romantic and beautiful, but it is not feasible.

And then there’s the discussion. Let’s look at some of the things the committee is engaged in to clarify.

Duties, responsibilities and actions of a conference committee

The following discussion cannot be an exhaustive description of a committee’s work, but it can give a good glimpse into its scope. We begin with the commitment the members take upon themselves: to invest their time and will in the committee’s duties. Once you join in, you are expected to work and deliver.

The duties of the reviewing committee extend beyond reviewing the proposals — though this duty is the most critical of all, as it serves as the basis for all the others. So let’s describe it first.

Reviewing the proposals

Reviewing papers involves more than rating them (1-5 stars). We also comment and share our opinions. Members typically explain why they rated a proposal the way they did, especially when the rating is low. Our ratings and comments are visible to the other members, who can react to them. For example, I can disagree with another member’s comments and counter-comment myself. We can continue the discussion on our mailing list.

We might throw in our experience of attending the same talk in the past. Was it good? Was it bad? Yes, it matters; we want the conference to be successful, and our personal experience is something we bring to the table. We might have some inside knowledge about an emerging technology, or speakers may share such knowledge with us in private (“By the time this talk is given, this project will have been released as open source”).

We may suggest improvements: “Better make this tutorial 3 hours instead of 6”, “Should include this topic as well”. We might (and sometimes do) approach speakers to refine their submissions if we think they can make for good sessions.

We can express our true professional opinions on proposals, opinions we wouldn’t always want to share in public. This private discussion works for the benefit of all: there is no reason to suspect we resort to flattery or insults. We simply say what we think within our small group, and we still have to be prepared for pushback from our peers.

But wait: by which guidelines do we rate? Before we even start, we set the most basic ground rules, such as “what content are we looking for; what is this conference about, and what is it not; what are the rules for rating, …”. Can I rate talks submitted by my colleagues? Would that be unfair? If so, am I allowed to rate our competitors? Wouldn’t that be unfair as well? Is there an end to it? We set these rules up front.

Meta

Throughout the reviewing process we collect meta-data about the submitted papers. Is there content we find missing? Do we have enough “beginner” talks?

Some speakers submit 5-6 talks in the hope that one or two make it through. But they don’t really intend to present 5 or 6 talks should all of them happen to be approved. The committee also prefers a variety of speakers. The committee (or the committee chairman) reaches out to those speakers to better understand their constraints. We may therefore reject talks simply because a speaker has submitted too many.

We typically stumble very quickly upon the issue of overlapping content. Some topics are proposed again and again by multiple speakers. We take notes and balance the numbers. We sometimes reject talks not because the abstracts are bad, but because we have so many other competing talks on the same topic.

We may choose to promote an open source product we are passionate about.

When we do conclude that some content is missing, we may pursue the issue by asking speakers or potential speakers to submit papers on the subject. We may contact specific companies or speakers that we know would provide good value.

We make backup plans: speakers cancel, some don’t get visas, some fall ill. We maintain a waitlist of sessions and verify with the speakers that they would be able to deliver them on request.

Speakers may have personal constraints, e.g. only being able to attend on certain days.

Non-reviewing tasks

We are also engaged in shaping the overall character of the conference. The organizers consult with us on various topics. I suppose this really depends on the nature of the conference.

Conclusion

As you can see from all of the above, this is far from just rating all the talks and picking the “top 100 talks”. There are a lot of constraints, limitations, contradictions, and issues to work through. This cannot be an automated “community vote” process. And the moment you put someone in the position of making decisions, you’re effectively reinventing the committee.
