I have been attending the Agile Alliance conference for a number of years now.  A few years ago, I wanted to start helping with the conference.  My reasons were:

  • Wanting to give back to a conference that I’ve learned quite a bit from
  • Wanting to help improve the conference (more advanced and different topics)
  • Wanting to understand the world of conference speaking – what’s a quality submission, etc

After having been a reviewer for two years and a track co-chair for three, my reasons have changed:

  • Wanting to give back to a conference that I’ve learned quite a bit from
  • Wanting to help improve the conference (more advanced and different topics)
  • Wanting to help open a door for a new reviewer and/or speaker

So in the spirit of the new reason, I’ll share a little of this year’s experience thus far.

The simple view of the submission process is: speakers have a window to submit a session. Each submission receives three reviews. The speaker updates the submission based on the reviews. The review team makes ranked recommendations for the submissions. The program chairs decide the final program with the intent to honor those recommendations.

Unfortunately, this is actually anything but simple.  In no particular order, here are some common challenges (from the speaker’s, reviewer’s, and/or chair’s point of view):

  • Track selection for the speaker:  Every year, "where does this session go?" is a constant problem.  The organizers will continue to experiment, but I can say that this year the response time for chairs to determine where a session should go was greatly improved.  However, some speakers still hear "does this fit?" after submitting a session they believe is quality.  Our track's goal was to try very hard to first understand the session intent so we could help place it in the right track.  In addition, our track took the stance of reviewing for topic quality over track fit if a session had been bounced around.  To help minimize this challenge, I recommend reading the track description and the questions that the review team is targeting, and incorporating those questions within the information-for-the-review-team section.
  • Review discussions:  The SLA indicates each submission will receive three reviews.  However, it's really tough for the third reviewer if the submitter hasn't responded to the first two reviews.  It feels like piling on to just say "what the other two said."  Sometimes that doesn't happen, but most of the time it does.  Personally, I pay attention to sessions that received no updates and no responses to the reviews; it makes me question your interest in actually presenting, or whether you'll have the time to prepare.  Reviewers are giving their opinions with the intent of improving your submission, so engage with them.  If you do engage but don't receive responses back, email the chairs!
  • Submission date:  I get it…we are all super busy.  I get it…the submission window spans the end-of-year holidays.  However, we received over half of the track's submissions in the last 24 hours of the submission window.  This makes it really tough for the review team to deliver all three reviews while still giving the submitter time to respond between them.  Submit even a few days before the window closes, and you will probably see a different level of interaction in the reviews.
  • Abstract:  This is an area I have to work really hard on myself.  I'm not a natural writer.  I'm a talker :).  I look for a few things in an abstract: an opening hook (what problem does this session help with?), which targets the right audience for your session; teaser details about the session (an example of what you will cover and/or supporting info on why it's important); and a closing hook (typically highlighting learning objectives as an invitation to join).  This should be a paragraph or two, not a book.  This is the printable description that gets butts into your session.  Spend quality time here.  No spelling mistakes.  Readable.  I get most excited about a submission when I finish reading the abstract and my first thought is: I want to go to this session.
  • Learning objectives:  If you have 10 learning objectives for 75 minutes, red flag.  If you have one learning objective for 75 minutes, especially one that is not actionable for the attendee, red flag.  If your learning objectives conflict with your title/abstract, red flag.  If your learning objectives target 5 different roles (audience attendees), red flag.  Any of these red flags could turn into nothing after a review discussion, but they are warning signs of whether the session will be valuable to attendees.
  • Information for the review team:  You can never put too much in this section.  Even accomplished speakers need to express what they will cover and how they will meet the learning objectives.  I have seen submissions with just the abstract, and I won't recommend those sessions.  Include former speaking history and the ratings you received.  I understand that you might not put the session together unless it's selected, but you need to demonstrate that this is a well-thought-through submission.
  • Session selection:  I have submitted to this conference many times and not been selected, too.  Even reviewers and chairs who have gained experience with submissions are not always selected.  There are a variety of reasons why a submission is not selected:
    • The submission lacks details, is not high quality, or targets too basic a session for a conference with mostly practicing/expert attendees:  This will typically be reflected in the reviews.
    • Another submission beats out this submission:  This is a tricky one.  A submission might receive fairly positive and encouraging reviews.  Unfortunately, having two sessions on pair programming in the same track might not make sense.  The reviewers then decide which submission is stronger.  If you were not selected but had encouraging reviews, inquire with the chairs.  This is a very likely scenario.
    • Top sessions beat out this submission:  This is another tricky one.  You could very well have a quality submission and receive positive reviews but not be selected.  How is this possible?  Because there are only so many slots.  We received over 80 submissions and have room for 15 in the program.  Our track tries to balance a number of factors in deciding (such as topic, session type, etc.).  If you were not selected but had encouraging reviews, inquire with the chairs.  This is a very likely scenario.
    • The wisdom of the group is leveraged:  One reviewer may be really positive while others are not.  There is a discussion, and the chairs make final calls when necessary, but the wisdom of the reviewer group is leveraged.  This can be frustrating for a speaker receiving conflicting information, but we are giving our opinions, and sometimes they will differ dramatically.  We read each other's reviews and notice the conflict too; those submissions typically get strong discussions during the recommendations meeting.  If you receive conflicting reviews, engage in the discussion.  Reviewers' intent is to help, but sometimes we can cause problems too.  Personally, I've had to express my stance even when it was counter to another review.  We are not all clones, and that's the wonderful thing, but it can be annoying.
    • Speaker history:  First, our track was very clear that our selection process is not a popularity contest.  However, former speaking data is reviewed by the chairs.  This data is not shared with the entire review team.  I do use it as one input into my decisions.  Personally, I really look at the top tier and the bottom tier of speakers (and whether either is a trend).  For example, if a session is on the fence but the speaker gave one of the top sessions the previous year, I'm more inclined to recommend.  Alternatively, if the speaker was in the bottom tier and has been for a couple of sessions, I'm more inclined to pass.  If one of our reviewers had seen the session at a local user group and didn't feel it was delivered well, that would also be taken into consideration.  We have a responsibility to ensure quality sessions for the attendees, and your speaking history is one predictor of the quality of your submission.  What is not taken into consideration: this person is a "celebrity" and should be speaking; so-and-so was mean to me once, so I don't think they should get in; so-and-so is my friend, so I'm going to vote them in.  Frequently, review team members will disclose a conflict of interest and remove themselves from the conversation when appropriate.

When notifications for Agile 2014 submissions go out…

If you were selected and this is your first time presenting, reach out to someone that has done this before.  They have valuable tips to help you prepare a successful session.

If you were not selected and want to improve, reach out to a program/track chair to get additional feedback.  Most of all, if you believe it’s a quality session that should be shared back with the community…don’t give up.  Submit the session to local user groups.  Submit the session to various conferences.  Our community’s learning only thrives when we have people passionate about giving back.  It took me some time to understand how to get my thoughts represented in submission format.  I know you can too.

Best of luck to all of the submitters this year,


Tricia Broderick

Tricia Broderick is a leadership and organizational advisor. Her transformational leadership at all levels of an organization ignites the growth of leaders and high-performing teams to deliver quality outcomes. Tricia has more than twenty years of experience in the software development industry. She is a highly rated trainer, coach, facilitator, and motivational keynote speaker. Beyond her extensive knowledge and skills, her biggest offering is inspiring people to believe anything is possible.
