At Ephox we are extremely good at making the impossible possible; just take a look at EditLive!. The things we did in the early days were not possible within the limitations of the Java APIs, so we found a way around them. The flip side of that persistence is that it is possible to keep investing in a feature well beyond the point at which the feature returns the investment.
What we need is a way of identifying when to stop investing in a given feature, mark it as "failed" and move on to the next most valuable feature. To do this, however, you need to know the conditions under which you will declare something a failure.
We recently moved to a tri-estimate approach [1] where all stories include a Best Case, Worst Case and Most Likely estimate. These estimates not only give us an indication of the risk associated with a feature, but also a basis for determining "failure".
My framework for "failing" a story is as follows. Once we hit the Most Likely estimate, we re-evaluate the Worst Case estimate with the knowledge gained so far. Assuming the revised Worst Case is an acceptable investment in the feature, the new value becomes the "line in the sand". Once we hit that revised Worst Case estimate, we review again, asking how much longer it will take to reach completion. If that remaining time is acceptable, it becomes the "failure" deadline. Once that time has expired, the story fails.
So, for example, say figures of 20 and 40 hrs are given for the Most Likely and Worst Case estimates respectively. At 20 hrs, the team revises the Worst Case to 45 hrs. It is accepted that the feature's value is worth 45 hrs, and development continues. At 45 hrs, the team says there are an additional 3 hrs to go to completion. This final amount of effort is short enough that development continues for the 3 additional hours. At the end of that time, we "fail" the story if it is not completed.
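To make the two checkpoints concrete, here is a minimal sketch in Python of the decision logic. This isn't anything we actually run; the function names and the "acceptable" thresholds are hypothetical, and the hour figures are simply taken from the example above.

```python
# A minimal sketch of the checkpoints described above. The function names and
# the "acceptable" thresholds are illustrative assumptions, not real Ephox
# tooling; the hour figures come from the worked example.

def checkpoint_most_likely(revised_worst_case, acceptable_investment):
    """At the Most Likely estimate: re-evaluate the Worst Case.

    If the revised Worst Case is still an acceptable investment in the
    feature, it becomes the new "line in the sand"; otherwise fail now.
    """
    if revised_worst_case <= acceptable_investment:
        return "continue", revised_worst_case
    return "failed", None


def checkpoint_worst_case(hours_remaining, acceptable_remaining):
    """At the (revised) Worst Case estimate: ask how much longer to completion.

    If the remaining effort is acceptable, it becomes the failure deadline;
    an unfinished story is failed once that time expires.
    """
    if hours_remaining <= acceptable_remaining:
        return "continue", hours_remaining
    return "failed", None


# Worked example: Most Likely 20 hrs, Worst Case 40 hrs.
# At 20 hrs the team revises the Worst Case to 45 hrs, which is still acceptable.
status, line_in_the_sand = checkpoint_most_likely(revised_worst_case=45,
                                                  acceptable_investment=45)
print(status, line_in_the_sand)   # continue 45

# At 45 hrs, 3 hrs remain; the 5 hr cut-off here is a made-up threshold.
status, failure_deadline = checkpoint_worst_case(hours_remaining=3,
                                                 acceptable_remaining=5)
print(status, failure_deadline)   # continue 3 -> fail after 3 more hrs if unfinished
```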
Of course there are exceptions to this, but the aim is to identify when a feature is just going to keep dragging on, and to stop it.
[1] Doug discussed the idea in his article Estimations – Best, Expected and Worst.
In our first review, we identified Test Driven Development (TDD), Daily Stand-ups and Iteration Demos as the practices we would get the most value from focussing on.
To inject some fun into our practice focus, we introduced the "Talking Car" [1] for Stand-ups. For TDD, one member of each team volunteered during the weekly retrospective to be the TDD representative for the week. Their job was to remind the team during the stand-up that we were focussed on TDD. Finally, we made it the responsibility of the "client" to hold Iteration Demos for the business.
In our most recent XP review we have identified Root Cause Analysis, Planning Game and Weekly Iterations as the practices to focus on. We will continue to work on improving the previously identified practices, but we felt that as a team, we would gain the most value through the new practices.
The question is, how do we remind the team of our commitment to improving these practices?
Atlassian recently posted about their Agile Process, and one thing they mentioned caught my attention. Chris explained that they "have practice champions for many of the more challenging practices".
I'm really interested in how we could use Practice Champions to help focus on and improve the 3 practices we have chosen. I'm hoping these champions can bring some fun and energy to the adoption process and galvanize the team behind improving some fundamental XP practices.
When we adopted XP (eXtreme Programming) we undertook to have a retrospective at the beginning of each development iteration, preceding the planning game. With weekly iterations, we have a chance to reflect on the previous week's pluses and to formulate some changes and improvements (deltas) identified during the week.
In the article, the author made the following comment:
In fact, looking back is only half of the retrospective pattern. Reflection is not learning. To bring about learning and improvement, it is necessary to identify areas for improvement and explicitly document a brief action plan for which the team becomes accountable.
We regularly come up with a number of deltas and then choose the highest-priority ones to be tackled during the next iteration. We even assign someone to "own" each task. So we are on track to learn and improve; however, what we struggle with is having too many deltas to address in a single iteration.
Currently we review the previous retrospective's deltas at the beginning of each retrospective and copy any unaddressed ones to the new delta list. The problem with this is that the list keeps getting bigger.
I'm not really sure what the solution to this is yet, so if you have any suggestions I'd love to hear them. For now, we'll continue to tackle the most pressing or productive deltas and keep reviewing the previous ones.