Tuesday, 31 March 2020

The Definition of Ready

Photo by Braden Collum on Unsplash

Anyone who works with the scrum methodology will know the value of a good Definition of Done (DoD), but there is often an opinion amongst developers that their jobs would be a lot simpler if only all those other people did theirs well. That perception isn't uncommon in any situation where there are tribes of people involved, but I'll leave that for the sociologists to debate.

This article focuses on an equivalent to the DoD that applies before the development sprint begins.


Definition of Done


The Definition of Done is a commonly understood checklist of conditions under which a story can be considered finished, from a development perspective. The same list is applicable to any story. It does not imply that the story has reached the Production environment and is live, only that it is expected to be, subject to further higher-environment testing.

A story must satisfy all of the items below to be considered Done:
  • The story was understood by the affected teams;
  • Unit tests were written, completed and executed successfully;
  • All coding activities are complete;
  • All analytics changes or additions were included;
  • All acceptance criteria were met;
  • Zero code smells exist;
  • Continuous Integration test execution revealed no errors;
  • A peer review of a pull request revealed no issues;
  • Functional tests were written and passed without error;
  • Non-functional requirements were met;
  • OWASP checking revealed no issues;
  • Any necessary mid-sprint design changes were included;
  • The relevant feature branch was closed;
  • The feature was included in a release package; and
  • The Product Owner accepted the user story.

Of course, some of the above may be different in your organisation - I tried to present a typical set for the web and mobile developments I've run.


Definition of Ready


In keeping with the DoD approach, a story is considered "ready" when the team agree that they can develop it.

A good Definition of Ready would be:
  • Everyone involved understands what the story is and why it is needed;
  • The story was written as a user story;
  • Acceptance criteria exist;
  • Behaviour Driven Development scenarios exist that reflect the acceptance criteria;
  • Where there are any UI elements included in the story, designs are provided;
  • Designs of all related architectural elements are complete;
  • The team understands how to demonstrate the feature; and
  • The story was estimated by the team.

Let's address each element in turn.

Everyone involved understands what the story is and why it is needed

One of the main points of story definition is to define a feature or component in a clear, unambiguous way. Whilst Agile's short iteration cycle reduces the impact of the awkward "that isn't what I wanted" delivery, it is still possible to spend two weeks working on the wrong thing if the definition is unclear.

In my projects we have a number of review points prior to a sprint beginning that try to reduce this risk, including three amigos, feature briefings and look-ahead meetings. The nature and scope of these meetings is dependent on the circumstances and the complexity of the work, but I would advise at least including the three amigos meeting, so that there is sufficient scrutiny of a story to avoid ambiguity as much as is practical. Note that the "three" can sometimes be more, if the team includes a range of delivery platforms (e.g. for web and mobile).

Include the story's context. It should be atomic, yes, but it doesn't exist in a vacuum. Where does it sit? Of what does it form part? What does it enable? What does it rely on?

The story was written as a user story

While it is tempting to skip the "benefits" part of the "As a <user> I want <feature> so that <benefit(s)>" pattern, it is that part that justifies the story's inclusion in the first place.
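
For example, a purely hypothetical story for an online shop might read:

As a returning customer, I want to save my delivery address so that I can check out faster on future orders.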

Always address the justification for every story. If you can't, question whether the story is valuable. Justification should include the scope, the user base impact, the demand for it and the financial implications of doing it and not doing it. The latter is critical. I'm sure you have your own examples of a determined Product Manager pushing a story that they think will be of benefit, without doing their homework to prove it. Development time costs money, so spend it wisely.

Acceptance criteria exist

Business analysts or product owners who come from a waterfall background are used to writing extensive functional specifications. Stories are much more atomic than the long documents of old, but should include acceptance criteria if relevant. Whether these are formal or simply in note form to supplement the story is determined by the needs of the project, but they must be light. They do not replace the story; they are not the "description" of the story (i.e. what the badly-worded story title "really meant"); they are not the "part that you really need to read". If any of those are true, rewrite your story title.

I prefer to use acceptance criteria in note form to annotate the story. I use real world examples where I can, to aid understanding. You shouldn't need to go into a lot of detail if your BDD section is extensive.
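
As a hypothetical illustration, acceptance criteria notes for the address-saving story above might read:
  • A signed-in customer can save up to five delivery addresses;
  • One saved address is always marked as the default;
  • Guest users do not see the option to save an address.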

Behaviour Driven Development scenarios exist

Behaviour driven development (BDD) scenarios are effectively real-world test cases designed to detail and prove the acceptance criteria.

I recommend the following structure for behaviour driven development cases:

REF    <incremental reference number within story>
TITLE  <Why do we need this case? What are we testing?>
GIVEN  <pre-condition 1 exists>
[AND   <pre-condition x exists>]
WHEN   <action 1 happens>
[AND   <action y happens>]
THEN   <result 1 must happen>
[AND   <result z must happen>]

Always include both positive and negative scenarios (what should happen and what error messages appear when it doesn't). In addition, consider including examples to illustrate more complex cases. That will give your developers and testers the best chance of meeting the intended acceptance criteria.
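
To illustrate, here is a hypothetical pair of scenarios (one positive, one negative) for the address-saving story used earlier - the limit and the messages are invented:

REF    1
TITLE  A signed-in customer can save a delivery address
GIVEN  a customer is signed in
AND    the customer has fewer than five saved addresses
WHEN   the customer saves a valid delivery address
THEN   the new address appears in their list of saved addresses

REF    2
TITLE  A customer cannot exceed the saved address limit
GIVEN  a customer is signed in
AND    the customer already has five saved addresses
WHEN   the customer attempts to save another address
THEN   an error message explains that the saved address limit has been reached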

If you're constrained for analysis resource or time, consider having the testers write the BDD scenarios, but always make sure that the product owners or business analysts check and confirm their definition before proceeding. That also confirms that the testers have understood the story correctly.

Don't be afraid to add more scenarios later as people identify them, but try to be as comprehensive as possible in the initial definition. These cases will help the team to estimate the work, and a long list of BDD scenarios suggests that the story is too complex and needs to be split into smaller stories.

Where there are any UI elements included in the story, designs are provided

I'm not suggesting that every UI component needs to be signed off before any development work can be started. If the project was started well, there is already a component pattern library or design guide for the various types of UI element (and if there isn't, create one now!). UX testing of those elements should be done early in the project, to check that they work in practice.

Beyond that point, individual usage should be little more than the type of element and its configuration for that specific case. For example, yes, it's another drop down list, but what are the values within it? How are they sorted? Where does it appear on the page? Does it have any unusual properties (e.g. does it only appear if the user selected value 1 from the previous field)? In particular, complex UI designs must be included, to avoid ambiguity.

Some of you will argue that UI elements can be determined as the sprint progresses, and you're right, but if these are known up front why not include them in the story immediately? Many people think visually and having an image to see and discuss is a powerful way to get the meaning of a story across and assist with estimation.

You don't need to go into too much detail to get a feel for the UX. I've worked with customers who use a quick paper sketch and others who expect a fully rendered final look and feel. If possible, go for the former. Information over art.

Designs of all related architectural elements are complete

The stage your development has reached will determine the volume of work to be done here, but the developers will need to know on what architecture the story solution needs to sit. For established products, this will be stable, but if the story forms the basis for a new feature it may still need architectural elaboration. For brand new products the architectural design is a phase in itself and should be sized separately.

Even projects that use a "create the architecture as we go along" approach need some sort of principles to be established early on so that the team knows the frame in which they can operate. For example, what tech stack? When and how to create a new service? What are the non-functional requirements? etc.

The team understands how to demonstrate the feature

At the end of a development sprint, there should be a demo to show that the work meets the specification. That is intuitive when there is a visual component, but how will the team demo a story that has no visual element (for example a system to system interface)? For the latter case I'd use a test harness, with a lot of verbal explanation.

Even when there is a visual element, what examples will be used? Who will demo the work? How comprehensive should the demo be? These questions should be discussed with the product owner beforehand, so that the necessary planning work can be included in the sprint. Too often, the demo part of the sprint is treated as an afterthought.

The story was estimated by the team

Armed with all the information described above, the team should be able to estimate the story. The method and unit of estimation should be determined by each team individually and then applied consistently throughout the project. Having used story points with a lot of good intentions for a number of years, I'd now favour time-based estimation in hours. Story points work well in theory, but as a concept they are hard to grasp, and relative sizing is implicitly pointless (pardon the pun) when most people regard points in terms of how much time something will take to do anyway.

Of course, story points represent much more than just the working hours required to fulfil a task, but using an hour-based time estimate approach incorporates a lot of the same implicit elements. Avoid simply having the person with the lowest estimate do the work, however. Always think in terms of the team as a whole.

One issue I've found with story points is the use of the Fibonacci series for estimation. "A little bit bigger than an 8" becomes a 13 - a 62% potential increase - when the work might have only increased by an hour. Hourly estimation avoids this inflation.

If you're worried about estimating before all the facts are known, either push back or estimate in coarser units. An example might be to use days for the initial "finger in the air" estimate at the three amigos meeting, but hours in the sprint planning meeting.


I don't have time for all that!

Stable product teams that have worked together for years gain an intuitive ability to understand the nuances expressed by team members. If your project doesn't have that stability, or if the team changes from time to time, you'll struggle to produce good quality output quickly unless it is clear to the team what they are working towards. While it might be optimistic to think that people will either "just get it" or that "failing fast" is a good thing, years of analysis practice suggest that it is more complex than that, and any time spent thinking before doing is valuable, if only to reduce the chance of failing at all.

You'll need to find a balance, and I find that at least the spirit of the above can be applied quickly and effectively. One quick method is to create and use a pro-forma template. If all the sections contain at least some content, they enable discussion towards a common understanding. As with any approach, the more you put in, the more you'll get out, but that needs to be balanced against time pressures, and a form works well to set expectations and remind the user of the areas they need to consider. Overall, though, keep it light.


In summary

If your Definition of Ready includes the elements above and your stories meet those objectives adequately, you'll be in a good position to develop the product you want. 

Monday, 2 March 2020

Some tips for a successful scrum project

Photo by Olga Guryanova on Unsplash

Over ten years of running Agile projects I've learned a few tricks that help a scrum project succeed. Here are some of them:
  • Education, education, education
  • Keep stories tiny
  • Keep sprint cadence to 2 weeks
  • Reserve one sprint in five for tech debt
  • Set up the rules early
  • Update the estimates with the actuals
  • Make stand-ups useful

Education, education, education

Everyone on the project / product development should know what you're doing and how you're going about it before you start. Spend time working out the best way to achieve that, depending on your circumstances.

Be aware that a number of your team and several of your stakeholders will think they already know it all (and not really listen) or be unavailable for any sessions you plan. That's a fact of life.

Plan what you want to cover. Plan multiple sessions. Seek feedback. Implement the feedback.

After you think they've got it, have a refresher session a few weeks later to hammer the message home again. Some people will have misunderstood some of the original points.

Keep stories tiny

A team member (or pair, if you're pair programming) must be able to complete a story within a sprint. I suggest that in general practice they should be able to complete several stories in a sprint.

Whether you estimate in days or story points, make sure the stories are prepped so that none of them is expected to take more than 25% of your elapsed sprint time. In a two-week sprint, for example, that means no single story should be expected to take more than about two and a half working days. That will minimise the risk of stories spilling over into subsequent sprints.

Using that rule of thumb quickly reveals the level of granularity that a story should attain prior to the sprint commencing. If you aren't sure whether it's small enough, it's too large - break it down.

No one ever complained that too many stories were being done in a single sprint, though they might complain that the stories are too small to be useful. If that's the case, consider increasing the size by combining dependent stories, but don't go beyond the 25% limit.

Keep sprint cadence to 2 weeks

I like to start off any development with a set-up sprint, often referred to as "Sprint Zero". Sprint Zero may be longer or shorter than the regular sprints, depending on your circumstances, but make sure that everything required to start sprint 1 is ready before it ends.

If you're running a scrum development, keep sprint cadence to 2 weeks after Sprint Zero.

Having longer sprints might make you feel as though you’re getting more done, but it delays the feedback loop. Too long and you're doing mini-waterfall.

Conversely, if you're following scrum, shorter sprints carry too much overhead and the ceremonies will eat into the available development time.

If you're working on a Kanban principle instead, I'd suggest having a backlog and progress review every couple of weeks anyway. Under business pressure, it's easy to forget that a large part of Agile is to try, fail fast and learn from it. I've seen teams that don't achieve those objectives because they rush ahead without taking any time to review their approach or to communicate and learn from each other.

Reserve one sprint in five for tech debt

Tech debt accumulates. Like any debt it gains interest and becomes worse the longer you leave it. Plan times when it will have your focus as you proceed. That sets expectations amongst stakeholders and improves quality without stifling initial creativity and the right to fail. Going back to my education point, it is vital that this approach is clearly understood by all parties from the outset.

Having a tech debt sprint at least every quarter empowers your team not to be afraid of falling short of the perfect solution - they know there's a built-in safety net, so they'll perform better and work more efficiently.

Make sure that you plan what tech debt will be covered in the sprint and size it as you would any user story. The work will expand to fill the time available and some problems will be very tricky to fix. If you aren't going to complete a challenge within the sprint, consider setting up a new team to continue to work on it, or just park it for the next tech debt sprint.

One final word of caution: keep an eye on your developers. Rabbit holes are fun, stimulating challenges, but your developers need to be ready to work on the user stories in the following sprint.

Set up the rules early

Ideally during the discovery phase, but by the end of Sprint Zero at the latest, establish a set of golden rules that cover the working principles that will be adopted throughout the project. These should include:

  • Ceremonies (identify which ones there will be; how often; who must attend; what to do if they can't attend)
  • How to treat your colleagues (e.g. turn up to meetings; let everyone speak; listen)
  • Architectural principles (e.g. technology stack; service definition; encapsulation; reusability; security; versioning; scalability; performance targets)
  • Tools and tool use (what; when; minimum content; standards; tool integration / overlaps)
  • UX principles (e.g. pattern libraries; personas; reskinability; accessibility)
  • What makes a good story (e.g. size; depth; BDD scope; atomicity)
  • What the difference is between a user story and a task (e.g. how to break down stories; technical tasks as stories; how they link together)
  • DevOps principles (e.g. continuous integration; continuous deployment vs build frequency; infrastructure as code)

Make sure that everyone on the development team has bought into the rules. Review and revise them after a few sprints.

Update the estimates with the actuals

How many times have you spent a lot of effort estimating stories only to never look at the estimates again once the stories have been built? As part of the retrospective for each sprint, look at the estimates for the stories you completed and revise the numbers based on the actual time or story points spent on implementing the story.

That way, when a similar story appears much later in the development, you can refer back to the story just completed and gain confidence in knowing that your new estimate is based on reality, not fresh guesswork.

In addition, record any unforeseen issues you had with it and how you overcame them. Remember, it might be a different developer or team working on the new story and even if they'd like to estimate for themselves, they'll benefit from your experience and insight.

Make stand-ups useful

"Every day we waste an hour in stand-ups" is a common complaint amongst developers. If you follow the rigorous "what I did yesterday / what I'm doing today / what are my blockers" paradigm, they can be slow and tedious, to the point where they become something people try to avoid.

To combat the attrition:

  • Make them actual stand-ups, not "sit-downs"
  • Start on time
  • If your team is scattered geographically, use video and make sure it works before the meeting starts
  • Impose a two-minute limit per person and a 30-minute limit in total
  • Instead of making them a progress report to the scrum master, try only speaking about blockers and other immediate issues
  • Don't fall into the trap of trying to solve a problem in the meeting - you'll be wasting the time of everyone who isn't interested - arrange a post-stand-up conversation instead
  • Don't feel the need to speak if you have nothing to say - just say that there are no changes

What's your experience?

Hopefully at least some of that has been helpful and I know you'll recognise many of the issues identified. If you have any other tips, please feel free to comment below.