Tuesday, 31 March 2020

The Definition of Ready


Anyone familiar with the scrum methodology will know the value of a good Definition of Done (DoD), but there is often an opinion amongst developers that their jobs would be a lot simpler if only all those other people did theirs well. It isn't uncommon for that to be the perception in any situation where there are tribes of people involved, but I'll leave that for the sociologists to debate.

This article focuses on the DoD's counterpart at the other end of the process: an equivalent checklist that must be satisfied before the development sprint begins.


Definition of Done


The Definition of Done is a commonly understood checklist of the conditions under which a story is considered finished, from a development perspective. The same list applies to any story. It does not imply that the story has reached the Production environment and is live, only that it is expected to be, subject to further higher-environment testing.

A story must satisfy all of the items below to be considered Done:
  • The story was understood by the affected teams;
  • Unit tests were written, completed and executed successfully;
  • All coding activities are complete;
  • All analytics changes or additions were included;
  • All acceptance criteria were met;
  • Zero code smells exist;
  • Continuous Integration test execution revealed no errors;
  • A peer review of a pull request revealed no issues;
  • Functional tests were written and passed without error;
  • Non-functional requirements were met;
  • OWASP checking revealed no issues;
  • Any necessary mid-sprint design changes were included;
  • The relevant feature branch was closed;
  • The feature was included in a release package; and
  • The Product Owner accepted the user story.

Of course, some of the above may be different in your organisation - I tried to present a typical set for the web and mobile developments I've run.


Definition of Ready


In keeping with the DoD approach, a story is considered "ready" when the team agree that they can develop it.

A good Definition of Ready would be:
  • Everyone involved understands what the story is and why it is needed;
  • The story was written as a user story;
  • Acceptance criteria exist;
  • Behaviour Driven Development scenarios exist that reflect the acceptance criteria;
  • Where there are any UI elements included in the story, designs are provided;
  • Designs of all related architectural elements are complete;
  • The team understands how to demonstrate the feature; and
  • The story was estimated by the team.

Let's address each element in turn.

Everyone involved understands what the story is and why it is needed

One of the main points of story definition is to define a feature or component in a clear, unambiguous way. Whilst Agile's short iteration cycle reduces the impact of the awkward "that isn't what I wanted" delivery, it is still possible to spend two weeks working on the wrong thing if the definition is unclear.

In my projects we have a number of review points prior to a sprint beginning that try to reduce this risk, including three amigos, feature briefings and look-ahead meetings. The nature and scope of these meetings are dependent on the circumstances and the complexity of the work, but I would advise at least including the three amigos meeting, so that there is sufficient scrutiny of a story to avoid ambiguity as much as is practical. Note that the "three" can sometimes be more, if the team includes a range of delivery platforms (e.g. for web and mobile).

Include the story's context. It should be atomic, yes, but it doesn't exist in a vacuum. Where does it sit? Of what does it form part? What does it enable? What does it rely on?

The story was written as a user story

While it is tempting to skip the "benefits" part of the "As a <user> I want <feature> so that <benefit(s)>" pattern, it is that part that justifies the story's existence.

Always address the justification for every story. If you can't, question whether the story is valuable. Justification should include the scope, the user base impact, the demand for it and the financial implications of doing it and not doing it. The latter is critical. I'm sure you have your own examples of a determined Product Manager pushing a story that they think will be of benefit, without doing their homework to prove it. Development time costs money, so spend it wisely.
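
To illustrate the pattern, here's an invented example (not from any real backlog) that meets this bar:

As a returning customer, I want the checkout to remember my delivery address, so that I can complete my order in fewer steps and am less likely to abandon my basket.

Note that the "so that" clause carries both the user value and, implicitly, the commercial case: fewer abandoned baskets means more completed orders.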

Acceptance criteria exist

Business analysts or product owners who come from a waterfall background are used to writing extensive functional specifications. Stories are much more atomic than the long documents of old, but should include acceptance criteria if relevant. Whether these are formal or simply in note form to supplement the story is determined by the needs of the project, but they must be light. They do not replace the story; they are not the "description" of the story (i.e. what the badly-worded story title "really meant"); they are not the "part that you really need to read". If any of those are true, rewrite your story title.

I prefer to use acceptance criteria in note form to annotate the story. I use real world examples where I can, to aid understanding. You shouldn't need to go into a lot of detail if your BDD section is extensive.
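
Continuing the invented checkout example above, note-form acceptance criteria might be:

  • The saved address is pre-filled only for logged-in customers;
  • The customer can overwrite the pre-filled address before confirming the order; and
  • Example: Sam logs in, adds an item to her basket and sees her saved address already completed at checkout.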

Behaviour Driven Development scenarios exist

Behaviour driven development (BDD) scenarios are effectively real-world test cases designed to detail and prove the acceptance criteria.

I recommend the following structure for behaviour driven development cases:

REF      <incremental reference number within story>
TITLE   <Why do we need this case? What are we testing?>
GIVEN  <pre-condition 1 exists>
[AND     <pre-condition x exists>]
WHEN   <action 1 happens>
[AND     <action y happens>]
THEN    <result 1 must happen>
[AND     <result z must happen>]
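
To make the template concrete, here is an invented pair of scenarios for the saved-address story used as an example earlier - one positive, one negative:

REF      1
TITLE   Returning customer sees their saved delivery address
GIVEN  a customer with a saved delivery address is logged in
AND     their basket contains at least one item
WHEN   they open the checkout page
THEN    the delivery address fields are pre-filled with the saved address

REF      2
TITLE   Customer cannot confirm an order without a delivery address
GIVEN  a logged-in customer has no saved delivery address
WHEN   they attempt to confirm the order without entering an address
THEN    the order is not submitted
AND     an error message such as "Please enter a delivery address" is shown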

Always include both positive and negative scenarios (what should happen and what error messages appear when it doesn't). In addition, consider including examples to illustrate more complex cases. That will give your developers and testers the best chance of meeting the intended acceptance criteria.

If you're constrained for analysis resource or time, consider having the testers write the BDD scenarios, but always make sure that the product owners or business analysts check and confirm their definition before proceeding. That also confirms that the testers have understood the story correctly.

Don't be afraid to add more scenarios later as people identify them, but try to be as comprehensive as possible in the initial definition. These cases will help the team to estimate the work, and a long list of BDD scenarios suggests that the story is too complex and needs to be split into smaller stories.

Where there are any UI elements included in the story, designs are provided

I'm not suggesting that every UI component needs to be signed off before any development work can be started. If the project was started well, there is already a component pattern library or design guide for the various types of UI element (and if there isn't, create one now!). UX testing of those elements should be done early in the project, to check that they work in practice.

Beyond that point, individual usage should be little more than the type of element and its configuration for that specific case. For example, yes, it's another drop down list, but what are the values within it? How are they sorted? Where does it appear on the page? Does it have any unusual properties (e.g. does it only appear if the user selected value 1 in the previous field)? In particular, complex UI designs must be included, to avoid ambiguity.

Some of you will argue that UI elements can be determined as the sprint progresses, and you're right, but if these are known up front why not include them in the story immediately? Many people think visually and having an image to see and discuss is a powerful way to get the meaning of a story across and assist with estimation.

You don't need to go into too much detail to get a feel for the UX. I've worked with customers who use a quick paper sketch and others who expect a fully rendered final look and feel. If possible, go for the former. Information over art.

Designs of all related architectural elements are complete

The stage your development has reached will determine the volume of work to be done here, but the developers will need to know on what architecture the story solution needs to sit. For established products, this will be stable, but if the story forms the basis for a new feature it may still need architectural elaboration. For brand new products the architectural design is a phase in itself and should be sized separately.

Even projects that use a "create the architecture as we go along" approach need some sort of principles to be established early on so that the team knows the frame in which they can operate. For example, what tech stack? When and how to create a new service? What are the non-functional requirements? etc.

The team understands how to demonstrate the feature

At the end of a development sprint, there should be a demo to show that the work meets the specification. That is intuitive when there is a visual component, but how will the team demo a story that has no visual element (for example a system to system interface)? For the latter case I'd use a test harness, with a lot of verbal explanation.

Even when there is a visual element, what examples will be used? Who will demo the work? How comprehensive should the demo be? These questions should be discussed with the product owner beforehand, so that the necessary planning work can be included in the sprint. Too often, the demo part of the sprint is treated as an afterthought.

The story was estimated by the team

Armed with all the information described above, the team should be able to estimate the story. The method and unit of estimation should be determined by each team individually and consistently throughout your project. Having used story points with a lot of good intentions for a number of years, I'd favour time-based estimation in hours. Story points work well in theory, but as a concept they are hard to grasp and relative sizing is implicitly pointless (pardon the pun) when most people regard points in terms of how much time something will take to do anyway.

Of course, story points represent much more than just the working hours required to fulfil a task, but using an hour-based time estimate approach incorporates a lot of the same implicit elements. Avoid simply having the person with the lowest estimate do the work, however. Always think in terms of the team as a whole.

One issue I've found with story points is the use of the Fibonacci series for estimation. "A little bit bigger than an 8" becomes a 13 - a 62% potential increase - when the work might have only increased by an hour. Hourly estimation avoids this inflation.
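
To put illustrative numbers on that: if an 8-point story equates to roughly 16 hours for a given team, one extra hour of work nudges an hourly estimate to 17 hours (about 6% more), while the Fibonacci scale forces the same story up to 13 points - 62% more on paper.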

If you're worried about estimating before all the facts are known, either push back or estimate in coarser units. An example might be to use days for the three amigos initial "finger in the air" but hours in the sprint planning meeting.


I don't have time for all that!

Stable product teams that have worked together for years gain an intuitive ability to understand nuances expressed by team members. If your project does not have that stability, or if the team changes from time to time, you'll struggle to produce good quality output quickly unless it is clear to the team what they are working towards. It might be optimistic to think that people will "just get it", or that "failing fast" will catch any misunderstanding; years of analysis practice suggest it is more complex than that, and any time spent thinking before doing is valuable, if only to reduce the chance of failing at all.

You'll need to find a balance, and I find that applying at least the spirit of the above can be done quickly and effectively. One quick method is to create and use a pro-forma template. If all the sections contain at least some content, they enable discussion towards a common understanding. As with any approach, the more you put in, the more you'll get out, but that needs to be balanced against time pressures; a form works well to set expectations and remind the author of the areas they need to consider. Overall, though, keep it light.


In summary

If your Definition of Ready includes the elements above and your stories meet those objectives adequately, you'll be in a good position to develop the product you want. 

Monday, 2 March 2020

Some tips for a successful scrum project


If you want your scrum project to succeed, there are a few tricks I've learned over ten years of running Agile projects. Here are some of them:
  • Education, education, education
  • Keep stories tiny
  • Keep sprint cadence to 2 weeks
  • Reserve one sprint in five for tech debt
  • Set up the rules early
  • Update the estimates with the actuals
  • Make stand-ups useful

Education, education, education

Everyone on the project / product development should know what you're doing and how you're going about it before you start. Spend time working out the best way to achieve that, depending on your circumstances.

Be aware that a number of your team and several of your stakeholders will think they already know it all (and not really listen) or be unavailable for any sessions you plan. That's a fact of life.

Plan what you want to cover. Plan multiple sessions. Seek feedback. Implement the feedback.

After you think they've got it, have a refresher session a few weeks later to hammer the message home again. Some people will have misunderstood some of the original points.

Keep stories tiny

A team member (or pair, if you're pair programming) must be able to complete a story within a sprint. I suggest that in general practice they should be able to complete several stories in a sprint.

Whether you estimate in days or story points, make sure the stories are prepped so that none of them are expected to take more than 25% of your elapsed sprint time. That will minimise the risk that stories spill over into subsequent sprints.
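
For example, in a two-week sprint of ten working days, the 25% rule means no story should be expected to take more than about two and a half days of elapsed time; anything larger should be split before the sprint starts.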

Using that rule of thumb quickly reveals the level of granularity that a story should attain prior to the sprint commencing. If you aren't sure whether it's small enough, it's too large - break it down.

No one ever complained that too many stories were being done in a single sprint, though they might complain that the stories are too small to be useful. If that's the case, consider increasing the size and combining dependent stories, but not beyond the 25% limit.

Keep sprint cadence to 2 weeks

I like to start off any development with a set-up sprint, which is often referred to as "Sprint Zero". Sprint Zero may be longer or shorter, depending on your circumstances, but make sure that everything required to start sprint 1 is ready before it ends.

If you're running a scrum development, keep sprint cadence to 2 weeks after Sprint Zero.

Having longer sprints might make you feel as though you’re getting more done, but it delays the feedback loop. Too long and you're doing mini-waterfall.

Conversely, if you're following scrum, shorter sprints are too overhead-heavy and the ceremonies will get in the way of available development time.

If you're working on a Kanban principle instead, I'd suggest having a backlog and progress review every couple of weeks anyway. Under business pressure, it's easy to forget that a large part of Agile is to try, fail fast and learn from it. I've seen teams fail to achieve those objectives because they rushed ahead without taking any time to review their approach or to communicate and learn from each other.

Reserve one sprint in five for tech debt

Tech debt accumulates. Like any debt it gains interest and becomes worse the longer you leave it. Plan in times when it will have your focus as you proceed. That sets expectations amongst stakeholders and improves quality without stifling initial creativity and the right to fail. Going back to my education point, it is vital that this approach is clearly understood by all parties from the outset.

Having a tech debt sprint at least once a quarter empowers your team not to be afraid of falling short of the perfect solution: they know there's a built-in safety net, so they'll perform better and work more efficiently.

Make sure that you plan what tech debt will be covered in the sprint and size it as you would any user story. The work will expand to fill the time available and some problems will be very tricky to fix. If you aren't going to complete a challenge within the sprint, consider setting up a new team to continue to work on it, or just park it for the next tech debt sprint.

One final word of caution: keep an eye on your developers. Rabbit holes are fun, stimulating challenges, but your developers need to be ready to work on the user stories in the following sprint.

Set up the rules early

Ideally during the discovery phase, but by the end of Sprint Zero at the latest, establish a set of golden rules that cover the working principles that will be adopted throughout the project. These should include:

  • Ceremonies (identify which ones there will be; how often; who must attend; what to do if they can't attend)
  • How to treat your colleagues (e.g. turn up to meetings; let everyone speak; listen)
  • Architectural principles (e.g. technology stack; service definition; encapsulation; reusability; security; versioning; scalability; performance targets)
  • Tools and tool use (what; when; minimum content; standards; tool integration / overlaps)
  • UX principles (e.g. pattern libraries; personas; reskinability; accessibility)
  • What makes a good story (e.g. size; depth; BDD scope; atomicity)
  • What the difference is between a user story and a task (e.g. how to break down stories; technical tasks as stories; how they link together)
  • DevOps principles (e.g. continuous integration; continuous deployment vs build frequency; infrastructure as code)

Make sure that everyone on the development team buys into the rules. Review and revise them after a few sprints.

Update the estimates with the actuals

How many times have you spent a lot of effort estimating stories only to never look at the estimates again once the stories have been built? As part of the retrospective for each sprint, look at the estimates for the stories you completed and revise the numbers based on the actual time or story points spent on implementing the story.

That way, when a similar story appears much later in the development, you can refer back to the story just completed and gain confidence in knowing that your new estimate is based on reality not new guesswork.

In addition, record any unforeseen issues you had with it and how you overcame them. Remember, it might be a different developer or team working on the new story and even if they'd like to estimate for themselves, they'll benefit from your experience and insight.

Make stand-ups useful

"Every day we waste an hour in stand-ups" is a common complaint amongst developers. If you follow the rigorous "what I did yesterday / what I'm doing today / what are my blockers" paradigm, they can be slow and tedious, to the point where they become something people try to avoid.

To combat the attrition:

  • Make them actual stand ups, not "sit downs"
  • Start on time
  • If your team is scattered geographically, use video and make sure it works before the meeting starts
  • Impose a 2 minute limit per person and a 30 minute limit in total
  • Instead of making them a progress report to the scrum master, try only speaking about blockers and other immediate issues
  • Don't fall into the trap of trying to solve a problem in the meeting - you'll be wasting the time of everyone who isn't interested - arrange a post-stand-up conversation instead
  • Don't feel the need to speak if you have nothing to say - just say that there are no changes

What's your experience?

Hopefully at least some of that has been helpful and I know you'll recognise many of the issues identified. If you have any other tips, please feel free to comment below.

Monday, 18 November 2019

Agile Time

Another big item on your shopping list for a successful Agile development is a little less tangible, but the work will fail without it.

Working hours

Decide up front during what hours people will be available. Most of us don't work in a Draconian factory where everyone clocks in at 9:00 and out at 5:30, but we do need to communicate effectively and to do that we need to know when people are available. Agree product-wide core hours and stick to them. Commonly, I'd use 10-12 and 2-4, but that might vary for situations where part of the team works in other time zones.

Stakeholder availability

If you can, be Agile, don't just "do" Agile. That means that all parties involved in the work need to be trained and operate in an Agile manner. However, regardless of what the books say, people in some roles (at least stakeholders and, often, even product owners) cannot be dedicated to the project full time. If that's the case, agree scheduled time slots when they can be dedicated and make sure they stick to them. Assuming there's no travel involved (see "video conferencing", earlier) hold the regular meetings even if there isn't much to say. Everyone likes having unexpected free time in their calendars, but beware: if a regular meeting is skipped once, it can be clawed back; twice, and no one turns up to the third one.

It's sensible for each stakeholder to designate a deputy. There will be times when even the most dedicated individual can't make it (due to holidays, illness, etc.). Having a deputy means the work can still go ahead, but remind the stakeholder that they need to keep their deputy informed.

Sprint cadence

I'm used to running large scale Agile developments with many teams, but even if you have only a few teams you'll benefit from sticking to a common sprint duration across all teams. Stakeholder time may be rare, so make the most of it by ensuring that sprint demos and other common meetings happen once for the product, not once per team. Common cadence makes these events easy to plan for and significant to miss. Knowing when a feature delivery that spans several teams will be ready, and having all teams' work entering integration testing at the same time, is a great way to reduce stress and rework.

Essential gear for Agile development

I'm all about doing things on the cheap, quickly and efficiently. There are many complex online tools and services out there that provide great (and expensive) platforms for collaboration. Sometimes, things can be simpler and just as effective.

In order to communicate effectively, you're going to need the proper kit, but what does that mean? This post hopes to point you in the right direction: what tools you'll need and how you might use them to better advantage, all based on real-world experience. It might make you smile too.

Jira

Yes, there are alternatives, but Atlassian's workhorse has become the industry standard for large-scale development. Sorry, Rally, but you really never made it. Jira's (fairly) predictable nature means that there's probably a way to do it (for any definition of "it") either already as part of the tool or as a plug-in. Learn the features; thank me later. Cloud licencing is more expensive, but less of a headache than the local version.

Chat

Sure, you can use email but that's so last decade. Share your thoughts in a public workspace and never have to remember that attachment again! Public chat tools can be a little scary at first, as your views are suddenly known to everyone with the right permissions, so keep it professional and make sure you use a tool that allows secure 1:1 chats as well as group think. Teams isn't bad; Slack is better.

Videoconferencing

A Microsoft study from 2010 found orders-of-magnitude differences in communication effectiveness between paper, email, chat, audio conferencing and video conferencing, so get in front of that camera and shine! Sure, audio conferences are more hangover-friendly, but you can't see the expression of the person speaking (or of those behind them) without video, so you lose a lot of information, such as whether they mean what they're saying, whether they're really interested or whether the expensive "get everyone in a room" meeting is really cost effective. Skype works; Teams works; Zoom is better.

Audio conferencing

"But how am I supposed to video conference and screen-share with our bandwidth?!" I hear you cry. Pay for more bandwidth, but in the meantime get on a call. Tools such as Teams are good for talking over a shared app or desktop. Enjoy your conference call bingo session as you're not sure whether the other side can hear you or not though. Did I mention video conferencing? You can see if they're having trouble or not, immediately. Just sayin'

Screen sharing and collaborative editing

Effective remote / global working relies on common understanding of content. Using collaboration tools that let you see and edit that content as a group from a number of locations simultaneously is marvellous (until that idiot in the other office edits your document using spaces instead of tabs again - hate that guy!). Don't be afraid to hand over ownership of a document to allow a reviewer to show you exactly which part they'd like to change. A stream of "this bit?" "no, before that" "this bit then?" "no after that" is always fun, but document-battleships wears thin after a while.

Meeting rooms

Why does no one ever design a building with enough meeting rooms? Is there some secret cabal of office space renting companies controlling our working lives? Conducting meetings in an open plan office is always sub-optimal. Get a room, people!

Put stuff on the wall. More importantly, take stuff off the wall once you're done. Decorating is not your strength. Use vertical space liberally during meetings, then photograph it and take it off the wall. Save the photos centrally so that everyone can access them. If you don't have the luxury of long-term meeting room use, stick sheets of cheap plain recyclable wallpaper up first, then stick stuff to / write on the paper. If you need to change rooms, simply remove the wallpaper and carry it to the next room.

White boards

If you can, use whiteboards on wheels so that they can be moved from the office space into meeting rooms and back again. I recommend one reasonably large, double-sided whiteboard per team. Label each whiteboard with the team name so that they remain with the team. In an open-plan office with limited meeting room space these are a godsend. No one is allowed to write "do not remove" or "please leave" on any whiteboard - take a photo and spend two minutes redrawing / rewriting the content at the start of the next meeting in which you need it. You'll find that having to lay the content out again actually helps your thought process.

Wall space

If you have walls or windows, use them. Let each team set an archive policy before anything goes up though. Wall space fills up fast and "just in case we need it again" does not pay for the lack of creative opportunity. One tip: don't put company-sensitive stuff up on an external window.

Pens

In this digital age, always make sure that you have something to write with.

Whiteboard pens are like socks - blink and they're gone. The one that's left is the one with the damaged nib or that has run dry. Throw it out if you can't use it and save yourself the disappointment later.

Larger nibs make the writing more visible from a distance (like, say the other side of the room), so don't be afraid to use felt tips.

Post-Its

Post-Its (real ones, not the cheap variety that dry up and fall off overnight) are essentials of modern business life if for no other reason than to edit that process modelling mistake you made yesterday.

Forgot a step in your process model? Shuffle stuff along. Want a decision box? Turn a square Post-It through 45 degrees and you get a diamond. 5cm square stickies work well to limit the content you can put on them. Use colours to mean things and you add an extra dimension to your annotation.

One extra tip: when using Post-Its, always take a photo of your wall before going home. I once returned to the office the next day to find that since some had fallen off the wall, the cleaner had neatly stacked the entire wall contents and left them on the meeting room table.

Wiki

You're going to need somewhere to store all those photos, sketches and other documents you've produced, even in "document-light" Agile. Make sure it's accessible and searchable.

If you're using Jira, consider Confluence as they link reasonably well together, but other collaboration tools you have might work just as well.

What's the right size of a cross-functional team?


In the crazy old days of waterfall development we had distinct teams of specialists: business analysts, architects, designers, developers, testers, deployment specialists and support staff.

In the world of Agile development one of the recommendations is to have cross-functional teams. This post describes what that means and what a good cross-functional scrum team composition looks like.

So what is a "cross-functional team"?

Imagine a development team that has all the people in it needed to deliver a piece of functionality. A base of developers, a healthy dollop of testers, a sprinkling of UX, a soupçon of product owner and a scrum master to stir it all with. I'm going to take the food analogy way too far but it fits reasonably well, so forgive me.

Vital ingredients

To get a perfect mix you need to use the right ingredients. Hopefully these are all items you have in your cupboard already, but here's my summary of what I need from each:

  • Product Owner - The restaurant owner who decides what sort of food should be on the menu and which customers will want to eat it
  • Scrum Master - The restaurant manager who ensures that everything the chefs require is available when they need it; that the menu isn't too elaborate and who makes sure that the kitchen is kept tidy
  • Tech Lead - The head chef who works out how to make the food on the menu, works out the recipes that can be delivered each iteration and makes sure that the food is as expected when it's served to the customers
  • UX - The one who makes sure that the food looks attractive and is easy to eat, with the correct utensils to hand
  • Developers - The sous-chefs who prepare and cook the food, make sure that it tastes OK and clean the work-surfaces afterwards
  • Testers - The tasters who check that the food is really OK to eat and meets health & safety regulations before the customers get to eat it
  • DevOps Specialist - The mechanics who make sure that the equipment in the kitchen is adequate and works as expected for the chefs (if the chefs don't do that themselves); that the routes through to the serving area are clear and well understood and that the chefs know how to get their food through those routes

Getting the recipe right

The purpose of the team is to be able to deliver food to the restaurant (i.e. features into the production environment), but it's quite hard to work out the correct recipe. After much trial and error, this is the ideal balance I came up with per team in a scaled Agile framework:

  • 1 Product Owner
  • 1 Scrum Master
  • 1 Tech Lead
  • 1 UX
  • 2-4 Developers
  • 1-2 Testers
  • 1 DevOps specialist
Team structure for a single scrum team project

However, that's a heavy mix of some very expensive ingredients and the kitchen is large enough for several recipes to be cooked by multiple squads at the same time, so if you like you can spread some of the ingredients over a number of dishes. I'd recommend these proportions:

  • 1 product owner per 3-5 squads
  • 1 scrum master per 1-3 squads
  • 1 DevOps specialist per 1-3 squads
  • 1 UX per 2 user interface squads (not all squads require a visual element if they are specialised)
  • The remaining roles are full time and shouldn't be divided cross-squad or the squads risk losing core cohesion.
Team structure for a multiple scrum team project

The result is a team size of around 6-8 FTEs per squad once the kitchen has settled down into a regular pattern. Too small and the team can't deliver enough food to the customers; too large and they keep falling over each other in the kitchen.
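
As an illustrative worked example of those proportions: a product with four UI squads might share one product owner, two scrum masters, two DevOps specialists and two UX people (1.75 shared FTEs per squad), with each squad keeping its own full-time tech lead, three developers and one or two testers. That works out at roughly seven FTEs per squad - comfortably within the 6-8 range.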

Considerations for larger projects

For projects that require larger teams, I'd add a Delivery Manager above the Scrum Masters and a Product Manager above the Product Owners. You may want to add a Business Analyst or two if the project is very large or Product Owners will spend their time dealing with team requests and run out of time to write stories for future work.

Bon appetit!

Tuesday, 4 November 2014

How to do usability testing with only an hour's notice

Recently, while on a customer site, I participated in two usability tests carried out by a couple of experienced specialists who did not follow the more usual camera-based approach. I thought it would be valuable to share that with you.

First of all, apologies for the slightly misleading title of this post. You don't need to reserve this approach for impromptu usability testing alone; indeed, it is the approach always used by those particular specialists. In this case, they prepared the testing the day before it was due, but the same principles can be adopted with much less time.

Secondly, my inclination was that testing only rough wireframes would not be useful. I was wrong. Test early, to help to guide the direction of the project. Test again when the solution looks a lot closer to the finished article, even if that just means testing a prototype.

The rest of this article describes the steps you as a test moderator should follow to carry out an effective usability test. You can do all this in under an hour, but you'll have to get your skates on!

Step 1: Decide what to test

The main challenge with any usability test is to set the scope. Will you dictate the actions that you want the testers to follow, allow them to roam through the system or both?

If you are going to create a set of tasks, do that and stick to it. Make them singular user journeys. Make sure they have an obvious start, a middle and an expected end point.

Don't expect a tester to test more than five things in a single session. Any more and they will get bored, their attention will start to wane and the test results will be skewed.

Step 2: Vary the tests

Asking a tester to run the same test again but with a slightly different set of parameters (e.g. the same lookup but with a different status) will do two things, both of which are bad:

  1. They will have learned about the function the first time around, which will skew the results; and
  2. They will be bored, which will skew the results.

If you have to test different parameters for the same test, have them done by different testers.

Step 3: Write the tests down

Each test should have a clear objective, written in a way that any tester would find easy to understand. If it is more than two paragraphs, it is too long. Make sure that only the test (and any data the tester needs to perform the test, such as user ids or reference numbers) are written on the sheet that the tester sees. Prepare the instruction sheets so that you can give them the sheet for one test at a time, without them being able to see the details of other tests.

For each test, prepare a review sheet that asks the tester to answer a series of yes/no questions about how they found the test. This keeps the responses brief and focused and removes the kind of doubt that (for example) a 1-10 scale might introduce.

Include questions about whether the journey made sense, whether they felt that the functionality was logical and whether the activity was easy to do, but also softer subjects such as whether they would recommend the function to a friend, whether they are likely to re-use the function, and so on.

Always make sure that there is an area at the end of the sheet where the tester can write their overall views of the test and offer any comments they may have. Don't forget to make sure that the form includes the name of the test they were carrying out.
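
As an invented illustration, a review sheet following that structure might look like this:

TEST          <name of the test the tester carried out>
Q1  Did the journey make sense to you?               YES / NO
Q2  Was the functionality logical?                   YES / NO
Q3  Was the activity easy to do?                     YES / NO
Q4  Would you recommend this function to a friend?   YES / NO
Q5  Are you likely to re-use this function?          YES / NO
COMMENTS  <overall views of the test and any other comments>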

Step 4: Find some test subjects

If you have time, send out general invites to the expected user groups that the application will be used by. Do not invite anyone who knows about the application build, unless your work is to enhance an existing, non-public system. In that case, only invite users of the current system. In all other cases, you can use members of the public and you're in luck - there are loads of those sat around you!

Try to make sure that the testers have different backgrounds and levels of comfort with technology, even if that means randomly grabbing people in the park!

Step 5: Sort out the logistics

Book a room for the session (if you don't plan to do a mobile test - I was serious about grabbing people in the park). Make sure you have a computer / tablet / laptop / phone / etc on which to test the application. Make sure that the application is installed on an environment that can be accessed from the device and from that particular room.

Expecting wireless access to be sufficient isn't the same as knowing that it will work. Expecting the application server to be available isn't the same as making sure it is accessible and booked for the test.

Step 6: Invite the testers

Create a schedule of testing slots and send out invitations to the potential pool of testers. Assume that 20% of people who accept the invitation will drop out but make sure that everyone understands that there is a waiting list. In other words, invite more people than you need. Once you have some respondents, allocate each a test slot and confirm it with them.

Step 7: Prep the team

There will not be a camera; having a camera in the room makes people nervous and skews the test.

Have no more than two observers (to avoid intimidating the tester) but have them in the room. Make it clear to the observers that they are there only to observe unless invited to speak. If they make any noise, pull faces during a test or otherwise do anything that might influence the tester, throw them out of the room. Really. Make these rules known and understood.

If there are two observers, have one sit facing the tester (watching the tester and, if possible, screen-sharing the screen they are using) and one to the side of and slightly behind the tester so that they can see the tester's profile and the screen. Having both behind the tester can be intimidating.

Step 8: Introduce the tests

Each tester must receive a brief introduction to the test session: what is expected of them and how the session will run. The Moderator (that's you) runs the test sessions and does all the talking.

First, tell the tester that they are not being tested; it is the application that is being tested. Any problems they encounter are problems with the system and not with the tester. They cannot hurt your feelings.

Tell them how long the set of tests will take and that the tester will be given a series of tasks (never say how many, as that will put undue pressure on the tester - if you run out of time, just do fewer tests).

Step 9: Run the tests

Introduce the first task. Read the test script. Ask if there are any questions or things that need further explanation. Once any questions are answered, hand the test script to the tester for reference.

Depending on what the tester is comfortable with, either ask them to talk their way through what they are doing as they go, or let them stay silent. In either case, observe closely what they do and take notes as they go. Encourage the observers to do the same. Ask the tester to tell you when they think they have finished.

Do not guide or offer advice to the tester. If they ask for advice, ask them what they think they should do next. Do not answer their questions directly. You must act as if they were alone, difficult though that is.

Step 10: Review the test

When the tester seems to have finished the test (often signalled by them physically sitting back from the screen, or by them saying so), hand them the review sheet and get them to complete it.

Once they've completed the sheet, ask the tester how it went and whether they have any questions. Ask the observers if they have any questions for the tester. Record the questions and the tester's answers.

Start the next test and repeat steps 9 and 10 until the time runs out.

Thank the tester for their time and hand out any reward they earned (often something like a gift voucher for public tests). Call in the next tester.

Step 11: Collate the results

Compile the review sheet answers into a spreadsheet and bring out the most significant answers; "significant" means problems reported by more than 20% of the respondents. You will never be able to please all the people all the time. Go with only the common problems; do not try to turn every piece of feedback into a system change.
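
For example, with eight testers, the 20% threshold means acting only on problems reported by at least two of them.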

Step 12: Decide what to do with the results

Will each significant feedback item become a formal change request? Will you simply absorb the feedback into the build? Will you ignore the feedback? Each case has to be taken on its merits. Talk to the project manager, team leads and business representatives to decide what to do.

In summary

You can do usability testing quickly and easily, which means that you can do it often. Do it often. Learn about how people will use your application as you are developing it. They will surprise and frustrate you. Let them. Learn from it. The result will always be a better system.