Tuesday, 4 November 2014

How to do usability testing with only an hour's notice

Recently, while on a customer site, I participated in two usability tests carried out by a couple of experienced specialists who followed a different process from the more usual camera-based method. I thought it would be valuable to share it with you.

First of all, apologies for the slightly misleading title to this blog. You don't need to use this approach for impromptu usability testing alone; indeed, this is the approach always used by those particular specialists. In this case, they prepared the testing the day before it was due, but the same principles can be adopted with much less time.

Secondly, my assumption was that testing only rough wireframes would not be useful. I was wrong. Test early, to help guide the direction of the project. Test again when the solution looks a lot closer to the finished article, even if that just means testing a prototype.

The rest of this article describes the steps you as a test moderator should follow to carry out an effective usability test. You can do all this in under an hour, but you'll have to get your skates on!

Step 1: Decide what to test

The main challenge with any usability test is to set the scope. Will you dictate the actions that you want the testers to follow, allow them to roam freely through the system, or do both?

If you are going to create a set of tasks, do that and stick to it. Make each one a single user journey. Make sure they have an obvious start, a middle and an expected end point.

Don't expect a tester to test more than five things in a single session. Expecting more will bore them and their attention will start to wane, skewing the test results.

Step 2: Vary the tests

Asking a tester to run the same test again but with a slightly different set of parameters (e.g. the same lookup but with a different status) will do two things, both of which are bad:

  1. They will have learned about the function the first time around, which will skew the results; and
  2. They will be bored, which will skew the results.

If you have to test different parameters for the same test, have them done by different testers.

Step 3: Write the tests down

Each test should have a clear objective, written in a way that any tester would find easy to understand. If it is more than two paragraphs, it is too long. Make sure that only the test (and any data the tester needs to perform the test, such as user ids or reference numbers) is written on the sheet that the tester sees. Prepare the instruction sheets so that you can give them the sheet for one test at a time, without them being able to see the details of other tests.

For each test, prepare a review sheet that asks the tester to answer a series of yes/no questions about how they found the test. This keeps the responses brief and focused and removes the kind of doubt that (for example) a 1-10 scale might introduce.

Include questions about whether the journey made sense, whether they felt that the functionality was logical and whether the activity was easy to do, but also softer subjects such as whether they would recommend the function to a friend, whether they are likely to re-use the function or not, and so on.

Always make sure that there is an area at the end of the sheet where the tester can write their overall views of the test and offer any comments they may have. Don't forget to make sure that the form includes the name of the test they were carrying out.

Step 4: Find some test subjects

If you have time, send out general invites to the groups you expect to use the application. Do not invite anyone who knows about the application build, unless your work is to enhance an existing, non-public system. In that case, only invite users of the current system. In all other cases, you can use members of the public, and you're in luck - there are loads of those sitting around you!

Try to make sure that the testers have different backgrounds and levels of comfort with technology, even if that means randomly grabbing people in the park!

Step 5: Sort out the logistics

Book a room for the session (if you don't plan to do a mobile test - I was serious about grabbing people in the park). Make sure you have a computer / tablet / laptop / phone / etc on which to test the application. Make sure that the application is installed on an environment that can be accessed from the device and from that particular room.

Expecting wireless access to be sufficient isn't the same as knowing that it will work. Expecting the application server to be available isn't the same as making sure it is accessible and booked for the test.

Step 6: Invite the testers

Create a schedule of testing slots and send out invitations to the potential pool of testers. Assume that 20% of people who accept the invitation will drop out but make sure that everyone understands that there is a waiting list. In other words, invite more people than you need. Once you have some respondents, allocate each a test slot and confirm it with them.
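The over-booking arithmetic above can be sketched in a few lines. This is only an illustration of the idea; the 20% drop-out rate comes from the article, while the slot counts are invented for the example:

```python
import math

def invites_needed(slots, dropout_rate=0.20):
    """Estimate how many acceptances to collect so that, after the
    expected drop-outs, every testing slot is still filled."""
    return math.ceil(slots / (1 - dropout_rate))

# e.g. 8 testing slots with the assumed 20% drop-out rate
print(invites_needed(8))  # 10 acceptances; the surplus becomes the waiting list
```

The extra acceptances beyond the slot count are the people you place on the waiting list.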

Step 7: Prep the team

There will not be a camera; having a camera in the room makes people nervous and skews the test.

Have no more than two observers (to avoid intimidating the tester) but have them in the room. Make it clear to the observers that they are there only to observe unless invited to speak. If they make any noise, pull faces during a test or otherwise do anything that might influence the tester, throw them out of the room. Really. Make these rules known and understood.

If there are two observers, have one sit facing the tester (watching the tester and, if possible, screen-sharing the screen they are using) and one to the side of and slightly behind the tester so that they can see the tester's profile and the screen. Having both behind the tester can be intimidating.

Step 8: Introduce the tests

Each tester must receive a brief introduction to the test session: what is expected of them and how the session will run. The Moderator (that's you) runs the test sessions and does all the talking.

First, tell the tester that they are not being tested; it is the application that is being tested. Any problems they encounter are problems with the system and not with the tester. They cannot hurt your feelings.

Tell them how long the set of tests will take and that they will be given a series of tasks (never say how many, as that will put undue pressure on the tester - if you run out of time, just do fewer tests).

Step 9: Run the tests

Introduce the first task. Read the test script. Ask if there are any questions or things that need further explanation. Once any questions are answered, hand the test script to the tester for reference.

Depending on what the tester is comfortable with, either ask them to talk their way through what they are doing as they go, or let them stay silent. In either case, observe closely what they do and take notes as they go. Encourage the observers to do the same. Ask the tester to tell you when they think they have finished.

Do not guide or offer advice to the tester. If they ask for advice, ask them what they think they should do next. Do not answer their questions directly. You must act as if they were alone, difficult though that is.

Step 10: Review the test

When the tester seems to have finished the test (often detectable by them physically sitting back from the screen or by them saying so), hand them the review sheet and get them to complete it.

Once they've completed the sheet, ask the tester how it went and whether they have any questions. Ask the observers if they have any questions for the tester. Record the questions and the tester's answers.

Start the next test and repeat steps 9 and 10 until the time runs out.

Thank the tester for their time and hand out any reward they earned (often something like a gift voucher for public tests). Call in the next tester.

Step 11: Collate the results

Compile the review sheet answers into a spreadsheet and pull out the most significant answers; "significant" means problems reported by more than 20% of the respondents. You will never be able to please all the people all the time. Go with only the common problems; do not try to resolve every bit of feedback into a system change.
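The collation step above is simple enough to sketch. This is a minimal example, not a prescribed tool; the question names and answers are invented for illustration, and "no" is assumed to mean the tester reported a problem:

```python
from collections import Counter

def significant_problems(review_sheets, threshold=0.20):
    """Given one dict of yes/no answers per tester, return the questions
    where more than `threshold` of testers answered 'no' (i.e. where a
    problem was reported by a significant share of respondents)."""
    n = len(review_sheets)
    no_counts = Counter()
    for sheet in review_sheets:
        for question, answer in sheet.items():
            if answer == "no":
                no_counts[question] += 1
    return [q for q, c in no_counts.items() if c / n > threshold]

# Hypothetical results from five testers of the same test
sheets = [
    {"journey made sense": "yes", "easy to do": "no"},
    {"journey made sense": "yes", "easy to do": "no"},
    {"journey made sense": "yes", "easy to do": "yes"},
    {"journey made sense": "no",  "easy to do": "yes"},
    {"journey made sense": "yes", "easy to do": "yes"},
]
print(significant_problems(sheets))  # ['easy to do'] - 2 of 5 (40%) reported it
```

"journey made sense" drew one "no" (exactly 20%), so it falls below the "more than 20%" bar and is correctly left out.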

Step 12: Decide what to do with the results

Will each significant feedback item become a formal change request? Will you simply absorb the feedback into the build? Will you ignore the feedback? Each case has to be taken on its merits. Talk to the project manager, team leads and business representatives to decide what to do.

In summary

You can do usability testing quickly and easily, which means that you can do it often. Do it often. Learn about how people will use your application as you are developing it. They will surprise and frustrate you. Let them. Learn from it. The result will always be a better system.