Usability testing methods and common mistakes

Usability Testing

Delivering powerful insight into what is and isn’t working for your users, usability testing is right at the heart of user experience design. Executed well, it can require relatively little outlay and deliver great return on investment. Executed badly, it can send costs spiralling without producing conclusive results. Here, we cover some of the most common mistakes in usability testing to help you avoid potential pitfalls – leaving you better placed to take advantage of the insights and opinions of real users, and produce higher-quality products that support your business goals.

Choosing your usability testing methods

Using the most appropriate testing method for your objectives and context is crucial. If you don’t select the right combination of activities, you risk capturing the wrong information from the wrong people about the wrong areas of your site – preventing you from translating your data into actionable insight.
 

  • Looking to validate early sketches and prototypes before committing development resource? Low-fidelity guerrilla testing provides instant insight.
  • Want to benchmark current performance? You can gather large volumes of quantitative data through an unmoderated remote approach.
  • Need to test with specific industries, regions or demographics? Moderated remote testing gathers detailed feedback without requiring users to visit a lab.
  • Want the full bells and whistles treatment? Laboratory testing has been proven as one of the most effective methods for discovering usability and accessibility issues.

Common mistakes in usability testing

Once you’ve selected the most appropriate approach (or combination of approaches), it’s important to avoid falling into the following traps:

Recruiting unsuitable participants

It’s important that the right people are chosen for testing. You achieve this by conducting pre-testing user research about the audience of the site or app to be reviewed, through techniques such as surveys and statistical analysis, call centre ethnography, and user stories. All of these will help you identify the right demographics and write a better test plan.

Remember, your results are only as good as your participants: if they are not representative of actual users, you are likely to be led astray.

Not testing early and often during the project lifecycle

Usability testing unfortunately has a reputation for being expensive, so it’s tempting to run only a few laboratory sessions at the end of a project. Instead, test regularly throughout, taking advantage of the various lightweight methodologies available to you, such as guerrilla and unmoderated remote testing.

Industry experts disagree on the exact number of participants needed for optimal accuracy and return on investment, although naturally, the more users you are able to test, the more confidence you can have in the results. Where the experts do agree is that even with just 5–10 users you can uncover incredibly valuable insights, at very low cost.

Consider also that testing with even just a few users (rather than none!) will always be cheaper in the long run than catching errors or usability issues only later in the development process, when they will be more expensive and time-consuming to resolve.
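To illustrate why such small samples can still be so valuable, the rough sketch below applies the widely cited problem-discovery model from Nielsen and Landauer, under the illustrative assumption that each participant uncovers around 31% of the issues present (the average rate they reported – your own rate will vary by product and task complexity):

    # Rough sketch of the Nielsen & Landauer problem-discovery model,
    # assuming each participant uncovers ~31% of usability issues on average.
    # The 31% figure is an assumption for illustration; measure your own rate.

    def proportion_found(n_users: int, detection_rate: float = 0.31) -> float:
        """Expected share of usability problems found after n_users sessions."""
        return 1 - (1 - detection_rate) ** n_users

    for n in (1, 3, 5, 10, 15):
        print(f"{n:>2} users: ~{proportion_found(n):.0%} of problems found")

Under these assumptions, five participants would be expected to surface roughly 85% of the problems, with sharply diminishing returns beyond ten – one reason why frequent small rounds of testing tend to deliver more value than a single large session at the end of a project.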

Following too rigid a test plan

Usability testing sessions are notoriously unpredictable: relaxed participants will say what they think and navigate freely. This is natural – it mirrors their normal behaviour – so you should encourage it.

For example, if someone starts talking about a competitor site you should take them there. Ask them what they do and don’t like about it, and how it compares to the site being tested. This ‘extra’ feedback can be highly insightful. Refrain from thinking, “I have three more questions to ask and not enough time”. Yes, you may have to drop a question, but it’s worth it if the insight is unique.

Be flexible: allow for additional questions and prioritise. You’ll need to keep an eye on the clock and your test plan throughout the session, and if you are running out of time or the participant has unwittingly answered an upcoming question, you may need to skip it. Avoid pausing to study the test plan, though, as this disrupts the flow of the session.

Not rehearsing your setup

Failing to check your setup thoroughly is probably the most common mistake you can make, so rehearse, rehearse, and rehearse again. It’s easy to assume the setup hasn’t changed since last time, only to discover a missing cable or a new PC without the right software installed (yes, it happens!).

Put your setup through a practice run two weeks before the testing sessions so serious problems can be rectified, and check again a couple of days beforehand – it’s better to find a last-minute glitch now than 30 minutes before the first session.

Using a one-way mirror

One-way mirrors are used in usability labs so observers can watch without being seen. However, they often create an unnatural environment for participants and risk influencing feedback through the ‘observer effect’.

Instead of a one-way mirror, try relaying footage to a separate space (as we do at Box UK, via our state-of-the-art usability testing suite featuring dedicated testing and viewing rooms, and high-quality audio and visual feeds that relay details of sessions to observers on- and off-site). This distance allows observers to leave the room whenever they want and to talk freely without being overheard – both of which can otherwise unnerve or distract participants.

Not meeting participants in reception

It’s natural for participants to be a little nervous, so it’s a good idea to meet them in reception and make them feel welcome. Thank them for coming, ask how their journey was, and offer them a hot drink. A friendly face will relax them and hopefully result in a great session.

Asking leading questions

Good facilitators will ask the right questions, follow up on an insightful comment and observe the participant’s body language.

It’s easy to ask “Was that easy?” or “Did you not click that button because it’s hard to see?” and hear “Yes, I think so”, so beware the leading question. The responses will be more accurate if you ask open questions such as “How did you find that?” or “Was it how you expected?”. It’s also common for participants to ask “Am I doing it right?” or “Am I in the right place?” – again, rather than lead them, turn the question around and ask “What do you think?”.

Finally, remember that body language can be as insightful as a comment. Common displays of emotion include frowning or grimacing, fidgeting, excessive mouse movement, sweating and hand-to-mouth gestures. To ensure these signals are not misinterpreted, follow up with questions or ask if the participant would like a break.

Interrupting the participant

We’ve all done it: you think the participant has finished talking or doesn’t know what to say, and you jump in. Give the participant time to think, and remember that you aren’t learning while you are talking. Staying quiet also gives you time to review your test plan and keep an eye on the clock!

Undertaking two roles in a testing session

It’s very difficult to run a successful usability testing session if you’re both facilitator and observer. A facilitator should be fully focused on the participant, the test plan and the clock, while the observer should be making notes, monitoring analytics and, if necessary, identifying improvements for the next session. Without this separation you run the risk of unnecessary delays and of missing vital participant feedback and signals, not to mention fatigue and stress.

Wherever possible, use two different consultants in the roles of facilitator and observer. If it is absolutely necessary for one person to undertake both, use a voice recorder or webcam so you can carry out the observation after facilitating, rather than attempting both at the same time.

Not considering external influences

External factors such as building work, loud office music, overheard noise and movement, and interruptions can all detrimentally affect your usability testing process and results. To avoid these distractions, inform your office manager a few weeks in advance to prevent double bookings and avoid any pre-scheduled maintenance work, and send out an office email ahead of the sessions so colleagues know to keep clear of the testing space. If you use a one-way mirror, it’s also a good idea to ask a colleague to sit in the observation room beforehand to test the impact of noise and light, and ensure there won’t be any disruptions.

Set yourself up for success

The benefits of conducting usability testing are consistently demonstrated through industry research, making it a crucial consideration at the outset of any project – and by avoiding the usability testing mistakes highlighted in this post, you’ll put yourself in the best possible position to realise the benefits of this high-impact user experience activity.

At Box UK we understand the importance of regular testing, and have a strong team of UX consultants with hundreds of hours’ testing experience across a range of industries including FMCG, finance, education and leisure – driving improvements such as 46% increases in order value and 275% increases in client recruitment. To find out more, email ux@boxuk.com to request a product sheet; you can also download a PDF version of this post from https://www.boxuk.com/insight/ten-common-usability-testing-mistakes/, which features valuable additional insight and practical tips to support your own usability testing efforts.