Learn User Experience Testing

Learn how to test your app with real humans

Why?

This is probably the most important/useful skill any "creative technologist" has. πŸ’ͺ

Well-conducted user experience testing helps you ensure, at each stage of development, that what you are planning to build is going to solve the problem for your users that you set out to solve. βœ…

Done properly, it prevents you from wasting time ⏳ / effort πŸ˜“ / money πŸ’° on guessing πŸ€”β“ what your users want, need or understand, by getting them to critique your product as you go rather than retrospectively.

Testing usability and following usability guidelines is not something we have to do to restrict ourselves or because we're told to. Understanding usability is about ensuring users can use our product effectively πŸ“ˆ πŸ‘

What?

User experience testing is a means of collecting feedback πŸ—£from users on whether the UX/UI presented allows them to achieve their desired goal in the fewest steps.

It should be performed from the very beginning of a project and repeated throughout, in an iterative cycle of designing, testing and refining.

User experience testing will not produce insightful results if performed as a 'token gesture'. Care must be taken to follow the guidelines properly so that the results are not invalidated by confirmation bias.

How?

There are many approaches to user experience testing; which you choose may depend on your product, stage of product development, budget πŸ’΅, client base πŸ‘₯ etc.

The first stage of user testing you may wish to conduct is known as the discovery phase, for information on this stage see: https://www.gov.uk/service-manual/agile-delivery/how-the-discovery-phase-works

To perform usability testing you may use one or more of the following methods:

Ensure that you conduct tests across devices so that your data reflects all of your users' experiences, e.g. mobile πŸ“±, tablet and desktop πŸ’».

How to conduct interviews

  1. Create a script to run through with your testers πŸ“ƒ
  2. Decide who you will interview πŸ‘₯
  3. Find somewhere appropriate to conduct your interviews 🏒
  4. Decide on a time/date to perform your research πŸ“† πŸ•
  5. Decide who in the team will conduct the interviews πŸ‘©β€πŸ’ΌπŸ‘¨β€πŸ’Ό
  6. Decide how many interviews you wish to do
  7. Decide how you will record the findings of your interviews. πŸ“πŸ”‰πŸŽ₯

1. Writing a script πŸ–Š

A script is a helpful way of ensuring interviews are delivered consistently (if different team members conduct them) and that the interviewer can feel relaxed ☺️ because they know what they've got to say. It's important that scripts aren't read like an autocue πŸ€–, as you want the tester to feel relaxed too, so making the interview feel like a normal conversation is beneficial.

However, if you are anxious 😰 about missing something out, one way to deal with this is to tell the participant: 'I'm going to read through this part of the process to make sure I don't leave anything out.' By telling them what you are doing and why, you can alleviate worries they might otherwise have constructed: you're reading it so they benefit from all of the information, not because this is a formal setting and you're avoiding their eye contact πŸ‘€πŸ˜Š!

Make sure the team familiarises themselves with the script before they start. The script doesn't need to be followed word for word, but team members should bear in mind that certain word choices are important in order not to influence the testers' responses. Here is a script outline that might be of use for some projects: https://docs.google.com/forms/d/e/1FAIpQLSfM2Uaje8gpBde-RqU01DlxrGUEsWZrjiy8yTKmYaeJYAZUuw/viewform

The script came from this 5-minute Google tutorial: https://www.youtube.com/watch?v=0YL0xoSmyZI. Remember, though, to always tailor your script to what you are testing, e.g. you may not need as many introductory questions as this one, depending on what you are aiming to find out from your testing at this stage.

These are some of the key points from it and other resources (see the list at the bottom of this README):

Introduction

Initial Questions

Exploring your wireframes / product πŸ”Ž

2. Who πŸ‘₯

If you have existing users for your product, reach out to them and ask if any of them would like to participate in some research to help improve the product. If you don't have any existing users yet, aim to test your application on people who represent your target user group. Even when you do have an existing user base, testing with people who are new to your product can offer a different perspective from those who already know it and what it does.

If you are struggling to find participants, encourage people by telling them how long participation takes (10 minutes is reasonable), or offer something to thank them for taking part, e.g. buy them a coffee β˜•οΈ if you are in a coffee shop. Just be mindful that you don't want what you offer as a thank-you to influence the responses they give. You may get a better response rate from people who are on their own πŸ‘€.

3. Where πŸ—Ί

The key considerations for deciding where to conduct your interviews are:

4. When πŸ•₯πŸ•š

Choose a time that suits you and your interviewees. If you're looking to source interviewees on the day, pick a time when your chosen location is likely to have plenty of people around, e.g. office hours for an office; would people be happier to talk on their lunch break, or are they likely to leave the office altogether? Does the day of the week or time of year make a difference πŸŒžπŸ‚β„οΈ? Are there any key events within your company or in the public eye that would be good to coordinate with πŸŽƒπŸŽ„πŸ’˜? E.g. conducting research in January to coincide with 'Dry January' when researching a product aimed at people interested in low/no-alcohol drink options.

5. Who will conduct interviews

One or two interviewers are sufficient (you don't want to make your tester feel uncomfortable). Everyone involved in your product should take a turn at some point (developers too!)

6. How many interviews

According to Jakob Nielsen, up to 85% of core usability problems can be found by observing just 5 people using your application (see https://www.youtube.com/watch?v=0YL0xoSmyZI and https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/). The reason for this is that the core usability problems are easy to spot for people who are new to your application (and difficult for you to spot, as you are too close to the product to see how real people perceive it for the first time). The 'magic 5' approach suggests that you find out the most from the first person you speak to, a little less from the next person, and so forth.
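The diminishing returns behind the 'magic 5' come from Nielsen's model in the article linked above: the proportion of usability problems found by n testers is 1 βˆ’ (1 βˆ’ L)^n, where L is the share of problems a single tester reveals (roughly 31% in Nielsen's data). A quick sketch of that curve:

```python
# Nielsen's model: the proportion of usability problems found by n testers
# is 1 - (1 - L)^n, where L is the share of problems one tester reveals
# (Nielsen's data puts L at roughly 0.31).
def problems_found(n, L=0.31):
    return 1 - (1 - L) ** n

# Each extra tester uncovers a smaller slice of new problems.
for n in [1, 2, 3, 5, 10, 15]:
    print(f"{n:2d} testers -> {problems_found(n):.0%} of problems found")
```

With L = 0.31, five testers surface roughly 84–85% of the problems, which is where the figure in the paragraph above comes from; doubling to ten testers adds comparatively little.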

7. Recording the findings of your interviews πŸ“ and interpreting them πŸ“Šβ“

Agree before you conduct your interview who is going to take notes and in what format or whether you're going to record the session (the screen or audio). Think about how you will later collate your results to see patterns from them. Once you have all of your findings discuss them with your team to find trends and see how you can make improvements based upon them.
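One lightweight way to collate findings and spot patterns (a sketch only — the session data and issue tags here are hypothetical, not a prescribed format) is to tag each observation during note-taking and then tally how often each issue recurs across sessions:

```python
from collections import Counter

# Hypothetical issue tags noted in each interview session.
sessions = [
    ["nav-confusing", "button-label-unclear"],
    ["nav-confusing", "signup-too-long"],
    ["button-label-unclear", "nav-confusing"],
    ["signup-too-long"],
]

# Tally how many sessions each issue appeared in, most frequent first,
# so recurring problems stand out from one-off comments.
issue_counts = Counter(tag for session in sessions for tag in session)
for issue, count in issue_counts.most_common():
    print(f"{issue}: seen in {count} of {len(sessions)} sessions")
```

Issues near the top of the tally are good candidates to discuss with the team first; one-off entries still deserve a look, as the next section explains, but shouldn't be treated as statistics.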

Testing with small sample sizes like 4 or 5 people is worthwhile when the alternative is often no testing at all. However, you shouldn't rely on statistics from such small groups: if 1 out of 4 people takes issue with something, don't treat it as 25% of users and therefore something that must be changed. Consider what the issue was for that one person and use your own knowledge of the product to decide whether the designs should change in response.

Remember that small, simple amendments are just as good as, if not better than, a total redesign. Just because you have areas to improve on doesn't mean you should start again from scratch. It's also better to make your existing product work well than to add new feature after new feature while the original product isn't yet working well for users.

If you have made changes to the existing product and people are reacting badly to them, remember that aversion to change is normal. Give users a bit of time to adjust to the new designs, or provide assistance mechanisms that point people in the right direction and teach them the new way of doing things.

Be conscious of bias: think about how every aspect of your set-up (time and location, interviewee demographic, script etc.) could influence the responses you gather. Could interviewing loved ones about your idea expose only the positives, because they don't want to hurt you by giving negative feedback?

Resources: