Cultural Content – usability testing how-to
A key part of the toolkit for understanding online audiences...
Good Morning,
Today I’m going to be looking at usability testing.
Usability testing tests how usable your website (or web app) is in the eyes of its users – your audiences.
Why's this important?
I've read a lot of digital strategies in the sector over the last few years. Most are looking at understanding and developing global and diverse audiences online. Digital teams in the sector are asking:
Who are we talking to (and not talking to)?
What kind of experience are different types of people having on our online channels?
How can we best represent our offer to a growing and diverse online audience?
Sounds great!
But to answer those questions you're going to need to get some data. In particular, data on how your web estate currently works for different kinds of people. The insight from this data will help you make robust judgements about what to change.
Enter, usability testing – one of the most robust digital user evaluation methods. In this short post I’ll take you through how it works.
What usability testing normally amounts to
Usability testing generally involves:
Recruiting testers that correspond to your target audience (Jakob Nielsen recommends around five).
Defining a ‘test script’ that asks users to find info, or fulfil tasks, using your website.
Recording whether a test subject has been able to complete the task.
Reviewing which aspects of your website were harder for most users to complete.
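The steps above boil down to a script of tasks plus a record of outcomes per tester. A minimal sketch of that data structure (task names, tester IDs, and outcome labels here are illustrative, not from a real study):

```javascript
// Hypothetical sketch: a test script as data, with one outcome recorded
// per tester per task. Outcomes: "pass", "pass-with-difficulty", "fail".
const script = [
  { id: "find-opening-hours", prompt: "Find today's opening hours." },
  { id: "book-exhibition", prompt: "Book a ticket for the current exhibition." },
];

// results[testerId][taskId] = outcome
const results = {};

function recordResult(testerId, taskId, outcome) {
  if (!results[testerId]) results[testerId] = {};
  results[testerId][taskId] = outcome;
}

// Example session for one tester:
recordResult("tester-1", "find-opening-hours", "pass");
recordResult("tester-1", "book-exhibition", "fail");
```

Keeping results in a structure like this makes the later "which tasks were hardest" review a simple tally rather than a re-read of your notes.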
Why's this useful?
Digital content is a topic on which lots of stakeholders will have opinions about what ‘works’. Usability testing cuts through individual opinion: it provides hard data on what the majority of actual users think. It gives you evidence of what users find (and don’t find) when confronted with your content ‘in the wild’, without the insider context that staff have.
Gaining qualitative feedback
Usability testing can also be a good chance to get more qualitative feedback from users. For example:
About what they like and dislike.
How cultural content intersects with their life (or doesn’t!).
What they think certain navigation terms or button text means.
Anything that’s off-putting or ‘othering’ about the design, terminology, or imagery on the site currently.
Defining your test parameters
It’s helpful, right at the outset, to have a clear sense of what you’re testing and why.
You might have a hunch that part of your website isn’t serving users as well as it might. Or you might want to test multiple parts of your website at the same time. This is often done as part of a website discovery project. In this case testing can provide prioritised data points on what to change on the new website.
TL;DR – Usability testing can be big or small. You can test small tweaks to a single page right up to testing the whole website. You can test with one cohort of five testers, or multiple different test groups.
Recruitment
Recruitment can be one of the most labour-intensive aspects of usability testing, particularly if you want to recruit specific types of audiences.
When formulating who you’re testing with, it’s good to define up front who your target and actual audiences are. We might first recruit ‘warm’ users (who already know of you). You could do this by putting a pop-up survey on your website. The survey can also act as a screener, so that you can recruit a mix of – for example – genders and ages.
The image above shows an example pop up survey for recruitment. We use Survicate for pop-ups like these which are triggered via Google Tag Manager. GTM allows for a lot of customisation:
The number of seconds' delay before the pop-up appears.
Which pages it appears on.
Preventing the same pop-up being shown again to a user who has previously closed it.
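The frequency-capping logic GTM handles can be sketched in plain JavaScript (this is illustrative only – not Survicate's or GTM's actual API; `storage` stands in for the cookie or localStorage a real browser tag would use):

```javascript
// Show the pop-up only on allowed pages, only after a delay, and never
// again once the user has dismissed it. `storage` is a stand-in for a
// cookie or localStorage in a real GTM custom tag.
function shouldShowPopup(storage, path, allowedPaths, delayMs, elapsedMs) {
  if (storage["popupDismissed"] === "true") return false; // previously closed
  if (!allowedPaths.includes(path)) return false;         // wrong page
  return elapsedMs >= delayMs;                            // delay elapsed
}

function dismissPopup(storage) {
  storage["popupDismissed"] = "true";
}
```

In GTM itself these conditions map onto a timer trigger, a page-path condition, and a first-party cookie check, but the underlying logic is the same.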
Testing with specific users
It can be helpful to work with a recruitment agency to help recruit specific audience groups. This is also helpful if you want to hear from people who haven't already heard of your organisation. These audiences are sometimes known as 'cold' users. This contrasts with 'warm' users who know what you have to offer.
By comparing the tests of 'warm' and 'cold' users you can see:
Whether your site is biased in favour of those who are familiar with your offer.
What aspects are confusing or 'othering' to a non-traditional audience.
What elements draw users in, and whether this differs between warm and cold users.
It’s worth setting aside budget for an incentive/reward for testers. You are generally taking an hour of testers' time. Providing some kind of payment is good practice and likely to get you a wider set of participants.
Facilitation
There are a few different ways of facilitating usability testing. I'll discuss remote and in-person testing here.
Remote
My preference is to conduct tests remotely, over Zoom. This saves travel time and costs – nice for you, but also for the participant. Testing remotely means you can speak to audiences across the country (or world). It also means you don't bias your sample towards people who can get to your city centre during the working day. If you want your digital channels to reach a global audience, testing with a global audience makes sense.
For remote sessions, your recruitment process might look a bit like this:
Create a privacy policy for the study, outlining what data you'll be storing, how it will be used, and for how long.
Acquire a list of willing participants (probably from a recruitment agency or pop up survey).
Review that list and pick testers with a mix of demographics.
Send those testers an email explaining:
What the study is about.
What their participation will involve.
More details on the reward/incentive.
Time window for the study.
One of the fiddlier aspects of usability testing can be finding when both you and the tester are free. We use the calendar tool Calendly. This allows testers to browse free calendar slots and book one straight in. Calendly then automatically generates a Zoom link for the session.
During the session itself you’ll probably want to start with some ‘ice breaker’ questions. These questions help you get a sense of who this person is and how culture intersects with their daily life.
After the ice breaker questions you'll want to ask the participant to screenshare. From there you can ask them to bring up your website and begin running through a script of pre-set questions. It’s important to let testers know this is very much a test of the website and not of them. It’s useful to ask them to ‘think aloud’ as they are searching, as this gives you a better sense of what’s giving them pause.
In person
In person testing may be a better bet if you’re testing:
AR or VR.
In-gallery digital.
Content that isn’t easily shareable (for example because it’s not yet live to the public).
In these cases you’ll want to hire or book a dedicated room and be able to reimburse travel costs (up to a fixed pre-agreed limit).
Note taking, coding and analysis
It's possible to record an in-person usability session, but it's more difficult to get a view of the tester's expression and the content they’re looking at in the same pane. This is something Zoom does very easily.
Work out at the start of the process whether you want to use recorded clips in your findings. If you are using user recordings, you'll want to be explicit about that in the privacy policy, and reiterate it at the start of the study when you ask for the tester's permission to record.
One of the simplest ways of documenting the outcomes of usability testing is in a grid. Below is an example. In this case test subjects form the columns along the top and the tasks form the rows. A traffic light system has been used to distinguish between (1) pass, (2) pass with difficulties, and (3) fail.
A scoreboard like this gives you a sense of where the priority usability barriers on your site are.
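If you've captured the grid as data, ranking tasks by average score surfaces the priority barriers automatically. A sketch (task and tester names are made up; scores follow the traffic-light scale above, where higher means worse):

```javascript
// grid[task][tester] = score: 1 = pass, 2 = pass with difficulties, 3 = fail.
const grid = {
  "find-opening-hours": { t1: 1, t2: 1, t3: 2, t4: 1, t5: 1 },
  "book-ticket":        { t1: 3, t2: 2, t3: 3, t4: 3, t5: 2 },
  "find-access-info":   { t1: 2, t2: 3, t3: 1, t4: 2, t5: 1 },
};

// Rank tasks by mean score, worst first – the top of the list is
// where the biggest usability barriers are.
const ranked = Object.entries(grid)
  .map(([task, scores]) => {
    const values = Object.values(scores);
    const mean = values.reduce((a, b) => a + b, 0) / values.length;
    return { task, mean };
  })
  .sort((a, b) => b.mean - a.mean);
```

With this toy data, `ranked[0]` would be the ticket-booking task – the one most testers struggled with.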
The full report can explain the scores. For example:
What problems testers were encountering.
What testers were expecting to see.
Whether there were significant differences between testers' responses:
Were more tech savvy users more comfortable with certain UI elements compared to others?
Were more familiar audiences better able to navigate the site?
A purist approach to usability testing would run separate tests for most audience variables. For example, you might recruit:
One cohort of five users for familiar 'warm' audiences.
Another for non-familiar 'cold' users.
A third for those with low digital literacy.
Another for mobile testers.
In reality, most culture sector budgets don’t allow for more than four sets of tests for any particular project or web application. So you'll want to think about which variables are most important for you to test.
We tend to run the recordings from tests through an automated transcription process. This is really useful for evidencing why certain tasks were problematic for users. It means that if you remember a helpful quote, you can search for that particular keyword in the text and play back that section.
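That keyword search is trivial once the transcript is segmented with timestamps. A sketch (the transcript format here is made up, though most transcription tools export something similar):

```javascript
// Timestamped transcript segments – illustrative data, not a real export.
const transcript = [
  { start: "00:04:12", text: "I expected the ticket button to be at the top." },
  { start: "00:11:30", text: "This menu label is confusing to me." },
  { start: "00:18:05", text: "Oh, the ticket link was hidden in the footer." },
];

// Return every segment mentioning the keyword (case-insensitive), so you
// can jump to those timestamps in the recording.
function findMentions(segments, keyword) {
  const needle = keyword.toLowerCase();
  return segments.filter((s) => s.text.toLowerCase().includes(needle));
}
```

For example, `findMentions(transcript, "ticket")` returns the two segments where the tester mentioned tickets, along with their timestamps.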
Coding responses
If you’re running lots of usability studies, it can be useful to code interview comments against your research questions. Tools like NVivo do this for you. For a few years I’ve been using a Google Sheets formula to do this that I borrowed from Meghan Casey’s Content Strategy Toolkit book.
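For illustration, here's my own rough sketch of that keyword-based coding idea in JavaScript (this is not the formula from the book, and the codebook keywords are invented): each comment gets tagged with every research-question code whose keywords it mentions.

```javascript
// Hypothetical codebook mapping research-question codes to keywords.
const codebook = {
  navigation: ["menu", "find", "search"],
  terminology: ["word", "label", "means"],
};

// Tag a comment with every code whose keywords appear in it.
function codeComment(comment, codes) {
  const text = comment.toLowerCase();
  return Object.keys(codes).filter((code) =>
    codes[code].some((kw) => text.includes(kw))
  );
}
```

So `codeComment("I couldn't find the menu", codebook)` tags that comment as a navigation issue. Counting tags across all comments then shows which research questions the feedback clusters around.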
That’s it for this week. As ever, get in touch if you have questions or comments - or a project you think the cultural content community would find interesting, I’d love to hear it! I’m georgina@onefurther.com