Archive — Usability testing
A short list of surprisingly common things people ask users to do during a usability test — and what you should do instead.
Not mentioned in this list is the idea that you can ask people just to tell you what they think of the website generally.
The golden rule is: “Try to simulate reality”.
Reasons why you shouldn’t simply ask users to choose which design they prefer.
It turns out people aren’t good at answering this kind of question. People don’t know why, or they don’t care enough to answer, or they may not want to tell you. When asked for an opinion, most people will form one on the spot. Such opinions aren’t carefully considered or deeply held. It’s not that UX researchers don’t care what people like: it’s just risky to make important design decisions based on fickle opinions.
User experience isn’t about discovering what people think they want. It’s about finding out what they need.
Here’s what happened when we ran usability testing with staff members using Learn for the first time. From four videos we found 20 usability issues, and a wide variety of strategies to complete the same basic tasks.
My awesome colleague Lauren Tormey wrote this blog post about a brilliant project she’s been involved in. She has been collaborating with our Information Services Helpline to reduce unnecessary support calls by iteratively improving content with a regular cycle of usability testing.
Over two summers, we worked to improve content related to getting a student ID card. This was another case of turning long pages with giant paragraphs into concise step-by-step pages.
From July to September 2017, the IS Helpline received 433 enquiries related to student cards. For the same period in 2018, they received 224, so the figure nearly halved. I repeat: halved.
My colleague Alex Burford from the University of Edinburgh School of Informatics has written this great blog post about some usability testing we have conducted in support of the Learn Foundations project.
I thoroughly enjoyed working with Duncan Stephen on this mini project. The feedback was informative, encouraging, and a call to action. I’m looking forward to embedding similar practice across the School for alternative platforms for content delivery.
Each month we are working with a different school to conduct usability testing in Learn, the virtual learning environment, to inform improvements to the Learn service.
This is just one strand of a huge amount of user research I’ve been carrying out for the Learn Foundations project. It’s been a fascinating and very enjoyable project to work on. I’ve been pretty lax about writing it up so far, but I’ll be posting much more about it soon.
The hunt for missing expectations
Jared Spool tells the story of a bookkeeper who became frustrated using Google Sheets because it didn’t have a double underline function.
To keep [usability] testing simple and under control, we often define the outcomes we want. For example, in testing Google Spreadsheet, we might have a profit and loss statement we’d want participants to make. To make it clear what we were expecting, we might show the final report we’d like them to make.
Since we never thought about the importance of double underlines, our sample final report wouldn’t have them. Our participant, wanting to do what we’ve asked of her, would be unlikely to add double underlines in. Our bias is reflected in the test results and we won’t uncover the missing expectation.
He suggests interview-based task design as a way of finding these missing expectations. Start a session with an interview to discover these expectations. Then construct a usability test task based on that.
I recently ran hybrid interviews and usability tests. That was for expediency. I didn’t base tasks on what I’d found in the interview. But it’s good to know I wasn’t completely barking up the wrong tree. I plan to use this approach in future.
Keeping yourself out of the story: Controlling experimenter effects
How do you stop yourself, as a user researcher, from biasing the results? It’s an important topic for user researchers to consider. (It’s also an excellent excuse to re-tell the story of Clever Hans, the horse everyone thought could count, until they realised he was simply reacting to subtle, unintentional cues from his trainer.)
I recently undertook some usability testing, where I was asking people to complete tasks that I didn’t know how to complete myself. This meant I was less likely to bias the participant. But it was a strange experience for me, and it made me less certain about how to conduct the test.