Archive — User research
A write-up of a brilliant talk Jo Arthur gave at this month’s UX Glasgow event, where she outlined how the National Lottery Heritage Fund analyse user research remotely. I found it super useful, not least because this is exactly what we need to do at my work right now, and I have taken a lot of inspiration from this. Thanks Jo!
This case study would be seen by some as a reason not to understand users at all. “If I asked users what they wanted, they’d say faster horses. Hurr hurr.”
In fact, like the idea of faster horses, it demonstrates how important it is to understand your users in the right way, not just pay lip service to doing so.
Badly-designed user research leads respondents to certain responses. This is often unintentional — avoiding bias is difficult.
Sometimes it’s intentional. Perhaps the survey designer has a pet idea. They might (subconsciously) skew the questions in a certain way to get the answers they want.
A classic example is asking someone if they would like a certain feature to be added to a product. The answer is almost always: “Er, yes, I suppose so.” People think they like choice, so more features sounds good. But in reality, too many features — or too much choice — leads to choice paralysis and greater frustration.
The lesson isn’t to ignore user research. But be aware of your biases. Be wary of surveys as a methodology. And don’t simply ask people what they want. Instead, understand what they do, and why they do it.
This piece really challenged my thinking.
In my job I am currently trying to figure out ways to make quality user research scale across the organisation in a sustainable manner. It’s like one of those triangular diagrams that outline three goals and tell you: “you can have two of these things”.
Working in such a large organisation, central resources inevitably have their limits. My desire is to empower others to carry out their own user research. Our role becomes an education role. How we do that remains an unsolved problem. Various attempts have yielded variable results.
But Saswati Saha Mitra, reflecting on her experiences of trying to democratise user research, suggests that it is a bad idea.
A researcher is a dynamic thinker who has to adapt their methods and questions based on who is in front of them, how much they have already learnt and what new areas could be probed on. This did not happen. We got a lot of verbatim and videos which after a point became repetitive and did not add more to the analysis. This then led to analysis paralysis.
I’m inclined to continue trying to empower others to conduct user research. But this article is food for thought.
The coronavirus outbreak has posed massive challenges for everyone in society. For practitioners of human-centred approaches to design, where face-to-face interaction is often so important to enhancing our understanding, our current requirement to maintain social distancing creates obvious barriers.
However, this doesn’t mean our work to ensure we’re meeting people’s needs has to stop. In fact, there are some perhaps surprising advantages to working remotely as a user experience practitioner.
Over on my team’s blog, I have outlined some of what I’ve learned about remote user research over the past month or so.
A short list of surprisingly common things people ask users to do during a usability test — and what you should do instead.
Not mentioned in this list is the idea that you can ask people just to tell you what they think of the website generally.
The golden rule is: “Try to simulate reality”.
This is possibly the best explanation I’ve seen of how to conduct user research interviews. This framework could be given to almost everyone, and they would be on their way to conducting good interviews.
It includes a very useful diagram outlining how to structure the interview — when to be open, and when to narrow down.
A useful guide for those of us trying to push user research forward in our organisations.
The session will outline the comprehensive programme of user research the University of Edinburgh’s User Experience Service conducted on behalf of the Learn Foundations project. It will show how, as the project went along, we adopted a service design approach in order to better meet the needs of both students and staff.
Note — 2019-11-14
Duncan’s talk will take us through how the University of Edinburgh’s User Experience Service has undertaken a comprehensive programme of user research supporting a project aimed at improving students’ experience accessing course materials digitally. Find out how they developed a programme of multiple user research methods to understand what students really need.
Time: Wednesday 4 December
Venue: Amazon Development Centre, 2–4 Waterloo Place, Edinburgh
Maybe see you there?
My final blog post about our user research for the Learn Foundations project, outlining how our service design approach has left the University of Edinburgh in a stronger position to understand how we can really improve services for students.
Why do unethical products keep being designed? According to Ovetta Sampson, it’s because of an unnecessary disconnect between user researchers and data scientists.
…it’s easier to say, “I’m just the engineer” or ”I’m just the numbers guy.” It allows us to divorce ourselves from the responsibility of what that data can do to people.
As part of our comprehensive programme of user research in support of the Learn Foundations project, the User Experience Service has conducted contextual enquiry to better understand the contexts and needs of staff members working with Learn. This blog post summarises our findings.
My colleague Nicola Dobiecka wrote this brilliant blog post about how designers need to take different approaches depending on the level they are working at. It builds on Jared Spool’s analogy with Charles and Ray Eames’ classic film Powers of Ten.
Essentially, colleagues at different levels of the organisation have different perspectives. All valid, but all require different skills and processes.
Here’s what happened when we ran usability testing with staff members using Learn for the first time. From four videos we found 20 usability issues, and a wide variety of strategies to complete the same basic tasks.
We had developed an information architecture and tree tests as part of our programme of user research for Learn Foundations. The next step was to use first click tests to pit the new template against existing courses.
The latest post in my series for the Website and Communications blog about our user research work around the University of Edinburgh’s virtual learning environment.
Slides from my Edinburgh UX meetup talk on Monday 2 September 2019, about the user research we have been conducting around the needs of students and staff working with course materials digitally at the University of Edinburgh. See the more detailed blog posts about this project over at the Website and Communications team blog.
This year I have had the fantastic opportunity to study with the Service Design Academy. This intensive course in service design has given me hands-on experience in new techniques. This blog post summarises my experience.
Note — 2019-09-02
I’m doing a couple of talks this week. They are both about the user research we’ve been doing for the Learn Foundations project.
This evening I will be presenting at the Edinburgh UX monthly meetup. It’s a friendly meetup and it’s free, so do come along if you’re interested.
Then on Wednesday I’ll be presenting with my colleagues Karen Howie and Paul Smyth at the Association for Learning Technology (ALT) Annual Conference.
After completing the top tasks survey and the card sort as part of the Learn Foundations project, our next step was to create a prototype information architecture and test it.
It’s always great to see advice from Indi Young. Here are tips on how to better identify and synthesise patterns in qualitative data.
…when you’re looking at data, don’t group things together by noun. Group them together by verb. I’ve done a lot of work with the healthcare industry, and one thing I often see research teams do is bring together insights that are all about a noun — here is all of the data that we got about how people feel about the doctors. But when you do that the intent behind what people are really saying ends up all over the place.
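A minimal sketch of the difference, using hypothetical insight data (the tags and quotes are invented for illustration): each insight is labelled with both the noun it mentions and the verb behind it, and grouping on each key shows why the verb keeps related intents together.

```python
from collections import defaultdict

# Hypothetical interview insights, each tagged with the action (verb)
# the participant described and the thing (noun) involved.
insights = [
    {"verb": "deciding", "noun": "doctor",   "quote": "I compared reviews before choosing."},
    {"verb": "waiting",  "noun": "doctor",   "quote": "The appointment ran 40 minutes late."},
    {"verb": "deciding", "noun": "pharmacy", "quote": "I picked the one closest to work."},
]

def group_by(items, key):
    """Group insight quotes under the value of the given tag."""
    groups = defaultdict(list)
    for item in items:
        groups[item[key]].append(item["quote"])
    return dict(groups)

# Grouping by noun lumps unrelated intents under "doctor";
# grouping by verb keeps both "deciding" moments side by side.
by_noun = group_by(insights, "noun")
by_verb = group_by(insights, "verb")
```

Here `by_noun["doctor"]` mixes a choosing story with a waiting story, while `by_verb["deciding"]` surfaces the shared intent across two different nouns.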
As part of our programme of user research in support of the Learn Foundations project, we have carried out a top tasks survey to understand what students need when accessing course materials online.
What we found was that students value three items much more than everything else. Those items are all to do with lectures.
See the full post to find out more.
As part of the Learn Foundations project, we have carried out a programme of quantitative research to ensure a user-centred approach to solution development.
The Learn Foundations project team wanted to develop a new template using a user-centred approach. This template would be designed to introduce more consistency between different courses in Learn. But it also had to support a diverse variety of needs across different courses, supporting different schools, colleges and teaching needs. It also had to be developed quickly.
We took inspiration from a classic user experience diagram to ensure this new template could be built on firm foundations.
This post introduces the steps we took. Forthcoming posts will describe each step in more detail and some of our key findings.
Summarising the key findings from a set of user interviews I conducted with students on their needs around accessing course materials digitally. Just one of the strands of the Learn Foundations project, which I still have much more to write about.
After analysing and synthesising the insights gathered through the interviews, we built up a picture of how and why students’ experience with Learn varies throughout the year as students attempt to complete different tasks. This is presented as a semester in the life of students using Learn.
Since September, my main focus at work has been to carry out a comprehensive programme of user research for a project aiming to improve services surrounding Blackboard Learn, the University of Edinburgh’s main virtual learning environment.
I wrote this blog post providing a high-level overview of all the work that’s taken place this academic year. More detailed blog posts about each of the strands of research will come in due course.
This has been a brilliant project to be involved in. We’ve been given a lot of time and freedom to do a large amount of research in support of one of the university’s most important digital services, used daily by most of our students, and many staff members.
We have made some really important discoveries. This work is ensuring that improvements are based on a strong understanding of users’ behaviour and needs when working with course materials digitally.
Check out this video, where I describe the work and some of the findings in a bit more detail, and keep an eye out for the forthcoming blog posts.
Personas are one of the most popular techniques in the user experience toolkit, but they also remain among the most controversial. It is often still unclear to some what value personas can bring, and how to avoid the pitfalls of bad personas.
This article offers one of the clearest explanations I’ve seen of how to make good personas. It is lengthy, but a must-read if you make personas and want to make them work.
This article is particularly useful at explaining why obsessing over demographics is bad, and why you should instead focus on “thinking styles”.
Statements-of-fact, preferences, and demographics frequently serve as distracting barriers. They kick off all kinds of subconscious reactions in team members’ minds.
Dial in the feedback — Gregg Bernstein
Keeping it weird
Or, more accurately, stopping it being weird. This refers to the problem that most psychology research is conducted on people that are western, educated, industrialized, rich and democratic.
Tim Kadlec considers the implication this has on our understanding of how people use the web.
We’ve known for a while that the worldwide web was becoming increasingly that: worldwide. As we try to reach people in different parts of the globe with very different daily realities, we have to be willing to rethink our assumptions. We have to be willing to revisit our research and findings with fresh eyes so that we can see what holds true, what doesn’t, and where.
The hunt for missing expectations
Jared Spool tells the story of a bookkeeper who became frustrated using Google Sheets because it didn’t have a double underline function.
To keep [usability] testing simple and under control, we often define the outcomes we want. For example, in testing Google Spreadsheet, we might have a profit and loss statement we’d want participants to make. To make it clear what we were expecting, we might show the final report we’d like them to make.
Since we never thought about the importance of double underlines, our sample final report wouldn’t have them. Our participant, wanting to do what we’ve asked of her, would unlikely add double underlines in. Our bias is reflected in the test results and we won’t uncover the missing expectation.
He suggests interview-based task design as a way of finding these missing expectations. Start a session with an interview to discover these expectations. Then construct a usability test task based on that.
I recently ran hybrid interviews and usability tests. That was for expediency. I didn’t base tasks on what I’d found in the interview. But it’s good to know I wasn’t completely barking up the wrong tree. I plan to use this approach in future.
Keeping yourself out of the story: Controlling experimenter effects
How do you stop yourself, as a user researcher, biasing the results? An important topic for user researchers to consider. (It’s also an excellent excuse to re-tell the story about Clever Hans, the horse who everyone thought could count, until they realised he was simply reacting to subtle, unintentional cues from his trainer.)
I recently undertook some usability testing, where I was asking people to complete tasks that I didn’t know how to complete myself. This meant I was less likely to bias the participant. But it was a strange experience for me, and it made me less certain about how to conduct the test.
Understanding user behaviour for online learning recruitment
The University of Edinburgh Website and Communications team has recently been heavily involved in a pilot project to improve the journey of prospective online learning students, from investigation to offer. Read about our user research approach and how we ensured project outputs met the needs of users.
How to get better answers from asking better questions
Chris How’s tips on doing better interviews. This is essentially a text version of his session at UX Scotland, which I wrote about on the University of Edinburgh Website Programme blog.
User research myth busting
Bringing focus to our findings: continued user research for the API Service
This is the final blog post in my short series about the user research I led for the API Service at the University of Edinburgh.
This post covers the second half of the research, where we brought focus to the detailed picture developed in the first phase, and began to prioritise the issues to help the API Service team direct their ongoing work.
The information architecture of libraries part 1: Dewey Decimal Classification
This article is a bit of a sales pitch, but I enjoyed this research into how intuitive the Dewey Decimal Classification is.
When user-centred design of public services is a risk
An exploration of the risks surrounding undertaking user-centred design. For me, the lesson is to put the same sort of effort into designing your research and your interactions with your users as you would into the product your research is for.