Describing personas
Indi Young, Indi’s Essays

Personas are one of the most popular techniques in the user experience toolkit, but they remain among the most controversial. It is often still unclear what value personas can bring, and how to avoid the pitfalls of bad personas.

This article offers one of the clearest explanations I’ve seen of how to make good personas. It is lengthy, but a must-read if you make personas and want them to work.

It is particularly useful in explaining why obsessing over demographics is bad, and why you should focus instead on “thinking styles”.

> Statements-of-fact, preferences, and demographics frequently serve as distracting barriers. They kick off all kinds of subconscious reactions in team members’ minds.

Improving student experiences in Learn: usability testing showcase and workshop
Informatics Learning Technology Service

[Image: Prioritised usability issues]

My colleague Alex Burford from the University of Edinburgh School of Informatics has written this great blog post about some usability testing we have conducted in support of the Learn Foundations project.

> I thoroughly enjoyed working with [Duncan Stephen](https://www.ed.ac.uk/profile/duncan-stephen) on this mini project. The feedback was informative, encouraging, and a call to action. I’m looking forward to embedding similar practice across the School for alternative platforms for content delivery.

[You can read my own reflections on this work at the Website and Communications team blog](https://blogs.ed.ac.uk/website-communications/come-to-the-next-learn-usability-testing-showcase-on-29-march/).

Each month we are working with a different school to conduct usability testing in Learn, the virtual learning environment, to inform improvements to the Learn service.

This is just one strand of a huge amount of user research I’ve been carrying out for the [Learn Foundations](https://blogs.ed.ac.uk/website-communications/tag/learn-foundations/) project. It’s been a fascinating and very enjoyable project to work on. I’ve been pretty lax about writing it up so far, but I’ll be posting much more about it soon.

Keeping it weird

Or, more accurately, stopping it being weird. This refers to the problem that most psychology research is conducted on people who are WEIRD: Western, educated, industrialized, rich and democratic.

Tim Kadlec considers the implications this has for our understanding of how people use the web.

> We’ve known for a while that the worldwide web was becoming increasingly that: worldwide. As we try to reach people in different parts of the globe with very different daily realities, we have to be willing to rethink our assumptions. We have to be willing to revisit our research and findings with fresh eyes so that we can see what holds true, what doesn’t, and where.

The hunt for missing expectations

Jared Spool tells the story of a bookkeeper who became frustrated using Google Sheets because it didn’t have a double underline function.

> To keep [usability] testing simple and under control, we often define the outcomes we want. For example, in testing Google Spreadsheet, we might have a profit and loss statement we’d want participants to make. To make it clear what we were expecting, we might show the final report we’d like them to make.
>
> Since we never thought about the importance of double underlines, our sample final report wouldn’t have them. Our participant, wanting to do what we’ve asked of her, would unlikely add double underlines in. Our bias is reflected in the test results and we won’t uncover the missing expectation.

He suggests interview-based task design as a way of finding these missing expectations: start a session with an interview to discover the participant’s expectations, then construct a usability test task based on what you learn.

I recently ran hybrid interviews and usability tests, but that was for expediency: I didn’t base the tasks on what I’d found in the interviews. Still, it’s good to know I wasn’t completely barking up the wrong tree. I plan to use this approach in future.

Keeping yourself out of the story: Controlling experimenter effects

How do you stop yourself, as a user researcher, biasing the results? An important topic for user researchers to consider. (It’s also an excellent excuse to re-tell the story about Clever Hans, the horse who everyone thought could count, until they realised he was simply reacting to subtle, unintentional cues from his trainer.)

I recently undertook some usability testing where I asked people to complete tasks I didn’t know how to complete myself. This made me less likely to bias the participants. But it was a strange experience, and it left me less certain about how to conduct the test.