‘What’ means nothing unless you know ‘why’

[Image: two speech bubbles, one containing '?' and the other containing '!']

Tricia Wang wrote about why big data needs what she calls thick data. Thick data is the rich, qualitative insight that comes from methods like ethnography.

She explains why your organisation really needs to value qualitative data more than it probably does.

While conducting ethnographic research at Nokia in 2009, I discovered something that challenged Nokia’s existing business model…

I reported my findings and recommendations to headquarters. But Nokia did not know what to do with my findings. They said my sample size of 100 was weak and small compared to their sample size of several million data points.

I have also felt the pain of stakeholders not trusting qualitative information on the basis of a small sample size. This is despite the fact that qualitative approaches tend to yield the most useful insights.

The obsession with large sample sizes and big data often feels more like an outsourcing of decision-making to spreadsheets.

Or, in the words of Clive Thompson:

By taking human decision-making out of the equation, we’re slowly stripping away deliberation — moments where we reflect on the morality of our actions.

Some are more comfortable looking at numbers than thinking about people.

(I’m tempted to draw a comparison with Brexiters vaguely claiming that “technology” will sort out the Irish border issue. It stops them having to talk about the people that will be caught up in it.)

At best, some people draw a distinction between quantitative data telling you ‘what’ and qualitative data telling you ‘why’. But often you need to know some whys before you know which whats to look for.

Thinking about one of my most recent projects, Learn Foundations, we had great success using a mixture of methods. We were asked to conduct some kind of en masse usability testing. As a proxy for the impossible, we held a series of five quantitative studies, which attracted almost 5,000 responses from students and staff.

Often it’s the response rate that gets the attention, even though the greatest insights came from the dozen interviews. The quantitative studies were a vital piece of the puzzle. But we wouldn’t have been able to interpret them properly if it wasn’t for the interviews and usability testing we were also doing.

To take one example, we were stumped by a troubling set of ‘findings’ from the quantitative studies. In a survey completed by over 1,000 respondents, finding past exam papers had emerged as one of students’ top tasks when accessing course materials digitally.

Finding the right place for this in the information architecture was trickier. There was no strong consensus in the card sort completed by almost 800 people. Once we’d put past papers somewhere in the structure that seemed sensible, a first click test completed by over 1,000 people suggested that people couldn’t work out where to find them.

By luck, at roughly the same time, I was conducting qualitative one-to-one usability testing. This included a task to find past papers. There, each and every student told me they wouldn’t use Learn to find past papers. They Google it instead. That takes them to a university webpage, which in turn takes them to a database of past papers hosted in a separate system. This works for our users. They don’t think about looking for it in Learn.

(We later verified this by looking at more quantitative data from Google Analytics, such is the reluctance around relying on a small number of qualitative interactions.)

Quantitative studies completed by thousands had sent us barking up the wrong tree. Watching just four people all say the same thing sorted it out.

It’s tempting to rely on the perceived security of high sample sizes. But numbers mean nothing if you don’t understand the people behind them.
