My name is Ben, and I’m a serial guerrilla researcher.
I’ve recently been part of a healthy ongoing discussion about whether UX research should be lightweight and rapid or more thorough and methodologically/statistically sound. On reflection, my own approach has become heavily skewed towards the rapid, lightweight, guerrilla end of the scale, so much so that my more formal research skills have become very rusty indeed.
What I find interesting about the whole discussion is that at university I studied a Bachelor of Social Science with a major in Applied Policy & Social Research, in which research techniques and statistical analysis were core components. Yet as a practitioner working in a (semi-)related field, I’ve pretty much forgotten how to do a t-test, and this hasn’t created any problems.
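(As a refresher for myself as much as anyone: the mechanics really are only a few lines these days. Here’s a rough sketch of Welch’s t-statistic for two small groups, using just Python’s standard library; the task-completion times are made up for illustration.)

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    n_a, n_b = len(sample_a), len(sample_b)
    # variance() returns the sample variance (n - 1 denominator)
    se = math.sqrt(variance(sample_a) / n_a + variance(sample_b) / n_b)
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical task-completion times (seconds) for two design variants
design_a = [12, 15, 14, 10, 13]
design_b = [18, 21, 16, 20, 17]
print(round(welch_t(design_a, design_b), 2))  # roughly -4.43
```

In practice you’d hand the raw samples to a stats package to get the p-value as well, but the point is that the calculation itself was never the hard part; recruiting enough participants is.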
What this highlights to me is the difference between UX Design and HCI, or to put it another way, the gap between practitioners and academics. This isn’t meant as a rant for or against either; it’s just a personal reflection on how I see the UX Design field evolving to become more lightweight and lean.
What constitutes UX research, and how it is structured, depends greatly on what the project is and what you are trying to achieve. There is a multitude of different tools, techniques and approaches out there. For instance, evaluating a design by conducting usability testing to identify problems is a significantly different scenario from conducting field interviews to discover user needs in the first place, or again from gathering statistically sound data to make a business case for how you should proceed with a particular product or feature.
When planning research the first thing you should start with is asking: What do you want to know? Why do you want to know it? What is the project goal?
After that you need to ask questions like: Are you discovering or evaluating? What tasks/questions should be included within interview sessions? How many participants do you need? What data do you need to collect? This helps guide what type of methodology is suitable.
Then of course you need to consider: What are your time and budget constraints? How closely is the team collaborating? How much documentation do you need to produce? Are your findings going to be challenged?
A lot of UXers put an emphasis on statistical analysis. I find this interesting, mainly because I’ve rarely had access to the number of participants needed to make it possible. Looking back at my 10-plus years of personal experience, I have rarely engaged in anything other than qualitative research, with just some basic stats thrown in. It’s always good to triangulate your findings from different data sources, but it’s not always feasible.
I’ve found this trend becoming stronger over time. On most of my recent projects, I’ve worried less about a formal methodology and focussed more on just having an open and engaging conversation with the research subjects.
I see this as both a good and a bad habit. After some thought, here’s my justification/list of excuses for why this has become the case:
- Time. Traditionally I’ve had to start from the position of negotiating hard for any kind of user research to be factored into a project in the first place, let alone being able to take the time to cover more participants. Through practice I’ve learnt to become more and more lean (in both participants and outputs) so as not to hold up development more than necessary. The biggest challenge I’ve usually found is moving away from an older waterfall mentality where UX research needs to be thorough and can become a hold-up in the process. To me, good design research can be done in a way that is iterative and just-in-time within development. This approach wouldn’t be considered sound academic methodology, but it is highly effective for software development.
- Budget. As a consultant, I’ve found it hard to convince clients to spend their precious money on comprehensive research, particularly when you know a lot of research can be done more efficiently if it’s not going to be submitted for peer review.
- Archaic technology. Unfortunately, I’ve been conditioned not to expect that tracking tags will be implemented properly enough to produce accurate traffic reports. Thankfully this is changing, but there are still too many legacy systems out there that can’t give meaningful analytics on user behaviour.
- I want to know why, not just what. I’ve always started with the qualitative goal of finding out why people behave the way they do. Probing further into how many people behave that way doesn’t usually happen, as understanding why proves to be enough of an insight.
- Usability issues don’t need to be statistically valid to exist. Beyond confirming that someone isn’t an outlier, if 2-3 people have a problem with something, then it exists as a problem. Usually that is all I need to find out about a design: what problems exist with it. When you’re busy iterating a design, the ball is in your court as a designer to find a solution.
- A background in websites, not software or physical devices. The level of performance fine-tuning that statistical analysis enables has traditionally rarely been a priority on the web. Unless you work on a site with traffic as high as Google, Amazon or the BBC, the return on investment for more thorough research hasn’t been there. The traditional focus has been more on issues such as implementing global style guides and redesigns, rather than refining them. But then maybe I just haven’t stayed at the one place long enough to get to that point. By contrast, if you’re about to start mass-producing a physical gadget, it’s worth taking the time to fine-tune it a little.
- My own laziness. Producing results that will hold up to academic rigour is time-consuming, hard work. For whatever reason, I have been able to get away without putting in the hard yards to cover the participants and do the statistical analysis. While I have no doubt I could improve my research techniques (and have been trying to of late), my lean approach is also a product of my experience. Chicken or egg? Maybe both.
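On the point about 2-3 people being enough to establish a problem, there’s actually some arithmetic behind it. Nielsen and Landauer’s oft-cited model puts the chance of spotting a given usability problem with n test users at 1 − (1 − p)^n, where p is the probability that any single user hits it. A quick sketch (their average figure of p = 0.31 is an assumption borrowed from that work, not my own data):

```python
def detection_probability(p, n):
    """Chance that at least one of n test users hits a problem
    that any single user hits with probability p."""
    return 1 - (1 - p) ** n

# With Nielsen & Landauer's average p of 0.31, a handful of users
# already catches most problems of that commonness:
for n in (1, 3, 5):
    print(n, round(detection_probability(0.31, n), 2))
```

Five users catch roughly 85% of problems that common, which is why a small round of testing keeps surfacing real issues: if two or three of five participants trip on something, it’s common enough to be worth fixing regardless of statistical significance.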
As part of this conversation, a fellow ThoughtWorker shared this design spectrum. It’s a great way to articulate the different approaches that can be taken.
I guess I tend to dwell at the intuition-driven end. It’s good to get some perspective.