Kelly Moran
[Note from Chris: This week I’ve invited a post from Kelly Moran, a longtime colleague and exceptionally thoughtful and experienced qualitative researcher and anthropologist. Many of us are familiar with the qual/quant "sandwich" that alternates research approaches. Kelly shares deeper thoughts about how to get the sequence right, and I recommend these considerations to all UX researchers.]
In the data-obsessed world we live in, companies are constantly reaching for numbers as the foundation for improving the performance of their products and services. A commitment to gathering and analyzing user data is a critical component for building an understanding of how, and whether, we’re meeting customer needs. Harnessing this information allows teams to move beyond assumptions, identify pain points, and strategically design solutions that enhance usability, satisfaction, and ultimately, business outcomes. We know in UX research that numbers are often only part of the story.
My name is Kelly Moran, and I’ve been applying research to business and consumer problems since the early 2000s. I have an MS in Applied Anthropology and have worked both in consulting and in-house, most recently on Google Search. I’m currently leading Experience Research at Geniant, a global experience consulting company.
Mixing Methods
Combining quantitative (quant) and qualitative (qual) research methods is a hot topic right now, and for good reason. A common question is which one should come first. I often hear the perspective that starting with quant will bring up questions that the team can then dig into with qual research for a deeper understanding of the “why.” This is a great approach. By identifying what is happening through quantitative data (the numbers and counts), teams can then use qualitative methods (the rich, descriptive, word-based data) to explore the motivations, context, and underlying reasons behind those metrics. It allows for a targeted investigation into the phenomena the numbers highlight. But sometimes flipping that script can ensure you’re measuring the right things in the first place, or perhaps more accurately, that the quantitative measurements you are taking truly reflect the real-world behaviors and experiences you intend to capture.
This alternative approach (starting with qualitative research) can be incredibly powerful for checking your underlying assumptions and refining your data collection instruments before a large-scale quantitative study is even launched. By first engaging with a smaller group through observation, interviews, or other qualitative methods, you gain a deep, nuanced understanding of the landscape. This initial insight can reveal blind spots in your current metrics or suggest entirely new categories of data that should be tracked. In essence, starting with qual lets you refine the questions you ask in your quant research, ensuring the resulting data is both accurate and meaningful.
I have an illustrative example from some consulting work that demonstrates the value of a qualitative-first approach. I’ll start by laying out the problem as the business saw it, the response research took, what we learned, and the larger change we recommended in how the business approached understanding and building for their own team.
Responding to Poor Metrics
A company that services loans wanted to improve the metrics coming in from their call center. Borrowers could reach a team of agents by phone with questions or for general service on their loans, and at the end of each call, agents were required to enter a reason for the call into their call management system, choosing from a list of about half a dozen options. The reason “make a payment” was selected over 80% of the time. And this made sense. Paying a loan is important, and hopefully every borrower is making frequent and timely payments. But the company had put a lot of work into making it easy for people to pay their loans online. Loan payment should have been a straightforward process, yet according to the metrics, call center agents were spending a lot of time processing payments for customers over the phone. Relieving agents of this load would decrease the wait time for borrowers calling in about other matters.
So they kept working on improving the online payment process.
And they kept seeing “make a payment” as the dominant reason borrowers called in.
They began to wonder if they needed to work this from another angle. Shifting your mindset is a great way to approach a sticky problem. They decided making the software easier on the agent side might at least help agents get through those payments faster.
They engaged with my team to observe agents in the call center so we could make context-informed design recommendations for the agent-facing software.
What we learned was surprising
We ended up listening in on 98 calls over a few days of observation with several agents. We did see opportunities to improve the design of the agent software, but we also learned something else.
In our observations, the frequency of calling in to make a payment was much lower than 80%; in fact, it was under 50%. Instead, many borrowers were calling to check on a payment. These customers had used the website to make a payment, but the system had a delay in showing funds applied (a banking issue we recommended they address). So borrowers were calling to be sure their payment had gone through, or at least to ask that it not be counted as late, since the delay was out of their control.
Agents would assure the customer that all was well, and at the end of the call, they’d check the “Make a Payment” box because “Reassure the Borrower” was not an option. Nothing else on the reasons list was close enough.
The data was wrong because the selection options did not accurately reflect the real world.
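To make that mechanism concrete, here is a minimal sketch in Python. The counts are hypothetical, invented only to mirror the figures in this story, but the logic is the same: any call whose true reason is missing from the dropdown gets logged under the nearest available option, and that option’s share inflates.

```python
from collections import Counter

# Hypothetical true reasons behind 100 calls. These counts are invented
# to mirror the story above, not drawn from the client's actual data.
true_reasons = (
    ["make a payment"] * 45
    + ["check on a payment"] * 35      # not an option in the call system
    + ["ask about payoff"] * 12
    + ["update contact info"] * 8
)

# The fixed dropdown agents must pick from at the end of every call.
dropdown_options = {"make a payment", "ask about payoff", "update contact info"}

def log_reason(true_reason: str) -> str:
    """Force a choice from the dropdown; reasons with no matching
    option get mapped to the closest thing available."""
    if true_reason in dropdown_options:
        return true_reason
    # "Check on a payment" has no option, so agents pick "make a payment".
    return "make a payment"

logged = Counter(log_reason(reason) for reason in true_reasons)
actual = Counter(true_reasons)

print("Logged:", logged)  # "make a payment": 80 of 100 calls (80%)
print("Actual:", actual)  # "make a payment": 45 of 100 calls (45%)
```

Run it and the logged data reports “make a payment” at 80% while the actual rate is 45%: exactly the kind of gap our observations surfaced.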
The fix was multi-faceted. On the customer side, they needed to either address the delay in applying payments or at least provide clearer messaging that a payment was being applied and that no late fees would result from the processing time. A sophisticated solution would include a call routing system that identified a caller by phone number, registered that their account had recently had a payment submitted online, and played a recorded message that the payment was in process. This alone could have brought their “make a payment” numbers down, as some callers would likely hang up before reaching an agent.
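As a sketch of what that routing logic might look like (everything here is hypothetical: the in-memory lookup, the three-day processing window, and the route names are stand-ins, since we weren’t building the phone system itself):

```python
from datetime import datetime, timedelta

# Hypothetical in-memory stand-in for the payment system; a real
# integration would query the company's systems of record.
RECENT_ONLINE_PAYMENTS = {
    "+15551234567": datetime(2024, 5, 1, 9, 30),  # phone number -> payment time
}

PROCESSING_WINDOW = timedelta(days=3)  # assumed settlement delay

def route_call(caller_id: str, now: datetime) -> str:
    """Route an inbound call. If the caller's account shows an online
    payment still inside the processing window, play a recorded
    'your payment is in process, no late fee' message instead of
    queueing the caller for an agent."""
    paid_at = RECENT_ONLINE_PAYMENTS.get(caller_id)
    if paid_at is not None and now - paid_at < PROCESSING_WINDOW:
        return "play_payment_in_process_message"
    return "route_to_agent"

# A borrower calling the day after paying online hears the recorded
# reassurance; everyone else reaches an agent as usual.
print(route_call("+15551234567", datetime(2024, 5, 2, 10, 0)))
print(route_call("+15559876543", datetime(2024, 5, 2, 10, 0)))
```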
On the agent side, the fix was to reassess the list of call reasons, which our qualitative data provided a great starting point for. That would not only improve their “make a payment” metric but also provide a more accurate picture of why customers call in, by improving the data being collected in the first place. And had they begun their original effort to collect data on “reasons for call” with a qualitative review of those calls, they could have had an accurate picture from day one of metrics collection.
There is real power in rotating between qual and quant
The benefits of pairing quantitative and qualitative research methodologies are clear, but the sequencing can feel challenging. While the quant-first approach, using numbers to raise questions for qual to answer, is valuable for targeted follow-up, the example from the loan servicing company demonstrates the practical value of leading with qualitative inquiry. Had the company run a qualitative discovery project before implementing their call categorization metrics, they could have saved a lot of time spent trying to fix an online payment system that was already working (albeit missing critical follow-up messaging). And without my team happening to be in the right place, observing for a different design purpose, the company would have continued to invest in the wrong solutions.
Qualitative research can serve as a vital diagnostic tool, ensuring your quantitative instruments are calibrated to the real world. By doing qual first, you don’t just dig deeper into existing data; you ensure you are measuring the right things in the first place, leading to truly effective, context-informed solutions.
So which should you do first?
As always with research, it depends. Is your team confident they have a clear enough understanding to create meaningful categories for your quantitative data? Is there any reason why they could not pause for a qualitative review of the landscape? Sometimes secondary datasets can provide useful context for this work. And critically, what risks might there be in collecting inaccurate or incomplete data? Talk through these questions with your team and decide how to create the best combined quant/qual plan for your workstream. If confidence is high, timing is tight (and always second-guess that decision), and the risk is low, suggest circling back once metrics start rolling in to see how a qualitative project could add clarity. Otherwise, get out there and start defining your customers’ reality with descriptive data.
The cover image is adapted from a photo by Jem Sahagun found on Unsplash.