Three Weeks to Clarity
How rapid research unblocked an Intelligent Assistant in transition
Rapid Research and Usability
Even the best-laid product plans can hit an unexpected snag. When our team, which was building an intelligent assistant for a large financial institution, ran into a major technical hiccup, we had to decide quickly how to keep users engaged without compromising the long-term user experience.
What followed was a three-week sprint that turned urgency into insight, balancing research speed with design rigor. This is how we made confident decisions under pressure—and what I learned about running lean but effective UX research when the clock is ticking.
The Challenge: A Temporary Solution, a Permanent Risk
The project’s momentum came to a sudden halt when a technical issue prevented full deployment of the intelligent assistant. Engineering and UX teams faced a difficult choice: which of two possible design approaches should serve as a stopgap until the fix was complete?
The challenge wasn’t just technical—it was strategic.
A poor user experience, even in a temporary version, could damage trust and discourage adoption once the full assistant was launched. We needed to provide a seamless, credible interim experience that still reflected the brand’s standard of quality.
As the research lead (working solo, part-time), I had three weeks to generate actionable evidence that would help the team choose wisely. The goal: move fast, stay grounded in data, and protect the user experience.
Aligning the Team: Clarifying What Really Matters
Before launching any studies, I brought together Product, UX Design, and Conversation Design stakeholders for a short kickoff workshop. The purpose was alignment, not analysis.
In that session, we identified three key questions to guide the research:
Expectations – What do users believe the assistant can do in this context?
Usability – How effectively can users complete essential tasks in each prototype?
Trust – Where does the experience create confidence—or confusion?
Starting with alignment helped focus the research and ensured that every test would serve a concrete decision. After the workshop, I revisited our existing research to map what we already knew and pinpoint gaps that still needed evidence. That step alone saved hours later and allowed me to design smaller, sharper studies.
The Approach: Lean Research at Speed
Given the tight timeline, I chose an unmoderated testing approach using UserZoom, which allowed us to gather data from real users quickly—without waiting to schedule moderated sessions.
Over the course of the sprint, I ran three rapid studies with a total of 126 target users:
1. Comparative Usability Tests (2 studies, 26 users total)
Each unmoderated test evaluated one of the two design approaches, so results could be compared side by side across the pair. Participants were asked to complete five realistic tasks—such as contacting customer support to cancel a transaction—while thinking aloud. Example task prompt:
“Now you want to talk to a customer support team member to assist you in canceling your transfer. From here, enter the forum to speak with a member of the customer support team. You will be successful once you are waiting for a customer support member to enter your conversation. When you think you have been successful, click the 'Success' button. Remember to think out loud.”
2. Expectations Survey (1 study, 100 users)
The third study used a mix of first-click tests and open-ended questions. It asked participants to identify where they would look for key information on the interface and what they believed was happening at any given point in the conversation.
Example click test prompt:
“Imagine that you want to have a conversation with Schwab and tell Schwab to cancel a transaction you recently made. Looking at this screen, click on where you are looking to know who or what you are speaking to at this moment.”
This combination of quantitative (clicks, completion rates) and qualitative (user language, confidence ratings) data gave a full picture of how users understood the assistant—and how each design shaped that understanding.
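For teams running a similar sprint, the quantitative side of that picture is straightforward to tally once the platform export is in hand. The sketch below is a minimal, hypothetical example of that step: the file name and column names (design, task, completed, first_click_correct) are placeholders I've invented for illustration, not the testing platform's real export schema.

```python
# Minimal, hypothetical sketch: aggregating unmoderated-test results by design.
# Assumes a CSV export with one row per participant-task, using placeholder
# column names: participant_id, design, task, completed (0/1),
# first_click_correct (0/1). Adapt to whatever your testing tool actually exports.
import pandas as pd

results = pd.read_csv("usability_results.csv")  # placeholder file name

# Task completion rate (%) for each design approach, broken out by task
completion_by_task = (
    results.groupby(["design", "task"])["completed"]
    .mean()
    .mul(100)
    .round(1)
    .rename("completion_rate_pct")
)

# First-click accuracy (%) per design, for the click-test questions
first_click_accuracy = (
    results.groupby("design")["first_click_correct"]
    .mean()
    .mul(100)
    .round(1)
    .rename("first_click_accuracy_pct")
)

print(completion_by_task)
print(first_click_accuracy)
```

Laying the two designs' completion rates and first-click accuracy side by side like this makes it easier to see where the quantitative signal points before layering in the qualitative comments and confidence ratings.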
Annotated Screens: Bringing Findings to Life
Next, I used annotated screens to make user actions visible. Each annotation corresponded to a quote, click pattern, or observed hesitation. This approach helped designers quickly identify friction points and pivotal moments in the flow.