UX Research Leader · Experience Strategist · People Manager
“I design for how people actually think — not how we wish they would.”
I am a UX research leader with a formal background in cognitive psychology and more than ten years building research-driven experience programs across Amazon Alexa, Expedia Group, Aceable, and enterprise consulting at ChaiOne.
What distinguishes my work is a deep understanding of how people perceive, learn, decide, and interact with technology — combined with accountability for the business outcomes that make research worth doing. I do not just deliver insights. I build the infrastructure, culture, and processes that allow organizations to consistently make better design decisions.
I have led voice AI research, enterprise contextual inquiry, SaaS product testing, international studies, and research ops initiatives — and I have managed and mentored researchers at every career stage. I am as comfortable presenting findings to a CEO as I am designing a study or debugging an A/B test.
“Behavioral science meets business outcome — I build the kind of UX research cultures where great design decisions happen consistently, not occasionally.”
Research is not a phase — it is the process. Every engagement starts with genuine inquiry and ends with decisions someone can act on.
Great research should be able to defend its own ROI. I track SUS scores, conversion lifts, and revenue impact as a matter of practice.
Voice interfaces, multimodal AI, nuclear facility operations, enterprise call centers — I am most effective where the design space is genuinely ambiguous.
I measure my impact not just by insights delivered, but by the research cultures built — where asking hard questions is valued over defending assumptions.
Students were confused — not about features, but about what it meant to be ready for their licensing exam. A full mixed-methods research program spanning generative discovery, feature prioritization, iterative design testing, and go-to-market research redefined how test prep worked and felt across two distinct verticals (teen Driver’s Ed students and adult Real Estate professionals), with 279 study participants across 9 methods.
Following the research program, Aceable needed to validate that research-informed messaging and experience changes would drive measurable conversion and revenue improvement before scaling the Ace It product to national markets. A GTM research sprint defined the language strategy; live A/B experiments (via Statsig and Amplitude) across California, Texas, and Florida markets measured real-world impact.
Aceable needed to understand where students were genuinely struggling — not just where they stopped — to build a personalization engine grounded in real behavioral patterns rather than engagement proxies. Research examined how learners interpreted feedback, conceptualized their confusion, and responded to remediation prompts, using a closed-loop approach: research → design → prototype → back to users → iterate.
Outcome: Grounded Aceable’s personalization strategy in real student needs. Confirmed by VP Product: “Liz ensured we were solving the right problems before moving into solution design.”
Aceable had no research function, no infrastructure, and no research culture when Liz joined. As the founding senior researcher — a team of one — the task was not just to conduct research but to design the entire system: what research gets done, how, by whom, with what tools, and how findings reach decision-makers.
Recognized by CEO Blake Garrett and VP Product William Mulford-Carper as a “force multiplier” for the organization.
Research insights were siloed in documents, Dovetail tags, and individual researcher memory — not accessible to product managers, engineers, and designers in real time. The challenge: make the student voice available at scale without requiring every team member to read every research report.
Outcome: Internal tooling that consolidated research insights, generated metrics summaries, and made student voice accessible across the organization. Described by manager as “pushing the envelope on AI.”
Travelers in “inspiration mode” — browsing without a destination in mind — represented a significant but underserved segment. End-to-end research explored how people discover travel destinations naturally, what signals indicate openness to suggestion, and how a structured discovery experience could feel helpful rather than prescriptive.
Prototype testing at Alexa was slow, often happening too late to drive real product decisions. Teams needed rapid-cycle validation — completing evaluations within days, not months — with a standardized approach to voice prototype evaluation across the organization. I designed and launched RAPT from scratch, establishing it as a recurring, scalable research function used across multiple Alexa product teams, running 3–5 concurrent studies at peak.
RAPT became the reference framework for rapid-research infrastructure and was adopted by additional Amazon product research teams outside Alexa.
Designing for voice requires understanding not just what users say, but what they expect a voice assistant to understand — and where those expectations break. Research spanned voice-only, screen+voice, and multimodal interaction models across a complex device ecosystem, including smart speakers, Echo devices with screens, and companion apps.
Amazon’s research teams needed a centralized platform to make insights findable, shareable, and actionable across a large, distributed organization. The only scalable path was enabling lightweight self-service research — designed so non-researchers could run their own studies with appropriate quality guardrails.
Internal initiative — details reflect publicly shareable scope.
Recruiting for Alexa-specific research through external agencies was slow (2–3 weeks), expensive, and often poorly screened. A proprietary participant panel of pre-screened Alexa users was needed to enable same-week recruiting and support the rapid-cycle research infrastructure Alexa’s roadmap demanded.
Alexa’s wake word recognition needed to perform reliably across diverse languages, accents, regional dialects, and acoustic environments in international markets. No validated methods existed for this problem space — the program had to develop them from scratch, in partnership with global research firm IPSOS.
Research into how Alexa could adapt its behavior, voice, and content to individual users over time — and where the lines between helpful personalization and uncomfortable surveillance actually fell for real users in real households. The “personalization paradox” — users wanting to be known but not shown — defined the entire product strategy.
Research presented to VP-level leadership as foundational input for Alexa’s multi-year personalization roadmap.
Different Alexa teams used “personalization” to mean entirely different things — causing conflicting product requirements, inconsistent user communication, and privacy controls that didn’t match how users actually thought about their data. The PIC framework resolved this by grounding a shared taxonomy in real user mental models.
PIC became a reference document adopted across multiple Alexa product and design orgs. Presented to VP-level leadership as key input to personalization strategy.
A Fortune 500 nuclear energy utility was losing $10M+ annually in demurrage fees from delivery scheduling failures at nuclear facilities. Contextual inquiry conducted inside active nuclear facilities — physically observing security officers during real shift operations — revealed four systemic failure points. The solution included the industry’s first enterprise Apple Watch deployment.
NRG Energy’s acquisitions call center — spanning 3 major brands — had conversion rates below target, and new-hire training consumed 2 of its 3 onboarding weeks on system workarounds. Contextual inquiry during live calls revealed a 78-click enrollment flow, non-linear navigation, and an incentive structure driving agents to pre-pick offers rather than serve customers. The redesigned engagement layer exceeded every target metric within 6 months.
As a team of one, Liz built and scaled an entire Research Center of Excellence from the ground up. They introduced clear mental models for research, established strong operating rhythms, implemented tools and systems that allowed research to scale, and ensured insights were accessible and actionable for product managers and designers across the organization.
Liz pushed the envelope on AI, building internal tools that consolidated insights, generated metrics, and made student voice more accessible across the organization.
What always stood out about Liz is that her research doesn’t stop at insight. She’s exceptional at translating findings into concrete direction — what to do next, what tradeoffs matter, and how to align teams around decisions.
Liz played a pivotal role in leading user research for destination discovery, resulting in the successful launch of the module on the Expedia App homepage in early 2023.
Liz was an empathetic, enthusiastic, and highly capable manager. They struck a rare balance between being Socratic and decisive, encouraging critical thinking without ever being passive, and consistently treated me like a peer rather than an intern.
Liz identified ways to improve key metrics like conversion and activation. Her pre-launch research was invaluable, helping us understand which features to prioritize and where friction existed.
Open to UX Manager, Head of Research, Director of Research, and Principal Researcher roles. Based in Seattle, WA — remote-first.