Surveys that Work: Using Them with Rigor and Human Insight in UX Research


By Paulina Contreras

Surveys are among the most commonly reported tools in UX Research practice: 59% of professionals use them regularly according to the UXPA 2024 report, and 83% of Research Operations teams support them as a primary method. However, their popularity doesn’t guarantee proper use. They’re often launched without a clear purpose, without validating questions, or without ensuring representativeness. The result is abundant data that’s of little use and poorly informed decisions.

This article explores how to design surveys with rigor, purpose, and respect for people, integrating Caroline Jarrett’s methodological approach and the scientific foundation of social sciences.


What is a survey, really?

Let’s start with some necessary definitions:

“A survey is a systematic method for collecting information from a sample of entities, with the purpose of building quantitative descriptors of the attributes of a larger population.” — Groves et al., 2004, Survey Methodology

“A survey is a process of asking people questions, getting numbers, and making decisions.” — Caroline Jarrett, 2021

Jarrett translates the technical definition into a more human one: surveys are a bridge between curiosity and informed decision-making. They’re not forms, but structured conversations with purpose.

In UX Research and social sciences, surveys allow us to collect information from large populations efficiently. They’re primarily used to measure attitudes, evaluate satisfaction, and establish patterns through numerical scales and quantitative data.


Establish your objectives before writing a single question

This is a crucial part, not just of surveys but of research in general: if you don’t know what you want to learn, or how you’ll use that information, start by resolving that.

Jarrett recommends starting every survey with four guiding questions:

  • What do you want to know?
  • Why do you want to know it?
  • What decision will you make based on the answers?
  • What number do you need for that decision?

“The goal of a survey is not to collect responses, but to obtain a number that helps make a decision.” — Surveys That Work Webinar, FocusVision

If you can’t answer these four questions, you probably don’t need a survey yet. The questionnaire content should be based on research questions. If the answer to an item doesn’t bring you closer to a design decision, eliminate it or rethink it.


The “Survey Octopus” model: thinking about surveys as a system

Jarrett proposes the Survey Octopus, with seven interconnected phases that reflect the complete cycle of a survey. Each “tentacle” can introduce error; neglecting one compromises total validity.

(Figure: Caroline Jarrett’s Survey Octopus. The model integrates purpose, questions, questionnaire, sample, fieldwork, response, and reporting.)

1. Goals

Define the purpose and expected decisions.

❌ Typical error: “We want to know what users think about the design.” (too broad, no associated decision)
✅ Improved: “Do at least 60% consider the onboarding clear? If not, we redesign the welcome section in Q4.”

2. Questions

Write clear, neutral, and answerable questions.

❌ Typical error: “Do you like our product?” (bias and vagueness)
✅ Improved: “From the following list, which features of the product led you to purchase it?”

3. Questionnaire

Structure flow, tone, and length.

  • Length: avoid fatigue (don’t repeat questions, don’t overwhelm, you risk abandonment).
  • Types: mix closed questions (quant) and open ones (context).
  • Scales: Likert to operationalize opinions.
  • Order: watch for biases; use filler items if applicable.

4. Sample

Define who to ask to ensure representativeness.

❌ Typical error: sending to “the entire database” without segmentation.
✅ Improved: segment by user type and calculate the sample size needed.
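A minimal sketch of the sample-size calculation, using the standard formula for estimating a proportion at 95% confidence with a finite population correction. The segment size and margin of error below are illustrative assumptions, not figures from the article:

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Sample size for estimating a proportion (95% confidence by default),
    with finite population correction. p=0.5 is the most conservative choice."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)          # finite population correction
    return math.ceil(n)

# e.g. a hypothetical segment of 5,000 active buyers, ±5% margin of error
print(sample_size(5000))  # → 357
```

Note how the required sample grows only slowly with population size: a segment of 200 still needs 132 responses, while a million-user base needs roughly 385.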

5. Fieldwork

How and when to administer.

  • Sending time (e.g., mid-morning, weekdays).
  • Reminders at 3–5 days.
  • Appropriate incentives.
  • Sufficient open window (time) to reach target sample.

6. Response

Analyze who responds and who doesn’t.

  • Email own databases: 10–30%
  • Email cold databases: 2–10%
  • In-app intercept: 1–5%

Monitor non-response bias: certain profiles may respond less often and skew conclusions. Ask yourself who is answering, who isn’t, and why.
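Those response-rate ranges translate directly into how many invitations you need to send. A quick sketch, using a hypothetical target of 200 completes and rates picked from the ranges above:

```python
import math

def invitations_needed(target_completes, response_rate):
    """How many people to invite to reach a target number of completed surveys."""
    return math.ceil(target_completes / response_rate)

# Target: 200 completes. Illustrative rates from the ranges above:
print(invitations_needed(200, 0.20))  # own email list at 20% → 1000
print(invitations_needed(200, 0.03))  # in-app intercept at 3% → 6667
```

The asymmetry is the point: the same study costs 1,000 invitations on a warm list and nearly 7,000 impressions in-app, which affects both fieldwork time and sampling strategy.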

7. Report

Communicate findings ethically, clearly, and usefully.

  • Contextualize: explain what each finding means.
  • Combine quant + qual.
  • Visualize with purpose.
  • Don’t confuse significance with practical relevance.

When to use surveys (and when not to)

✅ When to use them

  • Validate hypotheses at scale.
  • Measure perceptions, attitudes, or satisfaction.
  • Segment users and generate quantified profiles.
  • Evaluate prototypes or products (SUS, NPS, CES).
  • Discovery, evaluation, and continuous monitoring.

🚫 When not to (and some alternatives)

Implicit or unconscious experiences

Limitation: measures explicit attitudes, not deep processes. Alternative: in-depth interviews, projective techniques.

Need to observe real behavior

Limitation: self-report unreliable; doesn’t observe context. Alternative: contextual observation, ethnography, contextual inquiry.

Small or expert samples

Limitation: small or expert samples can’t support the statistical representativeness surveys rely on. Alternative: qualitative methods to understand processes and meanings.

Co-creation and collaborative design

Limitation: doesn’t generate dialogue or ideas. Alternative: participatory workshops and Design Thinking.

Cause-effect relationships

Limitation: correlational, not causal. Alternative: experimental designs / A/B testing.


Some examples of survey use: simple and complex

Example 1: Simple survey for quick validation (Evaluation)

Context: fashion e-commerce; evaluate the new checkout.
Objective: ≥70% rate the process as easy (4–5 on a 1–5 scale).
Structure (3 questions):

  1. Purchase ease today (1–5).
  2. Difficulties encountered (open, optional).
  3. Would you buy again? (Likert options).

Decision: thresholds >70% / 50–70% / <50% for actions.
Sample: 200 recent buyers.
Expected rate: 15–20% in-app.
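The decision rule above can be sketched as a small function. The ratings batch and the actions attached to each threshold band are hypothetical (the article defines the bands but not the specific actions):

```python
def ease_decision(ratings, top_box=(4, 5)):
    """Share of 'easy' ratings (4-5 on a 1-5 scale) mapped to the
    >70% / 50-70% / <50% decision thresholds. Actions are illustrative."""
    share = sum(r in top_box for r in ratings) / len(ratings)
    if share > 0.70:
        action = "ship as-is"
    elif share >= 0.50:
        action = "iterate on friction points"
    else:
        action = "redesign the checkout"
    return round(share, 2), action

# Hypothetical batch of responses:
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 4, 1]
print(ease_decision(ratings))  # → (0.7, 'iterate on friction points')
```

Writing the rule down before fieldwork is the point of Jarrett’s “what number do you need?” question: the analysis becomes a lookup, not a debate.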

Example 2: Complex survey for strategic prioritization (Discovery/Strategy)

Context: mental health app; choose 1 of 3 features for Q1 2026.
Design: segmentation filters, MaxDiff for prioritization, open context questions, NPS/satisfaction, basic demographics.
Considerations: sensitive language, informed consent; the survey doesn’t substitute professional care.
Analysis: MaxDiff scores by segment, crossed with analytics; roadmap by % demand per segment and effort.
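To make the MaxDiff step concrete, here is a minimal count-based scoring sketch: each item’s score is (times picked best − times picked worst) / times shown. The feature names and responses are hypothetical, and production MaxDiff studies typically fit a multinomial logit or hierarchical Bayes model rather than raw counts:

```python
from collections import Counter

def maxdiff_counts(tasks):
    """Count-based MaxDiff score per item: (best picks - worst picks) / times shown.
    `tasks` is a list of dicts: {'shown': [...], 'best': item, 'worst': item}."""
    best, worst, shown = Counter(), Counter(), Counter()
    for t in tasks:
        shown.update(t["shown"])
        best[t["best"]] += 1
        worst[t["worst"]] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Two hypothetical respondent tasks over three candidate features:
tasks = [
    {"shown": ["mood diary", "guided meditation", "therapist chat"],
     "best": "therapist chat", "worst": "mood diary"},
    {"shown": ["mood diary", "guided meditation", "therapist chat"],
     "best": "guided meditation", "worst": "mood diary"},
]
print(maxdiff_counts(tasks))
```

Scores range from −1 (always worst) to +1 (always best), which already gives a defensible segment-level ranking for a roadmap conversation.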


Best practices for designing surveys that work

1. Align each question with a decision

If the answer doesn’t change anything in strategy, design, or roadmap, eliminate it.

2. Use clear and neutral language

Avoid jargon, double-barreled or leading questions. Prefer one idea per question and neutral formulations.

3. Control length

  • Short: 3–5 questions (≤3 min).
  • Standard: 10–15 questions (5–7 min).
  • Long: only with high commitment/compensation.

4. Validate with a pretest

Pilot with 5–10 people from the target audience, checking comprehension and that answer options are exhaustive. You’ll save yourself headaches later.

5. Avoid frequent biases

  • Order: randomize options when possible.
  • Acquiescence: balanced scales.
  • Social desirability: anonymity + neutrality.
  • Recency: limit time frame (“last 7 days…”).
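Randomizing option order is easy to build into any survey tool or script. A minimal sketch, assuming hypothetical option labels; one common refinement shown here is keeping anchor options like “Other” fixed at the end:

```python
import random

# Options that should stay anchored at the end rather than be shuffled
ANCHORED = {"Other", "None of the above"}

def randomized_options(options, seed=None):
    """Shuffle answer options to counter order bias, keeping anchor
    options such as 'Other' fixed in their conventional last position."""
    rng = random.Random(seed)  # per-respondent seed gives a reproducible order
    movable = [o for o in options if o not in ANCHORED]
    fixed = [o for o in options if o in ANCHORED]
    return rng.sample(movable, k=len(movable)) + fixed

opts = ["Price", "Design", "Customer support", "Delivery speed", "Other"]
print(randomized_options(opts, seed=1))  # 'Other' stays last
```

Seeding per respondent (e.g. from a respondent ID) keeps the order stable if someone reloads the page, while still varying it across the sample.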

“Asking one person the right question is worth more than asking 10,000 people the wrong question.” — Caroline Jarrett


Roles and collaboration in the survey process

Key profiles to involve

  • Research/psychology: methodological design and rigorous interpretation.
  • Data analysts: quantitative analysis and statistical validation.
  • UX/UX Writing: questions aligned with design decisions.
  • Stakeholders/PO: shared purpose and prioritization.

How to analyze and communicate results

Good: we’re aligned. We know what to ask, whom to ask, and why. We send the survey and the responses come in. Now what?

Analysis methods

There are several ways to analyze results, from simplest to most complex:

  • Descriptive: means, frequencies, deviations.
  • Inferential: significance (p<0.05), correlations.
  • Segmentation: patterns by groups.
  • Factor analysis: validate constructs/items.
  • Thematic (open-ended): coding and patterns.
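The first two levels can be sketched with the standard library alone. The satisfaction scores and segment names below are hypothetical; the inferential step is a two-proportion z-test on the “top-box” (4–5) share, where |z| > 1.96 corresponds to significance at p < 0.05:

```python
import math
import statistics as st

# Hypothetical 1-5 satisfaction scores from two segments:
new_users = [4, 5, 3, 4, 4, 5, 2, 4, 5, 4]
power_users = [3, 4, 2, 3, 4, 3, 5, 3, 2, 3]

# Descriptive: means and standard deviations per segment
for name, scores in [("new", new_users), ("power", power_users)]:
    print(name, round(st.mean(scores), 2), round(st.stdev(scores), 2))

# Inferential: two-proportion z-test on the "top-box" (4-5) share
def two_prop_z(x1, n1, x2, n2):
    p = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error
    return (x1 / n1 - x2 / n2) / se

top1 = sum(s >= 4 for s in new_users)
top2 = sum(s >= 4 for s in power_users)
z = two_prop_z(top1, len(new_users), top2, len(power_users))
print(round(z, 2))  # → 2.25; |z| > 1.96 ≈ significant at p < 0.05
```

With real data you would reach for a statistics library, but the logic is the same: describe first, then test whether a segment difference is larger than sampling noise alone would produce.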

Effective communication

With the analysis done, the next step is to turn it into information that’s useful and actionable. Some recommendations for this stage:

  • Contextualize numbers and their impact.
  • Visualize with simple and comparative charts.
  • Combine quant + qual: brief quotes bring data to life; nothing is more powerful than hearing or reading a user talk about the product or service.
  • Declare limitations: sample, rates, biases.

Points to watch: avoid misinterpretations in your surveys

In practice, even the best-designed surveys can be misinterpreted if we don’t analyze data carefully.

  • Correlation ≠ causation: two variables being related doesn’t mean one causes the other. High satisfaction may be due to multiple factors the survey didn’t measure.

  • Avoid interested “sharpening” or “leveling”: sometimes we simplify or dramatize results to make them more convincing. A good practice is to report accurately, even if findings are ambiguous or go against team expectations.

  • Watch for confirmation and non-response biases (researcher bias): we tend to see what confirms our hypotheses and ignore what contradicts them. Ask yourself who didn’t respond and what stories might be absent from your data.


Surveys with ethics and empathy

In my previous article, “Persuasion and Behavioral Psychology in UX: From Cialdini to Responsible Design”, I explored how psychological knowledge can be used with awareness and respect. The same ethical principles apply here: designing surveys is also an act of influence and requires the same care.

Surveys are not just a means to collect data; they’re experiences that communicate something about how we understand and treat the people who respond to them. Designing with empathy means caring for tone, purpose, and the relationship we establish through each question.

Guiding questions for ethical design

Before launching a survey, it’s worth pausing to reflect on these questions:

  • Can and do people want to answer these questions safely?
  • Am I using their responses with respect and a clear purpose?
  • Am I giving them something meaningful in return for their time?
  • Does this survey respect their autonomy and dignity?

Key ethical principles for responsible surveys

  • Do no harm: avoid causing anxiety, guilt, or pressure. Reject coercive practices or dark patterns disguised as engagement.

  • Transparency: explain the survey’s purpose, how data will be used, and what benefits participation will have. Clarity generates trust.

  • Control and agency: let people decide. Include options to skip questions, exit easily, or review their answers before submitting.

  • Informed consent: especially in health, education, or vulnerable populations. Explain risks, benefits, and data protection measures.

  • Measuring ethical impact: consider tools like post-interaction surveys or regret metrics to evaluate how participants felt.

Ultimately, designing ethical and empathetic surveys is designing sustainable relationships: each well-formulated question is an opportunity to listen better and build trust.


Recommended resources

  • Jarrett, Caroline (2021). Surveys That Work. Rosenfeld Media.
  • Groves, R. M. et al. (2004). Survey Methodology. Wiley.
  • Dillman, Smyth & Christian (2014). The Tailored Design Method. Wiley.
  • NNGroup: How to Write Effective UX Surveys
  • Effortmark Workshop: Surveys That Work
  • Reports: State of User Research 2025 (User Interviews); UX Methods 2024 (MeasuringU/UXPA).

Closing

Surveys are not a formality, nor just the form we see at the end: they reflect how we think about design and research. Surveys are also designed, and when applied well, they translate people’s voices into decisions that humanize products.