Drawing on years of experience designing surveys for academic projects and clients, along with teaching research methods and Human-Computer Interaction, I've consolidated these insights into a comprehensive guide. Introducing the Layered Survey Framework, designed to unlock richer, more actionable insights by respecting the nuances of human cognition.

This framework (https://lnkd.in/enQCXXnb) re-imagines survey design as a therapeutic session: you don't start with profound truths, but gently guide the respondent through layers of their experience. This isn't just an analogy; it's a functional design model where each phase maps to a stage of emotional readiness, mirroring how people naturally recall and articulate complex experiences.

The journey begins by establishing context: grounding users in their specific experience with simple, memory-activating questions, because asking "why were you frustrated?" prematurely, without cognitive preparation, yields only vague or speculative responses. Next, the framework moves to surfacing emotions, gently probing feelings tied to those activated memories to tap into emotional salience. Following that, it focuses on uncovering mental models, guiding users to interpret what happened and why, revealing their underlying assumptions. Only after this structured progression does it proceed to capturing actionable insights, where satisfaction ratings and prioritization tasks, asked at the right cognitive moment, yield data that is far more specific, grounded, and valuable.

This approach ensures you ask the right questions at the right cognitive moment, fundamentally transforming your ability to understand customer minds. Remember: even the most advanced analytics tools can't compensate for fundamentally misaligned questions.

Ready to transform your survey design and unlock deeper customer understanding? Read the full guide here: https://lnkd.in/enQCXXnb

#UXResearch #SurveyDesign #CognitivePsychology #CustomerInsights #UserExperience #DataQuality
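To see the progression at a glance, here is a minimal sketch of the four layers as an ordered survey outline. The phase names come from the framework itself; the example questions are illustrative stand-ins, not taken from the guide.

```python
# Ordered phases of the Layered Survey Framework, each with sample questions.
# Phase names follow the post; the questions are hypothetical examples.
LAYERED_SURVEY = [
    ("Establish context", [
        "Which feature did you use most recently?",
        "What were you trying to accomplish?",
    ]),
    ("Surface emotions", [
        "How did you feel while completing that task?",
    ]),
    ("Uncover mental models", [
        "What do you think caused that outcome?",
    ]),
    ("Capture actionable insights", [
        "How satisfied were you with that experience? (1-5)",
        "Which single improvement would help you most?",
    ]),
]

for phase, questions in LAYERED_SURVEY:
    print(f"--- {phase} ---")
    for question in questions:
        print(f"  {question}")
```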
Satisfaction Survey Design
Explore top LinkedIn content from expert professionals.
Summary
Satisfaction survey design is the process of creating questions and structuring surveys to accurately capture people’s experiences, feelings, and opinions. A well-designed satisfaction survey helps organizations collect meaningful feedback and make better decisions based on honest and clear responses.
- Ask clear questions: Use language that everyone can understand and avoid combining multiple ideas into one question to get the most accurate answers.
- Structure thoughtfully: Build your survey so it flows logically and only asks each respondent questions that are relevant to them, making the experience smoother and your data more reliable.
- Include all options: Make sure response choices let everyone answer honestly by providing options like "does not apply" and avoiding assumptions about people’s experiences.
-
Nonprofit friends, planning to collect data soon? Remember: your questions shape your data, but they don't always get you what you need.

Imagine this: you are filling out a border form, and it asks, "Do you exceed duty-free allowances per person?" The only answers are Yes or No. For someone who didn't bring any goods, selecting No implies they did bring something but stayed within the limit. The question doesn't account for people for whom it is irrelevant, forcing them to provide inaccurate information.

Now think about your data collection tools (say, your last survey):
● Are your questions boxing people into answers that don't reflect their reality?
● Are you assuming experiences that don't apply to everyone?
● Are you unintentionally excluding voices by limiting response options?

Poorly worded questions = bad data = flawed decisions = a loss of trust. Here are three examples of common pitfalls:
● Assumptions baked into questions. Example: "What barriers prevent you from attending our events?" assumes the respondent knows about your events and faces barriers. Better: "Have you heard of our events?" followed by, "What barriers, if any, prevent you from attending?" (a branching sketch follows this post).
● Excluding relevant options. Example: "Which of these programs have you used?" without an "I haven't used any" option. Guess what happens? People pick a random answer or leave it blank, and now your data is a mess.
● Vague questions. Example: "On a scale of 1-5, how satisfied are you with our communication?" Without specifying (emails? social media? in person?), responses will be all over the place.

Your questions are your bridge to listening and understanding. Two things to remember here (and by no means is this the complete list):
● Plan your survey (the why, what, how, when, what-next) before jumping to design.
● Use inclusive language, providing options like "Does not apply" wherever relevant. Ensuring respondents can see themselves in the questions and response options is the only way to give them a true choice of what and how much they want to share with us.

Please reach out if you want to plan a Survey Kaleidoscope workshop with your team on your upcoming survey (for context, it's a workshop where we collectively plan every element of a successful survey).

#nonprofits #nonprofitleadership #community
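To make that branching concrete, here is a minimal Python sketch of the skip logic. The question wording comes from the post; the `ask` callback and the answer options are hypothetical, not tied to any particular survey platform.

```python
def events_section(ask):
    """Run the events questions with branching; `ask` stands in for however
    your survey tool poses a question and returns the chosen option."""
    answers = {"heard_of_events": ask("Have you heard of our events?", ["Yes", "No"])}

    # Only respondents who know about the events see the barriers question,
    # and "None" keeps it honest for people who face no barriers.
    if answers["heard_of_events"] == "Yes":
        answers["barriers"] = ask(
            "What barriers, if any, prevent you from attending?",
            ["Cost", "Timing", "Location", "None"],
        )
    return answers

# Example run with scripted answers instead of a live respondent:
scripted = iter(["Yes", "Timing"])
print(events_section(lambda question, options: next(scripted)))
# {'heard_of_events': 'Yes', 'barriers': 'Timing'}
```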
-
As UX researchers, we often rely on survey totals. We sum up Likert scale responses across a few items and call it a metric: satisfaction, usability, engagement, trust. It's fast, familiar, and widely accepted. But if you've ever questioned whether a survey is truly capturing what matters, that's where Item Response Theory (IRT) steps in.

IRT is more than just a statistical model: it's a smarter way to design, evaluate, and optimize questionnaires. While total scores give you a general snapshot, IRT gives you the diagnostic toolkit. It shifts your focus from just what the total score is to how each question behaves across different user types. Instead of treating every item as equally valuable, IRT assumes that each question has its own characteristics: its own difficulty level, its ability to discriminate between users with different trait levels (like low vs. high satisfaction), and even its tendency to generate noise. It mathematically models the likelihood of a particular response based on the person's underlying trait (e.g., engagement) and the specific properties of that item. This lets you see which items are doing real work, and which ones are just adding bloat.

Let's say you're trying to measure perceived product enjoyment. You include five questions. One of them, "I enjoy using this product", is endorsed by nearly everyone. Another one, "This product makes me feel inspired", gets more varied responses. Under IRT, the first item would be flagged as too easy; it doesn't help you separate highly engaged users from moderately engaged ones. The second item, if it cleanly differentiates users with different enjoyment levels, would be seen as high in discrimination power. That's the kind of insight you won't get from a simple average.

One of the biggest advantages of IRT is that it allows you to assess not just people's responses, but the quality of the items themselves. You can identify and remove redundant or low-informative questions, focus your surveys on measuring what matters most, and retain high precision with fewer items. This is a huge win for both survey respondents and UX researchers, especially when you're working in product environments where every question has to earn its place.

IRT also enables more advanced applications. You can build adaptive surveys: ones that tailor themselves in real time to each participant. You can create item banks that offer equivalent measurement across time or populations. And you can track individual-level changes in UX perceptions over time more reliably, which is something traditional scoring methods often miss.

I use IRT models to analyze UX questionnaires in my own work, especially when I want to make sure each item is pulling its weight. It also leads to clearer communication with designers, PMs, and engineers, because I can show why a certain item matters or doesn't, backed by data that makes sense.
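To make difficulty and discrimination tangible, here is a minimal numpy sketch of the two-parameter logistic (2PL) IRT model. The item parameters are hypothetical stand-ins for the two enjoyment items above (not fitted values), chosen so the "too easy" item carries little information while the "inspired" item separates users sharply.

```python
import numpy as np

def response_prob(theta, a, b):
    """2PL model: probability of endorsing an item, given trait level theta,
    item discrimination a, and item difficulty (location) b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: how precisely it measures at theta."""
    p = response_prob(theta, a, b)
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 121)  # latent enjoyment, in standard-deviation units

# Hypothetical parameters echoing the post's two items:
items = {
    "I enjoy using this product (too easy)":       dict(a=0.8, b=-2.5),
    "This product makes me feel inspired (sharp)": dict(a=2.0, b=0.5),
}

for name, params in items.items():
    info = item_information(theta, **params)
    print(f"{name}: peak information {info.max():.2f} at theta = {theta[info.argmax()]:+.2f}")

# The low-difficulty, low-discrimination item peaks far below the average user
# and contributes little; the discriminating item measures where it counts.
```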
-
After more than 25 years in market research, I've learned that a single poorly worded survey question can mislead teams and compromise decision-making.

One of my most memorable examples was a client who had built a prototype of a device to track and monitor driving and wanted to target parents with teenage drivers. This was their question:

"With 8% of all fatal crashes occurring among drivers ages 15 to 20, motor vehicle deaths are the second-leading cause of death for that age group. We know your child's safety is of utmost importance, and you are willing to do whatever you can to keep them safe. How likely would you be to install a device in your car to track and monitor your teenage driver?"

I told them that question would guilt a lot of parents into selecting a positive rating, but it would not give them an accurate, unbiased estimate of market potential. Here's the wording they finally agreed to:

"A manufacturer has created a device that tracks a driver's behavior (e.g., speeding, slamming on the brakes) and their location. It allows a user to set boundaries for where a car can be driven and be notified if the boundaries are crossed. It also allows a user to talk to the driver while they are on the road. How likely would you be to install a device with those capabilities to monitor your teenage driver?"

The results were not very favorable, which upset the client but also prevented them from making an expensive mistake.

#MarketResearch #SurveyDesign #DataDrivenDecisions
-
Writing your own survey? Stop making these survey mistakes…

I've reviewed dozens of surveys from brands and consultants who are taking a DIY approach to survey-based research. While I love seeing more companies using data and original insights in their content, there are some common pitfalls with surveys that can undermine your efforts. Here are the biggest mistakes I see, and how to avoid them:

1️⃣ Too many open-ended questions
While open-ended questions can be valuable, overusing them can overwhelm respondents and make it harder to extract actionable insights. Many of these could easily be reworked as multi-select options, which are quicker to answer and easier to analyze.

2️⃣ Not tailoring questions to respondents
Failing to properly segment your audience or filter questions (e.g., asking irrelevant questions to people outside a specific group) frustrates respondents and skews your data. Make sure your survey flows logically and adapts based on responses.

3️⃣ Using jargon or acronyms
Don't assume your audience speaks the same language as your internal team. Spell out acronyms and avoid industry jargon; it ensures clarity and a better response rate.

4️⃣ Combining ideas in one question or response option
Questions or responses like "Do you think A and B?" are problematic because a respondent might agree with one but not the other. Keep questions and responses focused on one idea at a time to get accurate answers.

5️⃣ Making surveys too long
Long surveys lead to drop-offs or rushed responses. Respect your respondents' time: focus on what you really need to know and keep it concise.

6️⃣ No narrative structure, just a dump of internal questions
One of the most common mistakes I see is surveys that lack a clear story arc. Instead of building around a strong theme or hypothesis, it's just a long list of random questions from different stakeholders. The result? Disconnected data that's hard to turn into compelling content. When designing your survey, think about the story you want to tell. Build your questions to support that narrative.

Key Takeaway: Thoughtful design makes a huge difference in the quality of your insights, and ultimately, the impact of your content.

Have you seen any survey mistakes that drive you nuts? Or tips for improving them?

#SurveyTips #OriginalResearch #ContentStrategy

Hi, I'm Becky. 👋 My clients have garnered 80+ media mentions, 2-3X the leads, and over 250K in free advertising from branded research💰 Interested in branded original research to boost your marketing KPIs? DM me and we'll talk. 🙂
-
Most people think writing a survey is easy… until the data comes back and it doesn't say anything new or have the ability to influence a decision.

This week, I'm sharing a few things we do differently when designing surveys at Two Cents Insights, especially for teams without an in-house quant expert but who still need high-quality insights they can actually use. These are the small moves that turn survey data from "interesting" into "let's action off of this."

☝️ Let's start with this: 𝗮 𝘀𝘂𝗿𝘃𝗲𝘆 𝗶𝘀 𝗮𝗻 𝗲𝘅𝗽𝗲𝗿𝗶𝗲𝗻𝗰𝗲.

𝘚𝘪𝘥𝘦 𝘯𝘰𝘵𝘦: 𝘐’𝘮 𝘯𝘰𝘵 𝘫𝘶𝘴𝘵 𝘴𝘢𝘺𝘪𝘯𝘨 𝘵𝘩𝘪𝘴 𝘣𝘦𝘤𝘢𝘶𝘴𝘦 𝘐 𝘨𝘦𝘯𝘶𝘪𝘯𝘦𝘭𝘺 𝘦𝘯𝘫𝘰𝘺 𝘥𝘦𝘴𝘪𝘨𝘯𝘪𝘯𝘨 𝘴𝘶𝘳𝘷𝘦𝘺𝘴 (𝘵𝘩𝘰𝘶𝘨𝘩 𝘐 𝘥𝘰).

Most people treat surveys like forms: just a stack of questions meant to collect data in whatever random order or wording, then move on. We design with the respondent in mind, because the more intentional the experience, the better the data. Some of the things we ask ourselves:
▪️ Should we start broad and zoom in, or begin specific and build out?
▪️ What tone fits not just the topic, but the brand we're working with?
▪️ Where should we slow people down in the survey to reflect?
▪️ Where should we give them momentum?
▪️ How do we minimize bias (ours, the client's, the respondent's) through thoughtful word choice?

When a survey feels like a chore, people rush. You get surface-level answers, straight-lining… or worse, angry rants in the open ends. (Yes, people will let you know when they hate your survey, and they don't hold back. A quick way to flag straight-lining in your data is sketched after this post.)

But when it feels intentional, even a little bit engaging, people show up differently. They pause. They think. They share deeply. Sometimes, they even enjoy it. When that happens, the quality of your insights completely shifts. You get sharper signal. You get ideas you haven't heard before. And if respondents know it's coming from your brand, the experience itself can build affinity just like a great piece of marketing. 🖤 Win-win-win.
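On the straight-lining point: it's worth flagging low-effort response patterns during analysis. Here is a minimal pandas sketch, assuming your Likert responses sit in a DataFrame with one row per respondent; the column names, data, and thresholds are illustrative.

```python
import pandas as pd

# Hypothetical 1-5 Likert responses: rows are respondents, columns are items.
responses = pd.DataFrame({
    "q1": [5, 3, 4, 2],
    "q2": [5, 4, 4, 2],
    "q3": [5, 2, 5, 2],
    "q4": [5, 4, 3, 2],
})
likert_cols = ["q1", "q2", "q3", "q4"]

# A respondent who gives the identical answer to every item is straight-lining.
straightliner = responses[likert_cols].nunique(axis=1) == 1

# Very low variance across items is a softer signal of low-effort responding.
low_variance = responses[likert_cols].var(axis=1) < 0.25

print(responses.assign(straightliner=straightliner, low_variance=low_variance))
```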
-
I've worked on SEVEN reports this year. I normally ask you all "is that too many? Is that very few?" but for this one, I have my answer. It's a lot.

But a lot of briefs skip this ONE thing: no survey strategist. No strategic input on the category, quality, or order of the questions. No answers to the question "why are we asking what we are asking?" And that leaves it to the report creator (me) to "find" a POV after the responses are already in. This is much harder to do than if we go in with a mission. Unbiased, but directional.

For example, you could be asking "Have you received a promotion in the last year?" to learn how promotions correspond with salary increases. But how does this question fit into the bigger story? If you can't answer that, you have a floating fact at best and wasted respondent time at worst.

A survey strategist would frame a series of questions that explore the bigger story of career progression. They might ask:
👉 "Have you received a promotion in the last year?" (that's your baseline, your starting point for career movement)
👉 "Did this promotion come with a salary increase?" (now you're tying that movement to financial impact)
👉 "How did the promotion affect your job satisfaction?" (the emotional weight of advancement)
👉 "How do you perceive your growth opportunities within the company?" (here, you're getting at the big picture: loyalty, ambition, future potential)

They'll understand the question logic, how an analyst would layer the responses (a small sketch of that layering follows this post), and what the designer would need to tell the story via graphs.

Survey design doesn't need to be expensive. You can do it in-house and get a research report creator (me, Becky Lawlor) to sanity-check the questions, refine the flow, and tie them back to a clear narrative. It'll save you time and money, and give you peace of mind down the line.

If this is something you're thinking about, send me a note! 📩
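Here is a minimal pandas sketch of what "layering the responses" can look like for that promotion story. The data, column names, and groupings are hypothetical, purely to show how the questions stack into one narrative.

```python
import pandas as pd

# Hypothetical answers to the career-progression questions above.
df = pd.DataFrame({
    "promoted":        ["Yes", "Yes", "No", "Yes", "No"],
    "salary_increase": ["Yes", "No",  None, "Yes", None],  # only asked if promoted
    "satisfaction":    [5, 3, 2, 4, 3],                    # 1-5 scale
})

# Layer 1: among those promoted, how many promotions came with a raise?
promoted = df[df["promoted"] == "Yes"]
print(promoted["salary_increase"].value_counts())

# Layer 2: does the promotion-and-raise combination track with satisfaction?
print(df.groupby(["promoted", "salary_increase"], dropna=False)["satisfaction"].mean())
```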
-
Your customer satisfaction survey is more than a score. Here's how one client used it to leverage a strength and fix a major pain point:

1. Analyze comments
Review the survey comments and identify themes for each rating. I can review about 100 surveys by hand in 30 minutes. AI software does this in seconds. (A simple keyword-tagging sketch follows this post.) Here's what my client's survey comments revealed:
💪 Strengths: employees were frequently mentioned for caring service.
❌ Weaknesses: my client discovered that one particular process was a major pain point. Customers felt it was too difficult and inconvenient.

2. Investigate findings
Dig deeper to learn more about the strengths and weaknesses the survey helped reveal. Observing employees and workflows is often the best way. My client's observations deepened two insights:
🙏 Employees frequently mentioned in surveys were great at building genuine rapport. Their techniques were easily shared with the rest of the team.
⏱️ The painful process was inefficient. The team made changes that made the process more efficient and easier for customers.

3. Experiment
Implement new ideas and track the results to see if they work. My client combined observations, anecdotal feedback from customers, and new survey results to assess how the rapport techniques and new process were working. Both were a hit! The painful process in particular stood out: many customers mentioned how happy they were with the changes. My client had taken a pain point and turned it into a strength!

Bottom line --> Follow this process to get more value from your surveys:
1. Analyze comments
2. Investigate findings
3. Experiment
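As a rough illustration of the comment-theming step, here is a minimal Python sketch that tags comments with themes by keyword and tallies them per rating. The comments, keywords, and theme names are invented for the example; real theming (by hand or with AI tooling) is more nuanced.

```python
from collections import Counter

# Hypothetical survey comments paired with their 1-5 ratings.
comments = [
    (5, "The staff were so caring and helpful"),
    (2, "The checkout process was difficult and inconvenient"),
    (4, "Friendly employees, but checkout took forever"),
]

# Simple keyword buckets standing in for proper thematic coding.
themes = {
    "caring service":  ["caring", "friendly", "helpful", "staff", "employees"],
    "painful process": ["difficult", "inconvenient", "checkout", "slow"],
}

tally = Counter()
for rating, text in comments:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(word in lowered for word in keywords):
            tally[(rating, theme)] += 1

for (rating, theme), count in sorted(tally.items()):
    print(f"rating {rating}: {theme} x{count}")
```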
-
I prefer a "Satisfactory" to an "Exceeded My Expectations." At least, I prefer it in surveys.

Last weekend, my family and I stayed at a hotel in Moab. We'd been there before: great pool, solid breakfast, comfy bed. Check, check, check. We chose it again because we'd had such a great experience last time. Our expectations were high.

Then came the follow-up survey. It asked us to rate a long list of attributes on how well they met our expectations: breakfast, staff, cleanliness, comfort, etc. Here's the problem: everything did meet our expectations. But because my expectations were already high, everything landed in the middle of the scale. To someone analyzing that data, it might look like I had an average stay. But I didn't. It was great.

The issue? The question itself. As a respondent, I hesitated: should I mark "Exceeded Expectations" even though things simply met them? I'm a market researcher, and this still gave me pause. If I was uncertain, imagine how many others were too. That means the study results are likely off, not because of poor data collection, but because the question design didn't capture what they truly wanted to know. Did I enjoy my stay? Was I satisfied? Would I come back? Those are the real indicators.

When you ask a survey question, you get *one* shot to get it right. You have to think through every possible interpretation, and how the respondent's mindset might differ from your intent. Thousands of surveys like this go out every day. And when the questions miss the mark, the data (and the budget) are wasted.

At TBG, we help teams design research that gets to the truth, because we're meticulous about asking questions that can only be interpreted one way. That's how we (and our clients) trust the results.

So tell me: when you see "Satisfied" vs. "Exceeded Expectations," which side are you on?

#marketresearch #customerinsights #surveydesign #edtech #researchquality #insightsthataction

Photo: Instead of the Moab hotel referenced above, here's the Delicate Arch at Arches National Park.
-
** Why CX survey design is both an art form and a skill **

One of my favourite and oft-repeated comments in CX is: "The customer experience may have been excellent, up to the point where you send the customer a survey."

Bad survey design is an absolute sticking point in CX management, and the survey is itself a key touchpoint in the overall experience. Mess it up and your holy-grail NPS of 10, based on the overall experience, becomes a 6 or a 7 with a chunky comment from the customer saying, "Everything was great, up to the point I received your stupid survey."

What am I talking about? The list is mighty:
❌ Dumb, badly worded, unnecessary questions using up survey "space"
❌ Questions you don't need to ask if you already have the interaction data (STOP being lazy and go find the data without bothering the customer)
❌ Questions leading to responses you won't actually do anything about
❌ Questionnaires that are too long
❌ Questionnaires that are too short
❌ Surveys offered at entirely inappropriate points (e.g. a pop-up survey 30 seconds after visiting a website FOR THE FIRST TIME)
❌ Survey-design-by-committee (a question for Ops, a question for Marketing, a question for Compliance, a question for your mother-in-law... etc.)
❌ Surveys with silly prize incentives, just to get people to respond
❌ Unkept promises: "We'll get back to you"... "Someone may contact you based on your response"... or other such falsehoods
❌ Basic grammatical and spelling errors
❌ Contradictions in the questionnaire design (conflicting questions, repeated questions, ambiguous wording)
❌ Poor questionnaire tooling (e.g. multiple choice instead of radio grids)

The best questionnaire designers, in my honest personal experience, are:
- "Experience-walkers" who have gone out and actually WALKED the experience. What happens? When does it happen? Is an intended question workable or even relevant? Don't hypothesise the experience based on expected behaviours; REALISE the experience based on reality.
- Linguistically competent. They word the questions using accessible, straightforward language. A little tip: if you have to "explain something" in a survey, using an asterisk for a definition or some other graphical gimmick, you aren't asking the question correctly.

You don't need a Master's degree in Social Marketing or Statistical Analytics or Business Studies to design a great survey. You need common sense, a reality-balanced viewpoint and the ability to realise the survey is an opportunity to EDUCATE respondents if you ask the questions correctly. You are not just gathering answers. You are triggering reactions, and with that you'll construct an effective survey rather than one the respondent abandons halfway through due to laziness and clumsiness in the design.

So sayeth someone who has, at various times, felt the crunch of none of the above advice being followed. Ouch.

#cx2025 #customerexperience #surveydesign