Multivariate Testing In UX

Explore top LinkedIn content from expert professionals.

  • View profile for Tony Moura
Tony Moura is an Influencer

    Senior UX Architect & Founder | 30 years building enterprise-grade experiences | IBM Federal | Open to senior UX/design roles

    44,072 followers

UX Designers, so you've started using AI to see if you can leverage it to amplify what you can do. The answer is yes, but... If you've never been part of the SDLC or PDLC, you'll get through it, but it won't be easy and not too fun at first. If you're in a well-established company with a huge design system, suddenly adding in AI might make life a real pain. It depends on how adaptive the company and others are. If you're starting something from scratch, well, now you can do whatever you want. This is where the fun, frustration, and learning come in. Buckle up. To give you an example: I've been working on something and it's almost ready for people to test. I was going through and manually testing the user flows. When something was found, Claude inside of Cursor would find the issue after I pointed it out. It suggests a fix, I review and approve, and continue from there. This was taking a lot of time, as you might imagine. So, this morning at 2 a.m., with what felt like sand in my eyes: "There has to be a way I can automate this..?" Prompt: "As you know, I've been testing the user flows manually, and we've been fixing the issues along the way. Do you know of a way that we can automate this without having to send out various emails, and just do this internally? When you find an issue, it gets documented in a backlog, we then work those, and run the test again?" I got answers. I selected one I liked (Playwright) and combined it with React Flow so it was visual. Created a dashboard for it. Long story short: I can now run 100% automated user flow tests, see them in action in real time, see where the issues are, and then go fix them. All done in less than 6 hours and at $0 except for my time. So, can you build something like this with the help of AI? Yes, I did, and it fully works. #ux #uxdesigner #uxdesign
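A minimal sketch of what one automated user-flow check like this could look like with Playwright's Python API. The URL, selectors, and backlog file below are hypothetical placeholders; the post doesn't share its actual implementation or the React Flow dashboard.

```python
# Hedged sketch: one automated user-flow check with Playwright (Python).
# The URL, selectors, and backlog path are hypothetical placeholders.
import json
from playwright.sync_api import sync_playwright

BACKLOG = "flow_issues.json"  # hypothetical issue backlog

def run_signup_flow(page):
    """Walk one user flow and collect any issues found."""
    issues = []
    page.goto("https://example.com/signup")  # placeholder URL
    page.fill("#email", "test@example.com")  # placeholder selector
    page.click("button[type=submit]")
    # Assert the flow reached the expected end state.
    if not page.locator("text=Check your inbox").is_visible():
        issues.append({"flow": "signup", "issue": "confirmation screen missing"})
    return issues

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    found = run_signup_flow(page)
    browser.close()

# Document findings in a backlog instead of emailing them around.
with open(BACKLOG, "w") as f:
    json.dump(found, f, indent=2)
print(f"{len(found)} issue(s) written to {BACKLOG}")
```

Each flow becomes a function like this, and a small runner can loop over all of them and feed the results to a dashboard.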

  • View profile for Aakash Gupta
Aakash Gupta is an Influencer

    Helping you succeed in your career + land your next job

    307,748 followers

Getting the right feedback will transform your job as a PM. More scalability, better user engagement, and growth. But most PMs don’t know how to do it right. Here’s the Feedback Engine I’ve used to ship highly engaging products at unicorns & large organizations: — The right feedback can literally transform your product and company. At Apollo, we launched a contact enrichment feature. Feedback showed users loved its accuracy, but... they needed bulk processing. We shipped it and saw a 40% increase in user engagement. Here’s how to get it right: — 𝗦𝘁𝗮𝗴𝗲 𝟭: 𝗖𝗼𝗹𝗹𝗲𝗰𝘁 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 Most PMs get this wrong. They collect feedback randomly with no system or strategy. But remember: your output is only as good as your input. And if your input is messy, it will only lead you astray. Here’s how to collect feedback strategically: → Diversify your sources: customer interviews, support tickets, sales calls, social media & community forums, etc. → Be systematic: track feedback across channels consistently. → Close the loop: confirm your understanding with users to avoid misinterpretation. — 𝗦𝘁𝗮𝗴𝗲 𝟮: 𝗔𝗻𝗮𝗹𝘆𝘇𝗲 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀 Analyzing feedback is like building the foundation of a skyscraper. If it’s shaky, your decisions will crumble. So don’t rush through it. Dive deep to identify patterns that will guide your actions in the right direction. Here’s how (see the sketch after this post): Aggregate feedback → pull data from all sources into one place. Spot themes → look for recurring pain points, feature requests, or frustrations. Quantify impact → how often does an issue occur? Map risks → classify issues by severity and potential business impact. — 𝗦𝘁𝗮𝗴𝗲 𝟯: 𝗔𝗰𝘁 𝗼𝗻 𝗖𝗵𝗮𝗻𝗴𝗲𝘀 Now comes the exciting part: turning insights into action. Execution here can make or break everything. Do it right, and you’ll ship features users love. Mess it up, and you’ll waste time, effort, and resources. Here’s how to execute effectively: Prioritize ruthlessly → focus on high-impact, low-effort changes first. Assign ownership → make sure every action has a responsible owner. Set validation loops → build mechanisms to test and validate changes. Stay agile → be ready to pivot if feedback reveals new priorities. — 𝗦𝘁𝗮𝗴𝗲 𝟰: 𝗠𝗲𝗮𝘀𝘂𝗿𝗲 𝗜𝗺𝗽𝗮𝗰𝘁 What can’t be measured can’t be improved. If your metrics don’t move, something went wrong. Either the feedback was flawed, or your solution didn’t land. Here’s how to measure: → Set KPIs for success, like user engagement, adoption rates, or risk reduction. → Track metrics post-launch to catch issues early. → Iterate quickly and keep improving based on feedback. — In a nutshell, this creates a cycle that drives growth and reduces risk: → Collect feedback strategically. → Analyze it deeply for actionable insights. → Act on it with precision. → Measure its impact and iterate. — P.S. How do you collect and implement feedback?
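As an illustration of Stage 2, here is a minimal sketch of aggregating feedback from several sources into one place and quantifying how often each theme occurs. The sources, theme keywords, and records are hypothetical; the post doesn't prescribe any particular tooling.

```python
# Hedged sketch: aggregate feedback across sources and quantify themes.
# Sources, keywords, and records are hypothetical examples.
from collections import Counter

feedback = [
    {"source": "support_ticket", "text": "Bulk processing would save hours"},
    {"source": "sales_call", "text": "Love the accuracy of enrichment"},
    {"source": "community_forum", "text": "Please add bulk processing"},
]

# Map recurring pain points to simple keyword triggers (assumed themes).
themes = {"bulk processing": "bulk", "accuracy": "accuracy"}

counts = Counter()
for item in feedback:
    for theme, keyword in themes.items():
        if keyword in item["text"].lower():
            counts[theme] += 1

# Quantify impact: how often does each issue occur?
for theme, n in counts.most_common():
    print(f"{theme}: {n} mention(s)")
```

In practice the keyword map would give way to proper tagging or text classification, but the aggregate-then-count shape stays the same.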

  • View profile for Akhil Yash Tiwari
Akhil Yash Tiwari is an Influencer

    Building Product Space | Helping aspiring PMs to break into product roles from any background

    34,617 followers

I opened Canva the other day and something caught my eye 👀 - a vibrant banner right on the home screen announcing "Droptober is coming," with a countdown hyping up new features set to launch in a few days. It's a simple yet effective reminder for users that new and exciting tools are just around the corner, one that not only sparks curiosity but also creates anticipation. 👉🏻 𝗜𝘁'𝘀 𝗮 𝗯𝗿𝗶𝗹𝗹𝗶𝗮𝗻𝘁 𝘀𝘁𝗿𝗮𝘁𝗲𝗴𝘆 𝘁𝗼 𝗯𝗼𝗼𝘀𝘁 𝗻𝗲𝘄 𝗳𝗲𝗮𝘁𝘂𝗿𝗲 𝗮𝗱𝗼𝗽𝘁𝗶𝗼𝗻 𝗿𝗮𝘁𝗲𝘀 𝗯𝗲𝗰𝗮𝘂𝘀𝗲: ✅ Instead of relying on emails or external announcements that might get lost, a banner on the app's home screen ensures the message reaches active users. It’s an in-app reminder that stays top of mind. ✅ Adding a countdown creates a sense of urgency. It makes users feel like they’re part of something special, something they don’t want to miss out on. ✅ Visual elements like banners can capture attention faster than text-heavy announcements. 🔵 𝗪𝗵𝗮𝘁 𝗼𝘁𝗵𝗲𝗿 𝘀𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗲𝘀 𝗰𝗮𝗻 𝘄𝗲 𝘂𝘁𝗶𝗹𝗶𝘇𝗲 𝗳𝗼𝗿 𝗱𝗿𝗶𝘃𝗶𝗻𝗴 𝗮𝗱𝗼𝗽𝘁𝗶𝗼𝗻: 💡 𝗜𝗻-𝗮𝗽𝗽 𝗮𝗻𝗻𝗼𝘂𝗻𝗰𝗲𝗺𝗲𝗻𝘁𝘀: Like Canva, using banners or pop-ups within the product helps to keep users informed. It’s a great way to announce a new feature, offer tutorials, or even give a sneak peek. 💡 𝗚𝗮𝗺𝗶𝗳𝘆 𝘁𝗵𝗲 𝗹𝗮𝘂𝗻𝗰𝗵: Products like Duolingo have mastered gamification. What if you could create a mini-challenge for users to try out the new feature? Reward them with badges or exclusive access. 💡 𝗣𝗿𝗼𝗴𝗿𝗲𝘀𝘀𝗶𝘃𝗲 𝗿𝗼𝗹𝗹𝗼𝘂𝘁𝘀 𝘄𝗶𝘁𝗵 𝘂𝘀𝗲𝗿 𝘀𝗲𝗴𝗺𝗲𝗻𝘁𝘀: Netflix often tests new features with a small percentage of users before a full rollout. This helps gather feedback, refine the experience, and build buzz through word-of-mouth (see the sketch after this post). 💡 𝗢𝗻𝗯𝗼𝗮𝗿𝗱𝗶𝗻𝗴 𝘄𝗮𝗹𝗸𝘁𝗵𝗿𝗼𝘂𝗴𝗵𝘀: When Slack releases a new feature, they often integrate it directly into the product’s onboarding flow, guiding users step-by-step. It’s not just about telling users what's new, but how to use it. 👉🏻 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆 𝗳𝗼𝗿 𝗣𝗠𝘀 - Invest in strategies that bring the message to where your users are most engaged, i.e., within your product itself. Keep it simple, visually appealing, and engaging. And remember, the more excitement you build around a new feature, the higher the chances of driving adoption. So, next time you’re planning a launch, think about how you can create that “I can’t wait to try this!” moment. P.S. What other strategies do you use as a PM for new feature launches? Do share in the comments!
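One common way to implement the progressive-rollout idea above is deterministic hash-based bucketing, so each user consistently does or doesn't see the new feature while the exposed percentage grows. A minimal sketch; the feature name and rollout percentage are hypothetical.

```python
# Hedged sketch: deterministic percentage rollout via hashing.
# The feature name and rollout fraction are hypothetical examples.
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Stable assignment: the same user always gets the same answer."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return bucket < percent / 100

# Expose the launch to 5% of users first, then widen the percentage.
print(in_rollout("user_42", "droptober_banner", 5))
```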

  • View profile for Oren Greenberg
Oren Greenberg is an Influencer

    Scaling Tech Companies | GTM Engineering Advisor. I help your team implement AI workflows.

    39,280 followers

𝑵𝒆𝒘 𝑫𝒂𝒕𝒂 𝑺𝒉𝒐𝒘𝒔 𝑳𝒐𝒔𝒊𝒏𝒈 80% 𝑶𝒇 𝑴𝒐𝒃𝒊𝒍𝒆 𝑼𝒔𝒆𝒓𝒔 𝑰𝒔 𝑵𝒐𝒓𝒎𝒂𝒍, 𝒂𝒏𝒅 𝑾𝒉𝒚 𝒕𝒉𝒆 𝑩𝒆𝒔𝒕 𝑨𝒑𝒑𝒔 𝑫𝒐 𝑩𝒆𝒕𝒕𝒆𝒓 How do you measure app success? Clicks? Downloads? Rating? How about retention? New data from Andrew Chen, a partner at Andreessen Horowitz, and mobile intelligence start-up Quettra shows: → An average app loses 77% of its Daily Active Users (DAU) within 3 days of installation; AND → The decline slows considerably after 3 days. Interestingly, Andrew’s analysis goes on to show that for the top-performing apps, the initial decline is less marked. For example, the top 10 apps lose only around 25% of DAU in the first 3 days. However, after around 3 days, the decline rate bottoms out in a similar fashion to the overall average. In other words, the initial few days after installation are critical. Recently, I highlighted the two main ways to raise app awareness and acquire users: search and paid ads. But what can you do to retain your users after they’ve installed your app? The key is user engagement. 🔹 Build trust with a compelling, but authentic story The first touchpoint is often the app store. App Store Optimisation (ASO) is critical to allow users to discover you in the first place. However, your description must be more than a few lines plastered with keywords. Long-term engagement is established through connection with your user. Convey your app’s value honestly and without exaggeration. Users are quickly turned off when developers over-promise and apps fall short. 🔹 Design a frictionless onboarding process Your user’s first real experience of your app is at onboarding, so make it user-friendly with simple step-by-step instructions, and easy-to-understand graphics or videos. There’s nothing worse than a cumbersome onboarding process, and if the app is also difficult to navigate or counterintuitive to use, users will soon be reaching for the delete button. 🔹 Make your app sticky Your ultimate goal is to make your app part of your user’s routine. Adding gamified elements that require users to complete daily challenges, tailoring the experience to the user, or incorporating social features are just some of the options available. Kurve’s client, BackThen, had a goal of increasing its user base. We analysed user behaviours to identify the triggers that led to subscription. Based on the outcomes, we crafted an engagement strategy and complementary marketing approach around 4:2:1 – 4 photos uploaded + 2 invites + 1 comment. The result? BackThen’s user base grew 5x in 12 months. With so many apps vying for users’ attention, stemming user decline is a conundrum. There’s no quick fix. It takes a combination of discoverability, seamless onboarding, and a first-class user experience to drive loyalty. Anything you’d add? 👇 #mobileapps #appmarketing #appstore Source: andrewchen.com
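As a hedged illustration of the DAU-decline arithmetic, here is a minimal sketch of computing day-N retention from install and activity logs with pandas. The column names and sample data are hypothetical, not Quettra's dataset.

```python
# Hedged sketch: day-N retention from install and activity logs.
# Column names and sample data are hypothetical.
import pandas as pd

installs = pd.DataFrame({
    "user_id": [1, 2, 3],
    "install_date": pd.to_datetime(["2024-01-01"] * 3),
})
activity = pd.DataFrame({
    "user_id": [1, 2, 1, 3],
    "event_date": pd.to_datetime(
        ["2024-01-02", "2024-01-02", "2024-01-04", "2024-01-01"]),
})

merged = activity.merge(installs, on="user_id")
merged["day_n"] = (merged["event_date"] - merged["install_date"]).dt.days

cohort_size = installs["user_id"].nunique()
for n in (1, 3, 7):
    active = merged.loc[merged["day_n"] == n, "user_id"].nunique()
    print(f"Day {n} retention: {active / cohort_size:.0%}")
```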

  • View profile for Mukta Sharma
Mukta Sharma is an Influencer

Quality Assurance | ISTQB Certified | Software Testing | In Top 10 London Influencers | In Top 100 Women In Tech

    47,723 followers

How do you use JAWS on a real-time application for accessibility testing? #testing JAWS (Job Access With Speech) is a popular screen reader used for testing accessibility, especially for users with visual impairments. Let me share a step-by-step process for using JAWS for accessibility testing on a real-time application: Install JAWS: First, install JAWS on your computer. Once installed, you can enable it by pressing the JAWS key (usually Insert or Caps Lock) along with the desired command. Enable JAWS: When testing websites or applications, launch JAWS and start navigating through the content. JAWS reads aloud the text on the screen and announces elements like headings, links, form fields, etc. Press the JAWS key and use arrow keys to move through the page. Check Keyboard Navigation: Ensure that all interactive elements, such as forms, buttons, and links, are accessible using the keyboard. JAWS will announce interactive elements as you tab through them. Verify that users can navigate and interact without a mouse. Test Page Structure: JAWS relies on proper HTML semantics (e.g., headings, lists, and ARIA roles). Test the structure by using commands like the "H" key to skip to headings or "L" to navigate through links. Ensure the structure is logical and that headings are used correctly. Test Images and Alt Text: JAWS reads alternative text (alt text) for images. Test this by navigating to images and ensuring that meaningful descriptions are provided in the alt attribute. If not, this could be an issue/defect that you can raise with the development team. If images lack alt text, JAWS will either read nothing or default to the image filename. Evaluate Forms and Controls: Check how JAWS announces form fields, labels, and error messages. Ensure that each form field has an associated label and that error messages are read when validation fails. By following these steps, you can use JAWS to assess the accessibility of web pages and applications and ensure they are usable by individuals with visual impairments. This is based on my real-world experience of testing some web-based apps. #JAWS #sharingexperience #learnandgrow
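A manual JAWS pass can be complemented by an automated check for some of the same defects. As a hedged illustration (this uses Playwright, not JAWS itself), here is a sketch that flags images with missing or empty alt text; the URL is a placeholder.

```python
# Hedged sketch: flag images with missing/empty alt text to complement
# a manual JAWS pass. This uses Playwright, not JAWS; URL is a placeholder.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")  # placeholder URL

    for img in page.locator("img").all():
        if not img.get_attribute("alt"):
            # JAWS would read nothing or fall back to the filename here.
            print("Missing alt text:", img.get_attribute("src"))
    browser.close()
```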

  • View profile for Vahe Arabian

    Founder & Publisher, State of Digital Publishing | Founder & Growth Architect, SODP Media | Helping Publishing Businesses Scale Technology, Audience and Revenue

    10,106 followers

Publisher experiments fail when they start with tactics, not hypotheses. A/B testing has become a staple in digital publishing, but for many publishers, it’s little more than tinkering with headlines, button colours, or send times. The problem is that these tests often start with what to change rather than why to change it. Without a clear, measurable hypothesis, most experiments end up producing inconclusive results or chasing vanity wins that don’t move the business forward. Top-performing publishers approach testing like scientists: They identify a friction point, build a hypothesis around audience behaviour, and run the experiment long enough to gather statistically valid results. They don’t test for the sake of testing; they test to solve specific problems that impact retention, conversions, or revenue. 3 experiments that worked, and why 1. Content depth vs. breadth: Instead of spreading efforts across many topics, one publisher focused on fewer topics in greater depth. This depth-driven strategy boosted engagement and conversions because it directly supported the business goal of increasing loyal readership, and the test ran long enough to remove seasonal or one-off anomalies. 2. Paywall trigger psychology: Rather than limiting readers to a fixed number of free articles, one publisher activated an engagement-triggered paywall after 45 seconds of reading. This targeted high-intent users, converting 38% compared to just 8% for a monthly article meter, resulting in 3x subscription revenue. 3. Newsletter timing by content type: A straight “send time” test (9 AM vs. 5 PM) produced negligible differences. The breakthrough came from matching content type to reader routines: morning briefings for early risers, deep-dive reads for the afternoon. Open rates increased by 22%, resulting in downstream gains in on-site engagement. Why most tests fail • No behavioural hypothesis, e.g., “testing headlines” without asking why a reader would care • No segmentation - treating all users as if they behave the same • Vanity metrics over meaningful metrics - clicks instead of conversions or LTV • Short timelines - stopping before 95% statistical confidence or a full behaviour cycle What top performers do differently ✅ Start with a measurable hypothesis tied to business outcomes ✅ Isolate one behavioural variable at a time ✅ Segment audiences by actions (new vs. returning, skimmers vs. engaged) ✅ Measure real results - retention, conversions, revenue ✅ Run tests for at least 14 days or until reaching statistical significance (see the sketch after this post) ✅ Document learnings to inform the next test When experiments are designed with intention, they stop being random guesswork and start becoming a repeatable growth engine. What’s the most valuable experimental hypothesis you’re testing this quarter? Share with me in the comment section. #Digitalpublishing #Abtesting #Audienceengagement #Contentstrategy #Publishergrowth
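For the statistical-significance points above, here is a minimal sketch of a two-proportion z-test with statsmodels. The counts are hypothetical, loosely echoing the post's 38% vs. 8% paywall conversion example.

```python
# Hedged sketch: two-proportion z-test for an A/B conversion experiment.
# Counts are hypothetical, loosely echoing the 38% vs. 8% paywall example.
from statsmodels.stats.proportion import proportions_ztest

conversions = [380, 80]    # variant, control
exposures = [1000, 1000]   # users who hit the paywall in each arm

z_stat, p_value = proportions_ztest(conversions, exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4g}")
if p_value < 0.05:  # the 95% confidence threshold the post mentions
    print("Statistically significant at 95% confidence.")
```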

  • View profile for Evelyn Gosnell

    Managing Director | Irrational Labs | Building behaviorally informed products that are good for people

    8,275 followers

    Build it and they will come? 🤔 When product teams launch a highly-requested feature, they tend to expect users to engage with it. But things don’t always work out that way. 😬 This is what Lyft discovered when they launched Women+ Connect. Despite the clear benefits and the demand for the feature, not all drivers who were eligible were opting into it. 🔍 The challenge? Simply telling users about a feature isn’t always enough to drive action. When Irrational Labs partnered with Lyft, here’s what our brilliant behavioral scientist Isabel Macdonald, PhD and team learned, working closely with Robyn Bald and Kirsten M.: 🚀 A simple shift in messaging—based on behavioral science—can massively impact feature engagement. Irrational Labs tested several behaviorally-informed messages and all outperformed the control. The winning message? “Just checking. Looks like you are not opted into Women+ Connect. Is this correct? Tap to review.” What this does: The question creates a desire for resolution and nudges the driver to take action (versus do nothing). The result? Compared to the control group, this approach got 173% more opt-ins from new drivers. 📈 So, what’s the takeaway for product teams? 👉🏼 Product success doesn’t come just from building great features. You have to frame them in ways that resonate with your users, capture their attention, and motivate them to act. 💡 Curious to see how small changes can lead to massive impact? Check out the link in the comments to learn how we helped Lyft get great engagement with a great feature. 👇🏼 #BehavioralScience #ProductManagement #UserEngagement #WomenInTech #IrrationalLabs #Lyft

  • View profile for Rasel Ahmed

    3× Co-Founder | CEO @ Musemind GmbH | UX Design Awards Jury | Top #2 Design Leadership Voice 🇩🇪 | Driving innovative, sustainable, empathetic AI × UX that delivers real impact

    50,208 followers

Most products in 2026 will face lawsuits. Unless… they're built with color-blindness in mind. Accessibility mistakes are quiet. Very quiet. No error messages. No angry emails. Just lost users. We fixed this for a 9-figure biz. This client thought everything was fine. Clean UI. Nice colors. Modern layout. But… conversions dropped. We tested one thing. “Grayscale.” And everything broke. Errors disappeared. Buttons blended. States looked identical. Not bad design. Just invisible design flaws. So we changed how signals worked. Not just color. Icons. Text. Contrast. Structure. Nothing fancy. Just thoughtful UX. And then? Forms worked. Drop-offs fell. Users trusted the product again. That’s why this carousel exists. Color-blind users aren’t edge cases. They’re: Real people. Paying customers. Accessibility isn’t charity. It’s product maturity. Design isn’t about taste. It’s about clarity. If you design products, remember this. Color should support meaning. Not carry it alone. Make UX work without color. Then add beauty. That’s real design. If this helped you rethink color, repost the carousel ♻️ P.S. Most accessibility bugs hide in plain sight.
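The grayscale test works because meaning carried only by hue vanishes when saturation does. For the contrast part of the fix, the WCAG 2.x math is public; here is a minimal sketch (the two example colors are hypothetical).

```python
# Hedged sketch: WCAG 2.x contrast ratio between two colors.
# Example colors are hypothetical; WCAG AA asks 4.5:1 for normal text.
def _linear(c: float) -> float:
    c /= 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(rgb1, rgb2) -> float:
    lighter, darker = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# A muted green button label on a white background: does it pass AA?
ratio = contrast((46, 139, 87), (255, 255, 255))
print(f"{ratio:.2f}:1 -> {'passes' if ratio >= 4.5 else 'fails'} WCAG AA")
```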

  • View profile for Mark Eltsefon

    Staff Data Scientist @ Meta, ex-TikTok | Boosting Data Science Careers | Causality Over Correlation Advocate

    39,869 followers

Classic A/B testing relies on SUTVA (the Stable Unit Treatment Value Assumption), which assumes one user’s decision doesn’t influence another’s. But what if your product is a social network, marketplace, or delivery service? Imagine you’ve improved the post-ranking algorithm on LinkedIn. Users in Group A (new algorithm) create more content now. But this content spreads to Group B (old algorithm), distorting the results due to network effects. Here are two main ways to tackle this: 1. 𝐂𝐥𝐮𝐬𝐭𝐞𝐫𝐢𝐧𝐠-𝐛𝐚𝐬𝐞𝐝 𝐞𝐱𝐩𝐞𝐫𝐢𝐦𝐞𝐧𝐭𝐬: Randomize groups of users (clusters) instead of individual users. For social networks, the most popular approach is to define clusters based on interaction frequency — those who engage more often stay together in one cluster. 2. 𝐒𝐰𝐢𝐭𝐜𝐡𝐛𝐚𝐜𝐤 𝐭𝐞𝐬𝐭𝐬: In this approach, everyone in the network receives the same treatment at any given time. Over time, we flip between test and control, compare metrics, and evaluate the impact. This is especially useful for location-based services (e.g., taxis or delivery). Even if you’re not working on a product with potential network effects, understanding these methods will help you in future interviews!
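As a hedged illustration of the second method, here is a minimal sketch of switchback assignment: the whole network (or one region) flips between treatment and control on a fixed time grid. The window length and start time are hypothetical choices.

```python
# Hedged sketch: switchback assignment on a fixed time grid.
# Window length and experiment start are hypothetical choices.
from datetime import datetime, timedelta

EXPERIMENT_START = datetime(2024, 1, 1)
WINDOW = timedelta(hours=4)  # flip arms every 4 hours

def assignment(ts: datetime) -> str:
    """Everyone shares one arm within a window, limiting interference."""
    window_index = (ts - EXPERIMENT_START) // WINDOW
    return "treatment" if window_index % 2 == 0 else "control"

print(assignment(datetime(2024, 1, 1, 3)))  # window 0 -> treatment
print(assignment(datetime(2024, 1, 1, 5)))  # window 1 -> control
```

In practice, windows are usually randomized rather than strictly alternated, and metrics are then compared across the windows of each arm.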

  • View profile for Bahareh Jozranjbar, PhD

    UX Researcher at PUX Lab | Human-AI Interaction Researcher at UALR

    9,505 followers

    Segmentation is one of those concepts that sounds simple until you actually try to do it properly. Most teams start with broad categories like age, location, or gender, but the real insight comes when you start looking at how users act - how often they visit, how recently they engaged, how much value they bring, and which patterns naturally form across those dimensions. The goal of segmentation isn’t to label users, it’s to understand the structure of their behavior. That’s what data-driven segmentation methods allow us to do. K-Means, for example, helps you find natural patterns hidden in behavioral data. You decide how many groups you want to explore, and the algorithm does the heavy lifting, assigning each user to the cluster that best represents their behavior. It’s simple, efficient, and powerful for large datasets where you want to explore engagement trends without predefining who belongs where. When you need to see relationships instead of just results, hierarchical clustering becomes more useful. It builds a tree-like view showing which users are similar and where meaningful divisions exist. You don’t need to commit to a single number of segments. You can cut the tree at different points to explore how granular your understanding should be. It’s particularly helpful for moderate datasets where interpretability matters as much as precision. Then there’s DBSCAN, a method designed for reality - where user behavior is messy, irregular, and full of noise. Unlike K-Means, DBSCAN doesn’t assume clusters are neat or circular. It groups users by density, identifying natural clusters and automatically separating outliers. This makes it especially valuable for complex behavioral or clickstream data where some users behave in ways that don’t fit any conventional pattern. If you want something more business-focused and immediately actionable, RFM segmentation (Recency, Frequency, Monetary) remains a classic for a reason. By scoring how recently and how often users engage, and how much they contribute, you can pinpoint who’s loyal, who’s at risk, and who’s gone silent. It’s simple but effective for linking behavior to ROI and retention strategies. Finally, once you have meaningful segments, classification models can keep them alive. You can train a model to automatically assign new users to the right segment as data flows in, turning segmentation from a static exercise into a living system that adapts as behavior changes.
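A minimal sketch tying several of these methods together: compute simple RFM-style features, then cluster them with K-Means and DBSCAN via scikit-learn. The feature values and algorithm parameters are hypothetical.

```python
# Hedged sketch: RFM-style features clustered with K-Means and DBSCAN.
# Feature values and algorithm parameters are hypothetical.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, DBSCAN

# Rows: users; columns: recency (days), frequency (visits), monetary ($).
rfm = np.array([
    [2, 40, 500], [3, 35, 420], [30, 5, 60],
    [45, 2, 20], [1, 50, 700], [90, 1, 5],
])
X = StandardScaler().fit_transform(rfm)  # put dimensions on one scale

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
dbscan_labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(X)  # -1 = outlier

print("K-Means segments:", kmeans_labels)
print("DBSCAN segments :", dbscan_labels)
```

A classifier trained on these labels can then assign new users to segments as fresh data arrives, keeping the segmentation alive as the post suggests.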
