User Flows And Pathways

Explore top LinkedIn content from expert professionals.

  • Vitaly Friedman (Influencer)

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    224,248 followers

    🔎 How To Redesign Complex Navigation: How We Restructured Intercom’s IA (https://lnkd.in/ezbHUYyU), a practical case study on how the Intercom team fixed the maze of features, settings, workflows and navigation labels. Neatly put together by Pranava Tandra.

    🚫 Customers can’t use features they can’t discover.
    ✅ Simplifying is about bringing order to complexity.
    ✅ First, map out the flow of customers and their needs.
    ✅ Study how people navigate and where they get stuck.
    ✅ Spot recurring friction points that resonate across tasks.
    🚫 Don’t group features based on how they are built.
    ✅ Group features based on how users think and work.
    ✅ Bring similar things together (e.g. Help, Knowledge).
    ✅ Establish dedicated hubs for key parts of the product.
    ✅ Relocate low-priority features to workflows/settings.
    🤔 People don’t use products in predictable ways.
    🤔 Users often struggle with cryptic icons and labels.
    ✅ Show labels in a collapsible nav drawer, not on hover.
    ✅ Use content testing to track whether users understand icons.
    ✅ Allow users to pin/unpin items in their navigation drawer.

    One helpful way to prioritize sections in navigation is to layer customer journeys on top of each other to identify the most frequent areas of use. The busy “hubs” of user interactions typically require faster and easier access across the product.

    Instead of using AI or a designer’s mental model to reorganize navigation, invite users and run a card sorting session with them. People are usually not very good at naming things, but very good at grouping and organizing them. And once you have a new navigation, test and refine it with tree testing.

    As Pranava writes, real people don’t use products in perfectly predictable ways. They come in with an infinite variety of needs, assumptions, and goals. Our job is to address friction points for their realities by reducing confusion and maximizing clarity. Good IA work and UX research can do just that.

    [Useful resources in the comments ↓] #ux #IA

  • Drew Neisser (Influencer)

    CEO @ CMO Huddles | Podcast host for B2B CMOs | Flocking Awesome CMO Coach + CMO Community Leader | AdAge CMO columnist | author Renegade Marketing | Penguin-in-Chief

    25,509 followers

    "Our funnel is completely clogged, and our CEO and investors are starting to panic," shared a CMO from a $375MM SaaS firm. The other Huddlers sympathized, noting they were facing similar challenges. Sound familiar? The old playbook of flooding the funnel, scoring MQLs, and handing off to sales isn't just broken; it's toxic. Here's why your funnel is clogged and what actually works now: 1. Your data is a disaster. The average customer contact database health score? A pathetic 47%, according to research from BoomerangAI. More than half of B2B companies haven't updated their database in six months—or ever. Bad data isn't just an operational issue. It erodes every layer of your funnel. Fix this first. Assign database ownership cross-functionally. Tie enrichment to your GTM motions. And please activate alumni contact programs. Only 12% of companies have formal programs for contacts who left employers, yet they're gold mines. 2. You're still pitching tours when buyers want tools. Recent TrustRadius research shows that 52% of buyers say prior experience is their #1 decision input. Only 13% say a demo "blew them away." 3. Stop the demo obsession. Launch website-based product exploration tools. Add pricing guidance. Create modular content for AI summarization since 90% of buyers who see AI-generated summaries click through to cited sources. 4. The MQL addiction is killing you. As one CMO put it: "MQLs are problematic... we’re trying to figure out how to get fewer, better leads." Track conversion quality at each funnel stage. Hold weekly demand gen and sales alignment meetings. Ditch vanity metrics for outcome-based KPIs. 5. You're pitching spend instead of displacement. Few CFOs are greenlighting net-new spending, but they will approve reallocation when the ROI is crystal clear. Reframe your pitch: "Invest in this → reduce spend on that." Connect to CFO logic, not just user pain. 6. You're making promises instead of proving value. Buyers want proof in 120 days or less. The "trust us, it'll pay off eventually" era is dead. If you have the data, create 120-day value realization case studies. Use prospect data to build "speed-to-value" narratives. Lead with time-to-value, not feature lists. The companies unclogging their funnels aren't working harder—they're working smarter. They've ditched the old playbook for data-driven precision. Your move. PS - For a longer look at this issue, please check out my May 2025 #HuddleUp newsletter.

  • Ross Dawson (Influencer)

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,298 followers

    LLMs are optimized for the next-turn response. This results in poor human-AI collaboration, as it doesn't help users achieve their goals or clarify intent. A new model, CollabLLM, is optimized for long-term collaboration. The paper "CollabLLM: From Passive Responders to Active Collaborators" by Stanford University and Microsoft researchers tests this approach to improving outcomes from LLM interaction. (link in comments)

    💡 CollabLLM transforms AI from passive responders to active collaborators. Traditional LLMs focus on single-turn responses, often missing user intent and leading to inefficient conversations. CollabLLM introduces a "multiturn-aware reward" system and applies reinforcement fine-tuning on these rewards. This enables the AI to engage in deeper, more interactive exchanges by actively uncovering user intent and guiding users toward their goals.

    🔄 Multiturn-aware rewards optimize long-term collaboration. Unlike standard reinforcement learning that prioritizes immediate responses, CollabLLM uses forward sampling - simulating potential conversations - to estimate the long-term value of interactions. This approach improves interactivity by 46.3% and enhances task performance by 18.5%, making conversations more productive and user-centered.

    📊 CollabLLM outperforms traditional models in complex tasks. In document editing, coding assistance, and math problem-solving, CollabLLM increases user satisfaction by 17.6% and reduces time spent by 10.4%. It ensures that AI-generated content aligns with user expectations through dynamic feedback loops.

    🤝 Proactive intent discovery leads to better responses. Unlike standard LLMs that assume user needs, CollabLLM asks clarifying questions before responding, leading to more accurate and relevant answers. This results in higher-quality output and a smoother user experience.

    🚀 CollabLLM generalizes well across different domains. Tested on the Abg-CoQA conversational QA benchmark, CollabLLM proactively asked clarifying questions 52.8% of the time, compared to just 15.4% for GPT-4o. This demonstrates its ability to handle ambiguous queries effectively, making it more adaptable to real-world scenarios.

    🔬 Real-world studies confirm efficiency and engagement gains. A 201-person user study showed that CollabLLM-generated documents received higher quality ratings (8.50/10) and sustained higher engagement over multiple turns, unlike baseline models, which saw declining satisfaction in longer conversations.

    It is time to move beyond the single-step LLM responses that we have been used to, toward interactions that lead to where we want to go. This is a useful advance toward better human-AI collaboration. It's a critical topic, and I'll be sharing a lot more on how we can get there.
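    To make the forward-sampling idea concrete, here is a minimal sketch of how a multiturn-aware reward could be estimated. It is my own illustration rather than the paper's code; the simulator and scoring functions are hypothetical placeholders.

```python
# Illustrative sketch of a "multiturn-aware reward": instead of scoring only
# the immediate reply, roll out a few simulated future turns and credit the
# candidate response with the (discounted) long-term outcome.
# simulate_user_turn, simulate_model_turn and score_conversation are assumed
# placeholders, not functions from the paper.

from typing import Callable, List

def multiturn_aware_reward(
    history: List[str],
    candidate_response: str,
    simulate_user_turn: Callable[[List[str]], str],
    simulate_model_turn: Callable[[List[str]], str],
    score_conversation: Callable[[List[str]], float],
    num_rollouts: int = 4,
    horizon: int = 3,
    discount: float = 0.9,
) -> float:
    """Estimate the long-term value of one candidate response by forward sampling."""
    total = 0.0
    for _ in range(num_rollouts):
        convo = history + [candidate_response]
        reward = 0.0
        for step in range(horizon):
            convo = convo + [simulate_user_turn(convo)]   # simulated user reply
            convo = convo + [simulate_model_turn(convo)]  # simulated assistant reply
            reward += (discount ** step) * score_conversation(convo)
        total += reward
    return total / num_rollouts

# The averaged reward would then drive reinforcement fine-tuning, so responses
# that ask good clarifying questions earn credit for the better conversations
# they tend to produce.
```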

  • Nathan Baird

    Helping Teams Solve Complex Problems & Drive Innovation | Design Thinking Strategist & Author | Founder of Methodry

    7,281 followers

    How do you and your teams synthesise and select which customer needs or pains to progress in your #product, #design, or #innovation projects?

    Imagine you've just completed some great customer discovery research, including observing, interviewing and being the customer. You've built some good empathy for who your customers are, what is important to them, what pains them, and what delights them. Then you unpack your findings into some form of empathy map, and you've got 100s of sticky notes everywhere. You've then started to narrow them down to the most promising and interesting observations, but this still leaves you with a sizeable collection, and you want to add some rigour to your intuition on which ones to take forward first.

    Well, here are 3 different methods that I’ve used and iterated over the years (a small worked sketch follows below):

    Number One – The Opportunity Scale
    This first one is the simplest and is inspired by how Alexander Osterwalder et al. rank jobs, pains and gains in their book Value Proposition Design (2014). As a team, you take your shortlist of observations from your empathy map and rank them from insignificant/moderate to important/extreme according to how strong the need/pain is for the customer, with the most important/extreme being prioritised to explore further first.

    Number Two – The Opportunity Matrix A
    The opportunity matrix increases the rigour and confidence of your prioritising by adding ‘strength of evidence’ as another dimension. Strength of evidence at this stage of the journey can be determined by the number and type of data points. For example, if you heard from several customers that a pain point was extremely painful, then you could be more confident it was worth solving than one highlighted by only one customer. Likewise, observing customers do something provides stronger evidence than customers saying they do something. Here you prioritise the most important needs with the strongest evidence first. Something to watch out for is when your team selects an observation that has strong evidence but isn’t that important a need or pain for customers. Teams can be blinkered by numbers and end up over-investing in time-wasting opportunities.

    Number Three – The Opportunity Matrix B
    The third method swaps out evidence for fulfilment of the need: how satisfied are customers with their ability to fulfil the need/solve the pain with the solutions they use today? By matching this with the importance of the need/pain, we can select those observations that we understand to be the most important and unmet for our customers. You can then overlay the strength of evidence across this ranking to make your final selection even more robust.

    And to take it to a whole new level and really de-risk your selection, you can test your prioritised observations, written as need statements, in quantitative research with customers. This is something that Antony Ulwick shares in his book Jobs To Be Done (2016).

    I hope you find these methods useful. #designthinking #humancentreddesign
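    For teams who like a spreadsheet-style view, here is the small worked sketch of Matrices A and B mentioned above. It is my own illustration of the ranking logic, not from the book or the post; the observations and 1-5 scales are invented for the example.

```python
# Hypothetical example of Opportunity Matrix A (importance x strength of
# evidence) and Matrix B (importance x how poorly the need is met today).
# The observations and 1-5 scores below are made up for illustration.

from dataclasses import dataclass

@dataclass
class Observation:
    label: str
    importance: int    # 1 = insignificant/moderate ... 5 = important/extreme
    evidence: int      # 1 = one anecdote ... 5 = observed across many customers
    satisfaction: int  # 1 = poorly met by today's solutions ... 5 = well met

observations = [
    Observation("Can't find past orders quickly", importance=5, evidence=4, satisfaction=2),
    Observation("Wants a dark theme", importance=2, evidence=5, satisfaction=3),
    Observation("Confused by billing terminology", importance=4, evidence=2, satisfaction=1),
]

# Matrix A: take forward important needs backed by the strongest evidence.
matrix_a = sorted(observations, key=lambda o: (o.importance, o.evidence), reverse=True)

# Matrix B: take forward important needs that are poorly satisfied today.
matrix_b = sorted(observations, key=lambda o: (o.importance, -o.satisfaction), reverse=True)

for o in matrix_a:
    print(f"{o.label}: importance={o.importance}, evidence={o.evidence}")
```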

  • Prof. Amanda Kirby MBBS MRCGP PhD FCGI (Influencer)

    Honorary/Emeritus Professor; Doctor | PhD, Multi award winning; Neurodivergent; Founder of tech/good company

    140,675 followers

    Neurodiversity 101: When we design neurodevelopmental pathways, we often focus on clinical accuracy and capacity. Far less attention is given to accessibility - yet this is where many pathways quietly break down.

    Accessibility is NOT just about ramps or translated leaflets. It is about whether families can understand, navigate and trust the system at all.

    Many parents accessing neurodevelopmental services are juggling:
    * Limited confidence with English
    * Low literacy or challenges with health literacy (understanding the terms/words being used)
    * Neurodivergent traits of their own
    * Stress, stigma, and fear of being judged!

    IMPORTANT: If pathways assume fluent English, high organisational skills and confidence in professional settings, they will systematically exclude the very families most in need.

    This matters particularly for conditions such as Developmental Language Disorder (DLD) and Developmental Coordination Disorder (DCD), where:
    * Needs are often identified late
    * Presentations are subtle or misunderstood
    * Parents must explain complex, everyday functional difficulties

    When forms are long (have you seen this?), language is abstract, and instructions are unclear, families disengage... This is not because they don’t care, but because the system is not built for them.

    SO... Accessible pathways share common features:
    * Plain language, avoiding unnecessary jargon
    * Multiple formats (written, visual, supported conversation)
    * Clear expectations about what will happen next
    * Permission not to know the “right words”
    * Recognition that parents may themselves have language, attention or coordination differences

    Accessibility also means moving away from a narrow, diagnosis-first mindset. Families often describe functional concerns long before a label is considered. Pathways that listen to what isn’t working day to day — rather than how well concerns are articulated — are more equitable and more effective.

    If accessibility is treated as optional, inequalities widen. If it is treated as core infrastructure, pathways become fairer, earlier and more humane.

    Neuroinclusion in healthcare is not only about who qualifies. It is about who can get through the door, stay engaged, and be understood.

  • Sneha Vijaykumar

    Data Scientist @ Takeda | Ex-Shell | Gen AI | LLM | RAG | AI Agents | Azure | NLP | AWS

    25,007 followers

    If you’ve ever shipped a GenAI model to production, you already know the real interview isn’t about transformers, it’s about everything that breaks the moment real users touch your system.

    1) How would you evaluate an LLM powering a Q&A system?
    Approach: Don’t talk about accuracy alone. Break it down into:
    ✅ Functional metrics: exact match, F1, BLEU, ROUGE depending on task.
    ✅ Safety metrics: hallucination rate, refusal rate, PII leakage.
    ✅ User-facing metrics: latency, token cost, answer completeness.
    ✅ Human evaluation: rubric-based scoring from SMEs when answers aren’t deterministic.
    ✅ A/B tests: compare model variants on real user flows.

    2) How do you handle hallucinations in production?
    Approach: Show you understand layered mitigation:
    ✅ Retrieval first (RAG) to ground the model.
    ✅ Constrain the prompt: citations, “answer only from provided context,” JSON schemas.
    ✅ Post-generation validation like fact-checking rules or context-overlap checks.
    ✅ Fall-back behaviors when confidence is low: ask for clarification, return source snippets, route to a human.

    3) You’re asked to improve retrieval quality in a RAG pipeline. What do you check first?
    Approach: Walk through a debugging flow:
    ✅ Check document chunking (size, overlap, boundaries).
    ✅ Evaluate embedding model suitability for the domain.
    ✅ Inspect vector store configuration (HNSW params, top_k).
    ✅ Run retrieval diagnostics: is the top_k relevant to the question?
    ✅ Add metadata filters or rerankers (cross-encoder, ColBERT-style scoring).

    4) How do you monitor a GenAI system after deployment?
    Approach: Make it clear that monitoring isn’t optional. Track:
    ✅ Latency and cost per request.
    ✅ Token distribution shifts (prompt bloat).
    ✅ Hallucination drift from user conversations.
    ✅ Guardrail violations and safety triggers.
    ✅ Retrieval hit rate and query types.
    ✅ Feedback loops from thumbs up/down or human review.

    5) How do you decide between fine-tuning and using RAG?
    Approach: Use a decision-tree mentality:
    ✅ If the issue is knowledge freshness, go with RAG.
    ✅ If the issue is formatting/style, go with fine-tuning.
    ✅ If the model needs domain reasoning, consider fine-tuning or LoRA.
    ✅ If the data is large and structured, use RAG + reranking before touching training.

    Most interviews test what you know. GenAI interviews test what you’ve survived.

    Follow Sneha Vijaykumar for more... 😊

    #genai #datascience #rag #production #interview #questions #careergrowth #prep
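    As one concrete example from point 2, here is a minimal sketch of a post-generation context-overlap check. It is my own illustration, not from the post; the threshold and tokenization are deliberately simplistic, and a production system would more likely use NLI models or claim-level fact checking.

```python
# Toy post-generation validation: flag answer sentences that share too little
# vocabulary with the retrieved context. Threshold and tokenization are
# illustrative assumptions only.

import re

def _tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def context_overlap_flags(answer: str, context: str, min_overlap: float = 0.4):
    """Return (sentence, overlap) pairs for sentences weakly grounded in the context."""
    context_tokens = _tokens(context)
    flags = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        sent_tokens = _tokens(sentence)
        if not sent_tokens:
            continue
        overlap = len(sent_tokens & context_tokens) / len(sent_tokens)
        if overlap < min_overlap:
            flags.append((sentence, round(overlap, 2)))
    return flags

# Usage: if anything is flagged, trigger a fallback behavior such as asking
# for clarification, returning source snippets, or routing to a human.
flags = context_overlap_flags(
    answer="The policy covers flood damage. Claims close within 3 days.",
    context="The policy covers flood and fire damage for residential buildings.",
)
print(flags)  # the unsupported "Claims close within 3 days." sentence is flagged
```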

  • PANKAJ BUDHWANI

    Director @Mediagarh | Driving College Admissions | Digital Marketing Expert for Educational Institutions | Data-Driven Strategies | Aspiring Author

    7,211 followers

    ₹1 Crore spent on ads, yet 50% of seats remain empty. What's going wrong?

    Every year, colleges invest crores into digital advertising, hoping to fill their seats. Yet many institutions find themselves struggling with enrollments even after spending massive budgets. The root problem isn’t a lack of leads—it’s a broken marketing funnel. Most colleges focus on generating inquiries but overlook what truly matters:
    ● How many of those leads actually convert into enrollments?
    ● Where are students dropping off in the application journey?
    ● Are leads being nurtured effectively, or are they slipping through the cracks?

    The Harsh Reality
    ● 70% of leads generated by colleges never make it past the inquiry stage.
    ● 80% of students prefer personalized communication, yet most colleges still rely on generic email blasts.
    ● Students today expect instant responses, but many institutions take days or even weeks to follow up.
    This disconnect between marketing and admissions results in low conversion rates and wasted ad spend.

    How This Can Be Fixed
    Instead of focusing solely on lead generation, shifting attention to conversion strategies can make all the difference. A few key steps include:
    ▶️ Identifying where leads drop off in the journey, from ad clicks to inquiry forms to actual enrollments.
    ▶️ Optimizing landing pages and CTAs to improve conversions, ensuring the application process is seamless and engaging.
    ▶️ Running targeted campaigns rather than broad, generic marketing efforts.
    ▶️ Using personalization and precise audience segmentation to significantly boost effectiveness.
    ▶️ Leveraging WhatsApp and AI chatbots to provide instant engagement, as real-time responses can triple the likelihood of an application.
    ▶️ Implementing retargeting and nurturing strategies, ensuring students stay engaged throughout the decision-making process rather than losing interest.

    The Impact
    When done right, this approach can lead to:
    ▶️ A significant increase in high-quality leads—not just random inquiries.
    ▶️ A 30% reduction in acquisition costs through smarter targeting.
    ▶️ Higher enrollment rates without increasing the marketing budget.

    Colleges don’t have a lead generation problem—they have a lead conversion problem. Are you tracking where your leads drop off? Let’s discuss in the comments!

  • Pan Wu (Influencer)

    Senior Data Science Manager at Meta

    51,240 followers

    Understanding user intent is foundational to improving any AI-driven product experience. In this tech blog, Udemy’s engineering team shares how they evolved their intent-understanding system by incorporating LLMs, ultimately improving the user experience of the Udemy AI Assistant.

    - For the Assistant to work well, the very first step is figuring out what a learner actually means so that the system can take the right action. Early versions relied on a lightweight sentence-embedding model: user messages were mapped to a vector space and matched against example utterances to identify the closest intent. This approach worked reasonably well at the start, but as the Assistant grew to support more features and nuanced intents, it began to struggle, leading to more misclassifications and weaker responses.

    - To improve accuracy, the team explored larger embedding models and eventually tested using LLMs directly for intent classification. While this LLM-only approach significantly improved understanding by leveraging full conversational context, it also came with higher latency and cost. The key was a hybrid strategy: use embeddings when confidence is high, and fall back to a smaller LLM only when intent is ambiguous. This delivered a strong balance between accuracy and efficiency in production.

    What stands out is how real-world constraints shaped the final design. In production systems, there are always trade-offs between quality, speed, and cost—and the “best” architecture is rarely the most complex one. Udemy’s approach is a useful reminder that combining lightweight methods with LLMs in the right places can meaningfully improve user experience without over-engineering the solution.

    #DataScience #MachineLearning #LLM #ProductAI #AppliedML #MLSystems #IntentUnderstanding #SnacksWeeklyonDataScience

    – – – Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
    -- Spotify: https://lnkd.in/gKgaMvbh
    -- Apple Podcast: https://lnkd.in/gFYvfB8V
    -- YouTube: https://lnkd.in/gcwPeBmR

    https://lnkd.in/ga5JJuzN
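    A rough sketch of what such a hybrid router can look like is below. This is my own illustration under stated assumptions, not Udemy's implementation: the intents, example utterances, embed() and call_llm() functions, and the confidence threshold are all hypothetical.

```python
# Hybrid intent classification sketch: cheap embedding match when confident,
# LLM fallback when ambiguous. Intents, embed() and call_llm() are assumed
# placeholders; in practice the example embeddings would be precomputed.

import numpy as np

INTENT_EXAMPLES = {
    "get_course_recommendation": ["what should I learn next", "suggest a course"],
    "summarize_lecture": ["summarize this lecture", "give me the key points"],
}

def classify_intent(message, embed, call_llm, threshold=0.75):
    """Return (intent, source) where source is 'embedding' or 'llm_fallback'."""
    query_vec = embed(message)
    best_intent, best_score = None, -1.0
    for intent, examples in INTENT_EXAMPLES.items():
        for example in examples:
            example_vec = embed(example)
            score = float(np.dot(query_vec, example_vec) /
                          (np.linalg.norm(query_vec) * np.linalg.norm(example_vec)))
            if score > best_score:
                best_intent, best_score = intent, score
    if best_score >= threshold:
        return best_intent, "embedding"          # high confidence: cheap path
    # ambiguous: hand the message (plus, in practice, conversation context)
    # to a smaller LLM for classification
    prompt = f"Classify the user's intent as one of {list(INTENT_EXAMPLES)}: {message}"
    return call_llm(prompt).strip(), "llm_fallback"
```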

  • Kuldeep Singh Sidhu

    Senior Data Scientist @ Walmart | BITS Pilani

    15,642 followers

    Exciting research from Snap Inc.'s engineering team! Just came across their paper on Universal User Modeling (UUM) that's revolutionizing how they handle cross-domain user representations.

    The team at Snap has developed a framework that learns general-purpose user representations by leveraging behaviors across multiple in-app surfaces simultaneously. Rather than building separate user models for each surface (Content, Ads, Lens, etc.) and combining them post hoc, UUM directly captures collaborative filtering signals across domains.

    Their approach formulates this as a cross-domain sequential recommendation problem, processing user interaction sequences of up to 5,000 events and using sliding windows of 800-length subsequences to balance computational efficiency with capturing long-range dependencies. The architecture leverages transformer-based self-attention mechanisms to model these sequences, with a clever design that projects feature vectors from different domains into a shared latent space before applying multi-head attention layers.

    The results are impressive! After successful A/B testing, UUM has been deployed in production with significant gains:
    - 2.78% increase in Long-form Video Open Rate
    - 19.2% increase in Long-form Video View Time
    - 1.76% increase in Lens play time
    - 0.87% increase in Notification Open Rate

    They're also exploring advanced modeling techniques like domain-specific encoders and self-attention with information bottlenecks to address the challenges of imbalanced cross-domain data. This work demonstrates how sophisticated user modeling can drive substantial engagement improvements across multiple recommendation surfaces within a large-scale social platform.
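    Here is a compact sketch of the shared-latent-space idea with a transformer encoder, written as an illustration of the general pattern rather than Snap's actual architecture; the dimensions, surface names, and pooling choice are assumptions.

```python
# Illustrative sketch (not Snap's code) of the cross-domain sequence idea:
# events from different surfaces are projected into a shared latent space,
# then a transformer encoder models the (sliding-window) subsequence.

import torch
import torch.nn as nn

class CrossDomainUserEncoder(nn.Module):
    def __init__(self, domain_feature_dims: dict, d_model: int = 128,
                 n_heads: int = 4, n_layers: int = 2, max_len: int = 800):
        super().__init__()
        # one projection per surface (e.g. "content", "ads", "lens")
        self.projections = nn.ModuleDict(
            {domain: nn.Linear(dim, d_model) for domain, dim in domain_feature_dims.items()}
        )
        self.pos_embedding = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, events):
        # events: time-ordered list of (domain_name, features of shape [batch, feat_dim]);
        # a real system would window the most recent 800 of up to 5,000 events.
        projected = [self.projections[domain](feats) for domain, feats in events]
        seq = torch.stack(projected, dim=1)                      # [batch, seq_len, d_model]
        positions = torch.arange(seq.size(1), device=seq.device)
        seq = seq + self.pos_embedding(positions)
        hidden = self.encoder(seq)
        return hidden.mean(dim=1)                                # pooled user representation

# Hypothetical usage with three surfaces and a short window of recent events:
model = CrossDomainUserEncoder({"content": 32, "ads": 16, "lens": 24})
events = [("content", torch.randn(8, 32)), ("ads", torch.randn(8, 16)),
          ("lens", torch.randn(8, 24))]
user_vec = model(events)   # [8, 128] general-purpose user embedding
```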

  • Shristi Katyayani

    Senior Software Engineer | Avalara | Prev. VMware

    9,195 followers

    Lately, I’ve been thinking about why choosing something to watch often takes longer than actually watching it. You open a streaming app, scroll for a while, watch a few trailers, switch genres, and sometimes end up rewatching something familiar.

    For years, most recommendation systems have been optimized around predicting what a user is most likely to click or watch next. In large-scale systems, this usually involves generating candidate content using embeddings and retrieval systems, ranking those candidates using machine learning models trained on engagement signals, and then presenting results to the user.

    But even well-optimized systems struggle with something fundamental: 𝐡𝐮𝐦𝐚𝐧 𝐢𝐧𝐭𝐞𝐧𝐭 𝐜𝐡𝐚𝐧𝐠𝐞𝐬 𝐪𝐮𝐢𝐜𝐤𝐥𝐲. A person’s watch history reflects what they liked in the past, but it does not always capture what they feel like watching in the moment. In reality, discovery often feels less like ranking and more like a conversation. Preferences are 𝐜𝐨𝐧𝐭𝐞𝐱𝐭𝐮𝐚𝐥, 𝐟𝐮𝐳𝐳𝐲 and 𝐞𝐯𝐨𝐥𝐯𝐢𝐧𝐠.

    This is where 𝐬𝐞𝐪𝐮𝐞𝐧𝐜𝐞-𝐛𝐚𝐬𝐞𝐝 𝐦𝐨𝐝𝐞𝐥𝐢𝐧𝐠 𝐚𝐧𝐝 𝐠𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐯𝐞 𝐀𝐈 start to change how recommendation systems can be built. Instead of treating every interaction independently, modern approaches can model user behavior as a 𝐬𝐞𝐪𝐮𝐞𝐧𝐜𝐞 𝐨𝐟 𝐚𝐜𝐭𝐢𝐨𝐧𝐬 𝐰𝐢𝐭𝐡𝐢𝐧 𝐚 𝐬𝐞𝐬𝐬𝐢𝐨𝐧. Transformer-based models are particularly well-suited for this because they can learn patterns across sequences of behavior. These systems can begin to understand how preferences shift during discovery rather than simply predicting the next click.

    In production environments, this often leads to 𝐡𝐲𝐛𝐫𝐢𝐝 𝐫𝐞𝐜𝐨𝐦𝐦𝐞𝐧𝐝𝐚𝐭𝐢𝐨𝐧 𝐚𝐫𝐜𝐡𝐢𝐭𝐞𝐜𝐭𝐮𝐫𝐞𝐬 that combine retrieval systems with generative or reasoning models:
    💡 Real-time user events feed feature stores that support both offline training and low-latency inference.
    💡 Embedding-based retrieval systems (e.g., two-tower models + ANN search) reduce millions of items to a few hundred candidates in milliseconds.
    💡 Ranking models score these candidates based on click probability, watch time, and completion likelihood.
    💡 Session-aware embeddings and sequence models capture short-term intent shifts during browsing.
    💡 Re-ranking layers enforce diversity, freshness, and exploration under strict latency constraints.

    Recommendation systems are gradually moving from 𝐩𝐫𝐞𝐝𝐢𝐜𝐭𝐢𝐧𝐠 𝐛𝐞𝐡𝐚𝐯𝐢𝐨𝐫 to 𝐮𝐧𝐝𝐞𝐫𝐬𝐭𝐚𝐧𝐝𝐢𝐧𝐠 𝐢𝐧𝐭𝐞𝐧𝐭, and from 𝐫𝐚𝐧𝐤𝐢𝐧𝐠 𝐜𝐨𝐧𝐭𝐞𝐧𝐭 to 𝐠𝐮𝐢𝐝𝐢𝐧𝐠 𝐝𝐢𝐬𝐜𝐨𝐯𝐞𝐫𝐲.
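    To ground the retrieval and re-ranking stages in that list, here is a small sketch of two-tower-style candidate generation followed by a session-aware score adjustment. It is my own illustration of the pattern, not any particular platform's code; the brute-force similarity search stands in for a real ANN index, and the boost values are hypothetical.

```python
# Two-stage sketch of a hybrid recommender: two-tower-style retrieval narrows
# a large catalogue to a few hundred candidates, then a session-aware re-rank
# nudges items that match short-term intent. Embedding sizes, the brute-force
# search (standing in for an ANN index such as FAISS/ScaNN), and the boost
# values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
DIM = 64
item_embeddings = rng.normal(size=(100_000, DIM)).astype(np.float32)   # item tower outputs
item_embeddings /= np.linalg.norm(item_embeddings, axis=1, keepdims=True)

def retrieve_candidates(user_embedding: np.ndarray, k: int = 200) -> np.ndarray:
    """Top-k items by cosine similarity to the user tower output."""
    user = user_embedding / np.linalg.norm(user_embedding)
    scores = item_embeddings @ user               # an ANN index replaces this in production
    return np.argpartition(-scores, k)[:k]

def session_aware_rerank(candidate_ids, base_scores, session_boosts):
    """Re-rank candidates using per-item boosts derived from short-term session signals."""
    adjusted = base_scores + np.array([session_boosts.get(int(i), 0.0) for i in candidate_ids])
    order = np.argsort(-adjusted)
    return [int(candidate_ids[j]) for j in order]

# Hypothetical usage: a user vector from the user tower, then a re-rank that
# boosts one item matching what the user is browsing in this session.
user_vec = rng.normal(size=DIM).astype(np.float32)
candidates = retrieve_candidates(user_vec)
ranked = session_aware_rerank(candidates, item_embeddings[candidates] @ user_vec,
                              session_boosts={int(candidates[0]): 0.2})
```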
