User behavior is more than what they say - it’s what they do. While surveys and usability tests provide valuable insights, log analysis reveals real interaction patterns. By analyzing interactions - clicks, page views, and session times - teams move beyond assumptions to data-driven decisions. Here are five key log analysis methods every UX researcher should know:

1. Clickstream Analysis - Mapping User Journeys
Tracks how users navigate a product, highlighting where they drop off or backtrack. Helps refine navigation and improve user flows.

2. Session Analysis - Seeing UX Through the User’s Eyes
Session replays reveal hesitation, rage clicks, and abandoned tasks. Helps pinpoint where and why users struggle.

3. Funnel Analysis - Identifying Drop-Off Points
Tracks user progression through key workflows like onboarding or checkout, pinpointing the exact steps causing drop-offs.

4. Anomaly Detection - Catching UX Issues Early
Flags unexpected changes in user behavior, like sudden drops in engagement or error spikes, signaling potential UX problems.

5. Time-on-Task Analysis - Measuring Efficiency
Tracks how long users take to complete actions. Longer times may indicate confusion, while shorter times can suggest disengagement.
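As a minimal sketch of two of these methods (clickstream and time-on-task), assuming Python with pandas and a hypothetical event-log table with user_id, timestamp, and page columns - none of these names come from a specific tool:

```python
import pandas as pd

# Hypothetical raw event log: one row per user action (columns are illustrative).
logs = pd.DataFrame({
    "user_id":   [1, 1, 1, 2, 2, 3, 3, 3],
    "timestamp": pd.to_datetime([
        "2024-05-01 10:00", "2024-05-01 10:01", "2024-05-01 10:05",
        "2024-05-01 11:00", "2024-05-01 11:02",
        "2024-05-01 12:00", "2024-05-01 12:03", "2024-05-01 12:10",
    ]),
    "page": ["home", "list", "product", "home", "list",
             "home", "product", "cart"],
})

# Clickstream analysis: count page-to-page transitions per user.
logs = logs.sort_values(["user_id", "timestamp"])
logs["next_page"] = logs.groupby("user_id")["page"].shift(-1)
transitions = (logs.dropna(subset=["next_page"])
                   .groupby(["page", "next_page"]).size()
                   .sort_values(ascending=False))
print(transitions)  # most common navigation paths, e.g. home -> list

# Time-on-task analysis: time spent on each page before the next action.
logs["dwell_seconds"] = (logs.groupby("user_id")["timestamp"]
                             .shift(-1) - logs["timestamp"]).dt.total_seconds()
print(logs.groupby("page")["dwell_seconds"].median())
```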
User Flow Analysis Techniques
Explore top LinkedIn content from expert professionals.
Summary
User flow analysis techniques help teams understand how people interact with websites or apps step by step, revealing where users get stuck, drop off, or succeed. By tracking and breaking down user actions, businesses can spot hidden friction points and improve the overall experience.
- Map user journeys: Analyze navigation patterns, clicks, and session times to see where users struggle or abandon tasks.
- Segment and compare: Group users by behavior, device, or region and compare their progress through core workflows to spot specific pain points.
- Investigate task breakdowns: Break tasks into smaller steps, study completion rates and times, and use session replays to pinpoint confusion and opportunities for improvement.
-
🚨 The greatest drop-off is from Product Display Page to Cart Page, so we must improve our Product Display Page!

Not so fast ✋

In today's age of data obsession, almost every company has an analytics infrastructure that pumps out a tonne of numbers. But rarely do teams invest the time, discipline & curiosity to interpret those numbers meaningfully. I will illustrate with an example. Let's take a simple e-commerce funnel:

Home Page ~ 100 users
List Page ~ 90 users
Product Display Page ~ 70 users
Cart Page ~ 20 users
Address Page ~ 15 users
Payments Page ~ 12 users
Order Confirmation Page ~ 9 users

A team that just "looks" at data will immediately conclude that the drop-off is steepest between Product Display Page & Cart Page. As a consequence, they will start putting a lot of firepower into solving user problems on the Product Display Page.

But a team that is data "curious" would frame hypotheses such as "do certain types of users reach the cart page more effectively than others?", look at users by purchase bucket, geography, category etc., and examine the entire funnel end to end to observe patterns. In the above scenario, it's likely that the 20 cart users were power users, whilst new & early purchasers don't make it to this stage. The reason could be poor recommendations on the list page, or customers only visiting the product display page to see a larger close-up of the product.

So how should one go about looking at data?

Do
✅ Start with an open & curious mind
✅ Start with hypotheses
✅ Identify metrics & counter-metrics that will help prove/disprove each hypothesis
✅ Identify the various dimensions that could influence behaviours - user type, geography, category, device type, gender, price point, day, time etc. The dimensions will be specific to your line of business.
✅ Check for data quality and consistency
✅ Look at upstream and downstream behaviour to see how the behaviour is influenced upstream and what happens to it downstream.
✅ Check for historical evidence of causality

Don't
❌ Look at data to satisfy your bias
❌ Rush to conclude your interpretation
❌ Look at data in isolation

- - -

TLDR - Be curious. Not confirmed.

#metrics #analytics #productmanagement #productmanager #productcraft #deepdiveswithdsk
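As an illustration of the "data curious" approach described above, here is a hedged Python/pandas sketch of comparing the same funnel across user segments. The segment labels, the furthest_stage column, and all numbers are invented for illustration:

```python
import pandas as pd

# Hypothetical per-user funnel data: furthest stage reached plus a segment label.
funnel_order = ["home", "list", "pdp", "cart", "address", "payment", "order"]
users = pd.DataFrame({
    "user_id": range(1, 11),
    "segment": ["new"] * 6 + ["power"] * 4,
    "furthest_stage": ["list", "pdp", "pdp", "pdp", "list", "home",
                       "order", "cart", "payment", "order"],
})

# For each segment, the share of users reaching each stage (cumulative funnel).
users["stage_idx"] = users["furthest_stage"].map(funnel_order.index)
for segment, group in users.groupby("segment"):
    reach = [(group["stage_idx"] >= i).mean() for i in range(len(funnel_order))]
    print(segment, dict(zip(funnel_order, [round(r, 2) for r in reach])))

# If "power" users reach the cart far more often than "new" users, the real
# problem may sit upstream (e.g. list-page recommendations), not on the PDP.
```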
-
✅ How To Run Task Analysis In UX (https://lnkd.in/e_s_TG3a), a practical step-by-step guide on how to study user goals, map users’ workflows, understand top tasks and then use them to inform and shape design decisions. Neatly put together by Thomas Stokes.

🚫 Good UX isn’t just high completion rates for top tasks.
🤔 Better: high accuracy, low time on task, high completion rates.

✅ Task analysis breaks down user tasks to understand user goals.
✅ Tasks are goal-oriented user actions (start → end point → success).
✅ Usually presented as a tree (hierarchical task-analysis diagram, HTA).
✅ First, collect data: users, what they try to do and how they do it.
✅ Refine your task list with stakeholders, then get users to vote.
✅ Translate each top task into goals, a starting point and an end point.
✅ Break down: user’s goal → sub-goals; sub-goal → single steps.
✅ For non-linear/circular steps: mark alternate paths as branches.
✅ Scrutinize every single step for errors, efficiency, opportunities.
✅ Attach design improvements as sticky notes to each step.
🚫 Don’t lose track in small tasks: come back to the big picture.

Personally, I've been relying on top task analysis for years now, kindly introduced by Gerry McGovern. Of all the techniques to capture the essence of user experience, it’s a reliable way to do so. Bring it together with task completion rates and task completion times, and you have a reliable metric to track your UX performance over time.

Once you identify 10–12 representative tasks and get them approved by stakeholders, you can track how well a product is performing over time. Refine the task wording and recruit the right participants. Then give these tasks to 15–18 actual users and track success rates, time on task and accuracy of input. That gives you an objective measure of success for your design efforts. And you can repeat it every 4–8 months, depending on the team's velocity. It’s remarkably easy to establish and run, but also has high visibility and impact — especially if it tracks the heart of what the product is about.

Useful resources:
Task Analysis: Support Users in Achieving Their Goals (attached image), by Maria Rosala https://lnkd.in/ePmARap3
What Really Matters: Focusing on Top Tasks, by Gerry McGovern https://lnkd.in/eWBXpCQp
How To Make Sense Of Any Mess (free book), by Abby Covert https://lnkd.in/enxMMhMe
How We Did It: Task Analysis (Case Study), by Jacob Filipp https://lnkd.in/edKYU6xE
How To Optimize UX and Improve Task Efficiency, by Ella Webber https://lnkd.in/eKdKNtsR
How to Conduct a Top Task Analysis, by Jeff Sauro https://lnkd.in/eqWp_RNG

[continues in the comments below ↓]
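To make the tracking part concrete, here is a small Python/pandas sketch of aggregating completion rate, time on task, and accuracy per top task from one study round. The task names and participant results are entirely hypothetical:

```python
import pandas as pd

# Hypothetical study results: one row per participant per task.
results = pd.DataFrame({
    "task":      ["find_invoice"] * 4 + ["change_plan"] * 4,
    "completed": [True, True, False, True, True, False, False, True],
    "seconds":   [42, 55, 120, 38, 90, 200, 180, 75],
    "accurate":  [True, True, False, True, False, False, False, True],
})

# The three headline metrics per top task.
summary = results.groupby("task").agg(
    completion_rate=("completed", "mean"),
    median_time_s=("seconds", "median"),
    accuracy=("accurate", "mean"),
)
print(summary)

# Re-running the same tasks every 4-8 months and plotting these columns
# gives a longitudinal view of UX performance.
```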
-
Most teams are just wasting their time watching session replays.

Why? Because not all session replays are equally valuable, and many don’t uncover the real insights you need.

After 15 years of experience, here’s how to find insights that can transform your product:

—

𝗛𝗼𝘄 𝘁𝗼 𝗘𝘅𝘁𝗿𝗮𝗰𝘁 𝗥𝗲𝗮𝗹 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝗳𝗿𝗼𝗺 𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗥𝗲𝗽𝗹𝗮𝘆𝘀

𝗧𝗵𝗲 𝗗𝗶𝗹𝗲𝗺𝗺𝗮: Too many teams pick random sessions, watch them from start to finish, and hope for meaningful insights. It’s like searching for a needle in a haystack.

The fix? Start with trigger moments — specific user behaviors that reveal critical insights.
➔ The last session before a user churns.
➔ The journey that ended in a support ticket.
➔ The user who refreshed the page multiple times in frustration.

Select five sessions with these triggers using powerful tools like @LogRocket. Focusing on a few key sessions will reveal patterns without overwhelming you with data.

—

𝗧𝗵𝗲 𝗧𝗵𝗿𝗲𝗲-𝗣𝗮𝘀𝘀 𝗧𝗲𝗰𝗵𝗻𝗶𝗾𝘂𝗲
Think of it like peeling back layers: each pass reveals more details.

𝗣𝗮𝘀𝘀 𝟭: Watch at double speed to capture the overall flow of the session.
➔ Identify key moments based on time spent and notable actions.
➔ Bookmark moments to explore in the next passes.

𝗣𝗮𝘀𝘀 𝟮: Slow down to normal speed, focusing on cursor movement and pauses.
➔ Observe cursor behavior for signs of hesitation or confusion.
➔ Watch for pauses or retraced steps as indicators of friction.

𝗣𝗮𝘀𝘀 𝟯: Zoom in on the bookmarked moments at half speed.
➔ Catch subtle signals of frustration, like extended hovering or near-miss clicks.
➔ These small moments often hold the key to understanding user pain points.

—

𝗧𝗵𝗲 𝗤𝘂𝗮𝗻𝘁𝗶𝘁𝗮𝘁𝗶𝘃𝗲 + 𝗤𝘂𝗮𝗹𝗶𝘁𝗮𝘁𝗶𝘃𝗲 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸
Metrics show the “what,” session replays help explain the “why.”

𝗦𝘁𝗲𝗽 𝟭: 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝗗𝗮𝘁𝗮
Gather essential metrics before diving into sessions.
➔ Focus on conversion rates, time on page, bounce rates, and support ticket volume.
➔ Look for spikes, unusual trends, or issues tied to specific devices.

𝗦𝘁𝗲𝗽 𝟮: 𝗖𝗿𝗲𝗮𝘁𝗲 𝗪𝗮𝘁𝗰𝗵 𝗟𝗶𝘀𝘁𝘀 𝗳𝗿𝗼𝗺 𝗗𝗮𝘁𝗮
Organize sessions based on success and failure metrics:
➔ 𝗦𝘂𝗰𝗰𝗲𝘀𝘀 𝗖𝗮𝘀𝗲𝘀: Top 10% of conversions, fastest completions, smoothest navigation.
➔ 𝗙𝗮𝗶𝗹𝘂𝗿𝗲 𝗖𝗮𝘀𝗲𝘀: Bottom 10% of conversions, abandonment points, error encounters.

—

𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗮 𝗖𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝘁 𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗥𝗲𝗽𝗹𝗮𝘆 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲
Make session replays a regular part of your team’s workflow and follow these principles:
➔ Focus on one critical flow at first, then expand.
➔ Keep it routine. Fifteen minutes of focused sessions beats hours of unfocused watching.
➔ Keep rotating the responsibility and document everything.

—

Want to go deeper and get more out of your session replays without wasting time? Check the link in the comments!
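As a sketch of Step 2 above (creating watch lists from data), assuming a hypothetical session-level export with conversion, duration, and error-count columns (the column names and values are invented, not from any specific replay tool), sessions at the extremes could be queued for replay like this:

```python
import pandas as pd

# Hypothetical session-level metrics exported from an analytics tool.
sessions = pd.DataFrame({
    "session_id": [f"s{i}" for i in range(1, 21)],
    "converted":  [i % 3 == 0 for i in range(1, 21)],
    "duration_s": [30 + 15 * i for i in range(1, 21)],
    "errors":     [0, 0, 1, 0, 2, 0, 0, 3, 0, 0,
                   1, 0, 0, 0, 4, 0, 0, 0, 2, 0],
})

# Success cases: converted sessions with the fastest completions.
success_watchlist = (sessions[sessions["converted"]]
                     .nsmallest(3, "duration_s")["session_id"].tolist())

# Failure cases: non-converted sessions with the most errors (trigger moments).
failure_watchlist = (sessions[~sessions["converted"]]
                     .nlargest(3, "errors")["session_id"].tolist())

print("Watch first:", failure_watchlist, "| compare against:", success_watchlist)
```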
-
🔹 Day 21 – Product Manager Interview Prep Series 🔹

🎯 RCA-Based Question: “Your team just launched a new onboarding flow. Instead of increasing activation, it's led to a spike in churn. How would you analyze and resolve this issue?”

📌 Step-by-Step Breakdown – Root Cause Analysis (RCA)
As a PM, your goal is to understand user behavior, pinpoint the friction, and fix the flow without compromising long-term retention.

1️⃣ Clarify the Problem
🔍 Define “churn”: Is it users dropping mid-onboarding? Or completing onboarding but not returning?
Ask:
- What’s the exact drop-off point in the new flow?
- Is the churn immediate (same day) or delayed (after 1–2 days)?
- What does churn look like compared to the previous flow?

2️⃣ Quantify & Segment the Impact
📊 Dive deep into analytics:
📈 Timeframe: When was the new flow launched? Sudden spike or gradual rise in churn?
👥 User Segments: Are new users from a particular platform (iOS/Android/Web) churning more?
🌐 Geo/Cohort Analysis: Are certain regions, age groups, or acquisition channels seeing higher churn?
🧪 A/B Testing: Compare churn between users on the old vs. new flows (if a test is live).

3️⃣ Identify Potential Root Causes
🧠 UX/UI Issues:
- Too many steps or a confusing layout?
- New permission asks too early (e.g., location, notifications)?
- Value not shown quickly enough?
🔧 Technical Issues:
- App crashes, lags, or slow load times?
- Broken APIs, failed calls, or validation errors?
🧭 Psychological Friction:
- Users feeling overwhelmed or not understanding the benefits?
- High cognitive load in the first interaction?

4️⃣ Talk to Stakeholders & Users
👂 User Feedback:
- Session recordings (Hotjar/FullStory)
- User interviews or feedback surveys
- App store reviews post-launch
🤝 Internal Teams:
- Engineering: Check for bugs, crashes, error logs.
- Design: Walk through usability testing insights.
- Data Science: Get funnel drop-off visualization.

5️⃣ Suggest Short-Term & Long-Term Improvements
🛠 Short-Term Fixes:
- Roll back the most friction-heavy step.
- Add in-line help or tooltips at high drop-off points.
- Highlight core product value earlier.
🚀 Long-Term Initiatives:
- Redesign onboarding based on user mental models.
- Introduce progressive disclosure – don’t show everything at once.
- Run usability tests before full rollout.

6️⃣ Measure Success
Track:
✅ Increase in activation rate
📉 Drop in onboarding churn
🧠 User comprehension (measured via surveys or task success rate)
🎯 Retention metrics over Day 1, Day 7, Day 30

🔁 PM Mindset Tip: Onboarding is your first impression. Make it intuitive, not intimidating. Test thoroughly, talk to real users, and iterate until value is delivered with clarity and ease.

💬 How would YOU debug a broken onboarding flow? Let’s brainstorm in the comments 👇

#ProductManagement #PMInterview #RootCauseAnalysis #Onboarding #UserChurn #UserExperience #LinkedInDaily #ActivationStrategy #ProductDesign #LinkedInNewsIndia
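For the quantify-and-segment step, one common way to check whether the churn spike is larger than random noise is a two-proportion z-test comparing the old and new flows; this test is not part of the original post, and the cohort numbers below are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical cohorts: users who started onboarding and users who churned.
old_flow = {"users": 4000, "churned": 480}   # ~12% churn on the old flow
new_flow = {"users": 3800, "churned": 570}   # ~15% churn on the new flow

p_old = old_flow["churned"] / old_flow["users"]
p_new = new_flow["churned"] / new_flow["users"]

# Two-proportion z-test on the pooled churn rate.
p_pool = (old_flow["churned"] + new_flow["churned"]) / (old_flow["users"] + new_flow["users"])
se = sqrt(p_pool * (1 - p_pool) * (1 / old_flow["users"] + 1 / new_flow["users"]))
z = (p_new - p_old) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"old churn {p_old:.1%}, new churn {p_new:.1%}, z={z:.2f}, p={p_value:.4f}")
# A significant increase justifies drilling into segments (platform, geo, cohort)
# before deciding which onboarding step to roll back.
```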
-
In my last post, I wrote about the dilemma of tracking everything versus missing data later. In this post, I want to share what I did to solve it for myself…

__________

I’ll use the example of my work at Stable Money, and take a particular flow (registration → starting an FD booking) to explain my points.

𝗦𝘁𝗲𝗽 𝟭
I would start with the core business questions. Typically, these are figures I would want to track every morning:
⇢ How many users initiate FD booking every day?
⇢ How many users drop off between completing registration and initiating FD booking?
This would lead to the core events: 𝘳𝘦𝘨𝘪𝘴𝘵𝘳𝘢𝘵𝘪𝘰𝘯_𝘤𝘰𝘮𝘱𝘭𝘦𝘵𝘦𝘥, 𝘩𝘰𝘮𝘦_𝘱𝘢𝘨𝘦_𝘷𝘪𝘦𝘸𝘦𝘥, 𝘣𝘢𝘯𝘬_𝘱𝘢𝘨𝘦_𝘷𝘪𝘦𝘸𝘦𝘥, 𝘧𝘥_𝘣𝘰𝘰𝘬𝘪𝘯𝘨_𝘪𝘯𝘪𝘵𝘪𝘢𝘵𝘦𝘥, 𝘦𝘵𝘤.

𝗦𝘁𝗲𝗽 𝟮
Next, I would ask myself deeper product-level questions, drilling down from the core business questions:
⇢ If users are dropping off between one step and another, where are they going?
⇢ For users dropping off, how do I characterize them?
The first question would lead to the non-core events: 𝘩𝘰𝘮𝘦_𝘱𝘢𝘨𝘦_𝘤𝘰𝘭𝘭𝘦𝘤𝘵𝘪𝘰𝘯_𝘷𝘪𝘦𝘸𝘦𝘥, 𝘣𝘢𝘯𝘬_𝘧𝘢𝘲_𝘷𝘪𝘦𝘸𝘦𝘥, 𝘣𝘢𝘯𝘬_𝘧𝘥_𝘳𝘢𝘵𝘦𝘴_𝘷𝘪𝘦𝘸𝘦𝘥, 𝘦𝘵𝘤.
The second question would lead to event properties:
• 𝘣𝘢𝘯𝘬_𝘯𝘢𝘮𝘦, 𝘭𝘪𝘴𝘵_𝘰𝘧_𝘳𝘢𝘵𝘦𝘴_𝘥𝘪𝘴𝘱𝘭𝘢𝘺𝘦𝘥, 𝘦𝘵𝘤. for the 𝘣𝘢𝘯𝘬_𝘱𝘢𝘨𝘦_𝘷𝘪𝘦𝘸𝘦𝘥 event
• 𝘤𝘰𝘭𝘭𝘦𝘤𝘵𝘪𝘰𝘯_𝘯𝘢𝘮𝘦, 𝘭𝘪𝘴𝘵_𝘰𝘧_𝘧𝘥𝘴_𝘪𝘯_𝘤𝘰𝘭𝘭𝘦𝘤𝘵𝘪𝘰𝘯, 𝘦𝘵𝘤. for the 𝘤𝘰𝘭𝘭𝘦𝘤𝘵𝘪𝘰𝘯_𝘷𝘪𝘦𝘸𝘦𝘥 event
and user properties: 𝘢𝘨𝘦, 𝘨𝘦𝘯𝘥𝘦𝘳, 𝘤𝘰𝘶𝘯𝘵_𝘰𝘧_𝘧𝘥𝘴_𝘣𝘰𝘰𝘬𝘦𝘥, 𝘦𝘵𝘤.
These two steps together would give me the list of p0 events & properties.

𝗦𝘁𝗲𝗽 𝟯
Finally, I would list down all the remaining actions that users could take in the product, within this flow:
⇢ Clicking on a particular question on the FAQ page of a bank
⇢ Turning on the senior citizen toggle while checking a bank's FD rates
This would give me the list of p1 events: 𝘣𝘢𝘯𝘬_𝘧𝘢𝘲_𝘲𝘶𝘦𝘴𝘵𝘪𝘰𝘯_𝘢𝘯𝘴𝘸𝘦𝘳_𝘷𝘪𝘦𝘸𝘦𝘥… with event properties: 𝘲𝘶𝘦𝘴𝘵𝘪𝘰𝘯_𝘳𝘢𝘯𝘬, 𝘲𝘶𝘦𝘴𝘵𝘪𝘰𝘯_𝘵𝘦𝘹𝘵, 𝘢𝘯𝘴𝘸𝘦𝘳_𝘵𝘦𝘹𝘵, 𝘦𝘵𝘤.

My rule of thumb was that no matter what, p0 events should be instrumented with the feature go-live. For p1 events, I was okay to defer them in case of limited bandwidth.

__________

Now, this was my approach to not get stuck while doing analysis, as well as not drown in an ocean of rarely used events & properties. I would love to know what approaches other #ProductManager & #ProductAnalyst folks are using to solve this dilemma!
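One way to capture the resulting tracking plan in a reviewable form is a small schema like the sketch below. The dataclass structure and the release check are illustrative, not the author's actual tooling; the event and property names are taken from, or modeled on, the flow described above:

```python
from dataclasses import dataclass, field

@dataclass
class EventSpec:
    name: str
    priority: str                 # "p0" = must ship with the feature, "p1" = can defer
    properties: list = field(default_factory=list)

# Hypothetical tracking plan for the registration -> FD booking flow.
TRACKING_PLAN = [
    EventSpec("registration_completed", "p0"),
    EventSpec("home_page_viewed", "p0"),
    EventSpec("bank_page_viewed", "p0", ["bank_name", "list_of_rates_displayed"]),
    EventSpec("fd_booking_initiated", "p0"),
    EventSpec("bank_faq_question_answer_viewed", "p1",
              ["question_rank", "question_text", "answer_text"]),
]

# Gate the release checklist on the p0 events being instrumented at go-live.
p0_events = [e.name for e in TRACKING_PLAN if e.priority == "p0"]
print("Must be live at launch:", p0_events)
```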