Back when I worked on user growth @ Pinterest, I conducted 3 retention analyses that helped Pinterest grow to 450M+ MAUs. Excited to share those analyses on Reforge Artifacts. Check it out 👇 🔗 Link to each artifact/analysis in comments.

🕹 1. Feature Retention Analysis: How can you tell when a new feature is good enough? When should you promote it? It's a question you often run into at a rapidly evolving startup. At Pinterest, we were developing an AR/VR feature called Lens. It allowed users to take pictures of objects around them and find similar pins. Before the growth team poured time and effort into driving users to it, we wanted to know if the feature had "product-feature fit": were people getting value out of this feature regularly, or was it just a novelty? We benchmarked the new AR feature against core Pinterest features like repinning and search, building retention curves for each to see whether Lens fell in the ballpark of the others. The data showed retention was low. People were checking it out because it was cool, but they weren't coming back, since they weren't finding recurring use cases for it. So we made the call not to have the growth team heavily promote the feature.

📊 2. Churn Probability Analysis: In the early days of Pinterest we were developing one of our first retention emails. One of the primary questions we needed to answer was: when should we intervene to try to win someone back? Our intuition was that for a really active user, you might get worried after a few days of inactivity, but for a less engaged user it might be fine if they are inactive for a week or more. So we created a heatmap showing how a user's activity level and days of inactivity together relate to churn probability. 🔥 To actually use the heatmap, we set a cut line of 20%: when a user's churn probability hit 20%, that's when we'd send a notification or email to try to re-engage them.

📵 3. Cost of Unsubscribe Analysis: Notifications are a core lever for driving retention in many products. A couple of years into scaling Pinterest's email program, the team was sending a dozen types of emails, and we wanted to understand how unsubscribing impacted user retention. We needed some feel for the cost associated with an unsubscribe to understand how many emails were too many. So we did an analysis of the correlation between someone unsubscribing and their longer-term retention after that action. 🤯 We were really surprised to see that unsubscribes were associated with a pronounced increase in churn propensity for our core and casual users, but virtually no impact on churn for dormant, new, and resurrected users. Our key takeaway was that we should be more sensitive about email volume with our core and casual users. Check out the full analysis at the link in the comments. ⬇
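For readers who want to try the first analysis on their own product, here is a minimal pandas sketch of how per-feature retention curves might be computed. The events file, column names, and feature labels are all hypothetical illustrations, not Pinterest's actual pipeline.

```python
import pandas as pd

# Hypothetical events table: one row per (user_id, feature, date) a feature was used.
events = pd.read_csv("feature_events.csv", parse_dates=["date"])

def retention_curve(events: pd.DataFrame, feature: str, max_weeks: int = 8) -> pd.Series:
    """Share of a feature's first-time users still using it in each later week."""
    f = events[events["feature"] == feature]
    first_use = f.groupby("user_id")["date"].min().rename("first_date").reset_index()
    f = f.merge(first_use, on="user_id")
    f["week"] = (f["date"] - f["first_date"]).dt.days // 7
    weekly_active = f.groupby("week")["user_id"].nunique()
    return (weekly_active / len(first_use)).reindex(range(max_weeks + 1), fill_value=0.0)

# Benchmark the new feature against established core features (labels assumed).
for feat in ["lens", "repin", "search"]:
    print(feat, retention_curve(events, feat).round(2).tolist())
```

Plotting these three series on one chart gives the benchmark view described above: if the new feature's curve flattens well below the core features', people are sampling it but not returning.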
User Engagement Analytics
Explore top LinkedIn content from expert professionals.
Summary
User engagement analytics is the process of tracking and analyzing how users interact with a product—such as an app or website—to understand their behavior, measure retention, and inform business decisions. By focusing on meaningful metrics beyond simple counts, companies can identify what drives ongoing participation, spot churn risks, and refine their offerings to build lasting relationships.
- Focus on retention: Shift your attention from total user sign-ups or downloads to tracking active users and retention rates to understand if people are sticking with your product.
- Segment your audience: Use analytics to identify different user groups, such as casual users and power users, and tailor your engagement strategies to fit their needs.
- Connect metrics to value: Calculate metrics like lifetime value and churn to see how user engagement impacts your bottom line and guides smarter investment decisions.
-
📊 Average vs. Percentiles: A Product Manager's Guide to Feature Adoption Analysis

Ever wondered why averages can be misleading? Let's dive into a real-world scenario that showcases the power of percentiles in product analytics.

🎯 Scenario: Analyzing Adoption of a New Collaboration Feature. Imagine tracking user engagement with a new feature in the first month. Here's the engagement count data for 15 users: [2, 5, 8, 10, 12, 15, 18, 20, 25, 30, 35, 40, 45, 50, 60]

📈 Key Percentiles:
- 50th (Median): 20 engagements
- 75th: 35 engagements
- 90th: 47.5 engagements
- 100th (Max): 60 engagements

🤔 Why Not Use the Average? The average (25 engagements) seems simple but can be misleading:
- Sensitive to Outliers: a few very light or very heavy (power) users skew the number.
- Misrepresents Typical Behavior: it doesn't show where most users are.
- Lacks Distribution Insight: it misses the bigger picture.

🚀 The Power of Percentiles for Product Managers:
- Median (50th): Half of users engage ≤20 times (typical behavior).
- 75th: 75% of users engage ≤35 times (great for realistic goals!).
- 90th: Only 10% engage >47.5 times (your power users).

💡 Actionable Insights:
- Aim to increase the 75th percentile to 40 engagements/month.
- Learn from 90th-percentile users: what drives their high engagement?
- Improve the experience for below-median users to boost overall adoption.

🎉 Key Takeaways: Percentiles offer a clearer picture of user behavior, helping you:
- Identify user segments (casual vs. power users).
- Prioritize improvements and plan A/B tests.
- Set realistic, segmented goals.
- Communicate feature performance effectively to stakeholders.

A quick way to check these numbers yourself is sketched below. Thanks Shikha Pandey for sharing this input.
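As a sanity check, here is a minimal numpy sketch over the same 15 data points. Note that percentile values depend on the interpolation method; numpy's default (linear) gives slightly different 75th/90th figures than the ones quoted above.

```python
import numpy as np

engagements = np.array([2, 5, 8, 10, 12, 15, 18, 20, 25, 30, 35, 40, 45, 50, 60])

print("mean:", engagements.mean())  # 25.0, pulled up by the heaviest users
for q in (50, 75, 90, 100):
    # Default linear interpolation; other methods shift the estimates slightly.
    print(f"p{q}:", np.percentile(engagements, q))
```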
-
Power user product usage drops 90%? That's not a bug. That's a job change.

Engagement signals that predict power user turnover:
* Logins don't stop, but
* Integrations get turned off
* Permission changes spike
* Export requests spike suddenly
* Workflows get edited and saved... a lot

What this all could mean:
- Prepping the "playbook" for the next role
- Building transition docs
- Wrapping up projects
- Starting handover

I've missed these signals before. I also missed the opportunity: research from Sturdy says that 51% of accounts churn when champions leave. **But that means 49% don't.** Let's call it a 50/50 shot at keeping that revenue.

What makes the difference? Timing. And approach.
1. Call right away or within 2 days (Sturdy says 33% renewal odds if you do this)
2. Offer transition help (no pitching, ok?)
3. Ask to train their replacement
4. Document their workflows
5. Ask about their next gig

There's a lot on the line when a power user leaves a company. I didn't appreciate that power users/champions take solutions they like with them. Spotting the engagement dip early and making one "I see you haven't been in the tools?" call can turn into:
- A saved renewal (plus a proper handoff to the newbie)
- A new logo (at their next company)

We preach account tracking, but knowing what your power users are really doing in your products means you can catch things like losing a champion early; a toy version of that flagging is sketched below. BTW -- when was the last time you changed jobs and told all your software vendors that you were leaving? That never happened.

Bottom line: retention and expansion opportunities are hiding in usage data if you can get your hands on it.
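As an illustration of what watching for these signals could look like in practice, here is a toy pandas sketch that flags accounts showing the spikes described above. The data file, column names, and 3x thresholds are all assumptions made for the example.

```python
import pandas as pd

# Hypothetical weekly per-account usage snapshot; all column names are assumptions.
usage = pd.read_csv("account_usage_weekly.csv")

def champion_departure_risk(row: pd.Series) -> bool:
    """Flag the turnover signals: integrations turned off, permission/export spikes."""
    return bool(
        row["integrations_active"] < row["integrations_active_prev_week"]
        or row["permission_changes"] > 3 * row["permission_changes_trailing_avg"]
        or row["export_requests"] > 3 * row["export_requests_trailing_avg"]
    )

usage["at_risk"] = usage.apply(champion_departure_risk, axis=1)
print(usage.loc[usage["at_risk"], "account_id"].tolist())  # accounts worth a call this week
```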
-
Download numbers are nothing but vanity metrics if your users are leaving through the back door as fast as they enter through the front.

It is easy to get obsessed with the initial spike in user acquisition. We see it all the time here at Full Metal. Founders come to us beaming about hitting their first ten thousand downloads, but when we look at the daily active users, the picture has gone a bit pear-shaped.

Here is the cold reality: nearly 71% of app users will have forgotten all about your app within three months. If you are paying £2 to £5 to acquire a single user in the UK (standard for many industries) and they leave immediately, you are essentially setting fire to your marketing budget. It is a massive drain on resources and a huge missed opportunity.

We need to shift the conversation from acquisition to retention. We need to fix the leaky bucket. The data supports this shift: a study by Bain & Company found that increasing user retention by just 5% can boost profits by anywhere from 25% to a staggering 95%. That is where the real value lies. It is not about casting the widest net; it is about keeping the fish you catch.

Consider the maths of churn. If you start with 10,000 users and have 5% monthly churn, you are fighting a losing battle. But reduce that churn to 2%, and you will see thousands of additional active users within a single year (the sketch after this post works through the numbers).

So, how do we stop the leak? Actionable Takeaways:
✅ Solve a genuine problem: This sounds obvious, but you would be surprised how many apps offer a solution looking for a problem. Ensure your app addresses a real-world headache for your users today, tomorrow, and next week.
✅ Check your "Sanity Metrics": Stop looking at total downloads. Focus on Active Users (DAU/MAU) and Retention Rate. These figures tell you if your business model actually works.
✅ Calculate Lifetime Value (CLTV): Connect engagement to your bottom line. If a user stays for twelve months, what are they worth? Now compare that to the cost of acquiring them. If the maths does not stack up, neither will the business.

Building a loyal following means you get more value from each user and can finally stop pouring money into a strategy that isn't working. Read the full strategy in our latest blog: https://lnkd.in/emz2A--g

Question: When you look at your current app metrics, are you tracking how many people stay, or just how many arrived? #AppRetention #SoftwareDevelopment #BusinessStrategy
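Here is a quick sketch of the churn maths referenced above, ignoring new-user acquisition for simplicity:

```python
# Compounding monthly churn on a 10,000-user base, with no new acquisition.
users = 10_000
months = 12

for monthly_churn in (0.05, 0.02):
    remaining = users * (1 - monthly_churn) ** months
    print(f"{monthly_churn:.0%} churn -> ~{remaining:,.0f} active users after {months} months")

# 5% monthly churn leaves ~5,404 users; 2% leaves ~7,847.
# Cutting churn from 5% to 2% keeps roughly 2,400 extra active users in a year.
```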
-
Analytics aren’t just numbers; they’re your roadmap to publishing growth. Data isn’t power, it’s potential. For publishers, the real value lies in transforming raw metrics into repeatable growth strategies that drive audience retention, revenue, and #SEO performance.

Too often, publishers collect vast amounts of data but fail to extract meaningful takeaways. The key is understanding what content resonates, how audiences engage, and where opportunities for growth exist. Collecting data is easy; extracting insights is not. Without clarity, metrics like pageviews and bounce rates become distractions. For example, a 40% drop in returning visitors isn’t just a traffic issue; it’s a retention red flag. By using the right tools and refining strategies based on real data, you can turn numbers into growth.

Here are actionable strategies to turn data into action:

1. Know Your Audience Beyond Pageviews
Pageviews alone don’t tell the full story. Instead, track return visitors, time on page, and scroll depth to measure true engagement. Tools like Google Analytics 4 (GA4) and Parse.ly provide deeper insights. Cohort analysis can reveal trends: millennials may prefer video, while Gen X engages more with newsletters. For example, if mobile traffic spikes by 20% after 8 PM, push breaking news via mobile notifications to capture that audience in real time.

2. Optimise Content Performance with Behavioural Data
Understanding why some content performs well helps you replicate success. Use Google Search Console and Semrush to analyse search visibility and Hotjar to track user interactions. For example, if "AI in media" gets 3x more shares than "content trends," double down on AI-related content. Additionally, A/B test headlines (e.g., “5 Growth Hacks” vs. “Proven Tactics”) to see what improves click-through rates; a minimal significance check is sketched after this post.

3. Track Conversions, Not Just Traffic
Traffic alone doesn’t guarantee success; conversions do. Set up goals in GA4 to measure newsletter sign-ups, paid subscriptions, or product purchases. Identify which referral sources drive the highest conversion rates, and adjust your strategy accordingly. For example, premium subscribers from "how-to guides" tend to have a 15% higher lifetime value than general news readers, meaning content type matters when driving long-term revenue.

To scale what works, automate reporting with Power BI or Looker Studio to save 10+ hours per month. Analytics only matter when they drive actions. The biggest mistake any publisher can make is to treat data as a report card instead of a playbook. Start by auditing one content category this week, setting up a conversion goal in GA4, and A/B testing a headline. Data doesn’t lie, but it won’t work unless you do something.

What analytics tools are you using to grow your publishing efforts? Share your go-to platforms in the comments below. #DigitalPublishing #SEO #ContentStrategy #AudienceGrowth #DataAnalytics
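For the headline A/B test in point 2, here is a minimal two-proportion z-test sketch using only the Python standard library. The click and impression counts are made up for illustration; they are not from any real campaign.

```python
from statistics import NormalDist

# Hypothetical A/B headline test: clicks / impressions per variant.
a_clicks, a_views = 120, 2000   # "5 Growth Hacks"
b_clicks, b_views = 155, 2000   # "Proven Tactics"

p_a, p_b = a_clicks / a_views, b_clicks / b_views
p_pool = (a_clicks + b_clicks) / (a_views + b_views)
se = (p_pool * (1 - p_pool) * (1 / a_views + 1 / b_views)) ** 0.5
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"CTR A={p_a:.1%}, B={p_b:.1%}, z={z:.2f}, p={p_value:.3f}")
```

With these toy numbers the difference is significant at the usual 5% level (p is about 0.03); with smaller samples the same CTR gap often is not, which is exactly why the check matters.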
-
Many companies think they're set if they have product usage metrics and can track user engagement. But unfortunately, that's only part of the picture. The real value comes from connecting that usage data to actual business impact.

The best product ops teams create the vision and ability to connect those data points. They help relate user behavior metrics to critical business outcomes like revenue, churn, and more.

Imagine seeing a feature with rising usage month-over-month. Seems great, right? But what if you found that the usage spike was mainly from a customer segment you're looking to phase out... while adoption from your strategic focus segment had dropped 20%? Yikes.

Having that analytical power to map product metrics to business metrics is the secret sauce. With product ops, you can scale those capabilities across the entire product org and executive team, guiding decision-making in the right direction.

As Aniel Sud, CTO of Optimizely, puts it: "Product ops becomes data-driven over time, turning data into actual value." And according to Joe Peake of Featurespace, the goal is analyzing each product's revenue opportunity and ROI, not just relying on gut feelings about the market.

True product insight means bringing all data together, from product usage to customer feedback to financial impacts. As Shira Bauman of Zapier notes, "Learning about the data that people care about, and partnering across data teams, is so important."

With product ops connecting those dots, we get out of the "build trap" and can optimize for real outcomes. The path to successful products lies in combining engagement metrics with business performance.

What's your experience been in tying product usage data to business metrics? Share your insights and lessons learned in the comments!
-
In UX, we talk a lot about what users think, but we rarely study how their attitudes actually change over time. Most research still relies on one-time surveys like SUS, NPS, or post-test ratings. These snapshots are useful, but they tell us almost nothing about how trust grows, how frustration accumulates, or how confidence rises and collapses after a single confusing update. Attitudes are not steady states. They are trajectories shaped by experience.

There are scientific ways to track those trajectories.

Continuous-Time SEM lets researchers measure how satisfaction or trust evolves in real time, even if we collect feedback at irregular moments. A streaming app can trigger a question after each session and see exactly when enjoyment starts to drop, so recommendations can intervene before disengagement sets in.

Latent Transition Analysis helps us understand how people move between hidden states such as novice, intermediate, competent, or stuck. Instead of guessing who needs help in onboarding, we can calculate the probability a user will progress or remain frustrated and then redesign tutorials to move them forward.

Bayesian Hierarchical Models solve a common UX problem: what if we do not have huge samples like consumer apps do? With twenty or thirty enterprise users, traditional statistics break down, but Bayesian methods can still model growth and decline in attitudes. They can reveal that confidence improves for new employees but decreases for experts after a redesign, a pattern that would otherwise remain invisible.

Joint Modeling goes further by connecting attitude trends with real outcomes such as churn. It can show that a drop in usability or motivation predicts cancellation two weeks before users actually leave, turning measurement into prevention.

One of the most powerful and practical tools is Hidden Markov Modeling. Instead of relying on surveys, it infers emotional states from behavior like hesitation, rage clicks, repeated backtracking, or abandoned tasks. It detects frustration even when people are silent, revealing emotional shifts that traditional surveys fail to capture (a toy sketch follows this post).

If you want to go deeper into these methods and see more concrete examples, I put together a full breakdown on the blog. You can read it here: https://lnkd.in/eY_Nwme2
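To make the Hidden Markov idea concrete, here is a toy sketch using the hmmlearn library. The behavioural features, the tiny dataset, and the two-state "calm vs. frustrated" interpretation are all illustrative assumptions; a real analysis would need far more sessions and careful validation.

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

# Toy per-session behavioural signals for one user:
# [hesitation_seconds, rage_clicks, backtracks] (feature names are assumptions).
X = np.array([
    [1.2, 0, 0],
    [1.5, 0, 1],
    [4.8, 3, 2],
    [6.1, 5, 4],
    [1.1, 0, 0],
    [1.3, 1, 0],
])

# Two hidden states, inferred purely from behaviour, with no survey needed.
model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(X)
states = model.predict(X)
print(states)  # e.g. [0 0 1 1 0 0]: sessions 3-4 land in the high-friction state
```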
-
Everyone thinks GA4 is one metric. But the brands winning in 2025 know it’s twelve.

Most teams still treat Google Analytics like it’s a single dashboard: one view, one report, one number to check. But that era is gone. Over the past few years, analytics has shifted from tracking traffic to understanding behavior. And inside this new environment, GA4 isn’t one metric anymore. It’s twelve different signals. Twelve different insights. Twelve different decisions.

Yet so many brands treat them as one. They mix users with sessions. Engagement with conversions. Entrances with exits. Views with bounce rate. And when you blend all twelve, you don’t understand performance; you confuse it.

Because:
Users say: I want to measure my reach.
Sessions say: I want to measure my activity.
New Users say: I want to measure my growth.
Average Engagement Time says: I want to measure content quality.
Bounce Rate says: I want to measure relevance.
Conversion Rate says: I want to measure success.
Entrances say: I want to see where the journey starts.
Exits say: I want to see where interest ends.
Views Per User says: I want to see curiosity.
Engaged Sessions say: I want to see depth.
Engagement Rate says: I want to see attention.
Returning Users say: I want to see loyalty.

Twelve different questions. Twelve different truths. Google understands this. Smart marketers understand this. But most brands… still don’t.

At Brand ClickX, we saw it everywhere: brands weren’t struggling because they lacked traffic. They were struggling because they weren’t reading the right metric for the right decision. So we rebuilt the approach. We stopped treating GA4 as a counter and started treating it as a map. Users for reach. Sessions for activity. Engagement for quality. Conversions for performance. Returning users for loyalty.

And that’s when everything changed. Insights became clearer. Decisions became sharper. Growth became predictable.

GA4 didn’t replace analytics. It upgraded it. It exposed the weaknesses of single-metric thinking. Each metric has a job. Each metric has a purpose. And when you stop treating them as one, your analytics stop confusing you and start guiding you.
-
Day 1: What I’d Do as an Analyst: Tackling a Drop in Netflix Engagement

Hi Everyone! This is Day 1 of my 7-day series, “What I’d Do as an Analyst.” Over the next week, I’ll tackle real-world scenarios from different industries to show how I’d approach analytical challenges. Today, we’re diving into Netflix and a problem that could stump any analyst.

The Scenario: Netflix notices a sudden drop in user engagement for its recommendation engine. Instead of watching recommended shows, users are manually searching for content. What’s causing this, and how would I fix it?

Step 1: Understanding the Problem
This signals a potential mismatch between the recommendations and user preferences. As an analyst, my first step would be to fully grasp the scope of the issue:
- Is this a specific trend or a widespread problem?
- Are certain user groups (new users, specific regions) more affected than others?

Step 2: Analyzing the Data
I’d dig into:
1️⃣ User Behavior
- CTR (Click-Through Rate) on recommendations vs. manual searches.
- Time spent browsing vs. selecting content.
- Search terms vs. the recommended titles to identify gaps.
2️⃣ Content Performance
- Performance of recently added titles in recommendations.
- Popular genres/themes among users in different regions.
- Localization impact: is engagement lower in certain regions?
3️⃣ Algorithm Metrics
- Diversity of recommendations: are users seeing the same types of content repeatedly?
- Coverage metrics: how well does the algorithm represent the catalog?
- Precision and recall: are recommendations predicting user interests accurately? (See the sketch after this post.)
4️⃣ User Feedback
- Surveys, reviews, or support tickets to understand user frustration or dissatisfaction.

Step 3: The Solution Approach
Once the data tells the story, here’s how I’d approach solving it:
1️⃣ Identify Patterns
- Compare users who search manually vs. those engaging with recommendations.
- Check for seasonal trends or catalog changes affecting recommendations.
2️⃣ Evaluate Algorithm Performance
- Conduct A/B testing by tweaking algorithm parameters to improve personalization or diversify recommendations.
3️⃣ Enhance Recommendations
- Swipe-Style Discovery: gamify recommendations with a swipe feature to make discovering new content fun and interactive.
- Mood Slider: let users pick their current mood to instantly tailor recommendations.
- Socially Driven Recommendations: highlight shows popular in users’ circles or among their friends.
4️⃣ Test Hypotheses
- Experiment with updated recommendations. Monitor engagement metrics like CTR, watch time, and manual searches post-update.

Step 4: Expected Outcome
This approach would help:
- Pinpoint gaps in content relevance or user preferences.
- Increase CTR, watch time, and overall satisfaction.

Let’s talk! How would you approach this challenge? Share your thoughts below! 👇 #DataAnalytics #DataDriven #BusinessAnalysis #DataScience #RecommendationEngine #NetflixData #7DayChallenge
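For the precision and recall check in Step 2, here is a minimal Python sketch of how those two metrics might be computed per user from recommended vs. watched title sets. The title names are made up for illustration.

```python
def precision_recall(recommended: set[str], watched: set[str]) -> tuple[float, float]:
    """Precision: share of recommended titles the user actually watched.
    Recall: share of watched titles that had been recommended."""
    hits = recommended & watched
    precision = len(hits) / len(recommended) if recommended else 0.0
    recall = len(hits) / len(watched) if watched else 0.0
    return precision, recall

# Hypothetical user: 4 titles recommended, 5 titles watched, 2 overlap.
recommended = {"show_a", "show_b", "show_c", "show_d"}
watched = {"show_b", "show_d", "show_e", "show_f", "show_g"}

p, r = precision_recall(recommended, watched)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.50, recall=0.40
```

Averaging these per-user scores over time would show whether the engine's relevance is actually degrading or whether users are simply searching more.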
-
How AI Can Predict User Drop-Off Points! (Before It's Too Late)

Have you ever wondered why users abandon your app, website, or product halfway through a workflow? The answer lies in invisible friction points, and AI has become the perfect detective for uncovering them.

Here's how it works:
1️⃣ Pattern Recognition: AI analyzes vast datasets of user behavior (clicks, scrolls, pauses, exits) to identify trends.
2️⃣ Predictive Analytics: Machine learning models flag high-risk moments (e.g., 60% of users drop off after step 3 of onboarding); a toy model is sketched after this post.
3️⃣ Real-Time Alerts: Tools like Hotjar, Mixpanel, or custom ML solutions can trigger warnings when users show signs of frustration (rapid back-and-forth, rage clicks, session stagnation).

Why this matters:
E-commerce: Predict cart abandonment before it happens. When a user lingers on the shipping page, AI can trigger a live chat assist or dynamic discount.
SaaS: Spot confusion in onboarding. When users consistently skip a setup step, it's a clear signal your UI needs simplification.
Content Platforms: Identify "boredom points" in videos or articles. Adjust pacing, length, or CTAs to maintain engagement.

The Bigger Picture: AI isn't just about fixing leaks; it's about understanding human behavior at scale. By predicting drop-off, teams can:
✅ Proactively improve UX before losing customers
✅ Personalize interventions (e.g., tailored guidance for struggling users)
✅ Turn data into empathy, because every drop-off point represents a real person hitting a wall

The future of retention isn't guesswork. It's about combining AI's analytical power with human intuition to create experiences that feel effortless.

Have you used AI to predict user behavior? Share your wins (or lessons learned) below! 👇
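As a concrete illustration of the predictive step, here is a toy scikit-learn sketch that scores a live session's drop-off risk from frustration signals. The features, the handful of labeled sessions, and the 0.5 threshold are all assumptions; a production model would be trained on far more data and validated properly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy per-session features: [rage_clicks, back_navigations, seconds_idle].
X = np.array([
    [0, 1, 5], [1, 0, 10], [6, 4, 90], [0, 0, 3],
    [5, 3, 120], [7, 5, 200], [1, 1, 15], [0, 2, 8],
])
y = np.array([0, 0, 1, 0, 1, 1, 0, 0])  # 1 = user abandoned the workflow

model = LogisticRegression().fit(X, y)

# Score a live session; intervene (chat prompt, simplified step) above a threshold.
live_session = np.array([[4, 3, 75]])
risk = model.predict_proba(live_session)[0, 1]
if risk > 0.5:
    print(f"drop-off risk {risk:.0%}: trigger an intervention")
```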