Helio

Research Services

Campbell, California 29,727 followers

Test your design with UX metrics.

About us

Test your design with UX metrics. Decide faster.

Most teams wait too long to test, and by then, it’s too late. Helio helps product and design teams move faster by turning feedback into clear, actionable proof. Test ideas, prototypes, and concepts before you build. See what works and skip what doesn’t. No long research cycles. No endless debates. Just fast design signals that help you make confident decisions.

We share daily examples and tools from real teams using Helio to bring clarity back into design. Follow along if you want to spend less time guessing and more time improving what actually works.

Created by ZURB, Helio builds on 25 years of design experience helping 2,500+ teams make design work. Because progress starts with proof.

Website
https://helio.app/
Industry
Research Services
Company size
11-50 employees
Headquarters
Campbell, California
Specialties
Product Discovery, UX Research, and Market Research

Updates


Modeling diagrams sharpen requirements. In this throwback featured post, we like Karl Wiegers' point that diagrams help clarify requirements and make them easier to understand. They help teams improve ideas early and avoid costly mistakes. He explains that relying only on written requirements often causes confusion and missed details. Check out his article: https://lnkd.in/gArnusaY

Modeling is a smart way to reduce risk and improve design quality during development. Diagrams (like process flows or state models) help teams:
• see the big picture
• spot gaps and errors early
• communicate better across roles
• iterate quickly and cheaply

Here’s how to think about the process:
1. Choose a diagram that helps your team understand the problem
2. Use common formats like UML to keep things clear
3. Walk through diagrams with your team to spot missing steps
4. Start with rough drafts to get early feedback
5. Sketch first, then switch to tools like Visio
6. Check that all diagrams match to catch errors
7. Use diagrams to improve design communication

💬 We asked Karl why he wrote the article: “A long time ago, I discovered the power of drawing various analysis models to represent software requirements. These alternative views provide a deeper understanding and reveal requirement errors and gaps. This article briefly describes how to apply several useful types of analysis models.”

We agree that providing another way to visualize a workflow can only strengthen the requirements. Helio helps you test workflows early with users to spot confusion fast. UX metrics show how well people understand each step. #productdesign

    • This image shows a chart called “Modeling Diagrams” about how a billing system works. It explains the life of an invoice. 

The boxes are different steps. It starts with “Initiated,” then moves to “Prepared,” then “Sent.” If the customer pays a deposit, it goes to “Partially Paid,” and then to “Paid” when the full balance is received. 

If the customer cancels, it can go to “Canceled” or “Canceled & Refunded.” There is also a “Being Revised” step if changes are made. 

Arrows between the boxes show how the invoice moves from one step to another.
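The invoice lifecycle in the diagram is essentially a small state machine, and writing it out is one way to do the "walk through the model" step Wiegers recommends. Here is a minimal sketch; the state names come from the image description above, but the exact set of allowed transitions (e.g. whether a revision happens from "Sent") is our assumption, not Helio’s or Wiegers' specification:

```python
# Sketch of the invoice lifecycle as a state machine.
# State names follow the "Modeling Diagrams" description above;
# the transition set is an assumed reading of the chart.

VALID_TRANSITIONS = {
    "Initiated": {"Prepared"},
    "Prepared": {"Sent"},
    "Sent": {"Partially Paid", "Paid", "Canceled", "Being Revised"},
    "Partially Paid": {"Paid", "Canceled & Refunded"},
    "Being Revised": {"Sent"},
    "Paid": set(),                 # terminal
    "Canceled": set(),             # terminal
    "Canceled & Refunded": set(),  # terminal
}

class Invoice:
    def __init__(self):
        self.state = "Initiated"

    def transition(self, new_state: str) -> None:
        # Walking the model like this surfaces gaps early, e.g.
        # "can an invoice be revised after a deposit is paid?"
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"Cannot move from {self.state} to {new_state}")
        self.state = new_state

# The "happy path" from the chart: deposit first, then full payment.
inv = Invoice()
for step in ["Prepared", "Sent", "Partially Paid", "Paid"]:
    inv.transition(step)
```

Even this rough version raises the kind of question the article is about: any transition missing from the table is a requirement the team has to confirm or reject.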

Know what drives value before measuring it. We love Tim Herbig’s argument that what matters most is understanding how metrics connect and how value flows through your product. Every product team has an audience, even internal teams. The goal of any metrics setup should be to measure the value delivered to that audience, whether it is customers or teams like sales and marketing. Check out his post: https://lnkd.in/g7tFFXKa

Here are Tim’s big ideas:
1. Frameworks are secondary. The structure you use matters less than understanding how metrics connect.
2. Every team has an audience. Even internal product teams must measure value delivered to someone.
3. Metric labels are contextual. North Star, leading, lagging, and KPI all depend on where you sit in the value chain.
4. Start with value flow. Map how value moves through your product before defining goals.
5. Strategy should drive metrics. Metrics should connect directly to decisions, product strategy, and discovery work.

💬 We asked Tim why he created the post: "To help teams make intentional decisions about how to measure progress instead of just filling out the framework."

Straightforward. And he’s got an upcoming class to prove it! If you’re a product or design leader thinking about which metrics to use, join us in the forum where we unpack decisions like this every week. https://lnkd.in/gynueqWu

    • The image shows a chart called “Metrics Framework” with the line, “Know what drives value before measuring it.” It looks like a tree with boxes connected by lines.

At the top is a green box with dollar signs for money. Under that are green boxes for internal business metrics like team capacity and outcome ROI. In the middle is a yellow box called the North Star Metric: “The number of bi-weekly well-packaged releases.”

Below that are yellow input metrics, like number of teams doing DevOps work, automated test coverage, and number of release rollbacks. 

At the bottom are purple boxes with more product KPIs. On the right, a blue box asks how to increase automated test coverage for cloud teams. 

The chart shows how small metrics connect to bigger business goals. It is by Tim Herbig.
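A tree like the one in the chart can be captured as a plain nested structure, which makes the "map how value flows" step concrete. A small sketch, using metric names from the image description above; the data structure and helper are our illustration, not Herbig’s tooling:

```python
# Sketch: a metrics tree where input metrics roll up to a North Star,
# which in turn supports business-level metrics.

METRICS_TREE = {
    "Business value ($$)": {
        "Team capacity": {},
        "Outcome ROI": {},
        "North Star: bi-weekly well-packaged releases": {
            "Teams doing DevOps work": {},
            "Automated test coverage": {},
            "Release rollbacks": {},
        },
    },
}

def leaf_metrics(tree: dict) -> list[str]:
    """Collect the input metrics at the bottom of the value chain."""
    leaves = []
    for name, children in tree.items():
        if children:
            leaves.extend(leaf_metrics(children))
        else:
            leaves.append(name)
    return leaves
```

Walking the tree top-down answers "what drives value"; walking it bottom-up shows which inputs a team can actually move.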

Differentiate on features or redefine the problem. We love Anthony Pierri’s idea about choosing your market. He says you have to decide what kind of game you are playing. You can sell your product in a market that already exists, where people have money set aside and know what they are buying, but you will face big, strong competitors. Or you can create something new for a problem people solve in messy ways today, where there is less direct competition, but you have to teach people why they need it. Check out his post: https://lnkd.in/gssjYkNs

Each choice changes how you talk about your product, how you price it, and how you sell it. The big decision is not just what to call your product, but which game you think you can win, and then sticking to it.

In a mature market, like CRM:
→ Buyers are already shopping
→ Budget is allocated
→ The category is clear
→ Big players dominate
→ Demand exists before you show up
The upside is built-in demand. The downside is heavy competition and pricing pressure.

In an immature market:
→ There is no clear category.
→ People are solving the problem with messy workflows, spreadsheets, humans, and patchwork tools.
→ Your real competition is the status quo.
→ No one is actively searching for your type of product.
→ There is no predefined budget, but you get more freedom on pricing.
The upside is less direct competition and more control over framing. The downside is you must create demand from scratch.

💬 We asked Anthony why he created the decision tree: "So many founders pick a category to put themselves in without understanding the immense implications that come with that type of decision. This was my attempt to try and visualize the downstream impact of putting yourself in a category."

Great stuff. ⚽️ If you’re a product or design leader thinking about which game you’re really playing, join us in the forum where we unpack decisions like this every week. https://lnkd.in/gynueqWu

    • A flowchart titled “Product Positioning” asks if you should put your product in an existing category. 

It shows a series of yes and no questions in boxes connected by arrows. The chart starts by asking if there is a category that fits your product. 

If yes, it asks about market size, competition, and if you have strong differences. 

If no, it asks if part of your product fits a smaller category or if you can change packaging or pricing. It also asks if you can grow fast or have enough time and money to wait. 

At the end, the chart leads to two choices: “Position in a Category” or “Position for a Workflow.”

Drive product performance by tracking outcomes, not outputs. In this throwback featured post, we love Calvin Arterberry’s article, which shows product teams how to measure impact by focusing on outcomes, not just outputs. He explains how to track meaningful performance metrics, tie them to business goals, and make smarter decisions. Focusing on outcomes helps teams prioritize what matters: proving product value and improving user experiences. Check out his article: https://lnkd.in/dTBs-zKb

💬 We asked Calvin why he wrote the article: "Honestly, I believe that product designers are at their best when they place business and user needs ahead of ‘looks cool-ism’ and design principle dogma we all learned in design school. We are hired to solve human and business problems, not to make art."

Here are the different types of performance metrics you can use to measure product outcomes:
1️⃣ Leading metrics
Predict future trends by identifying early signs of change in user behavior or business performance.
2️⃣ Behavioral metrics
Track user actions over time, like engagement, conversion, and retention, to understand how people use the product.
3️⃣ Lagging metrics
Confirm past trends by analyzing data to measure long-term success and business impact.
4️⃣ Qualitative metrics
Measure user sentiment through feedback and language analysis to gauge whether responses are positive, negative, or neutral.
5️⃣ Business metrics
Analyze financial and operational data to track growth, profits, and overall business performance.

We love it. Showing impact requires connecting the metrics. Before you build, measure product outcomes by collecting real user data through quick tests and surveys in the design phase. Helio makes this easy. #productdiscovery #productdesign

    • Infographic titled “Performance Metrics.” It shows five types of product metrics in colored rows from top to bottom.

Leading metrics predict future trends. Example: Feature Adoption Rate, which shows how many users start using a new feature.

Behavioral metrics track user actions. Example: Task Completion Rate, which shows how many users finish a task they start.

Lagging metrics confirm past results. Example: Customer Satisfaction Score (CSAT), which measures how happy users are.

Qualitative metrics measure user feelings. Example: Product-Market Fit Score, based on how many users would be very upset if the product went away.

Business metrics track money and growth. Example: Annual Recurring Revenue (ARR), which shows how much money the product makes each year.

Each row includes a short description and a simple math formula example.
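The "simple math formula" examples in each row reduce to ratios you can compute directly. A minimal sketch using the standard definitions of these metrics (the formulas below are conventional definitions, not copied from the infographic):

```python
# Sketch: conventional formulas behind the example metrics above.

def feature_adoption_rate(users_of_feature: int, total_active_users: int) -> float:
    """Leading metric: share of active users who tried a new feature."""
    return users_of_feature / total_active_users

def task_completion_rate(tasks_completed: int, tasks_started: int) -> float:
    """Behavioral metric: share of started tasks that were finished."""
    return tasks_completed / tasks_started

def csat(satisfied_responses: int, total_responses: int) -> float:
    """Lagging metric: Customer Satisfaction Score as a percentage."""
    return 100 * satisfied_responses / total_responses

def arr(monthly_recurring_revenue: float) -> float:
    """Business metric: Annual Recurring Revenue from MRR."""
    return monthly_recurring_revenue * 12

print(task_completion_rate(45, 60))  # 0.75
```

The point of the infographic holds in code form too: each metric is cheap to compute; the work is choosing which ratio connects to an outcome your team owns.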

Metrics don’t create progress. If teams are measuring everything, why do they still get stuck in decision making? We’ve found it’s because their metrics don’t tie back to a design signal and don’t move the work forward. Design, now more than ever, needs viable proof that its work is creating momentum. Metrics tell you the story of what happened; design signals tell you what to do next. When those signals are front and center, design stops feeling subjective ... and starts earning the credit it deserves.

This week’s pressure points lean into why signals are so important for driving impact. With features from:
• Jodah Jensen measuring real outcomes
• Bryan Zmijewski making design impact visible
• Hristo Butsev turning design into signal
• Jordan Dalladay strengthening judgment with AI

How is your team making design visible right now? We’d love to hear.


Create product roadmaps with company goals and priorities. In this throwback featured post, we enjoy Jean Huang’s article, which highlights the importance of starting with clear company goals and strategies to create a product roadmap. She shares a structured approach from Stanford’s Product Management Accelerated program to ensure product decisions align with business objectives. The roadmap should be a flexible tool that keeps teams focused on the most important initiatives and connects big-picture goals to everyday work. Check out her post: https://lnkd.in/gs7Hfwtx

Here are her steps:
1️⃣ Define company objectives
Keep objectives simple, clear, and aligned across all departments to ensure focus and collaboration.
2️⃣ Break down strategies
Decompose company strategies into 3–5 actionable product strategies. Use the MECE framework to ensure clarity and avoid overlap.
3️⃣ Form hypotheses
Identify how each subtopic or initiative supports the company’s overall goals and objectives.
4️⃣ Prioritize initiatives
Leverage data and user feedback to focus on the initiatives with the highest impact.
5️⃣ Create product strategies
Develop a clear list of product strategies or initiatives based on the defined priorities.
6️⃣ Build the product roadmap
Turn strategies into a dynamic, prioritized roadmap with actionable projects that guide execution.

We love it. Helio helps create better product roadmaps by providing fast UX metrics to guide decisions at every step, from setting objectives to prioritizing initiatives. It keeps roadmaps dynamic and user-focused, ensuring alignment with company goals and real customer needs. #productdiscovery #productdesign

    • A slide titled “Product Roadmap” shows how a company turns its big goal into small projects. At the top, it says the goal is to increase revenue by making users happier. 

This big goal connects to new users and existing users. Under that are ideas like onboarding, new features, performance, and fixing bugs.

There are short reasons for each idea, such as users leaving during onboarding or the product being too slow. These lead to main goals like better experience and better user support. 

At the bottom is a list of projects in order, like simplifying onboarding and making an interactive guide. 

On the right side, arrows show how company strategy breaks down into product strategy and then into a product roadmap.

Skipping research makes early prototypes less effective. In this throwback featured post, we enjoy Sam Ladner, PhD’s post that challenges Lean Startup and Agile principles, arguing that early prototyping and rapid iteration don’t work without first gaining deep knowledge. She shared a 9-year HBR study of five startups, which found that the successful ones prioritized generative research—interviewing users, observing their context, and mapping the problem space—before building a product. Check out her post: https://lnkd.in/gTc5EePq

Rushing to market with a low-fidelity prototype skips this critical step, leading to failure. Understanding the problem and its context is essential for success. Here’s a link to Douglas Hannah and Shi-Ying L.'s article and research: https://lnkd.in/gUrDMS8u

Here are the big takeaways:
1. Prototypes alone don’t solve the problem. Early prototypes give shallow feedback; you need to understand the problem first.
2. Research before prototyping leads to better results. Talking to users and mapping the problem space leads to smarter solutions.
3. Early prototypes can give false signals. Quick prototypes often show surface interest, not real product value.
4. Focus on one product-market pair early. Committing to one idea helps you learn more and improve faster.
5. Explore broadly before building anything. Understanding multiple opportunities first saves time and resources later.
6. Learn strategically after initial failure. Use what you know to explore new ideas instead of pivoting blindly.
7. Narrow down options to refine your product. Drop weak ideas and focus resources on what works best.

These are great ideas. We included a summary of the five companies they researched as a reference. What are your thoughts? Join hundreds of product and design leaders sharing how they use UX metrics in our Glare forum to make better prototyping decisions. https://lnkd.in/gynueqWu

    • This image is a table called “Generative Research” with the subtitle, “Understanding users early leads to better product-market fit.” It compares five companies: Stardoc, Goodhealth, Nudge, Orthofix, and Turnaround.

The table has five columns: Venture, Approach, Commitment, Outcome, and Indicators.

Stardoc and Goodhealth studied users early through interviews and learning. They focused on specific markets and reached strong product-market fit with real growth.

Nudge and Orthofix focused more on fast prototypes but did not gain traction. They struggled with customers and revenue.

Turnaround first struggled, then shifted to better learning and focus. In the end, they doubled their customers and grew their revenue.

The chart shows that early user learning leads to better results.

If you can’t research, measure outcomes to guide better decisions. In this throwback featured post, we love Jodah Jensen’s idea that measuring outcomes is the next best step if research isn’t possible. Many teams avoid research because they see it as costly, slow, or unnecessary. However, it helps reduce risks, challenge assumptions, and ensure products solve real problems. When time or budget is tight, measuring outcomes keeps teams informed and on track. Check out his article: https://lnkd.in/gK4_jcFy

💬 Jodah shared with us: “My main focus when writing is on breaking through the ‘hard things of hard things’ in experience design; many of my topics explore, or are at least congruent to, the ideas fostering an evidence-based approach to problem-solving.”

Jodah argues that by measuring, teams can make data-informed decisions and improve their product iteratively. Here are a few examples using UX metrics:

Value Proposition
↳ Measure usefulness and expectations to track how well the product meets user needs.
Functionality
↳ Track success rate and error rate to ensure features work without major issues.
Usability
↳ Measure comprehension, task completion, and time on task to see if users navigate easily.
Performance
↳ Monitor session duration, drop-off, and click-through rates to assess system speed and reliability.
Customer Satisfaction
↳ Use brand score, loyalty, and post-task satisfaction to gauge user sentiment.

Measuring is better than guessing. If you can’t invest in deep research, at least collect meaningful data to guide your decisions. Fortunately, Helio allows you to capture UX metrics as you ask questions before building. What are your thoughts? Join hundreds of product and design leaders sharing how they use UX metrics in our Glare forum to make better decisions. https://lnkd.in/gynueqWu #productdiscovery #productdesign

    • The image is a side by side chart titled “Research vs Measure.” A short line under the title says research sets outcomes and measurement tracks impact.

On the left is a blue column labeled “Research.” It lists what research focuses on: understanding if a product solves real problems, checking if features work, studying usability, testing speed and reliability, gathering user opinions, learning how users feel, seeing how the product affects sales, and finding out why users stay or leave.

On the right is a green column labeled “Measure.” It lists what measurement tracks: feature usage, errors and bugs, task speed and mistakes, response time and uptime, satisfaction scores, session length and drop offs, revenue and conversion rates, and repeat use and churn.

In the center are labels like Value Prop, Functionality, Usability, Performance, Customer Satisfaction, Experience, Revenue, and Retention, connecting both sides.

Product teams need new patterns. Every week, a new wave of apps and AI agents launches. A little team skepticism is healthy with all these tools. Too much second-guessing about what to integrate, though, can slow your team down. And when uncertainty creeps in, teams default to safe, technical decisions. They implement what is feasible, not what makes sense for users. Confidence erodes internally and externally.

Clarity is the growth lever here. When people understand what they are seeing, what to do next, and why it matters, momentum builds in products and services. When teams share patterns for guiding AI, prioritizing work, and reducing ambiguity, doubt shrinks. The strongest teams treat every feature as an opportunity to reduce doubt and strengthen shared understanding.

This week’s pressure points explore how trust is built through practical patterns in the work. With features from:
• Sharang Sharma building AI people can trust
• Karla G. focusing effort on what drives progress
• Stephanie Muxfeld aligning teams toward the same future

How is your team building trust through clarity right now? Let us know.


Prioritize features by impact on user satisfaction. In this throwback featured post, we agree with Hsin-Jou Lin that design teams often face hard choices when they can’t build everything. The Kano Model helps teams focus on what matters most: starting with must-have features, then adding ones that delight users, and avoiding anything that might cause frustration. It helps turn a long list of ideas into clear priorities based on real customer needs. She recommends choosing features based on how they affect user satisfaction. Check out her article: https://lnkd.in/gBc--QpW

The Kano Model, created by Professor Noriaki Kano, groups customer needs into five types based on how each feature impacts user satisfaction:
1️⃣ Must-be
Basic expectations. Users won’t be impressed if they’re there, but will be upset if they’re missing (like a “my account” feature).
2️⃣ One-dimensional
The more you provide, the happier users are. If missing, users are dissatisfied (like the ability to sort search results).
3️⃣ Attractive
Pleasant surprises. Users love them if present, but don’t mind if they’re missing (like visual search for users who care about it).
4️⃣ Indifferent
Features users don’t care about either way.
5️⃣ Reverse
Features that annoy users (like showing how often a product was searched).

💬 We asked Hsin-Jou why she wrote the article: “To share my insights from applying it to an image search project on an e-commerce website. After developing an MVP, we conducted user research to understand how users felt about this feature and what prevented them from using it. We also needed real-user data to verify the quality of image search results. The Kano Model helped us a lot to identify and prioritize which features to implement to meet user goals. This experience inspired me to share how the Kano model helps in product design.”

Love it. Helio helps you test which features matter most to your users by collecting quick feedback. You can use UX metrics to see what delights, frustrates, or gets ignored. Focus on building what improves the experience! #productdiscovery #productdesign

    • A chart titled “Prioritizing Product Features” shows how different features affect user happiness. The left side says “Not fulfilled” and the right side says “Fully fulfilled.” The top says “Satisfaction” and the bottom says “Dissatisfaction.”

There are five colored lines. A blue line called “One-dimensional” goes up as features improve, showing more is better. 

A green curved line called “Attractive” rises fast near the top, showing surprise features make people very happy. 

An orange curved line called “Must-Be” starts very low and slowly levels out, showing basic features prevent frustration. 

A gray flat line called “Indifferent” shows users do not care much. 

A purple line called “Reverse” goes down, showing some features make users unhappy.
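In practice, the five categories are assigned from a pair of survey answers: how a user feels if the feature is present (functional) and how they feel if it is absent (dysfunctional). A simplified sketch of the standard Kano evaluation lookup; real Kano surveys use the full 5x5 evaluation table, and the branching below is a compact approximation of it:

```python
# Sketch: classify a feature with a simplified Kano evaluation lookup.
# Answers use the standard 5-point Kano scale, best to worst.

ANSWERS = ["like", "expect", "neutral", "tolerate", "dislike"]

def kano_category(functional: str, dysfunctional: str) -> str:
    """functional = answer when the feature is present,
    dysfunctional = answer when it is absent."""
    f, d = ANSWERS.index(functional), ANSWERS.index(dysfunctional)
    if f == d:
        # Identical extreme answers contradict each other.
        return "Questionable" if functional in ("like", "dislike") else "Indifferent"
    if f > d:
        return "Reverse"  # users prefer NOT having the feature
    if functional == "like":
        return "One-dimensional" if dysfunctional == "dislike" else "Attractive"
    return "Must-be" if dysfunctional == "dislike" else "Indifferent"

# Visual search: loved when present, absence not missed.
print(kano_category("like", "neutral"))    # Attractive
# "My account": expected, and its absence upsets users.
print(kano_category("expect", "dislike"))  # Must-be
```

Running every respondent's answer pair through this lookup and tallying the categories is what turns survey data into the prioritization the chart describes.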
