Multitudes

Software Development

Build an engineering team that leads in AI

About us

Multitudes helps you build an engineering team that leads in AI. Our approach turns data into continuous improvement – highlighting what's working and what's not, giving nudges to action, and helping every team track their progress.

Based on original research, Multitudes goes beyond classic metrics like DORA, SPACE, and DevEx to give you what you need for an AI era:
  • Visibility into AI adoption, including where your super-users are
  • Leading indicators of AI slop, based on changes to the code and to your human review process
  • Tracking to assess the impact of your different AI interventions – what's having the biggest positive impact?
  • Nudges to action in the workflow – so every team can get support, not just the ones you have time to coach

No one needs another dashboard; we need an easier way to get better at productivity, code quality, and developer experience. That's our focus at Multitudes. Try it out with a 2-week free trial. More here: www.multitudes.com

Website
http://www.multitudes.com
Industry
Software Development
Company size
2-10 employees
Headquarters
Auckland
Type
Privately Held

Locations

Employees at Multitudes

Updates

  • Our latest research on the impact of AI has been covered in an article by LeadDev. Check it out below!

    Everyone’s talking about #AI-driven engineering productivity gains. But here’s the uncomfortable truth: most teams still struggle to measure them.

    I’m excited to share my first LeadDev article, where I break down why the hype isn’t matching reality. It’s based on brand-new research from Multitudes and features unique insights from Lauren Peate and Sridhar Krishnan, VP of Engineering, DevX and Platforms at LinkedIn.

    A few standout takeaways:
    📏 Nearly all teams are adopting AI, yet most can’t clearly measure its impact, despite growing pressure from leadership to prove results.
    🎯 Engineering productivity is a moving target: traditional metrics like lines of code break down with AI.
    ✅ Measure broadly, not perfectly. Use simple, consistent metrics across usage, customer value, quality, and human impact.

    The real issue? Teams are optimizing for tasks (like code generation), while productivity is shaped by systems – workflows, collaboration, and decision-making. AI isn’t a magic multiplier; it exposes existing bottlenecks.

    If you’re serious about AI ROI, the question isn’t:
    👉 “Are we using AI?”
    👉 It’s: “Are we measuring what actually matters?”

    https://lnkd.in/eg7Z42qx

  • Our second AI impact whitepaper is now live! Over the last 15 months, we've surveyed 500+ engineers and 200+ leaders to understand the impact of AI on engineering teams. Part one of our research looked at the impact of AI on developer productivity, codebase quality, and wellbeing. Part two, which we’re sharing today, looks at how organizations are evolving their practices to keep up with the pace of change. Links to whitepapers 1 and 2 in the comments – let us know what you think!

    500+ engineers, 200+ leaders, and 15 months of data – all to get here: Our second AI impact whitepaper is now LIVE! 🎉

    I don’t even want to tell you how many hours it takes a human to write a 30-page whitepaper. But not once was I tempted to have AI write this, because – ironically? – using AI to write a research paper on AI’s impact feels especially wrong, if only to do justice to the 700+ people who shared their insights with us.

    I’ll have lots more to say about this paper later, but for now, settle in with your favorite warm drink and get ready for a paper brimming with insights! To get you excited, our key takeaways were:

    1️⃣ For more AI usage, build a learning organization. Learning organizations are ones that have systems supporting continuous learning – something we especially need now, given the pace of change of AI.
    2️⃣ The desired AI outcomes are clear, but measuring them is not. While organizations know what they want out of their AI tooling, many aren’t measuring outcomes – in part because it’s hard to know how to measure them.
    3️⃣ Adapt processes, not just your tooling, to work with AI. Despite the importance of supporting people to get more from AI, the vast majority of the time people focus on codebase changes rather than people or process ones.
    4️⃣ Engineering roles are evolving. Engineers are expected to be more cross-functional, and with non-engineers building more software, there are additional demands on engineers to support these new builders.

    The most interesting part, of course, is how organizations are doing all of the above, so we’ve filled this paper with the rich examples that came up in our surveys and interviews.

    A MASSIVE thanks to Kelly Blincoe, Thomas Fritz, Brittany Johnson, MBA, Vivek Katial, and Nathen Harvey for the thoughtful feedback – on our draft survey and more recently on this whitepaper. You’ve made this research and whitepaper better, and any remaining issues are our own. And of course, HUGE thanks to Youxiang Lei for the analysis and writing, and to Laura Walker for the design wizardry. Whew, we did it!! Link to paper in comments – ungated, of course.

    • More data, more real-world practices: AI impact whitepaper #2 now live
  • We’ve been hard at work iterating on our AI impact feature based on your feedback – and now we have a ton of updates to share! Deeper usage metrics, AI surveys, OpenTelemetry support, and a free ROI calculator – all built to help engineering leaders answer the key question: am I helping my team make the most of our AI tooling? More below.

    Since our AI impact measurement feature launched, we’ve been hard at work listening to user feedback and iterating. And now we have a TON of updates to share – a few highlights:

    📊 Intensity of usage: As we move from “are we using AI?” to “how much are we using AI?”, it becomes important to look at the depth of AI usage. That's what this feature does, considering cost, input tokens, and lines accepted.
    💬 Feedback insights: Human code reviews are the guardians of code health. To make sure you're getting enough high-quality human code reviews, you can see what types of feedback AI adopters are getting, and whether the patterns of what they get feedback on have shifted.
    📋 AI survey: With the pace of change in AI, it’s extra important to understand how our people are feeling. That’s why we’re bringing our very first survey to Multitudes.
    🔭 OpenTelemetry: Most of you are using a lot of different AI tools. To make it faster to send us data from a wide range of sources, we now support OpenTelemetry data, starting with Claude Code.
    💲 AI ROI calculator: Off the back of the launch of our free AI ROI calculator last week (link in comments), we now make it easy to calculate ROI from your Multitudes metrics. Tell us the methodology you’d like to use, and we’ll give you the metrics you need.

    But honestly, that’s just skimming the surface of all the features we’ve shipped recently – check out our docs or get in touch to see more!

    • Updates to our AI impact feature
    • Measure intensity of usage
    • Ensure that AI code gets high-quality feedback
    • Understand how people feel about AI
    • More data sources: OpenTelemetry
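The "intensity of usage" idea above – combining cost, input tokens, and lines accepted into a single depth-of-usage signal – can be sketched as a simple composite score. This is purely illustrative: the `intensity_score` function, the caps, and the equal weighting are my own assumptions, not Multitudes' actual methodology.

```python
# Hypothetical sketch of an "intensity of usage" composite score.
# The caps and equal weighting below are illustrative assumptions.

def intensity_score(cost_usd, input_tokens, lines_accepted,
                    max_cost=100.0, max_tokens=2_000_000, max_lines=5_000):
    """Return a 0-1 score: the mean of three capped, normalized signals."""
    signals = [
        min(cost_usd / max_cost, 1.0),          # monthly spend on AI tooling
        min(input_tokens / max_tokens, 1.0),    # volume of prompting
        min(lines_accepted / max_lines, 1.0),   # AI-written code kept by the dev
    ]
    return sum(signals) / len(signals)

# A developer with moderate usage across all three dimensions:
print(round(intensity_score(cost_usd=25, input_tokens=500_000,
                            lines_accepted=1_000), 3))  # → 0.233
```

Averaging capped, normalized signals keeps any single dimension (say, one unusually expensive run) from dominating the overall score.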
  • A common question we've come across is how the ROI of AI should be measured. So we built a free calculator to help, with all assumptions and logic laid out. Give it a spin and let us know what you think!

    Talk long enough about AI tooling and someone will ask: yeah, but what’s the ROI?

    I don’t talk about this often, but early in my career (between doing enough research to realize a PhD wasn’t for me and later discovering that startups are my happy place), I spent a few years doing strategy consulting. So if you want a financial model – perhaps, say, for the ROI of AI tooling – then I’m your person.

    If you need the ROI of your AI tooling, I have 2 resources for you:

    1️⃣ *An ROI calculator for AI tooling*
    If you’re in a hurry, start here: use our ROI calculator to estimate the financial impact of your AI tooling. I’ve given a few options for how you might calculate this, with the goal of letting you use metrics you already have to get the best possible estimate.

    2️⃣ *A guide for measuring the ROI of AI tooling*
    To understand how the ROI calculation above works, read this. This guide combines my experience of using AI to build products with my previous stint doing financial models to share how I approach the ROI question. I also share key principles I’ve learned for building an ROI model well, based on common mistakes.

    In light of the massive Block layoffs last week, it’s worth noting: our model is based on the assumption that you’ll use the returns from AI tooling to deliver more value to your customers, not to cut staff. I’m interested in how we can use these tools to augment people, so that’s what I’ve built this model for.

    Links in comments. This is our first time doing a mini-calculator like this, so I’d love feedback – was this helpful? Should I do more data nerd + business mashups like this?

    • What's the ROI of AI tooling? Image showing organization info and DORA metrics going into a calculator and coming out with numbers for AI investment, returns, and ROI.
    • Try our free AI tooling ROI calculator here:
www.multitudes.com/ai-roi-calculator
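For a sense of how a calculator like this might work, here is a minimal sketch of an ROI computation: investment is seat cost, returns are estimated from time saved, and ROI is (returns - investment) / investment. The `ai_tooling_roi` function and every figure in it are hypothetical assumptions for illustration – the actual calculator's methodology is laid out at the link above.

```python
# Illustrative ROI model for AI tooling. All inputs below are
# hypothetical assumptions, not Multitudes' actual methodology.

def ai_tooling_roi(num_devs, tool_cost_per_dev_per_year,
                   hours_saved_per_dev_per_week, loaded_hourly_rate,
                   working_weeks=46):
    """Return (investment, returns, roi) for an AI tooling rollout."""
    investment = num_devs * tool_cost_per_dev_per_year
    returns = (num_devs * hours_saved_per_dev_per_week
               * working_weeks * loaded_hourly_rate)
    roi = (returns - investment) / investment
    return investment, returns, roi

inv, ret, roi = ai_tooling_roi(
    num_devs=50,
    tool_cost_per_dev_per_year=240,    # e.g. a $20/month seat
    hours_saved_per_dev_per_week=2,    # the key (and most contested) estimate
    loaded_hourly_rate=100,
)
print(f"Investment: ${inv:,.0f}  Returns: ${ret:,.0f}  ROI: {roi:.0%}")
```

Note how sensitive the result is to the hours-saved estimate – which is exactly why the post above stresses using metrics you already trust rather than a guess.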
  • Engineering leaders today are caught between AI hype (shouldn’t all our devs be going 2x faster?) and team skepticism (doesn’t AI slop create more work than value?). But there’s reason for hope: our research shows that what matters most for success with AI is how you lead.

    Today, we’re excited to announce not one but _two_ launches:
    💥 The results of our AI impact research, following 500+ devs through AI rollouts
    💥 Our AI impact feature, which makes it easy to put the results of our research into practice

    Your actions can help your team get more benefit from AI, with fewer of the costs (to codebase quality, learning, and more). But to do that, we need to be able to:
    👉 See which AI interventions are working and which aren’t
    👉 Support our team members to learn – whether they’re engaged or skeptical
    👉 Measure the full impact of AI – we don’t want to focus so much on productivity that we lose sight of code quality or the impact on developer experience
    👉 Get early warning signs of AI slop

    Our AI impact feature helps with all of the above, based on our research findings about what helps teams get more from their AI tooling.

    A big thank you to Culture Amp, Eucalyptus, Mable, and Pleo for participating in the research and helping us shape this feature! To get a copy of our research or to see more about the feature, check out the comments.

    • Now live: AI impact feature
Get the full picture of AI impact, backed by months of original research.
Chart shows adoption curves for different teams, and says
"Team Berry has the highest adoption, but Team Apple has more super-users."
    • Metrics you can trust
Reliable metrics are built on good statistical foundations. Get thoughtful data visualizations to cut through the hype and see the real impact of AI. 
Chart shows box-and-whisker plots showing the difference in minimal reviews for low and high AI adopters
    • Backed by original research
This feature is based on our original research following 500+ developers through AI rollouts.
    • Want to measure your AI impact?
Try it for free with a 2-week trial.
  • Multitudes reposted this

    View organization page for Signal Not Noise


    Supporter appreciation: thanks to Multitudes for being part of LAST Conference Melbourne 2025. "With Multitudes you can get engineering delivery on track, sustainably. With their insights, you can identify team and productivity blockers early, then take action!"

    Don't miss Multitudes' Vivek Katial's closing address, "Beyond the Hype: What's Really Happening When Engineering Teams Adopt AI". Full programme details and registration: https://lnkd.in/gQQ_ts5K

    We also welcome you to take part in Multitudes' 15-minute survey to contribute to their research on how AI is reshaping engineering teams. More details and the survey link: https://lnkd.in/gS76Ukvs

  • 🇪🇺 Exciting news for our EU customers! Multitudes now offers European data processing for our app. What does this mean for your engineering teams?
    ✅ Your data never leaves the EU – processed and stored entirely in European data centers
    ✅ Enhanced data sovereignty for European organizations
    ✅ Faster performance for European teams
    Whether you're using Multitudes to spot AI slop, identify blocked work, or find collaboration bottlenecks, your data stays exactly where you need it to be. We’re not just about developer productivity; we’re about data privacy too.

  • Multitudes reposted this

    View profile for Lauren Peate


    Tech leaders right now are caught in the middle, even more than usual: how can I set my developers up for success when there’s pressure coming down – around adoption or outcomes with AI – and also pressure coming up – from engineers who are concerned about risks to code quality, environmental issues, and what AI means for junior roles?

    There’s no clear guidance from existing research yet, either:
    👟 Think AI will speed us up? The Microsoft Copilot paper says engineers will get 55.8% faster.
    📉 Think AI isn’t what the hype says it is? There’s the METR study, where engineers thought AI made them 20% faster but were actually 19% slower.
    👩 Worried about inequity and double standards in our AI rollouts? A Peking University study showed that women and “mature” engineers face a competence penalty for AI usage.

    And while there are existing surveys asking tech leaders about AI, they tend to focus on the more functional side – how much are you using it and what for, what productivity gain do you think you’re getting, and what’s the org’s investment in it. What existing research hasn’t covered as much is the human side of AI rollout – why leaders are rolling it out, the worries, the rollout challenges, and what it’s changed in how teams work together. Essentially, all the change management that is so hard but also so necessary for any big shift.

    So, drumroll please: today we’re officially launching our AI impact survey for tech leaders. If you lead dev or dev-adjacent teams (eng, product, data, etc.), then we want to hear from you! Survey link in the comments below 👇

    • AI impact survey: Cut through the hype – join our original research!
  • 😊 The Multitudes AI impact survey is LIVE 😊 We're aiming to capture why tech leaders are rolling out AI, the worries, the challenges, and what it’s changed in how teams work together. Fill it out here: https://lnkd.in/gMVW_TSE P.S. For more from Lauren on why we're doing this – and on the state of the research right now – see her post above ⬆️

    • AI impact survey: Cut through the hype – join our original research!
  • ⏰TWO DAYS OUT ⏰ If you haven't already RSVPed for the next TLC with Aino Vonge Corry and Vivek Katial to discuss all things to do with retros, you can do so using the link here: https://lnkd.in/g6w3XyV2 Can't wait to see you there 👋

    View organization page for Multitudes


    📆 Next week – Tech Leader Chat, September 24/25 📆

    You've heard us banging on about this before, but it's our next TLC! Presented by Aino Vonge Corry and our Lead Data Scientist Vivek Katial, we'll be talking about how to conduct human-centric, data-driven retrospectives.

    We know that retrospectives are important for engineering teams for a number of performance reasons. However, there is a gap in practice between the objective data and the subjective data used in retros to evaluate performance. There are also so many ways to conduct effective retros, so identifying what fits your team’s evaluative needs is a unique challenge for engineering leaders to consider.

    Vivek will talk through our research, and Aino (the author of Retrospectives Antipatterns) will share insights on:
    ⚖️ Common pitfalls that leaders encounter when running retrospectives
    🚦 Why it’s important for teams to conduct data-driven retros
    🤘 The impact of people problems on retros
    🚂 How to maximize the benefits of retros for your team members

    Sign up on the event page in the comments below 👇 Feel free to register even if you can't make it – we'll be recording the talk and will send you a copy afterwards!

Similar pages

Funding

Multitudes: 4 total rounds

Last Round

Grant

US$ 6.2K

See more info on Crunchbase