A lot of leaders ask me about using velocity to measure development team performance. I *love* story points and velocity, but they're useful for other things, not for truly understanding a team's ability to deliver. The question to ask is, "What is my team's ability to *ship software*?" I encourage leaders to focus on the four DevOps Research and Assessment (DORA) metrics for Software Delivery Performance if they want to understand how their development organization is operating:

- **Lead Time for Changes** – How long does it take to go from code committed to code successfully running in production?
- **Deployment Frequency** – How often does your organization deploy code to production or release it to end users?
- **Change Failure Rate** (aka "How often do we break stuff?") – What percentage of changes to production, or released to users, result in degraded service (e.g., service impairment or a service outage) and subsequently require remediation (e.g., a hotfix, rollback, fix forward, or patch)?
- **Time to Restore Service** (aka "How long does it take to fix it?") – How long does it generally take to restore service when a service incident or a defect that impacts users occurs (e.g., an unplanned outage or service impairment)?
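As a rough illustration of what those four questions look like in data, here is a minimal sketch that computes all four metrics from a log of deployments and incident-recovery times. The record fields (`committed`, `deployed`, `failed`) and the sample values are made up for the example, not the schema of any particular tool:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log; field names are illustrative only.
deployments = [
    {"committed": datetime(2024, 5, 1, 9, 0), "deployed": datetime(2024, 5, 1, 15, 0), "failed": False},
    {"committed": datetime(2024, 5, 2, 10, 0), "deployed": datetime(2024, 5, 3, 11, 0), "failed": True},
    {"committed": datetime(2024, 5, 4, 8, 0), "deployed": datetime(2024, 5, 4, 9, 30), "failed": False},
]
# One restore duration per incident caused by a failed deployment.
restore_times = [timedelta(hours=2)]
days_observed = 7  # length of the observation window

# Lead Time for Changes: commit -> running in production, averaged.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment Frequency: deploys per day over the window.
deploy_frequency = len(deployments) / days_observed

# Change Failure Rate: share of deployments needing remediation.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Time to Restore Service: average incident recovery time.
mean_time_to_restore = sum(restore_times, timedelta()) / len(restore_times)

print(f"Lead time (avg):     {avg_lead_time}")
print(f"Deploy frequency:    {deploy_frequency:.2f}/day")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Time to restore:     {mean_time_to_restore}")
```

In practice the timestamps would come from your VCS and deployment tooling and the incidents from your on-call system, but the arithmetic is exactly this simple.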
Understanding DORA Metrics for Software Delivery
Summary
DORA metrics are four key measurements used to assess the performance of software delivery teams, focusing on how quickly, reliably, and frequently software changes are released and issues are resolved. By tracking deployment frequency, lead time, change failure rate, and recovery time, organizations gain insight into the true impact of their development processes, not just their activity.
- Measure what matters: Shift your focus from counting lines of code and commits to tracking the speed, frequency, and quality of your software releases.
- Balance speed and stability: Use Dora metrics to ensure you’re not just delivering software faster, but also maintaining reliability and quick recovery when things go wrong.
- Start with a baseline: Begin by measuring your current performance using Dora metrics, then identify areas for improvement to guide your team’s progress.
Most engineering teams have a blind spot: they don't actually measure their software delivery performance. They assume that because they ship code, things are working fine. But have you ever deployed a "working" update, only to spend the next two days fixing what it broke?

DORA's research proves that gut feelings aren't enough. They've identified four key metrics that predict high-performing teams:

→ Change Lead Time – How long does it take for a commit to go live?
→ Deployment Frequency – How often do you ship changes?
→ Change Failure Rate – How often do deployments break things?
→ Mean Time to Recovery – How fast do you fix failures?

High-performing teams don't choose between speed and stability. They improve both. And they measure their progress so they know they're actually improving.

The best part? These metrics work for any engineering team, whether you're building a mobile app, a banking system, or a machine learning model.

The key isn't just to measure; it's to measure wisely. And that's where many teams stumble:

→ They set vanity targets (e.g., "Every app must deploy daily!") instead of focusing on real improvement.
→ They use a single metric instead of a balanced set.
→ They compare teams with completely different constraints.
→ They measure obsessively but never take action.

Avoid these traps. The goal isn't just to track numbers. It's to build better software, faster, and with confidence.

Track what matters. Take action. Deliver with speed and stability.

#DevOps #SoftwareEngineering #ContinuousDelivery #EngineeringLeadership
-
For decades, engineering teams have been measured by lines of code, commit counts, and PRs merged. But does more code actually mean more productivity?

🚀 Some of the best developers write LESS code, not more.
🚀 The fastest-moving teams focus on outcomes, not just output.
🚀 High commit counts can mean inefficiency, not impact.

Recent research from DORA, GitHub, and real-world case studies from IT Revolution debunk the myth that developer activity = developer productivity. Here's why:

🔹 DORA Research: After studying thousands of engineering teams, DORA (DevOps Research & Assessment) found that the best teams optimize for four key engineering performance metrics:
✅ Deployment Frequency → How often do we ship value to users?
✅ Lead Time for Changes → How fast can an idea go from code to production?
✅ Change Failure Rate → Are we improving quality, or just shipping fast?
✅ MTTR (Mean Time to Restore) → Can we recover quickly when things go wrong?
→ Notice what's missing? Not a single metric is based on lines of code, commits, or individual developer output.

🔹 GitHub's Data: GitHub found that developers working remotely during 2020 pushed more code than ever, but many felt less productive. Why? Longer workdays masked inefficiencies. More commits ≠ meaningful work; some were just fighting bad tooling or slow reviews. Teams that automated workflows (CI/CD, code reviews) merged PRs faster and felt more productive.

🔹 IT Revolution case studies: High-performing engineering orgs measure outcomes, not just outputs. The best teams:
- Shift from tracking commit counts to measuring customer value.
- Use DORA metrics to improve DevOps flow, not to micromanage engineers.
- View engineering productivity as a team effort, not an individual scoreboard.

If you want a high-performing engineering org, don't just push developers to write more code. Instead, ask:
✅ Are we shipping value faster?
✅ Are we reducing friction in our workflows?
✅ Are our developers able to focus on meaningful work?
🚨 The takeaway? Great engineering teams don’t write the most code—they deliver the most impact. 📢 What’s the worst “productivity metric” you’ve ever seen? Drop a comment below 👇 #DeveloperProductivity #SoftwareDevelopment #DORA #GitHub #EngineeringLeadership
-
⚖️ How do you measure the effectiveness of a software team? We've been talking a lot about how to measure the change we bring to organizations via our work at Lincoln Loop. I've seen efforts in the past using points and velocity, but they can be subjective, easily gamed, or fail to measure critical parts of the workflow.

DORA (DevOps Research and Assessment) has come up on a few occasions. Its assessment focuses on just 4 metrics:

⏳ Lead time for changes
📆 Production deploy frequency
🚥 Change fail percentage (how many deploys cause issues?)
⏱️ Recovery time (how long does it take to restore service after an issue?)

(do a self-assessment here https://lnkd.in/eWK4CjJm)

While it's not a direct measurement of software development capabilities, it is a measurement of their outcome (which is arguably more important). If 30% of your deployments fail, or it takes you more than a day to recover from a problem, that's probably a good indicator that there are issues with your software development process. On the other hand, if you can deploy multiple times a day and your change fail percentage is less than 1%, that's probably a good indicator that your software development process is working well.

Another nice thing about the DORA assessment is that it's easy to get a baseline upfront without waiting weeks or months to collect the information. The ranges are large enough that anyone close to the process can answer off-the-cuff.

💬 If you have thoughts on DORA or have other ways to measure the effectiveness of your tech team, I'd love to hear about it in the comments!
-
𝐃𝐞𝐯𝐎𝐩𝐬 𝐌𝐞𝐭𝐫𝐢𝐜𝐬 𝐓𝐡𝐚𝐭 𝐀𝐜𝐭𝐮𝐚𝐥𝐥𝐲 𝐌𝐚𝐭𝐭𝐞𝐫

Most teams measure the wrong things. They track commits per day, lines of code, hours spent deploying. These are vanity metrics: they show activity, not impact.

𝐇𝐞𝐫𝐞'𝐬 𝐰𝐡𝐚𝐭 𝐚𝐜𝐭𝐮𝐚𝐥𝐥𝐲 𝐦𝐚𝐭𝐭𝐞𝐫𝐬: 𝐃𝐎𝐑𝐀 𝐌𝐞𝐭𝐫𝐢𝐜𝐬 (the only 4 metrics proven to predict software delivery performance)

---

𝐓𝐇𝐄 𝟒 𝐃𝐎𝐑𝐀 𝐌𝐄𝐓𝐑𝐈𝐂𝐒:

𝟏. 𝐃𝐄𝐏𝐋𝐎𝐘𝐌𝐄𝐍𝐓 𝐅𝐑𝐄𝐐𝐔𝐄𝐍𝐂𝐘
How often you deploy to production
✅ High frequency = faster feedback loops
✅ Indicates automation maturity
Elite teams: multiple times per day | Low performers: once per month

𝟐. 𝐋𝐄𝐀𝐃 𝐓𝐈𝐌𝐄 𝐅𝐎𝐑 𝐂𝐇𝐀𝐍𝐆𝐄𝐒
Time from commit → production
✅ Shorter lead time = faster value delivery
✅ Shows pipeline efficiency
Elite teams: less than 1 hour | Low performers: 1-6 months

𝟑. 𝐂𝐇𝐀𝐍𝐆𝐄 𝐅𝐀𝐈𝐋𝐔𝐑𝐄 𝐑𝐀𝐓𝐄
% of deployments causing incidents
✅ Low failure rate = quality releases
✅ Stability over speed
Elite teams: 0-15% | Low performers: 46-60%

𝟒. 𝐌𝐄𝐀𝐍 𝐓𝐈𝐌𝐄 𝐓𝐎 𝐑𝐄𝐂𝐎𝐕𝐄𝐑𝐘
How fast you recover from failure
✅ Fast recovery > zero failures
✅ Resilience matters
Elite teams: less than 1 hour | Low performers: 1 week to 1 month

---

𝐇𝐎𝐖 𝐓𝐎 𝐈𝐌𝐏𝐋𝐄𝐌𝐄𝐍𝐓 𝐃𝐎𝐑𝐀 𝐌𝐄𝐓𝐑𝐈𝐂𝐒

Week 1: Measure current state
→ Calculate your baseline DORA metrics
→ Identify your biggest bottleneck
→ Set improvement targets

Weeks 2-4: Automate
→ CI/CD pipeline (reduce lead time)
→ Automated testing (reduce failure rate)
→ Monitoring & alerts (reduce MTTR)

Month 2+: Optimize
→ Increase deployment frequency gradually
→ Reduce batch sizes
→ Improve observability
→ Build a blameless post-mortem culture

---

Which DORA metric is your team struggling with most? Drop a comment and let's discuss how to improve it.

♻️ Repost if you found it valuable
➕ Follow Jaswindder for more insights

#DevOps #DORAMetrics #CloudEngineering #SoftwareDelivery #ContinuousDeployment
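The elite/low bands quoted above can be turned into a quick self-check. This is a hedged sketch: the `dora_tier` function and its cutoffs are a simplification of the report's survey buckets for illustration, not an official classification.

```python
from datetime import timedelta

def dora_tier(deploys_per_day: float, lead_time: timedelta,
              failure_rate: float, mttr: timedelta) -> str:
    """Rough tier check against the elite/low thresholds quoted above.

    Bands are simplified for illustration; the real DORA reports
    derive tiers from survey response buckets, not a formula.
    """
    elite = (
        deploys_per_day >= 1                # multiple deploys per day
        and lead_time < timedelta(hours=1)  # commit -> production < 1 hour
        and failure_rate <= 0.15            # 0-15% change failure rate
        and mttr < timedelta(hours=1)       # recover in under an hour
    )
    low = (
        deploys_per_day <= 1 / 30           # roughly once a month or less
        or lead_time > timedelta(days=30)   # months of lead time
        or failure_rate > 0.45              # 46-60% failure band
        or mttr > timedelta(days=7)         # a week or more to recover
    )
    return "elite" if elite else "low" if low else "mid"

print(dora_tier(3, timedelta(minutes=40), 0.05, timedelta(minutes=20)))  # elite
```

Plugging in your Week 1 baseline numbers gives a starting tier; rerunning after the automation work in Weeks 2-4 shows whether the needle actually moved.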
-
🚀 𝐓𝐡𝐞 2024 𝐒𝐭𝐚𝐭𝐞 𝐨𝐟 𝐃𝐞𝐯𝐎𝐩𝐬 𝐑𝐞𝐩𝐨𝐫𝐭 𝐢𝐬 𝐎𝐮𝐭! 🚀

The latest 𝐃𝐎𝐑𝐀 (DevOps Research and Assessment) report has dropped, packed with insights on how 𝐀𝐈 and 𝐩𝐥𝐚𝐭𝐟𝐨𝐫𝐦 𝐞𝐧𝐠𝐢𝐧𝐞𝐞𝐫𝐢𝐧𝐠 are reshaping 𝐃𝐞𝐯𝐎𝐩𝐬 and 𝐬𝐨𝐟𝐭𝐰𝐚𝐫𝐞 𝐝𝐞𝐥𝐢𝐯𝐞𝐫𝐲 𝐩𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞, from faster deployment times to the unexpected effects of AI. Here are the 𝐤𝐞𝐲 𝐡𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:

📊 𝐓𝐡𝐞 𝐅𝐨𝐮𝐫 𝐊𝐞𝐲 𝐌𝐞𝐭𝐫𝐢𝐜𝐬: A Blueprint for High Performance
DORA's foundation rests on four metrics that define software delivery excellence: 𝐥𝐞𝐚𝐝 𝐭𝐢𝐦𝐞 𝐟𝐨𝐫 𝐜𝐡𝐚𝐧𝐠𝐞, 𝐝𝐞𝐩𝐥𝐨𝐲𝐦𝐞𝐧𝐭 𝐟𝐫𝐞𝐪𝐮𝐞𝐧𝐜𝐲, 𝐜𝐡𝐚𝐧𝐠𝐞 𝐟𝐚𝐢𝐥 𝐫𝐚𝐭𝐞, and 𝐫𝐞𝐜𝐨𝐯𝐞𝐫𝐲 𝐭𝐢𝐦𝐞. This year's report shows that teams who excel across these metrics achieve elite performance, 𝐫𝐞𝐠𝐚𝐫𝐝𝐥𝐞𝐬𝐬 𝐨𝐟 𝐢𝐧𝐝𝐮𝐬𝐭𝐫𝐲.
👉 It's proof that high performance depends not on sector but on 𝐬𝐦𝐚𝐫𝐭, 𝐞𝐟𝐟𝐞𝐜𝐭𝐢𝐯𝐞 𝐃𝐞𝐯𝐎𝐩𝐬 𝐩𝐫𝐚𝐜𝐭𝐢𝐜𝐞𝐬.

🤖 𝐀𝐈 𝐀𝐝𝐨𝐩𝐭𝐢𝐨𝐧 = 𝐇𝐢𝐠𝐡𝐞𝐫 𝐏𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐯𝐢𝐭𝐲... with a Twist
𝐀𝐈 has gone 𝐦𝐚𝐢𝐧𝐬𝐭𝐫𝐞𝐚𝐦, with 81% of organizations now integrating it into 𝐝𝐞𝐯𝐞𝐥𝐨𝐩𝐦𝐞𝐧𝐭 workflows. DevOps professionals report big 𝐩𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐯𝐢𝐭𝐲 𝐛𝐨𝐨𝐬𝐭𝐬, but there's a paradox: AI speeds up the "𝐯𝐚𝐥𝐮𝐚𝐛𝐥𝐞" work, yet the 𝐫𝐞𝐩𝐞𝐭𝐢𝐭𝐢𝐯𝐞 𝐭𝐚𝐬𝐤𝐬 we're all too familiar with remain.
👉 Meaning AI makes the 𝐯𝐚𝐥𝐮𝐚𝐛𝐥𝐞 work 𝐪𝐮𝐢𝐜𝐤𝐞𝐫, but it 𝐝𝐨𝐞𝐬𝐧't 𝐞𝐥𝐢𝐦𝐢𝐧𝐚𝐭𝐞 everything we 𝐝𝐨𝐧't 𝐰𝐚𝐧𝐭 𝐭𝐨 𝐝𝐨 😢!

🛠️ 𝐏𝐥𝐚𝐭𝐟𝐨𝐫𝐦 𝐄𝐧𝐠𝐢𝐧𝐞𝐞𝐫𝐢𝐧𝐠: Powers Developer Independence
𝐈𝐧𝐭𝐞𝐫𝐧𝐚𝐥 𝐝𝐞𝐯𝐞𝐥𝐨𝐩𝐞𝐫 𝐩𝐥𝐚𝐭𝐟𝐨𝐫𝐦𝐬 are transforming team workflows by enabling self-service. Teams with strong platforms report a 𝐩𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐯𝐢𝐭𝐲 increase of 8% and a 𝐭𝐞𝐚𝐦 𝐩𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞 boost of 10%! However, with more layers in the pipeline, overall delivery speed can sometimes slow.
👉 For the best results, 𝐩𝐥𝐚𝐭𝐟𝐨𝐫𝐦 𝐞𝐧𝐠𝐢𝐧𝐞𝐞𝐫𝐢𝐧𝐠 should focus on 𝐮𝐬𝐞𝐫-𝐜𝐞𝐧𝐭𝐞𝐫𝐞𝐝 design, developer 𝐢𝐧𝐝𝐞𝐩𝐞𝐧𝐝𝐞𝐧𝐜𝐞, and a 𝐩𝐫𝐨𝐝𝐮𝐜𝐭-𝐨𝐫𝐢𝐞𝐧𝐭𝐞𝐝 approach.

❤️🔥 𝐓𝐡𝐞 𝐑𝐨𝐥𝐞 𝐨𝐟 𝐋𝐞𝐚𝐝𝐞𝐫𝐬𝐡𝐢𝐩 & 𝐒𝐭𝐚𝐛𝐢𝐥𝐢𝐭𝐲
𝐓𝐫𝐚𝐧𝐬𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐥𝐞𝐚𝐝𝐞𝐫𝐬𝐡𝐢𝐩 that prioritizes stability and 𝐮𝐬𝐞𝐫-𝐜𝐞𝐧𝐭𝐫𝐢𝐜𝐢𝐭𝐲 is a game-changer. Teams with consistent 𝐩𝐫𝐢𝐨𝐫𝐢𝐭𝐢𝐞𝐬 and 𝐮𝐬𝐞𝐫-𝐟𝐨𝐜𝐮𝐬𝐞𝐝 goals produce better software, experience less burnout, and enjoy higher job satisfaction.
👉 Set the 𝐠𝐨𝐚𝐥 for your organization and teams to be just a 𝐥𝐢𝐭𝐭𝐥𝐞 𝐛𝐞𝐭𝐭𝐞𝐫 than yesterday.

𝐓𝐡𝐞 𝐌𝐚𝐢𝐧 𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲: 💡 AI and platform engineering are powerful, but they're not magic solutions. Elite performance is about 𝐜𝐨𝐧𝐭𝐢𝐧𝐮𝐨𝐮𝐬 𝐢𝐦𝐩𝐫𝐨𝐯𝐞𝐦𝐞𝐧𝐭, 𝐞𝐱𝐩𝐞𝐫𝐢𝐦𝐞𝐧𝐭𝐢𝐧𝐠, and staying 𝐮𝐬𝐞𝐫-𝐟𝐨𝐜𝐮𝐬𝐞𝐝.

📎 Link to the full report is in the comments!

#DevOps #AI #PlatformEngineering #StateOfDevOps #DORA #Leadership
-
If you are trying to lead modernization or transformation of software programs in the government, YOU NEED TO KNOW about DORA and the State of DevOps report. 💥 💥 💥

The government is different from industry, yes. But the government can also perform like the best of the best in industry. It's possible.

The excerpt on the history of DORA in the 2024 State of DevOps report is cool to see. The first time I heard about DORA (DevOps Research and Assessment) was during my time scaling the AOC Pathfinder into what's now known as Kessel Run. I was absolutely mind-blown. DORA introduced the Four Key Metrics that help measure software delivery performance:

1️⃣ Lead Time for Changes – How quickly can changes be made?
2️⃣ Deployment Frequency – How often can teams deliver updates?
3️⃣ Change Failure Rate – What percentage of changes fail in production?
4️⃣ Time to Restore Service – How fast can teams recover when issues occur?

These metrics aren't just for tech companies; they're for anyone serious about delivering impactful software, including government programs. Teams don't have to sacrifice speed for stability. High-performing teams achieve both, driving not just mission success but organizational transformation.

What I love about this year's DORA report is its emphasis that 𝐂𝐮𝐥𝐭𝐮𝐫𝐞 𝐢𝐬 𝐞𝐯𝐞𝐫𝐲𝐭𝐡𝐢𝐧𝐠. High-trust cultures that prioritize collaboration, learning, and empowerment are the strongest predictors of success.

- Measure what matters, but ensure your tools and practices actually improve delivery and stability.
- Foster a culture that enables teams to experiment, learn, and recover quickly from failures.
- Remember: reducing friction in the delivery process is just as critical as meeting user expectations.

The government can match the best in industry, but it starts with adopting the right principles and practices. DORA provides the blueprint. DORA has been around for a DECADE; there is real truth and empirical evidence behind it.
DORA transformed how I think about delivering impactful software, and it should do the same for all of you change agents and bureaucracy hackers. #DevOps #DORA #SoftwareDelivery #Culture