We're data-driven in so many other software development practices – when we're analyzing logs, examining test coverage, improving app performance, and more. So of course we want to be data-driven in how we measure and improve our software delivery performance.
One note here though: The bigger the thing that we want to understand, the more likely it is that data can only show part of the picture. So for something as big as understanding our software delivery performance, data will only ever be part of the answer. That's why we ultimately recommend being data-informed rather than data-driven. The data isn't telling us what to do; it's showing us what questions to ask and where to spend our time.
In this article, we’ll explore how software product metrics can serve as a guide to achieving exceptional software delivery.
There are endless metrics available now. Research from Cornell suggests there are over 23,000 metrics you can use to measure software product quality alone! That is far too many to be feasible for any one team.
Below we’ll summarize the key metrics that we believe should be a priority for product teams:
Want to understand how engaged users are with your product? Active user metrics give you a first view of customer behavior. They show you whether customers are deriving value from your product, through how frequently they interact with your software. In general, the higher this number is, the better, since it means more people are getting value.
Active users can show how sticky your product is and how engaged users are. And changes in this metric can help with product decision-making, flag early signs of customer churn, and more.
How it's calculated:
Active user = the number of unique users who performed a specific action in a given time frame
It's important to use a consistent definition of "active user" – and the action you choose will depend on what meaningful engagement looks like in your product. Examples include an app log-in, an interaction (liking a post, sending a message), or playing content (e.g., on Spotify or TikTok).
The formula for this is:

Total Active Users = Unique New Users + Unique Returning Users
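To make this concrete, here's a minimal sketch in Python of counting active users from an event log. The event schema and the qualifying actions are hypothetical – substitute whatever counts as meaningful engagement in your product:

```python
from datetime import datetime

# Hypothetical event log: (user_id, action, timestamp) tuples.
events = [
    ("u1", "login", datetime(2024, 5, 1)),
    ("u2", "send_message", datetime(2024, 5, 1)),
    ("u1", "send_message", datetime(2024, 5, 2)),
    ("u3", "login", datetime(2024, 5, 3)),
]

def active_users(events, start, end, qualifying_actions):
    """Count unique users who performed a qualifying action in [start, end)."""
    return len({
        user_id
        for user_id, action, ts in events
        if action in qualifying_actions and start <= ts < end
    })

# Weekly active users for the week starting May 1,
# counting log-ins and messages as meaningful engagement.
wau = active_users(events, datetime(2024, 5, 1), datetime(2024, 5, 8),
                   {"login", "send_message"})
print(wau)  # 3 unique users were active this week
```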
You can also measure this at the daily, weekly, or monthly level (DAU, WAU, or MAU).
It's not necessarily true that DAU is better than WAU or MAU. Daily usage is a good sign of product stickiness, but a product can be sticky without requiring daily usage.
For example, people use AirBnB when booking a vacation, and we don't expect them to do that every day (as much as we might like to take vacations that often!). DAU isn't a good metric for AirBnB because the need it solves (where to stay when I travel) isn't a daily need. But that doesn't mean it's not an important need or a sticky product; AirBnB has built a unicorn company (valued >$1B) in its space.
Churn represents the moment users say goodbye to your product – it's the percentage of customers who stop using your service over a given time period (as decided by the organization). For example, Spotify experiences churn when a user cancels their premium subscription after a free trial or a few months of paid usage. The customer may have switched to a competitor like Apple Music or reverted to the free ad-supported version – but in either case, they churned.
While gaining new users is exciting, keeping your existing ones happy often tells you more about your product's true health and how well you're meeting user needs. The time period varies for the same reason that DAU isn't the best metric for AirBnB – some user needs are less frequent than others, so for some products it might be normal to have days or weeks with no activity, whereas for others that would be a sign of churn. You'll also have to decide if the "moment of churn" is when users cancel their subscription or when the subscription actually ends. As with the "Active Users" metric above, the key is that you make an organization-wide decision and then measure consistently.
Think of churn as your product's vital signs. A rising churn rate might signal that something needs attention – maybe your latest feature isn't hitting the mark, users aren't seeing enough value over time, or they're hitting friction points in their journey. By tracking churn closely, product and engineering teams can spot these warning signs early and make improvements before small issues become bigger problems.
How it's calculated:
Churn Rate = Customers lost in the period / Customers at the beginning of the period
Here's the simple math:
Example: Let's say you're running a collaboration tool – if you begin a quarter with 2,000 active workspaces and 100 close their accounts by the same quarter's end, your quarterly churn rate is 5%.
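As a quick sketch, here's the same arithmetic in Python, using the hypothetical collaboration-tool numbers from the example above:

```python
def churn_rate(customers_at_start, customers_lost):
    """Churn rate = customers lost in the period / customers at the start."""
    return customers_lost / customers_at_start

# The example above: 2,000 workspaces at the start of the quarter, 100 closed.
print(f"{churn_rate(2000, 100):.1%}")  # 5.0%
```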
Why focus on churn? Because keeping existing customers engaged typically costs far less than winning new ones. Plus, those satisfied long-term users often become your best advocates, helping fuel sustainable growth.
Very closely related to churn is customer satisfaction, since satisfied customers rarely churn. Customer satisfaction measures how well a company's products and services meet customers' expectations.
See below for the 3 most commonly-used customer satisfaction metrics: Net Promoter Score (NPS), Customer Satisfaction Score (CSAT), and Customer Effort Score (CES).
NPS measures how likely customers are to recommend your product, on a scale of 0-10. The 0-10 scale makes it easier for you to segment customers according to their responses: promoters (9-10), passives (7-8), and detractors (0-6).
How it's calculated:
NPS = ((Number of promoters - Number of detractors) / Total responses) x 100
Another way to say this is: NPS = % promoters - % detractors.
From this formula, you can see that the range for NPS is -100 (if everyone was a detractor) to +100 (if everyone was a promoter). The global NPS average is +32, with +35 as the average for technology companies and the top quartile of technology companies scoring +64 or higher (for more, see SurveyMonkey benchmarks).
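Here's an illustrative Python sketch of the NPS calculation, using the standard 9-10 promoter and 0-6 detractor cut-offs (the survey responses below are made up):

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores) * 100

# Hypothetical batch of survey responses.
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(round(nps(responses)))  # 5 promoters, 2 detractors, 10 responses -> 30
```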
Tracking CSAT (Customer Satisfaction Score) helps you monitor customer sentiment and refine your business based on customer feedback. Higher satisfaction leads to greater loyalty, and loyal customers are the key to driving business growth. And as mentioned above, retaining loyal customers is much cheaper than acquiring brand-new ones.
How it's calculated:
To calculate the percentage of satisfied customers:
1. Total the number of customers who are “very satisfied” (5) or “satisfied” (4)
2. Divide by the total number of responses, then multiply that result by 100 to get your customer satisfaction percentage.

So the formula looks like this:

CSAT = (# of satisfied respondents (rated you a 4 or 5) / # of total survey respondents) x 100

According to Retently, a good CSAT is a score between 65% and 80%, but this varies across industries.
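As a sketch, here's the same calculation in Python. The ratings below are made up, and we assume the 1-5 scale described above:

```python
def csat(ratings):
    """CSAT = % of respondents who rated 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return satisfied / len(ratings) * 100

# Hypothetical batch of 1-5 satisfaction ratings.
ratings = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]
print(f"{csat(ratings):.0f}%")  # 7 of 10 rated 4 or 5 -> 70%
```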
Customer Effort Score (CES) measures how easy it is for customers to get what they need from your product. A study from Harvard Business Review found that future purchase behavior is correlated with a good customer effort score; the key insight here is that what customers value most is ease – reducing the effort it takes to do what they need to do. That's why this measure is so valuable.
How it's calculated:
Customer Effort Score is calculated very similarly to the satisfaction score:
1. Total the number of customers who rated their experience "low effort" or "somewhat low effort"
2. Divide by the total number of responses, then multiply that result by 100 to get your Customer Effort Score percentage. So the formula looks like this:
CES = (# of respondents rating low effort / # of total survey respondents) x 100
You essentially want to count the best half of the responses – excluding the neutral midpoint – as considering the process low effort. For example, if you had a 7-point scale instead of 5, you would include ratings of 5, 6, or 7. A strong CES score is considered to be above 90%.
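Here's an illustrative Python sketch of that rule. It assumes an odd-numbered scale where higher values mean lower effort, per the examples above; the threshold arithmetic is our own generalization of the "best half, excluding neutral" rule:

```python
def ces(ratings, scale_max=5):
    """CES = % of respondents in the best (lowest-effort) half of the scale,
    excluding the neutral midpoint. Assumes higher ratings = lower effort:
    on a 1-5 scale that's 4-5; on a 1-7 scale, 5-7."""
    threshold = scale_max // 2 + 2  # 4 on a 5-point scale, 5 on a 7-point scale
    low_effort = sum(1 for r in ratings if r >= threshold)
    return low_effort / len(ratings) * 100

# Hypothetical batch of 1-5 effort ratings.
print(f"{ces([5, 4, 3, 5, 4, 5, 2, 5, 4, 5]):.0f}%")  # 8 of 10 -> 80%
```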
For subscription-based products, MRR (Monthly Recurring Revenue) is your north star metric for your financials.
MRR is the predictable, recurring revenue that flows in from your customers each month. It's different from your monthly revenue, which includes one-off purchases and annual pre-payments; in contrast, MRR is the monthly revenue you can expect to bring in based on the value of the subscriptions you have today. This means that MRR evens out one-off changes and gives you a forward-looking view of value today, helping you understand your product's true financial health.
Think of MRR like a monthly health check for your business. Rising MRR often signals that your product strategy is working: new features are resonating, pricing feels right, and customers see enough value to stick around. By tracking these patterns over time, product teams can make more informed decisions about where to focus their efforts.
How to calculate it:
MRR = Number of customers x Average monthly revenue per customer
For example, if your developer productivity tool has 500 teams paying an average of $200 per month for their workspaces, your MRR would be $100,000.
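As an illustration, here's a minimal Python sketch of an MRR calculation using hypothetical subscription data. Note how an annual pre-payment is normalized to its monthly equivalent rather than counted as lump-sum revenue, per the distinction above:

```python
def mrr(subscriptions):
    """MRR = sum of each subscription's monthly value.
    Annual plans are normalized to their monthly equivalent."""
    return sum(
        amount / 12 if period == "annual" else amount
        for amount, period in subscriptions
    )

# The example above: 500 teams at an average of $200/month.
monthly_only = [(200, "monthly")] * 500
print(mrr(monthly_only))  # 100000

# An annual pre-payment of $2,400 adds $200/month of MRR, not $2,400.
print(mrr(monthly_only + [(2400, "annual")]))  # 100200.0
```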
What "good" looks like for your business will depend on the financial goals that matter most to you right now. Are you aiming for profitability? Then compare MRR to your monthly expenses. Are you pushing for growth? Then look at your month-over-month changes in MRR and if that's at the level you want.
Uptime is a cornerstone metric for web-based products, measuring the percentage of time your service is available and functioning correctly for users. It's a direct indicator of service reliability and has a significant impact on user trust and satisfaction.
How to calculate it:
Uptime = (Time product was available / Total time period) x 100
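As a quick sketch in Python (the downtime figure is made up), this also shows why "three nines" (99.9%) of uptime allows only about 43 minutes of downtime per month:

```python
def uptime_pct(total_seconds, downtime_seconds):
    """Uptime = available time / total time, as a percentage."""
    return (total_seconds - downtime_seconds) / total_seconds * 100

# A 30-day month with 43 minutes of downtime is roughly "three nines".
month = 30 * 24 * 60 * 60
print(f"{uptime_pct(month, 43 * 60):.3f}%")  # 99.900%
```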
Note that achieving high uptime requires managing complex infrastructure components: servers, networks, authentication systems, and firewalls must all work seamlessly together.
Poor uptime directly impacts user experience and business outcomes. Each moment of downtime erodes user trust, increases frustration, and can lead to higher churn rates. This makes uptime monitoring and optimization essential for maintaining product health and sustaining business growth.
Consider the importance of uptime to Amazon Web Services (AWS). An AWS outage can cause major disruptions for companies like Netflix, Disney+, and Slack – a snowball effect leading to lost revenue, frustrated customers, and operational chaos. Even a few hours of downtime can cost businesses millions, highlighting why maintaining high uptime is critical for reliability and customer trust.
The above metrics provide an indication of your product’s performance. So how can you, as an engineering team, contribute to improving the product outcomes measured by these metrics?
Outside of the obvious – building the product that then gets measured – engineering teams can also contribute by improving their delivery performance, data infrastructure, and system reliability.
When we think about software engineering performance, a strong engineering team thrives on more than just technical skills – it's built on trust, open communication, and productive collaboration that brings out the best in everyone. To foster a collaborative culture, we at Multitudes believe in starting with trust: build an environment with psychological safety, where the people on the team feel secure and valued. As it turns out, psychological safety was also the number one factor Google found in determining whether a team would be high-performing (read more about Google's Project Aristotle research).
Building a high-trust culture starts with leaders role-modeling high-trust behavior to help their team members do the same. For example, "share of voice" in meetings is an important concept for us. As leaders, we want to make sure we're giving more space to others to speak (research also shows the importance of listening in building trust). The way that our leaders at Multitudes think about this is by trying to "speak less, speak last" to ensure that others on the team get a chance to share their views.
Ultimately, if your development team can deliver quality work faster, then both the organization and the customer win. Moving faster means more rounds of experiments, to learn what's working best for customers, and more rounds of iteration, so we can get a little bit better each time.
Being able to move faster, together, starts with understanding what is and isn’t working and what might be blocking the team. This is where Multitudes comes in – we suggest looking at the data first, measuring how your team is working together and delivering work, so you can identify areas of opportunity and make improvements.
Here are some tips to start measuring delivery performance:
Our recommendation is to start with the DORA and SPACE metrics frameworks, which track developer productivity metrics such as Change Lead Time, Deployment Frequency, Wellbeing, and Collaboration.
By strengthening data infrastructure, engineering teams can directly impact the metrics users care about most. For example, building the infrastructure for comprehensive data logging enables you to understand your product's health in real-time, so teams can quickly spot and fix issues before users even notice them.
When paired with efficient data pipelines, teams can turn all this data into actionable insights that drive smarter decisions. Strong data infrastructure forms the foundation for future product analytics needs.
Strong data infrastructure also contributes to the security and reliability of your product. To learn more about data infrastructure, check out this guide from the Institute of Data.
It's a no-brainer that system reliability will be important for your product metrics. When systems experience issues or downtime, the impact can be serious – not just on your customers, but also on your customers' customers. The 2024 CrowdStrike outage even led to airport shutdowns and massive delays!
There are several well-known engineering practices for improving system reliability. These technical capabilities work together as building blocks: when implemented thoughtfully, they create a foundation that supports both system stability and team confidence in making changes. Teams can then focus more on innovation and less on handling preventable incidents. To learn more, check out these starter guides on system reliability from OpsLevel and Gremlin.
Ultimately, product and engineering are part of the same bigger team and share the same goal – to build products that customers love. The tips above are simply examples of the ways that engineering teams can uniquely contribute to that bigger, shared goal of building an amazing product.
Engineering teams use Multitudes to perform at their best, which translates to stronger product performance.
Multitudes is an engineering insights platform built for sustainable delivery. Multitudes seamlessly integrates with your existing tools like GitHub and Jira to provide a comprehensive view of your team's technical performance, operational health, and collaboration patterns.
By leveraging Multitudes, teams can focus less on metrics collection and more on using these insights to drive engineering excellence.
Our clients ship 25% faster while maintaining code quality and team wellbeing.
Ready to improve your engineering performance?