People & Process

Top Software Product Metrics: Measure, Analyze, and Improve Performance in 2025


We're data-driven in so many other software development practices – when we're analyzing logs, examining test coverage, improving app performance, and more. So of course we want to be data-driven in how we measure and improve our software delivery performance.

One note here though: The bigger the thing that we want to understand, the more likely it is that data can only show part of the picture. So for something as big as understanding our software delivery performance, data will only ever be part of the answer. That's why we ultimately recommend being data-informed rather than data-driven. The data isn't telling us what to do; it's showing us what questions to ask and where to spend our time.

In this article, we’ll explore how software product metrics can serve as a guide to achieving exceptional software delivery.

Sections:

1. Top software product metrics for success
2. How do you improve software product metric outcomes?
3. Drive engineering impact with Multitudes

There are endless metrics available now. Research from Cornell suggests there are over 23,000 metrics you can use to measure software product quality alone! That is far too many to be feasible for any one team.

Below we’ll summarize the key metrics that we believe should be a priority for product teams:

Active users

Want to understand how engaged users are with your product? Active user metrics give you a first view of customer behavior. They show you whether customers are deriving value from your product, through how frequently they interact with your software. In general, the higher this number is, the better, since it means more people are getting value.

Active users can show how sticky your product is and how engaged users are. And changes in this metric can help with product decision-making, flag early signs of customer churn, and more.

How it's calculated:

Active users = the number of unique users who performed a specific action in a given time frame

It's important to use a consistent definition of "active user" – and the action you choose will depend on what meaningful engagement looks like in your product. Examples include an app log-in, an interaction (liking a post, sending a message), or playing content (e.g., on Spotify or TikTok).

The formula for this is Total Active Users = Unique New Users + Unique Returning Users

You can also measure this at the Daily, Weekly, or Monthly level:

  • Daily Active Users (DAU) — Active users who engage with your app at least once per day. Gaming apps love this metric since it helps them track those multiple-times-a-day habits they aim to build.
  • Weekly Active Users (WAU) — Active users who engage with your app at least once during a 7-day period. This timeframe can be useful for business tools like Slack, where weekend dips could skew your daily numbers.
  • Monthly Active Users (MAU) — Active users who engage with your app within the last 30 days. This is a key metric in understanding your app’s "stickiness": it shows how well you are retaining users over time. MAU also serves as a foundation for calculating other crucial metrics, such as retention rate, customer lifetime value, and more.
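As a rough sketch, the DAU/WAU/MAU definitions above can be computed from an event log of meaningful actions. The event data and user IDs here are made up for illustration:

```python
from datetime import date, timedelta

# Hypothetical event log: (user_id, event_date) for each "meaningful action"
events = [
    ("ana", date(2025, 3, 1)), ("ben", date(2025, 3, 1)),
    ("ana", date(2025, 3, 2)), ("cai", date(2025, 3, 5)),
    ("ana", date(2025, 3, 20)), ("dee", date(2025, 3, 28)),
]

def active_users(events, start, window_days):
    """Unique users with at least one event in [start, start + window_days)."""
    end = start + timedelta(days=window_days)
    return len({user for user, day in events if start <= day < end})

dau = active_users(events, date(2025, 3, 1), 1)   # 2 (ana, ben)
wau = active_users(events, date(2025, 3, 1), 7)   # 3 (+ cai)
mau = active_users(events, date(2025, 3, 1), 30)  # 4 (+ dee)
```

The same function covers all three timeframes; only the window size changes, which is why a consistent definition of the "meaningful action" matters more than the window itself.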

It's not necessarily true that DAU is better than WAU or MAU. Daily usage is a good sign of product stickiness, but a product can be sticky without requiring daily usage.

For example, people use Airbnb when booking a vacation, and we don't expect them to do that every day (as much as we might like to take vacations that often!). DAU isn't a good metric for Airbnb because the need it solves (where to stay when I travel) isn't a daily need. But that doesn't mean it's not an important need or a sticky product; Airbnb has built a unicorn company (valued >$1B) in its space.

Churn rate

Churn represents the moment users say goodbye to your product – it's the percentage of customers who stop using your service over a given time period (as decided by the organization). For example, Spotify experiences churn when a user cancels their premium subscription after a free trial or a few months of paid usage. The customer may have switched to a competitor like Apple Music or reverted to the free ad-supported version, but in either case they churned.

While gaining new users is exciting, keeping your existing ones happy often tells you more about your product's true health and how well you're meeting user needs. The time period varies for the same reason DAU isn't the best metric for Airbnb – some user needs are less frequent than others, so for some products days or weeks with no activity might be normal, whereas for others that would be a sign of churn. You'll also have to decide whether the "moment of churn" is when users cancel their subscription or when the subscription actually ends. As with the "Active Users" metric above, the key is that you make an organization-wide decision and then measure consistently.

Think of churn as your product's vital signs. A rising churn rate might signal that something needs attention – maybe your latest feature isn't hitting the mark, users aren't seeing enough value over time, or they're hitting friction points in their journey. By tracking churn closely, product and engineering teams can spot these warning signs early and make improvements before small issues become bigger problems.

How it's calculated:

Churn Rate = (Customers lost in a period / Customers at the beginning of the period) x 100

Here's the simple math:

  1. Take the number of customers who left during a period
  2. Divide it by your total customers at the start

Example: Let's say you're running a collaboration tool – if you begin a quarter with 2,000 active workspaces and 100 close their accounts by the same quarter's end, your quarterly churn rate is 5%.
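That example can be sketched in a few lines of Python (the workspace numbers come from the example above):

```python
def churn_rate(customers_at_start, customers_lost):
    """Churn for a period, as a percentage of starting customers."""
    return 100 * customers_lost / customers_at_start

# The collaboration-tool example: 2,000 workspaces at the start, 100 closed
print(churn_rate(2000, 100))  # 5.0
```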

Why focus on churn? Because keeping existing customers engaged typically costs far less than winning new ones. Plus, those satisfied long-term users often become your best advocates, helping fuel sustainable growth.

Customer satisfaction

Very closely related to churn is customer satisfaction, since satisfied customers rarely churn. Customer satisfaction measures how well a company's products and services meet customers' expectations.

See below for the 3 most commonly-used customer satisfaction metrics:

  1. Net Promoter Score (NPS) — a measure showing how likely your customers are to recommend your products. Respondents answer using a 0-10 scale, with 10 being “Very Likely” and 0 being “Very Unlikely.”

The 0-10 scale makes it easier for you to segment customers according to their responses:

  • Promoters (9-10): Your biggest fans who actively spread the word
  • Passives (7-8): Satisfied users who aren't quite ready to rave about you
  • Detractors (0-6): Unhappy customers who might be considering alternatives

How it's calculated:

NPS = (Number of promoters - Number of detractors) / Total responses x 100

Another way to say this is: NPS = % promoters - % detractors

From this formula, you can see that the range for NPS is -100 (if everyone were a detractor) to +100 (if everyone were a promoter). The global NPS average is +32, with +35 as the average for technology companies and the top quartile of technology companies scoring +64 or higher (for more, see SurveyMonkey benchmarks).
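A minimal sketch of the NPS calculation, using a made-up batch of 0-10 survey responses:

```python
def nps(scores):
    """Net Promoter Score from 0-10 responses: % promoters - % detractors."""
    promoters = sum(1 for s in scores if s >= 9)    # 9-10
    detractors = sum(1 for s in scores if s <= 6)   # 0-6
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 4 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 8, 7, 7, 8, 3, 5]))  # 20
```

Note that the passives (7-8) drop out of the numerator but still count in the total, which is why a flood of merely-satisfied responses pulls NPS toward zero.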

  2. Customer Satisfaction Score (CSAT) — a measure that directly asks customers how satisfied they are with your product. Respondents are usually asked to rate their satisfaction level from 1 to 5, where 5 is “Very Satisfied" and 1 is "Very Dissatisfied”.

Tracking CSAT scores helps you monitor customer sentiment and refine your business based on their feedback. Higher satisfaction leads to greater loyalty, and loyal customers are the key to driving business growth. And as mentioned above, retaining loyal customers is much cheaper than acquiring brand new ones.

How it's calculated:

To calculate the percentage of satisfied customers:

1. Total the number of customers who are “very satisfied” (5) or “satisfied” (4)
2. Divide by the total number of responses, then multiply that result by 100 to get your customer satisfaction percentage.

The formula then looks like this:

CSAT = # of satisfied respondents (rated you a 4 or 5) / # of total survey respondents x 100

According to Retently, a good CSAT is a score between 65% to 80%, but varies across industries.
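Those two steps translate directly into code; the ratings below are invented for illustration:

```python
def csat(ratings):
    """CSAT %: share of respondents rating 4 ("satisfied") or 5 ("very satisfied")."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100 * satisfied / len(ratings)

print(csat([5, 4, 4, 3, 5, 2, 4, 5, 1, 4]))  # 7 of 10 satisfied -> 70.0
```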

  3. Customer Effort Score (CES) — a measure of a product’s ease of use for customers. People who take the survey choose between multiple answers, normally ranging from “Very Difficult” to “Very Easy”.

A study from Harvard Business Review found that future purchase behavior is correlated with a good customer effort score; the key insight here is that what customers value most is making it easy (reducing their effort) to do what they need to do. That's why this measure is so valuable.

How it's calculated:

Customer Effort Score is calculated very similarly to the satisfaction score:

1. Total the number of customers who answered “low effort” or “somewhat low effort” (the two most positive responses)
2. Divide by the total number of responses, then multiply that result by 100 to get your customer effort percentage.

The formula then looks like this:

CES = # of respondents rating low effort / # of total survey respondents x 100

You essentially want to count the best half of the responses as considering the process low effort, excluding the neutral midpoint. For example, if you had a 7-point scale instead of 5, you would include 5, 6, or 7. A strong CES score is considered above 90%.
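A sketch of that calculation, assuming numeric responses where higher means lower effort; the threshold rule (best half, excluding the neutral midpoint) is the one described above:

```python
def ces(responses, scale_max=5):
    """CES %: share of respondents in the 'low effort' half of the scale,
    excluding the neutral midpoint (4-5 on a 5-point scale, 5-7 on 7-point)."""
    threshold = scale_max // 2 + 2  # first rating above the neutral midpoint
    low_effort = sum(1 for r in responses if r >= threshold)
    return 100 * low_effort / len(responses)

print(ces([5, 4, 4, 3, 5, 5, 4, 5, 2, 4]))  # 8 of 10 low-effort -> 80.0
```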


Monthly recurring revenue (MRR)

For subscription-based products, MRR is your north star metric for your financials.

MRR is the predictable, recurring revenue that flows in from your customers each month. It's different from your monthly revenue because monthly revenue includes one-off purchases and also includes annual pre-payments; in contrast, MRR is the monthly revenue you can expect to bring in based on the value of the subscriptions you have today. This means that MRR evens out one-off changes and gives you a forward-looking view of value today, helping you understand your product's true financial health.

Think of MRR like a monthly health check for your business. Rising MRR often signals that your product strategy is working: new features are resonating, pricing feels right, and customers see enough value to stick around. By tracking these patterns over time, product teams can make more informed decisions about where to focus their efforts.

How to calculate it:

MRR = number of customers x average monthly revenue per customer

For example, if your developer productivity tool has 500 teams paying an average of $200 per month for their workspaces, your MRR would be $100,000.
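One subtlety worth encoding is the normalization mentioned above: annual pre-payments shouldn't inflate a single month. A minimal sketch, assuming subscriptions are represented as hypothetical (price, billing_cycle) pairs:

```python
def mrr(subscriptions):
    """MRR from (price, billing_cycle) pairs; annual plans are
    normalized to their monthly equivalent."""
    total = 0.0
    for price, billing in subscriptions:
        total += price / 12 if billing == "annual" else price
    return total

# The example above: 500 teams paying $200/month
print(mrr([(200, "monthly")] * 500))  # 100000.0
```

A team prepaying $2,400 for a year contributes $200 of MRR, the same as a $200/month team, which is exactly the "evening out" the metric is for.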

What "good" looks like for your business will depend on the financial goals that matter most to you right now. Are you aiming for profitability? Then compare MRR to your monthly expenses. Are you pushing for growth? Then look at your month-over-month changes in MRR and whether they're at the level you want.

Reliability and performance

Uptime is a cornerstone metric for web-based products, measuring the percentage of time your service is available and functioning correctly for users. It's a direct indicator of service reliability and has a significant impact on user trust and satisfaction.

How to calculate it:

Uptime = Product available time / Total time period x 100
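In practice it's often easier to measure downtime and subtract. A small sketch (the downtime figure is illustrative; 43.2 minutes per 30-day month corresponds to "three nines"):

```python
def uptime_pct(downtime_minutes, period_minutes):
    """Uptime % = available time / total time x 100."""
    return 100 * (period_minutes - downtime_minutes) / period_minutes

minutes_in_month = 30 * 24 * 60  # 43,200
print(round(uptime_pct(43.2, minutes_in_month), 3))  # 99.9
```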

Note that achieving high uptime requires managing complex infrastructure components: servers, networks, authentication systems, and firewalls must all work seamlessly together.

Poor uptime directly impacts user experience and business outcomes. Each moment of downtime erodes user trust, increases frustration, and can lead to higher churn rates. This makes uptime monitoring and optimization essential for maintaining product health and sustaining business growth.

Consider the importance of uptime to Amazon Web Services (AWS). An AWS outage causes major disruptions for major companies like Netflix, Disney+, and Slack, which has a snowball effect leading to lost revenue, frustrated customers, and operational chaos. Even a few hours of downtime can cost businesses millions, highlighting why maintaining high uptime is critical for reliability and customer trust.

2. How do you improve software product metric outcomes?

The above metrics provide an indication of your product’s performance. So how can you, as an engineering team, contribute to improving the product outcomes measured by these metrics?

Outside of the obvious – building the product that then gets measured – engineering teams can also contribute by improving their delivery performance, data infrastructure, and system reliability.

Build high-trust teams

When we think about software engineering performance, a strong engineering team thrives on more than just technical skills — it's built on trust, open communication, and productive collaboration that brings out the best in everyone. To foster a collaborative culture, we at Multitudes believe in starting with trust. Build an environment with psychological safety, where the people on the team feel secure and valued. And as it turns out, psychological safety was also the number one factor that Google found in determining whether a team would be high-performing (read more about Google's Project Aristotle research).

Building a high-trust culture starts with leaders role-modeling high-trust behavior to help their team members do the same. For example, “share of voice” in meetings is an important concept for us. As leaders, we want to make sure we’re giving more space to others to speak (research also shows the importance of listening in building trust). The way that our leaders at Multitudes think about this is by trying to "speak less, speak last" to ensure that others on the team get a chance to share their views.

Software delivery performance

Ultimately, if your development team can deliver quality work, faster, then both the organization and the customer win. Moving faster means more rounds of experiments, to learn what's working best for customers, and more rounds of iteration, so we can get a little bit better each time.

Being able to move faster, together, starts with understanding what is and isn’t working and what might be blocking the team. This is where Multitudes comes in – we suggest looking at the data first, measuring how your team is working together and delivering work, so you can identify areas of opportunity and make improvements.

Here are some tips to start measuring delivery performance:

  1. Focus on building trust first
  2. Start with a small, manageable selection of metrics. One or two alone won’t be a true reflection of the trade-offs we have to make
  3. Use research-backed metrics because they're better-validated and will often have robust industry benchmarks thanks to the research
  4. Don’t agonize about choosing perfect metrics. Metrics will evolve over time based on priorities — what’s important is leaving plenty of opportunity for team feedback and changes along the way

Our recommendation is to start with DORA and SPACE metric frameworks, which track developer productivity metrics such as Change Lead Time, Deployment Frequency, Wellbeing, and Collaboration.

Data infrastructure

By strengthening data infrastructure, engineering teams can directly impact the metrics users care about most. For example, building the infrastructure for comprehensive data logging enables you to understand your product's health in real-time, so teams can quickly spot and fix issues before users even notice them.

When paired with efficient data pipelines, teams can turn all this data into actionable insights that drive smarter decisions. Strong data infrastructure forms the foundation for future product analytics needs.

Data security and reliability also contribute to the health of the product, and can be improved by:

  • Secure data storage to prevent breaches
  • Reliable backup systems for business continuity
  • Performance monitoring to catch bottlenecks early

To learn more about data infrastructure check out this guide from the Institute of Data.

System reliability

It’s a no-brainer that system reliability is important for your product metrics. When systems experience issues or downtime, the impact can be serious — not just on your customers, but also on your customers’ customers. The 2024 CrowdStrike outage even led to shutdowns of airports and massive delays!

There are some tips that are commonly known to improve system reliability:

  • Failover systems ensure service continuity when issues arise, while automated deployment pipelines enable consistent and error-free releases.
  • Security measures and regular penetration testing help protect against vulnerabilities and potential threats.
  • Database performance stays smooth through optimized maintenance routines, and rate limiting helps protect your systems from becoming overwhelmed during peak usage.
  • Finally, building system redundancy eliminates single points of failure, creating multiple paths for your service to stay available.

These technical capabilities work together as building blocks for reliability. When implemented thoughtfully, they create a foundation that supports both system stability and team confidence in making changes. Teams can then focus more on innovation and less on handling preventable incidents. To learn more, check out these starter guides on system reliability from OpsLevel and Gremlin.

Ultimately, product and engineering are part of the same bigger team and share the same goal – to build products that customers love. The tips above are simply examples of the ways that engineering teams can uniquely contribute to that bigger, shared goal of building an amazing product.

3. Drive Engineering impact with Multitudes

Engineering teams use Multitudes to perform at their best, which translates to stronger product performance.

Multitudes is an engineering insights platform built for sustainable delivery. Multitudes seamlessly integrates with your existing tools like GitHub and Jira to provide a comprehensive view of your team's technical performance, operational health, and collaboration patterns.

With Multitudes, you can:

  • Track all key software delivery metrics such as DORA, SPACE, and more — all in one place
  • Identify patterns in your metrics that impact delivery speed and quality
  • Measure team collaboration and its effect on engineering performance

By leveraging Multitudes, teams can focus less on metrics collection and more on using these insights to drive engineering excellence.

Our clients ship 25% faster while maintaining code quality and team wellbeing.

Ready to improve your engineering performance?

Try Multitudes today!

Contributor
Multitudes
Support your developers with ethical team analytics.

Start your free trial

Get a demo