Fixation with lagging metrics

Guru Kini
8 min read · Aug 2, 2021


While running a growing SaaS startup, it is very easy to fall into the trap of sticking with metrics that give a sense of progress. Take ARR and MRR, for instance. They are better than most metrics because they are fairly objective indicators of revenue: not feel-good, vague indicators of “growth”, but money in the bank, money that your customers have paid in order to use your product. However, fixating on them can also spell disaster in the medium term. Here’s why…

Let us take ARR (Annual Recurring Revenue) and unpack it a bit. For simplicity, let us say your SaaS product is offered only with an annual subscription of say $100/year (no monthly subscription, no multi-year subscription). So ARR is calculated as:

  • New subscriptions added in the year, plus
  • Subscription renewals from existing customers, plus
  • Subscription upgrades from existing customers (say, from Individual Plan to Team Plan, if that applies to your product)

And then we deduct the revenue loss (or churn), which is essentially the subscription revenue that existed last year but isn’t now:

  • Subscription loss from downgrades (if applicable), and
  • Subscription loss from cancellations.
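The arithmetic above can be sketched in a few lines. This is a minimal illustration under the single-plan, $100/year assumption; all figures and names are made up:

```python
# Sketch of the ARR calculation described above, assuming a single
# $100/year plan. All counts and dollar amounts are hypothetical.
PRICE = 100  # $/year per subscription

def arr(new_subs, renewals, upgrade_revenue, downgrade_loss, cancellations):
    """ARR = additions (new + renewals + upgrades) minus churn
    (downgrades + cancellations), per the components listed above."""
    additions = (new_subs + renewals) * PRICE + upgrade_revenue
    churn = downgrade_loss + cancellations * PRICE
    return additions - churn

# 120 new customers, 300 renewals, $5,000 from upgrades,
# $1,000 lost to downgrades, 25 cancellations
print(arr(120, 300, 5000, 1000, 25))  # -> 43500
```

The point of writing it out is to see that ARR is pure bookkeeping: every term describes what already happened, which is exactly why it is forensic rather than predictive.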

ARR has great forensic value. It tells you exactly what happened and where you stand. It is hard to game or distort, and it is easy to calculate. Yet it can become misleading when used for projections. To understand why, let us walk through the customer subscription journey.

Acquiring a new customer is hard. It is harder than you initially thought. It is harder than your worst detractor told you. The competition is fierce and costs are high. So if a customer is already contributing to your ARR, you have crossed a significant hurdle. Your efforts have paid off, congratulations! The customer too has overcome their inertia and decided to invest in your product.

However, in the SaaS universe, the paying customer is fickle. They may decide not to renew their subscription next year. People often project the subscription loss from cancellations or downgrades and say, “We expect 5% of existing subscribers to cancel every year.” That may hold if you are a well-established SaaS company with 10 years in the market and enough data to generalize. But it is very misleading if you are in Year 1 or 2, especially if you have no idea why customers are churning. And in the first few years, you don’t really know; at best you have some hypotheses. It is all too tempting to dismiss the churn with statements like “Oh, those customers were early adopters, not our long-term user base”, or “These three wanted some bespoke features that we are not interested in building”, and so on. Perhaps all valid reasons, but none of them will help you remedy the churn problem.
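One way to see why an early-stage “5% churn” projection is shaky: with only a handful of customers, the confidence interval around an observed churn rate is enormous. A rough sketch using the standard Wilson score interval (the customer counts are hypothetical):

```python
import math

def wilson_interval(churned, total, z=1.96):
    """95% Wilson score confidence interval for an observed proportion."""
    p = churned / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / total + z**2 / (4 * total**2)
    )
    return center - margin, center + margin

# 1 cancellation out of 20 customers: observed churn is 5%, but the
# 95% interval is roughly 1%..24% -- far too wide to project from.
lo, hi = wilson_interval(1, 20)
print(f"{lo:.3f} .. {hi:.3f}")
```

With a 10-year-old customer base of thousands, the same formula pins the rate down tightly, which is why established companies can generalize and young ones cannot.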

Many growing SaaS companies fail to identify the usage patterns that predict which customers are going to churn. A lot of attention is paid to converting users into paying customers, but things tend to go downhill after that.

Measuring what matters

Stewart Butterfield (back in 2015, when Slack became a unicorn) noted that they don’t consider just the number of new signups as a reliable metric; instead, they look at the number of teams who have exchanged at least 2,000 messages on Slack. These teams had really found great value in what Slack was offering. These were teams that had actually preferred Slack as their tool of choice — they were the potential champions of Slack. And indeed, it turned out that 93% of the teams that crossed the 2,000 messages threshold stayed on Slack.

Every SaaS product needs similar leading signals that indicate the likelihood of customer churn. Forensic metrics like ARR should be used along with such leading signals to understand where the business is going. The problem is that identifying such signals is hard work. While metrics like ARR are well established in startup literature, the leading signals are likely to be very specific to your product and user base. This means you have to make the effort to define what’s important.
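A Slack-style leading signal can be as simple as flagging accounts that have crossed an activation threshold. The 2,000-message threshold below is Slack’s, from the example above; the account names, and whatever your own threshold and usage measure turn out to be, are product-specific placeholders:

```python
# Hypothetical per-account usage counts (e.g. messages exchanged,
# per Slack's example). Account names are made up.
ACTIVATION_THRESHOLD = 2000

usage = {"acme": 3100, "globex": 450, "initech": 2050, "umbrella": 120}

def likely_to_stay(usage_by_account, threshold=ACTIVATION_THRESHOLD):
    """Accounts above the activation threshold: your candidate champions.
    Accounts below it are the ones to reach out to before renewal time."""
    return sorted(a for a, n in usage_by_account.items() if n >= threshold)

print(likely_to_stay(usage))  # -> ['acme', 'initech']
```

The code is trivial; the hard part, as the paragraph above says, is discovering which measure and which threshold actually correlate with retention for your product.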

Instead, we often see overzealous user-activity tracking: GBs of data collected through dozens of tools. There are three problems with this:

  • The product often slows down, annoying the user
  • Often no one knows what to do with the massive amount of data
  • No user likes unnecessary surveillance

Stop spying on your customers and start talking to them — Dan Martell

Collecting mountains of data seems to be the norm. Product teams even boast about it, as if all that data is going to magically convert into some actionable insight. More often than not, such data is collected just so it isn’t lost, just so someone can do some analysis down the line to figure out what’s going on. The more-is-better maxim actively hurts your customers. I liked Dan Martell’s view on this: “Stop spying on your customers and start talking to them”. But tracking is way more convenient: you hope that if you collect enough data, someday you will find the right set of signals with which to predict your growth. That, IMO, is wishful thinking.

On the other hand, talking to customers is incredibly hard, especially when you are growing. In the growth phase, it is not feasible to talk to every customer and collate what they have to say. Moreover, not all customers will have useful feedback: they may not have used the product properly yet, they may just be polite, they may be terse, or they may not really want your exact product but something adjacent to it (typically this set will have a bunch of features they want you to add). You will definitely get feedback if customers are frustrated with the product, especially if they are expected to pay for it. But that pushes you into a reactive, damage-control mode.

It is pointless to collect loads of user activity data without having a clear idea of how to derive relevant insights from it

What if, instead of investing heavily in analytics tools and surveilling the customer, you try doing it the hard way: identify the top three leading signals that will tell you the customer is happy with your product. And no, CSAT and NPS surveys won’t cut it — they rarely present the true picture. And no, there is no book, framework, tool, YouTube channel, or podcast that will tell you which signals to look for in your product. Your business is somewhat unique, and your SaaS customers’ needs are somewhat unique, so you will have to go the whole hog.

MAUs, DAUs, and other feel-good metrics

Just settling for MAUs/DAUs isn’t enough either. The hard part is defining what an “active user” means to your business. It can’t just be the number of users who logged in on a given day; different user types use your product to get different things done.

Let’s take a B2B example: say you have a CRM product. A decent per-client metric would be how many leads were added or updated per day by a Sales Executive user. An admin user, however, may use the product only once every few days, just to add or remove users. Perhaps that’s good enough, since that’s all you expect an admin to do. Or perhaps there are other features the admin could use but no one is using. Pondering over this can bring up interesting questions:

  • Are the admin users aware of these advanced features?
  • Are these features important enough for the admins to care about?
  • Are you planning to extend these features? Should you be spending any time or money on extending them? How would you calculate the ROI?
  • Could these features be offered as an add-on, or only with a higher-tier subscription?

Similarly, the sales manager users probably use certain reports far more than, say, the lead-creation features.

  • How often do they view the reports? A few times a day? A few times a week?
  • How often are they expected to view the said reports?
  • Which reports are least used?

Viewing reports is generally a passive activity. You can’t know for sure if it is serving the manager’s purpose. But if the manager users rarely view the reports, that is a signal that the managers don’t care whether your CRM exists or not. They are probably asking the sales executives to export everything into a spreadsheet, which is the more convenient and familiar option. If these managers are influencers in purchase decisions, it is likely they will vote against your CRM in the next subscription cycle. Meanwhile, the sales executives, who do find some value in your product, are frustrated that they have to copy everything into spreadsheets, doubling their work. The upshot: your product is just not compelling enough to switch over from their spreadsheets.
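The role-aware “active user” idea in this CRM example can be sketched as a per-role predicate instead of a bare login count. The roles, event names, and time windows below are all illustrative assumptions, not a prescription:

```python
from datetime import date, timedelta

# Hypothetical event log: (user, role, action, day). In practice this
# would come from your product's own activity store.
events = [
    ("alice", "sales_exec", "lead_updated", date(2021, 8, 1)),
    ("bob",   "manager",    "report_viewed", date(2021, 7, 20)),
    ("carol", "admin",      "user_added",    date(2021, 7, 5)),
]

# What "active" means, per role -- the hard, product-specific part.
# (action that counts, lookback window in days)
ACTIVE_RULES = {
    "sales_exec": ("lead_updated", 1),    # updated a lead in the last day
    "manager":    ("report_viewed", 7),   # viewed a report in the last week
    "admin":      ("user_added", 30),     # touched user management this month
}

def is_active(user, role, events, today):
    action, window_days = ACTIVE_RULES[role]
    cutoff = today - timedelta(days=window_days)
    return any(u == user and a == action and d >= cutoff
               for u, _, a, d in events)

today = date(2021, 8, 2)
print(is_active("alice", "sales_exec", events, today))  # -> True
print(is_active("bob", "manager", events, today))       # -> False
```

Here bob counts as inactive despite being a paying seat: exactly the quiet, report-skipping manager the paragraph above warns will vote against you at renewal time.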

An unused feature may give you more useful insights into your product’s future than dozens of usage metrics.

Many teams spend more time building new features and acquiring new customers without giving equal attention to existing customers. MAUs/DAUs and similar metrics, if superficially defined, can only lull you into a false sense of security that existing business is not going to churn. And yet, metrics like these are far too commonly tracked (and rewarded): it is again a case of Metrics of Convenience being prioritized.

Your business, your customers’ needs, and your solutions are going to be unique. You may get blindsided by relying solely on lagging indicators because they are “tried and tested” and everyone is using them. You will need to work out which leading signals are relevant for you. This is hard work and needs to be iterative. In the process, you will be tempted to choose metrics that are easy to measure and/or paint a pretty picture of the business — resist going down that route. Here is a simple list of questions to get you started on identifying such leading signals:

  • Who are the different types of users? Which type of users will benefit most from your product?
  • Which users are the decision-makers and influencers for your product? Is your product solving something for them?
  • How will you objectively know that users find value in your product? If you pull your product off the market, who will miss it?
  • Can you quantify the value your product provides in some intuitive way? (Avoid any contrived calculations that are designed to give feel-good numbers)
  • Which features seemed most important during your product research but are barely used in the actual product? Was something wrong in the research or in the implementation?
  • How do you know you are seeking out the right metrics or user feedback? If there is some confirmation bias at play, do you have a culture where anyone in the team is empowered to call it out?

To summarize: Finding the right leading metrics for your SaaS product is incredibly hard. It is important not to get comfortable with commonly defined metrics like ARR/MRR, MAUs/DAUs, ARPA, LTV, etc. — these are necessary but may not be sufficient. You will have to build on top of these, not blindly follow them. Finally, never confuse metrics with insights — it is never about fancy charts and tables.
