Since 2003, many companies have relied on Net Promoter Score (“NPS”) for insight into how happy and loyal their customers are. However, with customer expectations rising and customer experience reigning as today's primary competitive differentiator, many CX leaders are debating the efficacy of the NPS metric in measuring customer relationship health.
We enjoyed hosting Sherrod Patching, Head of Global Technical Account Managers at GitLab, and Graham Gill, VP Success & Services at Accent Technologies, to help us understand NPS’ role in measuring the customer experience. Sherrod currently uses NPS in two primary ways: first, Product Marketing delivers a product-centric NPS survey to a random sample of users quarterly; second, Customer Success distributes surveys to customers upon reaching key journey milestones, such as completing onboarding, to generate snapshots of customer relationship health. Conversely, while Graham has used NPS in the past, he no longer uses it and doesn’t plan to return to it. Here are our top three takeaways from our discussion about the limitations of NPS and how to navigate them.
1. NPS Is an Attribute, Not the Attribute
Sherrod shared some context on how GitLab factors NPS inputs into the company’s overall customer health scores. For accounts with a Technical Account Manager (“TAM”) assigned, NPS is one of several attributes that comprise the overall customer health score; with the oversight of a TAM, it’s less likely that relationship risk signals will fall through the cracks. For GitLab’s large base of smaller tech-touch, or “digital,” customers that don’t have dedicated TAM attention, NPS carries more weight in the customer health score. However, Sherrod cautioned that “there isn’t ever a point in the customer lifecycle where we would look at NPS and accept the score as a 100% determinant of customer health. NPS is an attribute, not the attribute.” Especially without a TAM, Sherrod emphasized the importance of incorporating a variety of signals into customer health scores, including usage data, time-to-value, and insights mined from unstructured data such as support tickets and free-form survey comments. Together, these rich sources of customer feedback can help you understand what drives individual customer ratings and clarify how passives feel, and, most importantly, reveal the drivers underneath positive and negative trends across your customer base.
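To make the “an attribute, not the attribute” idea concrete, here is a minimal sketch of a composite health score in which NPS is one weighted signal among several, and its weight shrinks when a TAM is assigned. The attribute names and weights are purely illustrative assumptions, not GitLab’s actual model.

```python
# Hypothetical sketch: NPS as one weighted attribute in a composite
# customer health score. Attribute names and weights are made up for
# illustration; they do not reflect GitLab's real scoring model.

def health_score(attributes: dict, has_tam: bool) -> float:
    """Combine normalized attribute values (each 0-1) into a 0-100 score.

    With a TAM assigned, NPS carries less weight because the TAM surfaces
    relationship risk directly; for tech-touch ("digital") accounts,
    NPS weighs more heavily.
    """
    if has_tam:
        weights = {"nps": 0.15, "usage": 0.35,
                   "time_to_value": 0.25, "support_sentiment": 0.25}
    else:
        weights = {"nps": 0.40, "usage": 0.30,
                   "time_to_value": 0.15, "support_sentiment": 0.15}
    return 100 * sum(weights[k] * attributes[k] for k in weights)

# Same account signals, two segments: a weak NPS is diluted by other
# healthy signals for a TAM account, but dominates for a digital account.
account = {"nps": 0.2, "usage": 0.8,
           "time_to_value": 0.7, "support_sentiment": 0.75}
print(health_score(account, has_tam=True))
print(health_score(account, has_tam=False))
```

Running the example shows the same underlying signals producing a noticeably lower score for the digital segment, where NPS is weighted more heavily.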
While GitLab is a clear exception, Graham observed that NPS is still the attribute for too many companies — when this is the case, he says, NPS risks devolving from a valuable benchmark to a vanity metric. Reflecting on an experience with NPS, Graham recalled,
“We had really high NPS scores. However, these scores didn’t reflect the pulse within the Customer Success team or across the post-sales team more broadly, so something wasn’t right. It turned out that we were loading the questions and missed what was truly bothering our customers. It was great that customers reacted positively to the questions that we asked them, but it turned out that we were asking the wrong questions.”
Gamification, Graham explained, is another slippery slope that can turn NPS into a vanity metric. He cited several well-meaning attempts to increase survey participation through gamification that ended up biasing survey feedback and creating skepticism among internal and external reporting audiences. While NPS can deliver useful insights, both Graham and Sherrod agreed that those insights are most valuable when contextualized by other sources of customer feedback.
2. Value Depends on Segmentation and Trends vs. Scores
Despite its limitations, NPS still plays an important role as a benchmark in helping companies understand customer experience — but successfully incorporating NPS into a CX initiative is not as simple as blasting out a generic question to your entire base and reporting aggregate statistics.
Both Graham and Sherrod emphasized the importance of targeted questions when designing NPS surveys. Sherrod explained,
“If you’re asking questions along the lines of, ‘how happy are you with your product experience?’ and you’re not differentiating between a customer who is two months into their relationship with you from a customer who is two years in, your results will be skewed. The more segmentation you can build on top of surveys, the more impactful the output can be.”
Graham added that “the questions can’t be one-size-fits-all. That approach is like trying to fit a square peg into a round hole and will set you up for failure.”
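The skew Sherrod describes can be illustrated with a toy calculation. The sketch below uses the standard NPS formula (percentage of promoters scoring 9–10 minus percentage of detractors scoring 0–6) on made-up responses bucketed by customer tenure; the numbers are invented solely to show how an aggregate score can hide very different stories per segment.

```python
# Illustrative sketch: aggregate NPS vs. NPS segmented by customer tenure.
# NPS = % promoters (scores 9-10) minus % detractors (scores 0-6),
# yielding a value from -100 to 100. All responses below are fictional.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

responses = {
    "0-6 months": [9, 10, 9, 8, 10, 9],  # early honeymoon period skews high
    "2+ years":   [6, 7, 5, 9, 6, 7],    # mature accounts tell another story
}

# A single blended number masks the divergence between segments.
all_scores = [s for seg in responses.values() for s in seg]
print("aggregate:", nps(all_scores))
for segment, scores in responses.items():
    print(segment, nps(scores))
```

Here the aggregate looks comfortably positive even though the two-year cohort is deeply negative, which is exactly why both guests push for segmentation before reporting.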
Furthermore, Sherrod and Graham agreed that ensuring stakeholders understand how to interpret the nuances of NPS in reporting is just as important as asking the right questions of the right audience. When reporting NPS to the Board, trends matter far more than absolute scores. Sherrod said,
“The snapshot in time isn’t that interesting without context. But if I can weed out outliers or questions with low response rates, and dig into the underlying data and say, ‘we believe this is indicative of what’s happening in our customer base,’ that can be powerful.”
Without digging into the underlying data, it can be difficult to evaluate the veracity of what NPS may be signaling. For example, because a broad range of factors can influence NPS scores, Graham and Sherrod agreed that Customer Success Managers (“CSMs”) should never have compensation linked to scores. In summary, Sherrod concluded, “NPS is excellent, but it’s not going to give you a sense of whether or not your customers will renew.”
3. Action Happens Beyond the “Iceberg Tip”
Many CX leaders are familiar with the iceberg analogy, which implies that most organizations can only harness a small percentage of their available customer feedback, i.e., the “tip of the iceberg.” Graham called out perpetuating the “iceberg mentality” as one of NPS’ most salient flaws. He said, “the way that most companies approach NPS leaves teams sitting at the metric level when what’s going on underneath the metric level is what matters.”
Graham highlighted a typical example in which companies ask customers to rate their experience working with their team.
“With questions like this, we typically see favorable results because no one wants to throw their CSM under the bus unless there are severe grievances. But what the ‘good NPS results’ miss is how the customer proceeds to answer the follow-up, ‘anything else you’d like us to know?’ question where the customer reveals they have no idea how to use their system and that nothing is working.”
Furthermore, Graham emphasized that details uncovered several layers below the “iceberg tip” inspire real action to improve customer experience, not survey scores.
“I’ve never seen someone look at NPS data and say, ‘here are our immediate actions!’ This begs the following question — when organizations report the numbers, are they touting it as a vanity metric, or is it used to influence decisions about taking action within the organization? Execution begins when you drill into the underlying details a few altitudes below the metric level.”
One way to drill into the details is to look at the unstructured data sources mentioned earlier: support tickets, free-form survey comments, and customer community posts. Sherrod shared an example of how customer comments on GitLab’s digital programs helped the team make data-driven decisions about what kinds of content to produce and which formats to use. “We iterate on all of our digital programs based on customer feedback. A lot of it is the unstructured text that someone has to read through, but that’s what’s most valuable.”
Many thanks to Sherrod Patching and Graham Gill for an incredibly insightful discussion! Check out the full recording below for more.