NPS and C-SAT: What's the difference?

Most of our clients (all very large corporations and well-known nonprofits) have established the Net Promoter Score (NPS) at the center of their business, using it as the metric to track and drive their organization's growth and success. Here at Taylor, having been in the research business for as long as we have, we grew up in the age when "customer satisfaction" measurement was our clients' key metric, and we have seen, and been part of, the transition to NPS over the last several years.

Conceptually, we can see the difference between the two:

Satisfaction . . . 

. . . Is about me only (it’s about how I feel)

. . . Is an attitude about the present (it’s about how I feel now)

Willingness to recommend . . . 

. . . Is about me and someone else (it’s relational)

. . . Is an attitude about the future (what the other’s experience will likely be)

Conceptual distinctions do not, of course, necessarily entail operational distinctions. The interesting thing for us is that, as a metric (that is to say, as a survey measure), there is little difference to speak of between a 10-point (or 11-point) customer satisfaction score, on the one hand, and the Net Promoter Score on the other.   

Here are two examples, drawn from two different nationwide industry studies of cable TV customers (sample sizes of 3,800 and 1,500). Both measures were included in each study: the NPS measure (the 10-point likelihood-to-recommend scale) and a 10-point customer satisfaction measure (1 = extremely dissatisfied, 10 = extremely satisfied).

Every way you look at the relationship, at least in these two examples, there's not much to suggest that the two measures are different, yield different results, or measure different things. For example:

  • The correlation between the two (Pearson's r) is .84 in one study and .88 in the other. You can't get much higher correlation coefficients than those between two survey measures.
  • The means of the two scores are virtually the same in both studies:  
    • 7.22 for customer satisfaction, 7.04 for NPS in one study.
    • 7.14 for customer satisfaction, 7.17 for NPS in the other.
  • The variation around the mean (standard deviation) on both scores is very similar in both studies:
    • 2.1 for customer satisfaction, 2.7 for NPS in one study.
    • 2.0 for customer satisfaction, 2.2 for NPS in the other.
  • If we follow the NPS categorization (combining 9s and 10s, 7s and 8s, and 6s and below), the vast majority of scores fall into the same category on the two measures in both studies:
    • 69% in one study
    • 85% in the other
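For readers who want to run this kind of comparison on their own data, the statistics above can be reproduced in a few lines of Python. The two score lists below are invented for illustration (they are not the cable TV data); they stand in for paired satisfaction and likelihood-to-recommend ratings from the same respondents, bucketed with the standard NPS cut points:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def nps_bucket(score):
    """Map a rating to its NPS category: 9-10 promoter, 7-8 passive, below 7 detractor."""
    return "promoter" if score >= 9 else "passive" if score >= 7 else "detractor"

# Hypothetical paired ratings (satisfaction, likelihood-to-recommend) from ten respondents
csat = [8, 9, 7, 6, 10, 5, 8, 7, 9, 4]
ltr  = [8, 10, 7, 6, 9, 5, 9, 8, 9, 3]

r = pearson_r(csat, ltr)
agree = sum(nps_bucket(a) == nps_bucket(b) for a, b in zip(csat, ltr)) / len(csat)
print(f"r = {r:.2f}, same NPS category: {agree:.0%}")
```

Means and standard deviations of each list (via `sum(x)/len(x)` and the usual formula, or the `statistics` module) round out the same four comparisons made above.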

If there is so little difference between what the two measures produce, how is it that NPS has taken the corporate world by storm? The answer, as we see it, is (a) thoughtful packaging, (b) relentless promotion, and (c) momentum.

Its founders offered NPS as a simplified one-question solution for tracking performance through the eyes of customers; and equally importantly, they created intuitively appealing labels for their categorized scale points:

  • Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, fueling growth.
  • Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
  • Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.
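The familiar formula behind these labels is NPS = % Promoters minus % Detractors. A minimal sketch in Python (the ratings below are made up for illustration):

```python
def net_promoter_score(scores):
    """Compute NPS = % promoters - % detractors from 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical ratings from ten respondents
ratings = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(net_promoter_score(ratings))  # 4 promoters, 3 detractors -> 10.0
```

Note that the resulting score runs from -100 to +100, which is part of why NPS and a mean satisfaction score look so different on a dashboard even when the underlying responses are nearly identical.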

The developers and leading practitioners built Net Promoter into an all-embracing management system for transforming an organization by energizing employees and captivating customers. They achieved this by explicitly making the case that NPS is much more than just a metric. To quote perhaps the leading practitioner, Satmetrix:

“It’s more than a numbers game. Net Promoter programs are not traditional customer satisfaction programs, and simply measuring your NPS does not lead to success. Companies must follow an associated discipline to actually drive improvements in customer loyalty and enable profitable growth.”

This is the kind of “packaging” (and we use this term in the most complimentary way) that practitioners of customer satisfaction measurement just never fully achieved. Moreover, NPS practitioners have been highly aggressive at selling Net Promoter programs into corporate America and beyond. This combination of great packaging and relentless promotion has created a snowball effect, with one large organization after another establishing Net Promoter as the customer-centered metric to drive their business forward.

If you have been around for any length of time and you think about it, you’ve got to feel a little sad for “customer satisfaction.” Arguably (and empirically) it measures the very same thing as NPS. It has a much longer history than NPS—history that’s been lost to organizations that dropped it in favor of NPS. But because it was never packaged and promoted effectively, when NPS came along customer satisfaction never really had a chance.

A note on the differences that do exist on the two metrics

It's noteworthy that, among respondents who fall into different categories on the two measures in our two cable TV customer survey examples, just as many are more satisfied than willing to recommend as the reverse. Focus groups (along with some speculation) suggest how and why such differences might exist.

Reasons why a consumer who is satisfied with a given product or service might be less willing to recommend that product or service to a friend:

1. The standard for “recommending” is higher than that for “satisfaction” (“I’ve got to love it to recommend it.”)

2. Reluctance to take responsibility (“I don’t want it to be on me if they have a bad experience as a result.”)

3. Concern for the other person (“I don’t want my friend to risk having a bad experience with what I recommended.”)

4. Worry about effect on friendship (“I don’t want something I recommend to potentially damage my relationship.”)

5. Disinclination to impose tastes (“What I like might not be what someone else might like.”)

6. Unwillingness to share (“If I’ve got something special, I might not want others to have it too, because then it’s not unique to me.”)

7. Too soon to recommend (“I’m satisfied with my experience so far, but I haven’t been with this provider long enough to feel comfortable recommending them.”)

8. Great product, no longer available (“Why would I ever recommend a product I have that I know my friend can’t get because it isn’t available anymore?”)

Reasons why a consumer might be willing to recommend a product or service despite not being particularly satisfied with it:

1. Only game in town (“I’m not all that satisfied, but if you want this service this is the only place to get it.”)

2. Great product/Lousy service (“They have terrible customer service, but the product is so good that I’d recommend it.”)

While there seem to be many more reasons for satisfaction to outrun willingness to recommend than vice versa (eight versus two), which might suggest that NPS ought to run lower than CSAT scores, the fact that that's not necessarily the case in our cable TV customer examples may say more about cable TV (the only game in town? great product, lousy service?) than anything else. Again, this is speculation as to the explanation.