Service Ops

Stop Chasing CSAT — It's Lying to You

A 4.6/5 CSAT feels great. It's also telling you almost nothing. Three structural problems — survivor bias, rating inflation, agents begging for 5-stars — have broken the metric. Here's what to measure instead.

A lot of support teams treat CSAT as their north star. Quarterly review, 4.6/5, everyone claps.

Here's the problem: a high CSAT score and actual customer satisfaction have very little to do with each other.

Not an opinion. Three structural problems have turned CSAT into a metric that rewards self-deception.

1. Survivor bias

Who fills out a CSAT survey?

Two groups: the extremely happy, and the extremely angry. The massive middle — "it's fine," "good enough," "might switch next time" — doesn't fill it out.

Worse: the customers about to churn stop answering surveys before they churn. They don't have the energy to complain.

Result: your CSAT goes up while your churn goes up, the two curves moving together. The team is celebrating the CSAT score while customers are quietly canceling the renewal.

2. Rating inflation

Consumers grade softer on 5-star scales every year. It's industry-wide — Uber, DoorDash, hotels. Anything under 4.5 basically means "bad."

Same story for CSAT. A 4.6/5 sounds great, but after inflation it's just the pass line. The real alarm is anything under 4.2.

More important: inflation kills discrimination. Agent A scores 4.7, Agent B scores 4.6 — that gap carries zero signal. You literally cannot tell who is better.
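That claim is easy to check with basic statistics. Here's a minimal sketch, with hypothetical rating samples, showing that at typical monthly survey volumes the 95% confidence intervals around a 4.7 and a 4.6 overlap almost completely — the gap is noise:

```python
from statistics import mean, stdev
from math import sqrt

def rating_ci(ratings, z=1.96):
    """95% confidence interval for a mean star rating."""
    m = mean(ratings)
    se = stdev(ratings) / sqrt(len(ratings))
    return m - z * se, m + z * se

# Hypothetical monthly samples: ~100 surveys each, means of 4.70 and 4.60.
agent_a = [5] * 78 + [4] * 14 + [3] * 8            # mean = 4.70
agent_b = [5] * 74 + [4] * 16 + [3] * 6 + [2] * 4  # mean = 4.60

lo_a, hi_a = rating_ci(agent_a)  # roughly 4.58 .. 4.82
lo_b, hi_b = rating_ci(agent_b)  # roughly 4.45 .. 4.75
# The intervals overlap heavily: the 0.1 gap says nothing about who is better.
print(f"A: {lo_a:.2f}-{hi_a:.2f}  B: {lo_b:.2f}-{hi_b:.2f}")
```

You'd need several hundred responses per agent per period before a 0.1 gap on an inflated scale separates from noise — volumes most teams don't have.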

3. Agents begging for scores

The hardest one to fix.

Tie CSAT to an agent's KPI and the agent will end every conversation with "if you had a good experience, could you give me a 5? Thanks so much!" The moment that line enters the script, CSAT stops being customer feedback and becomes emotional pressure from the agent.

Craziest version we've seen: a team at 98% CSAT. Drill in — every conversation ends with three emoji packs and "pls pls give me a 5, my KPI is on this." The number means nothing.

What to measure instead

Not "stop listening to customers." Stop making CSAT the main number. Better options:

1. Segmented NPS

NPS has its own problems, but segmenting it unlocks a lot.

  • New customer NPS (within 30 days of first purchase)
  • Tenured customer NPS (6+ months)
  • Post-support-interaction NPS

Track three curves separately. Tenured NPS dropping? Your product or service is aging. Post-support NPS dropping? The support experience itself is degrading. A single aggregate number hides all of this.
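The segmentation is trivial to compute if responses are tagged at survey time. A minimal sketch, with hypothetical segment names and scores (NPS = % promoters scoring 9–10 minus % detractors scoring 0–6):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical responses, tagged with a segment when the survey was sent.
responses = [
    ("new", 9), ("new", 10), ("new", 7), ("new", 6),
    ("tenured", 8), ("tenured", 5), ("tenured", 9), ("tenured", 4),
    ("post_support", 10), ("post_support", 3), ("post_support", 8),
]

by_segment = {}
for segment, score in responses:
    by_segment.setdefault(segment, []).append(score)

# Three separate curves, not one aggregate number.
for segment, scores in by_segment.items():
    print(segment, nps(scores))
```

The key design choice is tagging at send time, not at analysis time — a customer who was "new" when surveyed stays in the new-customer curve even after they cross six months.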

2. RR — Repeat Rate

For e-commerce and most DTC (direct-to-consumer) businesses, this is the honest metric. Customers can lie with the word "satisfied." Customers cannot lie with a second purchase.

RR also measures the whole chain — product, service, delivery, support — together. CSAT only measures the support stretch. RR measures everything.
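In its simplest form RR is just "customers with two or more orders, divided by all customers." A minimal sketch, assuming a flat list of customer IDs (one entry per order — adapt to your actual order table):

```python
from collections import Counter

def repeat_rate(order_customer_ids):
    """Share of customers who placed at least two orders."""
    counts = Counter(order_customer_ids)
    repeaters = sum(1 for n in counts.values() if n >= 2)
    return repeaters / len(counts)

# Hypothetical orders: 5 customers, 2 of whom came back.
orders = ["c1", "c2", "c2", "c3", "c4", "c4", "c4", "c5"]
print(f"{repeat_rate(orders):.0%}")  # prints 40%
```

In practice you'd compute this per acquisition cohort with a fixed window (e.g. second order within 90 days), so new-customer growth doesn't dilute the rate.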

3. Lightweight CES

CES asks "how much effort did it take to get this resolved?" It's less emotionally biased than CSAT.

Lightweight version: at conversation close, one question — "did we solve it? [yes / no, still stuck]". Binary, not a 5-point scale. Advantages:

  • No amount of agent politeness changes the answer
  • Every "no" is a case study — pull the conversation ID, review it
  • Harder number, no room for inflation
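The whole pipeline fits in a few lines. A minimal sketch, with hypothetical conversation IDs, computing the resolution rate and pulling the review queue of every "no":

```python
# Hypothetical close-of-conversation records: (conversation_id, resolved?)
closes = [
    ("cv-101", True), ("cv-102", True), ("cv-103", False),
    ("cv-104", True), ("cv-105", False),
]

resolved = sum(1 for _, ok in closes if ok)
rate = resolved / len(closes)

# Every "no" is a case study: pull the conversation ID for review.
to_review = [cid for cid, ok in closes if not ok]

print(f"resolution rate: {rate:.0%}")  # prints 60%
print("review queue:", to_review)     # ['cv-103', 'cv-105']
```

Because the answer is binary and tied to a concrete conversation, the metric doubles as a work queue — no inflation to argue about, just a list of failures to read.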

Practical take

CSAT isn't useless. It's unusable as a KPI. Once it's a KPI, it gets gamed.

Use CSAT as a diagnostic, glance at trends occasionally. Make RR or CES your north star. Put agent incentives on RR, not CSAT.

Your service will feel different.