Support metrics worth tracking (and the ones to stop reporting)

Individual metrics lie. The relationships between them tell the truth.

What to track and why

First contact resolution rate (FCR). The single best indicator of whether your team can actually solve problems. Not "did we respond fast" but "did we fix it the first time." A team with high FCR is a team that understands the product, has access to good information, and doesn't need to bounce tickets around. When FCR drops, something structural changed: maybe a product update created unfamiliar issues, maybe new agents lack the knowledge to resolve on first contact, maybe your KB is out of date.
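
Operationally, FCR is just the share of resolved tickets that closed without a reopen or a handoff. A minimal sketch against a ticket export (the column names here are assumptions; map them to whatever your helpdesk actually exports):

```python
import pandas as pd

# Hypothetical ticket export; column names are placeholders, not a specific helpdesk's schema.
tickets = pd.read_csv("tickets.csv")  # assumed columns: id, status, reopen_count, agent_transfers

resolved = tickets[tickets["status"] == "resolved"]

# Count a ticket as resolved on first contact if it was never reopened and
# never bounced to another agent. Tighten or loosen this definition as needed.
first_contact = resolved[(resolved["reopen_count"] == 0) & (resolved["agent_transfers"] == 0)]

fcr = len(first_contact) / len(resolved)
print(f"FCR: {fcr:.1%}")
```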

Full resolution time. Not first response time. Full resolution time measures how long it takes from ticket creation to actual problem solved. First response time measures how quickly someone typed "we're looking into it." One of these tells you about service quality. The other tells you about typing speed.
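
The difference shows up directly in how each number is computed from timestamps. A rough sketch, assuming an export with creation, first-reply, and resolution times (placeholder column names):

```python
import pandas as pd

# Placeholder column names; map them to your helpdesk's export.
tickets = pd.read_csv(
    "tickets.csv", parse_dates=["created_at", "first_reply_at", "resolved_at"]
)

# First response time: how long until someone typed anything.
frt_hours = (tickets["first_reply_at"] - tickets["created_at"]).dt.total_seconds() / 3600

# Full resolution time: how long until the problem was actually solved.
resolution_hours = (tickets["resolved_at"] - tickets["created_at"]).dt.total_seconds() / 3600

# Medians, not means: a handful of long-running tickets will drag the mean around.
print(f"Median FRT: {frt_hours.median():.1f}h")
print(f"Median full resolution: {resolution_hours.median():.1f}h")
```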

Knowledge reuse rate. How often do agents reference past resolutions, KB articles, or saved replies when resolving tickets? This tells you whether your team is building on accumulated knowledge or starting from scratch every time. If reuse is low, agents are either reinventing solutions or can't find the existing ones. Either way, you have a knowledge access problem.

Customer effort score. How much work did the customer have to do to get their problem solved? Did they explain the issue once, or three times to three different agents? Did they have to follow up, or was it handled proactively? Effort predicts loyalty better than satisfaction. Customers don't leave because they had a bad experience. They leave because resolving it was exhausting.

What to stop reporting

Total ticket volume without context. "Ticket volume is up 20%" could mean your product is growing, or it could mean a broken feature is generating complaints. It could mean a seasonal spike, or it could mean your self-service is failing. The raw number tells you nothing without segmentation. If you're going to report volume, break it down by category, product area, or source. Otherwise it's noise that gets misinterpreted in every meeting.
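
The segmentation itself is cheap: a grouped count, compared period over period. A sketch with assumed column names:

```python
import pandas as pd

# Assumed columns: created_at, category, source.
tickets = pd.read_csv("tickets.csv", parse_dates=["created_at"])

# Monthly volume per category; the shift in the mix is the interesting part, not the total.
volume = (
    tickets
    .groupby([pd.Grouper(key="created_at", freq="MS"), "category"])
    .size()
    .unstack(fill_value=0)
)
print(volume.tail(3))  # last three months, broken down by category
```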

CSAT on individual tickets. The response rate on post-ticket surveys is typically 5-15%. The people who respond skew toward extremes: very happy or very unhappy. With a sample that small and that biased, individual CSAT scores tell you about the customer's mood, not your service quality. An agent who handles a billing dispute gets worse CSAT than an agent who handles a simple how-to question, regardless of how well either one performs.

Average first response time as a standalone metric. FRT is trivially gameable. Set up an auto-response and it drops to zero. Have agents send "looking into this" as their first action and it looks great on a dashboard. Meanwhile, the customer waits the same amount of time for an actual answer. FRT doesn't predict customer satisfaction and it incentivizes exactly the wrong behavior: fast replies over good ones. If you track it at all, pair it with FCR so you can see whether those fast responses actually solved anything.

How metrics interact

This is where most support reporting goes wrong. Metrics are reported in isolation as if they're independent. They aren't. Optimizing one metric almost always affects the others, and not always in the direction you want.

Pushing for faster first response time can tank first contact resolution. Agents rush to respond before they fully understand the problem, leading to more back-and-forth, more reopens, and longer total resolution times. The dashboard shows FRT improving while the customer experience gets worse.

Optimizing for tickets closed incentivizes clearing the queue fast, not solving problems. If agents are measured on throughput, they'll close tickets with partial answers and hope the customer doesn't come back. Some won't come back because the issue resolved itself. Some won't come back because they gave up. Your volume numbers look great either way.

Resolution time has a complicated relationship with knowledge access. When agents can find past solutions quickly, resolution time drops because they're not re-investigating known issues. But when agents start doing more thorough investigations instead of deferring to someone else, resolution time goes up even though quality improves. The direction of the change only makes sense if you know what's driving it.

The metric that matters most is the one that changes other metrics when you improve it. For most teams, that's knowledge reuse. When agents consistently find and apply past resolutions, FCR goes up, resolution time goes down, duplicate rate drops, and customer effort decreases. If you're going to focus on one thing, focus on making your team's accumulated knowledge accessible.

Setting targets

Don't copy SLAs from a blog post or a competitor. A B2B SaaS company with enterprise clients who pay six figures a year needs different targets than a consumer product where support is a cost center. Your customers' expectations are specific to your product, your price point, and the alternatives they have.

Start from what your customers actually expect. The simplest way to find out is to ask them, either through post-resolution surveys that include an open-ended question about expectations, or through direct conversation during QBRs with key accounts. The less direct way is to measure where satisfaction drops off. If CSAT craters when resolution time exceeds 4 hours, you've found your target without asking.
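
One way to find that drop-off is to bucket resolved tickets by resolution time and look at satisfaction per bucket. A rough sketch, assuming you can join survey scores onto tickets; the column names and bucket edges are placeholders:

```python
import pandas as pd

# Assumed columns: resolution_hours, csat (1-5 survey score).
tickets = pd.read_csv("tickets_with_csat.csv")

# Bucket edges are arbitrary; pick ones that match how your customers experience waiting.
buckets = pd.cut(
    tickets["resolution_hours"],
    bins=[0, 1, 4, 8, 24, 72, float("inf")],
    labels=["<1h", "1-4h", "4-8h", "8-24h", "1-3d", ">3d"],
)

# Mean score and sample size per bucket; the cliff, if there is one, shows up here.
summary = tickets.groupby(buckets, observed=True)["csat"].agg(["mean", "count"])
print(summary)
```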

Benchmark against yourself over time, not against published industry averages. Those averages collapse companies with wildly different products, customer bases, and cost structures into a single number. A "benchmark" of 80% FCR means nothing when the range across companies is 40% to 95%. Your 72% FCR last quarter is the only number that matters. If it's 76% this quarter, you improved. That's the signal.

Set targets that your team can actually influence. "Reduce resolution time by 10%" is actionable if you pair it with a plan (better knowledge access, updated KB articles, improved routing). "Achieve 90% CSAT" is aspirational decoration that doesn't change anyone's behavior because nobody knows what lever to pull.

Frequently asked questions

How many metrics should we actually track?
Four to six, maximum. Every metric you track is a metric someone has to explain, defend, and act on. If your weekly report has 15 KPIs, nobody is reading past the third one. Pick the metrics that actually change decisions and drop the rest. You can always add one back if you realize you need it.

Should we tie agent performance reviews to these metrics?
Be very careful. Any metric you attach to individual compensation will be gamed. Agents optimizing for first response time will send fast, shallow replies. Agents optimizing for resolution count will close tickets prematurely. Use metrics to identify systemic patterns and coaching opportunities, not to rank individuals. The one exception is quality scores from manual review: those are harder to game, and they measure what actually matters.

How do we measure knowledge reuse rate?
Track how often agents reference existing resources when resolving tickets. This could be linking to a KB article, copying from a saved reply, or referencing a past ticket. Most helpdesks don't track this natively, so you may need to infer it: check what percentage of responses include KB links, saved reply usage rates, or internal note references to past tickets. Even a rough measure is better than nothing because it tells you whether your knowledge system is being used at all.
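
A crude proxy along those lines, assuming you can export response bodies and saved-reply usage (every field name and pattern below is a placeholder, not a specific helpdesk's schema):

```python
import pandas as pd

# Hypothetical export of agent responses.
# Assumed columns: ticket_id, body, used_saved_reply (boolean).
responses = pd.read_csv("agent_responses.csv")

# Treat a response as reusing knowledge if it links to the KB, came from a
# saved reply, or references another ticket. Crude, but directionally useful.
kb_link = responses["body"].str.contains("help.example.com", regex=False, na=False)
ticket_ref = responses["body"].str.contains(r"#\d{4,}", regex=True, na=False)  # e.g. "#12345"
reused = kb_link | ticket_ref | responses["used_saved_reply"]

# Share of tickets where at least one response drew on an existing resource.
reuse_rate = reused.groupby(responses["ticket_id"]).any().mean()
print(f"Tickets touching an existing resource: {reuse_rate:.1%}")
```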
