Ticket deflection is a vanity metric

Most "deflected" tickets are just customers who gave up.

What deflection actually measures

The standard definition sounds clean: deflection is when a customer resolves their issue through self-service instead of submitting a ticket. The help center did its job. The chatbot answered the question. The customer got what they needed without consuming agent time. Everyone wins.

The problem is how teams actually measure it. Most deflection calculations count the gap between help center visits and tickets created. If 10,000 people visit your help center and 3,000 submit tickets, your deflection rate is 70%. That number shows up in the quarterly review. Leadership is pleased. The self-service initiative is "working."

But what happened to the other 7,000? Some found their answer. Some bookmarked the page and never came back. Some clicked around for two minutes, couldn't find anything useful, and decided the problem wasn't worth the effort of writing a ticket. That last group didn't get deflected. They got discouraged. Your analytics tool counts all three the same way.

The abandonment problem

Abandonment is invisible in most deflection metrics. A customer who visits your help center, searches for "cancel subscription," reads an article about account settings that doesn't answer their question, and then closes the tab looks exactly like a customer who found their answer and left satisfied. Both show up as "did not submit a ticket." Both count as deflection.

The difference matters because abandoned customers don't disappear. They show up later as churn. They leave a negative review. They tell a colleague that your support is hard to reach. The cost of that outcome is significantly higher than the cost of the ticket they would have submitted.

You can estimate your abandonment rate with a rough test. Look at help center sessions that lasted under 90 seconds and ended without a ticket, a scroll past the fold, or a "helpful" click. Those sessions are almost certainly not resolutions. For most teams, this group represents 30% to 50% of what the dashboard calls "deflected."
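
If your analytics tool can export per-session data, the test takes a few lines. Here's a minimal sketch in pandas; the column names (session_seconds, submitted_ticket, scrolled_past_fold, clicked_helpful) are placeholders for whatever your export actually contains.

    import pandas as pd

    # Rough abandonment test: short sessions with no ticket, no scroll past
    # the fold, and no "helpful" click. Column names are assumptions.
    sessions = pd.read_csv("help_center_sessions.csv")

    abandoned = sessions[
        (sessions["session_seconds"] < 90)
        & ~sessions["submitted_ticket"]
        & ~sessions["scrolled_past_fold"]
        & ~sessions["clicked_helpful"]
    ]

    non_ticket = sessions[~sessions["submitted_ticket"]]
    print(f"Likely abandonment: {len(abandoned) / len(non_ticket):.0%} "
          f"of sessions your dashboard calls deflected")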

That means your real deflection rate is probably half of what you're reporting. The number on the slide isn't wrong because the math is bad. It's wrong because the definition is too generous.

How to separate real deflection from fake

The most reliable signal is what happens after the self-service session. If a customer views a help article and doesn't submit a ticket on the same topic within 48 hours, there's a reasonable chance they self-served successfully. If they submit a ticket within a few hours, the self-service failed. This isn't perfect, but it's dramatically better than counting all non-ticket sessions as deflection.
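
One way to operationalize the check, assuming you can export article views and tickets with a customer ID, a topic, and a timestamp (all hypothetical field names):

    import pandas as pd

    # Join each article view to any ticket the same customer filed on the
    # same topic, then test whether the ticket landed within 48 hours.
    views = pd.read_csv("article_views.csv", parse_dates=["viewed_at"])
    tickets = pd.read_csv("tickets.csv", parse_dates=["created_at"])

    merged = views.merge(tickets, on=["customer_id", "topic"], how="left")
    merged["followed_up"] = (
        (merged["created_at"] >= merged["viewed_at"])
        & (merged["created_at"] <= merged["viewed_at"] + pd.Timedelta(hours=48))
    )

    # A view counts as a likely self-serve only if no ticket followed it.
    per_view = merged.groupby(["customer_id", "topic", "viewed_at"])["followed_up"].any()
    print(f"Views with no same-topic ticket within 48h: {1 - per_view.mean():.0%}")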

The second signal is session engagement. A customer who scrolled through an article, spent two or more minutes reading, and didn't submit a ticket behaved differently from someone who bounced in 20 seconds. You can't know for certain that the first person was satisfied, but you can be fairly sure the second one wasn't. Weight your deflection numbers accordingly.

Some teams add a small feedback prompt at the bottom of help articles: "Did this answer your question?" The response rate is usually 5% to 15%, which isn't enough for precision but is enough for trends. If a particular article's "no" rate spikes from 20% to 50%, something changed. Maybe the product updated and the article didn't.
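
Even at low response rates, the trend is easy to watch programmatically. A sketch, assuming a feedback export with an article ID, a boolean answered field, and a timestamp (again, hypothetical names):

    import pandas as pd

    # Monthly "no" rate per article from the feedback prompt responses.
    feedback = pd.read_csv("article_feedback.csv", parse_dates=["responded_at"])

    no_rate = (
        feedback
        .assign(month=feedback["responded_at"].dt.to_period("M"))
        .groupby(["article_id", "month"])["answered"]
        .apply(lambda s: 1 - s.mean())  # share of "no" responses
    )

    # Flag articles whose "no" rate jumped more than 20 points month over month.
    jumps = no_rate.groupby(level="article_id").diff()
    print(jumps[jumps > 0.20])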

Combine these signals into a qualified deflection number: sessions where the customer engaged meaningfully with content and did not submit a ticket within 48 hours. This number will be lower than your current deflection rate. It will also be a number you can actually trust.
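
Put together, the qualified metric is a single boolean per session. A sketch under the same assumptions as above, with ticket_within_48h precomputed from the join shown earlier; the engagement thresholds are starting points to tune, not standards.

    import pandas as pd

    # Qualified deflection: meaningful engagement AND no same-topic ticket
    # within 48 hours. Thresholds and column names are assumptions.
    sessions = pd.read_csv("help_center_sessions.csv")

    engaged = (sessions["session_seconds"] >= 120) | sessions["scrolled_past_fold"]
    qualified = engaged & ~sessions["ticket_within_48h"]

    print(f"Qualified deflection rate: {qualified.mean():.0%}")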

The self-service paradox

When deflection numbers are low, the instinct is to create more content. Write more articles. Build a bigger FAQ. Launch a chatbot. The thinking is straightforward: more coverage means more questions answered before they become tickets. The knowledge management trap applies here too: more content is not the same as more answers.

The paradox is that more content often makes self-service worse, not better. A help center with 50 well-organized articles is more useful than one with 500 that require the customer to know the right keywords. Every article you add increases the noise a customer has to sort through. If your search returns 30 results and the answer is number 22, you've technically covered the topic. You've also made it nearly impossible to find.

The real bottleneck in self-service is findability, not coverage. Customers don't fail because the article doesn't exist. They fail because they can't locate it with the words they use to describe their problem. A customer searching "can't log in" needs to find the article titled "Resetting your password via SSO." Keyword search doesn't bridge that gap.

Before writing a single new article, audit how your existing content performs. Which articles have high traffic and low satisfaction? Which search queries return zero results? Which queries return results that customers click on but then still submit a ticket? The gap between what customers search for and what your help center returns is where real deflection opportunities live.
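
Search logs make this audit concrete. A sketch, assuming one row per query with hypothetical fields for the result count, whether a result was clicked, and whether a ticket followed:

    import pandas as pd

    searches = pd.read_csv("search_log.csv")

    # Queries that return nothing: coverage or findability gaps.
    zero_results = (
        searches.loc[searches["result_count"] == 0, "query_text"]
        .value_counts().head(20)
    )

    # Queries where the customer clicked a result but still filed a ticket:
    # the article exists, ranks, and gets read, yet doesn't resolve the issue.
    click_then_ticket = (
        searches.loc[searches["clicked_result"] & searches["ticket_after"], "query_text"]
        .value_counts().head(20)
    )

    print(zero_results, click_then_ticket, sep="\n\n")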

Building a deflection number you can trust

Start by defining what success looks like for a self-service session. At minimum: the customer found content relevant to their issue, engaged with it for a meaningful amount of time, and did not submit a ticket or return to the help center on the same topic within two days. That's your qualified deflection event.

Next, establish your baseline. Run the qualified criteria against the past 90 days of help center data. Compare the qualified number to your current reported deflection rate. The gap between the two is the abandonment you've been counting as success. For most teams, the qualified number is 40% to 60% of the reported number. That's not a failure of your help center. It's a more honest view of where you actually stand.
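
The baseline comparison is mechanical once the qualified criteria exist. A sketch reusing the same hypothetical session export, now with a started_at timestamp:

    import pandas as pd

    sessions = pd.read_csv("help_center_sessions.csv", parse_dates=["started_at"])
    recent = sessions[sessions["started_at"] >= pd.Timestamp.now() - pd.Timedelta(days=90)]

    engaged = (recent["session_seconds"] >= 120) | recent["scrolled_past_fold"]

    reported = 1 - recent["submitted_ticket"].mean()        # the gap-based number
    qualified = (engaged & ~recent["ticket_within_48h"]).mean()

    print(f"Reported: {reported:.0%}  Qualified: {qualified:.0%}  "
          f"Counted as success but likely abandoned: {reported - qualified:.0%}")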

Then tie deflection to cost per ticket. Each qualified deflection avoids one ticket. Multiply your qualified deflection volume by your cost per ticket and you have the dollar value of your self-service program. This number is smaller than what you'd calculate from the inflated rate, but it's a number that survives scrutiny when finance asks how you arrived at it.
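
The arithmetic is deliberately simple, which is exactly why it holds up. With placeholder figures:

    # Dollar value of self-service: qualified deflections times cost per ticket.
    # Both figures below are placeholders, not benchmarks.
    qualified_deflections = 4_200   # qualified sessions last quarter
    cost_per_ticket = 12.50         # fully loaded cost per resolved ticket

    print(f"Self-service value: ${qualified_deflections * cost_per_ticket:,.0f}")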

Track the qualified rate monthly. The trend is what matters, not the absolute number. If it's going up, your self-service improvements are working. If it's flat while your reported deflection rate is climbing, you're getting better at discouraging tickets without getting better at resolving issues. Those are very different outcomes, and only one of them is good for your customers.

Frequently asked questions

What counts as ticket deflection?
Deflection means a customer found a resolution through self-service instead of submitting a ticket. The key word is resolution. A customer who visits your help center, fails to find an answer, and leaves without submitting a ticket was not deflected. They abandoned. Most analytics tools count both the same way, which is why reported deflection rates are inflated.
What's a good ticket deflection rate?
The number itself matters less than what it actually represents. A 40% deflection rate where half is abandonment is worse than a 20% rate where every deflected customer genuinely self-served. Instead of targeting a deflection percentage, measure the share of self-service sessions that end without a follow-up ticket within 48 hours. That gives you a number that reflects actual resolution.
Can you have too much ticket deflection?
Yes. If customers cannot reach a human when they need one, you are not deflecting, you are blocking. Aggressive deflection strategies that hide contact options or force customers through long self-service flows before allowing ticket submission create frustration and churn. The goal is to make self-service effective enough that customers choose it, not to make contacting support so difficult that they give up.
How do I track whether self-service actually resolved the issue?
Track the return rate. After a customer views a help article or interacts with a self-service tool, monitor whether they submit a ticket on the same topic within 24 to 48 hours. If they do, the self-service attempt failed. Some teams also add a simple thumbs up or down to help articles and track the ratio over time. Neither method is perfect, but both are better than counting page views as deflection.
