21 Customer Service Standards That Will Take You Higher
Back in high school I had this clueless sports coach. He would always set the jump bar at medium height ("the perfect non-discriminatory value"), then sit back and watch. Sure enough, this bored the sporty ones and frustrated the less athletic.
Like in athletics, setting the right standards in customer service is crucial for performance. It gives your service reps something to aim for, an incentive to improve, and allows you to identify the chubby kids. Only to help them get better, of course.
Customer service standards are the measurable micro goals towards the bigger goal of customer satisfaction. They're the minimum height your service reps should be able to jump.
Based on our principles of quality support, here are 21 customer service standards to guide your team.
1. Speed Standards
This won’t make the news, but it’s worth repeating: quick service means convenience for customers. Speed is a stable determinant of customer satisfaction. Here are a number of metrics for speed standards:
First response time. The average time until your customers receive the first answer to an inquiry. The issue is not necessarily solved by then. It’s simply when customers know that someone has heard them.
The channel matters a lot here. As GrooveHQ’s Len Markidan suggests, responses via live chat and phone support should be immediate; they should definitely stay below 2 minutes. Many companies promise to answer emails within 24, or worse, 48 hours. I’d say that one day should be the max. For the speed-cultivating social channels, reckon with 60 minutes.
On a related note, an automated email stating that “someone’s on the issue” is not an actual first response. It’s a handy trick to comfort customers, who can rest assured that their message reached its target. It also lets you set expectations about your response time, which makes the wait more bearable.
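If you want to hold your team to targets like these, you could encode them in a simple lookup. Here’s a minimal sketch in Python; the numbers are only assumptions that mirror the rough guidelines above, not official benchmarks.

```python
# Assumed first response targets per channel, in minutes (mirroring the guidelines above).
first_response_targets = {
    "live_chat": 2,
    "phone": 2,
    "social": 60,
    "email": 24 * 60,  # one day as the max
}

def within_target(channel: str, minutes_elapsed: float) -> bool:
    """Check whether a first response still meets the target for its channel."""
    return minutes_elapsed <= first_response_targets[channel]

print(within_target("email", 90))  # True -- well within the one-day target
```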
Response time. The average time between responses. An email ticket that was completed with 3 responses, with respective response times of 20, 3, and 10 minutes, has a response time of 11 minutes.
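To make the arithmetic concrete, here’s a minimal sketch using the made-up reply times from the example above:

```python
# Made-up reply times (in minutes) for one email ticket with three responses.
reply_times = [20, 3, 10]

# Response time = average time between responses on the ticket.
average_response_time = sum(reply_times) / len(reply_times)
print(average_response_time)  # 11.0 minutes, as in the example above
```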
First contact resolution ratio. The number of issues resolved with the first contact, divided by the total number of issues. This important factor for customer satisfaction is also a reliable indicator of team efficiency. An example standard would be 75%, the global standard according to the International Finance Corporation.
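As a rough sketch, with invented monthly figures:

```python
# Invented monthly figures for illustration.
resolved_on_first_contact = 750
total_issues = 1_000

# First contact resolution ratio = share of issues closed with a single contact.
fcr = resolved_on_first_contact / total_issues
print(f"{fcr:.0%}")  # 75%, matching the benchmark mentioned above
```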
Instant service / queueing ratio. The ratio of customers who received instant support to those who were put in the queue. With email, everyone’s queued. But the metric works for live chat, messaging, and phone. An example standard would be to have a maximum of 2 customers waiting in the queue per agent, or 15 customers waiting in the queue in total.
Queueing waiting time. The time you keep customers on hold until they’re served. Understaffing and aimless routing let this value rise; you want to keep it low. Maya S. Horowitz from Talkdesk tells you how. A good standard would be a queueing waiting time below the point at which your first customers start dropping out.
Problem resolution time. The average duration before an issue is resolved. Consider a tiered model of importance when you set your standards, like the one used by Stanford University.
2. Accuracy Standards
As a tourist in the French city of Marseille, I once asked a guy on the street for directions. In passing, he said “easy, down there” and pointed somewhere with a smile. I was delighted by the quick answer and followed his finger. Soon I was mightily lost and realized that the guy had just wanted to get it over with.
Quick is good, but customers won’t appreciate it if the answer isn’t correct. On the other hand, a slow but correct answer might still earn you some respect, as Robert Johnston found.
Customer success ratio. The number of customers who found what they came for, compared to the number who didn’t. No matter how great your service, you’ll get the latter. This metric tells you how often and/or in which situations. You can define what counts as a “success”; the numbers will follow. High ratio = successful customers.
Things gone wrong. A metric taken from the Lean Six Sigma approach, focused on detecting the holes in your suit. It does so by tracking the number of complaints per 100 to 1,000,000 survey units, items sold, customer interactions, or other units. An accuracy standard for customer service could be ‘1 complaint about inaccuracy per 1,000 service surveys’. Low value = accurate service.
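A minimal sketch of the calculation, with invented survey numbers:

```python
# Invented figures: complaints about inaccuracy found in service surveys.
complaints = 4
surveys = 1_000

# Things gone wrong, expressed per 1,000 survey units.
tgw_per_thousand = complaints / surveys * 1_000
print(tgw_per_thousand)  # 4.0 -- well above the example standard of 1 per 1,000
```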
3. Transparency Standards
Transparency is arguably hard to measure. Most metrics would cluster at one end of the scale and lack qualitative nuance; anything more would require big data or costly documentation analysis.
Customer awareness ratio. The Psychology of Waiting Lines shows how uncertain and unexplained waits make queuing more of a drag. This one’s a metric to be gathered with surveys. The entry condition is that the customer questioned received service and was kept waiting for a certain time, say 3 minutes in a chat or 1 minute on the phone. You wanna track the ratio of those customers who felt like they knew why and/or how long they had to wait versus those who were in doubt. Higher value = more transparency.
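If you run such a survey, the ratio itself is trivial to compute. Here’s a sketch with invented answer counts:

```python
# Invented survey answers from customers who were kept waiting.
felt_informed = 120   # knew why and/or how long they had to wait
left_in_doubt = 80

customer_awareness_ratio = felt_informed / (felt_informed + left_in_doubt)
print(f"{customer_awareness_ratio:.0%}")  # 60% felt informed about their wait
```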
4. Accessibility Standards
A customer who’s troubled with an issue should not be troubled further with figuring out how to get in touch. At best, your accessibility is so advanced that it borders on proactivity. These metrics test whether you’re right there when you’re supposed to be.
Customer Effort Score (CES). The effort, on a scale you define, that it took customers to solve their problem or have their question answered through your service. CEB brought us this useful metric in this HBR article.
It originates from the finding that customers are more likely to punish bad service than to reward outstanding service. Instead of delighting customers, the authors argue, making things as easy as possible for them leads to much higher payoffs. Ask your customers in surveys how much effort it took them, from 1 to 10. Aim for an average standard of 3 or lower. Low CES = good service.
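A minimal sketch of the averaging, assuming a 1-10 survey scale and invented answers:

```python
# Invented effort scores from a post-service survey (1 = effortless, 10 = very hard).
effort_scores = [2, 4, 3, 1, 5, 2, 3]

ces = sum(effort_scores) / len(effort_scores)
print(round(ces, 1))  # ~2.9, just under the target average of 3
```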
Handovers per issue. This measures how many different service employees have to touch an issue before it’s resolved. Customers on the phone particularly dislike handovers, since they need to explain everything all over again. CEB also found this to be one of the four most ‘popular’ service complaints. Try to cap this value at 1 for simple issues and at 3 for more complex ones.
Queuing hang-ups. The number of customers who quit the queuing process. Every instance is a service opportunity you missed and most likely a churned customer. This is a specific 'things gone wrong' metric that relates closely to your standards of queueing waiting times. Low value = satisfying service.
Clicks to first contact. The number of clicks a customer needs, starting from a new browser tab or a search engine’s homepage, until the first message is transmitted. It can be tracked per channel for phone, live chat, email, or FAQ. For phone you would track how many clicks a customer needs to find your number. You can also engage external usability providers to test how easy it is to reach you, e.g. with a service like UserTester. Fewer clicks = better accessibility.
5. Empowerment Standards
Being in control is one of the main principles of good customer service. It's a bit of a hard one to set up standards for, though.
Single question rate. The number of appearances of the same question divided by the number of all customer inquiries. If you receive the same questions over and over again, it doesn’t mean that your customers have a blind spot; it means that you probably don’t inform them well enough. Questions above a certain share (say, 3%) must find their way into your FAQ, tutorials, or help desk section. Your standard would be to keep the single question rate below 3%.
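Here’s a small sketch of how you might flag FAQ candidates, assuming you already normalize inquiries into topics (toy data, so every topic clears the 3% bar):

```python
from collections import Counter

# Toy list of normalized inquiry topics; in practice this would span thousands of tickets.
inquiries = ["reset password", "pricing", "reset password",
             "invoice missing", "reset password", "pricing"]

total = len(inquiries)
for question, count in Counter(inquiries).items():
    rate = count / total
    if rate >= 0.03:  # the 3% threshold from above
        print(f"'{question}': {rate:.0%} of inquiries -- candidate for the FAQ")
```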
6. Friendliness Standards
Being nice is an unspoken fundamental of customer service. These techniques will tell you if your smile sits well on you.
Net Promoter Score (NPS). The NPS measures how likely customers are to refer you to someone. The inherent question aims at an intention, not an emotion, resulting in answers untouched by moods. Still, the NPS also tells you how much they like you overall. It’s gathered via (post service) survey, with a scale from 0-10.
To calculate the NPS, take the percentage of respondents in the ‘promoter’ category (score 9 or 10) and subtract the percentage of ‘detractors’ (0 to 6).
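In code, with invented survey answers:

```python
# Invented answers to "How likely are you to recommend us?" on a 0-10 scale.
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 10]

promoters = sum(1 for s in scores if s >= 9)   # 9s and 10s
detractors = sum(1 for s in scores if s <= 6)  # 0 through 6

nps = (promoters - detractors) / len(scores) * 100
print(nps)  # 40.0: 60% promoters minus 20% detractors
```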
Companies leading in customer loyalty, like Apple, see scores of 75-85%. Rob Markey from Bain & Company explains how and why to benchmark your NPS competitively.
Follow-up survey. Just like with post-service surveys, you can measure pretty much anything with a follow-up survey. It’s different, however, in that it’s usually done via email, for example with Google Forms.
Its big advantage is that you can ask several questions to dig deeper and cover different aspects of your service, even ones that aren’t really connected, like friendliness and expertise. Also, customers can take a more considered stance after some time has passed since the interaction. To measure friendliness you’d ask something like “How friendly was your operator?”
Post service ratings. Let your customers rate the service right after the interaction. In our case you’d ask them how friendly they found your service. Preferably, you let them rate it on a classic scale, like 1 to 5. Keep it simple on all channels; customers usually don’t think about service performance in overly elaborate terms. Large scales are also more vulnerable to cultural differences.
Think of what you’d consider good enough in product ratings on Amazon; you probably don’t trust anything rated 3 out of 5. So aim for average ratings no lower than 4 (out of 5) stars, since a 3 suggests that something was bothering your customer.
7. Efficiency Standards
Speed and efficiency are two forces pulling in opposite directions. They’re not mutually exclusive, but they’re hard to balance. Consultant and author Rhonda Scharf pinned down the dilemma:
When I'm standing in line to buy my coffee, I want the line to be quick. However, it does need to be good coffee, you do need to get my order right, and I need to feel like I, as a customer, matter.
Rhonda Scharf, HuffPost Business
Here are some metrics that assist you in finding the golden path.
Replies per ticket. This metric tracks how many answers your team needs to resolve an issue. It tells you about the effectiveness of your reps’ communication and about the amount of effort your customers have to put up with.
What you’re looking for is what you deem an excessive number of replies.
Diana Potter, Kayako
Diana Potter suggests focusing on the upper outliers of this metric instead of the mean. That way you zoom in on the cases with the most room for improvement. Start by removing all first contact resolutions from your analysis. Potter considers two to four interactions a standard value.
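A sketch of that analysis with invented ticket data:

```python
# Invented replies-per-ticket counts for one month.
replies_per_ticket = [1, 2, 1, 3, 2, 1, 8, 2, 4, 1, 6, 2]

# Drop first contact resolutions (single-reply tickets), as Potter suggests.
multi_reply = [r for r in replies_per_ticket if r > 1]

# Focus on the upper outliers instead of the mean, e.g. everything above 4 replies.
outliers = [r for r in multi_reply if r > 4]
print(sorted(outliers))  # [6, 8] -- the tickets with the most room for improvement
```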
Average occupancy. The average share of time one or all service reps are occupied with customer interactions. Take the interaction time, add the time your reps work on the case after the interaction has ended (Talkdesk shares some tips on how to reduce this), then divide by the total logged-in time, which also includes the idle time. Aim for a value between 60-80%.
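A minimal sketch of the occupancy calculation, with invented daily figures for one rep:

```python
# Invented daily figures for one service rep, in minutes.
interaction_time = 300  # time spent in live customer interactions
wrap_up_time = 60       # work on the case after the interaction has ended
idle_time = 120         # remaining logged-in time

occupancy = (interaction_time + wrap_up_time) / (interaction_time + wrap_up_time + idle_time)
print(f"{occupancy:.0%}")  # 75%, inside the 60-80% target range
```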
Backlog inflow/outflow. The number of issues received or opened compared to the number of issues resolved. This metric becomes more expressive when tracked over time. A growing ratio indicates that your team is too small; a constant one suggests that you’re well equipped. Set up a standard to keep the ratio around 0.5.
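Tracked over a few weeks, the ratio could look like this (invented numbers):

```python
# Invented weekly figures.
issues_opened = [120, 130, 125, 140]
issues_resolved = [118, 122, 119, 120]

for week, (opened, resolved) in enumerate(zip(issues_opened, issues_resolved), start=1):
    print(f"Week {week}: inflow/outflow = {opened / resolved:.2f}")
# A ratio that keeps climbing above 1 means the backlog grows faster than it's cleared.
```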
Minutes spent per issue. Allows you to compare your service reps’ efficiency. This metric works for all channels but requires individual benchmarks for each one.
CS standards gone wrong
A standard is always an attempt to create one size that fits all kinds of cases. But it should not make you lose flexibility. So beware of communicating standards as rules with a number seemingly set in stone. If you declare a certain average of replies per case a good value, you don’t actually want your service reps to stop answering customers once they’ve reached that value, do you?
Additionally, a standard put to paper with a precise number may breed vanity. You don’t want your employees to run after tin gods and lose focus of what’s most important – your customer’s satisfaction.
If you place too much emphasis on optimizing your team’s first call resolutions, your reps may be tempted to juke the stats, or consciously sacrifice the quality of their work just to hit their numbers.
Insightsquared.com
Another downside of standards is that they communicate the minimum but don’t inspire thinking about potential maximums. When based on your current performance, they’re usually closer to the status quo than to your highest potential.
Overly modest expectations can reduce your team’s motivation to grow and improve. Your reps will probably deliver just above the standard if you don’t give them incentives to continuously raise the bar. Paul Hagen wrote an insightful post on rewarding employees for great customer service.