The CSAT Score — Strengths, Flaws, and Best Practices
The CSAT score is the simplest way to measure customer satisfaction. In fact, it’s so simple that companies often neglect the fine-tuning of their survey, thereby missing out on the metric’s full potential.
While the previously discussed NPS and CES are indicators of customer satisfaction, the CSAT score measures satisfaction directly. There are no detours, no proxy variables with varying predictive power. Its simplicity and comprehensibility are two of the reasons for CSAT’s popularity.
Another is that a high average customer satisfaction rating can be a strong marketing asset. Today’s consumers rely heavily on ratings and reviews. Companies need a reliable and straightforward measure to showcase their customers’ experiences to the world. Unlike NPS and CES, CSAT is a score that everyone understands.
But this simplicity is also its weakness. The CSAT is like a blank signpost: it points you in a vague direction but doesn’t tell you exactly what to look for there, unless you perfect the process and interpret the results correctly. Here is how to do it.
1. Choose the right wording
The CSAT question normally aims to capture a customer’s satisfaction with a certain experience. The question’s content defines the experience to evaluate and rate. The standard version for measuring satisfaction with the complete service experience is:
“How satisfied are you with our service?”
But the question can also focus on…
...the last interaction:
“How satisfied are you with this chat service?”
...the product:
“How satisfied are you with your product?”
...or the whole customer journey:
“How satisfied are you with your shopping experience?”
The attributes of a good CSAT question at a glance. It's:
- simple and comprehensible.
- suited to an ordinal scale format, asking for the degree of satisfaction.
- natural-sounding and easy to answer.
- unambiguous; it asks about satisfaction and nothing else.
- neutral, not favoring any answer direction.
- clearly marking off an area of analysis, anything from the full customer experience to a single item like the chat interaction that helped solve an issue.
- translatable for global comparability.
2
Time your questions
You don't want customer biases messing with your survey. So the best moment to ask your CSAT question is right after the relevant customer experience has ended, with the customer’s memory still fresh and untouched by irrelevant events.
The end of a customer experience is typically marked by a certain technical routine the user runs into. A live chat experience, for example, could be completed when either the agent or the customer closes the chat window. A purchase cycle could end when the customer reaches the checkout confirmation page. Specific URLs are reliable triggers for surveys.
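As an illustration, here’s a minimal sketch of such URL-based triggering in Python; the trigger patterns and page paths are hypothetical examples, not real tool configuration:

```python
import re

# Hypothetical trigger patterns: show the survey when the customer
# reaches a page that marks the end of an experience.
SURVEY_TRIGGERS = [
    r"/checkout/confirmation",   # purchase cycle completed
    r"/chat/closed",             # live chat window closed
]

def should_trigger_survey(url: str) -> bool:
    """Return True if the visited URL marks the end of a customer experience."""
    return any(re.search(pattern, url) for pattern in SURVEY_TRIGGERS)

# Example: fire the CSAT question right after checkout.
if should_trigger_survey("https://shop.example.com/checkout/confirmation"):
    print("How satisfied are you with your shopping experience?")
```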
Most survey tools include such triggers. They let you assess customer satisfaction on various platforms and channels: on a website, in-app, on desktop or mobile.
Automated emails also allow for a swift follow-up from your side. But this entails a change of platforms, email open rates are low, and you don’t know what mindset customers are in when the email reaches them. A quick query right inside the channel works better, and these tools offer it:
- emolytics
- SurveyGizmo
- SurveyMonkey
- Zoho Survey
- Checkbox Survey
- getfeedback (ideal for mobile)
3. Select the right scale
The classic scale used for the CSAT score is a straight 5-point or 10-point scale. Its ordinal format requires labels only at its extreme ends, or none at all, since the points in between are of the same nature.
A lesson learned from CEB and the first version of their CES is to be wary of inverted scales. If ‘1’ represents 'good' and ‘5’ represents 'bad', respondents might accidentally hit the wrong number. Our minds default to a concept in which low means bad and high means good. There’s no compelling reason to switch it around for the CSAT and compromise your results’ precision.
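If you nevertheless inherit data that was collected on an inverted scale, a small sketch for flipping it back before analysis (assuming plain integer responses on a 5-point scale):

```python
def normalize_inverted(responses: list[int], scale_max: int = 5) -> list[int]:
    """Flip responses from an inverted scale (1 = good) back to
    the conventional direction (scale_max = good)."""
    return [scale_max + 1 - r for r in responses]

# Example: on an inverted 5-point scale, a 'good' rating of 1 becomes 5.
print(normalize_inverted([1, 2, 5]))  # [5, 4, 1]
```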
Apart from that, scale size matters. A short scale’s fewer options allow for quick decisions, which increases response rates. It’s especially advisable for mobile, whose users are notoriously impatient. SurveyGizmo suggests aiming for a response rate of at least 80%.
A long scale allows for more precision and reflects nuances in your results. But it also makes them more vulnerable to cultural differences. This article in Psychological Science explains that people in individualistic countries like the USA tend to choose extremes, while people in collectivistic countries like Japan prefer more modest statements in the middle area.
A universal language is another answer to cultural peculiarities. Emoticons, for instance, allow you to let go of the more abstract labels and digits altogether.
Then there's also the choice between a scale with an even or odd number of points. The former takes away the middle ground that respondents often resort to when in doubt. The latter, however, gives you information on a neutral area: respondents aren’t forced to choose a side when they feel the middle option reflects their opinion best.
4. Interpret the results
Consumers love to compare ratings. A percentage has the highest familiarity and comparability because people know that the top value is 100% and can therefore naturally assume what’s ‘good’. To display your long-term CSAT score in this way, divide the number of satisfied customers by the total number of respondents, then multiply by 100.
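In code, the calculation might look like this minimal sketch; the 5-point scale and the threshold of 4 for ‘satisfied’ are assumptions, so adjust the definition of ‘satisfied’ to your own scale:

```python
def csat_percentage(ratings: list[int], satisfied_threshold: int = 4) -> float:
    """CSAT as a percentage: the share of respondents who count as satisfied.
    Assumes a 5-point scale where ratings of 4 and 5 mean 'satisfied'."""
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return satisfied / len(ratings) * 100

# Example: 6 of 8 respondents rated 4 or higher -> 75.0
print(csat_percentage([5, 4, 4, 3, 5, 2, 4, 5]))
```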
The most popular customer satisfaction evaluation, the ACSI (American Customer Satisfaction Index), offers plenty of benchmark data here. Depending on the industry, the boundary line separating good and bad lies somewhere between 70 and 80%.
You can also map the ACSI benchmarks of your industry or sector onto your point scale. If the respective ACSI boundary between satisfied and not satisfied lies at 75%, it corresponds to 3.75 on a 5-point scale.
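The conversion itself is simple arithmetic; as a sketch:

```python
def benchmark_on_scale(benchmark_percent: float, scale_max: int = 5) -> float:
    """Map a percentage benchmark onto a point scale."""
    return benchmark_percent / 100 * scale_max

print(benchmark_on_scale(75))  # 3.75 on a 5-point scale
```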
If you use the CSAT mainly for internal quality assessment, you can also pursue a goal-oriented approach and stick to the absolute average rating. Just check your sales or sign-up figures to see whether your average satisfaction rating is in line with your goals. It’s a decent indicator of what’s ‘good’ and ‘not good enough’ in your particular case.
5. Gather additional feedback
The best metrics are actionable. If you're only aiming to raise your satisfaction rating for marketing purposes, you’re missing out on improvement opportunities and issue prevention.
Use the tools mentioned in point 2 to ask any respondent, but most importantly the lower and upper outliers, why they gave you that particular rating. This will eventually lead you to the crucial points in the customer experience.
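As a sketch of how such targeting might work, assuming a 5-point scale where the extreme ratings count as outliers (the follow-up wording is just an illustration):

```python
def needs_follow_up(rating: int, scale_max: int = 5) -> bool:
    """Flag ratings at the extreme ends of the scale for a follow-up question."""
    return rating == 1 or rating == scale_max

# Example: trigger an open-text question for the outliers only.
for rating in [1, 3, 5]:
    if needs_follow_up(rating):
        print(f"Rated {rating}: ask 'What's the main reason for your rating?'")
```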
To learn how to reduce customer effort and raise satisfaction, read our post “The 9 Levers for Improving Customer Satisfaction”.