
Customer survey scores


Hello all. I wanted to get some opinions or feedback on this.

Yesterday I had a meeting with my supervisor about stats goals and such.

We got to the customer survey part, and the numbers didn't match up between what she could see and what I saw on my side, so I started asking about it.

I found out that management sees and scores the surveys based on the date the interaction/call took place. But when we log in to see our own scores, they're shown by the dates the surveys were submitted by the customer, which are typically two days to a week after the interaction.

I don't have much of a problem with that, honestly. What's kind of rustling my jimmies is this next part:

Apparently, any score below 8/10 is basically counted as a zero.

In my example, I had 9 total surveys for September: six 10/10s, two 0/10s, and one 7/10.

The way it was explained to me is that the 7 does not contribute any points toward the total score, but it still counts as a survey in the denominator when averaging.

So instead of 67/9 (total points divided by total number of surveys), it's 60/9 (the same calculation, except the 7 is treated as a zero).

This drops my average score by roughly 8 percentage points.
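To make the math concrete, here's a quick sketch of the scheme as described, assuming a 0–10 scale and a cutoff of 8 (the function name and structure are just for illustration):

```python
def average_percent(scores, cutoff=None):
    """Average survey score as a percentage of the maximum (10 per survey).

    If a cutoff is given, any score below it contributes zero points,
    but still counts as a survey in the denominator.
    """
    if cutoff is not None:
        points = sum(s for s in scores if s >= cutoff)
    else:
        points = sum(scores)
    return 100 * points / (10 * len(scores))

# September's nine surveys: six 10s, two 0s, one 7.
september = [10, 10, 10, 10, 10, 10, 0, 0, 7]

plain = average_percent(september)       # 67/90 -> about 74.4%
zeroed = average_percent(september, 8)   # 60/90 -> about 66.7%
```

Same nine surveys, same denominator, but zeroing the 7 knocks the percentage from about 74% down to about 67%.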

I was told something along the lines of, "our goal is excellence, so they only count the highest scores toward the points."

And yeah, I understand excellence is the *goal*, but to not count any of the middle-ground scores at all and basically make them zeros? It feels rigged.

I don't see how this does anything but harm the agents' scores.

Is there any way this kind of system could be a positive? Is there something I'm not understanding? How does this do anything but hurt our scores, if any score between 1 and 7 is counted as a zero and drags the average down?
