Outright fails are the questions in your quality control process that flag a significant issue. A miscalculation here can skew an entire business.
This article asks you to review your quality control calculations and see how you handle a common scenario.
I have seen this problem crop up in many quality control processes, so it is worth sense-checking your approach in this five-minute read.
Designed for every kind of employee, this article aims to start a discussion that improves processes and procedures. That is the whole point of a quality process.
- The setup
- One Step Further
The primary question
Regardless of which quality assurance approach you use, let's start by getting you into the mindset of what we're asking.
You sit a test and score 100%. However, you happened to see the answer another student wrote on their paper.
Is your paper scored 0% for cheating, or are you just docked that question?
In a science exam, you get the definition wrong at the start but answer all the remaining questions correctly.
Does that show a fundamental lack of understanding of the principles, a simple mistake, or is it an instant fail?
The challenge is the interpretation of mistakes and how you reflect those mistakes in scores.
Mistakes in school are often ignored; a mistake in business can cost a multi-million-dollar deal.
A basic quality control
Let's do a quick quality control setup for calls coming into a contact centre.
The department in this very simplified example is customer care/service.
Care calls usually break into three sections, though there can be many more:
- Opening
- Care specific questions
- Closing
We will shorten this questionnaire to 10 questions to keep the maths simple.
Next, we throw it into Excel and let that do some of the work for us.
For simple maths, each question is worth 10 points: a yes scores 10 points, a no scores 0.
This is called weighting the questions.
We set an imaginary target of an 80% success rate, so we can only get two questions wrong and still be successful.
Like school systems, you can rate using letters if you really want; our example would get an A1.
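The scorecard above can be sketched in a few lines of Python. This is a minimal sketch; the constant and function names are my own, not from any particular tool.

```python
# Minimal sketch of the 10-question scorecard described above.
QUESTIONS = 10
POINTS_PER_QUESTION = 10
TARGET = 0.80  # the imaginary 80% success rate

def score_review(answers):
    """answers: list of booleans, one per question; True = yes, False = no."""
    points = sum(POINTS_PER_QUESTION for a in answers if a)
    pct = points / (QUESTIONS * POINTS_PER_QUESTION)
    return pct, pct >= TARGET

# Two wrong answers still meet the 80% target.
print(score_review([True] * 8 + [False] * 2))  # (0.8, True)
```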
When you work with an outsource company, the target for quality control reviews is typically 85% or higher.
The problem for quality control arises when you have very important questions, for example: was Data Protection observed?
Failing data protection is a very serious issue. It should not hide in the statistics.
To compensate, a common practice is to weight that one question heavily or negatively.
Missing it causes the quality control review to fail overall, which is where the term outright fail comes in.
For example, all other questions drop to 5 points each, but data protection is worth 55.
Other approaches build the maths so that a no on that question nullifies the total score.
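Both approaches can be sketched as follows, assuming the 55/5 weighting above; the function names are illustrative:

```python
# Two sketches of handling an outright-fail question such as
# "Was Data Protection observed?". Weights are illustrative.

def heavy_weighting(answers, dp_ok):
    """Nine questions worth 5 points each, data protection worth 55."""
    points = sum(5 for a in answers if a) + (55 if dp_ok else 0)
    return points / 100  # maximum is 9 * 5 + 55 = 100

def nullify(answers, dp_ok):
    """All ten questions worth 10 points, but a data protection no zeroes the score."""
    if not dp_ok:
        return 0.0
    return (sum(10 for a in answers if a) + 10) / 100

# An otherwise perfect call that misses data protection can never reach 80%.
print(heavy_weighting([True] * 9, dp_ok=False))  # 0.45
print(nullify([True] * 9, dp_ok=False))          # 0.0
```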
Skewed overall scoring
Let's say you perform 10 of these quality control checks and 9 of the 10 get a perfect 100% score.
The tenth, as per the example above, gets only the data protection question wrong, so it scores 0%.
The overall average works out at 90%, which looks good. However, is it actually representative of how you are performing?
This sets up a cascade issue through your scoring system. It is called skewing, and it results in skewed scores.
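A quick sketch of the arithmetic above, using the nine perfect reviews and the one outright fail:

```python
# Nine perfect reviews and one data protection outright fail.
scores = [1.0] * 9 + [0.0]

average = sum(scores) / len(scores)
print(f"{average:.0%}")  # 90%

# Excluding the outright fail tells a very different story.
passing = [s for s in scores if s > 0]
print(f"{sum(passing) / len(passing):.0%} with 1 outright fail")  # 100% with 1 outright fail
```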
If you use negative scoring or outright fails in your quality control checks, and they should be there, how do you manage their scoring?
For one centre I worked with, the average score was 86% instead of 98% because of 2 failed data protection reviews, even though data protection was the only thing that failed.
Outright fails and heavily weighted questions need special attention, as they may be abnormally skewing your quality scoring.
One option is to add another column to your Excel sheet for Outright Fail.
Your review of your scores then reads: we got 99%, but we had 1 outright fail.
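A minimal sketch of that extra column, assuming a hypothetical record per review with a score and a separate outright-fail flag:

```python
# Track outright fails in their own column instead of zeroing the score.
reviews = [{"score": 1.0, "outright_fail": False} for _ in range(9)]
reviews.append({"score": 0.9, "outright_fail": True})  # only data protection missed

average = sum(r["score"] for r in reviews) / len(reviews)
fails = sum(r["outright_fail"] for r in reviews)
print(f"{average:.0%} with {fails} outright fail(s)")  # 99% with 1 outright fail(s)
```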
This has been a very simple example.
When we start examining the numbers for management purposes, we need to be far more in-depth in our review.
How are our openings doing versus our closings?
If you have to take staff off the phones for training, the whole team doesn't need training on the whole process because one question was answered wrong.
So how can you minimise this waste? The answer is grouping.
Group the scores of the opening, the closing and your own specific areas. When training kicks in, you can review each part separately.
If you have strong enough reporting, you can even go to a question-level review.
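Grouping can be sketched like this, assuming hypothetical section names for the opening, care and closing questions:

```python
from collections import defaultdict

# Each tuple is (call section, question passed?). Data is illustrative.
results = [
    ("opening", True), ("opening", True),
    ("care", True), ("care", False), ("care", True),
    ("closing", True), ("closing", True),
]

totals = defaultdict(lambda: [0, 0])  # section -> [passed, asked]
for section, passed in results:
    totals[section][0] += int(passed)
    totals[section][1] += 1

# Only the care section needs coaching; opening and closing are fine.
for section, (passed, asked) in totals.items():
    print(f"{section}: {passed / asked:.0%}")
```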
One Step Further
System based approach
You can build very complicated Excel sheets, but there are tools built explicitly to do all the complicated maths and grouping for you.
For years ScoreBuddy has stood head and shoulders above the others.
It not only handles grouping but also fully manages Outright Fails.
SMARTER training plans
bxp has a Learning Management System (LMS) built in.
Each question in your Quality Control links to a piece of training that shows how it's done.
This ability is amazing for the training team, the calibration team and the operation in general.
If a person gets one question wrong in their quality control, the bxp platform generates a training plan just for them, built from their Quality Control reviews and covering only the bits they got wrong.
This means people only train on the bits they got wrong, and the training is tailored to their needs.
It’s like having the coach sit with you.
A special clickable report is emailed and accessible in the system, recording what each person works through. Managers and coaches can review this progress.
Consequently, this represents massive savings for contact centres in agent, team lead and manager time, and in time lost to reviews.
You can do all this in Excel; you don't need a tool. However, tools save massive amounts of setup, tweaking and changing time.
Please, please check that you're not skewing your scores through simple maths. It happens easily.
If you need to fix it, you don't have to start again.
Honestly, get bxp, have it running this week and see the actual difference in your quality scores.