Katy Wilburn led the third session of the day at the 6th annual Voluntas conference. Her presentation focused on how to reach customer goals using research design, methods and analysis.
Discover Katy's top seven tips now.
4. • Measuring contractor/internal KPIs
• Benchmarking against peers
• Benchmarking against the rest of the business
• Monitoring end-to-end service delivery
• Highlighting failures in process
• Improving value for money
• Identifying cost savings
• Understanding what we do well/not so well
• Understanding what’s important to tenants
What’s the point? Objectives-led design
Objectives: we need to ask ‘what is the point?’ to get everybody thinking from the start about where they want to get to.
6. Measure what’s important: don’t try to make important what you measure
• Don’t just ask whether you delivered your service standards
• Shine a light on your known areas of weakness
• Ask about things outside your control
• Check the open comments for gaps
Benchmarking has its place… but remember it’s not important to your customers
7. Framework: Objectives led → Engage the Business → Theme and Question Design → Analyse Data → Reporting Insight
Tips so far: What’s the point? • Measure what’s important... • Use open questions wisely
3. Use open questions wisely
8. Use open questions wisely
‘Why were you dissatisfied?’ → “Because it was terrible”
‘What would you most like to see improve?’ →
• “Better communication when appointments are booked as to who and when they are coming to do what repairs, if more than one job is booked”
• “Communicate – give a choice on appointment times/days”
• “The fan is working but I have been told not to have my window open when it is in use, which is not what I expected”
9. Framework recap – tips so far: What’s the point? • Measure what’s important... • Use open questions wisely • Focus on the fence sitters
4. Focus on the fence sitters
10. The ‘neither’: “Embracing ambivalence”
Think more about the middle-ground ‘neither’ responders – they don’t hate you, they just don’t feel that strongly about your service one way or another. It is a shorter distance to travel from that attitude up into satisfied.
11. Framework recap – tips so far: What’s the point? • Measure what’s important... • Use open questions wisely • Focus on the fence sitters • Results are just the start
5. Results are just the start.
12. Results are just the start: the difference between results and insight
“80% of customers were satisfied with appointments” → “20% dissatisfied are all young working families”
“95% of the time the operative showed their ID….” → “the other 5% is always Nigel”
Don’t just break data down by all the usual suspects (single demographics, age, property type); think about what might really have an effect on satisfaction or expectations of service delivery…
13. Framework recap – tips so far: What’s the point? • Measure what’s important... • Use open questions wisely • Focus on the fence sitters • Results are just the start • Everyone loves a good story
6. Everyone loves a good story.
‘people don’t get data, but they love a good story’
14. A lunchtime story….
23% of customers said they were dissatisfied with the system for allocating repairs appointments.
This is a high level of dissatisfaction compared to the other measures in the survey, so we have identified appointments as a key area for improvement.
This confirms the desk research we did on the main causes of responsive repairs complaints before we started the survey, which also highlighted appointments not being kept as a cause of issues.
Our sub-group analysis tells us that dissatisfaction is highest for households who work part time, but we’ve looked at our in-house data and know that this group have no more appointments missed than any other.
We’ve looked at the suggestions for improvement in this area, and customers would like 2-hour appointment slots, or a text message when the operative is on their way, so they don’t have to take a whole day off work or wait in all day for the operative to arrive.
This would benefit all customers, but is particularly important for customers who work.
15. Framework recap – all six tips, plus the quote: ‘people don’t get data, but they love a good story’
16. Framework recap – Review, Tweak, Change, Improve….
7. Review, Tweak, Change, Improve…
17. Want a free conference debrief?
Contact us HERE
Editor’s Notes
The seven tips I want to give you today are spread across the whole design and reporting process, starting with the first day we meet a new client.
So starting at the beginning, at Voluntas we talk a lot about objectives led design when we set up a new project, and this runs right the way through how we design the survey, the samples we choose and how we report the data back to you.
When we turn up for the first survey set-up meeting, or first talk on the phone or by email, the first thing we ask is ‘in 6/12/18 months’ time, what do you expect this research to give you?’ ‘What do you expect to get out of it?’ This is a bit of an odd question, because generally at this point you’ve just commissioned us to deliver a survey, and we arrive and say ‘what’s this for exactly?’ ‘Do you really need it?’
What’s the point of doing customer satisfaction research?
What we’re really trying to do by starting the meeting that way is step back from our clients coming to us with a list of questions and saying ‘we need to ask this’, and get everybody thinking from the start about where you’re looking to get to, what you want from the survey, and where your service is heading – starting at the end, if you like, and working backwards.
Why do we do that, and how does that help? We find that if we verbalise the aims of the research and how they link to the direction the service is travelling in, the research has a much better chance of delivering something actionable at the end, because we set a fixed structure for the whole research project which you can refer back to: this question links to this objective, this headline result tells us how close we are to achieving this, these suggestions for improvement should help us get here.
Designing research by objective also tends to make it more fluid – if you know what you’re looking to achieve from the research, it becomes obvious much more quickly when the current questions, design or reporting aren’t working, and they get changed and improved quickly, rather than you ending up with a survey which hasn’t changed or delivered anything in 12 months.
Satisfaction research can have very different objectives in different organisations (click list), which also means that starting your survey design this way makes your research more personalised to your business and less abstract – so you’re already starting to knock down that data silo that can sometimes build up around research in Housing Associations, which are predominantly operational businesses.
So, we discussed the objectives and set a structure for what the research is trying to achieve. Now we need to start designing a survey which unpacks how to get to those objectives in sufficient detail for you to be able to take action to deliver them.
Starting on the quantitative side first….
We need to measure what’s important.
This might sound like the most obvious piece of advice you’ve ever heard in your life, but it’s almost always where customer research fails – straightforwardly asking the wrong questions, collecting loads of data and absolutely no insight – and no amount of clever analysis is going to salvage that situation once it’s done.
As an example: we do a lot of key driver analysis at the moment, looking at which other questions in a survey have the greatest pull on overall satisfaction. Improving ‘overall satisfaction’ is an incredibly difficult objective to achieve, because ‘overall satisfaction’ is abstract; it’s basically a feeling. You’re trying to shift how your customers feel about you, and that could be impacted by all sorts of things, some inside and some outside of your control – so you have to work out what concrete things are causing that feeling (and even that can be fluid day to day) before you can work on doing those things more, or less, or better, or differently, to have a knock-on effect on customers’ overall satisfaction.
KDA helps because it starts to do that: it highlights the other questions in the survey which are having the greatest pull on whether a resident is ‘satisfied with you overall’ or not. BUT it’s an analysis tool, not a crystal ball – it can only tell you about the questions that were in the questionnaire in the first place. If the real issue was never asked about – if all your residents are dissatisfied with waiting time but you only ask about reporting and then service delivery – we’re never going to be able to feed that bit of insight back to you. The KDA will tell you which is more important, ‘reporting’ or ‘service delivery’, but it won’t tell you the most important thing, because it was never there to find. Applying advanced analysis to poor data is a bit like employing a crowd of blokes with spades to try to help you find enlightenment.
So how do you make sure you’re asking about what’s important? The first stage is to recognise that what’s important to customers doesn’t fit in a neat little box. Sometimes just delivering your service standards to the letter won’t improve satisfaction, so you need to leave space for customers to give you that feedback. In ‘getting at the goals of the customer’, you’re not the customer – we’re trying to get at what your customers want, the households you house and support.
Measuring what’s important is going to involve asking difficult questions and putting your service out there for criticism – asking about things you already know you do badly; at least then you can measure and publicise the improvement in that sore spot over time.
Ask about things outside your control – this comes up a lot in repairs and typically around the relationship between contact centres taking reports and repairs team delivering the service.
I’ve been told by a lot of repairs managers that one of their main issues is the reporting of repairs – inaccuracies in recording, or contact centre staff raising customers’ expectations, potentially beyond service standards – and that this is impacting their satisfaction scores while being outside their control, coming out in the open comments because it isn’t measured explicitly.
I don’t want to start a discussion about whether that’s true or not, but my argument is always: it might be outside of your control, but include it in your survey anyway. From the tenant’s point of view it’s all one service, and from your point of view, that way you can measure the number of times it’s the case, the impact it’s having on your satisfaction score and potentially how it needs to be improved. Then you’ll either be proven right or wrong in your assumption that this is one of your major issues, and if you’re right, when you’re feeding it back you’ll have facts, not anecdotes.
With the best will in the world, we’re all still going to miss something when we design our surveys, so after 3 or 6 months of surveying you need to check the open comments – because that’s where those missed bits will appear, and if something is mentioned often enough you might need to add it as a long-term measure to start addressing it.
Finally, think carefully about benchmarking – some organisations swear by it, others don’t. Don’t worry, this is not my well-worn rant about STAR questions (anyone who wants to hear that can find me at the BBQ later), but what I will say is that some high-level benchmarking fits squarely in the ‘trying to make important what you measure’ camp rather than actually measuring what’s important in terms of improving services, because to make the questions generic they are so far abstracted from the service they are measuring. If you run a key driver analysis on the 7 core STAR questions, 9 times out of 10 ‘repairs and maintenance’ and ‘listening to your views and acting upon them’ come out as the key areas to work on, which gives you some headlines but falls a very long way short of helping you know exactly what to do to improve.
So, we’ve nailed our quantitative questions – what about the qualitative?
Open questions are really important in terms of understanding how to improve service – quantitative scores tell you where the service is failing or could be improved, qualitative tells you how to improve it.
BUT they have to be used sparingly in a survey – there’s nothing more fatiguing than being asked ‘how do you rate x, and why do you say that? How do you rate y, and why do you say that? And z, how do you rate z…’ If you’re anything like me and you’re confronted with that in an online survey, you either give up or start going back to change your answers to try and avoid the follow-up. Position open questions where they will have most benefit – after questions which need some elaboration to help you action the feedback – and make them varied and specific: follow up where customers said they weren’t happy with the waiting time with ‘how long do you feel would have been OK to wait for this repair?’ instead of ‘why do you say this?’
Open questions should be about finding ways to improve rather than going over previous issues, so word them that way – don’t ask ‘why were you dissatisfied?’ Ask ‘what one thing can we do to improve?’ You’ll be surprised at the difference it makes in terms of the directness of the responses you get, how actionable they are, and the tone of the interview itself, which switches from evaluation to service recovery.
Analysing data…..
I’ve lost count of the number of times I’ve had the ‘neither’ debate this year!
And I know there are people in the room with a range of views on this subject and I don’t want to upset anyone, so putting aside the methodological and academic arguments for today I want to make a case for why I think the ‘neither’ should stay in relation to how it helps identify improvements in service and satisfaction.
I think a lot of this centres on how you think about customers who say they are neither satisfied nor dissatisfied, so my aim is to maybe make you think about them differently – as ‘potentially satisfied’ rather than ‘without a view’.
In our web portal, in Housemark and pretty much everywhere else, satisfaction scales are broken down into satisfied, dissatisfied and neither in the middle – and when looking to ‘improve satisfaction’, our instinct seems to go directly to the group at the end, the dissatisfieds, and try to work out why that is; we regularly have tailored follow-ups which ask why you were dissatisfied, etc.
However, the sad truth is, some of this percentage are going to be customers who will simply never like you whatever you do, the hard core detractors who will never in a million years be shifted in their negative opinion to the extent that they will give you a satisfied score.
Think about the last time you said you were ‘dissatisfied’ with a service you received, and the strength of negative feeling you have to have about a service to give that sort of score – it’s a long way from there back up to satisfied. Now, I’m not saying we should therefore ignore that group, and certainly from a service recovery point of view you wouldn’t want to; what I’m trying to say is that if you’re looking for quick wins to improve satisfaction, they’re probably not the place to look.
Now think again about our middle ground neither responders - they don’t hate you, they just don’t feel strongly about your service one way or another, or potentially they had a mixed experience, some of it was good but something grated on them enough to not give you a satisfied score. It’s a lot shorter distance to travel from that attitude up into satisfied, so maybe focusing on the reasons for that ambivalence and addressing that will identify a few quick wins and easy changes to make while you’re addressing that hard core of dissatisfaction in the longer term?
I’ve had clients say to me over the past year that the neithers are unfairly pulling their satisfaction score down, or that they’re basically counted as dissatisfied because the benchmarking scores are positive responses only – which is actually a helpful way to think about them: a group that pulls your satisfaction score down, but who might be more easily swayed than the real dissatisfieds.
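One way to make that framing concrete is to treat the ‘neither’ group as headroom: the ceiling your satisfaction score could reach if every fence-sitter were nudged up. A tiny illustrative sketch, using made-up 5-point scores where 4-5 counts as satisfied:

```python
# Hypothetical 5-point scores: 4-5 = satisfied, 3 = neither, 1-2 = dissatisfied.
scores = [5, 4, 4, 3, 3, 3, 2, 1, 4, 3]

satisfied = sum(1 for s in scores if s >= 4)
neither = sum(1 for s in scores if s == 3)

current = satisfied / len(scores)
# Ceiling if every fence-sitter were moved up into 'satisfied'.
potential = (satisfied + neither) / len(scores)

print(f"satisfied now: {current:.0%}, if neithers converted: {potential:.0%}")
```

The gap between the two figures is the ‘quick win’ territory the notes above describe – the hard-core dissatisfieds sit outside it.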
So, we’ve designed the survey, we have data and the last stage is to report that data back to the business and start improving.
I’ve linked reporting and engaging the business together here because, as a bonus tip, I think it’s important to think about these two as one and the same thing – what you report should always be about engaging the business and moving towards taking action; it shouldn’t be report first and then work out what to feed back.
Reporting Insight. The important word here is ‘insight’.
Insight, to me, has to be something more than a headline – ‘80% of customers satisfied with appointments’ and ‘95% of operatives showing their ID’ is not insight, it’s a finding.
Identifying who was dissatisfied, and potentially why, or where the service failure is – that is insight. The difference is that insight is actionable: it moves you closer to doing something to improve, it directs your efforts.
To do that you need to break the data down further, look at wider data, and look at your sub-groups. And think outside the box a little bit – don’t just break your data down by all the usual suspects (single demographics, age, property type); think about what might really have an effect on satisfaction or expectations of service delivery. Not just HB or not: level of contribution towards your rent has a massive impact – partial HB households are always the least satisfied with VFM because they struggle to find that money.
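To illustrate that kind of sub-group cut, here is a small sketch over made-up respondent records – the attribute names and values (`work`, `hb`) are hypothetical, standing in for whatever fields your own data holds:

```python
from collections import defaultdict

# Hypothetical respondent records: a score plus attributes worth crossing.
respondents = [
    {"score": 2, "work": "part-time", "hb": "partial"},
    {"score": 5, "work": "retired",   "hb": "full"},
    {"score": 3, "work": "part-time", "hb": "partial"},
    {"score": 4, "work": "full-time", "hb": "none"},
    {"score": 2, "work": "part-time", "hb": "partial"},
    {"score": 5, "work": "retired",   "hb": "full"},
]

def satisfaction_by(respondents, key):
    """Share scoring 4+ within each value of `key` (e.g. work status)."""
    groups = defaultdict(list)
    for r in respondents:
        groups[r[key]].append(r["score"] >= 4)
    return {g: sum(v) / len(v) for g, v in groups.items()}

print(satisfaction_by(respondents, "work"))  # part-time stands out low
print(satisfaction_by(respondents, "hb"))    # partial HB least satisfied
```

The same function cuts the data by any attribute, so trying a less usual suspect is one extra line rather than a new report.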
We’re asked a lot by performance teams about feeding results back to service leads and operational staff in a way that makes them take action – my last tip is really to share with you some great advice I was given by my boss in my first ever research job.
I’d just written what I thought was a brilliant report – for a start it was about 60 pages long, and who doesn’t love 60 pages of stats, right? It had graphs for every point, it had every question broken down by at least three different sub-groups, it was ridiculously detailed – but what it didn’t have was a ‘so what?’
So what should we do? So what should we change? So what will make our service better?
The advice my boss gave me was ‘people don’t get data, but they love a good story’ – take it away and tell a continuous story with the data; write it like a murder mystery, and at the end tell them whodunnit.
So here’s a good example of a data story, which uses some of the tips I’ve highlighted today and draws in wider business intelligence to make it more relevant.
23% of customers said they were dissatisfied with the system for allocating repairs appointments.
This is a high level of dissatisfaction compared to the other measures in the survey so we have identified appointments as a key area for improvement.
This confirms the desk research we did on the main causes of responsive repairs complaints before we started the survey, which also highlighted appointments not being kept causing issues.
Our sub group analysis tells us that dissatisfaction is highest for households who work part time, but we’ve looked at our in-house data and know that this group have no more appointments missed than any other.
We’ve looked at the suggestions for improvement in this area, and customers would like 2 hr appointment slots, or a text message when the operative is on their way, so they don’t have to take a whole day off work, or wait in all day for the operative to arrive.
This would benefit all customers, but is particularly important for customers who work.
I did say at the start seven hints and tips, and the more observant amongst you will have noticed that was just six – so the seventh and final tip is about remembering to review. Your business and the way you deliver your services evolve, and so should your research: every 6 months, look at what your research has given you in that time and whether it’s really helping you move towards your objective, and if it’s not, it’s time to look at these points again and think about what needs to change.