There is a significant body of management science behind the formation, delivery, management, and interpretation of surveys, sufficient to fill a series of books in its own right. The intent of this article is only to provide the layperson with some guidelines for preparing and administering a survey, suitable for use within a company or the customer base.
There are three truths that underpin any change initiative. The first two are:
1. If you can tell me how you are measured, I will show you how you behave. The principle is that measurement (including the absence of measurement) drives behaviour. So only measure criteria you can change. If you can’t change it, don’t measure it.
2. People only change when their discomfort is high, whether caused by “pain” or by unrealised “pleasure.” These two drivers sit at opposite ends of a spectrum, and people sitting anywhere in between are unlikely to change. Choose questions that will measure where the respondents sit on that spectrum.
A survey is a quick and easy means to measure the survey population’s position relative to both points, with the second point being easier to measure than the first. The most important feature of a survey is that it is only a snapshot of people’s perceptions at a specific point in time. This brings me to the third truth:
3. General statements do not define the specific and the specific does not define general statements.
A survey provides a snapshot in time on general statements only. For example, a survey on customer satisfaction may indicate that customers are highly satisfied with the service they have received. This does not mean that every single customer is happy, and it would not be difficult to find a single customer who was unhappy. All you can conclude from the survey is that, generally, customers are happy. Equally, finding one unhappy customer does not invalidate the survey.
The biggest mistake in surveys is measuring what you cannot change. This issue typically manifests itself through broad questions. The less specific the question, the more it is open to interpretation by the respondent. Consider the question: “Are you happy? Answer yes or no.” This may seem like a specific question due to the binary nature of the answer, but it is actually a very general question. What is “happy”? How do I know when I am happy, and do I measure my happiness the same way as the next person?
Assume a 60/40 split in responses, yes to no. At best, given the inherent vagueness in the concept of happiness, the most reliable insight that can be inferred from the survey is that, at the time of answering the question, 60% of the respondents were not unhappy. It does not predict whether the same people will be happy one minute or one hour later. If your objective was to make everyone happy, then this survey offers no insight into what is making people happy or unhappy. It provides no clue as to what needs to change. In summary, this style of question is a waste of time.
A better approach is to break the concept you wish to measure into its component parts, ensuring that no matter what the answer, you will be able to introduce a change that will improve the result. Assume you wish to survey management’s perception of the quality of information they receive. The first hurdle is to define the concept of “quality.” As with the happiness example, it would be futile to ask management whether they considered the information they received to be of poor or good quality, as you would not know what to change if the answer was that the quality was poor.
To resolve this issue, I define quality information to be information that is complete, accurate, and timely. In other words, I get all the information I want, when I want it and without errors.
Using this definition, the first question could be: “Do you consider the information you receive to be complete?” It is substantially easier to resolve issues around incomplete information than it is to fix issues of poor quality in general. The question can be refined further to “How often are you required to request additional information for use in the decision-making process?”, since you cannot be confident that everyone defines “complete” the same way.
The survey is further improved by moving away from using binary answers (yes/no) to using a scale. A scale allows the respondent to be more specific in their answers. The Likert scale is my preference. The primary characteristic of a Likert scale is that it considers all responses to be equal. To set it up, the survey author should write down the question and then, at a minimum, define each side of the scale. Ideally each response point in the scale will also be labelled.
A Likert scale should comprise at least five choices. The ideal number is eight as it allows the respondent to show a higher sensitivity in how they respond to each question. I prefer using an even number of choices as it forces a decision from the respondent. Using an odd number provides a natural midpoint that can become the easy choice for respondents not wishing to commit themselves. There is no midpoint with an even number of choices.
The question on completeness now looks like this:
How often are you required to request additional information for use in the decision-making process?
Constantly   1   2   3   4   5   6   7   8   Seldom
The results are presented by totalling the number of times each point is selected, as each point on the scale is equally valid.
I also recommend asking the question twice. The first question is to evaluate the current position and the second is to determine the ideal or desired position.
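As a minimal sketch of how the two sets of answers might be recorded and tallied (assuming responses are captured as integers from 1 to 8; the variable names and sample answers are illustrative only):

    from collections import Counter

    SCALE = range(1, 9)  # 8-point Likert scale: 1 = "Constantly", 8 = "Seldom"

    def tally(responses):
        # Count how many times each scale point was selected. Every point
        # is treated as equally valid, so the result is a frequency count
        # per point, not an average across the scale.
        counts = Counter(responses)
        return {point: counts.get(point, 0) for point in SCALE}

    # Hypothetical answers to the same question asked twice:
    current_responses = [2, 2, 3, 5, 6, 7, 8, 2]  # current position
    ideal_responses = [4, 5, 5, 6, 8, 8, 4, 5]    # ideal or desired position

    print(tally(current_responses))
    print(tally(ideal_responses))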
The results graph could look as follows, with the current position shown in front and the ideal position at the back.
From the graph it can be seen that, of the 160 respondents (managers within the business), 40 rated the current completeness of information at 2, 20 at 3, none at 4, 30 at 5, and 20 at each of 6, 7 and 8.
The important point is that there is no trend line. It is only a series of discrete scores.
From the graph it can be seen that the vast majority of managers consider the information they receive to be incomplete. This is an easily accepted result. What is unexpected is that approximately 60% of managers gave an ideal score of 4, 5 or 6. That is, over half of the survey population do not consider it important to have complete information to do their jobs. (Not all managers responded to the question for the ideal position.)
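As a small sketch of how such shares are computed from the tallies (using the current-position counts read off the graph above; no count was reported for point 1, and the ideal-position tallies would be analysed the same way):

    # Current-position counts per scale point, as read off the graph.
    current = {2: 40, 3: 20, 4: 0, 5: 30, 6: 20, 7: 20, 8: 20}

    def band_share(counts, points):
        # Share of those who answered whose score falls within the band.
        answered = sum(counts.values())
        in_band = sum(counts.get(p, 0) for p in points)
        return 100 * in_band / answered

    print(f"{band_share(current, (2, 3)):.0f}% sit at the 'Constantly' end (2-3)")
    print(f"{band_share(current, (4, 5, 6)):.0f}% sit in the middle band (4-6)")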
These results can be enhanced with follow-up interviews to understand them better, and further insight can be gained by cross-referencing responses against demographic information about the respondents, such as seniority, gender, location, and function.
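As an illustrative sketch (assuming the responses and demographics sit in a pandas DataFrame; the column names and values are hypothetical):

    import pandas as pd

    # Hypothetical data: one row per respondent, holding their scores for
    # the completeness question plus their demographic attributes.
    df = pd.DataFrame({
        "current_score": [2, 2, 3, 5, 6, 7, 8, 5],
        "ideal_score": [4, 5, 5, 6, 8, 8, 4, 5],
        "seniority": ["senior", "middle", "middle", "senior",
                      "junior", "junior", "senior", "middle"],
    })

    # Average gap between ideal and current position, by seniority, to see
    # whether the gap is concentrated in a particular group.
    df["gap"] = df["ideal_score"] - df["current_score"]
    print(df.groupby("seniority")["gap"].mean())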
Once you have established the gap between the current and the ideal position, the question of how to close it arises. My experience is that the gaps are closed through a combination of changes to policy, behaviour, process, and technology. The following table illustrates how this can be worked through:
On the left are the criteria measured by the survey. On the right are the four change drivers of behaviour, policy, process, and technology. The numbers indicate the order in which the drivers need to be addressed to close the gap, in decreasing order of priority (1 is the highest priority and 4 the lowest).
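For illustration, such a table (with hypothetical priorities, consistent with the interpretation that follows) might look like this:

    Criterion    Behaviour    Policy    Process    Technology
    Complete         2           1         3           4
    Accurate         2           1         3           4
    Timely           3           1         2           4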
It can be seen that substantial improvements across all measures can be made by changing or introducing policy supplemented by changes to behaviour and process. Frequently companies jump straight to changes to technology. In this case, changes to technology will help, but they are not the place to start.
A survey can act as a catalyst for change and can provide a baseline prior to making changes. But it is important to keep in mind that it is only a snapshot in time and it only provides answers to the specific questions that you ask.