This chapter explains how to construct a questionnaire, mainly for use in surveys. Other types of audience research don't use questionnaires much.
A questionnaire is a strange type of communication. It's like a play, in which one actor (the interviewer) is following rules and reading from the script, while the other actor (the respondent) can reply however he or she likes - but only certain types of reply will be recorded. This is an unnatural social situation, and in countries with no tradition of this kind of conversation, respondents may need to have the principles explained to them.
Though it is easy to write a questionnaire, you need a lot of skill and experience to write a good questionnaire: one in which every question is clear, can be answered accurately, and has usable results.
Working out what you need to know
It seems to be a natural human tendency to jump into action: to start writing a questionnaire the moment you decide to do a survey. However, better questionnaires result from planning the structure before you start writing any questions. If you simply start writing questions, you are likely to find out, too late, that some important questions were omitted, and other questions were not asked in a useful way.
It's important to distinguish between questions that a respondent is to answer (questionnaire questions), and questions that you (the publisher or organization) need answers to (internal questions - sometimes called research questions). The questions you ask yourself are usually unsuitable for asking respondents directly. This is a problem with a lot of questionnaires written by beginners.
Some of your internal questions might be:
1 What sorts of people tune in to our station?
2 How long do they tune in for?
3 What are the most popular programs?
4 If we introduced a talkback program, would this bring a larger audience?
Often, one internal question will need several questionnaire questions. Sometimes, one questionnaire question may help to answer several internal questions.
I suggest you draw up a large table, with three columns, headed Our question, How results will be used, and Priority - like this:

Our question                                   How results will be used       Priority
What sorts of people tune in to our station?
If we introduced a talkback program, would
this bring a larger audience?                  If Yes: go ahead with program
The priority column is there to help you reduce the number of questions, if the questionnaire is too long: low priority questions can be omitted.
How do you create such a table? And how can you make sure you don't miss any important internal questions? I suggest that many staff be involved in creating internal questions. The more people who are involved, the better the questionnaire will be (even though it may take longer to develop). An excellent method of working out internal questions is to hold a discovery conference, as explained in chapter 14 below.
Make this table on a very large sheet of paper and put it on the wall in a prominent place, where people will notice it, and be able to add suggestions. Later, you can add a fourth column, to show which questionnaire questions correspond with which internal questions.
When you have worked out what you want to know, and with what priority, then it is time to begin writing a questionnaire.
There are two main types of questionnaire: spoken and written. With a spoken questionnaire, interviewers read the questions aloud to respondents, and the interviewers fill in the answers. With written questionnaires, there are no interviewers. Respondents read the questions, and fill in their own answers.
The advantages and disadvantages of the different types of survey have already been described, in the Planning chapter above. For surveys of the whole population, it is normally best to use interviewers. Response rates are higher, fewer questions go unanswered, there is no assumption that all respondents can read, and results come back more quickly.
Written questionnaires are best when the surveyed population is highly literate, and most respondents know of the surveying organization - which is usually true for a media organization. If most of the population have never heard of the organization, the response rate is likely to be very low.
A suitable situation for written questionnaires is when an organization surveys its staff. Mail panels, with an educated population, can also work well, when people have agreed in advance to be surveyed.
The length of a spoken questionnaire is the time it will take people to answer, on average. There can be tremendous variation between respondents, but a skilled interviewer can hurry the most talkative people and encourage the most reticent, reducing the variation. The questionnaire for your first survey should be fairly brief, so that the average person will take no more than 5 or 10 minutes to answer. An interviewer can usually go through about two single-answer questions in a minute, or one multiple-answer question. In 10 minutes, about 15 questions can be asked.
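These rules of thumb can be turned into a quick arithmetic check before the pilot test. The sketch below is only an estimate under the assumed rates (two single-answer questions per minute, one multiple-answer question per minute); real timings come only from piloting:

```python
# Rough estimate of average interview length, using the rule-of-thumb
# rates above: about two single-answer questions per minute, and about
# one multiple-answer question per minute. These rates are assumptions,
# not measurements.

def estimated_minutes(single_answer_qs, multiple_answer_qs):
    return single_answer_qs / 2 + multiple_answer_qs / 1

# Example: 10 single-answer and 2 multiple-answer questions
print(estimated_minutes(10, 2))  # -> 7.0 minutes on average
```

A check like this is most useful early on, when deciding how many low-priority questions can be kept.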
When interviewers are skilled and the questionnaire is interesting and not too difficult, a face-to-face interview can often take up about 30 minutes. Telephone questionnaires should not last more than about 15 minutes, on average. Both interviewers and respondents find it much harder to concentrate on the telephone.
For printed questionnaires, a good maximum length is an A3 piece of paper, folded once to form a 4-page A4 leaflet. About 20 questions of average length will fit on a questionnaire of this size. Beyond that, pages must be stapled together, response rates fall off, return postage costs more, and it is generally a lot of trouble. So if at all possible, keep a written questionnaire down to 4 A4 pages, perhaps with a separate covering letter.
It's possible to use much longer questionnaires than the figures given above, but skilled questionnaire design is needed. Even so, the concentration of both interviewer and respondent tends to drop off towards the end of a long questionnaire. And if the questionnaire is very long, it takes weeks to analyse the results.
On the other hand, if the questionnaire is too short, respondents can be disappointed, and feel they haven't been able to give their full opinions. There's no advantage in a spoken interview taking less than about 5 minutes, or a written questionnaire being less than 2 pages.
Satisfying respondents is an important consideration in designing a questionnaire. This is especially true in a small community. If people say that your questionnaire was frustrating to complete, you may have a high refusal rate for your next survey.
The first few questions will set the scene for the respondent. It's important, at the beginning, to have some questions that are both interesting and easy to answer. As rapport gradually builds up between interviewer and interviewee, more difficult and personal questions can be asked.
In a good questionnaire, the questions will seem to flow in a logical order. Any break in this logical order should be punctuated by a few words of explanation from the interviewer, such as "Now a few questions about you." Such verbal headings should be used every few minutes, to let the respondents know what they'll be asked about.
The questions should move gradually from the general to the specific; this is called funnelling. For example, you may want to ask a question on attitudes towards the radio stations in your area, and also some questions about your own station's programs, without asking about the other stations' programs. At the beginning of the questionnaire, the respondents shouldn't know which station is conducting the survey. So if all those questions about your own station are asked first, respondents will think "Aha! So that's the station which is doing the survey!" Then, when it comes to the comparison of stations, the respondents will tend to favour the station that has organized the survey. Therefore, the question comparing the stations should come before the specific questions on programs.
In planning a small questionnaire, it's usually helpful to determine which question (or small group of questions) is the most important, and to build the questionnaire around this - leading up to the most important question, and away from it again.
The more sensitive a question, the closer it should be to the end of the interview, for two reasons: firstly, rapport takes time to build up, and secondly, if a respondent does get offended and refuses to go on with the interview, little information will be lost. Therefore, the demographic questions normally come close to the end of the questionnaire.
At the end of a questionnaire, I normally include a very general open-ended question, such as "Is there anything you'd like to add?" or "Would you like to make any comments?" Not many respondents have much to say at this point, but if a number of them make similar comments, this is perhaps a sign that you omitted a question that respondents think is important. So a question like this is a quality-control check: often more useful in the next survey than in the current one.
Questionnaire-writing should not be rushed, so don't set artificial deadlines. It's common for a questionnaire to be rewritten 10 times before it is ready to go. If, as a novice researcher, you think the questionnaire is perfect after only one or two rewritings, you probably haven't checked it enough.
It's important not to get too emotionally involved with a questionnaire. When you have drafted a questionnaire, don't think of it as "your" questionnaire, to be defended at all costs against attacks by others. Good questionnaires are group efforts - the more people who check them and comment on them, the better the questionnaires become.
Another difficulty is that, after you have written several drafts, it's hard for you to see what's really there, because you're remembering what was there in the earlier drafts. This is a good time to invite different people to comment on the latest draft. Experienced interviewers are among the best people to consult on questionnaire wording, because of their experience with hearing respondents answer many questions.
When you are writing a questionnaire, you will spend a lot of time re-typing and re-ordering questions. If you can use a word processor for updating the drafts, you'll save a lot of time. Most modern word processing programs have outline features built in. I suggest you learn to use outlining. It is not difficult to set up, and makes it very easy to rearrange the order of any text with headings and sub-headings.
If you don't have a word processor, type each question on a separate piece of paper - this makes it much easier to insert new questions, or change the sequence.
At some point, the development of a questionnaire must stop. Among argumentative people, it's possible never to reach a point where all can agree on a questionnaire. Even the most perfect questionnaire can be criticized along the lines that "You can't word the question that way, because some people might answer such-and-such." The real issue is not whether it is possible to misunderstand a question, but what proportion of respondents are likely to misunderstand it. This can only be known from experience with that type of question. However, any question can be misunderstood by some people - if they try hard enough.
Another problem which can never be solved is how detailed a question should be. When a small number of people are likely to give a particular answer, should a separate category be provided? For example, if you are recording the education level of each respondent, should you include a category for "postgraduate degree" - which might apply only to one person in 100? The answer depends both on the number of people in the category, and their importance. If the survey was mainly about education, you probably would include that category, but in a media usage survey of the whole population, it would probably be unnecessary. The safe solution is to include an "other" category, and ask interviewers to write in details of whatever "other" answers turn up.
Much of the value of a survey depends on the sensitivity of the interviewers. An interviewer who feels that a respondent may have misunderstood a question will probe and re-check. In this way, competent interviewers can compensate for a poorly worded questionnaire. But don't rely on this - you'll certainly get answers to a poorly worded question, if the interviewers are thorough - but the answers may not apply to the exact question that was printed.
It's useful to end a questionnaire with a broad open-ended question such as "Is there anything else you'd like to say that hasn't come up already?" This gives respondents an opportunity to communicate with you in their terms (the rest of the questionnaire has been on your terms). Though many replies to such a question will be irrelevant, you'll often find interesting and thought-provoking comments, which can be turned into questions for future surveys.
Questions can be described in two main ways: by content and by format. This section deals with different types of content; the next with different question formats.
A substantive question is one about the substance of the survey - the topics you want to know about. These are likely to be different for every survey, and for every organization. This seems so obvious that it's hardly worth mentioning - but there are other types of question too.
In most surveys, there are some questions which do not apply to everybody. For example, if some respondents have not heard a radio program, there is no point in asking their opinion of it. So the question on hearing the program would be used as a filter. On a written questionnaire, it would look like this:
Q16. Have you heard the program Cheo?
     1 Yes                     -> ask Q17
     2 No, or don't remember   -> skip to Q18
Q17. What is your opinion of the program Cheo?
1 Like it
2 Don't care
3 Dislike it
Question 16 is the filter question, because the people who answer No are filtered out of answering question 17, which asks about Cheo (a popular serial in Vietnam).
Any question whose answers determine whether another question is asked is known as a filter question. Sometimes (as discussed above, in the chapter on sampling) the whole questionnaire will be directed at only a certain category of people. In such a case, there will be a filter question right at the beginning. Depending on their answer to this question, people either answer all the other questions, or answer none. Such questionnaires take a lot less time for some respondents than for others.
Once I worked for a market research company that did an omnibus survey every week, for many different clients. Two filter questions asked about smoking and buying paint. Any unfortunate respondent who both smoked and bought paint took twice as long to get through the questionnaire as non-smokers who hadn't bought paint. So when you're checking the length of a questionnaire which has filter questions, you need to do so for the longest and shortest combinations of answers.
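One way to check the longest and shortest paths is simply to enumerate the filter combinations. The sketch below uses invented section names and question counts, loosely modelled on the smoking-and-paint example:

```python
# Sketch: counting how many questions each combination of filter answers
# produces. The section names and question counts here are invented,
# purely for illustration.
from itertools import product

core_questions = 20           # asked of everyone
sections = {                  # extra questions asked only if the filter answer is Yes
    "smokes": 8,
    "bought_paint": 8,
}

lengths = []
for answers in product([True, False], repeat=len(sections)):
    extra = sum(n for (name, n), yes in zip(sections.items(), answers) if yes)
    lengths.append(core_questions + extra)

print(min(lengths), max(lengths))  # shortest and longest paths: 20 36
```

The shortest path (no filters triggered) and the longest (all triggered) are the two lengths worth timing in the pilot test.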
When the sample is small, you should make sure that filter questions do not exclude too many people. Suppose you want to ask three questions about a program, but exclude non-listeners to the program. There are many ways of defining non-listeners. For example, your filter question could be any of these:
1. Have you ever in your life listened to the program Cheo?
2. Have you listened to Cheo in the last year?
3. Have you listened to Cheo in the last month?
4. Do you listen to Cheo on most days of the week?
5. Have you listened to Cheo on every day in the last year?
If you define listeners as those who said Yes to the 5th version, those people will be very well informed about the program, but you may find only a few percent of respondents answering the main questions about the program, because everybody else has been filtered out.
At the other extreme, you could include everybody who had ever listened to the program. Plenty of people would answer the questions about the program, but the opinions of some of them would be based on episodes they heard years ago.
The best solution is often to ask a filter question with a range of answers, not only Yes or No, e.g.
Thinking of the program Cheo, about when did you last listen to a full episode? In the last week? The last month? The last year? Longer ago than a year?
All people who had listened in the last year would be asked the questions about the program. It would then be possible to compare the opinions of recent and not-so-recent listeners.
Most questionnaires include a number of demographic questions. These are questions about the respondents' characteristics and circumstances. Questions about sex, age group, occupation, education, household type, income, and religion are all demographic. These are included in surveys for two main reasons: to compare the answers given by different demographic groups, and to compare the sample with census figures, as a check that it is representative.
For surveys with small samples (up to 100 respondents) the number of respondents will be too few for these comparisons. If you split 100 people into six age groups, some age groups will probably contain fewer than 10 people. Looking at the distribution of station listening in each age group may mean comparing 3 people in one age group with 5 in another. These numbers are too small to prove anything at all. Even with a large sample, there's seldom much value in dividing people into more than 6 demographic categories.
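The thinning-out effect is easy to demonstrate. The sketch below uses invented ages for 100 respondents and splits them into six groups:

```python
# Illustration: why six age groups are too many for a sample of 100.
# The age distribution here is randomly generated, just to show how
# thin the subgroups become.
import random

random.seed(1)
ages = [random.randint(15, 75) for _ in range(100)]

groups = ["15-24", "25-34", "35-44", "45-54", "55-64", "65+"]
counts = {g: 0 for g in groups}
for age in ages:
    index = min((age - 15) // 10, 5)   # 10-year bands, 65+ open-ended
    counts[groups[index]] += 1

print(counts)  # several groups will hold only a handful of respondents
```

Whatever the exact distribution, each group averages only about 17 people - far too few for reliable comparisons of listening habits.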
You should also compare your survey results with census figures. Most censuses ask questions about age group, sex, where people live, and the proportion of respondents who work. It's best to avoid asking about characteristics which many people regard as private, such as income or religion: answers are often inaccurate or misleading. Also, such questions upset some people, unless they can see a reason for them. For example, including questions on religious programs would justify a question about the respondent's religion.
An interviewer does not need to ask some "questions", such as the sex of the respondent, and the area where the person lives. The answers to such items are already known, and can simply be written on the questionnaire.
It's always interesting to compare results from different surveys. If you can find data from an earlier survey conducted in your area, or a survey on the same topic from anywhere else, you can include some comparison questions in your survey. Copy the exact question asked in the other survey, and find out how your respondents compare with others. Demographic questions are also comparison questions, when their results are used to compare survey data with census data.
Control items are not real questions, but other data gathered by the interviewer and recorded on the questionnaire. As already mentioned, the respondent's sex and residential locality are usually written on questionnaires. Other information is often useful, such as:
... and anything which may affect the answers given. These control items usually appear at the beginning and end of the questionnaire. For written questionnaires, they can be entered before the questionnaire reaches the respondent.
There are several different styles of question. The most common are multiple-choice and open-ended questions.
Multiple choice questions
Here is a typical multiple choice question: the respondent is asked to choose one answer from several possibilities offered:
"Which radio station do you listen to most often: 5SE, 5MG, or some other station?"  5SE
To answer the question, the interviewer ticks the appropriate box. The three boxes are supposed to cover all possible choices. But what if the respondent answers "I don't listen to radio at all"? That's not really an "other" station, so we probably need a fourth choice: "no station".
In a multiple-choice question, all possible answers must be catered for. To account for unexpected answers, it's usually a good idea to include an "other" category - though it can be annoying to find that "other" was the commonest answer. You should try to keep "other" below 5% of the total, though this is not always predictable. Pilot testing (explained below) will help in revealing common answers that should have been mentioned in a multiple-choice question.
A multiple-choice question normally needs a single answer. Sometimes multiple answers are valid (e.g. "Which of the following radio stations do you listen to?"), but when you're expecting one answer and get two, something is wrong. Probably the answer categories are not mutually exclusive.
With a questionnaire filled in by respondents, multiple choice questions can offer a large number of possible answers - the practical limit is about 50, or a full page. But when an interviewer reads out the questions, it is difficult for respondents to remember many of the possible answers when the interviewer recites these as a long list. I recommend offering no more than four choices, and limiting the total question length to about 25 words.
If you must offer a large number of choices, and the respondent cannot be expected to think of the correct one, it helps to divide the question into several smaller questions. This greatly reduces the number of possible answers to be read out at each step. In practice, it is often simpler still to ask a single open-ended question, and only if the respondent is unsure to follow up with prompting questions, reading off a short list of possible stations.
Another alternative, when there are many possible answers to a question, is to print them on a card, and hand this to respondents to choose their answers. But this cannot be done in telephone surveys, or in places where many respondents are illiterate.
Multiple-choice vs multiple-answer
Don't confuse multiple-choice questions with multiple-answer questions. A multiple-choice question is one where the respondent is told the possible answer choices. A multiple-answer question (which need not be multiple-choice) is one that allows more than one answer. Whether a question can have one answer or more than one depends on its meaning. Here is a question with only one possible answer:

"Which radio station do you listen to most often?"
And here's a multiple-answer question:

"Which radio stations do you listen to at least once a week?"
Multiple-answer questions must have at least one answer (even if that is "does not apply"), but they can have many more than that.
Often, a multiple-answer question is equivalent to a set of single-answer questions, as in this example (based on a place with 3 local radio stations):
(a) Single answer series
Do you listen to 5MG at least once a week?  Yes  No
Do you listen to 5SE at least once a week?  Yes  No
Do you listen to BRR at least once a week?  Yes  No
(b) Multiple-answer question
"Which radio stations do you listen to at least once a week? (Circle all that apply)
Sometimes respondents may be tempted to give all possible answers to a question. This often applies to questions that ask about reasons, e.g.
"Here are some reasons why people dont listen to radio. Please tell me the reasons that apply to you:
Not having a radio
Not knowing what stations there are
Not knowing what times the programs are on
No time to listen to radio
Dont like the programs
People tend to give several answers to such questions - but if every respondent gives every possible answer, this doesn't help much. You can make respondents think a little harder by limiting the number of answers to about 3 per person. Respondents who give more than 3 answers can be asked "Which are the three most important answers?"
Open-ended questions with limited choice
Open-ended questions, as the name implies, are those where the respondent can choose any answer. There are two types of open-ended question: limited choice and free choice. Limited choice questions are essentially the same as multiple choice questions, but the choices are not stated to the respondent. For example, here's a multiple-choice question:

"What is your marital status: married, single, widowed, or divorced?"
Here's the limited choice version of the same question:

"What is your marital status?"
To answer a limited choice question, the interviewer either ticks a box, or writes down the respondent's answer.
As far as the respondent is concerned, the only difference between multiple choice questions and limited choice questions is that a list of possible answers is not given for limited choice questions. The lack of prompting has two effects:
(a) some unexpected answers will be given, and
(b) many people will not think of an answer they should have given.
Notice that the two versions of the above question began identically. In a spoken interview, the respondent's only cue that the question has finished is a pause. If, in the multiple-choice version, the interviewer pauses for too long after the word "status", many respondents will answer immediately, before hearing the choices. So where memory or recognition might be a problem, it is acceptable to ask a limited-choice question (without listing alternative answers), and then, if the respondent hesitates, to read out a list of possible answers. This converts a limited-choice question into a multiple-choice one.
Sometimes a question may have a limited - but very large - number of possible answers. Examples are "What is your occupation?" and "What is your favourite television program?"
In both cases, it would be possible to list hundreds of alternative answers, but this is never done. There are two solutions: either the humble dotted line is called into service, or else pre-coded categories are provided. For example, occupations might be coded as white collar, blue collar, and other. The latter method is easier, but some information is lost. Worse still, interviewer error in this situation is both common and undetectable. When the respondent gives an occupation, the interviewer must decide its category within a few seconds, by ticking a box. It's much better if the interviewer writes down the occupation in full. Grouping of occupations can be done more accurately and consistently after all the completed questionnaires have been returned to the survey office.
Open-ended questions with free choice
A free choice question is one which has a near-infinite number of possible answers. The questionnaire provides a dotted line (or several) on which the interviewer writes the answer in the respondent's own words. An example of a free choice question is:

"What do you most like about the 5MG breakfast session?"
The problem with such questions is to make them specific enough. Just because a respondent did not give a particular answer, this does not necessarily mean that answer did not apply. Perhaps it did apply, but the respondent didn't think of it.
Therefore, if you have some particular emphasis in mind, the question wording must point respondents in that direction. Also, respondents should be encouraged to give multiple answers.
Thus, the results of the above question could not be used to assess what respondents thought of the announcer on the 5MG breakfast session: a respondent may like the announcer very much, but like the news still more.
A more specific way to ask such questions is in balanced pairs, e.g.
"Tell me some of the things you like about the 5MG breakfast session."
"And now please tell me some of the things you dont much like about the 5MG breakfast session."
With this format, any element, such as announcers or news, can have the number of likes and dislikes compared.
If an open-ended question is unclear to some respondents in a pilot test, consider explaining it. You can put a question into context by explaining why you are asking it, and what will be done with the information.
When there are hundreds of possible answers, and more than one answer is possible, it's good to break the question into several groups, so that respondents don't forget something. So instead of asking "Which magazines have you read in the last month?", say "I'd like to ask you about magazines you have read or looked into in the last month. Think about magazines you have read at home, at work, at school, in a public building, or in somebody else's home. Think about magazines you often read, magazines you've seen occasionally, and magazines you'd never seen before."
Detailed wording, like that, will produce a much longer (and more accurate) list of magazines read. However, the question takes a lot longer, both to ask and to answer, than the one-sentence version.
Questions answered with numbers are a special type of open-ended question. For example:
"What is your age?"
Enter number of years: .....
Though statistical software is designed mainly to handle numbers, numeric questions are rare in audience surveys. Most people are not good at giving precise numerical answers from memory. For example, I once organized an event survey including these two questions: "Which area do you live in?" and "How many kilometres is that from here?" The answers could be checked on a map. The average error in distance was more than 20%.
Even when people in a survey are asked their exact age, you always find unexpectedly large numbers aged 30, 40, 50, 60 and so on: it seems that some people round off their age to the nearest 10 (sometimes 5) years.
So if you're doing a survey where accurate numbers are important, you can't rely on respondents' memory. If they are being surveyed in their homes, the interviewer could ask to see documents to check the figures. Though some respondents may refuse, this method should produce more accurate results.
But do you really need this level of precision? Will 39-year-olds really have different TV preferences from 38-year-olds? Will people who live 7.2 km from the theatre be less likely to attend than those who live 7.1 km away? Surely not - and unless the sample size is very large, such differences would be barely detectable. Therefore, most surveys ask about age groups, and approximate numbers. Exact numerical answers are rarely needed.
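Age heaping of the kind just described is easy to check for once the questionnaires are in. The ages below are invented sample data; if nobody rounded, only about one reported age in ten would end in 0:

```python
# Quick check for age "heaping": if ages ending in 0 are far more common
# than the roughly 10% that chance would suggest, some respondents are
# rounding their ages. The ages below are invented sample data.

ages = [23, 30, 30, 34, 40, 40, 40, 41, 50, 50, 55, 60, 60, 62, 70]

multiples_of_ten = sum(1 for a in ages if a % 10 == 0)
share = multiples_of_ten / len(ages)
print(f"{share:.0%} of reported ages end in 0")
```

A share well above 10% is a sign that exact ages are unreliable, and that age groups would have served just as well.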
When to ask each type of question
You'll find that once you have thought up a question, the form it takes (whether multiple choice, limited-choice, or free choice) is determined not by its wording but by the number of possible answers. Few questions can easily be converted from one of the three types to another.
A good questionnaire needs both multiple choice questions (with few possible answers) and free choice questions (to which everybody could give a different answer). Multiple choice questions are easily processed by counting, but provide little detail. Free choice answers have a lot of detail, but the bulk of that detail can be difficult to handle.
In professional surveys, with their sample sizes of several thousand respondents, the free choice answers are always a problem. Verbatim responses take more time to process, and computers can't summarize them well. Thus, the most common fate of questions with a large number of possible answers is to have these answers divided into categories. A coder reads all the answers, works out a set of categories (often 10 to 20), then decides which category each answer falls into.
The result of this process is a table showing the percentage of people answering in each category, though each category is itself a mixture of widely differing answers. In other words, to fit the computer system, a lot of the information is lost by the grouping of responses.
But when the sample size is less than about 500, the information need not be lost. Though it is still helpful to group the open-ended answers (especially if you want to test some hypothesis that you have), the volume of wording in the answers is not too much to read through.
The use of verbatim responses can partly substitute for a small sample. For example, with a large-scale survey you might try to find out why people listen to one radio station rather than another, by cross-tabulating demographic categories against the station listened to. With a small-scale survey, the equivalent would be studying the open-ended reasons given by those who prefer each station.
The implication of this for a small survey is to make maximum use of open-ended questions. Compared with multiple choice questions, less can go wrong with question wording, and the mathematical skills needed for normal survey analysis are largely replaced by verbal skills, which are more common among broadcasters.
However, a survey with only open-ended questions will produce no numerical results at all. The most useful information is produced when open-ended and multiple choice questions are combined, in effect covering the same topic in different ways. For example...
1 What do you most like about 5MG's breakfast session?
2 What do you most dislike about 5MG's breakfast session?
3 To summarize 5MG's breakfast session, would you say it is an excellent program, or a good program, or not very good?
Question 3 above summarizes the results of questions 1 and 2, enables percentages to be calculated (e.g. 57% may have thought the program excellent), and also serves as a check on the two other answers. If a respondent has a lot of likes (Q.1) and no dislikes (Q.2), but then rates the program as "not very good", this may show that he or she has not heard Q.3 properly. The interviewer is in a position to detect and ask about the apparent discrepancy. It's good practice to use this cross-checking technique whenever possible.
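The same cross-check can be repeated at the office, after the questionnaires come back. The sketch below flags respondents whose answers to the three questions look inconsistent; the field names and data are invented for illustration:

```python
# Sketch of the cross-check described above: flag respondents whose
# open-ended likes/dislikes (Q1, Q2) clash with their summary rating (Q3).
# Field names and data are invented.

respondents = [
    {"id": 1, "likes": ["music", "news"], "dislikes": [], "rating": "not very good"},
    {"id": 2, "likes": ["announcer"], "dislikes": ["ads"], "rating": "good"},
]

def inconsistent(r):
    # Many likes, no dislikes, yet a poor rating: worth re-checking.
    return len(r["likes"]) >= 2 and not r["dislikes"] and r["rating"] == "not very good"

flagged = [r["id"] for r in respondents if inconsistent(r)]
print(flagged)  # -> [1]
```

Flagged questionnaires need not be discarded: they are simply the ones worth re-reading, or querying with the interviewer concerned.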