In 1998 I organized an evaluation of a pilot TV program intended for young people, aged between about 13 and 20. It seemed to be an ideal situation for a consensus group. This research had to be done as cheaply as possible, so instead of recruiting participants through a random screening survey (which would have been best) we contacted a school and a university, and asked them to supply volunteers in the age groups we needed. We organized two viewing sessions, one with the school students (aged about 13-17) and one with university students (about 17-20). There were about 30 people in each group.
Participants were paid to take part in these groups, which were held at the main TV building in Adelaide, Australia. Each person was paid about as much as they could have earned in an hour or two. And we paid them as soon as they came in, which made a great impression on them. "What if I walked out now?" some of them asked. "Stick around," we advised. "This will be interesting." (The reason for the payment was to induce them to turn up as promised - it wasn't a reward for participating.)
Everybody filled in a preliminary questionnaire, with some details about themselves and their TV viewing habits and preferences. Then we showed the pilot program, twice. (It lasted only about 20 minutes.) It was divided into short segments, and participants were encouraged to write comments while they viewed the program for the second time. Finally, we asked the viewers to rate their chances of watching it if it became a regular program. This was on a scale of 0 to 10, where 0 meant there was absolutely no way they'd watch the program willingly, and 10 meant they'd go out of their way to watch it every week.
Now the consensus phase began. We had the participants stand in groups, depending on the scores they'd given. The quarter of them with the lowest scores (about 5 or less out of 10) were sent off to form one group, and the quarter with the highest scores (8 or more) formed another group. The middle half were divided into two groups: one for those who'd scored the program 6 out of 10, and one for those who'd scored it 7. Each group (averaging about 7 people) was asked to appoint a moderator and a secretary.
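The grouping rule above is simple enough to sketch in a few lines of Python. This is only an illustration of the cut-offs described in the text (low quarter at 5 or less, middle groups at 6 and 7, high quarter at 8 or more); the participant codes and scores are invented.

```python
def assign_group(score):
    """Map a 0-10 rating to one of the four consensus groups."""
    if score <= 5:
        return "low"       # asked why they disliked the program
    elif score == 6:
        return "middle-6"  # asked for ways of improving it
    elif score == 7:
        return "middle-7"  # asked for ways of improving it
    else:
        return "high"      # asked why they liked it

# Invented example ratings, one per participant
ratings = {"P01": 3, "P02": 6, "P03": 8, "P04": 7, "P05": 10, "P06": 5}

groups = {}
for person, score in ratings.items():
    groups.setdefault(assign_group(score), []).append(person)
```

With roughly 30 participants per session, this split yields four working groups of about seven people each, as described above.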
The high-scoring group was then asked to come up with a list of reasons why they liked the program so much.
The low-scoring group was asked why they disliked it.
The two middle-scoring groups were asked to come up with ways of improving it.
We did all this in a large room, with four tables. My assistant and I toured around the four groups, offering procedural advice. The moderator in each group was asked to get at least two suggestions from each group member, and the secretary wrote these down on a very large sheet of paper. When everything had been written down, the group members voted on each statement, and the number who agreed with each statement was written next to that item.
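The tallying done on the secretary's sheet can be sketched the same way: each member indicates which suggestions they agree with, and the count of agreement is recorded next to each one. All names and votes here are invented for illustration.

```python
def tally_votes(suggestions, votes):
    """Count, for each suggestion, how many members agreed with it.

    votes maps each member to the set of suggestions they endorsed.
    """
    return {s: sum(1 for agreed in votes.values() if s in agreed)
            for s in suggestions}

# Invented example: three suggestions, three voting members
suggestions = ["Shorter segments", "More music", "Older presenters"]
votes = {
    "member1": {"Shorter segments", "More music"},
    "member2": {"More music"},
    "member3": {"Shorter segments", "Older presenters"},
}

print(tally_votes(suggestions, votes))
# {'Shorter segments': 2, 'More music': 2, 'Older presenters': 1}
```

The resulting counts are what made it easy, afterwards, to combine the group statements with the individual questionnaires.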
And that was all. Combining the individual questionnaires and the group statements, the program's perceived strengths and weaknesses became very clear. In one afternoon (plus several days of organizing) we had the complete results.
As so often happens, the program appealed to a different age group from the one it was designed for. It was aimed at older teenagers (around 17 to 20) but was most liked by those aged about 14.
The most common problem with programs aimed at children is that the program-makers are out of touch with the age group they're making the program for. The ABC stopped producing radio programs for children in the late 1980s, after discovering that most listeners to these programs were aged over 55! Listening to the programs, parents of young children had no problem seeing why this was so, but the program-makers - mostly young and single, or older people whose children had long since grown up - had forgotten what appeals to specific age groups. This is a situation where audience research can be very helpful, even when (as in this case) it is done in a very simple way.