
Know Your Audience: chapter 16
Content analysis

This section gives examples of content analysis, using the various types of coding described in section 5. The first two examples demonstrate content questioning, Example 3 shows how multiple coding can be done, and Example 4 covers the use of software in automatic content analysis.

Example 1: TV violence

An "interview" with a violent episode in a TV program might "ask" it questions such as: 

 ...and so on. All the answers to the questions are available from watching the program. Notice that some of the criteria are subjective (e.g. the last one). Instead of relying on a single person's opinion on such criteria, it's usual to have several "judges" and record the average rating, often on a scale out of 10.
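For the subjective criteria, the averaging step is simple arithmetic. A minimal sketch in Python (the judge names and ratings below are invented, purely for illustration):

    # Average several judges' ratings of one subjective criterion,
    # on a scale out of 10. Judge names and ratings are invented.
    def average_rating(ratings):
        return sum(ratings) / len(ratings)

    judges = {"judge_a": 7, "judge_b": 5, "judge_c": 6}
    print(average_rating(list(judges.values())))   # 6.0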

Example 2: Newspaper coverage of asylum seekers

I’m working on a project that involves media content analysis, but no transcription. Its purpose is to evaluate the success of a public relations campaign designed to improve public attitudes towards asylum seekers. The evaluation is done by "questioning" stories in the news media: mainly newspapers, radio, and TV. For newspaper articles, six sets of questions are asked of each story:

1. Media details

The name of the newspaper, the date, and the day of the week. This information can later be linked to data on circulation and readership, which is available from public sources.

2. Exact topic of the news story

Recorded in two forms: a one-line summary (averaging about 10 words) and a code, chosen from a list of about 15 main types of topic on this issue. The codes are used to count the number of stories on each main type of topic.

3. Apparent source of the story

This can be an anonymous report (apparently by a staff reporter), a named staff writer, another named source, a spokesperson, or an unknown source. If the source is known, it is entered in the database.
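Before moving to the remaining question sets, it may help to picture how one coded story could be held as a record before entry into the spreadsheet. This is only a sketch: the field names, topic code, and example story are invented, not the project's actual coding frame.

    # Illustrative record for one coded newspaper story (all details invented).
    from dataclasses import dataclass

    SOURCE_CATEGORIES = [
        "anonymous staff reporter",
        "named staff writer",
        "other named source",
        "spokesperson",
        "unknown",
    ]

    @dataclass
    class Story:
        newspaper: str          # 1. media details
        date: str
        day_of_week: str
        summary: str            # 2. one-line topic summary (about 10 words)
        topic_code: int         #    code from the list of about 15 main topics
        source: str             # 3. apparent source, one of SOURCE_CATEGORIES
        source_name: str = ""   #    recorded only when the source is known

    example = Story("The Morning Gazette", "2003-07-14", "Monday",
                    "Minister defends detention policy after weekend protest",
                    3, "spokesperson", "Departmental spokesperson")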

4. Favourability of story towards asylum seekers

To reduce subjectivity, we ask several judges (chosen to cover a wide range of ages, sexes, occupations, and levels of knowledge of the overall issue) to rate each story on this 6-point scale:

1 = Very favourable
2 = Slightly favourable
3 = Neutral
4 = Slightly unfavourable
5 = Very unfavourable
6 = Mixed: both favourable and unfavourable

When calculating averages, the "6" codes are treated as equivalent to "3". The range (the difference between the highest and the lowest judge's rating) is also recorded, so that each story with a large range can be reviewed.
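A minimal sketch of that calculation (the ratings are invented, and the threshold for a "large" range is an assumption, since no figure is given above):

    # Average the judges' 1-6 favourability ratings, treating 6 (mixed) as
    # 3 (neutral), and keep the range so disputed stories can be reviewed.
    def favourability_summary(ratings, review_threshold=2):
        recoded = [3 if r == 6 else r for r in ratings]
        mean = sum(recoded) / len(recoded)
        spread = max(recoded) - min(recoded)   # range between judges
        return mean, spread, spread >= review_threshold

    mean, spread, needs_review = favourability_summary([2, 3, 6, 5])
    print(mean, spread, needs_review)   # 3.25 3 True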

5. How noticeable the story was

This is complex, because many factors need to be taken into account. However, to keep the project manageable, we consider just three factors. For newspapers, these factors are:

  • The space given to the story (column-centimetres and headline size)
  • Its position in the issue and on the page (the top left of page 1 is ideal)
  • Whether there’s a photo (a large colour one is best).

For radio and TV, the above factors are modified to suit those media, with an emphasis on time instead of space.

Each of these three factors is given a number of points ranging from 0 (hardly noticeable at all) up to 3 (very noticeable indeed). The three scores are then added together, to produce a maximum of 9. We then add 1 more point if there’s something that makes the story more noticeable than the original score would suggest (e.g. a reference to the story elsewhere in the issue, or when this topic is part of a larger story).
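A sketch of that scoring (the 0-to-3 judgment for each factor would still come from a coder; the example values are invented):

    # Noticeability score: three factors rated 0-3, plus an optional bonus
    # point, giving a maximum of 10.
    def noticeability(space, position, photo, bonus=False):
        for factor in (space, position, photo):
            if not 0 <= factor <= 3:
                raise ValueError("each factor is scored from 0 to 3")
        return space + position + photo + (1 if bonus else 0)

    # e.g. a mid-sized story on page 3 with a small photo, cross-referenced on page 1
    print(noticeability(space=2, position=1, photo=1, bonus=True))   # 5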

6. Anything unusual about this story

The coders write comments when they notice something unusual about the story, especially when an extra point is added in the previous item. These comments can be referred to later when trying to make sense of the results of the content analysis.

All this information is recorded first on a one-page printed form, then entered into a spreadsheet, so that weekly tables and graphs can be produced. These show trends in coverage and differences between media outlets, especially the balance between the amount of coverage and its favourability.
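If the spreadsheet contents are exported for analysis, the weekly summary could be produced along these lines. This is a sketch using the pandas library; the file name and column names are assumptions, not the project's actual layout.

    # Weekly summary of coverage by outlet, from a CSV export of the coded stories.
    import pandas as pd

    stories = pd.read_csv("coded_stories.csv", parse_dates=["date"])
    weekly = stories.groupby([pd.Grouper(key="date", freq="W"), "newspaper"]).agg(
        stories=("summary", "count"),                  # amount of coverage
        mean_favourability=("favourability", "mean"),
        mean_noticeability=("noticeability", "mean"),
    )
    print(weekly)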

This example (newspaper coverage of an issue) is actually a much simpler task than the first (TV violence). If it appears more complex, it’s because I’ve covered it in detail, to show exactly how quantitative content analysis can be done. It’s simpler because we know exactly what we are looking for: to relate changes in media coverage to changes in public opinion. For TV violence, on the other hand, it’s more difficult to decide exactly what to look for, and even what "violence" is. (Angry words? Slamming a door? Casual mention of a death? And so on: many decisions to be argued about). If you’re a novice at content analysis, don’t begin with a topic as complex as violence.

Example 3: technology diffusion with multiple coding

See case no. 27

Example 4: counting words in comments

This example is about automatic content analysis, based on a survey I organized for a forum on the future of Ipswich, an Australian town. A total of 390 people living in the town were interviewed and asked their views of the town’s future. The open-ended answers were typed into a computer file, and TACT software (designed for literary content analysis, but useful in this context too) was used to identify the main themes. This was done by comparing the frequency of keywords in the comments with those words’ frequency in normal English. To avoid being overwhelmed by common stopwords such as "the" and "and", the program ignored these words.
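TACT has its own commands for this, but the underlying idea can be sketched in a few lines of Python. Everything below is illustrative: the comment file, the stopword list, the reference frequencies for "normal English", and the threshold of three times the expected frequency are all assumptions, not TACT's actual method.

    # Find keywords: words much more frequent in the comments than in ordinary English.
    from collections import Counter
    import re

    STOPWORDS = {"the", "and", "a", "of", "to", "in", "is", "it", "for"}
    ENGLISH_FREQ = {"future": 0.2, "jobs": 0.3, "river": 0.1, "town": 0.5}  # per 1,000 words, invented

    with open("ipswich_comments.txt") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    words = [w for w in words if w not in STOPWORDS]

    counts = Counter(words)
    per_thousand = {w: 1000 * n / len(words) for w, n in counts.items()}

    keywords = {w: round(f / ENGLISH_FREQ[w], 1)
                for w, f in per_thousand.items()
                if w in ENGLISH_FREQ and f > 3 * ENGLISH_FREQ[w]}
    print(sorted(keywords.items(), key=lambda kv: -kv[1]))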

By looking at these Key Words In Context (KWIC), I found a small number of comments that summarized most respondents’ views.