Before content analysis can begin, the content needs to be preserved in a form that can be analysed. For print media, the internet, and mail surveys (which are already in written form) no transcription is needed. However, radio and TV programs, as well as recorded interviews and group discussions, are often transcribed before the content analysis can begin.
Full transcription (that is, conversion into written words, normally into a computer file) is slow and expensive. Though it's sometimes necessary, full transcription is often avoidable without affecting the quality of the analysis. A substitute for transcription is what I call content interviewing (explained below).
When content analysis is focusing on visual aspects of a TV program, an alternative to transcription is to take photos of the TV screen during the program, or to take a sample of frames from a video recording. For example, if you take a frame every 15 seconds from a 25-minute TV program, you will have 100 screenshots. These could be used for a content analysis of what is visible on the screen. For a discussion program this would not be useful, because most photos would be almost identical, but for programs with strong visual aspects (e.g. most wildlife programs) a set of photos can be a good substitute for a written transcript. However, this depends on the purpose of the content analysis.
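As a quick check on that arithmetic, here is a small Python sketch (the function name is my own) that lists the capture times for one frame every 15 seconds of a 25-minute program:

```python
def sample_times(duration_minutes, interval_seconds):
    """Return the capture times (in seconds) for one frame per interval."""
    total_seconds = duration_minutes * 60
    return list(range(0, total_seconds, interval_seconds))

# One frame every 15 seconds of a 25-minute program
times = sample_times(25, 15)
print(len(times))   # 100 screenshots, as in the example above
print(times[:4])    # first capture points: [0, 15, 30, 45]
```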
It's not possible to accurately analyse live radio and TV programs, because there's no time to re-check anything. While you're taking notes, you're likely to miss something important. Therefore radio and TV programs need to be recorded before they can be content-analysed.
If you've never tried to transcribe an interview by writing out the spoken words, you probably don't think there's anything subjective about it. But as soon as you start transcribing, you realize that there are many styles, and many choices within each style. What people say is often not what they intend. They leave out words, use the wrong word, stutter, pause, and correct themselves mid-sentence. At times the voices are inaudible. Do you then guess, or leave a blank? Should you add "stage directions" - noting that the speaker shouted or whispered, or that somebody else was laughing in the background?
Ask three or four people (without giving them detailed instructions) to transcribe the same tape of speech, and you'll see surprising differences. Even when transcribing a TV or radio program, with a professional announcer reading from a script, the tone of voice can change the intended meaning.
The main principle that emerges from this is that you need to write clear instructions for transcription, and ensure that all transcribers (if there is more than one) closely follow those instructions. It's useful to have all transcribers begin by transcribing the same text for about 30 minutes. They then stop and compare the transcriptions. If there are obvious differences, they repeat the process, and again compare the transcriptions. After a few hours of this, their transcriptions are well coordinated.
It generally takes a skilled typist, using a transcription recorder with a foot-pedal control, about a day's work to transcribe an hour or two of speech. But if a lot of people are speaking at once on the tape, the transcriber is using an ordinary cassette player, and the microphone used was of low quality, the transcription can easily take 10 times as long as the original speech.
Another possibility is to use speech-recognition software, but unless the speaker is exceptionally clear (e.g. a radio announcer) a lot of manual correction is usually needed, and not much time is saved.
Transcribing speech is very slow, and therefore expensive. An alternative that we (at Audience Dialogue) often use is to make a summary, instead of a full transcription. We play back the recording and write what is being discussed during each minute or so. The summary transcript might look like this:
0'00"         Moderator introduces herself
1'25"         Each participant asked to introduce self
1'50"         James (M, age about 25)
2'32"         Mary (F, 30?, in wheelchair)
4'06"         Ayesha (F, 40ish)
4'55"         Markus (M, about 50) - wouldn't give details
5'11"   *     Grace (F, 38)
6'18"         Lee (M, 25-30, Java programmer)
7'43"         Everybody asked to add an agenda item
7'58"   **    James - reasons for choosing this ISP
This takes little more time than the original recording took: about an hour and a half for a one-hour discussion. The transcriber uses asterisks: * means "this might be relevant" and ** means "very relevant." These marked sections can be listened to again later, and transcribed fully.
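A summary transcript like the one above is easy to handle as data. The sketch below (the field names are my own invention) stores a few of the entries and pulls out the passages marked with asterisks, ready for full transcription later:

```python
# Each entry: counter position, relevance mark (number of asterisks),
# and a one-line summary - mirroring the summary transcript above.
entries = [
    {"time": "0'00\"", "stars": 0, "note": "Moderator introduces herself"},
    {"time": "5'11\"", "stars": 1, "note": "Grace (F, 38)"},
    {"time": "7'58\"", "stars": 2, "note": "James - reasons for choosing this ISP"},
]

# The passages worth a second listen, most relevant first.
to_review = sorted((e for e in entries if e["stars"] > 0),
                   key=lambda e: -e["stars"])
for e in to_review:
    print(e["time"], "*" * e["stars"], e["note"])
```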
If you are testing some particular hypothesis, much of the content will be irrelevant, so it is a waste of time to transcribe everything. Another advantage of making a summary like that above is that it clearly shows the topics that participants spent a lot of time discussing.
An important note: if you record the times as above using a tape recorder, make sure that each time you begin listening, the tape is rewound to the start and the counter is reset to zero; otherwise the counter positions won't be found again.
Content is often transformed into written form before content analysis begins. For example, if you are doing a content analysis of a photo exhibition, the analysis will probably not be of uninterpreted visual shapes and colours. Instead, it might be about the topics of the photos (from coders' descriptions, or a written catalogue), or it might be about visitors' reactions to the photos. Perhaps visitors' comments were recorded on tape, then transcribed. For most purposes, this transcription could be the corpus for the content analysis, but an analysis with an acoustic focus might also want to consider how loudly the visitors were speaking, at what pitch, and so on.
If your source is print media, and you want a text file of the content (so that you can analyse it using software), a quick solution is to scan the text with OCR software. Even a cheap scanner, with the basic OCR software supplied free with many scanners, works very well on printed text. Unless the text is small and fuzzy (e.g. on cheap newsprint) only a few corrections are usually needed per page.
If the content you are analysing is on a web page, email, or word processing document, the task is easier still. But to analyse this data with most text-analysis software, you will first need to save the content as a text file, eliminating HTML tags and other formatting that is not part of the content. First save the web page, then open it with a word processing program, and finally save it as a text file.
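If you prefer to script this step, a minimal sketch using only Python's standard-library HTMLParser can strip the tags from a saved web page (the sample page text here is invented):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text between tags, discarding the markup."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

    def text(self):
        # Join the pieces and normalize runs of whitespace.
        return " ".join(" ".join(self.chunks).split())

page = "<html><body><h1>Survey results</h1><p>Most listeners were satisfied.</p></body></html>"
extractor = TextExtractor()
extractor.feed(page)
print(extractor.text())  # Survey results Most listeners were satisfied.
```

For real pages (with scripts, styles, and navigation text) more filtering is needed, but the principle is the same: keep only the text that is part of the content.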
If your purpose in the content analysis is very clear and simple, an alternative to transcription is live coding. For this, the coders play back the tape of the radio or TV program or interview, listen for perhaps a minute at a time, then stop the tape and code the minute they just heard. This works best when several coders are working together. It is too difficult for beginners at coding, but for experienced coders it avoids the bother of transcription. Sometimes content analysis has a subtle purpose, and a transcript doesn't give the information you need. That's when live coding is most useful: for example, a study of the tone of voice that actors in a drama use when speaking to people of different ages and sexes.
Sometimes the corpus for a content analysis is produced specifically for the study - or at least, the transcription is made for that purpose. That's primary data. But in other instances, the content has already been transcribed (or even coded) for another purpose. That's secondary data. Though secondary data can save you a lot of work, it may not be entirely suitable for your purpose.
This section applies to content that was produced for some other purpose, and is now being analysed. Content created for different purposes, or different audiences, is likely to have different emphases. In different circumstances, and in different roles, people are likely to give very different responses. The expectations produced by a different role, or a different situation, are known as demand characteristics.
When you find a corpus that might be reusable, you need to ask it some questions, like:
It's often misleading to look only at the content itself: the content makes full sense only in its original context. The context is an unspoken part of the content, but is often more important than the text itself.
Content that was prepared to support a specific cause is going to be more biased than content that was prepared for general information.
It's safe to assume that all content is biased in some way. For example, content that is produced by or for a trade union is likely to be very different (in some ways) from content produced by an employer group. But in other ways the two sets of content will share many similarities - because they are likely to discuss the same kinds of issues. Content produced by an advertiser or consumer group, ostensibly on that same topic, is likely to have a very different emphasis. That emphasis is very much part of the content - even if this is not stated explicitly.
Is it still valid for your current purpose? It's tempting to use a corpus that's already prepared, but it may no longer be relevant.
Coding in content analysis is the same as coding answers in a survey: summarizing responses into groups, reducing the number of different responses to make comparisons easier. Thus you need to be able to sort concepts into groups, so that within each group the concepts are both as similar as possible to each other and as distinct as possible from the concepts in other groups.
Does that seem puzzling? Read on: the examples below will make it clearer.
Another issue is the stage at which the coding is done. In market research organizations, open-ended questions are usually coded before the data entry stage. The computer file of results has only the coded data, not the original verbatim answer. This makes life easier for the survey analysts - for example, having respondents' occupations classified in standard groups, rather than many slightly varying answers. However, it also means that some subtle data is lost, unless the analyst has some reason to read the original questionnaires. For occupation data, the difference between, say, "clerical assistant" and "office assistant" may be trivial (unless that is the subject of the survey). But for questions beginning with "why," coding usually over-simplifies the reality. In such cases it's better to copy the verbatim answers into a computer file, and group them later.
The same applies with content analysis. Coding is necessary to reduce the data to a manageable mass, but any piece of text can be coded in many different ways. It's therefore important to be able to check the coding easily, by seeing the text and codes on the same sheet of paper, or the same computer screen.
It's usual in survey analysis to give only one code to each open-ended answer. For example, if a respondent's occupation is "office assistant" and the coding frame was this ...
Professionals and managers = 1
Other white collar = 2
Skilled blue-collar = 3
Unskilled blue-collar = 4
... an office assistant would be coded as group 2. But multiple coding would also be possible. In that case, occupations would be divided into several different "questions," such as
Question 1: Skill level
Professional or skilled = 1
Unskilled = 2
Question 2: Work environment
Office / white collar = 1
Manual / blue collar = 2
An office assistant might be classified as 2 on skill level and 1 on work environment.
If you are dealing with transcripts of in-depth interviews or group discussions, the software normally used for this purpose (such as Nud*ist or Atlas) encourages multiple coding. The software used for survey analysis doesn't actually discourage multiple coding, but most people don't think of using it. My suggestion is to use multiple coding whenever possible - unless you are very, very certain about what you are trying to find in a content analysis (as when you've done the same study every month for the last year). As you'll see in the example below, multiple coding lets you view the content in more depth, and can be less work than single coding.
A coding frame is just a set of groups into which comments (or answers to a question) can be divided - e.g. the occupation categories shown above. In principle, this is easy: simply think of all possible categories for a certain topic. In practice, of course, this can be very difficult, except when the topic is limited in its scope - as with a list of occupation types. As that's not common in content analysis, the usual way of building a coding frame is to take a subset of the data, and to generate the coding frame from that.
An easy way to do this is to create a word processing file, and type in (or copy from another file) about 100 verbatim comments from the content being analysed. If you leave a blank line above and below each comment, and format the file in several columns, you can then print out the comments, cut up the printout into lots of small pieces of paper, and rearrange the pieces on a table so that the most similar ones are together. This sounds primitive, but it's much faster than trying to do the same thing using only a computer.
When similar comments have been grouped together, each group should be given a label. You can create either conceptual labels (based on a theory you are testing) or in vivo labels (based on vivid terms in respondents' own words).
A coding frame for content analysis normally has between about 10 and 100 categories. With fewer than 10 categories, you risk grouping dissimilar answers together, simply because the coding frame doesn't allow them to be separated. But with more than 100 categories, some will seem very similar, and there's a risk that two near-identical answers will be placed in different categories. If it's important to have a lot of categories, consider using hierarchical coding.
This is also known as tree coding, with major groups (branches) and sub-groups (twigs). Each major group is divided into a number of sub-groups, and each sub-group can then be divided further, if necessary. This method can produce unlimited coding possibilities, but sometimes it is not possible to create an unambiguous tree structure: for example, when the codes are very abstract.
As an example, a few years ago I worked on a study of news and current affairs items for a broadcasting network. We created a list of 122 possible topics for news items, then divided these topics into 12 main groups:
Crime and justice
Government and politics
International events and trends
Leisure activities and sport
Media and entertainment
Science and technology
Work and industry
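A tree coding frame maps naturally onto a nested structure. In this Python sketch the branch names are taken from the list above, but the sub-topics shown are hypothetical examples, not the study's actual 122 topics:

```python
# Hypothetical tree coding frame: branches hold twigs.
coding_tree = {
    "Crime and justice": ["court cases", "policing"],
    "Science and technology": ["medicine", "computing"],
}

def code_label(branch, twig):
    """Write a hierarchical code as branch > twig."""
    return f"{branch} > {twig}"

# Check a twig belongs to its branch before coding a unit with it.
assert "computing" in coding_tree["Science and technology"]
print(code_label("Science and technology", "computing"))
```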
This coding frame was used for both a survey and a content analysis. We invited the survey respondents to write in any categories that we'd forgotten to include, but our preliminary work in setting up the structure had been thorough, and only a few minor changes were needed.
Because setting up a clear tree-like structure can take a long time, don't use this method if you're in a hurry: a badly-formed tree causes problems when sub-groups are combined for the analysis. (The Content Analysis section at the end of chapter 12 has practical details of tree coding.)
You don't always need to create a coding frame from scratch. If you know that somebody has done the same type of content analysis as you are doing, there are several advantages to using an existing coding frame. It not only saves the time it takes to develop a coding frame, but also enables you to compare your own results with those of the earlier study. Even if your study has a slightly different focus, you can begin with an existing coding frame and modify it to suit your focus. If you'd like a coding frame for news and current affairs topics, feel free to use (or adapt) mine above. Government census bureaus use standard coding frames, particularly for economic data - such as ISCO, the International Standard Classification of Occupations. Other specialized coding frames can be found on the Web. For example, CAMEO and KEDS/TABARI are used for coding conflict in news bulletins.
When you are coding verbatim responses, you're always making borderline decisions. "Should this answer be category 31 or 73?" To maintain consistency, I suggest taking these steps:
If you have created a coding frame based on a small sample of the units, you will often find exceptions after coding more of the content. At that point, you may realize your coding frame wasn't detailed enough to cover all the units. So what do you do now?
Usually, you add some new codes, then go back and review all the units you've coded already, to see if they include the new codes. So it helps if you have already noted the unit numbers of any units where the first set of codes didn't exactly apply. You can then go straight back to those units (if you're storing them in numerical order) and review the codes. A good time to do this review is when you've coded about a quarter of the total units, or about 200 units - whichever is less. After 200-odd units, new codes rarely need to be added. It's usually safe to code most late exceptions as "other" - apart from any important new concepts.
This works best if all the content is mixed up before you begin the coding. For example, if you are comparing news content from two TV stations, and if your initial coding frame is based only on one channel, you may have to add a lot of new categories when you start coding items from the second channel. For that reason, the initial sample you use to create the coding frame should include units of as many different types as possible. Alternatively, you could sort all the content units into random order before coding, but that would make it much harder for the coders to see patterns in the original order of the data.
When analysing media content (even in a visual form, such as a TV program) it's possible to skip the transcription and go straight to coding. This is done by describing the visual aspects in a way that's relevant to the purpose of the analysis.
For example, if you were studying physical violence on TV drama programs, you'd focus on each violent action and record information about it. This is content interviewing: interviewing a unit of content as if it were a person, asking it "questions," and recording the "answers" you find in the content unit.
What you can't do when interviewing content, of course, is probe the response to make the coding more accurate. A real interview respondent, when asked "Why did you say that?" will answer - but media content can't.
When you're interviewing content, it's good practice to create a short questionnaire, and fill in a copy for each content unit. This helps avoid errors and inconsistencies in coding, and the time you save in the end will compensate for the extra paper used. The questionnaires can then be processed with standard survey software.
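Such a questionnaire can also be modelled directly as a record, one filled in per content unit. This sketch uses the TV-violence example above; all field names and values are illustrative, not from any real study:

```python
from dataclasses import dataclass

@dataclass
class ViolentAction:
    """One 'content interview' record: the questions asked of each unit."""
    program: str
    time_code: str        # position in the recording
    perpetrator: str      # e.g. "male adult"
    weapon: str
    injury_shown: bool

# One filled-in questionnaire for a single violent action (invented data).
unit = ViolentAction(
    program="Drama X",
    time_code="12'40\"",
    perpetrator="male adult",
    weapon="none",
    injury_shown=False,
)
print(unit.program, unit.time_code, unit.injury_shown)
```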
The main disadvantage of content interviewing is that there's usually no transcript, so you can't easily check a code by looking at the content that produced it - and such checking greatly increases the accuracy and consistency of an analysis. Without a transcript, you can check a code only by finding the recording of that unit and playing it back. In the olden days (the 20th century) content analysts had a lot of cassette tapes to manage: it was important to number them, note the counter positions, make an index of tapes, and store them in order. Now, in the 21st century, computers are faster and store a lot more data. I suggest storing sound and video recordings for content analysis on hard disk or CD-ROM, with each content unit as a separate file. You can then flick back and forth between the coding software and the playback software - a great time-saver.
When the content you are analysing has large units of text (e.g. a long interview), it can be difficult to code. A common problem is that codes overlap. The interviewee may be talking about a particular issue (given a particular code) for several minutes. In the middle of that, there may be a reference to something else, which should be given a different kind of code. The more abstract your coding categories, the more likely you are to encounter this problem. If it's important to capture and analyse these overlapping codes, there are two solutions:
Cutting up transcripts and sorting the pieces of paper, as explained above. Disadvantages: if interrupted, you easily lose track of where you're up to (and never do this in a windy place!)
My suggestion: unless you're really dedicated, avoid this type of content analysis. Such work is done mainly by academics (because they have the time) but not by commercial researchers, because the usefulness of the results seldom justifies the expense.
Coding is another form of summarizing. If you want to summarize some media content (the usual reason for doing content analysis) one option is to summarize the content at a late stage, instead of the usual method of summarizing it at an early stage.
If your content units are very small (such as individual words) there's software that can count words or phrases. In this case no coding is needed: the software does the counting for you, though you still need to summarize the results. This means a lot less work near the beginning of the project, and a little more at the end.
Unfortunately, software can't draw useful conclusions. Maybe in 10 years the software will be much cleverer, but at the moment there's no substitute for human judgement - and that takes a lot of time. Even so, if your units are not too large, and all the content is available as a computer file, you can save time by delaying the coding till a later stage than usual. The time is saved because similar content is grouped together, and a lot of units can be coded at once.
For example, if you were studying conflict, you could use software such as NVivo to find all units that mentioned "conflict" and a list of synonyms. It would then be quite fast to go through all these units and sort them into different codes.
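The keyword-first approach can be sketched in plain Python (the synonym list and sample units here are invented for illustration):

```python
# Invented sample units and a hypothetical synonym list for "conflict".
units = [
    "The two parties reached a peaceful agreement.",
    "A violent dispute broke out over land rights.",
    "The dispute escalated into open conflict.",
]

keywords = {"conflict", "dispute", "clash", "violence", "violent"}

def mentions_conflict(text):
    """True if any keyword appears as a word in the text."""
    words = {w.strip(".,").lower() for w in text.split()}
    return bool(words & keywords)

# Only the matching units need to be hand-coded.
matches = [u for u in units if mentions_conflict(u)]
print(len(matches))  # 2
```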
A common way to overcome coding problems is to appoint a small group of "judges" and average their views on subjective matters. Though it's easy to be precise about minor points (e.g. "the word violence was spoken 29 times"), the more general your analysis, the more subjective it becomes (e.g. the concept of violence as generally understood by the audience).
Use judges when there are likely to be disagreements on the coding. This will be when any of these conditions applies:
The more strongly these conditions apply, the more judges you need. Unless you are being incredibly finicky (or the project has very generous funding!) 3 judges is often enough, and 10 is about the most you will ever need. The more specific the coding instructions, the fewer the judges you will need. If you only have one person coding each question, he or she is then called a "coder" not a "judge" - though the work is the same.
Any items on which the judges disagree significantly should be discussed later by all judges and revised. Large differences usually result from misunderstanding or different interpretations.
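Disagreements can be flagged automatically with a simple percent-agreement check, the crudest of the inter-coder reliability measures (chance-corrected indices such as Krippendorff's alpha exist, but this sketch, with invented data, just shows the idea):

```python
from collections import Counter

# codes[judge][item]: three judges coding five items (invented data).
codes = [
    [1, 2, 2, 3, 1],
    [1, 2, 3, 3, 1],
    [1, 2, 3, 3, 2],
]

def agreement(item_codes):
    """Share of judges giving the most common code for one item."""
    most_common_count = Counter(item_codes).most_common(1)[0][1]
    return most_common_count / len(item_codes)

# Flag every item where the judges were not unanimous.
for i, item in enumerate(zip(*codes)):
    score = agreement(item)
    flag = "  <- discuss" if score < 1.0 else ""
    print(f"item {i}: {score:.2f}{flag}")
```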
Maybe you are wondering how many judges it takes before you're doing a survey, not content analysis. 30 judges? 100?
Actually, it doesn't work like that. Judges should be trained to be objective: they are trying to describe the content, not give their opinions. All judges should agree as closely as possible. If there's a lot of disagreement among the judges, it usually means their instructions weren't clear, and need to be rewritten.
With a survey, respondents are unconstrained in their opinions. You want to find out their real opinions, so it makes no sense to "train" respondents. That's the difference between judging content and doing a survey. However, if you're planning a content analysis that uses both large units and imprecise definitions, maybe you should consider doing a survey instead (or as well).