For print media, the two main measures of audience are readership and circulation. Circulation relates to the number of copies circulated to the public. Readership is the number of readers - either of a specific issue of the publication, or over a certain time period, such as 3 months. Circulation is measured from sales figures, but readership is measured from surveys of the population.
Readership is practically always larger than circulation, because of "pass-on readers" - in other words, each copy sold is normally read by some people as well as its buyer. The only way that readership can be smaller than circulation is if some buyers don't read the publication themselves, but immediately pass it on to others. Normally, this only happens when the "buyers" aren't paying for the publication, as with controlled-circulation publications.
Let's look at each of these concepts in more detail...
As circulation is defined as the number of copies circulated, this should be a simple, unambiguous measure of a publication's success. If only it were so easy! In fact, defining circulation can be quite a problem - because "circulated" can have several different meanings. When a publication is circulated mainly through sales, the circulation can be the number of copies sold. But even that is not simple. Some factors that complicate the counting are:
Many magazines from the USA publish a "Statement of Ownership, Management, and Circulation", often in small print, near the back of the last issue for a calendar year. This is required by the US Postal Service (USPS), in return for concessions on mailing cost. The breakdown they use shows how complex it can be to measure circulation. To make the example clearer, here are figures from a real magazine...
  A.    Total number of copies (net press run)                     611,475
  B.    Paid and/or requested circulation
  B1-2.   Mail subscriptions                                       451,595
  B3.     Sales through dealers and carriers, street vendors,
          counter sales and other non-USPS paid distributors        73,714
  C.    Total paid and/or requested circulation [sum of B1-2, B3]  525,332
  D.    Free distribution by mail                                    6,179
  E.    Free distribution outside the mail                           5,207
  F.    Total free distribution [sum of D and E]                    11,386
  G.    Total distribution [sum of C and F]                        536,718
  H.    Copies not distributed                                      74,757
  I.    Total [sum of G and H; should equal A]                     611,475
In other words: paid distribution + free distribution + non-distribution = press run. To put it another way still: each copy printed is either sold, or given away, or not distributed - and if a copy is distributed, this is done either through the mail or directly.
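That identity can be checked in a few lines. Here is a minimal Python sketch using the figures from the statement above (the variable names are mine, not official USPS terminology):

```python
# Figures from the "Statement of Ownership" example above.
statement = {
    "paid_mail_subscriptions": 451_595,  # B1-2
    "paid_dealer_sales": 73_714,         # B3
    "total_paid": 525_332,               # C, as printed in the statement
    "free_by_mail": 6_179,               # D
    "free_outside_mail": 5_207,          # E
    "not_distributed": 74_757,           # H
    "press_run": 611_475,                # A
}

total_free = statement["free_by_mail"] + statement["free_outside_mail"]  # F
total_distribution = statement["total_paid"] + total_free                # G
grand_total = total_distribution + statement["not_distributed"]          # I

# Paid + free + not distributed = press run.
assert grand_total == statement["press_run"]
```

Running the same check against any year's statement is a quick way to catch misprinted or inconsistent figures.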
The true situation is more complex still. Because some subscribers are late to renew, a copy of the magazine is sometimes sent to subscribers who have not yet resubscribed, in the hope that they will renew their subscriptions. Whether or not these copies count as paid circulation is not known until the lapsed subscribers make up their minds - so the criterion becomes how long you wait to find out. At what point does the publisher give up, and stop sending copies that may or may not be paid for in the end?
There are complex formulas (based on RFM analysis: recency, frequency, and monetary value) to work out when to stop sending copies, taking into account how long the subscriber has been subscribing, how late they have paid in past years, and so on. It's not clear whether these extra copies sent to late-paying subscribers should be counted as B (under "requested") or as D ("free distribution by mail").
What this means in practice is that the publisher may not know the number of paid subscriptions till several months later.
Because advertisers don't trust publishers to produce unbiased circulation figures, most developed countries have a system of auditing circulations. In several countries, this is done by a body called the Audit Bureau of Circulations, or ABC.
The ABC auditors work in a similar way to financial auditors. They visit the offices of publishers (and sometimes distributors, etc) to verify the records on invoices and accounts sent to distributors. Because distributors do not want to pay for more copies than they receive, the invoices are a good measure of circulation. If no copies were returned, circulation auditing would be quite simple - but what messes up the process is when unsold copies are returned to the publisher - sometimes months later. Because the freight cost of unsold copies is high, it's common to tear off the front covers, and return only those. The unsold copies - less their front covers - are then supposed to be destroyed. But sometimes that doesn't happen, and you see coverless back issues for sale in places such as market stalls. These copies are not included in the circulation figures - even though they are circulated.
Also, copies that are given away are not normally included in circulation figures - hence the term paid and/or requested circulation. What confuses the issue here is that with some publications, all of their copies are given away: for example, free suburban newspapers, promotional magazines, and controlled-circulation publications.
Controlled circulation usually applies to specialized or technical publications. A cover price might be displayed, but the publication usually is not sold in shops. It is normally posted to people on a mailing list of decision-makers, in the hope that the readers will respond to the free advertising. (A high price can be charged for the advertising, because all the recipients are known to be in the target market. This makes up for the lack of revenue from sales of copies.) Many medical and scientific magazines use controlled circulation. For example, a medical magazine might be sent to all medical doctors in a country, finding their addresses from a publicly available database of registered doctors. For controlled-circulation media, it's the number of database records that is audited, instead of sales figures.
Imagine that you are being questioned for a readership survey. The interviewer asks "Have you read or looked into a copy of Fortean News at any time in the last month?" This presents you with three main tasks:
1. Deciding what the interviewer means by "read or looked into". If you flicked through a few pages of one copy at a bookshop, does that count?
2. Identifying Fortean News from its title. Perhaps there's a similar magazine called Faustian News, and you don't remember which one you saw at the bookshop.
3. You are being asked to remember what you've done in the last month. If today is the 20th of the month, are you being asked what you've done since the 20th of last month, or since the first day of this month, or the first day of last month? Your answer could vary accordingly.
4. An extra task: deciding whether you should tell the truth. If you have read the magazine, you may not want to admit it. Perhaps the interviewer would think you are stupid for reading such a magazine, or perhaps your wife or husband (who hates Fortean News) is listening, and you don't want to have yet another argument about it. Alternatively, you might feel that this is an important magazine, and you should have read it (but you haven't, because you've been too busy), so you should answer Yes, because that's what you'd have done, if you'd had the time. Or perhaps you read it about two months ago - not much more than one month, after all - so you answer Yes.
What a lot of thinking a respondent must do, just to answer a simple question! Considering the question from the respondent's point of view (as above), you can easily see the possibilities for error in the survey results. Considering it from the researcher's point of view, those implicit questions need to be clarified, to minimize the possibilities for error.
Readership surveys often have very large samples. That's because many magazines have small circulations, so it's hard to find somebody (in a random survey) who reads one of the more obscure magazines. For example, in Australia, a monthly magazine, printed in colour, can make a profit with a circulation of about 20,000. As there are about 8 million households in Australia, a viable magazine can be bought each month by as few as one in every 400 households. As a survey requires about 30 respondents to answer Yes to a question before the figures become reasonably stable, that implies surveying 12,000 households (400 times 30) each month, to get an accurate estimate of that magazine's readership. In other rich countries, the figures would be much the same, but in developing countries, publications can often survive with smaller circulations. In that case, the sample size would have to be even larger - but poorer countries are less able to afford large-scale readership surveys.
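The arithmetic above can be sketched as a small function. This is an illustration only, assuming one "Yes" respondent per buying household and simple random sampling:

```python
import math

def required_sample(total_households, buying_households, min_yes=30):
    """Households to survey so that roughly min_yes respondents answer Yes.

    Assumes one reader per buying household and a simple random sample;
    min_yes=30 reflects the rule of thumb quoted in the text.
    """
    share = buying_households / total_households  # e.g. 1 in 400
    return math.ceil(min_yes / share)

# The Australian example from the text: circulation 20,000,
# about 8 million households.
sample = required_sample(8_000_000, 20_000)  # 12,000 households per month
```

The function makes the trade-off explicit: halve the circulation share and the required sample doubles, which is why small-circulation titles are so expensive to measure.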
The practical solution is not to release the survey results every month, but to accumulate them until there's a balance between the findings being out of date and the survey being affordable. Years ago, I worked for Morgan Gallup Polls, one of the two organizations then doing regular readership surveys in Australia. (They still do: see www.roymorgan.com.au.) This was a large-scale operation - the interviewers conducted over 1,000 interviews each week, and the readership results were published every six months, for a total sample of about 30,000. In fact, to avoid annoying variations between successive survey periods (due solely to the effects of random sampling) the published results included the previous 6 months' data. In other words, the published sample was about 60,000, including a whole year's worth of data. In 1980, this was heavy-duty number crunching. It took our IBM 370 mainframe computer all weekend to produce the results - we'd feed in the punched cards on Friday night, and the results were ready on Sunday afternoon - or later, if one hole in one card had been mispunched.
Besides the huge sample, another reason for the massive computing workload was the sheer number of publications surveyed. In each state, there were several daily newspapers, and around 10 weekly newspapers. On top of that were 40-odd national magazines, mostly monthlies. The average respondent was asked about 45 or so publications (less than the total number surveyed, because not all publications were available in each state). But even in 1980, that was only a small proportion of the newspapers and magazines published in Australia.
Most other countries also have one major national readership survey. Sometimes, as in Britain, the survey is commissioned from a market research company by an association of print publishers, which has the right to on-sell the results. In the UK, that association is JICNARS. In the USA, the survey is undertaken by a market research company, Mediamark Research Inc (MRI), with a sample of 26,000 per year. In France.....
The method just described is typical of most countries. Readership surveys need large samples and cost a lot of money, so it makes sense for a number of publishers to combine forces and share the cost of one survey. Adding extra publications to a survey adds little to the cost, while a survey done for only one publication can bias the respondents and tends to produce overestimates of readership. A syndicated survey is the type I worked on: one company funds the survey, and charges publishers a fee to subscribe.
A joint industry survey is one where all the publishers get together, design a survey, and call tenders for a market research company to actually do the work. For example, the National Readership Survey in Britain is done that way.
Most Western countries use one method or the other, to much the same effect: readership data is shared between publishers. For magazines and newspapers with large circulations, either method works well. But for titles with small circulations, readership surveys of this type are simply not economic. As mentioned above, the smaller the circulation, the larger the sample needed for a given level of accuracy, and the larger the cost; there are few economies of scale in market research. So small-circulation publications have to rely on circulation data, perhaps supplemented by occasional small studies to get some information about their readers.
There are many ways to measure readership. The best known measure is (1) average issue readership, which can be derived in several different ways. Other measures include (2) reach, coverage, or cumulative audience, (3) frequency, (4) the Starch method, (5) readers per copy, and (6) eye-tracking. Other measures related to readership include appreciation levels and actions taken as a result of reading.
The number of people who have read some or all of the average issue of a publication. This is in many ways the least misleading measure, and probably the most widely used. If you divide average issue readership by average circulation, you can calculate the average readers per copy - but the later after publication the survey is done, the higher will be the readers per copy.
There are at least four ways of measuring average issue readership, and disputes between proponents of the various ways have generated a lot of heat over the last few decades. The four main ways are (a) "Through the book", (b) "First Read Yesterday", (c) readership diaries, and (d) "recent reading".
"Through the book" (TTB) is a long-established method, which began in the USA around the 1950s: perhaps the most accurate method, but also expensive - which explains why it is now seldom used. It involves showing respondents an issue of each publication whose readership is being measured. The interviewer hands the issue to the respondent, invites them to flick through it, and asks whether or not they have seen that issue before.
The usual question asked is "Have you read or looked into this issue, before just now?" The phrase "read or looked into" (in French: "vu/lu") means that if they have flicked through its pages for a few seconds in a shop, they should answer Yes. This criterion will obviously produce much higher readership figures than if the question had been "Have you read most of this issue?"
The practical problem of TTB is the number and the weight of the publications that interviewers have to carry around. Sometimes, to reduce the weight, shortened issues are assembled, with only the first page or two of each main article, and omitting material that's much the same in each issue, such as masthead pages and repeated advertisements.
A variant of normal TTB is "covers only". To make sure that magazines (some with very similar titles) are identified correctly, respondents are shown full-size images of the covers of the magazines surveyed. Because many magazines' covers are quite similar from one issue to the next, showing black-and-white photos is usually not good enough: colour is necessary for accurate responses. A reduced-size image is OK, as long as the unique words on the cover are clearly readable.
("Unique words" - what are they? For example, the price is often shown in small print on the cover, and may not be readable on a reduced image - but that doesn't matter, if the price is the same for each issue. What helps respondents remember whether they've read an issue are the colours and pictures on the cover, and the titles of the main articles.)
This is now the most common method for measuring readership. Unlike the other three methods, it collects data not on a specific issue, but on the title in general. The form of question is "How long ago did you last read Publication X?"
Though this method is popular because it is easy to do, it has technical problems in using the answers to calculate average issue readership: when a publication is read in more than one issue period, or when several issues are read in the same period. Another problem is that people find it difficult to remember exactly when they did something, if it wasn't in the last few days. A common effect is the "telescoping of memory" in which people think something happened more recently than it really did. To some extent, however, the various problems can cancel out.
This applies only to daily newspapers. As the name of the method suggests, interviewers ask, for a group of newspapers, "Which of these did you first read yesterday?" The principle is that if a lot of "yesterdays" are averaged out, the data will produce average issue readership. This method is mostly used in Europe, and is often done by telephone. Of all four methods discussed here, it produces data the soonest. One weakness of the method (when done by phone) is that publications with similar titles tend to be confused. Well-known publications with declining circulations tend to be over-reported, because of such confusion - but the confusion applies mostly to magazines, and this method is most suitable for newspapers.
This is an expensive method, because it requires a daily survey, for daily publications. However it can also be done weekly, becoming "first read in the last 7 days", though some accuracy is lost because of poor memories; figures for weekly publications tend to be over-estimates.
With this method, homes are chosen at random throughout the study area. Interviewers go to those homes, and persuade people to fill in a readership diary, which often runs for a month. During that time, the diary-keepers are asked to write in the title of each newspaper or magazine they read, and which issue it was. Other information collected includes how much of it was read, whether it was the first time they had read this issue, and how they obtained the issue.
Readership surveys don't strictly need data on readers' opinions of the publications, but such questions are often included, because one of the biggest motivations for people to complete a readership diary is the chance to give their opinions and influence the editorial content to better meet their needs. In terms of value for money, readership diaries work very well, collecting a lot of data quite cheaply - though not quickly. As long as respondents are motivated enough to fill in their diaries, this method should produce more accurate data than the Recent Reading or First Read Yesterday methods.
For a monthly magazine you can ask two similar questions: "What's the average number of readers in a month?" and "What's the average number of readers per issue?" At that level of generality, they both have the same meaning. But when you consider specific issues and specific months, the data can be different, because not everybody reads the current issue (and only the current issue) in the current month. One advantage of using diaries is that both of these figures can be calculated from the diary data.
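To see how the two figures can diverge, here is a minimal sketch using invented diary records (the names, issues, and reading patterns are hypothetical, purely for illustration):

```python
from collections import defaultdict

# Hypothetical diary records: (person, issue read, month in which it was read).
records = [
    ("Ann", "Jan issue", "Jan"),
    ("Ann", "Feb issue", "Feb"),
    ("Bob", "Jan issue", "Feb"),  # reading last month's issue this month
    ("Cal", "Jan issue", "Feb"),
    ("Cal", "Feb issue", "Feb"),
]

readers_of_issue = defaultdict(set)   # distinct readers of each issue
readers_in_month = defaultdict(set)   # distinct readers in each month
for person, issue, month in records:
    readers_of_issue[issue].add(person)
    readers_in_month[month].add(person)

avg_readers_per_issue = sum(map(len, readers_of_issue.values())) / len(readers_of_issue)
avg_readers_per_month = sum(map(len, readers_in_month.values())) / len(readers_in_month)
# With this data the averages differ: 2.5 readers per issue, 2.0 per month,
# because Bob and Cal read January's issue in February.
```

The divergence comes entirely from issues being read outside their own month - exactly the behaviour a diary captures and a single-question survey blurs.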
Further questions asked to measure readership include
You should know that a reach figure can never be smaller than the corresponding average readership figure, and is usually a lot larger. That's because reach counts the number of different readers. A simple example: there are five people, named A, B, C, D, and E. A and B read one issue of a publication, while A and C read the next issue. The average readership is 2, but the reach is 3 (A, B, and C).
Average frequency is mathematically related to average issue readership and reach. Simply:
Average frequency = (average issue readership x number of issues) / reach
The number of issues relates to the period being studied - the number of issues over which the average readership and the reach were calculated. The average readership times the number of issues can be thought of as the total number of readings - assuming that each reader read an issue only once. Advertisers use the terms impacts and impressions in this sense.
Continuing the simple example from the previous section: there were 2 issues, with a total of 4 readings. The reach was 3 people, and the average issue readership was 2. So the average frequency is the total number of readings (4) divided by the total number of readers (3), or 1.33.
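The worked example can be expressed in a few lines of Python, using sets to count distinct readers:

```python
issue_1 = {"A", "B"}  # readers of the first issue
issue_2 = {"A", "C"}  # readers of the second issue
number_of_issues = 2

average_issue_readership = (len(issue_1) + len(issue_2)) / number_of_issues  # 2.0
reach = len(issue_1 | issue_2)                       # 3 distinct readers: A, B, C
total_readings = len(issue_1) + len(issue_2)         # 4 readings in all

average_frequency = average_issue_readership * number_of_issues / reach
# average_frequency equals total_readings / reach: 4/3, about 1.33
```

Note that the set union (`|`) is what distinguishes reach from a simple sum of issue audiences: reader A is counted once, not twice.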
Publications sold mostly by subscription have much higher average frequencies than publications sold mostly on the streets. And because of the way the mathematical relationship works, publications sold mostly on the streets (for a given average issue readership) have higher reach, to match their lower frequencies.
Frequency is not only an average, but also a spread. One question commonly asked in readership surveys is "In a typical four-month period, how many issues of this magazine do you read: none, one, two, three, or four?" Results from such questions tend to be a little on the high side.
This method is named after Daniel Starch, a US media researcher who was prominent around the 1950s. It's normally used to measure advertising audiences, but can also be applied to articles or editorial pages. For each item (whether an ad, a page, or an article) the following measures are made:
The percentage of respondents who...
Occasionally the Starch questions are extended by asking about involvement. This is measured by asking respondents how involved with the item they feel - often on a 4-point or 5-point scale ranging from "not at all involved" to "highly involved".
This is the link between circulation and readership. If you know the circulation and have estimated the readership with a survey, you can divide the total readers by the total circulation to calculate the average number of readers per copy. Because advertisers are more interested in readership than in circulation, it's worthwhile for a publication to aim at maximizing the number of readers per copy. However it's a difficult figure to change.
For many monthly magazines, the average is about 4 or 5 readers per copy. For magazines that don't go out of date quickly, and have a high pictorial and educational content, the readers per copy can be as high as 20 - e.g. the sorts of magazines found in doctors' waiting rooms, such as National Geographic. However it can take up to a year to reach that number of readers, and advertisers usually don't want to wait that long.
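The calculation itself is a single division; a minimal sketch with invented figures (the readership and circulation numbers below are hypothetical):

```python
def readers_per_copy(estimated_readership, audited_circulation):
    """Average number of readers per circulated copy."""
    return estimated_readership / audited_circulation

# Hypothetical example: a survey estimates 100,000 readers for a magazine
# with an audited circulation of 22,000 copies - about 4.5 readers per copy,
# typical of the range quoted above for monthly magazines.
rpc = readers_per_copy(100_000, 22_000)
```

The hard part in practice is not the division but the timing: the readership estimate keeps growing for months after publication, so the survey date must be stated alongside the figure.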
A completely different method of audience research is eye tracking. This sounds like something out of a science fiction movie, but has been around for years. Volunteers wear an electronically equipped pair of glasses or a mask, linked to a video camera, which records exactly what they are looking at, at each moment. A more recently developed system is much more comfortable for users, dispensing with the headgear altogether: it simply has a camera below a computer monitor (see tobii.srlabs.it). It can track a viewer's gaze to within 1 degree.
The well-known Pew Center in the USA used eye tracking for a study of newspaper reading in 2003. An interesting finding was that readers tended to skip over photographs. Details can be found on the Pew website at www.people-press.org.
If you can't afford the high-tech equipment, and are more interested in pages than places on a page, our consensus group method (see below) works very well, and produces data that's almost as specific.
It's useful for advertisers to know how many people have read or bought a publication, but these measures are often the result of many factors, and don't clearly show why people are reading or buying the publication. To answer a "why" question you need to speak to the readers - simple counts can't provide solutions. So measuring motivations usually involves a survey or a qualitative research method. It is more useful to the editors than to advertisers. Audience Dialogue has been developing a method of measuring appreciation, which has proved useful in getting readers' reactions to editorial content.
Our method involves the use of consensus groups, with at least three groups, each group involving about 12 readers of a magazine. As an example, we did a study for a magazine in Croatia in 2003. In the groups, we handed out two recent issues to each person: one issue with a higher than normal circulation, and one with an unusually low circulation. We asked participants to look through the two issues, and record three items that they particularly liked, and another three that they particularly disliked. When they had chosen the items, we quickly worked out which items had been mentioned most often, and for each of these asked participants to explain what it was that they liked and disliked about that item. We then used the standard consensus group process to discuss, clarify, and vote on the explanatory statements. The votes were used to create a page-by-page graph showing approval and disapproval rates, along with reasons for the highest and lowest figures.
Perhaps the ultimate test of a publication is not just that people read it, but that they act on it. Advertisers are interested in whether people buy the advertised products after reading about them in the publication. An environmentally oriented activist journal will want its readers to take more care of the environment. For a politically oriented publication, politicians will be very interested in how the content affects readers' voting habits. Some community newspapers have the goal of increasing democracy by encouraging increased participation in local affairs.
Questions about the outcomes of reading are not often asked, but are much more relevant for some purposes than the types of measure listed above. An outcome question is often in the form "In the last six months, have you taken any actions as a result of reading this magazine?" Respondents then need to be reminded that the actions taken can include buying an advertised product, writing a letter to the editor, contacting some other person or organization mentioned, or acting on advice given in the magazine - such as cooking a recipe given in the magazine. Sometimes actions taken include passing the bought copy on to others. They don't include reading it or throwing it out!
All of these outcomes can be assessed using standard evaluation methods (such as program logic modelling). The usual difficulty is that factors other than the publication also contribute to the outcome, and the evaluation needs to separate these factors - for example, by finding a sample of matched pairs of readers and non-readers. Though outcome assessment is an important area, we haven't found or done much research on it - but see our case study about the dreams of young readers.
The publications of the annual Readership Symposia (founded by Harry Henry, the pre-eminent researcher in this area) are the most detailed source of information on readership survey methods. See www.readershipsymposium.com. A summary of findings from these symposia is the book Effective Print Media Measurement by Michael Brown. There's also a good summary article by Katherine Page in the International Journal of Market Research for 1999.