In the last few years, audience research has expanded its horizons. In the past, audience researchers studied mostly TV, radio, print media, and arts events. But in the 1990s, a new medium arrived: the internet. Though the principles of research remain the same, the details have had to change, because the internet is different from traditional media.
The biggest difference is that the internet (thinking mainly of web sites) is complex: there are billions of pages. An article in Nature in late 1999 estimated that the worldwide web had about 3,600,000,000 pages. A study made in 2000, on a different basis, estimated the true figure to be many times higher than that. In early 2002, Google (the largest search engine) claimed to index just over 2 billion pages.
In the parts of the world most intensively served by radio and TV, people can receive perhaps 100 radio stations and 100 TV channels. In most of the world, especially the developing world, the number is far less than that (not counting shortwave radio stations that can't be clearly heard).
I live in Adelaide, Australia: a city with about 1 million people. It has about 28 local radio stations, which I can receive on a cheap AM-FM radio, with no special aerial. There are 6 free-to-air TV channels (artificially restricted by an Australian government policy), and about 40 subscription TV channels. Very few people in Adelaide know how many stations are available: I did some research, and the commonest guess at the number of radio stations was "about 10". The average respondent could name just 4 radio stations, and listened to just 2 in a week. (The figure of 2 different stations in a week is almost identical throughout Australia, almost regardless of the number of stations available.)
If people can't (or don't want to) keep track of the relatively small number of radio stations, how can they possibly do it for the Web, with billions of pages? In many ways, the Web is not like a radio or TV station, but more similar to a gigantic magazine. The big problem is how to find all those pages.
Even if somebody has found a web site, some sites are huge. I worked on one in late 1998, which had about 20,000 or about 50,000 pages, depending on who I spoke to. Nobody knew exactly. By the time anybody had counted the files on the host computers, a few hundred more pages would have been created.
We were asked to evaluate this site: in particular, the massive part of it that was devoted to online news. I decided that the best method was a three-phase one:
One astonishing finding was that usage of the site was growing at the rate of 8% a week. Using the Rule of 72 to find the doubling time (divide 72 by the percentage growth rate), we found that traffic to the site was doubling every 9 weeks, or growing by a factor of about 60 in one year. We couldn't think of any historical example in which any human activity had grown that fast. Obviously it couldn't continue: if this rate had been sustained, everybody in the world would have been using the site daily within a few years. While the growth lasted, at the time of this research, new server capacity had to be added every few weeks.
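The arithmetic above can be checked with a few lines of Python (a sketch; the 8% weekly rate is the figure from the study, and a 52-week year is assumed). The Rule of 72 gives a doubling time of 9 weeks; the exact compound calculation agrees closely, and 52 weeks of compounding multiplies traffic by roughly 55, in line with the rounded "factor of about 60" quoted above.

```python
import math

# Weekly growth rate observed in the site's traffic (8% per week).
weekly_rate = 0.08

# Rule of 72 approximation: doubling time ~ 72 / growth rate in percent.
approx_doubling_weeks = 72 / (weekly_rate * 100)

# Exact doubling time: solve (1 + r)^t = 2 for t.
exact_doubling_weeks = math.log(2) / math.log(1 + weekly_rate)

# Growth factor over one year of weekly compounding.
annual_factor = (1 + weekly_rate) ** 52

print(approx_doubling_weeks)              # 9.0
print(round(exact_doubling_weeks, 2))     # 9.01
print(round(annual_factor))               # 55
```

The Rule of 72 works because ln 2 ≈ 0.693 and, for small rates, ln(1 + r) ≈ r; 72 is used instead of 69.3 because it divides evenly by many small percentages.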
The site presented news stories in a variety of different ways, including:
We checked this hypothesis in the second stage: the group discussions. With each of these, about 8 users of the web site met to discuss their likes, dislikes, and suggestions. We had a computer in the room, online throughout the discussion, so that points and problems could be demonstrated to everybody present.
As each participant demonstrated their most and least liked features to the others, it became obvious that few of them were aware of the variety of news styles available on the site. Even though links were shown on most of the pages, the descriptions weren't clear to members of the public - even if they were obvious to journalists. Before we even reached the third stage of the study, it was obvious that minor changes in the wording of the underlined links would greatly increase the chances of most people finding the format that was most useful to them at the time.
With the web, changes can be very quick. The day after the last group discussion, the necessary changes had already been made.
The third stage - the large-scale survey - mostly just quantified what we'd found in the group discussions. As with a mail survey, it took a long time for the online responses to come in, and more time to finish our report. Perhaps that final stage wasn't necessary.
This was an example of a usability study. You'll find plenty on the Web about this: one of the best-known sites is Jakob Nielsen's at www.useit.com
However, usability is only part of the purpose. Above all, a site must be effective. In other words, it should achieve what its owners want it to achieve. To do this, it must be usable - and for a large site, the main aspect of usability is that visitors should be able to find what they are looking for.
The clearer that people (and organizations) are about what they want to do, the more straightforward it is to evaluate whether they've achieved it. For example, if a public health program sets out to eliminate malaria in some place, the evaluation is simple. But for anything involving audiences, the objectives are usually multiple, a little fuzzy, and often contradictory. A TV station might want a new program to have the largest possible audience, but also be highly regarded by critics, and also be cheap to produce.
For web sites, the goals often seem to be even fuzzier. These goals (stated and unstated) can include:
- Dennis List