Saturday, May 05, 2007

Towards a more meaningful conception of online performance

There seems to have been a fair amount of noise about online measurement recently, much of it from the UK newspaper industry, the rump of which recently committed to publishing its ABCe figures on a monthly basis. What's interesting to me (apart from The Guardian and The Telegraph both claiming victory by citing different metrics) is how rudimentary our current approach to online measurement is. Yes, we've finally moved on from hits (remember them?) but the page impression is still alive and well and the unique user remains largely unchallenged as the daddy of online metrics. Below is a look at some of the key shortcomings of today's main site metrics (I've deliberately steered clear of streams, downloads and RSS as we'd most likely be here all night) followed by a brief rumination (if indeed you can ruminate briefly) on which direction I think we should be taking things in.

Page impressions (a.k.a. page views)

The successor to the wildly inaccurate 'hit' (which counted requests for all files/page elements), the page impression pretty much does what it says on the tin, counting one page impression for every webpage successfully requested by a user's browser. In the early days of the web, one of the main challenges to the accuracy of page impressions was the use of frames, which resulted in a separate page impression for every frame loaded, thus inflating the total. Whilst the use of frames has diminished in recent years, the rise of Flash and AJAX has threatened to compromise page impressions in the other direction, as extensive navigation can often be achieved without the need to refresh the page.
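To make the hit versus page impression distinction concrete, here's a rough sketch of how you might count both from an Apache-style access log. The file extensions and log format are assumptions of my own for illustration, not any auditor's actual rules:

```python
# A minimal sketch (not any auditor's actual rules) of counting 'hits' versus
# page impressions from an Apache-style access log. The file extensions and
# log format here are assumptions for illustration only.
import os

PAGE_EXTENSIONS = {".html", ".htm", ".php", ".asp", ""}   # "" covers /directory/ style URLs

def is_page_request(request_path):
    """Treat a request as a page impression if its extension looks like a page."""
    path = request_path.split("?")[0]                      # ignore query strings
    return os.path.splitext(path)[1].lower() in PAGE_EXTENSIONS

hits = page_impressions = 0
with open("access.log") as log:
    for line in log:
        parts = line.split('"')
        if len(parts) < 2:
            continue                                       # skip malformed lines
        tokens = parts[1].split()                          # e.g. ['GET', '/news/index.html', 'HTTP/1.1']
        if len(tokens) < 2:
            continue
        hits += 1                                          # every request counts as a 'hit'
        if is_page_request(tokens[1]):
            page_impressions += 1                          # only page-like requests count as impressions

print(hits, "hits,", page_impressions, "page impressions")
```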

Caching (the temporary storage of a webpage for faster retrieval) presents another challenge to the integrity of the page impression. Whether carried out locally or at ISP level, caching invariably results in fewer page impressions being recorded, but not in a way that can be measured or accounted for.

A more fundamental challenge to the validity of the page impression is the fact that a high page impression count doesn't necessarily equate to a good user experience. A user who instantly finds what they're looking for, perhaps just one click from the homepage, will generate fewer page impressions than a frustrated user desperately clicking around a site trying (possibly in vain) to locate a particular piece of content or information.

Page impression totals are also vulnerable to deliberate inflation by developers whose performance is measured against them. If your salary is determined by advertising revenues, which in turn are dictated by page impressions, then why not spread that bit of content, which could just as easily reside on one page, over many pages, encouraging the user to click through and generate more page impressions? Or how about a spot-the-ball competition which refreshes the page with every wrong guess...?

A further challenge to the authenticity of the page impression comes from its susceptibility to robotic traffic (e.g. web spiders, monitoring software, development scripts) and automated page requests (e.g. auto-refreshes, tickers). Whilst the majority of robotic traffic will be filtered out by decent web analytics software (if not by the long arm of the auditor), some will inevitably slip through the net as the filters are increasingly unable to keep up with the pace of change in this area. How to factor new file formats (e.g. XML) in or out of the page impression equation also presents problems.
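To give a flavour of how crude that filtering typically is, here's a sketch of the user-agent matching approach many packages lean on. The bot fragments listed are examples of my own choosing, not any vendor's actual list, which is precisely why new or disguised robots slip through:

```python
# A rough sketch of user-agent filtering: one common (and imperfect) way of
# excluding robotic traffic. The substrings below are illustrative examples,
# not a complete or authoritative bot list.
KNOWN_BOT_FRAGMENTS = ("googlebot", "slurp", "msnbot", "spider", "crawler", "monitor")

def is_probably_robot(user_agent):
    ua = user_agent.lower()
    return any(fragment in ua for fragment in KNOWN_BOT_FRAGMENTS)

def filter_human_requests(requests):
    """Yield only requests whose user-agent doesn't match a known bot fragment.
    New or deliberately disguised robots will still get through."""
    for request in requests:
        if not is_probably_robot(request.get("user_agent", "")):
            yield request

# Example usage with hypothetical parsed log records:
requests = [
    {"path": "/news/", "user_agent": "Mozilla/4.0 (compatible; MSIE 6.0)"},
    {"path": "/news/", "user_agent": "Googlebot/2.1 (+http://www.google.com/bot.html)"},
]
print(len(list(filter_human_requests(requests))))  # -> 1
```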

Unique users (a.k.a. unique visitors)

One of the longest standing frustrations with the page impression, especially amongst advertisers, has been its lack of correlation with the actual number of people accessing a given site. The arrival of the unique user promised to move site owners closer to an appreciation of the actual number of people visiting their sites.

There are three main ways of deriving unique user figures (although they aren't mutually exclusive and are often used in combination). The first is IP address, the unique numerical identifier which internet connected devices are assigned in order to communicate with each other. The second is cookies, the small packets of information that sites issue to a user's web browser to identify that browser on its return. The third is registration, asking users to sign up/in to use your site, with e-mail address the usual determinant of uniqueness.

Unfortunately all of these approaches, even when used in combination, fall short of accurately reflecting actual user numbers. IP addresses are most compromised as a signifier of uniqueness by the practice of dynamic IP allocation by ISPs (the ISP's server maintains a pool of available IP addresses and assigns you a different one each time your modem/router requests one). This can lead to both over- and under-counting of users: one user can be assigned multiple IP addresses over a given period on the one hand, whilst multiple users can be assigned the same IP address on the other. The rise of home networking and public Wi-Fi has further compromised the notion that a single IP address equates to one user/computer, as multiple machines increasingly access the Internet through routers with a single IP address.

Cookies present a different range of problems, the most acute of which is 'cookie churn' - the deletion of cookies, causing new cookies to be issued to existing users. The impact of cookie churn is greater over longer reporting periods as the likelihood of users (or network administrators) having purged their browser's cookies increases (hence the BBC's decision to switch from reporting monthly to weekly unique users).
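A back-of-envelope simulation shows why the reporting period matters so much. With an invented audience of 10,000 users and an assumed churn rate of 10% a week, the weekly counts stay honest in this simplified model, while the monthly figure is inflated by every cookie that gets purged and reissued along the way:

```python
# A back-of-envelope simulation (audience size and churn rate are invented):
# the same 10,000 users counted by cookie over four weeks.
import random

random.seed(1)
TRUE_USERS = 10000
WEEKLY_CHURN = 0.10              # assume 10% of users clear their cookies each week

next_cookie_id = 0
current_cookie = {}              # user -> cookie id currently held
weekly_uniques = []
monthly_cookies = set()

for week in range(4):
    seen_this_week = set()
    for user in range(TRUE_USERS):
        if user not in current_cookie or random.random() < WEEKLY_CHURN:
            next_cookie_id += 1                  # a fresh cookie is issued
            current_cookie[user] = next_cookie_id
        seen_this_week.add(current_cookie[user])
        monthly_cookies.add(current_cookie[user])
    weekly_uniques.append(len(seen_this_week))

print("true users:            ", TRUE_USERS)
print("weekly 'unique users': ", weekly_uniques)         # 10,000 each week
print("monthly 'unique users':", len(monthly_cookies))   # roughly 13,000
```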

A more endemic challenge to the veracity of unique users, whether derived from cookies or IP addresses, is posed by shared-computer use (e.g. in schools, libraries and the home) and multiple-computer use by one individual. It is no longer unusual for one individual to have access to a machine at home, another at work and possibly a laptop or other net-enabled portable device. This issue has been exacerbated by the arrival of the full internet on mobile phones (which don't reliably accept cookies) and the increasing use of multiple browsers on one machine (I regularly use Firefox, Internet Explorer and Opera depending on the standards compliance of any given website).

The registration option, whilst going some way to addressing the multiple computers/browsers problem, is not a desirable barrier to entry for many sites and is usually undermined by the use of e-mail addresses as the signifier of uniqueness. It would now be the exception, rather than the rule, to find someone who has only ever used one email address for the duration of their life online. Forgotten passwords mean it's often easier to register with a different email address than to reset the password for the old one. Factor in those people deliberately maintaining more than one profile on any given site and unique user figures based on registration start to look less robust.
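Putting those failure modes together, here's a toy illustration of how the same handful of requests can yield three different 'unique user' counts depending on which identifier you rely on (the records are entirely invented):

```python
# A toy illustration of how the choice of identifier changes the unique-user
# count for the same traffic. The records below are invented for illustration.
requests = [
    {"ip": "82.10.0.5",   "cookie_id": "abc", "email": "jo@example.com"},
    {"ip": "82.10.0.9",   "cookie_id": "abc", "email": "jo@example.com"},  # same person, new dynamic IP
    {"ip": "82.10.0.9",   "cookie_id": "def", "email": "jo@example.com"},  # same person, cookies cleared
    {"ip": "192.168.1.1", "cookie_id": "ghi", "email": None},              # shared home router
    {"ip": "192.168.1.1", "cookie_id": "jkl", "email": None},              # second machine, same router
]

def count_unique(records, key):
    return len({r[key] for r in records if r[key] is not None})

print("by IP:    ", count_unique(requests, "ip"))         # 3
print("by cookie:", count_unique(requests, "cookie_id"))  # 4
print("by email: ", count_unique(requests, "email"))      # 1
# The 'real' number of people here is 3, and no single identifier gets it right.
```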

Visits

Visits are something of a halfway house between page impressions and unique users, combining both metrics to calculate the frequency with which users visit the site (ABCe defines a visit as "A series of one or more Page Impressions, served to one User, which ends when there is a gap of 30 minutes or more between successive Page Impressions for that User"). Aside from being derived from two rather imperfect metrics, visits tend to favour websites which lend themselves to quick, frequent checks (e.g. e-mail, news, weather, social networking) over sites which require more sustained engagement from the user (e.g. gaming, e-commerce, blogging), which is why visits are best looked at alongside our next metric: time spent.
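Before moving on, it's worth noting that the ABCe definition translates almost directly into code. Here's a sketch of that 30-minute rule applied to one user's timestamped page impressions:

```python
# A direct translation of the 30-minute rule quoted above: page impressions
# for one user are grouped into visits, with a new visit starting whenever
# the gap since the previous impression is 30 minutes or more.
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)

def count_visits(timestamps):
    """timestamps: an iterable of datetime objects for one user's page impressions."""
    times = sorted(timestamps)
    if not times:
        return 0
    visits = 1
    for previous, current in zip(times, times[1:]):
        if current - previous >= SESSION_GAP:
            visits += 1
    return visits

# Example: three impressions in quick succession, then one over an hour later -> 2 visits
impressions = [
    datetime(2007, 5, 5, 9, 0),
    datetime(2007, 5, 5, 9, 5),
    datetime(2007, 5, 5, 9, 20),
    datetime(2007, 5, 5, 10, 30),
]
print(count_visits(impressions))  # -> 2
```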

Time spent

Nielsen//NetRatings recently announced plans to abandon the lowly page view in favour of a 'time spent' metric, citing the rise of Ajax as a key factor in their decision. Whilst arguably an improvement on page views, time spent is critically flawed as a metric because it fails to take account of the level of user engagement. Having a website open doesn't equate to being engaged with it, especially in this era of tabbed browsing. It's not unusual for me to have a website open on a tab for days or weeks at a time without actually looking at it.

Where next?

So, if our current metrics are repeatedly coming up short, where should we be headed? It seems to me that engagement, coupled with the value users attribute to an online experience, is critical to the future of online measurement. I fear we have become too fixated on the figures in our server logs and have failed to exploit the potential of an inherently interactive medium in gauging user experiences. Rather than looking at graphs going up and down and trying to infer what our users are thinking, why don't we start asking them?

As part of its thinking around how to adapt to the Web 2.0 world, the BBC (disclaimer: I work for the BBC) has started asking its users (by means of a quarterly online survey) to rate different areas of its website according to how likely they would be to recommend them to a friend or colleague. The methodology is known as Net Promoter and was developed by loyalty business model expert Fred Reichheld. Combined with more specific questions about what users like and dislike about our sites, Net Promoter has proved a revelation, delivering tremendous insight into what our users most value and what they find frustrating.
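For those unfamiliar with the methodology, the standard Net Promoter calculation (as Reichheld describes it; I'm not detailing the specifics of the BBC's survey here) scores each response on a 0-10 scale and subtracts the percentage of detractors (0 to 6) from the percentage of promoters (9 or 10):

```python
# The standard Net Promoter calculation: the percentage of promoters (ratings
# of 9-10) minus the percentage of detractors (ratings of 0-6). The ratings
# below are invented and don't reflect any actual survey data.
def net_promoter_score(ratings):
    total = len(ratings)
    if total == 0:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / total

print(net_promoter_score([10, 9, 9, 8, 7, 6, 3, 10]))  # -> 25.0
```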

Of course, online surveys aren't the only way of gathering qualitative feedback from users (which is good news as they're invariably self-selecting and can be unwelcome if they appear too often). Flickr (recent recipient of the Webby Award for Best Practices) has made something of an art form of iterating product features and gauging the community's response, rolling back or tweaking the new features as necessary. By adding simple voting buttons to sites and individual page elements we can start to build up a much richer picture of what audiences are truly engaging with and valuing than a graph of unique users or page impressions can provide.
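The tally behind such buttons needn't be complicated either; something like the following sketch (all names hypothetical) would do for a first pass:

```python
# A minimal sketch of the kind of tally simple voting buttons might feed:
# each click records an 'up' or 'down' against a page element, and the
# aggregate gives a crude picture of what users value. Names are hypothetical.
from collections import defaultdict

votes = defaultdict(lambda: {"up": 0, "down": 0})

def record_vote(element_id, direction):
    """direction is 'up' or 'down', as sent by the voting button."""
    votes[element_id][direction] += 1

def approval(element_id):
    tally = votes[element_id]
    total = tally["up"] + tally["down"]
    return None if total == 0 else tally["up"] / float(total)

record_vote("homepage/most-read", "up")
record_vote("homepage/most-read", "up")
record_vote("homepage/weather-widget", "down")
print(approval("homepage/most-read"))       # -> 1.0
print(approval("homepage/weather-widget"))  # -> 0.0
```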

Whilst unique users and page impressions undoubtedly still have a place in a site's suite of online measurement tools, they need to be lower down in the mix and secondary to more sophisticated measurements of user engagement and appreciation. We must move the debate around online measurement beyond just raw numbers and start to enable our audience to tell us what they really think of our sites and services.

2 comments:

Amelia said...

Certainly agree that moving from sheer numbers to an engagement model is interesting, but I do think that you can't really have this discussion without talking about RSS and what that does to online metrics. I read most of my blogs through a reader, so I think that none of my site visits effectively "count" for anything.

Anonymous said...

ru·mi·nate (rū'mə-nāt')

v., -nat·ed, -nat·ing, -nates.

v.intr.

1. To turn a matter over and over in the mind.
2. To chew cud.

So, provided one is not a cow or similar, brief rumination is certainly possible under definition 1, but as with cud-chewing, is likely to be more productive if engaged in for longer.....

Big Pa / Pedant