Although I’ve seen several blogs link in the last few hours to comScore’s "Behaviors of the Blogosphere" study that I posted about earlier (though admittedly not the feeding frenzy I’d expected), I’ve also seen a few questions about the methodology. So I thought I’d take a bit of time to address some of those.
A convenient way to do that is to answer questions that Darren Barefoot emailed me today. I haven’t asked Darren’s permission to answer these questions in this forum, but I figure as a fellow blogger he’ll be cool with it:
* Are there more details about your methodology? I’m no statistician, but page 3 of your report doesn’t describe how data was gathered from "1.5 million US participants", nor how those people were selected. There’s an asterisk in the first paragraph of page 3 which suggests more details, but I can’t figure out what it’s referencing.
Let me start with the most important thing. In my opinion, the best answer market research can give us is to the question "Is it bigger than a breadbox?" This research study answers that question satisfactorily for the blogosphere: Yes.
There is no flawless methodology in market research. It’s an inexact science. Samples get biased, corners are cut, trade-offs are made, yadda-yadda-yadda. It’s always directional, at best. Research wonks like me obsess over the details, and if it’s details you want, it’s details you will get. This will be one of my "long posts." It’s late and I’m bored, so I’ll dwell on the details. (Man, rereading it, I went completely OCD on your ass!)
In fact, I’ll begin by sharing a new favorite quote, from the second page of How to Lie With Statistics, a 1954 classic by Darrell Huff (with wonderful illustrations by Irving Geis):
I have a great subject [statistics] to write upon, but feel keenly my literary incapacity to make it easily intelligible without sacrificing accuracy and thoroughness.
- Sir Francis Galton
You’re right, Darren: it looks like the footnote that asterisk points to is missing from that page. I’ll call it to comScore’s attention and see if we can get clarification and an updated PDF. I’ll also invite them to elaborate in the comments here. And, BTW, they do offer a Methodology page on their site, though, as Cameron Marlow complains, it could be more detailed.
I can tell you that comScore’s panel is one of the largest in the world for media research. By comparison, TV viewing habits in America are largely determined by a panel of a few thousand maintained by Nielsen.
One funny thing to me is that within the bubble I live in — Internet advertising and media research — no one argues much anymore over the methodology of comScore or its chief rival, Nielsen//NetRatings, partly because we’ve heard the explanations before, but also because they’re such household names in our sector that we don’t think to worry about them much. All the biggest web sites, online ad agencies, and advertisers are quite familiar with comScore and its numbers. But apparently in the blogosphere they’re not so familiar.
How the panel members were selected… I’d have to defer to comScore for a thorough explanation there, but I’m sure there was an element of "self-selection" along the lines of recruitment to participate in the panel through banner ads and other "customer acquisition" tactics. So one potential bias could be that they get "joiners" in their panel. They also recruited some people with free utilities, such as a virus detector. Everyone gets a clear explanation, though, that their online surfing will be monitored for aggregate research purposes, which they have to opt into.
But they address the bias in various ways. First and foremost, their panel is really, really huge by conventional research standards. Most opinion polls whose results you read in the newspaper or elsewhere are based on samples of anywhere from 1,000 (or fewer) respondents on the low end to 20,000 on the high end. comScore’s 1.5 million research subjects simply shatter most research constructs.
Cameron rashly writes, "Given that they do not justify their sample, nor provide margins of error, the initial sampling frame should be considered bunk." He couldn’t be more wrong. I was the ultimate project manager for this research. Two years ago, I made the well-considered decision to steer this research in comScore’s direction precisely because I believe they have the mother of all research panels. Theirs is really the only one I would trust to project reliably to audiences as small as blog readers.
To the extent all that wasn’t made clearer in the methodology section, chalk it up partly to comScore’s modesty and partly to time constraints in getting this out the door.
You can make statistically sound projections based on relatively small subsets of a population. But with a panel this ginormous, projections are quite sound. So that’s one thing that corrects for the sample bias: humongous sample size. The Advertising Research Foundation gave comScore its seal of approval based on that alone.
Also, they weight results from the panel against a regular (quarterly? semi-annual?) random-digit-dial (RDD) phone survey. I don’t know the size of that sample, but it’s sufficiently big to be statistically reliable, and RDD is typically regarded as one of the best random sampling methodologies for populations, because virtually everyone (in the U.S., anyway) has a phone, and numbers are generated randomly, which reaches "unlisted" households (curiously, though, it doesn’t reach cell phones, so it does tend to under-sample Gen Y).
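To make the weighting idea concrete, here’s a toy sketch of simple post-stratification: the panel’s demographic mix gets reweighted to match the RDD benchmark. All the numbers and the age groups are hypothetical, purely for illustration — comScore’s actual weighting scheme is surely more elaborate.

```python
# Toy post-stratification: reweight a panel so its age mix matches
# an RDD phone survey's population estimates. All figures are made up.
panel_share = {"18-34": 0.45, "35-54": 0.40, "55+": 0.15}  # panel skews young
rdd_share   = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}  # RDD benchmark

# Weight = benchmark share / panel share, per demographic cell.
weights = {g: rdd_share[g] / panel_share[g] for g in panel_share}

# Each panelist's behavior counts proportionally to their group's weight,
# so over-represented groups count less and under-represented ones more.
for group, w in sorted(weights.items()):
    print(group, round(w, 2))   # 18-34 ~0.67, 35-54 1.0, 55+ 2.0
```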
(See, this stuff gets really geeky. But you asked.)
Your question also asked how the data were gathered. ("Data" is the plural of "datum"; use the plural verb form, people!) Again, comScore can correct me, but they use some combination of a "proxy network" (a farm of servers set up to cache all web content panelists surf) and/or software on panelists’ machines. They have some mechanism, in any event, for seeing everywhere panelists go and everything they do (including purchases, SKUs, money spent, etc.). Then they suck all that data up into the mothership, a multi-terabyte (I imagine) datamart thing. Results are recent and highly detailed.
* Why is there no discussion of margin of error?
Uh…an oversight, I guess. The whole reason for going with comScore is that their accuracy, based on sample size, is superior in the industry. With 1.5 million panelists’ behavioral data, they can project with extreme accuracy on thousands of sites. Margin of error, within a certain "confidence level," is a measure of reliability in terms of variance, were the same survey to be administered numerous times. So, for example, a sample of 2,000 more or less randomly selected respondents will represent a given population, say 290 million U.S. residents, within a "margin of error" of 2.19%, meaning that if 20% of respondents said "I like gum," the true figure would fall between roughly 18% and 22% in 95 out of 100 similar surveys (i.e., a 95% "confidence level").
So, to have a panel of comScore’s size (1.5 million) represent a U.S. online population of 204 million, at a confidence level of 95%, your margin of error would be about 0.08% (meaning "dead on"), according to this margin of error calculator. [comScore folks or anyone else out there, please correct me if I’m misrepresenting or mistaken in anything here. I’m not an actual statistician, I just play one on the Interweb.]
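For the truly geeky, the arithmetic behind both of those figures is the standard margin-of-error formula. A quick sketch, assuming the usual worst case (p = 0.5) and a 95% confidence level (z = 1.96):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error for a simple random sample of size n,
    at the confidence level implied by z (1.96 -> 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 2,000-respondent opinion poll:
print(round(margin_of_error(2000) * 100, 2))       # ~2.19 (percent)

# comScore's 1.5 million panelists:
print(round(margin_of_error(1_500_000) * 100, 2))  # ~0.08 (percent)
```

(Strictly speaking there’s also a finite-population correction when the sample is a large fraction of the population, but with 1.5 million out of 204 million it barely moves the needle.)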
* The first graph on page 6 discusses unique visitors to particular domains. These don’t jibe with the sites’ own reports. For example, Boing Boing claims 4.6 million unique visitors (http://www.boingboing.net/stats/) in Q1 of 2005. Yet, the comScore study only reports 849,000. The same goes for Slashdot, which reportedly sees 300,000 – 500,000 visitors on a daily basis. Surely in three months they receive far more than 911,000 unique ones? Which numbers do you claim to be more accurate–comScore’s or the sites’ own?
Assumption 1: I don’t see where you get the 4.6 million unique visitors figure for BoingBoing. When I look at one of the first sections of the page you link to, I see monthly "unique visitors" (UV) figures in the 1.5 to 1.8 million range. In the months of our examination, Q1 2005, BoingBoing’s monthly UV stats range from 1.45 to 1.66 million. So, for the three months, let’s assume you’re probably talking about an unduplicated audience of 2-3 million, by their site stats.
Factor 1: How does BoingBoing’s stat package collect uniques? How does it work at all? I can’t be bothered to find out, as stat packages vary (widely) in methodology and accuracy, but one key question is whether they count "unique visitors" by IP address, by cookie, or by some other means. Probably IP address, which is the most common. At least this package distinguishes "visits" from "visitors," as many don’t, and bloggers often get confused into thinking "visits" (surfing sessions) are the same as "visitors" (unique people), when a visitor can make multiple visits during a month.
In any event, if it is using IP addresses to distinguish uniques, as I bet it is, those can be highly variable. Many ISPs assign IP addresses dynamically every time a user logs on, so if you are on dial-up, or you shut your computer off during the month, you might show up to BoingBoing as several different IP addresses across your repeated visits. Not to mention the same person surfing from work and from home being counted twice. So the likelihood is an overcount due to IP-address counting.
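A toy simulation makes the overcounting vivid. The visitor counts, visit frequencies, and the 50% fresh-IP chance below are all assumptions for illustration, not measurements of any real ISP or site:

```python
import random

random.seed(0)
TRUE_VISITORS = 1000   # actual unique people (assumed)
NEW_IP_PROB = 0.5      # assumed chance an ISP hands out a fresh IP per session

next_ip = 0
distinct_ips = set()

for person in range(TRUE_VISITORS):
    ip = None
    for _ in range(random.randint(1, 8)):   # this person's visits in a month
        if ip is None or random.random() < NEW_IP_PROB:
            next_ip += 1                    # ISP assigns a new dynamic address
            ip = next_ip
        distinct_ips.add(ip)

# An IP-based "unique visitors" stat would report len(distinct_ips),
# which comes out far higher than the true 1,000 people.
print(len(distinct_ips))
```

With an average of 4.5 visits per person and a coin-flip chance of a new IP per session, the "unique visitor" count lands in the neighborhood of 2.5x-3x the true audience.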
comScore doesn’t have this problem when it comes to unique identities, because it knows (at least to the household level) that its panelists are unique visitors, thanks to its persistent software relationship with the computer.
Factor 2: International traffic. comScore’s panel used for this study comprises only U.S. residents. That’s what most advertisers care about anyway. Also, because of its very construct, it would be nearly impossible to get 100% international panel coverage (e.g., Iraq, Nigeria, Belize, etc.).
So their numbers exclude traffic from international visitors. (The Methodology section of the report says the sample is U.S. only, but it doesn’t dwell on the point.) Many U.S. sites may get between 10% and 50% of their traffic from international visitors. That may also explain a lot of the variance.
There is more I could say here, but I think that’s sufficient, as those are probably the main factors behind the differences. That, and log-file analysis systems can simply be quite flaky. Back when I was freelance, I once had a client with two stat-tracking packages installed on her site, and there was a 10x difference between them: one said something like 10,000 visitors a month, the other 100,000. Go figure.
* The definition of ‘unique visitor’ in the study reads "The number of individual people visiting a site in a given time period." Meanwhile, the text addressing the most popular blogs says "Examples include DrudgeReport, which drew 2.3 million visitors who visited an average of 19.5 times, and Fark, which drew 1.1 million users an average of 9.0 times in Q1 2005."
What’s the ‘given time period’? Clearly you don’t mean a unique visitor in Q1, 2005, because you discuss each visitor coming to a site x times.
Yes, we do mean that for the first three months of 2005, DrudgeReport drew 2.3 million unique U.S. visitors who visited an average of 19.5 times (a total of 44.3 million visits during that period). That means its audience is both large and hugely loyal. Fark drew 1.1 million visitors who visited 10.1 million times (an average of 9.0) in the first quarter.
Beyond that, Blogdex’s Cameron Marlow, a would-be friend of mine and Ph.D. student at MIT, raises quite a fuss about the study’s methodology over at his blog, Overstated (that’s an understatement), where, I have to be honest, he gets it pretty much entirely wrong. Most of his concerns should be refuted by this post; the others I argued in his comments field.