A recent thread at Sphinn became even more interesting when Matt Cutts dropped a comment that Google do not use bounce rates as a variable in their ranking algorithm. Here’s what he said:
Without reading the article, I’ll just say that bounce rates would be not only spammable but noisy. A search industry person recently sent me some questions about how bounce rate is done at Google and I was like “Dude, I have no idea about any things like bounce rate. Why don’t you talk to this nice Google Analytics evangelist who knows about things like bounce rate?” I just don’t even run into people talking about this in my day-to-day life.
Thing is, I read the write-up at Google Webmaster Help from the October 22nd Q&A session. One of the questions covered bounce rate as a ranking factor, and it was answered by Google’s John Mueller:
chrisff, palo alto, ca: Is the bounce rate and speed taken into account when ranking a page? i.e. if you see a searcher click on a result then return very quickly and choose another result, is the first page ranked lower?
JohnMu: Hi chrisff, assuming that users will be jumping out of a site like that, there’s a high probability that they won’t be willing to recommend it to others (or come back themselves). So yes, indirectly at least, if a site is made in a way that users run away right away, then chances are that we might not be recommending it as much as other sites that users like (and recommend to others).
Most SEOs’ first response to the conflicting messages might be a feeling that there’s a tiny bit of inconsistency here, or at least that the topic needs some clarification.
Should we care? No, not really. Why would we (as SEOs) spend all of our time promoting a website that users found so useless they all bounced straight away? As Matt said, bounce rate metrics could be spammed quite easily, and an algorithm that takes browser behaviour as its key influencing variable is a frightening idea indeed.
This leads us to an interesting point on Microsoft Research’s “BrowseRank”. The BrowseRank paper points out that the “web link graph is not a reliable data source for computing page importance”, but neither is user behaviour. Perhaps there’s potential for an algorithm that takes both of those metrics (PageRank and BrowseRank) into account for a more bulletproof ranking algorithm?
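To make the idea concrete, here’s a minimal sketch of what blending a link-graph score with a browsing-behaviour score might look like. This is purely illustrative: the `alpha` weight, the score values, and the linear blend are all assumptions of mine, not anything Google or Microsoft has published.

```python
def blended_score(link_score, behaviour_score, alpha=0.7):
    """Blend a PageRank-style link score with a BrowseRank-style
    behaviour score via a weighted average.

    alpha=1.0 trusts only the link graph; alpha=0.0 trusts only
    user behaviour. Both inputs are assumed normalised to [0, 1].
    """
    return alpha * link_score + (1 - alpha) * behaviour_score


# Hypothetical pages: one with strong links but poor engagement,
# and one with few links but sticky visitors.
pages = {
    "well-linked-but-bouncy": blended_score(0.9, 0.2),
    "few-links-but-sticky": blended_score(0.3, 0.8),
}

for name, score in sorted(pages.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

The point of the sketch is that neither signal dominates: a page can’t rank purely on a gamed link profile if its behaviour score tanks it, and vice versa. A real engine would of course use something far more sophisticated than a linear blend.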
Could Google use browser-based data to influence their rankings? The main problem faced by any browser-based algorithm is the data source. Where could a search engine company get browser behaviour data? I came up with a short (and in no way exhaustive) list of traffic data sources:
Their own click-throughs on the organic SERPs (and ok, bounce rates…)
Toolbars – buy in data from multiple toolbars or create one specific to the task. Google Toolbar already has the option to send user data
Browsers – create a browser that relays browsing information back to the algorithm (with user permission of course). Like say, Google Chrome?
ISP data – Hitwise use ISP data as their reporting source, which offers a highly accurate snapshot of Internet traffic behaviour
Votes – we’ve seen this with SearchWiki, though there’s no official line from Google on how this data is currently being used
None of those are bulletproof (non-gameable) information sources, but then again the link-graph-based algorithm has its flaws too. Perhaps all of those sources, acting together, could help search engines improve their organic results and make for a better search experience.