In a blog post last week, Matt Cutts, head of Google's Webspam team, wrote about the progress the team has made in reducing the amount of spam in search engine results. In that post, he hinted at some changes in the works to push spam levels lower, including one that affects sites that copy content from other sites, as well as those that have low levels of original content.
Clearly, there's a blurry line there - or a "slippery slope," as Larry Dignan referred to it in his own post that waved some red flags over how the quality of a site would be judged.
Today, Cutts posted an update to last week's post on his own blog, announcing that one specific change to the algorithm was approved at the team's weekly meeting and launched earlier this week. In his post, Cutts explains:
This was a pretty targeted launch: slightly over 2% of queries change in some way, but less than half a percent of search results change enough that someone might really notice. The net effect is that searchers are more likely to see the sites that wrote the original content rather than a site that scraped or copied the original site’s content.
When you're a search engine that processes billions of searches, small percentages equal big numbers - so, for Google, this is still a pretty significant change.
But for site operators, the impact is still unknown. In dozens of comments on Cutts' blog, readers are expressing their gratitude for the change. But others still have questions. They want to know how Google distinguishes original content from scraped content. They're wondering what happens to the small blog that posts something first and is then followed by larger sites with more credibility. And what happens to sites that host a legitimate "mirror" of their content?
Clearly, this is just the beginning of changes meant to make search results even more relevant. What the ripple effects of those changes will be, however, remains unknown.