Although this update doesn’t yet bear a name (like the infamous Panda update), it may be just as important. Matt Cutts, the head of the Google Webspam team, explained the page layout update on Google’s Inside Search blog on January 19 as a response to “complaints from users that if they click on a result and it’s difficult to find the actual content, they aren’t happy with the experience.” True story, Matt! Nothing new here: according to Cutts, Google has always been committed to delivering high-quality, relevant results.
Likely targets of this update, in my opinion, will be “money sites.” You may have seen a few: loaded with ads and not much else, unless you count the stolen content. (It will be interesting to see this update used alongside the rel=author tag to further reduce the visibility of scraper sites.)
The page layout algorithm targets websites with pages that stack an excessive number of ad placements “above the fold,” as Danny Sullivan explains further over at Search Engine Land. This doesn’t necessarily mean that every website with ads above the fold, or above the actual content, will be hit with a penalty. In fact, the revelation that Googlebot can tell an ad apart from everything else on a page may be more important than the algorithm update itself. You can check how your pages render at common screen sizes with online tools such as Google’s Browser Size.
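To make the idea concrete, here is a minimal heuristic sketch in Python (standard library only) that counts “ad-like” blocks on a page by looking for telltale tokens in class and id attributes. This is purely illustrative: the `AD_HINTS` tokens and the block-counting logic are my own assumptions, not Google’s actual algorithm, which presumably renders the page and measures real pixel area above the fold.

```python
from html.parser import HTMLParser

# Tokens often found in ad-slot class/id names (illustrative, not exhaustive).
AD_HINTS = ("ad-", "adsense", "ad_slot", "banner", "sponsor")


class AdDensityParser(HTMLParser):
    """Crude counter of ad-like vs. other block-level elements on a page."""

    def __init__(self):
        super().__init__()
        self.ad_blocks = 0
        self.content_blocks = 0

    def handle_starttag(self, tag, attrs):
        # Only consider elements that commonly wrap ads or content.
        if tag not in ("div", "iframe", "section", "aside"):
            return
        attr_text = " ".join(value or "" for _, value in attrs).lower()
        if any(hint in attr_text for hint in AD_HINTS):
            self.ad_blocks += 1
        else:
            self.content_blocks += 1


def ad_block_ratio(html: str) -> float:
    """Return the fraction of counted blocks that look like ad slots."""
    parser = AdDensityParser()
    parser.feed(html)
    total = parser.ad_blocks + parser.content_blocks
    return parser.ad_blocks / total if total else 0.0


sample = """
<div class="ad-top">...</div>
<div class="article-body">Real content here.</div>
<div class="adsense-unit">...</div>
<div class="comments">...</div>
"""
print(round(ad_block_ratio(sample), 2))  # 2 ad-like blocks out of 4 -> 0.5
```

A real evaluation would also need rendered geometry (is the block actually above the fold, and how big is it?), which is exactly why a spider that “sees” pages like a browser matters here.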
“Smart” search engine spiders aren’t a new concept by any means, but this update may validate some theories put forward in 2011 by some very smart SEOs. Bill Slawski wrote one of the posts that influenced me most in 2011, about how search engines may visually identify parts or blocks of a web page to improve organic search results. His post breaks down a patent granted to Microsoft in 2011 that “describes a machine-based training process that might look at a number of features related to blocks on web pages.” The primary objective of this machine-learning technology appears to be separating ads from the real content: the parts of a page that actually provide value or answer a question.
More recently, Michael King dropped an SEO bombshell on the SEOMoz blog that answered some questions and gave us a new perspective on Googlebot and other crawlers. Without getting too technical: SEOs had long been led to believe that Googlebot could only recognize and evaluate text. Based on research by Joshua Giardino, King, and Slawski, as well as a slew of other indicators, it was clear that Googlebot was doing far more than a text-only crawler ever could. Instead, imagine Googlebot as a browser like Chrome or Firefox: clicking Flash buttons, seeing images, understanding pages much as we do. Whoa.
Now Google is officially and unquestionably displaying the intelligence of its spider and its commitment to improving the quality of search results. It seems odd that Google would penalize websites that use its own ad products, especially in a year when online ad spending is likely to outpace print advertising for the first time ever. Perhaps we’ll see ads entering the Google+ sphere sooner than we thought.
In addition to this update, weatherman Matt Cutts also mentioned that roughly 500 more updates are likely to hit this year. Armed with Google+ integration and a crawler that can view and evaluate pages with an almost-human sense of quality, Google may make 2012 a very difficult year for SEOs who can’t see beyond the algorithm.