I remember when Google had been out for about a year or so, back when I was promoting a single website in-house. I was really cautious about making changes to the site because we were ranking well for terms we cared about, and I was afraid that a single change could possibly take that away. I remember making lots of small changes slowly, and then waiting patiently to see if they had some kind of impact.

When your site loses rankings and traffic, that loss might be a result of changes you made to the site. It might also happen because a competitor added something new to theirs, earned more links to their pages, or redesigned their site.

We’ve seen in the past few years that Google has averaged more than 500 modifications and updates to its search algorithms each year. Those can sometimes trigger fluctuations in search results as well.

And sometimes what searchers search for changes, too. Language evolves, and people start using other terms to refer to something, or the demand for a specific product or service wanes or is redirected to something else.

Last week, while I was searching around in the US Patent and Trademark Office databases, I came across a newly granted patent from Google that described how Google may monitor and respond to changes on websites. The patent discussed changes that are against Google’s guidelines, such as keyword stuffing, hidden and tiny text, misleading redirects, and unnatural linking.

When Google notices changes on a web page, it might calculate an old rank for that page, a new or target rank, and may not re-rank the page to that new rank immediately. Instead, it might slowly move that page up in rankings over the course of weeks or possibly months. Or, Google might even initially drop the rankings of pages for query terms they might be relevant for, and then move the pages up to the target rank.

I wrote about the patent in a post I titled The Google Rank-Modifying Spammers Patent.

What’s interesting about it is that Google may also monitor that page to see if the person responsible for the page might make additional changes as a response to the page losing rankings or not rising as high in rankings as might have been expected. If other changes are made that resemble the kinds of web spam I mentioned above, Google might positively identify the page as spam and take further action against it.

Chances are that when you make changes like adding Google Authorship markup to your pages, or canonical link elements, or other site architectural changes that improve the quality of a site, you might not see this kind of activity from Google. Rewriting title elements or meta descriptions for pages to more accurately describe the content of those pages, or adding unique, original, quality content to pages, also might not trigger such a response from Google. But they could.

The warning I got from reading the patent, titled Ranking documents, is to not panic, and that if the changes you make are the kind that improve the quality of your pages and the experience a visitor has on your pages, there shouldn’t be a problem. The impact of those changes might not be immediate, and there may be a period where you temporarily lose some rankings. But focus upon making things better, and it’s unlikely that your pages are going to be flagged as spam.

Pages I enjoyed this past week:

I’ve been a fan of using absolute URLs in links on webpages for many years, ever since I learned (the hard way) that relative URLs sometimes have unintended consequences. This week, Joost de Valk explained Why relative URLs should be forbidden for web developers.

It’s good to see people publish case studies related to SEO, and one that definitely caught my eye last week was How I Recovered from the Google Panda Slap. There have been a lot of stories on the Web about people losing rankings under Google’s Panda update, but very few about recovering from it.

Chances are good that Google is still experimenting with ways to use social signals in Web and Social Search results to influence the rankings of pages. But we’ve been seeing authorship badges in search results for quite a while now, and it’s a fair presumption that those can help increase the click-throughs on search results that show them. Cyrus Shepard reports on an experiment that he performed with some success, in How Optimizing My Ugly Google+ Pic Increased Free Traffic 35%.

I saw an interesting quote in a page from Microsoft about a new whitepaper from them, and had to share it:

“For a reputed ad network, only one out of 20 people clicking our ad stayed for longer than five seconds,” Guha reports. “We suspect this is because people mis-clicked the ad due to the small mobile-screen sizes and quickly hit the back button.”

I know I make my share of fat finger errors on my phone, and sometimes click on things I didn’t intend. The page is: Fighting Back Against Click-Spam

It’s a cliche, but it’s true, “You can’t manage what you can’t measure.” I really enjoy drilling down through data in Google Analytics and Google Webmaster Tools, and coming up with actionable items that can be taken on the pages of a site. If you want to drill down even deeper, Avinash Kaushik responds to a number of reader questions on his blog in Dear Avinash: Attribution Modeling, Org Culture, Deeper Analysis +++

Regarding Google updates, Google’s Matt Cutts showed up for a Q&A session at SES San Francisco last week, and responded to a number of questions, including a few about the Penguin and Panda updates. Thom Craver provided an overview of the session in Matt Cutts Stops By SES To Talk Google & Answer Some Difficult Questions – #SESSF. There are lots of interesting tidbits in Matt’s answers, including a note that we should be on the lookout for another Penguin update sometime soon.

The Monday Morning SEO series is based loosely on the concept of the Monday Morning Quarterback, where people express their thoughts and opinions about decisions made in football games the day after, when they can benefit from hindsight as to the effectiveness of those decisions. This past week, I found that Steve Webb of Web Gnomes had started writing a recurring recap of blog posts about the Search and SEO community too, and we entered into a friendly competition with our recap posts.

His latest is Gnome Likes: Unicorn Linkbait, Update Hysteria & International SEO. Hopefully we all end up winners.

Retro Post of the Week

We have a lot of discussions about what makes a website interesting, what makes it sticky and causes people to come back over and over. For some sites, part of their approach is to allow visitors to treat their interactions with the site as if it were a game, and to have fun using it.

What makes a game engaging? What aspects of a game evoke emotional reactions? How can this translate over to websites, to social networks, to places that people return back to over and over?

According to gaming consultant Nicole Lazzaro, there are at least 4 different types of fun that might appear in a game, from easy fun, to hard fun, to serious fun, and finally to people fun. Where this gets really interesting is when it turns to social networks like Facebook and Twitter, and how they use elements of gamification as web sites and as social networks. Do you help create sites that introduce elements of engagement through game elements?

An interview with Nicole Lazzaro from a couple of years ago that focuses upon these different types of fun is: What Makes a Game Fun?

  • http://www.webgnomes.org/steve/ Steve Webb

    Well played, sir!

    I’m really glad you included Joost’s post. That was also one of my favorites from last week, and there was some “interesting” discussion about it on Twitter (mostly people inexplicably trying to defend relative URLs).

    I completely missed the Panda post so thanks for that… I’ll definitely add it to today’s reading list.

    I love the “Retro Post of the Week” idea, and I can’t promise I won’t steal it at some point ;-).

    Finally, thanks for the shout-out! This little competition was a great idea… it helps keep me motivated when I’m writing my recap :-)

    • Bill Slawski

      Thanks, Steve

      The Absolute URL vs. Relative URL debate is one that I made a decision on a long time ago when I found unnecessary and unwanted https versions of pages on a site I was working on filling up Google’s search results as duplicates. I’ve also seen that relative URLs make it easier for people scraping your content to show it off as their own. And relative URLs inadvertently can cause endless loops when done incorrectly, so that http://www.example.com/about/ becomes http://www.example.com/about/about/ and then http://www.example.com/about/about/about/ and so on, and so on.
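That endless-loop failure mode is easy to reproduce with standard URL resolution rules. Here’s a minimal sketch in Python using the standard library (the example.com URLs are the same illustrative ones as above):

```python
from urllib.parse import urljoin

# A relative href like "about/" resolves against the URL of the page
# it appears on, not against the site root. So on the /about/ page,
# a link intended to point at /about/ lands one level deeper instead,
# and each follow-up click compounds the error.
page = "http://www.example.com/about/"

link = urljoin(page, "about/")
print(link)    # http://www.example.com/about/about/

deeper = urljoin(link, "about/")
print(deeper)  # http://www.example.com/about/about/about/

# A root-relative href ("/about/") or a fully absolute URL resolves
# the same way from every page, which is why it avoids the loop.
fixed = urljoin(page, "/about/")
print(fixed)   # http://www.example.com/about/
```

The same resolution rules explain the scraping problem: when a scraper republishes your HTML on its own domain, relative hrefs silently resolve against the scraper’s URLs, while absolute hrefs keep pointing back to your site.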

      I thought the retro post of the week would be a fun idea because there are so many older posts out there that are worth revisiting even if they weren’t something published just last week. Be my guest in using the idea if you’d like. Like I could stop you if you decided to. :) I’m definitely interested in what you come up with.

      The competition is fun, and partially what inspired my choice of the retro post of the week – as webmasters we should have fun with what we write, and a friendly competition between us does add that. Thank you.

  • http://www.visiblics.com/ Rekha

Amazing post! Thanks, Bill, for sharing such an amazing post.

    • Bill Slawski

      You’re welcome, Rekha