

Dip Your SEO Toe in the Google Pool

Jer Niedt, February 22, 2013

To the technically uninitiated, it can be hard to tell whether a site needs SEO work.  What follows is a general SEO litmus test you can perform on your site, using only a browser.  No fancy tools, litmus paper, or black magic required.

The process covers three primary areas:

Proper indexing - ensures the correct pages on your site are appearing in Google’s results, and that their listings are displayed optimally.

Architecture - primarily concerned with creating a logical, linear path through your site, ensuring deep pages are crawled.

Performance testing - ensures the site’s speed is up to par, by reviewing site code and server settings.

Search engine result pages (SERPs) are the primary battleground in SEO, and proper display of your site can make or break a campaign. Below is a standard example of a SERP result.

[Diagrams: web page <head> and <body> structure; anatomy of a SERP listing]

Each web page on the internet should contain two distinct areas, the <head> and the <body>.  The head contains information that is invisible to the user unless it is requested.  Some of this information is called meta data, which can tell search engines important things about your page.  That information surfaces as the title shown in the browser's title bar, as well as in various areas of the SERPs.  Proper use of these and other on-page resources helps provide the best possible representation of your pages in search results.

 

Proper Indexation, or How I Learned to Ignore my Siamese Twin

 

Each SERP listing represents one URL on a given site.  Only one URL, and therefore one listing, should correspond to each page on your site.  One important SEO concept to understand is the difference between a page and a URL.  A URL is an address to a particular location on the web.  This “address” can sometimes be replicated in various ways, causing two URLs to exist that reach the same page.

For example, an ecommerce site may have the same product in different categories, which creates two URLs that can serve an identical product page.

 

Google indexes URLs, so even though the page is the same, each URL is seen as a duplicate listing.  Splitting indexation not only reduces the amount of “credit” (or PageRank) each listing receives, but can also cause devaluation through duplication of content, which is an issue for Google.
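
To confirm that two suspect URLs really are serving the same page, a minimal sketch in standard-library Python (the two example URLs below are hypothetical) is to fetch both and compare a hash of the HTML they return:

import hashlib
from urllib.request import urlopen

def page_fingerprint(url: str) -> str:
    # Hash of the raw HTML; identical hashes mean byte-for-byte identical pages.
    return hashlib.sha256(urlopen(url).read()).hexdigest()

# Hypothetical examples: two URLs you suspect serve the same product page.
url_a = "https://widgets.com/garden/widget-blue-pro-kit.html"
url_b = "https://widgets.com/clearance/widget-blue-pro-kit.html"

if page_fingerprint(url_a) == page_fingerprint(url_b):
    print("Identical HTML: Google will see these as duplicate URLs.")
else:
    print("HTML differs (note that even small dynamic elements will change the hash).")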

Discovery:

Copy a snippet of content from the URL you want to rank for a particular page, and search for it in Google, limiting the search to your site:

site:yoursite.com "content, enough to differentiate from other pages"

If your site has limited amounts of content, search for the title of the page instead of content:

site:yoursite.com intitle:"page title"
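
If you find yourself running these checks on many pages, a small script can assemble the query URLs for you.  This is a minimal sketch in standard-library Python; the domain and snippet text are placeholders to swap for your own:

from urllib.parse import quote_plus

def site_content_query(domain: str, snippet: str) -> str:
    # Google search restricted to the domain, quoting a content snippet.
    return "https://www.google.com/search?q=" + quote_plus(f'site:{domain} "{snippet}"')

def site_title_query(domain: str, title: str) -> str:
    # The same site: restriction combined with the intitle: operator.
    return "https://www.google.com/search?q=" + quote_plus(f'site:{domain} intitle:"{title}"')

print(site_content_query("yoursite.com", "content, enough to differentiate from other pages"))
print(site_title_query("yoursite.com", "page title"))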

Possible Results and Issues:

I didn’t find any results, what now?
Enter the URL alone with the site: operator, as shown:
site:www.yoursite.com

If no results are shown with this test, it may indicate that an actual block to indexation exists on the site.  If this does occur, test every level of your site to see if it is a widespread problem.  The root cause would need to be explored further by an SEO professional.

I found multiple copies, what now?
There are multiple ways to restrict duplicate content, but an SEO professional should help you identify the best method.  Attempting to de-index parts of your site without proper advice can lead to unintentional de-indexation of large areas, or the entirety, of the site.

I found one result, what now?
If you found one unique result, examine the listing.  For reasons we will cover in the next section, we recommend a title tag that reflects site structure and does not exceed four levels hierarchically.

For example,
Widgets.com/self-reciprocating/widget-blue-pro-kit.html
might have the title
Widget Blue Pro Kit | Self-Reciprocating Widgets | Widgets.com

This title conveys the site structure to Google, as well as to the user.
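
As a rough illustration of the idea, a structure-reflecting title can even be drafted mechanically from the URL path.  The sketch below is standard-library Python and the URL is the hypothetical example above; a real title would still be hand-tuned for keywords and readability:

from urllib.parse import urlparse

def title_from_url(url: str) -> str:
    # Build a breadcrumb-style title from the URL path, deepest page first.
    parsed = urlparse(url)
    segments = [seg for seg in parsed.path.split("/") if seg]
    if segments and segments[-1].endswith(".html"):
        segments[-1] = segments[-1].rsplit(".", 1)[0]
    pretty = [seg.replace("-", " ").title() for seg in segments]
    return " | ".join(list(reversed(pretty)) + [parsed.netloc.capitalize()])

print(title_from_url("https://widgets.com/self-reciprocating/widget-blue-pro-kit.html"))
# Widget Blue Pro Kit | Self Reciprocating | Widgets.com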

All meta descriptions should include a call to action and an alternate contact method, usually a phone number.  Additionally, if a meta description does not contain the query that brought up the result, Google may pull the description from on-page content surrounding that term.  Similarly, Google can rewrite titles that are too long or not descriptive.  An SEO professional can help minimize this by crafting a keyword strategy that targets the appropriate queries on the page.  Using the above example, a meta description might look like:

Since 2003, Widgets.com has been the premier widget provider in Eastern Canada. Explore our expansive online catalog of premium widgets, or call us today at 555-555-5555.

Ensure that page titles are no longer than 70 characters, and that descriptions do not exceed 160 characters.
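
As a quick sanity check on those limits, this minimal sketch in standard-library Python (the URL is a placeholder for the page you want to check) pulls the title and meta description from a page and flags anything that runs long:

from html.parser import HTMLParser
from urllib.request import urlopen

class HeadTagParser(HTMLParser):
    # Collects the <title> text and the meta description from a page.
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and (attrs.get("name") or "").lower() == "description":
            self.description = attrs.get("content") or ""

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

url = "https://www.yoursite.com/"  # placeholder: the page to check
parser = HeadTagParser()
parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))

print(f"Title ({len(parser.title)} chars): {parser.title.strip()}")
print(f"Description ({len(parser.description)} chars): {parser.description}")
if len(parser.title) > 70:
    print("Warning: title exceeds 70 characters and may be truncated or rewritten.")
if len(parser.description) > 160:
    print("Warning: description exceeds 160 characters and may be truncated.")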

 

Website Architecture, and other Pyramid Schemes

Assuming that every page on the site is indexed exactly once, and properly displays its meta data in the SERPs, site architecture should be examined next.  Site architecture refers to the paths through which the crawler will navigate the site in question.

Establishing linear linking paths is very important to ensure that the whole site is crawled appropriately.  In addition to ensuring indexation of the whole site, Google uses the paths it crawls, and the subsequent URLs it sees, to determine the hierarchy of site pages.  Sites often utilize redundant navigation in the form of sidebars, blogrolls, or large footer sections, all of which can create a web-like network that confuses Googlebot.

Chaotic linking design that provides too many pathways around the site, in a non-linear progression, burns through what is known as crawl budget.  Crawl budget is an estimate made by the crawler of how much time should be spent on the site.  Entities known as “crawl traps” can burn through this crawl budget by offering an infinite number of navigation choices from the page in question.  Ecommerce filtering, which changes the URL based on the order in which filters are clicked, is a prime example.  When a crawler burns through its budget, or becomes stuck, it may abandon the site without further exploration of deeper pages.

 

[Diagram: site tiering, from the home page down through category pages to product pages]

The example above shows a linear progression of tiers established by using intelligent navigation design.  From the home page, navigation directs to category pages, which in turn have unique menus that direct to the deepest pages on the site.  Site depth should ideally be kept to three steps below the domain.  Deeper pages run a risk of not being crawled regularly, or at all.  Interlinking can occur between pages within the same group, or silo, but should not cross into other silos.
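
One quick way to gauge depth across a list of URLs (pulled from a sitemap, for instance) is to count path segments.  This is a minimal sketch in standard-library Python; the example URLs are hypothetical:

from urllib.parse import urlparse

def depth(url: str) -> int:
    # Number of path segments below the domain.
    return len([seg for seg in urlparse(url).path.split("/") if seg])

urls = [  # placeholders: swap in URLs from your own sitemap or crawl
    "https://widgets.com/",
    "https://widgets.com/self-reciprocating/widget-blue-pro-kit.html",
    "https://widgets.com/catalog/widgets/blue/pro/kit.html",
]

for url in urls:
    flag = "  <-- deeper than three steps below the domain" if depth(url) > 3 else ""
    print(f"{depth(url)}  {url}{flag}")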

Discovery:
Check each tier of the site against the Google SERPs, using the site: operator (site:yoursite.com).  If a page fails to appear, check a sampling of pages from the same tier.  If they are also not indexed, it can indicate a problem with the bot diving deep enough.

Possible Results and Issues:

A particular tier cannot be found in the index, what now?
Barring indexation problems, site architecture should be examined for a few key issues.

Some types of links can prevent Google from crawling through them.  Flash and JavaScript links are notorious culprits.  To determine if a link is a Flash link, a quick test is simply right-clicking on the link.

If it brings up a menu that looks like the one below, it is a Flash link.

[Screenshot: right-click context menu shown for a Flash element]

JavaScript links are harder to spot, and require delving into the code to find them.

In most browsers, right-clicking on a link brings up a menu similar to the one below:

[Screenshot: standard browser right-click context menu, including “Inspect Element”]

 

Clicking “Inspect Element” on a particular link will bring up the code behind it.

JavaScript code appears as below:
<a href="javascript:show_me('Details.php?ID=2445485', 2445485, 800);">

If any part of the link says javascript, it may not be crawlable.
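
To scan a whole page for such links at once, rather than right-clicking each one, this minimal sketch in standard-library Python (the URL is a placeholder) lists every anchor whose href starts with javascript::

from html.parser import HTMLParser
from urllib.request import urlopen

class JsLinkFinder(HTMLParser):
    # Collects hrefs that rely on javascript: and so may not be crawlable.
    def __init__(self):
        super().__init__()
        self.js_links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if href.lower().startswith("javascript:"):
                self.js_links.append(href)

url = "https://www.yoursite.com/"  # placeholder: the page to inspect
finder = JsLinkFinder()
finder.feed(urlopen(url).read().decode("utf-8", errors="replace"))

for href in finder.js_links:
    print("Possibly uncrawlable link:", href)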

Beyond pathway issues, crawl traps may be to blame as well.  Check to see if filtered pages, contact forms, or other complex mechanisms are indexable.  They may have to be restricted, with alternate pathways put into place.  To do this, use the site: operator we have used previously to see whether a page is indexed.
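
One read-only way to see how such URLs are currently handled is to test them against your robots.txt.  This is a minimal sketch using Python's standard urllib.robotparser; the domain and candidate URLs are placeholders, and note that robots.txt governs crawling rather than indexation itself:

from urllib.robotparser import RobotFileParser

robots = RobotFileParser("https://www.yoursite.com/robots.txt")  # placeholder domain
robots.read()

# Placeholder examples of filtered or form URLs you suspect are crawl traps.
candidates = [
    "https://www.yoursite.com/catalog?color=blue&size=large",
    "https://www.yoursite.com/contact-form",
]

for url in candidates:
    allowed = robots.can_fetch("Googlebot", url)
    print(("crawlable  " if allowed else "blocked    ") + url)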

Bring a Jet to the Drag Race

Google’s ostensible goal is to model user behavior online as closely as possible.  It should come as no surprise that site speed, which can have a tremendous impact on user experience, is also factored into Google’s algorithm.

One of the quickest gauges of site performance is Google PageSpeed Insights.

[Screenshot: Google PageSpeed Insights interface]

The interface will prompt for a domain, then provide a list of optimization suggestions and a score.  Fixing any issues listed as high or medium priority will have a dramatic impact on your score.

In the sidebar there is a link for “Critical Path Explorer”.  This displays a timeline of the page load, broken down by element, which can help you visualize how long a page takes to load.  Generally speaking, a second or two is ideal for most sites; large ecommerce sites may take longer.

[Screenshot: Critical Path Explorer page-load timeline]
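
For a rough do-it-yourself timing check to complement those reports, this minimal sketch in standard-library Python (the URL is a placeholder) measures how long the raw HTML takes to download.  It ignores rendering, images, and scripts, so treat the number as a floor rather than the full picture:

import time
from urllib.request import urlopen

url = "https://www.yoursite.com/"  # placeholder: the page to time
start = time.time()
body = urlopen(url).read()
elapsed = time.time() - start

print(f"Fetched {len(body)} bytes in {elapsed:.2f} seconds")
if elapsed > 2:
    print("Slower than the one-to-two-second ideal mentioned above; worth investigating.")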

If this post helps you recognize any red flags, feel free to reach out to WebiMax.  Our team of SEO professionals has the support of a massive in-house team of inbound marketers, writers, developers, and designers.

 

 
