On behalf of WebiMax, thank you for joining the Agile SEO Meetup featuring SEO experts Bill Slawski and Chris Countey. Members can still join here.

7:08pm – Chris Countey speaking about Google Authorship

7:12pm – “Google is the referee in the world of SEO”

7:12pm – “Deliver quality content, or it will be susceptible to algorithm changes.”

7:14pm – “If your competitors are optimizing for certain keywords and you’re not, you’re left out.”

7:14pm – “If your domain isn’t even indexed anymore, you have a LONG way to go!”

7:14pm – “Banned domains, unnatural link building, Panda update, and the Penguin update are all working against you if you’re conducting black-hat SEO.”

7:15pm – “On-site SEO Playbook: Maximum Indexing/Crawling, Unique Page Titles, URLs, File Names, Code Architecture, Microdata, Authorship, Trust Signals, and Page Speed.”

7:17pm – “If you don’t put in a Meta Description, Google will decide what should go there.  Take away their ability to assume.”
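For anyone following along at home, the meta description Chris mentions is a single tag in the page’s `<head>`; a minimal sketch (the title, URL, and copy below are made-up examples, not from the talk):

```html
<head>
  <title>Handmade Leather Wallets | Example Store</title>
  <!-- Without this tag, Google picks its own snippet from the page text -->
  <meta name="description"
        content="Shop handmade leather wallets, crafted in the USA. Free shipping on orders over $50.">
</head>
```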

7:20pm – “What are your competitors doing off-site?  We MUST make sure we are building links the proper way.  Will other people want to read this?”

7:21pm – “Having your business listed in credible directories, such as Chamber of Commerce, is fine.  Directories such as 123business is wrong!”

7:25pm – “Check out Ian Laurie on Twitter, great insight and knowledge and you’ll laugh all day.”

7:26pm – “Ross Hudgens @RossHudgens is another valuable source for link building and competitive analysis.”

— Bill Slawski takes the stage —

7:28pm – “I told my parents I was here to talk about ‘crawling’ tonight — they didn’t quite get it!”

7:28pm – “There are 3 main aspects to a search engine.  It crawls pages, indexes them, and shows them to searchers.”

7:28pm – “When robots first came out, people started writing programs to interact with them and abuse them.”

7:30pm – “Martijn Koster developed the Robots.txt protocol.”
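The protocol Koster developed is just a plain-text file at the site root telling robots what not to crawl; a minimal sketch (the paths here are hypothetical):

```
# https://example.com/robots.txt
User-agent: *          # rules for all crawlers
Disallow: /private/    # ask robots to skip this directory
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Note that well-behaved (“Polite”) crawlers honor this voluntarily; it is a request, not an enforcement mechanism.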

7:31pm – “Important web pages have a high backlink count, have a high PageRank, and are in or are close to the root directory for sites.”

7:34pm – “Most crawlers will not only be Polite, but they will also hunt down important pages first.”

7:35pm – “Search Engines filed patents on how they might crawl and collect content found on Web pages.”

7:38pm – “IBM came out with a patent for link merging to distinguish between the amount of different backlinks on a webpage.”

7:39pm – “If they see content structures, they might say ‘this is different, we may count these as 2 independent link groups’.”

7:40pm – “We know Search Engines crawl, we don’t know how well they crawl.”

7:40pm – “Search Engines like it when there is only 1 URL per page, but sites can end up serving the same page at different URLs.  This makes a difference!”

7:42pm – “What happens when you have an E-commerce section with over 50 pages?  The rel="prev" and rel="next" attributes are here to help associate these pages together.”

7:44pm – “This also works well for article pages, an article that is broken up amongst multiple pages.  This tells Google these pages are together.”
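As it worked at the time, the association Bill describes is declared in each page’s `<head>`; a sketch with hypothetical URLs:

```html
<!-- In the <head> of page 2 of a paginated series -->
<link rel="prev" href="https://example.com/widgets?page=1">
<link rel="next" href="https://example.com/widgets?page=3">
```

The first page in the series carries only `rel="next"`, and the last page only `rel="prev"`.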

7:45pm – “The Rel=”hreflang” tells the search engine that these are the same pages, they’re just in different languages.  We see this on multilingual pages including FedEx, UPS, United Nations, and so on.”
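Each language version lists all of its alternates in the `<head>`; a minimal sketch (the URLs are hypothetical, not from FedEx or UPS):

```html
<link rel="alternate" hreflang="en" href="https://example.com/en/shipping">
<link rel="alternate" hreflang="fr" href="https://example.com/fr/shipping">
<link rel="alternate" hreflang="es" href="https://example.com/es/shipping">
```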

7:49pm – “XML Sitemaps are a way to create alternatives for search engines to discover web pages.  Pages can be discovered without the crawler having to physically crawl through your site’s links.”
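For reference, an XML sitemap is a simple list of URLs following the sitemaps.org schema; a minimal example (the URLs and date are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2012-09-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/about</loc>
  </url>
</urlset>
```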

7:50pm – “Use Canonical links, remove 404s, Validate with an XML Sitemap Validator.”
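The canonical link mentioned above is one more `<head>` tag, placed on every URL variant that serves the same page; a minimal sketch with a hypothetical URL:

```html
<!-- Tells search engines which URL is the preferred version of this page -->
<link rel="canonical" href="https://example.com/widgets">
```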

7:50pm – “Crawling versus XML Sitemaps.  Google said they were discovering content on webpages faster with XML Sitemaps versus crawling.”

7:51pm – “Yahoo came out with a patent not too long ago saying ‘we’re going to crawl social media’.”