The following is one section of one chapter of Danny Dover’s new book. It is available to buy on Amazon if anyone is interested!
The basics of SEO problem identification can be covered in about 15 minutes. While completing this audit, I recommend you take notes based on the action items listed in each section. These will help you later when you do a deeper dive into the website. This audit is not comprehensive (see Chapter 9 for a full annotated site audit), but it will help you quickly identify major problems so you can convince your clients that your services are worthwhile and that you should be given the chance to dig deeper. The smart ones reading this section may notice that it builds upon the ideas expressed in Chapter 2. The dumb ones will think it is Harry Potter. The latter might enjoy it more, but the former will end up with better SEO skills.
Before you start your audit, you need to set your browser to act more like the search engine crawlers. This will help you identify simple crawling errors. To do this, you will need to do the following:
Disable cookies in your browser
Switch your user-agent to Googlebot
When the search engines crawl the Internet, they generally do so with a user-agent string that identifies them (Google’s is Googlebot and Bing’s is msnbot) and in a way that does not accept cookies.
To see how to change your user-agent, go to Chapter 3 (Picking the Right SEO Tools) and see the user-agent switcher. Setting your user-agent to Googlebot increases your chance of seeing exactly what Google is seeing. It also helps with identifying cloaking issues. (Cloaking is the practice of showing one thing to search engines and a different thing to users; it is what sarcastic Googlers call penalty bait.) To do this well, a second pass of the site with your normal user-agent is required to identify differences. That said, this is not the primary goal of this quick run-through of the given website.
In addition to doing this, you should also disable cookies in your browser. By disabling them, you will be able to uncover crawling issues that relate to preferences you set on the page. One primary example of this is intro pages. Many websites make you choose your primary language before you can enter their main site. (This is known as an intro page.) If you have cookies enabled and have previously chosen your preference, the website will not show you this page again. Unfortunately, the same is not true for search engines.
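If you prefer to script this setup rather than reconfigure your browser, the two steps above can be sketched with Python's standard library: request a page while identifying as Googlebot and sending no cookies. The URL is a placeholder, and the user-agent string is Google's published one at the time of writing.

```python
import urllib.request

# Googlebot's published user-agent string (an assumption that it is
# still current -- check Google's documentation if in doubt).
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def fetch_as_crawler(url):
    """Request a page roughly the way a crawler would: identify as
    Googlebot and send no cookies (urllib keeps none by default)."""
    request = urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})
    with urllib.request.urlopen(request) as response:
        return response.geturl(), response.status, response.read()

# Example (placeholder domain):
# final_url, status, body = fetch_as_crawler("http://www.example.com/")
```

Comparing the body returned here against what your normal browser shows is the crude version of the cloaking check described above.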
This language tactic is extremely detrimental from an SEO perspective because every link to the primary URL of the website will be diluted, as it must pass through the intro page. (Remember, the search engines always see that page because they can’t select a language.) This is a big problem because, as we noted in Chapter 1, the primary URL (i.e., www.example.com/) is usually the most linked-to page on a site.
Next, go to the primary URL of the site and pay particular attention to your first impression of the page. Try to be as true to your opinion as possible and don’t overthink it. You should be coming from the perspective of the casual browser. (This will be easier because at this point you probably haven’t been paid any money, and it’s a lot easier to be casual when you are not locked down with the client.) Follow this by doing a quick check of the very basic SEO metrics. To complete this step, you will need to do the following:
Notice your first impression of the page and how trustworthy the page feels to you
Read the title tag and figure out how it could be improved
See if the URL changed (As in you were redirected from www.example.com/ to www.example.com/lame-keyword-in-URL-trick.html)
Check to see if the URL is canonical
The first action item on this list helps you align yourself with potential website users. It is the basis for your entire audit and serves as a foundation for you to build on. You can look at numbers all day, but if you fail to see the website like the user, you will fail as an SEO.
The next step is to read the title tag and identify how it can be improved. This is helpful because changing title tags is both easy (A big exception to this is if your client uses a difficult Content Management System.) and has a relatively large direct impact on rankings.
Next, direct your attention to the URL. First of all, make sure no redirects occurred. This is important because adding redirects dilutes the amount of link juice that actually makes it to the links on the page.
The last action item is to run a quick check on canonical URLs. The complete list of URL formats to check for is in Chapter 2 (Relearning How You See the Web). Like checking the title tag, this is easy to check and provides a high work/benefit ratio.
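The redirect and canonical checks above can be partly scripted. The sketch below, using only Python's standard library, requests the common variations of a primary URL and reports where each one ends up; ideally one variation answers directly and the rest redirect to it. The domain and the list of variations are placeholders; Chapter 2 has the complete list.

```python
import urllib.request

def final_destination(url):
    """Follow any redirects and return (final_url, status_code)."""
    with urllib.request.urlopen(url) as response:
        return response.geturl(), response.status

# Common variations of the primary URL to test (placeholder domain).
variations = [
    "http://www.example.com/",
    "http://example.com/",
    "http://www.example.com/index.html",
]

# for url in variations:
#     final_url, status = final_destination(url)
#     mark = "OK" if final_url == "http://www.example.com/" else "CHECK"
#     print(mark, url, "->", final_url)
```

If several variations return content directly instead of redirecting to one canonical form, note it: that is the canonicalization problem this step is hunting for.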
Usability experts generally agree that the old practice of cramming as much information as possible “above the fold” on content pages and homepages is no longer ideal. Placing a “call to action” in this area is certainly important, but it is not necessary to place all important information there. Many tests have been done on this, and the evidence overwhelmingly shows that users scroll vertically (especially when led).
After checking the basics on the homepage, you should direct your attention to the global navigation. This acts as the main canal system for link juice. Specifically, you are going to want to do the following:
Make sure the navigation system works and that all links are HTML links
Take note of all of the sections that are linked to
As we discussed in Chapter 2 (Relearning How You See the Web), site architecture is critical for search friendly websites. The global navigation is fundamental to this. Imagine that the website you are viewing is ancient Rome right after the legendary aqueduct and canal systems were built. These waterways are exactly like the global navigation that flows link juice around a website. Imagine the impact that a major clog can have on both systems. This is your time to find these clogs.
Next, view the source and check whether all of the navigational links are true HTML links. Ideally they should be, because HTML links are the only kind that can pass their full link value.
Your next step is to take note of which sections are linked to. Ideally, all of the major sections will be linked in the global navigation. The problem is, you won’t know what all of the major sections are until you are further along in the audit. For now just take note and keep a mental checklist as you browse the website.
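One quick way to run the HTML-link check above, assuming you have the page source as a string: collect every anchor and flag hrefs that are not plain links (JavaScript handlers, empty fragments), since those generally pass no link value. The markup below is an invented example.

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect all anchor hrefs and flag ones that are not true HTML links."""
    def __init__(self):
        super().__init__()
        self.links, self.flagged = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        self.links.append(href)
        # javascript: handlers and bare fragments pass no link value
        if not href or href.startswith(("javascript:", "#")):
            self.flagged.append(href)

auditor = LinkAuditor()
auditor.feed('<nav><a href="/products">Products</a>'
             '<a href="javascript:void(0)">Menu</a></nav>')
print(auditor.flagged)  # -> ['javascript:void(0)']
```

Anything that lands in `flagged` belongs in your notes as a potential clog in the canal system.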
After finishing with the homepage and the global navigation, you need to start diving deeper into the website. In the waterway analogy, category and subcategory pages are the forks in the canals. You can make sure they are optimized by doing the following:
Make sure there is enough content on these pages to be useful as a search result alone.
Find and note extraneous links on the page (there shouldn’t be more than 150 links)
Take notes on how to improve the anchor text used for the subcategories/content pages
As I mentioned, these pages are the main pathways for the link juice of a website. They help make it so that if one page (most often the homepage) gets a lot of links, the rest of the pages on the website can also get some of the benefit. The first action point requires you to make a judgment call on whether or not the page would be useful as a search result. This goes with my philosophy that every page on a website should be at least a little bit link worthy. (It should pay its own rent, so to speak.) Since each page has the inherent ability to collect links, webmasters should put at least a minimal amount of effort into making every page link worthy. There is no problem with someone entering a site (from a search engine result or other third-party site) on a category or subcategory page. In fact, it may save them a click. To complete this step, identify if this page alone would be useful for someone with a relevant query. Think to yourself: Would this page be useful to someone searching for this topic? Is it worth linking to on its own?
Take notes on the answers to both of these questions.
The next action item is to identify extraneous links on the page. Remember, in Chapter 2 we discussed that the amount of link value a given link can pass depends on the number of links on the page. To maximize the benefit of these pages, it is important to remove any extraneous links. Going back to our waterway analogy, these links are the equivalent of “canals to nowhere.” (Built by the Roman ancestors of former Alaskan Senator Ted Stevens.)
To complete the last action item of this section, you will need to take notes on how to better optimize the anchor text of the links on this page. Ideally, they should be as specific as possible. This helps the search engines and users identify what the target pages are about.
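Both the extraneous-link count and the anchor text review can be sped up with a small script. This sketch counts the anchors on a page and collects their anchor text so vague labels like “click here” stand out in your notes; the sample markup is invented, and the 150-link figure is the guideline from the action items above.

```python
from html.parser import HTMLParser

class AnchorCounter(HTMLParser):
    """Count anchors and record their visible anchor text."""
    def __init__(self):
        super().__init__()
        self.count = 0
        self.in_link = False
        self.anchor_texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.count += 1
            self.in_link = True
            self.anchor_texts.append("")

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link:
            self.anchor_texts[-1] += data

counter = AnchorCounter()
counter.feed('<a href="/a">Red widgets</a><a href="/b">click here</a>')
print(counter.count)         # -> 2
print(counter.anchor_texts)  # -> ['Red widgets', 'click here']
# If counter.count > 150, note it: the page has too many links.
```

Scan the collected anchor text for generic phrases; those are the links whose anchor text you will recommend making more specific.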
Many people don’t realize that category and subcategory pages actually stand a good chance of ranking for highly competitive phrases. When optimized correctly, these pages have links from all of their child content pages and the website’s homepage (giving them popularity) and include a lot of information about a specific topic (relevancy). Combine this with the fact that each link to one of their child content pages also helps the given page, and you have a great pyramid structure for ranking success.
Now that you have analyzed the homepage and the navigational pages, it is time to audit the meat of the website, the content pages. In order to do this, you will need to complete the following:
Check and note the format of the Title Tags
Check and note the format of the Meta Description
Check and note the format of the URL
Check to see if the content is indexable
Check and note the format of the alt text
Read the content as if you were the one searching for it
The first action item is to check the title tags of the given page. This is important because it is both helpful for rankings and makes up the anchor text used in search engine results. You don’t get link value from these links, but they do act as incentives for people to visit your site.
SEOmoz did some intensive search engine ranking factors correlation testing on the subject of title tags. The results were relatively clear. If you are trying to rank for a very competitive term, it is best to include the keyword at the beginning of the title tag. If you are competing for a less competitive term and branding can help make a difference in click through rates, it is best to put the brand name first. With regards to special characters, I prefer pipes for aesthetic value but hyphens, n-dashes, m-dashes and subtraction signs are all fine. Thus, the best practice format for title tags is one of the following:
See http://www.seomoz.org/knowledge/title-tag/ for up-to-date information
Similar to the first action item, the second has to do with an element that is directly useful for search engines rather than people (it is only indirectly useful for people once it is displayed in search results). Check the meta description by viewing the source or using the mozBar and make sure it is compelling and contains the relevant keywords at least twice. This inclusion of keywords is useful not for rankings but because matching terms get bolded in search results.
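If you are checking many pages, the meta description step above can be scripted. This sketch pulls the description out of the page source with a regular expression (a simplification that assumes conventional quoting) and counts how often the keyword appears; the sample HTML and keyword are invented.

```python
import re

def check_meta_description(html, keyword):
    """Return (description, keyword_occurrences) or (None, 0) if absent."""
    match = re.search(
        r'<meta\s+name=["\']description["\']\s+content=["\'](.*?)["\']',
        html, re.IGNORECASE | re.DOTALL)
    if not match:
        return None, 0
    description = match.group(1)
    return description, description.lower().count(keyword.lower())

html = ('<meta name="description" content="Buy red widgets online. '
        'Our red widgets ship free.">')
desc, hits = check_meta_description(html, "red widgets")
print(hits)  # -> 2
```

A count below two (or a missing description entirely) goes straight into your notes.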
The next action item is to check the URL for best practice optimization. Just like Danny DeVito, URLs should be short, relevant, and easy to remember.
The next step is to make sure the content is indexable. To ensure that it is, make sure the text is not contained in an image, Flash, or a frame. To check whether it has been indexed, copy an entire sentence from the content block and search for it within quotes in a search engine. If it shows up, it has been indexed.
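The quoted-sentence test above can be scripted partway: take a sentence from the content block and build the quoted query URL you would paste into a search engine. Checking the result still has to be done by eye (scraping result pages is against most engines' terms of service). The sentence below is an invented example.

```python
from urllib.parse import quote_plus

def quoted_search_url(sentence, engine="https://www.google.com/search?q="):
    """Build a search URL for an exact-match (quoted) query."""
    return engine + quote_plus('"%s"' % sentence)

url = quoted_search_url("Our widgets are hand-forged in small batches.")
print(url)
```

Open the printed URL in your browser; if the page appears in the results, the sentence has been indexed.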
If there are any images on the page (as there probably should be for users’ sake), you should make sure that the images have relevant alt text. After running testing on this at SEOmoz, my co-workers and I found that relevant alt text was highly correlated with high rankings.
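A quick scan for images missing alt text, assuming the page HTML is in a string, can be sketched like this (the markup is an invented example):

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Record the src of every <img> with missing or empty alt text."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_startendtag(self, tag, attrs):
        # Treat self-closing <img/> the same as <img>
        self.handle_starttag(tag, attrs)

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "?"))

checker = AltChecker()
checker.feed('<img src="widget.jpg" alt="Red widget"><img src="logo.png">')
print(checker.missing)  # -> ['logo.png']
```

Every entry in `missing` is an image whose alt text needs to be written, with the relevant keyword where it fits naturally.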
Lastly, and possibly most importantly, you should take the time to read the content on the page. Read it from the perspective of a user who just got to it from a search engine result. This is important because the content is the main reason the page exists. As an SEO, it can be easy to become content-blind when doing quick audits. Remember, the content is the primary reason this user came to the page. If it is not helpful, visitors will leave.
Now that you have an idea of how the website is organized it is time to see what the rest of the world thinks about it. To do this, you will need to do the following:
View the amount of total links and the amount of root domains linking to the given domain
View the anchor text distribution of inbound links
As you read in Chapter 1 (Understanding Search Engine Optimization), links are incredibly important in the search engine algorithms. Thus, you cannot get a complete view of a website without analyzing its links.
This first action item requires you to get two different metrics about the inbound links to the given domain. Separately, these metrics can be very misleading due to internal links. Together, they provide a fuller picture that makes accounting for internal links possible and thus more accurate. At the time of writing, the best tool to get this data is through SEOmoz’s Open Site Explorer.
The second action item requires you to analyze the relevancy side of links. This is important because it is a large part of search engine algorithms. This was discussed in Chapter 1 (Understanding Search Engine Optimization) and proves as true now as it did when you read it earlier. To get this data, I recommend using Google’s Webmaster Central.
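If your link tool of choice can export inbound links to a file, a quick tally makes the anchor text distribution easy to eyeball. The column name and sample data below are assumptions; adjust them to whatever your tool actually exports.

```python
from collections import Counter

def anchor_distribution(rows):
    """Tally anchor text frequencies.
    rows: iterable of dicts with an 'anchor_text' key (assumed layout)."""
    counts = Counter(row["anchor_text"].strip().lower() for row in rows)
    return counts.most_common()

# Invented sample rows standing in for an exported link report:
sample = [{"anchor_text": "red widgets"},
          {"anchor_text": "Red Widgets"},
          {"anchor_text": "click here"}]
print(anchor_distribution(sample))
# -> [('red widgets', 2), ('click here', 1)]
```

A distribution dominated by the brand name or by generic phrases tells you the site’s relevancy signals are weak for its target keywords.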
Now that you have gathered all the data you can about how the given website exists on the internet, it is time to see what the search engines have done with this information. Choose your favorite search engine (you might need to Google it) and do the following:
Search for the given domain to make sure it isn’t penalized
See roughly how many pages are indexed of the given website
Search three of the most competitive keywords that relate to the given domain
Choose a random content page and search the engines for duplicate content
As an SEO, all of your work is completely useless if the search engines don’t react to it. To a lesser degree, this is true for webmasters as well. The above action items will help you identify how the search engines have reacted to the given website.
The first action item is simple to do but can reveal dire problems. Simply go to a search engine and search for the exact URL of the homepage of your domain. Assuming the site is not brand new, it should appear as the first result. If it doesn’t, and it is an established site, the site has major issues and was probably thrown out of the search engine indices. If this is the case, you need to identify it clearly and as early as possible.
The second action item is also very easy to do. Go to any of the major search engines and use the site command (as defined in Chapter 3) to find roughly all of the pages of a domain that are indexed in the engine. For example, this may look like site:www.example.com. This is important because the difference between the number that gets returned and the number of pages that actually exist on the site says a lot about how healthy the domain is in a search engine. If there are more pages in the index than exist on the site, there is a duplicate content problem. If there are more pages on the actual site than there are in the search engine index, there is an indexation problem. Either is bad and should be added to your notes.
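One way to make that comparison concrete, assuming the site publishes a sitemap.xml: count the URLs the sitemap claims exist and compare against the rough total the site: command reports (which you read off by hand). The sitemap content below is an invented two-page example.

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def count_sitemap_urls(sitemap_xml):
    """Count <url> entries in a standard sitemap document."""
    root = ET.fromstring(sitemap_xml)
    return len(root.findall(SITEMAP_NS + "url"))

sitemap = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/</loc></url>
  <url><loc>http://www.example.com/about</loc></url>
</urlset>"""

pages_on_site = count_sitemap_urls(sitemap)
pages_in_index = 5   # read this off the site: search results by hand
if pages_in_index > pages_on_site:
    print("possible duplicate content problem")
elif pages_in_index < pages_on_site:
    print("possible indexation problem")
```

Both counts are approximations (sitemaps can be incomplete, and site: totals are estimates), so treat a large gap as a prompt for the deeper audit rather than proof on its own.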
The next action item is a quick exercise to see how well the given website is optimized. To get an idea of this, simply search for three of the most competitive terms that you think the given website would reasonably rank for. You can speed this process up by using one of the third-party rank trackers that are available. (Refer back to Chapter 3.)
The final action item is to do a quick search for duplicate content. This can be accomplished by going to a random indexed content page on the given website and searching for either the title tag (in quotes) or the first sentence of the content page (also in quotes). If there is more than one result from the given domain, the site has duplicate content problems. This is bad because it forces the website to compete against itself for rankings. In doing so, it forces the search engine to decide which page is more valuable. This decision making process is something that is best avoided because it is difficult to predict the outcome.
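The title tag half of this check can be scripted partway: pull the title out of the page source and build the quoted query to paste into a search engine. The sample HTML is invented, and the result page still has to be inspected by eye.

```python
import re
from urllib.parse import quote_plus

def title_dupe_query(html):
    """Extract the <title> and return a quoted search URL for it,
    or None if no title tag is found."""
    match = re.search(r"<title>(.*?)</title>", html,
                      re.IGNORECASE | re.DOTALL)
    if not match:
        return None
    title = match.group(1).strip()
    return "https://www.google.com/search?q=" + quote_plus('"%s"' % title)

html = "<html><head><title>Red Widgets | Example Co</title></head></html>"
print(title_dupe_query(html))
```

More than one result from the same domain for that query is the duplicate content signal described above.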