Google Search Console

Formerly Known as Google Webmaster Tools

My Comprehensive Guide for Beginners

The SEO Data

Organic Traffic

Google Search Console reports on what people searched to find your website and how many organic clicks you got from those sources.

Sitemap Submissions

This is where you can submit your sitemap to Google and create your very first set of backlinks. It’s a vital first step if your website is new

Data & Statistics

Use reporting data from Google Search Console to analyse link profile, crawl stats, add markup so your site is properly referenced, and much more

Google Search Console is Your Top Tool for SEO

Google's Search Console is without a doubt my favourite SEO tool to use when analysing a website's search performance. Why? Because it reflects actual stats of what is happening in Google search in relation to my website. It's an invaluable data source that is available to anyone who owns or has admin control of a website. The data is shared by Google and costs nothing to connect to.

Having lots going on in Google search doesn't guarantee that it's any good. It doesn't mean you are getting good web traffic referrals, and it doesn't mean your website is converting visits into business. The Google Search Console data is just the start of the story, but it is a significant one. What I want to share with you on this page are the many features currently in Search Console, and how to read and interpret the data those features provide.

DID YOU KNOW?

If you employ an SEO services provider with the aim of getting a quick result, you're probably better off trying something like Google Adwords instead. SEO is most definitely not a quick fix, but it is the fix that can have long-lasting benefits if done well. Both organic and paid ranking in search engines have their place, and often there will be a unique mix of the two that works best for your needs.

Verifying Your Website in Google Search Console:

Everyone who has a Gmail or Google account login also has access to all of Google's many tools and features using the same login. You can log in to Search Console here:

Go to GOOGLE SEARCH CONSOLE

Once you've logged in, you need to add your site to your account so that you can start viewing the data available for it. Use the "Add a Site" button and follow the steps to add access to your site's data.

When searches are performed in Google, the search, all the websites that are rendered in results, and any clicks those searches generate are recorded. Google will share this data with you, for up to three months of history. Keep in mind though, that the data is ‘first-in-first-out’, so data older than 3 months is automatically deleted. I suspect that Google themselves keep longer histories of this data, but are providing only the most recent three months for you to view.

To get access to the data, you need to verify that you are either the owner or administrator of the website in question. Google won't give you this data for sites other than those you can demonstrate this level of control over. That's great of course, because it stops competitors from using data about your site to their own advantage. Verifying ownership or admin rights is fairly straightforward, but if you're not familiar with web access, you might need some help getting the right codes into the right places for you to 'verify' your ownership or control.

The verification process that I prefer is the Meta Tag verification. The Meta Tag is a small snippet of code that you set into the <head> section of the website code. It has a unique string of numbers and letters that represents your website address to Google. Once you've set the code in the <head>, you simply hit 'verify' in the Search Console interface and Google will go and look at your web page to check that you've done so correctly. Positive verification indicates that you are indeed the owner or controller of the website, because no-one else can set that code in the page other than a person with those admin rights.
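
To give you an idea of what this looks like, here's the general shape of the verification snippet. The content string below is just a placeholder; Google generates a unique one for your site when you choose the Meta Tag option:

    <head>
      <meta name="google-site-verification" content="your-unique-verification-string" />
    </head>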

Other ways you can verify ownership:

  • You can add a record to your domain’s DNS hosting setup (see the example record just below this list)
  • You could upload an HTML file into your website’s root directory
  • You could connect to the site if you are already verified from your Google Analytics account
  • You can use Google’s Tag Manager
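
For the DNS method, Google has you add a TXT record against the root of your domain. The value shown here is a placeholder; Google provides the actual string when you choose that verification option:

    Type:  TXT
    Host:  @ (the root of your domain)
    Value: google-site-verification=your-unique-verification-string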

In the following sections, I will be talking about most of the menu items and explaining how you can use them to your advantage.

If you’d like to ‘fast forward’ to any particular section, you can use the following links to take you to the right spot in this page:

Starting to Use Google Search Console:

Once you've added your website, the first thing I do is also verify any alternate address, such as the version with 'www' in front of your web address and the version without. If you use both, you'll want to verify both of them. Once that's done, you can then select which version you'd like Google to display in search results as the default address. You don't need to set a new tag, file or code snippet; the 'www' and 'non-www' addresses can both be verified using the same tag, file or snippet. Allocating which one will be the default is done in the 'site settings' section under the settings icon. You will also find the 'crawl rate' control in the same section. I recommend you leave this set to "Let Google optimize for my site".

Site Dashboard

Ok, so let’s say you successfully registered your site, and a site thumbnail appears for two versions of your site (‘www’ and ‘non-www’). Click on the one you want to manage. You will enter into the website’s dashboard view, which looks a lot like this:

Submitting a Sitemap:

You should submit a sitemap to Google because it’s a fast and efficient way to tell Google how many pages you have in your website. Sitemaps provide a link to each page you want crawled, which means Google will have access to any page in your sitemap, even if you don’t have that page linked to inside your website or from elsewhere.

You should make sure, before submitting a sitemap, that the sitemap is 'clean' and free of any URLs that you don't want crawled, or that you have deliberately not linked to in your website because you want to keep them out of Google search (this happens a lot). You should not list such URLs in your sitemap, and you should also ensure that they are tagged with a robots no-index tag, or indicated in the robots file as being no-index.
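
For reference, a page-level no-index is usually set with a robots meta tag in that page's <head> section, something like this:

    <meta name="robots" content="noindex" />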

Clicking on the sitemap header in the dashboard will open up the sitemap submission section, which, if no sitemap has been submitted, will be virtually blank with the exception of the "Add/Test Sitemap" button. A dialog box appears, and you add the address of your sitemap file or page into this box:

In many website systems, the sitemap page is something like 'http://yourdomainname.co.nz/sitemap.xml'. Verify where your sitemap is and what kind of format it's in before submitting it. If unsure, try the Test Sitemap feature first. You can also test where it is by typing the full address of the sitemap page into your browser and seeing if it lands on a valid sitemap page. If not, a '404 Error' page will probably show instead. Sitemaps can also be provided as a text file, so the page may be called 'sitemap.txt'. Other sitemap systems use multiple files, so they might be accessed via 'sitemap_index.xml'.
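
If you're curious about what a basic XML sitemap file contains, here's a minimal sketch following the standard sitemap format. The domain, pages and date are placeholders only:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://yourdomainname.co.nz/</loc>
        <lastmod>2014-03-01</lastmod>
      </url>
      <url>
        <loc>http://yourdomainname.co.nz/about/</loc>
      </url>
    </urlset>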

On successful submission, the sitemap section will show how many URLs have been submitted successfully. If a media sitemap was included, then it will also show how many images in your site were submitted to Google. It will look a bit like this, but probably without any of the RED bars on the graph:

The red bars indicate URLs that are already indexed in Google. If you have no red bars, but only blue bars, don’t panic! It means that you have successfully submitted the sitemap and that Google has now been signaled about where to find your pages. Eventually, as Google adds your pages to the index, your red bars will grow and (hopefully but not necessarily) equal the blue bars. There are many different reasons why a page URL may not be added to Google’s index. If some are not added, the red bars will not match the blue bars.

My Google Search Console is not collecting data!

Well, more than likely it is, but here's the catch. If you did the above steps, you're best to go on to the next step of doing a 'Fetch' and then sit back and wait a while. In my experience, Google Search Console has a data lag of about 2-6 days. In other words, anything you add now probably won't start showing any data for at least a few days afterward. This doesn't mean you have done anything wrong.

‘Fetching’ Pages in Google Search Console:

The fetch process is where you actually ask Google to please come and take a look at your website. It’s a fast way of getting your new pages indexed in Google search. It will NOT help you improve your ranking. Go to the “Fetch as Google” menu item. Your page will look something like this:

The difference will be that your starting figures on your new Google Search Console account will be set to 500 fetches remaining, and 10 URL and linked pages submissions remaining.

Here's what you do: your first fetch should be the home page. Normally, the home page has no extra details in the web address, i.e., it's probably already represented by the web address alone (like mine is shown above). If that's the case, leave the space blank and hit fetch. If all is working, your web page will be downloaded by Google.

Once that is reported as being done, click on the "Submit to Index" button that appears next to the result. You will be given the option of submitting either just the one page, or that page plus all pages that are linked from it. If you've done this for the first time, and you just fetched the home page, go ahead and click option two: "URL and all linked pages". That means Google will add your home page to the index, plus all other pages that are linked directly from that page. Usually, your entire website might be linked to via the site navigation bar and any other menu that appears on your home page. Your entire website will then be crawled and submitted for indexing. Note: there is no guarantee that Google will index your pages. It will only examine and assess them, and if suitable, will index them too.

Here’s an example of my page after submitting first my Home page, with all linked pages, then one of my sub-pages, then my home page again, but this time without the linked pages, and finally, my home page again, but not yet submitted. I’ve done these solely to show you what these results look like. If you submitted the home page and all linked pages, there’s no need for you to Fetch any other pages.

Now wait for GSC to start collecting data.

If your website is new, you won't yet have any data in Google Search Console for your site, but once you have a few weeks' worth of data, we can start looking at some really interesting stuff about how your website is doing.

Site Messages

OK, let’s look at the next item down on the Google Search Console menu.

The site messages tab allows you to see any communications sent directly by Google regarding the management of your site. The types of things that might appear here are:

  • Messages about site settings changes
  • Security messages
  • Manual actions messages
  • Significant crawl error messages

Some of these are due to changes you may have made and should come as no surprise, while others are about critical issues in your website that are affecting Google's ability to crawl, rank or index the site. You should pay careful attention to the messages, and perhaps keep them (don't delete them), especially if you have several people administering Google Search Console and the website. I won't explain each of the possible messages in this section; instead, I may raise them in their appropriate sections in the menu, as generally they relate to those in some way.

Search Appearance

This page shows several possible additional 'snippets' of data that could appear alongside your regular search result entry. Which ones appear will depend entirely on the kinds of data you have on your page. By default, the three items that always appear in search results pages are the page title meta (in blue), the URL (in green) and the description snippet (in black). You could also get additional links to other pages in your site (called 'sitelinks') together with their description snippets, you could have the site's internal search box appear, or you might have Rich Snippets relating to events, breadcrumb URLs, product details and author information.

Each of these data extensions will help the Google user to determine if your site is what they were after.

Structured Data:

This section shows a dashboard of what structured data may be appearing in your web pages. Sometimes, web developers code elements in a way that signals to Google that there is a 'pattern' of data on your page. The data pattern can be things like 'author' (who wrote the content on the page), 'date' (the date at which the content was written), 'title' (the title of the page), and so on. Usually, these have to do with kinds of data that are set in a particular format on each page, like in a blog post. Blog posts will often have the author name, the date and the title of the post consistently placed on each post. These elements can be used to signal to Google that each page's data can be extracted and indexed, so that searches for posts of a certain age, or posts by a certain author, can be indexed more effectively.

If your site is a blog, events calendar, ticket vendor or eCommerce product catalogue (or any similar structured content) with pages that follow a prescribed template showing important data about the page content in a regular pattern, then you should be paying close attention to this section.

Here's an example of the structured data dashboard on a website with 29 posts or pages of a recurring format. The dashboard indicates that Google expected to find data in the structure, but didn't find it. Don't panic! This will not negatively impact your website's rank in general (there is no 'penalty' for it), but it may mean that some filtered search types fail to render those particular posts or pages in search results, so you could miss valuable opportunities. Most of the time this is not the end of the world. If, however, your site is mostly posts (a blog) or product pages (an eCommerce or catalogue site) and it's important for your post or product pages to render well in search results, then you should attempt to fix errors with structured data.

Data Highlighter:

Using the data highlighter allows you to start correcting some of the data problems that were detected in the structured data section, but it may not fix them all. It's important that your data does have the right markup connected with it in the website code so that, as much as possible, Google can already find the right data and assign it to the right data sets. To begin using the data highlighter, click into the menu item and select the blue "Start Highlighting" button. You may wish to view the supporting data highlighting video before attempting to do this.

When beginning to highlight data, first select the URL you wish to highlight. Google will ask you if this page fits a recurring pattern (like a product or a post), and you choose whether to tag just the selected page or to include other pages using the same template.

Select the URL, then select the type of data that appears on the page.

Here’s an example of a blog post page where the author, publishing date, title, main image and category have been selected as data to highlight. For any element, you may select whichever data type is most relevant for the element. You can tag a few pages like this and rely on Google to detect more pages of the same format in your site by itself, or you can manually tag each and every page. It depends on what’s the most practical solution for you. For eCommerce sites with hundreds of product pages of the exact same format, it’s unnecessary to tag them all separately. Just do 5-10 product pages and then test a few others to see if Google is accurately identifying the repeated formatting.

HTML Improvements:

This section will assist with detecting any issues with the code of the website that may be affecting Google's ability to appropriately place your pages into search results. They won't stop your pages from appearing, but they will show issues that could make a click through to your site less likely.

Typical issues highlighted here are duplication problems with Page Title Metas, Description Metas, and page content. Every page in your website should have a unique Page Title Meta and, where possible, a Meta Description written for it that is also unique. Not every web page has to have a Meta Description; however, it's advisable that all key pages you want to render well in Google search have one.

When Page Title Metas or Description Metas are empty, Google will extract an element from your page to display in the relevant slots in the search results page, where your Page Title Meta and a description would appear. The result may be undesirable and can negatively affect your click-through rate from Google search.
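
For reference, both of these sit in the <head> section of each page. The wording below is just an illustration of the kind of thing you'd write:

    <head>
      <title>Blue Widgets NZ | Your Company Name</title>
      <meta name="description" content="A short, unique summary of this page, written to encourage the searcher to click through." />
    </head>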

Sitelinks:

If Google is rendering sitelinks for your search result, it may have selected pages that you don't necessarily want people entering your website on, i.e., the sitelink may be for a page that is not designed as a landing page, and your conversion goals for the website may suffer. If that's the case, you can 'demote' the sitelink. This will remove it from the pool of possible sitelinks that Google can select from to display in search results.

You cannot ‘force’ Google to show sitelinks for your search result. Google will determine what it thinks is appropriate based on the search query. Usually, sitelinks appear if Google thinks the search query is a ‘brand search’ specifically aimed at finding you.

Search Analytics

Queries:

Here's a look at a website that has a full three months of data. To see this view for your site, select the Queries radio button in the menu.

Queries - Impressions in Google Search Console

The blue line shows the number of impressions that your website is getting in Google searches. An impression is when someone does a search and one or more of your website pages shows up as part of that search. Each page appearing in search results is one impression. These are counted from any page of Google search results, so it doesn't mean you are on page 1; it just means your site is showing up there 'somewhere'.

Getting impressions in Google search is not the end goal; it's just where the story starts. Impressions in search have no value unless the person also clicks on the search result and goes to your website.

Clicks are shown in red and are always less than impressions. It's fairly normal for a new website to get zero or very few clicks from search. Keep in mind that this data also includes any time you search for exactly your website name and click on it, so don't get too excited about your home page getting 100 clicks this week if you can personally account for 99 of them!

The ratio you can expect between impressions and clicks is, at best, about 1 click per 3 impressions; 1 click per 100 impressions is often normal. When you have a big website, but it's new, you might even get a click rate of just 1 in 1000 impressions (excluding you and your friends clicking, of course; I mean 'natural' or 'organic' clicks).

Once your site starts getting organic clicks, these translate to ‘visits’ in your website stats or in Google Analytics, with the referrer being ‘Google Search’.

Here's another screenshot with a full list of search terms relating to the impressions being made in Google. I've blurred out the blue search impressions list to protect the privacy of the website owner, but you can see in this list that there are 5 columns:

  • Query (the actual search query)
  • Impressions (the number of times this search query has triggered a search impression for your website)
  • Clicks (the number of times this search query has generated a click from search results)
  • CTR (the Click Through Rate: the number of clicks per impression, as a percentage; see the worked example below the screenshot)
  • Avg. Position (the average position in search results that this query got you an impression for; 1-10 = page 1, 11-20 = page 2, etc.)
Queries, CTR, Pos, Clicks & Impressions in Google Search Console
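
To make the CTR and position figures concrete, here's a small worked example using made-up numbers:

    Impressions:    200
    Clicks:         5
    CTR:            5 ÷ 200 = 2.5%
    Avg. Position:  8 (the page typically appeared near the bottom of page 1)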

Using this list of search queries, you can figure out if your website is on the right path to getting clicks for the right kinds of words. If your brand name, your website name, or your main products or services are not in this list, then your website has a problem that needs to be solved!

Filtering Search Queries:

At the top of the search queries list, you’ll find a “filters” button. From there, you can filter your search queries page results to show only those results from Web, from Mobile, or from Image search. You can also select which country your search queries are showing for, and also add stars to your queries.

Links to Your Site:

While it's useful to know how many links are pointing toward your website, this section does not indicate which of those links are providing you with any kind of benefit.

There are two main benefits to having links coming to your website:

  • Referrals
  • Passing pagerank

These are two quite different things.

The first is about whether or not people can follow a link from the referring website to your website. These are called 'referrals' in Google Analytics, and each referring link shows in the Links to Your Site section of Google Search Console. Whether or not they are actually followed by anyone is another matter. In theory, if you have more referring links, then you are more likely to be found on the web outside of a Google search: a user may simply be viewing pages on another site and then decide to follow a link from that site to yours.

Referring links are usually set by consent of the referring website, i.e., they don't appear on their own; someone purposefully places them on the referring website, and usually the website owner or administrator is aware that this is the case. It's common to get referring links placed in web directories, Yellow Pages, related websites, or websites that have shared some of your content on their sites. You don't always have control over referring links, as you may have no connection with the website that places the link on their site linking to yours.

The second benefit from having links to your site is that you may gain ranking in search engines, because search engines typically assess which links they think indicate that the content on your website is trusted by others. If another website places a link to your site on their pages without your knowing, or without you purchasing the link (in other words, the link is a 'natural' link), then this can be a signal to Google that people think your site has value.

A problem arises, however, when the referring website has little value itself, and placing a link on that site to yours adds little if any value to the visitor of the referring site. When that's the case, you receive little if any trust benefit from the link; in fact, it may be assessed by search engines as being irrelevant, or even negative in value. Having multiple links referring to your site from websites that are already trusted and respected by Google and other search engines will have a beneficial effect on the ranking of your website, but the converse is also true. So ensuring that links to your site are only of the beneficial type is a challenge that faces many SEO specialists and website owners on a regular basis.

Links tagged "no-follow" are links that refer to your website but through which Google will not pass pagerank, because the tag indicates "please don't pass pagerank via this link". It does not, however, stop Google from following the link, or from crawling and indexing the contents of the page it links to. To ensure your links are not counted negatively toward the ranking of your site, you should try to ensure that low quality links, links from low quality websites, or high volume links from affiliates are set to "no-follow". The science of figuring out which links will benefit you and which will gain you a penalty against your pagerank and trust values in search engines is called 'Off-Page SEO' and won't be expanded on in this page.
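
In the page code, a no-follow link is just a normal link with an extra rel attribute. The address and anchor text below are placeholders only:

    <a href="http://www.example-directory.co.nz/your-listing" rel="nofollow">Your Business Name</a>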

Here is an example of some of this links data. The left column shows the domain the links come from, the red 'links' value is the total number of links from that domain, and the black 'linked pages' value states how many of your pages are linked to from that domain. In Google Search Console, you can download all of the links into a spreadsheet and examine them in more detail.

From this list, you can only gather data about how many links there are. You can’t tell if they have referred any visitors, have the ‘no-follow’ tag, or pass positive or negative pagerank.

Internal Links:

This section in Google Search Console shows how your pages are linked inside your own website. This is a sum total of all menu links, sub menu, footer or sidebar links and on-page links embedded in your texts. Aside from figuring out if you haven’t included a page in a menu somewhere, I’ve found this of little practical value. All I can suggest here is:

  • You should have at least one link to every page, from every other page i.e. if your site has 20 pages, every page should have at least 20 links pointing to it, so your total links would be 20 x 20 = 400.

Manual Actions:

Pay particular attention to any messages appearing here. The manual actions section will note any important messages from Google about issues that will affect your ranking or user experiences on your site. One example is when Google has detected that you have gained a large volume of poor quality links from other websites, and they are passing negative pagerank, or are using overly spammy keyword anchors as their linking texts. Creating 1000 links to your site with the keyword anchor “Best Web Developer” (or any other relevant keyword phrase for your website) may gain you a ‘manual action penalty’ and will negatively affect your ranking. You will need to resolve your links back to having an acceptable percentage or volume of commercial keyword anchors, and cease any campaigns that are building them. Once that’s done, you can ask Google to re-assess the situation and (hopefully) re-establish your ranking. There’s no guarantee though that you get it back, so it’s best you never embark on any kind of spammy link-building programmes unless you are prepared to get a penalty for it.

Other examples of manual actions by Google against your site are if they have detected that your site contains malicious software, or has been hacked. Both situations can place the website visitor at risk, so Google prefers not to send visitors to your site in those cases.

Google Index

Index Status:

The index status section will show how many web pages in your website are currently in Google’s index. The figure should be a close match to the total number of pages in your site for which you have allowed Google indexing, i.e. the robots settings for the pages are not set to ‘no-index’. If you have an eCommerce website, you may wish to set your functional pages like the Cart page, Checkout page, Account page and any other pages that perform a function only to ‘no-index’. This setting will stop them from appearing in the index.

Clicking on the "advanced" button will allow you to toggle the on-off switches to show other page data:

  • Total indexed
  • Ever crawled
  • Blocked by Robots
  • Removed

The total indexed number is the same figure from the first page, and shows the total number of pages currently in Google’s index. The red line shows the total number of pages ever crawled by Google in your site. Keep in mind, this includes all the pages you ever started writing, but deleted, plus all the pages you had as a different URL, or used in the past, but no longer have, or have set the ‘no-index’ setting for now. The brown line shows how many pages are being blocked by the ‘no-index’ tag or by the robots.txt file. Finally, the purple line shows how many of your URLs had been in the index, but you requested that Google remove them for whatever reason.

What often happens is that the ‘ever crawled’ figure gets larger and larger, making the scale of the other data rather pointless, unless you turn off the ‘ever crawled’ display. Most sites have far more URLs in the ‘ever crawled’ list than they have pages currently live. A website that has been around for quite a while should probably have a reasonably large difference between ‘ever crawled’ and ‘total indexed’ figures. This can be a sign of evolution of content. Sites that have not been updated for many years, and have stale content, will have smaller differences between these two figures. While longevity is good, so is ‘freshness’, so a site that has both is likely to gain better rank. This is evidenced by a long term steady gain in the ‘ever crawled’ figure that is larger than the ‘total indexed’ change.

Update: March 9th, 2014. Not surprisingly, Google removed the "ever crawled" part of the graph when they changed the report to record more accurate and relevant data. In my opinion, the "ever crawled" data was useful only to the Google algorithm, and not of particular value to the website owner. The new graph looks like this (click to view large):

Content Keywords:

Of course, the main way that your website will gain rank in Google is by having lots of relevant information in it that uses the right kinds of words that your audience is searching for. The content keywords section shows which words have been detected in your website that have relevance to your site offering. Typical words on the list will be nouns and verbs.

Examine this list for your site. The list should have your main keywords (as single elements) near the top end of the list. If they are not there, then you probably haven't used your keywords effectively in your site, or you may have misidentified which words are most useful for you to use. The top end of the list should tell you what your site is about.

One thing to keep in mind is that this list shows what words have been indexed for your site. If your site is new, or your pages are new, then it's possible your words have not yet been indexed, even if your pages already are. I've noticed that there is a significant lag between words appearing on this list and the time at which the page was indexed. From this, we could probably assume that Google takes longer to examine content in detail than it does to capture content in a more shallow form, i.e., while Google has taken a 'capture' of your whole page and all the words on it, it has not yet examined the detail of that page to detect the theme and occurrences of keywords. Expect indexing of page detail to take 2-6 months.

Remove URLs:

This is a great tool for getting rid of URLs showing up in Google search that you don't want there. The conditions, however, are that you must set the pages in question to 'no-index' before using the tool, or the page must have been removed from public view entirely and render a 404 error when trying to access the URL. You can also remove a whole site from Google search results very quickly, assuming the above conditions are met, by using the root web address (just the domain name without page names) as the removal URL.

After requesting removal of a URL, this takes about 1-24 hours to take effect as Google must verify your pages are either offline or blocked before adjusting its index.

Crawl

Crawl Errors:

Site Errors:

If your site has been live a while but your pages aren’t appearing in search as you expected, take a look at this section and see if there are any errors. Ideally, proactively check for errors, even before you notice their effect in search.

Not all errors are serious or affect Google’s ability to assign rank to your pages, so your mission should be to minimise errors rather than necessarily eliminate them. But by all means, eliminate them if you reasonably can.

DNS errors are on the serious end of the scale. Any problems with the Domain Name Server will mean that your entire site can fail to appear in search. If your site has been successfully shown live, and is already indexed by Google, but you have the occasional DNS error showing, you should consider discussing this problem with the domain provider. There could be a reliability issue with their services, or you may have incorrectly configured settings in the control panel for the domain. These are normally not problems with the site itself, but errors here have a catastrophic effect on your site's status.

Server connectivity issues often happen when using budget hosting services, low-cost international servers, or when trying to serve content between continents. This relates to Google's ability to connect to the device that is hosting your website, where the site fails to respond in a timely manner. It could be down, or the server could be under excessive load, making it too slow to respond. Occasional server connectivity issues shouldn't adversely affect your site's rank, but repeated and regular issues will. To resolve these problems, always host the website in the country you are targeting, be prepared to spend a few more dollars on a reliable service, and if your website is particularly large, consider investing in dedicated hosting so that your server is not also busy trying to host hundreds of other websites. This may be expensive, but losing up-time for your website, and losing rank because of it, hits you twice.

The robots.txt fetch is a regular check that Google does of your robots.txt file. This is the file that tells Google which pages in your site to crawl and index, and which ones it can look at but not show in search results. Failing to connect to the robots.txt file is not a serious error and it’s unlikely to negatively impact on ranking.

URL errors:

This section is broken into several parts.

Google reports which kinds of errors it found, and for which part of Google's crawl: Googlebot web or Googlebot mobile. These are reported separately. The report is split into three sub-parts for server errors, access errors and 'not found' errors. The data in each depends on several settings in your website.

Server errors are serious and possibly harmful to rank, but if there are few errors, they may be of little concern. Access errors are reported when Googlebot is provided a link in your site but is prevented from following the link due to some other parameter. These tend not to be serious, unless it's clear that there are incorrectly set parameters.

The final section, 'not found' errors, covers cases where Googlebot was provided with a link, or still has an old (but now removed) link in its index, tried to access the page it linked to, and found nothing. A small number of these errors is also not serious, unless they indicate an issue with pages that should be there but are no longer accessible for some reason. This sometimes happens when sub-directories of a website are moved and the collective URL pathway changes, but a redirect to the new path has not been set. Google will detect what it thinks are 'new' pages, and report that the 'old' pages are now gone. In such cases, it's important to set redirects. A redirect is a command that tells Google that the page URL has permanently changed and points it to an existing, valid URL instead. If you often create new pages and remove old ones, it's wise to set redirects for the old URLs so that you maintain the integrity of Google's links to your site, and redirect visitors to new and relevant sections of your site rather than have them land on an error page.
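
How you set a redirect depends on your hosting. As one common example, if your site runs on an Apache server, a permanent (301) redirect can be added to the .htaccess file in your web root. The paths and domain below are placeholders only:

    Redirect 301 /old-page/ http://yourdomainname.co.nz/new-page/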

Crawl Stats:

Google Search Console will report on how often Google has visited your website, and what happened when it got there. The report is broken into three key sections:

  • Pages crawled per day
  • Kilobytes downloaded per day
  • Time spent downloading a page (in milliseconds)

Googlebot will visit your website on a regular basis to check if there have been any changes in the site so that it can update its cache. While it's possible that Google can read a whole page or more on any one visit, it's also unlikely. Usually what happens is that Googlebot will enter your site via a link, examine part of the page it lands on, then potentially follow a link to another page, which is not necessarily in your site and could be on another site. This counts as one page toward the total pages crawled. If the link that Googlebot followed was to another of your pages, it will count that as a second page. Don't be surprised to see that Googlebot has crawled more pages in a day than you actually have in your website. This is not unusual, and indicates only that some of the same pages were crawled more than once, probably via many different entry points (links). The number of pages crawled is governed by crawl frequency, and this may have a bearing on how well your pages get indexed.

How much data Googlebot looks at during its (usually) daily visits to your site is reported in the 'kilobytes downloaded' section. This can vary quite a lot, from as little as 0kb (Googlebot didn't visit that day) to many megabytes. Combined with pages crawled, this is referred to as crawl depth, and may also have a bearing on ranking for your site.

The final metric in this section is download time. This is reported as an average 'per page' figure. You should take note of your figures here, especially in regard to unusual spikes or changes in the average. A good average time is between 500ms and 2000ms (half a second to 2 seconds). Inconsistency is an indication of possible server problems, or usage load on your website. If your website is suddenly getting a lot more traffic, you may wish to adjust the site settings to reduce the Googlebot crawl rate so that it doesn't negatively impact the user experience on your website. If you notice that your download time is particularly long (over 2 seconds), consider checking with your hosting provider to see if this can be resolved. Slow download times reported by Googlebot will also cause Googlebot to throttle down its pages crawled and kilobytes downloaded rates, and this will affect how quickly your new content will gain rank in search.

Fetch as Google:

This is one of my favourite sections in Google Search Console because it allows you to rapidly get a page and its contents indexed into Google search results. It does however come with a warning: Do not overuse it! If you repeatedly fetch a page and submit it, Google will begin ignoring your requests. From experience, the first fetch of a given page URL will index within 5 minutes. Subsequent fetches of the same URL will be ignored until some time has passed. I estimate about 48-72 hours before Google will allow you to submit the same page again. So if you are making changes to a page, make all the changes, then fetch the page only when you are done.

When your site is new, use this tool to download and submit all of your website pages into Google. You can either use the 'Submit URL' command or the 'Submit URL and all linked pages' command. Usually, on submitting a new website, I will use both: I submit all pages individually, and then also use the 'Submit URL and all linked pages' command for the home page. I haven't yet tested to see if there are any significant benefits to doing both, but if you don't have time, simply submit the home page using the all linked pages setting. That way, you do one command and most of the site will start appearing in Google search within minutes*.

* Not for keywords! Your site content will not be indexed; only the superficial view of your page will appear. Keyword indexing comes only after 2-6 months of crawls by Googlebot.

After doing a fetch, check to see if your pages are starting to appear in Google search by using the “site:yourdomainname.com” command in Google search. For example, to check the pages of this website, simply search this in Google: “site:https://crankedseo.com”. The result returned will include all indexed pages of this site, and no pages of any other website.

If your pages are already indexed and you make some significant changes to a page, you can speed up indexing of the changes by fetching the page you changed and submitting it. Google may ignore repeated requests for the same page if done too frequently.

Blocked URLs:

You don’t necessarily want Google to crawl your entire site, in fact, many of the files required to operate a website should not get crawled, especially if your site is built with a CMS (content management system). With CMS websites, the site root directory will have many sub-folders with the operating system files in them. These files should not get crawled or indexed by Google. In this website, both the wp-admin and wp-includes sub-folders are blocked from being crawled, and the command to block the crawl is held in the robots.txt file. Google will regularly seek out the robots.txt file to check the current configuration and compare that against any links it is attempting to crawl.
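
As an illustration, a minimal WordPress-style robots.txt that blocks those two folders might look like this (the sitemap line is optional, and the domain is a placeholder):

    User-agent: *
    Disallow: /wp-admin/
    Disallow: /wp-includes/

    Sitemap: http://yourdomainname.co.nz/sitemap.xml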

You can also prevent Google from crawling individual web pages with the robots.txt file; these are listed as individual page URLs, or as directories if you want to block all pages in a directory.

In the blocked URLs section, you can also manually test URLs to see if they are being blocked. Just add the URL into the test box and hit "Test". Google Search Console will report back whether it can successfully access the page, or if it is blocked to either Googlebot web or Googlebot mobile.

Sitemaps:

The sitemaps section is an expansion on the sitemap portion of the website dashboard. This is where you submit the sitemap for your site. While sitemaps are not absolutely necessary, they do help Google ascertain what pages to expect to find, and give Google yet another way to access them. Sitemaps are essentially a page of links to all the pages in your site.

Where sitemaps become very useful is when they provide information to Google about various taxonomies in your website, like post types, portfolios, authors, products, categories, tags and much more. For WordPress, common sitemaps are Posts, Pages and Images.

On first submission of a sitemap, Google may not immediately access the sitemap file. In that case, it will report that the sitemap submission is “pending”. Once successfully submitted, all of your posts, pages, images etc will appear as blue bars. Submitted does not mean indexed, so be patient. Combining the sitemap submission with the Fetch as Google command will speed up indexing of your site or any new pages you submit. Once indexed, Google shows a red bar in the graph so you can see how many of your submitted pages and posts are now being indexed.

URL Parameters:

Unless you are already experienced in using Google Search Console to set URL Parameters, I suggest you do not touch this next section.

URL parameters are special controls that appear in the URL of some pages, usually search results or taxonomies, and point to pages that are already represented in the website under a canonical URL. A typical example of a URL parameter occurs when doing a product search in an eCommerce website. The pages returned in results will include products that are probably also accessible via the main menu, not just by search, but the URL will probably have search parameters in it. Google could mistake this page for being a different page when in fact it isn't. Telling Google how to handle URL parameters will help Google figure out that two or more URLs showing the same product, post, category or page are not to be regarded as content duplicates. Content duplication is unfavourable because Google prefers to serve pages in search results that have new content, not the same content over and over, so one or more versions of a page reached via seemingly different URLs may be penalised for using duplicate content. Hence the instruction about how to eliminate the effect of the URL parameters.

Google is, however, very good at working out how parameters work in websites, as they often form a pattern and occur in many different websites, not just yours. Google will automatically weed out any issues with URL parameters in most cases, so chances are you need do nothing here at all.

Here are some examples of URL parameters at work. Each of the following URLs actually points to the exact same content, so they could be misinterpreted as being three different pages.
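
The original screenshot isn't reproduced here, but hypothetical URLs of this kind would look something like the following (the domain, product name and parameters are made up for illustration):

    http://yourdomainname.co.nz/products/blue-widget/
    http://yourdomainname.co.nz/products/blue-widget/?sort=price&colour=blue
    http://yourdomainname.co.nz/index.php?product_id=123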

Security Issues

Google reports on any security issues it has detected in your site. This could be related to hacked content, virus infections, user security issues and such.