What are the Key Features of Google Search Console?


Hi everyone, how's it going? In our last article, we took a first look at the features of Google Search Console. We went through the Dashboard, Messages, Search Appearance, and Search Traffic sections.

Don't know Google Search Console yet? Click here to learn!

Shall we continue?

Google Index

Image of the Google Search Console menu and its features.

What is the first thing that comes to mind when you hear the word “index”? Something like the index of a book? That is exactly the idea. However, instead of the pages of a book, we have the pages of your site. This tab gives us the following options:


Index Status: A graph showing the total number of pages indexed over a given period of time. There is also an Advanced option, which makes it possible to display the number of pages blocked by robots and the number of removed pages as well. You can update this information with the Refresh button.

Image of Google Webmaster Tools Index Status

Blocked Resources: As the name suggests, this option lists all the blocked resources on your site, that is, resources that Google's crawler is prevented from loading for a given page.

Image of the Blocked Resources tab in Google Webmaster Tools

Remove URLs: A feature meant only for specific cases or emergency situations. It lets you “hide” a given page from users by removing it from the search results. Note that the removal is only temporary.

Image of the Remove URLs tab in Google Search Console

Crawl

Image of the Crawl section in the Google Webmaster Tools menu and its features.

Here we will look at everything related to how Google crawls your site. Topics such as the famous robots.txt file and your website's sitemaps will be covered in more detail below. Check it out:


Crawl Errors: Here we have site errors and URL errors. Site errors refer to problems that affect the whole site, such as DNS failures and server connectivity issues, while URL errors refer to problems crawling specific pages. The Desktop and Smartphone graphs have tabs for server errors and “not found” (404) errors.

Image of the Crawl Errors tab in Google Webmaster Tools

Crawl Stats: Displays data related to Googlebot's activity on your site. We have graphs for pages crawled per day, kilobytes downloaded per day, and the time spent downloading a page, in milliseconds.

Image of the Crawl Stats tab in Google Webmaster Tools

Fetch as Google: An interesting section of Google Search Console, to say the least. It lets you see how Google handles your website.

Here we are not dealing with the interface as a visitor sees it, but looking at the crawl level: how visible each directory is, and how the URLs of your site's pages are accessed.

You enter the URL path and the type of Googlebot (Google's crawling robot), and the tool reports the status and the date. If you leave the path blank, it fetches just the home page; if you already know a specific path, you can enter it directly.

Image of the Fetch as Google tab in Google Webmaster Tools

robots.txt Tester: You can submit your robots.txt file, or check whether your current file passes the robots tests or has any errors. We will talk about the robots.txt file shortly.

Image of the robots.txt Tester tab in Google Search Console

Sitemaps: In this section you can view the full contents of your sitemaps and see how much of their content is or is not being indexed. We will also talk more about what a sitemap is shortly. ;)

Image of the Sitemaps tab in Google Webmaster Tools

URL Parameters: Another section that should only be used if really necessary. Here you can tell Google how to handle URL parameters, to make its crawl more efficient. Only change things here if you have to: setting a parameter incorrectly can cause a page to drop out of the index.

Image of the URL Parameters tab in Google Webmaster Tools

Security Issues

Image of the Security Issues section in the Google Webmaster Tools menu

Finally, we have the one feature of Google Webmaster Tools that you will hope never to need. Here are listed all the security problems detected and reported by Google Search Console.


The robots.txt file

Illustration of a robots.txt file

It is simply a file, placed at the root of your site, whose purpose is to apply simple access rules to Google's robots, or to those of any other search engine.

“Access rules? What do you mean?”

Not everything on your site should be accessed by search robots. A very common example is user login and validation pages.

You do not want those pages to be accessible to search crawlers, because that can negatively influence your score.

So you add simple rules to this file called robots.txt to prevent it.

First, you define which search robots the rules will apply to. You do this by entering User-agent: followed by the names of the robots (or * for all of them). With that done, you define the access rules themselves. The main directives are: Noindex, Nofollow, Disallow, Allow, and Sitemap.

Watch this video on what noindex, nofollow, and disallow mean

The Noindex command tells Google not to index a given directory. That is, it does not allow Google to display it in its search results, and consequently the content of its subdirectories is not indexed either.

The Disallow command, in turn, keeps search robots from entering the specified directory and its subdirectories at all. We also have the Allow command, which complements it: if a whole directory is marked as Disallow but you want one of its subdirectories to be seen by robots, you apply the Allow command to that specific subdirectory.

The Nofollow command deals with the links on your site: robots will see them, but will not follow them. That is, they only identify that something is a link, without crawling its content. Finally, we have the Sitemap directive, which does nothing more than specify the path to your sitemap(s).
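Putting it all together, here is a minimal sketch of what such a robots.txt file might look like. The directory names and domain are hypothetical, just for illustration; also note that of these directives, Disallow, Allow, and Sitemap are the ones all major search robots reliably honor.

    # Apply the rules below to all search robots
    User-agent: *

    # Keep robots out of the user-validation area...
    Disallow: /login/

    # ...but allow this one subdirectory inside it to be crawled
    Allow: /login/help/

    # Full URL of the sitemap, as discussed below
    Sitemap: https://www.yourdomain.com/folder1/sitemap.xml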

Image exemplifying the features of a robots.txt file


What are the features of a Sitemap?

Icon representing a sitemap and its features

It is an .xml file whose job is to list all the pages contained in your site. This is done with the purpose of describing how your site's content is organized. It is not necessarily contained in a single file.

Google's robots use your sitemap to analyze your site better and more easily.
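As a sketch of what such a file contains, a minimal sitemap in the standard sitemaps.org XML format might look like this (the URLs and dates are hypothetical):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- One <url> entry for each page of the site -->
      <url>
        <loc>https://www.yourdomain.com/</loc>
        <!-- Optional: when the page was last modified -->
        <lastmod>2017-01-15</lastmod>
      </url>
      <url>
        <loc>https://www.yourdomain.com/blog/</loc>
        <lastmod>2017-01-10</lastmod>
      </url>
    </urlset>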

Usually, these files do not have a fixed location, which is why it is important to state the location of your sitemap in the robots.txt file, inserting the full access URL, giving something like yourdomain.com/folder1/sitemap.xml.

In any case, even without stating the location of your sitemap, it is possible for Google Webmaster Tools to find it.

After going through all these features, one thing is certain: if you are looking to improve your site, this is the right tool. Besides providing statistics that reveal problems, it also provides the resources needed to fix them.


So, did you like the post? Do you have any questions? Leave a comment below telling us what you think, or contact us!

Until next time! :D