We move on to the second chapter of the book on succeeding at web SEO … here we take a closer look at how search engines and their spiders/bots operate.
The SEO minute no. 2
This second chapter stays very theoretical, quite far from anything concrete: it mostly explains in detail how a search engine works, without it being clear how useful that is to us. The main takeaway from this episode is that the robots crawling your site are vital to your visibility, and that we should make their lives easier by offering a reasonable volume of links.
On the program
- Google and Bing also feed other engines.
- The spider scans your site according to your update frequency and page volume.
- Try not to exceed 100 links per page to make it easier for Google.
- The inverted index contains the essential keywords of your page; it is a secondary index that allows Google to go even faster.
- Google has an index only 2x bigger than Bing's … who would have thought?
- What determines the ranking of a page? The title, the density/recurrence of a word or phrase, the content in bold, and the expressions of your site that are already indexed (and where do h2 headings fit in all that? Nothing about them here).
- Negotiate external links; your site's PageRank also depends on the relationships you build across the web.
- If your users stay only 2 seconds on the page, your ranking will be penalized.
- Check out the diagram of Google's internal workings (not very reliable, in my opinion).
- Caffeine, an even faster partial indexing system (like an electron orbiting a mass).
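
The inverted-index bullet above can be sketched in a few lines of Python. This is a toy illustration, not Google's actual implementation: map each word to the set of pages containing it, so a keyword lookup becomes a single dictionary access instead of a scan of every page.

```python
# Toy inverted index: word -> set of page IDs containing that word.
# A simplified sketch; real engines also store positions, weights, etc.
from collections import defaultdict

def build_inverted_index(pages):
    """pages: dict mapping page_id -> page text."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in text.lower().split():
            index[word].add(page_id)
    return index

# Hypothetical sample pages, just for the demo.
pages = {
    "home": "SEO tips for your web site",
    "blog": "weekly SEO minute recap",
}
index = build_inverted_index(pages)
print(sorted(index["seo"]))  # → ['blog', 'home']
```

The "secondary index that allows Google to go even faster" idea is exactly this trade: more work at indexing time in exchange for near-instant lookups at query time.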
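
The "100 links per page" rule of thumb mentioned above is easy to check for yourself. Here is a minimal sketch using only the Python standard library; a real audit would fetch live URLs and handle malformed HTML more carefully, and the sample HTML string is made up for the demo.

```python
# Count <a href="..."> links in an HTML page to check the ~100-links
# rule of thumb. Uses only the standard library's HTMLParser.
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        # Only count anchors that actually carry an href attribute.
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.count += 1

html = '<p><a href="/a">one</a> <a href="/b">two</a> <a name="x">anchor</a></p>'
counter = LinkCounter()
counter.feed(html)
print(counter.count)  # → 2
if counter.count > 100:
    print("Consider trimming links to ease crawling")
```

Note that the bare `<a name="x">` anchor is not counted, since it carries no `href` and is not a link the spider would follow.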
I try to cut this down as best I can to keep only the interesting content, but this chapter really was not very concrete … Next week we prepare the ground; I will not say more 😉.