Issues with excluded pages in the Google Search Console Index Coverage report


An explanation of the issues behind excluded pages in the Google Search Console Index Coverage report.

    In today's lesson we explain the pages listed as excluded in the Index Coverage report in Google Search Console. Google states that these pages are intentionally not indexed, and in most cases this is appropriate rather than an error: the pages are either duplicates of pages that are already indexed, have been blocked from indexing by some mechanism you use on your website, or are not indexed for some other reason that is not caused by an error. Understanding these cases helps you improve your blog or website and climb Google's rankings.

The individual cases are explained below:

Page excluded by the "noindex" tag

    When Google tried to index the page, it encountered a "noindex" directive, so the search engine did not index it. If you do not want this page indexed, everything is working as intended. If you do want the page indexed, remove the "noindex" directive so that search engines can crawl and index it.
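    As a sketch, the "noindex" directive is usually delivered in one of two forms: a meta tag in the page's <head>, or an HTTP response header. Removing whichever form is present lets the page be indexed again:

```html
<!-- Meta tag form, placed in the page's <head>: -->
<meta name="robots" content="noindex">

<!-- Equivalent HTTP response header form (set by the server, not in HTML): -->
<!-- X-Robots-Tag: noindex -->
```

    If the page is served by a CMS or blogging platform, the tag is often added by a plugin or a per-page setting rather than written by hand, so check those settings as well.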

Access blocked by the URL removal tool

    The Search Console report shows that access to this page is currently blocked by a URL removal request. If you are a verified owner of the website, you can use the URL removal tool to see who submitted the removal request.

    Note that removal requests only remain in effect for about 90 days after the removal date. After that period, Googlebot may crawl and index the page again, even if you do not submit another indexing request.

    If you don't want the page to be indexed, use the "noindex" tag, require login credentials to access the page, or remove the page entirely.

    You can also review the SEO course to improve your site's appearance in the top search results.

Access blocked by robots.txt

    Googlebot is blocked from accessing this page by a rule in your robots.txt file. You can verify this using the robots.txt testing tool.

    Note that robots.txt does not guarantee the page will stay out of the index. If Google can find information about the page from other sources without loading it, the page may still be indexed (although this is less common). To make sure Google does not index the page, remove the robots.txt block and use a "noindex" directive instead, since Googlebot must be able to fetch the page in order to see the "noindex" tag.
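    If you prefer to check robots.txt rules yourself, the Python standard library can parse them. The sketch below uses a hypothetical rule set and example URLs, not data from any real site:

```python
# Minimal sketch: test whether a robots.txt rule blocks Googlebot from a URL.
# The rules and URLs below are hypothetical examples.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: Googlebot
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A URL under /private/ matches the Disallow rule, so Googlebot is blocked.
print(parser.can_fetch("Googlebot", "https://example.com/private/page.html"))

# A URL outside /private/ is not matched by any rule, so it is allowed.
print(parser.can_fetch("Googlebot", "https://example.com/blog/post.html"))
```

    The same parser can also load a live file with `set_url()` and `read()`, which is useful for spot-checking your own site.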

Crawl anomaly

    This means an unspecified anomaly occurred while fetching this URL, which may indicate a 4xx- or 5xx-level error response code. You can try fetching the page with the URL Inspection tool in Search Console to see whether any issues occur while fetching it, if the page is not indexed.

The page has been crawled and is not currently indexed

    This means Google crawled the page but did not index it. It may or may not be indexed in the future. There is no need to resubmit this URL for crawling, because the search spiders will crawl it again on their own.

The page was discovered and is not currently indexed

    Google found the page but has not yet crawled it. Usually this happens because Google tried to crawl the URL but the site was overloaded or the page was loading slowly, so Google decided to reschedule the crawl for another time (which is a real problem if you want your blog to appear on the first page of Google search results). That is why the last crawl date field is left blank in the report.

Alternate page with a proper canonical tag

    This page is a duplicate of a page that Google recognizes as the canonical page, and it correctly points to that canonical page, so there is no need to take any action on your part. Just carry on with your work; many bloggers like this case because it spares you extra effort.

Duplicate page without a user-selected canonical tag

    This page is one of a set of duplicates, none of which carries a canonical tag. Google does not consider this page canonical, so you should explicitly designate the canonical page among the duplicates.

    Checking this URL should show the canonical page URL chosen by Google.
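    As a sketch, a duplicate page can point to its canonical version with a link tag in its <head>; the URL here is a hypothetical example, not a real address:

```html
<!-- Placed in the <head> of each duplicate page, pointing at the preferred URL: -->
<link rel="canonical" href="https://example.com/original-article/">
```

    With this tag in place on every duplicate, Google is far more likely to consolidate the copies under the URL you chose.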

Google chose a different canonical page than the one chosen by the user

    This page is marked as the canonical page of a group of pages, but Google thinks another URL would make a better canonical.

    Google has indexed the page it considers canonical rather than this one, and we suggest you explicitly mark this page as a duplicate of that canonical URL.

    This page was discovered without an explicit crawl request, and inspecting this URL should show the canonical URL chosen by Google. So don't worry, my friend: this case does not harm your blog or site.

Page Not Found (404)

    This page returned a 404 error when Google requested it. Google found this URL without any explicit crawl request or sitemap entry; it may have discovered it as a link from another site, or the page may have existed before being deleted. Googlebot will most likely keep trying to access the URL for some time, and there is no way to tell Googlebot to ignore a URL permanently, although repeated 404s will greatly reduce how often the URL is crawled. 404 responses are not a problem if they are intentional.

    If your page has moved, use a 301 redirect to send visitors and crawlers to its new location.
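    As a sketch, on an Apache server a 301 redirect can be declared in an .htaccess file (this assumes Apache with mod_alias enabled; the paths and domain are hypothetical examples):

```apache
# Hypothetical example: permanently redirect an old article path to its new location.
Redirect 301 /old-article/ https://example.com/new-article/
```

    Other servers and hosted platforms offer the same thing under different names (nginx `return 301`, or a "custom redirects" setting in most blogging platforms).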

The page includes a redirect

    The URL redirects to another page, so it is not added to the index. This is normal, because bloggers and webmasters often move pages to a new address or reorganize their site structure.

    So you don't have to do anything, as long as Google's spiders are able to crawl and index the redirect target.

Duplicate URL submitted but not chosen as canonical

    This URL is one of a set of duplicate URLs with no clearly marked canonical page. Even though you explicitly requested that this URL be indexed, Google did not index it, because it is a duplicate and Google considers another URL a better canonical. The canonical URL chosen by the search engine is indexed instead.

    (Google only indexes the canonical page within a set of duplicates.) The difference between this case and the case "Google chose a different canonical page than the one chosen by the user" is that here you explicitly requested that the page be indexed.

    Checking this URL should show the canonical page URL chosen by Google.

    There is also the separate problem of changed navigation paths.

    These were the cases shown in the Search Console report. It is enough to fix only the affected pages listed in the report, which will have a significant impact on your blog's or site's ranking in the Google search engine, so do not forget that.