How Pages Blocked by Robots.txt Are Ranked, and How Site UX Can Impact Your Google Rankings
Original Air Date: July 11, 2019
Google's John Mueller recently explained how query relevancy is determined for pages blocked by robots.txt. Google has stated that it will still index pages that are blocked by robots.txt, but how does it know which types of queries to rank those pages for?

Dejan posted a Twitter poll the other day that received almost 600 responses from the SEO community. The question was "Good UX impacts rankings," and the available answers were "directly," "indirectly," or "neither."

John Mueller also said it is best to be consistent with your URL structure: either use a trailing slash after the URL or don't, but try not to mix both formats. When asked about trailing slashes, he posted this on Twitter: "The best solution is to be consistent and only use one version of a URL. Link to that version, redirect to it, use it in sitemaps, use it for rel-canonical, etc."

Finally, some searchers in India are noticing that when you enter a query into Google that has no matches and Google has no idea what you are looking for, Google will show you a section called "some popular queries."
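As background on the robots.txt point: a Disallow rule only blocks crawling, not indexing, so a blocked URL can still end up in the index if other pages link to it, with Google typically relying on off-page signals such as anchor text to guess its relevancy. A minimal sketch, using a hypothetical /private/ path:

```
# Hypothetical robots.txt: crawling of /private/ is disallowed,
# but URLs under it can still be indexed if other sites link to them.
User-agent: *
Disallow: /private/
```

Because Google cannot crawl the page to see a noindex directive, robots.txt alone is not a reliable way to keep a URL out of search results.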
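Mueller's trailing-slash advice boils down to picking one URL form and applying it everywhere (links, redirects, sitemaps, rel-canonical). As a rough illustration, here is a small Python sketch of a hypothetical `normalize_url` helper that enforces one policy site-wide; the function name and the `trailing_slash` flag are illustrative, not from any real library:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url, trailing_slash=False):
    """Normalize a URL to one canonical trailing-slash form.

    Hypothetical helper: choose one policy for the whole site and
    use the result consistently in links, redirects, sitemaps, and
    rel-canonical tags.
    """
    scheme, netloc, path, query, fragment = urlsplit(url)
    if path in ("", "/"):
        path = "/"  # the root path always keeps its slash
    elif trailing_slash:
        path = path if path.endswith("/") else path + "/"
    else:
        path = path.rstrip("/")
    return urlunsplit((scheme, netloc, path, query, fragment))

print(normalize_url("https://example.com/blog/"))
# https://example.com/blog
```

A server-side redirect (301) from the non-canonical form to the output of a helper like this is one common way to put the "only use one version" advice into practice.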