Free SEO tool
Robots.txt Tester
Paste your robots.txt content and test whether specific URL paths are allowed or blocked for any crawler, including AI bots like GPTBot and ClaudeBot.
Robots.txt Content
Test a URL Path
Common Robots.txt Mistakes
Blocking CSS and JS files
Disallowing /assets/ or /static/ can prevent Googlebot from rendering your pages correctly. Google needs access to CSS, JS, and images to properly index content. Only block truly private resources.
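A minimal sketch of the difference (the /assets/ and /internal/ paths here are hypothetical placeholders):

```text
# Too broad -- this hides the CSS/JS Googlebot needs to render pages:
# User-agent: Googlebot
# Disallow: /assets/

# Narrower: block only a genuinely private path, leave rendering resources crawlable
User-agent: *
Disallow: /internal/
```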
Forgetting AI crawlers
GPTBot, ClaudeBot, and PerplexityBot respect robots.txt. If you want to block AI training but allow search indexing, add specific User-agent blocks for each AI bot rather than using a blanket Disallow for all agents.
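As a sketch of that pattern, per-bot blocks for the AI crawlers named above, with everything else left open (an empty Disallow means "allow all"):

```text
# Hypothetical policy: block AI crawlers, allow all other agents
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: *
Disallow:
```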
Trailing slash inconsistency
Disallow: /blog and Disallow: /blog/ are different. The first blocks /blog, /blog/, and /blogging. The second only blocks paths starting with /blog/. Be precise about what you intend to block.
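The distinction can be checked with Python's standard-library `urllib.robotparser`, which uses simple prefix matching (Google's spec adds wildcards and longest-match precedence, but for plain rules like these the results agree). The example.com URLs are placeholders:

```python
from urllib.robotparser import RobotFileParser

# One parser per rule style, fed robots.txt content directly
rp_a = RobotFileParser()
rp_a.parse("User-agent: *\nDisallow: /blog".splitlines())

rp_b = RobotFileParser()
rp_b.parse("User-agent: *\nDisallow: /blog/".splitlines())

# "Disallow: /blog" is a prefix match, so it also catches /blogging
print(rp_a.can_fetch("*", "https://example.com/blogging"))   # False
# "Disallow: /blog/" does not match /blogging or /blog itself
print(rp_b.can_fetch("*", "https://example.com/blogging"))   # True
print(rp_b.can_fetch("*", "https://example.com/blog"))       # True
print(rp_b.can_fetch("*", "https://example.com/blog/post"))  # False
```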
Missing sitemap directive
Always include a Sitemap line pointing to your XML sitemap. This helps crawlers discover content even if some internal links are missing. Place it outside any User-agent block.
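For example, with a hypothetical domain and sitemap URL, the Sitemap line sits at the top level rather than under any User-agent group:

```text
Sitemap: https://example.com/sitemap.xml

User-agent: *
Disallow: /admin/
```

The Sitemap directive must be an absolute URL; unlike Allow/Disallow rules, it is not scoped to a particular user agent.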
Using robots.txt for security
Robots.txt is publicly accessible and only a suggestion; a Disallow line can even advertise the paths you are trying to hide. Never use it to conceal sensitive pages. Use authentication, noindex directives, or other server-side access controls instead.
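As a sketch, a noindex directive can be delivered either in the page markup or as an HTTP response header:

```text
<!-- In the page's <head>: ask crawlers not to index it -->
<meta name="robots" content="noindex">

Or as an HTTP response header:
X-Robots-Tag: noindex
```

Note that a crawler can only see a noindex directive if it is allowed to fetch the page, so a URL that is Disallowed in robots.txt cannot be reliably de-indexed this way.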
