Robots.txt Generator

Easily generate a robots.txt file to control how search engines crawl your website.

Example robots.txt output:

User-agent: *
Disallow: /admin/
Disallow: /private/

Sitemap: https://example.com/sitemap.xml

About Robots.txt

A robots.txt file gives web robots instructions about which parts of your site they may crawl; this standard is called the Robots Exclusion Protocol.

Frequently Asked Questions

What is a robots.txt file?

A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google. To keep a web page out of Google, block indexing with noindex or password-protect the page.
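To illustrate the difference: blocking a path in robots.txt only stops crawling, while a noindex directive on the page itself is what keeps it out of search results. A minimal sketch of both (paths here are hypothetical):

```
# robots.txt — stops compliant crawlers from *fetching* these URLs,
# but the URLs can still appear in search results if linked elsewhere
User-agent: *
Disallow: /staging/
```

```html
<!-- On the page itself — tells search engines not to *index* it.
     Note: the crawler must be able to fetch the page to see this tag,
     so don't also block it in robots.txt. -->
<meta name="robots" content="noindex">
```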

Where should I put my robots.txt file?

The robots.txt file must be placed at the root of your website host to which it applies. For example, to control crawling on all URLs below https://www.example.com/, the robots.txt file must be located at https://www.example.com/robots.txt.
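You can check how crawlers will interpret your rules with Python's standard-library `urllib.robotparser`. This sketch parses the example rules shown above directly (no network fetch); the URLs are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body directly instead of fetching
# https://www.example.com/robots.txt over the network.
rp = RobotFileParser()
rp.parse("""User-agent: *
Disallow: /admin/
Disallow: /private/
""".splitlines())

# can_fetch(useragent, url) answers: may this bot crawl this URL?
print(rp.can_fetch("*", "https://www.example.com/admin/users"))  # False
print(rp.can_fetch("*", "https://www.example.com/blog/post"))    # True
```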

What does "User-agent: *" mean?

The asterisk (*) is a wildcard that applies the rule to all web crawlers. So "User-agent: *" means the instructions that follow apply to every bot that visits your site.
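You can also mix a wildcard group with groups for specific bots; a crawler follows the most specific group that names it. A sketch (the paths are hypothetical):

```
# Applies to every crawler that has no more specific group below
User-agent: *
Disallow: /private/

# Googlebot matches this group by name, so it follows these
# rules instead of the wildcard group above
User-agent: Googlebot
Disallow: /drafts/
```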

Can I have multiple sitemaps in my robots.txt?

Yes! You can add as many "Sitemap: [URL]" directives as you need anywhere in your robots.txt file. This helps search engines discover all your XML sitemaps efficiently.
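For example, a site that splits its sitemap by content type might list each one (the URLs below are placeholders):

```
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap-pages.xml
Sitemap: https://example.com/sitemap-posts.xml
Sitemap: https://example.com/sitemap-products.xml
```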
