
Robots.txt Generator

Create a robots.txt file to control how search engines crawl your website. Use presets or customize rules for specific bots.

Generated robots.txt
# robots.txt generated by Keyword Intel
# https://keywordintel.dev/tools/robots-txt-generator

User-agent: *
Disallow: /api/
Disallow: /admin/
Disallow: /private/
Allow: /
How to Use

1. Configure your rules using the options on the left
2. Copy the generated robots.txt content
3. Create a file named robots.txt in your website's root directory
4. Test at yourdomain.com/robots.txt (or fetch it from the command line, as shown below)
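A quick way to confirm the file is being served, with example.com standing in for your own domain:

curl https://example.com/robots.txt

If the command prints your rules, crawlers can read them too.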

What You Can Configure

User-Agent Rules

Create rules for all bots (*) or target specific crawlers like Googlebot or Bingbot.
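For instance, a block that applies only to Google's crawler (the blocked path is illustrative):

User-agent: Googlebot
Disallow: /experiments/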

Allow/Disallow Directives

Specify which paths crawlers can and cannot access on your site.
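A minimal sketch of how Allow carves an exception out of a broader Disallow (the paths are hypothetical):

User-agent: *
Disallow: /private/
Allow: /private/annual-report.html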

Sitemap Declaration

Point search engines to your XML sitemap for better indexing.
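The Sitemap directive takes an absolute URL and can appear anywhere in the file (example.com is a placeholder):

Sitemap: https://example.com/sitemap.xml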

Crawl Delay

Set the number of seconds crawlers should wait between requests to reduce server load. Most sites don't need this.
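For example, asking Bing's crawler to wait ten seconds between requests (note that Googlebot ignores Crawl-delay):

User-agent: Bingbot
Crawl-delay: 10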

What is robots.txt?

Robots.txt is a text file that website owners use to instruct search engine crawlers (also called robots or bots) about which pages or sections of their site should or should not be crawled. It's part of the Robots Exclusion Protocol (REP).

Why Do You Need a robots.txt File?

A robots.txt file serves several important purposes:

  • Keep crawlers out of private pages - Block admin panels, staging areas, and sensitive directories from being crawled
  • Manage crawl budget - Help search engines focus on your important pages by blocking unimportant ones
  • Prevent duplicate content - Block URL parameters, print versions, or other duplicate pages
  • Point to your sitemap - Tell search engines where to find your XML sitemap (a combined example follows below)
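A hypothetical file that covers all four purposes (the domain and paths are placeholders):

# Keep crawlers out of private areas
User-agent: *
Disallow: /admin/
Disallow: /staging/
# Avoid duplicate content from print versions and URL parameters
Disallow: /print/
Disallow: /*?sessionid=
# Point crawlers at the sitemap
Sitemap: https://example.com/sitemap.xml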

robots.txt Best Practices

Do

  • Place the file in your root directory (example.com/robots.txt)
  • Use it to block admin areas and private directories
  • Include your sitemap location
  • Test your robots.txt file before deploying

Don't

  • Use robots.txt to hide sensitive information (it's publicly accessible)
  • Block CSS or JavaScript files that search engines need to render pages (see the contrast sketched below)
  • Expect robots.txt to be a security measure (it's just a suggestion to bots)
  • Block your entire site unless you specifically don't want it indexed
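For instance, the first block below is too broad because it also hides CSS and JavaScript needed for rendering, while the second blocks only what should stay out of search (the /assets/ paths are hypothetical):

# Too broad: also blocks CSS and JavaScript under /assets/
User-agent: *
Disallow: /assets/

# Narrower: block only the subdirectory that shouldn't be crawled
User-agent: *
Disallow: /assets/internal-reports/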

Common robots.txt Directives

  • User-agent - Specifies which crawler the rules apply to
  • Disallow - Tells crawlers not to access specified paths
  • Allow - Explicitly allows access (overrides Disallow)
  • Sitemap - Points to your XML sitemap location
  • Crawl-delay - Sets the number of seconds between crawler requests
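Putting every directive together in one sketch (the domain and paths are placeholders, and Googlebot ignores Crawl-delay):

User-agent: *
Disallow: /admin/
Allow: /admin/help/

User-agent: Bingbot
Crawl-delay: 5

Sitemap: https://example.com/sitemap.xml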

FAQ

What is robots.txt?

Robots.txt is a text file that tells search engine crawlers which pages or files they can or cannot request from your site. It's placed in the root directory of your website.

Where do I put the robots.txt file?

The robots.txt file must be placed in the root directory of your website. For example, if your site is example.com, the file should be accessible at example.com/robots.txt.

Can robots.txt block all search engines?

Yes, you can block all search engines by using "User-agent: *" followed by "Disallow: /". However, this will prevent your site from appearing in search results entirely.
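Written out, that blocking rule is just two lines:

User-agent: *
Disallow: /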

Is robots.txt a security feature?

No, robots.txt is not a security feature. It's a suggestion to well-behaved crawlers, not a barrier. Malicious bots can ignore it entirely. Never use robots.txt to protect sensitive information.