

// seo · crawlers · search engines · web scraping
██████╗  ██████╗ ██████╗  ██████╗ ████████╗███████╗
██╔══██╗██╔═══██╗██╔══██╗██╔═══██╗╚══██╔══╝██╔════╝
██████╔╝██║   ██║██████╔╝██║   ██║   ██║   ███████╗
██╔══██╗██║   ██║██╔══██╗██║   ██║   ██║   ╚════██║
██║  ██║╚██████╔╝██████╔╝╚██████╔╝   ██║   ███████║
╚═╝  ╚═╝ ╚═════╝ ╚═════╝  ╚═════╝    ╚═╝   ╚══════╝

robots.txt generator_

Build a robots.txt file visually — add rules per bot, set allow/disallow paths, sitemaps, and crawl delays.
The live preview updates as you type. Copy the result and save it as /robots.txt at your site root. 100% client-side.
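For illustration, here is the kind of file the generator produces — the paths and sitemap URL below are placeholders, not defaults:

```
# Allow everything except the admin area
User-agent: *
Allow: /
Disallow: /admin/
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
```

Note that Crawl-delay is a non-standard directive: some crawlers (e.g. Bing and Yandex) honor it, but Googlebot ignores it.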

// quick presets
// global settings
// live preview robots.txt

      

Frequently Asked Questions — Robots.txt Generator

What is a robots.txt file?

A plain-text file at /robots.txt that tells web crawlers which URLs they may or may not crawl. It is the first file a well-behaved crawler checks before requesting any other page.

What is the Disallow directive?

Tells a crawler not to crawl the specified path prefix. Disallow: /admin/ blocks the admin directory; Disallow: / blocks the entire site. Compliance is voluntary: well-behaved crawlers honor it, but it is not access control.
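You can check how a crawler interprets a Disallow rule with Python's standard-library parser; the domain and paths here are just examples:

```python
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# /admin/ and everything under it is blocked; other paths remain crawlable
print(parser.can_fetch("*", "https://example.com/admin/settings"))  # False
print(parser.can_fetch("*", "https://example.com/blog/post"))       # True
```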

What is User-agent: * ?

Applies the rules that follow to all crawlers. Use a specific name like User-agent: Googlebot to target one crawler only; a crawler that finds a group matching its own name ignores the * group.
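For example, in the hypothetical file below Googlebot follows only its own group, while every other crawler falls back to the * group:

```
User-agent: Googlebot
Disallow: /drafts/

User-agent: *
Disallow: /admin/
```

Because Googlebot ignores the * group entirely, /admin/ is not blocked for it here; repeat that rule inside the Googlebot group if you want both restrictions to apply.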

Does robots.txt prevent indexing?

No. It only prevents crawling, and a disallowed URL can still be indexed if other sites link to it. To prevent indexing, use <meta name="robots" content="noindex"> or the X-Robots-Tag HTTP header — and leave the page crawlable so the directive can be seen.
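Either form of the directive keeps a crawled page out of the index; a sketch of both:

```
<!-- in the page's HTML <head> -->
<meta name="robots" content="noindex">

<!-- or as an HTTP response header (also works for non-HTML files like PDFs) -->
X-Robots-Tag: noindex
```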

Is this tool free?

Yes, completely free. One of 46 free tools at jasperbernaers.com.