
Boost SEO with robots.txt: Improve Site Performance Through Smarter Crawler Control

Published: 2025.01.08 Updated: 2026.03.12

Crawler control plays an important role in both SEO and website performance. Search-engine crawlers move through a website and collect information so they can retrieve the data needed to show pages in search results. By controlling crawler behavior appropriately, you can improve SEO results and site performance.

The central tool for this is robots.txt. This article explains robots.txt in depth, from the basics to practical use, points of caution, and advanced techniques, so that you can become genuinely proficient with it.

The Complete SEO Guide [2025 Edition]: The Full Map to Higher Search Rankings

Chapter 1: The basics of robots.txt


What is robots.txt? How crawler control works

Robots.txt is a plain-text file placed in the root directory of a website. It tells crawlers which parts of the site they may crawl and which parts they should not crawl.

When a crawler accesses a website, it usually reads robots.txt first and then crawls the site according to those instructions. Robots.txt is a request to crawlers, not a forceful block, but major search engines do respect it. However, because malicious crawlers and some other bots may ignore robots.txt, you should never rely on it alone to protect confidential information.

Where to place robots.txt, file format, and character set

Robots.txt must be placed in the root directory of the website, such as https://example.com/robots.txt.

It will not work if you place it in a subdirectory. The file name also has to be lowercase robots.txt.

The file format must be plain text, and UTF-8 encoding is strongly recommended. If you use another encoding, crawlers may fail to interpret the file correctly.

Basic syntax: User-agent, Disallow, Allow, and rule details

Robots.txt is written with directives such as User-agent, Disallow, and Allow. Directive names are case-insensitive, but the path values they match are case-sensitive, and each directive is written one per line.

  • User-agent: Specifies which crawler a rule applies to. You can name a specific crawler or use * for every crawler. By declaring multiple User-agent lines, you can define different rules for different crawlers. Examples: User-agent: Googlebot, User-agent: Bingbot, User-agent: *.
  • Disallow: Specifies a path that must not be crawled. It is written as a path relative to the site root, beginning with a slash. An empty Disallow line means everything is allowed. Examples: Disallow: /private/, Disallow:.
  • Allow: Specifies a path that may be crawled. It is used when you want to allow part of a location that has been blocked with Disallow. An Allow rule takes precedence over Disallow in that case. Example: Disallow: /private/ and Allow: /private/public.html.
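As a quick way to sanity-check rules like these, Python's standard-library <code>urllib.robotparser</code> can parse a robots.txt body and answer per-URL questions. This is only a sketch with illustrative paths; note that Python's parser applies rules in file order rather than Google's longest-match rule, which is why the Allow line is listed first here:

```python
import urllib.robotparser

# Hypothetical rules matching the examples above.
rules = """\
User-agent: *
Allow: /private/public.html
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# /private/ is blocked, but the explicitly allowed page is not.
print(rp.can_fetch("MyBot", "https://example.com/private/secret.html"))  # False
print(rp.can_fetch("MyBot", "https://example.com/private/public.html"))  # True
```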

How to use wildcards (*) and ($): flexible path matching and advanced usage

The asterisk matches any sequence of characters. For example, <code>Disallow: /*.pdf</code> blocks every URL whose path contains .pdf, and <code>Disallow: /images/*.jpg$</code> blocks only URLs ending in .jpg under the /images/ directory.

The dollar sign matches the end of the URL. For example, <code>Disallow: /blog/$</code> blocks access to the /blog/ directory itself while still allowing URLs such as /blog/article1/.
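Putting the two metacharacters together, a hypothetical robots.txt using these patterns might look like this (text after # is a comment and is ignored by crawlers):

```
User-agent: *
Disallow: /*.pdf$        # block URLs that end in .pdf
Disallow: /images/*.jpg$ # block JPG files under /images/
Disallow: /blog/$        # block /blog/ itself, but not /blog/article1/
```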

Setting Crawl-delay: reducing server load and its effect on Googlebot

With the Crawl-delay directive, you can specify the interval between crawler requests in seconds. This can help when server load is high, but Googlebot does not support Crawl-delay and simply ignores it. Google formerly offered a crawl-rate limiter in Search Console, but has since retired that tool, citing improvements in automatic crawl-rate adjustment and a broader effort to simplify the user experience. Crawl rate for Googlebot is now handled automatically and usually requires no attention.

Crawl-delay may still be honored by other crawlers, however, so it remains useful for slowing them down.

Specifying Sitemap: guiding crawlers and handling multiple sitemaps

You can specify sitemap URLs with the Sitemap directive. This helps crawlers understand the structure of the website more easily and improves crawl efficiency. You can also specify multiple sitemaps. Examples: <code>Sitemap: https://example.com/sitemap.xml</code> and <code>Sitemap: https://example.com/sitemap_images.xml</code>.
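As a small sketch (Python 3.8 or later), the standard-library parser also exposes any Sitemap lines it finds, using the illustrative URLs above:

```python
import urllib.robotparser

rules = """\
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/sitemap_images.xml
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Returns the Sitemap URLs in the order they appear in the file.
print(rp.site_maps())
```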

Supercharge SEO: Build a Google-Friendly Site Structure with sitemap.xml

Chapter 2: Practical robots.txt examples


Protecting login-required pages: Disallow: /member/

Content that requires login, such as members-only pages, should generally be excluded from search-engine indexing.

By using robots.txt, you can prevent crawlers from accessing these pages and reduce wasted crawling. For example, if members-only content is stored under /member/, writing <code>Disallow: /member/</code> blocks access to every file and subdirectory under that location.

However, robots.txt is only a request to crawlers, so malicious crawlers may ignore it.

Truly sensitive information must be protected with server-side authentication rather than robots.txt. Robots.txt should be treated as a supporting method for limiting crawler access and saving server resources. In many cases, it is appropriate to allow access to the login page itself so that crawlers can understand that authentication is required.
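A hedged sketch of this setup, assuming the members-only area lives under /member/ and the login page is /member/login.html (both paths illustrative):

```
User-agent: *
Disallow: /member/
Allow: /member/login.html
```

Because the Allow rule is more specific than the Disallow rule, major search engines let it take precedence, so the login page stays crawlable while the rest of the member area is blocked.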

Controlling parameterized URLs: Disallow: /*?page=*

Parameterized URLs can sometimes make the same content accessible under multiple URLs, which may be treated as duplicate content. For example, if you use a <code>?page=</code> parameter for pagination, you may end up with pages like example.com/blog?page=1 and example.com/blog?page=2 that have different URLs but almost the same content.

By writing <code>Disallow: /*?page=*</code>, you can block access to every URL that includes the page= parameter. However, this can remove all paginated content from search engines and may hurt SEO.

A better approach is usually the canonical tag. For parameter variants that serve essentially the same content as a clean URL, point the variant at the canonical URL, such as example.com/blog, to avoid duplicate-content issues. Be aware, though, that Google advises against pointing every paginated page at page 1; paginated pages whose content genuinely differs should each carry a self-referencing canonical tag.

Using robots.txt to control pagination should be treated as a last resort when implementing canonical tags is not possible.
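For reference, a canonical tag is a single line in the page head; here a hypothetical parameter variant points at its clean URL:

```html
<!-- In the <head> of a duplicate variant such as /blog?sort=asc -->
<link rel="canonical" href="https://example.com/blog">
```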

Controlling a specific crawler: User-agent: YandexBot Disallow: /

With the User-agent directive, you can set different rules for different crawlers. If you write <code>User-agent: YandexBot</code> and then <code>Disallow: /</code>, only YandexBot will be blocked from the entire site. Other crawlers will follow rules set under other User-agent sections, or the rules under <code>User-agent: *</code>.

Typical cases where you may want to control a specific crawler include the following.

  • When a specific crawler is placing excessive load on the server
  • When a specific crawler is ignoring robots.txt and causing problems
  • When you want to hide region-specific content from crawlers of search engines that are not used in that region

In these and similar cases, the User-agent directive is useful. The names of major search-engine crawlers can be confirmed in each search engine’s official documentation.
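These per-crawler groups can also be sanity-checked in a sketch with Python's standard-library parser (bot names illustrative):

```python
import urllib.robotparser

rules = """\
User-agent: YandexBot
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Only the crawler matching the named group is blocked site-wide;
# everyone else falls back to the rules under User-agent: *.
print(rp.can_fetch("YandexBot", "https://example.com/page.html"))     # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/page.html"))  # True
```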

Chapter 3: Cautions and common mistakes in robots.txt


Robots.txt is a powerful tool, but incorrect settings can have serious consequences for a website. This chapter explains common mistakes and points of caution so that you can use robots.txt safely and effectively.

3.1 SEO damage from robots.txt mistakes: falling out of search

The most serious mistake in robots.txt is accidentally blocking important pages from crawling.

If you disallow product pages or service pages, for example, those pages may fall out of the search index and disappear from search results. That directly reduces website traffic and can severely harm SEO.

Whenever you change robots.txt, always verify it before and after deployment: the robots.txt report in Google Search Console shows whether Google fetched and parsed the file successfully, and a robots.txt testing tool can confirm that only the intended pages are blocked. After the change, continue monitoring rankings and traffic regularly so you can catch any unintended effects.

3.2 The mistake of using Allow for pages you meant to block

The Allow directive should be used only when you want to permit part of a location that has been blocked with Disallow. For example, if you want to block /private/ but allow only /private/public.html, you would use both <code>Disallow: /private/</code> and <code>Allow: /private/public.html</code>.

Using Allow alone for an area that has not been disallowed has no effect. Crawlers generally assume every page is accessible unless it has been explicitly blocked with Disallow.

3.3 Case sensitivity: pay close attention

Directive names such as User-agent, Disallow, and Allow are case-insensitive, but the URL paths they match are case-sensitive. For example, <code>Disallow: /Images/</code> blocks /Images/ but leaves /images/ crawlable.

When writing robots.txt, match the capitalization of your actual URLs exactly and check carefully for typographical errors.
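A brief sketch of path case sensitivity, again using Python's standard-library parser with illustrative paths:

```python
import urllib.robotparser

rules = """\
User-agent: *
Disallow: /Images/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The rule only matches the exact capitalization /Images/.
print(rp.can_fetch("MyBot", "https://example.com/Images/logo.png"))  # False
print(rp.can_fetch("MyBot", "https://example.com/images/logo.png"))  # True
```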

3.4 Differences in crawler behavior: dealing with malicious crawlers

Robots.txt works with good-faith crawlers such as Googlebot and Bingbot, but malicious crawlers may ignore it completely. That means robots.txt alone cannot protect sensitive information.

Information that is truly confidential must be protected with server-side authentication or access restrictions. You need to understand that robots.txt is only a tool for controlling cooperative crawlers and is not sufficient as a security measure.

3.5 Robots.txt alone cannot provide security

As noted above, robots.txt is insufficient as a security measure. Anyone can read the contents of a robots.txt file, so malicious users may use it as a clue for finding restricted areas.

Real security requires a layered approach that combines multiple methods, including password protection, access control lists, and firewalls, not robots.txt alone.

3.6 Unexpected behavior from overusing wildcards

Wildcards such as * and $ make path matching more flexible, but overusing them can block pages you never meant to block. For example, <code>Disallow: /*image*</code> would block not only the /images/ directory but also a URL such as /article/my-image.jpg.

When using wildcards, check the full scope of their effect carefully and make sure you are not blocking pages unintentionally.

3.7 robots.txt caching: delays before changes are reflected

Search engines cache robots.txt, so changes are not always reflected immediately. Even if you check with a testing tool right after editing it, the result may still be based on the previous version.

In Google Search Console, the robots.txt report lets you request a recrawl of robots.txt. This can shorten the delay before the cache updates and your changes are reflected.

By following these cautions and configuring robots.txt properly, you can improve SEO and avoid unnecessary risk.

Chapter 4: robots.txt creation tools and verification methods


This chapter explains how to create, test, and revise robots.txt efficiently. By following these steps, you can prevent unintended mistakes and maximize website performance.

4.1 Using robots.txt creation tools

You can write robots.txt manually, but online tools let you do it faster and with fewer mistakes. These tools generate a robots.txt file automatically once you input the necessary directives, which helps reduce syntax errors and rule mistakes.

Representative tools include the following.

  • Google Search Console robots.txt report: A built-in Search Console report that shows which robots.txt files Google found, when they were last fetched, and any parsing errors. (The older robots.txt Tester, which could also edit and test the file directly, has been retired.) If you already use Search Console, this is often the easiest starting point.
  • SEO checker tools: Some SEO tools include robots.txt generation features. Because they can be used together with other SEO functions, they are convenient when optimizing a site more broadly.
  • Other online robots.txt generators: If you search the web for robots.txt generator, you will find many free tools. These are suitable for creating a simple robots.txt file.

Which tool is best depends on your needs and the size of the website.

4.2 Testing robots.txt in Google Search Console

Once you create robots.txt, you must test it to verify that crawlers interpret it correctly. The legacy robots.txt Tester in Google Search Console has been retired; today, the robots.txt report shows whether Google was able to fetch and parse your file.

The checking process is as follows.

  1. Open Google Search Console and select the property for the target website.
  2. Open Settings and choose the robots.txt report.
  3. Review the fetch status, the last crawl date, and any warnings or errors.
  4. To check whether a specific URL is blocked, use the URL Inspection tool or a third-party robots.txt tester.

Whenever you change robots.txt, confirm that the file works exactly as intended before relying on it.

4.3 Reviewing and fixing robots.txt

Because robots.txt is placed in the root directory of a website, you can open it directly in a browser, review its contents, and revise it if necessary. For example, accessing https://example.com/robots.txt will display the file.

When making corrections, open robots.txt in a text editor, make the necessary changes, and upload it to the server. Because search engines need to refresh their cache, it may take a little time before the changes are reflected.

The robots.txt report in Google Search Console confirms which version of the file Google has fetched, making it easier to verify that your corrections have taken effect.

By following these steps, you can keep robots.txt in an optimal state and improve both SEO and site performance.

Chapter 5: Crawler control beyond robots.txt

Differences from the meta robots tag and how to use each

The meta robots tag controls indexing on an individual page basis and, used alongside robots.txt, enables finer control. Noindex instructs search engines not to index a page, and nofollow instructs them not to follow its links. Note that noindex only works if the page can be crawled: if a page is blocked in robots.txt, crawlers never see the noindex tag, so an already indexed page may stay in search results. To remove an indexed page, allow crawling and apply noindex instead.

Using it together with noindex and nofollow

You can specify multiple directives separated by commas, such as noindex,follow.
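For reference, the combined directive is a single meta tag in the page head:

```html
<!-- Keep this page out of the index, but still follow its links -->
<meta name="robots" content="noindex,follow">
```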

Control through the X-Robots-Tag HTTP header

By using X-Robots-Tag in the HTTP response header, you can control crawling for non-HTML files such as PDFs and images as well. This requires server-side configuration.
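As a hedged sketch for Apache (assuming mod_headers is enabled), the following adds an X-Robots-Tag header to every PDF response; adapt the pattern for your own server:

```apache
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```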

Summary

Robots.txt is an indispensable tool for both SEO and website performance.

When you understand the points covered in this article and configure robots.txt properly, you can draw out the full potential of your website. It is important to stay current and keep optimizing robots.txt over time.

Appendix: robots.txt examples, including advanced ones

  • Allow only certain file types for a specific crawler:

    User-agent: Googlebot-Image
    Allow: /images/*.jpg
    Allow: /images/*.png
    Disallow: /

    User-agent: *
    Disallow: /images/

  • Slow down access for a specific crawler:

    User-agent: AhrefsBot
    Crawl-delay: 10

    User-agent: *
    Allow: /

Use these advanced patterns to optimize your website and move it toward success.