
Boost SEO with robots.txt: improve site performance through better crawler control

Published: 2025.01.08 Updated: 2026.03.12

Crawler control plays an important role in both SEO and site performance. Search-engine robots move through a website and collect the data needed to display its pages in search results. Steering their behavior appropriately lets you improve both SEO and site performance.

The main tool for this is robots.txt. This article explains robots.txt in detail, from the basics through practical use, precautions, and advanced techniques, so that you can use it with real confidence.

Complete SEO guide [2025 edition]: the full roadmap to higher search rankings

Chapter 1: robots.txt basics


What is robots.txt? How crawler control works

Robots.txt is a text file placed in the root directory of a website. It tells crawlers which parts of the site they may crawl and which they should not.

When a crawler accesses a website, it usually reads robots.txt first and then crawls the site according to those instructions. Robots.txt is a request to crawlers, not a forceful block, but major search engines do respect it. However, because malicious crawlers and some other bots may ignore robots.txt, you should never rely on it alone to protect confidential information.

Where to place robots.txt, file format, and character encoding

Robots.txt must be placed in the root directory of the website, for example https://example.com/robots.txt.

It will not work if you place it in a subdirectory. The file name must be written in lowercase as robots.txt.

The file must be plain text, and UTF-8 encoding is strongly recommended. If you use a different encoding, crawlers may not read the file correctly.

Basic syntax: User-agent, Disallow, Allow, and rule details

Robots.txt is written with directives such as User-agent, Disallow, and Allow. Each directive goes on its own line, and the paths used in rules are case-sensitive.

  • User-agent:

    Specifies which crawler a rule applies to. You can name a specific crawler or use * for every crawler. By declaring multiple User-agent lines, you can define different rules for different crawlers. Examples: User-agent: Googlebot, User-agent: Bingbot, User-agent: *.

  • Disallow:

    Specifies a path that must not be crawled. It is written as a relative path beginning with a slash. An empty Disallow line means everything is allowed. Examples: Disallow: /private/ and an empty Disallow:.

  • Allow:

    Specifies a path that may be crawled. It is used when you want to allow part of a location that has been blocked with Disallow. An Allow rule takes precedence over Disallow in that case. Example: Disallow: /private/ combined with Allow: /private/public.html.
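Putting these directives together, a minimal robots.txt might look like the following sketch (the /private/ paths are the placeholder examples used above):

    User-agent: *
    Disallow: /private/
    Allow: /private/public.html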

How to use the wildcards * and $: flexible path matching and advanced use

The asterisk (*) matches any sequence of characters. For example, Disallow: /*.pdf blocks every PDF file, and Disallow: /images/*.jpg$ blocks only URLs under the /images/ directory that end in .jpg.

The dollar sign ($) matches the end of a URL. For example, Disallow: /blog/$ blocks access to the /blog/ URL itself while still allowing addresses such as /blog/article1/.
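As an illustration, the wildcard rules above could be combined into a single block like this (the paths are the placeholder examples from this section):

    User-agent: *
    Disallow: /*.pdf
    Disallow: /images/*.jpg$
    Disallow: /blog/$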

The Crawl-delay setting: reducing server load and its effect on Googlebot

With the Crawl-delay directive, you can specify the interval between crawler requests in seconds. This can help when server load is high, but Googlebot does not officially support Crawl-delay. Google previously recommended crawl-rate settings in Search Console, but now handles this automatically, so it usually does not require much attention.

Google has explained that, because it has improved its automatic crawl-rate adjustment and as part of a broader effort to simplify the user experience, it is ending support for the crawl-rate limiter tool in Search Console (see the announcement "Planned end of support for the crawl-rate limiter tool in Search Console").

Crawl-delay may still have an effect on other crawlers.
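For example, a rule asking a crawler to wait ten seconds between requests might look like this sketch (Bingbot is used here only as an illustration; check the target crawler's documentation to confirm it honors Crawl-delay):

    User-agent: Bingbot
    Crawl-delay: 10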

Specifying Sitemap: guiding crawlers and handling multiple sitemaps

You can specify sitemap URLs with the Sitemap directive. This helps crawlers understand the structure of the website more easily and improves crawl efficiency. You can also specify multiple sitemaps. Examples: Sitemap: https://example.com/sitemap.xml and Sitemap: https://example.com/sitemap_images.xml.
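For instance, a file that allows all crawling but lists both of the sitemaps above might look like this sketch:

    User-agent: *
    Disallow:

    Sitemap: https://example.com/sitemap.xml
    Sitemap: https://example.com/sitemap_images.xml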

Supercharge SEO: Build a Google-Friendly Site Structure with sitemap.xml

Chapter 2: practical robots.txt examples


Protecting pages that require login: Disallow: /member/

Content that requires login, such as members-only pages, should generally be excluded from search-engine indexing.

By using robots.txt, you can prevent crawlers from accessing these pages and reduce wasted crawling. For example, if members-only content is stored under /member/, writing Disallow: /member/ blocks access to every file and subdirectory under that location.

However, robots.txt is only a request to crawlers, so malicious crawlers may ignore it.

Truly sensitive information must be protected with server-side authentication rather than robots.txt. Robots.txt should be treated as a supporting method for limiting crawler access and saving server resources. In many cases, it is appropriate to allow access to the login page itself so that crawlers can understand that authentication is required.
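A minimal sketch of this setup, assuming the login page lives at the hypothetical path /member/login.html, could be:

    User-agent: *
    Disallow: /member/
    Allow: /member/login.html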

Controlling parameterized URLs: Disallow: /*?page=*

Parameterized URLs can sometimes make the same content accessible under multiple URLs, which may be treated as duplicate content. For example, if you use a ?page= parameter for pagination, you may end up with pages like example.com/blog?page=1 and example.com/blog?page=2 that have different URLs but almost the same content.

By writing Disallow: /*?page=*, you can block access to every URL that includes the page= parameter. However, this can remove all paginated content from search engines and may hurt SEO.

A better approach is to use a canonical tag and indicate the canonical URL. If every paginated page points to the first page, such as example.com/blog, with a canonical tag, you can avoid duplicate-content issues and communicate the correct page to search engines.
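For reference, a canonical tag on a paginated page might look like this, using the example URL above (the exact markup depends on your CMS or template):

    <link rel="canonical" href="https://example.com/blog">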

Using robots.txt to control pagination should be treated as a last resort when implementing canonical tags is not possible.

Controlling a specific crawler: User-agent: YandexBot with Disallow: /

With the User-agent directive, you can set different rules for different crawlers. If you write User-agent: YandexBot and then Disallow: /, only YandexBot will be blocked from the entire site. Other crawlers will follow rules set under other User-agent sections, or the rules under User-agent: *.
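A minimal sketch of such a file, blocking only YandexBot while leaving everything open for other crawlers, could look like this:

    User-agent: YandexBot
    Disallow: /

    User-agent: *
    Disallow: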

Typical cases where you may want to control a specific crawler include the following.

  • When a specific crawler is placing excessive load on the server

  • When a specific crawler is ignoring robots.txt and causing problems

  • When you want to hide region-specific content from crawlers of search engines that are not used in that region

In these and similar cases, the User-agent directive is useful. The names of major search-engine crawlers can be confirmed in each search engine’s official documentation.

Chapter 3: warnings and common robots.txt mistakes


Robots.txt is a powerful tool, but incorrect settings can have serious consequences for a website. This chapter explains common mistakes and points of caution so that you can use robots.txt safely and effectively.

3.1 SEO damage from robots.txt mistakes: dropping out of search results

The most serious mistake in robots.txt is accidentally blocking important pages from crawling.

If you disallow product pages or service pages, for example, those pages may fall out of the search index and disappear from search results. That directly reduces website traffic and can severely harm SEO.
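The classic example of this mistake is a single stray slash; the following two lines block the entire site for every crawler:

    User-agent: *
    Disallow: /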

Whenever you change robots.txt, always use the robots.txt testing tool in Google Search Console to confirm that only the intended pages are blocked. After the change, continue monitoring rankings and traffic regularly so you can catch any unintended effects.

3.2 The mistake of using Allow for pages that were meant to be blocked

The Allow directive should be used only when you want to permit part of a location that has been blocked with Disallow. For example, if you want to block /private/ but allow only /private/public.html, you would use both Disallow: /private/ and Allow: /private/public.html.

Using Allow alone for an area that has not been disallowed has no effect. Crawlers generally assume every page is accessible unless it has been explicitly blocked with Disallow.

3.3 Letter case: pay close attention

Path values in robots.txt rules are case-sensitive: Disallow: /images/ and Disallow: /Images/ match different URLs, so a rule written with the wrong case will not work as intended. Major crawlers treat the directive names themselves (User-agent, Disallow, Allow) case-insensitively, but writing them with consistent capitalization keeps the file readable and avoids surprises with stricter parsers.

When writing robots.txt, always double-check the capitalization of paths and look carefully for typographical errors.

3.4 Differences in crawler behavior: how to deal with malicious crawlers

Robots.txt works with good-faith crawlers such as Googlebot and Bingbot, but malicious crawlers may ignore it completely. That means robots.txt alone cannot protect sensitive information.

Information that is truly confidential must be protected with server-side authentication or access restrictions. You need to understand that robots.txt is only a tool for controlling cooperative crawlers and is not sufficient as a security measure.

3.5 Robots.txt alone does not provide security

As noted above, robots.txt is insufficient as a security measure. Anyone can read the contents of a robots.txt file, so malicious users may use it as a clue for finding restricted areas.

Real security requires a layered approach that combines multiple methods, including password protection, access control lists, and firewalls, not robots.txt alone.

3.6 Unexpected behavior from overusing wildcards

Wildcards such as * and $ make path matching more flexible, but overusing them can block pages you never meant to block. For example, Disallow: /*image* would block not only the /images/ directory but also a URL such as /article/my-image.jpg.
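For comparison, here is the over-broad rule next to a narrower alternative (comments after # are permitted in robots.txt):

    # Too broad: also blocks /article/my-image.jpg
    Disallow: /*image*

    # Narrower: blocks only the /images/ directory
    Disallow: /images/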

When using wildcards, check the full scope of their effect carefully and make sure you are not blocking pages unintentionally.

3.7 Robots.txt caching: delays before changes take effect

Search engines cache robots.txt, so changes are not always reflected immediately. Even if you check with a testing tool right after editing it, the result may still be based on the previous version.

In Google Search Console, you can request that robots.txt be fetched again through the robots.txt tester. This can shorten the delay before the cache updates and your changes are reflected.

By following these cautions and configuring robots.txt properly, you can improve SEO and avoid unnecessary risk.

Chapter 4: tools for creating robots.txt and verification methods


This chapter explains how to create, test, and revise robots.txt efficiently. By following these steps, you can prevent unintended mistakes and maximize website performance.

4.1 Using tools for creating robots.txt

You can write robots.txt manually, but online tools let you do it faster and with fewer mistakes. These tools generate a robots.txt file automatically once you input the necessary directives, which helps reduce syntax errors and rule mistakes.

Representative tools include the following.

  • Google Search Console robots.txt tester:

    A built-in Search Console tool that can create, edit, and test robots.txt. If you already use Search Console, this is often the easiest choice.

  • SEO checker tools:

    Some SEO tools include robots.txt generation features. Because they can be used together with other SEO functions, they are convenient when optimizing a site more broadly.

  • Other online robots.txt generators:

    If you search the web for robots.txt generator, you will find many free tools. These are suitable for creating a simple robots.txt file.

Which tool is best depends on your needs and the size of the website.

4.2 Testing robots.txt in Google Search Console

Once you create robots.txt, you must test it to verify that crawlers interpret it correctly. Google Search Console provides a robots.txt testing tool that can show whether a specific URL is crawlable and whether there are mistakes in the file.

The testing process is as follows.

  1. Open Google Search Console and select the property for the target website.

  2. Choose the robots.txt tester from the menu on the left.

  3. Enter the URL you want to test and click the Test button.

  4. Review whether the URL is crawlable and which directive is being applied.

Whenever you change robots.txt, use this tool and confirm that the file works exactly as intended.

4.3 Reviewing and correcting robots.txt

Because robots.txt is placed in the root directory of a website, you can open it directly in a browser, review its contents, and revise it if necessary. For example, accessing https://example.com/robots.txt will display the file.

When making corrections, open robots.txt in a text editor, make the necessary changes, and upload it to the server. Because search engines need to refresh their cache, it may take a little time before the changes are reflected.

The robots.txt tester in Google Search Console lets you edit and test at the same time, making it easier to iterate on corrections and verification.

By following these steps, you can keep robots.txt in an optimal state and improve both SEO and site performance.

Chapter 5: crawler control beyond robots.txt

Differences from the meta robots tag and how to use each of them

The meta robots tag is used to control crawlers on an individual page basis. When used together with robots.txt, it enables finer control. Noindex instructs search engines not to index a page, and nofollow instructs them not to follow its links. Note that noindex only works if crawlers can actually fetch the page: if the page is also blocked in robots.txt, search engines cannot see the tag, so to remove an already indexed page from search results you should leave it crawlable and rely on noindex instead.

Using it together with noindex and nofollow

You can specify multiple directives separated by commas, such as noindex,follow.
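For example, a page-level directive placed in the HTML head could look like this (a standard form of the tag; adjust the content value to your needs):

    <meta name="robots" content="noindex,follow">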

Control via the X-Robots-Tag HTTP header

By using X-Robots-Tag in the HTTP response header, you can control crawling for non-HTML files such as PDFs and images as well. This requires server-side configuration.
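A minimal sketch, assuming an Apache server with mod_headers enabled, might add the header to every PDF like this (other web servers have equivalent mechanisms):

    <FilesMatch "\.pdf$">
      Header set X-Robots-Tag "noindex"
    </FilesMatch>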

Summary

Robots.txt is an indispensable tool for both SEO and site performance.

Once you understand the points covered in this article and configure robots.txt correctly, you can draw out the full potential of your website. It is important to stay up to date and keep optimizing robots.txt.

Appendix: robots.txt examples, including advanced ones

  • Allow specific file types for a specific crawler:

    User-agent: Googlebot-Image
    Allow: /images/*.jpg
    Allow: /images/*.png
    Disallow: /

    User-agent: *
    Disallow: /images/

  • Slow down access for a specific crawler:

    User-agent: AhrefsBot
    Crawl-delay: 10

    User-agent: *
    Allow: /

Use these advanced patterns to optimize your site and bring it closer to success.