What is a robots.txt file?

A robots.txt file is a plain-text file placed at the root of a website that contains instructions for web robots. To gather information from and about your site, search engines like Google and Bing use programs called “robots” (also known as bots or spiders) to retrieve web documents, index their content, and follow hyperlinks to discover new documents. This process is called “crawling” a site. It is fully automated and is necessary for search engines to display your site’s content in their results pages.

Webmasters can instruct search engines not to crawl specific pages or directories by listing them in the website’s robots.txt file. Common exclusions include admin pages, includes, non-public file libraries, e-commerce checkout pages, duplicates (for example, “print versions” of content), and any other page that robots should not visit or display in search results.
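For example, a minimal robots.txt file with a few common exclusions might look like the following sketch; the paths shown are illustrative placeholders, not rules from any particular site:

# Apply these rules to all robots
User-agent: *
# Block crawling of these directories (example paths only)
Disallow: /admin/
Disallow: /includes/
Disallow: /checkout/
Disallow: /print/

If a robots.txt file contains no Disallow rules (or an empty Disallow line), robots are free to crawl the entire site.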

This section demonstrates how to assign a robots.txt file to your Bento pages.

How to add rules to your Bento pages

Station Bento has a default robots.txt file that disallows the /admin/ folder. If this file is already assigned to your site, you only need to edit it if additional pages need to be blocked.
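Assuming the default file matches the description above, its contents would be along these lines; the /private-reports/ entry is only a hypothetical example of an additional rule you might add for your own site:

# Apply these rules to all robots
User-agent: *
# Default rule: keep robots out of the admin area
Disallow: /admin/
# Hypothetical extra rule blocking another non-public directory
Disallow: /private-reports/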

Figure 1

Figure 2

Figure 3

Your website now contains a robots.txt file, and there is nothing more you need to do.