About Amazonbot

Amazonbot is Amazon's web crawler used to improve our services, such as enabling Alexa to answer even more questions for customers. Amazonbot is a polite crawler that respects standard robots.txt rules and robots meta tags.

How Can I Identify Amazonbot?

In the user-agent string, you'll see “Amazonbot” together with additional agent information. An example looks like this:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonbot/0.1)
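
If you want to flag Amazonbot traffic in your own application or log-processing code, a minimal sketch is to look for the "Amazonbot" token in the User-Agent header. The function below is illustrative, not part of Amazonbot itself:

# Sketch: recognize Amazonbot requests by the "Amazonbot" token in the
# User-Agent header. The example header value is the one shown above.
def is_amazonbot(user_agent: str) -> bool:
    return "Amazonbot" in user_agent

ua = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) "
      "AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 "
      "Safari/600.2.5 (Amazonbot/0.1)")
print(is_amazonbot(ua))  # True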


How Can I Control What Amazonbot Crawls on My Site?

Robots.txt: Amazonbot respects standard robots.txt directives like Allow and Disallow. In the example below, Amazonbot won't crawl documents that are under /do-not-crawl/, but will crawl the subdirectory /do-not-crawl/except-this/; all other robots are also blocked from /not-allowed/:

User-agent: Amazonbot               # Amazon's user agent
Disallow: /do-not-crawl/            # disallow this directory
Allow: /do-not-crawl/except-this/   # allow this subdirectory

User-agent: *                # any robot
Disallow: /not-allowed/      # disallow this directory
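
If you want to sanity-check rules like these before publishing them, one option is to parse the file with Python's standard urllib.robotparser module. This is only a sketch: the rules string mirrors the example above, and robotparser's first-match handling of Allow/Disallow may differ from a particular crawler's longest-match behavior.

# Sketch: parse robots.txt rules locally and check which paths the
# "Amazonbot" user agent may fetch.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Amazonbot
Disallow: /do-not-crawl/
Allow: /do-not-crawl/except-this/

User-agent: *
Disallow: /not-allowed/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Amazonbot", "/do-not-crawl/page.html"))  # False
print(parser.can_fetch("Amazonbot", "/public/page.html"))        # True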


Page-Level Meta Tags: Amazonbot supports the robots nofollow directive in meta tags in HTML pages. Place the meta tag in the <head> section of the document to specify robots rules, like this:

<html><head><meta name="robots" content="nofollow"/>
...
</head>
<body>...</body>
</html>


Once you have included this directive, Amazonbot won't follow any links on the page.
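
As a quick way to confirm the meta tag is present in your markup, you could scan a page with Python's standard html.parser module. This checker is a hypothetical example, not something Amazonbot provides:

# Sketch: detect a robots "nofollow" meta tag with Python's html.parser.
from html.parser import HTMLParser

class RobotsMetaChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.nofollow = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if (tag == "meta"
                and attrs.get("name", "").lower() == "robots"
                and "nofollow" in attrs.get("content", "").lower()):
            self.nofollow = True

html = '<html><head><meta name="robots" content="nofollow"/></head><body></body></html>'
checker = RobotsMetaChecker()
checker.feed(html)
print(checker.nofollow)  # True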

Link-Level Rel Parameter: We also support the link-level rel="nofollow" directive. Include it in your HTML like this to keep Amazonbot from following and crawling a particular link:

<a href="signin.php" rel="nofollow">Sign in</a>
...


Contact Us

If you have questions or concerns, please contact us.