SEOTracerBot
This page provides technical details about SEOTracerBot, the official crawler used by the SEO Tracer application for website analysis.
About
SEOTracerBot is a site crawler that retrieves publicly available web pages to power SEO Tracer’s on-device analysis, including link checks, metadata extraction, and status code reporting. The crawler respects robots.txt directives and uses reasonable rate limits.
User-agent
By default, requests identify themselves with the following user-agent string:
Mozilla/5.0 (Macintosh; Intel Mac OS X) SEOTracerBot/1.0 (+https://www.seotracer.app/bot)
Some users of the SEO Tracer macOS app may customize the user-agent string for testing. Only requests whose user agent matches the pattern above, including the bot-info URL, should be treated as SEOTracerBot.
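If you filter server logs by user agent, a check along these lines can distinguish SEOTracerBot from customized or generic agents. This is a minimal sketch; the regular expression below is an assumption based on the documented string, not an official matcher.

```python
import re

# Assumed pattern derived from the documented UA string above:
# it requires both the "SEOTracerBot/<version>" token and the bot-info URL.
UA_PATTERN = re.compile(
    r"SEOTracerBot/\d+\.\d+ \(\+https://www\.seotracer\.app/bot\)"
)

def looks_like_seotracerbot(user_agent: str) -> bool:
    """Return True if the user-agent string matches the documented pattern."""
    return UA_PATTERN.search(user_agent) is not None
```

Note that a matching user agent alone is not proof of origin, since any client can send this string; combine it with the DNS verification described below.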
Verify SEOTracerBot
To verify a request came from SEOTracerBot, use reverse DNS and forward DNS lookups on the crawler IP:
- Perform a reverse DNS lookup on the IP address to get a hostname.
- Perform a forward DNS lookup on that hostname and confirm it maps back to the same IP.
If both lookups agree, the request is likely genuine. If in doubt, rate-limit or block the IP and contact us.
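The two-step lookup above can be sketched as follows. The hostname suffix `.seotracer.app` is an assumption for illustration; confirm the actual crawler hostname suffix before relying on it. The lookup functions are injectable so the logic can be tested without network access.

```python
import socket

# Assumed crawler hostname suffix -- verify against official documentation.
BOT_HOST_SUFFIX = ".seotracer.app"

def is_seotracerbot(ip, reverse=socket.gethostbyaddr, forward=socket.gethostbyname_ex):
    """Verify a crawler IP: reverse DNS to a hostname, then confirm the
    forward lookup of that hostname maps back to the same IP."""
    try:
        hostname = reverse(ip)[0]          # step 1: reverse DNS lookup
    except OSError:
        return False
    if not hostname.endswith(BOT_HOST_SUFFIX):
        return False
    try:
        forward_ips = forward(hostname)[2]  # step 2: forward DNS lookup
    except OSError:
        return False
    return ip in forward_ips                # must map back to the same IP
```

This mirrors the verification scheme used by major crawlers: reverse DNS alone can be spoofed by whoever controls the IP's PTR record, so the forward confirmation is essential.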
robots.txt
SEOTracerBot respects the robots.txt protocol, including Disallow, Allow, and Crawl-delay (where supported). Example configurations:
Allow crawling (default)
User-agent: SEOTracerBot
Allow: /
Disallow crawling
User-agent: SEOTracerBot
Disallow: /
Crawl-delay (optional)
User-agent: SEOTracerBot
Crawl-delay: 5
Note: Crawl-delay is not officially part of the standard and may not be honored by all crawlers, but SEOTracerBot will respect it.
Crawl rate and politeness
SEOTracerBot is designed to be polite. Typical crawl concurrency is low and adapts based on server responses. If you experience issues, please add a Crawl-delay directive or block our bot specifically via robots.txt.
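To see how a Crawl-delay directive is interpreted, Python's standard-library robots.txt parser can be used to check both the delay and per-path permissions. This is an illustrative sketch, not SEOTracerBot's implementation; the robots.txt content and paths below are examples.

```python
import urllib.robotparser

# Example robots.txt combining a crawl delay with a disallowed path.
ROBOTS_TXT = """\
User-agent: SEOTracerBot
Crawl-delay: 5
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A polite crawler would wait this many seconds between requests.
delay = rp.crawl_delay("SEOTracerBot")

# Per-path permissions from the same rules.
allowed = rp.can_fetch("SEOTracerBot", "https://example.com/page")
blocked = rp.can_fetch("SEOTracerBot", "https://example.com/private/report")
```

A crawler honoring these rules would sleep `delay` seconds between requests and skip any URL for which `can_fetch` returns False.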
Opt-out
To opt out, disallow the user agent in your robots.txt:
User-agent: SEOTracerBot
Disallow: /
Contact
If you have questions about SEOTracerBot or need assistance, please visit our Support page.