
User-initiated fetches are HTTP requests made by an AI engine on behalf of an end user, typically when the user asks a question that requires fetching a specific URL. They sit between training crawls and search crawlers in behavior: they are bot-like in form but user-like in intent.

What they do

A user-initiated fetch happens when:

  • A user asks the engine a question that names a specific URL or domain.
  • A user asks the engine to summarize a page they’re looking at.
  • A user asks a question and the engine decides to fetch a fresh page rather than rely on its index.
  • A user follows up on an answer and the engine needs to verify or expand on a source.

Fetch traffic is bursty rather than continuous. A site might see no user-initiated fetches for hours, then a sudden spike when a query trends.

Major user-initiated fetch bots

| Bot | Operator | Triggered by |
|---|---|---|
| ChatGPT-User | OpenAI | A ChatGPT user asking a question that needs a fresh page fetch |
| Claude-User | Anthropic | A Claude user invoking web search or a URL fetch |
| Perplexity-User | Perplexity | A Perplexity user asking a question that needs live retrieval |

Some operators distinguish further between “user-named URL” and “engine-decided URL” fetches. The behavior is similar; the policy differences are subtle.
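The agents in the table above can be recognized in server logs with a simple substring check. A minimal sketch, assuming the bot token appears verbatim in the user-agent string (the example strings below are hypothetical; real user agents carry extra version and URL tokens):

```python
# Token names follow the table above; the full user-agent strings here
# are illustrative, not the exact strings the operators send.
USER_FETCH_AGENTS = ("ChatGPT-User", "Claude-User", "Perplexity-User")

def is_user_fetch(user_agent: str) -> bool:
    """True if the user-agent string names a known user-initiated fetch bot."""
    return any(token in user_agent for token in USER_FETCH_AGENTS)

print(is_user_fetch("Mozilla/5.0 (compatible; ChatGPT-User/1.0)"))  # True
print(is_user_fetch("Mozilla/5.0 (compatible; GPTBot/1.0)"))        # False
```

Note that a substring check alone is spoofable; operators publish IP ranges that can be used to verify that traffic claiming these user agents is genuine.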

How they handle robots.txt

Operators take different stances on whether user-initiated fetches should respect robots.txt:

  • OpenAI documents ChatGPT-User as honoring robots.txt, though its framing is that these are user requests rather than autonomous crawls.
  • Anthropic documents Claude-User as also respecting robots.txt.
  • Perplexity explicitly states that Perplexity-User “generally ignores robots.txt since users initiated the requests.”

The rationale is that a user explicitly asking the engine to fetch a URL is similar to the user opening that URL in a browser themselves. Robots.txt does not apply to browsers.

This means a site can block PerplexityBot and still see Perplexity-User requests when users name the site explicitly. The block applies to systematic indexing, not to user-driven fetches.
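The distinction is easy to demonstrate with Python's standard-library robots.txt parser: a rule targeting PerplexityBot never matches the Perplexity-User agent, so even a fully compliant reader of the file would treat user fetches as allowed. (The robots.txt content and URL below are illustrative.)

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that blocks Perplexity's index crawler but says
# nothing about its user-initiated fetcher.
ROBOTS_TXT = """\
User-agent: PerplexityBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The index crawler is blocked...
print(rp.can_fetch("PerplexityBot", "https://example.com/pricing"))    # False
# ...but no group matches Perplexity-User, so the default is allow.
print(rp.can_fetch("Perplexity-User", "https://example.com/pricing"))  # True
```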

Why these requests matter

User-initiated fetches have outsized AEO impact:

  • They produce citations directly. When the engine fetches a URL during answer composition, that fetch is what decides whether the page makes it into the answer.
  • They reach pages the index missed. A page not yet in the engine’s search index can still be cited if a user-initiated fetch reaches it during the query.
  • They reflect explicit user demand. A user asking about a specific brand or page indicates real interest. Blocking these requests denies users who explicitly asked for the content.

Designing access for user-initiated fetches

The recommended default is to allow them even when training and search crawlers are blocked:

  • Add ChatGPT-User, Claude-User, Perplexity-User to allowlists.
  • Verify these user agents pass through WAF rules.
  • Do not rate-limit them aggressively. The volume is low and the user is waiting.

For paywalled or premium content:

  • The fetch will hit the paywall like a regular browser would.
  • The engine may surface the snippet that’s available before the paywall.
  • This is usually the intended behavior: the user can read the snippet, see the source, and click through if they want.

Logging user-initiated fetch traffic

User-initiated fetches are worth logging separately, because the patterns matter for understanding engine behavior:

  • Which queries triggered a user-initiated fetch on the site.
  • How often the fetch happened for which URLs.
  • Whether the fetch resulted in a citation in the answer (correlation can be inferred from subsequent inbound traffic).

This data is rarely surfaced in standard analytics and needs custom log analysis to extract.
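A starting point for that custom analysis is counting fetches per bot and URL straight from access logs. A minimal sketch, assuming combined-log-format lines (the log entries, IPs, and paths below are fabricated examples):

```python
import re
from collections import Counter

USER_FETCH_AGENTS = ("ChatGPT-User", "Claude-User", "Perplexity-User")

# Hypothetical access-log lines in combined log format.
LOG_LINES = [
    '203.0.113.5 - - [10/May/2025:12:01:03 +0000] "GET /returns HTTP/1.1" '
    '200 5120 "-" "Mozilla/5.0 (compatible; ChatGPT-User/1.0)"',
    '203.0.113.9 - - [10/May/2025:12:02:11 +0000] "GET /returns HTTP/1.1" '
    '200 5120 "-" "Mozilla/5.0 (compatible; Perplexity-User/1.0)"',
    '198.51.100.7 - - [10/May/2025:12:03:42 +0000] "GET /pricing HTTP/1.1" '
    '200 2048 "-" "Mozilla/5.0"',
]

def user_fetch_counts(lines):
    """Count user-initiated fetches per (bot, URL path)."""
    counts = Counter()
    for line in lines:
        request = re.search(r'"(?:GET|POST) (\S+) HTTP', line)
        bot = next((a for a in USER_FETCH_AGENTS if a in line), None)
        if request and bot:
            counts[(bot, request.group(1))] += 1
    return counts

print(user_fetch_counts(LOG_LINES))
```

Here the third line (an ordinary browser) is ignored, and the counter shows one ChatGPT-User and one Perplexity-User fetch of /returns.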

Common failures

  • WAF blocks because the user-fetch request looks bot-like even when it’s user-initiated.
  • CAPTCHA challenges that the engine cannot solve, blocking legitimate user fetches.
  • Aggressive bot detection that fingerprints user-fetch traffic as suspicious.
  • Robots.txt rules that block the wildcard (*) user agent and inadvertently cover user-fetch bots.

Each of these silently reduces visibility because the user gets a poor answer about the site without ever knowing the fetch was blocked.
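The wildcard pitfall in the last bullet is checkable with the same standard-library parser: a blanket `User-agent: *` block also covers user-fetch bots that honor robots.txt, unless an explicit allow group is added for them. (The robots.txt snippets and URL below are illustrative; Perplexity-User is omitted from the allow group since, per its operator, it does not consult robots.txt for user fetches anyway.)

```python
from urllib.robotparser import RobotFileParser

# A blanket block: the * group also covers robots.txt-honoring user-fetch bots.
BLANKET = """\
User-agent: *
Disallow: /
"""

# The same block, plus an explicit allow group for user-initiated fetch bots.
WITH_ALLOW = """\
User-agent: *
Disallow: /

User-agent: ChatGPT-User
User-agent: Claude-User
Allow: /
"""

blanket = RobotFileParser()
blanket.parse(BLANKET.splitlines())

fixed = RobotFileParser()
fixed.parse(WITH_ALLOW.splitlines())

print(blanket.can_fetch("ChatGPT-User", "https://example.com/returns"))  # False
print(fixed.can_fetch("ChatGPT-User", "https://example.com/returns"))    # True
```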

Implementation example

AwesomeShoes Co.’s support lead notices users saying ChatGPT gives outdated return-policy answers even though the website has the latest policy page. The problem is that user-initiated fetch requests are intermittently blocked by CAPTCHA and bot-mitigation rules.

Implementation discussion: the security engineer creates safe-pass rules for verified ChatGPT-User, Claude-User, and Perplexity-User traffic, the web lead ensures policy pages render fully without login barriers, and the support lead tracks whether complaint volume drops after deployment. This ties crawler access directly to customer experience outcomes.
