An AEO audit is a baseline measurement of how visible a brand is inside AI answers across a defined set of queries and engines. The output is a snapshot — a fixed reference point against which future work is measured. Without it, “we improved AEO” is unprovable.
What an audit measures
A useful audit captures four things:
- Citation presence — for each query, on each engine, whether the brand was cited.
- Position in source list — when cited, where the brand appears in the list of sources.
- Brand mention — whether the brand was named in the prose, with or without a citation.
- Sentiment — neutral, favorable, or unfavorable framing.
It also captures, for context:
- Which competitors appeared.
- Which reference sources the engine cited instead.
- The full text of each answer.
Building the prompt set
The prompt set is the spine of the audit. Get this wrong and every metric downstream is wrong.
A well-constructed set has:
- Branded queries — explicit brand-name queries. These are the easiest to win and the floor for visibility.
- Category queries — high-value queries describing what the brand does, without naming the brand. The hardest to win and the most valuable.
- Competitor queries — queries naming competitors. Useful for spotting comparison opportunities.
- Long-tail informational queries — specific questions a buyer might ask in the research phase.
- Query variations — same intent, different phrasing. Engines treat near-synonyms differently.
Aim for 50–200 queries on the first audit. More is better but harder to maintain.
Running the audit
For each query in the set:
- Run the query on each target engine and save the full answer text.
- Record citation presence and, when cited, the brand’s position in the source list.
- Record whether the brand was mentioned in the prose, and the sentiment of the framing.
- Note which competitors appeared and which sources the engine cited instead.
Manual auditing works for small sets; automated tooling is necessary at scale (see “test AI citations”).
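The per-query collection step is a loop over engines. A minimal sketch, assuming a hypothetical `ask_engine` wrapper around each engine's API or browser session (the stub below returns canned data; `BRAND` and `ENGINES` are illustrative):

```python
def ask_engine(engine: str, query: str) -> dict:
    """Hypothetical stub — in practice this wraps each engine's API or a browser session."""
    return {"answer": "...", "sources": ["https://example.com/guide"]}

BRAND = "AwesomeShoes"
ENGINES = ["perplexity", "chatgpt", "google-ai-overviews"]

def run_audit(queries: list[str]) -> list[dict]:
    records = []
    for query in queries:
        for engine in ENGINES:
            resp = ask_engine(engine, query)
            sources = resp["sources"]
            # Citation position: first source whose URL mentions the brand (a crude
            # heuristic — real matching would check domains, not substrings).
            position = next(
                (i + 1 for i, s in enumerate(sources) if BRAND.lower() in s.lower()),
                None,
            )
            records.append({
                "query": query,
                "engine": engine,
                "cited": position is not None,
                "position": position,
                "mentioned": BRAND.lower() in resp["answer"].lower(),
                "answer": resp["answer"],
                "sources": sources,
            })
    return records
```

One record per (query, engine) pair keeps the scoreboard and the gap analysis working from the same raw data.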
What to look for in the data
Beyond counting citations, read the answers:
- Where is the brand absent on category queries? That gap is the work.
- Which sources appear repeatedly where the brand doesn’t? Those are the reference sources the engine trusts. Coverage on those sources is leverage.
- Where is the sentiment negative or thin? Pages that get cited but describe the brand poorly may need rewriting on the source side, or proactive content on the brand side.
- Which competitor pages get cited? Read them. The structural pattern is usually obvious.
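Finding the reference sources the engine repeatedly trusts is a frequency count over cited domains, excluding the brand's own. A sketch with illustrative data; the URLs and brand domain are assumptions:

```python
from collections import Counter
from urllib.parse import urlparse

BRAND_DOMAIN = "awesomeshoes.com"  # hypothetical

# One list of cited sources per (query, engine) observation; URLs are illustrative.
cited_sources = [
    ["https://runnersworld.com/best-trail", "https://wirecutter.com/shoes"],
    ["https://runnersworld.com/durable", "https://awesomeshoes.com/guide"],
    ["https://wirecutter.com/shoes", "https://runnersworld.com/best-trail"],
]

counts = Counter(
    urlparse(url).netloc
    for row in cited_sources
    for url in row
    if urlparse(url).netloc != BRAND_DOMAIN
)

# Domains cited most often where the brand is absent — these are coverage targets.
top_references = counts.most_common()
```

The same counting, restricted to category queries where the brand was never cited, surfaces the highest-leverage gaps first.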
Output
The audit deliverable is two artifacts:
- A scoreboard — share of voice per engine, per query type, per topic cluster.
- A gap analysis — query-by-query notes on what’s missing and what to do about it.
The scoreboard becomes the baseline for future measurement. The gap analysis becomes the AEO work plan.
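The scoreboard rolls the raw records up by engine and query type. A minimal sketch using one simple definition of share of voice — the fraction of observations where the brand was cited; both the definition and the sample data are assumptions:

```python
from collections import defaultdict

# Minimal records: (engine, query_type, cited); data is illustrative.
records = [
    ("perplexity", "branded", True),
    ("perplexity", "category", False),
    ("perplexity", "category", True),
    ("chatgpt", "branded", True),
    ("chatgpt", "category", False),
]

def share_of_voice(records):
    """Fraction of observations with a citation, per (engine, query_type)."""
    totals, hits = defaultdict(int), defaultdict(int)
    for engine, qtype, cited in records:
        key = (engine, qtype)
        totals[key] += 1
        hits[key] += cited
    return {key: hits[key] / totals[key] for key in totals}

scoreboard = share_of_voice(records)
```

Adding a topic-cluster field to each record gives the third slice the scoreboard calls for without changing the rollup logic.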
Frequency
Run a full audit:
- Once at the start of an AEO program.
- Quarterly thereafter.
- After any major content launch or restructure.
- After a known engine model update.
Between full audits, a smaller smoke-test prompt set (10–20 priority queries) can run weekly or daily.
Implementation example
AwesomeShoes Co. starts a quarterly AEO audit to benchmark citation performance before rewriting key buying guides. The insights manager owns the audit framework, but execution spans SEO, content, and competitive research roles.
Implementation discussion: the team builds a query set across branded, unbranded, and competitor intents, captures response/citation/sentiment outputs by engine, and produces a scoreboard plus prioritized gap list. The next sprint plan is then tied directly to the highest-impact gaps, so audit output is actionable rather than descriptive.