Three layers decide whether your brand shows up in AI answers
Retrieval, representation, reasoning. Each failure mode demands a different fix, and publishing more content solves none of them reliably.
Key takeaways
- AI visibility failures fall into three layers: retrieval, representation, and reasoning.
- Financial services brands typically fail at representation; multilaterals fail at retrieval; industrials fail at reasoning.
- Publishing more content fixes none of the three layers reliably.
- Ask any AEO vendor which of the three layers their methodology actually covers.
What happened
Per Search Engine Journal, Duane Forrester argues that AI visibility is not a single problem with a single fix. It is three distinct problems operating on three different layers, and most brands are misdiagnosing which one is actually breaking.
Forrester's framework splits the question "why doesn't my brand show up in ChatGPT or Perplexity" into three failures: a retrieval failure (the model cannot find or fetch your content), a representation failure (the model finds you but does not understand what you are or why you matter), and a reasoning failure (the model understands you but does not select you as the answer to this specific prompt). Each layer has its own diagnostic signals and its own fixes. Publishing more blog posts addresses none of them reliably.
The piece lands at a moment when most enterprise content teams are still treating LLM visibility as an extension of SEO: produce more, optimise harder, wait for traffic. Forrester's point is that the failure modes are categorically different, and treating them as one problem guarantees wasted budget.
Why it matters for your brand
The three-layer split matters because it forces a different conversation with the CMO and the board. "We are not being cited in AI answers" is not a content problem. It is a diagnostic problem, and the answer determines whether you spend on PR, on structured data, on partnerships, or on rewriting the corporate site.
For financial services brands, the most common failure is layer two: representation. A global asset manager publishes thousands of research notes a year, but the models cannot reliably summarise what the firm stands for or which categories it leads. The content is retrievable. The brand is not legible. The fix is not more thought leadership; it is consolidating the entity description across Wikipedia, Wikidata, analyst databases, and the firm's own about-page schema so that when a model is asked "which firms lead in private credit," your name surfaces as an entity, not as a string of disconnected PDFs.
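To make the representation fix concrete, here is a minimal sketch of what a consolidated about-page entity description could look like as schema.org JSON-LD, generated with Python. The firm name, URLs, and Wikidata ID are all invented for illustration; the `sameAs` links are what tie the page to the same entity on Wikipedia, Wikidata, and other authority sources.

```python
import json

# Hypothetical example: a consolidated schema.org Organization block
# for an asset manager's about page. All names, URLs, and IDs are invented.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Asset Management",
    "url": "https://www.example-am.com",
    "description": "Global asset manager specialising in private credit and infrastructure debt.",
    # sameAs links unify the entity across the sources models draw on
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Asset_Management",
        "https://www.wikidata.org/wiki/Q00000000",
        "https://www.linkedin.com/company/example-am",
    ],
    "knowsAbout": ["private credit", "infrastructure debt", "fixed income"],
}

# Embed as JSON-LD in the about page's <head>
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(entity, indent=2)
    + "\n</script>"
)
print(snippet)
```

The point of the sketch is the shape, not the values: one canonical description, one set of category claims, and explicit links to the other places the entity is described.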
For multilaterals and UN system bodies, the failure is usually layer one: retrieval. A UN agency may produce the definitive global dataset on a topic, but if the canonical version sits behind a PDF on a slow domain with weak internal linking, the model never gets to it. It pulls from a Reuters summary instead, and the agency loses authorship of its own finding. The fix is structural: HTML-first publishing, clean canonical URLs, machine-readable data alongside the report, and a deliberate syndication strategy with the outlets that LLMs actually crawl and trust.
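A rough way to audit the retrieval layer described above is to check a report page for the basic machine-readable signals: a canonical URL, structured data, and a link to the data itself in a crawlable format. The sketch below uses naive string matching on an invented page rather than a real HTML parser, so treat it as a diagnostic starting point, not a production crawler.

```python
import re

def retrieval_signals(html: str) -> dict:
    """Heuristic checks for retrieval-layer fixes: an HTML canonical URL,
    embedded structured data, and a machine-readable data link alongside
    the report. String matching only, not a real HTML parser."""
    return {
        "has_canonical": bool(re.search(r'<link[^>]+rel="canonical"', html)),
        "has_structured_data": '<script type="application/ld+json">' in html,
        "links_machine_readable": bool(
            re.search(r'href="[^"]+\.(csv|json|xml)"', html)
        ),
    }

# Invented example of a report page that passes all three checks
page = """
<html><head>
  <link rel="canonical" href="https://data.example.org/report-2025"/>
  <script type="application/ld+json">{"@type": "Dataset"}</script>
</head><body>
  <a href="/downloads/report-2025.csv">Download the data (CSV)</a>
</body></html>
"""

print(retrieval_signals(page))
```

A page that fails all three checks is exactly the "PDF on a slow domain" failure: the finding exists, but nothing about the page tells a crawler where the canonical version lives or where the underlying data is.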
For major industrial groups (cement, steel, energy, logistics), the failure is most often layer three: reasoning. The models know who you are and can find your content, but when a procurement director asks "which suppliers have the lowest-carbon product in this category," the answer pulls from a competitor's sustainability page because that competitor framed the claim in the specific language the model maps to the prompt. This is a positioning problem, not a content volume problem. The fix is writing claims the way prompts are asked, with the comparators and qualifiers a buyer would use.
For philanthropic and policy institutions, the diagnostic is usually mixed: retrieval is fine, representation is strong inside the policy bubble, but reasoning fails because the model has not been given any reason to prefer your framing over a think tank with louder distribution. The fix is partnerships and co-citation, not republishing your own report a fourth time.
The strategic point: every quarter you spend producing more content without diagnosing the layer is a quarter your competitors use to fix theirs. The teams that win AI visibility over the next eighteen months will be the ones that audit before they publish.
The signal in context
Forrester's framework is the clearest articulation yet of something practitioners have been circling for a year: the SEO playbook does not transfer cleanly to LLMs because the failure modes are different. Traditional search has one main failure (you do not rank), and the fix is a known recipe. LLM citation has at least three failures, and they require different specialists. Retrieval is an infrastructure problem. Representation is a brand and entity problem. Reasoning is a positioning and competitive intelligence problem. Most marketing organisations are not currently structured to handle all three, which is why the early winners tend to be brands with strong technical SEO, strong PR, and a sharp category narrative all at once.
This also explains why generic "AI visibility audits" from agencies have been hit or miss. An audit that only looks at crawlability misses two thirds of the problem. An audit that only looks at brand mentions misses the retrieval layer entirely. Senior marketers should ask any vendor pitching AEO or GEO services which of the three layers their methodology covers, and treat anyone who answers "all of them" with the same scepticism they would apply to a full-funnel agency promising to do everything.