AI engines diverge on sources, converge on brand citations
Five AI engines pull from different sources but converge on naming brands. The implication: brand-building is now the load-bearing half of generative engine optimisation (GEO).
Key takeaways
- ChatGPT, Gemini, Perplexity, Claude, and Copilot pull from sharply different source sets for the same brand queries.
- Optimising for one engine's source preferences is a losing game; engines converge on consensus brand mentions, not on optimised pages.
- Earned media and category authority outperform owned content as the citation surface expands.
What happened
Per Search Engine Journal, a comparison of five AI search engines (ChatGPT, Gemini, Perplexity, Claude, and Copilot) found that they pull from sharply different source sets but converge on one behaviour: they name brands. The engines disagree on which publishers, forums, and reference sites to trust. They agree that named brands belong in the answer.
That divergence on sources is the headline most SEO teams will react to. The convergence on brands is the part that should reshape how B2B marketers spend the next twelve months.
The takeaway from the SEJ analysis is blunt. Optimising for one engine's source preferences is a losing game. Building a brand that any engine has to mention when answering a category question is not.
Why it matters for your brand
For senior marketers at financial institutions, multilaterals, industrial groups, and foundations, this finding reframes what "AI visibility" actually means. The instinct has been to chase citations: get into the sources that ChatGPT quotes, get scraped by Perplexity, get summarised by Gemini. That work matters, but it is downstream. The SEJ data suggests the engines are not converging on a shared canon of trusted publishers. They are converging on the recognition of brands as entities worth naming.
In financial services, this is already visible. Ask any of the five engines about ESG ratings methodology and you will get MSCI, Sustainalytics, and S&P named explicitly, even when the underlying citations point to wildly different blogs, regulatory filings, or trade press. The brand is the anchor the model returns to. The sources are interchangeable scaffolding. A challenger ratings provider that wins a Reuters citation but is not yet a recognised entity in the model's training and retrieval layer will still lose the answer.
For multilaterals, the implication is sharper. UN agencies, the World Bank, and the IMF benefit from decades of entity recognition. They get named. The risk is the opposite: assuming that brand strength alone will carry them, while newer policy institutes and think tanks invest in the kind of structured, machine-readable content the engines actually retrieve. Citation share can erode quietly even when brand mentions stay high. The two metrics are not the same and need to be tracked separately.
For industrial groups (cement, steel, chemicals, logistics), the playbook shifts toward category ownership. If five different engines pull from five different source sets but all name Holcim when answering a question about low-carbon cement, the strategic question is not "how do we get into more sources" but "how do we make sure the brand is the unavoidable answer to the category prompt." That is a brand-building problem disguised as an SEO problem.
For philanthropic and policy institutions, the divergence in sources is an opportunity. Each engine has a different idea of who counts as authoritative on climate finance, vaccine equity, or migration. A foundation that publishes structured, citable research can plausibly become a source for two or three engines without ever cracking the others. That is fine. The goal is being named, not being universally cited.
The content strategy consequence is that distribution needs to fragment deliberately. The same research package should ship in formats optimised for the source preferences of each engine: long-form reports for Claude and Perplexity, structured data and Wikipedia-grade reference pages for Gemini, news-cycle pickups for ChatGPT and Copilot. The brand-building work, by contrast, should consolidate. One clear category claim, repeated across every surface, until the engines have no choice but to name you.
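For the machine-readable layer, one common approach is schema.org Organization markup in JSON-LD, which feeds the same entity-recognition pipelines that Wikipedia-grade reference pages do. A minimal sketch in Python; every name, URL, and identifier below is a hypothetical placeholder, not a prescribed schema:

```python
import json

# Minimal schema.org Organization JSON-LD for entity recognition.
# All names and URLs below are hypothetical placeholders.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Ratings Provider",        # canonical brand name
    "url": "https://www.example.com",
    "sameAs": [                                # reference pages that anchor the entity
        "https://en.wikipedia.org/wiki/Example_Ratings_Provider",
        "https://www.crunchbase.com/organization/example-ratings-provider",
    ],
    "knowsAbout": ["ESG ratings methodology"], # the category claim, repeated verbatim
}

print(json.dumps(entity, indent=2))
```

The point of the `sameAs` array is consolidation: every reference surface points back at one canonical entity, which is the machine-readable version of "one clear category claim, repeated across every surface".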
The signal in context
This finding lines up with what the Pulse has tracked across the last several months: the engines are not standardising. They are differentiating. Source preferences, refusal behaviours, and citation formats are diverging as each lab tunes for its own product positioning. The convergence on brand naming is the counterweight. It is the one stable signal in an otherwise noisy environment, and it is the signal senior marketers should plan against.
The broader trend is that GEO is splitting into two disciplines. One is technical and engine-specific: structured data, source placement, retrieval optimisation. The other is classical brand-building, executed with the knowledge that the audience now includes five language models. B2B leaders who treat these as the same job will underinvest in both.
What to do
- SEO/GEO lead: Run ten category prompts across all five engines and log source citations versus brand mentions separately.
- Marketing team: Identify the three category questions where your brand must be named in the answer and make them the content north star.
- Comms: Audit Wikipedia, Crunchbase, and equivalent reference pages that feed entity recognition across engines.
- SEO/GEO lead: Map earned media against each engine's source preferences instead of treating all citations as equal.
- Marketing team: Produce one structured, machine-readable reference asset per priority category.
- CMO: Require monthly reporting that separates brand mentions from source citations in AI answers.
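The first and last items above reduce to one logging discipline: record source citations and brand mentions as separate counters, per engine. A minimal sketch, assuming answers have already been collected by hand or via each engine's interface; the engine names, prompt, and example strings are illustrative only:

```python
from collections import Counter
from dataclasses import dataclass, field

# One logged AI answer to a category prompt. Fields are illustrative;
# populate them however your team captures engine output.
@dataclass
class AnswerLog:
    engine: str                                        # e.g. "ChatGPT", "Gemini"
    prompt: str
    cited_sources: list = field(default_factory=list)  # publishers the engine cites
    named_brands: list = field(default_factory=list)   # brands named in the answer

def monthly_report(logs):
    """Tally brand mentions and source citations separately, per engine."""
    report = {}
    for log in logs:
        stats = report.setdefault(log.engine, {"brands": Counter(), "sources": Counter()})
        stats["brands"].update(log.named_brands)
        stats["sources"].update(log.cited_sources)
    return report

logs = [
    AnswerLog("ChatGPT", "best ESG ratings methodology",
              cited_sources=["reuters.com"], named_brands=["MSCI", "Sustainalytics"]),
    AnswerLog("Gemini", "best ESG ratings methodology",
              cited_sources=["wikipedia.org"], named_brands=["MSCI"]),
]
report = monthly_report(logs)
```

Keeping the two counters apart is what lets the CMO report show citation share eroding even while brand mentions hold steady, which is exactly the divergence the article warns about.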