AI search collapses the PR and SEO divide
AI engines blend earned and owned sources into one answer. Brands that still split PR and SEO budgets are under-investing in their own visibility.
Key takeaways
- AI search engines cite the same brands regardless of which underlying sources they pull from.
- PR coverage now functions as a retrieval signal, not just a reputation signal.
- Splitting PR and SEO budgets means under-investing in whichever side is currently weaker.
- Brief both teams against a shared list of buyer questions you want to be the answer to.
- Multilaterals with strong earned media but weak structured owned content are particularly exposed.
What happened
Per Search Engine Journal, AI search engines surface the same brands regardless of which underlying sources they cite. Greg Jarboe's argument is blunt: PR and SEO have been treated as separate disciplines with separate budgets, separate KPIs, and often separate agencies. AI answers do not respect that org chart.
When ChatGPT, Perplexity, or Google's AI Mode answer a question about, say, sustainable cement or trade finance reform, the model pulls from a blended set: trade press, Reddit threads, Wikipedia, analyst notes, the brand's own site, regulatory filings. The brands that get named are the brands that recur across all of those surfaces. The ones that invested only in owned-content SEO, or only in earned media, get thinner coverage in the model's training and retrieval.
Search Engine Journal's framing matters because it inverts the usual conversation. The question is no longer "is PR worth it for the SEO backlinks?" It is whether your brand appears with enough density across third-party sources for an LLM to confidently name you when a buyer asks an unprompted question.
Why it matters for your brand
For CMOs at large industrial groups, this collapses a budget divide that has been politically convenient for two decades. The corporate communications team owns Reuters, the FT, trade press, and analyst relations. The digital marketing team owns the website, paid, and SEO. In an LLM-mediated buying process, both feed the same retrieval layer. A procurement lead at a utility asking Claude "who are the credible low-carbon cement suppliers in Southeast Asia" gets an answer shaped by both. Splitting the budget along old lines means under-investing in whichever side is currently weaker, and the model will reflect that weakness in its answer.
For multilaterals and UN agencies, the implication is sharper still. These organisations rarely run aggressive SEO programmes; their authority has historically come from being cited by Reuters, the Economist, and policy outlets. That earned-media footprint is exactly what LLMs use to anchor expertise on climate adaptation, financial inclusion, or disaster risk. The risk is the inverse of the corporate problem: strong PR presence, weak structured content on owned domains, which means the model knows the agency is authoritative but cannot easily extract specific positions, statistics, or named experts. The fix is to publish primary research, datasets, and expert bios in a format LLMs can parse, then make sure the press coverage points back to those primary sources.
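One concrete way to make expert bios and datasets parseable is schema.org JSON-LD markup embedded in the page. A minimal sketch in Python of what that markup looks like; every name, title, and URL below is an illustrative placeholder, not a real organisation or dataset:

```python
import json

# Placeholder expert bio as schema.org Person markup.
expert = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Lead Climate Economist",
    "affiliation": {"@type": "Organization", "name": "Example Agency"},
    "sameAs": ["https://example.org/experts/jane-doe"],
}

# Placeholder primary dataset as schema.org Dataset markup.
dataset = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Climate Adaptation Finance Flows 2010-2023",
    "creator": {"@type": "Organization", "name": "Example Agency"},
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/data/adaptation-finance.csv",
    },
}

def to_script_tag(obj: dict) -> str:
    """Render JSON-LD as the <script> tag embedded in the page head."""
    return ('<script type="application/ld+json">\n'
            + json.dumps(obj, indent=2)
            + "\n</script>")

print(to_script_tag(expert))
print(to_script_tag(dataset))
```

The point of the structured layer is that a crawler or retrieval pipeline can extract the named expert, the dataset licence, and the download URL without parsing prose, which is exactly the gap the earned-media-only footprint leaves open.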
For financial services brands, the stakes are regulatory as well as commercial. When a wealth client asks an AI assistant "is [Bank X] exposed to commercial real estate," the answer is assembled from earnings call transcripts, analyst notes, news coverage, and the bank's own IR site. A communications team that has spent a decade managing analyst narratives but has not aligned with the SEO team on how that narrative shows up in structured web content is leaving the model to fill gaps with whatever it finds. Often that is a hostile blog post or a stale forum thread.
For philanthropic and policy institutions, the change is about competitive share of voice on issues. Foundations and think tanks compete for citation in policy debates. If a researcher asks Perplexity "what's the evidence on cash transfer programmes in fragile states," the institutions that win are the ones whose papers, op-eds, and interviews recur across the corpus the model retrieves. PR placements that used to be measured in Cision reports now need to be measured in whether the named expert and named institution show up in AI answers to the questions the institution wants to own.
The practical consequence for content strategy: stop briefing PR teams to chase coverage and SEO teams to chase rankings. Brief both teams against a shared list of the 50 to 200 buyer questions you want to be the answer to. Measure both teams against citation in AI responses to those questions. The org chart will resist this. Do it anyway.
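The measurement step can be sketched as a simple script: for each question in the shared brief, fetch an AI answer and check whether the brand is named. This is a sketch only; `fetch_ai_answer` is a stub standing in for whichever engines you actually track (or a manual sampling process), and the questions and brand aliases are placeholders.

```python
from collections import Counter

# Placeholder for the shared brief of 50-200 buyer questions.
QUESTIONS = [
    "who are the credible low-carbon cement suppliers in Southeast Asia",
    "what is the evidence on cash transfer programmes in fragile states",
]

# Cover the naming variants an answer might use.
BRAND_ALIASES = ["Example Corp", "ExampleCorp"]

def fetch_ai_answer(question: str) -> str:
    """Stub: replace with real calls to the AI engines you track.
    Returns canned text so this sketch runs end to end."""
    canned = {
        QUESTIONS[0]: "Example Corp and two regional suppliers are most often cited.",
        QUESTIONS[1]: "Evidence reviews from several research institutes suggest strong effects.",
    }
    return canned.get(question, "")

def citation_rate(questions, aliases):
    """Fraction of tracked questions whose answer names the brand.
    Naive substring matching; production tracking needs entity resolution."""
    hits = Counter()
    for q in questions:
        answer = fetch_ai_answer(q).lower()
        hits[q] = any(a.lower() in answer for a in aliases)
    return sum(hits.values()) / len(questions), hits

rate, per_question = citation_rate(QUESTIONS, BRAND_ALIASES)
print(f"Cited in {rate:.0%} of tracked answers")  # prints "Cited in 50% of tracked answers"
```

Run on a schedule, the per-question breakdown tells both teams which buyer questions they are losing, which is a more useful shared KPI than rankings or clip counts.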
The signal in context
This is the same pattern visible in every recent study of LLM citation behaviour. Models pull from a wider source set than classic search results, they over-weight third-party validation, and they reward brands with consistent presence across both owned and earned surfaces. Profound, Semrush, and Ahrefs have all published data showing that the pages and brands cited in AI answers correlate more tightly with mentions in trusted third-party media than with traditional SEO authority metrics. The takeaway is consistent: earned media has become a retrieval signal, not just a reputation signal.
The organisations that will adapt fastest are the ones where comms and digital already report into a single executive. Where they don't, expect a structural conversation in 2025 budget cycles about whether "AI visibility" deserves its own owner who can pull both levers. That role does not have a settled name yet. It will.