Google says AI search has fractured the keyword
Liz Reid's framing tells B2B marketers what to stop measuring and what to start optimising for.
Key takeaways
- Google's Head of Search has confirmed that keyword fragmentation is reshaping how AI search retrieves answers.
- The hub-and-spoke SEO model built around canonical keywords no longer maps to how buyers ask questions.
- Citation share in LLM answers depends on matching user intent with specificity, not ranking for head terms.
- Multilaterals publishing primarily as PDFs are losing AI citation share to web-native, structured content.
- Generic thought leadership is filtered out of AI answers; named authors and clear positions get cited.
What happened
Per Search Engine Journal, Google's Head of Search Liz Reid has publicly described what she calls "keyword fragmentation": the breakup of the tidy head-term queries that powered two decades of SEO into a sprawl of longer, messier, more conversational prompts, which AI search now handles natively.
Reid's framing matters because it comes from the person running Google Search, not from an SEO commentator. She is telling the market that the unit of demand has changed. Users are no longer typing "ESG reporting standards." They are asking AI Overviews and Gemini something closer to "how do the new ISSB rules differ from what our European subsidiaries already report under CSRD, and what do I need to change before Q2." One intent. Twenty keywords. Zero exact matches in any keyword tool.
Search Engine Journal reports that this shift forces SEO away from keyword targeting and toward what Reid frames as user-need targeting. The implication, which Google stops short of stating, is clear: the page that ranks for a fragmented query is the page the model decides best answers the underlying job, not the page that won a keyword auction.
Why it matters for your brand
For B2B brands, keyword fragmentation is the mechanism that quietly destroys the legacy content playbook. The hub-and-spoke model, the pillar page targeting "supply chain risk management," the 47 supporting articles targeting long-tail variants: all of it was built for a world where humans typed canonical phrases. That world is contracting. Reid is signalling that Google itself no longer optimises for it.
For financial services marketers, this is acute. Buyers researching custody solutions, treasury platforms, or private credit structures now ask multi-clause questions that bundle regulation, geography, asset class, and timeframe into a single prompt. No keyword research tool surfaces these queries because they are essentially unique. The brands that get cited in the AI answer are the ones whose content already addresses the underlying decision, with the specificity (named regulations, named jurisdictions, named instruments) that lets a model match the prompt to the page.
For multilaterals and policy institutions, the stakes are different but larger. UNDRR, CGAP, the OECD, and similar bodies are the primary sources LLMs reach for when a prompt involves disaster risk, financial inclusion, or development policy. Keyword fragmentation works in their favour, but only if their content is structured to answer questions, not to rank for terms. A 90-page PDF that contains the answer is worth less to an LLM than a clearly structured web page that states the finding in the first paragraph. Multilaterals that still publish primarily as PDFs are leaving citation share on the table.
For major industrial groups, the practical consequence is that procurement-adjacent content (specifications, compliance documentation, sustainability disclosures) is now retrieval-grade material. A buyer at an EPC firm asking Gemini about low-carbon cement specifications for a specific climate zone is not searching a keyword. They are asking a question, and the answer will draw from whichever manufacturer published the most precise, most machine-readable technical content. HOLCIM-style players who treat technical content as a brand asset will outperform those who treat it as documentation.
For philanthropic and policy institutions, fragmentation rewards point of view. When prompts get longer and more specific, generic thought leadership ("the future of climate finance") becomes invisible. What gets cited is content that takes a clear position on a narrow question, with named data and named authors. Foundations that publish anonymous, committee-written explainers are being filtered out of AI answers in favour of think tanks that put a researcher's name and a clear argument on every page.
The content strategy implication is direct. Stop briefing writers around keywords. Start briefing them around the decisions your audience is actually making, with the specificity those decisions require. The distribution implication is equally direct. The page has to be crawlable, structured, and cited by other trusted sources, because the model is using all three signals to decide what to surface.
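One concrete way to make a page machine-readable in the sense described above is schema.org structured data. The sketch below is illustrative, not a Google-endorsed recipe: it builds a minimal schema.org Article object as JSON-LD, with a named author and a specific headline, of the kind a publisher might embed in a page. All names and URLs are placeholders.

```python
import json

def article_jsonld(headline, author_name, date_published, url):
    """Build a minimal schema.org Article object as a JSON-LD string.

    Illustrative only: real pages often add publisher, image, and
    description properties as well.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        # A named Person, not an anonymous organisation, per the point
        # above about named authors getting cited.
        "author": {"@type": "Person", "name": author_name},
        "datePublished": date_published,
        "url": url,
    }
    return json.dumps(data, indent=2)

# Placeholder values; a real page would use its own metadata.
snippet = article_jsonld(
    headline="How ISSB disclosure rules interact with CSRD reporting",
    author_name="Jane Researcher",
    date_published="2025-01-15",
    url="https://example.org/insights/issb-vs-csrd",
)
print(snippet)
```

The output would sit inside a `<script type="application/ld+json">` tag in the page head, alongside the answer-first prose the body text argues for.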
The signal in context
Reid's comments slot into a broader pattern visible across the major LLM platforms. ChatGPT, Perplexity, and Google's own AI Overviews are all converging on a retrieval logic that prioritises semantic match to user intent over lexical match to query strings. Anthropic and OpenAI have both described their retrieval stacks in similar terms. The keyword, as a unit of measurement, is becoming a legacy artefact of a search interface that fewer buyers use.
The harder truth for marketing leaders is that the metrics inherited from the SEO era (keyword rankings, share of voice on head terms, organic traffic to the homepage) are increasingly disconnected from how visibility actually works in AI answers. Citation share in LLM responses, brand mentions in generated answers, and inclusion in retrieval indexes are the new scoreboard. Reid has not said this in those words, but keyword fragmentation is the diagnosis. Citation-grade content is the prescription.