Schema's AI citation value comes under fresh scrutiny
Google's FAQ removal and fresh Ahrefs data suggest structured data is hygiene, not a visibility lever for AI search.
Key takeaways
- Google has removed FAQ rich results from most SERPs, eliminating a decade-old visibility tactic.
- Ahrefs finds little correlation between schema markup and citations in ChatGPT, Perplexity, or AI Overviews.
- LLMs appear to weight source authority and prompt-intent fit over on-page structured data.
- Enterprise budget should shift from markup to earned citations in trusted third-party outlets.
- Treat schema as hygiene, not as an AI search growth investment.
What happened
Per Search Engine Journal, Google has quietly removed FAQ rich results from most SERPs, and new Ahrefs research finds little evidence that schema markup meaningfully boosts citations in AI search engines. The two developments land in the same news cycle and point in the same direction: structured data is no longer the visibility lever many SEO teams have spent a decade building around.
Search Engine Journal frames the Ahrefs analysis as a direct challenge to the assumption that marking up content with Schema.org vocabulary gives pages a measurable edge in ChatGPT, Perplexity, or Google's AI Overviews. The correlation, per the data, is weak to non-existent. FAQ schema specifically, once a reliable way to colonise SERP real estate, now returns nothing visible to most users.
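For readers who have not implemented it, this is the shape of the markup in question. The snippet below is a minimal, illustrative FAQPage JSON-LD fragment (the question and answer text are placeholders, not drawn from any real page); pages carrying markup like this once qualified for the expandable Q&A treatment Google has now withdrawn.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is an index fund?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "An index fund is a pooled investment vehicle that passively tracks a market index."
      }
    }
  ]
}
```

The markup itself remains valid Schema.org vocabulary and still aids parsing; what has changed is that Google no longer renders a visible SERP feature from it, and Ahrefs' data suggests AI engines do not reward it with citations either.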
For brands that built FAQ libraries, glossary pages, and structured product data on the promise of richer SERP treatment and AI legibility, the ROI calculation has changed. The investment did not disappear. The visible payoff did.
Why it matters for your brand
Schema was sold to enterprise marketing teams as a hedge. The pitch: even if rankings shift, structured data makes your content machine-readable, future-proofs you for AI search, and earns rich results today. Two of those three benefits are now under serious question. Ahrefs' data suggests LLMs are not preferentially citing schema-rich pages, and Google has just removed the most visible SERP benefit.
For financial services brands, this matters because compliance teams have often justified heavy schema implementation (FAQPage, FinancialProduct, Article) as a way to ensure regulated content gets surfaced correctly. If LLMs are ignoring the markup and Google is hiding the rich results, the rationale collapses into "it might still help indexing." That is a much harder business case to defend at budget time, especially when the same teams are being asked to fund net-new AI search experiments.
For multilaterals and policy institutions, the implication is sharper. Organisations such as UN agencies and World Bank affiliates have invested in structured data to make reports, indicators, and country profiles machine-discoverable. The bet was that LLMs would reward structured publishers with citation share. Ahrefs' finding suggests the models are instead weighting source authority, link patterns, and prompt-intent fit. A well-marked-up PDF on a low-authority subdomain still loses to a Reuters paraphrase of the same data. Distribution and surface area matter more than markup.
For major industrial groups, the change reframes the technical SEO conversation. Product schema, HowTo schema, and specification markup were positioned as the moat against commodity content. If AI engines are not using that signal to choose citations, industrial brands need to think about which trade publications, standards bodies, and analyst reports the LLMs actually quote. Being cited inside an IEEE standard or an ISO technical committee output now matters more than perfectly nested JSON-LD on your own site.
Content strategy implications are concrete. Stop treating schema as a visibility investment and start treating it as a hygiene cost. Move budget toward the formats LLMs demonstrably pull from: bylined expert commentary on third-party outlets, primary research with named authors, and clear declarative prose that answers the prompt without requiring the model to decode markup. The pages winning AI citations read like answers, not like databases.
The signal in context
This sits inside a broader correction happening across technical SEO. For two years the assumption was that AI search would reward the publishers who had invested most heavily in structured, semantic, machine-readable content. The early evidence is less flattering. LLMs appear to weight retrieval signals (authority, recency, link patterns, prompt-intent match) over on-page semantic scaffolding. Schema helps Google parse a page; it does not appear to help an LLM decide to quote it. Several recent studies, including work from Ahrefs and from independent GEO researchers, have converged on the same conclusion: citation share tracks more closely with domain authority and content fit than with markup.
The FAQ removal is the canary. Google built FAQ rich results, encouraged the ecosystem to adopt them, watched them get gamed, and then withdrew the feature. Brands that planned their content architecture around that single SERP feature now have orphaned assets. The lesson for senior marketers is to stop optimising for the rendering layer of any single platform, whether that platform is Google's SERP or a specific LLM's citation panel. Optimise for the underlying signal the platforms keep rewarding: being the source other credible sources point to.