Google adds AI Search links, hides click data
Google is expanding where brands can appear in AI Search, then refusing to show whether appearing is worth anything. Plan around the asymmetry.
Key takeaways
- Google added more link surfaces in AI Search but still reports no AI-specific clicks or impressions in Search Console.
- Studies continue to show click rates drop when an AI answer appears above organic results.
- Brands should track citation share across AI Overviews, ChatGPT, Perplexity, and Claude as the primary visibility KPI.
- Definitional and reference pages win citations; long-form essays without a clear top-of-page answer lose ground.
- Assume publisher-grade click data from AI surfaces is not coming. Reframe reporting now.
What happened
Per Search Engine Journal, Google has added more link surfaces inside AI Search results but has not given publishers any new click data to measure what those links deliver. Matt G. Southern reports that while citations are showing up in more places across AI Overviews and AI Mode, Search Console still does not break out clicks or impressions originating from AI responses. Independent studies cited in the piece continue to show click rates fall when an AI summary appears above the organic results.
The asymmetry is the story. Google is expanding the surface area where brands can appear, then asking publishers to trust that the appearance is worth something, without showing the receipts. SEOs cannot tell which queries triggered an AI citation, which AI citations drove a click, or how the click rate from an AI link compares to a classic blue link.
This is a deliberate product choice. Google has had two years to ship AI Search reporting in Search Console and has not. The expansion of link surfaces without measurement is the model change marketers should plan around, not a temporary gap.
Why it matters for your brand
For CMOs at financial services firms, multilaterals, and industrial groups, this kills a category of internal reporting. The standard "organic traffic from branded and non-branded search" dashboard that justifies content investment now has a growing blind spot. If 30% of high-intent queries about your category resolve inside an AI answer, and Google will not tell you which ones, you cannot defend the content budget on click attribution alone. You will need to defend it on citation share, which is a different metric and requires different tooling.
Brand teams at regulated institutions feel this most acutely. A central bank, a UN agency, or a standards body has spent years building authority pages that rank for definitional queries: what is operational resilience, what is disaster risk reduction, what is ISO 27001. Those queries are exactly the ones AI Search now answers in-line. The page still gets cited. The click does not happen. The internal team that owns the page sees traffic drop and gets asked why. The honest answer is that the page is doing better than ever in terms of influence per query, and Google has decided not to measure that.
Distribution strategy has to shift accordingly. If you cannot measure clicks from AI Search, measure citations. That means setting up monitoring across Google AI Overviews, AI Mode, ChatGPT, Perplexity, and Claude for the 50 to 200 queries that matter most to your category. Track which sources the models pull from, how often your domain appears, and which competitors are displacing you. This is the new equivalent of rank tracking, and it is not optional for any brand whose buyers research before they buy.
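To make the reporting shift concrete, here is a minimal sketch of what citation-share tracking can look like once the raw observations exist. It assumes you have already collected, by whatever means you use (manual spot checks, a GEO tool export, or your own scripted queries), one record per query and engine listing the domains cited in the AI answer. The file name, column names, engine labels, the citation_share helper, and example.org are all illustrative assumptions, not the output of any specific tool.

```python
"""Compute citation share per AI engine from a simple export of observed citations.

Assumed input: citations.csv with columns
  query          - the tracked query text
  engine         - e.g. "ai_overviews", "chatgpt", "perplexity", "claude"
  cited_domains  - pipe-separated domains cited in that answer
"""

import csv
from collections import defaultdict


def citation_share(rows, own_domain):
    """Return {engine: share}, where share = answers citing own_domain / answers checked."""
    cited = defaultdict(int)
    total = defaultdict(int)
    for row in rows:
        engine = row["engine"]
        domains = row["cited_domains"].split("|")
        total[engine] += 1
        if own_domain in domains:
            cited[engine] += 1
    return {engine: cited[engine] / total[engine] for engine in total}


if __name__ == "__main__":
    with open("citations.csv", newline="") as f:
        rows = list(csv.DictReader(f))
    for engine, share in sorted(citation_share(rows, "example.org").items()):
        print(f"{engine}: cited in {share:.0%} of tracked answers")
```

The same table supports the competitor question: for the answers where your domain is absent, count which domains appear instead, and report that displacement list alongside the share figure.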
For content strategy, the implication is sharper still. Google adding more link surfaces means more chances to be cited, but only for pages built to be cited. That means structured answers near the top of the page, clear definitional language, named authors with credentials, and explicit data points the model can lift. Pages written as long-form thought leadership essays without a clear answer in the first 200 words will continue to lose ground. Pages built like reference entries will win.
Procurement and B2B sales cycles in industrial and financial services compound the problem. A treasurer evaluating a payments platform, a sustainability lead evaluating a carbon accounting vendor, or a procurement officer at a multilateral evaluating a supplier will increasingly start with an AI answer. If your brand is not in the citation set at that first query, you are not in the consideration set at the second. There is no Search Console report that will tell you this happened. You will only see it in pipeline that did not materialise.
The signal in context
The broader pattern is that AI Search is consolidating the read layer of the internet while disintermediating the measurement layer. Google, OpenAI, Anthropic, and Perplexity have all expanded the share of queries answered in-product over the past 18 months. None of them provides publisher-grade analytics. Cloudflare, Similarweb, and a handful of GEO tools are filling the gap with sampled data, but the platforms themselves treat citation telemetry as proprietary. Publishers got search referral data because Google needed publishers to keep indexing the web. The AI platforms appear to have decided they do not need the same bargain.
For senior marketers, the working assumption should be that first-party click data from AI surfaces is not coming, certainly not at the granularity Search Console once offered. The brands that adapt fastest will treat citation share as the primary visibility KPI, organic clicks as a lagging indicator, and brand-tracking research as the way to confirm that AI visibility is translating into mental availability with buyers. The brands that wait for Google to ship a dashboard will spend 2026 explaining traffic declines they could have been reframing as citation wins.