AI Overviews cut organic clicks 38%, field study finds
A randomized field study confirms what dashboards have been hinting at: AI Overviews are harvesting clicks without making users any happier.
Key takeaways
- Google's AI Overviews cut organic clicks on triggered queries by 38 percent in a controlled field study.
- User satisfaction did not improve, removing the last defensible argument that AI Overviews are a net-positive product change.
- Definitional and informational queries are hit hardest, exactly the terms that fund top-of-funnel awareness.
- Citation visibility, not ranking position, is now the metric that pays.
What happened
Per Search Engine Journal, a randomized field experiment found that Google's AI Overviews reduced organic clicks on triggered queries by 38%, while user satisfaction ratings stayed flat. The study isolated AI Overviews as the variable. Users did not report a better experience. They just clicked through to publishers far less often.
That second number is the one to sit with. If AI Overviews were producing meaningfully better answers, you would expect satisfaction to climb alongside the click drop. It did not. Google is harvesting roughly four in ten clicks that previously went to publishers, and users are no happier for it.
The experiment, surfaced by Matt G. Southern at Search Engine Journal, gives senior marketers something they have lacked in this debate: a clean causal estimate rather than another correlational dashboard from an SEO vendor with a horse in the race.
Why it matters for your brand
A 38% organic click reduction is not an SEO problem. It is a brand visibility problem, and it lands hardest on the sectors that have spent a decade building search authority through whitepapers, thought leadership, and indexed PDFs.
Consider a global asset manager. The firm publishes quarterly outlooks, ESG frameworks, and macro commentary designed to be discovered by allocators and corporate treasurers researching specific questions. Under the old model, a query like "private credit default rates 2024" sent traffic to BlackRock, Apollo, or PIMCO research pages. Under AI Overviews, Google synthesises the answer, names two or three sources in a collapsed citation tray, and the click never happens. The brand may still be cited inside the Overview, but the user journey to the firm's site is severed. Reach without traffic is the new baseline.
For multilaterals, the math is worse. Institutions like UNDRR, CGAP, or the WHO produce authoritative reference content that ranks well precisely because it is canonical. AI Overviews love canonical sources. They will be quoted heavily and clicked rarely. The implication: if your KPI is sessions to the policy library, you are about to miss the number. If your KPI is "are we the source the model paraphrases," you may be winning and not measuring it.
Industrial groups face a different angle. Procurement researchers asking "lowest carbon cement specifications" or "ISO 14064 reporting requirements" will get a synthesised answer. The named source in that answer becomes the de facto category authority for that buyer's mental shortlist. The brands that lose are not the ones that fail to rank. They are the ones that fail to be the cited source inside the Overview itself.
The strategic shift: stop optimising for clicks on informational queries. Optimise for citation inside the answer, and reserve click-optimised content for the bottom of the funnel where intent is transactional or relational (events, RFP submissions, advisor contact). Mid-funnel SEO traffic is structurally impaired and will not recover.
A blunt reframe for the CMO conversation: if your content team's quarterly review still leads with organic sessions, you are measuring a leaking bucket. The metric that matters is share of citations across ChatGPT, Gemini, Perplexity, and Google AI Overviews on the 50 to 200 queries that define your category.
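To make that KPI concrete, here is a minimal sketch of how the share-of-citations number could be computed. It assumes an audit log in CSV form with hypothetical columns query, assistant, and cited (whether the brand was named as a source); the audit sketch at the end of this briefing writes exactly that file.

```python
import csv
from collections import defaultdict

def citation_share(audit_csv_path: str) -> dict[str, float]:
    """Share of tracked queries on which the brand was cited, per assistant.

    Assumes a CSV log with columns: query, assistant, cited ("yes"/"no").
    """
    checked = defaultdict(int)  # queries checked per assistant
    cited = defaultdict(int)    # queries where the brand was named as a source
    with open(audit_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            checked[row["assistant"]] += 1
            if row["cited"].strip().lower() == "yes":
                cited[row["assistant"]] += 1
    return {assistant: cited[assistant] / checked[assistant] for assistant in checked}

if __name__ == "__main__":
    for assistant, share in citation_share("citation_audit.csv").items():
        print(f"{assistant}: cited on {share:.0%} of tracked queries")
```

Tracked over the same 50 to 200 category-defining queries each month, that single percentage is a cleaner board metric than any sessions chart.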
The signal in context
This study confirms what previous Pulse coverage has framed as the central transition of 2024 to 2025: the search interface is becoming an answer interface, and brands that built audiences on the click economy are quietly losing distribution they cannot see in their analytics. The novelty here is the satisfaction data. Until now, Google's defence of AI Overviews has rested on the implicit promise that users prefer them. The randomized design says they do not prefer them; they tolerate them.
For B2B brands, the practical consequence is that being cited inside AI answers, rather than merely ranked beneath them, is the visibility battle of the next 18 months. Treat the 38% number as a floor, not a ceiling. Studies of similar interface changes (featured snippets in 2016, knowledge panels in 2018) showed click erosion deepening as the feature expanded coverage. AI Overviews are still being rolled out across query categories. The compression has not finished.
What to do
- SEO/GEO lead: Pull your top 100 informational queries, run them through AI Overviews, and log whether you are cited or a competitor is.
- Marketing team: Rewrite the 10 highest-value queries where competitors are cited and you are not, prioritising extractable formats.
- Comms: Brief executives that declines of 30% or more in informational traffic are now expected. Reset the dashboard before the board does.
- Marketing team: Audit your three most-trafficked thought leadership PDFs and republish core claims as extractable HTML with structured data.
- SEO/GEO lead: Stand up a monthly LLM citation audit across ChatGPT, Gemini, Perplexity, and Claude (a starter sketch follows this list).
- CMO: Reallocate budget from informational SEM to owned channels where the click cannot be intercepted.
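As a starting point for that audit, the sketch below is a skeleton rather than a production tool: ask_assistant is a hypothetical placeholder you would wire to each assistant's own API or to a manual capture step, since the assistants expose citations in different ways, and example.com stands in for your owned domains.

```python
import csv
from datetime import date
from urllib.parse import urlparse

ASSISTANTS = ["ChatGPT", "Gemini", "Perplexity", "Claude"]
BRAND_DOMAINS = {"example.com"}  # placeholder: replace with your owned domains

def ask_assistant(assistant: str, query: str) -> tuple[str, list[str]]:
    """Hypothetical fetcher: return (answer_text, cited_urls) for one assistant.

    Wire this to each assistant's API or a manual capture step; it returns an
    empty result here so the harness runs end to end as a skeleton.
    """
    return "", []

def run_audit(queries: list[str], out_path: str) -> None:
    """Log, per query and assistant, whether any owned domain is cited."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "assistant", "query", "cited", "sources"])
        for query in queries:
            for assistant in ASSISTANTS:
                _answer, sources = ask_assistant(assistant, query)
                domains = {urlparse(u).netloc.removeprefix("www.") for u in sources}
                cited = "yes" if domains & BRAND_DOMAINS else "no"
                writer.writerow([date.today().isoformat(), assistant, query,
                                 cited, "; ".join(sorted(domains))])

if __name__ == "__main__":
    run_audit(["private credit default rates 2024"], "citation_audit.csv")
```

The output deliberately matches the citation_audit.csv format the share-of-citations sketch above consumes, so the monthly run feeds the KPI directly.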