GA4 now isolates ChatGPT and Gemini traffic by default
Measurement was the last excuse for ignoring LLM traffic. Google just removed it from every GA4 property by default.
Key takeaways
- GA4 now breaks out AI assistant traffic as its own channel in the default channel group.
- ChatGPT, Gemini and other LLM referrers no longer get buried in "Referral" or "Unassigned".
- The measurement gap that blocked LLM optimisation budgets is closed.
- Expect AI Assistants to appear in board-level traffic reports within a quarter.
- Multilaterals and industrial brands can now tie LLM citations to actual session data.
What happened
Per Search Engine Journal, Google Analytics has added "AI Assistants" as a channel in GA4's default channel group. Traffic from recognised chatbot referrers, including ChatGPT and Gemini, will now appear as its own line item instead of getting buried inside "Referral" or "Unassigned."
The change is live by default. Marketers no longer have to build custom channel groups, regex-match UTM sources, or scrape server logs to see how much traffic an LLM is sending them. Google has accepted that AI assistant traffic is a category, not a curiosity.
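For context, the manual workaround looked something like the sketch below: classify each session's referrer hostname into a channel bucket. The hostnames here are illustrative; the exact referrer list GA4 matches against is not public.

```python
import re

# Illustrative AI assistant referrer hostnames (assumption: GA4's actual
# recognised set is not published; these are common, known referrers).
AI_ASSISTANT_PATTERN = re.compile(
    r"(^|\.)(chatgpt\.com|chat\.openai\.com|gemini\.google\.com|"
    r"perplexity\.ai|copilot\.microsoft\.com)$"
)

def classify_channel(referrer_host: str) -> str:
    """Bucket a session's referrer host into a simplified channel group."""
    if not referrer_host:
        return "Direct"
    if AI_ASSISTANT_PATTERN.search(referrer_host.lower()):
        return "AI Assistants"
    return "Referral"
```

This is the kind of bespoke regex work that a default channel group makes unnecessary: the same grouping now arrives in every property without configuration.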
For brands that have spent the last 18 months arguing internally about whether LLM visibility actually drives any business, the spreadsheet just got simpler.
Why it matters for your brand
The single biggest blocker to investment in AI search has been measurement. CMOs at large financial institutions and industrial groups have told us, repeatedly, that they cannot get budget approved for work they cannot count. Procurement does not fund "vibes." Now there is a default row in the standard analytics tool that every digital team in the world already uses. That changes the politics of the conversation.
Expect three things to follow quickly. First, AI assistant traffic will get benchmarked against organic search inside quarterly board decks. For most B2B brands the ratio is currently somewhere between 1% and 5%, but it is growing fast and the trajectory is what matters. Second, content teams will start attributing pipeline to specific LLM referrers, which means the next round of SEO RFPs will explicitly ask agencies to report on ChatGPT and Gemini sessions. Third, the "AI traffic is junk traffic" objection will get tested with real conversion data instead of assertion.
For financial services brands, this is the moment to stop treating LLM optimisation as an experiment line and start treating it as a measurable channel. A wealth manager whose research notes get cited by ChatGPT can now show, in GA4, the click-throughs and downstream events that follow. The compliance team will still have concerns. The CFO will have fewer.
For multilateral institutions and policy bodies, the implication is sharper. Organisations like the UN system, the World Bank, and the IMF produce the kind of authoritative reference content that LLMs lean on heavily. Until now, the citation was the only measurable artefact. With AI Assistants as a default channel, comms teams can finally show traffic from those citations landing on policy briefs and data portals. That is the difference between "we are quoted in ChatGPT" and "ChatGPT sent 14,000 policymakers to our latest report." One is anecdote. The other is a KPI.
For major industrial groups, the change matters most in procurement-led buying journeys. Specifiers researching cement specifications, industrial coatings, or compliance standards increasingly start in an LLM. If GA4 shows a measurable trickle of those sessions converting to spec-sheet downloads or distributor enquiries, the case for investing in technical content that LLMs prefer (structured data, clear authorship, machine-readable specs) writes itself.
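One concrete form of "machine-readable specs" is schema.org structured data embedded in product pages. A minimal sketch, emitting the JSON-LD payload for a spec page; the product name and property values are invented for illustration:

```python
import json

# Hypothetical product spec; all names and values are illustrative, not real data.
spec_sheet = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Rapid-Set Cement 52.5R",
    "brand": {"@type": "Brand", "name": "ExampleIndustrial"},
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "Compressive strength (28 d)",
         "value": "52.5", "unitText": "MPa"},
        {"@type": "PropertyValue", "name": "Conformance standard",
         "value": "EN 197-1"},
    ],
}

# The serialised object would be embedded on the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(spec_sheet, indent=2))
```

Structured markup like this gives crawlers, including the AI crawlers, unambiguous property-value pairs to extract rather than prose to parse.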
Philanthropic and policy institutions should watch this for a different reason. Foundations have struggled to demonstrate the reach of the research their grantees produce. AI Assistant traffic, tracked at the URL level, becomes a new line in impact reporting. Expect the next generation of MEL frameworks to include it.
The signal in context
Default channel groups in GA4 are not a small product decision. They define what gets reported, what gets optimised, and what gets resourced. When Google added "Organic Shopping" and "Organic Video" as defaults, those categories went from invisible to mandatory inside enterprise dashboards within a quarter. The same will happen here. By Q1 next year, board reports at every serious B2B brand will have a row for AI Assistants, and the line items below it (ChatGPT, Gemini, Perplexity, Copilot) will start carrying their own targets.
This also sharpens a trend that has been building all year: the platforms are formalising AI traffic as a first-class entity. Cloudflare introduced bot-level controls for AI crawlers. ChatGPT sessions now arrive with referrer values that identify them. Now Google is wiring AI assistants into the default reporting layer. The infrastructure for measuring and monetising LLM-originated traffic is being built in public, and brands that wait for the dashboards to mature before acting will be reporting on a channel their competitors have already optimised.