A new measurement framework for AI search visibility
Three layers, three instruments. A framework that lets CMOs separate whether their brand appears in AI answers, whether it is built to be retrieved and cited, and whether any of it moves revenue.
Key takeaways
- Aleyda Solis proposes three measurement layers: AI presence, AI readiness, and AI business impact.
- Most enterprise dashboards conflate the three, which is why click-loss narratives dominate budget conversations.
- Readiness, not content volume, is where most B2B brands have their real AI visibility gap.
- Self-reported attribution (sales calls, surveys, deal-source fields) is now a primary input, not a supplement.
- Whoever defines the AI search dashboard defines what gets funded next year.
What happened
Per Aleyda Solis, the standard SEO measurement stack (rankings, clicks, sessions) no longer captures what matters in an AI search environment, and most teams are bending old dashboards to fit a new reality rather than building for it. Her response: a three-layer framework that separates AI presence, AI readiness, and AI business impact into distinct measurement disciplines.
Solis argues that presence (whether your brand appears in AI answers), readiness (whether your content, site, and entity signals are structured to be retrieved and cited), and business impact (whether AI-driven referrals and assisted conversions actually move revenue) are three different questions that demand three different instruments. Conflating them produces the dashboards most marketing leaders are looking at right now: confident, busy, and wrong.
The framework matters less as a taxonomy and more as a diagnosis. Solis is naming what every senior marketer with an AI search line item has felt for the last twelve months: the existing reports answer the wrong question.
Why it matters for your brand
The first practical consequence is budget defense. CMOs at financial services groups, multilateral institutions, and industrial companies are being asked to justify spend on content, PR, and SEO against declining organic clicks. Without a presence layer (share of voice in ChatGPT, Perplexity, Google AI Overviews, Gemini, Copilot answers for a defined prompt set), the conversation collapses into a click-loss narrative. Solis's framework gives marketing leaders a way to say: clicks are down, presence is up, and here is the prompt set that proves it.
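A presence layer can start very simply once answers for a defined prompt set are being captured. A minimal sketch, assuming the answer texts have already been collected from each engine (the brand names and answers below are hypothetical):

```python
from collections import Counter

def share_of_voice(answers, brands):
    """For each brand, the fraction of collected AI answers
    that mention it at least once (case-insensitive)."""
    counts = Counter()
    for text in answers:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    total = len(answers) or 1
    return {brand: counts[brand] / total for brand in brands}

# Hypothetical answers captured for one prompt set, one engine, one week
answers = [
    "For counterparty risk, Acme Bank and Globex both publish frameworks...",
    "Globex's 2024 guidance is widely cited by practitioners...",
    "Most treasurers point to Acme Bank's methodology first...",
]
print(share_of_voice(answers, ["Acme Bank", "Globex"]))
```

Tracked weekly per engine, this is enough to show "clicks are down, presence is up" with the prompt set as evidence.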
The readiness layer is where most B2B brands will find their actual gap. Readiness is not a single score. It is a stack: entity consolidation across Wikidata and Wikipedia, structured data coverage, crawlability for OAI-SearchBot and PerplexityBot, content formatted for extractive answers, and third-party corroboration on the sources LLMs trust. A development bank with strong domain authority but weak entity disambiguation will underperform a smaller think tank that has cleaned up its knowledge graph footprint. Readiness audits, not keyword audits, are the new diagnostic.
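The crawlability item in that stack is checkable mechanically. A minimal sketch using Python's standard urllib.robotparser to test whether the AI crawlers named above can fetch a given URL under a site's robots.txt (the rules and URL shown are hypothetical):

```python
import urllib.robotparser

# Hypothetical robots.txt: one whitepaper directory blocked for PerplexityBot
ROBOTS_TXT = """\
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Disallow: /whitepapers/
Allow: /
"""

AI_CRAWLERS = ["OAI-SearchBot", "PerplexityBot"]

def crawlability(robots_txt, url, agents):
    """Return, per AI crawler, whether the URL is fetchable
    under the given robots.txt rules."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {agent: rp.can_fetch(agent, url) for agent in agents}

print(crawlability(ROBOTS_TXT, "https://example.com/whitepapers/risk.html", AI_CRAWLERS))
# {'OAI-SearchBot': True, 'PerplexityBot': False}
```

Running this across a sitemap turns "crawlability" from an abstract readiness item into a per-URL, per-bot report.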
For financial services brands, the business impact layer is the hardest and the most important. AI-assisted journeys rarely produce a clean last-click attribution. A treasurer asking Copilot about counterparty risk frameworks, then visiting two vendor sites three days later, will show up in analytics as direct or branded organic. Solis's framework legitimises self-reported attribution (post-purchase surveys, sales call transcripts, deal-source fields) as a primary input rather than a soft supplement. Marketers who refuse to instrument this will keep losing budget arguments they should be winning.
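Instrumenting self-reported attribution can be as plain as tagging a deal-source field and rolling it up. A sketch, assuming a CRM export with a hypothetical "ai_assistant" source tag and invented deal values:

```python
# Hypothetical deal-source field values pulled from a CRM export;
# "ai_assistant" is an invented tag for self-reported AI-driven discovery
deals = [
    {"id": "D-101", "source": "ai_assistant", "value": 120_000},
    {"id": "D-102", "source": "referral",     "value": 80_000},
    {"id": "D-103", "source": "ai_assistant", "value": 200_000},
    {"id": "D-104", "source": "organic",      "value": 50_000},
]

def ai_assisted_share(deals):
    """Share of pipeline value self-reported as AI-assisted."""
    total = sum(d["value"] for d in deals)
    ai = sum(d["value"] for d in deals if d["source"] == "ai_assistant")
    return ai / total if total else 0.0

print(f"{ai_assisted_share(deals):.0%}")  # 71%
```

The number is only as good as the deal-source hygiene behind it, but even a rough share is a stronger budget argument than last-click silence.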
For multilaterals and policy institutions, presence measurement is the urgent layer. UNDRR, CGAP, ISO, and IEEE compete for citation in AI answers about standards, risk, and development finance. If a journalist, civil servant, or program officer asks ChatGPT to summarise the state of disaster risk reduction financing and a competing institution's framing is cited, that is a brand event, not a marketing event. Without a presence dashboard, nobody at the institution knows it happened.
For industrial groups, the readiness layer is the lever. Holcim, Siemens, or Schneider compete on technical authority. LLMs cite technical authority when it is structured, machine-readable, and corroborated. Product documentation, sustainability disclosures, and engineering whitepapers that are PDF-locked or buried in JavaScript are invisible to the systems now mediating buyer research. Readiness work is mostly an information architecture project, not a content project.
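One concrete piece of that information architecture work is publishing structured data alongside the HTML. A sketch of a JSON-LD block for an engineering whitepaper, using schema.org's TechArticle type (the organization and field values are invented):

```python
import json

# Hypothetical JSON-LD for an engineering whitepaper, emitted into the
# page <head> so the document is machine-readable rather than PDF-locked
whitepaper = {
    "@context": "https://schema.org",
    "@type": "TechArticle",
    "headline": "Low-carbon cement: performance benchmarks",
    "author": {"@type": "Organization", "name": "Example Industrial Group"},
    "datePublished": "2025-03-01",
    "about": "sustainability disclosure",
}
print(json.dumps(whitepaper, indent=2))
```

Structured data coverage like this is auditable page by page, which is what makes readiness a measurable layer rather than a vibe.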
The signal in context
Solis is formalising what a growing number of practitioners have been improvising. Profound, Peec, Athena, Scrunch, and Conductor's AI module all sell some version of presence tracking. Semrush and Ahrefs have shipped AI visibility features. None of them, on their own, give a CMO the full three-layer picture. The market is fragmenting into presence-monitoring tools, technical readiness auditors, and attribution platforms, and most enterprise marketing teams are buying one and pretending it covers the other two.
The strategic point for senior marketers is that measurement frameworks shape budget. Whoever defines the dashboard defines what gets funded. If your AI search reporting only shows presence (mentions in answers), you will overinvest in PR and underinvest in technical readiness. If it only shows readiness scores, you will ship schema markup nobody is searching for. Solis's contribution is insisting that all three layers report into the same review, and that is the version of the dashboard CMOs at large institutions should be asking their teams to build now.