Whaily

AI search crosses 1 billion weekly queries: what the numbers say about discovery

Combined AI search volume across ChatGPT, Gemini, Perplexity, and Claude now exceeds 1 billion queries per week. The shift in buyer discovery is no longer theoretical.

Abstract chart showing AI search query volume growth

Sometime in the first weeks of 2026, the combined AI search ecosystem passed a threshold that feels significant: more than 1 billion queries per week flowing through conversational AI systems. That includes ChatGPT, Google Gemini, Perplexity, Claude, and a cluster of smaller platforms.

The number itself is less important than what it implies. At this scale, AI-mediated discovery is no longer a trend to prepare for. It's an active channel where brands are being recommended or ignored right now, at volume.

Breaking down the numbers by platform

The picture isn't uniform across platforms, and the differences matter for how brands should think about coverage.

ChatGPT remains the largest single source of AI-assisted queries by a wide margin. OpenAI reported crossing 400 million weekly active users in January 2025, and usage has continued growing since. Even assuming modest query rates, ChatGPT alone likely accounts for somewhere between 500 and 600 million search-intent queries per week. Not all of those are buying-intent queries, but a meaningful share are. Enterprise and professional users, who make up a growing slice of the user base, skew heavily toward research, comparison, and decision-support queries.

Perplexity has grown quietly but aggressively. The company reported over 500 million monthly queries as of late 2025, with a user base that indexes toward technically sophisticated buyers. Monthly queries don't map neatly to weekly figures, but the trajectory is clear. Perplexity's model differs from ChatGPT's in an important way: it retrieves live web content at query time. That makes recent press coverage, fresh review site content, and up-to-date third-party mentions more directly influential in Perplexity's outputs than in closed models.

Gemini's query volume is harder to isolate because it's embedded in multiple Google products. AI Overviews in Google Search, the standalone Gemini app, and Gemini in Google Workspace all run on the same underlying model. Combining these surfaces, Gemini almost certainly handles more total AI-generated text than any other system, even if not all of it looks like traditional search.

Claude's growth has accelerated since Anthropic expanded its distribution through third-party integrations and the launch of Claude for Enterprise. Usage figures are not public, but traffic data and API trends suggest a meaningful share of professional research queries now flow through Claude.

Bar chart showing estimated weekly AI query volume across ChatGPT, Gemini, Perplexity, and Claude
Estimated weekly query volumes across major AI platforms, February 2026. Cross-platform coverage is no longer optional for brands doing AI visibility work.

The zero-click problem at scale

The zero-click problem in traditional search, where Google's featured snippets and knowledge panels answer questions without sending traffic to a website, created real tension for publishers over the past decade.

AI search takes this dynamic much further. When a user asks Perplexity "what's the best accounting software for a small agency?" and gets a confident, structured answer naming three products with reasons for each, they often have what they need to make a decision without visiting any website. There's no click. There's no session in your analytics. There's no impression in Google Search Console.

At 1 billion queries per week, the aggregate of those invisible decisions is substantial. If even 5% of AI queries are buying-intent queries where a user acts on the AI's recommendation without a web visit, that's 50 million weekly decisions happening outside traditional measurement frameworks.
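That back-of-envelope math can be checked in a few lines. The 5% buying-intent share is the illustrative assumption used above, not a measured figure:

```python
# Back-of-envelope: weekly buying decisions happening outside analytics.
# Both inputs are the article's illustrative assumptions, not measurements.
weekly_queries = 1_000_000_000   # combined weekly AI query volume
buying_intent_pct = 5            # assumed share acted on without a web visit

invisible_decisions = weekly_queries * buying_intent_pct // 100
print(f"{invisible_decisions:,} weekly decisions outside measurement")
# → 50,000,000 weekly decisions outside measurement
```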

Note

Zero-click in AI search is structurally different from zero-click in traditional search. With a featured snippet, users can see the source and visit it if they want more detail. With a conversational AI answer, the source of the recommendation is often opaque to the user. They don't necessarily know that the AI is drawing on a G2 review or a TechCrunch comparison article. The persuasion happens invisibly.

What the query mix reveals about buyer behavior

The types of queries flowing through AI search differ from traditional keyword search in ways that matter for marketing strategy.

Traditional search queries tend to be short and keyword-dense. "CRM software small business." "Best email marketing tool." These queries are easy to classify by intent, easy to build landing pages for, and well-served by SEO content that matches exact terms.

AI search queries are conversational and often include context. "We're a 12-person SaaS company switching from Salesforce because it's too expensive. What would be the best alternative?" That query encodes buyer situation, decision context, and a specific concern. No keyword-targeted landing page is built to answer it. The AI synthesizes an answer from everything it knows about CRM alternatives, pricing, and the buyer's profile.

This shift has a practical implication: the content that helps your brand appear in AI answers is not the content optimized for keyword ranking. It's the content that clearly articulates what your product does, who it's for, when it's the right choice, and how it compares to alternatives. Review site profiles, comparison articles, and detailed editorial coverage tend to be more useful here than a product page stuffed with keywords.

Diagram comparing a typical traditional search query with a typical AI search query, highlighting the difference in specificity and context
AI search queries contain far more buyer context than traditional keyword searches. This changes what content needs to exist for a brand to be recommended.

Why measurement is the first problem to solve

The volume numbers make AI search impossible to ignore. The measurement gap makes it possible to ignore, because the absence of data creates the illusion that nothing is happening.

Unlike Google Search, AI platforms don't provide webmaster tools or impression data. There's no equivalent of Search Console showing you how many times ChatGPT mentioned your brand this week, or in what context, or alongside which competitors. Without instrumented measurement, teams have no way to know whether they're being recommended, how accurately they're being described, or whether a model update shifted their visibility.

This isn't a gap that goes away as AI search matures. None of the major platforms have announced plans to offer brand-level visibility data to marketers. The measurement infrastructure has to be built from the outside.

AI Visibility Tracking

See where your brand stands in AI search

Track how ChatGPT, Gemini, Perplexity, and Claude recommend your brand vs competitors.

Start tracking free

That means running structured queries across AI platforms, sampling regularly, tracking which brands appear and how they're framed, and watching for changes. Whaily does this systematically across ChatGPT, Gemini, Perplexity, and Claude, giving teams a consistent view of their AI visibility without having to build and maintain the infrastructure themselves.

What brands should do with these numbers

The milestone of 1 billion weekly queries is a useful forcing function for conversations that have been easy to defer.

If your category has significant buyer research volume, AI search is almost certainly influencing some share of your pipeline right now. The question is whether that influence is positive, neutral, or negative, and you can't answer that without measurement.

A practical starting point is to run a structured sample of the questions your buyers actually ask, across at least three platforms, and document what each AI says about your brand. Is your brand present? Where in the list? Is the description accurate? Are the use cases cited the right ones?

That sample won't be statistically comprehensive, but it will tell you whether you have a visibility problem worth solving. At a billion queries per week, finding out the answer is worth the hour it takes to look.
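The audit described above can be sketched in a few lines. Everything here is hypothetical: the brand names, the answer texts, and the scoring heuristic. In practice the answer text would come from manually querying each platform or from their APIs:

```python
# Minimal sketch of a cross-platform brand-visibility audit.
# All brand names and answer texts below are hypothetical examples.

def audit_answers(brand: str, answers: dict[str, str]) -> dict[str, dict]:
    """For each platform, record whether `brand` appears in the answer
    and how early it is mentioned (0.0 = start, ~1.0 = end of answer)."""
    report = {}
    for platform, text in answers.items():
        pos = text.lower().find(brand.lower())
        report[platform] = {
            "mentioned": pos != -1,
            "position": round(pos / max(len(text), 1), 2) if pos != -1 else None,
        }
    return report

# Hypothetical answers to "best accounting software for a small agency?"
answers = {
    "ChatGPT": "For a small agency, consider AcmeBooks, LedgerLite, or TallyPro.",
    "Perplexity": "Top picks: LedgerLite and TallyPro. AcmeBooks also fits.",
    "Claude": "LedgerLite is a common choice for small agencies.",
}

report = audit_answers("AcmeBooks", answers)
for platform, result in report.items():
    print(platform, result)
```

A real audit would also log the run date and the exact prompt wording, since both model updates and phrasing changes can shift which brands appear.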

FAQ

Does the 1 billion query figure cover AI search broadly, or just buying-intent queries?
It's a combined figure across all query types. Not every query flowing through these platforms is a buying or discovery query. Rough estimates suggest 15 to 25% of AI queries have commercial or research intent, which still represents hundreds of millions of weekly decisions that could involve brand recommendations.

Does high AI search volume mean I should deprioritize SEO?
No. Traditional search remains the larger channel by volume. The case for AI visibility is that it's growing faster, measuring it is harder, and neglecting it creates blind spots in your marketing intelligence. Both channels deserve attention.

Are certain industries more exposed to AI search influence than others?
Yes. Software and SaaS, financial services, professional services, and B2B tools in general see higher rates of AI-assisted research than industries where purchase decisions are primarily in-store or relationship-driven. If your buyers are desk workers who research purchases online, your exposure is high.

How frequently should brands measure their AI search visibility?
Monthly is a sensible baseline. If you're running active content campaigns or suspect a model update has shifted your visibility, more frequent sampling, weekly or even daily for short windows, helps you connect cause and effect.


8 min read