OpenAI shipped a change this week that matters more for brand visibility than most model updates. ChatGPT now browses the web by default on every query, for all users, not just paid subscribers.
This is a significant departure from how ChatGPT has worked since its launch. Until now, the vast majority of ChatGPT queries were answered from training data alone. The model's knowledge had a cutoff date. If your brand had meaningful press coverage or a product update after that cutoff, ChatGPT had no reliable way to incorporate it. Web search was available as an optional feature for Plus and Pro subscribers, but most queries from most users went through the closed, static model.
That default has flipped.
Why the previous model mattered for brands
To understand why this change matters, it's worth being precise about how the old system worked.
When a user asked ChatGPT "what's the best tool for tracking AI search visibility?" without triggering a web search, the model drew entirely on patterns from its training data. That training data had a cutoff months or years in the past. A brand that launched after the cutoff might not be in the model's weights at all. A brand that had minimal coverage during the training window would appear infrequently regardless of how much momentum it built afterward.
The consequence for newer or fast-growing brands was significant. You could run a successful product launch, get covered in TechCrunch and a dozen industry publications, accumulate hundreds of G2 reviews, and still barely register in ChatGPT recommendations because the model had been trained before any of that existed.
The consequence for established brands was the mirror image. Legacy players with years of accumulated editorial coverage and review site presence had a structural advantage in training data, regardless of whether their product still deserved it.
What changes with default web browsing
ChatGPT with default web browsing operates more like Perplexity. Before generating a response, the model runs search queries against the live web and incorporates what it finds into the answer. Fresh content, published yesterday or last week, can now influence what ChatGPT says about your brand today.
The feedback loop is dramatically shorter. A new review published on a high-authority review platform can be indexed within hours and potentially incorporated into ChatGPT responses the same day.
This cuts in both directions. A critical piece in a widely-read publication can shift how ChatGPT frames your brand within days rather than waiting for a model retrain; the same speed that lets positive coverage help you also lets negative coverage hurt you.
The mechanism also means ChatGPT recommendations have become less stable. Training-data-based responses were consistent: the model would say roughly the same thing about your brand every time, because its underlying knowledge wasn't changing. With live retrieval, two users asking the same question an hour apart might get different recommendations if ChatGPT fetched different sources between the two queries.
This variability is not a bug in the retrieval system. It reflects reality. Brand reputations do change. New competitors do emerge. Recommendation volatility in AI search now roughly tracks the actual pace of change in the information landscape, which is reasonable behavior even if it creates new complexity for brand teams.
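One practical response to this variability is to stop treating any single response as ground truth and instead sample the same question repeatedly, then report a mention rate. A minimal sketch of that idea in Python; the brand names and canned response strings below are illustrative stand-ins, since in practice each sample would come from a live query to the model:

```python
import re

def mention_rate(responses: list[str], brand: str) -> float:
    """Fraction of sampled responses that mention the brand at least once."""
    # Word-boundary match so the brand name isn't found inside another token.
    pattern = re.compile(r"\b" + re.escape(brand) + r"\b", re.IGNORECASE)
    return sum(1 for r in responses if pattern.search(r)) / len(responses)

# Five illustrative samples of the same query taken over one day.
samples = [
    "For AI search visibility, many teams use Acme Analytics.",
    "Acme Analytics and ExampleCo are both worth a look.",
    "ExampleCo is a popular choice for this.",
    "Acme Analytics is frequently recommended here.",
    "There are several options, including ExampleCo.",
]

print(mention_rate(samples, "Acme Analytics"))  # 0.6 across these samples
```

A rate like this is a more honest summary of a retrieval-backed model than a yes/no snapshot, because it captures the variance the snapshot hides.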
How this narrows the gap with Perplexity
Perplexity built its entire product on retrieval-augmented generation. Every Perplexity response is grounded in live web content with citations. This gave Perplexity a distinctive positioning in the AI search market: fresh, verifiable, less likely to hallucinate confident-sounding but outdated information.
ChatGPT's move to default web browsing narrows that gap substantially. The two products now share a core architecture. Perplexity's remaining differentiation is in how it surfaces and presents citations, its default search interface, and a user base that skews toward research-oriented queries. ChatGPT's advantage is its far larger user base and the depth of its language model.
For brands, the practical implication is that the content strategies that worked for Perplexity visibility now apply to ChatGPT as well. Fresh, authoritative, easily indexable content matters more; training data presence matters relatively less. The SEO-adjacent approaches to AI visibility (earning fresh coverage, maintaining updated review site profiles, publishing regularly on high-authority platforms) become relevant to ChatGPT for the first time.
What this means for your brand strategy
Three things deserve immediate attention.
Content freshness is now a ChatGPT variable. Brands that had stale review profiles or hadn't generated fresh press coverage in a while weren't penalized in the old closed-model ChatGPT, because the model couldn't see any of it. They may now be. Auditing the freshness of your third-party presence across review platforms and editorial sources is worth doing now rather than later.
Response monitoring becomes more urgent. When ChatGPT relied on static training data, your brand's position in ChatGPT responses was predictable. A weekly check was usually sufficient to confirm nothing had changed. With live retrieval, the same query can produce different results across days. Teams that want to track ChatGPT visibility accurately need to increase their sampling frequency.
Competitor moves can affect your ChatGPT position faster than before. If a competitor gets a major press hit or a wave of positive reviews, ChatGPT can pick that up quickly. Brands that aren't monitoring their own position won't know when competitive shifts are happening.
Whaily can handle the sampling frequency that manual monitoring can't: running your query set across ChatGPT and the other major models daily, flagging changes in brand mentions or framing, and surfacing shifts that would otherwise go unnoticed.
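For day-over-day tracking, a simple diff of which tracked brands appear in responses to the same query is often enough to flag a shift. A minimal sketch, with hypothetical brand names and canned response text standing in for live model output:

```python
import re

# Hypothetical tracked set: your brand plus competitors.
BRANDS = ["Acme Analytics", "ExampleCo", "Whaily"]

def extract_mentions(response_text: str, brands: list[str]) -> set[str]:
    """Return the subset of tracked brands mentioned in a model response."""
    return {
        b for b in brands
        if re.search(r"\b" + re.escape(b) + r"\b", response_text, re.IGNORECASE)
    }

def diff_mentions(yesterday: set[str], today: set[str]) -> dict[str, set[str]]:
    """Flag brands that appeared in or dropped out of the response set."""
    return {"appeared": today - yesterday, "dropped": yesterday - today}

# Responses to the same query sampled a day apart (illustrative text).
day1 = "For tracking AI search visibility, Acme Analytics is a solid choice."
day2 = "Popular options include ExampleCo and Whaily for visibility tracking."

changes = diff_mentions(
    extract_mentions(day1, BRANDS),
    extract_mentions(day2, BRANDS),
)
print(sorted(changes["appeared"]))  # ['ExampleCo', 'Whaily']
print(sorted(changes["dropped"]))   # ['Acme Analytics']
```

Run across a full query set and all major models, this kind of diff is what turns daily sampling into an alert rather than a pile of transcripts.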
The broader shift toward retrieval-based AI
ChatGPT's move completes a transition that has been underway across the AI search landscape. Twelve months ago, the primary question for brand teams was whether to care about AI visibility at all. Six months ago, the question was which models to prioritize. Today, the architecture question is resolved: retrieval-based responses are the default, not the exception.
Gemini has operated with retrieval since its early versions. Perplexity was built on it. Claude offers web search as an integrated option. ChatGPT has now made it the default.
The implication is that the content and authority strategies that drive AI visibility are now essentially aligned across the entire major model ecosystem. There is no longer a major model that can be won purely through training data presence. Fresh, authoritative, externally visible content is the common factor across all of them.
For brand teams, this is clarifying. There is one set of activities that improves AI visibility across all models: earning high-quality external coverage, maintaining accurate and positive review site profiles, publishing credible content that gets indexed and cited, and tracking how all of that flows through to recommendations.
See where your brand stands in AI search
Track how ChatGPT, Gemini, Perplexity, and Claude recommend your brand vs competitors.
Start tracking free
FAQ
Does this affect all ChatGPT users or only certain tiers? OpenAI's announcement specifies that default web browsing applies to all users, including those on the free tier. Previous web search features were limited to Plus and Pro subscribers.
Will ChatGPT cite sources the way Perplexity does? ChatGPT's retrieval integration doesn't always surface explicit citations in the same way Perplexity does. The model may incorporate retrieved content without showing links in the response. This is a meaningful difference: Perplexity's citations let you see exactly which sources influenced the answer. ChatGPT's retrieval is less transparent.
How should I think about my ChatGPT baseline data now? Any AI visibility data you collected before this change represents the closed-model version of ChatGPT. Treat it as a separate historical baseline. Post-change data reflects retrieval-augmented ChatGPT and is not directly comparable to pre-change figures.
Does this affect how I should prioritize review site management? Yes. Review sites like G2 and Trustpilot are likely to be among the sources ChatGPT retrieves when answering product category questions. The freshness and volume of your reviews on these platforms now directly influences ChatGPT in a way it previously didn't.