Whaily

Neural Clusters

The influence map of your AI category

PageRank told you which pages the web trusted. Neural Cluster Influence (NCI) tells you which sources AI trusts in your specific category. One live map, continuously calibrated to your market.

ChatGPT
Claude
Gemini
Perplexity
DeepSeek

The problem

Citations compound, but only from the right sources.

Every category has a hidden hierarchy of sources that AI actually draws from. Some carry ten times the weight of others. If you cannot see that hierarchy, you cannot invest against it. You end up chasing the loudest sites instead of the most influential ones, and six months later the category leaders still have the citation advantage.

How it works

From sign-up to signal in minutes.

1

Whaily identifies the cluster of sources in your category

Using your prompts and the AI responses they generate, Whaily maps the full set of third-party domains that appear across answers. These form the "neural cluster" for your category.

2

Each source is scored by NCI

Whaily calculates a Neural Cluster Influence score for every source. NCI blends citation frequency, citation position, model diversity and semantic fit. Higher NCI means a source carries more weight when AI forms an answer in your category.

3

Use the map to decide where to invest

The ranked map shows which sources deserve outreach, which deserve content partnerships, and which you can ignore. Competitor presence is overlaid so the gap-to-close is obvious.
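The first step, mapping the third-party domains that appear across AI answers, can be sketched in a few lines. This is an illustrative sketch, not Whaily's implementation: it extracts cited domains from raw response text with a simple regex and counts how often each appears.

```python
import re
from collections import Counter

URL_RE = re.compile(r"https?://([\w.-]+)")

def build_cluster(responses):
    """Collect the third-party domains cited across a set of AI responses.

    `responses` is a list of raw answer strings; the returned Counter maps
    each domain to how often it appears, i.e. the raw material for a
    category's "neural cluster".
    """
    domains = Counter()
    for text in responses:
        for host in URL_RE.findall(text):
            domains[host.lower().removeprefix("www.")] += 1
    return domains

# Hypothetical captured responses, for illustration only
responses = [
    "See https://example-review.com/best-tools and https://docs.example.org/guide",
    "Top pick per https://example-review.com/rankings",
]
cluster = build_cluster(responses)
# example-review.com appears twice, docs.example.org once
```

In practice the interesting part is not the counting but what happens next: weighting those domains by position, model diversity and semantic fit, which is what NCI does.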

What you get

Everything you need, in one place.

Category-specific scoring

NCI is not a global number. Every category has its own cluster and its own hierarchy. Your scores reflect your market.

Composite signal, not just volume

Frequency, position, model diversity and semantic fit. A single citation from a high-position source beats ten from noise.

Country & language clusters

The cluster for US English is different from the one for German. Whaily scores each market independently.

Algorithm-versioned

NCI is versioned. Historical scores are preserved under the algorithm version they were calculated with — trends stay comparable.

Drill from cluster to source to prompt

Every score is explainable. Click a source to see the exact prompts and responses that gave it that score.

Visible competitor positions

Each node on the map shows which competitors are present. Target the gaps, not the noise.

See the map. Understand the shape of your category.

A visual representation of the neural cluster — sources sized by NCI, connected by the prompts they co-appear in. The shape of your category at a glance.

Screenshot slot

The NCI cluster graph — sources sized by influence, connected by shared prompts. Use the home page NciGraph component or an app-context equivalent.

/app/sources (graph view)

How NCI works

A score that reflects how AI actually builds an answer.

Large language models generate responses by retrieving from a set of trusted sources and weighting them against the query. Two sources with the same number of citations can carry wildly different authority if one is quoted near the top of responses and the other only in footnotes. NCI captures that difference.

Whaily computes NCI from the raw responses it captures. For every source, we aggregate how often it is cited, the position of the citation in the response, how many distinct models cite it, and how semantically relevant it is to the prompt set. The result is a 0–100 score that is directly comparable across sources within your category.
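As a concrete illustration of that blend, the sketch below combines the four signals into a single 0–100 score. The weights and normalisations here are hypothetical, chosen only to show the mechanics; Whaily's actual formula is not stated in this page.

```python
def nci_score(freq, avg_position, models_citing, total_models,
              semantic_fit, max_freq):
    """Illustrative composite of the four NCI signals (weights are assumed).

    avg_position is normalised so 0.0 means cited at the top of responses
    and 1.0 means cited only in footnotes; semantic_fit is in [0, 1].
    """
    frequency = freq / max_freq               # citations relative to the cluster leader
    position = 1.0 - avg_position             # earlier citations count for more
    diversity = models_citing / total_models  # fraction of models that cite the source
    blended = (0.35 * frequency + 0.25 * position
               + 0.25 * diversity + 0.15 * semantic_fit)
    return round(100 * blended, 1)

# A source cited near the top by every model outranks a noisier,
# more frequently cited one:
top = nci_score(freq=4, avg_position=0.1, models_citing=5, total_models=5,
                semantic_fit=0.9, max_freq=10)
noisy = nci_score(freq=10, avg_position=0.8, models_citing=1, total_models=5,
                  semantic_fit=0.4, max_freq=10)
```

The example mirrors the claim above: the same citation count can produce very different authority once position and model diversity are weighted in.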

Because NCI is versioned, you can trust historical trends. When we improve the algorithm, past scores are preserved under their original version and new scores run side by side until you explicitly backfill. That matters — visibility metrics are only useful if you can compare them over time.
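To make the versioning guarantee concrete, here is a minimal sketch of how versioned scores might be stored and queried. The schema is an assumption for illustration, not Whaily's data model: each score carries the algorithm version it was computed with, and a trend line is only ever drawn within one version.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NciRecord:
    source: str
    score: float
    algorithm_version: str  # scores are comparable only within one version
    captured_at: str

def trend(records, source, version):
    """Return the points for a trend line drawn on a single algorithm version."""
    return [r for r in records
            if r.source == source and r.algorithm_version == version]

# Hypothetical history: v2 runs side by side with v1 instead of overwriting it
history = [
    NciRecord("example-review.com", 62.0, "v1", "2024-01"),
    NciRecord("example-review.com", 68.0, "v1", "2024-02"),
    NciRecord("example-review.com", 71.5, "v2", "2024-02"),
]
v1_line = trend(history, "example-review.com", "v1")  # two comparable points
```

Keeping the version on every record is what lets old trends stay intact while a new algorithm is rolled out.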

A ranked NCI leaderboard for your category.

The same map as a sortable list — each source with its NCI score, citation count, source type, and which competitors appear on it. This is the version your team will actually work from.

Screenshot slot

Sources page sorted by NCI descending, with competitor presence inline and the filter bar visible.

/app/sources

Questions

The short answers.

Is NCI a public rank, like DR or DA?
No. NCI is category- and market-specific and is calculated from your own prompt runs. Two Whaily customers in different categories will see different NCI scores for the same source.
How often does NCI update?
NCI is recalculated as new prompt responses come in. For active customers it is effectively live — a new wave of citations will shift scores within hours of the prompts running.
How is this different from the NCI shown on the Citation Sources page?
They are the same number. The Neural Clusters feature is the conceptual map and the scoring system. The Citation Sources feature is the workflow layer you use to act on the map. Same underlying data, different mental model.
Can I see how NCI is broken down?
Yes. Every NCI score has an explanation view showing the components (frequency, position, model diversity, semantic fit). It is not a black box.
What counts as a "category" for clustering?
Your cluster is defined by the set of prompts you track. As you add prompts, the cluster expands. Tags let you define sub-clusters by persona, product line or market.
Does algorithm versioning affect my dashboards?
Historical NCI values keep their original algorithm version. New runs use the latest version. Dashboards are explicit about which version a trend line is drawn on, so you never compare apples to oranges without knowing it.

Ready to be recommended by AI?

Start free. See your first insights in minutes.