NEW · MCP server
Whaily, inside your AI client.
Connect Claude, Cursor, ChatGPT, or any MCP-aware client to your workspace. Ask about visibility, competitors, and citations. Take action. Build automations. All without leaving your editor.
What is the Whaily MCP server?
The same workspace, reachable from any AI client.
The Model Context Protocol (MCP) is Anthropic's open standard for letting AI agents discover and call external tools safely. Whaily speaks it natively. Once an API key is wired into your client, the agent sees a typed catalogue of every read and write the dashboard can do, and picks the right one for the question you asked.
No proprietary plugin to install. No glue code to write. The same engineering teams behind Claude Desktop, Cursor, and the OpenAI Agents SDK already implement the protocol. You connect once and the agent does the rest.
Read your data
Visibility scores, competitor benchmarks, source rankings, citation history, brand audits, and AI recommendations. 22 read tools.
Take actions
Create todos, claim sources, archive prompts, tag recommendations, and assign work to teammates. 13 write tools, governed by explicit confirmation.
Stay in control
Scoped API keys, monthly quotas per scope, per-key audit log, one-click revoke. The agent never sees tools the key cannot use.
Connecting to Whaily MCP
Set up in under two minutes.
Three steps: generate an API key, paste the config into your AI client, restart, and ask anything. The Whaily tool catalogue appears automatically.
Step 1
Generate an API key
In your Whaily workspace, open Settings → API & MCP. Click Generate key, choose the scopes you want (read / write / expensive), and copy the whaily_live_… token. It is shown once, then stored only as a bcrypt hash.
Step 2
Paste the config into your AI client
The Whaily MCP server speaks the standard Streamable HTTP transport, so any MCP-aware client works. Pick yours below.
{
  "mcpServers": {
    "whaily": {
      "url": "https://whaily.com/api/mcp",
      "headers": { "Authorization": "Bearer whaily_live_..." }
    }
  }
}
Step 3
Restart and start asking
Restart your AI client. The Whaily tools appear automatically, scoped to whatever permissions you granted the key. Try it: "How is our AI visibility this week?" and watch the agent pick the right tool.
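If you want to sanity-check the connection outside an AI client, the Streamable HTTP transport is plain JSON-RPC 2.0 over POST. The sketch below builds the request a client sends to list tools; the URL comes from this page, the request shape from the MCP specification, and the key is a placeholder.

```python
import json

MCP_URL = "https://whaily.com/api/mcp"

def build_tools_list_request(api_key: str) -> tuple[dict, bytes]:
    """Build the headers and JSON-RPC 2.0 body for an MCP tools/list call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    }
    body = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}).encode()
    return headers, body

# POST MCP_URL with these headers and body; the response enumerates
# only the tools your key's scopes allow.
headers, body = build_tools_list_request("whaily_live_...")
```

Any HTTP client can send this; the response is the same catalogue the agent sees, already filtered by the key's scopes.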
What you can do with it
Four ways teams use Whaily from inside their AI client.
Use case · Weekly status
Skip the dashboard tour. Just ask.
The agent stitches together visibility, per-model breakdown, and the prompts moving against you in one round-trip. The answer is a three-line briefing, not a chart you have to interpret. Use it for standups, board updates, and the moment your CEO Slacks you on a Sunday.
Use case · Reports
Generate a PDF brief for the marketing lead.
Ask for the executive brief and the agent calls four tools in sequence, distils the answer, and emits a styled PDF you can drop into a deck or email. The same prompt works every Friday. Turn it into a recurring schedule and the brief writes itself.
Use case · Coaching
Proactive heads-up before you ask.
Open with a vague question and the agent surfaces what is unattended: dormant recommendations, prompts you have lost, sources not citing you. Each item comes with a deep link straight to the right page in Whaily, so the agent both flags the issue and hands you the click that resolves it.
Use case · Automations
Build the recipe. In English.
Whaily exposes the data and the actions. You bring your AI agent. Describe the automation in plain English to whichever client your team already uses (Claude, Cursor, an Agents SDK script) and it reads the Whaily MCP catalogue and assembles a structured recipe for you: schedule, tools to call, filter, delivery channel. Save it, hand it to your scheduler, and the digest writes itself every Monday. The same pattern works for alerts, daily check-ins, and one-off audits.
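A recipe the agent assembles might take a shape like the one below. This is an illustrative sketch only: every field name and tool name here is hypothetical, not Whaily's actual schema.

```python
# Illustrative only: fields and tool names are hypothetical, sketching
# the kind of structured recipe an agent might assemble from the Whaily
# MCP catalogue and a plain-English description.
recipe = {
    "name": "monday-visibility-digest",
    "schedule": "0 8 * * 1",  # cron: every Monday at 08:00
    "steps": [
        {"tool": "get_visibility_summary", "args": {"window": "7d"}},
        {"tool": "get_competitor_benchmarks", "args": {"top": 5}},
    ],
    "filter": "only prompts whose score moved more than 5 points",
    "delivery": {"channel": "slack", "target": "#marketing"},
}

def tools_used(r: dict) -> list[str]:
    """List the MCP tools a recipe would call, in order."""
    return [step["tool"] for step in r["steps"]]
```

Each entry in steps maps onto one MCP tool call, so the scheduler only has to replay the list on the cron it was given.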
The full catalogue
35 tools. One typed catalogue.
Every tool has an input schema, a clear description the model reads to decide whether to call it, and a structured response. Filtered server-side by the scopes on the API key.
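That server-side filtering can be pictured as a simple membership check. A minimal sketch, with hypothetical tool names; the read / write / expensive scopes are the ones described on this page.

```python
# Minimal sketch of scope filtering. The scope taxonomy comes from this
# page; the tool names and the code itself are illustrative, not
# Whaily's implementation.
CATALOGUE = [
    {"name": "get_visibility_score", "scope": "read"},
    {"name": "create_todo",          "scope": "write"},
    {"name": "run_brand_audit",      "scope": "expensive"},
]

def visible_tools(key_scopes: set[str]) -> list[str]:
    """Return only the tools a key may see; the agent never
    learns that the rest exist."""
    return [t["name"] for t in CATALOGUE if t["scope"] in key_scopes]

print(visible_tools({"read"}))  # ['get_visibility_score']
```

A read-only key therefore receives a catalogue with no write or expensive entries at all, rather than entries it is forbidden to call.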
Works with
FAQ
Common questions about the MCP server.
What is the Whaily MCP server?
It is a remote Model Context Protocol server hosted at https://whaily.com/api/mcp. Once connected to an MCP-aware AI client, the agent can call 35 typed tools that read your Whaily workspace and take governed actions inside it. The protocol is the same standard Anthropic, OpenAI, Cursor, and others use, so there is no proprietary integration to maintain.
Which AI clients work with the Whaily MCP server?
Claude Desktop, Claude Code, Cursor, the OpenAI Agents SDK, and any other client that speaks the MCP Streamable HTTP transport. Bearer token auth is the only requirement. Claude.ai web requires OAuth, which is on the roadmap.
Do I need to be technical to set it up?
No. Generate an API key in Settings → API & MCP, paste a short config snippet into your AI client, and restart. The full setup takes under two minutes, even for non-developers. The connecting guide on this page has copy-paste configs for every supported client.
What is the difference between read, write, and expensive tools?
Read tools (22) only fetch data: visibility, competitors, sources, recommendations, audit checks, todos. Write tools (13) create, update, archive, or assign across todos, prompts, tags, and recommendations. Expensive tools fan out to LLM calls or scraping; they count against a per-org cost ceiling. API keys carry scopes, so a read-only key never even sees write or expensive tools in the catalogue the agent receives.
Is my data safe? What stops the agent from doing something it should not?
Five layers. Bcrypted Bearer keys with one-time plaintext reveal. Scope gates on every tool call. Per-key token-bucket rate limits. Per-org monthly call quotas. A complete audit log written before every response, owner-readable in Settings → API & MCP. Destructive actions like archive or delete require an explicit confirm flag, so the agent has to ask the user first. Revoke a key in one click.
Does this replace the Whaily dashboard?
No. The dashboard is still the source of truth for share-of-voice, competitor benchmarks, and the action plan. The MCP is an alternative entry point for the same data and actions, optimised for ad-hoc questions, briefs, and automations. Use whichever fits the moment.
How is this different from a regular API?
A regular API requires a developer to write code that calls endpoints, parses responses, and constructs prompts for an LLM. MCP collapses that loop. The agent discovers the tool catalogue, picks tools by name and shape, fills arguments, parses structured responses, and writes natural-language summaries, without your team writing or maintaining glue code. The same surface is available as a JSON-RPC API for the cases where you do want to script directly.
Does this cost extra?
No. On Pro plans and above, the MCP server is enabled by default with monthly call quotas matched to plan tier (Pro, Team, Enterprise). Free workspaces can browse the documentation but cannot generate keys.
Related features
Track
Track brand visibility across every AI engine
Read more
Discover
Find the sources AI cites in your category
Read more
Act
Turn visibility gaps into a shipped action plan
Read more
Ready to ask Whaily from inside Claude?
Generate an API key, paste the config, and start asking. Setup takes under two minutes. Free during onboarding.
