MCP Server

Connect Trakkr to Claude, Cursor, Windsurf, and any MCP-compatible AI assistant. Query your AI visibility data conversationally - no code required.

What is MCP?

The Model Context Protocol (MCP) is an open standard that lets AI assistants connect to external tools and data sources. Instead of copy-pasting data or writing API calls, you talk to your AI assistant in natural language and it calls the right tools behind the scenes.

The Trakkr MCP server wraps the entire Trakkr API into 19 tools that any MCP-compatible assistant can use. Once connected, you can ask things like "How is my brand doing in AI search?" and get answers backed by live data from your Trakkr account.
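Under the hood, your assistant issues MCP `tools/call` requests over JSON-RPC 2.0. A call to one of the Trakkr tools looks roughly like this (the tool name comes from the list below; the `brand_id` argument is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_visibility_scores",
    "arguments": { "brand_id": "brand_123" }
  }
}
```

You never write these requests yourself; the assistant generates them from your natural language question.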

Natural language
Ask questions in plain English. No endpoints, no parameters.
19 tools
Full coverage: scores, citations, competitors, content, reports, and more.
Works everywhere
Claude Desktop, Cursor, Windsurf, and any MCP-compatible client.

Quick Start

Three steps to connect your AI assistant to Trakkr.

1. Get your API key

Generate an API key from Settings → API in your Trakkr dashboard. Keys start with sk_live_.

API access requires the Scale plan ($399/mo) or higher.
2. Add the config to your AI assistant

Copy the JSON configuration from the panel on the right and paste it into your assistant's MCP config file. Replace sk_live_your_key_here with your actual API key. See the tabs for file paths per assistant.

3. Restart and start asking questions

Restart your AI assistant so it picks up the new config. You should see "Trakkr" listed as a connected MCP server. Start with "How is my brand doing in AI search?" to verify the connection.

Installation

The Trakkr MCP server is a Python package. You have two options:

Option A: uvx (recommended)

If you have uv installed, use uvx trakkr-mcp in your MCP config. This auto-installs and runs in an isolated environment with zero setup. This is the approach used in the config examples on this page.

brew install uv
Option B: pip install

Install the package globally or in a virtual environment, then reference the trakkr-mcp command in your config.

pip install trakkr-mcp
Requires Python 3.10 or higher. Check with python3 --version.
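If you go with Option B, point the config at the installed trakkr-mcp command instead of uvx. A minimal sketch, using the same env block as the uvx version:

```json
{
  "mcpServers": {
    "trakkr": {
      "command": "trakkr-mcp",
      "env": {
        "TRAKKR_API_KEY": "sk_live_your_key_here"
      }
    }
  }
}
```

If you installed into a virtual environment, make sure the command resolves on the PATH your assistant uses, or give the full path to the executable.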

Configuration

Each AI assistant stores MCP config in a different location. The JSON structure is the same for all of them.

Cursor: .cursor/mcp.json (project root or global)
Claude Desktop: ~/Library/Application Support/Claude/claude_desktop_config.json
Windsurf: ~/.codeium/windsurf/mcp_config.json
Other: check your assistant's MCP documentation for the config file location.
Keep your API key secure. Don't commit MCP config files containing your key to version control. Add the config file to .gitignore or use an environment variable instead.

How It Works

When you ask your AI assistant a question about your brand's AI visibility, here's what happens:

1. You ask a question
"Which pages get cited most by Perplexity?"

2. Assistant selects tools
It decides to call list_brands (to get your brand ID), then get_citations with the sources view.

3. MCP server calls the API
The server translates tool calls into authenticated requests to api.trakkr.ai.

4. You get a natural language answer
The assistant interprets the JSON response and presents it as a readable summary with key insights.

Your assistant handles tool selection, parameter mapping, pagination, and error handling automatically. For multi-step queries, it chains tools together - for example, fetching your brand ID first, then using it to pull scores and citations.
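The chaining described above can be sketched in a few lines. The functions here are stand-ins for the real MCP tool calls, and the response fields are assumptions for illustration only:

```python
# Illustrative sketch of how an assistant chains Trakkr MCP tools.
# Real calls go through the MCP protocol; these stubs and their
# response shapes are assumptions, not the actual API schema.

def list_brands():
    return [{"id": "brand_123", "name": "Acme"}]

def get_citations(brand_id, view="sources"):
    return {"brand_id": brand_id, "view": view,
            "top_sources": ["acme.com/pricing", "acme.com/blog"]}

# Step one: resolve the brand ID. Step two: use it in the next call.
brand = list_brands()[0]
citations = get_citations(brand["id"], view="sources")
print(citations["top_sources"])
```

The assistant performs this resolve-then-query pattern automatically whenever a tool needs an identifier returned by an earlier tool.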

Available Tools

The MCP server exposes 19 tools organized into four groups. Your assistant automatically picks the right tool based on your question.

Core (4 tools)
Visibility (4 tools)
Intelligence (6 tools)
Actions (5 tools)

API Endpoint Mapping

Each MCP tool maps to a Trakkr API endpoint. The MCP server handles authentication, request formatting, and error handling for you. If you need more control, you can call the API directly.

MCP Tool → API Endpoint
list_brands → /get-brands
get_visibility_scores → /get-scores
list_prompts → /get-prompts
manage_prompt → /get-prompts
get_citations → /get-citations
get_rankings → /get-rankings
get_model_breakdown → /get-models
get_competitors → /get-competitor-data
get_opportunities → /get-opportunities
get_content_ideas → /get-content-ideas
get_perception → /get-perception
get_prism → /prism
get_crawler_analytics → /get-crawler
get_narratives → /narratives
run_diagnosis → /diagnose
get_diagnosis_result → /diagnose
generate_report → /get-reports
get_reports → /get-reports
export_data → /export
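For a direct call, a request corresponding to list_brands might be built like this. This is a sketch: the bearer-token auth scheme and the exact endpoint URLs are assumptions here, so verify them against the API reference before use:

```python
from urllib.request import Request

API_BASE = "https://api.trakkr.ai"

def build_request(endpoint: str, api_key: str) -> Request:
    # Bearer auth is an assumption; check the API docs for the
    # actual authentication header the API expects.
    return Request(f"{API_BASE}/{endpoint}",
                   headers={"Authorization": f"Bearer {api_key}"})

req = build_request("get-brands", "sk_live_your_key_here")
print(req.full_url)  # https://api.trakkr.ai/get-brands
```

The MCP server does exactly this translation for you, adding error handling and response parsing on top.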

Example Conversations

Here are practical examples of how the MCP server works. Your assistant picks the right tools automatically based on your question.

“How is my brand doing in AI search?”
Tools used: list_brands, get_visibility_scores
Fetches your brand, then returns your current visibility score, presence rate, and 90-day trend across all tracked AI models.

“Which competitors are gaining ground?”
Tools used: get_competitors
Calls get_competitors with view='threats' to surface competitors whose visibility is rising and areas where your lead is shrinking.

“What content should I create next?”
Tools used: get_opportunities, get_content_ideas
Finds citation gaps (queries where competitors are cited but you aren't), then generates prioritized content ideas to close those gaps.

“Run a diagnosis on "best CRM for startups"”
Tools used: run_diagnosis, get_diagnosis_result
Triggers a live diagnosis across ChatGPT, Perplexity, Gemini, and Claude, then retrieves the full analysis with citations and rankings.

“Show me my citation trends over the last quarter”
Tools used: get_citations
Returns citation history with view='history' and days=90, showing how your citation count has changed week over week.

“How does Perplexity describe my brand vs competitors?”
Tools used: get_perception, get_competitors
Pulls perception analysis showing how Perplexity positions your brand, then compares it with competitors' positioning using the by-model view.

Advanced Workflows

Your AI assistant can chain multiple tools together for complex analysis. Here are some powerful multi-step workflows you can try.

Weekly competitive briefing
"Give me a weekly briefing: my visibility trend, top competitor threats, and any new citation gaps." Your assistant chains get_visibility_scores, get_competitors (threats view), and get_opportunities into a single report.
Content pipeline
"Find my biggest citation gaps and generate content ideas for the top 5." Uses get_opportunities to find gaps, then get_content_ideas for targeted suggestions.
Model-specific analysis
"How do I perform on ChatGPT vs Perplexity vs Gemini? Which one should I focus on?" Combines get_model_breakdown with get_competitors (by-model view) for a platform-specific strategy.
Prompt management
"Add these 10 prompts to track: [list]. Then show me which ones already have citations." Uses manage_prompt (create) in a loop, then get_citations to check coverage.

Error Handling

The MCP server translates API errors into clear, human-readable messages. Your assistant will see these messages and can explain what went wrong.

401 (Invalid or missing API key): Check your TRAKKR_API_KEY in the MCP config.
403 (Access denied / paid plan required): Upgrade your plan or check brand permissions.
404 (Resource not found): Verify the brand_id or other identifiers.
429 (Rate limited): Wait a moment. Limits are 60 reads/min and 30 writes/min.
5xx (Temporarily unavailable): Retry after a few seconds. The response includes a request ID for support.
Timeout (Request timed out after 60s): For long-running operations, poll for results instead.

See the Errors reference for the full list of API error codes and response formats.
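If you script against the API yourself rather than going through the MCP server, a simple client-side retry for 429 and 5xx responses might look like this (a sketch; the stub below stands in for a real HTTP call):

```python
import time

def call_with_retry(fn, max_attempts=4, base_delay=1.0):
    # Retry transient failures (rate limits, 5xx) with exponential
    # backoff; return the last response once attempts are exhausted.
    for attempt in range(max_attempts):
        status, body = fn()
        if status == 429 or status >= 500:
            if attempt < max_attempts - 1:
                time.sleep(base_delay * 2 ** attempt)
                continue
        return status, body
    return status, body

# Stub that fails once with a rate limit, then succeeds.
calls = iter([(429, "rate limited"), (200, "ok")])
print(call_with_retry(lambda: next(calls), base_delay=0.01))  # (200, 'ok')
```

Keep the backoff generous enough to stay under the 60 reads/min and 30 writes/min limits.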

Troubleshooting

"TRAKKR_API_KEY environment variable is required"
Your API key isn't set. Make sure the env block in your MCP config includes a valid key starting with sk_live_.
Tools appear but return "Invalid or missing API key"
The key is being passed but is invalid. Double-check you copied the full key from Settings and that it hasn't been revoked.
"Access denied. This feature may require a paid plan"
All API access requires a paid plan. Some features, like narratives, require the Scale plan. Check your plan at Settings → Billing.
Tools aren't showing up in my AI assistant
Restart your assistant after adding the config. Make sure you have Python 3.10+ and uv installed (brew install uv on macOS).
"Rate limited. Wait a moment and try again"
The MCP server respects API rate limits (60 reads/min, 30 writes/min). Your assistant will retry automatically in most cases.
"Request timed out"
Some operations (diagnosis, report generation) take longer. The server uses a 60-second timeout. For diagnosis, use get_diagnosis_result to poll for results.
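The kick-off-then-poll pattern for diagnosis can be sketched like this. The stubbed responses and the "pending"/"complete" status values are illustrative assumptions, not the documented response schema:

```python
import time

def poll_for_result(get_result, poll_interval=2.0, max_polls=20):
    # Poll a slow operation (e.g. get_diagnosis_result) until it
    # reports completion, or give up after max_polls attempts.
    for _ in range(max_polls):
        result = get_result()
        if result["status"] == "complete":
            return result
        time.sleep(poll_interval)
    raise TimeoutError("diagnosis did not complete in time")

# Stub: two "pending" responses, then a completed one.
responses = iter([{"status": "pending"}] * 2
                 + [{"status": "complete", "score": 72}])
print(poll_for_result(lambda: next(responses), poll_interval=0.0))
```

Your assistant applies this pattern on its own when you ask for a diagnosis: run_diagnosis starts the job and get_diagnosis_result retrieves it once ready.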

Requirements

Python 3.10 or higher
A Trakkr API key (get one here)
An MCP-compatible AI assistant (Claude Desktop, Cursor, Windsurf, etc.)
uv (recommended for zero-setup install via uvx)

Package Info

Package: trakkr-mcp
Version: 0.1.0
Python: >= 3.10
Dependencies: mcp[cli] >= 1.0, httpx >= 0.27
License: MIT

Code Example

Install
pip install trakkr-mcp
.cursor/mcp.json in your project root (or global config)
{
  "mcpServers": {
    "trakkr": {
      "command": "uvx",
      "args": [
        "trakkr-mcp"
      ],
      "env": {
        "TRAKKR_API_KEY": "sk_live_your_key_here"
      }
    }
  }
}
Environment Variable
# Alternative to env block in config
export TRAKKR_API_KEY="sk_live_..."