MCP Server
Connect Trakkr to Claude, Cursor, Windsurf, and any MCP-compatible AI assistant. Query your AI visibility data conversationally - no code required.
What is MCP?
The Model Context Protocol (MCP) is an open standard that lets AI assistants connect to external tools and data sources. Instead of copy-pasting data or writing API calls, you talk to your AI assistant in natural language and it calls the right tools behind the scenes.
The Trakkr MCP server wraps the entire Trakkr API into 19 tools that any MCP-compatible assistant can use. Once connected, you can ask things like "How is my brand doing in AI search?" and get answers backed by live data from your Trakkr account.
Quick Start
Three steps to connect your AI assistant to Trakkr.
Generate an API key from Settings → API in your Trakkr dashboard. Keys start with sk_live_.
Copy the JSON configuration from the panel on the right and paste it into your assistant's MCP config file. Replace sk_live_your_key_here with your actual API key. See the tabs for file paths per assistant.
Restart your AI assistant so it picks up the new config. You should see "Trakkr" listed as a connected MCP server. Start with "How is my brand doing in AI search?" to verify the connection.
Installation
The Trakkr MCP server is a Python package. You have two options:
- uvx (zero setup): If you have `uv` installed, use `uvx trakkr-mcp` in your MCP config. It auto-installs and runs the server in an isolated environment. This is the approach used in the config examples on this page.
- pip: Install the package (`pip install trakkr-mcp`) globally or in a virtual environment, then reference the `trakkr-mcp` command in your config.
Both options require Python 3.10 or newer; check your version with `python3 --version`.

Configuration
Each AI assistant stores MCP config in a different location. The JSON structure is the same for all of them.
| Assistant | Config File |
|---|---|
| Cursor | `.cursor/mcp.json` (project root or global) |
| Claude Desktop | `~/Library/Application Support/Claude/claude_desktop_config.json` |
| Windsurf | `~/.codeium/windsurf/mcp_config.json` |
| Other | Check your assistant's MCP documentation for the config file location. |
Keep your API key out of version control: add the config file to `.gitignore` or use an environment variable instead.

How It Works
When you ask your AI assistant a question about your brand's AI visibility, here's what happens:
Your assistant handles tool selection, parameter mapping, pagination, and error handling automatically. For multi-step queries, it chains tools together - for example, fetching your brand ID first, then using it to pull scores and citations.
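That two-step chain can be sketched in Python. This is an illustrative model, not the assistant's actual implementation: `call_tool` stands in for whatever MCP client session your assistant uses, and the response shapes (`brands`, `id`, `score`) are assumptions, not the documented schema.

```python
# Sketch of the tool chaining an assistant performs: resolve the brand
# ID first, then use it to pull visibility scores. call_tool(name, args)
# is a stand-in for the MCP client's tool-invocation method.

def scores_for_first_brand(call_tool):
    """Chain list_brands -> get_visibility_scores for the first brand."""
    brands = call_tool("list_brands", {})            # step 1: find the brand
    brand_id = brands["brands"][0]["id"]
    return call_tool("get_visibility_scores",        # step 2: use its ID
                     {"brand_id": brand_id})

# Usage with a stub in place of a live MCP session (response shapes
# here are invented for the demo):
def fake_call(name, args):
    if name == "list_brands":
        return {"brands": [{"id": "br_123", "name": "Acme"}]}
    return {"brand_id": args["brand_id"], "score": 72}

result = scores_for_first_brand(fake_call)
```

The dependency-injected `call_tool` keeps the chaining logic separate from any particular MCP client library.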
Available Tools
The MCP server exposes 19 tools organized into four groups. Click any tool to see its parameters and views. Your assistant automatically picks the right tool based on your question.
API Endpoint Mapping
Each MCP tool maps to a Trakkr API endpoint. The MCP server handles authentication, request formatting, and error handling for you. If you need more control, you can call the API directly.
| MCP Tool | API Endpoint | Docs |
|---|---|---|
| `list_brands` | `/get-brands` | View |
| `get_visibility_scores` | `/get-scores` | View |
| `list_prompts` | `/get-prompts` | View |
| `manage_prompt` | `/get-prompts` | View |
| `get_citations` | `/get-citations` | View |
| `get_rankings` | `/get-rankings` | View |
| `get_model_breakdown` | `/get-models` | View |
| `get_competitors` | `/get-competitor-data` | View |
| `get_opportunities` | `/get-opportunities` | View |
| `get_content_ideas` | `/get-content-ideas` | View |
| `get_perception` | `/get-perception` | View |
| `get_prism` | `/prism` | View |
| `get_crawler_analytics` | `/get-crawler` | View |
| `get_narratives` | `/narratives` | View |
| `run_diagnosis` | `/diagnose` | View |
| `get_diagnosis_result` | `/diagnose` | View |
| `generate_report` | `/get-reports` | View |
| `get_reports` | `/get-reports` | View |
| `export_data` | `/export` | View |
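If you do want to call an endpoint from the table directly, a minimal sketch using only the Python standard library looks like this. The base URL `https://api.trakkr.ai` is an assumption for illustration; substitute the one from the API reference.

```python
import urllib.request

# Assumed base URL -- replace with the one from the Trakkr API docs.
BASE_URL = "https://api.trakkr.ai"

# Build (but do not send) an authenticated GET to /get-brands,
# the endpoint behind the list_brands MCP tool.
req = urllib.request.Request(
    BASE_URL + "/get-brands",
    headers={"Authorization": "Bearer sk_live_your_key_here"},
)
print(req.full_url)  # https://api.trakkr.ai/get-brands
# response = urllib.request.urlopen(req)  # uncomment to actually send
```

The MCP server does exactly this kind of request construction for you, plus error translation and pagination.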
Example Conversations
Here are practical examples of how the MCP server works. Your assistant picks the right tools automatically based on your question.
The example cards draw on tools such as `list_brands`, `get_visibility_scores`, `get_competitors`, `get_opportunities`, `get_content_ideas`, `run_diagnosis`, `get_diagnosis_result`, `get_citations`, and `get_perception`.

Advanced Workflows
Your AI assistant can chain multiple tools together for complex analysis. Here are some powerful multi-step workflows you can try.
- Combine `get_visibility_scores`, `get_competitors` (threats view), and `get_opportunities` into a single report.
- Use `get_opportunities` to find gaps, then `get_content_ideas` for targeted suggestions.
- Pair `get_model_breakdown` with `get_competitors` (by-model view) for a platform-specific strategy.
- Call `manage_prompt` (create) in a loop, then `get_citations` to check coverage.

Error Handling
The MCP server translates API errors into clear, human-readable messages. Your assistant will see these messages and can explain what went wrong.
| Status | Message | What to Do |
|---|---|---|
| 401 | Invalid or missing API key | Check your `TRAKKR_API_KEY` in the MCP config. |
| 403 | Access denied / paid plan required | Upgrade your plan or check brand permissions. |
| 404 | Resource not found | Verify the `brand_id` or other identifiers. |
| 429 | Rate limited | Wait a moment. Limits: 60 reads/min, 30 writes/min. |
| 5xx | Temporarily unavailable | Retry after a few seconds. The response includes a request ID for support. |
| Timeout | Request timed out (60s) | For long-running operations, poll for results instead. |
See the Errors reference for the full list of API error codes and response formats.
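If you bypass the MCP server and call the API yourself, you can mirror its error translation with a small helper. This is a minimal sketch following the table above; the function name and exact wording are illustrative, not part of the package.

```python
def explain_status(status: int) -> str:
    """Map an HTTP status code to a human-readable message,
    following the Trakkr MCP server's error table."""
    fixed = {
        401: "Invalid or missing API key. Check TRAKKR_API_KEY in the MCP config.",
        403: "Access denied: a paid plan or brand permission is required.",
        404: "Resource not found. Verify the brand_id or other identifiers.",
        429: "Rate limited (60 reads/min, 30 writes/min). Wait a moment and retry.",
    }
    if status in fixed:
        return fixed[status]
    if 500 <= status < 600:
        return "Temporarily unavailable. Retry after a few seconds."
    return f"Unexpected status {status}"

print(explain_status(429))
```

Keeping the mapping in one function makes it easy for a caller to log the friendly message while still branching on the raw status code for retries.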
Troubleshooting
Requirements
Package Info
| Package | trakkr-mcp |
| Version | 0.1.0 |
| Python | >= 3.10 |
| Dependencies | mcp[cli] >= 1.0, httpx >= 0.27 |
| License | MIT |
Code Example
```shell
pip install trakkr-mcp
```

```json
{
  "mcpServers": {
    "trakkr": {
      "command": "uvx",
      "args": ["trakkr-mcp"],
      "env": {
        "TRAKKR_API_KEY": "sk_live_your_key_here"
      }
    }
  }
}
```

```shell
# Alternative to the env block in the config
export TRAKKR_API_KEY="sk_live_..."
```