From Chat to Action: Building MCP for AI Native Analytics
Introduction
Until recently, giving AI tools deep analytical context meant manually feeding exported data or API responses into a chatbot. That has changed. With the advent of the Model Context Protocol (MCP), we now have a standardized bridge that connects Large Language Models (LLMs) directly to the specialized data and services they need to be effective.
At GoodData, we see MCP as more than just a standard; it is a foundational pillar of our AI-native vision. To be AI-native means that AI is not an afterthought or a bolted-on feature; it is a core component of the system design. In an AI-native ecosystem, communication between tools must be as standardized and efficient as communication between microservices.
This is why we built the GoodData Platform MCP Server. It is the gateway that connects your AI tools — Cursor, Claude Desktop, and custom agents — to the heart of our analytics platform. We are launching this alongside the Analytics-as-Code MCP (built for BI developers in IDEs), which my colleague Sandra Suszterova explores in her companion article. While Sandra focuses on the IDE experience, this article dives into the Platform MCP Server — the foundation that enables AI to take action, such as searching for insights, creating alerts, modeling data, and deploying analytics at the speed of thought.
At launch, the Platform MCP Server exposes many governed analytics capabilities as structured tools. That matters because it gives any MCP-compatible client, whether it’s an IDE assistant, a chat interface, or a customer-built agent, a consistent way to execute analytics workflows inside the same enterprise guardrails as human teams. The goal is not “AI that can talk about your dashboards,” but “AI that can safely build, validate, and operate analytics end-to-end.”
The result is a fundamental shift in velocity. Faster insights mean better decisions when they matter most. Faster deployment means shorter time-to-value. And by automating manual analytics work, we enable teams to focus on strategy rather than syntax.
This is the story of how we built it, what we learned, and why we believe MCP is the future of the AI-powered enterprise.
The Problem: Chat is Not Enough
Most AI integrations today stop at the “chat” interface. While chatting with your data is a powerful first step, it quickly hits two major walls in a production environment.
The first is a capability gap. Real analytics workflows require more than words; they require a sequence of operations that actually move the needle. An agent needs to be able to scan a database, propose a logical data model, set up monitoring alerts, and deploy changes directly to a production workspace. When these actions must be performed manually through a UI or by tedious copy‑pasting, the AI remains a high-level observer rather than an active participant in the analytics lifecycle.
The second is a knowledge gap. LLMs are incredibly capable, but they are limited by training cutoffs and a lack of proprietary domain knowledge. They do not natively understand the nuances of GoodData’s Multi-Dimensional Analytical Query Language (MAQL). They can’t guess your dashboard structures or the exact parameters required for an automated alert. Without a bridge that provides this context in real time, the AI is forced to guess, which leads to errors and a breakdown in trust.
Architecture: Built for Production
When we set out to build the Platform MCP Server, we had a clear goal: it had to be production-ready, multi-tenant, and scalable from day one. We chose Anthropic’s Python SDK for MCP as our foundation; its high-level FastMCP framework let us focus on our business logic (the tools and resources) while the SDK handled protocol compliance, transport layers, and security.
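To make that division of labor concrete, here is a minimal sketch of a FastMCP-based server built with the official MCP Python SDK. The server name, tool, and return value are purely illustrative, not GoodData’s actual implementation.

```python
# Minimal FastMCP server sketch (official MCP Python SDK); all names are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("gooddata-platform")  # hypothetical server name

@mcp.tool()
def search_insights(query: str) -> list[str]:
    """Search governed insights by a natural-language query."""
    # Business logic goes here; protocol compliance and transport are handled by the SDK.
    return [f"insight matching '{query}'"]

if __name__ == "__main__":
    # Streamable HTTP is one of the transports the SDK supports out of the box.
    mcp.run(transport="streamable-http")
```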
Multitenancy with contextvars
One of the unique challenges of building a server-hosted MCP for an enterprise platform is multitenancy. We needed to ensure that every request was isolated and scoped to the correct user and workspace context, without any risk of leaking state between concurrent calls.
We leveraged Python’s contextvars to manage per-request isolation. By capturing authentication headers and workspace identifiers at the boundary, we make that context available throughout the execution path, from controllers to backend clients, without threading it through every function signature.
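The shape of that mechanism is easy to show. Below is a minimal sketch, with hypothetical field and function names, of how contextvars carries per-request state without passing it through every call.

```python
# Sketch of per-request isolation with contextvars; field and function names are illustrative.
from contextvars import ContextVar
from dataclasses import dataclass

@dataclass(frozen=True)
class RequestContext:
    auth_token: str
    workspace_id: str

# Each concurrent request (asyncio task) sees its own copy; no state leaks between calls.
_request_ctx: ContextVar[RequestContext] = ContextVar("request_ctx")

def set_request_context(auth_token: str, workspace_id: str) -> None:
    """Called once at the request boundary, e.g. from headers forwarded by the gateway."""
    _request_ctx.set(RequestContext(auth_token, workspace_id))

def current_context() -> RequestContext:
    """Read by controllers and backend clients, with no extra function arguments."""
    return _request_ctx.get()
```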
The Controller-Client Pattern
Our architecture maintains a clean separation of concerns through a controller-client pattern. The FastMCP layer handles protocol and tool registration, while controllers orchestrate domain logic such as metadata lookup, automated alerts, and knowledge retrieval. Controllers communicate with GoodData’s backend services through dedicated clients. An API Gateway sits at the front, managing authentication and path rewriting so only authorized requests reach the server.
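A compact sketch of that layering follows, reusing the FastMCP instance and the current_context helper from the earlier snippets; the class and method names are hypothetical.

```python
# Illustrative controller-client layering; all names are hypothetical.
class MetadataClient:
    """Thin wrapper around GoodData backend API calls."""
    def list_dashboards(self, workspace_id: str, auth_token: str) -> list[dict]:
        # In the real server this would issue an authenticated HTTP request.
        return []

class MetadataController:
    """Orchestrates domain logic; knows nothing about the MCP protocol."""
    def __init__(self, client: MetadataClient) -> None:
        self._client = client

    def dashboards_for_current_workspace(self) -> list[dict]:
        ctx = current_context()  # per-request scope from the contextvars sketch above
        return self._client.list_dashboards(ctx.workspace_id, ctx.auth_token)

# The FastMCP layer only registers tools and delegates to controllers.
@mcp.tool()
def list_dashboards() -> list[dict]:
    """List dashboards in the caller's workspace."""
    return MetadataController(MetadataClient()).dashboards_for_current_workspace()
```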
The gateway boundary is also where our “enterprise reality” shows up. Every tool execution runs inside workspace isolation and inherits the same authentication and authorization constraints as a human user. In practice, an agent can’t do more than a user could; it can only do it faster, more consistently, and without the manual handoffs.
Capability-Driven Use Cases: The Bridge to Action
The true value of the Platform MCP Server is revealed in how it moves beyond simple Q&A. We didn’t build this server just to give AI agents more things to talk about; we built it to give them the capabilities to perform agentic analysis.
Consider the persona of a business analyst or data scientist. In a traditional BI environment, a deep-dive analysis (investigating performance drivers, detecting anomalies, and summarizing findings) can easily consume 200 minutes of focused work in a notebook or a complex UI. The “context wall” between the analyst’s intent and the platform’s data is thick with manual queries and handoffs.
With the GoodData MCP Server, that same analyst can delegate the workflow to an AI agent. We are currently developing agentic workflows that leverage these tools to, for example, perform automated dashboard analysis and make recommendations based on retrieved organizational knowledge. Instead of manual steps, the agent can query workspace data, investigate drivers, detect anomalies, run specialized computations, and produce an executive-ready summary grounded in the platform’s real metrics and semantics.
This isn’t just about speed; it’s about the operationalization of insights. When the analysis reveals something important, an agent can move from “insight” to “action” without switching contexts. It can set up monitoring on the KPI that matters, configure the appropriate notification channel, and keep the organization informed as conditions change. The key is that alerts, workflows, and analysis are exposed through a consistent tool interface, allowing agents to compose reliable, end-to-end automation rather than stitching together brittle API chains.
Bridging the Knowledge Gap
A major hurdle for LLMs is their lack of understanding of proprietary languages like MAQL. To an LLM, MAQL often looks like SQL, but its multidimensional logic is fundamentally different. Without specific guidance, even the best agents produce syntax errors.
To solve this, we embedded deep domain knowledge directly into the server. We expose a set of structured knowledge resources covering everything from dashboard schemas to semantic model definitions.
We also provide tools, such as get_maql_guide(), that make MAQL guidance available even to MCP clients that don’t fully support resources. This has the added benefit of making retrieval explicit and just-in-time: the agent can pull the right documentation at the moment it needs it and generate analytics that are correct and consistent with GoodData best practices.
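As a sketch of how such knowledge can surface, the snippet below (reusing the FastMCP instance from the earlier snippets; the guide text is a placeholder) exposes the same guidance both as a resource and as a plain tool.

```python
# Sketch: expose MAQL guidance as both a resource and a tool; the guide text is a placeholder.
MAQL_GUIDE = "MAQL is a multidimensional query language; ..."  # loaded from curated docs in practice

@mcp.resource("docs://maql/guide")
def maql_guide_resource() -> str:
    """MAQL authoring guidance for clients with full resource support."""
    return MAQL_GUIDE

@mcp.tool()
def get_maql_guide() -> str:
    """Return MAQL guidance for clients that only support tool calls."""
    return MAQL_GUIDE
```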
Our Internal AI Ecosystem: Build Once, Use Everywhere (for Everyone)
One of the most exciting outcomes of this architecture is how it unified our internal AI development. The Platform MCP Server is not just a gateway for external clients; it has become a universal protocol layer for our ecosystem.
We’ve established a bidirectional relationship between the MCP server and our internal AI services. When an external client calls semantic search or analysis tools, the MCP server orchestrates the request to those services. The synergy also works in reverse. When internal AI services need to perform platform-level operations, like setting up complex metric alerts, they don’t rely on bespoke glue code. Instead, they call the same MCP tools. In other words, the interface we expose to customers is also the interface we standardize internally.
As mentioned earlier, this is what enables agentic workflows: once analytics capabilities are exposed as MCP tools, workflows can be composed reliably, rather than being hard-coded one integration at a time. The key point is that this composability isn’t reserved for our own teams. Because the same tools are available to any MCP-compatible client, customers can build custom agents on top of GoodData using the same governed interface, eliminating one-off integrations and ensuring their agents operate with the same platform context and guardrails as our own.
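For illustration, a customer-built agent can reach those tools with any MCP-compatible client. Here is a minimal sketch using the streamable HTTP client from the official MCP Python SDK; the URL and tool name are hypothetical, and authentication headers are omitted.

```python
# Minimal MCP client sketch (official MCP Python SDK); URL and tool name are hypothetical.
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    async with streamablehttp_client("https://example.gooddata.cloud/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("search_insights", {"query": "revenue by region"})
            print(result.content)

asyncio.run(main())
```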
Lessons Learned
Choose the Ecosystem That Maximizes Iteration
One of the most surprising (but not entirely unexpected) decisions was choosing Python over Kotlin. At GoodData, we are primarily a Kotlin-based engineering organization; our backend services, libraries, and internal tooling are almost entirely built on the JVM. We initially followed our standard patterns and began prototyping the MCP server in Kotlin.
But as we pushed into the MCP ecosystem, we hit a reality check. The Python SDK was significantly more mature and feature-rich at the time, and iteration speed mattered. Stronger AI copilot support for Python, combined with faster iteration cycles, made it easier to ship tools quickly in a fast-moving space.
Just as importantly, this choice didn’t shut out the rest of the organization. Most teams already had exposure to Python through our own SDK, and modern AI coding assistants reduce the barrier to contribution dramatically. Ultimately, while Kotlin remains our “native” language for core backend services, Python is the native language of the AI ecosystem, and embracing it helped us move faster while keeping contributions broadly accessible.
Optimize for the Machine Reader
Building a production MCP server taught us that how you describe a tool is just as vital as the logic behind it. Humans can infer intent from vague instructions; LLMs require explicit, structured guidance to stay accurate.
This realization led to an effort to standardize our tool descriptions across the entire server. Our CTO, Jan Soubusta, first developed this documentation pattern for an internal MCP server; we used it as the guide for applying the same approach consistently to the Platform MCP Server, and he later highlighted it as a demonstration of best practices in MCP tool optimization.
We adopted a specialized documentation pattern designed for agentic consumption:
Tool description pattern (optimized for LLM tool selection):
WHEN TO CALL: Concrete user intents and examples that map to this tool.
NOT FOR: Common confusions ("If you mean X, use Y instead").
DEFAULT BEHAVIOR: What happens when optional fields are omitted.
ERROR RECOVERY: Specific next steps the agent should take after a failure.
In practice, this means we don’t just document what a tool does, we document how an agent should reason about using it. We map user intent to the right tool, prevent common selection mistakes, make defaults predictable, and embed recovery steps directly into error messaging.
Alongside the written structure, we also standardized how tool parameters are described at the type level. We use Pydantic with Annotated[Type, Field(description="...")] to attach clear, consistent descriptions directly to each argument. That metadata becomes part of the tool schema the agent sees, which improves tool selection and reduces ambiguity during multi-step tool calling.
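Put together, a tool definition might look like the sketch below. It reuses the FastMCP instance from the earlier snippets; the tool, its parameters, and its defaults are illustrative rather than GoodData’s actual alert schema.

```python
# Illustrative tool definition combining the docstring pattern with Annotated parameters.
from typing import Annotated
from pydantic import Field

@mcp.tool()
def create_metric_alert(
    metric_id: Annotated[str, Field(description="Identifier of the metric to monitor.")],
    threshold: Annotated[float, Field(description="Value that triggers the alert.")],
    operator: Annotated[str, Field(description="Comparison operator, e.g. 'GREATER_THAN'.")] = "GREATER_THAN",
) -> str:
    """Create an alert on a metric.

    WHEN TO CALL: The user asks to be notified when a metric crosses a value,
    e.g. "alert me when revenue drops below 1M".
    NOT FOR: Scheduled exports or report delivery; use a scheduling tool for those.
    DEFAULT BEHAVIOR: If operator is omitted, a GREATER_THAN comparison is assumed.
    ERROR RECOVERY: If the metric is not found, list available metrics and ask the
    user to confirm the identifier before retrying.
    """
    return f"Alert created on {metric_id} ({operator} {threshold})."
```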
The impact was immediate: tool selection accuracy improved, and the "loop of confusion" where an agent repeatedly calls the wrong tool was virtually eliminated. Our takeaway was clear: in an AI-native world, your API documentation is your UI.
What’s Next: The Roadmap to AI Velocity
Our journey with the Platform MCP Server is just beginning. As we move beyond initial launch, we’re doubling down on a simple philosophy: deliver value through user stories, not just API wrappers. We’ll keep adding tools that solve complete business problems, like our already deployed unified alert system that handles comparison, range, and relative alerts through a single interface.
We’re also evaluating how agents can handle richer, multi-step workflows without blowing up context. A promising pattern is to combine MCP with code execution: instead of emitting raw tool calls, the agent writes small pieces of code that orchestrate tool usage and only returns the results it actually needs. Cloudflare calls this “Code Mode” (Cloudflare’s Code Mode), and Anthropic has explored similar approaches (Anthropic: Code execution with MCP).
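As a rough illustration of the pattern, the snippet below shows the kind of orchestration code an agent might emit in such a setup; the call_tool helper and the tool names are assumptions, not part of our current server.

```python
# Hypothetical agent-generated orchestration code in a "code mode" setup.
# Intermediate tool results stay inside the script; only the summary returns to the model.
def summarize_revenue_anomalies(call_tool) -> dict:
    workspaces = call_tool("list_workspaces", {})
    anomalies = []
    for ws in workspaces:
        result = call_tool("detect_anomalies", {"workspace_id": ws["id"], "metric": "revenue"})
        anomalies.extend(result["anomalies"])
    # Return only what the conversation actually needs, keeping context small.
    return {"workspaces_checked": len(workspaces), "anomalies_found": len(anomalies)}
```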
In parallel, we’re watching how teams package repeatable procedures around tools; Anthropic’s “Agent Skills” is a compelling model for bundling workflows, scripts, and guidance that agents can load dynamically (Anthropic: Agent Skills).
And lastly, our engineers are hard at work expanding authentication options to support more enterprise deployment scenarios, ensuring that GoodData remains a safe, fast way to connect AI to governed analytics.
Conclusion: From Protocol to Practice
MCP is not just another protocol; it is the infrastructure that makes AI-native analytics possible. By building a platform that AI can finally “speak,” we’ve lowered the walls between insight and action. Whether you are a BI developer working in an IDE or an AI developer building the next generation of analytical agents, the GoodData MCP ecosystem is designed to give you the velocity you need in an AI-first world.
Check out the documentation and start building with the GoodData Platform MCP Server today.