2026-03-12 · mcp · ai-native · developer-tools · integrations

Why We Built an MCP Server for UX Monitoring

The best interface for developer tools isn't a dashboard. It's a conversation.

I check the Flusterduck dashboard maybe twice a day. I ask Claude about my confusion scores six or seven times.

"How's the duck?"

That's the prompt. Three words. Claude reads the MCP resources, pulls the current per-page scores, and gives me a summary: "Overall site confusion is 18. All pages within normal range. /pricing is slightly elevated at 34, up from a baseline of 22."

I didn't open a browser. Didn't navigate to a dashboard. Didn't click through filters. Typed a question, got an answer. The whole interaction took maybe 4 seconds.

Why MCP

The Model Context Protocol is an open standard for connecting AI assistants to external data. Anthropic published the spec, and now Claude, Cursor, and other MCP-compatible tools can read resources and call tools from any server that implements it.

We built a Flusterduck MCP server because I realized something: the dashboard is the wrong interface for most UX monitoring interactions. You don't need a chart to answer "is anything broken right now?" You need a yes or a no.

Dashboards are good for exploration. For drilling into signal breakdowns, comparing time periods, looking at flow corridors. But 80% of my interactions with UX data are simple questions: Is it fine? What's the worst page? Did that deploy help? A conversation handles those faster than any GUI.

What the server exposes

Seven tools and seven resources. Tools return formatted summaries (get_confusion_scores, get_page_detail, get_deploy_impact, compare_periods, etc.). Resources return raw JSON (flusterduck://scores, flusterduck://page/{path}, flusterduck://alerts, flusterduck://revenue).
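
For a sense of what that looks like in code, here's a minimal sketch of how one of those tools and one of those resources might be declared, assuming the TypeScript SDK's high-level McpServer API. The fetchScores() helper and the numbers it returns are invented for illustration; only the tool name and resource URI come from the description above.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Hypothetical stand-in for the real Flusterduck API client.
async function fetchScores(page?: string) {
  return [{ path: page ?? "/pricing", score: 34, baseline: 22 }];
}

const server = new McpServer({ name: "flusterduck", version: "1.0.0" });

// Tool: returns a formatted summary the assistant can quote directly.
server.tool(
  "get_confusion_scores",
  { page: z.string().optional() }, // optionally scope to one page
  async ({ page }) => {
    const scores = await fetchScores(page);
    const summary = scores
      .map((s) => `${s.path}: ${s.score} (baseline ${s.baseline})`)
      .join("\n");
    return { content: [{ type: "text", text: summary }] };
  }
);

// Resource: the same data as raw JSON, for clients that want to reason over it.
server.resource("scores", "flusterduck://scores", async (uri) => ({
  contents: [
    {
      uri: uri.href,
      mimeType: "application/json",
      text: JSON.stringify(await fetchScores()),
    },
  ],
}));
```

The split is deliberate: tools hand the assistant prose it can relay verbatim, resources hand it raw JSON it can reason over.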

Some prompts I use weekly:

"What's wrong with checkout?" The assistant calls get_page_detail for /checkout and comes back with: score 67, 2.8x above baseline, 82% of frustration signals are rage clicks on the coupon input field, 43 users affected in the last hour. Plus the recommended fix.

"What did yesterday's deploy break?" It calls get_deploy_impact and correlates. Deploy #847 increased confusion on /settings by 340%. The pricing toggle refactor introduced a dead click on the new component. The commit hash, the author, the affected pages.

"Compare this week to last week." The compare_periods tool runs. Overall confusion down 12%. Biggest improvement: /onboarding dropped from 54 to 29 after deploy #312. Biggest regression: /settings rose from 18 to 41.

"Which fixes would make the most money?" It reads flusterduck://revenue and ranks by impact. Fixing the coupon button on /checkout: roughly $1,800/month. Fixing the pricing toggle: $900/month. Fixing form labels on /signup: $400/month.

The workflow shift

Before MCP, my UX monitoring workflow was: get a Slack alert, open the dashboard, click into the page, look at signals, check the deploy timeline, form a theory, switch to the code. Six context switches.

Now it's: get a Slack alert, ask Claude "what's happening on /checkout?", read the answer in my IDE, start fixing. Two context switches. The data comes to me in the environment where I'm already working.

For engineers using Cursor, it's even tighter. The MCP server connects directly to the editor. You can ask about confusion scores while you're staring at the component that's causing them. "Is this pricing toggle confusing anyone?" isn't a rhetorical question anymore. It has a real-time answer.

What AI-native developer tools look like

Most developer tools are built dashboard-first, API-second, integrations-third. The MCP server was an afterthought for us too, at first. Then I started using it and couldn't stop.

Deep analysis needs charts and timelines. Quick status checks need a conversation. Automated workflows need an API. Same data, different access patterns depending on the question.

As far as I can tell, nobody else in UX monitoring ships an MCP server. FullStory makes you log in. Hotjar makes you click through filters. Flusterduck lets you ask.

I don't think every developer tool needs an MCP server. But tools that produce continuous, summarizable status data (monitoring, analytics, error tracking, CI/CD) are perfect for it. The question "is everything okay?" shouldn't require a browser tab.

If you're building developer tools and you haven't looked at MCP yet, start there. The spec is straightforward. The @modelcontextprotocol/sdk package handles the protocol; you define tools and resources, and the AI does the rest.
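
To give a sense of how little glue that is, here's a sketch of the bootstrap, assuming a stdio transport (the usual way local clients such as Claude Desktop and Cursor launch a server):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({ name: "flusterduck", version: "1.0.0" });

// ...register tools and resources here, as in the earlier sketch...

// Speak MCP over stdin/stdout. The client spawns this process and the SDK
// handles the handshake, request routing, and serialization.
const transport = new StdioServerTransport();
await server.connect(transport);
```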

We shipped ours in about a week. It's the feature I'd miss most if I had to give one up.
