
Microsoft released the Clarity MCP Server earlier this month.
The MCP Server is a backend service that exposes your Clarity data through the Model Context Protocol (MCP), an open standard for connecting AI agents to data sources. It enables AI agents, such as Claude, Copilot, or any agent built on the OpenAI or Azure agent frameworks, to query session behavior in natural language. Instead of writing scripts or exporting JSON, you can ask: “Find sessions where users tapped the same button 3+ times and didn’t convert.”
Most AI analytics tools are bloated, overpriced wrappers around basic endpoints. We tested MCP in a real production case to see if it could do something cool or, better yet, helpful.
Problem: Mobile Form Drop-Off With No Clear Signal
A client had a sudden drop in mobile form completions. No console errors. GA4 looked fine. Scroll data was normal. Session replays didn’t show anything actionable. The Clarity UI wasn’t giving us anything concrete, and the drop was large enough that something was clearly broken.
Exporting Clarity sessions and writing a parser wasn’t a good use of engineering time. So we tested the MCP Server to pull structured behavior insights using AI queries instead.
Deploying MCP in Less Than an Hour
We deployed Microsoft’s official Clarity MCP Server package (published on GitHub) using Cloudflare Workers. Setup was minimal: install the package, plug in your Clarity project ID and API key, and configure the schema endpoint. We connected Claude Desktop as the agent interface.
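On the Claude Desktop side, connecting an MCP server is a single entry in its MCP config file. A hypothetical sketch of what ours looked like; the package name, flag name, and token placeholder here are assumptions, so check the server’s README for the exact invocation:

```json
{
  "mcpServers": {
    "clarity": {
      "command": "npx",
      "args": [
        "@microsoft/clarity-mcp-server",
        "--clarity_api_token=<YOUR_CLARITY_API_TOKEN>"
      ]
    }
  }
}
```

Once the agent restarts, the Clarity tools show up alongside its built-in capabilities and prompts can reference them directly.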
No frontend. No dashboards. Just prompt = data.
Queries That Surfaced the Problem
First prompt:
Find mobile sessions from the last 7 days where users clicked the same element more than 3 times and did not reach the thank-you page.
MCP returned structured output. Around 26% of mobile sessions matched. All iOS Safari. All tapping the “Next” button on the final form screen.
Follow-up prompt:
How many of those sessions ended within 20 seconds of the last click?
Answer: ~72%. It was a silent-fail scenario.
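For context, here is roughly what those two prompts saved us from writing by hand. This is a hypothetical sketch, not the MCP Server’s actual implementation; the `Session` shape, selector names, and the `/thank-you` path are assumptions standing in for exported Clarity session data:

```typescript
// Sketch of the manual analysis the two MCP prompts replaced,
// assuming sessions exported with click events and visited pages.
interface ClickEvent { selector: string; timestamp: number } // ms since session start
interface Session {
  device: "mobile" | "desktop";
  browser: string;
  clicks: ClickEvent[];
  pages: string[];
  endedAt: number; // ms since session start
}

// Prompt 1: same element clicked more than 3 times, thank-you page never reached.
function repeatedClickNonConverters(sessions: Session[]): Session[] {
  return sessions.filter((s) => {
    if (s.device !== "mobile" || s.pages.includes("/thank-you")) return false;
    const counts = new Map<string, number>();
    for (const c of s.clicks) counts.set(c.selector, (counts.get(c.selector) ?? 0) + 1);
    return Array.from(counts.values()).some((n) => n > 3);
  });
}

// Prompt 2: share of sessions that ended within 20s of their last click.
function quickExitShare(sessions: Session[]): number {
  const matched = sessions.filter(
    (s) =>
      s.clicks.length > 0 &&
      s.endedAt - Math.max(...s.clicks.map((c) => c.timestamp)) <= 20000
  );
  return sessions.length ? matched.length / sessions.length : 0;
}
```

Each prompt collapses one of these filters into a sentence; the point of MCP is that the agent builds and runs the equivalent query against live Clarity data.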
What Actually Happened
The form submit was broken on Safari due to a bad autocomplete attribute paired with a JS handler that failed silently. There were no visible errors and no fallback. Users tapped, nothing happened, and they left. We fixed the issue and saw conversion rates return to baseline the next day.
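The failure pattern is worth spelling out, since it is easy to ship. A minimal sketch of the two handler shapes, not the client’s actual code; the function names and error wiring are illustrative assumptions:

```typescript
// Sketch of a silent-fail submit handler vs. one that surfaces the error.
type SubmitResult = { ok: boolean; error?: string };

// The bug pattern: the catch block swallows the exception, so the UI
// reports nothing. On Safari the submit threw, users tapped repeatedly,
// and every tap looked like a no-op.
function submitSilently(send: () => void): SubmitResult {
  try {
    send();
    return { ok: true };
  } catch {
    return { ok: true }; // swallowed: claims success even on failure
  }
}

// The fix pattern: propagate the failure so the UI can show an error
// state (and analytics can record the failed submit).
function submitWithFallback(send: () => void): SubmitResult {
  try {
    send();
    return { ok: true };
  } catch (e) {
    return { ok: false, error: e instanceof Error ? e.message : String(e) };
  }
}
```

Anything downstream of `submitSilently` sees a success, which is why neither GA4 nor the replays flagged the break.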
| Segment | Metric | Before Fix | After Fix |
| --- | --- | --- | --- |
| Mobile sessions w/ repeated button taps | % of total mobile sessions | 26.3% | 2.1% |
| Avg time to exit after final button tap | Seconds | 17.4 | 6.2 |
| Mobile Safari conversion rate | Form completion rate | 4.8% | 9.2% |
| iOS-specific session drop-off (no errors) | % of sessions exiting at form step 2 | 71.8% | 24.5% |
| Silent form submission failures | Untracked submits (via GA4) | Not detected | Detected post-fix |
Why MCP Was Useful
This would’ve taken hours with Clarity exports and manual filtering. It wasn’t visible in replays. It didn’t trigger rage click flags. GA4 didn’t register the problem at all.
With MCP, we asked two targeted prompts and got structured data back without writing a single export script. It revealed a pattern across enough sessions to validate a real problem. Fast.
Initial Takeaways
This tool isn’t for dashboards or vanity metrics. It’s for isolating broken behavior patterns when standard tooling gives you nothing. We’re already building prompt libraries for ongoing use in CRO audits and analytics QA.
If you manage large-scale user behavior data and deal with noisy session volume, MCP is a fast way to get answers that would normally require dev involvement or BI tooling. If you’re already running Clarity, it’s a low-effort, high-yield addition.

Co-founder, Custom Design Partners
Alexander Hatala is the co-founder of Custom Design Partners. He specializes in e-commerce operations, performance marketing strategy, and behavioral analytics.