Frequently asked questions

The questions we get most often. If yours is not here, email security@flusterduck.com.

What is a confusion score?

A number between 0 and 100 for each page in your application, updated in real time as users interact. It aggregates 18 behavioral signals weighted by severity. Baseline is 50. When a page consistently hits 70 or above, something on it is causing friction. At 85 or above, users are probably abandoning.
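To make the arithmetic concrete, here is a minimal sketch of weighted-signal aggregation on top of the 50-point baseline. Only the baseline, the 0-100 range, and the two published weights (rage click 25, loop navigation 20) come from the docs; the per-session averaging, rounding, and clamping are illustrative assumptions, not Flusterduck's actual formula.

```typescript
// Sketch: aggregate weighted frustration signals into a 0-100 score.
// Baseline 50 and the two weights are from the docs; everything else
// here is an assumption for illustration.

type SignalEvent = { signal: string; weight: number };

const BASELINE = 50;

function confusionScore(events: SignalEvent[], sessions: number): number {
  // Average weighted signal mass per session, added on top of the baseline.
  const mass =
    events.reduce((sum, e) => sum + e.weight, 0) / Math.max(sessions, 1);
  // Clamp into the documented 0-100 range.
  return Math.min(100, Math.max(0, Math.round(BASELINE + mass)));
}

// A page where most sessions hit heavy signals crosses the 70 "friction" line.
const events = [
  { signal: "rage_click", weight: 25 },
  { signal: "loop_navigation", weight: 20 },
];
confusionScore(events, 2); // per-session mass 22.5 → score 73
```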

How does Flusterduck detect frustration without watching what users type?

The SDK tracks behavioral patterns, not content. It sees that a user clicked something 4 times in 800ms (rage click, weight 25). It sees they navigated to pricing, back to home, back to pricing, back to home in 90 seconds (loop navigation, weight 20). It never reads what users typed, what the page says, or who they are.
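The rage-click rule above (4 clicks inside 800ms) can be sketched as a sliding window over click timestamps. The window implementation is an assumption; only the two thresholds come from the text.

```typescript
// Sketch: rage-click detection via a sliding timestamp window.
// Thresholds (4 clicks, 800ms) are from the docs; the mechanism is
// an illustrative assumption, not the real SDK's logic.

function makeRageClickDetector(maxClicks = 4, windowMs = 800) {
  const timestamps: number[] = [];
  return function onClick(now: number): boolean {
    timestamps.push(now);
    // Drop clicks that fell out of the window.
    while (timestamps.length && now - timestamps[0] > windowMs) {
      timestamps.shift();
    }
    return timestamps.length >= maxClicks; // true → rage click fired
  };
}

const onClick = makeRageClickDetector();
onClick(0);   // false
onClick(200); // false
onClick(400); // false
onClick(600); // true: 4 clicks inside 800ms
```

Note that only timings cross the wire: the detector never sees what was clicked, only that it was clicked repeatedly.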

What are the 18 frustration signals?

Ten desktop signals: rage click, loop navigation, focus trap, form abandonment, scroll hijack, dead click, hover thrash, text select confusion, tab abandon, and error recovery failure. Six mobile signals: pinch-zoom frustration, tap miss, swipe miss, orientation thrash, double-tap zoom, and keyboard dismissal thrash. Two accessibility signals: focus skip and resize thrash.
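Grouped as data, the catalog above looks like this. The snake_case identifiers are illustrative naming, not the SDK's real signal IDs; the grouping and count come from the text.

```typescript
// The 18 signals listed above, grouped by category. Identifier
// spellings are assumptions; only one desktop weight pair (rage click
// 25, loop navigation 20) is published, so no weights are stored here.

const SIGNALS: Record<string, string[]> = {
  desktop: [
    "rage_click", "loop_navigation", "focus_trap", "form_abandonment",
    "scroll_hijack", "dead_click", "hover_thrash", "text_select_confusion",
    "tab_abandon", "error_recovery_failure",
  ],
  mobile: [
    "pinch_zoom_frustration", "tap_miss", "swipe_miss",
    "orientation_thrash", "double_tap_zoom", "keyboard_dismissal_thrash",
  ],
  accessibility: ["focus_skip", "resize_thrash"],
};

// 10 desktop + 6 mobile + 2 accessibility = 18 signals in total.
const total = Object.values(SIGNALS).flat().length;
```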

How big is the SDK?

3.8KB gzipped. Hotjar is 17KB. FullStory is 30KB or more. For a page with a 3% conversion rate, a 200ms slower load can cost roughly 1 to 2 percentage points of that conversion rate. We track our bundle weight obsessively.

Is Flusterduck GDPR compliant?

Compliant by architecture, not by configuration. The data Flusterduck stores is behavioral (click patterns, navigation timing, element selectors), not personal. IP addresses are hashed with SHA-256 at the edge before they reach any database. No consent banner is required because no personal data is collected in the first place.
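Edge hashing like the above can be sketched with a standard SHA-256 digest. The salt value, key name, and function shape are illustrative assumptions, not Flusterduck's actual pipeline; the point is that only the digest, never the raw address, is persisted.

```typescript
// Sketch: hash an IP at the edge before it is stored anywhere.
// The salt is a hypothetical placeholder; rotate/manage it in the
// real pipeline. Uses Node's built-in crypto module.

import { createHash } from "node:crypto";

function hashIp(ip: string, salt = "edge-salt"): string {
  // SHA-256 over salt + IP; only this hex digest is ever stored.
  return createHash("sha256").update(salt + ip).digest("hex");
}
```

Because the digest is deterministic, sessions from the same address can still be grouped for rate-limiting or deduplication without the address itself ever reaching a database.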

How does deploy correlation work?

Before deploy: Flusterduck captures a confusion snapshot of per-page scores. After deploy: it checks scores every 5 minutes for 90 minutes. If any page's score increases by 15 or more points relative to the pre-deploy snapshot, it sends an alert with the commit hash and the elements that spiked. You can pass the commit SHA in the webhook payload to tie alerts directly to specific changes.
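The comparison step can be sketched as a diff against the pre-deploy snapshot. The 15-point threshold comes from the text; the field names and return shape are illustrative assumptions.

```typescript
// Sketch: flag pages whose score rose 15+ points after a deploy.
// Snapshot shape and output fields are assumptions for illustration.

type Snapshot = Record<string, number>; // page path → confusion score

function deployRegressions(
  before: Snapshot,
  after: Snapshot,
  commit: string,
  threshold = 15,
) {
  return Object.entries(after)
    .map(([page, score]) => ({
      page,
      delta: score - (before[page] ?? score), // new pages diff to zero
      commit,
    }))
    .filter((r) => r.delta >= threshold);
}

deployRegressions(
  { "/checkout": 48, "/pricing": 55 },
  { "/checkout": 71, "/pricing": 57 },
  "a1b2c3d",
);
// → [{ page: "/checkout", delta: 23, commit: "a1b2c3d" }]
```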

What does element-level diagnosis show me?

Which specific element is contributing most to a page's confusion score. Not just "the checkout page is broken" but a concrete breakdown: 67% of rage clicks are hitting the Apply promo code button, clustering on the right edge of the click target. Flusterduck uses CSS selectors and ARIA labels, so there is no content captured and no PII involved.
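A selector-plus-ARIA-label description with no content capture might look like the following sketch. The traversal logic and type names are assumptions, not Flusterduck's implementation; it only shows how an element can be pinpointed without reading any page text.

```typescript
// Sketch: describe an element by a short selector path and ARIA label,
// capturing structure but no content. Types and logic are illustrative.

interface ElementLike {
  tagName: string;
  id?: string;
  ariaLabel?: string;
  parent?: ElementLike;
}

function describeElement(el: ElementLike): { selector: string; label?: string } {
  const parts: string[] = [];
  for (let node: ElementLike | undefined = el; node; node = node.parent) {
    parts.unshift(node.id ? `#${node.id}` : node.tagName.toLowerCase());
    if (node.id) break; // an id anchors the selector; stop climbing
  }
  return { selector: parts.join(" > "), label: el.ariaLabel };
}

const button: ElementLike = {
  tagName: "BUTTON",
  ariaLabel: "Apply promo code",
  parent: { tagName: "FORM", id: "promo", parent: { tagName: "MAIN" } },
};
describeElement(button);
// → { selector: "#promo > button", label: "Apply promo code" }
```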

How is this different from a click heatmap?

Heatmaps show where users clicked. Flusterduck shows which clicks were rage clicks (frustrated), which were dead clicks (on a non-interactive element), and which were normal. Same button, completely different signal. A button with 500 normal clicks and 3 rage clicks is fine. A button with 12 clicks and 9 rage clicks is broken.

Does Flusterduck work on mobile?

Yes, with six signals specific to mobile interactions: pinch-zoom frustration, tap miss, swipe miss, orientation thrash, double-tap zoom, and keyboard dismissal thrash. Standard mobile analytics miss all of them because they count clicks and sessions, not touch interaction failures.

How quickly will I see data after installing?

Data flows within seconds of the first real user visit. The confusion score for a new page is visible within a few minutes of traffic. Full z-score normalization activates after 3 days on new pages, after which scores become calibrated to your baseline. During the warm-up period you see raw scores with a "warming up" indicator.
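Calibration against a page's own history can be sketched as a plain z-score. How Flusterduck maps the result back onto the 0-100 scale is not described here, so this sketch shows only the normalization step.

```typescript
// Sketch: z-score a new reading against a page's own score history,
// so "unusual" is defined per page rather than globally. The guard
// for flat history is an illustrative assumption.

function zScore(value: number, history: number[]): number {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const std = Math.sqrt(variance) || 1; // guard against flat history
  return (value - mean) / std;
}

// A spike to 80 on a page that normally sits near 50 with little
// variance lands many standard deviations out: a genuine anomaly,
// not normal fluctuation.
const history = [48, 50, 52, 49, 51];
zScore(80, history);
```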

Can Flusterduck identify specific users?

No, by design. Flusterduck assigns temporary anonymous session IDs that expire with the browser session. There is no user identity layer. If you need to correlate confusion patterns with specific users for your own analysis, the API includes a session_id field you can join on within your own infrastructure. Flusterduck itself never stores identifying information.

What is the difference between the three pricing plans?

Sessions. Grow covers 5,000 monthly sessions at $39. Scale covers 25,000 at $99 and adds team members and API access. Enterprise covers 100,000 at $249 and adds SSO, a dedicated support channel, and SLA guarantees. Every plan includes all 18 signals, deploy correlation, element-level diagnosis, and all three alert channels.

How do I install Flusterduck?

One script tag in your site's head element. Takes under 2 minutes. No build step, no package to install. For teams that prefer it, there are framework wrappers for React, Next.js, Vue, Nuxt, and SvelteKit.
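A single-tag install generally looks like the snippet below. The CDN URL and the data attribute name are hypothetical placeholders, not Flusterduck's actual embed code; copy the real tag from your dashboard.

```html
<!-- Hypothetical embed snippet; URL and attribute are placeholders -->
<script
  async
  src="https://cdn.flusterduck.com/sdk.js"
  data-site-id="YOUR_SITE_ID"
></script>
```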

What alert channels does Flusterduck support?

Slack, email, and webhooks. You can use all three simultaneously. Each alert rule has a configurable threshold (the score that triggers it, 0-100) and a cooldown period (minimum minutes between repeat alerts for the same page). Most teams set Slack alerts at 70 to catch emerging issues and email alerts at 85 for critical pages.
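The rule shape described above can be sketched as a small config type. The threshold and cooldown semantics come from the text; the field names are illustrative assumptions.

```typescript
// Sketch: an alert rule is a channel plus a score threshold and a
// per-page cooldown. Field names are assumptions for illustration.

type AlertRule = {
  channel: "slack" | "email" | "webhook";
  threshold: number;   // confusion score (0-100) that triggers the alert
  cooldownMin: number; // minimum minutes between repeats for one page
};

// The common setup mentioned above: Slack at 70, email at 85.
const rules: AlertRule[] = [
  { channel: "slack", threshold: 70, cooldownMin: 30 },
  { channel: "email", threshold: 85, cooldownMin: 60 },
];

function shouldAlert(
  rule: AlertRule,
  score: number,
  minutesSinceLast: number,
): boolean {
  return score >= rule.threshold && minutesSinceLast >= rule.cooldownMin;
}
```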

Does Flusterduck have an MCP server?

Yes. You can connect the Flusterduck MCP server to Claude, Cursor, or any MCP-compatible AI assistant. Once connected, you can ask your assistant why a specific page is scoring 82 and get an answer grounded in your actual signal data: which elements are contributing, which signals fired, and what the trends look like.
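For Claude Desktop, an MCP server entry generally follows the standard mcpServers config shape shown below. The package name `@flusterduck/mcp` is a hypothetical placeholder; use the command from Flusterduck's own setup instructions.

```json
{
  "mcpServers": {
    "flusterduck": {
      "command": "npx",
      "args": ["-y", "@flusterduck/mcp"]
    }
  }
}
```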

Ready to see your confusion scores?

One script tag. Data in under 2 minutes. 3-day free trial, no credit card.

Start free trial