What Is UX Monitoring (And How It Differs From Analytics)
Analytics tells you what happened. UX monitoring tells you if something's wrong right now. They answer different questions.
Server monitoring solved this problem a long time ago. Twenty years ago, ops teams reduced server health to numbers: uptime, latency, error rate. When a number crossed a threshold, an alert fired. Nobody watches server logs manually in real time anymore. The number watches for you.
UX monitoring is the same idea applied to user experience. Instead of watching session recordings, you have a number. It goes up when users are frustrated. When it crosses a threshold, you get a Slack message. You fix the thing. The number goes back down.
That's the whole concept.
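Concretely, the loop is simple enough to sketch. The endpoint, the response shape, and the webhook below are placeholders for illustration, not any particular product's API:

```typescript
// Sketch of the monitoring loop: poll a per-page score, alert when it crosses
// a threshold. The score endpoint and webhook URL are placeholders.
const SCORE_ENDPOINT = "https://example.com/api/confusion-score"; // hypothetical
const SLACK_WEBHOOK = "https://hooks.slack.com/services/...";     // your incoming webhook
const ALERT_THRESHOLD = 70;

async function checkPage(pagePath: string): Promise<void> {
  // Assumed response shape: { score: number } for the requested page.
  const res = await fetch(`${SCORE_ENDPOINT}?page=${encodeURIComponent(pagePath)}`);
  const { score } = (await res.json()) as { score: number };

  if (score >= ALERT_THRESHOLD) {
    // Slack incoming webhooks accept a JSON body with a "text" field.
    await fetch(SLACK_WEBHOOK, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        text: `Confusion score on ${pagePath} is ${score} (threshold ${ALERT_THRESHOLD}).`,
      }),
    });
  }
}

// In practice this runs server-side on a schedule; every 15 minutes is arbitrary.
setInterval(() => void checkPage("/checkout"), 15 * 60 * 1000);
```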
Analytics vs monitoring
Analytics tells you what happened. You open GA, you check the bounce rate, you look at the conversion funnel, you see where users dropped off. All past tense. The analysis runs after the fact.
Monitoring tells you if something is wrong right now. Not "your pricing page had a 73% bounce rate last week" but "your pricing page confusion score has been above 70 for the last 90 minutes." One is reporting. The other is alerting.
The distinction matters because who does the watching is different. In reporting mode, you investigate after you notice a trend. In monitoring mode, the system watches continuously and tells you when it finds something.
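The "above 70 for the last 90 minutes" framing also implies alerting on a sustained state, not a single noisy spike. One way to express that check, with the function name and data shape as assumptions:

```typescript
interface ScoreSample {
  timestamp: number; // Unix ms
  score: number;     // 0-100 confusion score
}

// True only if every sample in the trailing window is at or above the threshold
// and the history actually spans the full window. A lone spike won't fire.
function sustainedAbove(
  samples: ScoreSample[],
  threshold: number,
  windowMs: number,
  now: number = Date.now(),
): boolean {
  const windowStart = now - windowMs;
  const inWindow = samples.filter((s) => s.timestamp >= windowStart);
  const coversWindow = samples.some((s) => s.timestamp < windowStart);
  return coversWindow && inWindow.length > 0 && inWindow.every((s) => s.score >= threshold);
}

// "Above 70 for the last 90 minutes":
// sustainedAbove(history, 70, 90 * 60 * 1000)
```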
For a 3-person startup, this matters because nobody has time to check dashboards. For a 50-person company, this matters because UX problems compound faster than engineering capacity to review them.
What gets monitored
The confusion score is a number from 0 to 100 for each page in your product. It aggregates 18 behavioral signals, each weighted by correlation with actual user abandonment. Rage clicks, loop navigation, dead clicks, form abandonment, mobile-specific signals, keyboard accessibility signals.
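The exact formula isn't spelled out here, but one plausible shape is a weighted sum of how far each signal deviates from its baseline rate, squashed into the 0-100 range. Everything below is a sketch under that assumption:

```typescript
// Plausible shape for such a score (the real formula is an assumption here):
// weight each signal's deviation from its baseline rate, sum, and squash into
// 0-100 so that "no deviation" lands at 50.
interface Signal {
  name: string;      // e.g. "rage_clicks", "dead_clicks", "form_abandonment"
  rate: number;      // observed rate this window (events per session)
  baseline: number;  // typical rate for this page
  weight: number;    // correlation-derived weight, assumed pre-computed
}

function confusionScore(signals: Signal[]): number {
  // Weighted sum of relative deviations; positive when signals exceed baseline.
  const deviation = signals.reduce((sum, s) => {
    const relative = s.baseline > 0 ? (s.rate - s.baseline) / s.baseline : s.rate;
    return sum + s.weight * relative;
  }, 0);

  // Logistic squash: zero deviation -> 50, large positive deviation -> near 100.
  return Math.round(100 / (1 + Math.exp(-deviation)));
}
```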
Baseline is 50. A score of 50 means users are behaving normally for that page. When the score rises, users are showing more frustration signals than usual. When it hits 70, something on the page is causing friction. At 85, users are probably abandoning.
The score is per-page, not site-wide. Your homepage can be at 18 while your checkout is at 74. The monitoring surfaces the specific page with the specific problem.
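Putting the thresholds and the per-page scores together, a dashboard might classify pages something like this (the example scores are the ones above):

```typescript
type PageStatus = "normal" | "friction" | "abandoning";

// Thresholds from the description above: 50 is baseline, 70 signals friction,
// 85 suggests users are abandoning.
function classify(score: number): PageStatus {
  if (score >= 85) return "abandoning";
  if (score >= 70) return "friction";
  return "normal";
}

// Scores are per page, so each page gets its own status.
const pages: Record<string, number> = { "/": 18, "/pricing": 52, "/checkout": 74 };
for (const [path, score] of Object.entries(pages)) {
  console.log(`${path}: ${score} (${classify(score)})`);
}
```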
What it's not
UX monitoring is not the same as UX research. Research involves interviews, usability tests, card sorts. It produces qualitative insight about why users behave a certain way.
Monitoring produces quantitative signals about when behavior changes. When a confusion score spikes after a deploy, that's a signal to investigate. The monitoring tells you that something is wrong. The investigation (and sometimes user research) tells you why.
UX monitoring is also not session replay. Session replay gives you video evidence of specific user sessions. Useful for investigating hypotheses you already have.
Monitoring gives you population-level signals continuously. The difference is like a weather forecast vs watching out the window. Both give you information about the weather, but watching out the window is reactive and the forecast is proactive.
When you need it
The moment you have enough traffic for behavioral patterns to be statistically meaningful, you need some form of UX monitoring. For most web products, that's a few hundred daily active users.
Before that threshold, you probably need more user research and less monitoring. A product with 50 users needs to talk to those users, not watch their confusion scores.
After that threshold, the signal-to-noise ratio is high enough that the monitoring actually tells you something. A checkout page with 200 daily sessions and a confusion score of 78 has enough data to be confident something is wrong.
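One simple way to encode that threshold is to suppress alerts on pages that haven't seen enough sessions for the score to mean anything. The cutoff below is an assumed number, not a published one:

```typescript
// Suppress alerts on low-traffic pages where a handful of frustrated users
// would swing the score. MIN_SESSIONS_PER_DAY is an assumed cutoff.
const MIN_SESSIONS_PER_DAY = 200;

function shouldAlert(score: number, sessionsLast24h: number, threshold = 70): boolean {
  if (sessionsLast24h < MIN_SESSIONS_PER_DAY) return false; // not enough signal
  return score >= threshold;
}

console.log(shouldAlert(78, 200)); // true  - enough traffic, score above threshold
console.log(shouldAlert(78, 40));  // false - talk to those 40 users instead
```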
What changes when you have it
The conversation in product reviews changes. Instead of "I think the checkout might be confusing" (opinion with no evidence), you have "the checkout confusion score has been above 65 for two weeks, element attribution shows 71% of signals on the coupon field, we estimate $800-1,200/month in missed conversions" (opinion with evidence).
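A rough sketch of how an estimate like that could be assembled; every input below is an illustrative assumption, not data from any real checkout:

```typescript
// Back-of-envelope shape for a "missed conversions" estimate.
// Every number here is an illustrative assumption.
const sessionsPerMonth = 6000;        // checkout sessions
const baselineConversion = 0.05;      // expected conversion rate
const observedConversion = 0.044;     // conversion rate while the score is elevated
const averageOrderValue = 30;         // dollars

const missedOrders = sessionsPerMonth * (baselineConversion - observedConversion);
const missedRevenue = missedOrders * averageOrderValue;
// 6000 * 0.006 = 36 missed orders, * $30 = ~$1,080/month. That lands in the
// $800-1,200 range only because the inputs were chosen for illustration.
console.log(`~$${Math.round(missedRevenue)}/month in missed conversions`);
```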
Evidence makes prioritization easier. Engineering time goes to the things with the highest measured impact. Not the loudest stakeholder, not the most recent complaint.
That's what the number is for.