feelsfast.fyi
Concepts

What perceived performance actually is

Perceived performance is the gap between the time a user actually has to wait while a piece of software loads and how that wait feels. Two products can render the same data in the same number of milliseconds; one feels snappy, the other feels slow. The gap between those two feelings is the entire field.

The gap is the product

Imagine two versions of the same web app. Version A loads its dashboard in 1.4 s. Version B loads its dashboard in 1.6 s but renders a skeleton screen at 80 ms, animates it gently, and snaps to the real layout the instant data arrives. The stopwatch tells you A is faster. If you could use the user's perception as the measuring device, it would tell you B is faster.

This is not a controversial claim. The classical literature on response time (Miller 1968, popularised three decades later by Nielsen 1993) treats user-perceived time, not the clock, as the dependent variable. Doherty's IBM study (Doherty 1982) measured actual productivity gains as response time dropped below 400 ms. The productivity curve does not track the clock curve linearly, because users do not respond to time measurements; the measurements are real, but they are abstract. Ask anyone in the tri-state area how far they live from downtown and they will say "an hour away." The number of miles is unimportant to the commute (provided gas prices are in check).

What this means in practice: every wait in your product has two durations. The objective one, which your APM dashboard knows about. The subjective one, which your user-churn rate reflects. Engineering optimises the first. Design, hand in hand with Product when it is pulling its weight, optimises the second.

The 20 % rule, and why it is more annoying than it sounds

There is one piece of psychophysics every designer working on performance should have memorised: the Weber–Fechner law (Weber–Fechner). The smallest difference humans can perceive in a stimulus is roughly proportional to the magnitude of the stimulus. For latency in the sub-30-second range, the just-noticeable difference (JND) sits around 20 %.

How does it translate to software? If you shave 100 ms off a 1-second wait, almost nobody will notice. If you shave 300 ms off, almost everyone will. If you ship a 15 % improvement and put it on a billboard, you have just wasted money on the billboard.
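To make the arithmetic concrete, here is a minimal sketch in TypeScript. The 0.2 Weber fraction is the ballpark figure cited above, not a universal constant, and isNoticeable is a hypothetical helper name, not an API from any library:

    // Ballpark JND check for latency improvements, assuming the ~20 %
    // Weber fraction for sub-30 s waits discussed above.
    const WEBER_FRACTION = 0.2;

    function isNoticeable(beforeMs: number, afterMs: number): boolean {
      // A change registers only if it removes ~20 % of the original wait.
      return (beforeMs - afterMs) / beforeMs >= WEBER_FRACTION;
    }

    console.log(isNoticeable(1000, 900)); // false: 100 ms off 1 s goes unnoticed
    console.log(isNoticeable(1000, 700)); // true: 300 ms off 1 s, almost everyone feels it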

What this rule does to engineering effort is create an annoying, almost logarithmic relationship between cost and payoff. The first 20 % of a wait is the cheapest to remove; the second 20 % requires real architectural work; the third requires rewriting the data layer; the fourth requires not just backend support but a real wunder-team. Each tier costs more, each tier is increasingly visible (cutting 0.8 s to 0.6 s is a larger relative change than cutting 1.0 s to 0.8 s), and the user only gives credit for the visible improvements.

This is exactly why perceived performance deserves first-class treatment. A tuned skeleton screen, a route prefetch, a mousedown listener instead of a click listener: these can produce perceptible improvements for the cost of a single team's work. The math is brutal: it is simply cheaper to shave time off the perception layer than off the actual clock.
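As one example of how cheap that layer can be, here is a minimal sketch of the mousedown-plus-prefetch trick in TypeScript. The data-prefetch attribute and prefetchRoute helper are illustrative names, not a standard API; the underlying fact is only that a click fires on mouseup, so starting work on mousedown (or on hover) reclaims the press-and-release gap for free:

    // Best-effort cache warm-up for a route; failures are deliberately ignored.
    function prefetchRoute(url: string): void {
      void fetch(url, { credentials: 'same-origin' }).catch(() => {});
    }

    document.querySelectorAll<HTMLAnchorElement>('a[data-prefetch]').forEach((link) => {
      // Hover is the earliest intent signal we get.
      link.addEventListener('mouseenter', () => prefetchRoute(link.href));
      // A click only fires on mouseup; mousedown starts the work tens of
      // milliseconds sooner at essentially zero engineering cost.
      link.addEventListener('mousedown', () => prefetchRoute(link.href));
    });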

The counter-argument, and why it is partly right

If the perception layer is so cheap and so effective, why does anyone still grind on objective performance? Because perception is not everything.

Eizenberg makes the cleanest version of this argument (Eizenberg): a placeholder is not a substitute for an interactive surface. If your search box draws a skeleton at 80 ms but the actual debounced query takes 4 s to come back, the user is staring at a useless skeleton for most of their session. The same goes for an editor that shows its layout instantly but cannot accept keystrokes for 3 seconds: a polished perception layer becomes a polished lie. Eizenberg's preferred metric, Time to Interactive, is the right one for the kind of UI we are talking about.
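If you want to see the gap Eizenberg describes in your own product, long tasks are a rough proxy. This is a sketch, not the formal Time to Interactive computation, and the longtask entry type is currently only reported by Chromium-based browsers:

    // Log every main-thread block long enough to swallow a keystroke.
    // A layout that painted instantly but still produces these entries
    // is exactly the "polished lie" described above.
    new PerformanceObserver((list) => {
      for (const task of list.getEntries()) {
        console.log(`main thread blocked for ${Math.round(task.duration)} ms`);
      }
    }).observe({ type: 'longtask', buffered: true });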

The reconciliation, if you want one, is this: time perception wins for consumption surfaces; objective speed wins for production surfaces. A pricing page, a marketing site, a documentation reader, a dashboard that shows numbers: these are consumption. Skeleton screens, predictive preloading, optimistic UI, and other perception techniques carry their weight here. An IDE, a CRM, a search-as-you-type interface, a chat composer: these are production. The user is reaching to act, and any wait that delays the action will cost real engagement, no matter how elegantly the skeleton shimmers.
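Optimistic UI, one of the consumption-side techniques named above, is worth a sketch. Everything here is illustrative: the /api/comments endpoint, the Comment shape, and the rollback policy are assumptions, not a prescribed design. The point is only that the user sees their action land immediately while the server confirms in the background:

    interface Comment { id: string; text: string; pending?: boolean }

    let comments: Comment[] = [];
    const render = (): void => { console.log(comments); }; // stand-in for a real renderer

    // Hypothetical server call; replace with your real client.
    async function postComment(text: string): Promise<Comment> {
      const res = await fetch('/api/comments', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text }),
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return (await res.json()) as Comment;
    }

    async function addCommentOptimistically(text: string): Promise<void> {
      const draft: Comment = { id: `tmp-${Date.now()}`, text, pending: true };
      comments = [...comments, draft];
      render(); // zero perceived wait: the comment is on screen immediately

      try {
        const saved = await postComment(text);
        comments = comments.map((c) => (c.id === draft.id ? saved : c));
      } catch {
        comments = comments.filter((c) => c.id !== draft.id); // roll back on failure
      }
      render();
    }

    void addCommentOptimistically('Feels fast to me.');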

What this platform does differently

You'll notice every claim on the platform comes with a citation. That is deliberate. Perceived performance has accumulated a lot of interesting folklore, as every UX subfield has, and most of it traces back to a small set of papers that almost nobody reads in the original. I read the originals, so anything in here can be argued from the root.

What to do with this

Three main takeaways before the next essay:

  1. Treat objective and subjective time as two different budgets. Track them separately; the win condition for one is not the win condition for the other (a minimal tracking sketch follows this list). Your F1 car needs both a turbocharged V12 engine and an amazing driver who shaves milliseconds off every corner.
  2. Spend your design effort where the JND is. A 5 % improvement is wasted effort; gains become noticeable only at roughly 20 % or more.
  3. Notice when you cross from consumption into production. The perception layer is honest in one mode and may start to lie in the other. Any improvement in speed, whether in how things feel or in how things actually are, has to be implemented equally well on every layer.
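Here is the minimal tracking sketch promised in the first takeaway, using the User Timing API. The mark names are illustrative, and what counts as "shell visible" versus "data rendered" is a judgment call your product has to make:

    performance.mark('nav-start');

    // Subjective budget: the moment the user first sees a stable surface
    // (skeleton, cached shell), even though the data is not in yet.
    function onShellVisible(): void {
      performance.mark('shell-visible');
      performance.measure('perceived-wait', 'nav-start', 'shell-visible');
    }

    // Objective budget: the moment the real data has rendered.
    function onDataRendered(): void {
      performance.mark('data-rendered');
      performance.measure('actual-wait', 'nav-start', 'data-rendered');
    }

    // Report both budgets; each one has its own win condition.
    function reportBudgets(): void {
      for (const m of performance.getEntriesByType('measure')) {
        console.log(`${m.name}: ${Math.round(m.duration)} ms`);
      }
    }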

References

  1. Miller 1968

    Miller, R. B. (1968). Response time in man-computer conversational transactions. Proceedings of the AFIPS Fall Joint Computer Conference, 33(I), 267–277.

  2. Nielsen 1993

    Nielsen, J. (1993). Response Times: The 3 Important Limits. Excerpt from Usability Engineering, Ch. 5. Morgan Kaufmann.

  3. Doherty 1982

    Doherty, W. J., & Thadani, A. J. (1982). The Economic Value of Rapid Response Time. IBM Technical Report GE20-0752-0.

  4. Weber–Fechner

    Weber, E. H. & Fechner, G. T. (c. 1834+). The Weber–Fechner Law of perceived stimulus intensity. UX-readable application: Mishunov, Why Performance Matters: The Perception of Time (Smashing Magazine, 2015).

  5. Eizenberg

    Eizenberg, E. When Actual Performance Is More Important Than Perceived Performance (Medium).