You spend lots of time waiting on your computer. You pause while apps start and web pages load. Spinner icons are everywhere. Hardware gets faster, but software still feels slow. What gives?
If you use your computer to do important work, you deserve fast software. Too much of today's software falls short. At the Ink & Switch research lab we've researched why that is, so that we can do better. This article shares what we've learned.
What feels "slow" to users? We all have a sense for when software annoys us with delays. But to get a better handle on this problem, we'll complement these intuitions with academic research that answers the question rigorously.
Perceived speed is all about latency. Comparing the "what feels slow" findings from academia with measurements of real-world app latency tells us just how bad things really are.
When discussing software performance, we often hear about throughput. E.g. "this web server can do 10,000 requests / second." But that's not how users perceive things. They care how long their particular web request takes, or how long a doc takes to open, or how responsive an app is to their clicks. These interactions are about latency. User perception See  for more on the importance of latency in interactive systems. Other factors influence how fast software feels. E.g. frame rates and feedback about long-running tasks. But we think latency is the most fundamental, and we're only going to have software that feels truly instantaneous if latency is very low. Latency is the critical metric we'll examine in this article.
To start, let's look at user sensitivity to latency when using touch screens.
Researchers can test this with rigs that control exactly how much latency users see. They present a user with an interface having (say) 1ms latency and another with (say) 70ms and then ask them to perform operations like tapping a button. If the 70ms interface consistently feels slower than the 1ms interface, 70ms would be a "noticeable difference." Just noticeable differences For more on just noticeable latency differences and associated experimental rigs, see e.g. .
The smallest such noticeable difference is the budget one has before a given operation starts to feel slow to that user.
When dragging items on the screen, for example, users perceive latencies as low as ~2ms. The just noticeable latency varies by user and action being performed, but it's consistently very low. Drag latency perception  reports 2–16ms for drags with a stylus. : 2–11ms for drags with finger. : mean of 11ms for drags with finger.
Inking on a tablet with a stylus gives similar results. Here the literature suggests users detect slowness between 20ms and 80ms of latency. In our own informal tests in the lab, latency towards 80ms feels very slow, and it takes something much closer to 20ms for a stylus to feel responsive while inking. Stylus latency perception  reports 10–70ms for stylus scribbling. : 21–82ms for various stylus actions.
The difference between low- and high-latency inking is clear when looking at contrasting examples side-by-side:
Another common operation on touch devices is tapping on buttons or links. Here tests suggest users on average notice latency as it goes beyond ~70ms (though it's likely lower for some individual users). Tapping latency perception  reports 69ms mean just noticeable latency for finger tap across users.
Here's an example showing two different latencies side-by-side:
How do modern apps fare compared to these latency thresholds for touch interactions? In terms of dragging with a finger, no current consumer system will consistently meet the low single digit millisecond level needed to satisfy all users. Drag performance As we'll see below, a <10ms latency budget is not enough even to cover input hardware and displays, let alone multiple layers of software. So all current touchscreen operating systems will leave at least some users feeling like the object they're dragging is lagging behind their finger.
For drawing with a stylus, a small number of systems get close to the latency levels needed to feel reasonably good. But most are well above these levels and—as we'd expect—feel very slow to users:
While we don't have data on tapping latencies across devices, we'd expect them to be comparable to the inking latencies observed above. Inking vs tapping latencies We expect these latencies to be comparable because they both test the touch input → screen update loop. That said, there are differences between the two paths so we wouldn't expect them to be exactly the same. Since the noticeable latency here is ~70ms, most systems should be able to feel responsive to taps. But it's also easy to find apps that perform much worse than the system's theoretical capability.
Overall, touch systems need to have very low latencies to feel responsive. Most devices and apps fail to perform at this level, and consequently feel varying degrees of slow to users.
There is some evidence that increased typing latency impairs users. Typing latency impact In , random amounts of latency are added to keying operations, which reduces typing performance. That said, it only assessed one distribution of latency, and it suggested skilled typists may acclimate to the increased latency. However, we're not aware of studies specifically measuring just noticeable end-to-end typing latencies. The tap latency numbers (noticeable at ~70ms) may be a useful benchmark because they also measure a discrete finger touch to visual update.
Here are some informal end-to-end keyboard latency measurements:
| Keyboard | Latency (ms) |
| --- | --- |
| Commodore Pet 4016 | 60 |
| iMac G4 (OS 9) | 70 |
| MacBook Pro 2014 | 100 |
| Custom Haswell-E 24Hz | 140 |
| PowerSpec G405 (Linux) | 170 |
Rigorous assessments of the impact of keyboard latency would be great experiments for enterprising researchers. In any event, it seems likely the latency threshold for typing is below ~100ms for many users, and perhaps well below it.
The last input type we'll look at is mice. One experiment found user latency perception thresholds down to 34ms. Mouse latency impact  measures latency sensitivity in a cursor dragging test and reports sensitivities in the range of 34–137ms, with a mean of 65ms.
Input latency of mice varies widely. Some setups achieve latencies in the single digit millisecond range by combining high-performance hardware with careful, low-level programming. It's also possible to go beyond 100ms of end-to-end latency with a combination of mediocre hardware and applications that introduce extra delays or buffers between input and display. Mouse latencies  describes an optimized setup with end-to-end average mouse movement latencies of ~8ms.
Application-level latencies measure how long it takes to complete app-specific actions like loading web pages:
What feels fast for application actions? It's hard to say exactly because actions are more complex and varied than simple inputs. The answer here probably also depends on what users are conditioned to expect (currently, it's usually slow software). But we can triangulate a rough number. Latency literature  surveys literature on the impact of latency to application users. It's a good starting point for a deeper dive on this topic.
One reference is the typical ~70ms just noticeable difference mentioned above for tapping a touch screen. If you notice a delay between tapping a link and seeing a tap indicator, you can probably notice a similar delay between tapping a link and seeing a web page open.
Another data point is Google’s RAIL model. This model claims that responses within 100ms “feel like the result is immediate” and that higher latency “[breaks] the connection between action and reaction”. RAIL Unfortunately, Google doesn't give a basis for the 100ms figure.
You can informally test your own sensitivity in a terminal. Consider the command-line programs you use and how they feel, then try them with `time`. You'll notice the difference between e.g. ~15ms CLI responses (great!) and ~500ms (obviously slow).
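If you'd rather script the comparison, here's a small sketch of the same idea (assuming a Unix-like system where `true` and `sleep` are available):

```python
import subprocess
import time

def time_command(argv):
    """Run a command and return its wall-clock duration in milliseconds."""
    start = time.perf_counter()
    subprocess.run(argv, check=True, capture_output=True)
    return (time.perf_counter() - start) * 1000

# A no-op command finishes in a few milliseconds; half a second of
# deliberate sleep is long enough to feel obviously slow.
print(f"true:      {time_command(['true']):6.1f}ms")
print(f"sleep 0.5: {time_command(['sleep', '0.5']):6.1f}ms")
```

Note that the measured time includes process spawn overhead, so it's an upper bound on the program's own work.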
As a final data point, consider that typical human reaction time from seeing a visual stimulus to taking a physical action is about 220ms. Reaction time See Literature Review on Reaction Time. This value must be significantly more than noticeable latencies, because reactions involve observing something and then doing something.
Altogether we think this suggests action latencies should be ~100ms or less to avoid user perception of delay. Physiological effects Software latency may be even more damaging than we consciously realize.  documents unconscious, physiological effects in users exposed to increased latency.
How do current apps fare against this benchmark? Some do well. For example, many Unix command line programs run in under 100ms.
Most of the web does poorly. A ~1,000ms Google search result will feel faster than most of what you see online, though still noticeably delayed compared to a ~100ms interaction. And it's easy to find examples of pages that take ~5,000ms+ to load even on a good connection.
In the case of mobile and desktop, there are some apps that will consistently achieve <100ms latency, such as the built-in calculator on iOS. But it's easy to find cases of productivity apps that significantly exceed this threshold even when they have (or should have) all data available locally. Consider the Slack example below:
It's hard to draw a general conclusion about such a broad area as the action latency of all software. That said, it seems clear that while some apps do complete actions fast enough to feel instantaneous to users (less than ~100ms), many apps do not.
So we've established that a lot of software is in fact slow. Where does all that time go (and what might we optimize)? We'll look at this next, starting with the first component in the chain: input devices.
The first step in the pipeline that converts physical inputs to updates on screen is the input processing: converting contact with a touch screen, keyboard, or mouse to a digital signal for the operating system. Here we'll look at how long this step takes.
Let's start with keyboards:
As you can see these keyboards easily take up 10s of milliseconds of latency budget on the very first step in the processing pipeline. That's out of a total budget of ~100ms or less! Typing and latency See Typing with Pleasure for much greater detail on this topic.
Mice can similarly introduce 10s of milliseconds of latency, though the highest performance gaming mice will have latencies in the single digit millisecond range. Mouse click latency Data for mouse click latency is scattered. One example shows single digit millisecond latency.
It's harder for us to measure the contribution of input devices specifically in mobile devices (because they're tightly integrated with other hardware components). That said, we can use a few of the common patterns in input device hardware to understand latencies in these as well as standalone devices.
One common pattern is sample rates. In many input devices, the hardware "scans" or "samples" for new input on a periodic interval. For example, typical consumer touch screens sample for input at the rate of 60hz, or once every ~17ms. This means that in the worst cases input device latency will be at least ~17ms, and in the average case it can be no better than ~8ms.
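The worst- and average-case figures follow directly from the scan period; a quick sketch:

```python
def sampling_latency_ms(rate_hz):
    """Latency added by a periodic input scan: an event landing just after
    a scan waits a full period (worst case), and on average an event
    waits half a period."""
    period_ms = 1000 / rate_hz
    return period_ms, period_ms / 2

worst, avg = sampling_latency_ms(60)
print(f"60hz scan: worst ~{worst:.0f}ms, average ~{avg:.0f}ms")
# → 60hz scan: worst ~17ms, average ~8ms
```

The same arithmetic applies to any periodic stage in the pipeline, from touch screen scanning to USB polling.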
All things being equal, higher scan rates can reduce input latency. High-end Apple mobile hardware samples touch and stylus input more frequently than 60hz, correspondingly reducing latency:
| Device | Touch (hz) | Stylus (hz) |
| --- | --- | --- |
| iPad Air 2 | 60 | |
| iPad Mini 4 | 60 | |
A similar source of latency is USB scanning. The USB protocol pulls input from the keyboard, so the keyboard needs to wait for the USB scan to send its key presses. Low speed USB scans at 125hz, introducing an unavoidable ~8ms max and ~4ms average delay. More recent USB versions scan at 1000hz or more, minimizing the latency impact.
There are many other potential sources of latency in input devices, for example debouncing in keyboards. Debouncing See Keyboard Matrix Scanning and Debouncing for details on hardware and software implications of debouncing. We won't cover them all here, but emphasize the meta points that a) input devices themselves can introduce significant latency before any software processing happens and b) this may be due to multiple discrete causes that add up.
The hardware at the other end of the pipeline are displays and graphics cards.
One source of latency here is the frame rate of the display. Since displays can't redraw constantly, this introduces unavoidable latency similar to the input scanning discussed above. If a screen updates (say) every 20ms, it adds 20ms of latency in the worst case and 10ms in the average case. Perceiving movement Other factors influence how we perceive objects moving on screens. Blur Busters is a great resource. See e.g. LCD Motion Artifacts 101.
Most displays run at 60hz, though high-end devices and especially displays built for gaming run at 120hz, 144hz, and 240hz. Thus display frame rate alone usually contributes ~8ms average-case latency, though this can be reduced to a few milliseconds in the highest-frame-rate displays.
Another contribution to latency from displays is the time it takes them to physically change the color of pixels after they receive new pixel data. This time varies from low single digit milliseconds or less in high-end gaming displays to double digit milliseconds in less responsive LCDs. Display response times These are hard to measure, but Notebook Check has some illustrative data. See e.g. a fast and slow example.
On modern high-end devices, a dedicated graphics unit (GPU) powers the display. GPUs produce the array of pixels for displays by e.g. compositing 2D windowing layers or rendering 3D virtual scenes. GPUs produce frames at a rate that depends on the GPU hardware, their interaction with application and framework code, and sometimes on synchronization logic with displays.
A related issue happens when application code is outright slow, and doesn't even send instructions to the GPU fast enough to take full advantage of it. This can lead to the GPU producing unique frames at a lower rate than it could if it did in fact have frequent instructions from the application. This is a common source of "jank" we see in 2D applications that render at less than 60fps. Jank "Jank" is hard to define, but you know it when you see it. Nathan Gitter defines it as "visual glitches that are unexpected or distracting" in Designing Jank-Free Apps.
We've discussed at least three parts of the pipeline where latency accrues due to periodic activity: input scanning, GPU rendering loops, and display refresh cycles. It's important to note that these can stack in ways that essentially add all of their latency together:
In order to move to the next step in a pipeline, we need to wait for the next cycle of that stage to come along. And the cycles may not be aligned. Misaligned cycles and an unfavorable initial input time can cause 10s of milliseconds of additional latency, a large amount relative to the latency budgets discussed above.
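To see how much alignment matters, here's a toy model (our own illustration, not a measurement) in which each periodic stage only picks work up at its next tick:

```python
import math

def pipeline_latency_ms(t_input_ms, stages):
    """Total latency for an input arriving at t_input_ms, handed through
    periodic stages described as (period_ms, phase_ms) pairs. Each stage
    only acts at its ticks: phase, phase + period, phase + 2*period, ..."""
    t = t_input_ms
    for period, phase in stages:
        t = phase + math.ceil((t - phase) / period) * period  # next tick at or after t
    return t - t_input_ms

period = 1000 / 60  # three 60hz stages: input scan, render loop, display refresh
aligned = [(period, 0.0)] * 3
misaligned = [(period, 0.0), (period, period / 3), (period, 2 * period / 3)]
print(f"aligned stages:    ~{pipeline_latency_ms(0.001, aligned):.0f}ms")
print(f"misaligned stages: ~{pipeline_latency_ms(0.001, misaligned):.0f}ms")
```

In the worst case each stage contributes a full period of waiting, so three unsynchronized 60hz stages can add up to ~50ms between them.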
On the software side, runtime overhead is a catch-all for overhead from the operating system and other non-application code. We'll look at two important examples: garbage collection and scheduling.
There are ways to mitigate GC-induced latency. These include moving as much GC work as possible off of the main thread and optimizing the GC to require only small individual pauses. One can also use a language that trades off some of the convenience of GC for more predictable performance. GC latency mitigation See for example the V8 effort to move more GC off the main thread and the Go work to bring max GC pauses well below 1ms. Languages like Swift avoid arbitrary GC by using automatic reference counting.
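To get a feel for the pauses involved, here's an informal measurement sketch of our own using CPython's cyclic collector (the numbers vary widely by machine and runtime, and this is an illustration rather than one of the mitigation techniques above):

```python
import gc
import time

# Build a large graph of reference cycles so the cyclic garbage
# collector has real work to do during a full pass.
junk = []
for _ in range(200_000):
    a, b = [], []
    a.append(b)
    b.append(a)
    junk.append(a)
junk = None  # make all the cycles unreachable

start = time.perf_counter()
collected = gc.collect()  # force a full, stop-the-world collection
pause_ms = (time.perf_counter() - start) * 1000
print(f"collected {collected} objects in {pause_ms:.1f}ms")
```

If a pause like this lands on the main thread between an input event and the next frame, the user pays for it directly in latency.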
Another potential source of overhead is operating system scheduling. Our application (and its dependencies in the OS) are not necessarily running all the time. Other programs may be scheduled in while ours is paused, even if for a very short time.
If we have a wall-clock budget of 10ms and our app takes 10ms "end-to-end" without accounting for any other programs running, we may well exceed our budget in wall clock time due to OS scheduling. CPU usage Related to the scheduling issue is CPU usage. If your app meets its performance goals but requires ~100% of the CPU to do that, it may well annoy users. You'll drain batteries faster, make the device hotter, and perhaps trigger a noisy fan. In other words, even with user-facing performance held constant, lower CPU usage is a better user experience.
Every program will ask for its time slices, and there are only so many CPU cores to go around.
A common source of latency on mobile interfaces is the design of the OS and apps themselves. There are some important interactions that can only be accomplished by literally waiting.
Android and iOS both make substantial use of "long press" to access context menus, which requires the user to wait hundreds of milliseconds in the middle of their command gestures.
A related source is delays for disambiguation. For example, on mobile Safari there's a default 350ms delay between when the user taps a link and when the browser begins fetching the new page, in order to tell the difference between a link click and a double-tap zoom. iOS tap delays See More Responsive Tapping on iOS for background on this delay as well as recent changes that allow application developers to work around the issue.
A major source of latency for users on the web is user-hostile work, such as downloading trackers that surveil user activity and loading intrusive ads.
There are many great articles on web bloat. Web bloat See e.g. The Bullshit Web and The Website Obesity Crisis. We'll just emphasize that the single biggest source of latency on many sites is downloading stuff that users don't want.
The last source of latency we'll mention is perhaps the most obvious: the application. If the app spends a lot of CPU time processing a given input, or executing some action, it will be slow.
Let's look at an example of how latency can add up:
The example above is hypothetical but illustrative. It shows how there are a lot of layers that add latency, and how an application can exhibit high latency (much greater than frame length) even if it successfully runs at the full frame rate.
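As a sketch of that kind of accounting (the per-stage numbers below are hypothetical, chosen only to show how the layers sum toward the ~100ms budget):

```python
# Hypothetical per-stage latencies (ms) for one keypress reaching the screen.
stages = {
    "input device scan + USB":     15,
    "OS and framework processing":  5,
    "application work":            30,
    "render loop + GPU frame":     16,
    "display refresh + response":  20,
}
total = sum(stages.values())
for name, ms in stages.items():
    print(f"{name:<30} {ms:3d}ms")
print(f"{'total':<30} {total:3d}ms")  # 86ms: most of the ~100ms budget gone
```

No single stage here looks alarming on its own, which is exactly why end-to-end latency so often surprises developers who only profile their own code.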
There is a deep stack of technology that makes a modern computer interface respond to a user's requests. Even something as simple as pressing a key on a keyboard and having the corresponding character appear in a text input box traverses a lengthy, complex gauntlet of steps, from the scan rate of the keyboard, through the OS and framework processing layers, through the graphics card rendering and display refresh rate.
There is reason for this complexity, and yet we feel sad that computer users trying to be productive with these devices are so often left waiting, watching spinners, or even just with the slight but still perceptible sense that their devices simply can't keep up with them.
We believe fast software empowers users and makes them more productive. We know today's software often lets users down by being slow, and we want to do better. We hope this material is helpful for you as you work on your own software.
 Y. Endo, Z. Wang, J. Chen, and M. Seltzer. "Using Latency to Evaluate Interactive System Performance," In Proceedings of the USENIX 2nd Symposium on Operating Systems Design and Implementation, 1996.
 A. Ng, M. Annett, P. Dietz, A. Gupta, and W. Bischof. "In the Blink of an Eye: Investigating Latency Perception During Stylus Interaction," In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, 2014.
 A. Ng, J. Lepinski, D. Wigdor, S. Sanders, and P. Dietz. "Designing for low-latency direct-touch input," In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, 2012.
 J. Deber, R. Jota, C. Forlines, and D. Wigdor. "How Much Faster is Fast Enough?: User Perception of Latency & Latency Improvements in Direct and Indirect Touch," In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015.
 M. Annett, A. Ng, P. Dietz, W. Bischof, and A. Gupta. "How Low Should We Go?: Understanding the Perception of Latency While Inking," In Proceedings of Graphics Interface 2014, 2014.
 V. Forch, T. Franke, N. Rauh, and J. Krems. "Are 100 ms Fast Enough? Characterizing Latency Perception Thresholds in Mouse-Based Interaction," In Engineering Psychology and Cognitive Ergonomics: Cognition and Design, 2017.
 J. Dabrowski and E. V. Munson. "40 years of searching for the best computer system response time," Interacting with Computers 23.5, 2011.
 M. Barreda-Ángeles, I. Arapakis, X. Bai, B. Cambazoglu, A. Pereda. "Unconscious Physiological Effects of Search Latency on Users and Their Click Behaviour," In 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2015.