As of March 12th, 2024, Interaction to Next Paint (INP) replaces First Input Delay (FID) as a Core Web Vital metric.
FID and INP are measuring the same situation in the browser: how clunky does it feel when a user interacts with an element on the page? The good news for the web—and its users—is that INP provides a much better representation of real-world performance by taking every part of the interaction and rendered response into account.
It’s also good news for you: the steps you’ve already taken to ensure a good score for FID will get you part of the way to a solid INP. Of course, no number—no matter how soothingly green or alarmingly red it may be—can be of any particular use without knowing exactly where it’s coming from. In fact, the best way to understand the replacement is to better understand what was replaced. As is the case with so many aspects of front-end performance, the key is knowing how JavaScript makes use of the main thread. As you might imagine, every browser manages and optimizes tasks a little differently, so this article is going to oversimplify a few concepts—but make no mistake, the more deeply you’re able to understand JavaScript’s Event Loop, the better equipped you’ll be for handling all manner of front-end performance work.
The Main Thread
You might have heard JavaScript described as “single-threaded” in the past, and while that’s not strictly true since the advent of Web Workers, it’s still a useful way to describe JavaScript’s synchronous execution model. Within a given “realm”—like an iframe, browser tab, or web worker—only one task can be executed at a time. In the context of a browser tab, this sequential execution is called the main thread, and it’s shared with other browser tasks—like parsing HTML, some CSS animations, and some aspects of rendering and re-rendering parts of the page.
JavaScript manages “execution contexts”—the code currently being executed by the main thread—using a data structure called the “call stack” (or just “the stack”). When a script starts up, the JavaScript interpreter creates a “global context” to execute the main body of the code—any code that exists outside of a JavaScript function. That global context is pushed to the call stack, where it gets executed.
When the interpreter encounters a function call during the execution of the global context, it pauses the global execution context, creates a “function context” (sometimes “local context”) for that function call, pushes it onto the top of the stack, and executes the function. If that function call contains a function call, a new function context is created for that, pushed to the top of the stack, and executed right away. The highest context in the stack is always the current one being executed, and when it concludes, it gets popped off the stack so the next highest execution context can resume—“last in, first out.” Eventually execution ends up back down at the global context, and either another function call is encountered and execution works its way up and back down through that and any functions that call contains, one at a time, or the global context concludes and the call stack sits empty.
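To make that “last in, first out” flow concrete, here’s a quick sketch: nothing browser-specific, just plain nested function calls.

function inner() {
    console.log( "inner: now the top of the stack" );
}

function outer() {
    console.log( "outer: pushed onto the stack" );
    // Calling inner() pauses outer's context and pushes a new context above it.
    inner();
    console.log( "outer: resumed once inner popped off" );
}

outer();
console.log( "global: resumed once outer popped off" );

// Output: outer: pushed onto the stack
// Output: inner: now the top of the stack
// Output: outer: resumed once inner popped off
// Output: global: resumed once outer popped off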
Now, if “execute each function in the order they’re encountered, one at a time” were the entire story, a function that performs any kind of asynchronous task—say, fetching data from a server or firing an event handler’s callback function—would be a performance disaster. That function’s execution context would either end up blocking execution until the asynchronous task completes and that task’s callback function kicks off, or suddenly interrupting whatever function context the call stack happened to be working through when that task completed. So alongside the stack, JavaScript makes use of an event-driven “concurrency model” made up of the “event loop” and “callback queue” (or “message queue”).
When an asynchronous task is completed and its callback function is called, the function context for that callback function is placed in a callback queue instead of at the top of the call stack—it doesn’t take over execution immediately. Sitting between the callback queue and the call stack is the event loop, which is constantly polling for both the presence of function execution contexts in the callback queue and room for them in the call stack. If there’s a function execution context waiting in a callback queue and the event loop determines that the call stack is sitting empty, that function execution context is pushed to the call stack and executed as though it had just been called synchronously.
So, for example, say we have a script that uses an old-fashioned setTimeout to log something to the console after 500 milliseconds:
setTimeout( function myCallback() {
    console.log( "Done." );
}, 500 );
// Output: Done.
First, a global context is created for the body of the script and executed. The global execution context calls the setTimeout method, so a function context for setTimeout is created at the top of the call stack, and is executed—so the timer starts ticking. The myCallback function isn’t added to the stack, however, since it hasn’t been called yet. Since there’s nothing else for the setTimeout to do, it gets popped off the stack, and the global execution context resumes. There’s nothing else to do in the global context, so it pops off the stack, which is now empty.
Now, at any point during this sequence of events our timer will elapse, calling myCallback. At that point, the callback function is added to a callback queue instead of being added to the stack and interrupting whatever else was being executed. Once the call stack is empty, the event loop pushes the execution context for myCallback to the stack to be executed. In this case, the main thread is done working long before the timer elapses, and our callback function is added to the empty call stack right away:
const rightNow = performance.now();
setTimeout( () => {
    console.log( `The callback function was executed after ${ performance.now() - rightNow } milliseconds.` );
}, 500 );
// Output: The callback function was executed after 501.7000000476837 milliseconds.
Without anything else to do on the main thread our callback fires on time, give or take a millisecond or two. But a complex JavaScript application could have tens of thousands of function contexts to power through before reaching the end of the global execution context—and as fast as browsers are, these things take time. So, let’s fake an overcrowded main thread by keeping the global execution context busy with a while loop that counts to a brisk five hundred million—a long task.
const rightNow = performance.now();
let i = 0;
setTimeout( function myCallback() {
    console.log( `The callback function was executed after ${ performance.now() - rightNow } milliseconds.` );
}, 500 );
while( i < 500000000 ) {
    i++;
}
// Output: The callback function was executed after 1119.5999999996275 milliseconds.
Once again, a global execution context is created and executed. A few lines in, it calls the setTimeout method, so a function execution context for the setTimeout is created at the top of the call stack, and the timer starts ticking. The execution context for the setTimeout is completed and popped off the stack, the global execution context resumes, and our while loop starts counting.
Meanwhile, our 500ms timer elapses, and myCallback is added to the callback queue—but this time the call stack isn’t empty when it happens, and the event loop has to wait out the rest of the global execution context before it can move myCallback over to the stack. Compared to the complex processing required to handle an entire client-rendered web page, “counting to a pretty high number” isn’t exactly the heaviest lift for a modern browser running on a modern laptop, but we still see a huge difference in the result: in my case, it took more than twice as long as expected for the output to show up.
Now, we’ve been using setTimeout for the sake of predictability, but event handlers work the same way: when the JavaScript interpreter encounters an event handler in either the global or a function context, the event becomes bound, but the callback function associated with that event listener isn’t added to the call stack because that callback function hasn’t been called yet—not until the event fires. Once the event does fire, that callback function is added to the callback queue, just like our timer running out. So what happens if an event callback kicks in, say, while the main thread is bogged down with long tasks buried in the megabytes’ worth of function calls required to get a JavaScript-heavy page up and running? The same thing we saw when our setTimeout elapsed: a big delay.
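For illustration, here’s a minimal sketch along the same lines as the examples above. Assume the page contains a single button element, and that this script runs while the page is still loading:

const myButton = document.querySelector( "button" );
const rightNow = performance.now();
let i = 0;

myButton.addEventListener( "click", function myCallback() {
    console.log( `The callback function was executed ${ performance.now() - rightNow } milliseconds after the page loaded.` );
});

// A long task keeps the global execution context busy on the main thread:
while( i < 500000000 ) {
    i++;
}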
If a user clicks on this button element right away, the callback function’s execution context is created and added to the callback queue, but it can’t get moved to the stack until there’s room for it in the stack. A few hundred milliseconds may not seem like much on paper, but any delay between a user interaction and the result of that interaction can make a huge difference in perceived performance—ask anyone that played too much Nintendo as a kid. That’s First Input Delay: a measurement of the delay between a user’s first interaction with the page and the first opportunity for that interaction’s event handler callbacks to be called, once the main thread has become idle. A page bogged down by parsing and executing tons of JavaScript just to get rendered and functional won’t have room in the call stack for event handler callbacks right away, meaning a longer delay between a user interaction and the callback function being invoked, and what feels like a slow, laggy page.
That was First Input Delay—an important metric for sure, but it wasn’t telling the whole story in terms of how a user experiences a page.
What is Interaction to Next Paint?
There’s no question that a long delay between an event and the execution of that event handler’s callback function is bad—but in real-world terms, “an opportunity for a callback function’s execution context to be moved to the call stack” isn’t exactly the result a user is looking for when they click on a button. What really matters is the delay between the interaction and the visible result of that interaction.
That’s what Interaction to Next Paint sets out to measure: the delay between a user interaction and the browser’s next paint—the earliest opportunity to present the user with visual feedback on the results of the interaction. Of all the interactions measured during a user’s time on a page, the one with the worst interaction latency is presented as the INP score—after all, when it comes to tracking down and remediating performance issues, we’re better off working with the bad news first.
All told, there are three parts to an interaction, and all of those parts affect a page’s INP: input delay, processing time, and presentation delay.
Input Delay
How long does it take for our event handlers’ callback functions to find their way from the callback queue to the main thread?
You know all about this one, now—it’s the same metric FID once captured. INP goes a lot further than FID did, though: while FID was only based on a user’s first interaction, INP considers all of a user’s interactions for the duration of their time on the page, in an effort to present a more accurate picture of a page’s total responsiveness. INP tracks any clicks, taps, and key presses on hardware or on-screen keyboards—the interactions most likely to prompt a visible change in the page.
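If you want to see input delay in isolation, one rough way to approximate it is to compare the event’s timestamp with the moment the callback actually starts running. This is just a sketch, and it assumes a modern browser where event.timeStamp shares a clock with performance.now():

document.querySelector( "button" ).addEventListener( "click", ( event ) => {
    // event.timeStamp records when the click occurred; performance.now() is when
    // this callback finally made it out of the callback queue and onto the stack.
    const inputDelay = performance.now() - event.timeStamp;
    console.log( `Input delay: roughly ${ inputDelay } milliseconds.` );
});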
Processing Time
How long does it take for the callback function associated with the event to run its course?
Even if an event handler’s callback function kicks off right away, that callback will be calling functions that call more functions, filling up the call stack and competing with any other work taking place on the main thread.
const myButton = document.querySelector( "button" );
const rightNow = performance.now();
myButton.addEventListener( "click", () => {
    let i = 0;
    console.log( `The button was clicked ${ performance.now() - rightNow } milliseconds after the page loaded.` );
    while( i < 500000000 ) {
        i++;
    }
    console.log( `The callback function was completed ${ performance.now() - rightNow } milliseconds after the page loaded.` );
});
// Output: The button was clicked 615.2000000001863 milliseconds after the page loaded.
// Output: The callback function was completed 927.1000000000931 milliseconds after the page loaded.
Assuming there’s nothing else bogging down the main thread and preventing this event handler’s callback function from running right away, this click handler would have a great score for FID—but the callback function itself contains a huge, slow task, and could take a long time to run its course and present the user with a result. A slow user experience, inaccurately summed up by a cheerful green result.
Unlike FID, INP factors in these delays as well. User interactions trigger multiple events—for example, a keyboard interaction will trigger keydown, keyup, and keypress events. For any given interaction, INP will capture a result for the event with the longest “interaction latency”—the delay between the user’s interaction and the rendered response.
Presentation Delay
How quickly can rendering and compositing work take place on the main thread?
Remember that the main thread doesn’t just process our JavaScript; it also handles rendering. The tasks created by the event handler are now competing with any number of other processes for the main thread, all of which are competing with the layout and style calculations needed to paint the results.
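There isn’t a single API that hands you presentation delay directly, but you can get a rough feel for the whole interaction-to-paint gap with requestAnimationFrame, which fires just before the browser’s next paint. A sketch, reusing the myButton element from the earlier example:

myButton.addEventListener( "click", ( event ) => {
    // ...whatever work updates the page would happen here...

    // requestAnimationFrame runs just before the next paint, so the time elapsed
    // since the original event roughly approximates the full interaction latency.
    requestAnimationFrame( () => {
        console.log( `Next paint began roughly ${ performance.now() - event.timeStamp } milliseconds after the interaction.` );
    });
});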
Testing Interaction to Next Paint
Now that you have a better sense of what INP is measuring, it’s time to start gathering data out in the field and tinkering in the lab.
For any website included in the Chrome User Experience Report dataset, PageSpeed Insights is a great place to start getting a sense of your pages’ INP. Your best bet for gathering real-world data from across an unknowable range of connection speeds, device capabilities, and user behaviors is likely to be the Chrome team’s web-vitals JavaScript library (or a performance-focused third-party user monitoring service).
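A minimal setup with the library’s onINP method might look something like this; the analytics endpoint in the comment is just a placeholder:

import { onINP } from "web-vitals";

// onINP reports the page's INP value, passing a metric object that includes the
// measured value in milliseconds and the performance entries behind it.
onINP( ( metric ) => {
    console.log( "INP:", metric.value );
    // In practice, you'd send this to your analytics endpoint instead:
    // navigator.sendBeacon( "/analytics", JSON.stringify( metric ) );
});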
Then, once you’ve gained a sense of your pages’ biggest INP offenders from your field testing, the Web Vitals Chrome Extension will allow you to test, tinker, and retest interactions in your browser—not as representative as field data, but vital for getting a handle on any thorny timing issues that turned up in the field.
Optimizing Interaction to Next Paint
Now that you have a better sense of how INP works behind the scenes and you’re able to track down your pages’ biggest INP offenders, it’s time to start getting things in order. In theory, INP is a simple enough thing to optimize: get rid of those long tasks and avoid overwhelming the browser with complex layout re-calculations.
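For example, one common approach to long tasks is to break them into smaller chunks that yield back to the main thread between chunks, giving queued event callbacks (and rendering work) a chance to run. Here’s a rough sketch applied to the counting loop from earlier; countInChunks is a made-up helper, not a built-in:

function countInChunks( target, chunkSize ) {
    let i = 0;

    function doChunk() {
        const stopAt = Math.min( i + chunkSize, target );
        while( i < stopAt ) {
            i++;
        }
        if ( i < target ) {
            // Queueing the next chunk with setTimeout leaves the call stack empty
            // in between, so waiting callbacks and rendering can slip in.
            setTimeout( doChunk, 0 );
        }
    }

    doChunk();
}

countInChunks( 500000000, 5000000 );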
Unfortunately, a simple concept doesn’t translate to any quick, easy tricks in practice. Like most front-end performance work, optimizing Interaction to Next Paint is a game of inches—testing, tinkering, re-testing, and gradually nudging your pages toward something smaller, faster, and more respectful of your users’ time and patience.