INP is the Core Web Vital that finally tells the truth
FID was a polite lie. INP measures interaction responsiveness across the full page lifetime, and it is harder to fix. Here's how I think about it on production sites.
Interaction to Next Paint replaced First Input Delay as a Core Web Vital in March 2024. If you have not been paying attention to it since then, your site's CrUX field data is probably worse than your Lighthouse runs suggest. INP is the metric I now check first when a client tells me their site feels slow but the lab numbers say it is fast.
Why FID was a polite lie
First Input Delay measured the delay between the user's first interaction and the browser's ability to process it. It captured exactly one moment in the page's life, and it captured only the queueing delay, not the time the handler took to run or the time before the next paint. A site could pass FID with a 50-millisecond first interaction and then spend a second on every subsequent click, and FID would not flag any of it.
Interaction to Next Paint fixes both problems. It measures the latency of every interaction during the page lifetime — clicks, taps, key presses — and reports the worst one (strictly, the highest-latency interaction after one outlier is discarded for every fifty interactions). Each measurement includes the queueing delay, the handler runtime, and the time until the next paint. The number you see is effectively the worst experience your user actually had, not the first one. The threshold for 'good' is 200 milliseconds at the 75th percentile.
Why INP is harder to fix than FID
FID was almost always fixed by reducing main thread work during initial page load. If you trimmed your bundle, deferred scripts, and stopped blocking on third parties, FID got better. INP can be triggered by a handler that runs three minutes after page load, on a route the user navigated to from the search box, on a code path that only fires when a specific filter is selected. The space of things that can hurt INP is the entire interactive surface of your application.
Worse, the worst interactions are often the ones least visible in development. A search filter that calls a synchronous JSON parser on a 500KB blob is fine in development, where you have a fast machine, and catastrophic on a mid-range Android. A scroll handler that runs a layout-reading function on every event is fine until your user lands on a page with seventy product cards.
Where INP regressions actually come from
Five recurring patterns I see on production sites:
First, third-party scripts running synchronously on user interaction. Tag managers that fire a custom event on every click. Analytics SDKs that flush a buffer when a button is pressed. Ad providers that schedule an unrelated reflow on focus events. None of these show up in your bundle analyzer and all of them can blow past 200 milliseconds on a slow phone.
Second, large React state updates triggered by interaction. A click that causes a parent component to re-render, which re-renders fifty children, which each re-evaluate their props and re-mount their children. The fix is usually memoization or moving state down — but the diagnostic step is the React DevTools profiler against a profiling-enabled production build, because development builds exaggerate render costs.
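The mechanics of that fix can be shown without React itself. A minimal sketch of what React.memo does: cache the last render by shallow prop equality and skip the child when nothing changed (all names here are illustrative, not React's implementation):

```javascript
// Skip re-rendering a child when its props are shallowly equal to the
// previous render's props — the core of what React.memo buys you.
function shallowEqual(a, b) {
  const ka = Object.keys(a);
  const kb = Object.keys(b);
  if (ka.length !== kb.length) return false;
  return ka.every((k) => a[k] === b[k]);
}

function memoComponent(render) {
  let prevProps = null;
  let prevOutput = null;
  let renders = 0;
  function component(props) {
    if (prevProps && shallowEqual(prevProps, props)) return prevOutput;
    renders += 1;
    prevProps = props;
    prevOutput = render(props);
    return prevOutput;
  }
  component.renderCount = () => renders;
  return component;
}

// Fifty children with stable props render once, however often the parent updates.
const Card = memoComponent(({ id }) => `card:${id}`);
Card({ id: 1 });
Card({ id: 1 }); // skipped — props unchanged
Card({ id: 2 }); // re-renders — props changed
// Card.renderCount() → 2
```

Moving state down works the same way from the other direction: if the updating state lives in a leaf, the fifty siblings never receive new props in the first place.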
Third, layout thrashing in event handlers. A handler that reads element.offsetWidth, then writes a style, then reads offsetWidth again forces a synchronous layout twice. On a complex page with a deep DOM, that can easily cost tens of milliseconds per interaction.
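The cure is to separate reads from writes so layout is computed at most once per batch. A minimal sketch of the idea behind libraries like fastdom (the measure/mutate names are made up for this example):

```javascript
// Queue all measurements and all mutations separately, then flush reads
// before writes so the browser computes layout at most once per flush.
const reads = [];
const writes = [];

function measure(fn) { reads.push(fn); }
function mutate(fn) { writes.push(fn); }

function flush() {
  // All reads first: layout is computed once, every queued read reuses it.
  const measured = reads.splice(0).map((fn) => fn());
  // All writes after: mutations invalidate layout, but nothing reads it again.
  writes.splice(0).forEach((fn) => fn());
  return measured;
}

// In a browser you would schedule the flush on the next frame:
//   requestAnimationFrame(flush);
```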
Fourth, blocking work in render. A select component that renders a thousand options on every keystroke. A virtual list that recalculates its layout function for every item synchronously. A modal that runs a syntax highlighter on a code block while the user is trying to scroll.
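For the keystroke case, the cheapest fix is often to debounce the expensive path so it runs once per pause rather than once per character. A sketch with an injectable scheduler (the default is a plain setTimeout; the onSearch usage below is illustrative):

```javascript
// Collapse a burst of calls into one, delayed by `ms` of quiet time.
// The scheduler is injectable so the behavior is testable without timers.
function debounce(fn, ms, schedule = setTimeout, cancel = clearTimeout) {
  let pending = null;
  return (...args) => {
    if (pending !== null) cancel(pending);
    pending = schedule(() => {
      pending = null;
      fn(...args); // only the last call in the burst runs
    }, ms);
  };
}

// const onSearch = debounce(renderOptions, 150);
// input.addEventListener('input', (e) => onSearch(e.target.value));
```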
Fifth, animation handlers that fight with interaction. A scroll-linked animation library running requestAnimationFrame work that competes with the interaction handler for main thread time. The fix is sometimes as simple as moving the animation work to CSS or to the compositor.
How I diagnose INP issues
Step one is to look at CrUX field data, not Lighthouse. CrUX shows you the 75th percentile INP for real users on real devices in real network conditions. The Chrome DevTools Performance Insights panel surfaces specific interactions that have crossed thresholds. The web-vitals library, instrumented in production, gives you per-interaction reports tied to the page URL and component, which is the level you actually need to fix.
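A sketch of that instrumentation. The rating thresholds are the documented ones (good at or under 200 ms, poor above 500 ms); the reporting endpoint is made up, and the commented lines assume the web-vitals package is available:

```javascript
// Bucket a field INP value using the published Core Web Vitals thresholds.
function rateINP(ms) {
  if (ms <= 200) return 'good';
  if (ms <= 500) return 'needs-improvement';
  return 'poor';
}

// Production wiring (sketch — '/vitals' is a made-up endpoint):
//   import { onINP } from 'web-vitals';
//   onINP((metric) => {
//     navigator.sendBeacon('/vitals', JSON.stringify({
//       name: metric.name,
//       value: metric.value,
//       rating: metric.rating,
//       url: location.pathname,
//     }));
//   });
```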
Step two is to reproduce the slow interaction on a throttled device. Most INP regressions are invisible on a developer machine. The Chrome DevTools 4x CPU throttle and slow 4G network throttle approximate a low-end Android. If you cannot reproduce slowly, you cannot diagnose accurately.
Step three is the Performance recorder, focused on the specific interaction. Record only the interaction. Look for long tasks. Look at what is on the main thread between the input event and the next paint. The biggest blocks are usually the place to start.
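You can also catch slow interactions programmatically with the Event Timing API, which is the same data the DevTools panels read. A guarded sketch — the durationThreshold filters out fast interactions, and the pure helper picks the worst entry from a batch:

```javascript
// Pick the highest-latency entry from a batch of event timing entries.
function slowestEntry(entries) {
  return entries.reduce((worst, e) => (e.duration > worst.duration ? e : worst));
}

// Browser-only wiring, guarded so the sketch also loads outside a browser.
const supported =
  typeof PerformanceObserver !== 'undefined' &&
  (PerformanceObserver.supportedEntryTypes || []).includes('event');

if (supported) {
  new PerformanceObserver((list) => {
    const worst = slowestEntry(list.getEntries());
    console.log(worst.name, Math.round(worst.duration), 'ms');
  }).observe({ type: 'event', durationThreshold: 200, buffered: true });
}
```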
Common fixes that actually work
Yield to the browser. Break long synchronous work into chunks with scheduler.yield (where supported) or a zero-delay setTimeout. The point is to give the browser a chance to paint and handle input between chunks.
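A sketch of that pattern, preferring scheduler.yield where it exists and falling back to a zero-delay timeout (chunk size and function names are illustrative):

```javascript
// Split an array into fixed-size slices.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Prefer scheduler.yield (continuations keep priority); fall back to setTimeout.
const yieldToBrowser =
  typeof scheduler !== 'undefined' && scheduler.yield
    ? () => scheduler.yield()
    : () => new Promise((resolve) => setTimeout(resolve, 0));

// Do a slice of work, yield so the browser can paint and handle input, repeat.
async function processInChunks(items, work, size = 50) {
  for (const slice of chunk(items, size)) {
    slice.forEach(work);
    await yieldToBrowser(); // a paint can happen here
  }
}
```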
Move work off the main thread. Web Workers for parsing, computation, and any pure function that does not need DOM access. Transferable objects — typed arrays and ArrayBuffers — make moving large payloads to a worker far cheaper than copying them.
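One way to do this without a separate worker file is an inline Blob worker wrapping a self-contained pure function. A hedged sketch (workerSource and runInWorker are illustrative names, not a library API):

```javascript
// Build worker source that applies a self-contained pure function to each
// message. The function must not close over DOM or outer-scope variables.
function workerSource(fn) {
  return `onmessage = (e) => postMessage((${fn.toString()})(e.data));`;
}

// Browser-only: run one input through the function in a throwaway worker.
function runInWorker(fn, input) {
  const blob = new Blob([workerSource(fn)], { type: 'text/javascript' });
  const worker = new Worker(URL.createObjectURL(blob));
  return new Promise((resolve, reject) => {
    worker.onmessage = (e) => { worker.terminate(); resolve(e.data); };
    worker.onerror = (err) => { worker.terminate(); reject(err); };
    worker.postMessage(input);
  });
}

// Usage sketch: parse the 500KB blob off the main thread.
//   const data = await runInWorker((text) => JSON.parse(text), bigJsonString);
```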
Defer non-critical work. Analytics, marketing pixels, and non-essential observability code can run after the next paint via requestIdleCallback or after a generous delay.
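A sketch of the deferral pattern: non-critical tasks go into a queue that flushes when the browser is idle, with a timeout as a safety net (runWhenIdle, applyFilter, and sendAnalytics are made-up names):

```javascript
// Tasks accumulate here and run after the browser is idle — never inside
// the interaction handler itself.
const idleTasks = [];

function runWhenIdle(task) {
  idleTasks.push(task);
  if (idleTasks.length > 1) return; // a flush is already scheduled
  const schedule =
    typeof requestIdleCallback !== 'undefined'
      ? (cb) => requestIdleCallback(cb, { timeout: 2000 }) // safety net
      : (cb) => setTimeout(cb, 200); // fallback where rIC is missing
  schedule(flushIdleTasks);
}

function flushIdleTasks() {
  idleTasks.splice(0).forEach((task) => task());
}

// button.addEventListener('click', () => {
//   applyFilter();                      // critical: affects the next paint
//   runWhenIdle(() => sendAnalytics()); // non-critical: can wait
// });
```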
Replace synchronous third-party scripts with delayed or proxied versions. Partytown moves a class of third-party scripts into a worker. Custom proxies can capture and batch tag manager calls.
Audit React renders. Profiler in production mode. Memoize handlers. Memoize child components when they receive stable props. Move state down so updates have a smaller blast radius.
What good INP looks like
Sub-200ms at the 75th percentile is the Core Web Vital threshold. Sub-100ms is achievable on a well-tuned production site. The gap between those numbers is mostly third-party scripts and large React render paths.
When I take on a performance engagement, I commit to a specific INP target on real CrUX data, not on Lighthouse. The lab number can mislead. The real-user number is what Google uses for ranking and what your users feel. Optimize for that, and the lab number follows for free.