9th Jul 2019
The first implementation used the react-autocomplete and react-virtualized libraries for the UI.
First you pass the Autocomplete component the searchTerm as the value property (which gets rendered in the input), then pass the search results as the items property, and pass a renderMenu function that forwards the list of search results to react-virtualized.
react-virtualized solves the problem of displaying long lists like our big data set. It only renders what will fit within the scrollable results window, and updates what to display as the user scrolls.
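The core idea behind that windowing can be sketched in a few lines. This is not react-virtualized's actual implementation, just an illustration of the math it does on each scroll event (the names `scrollTop`, `viewportHeight`, and `rowHeight` are mine):

```javascript
// Illustrative sketch of list virtualization: given the scroll offset,
// compute which rows intersect the viewport. Only those rows get
// rendered into the DOM; everything else is skipped.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows) {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1
  );
  return { first, last };
}

// With 10,000 rows of 30px in a 300px-tall window, only about 10-11
// rows exist in the DOM at any time, regardless of scroll position.
```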
So I didn’t expect serious performance issues from the number of components we would be rendering.
The update lifecycle is simple:
- user types ‘a’ into input
- Autocomplete onChange handler triggers a re-render where this.state.searchTerm = ‘a’
- During that re-render, the getSearchResults method calculates the results with ‘a’ as the search term and passes them to `react-virtualized` to render.
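The post doesn’t show the actual search engine, but a naive stand-in makes the cost concrete (this is a simple subsequence matcher, simpler than whatever corrects ‘anerican’ to ‘American’; `fuzzyMatch` and the shape of `getSearchResults` are my assumptions):

```javascript
// Illustrative only: a naive fuzzy matcher, NOT the app's real search
// engine. It checks whether every character of the term appears in
// order within the candidate string.
function fuzzyMatch(term, candidate) {
  let i = 0;
  const lower = candidate.toLowerCase();
  for (const ch of term.toLowerCase()) {
    i = lower.indexOf(ch, i);
    if (i === -1) return false;
    i += 1;
  }
  return true;
}

// Scans the entire dataset on every keystroke. Because this runs inside
// render in the first implementation, each keypress pays the full
// O(items x term length) cost before anything can paint.
function getSearchResults(term, items) {
  return items.filter((item) => fuzzyMatch(term, item));
}
```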
Let’s see how this looks:
Wow… that’s bad. You really notice the hanging UI when you hold delete because the keyboard fires the delete event so fast.
At least the fuzzy search is working: ‘anerican’ is correctly interpreted as ‘American.’ But as the search term gets longer, the rendering of the two separate elements (the input and the search results) just can’t keep up with the user typing, which results in huge delays.
While our search algorithm is slow, these long delays aren’t caused by any individual search call taking that long to calculate its results. There is another phenomenon at work here that is important to understand: let’s call it UI blocking. Understanding it requires a deep dive into the Chrome DevTools performance profiler, so let’s take a look.
In this series you’ll find performance profiles and the interesting conclusions we can draw from them at each iteration.
Let’s profile our Autocomplete handling two key presses:
It’s important to understand the basic sequence here.
Don’t be confused by the term ‘render.’ React rendering doesn’t mean the work gets painted to the screen; rendering is just the work React does to figure out what the updated components should look like. If you look very closely to the right of the 2nd Event (keypress), there are tiny, barely visible green marks just outside the Event (keypress) bar. Zoom way in and you will see what is called the browser paint:
This is when visual UI updates are actually painted to the screen and a new frame becomes visible on the display. Note that there is no paint after the first Event (keypress) – it only occurs after the 2nd one.
What this tells us is that a browser paint (a visual update) doesn’t always happen even after React is finished updating after an Event (keypress).
Unfortunately, this leads to huge delays. Because each render is so slow (in part because of the expensive search algorithm inside the render method), a user typing even remotely fast will often input a new letter before the current execution stack completes. This queues up a new Event (keypress) and prioritizes it over the browser paint of the current updates, so the paint just keeps getting delayed by the queued-up user inputs.
Not only that, but even after the user stops typing, there are usually many keypress events left in the queue, and React will do the calculations for each key press consecutively. So although you’ve finished typing, you are actually waiting on several of these slow searches to complete for stale searchTerms you don’t even need!
Notice the last Key Character happens halfway through, yet 4 more Event (keypress) stacks run before the browser can paint.
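A rough model of that backlog (the function names are mine, and this deliberately ignores React’s internals) shows how much of the queued work is wasted. The final results are identical either way; only the number of searches differs:

```javascript
// Model of the event backlog: each queued keypress triggers a full
// search, even though only the final searchTerm matters to the user.
function processQueueNaively(queuedTerms, search) {
  let searches = 0;
  let results = [];
  for (const term of queuedTerms) {
    results = search(term); // stale work for every term but the last
    searches += 1;
  }
  return { searches, results };
}

// What we'd prefer: skip straight to the most recent term.
function processLatestOnly(queuedTerms, search) {
  const latest = queuedTerms[queuedTerms.length - 1];
  return { searches: 1, results: search(latest) };
}
```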
What can we do about this?
To help solve this problem, it’s important to understand what constitutes good interactive UI. There are two separate pieces of visual feedback the user is expecting from each keystroke:
- The key the user pressed showing up in the input
- The results list updating with search results based on the new searchTerm
Understanding user expectations helps us come up with potential solutions. While lightning-fast search like Google’s is awesome, it’s not an uncommon experience for users to receive search results with some delay. Some UIs even show loaders while a network request retrieves the search results.
The important thing to realize is that the first type of feedback (the input updating with the key pressed) plays a much greater role in the user’s perception of a quick, responsive UI. If your UI can’t manage that, you have serious problems.
The performance profile gives us our first clue to solving the problem. Looking at those long execution stacks with the expensive search method right in the render, we can see that all the work to update both the input and the search results occurs in the same stack, so both UI updates are bogged down by the long search algorithm.
The input updating shouldn’t have to wait for the results. It only needs to know about which key the user pressed, not what the results are. If we can somehow control the scheduling of event execution so that input has a chance to update in the UI first, before starting the search results render, this should reduce some of that lag. Thus, our first optimization should be to split up the execution of the input render from the search results.
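To make the scheduling idea concrete, here is a minimal framework-free sketch, not the article’s actual solution (that comes in the next post). It assumes a deferral mechanism like `setTimeout`; the `makeAutocomplete`, `handleChange`, and `pending` names are all mine:

```javascript
// Sketch of splitting the cheap input update from the expensive search.
// The searchTerm updates synchronously so the keystroke can paint;
// the search is deferred, and stale deferred searches are skipped.
function makeAutocomplete(search, schedule = (fn) => setTimeout(fn, 0)) {
  const state = { searchTerm: "", results: [] };
  let pending = 0;

  function handleChange(value) {
    state.searchTerm = value;       // cheap: the input can repaint now
    const id = ++pending;
    schedule(() => {
      if (id !== pending) return;   // a newer keystroke superseded this one
      state.results = search(state.searchTerm); // expensive, deferred work
    });
  }

  return { state, handleChange };
}
```

The injectable `schedule` parameter is just for illustration and testability; in a browser the default `setTimeout` yields to the event loop so a paint can happen between the input update and the search.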
Side note: anyone familiar with Dan Abramov’s JSConf Iceland 2018 talk should recognize this scenario. In his presentation he artificially created expensive updates by increasing the number of components on screen as his input value grew. Here we are facing a similar constraint, but through a single search function whose complexity increases with the length of the search term. In his talk, Dan was demoing Time Slicing, a feature the React team is working on that may address just this scenario. Our attempts to solve the problem will lead to a similar solution in principle: finding ways to schedule renders and expensive calculations so they don’t block the main thread from making more meaningful UI updates.
Tune in next week for Async Rendering