Performance Engineering with React

Saif Hakim
Benchling Engineering
11 min read · Feb 4, 2016


This is the first half of a 2-part series on performance engineering in React. Part 2 — A Deep Dive into React Perf Debugging is now up!

This post is for those of you with a complex React application. If you’re building something smaller, you might not need to focus on performance yet. Don’t prematurely optimize! Go build things!

However, if you’re building DNA design tools, gel image analysis software, a rich-text editor, or full-featured spreadsheets, you’re going to hit performance bottlenecks, and you’re going to need to solve them. We’ve hit our fair share of these at Benchling, so this post attempts to share some of what we’ve learned — it’s targeted towards folks out there on the Internet and fellow Benchlings alike. (And yes, if you like these kinds of problems, we’re hiring!)

In this post, I’ll cover the basics of using React’s Perf tools, some common issues that can lead to React rendering bottlenecks, and tips to keep in mind while debugging.

Baseline React

To recap browser performance in 3 sentences: you ideally want to render 60 frames per second in the browser, which leaves you 16.7ms per frame. When your app is slow, it’s usually because it takes too long to respond to user events, either processing the data or re-rendering the new data. In the majority of cases, you’re not doing complex data processing onClick — you’re wasting time re-rendering.

By using React, you instantly get some performance gains without any extra work:

Because React handles all DOM manipulations, you largely avoid issues regarding DOM parsing and layout. Behind the scenes, React maintains a virtual DOM in JavaScript, which it can use to quickly determine the minimal changes needed to bring the document to the desired state.

Because a React component’s state is stored in JavaScript, we avoid accessing the DOM. A classic performance issue is accessing the DOM at inopportune moments, which can result in issues like forced synchronous layouts (in order to read e.g. someNode.style.left at the wrong time, the browser can be forced into a synchronous layout pass). Instead of doing:

someNode.style.left = parseInt(someNode.style.left) + 10 + "px";

we’d declaratively call <SomeComponent style={{left: this.state.left}} />, and to animate the component we would simply update the state without reading from the DOM:

this.setState({left: this.state.left + 10}).
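
To make that concrete, here’s a rough sketch (the component and the fixed 10px-per-frame step are made up for illustration) of an animation driven entirely by state, with no DOM reads:

const Mover = React.createClass({
  getInitialState() {
    return {left: 0};
  },
  componentDidMount() {
    // Advance the position on every animation frame by updating state;
    // React takes care of writing the new style to the DOM.
    this._raf = requestAnimationFrame(this.tick);
  },
  componentWillUnmount() {
    cancelAnimationFrame(this._raf);
  },
  tick() {
    this.setState({left: this.state.left + 10});
    this._raf = requestAnimationFrame(this.tick);
  },
  render() {
    return <div className="box" style={{position: 'absolute', left: this.state.left}} />;
  },
});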

To be clear, these optimizations are possible without React — I’m simply pointing out React tends to solve these problems ahead of time.

For simple applications, these performance optimizations that come with React are sufficient — I think of them as the minimum work needed for its declarative framework to be feasible. However, as you develop more complicated views, maintaining and comparing virtual DOMs can become an expensive operation. Fortunately, React provides some tools to detect where performance issues exist and means for avoiding them.

Performance issues caused by debugging

Watch out — there’s some overhead that debugging itself can cause, leading to confusing debugging sessions that seem to go away in production.

Elements pane

The Elements pane is a nice, simple way to see what is getting re-rendered — it flashes a color when an attribute changes or a DOM node is updated/inserted/replaced. However, the flashing/re-rendering of the Elements pane itself will affect performance! I often switch away from the Elements pane to the Console to get a more accurate sense of the FPS.

PropTypes

In development builds of React, PropType validation occurs when a component is rendered — the props that components receive are checked to help with debugging and development. Using Chrome’s JS Profiler, you may observe that your React components spend a majority of their time in the validate method.

While the development build raises warnings that are useful for debugging, these checks incur costs that don’t reflect production. I sometimes switch to React’s production build to avoid this false sense of sluggishness. (To enable the production build, set NODE_ENV to production: https://facebook.github.io/react/docs/optimizing-performance.html#webpack.)
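
If you’re bundling with webpack, a minimal sketch of flipping that switch might look like the following (the exact shape depends on your build; entry, output, and loaders are omitted here):

// webpack.config.js (sketch)
var webpack = require('webpack');

module.exports = {
  // ...entry, output, module.loaders...
  plugins: [
    // Replaces process.env.NODE_ENV with "production" so React's
    // development-only checks (like PropType validation) become dead code.
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify('production'),
    }),
    // Minification then strips those dead development branches.
    new webpack.optimize.UglifyJsPlugin(),
  ],
};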

Identifying Perf issues with React.addons.Perf

Before we dive into common fixes, it’s important to emphasize that you should only spend time fixing issues that you were able to measure. It’s very easy to end up shooting in the dark if you’re not disciplined — again, focus on building things and only invest time on fixing the key performance bottlenecks.

Identifying bottlenecks using standard debugging tools still works, but it is often difficult to interpret the data because application code can result in more time spent in React-land code (e.g. a complicated render method you wrote runs quickly, but the resulting virtual DOM computations are much more expensive). It becomes difficult to identify what application code triggered the visible bottlenecks in React-land.

Fortunately, React is bundled with some perf tools that can be used in the non-production build of React (docs). You’ll find it as React.addons.Perf via react/addons in <= 0.13, and in its own react-addons-perf package in 0.14 onwards.

Usage

To use Perf, simply call Perf.start() from the console, perform the actions to record, and then call Perf.stop(). You can then call one of the methods below to view sets of useful measurements.
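
A typical console session might look roughly like this (assuming you’ve exposed the module somewhere reachable, e.g. on window during development):

// e.g. in your development entry point:
// window.Perf = require('react-addons-perf');

Perf.start();
// ...interact with the app: type, scroll, click...
Perf.stop();

Perf.printWasted();    // renders that resulted in no DOM change
Perf.printInclusive(); // total time spent rendering each component
Perf.printDOM();       // the underlying DOM operations performed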

When I’m in Perf-Debugging Mode, I like to use a simple record button to start/stop recording easily. (The code is super simple — it’s just a React component in the corner of the screen that calls through to React.addons.Perf.) It looks like the React DevTools are also looking into adding one officially: facebook/react-devtools#71. Jeff had a great tip where he binds keyboard shortcuts to start/stop, useful for when you need to profile with the mouse.

Perf.printWasted()

Perf.printWasted() is easily where most of the usefulness of React.addons.Perf comes from. It tells you how much time was wasted doing render tree construction and virtual DOM comparisons that did not result in a DOM modification. The components surfaced here are prime candidates to be improved via PureRenderMixin and the other techniques described below.

Perf.printInclusive() / Perf.printExclusive()

Perf.printInclusive() and Perf.printExclusive() print the times it took to render your components. I haven't found this to be too useful, as the bottleneck in rendering is often solved by not rendering (via Perf.printWasted() analysis and PureRenderMixin, more on that later) rather than by rendering faster. However, it can help highlight components that perform expensive computations in lifecycle methods. I have typically found that after resolving printWasted issues, it is my application code that is expensive. At that point it's easier to use the standard Chrome DevTools JS Profiler and look directly at the most expensive function calls.

Perf.printDOM()

Perf.printDOM() returns all DOM operations that took place when rendering the React trees. In my experience, this is often difficult to interpret / visualize as it is a long list of entries describing exactly what happened, e.g. each attribute change and each DOM insertion, and if your app is sufficiently complicated this can represent a fairly large changeset.

After your initial component is rendered, it is expected that future re-renders should be re-using / updating existing DOM nodes and not creating new ones — after all, this is the optimization that React’s virtual DOM affords us.

I’ve used this occasionally to discover weird, expensive browser quirks or notice large, unexpected amounts of DOM modifications.

Avoiding renders with shouldComponentUpdate

While React does wonders by maintaining a virtual DOM representation to avoid expensive DOM operations, the maintenance of that virtual DOM can also be expensive. Imagine a very large, complicated render tree. If you update the props of any node, React needs to recompute the render tree from that node all the way down to the leaves, where it finally does its virtual DOM comparisons. Fortunately, React provides a mechanism for avoiding this, namely shouldComponentUpdate - return false from this method, and the component's entire subtree does not bother re-rendering. We just need to figure out how/when to return false.

The simplest way to take advantage of this is to keep your render methods pure — your component should render based only on state and props (as opposed to reading from the DOM, cookies, or anything else). This “pure rendering” technique is mentioned quite often, but it’s worth stressing: it’s a good habit that leads to components that are easier to reason about. You’ll have external state once in a while — try to isolate that to a few components and keep the rest pure.

Once you do this, you can use PureRenderMixin in your components. From the source, the mixin just calls through to shallowCompare. (If you're using ES6 classes, you'll want to use shallowCompare directly as well.)

var ReactComponentWithPureRenderMixin = {
  shouldComponentUpdate: function(nextProps, nextState) {
    return shallowCompare(this, nextProps, nextState);
  },
};
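
For ES6 classes, the equivalent is a one-line shouldComponentUpdate that calls shallowCompare directly. A minimal sketch (MyWidget and its label prop are hypothetical):

const React = require('react');
const shallowCompare = require('react-addons-shallow-compare');

class MyWidget extends React.Component {
  shouldComponentUpdate(nextProps, nextState) {
    // Bail out unless a top-level prop or state value changed identity.
    return shallowCompare(this, nextProps, nextState);
  }
  render() {
    return <div>{this.props.label}</div>;
  }
}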

If no changes in props/state are detected, it doesn’t re-render. For this to behave correctly, the component should ensure that:

  • render() is strictly dependent on props and state, i.e. it shouldn't be reading values from some global state.
  • props and state should never be mutated - any changes should produce a new object, since shallowCompare only checks for strict equality between top-level props (see the sketch just after this list). react-addons-update will help you perform immutable updates, and we use Object.assign / _.extend for simple cases as well. ImmutableJS is a more significant switch, but once you use immutable data structures you can use PureRenderMixin easily. Beware of the urge to do things like this.state.myItem.stars++ - it's easy to forget that you're mutating the state directly, especially if things happen to work due to some other state changing as well.
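
For example, here’s a sketch of updating a nested value without mutation (the shape of myItem is hypothetical):

// Bad: mutates the existing object, so shallowCompare still sees the
// same reference and a pure component won't re-render.
this.state.myItem.stars++;

// Better: produce a new object so the identity check fails as expected.
this.setState({
  myItem: Object.assign({}, this.state.myItem, {
    stars: this.state.myItem.stars + 1,
  }),
});

// Or with react-addons-update:
// var update = require('react-addons-update');
this.setState({
  myItem: update(this.state.myItem, {stars: {$set: this.state.myItem.stars + 1}}),
});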

If you adhere to pure components while developing, it becomes much easier to toss in a PureRenderMixin when you notice bottlenecks.

A small gotcha

You may get a false sense of a performance increase if you use PureRenderMixin — it skips propType validation in child components, which seems faster in development but doesn’t matter for production builds, where validation is skipped anyway.

A much larger gotcha!

Even if you adhere to this stricter policy, you may not immediately reap the benefits of PureRenderMixin. As described earlier, React does a shallow-equal comparison, not a deep comparison, to decide if it needs to re-render. There are surprisingly many easy ways to accidentally create deep-equal props that are not shallow-equal (more on that later).

One quick way to address this is to use deep comparison, e.g. _.isEqual:

shouldComponentUpdate(nextProps, nextState) {
  return !_.isEqual(this.props, nextProps)
    || !_.isEqual(this.state, nextState);
}

If most of your props are re-used (their references don't change), _.isEqual short-circuits on the identity check first, so comparing the props that haven't changed is cheap and performance is fine. In practice, where _.isEqual suffices, we haven't found the deep compare to be a performance issue.

You could also write a custom shouldComponentUpdate method that is specifically tailored to your component, but I'd only do this on simpler components. If the custom method isn't properly maintained, you will run into issues with your component not updating when it really should.
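
As a sketch, a component whose output depends on only a couple of props (the prop names here are made up) might check exactly those:

shouldComponentUpdate(nextProps, nextState) {
  // Nothing else this component receives affects its output.
  return nextProps.itemId !== this.props.itemId ||
    nextProps.isSelected !== this.props.isSelected;
}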

Optimizing for shallow-equal props

Often, simply following best practices and avoiding the creation of new objects where possible naturally helps with rendering optimizations.

Function.bind() / inline (anonymous) functions

Function.bind is a convenient way to pass a child component a callback bound to a particular context or arguments. Unfortunately, each call to Function.bind produces a new function:

> console.log.bind(null, 'hi') === console.log.bind(null, 'hi')
false
> (function() { console.log('hi'); }) === (function() { console.log('hi'); })
false

// New function each time
render() {
  return <MyComponent onClick={() => this.setState(...)} />;
}

No amount of prop checking will help, and your component will always re-render. (You can turn on the react/jsx-no-bind eslint rule to disallow using bind and arrow functions in jsx props.)
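
If you use eslint-plugin-react, enabling that rule in your .eslintrc is a one-liner (a sketch, assuming the plugin is already installed):

{
  "plugins": ["react"],
  "rules": {
    "react/jsx-no-bind": 2
  }
}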

The simplest solution we’ve found is to pass the unbound function and the desired args to the child component, and have the child use an instance method, e.g.:

const TodoItem = React.createClass({
  deleteItem() {
    this.props.deleteItem(this.props.index);
  },
  render() {
    // deleteItem is autobound by createClass, so the same function instance is passed on every render.
    return <button onClick={this.deleteItem}>delete</button>;
  },
});

It feels odd to expose a more general method to a subcomponent, with the contract that it will pass back in the index. To get better encapsulation for more complex scenarios, we sometimes use an IntermediateBinder whose only purpose is to act as a binding context for e.g. the id argument. It takes the id as a prop, defines its own methods bound to itself, and passes those methods to the child component.

This allows us to write:

<IntermediateBinder
  deleteItem={this.deleteItem}
  boundArg={item.id}
>
  {(boundProps) => <TodoItem deleteItem={boundProps.deleteItem} />}
</IntermediateBinder>
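
A minimal sketch of what such a binder might look like (ours is a bit more general; the prop names here mirror the example above):

const IntermediateBinder = React.createClass({
  // Autobound instance method: a stable function that closes over boundArg.
  deleteItem() {
    this.props.deleteItem(this.props.boundArg);
  },
  render() {
    // children is a function that receives the pre-bound callbacks.
    return this.props.children({deleteItem: this.deleteItem});
  },
});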

(Another possibility we’ve explored is using a custom bind function that stores metadata on the function itself, which in combination with a more advanced check function, could detect bound functions that haven’t actually changed. This didn’t seem explicit enough for our tastes.)

Literal array/object construction

It’s simple but often overlooked. Array literals will break PureRenderMixin:

> ['important', 'starred'] === ['important', 'starred']
false

If this object is never expected to change, you can move it into a module constant / component static variable:

const TAGS = ['important', 'starred'];
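
Referencing the constant then keeps the prop shallow-equal across renders (TagList here is a hypothetical pure child):

render() {
  // Same array instance on every render, so a pure child sees equal props.
  return <TagList tags={TAGS} />;
}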

Subcomponents

Defining clear boundaries between a component and its subcomponents often lends itself to easy performance optimizations — well-encapsulated component interfaces lead naturally to performant updates. Refactoring out intermediate components can widen where you can use PureRenderMixin and skip unnecessary updates:

<div>
  <ComplexForm props={this.props.complexFormProps} />
  <ul>
    <li prop={this.props.items[0]}>item A</li>
    ...1000 items...
  </ul>
</div>

In this case, if complexFormProps and items come from the same store, typing in the ComplexForm might lead to store updates, and each store update leads to re-rendering the entire <ul>. Virtual DOM diffing is great, but it still has to check every <li>. Instead, refactor out <ul> into its own subcomponent that takes in this.props.items, and only update if this.props.items changes:

<div>
  <CustomList items={this.props.items} />
  <ComplexForm props={this.props.complexFormProps} />
</div>
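
CustomList itself can then be a straightforward pure component. A sketch (the item shape and markup are placeholders for whatever the original <ul> contained):

const PureRenderMixin = require('react-addons-pure-render-mixin');

const CustomList = React.createClass({
  mixins: [PureRenderMixin],
  render() {
    // Re-renders only when this.props.items changes identity, so typing in
    // ComplexForm no longer walks all 1000 list items.
    return (
      <ul>
        {this.props.items.map((item) => (
          <li key={item.id}>{item.name}</li>
        ))}
      </ul>
    );
  },
});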

Cache expensive computations

This goes against the “single source of state” principle, but if computations on a prop are expensive you can cache them on the component. Instead of directly using doExpensiveComputation(this.props.someProp) in the render method, we can wrap the call in a method that caches the value as long as the prop is unchanged:

getCachedExpensiveComputation() {
  if (this._cachedSomeProp !== this.props.someProp) {
    this._cachedSomeProp = this.props.someProp;
    this._cachedValue = doExpensiveComputation(this.props.someProp);
  }
  return this._cachedValue;
}

Candidates for this optimization would be best discovered using the JS Profiler.

Link state

React’s Two Way Binding Helpers can be very useful for simple inversion of control, allowing a child component to communicate new state to the parent. If it’s only used as a valueLink on a React form input, it isn’t so bad, since React’s form inputs are very simple. But if you start threading it through more components like we were doing, you may run into issues. linkState is implemented as follows:

linkState(key) {
  return new ReactLink(
    this.state[key],
    ReactStateSetters.createStateKeySetter(this, key)
  );
}

Every call to linkState returns a new object, even if the state hasn’t changed! This means shallowCompare will never pass. Our workaround is, unfortunately, simply not to use linkState. Flattening the linkState into a getter prop and a setter prop avoids creating a new object, e.g. nameLink={this.linkState('name')} could be replaced with name={this.state.name} setName={this.setName}. (We've considered writing a linkState that caches itself...)
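
Concretely, the flattened version might look like this sketch (NameForm, NameInput, and the prop names are illustrative):

const NameInput = React.createClass({
  handleChange(e) {
    // Call the setter instead of pushing a new value through a ReactLink.
    this.props.setName(e.target.value);
  },
  render() {
    return <input value={this.props.name} onChange={this.handleChange} />;
  },
});

const NameForm = React.createClass({
  getInitialState() {
    return {name: ''};
  },
  // Autobound, so the same setter function is passed down on every render.
  setName(name) {
    this.setState({name: name});
  },
  render() {
    return <NameInput name={this.state.name} setName={this.setName} />;
  },
});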

Compiler Optimizations

Newer versions of Babel and React support inlining React elements and automatically hoisting constant React elements. We haven’t played too much with this yet, unfortunately, but they will help with reducing calls to React.createElement and in speeding up DOM reconciliation, respectively.
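
If you’re on Babel 6, these are available as standalone plugins: transform-react-constant-elements hoists elements whose props never change out of render, and transform-react-inline-elements replaces React.createElement calls with inlined element objects. A sketch of the .babelrc, assuming you already use the es2015 and react presets:

{
  "presets": ["es2015", "react"],
  "plugins": [
    "transform-react-constant-elements",
    "transform-react-inline-elements"
  ]
}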

Wrapping Up

We went through a lot just now (you should’ve seen the original list!), but the key points to take away are: 1) get comfortable with profiling, and 2) shouldComponentUpdate will get you a long way. We hope this has been useful!

Any suggestions, comments, or things we’ve missed? Let us know — saif at benchling.com.

Stay tuned for part 2, where we’ll discuss our React debugging workflows, dive into real examples of non-performant code, and subsequently fix them.

Discuss on Hacker News

Update: Part 2 is out! Check it out — A Deep Dive into React Perf Debugging.

As always, we’re looking for product-loving folks to join the team. :)

Thanks to Jeff Chan, Victoria Sun, Harry Yu, and Mark Zhang for reading drafts of this.
