Webpack and Frontend Build-Time Performance Engineering for React

This article explores webpack, focusing on ways to investigate the build-time performance of webpack for your React apps. First, we cover the items that can affect build-time and runtime performance. Next, we’ll explore profiling tools for CPU and memory usage and how to use them. We’ll then delve into webpack bundles and how to analyze what’s in a bundle. Finally, we talk about setting goals.

React and Performance

React is an excellent framework for building web apps, especially when combined with other libraries.

However, there are multiple parts to managing and improving the performance of a React app:

  • State management libraries, such as Redux or MobX or React Query, can affect runtime performance. They control when re-renders occur and when and how often data is retrieved.
  • Type-checking with Flow or TypeScript can affect the build-time performance.
  • webpack has loaders and module resolution that can affect build-time performance (other build systems may have different performance considerations).
  • Caching in webpack can reduce build-times on subsequent builds for development or be disabled for production builds.
  • Webpack can generate source maps at different levels with different effects on build-time and runtime performance.
  • Targeting multiple runtimes, such as web and mobile (React Native) and desktop (Electron), can impact the build-time performance and the runtime. For instance, it may be acceptable on the desktop to load a large bundle JS with all the app components, whereas on the web, it’s better to lazy-load and load the app’s components as needed.
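Several of these trade-offs are controlled directly in the webpack configuration. Here's a minimal sketch — the option values are illustrative, not recommendations:

```javascript
// webpack.config.js — a sketch of common build-time/runtime trade-off knobs
module.exports = {
  // Filesystem caching speeds up subsequent development builds;
  // production builds often disable or ignore the cache.
  cache: { type: "filesystem" },

  // Source map detail level: "eval-cheap-module-source-map" rebuilds
  // quickly during development; "source-map" is slower but emits full
  // maps suitable for production debugging.
  devtool:
    process.env.NODE_ENV === "production"
      ? "source-map"
      : "eval-cheap-module-source-map",
};
```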

It’s also essential to understand the usage pattern of the app and what kind of effect longer initial page load times will have. Are your clients and customers accessing the app at the beginning of their workday and using it throughout the day? In that case, a large bundle size is okay. Are users accessing it throughout the day but only using a few features or pages of the app? A small initial bundle that includes those commonly accessed pages with everything else lazy-loaded is the way to go.

Runtime performance has to be weighed by understanding the impact on users, the amount of effort to improve and maintain it, and the business value derived from it. Build-time performance is a set of trade-offs that includes the developer experience.

When debugging memory issues, you have a few levers you can control in the build process, such as the configuration, plugins, and loaders, and there are layers to the memory issues. Some problems may be in your webpack configuration; other times, the issues are deeper, at the level of a function in core webpack or in an extra plugin.

Build-Time Performance for webpack

Build time covers all the steps required to produce a bundle of your web app: TypeScript compilation, Sass and PostCSS compilation, source map generation, and minification. You can measure build-time performance along two dimensions: timing and memory usage.

CPU Profiling Plugin and Timings

webpack’s ProfilingPlugin generates a JSON file showing which functions were called, how often, and how much CPU they used. The calls are grouped by webpack plugin, so you can see which plugins take up the most time during the build process. Improving the overly slow parts is only possible with profiling data that identifies where the build-time bottlenecks are.
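Enabling the plugin is a small configuration change; a sketch (the output path is illustrative):

```javascript
// webpack.config.js — enabling webpack's built-in ProfilingPlugin
const webpack = require("webpack");

module.exports = {
  // ...existing configuration...
  plugins: [
    // Writes a Chrome-trace-format JSON file of plugin/loader activity;
    // without outputPath it defaults to events.json.
    new webpack.debug.ProfilingPlugin({
      outputPath: "./profiling/events.json",
    }),
  ],
};
```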

You can analyze the JSON file in Chrome Dev Tools’ Performance tab. However, larger builds may create JSON files over 100 MB, too large for Chrome Dev Tools. If the file is that large, you can use 0x to view a flame graph, though 0x may not display all the information. Alternatively, you can use Node.js CPU profiling with webpack directly.

Heap Memory Profiling

heap-sampling-webpack-plugin is a webpack plugin that uses the Node inspector to sample the build process’s memory usage. It is simple to set up, and it generates heap profile files. A heap profile contains a tree of function calls along with the memory they allocated and retained. You can view heap profile files in the Memory tab of Chrome Dev Tools or in VS Code; both tools provide a tree of function calls you can drill into to discover where memory is being allocated and retained. Both can also display a flame graph, which is a more compact view and can be easier to navigate.
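Setup is a one-line plugin entry. A sketch — note that the import shape and options are assumptions here; check the plugin’s README for the exact form:

```javascript
// webpack.config.js — a sketch of wiring up heap-sampling-webpack-plugin
// (import shape assumed; consult the plugin's documentation)
const { HeapSamplingPlugin } = require("heap-sampling-webpack-plugin");

module.exports = {
  // ...existing configuration...
  plugins: [
    // Samples heap allocations during the build and writes a heap profile
    // file that Chrome Dev Tools or VS Code can open.
    new HeapSamplingPlugin(),
  ],
};
```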

Here’s a video demonstrating what the heap profile looks like in Chrome Dev Tools:

It’s a practical plugin, and it reports peak memory usage, which is vital for continuous integration environments.

When profiling the memory usage, you must consider whether that part of the build process is necessary. Consider if particular webpack plugins or loaders have memory leaks or unique memory usage characteristics.

Example: optimization and realContentHash

For instance, the `optimization` configuration for webpack has a `realContentHash` setting. This setting enables webpack to process assets again to generate accurate content hashes: https://webpack.js.org/configuration/optimization/#optimizationrealcontenthash

When this setting is disabled, webpack uses an internal hash calculation that may yield different hashes for identical content, so the setting is typically enabled. In the heap profile, you can see how much memory the real-content-hash pass consumes; it uses more memory as the number of assets grows, so when more files are part of the build, both build time and memory usage may increase. To reduce memory usage, one option is to turn the setting off and accept the drawback. Another option is to crack open webpack’s optimization code and investigate whether the realContentHash algorithm could consume less memory, for example by redesigning the algorithm or by calling out to an optimized C/C++/Rust/Go or operating system utility function.
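Turning the pass off is a one-line change; a sketch:

```javascript
// webpack.config.js — trading hash accuracy for lower memory usage
module.exports = {
  // ...existing configuration...
  optimization: {
    // Skips the extra pass that re-hashes emitted assets; hashes then come
    // from webpack's internal calculation and may differ between builds
    // even when the output content is identical.
    realContentHash: false,
  },
};
```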

Profiling with Node.js, CPU, and Heap Snapshots

Instead of using the plugins for webpack, you can use Node’s profiling tools to get a heap snapshot or a CPU snapshot. You can use this to profile any part of the build process.

As part of your React build process, you have to examine the performance of webpack, of ESLint, of the TypeScript compiler if you’re using TypeScript, and of your test framework, such as Jest or Vitest. Each piece of the build-time process could become a performance issue. At runtime, you can use the profiling tools in Chrome Dev Tools.

Jest tests, for example, have had a performance issue since October 2021, where the heap size grew larger with each new Node version, indicating a memory leak. The thread about this problem has hundreds of comments from various investigations, and it was only potentially solved recently. Sometimes there is more than a cost in time for slow build-time processes such as test runs; there is also a cost in dollars if you are using GitHub Actions or another continuous integration tool to run the tests.

Example: Profiling a newly created React App

Let’s walk through an example of using these tools. I used Create React App (with TypeScript) to generate a new app and updated the webpack configuration to include the above-mentioned profiling tools. You can view the repo on GitHub here: https://github.com/rudolfolah/npm-webpack-profiling

Then, I created a baseline heap memory profile. The total memory used was 119 MB. By looking at the flame graph of the profile, it was possible to see that a big chunk of memory (52 MB) was used in the middle of the build process.

A few rows of the flame graph showed where that memory was allocated: in the ESLint webpack plugin. Checking the webpack configuration generated by Create React App, I found an environment variable that turns the ESLint check on or off. I turned it off and ran the build process to create another heap profile. This time, the total memory used was 54 MB. Quite an improvement!

After that, the source map generator used the largest amount of memory. One thing to note here again is that you need to understand the goals of the build process; source maps can be helpful when debugging in a production environment and can be sent to tools like BugSnag or Sentry to produce more informative tracebacks when an error occurs. Since this is an example to demonstrate profiling, I turned that off too. The total memory used was then 42 MB, an improvement you can weigh against the usefulness of the source maps. In practice, we can live with a 54 MB build process that keeps source maps.

When examining the Profiling plugin output, you can see that the disabled ESLint plugin and source map generator are no longer there.

What’s in the webpack Config?

Another plugin that’s useful for performance analysis is webpack-config-dump-plugin. If your webpack configuration has conditionals and imports parts of itself from other files, it can be hard to trace which options and values ended up in the final build. The webpack-config-dump-plugin saves the resolved configuration with its values.

Here’s how I have been using it in the webpack configuration:

const { webpackConfigDumpPlugin } = require("webpack-config-dump-plugin");
new webpackConfigDumpPlugin({
  showFunctionNames: true,
  keepCircularReferences: true,
  includeFalseValues: true,
});

Webpack Bundle Analysis

Tree-shaking with ESM modules

Through tree-shaking, webpack removes unused functions and code from its bundles. Suppose you use a library with ten functions but only import and call one; webpack will then drop the nine unused functions from the generated bundle. This process is valuable for the libraries you make, and it’s essential to determine whether the libraries you use can undergo tree-shaking.

If a library is built correctly, it can take advantage of tree-shaking. Problems arise if the library doesn’t ship ES modules, lacks a “sideEffects” specification in its package.json, or has dependencies that don’t support tree-shaking. Some JavaScript libraries face this last issue: although they support tree-shaking themselves, their dependencies do not. You can observe this outcome when you display the contents of concatenated modules in the webpack bundle analyzer’s treemap.
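For library authors, the relevant package.json fields look something like this (a hypothetical package; the file paths are illustrative):

```json
{
  "name": "my-utils",
  "version": "1.0.0",
  "main": "dist/index.cjs.js",
  "module": "dist/index.esm.js",
  "sideEffects": false
}
```

`"sideEffects": false` tells bundlers that no module in the package has import-time side effects, so unused exports can be dropped safely; you can instead list specific files (such as CSS imports) that must always be kept.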

Webpack Loaders

Wherever an import/require statement uses a non-JavaScript file as part of the module path, webpack tries to find a matching loader and uses it instead of normal module resolution. For example, importing a .scss file would use the Sass loader, which transforms the .scss into .css and returns code for the bundle. Loaders can also handle images and SVGs by incorporating them into the bundle or copying them over as static assets.
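A loader is wired up through a module rule. A sketch for the .scss example, using the commonly paired loader packages (assuming they are installed):

```javascript
// webpack.config.js — a module rule that chains loaders for .scss imports
module.exports = {
  // ...existing configuration...
  module: {
    rules: [
      {
        test: /\.scss$/,
        // Loaders run right-to-left: sass-loader compiles SCSS to CSS,
        // css-loader resolves imports/urls, style-loader injects the CSS.
        use: ["style-loader", "css-loader", "sass-loader"],
      },
    ],
  },
};
```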

It can be worth considering whether those loaders are still needed and if there are replacements for the part of the process that requires them.

Remove or Replace Packages

Some libraries and packages are relatively large for their features and business value.

Here’s an example. Datetime functions are essential. However, the most commonly used datetime library is moment.js, which will cause your webpack bundle size to grow by ~250 kb. Lighter-weight alternatives can be used instead, such as luxon, which will increase the bundle size by ~178 kb. Another alternative is date-fns, which also supports tree-shaking.

Understanding how much value you get out of the package and whether there are replacements for it can help reduce the bundle size.

Another option is to remove a package altogether. You can sometimes do this by rewriting the code to avoid the package dependency or by reusing some other existing code. In other cases, you can inline exactly the functionality you need, though this depends on the package license and how much effort you want to spend incorporating the code.
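As a small illustration of removing a package altogether, basic date formatting needs no library at all — the built-in Intl API covers many cases at zero bundle cost:

```javascript
// Formatting a date with the built-in Intl API instead of a datetime library.
const formatter = new Intl.DateTimeFormat("en-US", {
  year: "numeric",
  month: "long",
  day: "numeric",
});

// new Date(2024, 0, 15) is January 15, 2024 (months are zero-indexed).
const formatted = formatter.format(new Date(2024, 0, 15));
console.log(formatted); // "January 15, 2024"
```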

Off-load Packages to CDNs using “externals”

CDNs (Content Delivery Networks) host the most popular JavaScript packages. You can load libraries like lodash or React from CDNs such as cdnjs or unpkg.

Your webpack configuration can flag those dependencies as “externals,” which means they will not be parsed or compiled into your bundle. There are a few advantages to this: CDNs are fast and near the edge where your customer is located; commonly used libraries on popular CDN URLs are likely already cached by the browser from visiting other websites; the webpack bundle becomes smaller.

Webpack supports multiple types of externals, from commonjs to umd to module to var (the default).
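A sketch using the default var type, assuming the libraries are loaded via CDN script tags that expose globals:

```javascript
// webpack.config.js — mark CDN-hosted libraries as externals
module.exports = {
  // ...existing configuration...
  externals: {
    // key: the import name; value: the global variable the CDN script sets
    react: "React",
    "react-dom": "ReactDOM",
    lodash: "_",
  },
};
```

The corresponding script tags for the CDN bundles must appear in your HTML before the app bundle loads.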

Split Packages into Sub-Packages, from Monolith to Mono-repo

To reduce the impact of a package on your bundle’s size, move it into a sub-package, especially for code parts that change infrequently. Combine this with a CDN and specify it as an external package to shrink the primary app bundle size. This approach also ensures that client web browsers retain the CDN packages in their browser cache on subsequent visits if unchanged. Alternatively, the new sub-package might enable tree-shaking, especially if it is a utility library used by multiple apps, where each relies on a distinct set of functions.

Lazy-Loading with React

In React, you can use the lazy() function and Suspense to wrap a component, ensuring it loads only when needed. Webpack turns the dynamic import into a separate chunk, reducing the main app bundle’s size. This approach speeds up the initial page load by preventing unneeded code from loading.
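A sketch of the pattern (component names and paths are hypothetical):

```jsx
import React, { lazy, Suspense } from "react";

// The dynamic import() below becomes a separate webpack chunk, fetched
// only when <SettingsPage /> first renders.
const SettingsPage = lazy(() => import("./pages/SettingsPage"));

function App() {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      <SettingsPage />
    </Suspense>
  );
}

export default App;
```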

An issue may arise when you import a component directly instead of using lazy(). Direct imports of pages or subsections into another page can embed that chunk back into the main bundle, so you should carefully analyze each file and component’s import declarations and dependencies. Analyzing the bundle and visualizing the connections between chunks and assets makes it easier to spot when you are re-importing a piece of code and embedding it into multiple chunks or into the main app bundle.

Setting Goals for Frontend Performance

Webpack allows you to set a performance budget for bundles and assets, which significantly assists in the early stages of developing a new app and adding features. If your app surpasses this performance budget, consider implementing one of the previously mentioned options or expand the performance budget. When engineering frontend performance, whether during build-time, at runtime on initial page load, or during user interactions, you must base decisions on business value, engineering value, and customer experience.
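The budget lives under webpack’s performance key; a sketch (the sizes shown are webpack’s defaults):

```javascript
// webpack.config.js — a performance budget for emitted assets
module.exports = {
  // ...existing configuration...
  performance: {
    hints: "warning", // use "error" to fail the build when exceeded
    maxAssetSize: 250000, // bytes, per emitted asset
    maxEntrypointSize: 250000, // bytes, per entry point
  },
};
```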

Depending on the kind of app you’re building and its usage patterns, a single large bundle can deliver great value, or several small bundles with externals/CDN-hosted libraries and lazy-loading could provide the same or better business value and better engineering value during development. Setting these goals, understanding the value, and respecting your team’s expectations are highly important.