Figma rendering: Powered by WebGPU

When Figma Design launched in 2015, most rich design tools were still native desktop apps. Betting on WebGL—a browser graphics API originally designed for 3D applications—was a bold move. WebGL wasn’t widely used for complex 2D applications at the time, but Figma’s team saw its potential to power a smooth, infinite canvas in the browser. That early bet on WebGL set the foundation for Figma’s performance and real-time collaboration capabilities.

In 2023, Chromium shipped support for WebGPU, the successor to WebGL. It allows for new rendering optimizations not possible in WebGL—for instance, compute shaders that can be used to move work off the CPU and onto the GPU to take advantage of its parallel processing. By supporting WebGPU, we could also avoid WebGL’s bug-prone global state and benefit from clearer, more performant error handling.

This wasn’t an easy upgrade, however—we had to design the new rendering backend with performance in mind, maintain compatibility with WebGL while adding WebGPU support, and roll out our changes carefully to avoid breakages. What follows are highlights from the major phases of the project.

Updating our graphics interface

When we started the project to support WebGPU, Figma’s engine already had an existing interface layer between higher-level rendering code and low-level OpenGL—but the interface mapped closely to the WebGL API. We had to implement several key improvements to modernize our interface and ensure that the transition to WebGPU would improve performance, not regress it.

Making draw-call arguments explicit

WebGL relies heavily on global state, “binding” resources to global binding points before issuing draw calls to the GPU. Initially, our interface did the same:

C++

// set up different types of data/settings that will be used for a draw call
context->bindVertexBuffer(vertexBuffer, ...);
context->bindTextureUniform(texture, ...);
context->bindMaterial(material, ...);
context->bindFramebuffer(framebuffer, ...);
// … set up any other resources
context->draw();

After draw() is called, the resources stay bound! It’s easy to forget to update one or more of these inputs before the next draw call and introduce a bug.

A major part of our project was to make all of the state required for each draw call explicit and WebGPU-like. Our new API looked like this:

C++

context->draw(vertexBuffer, framebuffer, {texture}, material, …);

The draw() function’s WebGL implementation updates the bindings for each resource type only as needed. Since they are now function arguments, it’s also impossible to forget to update them.
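As an illustrative sketch (not Figma’s actual code, and with hypothetical type names), a WebGL-backed draw() can cache the last-bound resource of each type and rebind only when an argument changes:

```cpp
#include <cassert>

// Hypothetical resource handles; the real types wrap WebGL objects.
struct VertexBuffer {};
struct Framebuffer {};

class Context {
public:
    void draw(VertexBuffer* vb, Framebuffer* fb) {
        // Lazily update each global binding only when the argument changed.
        if (vb != boundVertexBuffer_) {
            bindVertexBuffer(vb);
            boundVertexBuffer_ = vb;
        }
        if (fb != boundFramebuffer_) {
            bindFramebuffer(fb);
            boundFramebuffer_ = fb;
        }
        // ... issue the actual GL draw call here
    }

    int bindCalls = 0;  // exposed here only to observe the caching behavior

private:
    void bindVertexBuffer(VertexBuffer*) { ++bindCalls; }  // would call glBindBuffer
    void bindFramebuffer(Framebuffer*) { ++bindCalls; }    // would call glBindFramebuffer

    VertexBuffer* boundVertexBuffer_ = nullptr;
    Framebuffer* boundFramebuffer_ = nullptr;
};
```

Because the caller always passes every resource, stale bindings are impossible, while redundant rebinding is avoided.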

This modified interface fixed a handful of bugs in our WebGL renderer before we even touched WebGPU.

Shader processing

Shaders are programs that run on the GPU and are responsible for actually producing the pixel output that is displayed on the user’s screen. Figma’s renderer is powered by shaders.

In WebGL, shaders are written in a language called GLSL, but WebGPU uses its own new shading language, WGSL. Since we still need to support WebGL, we couldn’t simply convert all of our existing GLSL shaders to WGSL; we needed to support both. Duplicating every shader and maintaining one version in GLSL and one in WGSL wasn’t feasible, so we needed an approach that made it easy for engineers to write new shaders and maintain existing ones.

Further complicating this problem, our GLSL shaders were written in an older format targeting WebGL 1, which is structurally very different from WGSL (and from newer versions of GLSL). For example, in our GLSL shaders we specify uniforms individually, but later versions of GLSL and WGSL both require uniforms to be grouped into blocks.

Fortunately, some open-source tools can convert shaders between formats. Unfortunately, they don’t support our older GLSL shaders. Our solution was to combine an existing open-source tool with our own custom shader processor. We maintain our existing GLSL shaders, written in WebGL 1–compliant format, and the shader processor automatically translates them to WGSL: it parses the shaders, converts them to a newer version of GLSL, then runs the open-source tool naga to produce WGSL. The processor ultimately generates both GLSL and WGSL and extracts some important information (e.g., the input types and data layouts) for use within the Figma app. It also supports features like file includes for better code modularity and reuse.
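To illustrate the structural difference (a simplified example, not one of Figma’s actual shaders): WebGL 1 GLSL declares uniforms individually, while WGSL requires them to be grouped into a struct backed by a single uniform buffer.

```glsl
// WebGL 1 GLSL: individual uniform declarations
uniform float alpha;
uniform mat3 transform;
```

```wgsl
// WGSL: the same uniforms grouped into a struct bound as one buffer
struct Uniforms {
  alpha: f32,
  transform: mat3x3<f32>,
};
@group(0) @binding(0) var<uniform> uniforms: Uniforms;
```

The shader processor performs this kind of regrouping automatically, so engineers keep writing in the older GLSL format.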

Uniform buffers

Uniforms are like global variables that are passed to shaders. For example, you might supply a color as a uniform to a shader, so that you can compile a shader once and use it for drawing many different colors.

While WebGPU enables new performance optimizations, just moving from WebGL to WebGPU by itself isn’t guaranteed to improve performance. In fact, there are many ways to implement WebGPU support that might make performance a lot worse. One of these risks comes from the different ways WebGL and WebGPU handle uniforms.

In WebGL, you can set uniforms individually, one at a time:

JavaScript

const locationAlpha = gl.getUniformLocation(program, "alpha");
const alphaValue = 1.0;
gl.uniform1f(locationAlpha, alphaValue);

const locationTransform = gl.getUniformLocation(program, "transform");
const transformValue = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0];
gl.uniformMatrix3fv(locationTransform, false, transformValue);
...

In our graphics interface layer, we mirrored this WebGL API:

C++

material->setUniform1f(ALPHA, 1.0);
material->setUniformMatrix3fv(TRANSFORM, transform);
context->draw(material, ...);

In WebGPU, all uniforms must be supplied using a uniform buffer. Rather than setting uniform values individually, we have to write uniform data for multiple uniforms into a single buffer and upload the data to the GPU all at once:

JavaScript

// create a Float32Array with multiple uniform values
const uniformData = new Float32Array(sizeOfAllUniforms);
// write data into the array at the right offsets...
uniformData.set([1.0], offsetOfAlpha);
uniformData.set([1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0], offsetOfTransform);

// set up a uniform buffer
const uniformBuffer = device.createBuffer({
  size: uniformData.byteLength,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});

// upload the data to the GPU
device.queue.writeBuffer(uniformBuffer, /*offset*/ 0, uniformData);

// now we can use the uniformBuffer in a draw call

Naively, we could follow these steps every time we call setUniform, and our existing interface would be compatible with WebGPU. However, we expected that this would regress performance, because allocating GPU memory and uploading data to a buffer are both expensive operations.

Instead, we decided we would need to batch uploads together, by setting up the uniforms for multiple draw calls, uploading all of the data at once, and then “submitting” all the draw calls in the right order. So, we updated the API to account for this:

C++

context->encodeDraw(uniformStructData, material1, ...);
context->encodeDraw(otherUniformStructData, material2, ...);
// encode more draws...
context->submit();

When using WebGPU, calling submit() uploads all the uniform data for all encoded draw calls into a single buffer, then executes the draw calls, providing each one an offset into that buffer for where to find its uniforms.
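A minimal sketch of this batching, assuming hypothetical names (not Figma’s actual implementation): encodeDraw() copies each draw’s uniform struct into a CPU-side staging buffer at an aligned offset, and submit() performs one upload for the whole batch. WebGPU requires dynamic uniform-buffer offsets to be aligned to minUniformBufferOffsetAlignment, which defaults to 256 bytes, so each draw’s data starts on a 256-byte boundary:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

constexpr size_t kUniformAlignment = 256;  // WebGPU's default offset alignment

struct EncodedDraw {
    size_t uniformOffset;  // where this draw's uniforms live in the shared buffer
    size_t uniformSize;
};

class DrawEncoder {
public:
    // Copy one draw call's uniform struct into the CPU-side staging buffer.
    EncodedDraw encodeDraw(const void* uniformData, size_t size) {
        size_t offset = staging_.size();
        staging_.resize(offset + size);
        std::memcpy(staging_.data() + offset, uniformData, size);
        // Pad so the next draw's data starts on an aligned offset.
        size_t padded = (staging_.size() + kUniformAlignment - 1)
                        / kUniformAlignment * kUniformAlignment;
        staging_.resize(padded);
        draws_.push_back({offset, size});
        return draws_.back();
    }

    // submit() would upload staging_ to the GPU with a single writeBuffer
    // call, then replay the draws, passing each recorded offset as a
    // dynamic uniform-buffer offset. Here it just reports the batch size.
    size_t submit() {
        size_t uploadedBytes = staging_.size();
        staging_.clear();
        draws_.clear();
        return uploadedBytes;  // one upload instead of one per draw
    }

private:
    std::vector<uint8_t> staging_;
    std::vector<EncodedDraw> draws_;
};
```

The key property is that GPU buffer allocation and upload happen once per batch rather than once per draw call.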

When using WebGL, we just call the existing individual uniform functions.

By updating the interface to support managing uniforms in this way, we reduced the risk of regressing performance with WebGPU.

The core implementation

Once we’d made the necessary updates to our graphics API interface, we began actually building the WebGPU implementation of that interface.

The graphics interface and WebGL implementation, before adding WebGPU

The graphics interface and both WebGPU and WebGL implementations

The existing WebGL implementation consisted of a few different classes, each wrapping some part of WebGL state.

Since we’d previously invested time to update each of these interfaces and functions to map more closely to WebGPU resources, we were able to save a lot of time during this implementation phase.

But how do we actually use WebGPU?

Our renderer is written in C++. We compile this C++ code to WebAssembly (Wasm) using Emscripten, in order to use it in the main Figma application. But we also compile our C++ renderer code into a native x64/arm64 application used for server-side rendering, as well as testing and debugging. So we needed a way to write code using the WebGPU C/C++ API and have it work in both cases, with minimal per-platform branching.


For Wasm, we decided to use Emscripten’s built-in WebGPU bindings support. This means that C++ WebGPU calls ultimately end up using the WebGPU browser API in JavaScript, even though our code is written in C++. We also had to write some of our own custom C++/JS bindings in cases where these built-in bindings weren’t performant enough.

To support using WebGPU in native builds, we incorporated Dawn, the WebGPU implementation used by the Chromium browser, into our build. A benefit of this setup is that both our Wasm and native apps use Dawn for translating WebGPU into lower-level graphics APIs.

How our rendering code interacts with Dawn and is shared between our web and native apps

We’re now working to adopt Dawn’s emdawnwebgpu bindings, since Emscripten’s built-in WebGPU bindings have been deprecated.

WebGL supports synchronous pixel readback, but WebGPU can only read data back from the GPU asynchronously. This is a major API change that any existing WebGL application will need to adapt to or work around when migrating to WebGPU.

There were many more differences between WebGL and WebGPU that we had to account for throughout the implementation process, including differences in internal coordinate systems, error handling, and sync versus async readback.

In WebGL, checking errors is synchronous, and we’ve found that checking errors too frequently can tank performance. In WebGPU, errors are reported asynchronously and contain helpful error messages.

Shipping WebGPU

Once we had a working WebGPU implementation in the Figma editor, we began to assess performance using our internal performance testing framework and benchmarking against our existing WebGL baseline.


We ran all of our tests across a variety of different Windows, Mac, and ChromeOS devices (we expected that the performance might vary a lot on different types of devices, and it did!).

How we automated performance testing against multiple different device types

From there, we identified the scenarios with the largest regressions and began to work on tuning and optimizing our implementation. We worked on optimizations like caching and reusing bindGroups as much as possible, and finding ways to better batch draw calls into renderPasses.

Once we’d fixed all the major regressions, we began the production rollout, continuing to monitor performance metrics at each rollout percentage, and breaking down the data by GPU type, OS, and browser. We saw a performance improvement when using WebGPU on some classes of devices, and more neutral results on others, but no regressions.

Deep dive: Windows device compatibility

When initializing WebGL, Figma runs a series of tests that try to render pixels to a texture, read the pixels back, and confirm that WebGL is working correctly. These tests allow us to detect cases where a user has a buggy graphics card or driver, and try to work around the bugs.

Our initial plan was to write similar tests for WebGPU, to detect cases where WebGPU support is buggy and use WebGL instead. Unfortunately, this sort of test requires reading data back from the GPU. Because WebGPU only supports asynchronous readback, blocking load on the test results could increase load times by hundreds of milliseconds, which wasn’t acceptable.

We came up with a plan to work around this by rolling out WebGPU support in two parts:

  1. First we would ship a change to run a series of WebGPU compatibility tests on devices in a non-load-blocking way, after a session had already started.
  2. From there we could identify any potentially buggy devices based on the tests, and blocklist these specific devices prior to rolling out WebGPU.

These tests also turned out not to be good enough. After we started our rollout on Windows, we began to see cases where WebGPU support could fail mid-session. For example, it’s possible for the WebGPU device to be lost, but for the application to be unable to request a new one (requestDevice or requestAdapter start to throw errors). This meant that running tests on file load wouldn’t be sufficient.

So we pivoted again and built a dynamic fallback system where we could start a session with WebGPU-based rendering, and fall back to WebGL-based rendering later as needed. The system works similarly to how we already handle WebGL context loss (and in WebGPU, device loss), but instead of re-creating the context/device using the same backend, we swap backends. We trigger the fallback if the asynchronous tests fail, or if we get any other kind of failure related to WebGPU mid-session.
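The backend selection can be sketched as a small state machine (simplified and hypothetical; the real system also consults the device blocklist and rollout flags):

```cpp
#include <cassert>

enum class Backend { WebGPU, WebGL };

class BackendSelector {
public:
    explicit BackendSelector(bool webgpuAvailable)
        : backend_(webgpuAvailable ? Backend::WebGPU : Backend::WebGL) {}

    Backend current() const { return backend_; }

    // Called when the asynchronous compatibility tests fail, or when any
    // other WebGPU failure (e.g., unrecoverable device loss) occurs
    // mid-session.
    void onWebGpuFailure() {
        if (backend_ == Backend::WebGPU) {
            // Re-create the rendering context on the WebGL backend
            // instead of requesting a new WebGPU device.
            backend_ = Backend::WebGL;
        }
    }

private:
    Backend backend_;
};
```

Falling back is one-way: once a session has downgraded to WebGL, it stays there rather than risking repeated hitches from swapping backends again.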

How we choose whether to use WebGPU or WebGL for rendering

Once we had the dynamic fallback system in place, we began to roll out again, this time blocklisting devices based on their average fallback rates (since falling back from WebGPU to WebGL can cause a hitch, which we’d like to avoid). Using this approach, we were finally able to complete the rollout.

The WebGPU future

We’re planning to leverage WebGPU to continue improving Figma’s performance. For instance, we can now optimize blur rendering using compute shaders, and use WebGPU’s MSAA (Multi-Sample Anti-Aliasing). Submitting work to the GPU has CPU overhead, too; WebGPU’s RenderBundles feature could help us reduce it.

With a year of hard work behind us to unlock these kinds of optimizations, we’re excited about the future of Figma rendering with WebGPU.
