Speeding up file load times, one page at a time

Neither alternative would have let us materially improve the user experience in the short term while also serving as a sustainable long-term solution. Delayed editing and backfill would have been simpler technically, but it wouldn't reduce client-side memory. The data model overhaul would avoid the need for write dependencies, but it would be a much longer-term undertaking. As a result, we moved forward with write dependency computation, which strikes a balance between performance, feasibility, and user experience. With this loading approach, Figma downloads the first page and all of its read and write dependencies on initial load. As the user navigates to other pages, Figma downloads those pages (and their read and write dependencies) on demand.
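
To make the flow concrete, here's a minimal sketch of on-demand page loading from the client's perspective; the names below (DynamicFileLoader, fetchPageWithDependencies) are hypothetical stand-ins rather than Figma's actual client code.

```typescript
// A minimal sketch of the on-demand loading flow described above. The names here
// (DynamicFileLoader, fetchPageWithDependencies) are hypothetical, not Figma's API.
type NodeID = string;

interface PagePayload {
  pageID: NodeID;
  // The page's nodes plus their read and write dependencies.
  nodes: Map<NodeID, unknown>;
}

class DynamicFileLoader {
  // Pages (and their dependencies) that have already been requested.
  private pages = new Map<NodeID, Promise<PagePayload>>();

  constructor(
    private fetchPageWithDependencies: (pageID: NodeID) => Promise<PagePayload>,
  ) {}

  // Initial load: download only the first page and its dependencies.
  openFile(firstPageID: NodeID): Promise<PagePayload> {
    return this.openPage(firstPageID);
  }

  // Navigation: download other pages (and their dependencies) on demand,
  // reusing the result if the page was already requested.
  openPage(pageID: NodeID): Promise<PagePayload> {
    let pending = this.pages.get(pageID);
    if (!pending) {
      pending = this.fetchPageWithDependencies(pageID);
      this.pages.set(pageID, pending);
    }
    return pending;
  }
}
```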

Implicitly and explicitly encoding dependencies

Previously, QueryGraph only encoded read dependency edges since viewers and prototypes don’t need to consider write dependencies. To extend this framework to editors, we replaced the underlying data structure with a bidirectional graph. When loading a file dynamically, it was important that we could quickly determine both sets of dependencies for a given node.
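
As a rough illustration of the shape such a structure can take (a sketch, not QueryGraph's actual representation), keeping one adjacency map per direction makes both lookups cheap:

```typescript
// A minimal sketch of a bidirectional dependency graph, assuming string node IDs.
type NodeID = string;

class DependencyGraph {
  // node -> nodes it depends on (outgoing edges)
  private dependsOn = new Map<NodeID, Set<NodeID>>();
  // node -> nodes that depend on it (incoming edges)
  private dependedOnBy = new Map<NodeID, Set<NodeID>>();

  addEdge(from: NodeID, to: NodeID): void {
    this.getOrInit(this.dependsOn, from).add(to);
    this.getOrInit(this.dependedOnBy, to).add(from);
  }

  // Both directions are cheap lookups, which is what dynamic loading needs.
  dependenciesOf(node: NodeID): ReadonlySet<NodeID> {
    return this.dependsOn.get(node) ?? new Set<NodeID>();
  }

  dependentsOf(node: NodeID): ReadonlySet<NodeID> {
    return this.dependedOnBy.get(node) ?? new Set<NodeID>();
  }

  private getOrInit(map: Map<NodeID, Set<NodeID>>, key: NodeID): Set<NodeID> {
    let set = map.get(key);
    if (!set) {
      set = new Set<NodeID>();
      map.set(key, set);
    }
    return set;
  }
}
```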

The auto layout write dependency we introduced is an example of an implicit write dependency between nodes that don’t otherwise refer to one another directly. We encoded these dependencies as a new type of edge in our graph.

Moreover, all of the existing read dependencies were foreign key dependencies, meaning that the dependency was explicitly encoded on the node data structure. For example, instance nodes have a componentID field, which provides the foreign key to look up the component node that they depend on. Dynamic loading for editors required extending this further to support implicit write dependencies, like the auto layout case above, where edits to a node in an auto layout frame can result in automatic changes to neighboring nodes.
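
One way to picture the two kinds of edges is to derive them from node data and structure. In the sketch below, only the componentID field comes from the post; the other node fields and names are assumptions for illustration:

```typescript
// A sketch of how read and write edges might be derived, under assumed node shapes.
type NodeID = string;

interface SceneNode {
  id: NodeID;
  componentID?: NodeID;           // explicit foreign key: instance -> component
  parentUsesAutoLayout?: boolean; // stand-in for the real layout flags
}

type Edge =
  | { kind: "read"; from: NodeID; to: NodeID }   // explicit, foreign-key based
  | { kind: "write"; from: NodeID; to: NodeID }; // implicit, structural

function edgesForNode(node: SceneNode, siblings: SceneNode[]): Edge[] {
  const edges: Edge[] = [];

  // Explicit read dependency: an instance reads its backing component.
  if (node.componentID) {
    edges.push({ kind: "read", from: node.id, to: node.componentID });
  }

  // Implicit write dependency: editing a node inside an auto layout frame can
  // move its siblings, even though neither node refers to the other directly.
  if (node.parentUsesAutoLayout) {
    for (const sibling of siblings) {
      if (sibling.id !== node.id) {
        edges.push({ kind: "write", from: node.id, to: sibling.id });
      }
    }
  }

  return edges;
}
```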

Multiplayer holds both the full representation of the file and the QueryGraph of dependencies in memory in order to serve client file loads and edits. For each dynamic file load, the client specifies the initial desired page and QueryGraph computes the subset of the file that the client needs. As users make edits to the file, the server computes which edits need to be sent down to each session as a function of the session's subscription set and QueryGraph. For example, if a user has only loaded the first page in a file, and a collaborator updates the fill of a rectangle on another page, we wouldn't send that change to the first user because the rectangle is "unreachable" from their subscribed set.
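
Conceptually, this filtering is a reachability check over QueryGraph. The sketch below assumes hypothetical graph and session shapes; it isn't the multiplayer server's actual code:

```typescript
// A sketch of reachability-based edit filtering, assuming a graph that exposes
// dependenciesOf(). The session and filter shapes are illustrative assumptions.
type NodeID = string;

interface Graph {
  dependenciesOf(node: NodeID): Iterable<NodeID>;
}

// Nodes reachable from the session's subscribed pages via dependency edges.
function reachableSet(graph: Graph, subscribed: Iterable<NodeID>): Set<NodeID> {
  const reachable = new Set<NodeID>();
  const stack = [...subscribed];
  while (stack.length > 0) {
    const node = stack.pop()!;
    if (reachable.has(node)) continue;
    reachable.add(node);
    for (const dep of graph.dependenciesOf(node)) stack.push(dep);
  }
  return reachable;
}

// Only forward an edit to a session if the edited node is reachable from its
// subscription set; otherwise that session doesn't have (or need) the node.
function shouldSendEdit(editedNode: NodeID, reachable: Set<NodeID>): boolean {
  return reachable.has(editedNode);
}
```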

Consider an instance of a button component placed inside an auto layout frame. If the font size variable is updated, the text in the button component will change. The instance will then need to update, and because it's in an auto layout frame, the layout of the frame and its contents will also update based on the new size of the instance.

As other users edit the file, the dependency edges of QueryGraph can change. Changes to the dependency graph by one user may result in significant multiplayer traffic for another. For example, if a user swaps an instance to another component, multiplayer may need to recognize that their collaborator now needs to receive the component and all of its descendants (and their dependencies)—even if the first user didn’t touch any of those nodes. That user has simply made those nodes newly “reachable” to their collaborator, and the system must respond accordingly.
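
A sketch of that delta computation, under the same assumptions as above: an edge change hands the system a newly referenced node, and it walks any dependencies the session hasn't loaded yet.

```typescript
// A sketch of reacting to a dependency change such as an instance swap. When a new
// edge makes nodes reachable for a session, those nodes must be shipped to it.
// All names here are illustrative, not the actual multiplayer implementation.
type NodeID = string;

interface Graph {
  dependenciesOf(node: NodeID): Iterable<NodeID>;
}

function newlyReachable(
  graph: Graph,
  alreadyLoaded: Set<NodeID>,
  edgeTarget: NodeID, // e.g. the newly referenced component
): Set<NodeID> {
  const added = new Set<NodeID>();
  const stack = [edgeTarget];
  while (stack.length > 0) {
    const node = stack.pop()!;
    if (alreadyLoaded.has(node) || added.has(node)) continue;
    added.add(node);
    // Pull in the node's dependencies (and, transitively, theirs as well).
    for (const dep of graph.dependenciesOf(node)) stack.push(dep);
  }
  return added; // nodes that must now be sent to the collaborator's session
}
```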

Validating dependencies

Actions that users take in a dynamically loaded file should produce the exact same set of changes as they would if the file were fully loaded. To achieve editing parity, the set of write dependencies needed to be perfect. Missing a dependency could result in a client failing to update derived data on a downstream node. To a user, these errors would look like serious bugs: instances diverging from their backing component, layout being incorrect or stale, or text with missing fonts failing to display correctly.

We wanted to know that clients were strictly editing nodes in accordance with the write dependency rules we had enumerated. To validate that, for an extended period of time, we ran multiplayer in a shadow mode. In this mode, multiplayer would track what page the user was on, computing write dependencies as if they had loaded dynamically, without actually changing any runtime behavior. If multiplayer received any edits to nodes outside of the write dependency set, it would report an error.
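
In rough TypeScript, the shadow check might look like the following; the session shape and reportError hook are assumptions for illustration, and only the report-without-blocking behavior comes from the post:

```typescript
// A sketch of the shadow validation idea: compute what the write dependency set
// would have been, then only report (never block) edits that fall outside it.
type NodeID = string;

interface ShadowSession {
  currentPage: NodeID;
  // Write dependency set as if the session had loaded dynamically.
  allowedWrites: Set<NodeID>;
}

function validateEditInShadowMode(
  session: ShadowSession,
  editedNodes: NodeID[],
  reportError: (message: string) => void,
): void {
  for (const node of editedNodes) {
    if (!session.allowedWrites.has(node)) {
      // In shadow mode we only log; runtime behavior is unchanged.
      reportError(
        `Edit to ${node} outside write dependency set for page ${session.currentPage}`,
      );
    }
  }
}
```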

Using this validation framework, we successfully identified dependencies we had missed in our initial implementation. For example, we discovered a complex, cross-page, recursive write dependency involving frame constraints and instances. Had we not handled this dependency properly, edits could have resulted in incomplete layout computations. With our shadow validation framework, we were able to identify any gaps, introduce additional tests, and update QueryGraph to avoid similar bugs.

Performance in practice

Before dynamic loading for editors, clients could download an encoded Figma file directly, without our multiplayer system needing to decode it in memory. But with dynamic page loading, the server needs to first decode the Figma file and build up QueryGraph in memory in order to determine which contents to send to the client. This decoding process can be time-consuming and sits in the critical path, so it was important to optimize it.

First, we ensured that multiplayer could begin the decoding process as early as possible. As soon as Figma receives the initial GET request for the page load, our backend sends a hint to multiplayer indicating that a file load is imminent and that it should start preloading the file. This way, multiplayer begins downloading and decoding the file even before the client establishes a WebSocket connection to it. Preloading in this fashion shaves 300–500 milliseconds off the 75th percentile (p75) load time.
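
The essence of the hint is that it's fire-and-forget and happens before any WebSocket exists. A minimal sketch with hypothetical handler and client names:

```typescript
// A sketch of the preload hint flow; the handler and client names are assumptions.
// The point is simply that decoding starts before the WebSocket connection exists.

interface MultiplayerClient {
  // Hypothetical RPC telling multiplayer a file load is imminent.
  preloadFile(fileKey: string): Promise<void>;
}

async function handleInitialPageLoad(
  fileKey: string,
  multiplayer: MultiplayerClient,
  renderHtml: () => string,
): Promise<string> {
  // Fire-and-forget: don't block the HTML response on the preload hint.
  void multiplayer.preloadFile(fileKey).catch(() => {
    /* preloading is best-effort; a failure just means a slower load */
  });
  return renderHtml();
}
```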

Next, we introduced parallel decoding, an optimization in which we persist raw offsets in the Figma file, allowing us to split decoding work into chunks that multiple CPU cores can process concurrently. Decoding the binary-encoded Figma file serially can be quite slow (over five seconds for our largest files!), so in practice, parallel decoding reduces decoding time by over 40%.
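
The key property is that persisted offsets let each chunk be decoded without looking at the others. Here's a sketch of the chunking, assuming a hypothetical decodeChunk that a real system would fan out across worker threads or cores:

```typescript
// A sketch of offset-based chunked decoding. The offsets format and decodeChunk
// are illustrative assumptions, not Figma's actual encoding.
interface EncodedFile {
  bytes: Uint8Array;
  chunkOffsets: number[]; // persisted raw offsets into the encoded file
}

function splitIntoChunks(file: EncodedFile): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let i = 0; i < file.chunkOffsets.length; i++) {
    const start = file.chunkOffsets[i];
    const end = file.chunkOffsets[i + 1] ?? file.bytes.length;
    chunks.push(file.bytes.subarray(start, end));
  }
  return chunks;
}

async function decodeInParallel<T>(
  file: EncodedFile,
  decodeChunk: (chunk: Uint8Array) => Promise<T>, // e.g. runs on a worker pool
): Promise<T[]> {
  // Each chunk can be decoded independently, which is what makes
  // parallelism possible in the first place.
  return Promise.all(splitIntoChunks(file).map(decodeChunk));
}
```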

Reducing the amount of data the multiplayer system sends to clients was a big win for dynamic page loading, but we recognized we could go even further with additional client-side optimizations. Specifically, the client caches descendants of instance nodes in memory so that the user can easily edit and interact with them. But instance "sublayers" are fully derivable from the instance's backing component and any overrides that the user has set, so there is no need to materialize them all on initial file load. As part of dynamic page loading, we now defer materializing instance sublayers for nodes on other pages. This yielded huge load time wins, but it required updating dozens of subsystems to remove the assumption that all nodes are fully materialized at load and to support lazy, deferred materialization instead.
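
Here's a sketch of what deferred materialization can look like for instance sublayers; the types and the deriveFromComponent callback are illustrative stand-ins, not Figma's actual derivation logic:

```typescript
// A sketch of lazily materializing instance sublayers: derive them from the backing
// component plus overrides only on first access.
type NodeID = string;

interface InstanceNode {
  id: NodeID;
  componentID: NodeID;
  overrides: Map<NodeID, unknown>;
}

class LazyInstanceSublayers {
  private cached: NodeID[] | null = null;

  constructor(
    private instance: InstanceNode,
    private deriveFromComponent: (
      componentID: NodeID,
      overrides: Map<NodeID, unknown>,
    ) => NodeID[],
  ) {}

  // Sublayers are fully derivable, so nothing is materialized until a subsystem
  // actually asks for them (e.g. when the user navigates to that page).
  get(): NodeID[] {
    if (this.cached === null) {
      this.cached = this.deriveFromComponent(
        this.instance.componentID,
        this.instance.overrides,
      );
    }
    return this.cached;
  }
}
```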

Down-and-to-the-right load times

We shipped dynamic page loading to groups of users over the course of six months, carefully measuring the impact of our changes in controlled A/B tests and monitoring our automated telemetry. In the end we saw some great results:

  • 33% speed-up for the slowest and most complex file loads—despite files increasing 18% in size year-over-year
  • 70% reduction in number of nodes in memory on the client by only loading what users need
  • 33% reduction in users experiencing out-of-memory errors

We’re always looking for opportunities to optimize load times and reduce memory, and as files grow ever larger and more complicated, dynamic page loading has become the foundation of our performance improvements. If this type of work interests you, check out our open roles—we’re hiring!

Thank you to all of those who made this happen, including Andrew Chan, Andrew Swan, Darren Tsung, Eddie Shiang, Georgia Rust, Jackie Chui, John Lai, Melissa Kwan, and Raghav Anand.
