Web Performance Regression Detection (Part 1 of 3)

Pinterest Engineering
Apr 22, 2024 · 3 min read


Michelle Vu | Web Performance Engineer

Detecting, preventing, and resolving performance regressions has been standard practice at Pinterest for many years. Over the years, we have seen many examples of significant business metric movements resulting from performance optimizations and regressions, and these concrete examples motivate us to optimize and maintain performance. In particular, fighting regressions became a priority because we’ve seen countless times that months of hard-earned optimizations can easily be wiped out by a regression. Oftentimes, the regression came from a single line of code, and investing a little bit of time to change the implementation brought us back to baseline. In this three-part series, we will focus on the systems we have in place for holding the line on web performance. This first part provides a brief overview of the Performance Program at Pinterest.

The Performance Program at Pinterest

Performance Metrics

While the Performance team logs and monitors many performance metrics, we set goals and communicate on a set of custom metrics we call Pinner Wait Time (PWT). Each critical surface has a PWT metric defined to track the most important elements on the page. For example, on a Pin closeup page we track how long it takes to load the large hero image as well as the button to save a Pin to your board.

Figure 1: The hero image and save button are tracked as part of the PWT metric for the Pin closeup page
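PWT is an in-house metric, so the exact instrumentation is not shown here, but a minimal sketch of the general idea using the browser Performance API might look like the following. The mark names, surface name, and logging endpoint are hypothetical; the point is that each critical element reports when it has rendered, and the slowest one defines the surface’s PWT.

```typescript
// Illustrative sketch only: the element mark names and '/log/pwt' endpoint
// below are hypothetical, not Pinterest's actual instrumentation.
const criticalMarks = new Set<string>(['hero-image-visible', 'save-button-visible']);
const seenMarks = new Set<string>();

function markCriticalElement(markName: string): void {
  if (!criticalMarks.has(markName) || seenMarks.has(markName)) return;
  seenMarks.add(markName);
  performance.mark(markName);

  // Once every critical element has been marked, PWT is the latest mark
  // relative to navigation start.
  if (seenMarks.size === criticalMarks.size) {
    const pwt = Math.max(
      ...Array.from(seenMarks).map(
        (name) => performance.getEntriesByName(name, 'mark')[0].startTime
      )
    );
    // Report to the logging pipeline (endpoint is a placeholder).
    navigator.sendBeacon('/log/pwt', JSON.stringify({ surface: 'pin_closeup', pwt }));
  }
}

// Example: call from the hero image's load handler.
// <img src={pin.imageUrl} onLoad={() => markCriticalElement('hero-image-visible')} />
```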

On web, the Core Web Vitals joined our PWT metrics as topline performance metrics we guard and optimize. They’ve been the target of heavy optimization efforts and have been added to our monitoring dashboards, A/B experiments framework, and per-diff performance integration tests for regression protection.
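For context, Core Web Vitals can be collected in the field with Google’s web-vitals library. The sketch below shows one common reporting pattern; the `/log/web-vitals` endpoint and payload shape are assumptions, since Pinterest’s actual logging pipeline is in-house.

```typescript
// A minimal sketch of field collection for Core Web Vitals.
// The logging endpoint and payload shape are assumptions for illustration.
import { onLCP, onCLS, onINP, type Metric } from 'web-vitals';

function reportVital(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,   // 'LCP' | 'CLS' | 'INP'
    value: metric.value, // milliseconds for LCP/INP, unitless for CLS
    id: metric.id,       // unique per page load, useful for deduplication
    page: window.location.pathname,
  });
  // sendBeacon survives page unloads, which matters for CLS and INP.
  navigator.sendBeacon('/log/web-vitals', body);
}

onLCP(reportVital);
onCLS(reportVital);
onINP(reportVital);
```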

Ownership

The Performance team owns in-house performance logging libraries, data workflows, monitoring dashboards, and performance tooling, but the metrics themselves are owned by surface-owning teams. Surface owner responsibilities include monitoring the health of the metric, making tradeoff decisions, and responding to regressions. The Performance team syncs regularly with each surface-owning team to provide as much support as possible in these responsibilities.

Performance Budgets

To hold the line on performance, surface owners are accountable for maintaining the baseline for their performance metrics. This means that at the end of the year, their performance metric should be equal to or lower than where it was at the beginning of the year. While the Performance team helps ensure that any performance regression is investigated and mitigated as much as possible, surface owners are ultimately the drivers of tradeoff decisions between key business metrics and performance metrics. These tradeoff decisions are often made when shipping A/B experiments, but they can also be made before or after changes are released (e.g. when a PR adds a necessary module causing a JS bundle size regression, or there is an increase in the distribution of a certain type of content that is causing unavoidable regressions in production).
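As a rough illustration of the bundle size case, a per-diff budget check can be as simple as comparing built bundle sizes against a checked-in budget file and failing the build on any overage. The file names and budget format below are invented for illustration and are not Pinterest’s actual tooling.

```typescript
// Hypothetical per-diff budget check, sketched to show the idea of catching
// JS bundle size regressions before they ship. Paths and format are invented.
import { readFileSync, statSync } from 'node:fs';

type Budget = Record<string, number>; // bundle path -> max size in bytes

const budgets: Budget = JSON.parse(readFileSync('perf-budgets.json', 'utf8'));
let failed = false;

for (const [bundlePath, maxBytes] of Object.entries(budgets)) {
  const actualBytes = statSync(bundlePath).size;
  if (actualBytes > maxBytes) {
    console.error(
      `Budget exceeded: ${bundlePath} is ${actualBytes} bytes (budget ${maxBytes})`
    );
    failed = true;
  }
}

process.exit(failed ? 1 : 0);
```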

Much more could be said about the Performance Program at Pinterest, as the processes behind the program have matured over the years and the value of a strong performance culture cannot be overstated, but this article series will focus on the tooling and processes we have specifically for regression detection.

In the next article, we will discuss the real-time, real-user monitoring we have in place for web performance metrics. These monitoring graphs have been pivotal for alerting on regressions and performing root cause analysis.

To learn more about engineering at Pinterest, check out the rest of our Engineering Blog and visit our Pinterest Labs site. To explore and apply to open roles, visit our Careers page.
