
Company: Pinterest

Pinterest (Chinese name: 缤趣) is a web and mobile application that lets users treat the platform as a visual discovery tool for personal creative ideas and project work. It is also often viewed as an image-sharing social network: users can add and organize image collections by topic and share them with friends. The site uses the waterfall-style "Pinterest-style" layout.

Pinterest is operated by Cold Brew Labs, a team based in Palo Alto, California; its founders are Ben Silbermann, Paul Sciarra, and Evan Sharp, and it officially launched in 2010. The name "Pinterest" combines the words "pin" and "interest". Among social networking sites, its traffic ranks behind only Facebook, YouTube, VKontakte, and Twitter.

Efficient Resource Management at Pinterest’s Batch Processing Platform

Pinterest's batch processing platform, Monarch, runs most of the company's batch processing workflows. At the scale shown in Table 1, it is important to manage platform resources so that we provide quality of service (QoS) while staying cost efficient. This article shares how we do that, along with future work.

Ensuring High Availability of Ads Realtime Streaming Services

The Pinterest ads business has grown severalfold over the past couple of years, in terms of both advertisers and users. As we scale our revenue, it becomes imperative to:

  • Distribute advertiser spend smoothly over the course of the day
  • Avoid over-spending beyond the advertiser’s daily / lifetime budget
  • Maximize advertiser value

Pinterest Home Feed Unified Lightweight Scoring: A Two-tower Approach

Pinterest is a place where users (Pinners) can save and discover content from both web and mobile platforms, and where, increasingly, Creators can publish native content directly to Pinterest. Our corpus holds billions of pieces of content (Pins), from which we serve personalized recommendations that inspire Pinners to create a life they love. One of the key, and most complex, surfaces at Pinterest is the home feed, where Pinners see a personalized feed based on their engagement and interests. In this blog, we discuss how we unified our lightweight scoring layer across the various candidate generators that power home feed recommendations.
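As a rough illustration of the two-tower idea named in the title, here is a minimal sketch: one tower embeds the Pinner, another embeds the Pin, and a single dot product produces the lightweight score. The layer sizes, feature dimensions, and normalization below are assumptions made for illustration, not Pinterest's production architecture.

```python
# Minimal two-tower scorer sketch (illustrative only).
import torch
import torch.nn as nn

class Tower(nn.Module):
    def __init__(self, input_dim: int, embed_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x):
        # L2-normalize so the dot product behaves like cosine similarity.
        return nn.functional.normalize(self.net(x), dim=-1)

class TwoTowerScorer(nn.Module):
    def __init__(self, user_dim: int, pin_dim: int, embed_dim: int = 64):
        super().__init__()
        self.user_tower = Tower(user_dim, embed_dim)
        self.pin_tower = Tower(pin_dim, embed_dim)

    def forward(self, user_feats, pin_feats):
        # Lightweight scoring: one dot product per (user, Pin) pair.
        u = self.user_tower(user_feats)   # (batch, embed_dim)
        p = self.pin_tower(pin_feats)     # (batch, embed_dim)
        return (u * p).sum(dim=-1)        # (batch,)

scorer = TwoTowerScorer(user_dim=128, pin_dim=96)
scores = scorer(torch.randn(4, 128), torch.randn(4, 96))
```

The practical appeal of this shape is that the Pin-side embeddings can be precomputed offline, leaving only the user tower and a dot product to run at scoring time.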

Pinterest’s Analytics as a Platform on Druid (Part 3 of 3)

In this blog post series, we'll discuss Pinterest's Analytics as a Platform on Druid and share some learnings on using Druid. This is the third post in the series and covers learnings on optimizing Druid for real-time use cases.

Pinterest’s Analytics as a Platform on Druid (Part 2 of 3)

In this blog post series, we'll discuss Pinterest's Analytics as a Platform on Druid and share some learnings on using Druid. This is the second post in the series and covers learnings on optimizing Druid for batch use cases.

Pinterest’s Analytics as a Platform on Druid (Part 1 of 3)

In this blog post series, we'll discuss Pinterest's Analytics as a Platform on Druid and share some learnings on using Druid. This first post covers a short history of our switch to Druid, our system architecture with Druid, and learnings on optimizing host types for mmap.

Improving efficiency and reducing runtime using S3 read optimization

We describe a novel approach we took to improve S3 read throughput and how we used it to boost the efficiency of our production jobs. The results have been very encouraging: a standalone benchmark showed a 12x improvement in S3 read throughput (from 21 MB/s to 269 MB/s). The increased throughput allowed our production jobs to finish sooner. As a result, we saw a 22% reduction in vcore-hours, a 23% reduction in memory-hours, and a similar reduction in the runtime of a typical production job. Although we are happy with the results, we are exploring additional enhancements, which are briefly described at the end of this blog.
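The teaser does not spell out the mechanics, but a common way to raise S3 read throughput for large objects is to prefetch them with parallel ranged GETs. Below is a minimal sketch of that general technique, assuming boto3; the bucket, key, chunk size, and thread count are placeholders, and this is not Pinterest's implementation.

```python
# Sketch: read one S3 object as parallel ranged GETs and reassemble it.
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "example-bucket", "example/object.parquet"   # placeholders
CHUNK = 8 * 1024 * 1024                                    # 8 MB per ranged GET

def fetch_range(start: int, end: int) -> bytes:
    resp = s3.get_object(Bucket=BUCKET, Key=KEY, Range=f"bytes={start}-{end}")
    return resp["Body"].read()

size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]
ranges = [(off, min(off + CHUNK, size) - 1) for off in range(0, size, CHUNK)]

# Issue the ranged GETs concurrently, then join the chunks in order.
with ThreadPoolExecutor(max_workers=8) as pool:
    data = b"".join(pool.map(lambda r: fetch_range(*r), ranges))
```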

How we scaled the size of Pinterest’s ad corpus by 60x

In May 2020, Pinterest launched a partnership with Shopify that allowed merchants to easily upload their catalogs to the Pinterest platform and create Product Pins and shopping ads. This vastly increased the number of shopping ads in our corpus available for our recommendation engine to choose from when serving an ad on Pinterest. To continue supporting this rapid growth, we leveraged a key-value (KV) store and some memory optimizations in Go to scale the size of our ad corpus by 60x.

Fighting Spam using Clustering and Automated Rule Creation

One of our biggest priorities at Pinterest is keeping Pinners safe, and that includes protecting them from spam. The Trust & Safety team’s goal is not only to catch spam, but to remove it as quickly as possible to minimize Pinner impact.

The goal of spammers is to make money, and the best way to do this is to spam at scale. It’s a numbers game: one million spam emails are much more effective than one spam email. In order to remove spam quickly, we look at common trends in spam attacks to identify suspect behavior.

To achieve the scale required to be effective, spammers must automate their actions, and each of these “attacks” can be thought of as a cluster. Each event within the attack cluster may share some common features, but different clusters will have a different set of common features.

For example, during an attack where a large number of Pins are created, a spammer might point all Pins to the same domain. While the domain may change between attacks, spammers are still trying to direct traffic to the same spam site.
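As a toy illustration of that idea, the sketch below groups newly created Pins by their destination domain and flags unusually large groups as candidate attack clusters. The Pin structure and the size threshold are assumptions made for this example, not the production detection logic.

```python
# Sketch: surface candidate spam clusters by grouping Pins on a shared feature
# (here, the destination domain of the Pin's link).
from collections import defaultdict
from urllib.parse import urlparse

def candidate_clusters(pins, min_size=500):
    """Group pins by link domain; unusually large groups are candidate clusters."""
    by_domain = defaultdict(list)
    for pin in pins:
        domain = urlparse(pin["link"]).netloc.lower()
        by_domain[domain].append(pin["id"])
    return {d: ids for d, ids in by_domain.items() if len(ids) >= min_size}
```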

One of our spam mitigation tactics is our rule engine, Guardian, which helps to identify common features in spam attacks.

The machine learning behind delivering relevant ads

Pinterest is where people go to plan and shop, which makes ideas and ads from brands helpful in taking Pinners from inspiration to action. Our goal is to ensure ads remain additive, not intrusive, on Pinterest. Because of the unique and powerful first-party signals on the platform, advertisers can reach Pinners based on their interests, intent, and engagement.

To help deliver the right ads to the right Pinners among an audience of hundreds of millions of people, we offer advertisers relevance features including Actalike (AAL) audiences, known elsewhere in the industry as Lookalike audiences. AAL audiences help advertisers reach potentially new users via audience expansion.

In this blog, we'll focus on the machine learning component of relevant ads delivery and explain how we achieve high-quality audience expansion through universal user embedding representations combined with per-advertiser classifier models. We demonstrate the power of this combined approach by showing better performance than both regression-based and similarity-based approaches.
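To make the shape of that approach concrete, here is a minimal sketch: every user gets a shared (universal) embedding, and each advertiser gets its own small classifier trained on its seed audience, whose scores drive the expansion. The use of logistic regression, the negative-sampling scheme, and the top-k cutoff are assumptions for illustration, not the specific models used at Pinterest.

```python
# Sketch: per-advertiser classifier on top of universal user embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_actalike_model(seed_embeddings, background_embeddings):
    """Fit a per-advertiser classifier: seed users are positives,
    a random sample of other users serves as negatives."""
    X = np.vstack([seed_embeddings, background_embeddings])
    y = np.concatenate([np.ones(len(seed_embeddings)),
                        np.zeros(len(background_embeddings))])
    return LogisticRegression(max_iter=1000).fit(X, y)

def expand_audience(model, candidate_embeddings, top_k=100_000):
    """Score all candidate users and keep the highest-affinity ones."""
    scores = model.predict_proba(candidate_embeddings)[:, 1]
    return np.argsort(-scores)[:top_k]
```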

Building scalable near-real time indexing on HBase

HBase is one of the most critical storage backends at Pinterest, powering many of our online traffic storage services such as Zen (graph database) and UMS (wide column data store). Although HBase has many advantages, such as strong row-level consistency under high request volume, flexible schemas, low-latency data access, and Hadoop integration, it doesn't natively support advanced indexing and querying. Secondary indexing is one of the features our clients request most, but supporting it directly in HBase is quite challenging: maintaining separate index tables as the number of indexes grows does not scale in terms of query efficiency or code complexity. This motivated us to build a storage solution called Ixia, which provides near-real-time secondary indexing on HBase. The design is largely inspired by Lily HBase Indexer.
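The read path of such an "index then fetch" design can be sketched roughly as follows: query the secondary index for matching row keys, then fetch the authoritative rows from HBase. The `search_index` client below is hypothetical, and happybase is used only to keep the example self-contained; none of this is Ixia's actual code.

```python
# Sketch of an index-then-fetch read path over HBase.
import happybase

def indexed_query(search_index, index_field, value, hbase_host, table_name):
    # 1) Ask the secondary index for matching HBase row keys.
    row_keys = search_index.query(field=index_field, value=value)  # hypothetical API

    # 2) Fetch the authoritative rows from HBase in one multi-get.
    connection = happybase.Connection(hbase_host)
    try:
        table = connection.table(table_name)
        return dict(table.rows(row_keys))
    finally:
        connection.close()
```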

Unified Flink Source at Pinterest: Streaming Data Processing

To best serve Pinners, creators, and advertisers, Pinterest leverages Flink as its stream processing engine. Flink is a data processing engine for stateful computation over data streams. It provides rich streaming APIs, exactly-once support, and state checkpointing, which are essential for building stable and scalable streaming applications. Flink is now widely used for mission-critical use cases at companies like Alibaba, Netflix, and Uber.

Interactive Querying with Apache Spark SQL at Pinterest

To achieve our mission of bringing everyone inspiration through our visual discovery engine, Pinterest relies heavily on making data-driven decisions to improve the Pinner experience for over 475 million monthly active users. Reliable, fast, and scalable interactive querying is essential to making those data-driven decisions possible. In the past, we published how Presto at Pinterest serves this function. Here, we'll share how we built a scalable, reliable, and efficient interactive querying platform that processes hundreds of petabytes of data daily with Apache Spark SQL. Through a detailed discussion of our architecture choices, the challenges along the way, and our solutions to them, we show how we made interactive querying with Spark SQL a success.
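For readers unfamiliar with Spark SQL, the kind of interactive query such a platform serves looks roughly like the snippet below; the database, table, and columns are made up for illustration.

```python
# Minimal PySpark example of an interactive Spark SQL query.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("interactive-query").getOrCreate()

df = spark.sql("""
    SELECT dt, COUNT(*) AS events
    FROM events_db.pin_events          -- hypothetical table
    WHERE dt >= '2021-01-01'
    GROUP BY dt
    ORDER BY dt
""")
df.show()
```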

Improving data processing efficiency using partial deserialization of Thrift

At Pinterest we’ve worked to greatly improve data processing efficiency. One quote that resonates with our unique approach is from writer Antoine de Saint-Exupéry: “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”

We process petabytes of Thrift-encoded data at Pinterest, and most jobs that access this data need only part of it. To meet that need, we devised a way to efficiently deserialize only the desired subsets of Thrift structures in each job. Our solution enabled us to significantly decrease our data processing resource usage: roughly a 20% reduction in vcore usage, a 27% reduction in memory usage, and a 36% reduction in intermediate data (mapper output).
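The core idea of partial deserialization can be sketched with the standard Thrift Python library: walk the encoded struct field by field, decode only the wanted field ids, and skip everything else on the wire without materializing objects. The field ids and types below are assumptions for illustration; Pinterest's actual solution is integrated into its data processing jobs rather than written like this.

```python
# Sketch: partially deserialize a Thrift-encoded struct, decoding only the
# wanted fields and skipping the rest.
from thrift.Thrift import TType
from thrift.protocol import TBinaryProtocol
from thrift.transport import TTransport

def read_wanted_fields(raw_bytes, wanted_field_ids):
    """Return {field_id: value} for wanted i64/string fields, skipping the rest."""
    transport = TTransport.TMemoryBuffer(raw_bytes)
    proto = TBinaryProtocol.TBinaryProtocol(transport)
    result = {}

    proto.readStructBegin()
    while True:
        _, ftype, fid = proto.readFieldBegin()
        if ftype == TType.STOP:
            break
        if fid in wanted_field_ids and ftype == TType.I64:
            result[fid] = proto.readI64()
        elif fid in wanted_field_ids and ftype == TType.STRING:
            result[fid] = proto.readString()
        else:
            proto.skip(ftype)      # cheap: unwanted fields are never materialized
        proto.readFieldEnd()
    proto.readStructEnd()
    return result
```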

What we learned from an iOS app OOMs incident

In early 2020, we began to see a marked rise in the out-of-memory (OOM) crash rate of Pinterest's iOS app. The incident drove our crash-free user rate (CFUR) down from roughly 99% to 96%, a steep drop. What happened?

We improved many systems along the way, and those lessons could fill a blog post of their own. The main focus of this post is to share with the broader iOS community what we learned from this iOS-specific problem.

Building a Label-Based Enforcement Pipeline for Trust & Safety

As Pinterest grows, with an ever-increasing number of users and businesses, providing a safe and trustworthy experience is one of our top priorities. Every day the platform serves billions of Pins and boards to inspire Pinners. With that many Pins and that much activity, delivering timely and consistent content-safety decisions while letting high-quality content spread freely can be a challenge. In this post, we take a technical deep dive into the problems we faced and describe how we built a label-based enforcement pipeline to solve them and fight abuse at scale.
