
Company: Pinterest

Pinterest (Chinese name: 缤趣) is a web and mobile application that lets users treat the platform as a visual discovery tool for personal ideas and creative projects. It is also regarded as an image-sharing social network: users can add and organize image collections by topic and share them with friends. Its site uses the waterfall (Pinterest-style) layout.

Pinterest is operated by Cold Brew Labs, a team based in Palo Alto, California; it was founded by Ben Silbermann, Paul Sciarra, and Evan Sharp, and officially launched in 2010. The name "Pinterest" combines "pin" and "interest". Among social networks, its traffic ranks behind only Facebook, YouTube, VKontakte, and Twitter.

Simplify Pinterest Conversion Tracking with NPM Packages

Pinterest conversions are critical for businesses looking to optimize their campaigns and track the performance of their advertisements. By leveraging Pinterest’s Conversion API and Conversion Tag, advertisers can gain deeper insights into user behavior and fine-tune their marketing efforts.

To make this process seamless for developers, we’ve created two NPM packages: pinterest-conversions-server and pinterest-conversions-client. These packages simplify the integration of Pinterest’s Conversion API and Conversion Tag, offering robust solutions for server-side and client-side tracking.
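As a sketch of what server-side conversion tracking involves, the snippet below assembles an event payload with a SHA-256-hashed email, the normalization conversion APIs typically require before user data leaves the server. The field names mirror Pinterest's v5 events endpoint but are illustrative; check the packages' documentation for the actual interfaces.

```python
import hashlib
import time

def hash_identifier(value: str) -> str:
    """Normalize and SHA-256 hash a user identifier before sending it."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def build_conversion_event(event_name: str, email: str) -> dict:
    """Assemble an illustrative server-side conversion event payload."""
    return {
        "event_name": event_name,          # e.g. "checkout"
        "action_source": "web",
        "event_time": int(time.time()),
        "user_data": {"em": [hash_identifier(email)]},
    }

event = build_conversion_event("checkout", " User@Example.com ")
```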

How Pinterest Leverages Honeycomb to Enhance CI Observability and Improve CI Build Stability

At Pinterest, our mobile infrastructure is core to delivering a high-quality experience for our users. In this blog, I’ll showcase how the Pinterest Mobile Builds team is leveraging Honeycomb (starting in 2021) to enhance observability and performance in our mobile builds and continuous integration (CI) workflows.
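The core of CI observability is emitting one structured, timed event per build step, which a backend like Honeycomb can then slice by step name, status, and duration. The sketch below shows that pattern with a plain list standing in for a client's send queue; it is not the team's actual instrumentation.

```python
import time
from contextlib import contextmanager

events = []  # stand-in for an observability client's send queue

@contextmanager
def traced_step(name: str, **fields):
    """Time a CI step and record one structured event for it."""
    start = time.monotonic()
    status = "success"
    try:
        yield
    except Exception:
        status = "failure"
        raise
    finally:
        events.append({
            "name": name,
            "duration_ms": (time.monotonic() - start) * 1000,
            "status": status,
            **fields,
        })

with traced_step("compile", target="Pinterest-iOS"):
    pass  # invoke the real build step here
```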

Resource Management with Apache YuniKorn™ for Apache Spark™ on AWS EKS at Pinterest

Monarch, Pinterest’s Batch Processing Platform, was initially designed to support Pinterest’s ever-growing number of Apache Spark and MapReduce workloads at scale. At Monarch’s inception in 2016, the dominant batch processing technology for building such a platform was Apache Hadoop YARN. Now, eight years later, we have decided to move off of Apache Hadoop and onto our next-generation Kubernetes (K8s) based platform.

Ray Batch Inference at Pinterest (Part 3)

Offline batch inference involves operating over a large dataset and passing the data in batches to an ML model, which generates a result for each batch. Offline batch inference jobs generally consist of a series of steps: data loading, preprocessing, inference, postprocessing, and result writing. These jobs can be both I/O and compute intensive.
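The pipeline steps listed above can be sketched in a few lines, with plain Python generators standing in for Ray Data and a stub in place of a real model:

```python
# Minimal sketch of an offline batch inference pipeline:
# data loading -> preprocessing -> inference -> postprocessing -> writing.
def load_batches(records, batch_size):
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

def preprocess(batch):
    return [x.lower() for x in batch]

def model(batch):                 # stand-in for an ML model forward pass
    return [len(x) for x in batch]

def postprocess(predictions):
    return [{"length": p} for p in predictions]

results = []                      # stand-in for writing results to storage
for batch in load_batches(["Pin", "Board", "Idea"], batch_size=2):
    results.extend(postprocess(model(preprocess(batch))))
```

In Ray, each of these steps can be scaled independently, which is what makes the framework attractive for jobs that are both I/O and compute intensive.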

Structured DataStore (SDS): Multi-model Data Management With a Unified Serving Stack

In this blog, we will show how the team transitioned from supporting multiple query serving stacks, each providing a different data model, to a brand-new data serving platform with a unified multi-model query serving stack called Structured DataStore (SDS).
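The unified-serving idea can be sketched as one front door that routes each query to a per-model backend. All class and method names below are invented for illustration; the post does not describe SDS's actual interfaces.

```python
class UnifiedQueryRouter:
    """Hypothetical sketch: a single serving entry point that dispatches
    queries to whichever engine backs the requested data model."""
    def __init__(self):
        self._backends = {}

    def register(self, data_model: str, backend):
        """Register a query engine for a data model (e.g. "kv", "graph")."""
        self._backends[data_model] = backend

    def query(self, data_model: str, request: dict):
        backend = self._backends.get(data_model)
        if backend is None:
            raise KeyError(f"no backend registered for {data_model!r}")
        return backend(request)

router = UnifiedQueryRouter()
router.register("kv", lambda req: {"value": req["key"].upper()})
result = router.query("kv", {"key": "pin:123"})
```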

Feature Caching for Recommender Systems w/ Cachelib

At Pinterest, we operate a large-scale online machine learning inference system, where feature caching plays a critical role in achieving optimal efficiency. In this blog post, we will discuss our decision to adopt the Cachelib project by Meta Open Source (“Cachelib”) and how we have built a high-throughput, flexible feature cache by leveraging and expanding upon Cachelib’s capabilities.
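To illustrate the role such a cache plays in front of a feature store, here is a toy LRU cache with TTL-based expiry; Pinterest's actual cache is built on Cachelib in C++ and is far more capable.

```python
import time
from collections import OrderedDict

class FeatureCache:
    """Toy LRU feature cache with per-entry TTL, for illustration only."""
    def __init__(self, capacity: int, ttl_seconds: float):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self._data = OrderedDict()  # key -> (expiry, features)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None or entry[0] < time.monotonic():
            self._data.pop(key, None)
            return None              # miss: caller fetches from feature store
        self._data.move_to_end(key)  # refresh LRU position
        return entry[1]

    def put(self, key, features):
        self._data[key] = (time.monotonic() + self.ttl, features)
        self._data.move_to_end(key)
        while len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

cache = FeatureCache(capacity=2, ttl_seconds=60.0)
cache.put("pin:1", [0.1, 0.2])
```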

Pinterest Tiered Storage for Apache Kafka®️: A Broker-Decoupled Approach

When it comes to PubSub solutions, few have achieved higher degrees of ubiquity, community support, and adoption than Apache Kafka®️, which has become the industry standard for data transportation at large scale. At Pinterest, petabytes of data are transported through PubSub pipelines every day, powering foundational systems such as AI training, content safety and relevance, and real-time ad bidding, bringing inspiration to hundreds of millions of Pinners worldwide. Given the continuous growth in PubSub-dependent use cases and organic data volume, it became paramount that PubSub storage must be scaled to meet growing storage demands while lowering the per-unit cost of storage.

Improving Efficiency Of Goku Time Series Database at Pinterest (Part — 3)

At Pinterest, one of the pillars of the observability stack gives internal engineering teams (our users) the ability to monitor their services using metrics data and to set up alerting on it. Goku is our in-house time series database that provides cost-efficient, low-latency storage for metrics data.

Improving ABR Video Performance at Pinterest

Video content has emerged as a favored format for people to discover inspirations at Pinterest. In this blog post, we will outline recent enhancements made to the Adaptive Bitrate (ABR) video performance, as well as its positive impact on user engagement.

Redesigning Pinterest’s Ad Serving Systems with Zero Downtime (part 2)

In the first part of this post, we introduced the motivation for rewriting the ads serving systems and the desired final state, and outlined our design principles and the high-level design decisions on how to get there. In part 2, we concentrate on the detailed design, implementation, and the validation process leading up to the final launch.

NEP: Notification System and Relevance

Notifications (e.g. email, push, in-app messages) play an important role in driving user retention. Our previous system operated on a daily budget allocation model, predicting daily budgets for individual users; this constrained the flexibility and responsiveness required for dynamic user engagement and content changes. Notification Event Processor (NEP) is a next-generation notification system developed at Pinterest that offers the flexibility to process events and decide whether to send notifications in near real-time. By harnessing machine learning, NEP determines the various factors involved in sending a notification, such as content selection, recipient targeting, channel preferences, and optimal timing. The implementation of this system resulted in significant improvements in email and push engagement metrics and weekly active user (WAU) growth.
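The per-event decision the summary describes can be caricatured as: score candidate (channel, content) pairs with a model and send only if the best score clears a threshold. The scoring function, feature names, and threshold below are all invented for illustration.

```python
SEND_THRESHOLD = 0.5  # hypothetical suppression threshold

def score(user, channel, content):
    """Stand-in for an ML model scoring one (channel, content) pair."""
    return user["affinity"].get(channel, 0.0) * content["relevance"]

def decide(user, event, channels, contents):
    """Pick the best pair for this triggering event, or suppress entirely."""
    best = max(
        ((score(user, ch, c), ch, c) for ch in channels for c in contents),
        key=lambda t: t[0],
    )
    if best[0] < SEND_THRESHOLD:
        return None                # suppress: nothing worth sending
    return {"channel": best[1], "content": best[2]["id"]}

user = {"affinity": {"push": 0.9, "email": 0.4}}
decision = decide(user, "new_pin", ["push", "email"],
                  [{"id": "c1", "relevance": 0.8},
                   {"id": "c2", "relevance": 0.3}])
```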

Delivering Faster Analytics at Pinterest

Pinterest is a visual discovery platform where people can find ideas like recipes, home and style inspiration, and much more. The platform offers its partners shopping capabilities as well as a significant advertising opportunity with 500+ million monthly active users. Advertisers can purchase ads directly on Pinterest or through partnerships with advertising agencies. Due to our huge scale, advertisers get an opportunity to learn about their Pins and their interaction with Pinterest users from the analytical data. This gives advertisers an opportunity to make decisions which will allow their ads to perform better on our platform.

At Pinterest, real-time insights play a critical role in empowering our advertisers and team members to make data-driven decisions. These decisions impact campaign performance, our experiments’ performance, and our policies such as rules to catch spam. We have been using Druid to store and provide these real-time insights, but as our scale and requirements continue to change, we have been evaluating different storage options. In the end we decided to migrate this data to StarRocks.

In this blog post, we’ll discuss and share our experience of launching our Analytics app on StarRocks. In the past, we have published our thoughts on using Druid and the benefits we have gotten from it. This post highlights the need for a new system as our scale and requirements have changed over time.

TiDB Adoption at Pinterest

HBase has been a foundational storage system at Pinterest since its inception in 2013, when it was deployed at massive scale and supported numerous use cases. However, it increasingly struggled to keep up with evolving business needs, for the reasons covered in the previous blog. As a result, two years ago we started searching for a next-generation storage technology that could replace HBase for many years to come and enable business-critical use cases to scale beyond the existing storage limitations.

Building Pinterest Canvas, a text-to-image foundation model

Pinterest Canvas is a text-to-image model for enhancing existing images and products on the Pinterest platform. It was built by training a foundation text-to-image model and then fine-tuning it to generate realistic backgrounds for product visualization. The model is trained in two stages: the first trains it to fill in missing image regions, and the second focuses on product visualization tasks. The model also supports personalized results by attaching style context to guide the generation process. Future improvements to Pinterest Canvas include upgrading the underlying diffusion backbone to further raise generation quality and incorporating user feedback. The team is also researching how to rethink the model's conditioning, and is exploring Pinterest-optimized visual embeddings to improve the model's text-conditioning components.
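The stage-one objective above amounts to in-painting: the model generates pixels only for the masked region, and the final image composites generated pixels with the untouched originals (e.g. the product itself). A minimal sketch of that compositing step, on flattened pixel lists:

```python
def composite(original, generated, mask):
    """mask[i] == 1 marks a missing pixel to take from `generated`;
    mask[i] == 0 keeps the original pixel (e.g. the product region)."""
    return [g if m else o for o, g, m in zip(original, generated, mask)]

original  = [10, 20, 30, 40]   # flattened pixels; the product occupies 30, 40
generated = [99, 98, 97, 96]   # model output for the whole canvas
mask      = [1, 1, 0, 0]       # in-paint only the background region
image = composite(original, generated, mask)
```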

Web Performance Regression Detection (Part 3 of 3)

The Pinterest website proactively detects and prevents performance regressions before release through A/B experiments and JS bundle size checks. When an experiment shows a clear regression in performance metrics, alerts and tickets are triggered to track and resolve the issue. In addition, JS bundle size checks run in the CI pipeline to prevent regressions caused by bundle size increases. Any significant bundle size change is reported in a PR comment, and alerts notify the relevant teams. Developers can resolve regressions by following the prompts and can investigate further with the webpack-bundle-analyzer tool.
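The bundle-size gate described above boils down to comparing each bundle against a recorded baseline and flagging growth beyond a threshold. A minimal sketch, with an illustrative 2% threshold and made-up bundle names:

```python
THRESHOLD_PCT = 2.0  # flag increases larger than 2% (illustrative)

def check_bundles(baseline: dict, current: dict):
    """Return (bundle, growth %) pairs that exceed the threshold."""
    violations = []
    for name, new_size in current.items():
        old_size = baseline.get(name)
        if old_size is None:
            continue               # brand-new bundle: report separately
        growth = (new_size - old_size) / old_size * 100
        if growth > THRESHOLD_PCT:
            violations.append((name, round(growth, 1)))
    return violations

violations = check_bundles(
    baseline={"main.js": 100_000, "vendor.js": 400_000},
    current={"main.js": 105_000, "vendor.js": 401_000},
)
```

In CI, a non-empty result would fail the check and be echoed into the PR comment.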

Ray Infrastructure at Pinterest

In building its Ray infrastructure, Pinterest adopted an intermediate-layer solution consisting of an API Server, a Ray Cluster / Job Controller, and a MySQL database. This layer simplifies users' interaction with Kubernetes and provides real-time monitoring of Ray cluster and Ray job status. Pinterest also persists Ray logs to AWS S3 and surfaces them in the Ray Cluster UI, and has developed a Statsboard tool to display Ray application performance metrics and feature-specific dashboards. Pinterest offers three options for developing Ray applications, supporting Dev server, Jupyter, and Spinner workflows through an ML RESTful API, along with two testing options (unit tests and integration tests) and guidance on networking and security.

Copyright © 2011-2024 iteam. Current version is 2.139.0. UTC+08:00, 2024-12-25 14:47
浙ICP备14020137号-1