Building Lyft’s In-App Messaging Platform

We believe that relevant messaging is at the heart of our mission to improve people’s lives with the world’s best transportation.

Our Comms Platform team’s mission is to be an extension of the Lyft product surface, delivering reliably great experiences and customer value for Lyft users. All messaging via Comms Platform is delivered to the right user at the right time with the right context to maximize engagement while preventing bad user experiences.

In this blog post, we will discuss our journey of building Lyft’s in-app messaging platform. We started with one use case, created an impactful product, and then converted it into a platform.

Do we need another messaging channel?

Although we already utilized our ESPN messaging channels (Email, SMS, and Push Notifications), we noticed that while these were effective at gaining riders' attention, they did not serve our active riders well or provide them with relevant messaging while they were interacting with the app.

Lyft’s business has a large operational component to it. Local regulation and legislation have an impact on riders’ experiences. For example, there were changes in the curbside pickup rules at LAX, and riders could not be picked up at the curbside unless they were traveling via Lux mode. This information was new to many riders from other states who were used to being picked up at the curbsides of their home airports. It was crucial to provide this information as soon as we detected a rider was at LAX and looking for a ride. While we could use our ESPN channels to notify users, these channels could also distract them from their primary task of requesting a ride (e.g., by having to open an email or read a push notification), or there might be a delay in delivering these messages. Thus, there was no guarantee that the user would see this message before requesting a ride, resulting in a bad experience. There were many other similar use cases (such as promotional or upsell opportunities) that led us to believe that we needed a more contextually relevant messaging channel to provide critical information to our riders.

Picking the first use case: Banners

The rider app already had a few banners implemented, but they were not server-driven (every change required a client release). There were multiple placements where banners could be shown, including the rider app’s home screen. Hence, we picked in-app banners as our first use case to test out the in-app messaging platform.

Design principles

  • Provide relevant messages: We wanted to maintain the sanctity of the messaging space. If there was a message in that space, we wanted to ensure that our riders read it. We wanted to avoid user apathy towards these surfaces. So, our first principle was to make sure the messages shown to our riders were relevant.
  • Influence, not intervene: The in-app messaging platform was in place to assist riders in making the right decisions given their context and location. We didn’t want to interfere with their flow.
  • Riders, not rides: The last but most important principle was to ensure we always had the riders’ best interests at heart. We were not looking for a short-term gain in rides. We were looking for long-term rider retention.

The stage was set. We had identified the problem and it was time to get to work. We started with the user experience — what should be the structure of the message? What should be its placement?

We decided to start with a simple messaging structure and picked what we thought was our most impactful placement — our home screen. The home banner would be one of the first things our riders would see when they opened the app! Picking the home screen would allow us to test many different types of banners. If we were not successful with the home screen placement, it was unlikely that we would see success in other placements.

Anatomy of a banner on the home screen

Basic scaffolding for the MVP

We spoke to a few product and marketing teams at Lyft and came up with the first sets of banners we wanted to launch. We worked backwards to figure out the engineering requirements for the core product. At a high level, we needed:

  1. User targeting: We needed an easy way for other teams to declare which riders were eligible to see their banners (based on rider state, location, or marketplace context). We integrated with our in-house Customer Data Platform to power the user targeting.
  2. Rate limiting: We wanted to ensure that the same banner was not shown to our riders indefinitely, as this would lead to user apathy. We started with basic rate-limiting techniques like lifetime rate limiting (capping the total number of times a banner could ever be shown to a given user).
  3. Experimentation: All our customer teams wanted to understand the impact of their banners. We (the Comms Platform team) wanted to understand the impact of all banners. So, integrating with our in-house experimentation platform was crucial for us.
  4. Banner ranking: As we spoke with other teams, it was clear that there would be banner conflicts: the same user would be eligible for multiple banners. We needed a way to resolve these conflicts. We opted for a simple manual ranking where we maintained a ranked list of all banners. The principle we enforced was that the banner targeting the smallest population of users would be ranked highest. This ensured that specific banners (e.g., an LAX banner explaining the change to curbside pickup) would always win. A minimal sketch of this selection logic follows this list.
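
To make the MVP's moving parts concrete, here is a minimal Python sketch of how eligibility, lifetime rate limiting, and specificity-based ranking could fit together. The Banner dataclass, pick_banner function, and audience sizes are illustrative assumptions, not Lyft's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Banner:
    id: str
    # Eligibility predicate over rider context (assumed to come from the Customer Data Platform).
    is_eligible: Callable[[dict], bool]
    # Lifetime rate limit: maximum impressions per rider.
    lifetime_cap: int
    # Estimated size of the targeted population; the most specific banner
    # (smallest audience) wins conflicts under the manual ranking rule.
    audience_size: int

def pick_banner(banners: list[Banner],
                rider_ctx: dict,
                impressions: dict[str, int]) -> Optional[Banner]:
    """Return the most specific eligible banner that is still under its lifetime cap."""
    candidates = [
        b for b in banners
        if b.is_eligible(rider_ctx) and impressions.get(b.id, 0) < b.lifetime_cap
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda b: b.audience_size)

# Example: a narrowly targeted LAX banner beats a broad promotional banner.
lax_banner = Banner("lax_pickup", lambda c: c.get("near_airport") == "LAX",
                    lifetime_cap=3, audience_size=50_000)
promo_banner = Banner("lyft_pink", lambda c: True,
                      lifetime_cap=5, audience_size=5_000_000)
print(pick_banner([lax_banner, promo_banner], {"near_airport": "LAX"}, impressions={}))
```
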

We bootstrapped the framework on the server-side and created the client experience on iOS and Android. We launched the initial set of banners in a non-scalable way by having engineers manually create banners. We carefully created monitoring dashboards and watched our experiments. The impact of these banners was very evident within the first few weeks and we eventually saw statistically significant positive movement in feature adoption, weekly active riders, and other user engagement metrics.

Stage 2: Adoption and scaling

Once we started to see positive impact, we began receiving more requests for banners. Our team also spent a lot of time marketing the banners framework internally (via brown bag presentations) to onboard more internal customers.

Bottlenecks

As more teams started using banners, there were two main bottlenecks:

  1. Time to create new banners: Our bootstrapping approach involved engineers coding up banners on the backend. This worked great in the initial stages, but it soon became a bottleneck and our team was not able to meet demand. We needed to provide a mechanism for other teams to create their own banners without any engineering involvement from our team.
  2. Limited placements: Our home banner was working well, but it was not enough to meet each new use case. We needed to expand the breadth of our reach and identify new placement surfaces.

In order to scale better, we needed a new set of tools:

  • Self-service message templates: One of our top priorities was to ensure that our engineering team was not the bottleneck for creating new banners. We used our internal engagement platform to enable anyone at Lyft to define a banner template.

Banner template UI

  • Better ranking: Our approach to manually ranking messages was not scalable. We did not want to pick winners, and we didn’t have enough data to train an ML model to automate it. We needed a better way to rank the banners. So, we created banner categories to identify the goals for each banner and ranked the categories instead of ranking individual banners (a small sketch of this category-ranked selection appears after this list). The ranked categories were (in descending order of importance):
  1. Transactional: Providing important information to our riders (e.g., limited curbside pickups at an airport).
  2. Promotional (upsell): Promoting new features, programs, or partnerships (e.g., Lyft Pink).
  3. Promotional (conversion): Improving the likelihood of session conversion (e.g., reminding the rider that they have an active coupon).
  4. Promotional (lifecycle): Improving user engagement and boosting long-term retention (e.g., thanking the rider for tipping).

Banner categories

  • Better user targeting: As new use cases came to light, our initial basic audience targeting became insufficient. To remedy this, we added better targeting by providing geofencing and event-based targeting capabilities.
  • Observability: As the service scaled, we were often pinged with requests for information on the health of a particular banner. Other teams were interested in their banners’ impression counts and user interactions, so we provided banner-level dashboards to give better visibility to all teams.
  • More placements: Our initial placements were limited. We picked a few other placements, starting with in-ride banners and ending up with 10+ placements in the final state.

In-ride banner
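
The sketch below illustrates the category-ranked selection and the richer targeting described above: banners are filtered by a geofence or trigger event, the highest-ranked category wins, and a banner is picked at random from that category (the random pick is exactly what Stage 3 later replaces). The category keys, geofence shape, and banner dictionaries are hypothetical; the production targeting is driven by the Customer Data Platform rather than ad hoc checks like these.

```python
import math
import random
from typing import Optional

# Ranked categories, in descending order of importance (lower rank wins).
CATEGORY_RANK = {
    "transactional": 0,
    "promo_upsell": 1,
    "promo_conversion": 2,
    "promo_lifecycle": 3,
}

def in_geofence(lat: float, lng: float, fence: dict) -> bool:
    """Simple circular geofence check using the haversine distance (meters)."""
    earth_radius_m = 6_371_000
    p1, p2 = math.radians(lat), math.radians(fence["lat"])
    dp = math.radians(fence["lat"] - lat)
    dl = math.radians(fence["lng"] - lng)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a)) <= fence["radius_m"]

def pick_banner(banners: list[dict], rider_ctx: dict) -> Optional[dict]:
    """Filter by targeting, then let the highest-ranked category win.

    rider_ctx is assumed to carry the rider's lat/lng and any recent events.
    Within the winning category the banner is chosen at random.
    """
    eligible = [
        b for b in banners
        if ("geofence" not in b
            or in_geofence(rider_ctx["lat"], rider_ctx["lng"], b["geofence"]))
        and ("trigger_event" not in b
             or b["trigger_event"] in rider_ctx.get("events", []))
    ]
    if not eligible:
        return None
    best_rank = min(CATEGORY_RANK[b["category"]] for b in eligible)
    winners = [b for b in eligible if CATEGORY_RANK[b["category"]] == best_rank]
    return random.choice(winners)
```
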

During the adoption phase, we were able to easily onboard more teams, support new use cases, and scale our operations. Life was good. Our holdouts were showing good results and our initial hypothesis of scaling the impact of banners by supporting multiple use cases had been proven relatively well. Our next challenge was to optimize the impact of individual banners by showing the right banner to the right user at the right time.

Stage 3: Optimization

More banners, more problems

Our adoption and scaling tactics worked well. We had enough banners to cover a large percentage of our riders and placements. However, we knew we could do better, and we started looking into optimizing the impact of banners across two dimensions:

  • For a given user and placement, which banner should we show?

We were manually ranking the banner categories but picking the actual banner from the winning category randomly. This was the first area of improvement. We introduced a banner ranker: a model that personalizes which banner each user sees. The banner ranker used relevant contextual information like user attributes, session attributes, and banner features, and predicted the expected reward for showing a specific banner to a user. The ranker then ordered the candidates by expected reward and chose the winning banner.

We also introduced a bandit model in the ranker to solve the “long-tail problem”, i.e., that over time only a few popular banners receive the majority of impressions while most other banners — especially ones with limited use cases — disappear. This is a classic “exploration vs. exploitation tradeoff” problem that reinforcement learning models such as bandits are designed to solve. Our use of bandits ensured that we balanced showing the most relevant banners to our users while simultaneously exploring new opportunities for less-used banners.

Banner ranker performance
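
The exact model behind the banner ranker isn't described here, but the overall flow can be sketched with a placeholder reward model and a simple epsilon-greedy exploration step standing in for the bandit; predict_reward, dummy_model, and the epsilon value are assumptions for illustration only.

```python
import random
from typing import Callable

def rank_banners(
    banners: list[dict],
    user_ctx: dict,
    session_ctx: dict,
    predict_reward: Callable[[dict, dict, dict], float],
    epsilon: float = 0.1,
) -> dict:
    """Score each candidate banner with a reward model, then pick a winner.

    With probability epsilon a random candidate is shown instead of the
    top-scored one, so long-tail banners keep receiving some impressions
    (a simple stand-in for the exploration a bandit provides).
    """
    scored = sorted(
        banners,
        key=lambda b: predict_reward(user_ctx, session_ctx, b),
        reverse=True,
    )
    if random.random() < epsilon:
        return random.choice(scored)
    return scored[0]

# Dummy reward model for illustration: favor banners matching the rider's preferred mode.
def dummy_model(user: dict, session: dict, banner: dict) -> float:
    return 1.0 if banner.get("mode") == user.get("preferred_mode") else 0.1

winner = rank_banners(
    [{"id": "lux_upsell", "mode": "lux"}, {"id": "pink_promo", "mode": "standard"}],
    {"preferred_mode": "lux"}, {}, dummy_model,
)
```
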

  • For a given banner, which content/copy should we show?

Once we picked a winning banner, the next optimization opportunity was to find the right content. We introduced bandit optimization to improve the efficiency of creative testing. Instead of a traditional A/B testing framework, we used Thompson Sampling to dynamically shift the allocation of incoming traffic based on past banner performance. The system improved the user experience during experiments and shortened the time to find a winning option.

Thompson Sampling performance
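
A minimal Beta-Bernoulli Thompson Sampler illustrates the idea: each copy variant keeps a posterior over its engagement rate, a sample from each posterior decides which copy the next rider sees, and observed interactions update the posteriors so traffic shifts toward better-performing copy. The class and variant names below are illustrative, not the production API.

```python
import random

class ThompsonSampler:
    """Beta-Bernoulli Thompson Sampling over a banner's copy variants."""

    def __init__(self, variants: list[str]):
        # Each variant starts with a uniform Beta(1, 1) prior over its engagement rate.
        self.stats = {v: {"success": 0, "failure": 0} for v in variants}

    def choose(self) -> str:
        # Sample an engagement rate for each variant and show the highest sample.
        samples = {
            v: random.betavariate(s["success"] + 1, s["failure"] + 1)
            for v, s in self.stats.items()
        }
        return max(samples, key=samples.get)

    def record(self, variant: str, engaged: bool) -> None:
        # Update the posterior with the observed impression outcome.
        self.stats[variant]["success" if engaged else "failure"] += 1

# Usage: pick a copy variant per impression, then log whether the rider engaged.
sampler = ThompsonSampler(["copy_a", "copy_b", "copy_c"])
variant = sampler.choose()
sampler.record(variant, engaged=True)
```
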

Impact

We now have a robust product that supports multiple in-app placements and more than 500 distinct banners, and drives more than half a billion impressions every year in the Lyft rider app.

Banner Placements

Are you interested in building systems that power growth at scale? Growth Platform teams are hiring and we are now fully flexible — Join us!

Acknowledgments

Thanks to Michael Yoshizawa and Xing Xing for contributing to this blog post.

Special thanks to the In-app Messaging Platform and Applied ML teams for making this vision a reality: Stefan Zier, Evgeniy Syniak, Jorge Antonio Martínez Rojas, Lili Jiang, Janani Sundarrajan, Wintha Kelati, Sara Smoot, and Vibhu Gavini.

Comms Platform engineering team: John Tran, Kate Liu, Kenneth Gomez, Ryan Barner, Melody Cerritos, Sri Naik, Dmytro Lisovyi, Siddharth Suresh, Rajesh Krishnamurthy, Parker Cuskey, Henry Tsang.

Our partner teams: Growth Marketing and product teams at Lyft!
