
Company: Lyft

Lyft is a transportation network company headquartered in San Francisco, California, United States. It develops a mobile app that connects passengers with drivers, offering vehicle-for-hire and ride-sharing matching services as part of the sharing economy. Passengers can book a ride by sending a text message or through the mobile app, which also lets them track the vehicle's location.

With roughly a 30% market share, Lyft is the second-largest ride-hailing company in the United States after Uber.

Continuous Deployment at Lyft

Continuous Deployment (CD) is the practice of automatically deploying software changes, such as new features and fixes, to our customers as quickly and safely as possible. At Lyft, we pride ourselves on “making it happen”, so in 2019 we set out to move from our manual deployment process to CD. The microservice architecture at Lyft made our path to CD challenging, as adoption required changing how each team maintained and deployed their services. Not only did we have to make configuration changes to each service, but, most importantly, we had to drive cultural changes in how each team approached and performed their deployments. We recently hit a new milestone: over 90% of our approximately 1,000 services now use CD to deploy to production. We would like to share the journey, technical details, and challenges we faced.
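As a rough illustration of the idea (a minimal sketch, not Lyft's actual deployment system), a CD pipeline can be modeled as an ordered set of steps in which automated gates must pass before the rollout continues:

```python
# Hypothetical sketch of an automated deploy pipeline with safety gates.
# Step names and gate logic are illustrative, not Lyft's real tooling.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    name: str
    run: Callable[[], bool]  # returns True if the step (or gate) succeeded


def run_pipeline(steps: List[Step]) -> bool:
    """Run deploy steps in order; abort the rollout at the first failed gate."""
    for step in steps:
        print(f"running: {step.name}")
        if not step.run():
            print(f"gate failed: {step.name}; halting rollout")
            return False
    return True


# Deploy to staging, let metrics "bake", then promote to production
# only if every gate passed.
pipeline = [
    Step("deploy-staging", lambda: True),
    Step("bake-and-check-metrics", lambda: True),
    Step("deploy-production", lambda: True),
]
run_pipeline(pipeline)
```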

Causal Forecasting at Lyft (Part 1)

Efficiently managing our marketplace is a core objective of Lyft Data Science. That means providing meaningful financial incentives to drivers in order to supply affordable rides while keeping ETAs low under changing market conditions — no easy task!

Lyft’s tool chest contains a variety of market management products: rider coupons, driver bonuses, and pricing, to name a few. Using these efficiently requires a strong understanding of their downstream consequences — everything from counts of riders opening the Lyft app (“sessions”) to financial metrics.

To complicate the science further, our data is heavily confounded by our previous decisions, so a merely correlational model would fail us. Sifting out causal relationships is the only option for making smart, forward-looking decisions.
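To make the confounding concrete, here is a toy simulation (entirely synthetic, not Lyft data or Lyft's modeling approach) in which a coupon-like incentive is targeted at low-demand periods: a naive regression of sessions on the incentive is badly biased, while adjusting for the confounder recovers the true effect.

```python
# Toy example of confounding by past decisions: coupons are issued when demand
# is weak, so the raw coupon -> sessions correlation is misleading.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

demand = rng.normal(size=n)                                    # market demand (confounder)
coupon = 1.0 - 0.8 * demand + rng.normal(scale=0.5, size=n)    # coupons target weak demand
sessions = 2.0 * coupon + 3.0 * demand + rng.normal(size=n)    # true coupon effect = 2.0

# Naive correlational estimate: regress sessions on coupon only.
naive = np.polyfit(coupon, sessions, 1)[0]

# Adjusted estimate: include the confounder (demand) as a covariate.
X = np.column_stack([coupon, demand, np.ones(n)])
adjusted, *_ = np.linalg.lstsq(X, sessions, rcond=None)

print(f"naive slope:    {naive:.2f}")        # biased, even has the wrong sign
print(f"adjusted slope: {adjusted[0]:.2f}")  # close to the true effect of 2.0
```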

Full-Spectrum ML Model Monitoring at Lyft

Machine Learning models at Lyft make millions of high-stakes decisions per day, from physical safety classification to fraud detection to real-time price optimization. Since these model-driven actions impact the real-world experiences of our riders and drivers as well as Lyft’s top and bottom line, it is critical to prevent models from degrading in performance and to alert on malfunctions.

However, identifying and preventing model problems is hard. Unlike problems in deterministic systems whose errors are easier to spot, models’ performance tends to gradually decrease, which is more difficult to detect.
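As one concrete example of the kind of check such monitoring can run (a minimal sketch, not Lyft's monitoring stack), the Population Stability Index compares a feature's training-time distribution with its recent production distribution and flags gradual shifts that would otherwise go unnoticed:

```python
# Minimal drift check: Population Stability Index (PSI) between a feature's
# training distribution and its recent production distribution.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of the same feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) in sparse bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))


rng = np.random.default_rng(0)
training = rng.normal(0.0, 1.0, size=50_000)     # feature at training time
production = rng.normal(0.3, 1.1, size=50_000)   # same feature, shifted in production

score = psi(training, production)
# A common rule of thumb: PSI above ~0.2 suggests a shift worth alerting on.
print(f"PSI = {score:.3f}")
```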

Bringing Lyft Safety Features to the Web

Safety is fundamental to Lyft. Thus, as part of the Safety & Customer Care Web team, we strive to provide the safest experience for our users. Our focus is to bring safety and support features to our riders’ web experience (ride.lyft.com); this allows riders to request a ride from their browser without needing the mobile app. In doing so, we also maintain consistency with the mobile app experience.

Monitoring CPU performance of Lyft’s Android applications

Android applications such as Lyft’s apps are developed by a large number of contributors. This means that the codebase grows and changes very quickly. Features are constantly being added or improved, and all these modifications can potentially impact the performance of the application. Thus, it is important to understand how the application consumes CPU resources and to see the dynamics of such metrics across product releases.

Building Lyft’s In-App Messaging Platform

We believe that relevant messaging is at the heart of our mission to improve people’s lives with the world’s best transportation.

How LyftLearn Democratizes Distributed Compute through Kubernetes Spark and Fugue

In a previous blog post, we discussed LyftLearn’s infrastructure built on top of Kubernetes. In this post, we will focus on the compute layer of LyftLearn, and will discuss how LyftLearn solves some of the major pain points faced by Lyft’s machine learning practitioners.
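To illustrate the "democratized compute" idea with the open-source Fugue API (a minimal sketch; LyftLearn's own wrappers, cluster setup, and column names are not shown here), the same pandas-style function can run locally or on a Spark cluster just by switching the execution engine:

```python
# Write logic once against pandas; scale it out by changing the engine.
import pandas as pd
from fugue import transform


def add_fare_per_mile(df: pd.DataFrame) -> pd.DataFrame:
    # Plain pandas logic; no Spark-specific code.
    return df.assign(fare_per_mile=df["fare"] / df["miles"])


rides = pd.DataFrame({"fare": [12.5, 30.0], "miles": [3.1, 11.2]})

# Run locally with the native (pandas) engine...
local_result = transform(rides, add_fare_per_mile, schema="*,fare_per_mile:double")

# ...or distribute the same function on Spark by passing a SparkSession as the engine:
# spark_result = transform(rides, add_fare_per_mile,
#                          schema="*,fare_per_mile:double", engine=spark)
print(local_result)
```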

Orchestrating Data Pipelines at Lyft: comparing Flyte and Airflow

We will focus on comparing the Airflow and Flyte implementations at Lyft, diving into the architecture of each and summarizing their benefits and drawbacks.
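For readers unfamiliar with Flyte, here is a minimal example using the open-source flytekit API (illustrative only, not Lyft-internal code; the task names and values are made up). Unlike an Airflow DAG assembled from operators, dependencies are inferred from typed Python functions and the data flowing between them:

```python
# Minimal Flyte workflow: tasks are typed Python functions, and the workflow
# wires them together; dependencies follow the data flow.
from flytekit import task, workflow


@task
def extract_rides(day: str) -> int:
    # Placeholder for pulling a day's ride count from a source table.
    return 42


@task
def publish_metric(ride_count: int) -> str:
    return f"published ride_count={ride_count}"


@workflow
def daily_rides_pipeline(day: str) -> str:
    count = extract_rides(day=day)
    return publish_metric(ride_count=count)


if __name__ == "__main__":
    # Flyte workflows can also be executed locally for fast iteration.
    print(daily_rides_pipeline(day="2022-01-01"))
```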

Lyft and urban mobility

Lyft moves people through space and time. But where those people move, and why, is up to them. Lyft’s riders use our services to get to and from work, go out to dinner, visit family, and get to the airport. When and where they do so tells us a lot about urban mobility — whether and how the notions of neighborhood, geography, and landscape shape how people move through space.

Here we share what we can learn from long-run patterns on Lyft’s operations in major US cities. We see that cities vary a lot internally in how people travel, where, and when. That diversity implies a need for a diverse range of products and services. But, strikingly, we also see how cities resemble each other — that sometimes, common patterns, like urban downtowns, look more like other cities’ downtowns than they do their companion suburbs.

Scaling productivity on microservices at Lyft (Part 4): Gating Deploys with Automated Acceptance…

This is the fourth and final post in the series on how we scaled our development practices at Lyft in the face of an ever-increasing number of developers and services.

Improving Web Vulnerability Management through Automation

Vulnerability management is important, but it can be incredibly time-consuming. We have to scan our systems and then fix the vulnerabilities that we’ve discovered. In a large software engineering organization this becomes more challenging — service owners are responsible for fixing vulnerabilities in their systems along with all their other work, and security has to track this work, nudge engineers to actually fix things, and report to CISO/compliance/etc. Fortunately, much of this work lends itself to automation, letting security engineers focus on understanding and fixing vulnerabilities! In this post we’ll focus specifically on web vulnerabilities and some of the fun automation challenges this process poses.
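As a hypothetical sketch of what such automation can look like (the names, fields, and SLA values below are illustrative, not Lyft's actual tooling), scanner findings can be mapped to owning teams and turned into tracked tickets with remediation deadlines:

```python
# Hypothetical automation: turn raw scanner findings into per-team tickets
# with severity-based due dates, so tracking and nudging need no manual work.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Dict, List


@dataclass
class Finding:
    service: str
    title: str
    severity: str  # e.g. "critical", "high", "medium", "low"


# Illustrative remediation windows per severity.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}


def file_tickets(findings: List[Finding], owners: Dict[str, str]) -> List[dict]:
    """Map each finding to its owning team and compute a due date."""
    tickets = []
    for f in findings:
        tickets.append({
            "team": owners.get(f.service, "security"),  # fall back to security triage
            "summary": f"[{f.severity}] {f.title} in {f.service}",
            "due": date.today() + timedelta(days=SLA_DAYS[f.severity]),
        })
    return tickets


findings = [Finding("rides-api", "Outdated TLS configuration", "high")]
print(file_tickets(findings, owners={"rides-api": "rides-platform"}))
```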

Scaling productivity on microservices at Lyft (Part 3): Extending our Envoy mesh with staging overrides

This is the third post in the series on how we scaled our development practices at Lyft in the face of an ever-increasing number of developers and services.

In our previous post, we described our laptop development workflow designed for fast iteration of local services. In this post, we’ll detail our solution for safe and isolated end-to-end (E2E) testing in staging: our pre-production shared environment. We’ll briefly recap the issues that led us to this form of testing before diving deeper into implementation details.
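As a greatly simplified sketch of the override idea (the header name and routing logic below are assumptions for illustration, not Lyft's actual Envoy configuration), a request can carry override metadata that routes calls for one service to a developer's instance while everything else continues to use the shared staging deployment:

```python
# Simplified staging-override routing: a propagated request header names the
# services (and addresses) a developer wants to substitute for this request.
import json

OVERRIDE_HEADER = "x-staging-overrides"  # hypothetical header name


def pick_upstream(service: str, headers: dict, default_upstream: str) -> str:
    """Return the upstream for `service`, honoring any override in the header."""
    overrides = json.loads(headers.get(OVERRIDE_HEADER, "{}"))
    return overrides.get(service, default_upstream)


# A developer testing a change to `pricing` attaches an override; the header is
# propagated through downstream calls so the whole call chain honors it.
headers = {OVERRIDE_HEADER: json.dumps({"pricing": "10.0.0.42:8080"})}

print(pick_upstream("pricing", headers, "pricing.staging.internal:80"))  # developer instance
print(pick_upstream("rides", headers, "rides.staging.internal:80"))      # shared staging
```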

Building Lyft’s Incentive Platform

Our journey to scale incentive campaigns to unlock high experiment velocity for Growth.

Scaling productivity on microservices at Lyft (Part 2): Optimizing for fast local development

This is the second post in the series on how we scaled our development practices at Lyft in the face of an ever-increasing number of developers and services.

This post will focus on how we brought a great development experience right to the laptop to allow for super fast iteration.

Scaling productivity on microservices at Lyft (Part 1)

Late in 2018, Lyft engineering completed decomposing our original PHP monolith into a collection of Python and Go microservices. A few years down the road, microservices had been largely successful in allowing teams to operate and ship services independently of one another. The separation of concerns that microservices brought about enabled us to experiment and deliver features faster (deploying hundreds of times each day) and provided us with the flexibility to use different programming languages where they work best, to have stricter or looser requirements based on service criticality, and much more. However, as the number of engineers, services, and tests all increased, our development tooling struggled to keep up with an explosion of microservices, eroding many of the productivity gains we had strived for.

This four-part series will walk through the development environments that served Lyft’s engineering team as it grew from 100 engineers and a handful of services to 1000+ engineers and hundreds of services. We’ll discuss the scaling challenges that caused us to pivot away from most of those environments, as well as a testing approach based predominantly on heavy integration tests (often approaching end-to-end), in favor of a local-first approach centered on testing components in isolation.

  • Part One: History of development and test environments (this post)
  • Part Two: Optimizing for fast local development
  • Part Three: Extending the service mesh in staging with overrides
  • Part Four: Gating deployments with automated acceptance tests

Mobile Performance @ Lyft

In Q2 of 2021, Lyft served 17.1 million active riders through our suite of mobile applications. At this scale, every crash, frozen frame, or hiccup can translate to thousands of hours of wasted time and frustration. Given the potential to impact millions of Lyft’s customers, we doubled our efforts in 2020 to improve our applications’ speed and stability by launching an internal mobile performance initiative.

This is the first in a series of stories detailing our journey to materially improve the speed and stability of our applications. We hope that this can be used by other companies as a reference and inspiration for their own investments in this space.

In this post, we will explore both how and why we started investing in mobile performance at Lyft, and we will dig into our team’s philosophy — everything from how we select projects to how we evaluate success.
