
Company: Uber

Related topic: 优步 (Uber)

Uber (/ˈuːbər/) is a transportation network company headquartered in San Francisco, California, USA. It develops mobile apps that connect passengers with drivers, providing vehicle-for-hire and ridesharing services in the sharing economy. Passengers can book these rides through the app and track the vehicle's position. Uber operates in 785 metropolitan areas worldwide, and the platform can be accessed through its website or mobile apps.

The name Uber is generally believed to derive from the German word über, a cognate of the English "over", meaning "above".

However, Uber's business model has faced legal challenges in some jurisdictions, where its unconventional operating model can amount to running unlicensed hire vehicles; some countries and regions have passed legislation to legalize it, such as California in the US, and Beijing and Shanghai in China. The controversy stems from Uber transforming the taxi industry into a social platform: through the mobile app, riders are connected in a three-way arrangement with Uber users who want to drive part-time and with owners of idle vehicles for hire, and once a transaction succeeds, Uber takes a commission by percentage and distributes shares of the revenue, a deregulated financial approach.

On May 10, 2019, Uber became a publicly listed company through an initial public offering, but its stock fell below the offering price on the first day of trading.

Uber is estimated to have 110 million active users worldwide and a 69% market share in the United States. Uber also does business in Greater China: it has established mainstream ride-hailing platforms in Hong Kong and Taiwan, and in mainland China it holds, through a share swap, a 17.7% economic interest in Xiaoju Technology (小桔科技), the parent company of Didi Chuxing, the largest ride-hailing platform in that market.

Demand and ETR Forecasting at Airports

Airports currently hold a significant portion of Uber’s supply and open supply hours (i.e., supply that is not utilized, but open for dispatch) across the globe. At most airports, drivers are obligated to join a “first-in-first-out” (FIFO) queue from which they are dispatched. When the demand for trips is high relative to the supply of drivers in the queue (“undersupply”), this queue moves quickly and wait times for drivers can be quite low. However, when demand is low relative to the amount of available supply (“oversupply”), the queue moves slowly and wait times can be very high. Undersupply creates a poor experience for riders, as they are less likely to get a suitable ride. On the other hand, oversupply creates a poor experience for drivers as they are spending more time waiting for each ride and less time driving. What’s more, drivers don’t currently have a way to see when airports are under- or over-supplied, which perpetuates this problem.

One way to tackle this undersupply/oversupply issue at airports is to forecast supply balance and use this to optimize resource allocation. Our first application of these models is in estimating the time to request (ETR) for the airport driver queue. We estimate how long a driver would have to wait before receiving a trip request, giving drivers the information they need to reposition to the airport during periods of undersupply (short waits), or to remain in the city during periods of oversupply (long waits).
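
The excerpt doesn't describe the forecasting model itself, but the quantity being estimated is easy to picture: in a FIFO queue, a driver's expected wait is roughly their queue position divided by the forecasted rate of incoming trip requests. Below is a minimal sketch of that relationship; the function and parameter names (estimateETR, driversAhead, requestsPerMinute) are ours for illustration, not Uber's.

```go
package main

import (
	"fmt"
	"time"
)

// estimateETR approximates a driver's time-to-request in a FIFO airport
// queue: with driversAhead drivers in front and a forecasted dispatch rate
// of requestsPerMinute, the wait is roughly driversAhead/requestsPerMinute
// minutes. A real forecaster would predict the request rate per time of day.
func estimateETR(driversAhead int, requestsPerMinute float64) time.Duration {
	if requestsPerMinute <= 0 {
		return 0 // no forecasted demand; in practice, surface "unknown"
	}
	minutes := float64(driversAhead) / requestsPerMinute
	return time.Duration(minutes * float64(time.Minute))
}

func main() {
	// Oversupply: 120 drivers ahead at 2 requests/min -> about an hour.
	fmt.Println(estimateETR(120, 2.0))
	// Undersupply: 5 drivers ahead at 4 requests/min -> just over a minute.
	fmt.Println(estimateETR(5, 4.0))
}
```

A forecast like this is what lets a driver decide whether repositioning to the airport is worth it, which is exactly the information gap the post calls out.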

Setting Uber’s Transactional Data Lake in Motion with Incremental ETL Using Apache Hudi

The Global Data Warehouse team at Uber democratizes data for all of Uber with a unified, petabyte-scale, centrally modeled data lake. The data lake consists of foundational fact, dimension, and aggregate tables developed using dimensional data modeling techniques that can be accessed by engineers and data scientists in a self-serve manner to power data engineering, data science, machine learning, and reporting across Uber. The ETL (extract, transform, load) pipelines that compute these tables are thus mission-critical to Uber’s apps and services, powering core platform features like rider safety, ETA predictions, fraud detection, and more. At Uber, data freshness is a key business requirement. Uber invests heavily in engineering efforts that process data as quickly as possible to keep it up to date with the happenings in the physical world.

In order to achieve such data freshness in our ETL pipelines, a key challenge is incrementally updating these modeled tables rather than recomputing all the data with each new ETL run. This is also necessary to operate these pipelines cost-effectively at Uber's enormous scale. In fact, as early as 2016, Uber introduced a new "transactional data lake" paradigm with powerful incremental data processing capabilities through the Apache Hudi project to address these challenges. We later donated the project to the Apache Software Foundation. Apache Hudi is now a top-level Apache project used industry-wide in an emerging technology category called the lakehouse. Since then, we have been excited to see the industry largely move away from bulk data ingestion towards the more incremental ingestion model that Apache Hudi ushered in at Uber. In this blog, we share our work over the past year or so in extending this incremental data processing model to our complex ETL pipelines to unlock true end-to-end incremental data processing.
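
Hudi's real APIs are JVM-based (Spark, Flink, and Presto integrations), so the following is only a conceptual sketch, in Go for brevity, of what "incremental" buys you: each ETL run asks the upstream table for records committed after its last checkpoint and upserts just those into the derived table, instead of recomputing everything. All type and function names here are hypothetical.

```go
package main

import "fmt"

// Record is a simplified change record from an upstream table;
// CommitTime is the instant at which it was committed (hypothetical schema).
type Record struct {
	Key        string
	Value      string
	CommitTime int64
}

// source stands in for a Hudi-style table that can answer incremental
// reads: "give me everything committed after instant t".
type source struct{ records []Record }

func (s *source) readIncremental(after int64) []Record {
	var out []Record
	for _, r := range s.records {
		if r.CommitTime > after {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	src := &source{records: []Record{
		{"trip-1", "v1", 100},
		{"trip-2", "v1", 100},
	}}
	derived := map[string]string{} // the downstream modeled table
	checkpoint := int64(0)

	runETL := func() {
		for _, r := range src.readIncremental(checkpoint) {
			derived[r.Key] = r.Value // upsert touches only changed keys
			if r.CommitTime > checkpoint {
				checkpoint = r.CommitTime
			}
		}
	}

	runETL() // first run processes the two initial commits
	src.records = append(src.records, Record{"trip-1", "v2", 200})
	runETL() // second run reads and applies only the new commit
	fmt.Println(derived, "checkpoint:", checkpoint)
}
```

The per-run cost is proportional to the volume of new commits rather than to the full table size, which is what makes fresh, frequently recomputed modeled tables affordable at petabyte scale.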

How We Unified Configuration Distribution Across Systems at Uber

Uber has multiple, domain-specific products to manage and distribute configuration changes at runtime across our many systems. These configuration products cater to different use cases: some have a web UI that can be used by non-engineers to change product configuration for different cities, and others expose a Git-based interface that primarily caters to engineers.

While these domain-specific configuration products have different applications, they share common parts that can be consolidated for simplicity and to reduce the overhead of operations, maintenance, and compliance. This article will cover how we consolidated and streamlined our underlying configuration and rollout mechanisms, including some of the interesting challenges we solved along the way, and the efficiencies we achieved by doing so.

Uber’s Sustainable Engineering Journey

Uber has made a commitment to sustainability by setting several goals across various sectors. By 2030, Uber plans to become a zero-emission mobility platform in Canada, Europe, and the US, and by 2040, worldwide. Uber Green, which offers no- or low-emission rides, has become the most widely available option of its kind globally. However, this commitment encompasses more than just rides: it also includes Uber's engineering infrastructure, such as its data centers and hardware resources, both on-premises and in public clouds.

As engineers and technology leaders, we nurture and develop the concept of responsible ownership, often understood as maintaining the high quality of our products. Responsible ownership also implies building efficient services, with energy-efficiency and sustainability metrics as an integral part of that efficiency.

In late 2021, we embarked on a journey to identify the best sustainable engineering practices, tools, and technologies, and began building them into our services, products, and training sessions. In this article, we present our vision and roadmap, walk through Uber Eng best practices for engineering sustainably towards a zero-emission world, and introduce novel, sustainability-oriented services.

D3: An Automated System to Detect Data Drifts

Data powers almost all critical, customer-facing flows at Uber. Bad data quality impacts our ML models, leading to a bad user experience (incorrect fares, ETAs, products, etc.) and revenue loss.

Still, many data issues are manually detected by users weeks or even months after they start. Data regressions are hard to catch because the most impactful ones are generally silent. They do not impact metrics and ML models in an obvious way until someone notices something is off, which finally unearths the data issue. But by that time, bad decisions are already made, and ML models have already underperformed.

This makes it critical to monitor data quality thoroughly so that issues are caught proactively.
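
This excerpt doesn't spell out D3's detectors, but a common building block for this kind of monitoring is comparing today's distribution of a column against a historical baseline, for example with the Population Stability Index (PSI). The sketch below, including the 0.2 alert threshold, is a generic rule of thumb rather than Uber's documented method.

```go
package main

import (
	"fmt"
	"math"
)

// psi computes the Population Stability Index between a baseline and a
// current distribution over the same bins. Values near 0 mean no drift;
// above ~0.2 is a common rule-of-thumb alert threshold (our assumption,
// not Uber's documented setting).
func psi(baseline, current []float64) float64 {
	const eps = 1e-6 // guard against log(0) on empty bins
	var total float64
	for i := range baseline {
		b := math.Max(baseline[i], eps)
		c := math.Max(current[i], eps)
		total += (c - b) * math.Log(c/b)
	}
	return total
}

func main() {
	// Share of trips per fare bucket on a baseline day vs. today.
	baseline := []float64{0.25, 0.50, 0.25}
	today := []float64{0.10, 0.45, 0.45} // mass has shifted to the top bucket
	if score := psi(baseline, today); score > 0.2 {
		fmt.Printf("PSI=%.3f: drift detected, alert the table owner\n", score)
	}
}
```

Running a check like this continuously against every monitored column is what turns silent regressions into alerts instead of weeks-later discoveries.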

Fixing Go’s Linker: An Unexpected Journey into ARM64, DWARF, and Linker Internals

We recently encountered an unusual problem at Uber with Golang™ debugging as our engineers began transitioning to Apple® Silicon hardware, which uses the ARM64 Instruction Set Architecture (ISA) rather than the x86/AMD64 ISA most of us had been using for years. This required some rather complex debugging of the toolchain itself by Uber engineers.

uAct - Unified Action Platform

At a company as large and as complex as Uber, the volume of internal communication can easily become overwhelming. Our employees use multiple independent systems to send and receive a variety of notifications every single day. These include:

  1. Pullo – to raise access requests for software tools and services
  2. Uber Feedback – to provide and receive co-worker feedback
  3. Uber Learning – to undertake assigned training
  4. Workday – to apply for leaves, sign HR contracts, etc.
  5. ERD-PRD tool – to seek and provide approval of engineering and product documentation
  6. UberHub – to make requests for IT infra, hardware & software, etc.

Thus, employees receive several action items and approval requests from multiple applications every day. The Unified Action Platform, or uAct, was built to help employees stay on top of their assigned tasks and action items. uAct aggregates all such requests in one place where employees can easily view and address them.

How the Uber Membership Team Developed the ActionCard Design Pattern to Do More with Less

The ActionCard pattern distills app screen UI, navigation (routing) logic, and other app logic into simple, decoupled elements. The UI elements are called cards, and the associated reusable logical elements are called actions. Together, cards and actions are configured to create app screens and features. Every screen is backed by a server-driven feed of card data models.
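
The pattern lives in Uber's mobile apps, but since every screen is backed by a server-driven feed, its shape is easiest to see in the feed's data models. The types and fields below are our guesses for illustration, not Uber's actual schema.

```go
package main

import "fmt"

// Action is a reusable, decoupled unit of app logic referenced by a card,
// e.g. opening a deeplink or dismissing the card (hypothetical fields).
type Action struct {
	Type    string // "deeplink", "dismiss", "purchase", ...
	Payload string
}

// Card is a server-driven UI element; the client maps Type to a view and
// wires the attached actions to taps, with no screen-specific code.
type Card struct {
	Type    string
	Title   string
	Actions []Action
}

// Screen is the feed of card data models backing one app screen.
type Screen struct {
	Name  string
	Cards []Card
}

func main() {
	// The server composes a screen by configuring cards and actions.
	screen := Screen{
		Name: "membership_hub",
		Cards: []Card{
			{Type: "banner", Title: "Try Uber One", Actions: []Action{
				{Type: "deeplink", Payload: "uber://membership/signup"},
			}},
			{Type: "perk", Title: "Free delivery on eligible orders"},
		},
	}
	fmt.Printf("%+v\n", screen)
}
```

Because screens are assembled from configured cards and actions, launching a new feature across screens and apps can be a payload change rather than a client release.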

The ActionCard pattern implementation described here is the result of our team leveraging learnings from across Uber engineering and a lot of our own trial and error. The result is a pattern that allows us to quickly launch new features across multiple screens and apps with a focus on rapid iteration.

The ActionCard pattern has allowed us to reduce complexity and eliminate redundancy. Our hope is that it might be helpful for other teams who want to go fast.

Containerizing the Beast – Hadoop NameNodes in Uber’s Infrastructure

There are several online references on how to run Apache Hadoop® (referred to as “Hadoop” in this article) in Docker for demo and test purposes. In a previous blog post, we described how we containerized Uber’s production Hadoop infrastructure spanning 21,000+ hosts.

HDFS NameNode is the most performance-sensitive component within large multi-tenant Hadoop clusters. We had deferred NameNode containerization to the end of our containerization journey to leverage learnings from containerizing other Hadoop components. In this blog post, we’ll share our experience on how we containerized HDFS NameNodes and architected a zero-downtime migration for 32 clusters.

Scaling Adoption of Kerberos at Uber

At Uber, we have been operating an Apache Hadoop® based data analytics platform since 2015. As adoption picked up exponentially in 2016, we decided to secure our platform with Kerberos authentication. Since then, Kerberos has become a critical component of our security infrastructure, supporting not only Uber's Hadoop ecosystem but also other services that are considered mission-critical to our tech stack.

Uber has a large and diverse tech stack deployed on thousands of machines and storing more than 300 PB of analytics data. Systems using Kerberos authentication include YARN, HDFS, Apache Hive™, Apache Kafka®, Apache Zookeeper™, Presto®, Apache Spark™, Apache Flink®, and Apache Pinot™, to name a few. Besides open-source systems, many of the internally developed platforms and services also use Kerberos for authentication. All these services need to securely communicate with each other, as well as provide secure access for all their user clients.

In this blog post, we’ll share some of the key challenges and the solutions that enabled us to scale adoption of Kerberos for authentication at Uber.

Devpod: Improving Developer Productivity at Uber with Remote Development

In this blog, we share how we improved the daily edit-build-run developer experience with Devpod, our remote development environment. We will start with some of the initial challenges, the pain points we addressed with Devpod, our architecture, and some of our recent successes in terms of adoption and cost reduction. We will finally leave you with some thoughts on the future of remote development at Uber.

LeakProf: Featherlight In-Production Goroutine Leak Detection

Go is a popular programming language used in microservice development, with one of the key features being first-class support for concurrency. Given its burgeoning popularity, it is no surprise that Uber adopted it: the Go monorepo is a centerpiece of its development platform, housing a significant portion of Uber’s codebase in the form of critical business logic, support libraries, or key components of the infrastructure.

Go's concurrency model is built on goroutines, lightweight threads managed by the Go runtime. Any function call prefixed with the "go" keyword launches that function asynchronously. Goroutines are used abundantly in Go codebases owing to their low syntactic overhead and resource requirements, with programs often involving tens, hundreds, or even thousands of goroutines running simultaneously. Two or more goroutines can communicate with each other via message passing over channels, a paradigm inspired by Hoare's Communicating Sequential Processes. While traditional shared-memory communication is still an option, the development team behind Go encourages users to favor channels, arguing that, used properly, they are more resistant to data races.
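
This style makes one failure mode easy to hit: a goroutine parked forever on a channel operation that no other goroutine will ever complete, which is exactly the class of leak LeakProf looks for. The toy below (ours, not from the post) leaks one goroutine per timed-out call.

```go
package main

import (
	"context"
	"fmt"
	"runtime"
	"time"
)

// lookup fans a query out to a worker goroutine but returns as soon as
// the context is done. BUG (the classic leak): the worker sends on an
// unbuffered channel, so if the caller has already timed out there is
// no receiver, and the worker blocks on `ch <- ...` forever.
func lookup(ctx context.Context) (string, error) {
	ch := make(chan string) // make(chan string, 1) would fix the leak
	go func() {
		time.Sleep(50 * time.Millisecond) // simulate a slow backend call
		ch <- "result"                    // blocks forever once the caller gives up
	}()
	select {
	case r := <-ch:
		return r, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

func main() {
	for i := 0; i < 100; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
		_, _ = lookup(ctx) // times out, abandoning the worker goroutine
		cancel()
	}
	time.Sleep(100 * time.Millisecond)
	// Every abandoned worker is still parked on its channel send.
	fmt.Println("leaked goroutines:", runtime.NumGoroutine()-1)
}
```

A buffered channel lets the worker complete its send and exit even after the caller has gone away; blocked sends and receives like this, accumulating in production, are the signal a leak profiler keys on.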

How Uber Optimizes the Timing of Push Notifications using ML and Linear Programming

Push notifications are an integral channel for Uber Eats customers to discover new restaurants, valuable promotions, new offerings such as grocery and alcohol, and the perks of becoming a member, among other things. Push notifications are sent internally by various teams, such as Marketing, City Operations, and Product. Since marketing push notifications were introduced in March 2020, not only did the list of teams sending notifications grow quickly, but volume also grew rapidly, reaching billions of notifications per month by the end of 2020.
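
The excerpt doesn't give the formulation, but one plausible shape for "ML plus linear programming" here is: an ML model scores each (user, hour) pair with a predicted engagement probability, and an LP chooses send slots subject to volume constraints. The following is purely illustrative, not Uber's published model.

```latex
% Illustrative formulation (our assumption): x_{u,t} = 1 if user u
% receives a push in hour slot t, relaxed to [0,1] for the LP.
\begin{aligned}
\max_{x}\quad & \sum_{u}\sum_{t} p_{u,t}\,x_{u,t}
  && p_{u,t}\text{: ML-predicted open probability}\\
\text{s.t.}\quad
  & \sum_{t} x_{u,t} \le 1 \quad \forall u && \text{at most one push per user per day}\\
  & \sum_{u} x_{u,t} \le C_t \quad \forall t && \text{hourly send-capacity cap } C_t\\
  & 0 \le x_{u,t} \le 1 && \text{LP relaxation of the binary choice}
\end{aligned}
```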

Speed Up Presto at Uber with Alluxio Local Cache

At Uber, data informs every decision. Presto is one of the core engines powering data analytics at Uber. For example, the operations team makes heavy use of Presto for services such as dashboarding, and the Uber Eats and marketing teams rely on the results of these queries to make pricing decisions. Presto is also used by Uber's compliance department and growth marketing department, and for ad-hoc data analytics.

The scale of Presto at Uber is large. Currently, Presto has 9,000 daily active users, processing 500K queries per day and handling over 50PB of data. Uber’s infrastructure encompasses 2 data centers with 7,000 nodes and 20 Presto clusters across 2 regions.

Simplifying Developer Testing Through SLATE

Modern-day technical system deployments generally follow an SOA or microservice-based architecture, which allows for a clearer separation of concerns, well-defined ownership and dependencies, and the abstraction of individual units of business logic.

Uber has thousands of services coordinating to power the platform that drives the company at scale. To offer customers a great experience, developers have to ensure that their services meet their functional requirements, and building confidence that they do requires testing.

Introducing WorkflowGuard: The Workflow Governance and Observability System That Oversees over 120,000 Data Workflows

At Uber, over 120,000 production workflows are orchestrated, scheduled, and executed every day. These workflows are owned by over 3,000 users across many teams within Uber, powering critical ETL jobs, business metrics, dashboards, machine learning models, and regulatory reports. Internally, the Data Workflow Platform (DWP) team makes this happen by developing Uber's centralized workflow management system with high infrastructure reliability and minimal scheduling latency. The workflow management system also comes with a user-friendly application that allows users to create, author, and manage both streaming and batch workflows in a self-serve way.
