Lightweight Office Infrastructure: Transitioning from Backbone to SD-WAN
Uber’s enterprise network encompasses about 250 offices worldwide, from compact workspaces to expansive campuses. To support this global presence, we’re moving away from a legacy centralized backbone network in favor of a scalable, flexible SD-WAN architecture. Our objectives are to cut latency, streamline operations, boost automation, and reduce costs.
Historically, connectivity for our large offices relied on a backbone model built on regional PoPs (Points of Presence) and centralized firewalls for routing and control. However, deploying a PoP near every office hasn’t always been practical. Often, a single PoP serves multiple offices across national borders, forcing internet-bound traffic to traverse the PoP before reaching the internet. This detour introduces latency and adds to network complexity.
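The latency cost of this detour is easy to see with a back-of-the-envelope model. The sketch below (all latency figures and path legs are hypothetical, not measured Uber values) compares the round-trip time of internet-bound traffic that traverses a cross-border PoP and centralized firewall against a direct local internet breakout, which is the pattern SD-WAN enables.

```python
# Minimal sketch (hypothetical values): round-trip latency of internet-bound
# traffic routed through a regional PoP versus a direct local internet breakout.

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    hops_ms: list[float]  # one-way latency contribution of each leg, in ms

    def round_trip_ms(self) -> float:
        # In this simple model, round trip is twice the sum of the one-way legs.
        return 2 * sum(self.hops_ms)

# Hypothetical legs: office -> cross-border PoP -> centralized firewall -> internet
backbone_path = Path("via regional PoP", hops_ms=[18.0, 2.0, 5.0])

# Hypothetical leg: office -> local ISP -> internet (SD-WAN local breakout)
local_breakout = Path("local internet breakout", hops_ms=[6.0])

for path in (backbone_path, local_breakout):
    print(f"{path.name}: ~{path.round_trip_ms():.0f} ms RTT")
```

The exact numbers matter less than the structure: every extra leg between the office and its internet exit is paid on both directions of every flow, which is why breaking out locally tends to reduce latency.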
Maintaining P2P (point-to-point) connections requires careful planning: primary and backup links must be provisioned with different ISPs to avoid overlap, which increases operational overhead. While this backbone model has enabled Uber’s growth so far, it has also led to high costs, slow deployments, and growing infrastructure complexity. To address these pain points, we’re shifting to a decentralized, AI-driven SD-WAN architecture. This transition is already enabling quicker deployments, better scalability, and smarter network operations, with less dependence on traditional PoPs and data centers.
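One way to keep the provider-overlap requirement from becoming a manual chore is a simple pre-deployment check. The sketch below uses a hypothetical `Circuit` data model and office/ISP names (not part of Uber's tooling) to validate that each office's primary and backup circuits land on different ISPs, so a single provider outage cannot take down both links.

```python
# Minimal sketch (hypothetical data model): verify ISP diversity between the
# primary and backup circuits provisioned for each office.

from dataclasses import dataclass

@dataclass
class Circuit:
    office: str
    role: str   # "primary" or "backup"
    isp: str

def diverse_isps(circuits: list[Circuit]) -> bool:
    """True if every office with both a primary and a backup uses two different ISPs."""
    primary = {c.office: c.isp for c in circuits if c.role == "primary"}
    backup = {c.office: c.isp for c in circuits if c.role == "backup"}
    return all(primary[office] != backup[office] for office in primary if office in backup)

circuits = [
    Circuit("AMS-01", "primary", "ISP-A"),
    Circuit("AMS-01", "backup", "ISP-B"),
    Circuit("SFO-02", "primary", "ISP-C"),
    Circuit("SFO-02", "backup", "ISP-C"),  # same provider on both links: flagged
]

print("All offices ISP-diverse:", diverse_isps(circuits))  # -> False
```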
This initiative was driven by the need to lower operational costs, speed up deployment timelines, and reduce reliance on aging, complex infrastructure. At the same time, our goal ...