
Curbing Connection Churn in Zuul

When Zuul was designed and developed, there was an inherent assumption that connections were effectively free, given we weren't using mutual TLS (mTLS). Zuul is built on top of Netty, using event loops for non-blocking execution of requests, one loop per core. To reduce contention among event loops, we created a separate connection pool for each one, keeping them completely independent. The result is that the entire request-response cycle happens on the same thread, significantly reducing context switching.
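
To make the per-event-loop pooling concrete, here is a minimal sketch in Java using Netty's channel-pool API. This is not Zuul's actual code: the class name PerEventLoopPools, the bare Bootstrap setup, and the single-origin constructor are illustrative assumptions. The key idea it shows is keying an independent pool off each EventLoop, so a channel acquired on a loop is created, used, and released on that loop's thread.

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.EventLoop;
import io.netty.channel.pool.AbstractChannelPoolHandler;
import io.netty.channel.pool.SimpleChannelPool;
import io.netty.channel.socket.nio.NioSocketChannel;

import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: one independent channel pool per event loop, so the
// request-response cycle stays on a single thread and pools never need
// to synchronize with each other.
public final class PerEventLoopPools {
    private final Map<EventLoop, SimpleChannelPool> pools = new ConcurrentHashMap<>();
    private final InetSocketAddress origin;

    public PerEventLoopPools(InetSocketAddress origin) {
        this.origin = origin;
    }

    // Lazily creates the pool owned by the given event loop. The pool's
    // Bootstrap is bound to that loop, so every channel it opens is
    // registered on the same thread that will serve the request.
    public SimpleChannelPool poolFor(EventLoop loop) {
        return pools.computeIfAbsent(loop, l -> {
            Bootstrap bootstrap = new Bootstrap()
                    .group(l)
                    .channel(NioSocketChannel.class)
                    .remoteAddress(origin);
            return new SimpleChannelPool(bootstrap, new AbstractChannelPoolHandler() {
                @Override
                public void channelCreated(Channel ch) {
                    // Origin-facing handlers (codec, TLS, etc.) would be added here.
                }
            });
        });
    }
}
```

The trade-off is visible in the structure: because the map is keyed by EventLoop, every loop ends up with its own full set of connections to the origin, which is exactly what drives the multiplication described next.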

There is also a significant downside: if each event loop has a connection pool that connects to every origin (our name for backend) server, the total connection count multiplies out as event loops × origin servers × Zuul instances. For example, a 16-core box connecting to an 800-server origin would hold 16 × 800 = 12,800 connections. If the Zuul cluster has 100 instances, that's 1,280,000 connections, a significant amount and certainly more than is necessary relative to the traffic on most clusters.
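
For concreteness, the arithmetic above as a runnable back-of-the-envelope check; the figures are the ones quoted in the paragraph, not measurements:

```java
// Back-of-the-envelope connection math from the paragraph above.
public final class ConnectionMath {
    public static void main(String[] args) {
        int eventLoopsPerInstance = 16;   // one event loop per core
        int originServers = 800;
        int zuulInstances = 100;

        int perInstance = eventLoopsPerInstance * originServers;   // 12,800
        long clusterWide = (long) perInstance * zuulInstances;     // 1,280,000
        System.out.printf("per instance: %,d; cluster-wide: %,d%n",
                perInstance, clusterWide);
    }
}
```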

As streaming has grown over the years, these numbers multiplied with bigger Zuul and origin clusters. More acutely, when a traffic spike occurs and Zuul instances scale up, every new instance opens its own full set of pools, sharply multiplying the connections held open to origins. Although this has been a known issue for a long time, it never became a critical pain point until we moved large streaming applications to mTLS and our Envoy-based service mesh.
