Astra Dynamic Chunks: How We Saved by Redesigning a Key Part of Astra

Slack handles a lot of log data. In fact, we consume over 6 million log messages per second. That equates to over 10 GB of data per second! And it’s all stored using Astra, our in-house, open-source log search engine. To make this data searchable, Astra groups it by time and splits the data into blocks that we refer to as “chunks”.
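
To make that grouping concrete, here is a minimal sketch of time-based chunking. It is a hypothetical illustration rather than Astra's actual code: the `TimeChunker` class, its methods, and the choice of window are assumptions made for the example.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical illustration: bucket log messages into chunks by time window. */
class TimeChunker {
    private final Duration window;                                 // e.g. one hour per chunk
    private final Map<Instant, List<String>> chunks = new HashMap<>();

    TimeChunker(Duration window) {
        this.window = window;
    }

    void ingest(Instant timestamp, String logMessage) {
        // Truncate the timestamp to the start of its window, so every message
        // from the same window lands in the same chunk.
        long windowMillis = window.toMillis();
        Instant bucket = Instant.ofEpochMilli(
                (timestamp.toEpochMilli() / windowMillis) * windowMillis);
        chunks.computeIfAbsent(bucket, k -> new ArrayList<>()).add(logMessage);
    }
}
```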

Initially, we built Astra with the assumption that all chunks would be the same size. However, that assumption led to inefficiencies from unused disk space, which resulted in extra spend on our infrastructure.

We decided to tackle that problem in order to decrease the cost of operating Astra.

The Problem with Fixed-Size Chunks

The biggest problem with fixed-size chunks was that not all of our chunks were fully utilized, so in practice our chunks ended up being differently sized. While assuming fixed-size chunks simplified the code, it also led us to allocate more space than required on our cache nodes, resulting in unnecessary spend.

Previously, each cache node was given a fixed number of slots, where each slot would be assigned a chunk. While this simplified the code, it meant that undersized chunks of data would have more space allocated for them than required.
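
As a rough sketch of that fixed-slot model (hypothetical, not Astra's actual code; the `CacheNodeSlots` class and its methods are invented for illustration), every slot reserves the full per-chunk capacity no matter how large the assigned chunk really is:

```java
/** Hypothetical sketch of fixed-size slots: every slot reserves full capacity. */
class CacheNodeSlots {
    private final int slotCount;          // fixed number of slots on this node
    private final long slotCapacityBytes; // space reserved per slot
    private final long[] chunkSizes;      // actual size of the chunk in each slot
    private int assigned = 0;

    CacheNodeSlots(int slotCount, long slotCapacityBytes) {
        this.slotCount = slotCount;
        this.slotCapacityBytes = slotCapacityBytes;
        this.chunkSizes = new long[slotCount];
    }

    /** Assign a chunk to the next free slot; the slot still reserves full capacity. */
    void assignChunk(long chunkSizeBytes) {
        if (assigned == slotCount) {
            throw new IllegalStateException("no free slots on this cache node");
        }
        chunkSizes[assigned++] = chunkSizeBytes;
    }

    /** Space reserved across occupied slots that the assigned chunks never fill. */
    long allocatedButUnusedBytes() {
        long unused = 0;
        for (int i = 0; i < assigned; i++) {
            unused += slotCapacityBytes - chunkSizes[i];
        }
        return unused;
    }
}
```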

For instance, on a 3TB cache node, we would have 200 slots, where each slot was expected to hold a 15GB chunk. However, if any chunks were undersized (say 10GB instead of 15GB), this would result in extra space (5GB) being allocated but not used. On clusters where we’d have thousands of chunks, this quickly led to a rather large percentage of space being allocated but unused.
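
Plugging the numbers from that example into a short, purely illustrative calculation (the class below is not part of Astra): 200 slots of 15GB each reserve the node's full 3TB, but if every chunk actually holds only 10GB, roughly a third of the node's disk is allocated yet never used.

```java
public class WastedSpaceExample {
    public static void main(String[] args) {
        final int slots = 200;
        final double slotCapacityGb = 15.0; // space reserved per slot
        final double actualChunkGb = 10.0;  // an undersized chunk

        double allocatedGb = slots * slotCapacityGb;        // 3000 GB, the full 3TB node
        double usedGb = slots * actualChunkGb;              // 2000 GB actually stored
        double wastedGb = allocatedGb - usedGb;             // 1000 GB allocated but unused
        double wastedPct = 100.0 * wastedGb / allocatedGb;  // ~33%

        System.out.printf("Allocated %.0f GB, used %.0f GB, wasted %.0f GB (%.1f%%)%n",
                allocatedGb, usedGb, wastedGb, wastedPct);
    }
}
```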
