
Company: Grab

Grab (formerly MyTeksi) is a technology and transport network company serving Southeast Asia, headquartered in Singapore. It began as a mobile app founded in 2012 by Anthony Tan and Tan Hooi Ling in Petaling Jaya, Selangor, Malaysia. The app connects passengers with drivers, offering vehicle hire and on-demand ride-sharing as part of the sharing economy. Passengers can book rides by sending a text message or through the mobile app, which also lets them track the vehicle's location. During the pandemic, Grab also began operating food delivery, courier, e-commerce, and other services, becoming an all-round everyday services platform.

Android App Size at Scale with Project Bonsai

As our app grew to include more features, Grab recognised its increasing size as a potential hurdle for new users with small storage capacities or restricted Internet bandwidth. Read on to find out more…

Enabling near real-time data analytics on the data lake

In the domain of data processing, data analysts run their ad hoc queries on the data lake. The lake serves as an interface between our analytics and production environment, preventing downstream queries from impacting upstream data ingestion pipelines. To ensure efficient data processing in the data lake, choosing appropriate storage formats is crucial.

The journey of building a comprehensive attribution platform

The Grab superapp offers a comprehensive array of services from ride-hailing and food delivery to financial services. This creates multifaceted user journeys, traversing homepages, product pages, checkouts, and interactions with diverse content, including advertisements and promo codes.

Rethinking Stream Processing: Data Exploration

In this digital age, companies collect multitudes of data that enable the tracking of business metrics and performance. Over the years, data analytics tools for data storage and processing have evolved from the days of Excel sheets and macros to more advanced MapReduce-based tools like Spark, Hadoop, and Hive. This evolution has allowed companies, including Grab, to perform modern analytics on the data ingested into the Data Lake, empowering them to make better data-driven business decisions. This form of data will be referenced within this document as “Offline Data”.

With innovations in stream processing technology like Spark and Flink, there is now more interest in unlocking value from streaming data. This form of continuously-generated, high-volume data will be referenced within this document as “Online Data”. In the context of Grab, the streaming data is usually materialised as Kafka topics (“Kafka Stream”) produced by its stream processing framework. This data is largely unexplored until it is eventually sunk into the Data Lake as Offline Data, as part of the data journey. This introduces some latency before the data can be used by data analysts to inform decisions.
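
As a minimal sketch of tapping Online Data before it lands in the Data Lake, the snippet below reads a Kafka topic with Spark Structured Streaming and prints records to the console for ad hoc inspection. The broker address and topic name are placeholders, and this is not Grab's actual exploration tooling.

```python
from pyspark.sql import SparkSession

# Assumes the spark-sql-kafka connector package is available on the classpath.
spark = (
    SparkSession.builder
    .appName("online-data-exploration")
    .getOrCreate()
)

# Subscribe to a Kafka topic as a streaming DataFrame.
# "broker:9092" and "orders" are hypothetical placeholders.
stream_df = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers keys and values as bytes; cast them to strings for inspection.
decoded = stream_df.selectExpr(
    "CAST(key AS STRING) AS key",
    "CAST(value AS STRING) AS value",
    "timestamp",
)

# Print incoming records instead of waiting for them to be sunk
# into the Data Lake as Offline Data.
query = (
    decoded.writeStream
    .format("console")
    .option("truncate", "false")
    .start()
)
query.awaitTermination()
```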

Kafka on Kubernetes: Reloaded for fault tolerance

Coban - Grab’s real-time data streaming platform - has been operating Kafka on Kubernetes with Strimzi in production for about two years. In a previous article (Zero trust with Kafka), we explained how we leveraged Strimzi to enhance the security of our data streaming offering.

In this article, we are going to describe how we improved the fault tolerance of our initial design, to the point where we no longer need to intervene if a Kafka broker is unexpectedly terminated.

Sliding window rate limits in distributed systems

Grab uses Roaring Bitmaps to limit the number of communications sent, avoiding information overload and messages being treated as spam by users. Users are divided into different cohorts, and the limit for each cohort is determined by how its users interact with the app. Roaring Bitmaps optimise storage and performance by using run-length-encoded (RLE) containers and can switch container types dynamically. Redis was chosen as the data store, with a sliding-log rate-limiting algorithm used to count the number of requests within a given time window. The Lua script is uploaded with Redis's SCRIPT LOAD command, which returns its SHA1 hash; the script is then invoked with EVALSHA to execute the rate-limiting logic, and Redis pipelining is used for batching. Pipelining groups multiple commands and sends them to the relevant node in a single network call, after which the rate-limiting results are returned to the client. To prevent a long-running Lua script from blocking other Redis commands, the script is kept to under 5 milliseconds of execution time. In addition, the script receives the current time as an argument to account for possible clock differences when it is executed on node replicas.
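
To make the flow concrete, here is a minimal Python sketch of a sliding-log rate limiter along the lines described above, using redis-py. The key naming, window size, limits, and the exact Lua logic are assumptions for illustration: the script is loaded once with SCRIPT LOAD, invoked via EVALSHA with the current time passed in as an argument, and batched with pipelining.

```python
import time
import uuid

import redis

# Sliding-log rate limiter kept in a per-user sorted set.
# KEYS[1] : rate-limit key for one user (e.g. "rl:<user_id>")
# ARGV[1] : window length in milliseconds
# ARGV[2] : maximum requests allowed inside the window
# ARGV[3] : current time in milliseconds, supplied by the caller so that
#           replicas re-executing the script agree on "now"
# ARGV[4] : unique request id used as the sorted-set member
SLIDING_LOG_LUA = """
local window = tonumber(ARGV[1])
local limit  = tonumber(ARGV[2])
local now    = tonumber(ARGV[3])

-- Evict log entries that have slid out of the window.
redis.call('ZREMRANGEBYSCORE', KEYS[1], 0, now - window)

if redis.call('ZCARD', KEYS[1]) < limit then
  redis.call('ZADD', KEYS[1], now, ARGV[4])
  redis.call('PEXPIRE', KEYS[1], window)
  return 1
end
return 0
"""

r = redis.Redis(host="localhost", port=6379)

# SCRIPT LOAD uploads the script and returns its SHA1 digest.
script_sha = r.script_load(SLIDING_LOG_LUA)


def allow(user_id: str, limit: int, window_ms: int) -> bool:
    """Return True if this request is within the user's sliding-window limit."""
    now_ms = int(time.time() * 1000)
    request_id = str(uuid.uuid4())
    return r.evalsha(
        script_sha, 1, f"rl:{user_id}", window_ms, limit, now_ms, request_id
    ) == 1


# Pipelining groups many EVALSHA calls into a single network round trip.
pipe = r.pipeline(transaction=False)
for uid in ["user-1", "user-2", "user-3"]:
    now_ms = int(time.time() * 1000)
    pipe.evalsha(script_sha, 1, f"rl:{uid}", 60_000, 100, now_ms, str(uuid.uuid4()))
allowed_flags = pipe.execute()  # e.g. [1, 1, 0]
```

Keeping the script this small helps it stay well under the 5 ms budget mentioned above, so it does not block other commands on the Redis node.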

An elegant platform

Coban, Grab's real-time data streaming platform, is built around two core components: the Coban UI and Heimdall. The Coban UI is a front-end web interface through which users can create data streaming resources in just a few clicks; it integrates seamlessly with multiple monitoring systems so that key metrics and health status can be tracked in real time. Heimdall is the backend behind the Coban UI, exposing APIs to manage data streaming resources, covering create, read, update, and delete operations. Heimdall also centralises and serves the metadata associated with these resources for other Grab systems to consume. By pulling data from various upstream systems and platforms and continuously enriching and refreshing that metadata, Heimdall can provide other Grab platforms with comprehensive and accurate information about data streaming resources. In addition, Heimdall feeds the entire resource inventory, including Kafka streams, into Grab's inventory platform.
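
As a rough, hypothetical sketch of the shape such a control-plane backend can take (none of these names or fields are Heimdall's actual schema), the following models a streaming resource with attached metadata and a small in-memory CRUD store:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class StreamResource:
    """A data streaming resource plus the metadata served to other platforms."""
    name: str                  # e.g. a Kafka topic name
    kind: str                  # e.g. "kafka-topic", "stream-processing-pipeline"
    owner_team: str
    metadata: Dict[str, str] = field(default_factory=dict)  # enriched from upstream systems


class ResourceRegistry:
    """In-memory stand-in for a backend exposing create/read/update/delete."""

    def __init__(self) -> None:
        self._store: Dict[str, StreamResource] = {}

    def create(self, resource: StreamResource) -> None:
        self._store[resource.name] = resource

    def read(self, name: str) -> Optional[StreamResource]:
        return self._store.get(name)

    def update_metadata(self, name: str, **metadata: str) -> None:
        # Continuously enrich the resource with metadata pulled from upstream systems.
        self._store[name].metadata.update(metadata)

    def delete(self, name: str) -> None:
        self._store.pop(name, None)


registry = ResourceRegistry()
registry.create(StreamResource(name="orders-events", kind="kafka-topic", owner_team="coban"))
registry.update_metadata("orders-events", retention="7d", partitions="32")
print(registry.read("orders-events"))
```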

Road localisation in GrabMaps

By optimising how we use geohashes, we can process map data more efficiently. At the same time, we monitor the contents of each geohash, taking road density into account, so that the computational work is evenly balanced. Choosing the appropriate resources is also key to optimising time and cost. In short, by optimising geohashes and balancing resource selection, we can achieve the best price-performance ratio.
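
A simplified sketch of the idea, assuming nothing about GrabMaps' actual pipeline: encode road segment midpoints into geohash cells, then use the per-cell road counts to decide how to split work across resources. The coordinates below are made up.

```python
from collections import Counter
from typing import List, Tuple

BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"


def geohash(lat: float, lon: float, precision: int = 6) -> str:
    """Standard geohash encoding: interleave longitude/latitude bits, base32-encode."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits: List[int] = []
    use_lon = True  # geohash starts with a longitude bit
    while len(bits) < precision * 5:
        if use_lon:
            mid = (lon_lo + lon_hi) / 2
            bits.append(1 if lon >= mid else 0)
            lon_lo, lon_hi = (mid, lon_hi) if lon >= mid else (lon_lo, mid)
        else:
            mid = (lat_lo + lat_hi) / 2
            bits.append(1 if lat >= mid else 0)
            lat_lo, lat_hi = (mid, lat_hi) if lat >= mid else (lat_lo, mid)
        use_lon = not use_lon
    chars = []
    for i in range(0, len(bits), 5):
        value = 0
        for b in bits[i:i + 5]:
            value = (value << 1) | b
        chars.append(BASE32[value])
    return "".join(chars)


# Hypothetical road segment midpoints (lat, lon) around Singapore.
segments: List[Tuple[float, float]] = [
    (1.3521, 103.8198),
    (1.3530, 103.8201),
    (1.2905, 103.8520),
]

# Road density per geohash cell drives how much compute each cell should receive.
density = Counter(geohash(lat, lon, precision=5) for lat, lon in segments)
for cell, count in density.most_common():
    print(cell, count)
```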

Graph modelling guidelines

Graph modelling is a highly effective technique for representing and analysing complex and interconnected data across various domains. By deciphering relationships between entities, graph modelling can reveal insights that might be otherwise difficult to identify using traditional data modelling approaches. In this article, we will explore what graph modelling is and guide you through a step-by-step process of implementing graph modelling to create a social network graph.
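
As a small, generic illustration of the kind of social network graph the article walks through (the people and relationships below are made up), one can model users as nodes and relationships as typed edges:

```python
import networkx as nx

# Users are nodes carrying attributes; relationships are edges with a type attribute.
g = nx.Graph()
g.add_node("alice", city="Singapore")
g.add_node("bob", city="Jakarta")
g.add_node("carol", city="Kuala Lumpur")

g.add_edge("alice", "bob", relationship="friend")
g.add_edge("bob", "carol", relationship="colleague")

# Traversing relationships surfaces structure that is awkward to express in flat tables,
# e.g. friends-of-friends of a given user.
friends_of_friends = {
    fof
    for friend in g.neighbors("alice")
    for fof in g.neighbors(friend)
    if fof != "alice"
}
print(friends_of_friends)  # {'carol'}
```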

LLM-powered data classification for data entities at scale

At Grab, we deal with petabyte-level data and manage countless data entities ranging from database tables to Kafka message schemas. Understanding what these entities contain is crucial for us, as it not only streamlines data access management to safeguard the data of our users, drivers and merchant-partners, but also improves the data discovery process for data analysts and scientists to easily find what they need.

The Caspian team (Data Engineering team) collaborated closely with the Data Governance team on automating governance-related metadata generation. We started with Personal Identifiable Information (PII) detection and built an orchestration service using a third-party classification service. With the advent of the Large Language Model (LLM), new possibilities dawned for metadata generation and sensitive data identification at Grab. This prompted the inception of the project, which aimed to integrate LLM classification into our existing service. In this blog, we share insights into the transformation from what used to be a tedious and painstaking process to a highly efficient system, and how it has empowered the teams across the organisation.
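
The details of Grab's orchestration service and prompts are not given in this excerpt, so the sketch below only shows the general shape of the idea: build a classification prompt from an entity's schema, send it to whatever LLM client is available (represented here by a placeholder callable), and validate the returned tags before using them as metadata. The tag taxonomy and prompt wording are assumptions.

```python
import json
from typing import Dict, List

# Hypothetical sensitivity tags; a real taxonomy is defined by data governance.
ALLOWED_TAGS = ["PII", "FINANCIAL", "OPERATIONAL", "NONE"]

PROMPT_TEMPLATE = """You are a data classification assistant.
Classify each column of the table below with exactly one tag from: {tags}.
Respond with a JSON object mapping column name to tag.

Table: {table_name}
Columns: {columns}
"""


def build_prompt(table_name: str, columns: List[str]) -> str:
    """Turn an entity's schema into a classification prompt."""
    return PROMPT_TEMPLATE.format(
        tags=", ".join(ALLOWED_TAGS),
        table_name=table_name,
        columns=", ".join(columns),
    )


def classify(table_name: str, columns: List[str], llm_call) -> Dict[str, str]:
    """llm_call is a placeholder for any chat-completion style client."""
    raw = llm_call(build_prompt(table_name, columns))
    tags = json.loads(raw)
    # Guard against hallucinated tags before persisting them as metadata.
    return {col: tag for col, tag in tags.items() if tag in ALLOWED_TAGS}


# Example with a stubbed LLM response:
fake_llm = lambda prompt: json.dumps({"driver_phone": "PII", "trip_count": "OPERATIONAL"})
print(classify("driver_stats", ["driver_phone", "trip_count"], fake_llm))
```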

PII masking for privacy-grade machine learning

At Grab, data engineers work with large sets of data on a daily basis. They design and build advanced machine learning models that provide strategic insights using all of the data that flow through the Grab Platform. This enables us to provide a better experience to our users, for example by increasing the supply of drivers in areas where our predictive models indicate a surge in demand in a timely fashion.

Grab has a mature privacy programme that complies with applicable privacy laws and regulations and we use tools to help identify, assess, and appropriately manage our privacy risks. To ensure that our users’ data are well-protected and avoid any human-related errors, we always take extra measures to secure this data.

However, data engineers will still require access to actual production data in order to tune effective machine learning models and ensure the models work as intended in production.

In this article, we will describe how Grab's data streaming team (Coban), along with the data platform and user teams, has enforced Personally Identifiable Information (PII) masking on machine learning data streaming pipelines. This ensures that we uphold a high standard and embody a privacy by design culture, while enabling data engineers to refine their models with sanitised production data.
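
As a minimal illustration of the general idea (not Coban's actual implementation), the snippet below replaces the PII fields of a streaming record with keyed hashes before it reaches the training pipeline, leaving non-sensitive fields untouched. The field names and salt handling are assumptions for the example.

```python
import hashlib
import hmac
from typing import Any, Dict

# Fields treated as PII in this toy example; real pipelines derive this from metadata tagging.
PII_FIELDS = {"passenger_name", "phone_number", "email"}
SECRET_SALT = b"replace-with-a-managed-secret"


def mask_pii(record: Dict[str, Any]) -> Dict[str, Any]:
    """Return a copy of the record with PII fields replaced by keyed hashes."""
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS and value is not None:
            digest = hmac.new(SECRET_SALT, str(value).encode(), hashlib.sha256).hexdigest()
            masked[key] = digest[:16]  # truncated hash still supports joins and counts
        else:
            masked[key] = value
    return masked


event = {"passenger_name": "Jane Doe", "phone_number": "+6591234567", "fare": 12.5}
print(mask_pii(event))
```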

Safer deployment of streaming applications

The Flink framework has gained popularity as a real-time stateful stream processing solution for distributed stream and batch data processing. Flink also provides data distribution, communication, and fault tolerance for distributed computations over data streams. To fully leverage Flink’s features, Coban, Grab’s real-time data platform team, has adopted Flink as part of our service offerings.

In this article, we explore how we ensure that deploying Flink applications remains safe as we incorporate the lessons learned through our journey to continuous delivery.

Evolution of quality at Grab

To achieve our vision of becoming the leading superapp in Southeast Asia, we constantly need to balance development velocity with maintaining the high quality of the Grab app. Like most tech companies, we started out with the traditional software development lifecycle (SDLC) but as our app evolved, we soon noticed several challenges, such as a high rate of feature bugs and production issues.

In this article, we dive deeper into our quality improvement journey that officially began in 2019, the challenges we faced along the way, and where we stand as of 2022.

How OVO determined the right technology stack for their web-based projects

In the current technology landscape, startups are developing rapidly. This usually leads to an increase in the number of engineers in teams, with the goal of increasing the speed of product development and delivery frequency. However, this growth often leads to a diverse selection of technology stacks being used by different teams within the same organisation.

Having different technology stacks within a team could lead to a bigger problem in the future, especially if documentation is not well-maintained. The best course of action is to pick just one technology stack for your projects, which raises the question, “How do I choose the best technology stack for my projects?”.

One such example is OVO, an Indonesian payments, rewards, and financial services platform within Grab. We share the process and analysis we used to determine the technology stack that best met our standards. By the end of the article, you may also learn how to choose the best technology stack for your own needs.

Migrating from Role to Attribute-based Access Control

Grab has always regarded security as one of our top priorities; this is especially important for data platform teams. We need to control access to data and resources in order to protect our consumers and ensure compliance with various, continuously evolving security standards.

Additionally, we want to keep the process convenient, simple, and easily scalable for teams. However, as Grab continues to grow, we have more services and resources to manage and it becomes increasingly difficult to keep the process frictionless. That’s why we decided to move from Role-Based Access Control (RBAC) to Attribute-Based Access Control (ABAC) for our Kafka Control Plane (KCP).

In this article, you will learn how Grab's streaming data platform team (Coban) eliminated the manual management of hundreds of roles and resource permissions, and reduced the operational overhead of requesting and approving permissions to zero by moving from RBAC to ABAC.
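
To make the contrast concrete, here is a toy Python sketch (not Grab's KCP policy engine): RBAC needs an explicit role binding maintained for every user-resource pair, while ABAC derives the decision from attributes already attached to the principal and the resource.

```python
from dataclasses import dataclass

# --- RBAC: every permission is an explicit, manually maintained role binding. ---
ROLE_BINDINGS = {
    ("alice", "kafka-topic:orders"): "producer",
    ("bob", "kafka-topic:orders"): "consumer",
}


def rbac_allowed(user: str, resource: str, action: str) -> bool:
    role = ROLE_BINDINGS.get((user, resource))
    return (role == "producer" and action == "write") or (role == "consumer" and action == "read")


# --- ABAC: the decision is computed from attributes, so there are no per-user bindings to manage. ---
@dataclass
class Principal:
    team: str
    environment: str


@dataclass
class Resource:
    owner_team: str
    environment: str


def abac_allowed(principal: Principal, resource: Resource, action: str) -> bool:
    # Example policy: a service may read/write topics owned by its own team
    # in the same environment.
    return (
        principal.team == resource.owner_team
        and principal.environment == resource.environment
        and action in {"read", "write"}
    )


print(rbac_allowed("alice", "kafka-topic:orders", "write"))                       # True
print(abac_allowed(Principal("coban", "prd"), Resource("coban", "prd"), "read"))  # True
```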

Securing GitOps pipelines

This article illustrates how Grab’s real-time data platform team secured GitOps pipelines at scale with our in-house GitOps implementation.
