Project Lighthouse — Part 1: P-sensitive k-anonymity

Skyler Wharton
The Airbnb Tech Blog
9 min read · Sep 1, 2020


A two-part series on how we will measure discrepancies in Airbnb guest acceptance rates using anonymized perceived demographic data. This first part gives an overview of the privacy model we use: p-sensitive k-anonymity.

Photo of a family of five (two parents, three kids) reading together under a blanket on a couch in an Airbnb

Introduction

In June, the Airbnb Anti-Discrimination product team announced Project Lighthouse, an initiative to measure and combat discrimination when booking or hosting on Airbnb. We launched this project in partnership with Color Of Change, the nation’s largest online racial justice organization with over 7 million members, as well as with guidance from other leading civil rights and privacy rights organizations.

At the core of Project Lighthouse is a novel system, being used in the United States, to measure discrepancies in people’s experiences on the Airbnb platform that could be a result of discrimination and bias. This system is built to measure these discrepancies using perceived race data that is not linked to individual Airbnb accounts. By conducting this analysis, we can understand the state of our platform with respect to inclusion, and begin to develop and evaluate interventions that lead to more equitable outcomes on Airbnb’s platform.

We focus on measuring inequities with respect to perceived race (instead of self-identified race) because discrimination is often a result of one person’s perception of another. Additionally, the data collected for Project Lighthouse will be handled in a way that protects people’s privacy and will be used exclusively for anti-discrimination work.

The goal of this series of blog posts is to provide a high-level overview of the system we have developed. Our work so far has been focused on two objectives:

  1. Design a privacy-centric process that utilizes anonymized perceived race data.
  2. Ensure that the resultant data can be used to accurately measure potential experience gaps in the product.

In this post, we will focus on how we can use data that satisfies p-sensitive k-anonymity to calculate acceptance rates by guest perceived race.

Our use of p-sensitive k-anonymity is just one part of the privacy-by-design approach we’ve taken to protect our users’ privacy. Other privacy features of the program include user notice and choice — our community can opt out of participating. This project has also benefited from the amazing work done by the Airbnb Security Engineering team to keep all users’ data secure.

The second post in this series shows how we verify that this data can lead to accurate estimates of the impact of product interventions.

These blog posts are intended to serve as an introduction to the methodology underlying Project Lighthouse, which is described in greater detail in our technical paper. By publicly sharing our methodology, we hope to help other technology companies systematically measure and reduce discrimination on their platforms.

Setup

We designed Project Lighthouse to help us understand potential experience gaps in the product — that is, where one demographic group may experience the product differently from another. For example, we could use Project Lighthouse to investigate potential differences in reservation acceptance rates experienced by perceived demographic groups.

Throughout the rest of this post, we’ll use made-up data to work through an example of how we calculate acceptance rates by perceived race while preserving user privacy. We’ll first present a simple methodology that is not privacy-preserving, as a point of comparison; then we’ll present our actual methodology, which takes steps to anonymize the data.

We’ll demonstrate both methodologies on six fictional Airbnb users:

  • Michael
  • Stephen
  • Gerard
  • Nora
  • Suzanne
  • Aoife

Each fictional user has provided Airbnb with a profile photo. Each user has also attempted to make several bookings on Airbnb; each attempted booking has either been accepted or rejected. Michael, Stephen, and Aoife each have 6 accepted and 2 rejected bookings; Nora and Suzanne each have 4 accepted and 2 rejected bookings; and Gerard has 2 accepted and 2 rejected bookings.

Michael, Stephen, & Aoife: 6 accepted & 2 rejected. Nora & Suzanne: 4 accepted & 2 rejected. Gerard: 2 accepted & 2 rejected.
Figure 1. A fictional set of six Airbnb users.

Each of the boxes in Figure 1 contains information that we may have about an Airbnb guest: a user-submitted profile photo and first name, the number of times a booking they’ve attempted has been accepted, and the number of times a booking they’ve attempted has been rejected.

Simple approach

If we simply asked a person to label each guest with their perceived race based on their first name and profile photo, then we could easily calculate the exact acceptance rate for each perceived race.

Michael, Gerard, Nora, and Suzanne have a perceived race of X. Stephen and Aoife have a perceived race of Y.
Figure 2. A fictional set of six Airbnb users with associated perceived race labels.

For example, in Figure 2, the acceptance rate for guests perceived as “X” is 67% and the acceptance rate for guests perceived as “Y” is 75%. Therefore, the acceptance rate gap between groups X and Y is 8 percentage points.
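
To make this calculation concrete, here is a minimal Python sketch of the simple approach, using the made-up counts and labels from Figure 2. The data representation and function names are ours for illustration only; they are not part of any Airbnb system.

```python
from collections import defaultdict

# Fictional users from Figure 2: (perceived race, accepted bookings, rejected bookings)
users = [
    ("X", 6, 2),  # Michael
    ("Y", 6, 2),  # Stephen
    ("X", 2, 2),  # Gerard
    ("X", 4, 2),  # Nora
    ("X", 4, 2),  # Suzanne
    ("Y", 6, 2),  # Aoife
]

def acceptance_rate_by_group(rows):
    """Return accepted / (accepted + rejected) for each perceived-race group."""
    accepted = defaultdict(float)
    attempted = defaultdict(float)
    for race, n_accepts, n_rejects in rows:
        accepted[race] += n_accepts
        attempted[race] += n_accepts + n_rejects
    return {race: accepted[race] / attempted[race] for race in attempted}

print(acceptance_rate_by_group(users))
# {'X': 0.666..., 'Y': 0.75} -> a gap of roughly 8 percentage points
```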

The downside of this methodology is that Airbnb would have a database that contains a 1:1 mapping between each user and their perceived race. Such a database could cause significant harm to our users if a misactor (someone who intentionally or unintentionally uses the system in a way for which it is not designed) got access to it, since it would allow them to target users based on their perceived race. Thus, we would like to avoid having this 1:1 mapping, and instead store only the minimum amount of data required to compute the acceptance rate gap.

Privacy-centric approach

To achieve this, we embedded privacy into the design of Project Lighthouse. Specifically, a core component of Project Lighthouse is the preservation of a privacy property called p-sensitive k-anonymity.

We chose p-sensitive k-anonymity primarily to mitigate the risk of a type of data breach called certain attribute disclosure. Certain attribute disclosure occurs when a misactor is able to determine users’ perceived races with certainty, potentially at scale.

Our desire to prevent this type of event narrowed our focus in our system design process to properties such as k-anonymity, instead of others such as ε-differential privacy. (ε-differential privacy is best suited to prevent membership disclosure, which means that a misactor has determined whether a user is present in a dataset, and probabilistic disclosure, which means that a misactor has increased their knowledge of users’ attributes through an attack, but not to the point of being certain.)

To explain p-sensitive k-anonymity, let’s examine the following table, which represents the same set of fictional users as in Figure 2.

Accepted versus rejected counts by perceived race. 1 X and 2 Y have 6 versus 2. 2 X have 4 versus 2. 1 X has 2 versus 2.
Figure 3. A transposed version of Figure 2, with first names and profile photos removed.

Removing user IDs — as has been done in Figure 3 — would not be sufficient to protect user privacy. A guest could still have a unique combination of accepted and rejected bookings, allowing a misactor to discern their perceived race. For example, if a misactor got access to this table, knew that Gerard was present in the table, and knew that Gerard had been accepted 2 times and rejected 2 times, then they’d know that Gerard’s perceived race is “X”, because the only row where “Number of Accepts” = 2 and “Number of Rejects” = 2 has “Perceived race” = “X”.
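
To make the risk concrete, here is a small sketch of such a linkage attack on the de-identified table, assuming the misactor already knows the target’s accept and reject counts. The table layout and variable names are illustrative only.

```python
# De-identified rows from Figure 3: (perceived race, number of accepts, number of rejects)
table = [
    ("X", 6, 2), ("Y", 6, 2), ("X", 2, 2),
    ("X", 4, 2), ("X", 4, 2), ("Y", 6, 2),
]

# Background knowledge: the target (Gerard) has 2 accepts and 2 rejects.
matches = [race for race, accepts, rejects in table if (accepts, rejects) == (2, 2)]

if len(set(matches)) == 1:
    # Only one possible label remains, so the target's perceived race is disclosed with certainty.
    print("Disclosed:", matches[0])  # prints "Disclosed: X"
```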

Next, let’s k-anonymize our dataset:

K-anonymized version of Figure 3. 1 X and 2 Y have 6 versus 2. 3 X have 3.33 (underlined) versus 2.
Figure 4. A k-anonymized version of Figure 3. The underlined cells are where changes were made in order for the dataset to satisfy k-anonymity.

K-anonymity means that there are at least k instances of each unique pair of (number_of_accepts, number_of_rejects) in our dataset. Specifically, our dataset is now 3-anonymous (so k = 3) because each unique pair of accepts/rejects — (6, 2) and (3.33, 2) — appears at least 3 times in the dataset (in rows 1, 2, 6 and rows 3, 4, 5, respectively).

The text in three cells in rows 3, 4, and 5 in Figure 4 is underlined to show exactly which numbers were changed in order for the dataset to satisfy 3-anonymity. There are many ways we could have modified the dataset to achieve k-anonymity; in this case, we averaged the original contents of the underlined cells — 2, 4, and 4 — to get 3.33.
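
As a minimal sketch of the property (not of our production pipeline, which must also decide how rows are grouped before averaging), the k-anonymity check and the averaging step from Figure 4 look like this in Python:

```python
from collections import Counter

def is_k_anonymous(rows, k):
    """Check that each unique (accepts, rejects) pair occurs at least k times."""
    counts = Counter((accepts, rejects) for _race, accepts, rejects in rows)
    return all(n >= k for n in counts.values())

# Figure 3: the original de-identified rows.
original = [
    ("X", 6, 2), ("Y", 6, 2), ("X", 2, 2),
    ("X", 4, 2), ("X", 4, 2), ("Y", 6, 2),
]
print(is_k_anonymous(original, k=3))  # False: (2, 2) and (4, 2) occur fewer than 3 times

# Figure 4: rows 3-5 generalized by replacing their accept counts with the
# group average, (2 + 4 + 4) / 3 = 3.33.
avg = round((2 + 4 + 4) / 3, 2)
anonymized = [
    ("X", 6, 2), ("Y", 6, 2), ("X", avg, 2),
    ("X", avg, 2), ("X", avg, 2), ("Y", 6, 2),
]
print(is_k_anonymous(anonymized, k=3))  # True: every pair occurs at least 3 times
```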

Finally, let’s p-sensitize our dataset:

P-sensitized version of Figure 4. 1 X and 2 Y have 6 versus 2. 2 X and 1 Y (underlined) have 3.33 versus 2.
Figure 5. A p-sensitized version of Figure 4. The underlined cell is where a change was made in order for the dataset to satisfy p-sensitive k-anonymity.

P-sensitive k-anonymity means that, in addition to satisfying k-anonymity, each unique pair of (number_of_accepts, number_of_rejects) is associated with at least p distinct perceived race values. Specifically, our dataset is 2-sensitive 3-anonymous because each unique pair of accepts/rejects appears in at least 3 rows (k = 3) and is associated with at least 2 distinct perceived race values (p = 2): (6, 2) is associated with 2 perceived race values (“X” and “Y”), and (3.33, 2) is associated with 2 perceived race values (“X” and “Y”).

The text in one cell in Figure 5 is underlined to show exactly which perceived race label was changed in order for the dataset to be 2-sensitive. There are many ways we could have modified the dataset in order to achieve p-sensitive k-anonymity; in this case, we changed the original content of the underlined cell from “X” to “Y”.
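
Extending the sketch above, a p-sensitive k-anonymity check also requires that every equivalence class contain at least p distinct perceived race values. Again, this illustrates the property itself rather than Airbnb’s actual implementation.

```python
from collections import defaultdict

def is_p_sensitive_k_anonymous(rows, k, p):
    """Check that each unique (accepts, rejects) pair occurs at least k times
    and is associated with at least p distinct perceived-race values."""
    races_by_pair = defaultdict(list)
    for race, accepts, rejects in rows:
        races_by_pair[(accepts, rejects)].append(race)
    return all(len(races) >= k and len(set(races)) >= p
               for races in races_by_pair.values())

# Figure 4: 3-anonymous, but every (3.33, 2) row is labeled "X".
k_anonymous_only = [
    ("X", 6, 2), ("Y", 6, 2), ("X", 3.33, 2),
    ("X", 3.33, 2), ("X", 3.33, 2), ("Y", 6, 2),
]
print(is_p_sensitive_k_anonymous(k_anonymous_only, k=3, p=2))  # False

# Figure 5: one (3.33, 2) label changed from "X" to "Y"
# (which of the three rows is relabeled does not affect the check).
p_sensitized = [
    ("X", 6, 2), ("Y", 6, 2), ("Y", 3.33, 2),
    ("X", 3.33, 2), ("X", 3.33, 2), ("Y", 6, 2),
]
print(is_p_sensitive_k_anonymous(p_sensitized, k=3, p=2))  # True
```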

Our dataset is now p-sensitive k-anonymous:

Same as Figure 5 without the underline
Figure 6. The final p-sensitive k-anonymous dataset.

As a result of this transformation, we have substantially reduced the risk of exposing users’ perceived races to bad actors. Unfortunately, this data anonymization process has a downside: it may have jeopardized our ability to accurately calculate the acceptance rate for each perceived race. Our example demonstrates this risk: in the anonymized dataset, the acceptance rate for group X is 68% and the acceptance rate for group Y is 72%, as compared to acceptance rates of 67% and 75%, respectively, before anonymization occurred.
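
Recomputing the group acceptance rates on the anonymized rows, using the same illustrative representation as above, shows where the 68% and 72% figures come from:

```python
from collections import defaultdict

# Anonymized rows from Figure 6: (perceived race, accepts, rejects)
anonymized = [
    ("X", 6, 2), ("Y", 6, 2), ("Y", 3.33, 2),
    ("X", 3.33, 2), ("X", 3.33, 2), ("Y", 6, 2),
]

accepted, attempted = defaultdict(float), defaultdict(float)
for race, n_accepts, n_rejects in anonymized:
    accepted[race] += n_accepts
    attempted[race] += n_accepts + n_rejects

for race in sorted(attempted):
    print(race, round(accepted[race] / attempted[race], 2))
# X 0.68  (was 0.67 before anonymization)
# Y 0.72  (was 0.75 before anonymization)
```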

To explore this risk, we designed a set of simulations to understand how taking steps to anonymize data would affect our ability to measure the acceptance rate gap. With these simulations, we verified that our approach can still lead to accurate estimates of acceptance rates for guests of different perceived races. Our next blog post covers these simulations — including our methodology, results, and conclusions — in greater detail.

Project Lighthouse

The privacy property of p-sensitive k-anonymity is a core component of Project Lighthouse. Another core component is that, to associate perceived race labels with the p-sensitive k-anonymous data, we partner with an external research organization, which provides us with perceived race labels based on human judgments rather than a machine learning model.

We made this decision because we believe the best way to determine perceived race information is using human perception and not machine learning. The use of algorithms, facial recognition technology, or machine learning for something as sensitive as race would require careful consideration and input from, among others, civil rights and privacy organizations. Were we to consider doing so, we would seek their guidance and support in addition to making sure our users are informed.

Conclusion

In this blog post, we described how we analyze anonymized data to measure Airbnb acceptance rate gaps that can be attributed to a guest’s perceived race. We juxtaposed our own methodology against a simpler one to demonstrate the privacy benefits of our methodology. We hope that the system within Project Lighthouse — described in greater detail in our publicly available technical paper — can help other technology companies systematically measure and reduce discrimination on their platforms.

Project Lighthouse represents the collaborative work of many people both within and external to Airbnb. The Airbnb anti-discrimination product team is: Sid Basu, Ruthie Berman, Adam Bloomston, John Campbell, Anne Diaz, Nanako Era, Benjamin Evans, Sukhada Palkar, and Skyler Wharton. Within Airbnb, Project Lighthouse also represents the work of Crystal Brown, Zach Dunn, Janaye Ingram, Brendon Lynch, Margaret Richardson, Ann Staggs, Laura Rillos, and Julie Wenah. Finally, we’re grateful to the many people who reviewed our technical paper, on which this blog post is based.

We know that bias, discrimination, and systemic inequities are complex and longstanding problems. Addressing them requires continued attention, adaptation, and collaboration. We encourage our peers in the technology industry to join us in this fight, and to help push us all collectively towards a world where everyone can belong.

Want to learn more? Part 2 of the Project Lighthouse series is available here.

For more information about Project Lighthouse, visit: https://www.airbnb.com/resources/hosting-homes/a/a-new-way-were-fighting-discrimination-on-airbnb-201
