Using predictive technology to foster constructive conversations
Nextdoor’s purpose is to cultivate a kinder world where everyone has a neighborhood they can rely on. We want to give neighbors ways to connect and be kind to each other, online and in real life. One of the biggest levers we have for cultivating more neighborly interactions is building strategic nudges throughout the product that encourage kinder conversations.
Today, we use a number of mechanisms to encourage kindness on the platform, including pop-up reminders that slow neighbors down before responding negatively. Over the past few years, we’ve used machine learning models to identify uncivil and contentious content.
Nextdoor defines harmful and hurtful content as anything uncivil, fraudulent, unsafe, or unwelcoming, including personal attacks, misinformation, and discrimination. In partnership with key experts and academics, we identified various moments where adding friction on the platform could help, and implemented those findings with our machine learning technology. Our goal: to encourage neighbors to have more mindful conversations. What if we could be proactive and intervene before a conversation sparks more abusive responses? Oftentimes unkind comments beget more unkind comments: 90% of abusive comments appear in a thread with another abusive comment, and 50% of abusive comments appear in a thread with 13+ other abusive comments.* By preventing some of these comments before they happen, we can avoid the resulting negative feedback loops.
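To make the intervention idea concrete, here is a minimal sketch (not Nextdoor’s actual system) of how a pre-post nudge decision might work. It assumes a hypothetical classifier has already assigned each comment a toxicity score in [0, 1]; the threshold values are illustrative placeholders, not published parameters.

```python
# Hypothetical pre-post nudge logic: show a "kindness reminder" before a new
# comment is posted if the draft itself looks abusive, or if the thread is
# already heated. Toxicity scores would come from a trained classifier;
# here they are plain inputs.

ABUSIVE_THRESHOLD = 0.8   # assumed score above which a comment counts as abusive
NUDGE_MIN_ABUSIVE = 1     # assumed: one abusive comment already signals risk

def should_show_nudge(thread_toxicity_scores, draft_score):
    """Return True if a pre-post reminder should be shown for the draft comment."""
    abusive_in_thread = sum(
        1 for s in thread_toxicity_scores if s >= ABUSIVE_THRESHOLD
    )
    # Intervene early: per the statistics above, abusive comments tend to
    # cluster in the same thread, so one abusive comment is a strong signal.
    return draft_score >= ABUSIVE_THRESHOLD or abusive_in_thread >= NUDGE_MIN_ABUSIVE
```

In practice the decision would combine many more signals (conversation velocity, report history, topic), but the core idea is the same: act on thread-level context, not just the single comment being written.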
Nextdoor’s thread model was built to identify potentially contentious conversations, and where intervention might preven...