Hate Speech Bias

Awarded by Facebook as part of the “Content Policy Research on Social Media Platforms” request for proposals.

PI: Antonela Tommasel

Collaborators (in alphabetical order):

  • Daniela Godoy - ISISTAN, CONICET-UNICEN, Argentina
  • Aiqi Jiang - Queen Mary University of London, UK
  • Arkaitz Zubiaga - Queen Mary University of London, UK

An important goal for hate speech detection techniques is to ensure that they are not unduly biased towards or against particular norms of offence. Training data is usually obtained by manually annotating a set of texts, so the reliability of human annotations is essential. At the same time, the ability to let big data “speak for itself” has been questioned, as its limited representativeness, spatiotemporal extent, and uneven demographic coverage can make it subjective. We hypothesize that demographics substantially affect the perception of hate speech. In this context, the research question guiding this project is:

How do latent norms and biases arising from demographics lead to biased datasets, and how does this affect the performance of hate speech detection systems?
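One way to probe whether annotator demographics shape hate speech labels is to measure inter-annotator agreement separately per demographic group. The sketch below computes Cohen's kappa, a standard chance-corrected agreement statistic, for two hypothetical annotator pairs; the labels and group names are illustrative assumptions, not data from this project. Systematically lower agreement within or across groups would be one signal of diverging norms of offence.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' labels over the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the agreement expected by chance given each
    annotator's label distribution.
    """
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(counts_a) | set(counts_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical annotations (1 = hateful, 0 = not hateful) from two
# annotator pairs drawn from different demographic groups.
group_x = ([1, 1, 0, 0, 1, 0, 1, 0], [1, 1, 0, 0, 1, 0, 0, 0])
group_y = ([1, 0, 0, 1, 1, 0, 1, 0], [0, 1, 1, 0, 1, 1, 0, 0])

print(f"kappa, group X: {cohens_kappa(*group_x):.2f}")  # high agreement
print(f"kappa, group Y: {cohens_kappa(*group_y):.2f}")  # near/below chance
```

A gap like the one in this toy example would suggest the groups apply different offence norms, which then propagates into any dataset their labels produce.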
Antonela Tommasel
Researcher at CONICET

My research interests include social computing applications of machine learning and recommender systems.
