Awarded as part of the Bilateral Collaboration Projects between CONICET & Royal Society.
PIs: Daniela Godoy & Arkaitz Zubiaga
Collaborators (in alphabetical order):
- Rabab Alkhalifa
- Aiqi Jiang
- Matthew Purver
- Juan Manuel Rodriguez
- Silvia Schiaffino
- Antonela Tommasel
It is common for online social media platforms to recommend content to their users through features such as "whom to follow" or personalised feeds. However, social media platforms have also been shown to spread harmful content. This project aims to detect and mitigate online harms, including hate speech and misinformation. These problems have contributed to mental health issues caused by hate speech messages, as well as to disruption of democratic processes through the diffusion of misinformation. Identifying harmful content online has, however, proven difficult, with not only the scientific community but also social media platforms and governments worldwide calling for support to develop effective methods.
This project aims to develop novel recommendation algorithms for social media that prevent the amplification of online harms, including misinformation and hate speech. Sources likely to generate this sort of online harm will be identified and flagged before the recommender system makes its decisions. The algorithms will need to consider three main aspects: (1) detection of misinformation and abusive language from a multilingual perspective, for English and Spanish; (2) incorporation of the notion of content toxicity, and of the accounts that promote it, into the inner components of recommendation algorithms; and (3) definition of mechanisms to counteract user exposure to online harm, such as the addition of features (e.g. provenance and context of information) that increase user awareness and help foster informed decision making. The algorithms will also need to avoid biases inherent in traditional recommender systems, such as popularity and homogeneity biases.
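As an illustration of aspect (2), one simple way to incorporate content toxicity into a recommender is to penalise each candidate item's relevance score by an estimated toxicity score before ranking. The sketch below is a minimal, hypothetical example and not the project's actual algorithm; the `relevance` and `toxicity` values stand in for the outputs of a recommender model and a harm-detection classifier, and the penalty weight is an assumed parameter.

```python
# Hypothetical sketch of harm-aware re-ranking: down-weight candidate
# items by an estimated toxicity score before recommending the top-k.

def harm_aware_rank(candidates, k=3, toxicity_penalty=1.0):
    """Re-rank candidates by relevance penalised by toxicity.

    candidates: list of dicts with 'id', 'relevance' (0..1) and
    'toxicity' (0..1, e.g. from a hate-speech or misinformation
    classifier applied to the item or its source account).
    """
    def score(item):
        return item["relevance"] - toxicity_penalty * item["toxicity"]

    ranked = sorted(candidates, key=score, reverse=True)
    return [item["id"] for item in ranked[:k]]

candidates = [
    {"id": "post_a", "relevance": 0.9, "toxicity": 0.8},  # relevant but toxic
    {"id": "post_b", "relevance": 0.7, "toxicity": 0.1},
    {"id": "post_c", "relevance": 0.6, "toxicity": 0.0},
    {"id": "post_d", "relevance": 0.4, "toxicity": 0.9},
]
print(harm_aware_rank(candidates))  # toxic posts fall in the ranking
```

In this toy example, the highly relevant but toxic `post_a` drops below the low-toxicity items, showing how a toxicity signal can counteract pure relevance ranking; a production system would instead tune the penalty (or learn it) to balance recommendation quality against harm exposure.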
- Hate Speech Bias
- Tracking the evolution of crisis processes and mental health on social media during the COVID-19 pandemic
- I Want to Break Free! Recommending Friends from Outside the Echo Chamber
- OHARS: Second Workshop on Online Misinformation- and Harm-Aware Recommender Systems
- Is my model biased? Exploring Unintended Bias in Misogyny Detection Tasks