A landmark ruling by Kenya’s High Court has opened the door for Meta, the parent company of Facebook, to be sued over its role in amplifying content related to ethnic violence in Ethiopia. The case, which stems from the 2020-2022 civil war in the Tigray region of northern Ethiopia, could set a significant precedent for how social media companies are held accountable for content moderation practices globally.
The plaintiffs, who include the Katiba Institute and two Ethiopian researchers, allege that Facebook’s algorithm and recommendation systems exacerbated the spread of hateful content during the conflict, leading to real-world violence. One plaintiff, Abrham Meareg, claims that his father was killed in 2021 as a result of threatening posts on the platform. Another, Fisseha Tekle, an Amnesty International researcher, asserts that he was targeted with hate speech over his human rights work related to the Ethiopian conflict.
Meta has faced increasing scrutiny for its handling of content moderation, particularly concerning hate speech and incitement to violence. The plaintiffs are seeking compensation for victims of violence and requesting that Meta overhaul its algorithm to prevent the promotion of harmful content. They also demand the creation of a restitution fund to support those affected by the platform’s role in spreading hate.
Meta has defended its actions, stating that it has made significant investments in content moderation, including the removal of harmful posts. The company has also pointed to its global efforts to enhance content oversight, though critics argue these measures have not been sufficient in addressing the scale of harm.
This case is the third lawsuit Meta faces in Kenya; earlier legal challenges were brought by content moderators employed by local contractors who alleged poor working conditions. The decision to let the case proceed signals a growing willingness by Kenyan courts to take on global disputes involving tech companies and their impact on local communities.
The outcome of this case could have far-reaching implications for social media governance, raising critical questions about platform accountability and the responsibility of companies to curb harmful content.