Amnesty International says the use of algorithmic virality, in which certain content is amplified to reach a wide audience, posed significant risks in conflict-prone areas, as what happened online could easily spill over into violence offline. It faulted Meta for prioritizing engagement over the welfare of Tigrayans, for subpar moderation that let disinformation thrive on its platform, and for disregarding earlier warnings that Facebook was at risk of misuse.


The report recounts how, both before the war broke out and during the conflict, Meta failed to heed warnings from researchers, Facebook’s Oversight Board, civil society groups and its “Trusted Partners” that Facebook could contribute to mass violence in Ethiopia.


For instance, in June 2020, four months before the war broke out in northern Ethiopia, digital rights organizations sent a letter to Meta about the harmful content circulating on Facebook in Ethiopia, warning that it could “lead to physical violence and other acts of hostility and discrimination against minority groups.”


The letter made a number of recommendations, including “ceasing algorithmic amplification of content inciting violence, temporary changes to sharing functionalities, and a human rights impact assessment into the company’s operations in Ethiopia.”


Amnesty International says similar systemic failures were witnessed in Myanmar three years before the war in Ethiopia, where an automated content removal system that could not read the local typeface allowed harmful content to stay online.