Meta is facing a fresh call to pay reparations to the Rohingya people for Facebook’s role in inciting ethnic violence in Myanmar.
A new report by Amnesty International, providing what it calls a “first-of-its-kind, in-depth human rights analysis” of the role played by Meta (aka Facebook) in the atrocities perpetrated against the Rohingya in 2017, has found the tech giant’s contribution to the genocide was not merely that of “a passive and neutral platform” which responded inadequately to a major crisis, as the company has sought to claim. Rather, it finds that Facebook’s core business model, behavioral advertising, was responsible for actively stoking hatred for profit.
“Meta’s content-shaping algorithms proactively amplified and promoted content on the Facebook platform which incited violence, hatred, and discrimination against the Rohingya,” Amnesty concludes, pointing the finger of blame at Meta’s tracking-based business model of “invasive profiling and targeted advertising”, which it says feeds off “inflammatory, divisive and harmful content”, a dynamic that implicates Facebook in actively inciting violence against the Rohingya through its prioritization of engagement for profit.
UN human rights investigators warned in 2018 that Facebook was contributing to the spread of hate speech and violence against Myanmar’s Muslim minority. The tech giant went on to accept that it had been “too slow to prevent misinformation and hate” spreading on its platform. However, it has not accepted the accusation that its use of engagement-maximizing algorithms was a potent fuel for ethnic violence, given its ad systems’ preference for amplifying polarization and outrage, in effect leading the platform to optimize for hate speech.
Amnesty says its report, which is based on interviews with Rohingya refugees, former Meta staff, civil society groups and other subject matter experts, also draws on fresh evidence gleaned from documents leaked last year by Facebook whistleblower Frances Haugen (aka the Facebook Papers), which it says provide “a shocking new understanding of the true nature and extent of Meta’s contribution to harms suffered by the Rohingya”.
“This evidence shows that the core content-shaping algorithms which power the Facebook platform — including its news feed, ranking, and recommendation features — all actively amplify and distribute content which incites violence and discrimination, and deliver this content directly to the people most likely to act upon such incitement,” it writes in an executive summary to the 74-page report.
“As a result, content moderation alone is inherently inadequate as a solution to algorithmically-amplified harms,” it goes on. “Internal Meta documents recognize these limitations, with one document from July 2019 stating, ‘we only take action against approximately 2% of the hate speech on the platform’. Another document reveals that some Meta staff, at least, recognize the limitations of content moderation. As one internal memo dated December 2019 reads: ‘We are never going to remove everything harmful from a communications medium used by so many, but we can at least do the best we can to stop magnifying harmful content by giving it unnatural distribution.’
“This report further reveals that Meta has long been aware of the risks associated with its algorithms, yet failed to act appropriately in response. Internal studies stretching back to as early as 2012 have consistently indicated that Meta’s content-shaping algorithms could result in serious real-world harms. In 2016, before the 2017 atrocities in Northern Rakhine State, internal Meta research clearly recognized that ‘[o]ur recommendation systems grow the problem’ of extremism. These internal studies could and should have triggered Meta to implement effective measures to mitigate the human rights risks associated with its algorithms, but the company repeatedly failed to act.”
‘Relentless pursuit of profit’
Amnesty says the Facebook Papers also show Meta has continued to ignore the risks generated by its content-shaping algorithms in “the relentless pursuit of profit” — with its executive summary citing an internal memo dated August 2019 in which a former Meta employee writes: “We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook and the family of apps are affecting societies around the world. We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.”
“Amnesty International’s analysis shows how Meta’s content-shaping algorithms and reckless business practices facilitated and enabled discrimination and violence against the Rohingya,” it continues. “Meta’s algorithms directly contributed to harm by amplifying harmful anti-Rohingya content, including advocacy of hatred against the Rohingya. They also indirectly contributed to real-world violence against the Rohingya, including violations of the right to life, the right to be free from torture, and the right to adequate housing, by enabling, facilitating, and incentivizing the actions of the Myanmar military. Furthermore, Meta utterly failed to engage in appropriate human rights due diligence in respect of its operations in Myanmar ahead of the 2017 atrocities. This analysis leaves little room for doubt: Meta substantially contributed to adverse human rights impacts suffered by the Rohingya and has a responsibility to provide survivors with an effective remedy.”
Meta has resisted calls to pay reparations to the (at least) hundreds of thousands of Rohingya refugees forced to flee the country since August 2017 under a campaign of violence, rape and murder perpetrated by Myanmar’s military junta. The company is also facing class action lawsuits brought by Rohingya refugees in the US and the UK, which seek billions of dollars in damages for its role in inciting the genocide.
Amnesty has added its voice to calls for Meta to pay reparations to the refugees.
Its report notes that Meta has previously denied requests by Rohingya refugee groups for support funding. One such request, from refugee groups in Cox’s Bazar, Bangladesh, asked it to fund a $1M education project in the camps; Meta declined, saying: “Facebook doesn’t directly engage in philanthropic activities.”
“Meta’s presentation of Rohingya communities’ pursuit of remedy as a request for charity portrays a deeply flawed understanding of the company’s human rights responsibilities,” Amnesty argues in the report, adding: “Despite its partial acknowledgement that it played a role in the 2017 violence against the Rohingya, Meta has to date failed to provide an effective remedy to affected Rohingya communities.”
Making a series of recommendations in the report, Amnesty calls for Meta to work with survivors and the civil society organizations supporting them to provide “an effective remedy to affected Rohingya communities” — including fully funding the education programming requested by Rohingya communities who are parties to a complaint against the company filed by refugees under the OECD Guidelines for Multinational Enterprises via the Irish National Contact Point.
Amnesty is also calling on Meta to undertake ongoing human rights due diligence on the impacts of its business model and algorithms, and to cease the collection of “invasive personal data which undermines the right to privacy and threatens a range of human rights”, as its report puts it. It urges the company to end the practice of tracking-based advertising and adopt less harmful alternatives, such as contextual advertising.
It also calls on the regulators and lawmakers who oversee Meta’s business in the US and the EU to ban tracking-based targeted advertising that relies on “invasive” practices or the processing of personal data, and to regulate tech firms so that content-shaping algorithms are not based on profiling by default but instead require an opt-in (rather than an opt-out), with consent for opting in being “freely given, specific, informed and unambiguous”, echoing calls by some lawmakers in the EU.
Meta was contacted for a response to Amnesty’s report. A company spokesperson sent this statement — attributed to Rafael Frankel, director of public policy for emerging markets, Meta APAC:
“Meta stands in solidarity with the international community and supports efforts to hold the Tatmadaw accountable for its crimes against the Rohingya people. To that end, we have made voluntary, lawful data disclosures to the UN’s Investigative Mechanism on Myanmar and to The Gambia, and are also currently participating in the OECD complaint process. Our safety and integrity work in Myanmar remains guided by feedback from local civil society organizations and international institutions, including the UN Fact-Finding Mission on Myanmar; the Human Rights Impact Assessment we commissioned in 2018; as well as our ongoing human rights risk management.”
Amnesty’s report also warns that its findings on what it calls “Meta’s flagrant disregard for human rights” are relevant not only to Rohingya survivors: the company’s platforms, it says, are at risk of contributing to “serious human rights abuses again”.
“Already, from Ethiopia to India and other regions affected by conflict and ethnic violence, Meta represents a real and present danger to human rights. Urgent, wide-ranging reforms are needed to ensure that Meta’s history with the Rohingya does not repeat itself elsewhere,” it adds.