Published Date: 05-31-23

We have previously written about the quasi-independent global Oversight Board that Facebook (now Meta) set up to provide guidance on how to handle content moderation issues.

In February 2023, the Oversight Board announced it would take more content moderation cases and speed up its review processes. Simultaneously, the Board released a 2022 Q4 transparency report.

We read it eagerly, searching for signs that Meta was taking responsibility for the harmful and, in many cases, reprehensible material on its platform. We had long hoped that if Meta could, with the guidance of the Oversight Board, learn how to handle the most difficult content moderation issues, then maybe it could also figure out how to handle the far more straightforward problem of widespread piracy.

Well, that didn’t happen. The report only confirmed what we long feared – Meta SUCKS at “voluntary measures” – whether they are used to address complicated social and political matters, or the simple and obvious issue of digital piracy.

We came to this conclusion by examining two cases in which the Oversight Board actually agreed with Meta’s decisions. You would think those cases would show Meta at its best, right?

WRONG!

In the first case, the Oversight Board agreed with the removal of a post concerning the armed conflict in Ethiopia. The Tigray Regional State’s Communication Affairs Bureau had called for the Ethiopian army to “turn its gun” on Prime Minister Abiy Ahmed’s supporters. Meanwhile, it told Ahmed’s supporters to surrender or die.

The civil war that had erupted in Ethiopia about a year earlier, in November 2020, led to monstrous horrors, including widespread famine, massacres of civilians, acts of genocide and rape, and a refugee crisis. A lawsuit filed in Kenya, based on Facebook’s alleged role in fueling the violence, seeks $1.6 billion in restitution.

Although Facebook users and Meta’s automatic filters flagged the Tigray Regional State’s post, human moderators greenlit it. The post was removed only after one of Meta’s special crisis centers referred it to an expert. By that point, it had already accumulated more than 300,000 views.

While the Oversight Board upheld the removal, it also faulted Meta: “Meta has long been aware that its platforms have been used to spread hate speech and fuel violence in conflict.” That might be the understatement of the year!

Although the Board acknowledged minor improvements, it pointed out that Meta has neither communicated a clear policy for emergencies nor invested sufficient resources to prevent violence or incitement. As the Board had already told Meta in May 2021 and June 2022, the company needs to develop, disclose, and implement effective crisis protocols.

How do you like them apples? The next batch will turn your stomach, too.

In the second case, the Oversight Board affirmed Meta’s decision to allow a video showing a Dalit (“untouchable”) woman being sexually assaulted in public by several men.

Initially, Meta removed the video after a user complained. Then an employee called for further review, and Meta granted the video a “newsworthiness” exception, reinstating it behind a warning screen.

Once again, the Board sided with Meta but repeated past criticisms. Since the assaulted woman was clothed and, in the majority’s opinion, unidentifiable, the Board said the video raised awareness about a vulnerable group without jeopardizing the victim’s rights.

However, Meta had allowed the video while lacking “clear criteria to assess the potential harm caused by content that violates the Adult Sexual Exploitation policy.” The case illustrated that Meta’s “newsworthiness” standard is vague and arbitrary – as the Board had already indicated in June 2022.

Meta’s indifference to Oversight Board findings is part of a pattern. As far as the Board could tell from its 2022 Q4 report, Meta had made only 17% of the changes recommended since October 2020. For the remaining 83%, Meta adopted parts of recommendations (8%), claimed to be working on changes (38%), withheld evidence that would show whether recommended policies were in effect (20%), or rejected the recommendations outright (17%).

Do those statistics reflect effective oversight? Or do they show Meta flouting the very entity Meta established to hold itself accountable?

We’d be shocked if this were our first rodeo, but it isn’t: we’ve seen the same hypocrisy in Meta’s half-hearted, quarter-brained measures for stemming piracy.

As we’ve previously explained, Facebook launched a video-sharing product in 2009. Then, it waited NINE YEARS to release Rights Manager – even though it was easily predictable that video piracy would surge in the absence of a content protection tool.

Of course, Meta makes money from such irresponsible decisions. As pirated copyrighted material circulates on its platforms, Meta profits from surrounding ads.

As Meta’s actions show, it doesn’t care. It pretends to care, wasting the time of creatives and the Oversight Board alike. At least Oversight Board members get paid.

Like creatives before it, the Oversight Board is learning that voluntary measures won’t fix the problems with Facebook and Instagram. Meta says one thing but does another.

For a $600 billion company that acts in such bad faith, there seems to be only one solution. That’s why we’re praying for new regulations while howling, madly, at the moon.