It’s the kind of thing that happens millions of times a year on Instagram, let alone the rest of the web. But what makes this particular post remarkable is how it appeared on Instagram’s radar in the first place — not through automated detection or user reporting, but through a police recommendation.
Over the past few years, the Metropolitan Police in London and other so-called Internet Referral Units have peppered platforms such as Facebook, Instagram and, most notably, YouTube with notifications about content that allegedly violates those companies’ terms of service but is not necessarily illegal. The Met’s IRU has placed particular emphasis on music videos. Last year alone, the unit reportedly recommended that YouTube remove 510 music videos, and YouTube complied nearly 97% of the time.
Critics have argued that the IRU operates in a dangerous legal no man’s land, where law enforcement uses the platforms’ own terms of service to circumvent the legal system in an affront to free speech. Even so, these units have only proliferated, with similar divisions appearing in Israel and across Europe. And where formal IRUs do not exist, some platforms, including Facebook, maintain dedicated channels for government agencies such as the US Department of Homeland Security to flag content.
But the Chinx (OS) video could prove to be a turning point in this relationship between the platforms and the police. Earlier this year, Meta referred the matter to its Oversight Board, which will soon decide the fate of the post in question. The board’s recommendations could also help answer a much bigger question facing Meta and other tech giants: how can they most effectively fend off mounting law enforcement pressure without sacrificing public safety?
“What the Oversight Board does will have consequences not only for Facebook, but also for governments,” said Daphne Keller, director of the Program on Platform Regulation at Stanford’s Cyber Policy Center.
The rise of IRUs is a relatively recent phenomenon: the Met’s so-called Operation Domain initiative, which focuses on “gang-related content,” launched in 2015. That same year, in the wake of the Charlie Hebdo attack in France, Europol stood up its own IRU.
But the biggest test of what IRUs could do, and of how far technology platforms would let them go, came in a 2019 case from Israel. In that case, two human rights groups petitioned Israel’s Supreme Court to shut down the country’s so-called Cyber Unit, arguing that its “alternative enforcement” mechanism violated people’s constitutional rights. The court ultimately dismissed the petition last year, in part because Facebook had not told users that it was removing their posts in response to the Cyber Unit’s referrals. Without that information, the petitioners could not prove that the Cyber Unit was responsible for the alleged censorship. The court also held that Facebook had removed the posts voluntarily, under its own terms of service.
For civil liberties experts, the case illustrated the role tech company decisions play in shielding IRUs from liability. “If law enforcement can hide behind this veil of ‘it’s just a corporate action,’ they can do things including systematically target dissent, while completely cutting off the ability for people to hold them accountable in court,” said Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology, which is funded in part by Meta and the Chan Zuckerberg Initiative.
The Chinx (OS) case presents another such test and a chance for Meta to do things differently. Although the board did not specify which police service had requested the removal, according to its summary, UK law enforcement warned Meta that the video in question “could contribute to a risk of offline harm.” The company apparently agreed and deleted the video not once but twice, after it was restored on appeal. But the decision clearly didn’t sit well with Meta. It referred the case to the board because, the company wrote in a blog post, “we found it important and difficult as it creates tension between our values of voice and safety.”
The board has since asked for feedback on the cultural significance of drill music in the UK, as well as on how social media platforms in general should handle requests from law enforcement regarding lawful speech.
The case has drawn the attention of leading civil liberties groups and internet governance experts, including CDT and the ACLU, both of which submitted comments urging the board to recommend stronger safeguards against the spread of IRUs. “Government-initiated takedowns — especially those that rely entirely on private content policies to remove lawful content — pose a danger to free speech,” read a comment from the ACLU and Keller.
There are, of course, good reasons for government and tech platforms to communicate. Law enforcement and government agencies often have a better understanding of emerging threats than platforms, and platforms increasingly rely on these agencies for advice.
The Met, for its part, said it “only works to identify and remove content that incites or encourages violence” and that it “does not seek to suppress freedom of expression through any type of music.” The force also touted the program’s effectiveness in comments to the UK Information Commissioner’s Office, writing: “The project to date has brought to light threats and risks that would not otherwise have been identified by other policing methods.”
But those systems are also ripe for abuse, Llansó said. For one thing, companies don’t always feel empowered to reject referrals from law enforcement. “There can be a feeling within a company that it’s better to be seen as a constructive and collaborative actor, rather than one who always rejects requests,” she said.
The fact that these requests happen out of public view also keeps users from understanding how government agencies are seeking to censor them, and leaves them no recourse to challenge the removals. “From a business perspective, there are potentially a lot of benefits to having experienced people, including law enforcement, aware of the material you want to get off of your service,” Llansó said. “From a user-centric perspective, it’s a whole different story.”
In the UK, where the Met’s IRU has focused on drill music in particular, such referrals may also disproportionately target Black communities engaged in entirely legal speech. “It’s so subjective,” said Paige Collings, senior speech and privacy activist at EFF, who recently wrote about the strained relationship between London police and YouTube. “It’s really race-focused, race-driven.” Collings points to the widespread use of rap music in court as part of a “much larger structural problem” of police attempting to use songs as evidence. “It’s not testimony or evidence of crimes,” Collings said. “[Songs] are artistic expressions.”
The CDT and ACLU call on the board to urge Meta to notify users when their content is removed in response to a law enforcement request and to publish detailed reports of such removals. Collings also thinks platforms should post examples of the content that is being taken down and list the formal and informal relationships they have with law enforcement.
The ACLU and Keller also recommended that Meta be more discerning about which law enforcement agencies it trusts and deny expedited reporting channels to IRUs that make bad-faith or inaccurate referrals. The Internet Archive has previously called out IRUs for making erroneous referrals, including the French IRU, which the Internet Archive says incorrectly flagged over 500 URLs as terrorist propaganda in 2019. “Governments should, and maybe do, have an obligation not to be so sloppy about it,” Keller said.
While the board’s recommendations aren’t binding, Meta’s decision to refer this case suggests the company is looking for help, or at least backup, as it figures out how to handle these requests in the future. And the volume of requests could soon increase: under the EU’s Digital Services Act, platforms must have “trusted flagger” programs, like the one YouTube already runs, which allow law enforcement and other public and private entities to flag content for removal.
For Meta and other companies operating in Europe, figuring out how to manage that potential surge in referrals without stifling users’ ability to speak freely is becoming increasingly urgent, Llansó said. The board’s recommendations could also give Meta cover for changes it may have wanted to make anyway. “This case could be a means for [Meta] to get the fact that this is happening on the record,” Llansó said. “If Facebook wants to roll out more transparency, they could use political support for that.”