EUGENE – Germany is typically known for more stringent speech laws than other Western democracies, and its latest pushback against the use of social media to incite violence and spread extremist messages raises big questions about effectiveness, jurisdiction, and legality.
On the heels of the recent Facebook Files leaks, which document how ICT companies moderate content, new German legislation pushes the onus of identifying and suppressing illegal speech onto multinational corporations.
Justified censorship or otherwise, the responsibility of deciding what is and isn’t legal speech should not fall to private corporations, especially ones with such poor track records.
Network Enforcement Act
To help curtail the spread of hate speech online, Germany passed a new law last Friday introducing steep fines for social media companies, “if they do not delete illegal, racist or slanderous comments and posts within 24 hours [of posting or flagging].” The Network Enforcement Act would issue fines that “could total up to 50 million euros, the equivalent of about $57 million in the United States.”
Just 24 hours to accurately identify and delete posts, pictures and videos? Those are some serious terms, especially for companies not exclusively based or operating on German soil.
As I’ll argue throughout, the key piece here is accuracy. Delete the wrong post and you’ve taken away someone’s voice; leave up the wrong post and you’ve not only helped spread hate and violence but also earned a massive fine.
Justification and Context
Between Germany’s self-aware relationship with hate speech (see Volksverhetzung), a recent history peppered with ISIS-inspired acts of terror, and nationalist violence targeted at a swelling immigrant population, the country has plenty of reasons to keep a closer grip on social media:
On ISIS recruitment: “The Islamic State differentiates itself from its terrorist predecessors by virtue of its high-quality media. But that content would still not be so widely distributed via so many different channels were it not for the group’s willingness to crowd-source a great deal of its propaganda chores to total strangers [and dedicated fans…].” – Why ISIS Is Winning the Social Media War, Wired
On nationalist violence: “One such [Twitter] account is called @einzelfallinfos — roughly, “individual case reports.” The account’s name mocks the mainstream narrative in Germany that crimes committed by refugees and migrants are “individual cases” — something the account’s operators clearly dispute. Instead, they see a “recurring pattern” of sexual assaults against women perpetrated by young men of mostly Arab origin. So they persistently post official police reports about, as they put it, “crimes committed by refugees, migrants, and presumed migrants.” – Germany vs. Twitter, New York Times
No one is denying that Germany has reason to be concerned here, but when it comes down to it, the Network Enforcement Act leaves privacy and technology experts with a lot of unanswered questions.
Early Benchmarks Don’t Look Promising
“A study [on implementation of the NEA] had shown that major social media platforms were slow to react to reported illegal content – including slander and incitement to hate, as well as Holocaust denial and glorification of National Socialism, all of which are illegal in Germany,” writes Haaretz.
According to German authorities, within a 24 hour testing window:
- Facebook removed just 39 percent of tagged hate speech.
- Twitter removed just 1 percent of tagged hate speech.
- YouTube removed close to 90 percent of tagged hate speech.
The lack of precision seen in the NEA study seems consistent with day-to-day social content moderation, in which outsourced and overworked employees, exposed to large quantities of vulgar content, are asked to make decisions based on evolving guidelines.
“The training and support was absolutely not sufficient,” according to an analyst who worked at a company contracted by Facebook to moderate content.
Nothing New Under the Sun
All of this comes on the heels of a recent leak of Facebook content moderation guidelines, which many privacy advocates panned as opaque and out of touch:
Over the past decade, the company has developed hundreds of rules, drawing elaborate distinctions between what should and shouldn’t be allowed, in an effort to make the site a safe place for its nearly 2 billion users…
[New documents] shed light on the secret guidelines that Facebook’s censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions…
While Facebook was credited during the 2010-2011 “Arab Spring” with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens. – Facebook’s Secret Censorship Rules Protect White Men from Hate Speech But Not Black Children, Pro Publica
In Search of a Better Solution
If we can’t trust social media giants to police for themselves accurately, quickly or fairly, why should we trust them to handle national security and legal definitions?
Social media like Facebook and Twitter may act as public spheres, but at the end of the day it is important to remember that they are privately operated companies with physical locations and legal obligations.
It’s as if, in a rush to better protect innocent people, we neglected to ask how companies like Facebook and Twitter should moderate inflammatory content and jumped straight to whether they should.
While I’ll be the first to admit that there is likely no perfect solution, I think it’s fair to at least do ourselves the service of understanding how we got here, before deciding where next to go.
To Bigotry No Sanction, to Persecution No Assistance
Yes, I know the NEA is a German piece of legislation, primarily affecting the content posted and consumed by German citizens. But I wouldn’t be a true Yank without at least examining (read: injecting) what we’ve done as purveyors of free speech here in the world’s greatest melting pot, These United States.
I’m talking, of course, about the competing ideals of Safe Harbor and Freedom of Expression. Should these be considered when looking at how to moderate content?
Safe Harbor on the Internet High Seas
Say you run an online forum, where users from all over the web congregate to share information and ideas. Sure, keeping the content interesting and engaging is in your best interest, as is suppressing information that could put off or scare away users. The more visits you have, the higher your ad revenue. Self-preservation seems rational enough.
But should you also be held legally responsible for policing each piece of content before it gets published? Is hosting users’ content for them the same as saying or sharing it yourself?
What if some of the content is political (criticism of a leader), illegally obtained (like a pirated movie), categorically obscene (say child pornography) or inciting of violence (lookin’ at you ISIS)?
The Telecommunications Act of 1996 gives “most tech companies, including Facebook, legal immunity for the content users post on their services… Section 230 of the Telecommunications Act was passed after Prodigy was sued and held liable for defamation for a post written by a user on a computer message board.”
In this case, a free market of ideas prevails and site moderators are motivated to police bad behavior, not for fear of legal fines, but for fear of losing their most valuable asset: users.
When we look at cases without safe harbor, content moderators are considerably more likely to censor broadly first and ask questions later than to decide on a case-by-case basis. This is especially egregious in situations where dissent is prohibited. I’m looking at you, filternets of Iran, Russia and China.
Where do we go from here?
I’m not advocating for a free-for-all, Wild West Web with no repercussions for hate speech. I do think, however, that when we threaten the Facebooks and Twitters with huge fines, they are far more likely to cast a wider censorship net than necessary. If their track records have shown us anything, it’s that quality (precision and accuracy) comes second to quantity.
If we encourage content moderators to restrict free expression out of fear of repercussion, and not out of the best interests of the user, we strip the internet of its inherent humanist value – value that should be enjoyed by all without the fear of unjustified censure.