The human collateral is staggering. Facebook has provided the platform on which violence and hatred are perpetrated around the globe. This goes way beyond election meddling and massive data breaches, according to Matthew Ingram in Columbia Journalism Review.
“In Myanmar, the company gave the military junta and anti-Muslim organizations a platform to spread hate and prejudice against the Rohingya ethnic minority, helping to fuel a genocidal campaign that began in 2017 and has left hundreds of thousands dead or homeless,” Ingram writes.
“In India, Facebook-owned WhatsApp has been implicated in a wave of violence, including multiple incidents in which mobs of attackers have beaten innocent men and women to death,” he continues.
The company has truly unprecedented reach and influence, and in many countries Facebook has become the de facto internet, “having signed deals with telecom companies to have its app installed by default and offering access to a range of online services through its ‘Free Basics’ program,” Ingram writes.
And controlling the content it publishes is completely beyond the company's ability. Zuckerberg has talked about the “painful lessons” he learned in connecting two billion people. He has promised to establish an independent body to set tougher policies and make enforcement decisions. Can it be done?
For the Rohingya minority in Myanmar, it’s too late; hundreds of thousands of people have fled their homes in what’s been called a “historic migration crisis.” Clearly, the genie is out of the bottle. Just as Facebook is unable to adequately protect user privacy and data, it’s clearly not up to the task of enforcing effective guidelines for decent human behavior online.
So where will regulation come from? As Ingram explains, it’s a tricky issue.
“Would it be the US, since Facebook is an American company? That would be difficult, and not just because the First Amendment protects speech of almost all kinds,” he writes. “Tech platforms in particular are protected from liability for the content they host by Section 230 of the Communications Decency Act.”
There’s talk of modifying Section 230 to eliminate some of the protections for the company, but it’s not likely to be taken up anytime soon. Other countries have been stepping up, taking Zuckerberg to task.
“Into this regulatory vacuum have stepped a number of foreign governments, many of which have passed laws in an attempt to control the spread of misinformation and hate speech in their countries via platforms like Facebook,” Ingram explains. “One of the first was Germany, which introduced a law known as the Netzwerkdurchsetzungsgesetz, or Network Enforcement Act, in 2017. It applies to commercial social networks that have more than 2 million users, and it requires them to delete illegal content (including hate speech, neo-Nazi sentiment, etc.) within 24 hours. Platforms can be fined as much as 50 million euros for failing to comply.”
The UK is considering similar measures, and Singapore has criminalized “fake news.” Yet in the United States, which has largely been sheltered from such collateral damage, the balance between freedom of speech and protection against hateful and incendiary speech remains delicate.
The problem seems impossible to solve, and it has led Facebook to consider a future built primarily on private, encrypted communication. That has frightening implications of its own.
“The risk in turning Facebook communication private, according to experts like Renee DiResta, a disinformation researcher who co-wrote the Senate report on Russian trolling activity during the run-up to the 2016 election, is that hate speech will become significantly less visible,” Ingram writes. “While Facebook could determine which messages appear to be going viral by looking at the metadata (how many people are sharing what and where), seeing the actual content of the messages would be impossible. DiResta warns that in many cases, misinformation contained in private messages can be even more persuasive because it comes directly from friends and family.”
“The size and reach of Facebook, with more than 1.5 billion daily active users, makes it more like a nation than a company. And that suggests it will take the efforts of multiple countries to find a way to regulate the kind of behavior the social network says it is committed to curbing, but is effectively incentivizing. The alternative is too depressing to contemplate: letting the company continue to do whatever it wants, safe in the knowledge that all it has to do is apologize profusely after something terrible happens,” Ingram concludes.