German lawmakers have passed a controversial law under which Facebook, Twitter, and other social media companies could face fines of up to €50 million ($57 million) for failing to remove hate speech. The Network Enforcement Act, commonly referred to as the “Facebook law,” was approved by the Bundestag, Germany’s parliamentary body, on Friday. It will go into effect in October.
Under the law, social media companies would face steep fines for failing to remove “obviously illegal” content — including hate speech, defamation, and incitements to violence — within 24 hours. They would face an initial fine of €5 million, which could rise to €50 million. Web companies would have up to one week to decide on cases that are less clear-cut.
Justice Minister Heiko Maas and other supporters of the bill have argued that it is necessary to curb the spread of hate speech, which is strictly regulated under German law. But digital rights activists have broadly criticized the law, saying it would infringe on free speech, and that it gives tech companies disproportionate responsibility in determining the legality of online content.
“Experience has shown that, without political pressure, the large platform operators will not fulfill their obligations, and this law is therefore imperative,” Maas said in an address Friday, adding that “freedom of expression ends where criminal law begins.”
“We believe the best solutions will be found when government, civil society and industry work together and that this law as it stands now will not improve efforts to tackle this important societal problem,” a Facebook spokesperson said in an email statement. “We feel that the lack of scrutiny and consultation do not do justice to the importance of the subject. We will continue to do everything we can to ensure safety for the people on our platform.”
A Twitter spokesperson declined to comment on the passage of the law.
Germany has in recent years intensified efforts to crack down on hate speech, amid a rise in anti-migrant sentiment that has been fueled in part by the ongoing refugee crisis. Facebook, Twitter, and Google agreed to remove such content from their platforms within 24 hours, under a 2015 deal with the German government, but a 2017 report commissioned by the Justice Ministry found that the companies were still failing to meet their commitments. Earlier this month, German police raided 36 homes over social media posts that allegedly contained hateful content, following a similar operation that targeted 60 people last year.
Social media companies are facing pressure to remove hate speech, fake news, and terrorist propaganda from European Union leaders, as well. Last month, the European Council approved a set of proposals that would require web companies to block any videos that contain hate speech or incitements to terrorism. Maas has also called for Europe-wide regulations on hate speech and fake news.
Facebook and Google have launched campaigns to combat fake news and hate speech in recent months, with Facebook recently announcing that it would hire an additional 3,000 people over the next year to moderate flagged content. The social network explained the complexity of moderating hate speech in a recent blog post, as part of its “hard questions” series, but it faced renewed criticism this week after a ProPublica investigation detailed the confusing internal systems that underpin Facebook’s hate speech policy.
EDRi, a Brussels-based digital rights group, criticized the bill’s passage in a blog post published ahead of Friday’s vote. “[Social media] companies are, quite rationally, driven by the motivation to avoid liability, using the cheapest options available, and to exploit the political legitimization of their restrictive measures for profit,” writes Maryant Fernández Pérez, senior policy adviser at EDRi. “This can only lead to privatized, unpredictable online censorship.”