New Media Matters: Free Speech vs Hate Speech

This week saw the announcement of a new Code of Conduct on illegal online hate speech, agreed between four of the world’s biggest tech companies and the European Union. Facebook, Twitter, YouTube and Microsoft participated in drafting the code, which requires the U.S. companies to remove illegal hate speech from their platforms and services in Europe within 24 hours of notification. European governments hope the deal will help tackle the surge in racist, xenophobic and pro-Islamic State commentary on social media platforms.

However, some digital rights groups have warned that the new code of conduct could result in over-compliance from Internet companies and in turn threaten free speech. Is this concern justified, or are free speech advocates misguided in their intentions?

 

The Code of Conduct

Entitled ‘Code of Conduct on Countering Illegal Hate Speech Online’, the three-page document sets out the parameters of the code and what the IT Companies (Facebook, Twitter, YouTube and Microsoft) are expected to do when they receive a notification of illegal hate speech on their services. The IT Companies are required to:

  • Put in place clear and effective processes for reviewing notifications of illegal hate speech. This includes Rules or Community Guidelines clarifying the prohibition of violent or hateful content
  • Review valid notifications within a 24-hour period and remove or disable access to such content
  • Provide information on the procedures for submitting notifications with the goal of improving the speed and effectiveness of communication between the companies and EU Member State authorities
  • Provide regular training to their staff on current societal developments
  • Intensify cooperation between themselves and other platforms and social media companies to ensure best practice sharing
  • Continue to identify and promote independent counter-narratives, new ideas and initiatives and support educational programs that encourage critical thinking
  • Intensify their work with civil society organisations (CSOs) to deliver best practice training on countering hateful rhetoric and prejudice, with the European Commission working with Member States to map out the specific needs of CSOs

The code defines illegal hate speech as:

“all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”

The code also stresses the need for the IT Companies and the European Commission to defend the right to freedom of expression, which the European Court of Human Rights has stated:

“is applicable not only to “information” or “ideas” that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population”

The aim of the code is to prevent illegal online hate speech from spreading virally and to ensure democratic discourse on online platforms. But the code walks a fine line between the freedom and the suppression of expression, something a number of free speech organisations have voiced concerns about.

 

Speaking out against censorship

Speaking to Breitbart London on the day of the announcement, Index on Censorship, the National Secular Society, the Open Rights Group and the U.S.-based ‘Free Press’ organisation all criticised the deal, referring to it as an authoritarian initiative that would only make it more difficult to combat hate speech.

The groups claimed that the ambiguity of European hate speech laws would result in misuse or abuse of the notification process, creating a chilling effect on freedom of expression. They were also concerned about the decision to hand policing power over to the unelected IT Companies, with Open Rights Group Executive Director Jim Killock saying:

“The removal of illegal hate speech should be led by law enforcement agencies not commercial companies. There needs to be a clear judicial process for making requests to remove content… It is one thing if companies decide to remove content that breaches their community standards but another if the order to do this comes from the state.”

Strangely enough, the above statement was the only acknowledgement of the fact that social media platforms and Internet companies already reserve the right to remove content they deem to violate their community standards or terms of service. Facebook’s Terms of Service explicitly state:

“We can remove any content or information you post on Facebook if we believe that it violates this Statement or our policies”

while Twitter’s Terms of Service read:

“We reserve the right at all times (but will not have an obligation) to remove or refuse to distribute any Content on the Services.”

Understandably, Facebook and Twitter reserve these rights to prevent abuse of their services and the posting of anything that could damage the company. These are massive businesses in a highly profitable sector: the global social networking market is expected to grow at a CAGR of 18% through 2020. Shareholders do not want their business associated with hateful or violent speech, and it is the companies’ right, not the users’, to determine what content is suitable for their services. Users unhappy with censorship, or the lack of it, can simply delete their profiles or switch platforms, an option none of the advocacy groups mentioned.

While the aforementioned advocacy groups were highly critical of the new Code of Conduct, they failed to offer an alternative solution to the problem of online hate speech. Terrorist organisations, extremist movements and racist groups all use social media and websites to spread propaganda and disinformation. The spread of misinformation is just as dangerous as censorship, and the editorial standards found in a print newspaper are usually absent from websites. IT Companies and governmental bodies share responsibility for keeping such content off the Internet.

 

Conclusion

The fact of the matter is that the Internet is rife with hate speech, and it can be a very toxic and frightening place. State censorship is obviously not a desirable resolution to the problem of online hate speech, but for the time being it is the best one we have.

Stay up-to-date with the latest market developments, trending news stories and industry advances with the Research and Markets blog. Don’t forget to join our mailing list to receive alerts for the latest blog plus information about new products.

Published by Research and Markets
