Hate speech is a legal term with no single, consistent definition. The Cambridge Dictionary defines it as "public speech that expresses hate or encourages violence towards a person or group based on something such as race, religion, sex, or sexual orientation". The Encyclopedia of the American Constitution states that hate speech is "usually thought to include communications of animosity or disparagement of an individual or a group on account of a group characteristic such as race, color, national origin, sex, disability, religion, or sexual orientation". There is no single definition of what constitutes "hate" or "disparagement", and legal definitions of hate speech vary from country to country.
There has been much debate over freedom of speech, hate speech, and hate speech legislation. The laws of some countries describe hate speech as speech, gestures, conduct, writing, or displays that incite violence or prejudicial actions against a group or individuals on the basis of their membership in the group, or that disparage or intimidate a group or individuals on the basis of their membership in the group. The law may identify protected groups based on certain characteristics. In some countries, hate speech is not a legal term. Additionally, in some countries, including the United States, much of what falls under the category of "hate speech" is constitutionally protected. In other countries, a victim of hate speech may seek redress under civil law, criminal law, or both.
Hate speech is generally accepted to be one of the prerequisites for mass atrocities such as genocide. Incitement to genocide is an extreme form of hate speech, and has been prosecuted in international courts such as the International Criminal Tribunal for Rwanda.
Starting in the 1940s and 1950s, various American civil rights groups responded to the atrocities of World War II by advocating for restrictions on hateful speech targeting groups on the basis of race and religion. These organizations used group libel as a legal framework for describing the violence of hate speech and addressing its harm. In his discussion of the history of criminal libel, scholar Jeremy Waldron states that these laws helped "vindicate public order, not just by preempting violence, but by upholding against attack a shared sense of the basic elements of each person's status, dignity, and reputation as a citizen or member of society in good standing". A key legal victory for this view came in 1952, when the United States Supreme Court affirmed group libel law in Beauharnais v. Illinois. However, the group libel approach lost ground as support for individual rights rose within the civil rights movements of the 1960s. Critiques of group defamation laws are not limited to defenders of individual rights. Some legal theorists, such as critical race theorist Richard Delgado, support legal limits on hate speech but claim that defamation is too narrow a category to fully counter it. Ultimately, Delgado advocates a legal strategy that would establish a specific section of tort law for responding to racist insults, citing the difficulty of obtaining redress under the existing legal system.
Main article: Hate speech laws by country
After World War II, Germany criminalized Volksverhetzung ("incitement of popular hatred") to prevent a resurgence of Nazism. Hate speech on the basis of sexual orientation and gender identity is also banned in Germany. Most European countries have likewise implemented various laws and regulations regarding hate speech, and the European Union's Framework Decision 2008/913/JHA requires member states to criminalize hate crimes and speech (though individual implementation and interpretation of this framework varies by state).
International human rights law, monitored by bodies such as the United Nations Human Rights Committee, protects freedom of expression. One of its most fundamental documents is the Universal Declaration of Human Rights (UDHR), adopted by the U.N. General Assembly in 1948. Article 19 of the UDHR states that "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."
While there are fundamental laws in place designed to protect freedom of expression, there are also multiple international laws that expand on the UDHR and pose limitations and restrictions, specifically concerning the safety and protection of individuals.
A majority of developed democracies have laws that restrict hate speech, including Australia, Canada, Denmark, France, Germany, India, South Africa, Sweden, New Zealand, and the United Kingdom. In the United Kingdom, Article 10 of the Human Rights Act 1998 expands on the UDHR, permitting restrictions on freedom of expression when expression threatens national security, incites racial or religious hatred, harms health or morals, or threatens the rights and reputations of individuals. The United States does not have hate speech laws, since the U.S. Supreme Court has repeatedly ruled that laws criminalizing hate speech violate the guarantee of freedom of speech contained in the First Amendment to the U.S. Constitution.
Laws against hate speech can be divided into two types: those intended to preserve public order and those intended to protect human dignity. Laws designed to protect public order set a high threshold for violation, so they are rarely enforced. For example, a 1992 study found that only one person had been prosecuted in Northern Ireland in the preceding 21 years for violating a law against incitement to religious violence. Laws meant to protect human dignity have a much lower threshold for violation, so those in Canada, Denmark, France, Germany, and the Netherlands tend to be enforced more frequently.
Main article: Hate speech actions by country
A few states and actors, including Saudi Arabia, Iran, Hutu factions in Rwanda, parties to the Yugoslav Wars, and Ethiopia, have been described as spreading official hate speech or incitement to genocide.
Main article: Online hate speech
The rise of the internet and social media has presented a new medium through which hate speech can spread. Hate speech online dates to the network's earliest years: a 1983 bulletin board system created by neo-Nazi George Dietz is considered the first instance of hate speech online. As the internet evolved, hate speech continued to spread and expand its footprint; Stormfront, considered the first hate speech website, was published in 1996, and hate speech has since become one of the central challenges for social media platforms.
The structure and nature of the internet contribute to both the creation and persistence of hate speech online. Widespread access to the internet gives hatemongers an easy way to spread their message to large audiences with little cost or effort. According to the International Telecommunication Union, approximately 66% of the world population has access to the internet. Additionally, the pseudo-anonymous nature of the internet emboldens many to make statements constituting hate speech that they otherwise wouldn't for fear of social or real-life repercussions. While some governments and companies attempt to combat this behavior by leveraging real-name systems, difficulties in verifying identities online, public opposition to such policies, and sites that don't enforce them leave large spaces for this behavior to persist.
Because the internet crosses national borders, comprehensive government regulations on online hate speech can be difficult to implement and enforce. Governments that want to regulate hate speech contend with a lack of jurisdiction and conflicting viewpoints from other countries. In an early example, the case of Yahoo! Inc. v. La Ligue Contre Le Racisme et l'Antisemitisme saw a French court hold Yahoo! liable for allowing Nazi memorabilia auctions to be visible to the public. Yahoo! refused to comply with the ruling and ultimately won relief in a U.S. court, which found that the French ruling was unenforceable in the U.S. Disagreements like these make national-level regulation difficult, and while some international efforts and laws attempt to regulate hate speech and its online presence, as with most international agreements, the implementation and interpretation of these treaties varies by country.
Much of the regulation of online hate speech is performed voluntarily by individual companies. Many major tech companies have adopted terms of service that outline what content is allowed on their platforms, often banning hate speech. In a notable step, on 31 May 2016, Facebook, Google, Microsoft, and Twitter jointly agreed to a European Union code of conduct obligating them to review "[the] majority of valid notifications for removal of illegal hate speech" posted on their services within 24 hours. Techniques these companies employ to regulate hate speech include user reporting, artificial-intelligence flagging, and manual review of content by employees. Major search engines like Google Search also tweak their algorithms to try to suppress hateful content in their results. However, despite these efforts, hate speech remains a persistent problem online. According to a 2021 study by the Anti-Defamation League, 33% of Americans were the target of identity-based harassment in the preceding year, a figure that has not noticeably declined despite increasing self-regulation by companies.
Several activists and scholars have criticized the practice of limiting hate speech. Civil liberties activist Nadine Strossen says that, while efforts to censor hate speech have the goal of protecting the most vulnerable, they are ineffective and may have the opposite effect: disadvantaged and ethnic minorities being charged with violating laws against hate speech. Kim Holmes, Vice President of the conservative Heritage Foundation and a critic of hate speech theory, has argued that it "assumes bad faith on the part of people regardless of their stated intentions" and that it "obliterates the ethical responsibility of the individual". Rebecca Ruth Gould, a professor of Islamic and Comparative Literature at the University of Birmingham, argues that laws against hate speech constitute viewpoint discrimination (which is prohibited by the First Amendment in the United States) as the legal system punishes some viewpoints but not others. Other scholars, such as Gideon Elford, argue instead that "insofar as hate speech regulation targets the consequences of speech that are contingently connected with the substance of what is expressed then it is viewpoint discriminatory in only an indirect sense." John Bennett argues that restricting hate speech relies on questionable conceptual and empirical foundations and is reminiscent of efforts by totalitarian regimes to control the thoughts of their citizens.
Miisa Kreandner and Eriz Henze argue that hate speech laws are arbitrary, as they protect some categories of people but not others. Henze argues that the only way to resolve this problem without abolishing hate speech laws would be to extend them to every conceivable category, which, he argues, would amount to totalitarian control over speech.
Michael Conklin argues that there are often-overlooked benefits to hate speech. He contends that allowing hate speech provides a more accurate view of the human condition, provides opportunities to change people's minds, and identifies people who may need to be avoided in certain circumstances. According to one psychological research study, a high degree of psychopathy is "a significant predictor" for involvement in online hate activity, while none of the other seven potential factors examined was found to have statistically significant predictive power.
Political philosopher Jeffrey W. Howard considers the popular framing of hate speech as "free speech vs. other political values" as a mischaracterization. He refers to this as the "balancing model", and says it seeks to weigh the benefit of free speech against other values such as dignity and equality for historically marginalized groups. Instead, he believes that the crux of debate should be whether or not freedom of expression is inclusive of hate speech. Research indicates that when people support censoring hate speech, they are motivated more by concerns about the effects the speech has on others than they are about its effects on themselves. Women are somewhat more likely than men to support censoring hate speech due to greater perceived harm of hate speech, which some researchers believe may be due to gender differences in empathy towards targets of hate speech.