With the rapid rise of social media, people have new platforms for expression, prompting many companies to create a human rights policy or a hate speech policy (some, like Facebook, have both).
If your company is writing or upgrading its human rights (or hate speech) policy, you might be stuck on what to include. I found 10 examples of hate speech and human rights policies to help you get started. But first, let’s define hate speech and the difference between human rights and hate speech policies.
What Is Hate Speech?
The best definition of hate speech comes from the Council of Europe. In 1997 they defined hate speech as:
“all forms of expression which spread, incite, promote, or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance.”
Hate speech has been around for ages, even before social media. Here are some real-life hate speech examples:
The Palestinians are like crocodiles; the more you give them meat, they want more
-Ehud Barak, Prime Minister of Israel, August 28, 2000.
Let’s Kill Jews, and let’s kill them for fun. #killjews
-Hamas Palestine, Twitter (Twitter deleted this post because of its hate speech policies)
Posts like the ones above prompted all the major social media platforms (and many other companies) to craft their own policies against hate speech.
Hate Speech vs Human Rights Policy
What Is a Human Rights Policy?
The United Nations defines human rights as the right to freedom of speech, health, privacy, life, security, liberty, and a decent standard of living. Human rights policies outline your position on all of these things and how they might impact your business (or employees).
Under free speech, people have the right to express their opinions. Sometimes opinions can be offensive. Offensive speech can become hate speech and cross the line into a human rights violation if it encourages discrimination or incites violence toward a group or person. This is why some companies also have hate speech policies: to protect their brand, their supporters, and their employees.
Examples of a Hate Speech Policy
These examples of hate speech policies serve as inspiration from well-known companies. Let’s dive in with the Facebook policy on hate speech.
Note: I include a snippet for the examples, so you’ll need to click on each to read the full hate speech policy.
What are Facebook’s hate speech policies? They are specific and direct:
Facebook Policies on Hate Speech: We define hate speech as a direct attack against people — rather than concepts or institutions— based on protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and severe disease.
YouTube and Google are both owned by Alphabet Inc. and take a similar approach to hate speech. Here’s a snippet of what YouTube’s policy includes:
We remove content promoting violence or hatred against individuals or groups based on any of the following attributes: race or ethnic origin, religion, disability, gender, age, veteran status, or sexual orientation/gender identity.
YouTube also includes a video of its community guidelines on hate speech.
Here’s an intro of the Twitter policy on hate speech, which they call a “Hateful Conduct Policy”:
You may not promote violence against or directly attack or threaten other people based on race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.
Vimeo’s hate speech policy says:
We do not allow hateful and discriminatory speech. We define this as any expression that is directed to an individual or group of individuals based upon the personal characteristics of that individual or group.
Outside of content, Vimeo also monitors avatars, screen names, and profile pictures for hate speech.
The Amazon hate speech policy below focuses on its products:
Amazon does not allow products that promote, incite, or glorify hatred, violence, racial, sexual, or religious intolerance or promote organizations with such views.
Snapchat’s hate speech policies cover a broad range of groups:
Hate speech or content that demeans, defames, or promotes discrimination or violence on the basis of race, color, caste, ethnicity, national origin, religion, sexual orientation, gender identity, disability, or veteran status, immigration status, socio-economic status, age, weight or pregnancy status is prohibited.
Here’s an excerpt from Spotify’s hate speech policy:
Don’t engage in any activity, post any User Content, or register or use a username, which is or includes material that is offensive, abusive, defamatory, pornographic, threatening, or obscene, or advocates or incites violence.
Examples of Human Rights Policies
Here are 3 examples of what you might include in a human rights policy:
Along with Facebook’s policy on hate speech, they also have a human rights policy that starts with:
Facebook’s mission is to give people the power to build community and bring the world closer together. We build social technologies to enable the best of what people can do together. Our principles are: give people a voice; serve everyone; promote economic opportunity; build connection and community; keep people safe and protect privacy. We recognize all people are equal in dignity and rights. We are all equally entitled to our human rights, without discrimination. Human rights are interrelated, interdependent and indivisible.
The Apple human rights policy covers their product, processes, and people:
We’ve worked hard to embed a respect for human rights across our company—in the technology we make, in the way we make it, and in how we treat people.
The L’Oréal human rights policy is built on 4 key pillars that focus on being “good people”:
Our Human Rights Policy is based on our 4 Ethical Principles – INTEGRITY, RESPECT, COURAGE, and TRANSPARENCY – and is part of our Code of Ethics. We believe that as a business we have a responsibility to respect internationally recognised human rights and that we must take steps to identify and address any actual or potential adverse impacts in which we may be involved through our own operations or our business relationships. We also believe that we can contribute to positive human rights impacts by playing our role as a responsible…
Why I Wrote This
The world is changing for the better. Writing these types of policies can be tricky, and you might use words you don’t realize are biased. Ongig’s Text Analyzer software helps remove many types of exclusionary language. We’d be happy to scan your JD text (or other content) to make sure it’s clear of hate speech and bias.
- LGBTQ Equality at the Fortune 500 (by Human Rights Campaign)
- Human Rights (by United Nations)
- What is Hate Speech? (by Rights for Peace)
- Hate speech is still easy to find on social media (by Jennifer Grygiel)
- Facebook Policies on Hate Speech (Facebook Transparency Center)
- L’Oréal Human Rights Policy (by L’Oréal)
- Our Commitment to Human Rights (by Apple)
- Facebook’s Corporate Human Rights Policy
- Spotify’s User Guidelines
- Community Guidelines (Snapchat)
- Offensive and Controversial Materials (by Amazon)
- Vimeo Acceptable Use Community Guidelines
- Hateful Conduct Policy (Twitter Hate Speech Policies)
- Hate Speech Policy on YouTube
- Google Hate Speech Policy for AdSense