Looking Out the Window

The Parent’s Evidence-Based Guide to Online Racism & Hate

A Scientific Framework for Countering Hate Speech & Bias

Synthesising research from the Journal of Youth and Adolescence and the Anti-Defamation League (ADL) to help parents navigate the "normalisation" of hate in gaming and social spaces.

Created with


Jessica Blier

Jessica Blier is a senior forensic linguist specialising in high-risk language analysis, including online grooming and scams. She works to identify harmful linguistic patterns, improve safety systems, and support parents and educators with evidence-based digital safety insights.
Moe
Moe is a former US government cybercrime investigator with experience tackling online exploitation, fraud and digital harm. She now focuses on educating families and platforms on real-world online risks and practical prevention strategies.

Breck Foundation

The Breck Foundation is a UK charity dedicated to protecting children from online grooming and exploitation. Founded after the death of Breck Bednar, it delivers education, awareness programmes, and practical guidance for families and schools.


Part 1: Understanding the Threat

Online gaming lobbies and social comment sections are often the "Wild West" of unregulated speech.

1. The "Gamification" of Hate

Research shows that many children first encounter racial slurs not at school, but in gaming voice chats (Call of Duty, Fortnite, Valorant) or on Discord.

  • Normalisation: When a child hears a slur used 50 times a day as a synonym for "losing a game," the word loses its historical weight and becomes just "gamer trash talk."

  • "Edgy" Humor: Algorithmic culture rewards "shock value." Memes that use racist or antisemitic tropes are often shared under the guise of "dark humor" or "irony," making them harder for parents to police.

2. The "Schrödinger’s Douchebag" Effect

This internet phenomenon (identified in communication studies) is a common defense mechanism: A user makes a hateful statement and decides whether it was "a joke" or "serious" based on the reaction they get.

  • Parental Challenge: If you call your child out, they will likely say: "It’s just a joke, calm down." This deflection tactic is often learned from influencers.


Part 2: The Psychological Context

How good kids get desensitised to bad words.

1. Desensitisation Theory

Studies (e.g., Costello et al.) confirm that repeated exposure to hate speech reduces emotional reactivity. A child who was initially shocked by the N-word may, after six months of gaming, stop noticing it entirely.

2. In-Group vs. Out-Group Dynamics

Online communities are tribal. Using the specific slang or slurs of a group signals "belonging." Adolescents often use hateful language not because they hate the target, but because they want to signal to their peers: "I am one of you; I am not easily offended."


Part 3: Prevention (Active Counter-Narratives)

You cannot shield them from the words, so you must inoculate them against the ideas.

Strategy 1: The "Bystander" Training

Research shows that peer intervention is the most effective way to stop bullying/hate.

  • The Script: Teach your child a low-risk script for when they hear a friend use a slur. They don’t need to give a lecture. They just need to say: "Whoa, chill. That’s cringe." (Framing racism as "cringe" or "uncool" is often more effective with teens than framing it as "immoral").

Strategy 2: Diversify the Feed

Algorithms create "echo chambers."

  • Action: Actively encourage your child to follow creators from diverse backgrounds. If their entire feed looks exactly like them, the "Othering" of other races becomes psychologically easier.


Part 4: Detection (The "Scripted Language" Markers)

Hate groups use coded language ("dog whistles") to avoid bans and parental detection.

Coded Language

Certain words, symbols, or numbers can signal exposure to extremist or racist online spaces.


  • Triple Parentheses: Using symbols like (((they))) to imply someone is Jewish — a known antisemitic marker.

  • 1488: A numerical code commonly used in white supremacist communities.

  • "Based": Once meaning "being yourself," now often co-opted in extremist spaces to signal approval of racist or sexist views.

Humour & Memes

Racist ideologies are often introduced and normalised through jokes or irony.


  • "Dark Humour" Defence: Sharing memes built on stereotypes (e.g. crime statistics, racial caricatures) and reacting angrily if others don’t laugh.

  • Wojak Variations: Using specific racist or extremist versions of the Wojak meme to mock or dehumanise groups.

Behavioural Shifts

Changes in worldview or repeated talking points can indicate deeper ideological influence.


  • Grievance Politics: Presenting one-sided views of racial injustice, or fixating on isolated incidents that validate their prejudices, mirroring the language of radicalising influencers.


Part 5: Response (The "Ouch" Protocol)

Addressing bias without triggering the "Backfire Effect."

1. Don't Debate; Investigate

If your child says something racist or defends a racist meme:

  • Do Not: Immediately label them a racist.

  • Do Ask: "That’s a pretty intense thing to say. Where did you hear that? Do you think that’s actually true, or is it just something people say to be edgy?"

2. The "Empathy Bridge"

Re-connect the word to a person.

  • Say: "I know you hear that word in Call of Duty all the time. But words have history. If you said that in front of [Name of friend/relative/teacher of that race], how do you think they would feel? Would they think you were joking?"

3. Zero Tolerance for Abuse

Distinguish between "ignorant repeating" and "malicious targeting."

  • Rule: If your child is found targeting individuals with hate speech (harassment), this requires consequences (loss of gaming privileges) and potentially restorative justice (apology letters, education).


Part 6: Selected Academic References

Key studies supporting the strategies in this guide.

  1. Costello, M., et al. (2019). Online exposure to hate speech: The role of social learning and desensitization. Sociological Inquiry.

  2. Tynes, B. M., et al. (2020). Race-related traumatic events online and mental health among adolescents. Journal of Adolescent Health.

  3. Daniels, J. (2009). Cyber racism: White supremacy online and the new attack on civil rights. (Foundational text on how hate groups use the internet).
