Your Kid Saw a Deepfake Today. What to Actually Teach Them

8 million deepfakes were shared in 2025. Most kids can't tell AI content from real. Here's a 4-conversation framework that builds skepticism, not fear.

In 2023, an estimated 500,000 deepfakes were shared online. In 2025, that number reached approximately 8 million. By some projections, 90% of online content may be synthetically generated or AI-modified by 2026.

Your child, if they use the internet at all, is already encountering AI-generated content regularly — images that look photographed, audio that sounds recorded, video that looks filmed, text that looks written by a person. Most children can’t tell the difference.

A 2025 study cited by Stanford researchers found that 72% of students could not distinguish AI-generated text from human-written articles, and 56% accepted AI-hallucinated facts — statements presented with confidence that were entirely fabricated — as true. These aren’t failing students. These are typical young people navigating an information environment that changed faster than anyone’s media literacy skills could adapt.

The parent response to deepfakes tends to be either alarm (they’re going to be manipulated by everything) or dismissal (kids are tech-savvy, they’ll figure it out). Neither is accurate, and neither is useful. The actual response is specific skill-building — a set of habits that don’t require technical sophistication but do require deliberate practice.

What Deepfakes Are — in Plain English

A deepfake is media — image, video, audio, or text — that has been generated or substantially altered by artificial intelligence to appear as something it isn’t. The term originally referred specifically to AI-manipulated video (often swapping faces), but in common usage it now covers:

  • AI-generated images that look like photographs of real people, events, or places that don’t exist
  • AI voice cloning that can reproduce a specific person’s voice from limited audio samples
  • AI video synthesis that generates moving images of people saying or doing things they never did
  • AI text generated to sound like a real person’s writing, including fake quotes and fabricated “news”

The technology producing these has improved dramatically and continues to improve. What required significant technical skill in 2020 requires a free app in 2026.

The malicious uses are real and worth naming: political disinformation (fake video of a politician saying something they didn’t say), financial fraud (voice-cloned calls impersonating family members), non-consensual intimate imagery (a specific and serious harm targeting adolescents), and misinformation campaigns. But the more common encounter is subtler: images shared as “real” that weren’t, quotes attributed to real people that weren’t said, news-formatted articles about events that didn’t happen.

Why Children Are Particularly Susceptible

A 2025 PMC study on children’s susceptibility to AI-generated content found that children ages 7–13 showed significantly lower accuracy than adults in detecting AI-generated images, even when they were making a deliberate effort to evaluate them. The primary mechanism isn’t gullibility — it’s that children have less experience with the visual and linguistic tells that AI generation currently produces, and they have fewer internal reference points for “does this feel off?”

A 2025 study in Computers in Human Behavior on young people’s encounters with deepfakes found that intuition-based detection (“this feels fake”) was significantly less reliable than systematic verification techniques across all age groups — but the gap was largest in adolescents, who tend to rely on visual impression rather than checking.

The European Parliament’s 2025 briefing on children and deepfakes notes that adolescents are specifically targeted by deepfake-based harassment, with non-consensual intimate imagery being a growing safeguarding concern in schools. Children who don’t understand what’s technically possible are worse equipped to recognize when they or someone they know has been targeted.

UNESCO’s analysis “Deepfakes and the Crisis of Knowing” argues that the problem isn’t primarily about individual media items — it’s that the existence of deepfakes erodes the general credibility of all media, including real documentation. Children who grow up in this environment need not just detection skills but a broader epistemological habit: “How do I know what I know, and how confident should I be?”

Content Type × Detection Approach

| AI content type | Common encounter | Detection approach | Teaching conversation |
| --- | --- | --- | --- |
| AI-generated images | Social media shares, news articles | Look for: unnatural hands, hair merging with backgrounds, asymmetric features, odd reflections in eyes; reverse image search for original | “Let’s look at the hands in this photo — AI images often get hands wrong” |
| AI text / fake quotes | Attributed quotes on social media, fake news articles | Find the original source; quotes from public figures should trace to documented records | “Where did this quote come from originally? Let’s find it.” |
| AI voice / phone calls | Fraud calls claiming to be family in distress | Establish a family code word for emergency verification | “If someone calls you claiming to be me in trouble, ask for our code word” |
| AI video | Political content, viral shares | Look for unnatural eye movement, lip sync inconsistencies, lighting mismatches; check source and date | “Where was this filmed? Let’s see if this event is documented elsewhere” |
| AI “news” articles | Shared links without original source | Check outlet credibility, find multiple sources reporting same event | “Has any news outlet you recognize reported this?” |

The Four-Conversation Framework

These aren’t one-time talks — they’re recurring conversations, ideally triggered by real encounters. Each one takes about 10 minutes and builds a specific habit.

Conversation 1: “Where did this come from?”

The most foundational media literacy question. Not “is this real?” but “where does this trace to?” AI-generated content and misinformation typically fail at this question: they don’t trace to an original source, they trace only to shares.

Practice it together: find something on social media — ideally something your child is about to share or found striking — and ask: who posted this first? What’s the original source? Is there anything about this topic on a news outlet you recognize? The process, practiced repeatedly, becomes a habit.

Conversation 2: “What would this look like if it were real?”

This is the evidentiary-standard conversation. If this event actually happened, what would you expect to find? Multiple independent news sources. Video from different angles. Official statements. Reactions from named people who were there. The absence of expected evidence is itself information.

This conversation works especially well for viral images and video: “If this were real, who else would have filmed it? Where’s the news coverage?” The habit of noticing what should exist but doesn’t is one of the strongest tools in critical media evaluation.

Conversation 3: “How does this make you feel, and why?”

Emotionally activating content spreads faster — this is a design feature of viral content, not an accident. AI-generated misinformation often targets emotional responses because emotional activation reduces critical scrutiny.

Teaching children to notice their own emotional response as a signal to check more carefully — not as an indicator of truth — is one of the highest-leverage media literacy habits. “I notice this made you really angry. That’s worth paying attention to: the things that make us most sure and most emotional are often the things we should evaluate most carefully.”

Conversation 4: “What are the stakes if this is wrong?”

This is the consequentialist question. Not all misinformation is equally costly. Getting the score of a game wrong is low stakes. Believing a medical claim about your health is high stakes. Sharing a fake video that damages someone’s reputation is high stakes for that person.

Teaching children to calibrate their verification effort to the stakes — to fact-check more carefully when it matters more — is a sustainable media literacy habit. Verifying everything to the same degree would be exhausting; verifying in proportion to consequence is practical.

For the broader framework on building AI evaluation skills, see Teaching Kids to Use AI as a Thinking Partner and What AI Literacy Means for a 10-Year-Old.

What to Watch for Over the Next 3 Months

Week 2–3: After introducing the “where did this come from?” question once, does your child spontaneously ask it about something they encounter in the next week? The self-directed application is the leading indicator that the habit is forming.

Month 2: Is your child hesitating before sharing things online — taking a beat to consider source and accuracy rather than immediately reposting? Behavioral hesitation is the visible sign of the internal evaluation process.

Month 3 self-check: Could your child name three specific things to check when they’re not sure if an image or video is real? The ability to name the verification steps means the process is internalized rather than just heard once.

Frequently Asked Questions

Should I show my child an obvious deepfake to demonstrate what they look like?

Yes, with age-appropriate examples. MIT Media Lab, the DFRLab, and several media literacy organizations maintain libraries of deepfake examples made for educational purposes. Showing a child a manipulated video and then walking through the tells together is significantly more memorable than describing it. Start with something obviously imperfect (early-era deepfakes often had very visible artifacts) before moving to more convincing examples.

My 14-year-old received an AI-manipulated image of themselves from a classmate. What do I do?

This is a serious safeguarding matter that goes beyond media literacy. Document the image without showing it further. Report to the school. Depending on the nature of the manipulation, report to the platform and potentially to law enforcement. Non-consensual intimate imagery — even AI-generated — is illegal in a growing number of states. Your child needs practical support, not just reassurance.

Is there an app that detects deepfakes?

Detection apps exist (Sensity, Hive Moderation, and others), but they lag the generation technology and produce false positives and negatives. They’re not reliable enough to use as a primary verification tool. The habits above — source tracing, evidentiary standards, emotional-response awareness — are more reliable across different content types and don’t require any app.

How do I talk about this without making my child paranoid about all media?

The goal is appropriate skepticism, not paralysis. Frame it as a skill: “We now live in a world where some things online are made to look real when they’re not. That doesn’t mean everything is fake — it means you need a method.” Paranoia comes from “I can’t trust anything.” Critical evaluation comes from “I have a way to check.” Emphasize the latter.


About the author

Ricky Flores is the founder of HiWave Makers and an electrical engineer with 15+ years of experience building consumer technology at Apple, Samsung, and Texas Instruments. He writes about how kids learn to build, think, and create in a tech-saturated world. Read more at hiwavemakers.com.

Sources

  1. PMC. (2025). “Children’s susceptibility to content generated by artificial intelligence.” PMC13089802. https://pmc.ncbi.nlm.nih.gov/articles/PMC13089802/

  2. ScienceDirect. (2025). “Everyday encounters with deepfakes: young people’s media and information literacy practices with AI-generated media.” Computers in Human Behavior. https://doi.org/10.1016/j.chb.2025.108504

  3. European Parliament Research Service. (2025). “Children and deepfakes.” Briefing PE 775855. https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/775855/EPRS_BRI(2025)775855_EN.pdf

  4. UNESCO. “Deepfakes and the Crisis of Knowing.” https://www.unesco.org/en/articles/deepfakes-and-crisis-knowing

  5. NC State Extension / Bertie County. (2025). “Digital Literacy for the Age of Deepfakes: Recognizing Misinformation in AI-Generated Media.” https://bertie.ces.ncsu.edu/2025/03/digital-literacy-for-the-age-of-deepfakes-recognizing-misinformation-in-ai-generated-media/

  6. KidsAITools / Stanford. (2026). “Teaching Kids to Spot AI Misinformation: Media Literacy Guide.” https://www.kidsaitools.com/en/articles/teaching-kids-to-spot-ai-misinformation
