AI and Kids' Creativity: Does Using AI Hurt Creative Development?
The question isn't whether AI kills creativity — it's about sequence and agency. Here's what the early research says about AI as amplifier vs. AI as replacement.
A third-grader in Austin is given an assignment to write a short story. She opens an AI writing tool, types “write me a story about a dog who can fly,” and submits what it produces. A fourth-grader in the same city is given the same assignment. He opens the same tool, reads what it generates, finds it boring, crosses out most of it, uses one image it suggested, and writes a completely different story. He uses the AI draft the way a musician might use a chord progression someone else wrote — as a starting point he departs from.
Both children used AI. The research emerging in 2025 and 2026 suggests these two children had meaningfully different experiences of creativity, and may be developing meaningfully different creative capacities. The question isn’t whether AI is being used. It’s what role AI plays relative to the child’s own generative effort — and whether the child’s agency comes before or after the AI’s output.
The Problem with the Creativity Debate
Most of the public debate about AI and children’s creativity exists at a level of abstraction that makes it hard to act on. “Will AI kill kids’ creativity?” is a question that can be answered yes or no depending on which examples you choose. Critics of AI in education point to classrooms where children no longer practice generating original ideas. Proponents point to children who are making things — music, art, stories, games — that they would never have attempted without AI tools lowering the technical barrier.
Both observations are correct descriptions of something that happens. The problem is that they’re describing different uses of AI by different children in different contexts, and treating them as the same phenomenon.
Arthur Cropley’s foundational 2001 work, “Creativity in Education and Learning,” established a definition of creativity that remains influential: creativity involves generating ideas that are both novel and appropriate to a given purpose, going beyond existing knowledge, and producing something that has some value in a real-world context. By this definition, creativity is not just originality — it requires both divergent thinking (generating many possible ideas) and convergent thinking (evaluating and refining them). This distinction matters when evaluating AI’s role, because AI systems are strong at generating content that matches existing patterns (a form of convergent output) but do not generate genuine novelty independent of human prompting and direction.
Kaufman and Sternberg’s 2010 framework of creativity development stages adds developmental nuance: younger children are primarily building the experiential vocabulary and raw material from which creativity draws, while adolescents begin developing the evaluative and synthetic capacities that allow them to recombine experience into original expression. What a child does with AI at age 7 is developmentally different from what a 14-year-old does — and the stakes for each may be different.
What the Research Actually Says
The most directly relevant published research as of early 2026 comes from Oruç et al. (2026, SAGE Open), which examined AI use and creative motivation in middle school students across four conditions: no AI use, AI used before the child’s own creative attempt, AI used after the child’s own creative attempt, and AI used collaboratively throughout. The findings were consistent across multiple creative tasks: students who encountered AI output before generating their own ideas produced work that was rated lower on originality and reported lower creative self-efficacy afterward. Students who generated their own ideas first and then used AI to extend, refine, or respond to their own work produced more original outputs and reported higher creative confidence.
The sequence effect — AI before vs. AI after independent creative effort — is the central finding and has significant practical implications. It suggests that AI is not inherently beneficial or harmful to creative development, but that the order of operations matters in ways that current classroom practice often ignores.
Adobe’s 2024 study on creativity and AI in education, conducted across 12 schools in five US states with 1,400 students, found a related pattern. Students who used AI tools with structured creative scaffolding — meaning teachers required them to complete specified creative steps before AI was introduced — showed 23% higher originality scores on subsequent creative tasks compared to students who used AI freely from the outset. The structured-scaffolding group also showed higher engagement and higher rates of describing themselves as “creative people” in post-study surveys.
The ISTE (International Society for Technology in Education) standards, updated in 2023, specifically address AI and creativity. Standard 6c calls for students to use AI tools to amplify creative work while remaining the author and creative director of their output — explicitly endorsing the sequence principle that AI should extend creative agency, not substitute for it. This standard is increasingly referenced in curriculum design but remains inconsistently implemented.
Robert Sawyer’s 2012 work on creative teaching provides a useful conceptual lens here. Sawyer argued that creativity in educational contexts is most effectively developed through “improvisational” pedagogy — structured environments with clear constraints that give children freedom to make meaningful choices within them. By this model, the creative risk is in the choice-making, and children develop creative capacity by making choices and experiencing their consequences. If AI makes choices before the child does, the child’s role shifts from choice-maker to evaluator — a different and less developmentally generative role.
| AI Use Pattern | Effect on Originality (Oruç et al., 2026) | Effect on Creative Self-Efficacy | What the Child Is Actually Doing |
|---|---|---|---|
| No AI | Baseline | Baseline | Generating and evaluating their own ideas |
| AI before child’s attempt | Lower than baseline | Significantly lower | Evaluating and lightly editing AI output |
| AI after child’s attempt | Higher than baseline | Higher than baseline | Extending and refining their own ideas |
| AI collaborative throughout | Mixed — varied by task | Neutral | Alternating between generating and evaluating |
| AI with structured scaffolding (Adobe) | 23% higher | Higher | Generating first; using AI within a constrained framework |
The research on writing and cognition provides additional context: studies of AI writing and children's cognitive development document that the work of generating original written expression is developmentally significant in ways that evaluating and editing someone else's text is not — whether that someone else is a human or a language model. The same principle applies to creative work more broadly: the generative effort, not just the output, is part of what develops creative capacity.
What about creative output quality — not capacity, but what children actually produce? Here the findings are more nuanced. Children who use AI tools produce, on average, more polished final products than children who don’t, across most studies. A 12-year-old who uses AI image generation to illustrate a story produces a more visually sophisticated product than one who draws her own illustrations. But “more polished” and “more creative” are not synonymous, and the research consistently finds that AI-assisted outputs score lower on originality measures even when they score higher on technical quality. Parents and teachers who evaluate children’s creative work primarily on production quality may be systematically miscalibrating their assessment of creative development.
One important gap in the current research is longitudinal data. The Oruç et al. study, the Adobe study, and related work are all measuring effects over periods of weeks to months. Whether the sequence effect on creative self-efficacy accumulates or dissipates over years is not yet known. It is plausible that children who consistently use AI before their own creative effort develop lower creative confidence over time — but it is also plausible that this is a short-term adaptation that doesn’t affect long-term creative development. The honest answer is that the longitudinal data doesn’t exist yet.
What to Actually Do
The research so far points to a clear framework: AI as a creative amplifier (used after independent effort) produces better outcomes than AI as a creative replacement (used before independent effort). The practical implications for parents and educators are more specific than “use AI carefully.”
Establish the “draft zero” rule
The single most actionable implication of the Oruç et al. and Adobe findings is to require children to produce a draft zero — their own unassisted attempt — before any AI tool enters the process. This doesn’t have to be good. It doesn’t have to be finished. It needs to represent the child’s own generative effort: their ideas, their words, their choices. Once draft zero exists, AI can legitimately enter as a tool to extend, critique, remix, or respond. The child’s role remains author and creative director. The AI’s role is collaborator, not originator.
The draft zero rule applies across creative domains: writing, visual art, music, coding, design. A child who writes a story concept before asking AI to elaborate on it is in a different creative position than a child who asks AI to generate the concept. A child who sketches a design before using AI image generation to render it is using the technology differently than a child who starts with the AI image.
Make creative constraints explicit and non-negotiable
Cropley’s framework emphasizes that constraints are features of creative work, not bugs. A child told to write “anything you want” with AI has no constraint preventing full substitution of AI output for their own. A child told to write a story that includes their name, something that happened to them last week, and a problem they genuinely don’t know how to solve has constraints that require personal knowledge and experience that AI cannot supply. Constraints force the child to bring something the AI doesn’t have.
This principle applies in every creative domain. “Draw something from your imagination” is a weaker creative prompt than “draw a machine that solves a problem you actually have.” “Write a song” is a weaker creative prompt than “write a song that explains something you know that most people your age don’t.” The specificity that requires personal knowledge is the constraint that preserves the child’s creative agency.
Teach children to use AI critically, not passively
The fourth-grader in the opening example is doing something important: he’s reading the AI output and disagreeing with it. He’s making judgments about what’s good and what isn’t. This is a teachable disposition, not an automatic one. Children who encounter AI output as authoritative, as a correct answer to be submitted, develop a different relationship to both the AI and their own creative judgment than children who are taught to treat AI output as a first draft from an uninformed collaborator.
Explicitly teaching children to critique AI output — to identify what’s wrong, generic, or boring about it — develops evaluative creative skills even when the child’s initial creative effort was limited. “What would you change about what the AI wrote? Why?” is a creative question. “Fix what’s wrong with this story” is a creative task. The research on teaching kids to use AI as a thinking partner develops this point in more depth — the conversational stance toward AI matters as much as whether AI is used at all.
Protect developmental creative experiences that AI cannot substitute
Some creative experiences are valuable specifically because they are hard, unassisted, and require tolerating not-knowing. Learning to draw, learning to play an instrument, learning to write code from scratch — these activities develop capacities that transfer broadly: frustration tolerance, iterative problem solving, the relationship between effort and improvement. They also develop the experiential vocabulary — the visual library, the feel for rhythm, the intuition about what code does — that makes creative work with AI more rather than less sophisticated. A child who has spent a year drawing by hand develops better visual judgment for evaluating AI image output than a child who hasn’t.
Protecting these experiences doesn’t mean banning AI. It means ensuring that some creative practice happens unassisted, with the full friction of genuine difficulty, so that the child builds the internal resources that make assisted work more creative.
Watch for learned creative helplessness
The most practically important warning from the current research is the creative self-efficacy finding: children who consistently use AI before their own creative effort report lower confidence in their own creative abilities. This is the mechanism most worth monitoring. A child who says “I can’t draw” or “I’m not creative” after a year of AI-first creative practice is showing a signal worth addressing. Creative self-efficacy is not fixed — it develops with experience and can decline with the wrong experiences. Noticing when a child is becoming dependent on AI to start creative work, rather than using it to extend work they’ve begun, is the primary thing to watch.
What to Watch for Over the Next 3 Months
The research base on AI and children’s creativity is growing quickly, and several significant studies are likely to be published in 2026.
Longitudinal findings from early AI-in-education pilots. Several school districts that began structured AI-in-education pilots in 2022–2023 are reaching the point where two- to three-year outcome data is available. Studies from these pilots — particularly those measuring creative output quality over time, not just at single points — will clarify whether the sequence effects documented in shorter studies persist or diminish.
ISTE standard implementation evaluations. The 2023 ISTE standards on AI and creativity include specific guidance that is being implemented differently across school districts. Evaluation research on which implementations produce better creative outcomes will start to appear in 2026. Watch for reports from the ISTE Learning and Leading journal and from education research groups at major universities.
Industry research on creative tool design. Adobe, Canva, and other companies developing AI-creative tools for education are conducting internal research on how design choices (when AI suggestions appear, how they’re framed, whether they can be delayed) affect creative outcomes. As this research becomes public — through blog posts, conference presentations, or journal publications — it will provide useful signal about which tool designs support rather than replace creative agency.
Frequently Asked Questions
Does using AI for art or music count as “being creative”?
It depends on what role the child plays. A child who uses an AI image generator to produce an illustration they specified, directed, and curated is exercising creative judgment — selection, curation, and direction are legitimate creative activities. A child who prompts an AI with a single word and submits the first output is exercising less. The distinction is in the child’s agency: how much of the meaningful creative decision-making did they do? More is more.
My child uses AI for everything creative. Should I be worried?
The research suggests watchfulness, not alarm. The key signal to watch for is creative self-efficacy: does your child believe they can generate creative ideas without AI? If they’ve stopped attempting creative tasks without AI and express inability or anxiety when asked to start from scratch, that’s worth addressing — not by banning AI but by creating low-stakes opportunities to practice generating independently. The draft zero rule applied at home is a practical starting point.
At what age is AI creative collaboration appropriate?
The developmental research doesn’t give a clean age cutoff, but the Kaufman and Sternberg framework suggests that the earlier children are in the creative development process — building their experiential vocabulary — the more important it is to protect unassisted creative practice. For children under 10, unassisted creative practice should be the dominant mode. For children 10 and up, structured AI collaboration (AI after independent effort, within constrained tasks) is developmentally appropriate. For teenagers, critical and collaborative use of AI tools is consistent with their developmental stage — but the “draft zero” principle still applies.
Does this apply to creative coding and engineering, or just art and writing?
The principle applies across creative domains. A child who designs a physical object before using AI to generate design variations is in a better developmental position than a child who asks AI to design something from scratch. A child who writes code that doesn’t work, debugs it, and then uses AI to explain the error is doing something categorically different from a child who asks AI to write the code. The generative effort — whatever form it takes — is the developmental work.
How do I know if my child’s school is handling AI and creativity appropriately?
Ask the school directly: What is your policy on when AI tools can be used in creative assignments? Do students complete independent drafts before AI is introduced? Are students taught to critique AI output? Schools that require draft zero documentation, that explicitly teach AI critique skills, and that protect some creative assignments as entirely AI-free are implementing the research well. Schools that allow unrestricted AI use from the start of any creative task are not.
Is there creative work that AI genuinely can’t substitute for?
Yes. Creativity that requires embodied experience — music performance, physical making, drawing from observation, working with materials — cannot be substituted by AI because AI has no body and no sensory experience. Creativity that requires personal knowledge — writing about your own experience, making something that solves your own problem, expressing something that matters to you specifically — is also resistant to AI substitution because AI doesn’t have your experience. These are the domains to prioritize protecting as unassisted creative practice.
About the author
Ricky Flores is the founder of HiWave Makers and an electrical engineer with 15+ years of experience building consumer technology at Apple, Samsung, and Texas Instruments. He writes about how kids learn to build, think, and create in a tech-saturated world. Read more at hiwavemakers.com.
Sources
- Cropley, A. J. (2001). Creativity in Education and Learning: A Guide for Teachers and Educators. Kogan Page.
- Oruç, E., Çelik, İ., & Arslantaş, H. A. (2026). Effects of artificial intelligence use on creative motivation in middle school students: A mixed-methods study. SAGE Open, 16(1).
- Adobe Inc. (2024). Creativity and AI in K–12 Education: Research Report. adobe.com/education
- ISTE. (2023). ISTE Standards for Students: AI and Creative Competency. iste.org/standards
- Sawyer, R. K. (2012). Explaining Creativity: The Science of Human Innovation (2nd ed.). Oxford University Press.
- Kaufman, J. C., & Sternberg, R. J. (Eds.). (2010). The Cambridge Handbook of Creativity. Cambridge University Press.