The AI Tutor Is in Your Kid's Classroom — Now What?
AI tutoring in schools is already here. Here's how to tell the difference between AI that helps kids think and AI that thinks for them — and what to ask your school.
The email from the school said something about “personalized AI learning tools” being integrated into the curriculum this semester. It mentioned Khan Academy and a few names you didn’t recognize. The letter was reassuring in its vagueness — something about “enhancing student outcomes” and “individualized support.” It did not mention what the tools actually do, what data they collect, or what the research says about whether any of this works.
That gap between the rollout announcement and the information parents actually need is the problem. AI tutoring in schools is not theoretical. Khanmigo is deployed across thousands of U.S. school districts as of 2026. Other tools are following. The question isn’t whether AI will be in your child’s education — it already is. The question is what kind, and whether it’s doing what it claims.
The AI Tutoring Landscape Parents Don’t Know About
Most AI tutoring tools in schools fall into one of two categories, and the difference between them matters enormously for whether your child learns anything.
The first category: answer-delivery AI. These are tools that, when a student asks a question, answer it. Directly, completely, and often accurately. From the student’s perspective, this is useful and frictionless. From a learning science perspective, it’s approximately equivalent to just reading the textbook answer — and in many cases, the student won’t bother re-reading it because they already feel like they got the information.
The second category: Socratic AI. These tools respond to student questions with questions. “What do you already know about this?” “Why do you think that might be the answer?” “What happens if you try this step?” Khanmigo, Khan Academy’s AI tutor, was built explicitly around this approach: it does not give direct answers. Students are guided toward working out the answer themselves.
This design choice is not arbitrary. It’s grounded in cognitive science research that’s been accumulating for decades. The distinction is worth understanding because it determines whether an AI tutoring tool is supporting learning or substituting for it.
What the Research Actually Says
Manu Kapur’s 2016 paper in Educational Psychologist formalized the concept of “productive failure,” more commonly called productive struggle: the finding that students who grapple with problems before receiving instruction learn more deeply and retain information longer than students who receive instruction first. This is not a new idea in learning science; it connects to Robert Bjork’s work on desirable difficulties and to constructivist educational theory generally. But Kapur’s work made it empirically specific: productive struggle produces better long-term retention even when it produces more errors in the short term.
The implication for AI tutoring is direct. An AI that short-circuits productive struggle by delivering answers — even correct ones — may undermine long-term retention in exactly the cases where it appears to be helping most. A student who got stuck, asked the AI, got an answer, and moved on has resolved the confusion without doing the cognitive work that encodes the concept. The answer is in the notes. It’s less likely to be in long-term memory.
Socratic AI tools are designed to avoid this. Rather than resolving the struggle, they guide students through it. The research on Socratic tutoring methods (and on Socratic dialogue in educational contexts generally) consistently shows stronger retention outcomes than direct instruction. VanLehn’s 2011 review in Educational Psychologist found that intelligent tutoring systems requiring students to work through solution steps approach the effectiveness of human tutors, evidence that the interaction pattern, not merely the presence of a tutor, drives the gains. In other words, this is not a novel design choice in the AI context but an application of well-established learning science.
The quantitative results from AI tutoring are genuinely mixed, and worth reading carefully. Think Academy data from structured supplemental AI tutoring programs showed 62% test score increases among enrolled students — a striking number. But structured supplemental programs, where students opt in, use the tools regularly, and receive support in doing so, are very different from passively available classroom AI tools that some students use and some don’t. The 62% figure applies to a specific context. It doesn’t automatically extend to background-available AI in a classroom.
The 2026 parent survey by KidsAiTools found that 72% of parents support AI in education — a significant majority. But 73% of those same parents want a human teacher to review AI recommendations before they reach students. That tension — broad support alongside significant reservations about oversight — reflects exactly where the state of practice has not yet caught up with the state of deployment.
NPR’s January 2026 report raised the counterargument directly: risks of AI in schools may outweigh benefits when dependency, loss of productive struggle, and data privacy are factored in. This is not a fringe position. It’s a legitimate reading of the research on how students use AI tools when left to their own devices — which typically trends toward the path of least resistance, not the path of most learning.
| AI Tutoring Approach | Short-Term Performance | Long-Term Retention | Dependency Risk | Cost | Data Privacy Risk |
|---|---|---|---|---|---|
| Answer-delivery AI | High (reduces friction immediately) | Low to moderate (struggle bypassed) | High | Low ($0–low subscription) | Moderate to high (interaction logs, query history) |
| Socratic AI (e.g., Khanmigo) | Moderate (more effort required) | Higher (productive struggle preserved) | Lower | Low to moderate | Moderate (managed by district) |
| Human tutor | Moderate to high | High (adaptive to student; relational) | Low | High ($40–120/hr) | Very low |
| No supplemental support | Varies | Depends on classroom quality | None | $0 | None |
The data privacy dimension deserves specific attention. Most AI education tools collect interaction logs, query history, and performance data. COPPA (the Children’s Online Privacy Protection Act) sets baseline protections for children under 13, but enforcement is inconsistent and many tools operate in gray areas for 13–18 year olds. Parents have a right to ask their school precisely what data is being collected, how it’s stored, whether it’s shared with third parties, and how long it’s retained. Many schools have not proactively answered these questions.
What to Actually Do
Find out which type of AI tool your school is using
Before evaluating whether you support or oppose AI tutoring in your child’s school, you need to know which category it falls into. Sending a simple email to the teacher or principal — “Can you tell me more about how this tool works? Does it give students direct answers, or does it guide them to figure things out themselves?” — gets you the information you need.
Khanmigo and similar Socratic tools have a fundamentally different learning profile than answer-delivery tools. If your school is using a Socratic approach, the main questions shift to implementation quality: are students using it consistently? Are teachers integrating it into instruction or treating it as background support? If your school is using an answer-delivery tool, the productive struggle concern is real and worth raising.
Ask about data collection — and get specific answers
“We take student privacy seriously” is not an answer. Specific questions that are appropriate to ask: What data does this tool collect about my child? Is that data shared with any third parties? How long is it retained? Is it used to train the AI model? Can I review my child’s interaction data?
If the school can’t answer these questions, that’s information. Districts that have vetted tools carefully for data privacy should be able to answer specifically. Districts that rolled out tools quickly may not yet have done this work. Both situations exist, and you can’t know which one you’re in without asking.
Evaluate the tool yourself
Most AI tutoring tools have parent or demo access modes. Spending 20 minutes interacting with the tool your child is using — asking it a math question, seeing whether it answers or asks back, testing whether it provides different responses for obvious “do my homework” prompts versus genuine exploratory questions — tells you more than any brochure.
The specific test: ask it something you know the answer to. Does it give you the answer? Does it ask what you already know? Does it scaffold toward the answer or deliver it? That interaction pattern tells you which category the tool falls into and whether the productive struggle concern applies.
Discuss with your child how they’re using it
The most important variable in any AI tutoring tool is user behavior — and user behavior with AI tutors trends toward the path of least resistance unless the tool is explicitly designed to prevent that, or the student has been taught otherwise.
A useful conversation: “Walk me through what you do when you get stuck on homework and use the school AI tool. What happens?” If the pattern is “I ask it what the answer is and it tells me,” the productive struggle concern is active regardless of how the tool was designed. If the pattern is “it keeps asking me questions and I have to figure it out,” the Socratic design is working.
This conversation also opens the broader question of what AI is for — tool use versus dependency — that’s worth having explicitly with middle and high schoolers. For background on how kids are already encountering AI in their daily lives, see How Kids Already Use AI Every Day — and Why It Matters. For a framework on AI literacy that goes beyond tool use, see AI Literacy for Kids: What Middle Schoolers Actually Need to Know.
Support teachers, not just tools
The research on AI tutoring consistently shows that implementation quality matters as much as tool quality. An AI tool integrated thoughtfully into instruction by a teacher who understands its limitations and uses it alongside direct instruction, discussion, and assessment performs very differently from the same tool used as a low-supervision supplement.
Teachers who are using AI tutoring tools effectively are typically doing more, not less, work — monitoring how students are interacting with the tool, using AI-generated performance data to identify students who need intervention, and structuring class time around what the AI identified as gaps. This kind of use is educationally sound. It’s also not universal, and it’s reasonable for parents to ask what teacher oversight looks like in their specific classroom.
What to Watch for Over the Next 3 Months
Month 1: Ask your child specific questions about how they’re using the AI tool. Not “is it helpful?” — that answer is almost always yes. But “can you explain to me how you solved this problem?” If they can explain the thinking, the tool may be supporting learning. If they can describe what the AI said but not the underlying reasoning, the tool is functioning as an answer source.
Month 2: Look for signs of independent problem-solving confidence versus AI dependency. Is your child attempting problems before turning to the AI, or has the AI become the first stop? Students who develop dependency on AI for homework often show lower confidence when the AI isn’t available: on tests, in class discussions, in new problems. If this pattern emerges within one semester, it’s worth flagging to the teacher.
Month 3: Compare quiz and test performance to homework performance. A growing gap between homework performance (with AI available) and test performance (without AI) is a reliable indicator that the AI is providing answers the student isn’t retaining. The gap should be small. If it’s large, the tool use pattern needs addressing.
Frequently Asked Questions
Is Khanmigo actually safe for kids to use?
Khanmigo is among the more carefully designed AI education tools in terms of both learning science and content safety. Khan Academy designed it with Socratic guardrails specifically to prevent direct answer delivery. The privacy profile is governed by school district contracts, which vary. The more important question is whether it’s being used in a way that preserves productive struggle — which depends on both the tool design and how students are actually using it.
My school says the AI tool improved test scores. Is that true?
Possibly, for some students, in some conditions. The evidence for AI tutoring improving test scores exists — structured, well-implemented supplemental AI tutoring programs do show performance gains. The key words are “structured” and “well-implemented.” Population-level claims that a passively available AI tool improved district-wide scores should be viewed with appropriate skepticism until the methodology is clear. Ask how the improvement was measured and for which students.
What if my child’s teacher doesn’t really know how the tool works?
This is common in the early rollout phase. Teachers are often handed tools alongside students, without deep training on the learning science behind them. If the teacher can’t explain whether the tool uses a Socratic or direct-answer approach, the tool hasn’t been integrated thoughtfully. It’s appropriate to ask school leadership — not as a criticism of the teacher, but as a reasonable parental question about an educational tool your child is required to use.
Should I let my child use AI tutoring tools at home beyond what school assigns?
The same framework applies: what type of tool, and how is your child using it? A Socratic AI tool used to work through genuine confusion on a concept they attempted independently is a legitimate learning tool. An answer-delivery AI used to complete homework without engaging the material is an expensive shortcut. Your own 20-minute test of any tool you’re considering introducing at home is worth the time.
What’s the data privacy risk for younger children specifically?
COPPA provides stronger protections for children under 13, requiring verifiable parental consent for data collection. For children in this age range, schools should be providing parental notice and consent processes before enrolling students in AI tools. If your elementary-aged child is using an AI tutoring tool and you haven’t signed a COPPA consent form, it’s worth asking the school specifically how they’re handling this requirement.
About the author
Ricky Flores is the founder of HiWave Makers and an electrical engineer with 15+ years of experience building consumer technology at Apple, Samsung, and Texas Instruments. He writes about how kids learn to build, think, and create in a tech-saturated world. Read more at hiwavemakers.com.
Sources
- KidsAiTools. (2026). “Parents Survey: AI in Schools 2026.” KidsAiTools. https://www.kidsaitools.com/en/articles/parents-survey-ai-schools-2026
- NPR. (2026, January 14). “The Risks of AI in Schools May Outweigh the Benefits, Researchers Warn.” NPR. https://www.npr.org/2026/01/14/nx-s1-5674741/ai-schools-education
- Kapur, M. (2016). “Examining Productive Failure, Productive Success, Unproductive Failure, and Unproductive Success in Learning.” Educational Psychologist, 51(2), 289–299. https://doi.org/10.1080/00461520.2016.1155457
- Khan Academy. (2024). “Khanmigo: AI-powered guide for students and teachers.” Khan Academy product documentation and pilot results. https://www.khanacademy.org/khan-labs
- Bjork, R. A. (1994). “Memory and metamemory considerations in the training of human beings.” In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about Knowing. MIT Press.
- Federal Trade Commission. (2013, updated 2022). “Children’s Online Privacy Protection Rule (COPPA).” FTC. https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa
- VanLehn, K. (2011). “The Relative Effectiveness of Human Tutoring, Intelligent Tutoring Systems, and Other Tutoring Systems.” Educational Psychologist, 46(4), 197–221.